## Inspiration
College students often find themselves taking the first job they see. However, this often leaves them with a job that is stressful, demanding, or pays less than they're worth. We realized that students don't have a good tool for discovering the best jobs in their area. Job boards like LinkedIn and Glassdoor typically don't carry low-key part-time jobs, while university job boards are limited to university-specific positions. We wanted to create a way for students to post reviews of jobs within their university area, allowing students to share experiences and learn about the best job options for them.
## What it does
Rate My University Job is a job-postings website created for college students. To access the website, a user must first create an account using a .edu email address. Users can search for job postings by tag or job title, and the results are filtered by .edu domain name so users only see postings from their own university. A job posting contains information like the average pay reviewers received at the job, the location, a description, and an average rating out of 5 stars. If a job posting doesn't exist for their position, users can create a new posting and provide the title, description, location, and a tag. Other students can read these posts and contribute their own reviews.
## How we built it
We created the front end using vanilla HTML, CSS, and JavaScript. We implemented Firebase Firestore to update and query a database where all job postings and reviews are stored, and we use Firebase Auth to authenticate emails and ensure a .edu address is used. We designed the interactive components in JavaScript to create a responsive user interface. The website is hosted using both GitHub Pages and Domain.com.
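A minimal sketch of the domain-filtering idea, shown with the Firebase Admin SDK in Python; the collection and field names are illustrative, not our exact schema:

```python
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.ApplicationDefault())
db = firestore.client()

def university_domain(email: str) -> str:
    # Signup requires a .edu address; the domain doubles as the campus key.
    domain = email.split("@")[-1].lower()
    if not domain.endswith(".edu"):
        raise ValueError("A .edu email address is required")
    return domain

def postings_for(email: str) -> list[dict]:
    # Only return postings created by users from the same university.
    domain = university_domain(email)
    query = db.collection("postings").where("domain", "==", domain)
    return [doc.to_dict() for doc in query.stream()]
```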
## Challenges we ran into
(1) Website Design UI/UX
(2) Developing a schema and using a database
## Accomplishments that we're proud of
(1) Being able to store account data of multiple users and authenticate the .edu domain.
(2) Completing a first project in a collaborative environment
(3) Curating a list of job postings from the same university email domains.
(4) A robust search engine based on titles and search tags.
## What we learned
In general, we learned how to format and beautify HTML files with CSS, in addition to connecting an HTML front end to a database. We learned how to use the Firestore database and how to query, upload, and update data.
## What's next for Rate My University Job
We would seek to improve the UI/UX. We would also look to add additional features such as upvoting and downvoting posts and a reporting system for malicious/false posts. We also plan to improve the search engine to allow more precise searches and to let results be sorted by rating, pay, tags, etc. Overall, there are a lot of additional features we can add to make this project even better.

---
## Inspiration
We were frustrated with re-downloading the Hack Western Android app every time it updated. We figured it would be nice if there were an open-source library that let developers change content in real time, so users wouldn't have to re-download the app every time it updates.
## What it does
DynamoUI is an open-source Android developer library for changing a published app in real time. After logging in and authenticating, the client can use our simple UI to make live changes to various app components such as the text, images, buttons, and theme. This library has immense potential for extensibility and uses such as A/B testing, data conglomeration, and visualization.
## How we built it
We use Firebase to synchronize data between Android and the web platform, and AngularJS to take advantage of three-way binding between the markup, JS, and database. The mobile client constantly listens for changes on the database and applies them through our extended UI classes.
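A minimal sketch of the listen-for-changes pattern, shown here with the Firebase Admin SDK in Python rather than the Android client; node paths and field names are illustrative:

```python
import firebase_admin
from firebase_admin import credentials, db

firebase_admin.initialize_app(
    credentials.ApplicationDefault(),
    {"databaseURL": "https://example-app.firebaseio.com"},  # placeholder URL
)

def on_ui_change(event):
    # event.path identifies the UI component that changed; event.data
    # holds the new value (e.g. a button label or a theme colour).
    print(f"UI update at {event.path}: {event.data}")

# Fires on every database write, which is how a published app can pick
# up content changes without a re-download.
db.reference("ui/main_screen").listen(on_ui_change)
```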
## Challenges we ran into
Synchronizing data between AngularJS and Firebase was not always straightforward or well documented for special cases.
## Accomplishments that we are proud of
Published an open-source library that other Android apps can use to update their UI in real time.
## What I learned
How to build an Android library, and how to work with AngularJS and Firebase.
## What's next for DynamoUI
Implement A/B testing so marketers can determine which versions perform better in real time.

---
## Inspiration
Being a university student during the pandemic is very difficult. Not being able to connect with peers, run study sessions with friends, and experience university life can be challenging and demotivating. With no existing service that lets students meet people in their classes and be automatically put into group chats, we were inspired to create our own.
## What it does
Our app allows students to easily set up a personalized, school-specific profile to connect with fellow classmates, be automatically placed into class group chats via schedule upload, and browse clubs and events specific to their school. This app is a great way for students to connect with others and stay on top of activities happening in their school community.
## How we built it
We built this app using React Native, an open-source mobile application framework, and Firebase, a real-time, cloud-hosted database. We outlined the GUI using flow diagrams and implemented an application design that students can use on mobile. To target a wide range of users, we made sure the app runs on both Android and iOS.
## Challenges we ran into
Being new to this form of mobile development, we faced many challenges creating this app. The first challenge we faced was using GitHub: although we were familiar with the platform, we were unsure how to use git commands to work on the project simultaneously. However, we were quick to learn the commands required to collaborate and deliver the app on GitHub. Another challenge we faced was nested navigation within the software. Since our project relied heavily on a real-time database, we also encountered difficulties integrating the database framework into our implementation.
## Accomplishments that we're proud of
An accomplishment we are proud of is learning a plethora of different frameworks and how to implement them. We are also proud of being able to learn, design, and code a project that can potentially help current and future university students across Ontario enhance their university lifestyles.
## What we learned
We learned many things implementing this project: version control and collaborative coding through GitHub commands, handling changing data and user authentication with Firebase, and using JavaScript fundamentals with React Native to build a GUI. Overall, we learned how to create an Android and iOS application from scratch.
## What's next for USL- University Student Life!
We hope to further our expertise with the various platforms used creating this project and be able to create a fully functioning version. We hope to be able to help students across the province through this application.

---
## Inspiration
We wanted to explore what GCP has to offer in a practical sense, while trying to save money as poor students.
## What it does
The app tracks your location and, using the Google Maps API, builds a geofence that notifies you of the restaurants within your vicinity and lets you load coupons that are valid.
## How we built it
React Native, Google Maps for pulling the location, Python for the web scraper (<https://www.retailmenot.ca/>), Node.js for the backend, and MongoDB to store authentication, location, and coupon data.
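A minimal sketch of the kind of proximity check a geofence like this needs; the backend is Node.js, but Python is used here for illustration, and the radius is a placeholder:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in km.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_restaurants(user, restaurants, radius_km=0.5):
    # Keep only the restaurants inside the user's geofence radius.
    return [r for r in restaurants
            if haversine_km(user["lat"], user["lon"], r["lat"], r["lon"]) <= radius_km]
```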
## Challenges we ran into
React Native was fairly new to us, and linking a Python script to a Node backend and connecting Node.js to React Native were both tricky.
## What we learned
Exposure to new APIs, and experience linking tools together.
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants.

---
## Inspiration
We recognized how much time meal planning can consume, especially for busy young professionals and students who have little experience cooking. We wanted to provide an easy way to buy healthy, sustainable meals for the week without compromising the budget or harming the environment.
## What it does
Similar to services like "Hello Fresh", this is a web app for finding recipes and having the ingredients delivered to your house. This is where the similarities end, however. Instead of shipping the ingredients to you directly, our app makes use of local grocery delivery services, such as the one provided by Loblaws. The advantages to this are two-fold. First, it keeps the price down, as your main cost is the groceries themselves, instead of paying large fees to a meal-kit company. Second, it is more eco-friendly: meal-kit companies traditionally repackage the ingredients in-house into single-use plastic packaging before shipping them to the user, along with large coolers and ice packs that are mostly never re-used. Our app adds no additional packaging beyond what the groceries initially come in.
## How we built it
We made a web app, with the client-side code written in React. The server was written in Python using Flask and hosted on the cloud with Google App Engine. We used MongoDB Atlas, also hosted on Google Cloud.
On the server, we used the Spoonacular API to search for recipes, and Instacart for the grocery delivery.
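A minimal sketch of a recipe lookup against Spoonacular's public `findByIngredients` endpoint; the parameters follow Spoonacular's documentation, and the API key is a placeholder:

```python
import requests

def find_recipes(ingredients: list[str], api_key: str, count: int = 5) -> list[dict]:
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),  # e.g. "apples,flour,sugar"
            "number": count,
            "apiKey": api_key,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Each result includes the recipe id and title, plus which of your
    # ingredients it uses and which it is missing.
    return resp.json()
```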
## Challenges we ran into
The Instacart API is not publicly available, and there are no public APIs for grocery delivery, so we had to reverse engineer the API to allow us to add items to the cart. The Spoonacular API was down for about 4 hours on Saturday evening, during which we almost entirely switched over to a less functional API before it came back online and we switched back.
## Accomplishments that we're proud of
We created a functional prototype capable of facilitating the ordering of recipes through Instacart, and learned new skills like Flask, Google Cloud, and, for some of the team, React.
## What we've learned
How to reverse engineer an API, use Python as a web server with Flask, work with Google Cloud and new APIs, and use MongoDB.
## What's next for Fiscal Fresh
Add additional functionality on the client side, such as browsing popular recipes.

---
## Inspiration
As university students, we often find that we have groceries in the fridge, but we end up eating out and the groceries end up going bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse the items on the receipt and add them to a database representing your fridge. Using the items you have in your fridge, our app can recommend recipes for dishes for you to make.
## How We Built It
On the back end, we have a Flask server that receives the image from the front end through ngrok and sends the receipt image to Google Cloud Vision for text extraction. We then post-process the data we receive to filter out any unwanted noise.
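A minimal sketch of that extraction-plus-filtering step; the Cloud Vision calls follow Google's Python client, while the price-line heuristic is illustrative rather than our exact post-processing:

```python
import re
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def receipt_items(image_bytes: bytes) -> list[str]:
    # Run OCR over the receipt photo.
    response = client.text_detection(image=vision.Image(content=image_bytes))
    annotations = response.text_annotations
    full_text = annotations[0].description if annotations else ""

    items = []
    for line in full_text.splitlines():
        # Keep lines shaped like "ITEM NAME  3.99" and drop totals,
        # taxes, store headers, and other noise.
        match = re.match(r"^([A-Za-z][A-Za-z \-']+?)\s+\$?\d+\.\d{2}$", line.strip())
        if match and match.group(1).strip().lower() not in {"total", "subtotal", "tax"}:
            items.append(match.group(1).strip().title())
    return items
```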
On the front end, our app is built using React Native, using axios to query the recipe API and storing data in Firebase.
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask server to Google App Engine, and styling in React. We found that it was not possible to write into Google App Engine's storage; instead, we had to write into Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.

---
## Inspiration
While our team might have come from different corners of the country, with varied experience in industry and a fiery desire to debate whether tabs or spaces are superior, we all faced similar discomforts in our jobs: insensitivity. Our time in college has shown us that despite people's diverse backgrounds, everyone can achieve greatness. Nevertheless, workplace calls and water-cooler conversations are plagued with microaggressions. A microaggression is a subtle indignity or offensive comment that a person communicates to a group. These subtle yet hurtful comments lead to marginalization in the workplace, which, as studies have shown, can lead to anxiety and depression. Our team's mission was to tackle the unspoken fight for diversity and inclusion in the workplace.
Our inspiration came from this idea of impartial moderation: why is it the marginalized employee's responsibility to take on the burden of calling someone out? Pointing out these microaggressions can reinforce stereotypes and thus create lose-lose situations. We believe that if we can shift the responsibility, we can help create a more inclusive work environment, give interviewees equal footing, and tackle marginalization in the workplace from the water-cooler up.
## What it does
### EquiBox:
EquiBox is an IoT conference room companion: a speaker and microphone that come alive when meetings take place. It monitors meeting members' sentiment by transcribing the audio and running AI to detect insults or non-inclusive behavior. If an insult is detected, EquiBox comes alive with a beep and a warning about microaggressions to impartially moderate an inclusive meeting environment. EquiBox sends live data to EquiTrack for further analysis.
### EquiTalk:
EquiTalk is our custom integration with Twilio (a voice platform used for conference calls) that listens to multi-person phone calls to monitor language, transcribe the live conversation, and flag phrases that might be insulting. EquiTalk sends live data to EquiTrack for analysis.
### EquiTrack:
EquiTrack is an enterprise analytics platform designed to let HR departments leverage the data created by EquiTalk and EquiBox to improve the overall work culture. EquiTrack provides real-time analysis of ongoing conference calls: the administrator can see not only the number of microaggressions that occur throughout a meeting but also the exact sentence that triggered each alert. The audio of the conference calls is recorded as well, so administrators can play back a call to resolve discrepancies.
## How we built it
The LevelSet backend consists of several independent services. EquiTalk uses a Twilio integration to send call data and metadata to our audio server. Similarly, EquiBox uses Google's Voice Kit, along with Houndify's speech-to-text API, to parse the raw audio. From there, the meeting transcription goes to our microaggression classifier (hosted on Google Cloud), which combines a BERT transformer with an SVC to achieve 90% accuracy on our microaggression test set. The classified data then travels to the EquiTrack backend (hosted on Microsoft Azure), which stores the conversation and classification data to populate the dashboard.
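A minimal sketch of the embed-then-classify pipeline; the model choice and the toy training rows below are placeholders, not our crowdsourced dataset:

```python
import torch
from sklearn.svm import SVC
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences: list[str]) -> torch.Tensor:
    # Mean-pool BERT's last hidden state into one vector per sentence.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

# Tiny illustrative training set; 1 = microaggression, 0 = benign.
texts = [
    "You're so articulate for someone like you.",
    "Where are you really from, though?",
    "Great point, thanks for raising it.",
    "Let's circle back after lunch.",
]
labels = [1, 1, 0, 0]
clf = SVC(kernel="linear").fit(embed(texts).numpy(), labels)

print(clf.predict(embed(["You people are surprisingly good at this."]).numpy()))
```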
## Challenges we ran into
One of the biggest challenges we ran into was creating the training set for the microaggression classifier. While there were plenty of datasets that included aggressive behavior in general, their examples lacked the subtlety that our model needed to learn. Our solution was to crowdsource and augment a set of microaggressions: we sent a survey out to Stanford students on campus and compiled an extensive list, which allowed our classifier to achieve the accuracy that it did.
## Accomplishments that we're proud of
We're very proud of the accuracy we were able to achieve with our classifier. By using the BERT transformer, our model was able to classify microaggressions using only the handful of examples that we collected. While most DNN models require thousands of samples to achieve high accuracy, our microaggression dataset consisted of fewer than 100 examples.
Additionally, we're proud of our ability to integrate all of the platforms and systems that were required to support the LevelSet suite. Coordinating multiple deployments and connecting several different APIs was definitely a challenge, and we're proud of the outcome.
## What we learned
* By definition, micro-aggressions are almost intangible social nuances picked up by humans. With minimal training data, it is tough to refine our model for classifying these micro-aggressions.
* Audio processing at scale can lead to several complications. Each of the services that use audio had different format specifications, and due to the decentralized nature of our backend infrastructure, merely sending the data over from service to service required additional effort as well. Ultimately, we settled on trying to handle the audio as upstream as we possibly could, thus eliminating the complication from the rest of the pipeline.
* The integration of several independent systems can lead to unexpected bugs. Because of the dependencies, it was hard to unit test the services ahead of time. Since the only way to make sure that everything was working was with an end-to-end test, a lot of bugs didn't arise until the very end of the hackathon.
## What's next for LevelSuite
We will continue to refine our microaggression classifier to use tone classification as an input. Additionally, we will integrate the EquiTalk platform into more offline channels like Slack and email. With a longer horizon, we aim to improve equality in the workplace at all stages of employment, from the interview to the exit interview. We want to expand from conference calls to all workplace communication, and we want to create new strategies to inform and disincentivize exclusive behavior. We want LevelSet to level the playing field in the workplace, and we believe that these next steps will help us achieve that.

---
## Inspiration
We've all been there - racing to find an empty conference room as your meeting is about to start, struggling to hear a teammate who's decided to work from a bustling coffee shop, or continuously muting your mic because of background noise.
As the four of us conclude our internships this summer, we've all experienced these over and over. But what if there were a way for you to simply take meetings in the middle of the office…
## What it does
We'd like to introduce you to Unmute, your solution for clear and efficient virtual communication. Unmute transforms garbled audio into audible speech by analyzing your lip movements, all while providing real-time captions as a video overlay. This means your colleagues and friends can hear you loud and clear, even when your audio is anything but. Say goodbye to the all-too-familiar "wait, I think you're muted".
## How we built it
Our team started by designing the application in Figma. We built a React-based frontend using TypeScript and Vite for optimal performance. The frontend captures video input from the user's webcam using the MediaRecorder API and sends it to our Flask backend as a WebM file. On the server side, we used FFmpeg for video processing, converting the WebM to MP4 for wider compatibility. We then employed Symphonic's API to transcribe the visual cues.
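A minimal sketch of that server-side conversion step, shelling out to FFmpeg from Python; the codec flags are a reasonable baseline rather than our exact settings:

```python
import subprocess

def webm_to_mp4(src: str, dst: str) -> None:
    # Re-encode the browser's WebM upload as H.264/AAC MP4 so the rest
    # of the pipeline (and most players) can consume it.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst],
        check=True,
    )

webm_to_mp4("upload.webm", "upload.mp4")
```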
## Challenges we ran into
Narrowing down an idea was one of the biggest challenges. We had many ideas, including a lip-reading language course, but none of them had a solid use case. It was only after we started thinking about problems we encounter in our daily lives that we found our favorite project idea.
Additionally, there were many challenges on the technical side with using Flask and uploading and processing videos.
## Accomplishments that we're proud of
We are proud that we were able to make this project come to life.
## Next steps
Symphonic does not currently offer WebSocket functionality, so our vision of making this a real-time virtual meeting extension is not yet realizable. However, when this becomes possible, we are excited about the improvements this project will bring to meetings of all kinds.

---
## Inspiration
Did you know the average US worker spends **5 hours** a week in meetings and **4 hours** in preparation? This wastes not only company time but company money as well. Minitum's goal is to help companies have more efficient meetings and create a more productive environment.
## What it does
Minitum enables efficient meetings by creating virtual "meeting rooms". When your company has a meeting, your employees can join it on their phones and see the meeting conversation transcribed in real time. This has many benefits: it helps those with hearing loss and anyone not in the physical meeting space, and it lets everyone visually follow what is being said.
## How we built it
We used React Native for the app interface, React for the web app, Google Firebase for the database, and Microsoft Azure Cognitive Services Speech to Text SDK.
## Challenges we ran into
The Speech to Text SDK does not stream continuously with React Native; instead, we needed to create a bridge from React Native to the native Android side in order to use it efficiently.
## Accomplishments that we're proud of
Overcoming the bridge challenge mentioned above, and creating a better UI than in past projects.
## What we learned
We need to do more research about potential APIs and how they integrate with the frameworks we'd like to use.
## What's next for Minitum
Minitum has an incredible number of potential paths: we hope to integrate voice recognition to detect who is saying what in meetings, add the ability to export the transcribed document to a PDF and email it to everyone, and work on creating agendas and more tools for productivity.

---
## Inspiration
3-D printing has been around for decades, yet the printing process is often too complex to navigate, labour-intensive, and time-consuming. Although the technology exists, it is only used by those trained in the field because of the technical skills required to operate the machine. We want to change all that. We want to make 3-D printing simpler, faster, and accessible to everyone. By leveraging the power of IoT and augmented reality, we created a solution to bridge that gap.
## What it does
Printology revolutionizes the process of 3-D printing by allowing users to select, view, and print files at the touch of a button. Printology is the first application that allows users to interact with 3-D files in augmented reality while simultaneously printing them wirelessly. This is groundbreaking because it allows children, students, healthcare educators, and hobbyists to view, create, and print effortlessly from the comfort of their mobile devices. For manufacturers and 3-D farms, it can save millions of dollars because of the drastically increased productivity.
The product is composed of a hardware and a software component. Users can download the iOS app on their devices and browse a catalogue of .STL files. They can drag and view each of these items in augmented reality and print them on their 3-D printer directly from the app. Printology is compatible with all models of printers on the market because the external Raspberry Pi generates a custom profile for each unique 3-D printer. Combined, the two pieces allow users to print easily and wirelessly.
## How I built it
We built an application in Xcode that uses Apple's ARKit and converts STL models to USDZ models, enabling the user to view 3-D printable models in augmented reality. This had never been done before, so we had to write our own bash script to convert these models. We then stored these models on a local server using node.js, and integrated functions into the server which are called by our application in Swift.
In order to print directly from the app, we connected a Raspberry Pi running OctoPrint (web-based software for controlling 3-D printers). We also integrated functions into our local node.js server to call and interact with OctoPrint. Our end product is a multifunctional application capable of previewing 3-D printable models in augmented reality and printing them in real time.
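A minimal sketch of the kind of call our server makes against OctoPrint's REST API to upload a sliced file and start the print; the host and API key are placeholders, and Python stands in for our node.js code:

```python
import requests

OCTOPRINT = "http://raspberrypi.local:5000"
HEADERS = {"X-Api-Key": "YOUR_OCTOPRINT_API_KEY"}

def print_file(gcode_path: str) -> None:
    # Upload the G-code to OctoPrint's local storage and ask it to
    # start printing immediately.
    with open(gcode_path, "rb") as f:
        resp = requests.post(
            f"{OCTOPRINT}/api/files/local",
            headers=HEADERS,
            files={"file": f},
            data={"print": "true"},
            timeout=30,
        )
    resp.raise_for_status()

print_file("benchy.gcode")
```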
## Challenges I ran into
We created something that had never been done before, so we did not have a lot of documentation to follow; everything was built from scratch. In other words, this project needed to be incredibly well planned and executed in order to achieve a successful end product. We faced many barriers, and each time we pushed through. Here are some major issues we faced.
1. No one on our team had done iOS development before, and we learned a lot through online resources and trial and error. Altogether we watched more than 12 hours of YouTube tutorials on Swift and Xcode; it was quite a learning curve. Ultimately, with insane persistence, a full all-nighter, and the generous help of the DeltaHacks mentors, we troubleshot errors and found new ways of getting around problems.
2. No one on our team had experience with bash or node.js. We learned everything from Google and our mentors. It was exhausting and sometimes downright frustrating. Learning the connection between our JavaScript server and our Swift UI was extremely difficult, and we went through loads of troubleshooting for our networks and IP addresses.
## Accomplishments that I'm proud of and what I've Learned
We're most proud of learning to integrate multiple languages, APIs, and devices into one synchronized system. It was the first time this had been done, and most of the software was made in-house. We learned command-line functions and figured out how to centralize several applications to provide a solution. It was so rewarding to learn an entirely new language and create something valuable in 24 hours.
## What's next for Print.ology
We are working on a scan feature that lets users make a 3-D scan of any object with their phone and produce a 3-D printable STL file from the photos. This has also never been accomplished before, and it would allow for major advancements in rapid prototyping. We look forward to integrating machine learning techniques to analyze a 3-D model and generate settings that reduce the number of support structures needed, reducing the waste involved in 3-D printing. A future step would be to migrate our STL files to a cloud-based service where users can upload their own 3-D models.

---
## Inspiration 💭
With the current staffing problem in hospitals due to the lingering effects of COVID-19, we wanted to come up with a solution for the people who are the backbone of healthcare. **Personal Support Workers** (or PSWs) are nurses who travel between personal and nursing homes to help the elderly and disabled complete daily tasks such as bathing and eating. PSWs are becoming increasingly needed as the aging population grows in the coming years, and these at-home caregivers will become even more sought after.
## What it does 🙊
Navcare is our solution to improve the scheduling and traveling experience for Personal Support Workers in Canada. It features an optimized shift schedule with vital information about where and when each appointment is happening, designed to keep travel within an optimal distance: patients are assigned to nurses such that the nurse will not have to travel more than 30 minutes outside of their home radius to treat a patient. Additionally, it features a map that lets a nurse see the locations of all their appointments in a day, as well as access the address of each one so they can easily travel there.
## How we built it 💪
We built the backend with Django and the frontend with ReactJS, using the Google Maps API for mapping and travel times.
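A minimal sketch of the 30-minute assignment check using Google's Distance Matrix API; the request parameters follow Google's public docs, and the key is a placeholder:

```python
import requests

def travel_minutes(origin: str, destination: str, api_key: str) -> float:
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={"origins": origin, "destinations": destination,
                "mode": "driving", "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    element = resp.json()["rows"][0]["elements"][0]
    return element["duration"]["value"] / 60  # API returns seconds

def can_assign(psw_home: str, patient_address: str, api_key: str) -> bool:
    # A patient is assignable only if the PSW stays within a
    # 30-minute travel radius of home.
    return travel_minutes(psw_home, patient_address, api_key) <= 30
```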
## Challenges we ran into 😵
Many, many challenges. To start off, we struggled to properly connect our backend API to our front end, which was essential for passing information along and displaying the necessary data. This was resolved through extensive exploration of the documentation and experimentation. Next, while integrating the Google Maps API, we continuously faced various dependency issues, and worked to resolve more issues relating to fetching data through our Django REST API. Since it was our first time implementing such an infrastructure to this extent, we struggled at first to find our footing and correctly connect the necessary elements between the front and back end. However, after experimenting with the process and testing different elements and methods, we found a combination that worked!
## Accomplishments that we're proud of 😁
We made it! We all feel we learned a tremendous amount. This weekend, we really stepped out of our comfort zones with our assignments and worked on new things that we didn't think we would work on. Despite the gaps in our knowledge, we were still able to create an adequately functioning app with a sign-in feature, the ability to make API requests, and some of our own visuals to make the app stand out. Given a little more time, we could have built an industry-level app that could be used by PSWs anywhere. The fact that we were able to solve such a breadth of challenges in so little time gives us hope that we BELONG in STEM!
## What's next for Navcare 😎
Hopefully, we can keep working on Navcare and add or change features based on testing with actual PSWs. Planned features include easier input and tracking of information from previous visits, as well as a more robust infrastructure to support more PSWs.

---
## Inspiration
With the coming of the IoT age, we wanted to explore adding new experiences to our interactions with physical objects and facilitate crossovers from the digital to the physical world. Since paper is a ubiquitous tool in our day-to-day lives, we decided to push the boundaries of how we interact with paper.
## What it does
A user places any piece of paper with text or images on our clipboard, and they can then work with the text on the paper as if it were hyperlinks. Our (augmented) paper allows users to physically touch keywords and instantly receive Google search results. The user first takes a picture of the paper being interacted with, places it on our enhanced clipboard, and can then go about touching pieces of text to get more information.
## How I built it
We used ultrasonic sensors with an Arduino to determine the location of the user's finger, and the Google Cloud Vision API to preprocess the paper's contents. In order to map the physical (ultrasonic data) onto the digital (vision data), we use a standardized 1x1 inch token as a measure of scale for the contents of the paper.
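A minimal sketch of that physical-to-digital mapping: two range readings locate the fingertip on the page, and the token's measured pixel width converts inches into image pixels. The sensor geometry here is illustrative:

```python
from math import sqrt

SENSOR_SPACING_IN = 8.5  # assume sensors at the page's top-left and top-right

def finger_position(d_left: float, d_right: float) -> tuple[float, float]:
    # Two-circle intersection with both sensors on the top edge:
    # x along that edge, y down the page, all in inches.
    x = (d_left**2 - d_right**2 + SENSOR_SPACING_IN**2) / (2 * SENSOR_SPACING_IN)
    y = sqrt(max(d_left**2 - x**2, 0.0))
    return x, y

def to_pixels(pos_in: tuple[float, float], token_px: float) -> tuple[int, int]:
    # token_px is the measured pixel width of the 1x1 inch token in the
    # photo, i.e. the pixels-per-inch scale linking sonar and vision space.
    return int(pos_in[0] * token_px), int(pos_in[1] * token_px)
```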
## Challenges I ran into
So many challenges! We initially tried to use an RFID tag but later figured out that sonar works better. We struggled with Mac-Windows compatibility issues, and also struggled a fair bit with 2-D location and detection of the finger on the paper. Because of the 24-hour time constraint, we could not develop more use cases and had to settle for just one.
## What I learned
We learned to work with the Google Cloud Vision API and to interface with hardware in Python. We learned that there is a LOT of work that can be done to augment paper and the similar physical objects that all of us interact with every day.
## What's next for Augmented Paper
Add new applications to enhance the experience with paper further. Design more use cases for this kind of technology.

---
## Inspiration
Our inspiration stemmed from our interest in artificial intelligence and its limitless capabilities. We wanted to develop a project that would not only incorporate aspects of AI and speech-to-text recognition but also implement them in something students can use on an everyday basis, which led us to focus on the productivity topic as well.
## What it does
The project, titled "PAT: Productivity Algorithm Tool", is a voice-recognition, speech-to-text task-management platform that integrates Python code with Notion. The user speaks an activation command to "wake up" PAT, and is then prompted to say the task they would like to add to their list, which is quickly converted to text and added to their personal Notion page. Each entry also includes the date and exact time of input, along with the default status of "Active".
## How we built it
We built PAT in Python using the Pyzo IDE, incorporating a few Python libraries including PyAudio and SpeechRecognition.
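A minimal sketch of the listen-transcribe-upload loop; the SpeechRecognition calls are standard, while the Notion request follows Notion's public REST API with a placeholder token, database ID, and property names:

```python
import datetime
import requests
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_for_task() -> str:
    # Capture one utterance from the microphone and transcribe it.
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def add_task(task: str, token: str, database_id: str) -> None:
    requests.post(
        "https://api.notion.com/v1/pages",
        headers={"Authorization": f"Bearer {token}",
                 "Notion-Version": "2022-06-28"},
        json={
            "parent": {"database_id": database_id},
            "properties": {
                "Name": {"title": [{"text": {"content": task}}]},
                "Status": {"select": {"name": "Active"}},
                "Added": {"date": {"start": datetime.datetime.now().isoformat()}},
            },
        },
        timeout=10,
    ).raise_for_status()
```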
## Challenges we ran into
A few challenges we encountered while writing the speech-recognition code were installing PyAudio correctly and connecting to the Notion API, where we wanted real-time updates.
## Accomplishments that we're proud of
We are proud of the entire PAT project as a whole, but the two aspects that defined it were the speech-to-text implementation using external libraries and the connection to the Notion software using OOP.
## What we learned
The biggest takeaway from this project is that with enough research, something that initially seems far-fetched can definitely be accomplished. Entering DeltaHacks, we did not believe this project would work due to the various libraries that had to be installed and the integration of the Notion API, but with determination and persistence we were able to make it fully functional. We learned a significant amount about AI, API integration, and design through this project.
## What's next for PAT: Productivity Algorithm Tool
We plan to make PAT available in multiple languages and to upgrade it into a full voice-to-text and text-to-voice virtual AI assistant. We were extremely excited to work on this project to help optimize a productivity tool not only for students but for anyone who uses Notion. We also plan to include further fields in the list, including the deadline/due date of each task and its priority.

---
## Inspiration
Everybody likes Bitmojis, so we wanted to gamify recreating them!
## What it does
BitLeague lets you create, browse, and vote on community recreations of Bitmojis.
## How we built it
We kicked off by rapidly brainstorming and sketching out the user interface together. The rough draft was validated and turned into a prototype in Adobe XD.
We then designed a high-fidelity interface and built the prototype natively on iOS. We used SnapKit to authenticate the Snapchat login and fetch the Bitmojis: through the SnapKit API, we used GraphQL to fetch user data from Snapchat, and SnapKit-provided SDKs to present the Bitmoji picker.
We also used Firebase Storage and Firestore as our serverless cloud service, cataloguing all of the posts and storing the images taken.
## Challenges we ran into
* **Task Management**: We noticed that building an app from scratch requires dividing our time across all areas, including design, front end, and back end, so we soon decided to execute tasks in parallel to maximize our output. For example, the UI design was done while the database was being built. With the help of efficient communication, we managed to wrap BitLeague up on time.
* **Development**: With two iOS developers working on the same project, merge conflicts were inevitable. Xcode merge conflicts are notoriously tricky to debug, so a lot of time went towards resolving them. This app also used technologies neither of us was familiar with before today, like the SnapKit API and capturing images using the device's camera.
## What's next for BitLeague
Snap. Acquisition.

---
## Inspiration
While caught up in the excitement of coming up with project ideas, we found ourselves forgetting to follow up on action items brought up in the discussion. We felt that it would come in handy to have our own virtual meeting assistant to keep track of our ideas. We moved on to integrate features like automating the creation of JIRA issues and providing a full transcript for participants to view in retrospect.
## What it does
*Minutes Made* acts as your own personal team assistant during meetings. It takes meeting minutes, creates transcripts, finds key tags and features and automates the process of creating Jira tickets for you.
It works in multiple spoken languages, and uses voice biometrics to identify key speakers.
For security, the data is encrypted locally - and since it is serverless, no sensitive data is exposed.
## How we built it
Minutes Made leverages Azure Cognitive Services to translate between languages, identify speakers from voice patterns, and convert speech to text. It then uses custom natural language processing to parse out key issues. Interactions with Slack and Jira are done through STDLIB.
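A minimal sketch of continuous speech-to-text with the Azure Speech SDK, the service at the core of the pipeline; the key and region are placeholders, and the downstream NLP is stubbed out as a print:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

def on_recognized(evt):
    # Each finalized utterance would flow into the NLP stage that tags
    # key issues and drafts Jira tickets.
    print("Transcript:", evt.result.text)

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()
input("Listening; press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```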
## Challenges we ran into
We originally used Python libraries to manually perform the natural language processing, but found they didn't quite meet our demands with accuracy and latency. We found that Azure Cognitive services worked better. However, we did end up developing our own natural language processing algorithms to handle some of the functionality as well (e.g. creating Jira issues) since Azure didn't have everything we wanted.
As the speech conversion is done in real-time, it was necessary for our solution to be extremely performant. We needed an efficient way to store and fetch the chat transcripts. This was a difficult demand to meet, but we managed to rectify our issue with a Redis caching layer to fetch the chat transcripts quickly and persist to disk between sessions.
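A minimal sketch of that caching layer with redis-py; the key naming is illustrative:

```python
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def append_line(meeting_id: str, speaker: str, text: str) -> None:
    # RPUSH keeps the transcript in order; appends are cheap.
    cache.rpush(f"transcript:{meeting_id}",
                json.dumps({"speaker": speaker, "text": text}))

def full_transcript(meeting_id: str) -> list[dict]:
    # Range reads are fast enough to serve the live dashboard, and
    # Redis persistence carries the data between sessions.
    return [json.loads(line)
            for line in cache.lrange(f"transcript:{meeting_id}", 0, -1)]
```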
## Accomplishments that we're proud of
This was the first time that we all worked together, and we're glad that we were able to get a solution that actually worked and that we would actually use in real life. We became proficient with technology that we've never seen before and used it to build a nice product and an experience we're all grateful for.
## What we learned
This was a great learning experience for understanding cloud biometrics, and speech recognition technologies. We familiarised ourselves with STDLIB, and working with Jira and Slack APIs. Basically, we learned a lot about the technology we used and a lot about each other ❤️!
## What's next for Minutes Made
Next, we plan to add more integrations, translating more languages and creating GitHub issues, Salesforce tickets, and so on. We could also improve the natural language processing to handle more functions and edge cases. As we're using fairly new tech, there's a lot of room for improvement in the future.

---
## Inspiration
Every musician knows that moment of confusion, that painful silence as onlookers shuffle awkwardly while you frantically turn the page of the sheet music in front of you. While large solo performances may have people in charge of turning pages, for larger-scale ensemble works this obviously proves impractical. At this hackathon, inspired by the keynote discussion around technology and music, we wanted to develop a tool that could aid musicians.
Seeing AdHawk's MindLink demoed at the sponsor booths ultimately gave us a clear vision for our hack. MindLink, a deceptively ordinary-looking pair of glasses, can track the user's gaze in three dimensions, recognize events such as blinks, and even has an external camera to capture the user's view. Blown away by the possibilities and opportunities this device offered, we set out to build a hands-free sheet-music tool that simplifies working with digital sheet music.
## What it does
Noteation is a powerful sheet-music reader and annotator. All the musician needs to do is upload a PDF of the piece they plan to play. Noteation then displays the first page of the music and waits for eye commands to turn to the next page, providing a simple, efficient, and most importantly stress-free experience for the musician as they practice and perform. Noteation also lets users annotate the sheet music, just as they would on printed sheet music, with touch controls to select, draw, scroll, and flip as they please.
## How we built it
Noteation is a web app built using React and TypeScript. Interfacing with the MindLink hardware was done in Python using AdHawk's SDK, with Flask and CockroachDB linking the frontend with the backend.
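A minimal sketch of the glue between wink events and the reader: `on_wink` is a hypothetical stand-in for the AdHawk SDK callback (the real registration API differs), and the Flask endpoint is what the frontend polls for page turns:

```python
from flask import Flask, jsonify

app = Flask(__name__)
page_delta = 0  # net page turns the frontend has not yet consumed

def on_wink(eye: str) -> None:
    # Right wink -> next page, left wink -> previous page.
    global page_delta
    page_delta += 1 if eye == "right" else -1

@app.route("/page-delta")
def consume_delta():
    # The reader polls this and advances by the returned delta.
    global page_delta
    delta, page_delta = page_delta, 0
    return jsonify({"delta": delta})
```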
## Challenges we ran into
One challenge we came across was deciding how to optimally let the user turn pages using eye gestures. We tried building regression-based models on the eye-gaze data stream to predict when to turn the page, and built applications in Qt to study the effectiveness of these methods. Ultimately, we decided to turn the page using right and left wink commands, as this was the most reliable technique and also preserved the musician's autonomy, allowing them to flip back and forth as needed.
Strategizing how to structure the communication between the front and back end was also a challenging problem, as low latency between receiving a command and turning the page is important. Our solution using Flask and CockroachDB provided a streamlined and efficient way to communicate the data stream while keeping detailed logs of all events.
## Accomplishments that we're proud of
We're so proud we managed to build a functioning tool that we genuinely believe is useful. As musicians, this is something we've legitimately thought would be useful in the past, and being granted access to pioneering technology to make it happen was super exciting, all while working with a piece of cutting-edge hardware we had zero experience using before this weekend.
## What we learned
One of the most important things we learned this weekend was the best practices to use when collaborating on a project in a time crunch. We also learned to trust each other to deliver on our sub-tasks and helped where we could. The most exciting thing we learned while using these cool technologies is that the opportunities in tech are endless and the impact, limitless.
## What's next for Noteation: Music made Intuitive
Some immediate features we would like to add to Noteation are letting users save the PDF with their annotations, and a landscape mode where pages are displayed two at a time. We would also really like to explore more features of MindLink and allow users to customize their control gestures. There's even the possibility of expanding the feature set beyond changing pages, especially for non-classical musicians who might have other electronic devices to control. The possibilities really are endless and super exciting to think about!

---
## Inspiration
As avid readers, we wanted a tool to track our reading metrics. As a child, one of us struggled with concentrating and focusing while reading; specifically, there was a strong tendency to zone out. Our app gives users the ability to track their reading metrics and quantify their progress in improving their reading skills.
## What it does
By incorporating AdHawk's eye-tracking hardware into our build, we've developed a reading-performance tracker system that tracks and analyzes reading patterns and behaviours, presenting dynamic second-by-second updates delivered to your phone through our app.
These metrics are calculated through our linear algebraic models, then provided to our users in an elegant UI interface on their phones. We provide an opportunity to identify any areas of potential improvement in a user’s reading capabilities.
## How we built it
We used the AdHawk hardware and backend to record the eye movements, and their Python SDK to collect the data for use in our mathematical models. From there, we output the data to our Flutter frontend, which displays the metrics for the user to see.
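As one plausible sketch of the reading-pattern math (the threshold and model here are illustrative, not our tuned values): a new line of text usually begins with a "return sweep", a large right-to-left gaze jump, so counting sweeps in the x-coordinate stream gives a reading-pace estimate:

```python
def lines_per_minute(x_samples: list[float], hz: float,
                     sweep_threshold: float = 0.4) -> float:
    # A return sweep is a leftward jump larger than the threshold
    # (in normalized screen coordinates) between consecutive samples.
    sweeps = sum(1 for a, b in zip(x_samples, x_samples[1:])
                 if a - b > sweep_threshold)
    duration_min = len(x_samples) / hz / 60
    return sweeps / duration_min if duration_min else 0.0

# e.g. ten seconds of 60 Hz gaze data: lines_per_minute(xs, hz=60)
```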
## Challenges we ran into
Piping data from Python to Flutter at runtime was slightly frustrating because of the latency issues we faced. Eventually, we decided to use the computer's own local server to accurately display and transfer the data.
## Accomplishments that we're proud of
We are proud of our models for calculating reading speed and detecting page turns and other events, recorded simply through changes in eye movement.
## What we learned
We learned that software development in teams is best done by communicating effectively and working together with the same final vision in mind. Along with this, we learned that it's extremely important to plan out the small details as well as the broader ones to ensure execution occurs seamlessly.
## What's next for SeeHawk
We hope to add more metrics to our app, specifically adding a zone-out tracker which would record the number of times a user "zones out".

---
## Inspiration 💡
Do your eyes ever feel strained and dry after hours and hours spent staring at screens? Has your eye doctor ever told you about the 20-20-20 rule? Good thing we’ve automated it for you along with personalized analysis of your eye activity using AdHawk’s eye tracking device.
The awesome AdHawk demos blew us away, and we were inspired by the device's seemingly subtle but powerful features: it tracks the user's gaze in three dimensions, recognizes blink events, and has an external camera. We knew that our goal to remedy this healthcare crisis could be achieved with AdHawk.
## What it does 💻
Poor eye health has become an increasingly important issue in today's digital world, and we want to help. While you're working at your desktop, you'll wear the wonderful AdHawk glasses. Every 20 minutes or so, our connected app will alert you to look away for a 20-second eye break. Thanks to the eye tracking, you'll have to look at least 20 feet away; otherwise, the timer pauses.
We also made an eye-exercise game where you use your eyes to move a ball around and hit cubes randomly placed on the screen. This engages the eye muscles in a fun and exciting way to improve eye tracking, eye teaming, and myopia.
## How we built it 🛠️
Our frontend uses React.js, Styled Components, and React Three Fiber for the eye-exercise game. Our backend uses Python via AdHawk's SDK, with Flask, and Firebase for our database.
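A minimal sketch of the 20-20-20 loop at the heart of the app; `gaze_depth_m` is a hypothetical stand-in for the SDK's depth estimate, and the alert is stubbed as a print:

```python
import time

WORK_SECONDS = 20 * 60
BREAK_SECONDS = 20
FAR_ENOUGH_M = 6.1  # roughly 20 feet

def notify(msg: str) -> None:
    print(msg)  # placeholder for the app's real alert

def run_breaks(gaze_depth_m) -> None:
    while True:
        time.sleep(WORK_SECONDS)
        notify("Eye break: look at something 20 feet away for 20 seconds!")
        remaining = BREAK_SECONDS
        while remaining > 0:
            time.sleep(1)
            # The countdown only advances while the user is actually
            # looking far enough away; otherwise the timer pauses.
            if gaze_depth_m() >= FAR_ENOUGH_M:
                remaining -= 1
```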
## Challenges we ran into ⛰️
Setting up the glasses to accurately detect the depth of our sight was difficult, as this was the key metric for ensuring the user was taking a 20-foot eye break for 20 seconds. Connecting this data to the frontend was also a bit of a challenge; however, with our Flask and React tech stack, it became an easy, streamlined integration.
We also wanted to record analytics of our user's screen time by logging any instances where their viewing distance was closer than a certain amount, giving the user a chance to gauge their eye health and better understand their true viewing habits. This was a bit of a challenge as it was our first time using CockroachDB.
## Accomplishments that we're proud of 🏅
As coders and avid tech users, we are proud to have built a functioning app that we would actually use in our lives. Many of us personally struggle with vision problems, and Visionary makes it easy to help reduce these issues, whether it's myopia or eye strain. We're super proud of the frontend, and of the fact that we were able to successfully incorporate the incredible AdHawk glasses into our project.
## What we learned 📚
Start small and dream big. We ensured that the glasses would be able to track viewing distance and send that data to our frontend first before moving on to other features, like a landing page, data analytics, and our database setup.
## What's next for Visionary 🥅
We would love to incorporate other use cases for the AdHawk glasses, including more guided eye exercises with eye tracking, focus tracking that ensures the user's eyes stay on screen, and so much more. Customized settings are also a next step. Visionary would also make for an awesome mobile app, so users can further reduce eye strain on their phones and tablets. The possibilities are truly, truly endless.

---
## 🐤 Inspiration 🕒
As students, it's difficult for us to get into the flow state when studying, working, or anything in between. At other times, we've been too focused, too locked into the flow state, and **forget to take breaks**. It's important to maintain a good balance between working and taking breaks, so why not make an extension that **helps us do both**?
## 🍅 What Does PomoGoose Do? 🍅
PomoGoose focuses on 3 main things:
1) A Pomodoro technique based timer
2) A goose that covers your screen during breaks so that you **ACTUALLY take a break**
3) A currency/shop system that **rewards users** for properly doing Pomodoro by letting them exchange coins for cute skins :)
## 🔧 How We Built It 🎨
We built a Chrome extension in TypeScript with native HTML/CSS. Most of this project makes heavy use of the Chrome extension APIs. As for design, the shop layout was designed in Figma as low- and high-fidelity prototypes.
## 👀 Challenges We Ran Into 👀
Originally, we wanted our goose to eat the cursor and prevent the user from using it. Unfortunately, due to security concerns (boring!), Chrome doesn't allow extensions to have that level of control over the user's activity.
## 🎈 What We Learned🎈
This was the first time we worked with Chrome developer tools to create an extension, so there was a big learning curve in figuring out how to implement what we needed. It was also the first time anyone on our team had created wireframes in practice. Going into the project, we didn't know how to take advantage of components, instances, and variants in Figma; we learned those tools, and they made up a large part of our prototypes.
## ✨ Accomplishments ✨
Some of our proudest achievements:
* Creating sick animations (and different skins) for the geese
* Getting the goose on the page (this took us a *VERY* long time)
* Creating the algorithm for tracking mouse movement
## ‼ What's next for PomoGoose ‼️
There's always room to build when running a shop, whether it's:
* Adding even MORE customization -> community submitted skins?
* Finding other ways to spend currency that aren't just one time buys
* Improving code quality
> **Our Assets and Repo**
>
> [GitHub Repo](https://github.com/haenlonns/untitled-goose/)
>
> [Figma Design File + Assets](https://www.figma.com/design/4o0hxyQBRuGd6CtVks6ndS/Pomodoro-Goose?node-id=13-3&t=7hNh74VZYo8e9phW-1)

---
## Inspiration
As developers, we have noticed a lack of intuitive tools for developing CSS animations. We decided to build a DevTools extension because we like how Chrome DevTools lets you easily edit the CSS of an application, and how cssgradient.io easily lets you export CSS code. We also watched a super cool video on the mathematics of Bezier curves, so we wanted to include an animation Bezier-curve editor like you would see in programs such as Adobe After Effects and Animate.
## What it does
Harnessing the power of the Chrome DevTools Protocol, the math of Bezier curves, and the fluidity of React.js, we have (ani)made it easy for developers to generate CSS animations and attach them to CSS selectors within their current browser window!
## How we built it
After figuring out what we wanted to build we jumped into sketching out our product on Figma. As soon as a basic Figma was finished with all the features we wanted to implement, we split into two groups, two people on the front end, and two on the back end.
The back end group started working on injecting css code, communicating between our React App and the main window to debug via chrome-devtools-protocol, and passing data around our react application.
The front-end group spent the next couple of hours figuring out the bits and bobs of the extension's design in Figma, then began assembling the various components based on the mock-up.
About 12 hours before the deadline, we began working on a data structure to handle all the state of our app, and typing it. We really didn't know what we were doing, so this proved to be pretty difficult.
## Challenges we ran into
A Google search for debugging issues with Chrome DevTools extensions leads to a bunch of questions about how to use DevTools, as opposed to how to actually create extensions! The documentation was also incomplete in some cases, which made for a less-than-pleasant developer experience.
We originally wanted to create a simple minimum viable product, but after a couple of hours of hacking we ended up trying to implement almost everything. Because of this, and having split into two separate groups, merging everything together in the last couple of hours of the hackathon proved to be a struggle, and we were unable to fully merge everything.
We were unable to properly demonstrate a lot of the features that we created, like dropdowns for CSS selectors and a stepper to view your animation at different intervals, due to issues merging the 3 different branches we were working on.
## Accomplishments that we're proud of
Bezier editor: we originally didn't want to include this in our final product, but after the Bezier video we just had to include it, despite serious doubts about whether we could get it to work.
Injecting CSS: we were surprised how simple it was to inject CSS into a website. By utilizing the native Chrome DevTools Protocol, we were able to both inject and remove CSS on a website at will pretty painlessly.
## What we learned
This was our first time writing a Chrome extension, and we're happy with what we've accomplished. For a lot of us, this was our first 36-hour hackathon, our first time using TypeScript, our first time using advanced React Hooks, and our first time using Material UI! We really jumped into the deep end with this project, and we think we've made a product that can truly be useful for developers everywhere.
## What's next for Animade Easy
We plan to completely flesh out our enums to make every single CSS property animatable, and to flesh out the types of all of our objects to make it easy for other developers to add onto our extension.
**Built With:** TypeScript, React, Chrome DevTools extension APIs, Material UI, Figma

Link: [animade-easy](https://github.com/css-animations/animade-easy)

---
## Inspiration
In school, we were given the offer to take a dual-enrollment class in sign language. A whole class on the subject can be quite time-consuming for most children and even adults. People interested in learning ASL either watch YouTube videos, which are not interactive, or spend HUNDREDS of dollars on classes (<https://deafchildren.org> lists classes at $70-100). Our product provides a cost-effective, time-efficient, and fun experience for learning this unique language.
## What it does
Of course, you first have to learn the ASL alphabet: A, B, C, D ... Z. Each letter has a unique hand gesture. You also have the option to learn phrases like "Yes", "No", and "Bored". The app makes sure you have formed each letter correctly by displaying a circular progress view showing how long you have to hold the gesture, and we provide many images to make the learning experience accessible. After learning all the letters and practicing a few words, it's GAME time :) Test your ability to show a gesture and see how long you can go until you give up. The gamified experience leads to more learning and engagement for children.
## How we built it
The product was built using Swift. The hand tracking was done using CoreML components: we used hand landmarks and found the distances between all points of the hand. Comparing the distances a gesture SHOULD produce against those observed in a specific time frame tells us whether the hand pose is occurring. For the UI, we planned it out in Figma and later wrote the code in Swift, using SwiftUI components to save time. For data storage we used UIData, which syncs across devices with the same iCloud account.
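A minimal sketch of that distance-comparison idea, shown in Python for brevity (the app itself is Swift); the tolerance and landmark data are illustrative:

```python
from itertools import combinations
from math import dist

def pairwise_distances(landmarks: list[tuple[float, float]]) -> list[float]:
    # Distances between every pair of hand landmarks, normalized by the
    # largest one so the comparison is scale-invariant.
    d = [dist(a, b) for a, b in combinations(landmarks, 2)]
    largest = max(d)
    return [x / largest for x in d]

def matches(current, target, tolerance: float = 0.1) -> bool:
    # The pose "occurs" when every observed pair distance is close to
    # what the target gesture says it SHOULD be.
    cur, ref = pairwise_distances(current), pairwise_distances(target)
    return all(abs(c - r) <= tolerance for c, r in zip(cur, ref))
```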
## Challenges we ran into
There are 26 letters; that's a lot of arrays, comparison statements, and repetitive work. Testing would sometimes become difficult because the iPhone would eventually overheat and show temperature notifications. We only had one phone to test on, so phone testing was reserved mostly for hand landmarks. The project was extremely lengthy, and putting so much content into one 36-hour stretch is difficult, so we had to sacrifice sleep. Also: a cockroach in the room.
## Accomplishments that we're proud of
The hand-landmark detection for each letter actually works much better than expected; moving your hand super fast does not glitch the system. A fully functional vision app with a clean UI makes the experience fun and open to all people.
## What we learned
Quantity < quality. We created more than 6 functioning pages with different levels of UI quality, and it's very noticeable which views were created quickly because of the time crunch. Decreasing the number of pages and adding more content to each view would make the app appear more polished. Comparing the goal array against the current time frame's array is TEDIOUS, and so much time was spent on testing. We could not figure out the action classifier in Swift, as there was no basic open-source code for it. Explaining problems to ChatGPT became difficult because the LLM never seemed to understand basic tasks, yet performed perfectly on complex ones. Stack Overflow will still be around (for now) when we face problems.
## What's next for Hands-On
The app fits well on my iPhone 11, but on an iPad? I do not think so. The next step to take the project further is to scale the UI so it works on iPads and iPhones of any size. Once we fix that problem, we could release the app to the App Store. Since we do not use any API, we would have no expenses related to hosting one. Making the app public could help people of all ages learn a new language in an interactive manner.
|
losing
|
## Inspiration
Many qualified applicants lose the chance to secure their dream job simply because they cannot effectively present their knowledge and skills in an interview. The transformation of interviews into a virtual format due to the Covid-19 pandemic has created many challenges for applicants, especially students, as they have reduced access to in-person resources where they could develop their interview skills.
## What it does
Interviewy is an **Artificial Intelligence**-based interface that allows users to practice their interview skills by providing them with an analysis of their video-recorded interview based on their selected interview question. Users can reflect on their confidence levels and covered topics by selecting a specific time-stamp in their report.
## How we built it
This interface was built using the MERN stack. In the backend, we used the AssemblyAI APIs to monitor confidence levels and covered topics; the frontend uses React components.
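As a rough sketch of the backend flow (our actual backend is Node/Express; this Python version only illustrates the AssemblyAI calls, and the key and file name are placeholders):

```python
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"  # placeholder
HEADERS = {"authorization": API_KEY}
BASE = "https://api.assemblyai.com/v2"

# Upload the recorded interview, then request a transcript with
# topic detection (iab_categories) so covered topics come back too.
with open("interview.webm", "rb") as f:
    upload_url = requests.post(f"{BASE}/upload", headers=HEADERS,
                               data=f).json()["upload_url"]

job = requests.post(f"{BASE}/transcript", headers=HEADERS,
                    json={"audio_url": upload_url,
                          "iab_categories": True}).json()

# Poll until the transcript is ready.
while True:
    result = requests.get(f"{BASE}/transcript/{job['id']}",
                          headers=HEADERS).json()
    if result["status"] in ("completed", "error"):
        break
    time.sleep(3)

# Each word carries a confidence score and a start timestamp (ms),
# which is what a per-timestamp confidence report needs.
for word in result.get("words", [])[:5]:
    print(word["start"], word["text"], word["confidence"])
```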
## Challenges we ran into
* Learning to work with AssemblyAI
* Storing files and sending them over an API
* Managing large amounts of data given from an API
* Organizing the API code structure in a proper way
## Accomplishments that we're proud of
• Creating a streamlined Artificial Intelligence process
• Team perseverance
## What we learned
• Learning to work with AssemblyAI, Express.js
• The hardest solution is not always the best solution
## What's next for Interviewy
• Currently, confidence levels are measured by analyzing the words used during the interview. The next milestone for this project would be to analyze alterations in the interviewees' tone of voice in order to provide more accurate feedback.
• Creating an API for analyzing the video and the gestures of the interviewees
|
>
> Domain.com domain: IDE-asy.com
>
>
>
## Inspiration
Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the new trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry allowing everyone equal access to express their ideas in code.
## What it does
Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudo code and our platform will interpret the audio command and generate the corresponding JavaScript code snippet in the web-based IDE.
## How we built it
We used React for the frontend and the recorder.js API for user voice input, with RunKit providing the in-browser IDE. The backend uses Python and Microsoft Azure: the cognitive speech services modules process user input and provide the syntactic translation for the frontend's IDE.
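A minimal sketch of the Azure speech step (the key, region, file name, and phrase-to-snippet table are placeholders; the real translation layer is more involved):

```python
import azure.cognitiveservices.speech as speechsdk

# Subscription key, region, and file name are placeholders.
speech_config = speechsdk.SpeechConfig(subscription="AZURE_KEY",
                                       region="eastus")
audio_config = speechsdk.audio.AudioConfig(filename="command.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

# One-shot recognition of a short spoken pseudo-code command.
text = recognizer.recognize_once().text.lower().strip(".")

# Toy syntactic translation: map a recognized phrase to a JS snippet.
TEMPLATES = {
    "declare a variable called count": "let count;",
    "print hello world": "console.log('hello world');",
}
print(TEMPLATES.get(text, f"// unrecognized command: {text}"))
```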
## Challenges we ran into
>
> "Before this hackathon I would usually deal with the back-end, however, for this project I challenged myself to experience a different role. I worked on the front end using react, as I do not have much experience with either react or Javascript, and so I put myself through the learning curve. It didn't help that this hacakthon was only 24 hours, however, I did it. I did my part on the front-end and I now have another language to add on my resume.
> The main Challenge that I dealt with was the fact that many of the Voice reg" *-Iyad*
>
>
> "Working with blobs, and voice data in JavaScript was entirely new to me." *-Isaac*
>
>
> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However with the aid of recorder.js and Python Flask, we able to properly implement the Azure model." *-Amir*
>
>
> "I have never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing python to hit API endpoints was unfamiliar to me at first, however with extended effort and exploration my team and I were able to implement the model into our hack. Now with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*
>
>
>
## Accomplishments that we're proud of
>
> "We had a few problems working with recorder.js as it used many outdated modules, as a result we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*
>
>
> "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*
>
>
> "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse user's pseudo code into properly formatted code to provide to our website's IDE." *-Amir*
>
>
> "Being able to properly integrate and interact with the Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*
>
>
>
## What we learned
>
> "I learned how to connect the backend to a react app, and how to work with the Voice recognition and recording modules in react. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure’s servers." *-Iyad*
>
>
> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*
>
>
> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web app. It was a unique experience integrating a flask app with Azure cognitive services. The challenging part was to make the Speaker Recognition to work; which unfortunately, seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speach2Text cognitive models and I ended up creating a neat api for our app." *-Amir*
>
>
> "The biggest thing I learned was how to generate, call and integrate with Microsoft azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience. " *-Kris*
>
>
>
## What's next for QuickCode
We plan on continuing development and making this product available on the market. We first hope to include more functionality within Javascript, then extending to support other languages. From here, we want to integrate a group development environment, where users can work on files and projects together (version control). During the hackathon we also planned to have voice recognition to recognize and highlight which user is inputting (speaking) which code.
|
## Inspiration
Lots of qualified candidates miss out on great opportunities not because they lack skills, but because they miss common interview techniques (e.g. facial expression, confidence). We believe AI can help solve this.
## What it does
It consists of two parts: Interview Preparation & Realtime Prompting. Interview Prep allows step-by-step practice with practical feedback to help candidates learn by repetition. Realtime Prompting provides quick, unobtrusive prompts to help improve proper behaviors.
## How we built it
We built the backend using Python, leveraging the Groq API, HUME, and AWS for the AI services. For the LLM we use the latest LLAMA3. The frontend employs React.
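A minimal sketch of the Groq + LLAMA3 feedback call (the API key, model name, and prompt are illustrative, not our production values):

```python
from groq import Groq

client = Groq(api_key="GROQ_API_KEY")  # placeholder key

# Ask Llama 3 (via Groq) for feedback on a transcribed answer.
transcript = "So, um, I guess my biggest strength is, like, teamwork?"
response = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[
        {"role": "system",
         "content": "You are an interview coach. Give one short, "
                    "actionable piece of feedback on the answer."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)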
## Challenges we ran into
We ran into dependency issues with HUME and general challenges using the API.
## Accomplishments that we're proud of
We were able to build a working demo that provides real-time feedback through speech-to-text and sentiment analysis.
## What we learned
Teamwork and collaboration are the keys to a successful project, and everyone's different skill sets really helped put everything together.
## What's next for Interview-IQ
Two things: get adoption and scale the infrastructure. We want to leverage our college network to get feedback and improvements while building the resources to scale to production.
|
partial
|
## Inspiration
As all three of us are from China, we were particularly inspired by a recent technological initiative implemented in major cities. This initiative focuses on assisting elderly individuals in navigating the digital landscape for taxi-calling services. In China, nearly all taxi services have transitioned to online platforms, which poses significant challenges for seniors who may struggle with digital technology and accessing information online. This experience highlighted for us the critical importance of cities integrating devices and services that cater to diverse needs and bridge the technological gap.
## What it does
In this context, we turned our attention to the issue of homelessness, which presents its own set of challenges. Many homeless individuals face significant difficulties in accessing essential resources, often exacerbated by limited or no internet connectivity. Without easy access to information about shelters, food banks, and healthcare services, they are left isolated and without the support they need. Our goal is to address this gap by developing solutions that provide essential information and assistance to vulnerable populations, ensuring that everyone can access the resources necessary for their well-being.
Moreover, this device is not limited to homeless individuals; it serves as a new initiative within our infrastructure, providing support to a wider range of people. By acting as a community resource hub, the device can assist those facing various challenges, such as low-income families, individuals experiencing temporary hardships, or those seeking information about local services. It fosters a sense of inclusivity by ensuring that everyone, regardless of their circumstances, has access to vital resources. The integration of a voice model allows users to interact easily with the device, making it more user-friendly for individuals who may not be comfortable with technology. This initiative aims to create a more connected and supportive community, enhancing the overall quality of life for all residents.
## How we built it
Because we do not possess an actual IoT device, we decided to use a website to model the IoT device placed around the city. The website will serve as an interactive platform for users seeking assistance. Upon entering the homepage, users will encounter an integrated voice model that allows them to speak directly to the device regarding their needs. The voice model will categorize the request and guide users to the appropriate resources they are looking for. To enhance accessibility, the website will also offer a text-to-speech option for users who may have difficulty reading or prefer auditory information.
Additionally, users will have the option to click into one of six predefined categories, each representing a specific type of assistance: shelters, food banks, health support, emergency services, weather, and an AI chatbox. This structure ensures that users can quickly and efficiently find the help they need while navigating the challenges associated with homelessness or other hardships.
The shelter button will provide users with information on the three nearest homeless shelters, including their addresses and a Google map for easy navigation. Similarly, the food bank button will list food bank locations along with relevant details. The emergency calling service and health support will allow users to directly dial the appropriate phone numbers. Additionally, the weather function will prompt users to enter a specific ZIP code and return a week's worth of weather advice. For any further inquiries or emotional support, users can engage with an AI chatbot for assistance.
By modeling the IoT device through this website, we aim to create a user-friendly and efficient resource that fosters greater accessibility and support for all community members.
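To illustrate the categorization step, here is a toy keyword-based router (the keyword lists are hypothetical; the real voice model does more than keyword matching):

```python
# Hypothetical keyword routing for the six categories.
CATEGORIES = {
    "shelter": ["shelter", "sleep", "bed", "stay"],
    "food_bank": ["food", "hungry", "meal", "eat"],
    "health": ["doctor", "sick", "medicine", "health"],
    "emergency": ["emergency", "help", "police", "danger"],
    "weather": ["weather", "rain", "cold", "forecast"],
    "chatbot": [],  # fallback: open-ended questions go to the AI chatbox
}

def route_request(transcribed_speech: str) -> str:
    """Map a transcribed voice request to one of the six categories."""
    words = transcribed_speech.lower().split()
    for category, keywords in CATEGORIES.items():
        if any(k in words for k in keywords):
            return category
    return "chatbot"

print(route_request("Where can I sleep tonight?"))  # -> shelter
```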
## Challenges we ran into
Since this was our first time participating in a hackathon, and it was also the first experience for all three of us in building a full-stack working project, we faced numerous challenges along the way. Our lack of familiarity with front-end and back-end development, various frameworks, and API calls made the process particularly daunting. We often found ourselves troubleshooting issues that arose from our inexperience, which slowed down our progress.
Moreover, we struggled at first to identify the project we actually wanted to pursue. The brainstorming sessions were filled with great ideas, but we had difficulty narrowing them down to one cohesive concept that we felt passionate about and could realistically execute within the time constraints of the hackathon. This uncertainty added to our initial frustration, but it also provided an opportunity for us to collaborate more closely and learn from one another as we explored different possibilities.
As the hackathon progressed, we gradually gained confidence in our abilities and started to enjoy the process. We learned to leverage our individual strengths, and despite the setbacks, we remained committed to pushing through the challenges together. Overall, this experience taught us valuable lessons about teamwork, adaptability, and problem-solving, which we hope to apply in future projects!
## Accomplishments that we're proud of
As a team of three whose members have a) never collaborated together on any major programming tasks and b) have varying levels of experience and familiarity with different languages, we are proud that we were able to come up with a functional website integrating technologies that were, for the most part, new to us. Additionally, we believe that what we have created can help mitigate the societal issue of homelessness. Although we only have a demo, we believe that the core idea that our project follows can benefit society.
## What we learned
First and foremost, working as a team isn't easy, especially with members with different levels of experience and language preferences. Additionally, we learned (the hard way) that version control is a) extremely important and b) extremely easy to mess up when you don't know exactly what you're doing. Lastly, this experience taught us that while not all things go according to plan due to a variety of factors, we can still build innovative and impactful projects if we put our minds to it.
## What's next for HavenLink
We plan on perfecting our current set of features - extending language support beyond English for speech, adding a TTS service (a work in progress we didn't have time to finish) for the visually impaired, expanding and improving our database from a simple mock to a real database, etc. Additionally, since the project was intended for real devices on the streets, we have plans to refactor the project into a suitable format for a larger scale as well as server-edge device communications, and actually implement the devices. All in all, the future holds much potential, and we will be looking to improve our project to better serve the community.
|
## Inspiration
With current restrictions, there are many opportunities for people to help each other out with essential deliveries. This application provides a way for communities to come together through simple acts of kindness.
## What it does
Connects people who are limited in terms of transportation with people who are able to pick things up for them. Users can post lists of essential items they are looking to have delivered by people who are close by. Users can also search the database for posted lists near an address they are "on the way" to and get in contact with the user who posted them.
## How we built it
Python, Flask, Google Maps, SQL, HTML, Bootstrap, Ajax, JS
## Challenges we ran into
Web hosting, database manipulations, various language/framework documentation, back end to front end connecting, overall architecture planning.
## Accomplishments that we're proud of
This was our first experience with designing and executing a web application! We used many of these technologies for the first time during this project!
## What we learned
How to use SQL databases, Bootstrap, calling APIs, connecting front and backend services, POST and GET requests.
|
## Inspiration
Homelessness is a rampant problem in the US, with over half a million people facing homelessness daily. We want to empower these people to be able to have access to relevant information. Our goal is to pioneer technology that prioritizes the needs of displaced persons and tailor software to uniquely address the specific challenges of homelessness.
## What it does
Most homeless people have basic cell phones with only calling and sms capabilities. Using kiva, they can use their cell phones to leverage technologies previously accessible with the internet. Users are able to text the number attached to kiva and interact with our intelligent chatbot to learn about nearby shelters and obtain directions to head to a shelter of their choice.
## How we built it
We used freely available APIs such as Twilio and Google Cloud in order to create the beta version of kiva. We search for nearby shelters using the Google Maps API and communicate formatted results to the user's cell phone via Twilio's SMS API.
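A minimal sketch of that flow (keys and phone numbers are placeholders):

```python
import googlemaps
from twilio.rest import Client

gmaps = googlemaps.Client(key="GOOGLE_MAPS_KEY")
twilio = Client("TWILIO_SID", "TWILIO_TOKEN")

def text_nearest_shelters(user_number: str, lat: float, lng: float):
    """Look up the closest shelters and SMS a formatted list back."""
    places = gmaps.places_nearby(location=(lat, lng),
                                 keyword="homeless shelter",
                                 rank_by="distance")
    lines = [f"{p['name']} - {p.get('vicinity', '')}"
             for p in places["results"][:3]]
    twilio.messages.create(body="Nearest shelters:\n" + "\n".join(lines),
                           from_="+15550000000", to=user_number)

text_nearest_shelters("+15551234567", 37.7749, -122.4194)
```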
## Challenges we ran into
The biggest challenge was figuring out how to best utilize technology to help those with limited resources. It would be unreasonable to expect our target demographic to own smartphones and be able to download apps off the app market like many other customers would. Rather, we focused on providing a service that would maximize accessibility. Consequently, kiva is an SMS chat bot, as this allows the most users to access our product at the lowest cost.
## Accomplishments that we're proud of
We succeeded in creating a minimum viable product that produced results! Our current model allows for homeless people to find a list of nearest shelters and obtain walking directions. We built the infrastructure of kiva to be flexible enough to include additional capabilities (i.e. weather and emergency alerts), thus providing a service that can be easily leveraged and expanded in the future.
## What we learned
We learned that intimately understanding the particular needs of your target demographic is important when hacking for social good. Often, it's easier to create a product and find people who it might apply to, but this is less realistic in philanthropic endeavors. Most applications these days tend to be web focused, but our product is better targeted to people facing homelessness by using SMS capabilities.
## What's next for kiva
Currently, kiva provides information on homeless shelters. We hope to be able to refine kiva to let users further customize their requests. In the future kiva should be able to provide information about other basic needs such as food and clothing. Additionally, we would love to see kiva as a crowdsourced information platform where people could mark certain places as shelters to improve our database and build a culture of alleviating homelessness.
|
losing
|
## Inspiration
Food allergies are an oft-overlooked but extremely common chronic hidden disability. Studies have shown they may be linked to higher rates (between 1.6 and 2.3 times more likely) of depression, stress, and anxiety, leading to impaired quality of life. One in thirteen children has a food allergy, and 40% of children with one food allergy also have more, leading to increased stress for guardians as well. It's an affliction with possibly devastating acute effects, but chronic stress leads to definite sinister long-term effects.
The simple act of eating at a restaurant can be fraught with danger. Ordering food requires a person to remember each allergy and its severity, while the waiter must remember the possible allergens in each dish. That's a human and fallible process, but with Covid-19, restaurants are moving to online menus tied to QR codes, adding another hurdle to the process.
## What it does
This is where Allenu comes in. Allenu is an allergy-aware online menu. Users can import their allergy data from external health providers and view menus from participating restaurants. Each potentially dangerous menu item is highlighted with its relative potential severity, saving the error-prone and arduous processes outlined above.
## How we built it
We built it by leveraging InterSystem's IRIS data store and API. IRIS let us quickly deploy an instance of a health provider on AWS, and we then populated it with seed users with their own sets of allergies. IRIS acts as both an identity and a health store. When a user first visits Allenu, they log in by providing their IRIS ID, and then we make a GET request to IRIS to load their name and allergies. When a user visits a menu, we load the menu items from our own database, load their allergies from IRIS, cross-reference the two, and display a menu with appropriately dangerous dishes flagged. Users can also add and delete allergies because those requests are passed through our system to IRIS.
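A simplified sketch of the cross-referencing step (the endpoint path and payload shape here are placeholders for illustration, not the real IRIS/FHIR API):

```python
import requests

IRIS_BASE = "https://example-iris-host/api"  # placeholder IRIS endpoint

def flag_menu(menu_items, user_id):
    """Cross-reference a restaurant menu against the user's allergies
    loaded from IRIS, flagging each dish that contains an allergen."""
    allergies = {
        a["substance"].lower()
        for a in requests.get(
            f"{IRIS_BASE}/patients/{user_id}/allergies").json()
    }
    flagged = []
    for item in menu_items:
        hits = allergies & {i.lower() for i in item["ingredients"]}
        flagged.append({**item, "allergens": sorted(hits),
                        "safe": not hits})
    return flagged

menu = [{"name": "Pad Thai",
         "ingredients": ["peanut", "egg", "rice noodles"]}]
print(flag_menu(menu, "patient-123"))
```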
Our IRIS deployment is running on AWS. Our web app is built using Flask and uses PostgreSQL as a database, all hosted on Heroku. We also use Heroku for secret management.
## Challenges we ran into
Becoming familiar with the IRIS API took some time, but we had some engineers at the InterSystems sponsor booth walk us through the deployment process and API reference material.
## Accomplishments that we're proud of
We're proud of using IRIS because it gives us the data in FHIR format. FHIR is an industry-standard format for health data, meaning our app could easily extend to using any external health provider. We're also proud of incorporating functionality for the user to add and delete allergies by passing them through to IRIS, giving the user stewardship of their own health.
## What we learned
The big takeaway for us was the sheer scale of the mental health impacts of allergies, along with their silent ubiquity. On the technical side, we learned how to integrate data from both remote services and local data stores to serve complex user queries.
## What's next for Allenu
We haven't won yet! We're looking into possibly automating data ingestion with web scraping so it's even easier for restaurants to add their own menus. Integrating with other health providers would require a user data authorization flow which we'd also like to build.
|
## Inspiration
We were inspired to create a health-based solution (despite focusing on sustainability) due to the recent trend of healthcare digitization, spawning from the COVID-19 pandemic and progressing rapidly with increased commercial usage of AI. We did, however, want to create a meaningful solution with a large enough impact that we could go through the hackathon, motivated, and with a clear goal in mind. After a few days of research and project discussions/refinement sessions, we finally came up with a solution that we felt was not only implementable (with our current skills), but also dealt with a pressing environmental/human interest problem.
## What it does
WasteWizard is designed to be used by two types of hospital users: Custodians and Admin. At the custodian user level, alerts are sent based on timer countdowns to check on wastebin statuses in hospital rooms. When room waste bins need to be emptied, there is an option to select the type of waste and the current room to locate the nearest large bin. Wastebin status (for that room) is then updated to Empty. On the admin side, there is a dashboard to track custodian wastebin cleaning logs (by time, location, and type of waste), large bin status, and overall aggregate data to analyze their waste output. Finally, there is also an option for the admin to empty large garbage bins (once collected by partnering waste management companies) to update their status.
## How we built it
The UI/UX designers employed Figma, keeping user intuitiveness in mind. Meanwhile, the backend was developed using Node.js and Express.js, employing JavaScript for server-side scripting. MongoDB served as the database, and Mongoose simplified interactions with MongoDB by defining schemas. A crucial aspect of our project was using the MappedIn SDK for indoor navigation. For authentication and authorization, we used Auth0, which greatly enhanced security. The development workflow followed agile principles, incorporating version control for collaboration. Thorough testing at both the front-end and back-end levels ensured functionality and security. The final deployment on Azure optimized performance and scalability.
## Challenges we ran into
There were a few challenges we had to work through:
* MappedIn SDK integration/embedding: we used a front-end system that, while technically compatible, was not the best choice to use with MappedIn SDK so we ended up needing to debug some rather interesting issues
* Front-end development, in general, was not any of our strong suits, so much of that phase of the project required us to switch between CSS tutorial tabs and our coding screens, which led to us taking more time than expected to finish it up
* Auth0 token issues related to redirecting users and logging out users after the end of a session + redirecting them to the correct routes
* Needing to pare down our project idea to limit the scope to an idea that we could feasibly build in 24 hours while making sure we could defend it in a project pitch as an impactful idea with potential future growth
## Accomplishments that we're proud of
In general, we're all quite proud at essentially full-stack developing a working software project in 24 hours. We're also pretty proud of our project idea, as our initial instinct was to pick broad, flashy projects that were either fairly generic or completely unbuildable in the given time frame. We managed to set realistic goals for ourselves and we feel that our project idea is niche and applicable enough to have potential outside of a hackathon environment. Finally, we're proud of our front-end build. As mentioned earlier, none of us are especially well-versed in front-end, so having our system be able to speak to its user (and have it look good) is a major success in our books.
## What we learned
We learned we suck at CSS! We also learned good project time management/task allocation and to plan for the worst as we were quite optimistic about how long it would take us to finish the project, but ended up needing much more time to troubleshoot and deal with our weak points. Furthermore, I think we all learned new skills in our development streams, as we aimed to integrate as many hackathon-featured technologies as possible. There was also an incredible amount of research that went into coming up with this project idea and defining our niche, so I think we all learned something new about biomedical waste management.
## What's next for WasteWizard
As we worked through our scope, we had to cut out a few ideas to make sure we had a reasonable project within the given time frame and set those aside for future implementation. Here are some of those ideas:
* more accurate trash empty scheduling based on data aggregation + predictive modelling
* methods of monitoring waste bin status through weight sensors
* integration into hospital inventory/ordering databases
As a note, this can be adapted to any biomedical waste-producing environment, not just hospitals (such as labs and private practice clinics).
|
## Slooth
Slooth.tech was born from the combined laziness and frustration towards long to navigate school websites of four Montréal based hackers.
When faced with the task of creating a hack for McHacks 2016, the creators of Slooth found the perfect opportunity to solve a problem they faced for a long time: navigating tediously complicated school websites.
Inspired by Natural Language Processing technologies and personal assistants such as Google Now and Siri, Slooth was aimed at providing an easy and modern way to access important documents on their school websites.
The Chrome extension Slooth was built with two main features in mind: customization and ease of use.
# Customization:
Slooth is based on user-recorded macros. Each user records any actions they wish to automate using the macro recorder and associates an activation phrase with it.
# Ease of use:
Slooth is intended to simplify its user's workflow. As such, it was implemented as an easily accessible Chrome extension and utilizes voice commands to lead its user to their destination.
# Implementation:
Slooth is a Chrome extension built in JS and HTML.
The speech recognition part of Slooth is based on the Nuance ASR API kindly provided to all McHacks attendees.
# Features:
-Fully customizable macros
-No background spying. Slooth's speech recognition is done completely server side and notifies the user when it is recording their speech.
-Minimal server side interaction. Slooth's data is stored entirely locally, never shared with any outside server. Thus you can be confident that your personal browsing information is not publicly available.
-Minimal UI. Slooth is designed to simplify one's life. You will never need a user guide to figure out Slooth.
# Future
While Slooth reached its set goals during McHacks 2016, it still has room to grow.
In the future, the Slooth creators hope to implement the following:
-Full compatibility with single page applications
-Fully encrypted autofill forms synced with the user's Google account for cross-platform use.
-Implementation of the Nuance NLU api to add more customization options to macros (such as verbs with differing parameters).
# Thanks
Special thanks to the following companies for their help and support in providing us with resources and APIs:
-Nuance
-Google
-DotTech
|
partial
|
## Inspiration
Proteins make up the human body and allow us to function. Unfortunately, these proteins are also capable of causing great harm when they do not perform correctly, playing a major role in diseases of modern society. To combat this and improve the physical welfare of modern society, we must better understand proteins through their unique characteristics. Proteins carry a special structure-function relationship, where the structure of a protein allows for the variety of activities it can perform.
However, thousands of proteins remain uncharacterized in terms of both structure and function. This leaves humans in a vulnerable state, as we may not be able to understand diseases well due to not being able to link illnesses to a specific protein.
To combat this, machine learning algorithms have been developed that analyze amino acid sequences to determine potential structure features. A problem with multiple current programs is the use of greedy algorithms, which may not yield the best possible fold recognition results. Researchers of DNA sequences have recently published work recommending the use of the mean-shift algorithm in substitute of current programs, and have obtained improved results. We take this approach to protein sequences in order to better understand domains/folds (characteristic protein features) in an attempt to improve current protein knowledge.
## What it does
This program transforms mean-shift clustering into a classification algorithm. Sequences are first cleaned and duplicates are removed in order to develop better data groups during mean-shift clustering. These sequences are then converted into vectors that can be used by machine learning. Mean shift clusters proteins based on their similarity and provides cluster center coordinates. The program determines a radius that represent the average distance between points associated with a cluster centroid and the cluster centroid. This radius is used to create a circle. This circle can be used to predict whether or not sequences contain a certain fold, depending on whether or not coordinates are inside or outside of the circle. ICURAS converts an input sequence to coordinates and determines the distance between its points and the closest cluster center. If the distance is less than or equal to the circle's radius, the program predicts the sequence to carry a specific fold. ICURAS takes a rolling window approach and tests multiple sub-sequences within your sequence input if possible, and will provide distance values, associated sequences, and a threshold score for prediction.
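A condensed sketch of the radius-based classification (random vectors stand in here for real sequence embeddings):

```python
import numpy as np
from sklearn.cluster import MeanShift

# X: embedded fold sequences (rows = sequences, columns = vector dims).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # stand-in for real embeddings

ms = MeanShift().fit(X)
centers = ms.cluster_centers_

# Per-cluster radius: average member-to-centroid distance.
radii = np.array([
    np.linalg.norm(X[ms.labels_ == k] - c, axis=1).mean()
    for k, c in enumerate(centers)
])

def predicts_fold(vec):
    """True if the query falls inside the circle of its nearest center."""
    dists = np.linalg.norm(centers - vec, axis=1)
    k = dists.argmin()
    return dists[k] <= radii[k], dists[k]

print(predicts_fold(rng.normal(size=8)))
```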
## How we built it
ICURAS was built using scikit-learn's API. Sequences were derived from the Pfam database. Purge from the MEME Suite was used to process fold protein sequences. R's ggplot2 was used to plot data from this project. Code was written in a Jupyter notebook with Python 3.7.
## Challenges we ran into
Before TreeHacks, neither of us knew how to use machine learning algorithms. However, discussion with TreeHack mentors improved our understanding and allowed for the project to continue. Data pre-processing was a new concept, and required literature review in order to understand a reasonable way to convert 20 amino acid letters into a numerical form.
Evaluation metrics of whether or not sequences contained a certain fold were also unclear at the beginning of the project, but figured out after data visualization.
## Accomplishments that we're proud of
UHRF1 is a multi-domain protein that is recognized to be overexpressed in several forms of cancer. These domains differ in structure, and vary in size greatly (60 to 200 residues). Using ICURAS, we used UHRF1's domain as test set and were able to identify all 5 domains in sequence.
## What we learned
Machine learning, machine learning, and machine learning! How coding is not set in stone with how its methods should be used was definitely a highlight, considering our conversion of a clustering algorithm to a classification system.
## What's next for ICURAS - Machine Learning Protein Fold Recognition
If we are able to gain access to cloud computing, we are interested in further developing cluster centers for the current set of 650,000 protein domains. We would use this knowledge in order to develop a thorough public resource, where researchers can input sequences of little known knowledge and potentially receive leads on how to learn more about these uncharacterized proteins.
How domains act with one another, and whether domains often appear with one another would also be of interest in a machine learning situation.
Collaboration with protein modeling groups/programs would also be of interest, as ICURAS would be able to provide structure-independent assistance to current programs which rely on information like residue distances in order to determine structure. Predicting certain folds within a sequence may provide protein modeling a template as aid.
References:
James, B. T., Luczak, B. B. & Girgis, H. Z. MeShClust: an intelligent tool for clustering DNA sequences. 46, 1–10 (2018).
Bailey, T. L. et al. MEME SUITE: tools for motif discovery and searching. 37, 202–208 (2009).
Asgari, E. & Mofrad, M. R. K. Continuous Distributed Representation of Biological Sequences for Deep Proteomics and Genomics. 1–15 (2015). doi:10.1371/journal.pone.0141287
Darosa, P. A. & Harrison, J. S. A bifunctional role for the UHRF1 UBL domain in the control of hemi-methylated DNA-dependent histone ubiquitylation. Mol. Cell (2018).
Cerami, E. et al. The cBio Cancer Genomics Portal: An Open Platform for Exploring Multidimensional Cancer Genomics Data. Cancer Discov. 2, 401–404 (2012).
J Yang, R Yan, A Roy, D Xu, J Poisson, Y Zhang. The I-TASSER Suite: Protein structure and function prediction. Nature Methods, 12: 7-8 (2015).
A Roy, A Kucukural, Y Zhang. I-TASSER: a unified platform for automated protein structure and function prediction. Nature Protocols, 5: 725-738 (2010)
Y Zhang. I-TASSER server for protein 3D structure prediction. BMC Bioinformatics, vol 9, 40 (2008).
The Pfam protein families database in 2019: S. El-Gebali, J. Mistry, A. Bateman, S.R. Eddy, A. Luciani, S.C. Potter, M. Qureshi, L.J. Richardson, G.A. Salazar, A. Smart, E.L.L. Sonnhammer, L. Hirsh, L. Paladin, D. Piovesan, S.C.E. Tosatto, R.D. Finn. Nucleic Acids Research (2019) doi: 10.1093/nar/gky995
|
## Inspiration
Every year, the amount of data collected exponentially grows. As the abundance of data grows, so do the possibilities that come along with it. In conjunction with machine learning in Python, we decided to utilize the tools available to try to improve a critical aspect of the health industry: cancer diagnosis.
## What it does
Our algorithm diagnoses the patient, given traits from their biopsy lab results. With data on breast cancer at the cellular level, we were able to train a learning algorithm that achieves 99% accuracy on our test set. In an effort to decrease the number of false negative diagnoses, we brought our algorithm's false negative rate down to 0.4%.
## How we built it
In terms of data, we accessed the breast cancer dataset from UCI's machine learning repository. Once we had the data, we used Python and various packages within Python to both clean up and visualize our data. We then used TensorFlow to model this data using 3 different machine learning algorithms: logistic regression, softmax regression, and neural networks. Using a 60/40 train/test split, we trained and tested our models.
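For illustration, a scikit-learn logistic regression baseline on sklearn's bundled copy of the UCI data reproduces the same workflow (our actual models were built in TensorFlow):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# sklearn ships a copy of the UCI breast cancer data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=42)  # the 60/40 split above

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# The confusion matrix exposes false negatives directly, the
# metric the project tried hardest to minimize.
print(confusion_matrix(y_test, model.predict(X_test)))
```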
## Challenges we ran into
The breast cancer dataset that we used contained only 539 incidences. At the beginning, we had hoped for larger datasets that could train a more sophisticated model. As a result, we had to make do with a smaller model, but still managed to achieve great results.
## Accomplishments that we're proud of
Both Tate and I are incredibly proud of ourselves for coming this far at all. This is the first hackathon where either of us has submitted a project. Furthermore, neither of us had attempted a project in this field in the past, and we found that our respective knowledge of machine learning and TensorFlow piggybacked off each other and pushed us to a new level.
## What we learned
Throughout TreeHacks, we experienced the effects of extreme sleep deprivation, poor diet, and high strain. We vow to pack acai bowls for the next hackathon we attend, along with an air mattress. Jokes aside, we threw ourselves into the deep end analyzing and modeling learning algorithms in TensorFlow, as we had little prior experience. We also went beyond the typical matplotlib visuals in Python and experimented with Seaborn for next-level visualizations.
## What's next for Breast Cancer Classifier
We look to expand to bigger datasets.
|
## Inspiration
Following the recent tragic attacks in Paris, Beirut, and other places, the world has seen the chaos that followed during and after the events. We saw how difficult it was for people to find everything they wanted to know as they searched through dozens of articles and sites to get a full perspective on these trending topics. We wanted to make learning everything there is to know about trending topics effortless.
## What it does
Our app provides varied information on trending topics aggregated from multiple news sources. Each article is automatically generated and is an aggregation of excerpts from many sources in such a way that there is as much unique information as possible. Each article also includes insights on the attitude of the writer and the political leaning of the excerpts and overall article.
**Procedure**
1. Trending topics are found on twitter (in accordance to pre-chosen location settings).
2. Find top 30 hits in each topic's Bing results.
3. Parse each article to find important name entities, keywords, etc. to be included in the article.
4. Use machine learning and our scripts to select unique excerpts and images from all articles to create a final briefing of each topic.
5. Use machine learning to collect data on political sentiment and positivity.
All of this is displayed in a user-friendly web app which features the articles on trending topics and associated data visualizations.
## How we built it
We began with the idea of aggregating news together in order to create nice-looking efficient briefings, but we quickly became aware of the additional benefits that could be included into our project.
Notably, data visualization became a core focus of ours when we realized that the Indico API was able to provide statistics on emotion and political stance. Using Charts.JS and EmojiOne, we created emoticons to indicate the general attitude towards a topic and displayed the political scattermap of each and every topic. These allow us to make very interesting finds, such as the observation that Sports articles tend to be more positive than breaking news. Indico was also able to provide us with mentioned locations, and these was subsequently plugged into the Google Places API to be verified and ultimately sent to the Wolfram API for additional insight.
A recurring source of difficulty within our project was ranking, where we had to figure out what was "better" and what was not. Ultimately, we came to the conclusion that keywords and points were scattered across all paragraphs within a news story. A challenge in itself, but a solution came to mind: if we matched each and every paragraph to each and every keyword, a graph was formed, and all we needed was maximal matching! Google gods were consulted, programming rituals were done, and we finally implemented Kuhn's Max Matching algorithm to provide a concise and optimized matching of paragraphs to key points.
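A toy version of that keyword-to-paragraph matching (we implemented Kuhn's algorithm ourselves; networkx's Hopcroft-Karp is used here only because it yields the same maximum matching in a few lines):

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

# Toy bipartite graph: keywords on one side, paragraphs on the other,
# with an edge wherever a paragraph mentions the keyword.
keywords = ["attack", "response", "victims"]
paragraphs = {"p1": "the attack and the response",
              "p2": "victims of the attack",
              "p3": "official response"}

G = nx.Graph()
for kw in keywords:
    for pid, text in paragraphs.items():
        if kw in text:
            G.add_edge(kw, pid)

# Maximum matching assigns each keyword its own paragraph.
matching = hopcroft_karp_matching(G, top_nodes=keywords)
print({k: v for k, v in matching.items() if k in keywords})
```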
This recurring difficulty presented itself once again in image matching, where we initially had large pools (up to 50%) of our images being logos, advertisements, and general unpleasantness. While filtering on specific keywords and image sizes eliminated the bulk of our issues, the final solution came from an important observation made by one of our team members: unrelated images generally have either empty or poorly constructed alt tags. With this in mind, we simply sorted our images and the sky cleared up for another day.
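The alt-tag filter boils down to something like this (the three-word threshold is illustrative):

```python
from bs4 import BeautifulSoup

html = """
<img src="logo.png" alt="">
<img src="ad.gif">
<img src="scene.jpg" alt="Crowds gather outside the stadium in Paris">
"""

soup = BeautifulSoup(html, "html.parser")

# Keep only images whose alt text looks like a real description:
# present, and more than a couple of words long.
def is_relevant(img):
    alt = (img.get("alt") or "").strip()
    return len(alt.split()) >= 3

keep = [img["src"] for img in soup.find_all("img") if is_relevant(img)]
print(keep)  # ['scene.jpg']
```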
### The list of technicalities:
* Implemented Kuhn's Max Matching
* Used Python Lambda expressions for quick and easy sorting
* Advanced angular techniques and security filters were used to provide optimal experiences
* Extensive use of CSS3 transforms allowed for faster and smoother animations (CSS Transforms and notably 3D transforms 1. utilize the GPU and 2. do not cause page content rerendering)
* Responsive design with Bootstrap made our lives easier
* Ionic Framework was used to quickly and easily build our mobile applications
* Our Python backend script had 17 imports. Seven-teen
* Used BeautifulSoup to parse images within articles, and [newspaper](http://newspaper.readthedocs.org/en/latest/) to scrape pages
## Challenges we ran into
* Not running out of API keys
* Getting our script to run at a speed faster than O(N!!) time
* Smoothly incorporating so many APIs
* Figuring out how to prioritize "better" content
## Accomplishments that we're proud of
* Finishing our project 50% ahead of schedule
* Using over 7 APIs with a script that would send 2K+ API requests per 5 minutes
* Having Mac and PC users work harmoniously in Github
* Successful implementation of Kuhn's Max Matching algorithm
## What's next for In The Loop
* Supporting all languages
|
losing
|
## Inspiration
A couple of us started using Anki to study last term, and we quickly realized that one of the slowest parts of the process, especially when cramming for exams, is manually generating flashcards. The process is often repetitive, and when it involves media like PowerPoint slides or lecture recordings, it becomes even more time-consuming. While web services exist to automate the generation of Anki flashcards from text and other formats, we found that most of them aren't free. We believe students deserve a tool like this without any cost, as efficient studying shouldn't come with a price tag.
## What it does
Our tool, Hackademics, automatically converts various file formats—such as text files, PDFs, PowerPoints, Word documents, and even audio recordings—into ready-to-use Anki flashcards. Whether it's a transcript of a lecture or notes from a presentation, Hackademics takes the input and generates an Anki deck to help you study more effectively and efficiently. This tool empowers students to focus on learning, not on tedious card creation.
## How we built it
We developed Hackademics using Python and Flask to build the web interface and handle file uploads. For file conversion, python-docx and python-pptx were used to extract text from Word and PowerPoint files, respectively. pdfplumber was used to extract text from PDFs, and audio transcriptions were handled using a custom transcription service.
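A condensed sketch of the extraction dispatch (simplified from our actual upload handler):

```python
import pdfplumber
from docx import Document      # python-docx
from pptx import Presentation  # python-pptx

def extract_text(path: str) -> str:
    """Pull raw text out of the upload, dispatching on extension."""
    if path.endswith(".docx"):
        return "\n".join(p.text for p in Document(path).paragraphs)
    if path.endswith(".pptx"):
        prs = Presentation(path)
        return "\n".join(shape.text_frame.text
                         for slide in prs.slides
                         for shape in slide.shapes
                         if shape.has_text_frame)
    if path.endswith(".pdf"):
        with pdfplumber.open(path) as pdf:
            return "\n".join(page.extract_text() or ""
                             for page in pdf.pages)
    raise ValueError(f"unsupported file type: {path}")
```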
We integrated Cohere's LLM API to format the extracted text and create meaningful question-and-answer pairs for flashcards. This step ensures the content is well-structured for efficient studying. Cohere’s rerank() feature was used to identify and prioritize the most relevant questions based on the extracted content, ensuring that only the highest-quality question-answer pairs are included in the final deck.
Once the text is processed and the question-answer pairs are generated, the output is converted into an Anki-compatible format (.apkg), which is automatically downloaded and can be imported directly into the Anki app, allowing immediate use for studying.
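For the .apkg step, a library like genanki does the heavy lifting; a minimal sketch (the IDs and sample card are arbitrary):

```python
import genanki

# genanki is one common way to produce .apkg files; IDs are arbitrary
# but should stay fixed so re-imports update the same deck.
MODEL = genanki.Model(
    1607392319, "Simple QA",
    fields=[{"name": "Question"}, {"name": "Answer"}],
    templates=[{"name": "Card 1",
                "qfmt": "{{Question}}",
                "afmt": "{{FrontSide}}<hr id='answer'>{{Answer}}"}])

def build_deck(pairs, name="Hackademics Deck"):
    deck = genanki.Deck(2059400110, name)
    for question, answer in pairs:
        deck.add_note(genanki.Note(model=MODEL,
                                   fields=[question, answer]))
    genanki.Package(deck).write_to_file("deck.apkg")

build_deck([("What is mitosis?",
             "Cell division producing two identical daughter cells")])
```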
The website was deployed for free with Render, and we used the free domain provided by GoDaddy + MLH. You can find Hackademics online at <https://www.hackademics.study/>.
## Challenges we ran into
Handling audio transcription was a major challenge. Converting speech into coherent text that could be transformed into effective flashcards proved difficult with most open-source packages due to limitations in accuracy and noise reduction. We wanted to ensure that the transcripts were not only accurate but also clean and concise, making them useful for generating high-quality flashcards. After evaluating various solutions, we chose to integrate AssemblyAI's API, as it provided the level of precision and reliability needed for this task. This allowed us to maintain a high standard of transcription quality, ensuring that the flashcards generated from audio content were as effective and meaningful as possible.
## Accomplishments that we're proud of
We're incredibly proud to have developed a tool that students will actually use. Throughout the hackathon, many people who walked by our booth asked for the website link, which gave us confidence that there is genuine demand for Hackademics. Knowing that our tool can make a real difference in helping students study more efficiently is deeply rewarding. Additionally, by offering this service for free, we’re not only saving students time but also potentially saving them money compared to other paid services that provide similar functionality. Above everything, it feels great to know we've built something that has immediate, practical value for the student community.
## What we learned
Throughout the development of Hackademics, we not only deepened our technical skills but also gained valuable experience in Flask development and project management. Building a full-stack application taught us how to handle file uploads, manage server-side processes, and deliver a seamless user experience, all while optimizing performance for large datasets like PDFs and audio files. In addition, we honed our project management skills, coordinating tasks across multiple team members, managing deadlines, and ensuring that we balanced feature development with quality assurance. This project reinforced the importance of clear communication, effective collaboration, and adaptability in a fast-paced environment like a hackathon.
## What's next for Hackademics
Expanded File Format Support: We plan to add support for more file types, such as HTML, Markdown, and images, allowing for even more flexibility in content creation.
Enhanced AI Capabilities: We aim to refine the AI's ability to generate even more nuanced and subject-specific flashcards, potentially incorporating auto-generated hints, tags, and categories for more organized study sessions.
Integration with Cloud Storage: We intend to allow users to upload files directly from popular cloud platforms like Google Drive and Dropbox, making it easier to access materials from any device.
Collaborative Study Features: We’re exploring the idea of allowing users to share flashcard decks with others, encouraging collaborative learning and crowdsourced study materials.
|
## Inspiration
As university students, we know that taking notes can be difficult especially during fast-paced lectures. Even worse, the fine details of what happens during a class can easily be forgotten. What if your friend asked you a question while the professor said something important? What do you do? Use SquirrelAI!
## What it does
It is a cross-platform mobile app that allows students to audio-record lectures and other important events. It automatically transcribes the contents of the audio file into text, separating speakers and focusing on the most prominent one. Additionally, we auto-generate a set of flashcards per audio file, identifying and summarizing key ideas to varying degrees of specificity. We also compute which key ideas from the audio file are the most semantically similar, and map the corresponding flashcards to one another. This way, you can swipe left and right to access different but similar flashcards!
## How we built it
We have a backend API written in Flask and hosted on Google Cloud Run. Our frontend is built on React Native and Typescript. Our database and storage is hosted serverlessly on Firebase.
In order to manage the audio data, we utilized machine learning. Using APIs such as Cohere, AssemblyAI, and SpaCy, we were able to identify numerous speakers, transcribe text, segment the text based on content at different levels, extract named entities, and more! We then organized this data and performed text summarization based on large language models in order to create useful and concise flashcards.
In order to connect the different key ideas on each flashcard, we took a deep dive into graph theory. Using sentence embeddings and approximate nearest neighbour search, we developed a heuristic to rank the other key ideas based on which were the most semantically similar. We also ensured that the directed graph of flashcards would be connected while each flashcard only connected to its top 2 most similar flashcards; this was so that you could effectively study all the necessary topics by simply going through the flashcards but still have the app's controls be relatively simple.
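A brute-force sketch of the similarity ranking (the model name is a common default; the real pipeline used approximate nearest-neighbour search plus a connectivity pass that this sketch omits):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

cards = ["Photosynthesis converts light into chemical energy.",
         "Chlorophyll absorbs red and blue light.",
         "Mitochondria produce ATP through respiration."]

emb = model.encode(cards, normalize_embeddings=True)
sims = emb @ emb.T           # cosine similarity (embeddings normalized)
np.fill_diagonal(sims, -1)   # a card is not its own neighbour

# Each flashcard links to its two most semantically similar cards.
edges = {i: list(np.argsort(sims[i])[::-1][:2])
         for i in range(len(cards))}
print(edges)
```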
## Challenges we ran into
Our main challenge was effectively creating the graph of interconnected flashcards. It was difficult because we wanted to ensure that users could swipe through all the flashcards from a given lecture or audio file, which we found took a lot more initial planning and consideration than most other processes.
## Accomplishments that we're proud of
We formed our group because we all had similar intentions to create a product using speech-to-text technology to improve the note taking experience for students, and we're proud of accomplishing our goals! Additionally, we're proud of being able to incorporate new APIs sponsoring Hack the North into our product, as we hadn't used them before.
## What we learned
As members of our team came from different technological backgrounds and worked on different parts of the project, we learned a lot when putting the different parts of code together. For example, there was one challenge we encountered where we couldn't pass the data we wanted to the frontend; we learned of this kind of restriction and how to work around it.
## What's next for SquirrelAI
We had gathered more data than we properly used, such as named entities from the text. One next step would be to make use of this data, for example through a keywords page.
|
## Inspiration
Nowadays, we have been using **all** sorts of development tools for web development, from the simplest of HTML, to all sorts of high-level libraries, such as Bootstrap and React. However, what if we turned back time, and relived the *nostalgic*, good old times of programming in the 60s? A world where the programming language BASIC was prevalent. A world where coding on paper and on **office memo pads** were so popular. It is time, for you all to re-experience the programming of the **past**.
## What it does
It's a programming language compiler and runtime for the BASIC programming language. It allows users to write interactive programs for the web with the simple syntax and features of the BASIC language. Users can read our sample BASIC code to understand what's happening, and write their own programs to deploy on the web. We're transforming code from paper to the internet.
## How we built it
The major part of the code is written in TypeScript, which includes the parser, compiler, and runtime, designed by us from scratch. After we parse and resolve the code, we generate an intermediate representation; this abstract syntax tree is then walked by the runtime library, which generates HTML code.
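Our compiler is written in TypeScript, but the parsing step can be sketched in a few lines of Python for two toy statement forms (the real grammar and recursive-descent parser are far richer):

```python
import re

# Toy parse of two BASIC statement forms:
#   PRINT <string>  |  LET <ident> = <number>
TOKEN = re.compile(r'"[^"]*"|\w+|=')

def parse(line: str):
    tokens = TOKEN.findall(line)
    head, rest = tokens[0].upper(), tokens[1:]
    if head == "PRINT" and rest and rest[0].startswith('"'):
        return ("Print", rest[0].strip('"'))
    if head == "LET" and len(rest) == 3 and rest[1] == "=":
        return ("Let", rest[0], int(rest[2]))
    raise SyntaxError(f"cannot parse: {line}")

# The runtime walks nodes like these and emits HTML.
print(parse('PRINT "HELLO, WORLD"'))   # ('Print', 'HELLO, WORLD')
print(parse("LET X = 10"))             # ('Let', 'X', 10)
```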
Using GitHub actions and GitHub Pages, we are able to implement a CI/CD pipeline to deploy the webpage, which is **entirely** written in BASIC! We also have GitHub Dependabot scanning for npm vulnerabilities.
We use Webpack to bundle code into one HTML file for easy deployment.
## Challenges we ran into
Creating a compiler from scratch within the 36-hour time frame was no easy feat, as most of us did not have prior experience in compiler concepts or building a compiler. Constructing and deciding on the syntactical features was quite confusing since BASIC was such a foreign language to all of us. Parsing the string took us the longest time due to the tedious procedure in processing strings and tokens, as well as understanding recursive descent parsing. Last but **definitely not least**, building the runtime library and constructing code samples caused us issues as minor errors can be difficult to detect.
## Accomplishments that we're proud of
We are very proud to have successfully "summoned" the **nostalgic** old times of programming and deployed all the syntactical features that we desired to create interactive features using just the BASIC language. We are delighted to come up with this innovative idea to fit with the theme **nostalgia**, and to retell the tales of programming.
## What we learned
We learned the basics of making a compiler and what is actually happening underneath the hood while compiling our code, through the *painstaking* process of writing compiler code and manually writing code samples as if we were the compiler.
## What's next for BASIC Web
This project can be integrated with many modern features that are popular today. One future direction is to merge this project with generative AI, where we could feed AI models the syntactical features of the BASIC language and have them output code translated from modern programming languages. Moreover, this could become a revamp of Bootstrap and React for creating interactive and eye-catching web pages.
|
losing
|
## Inspiration
The future of the blockchain will be **multi-chain**. Since each blockchain is an independent data island with its own ecosystem, it’s not easy to share data or assets across them. Despite the current solutions - bridges - that bridge ERC20 assets, there is still a big gap in NFTs. NFTs due to their collectible value (e.g. PFPs, lands) and rich utilities (e.g. gaming NFTs, membership passes), have become the most important **identity authentication** in the digital world. If I’m a big Axie player or a huge whale that holds a lot of Apes on Ethereum, I would be willing to get the same identity recognition or respect on other chains as I have on Ethereum, or I’ll also be willing to meet other similarly like-minded NFT collectors (e.g. Bear holders on Solana) for potential synergies. And that is the main inspiration for this project.
## What it does
Agg-X is the one-stop shop solution that aggregates NFTs and bridges the identity, reputation, and accomplishments across different blockchains. It has the following three main features:
1. As an **all-in-one NFT gallery**, Agg-X will be supporting the wallet connection from Ethereum, Solana, Eluv.io, Avalanche, and Aptos simultaneously and display all your collections on ONE page. The corresponding collection points will be issued per user along with data analytics services for better NFT and investment tracking.
2. As a **decentralized social protocol**, Agg-X allows you to meet fellow collectors or game players (utility NFTs woohoo) to view their collections and accomplishments across ALL chains. Think of it as a big social platform where you can view, follow, and interact with NFT collections all over the world.
3. As a **credit and credential issuer**, Agg-X will issue upgradable soulbound tokens (SBTs) on each blockchain as verified credentials that recognize people’s on-chain collections and accomplishments. Later, Agg-X will also compute rarity scores and collection points via a DAO as a DID & on-chain credit solution for unlimited future applications, like an undercollateralized credit-based lending protocol.
## How we built it
* Frontend in React.js, JavaScript, TypeScript, HTML, CSS, Tailwind, Chakra UI, Bootstrap 4
* Backend in Express.js, Node.js
* Web3 service providers: web3.js, Web3-React, and ethers.js for accessing on-chain providers and fetching users’ NFT collection data (MetaMask and WalletConnect)
* Blockchain service providers / SDKs: @eluvio SDK for accessing and displaying video NFTs, @solana/web3.js SDK for connecting the Solana Phantom wallet and fetching users’ NFT collection data
* Database: CockroachDB (a distributed SQL DB) for storing structured user information (name, email, account, password, addresses, etc.)
## Challenges we ran into
* We ran into a lot of dependency issues while building SDK packages from the Eluvio SDK and Chakra UI
* Connecting wallets on other blockchains - especially Eluv.io & Solana - since both were completely new to us
* UI/UX details and responsive design were the part we spent the most time on
* Collaborating as a three-person team and constantly fixing merge conflicts was also a challenge when we were working on interrelated components or updating/installing packages at the same time
* React data flow - managing complex code base as well as the front logic & data flow between different components
* Eluv.io’s APIs and sample code are not documented well, so it took us a while to understand the samples
* CockroachDB has an extensive & complicated setup that the docs do not address clearly
* Connecting our backend service (CockroachDB) and the frontend in a localhost setting was also a bit challenging, as we didn’t have enough bandwidth to deploy the full backend
## Accomplishments that we're proud of
* The overall idea and our degree of completion! We delivered a fully functional demo with great UI, and the idea of an NFT aggregator + credential bridge has infinite potential to unlock more and more user interactions and activities across the entire NFT & blockchain ecosystem!
* Our fantastic UI brings the best user experience which also fits perfectly with the crypto vibe!
## What we learned
* Extensive knowledge of Cryptocurrency and NFTs.
* Experience with different wallet connectors (MetaMask, WalletConnect, Solana/Phantom, Eluvio)
* Extensive growth and experience in making great & responsive UI design
* Agile development in a three-person team and great practice for resolving merge conflicts and dependency issues
* Blockchain Full-stack development & deployment
## What’s Next
* Improvements in the speed and performance of fetching collections from a user’s wallet address
* Add more blockchain support, including Polygon, Arbitrum, Optimism, etc.
* Implement the credit / accomplishment score calculating system
* Define and implement the cross-chain credential standards (e.g. a proxy standard on ETH for storing NFT credentials from other blockchains)
* DAO governance on NFT rarity calculation (like TrueFi DAO) & core collectors’ community establishments
|
## Inspiration 💡
Buying, selling, and trading physical collectibles can be a rather tedious task, and this has become even more apparent with the recent surge of NFTs (Non-Fungible Tokens).
The global market for physical collectibles was estimated to be worth $372 billion in 2020. People have an innate inclination to collect, driving the acquisition of items such as art, games, sports memorabilia, toys, and more. However, considering the world's rapid shift towards the digital realm, there arises a question about the sustainability of this market in its current form.
At its current pace, it seems inevitable that people may lose interest in physical collectibles, gravitating towards digital alternatives due to the speed and convenience of digital transactions. Nevertheless, we are here with a mission to rekindle the passion for physical collectibles.
## What it does 🤖
Our platform empowers users to transform their physical collectibles into digital assets. This not only preserves the value of their physical items but also facilitates easy buying, selling, and trading.
We have the capability to digitize various collectibles with verifiable authenticity, including graded sports/trading cards, sneakers, and more.
## How we built it 👷🏻♂️
To construct our platform, we utilized [NEXT.js](https://nextjs.org/) for both frontend and backend development. Additionally, we harnessed the power of the [thirdweb](https://thirdweb.com/) SDK for deploying, minting, and trading NFTs. Our NFTs are deployed on the Ethereum L2 [Mumbai](https://mumbai.polygonscan.com/) testnet.
`MUMBAI_DIGITIZE_ETH_ADDRESS = 0x6A80AD071932ba92fe43968DD3CaCBa989C3253f`
`MUMBAI_MARKETPLACE_ADDRESS =` [0xedd39cAD84b3Be541f630CD1F5595d67bC243E78](https://thirdweb.com/mumbai/0xedd39cAD84b3Be541f630CD1F5595d67bC243E78)
Furthermore, we incorporated the Ethereum Attestation Service to verify asset ownership and perform KYC (Know Your Customer) checks on users.
`SEPOLIA_KYC_SCHEMA = 0x95f11b78d560f88d50fcc41090791bb7a7505b6b12bbecf419bfa549b0934f6d`
`SEPOLIA_KYC_TX_ID = 0x18d53b53e90d7cb9b37b2f8ae0d757d1b298baae3b5767008e2985a5894d6d2c`
`SEPOLIA_MINT_NFT_SCHEMA = 0x480a518609c381a44ca0c616157464a7d066fed748e1b9f55d54b6d51bcb53d2`
`SEPOLIA_MINT_NFT_TX_ID = 0x0358a9a9cae12ffe10513e8d06c174b1d43c5e10c3270035476d10afd9738334`
We also made use of CockroachDB and Prisma to manage our database.
Finally, to view all NFTs in 3D 😎, we built a separate platform that's soon-to-be integrated into our app. We scrape the internet to generate all card details and metadata and render it as a `.glb` file that can be seen in 3D!
## Challenges we ran into 🏃🏻♂️🏃🏻♂️💨💨
Our journey in the blockchain space was met with several challenges, as we were relatively new to this domain. Integrating various SDKs proved to be a formidable task. Initially, we deployed our NFTs on Sepolia, but encountered difficulties in fetching data. We suspect that thirdweb does not fully support Sepolia. Ultimately, we made a successful transition to the Mumbai network. We also faced issues with the PSA card website, as it went offline temporarily, preventing us from scraping data to populate our applications.
## Accomplishments that we're proud of 🌄
As a team consisting of individuals new to blockchain technology, and even first-time deployers of smart contracts and NFT minting, we take pride in successfully integrating web3 SDKs into our application. Moreover, users can view their prized possessions in **3-D!**
Overall, we're proud that we managed to deliver a functional minimum viable product within a short time frame. 🎇🎇
## What we learned 👨🏻🎓
Through this experience, we learned the value of teamwork and the importance of addressing challenges head-on. In moments of uncertainty, we found effective solutions through open discussions. Overall, we have gained confidence in our ability to deliver exceptional products as a team. Lastly, we learned to have fun and build things that matter to us.
## What's next for digitize.eth 👀👀👀
Our future plans include further enhancements such as:
* Populating our platform with a wider range of supported NFTs for physical assets.
* Taking a leap of faith and deploying on Mainnet.
* Deploying our NFTs on other chains, e.g. Solana.
Live-demo: <https://bl0ckify.tech>
Github: <https://github.com/idrak888/digitize-eth/tree/main> + <https://github.com/zakariya23/hackathon>
|
## Inspiration
As more and more blockchains transition to using Proof of Stake as their primary consensus mechanism, the importance of validators becomes more apparent. The security of entire digital economies, people's assets, and global currencies relies on the security of the chain, which at its core is guaranteed by the number of tokens staked by validators. These staked tokens come not only from validators but also from everyday users of the network. In the current system, there is very little distinguishing one validator from another besides the APY each provides and their name (a.k.a. their brand). We aim to solve this issue with Ptolemy by creating a reputation score tied to a validator's DID, using data found both on and off chain.
This pain point was discovered as our club, being validators on many chains such as Evmos, wanted a way to earn more delegations through putting in more effort into pushing the community forward. After talking with other university blockchain clubs, we discovered that the space was seriously lacking the UI and data aggregation processes to correlate delegations with engagement and involvement in a community.
We confirmed this issue through our shared experiences as users of these protocols: when deciding which validators to delegate our tokens to on Osmosis, we really had no way of choosing between validators other than judging by APY or looking them up on Twitter to see what they did for the community.
## What it does
Ptolemy calculates a reputation score based on a number of on-chain and off-chain factors and ties this score to validators on chain using Sonr's DID module. We fetch on-chain validator data from Cosmoscan and assign each validator a reputation score based on the number of blocks proposed, governance votes, number of delegators, and voting power; a mathematical formula normalizes this data and gives each validator a score between 0 and 5. Our project includes not only the equation behind this score but also a web app showcasing what a delegation UI would look like with the reputation score included. We also include mock data tying in activity from social media platforms such as Reddit, Twitter, and Discord to highlight a validator's engagement with the community, although this carries less weight than the other factors.
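As a sketch of how such a score can be computed (the metric names match the ones above, but these weights and the min-max normalization scheme are illustrative assumptions, not our production formula):

```python
# Illustrative 0-5 reputation score: min-max normalize each metric across
# all validators, then take a weighted sum. Weights here are assumptions.
WEIGHTS = {"blocks_proposed": 0.3, "governance_votes": 0.3,
           "delegator_count": 0.2, "voting_power": 0.2}

def normalize(value, lo, hi):
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

def reputation_score(validator, bounds):
    """validator: metric -> raw value; bounds: metric -> (min, max) across the set."""
    total = sum(weight * normalize(validator[metric], *bounds[metric])
                for metric, weight in WEIGHTS.items())
    return round(5 * total, 2)  # scale the weighted sum onto 0-5

bounds = {metric: (0, 1000) for metric in WEIGHTS}
print(reputation_score({"blocks_proposed": 800, "governance_votes": 40,
                        "delegator_count": 950, "voting_power": 300}, bounds))
```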
## How we built it
First, we started with a design doc, laying out all the features. Next, we built out the design in Figma, looking at different Defi protocols for inspiration. Then we started coding.
We built it using Sonr as our management system for DIDs, React and Chakra for the front end, and GoLang for the backend.
## Challenges we ran into
Integrating the Sonr API was quite difficult; we had to hop on a call with an engineer from the team to work through the bug, and we ended up having to use the GoLang API instead of the Flutter SDK. During the ideation phase, we also had to figure out what off-chain data would be useful for choosing between validators.
## Accomplishments that we're proud of
We are proud of learning a new technology stack from the ground up in the form of the Sonr DID system and integrating it into a much-needed application in the blockchain space. We are also proud of the fact that we focused on deeply understanding the validator reputation issue so that our solution would be comprehensive in its coverage.
## What we learned
We learned how to bring together diverse areas of software to build a product that requires many different moving components. We also learned how to look through many sets of documentation and learn the minimum we needed to hack out what we wanted to build within the time frame. Lastly, we learned to efficiently bring these different components together in one final product that does justice to each of their individual complexities.
## What's next for Ptolemy
Ptolemy is named in honor of the 2nd-century scientist who created a system to chart the world in longitude and latitude, illuminating the world of geography. In a similar way, we hope to bring more light to the decision-making process of directing delegations. Beyond this hackathon, we want to include more important metrics such as validator downtime, jail time, slashing history, and APY history over a certain time period. Given more time, we could have fetched this data from an indexing service similar to The Graph. We also want to flesh out the onboarding process for validators to include signing into different social media platforms, so we can fetch data to determine their engagement with communities rather than using mock data. A huge feature we didn't have time to build was staking directly on our platform, which would have involved an integration with the Keplr wallet and the staking contracts on each of the appchains we chose.
Besides these staking related features, we also had many ideas to make the reputation score a bigger component of everyone's on chain identity. The idea of a reputation score has huge network effects in the sense that as more users and protocols use it, the more significance it holds. Imagine a future where lending protocols, DEXes, liquidity mining programs, etc. all take into account your on-chain reputation score to further align incentives by rewarding good actors and slashing malicious ones. As more protocols integrate it, the more power it holds and the more seriously users will manage their reputation score. Beyond this, we want to build out an API that also allows developers to integrate our score into their own decentralized apps.
All this is to work towards a future where Ptolemy will fully encapsulate the power of DID’s in order to create a more transparent world for users that are delegating their tokens.
Before launch, we need to stream in data from Twitter, Reddit, and Discord rather than using mock data. We will also allow users to stake directly on our platform. Then we need to integrate with different lending platforms to bring each validator's reputation score on-chain, and launch on testnet. Right now we cover the top 20 validators; moving forward, we will add more. We also want to query jail time and slashing of validators in order to create a more comprehensive reputation score. Off-chain, we want to aggregate Discord, Reddit, Twitter, and community forum posts to see validators' contributions to the chains they validate on. Finally, we want to create an API that allows developers to use this aggregated data on their own platforms.
|
partial
|
## Inspiration:
Seeing and dealing with rude and toxic comments on popular forums like YouTube and Reddit, and being aware that sometimes it might be you who leaves that rude comment without even realizing it.
## What it does:
This Chrome extension warns you and reminds you not to get too heated if it finds that you are in the process of leaving a particularly rude or toxic comment, using Google's Perspective API, an NLP service for analyzing the sentiment of comments. It reads the user's comment from the editable text field in real time and can inform them that their comment is above the toxicity threshold before it is posted.
## How I built it
* JS, POST requests to Perspective API, Local Node.js instance
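For illustration, the request to Perspective looks roughly like the sketch below. The extension itself is JavaScript; this Python version assumes a placeholder API key and a 0.8 threshold of our own choosing, following the request/response shape of the public Perspective API docs:

```python
# Hedged sketch of the Perspective API call; YOUR_API_KEY is a placeholder
# and the 0.8 threshold is an arbitrary choice for demonstration.
import requests

URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=YOUR_API_KEY")

def toxicity(comment_text):
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if toxicity("you are awful at this") > 0.8:
    print("Warning: this comment may come across as toxic.")
```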
## Challenges I ran into
* Found it difficult to figure out how to detect when a user is typing a comment - which text fields are activated? When do we collect a user's input? Also, we sometimes spent a lot of time on something just to find out that it had already been made by someone else.
## Accomplishments that I'm proud of:
We were able to get a working extension running on localhost using JS and Node.js; none of us had substantial experience in either coming into this hackathon.
## What I learned
We learned a lot about JavaScript, how to build an extension, how frustrating creating an extension can be, and how fun hackathons are!
## What's next for TypeMeNot2
Improving the graphics - as of right now, we have a full-on alert for toxicity above a certain threshold, but we want a better representation, such as a color fader driven by the toxicity score. Example: the icon is bright red for extremely offensive comments and dark blue for inoffensive ones.
|
## Inspiration
Old technology has always had a certain place in our hearts. It is fascinating to see such old and simple machines produce such complex results. That's why we wanted to create our own emulator of an 8-bit computer: to learn and explore this topic, and to make it accessible to others through this learning software.
## What it does
It simulates the core features of an 8-bit computer. We can write low-level programs in assembly to be executed on the emulator. It also displays a terminal output to show the results of the program, as well as a window into the actual memory state throughout the program's execution.
## How we built it
Using Java, ImGui and LWJGL.
## Challenges we ran into
The custom design of the computer was quite challenging to settle on, as we were trying to keep the project reasonable yet engaging. Finding information on how 8-bit computers work and understanding it in less than a day also proved to be hard.
## Accomplishments that we're proud of
We are proud to present an actual working 8-bit computer emulator that can run custom code written for it.
## What we learned
We learned how to design a computer from scratch, as well as how assembly works and can be used to produce a wide variety of outputs.
## What's next for Axolotl
Axolotl can be improved by adding more modules, like audio and a more complex GPU. Ultimately, it could become a full-fledged computer in itself, capable of anything a normal computer can accomplish.
|
## Inspiration
In today's always-on world, we are more connected than ever. The internet is an amazing way to connect with those close to us; however, it is also used to spread hateful messages. Our inspiration came from a surprisingly common issue among YouTubers and other people prominent on social media: negative comments (even from anonymous strangers) hurt more than people realize. There have been cases of YouTubers developing mental illnesses like depression as a result of consistently receiving negative (and hateful) comments on the internet. We decided that this overlooked issue deserved to be brought to attention, and that we could develop a solution not only for these individuals, but for the rest of us as well.
## What it does
Blok.it is a Google Chrome extension that analyzes web content for any hateful messages or content and renders it unreadable to the user. Rather than just censoring a particular word or words, the entire phrase or web element is censored. The HTML and CSS formatting remains, so nothing funky happens to the layout and design of the website.
## How we built it
The majority of the app is built in JavaScript and jQuery, with some HTML and CSS for interaction with the user.
## Challenges we ran into
Working with Chrome extensions was something very new to us and we had to learn some new JS in order to tackle this challenge. We also ran into the issue of spending too much time deciding on an idea and how to implement it.
## Accomplishments that we're proud of
Managing to create something after starting and scrapping multiple different projects (this was our third or fourth project and we started pretty late)
## What we learned
* Learned how to make Chrome extensions
* Improved our JS ability
* Learned how to work with a new group of people (all of us are first-time hackathon-ers and none of us had extensive software experience)
## What's next for Blok.it
Improving the censoring algorithms. Most hateful messages are censored, but some non-hateful messages are inadvertently marked as hateful and censored as well. Getting rid of these false positives is first on our list of future goals.
|
partial
|
## Inspiration
I got the inspiration from the Mirum challenge, which was to be able to recognize emotion in speech and text.
## What it does
It records speech for a set time, separating individual transcripts based on small pauses between each person talking. It then transcribes the audio to a JSON string using the Google Speech API and passes this string to the IBM Watson Tone Analyzer API to analyze the emotion in each snippet.
## How I built it
I first had to connect to the Google Cloud SDK and Watson Developer Cloud, and learn the Python necessary to get them working. I then wrote one script, recording audio with PyAudio and using the two APIs to get JSON data back.
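A rough sketch of the analysis half of that script is below; the endpoint version string and credentials are era-specific placeholders, so check them against your SDK:

```python
# Hedged sketch of the transcribe -> analyze handoff. The Watson endpoint
# and version date are placeholders from the service docs of that era.
import requests

WATSON_URL = ("https://gateway.watsonplatform.net/tone-analyzer/api/"
              "v3/tone?version=2017-09-21")

def analyze_tone(transcript, username, password):
    """Send one speech snippet's transcript to Watson Tone Analyzer."""
    response = requests.post(WATSON_URL, json={"text": transcript},
                             auth=(username, password))
    response.raise_for_status()
    return response.json()["document_tone"]

# The transcript would come back from the Google Speech API as JSON;
# here we fake one snippet for illustration.
tones = analyze_tone("I am really unhappy with this service.",
                     "YOUR_USERNAME", "YOUR_PASSWORD")
for tone in tones.get("tones", []):
    print(tone["tone_name"], tone["score"])
```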
## Challenges I ran into
I had trouble making a GUI, so I abandoned it. I didn't have enough practice making GUIs in Python before this hackathon, and using the APIs was time-consuming already. Another challenge I ran into was getting the google-cloud-sdk to work on my laptop, as there seemed to be conflicting or missing files at times.
## Accomplishments that I'm proud of
I'm proud that I got the google-cloud-sdk set up and the Speech API working, as well as an API I had never heard of before, the IBM Watson one.
## What I learned
To keep trying to get control of APIs, but ask for help from others who might've set theirs up already. I also learned to manage my time more effectively. This is my second hackathon, and I got a lot more work done than I did last time.
## What's next for Emotional Talks
I want to add a GUI that makes it easy for users to analyze their conversations, and perhaps use future Speech APIs to better process the speech itself. This could potentially be sold to businesses for use in customer care calls.
|
## Inspiration
COVID-19 has rapidly affected our day-to-day lives and businesses, and has disrupted world trade and movement. With such a drastic pause in our lives, we wanted to give our Discord users easy access to COVID-19 data tailored to their needs: COVID-19 screening, news/headlines, facts, etc.
Conducting searches in a web browser can be inconvenient and costs time. Thus, we wanted to create something that could provide important information quickly, without the need to open a web browser or visit a hard-to-use statistics website.
BOT-19 was designed to provide exactly that - fast and accurate information on the fly with simple commands. This Discord bot uses APIs for COVID-19 statistics as well as news sources to extract data without requiring the user to visit those sites.
## What it does
BOT-19 provides a range of data: COVID-19 statistics for the entire world, for specific countries, or even specific Canadian provinces. In addition, it has a news/headline command that provides headline news on any topic of the user's choice, with or without relation to COVID-19. On top of that, the bot can also visualize COVID-19 data with a daily self-updating graph for easier communication. Furthermore, the bot can perform COVID-19 screening (with the help of the Ontario Ministry of Health guidelines) through interactive questions and answers.
## How we built it
To perform such operations, a language with sophisticated libraries was our priority; thus, we predominantly used Python and its libraries to create BOT-19. This involved obtaining public API data sets and converting them to JSON objects, which give us readable data structures such as dictionaries and lists. Specifically, we used Postman's COVID-19 APIs and "newsapi.io" for fetching news/headlines relevant to the user's needs.
We then implemented numerous commands to provide users with their desired information.
Furthermore, the Pandas and Matplotlib libraries in Python helped us plot COVID-19 cases of selected countries in a visual format, with an API and data set that refresh daily.
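A condensed sketch of that stats-and-graph flow might look like the following; the endpoint is the Postman-featured covid19api.com service, and the country and file name are placeholders:

```python
# Hedged sketch: fetch a public COVID-19 API as JSON, load it into pandas,
# and plot confirmed cases; adjust the endpoint if the API has changed.
import requests
import pandas as pd
import matplotlib.pyplot as plt

def plot_country(country):
    url = f"https://api.covid19api.com/dayone/country/{country}/status/confirmed"
    records = requests.get(url).json()   # list of dicts -> DataFrame
    frame = pd.DataFrame(records)
    frame["Date"] = pd.to_datetime(frame["Date"])
    frame.plot(x="Date", y="Cases", title=f"Confirmed cases: {country}")
    plt.savefig("cases.png")             # the bot attaches this image in Discord

plot_country("canada")
```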
## Challenges we ran into
Initially, we worked together to understand JSON requests in Python and how to obtain data from APIs. With determination, dedication, and external research, we were able to successfully implement APIs into our code and retrieve the desired data per the user's request.
Another challenge was incorporating the graph into an Embed structure for output to the user. Mutual collaboration and experimentation helped solve that problem.
## Accomplishments that we're proud of
We are proud of creating a project that allows individuals to educate themselves and keep up with current events. Since COVID-19 has changed our whole lives, we believe it is immensely important for people to stay updated with information regarding COVID-19, so that they can observe possible trends and take extra precautions. We built this bot with a multitude of features so that everyone can benefit from it and learn something new. We are also proud of implementing technologies we had limited knowledge of, including APIs and various libraries.
## What we learned
We learned about the use of APIs and how to effectively incorporate them with different libraries, such as Matplotlib for creating a live graph, while also practicing data handling in Python. We also investigated different features of Discord applications and how to create a bot through a coding language we are familiar with.
## What's next for Bot-19
Firstly, we would like to expand our news database so we cover a wider range of topics.
We want to create a friendlier user interface and transition screening into a DM chat instead of in-server, while also developing more functions, such as comparisons of different statistics. Moreover, we can add a live graph for each country showing its trends, which will help users visualize the data better.
Finally, we would like to host our Discord bot, allowing users to add it to their own servers.
|
## Inspiration
Not all hackers wear capes - but not all capes get washed correctly. Dorming on a college campus the summer before our senior year of high school, we realized how difficult it was to decipher laundry tags and determine the correct settings to use while juggling a busy schedule and challenging classes. We decided to try Google's up and coming **AutoML Vision API Beta** to detect and classify laundry tags, to save headaches, washing cycles, and the world.
## What it does
L.O.A.D identifies the standardized care symbols on tags, considers the recommended washing settings for each item of clothing, clusters similar items into loads, and suggests care settings that optimize loading efficiency and prevent unnecessary wear and tear.
## How we built it
We took reference photos of hundreds of laundry tags (from our fellow hackers!) to train a Google AutoML Vision model. After trial and error and many camera modules, we built an Android app that allows the user to scan tags and fetch results from the model via a call to the Google Cloud API.
## Challenges we ran into
Acquiring a sufficiently sized training image dataset was especially challenging. While we had a sizable pool of laundry tags available here at PennApps, our reference images only represent a small portion of the vast variety of care symbols. As a proof of concept, we focused on identifying six of the most common care symbols we saw.
We originally planned to utilize the Android Things platform, but issues with image quality and processing power limited our scanning accuracy. Fortunately, the similarities between Android Things and Android allowed us to shift gears quickly and remain on track.
## Accomplishments that we're proud of
We knew that we would have to painstakingly acquire enough reference images to train a Google AutoML Vision model with crowd-sourced data, but we didn't anticipate just how awkward asking to take pictures of laundry tags could be. We can proudly say that this has been a uniquely interesting experience.
We managed to build our demo platform entirely out of salvaged sponsor swag.
## What we learned
As high school students with little experience in machine learning, Google AutoML Vision gave us a great first look into the world of AI. Working with Android and Google Cloud Platform gave us a lot of experience working in the Google ecosystem.
Ironically, working to translate the care symbols has made us fluent in laundry. Feel free to ask us any questions!
## What's next for Load Optimization Assistance Device
We'd like to expand care symbol support and continue to train the machine-learned model with more data. We'd also like to move away from pure Android, and integrate the entire system into a streamlined hardware package.
|
losing
|
# Resupplie
(Pronounced 'reh-supp-lee'), this is your number one cuisine companion!
Resupplie recommends meal recipes inspired by the contents of your fridge & pantry.
Resupplie prepares future meals & shopping lists based on your tastes & preferences.
|
[project video demo](https://github.com/R1tzG/SignSensei/assets/86858242/40b4d428-f614-4800-8151-0d3d9c74f5af)
## Inspiration
In an increasingly interconnected world, one of the most important skills we can acquire is the ability to communicate effectively with people from diverse backgrounds and abilities. American Sign Language (ASL) is a language used by millions of deaf and hard-of-hearing individuals around the world. However, there are still significant barriers preventing many from learning and using ASL. Our project, SignSensei, aims to break down those barriers, making it easier and faster for anyone to learn ASL, as well as other sign languages. We hope to promote inclusivity through communication for all.
## What it does
SignSensei is a web application that gamifies the process of learning sign language. Using the webcam on your laptop (or front-facing camera on your phone), our app can detect the sign you are putting up with your hand, and tell you whether it is correct. You will be able to see yourself on the screen, as well as a lattice representation of your hand. This makes it easy to monitor your hands to make sure you are getting the signs right. The demo lesson (see video) teaches you the ASL alphabet.
## How we built it
Our sign language detection system is built in two parts. First we collect hand landmark coordinates using the Mediapipe machine learning library. We then pass the extracted coordinates through a custom fully connected neural network that we trained on a dataset of ASL signs. This approach allows us to detect signs from the webcam feed with high precision and accuracy (97% test accuracy on the custom model).
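A minimal sketch of that two-stage design follows; the landmark extraction uses Mediapipe's public API, while the classifier's layer sizes are illustrative rather than our exact trained architecture:

```python
# Stage one: Mediapipe extracts 21 hand landmarks (x, y, z each).
# Stage two: a small fully connected network classifies the sign.
# Layer sizes below are illustrative, not our exact trained model.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

hands = mp.solutions.hands.Hands(max_num_hands=1)

def landmark_vector(frame_bgr):
    """Return a flat (63,) array of landmark coordinates, or None."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    points = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in points]).flatten()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(63,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"),  # A-Z signs
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```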
The sign detection system outlined above forms the backbone of our app. We also developed an interactive front-end with Streamlit, which serves lessons to users.
## Challenges we ran into
We were significantly challenged with developing an accurate detection model. Our first few attempts fell short in accuracy. We were eventually able to train a fast and accurate model for the task. Our final model is very simple but performant, made up primarily of Dense layers.
Another challenge we ran into was developing the user interface. At first, we looked at using React, but found it difficult to integrate Tensorflow and OpenCV seamlessly. We decided to switch gears and develop our front-end with Streamlit, leveraging the power of the Python programming language.
## Accomplishments that we're proud of
We are very proud of the powerful sign detection algorithm that we developed. Along with the use case that we found for ASL, the algorithm can easily be expanded to other sign languages, as well as applications in gesture recognition and VR gaming.
## What we learned
Through this project, we learned how to use Tensorflow to train machine learning models, as well as how they can be implemented in JavaScript (even if this part didn't make it into the final application). We also learned about different ways to make a front-end, from vanilla JS and React to solutions such as Flask.
## What's next for SignSensei
We're not done yet! We plan to add more interactive lessons to the app as well as add support for more sign languages.
View our slideshow [here](https://www.canva.com/design/DAFuAQrskMQ/y0TeL7Q-odr6c6klXBmfXA/view?utm_content=DAFuAQrskMQ&utm_campaign=designshare&utm_medium=link&utm_source=publishsharelink)
|
### Inspiration
The way research is funded is harmful to science — researchers seeking science funding can be big losers in the equality and diversity game. We need a fresh ethos to change this.
### What it does
Connexsci is a grant funding platform that generates exposure to undervalued and independent research through graph-based analytics. We've built a proprietary graph representation across 250k research papers that allows for indexing central nodes with highest value-driving research. Our grant marketplace allows users to leverage these graph analytics and make informed decisions on scientific public funding, a power which is currently concentrated in a select few government organizations. Additionally, we employ quadratic funding, a fundraising model that democratizes impact of contributions that has seen mainstream success through <https://gitcoin.co/>.
### How we built it
To gain unique insights from graph representations of research papers, we leveraged Cohere's NLP suite. Specifically, we used Cohere's generate functionality for entity extraction and fine-tuned their small language model on our custom research paper dataset for text embeddings. We created self-supervised training examples by extracting key topics from abstracts via entity extraction; these examples were then used to fine-tune the small language model that powers our text embeddings.
Node prediction was achieved via a mix of document-wise cosine similarity and other adjacency matrices that held rich information regarding authors, journals, and domains.
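A minimal sketch of the cosine-similarity part (the 0.9 edge threshold and embedding dimension are illustrative assumptions):

```python
# Sketch of document-wise cosine similarity for proposing graph edges;
# the random embeddings stand in for our fine-tuned Cohere embeddings.
import numpy as np

def cosine_similarity_matrix(embeddings):
    """embeddings: (n_papers, dim) array of text embeddings."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

embeddings = np.random.rand(5, 768)
sims = cosine_similarity_matrix(embeddings)
edges = np.argwhere(np.triu(sims, k=1) > 0.9)  # candidate edges (i, j)
```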
For our funding model, we created a modified version of the quadratic funding model. Unlike typical quadratic funding systems, if the subsidy pool is not big enough to make the full required payment to every project, we divide the subsidies proportionately by whatever constant makes the total add up to the subsidy pool's budget. In one scenario, for example, a project dominated the leaderboard with an absolute advantage; that team then gave away up to 50% of its matching pool distribution so that every other project could have a share of the round, after which we saw an increase in submissions.
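A minimal sketch of that scaled quadratic funding calculation (project names and contribution amounts are made up for illustration):

```python
# Modified quadratic funding: each project's match is
# (sum of sqrt(contributions))^2 minus the raw total; if the pool cannot
# cover all matches, every match is scaled by one shared constant.
from math import sqrt

def allocate(projects, pool):
    """projects: name -> list of individual contributions."""
    matches = {name: sum(sqrt(c) for c in contribs) ** 2 - sum(contribs)
               for name, contribs in projects.items()}
    needed = sum(matches.values())
    scale = min(1.0, pool / needed) if needed else 0.0
    return {name: match * scale for name, match in matches.items()}

print(allocate({"grantA": [1, 1, 1, 1], "grantB": [4]}, pool=10))
# grantA's four small donors earn a larger match than grantB's single whale.
```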
The model is then implemented in our bounty platform, where organizers/investors can set a "goal" or bounty to encourage research on a specific group/topic within academia. In turn, this allows more researchers of unpopular topics to be noticed by society, and enables advancements in those neglected fields.
### Challenges we ran into
The entire dataset broke down in the middle of the night! Cohere also gave us trouble with semantic search, making it hard to train our exploration model.
### Accomplishments that we're proud of
Parsing 250K+ publications and distilling them down to the 150 most influential nodes. Piping all ML outputs onto a dynamic knowledge graph. Building an explorable knowledge graph that interacts with the bounty backend.
### What's next for Connex
Integrating models directly on the page, instead of through smaller microservices.
|
partial
|
## Inspiration
One of the biggest problems during the COVID-19 pandemic, and these awful times in general, is that thousands of people are filing property and casualty insurance claims. As a result, insurance companies are receiving an influx of claims, causing longer processing times. These delays not only hurt the companies but also negatively impact the people who filed the claims, for whom the payout could be essential.
We wanted to tackle these problems with our website, Smooth Claiminal. Our platform uses natural language algorithms to speed up the insurance claims process. With the help and support of governments and businesses, our platform can save many lives during the current pandemic while easing the burden on employees at insurance companies and banks.
## What it does
Smooth Claiminal serves three main purposes:
* Provides an analytics dashboard for insurance companies
* Uses AI to extract insights from long insurance claims
* Secures data from the claim using blockchain
The analytics dashboard provides insurance companies with information about previously processed claims, as well as overall company performance. The upload tab allows for a simplified claim submission process, as claims can be submitted digitally as PDF or DOCX files.
Once the claim is submitted, our algorithm first scans the text for typos using Microsoft Azure's Bing Spell Check API. Then, it intelligently summarizes the claim by creating a subset that contains only the most important and relevant information. The text is also passed through a natural language processing algorithm powered by Google Cloud. Our algorithm then parses and refines the information to extract insights such as names, dates, addresses, and quotes, and predicts the type of insurance claim being processed (e.g. home, health, auto, dental).
Internally, each claim is also assigned a sentiment score, ranging from 0 (very unhappy) to 1 (very happy). The sentiment analysis is powered by GCP and allows insurance companies to prioritize claims accordingly.
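A condensed sketch of those Google Cloud NL steps is below; treat it as an outline rather than our full pipeline, and note that the 0-1 rescaling simply mirrors the score range described above:

```python
# Hedged sketch of entity extraction plus sentiment scoring with the
# Google Cloud Natural Language API; summarization and claim-type
# classification happen in separate steps not shown here.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def analyze_claim(text):
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    entities = client.analyze_entities(
        request={"document": document}).entities
    sentiment = client.analyze_sentiment(
        request={"document": document}).document_sentiment
    insights = [(e.name, language_v1.Entity.Type(e.type_).name)
                for e in entities]          # names, dates, addresses, ...
    happiness = (sentiment.score + 1) / 2   # map [-1, 1] onto [0, 1]
    return insights, happiness
```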
Finally, claim submissions are stored in a blockchain database built with IPFS and OrbitDB. Our peer-to-peer network is fast, efficient, and maximizes data integrity through distribution. We also guarantee reliability, as it will remain functional even if the central server crashes.
## How we built it
* Website built with HTML, CSS, and JS for front end, with a Python and Flask back end
* Blockchain database built with IPFS and OrbitDB
* NLP algorithm built with Google Cloud's NL API, Microsoft Azure's Spell Check API, Gensim, and our own Python algorithms
## Challenges we ran into
* Setting up the front end was tough! We had lots of errors from misplaced files and missing dependencies, and resolving these took a lot more time than expected
* Our original BigchainDB was too resource-intensive and didn't work on Windows, so we had to scrap the idea and switch to OrbitDB, which was completely new to all of us
* Not being able to communicate face to face meant we had to rely on digital channels - this was exceptionally challenging when we had to work together to debug any issues
## Accomplishments that we're proud of
* Getting it to work! Most, if not all the technologies were new to us, so we're extremely proud and grateful to have a working NLP algorithm which accurately extracts insights and a working blockchain database. Oh yeah, and all in 36 hours!
* Finishing everything on time! Building our hack and filming the video remotely were daunting tasks, but we were able to work efficiently through everybody's combined efforts
## What we learned
* For some of us, it was our first time using Python as a back end language, so we learned a lot about how it can be used to handle API requests and leverage AI tools
* We explored new APIs, frameworks, and technologies (like GCP, Azure, and OrbitDB)
## What's next for Smooth Claiminal
* We'd love to expand the number of classifiers for insurance claims, and perhaps increase the accuracy by training a new model with more data
* We also hope to improve the accuracy of the claim summarization and insights extraction
* Adding OCR so we can extract text from images of claims as well
* Expanding this application to more than just insurance claims! We see diverse use cases for Smooth Claiminal, especially in any industry where long applications are still the norm. We're also hoping to build a consumer version of this application, which could help simplify long documents like terms and conditions or privacy policies.
|
## Inspiration
At one point in Grade 10, I played a game with my classmates where we all started with $10,000 in virtual currency and traded stocks with it, trying to see who could make the most money. Even though most of us lost almost all our money, the game was a lot of fun.
This is where the idea for Botman Sachs came in. I wanted to recreate the game I played in grade 10, but with bots instead of humans.
## What it does
Botman Sachs is a platform for algorithmic trading bots to trade virtual currency and for the people who write them to have fun doing so in a competitive gamified environment.
Users write algorithmic trading bots in Python using the straightforward Botman Sachs API which provides an interface for buying and selling stocks as well as providing an interface to BlackRock's comprehensive Aladdin API for retrieving stock information.
Every server tick (~10 seconds, configurable), all bots are run and given the opportunity to perform market research (through either the BlackRock or Yahoo! Finance APIs) and make trades. Bots are given a set amount of time to do this.
## How we built it
The bulk of the backend is done in Node, with our bot trading APIs created using Express. Bots are written in Python and use our Python API, a small wrapper that forwards API calls to Node. The web frontend is built using Preact, Chart.js, CodeMirror and Redux.
Bots themselves are run by a separate Node project called the runner. Every server tick, the main Node backend sends out a message to a RabbitMQ server for each bot. There can be multiple runners, all of which grab messages from RabbitMQ and then run the corresponding bot. This pattern lets us add more runners across multiple machines and allows for massive horizontal scalability as more bots are uploaded and need to be run at set intervals.
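The runners are Node processes, but the work-queue pattern is language-agnostic; here is the consumer side sketched with Python's pika client, with `run_bot` standing in as a placeholder for the time-boxed bot execution:

```python
# Sketch of a runner as a RabbitMQ work-queue consumer (ours is Node;
# the pattern is identical). run_bot is a placeholder for sandboxed,
# time-boxed execution of the uploaded Python bot.
import pika

def run_bot(bot_id):
    """Placeholder: execute the uploaded bot within its time budget."""
    print(f"running bot {bot_id}")

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="bot_runs", durable=True)
channel.basic_qos(prefetch_count=1)   # one bot per runner at a time

def on_message(ch, method, properties, body):
    run_bot(body.decode())
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="bot_runs", on_message_callback=on_message)
channel.start_consuming()             # add more runners to scale horizontally
```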
## Challenges we ran into
As developers with little background in finance, we had to consult with a few people about what data was necessary to expose to the bots in order for them to make informed trading decisions.
## Accomplishments that we're proud of
We managed to accomplish quite a bit of work and were very productive over the course of PennApps. This was a large project, and we're quite proud that we managed to pull it off as a team of two.
## What we learned
We learned a lot about financial markets, using RabbitMQ and different flavours of Soylent.
## What's next for Botman Sachs
We're considering polishing Botman Sachs more and putting it up online permanently.
|
## Inspiration
Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that anyone can use to produce any tile-based NumPy game. Our first offering is *StickyJump*, a 2D platformer whose layout can be changed on the fly by the placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on.
## What it does
StickyAR works by using OpenCV's contour recognition to find the borders of the projector image and the positions of human-placed sticky notes. We then use a matrix transformation to align the sticky note positions with the projector image, so that our character can appear to stand on top of the sticky notes. On top of that sits a simple platformer that uses the sticky notes as the platforms our character runs on, jumps between, and interacts with!
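A stripped-down sketch of that vision pipeline follows; the HSV bounds for detecting notes and the corner coordinates for the perspective warp are illustrative placeholders that would come from calibration:

```python
# Sketch of the contour + matrix-transformation steps; HSV bounds and
# corner points are placeholders a real setup would calibrate.
import cv2
import numpy as np

def find_sticky_notes(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # yellow notes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > 500]   # drop small specks of noise

# map the projector image's corners, as seen by the camera, onto game space
camera_corners = np.float32([[102, 80], [530, 78], [540, 402], [95, 410]])
game_corners = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
warp = cv2.getPerspectiveTransform(camera_corners, game_corners)

def to_game_coords(x, y):
    point = cv2.perspectiveTransform(np.float32([[[x, y]]]), warp)
    return point[0, 0]   # where the sticky note lands in the platformer
```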
## How we built it
We split our team of four into two halves: one that worked on the OpenCV/data-transfer part of the project and one that worked on the game side. It was truly a team effort.
## Challenges we ran into
The biggest challenge we ran into was that many of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project; luckily, some very gracious mentors came out and helped us get things sorted out! We also first attempted the game half of the project in Unity, which ended up being too much of a beast to handle.
## Accomplishments that we're proud of
That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features.
## What we learned
A whole ton about Python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came in very handy, and its networking abilities were key when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end product.
## What's next for StickyAR
StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color.
|
partial
|
For a given song, we fetched similar songs and performed topic modeling using LDA to extract common themes from the lyrics. These themes are representative of the emotion and mood of the song. We used this information to find art that matches the extracted themes, then used a machine learning technique called neural style transfer to apply the extracted style to a user's profile picture, reflecting the user's current state of mind.
Artify is a social sharing feature that applies artistic filters to images based on the emotions in music. We use the Spotify API to get the song a user is listening to and apply a natural-language-processing pipeline to retrieve emotions from the lyrics. The emotions are compared to the tags associated with each stylistic template, and the best-matching template is picked for style transfer on the user's profile.
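A minimal gensim sketch of the LDA step (the toy tokenized lyrics and the choice of two topics are purely illustrative):

```python
# Sketch of extracting lyric themes with LDA; in practice the corpus is
# the lyrics of the song plus the similar songs fetched for it.
from gensim import corpora, models

lyrics = [["lonely", "night", "tears", "rain"],
          ["dance", "lights", "alive", "night"],
          ["tears", "goodbye", "rain", "cold"]]   # tokenized lyric snippets

dictionary = corpora.Dictionary(lyrics)
corpus = [dictionary.doc2bow(doc) for doc in lyrics]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

for topic_id, words in lda.print_topics():
    print(topic_id, words)   # themes used to pick a matching art style
```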
|
## Inspiration
As a team of high school students, we understand how challenging it can be to manage stress, anxiety, and other emotional challenges while balancing school, extracurricular activities, and personal life. Music has always been a powerful tool for emotional regulation, but we wanted to take it a step further by integrating technology. This personalized, adaptive music therapy experience was inspired by the potential of combining emotion recognition and music therapy.
## What it does
Rest is an emotion-driven music therapy website that provides personalized music therapy sessions based on the user’s current emotional state. By analyzing facial expressions and text using advanced algorithms, Rest identifies the user’s emotional state and recommends a tailored music therapy session. The app continuously monitors the user’s response and adjusts the music in real-time to ensure maximum effectiveness.
## How we built it
We built Rest using a combination of Python, Flask, CSS, HTML, and JS. The AI used for image analysis was taken from the web and uses a TorchScript model; the text analysis was done with the OpenAI API.
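In outline, the two analysis paths look something like the sketch below; the model file name, prompt, and chosen OpenAI model are placeholders rather than our exact configuration:

```python
# Hedged sketch of Rest's two analysis paths: a TorchScript model for
# faces and an OpenAI call for text. Names here are placeholders.
import torch
from openai import OpenAI

emotion_model = torch.jit.load("emotion_model.pt").eval()  # facial analysis

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def text_emotion(user_text):
    """Ask the OpenAI API to label the dominant emotion in free text."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Reply with one word: the dominant emotion."},
            {"role": "user", "content": user_text},
        ],
    )
    return reply.choices[0].message.content.strip().lower()
```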
## Challenges we ran into
We encountered several challenges that tested our problem-solving abilities and teamwork. First of all, as a team with no prior experience using Flask, we faced a steep learning curve. We had to quickly get up to speed with Flask’s framework and figure out how to integrate it effectively into our project.
Working with Spotify’s API was another significant challenge because the documentation is lackluster and it is highly unreliable at times. We spent considerable time reading documentation, experimenting, and troubleshooting issues.
Lastly, working as a team led to multiple merge conflicts when trying to combine code.
## Accomplishments that we're proud of
We are proud of several accomplishments achieved during this project. First of all, the successful development of Rest itself: the website manages to integrate emotion recognition, music recommendation, and real-time adaptation.
Another element of this project we are proud of is the innovative use of technology. Machine learning was used to analyze both facial expressions and text, which allowed for a personalized music therapy experience.
In addition, our user-friendly web interface allows users to interact with the app and receive their personalized therapy sessions. Our design and clean UI were a significant accomplishment for Rest.
## What we learned
As it was our first time using Flask, we learned a lot about Flask. This presented a significant challenge, as Flask was the primary framework we chose to build our project. However, through persistence and collaboration, we were able to rapidly learn and adapt to using this framework.
Additionally, we learned about Spotify's API during this hackathon. Tasks like creating personalized recommendations, analyzing track information, and adding songs to the queue were all things we achieved with Spotipy.
We also learned how to use the Auth0 API for the first time.
## What's next for Rest
One improvement that could be made to Rest in the future is increased interactivity: introducing interactive elements such as guided visualizations and interactive music-making exercises to enhance user engagement. We would also like to add features such as meditation between songs.
Another potential future idea would be to create a mobile application. A mobile version of Rest would be able to provide users with a more accessible and convenient platform for emotion-driven music therapy.
Finally, collaborating with mental health professionals to integrate Rest into therapeutic practices would allow us to provide more comprehensive support to users.
Rest aims to revolutionize the music therapy experience by providing personalized, adaptive, and effective emotional support through the power of music and technology.
|
## Inspiration
We were very interested in the idea, from the Mirum challenge, of making computers understand human feelings. We applied this idea to call centers, where customer support can't see customers' faces over phone calls or messages. Analyzing the emotional tone of consumers can help customer support understand their needs and solve problems more efficiently. Businesses can immediately see the detailed emotional state of customers from voice or text messages.
## What it does
The text from customers is colored based on its tone: red stands for anger, white stands for joy.
## How I built it
We utilized the iOS chat application from the Watson Developer Cloud Swift SDK to build this chat bot, and the IBM Watson Tone Analyzer to examine emotional, language, and social tones.
## Challenges I ran into
At the beginning, we had trouble running the app on an iPhone, and we spent a lot of time debugging and testing. We also spent a lot of time designing the graph of the analysis results.
## Accomplishments that I'm proud of
We are proud to show that our chat bot supports tone analysis and basic chatting.
## What I learned
We learned and explored a few IBM Watson APIs. We also learned a lot while troubleshooting and fixing bugs.
## What's next for **Chattitude**
Our future plan for Chattitude is to color the text by sentence and make the interface more engaging. For the tone analysis result, we want to improve by presenting the real time animated analysis result as histogram.
|
losing
|
## Inspiration
We wanted to better people's lives, making them happier and healthier. We thought this would be a great way to achieve that, and to bring communities closer together in a sustainable way.
## What it does
FeedMe takes your location and recommends recipes based on what local produce is currently in season. The goal is to help you live healthier and reach out to your local farmers.
## How I built it
Using HTML, CSS, JS, and Google's Geocoding API, we were able to parse the individual's location and provide feedback in a presentable, easy-to-use interface.
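The app calls the Geocoding API from JS; sketched in Python, the lookup looks roughly like this (the key is a placeholder):

```python
# Hedged sketch of the geocoding lookup: free-text address in, lat/lng
# out, which we then use to look up locally in-season produce.
import requests

def geocode(address, key="YOUR_KEY"):
    data = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": key},
    ).json()
    location = data["results"][0]["geometry"]["location"]
    return location["lat"], location["lng"]

print(geocode("Waterloo, ON"))
```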
## Challenges I ran into
The Geocoding API was very inconsistent, changing its output format during use, which made it very difficult to interface with our web application. Another challenge we had was connecting our front and back end to provide a seamless transfer of data.
## Accomplishments that I'm proud of
API calls were something new to our group, and we were able to successfully figure them out. We also feel the website design was done quite well, being both aesthetically pleasing and easy to use.
## What I learned
We learned API calls, a lot of back end interfacing, and some more advanced front end formatting.
## What's next for FeedMe
Next would be to better implement the API calls, include some machine learning to better curate the results, and expand the service. Some future ideas are a browser for local bakeries, delis, fruit stands, etc.
|
## Inspiration
The world today has changed significantly because of COVID 19, with the increased prominence of food delivery services. We decided to get people back into the kitchen to cook for themselves again. Additionally, everyone has a lot of groceries that they never get around to eating because they don't know what to make. We wanted to make a service that would make it easier than ever to cook based on what you already have.
## What it does
Recognizes food ingredients through pictures taken on a smartphone, to build a catalog of ingredients lying around the house. These ingredients are then processed into delicious recipes that you can make at home. Common ingredients and the location of users are also stored to help reduce waste from local grocery stores through better demographic data.
## How we built it
We used Express and Node for the backend and React Native for the front end. To process the images, we used the Google Vision API to detect the ingredients. The final list of ingredients was then sent to the Spoonacular API to find recipes that best match the ingredients at hand. Finally, we used CockroachDB to store users' location and ingredient data, so it can be used for data analysis in the future.
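The backend is Node, but the two API calls at the heart of that flow look roughly like this Python sketch (the Spoonacular key is a placeholder):

```python
# Hedged sketch of the two-step pipeline: Vision labels the photo, then
# Spoonacular matches recipes to the detected ingredients.
import requests
from google.cloud import vision

def detect_ingredients(image_bytes):
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    return [label.description.lower() for label in response.label_annotations]

def find_recipes(ingredients, api_key="YOUR_KEY"):
    return requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={"ingredients": ",".join(ingredients),
                "number": 5, "apiKey": api_key},
    ).json()
```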
## Challenges we ran into
* Working with Android was much more challenging than expected.
* Filtering food words for the image recognition suggestion.
* Team members having multiple time zones.
* Understanding and formatting inputs and outputs of APIs used
## Accomplishments that we're proud of
* We have an awesome-looking UI prototype to demonstrate our vision with our app.
* We were able to build our app with tools that we are unfamiliar with prior to the hackathon.
* We have a functional app apk that's ready to demonstrate to everyone at the hackathon.
* We were able to create something collaboratively in a team of people each with a drastically different skill set.
## What we learned
* Spoonacular API
* React Native
* Google Vision API
* CockroachDB
## What's next for Foodeck
* Implement personalized recipe suggestions using machine learning techniques (including health and personal preferences).
* Learn user behavior of a certain region and make more localized recipe recommendations for each region.
* Implement an optional login system for increased personalization that can be transferred across different devices.
* Extend to multi-platform, allowing users to sync the profile throughout different devices.
* Integrate with grocery delivery services such as Instacart and Uber Eats.
|
## Inspiration
The inspiration for Med2Meals sprang from the universal truth that food is more than just sustenance; it's medicine, comfort, and a catalyst for connection. In today's fast-paced world, we've seen an increasing reliance on pharmaceutical solutions to health issues, often overlooking the holistic benefits of natural remedies and the healing power of human connection. This realization was compounded by the global pandemic, which highlighted the detrimental effects of isolation on mental and physical health. Med2Meals was born out of a desire to revive the ancient wisdom that food can heal and to harness the digital age's potential to bring people together over the healing power of meals. We envisioned a platform that not only encourages a natural approach to healing through diet but also fosters a sense of community and support among individuals facing health challenges.
## What it does
Med2Meals connects individuals seeking natural dietary remedies for their health conditions with local chefs and home cooks who prepare and deliver home-cooked, healing meals. Users can input their specific health concerns or the type of medication they're aiming to supplement or avoid. The platform then suggests a variety of home-cooked recipes, each tailored to address those health issues with natural ingredients known for their healing properties.
Beyond just providing recipes, Med2Meals offers a service where users can request these meals to be cooked and delivered by someone in their community. This feature aims to provide comfort through nourishing food while also opening the door to new friendships and a support network. The platform caters to a range of dietary preferences and health needs, ensuring that each user receives personalized care and nutrition.
In essence, Med2Meals is more than a meal delivery service; it's a community-building tool that leverages the nurturing power of food to heal bodies and connect souls.
## How we built it
* **Frontend Development**: We utilized Next.js for the frontend to leverage its server-side rendering capabilities, ensuring a fast and responsive user interface.
* **Backend Infrastructure**: Our backend is powered by Node.js and Express.js, forming a robust and scalable foundation.
* **Integration of AI Technologies**: We partnered with Together.ai to fine-tune and deploy Large Language Models (LLMs) and diffusion models tailored to our specific needs. These models are crucial for generating personalized meal recommendations and understanding user queries in natural language.
* **AI Agents for Data Exchange**: The seamless execution of the LLM and diffusion models, along with data exchange between them, is facilitated by AI agents using Fetch.ai. This innovative approach allows for real-time, intelligent processing and a highly personalized user experience.
* **Blockchain Technology for Transactions**: We employed Crossmint for blockchain ledger transactions between the user and chef. This ensures that every cooking opportunity a chef receives is cryptographically signed, providing a transparent and secure method to verify a chef's credibility through the NFTs they have minted.
* **Database Management**: MongoDB serves as our database management system, offering a flexible, scalable solution for storing and managing our data.
* **API Documentation**: The entire API documentation was meticulously maintained in a Postman Workspace.
* **Development Tools**: We adopted Bun as our package manager and JavaScript runtime. Bun's high performance and efficiency in package management and execution of JavaScript code significantly enhanced our development workflow, allowing us to build and deploy features rapidly.
## Challenges we ran into
* **Customized Recipe Generation**: We encountered difficulties in crafting the ideal few-shot prompts for Large Language Models (LLMs) to generate customized recipes, which was crucial for tailoring dietary solutions.
* **Building AI Agents with Fetch.ai**: The challenge of navigating Fetch.ai's occasionally misleading documentation was significant. However, the assistance from the Fetch.ai team was instrumental in overcoming these hurdles.
* **Idea Pivot**: Initially, we faced a setback with our original concept, which seemed too cliché after our initial pitch to judges. This led us to pivot to a more unique and impactful problem statement, which ultimately defined our project's direction.
## Accomplishments that we're proud of
* **Successful Pivot**: The decision to pivot our idea proved to be valuable. Moving away from a generic concept to tackle a unique problem statement has positioned us distinctively in the space.
* **Diverse Technological Learning**: Each team member embraced the challenge of learning new technologies, from AI agents and prompt engineering for LLMs to crypto signing. This diversity in learning has been one of our project's most enriching experiences.
## What we learned
* **AI Agents**: The project deepened our understanding of AI agents, enhancing our ability to deploy intelligent solutions.
* **Fine-tuning LLMs**: We gained valuable insights into the process of fine-tuning Large Language Models to meet specific project needs.
* **Crypto Signing**: The importance and application of crypto signing were key learnings, opening new avenues for secure data handling.
* **Teamwork**: The project underscored the indispensable value of teamwork in overcoming challenges and achieving collective goals.
## What's next for Med2Meals
* **Speed Optimization**: Currently, the process of fetching custom recipes using LLMs is slower than desired. Our immediate focus will be on improving the speed of this feature to enhance user experience.
* **User Interface Improvements**: We plan to refine the user interface to make it more intuitive and user-friendly, ensuring that our platform is accessible to everyone, regardless of their tech-savviness.
* **Integration with Health Platforms**: We are looking into integrating Med2Meals with existing health platforms and medical databases, to provide users with a seamless experience that bridges the gap between medical advice and dietary solutions.
|
losing
|
## Inspiration
* You can search for images with words (Google Search)
* You can search for words with images (Google Image Search)
* Why can't you *search for images with images???*
## What it does
* Translates camera image to Giphy search query using Core ML Image Recognition
* Keep tapping your screen to add more GIFs!!!
## Controls
* Long Press: Switch camera mode
* Shutter button: Take photo (and load first GIF)
* Tap: Load another GIF
* Shake phone: Clear photo & gifs, go back to camera mode
## Best at detecting
* Computers
* Sunglasses
* Sneakers
* Water bottles
* Pill bottles
* Phone/iPod
* You tell me.....
## How I built it
* [AV Foundation](https://medium.com/@rizwanm/https-medium-com-rizwanm-swift-camera-part-1-c38b8b773b2) for building custom camera view
* [Inceptionv3](https://developer.apple.com/machine-learning/build-run-models/) for object recognition model ported to Core ML
* [Alamofire](https://github.com/Alamofire/Alamofire) and [SwiftyJSON](https://github.com/SwiftyJSON/SwiftyJSON) for calling [Giphy API](https://developers.giphy.com)
* [SwiftyGif](https://github.com/kirualex/SwiftyGif) for displaying GIFs
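For illustration, here is a minimal sketch of the Giphy search call that the Core ML label feeds into (shown in Python rather than the app's Swift, with a placeholder API key):

```python
import requests

def first_gif_url(query: str, api_key: str, offset: int = 0):
    # Each tap can bump `offset` to load another GIF for the same query
    resp = requests.get(
        "https://api.giphy.com/v1/gifs/search",
        params={"api_key": api_key, "q": query, "limit": 1, "offset": offset},
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return data[0]["images"]["original"]["url"] if data else None
```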
## Challenges I ran into
1. What to do with inaccurate predictions (just show 'em all!!! It'll be fun!!!)
2. Sometimes hit Giphy's API rate limit after just a few calls (likely too many other hackers were calling their API from the same IP address)
## What I learned
* Classifying images using Core ML/Vision APIs
* Using Giphy API
* Creating a customizable camera module
## What's next for GoofyGiphyCamera
1. Allow the user to select from top 5 predictions
2. Social Sharing
3. Use BulletinBoard context cards to add on-boarding tutorial (not needed now -- will be doing hackathon demo in person)
4. Publication on Apple App Store
|
## Inspiration
Inspired by the recent success of apps like HQ, we wanted to capitalize upon the mass demand for instant gratification. After browsing reddit for a few hours, we realized that Snapchat uses selfies, Instagram uses pictures, Facebook uses text, and giphyr uses GIFs.
## What it does
Serving two types of users, giphyr lets people do what they want. A typical user opens the app and starts swiping right on GIFs they like, and left on ones they don't. Content creators would upload their creations and wait as others swipe right and left on them. Through daily and weekly quests (such as logging in, swiping on fifteen GIFs, uploading, etc.), users can earn points to spend in the giphyr store. Points can be redeemed for weekly raffles, or products themselves.
## How we built it
We built the app primarily with Android Studio and Java, using AWS as the backend.
## Challenges I ran into
As usual, Android documentation was lacking, especially with regard to AWS. Sometimes the documentation would reference deprecated methods, or would contradict itself.
## Accomplishments that I'm proud of
We created neural nets to analyze user preferences and detect potential spam.
## What I learned
Reading AWS Android documentation is like reading a disaster prep guide after the disaster has happened.
## What's next for giphyr
We plan to rewrite the application in React Native to allow for a seamless cross-platform experience. In addition, plans to explore aggressive marketing through referrals and seeking out avenues for funding will be explored in depth.
|
## Inspiration
Our inspiration was a project online detailing the "future cities index," a statistic that aims to calculate the viability of building a future city. After watching the Future Cities presentation, we were interested to see *where* future cities would be built if a project like the one we saw were funded in the US. This prompted us to create a tool that may help social scientists answer that question: as many people work to innovate the various components of future cities, we tried to find possible homes for their ideas.
## What it does
Allows Social Scientists and amateur researchers to access aggregated census and economic data through Lightbox API, without writing a single line of code. The program calculates a Future Cities Index based on the resilience of a census tract to natural disaster, housing availability, and the social vulnerability in the area.
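A minimal sketch of how such a composite index could be computed, assuming each tract metric has already been normalized to [0, 1]; the weights are illustrative, not the project's actual coefficients:

```python
def future_cities_index(resilience: float, housing: float, vulnerability: float,
                        weights=(0.4, 0.35, 0.25)) -> float:
    # Higher resilience and housing availability raise the score;
    # higher social vulnerability lowers it.
    w_r, w_h, w_v = weights
    return w_r * resilience + w_h * housing + w_v * (1.0 - vulnerability)

print(future_cities_index(resilience=0.8, housing=0.6, vulnerability=0.3))  # 0.705
```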
## How we built it
Interactive UI built with ReactJS; data parsed from the Lightbox API with JavaScript.
## Challenges we ran into
Loading in the census tracts in our interactive map, finding appropriate data to display for each tract, and calculating the Future Cities Index
## Accomplishments that we're proud of
Creating a working interactive map, successfully displaying a real-time Future Cities Index
## What we learned
How to use geodata to make interactive maps that behave as we wish. We are able to overlay different raster images and polygons onto a map.
## What's next for Future Cities Index
Using more parameters in the Future Cities Index, displaying data on the County and City level, linking each county tract to available census data, and allowing users to easily compare tracts
|
losing
|
## Inspiration
Have you ever walked up to a recycling can and wondered whether you can recycle your trash? With Trash MIT, we take a picture of your item and run it through our database to check if it's recyclable.
## What it does
Trash MIT identifies what an object is using a webcam and checks it against our list of items. If it is unsure of what the item is, Trash MIT asks for user input. Over time it will collect data on what is and isn't recyclable.
Trash MIT has 2 purposes: (1) collecting data on what people think is and isn't recyclable, and (2) telling people what is and isn't recyclable.
Trash MIT could easily be implemented at restaurant trash cans, where there is a small, known set of trash frequently thrown away. Trash MIT makes recycling fun and interactive, encouraging people to recycle.
## How we built it
Google's Cloud Vision API identifies types of objects. Based on our list of recyclable objects, we then tell the user if the object is recyclable.
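A minimal sketch of that lookup step, assuming the Cloud Vision Python client and a hand-curated recyclables list (the lists below are illustrative):

```python
from google.cloud import vision

RECYCLABLE = {"bottle", "can", "paper", "cardboard", "glass"}    # illustrative
NOT_RECYCLABLE = {"food", "styrofoam", "plastic bag", "napkin"}  # illustrative

def classify(image_bytes: bytes) -> str:
    client = vision.ImageAnnotatorClient()
    labels = client.label_detection(
        image=vision.Image(content=image_bytes)).label_annotations
    for label in labels:
        name = label.description.lower()
        if name in RECYCLABLE:
            return "recyclable"
        if name in NOT_RECYCLABLE:
            return "not recyclable"
    return "unknown"  # fall back to asking the user ('y'/'n') and record the answer
```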
## Challenges we ran into
We searched around online looking for a comprehensive list of recyclable items and were unable to find one. We then realized we were going to have to create our data set ourselves.
We tried using barcodes; however, it is still hard to go from identifying an object to knowing whether it's recyclable or not.
## Accomplishments that we're proud of
It runs!
## What we learned
How to use APIs, Google Cloud, and OpenCV.
Working in teams
## What's next for trash MIT
We have so many ideas!
### Trash MIT is missing hardware.
We would like to build a unit that could be attached to trash cans in urban areas. The unit would have a screen behind the trash to eliminate noise (during HackMIT we held a piece of paper behind items to stop the Google API from identifying items in the background).
Instead of using 'y' and 'n' to take in user input, we would place sensors on the trash can. That way we can collect data based on what people are already throwing away. We can use this data for two purposes: reporting back to recycling authorities on whether people are actually recycling correctly, and improving our data.
### Developing the classification
We could also expand from single-stream recycling. Currently, we only identify recyclable or non-recyclable. We could expand so we can identify different types of recycling.
We could expand to use Machine Learning to help with the identification.
We could also contact local government for information on recycling laws in different areas.
We could improve the interface with more color to encourage more recycling.
|
## Inspiration
We inherently know the importance of recycling, but we get lazy and sometimes forget. People often put their waste in the wrong recycling bin (e.g. bottles in compost or food waste in landfill), which negates the entire purpose of recycling. Therefore, we wanted to create a project to solve this issue.
## What it does
Introducing **sorta**, a project that combines computer vision and hardware components to make recycling more convenient and effective. Our system utilizes image recognition via a mounted camera to recognize waste and identify which kind of recycling bin it best belongs in. Our system will communicate with the recycling bins to ensure that only the identified bin will open for the user. Therefore, people have no choice but to dispose of their waste correctly.
## How we built it
For capturing images and opening specific bins properly, our system utilizes a Raspberry Pi 3 Microcontroller with an Arducam, an ultrasonic range sensor, and three servo motors. The Arducam captures images and the microcontroller sends the images to a storage account in Microsoft Azure. The Arducam only takes pictures when a user is within a certain range of the waste disposal system so that we do not have excessive, superfluous data; our system detects range with the ultrasonic range sensor. Finally, our system manages the opening and closing of the recycling bins through three standard servo motors.
In terms of software, after the Raspberry Pi takes the picture of the piece of waste, it is uploaded to Microsoft blob storage so that the list of pictures taken can easily be iterated through and turned into the proper format (URL). The system then utilizes the Microsoft Cognitive Services Computer Vision API to detect the type of waste (e.g. plastic bottle, plastic bag, can). A category hierarchy is then built on top of it to sort the detected objects into more general categories. This builds up until each object is sorted into the three categories of a typical recycling bin at Stanford: compostable (fruits, vegetables), recycling (glass, plastic, metal, paper), and landfill. For extra accuracy, the system then also runs the pictures through the Microsoft Custom Vision API, which we manually trained with a mixture of categorized waste pictures from ImageNet and manually taken pictures of food-wrapper waste lying around at the hackathon. Our original plan also involved training this API to detect brand names and company logos.
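A sketch of what that category hierarchy could look like in Python; the tags and groupings are illustrative assumptions, not our exact mapping:

```python
MATERIAL_OF = {
    "plastic bottle": "plastic", "plastic bag": "plastic", "can": "metal",
    "banana": "fruit", "apple": "fruit", "newspaper": "paper", "jar": "glass",
}
BIN_OF = {
    "fruit": "compost", "vegetable": "compost",
    "plastic": "recycle", "metal": "recycle", "paper": "recycle", "glass": "recycle",
}

def bin_for(detected_tag: str) -> str:
    material = MATERIAL_OF.get(detected_tag.lower())
    return BIN_OF.get(material, "landfill")  # anything unrecognized goes to landfill
```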
The data coming in from both APIs is then stored in an Azure Cosmos DB instance with the MongoDB API. If company brand logos, type of waste, and location (trivial data, because the physical system is stationary and can be hardcoded) are successfully obtained, this has the potential to be used in data analytics applications (e.g. displaying a heatmap showing which company is producing what type of waste, at what levels, in different locations).
## Challenges we ran into
Waste sorting isn't completely a visual problem. People sort waste according to what the waste is, but what the waste is depends on more than our vision of those objects. A piece of metal is categorized as recyclable because it feels like metal in our hands. Therefore, we are working on problems that contain more factors than we can currently tackle. Thus, we had to optimize under inevitably biased, sub-optimal conditions.
No one in our team had a strong background regarding hardware. Therefore, implementing all the Raspberry Pi features was a challenge. Although we managed to implement the camera capture with aspects of distance detection and image transfer to the Microsoft Azure data storage space, we were not able to implement the opening and closing of recycling bins through the servo motors.
Data selection was challenging, as we didn't have access to a database with good quality control. In the end, we had to manually select general and valid photos of glass, plastic, metal, paper, etc. so that our model developed with the Microsoft Custom Vision API could be better trained.
Along the way we came across many small bugs that really tripped up the team's progress. Many of those bugs involved calling APIs, because the errors returned were generally unclear about which parameter was missing in the call. So a lot of the time, it came down to trial and error in guessing the conventions of these API calls.
## Accomplishments that we're proud of
For all the members of our team, this event was the first hackathon we ever attended. Therefore, we're very proud of submitting an idea and project we felt could have a beneficial impact on society.
Although the learning curve at times was steep, we are proud that we stuck with it throughout the whole event. We definitely learned a lot and were exposed to a ton of amazing resources we can use for future projects.
We also had a blast discovering the collaborative, innovative, and inspiring nature of hackathons. We met a lot of people this weekend that we otherwise would never have the chance to meet. To conclude, we are really looking forward to attending our next hackathon!
## What we learned
Hardware, Database, Raspberry Pi, Microsoft Cognitive Services (including computer vision API and custom vision API)
## What's next for sorta
We could integrate more sensors into our system so that it's easier to distinguish certain materials, like metal.
We could also integrate the motors that we were not able to finish this time, so that the user only needs to place the trash on a plate, and our system can categorize it and place it into the right trash bin.
If company brand logo and location data is also collected at a larger scale with sharper waste classification, we believe that the data collected can be very useful. It can be used to detect consumer patterns in different locations as well as waste data for environmental purposes.
|
## Inspiration
Waste Management: Despite having bins with specific labels, people often put waste into the wrong bins, which leads to unnecessary plastic and recyclables in landfills.
## What it does
Uses a Raspberry Pi, the Google Vision API, and our custom classifier to categorize waste and automatically sort it into the right sections (Garbage, Organic, Recycle). The data collected is stored in Firebase and shown with its respective category and item label (type of waste) on a web app/console. The web app is capable of providing advanced statistics such as % recycling/compost/garbage and your carbon emissions, as well as statistics on which specific items you throw out the most (water bottles, bags of chips, etc.). The classifier is capable of being modified to suit the garbage laws of different places (e.g. separate recycling bins for paper and plastic).
## How We built it
The Raspberry Pi is triggered by a distance sensor to take a photo of the inserted waste item, which is identified using the Google Vision API. Once the item is identified, our classifier determines whether the item belongs in the recycling, compost, or garbage bin. The inbuilt hardware drops the waste item into the correct section.
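A minimal sketch of that trigger loop, assuming gpiozero for the ultrasonic sensor and picamera for capture; the pin numbers and the commented-out calls are illustrative:

```python
from time import sleep
from gpiozero import DistanceSensor
from picamera import PiCamera

sensor = DistanceSensor(echo=24, trigger=23)  # illustrative GPIO pins
camera = PiCamera()

while True:
    if sensor.distance < 0.3:                  # an item is held near the bin
        camera.capture("/tmp/item.jpg")
        # label = vision_label("/tmp/item.jpg")   # Google Vision API call (omitted)
        # section = classify(label)               # custom classifier: Garbage/Organic/Recycle
        # drop_into(section)                      # hardware drops item into that section
        sleep(2)                               # debounce before the next capture
```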
## Challenges We ran into
Combining IoT and AI was tough. Never used Firebase. Separation of concerns was a difficult task. Deciding the mechanics and design of the bin (we are not mechanical engineers :D).
## Accomplishments that We're proud of
Combining the entire project. Staying up for 24+ hours.
## What We learned
Different technologies: Firebase, IoT, Google Cloud Platform, hardware design, decision making, React, and prototyping
## What's next for smartBin
Improving efficiency. Building out of better materials (3D printing, stronger servos). Improving mechanical movement. Adding touch-screen support to modify various parameters of the device.
|
losing
|
## Inspiration
Have you ever finished up ECE lab or a small project like a PennApps hardware project and found that there were all of these screws, nuts, and resistors lying around everywhere? Or maybe you're just a hardware enthusiast but hate keeping track of all of your stuff (we like to do it the old-fashioned way: the "hardware pile").
With InvenTeX, there is finally an easy solution to inventory maintenance. Gone are the days of hoping you have quarter-inch threads and settling for glue, or even having to remember the resistor color code. InvenTeX does it all for your inventory, so you can focus your all on your hardware.
## What it does
InvenTeX identifies objects you want to keep in your inventory and helps you track them in groups so you don't have to.
The workflow of InvenTeX is simple.
When you first launch the app, you can choose to see what is in your inventory (at this point, nothing) and to insert something into the InvenTeX.
#### Taking a 100-ohm resistor as an example:
To insert, all you need to do is press the button on the app and take a picture of your resistor. It will then be identified and you can drop it and any other 100-ohm resistors into the loading bin. If you already have 100-ohm resistors stored in InvenTeX, your new resistor will be placed with the ones you already have in store.
To retrieve, you start from the drop down menu inside the app. There it will show everything that you have stored in InvenTeX. Pick the item you want to retrieve and InvenTeX will pop out the drawer with it. And once you're done grabbing what you need, the drawer will pull itself back in and you don't even need to remember where you got it from.
## How we built it
Initially, we wanted to use 3D printed parts for the hard-to-craft pieces of the enclosure and internal hardware. However, after finding out that we had access to laser cutters and an almost infinite supply of acrylic, we changed up our modeling plans and building timeline completely. We also initially intended to use a standard Android app and use Bluetooth as the connection interface between it and the Raspberry Pi. And again, all plans changed with the introduction of Expo.io into the hack.
### Hardware
The entire enclosure and all of the internal structure is made with laser cut acrylic (1/8" and 1/4" black acrylic). We are also using two stepper motors to drive the belts and two servo motors to actuate the trap door (for part insertion) and magnetic arm (part retrieval).
### Controller
The Arduino Mega 2560 is the brain of the controls. The Arduino is only controlling movement and waiting for communication from the Raspberry Pi through I2C protocol. We are using an Adafruit Motor Shield (V2.3) for easy stepper and servo motor control with its built in stepper classes and PWM control for the servos. Everything draws power through a 5V external power source.
### Software
**Vision Processing**
Due to the Raspberry Pi camera not being clear enough to detect color bands of resistors, we switched over to phone cameras that had greater resolution (but also came with some other issues of connectivity, see below).
We started with simple color detection, differentiating between LEDs of different colors. Then we moved on to shape and color detection, differentiating between different nuts and bolts. Finally, we moved on to pattern recognition and orientation detection, differentiating between resistors with different color bands.
**Communication**
Since we had replaced the Raspberry Pi camera with the phone camera, we needed to transfer the image to the Raspberry Pi for vision processing. We used HTTP requests to send the image from the phone to the Raspberry Pi, and HTTP requests to send retrieval signals to the Raspberry Pi as well.
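A hedged sketch of the Pi-side receiver for that image transfer, written as a small Flask endpoint; the route and form field names are assumptions for illustration:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    request.files["photo"].save("/tmp/part.jpg")  # the phone POSTs the captured image
    # part = identify_part("/tmp/part.jpg")       # OpenCV vision pipeline (omitted)
    return {"status": "received"}, 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # reachable from the phone over the local network
```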
## Challenges we ran into
### Software
**Vision processing:** Raspberry Pi camera is difficult to focus without a specialized tool. Resistor color codes had to be fine-tuned (HSV values). Resistor colors were also difficult to differentiate from each other and from the resistor itself, being heavily influenced by the color of the light source.
**Phone-to-Pi Communication:** We had to determine a system to send information to the Pi and get information back. We considered using Bluetooth and an online database but ultimately decided to use HTTP protocols. Considerable thought went into organizing which device ran what kind of processes and how to share information between the phone and Pi.
**Expo Development:** We had no experience with React Native. Rendering code changes would sometimes hang, so app development was not as productive as it could have been.
### Hardware
**Fitting Components:** Because all of the internal components of the enclosure must have slots and notches on the outer frame in order to fit properly, many of the internal structures that had to be swapped, or structures that needed to be added in later for internal support, had to be glued in, making some areas less structurally stable than we would like.
**Stepper Motor Belt Tension:** The tension required for the stepper belts to drive properly was actually a bit too much for our structure to handle and some internal pieces broke off from the main frame due to tension forces. We fixed this by using zip ties instead of glue for a much stronger belt support structure.
## Accomplishments that we're proud of
**Integration:** We really had to utilize every possible piece of knowledge of every team member in order to fully integrate this project. There was (albeit simple) wiring to be done with the microcontrollers, all the way up to implementing two different servers. In terms of code, we wrote C, Python, and JavaScript. We had to be able to pass information between all of our systems through wired and wireless connections. This extremely wide range of integration is not something we have ever done in so little time as a team, and we're really excited that we pulled it off.
**Fully Assembled CAD Model:** We needed to know all of the tolerances in the system before printing or cutting any piece of it. In order to have everything fit together as seamlessly as possible, we generated a fully assembled CAD model of every single component in the hack and cut almost every piece in one go. If we had completed the project by designing and cutting chunks at a time, without taking some time to look at the whole picture (literally), then there definitely would have been some more trips to the laser cutting room. We're glad that we took the time to fully CAD everything so that all of the measurements and designs, developed simultaneously by two CAD designers, could be integrated with each other.
## What we learned
## What's next for ToolHub
Bigger == Better. We want to try to use stronger materials with a stiffer frame that will not break down on us. We will also perform some stress analysis before redesigning some of the internal bracing in order for the structure to hold under the stepper torque.
The vision processing algorithms and implementation could also always be snappier and more robust.
|
## Inspiration
The other day, I heard my mom, a math tutor, tell her students "I wish you were here so I could give you some chocolate prizes!" We wanted to bring this incentive program back, even among COVID, so that students can have a more engaging learning experience.
## What it does
The student will complete a math worksheet and use the Raspberry Pi to take a picture of their completed work. The program then sends it to the Google Cloud Vision API to extract equations. Our algorithms will then automatically mark the worksheet, annotate the jpg with Pure Image, and upload it to our website. The student then gains money based on the score that they received. For example, if they received an 80% on the worksheet, they will get 80 cents. Once the student has earned enough money, they can choose to buy a chocolate; the program will check to ensure they have enough funds, and if so, will dispense it for them.
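A minimal sketch of the earning and purchase logic described above; function names are illustrative:

```python
def credit_for_score(correct: int, total: int) -> int:
    # Each percentage point earns one cent, e.g. 8/10 correct -> 80 cents
    return round(100 * correct / total)

def try_purchase(balance_cents: int, price_cents: int) -> tuple[bool, int]:
    # Only dispense when the student's funds cover the price
    if balance_cents >= price_cents:
        # dispense_chocolate()  # stepper-motor routine on the Raspberry Pi (omitted)
        return True, balance_cents - price_cents
    return False, balance_cents
```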
## How we built it
We used a Raspberry Pi to take pictures of worksheets, Google Cloud Vision API to extract text, and Pure Image to annotate the worksheet. The dispenser uses the Raspberry Pi and Lego to dispense the Mars Bars.
## Challenges we ran into
We ran into the problem that if the writing in the image was crooked, the API would not detect the numbers as being on the same line. To fix this, we opted for lined paper instead of blank paper, which helped us to write straight.
## Accomplishments that we're proud of
We are proud of getting the Raspberry Pi and motor working, as this was our first time using one. We are also proud of the gear ratio, where we connected small gears to big gears to ensure high torque to move the candy. We also had a lot of fun building the Lego.
## What we learned
We learned how to use the Raspberry Pi, the Pi camera, and the stepper motor. We also learned how to integrate backend functions with Google Cloud Vision API
## What's next for Sugar Marker
We are hoping to build an app to allow students to take pictures, view their work, and purchase candy all from their phone.
|
## Inspiration
Memes have changed the way we communicate, the content we're looking for, and how we can express ourselves. In our personal experience, memes have allowed us to connect with each other and find joy and entertainment in a simple way. There's no meme-exclusive social media that provides the user with an easy to use and minimalistic interface. There's no platform that can be used to create, share, and rate memes at the same time. Memes are now part of our daily lives and represent a valuable resource of communication. Creating a free, uncomplicated, and interactive platform was our motivation.
## What it does
MemeTune groups users weekly according to the type of meme that they prefer (shitposts, for example), a factor that can be influenced by their location, age, type of humor, and interests. After being grouped in a "galaxy", their feed will show one meme at a time that corresponds to the galaxy's category, and the user will react to the meme, giving points to the person who created it. Users can explore other galaxies and rate more than one meme from their own galaxy. They can also upload a meme of their own and participate in the weekly competition. A donation to a meaningful fundraising campaign will be made in the name of the winner.
## How we built it
First, the website was created and visualized in Figma. There, we included all the buttons, windows, and the website's color/identity/theme. After we got the Figma design, we started programming the website using Python's Django, HTML, CSS, and SQLite.
## Challenges we ran into
Our team consists of an astrophysicist, a chemistry major, a biomedical engineer, and a computer science major, so our first big challenge was that only one of us was experienced in coding. We knew this limited us in the depth and quality of the code we could create, but we knew it could also give us an advantage. By having such diverse academic backgrounds, we had a vast field of inspiration to draw from. We quickly bounced off ideas and opted for something fun that we could all enjoy making while not having to put a huge workload on our only programmer. This was easier said than done. As we all think so differently, it was challenging to reach a consensus on what our product should or should not include, considering we were not used to having to explain our thinking process to someone else. After a lot of communication, we figured out that we could all concretize our ideas using concept maps, and we created a vision for our project. That allowed us to start working. Later in the process, after we created our webpage design in Figma, we ran into some more coding-related problems. Because our programmer was in a rush, he made some syntax errors that led to **multiple** errors. The biggest one was that the back end of the code contained some errors that prevented us from working on the front end. We were not able to solve it in time due to time constraints, so it limited our product. Instead, we fully designed our website using Figma, which contains a detailed demo of how we wanted our product to work and look.
## Accomplishments that we're proud of
The personality of the website ended up being very solid and was developed with care and attention to detail. We were also able to include a retro theme in the website to give it an old-school feel with contrasting modern content. We are also very proud that many of the things developed in 36 hours were the result of new skills. For example, our back-end developer learned how to use Django during the hackathon, and our product designers learned how to use Figma from scratch. We're very proud of not only the accomplishments, but also the knowledge and skills that we managed to gain in very little time.
## What we learned
Many lessons were learned in the past couple of days. First, we had to learn how to evaluate our project ideas and objectives objectively. Secondly, we learned how to use different platforms and languages from scratch, such as Django and Figma. It was also the first hackathon for most of the team members, so we now know how a hackathon works, how to prepare better for the next one, and which roles best suit each of our abilities.
## What's next for MemeTune
MemeTune still has a long way to go in terms of the programming behind the idea and the functions of the website. After we've managed to create the website, gaining users will be the next objective. Growing the platform won't be very difficult, as this will be a website that's attractive to many. MemeTune will then start with the donations to fundraisers as well as incorporating ads in order to support the creators and the website itself. Then, the MemeTune app will be developed, making it easier to access and use. Ads have to be memes in order to have a place on our website. This strategy will be very attractive for companies, as ads that are found funny are not immediately perceived as sponsored content and can stick with the consumer for far longer. Users can also share these meme ads, giving more interactions and views to the ads. MemeTune will be a viral website and app that allows users to find content that they enjoy, create memes and share their creativity, and will change the way ads and advertising work.
|
partial
|
The idea started with the desire to implement a way to throw trash away efficiently and in an environmentally friendly way. Sometimes it is hard to know which bin trash should go in, due to time pressure or carelessness. Even though there are designated spots to throw different types of garbage, it is still not 100% reliable, as you are relying on the human to make the decision correctly. We thought of making it easier for the human to throw trash away by putting it on a platform and having an AI make the decision for them. Basically, a weight sensor activates a camera that takes a picture of the object and runs it through its database to see which category the trash belongs in. There is already a database containing a lot of pictures of trash, but that database can constantly grow as more pictures are taken.
We think this will be a good way to reduce the difficulty in separating trash after it's taken to the dump sites, which should definitely make a positive impact on the environment. The device can be small enough and inexpensive enough at one point that it can be implemented everywhere.
We used Azure Custom Vision for the image analysis and image data storage, the TELUS IoT Starter Kit for feeding sensor data to the Azure IoT Hub, and an Arduino to control the motor that switches between plastic trash and tin-can trash.
|
## Inspiration
Throughout the hackathon, our team was intrigued by staff members sorting through trash bins in order to ensure recycling, compost and landfill waste was separated. After further investigation and a quick staff member interview, we learned about San Francisco's "zero-waste" program. This program requires event hosts within city limits to sort their waste into the three specified categories to reduce landfill use, with penalties or bans for events that fail to meet the waste sorting criteria. The demand for waste sorting has grown so much that private companies, such as Green Mary, have entered the sector.
Our team came up with an innovative solution, Smart Bin. Smart Bin automates waste sorting, reducing manual labor costs and reducing pollution.
## What it does
Smart Bin is a smart, standalone trash disposal system. It scans incoming waste and sorts it as it goes through our system.
## How we built it
Our system is built on top of a Raspberry Pi. The Raspberry Pi hosts a YOLO11 object detection system that scans incoming waste and makes an inference as to which section it should be directed to (landfill, compost, recycling). The output from our model is then sent to an Arduino board that controls a servo motor using PWM. Our motor directs the trash to its corresponding bin, effectively sorting it.
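A minimal sketch of that inference-to-servo handoff, assuming the Ultralytics YOLO11 nano weights and pyserial for the Arduino link; the class-to-bin mapping and serial protocol are illustrative:

```python
import serial
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                     # nano model fits the Pi's compute budget
arduino = serial.Serial("/dev/ttyACM0", 9600)

BIN_OF = {"banana": "compost", "bottle": "recycle", "cup": "landfill"}  # illustrative

def sort_frame(frame):
    results = model(frame)[0]
    for box in results.boxes:
        name = results.names[int(box.cls)]
        target = BIN_OF.get(name, "landfill")
        arduino.write(target.encode() + b"\n")  # Arduino maps this to a servo angle
```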
## Challenges we ran into
One challenge we ran into was the internet connection at the venue: we wanted to download open-source datasets from Roboflow Universe to train our object detection model to segregate different types of waste. We also dealt with hardware difficulties during the event; running the servo motors drew a lot of juice from the batteries, so we had to prepare a lot beforehand. We also modeled designs for our props to be 3D printed during the event, which took a long time to print; we had to optimize different slicing methods to improve from 7 hours of printing to under 5 hours of print time. Finally, we had limitations when it comes to the computing power of the Raspberry Pi, since our detection model and functions all run on this mini-computer.
## Accomplishments that we're proud of
We are extremely proud of what we built. We were able to build Smart Bin as a standalone system, one that is only dependent on a Raspberry Pi, an Arduino Board, and a servo motor. The total cost of our unit comes out to less than $50, making it highly accessible and likely to make an impact on the environment.
## What we learned
Through this project, we gained a deeper understanding of the complexities involved in waste management and automation. We learned about the real-world implications of waste diversion requirements for events and how crucial it is to accurately sort trash. Additionally, we improved our skills in hardware integration, machine learning, and rapid prototyping, which were crucial to successfully building a working product in a limited time frame.
## What's next for SmartBin
Our next steps involve expanding the scalability of the system across urban environments by integrating even more advanced real-time data analytics and optimizing waste collection routes through machine learning algorithms. We aim to enhance the mobile experience by allowing users to locate and interact with SmartBins seamlessly. Additionally, we plan to collaborate with municipalities and large organizations to implement SmartBin on a larger scale, helping cities reduce waste inefficiencies and environmental impact.
|
## Inspiration
Fashion has a waste problem. The fashion industry is responsible for 10% of annual global carbon emissions. The problem of excess product is endemic in the garment industry, now costing the US retail industry as much as $50 billion a year. It has led to constant discounting, dumping of unsold clothes in lower-income countries and, in the worst case, stock destruction, i.e. the burning of perfectly wearable clothes.
So, we came up with F•sync.
F•sync aims to address this fashion waste issue by enabling an inventory management tool that provides an accurate view of a brand’s total inventory and can be easily shared with all retailers, giving them the possibility to request any item they need with just one click. Rather than directly ordering from brands whenever stock is out and forcing the brand to manufacture more products, retailers will be able to request unsold/excess stock from other retailers. So, fewer products are manufactured and potentially wasted as a result.
This project is inspired by API integrations for inventory management across platforms like Shopify and Woocommerce. The solution is simple and efficient for brands working with retailers who also use these platforms and have them integrated amongst their POS (point of sale) systems.
## What it does
F•sync helps brands synchronize inventory in real time across sales channels. In a single place, all their retailers have live access to stock across other retailers, with the possibility to request any item they need with just one click.
Retailers can manage their own store inventory, which is automatically synchronized with a brand's global inventory. Retailers can also view other retailers carrying a specific product, so if they run out of product, they can request it directly from another retailer that carries the same product or order from the brand themselves—all in one place!
Brands can add retailers to have their inventory be tracked in their global inventory, as well as add new lines of products to their inventory which retailers can see and request. They can likewise request product from other retailers.
## How we built it
F•sync is divided into two parts: the client frontend and the server backend.
We used Express for our backend web server, "Socket.io" to communicate between the frontend and the backend, and "MongoDB" for our database. We also incorporated the Twilio API (SendGrid) to send emails personalized through dynamic templates to the various companies that would use our product. For MongoDB, we are using two collections: one for brands and one for retailers. Each brand has basic information, a list of products with their global quantity and name, and a list of associated retailers. Likewise, retailers have basic information, but also have a list of brands they carry and, for each brand, a list of the products they carry. This enabled us to minimize having the same information repeated and made it easy to look up any information about any brand or retailer, such as their inventory.
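For concreteness, a sketch of the shape of those two collections (shown with pymongo even though the actual backend is Node/Express; the field names are assumptions based on the description above):

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["fsync"]

db.brands.insert_one({
    "name": "Acme Apparel",                                    # illustrative brand
    "products": [{"name": "Denim Jacket", "globalQuantity": 120}],
    "retailers": ["Shop A", "Shop B"],
})
db.retailers.insert_one({
    "name": "Shop A",
    "brands": [{
        "brand": "Acme Apparel",
        "products": [{"name": "Denim Jacket", "quantity": 35}],
    }],
})
```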
For the frontend, we used React JS combined with Chakra UI and opened up "socket.io-client" connections to the backend. We chose sockets since, when events happen in the frontend, we can trigger other clients through events to update their view, enabling a real-time reflection of the global inventory of a product.
## Challenges we ran into
This is one of the most significant projects we have ever done due to the amount of coding required. It's also the most complex web app we have ever developed. Normally, designing a large database schema and working with so many operations to modify many different parts of the database and synchronize everything together will take far longer than the scope of a short hackathon—the time and effort to write and test everything that we needed to access the database might be impossible to accomplish in 36 hours. It was a challenge to meet the 36-hour deadline, but doable because we planned everything out function by function, which greatly aided in developing the reusable code that we needed for multiple database operations. As a result, we are amazed by and proud of each of us and the results of this project!
Aside from database access, integration of the frontend and backend was also a very challenging part of the project. With so many planned features and complex interactions in our app, we had a monumental number of events we had to account for and test on the frontend. Using sockets and React's context hooks took some figuring out.
## Accomplishments that we're proud of
We are incredibly proud of one of our front-end members for learning how to use React.JS with Chakra UI in 36 hours. There were hundreds of lines of code, which made it very challenging, and the project was enormous for 36 hours of coding. The backend alone was more than 800 lines of code, and we had numerous fully developed pages in our frontend. Overall, this was the most complex project we've done. One thing that we're especially proud of is that we managed to correctly connect the front end with the backend and enable most of the features we planned on having. We had to plan everything out thoroughly to make it possible to build our app in this period. Everyone here contributed the same amount of energy and time to make this project possible!
## What we learned
In tackling this project, we learned the importance of sitting down, whiteboarding, pondering, and designing the entire project and backend from scratch before starting. For a hackathon, a rough plan is often developed and there's a rush to start coding and figure it out as you go. For the scale of our product, it would be impossible to wing it successfully in a limited amount of time.
And when it was time to start coding, it was really easy to follow the plan laid out. As a result, some of the code had relatively few errors despite the large amount of code we wrote. When any issues arose, the problematic areas we flagged in the planning process helped us debug and fix things faster.
Lastly, it was also a challenge for a member of our team who learned React JS in 36 hours and contributed a fantastic amount of work. Overall, success comes from learning from mistakes and knowing how to adjust them.
## What's next for F•sync
We will try it out with some brands to test the potential integration across different software platforms (e-commerce, POS, and other inventory management solutions). F•sync could become the next-generation inventory management app that provides smart production and inventory allocation analytics by unlocking the capabilities to ship-from-store, in-store track in real-time, and process payments. Helping brands reduce overstocking and overproducing, one step at a time.
|
losing
|
## Inspiration
1 in 5 children in the United States have learning disabilities and attention disorders like dyslexia, ADHD, Dyscalculia, Dysgraphia, Auditory Processing Disorder (APD) and Language Processing Disorder (LPD). Individuals with Dyslexia face significant challenges in educational settings, team presentations and meetings where a lot of the presentation is solely text, numbers and/or graphs. (<https://ldaamerica.org/types-of-learning-disabilities/>)
Some of the specific challenges faced by individuals on the spectrum of learning disabilities are:
* Difficulty reading normal text (dyslexia) and context blindness (figuring out the senses of different words in different scenarios)
* Trouble focusing on dense, purely textual material without visualizations.
This is where readAR comes in. Through an array of services built into AR, our project aims at enhancing the learning and perceptual experience for individuals with these conditions.
## What it does
readAR's main purpose is to make it easier for these individuals to learn, whether that be about what "work" actually refers to in a sentence, or understanding dense physics slides. We do this by using AR to re-render the world in a more dyslexic-friendly font (like Dyslexie), and giving users the option to add machine-learning-inferred context to key phrases in a sentence. This enables us to pull visualizations and images that engage users and enable them to better understand the concept. We also do speech-to-text to transcribe the entire lecture/meeting (which especially helps individuals with auditory processing disabilities) for future reference. This also enables us to do intelligent quiz generation to assess the user's weak points in their understanding of a specific topic, and then give them customized resource recommendations (pulled using Bing's Custom Search API).
## How we built it
We built the AR components and mobile app using Swift and ARKit. We also managed to re-implement a pretty neat paper dealing with word disambiguation (Wiedemann, G., Remus, S., Chawla, A., Biemann, C. 2019) and serve it on the cloud (this is what allows us to provide context-specific definitions!). We utilized Azure's neat APIs, which allowed us to provide near-real-time OCR, intelligent search for resources, and image suggestions. To top it all off, (almost) everything was Dockerized and deployed through Oracle Cloud Container Service to allow for some pretty nice scalability.
## Challenges we ran into
We all went into the hackathon knowing that this idea would be crazy -- a moonshot even. And that's what made it so challenging and fun. Some of the most significant challenges we faced were,
* Finding a good way to render all the additional information without being overwhelming for the user
* Training and serving our own word disambiguation model that was this accurate (there were almost zero resources about this)
* Figuring out how to use AR for the first time
## Accomplishments that we're proud of
One problem that we noticed in existing systems was how poorly they handled WSD (Word-sense disambiguation). We recognized this as an important issue for words in specific fields that have domain-specific meanings. E.g. "work" in the formal physics sense vs the general meaning. This is evident in the examples below:


We tackled this by creating and fine-tuning our own BERT model to be able to accurately disambiguate word meaning, through which we were able to reach 77% accuracy (5% off from the state of the art) in around 1.5 hours on our laptops.
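A hedged sketch of the embedding-similarity idea behind this kind of disambiguation (a simplification of the fine-tuned setup, using an off-the-shelf model and naive whitespace alignment):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    # Mean-pool the contextual vectors of the sub-tokens belonging to `word`
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]
    words = sentence.lower().split()
    picks = [i for i, w in enumerate(enc.word_ids(0))
             if w is not None and words[w].strip(".,?!") == word]
    return hidden[picks].mean(dim=0)

def disambiguate(sentence: str, word: str, senses: dict[str, list[str]]) -> str:
    # Pick the sense whose example-sentence embeddings sit closest to the target
    target = word_embedding(sentence, word)
    scores = {
        name: torch.cosine_similarity(
            target,
            torch.stack([word_embedding(s, word) for s in examples]).mean(dim=0),
            dim=0,
        ).item()
        for name, examples in senses.items()
    }
    return max(scores, key=scores.get)
```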
## What we learned
We learned to use and integrate a pretty large array of different services- ranging from our own ML models for WSD and Question Generation, Azure Cognitive Services, Oracle for deployment, with Swift for our AR app. It was quite the journey putting all these jigsaw pieces together.
## What's next for readAR
We believe we've stumbled across an idea that definitely has significant potential to be taken forward and make a tangible impact. We hope to continue refining the project and hopefully also have it beta-tested by actual users with these disabilities.
|
## Inclusiv.ai: Empowering Accessibility
## Inspiration
In a world where technology is a cornerstone of daily life, it's crucial that digital access is equitable. However, not everyone experiences technology in the same way. We took this challenge to heart and crafted Inclusiv.ai to revolutionize accessibility, ensuring that individuals with disabilities can navigate the web with ease and confidence. Our main priority was reducing the hassle of different extensions and toggles. Inclusiv.ai has one button and one assistant—Inki.
## What it does
Inclusiv provides a simple way for those with disabilities to navigate the web. We focused on a hassle-free, intuitive approach with only one button. Simply begin a conversation with your assistant, "Inki," by clicking on the microphone button and explain whatever issues have been limiting your experience on the web. Inki responds with various modes, such as colorblind, screen enhancer, screen explainer and summarizer, and an ADHD/Dyslexia mode.
Inki is designed to be a dynamic tool, adapting to a variety of needs through its multiple modes:
Colorblind Mode: Tailors the web page colors, ensuring that colorblind users can differentiate between colors that are typically hard to distinguish.
Screen Enhancer: Amplifies and clarifies website content for those with visual impairments, allowing for easier reading and interaction.
Screen Explainer and Summarizer: A mode that not only explains the elements on the screen but also provides concise summaries for quick comprehension, beneficial for users with cognitive disabilities.
ADHD/Dyslexia Mode: Alters the web page layout and typography to minimize distractions and optimize for readability, assisting users with attention deficits or dyslexia.
Behind these user-facing features, Inclusiv utilizes a combination of large language models and a specific focus on Monster API. This approach has enabled us to create features that are both technically sophisticated and varied, ensuring a broad range of user needs are met.
## How we built it
Inclusiv was meticulously crafted by a synergistic team of two backend developers and one frontend designer, all united by the vision of making web navigation universally accessible. The project was born from a user-centric approach, focusing on the unique challenges faced by individuals with disabilities. Our team's design philosophy hinged on simplicity, leading to the creation of a one-button interface that calls upon Inki, an AI assistant, to activate different accessibility modes including colorblind, screen enhancer, screen explainer, and summarizer, and an ADHD/Dyslexia mode. By leveraging large language models for natural language processing and integrating Monster API's robust algorithms, Inclusiv transcends conventional assistive technologies. This blend of intuitive design and advanced AI capabilities was continuously refined through iterative user testing and feedback, ensuring that the product genuinely resonates with the needs of its users. Inclusiv’s launch is not an end, but a beginning to an ongoing journey of innovation and improvement, with a commitment to evolving and expanding its features to foster an inclusive web experience for all.
## Challenges we ran into
Where to start?
Front-end might be harder than the back end! We spent hours fiddling with the optimal UI setup, trying to figure out how best to simplify the experience for a user of any skill level. This is not as simple as it looks, and we spent two hours trying to add a power button. Additionally, we ended up having to pivot our AI model numerous times and switched our approach throughout the project.
## Accomplishments that we're proud of
We ended up changing our AI agent multiple times throughout the project to deliver the optimal product. We're really satisfied with our ability to adapt and push ourselves out of our comfort zones throughout the 36 hours. We developed numerous technically difficult and varied features that we believe will truly aid those with any form of disability.
## What we learned
We learned tons about building a project from scratch, implementing LLMs, and creating an intuitive and appealing front-end experience.
## What's next for Inclusiv.ai
We want to launch on the Chrome web store!
|
## Inspiration
Everyone learns in a different way. Whether it be from watching that YouTube tutorial series or scouring the textbook, each person responds to and processes knowledge very differently. We hoped to identify students’ learning styles and tailor educational material to the learner for two main reasons: one, so that students can learn more efficiently, and two, so that educators may understand a student’s style and use it to motivate or teach a concept to a student more effectively.
## What it does
EduWave takes live feedback from the Muse Headband while a person is undergoing a learning process using visual, auditory, or haptic educational materials, and it recognizes when the brain is more responsive to a certain method of learning than others. Using this data, we then create a learning profile of the user.
With this learning profile, EduWave tailors educational material to the user by taking any topic that the user wants to learn and finding resources that apply to the type of learner they are. For instance, if the user is a CS major learning different types of elementary sorts and wants to learn specifically how insertion sort works, and if EduWave determines that the user is a visual learner, EduWave will output resources and lesson plans that teach insertion sort with visual aids (e.g. with diagrams and animations).
## How we built it
We used the Muse API and Muse Direct to obtain data from the user while they were solving the initial assessment tests, and checked which method the brain was more responsive to, using data analysis in Python. We added an extra layer to this by using the xLabs Gaze API, which tracked eye movements and contributed to the analysis. We then sent this data back with a percentage determination of a learning profile, parsed a lesson plan on a certain topic, and outputted the elements based on the percentage split of learning type.
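A hedged sketch of one plausible version of that analysis: score engagement per test segment from EEG band power and normalize into the percentage split (the beta/alpha ratio and sampling rate are assumptions, not our exact method):

```python
import numpy as np
from scipy.signal import welch

FS = 256  # Muse EEG sampling rate in Hz

def band_power(eeg: np.ndarray, low: float, high: float) -> float:
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

def engagement(eeg: np.ndarray) -> float:
    # Beta/alpha ratio is a common rough proxy for attentive engagement
    return band_power(eeg, 13, 30) / band_power(eeg, 8, 13)

def learning_profile(segments: dict[str, np.ndarray]) -> dict[str, float]:
    # segments: e.g. {"visual": ..., "auditory": ..., "haptic": ...}
    scores = {mode: engagement(sig) for mode, sig in segments.items()}
    total = sum(scores.values())
    return {mode: s / total for mode, s in scores.items()}
```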
## Challenges we ran into
The Muse Headband was somewhat difficult to use, and we had to go through a lot of testing and make sure that the data we were using was accurate. We also ran into some roadblocks proving the correlation between the data and specific learning types. Besides this, we also had to do deep research on what brain waves are most engaged during learning and why, and then subsequently determine a learning profile. Another significant challenge was the creation of lesson plans as we not only had to keep in mind the type of learner but also manage the content itself so that it could be presented in a specific way.
## Accomplishments that we're proud of
We are most proud of learning how to use the Muse data and creating a custom API that was able to show the data for analysis.
## What we learned
How to use the Muse API, Standard Library, and Muse Direct; how brainwaves work; how people learn; and how to synthesize unrelated data.
## What's next for EduWave
Our vision for EduWave is to improve it over time. By determining one's most preferred way of learning, we hope to devise custom lesson plans for the user for any topics that they wish to learn – that is, we want a person to be able to have resources for whatever they want to learn made exclusively for them. In addition, we hope to use EduWave to benefit educators, as they can use the data to better understand their students' learning styles.
|
partial
|
## Inspiration
I got this idea from Hurricane Milton, which is currently causing devastation across Florida.
The inspiration behind *Autonomous AI Society* stems from the need for faster, more efficient, and autonomous systems that can make critical decisions during disaster situations. With multiple sponsors like Fetch.ai, Groq, Deepgram, Hyperbolic, and Vapi providing powerful tools, I envisioned an intelligent system of AI agents capable of handling a disaster response chain—from analyzing distress calls to dispatching drones and contacting rescue teams. The goal was to build an AI-driven solution that can streamline emergency responses, save lives, and minimize risks.
## What it does
*Autonomous AI Society* is a fully autonomous multi-agent system that performs disaster response tasks in the following workflow:
1. **Distress Call Analysis**: The system first analyzes distress calls using Deepgram for speech-to-text and Hume AI to score distress levels. Based on the analysis, the agent identifies the most urgent calls and the city.
2. **Drone Dispatch**: The distress analyzer agent communicates with the drone agent (built using Fetch.ai) to dispatch drones to specific locations, assisting with flood and rescue operations.
3. **Human Detection**: Drones capture aerial images, which are analyzed by the human detection agent using Hyperbolic's LLaMA Vision model to detect humans in distress. The agent provides a description and coordinates.
4. **Priority-Based Action**: The drone results are displayed on a dashboard, ranked based on priority using Groq. Higher priority areas receive faster dispatches, and this is determined dynamically.
5. **Rescue Call**: The final agent, built using Vapi, places an emergency call to the rescue team. It uses instructions generated by Hyperbolic’s text model to give precise directions based on the detected individuals and their location.
## How I built it
The system consists of five agents, all built using **Fetch.ai**’s framework, allowing them to interact autonomously and make real-time decisions:
* **Request-sender agent** sends the initial requests.
* **Distress analyzer agent** uses **Hume AI** to analyze calls and **Groq** to generate dramatic messages.
* **Drone agent** dispatches drones to designated areas based on the distress score.
* **Human detection agent** uses **Hyperbolic’s LLaMA Vision** to process images and detect humans in danger.
* **Call rescue agent** sends audio instructions using **Deepgram**’s TTS and **Vapi** for automated phone calls.
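As a flavor of how two of these agents talk to each other, here is a minimal sketch in the style of Fetch.ai's uagents library; the message model, seed, and dispatch threshold are hypothetical stand-ins rather than the project's exact code.

```
from uagents import Agent, Context, Model

class DistressReport(Model):
    city: str
    score: float  # distress score from the analyzer agent

# Hypothetical seed phrase; the agent derives its address from it
drone_agent = Agent(name="drone_agent", seed="drone-agent-demo-seed")

@drone_agent.on_message(model=DistressReport)
async def handle_report(ctx: Context, sender: str, msg: DistressReport):
    # Dispatch a drone once the distress score crosses an assumed threshold
    if msg.score > 0.7:
        ctx.logger.info(f"Dispatching drone to {msg.city}")

if __name__ == "__main__":
    drone_agent.run()
```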
## Challenges I ran into
* **Simulating drone movement on a Florida map**: The lat\_lon\_to\_pixel function converts latitude and longitude coordinates to pixel positions on the screen. The drone starts at the center of Florida. Its movement is calculated using trigonometry: the angle to the target city is computed with math.atan2, and the drone moves toward the target using sin and cos. This allows placing cities and the drone accurately on the map (see the sketch after this list).
* **Calibrating the map to the right coordinates**: I had to manually experiment with increasing and decreasing the coordinates to fit them at the right spots on the Florida map.
* **Coordinating AI agents**: Getting agents to communicate effectively while working autonomously was a challenge.
* **Handling dynamic priorities**: Ensuring real-time analysis and updating the priority of drone dispatch based on Groq's risk assessment was tricky.
* **Integration of multiple APIs**: Each sponsor's tools had specific nuances, and integrating all of them smoothly, especially with Fetch.ai, required careful handling.
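Here is a small sketch of the map math described above; the Florida bounding box, screen size, and speed are assumed values for illustration.

```
import math

# Assumed Florida bounding box and screen size (illustrative values)
LAT_MAX, LAT_MIN = 31.0, 24.5
LON_MIN, LON_MAX = -87.6, -80.0
WIDTH, HEIGHT = 800, 600

def lat_lon_to_pixel(lat, lon):
    """Map geographic coordinates onto screen pixels (y grows downward)."""
    x = (lon - LON_MIN) / (LON_MAX - LON_MIN) * WIDTH
    y = (LAT_MAX - lat) / (LAT_MAX - LAT_MIN) * HEIGHT
    return x, y

def step_toward(drone, target, speed=2.0):
    """Advance the drone one frame toward the target city."""
    angle = math.atan2(target[1] - drone[1], target[0] - drone[0])
    return drone[0] + speed * math.cos(angle), drone[1] + speed * math.sin(angle)

drone = lat_lon_to_pixel(27.75, -83.8)   # roughly the center of Florida
tampa = lat_lon_to_pixel(27.95, -82.46)
drone = step_toward(drone, tampa)
```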
## Accomplishments that I am proud of
* Successfully built an end-to-end autonomous system where AI agents can make intelligent decisions during a disaster, from distress call analysis to rescue actions.
* Integrated cutting-edge technologies like **Fetch.ai**, **Groq**, **Hyperbolic**, **Deepgram**, and **Vapi** in a single project to create a highly functional and real-time response system.
## What I learned
* **AI for disaster response**: Building systems that leverage multimodal AI agents can significantly improve response times and decision-making in life-critical scenarios.
* **Cross-platform integration**: I learned how to seamlessly integrate various tools, from vision AI to TTS to drone dispatch, using **Fetch.ai** and sponsor technologies.
* **Working with real-time data**: Developing an autonomous system that processes data in real-time provided insights into handling complex workflows.
## What's next for Autonomous AI Society
* **Scaling to more disasters**: Expanding the system to handle other types of natural disasters like wildfires or earthquakes.
* **Edge deployment**: Enabling drones and agents to run on the edge to reduce response times further.
* **Improved human detection**: Enhancing human detection with more precise models to handle low-light or difficult visual conditions.
* **Expanded rescue communication**: Integrating real-time communication with the victims themselves using Deepgram’s speech technology.
|
## Inspiration
In 2010, Haiti faced a magnitude 7.0 earthquake, which remains to this day one of the most devastating natural disasters of our century. An estimated 220,000 individuals lost their lives, with an additional 1.5 million losing their homes. At the center of the tragedy were poor building construction practices and materials. More than 208,000 buildings were damaged, half of which were fully destroyed, severely crippling the nation's infrastructure. Two questions stuck out to us:
How could 3D models have been used to gauge the degree of damage, stability, and safety of infrastructure?
If a natural disaster occurred today, how could an improved understanding of building layouts help first responders provide aid more quickly?
One of our main goals was to utilize state of the art technology with a potential for powerful impact. Drawing from some of our members’ past work with Neural Radiance Fields (NeRFs) and other generative CV techniques, we wanted to help solve these questions by fusing drones and AI.
## What it does
Introducing SkySplat, an AI-powered drone software to perform automated 3D captures of any building or infrastructure asset. Here are some sample applications:
Helping first responders deeply understand the layouts of crippled buildings before performing emergency response.
Empowering governments to maintain detailed 3D models of infrastructure without the need of specialized mapping equipment.
Enabling technicians to perform inspections on structures that would be impossible to reach without extensive equipment and risk of personal injury.
Allowing property owners to provide 3D models of properties for customers to explore at their own convenience.
Currently, we are focused on the Parrot drone suite. Our software seamlessly connects to your drone based on your IP. In our web app, you can decide when to begin and complete your data recording session. Immediately upon finishing the data collection, a video is automatically sent to our Google Cloud backend for preprocessing. The video is split into a set of frames, optimizing for both efficiency and accuracy, which is then passed into our COLMAP pipeline. Once COLMAP completes, our software automatically trains the Gaussian Splatting models and uploads the results into a Google Cloud bucket, which are then forwarded to our frontend. Finally, a user can effortlessly view their model in high resolution 3D from our webapp.
## How we built it
For our frontend and splatting visualization, we used Vite, Typescript, and three.js (a wrapper on top of WebGL). Additionally, we used CSS for styling.
As some context, the Gaussian Splatting models cannot directly take in video streams. We have to preprocess them using COLMAP, a software that intelligently figures out the relative position of each frame in 3D space (known as a structure from motion point cloud computation). After optimizing the COLMAP parameters, we moved onto the training of our Gaussian Splatting models.
Due to the intensive nature of Gaussian Splatting models, we set up virtual machines on Google Cloud with CUDA-accelerated runtimes. Due to our lack of budget, we were highly limited in the number of and models of GPUs we could use. The custom models themselves are based in PyTorch, derived from the paper 3D Gaussian Splatting for Real-Time Radiance Field Rendering. Each scene has its own model, ensuring a highly specialized understanding of spatial relations.
Finally, we used third-party open-source libraries (shout-out to gsplat.js & HuggingFace!) to efficiently render our .ply point-clouds into WebGL textures (3D -> 2D projection) to visualize on our webapp without exorbitant amounts of compute.
Across the stack, we used Google Cloud buckets to store models, renders, and video files.
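For context, the preprocessing stage boils down to something like the sketch below, assuming the ffmpeg and colmap command-line tools are on the path; the frame rate and paths are illustrative, not our tuned settings.

```
import subprocess
from pathlib import Path

def preprocess(video, workspace, fps=2):
    """Split a capture video into frames, then run COLMAP's SfM pipeline."""
    images = Path(workspace) / "images"
    images.mkdir(parents=True, exist_ok=True)
    # Extract frames at a fixed rate -- a trade-off between COLMAP
    # accuracy (more frames) and processing time (fewer frames)
    subprocess.run(["ffmpeg", "-i", video, "-vf", f"fps={fps}",
                    str(images / "frame_%04d.png")], check=True)
    # Structure-from-motion: recover each frame's pose in 3D space
    subprocess.run(["colmap", "automatic_reconstructor",
                    "--workspace_path", workspace,
                    "--image_path", str(images)], check=True)
```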
## Challenges we ran into
Difficulties we ran into were primarily related to the usage of the Olympe SDK for the simulation and drone movement and SuGaR (Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering) for the fine tuning of the mesh.
While using the Olympe SDK, it was difficult to build our complete pipeline, which started from a video taken of the drone's movements in Olympe (this also required using Ubuntu, which came with its own limitations), since Olympe had very specific requirements around threading to produce a working drone simulation with movement.
For SuGaR, we dealt with long training times and diminishing marginal returns on the improvement in visualization of point clouds. For example, it took multiple hours to produce a more refined mesh and optimize given a 40-second video as input, and the produced output was only slightly better than what we had produced without using SuGaR.
Fundamentally, beyond these two specific cases, it was a difficult learning curve to understand how Gaussian Splatting works mathematically, and why it can be considered today’s cutting-edge software for 3D-Renders built from video inputs. Gaussian Splatting as a concept becomes very math-heavy, but we were able to eventually identify the most useful distinguishing features and the salient similarities it shares with other more familiar tools we’ve worked with before, such as NeRFs (Neural Radiance Fields).
## Accomplishments that we're proud of
We're very proud of being able to utilize a technique that's very cutting-edge and mostly only known to research communities in an applicable & impactful setting such as drone view reconstruction. We were also very proud to be able to do so with minimal compute resources, given that Gaussian splatting models are known to need very heavy amounts of GPU acceleration for good results (the entirety of our 3D reconstructions was done using cloud T4 GPUs on free Google Cloud credits). To get good models from this approach, we had to get a better understanding of hyperparameter tuning & also what entails good data collection in terms of video-taking.
Given that our project also combined many diverse moving parts (from GCS buckets to WebGL frontend), we were very proud that we were able to string everything together and get a working pipeline in such a short time!
## What we learned
We learned a lot about modern computer vision research through looking at models like NERFs and Gaussian splatting (as well as optimizations to Gaussian splatting such as SuGaR). We learned a lot about computer vision algorithms such as structure from motion and multiview geometry. On the other hand, we learned quite a bit about the software that goes behind operating drones through Parrot’s SDK, Sphinx, and Olympe. All in all, we also learned a lot about software development and learning how to merge many different frameworks into one application!
## What's next for SkySplat
We hope to get a better understanding of how parameters can be tuned based on what kind of video-type we’re trying to map, since the optimization of parameters (such as our loss function, number of iterations, and frame rate decomposition from videos into image sets) makes a large difference in the quality of our renders. Additionally, we hope to look more into how parallel computing can help us take advantage of the fine-surface meshes that SuGaR can produce.
Pictures don’t tell the full story. Our 3D models are lightweight, explorable, and powerful. Whenever you want to freeze a moment in time, SkySplat has you covered!
|
## Inspiration
Our inspiration for the disaster management project came from living in the Bay Area, where earthquakes and wildfires are constant threats. Last semester, we experienced a 5.1 magnitude earthquake during class, which left us feeling vulnerable and unprepared. This incident made us realize the lack of a comprehensive disaster management plan for our school and community. We decided to take action and develop a project on disaster management to better prepare ourselves for future disasters.
## What it does
Our application serves as a valuable tool to help manage chaos during disasters such as earthquakes and fires. With features such as family member search, location sharing, an AI chatbot for first aid, and the ability to donate to affected individuals and communities, our app can be a lifeline for those affected by a crisis.
## How we built it
Our disaster management application was built with Flutter for the Android UI, Dialogflow for the AI chat assistant, and Firebase for the database. The image face similarity API was implemented using OpenCV in Django REST.
## Challenges we ran into
We are proud of the fact that, as first-time participants in a hackathon, we were able to learn and implement a range of new technologies within a 36-hour time frame.
## Accomplishments that we're proud of
* Our disaster management application has a valuable feature that allows users to search for their family members during a crisis. By using an image similarity algorithm API (OpenCV), users can enter the name of a family member and get information about their recent location. This helps to ensure the safety of loved ones during a disaster, and can help identify people who are injured or unconscious in hospitals. The image is uploaded to Firebase, and the algorithm searches the entire database for a match. We're proud of this feature, and will continue to refine it and add new technologies to the application.
## What we learned
We were not able to implement the live location sharing feature due to time constraints, but we hope to add it in the future as we believe it could be valuable in emergency situations.
## What's next for -
We plan to improve our AI chatbot, implement an adaptive UI for responders, and add text alerts to the application in the future.
|
partial
|
## Inspiration
Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions.
While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care.
## What it does
Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user’s medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity.
## How we built it
This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards.
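As a sketch of how the dosage-limit check against the Azure SQL backend might look (using pyodbc; the connection string, table name, and limits are hypothetical placeholders):

```
import pyodbc
from datetime import datetime, timedelta

# Hypothetical connection string and schema
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=example.database.windows.net;DATABASE=medley;"
                      "UID=user;PWD=password")

def may_dispense(card_id, max_doses, window_hours=24):
    """True if the RFID card holder is still under their dosage limit."""
    since = datetime.utcnow() - timedelta(hours=window_hours)
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM doses WHERE card_id = ? AND taken_at > ?",
                card_id, since)
    return cur.fetchone()[0] < max_doses
```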
## Challenges we ran into
Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation associated with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours were spent just trying to figure out the correct audio stream encoding.
Of course, it wouldn’t be a hackathon project without overscoping and then panic as the deadline draws nearer, but because our project uses mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself.
## Accomplishments that we're proud of
Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief.
## What we learned
Among many things:
The complexity and difficulty of implementing mechanical systems
How to adjust mechatronics design parameters
Usage of Azure SQL and WordPress for dynamic user pages
Use of the Houndify API and custom commands
Raspberry Pi audio streams
## What's next for Medley
One feature we would have liked more time to implement is better database reporting and analytics. We envision Medley’s database as a patient- and doctor-usable extension of the existing state PDMPs, and would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem.
|
## The problem it solves :
During these tough times when humanity is struggling to survive, **it is essential to maintain social distancing and proper hygiene.** As big crowds now approach **vaccination centres**, it is obvious that there will be overcrowding.
This project implements virtual queues which ensure social distancing and allow people to stand apart instead of crowding near the counter or the reception site, an evolving necessity in COVID settings!
**“ With Quelix, you can just scan the QR code, enter the virtual world of queues and wait for your turn to arrive. Timely notifications will keep the user updated about his position in the Queue.”**
## Key-Features
* **Just scan the QR code!**
* **Enter the virtual world of queues and wait for your turn to arrive.**
* **Timely notifications/sound alerts will keep the user updated about his position/Time Left in the Queue.**
* **Automated Check-in Authentication System for Following the Queue.**
* **Admin Can Pause the Queue.**
* **Admins now have the power to remove anyone from the queue**
* Reduces Crowding to Great Extent.
* Efficient Operation with Minimum Cost/No additional hardware Required
* Completely Contactless
## Challenges we ran into :
* Simultaneous Synchronisation of admin & queue members with instant Updates.
* Implementing Queue Data structure in MongoDB
* Building an OTP API from scratch using Flask.
```
while quelix.on:
    if covid_cases.slope < 0:
        print(True)
# >>> True
```
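For the OTP API challenge above, the core of a from-scratch Flask implementation looks roughly like this sketch; the endpoints, in-memory storage, and SMS delivery are simplified placeholders.

```
import random
from flask import Flask, jsonify

app = Flask(__name__)
otps = {}  # phone number -> one-time password (in-memory for the sketch)

@app.route("/otp/<phone>", methods=["POST"])
def send_otp(phone):
    otps[phone] = f"{random.randint(0, 999999):06d}"
    # A real deployment would deliver the code by SMS, not store it silently
    return jsonify({"sent": True})

@app.route("/otp/<phone>/<code>", methods=["POST"])
def verify_otp(phone, code):
    return jsonify({"valid": otps.pop(phone, None) == code})
```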
[Github repo](https://github.com/Dart9000/Quelix2.0)
[OTP-API-repo](https://github.com/Dart9000/OTP-flask-API)
[Deployment](https://quelix.herokuapp.com/)
|
## Inspiration
The US and the broader continent is in the midst of a drug crisis affecting a large fraction of the population: the opioid crisis. The direct spark that led us to develop Core was the recent publication of a study that used a model to predict the effects of different intervention methods on reducing overdose deaths and frequency of individuals developing dependencies on opiates. The [model's predictions](https://med.stanford.edu/news/all-news/2018/08/researchers-model-could-help-stem-opioid-crisis.html) were pretty bleak in the short term.
## What it does
Core seeks to provide individuals looking to overcome opiate and other drug addictions with strong and supportive connections with volunteer mentors. At the same time, it fills another need -- many college students, retirees, and moderately unfulfilled professionals are looking for a way to help bring about positive change without wasting time on transportation or prep. Core brings a human-centered, meaningful volunteer experience opportunity directly to people in the form of an app.
## How we built it
We used Flask for the backend and jQuery for the frontend. We incorporated various APIs, such as the IBM Watson Personality Insights API, the Facebook API, and the GIPHY API.
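A sketch of the personality-profiling call, using the ibm-watson Python SDK (IBM has since deprecated the Personality Insights service, so this illustrates the historical API; credentials are placeholders, and the service expects a reasonably long text sample):

```
from ibm_watson import PersonalityInsightsV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and version date
service = PersonalityInsightsV3(version="2017-10-13",
                                authenticator=IAMAuthenticator("API_KEY"))

def top_traits(text, n=3):
    """Return the user's n strongest Big Five traits by percentile."""
    profile = service.profile(text, accept="application/json",
                              content_type="text/plain").get_result()
    traits = sorted(profile["personality"],
                    key=lambda t: t["percentile"], reverse=True)
    return [(t["name"], round(t["percentile"], 2)) for t in traits[:n]]
```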
## Challenges we ran into
It took us a while to get the Facebook OAuth API working correctly. We initially worked separately on different parts of the app. It was an interesting experience to stitch together all of our individual components into a streamlined and cohesive app.
## Accomplishments that we're proud of
We're proud of Core!
## What we learned
We learned a lot about all the natural language processing APIs that IBM Watson has to offer and the workings of the Facebook OAuth API.
## What's next for Core
Lots of features are in the works including a rating system to enable mentees to rate mentors to ensure productive relationships and more detailed chat pairing based on location information and desired level of user engagement.
|
winning
|
## Inspiration
It is difficult for university students to find the time and money to go to the gym. Although some YouTube videos teach exercises that can be done at home without weights, it's not always easy to self-correct without a gym buddy.
## What it does
When a user works out at home, they can place their laptop camera and display at the front of their space. They carry an Arduino microcontroller in their pocket and tape a haptic motor to their wrist or side. They select from a list of exercises--so far we have implemented tricep pushups and squats--and computer vision is used to detect form errors. The haptic motor alerts the user to form errors, so they know to look at the screen for feedback.
These are the implemented feedback items:
TRICEP PUSHUPS:
* Move wrists closer together or farther apart such that they are under the shoulders
* Keep elbows tucked in through the pushup
SQUATS:
* Go lower
* Keep knees directly above ankles, not too far forward
* Sit more upright with a straight back
## How we built it
We used a pretrained implementation of CMU Posenet in Tensorflow ([link](https://github.com/ildoonet/tf-pose-estimation)) for pose estimation. We analyzed coordinates of joints in the image using our own Python functions based on expert knowledge of workout form.
The vision processing feedback outputs from the laptop are interfaced to an Arduino Uno over Bluetooth connection, and the Uno controls a Grove haptic motor.
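As an example of the kind of joint-coordinate analysis we wrote, here is a sketch of the geometry behind a squat-depth check; the keypoint inputs and the angle threshold are illustrative assumptions.

```
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c (e.g. hip-knee-ankle)."""
    ba = np.asarray(a, float) - np.asarray(b, float)
    bc = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def squat_feedback(hip, knee, ankle, threshold=100.0):
    """Flag shallow squats when the knee angle stays above an assumed threshold."""
    return "go lower" if joint_angle(hip, knee, ankle) > threshold else "depth ok"
```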
## Challenges I ran into
* Diagnosing physical hardware problems: We spent a lot of time debugging a Raspberry Pi with a faulty SD card. We learned that it's important to debug from the hardware level up.
* Finding usable TensorFlow models that fit well to our mission. We got a lot better at filtering usable sources and setting up command line environments.
* Creating a durable and wearable design of the fitness buddy. We experienced issues with haptic motor connector wires breaking as we exercised. We learned the importance of component research in planning physical designs.
## Accomplishments that I'm proud of
* Integrating Python and Arduino using a Bluetooth module to achieve haptic feedback.
* Labelling joints and poses for analysis through appropriate machine learning models.
* Adding analysis to machine learning outputs to make them useful in a real life context.
* Learning to use different languages and products (including Raspberry Pi) to perform specific technical tasks.
## What I learned
* How to use many different hardware products and techniques, including a bluetooth module, haptic motors and controllers, and a Raspberry Pi (which we did not use in our final design). We also improved our Arduino and circuit skills.
* The efficiency and output derivation of many different machine learning models.
* The importance of prototyping physical systems that people will interact with and could break.
* A greater sense of focus towards better wellbeing of individual people through exercise.
## What's next for Fitness Buddy: Haptic Feedback on Exercise Form:
* Incorporate and add software for a variety of different exercises.
* Migrate to Raspberry Pi for a more portable experience.
* Integrate with Google Home for more seamless IoT ("Ok Google, start my pushup routine!").
* Add goal setting and facial recognition for different household users with different goals.
|
## Inspiration
After learning that deadlifts are of the most dangerous exercises responsible for serious injury, we wanted to create a product that could help track form of deadlifts. We wanted to make a wearable product that could notify the wearer when their angular velocity was changing too quickly, to reduce the risk of injury when deadlifting/training.
## What it does
The user will wear the device on their chest because that's the part of the body that should ideally be in the same position at all times. In this position, the Arduino will be able to monitor if the back is bent at a different angle and will tell the user that they're out of form. Our app currently tracks the angular velocity and acceleration of the body and will report to the user if the body is shifting from optimal position.
A phone is connected to the Arduino via Bluetooth and will take in the data to present to the user.
The user can also open the app with their voice, and it will then begin tracking their movements.
## How we built it
Using Google Cloud API, Evothings, Arduino 101, Android Studio!
**Communication:**
Using the Bluetooth low energy (BLE) communication, gyroscope, and accelerometer built into the Arduino 101, we had an IoT solution with peripherals we needed for our hack.
**Frontend:**
We used Evothings, a mobile app development environment made specifically for IoT applications. This allowed us to program in Javacript and html for our layout and graphing of data.
**Backend:**
With Android studio and Google Cloud API, we created an extension of our main app, so that users of FLEX could start the main program with their voice. This would allow the weightlifter to put the phone in front of them and monitor progress and notifications without stopping their set.
## Challenges we ran into
The main challenge we faced in this project was learning how to use the equipment. Evothings, and the Arduino 101 have very limited documentation which made debugging and programming much more difficult.
## Accomplishments that we're proud of
It's most of our first times working with the Arduino 101, Evothings, and gyros/accelerometers, so we're glad to have learned something and explored what was available.
## What we learned
1. How accelerometers, gyroscopes work!
2. Evothings is very useful for quick testing of mobile apps
3. Javascript isn't
4. smoothie.js is useful for graphs
## What's next for FLEX
We want to incorporate the fitbit in our design to measure heart rate, provide haptic feedback, and be able to create an app that can get all the fitness data from the regular fitbit applications.
By adding in heart rate, we'll be able to track how intense the workout is based on velocity of movement and heart rate and provide warnings in both the app, and on the fitbit.
For motivation, a feature that could be implemented would be to be able to compare results to friends and track friendly competition.
<https://www.hackster.io/137915/flex-dd258d>
|
## Inspiration
Garbage in bins around cities are constantly overflowing. Our goal was to create a system that better allocates time and resources to help prevent this problem, while also positively impacting the environment.
## What it does
Urbins provides a live monitoring web application that displays the live capacity of both garbage and recycling compartments using ultrasonic sensors. This functionality can be seen inside the prototype garbage bin. The bin uses a cell phone camera to send an image to the custom learning model built with IBM Watson. The results from the Watson model are used to classify each object placed in the bin so that it can be sorted into either garbage or recycling. Based on the classification, the Android application controls the V-shaped platform using a servo motor to tilt the platform and drop the item into its correct bin. Once a garbage/recycling bin nears full capacity, STDlib is used to notify city workers via SMS that bins at a given address are full.
Machine learning is applied when an object cannot be classified. When this happens, the image of the object is sent via STDlib to Slack. Along with the image, response buttons are displayed in Slack, which allows a city worker to manually classify the item. Once a selection is made, the new classification is used to further train the Watson model. This updated model is then used by all the connected smart garbage bins, allowing for all the bins to learn.
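The classification step amounts to a call like the sketch below, using the ibm-watson SDK's Visual Recognition client (the service has since been discontinued by IBM, so this illustrates the historical API; the key and custom classifier id are placeholders).

```
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and custom classifier id
service = VisualRecognitionV3(version="2018-03-19",
                              authenticator=IAMAuthenticator("API_KEY"))

def classify_item(image_path):
    """Return the highest-scoring class ('garbage' or 'recycling') for an item."""
    with open(image_path, "rb") as f:
        result = service.classify(images_file=f,
                                  classifier_ids=["garbage_vs_recycling"]).get_result()
    classes = result["images"][0]["classifiers"][0]["classes"]
    return max(classes, key=lambda c: c["score"])["class"]
```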
## Challenges we ran into
Integrating all components
Learning to use IBM Watson
Providing the set of images for IBM Watson (Needed to be a zip file containing at least 10 photos to update the model)
## Accomplishments that we're proud of
Integrating all the components.
Getting IBM Watson working
Getting STDlib working
Training IBM Watson using STDLib
## What we learned
How to use IBM Watson
How to effectively plan a project
Designing an effective architecture
How to use STDlib
## What's next for Urbins
Accounts
Algorithm for optimal route for shift
Dashboard with map areas, floor plans, housing plans, and event maps
Heat map on google maps
Bar chart of stats over past 6 months (which bin was the most frequently filled?)
Product Information and Brand data
|
losing
|
We built a platform that serves as a place to have high-fidelity discussions over the internet. Users can have access to topics that are of high importance to those in their local community. The aim of Full Circle is to move away from highly-polarized, sensationalized conversations and toward issues that affect the greatest number of people in the most important ways.
|
## Inspiration
Today we're living in times wherein we have unparalleled access to information and its subsequent dissemination, which vastly increases the potential for rapid social change.
However, modern political interaction is often characterized by plain indifference, a general lack of interest in sharing information across varying viewpoints, and an indulgence in performative allyship that leads to no tangible impact.
Additionally, social media is wildly notorious for its unethical use of data and centralized algorithms that censor or suppress certain voices while promoting content which its beneficiaries can profit off of. Activism in countries witnessing a rise in polarised governments has also been met with strict action being taken against the ones who dare question the status quo and hence it becomes imperative for platforms to avoid data breaches at all costs.
>
> With Reform the Norm we advocate to break the norm and then reform it, one step at a time and we do so by incentivizing civic engagement on our decentralized open-for-all platform.
>
>
>
## What it does
Reform the Norm, built on the Ethereum Blockchain provides a decentralized platform for our users to Educate, Listen, Share and Act; covering social movements across issues and across countries.
1) Decentralizing the network gives people back their power by ensuring that there is no censorship or partiality towards a certain kind of content and the user information remains safe from any kind of breach.
2) We provide incentives to our users who actively raise their voices and contribute to the betterment of society by rewarding them with RTN tokens (ERC-20).
Protocol followed ~
* 100 RTN tokens awarded to the user on registering
* 5 RTN tokens awarded to the user for creating a post and 1 additional RTN token awarded per issue tackled in the post
* Users can "tip" other users' posts and the creator of the post would be awarded 2 RTN tokens per tip
* Users can "share" other users' posts and both the creator and the sharer would be awarded 1 RTN token each
3) Users are given the provision of adding trigger warning tags for posts that have a high number of trigger words. We do so to forewarn other users who might have Post Traumatic Stress Disorder or other anxiety disorders and would prefer to avoid specific content that might trigger intense physiological and psychological symptoms.
4) Lastly users can specify which issues their post tackles and exactly where in the world is it affecting people. We sort information by "Specific Issues" and "Places" and provide links to our users that would allow them to learn more about the topic, play their part, and donate so that tangible impact occurs along with just social awareness and change.
## How we built it
* We started off with designing the prototype with Figma, then built the front-end with HTML, CSS & Javascript.
* The RTN tokens were made using the ERC-20 standard in solidity.
* The decentralized back-end was built into the Ethereum Blockchain with Solidity smart contracts, image hosting in IPFS, and local development and testing with Truffle and Ganache.
* We used the Alchemy API for deploying to the Ropsten TestNet and transaction debugging, and Web3 for integrating our solidity smart contracts with the front-end.
* We used Python for annotating trigger warnings and also implemented the code in Javascript.
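The trigger-warning annotation reduces to a lexicon check along these lines; the word list and threshold are illustrative placeholders, since a real lexicon would need careful curation.

```
# Illustrative trigger lexicon -- a production list would be curated
TRIGGER_WORDS = {"violence", "assault", "abuse", "suicide", "riot"}

def trigger_tags(post_text, threshold=2):
    """Tag a post when it contains a high number of trigger words."""
    words = (w.strip(".,!?;:") for w in post_text.lower().split())
    hits = sum(1 for w in words if w in TRIGGER_WORDS)
    return ["trigger-warning"] if hits >= threshold else []
```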
## Challenges we ran into
Blockchain is such a fast-moving technology that most of the resources out there are deprecated to some extent! This makes it a hard technology to learn in such a reduced time frame. We sometimes felt we spent more hours trying to debug a single transaction than in designing and building the rest of the project. We also really wanted to add the incentive provision with our own cryptocurrency into our main smart contract and then deploy our application, but we ran out of time!
## Accomplishments that we're proud of
* Formulating a proof-of-concept for something like Reform the Norm that has the potential to bring about rapid, tangible change in society.
* Learning an entirely new technology, Blockchain, and figuring out a tech stack including new languages, frameworks, and APIs for our project.
* Collaborating while working on three wildly different timezones and coming from wildly different backgrounds, both cultural and technical!
## What we learned
We learned the intricacies of the web 3.0 space and how the future needs to be decentralized in order for the world wide web to be an all-inclusive ethical space like it was initially meant to be. We also learned about the Ethereum blockchain and the IPFS which when combined give us a fully decentralized web application. All of us wrote code in Solidity which is the language used to write Ethereum smart contracts for the very first time. We also read up about different kinds of standard tokens like the ERC-20 and ERC-721 and made our own cryptocurrency called "RTN" on the Ropsten TestNet!
## What's next for Reform The Norm
We intend to work a little more on:
* Optimising reach across varying viewpoints by applying graph algorithms on Ethereum nodes
* Integrating NLP into our smart contract to tag trigger words and issues correctly from the server-side
* Coming up with a more effective protocol for incentive provision
* Integrating a fake news detection API to counter the spreading of misinformation
>
> And finally, if all goes well we might release a whitepaper for an **ICO (initial coin offering)** for Reform the Norm's own cryptocurrency "RTN" and deploy our smart contract on the **Ethereum MainNet**.
>
>
>
## Ethics
We at Reform the Norm believe that the world's current socio-political crisis calls for each one of us to actively raise our voice and collectively advocate for social justice. Various countries have witnessed huge political movements gaining traction via social media as the voices of the marginalized, which have historically been violently suppressed are finally attaining their well-deserved level platform.
1) However, even though activism in the digital age raises rapid awareness and pressures those in power to take quick action, it doesn't come without its downsides. There have been several instances wherein social media has been misused by polarised governments for all sorts of unethical actions, from inciting riots to online surveillance of citizens. It has also been noted how certain governments have targeted, falsely charged, and subsequently incarcerated activists from marginalized factions, since they're the ones who are most vulnerable in a socio-political climate hostile to their free-willed existence. Hence, when it comes to raising awareness about the wrongs being done by the people in power, we need to make sure that people and their personal information are kept safe.
The internet today is broken by design. We see wealth, power, and influence placed in the hands of a limited few tech giants. Markets, institutions, and trust relationships have been transposed to this new platform, with the density, power, and incumbents changed, but with the same old dynamics. Centralization is not socially tenable long-term. Enter Web 3.0.
Web 3.0 is an inclusive set of protocols to provide building blocks for application makers. These building blocks take the place of traditional web technologies but present a whole new way of creating applications. Decentralized applications or DApps have their backend code (smart contracts) running on a decentralized network and not a centralized server. A smart contract is like a set of rules that live on-chain for all to see and run exactly according to those rules. Once DApps are deployed on the Ethereum network you can't change them. DApps can be decentralized because they are controlled by the logic written into the contract, not an individual or a company.
Acknowledging how the future needs to be decentralized in order for the world wide web to be an all-inclusive ethical space like it was initially meant to be, we decided to make our web-application promoting civic engagement decentralized by building it on the Ethereum blockchain. Doing so ensures that our user's data is kept safe from any kind of breach and it also makes sure that the algorithms behind the working of our DApp are transparent and fair.
2) Even though decentralizing our application ensures that the server-side is completely ethical and censorship-resistant we still might have a couple of problems when it comes to sharing information about social causes, them being -
a. **Creation of echo-chambers among people holding similar viewpoints leading to no tangible impact**
We have added the provision of sharing posts and incentivized it both for the creator and the sharer so that users are motivated to share content from people outside of their social circle and introduce the issue to their circle.
To bring about tangible on-ground change we have sorted the content on the basis of **issues** and **places** and created pages for each. These pages contain a full database of ways for our users to contribute. They can listen and learn, play their part by volunteering, signing petitions, showing up for protests and they can finally send in their donations to organizations working for the cause.
b. **Information regarding systemic and systematic oppression might trigger users with PTSD and other anxiety disorders**
We have hence given creators of the content the provision to add trigger warning tags so that we can mark content that contains triggering information.
*Future ~ We further intend to work on this part by including NLP algorithms into our smart contracts which would automatically generate trigger warning tags.*
c. **Spreading of Misinformation**
Since our platform would be open-for-all for it to be truly democratic, there is a risk of people spreading misinformation which could lead to mass panic or further polarization of opinions.
*Future ~ We intend to counter this by integrating the Fake News Detection API into our smart contract and generating a warning for other users if the information shared is false.*
d. **Privileged users co-opting the struggles of the marginalized**
With every movement, there comes a wave of performative allyship wherein the privileged take away the voices of the marginalized and co-opt their entire struggle.
*Future ~ Taking in more user information (race, religion, ethnicity, sexuality, gender identity, disability, caste, class, etc.) and giving a higher platform to the marginalized factions directly being affected by the issue being talked about.*
## Team
* [Vaani Rawat](https://www.linkedin.com/in/vaani-rawat-3076901a1/)
* [Diego Velázquez Álvarez](https://diegovelaz.com/)
* [Eduardo Piña](https://www.linkedin.com/in/eduardo-pi%C3%B1a-5a059117a/)
* [Berkay Alan](https://www.linkedin.com/in/berkayalan/)
|
## Inspiration
The transition from high school to university is a challenging phase wherein a student is not only navigating the curriculum but at the same time figuring out who they are as an individual, what their passions are, and more. For students coming from culturally diverse backgrounds, things become even more challenging. There are many school factors that affect the success of culturally diverse students: the school's atmosphere and overall attitudes toward diversity, involvement of the community, and a culturally responsive curriculum, to name a few.
To help those coming from culturally diverse backgrounds and help the transition process be a smooth and easy one, we bring to you Multicultural Matrix, a platform celebrating cultural diversity and providing a safe space with tools and equipment to help students sail through the college smoothly making the max use of opportunities out there.
## What it does
The platform provides tools and resources required for students to navigate through the university.
The platform focuses on three main aspects-
1) Accommodation
Many students look for nearby, affordable accommodations due to personal and financial constraints, ideally ones that welcome diversity and different communities. To help with that, the accommodation section helps students find accommodations that match their requirements.
2) Legal Documentation
Handling paperwork can be difficult for international students coming from across the globe and the whole documentation process gets 10x more complicated. The website also caters to handling paperwork by providing all the resources and information on one platform so that the student doesn't have to hop here and there looking for help and guidance.
3) Food and Culture
Adjusting to a new university and a new place can be very challenging as you're thrown completely outside of your comfort zone, and you often crave the familiarity and comfort of your community, culture and food. To help students find "Home Away From Home" we have also added a food section that shows the cheapest and best-rated restaurants near your location based on the cuisine and food you are craving.
## How we built it
The front end was built using HTML, CSS and JavaScript
We also used Twilio, BotDoc, Auth0 and Coil.
## Challenges we ran into
The CSS adjustments and the integration of the tools and tech with the website were a bit challenging, but we were able to find our way through the challenges as a team.
## Accomplishments that we're proud of
Working together as a team and coming up with a solution that solves real-world problems is something we're proud of at the same time celebrating diversity.
## What we learned
We learned how to use parallax scrolling on the website and learned how to work with Twilio, BotDoc etc.
## What's next for Multicultural Matrix
We do plan on making the different aspects of the website functional and using the MERN stack to implement the backend. We also plan on providing resources such as internship opportunities and scholarships for students coming from culturally diverse backgrounds.
## Domain:
Domain.com: multicultural-matrix.tech
|
losing
|
# What inspired us
We are a team of passionate innovators who want to help you overcome the challenges of the hustle culture. This is the idea that you should always work hard and never stop, even if it means sacrificing your health and happiness[1](https://rightasrain.uwmedicine.org/life/work/hustle-culture). This culture can harm your well-being in many ways, such as causing sleep deprivation, poor nutrition, and lack of physical activity[2](https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx&ocid=msedgntp&cvid=876fa38075b34347a9f572d54897d660&ei=13). These factors can also increase your risk of developing Alzheimer’s disease, a brain disorder that affects your memory and thinking skills[3](https://www.nia.nih.gov/health/what-causes-alzheimers-disease#factors).
We were also inspired by Yongey Mingyur Rinpoche, a Buddhist teacher who taught us about the monkey mind. This is a term that describes the state of mind when you are easily distracted, restless, and anxious[5](https://www.stress.org/7-techniques-to-tame-monkey-mind). The monkey mind can prevent you from focusing on your goals and achieving them[6](https://www.psychologytoday.com/us/blog/the-empowerment-diary/201709/calming-the-monkey-mind).
We also noticed that most people tend to use their phones too much when they have free time, which can also contribute to the monkey mind [7](https://hms.harvard.edu/news/screen-time-brain). Having a high screen time can affect your ability to relax and stop thinking[8](https://scopeblog.stanford.edu/2022/12/09/screen-time-the-good-the-healthy-and-the-mind-numbing/).
That’s why we created M2 ood, a tech solution that helps you manage your monkey mind and improve your mood. M2 ood stands for Monkey Mind Mood, and it is a website that integrates health data, mood tracking, and calendar to help you understand your overall mindset and well-being. Our solution helps you reduce screen time and become more aware of what is happening in your life and how you feel about it. We want to help you record, live, and enjoy the world, not just pass by.
# **Our Journey: Learning, Building, and Overcoming Challenges**
During this intense 36-hour hackathon, our team embarked on an incredible journey of learning, innovation, and collaboration. We delved into the world of HTML, CSS, Flask, Python, and APIs, broadening our technical horizons and bolstering our skills.
## **What We Learned**
The hackathon served as an invaluable crash course in various technologies and team dynamics. We acquired proficiency in:
* **HTML and CSS:** Crafting visually appealing and responsive web interfaces.
* **Flask and Python:** Building the backbone of our project with efficient and dynamic web development.
* **API Integration:** A challenge we tackled head-on, enhancing our understanding of data interaction and retrieval.
Not only did we master the technical aspects, but we also honed essential team skills:
* **Communication:** Effective dialogue and knowledge sharing within our team.
* **Trust:** Relying on each other's expertise and judgment.
* **Cross-Functionality:** Leveraging diverse skill sets for a well-rounded approach.
* **Execution:** Translating ideas into tangible results.
* **Planning and Organization:** Structuring our efforts for maximum productivity.
## **Our Project Building Process**
Our project took shape through a combination of Flask, APIs, HTML, and CSS. We utilized Flask and Python to create a robust and responsive web application, ensuring our project met the desired standards for functionality and user experience.
## **The Challenges We Faced**
Undoubtedly, the most significant challenge we encountered was navigating the complex world of APIs. While platforms like Terra provided substantial support and user-friendly tools, the limited time frame imposed constraints. We aspired to integrate APIs more comprehensively, but time remained a constraint that we look forward to addressing in the future.
Additionally, we welcomed two teammates who were participating in a hackathon for the first time. Despite the steep learning curve, their dedication and adaptability allowed them to make a remarkable contribution in a short span.
Our journey through this hackathon epitomized the spirit of entrepreneurship, innovation, and creativity. It was an inspiring testament to the power of collaboration and the pursuit of knowledge in the face of challenges.
|
Unhealthy study and work habits are common, and have only been worsened since the pandemic eliminated the social component of work and school. Furthermore, those living with ADHD may be suffering from poor mental health brought on by the lack of productivity.
We wanted to build an app that helps users regulate their study habits, with a focus on mental health. Furthermore, we wanted to build an app that we would actually regularly use - something that could help us and everyone else. What we came up with was a companion app to your studies and work; an app that can be opened in the background, an app that does not take attention away from the task at hand, an app that helps those with ADHD focus. To achieve these goals, our design principle is simplicity: bold singular colours, easy lines and fonts, and a non-intrusive animation of a tree growing to promote the sense of tranquility. Although our design principle is simplicity, the app was anything but.
We built in the Pomodoro study technique, in which an individual works for 25 minutes and then takes a short break of approximately 5 minutes. During the 25 minutes, the user can tap on a sapling, and the more taps the user makes, the quicker the sapling grows. This was implemented so that those with attention deficit disorders have a way to fidget while listening to a lecture or doing a reading. The tree growth idea came about as a way to make fidgeting interesting while still allowing the user to pay attention. Furthermore, we included very satisfying sound effects to make the fidgeting experience more soothing.

After the 25 minutes, we have a timed break in which the following four activities may be displayed: a live weather update that notifies the user if they can take a walk, various stretching exercises to prevent fatigue, a guided meditation video, and a music player that currently plays an upbeat and lively tune but in the future may suggest different kinds of music. Afterwards, the user can choose to end the work day or continue working. By clicking on the end work day option, the user is shown a stress chart that plots their stress levels through the day, derived from their tapping behavioural data transformed via an algorithm.
Our web app was developed using React for UI, and Django for the backend and database.
Not being familiar with Django, we had difficulties connecting our REST API to the front-end, but are now proud of the final product we produced. We see this app expanding in several different areas. It has applications to many different populations, including senior citizens and those experiencing mental health issues. We definitely see this as a useful mobile app, and will consider developing a Bluetooth physical fidget device that sends information back to your computer or phone.
|
## Inspiration
The intention behind NintAudio is, aside from recreating the feel of old arcade games, to offer new ways of exploring gaming. It was actually Xavier who came up with the idea, being interested in the accessibility of browsers; the question that naturally came next was "but what about video games?".
This project is aimed at:
* Visually impaired people, so they can share our delight when playing video games
* Developers, to raise their awareness of the condition of visually-impaired people in the digital era
* Everybody, to realize how much we rely on sight and how difficult it is to understand an environment with only sound.
## What it does
NintAudio is a collection of retro-inspired games relying entirely on sounds: Pong, Atari Breakout, and Whac-a-Mole.
## Challenges
Developing the logic and sound design with very little visual cues was quite the task. We had to work around our habitual use of sight for even very simple UX design.
Rust is very new, both to the community and to our team: although documentation is available, the use of this language and many features still under development was quite ambitious. Implementation for all libraries isn't guaranteed across operating systems, and there are bugs left to be resolved with some of them (particularly the .mp3 format and SineWave stuttering with rodio). We're expecting to send patches upstream in the near future to fix the bugs with rodio.
## Why Rust?
Real-time audio games meant we needed absolutely minimal latency. This meant a garbage collector was definitely a no-go, as even a 50ms delay for the garbage collector to perform its duty would destroy the game play. Rust is a bare metal programming language, so it was a perfect fit, and its memory safety guarantee meant that we were safe from the toughest bugs, which was even more important given the experience level of the team members.
Three out of four teammates were new to coding. We felt like the advanced compiler would give us a greater confidence on the code we produced, even though we were dealing with low-level bindings.
## What's next for NintAudio
Unfortunately, as of now, the player must type the name of the game into a terminal in order to play -- adding a voice user interface would likely make the experience more immersive and overall more user-friendly. More games are also to be added.
## Support
**MacOS** No gamepad support.
**Windows with ANSI terms** No support.
*Tested on: Linux (Arch & Ubuntu), Windows 10.*
*Should work on: MacOS, Windows XP & up, Linuxes with ALSA, BSDs, DragonFly.*
|
losing
|
## Inspiration
With recent booms in AI development, deepfakes have been getting more and more convincing. Social media is an ideal medium for deepfakes to spread, and can be used to seed misinformation and promote scams. Our goal was to create a system that could be implemented in image/video-based social media platforms like Instagram, TikTok, Reddit, etc. to warn users about potential deepfake content.
## What it does
Our model takes in a video as input and analyzes frames to determine instances of that video appearing on the internet. It then outputs several factors that help determine whether a deepfake warning to a user is necessary: URLs of websites where the video has appeared, dates of publication scraped from those websites, previous deepfake IDs (i.e. whether a website already mentions the word "deepfake"), and similarity scores between the content of the video being examined and previous occurrences of it. A warning should be sent to the user if content similarity scores with very similar videos are low (indicating the video has been tampered with) or if the video has previously been IDed as a deepfake by another website.
## How we built it
Our project was split into several main steps:
**a) finding web instances of videos similar to the video under investigation**
We used Google Cloud's Cloud Vision API to detect web entities that have content matching the video being examined (including full matching and partial matching images).
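A minimal sketch of that lookup for a single extracted frame (authentication is assumed to be configured via `GOOGLE_APPLICATION_CREDENTIALS`):

```python
# Query Cloud Vision's web detection for one video frame.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("frame_001.jpg", "rb") as f:
    image = vision.Image(content=f.read())

web = client.web_detection(image=image).web_detection
for page in web.pages_with_matching_images:
    print("matching page:", page.url)
for img in web.partial_matching_images:
    print("partial match:", img.url)
```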
**b) scraping date information from potential website matches**
We utilized the htmldate python library to extract original and updated publication dates from website matches.
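For each candidate URL, the lookup is essentially one call (the URL below is a placeholder):

```python
from htmldate import find_date

url = "https://example.com/viral-clip"  # hypothetical match from step a)
print(find_date(url, original_date=True))  # best guess at the original publication date
print(find_date(url))                      # most recent update date, when detectable
```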
**c) determining if a website has already identified the video as a deepfake**
We again used Google Cloud's Cloud Vision API to determine if the flags "deepfake" or "fake" appeared in website URLs. If they did, we immediately flagged the video as a possible deepfake.
**d) calculating similarity scores between the contents of the examined video and similar videos**
If no deepfake flags have been raised by other websites (step c), we use Google Cloud's Speech-to-Text API to acquire transcripts of the original video and of the similar videos found in step a). We then compare pairs of transcripts using a cosine similarity algorithm written in Python to determine how similar the contents of two texts are (common, low-meaning words like "the", "and", "or", etc. are ignored when calculating similarity).
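A minimal sketch of that comparison (the stopword list shown is a small illustrative subset):

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "and", "or", "a", "an", "of", "to", "in", "is"}  # illustrative subset

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between word-count vectors, ignoring low-meaning words."""
    def vectorize(text):
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(w for w in words if w not in STOPWORDS)
    a, b = vectorize(text_a), vectorize(text_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("the senator praised the new bill",
                        "the senator condemned the new bill"))
```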
## Challenges we ran into
Neither of us had much experience using Google Cloud, which ended up being a major tool in our project. It took us a while to figure out all the authentication and billing procedures, but it was an extremely useful framework for us once we got it running.
We also found that it was difficult to find a deepfake online that wasn't already IDed as one (to test out our transcript similarity algorithm), so our solution to this was to create our own amusing deepfakes and test it on those.
## Accomplishments that we're proud of
We're proud that our project mitigates an important problem for online communities. While most current deepfake detection uses AI, malicious generators can simply keep improving to counter detection mechanisms. Our project takes an innovative approach that avoids this problem by instead tracking and analyzing the online history of a video (something that the creators of a deepfake video have no control over).
## What we learned
While working on this project, we gained experience in a wide variety of tools that we've never been exposed to before. From Google Cloud to fascinating text analysis algorithms, we got to work with existing frameworks as well as write our own code. We also learned the importance of breaking down a big project into smaller, manageable parts. Once we had organized our workflow into reachable goals, we found that we could delegate tasks to each other and make rapid progress.
## What's next for Deepfake ID
Since our project is (ideally) meant to be integrated with an existing social media app, it's currently a little back-end heavy. We hope to expand this project and get social media platforms onboard to using our deepfake detection method to alert their users when a potential deepfake video begins to spread. Since our method of detection has distinct advantages and disadvantages from existing AI deepfake detection, the two methods can be combined to create an even more powerful deepfake detection mechanism.
Reach us on Discord: **spica19**
|
# We'd love if you read through this in its entirety, but we suggest reading "What it does" if you're limited on time
## The Boring Stuff (Intro)
* Christina Zhao - 1st-time hacker - aka "Is cucumber a fruit"
* Peng Lu - 2nd-time hacker - aka "Why is this not working!!" x 30
* Matthew Yang - ML specialist - aka "What is an API"
## What it does
It's a cross-platform app that can promote mental health and healthier eating habits!
* Log when you eat healthy food.
* Feed your "munch buddies" and level them up!
* Learn about the different types of nutrients, what they do, and which foods contain them.
Since we are not very experienced at full-stack development, we just wanted to have fun and learn some new things. However, we feel that our project idea really ended up being a perfect fit for a few challenges, including the Otsuka Valuenex challenge!
Specifically,
>
> Many of us underestimate how important eating and mental health are to our overall wellness.
>
>
>
That's why we made this app! After doing some research on the compounding relationship between eating, mental health, and wellness, we were quite shocked by the overwhelming amount of evidence and studies detailing the negative consequences.
>
> We will be judging for the best **mental wellness solution** that incorporates **food in a digital manner.** Projects will be judged on their ability to make **proactive stress management solutions to users.**
>
>
>
Our app has a two-pronged approach: it addresses mental wellness through both healthy eating and through having fun and stress relief! Additionally, not only is eating healthy a great method of proactive stress management, but another key aspect of being proactive is making your de-stressing activities part of your daily routine. I think this app would really do a great job of that!
Additionally, we also focused really hard on accessibility and ease-of-use. Whether you're on Android, iPhone, or a computer, it only takes a few seconds to track your healthy eating and play with some cute animals ;)
## How we built it
The front-end is React Native, and the back-end is FastAPI (Python). Aside from our individual talents, I think we did a really great job of working together. We employed pair-programming strategies to great success, since each of us has our own individual strengths and weaknesses.
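For flavor, a minimal FastAPI sketch of the kind of endpoint that logs a meal and feeds a buddy (the route, fields, and point values are our own illustrative assumptions, not the app's exact API):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
log = []  # stand-in for a real database

class MealEntry(BaseModel):
    user: str
    food: str
    healthy: bool = True

@app.post("/meals")
def log_meal(entry: MealEntry):
    """Record a meal; feeding a munch buddy could grant points here."""
    log.append(entry.dict())
    return {"points_awarded": 10 if entry.healthy else 0}
```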
## Challenges we ran into
Most of us have minimal experience with full-stack development. If you look at my LinkedIn (this is Matt), all of my CS knowledge is concentrated in machine learning!
There were so many random errors with just setting up the back-end server and learning how to make API endpoints, as well as writing boilerplate JS from scratch.
But that's what made this project so fun. We all tried to learn something we're not that great at, and luckily we were able to get past the initial bumps.
## Accomplishments that we're proud of
As I'm typing this in the final hour, in retrospect, it really is an awesome experience getting to pull an all-nighter hacking. It makes us wish that we attended more hackathons during college.
Above all, it was awesome that we got to create something meaningful (at least, to us).
## What we learned
We all learned a lot about full-stack development (React Native + FastAPI). Getting to finish the project for once has also taught us that we shouldn't give up so easily at hackathons :)
I also learned that the power of midnight doordash credits is akin to magic.
## What's next for Munch Buddies!
We have so many cool ideas that we just didn't have the technical chops to implement in time
* customizing your munch buddies!
* advanced data analysis on your food history (data science is my specialty)
* exporting your munch buddies and stats!
However, I'd also like to emphasize that any further work on the app should be done WITHOUT losing sight of the original goal. Munch buddies is supposed to be a fun way to promote healthy eating and wellbeing. Some other apps have gone down the path of too much gamification / social features, which can lead to negativity and toxic competitiveness.
## Final Remark
One of our favorite parts about making this project, is that we all feel that it is something that we would (and will) actually use in our day-to-day!
|
## Inspiration
-Inspired by baracksdubs and many "Trump Sings" channels on YouTube, each of which invests a lot of time into manually tracking down words.
-Fully automating the process allows us to mass-produce humorous content and "fake news", bringing awareness to the ease with which modern technology allows for the production and perpetuation of generated content.
-Soon-to-emerge technologies like Adobe VOCO are poised to allow people to edit audio and human speech as seamlessly as we are currently able to edit still images.
-The inspirational lectures of Professor David J. Malan.
## What it does
We train each available "voice" by inputting a series of YouTube URL's to `main.py`.
`download.py` downloads and converts these videos to `.wav` files for use in `speech.py`, which uses Google's Cloud Speech API to create a dictionary of mappings between words and video time-stamps.
`application.py` implements user interaction: given a voice/text input via Facebook, we use these mappings to concatenate the video clips corresponding to each word.
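A minimal sketch of the word-to-timestamp mapping step (the bucket URI is a placeholder; in recent google-cloud-speech versions the word offsets come back as `timedelta` objects):

```python
from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_time_offsets=True,  # per-word timestamps are the whole point
)
audio = speech.RecognitionAudio(uri="gs://our-bucket/speech.wav")  # hypothetical bucket

# Map each spoken word to every (start, end) interval where it occurs.
word_map = {}
for result in client.recognize(config=config, audio=audio).results:
    for info in result.alternatives[0].words:
        word_map.setdefault(info.word.lower(), []).append(
            (info.start_time.total_seconds(), info.end_time.total_seconds())
        )
```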
## How we built it
First we decided on Python due to its huge speech recognition community. This also allowed us to utilize a collaborative online workspace through Cloud9 which helped facilitate concurrent collaboration.
We used Google's Speech API because we saw that it was very popular and supported timestamps for individual words. Also, it had very elegant JSON output, which was a definite bonus.
Next, we figured out how to use the packages pytube and ffmpy to grab video streams from youtube and convert them, with speed and without loss of quality, to the needed .wav and .mp4 formats.
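A minimal sketch of that download-and-convert step (the URL and filenames are examples):

```python
import ffmpy
from pytube import YouTube

yt = YouTube("https://www.youtube.com/watch?v=EXAMPLE")
stream = yt.streams.filter(progressive=True, file_extension="mp4").first()
video_path = stream.download(filename="source.mp4")

# Convert to mono 16 kHz WAV, the format speech APIs tend to prefer.
ffmpy.FFmpeg(
    inputs={video_path: None},
    outputs={"source.wav": "-ac 1 -ar 16000 -y"},
).run()
```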
At the same time, one of our team members learned how to use Python packages to concatenate and split .mp4 videos, and built functions with which we were able to manipulate small video files with high precision.
Following some initial successes with the Google Speech API and mp4 manipulation, we began exploring the Facebook Graph API. There was quite a bit of struggle here with permissions issues, because many of the functions we were trying to call were gated behind permissions that had to be granted by Facebook after review. However, we did eventually get Facebook to integrate with our program.
The final step we took was to fit the few remaining unconnected pieces of the project together and troubleshoot any issues that came up.
During the process, we were also investigating a few moonshot-type upgrades. These included ideas like using a sound spectrogram to find the individual phonemes of words, so we could finely tune individual words or generate new words that were never previously said by the person.
## Challenges we ran into
A big challenge we ran into was that the Google Speech API was not extremely accurate when identifying single words. We tried various things like different file/compression types, boosting sound (normalizing/processing the waveform), and improving sound quality (bitrate, sampling frequency).
Another big challenge we ran into was that when we tried splicing the small (under 1 or 2 second) video files together, they lost their video component, due to issues with key frames, negative timestamps, and video interpolation. Apparently, in order to save space, videos store key frames and interpolate between them to generate the frames in between. This is good enough to fool the human eye, but it required a lot of extra work on our part to get the correct output.
A third big challenge we ran into was that when we communicated with the Facebook API through our Flask website, Facebook would re-send our Flask page POST requests before we had finished processing the information from the previous request. To solve this issue, we grabbed the POST request information and opened new threads in Python to process requests in parallel.
A fourth big challenge we ran into was that the Wi-Fi was so slow that it would take around 1 minute to upload a 1-minute video to Google's cloud for speech processing. Thus, in order to analyze large videos (1+ hours), we developed a way to use multiple threads to split the video into smaller segments without destroying words and upload those segments in parallel.
## Accomplishments that we're proud of
We have a scalable, modular structure which makes future expansion easy. This allows us to easily switch APIs for each function.
## What we learned
[Web Services APIs]
>
> Speech to Text Conversion:
> --Google Cloud API
> --CMU Sphinx (Experimental Offline Speech-To-Text Processing with the English Language Model)
> Facebook API Integration:
> --Accepting input from user via automated messenger bot development
> --Posting to Facebook Page
>
>
>
[Web Services Deployment]
>
> Flask and Python Interfacing
>
>
>
[Python]
>
> Multi-file Python package integration
> Team-based Development
>
>
>
[Video and Audio Conversion]
--FFMPEG Video: Efficient Splicing, Keyframes, Codecs, Transcoding
--FFMPEG Audio: Sampling Frequency, Sound Normalization
[Misc]
--Automating the Production of quality memes
--Teamwork and Coding while sleep-deprived
## What's next for Wubba Lubba Dubz
We'd like to incorporate a GUI with a slider, to more accurately adjust start/end times for each word.
Right now, we can only identify words which have been spoken exactly as entered. With Nikhil's background in linguistics, we will split an unknown word into its phonetic components.
Ideally, we will build a neural net which allows us to choose the best sound file for each word (in context).
|
partial
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and Chart.js. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-text, Google Diarization, Stanford Empath, SKLearn and Glove (for word-to-vec).
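As one example, a single transcribed utterance can be scored against Empath's lexical categories like this (the category subset shown is illustrative):

```python
from empath import Empath

lexicon = Empath()
scores = lexicon.analyze(
    "I think this quarter's numbers look fantastic",
    categories=["positive_emotion", "negative_emotion", "work"],
    normalize=True,  # normalize counts by utterance length
)
print(scores)  # e.g. {'positive_emotion': 0.125, 'negative_emotion': 0.0, ...}
```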
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging as we ran up against some limitations of javascript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned (too much) about how computers store numbers (:p), and did a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
## welcome to Catmosphere!
we wanted to make a game with (1) cats and (2) cool art. inspired by the many "cozy indie" games on steam and on social media, we got working on a game where the cat has to avoid all the obstacles as it attempts to go into outer space.
**what it does**: use the WASD keys to navigate our cat around the enemies. enter the five levels of the atmosphere and enjoy the art and music while you're at it!
**what's next for Catmosphere**: adding more levels, a restart button, & a new soundtrack and artwork
|
## Inspiration
One of our team members underwent speech therapy as a child, and the therapy helped him gain a sense of independence and self-esteem. In fact, over 7 million Americans, ranging from children with gene-related diseases to adults who suffer from stroke, go through some sort of speech impairment. We wanted to create a solution that could help amplify the effects of in-person treatment by giving families a way to practice at home. We also wanted to make speech therapy accessible to everyone who cannot afford the cost or time to seek institutional help.
## What it does
BeHeard makes speech therapy interactive, insightful, and fun. We present a hybrid text and voice assistant visual interface that guides patients through voice exercises. First, we have them say sentences designed to exercise specific nerves and muscles in the mouth. We use deep learning to identify mishaps and disorders on a word-by-word basis, and show users where exactly they could use more practice. Then, we lead patients through mouth exercises that target those neural pathways. They imitate a sound and mouth shape, and we use deep computer vision to display the desired lip shape directly on their mouth. Finally, when they are able to hold the position for a few seconds, we celebrate their improvement by showing them wearing fun augmented-reality masks in the browser.
## How we built it
* On the frontend, we used Flask, Bootstrap, Houndify and JavaScript/css/html to build our UI. We used Houndify extensively to navigate around our site and process speech during exercises.
* On the backend, we used two Flask servers that split the processing load, with one running the server IO with the frontend and the other running the machine learning.
* On our algorithms side, we used deep\_disfluency to identify speech irregularities and filler words and used the IBM Watson speech-to-text (STT) API for a more raw, fine-resolution transcription.
* We used the tensorflow.js deep learning library to extract 19 points representing the mouth of a face. With exhaustive vector analysis, we determined the correct mouth shape for pronouncing basic vowels and gave real-time guidance for lip movements. To increase users' motivation to practice, we even incorporated AR to draw the desired lip shapes on users' mouths, rewarding them with fun masks when they get it right!
## Challenges we ran into
* It was quite challenging to smoothly incorporate voice into our platform for navigation while also being sensitive to the fact that our users may have trouble with voice AI. We help those who are still improving gain competence and feel at ease by creating a chat bubble interface that reads messages to users, and also accepts text and clicks.
* We also ran into issues finding the balance between getting noisy, unreliable STT transcriptions and transcriptions that autocorrected our users' mistakes. We ended up employing a balance of the Houndify and Watson APIs. We also adapted a dynamic-programming solution to the Longest Common Subsequence problem to create the most accurate and intuitive visualization of our users' mistakes; a sketch of this comparison follows below.
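A minimal sketch of that LCS-based, word-level comparison (the helper names and output format are ours):

```python
def lcs_table(a, b):
    """Classic dynamic-programming table for the Longest Common Subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, wa in enumerate(a):
        for j, wb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if wa == wb else max(dp[i][j + 1], dp[i + 1][j])
    return dp

def diff_words(expected, spoken):
    """Mark each expected word as said correctly or missed/mispronounced."""
    dp = lcs_table(expected, spoken)
    i, j, hits = len(expected), len(spoken), set()
    while i and j:  # backtrack through the table to recover matched words
        if expected[i - 1] == spoken[j - 1]:
            hits.add(i - 1); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return [(w, k in hits) for k, w in enumerate(expected)]

print(diff_words("she sells sea shells".split(), "she sell sea shells".split()))
# [('she', True), ('sells', False), ('sea', True), ('shells', True)]
```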
## Accomplishments that we are proud of
We’re proud of being one of the first easily-accessible digital solutions that we know of that both conducts interactive speech therapy, while also deeply analyzing our users speech to show them insights. We’re also really excited to have created a really pleasant and intuitive user experience given our time constraints.
We’re also proud to have implemented a speech practice program that involves mouth shape detection and correction that customizes the AR mouth goals to every user’s facial dimensions.
## What we learned
We learned a lot about the strength of the speech therapy community, and the patients who inspire us to persist in this hackathon. We’ve also learned about the fundamental challenges of detecting anomalous speech, and the need for more NLP research to strengthen the technology in this field.
We learned how to work with facial recognition systems in interactive settings. All the vector calculations and geometric analyses to make detection more accurate and guidance systems look more natural was a challenging but a great learning experience.
## What's next for Be Heard
We have demonstrated how technology can be used to effectively assist speech therapy by building a prototype of a working solution. From here, we will first develop more models to determine stutters and mistakes in speech by diving into audio and language related algorithms and machine learning techniques. It will be used to diagnose the problems for users on a more personal level. We will then develop an in-house facial recognition system to obtain more points representing the human mouth. We would then gain the ability to feature more types of pronunciation practices and more sophisticated lip guidance.
|
winning
|
## Inspiration
In the past 2 years, the importance of mental health has never been so prominent on the global stage. With isolation leaving us with crippling effects and social anxiety, many have suffered in ways that could potentially impact them for the rest of their lives. One of the difficulties that people with anxiety, depression, and some other mental health issues face is imagination. Our main goal in this project was targeting this group (which includes our teammates) and helping them take small steps towards bringing it back. clAIrity is a tool for users who want to express themselves with words and see a visual feedback representation of those exact words. clAIrity was inspired by the Health and Discovery portions of Hack the Valley; our team has all dealt with the effects of mental health issues, and we thought it would be a crazy but awesome idea to build an app that promotes the processing of our thoughts and emotions using words.
## What it does
The user inputs a journal entry into the app, and the app then uses co:here's NLP summarization tool to pass a JSON string of the user's journal entry into the Wombo API.
The Dream API then returns an image generated from the user's journal-entry prompt. Here the user can screenshot the generated image and keep a "visual diary".
The user can then save their journal entry in the app, which gives them a copy of every entry they submit.
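A rough sketch of that pipeline under stated assumptions (the summarize call follows co:here's Python SDK; the Dream endpoint URL and payload shape are placeholders for illustration):

```python
import cohere
import requests

co = cohere.Client("COHERE_API_KEY")
entry = ("Today was stressful, but the walk by the lake at sunset calmed me down, "
         "and the orange light on the water made everything feel lighter...")

# Condense the journal entry into a short image prompt.
prompt = co.summarize(text=entry, length="short").summary

# Hypothetical Dream API call; the real endpoint and fields differ.
resp = requests.post("https://api.example-dream.ai/tasks", json={"prompt": prompt})
print(resp.json())  # would contain a URL to the generated image
```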
## Challenges
Bundling an app that uses both Java and Python was no small feat for us; using the Chaquopy plugin for Android Studio, we integrated our Python code to work in tandem with our Java code.
## Accomplishment
We are proud of improving our development knowledge. As mentioned above, this project is based on Java and Python, and one of the big challenges was showcasing, inside the app, the picture received from the API on the Python side. We overcame this challenge with lots of reading and by trying different methods, and solving it together built a great group bond.
## What we learned
We learned a lot about Android Studio from a BOOK! We learned what different features do in the app and how we can modify them to achieve our goal. On the backend side, we worked with the Dream API in Python and used the plugin to send information from the Python side to the Java side of the backend.
## What's next
The next thing on the agenda for clAIrity is to add a voice-to-text feature, so our users can talk and see the results.
|
## 💡 Inspiration 💡
So many people around the world are fatally injured and require admission to multiple hospitals in order to receive life-changing surgery/procedures. When patients are transferred from one hospital to another, it is crucial for their medical information to be safely transferred as well.
## ❓ What it does ❓
Hermes is a secure, HIPAA-compliant app that allows hospital admins to transfer vital patient data to other domestic and/or international hospitals. The user inputs patient data, uploads patient files, and sends them securely to a hospital.
## ⚙️ How we built it ⚙️
We used the React.js framework to build the web application, with JavaScript on the backend and HTML/CSS/JS on the frontend. We called the Auth0 API for authentication and the Botdoc API for encrypted file sending.
## 🚧 Challenges we ran into 🚧
Figuring out how to send encrypted files through Botdoc was challenging but also critical to our project.
## ✨Accomplishments that we're proud of ✨
We’re proud to have built a dashboard-like functionality within 24 hours.
## 👩🏻💻 What we learned 👩🏻💻
We learned that authentication on every page is critical for an app like this that would require uploaded patient information from hospital admins. Learning to use Botdoc was also fruitful when it comes to sending encrypted messages/files.
|
## What it does
KokoRawr at its core is a Slack app that facilitates new types of interactions via chaotic cooperative gaming through text. Every user is placed on a team based on their Slack username and tries to increase their team's score by playing games such as Tic-Tac-Toe, Connect 4, Battleship, and Rock Paper Scissors. Teams must work together to play. However, a "Twitch Plays Pokemon" sort of environment can easily arise where multiple people are trying to execute commands at the same time and step on each other's toes. Additionally, people can visualize the games via a web app.
## How we built it
We jumped off the deep end into the land of microservices. We made liberal use of StdLib with Node.js to deploy a service for every feature in the app, amounting to 10 different services. The StdLib services all talk to each other and to Slack. We also have a visualization of the game boards, hosted as a Flask server on Heroku, that talks to the microservices to get information.
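A minimal sketch of how the visualization server might poll one game-state service (the service URL and JSON shape are assumptions):

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)
TICTACTOE_STATE = "https://kokorawr.example.lib/tictactoe/state/"  # hypothetical endpoint

@app.route("/board/tictactoe")
def tictactoe_board():
    """Fetch the current board from the microservice and relay it to the web app."""
    state = requests.get(TICTACTOE_STATE).json()  # e.g. {"board": [...], "turn": "red"}
    return jsonify(state)
```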
## Challenges we ran into
* not getting our Slack App banned by HackPrinceton
* having tokens show up correctly on the canvas
* dealing with all of the madness of callbacks
* global variables causing bad things to happen
## Accomplishments that we're proud of
* actually chaotically play games with each other on Slack
* having actions automatically showing up on the web app
* The fact that we have **10 microservices**
## What we learned
* StdLib way of microservices
* Slack integration
* HTML5 canvas
* how to have more fun with each other
## Possible Use Cases
* Friendly competitive way for teams at companies to get to know each other better and learn to work together
* New form of concurrent game playing for friend groups with "unlimited scalability"
## What's next for KokoRawr
We want to add more games to play and expand the variety of visualizations to cover more games. Some service restructuring would need to be done to reduce the Slack latency. Also, game state would need to be made more persistent across the services.
|
partial
|
## Inspiration
When we talk about mental health, we're talking about overall well-being. Good mental health is not only key to feeling good; it also affects our physical health, energy, and activity levels. It influences our vitality, our relationships with others and, first of all, with ourselves, and our desire to live. That's why it's so important to maintain good mental health and to pay attention to any signs of distress.
Here are some fast statistics: according to NAMI (National Alliance on Mental Illness <https://www.nami.org/mhstats>)
1 in 5 U.S. adults experiences mental illness each year; 1 in 20 U.S. adults experiences serious mental illness each year; 1 in 6 U.S. youth aged 6-17 experience a mental health disorder each year; 50% of all lifetime mental illness begins by age 14, and 75% by age 24; and suicide is the 2nd leading cause of death among people aged 10-14.
The problem is HUGE. That is why it's a common goal to popularize self-care and attention to psychological well-being. Mental support should be widespread and accessible to everyone, regardless of location, status, and age. Many people are embarrassed or afraid to ask for help, and some don't have financial resources. We believe that this should not be an obstacle. Our main goal is to make mental healthcare widespread and accessible.
## What it does
Our solution harnesses the power of technology to address the global mental health crisis by providing a comprehensive platform for mental health care and support. By leveraging AI, maps, telemedicine, and online leisure, we are breaking down barriers to access and making it easier for people, especially those in remote areas, to get the help they need.
Our platform integrates user data and creates a single holistic database, enabling mental health professionals to provide personalized care and support to those who need it most, improving the lives of millions worldwide.
## How we built it
We created our PWA-based service to quickly test our hypotheses and develop a fully functional mobile app. Our frontend and backend are written in Next.js and deployed on Vercel. We use Supabase in combination with Prisma ORM and Next.js Api Functions to handle the backend and data.
We built a chatbot based on the OpenAI GPT API with custom prompts that answer questions as accurately as possible. We also use the Google Maps API to display points, and PostGIS to build points and routes.
One of our features is an LLM-based assistant that answers questions about patients' problems and diagnoses accurately and quickly, and makes recommendations through customized prompts.
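Our backend is written in JavaScript, but a rough Python sketch of the custom-prompted chatbot call looks like this (the model name and system prompt are illustrative, not our production values):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a careful, supportive mental-health assistant."},
        {"role": "user", "content": "I've been feeling anxious before exams."},
    ],
)
print(response.choices[0].message.content)
```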
## Challenges we ran into
In the limited time of the hackathon, we faced several challenges that ended up helping us delve deeper into the topic to find a solution, as well as causing us to work better on the concept itself and its details.
One of these challenges was exploring the Electronic Health Records (EHRs) and developing the concept of creating an alternative transparent system to ensure data protection.
## Accomplishments that we're proud of
We enjoy even small steps, so we're really proud that within a couple of days of the hackathon, we were able to come together and work on the chosen topic qualitatively. Nevertheless, the cohesion of the team is one of the achievements that we are proud of at this hackathon.
We are proud that we were able to take a step toward making an impact on this undoubtedly important and more-relevant-than-ever problem.
## What we learned
During those couple of days of the hackathon, we researched and analyzed healthcare and the weaknesses in its digitalization, AI integration, and EHRs (Electronic Health Records), which even at this early stage gave us great insight into the topic.
We realized that the problem we decided to solve within TreeHacks turned out to be even bigger and in more need of solving than we had imagined. It was a valuable learning experience: the TreeHacks sponsor sessions and dialogues with founders helped us learn the specifics of integrating AI into our application.
## What's next for CheckApp
We want to develop CheckApp into a platform designed to enhance the security, accessibility, and reliability of healthcare services. Our goal is to create a digital medical profile for each patient, which will allow them to own and control their data and grant access to the necessary doctors and organizations.
We believe patients should have a clear understanding of which of their data is being utilized and have access to transparent service costs. To facilitate this, we will develop a mobile app for patients and a platform for organizations to communicate more easily. Patients will have the ability to grant access to their data, allowing various organizations such as emergency responders, therapists, and doctors to review or modify their information.
Additionally, patients will be able to make appointments and receive medical services through the mobile app. For organizations, we will provide a centralized patient knowledge base, resulting in faster and more convenient service delivery.
|
## Inspiration
The motivation stemmed from one of our team members' struggle with cooking the same dish for themselves every day without an easy way to discover new recipes. By simply snapping a picture of the ingredients, the app retrieves a list of potential recipes to draw inspiration from or to learn a new dish. Another use case is reducing food waste: you can make the most out of any leftover ingredients from your last meal. As more and more people go out to eat, the app lets them see what they can cook with the ingredients they already have at home, instead of resorting to ordering food at a restaurant. This also resolves the extremely time-consuming process of searching up each ingredient online for a recipe while having to identify the ingredients yourself.
## What it does
A user takes a picture of each ingredient they have. The app encodes the image and sends it to our server, which calls Azure Computer Vision AI to analyze it. Once the image is analyzed, it is searched throughout our database for matching or similar ingredients. All matching ingredients, the confidence, and a caption of the image are returned to the front-end (your phone) and displayed in the AR environment. Once all the ingredients are "scanned", the user can send the list of ingredients back to our API, which finds all recipes that use any of them. Each recipe in this list contains a name, an image, and a list of instructions for how to make it. The list is displayed in the AR environment, which the user can interact with to make a selection.
## How we built it
We created an API back-end using Django and GraphQL, with a database that stores the ingredients and recipes and is queried via GraphQL. In addition, we use Microsoft Azure's Computer Vision API for analyzing the images and returning a JSON response describing what each image contains. We deployed this API on Microsoft Azure App Service to host our back-end server. On the front-end, we created an iOS application using Swift on macOS. It calls our API when it detects a touch action to capture a snapshot, which we send to the Computer Vision service for image analysis. If it recognizes an ingredient, it adds it to the set of recognized ingredients and searches for a recipe that contains those ingredients. The name, ingredient name, and confidence are rendered in the AR environment.
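A minimal sketch of that image-analysis call using Azure's Python SDK (the endpoint, key, and filename are placeholders):

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-region>.api.cognitive.microsoft.com",
    CognitiveServicesCredentials("<your-key>"),
)

with open("ingredient.jpg", "rb") as image:
    analysis = client.analyze_image_in_stream(
        image, visual_features=["Tags", "Description"]
    )

print(analysis.description.captions[0].text)    # caption shown in the AR view
for tag in analysis.tags:
    print(tag.name, round(tag.confidence, 2))   # candidate ingredient + confidence
```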
## Challenges we ran into
One of the biggest roadblocks we ran into was setting up the back-end API on the Microsoft Azure server, but it was quickly resolved thanks to the on-site Microsoft mentors. In addition, it was difficult coming up with an algorithm and design structure to retrieve recipes based on the recognized ingredients. We also had trouble finding an existing viable data set of recipes and ingredients.
## Accomplishments that we're proud of
We were able to integrate Azure environment without any prior experience. Also, we were able to solve a common problem and encourage people to save more by creating an opportunity to cook at home.
## What we learned
Drawing up a plan in the beginning decreased development downtime. Azure has a variety of services that we could employ in future projects.
## What's next for ARuHungry
Introduce preferences for individual users, so that recognized ingredients only return recipes filtered by their set preferences. A future expansion could be to integrate with grocery stores that want to advertise their products, suggesting great deals to users based on the ingredients they already have.
|
## Story
Mental health is a major issue especially on college campuses. The two main challenges are diagnosis and treatment.
### Diagnosis
Existing mental health apps require the user to proactively input their mood, their thoughts, and their concerns. With these apps, it's easy to hide one's true feelings.
We wanted to find a better solution using machine learning. Mira uses visual emotion detection and sentiment analysis to determine how they're really feeling.
At the same time, we wanted to use an everyday household object to make it accessible to everyone.
### Treatment
Mira focuses on being engaging and keeping track of their emotional state. She allows them to see their emotional state and history, and then analyze why they're feeling that way using the journal.
## Technical Details
### Alexa
The user's speech is being heard by the Amazon Alexa, which parses the speech and passes it to a backend server. Alexa listens to the user's descriptions of their day, or if they have anything on their mind, and responds with encouraging responses matching the user's speech.
### IBM Watson/Bluemix
The speech from Alexa is passed to IBM Watson, which performs sentiment analysis to determine how the user is actually feeling from their words.
### Google App Engine
The backend server is being hosted entirely on Google App Engine. This facilitates the connections with the Google Cloud Vision API and makes deployment easier. We also used Google Datastore to store all of the user's journal messages so they can see their past thoughts.
### Google Vision Machine Learning
We take photos using a camera built into the mirror. The photos are then sent to the Vision ML API, which finds the user's face and extracts the user's emotions from each photo. The results are stored directly in Google Datastore, which integrates well with Google App Engine.
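A minimal sketch of the emotion-detection call on a single mirror photo (authentication is assumed to be configured for the client):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("mirror_frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Each detected face carries per-emotion likelihood enums (e.g. VERY_LIKELY).
for face in client.face_detection(image=image).face_annotations:
    print("joy:", face.joy_likelihood.name,
          "sorrow:", face.sorrow_likelihood.name,
          "anger:", face.anger_likelihood.name)
```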
### Data Visualization
Each user can visualize their mental history through a series of graphs. The graphs are each color-coded to certain emotional states (Ex. Red - Anger, Yellow - Joy). They can then follow their emotional states through those time periods and reflect on their actions, or thoughts in the mood journal.
|
losing
|
## Inspiration
Being disorganized can put a strain on your productivity and mental health. Since all of us have dealt with this before, we wanted to create an application that would increase our productivity while being user-friendly and quick. Even when we know what we have to do throughout the day, it's tough to organize those tasks around an already busy schedule.
## What it does
The user inputs up to 10 daily goals into our application along with the priority of accomplishing each one; our algorithm then sorts them by what we researched to be the best flow for accomplishing personalized goals, and displays a sorted list showing which goals you should tackle first for the highest productivity. Finally, you have the choice of uploading the plan to your Google Calendar, which lays the events out so they don't overlap with each other or with any current events in your day.
## How we built it
Implemented in Python with the tkinter library on the front end, and Python with the Google Calendar API on the back end.
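A minimal sketch of the scheduling step (credentials come from the usual google-auth OAuth flow; the times and event body are examples):

```python
from googleapiclient.discovery import build

def schedule_break(creds):
    """Check busy blocks, then insert a break into the primary calendar."""
    service = build("calendar", "v3", credentials=creds)

    busy = service.freebusy().query(body={
        "timeMin": "2020-01-18T09:00:00-05:00",
        "timeMax": "2020-01-18T21:00:00-05:00",
        "items": [{"id": "primary"}],
    }).execute()["calendars"]["primary"]["busy"]
    print("busy blocks:", busy)  # the real app picks a gap between these

    event = {
        "summary": "Lunch break",
        "start": {"dateTime": "2020-01-18T12:00:00", "timeZone": "America/Toronto"},
        "end": {"dateTime": "2020-01-18T12:30:00", "timeZone": "America/Toronto"},
    }
    service.events().insert(calendarId="primary", body=event).execute()
```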
## Challenges we ran into
* Google Calendar API doesn't have a display for which time blocks are taken
* Sorting algorithms based on user inputted goals
* Taking user input from front end and using it for the back end
## Accomplishments that we're proud of
Successfully overcame all of our challenges and finished everything we had planned out Saturday morning
## What we learned
* We learned Python's library tkinter, how to handle Google's API through Python, and sorting algorithms
* Stay organized as a team as well as tackle each of our individual jobs done
* Learned how to divide up the work evenly
## What's next for lockITdown
* Implement all timezones for the user (currently only America/Toronto)
* User inputs starting time of their day
* User follows their schedule and we can update our code and provide them points
* Provide a place for user feedback
* Publish the application on Play Store/ iOS App store (implement it in java)
|
## Inspiration
Finding and paying for parking sucks, so we made it better.
## What it does
Spot is a parking assist mobile web app that removes the hassle of finding and paying for parking. Spot allows a user to set where they are going, and will show nearby parking spots, as well as meter time limits and cost per hour.
## How we built it
We built a Flask app that we then deployed to Heroku. The app itself is based on an Arduino sensor that would monitor a parking spot, and transmit the status, whether the spot is available or not, to a web app.
## Challenges we ran into
Spot had two very large challenges, both with the hardware and software aspects of the system.
In regard to hardware, the sensor and circuit that Spot uses are very simple; however, pushing the data a sensor receives to a server is quite difficult. The normal way to do this is with an Arduino Ethernet Shield, but none were available. The work-around was a simple Python script that reads from the Arduino's serial port and pushes the data up to the server.
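A minimal sketch of that work-around (the serial port, baud rate, and server endpoint are assumptions):

```python
import requests
import serial

SERVER = "https://spot-example.herokuapp.com/spots/1"  # hypothetical endpoint
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

while True:
    # The Arduino sketch prints one status word per reading.
    line = arduino.readline().decode().strip()  # e.g. "OCCUPIED" or "FREE"
    if line:
        requests.post(SERVER, json={"status": line})
```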
Working with a map API was difficult, and something none of us had experience with.
|
Group Number: 81
## Inspiration
As students with busy schedules, we often forget to take breaks and skip meals when we are busy, and these things are vital for mental health. This also happens for many people who are also overworked and stressed such as people in the workforce, those juggling multiple responsibilities, and those who just can't seem to stop working.
We know that a lot of people strongly rely on a calendar to plan their day, so we created an app that blocks off time for you to eat your meals and take breaks. This way, the act of taking a lunch break seems like completing a necessary task, and not like slacking off.
## What it does
Breaktime analyzes your calendar events for the day and inserts breaks for meals and rest at appropriate times. Our app also analyzes your calendar events for the week to calculate how much time you spend on work and responsibilities in comparison to fun activities and breaks or self-care.
We have three main screens. The home view lets you enter your mood for today, and it will give you ideas of quests to complete, aka ways of self-care to improve your mood. We monitor the user's trends and try to notice patterns to help prevent burnout proactively. The goals screen lets you record when you have completed meals, taken your daily break, and other tasks you would like to track. The calendar screen reads from your calendar and shows you how many events you have for the week as well as how long they are. It uses a machine-learning model to categorize events into work, fun, or self-care and shows you how your days are broken up. You can also schedule your breaks and meals for the day with the click of a button! It finds free time in your schedule to do so.
## How we built it
Our app is built using React-Native with Expo. We used Native Base as a frontend framework to help with the design. We used Firebase for the backend to store user data.
We created an ML model in mage to analyze calendar events to find out how much time a user is spending on responsibilities versus recreation.
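Our model was built in Mage, but for illustration, the same event categorization can be sketched as a simple text classifier (shown here with scikit-learn; the titles and labels are toy examples):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set of calendar event titles.
titles = ["CSC343 lecture", "Gym with friends", "Dinner break", "Project standup"]
labels = ["work", "fun", "self-care", "work"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)
print(model.predict(["Movie night"]))  # categorize a new event
```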
## Challenges we ran into
This was one of the first times we worked with machine learning. Mage made it a nice experience as it pointed out ways to clean the data. We also learned about ways to improve the model so we redeployed and used a second model with things we learned from the first one. The first model was 64% accurate but the second model was 84% accurate!
## Accomplishments that we're proud of
We are proud of creating our first accurate ML model. We learned a lot about data cleaning, using Mage, and statistics related to models. Two of us have taken an ML class before, but this experience gave us actual experience on how ML is used rather than some of the theory we learned in the class.
## What we learned
We learned about how to make a more consistent UI to improve the user experience. We also learned about ML.
## What's next for Breaktime
We used our own calendars to train the ML model to simulate real-world data for students. But our audience is for anyone who uses calendars to organize their lives such as health-care workers and CEOs of companies. Thus, we would need more data to train the model. We could also try and personalize the model for each person as they might have specific events that relate to a certain category just for them as some events are like relaxation to certain people, but feel like work to others. We can also do more with the data we get. We can find correlations between completed goals and the user's moods.
|
losing
|
## Inspiration
Tinder but Volunteering
## What it does
Connects people to volunteering organizations. Makes volunteering fun, easy and social
## How we built it
react for web and react native
## Challenges we ran into
So MANY
## Accomplishments that we're proud of
Getting a really solid idea and a decent UI
## What we learned
SO MUCH
## What's next for hackMIT
|
Copyright 2018 The Social-Engineer Firewall (SEF)
Written by Christopher Ngo, Jennifer Zou, Kyle O'Brien, and Omri Gabay.
Founded Treehacks 2018, Stanford University.
## Inspiration
No matter how secure your code is, the biggest cybersecurity vulnerability is the human vector. It takes very little to exploit an end-user with social engineering, yet the consequences are severe.
Practically every platform, from banking to social media, to email and corporate data, implements some form of self-service password reset feature based on security questions to authenticate the account “owner.”
Most people wouldn’t think twice to talk about their favourite pet or first car, yet such sensitive information is all that stands between a social engineer and total control of all your private accounts.
## What it does
The Social-Engineer Firewall (SEF) aims to protect us from these threats. Upon activation, SEF actively monitors for known attack signatures, with speech-to-text transcription courtesy of SoundHound's Houndify engine. SEF is the world's first solution to protect OSI Layer 8 (the end-user/human) from social-engineering attacks.
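A minimal sketch of the signature-matching idea (the phrase list is illustrative, not SEF's real signature set):

```python
import re

# Phrases that often show up when someone phishes for security-question answers.
ATTACK_SIGNATURES = [
    r"first (car|pet)",
    r"mother'?s maiden name",
    r"security question",
    r"one[- ]time (code|password)",
]

def flag_social_engineering(transcript: str):
    """Return the signatures the live transcript matches."""
    return [sig for sig in ATTACK_SIGNATURES if re.search(sig, transcript.lower())]

print(flag_social_engineering("What was your mother's maiden name?"))
```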
## How it was built
SEF is a web application written in React Native, deployed on Microsoft Azure with Node.js. iOS and Android app versions are powered by Expo. Real-time audio monitoring is powered by the Houndify SDK.
## Todo List
Complete development of TensorFlow model
## Development challenges
Our lack of experience with new technologies provided us with many learning opportunities.
|
## Inspiration
The ongoing effects of climate change and the theme of nature preservation motivated us to think about how we can promote sustainability on campus. A lot of initiatives have been taken by companies, people, and universities to tackle this and promote a sustainable lifestyle, but they haven't been very impactful: they either come as a one-time million-dollar investment or as a set of guidelines without context or implementation. We came up with the concept of an app that allows students to participate in environmentally sustainable activities in their everyday lives and rewards them for doing so. We believe that sustainability should not be a one-time investment but an everyday practice, and this is what we aim to achieve with SustainU.
## What it does
*Check out the entire documentation on the* [GitHub Repo](https://github.com/kritgrover/htv-sustainu/blob/main/README.md)!
SustainU is essentially a mobile-app-based rewarding system that motivates students and even faculty members to make contributions to the environment while earning points in the app. Students can complete tasks shown in the app to gain points, which can be redeemed for discounts at campus shops or donated to charities.
## How we built it
We built it from scratch using **Figma, JavaScript, HTML** and **CSS**.
After a brief brainstorming session, we decided to focus on the theme of nature. From there we started researching and discussing all the different niches for which we could possibly create an effective solution. Building on that, we came up with the idea for SustainU and went ahead with it because of how feasible yet impactful the concept can be. It requires a very minimal start-up cost and little to no maintenance.
The prototype of the app was built from scratch on Figma. Our team spent hours working on the designs, colors, fonts, and the overall UI experience. Although not yet fully polished, this prototype clearly demonstrates the workflow and UI of the app, while taking all the core concepts of design into consideration.
The website is made to briefly describe our app and what it does, while providing a slideshow showing the basic workings of the app. It was built using HTML, CSS, and a bit of JavaScript, and is really just for showing the world what SustainU is all about. We tackled a lot of issues related to the layout and bugs during development, all of which we solved through research, discussion, and critical thinking.
## Challenges we ran into
Most of the challenges we ran into were very specific. For instance, how should we visualize the streak feature in the app, whether we should implement an achievement system or not, and buggy JavaScript code. We solved most of the problems by reasoning and technical knowledge, and came up with logical and effective solutions. Another challenge we ran into was choosing how to build the app itself as everyone in our team had different experiences and preferences but we managed to implement and showcase our project, with a bit of compromise and a lot of hard work.
## Accomplishments that we're proud of
It is a thrilling fact that we built a design prototype of an app that could make a difference and that can be showcased to everyone. It is not too much of an achievement, but we now know how to get things started, and we realize it will be much easier for us to build the app from the ground up and put it into practice. Also, working collaboratively as a whole team, with different small tasks for each person from day to night, was a great learning experience. Facing challenges head-on with tight deadlines made us think more rationally, and everyone on the team learnt something new and gained valuable hands-on experience with concepts they weren't already familiar with.
## What we learned
So far, we learned how to form an idea when given a general topic. We've learnt how to work as a team and utilize each person's skills to the fullest. We've also gained insight about the critical steps required for building an app. Moreover, we've learnt a lot about clean and artistic design. All of the skills we acquired are priceless and are worth cherishing. With regards to the web application, we learnt a lot about the crucial development concepts like typography, laying out the structure using HTML, designing using CSS and adding functionality using JavaScript.
## What's next for SustainU App
Ideally, we will launch our app for the students at UofT. If feasible, this can be scaled to the public and can be incorporated in our everyday lives where people will be given a platform and incentive to be more sustainable. We at SustainU wish to launch our app to the public and promote environmental sustainability with users across the GTA. An ambitious idea such as this would require cooperation and partnerships with both the private and public sectors. For example, a proposed idea is to have points gained through frequent use of public transport. In order to keep track of when a user uses a public transit service we'd have to partner up with PRESTO in order to link the transactions with the points system of SustainU. Another proposed idea is to partner up with the Municipal Governments in allowing points to be gained through the use of Bike Share. Furthermore, environmentally sustainable small businesses that wish to increase their brand exposure can partner with SustainU and provide offers and discounts on their products. Through SustainU individuals can finally be rewarded by being green.
|
winning
|
## Inspiration
Donut was originally inspired by a viral story about dmdm hydantoin, a chemical preservative used in hair products rumoured to be toxic and lead to hair loss. This started a broader discussion about commercial products in general and the plethora of chemical substances and ingredients we blindly use and consume on a daily basis. We wanted to remove these veils that can impact the health of the community and encourage people to be more informed consumers.
## What it does
Donut uses computer vision to read the labels off packaging through a camera. After acquiring this data, it displays all the ingredients in a list and uses sentiment analysis to determine the general safety of each ingredient. Users can click into each ingredient to learn more and read related articles that we recommend in order to make more educated purchases.
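A rough sketch of the scan-and-score flow under stated assumptions (the OCR library, ingredient parsing, and the tiny safety lexicon are illustrative stand-ins for our actual pipeline):

```python
import pytesseract
from PIL import Image

# Illustrative lexicon entries; the real system scores ingredients from articles.
UNSAFE = {"dmdm hydantoin", "formaldehyde", "parabens"}

# OCR the label, then split it into candidate ingredients on commas.
text = pytesseract.image_to_string(Image.open("label.jpg"))
ingredients = [i.strip().lower()
               for i in text.replace("\n", " ").split(",") if i.strip()]

for ing in ingredients:
    print(ing, "-> flag for review" if ing in UNSAFE else "-> likely safe")
```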
## How we built it
## Challenges we ran into
Front end development was a challenge since it was something our team was inexperienced with, but there’s no better place to learn than at a hackathon! Fighting away the sleepiness was another hurdle too.
## Accomplishments that we're proud of
We got more done than we imagined with a 3 person team :)
Michael is proud that he was very productive with the backend code :D
Grace is proud that she wrote any code at all as a designer o\_o
Denny is proud to have learned more about HTTP requests and worked with both the front and backend :0
## What we learned
We could have benefited from a more well-balanced team (befriend some front-end devs!). Sleep is important. Have snacks at the ready.
## What's next for Donut Eat This
Features that we would love to implement next would be a way to upload photos from a user’s album and a way to view recent scans.
|
## Inspiration
Due to heavy workloads and family problems, people often forget to take care of their health and diet. Common health problems people nowadays face are blood pressure (BP) issues, heart problems, and diabetes. Many people also face mental health problems due to studies, jobs, or other pressures. This project can help people find out about their health problems.
It also helps people recycle items easily, as garbage is divided into 12 different classes.
And it helps people who do not have any knowledge of plants by predicting whether a plant has a disease or not.
## What it does
On the Garbage page, when we upload an image, it classifies which kind of garbage it is, helping people recycle easily.
On the Mental Health page, after we answer some questions, it predicts whether we are facing some kind of mental health issue.
The Health page is divided into three parts: one page predicts whether you have heart disease, the second predicts whether you have diabetes, and the third predicts whether you have BP issues.
The COVID-19 page classifies whether you have COVID or not.
The Plant\_Disease page predicts whether a plant has a disease or not.
## How we built it
We built it using Streamlit and OpenCV.
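The app combines several models, but a minimal sketch of the Streamlit pattern behind each classification page might look like this (the model file, input size, and class names are assumptions):

```python
import cv2
import numpy as np
import streamlit as st
import tensorflow as tf

model = tf.keras.models.load_model("garbage_classifier.h5")  # hypothetical model file
CLASSES = ["battery", "cardboard", "glass", "metal", "paper", "plastic"]  # subset of 12

st.title("Garbage Classification")
upload = st.file_uploader("Upload an image", type=["jpg", "png"])
if upload:
    # Decode the upload with OpenCV and normalize it for the model.
    img = cv2.imdecode(np.frombuffer(upload.read(), np.uint8), cv2.IMREAD_COLOR)
    img = cv2.resize(img, (224, 224)) / 255.0
    pred = model.predict(img[None, ...])
    st.write("Prediction:", CLASSES[int(np.argmax(pred))])
```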
## Challenges we ran into
Deploying the website to Heroku was very difficult because we don't generally deploy our projects. Most of this was new to us except for deep learning and ML, so it was very difficult overall given the time constraint. The overall logic, and figuring out how we should calculate everything, was difficult to pin down within the time limit. Overall, time was the biggest constraint.
## Accomplishments that we're proud of
## What we learned
TensorFlow, Streamlit, Python, HTML5, CSS3, OpenCV, machine learning, deep learning, and using different Python packages.
## What's next for Arogya
|
## Inspiration
We have a desire to spread awareness surrounding health issues in modern society. We also love data and the insights it can provide, so we wanted to build an application that makes it easy and fun to explore the data that we all create and learn something about being active and healthy.
## What it does
Our web application processes data exported by Apple health and provides visualizations of the data as well as the ability to share data with others and be encouraged to remain healthy. Our educational component uses real world health data to educate users about the topics surrounding their health. Our application also provides insight into just how much data we all constantly are producing.
## How we built it
We build the application from the ground up, with a custom data processing pipeline from raw data upload to visualization and sharing. We designed the interface carefully to allow for the greatest impact of the data while still being enjoyable and easy to use.
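The write-up does not show the pipeline code, but Apple Health's export format is well documented: a single export.xml file containing a flat list of `<Record>` elements. Below is a sketch of the first ingestion stage under that assumption; the step-count focus and function name are ours.

```python
# Sketch only: assumes Apple Health's standard export.xml layout.
import xml.etree.ElementTree as ET
from collections import defaultdict

def daily_step_counts(export_path: str) -> dict:
    steps = defaultdict(int)
    # iterparse keeps memory flat: exports can be hundreds of MB.
    for _, elem in ET.iterparse(export_path, events=("end",)):
        if elem.tag == "Record" and \
           elem.get("type") == "HKQuantityTypeIdentifierStepCount":
            day = elem.get("startDate", "")[:10]   # "YYYY-MM-DD" prefix
            steps[day] += int(float(elem.get("value", 0)))
        elem.clear()                                # free the parsed element
    return dict(steps)
```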
## Challenges we ran into
We had a lot to learn, especially about moving and storing large amounts of data and especially doing it in a timely and user-friendly manner. Our biggest struggle was handling the daunting task of taking in raw data from Apple health and storing it in a format that was easy to access and analyze.
## Accomplishments that we're proud of
We're proud of the completed product that we came to despite early struggles to find the best approach to the challenge at hand. An architecture this complicated with so many moving components - large data, authentication, user experience design, and security - was above the scope of projects we worked on in the past, especially to complete in under 48 hours. We're proud to have come out with a complete and working product that has value to us and hopefully to others as well.
## What we learned
We learned a lot about building large scale applications and the challenges that come with rapid development. We had to move quickly, making many decisions while still focusing on producing a quality product that would stand the test of time.
## What's next for Open Health Board
We plan to expand the scope of our application to incorporate more data insights and educational components. While our platform is built to be entirely mobile-friendly, a native iPhone application is hopefully in the near future to aid in keeping data in sync with minimal work from the user. We plan to continue developing the data-sharing and social aspects of the platform to encourage communication around the topic of health and wellness.
|
winning
|
## Inspiration
Due to the pandemic, feelings of isolation have spiked resulting in mental health issues. We wanted to explore how we could help those struggling with mental health issues using music therapy.
## What it does
The website allows for people to cope with feelings of sadness, frustration, etc. with the help of music therapy. Once clicking on their mood, they can get access to a Spotify playlist which will play tracks to lift their mood.
## How we built it
We built the website using HTML and CSS, and we accessed the playlists through Spotify. We then hosted the website on Firebase.
## Challenges we ran into
Initially, we wanted to access Spotify playlists through the Spotify SDK/Web API, which was still in beta mode. Due to our lack of experience with handling APIs, we were not able to access Spotify music through the means of the SDK. Additionally, we wanted to add a chatbox feature in our pages using JavaScript, however, we could not finish the chatbox given the time we had left.
## Accomplishments that we're proud of
We are proud that we learned more about databases, especially the usage of Firebase. We are also proud that we could import the Spotify playlists successfully.
## What we learned
We learned how to host websites using Firebase and polished our Web Development skills in the process. We also learned how to communicate properly despite living in different time zones.
## What's next for Serene
We want to add chatbox features so that people who are using the app can also chat with each other to feel less alone.
|
## Inspiration
We wanted to create a convenient, modernized journaling application with methods and components that are backed by science. Our spin on the readily available journal logging application is our take on the idea of awareness itself. What does it mean to be aware? What form or shape can mental health awareness come in? These were the key questions that we were curious about exploring, and we wanted to integrate this idea of awareness into our application. The “awareness” approach of the journal functions by providing users with the tools to track and analyze their moods and thoughts, as well as allowing them to engage with the visualizations of the journal entries to foster meaningful reflections.
## What it does
Our product provides a user-friendly platform for logging and recording journal entries and incorporates natural language processing (NLP) to conduct sentiment analysis. Users will be able to see generated insights from their journal entries, such as how their sentiments have changed over time.
## How we built it
Our front-end is powered by the ReactJS library, while our backend is powered by ExpressJS. Our sentiment analyzer was integrated with our NodeJS backend, which is also connected to a MySQL database.
## Challenges we ran into
Creating this app in such a short period of time proved to be more of a challenge than we anticipated. Our product was meant to comprise more features for both the journaling aspect and the mood-tracking aspect of the app. We had planned on showcasing an aggregation of the user's mood over different time periods (for instance, daily, weekly, or monthly). On top of that, we had initially planned on deploying our web app on a remote hosting server, but due to the time constraint, we decided to reduce our proof of concept to the most essential core features of our idea.
## Accomplishments that we're proud of
Designing and building such an amazing web app has been a wonderful experience. To think that we created a web app that could potentially be used by individuals all over the world to help them keep track of their mental health is a proud moment, and it embraces the essence of a hackathon in its entirety. This accomplishment is one our team can be proud of. The animation video is an added bonus; visual presentations have a way of captivating an audience.
## What we learned
By going through the whole cycle of app development, we learned that no single part makes up the whole: designing an app is more than just coding it, and the real work starts in showcasing the idea to others. In addition, we learned the importance of a clear roadmap for approaching issues (for example, coming up with an idea) and that complicated problems do not require complicated solutions; our app, in its simplicity, lets users keep a journal and track their moods over time. Most importantly, we learned that the simplest of ideas can be the most useful if they are thought through well.
## What's next for Mood for Thought
Making a mobile app could have been better, given that it would align with our goals of making journaling as easy as possible. Users could also retain a degree of functionality offline. This could have also enabled a notification feature that would encourage healthy habits.
More sophisticated machine learning would have the potential to greatly improve the functionality of our app. Right now, simply determining either positive/negative sentiment could be a bit vague.
Adding recommendations on good journaling practices could have been an excellent addition to the project. These recommendations could be based on further sentiment analysis via NLP.
|
## Inspiration
We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do.
## What it does
Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams.
## How we built it
We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application.
## Challenges we ran into
This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but I believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application!
## What we learned
We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers.
## What's next for Discotheque
If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music) similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure the good availability of good quality music.
|
losing
|
## FLEX [Freelancing Linking Expertise Xchange]
## Inspiration
Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away.
## What it does
Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks follow-up questions about any other factors they need in a candidate. This data is then analyzed and matched against our vast database of freelancers to surface the best-fitting candidates. The AI then talks back to the recruiter, presenting the top candidates based on the recruiter's requirements. Once the recruiter picks the right candidate, they can create a smart contract that's securely stored and managed on the blockchain for transparent payments and agreements.
## How we built it
We built starting with the Frontend using **Next.JS**, and deployed the entire application on **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**.
Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on factors provided by the client. For secure transactions, we utilized **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through Smart Contracts developed in **Move**. We also used Flask and **Express.js** to manage backend and routing efficiently.
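As a rough illustration of the candidate-matching query, here is a sketch assuming the singlestoredb Python client and a `freelancers` table with a full-text index on a `skills` column; the table, columns, and DSN are our assumptions, not details from the write-up.

```python
# Sketch only: schema and connection string are hypothetical. Assumes the
# table was created with a FULLTEXT index on `skills` so MATCH ... AGAINST
# can score rows by relevance.
import singlestoredb as s2

def top_candidates(keywords: str, limit: int = 5):
    conn = s2.connect("user:password@host:3306/flex")   # placeholder DSN
    cur = conn.cursor()
    cur.execute(
        """
        SELECT name, skills,
               MATCH(skills) AGAINST (%s) AS score
        FROM freelancers
        WHERE MATCH(skills) AGAINST (%s)
        ORDER BY score DESC
        LIMIT %s
        """,
        (keywords, keywords, limit),
    )
    return cur.fetchall()

print(top_candidates("move smart contracts blockchain"))
```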
## Challenges we ran into
We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable Speech to Text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full stack application.
## Accomplishments that we're proud of
We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies.
## What we learned
We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration.
## What's next for FLEX
Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
|
## **1st Place!**
## Inspiration
Sign language is a universal language which allows many individuals to exercise their intellect through common communication. Many people around the world suffer from hearing loss or mutism and rely on sign language to communicate. Even those who do not experience these conditions may still require the use of sign language in certain circumstances. We plan to expand our company to be known worldwide to fill the lack of a virtual sign language learning tool that is accessible to everyone, everywhere, for free.
## What it does
Here at SignSpeak, we create an encouraging learning environment that provides computer vision sign language tests to track progression and to perfect sign language skills. The UI is built around simplicity and usability. We have provided a teaching system that engages the user in lessons, then a progression test. The lessons include the material that will be tested during the lesson quiz. Once the user has completed the lesson, they are redirected to the quiz, which can result in either failure or success. Successfully completing the quiz congratulates the user and directs them to the next lesson; failure means the user must retake the lesson. The user retakes the lesson until they pass the quiz and can proceed to the following lesson.
## How we built it
We built SignSpeak on React with Next.js. For our sign recognition, we used TensorFlow with a MediaPipe model to detect points on the hand, which were then compared with preassigned gestures.
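The app itself runs in the browser, but the recognition idea translates directly to MediaPipe's Python API; below is a sketch of that idea under our own assumptions (gesture templates and the nearest-template matching rule are illustrative).

```python
# Sketch only: the real app uses TensorFlow.js in the browser; templates
# and the distance-based classifier are our illustrative assumptions.
from typing import Optional

import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def landmarks_from_frame(frame) -> Optional[np.ndarray]:
    """Return the 21 (x, y) hand landmarks for one frame, or None."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_hand_landmarks:
            return None
        pts = result.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y] for p in pts])

def classify(points: np.ndarray, templates: dict) -> str:
    # Nearest preassigned gesture by total landmark distance.
    return min(templates, key=lambda g: np.linalg.norm(points - templates[g]))
```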
## Challenges we ran into
We ran into multiple roadblocks mainly regarding our misunderstandings of Next.js.
## Accomplishments that we're proud of
We are proud that we managed to come up with so many ideas in such little time.
## What we learned
Throughout the event, we participated in many workshops and created many connections. We engaged in many conversations that involved certain bugs and issues that others were having and learned from their experience using javascript and react. Additionally, throughout the workshops, we learned about the blockchain and entrepreneurship connections to coding for the overall benefit of the hackathon.
## What's next for SignSpeak
SignSpeak is seeking to continue services for teaching people to learn sign language. For future use, we plan to implement a suggestion box for our users to communicate with us about problems with our program so that we can work quickly to fix them. Additionally, we will collaborate and improve companies that cater to phone audible navigation for blind people.
|
## Inspiration
What is one of the biggest motivators at hackathons? The thrill of competition. What if you could bring that same excitement into your daily life, using your speech to challenge your friends? Our app gamifies everyday conversations and interactions, letting you and your friends compete to see who projects the most positive or negative aura.
## What it does
* Captures live audio input and converts it into text using React’s speech recognition library.
* Analyzes the transcribed text with Cohere's semantic similarity model, encoding both the input and our dataset into vector embeddings
* Uses cosine similarity to compare the input with synthetic data, generated using ChatGPT, in order to evaluate whether the speech conveys a positive or negative aura (a sketch of this step follows the list)
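Here is a minimal sketch of that aura-scoring step, assuming Cohere's classic Python SDK; the model name, placeholder API key, and example phrases are our assumptions, not details from the write-up.

```python
# Sketch only: model name and the tiny synthetic lexicon are assumptions.
import cohere
import numpy as np

co = cohere.Client("COHERE_API_KEY")   # placeholder key
POSITIVE = ["you did amazing", "thanks for helping me out"]
NEGATIVE = ["that was so annoying", "stop complaining"]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def aura(transcript: str) -> str:
    texts = [transcript] + POSITIVE + NEGATIVE
    emb = co.embed(texts=texts, model="embed-english-v2.0").embeddings
    query, rest = np.array(emb[0]), np.array(emb[1:])
    scores = [cosine(query, r) for r in rest]
    pos = max(scores[: len(POSITIVE)])       # best match among positive refs
    neg = max(scores[len(POSITIVE):])        # best match among negative refs
    return "positive aura" if pos >= neg else "negative aura"
```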
## Challenges we ran into
* Integrating real-time speech recognition with accurate transcription, especially when dealing with diverse speech patterns and accents.
* Acquiring a continuous audio input which can then be passed along for efficient transcription.
* Configuring Cohere’s API to work seamlessly with a large dataset and ensure fast, accurate sentiment analysis.
* Getting accurate data on words/actions that constitute "positive aura" and "negative aura".
## Accomplishments that we're proud of
* Cohere Embeddings for Sentiment Analysis: Integrating Cohere’s powerful semantic embeddings was another significant milestone. We used their embeddings to analyze and determine the sentiment of transcribed text, mapping speech patterns to either positive or negative aura. We’re proud of this implementation because it brought depth to the app.
## What's next for Traura
* Turning this prototype into a full-fledged web app that users can access anywhere, including the full implementation of the leaderboard functionality to foster that friendly thrill of competition.
|
winning
|
# Recipe Finder - Project for NWHacks 2020
## The Problem
About 1/3 of the food produced in the world is lost or wasted each year. There are many reasons for this, including not knowing how to cook with certain food, not having time to cook it, or cooking food that does not taste good. Regardless, food waste is a serious problem that wastes money, wastes time, and harms the environment.
## Our Solution
Our web app, Recipe Nest, is a chatbot deployed on Slack, on the web, and through phone calls (Messenger and Google Assistant are currently awaiting approval). Users simply enter all the filters they would like their recipe to contain and Recipe Nest finds a recipe conforming to the users' requests! We believe that making this application as accessible as possible reflects our goal of making it easy to get started with cooking at home and not wasting food!
## How we did it
We used Python and Flask for the backend. Our chatbot was built with Google Cloud's Dialogflow, which we trained ourselves to handle user input. The front end was built with CSS, HTML, and Bootstrap.
## Going forward
We hope to add user logins via Firebase. We would then add features such as 1. Saving food in your fridge 2. Having the app remind you of this food 3. Allow the user to save recipes that they like. Additionally, we would like to add more filters, such as nutrition, cost, and excluding certain foods, and finally, create a better UI/UX experience for the user.
|
## Inspiration
As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse through the items in the receipts and add the items into the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make.
## How We Built It
On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data.
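As a rough illustration of that OCR step, here is a sketch assuming the Google Cloud Vision Python client; the noise-filtering heuristic at the end is a simplified stand-in for the team's actual post-processing.

```python
# Sketch only: the `noise` filter is a simplified stand-in for the
# team's real post-processing.
from google.cloud import vision

def receipt_items(image_bytes: bytes) -> list:
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    if not response.text_annotations:
        return []
    # The first annotation holds the full detected text block.
    lines = response.text_annotations[0].description.splitlines()
    # Drop prices, totals, and store headers; keep likely item names.
    noise = ("total", "subtotal", "tax", "$")
    return [l for l in lines if l and not any(n in l.lower() for n in noise)]
```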
On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase.
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask server to Google App Engine and styling in React. We found that it was not possible to write to Google App Engine storage; instead, we had to write to Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
|
## Inspiration
Over the course of the past year, one of the most heavily impacted industries due to the COVID-19 pandemic has been the service sector. Specifically, COVID-19 has transformed the financial viability of restaurant models. Moving forward, it is projected that 36,000 small restaurants will not survive the winter, as restaurants have thus far survived by relying on online dining services such as Grubhub or Doordash. However, these services charge flat premiums on every sale, driving up food prices and cutting at least 20% from a given restaurant's revenue. Within these platforms, the most popular, established restaurants are prioritized by built-in search algorithms. As such, not all small restaurants can join these expensive options, and there is no meaningful way for small restaurants to survive during COVID.
## What it does
Potluck provides a platform for chefs to conveniently advertise their services to customers who will likewise be able to easily find nearby places to get their favorite foods. Chefs are able to upload information about their restaurant, such as their menus and locations, which is stored in Potluck’s encrypted database. Customers are presented with a personalized dashboard containing a list of ten nearby restaurants which are generated using an algorithm that factors in the customer’s preferences and sentiment analysis of previous customers. There is also a search function which will allow customers to find additional restaurants that they may enjoy.
## How I built it
We built a web app with Flask where users can feed in data for a specific location, cuisine of food, and restaurant-related tags. Based on this input, restaurants in our database are filtered and ranked based on the distance to the given user location calculated using Google Maps API and the Natural Language Toolkit (NLTK), and a sentiment score based on any comments on the restaurant calculated using Google Cloud NLP. Within the page, consumers can provide comments on their dining experience with a certain restaurant and chefs can add information for their restaurant, including cuisine, menu items, location, and contact information. Data is stored in a PostgreSQL-based database on Google Cloud.
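As a rough sketch of the comment-scoring step, assuming the google-cloud-language Python client described above; the ranking formula that combines distance and sentiment is our own guess at how the two signals might be weighted.

```python
# Sketch only: the rank_key weighting is an illustrative assumption.
from google.cloud import language_v1

def sentiment_score(comments: list) -> float:
    client = language_v1.LanguageServiceClient()
    scores = []
    for text in comments:
        doc = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT
        )
        result = client.analyze_sentiment(request={"document": doc})
        scores.append(result.document_sentiment.score)   # range -1.0 .. 1.0
    return sum(scores) / len(scores) if scores else 0.0

def rank_key(restaurant: dict) -> float:
    # Closer and better-reviewed restaurants float to the top.
    return restaurant["distance_km"] - 2.0 * restaurant["sentiment"]
```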
## Challenges I ran into
One of the challenges that we faced was coming up a solution that matched the timeframe and bandwidth of our team. We did not want to be too ambitious with our ideas and technology yet provide a product that we felt was novel and meaningful.
We also found it difficult to integrate the backend with the frontend. For example, we needed the results from the Natural Language Toolkit (NLTK) in the backend to be used by the Google Maps JavaScript API in the frontend. By utilizing Jinja templates, we were able to serve the webpage and modify its script code based on the backend results from NLTK.
## Accomplishments that I'm proud of
We were able to identify a problem that was not only very meaningful to us and our community, but also one that we had a reasonable chance of approaching with our experience and tools. Not only did we get our functions and app to work very smoothly, we ended up with time to create a very pleasant user-experience and UI. We believe that how comfortable the user is when using the app is equally as important as how sophisticated the technology is.
Additionally, we were happy that we were able to tie in our product into many meaningful ideas on community and small businesses, which we believe are very important in the current times.
## What I learned
Tools we tried for the first time: Flask (with the additional challenge of running HTTPS), Jinja templates for dynamic HTML code, Google Cloud products (including Google Maps JS API), and PostgreSQL.
For many of us, this was our first experience with a group technical project, and it was very instructive to find ways to best communicate and collaborate, especially in this virtual setting. We benefited from each other’s experiences and were able to learn when to use certain ML algorithms or how to make a dynamic frontend.
## What's next for Potluck
We want to incorporate an account system to make user-specific recommendations (Firebase). Additionally, regarding our Google Maps interface, we would like to have dynamic location identification. Furthermore, the capacity of our platform could help us expand the program to pair people with any type of service, not just food. We believe that the flexibility of our app could be used for other ideas as well.
|
partial
|
## Inspiration
Our inspiration comes from people who require immediate medical assistance when they are located in remote areas. The project aims to reinvent the way people in rural or remote settings, especially seniors who are unable to travel frequently, obtain medical assistance by remotely connecting them to medical resources available in their nearby cities.
## What it does
Tango is a tool to help people in remote areas (e.g. villagers, people on camping/hiking trips, etc.) to have access to direct medical assistance in case of an emergency. The user would have the device on them while hiking, along with a smart watch. If the device senses a sudden fall, the vital signs of the user provided by the watch are sent to the nearest doctor/hospital in the area. The doctor can then assist the user in the most appropriate way now that the user's vital signs are directly relayed to the doctor. In the case of no response from the user, medical assistance can be sent using their location.
## How we built it
The sensor is made out of the Particle Electron Kit, which, based on input from an accelerometer and a sound sensor, assesses whether the user has fallen down or not. If the user has fallen, signals from this sensor are sent to the doctor, along with health data from the smart watch.
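The firmware itself runs on the Particle Electron; this short Python sketch only illustrates the detection logic described above. The thresholds are invented for illustration, not taken from the write-up.

```python
# Sketch only: thresholds are illustrative assumptions, and the real
# logic runs as firmware on the Particle Electron, not in Python.
import math

IMPACT_G = 2.5     # acceleration spike suggesting a fall (assumed)
LOUD_DB = 70.0     # sound level accompanying an impact (assumed)

def fell(ax: float, ay: float, az: float, sound_db: float) -> bool:
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)   # total acceleration in g
    return magnitude > IMPACT_G and sound_db > LOUD_DB
```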
## Challenges we ran into
One of our biggest challenges we ran into was taking the data from the cloud and loading it on the web page to display it.
## Accomplishments that we are proud of
This was our first experience with the Particle Electron and, for some of us, our first experience with a hardware project.
## What we learned
We learned how to use the Particle Electron.
## What's next for Tango
Integration of the Pebble watch to send the vital signs to the doctors.
|
## Inspiration
Our journey with PathSense began with a deeply personal connection. Several of us have visually impaired family members, and we've witnessed firsthand the challenges they face navigating indoor spaces. We realized that while outdoor navigation has seen remarkable advancements, indoor environments remained a complex puzzle for the visually impaired.
This gap in assistive technology sparked our imagination. We saw an opportunity to harness the power of AI, computer vision, and indoor mapping to create a solution that could profoundly impact lives. We envisioned a tool that would act as a constant companion, providing real-time guidance and environmental awareness in complex indoor settings, ultimately enhancing independence and mobility for visually impaired individuals.
## What it does
PathSense, our voice-centric indoor navigation assistant, is designed to be a game-changer for visually impaired individuals. At its heart, our system aims to enhance mobility and independence by providing accessible, spoken navigation guidance in indoor spaces.
Our solution offers the following key features:
1. Voice-Controlled Interaction: Hands-free operation through intuitive voice commands.
2. Real-Time Object Detection: Continuous scanning and identification of objects and obstacles.
3. Scene Description: Verbal descriptions of the surrounding environment to build mental maps.
4. Precise Indoor Routing: Turn-by-turn navigation within buildings using indoor mapping technology.
5. Contextual Information: Relevant details about nearby points of interest.
6. Adaptive Guidance: Real-time updates based on user movement and environmental changes.
What sets PathSense apart is its adaptive nature. Our system continuously updates its guidance based on the user's movement and any changes in the environment, ensuring real-time accuracy. This dynamic approach allows for a more natural and responsive navigation experience, adapting to the user's pace and preferences as they move through complex indoor spaces.
## How we built it
In building PathSense, we embraced the challenge of integrating multiple cutting-edge technologies. Our solution is built on the following technological framework:
1. Voice Interaction: Voiceflow
* Manages conversation flow
* Interprets user intents
* Generates appropriate responses
2. Computer Vision Pipeline:
* Object Detection: Detectron
* Depth Estimation: DPT (Dense Prediction Transformer)
* Scene Analysis: GPT-4 Vision (mini)
3. Data Management: Convex database
* Stores CV data and mapping information in JSON format
4. Semantic Search: Cohere's Rerank API
* Performs semantic search on CV tags and mapping data
5. Indoor Mapping: MappedIn SDK
* Provides floor plan information
* Generates routes
6. Speech Processing:
* Speech-to-Text: Groq model (based on OpenAI's Whisper)
* Text-to-Speech: Unreal Engine
7. Video Input: Multiple TAPO cameras
* Stream 1080p video of the environment over Wi-Fi
To tie it all together, we leveraged Cohere's Rerank API for semantic search, allowing us to find the most relevant information based on user queries. For speech processing, we chose a Groq model based on OpenAI's Whisper for transcription, and Unreal Engine for speech synthesis, prioritizing low latency for real-time interaction. The result is a seamless, responsive system that processes visual information, understands user requests, and provides spoken guidance in real-time.
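As a rough sketch of that semantic-search step, assuming Cohere's Python SDK; the example CV tags and the rerank model name are our assumptions, not details confirmed by the write-up.

```python
# Sketch only: model name and example tags are assumptions.
import cohere

co = cohere.Client("COHERE_API_KEY")   # placeholder key

def most_relevant(query: str, cv_tags: list, k: int = 3):
    """Rerank the CV pipeline's tags by relevance to the user's question."""
    response = co.rerank(
        model="rerank-english-v2.0",
        query=query,
        documents=cv_tags,
        top_n=k,
    )
    return [(cv_tags[r.index], r.relevance_score) for r in response.results]

print(most_relevant("where is the nearest exit?",
                    ["door, 3m ahead", "chair, to the left", "exit sign, right"]))
```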
## Challenges we ran into
Our journey in developing PathSense was not without its hurdles. One of our biggest challenges was integrating the various complex components of our system. Combining the computer vision pipeline, Voiceflow agent, and MappedIn SDK into a cohesive, real-time system required careful planning and countless hours of debugging. We often found ourselves navigating uncharted territory, pushing the boundaries of what these technologies could do when working in concert.
Another significant challenge was balancing the diverse skills and experience levels within our team. While our diversity brought valuable perspectives, it also required us to be intentional about task allocation and communication. We had to step out of our comfort zones, often learning new technologies on the fly. This steep learning curve, coupled with the pressure of working on parallel streams while ensuring all components meshed seamlessly, tested our problem-solving skills and teamwork to the limit.
## Accomplishments that we're proud of
Looking back at our journey, we're filled with a sense of pride and accomplishment. Perhaps our greatest achievement is creating an application with genuine, life-changing potential. Knowing that PathSense could significantly improve the lives of visually impaired individuals, including our own family members, gives our work profound meaning.
We're also incredibly proud of the technical feat we've accomplished. Successfully integrating numerous complex technologies - from AI and computer vision to voice processing - into a functional system within a short timeframe was no small task. Our ability to move from concept to a working prototype that demonstrates the real-world potential of AI-driven indoor navigation assistance is a testament to our team's creativity, technical skill, and determination.
## What we learned
Our work on PathSense has been an incredible learning experience. We've gained invaluable insights into the power of interdisciplinary collaboration, seeing firsthand how diverse skills and perspectives can come together to tackle complex problems. The process taught us the importance of rapid prototyping and iterative development, especially in a high-pressure environment like a hackathon.
Perhaps most importantly, we've learned the critical importance of user-centric design in developing assistive technology. Keeping the needs and experiences of visually impaired individuals at the forefront of our design and development process not only guided our technical decisions but also gave us a deeper appreciation for the impact technology can have on people's lives.
## What's next for PathSense
As we look to the future of PathSense, we're brimming with ideas for enhancements and expansions. We're eager to partner with more venues to increase our coverage of mapped indoor spaces, making PathSense useful in a wider range of locations. We also plan to refine our object recognition capabilities, implement personalized user profiles, and explore integration with wearable devices for an even more seamless experience.
In the long term, we envision PathSense evolving into a comprehensive indoor navigation ecosystem. This includes developing community features for crowd-sourced updates, integrating augmented reality capabilities to assist sighted companions, and collaborating with smart building systems for ultra-precise indoor positioning. With each step forward, our goal remains constant: to continually improve PathSense's ability to provide independence and confidence to visually impaired individuals navigating indoor spaces.
|
## Inspiration
Cardiovascular diseases are the leading cause of death globally. One in five deaths is a heart attack, and performing CPR immediately can greatly improve these odds, yet ambulances may only arrive so fast. This project aims to quickly help those in need by alerting nearby individuals with first-aid training of the incident.
## What it does
In order to provide support at the time of need, our app monitors real-time heart rate and input data on a smart watch and allows the user to tap a button which sends notifications indicating an emergency to nearby first-aid-certified members. These registered members are trained CPR providers, and notifications are only sent to members within a close distance of the person in need of help.
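Here is a sketch of what that server-side dispatch logic might look like; the alert radius, the data shapes, and the haversine-distance approach are our assumptions, not details from the write-up.

```python
# Sketch only: RADIUS_KM and the member/emergency dict shapes are assumed.
import math

RADIUS_KM = 0.5   # only alert CPR-certified members this close (assumed)

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def responders_to_alert(emergency: dict, members: list) -> list:
    return [
        m for m in members
        if m["cpr_certified"]
        and haversine_km(emergency["lat"], emergency["lon"],
                         m["lat"], m["lon"]) <= RADIUS_KM
    ]
```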
## How we built it +
## Challenges we ran into
One of the first challenges we ran into was coding on Fitbit OS using Fitbit's documentation, as we all carefully studied the documentation to program the app that would be installed on the watch. Additionally, the platform to test the app came with a simulator that was not equipped to handle API calls like an actual smart watch, which restricted the things we could test. Sending data from the sensor on the watch to a server took us a long time to figure out, with the console's unhelpful error messages.
Lastly, we chose to implement a MERN stack as it was best suited to work with FitbitOS, yet we were all new to it. Our team had to learn the entire framework and libraries that we could use in the timespan of this hackathon.
(Oh, and due to our collective lack of knowledge in git, we ended up making 7 copies of the repo!)
## Accomplishments that we're proud of +
## What we learned
We are all very proud and relieved that we were able to sort out our server issues, and learn the MERN stack in such a tight period of time. We still don't know where Tony went though..
## What's next for PulseSafe
Our idea was not driven by monetary incentive, though we understand that without funding, we cannot scale PulseSafe to a national level. There are still various components of PulseSafe that need to be polished and securely implemented. In the far future, one of our primary steps would be to create a watch/band dedicated to PulseSafe, both for financial gain and so that users do not have to purchase an expensive watch just to gain access to our PulseSafe watch app.
|
winning
|
## Inspiration
Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses, such as trend recognition, on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this within one of our sponsors' work, CapitalOne, in which they have volumes of financial transaction data, which is very difficult to parse manually, or even programmatically.
We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data.
## What it does
On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses.
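The write-up does not show the backend's analysis code, so here is a minimal pandas sketch of the kind of trend summary such a pipeline might return; the function name and heuristics are ours.

```python
# Sketch only: a stand-in for the LLM-driven analysis, showing the kind
# of summary the backend might return for an uploaded CSV.
import pandas as pd

def quick_trends(csv_path: str) -> dict:
    df = pd.read_csv(csv_path)
    numeric = df.select_dtypes("number")
    corr = numeric.corr().abs()
    # Zero the diagonal so self-correlation never wins.
    for col in corr.columns:
        corr.loc[col, col] = 0.0
    pair = corr.unstack().idxmax()   # most-correlated column pair
    return {
        "rows": len(df),
        "columns": list(df.columns),
        "summary": numeric.describe().to_dict(),
        "strongest_trend": pair,
    }
```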
## How we built it
Front-end was very simple. We created the platform using Next.js and React.js and hosted on Vercel.
The back-end was created using Python, in which we employed use of technologies such as Docker and MLBot to perform data analyses as well as return charts, which were then processed on the front-end using ApexCharts.js.
## Challenges we ran into
* It was some of our first times working in real time with multiple people on the same project. This advanced our understanding of how Git's features work.
* There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end.
* Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end.
## Accomplishments that we're proud of
* We were able to create a full-fledged, functional product within the allotted time we were given.
* We utilized our knowledge of how APIs worked to incorporate multiple of them into our project.
* We worked positively as a team even though we had not met each other before.
## What we learned
* Learning how to incorporate multiple APIs into one product with Next.
* Learned a new tech-stack
* Learned how to work simultaneously on the same product with multiple people.
## What's next for DataDaddy
### Short Term
* Add a more diverse applicability to different types of datasets and statistical analyses.
* Add more compatibility with SQL/NoSQL commands from Natural Language.
* Attend more hackathons :)
### Long Term
* Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results.
* Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
|
## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages of a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used Javascript, HTML and CSS for the website, and used it to communicate to a Flask backend that can run our python scripts involving API calls and such. We have API calls to openAI text embeddings, to cohere's xlarge model, to GPT-3's API, OpenAI's Whisper Speech-to-Text model, and several modules for getting an mp4 from a youtube link, a text from a pdf, and so on.
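As a rough sketch of the page-search step, assuming the legacy openai Python SDK (pre-1.0) and the ada-002 embedding model; the write-up says "OpenAI text embeddings" without naming a model, so the model choice and function names here are ours.

```python
# Sketch only: legacy openai SDK (<1.0) interface; model choice assumed.
import numpy as np
import openai

openai.api_key = "OPENAI_API_KEY"   # placeholder key

def embed(texts: list) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

def best_pages(question: str, pages: list, k: int = 3) -> list:
    page_vecs = embed(pages)
    q = embed([question])[0]
    # ada-002 vectors are unit-length, so a dot product is cosine similarity.
    scores = page_vecs @ q
    return [pages[i] for i in np.argsort(scores)[::-1][:k]]
```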
## Challenges we ran into
We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions on it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness).
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper.
|
## Inspiration
Everywhere we hear, "(data) is the new currency." And yet our understanding of it is limited. As developers, we have decided to make it more accessible to the people we think need it most: people who have an illness. Often, medical information is not explicit to the general public, and we cannot grasp its simpler meaning. That's why we are on a mission to vulgarize it.
## What it does
We take data from medical research websites and make it comprehensible to patients leveraging AI to parse through it, rendering the best fitting study depending on the patient's needs.
## How we built it
We built the frontend, with HTML- and CSS-heavy code, mostly using Svelte, a powerful and easy alternative to React.js. For the backend, it is necessary to think of it as a pipeline: we first take the user input, format it with Python, and send it to ChatGPT, which has loaded the data-filled file; the response is then sent back to the frontend at the client's disposal.
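A minimal sketch of that pipeline, assuming the legacy openai Python SDK (the write-up says "ChatGPT" without naming a model); the prompt wording, model choice, and function name are our assumptions.

```python
# Sketch only: legacy openai SDK (<1.0); prompt and model are assumed.
import openai

openai.api_key = "OPENAI_API_KEY"   # placeholder key

def best_fitting_study(patient_query: str, studies_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Pick the study that best fits the patient's needs "
                        "and explain it in plain language:\n" + studies_text},
            {"role": "user", "content": patient_query},
        ],
    )
    return response["choices"][0]["message"]["content"]
```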
## Challenges we ran into
The first challenge we ran into was data access. The first API we used returned unformatted data, due to poorly regulated data entry. If the data is unformatted, then it is difficult to standardize, especially with how much time we had left. The second challenge was the choice of frameworks. The number of tools available to devs is very large, and it is very easy to get lost in framework changes if you aren't completely sure of how your project is going to turn out. Since we had trouble with the first API, the second issue did not help.
## Accomplishments that we're proud of
We found a way around both challenges. As a solution to the search, we decided to pivot towards multiple databases instead of the one we originally intended, to have a bit more latitude regarding data mining. This gave us a bit of hope, which in turn solved the second problem, and we had a better idea of how to tackle the project. We are proud of our resilience in the face of a failing idea, and going through with it no matter the obstacles.
## What we learned
That HTML cannot be linked directly to Python using a framework. If it can be, why is it so difficult? We also learned that the medical field is a data rich field, but in order to make a very interesting and useful product, you need a doctor to vulgarize to you. Just "formatting data" is pretty much useless if you don't understand the data in the first place.
## What's next for MediQuest AI
This project feels more like an open-source project, constantly improved with new implementations and completely different ways to think. This aligns well with our philosophy: accessible medical information to people with no medical background.
GitHub: <https://github.com/Jhul4724/mchacks_mediquest.git>
|
partial
|
## Inspiration
Currently, many hospitals still burn CD’s to give patient’s their medical images and records including MRI’s and CT scans.
A recent study conducted by Life Image showed that nearly 40% of patients are still required to physically travel to pick up CDs if they want access to their medical records including MRI’s and CT scans. According to the survey, 66% of respondents have access to at least one portal connected to their provider’s Electronic Health Records, however only 18% of respondents have been able to ever receive records digitally which shows that, while patients have access to portals, records and information are still not being effectively shared.
This system of sharing medical data is very fragile because CD’s aren’t secure and don’t allow for immediate access. Since CD burner manufacturers are all different, and equipment at the two facilities may not be the same, getting the images is sometimes not possible due to compatibility issues.
Although this system seems incredibly archaic given the technology we have now, this is the reality. Our team’s family members have personal experience with the broken medical record storage and retrieval system.
## What is it?
**Medblock is a blockchain IPFS-based medical image portability system that’s fast, secure, and permanent.**
MedBlock allows providers to upload and fulfill medical records requests and for patients to request, upload, and retrieve records. It also uses machine learning to make all MedBlock records verifiable, so receivers can make sure all images are authentic.
## What we learned
We learned a lot about how medical data is stored and sent to patients!
## What's next for MedBlock
In the future, we will launch MedBlock by partnering with local medical providers.
|
## Inspiration
Our inspiration was a medical paper one of our teammates gave us that outlined the importance of digitizing healthcare infrastructure. We realized the risk of patient's confidential healthcare documents and files being leaked and stored in an unsecured location and wanted to change that.
## What it does
The goal of HealthChain is to provide a safe and practical way for healthcare professionals to upload their patients' documents and files to the blockchain. Thus, all patients' healthcare files will be in good hands and won't be at risk of being leaked, damaged, or lost.
## How we built it
We built all of HealthChain using JavaScript where we primarily focused on the frontend while also focusing on uploading files to the IPFS blockchain so that it is secure.
## Challenges we ran into
The primary challenge we ran into was trying to figure out what we wanted to make in the first place. Alongside that, we had a lot of difficulty trying to figure out VerbWire as there is rather poor documentation, as well as trying to connect the frontend and backend.
## Accomplishments that we're proud of
We are really proud of our frontend development, as the website has a very simplistic and clean look thanks to our teams very strong frontend development talent. Additionally, our backend was able to push through numerous challenges all the while learning new technologies to create a functional way to upload files to the IPFS.
## What we learned
We learned to spend less time figuring out what we wanted to make in the first place, as well as to better communicate what we each required individually. This includes realizing it's better if we all commit to one poor idea instead of none of us following through on a good idea, and telling each other beforehand if we were missing technological hardware that was vital to the success of our hackathon.
## What's next for HealthChain
Our next goal is to add more encryption to our project so that it will be even more secure, as well as to set up a database so users will be able to create their accounts and log in successfully. So far, we can get a file uploaded to IPFS and display a link to access it; we want to improve that and the front-end.
|
## Inspiration
We wanted to make the world a better place by giving patients control over their own data and allow easy and intelligent access of patient data.
## What it does
We have built an intelligent medical record solution where we provide insights on reports saved in the application and also make the personal identifiable information anonymous before saving it in our database.
## How we built it
We have used the Amazon Textract service as OCR to extract text information from images. Then we use Amazon Comprehend Medical to redact (mask) sensitive personally identifiable information (PII) before using the Groq API to extract inferences that explain the medical document to the user in layman's terms. We have used React, Node.js, Express, DynamoDB, and Amazon S3 to implement our project.
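As a rough sketch of the extract-and-redact stage, assuming boto3 (the write-up does not show the actual code); the redaction token and line-joining logic are ours. Offsets from DetectPHI are applied right to left so earlier offsets stay valid as the string shrinks or grows.

```python
# Sketch only: "[REDACTED]" token and helper names are our assumptions.
import boto3

def extract_text(image_bytes: bytes) -> str:
    textract = boto3.client("textract")
    resp = textract.detect_document_text(Document={"Bytes": image_bytes})
    return "\n".join(b["Text"] for b in resp["Blocks"]
                     if b["BlockType"] == "LINE")

def redact_phi(text: str) -> str:
    cm = boto3.client("comprehendmedical")
    entities = cm.detect_phi(Text=text)["Entities"]
    # Apply replacements from the end of the string backwards.
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:e["BeginOffset"]] + "[REDACTED]" + text[e["EndOffset"]:]
    return text
```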
## Challenges we ran into
## Accomplishments that we're proud of
We were able to fulfill most of our objectives in this brief period of time.We also got to talk to a lot of interesting people and were able to bring our project to completion despite a team member not showing up.We also got to learn about a lot of cool stuff that companies like Groq, Intel, Hume, and You.com are working on and we had fun talking to everyone.
## What we learned
## What's next for Pocket AI
|
partial
|
## Inspiration
When I was interviewing at a ~20 person startup and reading through their introduction doc, I read that the co-founders would often spend lots of nights just catching up on docs from the prior week. To me, the last thing any employee, much less a founder, should be spending their time on is writing docs. Ground Truth lets you simply review and accept changes to your docs based on what you have been coding.
## What it does
Each time you make a commit to a specific repo, a new entry is made.
## How we built it
### Chroma
Docs can be pretty large, far too large to pass multiple pages into a context window. To account for this, we embedded each page of the docs into ChromaDB using its built-in functions, and we query the vector DB we created to find the most similar doc so we can pass that as context.
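Here is a minimal sketch of that retrieval step using chromadb's default built-in embedding function; the collection name and example page contents are illustrative, not taken from the project.

```python
# Sketch only: collection name and page texts are illustrative.
import chromadb

client = chromadb.Client()
docs = client.create_collection("doc_pages")

# Index each docs page once; embedding happens via the built-in function.
docs.add(
    documents=["Page 1: auth flow ...", "Page 2: webhook retries ..."],
    ids=["page-1", "page-2"],
)

def most_similar_page(diff_summary: str) -> str:
    """Find the doc page most relevant to a commit-diff description."""
    result = docs.query(query_texts=[diff_summary], n_results=1)
    return result["documents"][0][0]
```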
### Groq
We used Groq for basically all of our small-context text completion. This was especially helpful when passing a code diff into the Llama model and getting a description of what was being updated. This allowed us to consistently retrieve the right documentation, which we would then update.
### Reflex.dev
Since most of our initialization and ChromaDB stuff was already being done in Python, it made sense to continue using it and go ahead with Reflex for the full frontend and backend.
## Challenges we ran into
The biggest issue for us was finding good contender projects. Firstly, they had to be open source, which was non-negotiable for us to test with. Secondly, they had to have a developer program or some need for docs that we could reasonably update.
## Accomplishments that we're proud of
This was the first time any of us had worked with these technologies (anything RAG, really), and we're proud to have put it all together in a somewhat attractive way over the 36 hours. The approach we took involved a good 10 hours of ideation, so it's nice to have it built and working.
## What we learned
Pretty much everything about RAG and Reflex. We all knew very little about the domain coming into this.
## What's next for Ground Truth
This project has real potential as a company but is an incredibly hard engineering problem. If we stay excited about building it, we could make it widespread and very modular.
|
## Inspiration
We've all had the experience of needing assistance with a task but not having friends available to help. As a last resort, one has to turn to large, class-wide GroupMes to see if anyone can help. But most students have those muted because they're filled with a lot of spam. As a result, the most desperate calls for help often go unanswered.
We realized that we needed to streamline the process for getting help. So, we decided to build an app to do just that. For every Yalie who needs help, there are a hundred who are willing to offer it—but they just usually aren’t connected. So, we decided to build YHelpUs, with a mission to help every Yalie get better help.
## What it does
YHelpUs provides a space for students that need something to create postings rather than those that have something to sell creating them. This reverses the roles of a traditional marketplace and allows for more personalized assistance. University students can sign up with their school email accounts and then be able to view other students’ posts for help as well as create their own posts. Users can access a chat for each posting discussing details about the author’s needs. In the future, more features relating to task assignment will be implemented.
## How we built it
Hoping to improve our skills as developers, we decided to carry out the app’s development with the MERNN stack; although we had some familiarity with standard MERN, developing for mobile with React Native was a unique challenge for us all. Throughout the entire development phase, we had to balance what we wanted to provide the user and how these relationships could present themselves in our code. In the end, we managed to deliver on all the basic functionalities required to answer our initial problem.
## Challenges we ran into
The most notable challenge we faced was the migration towards React Native. Although plenty of documentation exists for the framework, many of the errors we faced were specific enough to force development to stop for a prolonged period of time. From handling multi-layered navigation to user authentication across all our views, we encountered problems we couldn’t have expected when we began the project, but every solution we created simply made us more prepared for the next.
## Accomplishments that we're proud of
Enhancing our product with automated content moderation using the Google Cloud Natural Language API. We're also proud of our side quest: developing a simple matching algorithm for LightBox.
## What we learned
We learned new frameworks (MERNN) and how to use the Google Cloud API.
## What's next for YHelpUs
Better filtering options and a more streamlined UI. We also want to complete the accepted posts feature, and enhance security for users of YHelpUs.
|
## Inspiration
It's easy to zone off in online meetings/lectures, and it's difficult to rewind without losing focus at the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we can just quickly skim through a list of keywords to immediately see what happened?
## What it does
Rewind is an intelligent, collaborative and interactive web canvas with built in voice chat that maintains a list of live-updated keywords that summarize the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text.
## How we built it
Communications: WebRTC, WebSockets, HTTPS
We used WebRTC, a peer-to-peer protocol, to connect the users through a voice channel, and we used WebSockets to update the web pages dynamically, so users get instant feedback on others' actions. Additionally, a web server is used to maintain stateful information.
For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition.
Audio transcription and summary: Google Cloud Speech (live transcription) and natural language APIs (for summarization)
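As a rough sketch of how those two API calls fit together (the audio config and the keyword heuristic here are illustrative assumptions, not our exact settings):

```python
# Hedged sketch of the transcription + keyword step using Google Cloud APIs.
from google.cloud import speech, language_v1

def transcribe_chunk(audio_bytes: bytes) -> str:
    """Transcribe one short chunk of recorded voice-chat audio."""
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(content=audio_bytes)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def extract_keywords(text: str, top_n: int = 5) -> list[str]:
    """Pull the most salient entities out of a transcript as live keywords."""
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    entities = client.analyze_entities(document=doc).entities
    return [e.name for e in sorted(entities, key=lambda e: -e.salience)[:top_n]]
```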
## Challenges we ran into
There are many challenges that we ran into when we tried to bring this project to reality. For the backend development, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend. We spent more than 18 hours on it to come to a working prototype. In addition, the frontend development was also full of challenges. The design and implementation of the canvas involved much trial and error, and the history rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team, and we have learned a lot from this experience.
## Accomplishments that we're proud of
Despite all the challenges we ran into, we were able to have a working product with many different features. Although the final product is by no means perfect, we had fun working on it utilizing every bit of intelligence we had. We were proud to have learned many new tools and get through all the bugs!
## What we learned
For the backend, the main thing we learned was how to use WebRTC, which includes client negotiations and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the WebSockets. As for the frontend, we learned to use various JavaScript elements to help develop an interactive client web app. We also learned event delegation in JavaScript, which helped with an essential component of the history page.
## What's next for Rewind
We imagined a mini dashboard that also shows other live-updated information, such as the sentiment, summary of the entire meeting, as well as the ability to examine information on a particular user.
|
partial
|
## Inspiration
Garbage in bins around cities are constantly overflowing. Our goal was to create a system that better allocates time and resources to help prevent this problem, while also positively impacting the environment.
## What it does
Urbins provides a live monitoring web application that displays the live capacity of both garbage and recycling compartments using ultrasonic sensors. This functionality can be seen inside the prototype garbage bin. The bin uses a cell phone camera to send an image to the custom learning model built with IBM Watson. The results from the Watson model are used to classify each object placed in the bin so that it can be sorted into either garbage or recycling. Based on the classification, the Android application controls the V-shaped platform using a servo motor to tilt the platform and drop the item into its correct bin. Once a garbage/recycling bin nears full capacity, STDlib is used to notify city workers via SMS that the bins at a given address are full.
Machine learning is applied when an object cannot be classified. When this happens, the image of the object is sent via STDlib to Slack. Along with the image, response buttons are displayed in Slack, which allows a city worker to manually classify the item. Once a selection is made, the new classification is used to further train the Watson model. This updated model is then used by all the connected smart garbage bins, allowing for all the bins to learn.
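As a rough sketch of the sorting flow: the Watson call is abstracted into a stub helper here, and the servo interface, label set, and bin depth are illustrative assumptions rather than our exact firmware.

```python
# Illustrative sketch of the classify-sort-notify flow.
RECYCLABLE = {"bottle", "can", "paper", "cardboard"}

def classify_with_watson(image_path: str) -> str:
    """Stub: send the photo to our custom Watson model, return the top label."""
    raise NotImplementedError

def handle_item(image_path: str, servo) -> None:
    """Sort one deposited item based on the Watson classification."""
    label = classify_with_watson(image_path)
    side = "recycling" if label in RECYCLABLE else "garbage"
    servo.tilt(side)  # V-shaped platform drops the item into that compartment

def fill_level(distance_cm: float, bin_depth_cm: float = 60.0) -> float:
    """Ultrasonic reading -> fill fraction; near 1.0 triggers the STDlib SMS."""
    return max(0.0, 1.0 - distance_cm / bin_depth_cm)
```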
## Challenges we ran into
Integrating all components
Learning to use IBM Watson
Providing the set of images for IBM Watson (Needed to be a zip file containing at least 10 photos to update the model)
## Accomplishments that we're proud of
Integrating all the components.
Getting IBM Watson working
Getting STDlib working
Training IBM Watson using STDLib
## What we learned
How to use IBM Watson
How to effectively plan a project
Designing an effective architecture
How to use STDlib
## What's next for Urbins
Accounts
Algorithm for optimal route for shift
Dashboard with map areas, floor plans, housing plans, and event maps
Heat map on google maps
Bar chart of stats over past 6 months (which bin was the most frequently filled?)
Product Information and Brand data
|
## Inspiration
We wanted to reduce the global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for all environmental activities. We were inspired by the acts of 17-year-old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, combined with our technical knowledge, we created Recycle.space.
## What it does
Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste items simply by holding them up to a camera. This application will be especially useful when permanent fixtures are erected in malls, markets, and large public locations.
## How we built it
Using a Flask-based backend connected to the Google Vision API, we captured images and categorized each item into its waste category. The results were visualized using Reactstrap.
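A minimal sketch of what that backend route can look like, assuming the Google Cloud Vision Python client; the label-to-category mapping here is a made-up illustration:

```python
# Hedged sketch of the classification endpoint.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
client = vision.ImageAnnotatorClient()

RECYCLING = {"bottle", "tin can", "paper", "plastic"}
COMPOST = {"food", "fruit", "vegetable"}

@app.route("/classify", methods=["POST"])
def classify():
    image = vision.Image(content=request.files["photo"].read())
    labels = {l.description.lower()
              for l in client.label_detection(image=image).label_annotations}
    if labels & RECYCLING:
        category = "recycling"
    elif labels & COMPOST:
        category = "compost"
    else:
        category = "garbage"
    return jsonify({"category": category, "labels": sorted(labels)})
```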
## Challenges I ran into
* Deployment
* Categorization of food items using Google API
* Setting up Dev. Environment for a brand new laptop
* Selecting appropriate backend framework
* Parsing image files using React
* UI designing using Reactstrap
## Accomplishments that I'm proud of
* WE MADE IT!
We are thrilled to create such an incredible app that would make people's lives easier while helping improve the global environment.
## What I learned
* UI is difficult
* Picking a good tech stack is important
* Good version control practices are crucial
## What's next for Recycle.space
Deploying a scalable and finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
|
## Inspiration
As busy students, we noticed that students loved “shooting” garbage into bins. When it came to actually throwing out waste, the openings for each separate type of waste were so close together that it was hard to land trash in the right one from a distance, meaning students would have to walk long distances to accurately dispose of their waste. We also noticed that many students were uneducated about which objects are recyclable.
These two issues led people to litter and/or throw their waste in the wrong bins, increasing pollution and contributing to climate change.
## What it does
To solve this issue, we created a bin with a garbage and recycling section. Students will throw their trash/recycling into their desired target. A camera and sensors within the garbage can will determine whether they threw their waste into the right target. Then, a servo will deposit the piece of waste into the correct bucket, and the user will gain points if they correctly dispose of their waste, or lose points if they incorrectly dispose of it. These points can be used to redeem prizes around campuses with these garbage cans.
## How we built it
BasketBin’s main frame includes a repurposed cardboard box with two holes carved into the top. We used servo motors, ultrasonic and PIR sensors, and a webcam to create the contraption that deposits the waste into the correct bin.
We used a Flask server for the application and Supabase for the database to store player information. A player's score entry is updated by the Python script, which adds or deducts points based on whether the user placed the waste into the correct bin.
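As a rough sketch of that update step (the table and column names are assumptions about our schema):

```python
# Hedged sketch of the score update run from the sorting script.
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY")

def update_score(player_id: str, correct: bool, delta: int = 10) -> None:
    """Add or deduct points depending on whether the throw was sorted right."""
    row = supabase.table("players").select("score").eq("id", player_id).execute()
    score = row.data[0]["score"] + (delta if correct else -delta)
    supabase.table("players").update({"score": score}).eq("id", player_id).execute()
```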
## Challenges we ran into
In this hackathon, with Flask and Supabase being new technologies to us, we had to spend lots of time figuring out how to link the webpage, database, CV, and hardware. Each component had its own challenges, such as fixing the sensors for our garbage sorter, connecting the computer vision to the hardware, and fetching and sending entries to the database. Through perseverance, we worked through the issues and came to a solution.
## Accomplishments that we're proud of
We are proud of the seamless design that allows users to interact with the garbage can in a fun and interactive way while also informing them about waste management. Additionally, we are proud of the interconnectedness of our project, with all its moving parts: the hardware, computer vision, database, and webpage.
## What's next for Garbage sorter
Looking ahead, we have several exciting plans for our project. We would like to integrate this product into university campuses around Canada to promote an environment where students can be informed of recycling policies in a fun and exciting manner.
|
winning
|
### Saturday 11AM: Starting Out
>
> *A journey of a thousand miles begins with a single step*
>
>
>
BusBuddy is pulling the curtain back on school buses. Students and parents should have equal access to information to know when and where their buses are arriving, how long it will take to get to school, and be up-to-date on any changes in routes. When we came onboard the project, our highest priorities were efficiency, access, and sustainability.
With our modern version of a solution to the traveling salesman problem, we hope to give students and parents some peace of mind when it comes to school transportation. Not only will BusBuddy make the experience more comfortable, but having reliable information means more parents will opt to save on gas and send their kids by bus.
### Saturday 3PM: Roadblocks, Missteps, Obstacles
>
> *I would walk a thousand miles just to fall down at your door*
>
>
>
No road is without its potholes; our road was no exception to this. Alongside learning curves and getting to know each other, we faced issues with finicky APIs that disagreed with our input data, temperamental CSS margins that refused to anchor where we wanted them, and missing lines of code that we swear we put in. With enough time and bubble tea, we found our critical errors and began to build our vision.
### Saturday 9PM: Finding Our Way
>
> *Just keep swimming, just keep swimming, just keep swimming, swimming, swimming…*
>
>
>
We conceptualized in Figma with asset libraries; we built our front-end in VS Code with HTML, CSS, and Jinja2; we used Flask, Python, SQL databases, and a Google Maps API, alongside the Affinity Propagation Clustering algorithm, to cluster home addresses; and finally, we ran a recursive DFS on a directed weighted graph to optimize a route for bus pickup of all students.
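As a rough illustration of the clustering step (the coordinates and parameters here are made up; real inputs would come from geocoding student addresses):

```python
# Sketch of address clustering with scikit-learn's Affinity Propagation.
import numpy as np
from sklearn.cluster import AffinityPropagation

# (lat, lng) pairs for each student's home, e.g. from the Google Maps API
homes = np.array([
    [43.2557, -79.8711],
    [43.2612, -79.9190],
    [43.2500, -79.8496],
])

clustering = AffinityPropagation(random_state=0).fit(homes)
stops = clustering.cluster_centers_   # candidate bus-stop locations
assignments = clustering.labels_      # which stop each student walks to
```

The resulting cluster centers become the nodes of the directed weighted graph that the recursive DFS then searches for a pickup route.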
### Sunday 7AM: Summiting the Peak
>
> *Planting a flag at the top*
>
>
>
We achieved our minimum viable product! Given that our expectations were not low, it was no easy feat to climb this mountain.
### Sunday 11AM: Journey’s End
>
> *The journey matters more than the destination*
>
>
>
With a team composed of an 11th grader, a 12th grader, a UWaterloo first year, and a Mac second year, we certainly did not lack in range of experiences to bring to the table. Our biggest asset was having each other as sounding boards to bounce ideas off of. Getting to collaborate with each other certainly broadened our worldviews, especially with each others’ anecdotes about school pre-, during, and post-COVID.
### Sunday Onward
>
> *New Horizons*
>
>
>
So what’s next for us? And what’s next for BusBuddy?
Well, we’ll be doing some sleeping. As for BusBuddy, we hope to scale up and turn our application into something that BusBuddy’s students can use for years to come.
|
## Inspiration
After years of taking the STM (one of the many possible implementations that could make use of RailVision) and having one too many experiences of waiting in the freezing weather for a bus that would never come, the problem proposed by the RailVision challenge was one that was close to our hearts. Having a better organized public transit system and minimizing wait times are keys to a better and greener future in the transportation world.
## What it does
Long gone are the days when, after running — no, sprinting — from your bus stop to the metro station, you find out that you just missed your train and that the next one is 30 minutes away. *Bummer.* With our project solution, this situation will (hopefully) be left in the past!
Given a database with times that passengers arrive at each station, using a local beam search heuristic, our code finds the optimal time to deploy the trains such that the average wait time for each passenger is minimized. Then the solution can be visualized through an animation which displays each train and station and concisely shows the time, number of passengers and other relevant information.
## How we built it
The first step we took to better understand the challenge domain was to think about additional constraints, namely the start times for the first and last train routes. Furthermore, some starting times were better than others (e.g. those ending with 7 or 8), as they allowed us to "time" the trains' arrival at a station with the passengers' arrivals. These heuristics helped us form a good first "guess", which we would later refine into an optimal one. Before that, though, we coded a helper function that computes the wait time of the passengers. This function is crucial to solving the problem, as it is the quantity we are trying to minimize. The optimization code was built in Python using a variation on a genetic search algorithm: at each iteration, we generate k slightly differing train schedules from the input one and keep the n most optimal. After a number of iterations, we return the converged result.
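For illustration, here is a minimal, single-station sketch of that loop in Python; the mutation step, parameters, and the simplified wait-time helper are assumptions for demonstration, not our exact code:

```python
# Local-beam-style schedule search: mutate the best schedules, keep the n best.
import random
from bisect import bisect_left

def average_wait(schedule, arrivals):
    """Mean time each passenger waits for the next train after arriving
    (simplified to a single station; late arrivals with no train are skipped)."""
    trains = sorted(schedule)
    waits = []
    for t in arrivals:
        i = bisect_left(trains, t)
        if i < len(trains):
            waits.append(trains[i] - t)
    return sum(waits) / len(waits) if waits else float("inf")

def optimize(initial, arrivals, k=50, n=5, iterations=200):
    beam = [initial]
    for _ in range(iterations):
        candidates = list(beam)  # keep current best so the search never regresses
        for schedule in beam:
            for _ in range(k // len(beam)):
                candidates.append([t + random.choice([-1, 0, 1]) for t in schedule])
        beam = sorted(candidates, key=lambda s: average_wait(s, arrivals))[:n]
    return beam[0]
```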
We also added unit-testing and integration testing to assure ourselves with different code iterations that we were not breaking anything. This could be useful in the future if we wanted proper CI/CD.
For the visualization, we used Unity, as it provided us with stable and predictable frame updates while also allowing a robust spawning system.
## Challenges we ran into
At first, it was difficult to figure out how to approach this problem, since there are so many varying factors to take into account. We contemplated using other algorithms, such as network flows or an instance of dynamic programming, but decided to go with an AI-based search: given a good enough tentative schedule and enough iterations, the optimization would eventually converge to a point that minimizes the average wait time. Another challenge was coding the optimization, as libraries like numpy/scipy did not behave the way we wanted them to (e.g. not returning integer values).
Beyond the logic of the challenge itself, which had to be tested via different algorithms, designing such systems can be tricky as well. It was important to spend the first few hours understanding what exactly we were trying to achieve, as well as checking similar products and interfaces, so we could design something "intuitive" and "straightforward" to present to any kind of user.
On the visualization side, there were a good number of issues. We initially decided to code the project in JS using React. However, after many hours of development, this turned out to be problematic due to the complexity of the visualization and the multiple instances of objects spawning at different times. In the end, we chose a more flexible and robust tool for this almost game-like visualization: Unity. While it meant essentially restarting, it was well worth it.
*Finally, the consequences of sleep-deprivation might be apparent, as I forgot to save this draft the first time I wrote it, which makes me very sad.*
## Accomplishments that we're proud of
After all the effort poured into this and the great teamwork we had, it was nice to piece the code together and see it running, successfully finding solutions that were considerably better than what we had found by hand.
Learning how to work effectively as a team might be undoubtedly the most vital accomplishment for all of us. Joining as total strangers and ending up working through the same vision is something I'm truly proud of.
## What we learned
**Teamwork makes the dream work!**
Collaboration was crucial to allow the progress of this challenge. We all had different strengths that complemented each other. Everybody pulling their own weight ensured that no one broke their back having to carry all the load!
For some of our team, this was their first hackathon, and it happened to be online. One of the most important things that we learned was the importance of networking. Trying to match with other students and finding a team based on a different skill set was one of the challenges. Breaking down the problem, brainstorming with team members, and defining our roles was the other challenge we faced throughout the hackathon.
**Test Early**
Test early to know about crucial possible problems early on so that you don't have any last minute surprises.
## What's next for RailVision
With our proposed solution, the next words you will hear from the STM lady will be:
*"Prochaine station, RailVision!"*
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was also done in Canva, with a dash of Harvard colors.
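A minimal sketch of what the order endpoint can look like; the transcription and order-extraction steps are stubbed out, since their exact models and prompts are not shown here:

```python
# Hedged sketch of the voice-order endpoint.
from flask import Flask, request, jsonify

app = Flask(__name__)

def transcribe(audio_bytes: bytes) -> str:
    """Stub for the speech-to-text step (hypothetical helper)."""
    raise NotImplementedError

def extract_order(text: str) -> list[dict]:
    """Stub for the AI step that pulls items, sizes, and modifications."""
    raise NotImplementedError

@app.route("/order", methods=["POST"])
def order():
    audio = request.files["audio"].read()
    text = transcribe(audio)
    return jsonify({"transcript": text, "items": extract_order(text)})
```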
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
partial
|
# ObamaChart
It all began three weeks ago when Charlie, Gerry, and Gene convinced Hector to join them in the hackathon.
The idea was not born until the wee hours of the night, when everyone was fed up with Charlie's "one way photo encryption" idea and instead wanted something more useful. Thus we decided to visualize data over time, and settled on Barack Obama as a viable candidate for our experimentation.
We used Microsoft's Bing Search to find the images, along with their Emotions API via Microsoft Cognitive Services. We used Python as our backend for the number crunching. Then we fed all our pickled info into a CSV file in order to add it to a Google Spreadsheet. From there, we found it easy to use Google's Charts API to make different plots of our data.
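As an illustration of the number-crunching step, here is roughly how the pickled emotion scores could be rolled up into that CSV (the file and column names are assumptions):

```python
# Sketch: aggregate per-image emotion scores by month for the spreadsheet.
import pandas as pd

scores = pd.read_pickle("obama_emotions.pkl")  # columns: date, happiness, ...
scores["month"] = pd.to_datetime(scores["date"]).dt.to_period("M")
monthly = scores.groupby("month")[
    ["happiness", "neutral", "contempt", "anger"]].mean()
monthly.to_csv("obama_emotions_monthly.csv")  # feeds the Google Charts plots
```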
However, our team did not underestimate the importance of user experience. Charlie and Gene worked together to display the data in the most easy-to-understand and most easily accessible way. To do this, they used Bootstrap to format the charts and the website, and deployed the website using Microsoft Azure.
We tried to compare Obama's emotions to different values such as S&P, and approval rate. Unfortunately, no matter how much we tried, we didn't see any correlations between the data (this could be due to Emotion API's inclination to classify a face as happy or neutral more than anything else). Of course, if there is one trend we managed to see, it's that Obama's happiness has dropped sharply this past month and there has been a slight rise in his neutrality, contempt and anger.
Overall this was a great project that we all enjoyed working on and we hope you all can enjoy looking at the data we've gotten for you.
|
## Inspiration
Our team was inspired by how projects such as Waterloo Waterworks and forest fire maps were able to take data and give meaning to it through visualization. We were also inspired by the effect large collective data projects can have on people who feel emotionally isolated when they realize they are not alone. Finally, we were inspired by the potential of co:here's NLP API and wanted to see how we could use it to process data on a large scale to return useful and compelling conclusions.
## What it does
With the combination of these ideas, we decided to develop a program that took responses to the prompt "How are you feeling today?" and added them to a dataset. Each response was analyzed and given a sentiment score. We wanted to give meaning to our data by representing it in a visually interesting way. The data was arranged using an algorithm that graphically layered each of the responses depending on how positive or negative their sentiment was, and we also developed an algorithm that concluded the overall "vibe" of everyone's day.
## How we built it
In order to create our best work, we played to the strength of each of our team members. The backend connection and analysis algorithms were developed using Python, and the frontend was designed using Figma and implemented using JavaScript and Processing. Flask was used to bridge our front-end and back-end components together.
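A hedged sketch of the sentiment step with the Cohere Python SDK; import paths and response shapes changed across SDK versions, so treat this as illustrative rather than our exact code:

```python
# Sketch: classify a "How are you feeling today?" response with co:here.
import cohere
from cohere.responses.classify import Example  # path varies by SDK version

co = cohere.Client("YOUR_API_KEY")

EXAMPLES = [  # a few labeled seed examples; the real set would be larger
    Example("I aced my midterm today!", "positive"),
    Example("I feel exhausted and stressed.", "negative"),
    Example("It was an ordinary day.", "neutral"),
]

def sentiment(response_text: str) -> str:
    result = co.classify(inputs=[response_text], examples=EXAMPLES)
    return result.classifications[0].prediction
```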
## Challenges we ran into
Some of the challenges that we ran into were... basically everything. We had never set up Flask on our own before and were not experienced with using Python for the back-end. We struggled to debug and combine all of our code components since we had implemented the front-end, back-end, and Flask on three different computers.
## Accomplishments that we're proud of
We're really proud of how we all learned so much from this experience and challenged ourselves out of our comfort zones. Despite a lot of hurdles and being really sleep-deprived, we persisted and continued to problem-solve until we were satisfied with what we came up with. We're really impressed with what we've created and our perseverance.
## What we learned
We learned how to use the co:here API, how to work in a team with really diverse skill sets, and how to successfully deploy a complicated web application!
## What's next for Emotivate
Human beings are hugely emotional creatures, so a web app that has access to a large group of people to accurately represent their emotions has enormous potential. We think that expanding Emotivate to the public web would be the logical next step in allowing it to create a unique value in society by allowing us to use our emotions to inform our habits.
|
## Inspiration
In August, one of our team members was hit by a drunk driver. She survived with a few cuts and bruises, but unfortunately, there are many victims who are not as lucky. The emotional and physical trauma she and other drunk-driving victims experienced motivated us to try and create a solution in the problem space.
Our team initially started brainstorming ideas to help victims of car accidents contact first response teams faster, but then we thought, what if we could find an innovative way to reduce the amount of victims? How could we help victims by preventing them from being victims in the first place, and ensuring the safety of drivers themselves?
Despite current preventative methods, alcohol-related accidents still persist. According to the National Highway Traffic Safety Administration, in the United States, there is a death caused by motor vehicle crashes involving an alcohol-impaired driver every 50 minutes. The most common causes are rooted in failing to arrange for a designated driver, and drivers overestimating their sobriety. In order to combat these issues, we developed a hardware and software tool that can be integrated into motor vehicles.
We took inspiration from the theme “Hack for a Night out”. While we know this theme usually means making the night out a better time in terms of fun, we thought that another aspect of nights out that could be improved is getting everyone home safely. It's no fun at all if people end up getting tickets, injured, or worse after a fun night out, and we're hoping that our app will make getting home a safer, more secure journey.
## What it does
This tool saves lives.
It passively senses the alcohol levels in a vehicle using a gas sensor that can be embedded into a car’s wheel or seat. Using this data, it discerns whether or not the driver is fit to drive and notifies them. If they should not be driving, the app immediately connects the driver to alternative options of getting home such as Lyft, emergency contacts, and professional driving services, and sends out the driver’s location.
There are two thresholds from the sensor that are taken into account: no alcohol present and alcohol present. If there is no alcohol present, then the car functions normally. If there is alcohol present, the car immediately notifies the driver and provides the options listed above. Within the range between these two thresholds, our application uses car metrics and user data to determine whether the driver should pull over or not. In terms of user data, if the driver is under 21 based on configurations in the car such as teen mode, the app indicates that the driver should pull over. If the user is over 21, the app will notify if there is reckless driving detected, which is based on car speed, the presence of a seatbelt, and the brake pedal position.
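For clarity, here is an illustrative Python restatement of that decision logic (the production logic runs on the Android side, and every threshold value below is a placeholder):

```python
# Sketch of the two-threshold assessment; all numbers are placeholders.
NO_ALCOHOL = 120   # raw sensor reading below this -> drive normally
ALCOHOL = 400      # reading above this -> do not drive, offer rides home

def assess(reading, age, speed_kmh, seatbelt_on, brake_pos):
    if reading < NO_ALCOHOL:
        return "ok"
    if reading >= ALCOHOL:
        return "do_not_drive"  # show Lyft / emergency-contact options
    # In-between range: fall back to user data and car metrics.
    if age < 21:
        return "pull_over"
    reckless = speed_kmh > 120 or not seatbelt_on or brake_pos > 0.8
    return "pull_over" if reckless else "ok"
```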
## How we built it
Hardware Materials:
* Arduino Uno
* Wires
* Grove alcohol sensor
* HC-05 bluetooth module
* USB 2.0 b-a
* Hand sanitizer (ethyl alcohol)
Software Materials:
* Android Studio
* Arduino IDE
* General Motors Info3 API
* Lyft API
* Firebase
## Challenges we ran into
Some of the biggest challenges we ran into involved Android Studio. Testing the app on an emulator fundamentally limited what we could verify, with emulator incompatibilities causing a lot of issues. Basic gaps such as the lack of Bluetooth also hindered our work and prevented testing of some of the core functionality. In order to test erratic driving behavior on a road, we wanted to track a driver’s ‘Yaw Rate’ and ‘Wheel Angle’; however, these parameters were not available to emulate on the Mock Vehicle simulator app.
We also had issues picking up Android Studio for members of the team new to Android, as the software, while powerful, is not the easiest for beginners to learn. This led to a lot of time being used just to spin up and get familiar with the platform. Finally, we had several issues with the hardware aspect, with the Arduino platform being very finicky, often crashing due to various incompatible sensors and sometimes just of its own accord.
## Accomplishments that we're proud of
We managed to get the core technical functionality of our project working, including a working alcohol air sensor and the ability to pull low-level information about the movement of the car to make an algorithmic decision as to how the driver was driving. We were also able to wirelessly send the data from the Arduino platform to the Android application.
## What we learned
* Learn to adapt quickly and don’t get stuck for too long
* Always have a backup plan
## What's next for Drink+Dryve
* Minimize hardware to create a compact design for the alcohol sensor, built to be placed inconspicuously on the steering wheel
* Testing on actual car to simulate real driving circumstances (under controlled conditions), to get parameter data like ‘Yaw Rate’ and ‘Wheel Angle’, test screen prompts on car display (emulator did not have this feature so we mimicked it on our phones), and connecting directly to the Bluetooth feature of the car (a separate apk would need to be side-loaded onto the car or some wi-fi connection would need to be created because the car functionality does not allow non-phone Bluetooth devices to be detected)
* Other features: Add direct payment using service such as Plaid, facial authentication; use Docusign to share incidents with a driver’s insurance company to review any incidents of erratic/drunk-driving
* Our key priority is making sure the driver is no longer in a compromising position to hurt other drivers and is no longer a danger to themselves. We want to integrate more mixed mobility options, such as designated driver services such as Dryver that would allow users to have more options to get home outside of just ride share services, and we would want to include a service such as Plaid to allow for driver payment information to be transmitted securely.
We would also like to examine a driver’s behavior over a longer period of time, and collect relevant data to develop a machine learning model that would be able to indicate if the driver is drunk driving more accurately. Prior studies have shown that logistic regression, SVM, decision trees can be utilized to report drunk driving with 80% accuracy.
|
losing
|
## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses and market updates is overwhelming. For most people finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing we can condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from the domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies, allowing users to focus on high-profile stocks that are relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools, which determine whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis we used TensorFlow, Pandas, and NumPy.
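A rough sketch of the scrape-and-scan step (the company list is truncated and the sentence splitting is simplified; the sentiment model itself is applied to the matched sentences afterward):

```python
# Sketch: pull article text from a page and collect company mentions.
import requests
from bs4 import BeautifulSoup

TOP_COMPANIES = ["Apple", "Microsoft", "Amazon", "Nvidia"]  # ...up to 100

def scan_page(url: str) -> dict[str, list[str]]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    text = soup.get_text(separator=" ")
    mentions: dict[str, list[str]] = {c: [] for c in TOP_COMPANIES}
    for sentence in text.split("."):
        for company in TOP_COMPANIES:
            if company in sentence:
                mentions[company].append(sentence.strip())
    return {c: s for c, s in mentions.items() if s}
```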
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performance. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources.
|
## ✨ Inspiration
Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are rarely taught in school, and in developing countries even less so, though if used right, investing can help many people rise above the poverty line. We seek to help students and adults learn more about stocks and what drives companies to gain or lose stock value, and use that information to make more informed decisions.
## 🚀 What it does
Users are guided to a search bar where they can search for a company's stock, for example "AAPL", and almost instantly see the stock price over the last two years as a graph, with green and red dots spread out along the line. When they hover over the dots, the green dots explain why there is a general increasing trend in the stock, backed by a news article, along with the price change from the previous day and where the price is predicted to go. An image of the company also shows up beside the graph.
## 🔧 How we built it
When a user enters a stock name, the app accesses the Yahoo Finance API and gets data on the stock price from the last three years. It converts the data to a JSON file served on localhost:5000; using Flask, this becomes our own API, which populates the Charts.js graph with the stock data. Using a MATLAB server, we then take that data and find the areas of most significance (where the absolute value of the slope is over a certain threshold). Those data points are marked green if the change is positive or red if it is negative. The dates of those points are fed to Gemini, which is asked why it thinks the stock shifted as it did and why the price changed on that day. Gemini also handles a second request: producing a phrase that the image search API can easily use to find a photo of the company, which is then shown on screen.
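As an illustration, here is roughly what the significance step looks like in Python (our pipeline does this on the MATLAB server; the threshold value is an assumption):

```python
# Sketch: flag days where the day-over-day change exceeds a threshold.
import yfinance as yf

def significant_points(ticker: str, threshold: float = 0.03):
    closes = yf.Ticker(ticker).history(period="2y")["Close"]
    points = []
    for prev, (date, price) in zip(closes, closes.iloc[1:].items()):
        change = (price - prev) / prev
        if abs(change) > threshold:
            points.append({"date": str(date.date()),
                           "color": "green" if change > 0 else "red",
                           "change": round(change, 4)})
    return points  # these dates are then sent to Gemini for explanations
```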
## 🤯 Challenges we ran into
Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. As well, getting stock data to a MATLAB server took a lot of time, as it was all of our first time using it. POST and fetch commands were also new to us and took a lot of time to get used to.
## 🏆 Accomplishments that we're proud of
* Connecting a prompt to a well-crafted stocks portfolio
* Learning MATLAB in a time crunch
* Connecting all of our APIs successfully
* Making a website that we believe has serious positive implications for this world
## 🧠 What we learned
* MATLAB integration
* Flask integration
* The Gemini API
## 🚀What's next for StockSee
* Incorporating it on different mediums such as VR, so users can see in real time how stocks shift in front of them in an interactive way.
* Making a small questionnaire on different parts of the stocks to ask whether it is good to buy at the time.
* Using Modern Portfolio Theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them.
|
Though technology has certainly had an impact in "leveling the playing field" between novices and experts in stock trading, there still exist a number of market inefficiencies for the savvy trader to exploit. Figuring that stock prices in the short term tend to some extent to reflect traders' emotional reactions to news articles published that day, we set out to create a machine learning application that could predict the general emotional response to the day's news and issue an informed buy, sell, or hold recommendation for each stock based on that information.
After entering the ticker symbol of a stock, our application allows the user to easily compare the actual stock price over a period of time against our algorithm's emotional reaction.
We built our web application using the Flask Python framework and the front-end using React and Bootstrap. To scrape news articles in order to analyze trends, we utilized the google-news API. This allowed us to search for articles pertaining to certain companies, such as Google and Disney. Afterwards, we performed ML and sentiment analysis through the TextBlob Python API.
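The sentiment step itself is small; a sketch with TextBlob follows, where the buy/sell cutoffs are illustrative assumptions rather than our tuned values:

```python
# Sketch: score an article's sentiment and map it to a recommendation.
from textblob import TextBlob

def recommend(article_text: str) -> str:
    polarity = TextBlob(article_text).sentiment.polarity  # -1.0 .. 1.0
    if polarity > 0.2:
        return "buy"
    if polarity < -0.2:
        return "sell"
    return "hold"
```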
We had some difficulty finding news articles; it was quite a challenge to find a free and accessible API that allowed us to gather our data. In fact, we stumbled upon one API that, without our knowledge, redirected us to a different web page the moment we attempted any sort of data extraction. Additionally, we had some problems trying to optimize our ML algorithm in order to produce as accurate results as possible.
We are proud of the fact that Newsstock is up and running and able to predict certain trends in the stock market with some accuracy. It was cool not only to see how certain companies fared in the stock market, but also to see how positivity or negativity in media influenced how people bought or sold certain stocks.
First and foremost, we learned how difficult it could be at times to scrape news articles, especially while avoiding any sort of payment or fee. Additionally, we learned that machine learning can be fairly inaccurate. Overall, we had a great experience learning new frameworks and technologies as we built Newsstock.
|
partial
|
## Inspiration 🍪
We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks...
Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock.
## What it does 📸
Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see.
## How we built it 🛠️
* **Backend:** Node.js
* **Facial Recognition:** OpenCV, TensorFlow, DLib
* **Pipeline:** Twilio, X, Cohere
## Challenges we ran into 🚩
In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time.
Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision.
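A hedged sketch of that classifier, assuming the face_recognition library for 128-dimensional encodings and scikit-learn for the KNN (training-image handling is simplified):

```python
# Sketch: train a KNN on roommate face encodings, then classify snapshots.
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

def train(image_paths: list[str], labels: list[str]) -> KNeighborsClassifier:
    encodings = [face_recognition.face_encodings(
        face_recognition.load_image_file(p))[0] for p in image_paths]
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(encodings, labels)
    return knn

def identify(knn: KNeighborsClassifier, snapshot_path: str) -> str:
    faces = face_recognition.face_encodings(
        face_recognition.load_image_file(snapshot_path))
    return knn.predict([faces[0]])[0] if faces else "no_face"
```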
Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders.
## Accomplishments that we're proud of 💪
* Successfully bypassing Nest’s security measures to access the camera feed.
* Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm.
* Fine-tuning Cohere to generate funny and engaging social media captions.
* Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner.
## What we learned 🧠
Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application.
## What's next for Craven 🔮
* **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates.
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy.
* **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves.
* **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
|
## Inspiration
In a lot of mass shootings, there is a significant delay from the time at which police arrive at the scene, and the time at which the police engage the shooter. They often have difficulty determining the number of shooters and their location. ViGCam fixes this problem.
## What it does
ViGCam spots and tracks weapons as they move through buildings. It uses existing camera infrastructure, location tags and Google Vision to recognize weapons. The information is displayed on an app which alerts users to threat location.
Our system could also be used to identify wounded people after an emergency incident, such as an earthquake.
## How we built it
We used Raspberry Pis and Pi Cameras to simulate an existing camera infrastructure. Each individual Pi runs a Python script that sends all images taken from the cameras to our Django server. The images are then sent to the Google Vision API, which returns a list of classifications. All the data collected from the Raspberry Pis can be visualized on our React app.
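A minimal sketch of the per-Pi capture loop, assuming the legacy picamera library; the endpoint URL, camera ID, and interval are placeholders:

```python
# Sketch: capture a frame and POST it to the Django server, repeatedly.
import time
import requests
from picamera import PiCamera

SERVER = "https://our-django-server.example/api/frames"  # placeholder URL
camera = PiCamera()

while True:
    camera.capture("/tmp/frame.jpg")
    with open("/tmp/frame.jpg", "rb") as f:
        requests.post(SERVER, files={"frame": f}, data={"camera_id": "hall-2"})
    time.sleep(5)  # long delay to avoid overloading the server (see below)
```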
## Challenges we ran into
SSH connections do not work on the HackMIT network; because of this, our current setup involves turning one camera on before activating the second. In a real-world situation, we would be using an existing camera network, not our Raspberry Pi cameras, to collect video data.
We also had a difficult time getting consistent identification of our objects as weapons. This is largely because, for obvious reasons, we could not bring in actual weapons. Up close, however, we get consistent identification of team members' items.
With our current server setup, we consistently get server-overload errors, so we use an extended delay between each image send. Given time, we would implement an actual camera network and also modify our system to perform object recognition on video rather than individual pictures, which would improve our accuracy. WebSockets could be used to display the collected data in real time.
## Accomplishments that we’re proud of
1) It works!!! (We successfully completed our project in 24 hours.)
2) We learned to use Google Cloud API.
3) We also learned how to use the Raspberry Pi. Prior to this, no one on our team had any hardware experience.
## What we learned
1) We learned about coding in a real world environment
2) We learned about working on a team.
## What's next for ViGCam
We are planning on working through our kinks and adding video analysis. We could add sound detection for gunshots to detect emergent situations more accurately. We could also use more machine learning models to predict where the threat is going and distinguish between threats and police officers. The system can be made more robust by causing the app to update in real time. Finally, we would add the ability to use law enforcement emergency alert infrastructure to alert people in the area of shooter location in real time. If we are successful in these aspects, we are hoping to either start a company, or sell our idea.
|
## Inspiration
We found that the current price of smart doors on the market is incredibly expensive. We wanted to improve on current smart-door technology at a fraction of the price. In addition, smart locks are not usually hands-free, typically requiring the press of a button or opening an app on the user's phone. We wanted to make it as easy and fast as possible for users to securely unlock their door while keeping intruders out.
## What it does
Our product acts as a smart door with two-factor authentication to allow entry. A camera cross-matches your face with an internal database and also uses voice recognition to confirm your identity. Furthermore, the smart door provides useful information for your departure such as weather, temperature and even control of the lights in your home. This way, you can decide how much to put on at the door even if you forgot to check, and you won't forget to turn off the lights when you leave the house.
## How we built it
For the facial recognition portion, we used a Python script & OpenCV through the Qualcomm Dragonboard 410c, where we trained the algorithm to recognize correct and wrong individuals. For the user interaction, we used the Google Home to talk to the User and allow for the vocal confirmation as well as control over all other actions. We then used an Arduino to control a motor that would open and close the door.
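As an illustrative sketch of the recognition step with OpenCV's LBPH recognizer (training-data handling is elided here, and the confidence threshold is an assumption):

```python
# Sketch: LBPH face recognition on grayscale face crops.
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()  # needs opencv-contrib
# faces: list of grayscale face crops; ids: matching resident IDs
# recognizer.train(faces, np.array(ids))  # run once on the training set

def is_resident(gray_face: np.ndarray, threshold: float = 60.0) -> bool:
    label, confidence = recognizer.predict(gray_face)
    return confidence < threshold  # lower LBPH confidence = better match
```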
## Challenges we ran into
OpenCV was incredibly difficult to work with. We found that the setup on the Qualcomm board was not well documented and we ran into several errors.
## Accomplishments that we're proud of
We are proud of getting OpenCV to work flawlessly and providing a seamless integration between the Google Home, the Qualcomm board and the Arduino. Each part was well designed to work on its own, and allowed for relatively easy integration together.
## What we learned
We learned a lot about working with the Google Home and the Qualcomm board. More specifically, we learned about all the steps required to set up a Google Home, the processes needed to communicate with hardware, and many challenges when developing computer vision algorithms.
## What's next for Eye Lock
We plan to market this product extensively and see it in stores in the future!
|
winning
|
## Inspiration
The idea addresses a very natural curiosity to live and experience the world as someone else, and out of the progress with the democratization of VR with the Cardboard, we tried to create a method for people to "upload" their life to others. The name is a reference to Sharon Creech's quote on empathy in Walk Two Moons: "You can't judge a man until you've walked two moons in his moccasins", which resonated with our mission.
## What it does
Moonlens consists of a pipeline of three aspects that connects the uploaders to the audience. Uploaders use the camera-glasses to record, and then upload the video onto the website along with the camera-glasses' gyro-accelerometer data (its use is explained below). The website communicates with the iOS app and allows the app to play back the video in split-screen.
To prevent motion sickness, the viewer has to turn his head in the same orientation as the uploader for the video to come into view, as otherwise the experience will disturb the vestibular system. This orientation requirement warrants the use of the gyro-accelerometer in the camera-glasses to compare to the iPhone's orientation tracking data.
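A small sketch of that orientation check (the real check runs in the iOS app against the uploaded gyro-accelerometer data; the tolerance value is an assumption):

```python
# Sketch: show the video only when the viewer's head orientation roughly
# matches the uploader's recorded orientation.
import math

def video_visible(uploader_yaw_pitch, viewer_yaw_pitch, tolerance_deg=15.0):
    dyaw = abs(uploader_yaw_pitch[0] - viewer_yaw_pitch[0])
    dpitch = abs(uploader_yaw_pitch[1] - viewer_yaw_pitch[1])
    dyaw = min(dyaw, 360 - dyaw)  # wrap around the yaw circle
    return math.hypot(dyaw, dpitch) < tolerance_deg
```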
## How we built it
The three components of the pipeline:
1. Camera-glasses: using the high framerate and high resolution of mini sports cameras, we took apart the camera and attached it to a pair of glasses. The camera-glasses sport a combination of gyroscope and accelerometer that start synchronously with the camera's recording, and the combination of the camera and Arduino processor for the gyro-accelerometer outputs both the video file and the orientation data to be uploaded onto the website.
2. Website: The website is for the uploaders to transfer the individual video-orientation data pairs to the database. The website was designed with Three.js, along with the externally designed logo and buttons. It uses Linode servers to handle PHP requests for the file uploads.
3. App: The app serves as the consumer endpoint for the pipeline, and allows consumers to view all the videos in the database. The app features automatic split-screen, and videos in the app are of a similar format to 360 videos, except that the video only spans a portion of the spherical projection, and the viewer has to follow the metaphorical gaze of the uploader by following the video's movements.
## Challenges we ran into
A major challenge early on was in dealing with possible motion sickness in uploaders rotating their heads while the viewers don't; this confuses the brain as the visual cortex receives the rotational cue but the inner ear, which acts as the gyro for the brain, doesn't, which is the main cause for VR sickness. We came up with the solution to have the viewer turn his or her head, and this approach focuses the viewer toward what's important (what the uploader's gaze is on) and also increases the interactivity of the video.
In building the camera, we did not have the resources for a flat surface to mount the boards and batteries for the camera. Despite this, we found that our lanyards for Treehacks, when hot-glue-gunned together, made quite a good surface, and ended up using this for our prototype.
In the process of deploying the website, we had several cases of PHP not working out, and thus spent quite a bit of time trying to deploy. We ended up learning much about the backend that we hadn't previously known through these struggles and ultimately got the right amount of help to overcome the issues.
## Accomplishments that we're proud of
We were very productive from the beginning to the end, and made consistent progress and had clear goals. We worked very well as a team, and had a great system for splitting up work based on our specialties, whether that be web, app dev, or hardware.
Building the app was a great achievement as our app specialist JR never built an app in VR before, and he figured out the nuances of working with the gyroscope and accelerometer of the phone in great time and polished the app very well.
We're also quite proud of having built the camera on top of basic plastic glasses and our Treehacks lanyards, and Richard, who specializes in hardware, was resourceful in making the camera and hacking the camera.
For the web part, Dillon and Jerry designed the backend and frontend, which was an uphill battle due to technical complications with PHP and deploying. However, the website came together nicely as the backend finally resolved the complications and the frontend was finished with the design.
## What we learned
We learned how to build with brand new tools, such as Linode, and also relied on our own past skills in development to split up work in a reasonable and efficient manner. In addition, we learned by building around VR, which was a field that many of the team members did not have exposure before.
## What's next for Moonlens
In the future, we will make the prototype camera-glasses much more compact, and hopefully streamline the process from recording video to uploading it with minimal assistance from a computer. As people use the app, creating a positive environment between uploaders and viewers will be necessary, and having uploaders earn money from ads would be a great way to grow the community. Hopefully, given time, the world can better connect and understand each other through seeing others' experiences.
|
## Our Inspiration
We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gameify the entire learning experience and make it immersive all while providing users with the resources to dig deeper into concepts.
## What it does
EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy - allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable.
## How we built it
We built the VisionOS app using the beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and a concurrent MVVM design architecture. 3D models were converted through Reality Converter into .usdz files for AR modelling. We stored these files in Google Cloud Storage buckets, with their corresponding metadata in CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.
## Challenges we ran into
Learning to build for the VisionOS was challenging mainly due to the lack of documentation and libraries available. We faced various problems with 3D Modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations within the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!
## Accomplishments that we're proud of
We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets, which was really rewarding to see come to life.
|
## Inspiration
With a vision to develop an innovative solution for portable videography, Team Scope worked over this past weekend to create a device that allows for low-cost, high-quality, and stable motion and panoramic photography for any user. Currently, such equipment exists only for high-end DSLR cameras, is expensive, and is extremely difficult to transport. As photographers ourselves, we have always felt such equipment was out of reach, and both amateurs and veterans would substantially benefit from a better solution, which provides us with a market ripe for innovation.
## What it does
In contrast to current expensive, unwieldy designs, our solution is compact and modular, giving us the capability to quickly set up over 20 ft of track while still fitting all the components into a single backpack. There are two main assemblies to SCOPE: first, our modular track, whose length can be quickly extended, and second, our carriage, which houses all electronics and controls the motion of the mounted camera.
## Design and performance
The hardware was designed in Solidworks and OnShape (a cloud-based CAD program), and rapidly prototyped using both laser cutters and 3D printers. All materials we used are readily available, such as MDF fiberboard and acrylic plastic, which would drive down the cost of our product. On the software side, we used an Arduino Uno to power three full-rotation continuous servos, which provide us with a wide range of possible movements. With simple keyboard inputs, the user can interact with the system and control the lateral and rotational motion of the mounted camera, all the while maintaining a consistent quality of footage. We are incredibly proud of the performance of this design, which is able to capture extended time-lapse footage easily and at a professional level. After extensive testing, we are pleased to say that SCOPE has beaten our expectations for ease of use, modularity, and quality of footage.
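A host-side keyboard-control loop of this kind could look roughly like the TypeScript sketch below, using Node's `serialport` package. The port path, baud rate, and single-letter command protocol are all assumptions for illustration, not the project's actual firmware interface:

```typescript
import { SerialPort } from "serialport";

// Open the Arduino's USB serial connection (path and baud rate assumed).
const port = new SerialPort({ path: "/dev/ttyACM0", baudRate: 9600 });

// Map key presses to one-letter servo commands the firmware understands.
const commands: Record<string, string> = {
  a: "L", // slide carriage left
  d: "R", // slide carriage right
  q: "P", // pan the camera mount
  s: "S", // stop all servos
};

process.stdin.setRawMode(true);
process.stdin.on("data", (buf) => {
  const key = buf.toString();
  if (key === "\u0003") process.exit(); // Ctrl+C quits
  const cmd = commands[key];
  if (cmd) port.write(cmd);
});
```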
## Challenges and lessons
Given that this was our first hackathon, and that all team members are freshmen with limited experience, we faced numerous challenges in implementing our vision. Foremost among these was learning to code in the Arduino language, which none of us had ever used previously, something made especially difficult by our inexperience with software in general. But with the support of the PennApps community, we are happy to have learned a great deal over the past 36 hours, and are now fully confident in our ability to develop similar Arduino-controlled products in the future. As we go forward, we are excited to apply our newly acquired skills to new passions and to continue to hack. The people we've met at PennApps have helped us with everything from small tasks, such as operating a specific laser cutter, to intangible advice about navigating the college world and life in general. The four of us are better engineers as a result.
## What's next?
We believe that there are many possibilities for the future of SCOPE, which we will continue to explore. Among these are the introduction of a curved track for the camera to follow, the addition of a gimbal for finer motion control, and the development of preset sequences of varying speeds and direction for the user to access. Additionally, we believe there is significant room for weight reduction to enhance the portability of our product. If produced on a larger scale, our product will be cheap to develop, require very few components to assemble, and still be just as effective as more expensive solutions.
## Questions?
Contact us at [teamscopecamera@gmail.com](mailto:teamscopecamera@gmail.com)
|
winning
|
## Inspiration
Our inspiration for this project was born from the crucible of procrastinating large assignments until the last possible moment and the art of cobbling together passable submissions fueled only by Monster Energy and divine intervention. As college students, we have all experienced the overwhelming feeling of not knowing where to begin and the paralyzing anxiety of taking that first step into an intimidating project. Developing a clear roadmap to tackle a large assignment is crucial to keep track of progress and manage time effectively, but breaking down a project into smaller actionable steps is often harder than it seems. A common pitfall of many is spending too much time crafting an overly-detailed plan and schedule to work on a task, only to get distracted 10 minutes in after clicking on an interesting link online. Navigating internet resources while studying can be perilous, as it takes mere minutes to find ourselves inexplicably immersed in a Wikipedia rabbit hole on Minecraft lore. Establishing a good roadmap is important, but it can be a true challenge in today’s digitalized world to stay focused on one task at a time without getting distracted by the myriad of attractive websites, communities, and social media platforms online.
## What it does
Anchor is a chatGPT-powered Chrome extension that addresses two fundamental challenges: structuring projects into manageable subtasks and avoiding online distractions. After prompting the user to create a new task by adding a name and description, Anchor leverages the OpenAI API to generate concise subtasks. This provides a clear roadmap to tackle large tasks, which mitigates the anxiety around taking the first step. Once ready to start, the user can toggle Focus Mode, which dynamically monitors open Chrome tabs using ChatGPT. In Focus Mode, Anchor ensures that only tabs relevant to the current task remain accessible, blocking distracting websites and serving as a reminder to stay focused. We chose the name “Anchor” to convey our commitment to helping users anchor themselves to a specific task and overcome the temptation to procrastinate.
## How we built it
After finalizing which problems we wanted to solve with our project, we settled on developing a Chrome extension to make our tool minimalistic, non-distracting, and easily accessible while browsing. We split the work by assigning a main task to each team member: 1) set up a boilerplate for the extension; 2) create a working interface linking the OpenAI API with the Chrome API; 3) design the UI in Figma; 4) implement the UI. After these tasks were completed, we split into backend (task 2) and frontend development to continue building Anchor. Our tech stack uses HTML, CSS, JavaScript, and OpenAI's GPT-3.5 Turbo API.
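A minimal sketch of how the tab-relevance check in Focus Mode could work as a background script; the prompt, key handling, and variable names here are illustrative assumptions, not the extension's actual code:

```typescript
const OPENAI_KEY = "sk-...";              // supplied by the user in practice
let currentTask = "write history essay";  // set when the user creates a task

// Ask GPT-3.5 whether an open page is relevant to the current task.
async function isTabRelevant(title: string, url: string): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{
        role: "user",
        content: `Task: "${currentTask}". Is the page "${title}" (${url}) ` +
          `relevant to this task? Answer only YES or NO.`,
      }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim().startsWith("YES");
}

// In Focus Mode, check each tab as it finishes loading and close distractions.
chrome.tabs.onUpdated.addListener(async (tabId, info, tab) => {
  if (info.status !== "complete" || !tab.url || !tab.title) return;
  if (!(await isTabRelevant(tab.title, tab.url))) {
    chrome.tabs.remove(tabId);
  }
});
```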
## Challenges we ran into
Given our limited experience with Chrome extension development, we encountered many roadblocks over the course of the hackathon. Our main challenge was setting up communication between the Chrome and OpenAI APIs, in order to take data from the user and use it to generate responses with ChatGPT. We also had trouble figuring out how to successfully store inputted tasks and access the information related to a task. We were unfortunately unable to render our full user interface design due to time constraints, settling instead on a much simpler version. We also decided to scrap a number of core features we had in mind for Anchor, but we look forward to continue working on the project after HackPrinceton.
## Accomplishments that we're proud of
In the short amount of time available, we were able to identify two real issues that affect the productivity of thousands of people and come up with a solution that tackles both. We were able to successfully interface with the OpenAI API, design a clean UI, and implement a basic version of the extension itself.
## What we learned
We learned how to integrate the OpenAI API into a project, how Chrome extension development works, how to handle storage, good UI/UX design practices, and how to use pure JavaScript for frontend development.
## What's next for Anchor
We are excited to continue developing Anchor into a public Chrome extension! We will work on implementing our Figma-designed user interface, as well as improving the overall functionality of the extension. We want to fully develop focus mode and integrate it seamlessly with Anchor, and improve how each component of the extension is connected.
|
### Our inspiration
Technology is becoming smarter and more accessible each year, but many studies have found that productivity hasn't always increased alongside innovation; in fact, productivity has even decreased, according to studies done by Harvard. Companies like Google and Apple have begun trying to help users achieve a better balance with technology in their lives through their mobile devices, but we want to take this a step further and create a way for people to get control over the many distractions on the internet and become more aware of their unproductive and productive habits.
Currently, our Chrome extension allows users to create a list of unproductive websites like Facebook, Instagram, and others to block when they want to go into a more focused and productive state. With a simple click on the extension popup menu you can activate the focus mode and block out distracting sites. If users need a break they can then press another button to give themselves 15 minutes to recharge their creativity. In the extension's dashboard, users can reflect on and analyze their internet habits through engaging data visualizations pulled from their Android mobile device and Chrome history which are stored on Azure.
In its fully realized form, Google's products would harmoniously bridge the data on their users' habits into this dashboard and help them figure out more productive habits.
### How we built it
We built this Chrome extension using JavaScript on both the front and back end. A Material Design framework was used to provide a seamless look between Google's Digital Wellbeing mobile platform and our Chrome extension. An API was created and hosted on Azure for the back end, allowing digital usage data to be sent from the device and visualized in the desktop browser. In addition, IFTTT was used to put the phone in Do Not Disturb mode once the user sets the Chrome extension to focus mode.
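A simplified sketch of how the focus-mode blocking and break timer could fit together; the block list, redirect page, and function names are assumptions, not the extension's actual code:

```typescript
const blockList = ["facebook.com", "instagram.com"]; // user-configured list
let focusMode = false; // toggled from the popup
let breakUntil = 0;    // timestamp until which distractions are allowed

chrome.tabs.onUpdated.addListener((tabId, info) => {
  const url = info.url;
  if (!focusMode || Date.now() < breakUntil || !url) return;
  if (blockList.some((site) => url.includes(site))) {
    // Redirect the distracting tab to a reminder page bundled with the extension.
    chrome.tabs.update(tabId, { url: chrome.runtime.getURL("blocked.html") });
  }
});

// The popup's break button grants a 15-minute recharge window.
function startBreak(): void {
  breakUntil = Date.now() + 15 * 60 * 1000;
}
```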
### Challenges
As a team consisting mostly of members from design backgrounds, we faced lots of challenges with the implementation and learned a lot along the way. Managing asynchronous timers that could be checked on through our extension's interface was an especially tough one.
### What we're proud of
We are very proud of the overall product opportunity we think we've identified and the UX flow we developed in wireframe screenshots that can be seen in the gallery below.
We learned to be more realistic with our ambitions and perhaps to start with smaller-scale projects at future hackathons, especially considering this was two of our members' first hackathons.
### What's next
Next for our project would be to integrate real data from Google's services into the app and to flesh out creating multiple types of "Focus Mode" profiles that can even trigger music and other services that add to productivity.
|
## Inspiration
One of our teammates had an internship at a company where they were exposed to the everyday operations of the sales and customer service (SCS) industry. During their time there, they discovered how costly and time-consuming it was for companies to properly train, manage, and analyze the performance of their SCS department. Even current technologies within the industry, such as CRM (Customer Relationship Management) software, were not sufficient to support the demands of training and managing.
Our solution encompasses a gamified platform that incorporates an AI tool that’s able to train, analyze and manage the performance of SCS employees. The outcome of our solution provides customers with the best support tailored towards them at a low-cost for the business.
## What it does
traint is used in 4 ways:
* Utilizes AI to facilitate customer support agents in honing their sales and conflict-resolution skills
* Provides feedback and customer sentiment analysis after every customer interaction simulation
* Ranks customer support agents on a leaderboard based on "sentiment score"
* Matches top performers on the leaderboard in their respective functions (sales and conflict) with difficult clients
traint provides businesses with the capability to jumpstart and enhance their customer service and sales at a low cost. traint also interfaces AI agents and analysis in a digestible manner for individuals of all technological fluency.
## How we built it
traint was built using the following technologies:
* Next.js
* React, Tailwind, and shadcn/ui
* Voiceflow API
* Groq API
* Genesys API
The **Voiceflow** API was used mainly to develop the two customer archetypes: one a customer looking to purchase something (sales), and the other a customer who is unsatisfied with something (conflict resolution).
**Genesys** API was utilized to perform the customer sentiment analysis - a key metric in our application. The API was also used for its live chat queue system, allowing us to requeue and accept “calls” from customers.
The **Groq** API allowed us to analyze the conversation transcript to provide detailed feedback for the operator.
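Since Groq exposes an OpenAI-compatible chat endpoint, the transcript-feedback call could look like the sketch below; the model name, prompt, and function name are assumptions rather than traint's actual configuration:

```typescript
// Send the finished transcript to Groq's OpenAI-compatible endpoint and
// return coaching feedback for the support agent.
async function getFeedback(transcript: string): Promise<string> {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama3-8b-8192", // assumed model choice
      messages: [
        {
          role: "system",
          content: "You are a sales coach. Give concise, actionable feedback.",
        },
        { role: "user", content: `Review this support transcript:\n${transcript}` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```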
Most of our features were interfaced through our web application which hosts action buttons and chat/performance analytics like the average customer sentiment score.
## Challenges we ran into
There was a steep learning curve with Genesys's extensive API. At first we were overwhelmed by the number of API endpoints available and unsure where to begin. We went to many workshops, including the Genesys workshop, which helped us get started, and we consulted with the teams when we ran into issues with the platform.
Initially, we ran into issues with setting up and getting access to certain endpoints, since they required permissions granted explicitly by the Genesys team, but the team was very prompt and friendly in getting us the access we needed to build our app.
We also ran into many issues with the prompt engineering for the AI agent on Voiceflow. When building the customer archetypes, the model wasn’t performing as expected so it took a lot of time to get it to work like we wanted.
Although we had many challenges, we were all proud of the end product and the work we did despite the many roadblocks we faced :)
## Accomplishments that we're proud of
In a short span of time, we were able to familiarize ourselves with several tools, namely the Genesys’ extensive API and the Voiceflow platform. We integrated these two tools together to create a product that serves both the customer as well as the customer service representatives.
## What we learned
We learnt a lot about API integration, namely the Genesys API and the Voiceflow software. Both APIs were completely new to us, and it took a lot of research and trial and error to get our product to work.
We also learned about the link between frontend and backend in Next.js, and how to transfer information between the both of them via API routes, client and server side components, etc.
Our team came into this hackathon not really knowing much about each other. Coming from 4 different universities, we learnt a lot working with each other.
## What's next for traint
We wish that we had more time to fully develop our leaderboard process through the gamification/employee performance API. We ran into some issues with accessing the API and given more time, we would have implemented this feature.
Another feature we wanted to add was the ability to assign agents based on client history. Although there may be some data concerns, providing clients with the ability to talk to the same agents they preferred in the past would be very beneficial.
Finally, we would love to take traint to the next level and see it incorporated in some real life businesses. We truly believe in the use case and hope it solves some key pain points for people.
|
losing
|
## Inspiration
Autism is the fastest growing developmental disorder worldwide – preventing 3 million individuals worldwide from reaching their full potential and making the most of their lives. Children with autism often lack crucial communication and social skills, such as recognizing emotions and facial expressions in order to empathize with those around them.
The current gold-standard for emotion recognition therapy is applied behavioral analysis (ABA), which uses positive reinforcement techniques such as cartoon flashcards to teach children to recognize different emotions. However, ABA therapy is often a boring process for autistic children, and the cartoonish nature of the flashcards doesn't fully capture the complexity of human emotion communicated through real facial expressions, tone of voice, and body language.
## What it does
Our solution is KidsEmote – a fun, interactive mobile app that leverages augmented reality and deep learning to help autistic children understand emotions from facial expressions. Children hold up the phone to another person's face – whether it's their parents, siblings, or therapists – and cutting-edge deep learning algorithms identify the face's emotion as one of joy, sorrow, anger, or surprise. Then, four friendly augmented reality emojis pop up as choices for the child to choose from. Selecting the emoji that correctly matches the real-world face creates a shower of stars and apples in AR, and a score counter helps gamify the process, encouraging children to keep playing and get better at recognizing emotions.
The interactive nature of KidsEmote helps make therapy seem like nothing more than play, increasing the rate at which children improve their social abilities. Furthermore, compared to cartoon faces, the real facial expressions that children with autism recognize in KidsEmote are exactly the same as the expressions they'll face in real life – giving them greater security and confidence to engage with others in social contexts.
## How we built it
KidsEmote is built on top of iOS in Swift, and all augmented reality objects were generated through ARKit, which provided easy to use physics and object manipulation capabilities. The deep learning emotion classification on the backend was conducted through the Google Cloud Vision API, and 3D models were generated through Blender and also downloaded from Sketchfab and Turbosquid.
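The app itself is Swift, but the Cloud Vision request it relies on is easiest to show as the equivalent REST call; this TypeScript sketch is purely illustrative, and the key handling and helper name are assumptions:

```typescript
// Classify the dominant facial emotion in a base64-encoded photo via the
// Cloud Vision FACE_DETECTION feature.
async function classifyEmotion(base64Image: string): Promise<string> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${process.env.API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [{
          image: { content: base64Image },
          features: [{ type: "FACE_DETECTION", maxResults: 1 }],
        }],
      }),
    },
  );
  const data = await res.json();
  const face = data.responses[0].faceAnnotations?.[0];
  if (!face) return "none";
  // Vision reports each emotion as a likelihood string, e.g. "VERY_LIKELY".
  const moods: Record<string, string> = {
    joy: face.joyLikelihood,
    sorrow: face.sorrowLikelihood,
    anger: face.angerLikelihood,
    surprise: face.surpriseLikelihood,
  };
  const hit = Object.entries(moods)
    .find(([, v]) => v === "LIKELY" || v === "VERY_LIKELY");
  return hit ? hit[0] : "neutral";
}
```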
## Challenges we ran into
Since it was our first time working with ARKit and mobile development, learning the ins and outs of Swift and creating augmented reality objects was truly an eye-opening experience. Since the backend calls to the Vision API were asynchronous, we had to carefully plan and track the flow of inputs (i.e., taps) and outputs for our app. Finding suitable 3D models for our app also required much work: most online models that we found were quite costly, so we ultimately generated our own 3D facial-expression emoji models with Blender.
## Accomplishments that we're proud of
Building a fully functional app, working with Swift and ARKit for the first time, successfully integrating the Vision API into our mobile backend, and using Blender for the first time!
## What we learned
ARKit, Swift, physics for augmented reality, and using 3D modeling software. We also learned how to tailor the user experience of our software specifically to our audience to make it as usable and intuitive as possible. For instance, we focused on minimizing the amount of text and making sure all taps would function as expected inside our app.
## What's next for KidsEmote
KidsEmote represents a complete digital paradigm shift in the way autistic children are treated. While much progress has been made in the past 36 hours, KidsEmote opens up so many more ways to equip children with autism with the necessary interpersonal skills to thrive in social situations. For instance, KidsEmote can be easily extended to help autistic children distinguish between different emotions in the tone of one's voice, or understand another's mood from their body gestures. Integration between all these modalities only yields more avenues for exploration further down the line. In the future, we also plan on incorporating video streaming into KidsEmote to enable autistic children from all over the world to play with each other and meet new friends. This would greatly facilitate social interaction on an unprecedented scale between children with autism, since they might not have the opportunity to do so otherwise in traditional social contexts. Lastly, therapists can also instruct parents to use KidsEmote as an at-home tool to track the progress of their children – helping parents become part of the process and truly understand first-hand how their kids are improving.
|
## Inspiration
My cousin recently had children, and as they enter their toddler years, I have seen how they use their toys: they buy them, use them for a couple of weeks, but then get bored. To be honest, however, I can't blame them. Currently, toys are used in one way, with no real interaction coming from both the child and the toy. What I really wanted to do was bring Toy Story to real life and allow children to talk to and learn from their toys, maximizing their happiness and education.
## What it does
We use Hume's API to let you talk with your toys and have full-blown conversations with them. We use prompt engineering to embed math lessons within the toys' choose-your-own-adventure stories.
## How we built it
We embedded a Raspberry Pi, speaker, and microphone in the animal, which hosts a web app through which it can speak.
## Challenges we ran into
The Hume API was sometimes down, which was tough to navigate and halted our ability to make a self-improving prompt (one that tracks the progress of kids' lessons). We also had a broken Raspberry Pi for the first 12 hours of the hackathon, and Hyperbolic randomly stopped working for us, so our interactive story pictures stopped working.
## Accomplishments that we're proud of
Learning about how to prompt image generation (feeding the text transcription into Hyperbolic)
## What we learned
Image generation, prompt engineering, function calling
## What's next for Teddy.AI
Memory and lesson progress reports
|
## Inspiration
Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their emotions.
## What it does
A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
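A greatly simplified sketch of that mood-to-prompt step; the band names, thresholds, and keywords below are assumptions (and the team's actual backend was Python), so this only illustrates the shape of the mapping:

```typescript
interface BandPowers { alpha: number; beta: number; theta: number; }

// Map relative EEG band power to mood keywords for the image prompt.
function moodKeywords(b: BandPowers): string[] {
  if (b.beta > b.alpha && b.beta > b.theta) return ["focused", "sharp", "vivid"];
  if (b.alpha > b.beta) return ["calm", "serene", "pastel"];
  return ["dreamy", "soft", "surreal"];
}

function buildPrompt(b: BandPowers): string {
  return `abstract painting, ${moodKeywords(b).join(", ")}, high detail`;
}
```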
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We used a BCI library available to us to process and interpret brainwaves, Google OAuth for sign-ins, and an OpenBCI Ganglion interface provided by one of our group members to measure the brainwaves.
## Challenges we ran into
We faced a series of challenges throughout the hackathon, which is perhaps the essential rite of all hackathons. Initially, we struggled with setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first-ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the value of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0.
## What's next for BrAInstorm
We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generate daily for their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel but also hear what it sounds like.
|
partial
|
## Inspiration
Estate planning may not be the first concern people think about with crypto, but as it is exploding into mainstream usage, it is more important than ever to ensure that there is a mechanism for your beneficiaries to have access to your crypto inheritance just in case.
We first became aware of how pressing this issue can be from the story of Gerald Cotten, the former CEO of QuadrigaCX. His passing stranded $215 million in crypto, none of which was recoverable without his private keys. It’s not just death that could disrupt your crypto — others have lost their life’s assets forever upon medical incapacitation or loss of their private keys, situations that could have also been mitigated using Will3’s timed inheritance mechanisms.
## What it does
To make sure that you have a backup plan for your crypto just in case, Will3 allows you to specify your final wishes for the distribution of your crypto assets. You can specify the wallet addresses of your beneficiaries along with their respective shares, to be inherited after a set amount of time, without ever exposing your original private keys. No probate court, extended disputes, or misinterpretation — the smart contract self-executes and distributes exactly how it is written.
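The real logic lives in a Solidity contract, but the distribution rule reads naturally as a small model; every name, the basis-point convention, and the inactivity check below are assumptions used purely for illustration:

```typescript
interface Beneficiary { address: string; shareBps: number; } // shares in basis points

// Returns per-beneficiary payouts once the owner's inactivity window has
// elapsed, or null while the owner is still checking in.
function distribute(
  balanceWei: bigint,
  beneficiaries: Beneficiary[],
  lastCheckIn: number,  // unix seconds of the owner's last activity
  timeoutSecs: number,  // "set amount of time" before the will executes
  now: number,
): Map<string, bigint> | null {
  if (now - lastCheckIn < timeoutSecs) return null;
  const payouts = new Map<string, bigint>();
  for (const b of beneficiaries) {
    payouts.set(b.address, (balanceWei * BigInt(b.shareBps)) / 10000n);
  }
  return payouts;
}
```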
## How we built it
We deployed a Solidity smart contract on an Ethereum test network and built a frontend using React.
## Challenges we ran into
Since we have just gotten into the Web3 dev space, we ran into some hurdles figuring out how to integrate a MetaMask wallet and debugging a contract on a testnet. Since the space is so new right now, there was not much documentation for us to learn from, but we were able to find solutions after extended trial and error (and a bit of luck!).
We also considered the ethical challenges that our product might face. First off, there are several areas of ethical conflict in traditional estate planning that Will3 successfully resolves. For example, traditionally, lawyers might have conflicts of interest with the testator or beneficiary, but with a self-executing smart contract, this issue does not exist. The largest issue at play is the ethical and legal concept of testamentary capacity. Normally, one must be 18 years of age to write a will, and lawyers might also make the judgment that one is or is not of sound enough mind to write the will. Because of the decentralized nature of the blockchain, these checks are not inherently in place. Harms that might result include protracted legal disputes over the capacity of testators to create Will3s and over the legitimacy of the beneficiaries to receive them. Especially since crypto is not yet heavily regulated, this could be a very murky legal area. However, we are committed to tackling this problem through the implementation of strict KYC (Know Your Customer) standards similar to those of traditional financial institutions: identity and source-of-funds verification before users are allowed to create new Will3s, done solely to guarantee the legitimacy of their Will3 creation. In the future, we hope to continue building on this idea while collaborating with Harvard's Edmond J. Safra Center for Ethics to continue to explore what ethical minefields may exist and how best to navigate through them.
## Accomplishments that we're proud of
We were able to successfully deploy our contract on the Ropsten testnet. Moreover, we were able to solidify our understanding of Solidity as well as React JS. We also learned more about undervalued problem areas in crypto at a macro scale, and altogether, we are happy to have come out of TreeHacks with a greater understanding of both technical and fundamental aspects of web3. Our Texas group members are also quite proud that they were able to train their new dog to stop biting people.
## What we learned
Being one of the first hackathons we've competed in, we picked up some good lessons on what not to do under a time crunch. We scaled back the scope of our idea a couple of times and seemed to overestimate our ability to get certain aspects done when we should've poured all our efforts into developing a minimum viable product. We discovered that not all features on all services have complete documentation, and that this should be taken into consideration when deciding on a minimum viable product. Additionally, with team members scattered across the US, we took away a number of lessons from working virtually, including how to best delegate tasks between team members or subteams. There were many times a group couldn't continue because it was waiting for another team to finish something, leading to inefficiencies. In the future, we would spend more time on the planning phase of the project. We also learned the hard way that a short nap will most likely last many times as long as anticipated, so it is usually a bad idea to fall for the seductiveness of a quick rest.
We were able to come out on the other side with React JS, Solidity, and Web3 under our belts.
## What's next for Will3
We are excited to continue building the UI/UX to make creating, updating, and executing a will very seamless. We feel this is an under-represented issue that will only continue to grow over time, so we felt now was a good time to explore solutions to crypto asset inheritance. Additionally, as tokenization and NFTs become more ubiquitous, Will3 could allow users to add their non-crypto assets onto their virtual will. In addition, we would like to look for alternate methods of verifying that a user is still active and does not need their Will3 to execute, even at the time they have previously declared.
|
## Inspiration
As students of Berkeley, we value websites like Gofundme in providing anyone with the opportunity to spend money on causes they believe in. One problem we realized however is that the goodwill and trust of the public could be taken advantage of because there is a lack of strict accountability when it comes to the way the fundraised money is spent. From here, we noticed a similar trend among crowdsourced funding efforts in general -- whether it be funding for social causes or funding for investors. Investors wanting to take a leap of faith in a cause that catches their eye may be discouraged to invest for fear of losing all their money — whether from being scammed or from an irresponsible usage of money — while genuine parties who need money may be skipped. We wanted to make an application that solves this problem by giving the crowd control and transparency over the money that they provide.
## What it does
Guaranteed Good focuses on the operations of NPOs that need financial support with building technologies for their organization. Anybody can view the NPO's history and choose to provide cryptocurrency to help the NPO fund their project. However, the organization is forced to allocate and spend this money legitimately via smart contracts; every time they want to use a portion of their money pool and hire a freelancer to contribute to their project, they must notify all their investors who will decide whether or not to approve of this expenditure. Only if a majority of investors approve can the NPO actually use the money, and only in the way specified.
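The approval rule itself is simple to state; a toy model of it (with assumed names, since the production logic lives in a Solidity contract) might look like:

```typescript
interface SpendRequest {
  amountWei: bigint;
  freelancer: string;        // who the NPO wants to hire
  approvals: Set<string>;    // investor addresses that voted yes
}

// Funds for a request are released only once a strict majority of all
// investors have approved it.
function canSpend(req: SpendRequest, investors: string[]): boolean {
  return req.approvals.size * 2 > investors.length;
}
```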
## How we built it
To enable the smart contract feature of our application, we used Solidity for some of our backend infrastructure.
We programmed the frontend in React, Next, and Tailwind.
## Challenges we ran into
None of us had previous experience with Solidity or blockchain technologies so there was a steep learning curve when trying to familiarize ourselves with implementing smart contracts and working with blockchain. It was difficult to get started and we had a lot of confusion with setup and dependencies management.
The second thing that stumped us was adapting to using Solidity as a backend language. Since the language is a bit more niche than other more commonly used backend languages, there was less of an abundance of resources to teach us how to integrate our React frontend with our Solidity backend. Luckily, we found out that Solidity can integrate with the Next.js framework, so we set out to learn and implement Next.
## Accomplishments that we're proud of
We're all proud of the amount of deep diving that we did to familiarize ourselves with blockchain in a short amount of time! We thought it would be a risky move since we weren't sure if we would be able to actually learn and complete a blockchain-centered application, but we wanted to try anyway since we really liked our idea. Although we are by no means experts in blockchain now, it was fun spending time and learning a lot about this technology. We were also really satisfied when we were able to pull together a functioning full-stack application by the end of 24 hours.
In addition, with so many moving components in our application, it was especially important to make our website intuitive and simple for users to navigate. Thus, we spent time coming up with a streamlined and aesthetic design for our application and implementing it in React. Additionally, none of us really had design experience, so we tried our best to quickly learn Figma and simple design principles, and were surprised when it didn't come out as totally awkward-looking.
## What we learned
* New technologies such as blockchain, Solidity, Figma design, and Next
* How to communicate smart contract data from Solidity using Next and Node
* To appreciate the amount of careful planning and frontend design necessary for a good web application with many functionalities
## What's next for Guaranteed Good
**Dashboard**
* Currently, GuaranteedGood has a user dashboard that is bare-bones. With more time, we want to offer analytics on how a project is going, add graphs, and process more information from the user.
**Optimizing Runtime**
* With a lot of projects and user information to load, the website takes a bit longer to load than we'd like. We want to integrate lazy loading, image optimization, and website caching.
**Matching Freelancer users**
* Allowing freelancers to post and edit their profiles on the job board, and to accept or reject job offers
|
## Inspiration
As college students who graduated high school in the last year or two, we know first-hand the sinking feeling of opening an envelope after graduation and seeing a gift card to a clothing store you'll never set foot in. Instead, you can't stop thinking about the latest generation of AirPods that you wanted to buy. Well, imagine a platform where you could trade your unwanted gift card for something you would actually use: you'd be able to get those AirPods without spending money out of your own pocket. That's where the idea of GifTr began.
## What it does
Our website serves as a **decentralized gift card trading marketplace**. A user who wants to trade their own gift card for a different one can log in and connect their **Sui wallet**. Following that, they will be prompted to select their gift card company and cash value. Once they have confirmed that they would like to trade the gift card, they can browse through other gift cards "on the market", and if they find one they like, send a request to swap. If the other person accepts the request, a trustless swap is initiated without the use of an intermediary escrow, and the swap is completed.
## How we built it
In simple terms, the first party locks the card they want to trade, at which point a lock and a key are created for the card. They can request a card held by a second party, and if the second party accepts the offer, both parties swap gift cards and corresponding keys to complete the swap. If a party wants to tamper with their object, they must use their key to do so; the single-use key would then be consumed by the smart contract, and the trade would no longer be possible.
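A plain-TypeScript model of that lock-and-key mechanism (the production version is a Sui Move contract, so every name here is an illustrative assumption):

```typescript
interface Locked<T> { item: T; keyId: number; }
interface Key { id: number; used: boolean; }

let nextId = 0;

// Locking an item mints a matching single-use key.
function lock<T>(item: T): { locked: Locked<T>; key: Key } {
  const id = ++nextId;
  return { locked: { item, keyId: id }, key: { id, used: false } };
}

// Unlocking consumes the key, so a tampered-with item can never be traded:
// its key is already spent by the time the swap is attempted.
function unlock<T>(locked: Locked<T>, key: Key): T {
  if (key.used || key.id !== locked.keyId) throw new Error("invalid key");
  key.used = true;
  return locked.item;
}

// The swap succeeds only if both keys are unused and match their locks.
function swap<A, B>(a: Locked<A>, keyA: Key, b: Locked<B>, keyB: Key): [B, A] {
  return [unlock(b, keyB), unlock(a, keyA)];
}
```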
Our website was built in three stages: the smart contract, the backend, and the frontend.
**The smart contract** hosts all the code responsible for automating a trustless swap between the sender and the recipient. It **specifies conditions** under which the trade will occur, such as the assets being exchanged and their values. It also has **escrow functionality**, responsible for holding the cards deposited by both parties until swap conditions have been satisfied. Once both parties have undergone **verification**, the **swap** will occur if all conditions are met, and if not, the process will terminate.
**The backend** acts as a bridge between the smart contract and the front end, allowing for **communication** between the code and the user interface. The main way it does this is by **managing all data**, which includes all the user accounts, their gift card inventories, and more. Anything that the user does on the website is communicated to the Sui blockchain. This **blockchain integration** is crucial so that users can initiate trades without having to deal with the complexities of blockchain.
**The frontend** is essentially everything the user sees and does, or the UI. It begins with **user authentication** such as the login process and connection to Sui wallet. It allows the user to **manage transactions** by initiating trades, entering in attributes of the asset they want to trade, and viewing trade offers. This is all done through React to ensure *real-time interaction* so that new offers are seen and updated without refreshing the page.
## Challenges we ran into
This was **our first step into the field** of Sui blockchain, and Web3 entirely, so we found it to be really informative but also really challenging. The first step we took to address this challenge was to begin learning Move through some basic tutorials and set up a development environment. Another challenge was the **many aspects of escrow functionality**, which we addressed by embedding many tests within our code. For instance, we had to test that once an object was created, it would actually lock and unlock, and also that if the second shared party stopped responding or an object was tampered with, the trade would be terminated.
## Accomplishments that we're proud of
We're most proud of the look and functionality of our **user interface**, as user experience is one of our most important focuses. We wanted to create a platform that was clean and easy to use and navigate, which we achieved by maintaining a sense of consistency throughout the website and keeping basic visual-hierarchy principles in mind when designing it. Beyond this, we are also proud of pulling off a project that relies so heavily on **Sui blockchain** when we entered this hackathon with absolutely no knowledge about it.
## What we learned
Though we've designed a very simple trading project implementing Sui blockchain, we've learnt a lot about the **implications of blockchain** and the role it can play in daily life and cryptocurrency. The two most important aspects to us are decentralization and user empowerment. On such a simple level, we're able to now understand how a dApp can reduce reliance on third party escrows and automate these processes through a smart contract, increasing transparency and security. Through this, the user also gains more ownership over their own financial activities and decisions. We're interested in further exploring DeFi principles and web 3 in our future as software engineers, and perhaps even implementing it in our own life when we day trade.
## What's next for GifTr
Currently, GifTr only facilitates the exchange of gift cards, but we are intent on expanding this to allow users to trade their gift cards for Sui tokens in particular. This would encourage our users to shift from traditional banking systems to a decentralized system, and give them access to programmable money that can be stored more securely, integrated into smart contracts, and used in instant transactions.
|
losing
|
## Inspiration
A physically active lifestyle is the key to good overall health and to preventing chronic disease. A critical issue in Canada is the level of physical inactivity and sedentary living among Canadians of different ages. People live sedentary lifestyles because it is **EASY** to do, and it is **HARD** to find people with similar workout interests! Not anymore!
***Make exercising fun***
* Engage with the local community
* Build strong communities
* Bring different people together for the first time
***Living a sedentary lifestyle has become an increasingly menacing problem.***
* Childhood obesity and diabetes are just as prominent as ever.
***Your friends do not want to exercise with you or you want to do different exercises because you have different goals.***
* How do you make friends who want to do what you want to do?
+ How can we do this safely and effectively?
+ How might we engage and build a strong community that encourages exercise and healthy living?
## What it does
Sport-Scanner gives users the ability to schedule and meet up with other users at a local place to exercise and play sports! Sport-Scanner allows users to see all activities happening at a nearby open space or public park, and they can register for scheduled activities. Using geolocation, users are able to communicate with other registered users and engage with the community by connecting with one another, sharing motivational goals, photos, and workout data, and competing with each other for points. The social feed allows users to track their workout goals and see other members' fitness goals and data as well.
## How we built it
Node.js and Express make up the server. We used Google Cloud Compute and Google Firebase Functions. The front end is an Android app.
## Challenges we ran into
Firebase authentication held us up for a long time and we had difficulties with Android Studio.
## Accomplishments that we're proud of
We finally got Firebase working correctly and stood our server up on a Google Cloud Compute Engine VM. We are proud of the front-end designs and how nicely the app has developed.
## What we learned
Firebase, Compute Engine, and Android development
## What's next for sport-scanner
We hope to implement smoother geofencing features for the chat portion of our application. If we had more time, we would make a feed where people are able to automatically share their health-tracker (Fitbit) data directly with those they have connected and played sports with. People would also be able to share photos from their workouts on the network. We worked on integrating Fitbit data and were able to get the daily summary with an API call. We would like to expand on this: get more data, store it in Firebase, and serve it in a format that encourages competition and fitness.
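For reference, the daily-summary call mentioned above maps to a single authenticated GET; this sketch's token handling and function name are assumptions:

```typescript
// Fetch a user's Fitbit activity summary (steps, calories, active minutes)
// for a given date, e.g. "2020-02-01".
async function getDailySummary(token: string, date: string): Promise<unknown> {
  const res = await fetch(
    `https://api.fitbit.com/1/user/-/activities/date/${date}.json`,
    { headers: { Authorization: `Bearer ${token}` } },
  );
  return res.json();
}
```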
|
# Inspiration 🌟
**What is the problem?**
Physical activity early on can drastically increase longevity and productivity for later stages of life. Without finding a dependable routine during your younger years, you may experience physical impairment in the future. 50% of functional decline that occurs in those 30 to 70 years old is due to lack of exercise.
During the peak of the COVID-19 pandemic in Canada, nationwide isolation brought everyone indoors. There was still a vast number of people that managed to work out in their homes, which motivated us to create an application that further encouraged engaging in fitness, using their devices, from the convenience of their homes.
# Webapp Summary 📜
Inspired, our team decided to tackle this idea by creating a web app that helps its users maintain a consistent and disciplined routine.
# What does it do? 💻
*my trAIner* plans to aid you on your journey to healthy fitness by displaying the number of calories you have burned while also counting your reps. It additionally helps motivate you with words of encouragement. For example, whenever you near a rep goal, *my trAIner* will use phrases like “almost there!” or “keep going!” to push you through the last rep. Once you complete your set goal, *my trAIner* will congratulate you.
We hope that people may use this to make the best of their workouts. We believe that utilizing AI technology to help people reach their rep targets and track calories could help students and adults in the present and future.
# How we built it:🛠
To build this application, we used **JavaScript, CSS,** and **HTML.** To make the body mapping technology, we used a **TensorFlow** library. We mapped out different joints on the body and compared them as they moved, in order to determine when an exercise was completed. We also included features like parallax scrolling and sound effects from DeltaHacks staff.
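The rep-counting idea can be sketched with TensorFlow.js's pose-detection library; the choice of MoveNet, the joints compared, and the angle thresholds below are assumptions, not the team's exact values:

```typescript
import * as poseDetection from "@tensorflow-models/pose-detection";
import "@tensorflow/tfjs-backend-webgl";

// Angle at the elbow, in degrees, from shoulder-elbow-wrist keypoints.
function elbowAngle(
  s: poseDetection.Keypoint,
  e: poseDetection.Keypoint,
  w: poseDetection.Keypoint,
): number {
  const a = Math.atan2(s.y - e.y, s.x - e.x) - Math.atan2(w.y - e.y, w.x - e.x);
  const deg = Math.abs((a * 180) / Math.PI);
  return deg > 180 ? 360 - deg : deg;
}

let reps = 0;
let extended = false;

async function countCurls(video: HTMLVideoElement): Promise<void> {
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet,
  );
  setInterval(async () => {
    const [pose] = await detector.estimatePoses(video);
    if (!pose) return;
    const kp = (name: string) =>
      pose.keypoints.find((k) => k.name === name)!;
    const angle = elbowAngle(
      kp("left_shoulder"), kp("left_elbow"), kp("left_wrist"),
    );
    if (angle > 150) extended = true;                         // arm straightened
    if (extended && angle < 50) { reps++; extended = false; } // curl completed
  }, 100);
}
```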
# Challenges that we ran into 🚫
Learning how to use **TensorFlow**’s pose detection proved to be a challenge, as well as integrating our own artwork into the parallax scrolling. We also had to refine our backend as the library’s detection was shaky at times. Additional challenges included cleanly linking **HTML, JS, and CSS** as well as managing the short amount of time we were given.
# Accomplishments that we’re proud of 🎊
We are proud that we put out a product with great visual aesthetics as well as a refined detection method. We’re also proud that we were able to take a difficult idea and prove to ourselves that we were capable of creating this project in a short amount of time. More than that though, we are most proud that we could make a web app that could help out people trying to be more healthy.
# What we learned 🍎
Not only did we develop our technical skills like web development and AI, but we also learned crucial things about planning, dividing work, and time management. We learned the importance of keeping organized with things like to-do lists and constantly communicating to see what each other’s limitations and abilities were. When challenges arose, we weren't afraid to delve into unknown territories.
# Future plans 📅
Due to time constraints, we were not able to completely actualize our ideas, however, we will continue growing and raising efficiency by giving ourselves more time to work on *my trAIner*. Potential future ideas to incorporate may include constructive form correction, calorie intake calculator, meal preps, goal setting, recommended workouts based on BMI, and much more. We hope to keep on learning and applying newly obtained concepts to *my trAIner*.
|
## Inspiration
A traditional African adage states that "If you want to go fast you go alone; if you want to go far you go together." Unfortunately, most Americans seem to want to go fast but not very far when it comes to working out, as new studies have found that 56% of Americans go to the gym alone. As members of UC Berkeley's Triathlon team, we have seen firsthand how a strong fitness community can provide individuals not only with camaraderie and mutual support but also with accountability and inspiration to pursue healthier lifestyles. Creating this social media platform will enable individuals to feel that sense of community when going to the gym, alongside providing members a tool to help them be more accountable and efficient in their training regimen.
## What it does
We implemented a create-workout flow, a home feed page, and a profile view in the app. This allows you to examine your profile, create a workout, tag images in the workout, and view other people's workouts that have been pushed to the cloud. It also calculates statistics on your weightlifting training, like your personal bests in each activity and how many pounds you pushed in your last workout!
## How we built it
We used Google Cloud Firebase for data storage and iOS Swift to build all the models, views, and controllers. We implemented a traditional profile page, home feed page, and create-workout views. A workout is composed of a list of workout sets and other metadata like a description and images. We had to push our images to GCP and use asynchronous threading to prevent the UI from being held up by the main thread doing IO. The feed simply pulls all the data from the database right now to visualize all the workouts from all the users.
## Challenges we ran into
We had a hard time getting Google Cloud Firebase to work asynchronously with the images we were trying to download, because we were blocking the main thread to load all the images before viewing them. This was non-optimal, so we switched to having the main thread run the UI while the images load on another thread. Then, every time a new image is loaded, we wake up the main thread and reload the UI.
## Accomplishments that we're proud of
We finished the app in time and got a fully functional create workout, feed, and profile sections working.
## What we learned
* asynchronous threading in ios
* ios view specifics
## What's next for GymMe
* Login through Okta
* Database Search for Friends by location
* Personalization for Feed so that only your friends posts show up
* Add CI Unit tests for UI and Functions
|
partial
|
## Intelihelm - Never two-tired to keep you safe.
## What it does
This helmet is engineered to improve bike safety and the rider's confidence while cycling. Our goal was to design a helmet that improves bike-user interaction by notifying the rider of changes in their surroundings, such as an approaching car; the distance to the car is displayed on the rider's phone. The helmet can also inform cars of decisions made by the cyclist through LED lights activated by voice commands.
## How we built it
With the use of an Arduino Uno, Android Java development software, and a watermelon helmet, we were able to design a bike safety system. We used a Bluetooth module to communicate between the helmet and the phone, a speech API to let the user operate the system hands-free, an ultrasonic sensor to detect approaching cars, and LED lights to signal to surrounding motorists.
## Things we struggled with
Bluetooth connection between the phone and the helmet (the Arduino) proved to be much more difficult than expected.
## Accomplishments that we're proud of
We were most proud of the fact that we were able to deliver a functioning product by the end of the hackathon. With three team members who had never coded before, or been to a hackathon, we did not believe that the odds were in our favor.
## What we learned
With the coaching of our one experienced member, we learned the basics of Android Studio and Arduino programming. We also learned how to assemble the basic circuits and wiring of an Arduino board.
## What's next for Intelihelm
Ideally, we would have liked to include an accelerometer to detect when a cyclist is in a crash. This would allow the user to communicate their situation to a predesignated set of people. Additionally, the presentation of the product would need to be refined, with much brighter LEDs and a more aerodynamic appearance.
|
## Inspiration
We wanted to create something that helped other people. We had so many ideas, yet couldn't stick to one. Luckily, we ended up talking to Phoebe(?) from Hardware, who talked about how using textiles would be great in a project. Something clicked, and we started brainstorming ideas. We ended up with this project, which could help a lot of people in need, including friends and family close to us.
## What it does
Senses the orientation of your hand, and outputs either a key press, mouse move, or a mouse press. What it outputs is completely up to the user.
## How we built it
Sewed a glove, attached a gyroscopic sensor, wired it to an Arduino Uno, and programmed it in C# and C++.
## Challenges we ran into
Limited resources because certain hardware components were out of stock, time management (because of all the fun events!), and Arduino communication through the serial port.
## Accomplishments that we're proud of
We all learned new skills, like sewing, coding in C++, and programming with the Arduino to communicate with other languages, like C#. We're also proud of the fact that we actually fully completed our project, even though it's our first hackathon.
## What we learned
~~how 2 not sleep lolz~~
Sewing, coding, how to wire gyroscopes, sponsors, DisguisedToast winning Hack the North.
## What's next for this project
We didn't get to add all the features we wanted, due both to hardware limitations and time limitations. Some features we would like to add are the ability to save and load configs, automatic input setup, making it wireless, and adding a touch sensor to the glove.
|
# unbiasMe
>
> An AI based search engine capable of identifying bias in news articles and promoting sources that are unbiased.
>
>
>
#### Story Behind the Project
University student environments are filled with passionate discussion and debates of controversial topics. The recent Canadian federal election was the first election that many current university students, including ourselves, were eligible to vote in. We both found it very difficult to learn about each party's platform objectively; it seems like every Google search result is trying to persuade you to think one way or another. It's extremely difficult for someone trying to learn about politics for the first time, or about any controversial topic for that matter, to comb through Google and find unbiased articles. The current media landscape does not allow individuals to easily access unbiased information and form their own opinions. This limits meaningful conversation and causes people to be easily offended without first being thoroughly informed.
#### What is unbiasMe?
unbiasMe aims to target the above problem; it is a search engine that uses machine learning to determine which articles in a Google search contain the least amount of bias. It then displays those articles to the user first. It also displays a percent confidence for each article, which is simply how confident our machine learning model is that the article is unbiased.
When you enter a query into unbiasMe, a number of the results returned by Google are scraped to retrieve the text data in the article. For each result, we convert this text data into numerical features that can be used by a machine learning algorithm. Intensive research was done to determine important features that can be extracted from the text data, and to provide code for said extraction [1].
#### Implementation
The back-end is written in Python using Flask, and the front-end in HTML and CSS with a tiny bit of JavaScript. We use the Google Custom Search API to Google the user's query and extract URLs for our scraper. It was deployed using Google App Engine.
#### Challenges Encountered
* Front-end development
* That's it, we suck at HTML and CSS (don't even get us started on JavaScript).
#### Proud Accomplishments
* Implementing Google Cloud APIs and deploying a website for the first time for both of us
* Development of a web-app that actually runs, almost as well as we could have hoped
* Development of a service that impacts many like-minded individuals
* Networking with hackers from all around the world
#### What We Learned
* That we suck at front-end web development.
* How to deploy a website
* It was some of our first times using sklearn and pandas instead of MATLAB for machine learning
* Sleep is important
#### What's next for unbiasMe?
Our hope is to continue to develop the application by implementing more features to provide users with the best experience. One thing we'd really like to include is a recent news tab where users could go to get stories on current events that are unbiased. Also, the machine learning pipeline could probably be improved to provide users with more accurate results (though we are pretty happy with our 78% test accuracy). The code is not exactly the cleanest, and could probably be cleaned up to increase the speed of the search engine significantly.
#### Meet the Team
| Member | Position |
| --- | --- |
| Miriam Naim Ibrahim | Biomedical Engineer |
| Rylee Thompson | Electrical Engineer |
[1] Horne, Benjamin D., Sara Khedr, and Sibel Adali. "Sampling the news producers: A large news and feature data set for the study of the complex media landscape." Twelfth International AAAI Conference on Web and Social Media. 2018.
|
losing
|
## Inspiration
In the rapidly evolving landscape of artificial intelligence, we found ourselves pondering a profound question: What if AI agents could transcend their role as mere tools?
In the events of this hackathon, we've seen agents made with Fetch AI be utilized for autonomous tasking, scheduling, and other simple interactions. We're also aware of the progress that is being made in having AI Agents emulate real people for interactivity with users. But what if we were able to develop an environment that humans didn't need to exist in at all -- in other words, **what does it look like when AI agents self-sustainably interact with only each other?** What happens in that world? What insights do we gain? Can agents become self-sufficient entities capable of meaningful interaction?
This curiosity was sparked by our time in the Bay Area, CA this summer, where we participated in hackathons and events led by ML engineers at Google Deepmind, NVIDIA, AI21 Labs, and Boston Dynamics. We're here to answer a fundamental question in AI/ML research and development, pushing the boundaries of what's possible in the realm of AI agents, because the question is actively being worked on and remains unanswered.
Through this project, we've envisioned a world where AI identities are not just reactive but proactive – a world where they can engage in complex dialogues, simulate real-world scenarios, and provide us with unprecedented insights into human interactions. Better yet, we've proven that it's possible. Our vision has given birth to ConvSim, a revolutionary multi-agent platform that transforms us from active participants into captivated observers of a virtual world demonstrating intelligence and autonomous action.
## What it does
ConvSim is not just another chatbot or simulation tool – it's a window into a new dimension of AI-driven experiences. At its core, ConvSim is a sophisticated multi-agent platform that orchestrates interactions between AI entities, simulating real-world conversations and scenarios with uncanny realism.
Imagine witnessing a debate between Kamala Harris and Donald Trump on climate change, observing how it unfolds, and gaining insights that were previously unattainable. ConvSim makes this possible by leveraging advanced AI technologies to create a self-sustaining ecosystem of intelligent agents.
Our platform comprises five distinct agents, each playing a crucial role in the simulation:
A. Identity Generation Agent: The gateway to our virtual world, this agent interacts with users to understand their desired simulation parameters.
B. Agent 1 & Agent 2: These are our conversationalists, meticulously crafted AI entities that emulate real individuals with high fidelity. They engage in dialogue, mirroring the nuances and complexities of human interaction.
C. Analysis Agent: A silent observer that provides valuable perspective on the unfolding conversation, offering insights that might escape the human eye.
D. Tool Agent: This agent translates the rich tapestry of conversation into quantifiable data, generating plots based on sentiment analysis and productivity metrics.
Through this intricate dance of AI entities, ConvSim creates a self-sustaining environment that can simulate a vast array of scenarios – from high-stakes political debates to intimate counseling sessions, from classroom interactions to celebrity interviews.
## How we built it
Building ConvSim was an exercise in pushing the boundaries of AI technology and software architecture. We leveraged cutting-edge AI frameworks, including Fetch AI and OpenAI, to create a robust and flexible multi-agent system.
Our development process focused on several key areas:
1. Agent Design: Each agent was carefully crafted to fulfill its specific role within the ecosystem. We used advanced natural language processing models to ensure realistic and context-aware interactions.
2. Inter-Agent Communication: We developed a sophisticated communication protocol that allows our agents to exchange information seamlessly, creating a cohesive and believable simulation.
3. User Interface: While the magic happens behind the scenes, we created an intuitive interface that allows users to easily set up and observe simulations.
4. Analysis and Visualization: We integrated powerful analytics tools to process the wealth of data generated by our simulations, providing users with valuable insights and visualizations.
5. Scalability and Performance: Given the complex nature of our multi-agent system, we paid special attention to optimization, ensuring that ConvSim can handle multiple simultaneous simulations without compromising on performance.
A high-level diagram of our multi-agent platform architecture is also included.
This architecture allows for a seamless flow of information between agents, creating a dynamic and responsive simulation environment.
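To make the inter-agent flow concrete, here is a minimal sketch in the style of Fetch.ai's uAgents framework; the agent names, seeds, and message text are placeholders, and in ConvSim each turn would come from an LLM call conditioned on a persona:

```python
from uagents import Agent, Bureau, Context, Model

class Turn(Model):
    text: str

agent_one = Agent(name="agent_one", seed="agent one seed phrase")
agent_two = Agent(name="agent_two", seed="agent two seed phrase")

@agent_one.on_interval(period=10.0)
async def open_conversation(ctx: Context):
    # in ConvSim this text would be generated by an LLM, not hard-coded
    await ctx.send(agent_two.address, Turn(text="Opening statement..."))

@agent_two.on_message(model=Turn)
async def reply(ctx: Context, sender: str, msg: Turn):
    ctx.logger.info(f"received: {msg.text}")
    await ctx.send(sender, Turn(text="Counter-argument..."))

bureau = Bureau()
bureau.add(agent_one)
bureau.add(agent_two)
bureau.run()  # both agents now run and exchange turns autonomously
```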
## Challenges we ran into
Developing ConvSim was not without its hurdles. Some of the key challenges we faced include:
1. Maintaining Coherence: Ensuring that multiple AI agents could maintain a coherent and contextually relevant conversation over extended periods was a significant challenge. We had to fine-tune our models extensively to achieve natural dialogue flow.
2. Balancing Realism and Ethics: As we simulated real-world personalities and scenarios, we had to carefully navigate ethical considerations to ensure our simulations were respectful and did not propagate harmful biases or misinformation.
3. Performance Optimization: Managing multiple sophisticated AI models simultaneously put a strain on computational resources. Optimizing our system for efficiency without compromising on the quality of interactions was a complex task.
4. Data Integration: Synthesizing outputs from multiple agents into meaningful analyses and visualizations required careful data integration and processing techniques.
5. User Experience Design: Creating an interface that could convey the complexity of our simulations while remaining intuitive and engaging for users was a delicate balancing act.
## Accomplishments that we're proud of
Despite the challenges, our team has achieved several groundbreaking accomplishments with ConvSim:
1. True Multi-Agent Interaction: We've successfully created a self-sustaining ecosystem where multiple AI agents interact autonomously, a feat that pushes the boundaries of current AI technology.
2. High-Fidelity Simulations: Our platform can emulate real-world personalities and scenarios with remarkable accuracy, opening up new possibilities for entertainment, education, and research.
3. Advanced Analytics: By integrating sentiment analysis and productivity metrics, we've added a layer of quantitative insight to qualitative interactions, providing valuable data for various applications.
4. Scalable Architecture: Our system is designed to handle multiple simultaneous simulations, making it a powerful tool for large-scale scenario analysis and entertainment productions.
5. Ethical AI Development: We've navigated complex ethical considerations to create a platform that respects privacy and promotes responsible AI use.
## What we learned
The development of ConvSim has genuinely been an incredible learning journey:
1. AI Complexity: We gained deep insights into the intricacies of creating and managing multiple AI agents in a cohesive system. The level of architectural detail and rigor required to make this happen was extraordinarily high. Our program utilizes multi-threading, computation optimization, and a fully integrated platform based on Fetch AI's architecture system to make this environment self-sustaining and continually alive.
2. Interdisciplinary Approach: We learned the importance of combining expertise from various fields – from AI and software engineering to psychology and ethics – to create a truly innovative product. Our product is applied to the Entertainment and Media track, but its simulation capabilities unveil serious potential in Sustainability, Healthcare, and Education as well.
3. Real-World Applications: Through our development process, we've uncovered numerous potential applications for multi-agent systems in entertainment, education, mental health, and more. Moreover, we've continued to find use cases that fit across any industry -- the reason being, we're able to simulate a "team environment" where a set of agents work together to accomplish a task. It's an architecture that fits so many systems.
4. Ethical Considerations: We developed a keen understanding of the ethical implications of AI simulations and the importance of responsible development practices. This is at the forefront of our development system and mission. The product's capabilities support diversity, equity, and inclusion in both the simulations it can run and the questions it can answer.
5. User-Centric Design: We learned valuable lessons about designing complex systems that remain accessible and engaging for end-users.
## What's next for ConvSim
ConvSim is not just a hackathon project – it's the beginning of a journey to revolutionize how we interact with and learn from AI. Our future roadmap includes:
1. Expanded Simulation Capabilities: We aim to increase the range of scenarios and personalities that ConvSim can emulate, making it an even more versatile tool for entertainment and research.
2. Enhanced Analytics: We plan to integrate more advanced analytics tools, including predictive modeling, to provide even deeper insights from our simulations.
3. VR/AR Integration: To create truly immersive experiences, we're exploring integration with virtual and augmented reality technologies.
4. API Development: We want to make ConvSim's capabilities accessible to developers and researchers, allowing them to build upon our platform.
5. Real-World Partnerships: We're seeking partnerships in the entertainment, education, and mental health sectors to bring ConvSim's capabilities to real-world applications.
6. Continuous Ethical Review: As we expand, we're committed to ongoing ethical review and refinement of our platform to ensure responsible AI use.
ConvSim represents a paradigm shift in AI-driven experiences. By creating a self-sustaining multi-agent platform, we've opened the door to unprecedented possibilities in entertainment, education, and research. From simulating high-stakes political debates to exploring sensitive topics in mental health and sexuality, ConvSim provides a safe, immersive environment for exploration and learning.
In the realm of media and entertainment, ConvSim is not just a tool – it's a revolution. We're not merely predicting the future; we're creating it. With ConvSim, content creators can prototype storylines, test character interactions, and even generate entire narratives driven by AI. Audiences can step into immersive experiences, witnessing historical events unfold or exploring alternate realities.
But our vision extends beyond entertainment. ConvSim has the potential to be a powerful tool for education, allowing students to interact with historical figures or complex concepts in engaging ways. In the field of psychology, it could provide a platform for exploring human behavior and interactions in a controlled, ethical environment.
From a highly technical perspective, this product is one that doesn’t have linear utility - if done less than optimally, it exists as an exciting entertainment and media tool; if done optimally, it is an invaluable tool in simulating the unknown.
1. Personality mimicking - In the future, a potential implementation is to first create RAG knowledge bases or fine tune models to mimic a known person or personality. Relevant information can be gathered by webscraping.
2. Simulation Optimization - Multiple simulations can be done, with slight modifications. These simulations can be aggregated into reports, finding the “Nash equilibrium” of conversations, or what state will they most likely trend towards.
3. Analysis - In the future, the analysis agent should be able to produce what metrics to measure on its own. This could be sentiment, productivity, entertainment.
As we continue to develop and refine ConvSim, we're not just building a product – we're pioneering a new frontier in AI research and application. We're tackling fundamental questions about AI capabilities, ethics, and human-AI interaction with the highest technical rigor. We're building a system that redefines immersive experiences for media and entertainment, but also provides insights and learning that address key issues in sustainability, education, and healthcare, among other areas. We can simulate how a conversation between Kamala Harris and Donald Trump about climate change looks. We can simulate how a conversation between a harsh teacher and a disgruntled student with a learning disadvantage looks. We can simulate how doctor-patient interactions look. We can find insights by exploring the unknown or difficult conversations that haven't been had, and dive into the future.
Join us on this exciting journey as we continue to push the boundaries of what's possible with AI. With ConvSim, the future of interactive experiences is here, and it's incredible.
NOTE: We have a video demo for our project, and have not been able to include it in this project due to some technical difficulties with the submission. We are so excited to share the product in person, and please reach out if you would like to see the demo!
|
## Inspiration
it's really fucking cool that big LLMs (ChatGPT) are able to figure out on their own how to use various tools to accomplish tasks.
for example, see Toolformer: Language Models Can Teach Themselves to Use Tools (<https://arxiv.org/abs/2302.04761>)
this enables a new paradigm self-assembling software: machines controlling machines.
what if we could harness this to make our own lives better -- a lil LLM that works for you?
## What it does
i made an AI assistant (SMS) using GPT-3 that's able to access various online services (calendar, email, google maps) to do things on your behalf.
it's just like talking to your friend and asking them to help you out.
## How we built it
a lot of prompt engineering + few shot prompting.
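a minimal sketch of the idea (the tool names and prompt below are made up, and this uses the GPT-3-era completions API):

```python
# few-shot routing: show the model a handful of request -> tool examples,
# then let it pick the tool for a new request.
import openai

openai.api_key = "YOUR_KEY"

PROMPT = """Decide which tool handles the user's request.
Request: add lunch with Sam tomorrow at noon -> Tool: calendar.create_event
Request: email Bob the report -> Tool: email.send
Request: how long to drive to SFO right now -> Tool: maps.directions
Request: {request} -> Tool:"""

def route(request):
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(request=request),
        max_tokens=16,
        temperature=0,
        stop=["\n"],
    )
    return resp.choices[0].text.strip()

print(route("remind me to call mom friday"))  # -> calendar.create_event
```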
## What's next for jarbls
shopping, logistics, research, etc -- possibilities are endless
* more integrations !!!
the capabilities explode exponentially with the number of integrations added
* long term memory
come by and i can give you a demo
|
## Inspiration
Our inspiration comes from telemarketing surveys. We wanted to create a sort of "prank call" for people, especially our friends, where the call would be a super-realistic voice presenting a survey to them. In the end, we decided to program a chatbot that conducts a survey via phone and asks people how they feel about AI.
## What it does
Our project is a chatbot that conducts a survey of the population of London, Ontario about their own thoughts and beliefs regarding Artificial Intelligence. The chatbot presents a series of multiple-choice questions as well as open-ended questions about the perception and knowledge of AI. The answers are recorded and analyzed before being sent to our website, where the data is presented. The purpose is to give a score on how well our target population understands AI, and how well they would survive an AI apocalypse.
## How we built it
We built it using Dasha AI.
## Challenges we ran into
The first challenge we ran into is that the application (AI) hangs up when there is a longer delay between the question and the user's response. The second challenge is that the AI skipped the last questions and automatically exited and hung up during the first test of our application.
## Accomplishments that we're proud of
This is the first hackathon that most of our members have participated in. Therefore, being able to challenge ourselves and build a complex project in a span of 36 hours is the greatest achievement that we have accomplished.
## What we learned
* The basics of Dasha AI and how to use it to develop software.
* Fostered our skills in web design.
## What's next for Boom or Doom : The Future of AI
**Target a larger population**
## If you want to try it out for yourself:
Clone the github repo and download NodeJS and Dasha!
<https://dasha.ai/en-us>
More instructions on setting up Dasha available here.
|
winning
|
## Inspiration
The inspiration for InstaPresent came from our frustration with constantly having to create presentations for class and being inspired by the 'in-game advertising' episode on Silicon Valley.
## What it does
InstaPresent is a tool that uses your computer's microphone to generate a presentation in real-time. It can retrieve images and graphs and summarize your words into bullet points.
## How we built it
We used Google's Speech-to-Text API to process audio from the laptop's microphone. Speech is transcribed while the user talks, and when they stop speaking, the aggregated text is sent to the server via WebSockets to be processed.
## Challenges We ran into
Summarizing text into bullet points was a particularly difficult challenge as there are not many resources available for this task. We ended up developing our own pipeline for bullet-point generation based on part-of-speech and dependency analysis. We also had plans to create an Android app for InstaPresent, but were unable to do so due to limited team members and time constraints. Despite these challenges, we enjoyed the opportunity to work on this project.
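Our actual rules and post-processing differ, but a minimal sketch of dependency-based bullet extraction (assuming spaCy) looks like this:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def to_bullets(text):
    bullets = []
    for sent in nlp(text).sents:
        root = sent.root  # main verb of the sentence
        subj = [t for t in root.lefts if t.dep_ in ("nsubj", "nsubjpass")]
        obj = [t for t in root.rights if t.dep_ in ("dobj", "attr")]
        if subj and root.pos_ == "VERB":
            # compress the sentence to "subject verb object"
            bullets.append(" ".join([subj[0].text, root.lemma_] + [o.text for o in obj]))
    return bullets

print(to_bullets("The mitochondria produce energy. Cells need this energy to grow."))
# -> ['mitochondria produce energy', 'Cells need energy']
```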
## Accomplishments that we're proud of
We are proud of creating a web application that utilizes a variety of machine learning and non-machine learning techniques. We also enjoyed the challenge of working on an unsolved machine learning problem (sentence simplification) and being able to perform real-time text analysis to determine new elements.
## What's next for InstaPresent
In the future, we hope to improve InstaPresent by predicting what the user intends to say next and improving the text summarization with word reordering.
|
## Inspiration
The biggest irony today is despite the advent of the internet, students and adults are more oblivious than ever to world events, and one can easily understand why. Of course, Facebook, YouTube, and League will be more interesting than reading Huffington Post; coupled with the empirical decrease in the attention span of younger generations, humanity is headed towards disaster.
## What it does
Our project seeks to address this crisis by informing people in a novel and exciting way. We create a fully automated news extraction, summarization, and presentation pipeline that involves an AI-anime character news anchor. The primary goal of our project is to engage and educate an audience, especially that of younger students, with an original, entertaining venue for encountering reliable news that will not only foster intellectual curiosity but also motivate them to take into deeper consideration of relevant issues today, from political events to global warming.
The animation is basically a news anchor talking about several recent news stories, with related stories discussed in a short blurb.
## Demo Video Explanation
The demo video generally performs well, except for the first few seconds and the Putin/Taliban part. This is because those clusters are too small, so many clusters get merged together, as our k-means has a fixed number of clusters. A quick fix is to simply calculate the internal coherence of each cluster and filter based on that. More advanced methods can be based on those described in the Scatter/Gather paper by Karger et al.
## How we built it
### News Summarization
For extraction and summarization, our pipeline first web-scrapes news articles from trusted sources (CNN, New York Times, Huffington Post, Washington Post, etc.) to obtain the texts of recent news articles. Then it generates a compact summary of these texts using an in-house developed two-tier text summarization algorithm based on state-of-the-art natural language processing techniques. The algorithm first does an extractive summarization of individual articles. Next, it computes an overall 'topic feature' embedding. This embedding is used to cluster related news, and the final script is generated from these clusters using DL-based abstractive summarization.
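A minimal sketch of the clustering stage, including the internal-coherence filter mentioned in the demo-video note above (cluster count and threshold are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def coherent_clusters(embeddings, k=8, min_coherence=0.4):
    """Cluster article embeddings; keep only internally coherent clusters."""
    km = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    kept = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # mean similarity of members to their centroid = internal coherence
        coherence = cosine_similarity(
            embeddings[members], km.cluster_centers_[c : c + 1]
        ).mean()
        if coherence >= min_coherence:  # drops the over-merged clusters
            kept.append(members.tolist())
    return kept
```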
### News Anchor Animation
Furthermore, using the google cloud text-to-speech API, we generate speech with our custom pitch and preferences and we then have code that generates a video using an image of any interesting, popular anime character. In order for the video to feel natural to the audience, we accounted for accurate lip and facial movement; there are calculations made using specific speech traits of the .wav file that produces realistic and not only educational but also humorous videos that will entertain the younger audience.
### Audience Engagement
Moreover, we wrote code using the Twitter API to automate the process of uploading videos to our Twitter account, MinervaNews, which is integrated with the project's server: it uploads a video when the server starts, and automatically generates a new video every 24 hours using fresh articles from the sources.
## What's next for Minerva Daily News Reporter
Our project will have a lasting impact on the education of an audience ranging in all age groups. Anime is one great example of a venue that can broadcast news, and we selected anime characters as a humorous and eye-catching means to educate the younger audience. Our project and its customization allow for the possibility of new venues and greater exploration of making education more fun and accessible to a vast audience. We hope to take our project further and add more animations as well as more features.
## Challenges
Our compute platform, Satori has a unique architecture called IBM ppe64le that makes package and dependency management a nightmare.
## What we learned
8 hours in planning = 24 hours in real time.
## Github
<https://github.com/gtangg12/liszt>
|
## 💡 Inspiration
You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends that can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind “Will I ever pass this course?”
If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle with paying attention in class, miss classes, have a rough home environment, or just want to get ahead in their studies.
We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit.
## 🔍 What it does
First, our AI-powered summarization engine creates a set of live notes based on the current lecture.
Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos!
Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind.
## ⭐ Feature List
* Dashboard with all your notes
* Summarizes your lectures automatically
* Select/Highlight text from your online lectures
* Organize your notes with intuitive UI
* Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime
* Text simplification, definitions, and synonyms anywhere on the web
* DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation (see the sketch below).
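A single-machine sketch of the BART summarization step (chunk size and generation lengths are illustrative; in our setup the chunks are fanned out to DCP workers instead of being processed in one loop):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_lecture(transcript, chunk_words=400):
    # split the lecture transcript into BART-sized chunks
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    notes = summarizer(chunks, max_length=80, min_length=20, do_sample=False)
    return "\n".join("- " + n["summary_text"] for n in notes)
```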
## ⚙️ Our Tech Stack
* Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API
* Web Application: Chakra UI + React.js, Next.js, Vercel
* Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js
* Infrastructure: Firebase/Firestore
## 🚧 Challenges we ran into
* Completing our project within the time constraint
* There were many APIs to integrate, which made us spend a lot of time debugging
* Working with the Chrome Extension API, which we had never worked with before.
## ✔️ Accomplishments that we're proud of
* Learning how to work with Google Chrome Extensions, which was an entirely new concept for us.
* Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use.
## 📚 What we learned
* The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out!
* Working on a project where you can relate helps a lot with motivation
* Chakra UI is legendary and a lifesaver
* The Chrome Extension API is very difficult, did we mention that already?
## 🔭 What's next for AcadeME?
* Implementing a language translation toggle to help international students
* Note Encryption
* Note Sharing Links
* A Distributive Quiz mode, for online users!
|
winning
|
## Inspiration
Beautiful stationery and binders filled with clips, stickies, and colourful highlighting are things we miss from the past. Passing notes and memos and recognizing who it's from just from the style and handwriting, holding the sheet in your hand, and getting a little personalized note on your desk are becoming a thing of the past as the black and white of emails and messaging systems take over. Let's bring back the personality, color, and connection opportunities from memo pads in the office while taking full advantage of modern technology to make our lives easier. Best of both worlds!
## What it does
Memomi is a web application for offices to simplify organization in a busy environment while fostering small moments of connection and helping fill in the gaps along the way. Using powerful NLP technology, Memomi automatically links related memos together, suggests topical new memos to expand on missing info, and allows you to send memos to other people in your office space.
## How we built it
We built Memomi using Figma for UI design and prototyping, React web apps for frontend development, Flask APIs for the backend logic, and Google Firebase for the database. Cohere's NLP API forms the backbone of our backend logic and is what powers Memomi's intelligent suggestions for tags, groupings, new memos, and links.
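A minimal sketch of how the linking can work with Cohere embeddings (API key and similarity threshold are placeholders; our production logic adds tags and groupings on top of this):

```python
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")

def link_memos(memos, threshold=0.8):
    """Return index pairs of memos similar enough to link together."""
    emb = np.array(co.embed(texts=memos).embeddings)
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # normalize for cosine similarity
    sims = emb @ emb.T
    return [(i, j) for i in range(len(memos))
            for j in range(i + 1, len(memos)) if sims[i, j] >= threshold]

pairs = link_memos(["Q3 budget review notes", "Follow-up on the Q3 budget", "Lunch order"])
```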
## Challenges we ran into
With such a dynamic backend and more complex data, we struggled to identify how best to organize and digitize the system. We also struggled a lot with the frontend because of the need to both edit and display data annotated at the exact positions our information required. Connecting our existing backend features to the frontend was our main barrier to showing off our accomplishments.
## Accomplishments that we're proud of
We're very proud of the UI design and what we were able to implement in the frontend. We're also incredibly proud of how strong our backend is! We're able to generate and categorize meaningful tags, groupings, and links between documents and annotate text to display it.
## What we learned
We learned about different NLP topics, how to make less rigid databases, and learned a lot more about advanced react state management.
## What's next for Memomi
We would love to implement sharing memos in office spaces as well as authorization and more text editing features like markdown support.
|
## Inspiration 🌈
Our team has all experienced the struggle of jumping into a pre-existing codebase and having to process how everything works before starting to add our own changes. This can be a daunting task, especially when commit messages lack detail or context. We also know that when it comes time to push our changes, we often gloss over the commit message to get the change out as soon as possible, not helping any future collaborators or even our future selves. We wanted to create a web app that allows users to better understand the journey of the product, allowing users to comprehend previous design decisions and see how a codebase has evolved over time. GitInsights aims to bridge the gap between hastily written commit messages and clear, comprehensive documentation, making collaboration and onboarding smoother and more efficient.
## What it does 💻
* Summarizes commits and tracks individual files in each commit, and suggests more accurate commit messages.
* The app automatically suggests tags for commits, with the option for users to add their own custom tags for further sorting of data.
* Provides a visual timeline of user activity through commits, across all branches of a repository
* Allows filtering commit data by user, highlighting the contributions of individuals
## How we built it ⚒️
The frontend is developed with Next.js, using TypeScript and various libraries for UI/UX enhancement. The backend uses Express.js, which handles our API calls to GitHub and OpenAI. We used Prisma as our ORM to connect to a PostgreSQL database for CRUD operations. For authentication, we utilized GitHub OAuth to generate JWT access tokens, securely accessing and managing users' GitHub information. The JWT is stored in cookie storage and sent to the backend API for authentication. We created a GitHub application that users must add to their accounts when signing up. This allowed us to authenticate on the backend not only as our application, but also as the end user who grants access to the app.
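The production backend is Express.js; the following Python sketch only illustrates the core loop of pulling commits from the GitHub REST API and asking OpenAI for a clearer message (owner/repo and tokens are placeholders, and the real version also feeds in file diffs):

```python
import requests
import openai

openai.api_key = "OPENAI_KEY"
headers = {"Authorization": "Bearer GITHUB_TOKEN"}  # token from the GitHub app flow

commits = requests.get(
    "https://api.github.com/repos/OWNER/REPO/commits", headers=headers
).json()

for c in commits[:5]:
    original = c["commit"]["message"]
    suggestion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Rewrite this commit message to be clearer and more specific: {original}",
        }],
    ).choices[0].message.content
    print(f"{original!r} -> {suggestion!r}")
```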
## Challenges we ran into ☣️☢️⚠️
Originally, we wanted to use an open-source LLM, like LLaMA, since we were parsing through a lot of data, but we quickly realized it was too inefficient, taking over 10 seconds to analyze each commit message. We also learned to use new technologies like d3.js, the GitHub API, and Prisma; honestly, almost everything was new to us.
## Accomplishments that we're proud of 😁
The user interface is so slay, especially the timeline page. The features work!
## What we learned 🧠
Running LLMs locally saves you money, but LLMs require lots of computation (wow) and are thus very slow when running locally
## What's next for GitInsights
* Filter by tags, more advanced filtering and visualizations
* Adding webhooks to the github repository to enable automatic analysis and real time changes
* Implementing CRON background jobs, especially with the analysis the application needs to do when it first signs on an user, possibly done with RabbitMQ
* Creating native .gitignore files to refine the summarization process by ignoring files unrelated to development (e.g., package.json, package-lock.json, `__pycache__`).
|
## Inspiration
While a member of my team was conducting research at UCSF, he noticed a family partaking in a beautiful, albeit archaic, practice. They gave their grandfather access to a Google Doc, where each family member would write down the memories that they have with him. Nearly every day, the grandfather would scroll through the doc and look at the memories that he and his family wanted him to remember.
## What it does
Much like the Google Doc does, our site stores memories inputted by either the main account holder themselves, or other people who have access to the account, perhaps through a shared family email. From there, the memories show up on the user's feed and are tagged with the emotion they indicate. Someone with Alzheimer's can easily search through their memories to find what they are looking for. In addition, our chatbot feature, trained on their memories, allows users to talk to the app directly, asking for what they are looking for.
## How we built it
Next.js, React, Node.js, Tailwind, etc.
## Challenges we ran into
It was difficult implementing our chatbot in a way where it is automatically updated with the data that our users input into the site. Moreover, we were working with React for the first time and faced many challenges trying to build out and integrate the different technologies into our website, including setting up MongoDB, Flask, and different APIs.
## Accomplishments that we're proud of
Getting this done! Our site is polished and carries out our desired functions well!
## What we learned
As beginners, we were introduced to full-stack development!
## What's next for Scrapbook
We'd like to introduce Scrapbook to medical professionals at UCSF and see their thoughts on it.
|
winning
|
## What it does
This project aims to generate detailed interview question prompts based on the user's input of job title, description, and required job skills. It can take a file of the user's verbal answer, turn it into text, and process it to be analyzed.
## How we built it
We used Google Cloud's Speech-to-Text and Storage APIs to create and store transcripts. We also used Cohere's API to create a prompt and generate interview questions.
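A minimal sketch of the flow (the bucket URI, prompt wording, and audio config are assumptions for illustration):

```python
import cohere
from google.cloud import speech

def transcribe(gcs_uri):
    # transcribe an uploaded answer stored in Google Cloud Storage
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(language_code="en-US")
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

co = cohere.Client("YOUR_API_KEY")
answer = transcribe("gs://spinter-audio/answer.wav")
questions = co.generate(
    prompt="Generate three interview questions for a Software Engineer role. "
           f"The candidate's last answer was: {answer}"
).generations[0].text
```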
## Challenges we ran into
We ran into trouble creating our API endpoints and creating a consistent Cohere prompt that will give us good interview questions.
## Accomplishments that we're proud of
We're proud of being able to work together to figure out how to use technologies that we have never used before.
## What we learned
We learned new technologies, such as Google Cloud's APIs, Cohere's APIs, and building a web application using Flask
## What's next for Spinter
We hope to make it work and refine it further.
|
## Inspiration
Shohruz, the president of Hunter’s first CS Club, created this club to provide introductory-level educational content to all students interested in CS at Hunter College. Sumayia, the secretary of the club, noticed students struggling in their courses on the Discord server and began providing academic sources in the club’s email newsletters. Sumayia also has a younger sister who is currently experiencing the effects of the pandemic on her education. Thus, it propelled the team to develop an educational tool that can hopefully be integrated into classroom environments.
During the pandemic, Ynalois performed poorly in her college-level science classes. She lost the support that teachers offered in person. She searched for it online, in hopes that the internet could replace that connection. That’s when she realized the value of assistance.
## What it does
Users are able to enter an academic subject into the input box to generate a practice question based on the subject. If the user requires assistance, they can request a hint. If the user wants the answer, they can prompt the solution by clicking a button. By default, the user only gets a maximum of 5 free tries, and once they run out, they have to purchase the premium plan to unlock unlimited tries.
## How we built it
We used React for the frontend, Node and Express for our backend, and OpenAI's API to generate the practice questions, hints, and answers.
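A minimal sketch of the generation calls (the prompt wording is illustrative, not our exact prompts):

```python
import openai

openai.api_key = "YOUR_KEY"

def ask(prompt):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

subject = "stoichiometry"
question = ask(f"Write one practice problem about {subject}.")
hint = ask(f"Give a short hint for this problem without revealing the answer:\n{question}")
answer = ask(f"Give the worked solution to this problem:\n{question}")
```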
## Challenges we ran into
We wanted to implement Stripe's payment gateway system to collect payments from users for our premium plan. Due to time constraints, we weren't able to implement it.
We also wanted to display all 3 hints in succession after each click, with each hint getting more specific than the previous one. This would help the user gain confidence in their problem-solving skills for that specific subject.
## Accomplishments that we're proud of
We're proud that we were able to successfully set up and integrate OpenAI's API as our MVP to generate practice questions, and then after, be able to generate hints and the answer to the AI-generated practice question.
Moreover, we are also proud of deriving a witty compound word for our website: HoneyDo. Honey is a sweet substance, and dew is water condensed from the air, which induces relaxation. We aim to deliver education in a sweet and condensed manner.
## What we learned
Shohruz - I learned how to use OpenAI's chat completion API and furthered my understanding of backend development.
Ynalois - I learned backend development for the first time using Node, Express, and OpenAI's API.
Sumayia - I learned frontend development and React for the first time to create dynamic components.
In a broader context, the pandemic had detrimental effects on students across the globe. In Boston, "60 percent of students at some high-poverty schools have been identified as at high risk for reading problems— twice the number of students as before the pandemic, according to Tiffany P. Hogan, director of the Speech and Language Literacy Lab at the MGH Institute of Health Professions in Boston." If this disparity persists, "poor readers are more likely to drop out of high school, earn less money as adults and become involved in the criminal justice system." The pandemic didn't solely affect low-income groups, but "children in every demographic group."
Billions of federal stimulus dollars are flowing to districts for tutoring and other support, but their effect may be limited if schools cannot find quality staff members to hire. This is where AI comes into play! Our educational tool implements AI to assist students on any subject of their choosing with an option of receiving 3 hints.
## What's next for HoneyDo
To ensure profitability and scalability, we plan to implement Stripe's payment gateway to be able to handle transactions.
Discords-
codez\_
ynabanina
sumayia04
|
## 💡 Inspiration 💡
Mental health is a growing concern in today's population, especially in 2023 as we're all adjusting back to civilization again as COVID-19 measures are largely lifted. With Cohere as one of our UofT Hacks X sponsors this weekend, we want to explore the growing application of natural language processing and artificial intelligence to help make mental health services more accessible. One of the main barriers for potential patients seeking mental health services is the negative stigma around therapy -- in particular, admitting our weaknesses, overcoming learned helplessness, and fearing judgement from others. Patients may also find it inconvenient to seek out therapy -- either because appointment waitlists can last several months long, therapy clinics can be quite far, or appointment times may not fit the patient's schedule. By providing an online AI consultant, we can allow users to briefly experience the process of therapy to overcome their aversion in the comfort of their own homes and under complete privacy. We are hoping that after becoming comfortable with the experience, users in need will be encouraged to actively seek mental health services!
## ❓ What it does ❓
This app is a therapy AI that generates reactive responses to the user and remembers previous information not just from the current conversation, but also past conversations with the user. Our AI allows for real-time conversation by using speech-to-text processing technology and then uses text-to-speech technology for a fluent human-like response. At the end of each conversation, the AI therapist generates an appropriate image summarizing the sentiment of the conversation to give users a method to better remember their discussion.
## 🏗️ How we built it 🏗️
We used Flask to make the API endpoints in the back-end to connect with the front-end and also save information for the current user's session, such as username and past conversations, which were stored in a SQL database. We first convert the user's speech to text and then send it to the back-end, where it is processed using Cohere's API, which has been trained on our custom data and the user's past conversations, and sent back. We then use our text-to-speech algorithm for the AI to 'speak' to the user. Once the conversation is done, we use Cohere's API to summarize it into a suitable prompt for the DALL-E text-to-image API, which generates an image summarizing the user's conversation for them to look back at when they want to.
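A minimal sketch of the end-of-session step (keys, prompt wording, and image size are placeholders):

```python
import cohere
import openai

co = cohere.Client("COHERE_KEY")
openai.api_key = "OPENAI_KEY"

def closing_image(transcript):
    # condense the session into one line, then render it as an image
    summary = co.summarize(text=transcript).summary
    image = openai.Image.create(
        prompt=f"A gentle illustration representing: {summary}",
        n=1,
        size="512x512",
    )
    return image["data"][0]["url"]  # shown to the user at the end of the session
```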
## 🚧 Challenges we ran into 🚧
We faced an issue with implementing a connection from the front-end to back-end since we were facing a CORS error while transmitting the data so we had to properly validate it. Additionally, incorporating the speech-to-text technology was challenging since we had little prior experience so we had to spend development time to learn how to implement it and also format the responses properly. Lastly, it was a challenge to train the cohere response AI properly since we wanted to verify our training data was free of bias or negativity, and that we were using the results of the Cohere AI model responsibly so that our users will feel safe using our AI therapist application.
## ✅ Accomplishments that we're proud of ✅
We were able to create an AI therapist by creating a self-teaching AI, using the Cohere API to train an AI model that integrates seamlessly into our application. It delivers more personalized responses by adapting to each user's conversation history and making conversations accessible only to that user. We were able to effectively delegate team roles and seamlessly integrate the Cohere model into our application. It was lots of fun combining our existing web development experience with venturing out into a new domain like machine learning to approach a mental health issue using the latest advances in AI technology.
## 🙋♂️ What we learned 🙋♂️
We learned how to be more resourceful when we encountered debugging issues, while balancing the need to make progress on our hackathon project. By exploring every possible solution and documenting our findings clearly and exhaustively, we either increased the chances of solving the issue ourselves, or obtained more targeted help from one of the UofT Hacks X mentors via Discord. Our goal is to learn how to become more independent problem solvers. Initially, our team had trouble deciding on an appropriately scoped, sufficiently original project idea. We learned that our project should be both challenging enough but also buildable within 36 hours, but we did not force ourselves to make our project fit into a particular prize category -- and instead letting our project idea guide which prize category to aim for. Delegating our tasks based on teammates' strengths and choosing teammates with complementary skills was essential for working efficiently.
## 💭 What's next? 💭
To improve our project, we could allow users to customize their AI therapist, such as its accent and pitch or the chat website's color theme to make the AI therapist feel more like a personalized consultant to users. Adding a login page, registration page, password reset page, and enabling user authentication would also enhance the chatbot's security. Next, we could improve our website's user interface and user experience by switching to Material UI to make our website look more modern and professional.
|
losing
|
Rapid Response aims to fix the inefficient methods used today to locate callers during 911 calls. Additionally, we aim to streamline the exchange of information from the user to the dispatcher, allowing first responders to arrive more quickly. Rapid Response takes the latitude, longitude, and altitude of the phone and converts them to a street address, which is then sent to the nearest dispatcher in the area along with the nature of the emergency. Furthermore, the user's physical features are also sent to the dispatchers to help identify the victim of the incident, and the victim's emergency contacts are notified as well. With Rapid Response, victims of an incident are now able to get the help they need when they need it.
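A minimal sketch of the coordinate-to-address step (the geocoder choice here is an assumption for illustration):

```python
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="rapid-response-demo")

def coords_to_address(lat, lon):
    # reverse-geocode the phone's coordinates into a street address
    location = geolocator.reverse((lat, lon))
    return location.address

print(coords_to_address(43.6426, -79.3871))  # e.g. near the CN Tower, Toronto
```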
|
## Inspiration
This project was a response to the events that occurred during Hurricane Harvey in Houston last year, wildfires in California, and the events that occurred during the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We are also informing people of the risk factors in certain areas so that they will be more knowledgeable when making decisions for travel, their futures, and taking preventative measures.
## What it does
Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on react-native-map API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics.
Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previous resolved incidents of all types, we can generate a risk factor that gauges how safe a user's area is relative to the most dangerous range within our database.
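A minimal sketch of the risk-factor computation under one reading of this scheme (the severity scale and normalization are assumptions):

```python
from collections import Counter

def risk_profile(incidents, max_weighted_severity):
    """incidents: resolved incidents in the coordinate range, as
    (incident_type, severity) tuples with severity on a 1-5 scale."""
    most_common_type = Counter(t for t, _ in incidents).most_common(1)[0][0]
    weighted_severity = sum(sev for _, sev in incidents) / len(incidents)
    # scale against the most dangerous range in the database (1.0 = worst)
    risk_factor = weighted_severity / max_weighted_severity
    return most_common_type, round(risk_factor, 2)

print(risk_profile([("flood", 4), ("flood", 5), ("fire", 2)], max_weighted_severity=5.0))
# -> ('flood', 0.73)
```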
## How we built it
We used react-native, MongoDB, Javascript, NodeJS, and the Google Cloud Platform, and various open source libraries to help build our hack.
## Challenges we ran into
Ejecting react-native from Expo took a very long time and prevented one of the members in our group who was working on the client-side of our app from working. This led to us having a lot more work for us to divide amongst ourselves once it finally ejected.
Getting acquainted with react-native in general was difficult. It was fairly new to all of us and some of the libraries we used did not have documentation, which required us to learn from their source code.
## Accomplishments that we're proud of
Implementing the Heat Map analytics feature was something we are happy we were able to do because it is a nice way of presenting the information regarding disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Generating successful scripts to test our app and debug any issues was also something we were proud of and that helped us get past many challenges.
## What we learned
We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times, they have glaring drawbacks and limitations which may make another, more 'complicated' framework, a better choice in the long run.
## What's next for Supermaritan
In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends overtime and displaying risk factors for each individual incident type is something we definitely are going to do in the future.
|
## Inspiration
Witnessing the atrocities (protests, vandalism, etc.) caused by the recent presidential election, we want to keep the general public (especially minorities and the oppressed) safer.
## What it does
It provides users with live updates on news happening near them, alerts them if they travel near the vicinity of danger, and provides an emergency tool to contact their loved ones if they get into a dangerous situation.
## How we built it
* We crawl the latest happenings/events using Bing News API and summarize them using Smmry API.
* Thanks to Alteryx's API, we also managed to crawl tweets which will inform the users regarding the latest news surrounding them with good accuracy.
* All of this data is then projected onto Google Maps, informing users about any happenings near them in an easy-to-understand, summarized format.
* Using Pitney Bowes' API (GeoCode function), we alert the user's closest contacts with the street address where the user is located.
## Challenges we ran into
Determining the credibility of tweets is incredibly hard
## Accomplishments that we're proud of
Actually to get this thing to work.
## What's next for BeSafe
Better UI/UX and maybe a predictive capability.
|
winning
|
## Inspiration
After hearing a representative from **Private Internet Access** describe why internet security is so important, we wanted to find a way to simply make commonly used messaging platforms more secure for sharing sensitive and private information.
## What it does
**Mummify** provides in-browser text encryption and decryption by simply highlighting and clicking the Chrome Extension icon. It uses multi-layer encryption with both a private key and a public key. Anyone is able to encrypt using your public key, but only you are able to decrypt it.
## How we built it
Mummify is a Chrome Extension built using Javascript (jQuery), HTML, and CSS.
We did a lot of research about cryptography, deciding that we would use asymmetric encryption with a private key and a public key to ensure complete privacy and security for the user. We then started to dive into building a Chrome extension, using JavaScript, jQuery, and HTML to map out the logic behind our encryption and decryption extension. Lastly, we polished our extension with a simple and user-friendly UI design and launched the Mummify website!
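Mummify itself runs in JavaScript inside the extension, but the scheme is easy to sketch in Python with the `cryptography` package: anyone can encrypt with your public key, and only your private key can decrypt:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# the key pair generated for each user
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"meet at 7pm", oaep)  # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)      # only the key owner can
assert plaintext == b"meet at 7pm"
```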
We used Microsoft Azure technologies to host and maintain our webpage which was built using Bootstrap (HTML+CSS), and used Domain.com to get our domain name.
## Challenges we ran into
* What is the punniest domain name (in the whole world) that we can come up with?
* How do we make a Chrome Extension?
* Developing secure encryption algorithms.
* How to create shareable keys without defeating the purpose of encryption.
* How to directly replace the highlighted text within an entry field.
* Bridging the extension and the web page.
* Having our extension work on different chat message platforms. (Messenger, HangOuts, Slack...)
## Accomplishments that we're proud of
* Managing to overcome all our challenges!
* Learning javascript in less than 24 hours.
* Coming together to work as the Best Team at nwHacks off of a random Facebook post!
* Creating a fully-usable application in less than 24 hours.
* Developing a secure encryption algorithm on the fly.
* Learning how to harness the powers of Microsoft Azure.
## What we learned
Javascript is as frustrating as people make it out to be.
Facebook, Gmail, Hotmail, and many other sites all use very diverse build methods, which makes it hard for an extension to work the same way on all of them.
## What's next for Mummify
We hope to deploy Mummify to the Chrome Web Store and continue working as a team to develop and maintain our extension, as well as advocating for privacy on the internet!
|
## Inspiration
Have you ever wanted to search something, but aren't connected to the internet? Data plans too expensive, but you really need to figure something out online quick? Us too, and that's why we created an application that allows you to search the internet without being connected.
## What it does
Text your search queries to (705) 710-3709, and the application will text back the results of your query.
Not happy with the first result? Specify a result using the `--result [number]` flag.
Want to save the URL to view your result when you are connected to the internet? Send your query with `--url` to get the url of your result.
Send `--help` to see a list of all the commands.
## How we built it
Built on a **Nodejs** backend, we leverage **Twilio** to send and receive text messages. When receiving a text message, we send this information using **RapidAPI**'s **Bing Search API**.
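Our backend is Node.js; this Python sketch just shows the shape of the inbound-SMS handler: parse the flags, search, and reply with TwiML (`bing_search` below is a stand-in for the RapidAPI Bing Search call):

```python
from twilio.twiml.messaging_response import MessagingResponse

def bing_search(query):
    # stand-in for the RapidAPI Bing Search call; returns ranked results
    return [{"snippet": f"Top result for {query!r}", "url": "https://example.com"}]

def parse(body):
    tokens = body.split()
    want_url = "--url" in tokens
    tokens = [t for t in tokens if t != "--url"]
    result_n = 1
    if "--result" in tokens:
        i = tokens.index("--result")
        result_n = int(tokens[i + 1])
        del tokens[i:i + 2]  # drop the flag and its argument
    return " ".join(tokens), result_n, want_url

def handle_sms(body):
    query, result_n, want_url = parse(body)
    hit = bing_search(query)[result_n - 1]
    resp = MessagingResponse()
    resp.message(hit["url"] if want_url else hit["snippet"])
    return str(resp)  # TwiML sent back to Twilio
```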
Our backend is **dockerized** and deployed continuously using **GitHub Actions** onto a **Google Cloud Run** server. Additionally, we make use of **Google Cloud's Secret Manager** to not expose our API Keys to the public.
Internally, we use a domain registered with **domain.com** to point our text messages to our server.
## Challenges we ran into
Our team is very inexperienced with Google Cloud, Docker, and GitHub Actions, so deploying our app to the internet was a challenge. We recognized that without deploying, we could not allow anybody to demo our application.
* There was a lot of configuration with permissions and service accounts, which had a learning curve. Accessing our secrets from our backend, and ensuring that the backend was authenticated to access them, was a huge challenge.
We also have varying levels of skill with JavaScript. It was a challenge trying to understand each other's code and collaborating efficiently to get this done.
## Accomplishments that we're proud of
We honestly think that this is a really cool application. It's very practical, and we can't find any solutions like this that exist right now. There was not a moment where we dreaded working on this project.
This is the most well-planned project that we've all made for a hackathon. We were always aware of how our individual tasks contributed to the project as a whole. When we were working on an important part of the code, we would pair program together, which accelerated our understanding.
Continuously deploying is awesome! Not having to click buttons to deploy our app was really cool, and it really made our testing in production a lot easier. It also reduced a lot of potential user errors when deploying.
## What we learned
Planning is very important in the early stages of a project. We could not have collaborated so well together, and separated the modules that we were coding the way we did without planning.
Hackathons are much more enjoyable when you get a full night sleep :D.
## What's next for NoData
In the future, we would love to use AI to better suit the search results of the client. Some search results have a very large scope right now.
We would also like to have more time to write some tests and have better error handling.
|

## Inspiration
📚In our current knowledge economy, our information is also our most important valuable commodity.
💡 Knowledge is available in almost infinite abundance 📈, delivered directly through our digital devices 📱 💻 , the world is more connected and educated than ever across the globe 🌎 🌍 🌏. However, the surge of information draws adverse effects💥 🔥 🌈! With information circulating as rapid as ever, information and cognitive overload 🧠👎🏼 is a present symptom amongst our lives.
✨💡✨Mr. Goose 🦢 is here to help by aggregating millions of sources to simplify complex concepts into comprehensible language for even a five-year-old. ✨💡✨
## What it does
It is a chrome extension for users to conveniently type in questions, 💡 highlight 💡 paragraphs, sentences, or words on their browser, and receive a ⭐️simple to understand answer or explanation. 🎇 🎆
## How we built it
✨🔨Our Chrome extension was built using JavaScript, HTML, and CSS, with REST APIs. ✨As for the backend, functions are deployed on Google Cloud Functions ☁️☁️☁️ and call the Google Cloud Language API☁️☁️☁️, which uses Natural Language Processing 💬 💡 to figure out what entities are in the highlighted text. Once we’ve figured out what the text is about, we use it to parse the web using APIs such as the Reddit API, the StackOverflow/Stack Exchange API, and the Wikipedia API. ⭐️⭐️⭐️
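✨A minimal sketch of the entity step (credentials are assumed to be configured; the salience-based ordering is an illustrative choice):

```python
from google.cloud import language_v1

def entities_in(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(document=document)
    # highest-salience entities first, so the most relevant term leads the search
    return [e.name for e in
            sorted(response.entities, key=lambda e: e.salience, reverse=True)]

print(entities_in("Why does quicksort degrade to O(n^2) on sorted input?"))
```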
## Challenges we ran into
One of the 💪 main challenges 💪 we ran into was building 🌼👩🏼💻 🌻 the wireframes of the extension while discussing 💭💭 and re-evaluating the logic of the app’s uses. ✨ As we were 🔨 designing 🧩 several features, we tried to decide which features would be the most user-friendly and important while also maintaining the efficiency 📈 📈 📈 and the learning/knowledge value 📚 of our Chrome extension. ✨
## Accomplishments that we're proud of
✨✨✨We were extremely proud ⭐️ of the overall layout and elements 🧩🧩🧩 we implemented into our app design, such as the animated goose 🦢 that one of our team members drew and animated from scratch. From the color 🔴 🟠 🟡 choices to the attention to detail, like which words 💬 📃 📄 should matter to the NLP API and how the resulting information 📊 is shown, we had to take a lot into consideration for our project, and it truly was a fun learning experience. 👍👍👍
## What we learned
🌟 How to create a Chrome Extension
🌟 How to use Google Firebase
🌟 How to use Google Cloud's NLP API, Stack Exchange API, Reddit API, Wikipedia API
🌟 How to integrate all of these together
🌟 How to create animated images for implementation on the extensions
## What's next for Mr. Goose
✨Adapting our extension for compatibility with other browsers.
✨Adding a voice recognition feature to allow users to ask questions and receive simplified answers in return
✨Adding ability to access images while on the extension
|
winning
|
## Inspiration
We wanted to find ways to make e-commerce more convenient, as well as helping e-commerce merchants gain customers. After a bit of research, we discovered that one of the most important factors that consumers value is sustainability. According to FlowBox, 65% of consumers said that they would purchase products from companies who promote sustainability. In addition, the fastest growing e-commerce platforms endorse sustainability. Therefore, we wanted to create a method that allows consumers access to information regarding the company's sustainability policies.
## What it does
Our project is a browser extension that allows users to browse e-commerce websites while checking product manufacturers' sustainability via ratings out of 5 stars.
## How we built it
We started with the HTML as the skeleton of the browser extension. We then proceeded with JavaScript to connect the extension with ChatGPT. Then, we asked ChatGPT a question regarding the general consensus on a company's sustainability. We run this response through sentiment analysis, which returns a ratio of positive and negative sentiment with relevance to sustainability. This information is then converted into a value out of 5 stars, which is displayed on the extension homepage. We finalized the project with CSS, making the extension look cleaner and more user-friendly.
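The polarity-to-stars conversion mentioned above is simple arithmetic; a sketch of the idea (the exact scaling we use may differ):

```python
def polarity_to_stars(positive: float, negative: float) -> float:
    """Map positive/negative sentiment weights onto a 0-5 star scale."""
    total = positive + negative
    if total == 0:
        return 2.5  # no sustainability signal at all: neutral midpoint
    return round(5 * positive / total, 1)

# e.g. a response that is 80% positive about sustainability -> 4.0 stars
print(polarity_to_stars(0.8, 0.2))
```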
## Challenges we ran into
We had issues with running servers, as we struggled with the input and output of information.
We also ran into trouble setting up the Natural Language Processing models from TensorFlow. There were multiple models trained using different datasets and methods; although they all use TensorFlow, they were developed at different times, which means they depend on different versions of TensorFlow. This made the debugging process a lot more extensive and the implementation take a lot more time.
## Accomplishments that we're proud of
We are proud that we were able to create a browser extension that makes the lives of e-commerce developers and shoppers more convenient. We are also proud of making a visually appealing extension that is accessible to users. Furthermore, we are proud of implementing modern technology such as ChatGPT within our approach to solving the challenge.
## What we learned
We learned how to create a browser extension from scratch and implement the OpenAI API to connect our requests to ChatGPT. We also learned how to use Natural Language Processing to detect how positive or negative the response we received from ChatGPT was. Finally, we learned how to convert the polarity we received into a rating that is easy to read and accessible to users.
## What's next for E-commerce Sustainability Calculator
In the future, we would like to implement a feature that rates the reliability of our sustainability rating. Since there are many smaller and lesser-known companies on e-commerce websites, they would not have much information about their sustainability policies available, so their sustainability rating would be a lot less accurate compared to that of a more prominent company. We would implement this by using the number of Google searches for a specific company as a metric of its relevance, and then basing a reliability score on a scale derived from that search volume.
|
## Inspiration
As students who just moved to San Francisco for their freshman year, we have experienced how dangerous it can sometimes be to walk around the city, especially near the Mission and Tenderloin areas. Since day 1, we were told to be careful and walk in groups to be safe. Statistically, more than 1,000 crimes were reported in those two regions during September 2019, according to sanfranciscopolice.org. But how can we help our community feel safer while also being time-efficient (we are students, after all)? To solve this problem not only within our community but for the whole city, we spent this weekend working on the Safe Walk web app.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
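As a rough sketch of the backend's order flow (simplified: the model names and prompt below are illustrative stand-ins for whichever transcription and chat services are configured):

```python
# Sketch: audio in, transcript plus structured order out.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route("/order", methods=["POST"])
def order():
    audio = request.files["audio"]
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=(audio.filename, audio.read())
    ).text
    # Ask a chat model to pull out items, sizes, and modifications
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract the ordered food items, sizes, and modifications as JSON."},
            {"role": "user", "content": transcript},
        ],
    )
    return jsonify({"transcript": transcript, "order": chat.choices[0].message.content})
```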
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
losing
|
#### Inspiration
The Division of Sleep Medicine at Harvard Medical School stated that 250,000 drivers fall asleep at the wheel every day in the United States. CDC.gov claims the United States has 6,000 fatal crashes per year due to these drowsy drivers. We members of the LifeLine team understand the situation; in no way are we going to be able to stop these commuters from driving home after a long day's work. So let us help them stay alert and awake!
#### What is LifeLine?
You're probably thinking "Lifeline", like those calls to dad or mom they give out on "Who Wants to Be a Millionaire?" Or maybe you're thinking more literally: "a rope or line used for life-saving". In both cases, you are correct! The wearable LifeLine system connects with an Android phone and keeps the user safe and awake on the road by connecting them to a friend.
#### Technologies
Our prototype consists of an Arduino with an accelerometer as part of a headset, monitoring a driver's performance of that well-known head dip of fatigue. This headset communicates with a Go server, providing the user's Android application with the accelerometer data over an HTTP connection. The Android app then processes the x, y tilt data to monitor the driver.
#### What it does
The application user sets an emergency contact upon entry. Then, once in "drive" mode, the app displays the x and y tilt of the driver's head, mapping it to an animated head that tilts to match the driver's. Upon sensing the first few head nods of the driver, the LifeLine app provides auditory feedback beeps to keep the driver alert. If the condition of the driver does not improve, it then sends a text to a pre-entered contact suggesting that the user is drowsy driving and that they should reach out. If the state of the driver gets worse, it then summons the LifeLine and calls their emergency contact.
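The escalation itself is a small state machine; here is the logic sketched in Python (the app runs on Android, and these thresholds are illustrative rather than our tuned values):

```python
# Sketch of the LifeLine escalation logic; thresholds are illustrative.
NOD_TILT = 0.6  # assumed tilt magnitude that counts as a fatigue head-dip

def count_nods(tilt_samples) -> int:
    """tilt_samples: recent (x, y) head-tilt readings from the accelerometer."""
    return sum(1 for x, y in tilt_samples if (x * x + y * y) ** 0.5 > NOD_TILT)

def escalation(nods: int) -> str:
    if nods < 2:
        return "ok"    # driver looks alert
    if nods < 5:
        return "beep"  # auditory feedback to re-alert the driver
    if nods < 8:
        return "text"  # text the pre-entered contact
    return "call"      # summon the LifeLine: call the emergency contact
```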
#### Why call a friend?
Studies find conversation to be a great stimulus of attentiveness. Given that a large number of drivers are alone on the road, the resulting phone connections could save lives.
#### Challenges we ran into
Hardware is never fun for software engineers...
#### What's next for LifeLine
* Wireless Capabilities
* Stylish and more comfortable to wear
* Saving data for user review
* GPS feedback for where the driver is when they are dozing off (partly completed already)
**Thanks for reading. Hope to see you on the demo floor!** - Liam
|
## Inspiration
We created this app to address a problem that our creators were facing: waking up in the morning. As students, the stakes of oversleeping can be very high. Missing a lecture or an exam can set you back days or greatly detriment your grade. It's too easy to sleep past your alarm. Even if you set multiple, we can simply turn all those off knowing that there is no human intention behind each alarm. It's almost as if we've forgotten that we're supposed to get up after our alarm goes off! In our experience, what really jars you awake in the morning is another person telling you to get up. Now, suddenly there is consequence and direct intention behind each call to wake up. Wake simulates this in an interactive alarm experience.
## What it does
Users sync their alarm up with their trusted peers to form a pact each morning to make sure that each member of the group wakes up at their designated time. One user sets an alarm code with a common wakeup time associated with it. The user's peers can use this alarm code to join their alarm group. Everybody in the alarm group will experience the same alarm in the morning. After each user hits the button when they wake up, they are sent to a soundboard interface, where they can hit buttons to try to wake those who are still sleeping with real-time sound effects. Each time one user in the server hits a sound effect button, that sound registers on every device, including their own, providing auditory feedback that they have indeed successfully sent a sound effect. Ultimately, users exit the soundboard to leave the live alarm server and go back to the home screen of the app. They can finally start their day!
## How we built it
We built this app using React Native as the frontend, Node.js as the server, and Supabase as the database. We created files for the different screens that users interact with in the frontend, namely the home screen, goodnight screen, wakeup screen, and the soundboard. The home screen is where they set an alarm code or join using someone else's alarm code. The "goodnight screen" is the screen the app stays on while the user sleeps. When the app is on, it displays the current time, when the alarm is set to go off, who else is in the alarm server, and a warm message, "Goodnight, sleep tight!". Each one of these screens went through its own UX design process. We also used Socket.io to establish connections between those in the same alarm group: when a user sends a sound effect, it goes to the server, which broadcasts it to all the users in the group. As for the backend, we used Supabase as a database to store the users, alarm codes, current time, and wake-up times. We connected the front and back end and the app came together. All of this was tested on our own phones using Expo.
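The key Socket.io idea is a room per alarm code. Our server is Node.js, but the same pattern, sketched here with the python-socketio library for illustration, looks like this:

```python
# Room-per-alarm-code broadcast (illustrative python-socketio version of our
# Node.js + Socket.io server).
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # serve with any WSGI server

@sio.event
def join_alarm(sid, data):
    # Everyone who enters the same alarm code lands in the same room
    sio.enter_room(sid, data["alarm_code"])

@sio.event
def sound_effect(sid, data):
    # Echo the sound to every device in the group, including the sender,
    # so the sender gets auditory feedback too
    sio.emit("sound_effect", {"sound": data["sound"]}, room=data["alarm_code"])
```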
## Challenges we ran into
We ran into many difficult challenges during the development process. It was all of our first times using React Native, so there was a bit of a learning curve in the beginning. Furthermore, incorporating sockets into the project proved to be very difficult because it required a lot of planning and experimenting with the server/client relationships. The alarm ringing also proved to be surprisingly difficult to implement: if the alarm was left to ring, the "goodnight screen" would continue ringing and would not terminate, and many of React Native's tools like setInterval didn't seem to solve the problem. This was a problematic and recurring issue. Secondly, the database in Supabase was also quite difficult and time-consuming to connect, but in the end, once we set it up, using it simply entailed brief SQL queries. Thirdly, setting up the frontend proved quite confusing and problematic, especially when it came to adding alarm codes to the database.
## Accomplishments that we're proud of
We are super proud of the work that we’ve done developing this mobile application. The interface is minimalist yet attention-grabbing when it needs to be, namely when the alarm goes off. The hours of debugging, although frustrating, were very satisfying once we finally got the app running. Additionally, we greatly improved our understanding of mobile app development. Finally, the app is also just amusing and fun to use! It’s a cool concept!
## What we learned
As mentioned before, we greatly improved our understanding of React Native, as for most of our group, this was the first time using it for a major project. We learned how to use Supabase and Socket.io. Additionally, we improved our general JavaScript and user experience design skills as well.
## What's next for Wakey
We would like to put this app on the iOS App Store and the Android Play Store, which would take more extensive and detailed testing, especially regarding how the app runs in the background. Additionally, we would like to add some other features, like a leaderboard for who gets up most immediately after their alarm goes off, who sends the most sound effects, and perhaps other ways to rank the members of each alarm server. We would also like to add customizable sound effects, where users can record themselves or upload recordings that they can add to their soundboards.
|
The inspiration for this project was to help decrease the number of accidents that occur due to people driving long distances while tired. This largely applies to truck drivers, who depend on driving long distances to make a living and can easily become bored or have to drive late at night, both of which are known to cause tiredness.
The main functionality of this web-based application is detecting when the driver has shown signs of fatigue; when this happens, the driver is awakened by an alarm system that plays a sound for a short period of time. Other functionality includes displaying a map that the user can use to find directions from one point to another; nearby charging stations for electric vehicles are also displayed on the map.
This application was built as a team: we came up with ideas together about how the application should function and what the UI should look like, and we all collaborated to solve any errors that members of the team faced.
While working on this project our team encountered many challenges and had to use our problem-solving skills to overcome them. One of the challenges was displaying a camera feed generated by a Python file on a React website written in JavaScript. This was an issue that we hadn’t initially considered; to solve it, we had to create a connection between the Python file and the web application. After some deliberation, our team decided to use WebSockets to create the connection. Another challenge faced earlier on in the project was getting Mapbox to work the way we required; after some time spent researching, the issues were fixed and we were able to display the functionality we wanted.
Some of the accomplishments that we are proud of include using a face detection algorithm to find the location of a face and then the eyes of the person. Using the location of the eyes, we were able to detect signs of drowsiness, most notably when the distance between the upper eyelid and the lower eyelid was very small for a significant number of consecutive frames (a sketch of this logic follows below). Other accomplishments include using a RapidAPI service to find nearby electric vehicle charging locations and placing markers on the Mapbox map at the correct longitude and latitude coordinates. One more accomplishment we were proud of was creating the previously mentioned connection between the Python file used to detect drowsiness and the React JavaScript web application. Our team also did a very good job of working together on the project at the same time, staying conscious of the conflicts that could be created by having multiple people work on the same file at once.
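The eyelid check is essentially the classic eye-aspect-ratio test; a sketch of the logic (the threshold and frame count are illustrative):

```python
# Sketch: eye aspect ratio over consecutive frames signals drowsiness.
from math import dist  # Python 3.8+

def eye_aspect_ratio(eye) -> float:
    """eye: six (x, y) landmarks around one eye, ordered as in the 68-point model."""
    a = dist(eye[1], eye[5])  # vertical distance, outer pair
    b = dist(eye[2], eye[4])  # vertical distance, inner pair
    c = dist(eye[0], eye[3])  # horizontal eye width
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.21  # assumed: below this, the eye reads as closed
CONSEC_FRAMES = 48    # assumed: roughly two seconds of video

closed_frames = 0

def drowsy(eye) -> bool:
    global closed_frames
    closed_frames = closed_frames + 1 if eye_aspect_ratio(eye) < EAR_THRESHOLD else 0
    return closed_frames >= CONSEC_FRAMES  # True -> sound the alarm
```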
Future functionality for WakeyDrivey could include expanding the application to help more people with their driving needs. One group that could benefit greatly is deaf drivers. To expand the application to this demographic, we would add different options for how the driver is woken up: the idea is to have the user wear a wrist device that receives a signal from the web application and, upon a signal that the driver has shown signs of drowsiness, creates a vibration or small shock to wake the user. Other useful features could include graphics that alert the driver to things that need their attention but that they are unable to hear, such as someone honking at them, a siren behind them, or other loud noises; to get the driver's attention, the screen of the application could change color, or a vibration could again be sent to their wearable wrist device.
|
winning
|
## Inspiration
Through going to big lectures at Berkeley, we've always found that it was difficult for courses to keep track of our attendance. Professors have gone through countless methods to track student attendance, such as requiring students to buy pricey hardware.
## What it does
Tracks student attendance **and** attentiveness based on the web traffic students create from their devices during class. Professors can keep track of attendance without calling roll, thereby saving precious class time, especially in large lecture halls. Our algorithm can detect when a student leaves a class early or arrives late, giving the teacher deeper insight into class attendance. There's an easy one-time MAC address to student name registration, and subsequently attendance is automatically taken so long as the student has wifi turned on (no connection required!) on one of his/her devices: phone, tablet, or laptop.
## How it works
We are using Cisco Meraki's Location and Dashboard APIs along with the wifi access points (Meraki MR33) already set up inside Memorial Stadium. These quad-radio access points intercept wifi and bluetooth signals from smartphones and laptops, and we sample the data every few seconds to capture data such as unique MAC addresses, websites visited, timestamps, and duration of stay. The reason why we use MAC addresses as a primary key to identify devices and students is because each device broadcasts a unique, immutable MAC address, allowing us to track them over hours, months, and years. Additionally, by polling data every few seconds for the duration of lecture, our web app can make sure students don't leave right after signing in, which is a huge problem for many attendance methods such as using iClicker or Google Forms. Meraki keeps a constant eye on all devices in the lecture hall and allows our web app to take note as soon as a student leaves lecture.
## The Technology
We use Node-Red to interface with the access points and stream JSON documents to a MongoDB database in the cloud. We then load this data into Pandas DataFrames and use Plotly to visualize it. After some filtering and stats, we end up with a simple, clean interface for the teacher to use. We package all of this into a web app hosted on Google Cloud for you to see updating in real time!
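As a sketch of the analysis step (the column names follow Meraki's observation format, but treat them as assumptions):

```python
import pandas as pd

def attendance(observations: pd.DataFrame, lecture_start, lecture_end):
    """observations: one row per Meraki sighting, with 'mac' and 'seenTime' columns."""
    observations["seenTime"] = pd.to_datetime(observations["seenTime"])
    spans = observations.groupby("mac")["seenTime"].agg(first="min", last="max")
    spans["arrived_late"] = spans["first"] > pd.Timestamp(lecture_start)
    spans["left_early"] = spans["last"] < pd.Timestamp(lecture_end)
    return spans  # joined against the MAC -> student-name registration table
```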
## Challenges we ran into
Meraki access points did not stay on consistently, and we stayed up till 8am trying to get our application to work more consistently. We also didn't have time to fully polish the application, so it currently contains just core functionality.
## Accomplishments that we're proud of
We learned how to use the Meraki API and the Node-Red application on the fly, which was incredibly rewarding because most of us didn't have any experience working with networking. We also enjoyed working with the Meraki representatives and bouncing our ideas off them!
## What we learned
How to integrate various libraries and platforms into a single project! We also learned how to efficiently split up work and play to our strengths.
## What's next for Stay Present
We'd love to test this in a UC Berkeley lecture hall and work closely with professors to implement this attendance tracker. We're planning to refine our data with our new Meraki MR33 APs and see what other data we can extract from unsecured campus web traffic. We believe that this project is just one way that classroom learning can be changed for the better, and hope to see it in use in the future!
|
## Inspiration
Almost all undergraduate students, especially at large universities like the University of California Berkeley, will take a class that has a huge lecture format, with several hundred students listening to a single professor speak. At Berkeley, students (including three of us) took CS61A, the introductory computer science class, alongside over 2000 other students. Besides forcing some students to watch the class on webcasts, the sheer size of classes like these impaired the ability of the lecturer to take questions from students, with both audience and lecturer frequently unable to hear the question and notably the question not registering on webcasts at all. This led us to seek out a solution to this problem that would enable everyone to be heard in a practical manner.
## What does it do?
*Questions?* solves this problem using something that we all have with us at all times: our phones. By using a peer-to-peer connection with the lecturer’s laptop, a student can speak into their smartphone’s microphone and have that audio directly transmitted to the audio system of the lecture hall. This eliminates the need for any precarious transfer of a physical microphone or the chance that a question will go unheard.
Besides usage in lecture halls, this could also be implemented in online education or live broadcasts to allow participants to directly engage with the speaker instead of feeling disconnected through a traditional chatbox.
## How we built it
We started with a fail-fast strategy to determine the feasibility of our idea. We did some experiments and were then confident that it should work. We split our working streams and worked on the design and backend implementation at the same time. In the end, we had some time to make it shiny when the whole team worked together on the frontend.
## Challenges we ran into
We tried the WebRTC protocol but ran into some problems with the implementation, the available frameworks, and the documentation. We then shifted to WebSockets and tried to make it work on mobile devices, which is easier said than done. Furthermore, we had some issues with web security and therefore used an AWS EC2 instance with Nginx and Let's Encrypt TLS/SSL certificates.
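At its core, the WebSocket approach is a tiny relay; a minimal sketch with Python's websockets library (in production this sits behind Nginx with TLS):

```python
# Minimal relay sketch: frames from any phone are forwarded to every other
# connected client (in practice, the lecturer's laptop).
import asyncio
import websockets

CLIENTS = set()

async def handler(ws):
    CLIENTS.add(ws)
    try:
        async for message in ws:  # binary audio chunks
            websockets.broadcast(CLIENTS - {ws}, message)
    finally:
        CLIENTS.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```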
## Accomplishments that we're (very) proud of
With most of us being very new to the Hackathon scene, we are proud to have developed a platform that enables collaborative learning in which we made sure whatever someone has to say, everyone can hear it.
With *Questions?* It is not just a conversation between a student and a professor in a lecture; it can be a discussion between the whole class. *Questions?* enables users’ voices to be heard.
## What we learned
WebRTC looks easy but did not work … at least in our case. Today everything has to be encrypted … even in dev mode. Treehacks 2020 was fun.
## What's next for *Questions?*
In the future, we could integrate polls and iClicker features and also extend functionality for presenters and attendees at conferences, showcases, and similar events. *Questions?* could also be applied even more broadly to any situation normally requiring a microphone: any situation where people need to hear someone’s voice.
|
## Inspiration
Drawing inspiration from our personal academic journeys and identifying challenges faced by fellow students, we wanted to create a solution that resonated with students seeking motivation tailored to their unique circumstances. Reflecting on our own experiences, we acknowledged that attendance struggles were not always rooted in a lack of motivation; sometimes external factors played a role. This realization fueled the integration of flexibility into our goal-setting. While achieving goals in their entirety is undoubtedly ideal, we recognized the importance of striking a balance between productivity and well-being.
In contrast to traditional methods that rely on guilt and unrealistic objectives, our approach embraces gamification and realistic metrics. By doing so, we aimed to create a positive and achievable path toward academic success and class attendance, acknowledging the nuanced nature of students' lives and motivations.
Seeing a lack of attendance in class also discourages other people from going, and a valuable part of education is learning from others and peers and making that in-person connection.
## What it does
While onboarding, Rise allows its users to select their weekly attendance goal based on their personal circumstances. Users are also able to either manually input their class schedule or import their entire class schedule as an .ics calendar file. The app uses geolocation to record whether or not the user has attended class, as well as to determine if the user’s friends have also attended class and are nearby.
Rise incorporates a gamification aspect through the character that lives in the app, who is happy when you consistently meet your personalized goal, and sad when you don’t. As you attend class, you receive a ‘sun’ currency that you can use to buy customizations for your character. As a user consistently attends class, they maintain an attendance streak. This contributes towards their weekly attendance goal, motivating students to keep up their attendance so they don't lose their streak.
A user would be able to add their friends and optionally share notifications with each other when the other leaves for or arrives at class! There would also be notifications for when a user should leave for their class, based on their current location, as well as for when a user’s attendance starts to improve or decline.
## How we built it
First, the app was wireframed, and then designed and prototyped with Figma.
Then, we set up the frontend with React Native so that our app could run on both iOS and Android devices. The backend is built with Node.js, a JavaScript runtime. For the database, we used MongoDB and Mongoose to maintain the structure of our data. To predict the likelihood of whether or not the user will attend class on a given weekday based on prior data, we used Brain.js, a JavaScript library for neural networks, in the backend. To host the backend, we used AWS Lambda.
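Our predictor is a Brain.js neural network; as a sketch of the prediction target, the simplest possible baseline estimates a per-weekday attendance rate from prior records:

```python
# Baseline sketch in Python; our real model is a Brain.js neural network.
from collections import defaultdict

def weekday_attendance_rates(history):
    """history: (weekday, attended) pairs, weekday 0=Mon .. 6=Sun."""
    tally = defaultdict(lambda: [0, 0])  # weekday -> [attended, total]
    for weekday, attended in history:
        tally[weekday][0] += int(attended)
        tally[weekday][1] += 1
    return {day: hits / total for day, (hits, total) in tally.items()}

# e.g. [(0, True), (0, False), (2, True)] -> {0: 0.5, 2: 1.0}
```

The neural network improves on this baseline by weighting recent behavior and combining features beyond the weekday alone.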
## Challenges we ran into
None of our members had prior experience integrating an AI model into an application, so it was a struggle to both learn about how AI works and train the AI model while getting it to predict data accurately. Initially, we had planned on using TensorFlow due to its popularity, but we decided to pivot to Brain.js as it was more beginner-friendly.
In addition, we ran into challenges connecting our backend to our frontend with CORS and hosting.
The time constraint was also a challenge to work with, given our lack of experience in working with both React Native and AI.
## Accomplishments that we're proud of
For two of our three members, it was their first hackathon, which is a huge accomplishment. Some of our proudest work is the time we put into our design to make it more personal and user-friendly.
We were able to successfully use Brain.js to make a basic AI model and train it based on the data we provided, which was an accomplishment given that this was the first time we worked with AI.
The fact that we were able to make a portion of a working full-stack application within the time limit was also something we are proud of.
## What we learned
We learned a lot about UX/UI design and how to use Figma. Going through the process of designing the application allowed us to learn how to use Figma's tools more efficiently.
This was the first time some of us had used technologies such as React Native, Javascript, Github, and MongoDB. We learned how to use React Native to implement features and style our application. Using Javascript, we were able to build a working backend. With MongoDB, we learned about how a database works as well as how it interacts with a backend. Using Git, we learned how to collaborate with one another on a shared codebase by using commands such as commit, push, and pull.
We also learned about AI concepts and how to create a basic AI model. We learned more about the process of training the model to acquire better predictions.
## What's next for Rise
Unfortunately due to the time constraints, we were unable to implement the onboarding flow as well as develop some of the more advanced features that were in our initial design. Completing these would be our immediate next step for Rise. We also wanted to implement a notification feature as explained in the ‘What it Does’ section.
We envisioned a more advanced AI model that could detect deviations in attendance as well as more accurately predict the likelihood of a user attending classes on a given day in the future. The next step would be for us to learn more about how AI works and relevant technologies, and use that knowledge to create a more powerful model.
In addition, the design of our app could be fleshed out to look more sophisticated and eye-catching.
Based on user feedback, our app could be more accommodating or helpful towards those who are unable to make it to class for reasons out of their control. We would take these into consideration so that Rise can help more people make it to class.
|
partial
|
## Inspiration
The system was Mostafa's idea. In a world where transparency has become more of a public concern in recent years, it was important to provide a medium to hold charities accountable for the money they receive in good faith, and to encourage people with the means to donate.
## What it does
Project Glass is a system that uses BlockChain technology to track donations given to charitable organizations that have opted-in. Each donation is given a unique "tracking key" like the kind you get on parcels to track the status of deliveries. Donors can then lookup their donation on the Project Glass website to see exactly where each dollar ended up.
It also provides suggestions for where it is best for the organization to spend money. This is driven by a machine learning algorithm that detects events in data collected on topics relevant to the NGOs in the network. The ML algorithm detects relevant events, which are then dispatched using PubSub+ to the Project Glass partner organizations. The organizations are then able to see a live feed of relevant data that they can use to better leverage their short-term investments.
## How we built it
We use blockchain and a proprietary currency to keep track of every dollar spent. Each invested dollar is turned into a unit of currency and tied to a transaction id. The transactions of every dollar are then logged into the blockchain from the time it is deposited until the time it is sent to an external entity (such as another NGO, or if it was used for an expense). A person with a tracking id can use it to look up the final destination of every dollar that they have donated, which adds transparency. Auditors can also use this information to verify the claims of NGO expenditure by matching the organization's bank transactions to what it claimed in the system, which makes their job easier.
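As a toy illustration of the dollar-tracking idea (not our production chain), each block commits a batch of transactions, and a donor replays the chain by tracking key:

```python
# Toy model of the dollar-tracking ledger (illustrative only).
import hashlib, json, time

def make_block(transactions, prev_hash):
    body = {"time": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def trace(tracking_key, chain):
    """Replay every movement of the dollars behind one donation."""
    return [tx for block in chain for tx in block["transactions"]
            if tx["tracking_key"] == tracking_key]

# e.g. trace("PG-0042", chain) -> donor -> NGO -> supplier, one hop per transaction
```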
We use data gathering, AI, and PubSub+ to generate and publish events. We run a time-series-based machine learning algorithm over a data stream to detect events. The events are then sent over a PubSub+ topic, which is received by the Project Glass service and used to drive suggestions for where it is best for an organization to send money.
## Challenges we ran into
The main challenge was adoption, how do we make sure that this system can easily be adopted given the use of a new currency?
The solution is to use the new currency strictly for tracking investments dollar-to-dollar. This currency cannot be used or exchanged in any other context, as it is only meant to augment the existing financial system with traceability. In Project Glass, we limited the use of this currency to compiling transactions into the ledger and mapping individual investments to every contribution they eventually make.
|
## Inspiration
This project was inspired by one of the group member's grandmother and her friends. Each month, the grandmother and her friends each contribute $100 to a group donation, then discuss and decide where the money should be donated to. We found this to be a really interesting concept for those that aren't set on always donating to the same charity. As well, it is a unique way to spread awareness and promote charity in communities. We wanted to take this concept, and make it possible to join globally.
## What it does
Each user is prompted to sign up for a monthly Stripe donation. The user can then either create a new "Collective" with a specific purpose, or join an existing one. Once in a collective, the user is able to add new charities to the poll, vote for a charity, or post comments to convince others on why their chosen charity needs the money the most.
## How we built it
We used MongoDB as the database with Node.js + Express for the back-end, hosted on an Azure Linux virtual machine. We made the front-end a web app created with Vue. Finally, we used Pusher to implement real-time updates to the poll as people vote.
## Challenges we ran into
Setting up real-time polling proved to be a challenge. We wanted to allow the user to see updates to the poll without having to refresh their page. We needed to subscribe to only certain channels of notifications, depending on which collective the user is a member of. This real-time aspect required a fair bit of thought on race conditions for when to subscribe, as well as how to display the data in real time. In the end, we implemented the real-time poll as a pie graph, which resizes as people vote for charities.
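Server-side, the fix was to publish each vote only to the channel named after the collective; a sketch with Pusher's Python library (our server is Node.js, and the credentials are placeholders):

```python
import pusher

client = pusher.Pusher(app_id="APP_ID", key="KEY", secret="SECRET",
                       cluster="CLUSTER")  # placeholder credentials

def broadcast_vote(collective_id: int, charity: str, totals: dict):
    # Only clients subscribed to this collective's channel receive the event,
    # so members of other collectives never see it
    client.trigger(f"collective-{collective_id}", "vote-cast",
                   {"charity": charity, "totals": totals})
```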
## Accomplishments that we're proud of
Our team has competed in several hackathons now. Since this isn't our first time putting a project together in 24 hours, we wanted to try to create a polished product that could be used in the real world. In the end, we think we met this goal.
## What we learned
Two of our team of three had never used Vue before, so it was an interesting framework to learn. As well, we learned how to manage our time and plan early, which saved us from having to scramble at the end.
## What's next for Collective
We plan to continue developing Collective to support multiple subscriptions from the same person, and a single person entering multiple collectives.
|
## Inspiration
Many students rely on scholarships to attend college. As students in different universities, the team understands the impact of scholarships on people's college experiences. When scholarships fall through, it can be difficult for students who cannot attend college without them. In situations like these, they have to depend on existing crowdfunding websites such as GoFundMe. However, platforms like GoFundMe are not necessarily the most reliable solution as there is no way of verifying student status and the success of the campaign depends on social media reach. That is why we designed ScholarSource: an easy way for people to donate to college students in need!
## What it does
ScholarSource harnesses the power of blockchain technology to enhance transparency, security, and trust in the crowdfunding process. Here's how it works:
Transparent Funding Process: ScholarSource utilizes blockchain to create an immutable and transparent ledger of all transactions and donations. Every step of the funding process, from the initial donation to the final disbursement, is recorded on the blockchain, ensuring transparency and accountability.
Verified Student Profiles: ScholarSource employs blockchain-based identity verification mechanisms to authenticate student profiles. This process ensures that only eligible students with a genuine need for funding can participate in the platform, minimizing the risk of fraudulent campaigns.
Smart Contracts for Funding Conditions: Smart contracts, powered by blockchain technology, are used on ScholarSource to establish and enforce funding conditions. These self-executing contracts automatically trigger the release of funds when predetermined criteria are met, such as project milestones or the achievement of specific research outcomes. This feature provides donors with assurance that their contributions will be used appropriately and incentivizes students to deliver on their promised objectives.
Immutable Project Documentation: Students can securely upload project documentation, research papers, and progress reports onto the blockchain. This ensures the integrity and immutability of their work, providing a reliable record of their accomplishments and facilitating the evaluation process for potential donors.
Decentralized Funding: ScholarSource operates on a decentralized network, powered by blockchain technology. This decentralization eliminates the need for intermediaries, reduces transaction costs, and allows for global participation. Students can receive funding from donors around the world, expanding their opportunities for financial support.
Community Governance: ScholarSource incorporates community governance mechanisms, where participants have a say in platform policies and decision-making processes. Through decentralized voting systems, stakeholders can collectively shape the direction and development of the platform, fostering a sense of ownership and inclusivity.
## How we built it
We used React and Nextjs for the front end. We also integrated with ThirdWeb's SDK that provided authentication with wallets like Metamask. Furthermore, we built a smart contract in order to manage the crowdfunding for recipients and scholars.
## Challenges we ran into
We had trouble integrating with MetaMask and Thirdweb after writing the Solidity contract. Our configuration was throwing errors, and we had to correctly configure the HTTP/HTTPS link.
## Accomplishments that we're proud of
Our team is proud of building a full end-to-end platform that incorporates the very essence of blockchain technology. We are very excited that we are learning a lot about blockchain technology and connecting with students at UPenn.
## What we learned
* Aleo
* Blockchain
* Solidity
* React and Nextjs
* UI/UX Design
* Thirdweb integration
## What's next for ScholarSource
We are looking to expand to other blockchains and incorporate multiple blockchains like Aleo. We are also looking to onboard users as we continue to expand and add new features.
|
partial
|
🛠 Hack Harvard 2022 Ctrl Alt Create 🛠
🎵 Dynamic Playlist Curation 🎵
🌟 Highlights
* Mobile application
* Flutter SDK, Dart as Frontend and Python as Backend
* Uses Spotify REST API
* Facial recognition machine learning model to discern a person's emotional state
* Playlist curated based on emotional state, current time, weather, and location metrics
ℹ️ Overview
Some days might feel like nothing but good vibes, while others might feel a bit slow. Throughout it all, music has been by your side to give you an encompassing experience. Using the Dynamic Playlist Curation (DPC) app, all you have to do is snap a quick pic of yourself, and DPC will take care of the rest! The app analyzes your current mood and incorporates other metrics like the current weather and time of day to determine the best songs for your current mood. It's a great way to discover pieces that sound just right at the right time. We hope you'll enjoy the 10-song playlist we've created especially for you!
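As a sketch of the curation step (the mood-to-valence mapping and the seed genre are illustrative, shown here with the spotipy client for the Spotify Web API):

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

MOOD_VALENCE = {"happy": 0.9, "calm": 0.6, "sad": 0.2, "angry": 0.3}  # assumed mapping

def curate(mood: str, raining: bool, n: int = 10) -> list[str]:
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())  # creds from env vars
    target = max(0.0, MOOD_VALENCE[mood] - (0.1 if raining else 0.0))  # weather nudge
    result = sp.recommendations(seed_genres=["pop"], target_valence=target, limit=n)
    return [f'{t["name"]} - {t["artists"][0]["name"]}' for t in result["tracks"]]
```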
🦋 Inspiration
Don't you wish it was easier to find that one song that matches your current mood?
❗️Challenges
This was our first-ever hackathon. Most of us came in with no knowledge of developing applications. Learning the development cycle, the Flutter SDK, and fetching APIs were among our biggest challenges. Each of us came in with our own respective skill sets, but figuring out how to piece it all together was the hard part. At the very end, our final challenge was putting everything together and ensuring all of the dependencies were installed. After 36 long hours, we are proud to share this project with everyone!
✨ Accomplishments
We were able to develop an ML model that uses image data to return the emotional states of a person. We were able to successfully make API calls to the Spotify Web API and OpenWeatherMap API, link the frontend with our backend, and make a functional UI with the help of Flutter.
📖 What we learned
We learned how to connect our frontend and backend components by running servers on Flask. Additionally, we learned how to write functions to fetch data from our APIs and generate OAuth tokens for them.
🧐 What's next for Hack Harvard
We hope to take this learning experience to build confidence in working on more projects. Next year, we will come back more prepared, more motivated, and even more ambitious!
✍️ Authors
Ivan
Jason
Rishi
Andrew
🚀 Usage
Made specifically for Spotify
|
## Inspiration
While video-calling his grandmother, Tianyun was captivated by her nostalgic tales from her youth in China. It struck him how many of these cherished stories, rich in culture and emotion, remain untold or fade away as time progresses due to barriers like time constraints, lack of documentation, and the predominance of oral traditions.
For many people, however, it can be challenging to find time to hear stories from their elders. Along with limited documentation, and accessibility issues, many of these stories are getting lost as time passes.
**We believe these stories are *valuable* and *deserve* to be heard.** That’s why we created a tool that provides people with a dedicated team to help preserve these stories and legacies.
## What it does
Forget typing. Embrace voice. Our platform boasts a state-of-the-art Speech-To-Text interface. Leveraging cutting-edge LLMs combined with robust cloud infrastructure, we ensure swift and precise transcription. Whether your narration follows a structured storyline or meanders like a river, our bot Ivee skillfully crafts it into a beautiful, funny, or dramatic memoir.
## How we built it
Our initial step was an in-depth two-hour user experience research session. After crafting user personas, we identified our target audience: those who yearn to be acknowledged and remembered.
The next phase involved rapid setup and library installations. The team then split: the backend engineer dived into fine-tuning custom language models and optimizing database frameworks, the frontend designer focused on user authentication, navigation, and overall app structure, and the design team commenced the meticulous work of wireframing and conceptualization.
After an intense 35-hour development sprint, Ivee came to life. The designers brought a theme of nature into the application, symbolizing each story as a leaf, a life's collective memories as a tree, and cultures as groves of a forest. The frontend squad meticulously sculpted an immersive onboarding journey, integrating seamless interactions with the backend, and spotlighting the TTS and STT features. Meanwhile, the backend experts integrated technologies from our esteemed sponsors: Hume.ai, Intel Developer Cloud, and Zilliz Vector Database.
Our initial segregation into Frontend, Design, Marketing, and Backend teams soon blurred as we realized the essence of collaboration. Every decision, every tweak was a collective effort, echoing the voice of the entire team.
## Challenges we ran into
Our foremost challenge was crafting an interface that exuded warmth, empathy, and familiarity, yet was technologically advanced. Through interactions with our relatives, we discovered overall negative sentiment toward AI, often stemming from dystopian portrayals in movies.
## Accomplishments that we're proud of
Our eureka moment was when we successfully demystified AI for our primary users. By employing intuitive metaphors and a user-centric design, we transformed AI from a daunting entity to an amiable ally. The intricate detailing in our design, the custom assets & themes, solving the challenges of optimizing 8 different APIs, and designing an intuitive & accessible onboarding experience are all highlights of our creativity.
## What we learned
Our journey underscored the true value of user-centric design. Conventional design principles had to be recalibrated to resonate with our unique user base. We created an AI tool to empower humanity, to help inspire, share, and preserve stories, not just write them. It was a profound lesson in accessibility and the art of placing users at the heart of every design choice.
## What's next for Ivee
The goal for Ivee was always to preserve important memories and moments in people's lives. Below are some really exciting features that our team would love to implement:
* Reinforcement Learning on responses to fit your narration style
* Rust to make everything faster
* **Multimodal** storytelling. We want to include the most emotion-fueled audio clips; on top of the stylized and colour-coded text, we want to revolutionize the way we interact with stories.
* Custom handwriting for memoirs
* Use your voice and read your story in your voice using custom voices
In the future, we hope to implement additional features like photos and videos, as well as sharing features to help families and communities grow forests together.
|
## Inspiration
University can be very stressful at times and sometimes the resources you need are not readily available to you. This web application can help students relax by completing positive "mindfulness" tasks to beat their monsters. We decided to build a web application specifically for mobile since we thought that the app would be most effective if it is very accessible to anyone. We implemented a social aspect to it as well by allowing users to make friends to make the experience more enjoyable. Everything in the design was carefully considered and we always had mindfulness in mind, by doing some research into how colours and shapes can affect your mood.
## What it does
Mood prompts you to choose your current mood on your first login of every day, which then allows us to play music that matches your mood as well as create monsters which you can defeat by completing tasks meant to ease your mind. You can also add friends and check on their progress, and in theory interact with each other through the app by working together to defeat the monsters; however, we haven't been able to implement this functionality yet.
## How we built it
The project uses Ruby on Rails to implement the backend and JavaScript and React Bootstrap on the front end. We also used GitHub for source control management.
## Challenges we ran into
Starting the project, a lot of us had issues downloading the relevant software, as we started from a boilerplate, but through collaboration we were all able to figure it out and get started on the project. Most of us were still beginning to learn about web development and thus had limited experience programming in JavaScript and CSS, but again, through teamwork and the help of some very insightful mentors, we were able to pick up these skills very quickly, with each of us contributing a fair portion of the project.
## Accomplishments that we are proud of
Every single member in this group has learned a lot from this experience, no matter their previous experience going into the hackathon. The more experienced members did an amazing job helping those who had less experience, and those with less experience were able to quickly learn the elements to creating a website. The thing we are most proud of is that we were able to accomplish almost every one of our goals that we had set prior to the hackathon, building a fully functioning website that we hope will be able to help others. Every one of us is proud to have been a part of this amazing project, and look forward to doing more like it.
## What's next for Mood
Our goal is to give help to those who need it through this accessible app, and to make it something users would want to use. To improve this app, we would like to: add more mindfulness tasks, implement a 'friend collaboration' aspect, and potentially make a native mobile application (iOS and Android) so that it is even more accessible to people.
|
partial
|
## Inspiration
* My inspiration for this project is the tendency of medical facilities such as hospitals to lag behind in terms of technology. With this virtual, automated HospQueue app, we will save more lives by saving time for healthcare workers so they can focus on the more important tasks.
* Also, amidst the global pandemic, managing crowds has been one of the prime challenges for governments and various institutions. That is where HospQueue comes to the rescue. HospQueue is a webapp that allows you to join a queue virtually, which leads to no gathering and hence fewer people in hospitals, enabling health workers to have the essentials handy.
* During the pandemic, we have all witnessed how patients in need have to wait in lines to get themselves treated. This led to people violating social distancing guidelines and giving the opportunity for the virus to spread further.
* I had an idea to implement HospQueue to help hospitals manage and check in incoming patients smoothly.
## What it does
It saves time for healthcare workers as it takes away a task that is usually time-consuming. With HospQueue, you can check into your hospital on the app instead of in person. Essentially, you either don’t go to the hospital until it is your turn, or you stay in the car until you are next in line. This will not only make the check-in process for all hospitals easier, more convenient, and safer, but will also allow health care workers to focus on saving more people.
## How I built it
The frontend was built using HTML and CSS. The backend was built using Flask, with PostgreSQL as the database.
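A sketch of the check-in endpoint (the table and column names are illustrative):

```python
from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)
conn = psycopg2.connect("dbname=hospqueue")  # placeholder connection string

@app.route("/checkin", methods=["POST"])
def checkin():
    name = request.json["name"]
    with conn, conn.cursor() as cur:
        cur.execute("INSERT INTO queue (name) VALUES (%s) RETURNING id", (name,))
        ticket = cur.fetchone()[0]
        cur.execute("SELECT count(*) FROM queue WHERE id < %s AND served = FALSE",
                    (ticket,))
        ahead = cur.fetchone()[0]
    return jsonify({"ticket": ticket, "patients_ahead": ahead})
```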
## Challenges I ran into
Some challenges I ran into were completing the database queries required for the system. I also had trouble with making the queue list work effectively. Hosting the website on Heroku was quite a challenge as well.
## Accomplishments that I'm proud of
I am glad to have implemented the idea of HospQueue that I thought of at the beginning of the hackathon. I made the real-time fetching and updating of the database successful.
## What I learned
* I learned how to fetch and update the database in real time.
* I learned how to deploy an app on Heroku using Heroku's Postgresql database.
## What's next for HospQueue
HospQueue will add and register hospitals so it is easier to manage. I also hope to integrate AI to make it easier for people to log in, maybe by simply scanning a QR code. Finally, I will also create a separate interface in which doctors can log in and see all the people in line instead of having to pull it from the program.
|
## Inspiration
One of our team members, Andy, ended up pushing back his flu shot as a result of the lengthy wait time and large patient count. Unsurprisingly, he later caught the flu and struggled with his health for just over a week. Although we joke about it now, the reality is many medical processes are still run off outdated technology and can easily be streamlined or made more efficient. This is what we aimed to do with our project.
## What it does
Streamlines the process of filling out influenza vaccine forms for both medical staff, as well as patients.
Makes the entire process more accessible for a plethora of demographics without sacrificing productivity.
## How we built it
Front-End built in HTML/CSS/Vanilla JavaScript (ES6)
Back-End built with Python and a Flask Server.
MongoDB for database.
Microsoft Azure Vision API, Google Cloud Platform NLP for interactivity.
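As a sketch of the OCR step against Azure's asynchronous Read API (the endpoint and key are placeholders):

```python
import time
import requests

ENDPOINT = "https://<resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<subscription-key>"                                   # placeholder

def read_text(image_bytes: bytes) -> list[str]:
    headers = {"Ocp-Apim-Subscription-Key": KEY,
               "Content-Type": "application/octet-stream"}
    resp = requests.post(f"{ENDPOINT}/vision/v3.2/read/analyze",
                         headers=headers, data=image_bytes)
    resp.raise_for_status()
    poll_url = resp.headers["Operation-Location"]  # the Read call is asynchronous
    while True:
        result = requests.get(poll_url,
                              headers={"Ocp-Apim-Subscription-Key": KEY}).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(0.5)
    return [line["text"]
            for page in result.get("analyzeResult", {}).get("readResults", [])
            for line in page["lines"]]
```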
## Challenges we ran into
Getting quality captures from Azure's Vision API so we could successfully pull the meaningful data we wanted.
Front-End to Back-End communication with GCP NLP functions triggering events.
## Accomplishments that we're proud of
Successfully implementing cloud technologies and tools we had little/no experience utilizing, coming into UofTHacks.
The entire project overall.
## What we learned
How to communicate image data from a webcam to Microsoft Azure's Vision API and analyze Optical Character Recognition results.
Quite a bit about NLP tendencies and how to get the most accurate/intended results when utilizing it.
Github Pages cannot deploy Flask servers LOL.
How to deploy with Heroku (as a result of our failure with Github Pages).
## What's next for noFluenza
Payment system for patients depending on insurance coverage
Translation into different languages.
|
## Inspiration
As students of Berkeley, we value websites like GoFundMe in providing anyone with the opportunity to spend money on causes they believe in. One problem we realized, however, is that the goodwill and trust of the public can be taken advantage of, because there is a lack of strict accountability when it comes to the way the fundraised money is spent. From here, we noticed a similar trend among crowdsourced funding efforts in general -- whether it be funding for social causes or funding for investors. Investors wanting to take a leap of faith in a cause that catches their eye may be discouraged from investing for fear of losing all their money — whether from being scammed or from an irresponsible usage of money — while genuine parties who need money may be skipped. We wanted to make an application that solves this problem by giving the crowd control and transparency over the money that they provide.
## What it does
Guaranteed Good focuses on the operations of NPOs that need financial support with building technologies for their organization. Anybody can view the NPO's history and choose to provide cryptocurrency to help the NPO fund their project. However, the organization is forced to allocate and spend this money legitimately via smart contracts; every time they want to use a portion of their money pool and hire a freelancer to contribute to their project, they must notify all their investors who will decide whether or not to approve of this expenditure. Only if a majority of investors approve can the NPO actually use the money, and only in the way specified.
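The real implementation lives in a Solidity smart contract; purely as a language-neutral illustration, here is a minimal Python model of the majority-approval flow described above. All names, and the strict-majority rule as written here, are our assumptions rather than the team's exact contract logic.

```python
from dataclasses import dataclass, field

@dataclass
class SpendingRequest:
    amount: int
    purpose: str
    approvals: set = field(default_factory=set)

class FundPool:
    def __init__(self, balance: int, investors: set[str]):
        self.balance = balance
        self.investors = investors
        self.requests: list[SpendingRequest] = []

    def propose(self, amount: int, purpose: str) -> int:
        """NPO proposes an expenditure; investors are then asked to vote."""
        self.requests.append(SpendingRequest(amount, purpose))
        return len(self.requests) - 1

    def approve(self, request_id: int, investor: str) -> None:
        """Record one investor's approval of a pending request."""
        if investor in self.investors:
            self.requests[request_id].approvals.add(investor)

    def execute(self, request_id: int) -> bool:
        """Release funds only if a strict majority of investors approved."""
        req = self.requests[request_id]
        if len(req.approvals) * 2 > len(self.investors) and req.amount <= self.balance:
            self.balance -= req.amount
            return True
        return False
```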
## How we built it
To enable the smart contract feature of our application, we used Solidity for part of our backend infrastructure.
We programmed the frontend in React, Next.js, and Tailwind CSS.
## Challenges we ran into
None of us had previous experience with Solidity or blockchain technologies so there was a steep learning curve when trying to familiarize ourselves with implementing smart contracts and working with blockchain. It was difficult to get started and we had a lot of confusion with setup and dependencies management.
The second thing that stumped us was adapting to Solidity as a backend language. Since the language is more niche than commonly used backend languages, there were fewer resources teaching us how to integrate our React frontend with our Solidity backend. Luckily, we found out that Solidity can integrate with the Next.js framework, so we set out to learn and implement Next.
## Accomplishments that we're proud of
We're all proud of the amount of deep diving that we did to familiarize ourselves with blockchain in a short amount of time! We thought it would be a risky move since we weren't sure if we would be able to actually learn and complete a blockchain-centered application, but we wanted to try anyway since we really liked our idea. Although we are by no means experts in blockchain now, it was fun spending time and learning a lot about this technology. We were also really satisfied when we were able to pull together a functioning full-stack application by the end of 24 hours.
In addition, with so many moving components in our application, it was especially important to make our website intuitive and simple for users to navigate. Thus, we spent time coming up with a streamlined and aesthetic design for our application and implementing it in React. Additionally, none of us really had design experience, so we tried our best to quickly learn Figma and simple design principles, and were surprised when it didn't come out as totally awkward-looking.
## What we learned
* New technologies such as blockchain, Solidity, Figma design, and Next
* How to communicate smart contract data from Solidity using Next and Node
* To appreciate the amount of careful planning and frontend design necessary for a good web application with many functionalities
## What's next for Guaranteed Good
**Dashboard**
* Currently, Guaranteed Good has a user dashboard that is bare bones. With more time, we want to offer analytics on how each project is going, add graphs, and surface more information for the user.
**Optimizing Runtime**
* With a lot of projects and user information to load, the website takes a bit longer to run than we would like. We want to integrate lazy loading, optimize images, and add website caching.
**Matching Freelancer users**
* Allowing freelancers to post and edit their profiles on the job board, and to accept or reject job offers
|
partial
|
## Inspiration
We wanted to ease the complicated prescription medicine process, especially for elderly people who have to juggle multiple prescriptions at the same time.
## What it does
The AutoMed dispenses medication, at the right time, in the right quantity for the patient. It prevents the patient from taking the incorrect medication, or the incorrect amount of medicine. It also uploads data to MongoDB, so that the patient's pharmacist and doctor can monitor the patient's progress on their medication. In addition, the patient and pharmacist can be reminded to refill the prescription when the medication runs low.
## How I built it
We built the AutoMed using Python 3 running on a Raspberry Pi, which controls all the servos and LEDs.
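As a rough illustration of that hardware control, here is a minimal sketch using the gpiozero library; the pin numbers and timings are illustrative assumptions, not the team's actual wiring.

```python
from time import sleep
from gpiozero import LED, Servo

dispense_servo = Servo(17)  # hypothetical GPIO pin
status_led = LED(27)        # hypothetical GPIO pin

def dispense_pill() -> None:
    """Run one dispense cycle: light the LED, swing the servo, and reset."""
    status_led.on()
    dispense_servo.max()  # rotate to release a single pill
    sleep(0.5)
    dispense_servo.min()  # return to the rest position
    status_led.off()
```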
## Challenges I ran into
Initially, we attempted using the DragonBoard 410c and PYNQ-Z1, but we could not get the GPIO to properly interface in order to control the motors. As a result, we switched to the Raspberry Pi.
## Accomplishments that I'm proud of
We're proud of the proof of concept dispenser we created.
## What I learned
Research and choose the right hardware ahead of time!
## What's next for AutoMed
We hope to expand this to allow further push reminders and functionality for the patient, doctor, and pharmacist. For example, expiry date reminders can be set, in addition to reminders if the patient forgets to take their medication. Missed medication can be tracked and reported to the doctor, allowing them to track and/or recommend alternatives or best courses of action. Specialized medication storage can be incorporated, such as locked storage for opioids or refrigerated storage for antibiotics. We would also add an option for non-prescription medication, such as Advil, Benadryl, or Tylenol. We'd also like to incorporate a GUI and a touchscreen on the AutoMed, to allow for easy use by the patient.
|
## Inspiration
This project was inspired by a team member's family: his grandparents always have to take medicine but often forget about it. It is not only his grandparents who forget their medicine, but also his mom. Although his mom is very young, in today's fast-paced society people always forget to do small things like taking their pills. From this inspiration, we decided to develop a pill reminder, but then we were further inspired by a TikTok video about a person with Parkinson's disease who couldn't pick up an individual pill from the container. In the end, we decided to create a project that solves the problem of people forgetting to take their pills while also helping people easily take individual pills.
## What it does
Our project, the Delta Dispenser, uses an app that communicates with a database to schedule alerts reminding users to take their pills, and it tracks their medication information in the app. The Delta Dispenser hardware alerts the user when the scheduled time is reached and automatically dispenses the correct amount of pills into the container.
## How we built it
The frontend of the app is made with **Flutter**, the app communicates with a **firebase real-time database** to store medicinal and scheduling information for the user. The physical component uses an **embedded microcontroller** called an ESP-32 which we chose for its ability to connect to WiFi and be able to sync with the firebase database to know when to dispense the pills.
## Challenges we ran into
The time constraint was definitely a big challenge and we accounted for that by deciding which features were most important in emphasizing our main idea for this project. These parts include the mechanical indexer of the pills, the interface the user would interact with, and how the database would look for communication with the app and the embedded device.
## Accomplishments that we're proud of
We are most proud of how this project utilized many different aspects of engineering, from mechanical to electrical and software. Our team did a really good job at communicating throughout the design process which made integration at the end much easier.
## What we learned
During this project, we learned how to use Flutter to create a mobile app and how Firebase works. Although we only picked up a few new skills, they will be very useful in the future, and most importantly we were able to build upon the skills we already had. For example, we can now develop hardware that communicates through Firebase.
## What's next for Delta Dispenser
The next steps for the Delta Dispenser include building a fully 3D printed prototype, along with the control box and hopper as shown in the CAD renders. On the software side, we would also like to add the ability for more complicated drug scheduling, while keeping the UI easy enough for anyone to set up. Having another portal that allows a doctor to directly input the information themselves is also a feature we are interested in having.
|
## Inspiration
**75% of adults over the age of 50** take prescription medication on a regular basis. Of these people, **over half** do not take their medication as prescribed - either taking them too early (causing toxic effects) or taking them too late (non-therapeutic). This type of medication non-adherence causes adverse drug reactions which is costing the Canadian government over **$8 billion** in hospitalization fees every year. Further, the current process of prescription between physicians and patients is extremely time-consuming and lacks transparency and accountability. There's a huge opportunity for a product to help facilitate the **medication adherence and refill process** between these two parties to not only reduce the effects of non-adherence but also to help save tremendous amounts of tax-paying dollars.
## What it does
**EZPill** is a platform that consists of a **web application** (for physicians) and a **mobile app** (for patients). Doctors first create a prescription in the web app by filling in information including the medication name and indications such as dosage quantity, dosage timing, total quantity, etc. This prescription generates a unique prescription ID and is translated into a QR code that practitioners can print and attach to their physical prescriptions. The patient then has two choices: 1) create an account on **EZPill** and scan the QR code (which automatically loads all prescription data to their account and connects with the web app), or 2) choose not to use EZPill (the prescription will not be tied to the patient). This choice of data assignment method not only provides a mechanism for easy onboarding to **EZPill**, but makes sure that the privacy of the patients' data is not compromised by not tying the prescription data to any patient **UNTIL** the patient consents by scanning the QR code and agreeing to the terms and conditions.
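For illustration, here is a minimal sketch of the QR-generation step. The actual backend is written in Go; this Python version only shows the idea, and the prescription ID is hypothetical.

```python
import qrcode

def prescription_qr(prescription_id: str, out_path: str) -> None:
    """Encode the unique prescription ID as a printable QR code image."""
    img = qrcode.make(prescription_id)  # returns a PIL image
    img.save(out_path)

prescription_qr("rx-7f3a9c", "rx-7f3a9c.png")  # hypothetical ID
```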
Once the patient has signed up, the mobile app acts as a simple **tracking tool** while the medicines are consumed, but also serves as a quick **communication tool** to quickly reach physicians to either request a refill or to schedule the next check-up once all the medication has been consumed.
## How we built it
We split our team into 4 roles: API, Mobile, Web, and UI/UX Design.
* **API**: A Golang Web Server on an Alpine Linux Docker image. The Docker image is built from a laptop and pushed to DockerHub; our **Azure App Service** deployment can then pull it and update the deployment. This process was automated with use of Makefiles and the **Azure** (az) **CLI** (Command Line Interface). The db implementation is a wrapper around MongoDB (**Azure CosmosDB**).
* **Mobile Client**: A client targeted exclusively at patients, written in Swift for iOS.
* **Web Client**: A client targeted exclusively at healthcare providers, written in HTML & JavaScript. The Web Client is also hosted on **Azure**.
* **UI/UX Design**: Userflow was first mapped with the entire team's input. The wireframes were then created using Adobe XD in parallel with development, and the icons were vectorized using Gravit Designer to build a custom assets inventory.
## Challenges we ran into
* Using AJAX to build dynamically rendering websites
## Accomplishments that we're proud of
* Built an efficient privacy-conscious QR sign-up flow
* Wrote a custom MongoDB driver in Go to use Azure's CosmosDB
* Recognized the needs of our two customers and tailored the delivery of the platform to their needs
## What we learned
* We learned the concept of "Collections" and "Documents" in the Mongo(NoSQL)DB
## What's next for EZPill
There are a few startups in Toronto (such as MedMe, Livi, etc.) that are trying to solve this same problem through a pure hardware solution using a physical pill dispenser. We hope to **collaborate** with them by providing the software solution in addition to their hardware solution to create a more **complete product**.
|
partial
|
## 💡 Inspiration 💡
With back to school and back to work, planning outfits everyday is harder than ever. We've all been in the situation where we've ransacked our wardrobe, trying to decide what to wear. With each outfit we ask ourselves, does it look okay? It's a relatable experience for everyone and it's difficult to get an unbiased opinion. With Hot or Not, you'll know whether or not your outfit delivers the impact you want it to, whether you really want that first date to go well or have an important meeting where you must dress to impress. Hot or Not will guide your fashion journey.
## 🤳 What it does 🤳
Hot or Not is a mobile application that uses your phone's camera or photo library to receive images of your outfit. Once you've uploaded your image, simply press the magical button to decide the rating on your attire and the app's machine learning algorithms will make the final judgment whether your fit is hot or not according to the internet's standards.
## 🛠 How we built it 🛠
This project can be split into three main components: data collection, machine learning, and mobile app development. The data collection was done using Google Images; balanced training and testing sets were created, which we used to fine-tune a ResNet model that was pre-trained on ImageNet.
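As a sketch of that fine-tuning step, assuming PyTorch/torchvision: the architecture choice (ResNet-18), freezing strategy, and hyperparameters here are our illustrative assumptions, not the team's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet pre-trained on ImageNet and freeze the backbone.
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False
# Swap the final layer for a 2-class head: "hot" vs. "not".
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step; `images`/`labels` come from a DataLoader."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```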
## ⚠️ Challenges we ran into ⚠️
This project was fairly difficult especially in regards to connecting the frontend with the backend. The development of the iOS app was also challenging since none of the group members had strong experience in Xcode development or deploying large ML models.
## 🏅 Accomplishments that we're proud of 🏅
We're proud of creating a successful machine learning model that can work in a variety of environments and produce quality results. We're also proud of how we managed to learn mobile development and bring the app over the finish line.
## 🧠 What we learned 🧠
We learned new ways to connect frontend and backend programs, more about Amazon Web Services, how to code in Swift and design user-friendly applications.
## 🚀 What's next for Hot or Not 🚀
We have a lot of things planned for Hot or Not. The first is to train the models with larger training sets and update them to capture the latest fashion trends. We'd also like to implement a calendar feature where the app automatically keeps track of what you wear using segmentation; if you're wearing a bad outfit, it'll recommend something fashionable you wore a while ago!
|
## Inspiration
Toronto's temperature is extremely sporadic, and as such, choosing clothes to wear can sometimes get confusing. Sometimes it's too hot for a thick jacket and too cold for a hoodie. We wanted to address this problem.
## What it does
Through our mobile app, users are notified of what other people in their area are wearing by analyzing publicly available live-camera footage from CCTV cameras and other sources. This greatly helps users in choosing the right attire for the unpredictable outdoor weather.
## How We built it
Our cross-platform app is built with a React Native frontend and powered by a Flask backend. We also utilize extensive image detection and recognition through our implementation of OpenCv and Inception V3 to detect and classify clothing from video feeds.
## Challenges We ran into
Clothing items are extremely similar in nature, and thus it is extremely challenging to distinguish between them. This is exacerbated by the fact that there are very few clothing classification datasets to train on, leading to a class imbalance problem during training. Furthermore, there were problems with collaborative editing while training models, which could not be committed to git.
## Accomplishments that We're proud of
We are proud of putting together an idea in such a short period of time and achieving our main goals for the application in the time frame.
## What We've learned
This experience was a great opportunity to further our teamwork skills and the ability to deliver solutions in fast-paced work environments.
## What's next for Clothology
We will be working on this solution more in our free time, and are looking to deploy our app by the end of Q2.
|
## Inspiration
Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves: enabling people to learn more about themselves and how they feel.
## What it does
A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which key words are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves.
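As an illustration of the brainwave-to-keyword step, here is a rough Python sketch; the band boundaries and the mood-to-prompt mapping are entirely our own assumptions, not the team's exact logic (the 200 Hz default reflects the Ganglion's typical sample rate).

```python
import numpy as np

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Mean spectral power of one EEG channel in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(power[mask].mean())

def mood_keywords(signal: np.ndarray, fs: int = 200) -> str:
    """Map relative alpha/beta activity to a Stable Diffusion prompt fragment."""
    alpha = band_power(signal, fs, 8, 12)   # often associated with relaxation
    beta = band_power(signal, fs, 12, 30)   # often associated with focus/stress
    return "serene watercolor landscape" if alpha > beta else "stormy abstract energy"
```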
## Challenges we ran into
We faced a series of challenges throughout the hackathon, which is perhaps the essence of all hackathons. Initially, we struggled with setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to serve the frontend from Flask instead. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, Twitter API and OAuth2.0.
## What's next for BrAInstorm
We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
|
losing
|
# SAFE -- THE APP THAT REVOLUTIONIZES SECURITY AND INFORMATION STORAGE
## What SAFE is all about
In this day and age, the use of electronics has increased significantly. When it comes to banking, storing personal information and more… EVERYTHING IS ON OUR CELLPHONES NOW! The protection of personal information is often overlooked, but it is more important than ever to address, with information breaches and cyber-security compromises on the rise. We came up with the idea to build an app called SAFE, a password-protected cellphone app that stores all personal information on a cellular device, acting as a safe for all personal information. Users can choose to share info with other users from the app, and all of the shared data is safely encrypted.
Once the user signs up using the app, they will be greeted by a user-friendly home-screen which will initially contain customizable folders that can be set up for private information storage.
The cool thing about SAFE is that users are able to share private and personal information with other authorized users directly from the app itself.
## End-to-end encryption (E2EE)
E2EE was one of the things that we focused a lot on for the development of SAFE. E2EE ensures that the data shared from one authorized user to another through SAFE remains confidential to the users involved in the sharing session.
### Why E2EE:
* In order to protect the clients’ credential data during a sharing session
* SAFE’s implementation structure is meant to reassure users of its security using E2EE
* SAFE is targeted to become an essential tool for sharing credentials
* E2EE provides absolute security for all clients using the share feature
### How SAFE uses E2EE:
* Outgoing and incoming credential data using the share feature must be encrypted/decrypted
* SAFE’s server will only handle encrypted data received from the client
* The Diffie-Hellman algorithm will be used to ensure powerful security (see the sketch after this list)
* When sharing data, E2EE will be enabled by default
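As a minimal sketch of that Diffie-Hellman exchange, here is the X25519 variant using Python's `cryptography` package; SAFE's actual parameters and key-management details may differ.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each client generates a key pair and shares only the public half.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Both sides compute the same shared secret; it never crosses the wire.
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared

# Stretch the raw secret into a symmetric key for encrypting shared credentials.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"safe-share-session",
).derive(alice_shared)
```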
## How we built the app
The ideas were listed in the slides along with a very basic implementation of the app. The design interface concept was made as an example using Figma. Finally, Flutter was used to visually display the app.
## Challenges that we ran into
Our team encountered some challenges with the development and coming up with an idea for an app. We decided to think about ongoing problems in the world that are connected to technology and have a huge impact on us. After recognizing the problem, we decided to come up with a solution for that problem and implement the solution into our app.
## Accomplishments that we are proud of
We are proud of working together as a team and coming up with a potential solution that could revolutionize the world one day. Everything is on our cellphones now, from banking information to electronic gift cards. Creating a reliable and safe storage solution in this day and age is something we are definitely proud of.
## What we learned
Throughout this project, we learned to work collaboratively with each other and efficiently maintain a smooth workflow to complete this project. Furthermore, while working as a team, we also learned to implement each of our ideas and modify the idea according to each of our views and perspectives.
## What's next for SAFE
We hope this project will be considered as a possible solution for safe personal data storage in this day and age. We think an idea like SAFE will go a long way and potentially become a primary solution for users that expect security/storage solutions for sensitive data in one single app.
## Presentation pitch
<https://docs.google.com/presentation/d/1PCxM_ZpspbXKNALvbRhuv-hN_K7Hbbio5R8rYaJtCjY/edit?usp=sharing>
#### An idea by: Abrar, Ismail, Matthew, and Lucas
|
## Inspiration
Moving out of residence is a tedious process for students. Why? Because we own a bunch of stuff we no longer need, and we don't know where or how to donate it. That is why our team created an app where students can share their food, shampoo, chairs... anything, with others.
## What it does
An individual makes a post about stuff they have to spare and want to share. They also choose a preferred meeting time and a location on campus where they can deliver. If someone wants to take an item, they inform the poster by hitting the "Take it" button on the post.
## How we built it
* Figma: we used the platform to brainstorm the idea, draft the app's layout and design
* React.js: we used React, especially the React-Bootstrap library, to create a responsive and well-organized website.
* Firebase: besides connecting the app with real-time storage, we use Firebase Authentication for the log-in page.
## Challenges we ran into
We are all beginners in React.js. Hence, it took time for us to learn the basics of the library and implement it to the website.
We also had some conflicts in the design and task distribution process.
## Accomplishments that we're proud of
Picking up React.js and creating a responsive website
## What we learned
* Authentication using Firebase
* React.js
* Web development
* Teamwork
## What's next for ShareWithMe
Initially, the app works locally in the university residence so that it's safe to share private information and for students to contact others. Later on, once it has stricter security and scam detection, we want to expand the region to outside of the university. We also plan to finalize the profile page so users can check what they have lent and taken.
|
## Inspiration
WristPass was inspired by the fact that NFC is usually only authenticated using fingerprints. If your fingerprint is compromised, there is nothing you can do to change it. We wanted to build a similarly intuitive technology that would allow users to change their unique IDs at the push of a button. We envisioned it to be simple and not require many extra accessories, which is exactly what we created.
## What it does
WristPass is a wearable Electro-Biometric transmission device and companion app for secure and reconfigurable personal identification with our universal receivers. Make purchases with a single touch. Check into events without worrying about forgetting tickets. Unlock doors by simply touching the handle.
## How we built it
WristPass was built using several different means of creation, as there are multiple parts to the project. The WristPass itself was fabricated using various electronic components. The companion app uses Swift to transmit and display data to and from your device. The app also plugs into our back end to grab user data and information. Finally, our receiving plates are able to handle the data in any way they want after the correct signal has been decoded. From here, we demoed the unlocking of a door, a check-in at a concert, and paying for a meal at your local Subway shop.
## Challenges we ran into
By far the largest challenge we ran into was properly receiving and transcoding the user’s encoded information. We could reliably transmit data from our device using an alternating current, but it became a much larger ordeal when we had to reliably detect these incoming signals and process the information stored within. In the end we were able to both send and receive information.
## Accomplishments that we're proud of
1. Actually being able to transmit data using an alternating current
2. Building a successful coupling capacitor
3. The vast application of the product and how it can be expanded to so many different endpoints
## What we learned
1. We learned how to do capacitive coupling and decode signals transmitted from it.
2. We learned how to create a RESTful API using MongoDB, Spring and a Linode Instance.
3. We became more familiarized with new APIs including: Nexmo, Lyft, Capital One’s Nessie.
4. And a LOT of physics!
## What's next for WristPass
1. We plan on improving security of the device.
2. We plan to integrate Bluetooth in our serial communications to pair it with our companion iOS app.
3. Develop for android and create a web UI.
4. Partner with various companies to create an electro-biometric device ecosystem.
|
losing
|
## Inspiration
According to the Federal Trade Commission (FTC), credit fraud is the most common form of identity fraud. The increasing volume and frequency of trade pose a difficult challenge for firms in financial services to analyze real-time data to detect fraudulent transactions. A scalable and distributed data solution allows these business users to alert their customers, assuring the users that their transactional information is managed securely.
## What it does
Frody is a microservice on the Google Cloud Platform that uses machine learning to flag and inform users on real-time transactional data. The user can "subscribe" to the transactional activities for multiple cards on a centralized dashboard, where they can monitor their transactional activities. If suspicious activity is detected with Frody's fraud detection model, the users will be informed via text message immediately.
## How we built it
We simulated a real-time messaging service with Google Cloud Pub/Sub, using a Java Spring Boot server-side application to stream randomly generated transactional data. Once the Pub/Sub consumer persists a message to the Google BigQuery database, the message is checked for fraud via the machine learning model. If a fraudulent transaction is detected, the user receives the transaction information on their phone via Twilio. Lastly, our centralized dashboard, where users can "subscribe" to and monitor transactional activities for multiple cards, is built using React.
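The team's streaming service is written in Java Spring Boot; purely as an illustration of the consume, check, and alert flow, here is a minimal Python sketch. Subscription names, phone numbers, credentials, and the model stub are all hypothetical.

```python
from google.cloud import pubsub_v1
from twilio.rest import Client as TwilioClient

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "transactions-sub")  # hypothetical
twilio = TwilioClient("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def fraud_model_predict(txn: str) -> bool:
    """Stub standing in for the trained fraud-detection model."""
    return False

def handle(message):
    txn = message.data.decode("utf-8")
    if fraud_model_predict(txn):
        twilio.messages.create(
            to="+15550100", from_="+15550199",  # placeholder numbers
            body=f"Suspicious transaction flagged: {txn}",
        )
    message.ack()

# subscribe() returns a streaming-pull future; block on it to keep consuming.
future = subscriber.subscribe(subscription, callback=handle)
future.result()
```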
## Challenges we ran into
We had to learn how to use services of Google Cloud Platform and Twilio. These services brought us much difficulty as creating a microservice on GCP was not as simple as we thought.
## What's next
A future development would be to further optimize our machine learning model for even more accurate detection of suspicious activity. We also intend to expand our microservice to other transaction methods such as digital wallets and cryptocurrency wallets.
|
## Inspiration
The vicarious experiences of friends, and some of our own, immediately made clear the potential benefit to public safety of the City of London's dataset. We felt inspired to use our skills to make this data more accessible, to improve confidence for those travelling alone at night.
## What it does
By factoring in the location of street lights, and greater presence of traffic, safeWalk intuitively presents the safest options for reaching your destination within the City of London. Guiding people along routes where they will avoid unlit areas, and are likely to walk beside other well-meaning citizens, the application can instill confidence for travellers and positively impact public safety.
## How we built it
There were three main tasks in our build.
1) Frontend:
Chosen for its flexibility and API availability, we used ReactJS to create a mobile-to-desktop scaling UI. Making heavy use of the available customization and data presentation in the Google Maps API, we were able to achieve a cohesive colour theme, and clearly present ideal routes and streetlight density.
2) Backend:
We used Flask with Python to create a backend that served as a proxy for connecting to the Google Maps Directions API and for ranking the safety of each route. We chose this because we had more experience as a team with Python and believed the data processing would be easier in Python.
3) Data Processing:
After querying the appropriate dataset from London Open Data, we had to create an algorithm to determine the "safest" route based on streetlight density. This was done by partitioning each route into subsections, determining a suitable geofence for each subsection, and then storing each light in the geofence. Then, we determine the total number of lights per km to calculate an approximate safety rating.
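A simplified version of that light-density calculation might look like the following; the geofence radius here is an illustrative value, not the one used in the project.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def safety_rating(route_points, lights, fence_m=25):
    """Lights per km: count streetlights within `fence_m` of any route point."""
    near = {
        i for i, light in enumerate(lights)
        if any(haversine_m(p, light) <= fence_m for p in route_points)
    }
    route_km = sum(
        haversine_m(a, b) for a, b in zip(route_points, route_points[1:])
    ) / 1000
    return len(near) / route_km if route_km else 0.0
```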
## Challenges we ran into:
1) Frontend/Backend Connection:
Connecting the frontend and backend of our project together via RESTful API was a challenge. It took some time because we had no experience with using CORS with a Flask API.
2) React Framework
None of the team members had experience in React, and only limited experience in JavaScript. Every feature implementation took a great deal of trial and error as we learned the framework, and developed the tools to tackle front-end development. Once concepts were learned however, it was very simple to refine.
3) Data Processing Algorithms
It took some time to develop an algorithm that could handle our edge cases appropriately. At first, we thought we could develop a graph with weighted edges to determine the safest path. Edge cases such as handling intersections properly and considering lights on either side of the road led us to dismiss the graph approach.
## Accomplishments that we are proud of
Throughout our experience at Hack Western, although we encountered challenges, through dedication and perseverance we made multiple accomplishments. As a whole, the team was proud of the technical skills developed when learning to deal with the React framework, data analysis, and web development. In addition, the levels of teamwork, organization, and enjoyment/team spirit reached in order to complete the project in a timely manner were great achievements.
From the perspective of the hack developed, and the limited knowledge of the React Framework, we were proud of the sleek UI design that we created. In addition, the overall system design lent itself well towards algorithm protection and process off-loading when utilizing a separate back-end and front-end.
Overall, although a challenging experience, the hackathon allowed the team to reach accomplishments of new heights.
## What we learned
For this project, we learned a lot more about React as a framework and how to leverage it to make a functional UI. Furthermore, we refined our web-based design skills by building both a frontend and backend while also using external APIs.
## What's next for safewalk.io
In the future, we would like to be able to add more safety factors to safewalk.io. We foresee factors such as:
* Crime rate
* Pedestrian accident rate
* Traffic density
* Road type
|
## Inspiration
To any financial institution, the most valuable asset to increase revenue, remain competitive and drive innovation, is aggregated **market** and **client** **data**. However, a lot of data and information is left behind due to lack of *structure*.
So we asked ourselves, *what is a source of unstructured data in the financial industry that would provide novel client insight and color to market research?* We chose to focus on phone call audio between a salesperson and client at an investment banking level. This source of unstructured data is more often than not completely gone after a call ends, leaving valuable information completely underutilized.
## What it does
**Structerall** is a web application that translates phone call recordings to structured data for client querying, portfolio switching/management and novel client insight. **Structerall** displays text dialogue transcription from a phone call and sentiment analysis specific to each trade idea proposed in the call.
Instead of losing valuable client information, **Structerall** will aggregate this data, allowing the institution to leverage this underutilized data.
## How we built it
We worked with RevSpeech to transcribe call audio to text dialogue. From here, we connected to Microsoft Azure to conduct sentiment analysis on the trade ideas discussed, and displayed this analysis on our web app, deployed on Azure.
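For illustration, a minimal sketch of the sentiment step using the `azure-ai-textanalytics` SDK; the endpoint, key, and sample text are placeholders, and the team's actual integration may differ.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<key>"),  # placeholder
)

# One document per trade idea extracted from the call transcript.
trade_ideas = ["Client is keen to rotate into short-duration bonds."]
for result in client.analyze_sentiment(trade_ideas):
    print(result.sentiment, result.confidence_scores.positive)
```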
## Challenges we ran into
We had some trouble deploying our application on Azure. This was definitely a slow point for getting a minimum viable product on the table. Another challenge we faced was learning the domain to fit our product to, and what format/structure of data may be useful to our proposed end users.
## Accomplishments that we're proud of
We created a proof of concept solution to an issue that occurs across a multitude of domains; structuring call audio for data aggregation.
## What we learned
We learnt a lot about deploying web apps, server configurations, natural language processing and how to effectively delegate tasks among a team with diverse skill sets.
## What's next for Structerall
We also developed some machine learning algorithms/predictive analytics to model credit ratings of financial instruments. We built out a neural network to predict credit ratings of financial instruments and clustering techniques to map credit ratings independent of S&P and Moody's. We unfortunately were not able to showcase this model but look forward to investigating this idea in the future.
|
partial
|
## Inspiration
Personally, I really enjoy listening to music, and to me visualizing audio always seemed like an interesting way to add to the experience.
## What it does
Eye Hear A Song is a music visualizer. It allows users to select from a list of songs to play, and then displays visual output on a canvas in real time.
## How I built it
The application is built on Node.js, is styled with HTML and CSS, and makes use of webaudiox.js, a set of helper functions for the WebAudio API, to parse audio. It is hosted on Heroku.
## Challenges I ran into
This is the first project I've made with Node.js, as well as the first time I've configured a Heroku app. This made the development process take quite a long time, but overall it was a good learning experience.
The API I used to parse audio files made it difficult to implement client-side file uploads. Thus, server side features will be added in the future to accommodate file uploads.
## What's next for Eye Hear A Song
The project was originally meant to be a platform for sharing music between friends. In the future, users should have the ability to upload their own favourite songs for others to listen to.
|
# ✨ Inspiration
|
## Inspiration
We want to make everyone impressed by our amazing project! We wanted to create a revolutionary tool for image identification!
## What it does
It will identify any pictures that are uploaded and describe them.
## How we built it
We built this project with tons of sweat and tears. We used the Google Vision API, Bootstrap, CSS, JavaScript and HTML.
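For illustration, a minimal Python sketch of the identification step using the `google-cloud-vision` client library; credential setup is omitted, and the project's own integration may differ.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # assumes credentials are configured

def describe_image(image_bytes: bytes) -> list[str]:
    """Return the label descriptions Vision detects in the uploaded image."""
    image = vision.Image(content=image_bytes)
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]
```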
## Challenges we ran into
We couldn't find a way to use the API key. We couldn't link our HTML files with the stylesheet and the JavaScript file. We didn't know how to add drag-and-drop functionality. We couldn't figure out how to use the API in our backend. We also had to edit the video with a new video editing app. We had to watch a lot of tutorials.
## Accomplishments that we're proud of
The whole program works (backend and frontend). We're glad that we'll be able to make a change to the world!
## What we learned
We learned that Bootstrap 5 doesn't use jQuery anymore (the hard way). :'(
## What's next for Scanspect
The drag-and-drop function for uploading images!
|
losing
|
### 💡 Inspiration 💡
We call them heroes, **but the support we give them is equal to that of a slave.**
Because of the COVID-19 pandemic, a lot of medics have to keep track of their patient's history, symptoms, and possible diseases. However, we've talked with a lot of medics, and almost all of them share the same problem when tracking the patients: **Their software is either clunky and bad for productivity, or too expensive to use on a bigger scale**. Most of the time, there is a lot of unnecessary management that needs to be done to get a patient on the record.
Moreover, the software can even get the clinician so tired they **have a risk of burnout, which makes their disease predictions even worse the more they work**, and with the average computer-assisted interview lasting more than 20 minutes and a medic having more than 30 patients on average a day, the risk is even worse. That's where we introduce **My MedicAid**. With our AI-assisted patient tracker, we reduce this time frame from 20 minutes to **only 5 minutes.** This platform is easy to use and focused on giving the medics the **ultimate productivity tool for patient tracking.**
### ❓ What it does ❓
My MedicAid gets rid of all of the unnecessary management that is unfortunately common in the medical software industry. With My MedicAid, medics can track their patients by different categories and even get help for their disease predictions **using an AI-assisted engine to guide them towards the urgency of the symptoms and the probable dangers that the patient is exposed to.** With all of the enhancements and our platform being easy to use, we give the user (medic) a 50-75% productivity enhancement compared to the older, expensive, and clunky patient tracking software.
### 🏗️ How we built it 🏗️
The patient's symptoms get tracked through an **AI-assisted symptom checker**, which uses [APIMedic](https://apimedic.com/i) to process all of the symptoms and quickly return the danger of them and any probable diseases to help the medic take a decision quickly without having to ask for the symptoms by themselves. This completely removes the process of having to ask the patient how they feel and speeds up the process for the medic to predict what disease their patient might have since they already have some possible diseases that were returned by the API. We used Tailwind CSS and Next JS for the Frontend, MongoDB for the patient tracking database, and Express JS for the Backend.
### 🚧 Challenges we ran into 🚧
We had never used APIMedic before, so going through their documentation and getting to implement it was one of the biggest challenges. However, we're happy that we now have experience with more 3rd party APIs, and this API is of great use, especially with this project. Integrating the backend and frontend was another one of the challenges.
### ✅ Accomplishments that we're proud of ✅
The accomplishment that we're the proudest of would probably be the fact that we got the management system and the 3rd party API working correctly. This opens the door to work further on this project in the future and get to fully deploy it to tackle its main objective, especially since this is of great importance in the pandemic, where a lot of patient management needs to be done.
### 🙋♂️ What we learned 🙋♂️
We learned a lot about CRUD APIs and the usage of 3rd party APIs in personal projects. We also learned a lot about the field of medical software by talking to medics in the field who have way more experience than us. However, we hope that this tool helps them in their productivity and to remove their burnout, which is something critical, especially in this pandemic.
### 💭 What's next for My MedicAid 💭
We plan on implementing an NLP-based service to make it easier for the medics to just type what the patient is feeling as a text prompt, and detect the possible diseases **just from that prompt.** We also plan on implementing a private 1-on-1 chat between the patient and the medic to resolve any complaints that the patient might have, and for the medic to use if they need more info from the patient.
|
## Inspiration
The need for faster and more reliable emergency communication in remote areas inspired the creation of FRED (Fire & Rescue Emergency Dispatch). Whether due to natural disasters, accidents in isolated locations, or a lack of cellular network coverage, emergencies in remote areas often result in delayed response times and first-responders rarely getting the full picture of the emergency at hand. We wanted to bridge this gap by leveraging cutting-edge satellite communication technology to create a reliable, individualized, and automated emergency dispatch system. Our goal was to create a tool that could enhance the quality of information transmitted between users and emergency responders, ensuring swift, better informed rescue operations on a case-by-case basis.
## What it does
FRED is an innovative emergency response system designed for remote areas with limited or no cellular coverage. Using satellite capabilities, an agentic system, and a basic chain of thought FRED allows users to call for help from virtually any location. What sets FRED apart is its ability to transmit critical data to emergency responders, including GPS coordinates, detailed captions of the images taken at the site of the emergency, and voice recordings of the situation. Once this information is collected, the system processes it to help responders assess the situation quickly. FRED streamlines emergency communication in situations where every second matters, offering precise, real-time data that can save lives.
## How we built it
FRED is composed of four main components: a mobile application, a transmitter, a backend data processing system, and a dispatcher frontend.
1. Mobile Application: The mobile app is designed to be lightweight and user-friendly. It collects critical data from the user, including their GPS location, images of the scene, and voice recordings.
2. Transmitter: The app sends this data to the transmitter, which consists of a Raspberry Pi integrated with Skylo’s Satellite/Cellular combo board. The Raspberry Pi performs some local data processing, such as image transcription, to optimize the data size before sending it to the backend. This minimizes the amount of data transmitted via satellite, allowing for faster communication.
3. Backend: The backend receives the data, performs further processing using a multi-agent system, and routes it to the appropriate emergency responders. The backend system is designed to handle multiple inputs and prioritize critical situations, ensuring responders get the information they need without delay.
4. Frontend: We built a simple front-end to display the dispatch notifications as well as the source of the SOS message on a live-map feed.
## Challenges we ran into
One major challenge was managing image data transmission via satellite. Initially, we underestimated the limitations on data size, which led to our satellite server rejecting the images. Since transmitting images was essential to our product, we needed a quick and efficient solution. To overcome this, we implemented a lightweight machine learning model on the Raspberry Pi that transcribes the images into text descriptions. This drastically reduced the data size while still conveying critical visual information to emergency responders. This solution enabled us to meet satellite data constraints and ensure the smooth transmission of essential data.
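As a sketch of that on-device captioning step: the write-up doesn't name the model, so the BLIP checkpoint below is our assumption, shown via the Hugging Face `transformers` pipeline.

```python
from transformers import pipeline

# An image-captioning pipeline small enough to consider for edge hardware.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe_scene(image_path: str) -> str:
    """Replace a multi-megabyte photo with a few hundred bytes of text."""
    return captioner(image_path)[0]["generated_text"]

# e.g. describe_scene("emergency_scene.jpg") -> "a car overturned on a dirt road"
```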
## Accomplishments that we’re proud of
We are proud of how our team successfully integrated several complex components—mobile application, hardware, and AI powered backend—into a functional product. Seeing the workflow from data collection to emergency dispatch in action was a gratifying moment for all of us. Each part of the project could stand alone, showcasing the rapid pace and scalability of our development process. Most importantly, we are proud to have built a tool that has the potential to save lives in real-world emergency scenarios, fulfilling our goal of using technology to make a positive impact.
## What we learned
Throughout the development of FRED, we gained valuable experience working with the Raspberry Pi and integrating hardware with the power of Large Language Models to build advanced IOT system. We also learned about the importance of optimizing data transmission in systems with hardware and bandwidth constraints, especially in critical applications like emergency services. Moreover, this project highlighted the power of building modular systems that function independently, akin to a microservice architecture. This approach allowed us to test each component separately and ensure that the system as a whole worked seamlessly.
## What’s next for FRED
Looking ahead, we plan to refine the image transmission process and improve the accuracy and efficiency of our data processing. Our immediate goal is to ensure that image data is captioned with more technical details and that transmission is seamless and reliable, overcoming the constraints we faced during development. In the long term, we aim to connect FRED directly to local emergency departments, allowing us to test the system in real-world scenarios. By establishing communication channels between FRED and official emergency dispatch systems, we can ensure that our product delivers its intended value—saving lives in critical situations.
|
## Inspiration
The inspiration behind Medisync came from observing the extensive time patients and medical staff spend on filling out and processing medical forms. We noticed a significant delay in treatment initiation due to this paperwork. Our goal was to streamline this process, making healthcare more efficient and accessible by leveraging the power of AI. We envisioned a solution that not only saves time but also minimizes errors in patient data, leading to better patient outcomes.
## What it does
Medisync uses AI algorithms to automate the process of filling out medical forms. Patients can speak or type their information into the app, which then intelligently categorizes and inputs the data into the necessary forms. The patient's data will be continually updated with future questions as their medical history progresses. All data will be securely stored on the user's local machine. The user will be able to quickly and securely input their medical data into forms with different formats from different institutions. This results in a faster, more efficient onboarding process for patients.
## How we built it
We built Medisync using Natural Language Processing (NLP). Our development stack includes Python for backend development. We designed a user-friendly interface that simplifies the data entry process. We downloaded common medical forms from the internet, scraped them for their fields, and then populated each form via calls to OpenAI's API, outputting the final result as a Markdown (.md) file.
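For illustration, a minimal sketch of the form-population step using the OpenAI Python SDK; the model name, prompt, and file layout are our own choices, not necessarily the team's.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fill_form(form_fields: list[str], patient_record: str) -> str:
    """Ask the model to populate each scraped field from the patient record."""
    prompt = (
        "Fill in each field using only the patient record below. "
        "If a field is not covered by the record, write 'UNKNOWN'.\n\n"
        f"Fields: {form_fields}\n\nRecord:\n{patient_record}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

with open("filled_form.md", "w") as f:
    f.write(fill_form(["Name", "Allergies"], "Jane Doe, allergic to penicillin."))
```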
## Challenges we ran into
Some challenges we faced were parsing the forms correctly and breaking down the questions into the relevant health categories. The program's output depended on how well it understood the data available to fill the forms, so generating in-depth questions was a challenge we had to overcome. We also had to understand how our project could be HIPAA compliant so that it could be released to end users. Medical data is highly sensitive and personal, and there are many privacy laws to protect individuals. Going forward, we have a detailed plan for making our service HIPAA compliant.
## Accomplishments that we're proud of
We are proud of developing a functional prototype that demonstrates a significant reduction in time spent on medical paperwork. Our pilot tests on random users showed a 70% decrease in patient onboarding time, and we received positive feedback from patients. Furthermore, there is a huge issue surrounding human error in these forms: as they are repetitive, long-form tasks, it is easy for people to make a mistake. We are very proud to have made a product that makes patients safer and healthier by reducing errors in medical forms.
## What we learned
Throughout this project, we learned the importance of interdisciplinary collaboration, combining expertise in AI, software development, and healthcare to address a common challenge. We gained insights into the complexities of healthcare regulations and the critical role of data privacy. This project also honed our skills in AI and ML, particularly in applying NLP techniques to real-world problems. Overall, we learned the importance of attacking a complex problem from many angles to create a solution in a time-efficient manner, combining new and old skills in a highly effective way.
## What's next for Medisync
Moving forward, we plan to make sure that Medisync fully meets HIPAA compliance and to test our service with many more users. We are also exploring partnerships with hospitals and healthcare systems to integrate our solution into their existing workflows. Additionally, we aim to incorporate AI-driven analytics to provide healthcare providers with insights into patient data, further enhancing the quality of care and allowing our service to fill out more forms, faster. We also hope to improve the user experience and workflow of inputting user data by highlighting information missing from the forms users have filled out. In this way, we will make sure that we gather the right data in the fewest questions, and thus in the most efficient way for our end user.
|
winning
|
## Inspiration
We were inspired to reduce the amount of time it takes to seek medical attention. By directing patients immediately to a doctor specific to their needs, one may reduce the wait time commonly associated with seeking medical aid.
## What it does
Destination Doc asks users how they are feeling, at which point it determines what type of doctor the patient needs (by screening for flagged words). It then searches a 10 km radius for establishments such as dentist offices, walk-in clinics, physiotherapy centers, or other need-specific locations. Using Microsoft's Bing Maps API, Destination Doc determines which destination is the shortest time away using real-time traffic. A map is then displayed directing the user from their home location to the optimal medical center.
## How we built it
We built the application front end using Angular and the backend with Flask. We incorporated the Cisco Meraki and Twilio APIs, along with Azure.
## Challenges we ran into
Our biggest challenge was putting all the different components together as well as doing a lot within a short time constraint.
## Accomplishments that we're proud of
We're proud to take steps toward a more efficient wait-time service, and to aid the cause of better health and safety.
## What we learned
We learned how to leverage the functionality of AngularJS to create a responsive front-end page. We also learned how to use REST API HTTP GET and POST requests to communicate between the front end and the backend.
## What's next for Destination Doc
We plan to develop Destination Doc to the point where anyone can enter their needs and find the best place to get help.
|
# Doctors Within Borders
### A crowdsourcing app that improves first response time to emergencies by connecting city 911 dispatchers with certified civilians
## 1. The Challenge
In Toronto, ambulances get to the patient in 9 minutes 90% of the time. We all know
that the first few minutes after an emergency occurs are critical, and the difference of
just a few minutes could mean the difference between life and death.
Doctors Within Borders aims to get the closest responder within 5 minutes of
the patient to arrive on scene so as to give the patient the help needed earlier.
## 2. Main Features
### a. Web view: The Dispatcher
The dispatcher takes down information about an ongoing emergency from a 911 call, and dispatches a Doctor with the help of our dashboard.
### b. Mobile view: The Doctor
A Doctor is a certified individual who is registered with Doctors Within Borders. Each Doctor is identified by their unique code.
The Doctor can choose when they are on duty.
On-duty Doctors are notified whenever a new emergency occurs that is both within a reasonable distance and the Doctor's certified skill level.
## 3. The Technology
The app uses *Flask* to run a server, which communicates between the web app and the mobile app. The server supports an API which is used by the web and mobile app to get information on doctor positions, identify emergencies, and dispatch doctors. The web app was created in *Angular 2* with *Bootstrap 4*. The mobile app was created with *Ionic 3*.
Created by Asic Chen, Christine KC Cheng, Andrey Boris Khesin and Dmitry Ten.
|
## Inspiration
The current COVID-19 scenario has been our greatest inspiration. Our team made solving the problems patients face our primary concern, and so we moved forward with this application, which can help reduce the troubles patients experience.
## What it does
* We have made a platform where patients and Doctors can register themselves. A patient can log in to their profile to request an appointment with any registered Doctor. As soon as the Doctor logs in, they will see the list of appointments on their Dashboard.
* The Doctor can right away make a video call to the patient. We have made a status tag for every appointment: the status is yellow as long as the Doctor has not given feedback (a list of required medicines and checkups), and it becomes green once the Doctor's feedback is returned. The feedback from the Doctor is displayed on the patient's profile along with the appointment history.
* We have provided more features to assist and facilitate the Doctor as well as the patient. The Doctor will be given an option to send the list of checkups and medicines which patients need to get done. The Doctor will also be given an option to reappoint the patient after a given time.
* The patient, on the other hand, is given an option to upload images of the reports of the tests and scans requested by the Doctor. These reports are displayed to the Doctor along with the appointment request. We have added a Deep Learning model at an intermediate layer between the patient and the Doctor.
* The model will be used to do an analysis of the reports/scans uploaded by the patients the model will predict the contingencies and in case of any severe disease, it will alert the doctor through the mail.
## How I built it
* The front end is built with HTML, CSS, Bootstrap & JavaScript, while the backend uses the Django framework.
* Data is stored in a Google Cloud SQL database (a sketch of the appointment status flow follows this list).
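As a rough illustration of the yellow/green status flow described above, here is a minimal Django model sketch; the model and field names are our own assumptions, not the project's actual schema.

```python
from django.db import models


class Appointment(models.Model):
    """Hypothetical model for a patient's appointment request."""

    PENDING = "yellow"   # Doctor has not yet returned feedback
    COMPLETE = "green"   # Doctor's feedback (medicines/checkups) is in
    STATUS_CHOICES = [(PENDING, "Pending"), (COMPLETE, "Complete")]

    patient_name = models.CharField(max_length=100)
    doctor_name = models.CharField(max_length=100)
    report_image = models.ImageField(upload_to="reports/", blank=True)  # uploaded scans
    feedback = models.TextField(blank=True)  # medicines and checkups from the Doctor
    status = models.CharField(max_length=10, choices=STATUS_CHOICES, default=PENDING)
    created_at = models.DateTimeField(auto_now_add=True)

    def give_feedback(self, feedback_text: str) -> None:
        """Doctor returns feedback; the status tag flips from yellow to green."""
        self.feedback = feedback_text
        self.status = self.COMPLETE
        self.save()
```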
## Challenges I ran into
* We faced some severe challenges while deploying the project, as it was initially difficult to deploy a full-stack app on Google Cloud.
* It was difficult to find data to train the Deep Learning model we host on Google Cloud. GCP was new for our team, so it took us time to get up to speed with it, and uploading initially caused errors.
* We overcame each and every error through persistent attempts. We read and understood the Auth0 documentation to resolve all errors arising from it. For the Deep Learning model, we finally found data on Kaggle and implemented it using Google Cloud Vision AI.
## Accomplishments that I'm proud of
My team and I are proud to have built a well-working project in very little time. We are proud to have dealt with a real medical issue that everyone is facing, and we are glad to have coordinated with each other to give our best for the betterment of society.
## What I learned
We learned the importance of good management in a team, how to deploy on an entirely new platform, and about the social causes and the problems patients face in day-to-day life.
## What's next for Virtual Hospital
We now look forward to making predictions with the help of the data we collect through this application. We want to use predictive Machine Learning and Deep Learning models to make predictions for the user, which can help doctors give more care to a particular patient.
|
winning
|
## Inspiration 💡
**Due to rising real estate prices, many students are failing to find proper housing, and many landlords are failing to find good tenants**. Students looking for houses often have to hire some agent to get a nice place with a decent landlord. The same goes for house owners who need to hire agents to get good tenants. *The irony is that the agent is totally motivated by sheer commission and not by the wellbeing of any of the above two.*
Lack of communication is another issue as most of the things are conveyed by a middle person. It often leads to miscommunication between the house owner and the tenant, as they interpret the same rent agreement differently.
Expensive and time-consuming background checks of potential tenants are also prevalent, as landowners try to use every tool at their disposal to know if the person is really capable of paying rent on time, etc. Considering that current rent laws give tenants considerable power, it's very reasonable for landlords to perform background checks!
Existing online platforms can help us know which apartments are vacant in a locality, but they don't help either party know if the other person is really good! Tenants don't trust their ranking algorithms, and landlords are reluctant to use these services as they need to manually review applications from thousands of unverified individuals or even bots!
We observed that we are still using these age-old, non-scalable methods to match home seekers and homeowners willing to rent out their place in this digital world! And we wish to change that with **RentEasy!**

## What it does 🤔
In this hackathon, we built a cross-platform mobile app that both potential tenants and house owners can trust.
The app implements a *rating system* where students/tenants can rate a house/landlord (e.g., did not return the security deposit for no reason), & landlords can rate tenants (e.g., the house was not kept clean). In this way, clean tenants and honest landlords can meet each other.
This platform also helps the two stakeholders build an easily understandable contract that will establish better trust and mutual harmony. The contract is stored on the InterPlanetary File System (IPFS) and cannot be tampered with by anyone.

Our application also has an end-to-end encrypted chatting module powered by @ Company. The landlords can filter through all the requests and send requests to tenants. This chatting module powers our contract generator module, where the two parties can discuss a particular agreement clause and decide whether to include it or not in the final contract.
## How we built it ️⚙️
Our beautiful and elegant mobile application was built using Flutter, a cross-platform framework.
We integrated the Google Maps SDK to build a map where users can explore all the listings, and used the Geocoding API to encode addresses into geopoints.
We wanted our clients to have a sleek experience with minimal overhead, so we offloaded all network-heavy and resource-intensive tasks to Firebase Cloud Functions. Our application also has a dedicated **end to end encrypted** chatting module powered by the **@-Company** SDK. The contract generator module is built with best practices in mind, and users can use it to make a contract after thorough private discussions. Once both parties are satisfied, we create the contract in PDF format and use the Infura API to upload it to IPFS via the official [Filecoin gateway](https://www.ipfs.io/ipfs)
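For reference, a minimal sketch of an Infura IPFS upload is below. The credentials are placeholders, and the project itself performed this step from a Firebase Cloud Function, so treat this as an illustration of the API call rather than the project's actual code.

```python
import requests

# Assumed Infura IPFS endpoint with placeholder credentials.
INFURA_ADD_URL = "https://ipfs.infura.io:5001/api/v0/add"
PROJECT_ID = "YOUR_INFURA_PROJECT_ID"
PROJECT_SECRET = "YOUR_INFURA_PROJECT_SECRET"

def pin_contract_to_ipfs(pdf_path: str) -> str:
    """Upload the rent agreement PDF to IPFS and return its content hash (CID)."""
    with open(pdf_path, "rb") as pdf:
        response = requests.post(
            INFURA_ADD_URL,
            files={"file": pdf},
            auth=(PROJECT_ID, PROJECT_SECRET),
            timeout=30,
        )
    response.raise_for_status()
    cid = response.json()["Hash"]
    # Anyone can now fetch the tamper-proof contract from a public gateway.
    print(f"Contract pinned: https://ipfs.io/ipfs/{cid}")
    return cid

if __name__ == "__main__":
    pin_contract_to_ipfs("rent_agreement.pdf")
```

Because IPFS addresses content by its hash, any change to the PDF would produce a different CID, which is what makes the stored agreement tamper-evident.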

## Challenges we ran into 🧱
1. It was the first time we were trying to integrate the **@-Company SDK** into our project. Although the SDK simplifies end-to-end encryption, we still had to explore a lot of resources and ask representatives for assistance to get the final working build. It was very gruelling at first, but in the end, we are all really proud of having a dedicated end-to-end messaging module on our platform.
2. We used Firebase Functions to build scalable serverless functions and used Express.js as a framework for convenience. Things were working fine locally, but our middleware functions like multer, urlencoder, and jsonencoder weren't working on the server. It took us more than 4 hours to discover that "Firebase performs a lot of implicit parsing", and before these middleware functions get the data, Firebase has already consumed it. As a result, we had to write the low-level encoding logic ourselves! After deploying it, the sense of satisfaction we got was immense, and now we appreciate the millions of open source packages more than ever.
## Accomplishments that we're proud of ✨
We are proud of finishing the project on time which seemed like a tough task as we started working on it quite late due to other commitments and were also able to add most of the features that we envisioned for the app during ideation. Moreover, we learned a lot about new web technologies and libraries that we could incorporate into our project to meet our unique needs. We also learned how to maintain great communication among all teammates. Each of us felt like a great addition to the team. From the backend, frontend, research, and design, we are proud of the great number of things we have built within 36 hours. And as always, working overnight was pretty fun! :)
---
## Design 🎨
We were heavily inspired by the revised version of the **Iterative** design process, which includes not only visual design but a full-fledged research cycle in which you must discover and define your problem before tackling your solution and finally deploying it.

This time we went for a minimalist **Material UI** design. We utilized design tools like Figma, Photoshop & Illustrator to prototype our designs before doing any coding. Through this, we were able to get iterative feedback and spend less time re-writing code.

---
# Research 📚
Research is the key to empathizing with users: we found our specific user group early, and that paved the way for our whole project. Here are a few of the resources that were helpful to us —
* Legal Validity Of A Rent Agreement : <https://bit.ly/3vCcZfO>
* 2020-21 Top Ten Issues Affecting Real Estate : <https://bit.ly/2XF7YXc>
* Landlord and Tenant Causes of Action: "When Things go Wrong" : <https://bit.ly/3BemMtA>
* Landlord-Tenant Law : <https://bit.ly/3ptwmGR>
* Landlord-tenant disputes arbitrable when not covered by rent control : <https://bit.ly/2Zrpf7d>
* What Happens If One Party Fails To Honour Sale Agreement? : <https://bit.ly/3nr86ST>
* When Can a Buyer Terminate a Contract in Real Estate? : <https://bit.ly/3vDexWO>
**CREDITS**
* Design Resources : Freepik, Behance
* Icons : Icons8
* Font : Semibold / Montserrat / Roboto / Recoleta
---
# Takeways
## What we learned 🙌
**Sleep is very important!** 🤐 Well, jokes apart, this was an introduction to **Web3** & **Blockchain** technologies for some of us and an introduction to mobile app development for others. We managed to improve our teamwork by actively discussing how we planned to build the app and how to make the best use of our time. We learned a lot about the atsign API and end-to-end encryption and how it works in the backend. We also practiced utilizing cloud functions to automate and ease the development process.
## What's next for RentEasy 🚀
**We would like to make it a default standard of the housing market** and consider all the legal aspects too! It would be great to see the rental application system become more organized in the future. We are planning to implement additional features such as a landlord's view, where landlords can go through the applicants and filter them, giving them more options. Furthermore, we are planning to launch it near university campuses, since this is where people with the least housing experience live. Since the framework we used can target any operating system, it gives us the flexibility to test and learn.
**Note** — **API credentials have been revoked. If you want to run the same on your local, use your own credentials.**
|
## Inspiration
Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females. On average, women make 88 cents for every dollar a man makes in Ontario. This is why it is important to encourage women to become more conscious of their spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users.
## What it does
Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the financial balancing act of the user. By balancing the scale and achieving their monthly financial goals, users will be provided with various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app re-enforces good financial behaviour by providing gamified experiences with small incentives.
The app will be provided to users free of charge. As with any free service, anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and spending from our users for participating partners. The customized reward is an opportunity for targeted advertising.
## Persona
Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising cost of living makes it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards just for achieving budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provides a simple balancing-scale animation for immediate visual feedback on her financial well-being. The app frequently provides words of encouragement and useful tips to maximize her chance of success. She especially loves how she can set goals and follow through on her own terms. The personalized rewards were sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards.
## How we built it
We used: React, Node.js, Firebase, HTML & Figma
## Challenges we ran into
* We had a number of ideas but struggled to define the scope and topic for the project.
* Different design philosophies made it difficult to maintain a consistent and cohesive design.
* Sharing resources was another difficulty due to the digital nature of this hackathon.
* On the development side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it into our app.
* Additionally, resolving merge conflicts proved to be more difficult than expected. The time constraint was also a challenge.
## Accomplishments that we're proud of
* Working with technologies that were new to us, including Firebase and React Hooks
* On the design side, it was great to create a complete prototype of the vision for the app.
* This being some members' first hackathon, the time constraint was a stressor, but with the support of the team they were able to feel more comfortable despite the lack of time
## What we learned
* We learned how to meet each other’s needs in a virtual space
* The designers learned how to merge design philosophies
* How to manage time and work with others who are on different schedules
## What's next for Re:skale
Re:skale can be rescaled to include people of all genders and ages.
* More close integration with other financial institutions and credit card providers for better automation and prediction
* Physical receipt scanner feature for non-debt and credit payments
## Try our product
This is the link to a prototype app
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1>
This is a link for a prototype website
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have much to start with. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication and a cloud database to keep track of users. For user and transaction data, as well as making/managing loyalty points, we used the RBC API.
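A minimal sketch of how weekly-goal generation like this can look, assuming the Cohere Python SDK's classic `generate` endpoint; the API key, prompt wording, and parameters are placeholders, not the project's actual code.

```python
import cohere

# Placeholder API key; the prompt and parameters are illustrative only.
co = cohere.Client("YOUR_COHERE_API_KEY")

def weekly_goal(habit: str, weekly_spend: float) -> str:
    """Ask Cohere for one concrete, encouraging goal for a chosen spending 'demon'."""
    prompt = (
        "You are a friendly personal-finance coach for students.\n"
        f"The user spends about ${weekly_spend:.2f} per week on {habit}.\n"
        "Write one specific, encouraging goal for next week:"
    )
    response = co.generate(prompt=prompt, max_tokens=60, temperature=0.7)
    return response.generations[0].text.strip()

print(weekly_goal("food delivery", 85.0))
```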
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
winning
|
## Behind the site
"Complicated" is a word many people use to describe politics. In a world filled with biased sources and a plethora of information, it's hard to find good information on your local representatives. We aim to make the researching process easier for voters by matching them with their local political candidates based on their views. Users take a short quiz and are matched with local politicians. They are also given the opportunity to learn more about their top choice directly through our website. We hope that in taking this quiz, the user would gain awareness to both his or her viewpoints and to their representatives, encouraging active civic engagement without intimidating entry barriers.
## How we built it
In building this hack, we used the Wix Code editor, implementing our project with algorithms and databases we created to find the perfect match.
We're all just beginners, but it was fun to play around and add our own customized features to the website!
|
## Inspiration
Our team wanted to develop an application that makes it easier for ordinary people to access information on their representatives and the policies they are pursuing. Our team initially set out to create a project that would bring all the necessary information to the citizen in one place. After brainstorming, our idea evolved into informing in a much more impartial and objective way. Unlike choosing news sources, facts regarding campaign finance and historical legislative practices can't be disputed. By presenting information about major donors and the bills that representatives sponsor, it is easier for individuals to discern whether their representation is working for them or for those funding their campaigns. In the past, unless you were willing to do deliberate and intensive research, there was no realistic way of coming across these types of statistics on your own.
## What it does
First, it asks the user for their address, which is used to present them with their U.S. Senators and House Representative. The user can then select any of them and will be taken to a new page. The final page currently displays the name of the representative and some links for continued exploration of donations and recent bills. The intention, for which a good amount of functionality has already been built (check out some of the unused functions in the code for access to recent bills and top donors/donor industries), is to use the OpenSecrets API to display the top donors to the selected politician, paired with bills the politician has sponsored or co-sponsored related to these donors, using the content classification functionality of Google Cloud's NLP API. This would highlight how major campaign donations may influence the policy decisions your representatives are making.
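As a hedged sketch of the classification step, here is how a bill summary might be run through Google Cloud's NLP content classifier so its categories can be compared with a donor's industry; the sample text and matching logic are hypothetical.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def classify_bill(bill_summary: str):
    """Return Google Cloud NLP content categories with confidence scores."""
    document = language_v1.Document(
        content=bill_summary, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.classify_text(document=document)
    return [(category.name, category.confidence) for category in response.categories]

# Hypothetical bill summary; classify_text needs a reasonably long input.
print(classify_bill(
    "A bill to amend federal energy regulations and to provide tax incentives "
    "for domestic oil and natural gas exploration, production, and pipeline "
    "infrastructure development across the United States."
))
```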
## How we built it
We used Expo for development due to the appeal of cross-platform development. We then used online APIs and other public data sets to access information on policymakers, bills, and campaign donors. Most notably, we used OpenSecrets's API and Phone2Action's API.
## Challenges we ran into
We all had very little experience with JavaScript, so it was quite a learning experience. Additionally, we came up with the idea very late and really had to work fast. In the end it is still in development, as we were unable to bring it all together. Each part worked separately, but when they came together they fell apart, e.g., displaying the bills of each politician.
## Accomplishments that we're proud of
A functional application that works on both iOS and Android devices and is able to accurately and quickly tell a user who their representatives are, with a lot of the framework in place for the next steps in development. We are proud that in a short amount of time we could make something, learn, and have a good time.
## What we learned
Expo, React Native, JavaScript, web development tools
## What's next for War Chest
Implementation of the NLP Analysis of recent bills and pairing with top donors.
|
## Inspiration
This project was inspired by the problems faced by users with medical constraints. Some users have mobility limitations that make it difficult for them to leave their homes. PrescriptionCare allows them to order prescriptions online and have them delivered to their homes on a monthly or weekly basis.
## What it does
PrescriptionCare is a web app that allows users to order medical prescriptions online. Users can fill out a form and upload an image of their prescription in order to have their medication delivered to their homes. The app has a monthly subscription feature, since most medication is renewed after 30 days, but it also allows users to sign up for weekly subscriptions.
## How we built it
We designed PrescriptionCare using Figma, and built it using Wix.
## Challenges we ran into
We ran into several challenges, mainly due to our inexperience with hackathons and the programs and languages we used along the way. Initially, we wanted to create the website using HTML, CSS, and JavaScript, however, we didn't end up going down that path, as it ended up being a bit too complicated because we were all beginners. We ended up choosing to use Wix due to its ease of use and excellent template selection to give us a solid base to build PrescriptionCare off of. We also ran into issues with an iOS app we tried to develop to complement the website, mainly due to learning Swift and SwiftUI, which is not very beginner friendly.
## Accomplishments that we're proud of
Managing to create a website in just a few hours and being able to work with a great team. Some of the members of this team also had to learn new software in just a few hours, which was a challenge, but this experience is a good one and we'll be much more prepared for our next hackathon.
We are proud to have experimented with two new tools thanks to this application. We were able to draft a website through Wix and create an app through Xcode and SwiftUI. Another accomplishment is that our team consists of first-time hackers, so we are proud to have started the journey of hacking and cannot wait to see what is waiting for us in the future.
## What we learned
We learned how to use the Wix website builder for the first time and also how to collaborate as a team. We didn't really know each other and happened to meet at the competition, and we will probably work together at another hackathon in the future.
We learned a positive mindset is another important asset to bring into a hackathon. At first, we felt intimidated by hackathons, but we are thankful to have learned that hackathons can be fun and a priceless learning experience.
## What's next for PrescriptionCare
It would be nice to be able to create a mobile app so that users can get updates and notifications when their medication arrives. We could create a tracking system that keeps track of the medication you take and estimates when the user finishes their medication.
PrescriptionCare will continue to expand and develop its services to reach a wider audience. We hope to bring more medication and subscription plans for post-secondary students who live away from home, at-home caretakers, and more, and we aim to bring access to medicine to everyone. Our next goal is to continue developing our website and mobile app (both Android and iOS), as well as collect data on pharmaceutical drugs and their usage. We hope to make our app a more diverse and inclusive app, with a wide variety of medication and delivery methods.
|
losing
|
## Inspiration
Music is something inherently personal to each and every one of us - our favorite tracks accompany us through our highs and lows, through tough workouts and relaxing evenings. Our aim is to encourage and capture that feeling of discovering that new song you just can't stop listening to. Music is an authentic expression of ourselves, and the perfect way to [meet new people](https://www.lovethispic.com/uploaded_images/206094-When-You-Meet-Someone-With-The-Same-Music-Taste-As-You.jpg) without the clichés of the typical social media platforms we're all sick of. We're both very passionate about reviving the soul of social media, so we were very excited to hear about this track and work on this project!
## What it does
Spotify keeps tabs on the tracks you can't get enough of. Why not make that data work *for you*? With one simple login, ensemble matches you with others who share your musical ear. Using our *state-of-the-art* machine learning algorithms, we show you other users who we think you'd like based on both their and your music taste. Love their tracks? Follow them and stay tuned. ensemble is a new way to truly connect on a meaningful level in an age of countless unoriginal social media platforms.
## How we built it
We wanted a robust application that could handle the complexities of a social network, while also providing us with an extensive toolkit to build out all the features we envisioned. Our frontend is built using [React](https://reactjs.org), a powerful and well-supported web framework that gives us the flexibility to build with ease. We utilized supporting frontend technologies like Bootstrap, HTML, and CSS to help create an attractive UI, the key aspect of any social media. For the backend, we used [Django](https://www.djangoproject.com) and [Django Rest Framework](https://www.django-rest-framework.org) to build a secure API that our frontend can easily interact with. For our recommendation algorithm, we used scikit-learn and numpy to power our machine learning needs. Finally, we used PostgreSQL for our DBMS and Heroku for deployment.
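As a rough illustration of how matching users by music taste can work with scikit-learn, here is a minimal sketch; the feature vectors, user names, and similarity choice are our assumptions, not the project's actual algorithm.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical data: each user is summarized by the mean audio-feature vector
# (e.g. danceability, energy, valence) of their Spotify top tracks.
users = ["alice", "bob", "carol"]
profiles = np.array([
    [0.71, 0.80, 0.55],
    [0.68, 0.77, 0.60],
    [0.20, 0.30, 0.15],
])

def recommend(user: str, k: int = 2):
    """Rank other users by cosine similarity of taste profiles."""
    i = users.index(user)
    scores = cosine_similarity(profiles[i : i + 1], profiles)[0]
    ranked = sorted(
        ((users[j], float(scores[j])) for j in range(len(users)) if j != i),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:k]

print(recommend("alice"))  # bob should rank above carol
```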
## Challenges we ran into
As with most social media platforms, users are key. Given the *very short* nature of hackathons, it obviously isn't feasible to attract a large number of users for development purposes. However, we needed a way to have users available for testing. Since ensemble is based on Spotify accounts and the Spotify API, this proved to be non-trivial. We took advantage of the Spotify API's recommendations endpoint to generate pseudo-data that resembles what a real person would have as their top tracks. With a fake name generator, we created as many fake profiles as we needed to flesh out our recommendation algorithm.
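A minimal sketch of generating such pseudo-data from Spotify's recommendations endpoint might look like the following; the access token and seed genres are placeholders.

```python
import requests

# Assumes a valid OAuth access token obtained elsewhere.
ACCESS_TOKEN = "YOUR_SPOTIFY_ACCESS_TOKEN"

def fake_top_tracks(seed_genres=("indie", "pop"), limit=20):
    """Fetch recommended tracks to stand in for a fake user's top tracks."""
    response = requests.get(
        "https://api.spotify.com/v1/recommendations",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"seed_genres": ",".join(seed_genres), "limit": limit},
        timeout=10,
    )
    response.raise_for_status()
    # Keep just what a profile needs: track name, lead artist, and track id.
    return [
        (track["name"], track["artists"][0]["name"], track["id"])
        for track in response.json()["tracks"]
    ]

for name, artist, _ in fake_top_tracks():
    print(f"{name} by {artist}")
```

Pairing each list with a fake name then yields as many realistic-looking profiles as the recommendation algorithm needs for testing.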
## Accomplishments that we're proud of
Our application is fully ready to use—it has all of the necessary authentication, authorization, and persistent storage. While we'd love to add even more features, we focused on implementing the core ones in their current state (if you use Spotify, feel free to log in and try it out!). You can find the live version [here](https://ensemble-dev.herokuapp.com). Despite all of the hassle of the deployment process, it was very fulfilling to see what we created, live and ready to be used by anyone in the world.
We're also proud of what we've accomplished in general! It's been a challenging yet immensely fulfilling day-and-a-half of ideation, design, and coding. Looking back at what we were able to create during this short time span, we're proud to have something to show for all the effort we've put into it.
## What we learned
We both learned a lot from working on this project. It's been a fast-paced weekend of continuously pushing new changes and features, and in doing so, we sharpened our skills in both React and Django. Additionally, utilizing the Spotify API was something neither of us had done before, and we learned a lot about OAuth 2.0 and web authentication in general.
## What's next for ensemble
Working on this project was a lot of fun, and we'd both love to keep it going in the future. There are a ton of features that we thought out but didn't have the time to implement in this time span. For example, we'd love to implement a direct messaging system, so you can directly contact and discuss your favorite songs/artists with the people you follow. The GitHub repository readme also contains complete and detailed instructions on how to set up your development environment to run the code, if anyone is interested in trying it out. Thanks for reading!
|
## Inspiration
Music is a universal language, and we recognized Spotify Wrapped to be one of the most anticipated times of the year. Realizing that people have an interest in learning about their own music taste, we created ***verses*** to not only allow people to quiz themselves on their musical interests, but also quiz their friends to see who knows them best.
## What it does
A quiz that challenges you to answer questions about your Spotify listening habits, allowing you to share it with friends and have them guess your top songs/artists by answering questions. It creates a leaderboard of your friends who have taken the quiz, ranking them by the scores they obtained on your quiz.
## How we built it
We built the project using react.js, HTML, and CSS. We used the Spotify API to get data on the user's listening history, top songs, and top artists as well as enable the user to log into ***verses*** with their Spotify. JSON was used for user data persistence and Figma was used as the primary UX/UI design tool.
## Challenges we ran into
Implementing the Spotify API was a challenge as we had no previous experience with it. We had to seek out mentors for help in order to get it working. Designing user-friendly UI was also a challenge.
## Accomplishments that we're proud of
We took a while to get the backend working, so we only had a limited amount of time to work on the frontend, but we managed to get it very close to our original Figma prototype.
## What we learned
We learned more about implementing APIs and making mobile-friendly applications.
## What's next for verses
So far, we have implemented ***verses*** with the Spotify API. In the future, we hope to link it to more music platforms such as Apple Music. We also hope to create a leaderboard for players' friends to see which one of their friends can answer the most questions about their music taste correctly.
|
## 💡 Inspiration
Manga are Japanese comics, considered to form a genre distinct from other graphic novels. Like other comics, they lack a musical component. However, their digital counterparts (such as sites like Webtoons) have innovated on the traditional format with the addition of soundtracks playing concurrently with the reader's progression through the comic. This can create an immersive experience for the reader, building on the emotion on screen. While Webtoon’s take on incorporating music is not mainstream, we believe there is potential in building on the concept and making it mainstream in online manga. Imagine how cool it would be to generate a soundtrack to the story unfolding. Who doesn't enjoy personalized music while reading?
## 💻 What it does
1. Users choose a manga chapter to read (in our prototype, we're using just one page).
2. Sentiment analysis is performed on the dialogue of the manga (see the sketch after this list).
3. The resulting sentiment is used to determine what kind of music is fed into the song-generating model.
4. A new song will be created and played while the user reads the manga.
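A hedged sketch of step 2 feeding step 3 is below, written in Python with Google Cloud Natural Language (the project itself performed sentiment analysis in Node.js); the mood labels and score thresholds are our own illustrative assumptions.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# Hypothetical mapping from sentiment score to the style of music fed into
# the song-generating model; real thresholds would need tuning.
MOODS = [(-1.0, "tense"), (-0.25, "melancholy"), (0.25, "calm"), (0.6, "upbeat")]

def mood_for_dialogue(dialogue: str) -> str:
    """Score the chapter's dialogue and pick a seed style for music generation."""
    document = language_v1.Document(
        content=dialogue, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    score = client.analyze_sentiment(document=document).document_sentiment.score
    mood = MOODS[0][1]
    for threshold, label in MOODS:  # keep the highest threshold the score clears
        if score >= threshold:
            mood = label
    return mood

print(mood_for_dialogue("We finally made it! I can't believe we won the tournament!"))
```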
## 🔨 How we built it
* Started with brainstorming
* Devised a plan for implementation
* Divided tasks
* Implemented the project using the following tools
*Tech Stack*: TensorFlow, Google Cloud (Cloud Storage, Vertex AI), Node.js
Registered domain name: **mangajam.tech**
## ❓Challenges we ran into
* None of us knew machine learning at the level that this project demanded of us.
* Timezone differences and the complexity of the project
## 🥇 Accomplishments that we're proud of
The teamwork of course!! We are a team of four coming from three different timezones, this was the first hackathon for one of us, and the enthusiasm, coordination, and support were definitely unique and spirited. This was a very ambitious project, but we did our best to create a prototype as proof of concept. We really enjoyed learning new technologies.
## 📖 What we learned
* Using TensorFlow for sound generation
* Planning and organization
* Time management
* Performing Sentiment analysis using Node.js
## 🚀 What's next for Magenta
Oh tons!! We have many things planned for Magenta in the future.
* Ideally, we would also do image recognition on the manga scenes to help determine sentiment, but it's hard to actualize because of varying art styles and genres.
* To add more sentiments
* To deploy the website so everyone can try it out
* To develop a collection of Manga along with the generated soundtrack
|
partial
|
>
> Domain.com domain: IDE-asy.com
>
>
>
## Inspiration
Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the new trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry allowing everyone equal access to express their ideas in code.
## What it does
Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudocode, and our platform will interpret the audio command and generate the corresponding JavaScript code snippet in the web-based IDE.
## How we built it
We used React for the frontend, and the recorder.js API for user voice input. We used RunKit for the in-browser IDE. We used Python and Microsoft Azure for the backend, where Azure processes user input with the Cognitive Speech Services modules and provides syntactic translation for the frontend’s IDE.
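For illustration, a minimal sketch of the Azure Speech-to-Text call at the heart of this pipeline is below, using the official Python SDK; the key, region, and file name are placeholders, and the real backend wrapped this in a Flask endpoint.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region; "command.wav" stands in for the kind of
# audio blob the frontend's recorder.js would upload.
speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_AZURE_SPEECH_KEY", region="eastus"
)
audio_config = speechsdk.audio.AudioConfig(filename="command.wav")
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config
)

# Recognize a single utterance from the recording.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    # e.g. "declare a variable x equal to ten", handed to the translation layer
    print("Heard:", result.text)
else:
    print("Recognition failed:", result.reason)
```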
## Challenges we ran into
>
> "Before this hackathon I would usually deal with the back-end, however, for this project I challenged myself to experience a different role. I worked on the front end using react, as I do not have much experience with either react or Javascript, and so I put myself through the learning curve. It didn't help that this hacakthon was only 24 hours, however, I did it. I did my part on the front-end and I now have another language to add on my resume.
> The main Challenge that I dealt with was the fact that many of the Voice reg" *-Iyad*
>
>
> "Working with blobs, and voice data in JavaScript was entirely new to me." *-Isaac*
>
>
> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However with the aid of recorder.js and Python Flask, we able to properly implement the Azure model." *-Amir*
>
>
> "I have never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing python to hit API endpoints was unfamiliar to me at first, however with extended effort and exploration my team and I were able to implement the model into our hack. Now with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*
>
>
>
## Accomplishments that we're proud of
>
> "We had a few problems working with recorder.js as it used many outdated modules, as a result we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*
>
>
> "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*
>
>
> "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse user's pseudo code into properly formatted code to provide to our website's IDE." *-Amir*
>
>
> "Being able to properly integrate and interact with the Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*
>
>
>
## What we learned
>
> "I learned how to connect the backend to a react app, and how to work with the Voice recognition and recording modules in react. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure’s servers." *-Iyad*
>
>
> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*
>
>
> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web app. It was a unique experience integrating a flask app with Azure cognitive services. The challenging part was to make the Speaker Recognition to work; which unfortunately, seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speach2Text cognitive models and I ended up creating a neat api for our app." *-Amir*
>
>
> "The biggest thing I learned was how to generate, call and integrate with Microsoft azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience. " *-Kris*
>
>
>
## What's next for QuickCode
We plan on continuing development and making this product available on the market. We first hope to include more functionality within JavaScript, then extend support to other languages. From there, we want to integrate a group development environment, where users can work on files and projects together (version control). During the hackathon we also planned to add voice recognition that identifies and highlights which user is inputting (speaking) which code.
|
## Inspiration
Public speaking is a critical skill in our lives. The ability to communicate effectively and efficiently is a crucial, yet difficult, skill to hone. A few of us on the team grew up competing in public speaking competitions, so we understand all too well the challenges faced by individuals looking to improve their public speaking and presentation skills. Building off our experience with effective techniques and best practices, and by analyzing the speech patterns of well-known public speakers, we have designed a web app that targets the weaker points in your speech and identifies your strengths to make us all better, more effective communicators.
## What it does
By analyzing speaking data from many successful public speakers from a variety of industries and backgrounds, we have established relatively robust standards for optimal speed, energy levels, and pausing frequency during a speech. Taking into consideration the overall tone of the speech, as selected by the user, we are able to tailor our analyses to the user's needs. This simple and easy-to-use web application offers users insight into their overall accuracy, enunciation, WPM, pause frequency, energy levels throughout the speech, and error frequency per interval, and it summarizes some helpful tips to improve their performance the next time around.
## How we built it
For the backend, we built a centralized RESTful Flask API so that all backend data is fetched from one endpoint. We used Google Cloud Storage to store recordings longer than 30 seconds, as we found that locally saved audio files could only retain about 20-30 seconds of audio. We also used Google Cloud App Engine to deploy our Flask API, and Google Cloud Speech-to-Text to transcribe the audio. Various Python libraries were used for the analysis of voice data, and the resulting response returns within 5-10 seconds. The web application's user interface was built using React, HTML and CSS, and focused on displaying analyses in a clear and concise manner. We had two members of the team in charge of designing and developing the front end and two working on the backend functionality.
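To make the transcription-plus-pacing idea concrete, here is a minimal sketch using the Google Cloud Speech-to-Text Python client with per-word timestamps, from which a rough WPM figure can be derived; the encoding, sample rate, and WPM formula are illustrative assumptions.

```python
from google.cloud import speech

client = speech.SpeechClient()

def transcribe_with_timings(audio_bytes: bytes):
    """Transcribe speech and keep per-word timestamps for pacing analysis."""
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_word_time_offsets=True,  # needed for WPM and pause detection
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    return [
        (w.word, w.start_time.total_seconds(), w.end_time.total_seconds())
        for result in response.results
        for w in result.alternatives[0].words
    ]

def words_per_minute(words) -> float:
    """Rough WPM estimate from the first and last word timestamps."""
    if len(words) < 2:
        return 0.0
    duration_min = (words[-1][2] - words[0][1]) / 60
    return len(words) / duration_min
```

Gaps between one word's end time and the next word's start time give the pause frequency the same way.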
## Challenges we ran into
This hackathon, our team wanted to focus on creating a really good user interface to accompany the functionality. In our planning stages, we started looking into way more features than the time frame could accommodate, so a big challenge we faced was firstly, dealing with the time pressure and secondly, having to revisit our ideas many times and changing or removing functionality.
## Accomplishments that we're proud of
Our team is really proud of how well we worked together this hackathon, both in terms of team-wide discussions as well as efficient delegation of tasks for individual work. We leveraged many new technologies and learned so much in the process! Finally, we were able to create a good user interface to use as a platform to deliver our intended functionality.
## What we learned
Following the challenge that we faced during this hackathon, we were able to learn the importance of iteration within the design process and how helpful it is to revisit ideas and questions to see if they are still realistic and/or relevant. We also learned a lot about the great functionality that Google Cloud provides and how to leverage that in order to make our application better.
## What's next for Talko
In the future, we plan on continuing to develop the UI as well as add more functionality such as support for different languages. We are also considering creating a mobile app to make it more accessible to users on their phones.
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO and Chart.js. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-Text, Google Diarization, Stanford Empath, scikit-learn and GloVe (for word-to-vec).
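As a small illustration of the Empath step, here is how a transcribed snippet can be scored against lexical categories; the snippet and chosen categories are hypothetical, and the real pipeline combined this with the other models listed above.

```python
from empath import Empath

lexicon = Empath()

# Score a snippet of meeting speech against a few Empath categories;
# normalize=True returns per-category rates rather than raw counts.
snippet = (
    "I think the launch went really well, the team did a fantastic job "
    "and I'm excited about what comes next."
)
scores = lexicon.analyze(
    snippet,
    categories=["positive_emotion", "negative_emotion", "achievement"],
    normalize=True,
)
print(scores)
```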
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned too much about how computers store numbers (:p), and did a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
partial
|
## Inspiration
We have family members with autism, and Cinthya told us about her childhood imaginary friends and how she interacted with them, so we started researching these two topics and arrived at "Imaginary Friends".
## What it does
We are developing an application that allows kids of all kinds to draw their imaginary friends, visualize them using augmented reality, and keep them in the app, with the objective of improving social skills; studies show that imaginary friends help children form better social relationships and communicate better. The application is also capable of detecting moods like joy, sadness, etc., using IBM Watson Speech to Text and Watson Tone Analyzer, in order to give information of interest to the children's parents or psychologist through a web page built with Wix showing statistical data and their imaginary friends.
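A minimal sketch of the Tone Analyzer step is below, using the ibm-watson Python SDK; the credentials, service URL, and transcript are placeholders, and the real app ran this on text produced by Watson Speech to Text.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import ToneAnalyzerV3

# Placeholder credentials and service URL; detected tones would feed the
# statistics shown to parents on the Wix dashboard.
tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator("YOUR_WATSON_API_KEY"),
)
tone_analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com"
)

# Hypothetical transcript of a child talking to their imaginary friend.
transcript = "I had so much fun today, my friend and I built a huge castle!"
analysis = tone_analyzer.tone(
    {"text": transcript}, content_type="application/json"
).get_result()

for tone in analysis["document_tone"]["tones"]:
    print(tone["tone_name"], tone["score"])
```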
## Challenges we ran into
We didn't know some of technologies that we used so we had to learn them in the process.
## Accomplishments that we're proud of
Finishing the Wix application and nearly completing the mobile app.
## What we learned
How to use Wix and IBM Watson
## What's next for ImaginaryFriends
We think Imaginary Friends can go further if we implement the idea in theme parks such as Disneyland, with the idea that the kid could be guided by their own imaginary friend.
|
## Inspiration
Some things can only be understood through experience, and Virtual Reality is the perfect medium for providing new experiences. VR allows for complete control over vision, hearing, and perception in a virtual world, allowing our team to effectively alter the senses of immersed users. We wanted to manipulate vision and hearing in order to allow players to view life from the perspective of those with various disorders such as colorblindness, prosopagnosia, deafness, and other conditions that are difficult to accurately simulate in the real world. Our goal is to educate and expose users to the various names, effects, and natures of conditions that are difficult to fully comprehend without first-hand experience. Doing so can allow individuals to empathize with and learn from various different disorders.
## What it does
Sensory is an HTC Vive Virtual Reality experience that allows users to experiment with different disorders from Visual, Cognitive, or Auditory disorder categories. Upon selecting a specific impairment, the user is subjected to what someone with that disorder may experience, and can view more information on the disorder. Some examples include achromatopsia, a rare form of complete colorblindness, and prosopagnosia, the inability to recognize faces. Users can combine these effects, view their surroundings from new perspectives, and educate themselves on how various disorders work.
## How we built it
We built Sensory using the Unity Game Engine, the C# programming language, and the HTC Vive. We imported a select few models from the Unity Asset Store (all free!).
## Challenges we ran into
We chose this project because we hadn't experimented much with visual and audio effects in Unity and in VR before. Our team has done tons of VR, but never really dealt with any camera effects or postprocessing. As a result, there are many paths we attempted that ultimately led to failure (and lots of wasted time). For example, we wanted to make it so that users could only hear out of one ear - but after enough searching, we discovered it's very difficult to do this in Unity, and would've been much easier in a custom engine. As a result, we explored many aspects of Unity we'd never previously encountered in an attempt to change lots of effects.
## What's next for Sensory
There are still many more disorders we want to implement, and many categories we could potentially add. We envision this becoming a central hub for users, doctors, professionals, or patients to experience different disorders. Right now, it's primarily a tool for experimentation, but in the future it could be used for empathy, awareness, education and health.
|
## Inspiration
Studies show that drawing, coloring, and other art-making activities can help people express themselves artistically and explore their art's psychological and emotional undertones [1]. Before this project, many members of our team had already caught on to the stress-relieving capabilities of art-centered events, especially when they involved cooperative interaction. We realized that we could apply this concept in a virtual setting in order to make stress-relieving art events accessible to those who are homeschooled, socially-anxious, unable to purchase art materials, or otherwise unable to access these groups in real life. Furthermore, virtual reality provides an open sandbox suited exactly to the needs of a stressed person that wants to relieve their emotional buildup. Creating art in a therapeutic environment not only reduces stress, depression, and anxiety in teens and young adults, but it is also rooted in spiritual expression and analysis [2]. We envision an **online community where people can creatively express their feelings, find healing, and connect with others through the creative process of making art in Virtual Reality.**
## VIDEOS:
<https://youtu.be/QXY9UfquwNI>
<https://youtu.be/u-3l8vwXHvw>
## What it does
We built a VR application that **learns from the user's subjective survey responses** and then **connects them with a support group who might share some common interests and worries.** Within the virtual reality environment, they can **interact with others through anonymous avatars, see others' drawings in the same settings, and improve their well-being by interacting with others in a liberating environment.** To build the community outside of VR, there is an accompanying social media website allowing users to share their creative drawings with others.
## How we built it
* We used SteamVR with the HTC Vive HMD and Oculus HMD, as well as Unity to build the interactive environments and develop the softwares' functionality.
* The website was built with Firebase, Node.js, React, Redux, and Material UI.
## Challenges we ran into
* Displaying drawing in real time on the server side, rather than as client-side output, posed a difficulty due to the constraints on broadcasting point-cloud data through Photon. Within the timeframe of YHack, we were able to build the game that connects multiple players and allows them to see each other's avatars. We also encountered difficulties with some of the algorithmic costs of the original line-drawing methods we attempted to use.
## Citation:
[1] <https://www.psychologytoday.com/us/groups/art-therapy/connecticut/159921?sid=5db38c601a378&ref=2&tr=ResultsName>
[2] <https://www.psychologytoday.com/us/therapy-types/art-therapy>
|
winning
|
## Never plan for fun again. Just have it. Now, with CityCrawler.
While squabbling over bars to visit in the city, we pined for a solution to our problem.
And here we find CityCrawler, an app that takes your interests and a couple of other details to immediately plan your ideal trip. So whether it's a pub crawl or a night full of entertainment, CityCrawler will be at your service to help you decide and focus on the conversations that actually matter.
With CityCrawler you can also share your plan with your friends, so no one is left behind.
## Tech Used
On the iOS side, we used RxSwift and RxAlamofire to handle asynchronous tasks and network requests.
On the Android side, we used Kotlin, RxKotlin, Retrofit and OkHTTP.
Our backend system consists of a set of stdlib endpoint functions, which are built using the Google Maps Places API, Distance Matrix API, and the Firebase API. We also wrote a custom algorithm to solve the Travelling Salesman Problem based on Kruskal's Minimum Spanning Tree algorithm and the Depth First Tree Tour algorithm.
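For a concrete picture of that routing step, here is a toy sketch of the classic 2-approximation it describes: build a minimum spanning tree over the stops with Kruskal's algorithm, then visit it in depth-first preorder. The stop names and distances are made up; the real app used Distance Matrix API travel times.

```python
import networkx as nx

# Illustrative pairwise distances between stops on a night out.
stops = {
    ("Pub A", "Pub B"): 4, ("Pub A", "Arcade"): 9, ("Pub A", "Diner"): 7,
    ("Pub B", "Arcade"): 3, ("Pub B", "Diner"): 8, ("Arcade", "Diner"): 5,
}

graph = nx.Graph()
for (u, v), dist in stops.items():
    graph.add_edge(u, v, weight=dist)

# Kruskal's algorithm builds the minimum spanning tree.
mst = nx.minimum_spanning_tree(graph, algorithm="kruskal")

# A depth-first preorder of the MST visits every stop once (the "tree tour");
# for metric distances this tour is at most twice the optimal TSP length.
tour = list(nx.dfs_preorder_nodes(mst, source="Pub A"))
tour.append("Pub A")  # head home at the end
print(" -> ".join(tour))
```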
We have also exposed our Firebase and Google Maps stdlib functions to the public to contribute to that ecosystem. Oh and #OneMoreThing.
### Android Youtube Video - <https://youtu.be/WudxqMyaszQ>
|
## Inspiration
TripAdvisor and RoadTrippers
## What it does
Our project gives users community-based, calculated plans based on points of attraction and personal preferences, like staying longer at nature spots or museums. The app blends information about popular spots with visualizations to provide an easy interpretation of the plan, how long it would take, and how long each attraction usually takes. Users can upload their own plans, and the most popular ones move up the list so that efficient but enjoyable trips are seen by the most people.
## How we built it
React Native, Figma, JSON schemas
## Challenges we ran into
Time :(. We weren't able to construct the backend or parts of the UI because we didn't have enough time to implement the many good ideas that we threw around.
## Accomplishments that we're proud of
Building a semi-functional application that is styled well and has solid scalability if worked on for longer than a day. We tried to prioritize the planning over the development so that we would be able to truly build this out at a future time.
## What we learned
React Native app development, product development, forming ideas and refining them, researching existing services and identifying opportunities.
## What's next for Daytrips
Possible improvements to the UI and an actual implementation of the backend. Most likely, nothing will be done for the rest of this semester, but who knows.
|
## Inspiration 💡
>
> "An intoxication comes over the man who walks long and aimlessly through the streets. With each step, the walk takes on greater momentum [...] ever more irresistible the magnetism of the next street corner, of a mass of distant foliage, of a street name." - Walter Benjamin
>
>
>
It is our belief that the world needs more wanderers. Many people look to distant lands when they think of exploration, but overlook the wealth of culture, history, and beauty in their own cities. We wanted to make it easier, more fun, and potentially even healthier to explore your local neighbourhood or city on mental-health walks, so we created an **AI-assisted tour guide and running coach**. Get active and get exploring!
## What it does 🗺️
***Wander*** is a mobile-first web app that harnesses *Cohere AI* to automatically generate personalized guided workouts. Your AI coach will motivate you to keep up the pace, while also giving you interesting facts about the buildings, landmarks, and scenery as you pass by. And if you want to take a break from the running, you can learn even more about each point of interest (POI) through our augmented reality experiences. Just open your camera from the *Wander* app, scan a QR code located somewhere near the POI, and start discovering a virtual world full of 3D artifacts. With *Wander*, your city can be like a gym and a museum at the same time!
## How we built it 🛠️
The mockup/prototype was designed using Figma, with some help from plugins like Tabler Icons and Mapsicle. Our database service is Firebase, and the backend API is built with Node.js and Express. AR experiences were developed using echo3D, Vectary, and Unity. Other frontend components were developed with React. Our app also makes use of the following services:

* Cohere API (for generating a script for each audio guide)
* Microsoft Azure Text-to-Speech (for synthesizing audio)
* Wikipedia API (for finding nearby POIs; see the sketch after this list)
* Mapbox Directions API (for step-by-step walking/running directions)
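As a small, hedged sketch of the POI lookup, here is how the MediaWiki geosearch endpoint can find Wikipedia-listed landmarks near the runner; the coordinates (the CN Tower) and the radius are illustrative.

```python
import requests

def nearby_pois(lat: float, lon: float, radius_m: int = 1000, limit: int = 10):
    """Find Wikipedia-listed points of interest near the given coordinates."""
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "geosearch",
            "gscoord": f"{lat}|{lon}",
            "gsradius": radius_m,
            "gslimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()
    # Each hit has a title and a distance; the titles would seed the
    # Cohere-generated audio-guide script.
    return [(p["title"], p["dist"]) for p in response.json()["query"]["geosearch"]]

for title, dist in nearby_pois(43.6426, -79.3871):
    print(f"{title} ({dist:.0f} m away)")
```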
## Challenges we ran into 🚧
One of our biggest struggles was integrating each group member's work into one cohesive unit across different technologies. In order to maximize our productivity, each member specialized in one specific area, and we brought it all together in the end. However, we ended up with very little time to connect all the pieces, and we ran into roadblocks connecting the frontend to the other API services.
And of course, like in all hackathons, time is a very limited resource, and we found ourselves wishing we had several more days to work on this. Our vision for Wander is grand and we were hoping to implement a lot of functionality, but with 4 people and 36 hours, we managed to complete a decent amount of our end goals.
## Accomplishments that we're proud of 🏆
We're proud that we were able to flesh out our vision of exploration and adventure, and have made the first steps towards building a usable, impactful application. We're also proud of our ability to work as a team, despite being strangers to each other just a couple days ago.
## What we learned 📖
All of us were new to the Cohere AI platform as well as the echo3D software, and it took some time to wrap our heads around them and learn how to write prompts that gave us the results we wanted. But after lots of trial and error, we (mostly) made it work and have a better grasp on how to "trick" the model into giving us great responses.
This was Daniel's first time designing, prototyping, and then implementing a multi-page application from scratch. He's a pro at Figma's keyboard shortcuts now!
This was also Matthew's and Daniel's first hackathon. We have learned A TON from this experience, both good and bad (one could say we have *explored* the depths of sleep deprivation and the heights of anxiety...), but mostly good! We found that in order to go from idea to app in such a short time requires a lot of quick thinking, communication, and cooperation.
## What's next for Wander 🏃
Our app still has a ways to go before people can start using it, and we plan to continue development in the future (if time permits) so that our frontend application looks and feels like the prototype we envisioned. We will also consider porting the React code to React Native, so we can build a native application that better integrates with your devices. This will also allow us to publish Wander on the App Store or Google Play Store.
|
losing
|
## What Inspired Us
A good customer experience leaves a lasting impression across every stage of the customer journey. This is exemplified in the airline and travel industry. To give credit and show appreciation to the hardworking employees of JetBlue, we chose to scrape and analyze customer feedback on review and social media sites, both to highlight their impact on customers and to provide currently untracked, valuable data for building a more personalized brand that outshines its market competitors.
## What Our Project does
Our customer feedback analytics dashboard, BlueVisuals, provides JetBlue with highly visual presentations, summaries, and highlights of customers' thoughts and opinions on social media and review sites. Visuals such as word clouds and word-frequency charts highlight critical areas where customers reported either positive or negative experiences, pointing to strengths or areas of improvement. Users can read individual comments to review a customer's exact situation, or skim through to get a general sense of the company's social media interactions with its customers. Through this dashboard, we hope that users can draw solid conclusions and pursue action based on them.
Humans of JetBlue is a side product of the conclusions users (such as ourselves) may draw from the dashboard: it showcases the efforts and dedication of individuals working at JetBlue and their positive impact on customers. It highlights our inspiration for building the main dashboard and is a tool we would recommend to JetBlue.
## How we designed and built BlueVisuals and Humans of JetBlue
After establishing the goals of our project, we focused on data collection via web scraping and building the data processing pipeline using Python and Google Cloud's NLP API. After understanding our data, we drew up a website and corresponding visualizations. Then, we implemented the front end using React.
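As an illustration of the sentiment step in that pipeline, here is a hedged sketch using the google-cloud-language client; the function name and example review are ours, and it assumes application-default credentials are already configured:

```python
# Sketch of scoring one scraped review with Google Cloud's NLP API.
from google.cloud import language_v1

def score_review(text: str):
    """Return (sentiment score, magnitude) for one piece of feedback."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # score is in [-1, 1] (negative..positive); magnitude is overall strength.
    return sentiment.score, sentiment.magnitude

print(score_review("The JetBlue crew went above and beyond on our delayed flight."))
```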
Finally, we drew conclusions from our dashboard and designed 'Humans of JetBlue' as an example usage of BlueVisuals.
## What's next for BlueVisuals and Humans of JetBlue
* collecting more data to get a more representative survey of consumer sentiment online
* building a back-end database to support data processing, storage, and organization
* expanding employee-centric features such as Humans of JetBlue
## Challenges we ran into
* Polishing scraped data and extracting important information.
* Finalizing direction and purpose of the project
* Sleeping on the floor.
## Accomplishments that we're proud of
* effectively processed, organized, and built visualizations for text data
* picking up new skills (JS, matplotlib, GCloud NLP API)
* working as a team to manage loads of work under time constraints
## What we learned
* value of teamwork in a coding environment
* technical skills
|
## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses, and market updates is overwhelming. For most people, finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing, we condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from the domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies, letting users focus on high-profile stocks relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools, which determine whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server, and Next.js enabled server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis, we used TensorFlow, Pandas, and NumPy.
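To illustrate the mention-scanning step described above, here is a simplified sketch; `top_100` and `predict_sentiment` are hypothetical stand-ins for our full company list and TensorFlow model:

```python
import re
from collections import defaultdict

# Truncated stand-in for the full top-100 company list.
top_100 = {"Apple": "AAPL", "Microsoft": "MSFT", "Nvidia": "NVDA"}

def scan_article(text, predict_sentiment):
    """Map each mentioned ticker to the mean sentiment of its sentences."""
    scores = defaultdict(list)
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for name, ticker in top_100.items():
            if name.lower() in sentence.lower():
                scores[ticker].append(predict_sentiment(sentence))  # float in [-1, 1]
    return {t: sum(s) / len(s) for t, s in scores.items()}

# Stub model for demonstration; the real one is our TensorFlow classifier.
print(scan_article("Apple beat earnings estimates. Nvidia shares fell.", lambda s: 0.5))
```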
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results, so we had to pivot and search for a more substantial alternative. However, the different formats of the available datasets made this adjustment more complex. Also, designing an aesthetically pleasing user interface proved to be challenging, and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performance. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources.
|
# Customer Feedback Sentiment Analyser
## yhack2019✈️
Being naturally fascinated by data science, our team was drawn to the "Best Search of Customer Feedback" challenge. Through trial and error, often trying one methodology, library, or tool only to have to switch to another, we persevered to collect tweet data from 2013 to today (10/27/2019) and Yelp reviews for JetBlue, American, Delta, and Spirit.
Some of the biggest challenges we faced were scraping data off Twitter and dealing with minor sleep deprivation. The Twitter challenge was overcome by a half-scripted, half-manual scraping method, thought up on the fly. Yelp was scraped by crawling with the requests library and parsing HTML with BeautifulSoup.
The data was then converted to JSON files, to be later consumed by **Google's Natural Language Processing API**.
The data was again converted to a JSON file holding the date of each review or tweet and the sentiment and magnitude of the text.
This was later analyzed and graphed using a mixture of Jupyter Notebook and scripts that used matplotlib.
We also collected JetBlue stock price data and plotted it using pandas and Jupyter Notebook.
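As an illustration of that graphing step, here is a small sketch with pandas and matplotlib; the file name and JSON field names are assumptions about our intermediate format:

```python
# Sketch of graphing sentiment over time from the intermediate JSON,
# assumed shaped like [{"date": "2019-10-27", "sentiment": 0.4, ...}, ...].
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_json("jetblue_sentiment.json")
df["date"] = pd.to_datetime(df["date"])
weekly = df.set_index("date")["sentiment"].resample("W").mean()

weekly.plot(title="JetBlue weekly average sentiment (tweets + Yelp)")
plt.axhline(0, color="gray", linewidth=0.5)  # neutral baseline
plt.ylabel("sentiment score")
plt.show()
```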
In the end, we also included a GUI to show the different graphs from the text sentiment analyzer.
Our teamwork itself was notable, with each member of our team becoming a specialist in one area of the project and communicating with the others about what they were working on.
YHack was a tremendous learning opportunity and a great time to hack and collaborate with friends.
|
partial
|
## Inspiration
We have to make a lot of decisions, all the time- whether it's choosing your next hackathon project idea, texting your ex or not, writing an argumentative essay, or settling a debate. Sometimes, you need the cold hard truth. Sometimes, you need someone to feed into your delusions. But sometimes, you need both!
## What it does
Give the Council your problem, and it'll answer with four (sometimes varying) AI-generated perspectives! With 10 different personalities to choose from, you can get a bunch of (imaginary) friends to weigh in on your dilemmas, even if you're all alone!
## How we built it
The Council utilizes OpenAI's GPT-3.5 API to generate responses unique to our 10 pre-defined personas. The UI was built with three.js and react-three-fiber, with a mix of open source and custom-built 3D assets.
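As a sketch of how one persona might be wired up, assuming the current OpenAI Python client; the persona prompts below are illustrative, not our exact ones:

```python
# Minimal sketch: one GPT-3.5 answer per persona.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "brutal_honesty": "You give the cold hard truth, kindly but without sugarcoating.",
    "hype_friend": "You enthusiastically feed into the user's delusions.",
}

def ask_council(problem: str) -> dict:
    """Collect one answer per persona for the user's dilemma."""
    answers = {}
    for name, system_prompt in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": problem},
            ],
        )
        answers[name] = response.choices[0].message.content
    return answers
```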
## Challenges we ran into
* 3D hard
* merge conflict hard
* Git is hard
## Accomplishments that we're proud of
* AI responses that were actually very helpful and impressive
* Lots of laughs from funny personalities
* Custom disco ball (SHEEEEEEEEESH shoutout to Alan)
* Sexy UI (can you tell who's writing this)
## What we learned
This project was everyone's first time working with three.js! While we had all used OpenAI for previous projects, we wanted to put a unique spin on the typical applications of GPT.
## What's next for The Council
We'd like to actually deploy this app to bring as much joy to everyone as it did to our team (sorry to everyone else in our room who had to deal with us cracking up every 15 minutes)
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and Chart.js. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-Text, Google's diarization, Stanford Empath, scikit-learn, and GloVe (for word vectors).
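For a rough idea of the transcription and diarisation step, here is a sketch using the google-cloud-speech beta client; the encoding, sample rate, and speaker counts are illustrative parameters, not necessarily the ones we shipped:

```python
# Sketch of speech-to-text with speaker diarisation on Google Cloud.
from google.cloud import speech_v1p1beta1 as speech

def transcribe_with_speakers(audio_bytes: bytes):
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        diarization_config=speech.SpeakerDiarizationConfig(
            enable_speaker_diarization=True,
            min_speaker_count=2,
            max_speaker_count=6,
        ),
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # The final result carries word-level speaker tags for the whole clip.
    words = response.results[-1].alternatives[0].words
    return [(w.speaker_tag, w.word) for w in words]
```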
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned far too much about how computers store numbers (:p), and did a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
## Inspiration
In high school, a teacher of ours used to sit in the middle of class discussions and draw lines on paper from one speaker to another to identify the trends of the discussion. It's a very meaningful activity, as we could see how balanced the discussion was, allowing us to highlight people who had less chance to express their ideas. It could also be used by teachers in early education to identify social challenges in children, such as anxiety or speech disorders.
## What it does
The app is initially trained on a short audio clip from each member of the discussion. Using transfer learning, it is able to recognize the person talking. And during the discussion, very colorful and aesthetic lines are drawn from person to person in REAL-TIME!
## How we built it
On the front-end, we used React and JavaScript to create a responsive and aesthetic website, and vanilla CSS (plus a little bit of math, a.k.a. Bézier curves) to create beautiful animated lines connecting the different profiles.
On the back-end, Python and TensorFlow were used to train the AI model. First, the audio is pre-processed into 1-second chunks, each of which is turned into a spectrogram image. With these, we performed transfer learning with VGG16 to extract features from the spectrograms. The features are then used to fit an SVM model using scikit-learn (a condensed sketch of this pipeline follows). Subsequently, the back-end opens a WebSocket with the front-end to receive the stream of data and return the label of the person talking. This is done with multi-threading to ensure all the data is processed quickly.
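Here is a condensed sketch of that training pipeline; the file names, sample rate, and the crude spectrogram resize are placeholders for illustration:

```python
# Sketch: 1-second chunks -> mel spectrograms -> VGG16 features -> SVM.
import numpy as np
import librosa
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC

vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")

def chunk_features(path: str, sr: int = 16000) -> np.ndarray:
    """Split audio into 1-second chunks and extract one VGG16 vector per chunk."""
    y, _ = librosa.load(path, sr=sr)
    feats = []
    for start in range(0, len(y) - sr + 1, sr):
        chunk = y[start:start + sr]
        s = librosa.power_to_db(librosa.feature.melspectrogram(y=chunk, sr=sr))
        img = np.stack([s, s, s], axis=-1)          # 3 channels for VGG16
        img = np.resize(img, (224, 224, 3))         # crude resize for the sketch
        feats.append(vgg.predict(preprocess_input(img[None].astype("float32")),
                                 verbose=0)[0])
    return np.array(feats)

files = {"alice.wav": 0, "bob.wav": 1}  # placeholder enrollment clips
X_parts, y_parts = [], []
for path, label in files.items():
    f = chunk_features(path)
    X_parts.append(f)
    y_parts.extend([label] * len(f))
clf = SVC(probability=True).fit(np.vstack(X_parts), np.array(y_parts))
```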
## Challenges we ran into
As it was our first time with deep learning, or training an AI model for that matter, it was very difficult to get started. Despite the copious amount of resources and projects out there, it was hard to identify a suitable source. We also needed to learn different pre-processing techniques before the model could be trained. In addition, finding a platform (such as Google Colab) was necessary to train the model in a reasonable time. Finally, it was fairly hard to incorporate the model into the rest of the project: it needs to process the data in real time while keeping latency low.
Another major challenge was connecting the back-end with the front-end. Since we wanted it to be real-time, we had to stream the raw data to the back-end. But there were problems reconstructing the binary files into an appropriate format, because we were unsure what format RecordRTC uses to record audio. There was also the question of how much data, and how frequently, it should be sent over, given our high prediction latency (~500ms). It's a problem we couldn't fully figure out in time.
## Accomplishments that we're proud of
The process of training the model was really cool!!! We would never have thought of training a voice recognition model similar to how you would train image/face recognition. It was a very out-of-the-box method that we stumbled upon online, and it really motivated us to get out there and see what else exists. We were also fairly surprised to get a proof-of-concept of real-time processing with local audio input from the microphone. We had to utilize threading to avoid overflowing the audio input buffer. And if you get to use threading, you know it's a cool project :D.
## What we learned
Looking back, the project was quite ambitious. BUT!! That's how we learned. We learned so much about training machine learning models as well as different connection protocols over the internet. Threading was also a long-held dream of ours, so it was really fun experimenting with the concept in Python.
## What's next for Hello world
The app would be much better on mobile, so there are plans to port the entire project (maybe by learning React Native?). We're also planning on retraining the voice recognition model with different methods to improve its accuracy and confidence level. Lastly, we're planning on deploying the app and sending it back to our high school teacher, who was the inspiration for this project, as well as to teachers around the world for their classrooms.
## Sources
These two sources helped us tremendously in building the model:
<https://medium.com/@omkarade9578/speaker-recognition-using-transfer-learning-82e4f248ef09>
<https://towardsdatascience.com/automatic-speaker-recognition-using-transfer-learning-6fab63e34e74>
|
winning
|
## Inspiration
We had the idea to make a no-touch keyboard and mouse setup from our experience dining at restaurants during the COVID pandemic, where many restaurants had customers enter their information on an iPad or laptop to keep contact-tracing data. With new health and safety procedures, we thought the check-in model was becoming outdated, as our devices tend to be the least clean parts of our lives. With our application, we can avoid the dangers of touching bacteria-heavy devices while maintaining the convenience of a digital form of contact tracing.
## What it does
Our application gives a user mouse and keyboard functionality without the need to actually touch a mouse or keyboard. We used OpenCV and different Python libraries so that the camera recognizes hand movements as cursor movements and associates hand gestures with different functions. For example, to left click, you tap your index finger and your thumb together and the computer registers a click (a sketch of this pinch detection follows the list below). With the addition of a gesture to open and close an on-screen keyboard, we can perform all the functionality needed from a device without touching anything, all while maintaining a safe distance.
## Functionalities
Right click: Middle finger and Thumb pinch
Left click: Index finger and Thumb pinch
Double click: Pinky and Thumb pinch
Access Keyboard: Ring finger and Thumb pinch
Cursor Movements: Controlled by the position of the base of your Index finger
Scroll Down: Swipe index finger upwards (quickly)
Scroll Up: Swipe index finger downwards (quickly)
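Below is a simplified sketch of the left-click pinch using MediaPipe Hands and pyautogui; the pinch threshold is a tuned placeholder, and a real version would debounce repeated clicks:

```python
# Sketch: detect an index-thumb pinch and fire a left click.
import math
import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
PINCH_THRESHOLD = 0.05  # normalized landmark distance (placeholder)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]  # thumb tip, index fingertip
        if math.dist((thumb.x, thumb.y), (index.x, index.y)) < PINCH_THRESHOLD:
            pyautogui.click()  # left click on pinch
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```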
## Challenges we ran into
Our biggest challenge was making typing and browsing a good experience. The initial setup, with the pointer corresponding to the tip of the index finger, was a little glitchy and very jumpy, so it was hard to type. We circumvented this issue by anchoring the cursor to a section of the palm instead, which is also tracked by OpenCV. This, combined with a different mouse library and decreased delays inside the libraries, let us make a very stable and responsive system.
## What Did We Use
OpenCV, Python, Figma, Pyautogui, MediaPipe
|
## Overview
We designed fingerpaint, an app that uses visual image processing to implement a drawing tool through motion detection. Using OpenCV, we analyze the video stream from a user's webcam and use that information to drive the user's interaction with a canvas. This lets users whiteboard and prototype basic designs through drawing more easily than with existing tools such as Microsoft Paint.
## How it Works
When the app first loads, it opens on an initialization screen that allows users to calibrate the app to their drawing tool of choice. To simplify the model required, we mark the user's finger with brightly colored electrical tape and detect that marker. After calibration finishes, the user's marked finger is recognized by the app and can be used to interact with the canvas (implemented with tkinter). Informed by a history of past points, our model detects where the finger is moving and uses this information to move a cursor. While the user holds the 'a' key, the moving cursor draws on the canvas.
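As a minimal sketch of the marker tracking, assuming OpenCV and placeholder HSV bounds for the tape colour:

```python
# Sketch: find the centroid of the colored-tape marker in one webcam frame.
import cv2
import numpy as np

LOWER = np.array([20, 100, 100])   # placeholder bounds, e.g. bright yellow tape
UPPER = np.array([35, 255, 255])

def marker_position(frame: np.ndarray):
    """Return the (x, y) centroid of the tape marker, or None if not visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```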
## Challenges
This was our first time working with OpenCV/Computer Vision in general, so we had a lot of difficulties in determining how best to leverage all the tools the library has and the best way to accomplish the Computer Vision task we had. We had a lot of iteration playing with different approaches to figure out what would work best.
## Next Steps
Implement more advanced gesture recognition and UI related features (Undos, Saving Images to Local File, etc.)
|
## Inspiration
It is nearly a year since the start of the pandemic and going back to normal still feels like a distant dream.
As students, most of our time is spent attending online lectures, reading e-books, listening to music, and playing online games. This forces us to spend immense amounts of time in front of a large monitor, clicking the same monotonous buttons.
Many surveys suggest that this has increased anxiety levels among youth.
Basically, we are losing the physical stimulus of reading an actual book in a library, going to an arcade to enjoy games, or playing table tennis with our friends.
## What it does
It does three things:
1) Controls any game, such as Asphalt 9, using your hand or a physical (non-digital) steering wheel
2) Helps you zoom in, zoom out, scroll up, and scroll down using only hand gestures.
3) Helps you browse any music of your choice using voice commands, with gesture controls for volume, pause/play, skip, etc.
## How we built it
The three main technologies used in this project are:
1) Python 3
The software suite is built using Python 3 and was initially developed in the Jupyter Notebook IDE.
2) OpenCV
The software uses the OpenCV library in Python to implement most of its gesture recognition and motion analysis tasks.
3) Selenium
Selenium is a browser automation (WebDriver) tool that was used extensively to control the web-interface interaction component of the software.
## Challenges we ran into
1) Selenium only works with Google Chrome version 81 and is very hard to debug :(
2) Finding the perfect HSV ranges corresponding to different colours was a tedious task and required us to make a special script to make the task easier (a sketch of such a script follows this list).
3) Pulling an all-nighter (A coffee does NOT help!)
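For reference, here is the kind of helper script mentioned in (2): live OpenCV trackbars for dialing in HSV bounds against the webcam feed (a sketch, not our exact script):

```python
# Sketch: tune HSV lower/upper bounds interactively with trackbars.
import cv2
import numpy as np

cv2.namedWindow("tuner")
for name, maxv in [("H lo", 179), ("S lo", 255), ("V lo", 255),
                   ("H hi", 179), ("S hi", 255), ("V hi", 255)]:
    cv2.createTrackbar(name, "tuner", 0 if "lo" in name else maxv, maxv,
                       lambda x: None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lo = np.array([cv2.getTrackbarPos(n, "tuner") for n in ("H lo", "S lo", "V lo")])
    hi = np.array([cv2.getTrackbarPos(n, "tuner") for n in ("H hi", "S hi", "V hi")])
    # Show only the pixels that fall inside the current HSV range.
    cv2.imshow("tuner", cv2.bitwise_and(frame, frame, mask=cv2.inRange(hsv, lo, hi)))
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```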
## Accomplishments that we're proud of
1) Successfully amalgamated computer vision, speech recognition, and web automation to make a suite of software, not just a single program!
## What we learned
1) How to debug selenium efficiently
2) How to use angle geometry for steering a car using computer vision
3) Stabilizing errors in object detection
## What's next for E-Motion
We plan to implement more components in E-Motion that will help browse the entire computer, and to make the voice commands more precise by ignoring background noise.
|
losing
|
## Inspiration
We noticed that individuals have difficulty achieving their fitness goals due to external factors such as work and sleep patterns, yet many fitness applications do not provide a realistic time frame for achieving a given goal. We felt that an application should customize each day's fitness goals to fit the individual's work schedule and the previous night's sleep pattern. These daily goals are compared against the final goal to predict an estimate of when the user will achieve it.
## What it does
FullyFit uses both the Fitbit and Muse APIs to get fitness and brain-wave (EEG) data. Features extracted from the data include steps taken, calories burnt, heart rate, sleep, and floors climbed, per minute as well as per day. Data from Google Calendar is also pulled in, indicating how long the user will be busy throughout the day. On the mental health side, FullyFit tracks brain-wave activity via a Muse headband, allowing the app to adjust its exercise reminders based on mental stress and thinking.
## How we built it
We used machine learning to predict whether the user can meet their step or calories-burned goal. We used the scikit-learn machine learning library to test several algorithms: Decision Trees, Logistic Regression, Random Forests, and Bagging ensemble methods. Even though we got higher accuracy with both Random Forests and Bagging ensembles, we used Decision Trees for predicting whether the user meets their daily fitness goals, since they turned out to be really quick while remaining reasonably accurate (a sketch follows).
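A minimal sketch of that goal-prediction model, assuming a feature table with these hypothetical column names:

```python
# Sketch: train a decision tree to predict whether the daily goal is met.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("daily_features.csv")  # placeholder path
features = ["steps_so_far", "calories_so_far", "floors_climbed",
            "sleep_minutes", "busy_hours_today"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["met_goal"], test_size=0.2, random_state=0
)

clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```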
## Challenges we ran into
There were some issues with the Fitbit API when getting and handling intraday data for more than a month, so for demonstration purposes we analyzed only a month's data.
FullyFit is more accurate if the user has their work schedule posted in advance. This helps it better predict whether the user will be able to meet their goal, and notify them with suggestions.
## Things we learned
• We learned a lot about the different channels of signaling from the brain: delta waves, which are most present during sleep; theta waves, which are associated with sleep, very deep relaxation, and visualization; alpha waves, which occur when relaxed and calm; beta waves, which occur when, for example, actively thinking or problem-solving; and gamma waves, which occur during higher mental activity and the consolidation of information.
• Our brainwave EEG API wrapper relied on old legacy Objective-C code that we had to interpret and make edits to.
• Interacting with the Fitbit API
## Accomplishments that we're proud of
• Being able to implement both Fitbit and Muse API's in our application.
• Getting a high prediction accuracy with Bagging ensemble methods (96.5%), Random Forest (89%) and Decision Trees (84%)
## What's next for FullyFit
We would like to analyze much more data than we did here, as well as optimize our algorithms across many users. Currently, we only use the duration of busy hours from the user's calendar as a feature in training our machine learning models. We would like to build on this and make our system smarter by also taking into account the spread of the events; for example, if the person is busy with work later in the day, it can compensate. We would also like to expand upon the application of Muse as a mental health guide.
|
## Inspiration
The three of us love lifting at the gym. We always see apps that track cardio fitness but haven't found anything that tracks lifting exercises in real time. Oftentimes when lifting, people employ poor form, leading to gym injuries that could have been avoided by being proactive.
## What it does
Our product tracks body movements using EMG signals from a Myo armband the athlete wears. During the activity, the application provides real-time tracking of the muscles used, the distance specific body parts travel, and information about the athlete's posture and form. Using machine learning, we actively provide haptic feedback through the band to correct the athlete's movements if our algorithm deems the form to be poor.
## How we built it
We trained an SVM on deliberately performed examples of proper and improper form for exercises such as bicep curls. We read properties of the EMG signals from the Myo band and associated them with good/poor form labels. Then, we dynamically read signals from the band during workouts and chart points in the feature space where we classify their form. If the form is bad, the band provides haptic feedback to the user, indicating that they might injure themselves (a simplified sketch of the classifier follows).
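Here is that simplified sketch; the per-channel RMS feature and the stand-in data are illustrative choices, not necessarily the exact EMG properties we used:

```python
# Sketch: classify good vs. poor form from windows of 8-channel EMG.
import numpy as np
from sklearn.svm import SVC

def emg_features(window: np.ndarray) -> np.ndarray:
    """window: (samples, 8) array of the Myo's 8 EMG channels -> RMS per channel."""
    return np.sqrt((window.astype(float) ** 2).mean(axis=0))

rng = np.random.default_rng(0)
labelled_windows = [rng.normal(size=(200, 8)) for _ in range(40)]  # stand-in reps
labels = np.array([i % 2 for i in range(40)])                      # 1 = good form

X = np.array([emg_features(w) for w in labelled_windows])
clf = SVC(kernel="rbf").fit(X, labels)

def check_rep(window: np.ndarray) -> bool:
    """True if a live rep classifies as good form; otherwise vibrate the band."""
    return bool(clf.predict(emg_features(window)[None])[0])
```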
## Challenges we ran into
Interfacing with the Myo band's API was not the easiest task for us, as we ran into numerous technical difficulties. However, after copious amounts of debugging, we finally managed to get a clear stream of EMG data.
## Accomplishments that we're proud of
We made a working product by the end of the hackathon (including a fully functional machine learning model) and are extremely excited for its future applications.
## What we learned
It was our first time making a hardware hack so it was a really great experience playing around with the Myo and learning about how to interface with the hardware. We also learned a lot about signal processing.
## What's next for SpotMe
In addition to refining our algorithms and the depth of insights we can provide, we definitely want to expand the breadth of activities we cover (since we're currently focused primarily on weight lifting).
The market we want to target is sports enthusiasts who want to play like their idols. By collecting data from professional athletes, we can build "profiles" that the user can learn to play like. We can quantitatively and precisely assess how closely the user is playing to their chosen professional athlete.
For instance, we played tennis in high school and frequently had to watch videos of our favorite professionals. With this tool, you can actually learn to serve like Federer, shoot like Curry or throw a spiral like Brady.
|
## Inspiration
Our inspiration stems from the desire to craft something sustainable and innovative. Recognizing the significance of self-care, we set out to create a platform that embraces the uniqueness of each individual's journey to wellness. Our belief is grounded in the understanding that the path to well-being is deeply personal and should be tailored to the distinct needs of every individual.
## What it does
**Empowering Personal Growth**:
- Build an app that empowers individuals on their journey toward personal growth and self-improvement.
- Provide tools and features that facilitate users in setting and achieving their wellness goals.
**Community Connection**:
- Foster a sense of community by incorporating features that allow users to connect, share achievements, and support each other.
- Create a platform where users can join groups with similar interests, fostering a supportive and motivating environment.
**Positive Impact on Local Businesses**:
- Integrate a system that not only benefits users but also positively impacts local businesses.
- Consider partnerships with local establishments to offer exclusive discounts or coupons to users achieving certain milestones.
**Gamification for Motivation**:
- Utilize gamification elements to make the wellness journey more enjoyable and motivating.
- Reward users with points, badges, or virtual incentives for completing tasks, achieving goals, and actively participating in the community.
## How we built it
Frontend: Angular.js, React, HTML, JavaScript. Backend: Python, MySQL, ML, HTML.
## Challenges we ran into
We faced challenges while implementing the Google Fit API, as acquiring the OAuth client ID was a task none of us had previously encountered. This aspect proved to be both challenging and time-consuming for our team.
## Accomplishments that we're proud of
We take pride in successfully incorporating AI to enrich and support individuals on their journey toward well-being. Our achievement is reflected in offering a service that not only benefits individuals but also has a positive impact on the surrounding community.
## What we learned
**API Implementation**:
- Overcoming the hurdle of implementing the Google Fit API proved challenging. Obtaining the OAuth client ID was a novel task for our team, leading to a significant learning curve and consuming valuable time.
**Machine Learning**:
- We significantly improved our capabilities in creating machine learning models. This encompassed the adept selection and utilization of data for training purposes.
**Model Training**:
- Our learning journey included acquiring the skills to train models, ranging from image-based to audio-based models. We recognized the importance of strategic decisions, such as the choice of overlapping coefficients, in achieving optimal model performance.
## What's next for FITnFLEX
* Implement the exercise tracker for all sorts of exercises: not only push-ups but also sit-ups, squats, jumping jacks, etc.
* Create a group system where people can share their achievements and completed tasks with friends and congratulate one another.
* Expand it to a mobile app and make it more social: people can sign up for special events with lots of consecutive tasks to complete, plus leaderboards.
* Offer the possibility to organize groups based on specific objectives and/or shared interests.
* Offer a personalised chatbot, which helps guide people to sign up for challenges and/or groups depending on their interests.
|
winning
|
## ✨ Inspiration
Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are rarely taught in school, and even less so in third-world countries, even though, used right, they can help many people rise above the poverty line. We seek to help students and adults learn more about stocks, understand what drives companies' stock values up or down, and use that information to make more informed decisions.
## 🚀 What it does
Users are guided to a search bar where they can search for a company's stock, for example "AAPL", and almost instantly see the stock price over the last two years as a line graph, with green and red dots spread along it. When they hover over a dot, a green dot explains why there is a general increasing trend in the stock (with a news article to back it up), along with the price change from the previous day and what it is predicted to be. A company image also shows up beside the graph.
## 🔧 How we built it
When a user enters a stock name, the app calls the Yahoo Finance API and gets the stock price data from the last 3 years. It converts the data to a JSON file served on localhost:5000; then, using Flask, we expose it as our own API that populates Chart.js with the stock data (a sketch of this endpoint follows). Using a MATLAB server, we then find the areas of most significance, where the absolute value of the slope exceeds a threshold. Those data points are marked green if the change is positive or red if it is negative. The specific dates at those points are fed to Gemini, which is asked why it thinks the stock shifted as it did and why the price changed that day. Gemini also handles a second request: a phrase that makes it easy for the JSON image-search API to find a photo of the company, which is then shown on screen.
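As a sketch of that stock-history endpoint, assuming the yfinance package; our real route names and JSON shape may differ:

```python
# Sketch: Flask endpoint serving a ticker's price history as JSON.
from flask import Flask, jsonify
import yfinance as yf

app = Flask(__name__)

@app.route("/api/history/<ticker>")
def history(ticker: str):
    hist = yf.Ticker(ticker).history(period="3y")
    return jsonify({
        "ticker": ticker.upper(),
        "dates": [d.strftime("%Y-%m-%d") for d in hist.index],
        "close": hist["Close"].round(2).tolist(),
    })

if __name__ == "__main__":
    app.run(port=5000)  # the localhost:5000 JSON mentioned above
```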
## 🤯 Challenges we ran into
Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. Getting stock data into a MATLAB server also took a lot of time, as it was the first time any of us had used it. POST and fetch commands were new to us and took a lot of time to get used to.
## 🏆 Accomplishments that we're proud of
* Connecting a prompt to a well-crafted stock portfolio
* Learning MATLAB in a time crunch
* Connecting all of our APIs successfully
* Making a website that we believe has serious positive implications for the world
## 🧠 What we learned
* MATLAB integration
* Flask integration
* Gemini API
## 🚀What's next for StockSee
* Incorporating it on different mediums such as VR, so users can see in real time how stocks shift in front of them in an interactive way.
* Making a small questionnaire on different aspects of a stock to ask whether it is good to buy at the time.
* Using modern portfolio theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them.
|
# 🌍 FootPrint Mayhem
## 🚀 Inspiration
Climate change is daunting, and it often feels like individual efforts don’t matter. That’s why we created *FootPrint Mayhem* – a fun, addictive way to turn eco-friendly actions into a game! Think Duolingo, but for saving the planet.
---
## 🎮 What It Does
FootPrint Mayhem makes sustainability part of your daily routine with these features:
* **Track** your daily carbon footprint.
* **Take quizzes** to learn about eco-friendly habits.
* **Earn points and streaks** for consistent actions.
* **Compete with friends** on a leaderboard.
Every small step counts, and we make it engaging and rewarding!
---
## 🛠️ How We Built It
Our tech stack:
* **React** for a smooth, responsive UI.
* **Tailwind CSS** for easy, stylish design.
* **Firebase** for backend and authentication (with some creative improvisation!).
* **Express** for quick backend logic.
We kept things modular with components like Dashboard, UserInputForm, and LevelQuiz for maintainability.
---
## 🧩 Challenges
* **Component integration** felt like a jigsaw puzzle.
* **Balancing education and fun** was tricky.
* **Last-minute pivots** when libraries failed us.
* **Simplifying complex data** without overwhelming users.
---
## 🏆 Accomplishments
* A functional **dashboard** that makes carbon data interesting.
* A **streak system** to keep users coming back.
* **Quizzes** that are both fun and educational.
* A project that promotes real-world habit changes.
---
## 🎓 What We Learned
* **Flexibility** is key – when tools fail, adapt!
* How to make **data visualizations** user-friendly.
* **Gamification works** in driving positive habits.
* Saving the planet *can* be fun with the right approach!
---
## 🔮 What’s Next?
* **Detailed tracking** of more eco-friendly actions.
* **Social features**: challenge friends and share progress.
* **Real-world rewards** through partnerships with local businesses.
* Expanding **quiz topics** to deepen sustainability education.
* **Mobile apps** to track progress on the go!
---
🌱 *FootPrint Mayhem: Making the world greener, one streak at a time!*
|
## Inspiration
Our main inspiration came from competing in stock market competitions. One of the best ways to find an edge was effectively adjusting a portfolio around up-and-coming news. To make this process more efficient, myStocks serves as a tool that immediately compiles reaction-worthy news and sorts the positive from the negative.
## What it does
We scrape top news sites around the globe to compile an in-depth list of articles for each stock in your portfolio. Through machine learning, we analyze the composition of each article and refine the list to the top 6 most reaction-worthy positive and negative links. Users can follow the links for a comprehensive look into what the future holds for their investment portfolio!
## How we built it
The web platform was built using a mixture of HTML and JavaScript, supported by Google's Firebase API. The web scraper was built in Java, and the system we used to rate articles (the "sentiment analyzer") was built in Python.
## Challenges we ran into
Bringing Firebase into our project and coordinating the different parts of the application proved to be difficult. In particular, integrating the web crawler and sentiment analyzer into the web interface was quite challenging. The storage and handling of data within the context of the web interface was also new to us.
## Accomplishments that we're proud of
Our project involved a number of skills and technologies that many of us were new to or unfamiliar with. While the learning curve was steep, we are proud of what we were able to learn in such a short period of time. We are also proud of the efficiency as well as effectiveness of our web scraper and sentiment analyzer.
## What we learned
Aside from a slew of new technologies, we also learned how to stay calm under stress and make things work no matter what. We also got the wonderful opportunity to get familiar with the hard floors : )
|
partial
|
## Inspiration
Our journey to creating this project stems from a shared realization: the path from idea to execution is fraught with inefficiencies that can dilute even the most brilliant concepts. As developers with a knack for turning visions into reality, we've faced the slow erosion of enthusiasm and value that time imposes on innovation. This challenge is magnified for those outside the technical realm, where a lack of coding skills transforms potential breakthroughs into missed opportunities. Harvard Business Review and TechCrunch analyzed Y Combinator startups and found that around 40% of founders are non-technical.
Drawing from our experiences in fast-paced sectors like health and finance, we recognized the critical need for speed and agility. The ability to iterate quickly and gather user feedback is not just beneficial but essential in these fields. Yet, this process remains a daunting barrier for many, including non-technical visionaries whose ideas have the potential to reshape industries.
With this in mind, we set out to democratize the development process. Our goal was to forge a tool that transcends technical barriers, enabling anyone to bring their ideas to life swiftly and efficiently. By leveraging our skills and insights into the needs of both developers and non-developers alike, we've crafted a solution that bridges the gap between imagination and tangible innovation, ensuring that no idea is left unexplored due to the constraints of technical execution.
This project is more than just a tool; it's a testament to our belief that the right technology can unlock the potential within every creative thought, transforming fleeting ideas into impactful realities.
## What it does
Building on the foundation laid by your vision, MockupMagic represents a leap toward democratizing digital innovation. By transforming sketches into interactive prototypes, we not only streamline the development process but also foster a culture of inclusivity where ideas, not technical prowess, stand in the spotlight. This tool is a catalyst for creativity, enabling individuals from diverse backgrounds to participate actively in the digital creation sphere.
The user can upload a messy paper sketch to our website. MockupMagic will then digitize the low-fidelity prototype into a high-fidelity replica with interactive capabilities. The user can also see code alongside the generated mockups, which serves both as a bridge to tweak the generated prototype and as a learning tool, gently guiding users toward deeper technical understanding. Moreover, the integration of a community feedback mechanism through the Discussion tab directly within the platform enhances the iterative design process, allowing for real-time user critique and collaboration.
MockupMagic is more than a tool; it's a movement towards a future where the digital divide is narrowed, and the translation of ideas into digital formats is accessible to all. By empowering users to rapidly prototype and refine their concepts, we're not just accelerating the pace of innovation; we're ensuring that every great idea has the chance to be seen, refined, and realized in the digital world.
## How we built it
**Conceptualization:** The project began with brainstorming sessions where we discussed the challenges non-technical individuals face in bringing their ideas to life. Understanding the value of quick prototyping, especially for designers and founders with creative but potentially fleeting ideas, we focused on developing a solution that accelerates this process.
**Research and Design:** We conducted research to understand the needs of our target users, including designers, founders, and anyone in between who might lack technical skills. This phase helped us design a user-friendly interface that makes it intuitive to upload sketches and receive functional web mockups.
**Technology Selection:** Choosing the right technologies was crucial. We decided on a combination of advanced image processing and AI algorithms capable of interpreting hand-drawn sketches and translating them into HTML, CSS, and JavaScript code. We leveraged and fine-tuned existing AI models from MonsterAPI and the GPT API, tailoring them to our specific needs for better accuracy in digitizing sketches.
**Development:** The development phase involved coding the backend logic that processes the uploaded sketches, integrating the AI model for sketch interpretation, and building the frontend for a seamless user experience. We used the Reflex platform to build our user-facing website, capitalizing on its intuitive Python-like web development tools.
**Testing and Feedback:** Rigorous testing was conducted to ensure the accuracy of the mockups generated from sketches. We also sought feedback from early users, including designers and founders, to understand how well the tool met their needs and what improvements could be made.
## Challenges we ran into
We initially began by building off our own model, hoping to aggregate quality training data mapping hand-drawn UI components to final front-end components, but we quickly realized this data was very difficult to find and hard to scrape for. Our model performs well for a few screens; however, it still struggles to establish connections between multiple screens or more complex actions.
## Accomplishments that we're proud of
Neither of us had much front-end or back-end experience going into this hackathon, so we made it a goal to use a framework that would give us experience in this field. After learning about Reflex during our initial talks with sponsors, we were amazed that web apps could be built in pure Python and wanted to jump right in. Using Reflex was an eye-opening experience because we were not held back by preconceived notions of traditional web development; we got to enjoy learning about Reflex and how to build products with it. Reflex's novelty also means LLM coding tools know little about it, which helped us solidify our programming skills through reading documentation and creative debugging, skills that are almost being abstracted away by LLM coding tools. Finally, our favorite part about doing hackathons is building products we enjoy using. It helps us stay aligned with the end user while giving us personal incentives to build the best hack we can.
## What we learned
Through this project, we learned that we aren’t afraid to tackle big problems in a short amount of time. Bringing ideas on napkins to full-fledged projects is difficult, and it became apparent hitting all of our end goals would be difficult to finish in one weekend. We quickly realigned and ensured that our MVP was as good as it could get before demo day.
## What's next for MockupMagic
We would like to fine-tune our model to handle more edge cases in handwritten UIs. While MockupMagic can handle a wide range of scenarios, we hope to perform extensive user testing to figure out where we can improve our model the most. Furthermore, we want to add an easy deployment pipeline to give non-technical founders even more autonomy without knowing how to code. As we continue to develop MockupMagic, we would love to see the platform being used even at TreeHacks next year by students who want to rapidly prototype to test several ideas!
|
## Where we got the spark?
**No one is born without talents**.
Many of us faced this situation in our childhood: no one gets a chance to reveal their skills or be guided on their ideas. Some skills are buried without proper guidance; we don't even have mates to talk with and develop our skills in the respective field. Even in college, beginners have trouble with implementation. So we started working on a solution to help others who find themselves in this same crisis.
## How it works?
**Connect with neuron of your same kind**
To address this problem, we bridge bloomers in each field to experts and to people in the same field who need a teammate or a friend to develop the idea with. Through the guidance of experts and experienced professors, they can become aware of the resources needed to develop themselves in that field.
Users can also connect with people all over the globe using a language translator, which makes everyone feel at home.
## How we built it
**1.Problem analysis:**
We surveyed problems in education around the globe and chose one whose solution addresses several of them at once.
**2.Idea Development:**
We examined the problems, missing features, and existing solutions for the topic we chose, resolved as many open questions as possible, and developed the idea as far as we could.
**3.Prototype development:**
We developed a working prototype and gained good experience building it.
## Challenges we ran into
Our plan is to bring our application to every bloomer and expert, but what will make them join our community? It will be hard to convince them that our application will help them learn new things.
## Accomplishments that we're proud of
The jobs that are popular today may not be popular in 10 years. Our world always looks for a better version of its current self. We are satisfied that our idea will help hundreds of children like us who don't even know about the new things in today's world. Our application may help them learn these things earlier than usual, which may help them follow a path they are interested in. We are proud to be part of their development.
## What we learned
We learnt that many people suffer from a lack of help for their idea/project, and we felt helpless when we learnt this. So we planned to build a web application that helps them develop their project/idea with experts and people of their own kind. So, **guidance is important. No one is born a pro.**
We learnt how to help people understand new things based on their interests by guiding them along the path to their dream.
## What's next for EXPERTISE WITH
We're planning to advertise our web application through social media and help people all over the world who are unable to get help developing and implementing their ideas/projects.
|
## Inspiration
Promoting yourself for a professional opportunity can be intimidating to first time job seekers. We've all had those anxious encounters which we won't forget. We want to make it easier for the next cohort of interviewees.
## What it does
The features of our project include:
--> Finding communities of like-minded people through similar majors, ages, and locations
--> Providing live opportunities to practice elevator pitches
--> Getting peer review by uploading one's resume
## How we built it
Our technology stack spans over 6,000 lines of code using Django, Python, HTML, CSS, JavaScript, AJAX, SQLite, and JSON. We incorporated our sponsor Agora's offering by implementing voice and chat calls.
## Challenges we ran into
Unfamiliarity with our sponsors' offerings and how to integrate them within our project. Our plans continued to evolve, even changing during the second day. We also had GitHub issues when merging our code. We had to adapt to updating deprecated code to a newer version while deciding which features we wanted to showcase. Lastly, we had to endure mixing our differing languages while melding our various code and features into a working product.
## Accomplishments that we're proud of
Despite our issues and experiences, we remain resilient and are finishing this journey together.
We're also super excited that we can present a working demo. We're proud to have bonded over this special opportunity and use our time effectively.
## What we learned
Through learning each other's different strengths, we found areas we can improve upon. We found a balance between working during the hackathon and learning from the workshops and from each other. For instance, this was a good opportunity for us to be exposed to different languages, work more with Git and the command line, and collaborate on a project together. We also got to learn about sponsors' products that we can use at a future hackathon, and how to approach a project differently next time.
## What's next for ResuMeet
We want to incorporate AI agent functionality, such as improving its accuracy and allowing AI review of resumes. Additionally, we hope to expand the variety of majors that our users can pick from, as well as enrich their profile and community preferences. Lastly, we want to move to a React framework.
|
partial
|
## Inspiration
The inspiration for "KAY/O" (Keep An Eye Out) came from the constant anxiety of leaving belongings unattended, even for a few minutes. The need for a reliable solution to watch over personal items during short absences sparked the idea behind this innovative security application.
## What it does
KAY/O is a cutting-edge software solution that employs advanced object recognition to keep a vigilant eye on your belongings. Whether it's a quick bathroom break or a short trip away, KAY/O ensures the safety of your valuables, providing real-time peace of mind.
## How we built it
We built KAY/O using state-of-the-art object recognition technology, leveraging robust algorithms and frameworks to create a seamless and effective security system. The application integrates with your devices to monitor and safeguard your belongings intelligently.
## Challenges we ran into
While developing KAY/O, we faced challenges in optimizing the object recognition algorithms for real-time performance.
## Accomplishments that we're proud of
We are proud to have successfully created KAY/O, a solution that addresses a common source of anxiety for many individuals. The seamless integration of object recognition technology and the user-friendly interface reflects our commitment to providing a reliable and efficient security tool.
## What we learned
Throughout the development of KAY/O, our team gained valuable insights into the complexities of real-time object recognition and the importance of creating a user-friendly experience. We deepened our understanding of security application development and the impact it can have on users' peace of mind.
## What's next for KAY/O
Looking ahead, we envision expanding KAY/O with additional features such as customizable alerts, integration with smart home devices, and continuous improvement in object recognition accuracy. Our goal is to make KAY/O the go-to solution for individuals seeking a trustworthy guardian for their belongings.
|
## Inspiration
Love is in the air. PennApps is not just about coding, it’s also about having fun hacking! Meeting new friends! Great food! PING PONG <3!
## What it does
When you navigate anywhere in your browser, it will remind you about how great PennApps was!
|
## Inspiration
As international students in the United States, we've faced the daunting challenge of navigating an unfamiliar and complex healthcare system. Far from home and family, we've experienced firsthand how expensive and difficult it can be to access proper healthcare. This struggle inspired us to create VitalPath - a platform designed to bridge the gap between individuals and healthcare resources, ensuring people can stay informed about their health and seek attention before situations become critical.
## What it does
VitalPath is a user-friendly web platform that:
* Collects self-reported health data from users
* Uses AI to analyze symptoms and predict potential health conditions
* Provides recommendations for managing health concerns
* Connects users to remote healthcare resources when urgent attention is needed
## How we built it
Our development process involved several key components:
* Data Source: We utilized datasets from the CDC (Centers for Disease Control and Prevention) to train our machine learning model.
* Machine Learning: We developed a custom AI model capable of analyzing health data and predicting potential conditions (a rough sketch follows this list).
* Frontend: We built a responsive and intuitive user interface using React.
* Backend: While we haven't fully implemented the API yet, we've laid the groundwork for integrating our ML model with the web application.
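The writeup doesn't name the model or framework, so the following is only a minimal sketch of how a symptom-to-condition classifier might be trained on tabular CDC-style data; the file name and column names are placeholders, not VitalPath's real schema.

```python
# Hypothetical sketch: train a symptom -> condition classifier on tabular data.
# The CSV path and column names are placeholders, not VitalPath's real schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("cdc_symptoms.csv")           # e.g. one row per case report
X = df.drop(columns=["condition"])             # binary symptom indicators
y = df["condition"]                            # diagnosed condition label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```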
## Challenges we faced
Throughout the development of VitalPath, we encountered several challenges:
* **Data Complexity**: Working with health-related data required careful handling and interpretation.
* **ML Integration**: Figuring out how to effectively integrate a machine learning model into a web application proved to be a complex task.
* **Time Constraints**: Our ambitious goals were hampered by the hackathon's time limits, preventing us from fully implementing the API as initially planned.
## Accomplishments that we're proud of
Despite the challenges, we're proud of several achievements:
* Developing a functional website that lays the foundation for our vision
* Training a machine learning model using real-world CDC data
* Creating a user-friendly interface for inputting health data
* Learning and applying new skills in a high-pressure, time-constrained environment
## What we learned
This project was a significant learning experience for our team:
* Training and working with machine learning models for health predictions
* Developing responsive web applications using React
* Understanding the complexities of healthcare data and its applications
* Collaborating effectively as a team under tight deadlines
* Project planning and management in a hackathon setting
## What's next for VitalPath
We're excited about the future of VitalPath. Our next steps include:
* Completing the integration of our trained model with the web application
* Implementing and optimizing the API for efficient data processing
* Continually refining our machine learning model to improve prediction accuracy
* Expanding our platform with additional features like health education resources
* Exploring partnerships with healthcare organizations to enhance our service offerings
Our ultimate goal is to evolve VitalPath into a comprehensive platform for health monitoring and assistance, with a particular focus on serving underserved communities and individuals navigating unfamiliar healthcare systems.
|
partial
|
[project video demo](https://github.com/R1tzG/SignSensei/assets/86858242/40b4d428-f614-4800-8151-0d3d9c74f5af)
## Inspiration
In an increasingly interconnected world, one of the most important skills we can acquire is the ability to communicate effectively with people from diverse backgrounds and abilities. American Sign Language (ASL) is a language used by millions of deaf and hard-of-hearing individuals around the world. However, there are still significant barriers preventing many from learning and using ASL. Our project, SignSensei, aims to break down those barriers, making it easier and faster for anyone to learn ASL, as well as other sign languages. We hope to promote inclusivity through communication for all.
## What it does
SignSensei is a web application that gamifies the process of learning sign language. Using the webcam on your laptop (or the front-facing camera on your phone), our app can detect the sign you are making with your hand and tell you whether it is correct. You will be able to see yourself on the screen, as well as a lattice representation of your hand. This makes it easy to monitor your hands and make sure you are getting the signs right. The demo lesson (see video) teaches you the ASL alphabet.
## How we built it
Our sign language detection system is built in two parts. First we collect hand landmark coordinates using the Mediapipe machine learning library. We then pass the extracted coordinates through a custom fully connected neural network that we trained on a dataset of ASL signs. This approach allows us to detect signs from the webcam feed with high precision and accuracy (97% test accuracy on the custom model).
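As a rough illustration of this two-stage pipeline (the layer sizes, the 26-letter output, and the weights file below are simplified assumptions, not the team's exact code):

```python
# Sketch of the two-stage pipeline: MediaPipe hand landmarks -> dense classifier.
# Layer sizes and the 26-class output (A-Z) are illustrative assumptions.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(63,)),               # 21 landmarks x (x, y, z)
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"),
])
# model.load_weights("asl_sign_model.h5")      # hypothetical trained weights

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        coords = np.array([[p.x, p.y, p.z] for p in lm]).flatten()[None, :]
        print("predicted letter index:", int(model.predict(coords).argmax()))
cap.release()
```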
The sign detection system outlined above forms the backbone of our app. We also developed an interactive front-end with Streamlit, which serves lessons to users.
## Challenges we ran into
Developing an accurate detection model was a significant challenge; our first few attempts fell short in accuracy. We were eventually able to train a fast and accurate model for the task. Our final model is very simple but performant, made up primarily of Dense layers.
Another challenge we ran into was developing the user interface. At first, we looked at using React, but found it difficult to integrate TensorFlow and OpenCV seamlessly. We decided to switch gears and develop our front-end with Streamlit, leveraging the power of the Python programming language.
## Accomplishments that we're proud of
We are very proud of the powerful sign detection algorithm that we developed. Along with the use case that we found for ASL, the algorithm can easily be expanded to other sign languages, as well as applications in gesture recognition and VR gaming.
## What we learned
Through this project, we learned how to use TensorFlow to train machine learning models, as well as how they can be implemented in JavaScript (even if this part didn't make it into the final application). We also learned about different ways to make a front-end, from vanilla JS and React to solutions such as Flask.
## What's next for SignSensei
We're not done yet! We plan to add more interactive lessons to the app as well as add support for more sign languages.
View our slideshow [here](https://www.canva.com/design/DAFuAQrskMQ/y0TeL7Q-odr6c6klXBmfXA/view?utm_content=DAFuAQrskMQ&utm_campaign=designshare&utm_medium=link&utm_source=publishsharelink)
|
## Inspiration
We wanted to provide an easy, interactive, and ultimately fun way to learn American Sign Language (ASL). We had the opportunity to work with the Leap Motion hardware which allowed us to track intricate real-time data surrounding hand movements. Using this data, we thought we would be able to decipher complex ASL gestures.
## What it does
Using the Leap Motion's motion tracking technology, it prompts the user to replicate various ASL gestures. With real-time feedback, it tells the user how accurate their gesture was compared to the actual hand motion. Using this feedback, users can immediately adjust their technique and ultimately perfect their ASL!

## How I built it
Web app using JavaScript, HTML, and CSS. We had to train our data using various machine learning repositories to ensure accurate recognition, as well as other plugins which allowed us to visualize the hand movements in real time.
## Challenges I ran into
Training the data was difficult, as gestures are complex forms of data, composed of many different data points in the hand's joints and bones as well as in the progression of hand "frames". As a result, we had to take in a lot of data to ensure a thorough dataset that matched these features to the correct ASL label (or phrase).
## Accomplishments that I'm proud of
The user interface. Training the data. Working on a project that could actually impact others!
## What I learned
Hard work and dedication. Computer vision. Machine Learning.
## What's next for Leap Motion ASL
More words? Game mode? Better training? More phrases? More complex combos of gestures?

|
## Inspiration
We wanted to create a webapp that will help people learn American Sign Language.
## What it does
SignLingo starts by giving the user a phrase to sign. Using the user's webcam, it captures the input and decides whether the user signed the phrase correctly. If so, it moves on to the next phrase; if not, it displays a video of the correct signing of the word.
## How we built it
We started by downloading and preprocessing a word-to-ASL video dataset.
We used OpenCV to process the frames and compare the user's input video's frames to the actual signing of the word. We used MediaPipe to detect the hand movements and Tkinter to build the front-end.
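The exact comparison metric isn't described in the writeup; one plausible minimal version of the frame-by-frame idea, using MediaPipe hand landmarks and an arbitrary distance threshold, might look like this:

```python
# Hedged sketch of frame-by-frame comparison between a user's video and a
# reference ASL video using MediaPipe hand landmarks. Threshold is arbitrary.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmarks(frame):
    res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not res.multi_hand_landmarks:
        return None
    pts = res.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y] for p in pts])

def frame_distance(user_frame, ref_frame):
    u, r = landmarks(user_frame), landmarks(ref_frame)
    if u is None or r is None:
        return np.inf
    return float(np.linalg.norm(u - r, axis=1).mean())

# A sign "matches" when the average per-landmark distance stays small, e.g.:
# correct = frame_distance(user_frame, ref_frame) < 0.08
```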
## Challenges we ran into
We definitely had a lot of challenges, from downloading compatible packages and incorporating models to creating a working front-end to display our model.
## Accomplishments that we're proud of
We are so proud that we actually managed to build and submit something. We couldn't build what we had in mind when we started, but we have a working demo which can serve as the first step towards the goal of this project. We had times where we thought we weren't going to be able to submit anything at all, but we pushed through and now are proud that we didn't give up and have a working template.
## What we learned
While working on our project, we learned a lot of things, ranging from ASL grammar to how to incorporate different models to fit our needs.
## What's next for SignLingo
Right now, SignLingo is far away from what we imagined, so the next step would definitely be to take it to the level we first imagined. This will include making our model be able to detect more phrases to a greater accuracy, and improving the design.
|
partial
|
## Inspiration
We wanted to get home safe
## What it does
Stride pairs you with walkers just like UBC SafeWalk, but outside of campus grounds, to get you home safe!
## How we built it
React Native, Express JS, MongoDB
## Challenges we ran into
Getting environment setups working
## Accomplishments that we're proud of
Finishing the app
## What we learned
Mobile development
## What's next for Stride
Improve the app
|
## Inspiration
Our project is inspired by the sister of one of our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT’s Accessible Routes interactive map to see where the wheelchair friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations.
We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files.
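As a rough sketch of that filtering step (the regex and file names are assumptions, since the original script isn't shown):

```python
# Hypothetical sketch of the filtering script: pull decimal coordinates out of
# scraped JavaScript and write latitudes/longitudes to separate text files.
import re

with open("accessible_routes.js") as f:        # scraped map code (placeholder name)
    source = f.read()

# Match "lat, lng" pairs such as 42.3592, -71.0935
pairs = re.findall(r"(-?\d+\.\d+)\s*,\s*(-?\d+\.\d+)", source)

with open("latitudes.txt", "w") as lat_f, open("longitudes.txt", "w") as lng_f:
    for lat, lng in pairs:
        lat_f.write(lat + "\n")
        lng_f.write(lng + "\n")
```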
We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio.
After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map.
The next step was learning how to change the color and image associated with each marker, which was very time intensive, but led us to having our custom logo for each of the markers.
Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map.
Then we combined both of the apps together atop of the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add.
A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together.
We are proud of how effectively we were able to split up our team’s roles based on everyone’s unique skills. In this way, we were able to be maximally productive and play to our strengths.
We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adopt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap
If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world.
|
## 💡Inspiration💡
According to statistics, hate crimes and street violence have increased exponentially, and the violence does not end there. Many oppressed groups face physical and emotional racial hostility in the same way. These crimes harm not only the victims but also people who share a similar identity. Aside from racial identities, all genders reported feeling more anxious about exploring the outside environment due to higher crime rates. After witnessing an upsurge in urban violence and fear of the outside world, we developed Walk2gether, an app that addresses the issue of feeling unsafe when venturing out alone and fundamentally alters the way we travel.
## 🏗What it does🏗
It offers a remedy to the stress that comes with walking outside, especially alone. Since travelling with friends lessens anxiety, the app lets users pair up, and it surfaces information about local criminal activity to help people make informed travel decisions. Users can adjust settings to be warned of specific situations, and heat-map technology displays red alert zones in real time, allowing users to chart their route comfortably. Its campaign for social change is closely tied to our desire to see more people, particularly women, go outside without being burdened by fear of their surroundings.
## 🔥How we built it🔥
How can we make women feel more secure while roaming about their city? How can we bring together student travellers for a safer journey? These questions helped us outline the issues we wanted to address as we moved into the design stage. We then created a website using HTML/CSS/JS and used Figma to prepare the prototype. We used Auth0 for multi-factor authentication. CircleCI lets us deploy the website through a smooth, easy-to-verify pipeline. AssemblyAI handles speech transcription and works with Twilio messaging to connect friends for the journey to a destination. Twilio SMS is also used for alerts and notifications. We used Coil for membership via Web Monetization, as well as for donations to fund better safety-route facilities.
## 🛑 Challenges we ran into🛑
The problem we encountered was market viability - there are many safety and crime reporting apps on the app store. Many of them, however, were either paid, had poor user interfaces, or did not plan routes based on reported incidents. The challenging part was coming up with a solution, because there were many additional features that could have been included, and we had to pick the handful most critical to getting started with the product.
Our team also began working on the hack a day before the deadline, and we ran into some difficulties while tackling numerous problems. Learning how to work with various technologies came with a learning curve. We have ideas for other features that we'd like to include in the future, but we wanted to make sure that what we had was production-ready and had a pleasant user experience first.
## 🏆Accomplishments that we're proud of: 🏆
We arrived at a solution to this problem and created an app that is very viable and could be widely used by women, college students, and any other frequent walkers!
We also completed the front end and back end within the tight deadlines we were given, and we are quite pleased with the final outcome. We are also proud that we learned so many technologies and completed the whole project with just two members on the team.
## What we learned
We discovered critical safety trends and pain points that our product may address. Over the last few years, urban centres have seen a significant increase in hate crimes and street violence, and the internet has made individuals feel even more isolated.
## 💭What's next for Walk2gether💭
Some of the features we would incorporate in the coming days are detailed crime mapping and additional facts to help users learn about the crimes happening around them.
|
winning
|
## Inspiration
Have you ever wanted to search something, but aren't connected to the internet? Data plans too expensive, but you really need to figure something out online quick? Us too, and that's why we created an application that allows you to search the internet without being connected.
## What it does
Text your search queries to (705) 710-3709, and the application will text back the results of your query.
Not happy with the first result? Specify a result using the `--result [number]` flag.
Want to save the URL to view your result when you are connected to the internet? Send your query with `--url` to get the url of your result.
Send `--help` to see a list of all the commands.
## How we built it
Built on a **Node.js** backend, we leverage **Twilio** to send and receive text messages. When a text message arrives, we run the query through **RapidAPI**'s **Bing Search API**.
Our backend is **dockerized** and deployed continuously using **GitHub Actions** onto a **Google Cloud Run** server. Additionally, we make use of **Google Cloud's Secret Manager** to not expose our API Keys to the public.
Internally, we use a domain registered with **domain.com** to point our text messages to our server.
## Challenges we ran into
Our team is very inexperienced with Google Cloud, Docker, and GitHub Actions, so deploying our app to the internet was a challenge. We recognized that without deploying, we could not allow anybody to demo our application.
There was a lot of configuration with permissions and service accounts that had a learning curve. Accessing our secrets from our backend, and ensuring that the backend is authenticated to access the secrets, was a huge challenge.
We also have varying levels of skill with JavaScript. It was a challenge trying to understand each other's code and collaborating efficiently to get this done.
## Accomplishments that we're proud of
We honestly think that this is a really cool application. It's very practical, and we can't find any solutions like this that exist right now. There was not a moment where we dreaded working on this project.
This is the most well-planned project that we've all made for a hackathon. We were always aware of how our individual tasks contributed to the project as a whole. When working on an important part of the code, we would pair program together, which accelerated our understanding.
Continuously deploying is awesome! Not having to click buttons to deploy our app was really cool, and it really made our testing in production a lot easier. It also reduced a lot of potential user errors when deploying.
## What we learned
Planning is very important in the early stages of a project. We could not have collaborated so well together, and separated the modules that we were coding the way we did without planning.
Hackathons are much more enjoyable when you get a full night sleep :D.
## What's next for NoData
In the future, we would love to use AI to better suit the search results of the client. Some search results have a very large scope right now.
We would also like to have more time to write some tests and have better error handling.
|
## Inspiration
Our team identified two intertwined health problems in developing countries:
1) Lack of easy-to-obtain medical advice due to economic, social and geographic problems, and
2) Difficulty of public health data collection in rural communities.
This weekend, we built SMS Doc, a single platform to help solve both of these problems at the same time. SMS Doc is an SMS-based healthcare information service for underserved populations around the globe.
Why text messages? Well, cell phones are extremely prevalent worldwide [1], but connection to the internet is not [2]. So, in many ways, SMS is the *perfect* platform for reaching our audience in the developing world: no data plan or smartphone necessary.
## What it does
Our product:
1) Democratizes healthcare information for people without Internet access by providing a guided diagnosis of symptoms the user is experiencing, and
2) Has a web application component for charitable NGOs and health orgs, populated with symptom data combined with time and location data.
That 2nd point in particular is what takes SMS Doc's impact from personal to global: by allowing people in developing countries access to medical diagnoses, we gain self-reported information on their condition. This information is then directly accessible by national health organizations and NGOs to help distribute aid appropriately, and importantly allows for epidemiological study.
**The big picture:** we'll have the data and the foresight to stop big epidemics much earlier on, so we'll be less likely to repeat crises like 2014's Ebola outbreak.
## Under the hood
* *Nexmo (Vonage) API* allowed us to keep our diagnosis platform exclusively on SMS, simplifying communication with the client on the frontend so we could worry more about data processing on the backend. **Sometimes the best UX comes with no UI**
* Some in-house natural language processing for making sense of user's replies
* *MongoDB* allowed us to easily store and access data about symptoms, conditions, and patient metadata
* *Infermedica API* for the symptoms and diagnosis pipeline: this API helps us figure out the right follow-up questions to ask the user, as well as the probability that the user has a certain condition.
* *Google Maps API* for locating nearby hospitals and clinics for the user to consider visiting.
All of this is hosted on a DigitalOcean cloud droplet. The results are hooked through to a Node.js webapp which can be searched for relevant keywords, symptoms, and conditions, and which then displays heatmaps over the relevant world locations.
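For illustration, a single round of the Infermedica diagnosis loop described above looks roughly like the following; the credentials and the symptom ID are placeholders, and exact field names should be checked against Infermedica's docs.

```python
# Rough sketch of one round of the diagnosis loop via Infermedica's REST API.
# App-Id/App-Key values and the symptom ID below are placeholders.
import requests

resp = requests.post(
    "https://api.infermedica.com/v3/diagnosis",
    headers={"App-Id": "YOUR_APP_ID", "App-Key": "YOUR_APP_KEY"},
    json={
        "sex": "female",
        "age": {"value": 30},
        "evidence": [{"id": "s_21", "choice_id": "present"}],  # e.g. headache
    },
)
data = resp.json()
# 'question' drives the next SMS follow-up; 'conditions' carry probabilities.
print(data.get("question"), data.get("conditions"))
```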
## What's next for SMS Doc?
* Medical reports as output: we can tell the clinic that, for example, a 30-year old male exhibiting certain symptoms was recently diagnosed with a given illness and referred to them. This can allow them to prepare treatment, understand the local health needs, etc.
* Epidemiology data can be handed to national health boards as triggers for travel warnings.
* Allow medical professionals to communicate with patients through our SMS platform. The diagnosis system can be continually improved in sensitivity and breadth.
* More local language support
[1] <http://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/>
[2] <http://www.internetlivestats.com/internet-users/>
|
## Inspiration
This was inspired by an article we read about the impact mobile devices are making in developing regions such as Africa. Mobile development in these regions is increasing by the second, and we wanted to take part in that development.
## What it does
This app allows users to get answers to a large variety of questions at almost any time. This is achieved through the use of the Twilio and Wolfram APIs. A user can text the application a question such as: integrate x^2, 2+4, plot(logx), what time is it, etc. Users can also ask for dictionary definitions of words such as economics, ball, and H2O. This is all done by making a request to Wolfram and parsing the data into a readable SMS or MMS message.
## How I built it
This was built by running a server with Node.js/Express and having it listen for a POST request from Twilio. The server uses the query given in the body of the text to make a request to the Wolfram API; it then parses the data and sends it back as an SMS message. If there is an image, it may send an MMS message instead.
## Challenges I ran into
Twilio was a little difficult to get running; although the initial messaging was not too hard, it was difficult to have a message sent back with authentication because it required many things to be set up. Parsing the data and displaying it in a proper format for the response also took some time, because Wolfram can send back a lot of data that may not be relevant.
## Accomplishments that I'm proud of
I am pretty proud that we were able to get this application running, because offline services are something I've never worked with before and always thought were pretty cool. In fact, just getting Twilio to work with the server and being able to have it send pictures was pretty amazing.
## What I learned
Wolfram actually knows everything. A Wolfram request can be amazingly broad; initially we thought we would need to tap into multiple APIs to get a broader range of data, but after further investigation we realized that Wolfram covers a pretty large variety of topics. I also learned a lot about the Twilio API and how to connect Twilio services with a server.
## What's next for AnswerMeThis
Increase the amount of data that is being sent and further improve the response speed, as well as the visualization of the data. We also want to tap into other APIs to give users a broader range of topics to choose from.
|
winning
|
## Inspiration:
According to a journal article published by Nicole Rader in the Oxford Research Encyclopedia of Criminology, 40% of Americans indicated that they were afraid to walk alone at night (Rader 2017). While a close friend or family member may not always be available to talk with you on your way home, Talk the Walk gives users the opportunity to appear as if they are on the phone with someone who can immediately call for help, which brings a little more peace of mind to this otherwise stressful experience.
## What it does:
Talk the Walk gives users a whole new way to add a layer of protection to their walk home, and gives them someone to talk to when no one else is around. When the user calls the number, they are greeted by an automated voice messaging system that not only guides the user through a productive conversation centered around mental health, but also gives the user the opportunity to reflect on their day. Since this conversation is not saved or recorded, Talk the Walk can act as a confidential sounding board for people who may want to vent. Ultimately, we hope that by centering the conversation around self-care and well-being, we can promote more positive conversations about mental health and reduce the stigma surrounding it.
## How we built it:
Talk the Walk was built with Twilio, TwiML, ngrok and Python (Flask).
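A minimal sketch of what such a Flask/TwiML call flow can look like (the prompts and route names are illustrative, not Talk the Walk's actual script):

```python
# Minimal sketch of the call flow: Twilio hits this Flask webhook (exposed via
# ngrok) and we respond with TwiML. Prompts and routes are illustrative.
from flask import Flask
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    response = VoiceResponse()
    gather = Gather(input="speech", action="/reflect", method="POST")
    gather.say("Hi, I'm glad you called. How was your day?")
    response.append(gather)
    return str(response)

@app.route("/reflect", methods=["POST"])
def reflect():
    response = VoiceResponse()
    response.say("Thanks for sharing. Remember to take a deep breath. "
                 "I'll stay on the line while you walk.")
    return str(response)

if __name__ == "__main__":
    app.run(port=5000)
```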
## Challenges we ran into:
Everyone on the team had a variety of strengths, both technical and non-technical. Combining our strengths in a way that complemented each other was a fun puzzle to put together.
## What we learned:
We learned how to program in Python, use the Twilio API, and how to evolve an idea into a fully functioning product.
## What's next for Talk the Walk:
We would like to continue to develop this concept to include more safety and security features - including an option to set a "code word" that the user can say if they ever feel they are in danger, which would trigger a call to emergency services.
|
## Inspiration
We really wanted to use computer vision and AI to do something that might be beneficial to others. We wanted this app to be a reminder to be more self-conscious about what you are doing, and we made it relay data so your friends and family know as well. Imagine crunch time at 10 PM for a project due at 11:59 PM. Turn Stay In The Know on to keep you focused on the task at hand.
## What it does
This project uses machine learning and AI to detect humans and the objects they are holding. We are able to determine what the person is doing and announce the action with text-to-speech. It will also send a text message to anyone you choose, notifying them of what you are doing, anytime, anywhere. For example, when working on that project mentioned earlier, any phone time should be off limits. Our program can detect any usage of phones and warn the user to put the phone back down!
## How we built it
We used OpenCV for our computer vision and TensorFlow for our machine-learning model. We integrated Google's text-to-speech for it to narrate the actions that are being done, and we used Twilio's API to send text messages to notify others about the user's actions.
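As a hedged sketch of how these pieces glue together (the detector is a hypothetical placeholder, and the phone numbers and credentials are dummies):

```python
# Sketch of the notification glue. detect_phone() stands in for the team's
# TensorFlow/OpenCV detector; Twilio credentials and numbers are placeholders.
from gtts import gTTS
from twilio.rest import Client

def detect_phone(frame) -> bool:
    """Hypothetical placeholder for the object-detection model."""
    raise NotImplementedError

def warn_and_notify(client: Client, action: str):
    # Narrate the action locally...
    gTTS(text=f"Detected: {action}. Put the phone down!").save("warning.mp3")
    # ...and text a chosen contact about it.
    client.messages.create(
        body=f"Heads up: {action} detected during focus time.",
        from_="+15550001111",   # placeholder Twilio number
        to="+15550002222",      # placeholder contact
    )

# client = Client("ACCOUNT_SID", "AUTH_TOKEN")
# if detect_phone(frame): warn_and_notify(client, "phone usage")
```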
## Challenges we ran into
The main challenge was getting the text-to-speech to work: it either lagged the computer-vision frames or kept repeating itself. It was very hard to solve both issues, so we compromised as little as we could to ensure the text-to-speech worked without making the computer vision too laggy. There were also a lot of permission errors when storing and reading the audio files used for TTS.
## Accomplishments that we're proud of
We were really proud of what we were able to accomplish with very limited knowledge of computer vision and machine learning. It was our first time using all these new APIs and libraries, but we were able to make them work and combine everything into one coherent application.
## What we learned
We learned a lot about computer vision and machine learning. It was also our first time using Twilio's API to send text messages and our first time using Python for a project like this. It was a steep learning curve to pick up new libraries and read through documentation, but it was a rewarding experience.
## What's next for Stay In The Know
It would be great if we were able to perfect our computer vision application to be seamless and run on the GPU. The actions could be improved for more precise detection, and we could add more messaging formats, such as email and WhatsApp. In the future it should even be able to detect anything in the world, to cater to any event requiring people to Stay In The Know!
|
## Inspiration
We set out to create a tool that unleashes dancers' creativity by syncing their moves with AI-generated music that matches perfectly. Inspired by the vibrant dance scenes on TikTok and Instagram, where beats and moves are inseparable, we wanted to take it to the next level. Imagine dancing to music made just for your style, effortlessly turning your moves into shareable, jaw-dropping videos with custom soundtracks. With our tool, dancers don't just follow the beat, they create it! It's like having your own personal DJ that grooves with you.
## What it does
KhakiAI allows users to upload or record short 6-second dance videos analyzed by our AI-powered system. The AI tracks the dancers' movements, tempo, and style, generating a custom music track that perfectly matches the rhythm and energy of the performance. Users can further customize the music by selecting different genres or adding sound effects. The tool then syncs the music with the video, creating a seamless, high-quality dance video that can be shared directly on social media.
## How we built it
We built this project with a complex tech stack involving several APIs, LLMs, and programming languages. Throughout the process, we broke the task into parts and pieced them together as we went. To begin, we focused on the key functionality of dance-movement recognition with OpenPose/OpenCV. This recognition outputs a JSON that goes into a MongoDB database. Then, we use Llama, Tune AI, and Cerebras to pass the JSON through an LLM with low latency, so the prompt is generated quickly. The Suno API then uses the generated prompt to create music for the video, which we attach with Python and output.
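As a rough, hypothetical illustration of the glue between the pose JSON and the music prompt (the JSON layout and the energy threshold are assumptions, not KhakiAI's actual values):

```python
# Hedged sketch of turning pose keypoints (e.g. OpenPose JSON) into a music
# prompt for an LLM. The JSON layout and energy thresholds are assumptions.
import json
import numpy as np

def motion_energy(frames):
    """Mean per-frame displacement of keypoints across a 6-second clip."""
    pts = np.array(frames)                      # shape: (frames, keypoints, 2)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=-1).mean())

with open("pose_output.json") as f:             # placeholder file name
    frames = json.load(f)["keypoints"]

energy = motion_energy(frames)
style = "high-energy hip hop" if energy > 0.05 else "smooth lo-fi groove"
prompt = f"Instrumental {style} track matching a dance with motion energy {energy:.2f}"
# This prompt would then be refined by the LLM and sent to the Suno API.
print(prompt)
```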
## Challenges we ran into
There were many challenges involved in the creation of this project. Suno AI doesn't offer an official API, so we had to rely on an unofficial version that authenticates with cookies instead of an actual API key, which slowed down our progress.
## Accomplishments that we're proud of
We are proud of building a pipeline in which computer vision detects dance moves and turns them into a text prompt for music generation.
## What we learned
We learned about computer vision, Flask/Next.js integration, and React. We made proper use of version control and gained experience with new AI technologies like Cerebras.
|
losing
|
## Inspiration
“People don’t actually read newspapers. They step into them every morning like a hot bath.” -Marshall McLuhan, 1920
Newspapers in that era served a purpose greater than just a medium to share news, they had a character. A character that could grow the reader's creativity with short stories and poems or challenge their logical thinking by having them do crossword puzzles or sudoku. The charm of a newspaper has been lost as it has become more utilitarian in the modern period thanks to news aggregator apps like google news and apple news. Here's our attempt to use the latest technology to restore the flavor of the newspaper and make it as accessible as any other contemporary news aggregator.
## What it does
Our web app is designed to replicate the look of a newspaper on display, along with the features that give it character. The app gets its character by making the entire experience as personalized as it gets, starting with the general news section, which builds a profile of the user based on how much time they spend on each article and then suggests the latest news around those interests.
We have noticed that contemporary news media portrays the future in the dimmest light possible. As humans consuming this mass media, we have been unconsciously taking steps to make the future less bad rather than more exciting. We want to give users a glimpse of the future, so that they can look forward to it and be excited about it, with the help of **predicted news**. It also shows them how incremental changes in their day-to-day lives can have a monumental impact in the future. The predicted news stories are personalized according to the user's profile.
The user can connect their **Spotify**, and we will show them how songs by their favorite artists are doing around the world. If the user connects their Spotify, we use their top 10 artists to show them **jokes** about those artists, making it a personal experience. If not, we use the USA's top 10 artists to generate jokes. We realized how much we missed the crossword puzzles in the newspapers of our childhood, so we decided to bring them back with a twist: the **crossword** is made from words in the articles throughout the newspaper, and alongside the crossword there are **sudoku** puzzles too. We also remembered how reading **short stories** challenged our creative thinking, giving our imagination a green light to expand on the story. We brought back short stories and **poems** with a twist: both are personalized according to the user's profile. This allows the user to have a very personal experience when reading The Proprium Times.
Along with reading, we wanted to give users the ability to take aesthetic photos with their newspaper on the table. The **ShARe** feature lets the user place an AR 3D visualization of the news article in front of them and share a photo of it on social media.
We built the app with **accessibility** in mind, including a text-to-voice feature for anyone who wants to experience the app through audio.
## How we built it:
Client: For this project, we decided to create a Progressive Web Application using Netlify which helped us accelerate development by connecting to our GitHub repository. The front end of the entire application was programmed in ReactJS with the philosophy of giving users the authentic feeling of a vintage newspaper. We decided to revive certain components which were very common in olden newspapers but got lost in time to further enhance this feeling. We also used turn.js to complete the true sense of turning the newspaper.
Server: We have a Python-based API with 12 endpoints and an instance of a PostgreSQL database that handles most of the features for our web app. It is supported by a library that we wrote ourselves.
The dev environment of the API is public without authentication : <https://domusback.herokuapp.com/get/pred>
It allows us to showcase how our API works in the production environment.
Library:
newScrapper(): Handles all the features associated with gathering news articles for the user.
|\_\_updateGeneralNews(): Populates the storage with news articles that fit the user's profile.
Predictive(): Handles all the features associated with AI-generated text:
|\_\_getPredictiveNews(): Generates a prompt from the user's profile and sends a request to GPT-3, then parses the response and populates the storage.
|\_\_getJokes(): Generates a prompt from the user's Spotify top 10 artists and sends a request to GPT-3, then parses the response and populates the storage.
|\_\_getShortStory(): Generates a prompt from the news articles associated with the user and sends a request to GPT-3, then parses the response and populates the storage. It also searches the internet for content associated with the story.
|\_\_getPoem(): Generates a prompt from the user's profile and sends a request to GPT-3, then parses the response and populates the storage.
generator(): Handles the methods to generate a personalized crossword for the user.
|\_\_getCrossword(): Using NLTK, it extracts the proper nouns from the contents of the news articles associated with the user and scrapes the internet for their definitions, which are used as clues. It then calls the methods from the crossword class to generate the legend for the user's personalized crossword.
/post/userID=?/key=? : When the user loads the app, this request is sent to the API to populate the database with the information needed by the app. This reduces the latency and makes the user experience extremely fluid.
/get/news/userID=?/key=?: it returns a response with the latest news
/get/pred/userID=?/key=?: it returns a response with the predicted news
/get/crossword/userID=?/key=? : it returns a response with legend for the crossword
/get/jokes/userID=?/key=? : it returns a response with the personalized jokes for the user
/get/poems/userID=?/key=? : it returns a response with the personalized poems for the user
/get/top100 : it returns a response with the top 100 songs in the US.
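To illustrate, the endpoints above might map onto Flask routes roughly like this (the handler bodies and key check are placeholders, not the production code):

```python
# Illustrative sketch of how the documented endpoints might map to Flask
# routes; handler bodies and the key check are placeholders.
from flask import Flask, jsonify, abort

app = Flask(__name__)

def authorized(key: str) -> bool:
    return key == "dev"                          # placeholder auth check

@app.route("/get/pred/userID=<user_id>/key=<key>")
def predicted_news(user_id, key):
    if not authorized(key):
        abort(401)
    # In production this would read the pre-populated PostgreSQL rows.
    return jsonify({"user": user_id, "articles": []})

@app.route("/get/top100")
def top100():
    return jsonify({"songs": []})                # top 100 songs in the US
```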
## Challenges we ran into
1. Merge conflicts: At one point, four of us were committing to the web app's repo concurrently, which led to quite a few merge conflicts. Some took a lot of time to fix, as the head and tail of the conflicts weren't being displayed. However, because of our feature ownership, we were able to work through the situation very effectively.
2. Lack of AR libraries for React: Because of the limited options for React AR libraries, and because the ones we found were mostly undocumented, we ended up spending a lot of time figuring out the solution for the ShARe feature. However, echo3D's asset delivery came in clutch and helped us get through it.
3. Deployment pipeline breaking: In the middle of development, our dev environment started throwing issues and eventually collapsed. It broke our deployment pipeline, and it took us another hour to get it up and running again.
## Accomplishments that we're proud of
1. Feature ownership: After brainstorming the idea, everyone picked a feature that closely aligned with their skillset and owned it until it was integrated with the main app. This helped us achieve faster, more effective development cycles.
2. Rapid ideation-development-deployment pipeline: While we were still brainstorming, we started setting up the deployment pipeline, so new features could go live as soon as they were ready.
## What we learned:
As a team, we learned that planning and hackathons don't always go hand in hand. There will be issues, from deployment-pipeline failures to Wi-Fi, and we learned how to adapt and work through the unexpected. Individually: Hasan had never worked on a React web app, so this gave him the opportunity to become confident in his web development skills. Hruday realized Red Bull doesn't solve merge conflicts; he also learned to design 3D models and deploy them using echo3D. Sai learned how to develop a progressive web app and the caveats that come with it. For Ano, it was the first time working with an API with 12 endpoints, setting up an effective pipeline so that new features could be shipped as soon as they came up, and experiencing the heroics of Heroku.
Six of us, with unique skill sets and different ideas about each feature, learned how to come together and work for a better cause. With every brilliant feature we pursued, we encountered new errors, bugs, and issues, from which we had to bounce back with creative solutions. With the time constraints, on top of being stuffed with classes and internships, every single one of us learned to push ourselves to the limit and beyond to make this vision come true.
## What's next for The Daily Prophet
We want to optimize certain features of the app, like the recommendation model and image generation for the predicted news. We would also like to refresh the AR ShARe feature and make it more polished. We envision the app as a solution that will help users lead a more informed lifestyle.
|
## Inspiration
The media we consume daily has an impact on our thinking, behavior, and emotions. If you’ve fallen into a pattern of regularly watching or listening to the news, the majority of what you’re consuming is likely about the coronavirus (COVID-19) crisis.
And while staying up to date on local and national news, especially as it relates to mandates and health updates, is critical during this time, experts say over-consumption of the news can take a toll on your physical, emotional, and mental health.
## What it does
The app first greets users with a screen prompting them to either sign up for an account or sign in to a pre-existing one. With the usual authentication formalities out of the way, the app gets straight to business: our server scrapes oodles of articles from the internet and filters the good from the bad before presenting the user with a smorgasbord of good news.
## How we built it
We used Flutter to create our Android application, Firebase as the database, and Express.js as the backend web framework. With the help of RapidAPI, we fetch lists of top news headlines.
## Challenges we ran into
Initially, we tried to include Google Cloud-based sentiment analysis of each news item, since we wanted to try some new technology. But because the majority of our team members were new to machine learning, we faced too many challenges to even get started, including a lack of available examples, so we limited our app to showing customized positive news. We wanted to add more features during the hacking period, but time constraints held us back.
## Accomplishments that we're proud of
A completely working Android application, integrated with a backend, with contributions from each and every member of the team.
## What we learned
We learned to fetch and upload data to Firebase's Realtime Database through the Flutter application, and to use text-based sentiment analysis via Cloud Natural Language Processing to analyze and rank news by positivity. We also learned the value of teamwork and individual contribution, which is the ultimate key to the success of a project.
## What's next for Hopeful
1. More Customized Feed
2. Update Profile Section
3. Like and Reply to comments
|
## Inspiration
Music is something inherently personal to each and every one of us - our favorite tracks accompany us through our highs and lows, through tough workouts and relaxing evenings. Our aim is to encourage and capture that feeling of discovering that new song you just can't stop listening to. Music is an authentic expression of ourselves, and the perfect way to [meet new people](https://www.lovethispic.com/uploaded_images/206094-When-You-Meet-Someone-With-The-Same-Music-Taste-As-You.jpg) without the clichés of the typical social media platforms we're all sick of. We're both very passionate about reviving the soul of social media, so we were very excited to hear about this track and work on this project!
## What it does
Spotify keeps tabs on the tracks you can't get enough of. Why not make that data work *for you*? With one simple login, ensemble matches you with others who share your musical ear. Using our *state-of-the-art* machine learning algorithms, we show you other users who we think you'd like based on both their and your music taste. Love their tracks? Follow them and stay tuned. ensemble is a new way to truly connect on a meaningful level in an age of countless unoriginal social media platforms.
## How we built it
We wanted a robust application that could handle the complexities of a social network, while also providing us with an extensive toolkit to build out all the features we envisioned. Our frontend is built using [React](https://reactjs.org), a powerful and well-supported web framework that gives us the flexibility to build with ease. We utilized supporting frontend technologies like Bootstrap, HTML, and CSS to help create an attractive UI, the key aspect of any social media. For the backend, we used [Django](https://www.djangoproject.com) and [Django Rest Framework](https://www.django-rest-framework.org) to build a secure API that our frontend can easily interact with. For our recommendation algorithm, we used scikit-learn and numpy to power our machine learning needs. Finally, we used PostgreSQL for our DBMS and Heroku for deployment.
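As a hedged sketch of the matching idea (the audio-feature vectors here are random stand-ins; ensemble's actual feature construction isn't shown):

```python
# Rough sketch of taste matching: represent each user by the mean of their top
# tracks' audio-feature vectors, then find nearest neighbours by cosine
# distance. Feature construction here is an assumption, not ensemble's code.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# user_vectors[i]: mean audio features (e.g. danceability, energy, valence...)
user_vectors = rng.random((50, 8))               # 50 fake users, 8 features

matcher = NearestNeighbors(n_neighbors=6, metric="cosine").fit(user_vectors)
dist, idx = matcher.kneighbors(user_vectors[0:1])
print("closest matches for user 0:", idx[0][1:])  # skip the user themself
```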
## Challenges we ran into
As with most social media platforms, users are key. Given the *very short* nature of hackathons, it obviously isn't feasible to attract a large number of users for development purposes. However, we needed a way to have users available for testing. Since ensemble is based on Spotify accounts and the Spotify API, this proved to be non-trivial. We took advantage of the Spotify API's recommendations endpoint to generate pseudo-data that resembles what a real person would have as their top tracks. With a fake name generator, we created as many fake profiles as we needed to flesh out our recommendation algorithm.
## Accomplishments that we're proud of
Our application is fully ready to use—it has all of the necessary authentication, authorization, and persistent storage. While we'd love to add even more features, we focused on implementing the core ones in their current state (if you use Spotify, feel free to log in and try it out!). You can find the live version [here](https://ensemble-dev.herokuapp.com). Despite all of the hassle of the deployment process, it was very fulfilling to see what we created, live and ready to be used by anyone in the world.
We're also proud of what we've accomplished in general! It's been a challenging yet immensely fulfilling day-and-a-half of ideation, design, and coding. Looking back at what we were able to create during this short time span, we're proud to have something to show for all the effort we've put into it.
## What we learned
We both learned a lot from working on this project. It's been a fast-paced weekend of continuously pushing new changes and features, and in doing so, we sharpened our skills in both React and Django. Additionally, utilizing the Spotify API was something neither of us had done before, and we learned a lot about OAuth 2.0 and web authentication in general.
## What's next for ensemble
Working on this project was a lot of fun, and we'd both love to keep it going in the future. There are a ton of features that we thought out but didn't have the time to implement in this time span. For example, we'd love to implement a direct messaging system, so you can directly contact and discuss your favorite songs/artists with the people you follow. The GitHub repository readme also contains complete and detailed instructions on how to set up your development environment to run the code, if anyone is interested in trying it out. Thanks for reading!
|
losing
|
## Inspiration
Prolonged COVID restrictions have caused immense damage to the economy and local markets alike. Shifts in this economic landscape have led many individuals to seek alternate sources of income to account for losses from lack of work or opportunity. One major sector that has seen a boom, despite local market downturns, is investment in the stock market. While stock market trends seem logical and fluid at first glance, they're in fact the opposite. Beat earnings expectations? New products on the market? *It doesn't matter!*, because at the end of the day, a stock's value is inflated by speculation and **hype**. Many see the allure of rapidly rising ticker charts, booming social media trends, and talk of the town about how someone made millions in a matter of a day *cough* **GameStop** *cough*, but more often than not, individual investors lose money when market trends spiral. It is *nearly* impossible to time the market. Our team saw these challenges and wanted to create a platform that accounts for social media trends that may be indicative of early market changes, so that small-time investors can make smart decisions ahead of the curve.
## What it does
McTavish St. Bets is a platform that aims to help small-time investors gain insight on when to buy, sell, or hold a particular stock on the DOW 30 index. The platform uses the recent history of stock data, along with tweets from the same time period, to estimate the future value of the stock. We assume there is a correlation between tweet sentiment toward a company and its future valuation.
## How we built it
The platform was built using a client-server architecture and is hosted on a remote computer made available to the team. The front-end was developed using React.js and Bootstrap for quick and efficient styling, while the backend was written in Python with Flask. The dataset was constructed by the team using a mix of tweets and article headers. The public Twitter API was used to scrape tweets according to popularity, and tweets were ranked against one another using an engagement scoring function. Tweets were processed with a natural language processing module using BERT embeddings, trained for sentiment analysis. Time series prediction was accomplished through a neural stochastic differential equation that incorporates text information as well. To incorporate this text data, the latent representations were combined based on the aforementioned scoring function; this representation is then fed directly to the network for each timepoint in the series estimation, to guide model predictions.
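A minimal sketch of the tweet-scoring step, using an off-the-shelf sentiment pipeline; the engagement formula is our illustrative assumption, not the team's exact function:

```python
# Sketch of the tweet-scoring step: BERT-based sentiment weighted by an
# engagement score. The engagement formula is an illustrative assumption.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")       # default BERT-family model

tweets = [
    {"text": "Loving the new product line!", "likes": 120, "retweets": 30},
    {"text": "Terrible earnings call today.", "likes": 15, "retweets": 4},
]

def engagement(t):
    return 1.0 + t["likes"] + 2 * t["retweets"]  # retweets weighted higher

total, weight = 0.0, 0.0
for t in tweets:
    r = sentiment(t["text"])[0]                  # {'label': ..., 'score': ...}
    signed = r["score"] if r["label"] == "POSITIVE" else -r["score"]
    total += engagement(t) * signed
    weight += engagement(t)

print("engagement-weighted sentiment:", total / weight)
```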
## Challenges we ran into
Obtaining data to train the neural SDE proved difficult. The free Twitter API only provides high engagement tweets for the last seven days. Obtaining older tweets requires an enterprise account costing thousands of dollars per month. Unfortunately, we didn’t feel that we had the data to train an end-to-end model to learn a single representation for each day’s tweets. Instead, we use a weighted average tweet representation, weighing each tweet by its importance computed as a function of its retweets and likes. This lack of data extends to the validation side too, with us only able to validate our model’s buy/sell/hold prediction on this Friday's stock price.
Finally, without more historical data, we can only model the characteristics of the market this week, which has been fairly uncharacteristic of normal market conditions. Adding additional data for the trajectory modeling would have been invaluable.
## Accomplishments that we're proud of
* We used several API to put together a dataset, trained a model, and deployed it within a web application.
* We put together several animations introduced in the latest CSS revision.
* We commissioned a McGill-themed banner in keeping with the /r/wallstreetbets culture. Credit to Jillian Cardinell for the help!
* Some jank nlp
## What we learned
Learned to use several new APIs, including Twitter and Web Scrapers.
## What's next for McTavish St. Bets
Obtaining much more historical data by building up a dataset over several months (using Twitter's 7-day API). We would also have liked to scale the framework to be reinforcement-learning based, which is data-hungry.
|
# Highlights
A product of [YHack '16](http://www.yhack.org/). Built by Aaron Vontell, Ali Benlalah & Cooper Pellaton.
## Table of Contents
* [Overview](#overview)
* [Machine Learning and More](#machine-learning-and-more)
* [Our Infrastructure](#our-infrastructure)
* [API](#api)
## Overview
The first thing you're probably wondering is what this ambiguously named application is, and secondly, why it has any significance. Firstly, Highlights is the missing component of your YouTube life, and secondly, it's important because we leverage machine learning to find out what content is most important in a particular piece of media in a way it has never been done before.
Imagine this scenario: you subscribe to 25+ YouTube channels, but over the past 3 weeks you simply haven't had the time to watch videos because of work. Today, you decide that you want to watch one of your favorite vloggers, but realize you might lack the context to understand what has happened in her/his life since you last watched, which led her/him to this current place. Here enters Highlights. Simply download the Android application, log in with your Google credentials, and you will be able to watch the so-called *highlights* of your subscriptions for all of the videos which you haven't seen. Rather than investing hours in watching your favorite vlogger's past weeks' worth of videos, you can get caught up in 30 seconds to 1 minute by being presented with all of the most important content from those videos in one place, seamlessly.
## Machine Learning and More
Now that you understand the place and significance of Highlights, a platform that can distill any media into bite-sized chunks that can be consumed quickly in order of their importance, it is important to explain the technical details of how we achieve such a gargantuan feat.
Let's break down the pipeline.
1. We start by accessing your Google account within the YouTube scope and get a list of your current subscriptions, 'activities' such as watched videos, comments, etc., your recommended videos and your home feed.
2. We take this data and extract the key features from it. Some of these include:
* The number of videos watched on a particular channel.
* The number of likes/dislikes you have and the categories on which they center.
* The number of views a particular video has/how often you watch videos after they have been posted.
* Number of days after publication. This is most important in determining the significance of a recommended video to a particular user.
We go about this process for every video that the user has watched, or which exists in his or her feed, to build a comprehensive feature set of the videos in their own unique setting.
3. We proceed by feeding the data and probabilities from the aforementioned investigation into a new machine learning model, which we use to determine the likelihood of a user watching any particular recommended video.
4. For each video in the set we are about to iterate over, the video is either a recommended watch or a video in the user's feed which she/he has not seen. The key to this process is a system we like to call 'video quantization'. In this system we break each video down into its components. We look at the differences between images and end up analyzing roughly every 2, 3, or 4 frames in a video. This reduces the size of the video that we need to analyze while ensuring that we don't miss anything important. As you will note here, a lot of the processes we undertake have bases in very comprehensive and confusing mathematics. We've done our best to keep math out of this, but know that one of the most important tools in our toolset is the exponential moving average.
5. This is the most important part of our entire process: the scene detection. To distill this down to its most basic principles, we use features like lighting, edge/shape detection and more to determine how similar or different every frame is from the next. Using this methodology of trying to find the frames that are different, we coin this change in setting a 'scene'. Now, 'scenes' by themselves are not exciting, but coupled with our knowledge of the context of the video we are analyzing we can come up with very apt scenes. For instance, in a horror movie we know that we would be looking for something like 5-10 seconds of difference between the first frame of that series and the last frame; this is what is referred to as a 'jump' or 'scare' cut. So, using our exponential moving average and background subtraction, we are able to figure out the changes in between and validate scenes (a minimal sketch of this frame-differencing idea appears after this pipeline).
6. We pass this now-deconstructed video into the next part of our pipeline, where we generate unique vectors for each scene that will be used in the next stage. What we are looking for here are the key features that define a frame. We are trying to understand, for example, what makes a 'jump' cut a 'jump' cut. Features that we are most commonly looking for include:
* Intensity of an analyzed area.
+ EX: The intensity of a background coloring vs edges, etc.
* The length of each scene.
* Background.
* Speed.
* Average Brightness
* Average background speed.
* Position
* etc.
Armed with this information we are able to derive a unique column vector for each scene, which we then feed into our neural net.
7. The meat and bones of our operation: the **neural net**! What we do here is not terribly complicated. At its most basic, we take each of the above column vectors and feed it into this specialized machine learning model. What we are looking for is to derive a sort order for these features. Our initial training set, a group of 600 YouTube videos which @Ali spent a significant amount of time training on, is used to help advance this net. The gist of what we are trying to do is this: given a certain vector, we want to determine its significance in the context of the YouTube universe in which each of our users lives. To do this we abide by a semi-supervised learning model in which we look over the shoulder of the model to check the output. As time goes on, this model begins to tweak its own parameters and produce the best possible output given any input vector.
8. Lastly, now having a sorted order of every scene in a user's YouTube universe, we go about reconstructing the top 'highlights' for each user. That is, in part 7 of our pipeline we figured out which vectors carried the greatest weight; now we want to turn these back into videos that the user can watch quickly and derive the greatest meaning from. Using a litany of Google's APIs, we turn the videoIds, categories, etc. into parameterized links which the viewer is then shown within our application.
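As promised in step 5, here is a minimal sketch of the frame-differencing idea, assuming OpenCV; the sampling stride, EMA decay, and spike threshold are illustrative placeholders rather than the values used in the real pipeline.

```python
import cv2
import numpy as np

def detect_scene_cuts(path, stride=3, alpha=0.05, spike=2.0):
    """Flag sampled frames whose difference from the previous sampled frame
    spikes well above an exponential moving average of recent differences."""
    cap = cv2.VideoCapture(path)
    cuts, ema, prev, idx = [], None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % stride:                       # 'video quantization': skip frames
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = float(np.mean(cv2.absdiff(gray, prev)))
            if ema is not None and diff > spike * ema:
                cuts.append(idx)               # candidate 'scene' boundary
            ema = diff if ema is None else (1 - alpha) * ema + alpha * diff
        prev = gray
    cap.release()
    return cuts
```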
## Our Infrastructure
Our service is currently broken down into the following core components:
* Highlights Android Application
+ Built and tested on Android 7.0 Nougat, and uses the YouTube Android API Sample Project
+ Also uses various open source libraries (OkHTTP, Picasso, ParallaxEverywhere, etc...)
* Highlights Web Service (Backs the Pipeline)
* The 'Highlighter' or rather our ML component
## API
### POST
* `/api/get_subscriptions`
This requires the client to `POST` a body of the nature below. This will then trigger the endpoint to go and query the YouTube API for the user's subscriptions, and then build a list of the most recent videos which he/she has not seen yet.
```
{
"user":"Cooper Pellaton"
}
```
* `/api/get_videos`
*DEPRECATED*. This endpoint requires the client to `POST` a body similar to that below and then will fetch the user's most recent activity in list form from the YouTube API.
```
{
"user":"Cooper Pellaton"
}
```
### GET
* `/api/fetch_oauth`
Optimally, what happens when you call this endpoint is that the user is prompted to enter her/his Google credentials to authorize the application to access her/his YouTube account.
- The way that this is currently architected, the user's entrance into our platform will immediately trigger learning to occur on their videos. We have since *DEPRECATED* our ML training endpoint in favor of one `GET` endpoint to retrieve this info.
* `/api/fetch_subscriptions`
To get the subscriptions for a current user in list form simply place a `GET` to this endpoint. Additionally, a call here will trigger the ML pipeline to begin based on the output of the subscriptions and user data.
* `/api/get_ml_data`
For each user there is a queue of their Highlights. When you query this endpoint the response will be the return of a dequeue operation on said queue. Hence, you are guaranteed to never have overlap or miss a video.
- To note: in testing we have a means to bypass the dequeue and instead append, constantly, directly to the queue so that you can ensure you are retrieving the appropriate response.
|
## Inspiration
"*Agua.*" These four letters dropped Coca-Cola's market value by $4 billion dollars in just a few minutes. In a 2021 press conference, Cristiano Ronaldo shows just how much impact public opinion has on corporate finance. We all know about hedge fund managers who have to analyze and trade stocks every waking minute. These people look at graphs to get paid hundreds of thousands of dollars, yet every single one of them overlooks the arguably most important metric for financial success. Public opinion. That's where our team was inspired to create twittertrader.
## What it does
twittertrader is a React application that displays crucial financial information regarding the day's most actively traded stocks. For each of the top ten most active stocks, our project analyzes the most recent relevant tweets and displays the general public opinion.
## How we built it
**Backend**: Python, yahoo\_fin, Tweepy, NLTK
**Frontend**: React, Material UI
**Integration**: Flask
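The repository isn't shown here, so as a rough illustration, this is what a typical NLTK sentiment pass over a ticker's tweets could look like using the VADER analyzer; the aggregation into a single opinion score is our assumption.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def public_opinion(tweets):
    """Average VADER compound score over a ticker's recent tweets:
    > 0 leans bullish, < 0 leans bearish."""
    scores = [sia.polarity_scores(t)["compound"] for t in tweets]
    return sum(scores) / len(scores) if scores else 0.0
```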
## Challenges we ran into
Integrating backend and frontend.
## Accomplishments that we're proud of
Every single one of us was pushed to learn and do more than we have ever done in such a short amount of time! Furthermore, we are proud that all of us were able to commit so much time and effort even in the midst of final exams.
## What we learned
Don't take part in a hackathon during exam season. I'm being serious.
## What's next for twittertrader
1. **Interactions**
As a team we had big ambitions and small amounts of time. We wanted to include a feature where users could add their own stocks to be analyzed; however, we were unable to implement it in time.
2. **Better Analytics!**
Our current project relies on NLTK's natural language processing which has limitations analyzing text in niche fields. We plan on integrating a trained ML model that more accurately describes sentiments in the context of stocks. ("Hit the moon" will make our positivity "hit the moon")
3. **Analytics+**
This information is cool and all, but what am I supposed to do with it? We plan on implementing further functionality that analyzes significant changes in public opinion and recommends buying or selling these stocks.
4. **Scale**
We worked so hard on this cool project and we want to share this functionality with the world! We plan on hosting this project on a real domain.
## The Team
Here is our team's Githubs and LinkedIns:
Jennifer Li: [Github](https://github.com/jennifer-hy-li) & [LinkedIn](https://www.linkedin.com/in/jennifer-hy-li/)
McCowan Zhang: [Github](https://github.com/mccowanzhang) & [LinkedIn](https://www.linkedin.com/in/mccowanzhang/)
Yuqiao Jiang: [Github](https://github.com/yuqiaoj) & [LinkedIn](https://www.linkedin.com/in/yuqiao-jiang/)
|
winning
|
# seedHacks
Drone-mounted tree planter with a splash of ML magic!
## Description
**Our planet's in dire straits.**
Over the past several decades, as residential, commercial, and industrial demands have skyrocketed across several industries around the globe, deforestation has become a major problem facing humanity. Though we depend on trees and forests for so much, they seem to be one of our fastest depleting natural resources. As our primary provider for oxygen and one of our biggest global carbon sinks, this is very dangerous news.
**seedHacks is a tool to help save the world.**
Through the use of cloud-based image classification, live video feed, geospatial optimization, and robotic flight optimization, we've made seedHacks to facilitate reforestation at a rate and efficiency that people might not be able to offer. Using our tech, a drone collecting simple birds-eye images of a forest can compute the optimal positions for seeds to be planted, aiming to approach a desired forest density collected from the user. Once it has this map, planting them's just a robotics problem! Easy, right?
## How did we build?
We broke the project up into three main parts: video feed and image collection, image tagging and seed location optimization, and user interface/display.
To tackle image/video, we landed on using pygame to set up a constant feed and collect images from that feed upon user request.
We then send those captured images to a Microsoft cloud computing server, where we trained an object detection model using Azure's Custom Vision platform, which returns a tagged image with the locations of the trees in the overhead image.
Finally, we send the tagged output to an optimization algorithm that utilizes all the free space possible, as well as some distance constraints, to fill the available space with as many trees as possible. All this was wrapped up in an elegant and easy-to-interpret UI that allows us to work together with the expertise of users to make the best end result possible!
## Technical Notes
* Azure Custom Vision was used to train an object detection model that could label "tree" and "trees". We used about 33 images found online to train this machine learning model, resulting in a precision of 82.5%.
* We used the custom vision API to send our aerial view images of forests to the prediction endpoint which returned predictions consisting of confidence level, label, and a bounding box.
* We then parsed the output of the object detection by creating a 2D numpy array in Python representing the original image. We filled indices of the array with 1’s where pixels were labeled as “tree” or “trees” with at least a 50% confidence. At the same time, we extracted the max width and height of the canopy of the trees to automate the process for users. The users are allowed to input a buffer, as a percentage, which increases the bounding box for tree growth based on the current density/present species; this is especially important if the roots of the tree need space to grow or the tree species is competitive.
* After the 2D array was filled with pre-existing trees, we iterated through the array to find places where new trees could be planted such that there was enough space for each tree to mature to its full canopy size (see the sketch below). We labeled these indices with 2 to differentiate between existing trees and potential new trees.
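A minimal sketch of that grid search, assuming a square canopy window of radius `r` in pixels; the greedy scan order is an illustrative simplification of the team's algorithm.

```python
import numpy as np

def plan_new_trees(grid, r):
    """grid: 2D int array where 1 marks existing canopy (from the detector).
    Greedily mark 2 wherever a new tree of canopy radius r (in pixels) would
    fit without overlapping existing or already-planned trees."""
    h, w = grid.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = grid[y - r:y + r + 1, x - r:x + r + 1]
            if not window.any():        # enough free space for a full canopy
                window[:] = 2           # reserve the canopy area (view mutates grid)
    return grid
```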
## What did we learn?
First off, that selecting and training a good object detector can be complicated and mysterious, but definitely worth putting some time into. Though our initial models had promise, we needed to optimize for overhead forest views, which is not a domain most models are trained on.
Second, that keeping it simple is sometimes better for realizing ideas well. We were very excited to get our hands on a Jetson Nano and trick it out with AlwaysAI's amazing technologies, but we realized some time in that because we didn't actually end up using the hardware and software to the fullest of their abilities, they might not be the best approach to our particular problems.
So, we simplified! Finally, that the applicability of cutting-edge environmental robotics carries a lot of promise going forward. With not too much time, we managed to develop a somewhat sophisticated system that could potentially have a huge impact - and we hope to be able to contribute more to the field in the future!
## What's next for seedHacks?
Next steps for our project would include:
* Further optimization of seed locations (a more technical approach using botanical/silvological expertise, etc.)
* Training the object detector better and better to pick out individual and clusters of trees from an overhead view
* More training on burnt trees and forests
* Robotic pathfinding systems to automatically execute paths through a forest space
* Actuators on drones to make seed planting possible
* Generalizing to aquatic and other ecosystems
|
## Inspiration
With the rise of IoT devices and the backbone support of the emerging 5G technology, BVLOS drone flights are becoming more readily available. According to CBInsights, Gartner, IBISworld, this US$3.34B market has the potential for growth and innovation.
## What it does
**Reconnaissance drone software that utilizes custom object recognition and machine learning to track wanted targets.** It performs at close to real-time speed with nearly 100% accuracy and allows a single operator to operate many drones at once. Bundled with a light, sleekly designed web interface, it is highly inexpensive to maintain and easy to operate.
**There is a Snapdragon Dragonboard that runs physically on the drones capturing real-time data and processing the video feed to identify targets. Identified targets are tagged and sent to an operator that is operating several drones at a time. This information can then be relayed to the appropriate parties.**
## How I built it
There is a Snapdragon Dragonboard that runs physically on the drones capturing real-time data and processing the video feed to identify targets. This runs on a Python script that then sends the information to a backend server built using NodeJS (coincidentally also running on the Dragonboard for the demo) to do processing and to use Microsoft Azure to identify the potential targets. Operators use a frontend to access this information.
## Challenges I ran into
Determining a way to reliably demonstrate this project became a challenge, considering that neither the drone nor its GPS is actually moving during the demonstration. The solution was to feed the program a video feed with simulated moving GPS coordinates so that the system believes it is moving in the air.
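A minimal sketch of how such a simulated feed could be produced, assuming OpenCV for the video side; the starting coordinates and per-frame step are illustrative placeholders.

```python
import cv2

def simulated_feed(video_path, start=(45.5017, -73.5673), step=(1e-5, 2e-5)):
    """Yield (frame, lat, lon) tuples, nudging the GPS fix every frame so the
    pipeline believes the drone is flying. Coordinates here are illustrative."""
    cap = cv2.VideoCapture(video_path)
    lat, lon = start
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        lat += step[0]
        lon += step[1]
        yield frame, lat, lon
    cap.release()
```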
The training model also required us to devote multiple engineers to spending most of their time training the model over the hackathon.
## Accomplishments that I'm proud of
The code flow is adaptable to virtually an infinite number of scenarios, with **no hardcoding for the demo** except feeding it the video and GPS coordinates rather than the camera feed and actual GPS coordinates.
## What I learned
We learned a great amount about computer vision and building/training custom classification models. We used Node.js, which is a highly versatile environment and can be configured to relay information very efficiently. Also, we learned a few JavaScript tricks and some pitfalls to avoid.
## What's next for Recognaissance
Improving the classification model using more expansive datasets. Enhancing the software to be able to distinguish several objects at once allowing for more versatility.
|
## Inspiration
Gun violence is a dire problem in the United States. When looking at case studies of mass shootings in the US, there is often surveillance footage of the shooter *with their firearm* **before** they started to attack. That's both the problem and the solution. Right now, surveillance footage is used as an "after-the-fact" resource. It's used to *look back* at what transpired during a crisis. This is because even the biggest of surveillance systems only have a handful of human operators who simply can't monitor all the incoming footage. But think about it: most schools, malls, etc. have security cameras in almost every hallway and room. It's a wasted resource. What if we could use surveillance footage as an **active and preventive safety measure**? That's why we turned *surveillance* into **SmartVeillance**.
## What it does
SmartVeillance is a system of security cameras with *automated firearm detection*. Our system simulates a CCTV network that can intelligently classify and communicate threats for a single operator to easily understand and act upon. When a camera in the system detects a firearm, the camera number is announced and is displayed on every screen. The screen associated with the camera gains a red banner for the operator to easily find. The still image from the moment of detection is displayed so the operator can determine if a firearm is actually present or if it was a false positive. Lastly, the history of detections among cameras is displayed at the bottom of the screen so that the operator can understand the movement of the shooter when informing law enforcement.
## How we built it
Since we obviously can't have real firearms here at TreeHacks, we used IBM's Cloud Annotation tool to train an object detection model in TensorFlow for *printed cutouts of guns*. We integrated this into a React.js web app to detect firearms visible in the computer's webcam. We then used PubNub to communicate between computers in the system when a camera detected a firearm, the image from the moment of detection, and the recent history of detections. Lastly, we built onto the React app to add features like object highlighting, sounds, etc.
## Challenges we ran into
Our biggest challenge was creating our gun detection model. It was really poor the first two times we trained it, and it basically recognized everything as a gun. However, after some guidance from some lovely mentors, we understood the different angles, lightings, etc. that go into training a good model. On our third attempt, we were able to take that advice and create a very reliable model.
## Accomplishments that we're proud of
We're definitely proud of having excellent object detection at the core of our project despite coming here with no experience in the field. We're also proud of figuring out how to transfer images between our devices by encoding and decoding them from base64 and sending the string through PubNub, making communication between cameras almost instantaneous. But above all, we're just proud to come here and build a 100% functional prototype of something we're passionate about. We're excited to demo!
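The cameras themselves run in React.js, but for illustration, here is roughly what that publish step looks like in PubNub's Python SDK; the keys, channel name, and camera id are placeholders.

```python
import base64
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

config = PNConfiguration()
config.publish_key = "pub-key"       # placeholder keys
config.subscribe_key = "sub-key"
config.uuid = "camera-3"
pubnub = PubNub(config)

def broadcast_detection(camera_id, jpeg_bytes):
    """Send the detection event plus the still image as a base64 string."""
    payload = {
        "camera": camera_id,
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
    }
    pubnub.publish().channel("detections").message(payload).sync()
```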
## What we learned
We learned A LOT during this hackathon. At the forefront, we learned how to build a model for object detection, and we learned what kinds of data we should train it on to get the best model. We also learned how we can use data streaming networks, like PubNub, to have our devices communicate to each other without having to build a whole backend.
## What's next for SmartVeillance
Real cameras and real guns! Legitimate surveillance cameras are much better quality than our laptop webcams, and they usually capture a wider range too. We would love to see the extent of our object detection when run through these cameras. And obviously, we'd like to see how our system fares when trained to detect real firearms. Paper guns are definitely appropriate for a hackathon, but we have to make sure SmartVeillance can detect the real thing if we want to save lives in the real world :)
|
partial
|
## Inspiration
Do you wish something could really help you get up in the morning?
## What it does
Sleepful uses machine vision to assist your sleep. It tracks if you have actually gotten out of bed.
## How we built it
OpenCV, MEAN, Microsoft Azure
## Accomplishments that we're proud of
An application that can track your sleep live
## What's next for Sleepful
Sleepful has huge potential to be an assistant for your sleep and mornings. We hope to implement features to track the quality of your sleep and guide you through your morning routine.
|
## Inspiration
Personal finance and financial literacy are incredibly important topics that everyone waits to talk about until they're too scared to admit they don't understand them! We decided that using Lego and hardware to teach financial literacy concepts would encourage children to learn the importance of finances before they grow up.
## What it does
Lego Investor teaches the difference between market investing and savings accounts: while putting money in the market can come with bigger rewards, it can also take some nauseating dives.
## How we built it
We built Lego Investor using a Python Flask server on a Raspberry Pi to interface with an Arduino that controls various hardware components. To provide a better user experience, we created a front-facing web GUI that provides information on a simulated investment and savings account, including the current value of stocks, how much interest your savings account has earned, and your original balances!
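The writeup doesn't include the simulation code, so here is a minimal sketch of the kind of savings-vs-market comparison the GUI could chart; the interest rate and volatility are illustrative assumptions.

```python
import random

def simulate(balance, days, daily_rate=0.0001, volatility=0.02):
    """Toy comparison of steady savings interest vs. a random-walk market
    position, starting from the same balance."""
    savings, market = balance, balance
    for _ in range(days):
        savings *= 1 + daily_rate                   # predictable interest
        market *= 1 + random.gauss(0, volatility)   # nauseating dives included
    return savings, market
```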
## Challenges we ran into
Trying to debug hardware never seems to get any easier: we spent a good chunk of time trying to discover whether the problems we ran into were due to logical errors in our code, miswiring in our electrical circuit, or plain old power management problems. We worked with a couple of tricky sensors too, and used all the Lego we could find to build the best miniature bank we could.
## Accomplishments that we're proud of
We took on a big project in a weekend, and everyone learned something new, which is always the goal at a hackathon.
## What we learned
Building web servers is more difficult than we imagined! There is always something to be learned.
## What's next for Lego Investor
|
## Inspiration
Our inspiration stems from the profound impact that early disease detection can have on exponentially improving survival rates.
## What it does
Imagine a world where we can detect diseases early and save lives by treating them before they get out of control. According to the Canary Foundation, "Colon Cancer caught early has a 91% 5-year survival rate, vs. an only 11% survival rate if it is caught late and has spread to other organs." With this in mind, our hackathon project leverages cutting-edge medical wearable technology to empower doctors with real-time, lifesaving information. By gathering and transmitting data on vital signs, sleep patterns, and activity levels, our solution enables healthcare providers to detect warning signs before they escalate into critical conditions. This also eliminates miscommunication and misunderstandings between providers and patients, as the data is clear and concise. With early intervention, we can prevent the unnecessary loss of lives and significantly improve patient outcomes.
## How we built it
Software Developers - We created a Terra API account and, following the documentation, created calls to the Terra API in the backend of our web app to centralize outgoing API calls. Then, we connected our front end to our backend to display the data from the Terra API for users to see.
We built the front end using the Angular framework, TypeScript, HTML, and CSS, and used Node.js and JavaScript for our backend. We also utilized Twilio to allow for care providers to send messages directly to their patients, which required us to use Express.
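The team wired Twilio through their Node/Express backend; as an illustration, the equivalent call in Twilio's Python SDK looks roughly like this (credentials, phone numbers, and message text are placeholders).

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def notify_patient(patient_phone, provider_name):
    """Text a patient on a provider's behalf to request a follow-up call."""
    client.messages.create(
        body=f"{provider_name} reviewed your latest vitals and would like "
             "to schedule a call. Please reply with a good time.",
        from_="+15550000000",   # placeholder Twilio number
        to=patient_phone,
    )
```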
UX Designer - We started by sketching out our ideas on paper with our developers, creating a basic structure, identifying discussion topics, and defining our mission. Then, we moved on to building a wireframe in Figma.
After finishing the wireframe, we reviewed it and realized that some things needed reordering to better match how we expected users to interact with it.
In our program, a "notice" is health data sent from a smartwatch or other health-related device via Terra API to our project, and it appears on a physician's homepage.
One significant change we made was to prioritize the provider's "feed" of patient notices because it's crucial to act quickly when there's a life-threatening issue. Below that, we added the option for physicians to send notices when they have non-urgent concerns and want to schedule a meeting with the patient.
Next to that, we included a section for patients, making it easy for them to access their vitals and health history when needed.
This is a big change from the original layout. We decided on this layout specifically after conducting user research with the teams around us who were willing to give us their feedback.
## Challenges we ran into
We ran into the issue that our AI model wasn't specifically trained and prepared for medical use; therefore, we weren't able to dive as deep into medical advice or help the physician beyond presenting the data.
## Accomplishments that we're proud of
We are proud that we were able to successfully utilize all 3 APIs in our project!
## What we learned
Our group had a first-time hacker, and she came in without knowing what an API was; she left knowing not only what it is, but how to integrate one! Together we learned how to integrate the OpenAI API and the Terra API. To more effectively foster communication between patients and their primary care providers, we used Twilio to send text messages from physicians to patients to request appointments or calls based on their health data.
Teamwork was also something we improved on and made more efficient during this hackathon. We streamlined our process of working with a UX Designer and turning her designs into reality efficiently.
We also learned that we love the restaurant Veggie Galaxy on Mass Ave!!!!
## What's next for ApolloLink
We would love to upgrade our product to be able to offer advice or diagnostic examples. This, however, would need to be extensively tested and be created with a medically trained AI model.
|
partial
|
## Inspiration
Our idea begins with the fact that travelers leave their own messages in scenic places. Why don't we create an AR function that displays these messages whenever other travelers open their phones? We then extended our target to every person who wants a platform to share their current feelings with people who have experienced the same place or event. Thus Beside was created, aiming to connect you with people who have been "beside" you in the past and present.
## Functionality
Our application has two main features. In the AR environment, people can view all the messages written by users who have been to the same location, presented as 3D objects. The user can interact with others by browsing the notes, leaving comments, and posting notes themselves. Whether it is a food recommendation, a random rant, or a meaningful story, you can share it with people throughout the world who pass by the same location. In the map feature, a collection of all notes written in a certain area is presented, and top-trending topics are visible on the map. The user can participate in discussions in other areas while staying informed of the trend.
## Challenges & Achievements
We divided our team into two groups, an AR group and a map group. Without any prior experience in Swift, AR development, or front-end development, we spent a challenging and meaningful day learning everything from the basics to advanced functions in unfamiliar languages. The main problems the AR group tackled were rendering 3D notes, correcting the orientation of the notes, and modeling the notes. We successfully adjusted the note orientations by adding an interactive tap function that rotates the notes.
As for the map interface, the first major challenge we encountered was the technical specifics of React.js. Since we are all beginner hackers with no experience in React.js or JavaScript in general, it was hard to understand and master the syntax and concepts behind the language/framework. An example of such challenges is objects in JavaScript, which are completely different from those in Java, which we are more comfortable with. The second challenge was that when we tried to make API calls with 'google-map-react', the official documentation was ambiguous enough that we had to dig into the source code to understand how to use the APIs. The third problem was that during the early implementation stage of the project, we didn't have sufficient reliable data to run comprehensive tests as we developed. We had to come up with simple test data in order to test and debug.
## Future Plans
As our project is still in the prototype stage, we plan on completing the functions that we envisioned in the future, such as posting and commenting on posts. If we can successfully develop the product, we hope to promote our application among Berkeley students and take the first step in launching a real AR driven social platform.
|
## Inspiration
* Inspired by issues pertaining to present-day social media that focuses more on likes and views as opposed to photo-sharing
+ We wanted to connect people on the internet and within communities in a positive, immersive experience
* Bring society closer together rather than push each other away
## What it does
* Social network for people to share images and videos to be viewed in AR
* Removed parameters such as likes, views, and engagement to focus primarily on media-sharing
## How we built it
* Used Google Cloud Platform as our VM host for our backend
* Utilized web development tools for our website
* Git to collaborate with teammates
* Unity and Vuforia to develop AR
## Challenges we ran into
* Learning new software tools, but we all persevered and had each other's backs.
* Using Unity and learning how to use Vuforia in real time
## Accomplishments that we're proud of
* Learning Git, and a bunch more new software that we have never touched!
* Improving our problem solving and troubleshooting skills
* Learning to communicate with teammates
* Basics of AR
## What we learned
* Web development using HTML, CSS, JavaScript and Bootstrap
## What's next for ARConnect
* Finish developing:
  + RESTful API
  + DBM
* Improve UX by:
  + Mobile app
  + Adding depth to user added images (3d) in AR
  + User accessibility
|
# Catch! (Around the World)
## Our Inspiration
Catch has to be one of our most favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the entire world, we thought it'd be nice to play catch with those relatives that we haven't seen due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic that we're in, so who says we can't play a little game while social distancing?
## What it does
Our application uses AR and Unity to allow you to play catch with another person from somewhere else in the globe! You can tap a button which allows you to throw a ball (or a random object) off into space, and then the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chatting application so they can have some commentary going on while they are playing catch.
## How we built it
For the AR functionality of the application, we used **Unity** with **ARFoundations** and **ARKit/ARCore**. To record the user sending the ball/object to another user, we used a **Firebase Real-time Database** back-end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized **EchoAR** to create/instantiate different 3D objects that users can choose to throw. Furthermore for the chat application, we developed it using **Python Flask**, **HTML** and **Socket.io** in order to create bi-directional communication between the web-user and server.
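A minimal sketch of such a Flask + Socket.io relay, assuming the Flask-SocketIO package; the event name is illustrative.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("chat")
def handle_chat(data):
    # Relay each player's message to everyone else in the session
    emit("chat", data, broadcast=True)

if __name__ == "__main__":
    socketio.run(app)
```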
## Challenges we ran into
Initially we had a separate idea for what we wanted to do in this hackathon. After a couple of hours of planning and developing, we realized that our goal was far too complex and too difficult to complete in the given time frame. As such, our biggest challenge was figuring out a project that was doable within the time of this hackathon.
This ties into another challenge we ran into: initially creating the application, and the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve.
There was also some difficulty learning how to use the EchoAR API with Unity, since it had a specific method of generating the AR objects. However, we were able to use the tool without investigating too far into the code.
## Accomplishments
* Working Unity application with AR
* Use of EchoAR and integrating with our application
* Learning how to use Firebase
* Creating a working chat application between multiple users
|
losing
|
## Inspiration
**Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness.
## Problem
Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data from which past candidates received interviews. However, what if in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in.
Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting.
## What is fairness?
There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) for unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicates that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group.
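In symbols, a standard formulation of this metric (the one used by fairness toolkits such as IBM's AIF360) is:

average odds difference = 1/2 \* [(FPR\_unpriv − FPR\_priv) + (TPR\_unpriv − TPR\_priv)]

where FPR and TPR are the false and true positive rates for each group; values near zero indicate that the classifier's errors are spread evenly across women and men.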
## What our app does
**jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness.
### Reweighing Algorithm
If the data is unbiased, we would expect the probability of being accepted and the probability of being a woman to be independent, so their joint probability would be the product of the two probabilities. By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight (for the woman + accepted category) to the expected probability divided by the actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1); otherwise, they are given a lower weight. This formula is applied to the other 3 out of 4 combinations of gender x acceptance. Then the reweighed sample is used for training.
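The project used AIF360's `Reweighing` preprocessing class for this step; to make the arithmetic concrete, here is a plain-NumPy sketch of the same weight computation (it assumes every gender x acceptance combination occurs at least once).

```python
import numpy as np

def reweighing_weights(group, label):
    """Kamiran-Calders reweighing: group and label are 0/1 arrays
    (e.g. group=1 for women, label=1 for accepted)."""
    w = np.empty(len(group), dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            actual = mask.mean()
            w[mask] = expected / actual  # >1 where a combination is under-represented
    return w
```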
## How we built it
We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem on prediction time, classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier.
## Challenges we ran into
Training and choosing models with appropriate fairness constraints. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric.
## Accomplishments that we're proud of
We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her.
## What we learned
Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts.
## What's next for jobFAIR
Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
|
## Inspiration
When it comes to finding solutions to global issues, we often feel helpless, as if our small impact will not help the bigger picture. Climate change is a critical concern of our age; however, the extent of this matter often reaches beyond what one person can do... or so we think!
Inspired by the feeling of "not much we can do", we created *eatco*. *Eatco* allows the user to gain live updates and learn how their usage of the platform helps fight climate change. This allows us to not only present users with a medium to make an impact but also helps spread information about how mother nature can heal.
## What it does
While *eatco* is centered around providing an eco-friendly alternative lifestyle, we narrowed our approach to something everyone loves and can adapt to: food! Other than the plenty of health benefits of adopting a vegetarian diet — such as lowering cholesterol intake and protecting against cardiovascular diseases — having a meatless diet also allows you to reduce greenhouse gas emissions, which contribute to 60% of our climate crisis. Providing users with a vegetarian (or vegan!) alternative to their favourite foods, *eatco* aims to use small wins to create a big impact on the issue of global warming. Moreover, with an option to connect their *eatco* account with Spotify, we engage our users and make them love the cooking process even more by using their personal song choices, mixed with the flavours of our recipe, to create a personalized playlist for every recipe.
## How we built it
For the front-end component of the website, we created our web-app pages in React and used HTML5 with CSS3 to style the site. There are three main pages the site routes to: the main app, and the login and register pages. The login pages use a minimalist aesthetic with a CSS style sheet integrated into an HTML file, while the recipe pages use React for the database. Because we wanted to keep the user experience cohesive and reduce the delay of rendering different pages through the backend, the main app (recipe searching and viewing) occurs on one page. We also wanted to reduce the wait time for fetching search results, so rather than rendering a new page and searching again for the same query, we use React to hide and render the appropriate components.

We built the backend using the Flask framework. The required functionalities were implemented using specific Python libraries as well as certain APIs. For example, our web search API utilized the googlesearch and beautifulsoup4 libraries to access search results for vegetarian alternatives and return relevant data using web scraping. We also made use of the Spotify Web API to access metadata about the user's favourite artists and tracks to generate a personalized playlist based on the recipe being made.

Lastly, we used a MongoDB database to store and access user-specific information such as usernames, trees saved, recipes viewed, etc. We made multiple GET and POST requests to update the user's info, i.e. saved recipes and recipes viewed, as well as making use of our web scraping API that retrieves recipe search results from the query users submit.
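As a rough illustration of the alternative-search step described above, the sketch below combines the googlesearch and beautifulsoup4 libraries; it assumes the `googlesearch-python` flavor of the library (whose `search` accepts `num_results`), and the query template is our guess.

```python
import requests
from bs4 import BeautifulSoup
from googlesearch import search  # assumes the googlesearch-python package

def vegetarian_alternatives(dish, n=5):
    """Search for meatless versions of a dish and scrape each result's title."""
    results = []
    for url in search(f"vegetarian {dish} recipe", num_results=n):
        try:
            soup = BeautifulSoup(requests.get(url, timeout=5).text, "html.parser")
            results.append({"title": soup.title.string.strip(), "url": url})
        except Exception:
            continue  # skip pages that time out or lack a <title>
    return results
```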
## Challenges we ran into
In terms of the front-end, we should have considered implementing routing earlier, because when it came to doing so afterward it would have been too complicated to split the main app page into different routes; this ended up working out alright, as we decided to keep the main page in one main component. Moreover, integrating animation transitions with React was something we hadn't done, and with more time we would have liked to add them in. Finally, only one of us working on the front-end was familiar with React, so balancing what was familiar (HTML) with integrating it into the React workflow took some time.

Implementing the backend, particularly the Spotify playlist feature, was quite tedious, since some aspects of the Spotify Web API were not well explained in online resources and hence we had to rely solely on documentation. Furthermore, having web scraping and APIs in our project meant that we had to parse a lot of dictionaries and lists, making sure that all our keys were exactly correct. Additionally, since dictionaries in Python can use single quotes, we had many issues converting them to JSON with double quotes. The JSONs for the recipes also often had quotation marks in the title, so we had to carefully replace these before the recipes were returned. Later, we also ran into issues with rate limiting, which made it difficult to consistently test our application as it would send too many requests in a small period of time. As a result, we had to increase the pause interval between requests when testing, which made it a slow and time-consuming process.

Integrating the Spotify API calls on the backend with the frontend proved quite difficult. This involved making sure that the authentication and redirects were done properly. We first planned to do this with a popup that called back to the original recipe page, but given the enormous complexity of this task, we switched to having the playlist open on a separate page.
## Accomplishments that we're proud of
Besides our main idea of allowing users to create a better carbon footprint for themselves, we are proud of accomplishing our Spotify integration. Using the Spotify API and metadata was something none of the team had worked with before, and we're glad we learned the new skill because it adds great character to the site. We all love music, and being able to use metadata for personalized playlists satisfied our inner musical geek; the integration turned out great, so we're really happy with the feature. Along with our vast recipe database thus far, we are also proud of our integration! Creating a full-stack database application can be tough, and putting together all of our different parts was quite hard, especially as it's something we have limited experience with; hence, we're really proud of our service layer. Finally, this was the first time our front-end developers used React for a hackathon; using it in a time- and resource-constrained environment for the first time and managing to do it as well as we did is also one of our greatest accomplishments.
## What we learned
This hackathon was a great learning experience for all of us because everyone delved into a tool that they'd never used before! As a group, one of the main things we learned was the importance of a good git workflow, because it allows all team members to collaborate efficiently by combining individual parts. Moreover, we also learned about Spotify embedding, which not only gave *eatco* a great feature but also provided us with exposure to metadata and API tools. We also learned more about creating a component hierarchy and routing on the front end. Another new tool that we used in the back-end was MongoDB: we learned how to perform database operations on a cloud-based MongoDB Atlas database from a Python script using the pymongo API. This allowed us to complete our recipe database, which was the biggest functionality in *eatco*.
## What's next for Eatco
Our team is proud of what *eatco* stands for, and we want to continue this project beyond the scope of this hackathon and join the fight against climate change. We truly believe in this cause and feel eatco has the power to bring meaningful change; thus, we plan to improve the site further and release it as a web platform and a mobile application. Before making *eatco* publicly available we want to add more functionality, further improve the database, and present the user with a more accurate update of their carbon footprint. In addition to making our recipe database bigger, we also want to focus on enhancing the front-end for a better user experience. Furthermore, we hope to include features such as connecting to maps (if the user doesn't have a certain ingredient, they will be directed to the nearest facility where that item can be found) and better use of the Spotify metadata to generate even better playlists. Lastly, we want to add a water-saved feature to address the global water crisis, because eating green also helps cut back on wasteful water consumption! We firmly believe that *eatco* can go beyond the range of the last 36 hours and make an impactful change on our planet; hence, we want to share with the world how global issues don't always need huge corporate or public support to be solved: one person can also make a difference.
|
# unbiasMe
>
> An AI-based search engine capable of identifying bias in news articles and promoting sources that are unbiased.
>
>
>
#### Story Behind the Project
University student environments are filled with passionate discussion and debates of controversial topics. The recent Canadian federal election was the first election that many current university students, including ourselves, were eligible to vote in. We both found it very difficult to learn about each party's platform objectively; it seems like every Google search result is trying to persuade you to think one way or another. It's extremely difficult for someone trying to learn about politics for the first time, or any controversial topic for that matter, to comb through Google and find unbiased articles. The current media landscape does not allow individuals to easily access unbiased information and form their own opinions. This limits meaningful conversation and causes people to be easily offended without first being thoroughly informed.
#### What is unbiasMe?
unbiasMe aims to target the above problem; it is a search engine that uses machine learning to determine which articles in a Google search contain the least amount of bias. It then displays those articles to the user first. It also displays a percent confidence for each article, which is simply how confident our machine learning model is that the article is unbiased.
When you enter a query into unbiasMe, a number of the results returned by Google are scraped to retrieve the text data in each article. For each result we convert this text data into numerical features that can be used by a machine learning algorithm. Intensive research was done to determine important features that can be extracted from the text data [1] (and to provide code for said extraction).
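To make that concrete, here is a minimal sketch of turning an article into a feature vector; these four features are simplified stand-ins for the richer Horne et al. feature set [1], and the loaded-word lexicon is a toy example. The percent confidence displayed next to each article would then just be the classifier's `predict_proba` output on such a vector.

```python
import re
import numpy as np

LOADED = {"outrageous", "shocking", "disaster", "amazing"}  # toy lexicon

def article_features(text):
    """Turn raw article text into a numeric vector of simple style signals."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    n = max(len(words), 1)
    return np.array([
        n,                                           # article length
        len(set(words)) / n,                         # lexical diversity
        sum(w in LOADED for w in words) / n,         # loaded-word rate
        text.count("!") / len(text) if text else 0,  # exclamation density
    ])
```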
#### Implementation
The back-end is written in Python using Flask, and the front-end in HTML and CSS with a tiny bit of JavaScript. We use the Google Custom Search API to Google the user's query and extract URLs for our scraper. It was deployed using Google App Engine.
#### Challenges Encountered
* Front-end development
* That's it, we suck at HTML and CSS (don't even get us started on JavaScript).
#### Proud Accomplishments
* Implementing Google Cloud APIs and deploying a website for the first time for both of us
* Development of a web app that actually runs, almost as well as we could have hoped
* Development of a service that impacts many like-minded individuals
* Networking with hackers from all around the world
#### What We Learned
* That we suck at front-end web development.
* How to deploy a website
* It was some of our first times using sklearn and pandas instead of MatLab for machine learning
* Sleep is important
#### What's next for unbiasMe?
Our hope is to continue to develop the application by implementing more features to provide users with the best experience. One thing we'd really like to include is a recent news tab where users could go to get stories on current events that are unbiased. Also, the machine learning pipeline could probably be improved to provide users with more accurate results (though we are pretty happy with our 78% test accuracy). The code is not exactly the cleanest, and could probably be cleaned up to increase the speed of the search engine significantly.
#### Meet the Team
| Member | Position |
| --- | --- |
| Miriam Naim Ibrahim | Biomedical Engineer |
| Rylee Thompson | Electrical Engineer |
[1] Horne, Benjamin D., Sara Khedr, and Sibel Adali. "Sampling the news producers: A large news and feature data set for the study of the complex media landscape." Twelfth International AAAI Conference on Web and Social Media. 2018.
|
winning
|
## Inspiration
We came into this not knowing what to do for this hackathon, but then it dawned on us: wouldn't it be nice if it were possible to see different types of information about the stock market and the stocks that you already own? That's when we thought of making **Ultimate Portfolio**!
## What it does
Now you may be wondering, what does **Ultimate Portfolio** do? Well, **Ultimate Portfolio** is an app that allows you to see how different stocks are doing in comparison to their norm, using linear regression to perform a technical analysis on the stocks. Is that all, you may ask? No, let me tell you some of our other great features. **Ultimate Portfolio** analyzes and shows how ***your*** investments are doing. Now you may find yourself wondering, why would that be of use? Well, let me tell you: using **Ultimate Portfolio**, the app will show you how well you are doing in your stocks and how well others are doing, so you will know when to invest or sell to make **HUGE** amounts of money and avoid the risk of losing it all. All of this and much more can be done with just a couple of clicks in our app, **Ultimate Portfolio**!
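The writeup doesn't include the regression code, so as a minimal sketch: one way to compare a stock to "its norm" with linear regression is to fit a trend line to recent closes and measure how far today's close deviates from it. A strongly positive value suggests the stock is running above its recent trend; a strongly negative one suggests the opposite.

```python
import numpy as np

def deviation_from_trend(closes):
    """Fit a linear trend to recent closing prices and report how far the
    latest close sits from that norm, in residual standard deviations."""
    closes = np.asarray(closes, dtype=float)
    x = np.arange(len(closes))
    slope, intercept = np.polyfit(x, closes, 1)
    residuals = closes - (slope * x + intercept)
    return residuals[-1] / residuals.std()
```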
## How we built it
It all started with an idea, **A Great Idea**, but little did we know what we were getting ourselves into. Soon after, we started brainstorming and creating a plan of action on **Trello**. Everyone was assigned tasks to complete, whether with the help of another team member or by themselves. We had some members who were more acquainted with backend development, so we let them handle the dirty work. We used React, Firebase, an external API, and Bootstrap to complete this **Great Idea**. All in all, things went according to plan, straying off in some parts, pulling ahead in others.
## Challenges we ran into
One **Big** challenge was the fact that this was the first hackathon for most of our team, so we had no idea what the experience would be like. Learning how to work well and efficiently as a team was a hurdle that we had to overcome. There were a lot of hard-fought battles... (with errors) but in the end, we improvised, adapted, and overcame.
## Accomplishments that we're proud of
Many things were accomplished during this hackathon; one of the most notable was working as a team to create a project. Again, since this was most of our team's first hackathon, it was very rewarding when things started coming together as a team. Also, doing this hackathon meant stepping out of our comfort zone of not being stressed to finish a project in time, and stepping out of one's comfort zone once in a while can be very rewarding: it's like being in **A Whole New World**! Something our team did exceptionally well was product management: by the end of our brainstorming and plan-of-action phase, we had already set up a Trello board (which helped with organization and distribution of tasks).
## What we learned
In the end, many things were learned by our team. One very crucial part, and I cannot emphasize this enough... ~~caffeine~~: you can't do everything yourself, and working as a team can be beneficial at times.
## What's next for Ultimate Portfolio
In the future, we hope to implement more advanced analytic features and tools for members to use. These features will be used to help the user create better investments, allowing them to make more and lose less money. Some of these features include, but are not limited to, Neural Networking, which can examine data to let you know whether you should invest or not, predictive analytics and prescriptive analytics, which will predict stock prices, and then let you know the best approach to make money. All of these and many more are planned for the future of **Ultimate Portfolio**!
|
## Inspiration
I've always been interested in learning about the various methods of investing and how to generate multiple passive income streams. When I found out that 43% of millennials don't know where to get started in the stock market, I wanted to create an app that could educate individuals on why certain stocks are beating the market and how they can get started with their investment budget.
## What it does
The homepage focuses on the top gainers of the week and explains why they are doing so well. I also used an external library (react-native-charts) to display stock charts for the current week. The second screen is a newsroom where users can read about various companies and build their knowledge. The last screen is a calculator where users can input their investment budget, and it will render information on which areas of the market they should invest in.
## How we built it
This app is built with React Native and pulls its data from the Alpha Vantage stock API. I also used react-native-charts to display the stock charts for the current week.
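The fetch itself isn't shown in the write-up; a hedged sketch of pulling the week's closes from Alpha Vantage's daily endpoint could look like this (the API key and symbol are placeholders):

```python
# Sketch: pull the last week of daily closes from Alpha Vantage for the chart
# screen. The key below is a placeholder; Alpha Vantage issues free keys.
import requests

API_KEY = "YOUR_ALPHA_VANTAGE_KEY"  # placeholder

def fetch_weekly_closes(symbol):
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={"function": "TIME_SERIES_DAILY",
                "symbol": symbol,
                "apikey": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    series = resp.json()["Time Series (Daily)"]
    # Most recent 5 trading days, oldest first, ready for the chart component
    days = sorted(series)[-5:]
    return [(d, float(series[d]["4. close"])) for d in days]

print(fetch_weekly_closes("AAPL"))
```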
## Challenges we ran into
I had trouble conditionally rendering the different information topics based on what the user input. I also had to spend a lot of time researching the different topics, so finishing on time was definitely a big challenge.
## Accomplishments that we're proud of
I am really proud of the overall design of the app. Some parts could be a bit better, especially the newsroom, but overall I am happy with how it looks.
## What we learned
As I was researching topics for this app, I also learned many different investing strategies myself that I am so excited to try out!
## What's next for StockUp
I hope to link this app to a news API so that it updates automatically every day. I would also like to add user authentication so that users can have their own personal account and add stocks to their watchlist.
|
## Inspiration
We wanted to make an app that helps people be more environmentally conscious. After thinking about it, we realised that most people are not, simply because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits. We took this idea and added a full visualisation aspect to make a complete budgeting app.
## What it does
Our app allows users to log in, then retrieves their data to visually represent the most interesting parts of their financial history, as well as their utilities spending.
## How we built it
We used HTML, CSS, and JavaScript for our front end, an Arduino to get light-sensor data, and Capital One's Nessie API to retrieve user financial data.
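The Nessie calls aren't shown in the write-up; one plausible shape for the purchase-history fetch against the Nessie sandbox is sketched below (the key and account ID are placeholders):

```python
# Sketch: pull a customer's purchases from Capital One's Nessie sandbox API
# so the frontend can graph spending. Key and account ID are placeholders.
import requests

NESSIE_KEY = "YOUR_NESSIE_KEY"           # placeholder
ACCOUNT_ID = "57cf75cea73e494d8675ec49"  # placeholder account id

def fetch_purchases(account_id):
    url = f"http://api.nessieisreal.com/accounts/{account_id}/purchases"
    resp = requests.get(url, params={"key": NESSIE_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # list of purchase objects with amounts/descriptions

total = sum(p["amount"] for p in fetch_purchases(ACCOUNT_ID))
print(f"total spent: ${total:.2f}")
```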
## Challenges we ran into
Seamlessly integrating our multiple technologies, and formatting our graphs in a way that is both informational and visually attractive.
## Accomplishments that we're proud of
We are proud that we have a finished product that does exactly what we wanted it to do and are proud to demo.
## What we learned
We learned about making graphs using JavaScript, as well as using Bootstrap in websites to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app.
## What's next for Budge
We want to continue adding more graphs and tables to surface more information about your bank account data, and use AI to give personal recommendations catered to an individual's spending.
|
losing
|
## Inspiration
Searching for specific information across notes and web pages was a challenge. Finding accurate answers while studying the news or gathering data for projects was a time-consuming task.
## What it does
AIP-S³ solves the issues mentioned above. It leverages Information Retrieval to surface relevant information from PDFs and web pages: upload your PDFs or web links, ask a query, and AIP-S³ retrieves the passages related to it. (A minimal sketch of the retrieval step follows the build notes below.)
## How we built it
* Programming language - Python
* Preprocessing - NLTK and SpaCy
* Demonstration - Streamlit
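The project's actual similarity function isn't shown; a minimal TF-IDF cosine-similarity retrieval sketch (scikit-learn stands in here purely for illustration) over the stack above could be:

```python
# Sketch: rank passages against a query by TF-IDF cosine similarity.
# scikit-learn is a stand-in; the project's own similarity code isn't shown.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(passages, query, top_k=3):
    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(passages)   # one row per passage
    query_vec = vec.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]    # highest similarity first
    return [(passages[i], float(scores[i])) for i in ranked]

passages = [
    "The treaty was signed in 1998 after two years of talks.",
    "Rainfall in the region has dropped 20% over a decade.",
]
print(retrieve(passages, "When was the treaty signed?"))
```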
## Challenges we ran into
* Extraction of textual data (Textraction) from PDFs and webpages.
* Function for Semantic Similarity.
* Reducing processing time.
## Accomplishments that we're proud of
We are proud to have successfully overcome all challenges and transformed our model into a fully functional version. This project will provide valuable insights into vast amounts of textual data and be incredibly useful for various applications.
## What we learned
* Real life application of NLP techniques.
* Semantic Similarity.
* Web Scraping.
## What's next for AI Powered - Smart Search System (AIP-S³)
* AIP-S³ will be available as a browser extension, so users can ask questions directly on the webpages and files they are viewing.
* AIP-S³ will add more preprocessing steps to improve retrieval accuracy.
|
## Inspiration
Watching long, boring YouTube videos, or ones that turn out to be clickbait, is no fun. We thought of utilizing the captions YouTube already provides to generate quick summaries instead.
## What it does
SummarizeYT activates when it detects that the user is playing a YouTube video and automatically scrapes the DOM for the captions file. Then it displays a summary of the transcript along with the top 5 keywords used in the video. All of this in a minimalistic extension popup!
## How we built it
We used a Google Cloud Function to host our backend script and serve it on an endpoint. The extension scrapes the webpage DOM to get the URL of the captions file on YouTube and makes a POST request to the backend. The backend then returns a JSON containing the summary of the transcript (produced by an extractive summarization algorithm from the Gensim library) and the top 5 keywords (from the Google Cloud Natural Language API). These are displayed accordingly in the extension popup.
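The backend script itself isn't included in the write-up. Assuming Gensim 3.x (its `summarization` module was removed in Gensim 4), the extractive step might look roughly like this; the 20% ratio is an assumption:

```python
# Sketch: extractive summarization of a caption transcript with Gensim 3.x.
# gensim.summarization was removed in Gensim 4, so this assumes gensim<4.0.
from gensim.summarization import summarize

def summarize_transcript(transcript_text, ratio=0.2):
    # TextRank-based extractive summary keeping ~20% of the sentences;
    # the input must contain multiple sentences for TextRank to work.
    return summarize(transcript_text, ratio=ratio)
```

The keyword extraction would be a separate call to the Google Cloud Natural Language API from the same Cloud Function.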
## Challenges we ran into
COMMUNICATION (!!!). Can't stress it enough. Being from different time zones had part of the team working separately from the other part, and this almost led us to drop the project. Somehow we managed to get back on track, but we learned that "everything else later, communication first."
For the technical challenges, we never worked with Chrome extensions, or deploying on/using Google Cloud ever before, so it was a ride to set everything up.
## Accomplishments that we're proud of
A final working project! And the fact that we cleared up the miscommunication among us. We haven't been able to publish the extension yet, but it will be done very soon, and we hope to provide a link to it too.
## What we learned
Working with GCP, making Chrome extensions, other life-lessons.
## What's next for SummarizeYT
Due to YouTube being a single page application, it requires a force reload to get the captions file whenever the user changes the video. We hope to fix this later by adding appropriate headers to the request and getting the file somehow.
Also, as is clear, it only works on videos with captions. We hope to add speech-to-text functionality as well for videos without captions.
Lastly, we hope to make it work for non-English videos too, although this is much more difficult than the previous two tasks.
|
## Inspiration
The biggest irony today is that despite the advent of the internet, students and adults are more oblivious than ever to world events, and one can easily understand why. Of course, Facebook, YouTube, and League will always be more interesting than reading the Huffington Post; coupled with the empirical decrease in younger generations' attention spans, humanity is headed towards disaster.
## What it does
Our project seeks to address this crisis by informing people in a novel and exciting way. We create a fully automated news extraction, summarization, and presentation pipeline that involves an AI-anime character news anchor. The primary goal of our project is to engage and educate an audience, especially that of younger students, with an original, entertaining venue for encountering reliable news that will not only foster intellectual curiosity but also motivate them to take into deeper consideration of relevant issues today, from political events to global warming.
The animation is basically a news anchor talking through several recent news stories, with each cluster of related news discussed in a short blurb.
## Demo Video Explanation
The demo video generally performs well, except for the first few seconds and the Putin/Taliban part. This is because those clusters are too small, so many clusters get merged together, as our k-means uses a fixed number of clusters. A quick fix is to compute the internal coherence of each cluster and filter on that; more advanced methods are described in the Scatter/Gather paper by Karger et al.
## How we built it
### News Summarization
For extraction and summarization, our pipeline first scrapes news articles from trusted sources (CNN, New York Times, Huffington Post, Washington Post, etc.) to obtain the texts of recent stories. It then generates a compact summary of these texts using an in-house two-tier text summarization algorithm based on state-of-the-art natural language processing techniques. The algorithm first does an extractive summarization of individual articles. Next, it computes an overall 'topic feature' embedding, which is used to cluster related news; the final script is generated from these clusters with DL-based abstractive summarization.
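Neither the embedding model nor the clustering code is shown; a sketch of the clustering stage, including the intra-cluster coherence filter suggested in the demo notes above, could look like this (the embedding source, cluster count, and 0.3 threshold are assumptions):

```python
# Sketch: cluster article embeddings with k-means, then keep only clusters
# whose average pairwise cosine similarity (coherence) clears a threshold.
# The embedding source and the 0.3 threshold are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def coherent_clusters(embeddings, n_clusters=8, min_coherence=0.3):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    kept = {}
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if len(members) < 2:
            continue
        sims = cosine_similarity(embeddings[members])
        # Mean of the off-diagonal similarities = internal coherence
        n = len(members)
        coherence = (sims.sum() - n) / (n * n - n)
        if coherence >= min_coherence:
            kept[c] = members.tolist()
    return kept  # cluster id -> indices of articles worth summarizing together
```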
### News Anchor Animation
Furthermore, using the Google Cloud text-to-speech API, we generate speech with our custom pitch and preferences, and we then have code that generates a video from an image of any interesting, popular anime character. For the video to feel natural to the audience, we accounted for accurate lip and facial movement: calculations based on specific speech traits of the .wav file produce realistic videos that are not only educational but also humorous enough to entertain a younger audience.
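The exact synthesis settings aren't given; with the standard `google-cloud-texttospeech` client, pitched WAV generation looks roughly like this (the voice gender, pitch, and speaking rate are assumptions):

```python
# Sketch: synthesize the news script to a WAV file with a custom pitch using
# the google-cloud-texttospeech client. Voice settings are assumptions.
from google.cloud import texttospeech

def synthesize(script_text, out_path="anchor.wav"):
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=script_text),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US",
            ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.LINEAR16,  # WAV/PCM
            pitch=4.0,          # semitones above default, assumed value
            speaking_rate=1.05,
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)
```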
### Audience Engagement
Moreover, we wrote code using the Twitter API to automate uploading videos to our Twitter account, MinervaNews. This is integrated within the project's server, which uploads a video when it first starts and then automatically generates a new video every 24 hours from fresh articles.
## What's next for Minerva Daily News Reporter
Our project will have a lasting impact on the education of an audience ranging in all age groups. Anime is one great example of a venue that can broadcast news, and we selected anime characters as a humorous and eye-catching means to educate the younger audience. Our project and its customization allow for the possibility of new venues and greater exploration of making education more fun and accessible to a vast audience. We hope to take our project further and add more animations as well as more features.
## Challenges
Our compute platform, Satori, has a unique IBM ppc64le architecture that makes package and dependency management a nightmare.
## What we learned
8 hours in planning = 24 hours in real time.
## Github
<https://github.com/gtangg12/liszt>
|
losing
|
## Inspiration
In the time of corona, it can be great to interact with friends over group video chat, and especially to play games. Similarly to Cards Against Humanity or other collaborative games, we thought it would be great to implement Mafia in a virtual setting.
## What it does
Mafia, or variants like One Night Werewolf, involves several players with secret roles, including mafia, detectives, protectors, and townspeople. The goal is to have a central game room from which events are announced, while secret events, such as the mafia gathering to choose a victim or the detectives meeting to pick suspects, stay hidden. Thus, some players' video and audio need to be turned on periodically without the others knowing.
## How I built it
We used Twilio's Programmable Video API and group rooms, together with React.js, to host the game rooms and implement the game logic.
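The token-minting code isn't shown; with Twilio's Python helper library, granting a player access to the game room could look like this (the SIDs, secret, and room name are placeholders):

```python
# Sketch: mint a Twilio Video access token so a player can join the game room.
# Account SID, API key/secret, and room name are placeholders.
from twilio.jwt.access_token import AccessToken
from twilio.jwt.access_token.grants import VideoGrant

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
API_KEY_SID = "SKXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
API_SECRET = "your_api_secret"                      # placeholder

def room_token(player_name, room_name="mafia-night"):
    token = AccessToken(ACCOUNT_SID, API_KEY_SID, API_SECRET,
                        identity=player_name)
    token.add_grant(VideoGrant(room=room_name))
    return token.to_jwt()  # hand this to the React client to connect
```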
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for PennAppsMafia
|
## Inspiration
*Mafia*, also known as *Werewolf*, is a classic in-person party game that university and high school students play regularly. It's been popularized by hit computer games such as Town of Salem and Epic Mafia that serve hundreds of thousands of players, but where these games go *wrong* is that they replace the in-person experience with a solely online experience. We built Super Mafia as a companion app that people can use while playing Mafia with their friends in live social situations to *augment* rather than *replace* their experience.
## What it does
Super Mafia replaces the role of the game's moderator, freeing up every student to play. It also allows players to play character roles which normally aren't convenient or even possible in-person, such as the *gunsmith* and *escort*.
## How we built it
Super Mafia was built with Flask, Python, and MongoDB on the backend, and HTML, CSS, and Javascript on the front-end. We also spent time learning about mLab which we used to host the database.
## Challenges we ran into
Our biggest challenge was making sure that our user experience would be simple to use and approachable for young users, while still accommodating the extra features we built.
## Accomplishments that we're proud of
We survived the deadly combo of a cold night and the 5th floor air conditioning.
## What we learned
How much sleeping during hackathons actually improves your focus...lol
## What's next for Super Mafia
* Additional roles (fool, oracle, miller, etc) including 3rd party roles. A full list of potential roles can be found [here](https://epicmafia.com/role)
* Customization options (length of time/day)
* Last words/wills
* Animations and illustrations
|
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. Finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm.
## About
Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface.
We made it one page so that all the tools are accessible on one screen and transitions between them are easier.
We identify this page as a study room where users can collaborate and join with a simple URL.
Everything is Synced between users in real-time.
## Features
Our platform allows multiple users to enter one room and access tools for watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This makes collaboration between users seamless and pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express on the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions, and we realized communication was key for us to succeed in building our project under a time constraint. We also ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at once. We optimized the process significantly for smooth real-time interactions.
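The real backend is Node/Socket.IO and its code isn't included; purely to illustrate the broadcast-throttling fix described above, here is the same idea sketched with the python-socketio server (the event name and the 50 ms window are assumptions):

```python
# Sketch of the throttling idea only: coalesce rapid whiteboard updates and
# broadcast at most once per window instead of once per incoming event.
# The project's real backend is Node/Socket.IO; python-socketio stands in.
import time
import socketio

sio = socketio.Server(cors_allowed_origins="*")
WINDOW = 0.05          # seconds between broadcasts (assumed value)
_last_sent = 0.0
_pending = None

@sio.event
def whiteboard_update(sid, data):
    global _last_sent, _pending
    _pending = data                      # keep only the newest state
    now = time.monotonic()
    if now - _last_sent >= WINDOW:
        # Relay to everyone in the room except the sender
        sio.emit("whiteboard_update", _pending, skip_sid=sid)
        _last_sent, _pending = now, None
```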
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
We plan on adding more relevant tools and widgets, and expanding into other fields of work to grow our user demographic.
We also want to include interface customization options so users can personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
|
losing
|
## Inspiration
We wanted to give virtual reality a purpose, while pushing its limits and making it a fun experience for the user.
## What it does
Our game immerses the user in the middle of an asteroid belt. The user is accompanied by a gunner, and the two players must work together to complete the course in as little time as possible. Player 1 drives the spacecraft using a stationary bike with embedded sensors that provide real-time input to the VR engine. Player 2 uses a wireless game controller to blow up asteroids and clear the way to the finish.
## How we built it
Our entire system relies on a Firebase server for inter-device communication. Our bike hardware uses a potentiometer and a hall-effect sensor running on an Arduino to measure the turn state and RPM of the bike. This data is continuously streamed to the Firebase server, where it can be retrieved by the virtual reality engine. Player 1 and Player 2 constantly exchange game-state information over the Firebase server to synchronize their virtual reality experiences with virtually no latency.
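The bridge between the Arduino and Firebase isn't spelled out; one plausible shape is a laptop-side relay that reads the serial port and PUTs to the Realtime Database REST API (the URL, serial port, and line format are all assumptions):

```python
# Sketch: relay bike telemetry from the Arduino's serial port to Firebase's
# Realtime Database REST API. URL, port, and line format are assumptions.
import requests
import serial  # pyserial

FIREBASE_URL = "https://your-project.firebaseio.com/bike.json"  # placeholder

def relay():
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
        while True:
            line = port.readline().decode(errors="ignore").strip()
            if not line:
                continue
            # Assumed Arduino line format: "<rpm>,<turn_state>"
            rpm, turn = line.split(",")
            requests.put(FIREBASE_URL,
                         json={"rpm": float(rpm), "turn": float(turn)},
                         timeout=5)
```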
We had the option to use Unity for our 3D engine, but instead we used the SmokyBay 3D Engine (which was developed from scratch by Magnus Johnson). We chose Magnus' engine because it let us more easily add support for Firebase and additional hardware.
## Challenges we ran into
We spent a large amount of time trying to arrive at the correct hardware configuration for our application. In particular, we spent many hours working with the Particle Photon before realizing that its high latency makes it unsuitable for real-time applications. We had no prior experience with Firebase and spent a lot of time integrating it into our project, but it ultimately turned out to be a very elegant solution.
## Accomplishments that we're proud of
We are most proud of the integration aspect of our project. We had to incorporate many sensors, 2 iPhones, a FireBase database, and a game controller into a holistic virtual reality experience. This was in many ways frustrating, but ultimately very rewarding.
## What we learned
In retrospect, it would have been very helpful to have a more complete understanding of the hardware available to us and its limitations.
## What's next for TourDeMarsVR
Adding more sensors, and potentially integrating Leap Motion in place of the handheld gamepad.
|
## Inspiration
There are very small but impactful ways to be eco-conscious 🌱 in your daily life, like using reusable bags, shopping at thrift stores, or carpooling. We know one thing for certain; people love rewards ✨. So we thought, how can we reward people for eco-conscious behaviour such as taking the bus or shopping at sustainable businesses?
We wanted a way to make eco-consciousness simple, cost-effective, rewarding, and accessible to everyone.
## What it does
Ecodes rewards you for every sustainable decision you make. Some examples are: shopping at sustainable partner businesses, taking the local transit, and eating at sustainable restaurants. Simply scanning an Ecode at these locations will allow you to claim EcoPoints that can be converted into discounts, coupons or gift cards to eco-conscious businesses. Ecodes also sends users text-based reminders when acting sustainably is especially convenient (ex. take the bus when the weather is unsafe for driving). Furthermore, sustainable businesses also get free advertising, so it's a win-win for both parties! See the demo [here](https://drive.google.com/file/d/1suT7tPila3rz4PSmoyl42G5gyAwrC_vu/view?usp=sharing).
## How we built it
We initially prototyped the UI/UX in Figma, then built out a React Native frontend and a Flask backend. QR codes were generated for each business in Python and detected using a camera-access feature written in React Native. We then used the OpenWeatherMap API and the Twilio API in the backend to send users text-based eco-friendly reminders.
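The QR generation code isn't included; with the `qrcode` package, producing per-business codes that encode an ID and point value could look like this (the `ecodes://` payload scheme is an assumption):

```python
# Sketch: generate one QR image per partner business, encoding its ID and the
# EcoPoints it awards. The ecodes:// payload scheme is an assumption.
import qrcode

def make_ecode(business_id, points, out_path=None):
    payload = f"ecodes://claim?business={business_id}&points={points}"
    img = qrcode.make(payload)            # returns a PIL image
    out_path = out_path or f"ecode_{business_id}.png"
    img.save(out_path)
    return out_path

for biz, pts in [("thrift-hub", 10), ("green-eats", 15)]:
    print(make_ecode(biz, pts))
```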
## Challenges we ran into
Implementing camera access in the app and scanning specific QR codes corresponding to a unique business and number of EcoPoints was a challenge. We had to add these technical features to the front end seamlessly, demanding little effort from the user, while still having them function correctly. But after all, there's nothing a little documentation can't solve! In the end, we were able to debug our code and successfully implement this key feature.
## Accomplishments that we're proud of
**Kemi** is proud that she learned how to implement new features such as camera access in React Native. 😙
**Akanksha** is proud that she learnt Flask and interfacing with Google Maps APIs in python. 😁
**Vaisnavi** is proud that she was able to generate multiple QR codes in python, each with a unique function. 😝
**Anna** is proud to create the logistics behind the project and learnt about frontend and backend development. 😎
Everyone was super open to working together as a team and helping one another out. As a team, we learnt a lot from each other in a short amount of time, and the effort was worth it!
## What we learned
We took the challenge to learn new skills outside of our comfort zone, learning how to add impressive features to an app such as camera access, QR code scanning, counter updates, and aesthetic UI. Our final hack turned out to be better than we anticipated, and inspired us to develop impactful and immensely capable apps in the future :)
## What's next for Ecodes
Probably adding a location feature that sends users a text-based reminder when an Ecode is nearby. We can use the Google Maps Geolocation API and the Twilio API to implement this. Additionally, we hope to add a carpooling feature that lets users earn points together by carpooling with one another!
|
## Inspiration
We wanted to improve even more on the immersiveness of VR by using 3D maps based on the real world. We also wanted to demonstrate the power of AR as a medium for interactive and collaborative gaming, and to connect the physical world with the virtual one, so a player moves through the game by jogging in place.
## What it does
Our game allows 4 players, divided into teams of 2, to run through the streets of Paris looking for a goal destination. On each team, one player is immersed at the street level through VR, while his or her comrade views the entire world map as an AR overlay on a surface. The navigator must help their teammate search through the streets of an unfamiliar city while trying to get to the destination before the other team. The faster the player on VR jogs in place, the faster they move through the VR world.
## How we built it
* We worked on developing and rendering real-world meshes from the Google Street View API. This was done through openFrameworks for visual rendering and scene reconstruction. We also used MeshLab and Blender to generate these 3D scenes, and ran SLAM algorithms to create 3D scenes from 2D panoramas.
* One member worked on exploring hardware options to connect the physical and virtual worlds. He used Apple's CoreMotion framework and applied signal-processing techniques to turn accelerometer data from the iPhone in the Google Cardboard into accurate estimates of jogging speed.
* One of the members developed and synchronized the AR/VR world across the 4 players. He used ARKit and SceneKit to create the VR world and the tabletop AR world overlay, and Firebase to synchronize the VR user's location with the player icon on the AR user's bird's-eye view.
## Challenges we ran into
A substantial amount of our time was spent trying to stitch together our own 3D renderings from Google StreetView panoramas. Ultimately, we had to download and import existing object files from the internet. Another huge challenge was synchronizing all four players in real time in a shared AR/VR world.
Further, while we were able to use Fast Fourier Transforms to get extremely accurate estimates of our jogging speed when processing the CoreMotion data in Matlab, implementing this in Swift proved much more difficult, so we built a simpler (but still fairly accurate) estimation script that does not transform the data into the frequency domain.
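The Matlab analysis isn't reproduced in the write-up; the core idea, finding the dominant frequency of the vertical accelerometer signal and reading it as steps per second, looks like this in numpy (the 50 Hz sample rate and the 0.5–4 Hz band are assumptions):

```python
# Sketch: estimate jogging cadence from accelerometer samples by taking the
# dominant FFT frequency within a plausible human-stride band (0.5-4 Hz).
# The 50 Hz sample rate and the band limits are assumptions.
import numpy as np

def cadence_hz(accel_z, sample_rate=50.0):
    accel_z = np.asarray(accel_z, dtype=float)
    accel_z -= accel_z.mean()                 # drop the gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel_z))
    freqs = np.fft.rfftfreq(len(accel_z), d=1.0 / sample_rate)
    band = (freqs >= 0.5) & (freqs <= 4.0)    # plausible stride frequencies
    return float(freqs[band][np.argmax(spectrum[band])])  # steps per second
```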
## Accomplishments that we're proud of
Making a game that we would play ourselves and that we think others would too. We created an immersive experience for four users, and the players even get some exercise while they're playing!
## What we learned
We should have eliminated some of the dead-ends that we found ourselves stuck in over the course of the hackathon by checking out more APIs beforehand.
## What's next for MazerunnerAR
World dominance. Extending our game to work in any location around the world where there is Google Street View data. The game would then have unlimited maps: players would just pick an area from Google Earth and could play the game with their friends in AR/VR.
|
partial
|