| anchor (string, 1–23.8k chars) | positive (string, 1–23.8k chars) | negative (string, 1–31k chars) | anchor_status (string, 3 classes) |
|---|---|---|---|
## Inspiration
Homes are becoming more and more intelligent thanks to smart-home products such as the Amazon Echo or Google Home. However, users still have little visibility into the state of their home's infrastructure.
## What it does
Our smart chat bot helps users monitor their house's state from anywhere using low-cost sensors. Our product is easy to install, user-friendly, and fully expandable.
**Easy to install**
By using compact sensors, HomeScan is able to monitor information from your house. Afraid of gas leaks or leaving the heating on? HomeScan has you covered. Our product requires minimal setup and is energy efficient. In addition, since we use a small cellular IoT board to gather the data, HomeScan sensors are Wi-Fi-independent. This way, HomeScan can be placed anywhere in the house.
**User Friendly**
HomeScan uses Cisco Spark bots to communicate data to users. Run diagnostics or ask for specific sensor data: our bots can do it all. Best of all, there is no need to learn command lines, as our smart bots use text-analysis technologies to find the perfect answer to your question. Since we are using Cisco Spark, the bots can be accessed on the go, on both the Spark mobile app and our website. Therefore, you'll have no problem accessing your data while away from home.
**Fully expandable**
HomeScan was built with the future in mind. Our product will fully benefit from future technological advancements. For instance, 5G will enable HomeScan to expand and reach places that currently have a poor cellular signal. In addition, the anticipated release of Cisco Spark's "guestID" will grant access to our smart bots to an even wider audience. Newer bot customization tools will also allow us to implement additional functionality. Lastly, HomeScan can be expanded into an infrastructure ranking system. This could have a tremendous impact on the real-estate industry, as houses could be rated based on their infrastructure performance. This data could be useful to services such as Airbnb, insurance companies, and even homeowners.
We are confident that HomeScan is the solution for monitoring a healthy house and improving your real-estate decisions.
## How I built it
The infrastructure data is gathered by a Particle Electron board running on a cellular network. The data is then sent to an Amazon Web Services server. Finally, a Cisco Spark chat bot retrieves the data and returns relevant answers to the user's queries. The intelligent bot is also capable of warning the user in case of an emergency.
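As a rough sketch of that data path, the server-side glue could look something like this; the endpoint, payload fields, alert threshold, and tokens are illustrative assumptions, not the team's actual code:

```python
# Minimal sketch: a Particle webhook posts sensor readings here, and the
# server relays alerts to a Cisco Spark room. All names are placeholders.
from flask import Flask, request
import requests

app = Flask(__name__)
latest = {}  # most recent reading per sensor

SPARK_TOKEN = "YOUR_BOT_TOKEN"  # placeholder
ALERT_ROOM = "YOUR_ROOM_ID"     # placeholder

@app.route("/particle", methods=["POST"])
def particle_webhook():
    # Particle webhooks post JSON with the event name and data payload.
    event = request.get_json()
    sensor, value = event["event"], float(event["data"])
    latest[sensor] = value
    if sensor == "gas_ppm" and value > 400:  # assumed alert threshold
        requests.post(
            "https://api.ciscospark.com/v1/messages",
            headers={"Authorization": f"Bearer {SPARK_TOKEN}"},
            json={"roomId": ALERT_ROOM,
                  "text": f"HomeScan warning: gas level at {value} ppm"},
        )
    return "", 204
```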
## Challenges I ran into
Early on, we ran into numerous hardware issues with the Particle Electron board. After consulting with industry professionals and hours of debugging, we managed to get the board working the way we wanted. Additionally, with no prior back-end programming experience, we struggled to understand the tools and the interactions between platforms, but ended up with successful results.
## Accomplishments that we are proud of
We are proud to showcase a full-stack solution built with tools we had little to no prior experience with.
## What we learned
With perseverance and mutual moral support, anything is possible. And never be shy to ask for help.
|
## Inspiration
We visit many places, yet we know very little about the historic events or historic places around us. Today In History notifies you of historic places near you so that you do not miss them.
## What it does
Today In History notifies you about important events that took place on exactly the same date as today, but a number of years ago in history. It also notifies you of the historical places around you, along with the distance and directions. Today In History is also available as an Amazon Alexa skill. You can always ask Alexa: "Hey Alexa, ask Today In History what's historic around me?", "What happened today?", "What happened today in India?"...
## How we built it
We have two data sources. One is Wikipedia: we pull all the events for today's date from the wiki and filter them based on the user's location. We also use data from Philadelphia to fetch the historic places nearest to the user's location, and we used MapQuest libraries to give directions in real time.
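The writeup doesn't name the exact endpoint, so treat this as one plausible sketch of the Wikipedia half of the pipeline, using Wikipedia's public "on this day" REST feed:

```python
# Fetch today's historical events from Wikipedia's REST feed.
import datetime
import requests

def events_today(limit=5):
    now = datetime.date.today()
    url = (f"https://en.wikipedia.org/api/rest_v1/feed/onthisday/events/"
           f"{now.month:02d}/{now.day:02d}")
    resp = requests.get(url, headers={"User-Agent": "TodayInHistory/0.1"})
    resp.raise_for_status()
    events = resp.json()["events"]
    # Each event carries a year and a one-line description.
    return [f"{e['year']}: {e['text']}" for e in events[:limit]]

if __name__ == "__main__":
    for line in events_today():
        print(line)
```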
## Challenges we ran into
Alexa does not know a person's location, only the address the device is registered with, so we built a novel backend that acts as a bridge between the web app and Alexa to keep them synchronized with the user's location.
|
## Inspiration
People around the world consume large amounts of online media daily, so it is important to make sure information is safe to consume. Seeing specific words or phrases occur in social media, news articles or blogs can be harmful to users and negatively affect their wellbeing.
## What it does
TriggerSafe censors trigger words as specified by the user while they browse the Internet, and notifies the users when a website contains possible triggers.
## How we built it
We used JavaScript, Google's Manifest V3, HTML, and CSS.
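The extension itself is JavaScript, but the core censoring step is simple enough to sketch in a language-neutral way; here is an illustrative Python version of the word-masking logic (word list and mask text are assumptions):

```python
# Mask user-specified trigger words and count how many were found,
# so the extension can also warn that a page contains possible triggers.
import re

def censor(text, trigger_words, mask="[censored]"):
    # \b keeps "art" from matching inside "start"; IGNORECASE catches
    # capitalized variants.
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, trigger_words)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.subn(mask, text)  # (censored_text, number_of_hits)

clean, hits = censor("A violent scene follows.", ["violent", "blood"])
print(clean, f"({hits} possible trigger(s) masked)")
```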
## Challenges we ran into
We ran into challenges while thinking of ways to simplify the user experience so the user doesn’t have to expend great effort. We didn’t want the user to be burdened with the responsibility of listing out their trigger words, but we found it difficult to encompass all keywords relating to the trigger. Thus, we implemented an input field for the user to enter more specific keywords that they would prefer to be censored.
## Accomplishments that we're proud of
We are proud of the fact that we were able to brainstorm and come up with this idea to create a simple web extension that would aid users in navigating through the internet safely. We are also proud of being able to create a nice looking webpage.
## What we learned
We learned how to collaborate on working on a chrome extension, and the use cases that we need to address before designing for the user. We learned how to create a Chrome extension and how to style a webpage using CSS.
## What's next for TriggerSafe
Our existing product could be improved by continued work building on it, and possibly by expanding its functionality to make it smarter and better for users.
Furthermore, since TriggerSafe is currently only a web extension and the internet is more widely browsed on mobile devices, it could be useful to implement this idea for mobile devices as well.
|
winning
|
# Bits-And-Atoms
This is a Hackathon project done for nwHacks 2018.
Our demo project integrates various peripherals and languages/frameworks to connect the world of bits with the world of atoms.
The work consists of two applications:
1. Application to control an IoT device (Raspberry Pi) by communicating through AWS IoT, API Gateway, and Lambda
* written in Python 3
* collects motion sensor data and publishes to AWS IoT
* captures camera images by sending base64-encoded images through an IoT publish
* turns on an LED by subscribing to IoT publishes from VR
* makes noise on a buzzer
2. Application to use VR and a LeapMotion to control the IoT device located in a remote network.
* back-end in Node.js
* front-end in AngularJS
* Relies on a local LeapMotion service (the same machine running the client-side app)
* renders IoT device status inside the VR space
* controls the LED and buzzer within the VR space through LeapMotion interaction
* AWS API Gateway and Lambda are used as a proxy for IoT publishes for controlling LED and buzzer
Using the user-facing application, we can use our hands to touch a button in the virtual space and turn on an LED and a buzzer in a remote network. We also receive a textual alert message within the virtual space when motion is detected on the IoT device, and we can see a (rather slow) video stream coming from the IoT device.
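For the Pi-side application (item 1 above), a minimal publisher/subscriber might look like the sketch below, using the AWS IoT Device SDK for Python; topic names, pins, and certificate paths are assumptions rather than the project's actual configuration:

```python
# Publish PIR motion events to AWS IoT and subscribe to LED commands.
import json
import time
import RPi.GPIO as GPIO
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

LED_PIN, PIR_PIN = 18, 23  # hypothetical wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
GPIO.setup(PIR_PIN, GPIO.IN)

client = AWSIoTMQTTClient("bits-and-atoms-pi")
client.configureEndpoint("xxxxxxxx.iot.us-west-2.amazonaws.com", 8883)
client.configureCredentials("root-CA.crt", "private.pem.key", "cert.pem.crt")
client.connect()

def on_led_command(client_, userdata, message):
    # The VR app publishes {"led": true/false} via API Gateway + Lambda.
    state = json.loads(message.payload)["led"]
    GPIO.output(LED_PIN, GPIO.HIGH if state else GPIO.LOW)

client.subscribe("bits/led", 1, on_led_command)

def on_motion(channel):
    # Publish a motion event whenever the PIR sensor fires.
    client.publish("bits/motion", json.dumps({"motion": True}), 1)

GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=on_motion)

while True:
    time.sleep(1)  # keep the process alive for MQTT callbacks
```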

|
## Inspiration
We wanted to experiment with interesting hardware technologies such as the LeapMotion and OpenBCI.
## What it does
We aim to join friends across nations using long range IoT technologies through a relaxing game of pong. Oh, and by the way, the gameplay changes based on how excited you are! (We measure your brainwaves.)
## How we built it
## Challenges we ran into
Each of the frameworks we used seemed to have different XYZ units and axes. We managed to connect Cannon.js (a physics framework) and Three.js (a 3D framework) with the Leap and OpenBCI.
## Accomplishments that we're proud of
## What we learned
## What's next for Leap On
|
## Inspiration
We were inspired by the large and growing problem of stray, homeless, and missing pets, and by the ways technology could be leveraged to solve it: raising awareness, adding incentives, and exploiting data.
## What it does
Pet Detective is first and foremost a chat bot, integrated into a Facebook page via Messenger. The chatbot serves two user groups: pet owners who have recently lost their pets, and good Samaritans who would like to help by reporting sightings. Moreover, Pet Detective provides a monetary incentive for such people by collecting donations from happily served users. Pet Detective provides a convenient, hassle-free user experience to both user bases. A simple virtual button generated by the chatbot lets the reporter allow the bot to collect location data. In addition, the bot asks for a photo of the pet and runs computer-vision algorithms to determine several attributes and match factors. The bot then places a track on the dog and continues to alert the owner about potential matches by sending images. In the case of a match, the service sets up a rendezvous with a trusted animal-care partner. Finally, Pet Detective collects data on these transactions and reports, and provides a data-analytics platform to pet-care partners.
## How we built it
We used messenger developer integration to build the chatbot. We incorporated OpenCV to provide image segmentation in order to separate the dog from the background photo, and then used Google Cloud Vision service in order to extract features from the image. Our backends were built using Flask and Node.js, hosted on Google App Engine and Heroku, configured as microservices. For the data visualization, we used D3.js.
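A rough sketch of that image pipeline, assuming GrabCut for the segmentation step and Cloud Vision label detection for the attributes (the rectangle and parameters are placeholders):

```python
# Segment the pet out of the background, then label the cropped result.
import cv2
import numpy as np
from google.cloud import vision

def segment_pet(path):
    img = cv2.imread(path)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    h, w = img.shape[:2]
    rect = (10, 10, w - 20, h - 20)  # assumed: pet roughly centered
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == 2) | (mask == 0), 0, 1).astype("uint8")
    return img * fg[:, :, np.newaxis]

def pet_attributes(path):
    segmented = segment_pet(path)
    ok, buf = cv2.imencode(".jpg", segmented)
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=buf.tobytes()))
    # Labels like "golden retriever" become attributes and match factors.
    return [(l.description, l.score) for l in response.label_annotations]
```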
## Challenges we ran into
Finding the right DB for our use case was challenging, as was setting up and deploying on the cloud platform. Getting the chatbot to be reliable was also challenging.
## Accomplishments that we're proud of
We are proud of a product that has real potential to drive positive change, as well as the look and feel of the analytics platform (although we still need to add much more there). We are proud of balancing 4 services efficiently, and we like our clever name/logo.
## What we learned
We learned a few new technologies and algorithms, including image segmentation and several Google Cloud Platform services. We also learned that NoSQL databases are the way to go for hackathons and rapid prototyping.
## What's next for Pet Detective
We want to expand the capabilities of our analytics platform and partner with pet and animal businesses and providers in order to integrate the bot service into many different Facebook pages and websites.
|
losing
|
## Inspiration
We live in a world where kiosks and card machines break down, and where the owners of your favorite local restaurant only take cash, even as we move toward a cashless, contactless transaction society.
We want to make this transition a positive experience by offering a next-generation alternative that incentivizes both customers and owners.
The current card and online transaction industries bully restaurant owners, who operate on very tight margins just to survive and compete, with 3% + 0.5-cent transaction fees. Without an alternative, as this shift arrives, this is just plain monopoly and bullying.
We can potentially capture a $114 B (cash) + $247 B (debit) + $554 B (credit) = $915 B market across both online and offline POS, just in Canada.
## What it does
We replace cash, credit, and debit transactions with blockchain technology. However, we overcome the traditional challenges of blockchain by using Algorand's pure proof-of-stake validation. This brings the energy, time, and cost of a traditional blockchain transaction down to a near-instant time frame, just like current commercial infrastructure, with minimal fees.
We provide beautiful, easy-to-use customer and merchant Android apps for Paysy. For commercial settings, we also provide a Java application that merchants can run on any device.
The merchant sends a bill to the user, the user approves or declines, and the smart contract is fulfilled through a given node.
## How we built it
We integrated Algorand's services in both the desktop and mobile applications by using their SDK. We were able to finally build a working point-of-sale (POS) service by successfully adding different stages of blockchain transaction mechanisms to our application- creating and storing of public-private keys for users, generating unsigned transactions for exchange, signing and verifying transactions and adding the node to the blockchain.
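The stages above map onto a few SDK calls; here is a compressed sketch with the Python algosdk (the endpoint, receiver address, and amount are placeholders):

```python
# Key generation -> unsigned payment -> sign -> submit through a node.
from algosdk import account, transaction
from algosdk.v2client import algod

# Public Algonode testnet endpoint; no token needed (placeholder setup).
client = algod.AlgodClient("", "https://testnet-api.algonode.cloud")

# 1. Create and store a public/private key pair for the user.
private_key, address = account.generate_account()

# 2. Generate an unsigned payment transaction: the merchant's "bill".
params = client.suggested_params()
txn = transaction.PaymentTxn(sender=address, sp=params,
                             receiver="MERCHANT_ADDRESS",  # placeholder
                             amt=1_000_000)  # microAlgos (1 Algo)

# 3. The customer signs (approves) it, and it is submitted through a node.
signed = txn.sign(private_key)
txid = client.send_transaction(signed)
transaction.wait_for_confirmation(client, txid, 4)
```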
## Challenges we ran into
It was our first time learning about blockchain technology. There were some issues with protocol communication, but with the help of mentors and company employees, we were able to successfully build the infrastructure.
## Accomplishments that we're proud of
We are now familiar with the blockchain technology and are able to use a state-of-the-art blockchain platform to create our application that will change the world for the better.
## What we learned
We learned about the blockchain technology.
## What's next for Paysy
We will implement a restaurant recommendation algorithm so that users can save time, and provide an ML-based analytics dashboard for small restaurant owners to enable strategic business decisions. This will allow them to go from surviving to THRIVING.
|
## Inspiration
We were tired of the same boring jokes that Alexa tells. In an effort to spice up her creative side, we decided to implement a machine learning model that allows her to rap instead.
## What it does
Lil 'lexa uses an LSTM machine-learning model to create her own rap lyrics based on the user's input. Users first tell Alexa their rap name, along with which rapper they would like Lil 'lexa's vocabulary to be inspired by. Models have been created for Eminem, Cardi B, Nicki Minaj, Travis Scott, and Wu-Tang Clan. After the user drops a bar themselves, Lil 'lexa will spit back her own continuation, along with a beat to go with it.
## How I built it
The models were trained using TensorFlow along with the Keras API. Lyrics for each rapper were scraped from metrolyrics.com using the Selenium Python package, and served as the basis for that rapper's vocabulary. Fifty-word sequences were used as training data, from which the model learns to guess the next best word. The web application that takes in the seed text and outputs the generated lyrics is built with Flask and deployed on Heroku. We also used Voiceflow to create the Alexa program, which retrieves the generated lyrics through an API call.
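A sketch of that next-word setup, with 50-token input windows and the following word as the label; layer sizes, epochs, and the corpus path are illustrative, not the tuned per-rapper values:

```python
# Train a next-word LSTM on sliding 50-word windows of scraped lyrics.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer

SEQ_LEN = 50
lyrics = open("eminem_lyrics.txt").read()  # scraped corpus (placeholder path)

tokenizer = Tokenizer()
tokenizer.fit_on_texts([lyrics])
encoded = tokenizer.texts_to_sequences([lyrics])[0]
vocab = len(tokenizer.word_index) + 1

# Slide a 50-word window over the corpus; the 51st word is the label.
X = np.array([encoded[i:i + SEQ_LEN] for i in range(len(encoded) - SEQ_LEN)])
y = np.array([encoded[i + SEQ_LEN] for i in range(len(encoded) - SEQ_LEN)])

model = Sequential([
    Embedding(vocab, 100, input_length=SEQ_LEN),
    LSTM(128),
    Dense(vocab, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, batch_size=128)
```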
## Challenges I ran into
* Formatting the user input so that it would always work with the model
* Creating a consistent vocab list for each rapper
* Voiceflow inputs being merged together or getting stuck
## Accomplishments that I'm proud of
* My Alexa can finally gain some street cred
## What I learned
* Using Flask and Heroku to deploy an application
* Using Voiceflow to create programs that work with Amazon Alexa and Google Assistant
* Using Tensorflow to train an LSTM model
## What's next for Lil 'lexa
* Implementing more complex models that consider sentences and rhyming
* Call and response format for a rap battle
* Wider range of background beats
|
## Inspiration
As lane-keep assist and adaptive cruise control features are becoming more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system
## What it does
We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately, by braking, slowing down, and/or turning
## How we built it
Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle
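The team relayed the data through Autocode and Bluetooth; purely to make the relay step concrete, here is a rough Python stand-in that forwards a driving mode received over HTTP to the Arduino over a serial link (port, baud rate, and mode codes are assumptions):

```python
# Receive the ARKit app's driving mode and forward one byte to Arduino.
import serial
from flask import Flask, request

app = Flask(__name__)
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port

MODES = {"cruise": b"C", "brake": b"B", "slow": b"S", "turn": b"T"}

@app.route("/mode", methods=["POST"])
def set_mode():
    # The iOS app posts e.g. {"mode": "brake"} when the POV enters a
    # danger zone; one byte per mode keeps the Arduino parsing trivial.
    mode = request.get_json()["mode"]
    arduino.write(MODES.get(mode, b"C"))
    return "", 204
```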
## Challenges we ran into
One of the main challenges was transferring data from an iOS app/device to Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly
## Accomplishments that we're proud of
Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight!
## What we learned
We learned about Swift and more generally about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future
## What's next for Anti-Bumper Car - A Collision Avoidance System
Specifically for this project, solving an issue related to file IO and reducing latency would be the next step in providing a more reliable collision avoiding system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road
|
partial
|
## Inspiration
Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females: on average, women in Ontario make 88 cents for every dollar a man makes. This is why it is important to encourage women to become more conscious of their spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users.
## What it does
Our app is a budgeting tool that targets young females, with useful incentives to boost self-motivation for their financial well-being. The app features a simple scale graphic visualizing the user's financial balancing act. By balancing the scale and achieving their monthly financial goals, users are provided with various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app reinforces good financial behaviour by providing gamified experiences with small incentives.
The app will be provided to users free of charge. As with any free service, anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and more spending from our users at participating partners. The customized reward is an opportunity for targeted advertising.
## Persona
Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising cost of living makes it difficult for her to maintain her budget. She heard about a new app called Re:skale that provides personalized rewards just for achieving budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided a simple balancing-scale animation for immediate visual feedback on her financial well-being. The app frequently provided words of encouragement and useful tips to maximize her chance of success. She especially loves how she could set the goals and follow through on her own terms. The personalized reward was sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards.
## How we built it
We used React, Node.js, Firebase, HTML, and Figma.
## Challenges we ran into
* We had a number of ideas but struggled to define the scope and topic for the project.
* Different design philosophies made it difficult to maintain a consistent and cohesive design.
* Sharing resources was another difficulty, due to the digital nature of this hackathon.
* On the development side, some technologies were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it in our app.
* Additionally, resolving merge conflicts proved to be more difficult than expected. The time constraint was also a challenge.
## Accomplishments that we're proud of
* The use of technologies that were new to us, including Firebase and React Hooks
* On the design side, it was great to create a complete prototype of our vision for the app.
* With this being some members' first hackathon, the time constraint was a stressor, but with the support of the team they were able to feel comfortable despite the lack of time
## What we learned
* we learned how to meet each other’s needs in a virtual space
* The designers learned how to merge design philosophies
* How to manage time and work with others who are on different schedules
## What's next for Re:skale
Re:skale can be rescaled to include people of all genders and ages.
* Closer integration with other financial institutions and credit card providers for better automation and prediction
* A physical receipt scanner feature for non-debit/credit payments
## Try our product
This is the link to a prototype app
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1>
This is a link for a prototype website
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
|
## Inspiration
As university students, emergency funds may not be at the top of our priority list; however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that putting a set amount of money away every time income rolls in may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, a baseline amount, and a goal for the emergency fund, and the app creates a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. Users earn milestones or achievements for reaching certain sub-goals, which also gives them extra motivation if their emergency fund falls below the baseline amount they set. Users can also change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly.
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience with Flutter, let alone mobile app development, and learning to use Flutter in a short period of time was easily the greatest challenge we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was the first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience, so we have planned projections for the future of Spend2Save. These include, but are not limited to, integration with actual bank accounts at RBC.
|
## Inspiration
Many individuals lack financial freedom, and this stems from poor spending skills. As a result, our group wanted to create something to help prevent that. We realized how difficult it can be to track the expenses of each individual person in a family. As humans, we tend to lose track of what we purchase and spend money on. Inspired, we wanted to create an app that stops all that by allowing individuals to strengthen their organization and budgeting skills.
## What It Does
Track is an expense tracker website targeting households and individuals with the aim of easing people’s lives while also allowing them to gain essential skills. Imagine not having to worry about tracking your expenses all while learning how to budget and be well organized.
The website has two key components:
* Family Expense Tracker:
The family expense tracker is the `main dashboard` for all users. It showcases each family member's total expenses while also breaking expenses down by category. Both members and owners of the family can access this screen. Members can be added to the owner's family via a household key, which only the owner of the family has access to. Permissions vary between members and owners: owners gain access to each individual's personal expense tracker, while members only have access to their own.
* Personal Expense Tracker:
The personal expense tracker is assigned to each user, displaying their own expenses. Users are allowed to look at past expenses from the start of the account to the present time. They are also allowed to add expenses with a click of a button.
## How We Built It
* Utilized the MERN (MongoDB, Express, React, Node) stack
* RESTful APIs were built using Node and Express and integrated with a MongoDB database
* The frontend was built with vanilla React and Tailwind CSS
## Challenges We Ran Into
* Frontend:
  + Connecting EmailJS to the help form
  + Retrieving specific data from the backend and displaying pop-ups accordingly
  + Keeping the theme consistent while also ensuring that the layout and dimensions didn't overlap or wrap
  + Creating hover animations for buttons and messages
* Backend:
  + Embedded objects were not being correctly updated; we needed to learn about storing references to objects and populating those references
  + Designing the backend based on frontend requirements and the overall goal of the website
## Accomplishments We’re Proud Of
As this was all of our’s first or second hackathons we are proud to have created a functioning website with a fully integrated front and back-end.
We are glad to have successfully implemented pop-ups for each individual expense category that displays past expenses.
Overall, we are proud of ourselves for being able to create a product that can be used in our day-to-day lives in a short period of time.
## What We Learned
* How to properly use embedded objects so that any changes to the object are reflected wherever the object is embedded
* Using the state hook in ReactJS
* Successfully and effectively using React Routers
* How to work together virtually. It allowed us to not only gain hard skills but also enhance our soft skills such as teamwork and communication.
## What’s Next For Track
* Implement an income tracker section allowing the user to get a bigger picture of their overall net income
* Be able to edit and delete both expenses and users
* Store historical data to allow the use of data analysis graphs to provide predictions and recommendations.
* Allow users to create their own categories rather than the assigned ones
* Setting up different levels of permission to allow people to view other family member’s usage
|
winning
|
>
> Domain.com domain: sharescription.net
>
>
>
## Inspiration
We all love leeching off other people's Netflix, but hate it when people do it to us. We wanted to solve the age-old issue of sharing subscriptions. There's currently no easy way to split the bills, as you have to rely on someone paying and everyone else remembering to pay that person back. We aim to change that.
## What it does
People can register via Facebook OAuth, and then add their bills using the simple web interface. They can then choose to share a bill with other people registered on the platform. Once a month, the service automatically splits the bill and sends an Interac e-Transfer request to everyone who has agreed to pay. It then generates a one-time-use virtual debit card number for them to use at the merchant the bill is for.
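The monthly split has one subtle detail worth sketching: cents don't always divide evenly, so the remainder has to be distributed or the requests won't sum to the bill. This is illustrative only; in the real service each share then becomes an Interac e-Transfer request:

```python
# Split a bill in cents so the shares always sum to the exact total.
def split_bill(total_cents, n_people):
    base, remainder = divmod(total_cents, n_people)
    # The first `remainder` people pay one extra cent.
    return [base + (1 if i < remainder else 0) for i in range(n_people)]

shares = split_bill(1699, 3)   # a $16.99 bill shared three ways
assert sum(shares) == 1699     # [567, 566, 566]
```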
## How we built it
The project is lots of different pieces all put together. Our APIs are coded in Node.js via Standard Library; we used Flask and Bootstrap to build our website, and bits of PHP to patch up our knowledge gaps. In terms of APIs, we used the Interac e-Transfer API, and the Marqeta API for generating card numbers.
## Challenges we ran into
>
> We had quite a few issues with API documentation (and the lack of it) and things not running quite as they should be. *-Jack*
>
>
> "I was responsible for the backend of the project and we decided to use Flask with Python to handle the login and authentication. Although all of us have experience with Python but learning how to use Flask for the first time was definitely a struggle for all of us. Secondly I had spent a fair amount of time trying to set up the virtual environment in order to get the project up and running." *-Jerry*
>
>
> "I am not experienced in coding or website building, so I did have to learn and develop my website. At first it was difficult, however, with my team’s constant support I was able to finish the front-end and deploy it in time." *-Sophia*
>
>
> "During the hackathon I found two bugs in Standard Library’s online IDE (case-sensitive usernames not working when sharing code and an error with auto-save overwriting the save files) which took some time away as I sat down with the StdLib sponsors to make sure it was all fixed. There’s a limited supply of documentation on the email parser (SendGrid), and I had to use 3rd party websites to understand how to deploy the app." *-Iyad*
>
>
> Oh my, there was a fair share of challenges. Not only did we have to deal with the Interac documentation, but even then the website tools rarely seemed to work. From there, fate decided that the language of choice was going to be one I had never used before (Node.js), but we still got it working in the end. *-Kevin*
>
>
>
## Accomplishments that we're proud of
>
> I think we've created a really good project, especially considering how many issues with the APIs we ran into during the development of it. *-Jack*
>
>
> "Our project was only 30% done when there was 12 hours left, at that time I didn't expect us to be able to finish the project in time for the demo. However, by some miracles we have managed to pull through and get all of our codes together and to produce this amazing web-app that I am proud to stand behind." *-Jerry*
>
>
> "I came into this hackathon with very little coding knowledge, but I worked hard and was able to create a visually pleasing minimum viable product, which impressed myself and the team. I am proud that I was able to learn the language quickly and make a good product." *-Sophia*
>
>
> "I came in with another team and a different idea. I thought I was set for this hackathon, however, things didn’t go as planned and I had to find a new team during the hackathon. I’m proud that we were able to create a great product during this weekend, with no prior planning, and with half of us meeting for the first time." *-Iyad*
>
>
> To be honest, the fact that the proper output comes out from a range of inputs is a real success for me. My initial expectations were low, but as time progressed, I found more to be proud of in my work. Something about working with the Interac API, starting off with feelings of hopelessness and then creating something, makes it all the better. *-Kevin*
>
>
>
## What we learned
>
> I learned all about the weird ways that some payment APIs handle authentication - Marqeta's way of sending data and authenticating yourself was really unintuitive, but we got there in the end. I also got to learn about how Flask works, and how to handle authentication using it. *-Jack*
>
>
> "I learned how to use Flask and how to deploy a project on to Google App Engine in the past 36 hours. I'm hoping to carry on this skill into creating my personal website." *-Jerry*
>
>
> "I learnt how to develop my own website using Html and Bootstrap. I was able to learn a new frontend language that expands my skillset." *-Sophia*
>
>
> "I learnt how to integrate three different backend languages into one project. I’m surprised at how seamlessly the codes work together (after hours of debugging). Though not my main focus, I was able to float around the team and help the front end and help out with flask." *-Iyad*
>
>
> No doubt I've learnt much about the Interac API and why I probably won't use it again, and about Standard Library, having only now started to understand what an API is. However, the most important thing I've learnt is to embrace the hackathon spirit, and take on tasks that may not be the easiest but are cool nonetheless. *-Kevin*
>
>
>
## What's next for Sharescription
We built the infrastructure for email parsing, but didn't quite get around to implementing it as a feature. We'd love to let people simply forward their bills to the service and have them automatically appear, as this'd make it a lot easier to add new bills.
We'd also like to add support for more subscriptions, and perhaps use machine learning to recognise new services and add them to the platform.
In terms of business, we think partnering with subscription services to show relevant targeted advertising or rewards/discounts to customers using the service would be a really good way to help monetise it.
|
## Inspiration
Have you ever gone on a nice dinner out with friends, only to find that the group is too big for your server to split the bills according to each person's order? Someone inevitably decides to pay for the whole group and asks everyone to pay them back afterwards, but this doesn't always happen right away. When people forget to pay their friends back, it becomes somewhat awkward to bring up...
Enter Encountability, our cash transfer app!
## What it does
Encountability was created as an alternative to current cash transfer mechanisms, such as Interac e-transfer, that are somewhat clunky at best and inconvenient at worst - it sucks when the e-transfers don't arrive immediately and you and the person you're buying stuff from on Facebook Marketplace have to stand there awkwardly shuffling your feet and praying that the autodeposit email arrives soon. You can add friends to the app and send them cash (or request cash of your own) just by navigating to their profile on the app and sending a message in seconds! The app also reminds you of money you owe to any friends you might have on the app, ensuring that you don't forget to pay them back (especially if they shouldered everyone's bill last time you went out) and you spare them the awkwardness of having to remind you that you owe them some cash.
## How we built it
We built the backend in Python with Flask, and used CockroachDB for the database. The RBC Money Transfer API was also used for the project. For the frontend, we used a combination of HTML, CSS, and JavaScript.
## Challenges we ran into
The name Encountability is a portmanteau of "encounter" and "accountability"; this was because we originally envisioned an RPG-style app where dinner bills that needed to be split could be treated like boss monsters and "defeated" by gathering a party of your friends and splitting the bill amongst yourselves easily. Time constraints were in full force this weekend, and we had to cut down on some of our more ambitious planned features after it became evident that there would not be enough time to accomplish everything we wanted. There were some difficulties with learning the techniques and tools necessary to integrate frontend and backend as well, but we pushed through and created something functional in the end!
## Accomplishments that we're proud of
Despite the hurdles and the compromises (and the time constraints... and the steep learning curve...) we were able to create something functional, with a prototype that shows how we envision the app to work and look!
## What we learned
* databases can be fiddly, but when they work, it's a beautiful thing!
* Sanity Walks™ are an essential part of the hackathon experience
* so are 30-min naps
## What's next for encountability
We'd like to connect it to bank accounts directly next time, just like we originally intended! It would also be nice to fully implement the automatic transaction-splitting feature of the app next time, as well as the more social aspects of the app.
|
## Inspiration
The inspiration for this project was drawn from the daily experiences of our team members. As post-secondary students, we often make purchases for our peers for convenience, yet forget to follow up. This can lead to disagreements and accountability issues. Thus, we came up with the idea of CashDat, to alleviate this commonly faced issue. People will no longer have to remind their friends about paying them back! With the available API’s, we realized that we could create an application to directly tackle this problem.
## What it does
CashDat is an application available on the iOS platform that allows users to keep track of who owes them money, as well as who they owe money to. Users are able to scan their receipts, divide the costs with other people, and send requests for e-transfer.
## How we built it
We used Xcode to program a multi-view app and implement all the screens/features necessary.
We used Python and the Optical Character Recognition (OCR) capability built into the Google Cloud Vision API to implement text extraction using AI in the cloud. This was used specifically to pull item names and prices from the scanned receipts.
We used Google Firebase to store user login information, receipt images, as well as recorded transactions and transaction details.
Figma was used to design the front-end mobile interface that users interact with. The application itself was developed primarily in Swift, with a focus on iOS support.
## Challenges we ran into
We found that we had a lot of great ideas for utilizing sponsor APIs, but due to time constraints we were unable to fully implement them.
The main challenge was incorporating the Interac API's Request Money option into our application and Swift code. Since the API was in beta, it was difficult to implement in an iOS app. We certainly hope to continue working on the Interac API integration, as it is a crucial part of our product.
## Accomplishments that we're proud of
Overall, our team was able to develop a functioning application using new APIs provided by sponsors. We used modern design elements and integrated them with the software.
## What we learned
We learned about implementing different APIs and about iOS development in general. We also had very little experience with the Flask backend deployment process; it proved quite difficult at first, but we learned about setting up environment variables and off-site server setup.
## What's next for CashDat
We see a great opportunity for the further development of CashDat as it helps streamline the process of current payment methods. We plan on continuing to develop this application to further optimize user experience.
|
partial
|
## Inspiration
As international students, we often have to navigate around a lot of roadblocks when it comes to receiving money from back home for our tuition.
Cross-border payments are gaining momentum with so many emerging markets. In 2021, the top five recipient countries for remittance inflows in current USD were India (89 billion), Mexico (54 billion), China (53 billion), the Philippines (37 billion), and Egypt (32 billion). The United States was the largest source country for remittances in 2020, followed by the United Arab Emirates, Saudi Arabia, and Switzerland.
However, cross-border payments face 5 main challenges: cost, time, security, standardization, and liquidity.
* Cost: Cross-border payments are typically expensive due to currency exchange costs, intermediary charges, and regulatory costs.
* Time: Most international payments take anywhere between 2-5 days.
* Security: The rate of fraud in cross-border payments is comparatively higher than in domestic payments, because a payment is much more difficult to track once it crosses the border.
* Standardization: Different countries tend to follow different sets of rules & formats, which makes cross-border payments even more difficult & complicated at times.
* Liquidity: Most cross-border payments work on the pre-funding of accounts to settle payments; hence it becomes important to ensure adequate liquidity in correspondent bank accounts to meet payment obligations within cut-off deadlines.
## What it does
Cashflow is a solution to all of the problems above. It provides a secure method to transfer money overseas. It uses the Checkbook.io API to verify users' bank information and check for liquidity, and with features such as KYC it ensures security while enabling instant payments. Further, it uses another API to convert currencies using accurate, non-inflated rates.
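The writeup doesn't name the exchange-rate API, so this sketch uses the free Frankfurter service (ECB reference rates) as a stand-in for the conversion step:

```python
# Convert an amount between currencies using live reference rates.
import requests

def convert(amount, src, dst):
    resp = requests.get("https://api.frankfurter.app/latest",
                        params={"amount": amount, "from": src, "to": dst})
    resp.raise_for_status()
    # With an `amount` parameter, the rate field is the converted amount.
    return resp.json()["rates"][dst]

print(convert(100, "USD", "INR"))  # converted amount, rate-dependent
```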
Sending money:
Our system requests a few pieces of information from you, pertaining to the recipient. After adding your bank details to your profile, you will be able to send money through the platform.
The recipient will receive an email message, through which they can deposit into their account in multiple ways.
Requesting money:
By requesting money from a sender, an invoice is generated to them. They can choose to send money back through multiple methods, which include credit and debit card payments.
## How we built it
We built it using HTML, CSS, and JavaScript. We also used the Checkbook.io API and exchange rate API.
## Challenges we ran into
Neither of us was familiar with backend technologies or React. Mihir had never worked with JS before, and I hadn't worked on many web-dev projects in the last 2 years, so we had to engage in a lot of learning and refreshing of knowledge as we built the project, which took a lot of time.
## Accomplishments that we're proud of
We learned a lot and built the whole web app while continuously learning. Mihir learned JavaScript from scratch and coded in it for the whole project, all in under 36 hours.
## What we learned
We learned how to integrate APIs in building web apps, JavaScript, and a lot of web dev.
## What's next for CashFlow
We had a couple of bugs that we couldn't fix; we plan to work on those in the near future.
|
## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>
Robbie's story stuck with us as an example of how technology can push past limitations. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.
## What it does
We use a Google Cloud-based API to detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script runs in the terminal, it can be used across the computer and all its applications.
## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We pieced together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and much more. We built a large (~30-function) library that could be used to control almost anything on the computer.
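A distilled sketch of the command layer: recognized phrases map to functions that drive the mouse and keyboard. The phrase set here is a tiny illustrative subset of the ~30-function library described above:

```python
# Dispatch transcribed phrases to mouse/keyboard actions via pyautogui.
import pyautogui

COMMANDS = {
    "click": lambda: pyautogui.click(),
    "scroll down": lambda: pyautogui.scroll(-300),
    "scroll up": lambda: pyautogui.scroll(300),
}

def dispatch(transcript):
    # `transcript` is the text returned by the Google Cloud speech API.
    text = transcript.lower().strip()
    if text.startswith("type "):
        # "type hello world" -> literally types "hello world".
        return pyautogui.write(text[len("type "):], interval=0.05)
    for phrase, action in COMMANDS.items():
        if text.startswith(phrase):
            return action()

dispatch("click")
dispatch("type hello world")
```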
## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like Stack Overflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.
## Accomplishments that I'm proud of
We are proud that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, we are also proud that we were able to successfully use a Google Cloud API.
## What I learned
We learned a lot about how the machine interacts with different events on the computer and the time dependencies involved. We also learned how easy Google's API is to use, which encourages us to use it more and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", and how to reduce ambient noise.
## What's next for Speech Computer Control
At the moment we run this script manually through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we developed a Chrome extension that numbers each link on a page after a Google or YouTube search query, so that we could say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
|
## Inspiration
Have you ever gone on a nice dinner out with friends, only to find that the group is too big for your server to split the bills according to each person's order? Someone inevitably decides to pay for the whole group and asks everyone to pay them back afterwards, but this doesn't always happen right away. When people forget to pay their friends back, it becomes somewhat awkward to bring up...
Enter Encountability, our cash transfer app!
## What it does
Encountability was created as an alternative to current cash transfer mechanisms, such as Interac e-transfer, that are somewhat clunky at best and inconvenient at worst - it sucks when the e-transfers don't arrive immediately and you and the person you're buying stuff from on Facebook Marketplace have to stand there awkwardly shuffling your feet and praying that the autodeposit email arrives soon. You can add friends to the app and send them cash (or request cash of your own) just by navigating to their profile on the app and sending a message in seconds! The app also reminds you of money you owe to any friends you might have on the app, ensuring that you don't forget to pay them back (especially if they shouldered everyone's bill last time you went out) and you spare them the awkwardness of having to remind you that you owe them some cash.
## How we built it
We built the backend in Python with Flask, and used CockroachDB for the database. The RBC Money Transfer API was also used for the project. For the frontend, we used a combination of HTML, CSS, and JavaScript.
## Challenges we ran into
The name Encountability is a portmanteau of "encounter" and "accountability"; this was because we originally envisioned an RPG-style app where dinner bills that needed to be split could be treated like boss monsters and "defeated" by gathering a party of your friends and splitting the bill amongst yourselves easily. Time constraints were in full force this weekend, and we had to cut down on some of our more ambitious planned features after it became evident that there would not be enough time to accomplish everything we wanted. There were some difficulties with learning the techniques and tools necessary to integrate frontend and backend as well, but we pushed through and created something functional in the end!
## Accomplishments that we're proud of
Despite the hurdles and the compromises (and the time constraints... and the steep learning curve...) we were able to create something functional, with a prototype that shows how we envision the app to work and look!
## What we learned
* databases can be fiddly, but when they work, it's a beautiful thing!
* Sanity Walks™ are an essential part of the hackathon experience
* so are 30-min naps
## What's next for encountability
We'd like to connect it to bank accounts directly next time, just like we originally intended! It would also be nice to fully implement the automatic transaction-splitting feature of the app next time, as well as the more social aspects of the app.
|
winning
|
## 💡 Inspiration💡
In 2022, video-based content is more prevalent than ever before. However, what’s also more commonplace is how busy people’s lives are in modern times. This is why we built Brevity, the only AI-powered browser extension that summarizes videos in a way that’s meaningful to you.
## ✨ What It Does ✨
All the user needs to do is click the extension on the desired video and request to "make it brief"; then they can read a succinct summary of the video. Brevity does this by first retrieving the audio from YouTube and uploading it to the AssemblyAI server. This returns a transcript in the form of chapters, which are subsequently uploaded to Cohere to be converted into smart bullet points.
But that’s not all. What if the user didn’t find the needed information in the summary? Well, they can ask their questions to Brevity themselves. Brevity can handle any questions from basic content to abstract concepts by communicating with OpenAi’s GPT-3 API to develop an answer.
## ⚙️ How We Built It ⚙️
1. We use **TypeScript** with **tRPC** to create a type-safe environment for robust code.
2. Use **AssemblyAI** to transcribe the video into text and generate chapter summaries.
3. Use **Cohere** to generate bullet points from the chapter summaries (sketched below).
4. Use **GPT-3** to answer the user's questions based on the transcript.
5. Cache all the data locally to save on resources.
6. Built a front-end Chrome extension with **React** to make it friendly and easy to use.
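A condensed sketch of steps 2-3; the real extension is TypeScript, but the REST calls are the same, and the API keys and prompt wording here are placeholders:

```python
# Transcribe a video into chapters (AssemblyAI), then bullet-point them.
import time
import cohere
import requests

AAI_HEADERS = {"authorization": "ASSEMBLYAI_KEY"}  # placeholder key

def chapter_summaries(audio_url):
    job = requests.post("https://api.assemblyai.com/v2/transcript",
                        headers=AAI_HEADERS,
                        json={"audio_url": audio_url,
                              "auto_chapters": True}).json()
    while True:  # poll until the transcript is ready
        resp = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}",
            headers=AAI_HEADERS).json()
        if resp["status"] == "completed":
            return [c["summary"] for c in resp["chapters"]]
        time.sleep(3)

def to_bullets(summary):
    co = cohere.Client("COHERE_KEY")  # placeholder key
    out = co.generate(prompt=f"Rewrite as short bullet points:\n{summary}\n-",
                      max_tokens=80)
    return "-" + out.generations[0].text
```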
## 🔥 Challenges We Ran Into 🔥
1. Creating robust prompts which give stable and high-quality results for Cohere.ai.
2. Making an AI compatible with long input transcripts.
3. Creating a Chrome extension with React.
4. Integrating complex AI models into our program.
## 🏆 Accomplishments We're Proud Of 🏆
1. The product is functional and is surprisingly useful in our own lives.
2. First time designing such an intuitive and aesthetic user interface.
## 📈 What's Next For Brevity 📈
Though we’re very proud of Brevity’s functionality on YouTube, we recognize that its concept may be better applied in the world of academia. That’s why we’d like to expand it to other video platforms in the future, as well as adding a feature to tailor the complexity of the summaries.
|
## Inspiration
As YouTube consumers, we find it extremely frustrating to deal with unnecessarily long product reviews that deliver numerous midroll ads rather than providing useful and convenient information. Inspired, we wanted to create an application that stops all that by allowing users to glean the important information from a video without wasting time.
## What It Does
* Determines whether a YouTube product review is positive or negative.
* Summarizes the contents of the review.
## How We Built It
* The auto-generated closed captions for a YouTube video are obtained.
* Punctuation is added to the auto-generated captions.
* The punctuated script is then fed into Cohere, an NLP toolkit.
* Cohere analyzes the script and returns a sentiment and a summary (sketched below).
* A Chrome extension then displays the information to the user.
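A sketch of the Cohere step, assuming a handful of labeled snippets to seed the sentiment classifier; the key, seed examples, and truncation length are illustrative:

```python
# Classify review sentiment and summarize the punctuated transcript.
import cohere

co = cohere.Client("COHERE_KEY")  # placeholder key

SEED_EXAMPLES = [
    cohere.ClassifyExample(text="This thing is amazing, totally worth it.", label="positive"),
    cohere.ClassifyExample(text="Best purchase I've made all year.", label="positive"),
    cohere.ClassifyExample(text="It broke within a week, save your money.", label="negative"),
    cohere.ClassifyExample(text="Overpriced and disappointing.", label="negative"),
]

def analyze_review(punctuated_script):
    sentiment = co.classify(inputs=[punctuated_script[:2000]],
                            examples=SEED_EXAMPLES).classifications[0].prediction
    summary = co.summarize(text=punctuated_script, length="medium").summary
    return sentiment, summary
```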
## Challenges We Ran Into
* Settling on a use of Cohere.
* Punctuating the auto-generated YouTube closed captions.
## Accomplishments That We're Proud Of
* Deftly transitioning between ideas.
* Combining many new technologies.
* Figuring out the punctuator2 library.
## What We Learned
* How to develop chrome extensions.
* How to utilize Cohere to perform analysis and process summaries.
## What's Next For The Project
* Implement a mobile app component that allows users to make use of the service at the convenience of their phone.
* Keep track of users' past history and engagement.
* Incorporate a points system that encourages and drives individuals to continue to make use of the application.
|
## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages of a text and figure out which ones are most likely to contain the answer to the user's question. It can also recursively summarize the file at different levels of compression.
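The page-search core can be sketched in a few lines; the embedding model name and page structure here are assumptions rather than our exact code:

```python
# Rank "pages" by embedding similarity to a question.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

def best_pages(pages, question, k=3):
    page_vecs = embed(pages)      # one vector per "page"
    q = embed([question])[0]
    # Dot product equals cosine similarity here: these embeddings are
    # unit-normalized.
    scores = page_vecs @ q
    return [pages[i] for i in np.argsort(scores)[::-1][:k]]
```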
## How we built it
With blood, sweat, and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML, and CSS for the website, which communicates with a Flask backend that runs our Python scripts involving API calls and such. We make API calls to OpenAI's text embeddings, to Cohere's xlarge model, to GPT-3's API, and to OpenAI's Whisper speech-to-text model, plus several modules for getting an mp4 from a YouTube link, text from a PDF, and so on.
## Challenges we ran into
We had problems getting the Flask backend to run on an Ubuntu server, and later had to run it on a Windows machine instead. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links took more time than expected. Finally, since the latency of sending information back and forth between the frontend and the backend would hurt the user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in porting code from Python to JavaScript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 to formulate the answer from the extracted information, we believe we likely match the state of the art in quickly analyzing a text and answering complex questions about it, and the ease of use across many different file formats makes us proud that this project and website can be useful to so many people, so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and it costs pennies to run. Moreover, we added an identification system (users sign up with a username and password) to ensure that each account is capped at a certain API usage, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness).
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and to greater effect. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us several times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as LLMs get bigger they sometimes get much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is summarizing text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or to attempt to make the website go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU time running Whisper.
|
losing
|
## Inspiration
We were inspired by the Interac API, because of how simple it made money requests. We all realized that one thing we struggle with sometimes is splitting the bill, as sometimes restaurants don't accommodate for larger parties.
## What it does
Our simple web app allows you to upload your receipt and digitally invoice your friends for their meals.
## How we built it
For processing the receipts, we used Google Cloud's Vision API, which is a machine learning application for recognizing and converting images of characters into digital text. We used HTML, CSS, JavaScript, and JQuery to create an easy-to-use and intuitive interface that makes splitting the bill as easy as ever. Behind the scenes, we used Flask and developed Python scripts to process the data entered by the users and to facilitate their movement through our interface. We used the Interac e-Transfer API to send payment requests to the user's contacts. These requests can be fulfilled and the payments will be automatically deposited into the user's bank account.
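The OCR step itself boils down to a single Vision API call; a hedged sketch (credentials and error handling omitted, function name ours):

```python
# Sketch of receipt OCR with Google Cloud Vision.
from google.cloud import vision

def receipt_text(path):
    client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    # The first annotation contains the full detected text block.
    return response.text_annotations[0].description if response.text_annotations else ""
```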
## Challenges we ran into
The Optical Character Recognition (OCR) API does not handle receipt formats very well. Item names and costs are read in different orders, do not always come out in pairs, and have no separator characters between items. We therefore needed to develop an algorithm that separates the words and recognizes which characters are actually useful; a simplified version of the pairing logic is sketched below.
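A minimal, assumption-laden version of that pairing heuristic (the real algorithm handles more edge cases):

```python
import re

PRICE = re.compile(r"(\d+\.\d{2})")

def pair_items(ocr_text):
    # Walk the OCR output line by line, attaching each price to the most
    # recent non-price text seen: a crude but workable pairing heuristic.
    items, pending_name = [], None
    for line in ocr_text.split("\n"):
        line = line.strip()
        match = PRICE.search(line)
        if match and pending_name:
            items.append((pending_name, float(match.group(1))))
            pending_name = None
        elif line and not match:
            pending_name = line
    return items
```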
The INTERAC e-Transfer API example was given to us as a React app. Most of us had no prior experience with React. We needed to find a way to still call the API and integrate the caller with the rest of the web app, which was built with HTML, CSS, and JavaScript.
There were also a few difficulties passing data between the front-end interface and the back-end service routines.
## Accomplishments that we're proud of
It's the first hackathon for two of our team members, and it was a fresh experience for us to work on a project in 24 hours. We had little to no experience with full stack development and Google Cloud Platform tools. However, we figured out our way step by step, with help from the mentors and online resources. We managed to integrate a few APIs into this project and tied together the front end and back end designs into a functional web app.
## What we learned
How to call Google Cloud APIs
How to host a website on Google Cloud Platform
How to set up an HTTP request in various languages
How to make dynamically interactive web pages
How to handle front end and back end requests
## What's next for shareceipt
We hope to take shareceipt to the next level by filling in all the places in which we did not have enough time to fully explore due to the nature of a hackathon. In the future, we could add mobile support, Facebook & other social media integration to expand our user-base and allow many more users to enjoy a simple way to dine out with friends.
|
# Inspiration
Meet one of our teammates, Lainey! Over the past three years, she has spent over 2,000 hours volunteering with youth who attend under-resourced schools in Washington state. During the sudden onset of the pandemic, the rapid school closures ended the state’s Free and Reduced Lunch program for thousands of children across the state, pushing the burden of purchasing healthy foods onto parents. It became apparent that many families she worked with heavily relied on government-provided benefits such as SNAP (Supplemental Nutrition Assistance Program) to purchase the bare necessities. Research shows that SNAP is associated with alleviating food insecurity. Receiving SNAP in early life can lead to improved outcomes in adulthood. Low-income families under SNAP are provided with an EBT (Electronic Benefit Transfer) card and are able to load a monthly balance and use it like a debit card to purchase food and other daily essentials.
However, the EBT system still has its limitations: to qualify to accept food stamps, stores must sell food in each of the staple food categories. Oftentimes, the only stores with the economies of scale to achieve this are a small set of large chain grocery stores, which lack diverse healthy food options in favor of highly-processed goods. Not only does this hurt consumers with limited healthy options, it also prevents small, local producers from selling their ethically and sustainably sourced produce to those most in need. Studies have repeatedly shown a direct link between sustainable food production and food health quality.
The primary grocery sellers with the means and scale to qualify to accept food stamps are large chain grocery stores, which often have varying qualities of produce (correlated with income in that area) that pale in comparison to the output of smaller farms. Additionally, grocery stores often supplement their fresh food options with a large selection of cheaper, highly-processed items that are high in sodium, cholesterol, and sugar. On average, unhealthy foods are about $1.50 cheaper per day than healthy foods, making it both less expensive and less effort to choose those options. Studies have shown that lower income individuals "consume fewer fruits and vegetables, more sugar-sweetened beverages, and have lower overall diet quality". This leads to deteriorated health, inadequate nutrition, and elevated risk for disease. In addition, grocery stores with healthier, higher quality products are often concentrated in wealthy areas and target a higher income group, making distance another barrier to entry when it comes to getting better quality foods.
Meanwhile, small, local farmers and stores are unable to accept food stamp payments. Along with being higher quality and supporting the community, buying local food is also better for the environment. Local food travels a shorter distance, and the structure of events like farmers markets reduces a customer's dependency on harmful monocrop farming techniques. However, these benefits come with their own barriers. While farmers markets accept SNAP benefits, they (and similar events) aren't as widespread: there are only 8,600 markets registered in the USDA directory, compared to the over 62,000 grocery stores in the USA. And higher quality foods carry their own reputation of higher prices.
Locl works to alleviate these challenges, offering a platform that supports EBT card purchases to allow SNAP benefit users to purchase healthy food options from local markets.
# What does Locl do?
Locl works to bridge the gap between EBT cardholders and fresh homegrown produce. Namely, it offers a platform where multiple local producers can list their produce online for shoppers to purchase with their EBT card. It works like a virtual farmers market, combining the quality of small farms with the ease and reach of online shopping. It makes it easier for consumers to buy better quality foods with their EBT card, while also allowing a greater range of farms and businesses to accept these benefits. This provides a convenient and accessible way for EBT cardholders to access healthy meals, while also promoting better eating habits and supporting local markets and farmers.
When designing our product, some of our top concerns were the technological barrier of entry for consumers and ensuring an ethical and sustainable approach to listing produce online. To use Locl, users are required to have an electronic device and an internet connection, ultimately limiting access within our target audience. Beyond this, we recognized that certain produce items or markets could be displayed disproportionately in comparison to others, which could create imbalances and inequities between the stakeholders involved. We aim to address this issue by crafting a refined algorithm that balances how often a product appears in search results based on how many similar products are posted.
# Key Features
## EBT Support
Shoppers can convert their EBT balance into Locl credits. From there, they can spend their credits buying produce from our set of carefully curated suppliers. To prevent fraud, each vendor is carefully evaluated to ensure they sell ethically sourced produce. Thus, shoppers can only spend their Locl credits on produce, adhering to government regulation on SNAP benefits.
## Bank-less payment
Because low-income shoppers may not have access to a bank account, we've used Checkbook.io's virtual credit cards and direct deposit to facilitate payments between shoppers and vendors.
## Producer accessibility
By listing multiple vendors on one platform, Locl is able to circumvent the initial problems of scale. Rather than each vendor being its own store, we consolidate them all into one large store, thereby increasing accessibility for consumers to purchase products from smaller vendors.
## Recognizable marketplace
To improve ease of use, Locl's interface is carefully crafted to emulate other popular marketplace applications such as Facebook Marketplace and Craigslist. Because shoppers will already be familiar with the layout, the overall user experience is far better.
# How we built it
Locl revolves around a web app interface to allow shoppers and vendors to buy and sell produce.
## Flask
The crux of Locl is our Flask server. From there, we use requests and render_template() to populate our website through GET and POST requests.
## Supabase
We use Supabase and PostgreSQL to store our product, market, virtual credit card, and user information. Because our backend is in Python, we use Supabase's community-managed Python library to insert and update data.
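Inserting a new listing looks roughly like the sketch below; the table and column names are illustrative, not our exact schema:

```python
# Hypothetical product insert with the supabase-py client.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def list_product(vendor_id, name, price_cents):
    return (supabase.table("products")
            .insert({"vendor_id": vendor_id, "name": name, "price_cents": price_cents})
            .execute())
```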
## Checkbook.io
We use Checkbook.io's Payfac API to create transactions between shoppers and vendors. When people create an account on Locl, they are automatically added as a user in Checkbook with the `POST /v3/user` endpoint. Meanwhile, to onboard both local farmers and shoppers painlessly, we offer a bankless solution with Checkbook’s virtual credit card using the `POST /v3/account/vcc` endpoint.
First, shoppers deposit credits into their Locl account from the EBT card. The EBT funds are later redeemed with the state government by Locl. Whenever a user buys an item, we use the `POST /v3/check/digital` endpoint to create a transaction between them and the stores to pay for the goods. From there, vendors can also spend their funds as if it were a prepaid debit card. By using Checkbook’s API, we’re able to break down the financial barrier of having a bank account for low-income shoppers to buy fresh produce from local suppliers, when they otherwise wouldn’t have been able to.
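A hedged sketch of the digital-check call: the endpoint path is the one named above, but the auth header format and payload fields here should be treated as assumptions rather than a definitive integration:

```python
# Illustrative Checkbook.io payment request (field names assumed).
import requests

BASE = "https://api.checkbook.io/v3"
HEADERS = {"Authorization": "API_KEY:API_SECRET", "Content-Type": "application/json"}

def pay_vendor(vendor_email, vendor_name, amount, memo):
    payload = {"recipient": vendor_email, "name": vendor_name,
               "amount": amount, "description": memo}
    resp = requests.post(f"{BASE}/check/digital", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```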
# Challenges we encountered
Because we were all new to these APIs, we were initially unclear about what actions they could support. For example, we wanted to use the You.com API to build our marketplace. However, it soon became apparent that we couldn't embed their API into our static HTML page as we'd assumed, so we had to pivot to creating our own cards with Jinja.
# Looking forward
In the future, we hope to advance our API services to provide a wider breadth of services which would include more than just produce from local farmers markets. Given a longer timeframe, a few features we'd like to implement include:
* a search and filtering system to show shoppers their preferred goods.
* an automated redemption system with the state government for EBT.
* improved security and encryption for all API calls and database queries.
# Ethics
SNAP (Supplemental Nutrition Assistance Program), otherwise known as food stamps, is a government program that helps low-income families and individuals purchase food. The inaccessibility of healthy foods is a pressing problem because only a small number of grocery stores accept food stamps, often limited to large chains that are not always nearby. Beyond this, these grocery stores often lack healthy food options in favor of highly-processed goods.
When doing further research into this issue, we were fortunate to have a team member who has knowledge about SNAP benefits through firsthand experience in classroom settings and at food banks. Through this, we learned about EBT (Electronic Benefit Transfer) cards, as well as their limitations. The only stores that can support EBT payments must offer a selection for each of the staple food categories, which prevents local markets and farmers from accepting food stamps as payment.
To tackle this issue of the limited accessibility of healthy foods for SNAP benefit users, we came up with Locl, an online platform that allows local markets and farmers to list fresh produce for EBT cardholders to purchase with food stamps. When creating Locl, we adhered to our goal of connecting food stamp users with healthy, ethically sourced foods in a sustainable manner. However, there are still many ethical challenges that must be explored further.
First, to use Locl, users would require a portable electronic device and an internet connection, since it is an online platform. The Pew Research Center states that 29% of adults with incomes below $30,000/year do not have access to a smartphone and 44% do not have portable internet access. This would greatly narrow the range of individuals we aim to serve.
Second, though Locl aims to serve SNAP beneficiaries, we also hope to aid local markets and farmers by increasing the number of potential customers. However, Locl runs the risk of displaying certain produce items or marketplaces disproportionately in comparison to others, which could create imbalances and inequities between all stakeholders involved. Furthermore, this display imbalance could limit user knowledge about certain marketplaces.
Third, Locl aims to increase ethical consumerism by connecting its users with sustainable markets and farmers. However, there arises the issue of selecting which markets and farmers to support on our platform. While considering baselines that marketplaces would need to meet to be displayed on Locl, we recognized that sustainability can be measured by a wide number of factors (labor, resources used, pollution levels), and began wondering whether to prioritize the sustainability of the items we market or the health of our users. One example is meat, a popular food product known for its nutritional value but also for its high water consumption and greenhouse gas emissions. Narrowing these criteria down could greatly limit the display of certain products.
Fourth, Locl does not have an option for users to filter the results that are displayed to them. Many EBT cardholders say that they do not use their benefits to make online purchases due to the difficulty of finding items on online store pages that qualify for their benefits as well as their dietary needs. Thus, our lack of a filter option would cause certain users to have increased difficulty in finding food options for themselves.
Our next step for Locl is to address the ethical concerns above, as well as to explore ways to make it more accessible and well-known. However, there are still many components to consider from a sociotechnical lens. Currently, only 4% of SNAP beneficiaries make online purchases with their EBT cards. This small percentage may stem from reasons ranging from lack of internet access to not being aware that online options are available. We hope that with Locl, food stamp users will have increased access to healthy food options, and local markets and farmers will have an increased customer base.
# References
<https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2019.305325>
<https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-019-6546-2>
<https://www.ibisworld.com/industry-statistics/number-of-businesses/supermarkets-grocery-stores-united-states/>
<https://www.masslive.com/food/2022/01/these-are-the-top-10-unhealthiest-grocery-items-you-can-buy-in-the-united-states-according-to-moneywise.html>
<https://farmersmarketcoalition.org/education/qanda/>
<https://news.climate.columbia.edu/2019/08/09/farmers-market-week-2019/>
|
## Inspiration
Ordering delivery and eating out is a major part of our social lives, but when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all by allowing health freaks to keep up with their diet plans while still making restaurant eating possible.
## What it does
The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are matched against a remote database to generate the nutritional contents of the food, which we display to our users in an interactive manner.
## How we built it
Frontend: Vue.js, tailwindCSS
Backend: Python Flask, Google Vision API, CalorieNinja API
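A condensed sketch of the label-to-nutrition pipeline (API keys and error handling omitted; function names are ours):

```python
# Sketch: Vision labels the photo, CalorieNinja returns nutrition facts.
import requests
from google.cloud import vision

def food_labels(image_bytes):
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    return [label.description for label in response.label_annotations]

def nutrition(query):
    resp = requests.get("https://api.calorieninjas.com/v1/nutrition",
                        params={"query": query},
                        headers={"X-Api-Key": "YOUR_CALORIENINJAS_KEY"})
    return resp.json().get("items", [])
```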
## Challenges we ran into
As a team of mostly first-year students, learning while developing a product within 24 hours was a big challenge.
## Accomplishments that we're proud of
We are proud to apply AI in a capacity that assists people in their daily lives, and to hopefully let this idea improve people's relationships and social lives while they still maintain their health goals.
## What we learned
As most of our team are first-year students with minimal experience, we leveraged our strengths to collaborate effectively. We also learned to use the Google Vision API with cameras, and we are now able to do even more.
## What's next for NutroPNG
* Calculate sum of calories, etc.
* Use image processing to estimate serving sizes
* Integrate the technology into prevalent nutrition trackers, e.g., Lifesum, MyPlate, etc.
* Collaborate with local restaurant businesses
|
partial
|
## Inspiration
Not wanting to keep moving my stuff around all the time while moving between SF and Waterloo, Canada.
## What it does
It will call a Postmate to pick up your items, which will then be delivered to our secure storage facility. The Postmate will be issued a one time use code for the lock to our facility, and they will store the item. When the user wants their item back, they will simply request it and it will be there in minutes.
## How I built it
The stack is Node+Express and the app is on Android, hosted on Azure. We used the Postmates and HERE APIs.
## Challenges I ran into
## Accomplishments that I'm proud of
A really sleek and well built app! The API is super clean, and the Android interface is sexy
## What I learned
## What's next for Stockpile
Better integrations with IoT devices and better item management.
|
## Inspiration
Shipping can often take a long time and be expensive (especially internationally). This app solves this problem by leveraging a network of travelers. It was inspired by often sending items through friends and family members.
## What it does
This project takes community shipping to the next level. It leverages a network of travelers to deliver products faster and cheaper, letting users add trips (i.e., when they are traveling) and request deliveries (i.e., when they want to send a package). After that, the sender and the traveler are matched, meet up, and have the item delivered.
## How I built it
I built it using Flutter to allow for cross-platform compatibility and firebase for real-time data storage, fetching, and authentication.
## Challenges I ran into
App builds took a long time.
## Accomplishments that I'm proud of
I'm proud to have created an app with flutter and firebase.
## What I learned
Flutter development.
## What's next for DeliveryMate
Add GPS tracking and messaging system.
|
## Inspiration
In today’s digital world, students arriving at college carry fewer belongings, knowing they can buy whatever they need, but buying new items all the time can be wasteful and expensive. They could borrow items or buy or get it free from someone local, but they don’t always have the time to meet in person to make the exchange.
Having lockers to facilitate this exchange is not a new concept, but how we use them is.
## What it does
Similar to Amazon Hub, we offer an IoT smart locker system that allows people to exchange items in a clean, secure, safe, and contactless way. Unlike what currently exists, we implemented three-factor authentication to ensure that the right items go to their intended recipients.
Reservd allows people to find, rent, and unlock lockers at their convenience, without a burdensome administrative process and without having to pre-determine their needs and commit to a specific locker for a long period of time. We increase efficiency and provide safe, clean, contactless pick-up options for recipients around the clock. And all you need to do all this is your phone.
## How we built it
We use React Native for the mobile interaction, Google Cloud Platform to facilitate triggered actions and object detection, and a lot of hardware to make the actual locker.
## Challenges we ran into
Time is the biggest challenge, and with spotty wifi, it slowed our progress further.
## Accomplishments that we're proud of
We made a working IoT smart locker!!!
## What we learned
With a background in React, we picked up React Native quickly. Some of us had never worked with hardware, so we learned a lot about soldering and the different tools needed to facilitate interactions. We also learned about designing endpoints.
## What's next for Reservd
Implementing blockchain for further security and on a campus or in a small community!
|
partial
|
## Inspiration
Approximately 90% of adults in the United States struggle with health literacy, meaning they have difficulty understanding and using health information effectively. This can lead to worsened health outcomes, increased strain on the healthcare system, and additional, unneeded costs.
## What it does
With our website, users can upload a PDF file and it will inform them of what each blood test measures, provide an analysis of their results, and suggest possible lifestyle changes if any results are abnormal.
## How we built it
We generated the analysis of the blood test results using the OpenAI API. Then, using Flask, we connected our backend to our React.js front end. The first step was to enable our application to take in a PDF file and display it so the user knows what file they uploaded; a message confirms once the upload succeeds. The user then clicks 'generate', which leads to the output page presenting the simplified blood test results and possible lifestyle changes the user could benefit from.
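A minimal sketch of that upload-and-analyze flow, assuming the pre-1.0 `openai` SDK and `pypdf` for text extraction (route name and prompt wording are illustrative):

```python
# Hedged sketch of the Flask endpoint behind the 'generate' button.
from flask import Flask, request, jsonify
from pypdf import PdfReader
import openai

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    reader = PdfReader(request.files["report"])  # uploaded blood test PDF
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Explain these blood test results in plain language "
                              "and suggest lifestyle changes for any abnormal "
                              "values:\n" + text}])
    return jsonify(analysis=completion.choices[0].message["content"])
```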
## Challenges we ran into
We ran into a couple of challenges with the backend. We had initially planned to use the You.com API to extract data from the PDF file and the BioBERT API to analyze the blood test results. However, we discovered that the You.com API cannot handle files, and BioBERT, like other medical APIs, was private. From there we pivoted to the OpenAI API for the analysis. We also tried adding accessibility features, like language translation and text-to-speech using Google Translate and Hume's APIs, but struggled to integrate them into our application.
## Accomplishments that we're proud of
We are proud of how we were able to quickly pivot and work with the resources we have. We worked cohesively to fix errors and help each other whenever we could, such as when we were trying to connect the frontend to the backend.
## What we learned
As this was many of our first times using APIs and creating a backend to a website, we learned about using APIs and common errors that are involved with those. Additionally, we learned how to debug through errors in connecting the frontend and the backend. Finally, 2 of us were beginners in React.js so we developed our frontend skills while working on our website.
## What's next for SimplyMed
In the future, we hope to use private medical APIs like BioBERT for a more accurate analysis of blood test results. We also hope to add multilingual functionality using the Google Translate API and potentially use Hume's API for text-to-speech conversion to improve accessibility. Additionally, we are interested in creating a public forum where users can share home remedies that have worked for them. Finally, we hope to save each user's blood tests to show trends in their health compared to previous results.
|
== README
This README would normally document whatever steps are necessary to get the
application up and running.
Things you may want to cover:
* Ruby version
* System dependencies
* Configuration
* Database creation
* Database initialization
* How to run the test suite
* Services (job queues, cache servers, search engines, etc.)
* Deployment instructions
* ...
Please feel free to use a different markup language if you do not plan to run
rake doc:app.
|
## Inspiration
When visiting a clinic, two big complaints we have are the long wait times and the necessity of using a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (for example, someone with Parkinson's disease writing with a pen). In response to these problems, we created Touchless.
## What it does
* Touchless is an accessible and contact-free solution for gathering form information.
* Allows users to interact with forms using voices and touchless gestures.
* Users use different gestures to answer different questions.
* Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no.
* Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated.
* Applicable to doctor’s offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices.
## How we built it
* Gesture and voice components are written in Python.
* The gesture component uses OpenCV and MediaPipe to map out hand joint positions, from which distances are calculated to determine hand symbols (see the sketch after this list).
* SpeechRecognition recognizes user speech
* The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises.
* We use AWS API Gateway to open a connection to a custom Lambda function whose access is restricted with AWS IAM roles. The Lambda generates a secure key, which it sends along with the data from our form (routed using Flask) to our NoSQL DynamoDB database.
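A sketch of the landmark-distance idea with MediaPipe (the gesture-matching thresholds and structure are illustrative):

```python
# Compute the pairwise joint distances that form a gesture "signature".
import itertools
import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def landmark_distances(frame_bgr):
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    pts = result.multi_hand_landmarks[0].landmark  # 21 hand joints
    return {(i, j): math.dist((pts[i].x, pts[i].y), (pts[j].x, pts[j].y))
            for i, j in itertools.combinations(range(21), 2)}
```

Comparing these distances against per-gesture reference values (e.g., a thumbs-up) then reduces to simple thresholding.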
## Challenges we ran into
* Tried to set up a Cerner API for FHIR data, but had difficulty setting it up.
* As a result, we had to pivot to a NoSQL database in AWS as our secure backend database for storing patient data.
## Accomplishments we’re proud of
This was our whole team’s first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We’re proud that we managed to implement these features within our project at a level we consider effective.
## What we learned
We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects.
## What’s next for Touchless
In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components.
|
losing
|
## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using react-native for the front-end and firebase for the backend. The web app is built with react for the front end and firebase for the backend. We also implemented a few custom python modules to facilitate the client-server interaction to ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge was capturing and processing live audio and delivering a real-time transcription of it to all students enrolled in the class. We solved this with a Python script that bridges the gap between opening an audio stream and operating on it, while still serving the student a live version of the rest of the site; a simplified version of that bridge is sketched below.
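A simplified sketch of that bridge, assuming the google-cloud-speech client (audio capture and the fan-out to students are omitted):

```python
# Stream raw microphone chunks into Google Speech-to-Text.
from google.cloud import speech

def transcribe(chunks):
    # chunks: an iterator of raw 16 kHz LINEAR16 audio bytes.
    client = speech.SpeechClient()
    streaming_config = speech.StreamingRecognitionConfig(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US"),
        interim_results=True)  # interim results make the transcript feel live
    requests = (speech.StreamingRecognizeRequest(audio_content=c) for c in chunks)
    for response in client.streaming_recognize(config=streaming_config,
                                               requests=requests):
        for result in response.results:
            yield result.alternatives[0].transcript
```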
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, since everyone is on the same page and everything that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and were able to work within their constraints to create a working product. We also learned more about the other technologies we used: Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that automatically integrates with their native grading platform, so that clicker data and other quiz material can be instantly graded and imported without any issues. Beyond that, we see potential for Gradian in office scenarios as well, where live transcription means people never miss a beat.
|
## Inspiration
Almost all undergraduate students, especially at large universities like the University of California Berkeley, will take a class that has a huge lecture format, with several hundred students listening to a single professor speak. At Berkeley, students (including three of us) took CS61A, the introductory computer science class, alongside over 2000 other students. Besides forcing some students to watch the class on webcasts, the sheer size of classes like these impaired the ability of the lecturer to take questions from students, with both audience and lecturer frequently unable to hear the question and notably the question not registering on webcasts at all. This led us to seek out a solution to this problem that would enable everyone to be heard in a practical manner.
## What does it do?
*Questions?* solves this problem using something that we all have with us at all times: our phones. By using a peer to peer connection with the lecturer’s laptop, a student can speak into their smartphone’s microphone and have that audio directly transmitted to the audio system of the lecture hall. This eliminates the need for any precarious transfer of a physical microphone or the chance that a question will be unheard.
Besides usage in lecture halls, this could also be implemented in online education or live broadcasts to allow participants to directly engage with the speaker instead of feeling disconnected through a traditional chatbox.
## How we built it
We started with a fail-fast strategy to determine the feasibility of our idea. We did some experiments and were then confident that it should work. We split our working streams and worked on the design and backend implementation at the same time. In the end, we had some time to make it shiny when the whole team worked together on the frontend.
## Challenges we ran into
We tried the WebRTC protocol but ran into problems with the implementation, the available frameworks, and the documentation. We then shifted to WebSockets and tried to make it work on mobile devices, which is easier said than done. Furthermore, we had some issues with web security and therefore used an AWS EC2 instance with Nginx and Let's Encrypt TLS/SSL certificates.
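The relay idea itself is small; here is a toy sketch with Python's `websockets` library (our implementation differed in details, and TLS is terminated upstream by Nginx):

```python
# Fan each phone's audio frames out to every other connected client.
import asyncio
import websockets

listeners = set()

async def handler(ws):
    listeners.add(ws)
    try:
        async for chunk in ws:  # binary audio frames from a student's phone
            await asyncio.gather(*(peer.send(chunk)
                                   for peer in listeners if peer is not ws))
    finally:
        listeners.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```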
## Accomplishments that we're (very) proud of
With most of us being very new to the Hackathon scene, we are proud to have developed a platform that enables collaborative learning in which we made sure whatever someone has to say, everyone can hear it.
With *Questions?* It is not just a conversation between a student and a professor in a lecture; it can be a discussion between the whole class. *Questions?* enables users’ voices to be heard.
## What we learned
WebRTC looks easy but was not working … at least in our case. Today everything has to be encrypted … even in dev mode. TreeHacks 2020 was fun.
## What's next for *Questions?*
In the future, we could integrate polls and iClicker features and also extend functionality for presenters and attendees at conferences, showcases, and similar events. *Questions?* could also be applied even more broadly to any situation normally requiring a microphone—any situation where people need to hear someone's voice.
|
# Course Connection
## Inspiration
College is often heralded as a defining time to explore interests, define beliefs, and establish lifelong friendships. However, this vibrant campus life has recently become endangered, as it is easier than ever for students to become disconnected. The previously guaranteed experience of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students.
From a commercial perspective, our app lets businesses use Checkbook to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a couple key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses, and we elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users to test our project; to do so, we scraped the Stanford course library to build a wide variety of classes to assign to our generated users. To provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust, dense networks to test our graph connection score and visualization.
### Backend Infrastructure
We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represented a students course identity. We wanted to take advantage of the web3 storage as this would allow students to permanently store their course identity to be easily accessed. We also made use of Firebase to store the dynamic nodes and connections between courses and classes.
We distributed our workload across several servers.
We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, alongside a development-level Python server. We also had a Node.js server acting as a proxy and REST API endpoint, and Vercel hosted our front end.
### Graph Construction
Treating the Firebase database as the source of truth, we query it to get all user data, namely usernames and which classes each user took in which quarters. From this data, we construct a graph in Python using NetworkX, in which each person and course is a node labeled "user" or "course" respectively. We then add edges between every person and every course they took, with the edge weight corresponding to the recency of having taken it; a condensed sketch follows.
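A condensed sketch of the build (field names are assumed to mirror our Firebase documents):

```python
# Bipartite user-course graph with recency-weighted edges.
import networkx as nx

def build_graph(users):
    # users: [{"username": str, "courses": {course_code: quarter_index}}]
    latest = max(q for u in users for q in u["courses"].values())
    G = nx.Graph()
    for u in users:
        G.add_node(u["username"], type="user")
        for course, quarter in u["courses"].items():
            G.add_node(course, type="course")
            # More recent quarters get heavier edge weights.
            G.add_edge(u["username"], course, weight=1.0 / (1 + latest - quarter))
    return G
```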
Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.
We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken.
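The heuristic half of the score amounts to a recency-weighted Jaccard overlap; a sketch under the definitions above (the GAT embedding cosine term is added to this):

```python
# Recency-weighted overlap between two course histories.
def recency_weighted_similarity(a, b, latest):
    # a, b: {course_code: quarter_index}; weights decay with course age.
    w = lambda q: 1.0 / (1 + latest - q)
    shared = sum(min(w(a[c]), w(b[c])) for c in a.keys() & b.keys())
    union = sum(map(w, a.values())) + sum(map(w, b.values())) - shared
    return shared / union if union else 0.0
```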
With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, whom they likely have a lot in common with and may want to meet!
## Challenges we ran into
We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly for development as we had to manage all the necessary servers.
In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase and cached graph.
## Accomplishments that we're proud of
We’re very proud of the graph component both in its data structure and in its visual representation.
## What we learned
It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with next.js. We were able to quickly ramp up to using it as we had react experience and were very happy with how easily it integrated with Vercel.
## What's next for Course Connections
There are several directions we would be interested in taking Course Connections. One is course recommendation: we discovered that ChatGPT gives excellent course recommendations given previous courses, and we developed some functionality but ran out of time for a full implementation.
|
partial
|
## Inspiration
Like most teens, I got my dressing sense from TikTok, but couldn’t afford the trendy influencer outfits. In an attempt to thrift, I spent 4 hours finding one jacket that fit my style.
Here’s what people don’t get about fast fashion: there’s a reason it exists. Teens don’t care about long-lasting neutral timeless fashion. We want cheap, fun, and bold designs (which naturally go out of style fast). The market’s response to this is unsustainable and wasteful (a la Zara and Shein). Thrifting is inconvenient and not tailored to different people’s tastes.
The $100 billion second-hand clothing industry has remained painfully stagnant. We knew we could re-imagine e-commerce and build something better.
## Environmental Impact
According to US NIST, 85% of used clothes in the US head straight to landfill or incinerators. Fashion is responsible for 10% of global carbon emissions, more than all international flights and maritime shipping combined, according to the World Bank.
Fast fashion is destroying our planet, but people like it too much. We solve this problem by giving users an incentive to be sustainable; everyone wins.
## What it does
On our platform, creators can post TikTok-style videos showcasing outfits they want to sell. Consumers can swipe through their video feed, tap on clothes they want to buy, and set up an exchange. A lot of our users will be both buyers and sellers. Naturally, like Craigslist, we’re meant for users living in the same area/college campus/city. Like TikTok, users can discuss trends, mix and match outfits, and share finds with the community. Goodwill and popular thrift pop-ups already exist, we digitize this.
Teens buy clothes based on “aesthetic”. Finding clothes you like at thrift stores is tedious. Our hybrid recommendation engine uses a novel AI model to recommend fashion based on watch time, likes, comments, common interests, past purchases, and more.
## How we built it
The SecondSwipe app was made using ReactJS, NextJS, and Firebase. We use the real-time database, Firestore, and Cloud Storage as backends for creator and consumer data.
SecondSwipe uses OpenAI's API (text-davinci-003) to suggest a suitable selling price based on the product's original price, condition, and market standard.
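A hedged sketch of that call, assuming the legacy completions endpoint of the pre-1.0 `openai` SDK (prompt wording is illustrative):

```python
# Ask text-davinci-003 for a resale price suggestion.
import openai

def suggest_price(original_price, condition):
    prompt = (f"A used clothing item originally cost ${original_price} and is in "
              f"{condition} condition. Suggest a fair resale price in USD based on "
              "typical second-hand market rates. Reply with a number only.")
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    max_tokens=10, temperature=0)
    return resp.choices[0].text.strip()
```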
It also uses Checkbook’s API to enable secure end-to-end payments with the tap of a button.
What sets SecondSwipe apart is its novel AI-powered recommendation engine: a hybrid model based on deep neural collaborative filtering and knowledge graphs. The model was trained and tested in Python using Keras and connected to the platform using Flask.
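To make the collaborative-filtering half concrete, here is a toy Keras sketch of the core architecture; dimensions are illustrative, and the production model also folds in watch time, likes, budget, and knowledge-graph features:

```python
# Minimal neural collaborative filtering tower in Keras.
import tensorflow as tf
from tensorflow.keras import layers

def build_ncf(n_users, n_items, dim=32):
    u_in = layers.Input(shape=(1,))
    i_in = layers.Input(shape=(1,))
    u = layers.Flatten()(layers.Embedding(n_users, dim)(u_in))
    i = layers.Flatten()(layers.Embedding(n_items, dim)(i_in))
    x = layers.Concatenate()([u, i])
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(16, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # predicted engagement
    model = tf.keras.Model([u_in, i_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```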
## Challenges we ran into
SecondSwipe is not only an e-commerce app but also a video-sharing platform. This made it challenging to structure a recommendation algorithm that incorporates features like watch time, likes, user budget, and past purchases all in a single model.
Time: Developing, debugging, and deploying a full-stack working application in less than 36 hours was definitely not an easy task.
## What we learned
* How to integrate a secure payment system for the first time
* How to efficiently enable video sharing in React
* Thinking from a sustainability viewpoint in a field where it is not the norm
* How to handle a live YC interview, building products with a user-first mindset
* Surviving on Taco Bell and no sleep
## What's next for SecondSwipe
* Advanced computer vision search using Google's Vision API and Web Detection
* Grow user base by going local: targeting communities like college campuses where the population is young and exchanges are convenient
* Developing a business model: monetizing by taking a small percentage from transactions, advertisements (especially by mid-size clothing lines), company collaborations, and subscription services.
* Re-imagining reviews and developing a similar product for fashion companies to showcase on their websites.
|
# Hack the 6ix 2024 🚀
Toronto’s Largest Summer Hackathon
### Team Members
* Monisha Govindaraj
* Hemaprakash Raghu
## Inspiration
We started by exploring the storytelling and entertainment sector and found gaps in the overwhelming information available on the internet about particular topics (blogs, YouTube videos, and the like), which is often:
* Inconsistent
* Difficult to understand
* Having irrelevant information
While trying to fill these gaps, we figured out a way to address the above issues with an information delivery system. Our engine is template-driven: it reacts to whatever data is streamed to it.
## What it does
Here, news is taken as the example data to be streamed, with news articles as the data source. A personalized audio file, optimized for the user's subscribed topics, is generated using modern technology including artificial intelligence. The audio acts as a 2-minute podcast summarizing what happened that day around those topics.
## How we built it
* User Application
+ Android Application
* Template Engine
+ News Data - Data Source
- <https://newsdata.io>
- Curated news articles from regulated news providers such as CNN, BBC, etc.
+ Google Firebase - Cloud
- Cloud Functions (written in Node.js)
- Firestore (database)
- Authentication (Email and Password)
- Storage (File Storage)
+ LLM - Artificial Intelligence
- gpt-3.5-turbo-instruct model (OpenAI)
- Natural AI Voice Generation (Eleven Labs)
### High Level Architecture

## Challenges we ran into
### Audio Generation
During audio generation we had to be careful, as each personalization category resulted in a new audio file, so we made sure to cap the number of audio files being generated.
### Data Dump to the Database
Data extracted from the data source was quite complex and raised exceptions due to the structure that needed to be maintained. We brainstormed and created a solid object structure, since Google Firestore is built on top of a document-oriented database.
### Podcast Script Generation Redundancy
During podcast script generation we needed two scripts, one in English and one in French. Simply repeating the process would duplicate the LLM generation, so we added a check: once the English script is generated, the French version is produced by translating it rather than regenerating it.
## Accomplishments that we're proud of
### Solid Authentication
Google Firebase Authentication is used with an email-and-password mechanism. Under the hood, it uses OAuth 2.0 standards to perform the authentication.
### User Preferences Subscription
Personalization is achieved by asking the user for their preferences, such as location, topics, and deeper interests within a topic. All user inputs are stored in Firestore as a document for future use.
### Audio Player Listing
A solid audio player with play and pause functionality is implemented natively on Android. Users can view the thumbnail, title, and transcript of each audio.
### Multilingual Audios
We focused on two languages (English and French) for the project, with the ability to scale to more than six. Users can switch between the two languages and hear what they want, the way they want.
### Template Engine Automation
The entire template engine runs untouched: a cron job fires every day at 7 pm, which triggers the sequential steps that generate the final audio for the user.
## What we learned
* Android Application Building
* Using AI Services over HTTP
* Working with file storages
* Cloud runtimes
## What's next for PodcastHM
### More Personalization
We plan to focus on deeper personalization and to make the data source pluggable and interchangeable, as well as to send out newsletters built from the curated data.
### Operating Cost
Google Firebase, Eleven Labs, and OpenAI all let us start on a free plan and pay only for what we use.
### Scaling to Millions
The entire system is built on Google Firebase for the backend, and Google Cloud is one of the top cloud providers in the world, so the application as a whole can scale as far as demand takes it.
### Usability and Experience
The Android application is built following Google's design principles, which provide a proven, familiar user experience.
|
## Inspiration
We wanted to reduce the global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for environmental activism. We were inspired by the acts of 17-year-old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, combined with our technical knowledge, we created Recycle.space.
## What it does
Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste simply by holding an item up to a camera. This application will be especially useful once permanent fixtures are erected in malls, markets, and other large public locations.
## How we built it
Using a Flask-based backend connected to the Google Vision API, we capture images and categorize each item into its waste category. The results are visualized using Reactstrap.
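In outline, the classification step maps Vision labels to bins; a simplified sketch (the category keywords here are illustrative):

```python
# Map the highest-confidence matching label to a waste bin.
from google.cloud import vision

BINS = {"plastic": "recycling", "paper": "recycling",
        "banana": "compost", "food": "compost"}

def sort_item(image_bytes):
    client = vision.ImageAnnotatorClient()
    labels = client.label_detection(
        image=vision.Image(content=image_bytes)).label_annotations
    for label in labels:  # labels arrive sorted by confidence
        for keyword, bin_name in BINS.items():
            if keyword in label.description.lower():
                return bin_name
    return "garbage"
```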
## Challenges I ran into
* Deployment
* Categorization of food items using Google API
* Setting up a dev environment on a brand-new laptop
* Selecting appropriate backend framework
* Parsing image files using React
* UI designing using Reactstrap
## Accomplishments that I'm proud of
* WE MADE IT!
We are thrilled to create such an incredible app that would make people's lives easier while helping improve the global environment.
## What I learned
* UI is difficult
* Picking a good tech stack is important
* Good version control practices are crucial
## What's next for Recycle.space
Deploying a scalable and finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
|
losing
|
## Inspiration
Social-distancing is hard, but little things always add up.
What if person X is standing too close to person Y in the c-mart, and then person Y ends up in the hospital for more than a month battling for their life? And it doesn't end there: the c-mart gets shut down for contaminated merchandise.
All this happened because person X didn't step back.
Scenarios like these, and the hope of going back to normal life, pushed me to create **Calluna**.
## What Calluna does
Calluna is designed to be an Apple Watch application. On the app, you can check all the notifications you've received that day, when you received them, and your settings.
When you're not on the app, you get pinged when you're too close to someone who also has it, making this a great feature for business workforces.
## How Calluna was built
Calluna was built very simply using Figma. I have linked below both the design and a fully-functional prototype!
## Challenges we ran into
I had some issues with ideation: I needed something that was useful, simple, and had growth potential. I also had some headaches on the first night, possibly from sleep deprivation and too much coffee, that ended up making me sleep until the next morning.
## Accomplishments that we're proud of
I love the design! I feel like this is a project that will be really helpful *especially* during the COVID-19 pandemic.
## What we learned
I learned how to incorporate fonts to accent the color and scene, as well as how to work with such small frames and make the design easy on the eyes!
## What's next for Calluna
I hope to create and publish the iOS app with GPS integration, then possibly Android too.
|
## Inspiration
In school, we were given the opportunity to take a dual-enrollment class called Sign Language, but a whole class on the subject can be quite time-consuming for most children and adults. People interested in learning ASL either watch YouTube videos, which are not interactive, or spend HUNDREDS of dollars on classes (<https://deafchildren.org> lists $70-100). Our product provides a cost-effective, time-efficient, and fun experience for learning this unique language.
## What it does
Of course, you first have to learn the ASL alphabet: A, B, C, D ... Z. Each letter has a unique hand gesture. You also have the option to learn phrases like "Yes", "No", "Bored", etc. The app makes sure you have signed each letter correctly by displaying a circular progress view showing how long you must hold the gesture. We provide many images to make the learning experience accessible. After learning all the letters and practicing a few words, it's GAME time :). Test your ability to show a gesture and see how long you can go before giving up. The gamified experience leads to more learning and engagement for children.
## How we built it
The product was built using Swift, with hand tracking done using CoreML components. We used hand landmarks and computed distances between all points of the hand; comparing the distances a pose SHOULD produce with the distances measured in a given time frame tells us whether the hand pose is occurring. We planned the UI in Figma and later wrote it in Swift, using SwiftUI components to save time. For data storage we used UIData, which syncs across devices with the same iCloud account.
## Challenges we ran into
There are 26 letters. That's a lot of arrays, comparison statements, and repetitive work. Testing sometimes became difficult because the iPhone would eventually become hot and show temperature notifications, and we only had one phone to test with, which was mostly occupied testing hand landmarks. The project was extremely lengthy, and fitting so much content into 36 hours is difficult, so we had to sacrifice sleep. Also: a cockroach in the room.
## Accomplishments that we're proud of
The hand landmark detection for each letter actually works much better than expected. Moving your hand very fast does not glitch the system. A fully functional vision app with clean UI makes the experience fun and open to all people.
## What we learned
Quantity < Quality. We created more than six functioning pages with different levels of UI quality, and it's very noticeable which views were created quickly because of the time crunch. Instead of having so many pages, reducing their number and adding more content to each view would make the app appear flawless. Comparing the goal array against the current time-frame array is TEDIOUS, and much time was lost to testing. We could not figure out the action classifier in Swift, as there was no basic open-source example. Explaining problems to ChatGPT becomes difficult because the LLM never seems to understand basic tasks, yet performs perfectly on complex ones. Stack Overflow will still be around (for now) if we face problems.
## What's next for Hands-On
The app fits well on my iPhone 11, but on an iPad? I do not think so. The next step to take the project further is to scale the UI so it works on iPads and iPhones of any size. Once we fix that, we could release the app to the App Store. Since we do not rely on any external API, we would have no API-hosting expenses. Making the app public could help people of all ages learn a new language in an interactive manner.
|
## Inspiration
This past year, a pandemic unlike any other has resulted in the world coming to a screeching halt. The COVID-19 virus has changed the world and interpersonal relationships as we know it. Seemingly normal occurrences like visiting friends and family, attending on-campus classes, and going to work, have become nearly impossible because of the contagious nature of the virus. Even conducting essential activities like medical checkups and grocery shopping are plagued with the danger of getting infected.
Despite wearing masks and taking precautions, the number of people affected by COVID-19 increases every day by about 100,000 (New York Times & Johns Hopkins University). According to the CDC, the public is not only plagued by the direct effects of the virus, but also facing mental health issues from the inadvertent isolation of social distancing.
In this project, we propose a mobile application through which users can network with friends, family, and their communities and trace their exposure to COVID-19. This will enable people to remain safe while conducting everyday activities and in-person events with less fear of exposure.
## What it does
This application allows every member of society to create an account and update their COVID-19 status with a user-friendly mobile interface.
They are also able to connect with friends, family, and anyone they might come in contact with to trace exposure. Every time there is a warning or suspected exposure from their connections (and their connections' connections), the user will get notified.
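As a rough sketch of that propagation step, here is the idea in Python (the two-hop limit and graph shape are illustrative assumptions rather than our exact implementation):

```python
from collections import deque

def users_to_notify(graph, source, max_hops=2):
    """graph: dict mapping each user to the set of users they've connected with."""
    notified, frontier = set(), deque([(source, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue  # stop at connections-of-connections
        for neighbor in graph.get(user, set()):
            if neighbor != source and neighbor not in notified:
                notified.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return notified

network = {"ana": {"ben"}, "ben": {"ana", "cam"}, "cam": {"ben", "dee"}}
print(users_to_notify(network, "ana"))  # {'ben', 'cam'}; dee is three hops away
```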
This mobile application will allow the users to:
* remain safe and reassured in public settings, because the app makes clear whom they should keep their distance from.
* more efficiently get tested for COVID-19 because sources of exposure will be conveniently traceable.
Below, one can see exactly how this will impact current day to day activities and how this will enable more effective COVID-19 management.
## How we built it
1. Before implementation, we first storyboarded our plan for the application on Figma and set up the virtual environment for @protocol (using the Flutter framework, a Docker container, and Android Studio).
2. Using the Flutter framework, we developed the front-end design of the various pages and tracks in our application (login page, connections page, alerts page).
3. Following completion of the basic buttons on each page, we linked the pages to each other, beginning backend development.
4. One particularly challenging aspect was implementing an account-saving system so users could return to the app; we achieved this by using @protocol.
## Challenges we ran into
Our team (for three of whom this was their first hackathon) had little to no experience working with virtual environments and frameworks, so the initial configuration process and learning to use Flutter were time consuming. In addition, we were all new to mobile app development and to Dart, the language Flutter uses, so learning it was also challenging, especially for backend development given the limited documentation.
However, we found a few helpful resources and pursued mobile application development, as opposed to the better-documented path of web application development, because of the ease and accessibility it provides to the user.
## Accomplishments that we're proud of
Despite knowing very little about application development, we all persevered and worked together to learn a completely new framework and language. We were able to develop a prototype of a product that could be very useful for so many people and can improve the mental health and vitality for much of society. Our various time zones and budding experience proved to be no boundary in our shared goal of creating a viable app in 40 hours that would help so many people amidst the pandemic!
## What we learned
Over the course of implementation, we learned how frameworks operate in mobile application development. Our team learned how to use Flutter and Android Studio to create a compatible application. In addition, our team also learned to use @protocol to implement log in functionality.
## What's next for COVID Tracer
Expanding the capabilities of this application could yield very beneficial outcomes for the public. In the next few months:
* We want to expand backend functionality so that the user can manually add other users to their network.
* We would also like to configure the Alert function so that it will send push notifications to everyone in the user's network.
* We would love to implement a location tracking system which will help identify hotspots of COVID-19 where there is potential exposure based on users and COVID-19 status (positive or negative). This would make "super spreader" events or locations far more unlikely. In addition, public health officials could more effectively base vaccination and testing stations where need is higher.
* We would also like to have a more holistic account creation system (collecting more data from users about age, gender, race, etc.) through which we implement data analysis techniques based on geographic and demographic data to spot trends in contagion and infection, and more effectively address socio-economic inequity in healthcare.
|
partial
|
## Inspiration
**Affordable Delivery to every Canadian**
## What it does
The USP of this application is affordable delivery for every Canadian. The cost of home delivery typically ranges from about $10-$20; using this application, it can be brought down to about $2-$3, because deliveries are made over the OC Transpo infrastructure with a $200 monthly pass. Further, students and people who don't have a car can participate in delivering items and earn money.
It is a web app that supports the delivery of items at OC Transpo bus stops. Using the app, customers can request the delivery or pickup of an item at their nearest bus stop. They can track their orders and are notified when the delivery or pickup is about to reach the requested stop. Upon receiving the notification, the customer goes to the bus stop to hand over items for delivery or to accept a delivery.
The application has the following key features:
* Accepts item delivery/pickup requests from customers
* Notifies the customer when the delivery/pickup is about to reach their nearest bus stop
* Delivers/picks up items at the requested bus stop
## How we built it
The application is built using Java Swing and the Solace event broker. There are two applications: one sends messages (a producer) and one receives them (a consumer), and they communicate with each other through the broker.
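A minimal sketch of the producer/consumer pattern, using Python's standard library with an in-process queue standing in for the broker (the actual apps are Java Swing talking to Solace; the stop names and message format are illustrative):

```python
import queue
import threading

broker = queue.Queue()  # stands in for the Solace event broker

def producer():
    for stop in ["Rideau Centre", "Hurdman", "Blair"]:
        broker.put(f"Delivery approaching {stop}")
    broker.put(None)  # sentinel: no more messages

def consumer():
    while (message := broker.get()) is not None:
        print("Notify customer:", message)

threading.Thread(target=producer).start()
consumer()
```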
|
# Avise
Avise is a convenient and powerful platform for people who consume
substances to do so in an informed and responsible way.
## How it works!
Avise was created to help inform users of the effects substances have on
their health. This is achieved through
* integrating bots on popular messaging services to conveniently track
user data
* providing real-time visualizations of users' consumption of
potentially harmful substances
* citing research tailored to the users' personal usage patterns
Avise provides a convenient and powerful platform for people who consume
substances to do so in an informed and responsible way.
## nwHacks 2020!
Avise was created in 24 hours during the nwHacks 2020 hackathon.
The team was
[**Juno Kim**](https://github.com/junokims)
* worked on health research and the Discord bot
[**Jayden McPhedrain**](https://github.com/Cloud7831)
* worked on parsing natural language for the Discord bot
[**Tyler Trinh**](https://github.com/bvtrinh)
* worked on front end design and implementation
[**Matt Wiens**](https://github.com/mwiens91)
* worked on the back end, REST API, and cloud server
## Technologies
* [django](https://www.djangoproject.com/)
* [django-rest-framework](https://www.django-rest-framework.org/)
* [PostgreSQL](https://www.postgresql.org/)
* [NGINX](https://www.nginx.com/)
* [Google Compute Engine](https://cloud.google.com/compute/)
* [domain.com](https://www.domain.com/)
* [React](https://reactjs.org/)
* [Bootstrap](https://getbootstrap.com/)
* [Rechart.js](http://recharts.org/)
* [discord.io](https://discord.io/)
|
## Inspiration
DeliverAI was inspired by the current shift we are seeing in the automotive and delivery industries. Driverless cars are slowly but surely entering the space, and we thought driverless delivery vehicles would be a very interesting topic for our project. While drones are set to deliver packages in the near future, heavier packages are much better suited to a ground-based vehicle.
## What it does
DeliverAI has three primary components. The physical prototype is a reconfigured RC car, hacked together with a Raspberry Pi and a whole lot of motors, breadboards and resistors. Atop this monstrosity rides the package to be delivered in a cardboard "safe", along with a front-facing camera (in an Android smartphone) to scan the faces of customers.
The journey begins on the web application, at [link](https://deliverai.github.io/dAIWebApp/). To sign up, a user submits webcam photos of themselves for authentication when their package arrives. They then select a parcel from the shop, and await its arrival. This alerts the car that a delivery is ready to begin. The car proceeds to travel to the address of the customer. Upon arrival, the car will text the customer to notify them that their package has arrived. The customer must then come to the bot, and look into the camera on its front. If the face of the customer matches the face saved to the purchasing account, the car notifies the customer and opens the safe.
## How we built it
As mentioned prior, DeliverAI has three primary components, the car hardware, the android application and the web application.
### Hardware
The hardware is built from a "repurposed" remote control car. It is wired to a Raspberry Pi, which runs various Python programs checking our Firebase database for changes. The Pi is also wired to the safe, which opens when a certain value changes in the database.
*Note:* a micro city was built using old cardboard boxes to service the demo.
### Android
The onboard android device is the brain of the car. It texts customers through Twilio, scans users faces, and authorizes the 'safe' to open. Facial recognition is done using the Kairos API.
### Web
The web component, built entirely using HTML, CSS and JavaScript, is where all of the user interaction takes place. This is where customers register themselves, and also where they order items. Original designs and custom logos were created to build the website.
### Firebase
While not included as a primary component, Firebase was essential in the construction of DeliverAI. The real-time database, by Firebase, is used for the communication between the three components mentioned above.
## Challenges we ran into
Connecting Firebase to the Raspberry Pi proved more difficult than expected. A custom listener was eventually implemented that checks for changes in the database every 2 seconds.
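A minimal sketch of that polling listener, assuming the Firebase Realtime Database REST interface; the database URL and watched path are placeholders:

```python
import time
import requests

DB_URL = "https://example-project.firebaseio.com/safe/open.json"  # placeholder

last_value = None
while True:
    value = requests.get(DB_URL).json()  # read the watched path
    if value != last_value:
        print("Database changed:", value)  # e.g. trigger the safe to open
        last_value = value
    time.sleep(2)  # the 2-second polling interval mentioned above
```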
Calibrating the motors, and the amount of power delivered to each one, was another challenge.
Sending information from the web application to the Kairos API also proved to be a large learning curve.
## Accomplishments that we're proud of
We are extremely proud that we managed to get a fully functional delivery system in the allotted time.
The most exciting moment for us was when we managed to get our 'safe' to open for the first time when a valid face was exposed to the camera. That was the moment we realized that everything was starting to come together.
## What we learned
We learned a *ton*. None of us have much experience with hardware, so working with a Raspberry Pi and RC Car was both stressful and incredibly rewarding.
We also learned how difficult it can be to synchronize data across so many different components of a project, but were extremely happy with how Firebase managed this.
## What's next for DeliverAI
Originally, the concept for DeliverAI involved, well, some AI. Moving forward, we hope to create a more dynamic path finding algorithm when going to a certain address. The goal is that eventually a real world equivalent to this could be implemented that could learn the roads and find the best way to deliver packages to customers on land.
## Problems it could solve
* Delivery workers stealing packages, or taking packages home and marking them as delivered.
* Drones can only deliver in good weather conditions, while cars can function in all weather conditions.
* Potentially more efficient at delivering goods than humans and other methods of delivery.
|
losing
|
## Inspiration
One day, Jimmy went to get food, and the sensation of hunger sparked the idea: make food easy for everyone, no matter what you're into, and relive those memories once again.
At Taste of Nostalgia, our inspiration is deeply rooted in the power of food to evoke memories and emotions. We believe in leveraging technology to create a platform that not only recommends delicious foods but also fosters a sense of nostalgia, warmth, and community around the dining table. 🍽️
## What it does:
Taste of Nostalgia isn't just a food recommendation app; it's a journey through flavours and memories.
By integrating Cohere's API, Taste of Nostalgia creates a personalized dining experience that transcends mere food recommendations. It's like having a trusted friend who knows your palate and understands the stories behind your favourite dishes. 🥘
## How we built it
Initial Planning and Basic Sketches: Figma
Front End Implementation: React + Vite, Auth0, TailWind CSS
Back End Implementation: Python Flask with a MongoDB database
## Challenges we ran into
In the process of building Taste of Nostalgia, we encountered several challenges, including:
* Fine-tuning Cohere API prompts to accurately predict users' food preferences.
* Integrating the use of Auth0, ensuring user compatibility and seamless transition across devices
## Accomplishments that we're proud of
* The implementation of Auth0, with no prior knowledge
* Creating an engaging and intuitive user interface, connecting the back and front end seamlessly
## What we learned
* Using Turn.js in the context of React can be quite challenging, with difficulties in using Vite
* Implementing Auth0 is actually quite rewarding; we will use it in the future
## What's next for Taste of Nostalgia
1. Integration of Google Maps API: Now, users can not only discover delectable dishes but also explore where to find them. With the Google Maps API seamlessly integrated, Taste of Nostalgia recommends not just great food but also directs you to the best dining spots nearby.
2. Introducing Browsing Feature: Dive into a world of culinary delights without leaving your screen! Our new browsing feature lets you explore various types of food recommendations without having to taste them first. It's like having a virtual tasting journey at your fingertips.
Join us as we redefine the way you experience food and nostalgia with Taste of Nostalgia! 🍽️✨
|
## Inspiration
Once we heard the theme of nostalgia, it reminded everyone in our group of our childhoods and all of our favourite things, from songs to movies. We knew we wanted to create a way for us and many others to visit our pasts, and that's why we called it "A Trip Down Memory Lane": throughout the experience, the user is constantly walking down a lane full of past memories.
## What it does
When you first open the website, the page boots up the "Nostalgia Explorer", loading all the required hardware for peak nostalgia. Once ready, it prompts you for the year you would like to explore and completes the booting sequence. As it starts up, it brings you back to the desired year, finally leaving you to explore memory lane.
Once in memory lane, you can explore a variety of categories, seeing the most popular picks for that year. A Trip Down Memory Lane enhances the nostalgic feel by comparing the top pick for your chosen year against later years approaching the present. It really shows how the times have changed.
## How we built it
We used React for the frontend and Express for the backend. The entire stack was built using TypeScript. We also incorporated Cohere to fetch information for us and generate content for users.
## Challenges we ran into
None of us were very familiar with full-stack development, having only worked on the frontend or backend before. Figuring out how to work together to rapidly develop both sides was difficult, as we weren't sure what endpoints we needed or what the format of the responses would be. Furthermore, some of us were new to version control, and we ran into merge conflicts multiple times. There were other difficulties, such as tracking other members' progress on the features they were working on, styling the frontend, and efficiently getting data on the backend (which included some pseudo-web-scraping of Wikipedia).
## Accomplishments that we're proud of
We completed an MVP in 36 hours, learned a lot while doing so, and became better at writing software and working with a team. There are some pretty nice animations on the frontend, and the backend can make complicated queries in a "somewhat" optimized way with concurrent I/O.
## What we learned
How both frontend and backend work together and interact, version control, REST APIs, web scraping, async/await and Promises.
## What's next for A Trip Down Memory Lane
We're not really sure yet! There are a lot of other features we want to add, but given the learning curve and the amount of time, we were only able to finish an MVP!
|
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized braille menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life.
Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with XCode. We use Apple's native vision and speech API's to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways (a sketch follows this list):
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc.
* To run Optical Character Recognition on text in the real world which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world, which the user can then query about.
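A rough sketch of those three calls using the Google Cloud Vision Python client (our back-end is Go; Python is shown here for brevity, and the file name is a placeholder):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # credentials setup omitted
with open("frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# 1. Sentiment analysis on faces in view.
for face in client.face_detection(image=image).face_annotations:
    print("joy:", face.joy_likelihood, "sorrow:", face.sorrow_likelihood)

# 2. OCR on real-world text, which is read aloud downstream.
texts = client.text_detection(image=image).text_annotations
if texts:
    print("read aloud:", texts[0].description)

# 3. Label detection for objects and surroundings.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, label.score)
```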
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable in. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys was to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put together app.
Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack.
Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis.
Zak learned about building a native iOS app that communicates with data-rich APIs.
We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service.
Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app.
Ultimately, we plan to host the back-end on Google App Engine.
|
losing
|
## Inspiration
Oftentimes it's inconvenient to have to review flashcards on your computer. We want to solve this problem by making memorization through flashcards more accessible to everyone!
## What it does
Mem:re is a flashcard app synchronized with smart watches, allowing you to review flashcards on the go. You can make as many flashcards as you want and easily access them on your smart watch, letting you quickly review whenever you get a chance. Whether on the bus, walking to class, or any time you have a bit of time to spare, Mem:re makes it easy for you to study on the go!
## How we built it
Mem:re was built using JS and Zepp's watch app interface.
## Challenges, Accomplishments, and Learning
Understanding Zepp's new watch app interface was a challenge, and implementing it while working around the limitations of the software proved quite difficult as well. However, we managed to figure these problems out, and we are all very proud of that. We learned about reading and understanding code, and about working with APIs.
## What's next for Mem:re
We want to continue to develop the watch side of the app, making a more intuitive interface for users. Moreover, in the future we hope to leverage stress data to smartly order cards for review.
|
## Inspiration
When we jam out to our favorite tunes on YouTube, we always wished that we could sing along to the songs without having to look up lyrics in a different tab (and sometimes not even finding them). Now, with Lyric Machine, it's super easy :)
## What it does
Lyric Machine is a lightweight and non-intrusive Chrome extension that simply loads the lyrics to the song that you're listening to on YouTube. It also loads a bonus gif related to the song title.
## How we built it
We built this Chrome extension using the Musixmatch and GIPHY APIs. We also built in some JS logic to scrape the song (video) title from the current YouTube page.
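A sketch of the lookup flow, shown in Python for illustration (the extension itself is JS; the API keys and the parsed title are placeholders):

```python
import requests

title = "Never Gonna Give You Up"  # placeholder for the scraped video title

lyrics = requests.get(
    "https://api.musixmatch.com/ws/1.1/matcher.lyrics.get",
    params={"q_track": title, "apikey": "MUSIXMATCH_KEY"},  # key is a placeholder
).json()

gif = requests.get(
    "https://api.giphy.com/v1/gifs/search",
    params={"api_key": "GIPHY_KEY", "q": title, "limit": 1},  # key is a placeholder
).json()
```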
## Challenges we ran into
Connecting the dots between Musixmatch, GIPHY, and YouTube was the most challenging part of this project, and scraping YouTube pages isn't super easy either.
## Accomplishments that we're proud of
We're proud to build something that we will actually use on a daily basis!
## What we learned
How to build a Chrome extension, and that getting lyrics via Musixmatch is pretty simple.
## What's next for Lyric Machine
We're looking to build a mobile app, similar to Shazam but with lyrics. And in the long term, maybe an audio headset that can recognize songs around you, and display the lyrics in AR :)
|
>
> Domain.com domain: sharescription.net
>
>
>
## Inspiration
We all love leaching off other people's Netflix, but hate it when people do it to us. We wanted to solve the age-old issue of sharing subscriptions. There's currently no easy way to split the bills: you have to rely on someone paying and everyone else remembering to pay that person back. We aim to change that.
## What it does
People can register via oAuth Facebook, and then add their bills using the simple web interface. They can then decide to share the bill with other people registered on the platform. Once a month, the service automatically splits the bill between everyone and sends an Interac e-Transfer request to everyone who's agreed to pay the bill. It then generates a one-time use virtual debit card number for them to use at the merchant the bill is for.
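A minimal sketch of the splitting step, working in cents so the shares always sum to the exact total (the leftover-cent rule and names are illustrative assumptions):

```python
def split_bill(total_cents, members):
    """Split a bill so shares always sum to the exact total."""
    base, extra = divmod(total_cents, len(members))
    # The first `extra` members absorb the leftover cents.
    return {m: base + (1 if i < extra else 0) for i, m in enumerate(members)}

print(split_bill(1699, ["amy", "ben", "cal"]))
# {'amy': 567, 'ben': 566, 'cal': 566} -- each share becomes an e-Transfer request
```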
## How we built it
The project is lots of different pieces all put together. Our APIs are coded in Node.js via Standard Library, we used Flask and Bootstrap to build our website, and bits of PHP patch up our knowledge gaps. In terms of APIs, we used the Interac e-Transfer API and the Marqeta API for generating card numbers.
## Challenges we ran into
>
> We had quite a few issues with API documentation (and the lack of it) and things not running quite as they should be. *-Jack*
>
>
> "I was responsible for the backend of the project and we decided to use Flask with Python to handle the login and authentication. Although all of us have experience with Python but learning how to use Flask for the first time was definitely a struggle for all of us. Secondly I had spent a fair amount of time trying to set up the virtual environment in order to get the project up and running." *-Jerry*
>
>
> "I am not experienced in coding or website building, so I did have to learn and develop my website. At first it was difficult, however, with my team’s constant support I was able to finish the front-end and deploy it in time." *-Sophia*
>
>
> "During the hackathon I found two bugs in Standard Library’s online IDE (case-sensitive usernames not working when sharing code and an error with auto-save overwriting the save files) which took some time away as I sat down with the StdLib sponsors to make sure it was all fixed. There’s a limited supply of documentation on the email parser (SendGrid), and I had to use 3rd party websites to understand how to deploy the app." *-Iyad*
>
>
> Oh my, there was a fair share of challenges. Not only did we have to deal with the Interac documentation, but even then the website tools rarely ever seemed to work. From there, fate decided that the language of choice was going to be one I had never used before (Node.js), but we still got it working in the end. *-Kevin*
>
>
>
## Accomplishments that we're proud of
>
> I think we've created a really good project, especially considering how many issues with the APIs we ran into during the development of it. *-Jack*
>
>
> "Our project was only 30% done when there was 12 hours left, at that time I didn't expect us to be able to finish the project in time for the demo. However, by some miracles we have managed to pull through and get all of our codes together and to produce this amazing web-app that I am proud to stand behind." *-Jerry*
>
>
> "I came into this hackathon with very little coding knowledge, but I worked hard and was able to create a visually pleasing minimum viable product, which impressed myself and the team. I am proud that I was able to learn the language quickly and make a good product." *-Sophia*
>
>
> "I came in with another team and a different idea. I thought I was set for this hackathon, however, things didn’t go as planned and I had to find a new team during the hackathon. I’m proud that we were able to create a great product during this weekend, with no prior planning, and with half of us meeting for the first time." *-Iyad*
>
>
> To be honest, the fact that the proper output comes out from a range of inputs is a real success for me. My initial expectations were low, but as time progressed, I found more to be proud of in my work. Something about specifically working with the Interac API, starting off with feelings of hopelessness, and then creating something makes it all the better. *-Kevin*
>
>
>
## What we learned
>
> I learned all about the weird ways that some payment APIs handle authentication - Marqeta's way of sending data and authenticating yourself wasn't really intuitive, but we got there in the end. I also got to learn about how Flask works, and how to handle authentication using it. *-Jack*
>
>
> "I learned how to use Flask and how to deploy a project on to Google App Engine in the past 36 hours. I'm hoping to carry on this skill into creating my personal website." *-Jerry*
>
>
> "I learnt how to develop my own website using Html and Bootstrap. I was able to learn a new frontend language that expands my skillset." *-Sophia*
>
>
> "I learnt how to integrate three different backend languages into one project. I’m surprised at how seamlessly the codes work together (after hours of debugging). Though not my main focus, I was able to float around the team and help the front end and help out with flask." *-Iyad*
>
>
> No doubt I've learnt much about the Interac API (and why I probably won't use it again), and about Standard Library, having only now started to understand what an API is. However, the most important thing I've learnt is to embrace the hackathon spirit and take on tasks that may not be the easiest, but are cool nonetheless. *-Kevin*
>
>
>
## What's next for Sharescription
We built the infrastructure for email parsing, but didn't quite get around to implementing it as a feature. We'd love to make it so people can just forward their bills to the service and have them automatically appear, as this would make it a lot easier to add new bills.
We'd also like to add support for more subscriptions, and perhaps use machine learning to recognise new services and add them to the platform.
In terms of business, we think partnering with subscription services to show relevant targeted advertising or rewards/discounts to customers using the service would be a really good way to help monetise it.
|
losing
|
## Inspiration
Not all hackers wear capes - but not all capes get washed correctly. Dorming on a college campus the summer before our senior year of high school, we realized how difficult it was to decipher laundry tags and determine the correct settings to use while juggling a busy schedule and challenging classes. We decided to try Google's up and coming **AutoML Vision API Beta** to detect and classify laundry tags, to save headaches, washing cycles, and the world.
## What it does
L.O.A.D identifies the standardized care symbols on tags, considers the recommended washing settings for each item of clothing, clusters similar items into loads, and suggests care settings that optimize loading efficiency and prevent unnecessary wear and tear.
## How we built it
We took reference photos of hundreds of laundry tags (from our fellow hackers!) to train a Google AutoML Vision model. After trial and error and many camera modules, we built an Android app that allows the user to scan tags and fetch results from the model via a call to the Google Cloud API.
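A rough sketch of that prediction call, shown in Python rather than the app's Android code; the project ID, model ID, access token, and file name are placeholders, and the endpoint follows AutoML Vision's `:predict` REST form:

```python
import base64
import requests

# Placeholders: PROJECT_ID, MODEL_ID, ACCESS_TOKEN, and the scanned tag photo.
with open("tag.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "https://automl.googleapis.com/v1/projects/PROJECT_ID/"
    "locations/us-central1/models/MODEL_ID:predict",
    headers={"Authorization": "Bearer ACCESS_TOKEN"},
    json={"payload": {"image": {"imageBytes": image_b64}}},
)
for result in response.json().get("payload", []):
    print(result["displayName"], result["classification"]["score"])
```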
## Challenges we ran into
Acquiring a sufficiently sized training image dataset was especially challenging. While we had a sizable pool of laundry tags available here at PennApps, our reference images only represent a small portion of the vast variety of care symbols. As a proof of concept, we focused on identifying six of the most common care symbols we saw.
We originally planned to utilize the Android Things platform, but issues with image quality and processing power limited our scanning accuracy. Fortunately, the similarities between Android Things and Android allowed us to shift gears quickly and remain on track.
## Accomplishments that we're proud of
We knew that we would have to painstakingly acquire enough reference images to train a Google AutoML Vision model with crowd-sourced data, but we didn't anticipate just how awkward asking to take pictures of laundry tags could be. We can proudly say that this has been a uniquely interesting experience.
We managed to build our demo platform entirely out of salvaged sponsor swag.
## What we learned
As high school students with little experience in machine learning, Google AutoML Vision gave us a great first look into the world of AI. Working with Android and Google Cloud Platform gave us a lot of experience working in the Google ecosystem.
Ironically, working to translate the care symbols has made us fluent in laundry. Feel free to ask us any questions!
## What's next for Load Optimization Assistance Device
We'd like to expand care symbol support and continue to train the machine-learned model with more data. We'd also like to move away from pure Android, and integrate the entire system into a streamlined hardware package.
|
## Inspiration
Large Language Models (LLMs) are limited by a token cap, making it difficult for them to process large contexts, such as entire codebases. We wanted to overcome this limitation and provide a solution that enables LLMs to handle extensive projects more efficiently.
## What it does
LLM Pro Max intelligently breaks a codebase into manageable chunks and feeds only the relevant information to the LLM, ensuring token efficiency and improved response accuracy. It also provides an interactive dependency graph that visualizes the relationships between different parts of the codebase, making it easier to understand complex dependencies.
## How we built it
Our landing page and chatbot interface were developed using React. We used Python and Pyvis to create an interactive visualization graph, while FastAPI powered the backend for dependency graph content. We've added third-party authentication using the GitHub Social Identity Provider on Auth0. We set up our project's backend using Convex and also added a Convex database to store the chats. We implemented Chroma for vector embeddings of GitHub codebases, leveraging advanced Retrieval-Augmented Generation (RAG) techniques, including query expansion and re-ranking. This enhanced the Cohere-powered chatbot’s ability to respond with high accuracy by focusing on relevant sections of the codebase.
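A minimal sketch of the Chroma retrieval step (the chunks, collection name, and query are illustrative; chunking, query expansion, and re-ranking are simplified away here):

```python
import chromadb

client = chromadb.Client()
collection = client.create_collection("codebase")

# Illustrative chunks; in practice these come from splitting the repo's files.
chunks = ["def load_config(path): ...", "class Router: ...", "# auth middleware ..."]
collection.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

# Only the top matches go into the LLM's context, keeping token usage in check.
results = collection.query(query_texts=["how does routing work?"], n_results=2)
context = "\n".join(results["documents"][0])
```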
## Challenges we ran into
We faced a learning curve with vector embedding codebases and applying new RAG techniques. Integrating all the components—especially since different team members worked on separate parts—posed a challenge when connecting everything at the end.
## Accomplishments that we're proud of
We successfully created a fully functional repo agent capable of retrieving and presenting highly relevant and accurate information from GitHub repositories. This feat was made possible through RAG techniques, surpassing the limits of current chatbots restricted by character context.
## What we learned
We deepened our understanding of vector embedding, enhanced our skills with RAG techniques, and gained valuable experience in team collaboration and merging diverse components into a cohesive product.
## What's next for LLM Pro Max
We aim to improve the user interface and refine the chatbot’s interactions, making the experience even smoother and more visually appealing. (Please Fund Us)
|
## Inspiration:
Our journey began with a simple, yet profound realization: sorting waste is confusing! We were motivated by the challenge many face in distinguishing recyclables from garbage, and we saw an opportunity to leverage technology to make a real environmental impact. We aimed to simplify recycling, making it accessible and accurate for everyone.
## What it does:
EcoSort uses a trained ML model to identify and classify waste. Users present an item to their device's webcam, take a photo, and our website instantly advises whether it is recyclable or garbage. It's user-friendly, efficient, and encourages responsible waste disposal.
## How we built it:
We used Teachable Machine to train our ML model, feeding it diverse data and tweaking values to ensure accuracy. Integrating the model with a webcam interface was critical, and we achieved this through careful coding and design, using web development technologies to create a seamless user experience.
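A minimal sketch of the classification step, assuming the Keras export Teachable Machine produces (`keras_model.h5` plus `labels.txt`); our site runs the equivalent on the web, so Python is shown here only for brevity:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5")             # Teachable Machine's Keras export
labels = open("labels.txt").read().splitlines()  # e.g. "0 recyclable", "1 garbage"

ok, frame = cv2.VideoCapture(0).read()           # one webcam photo
img = cv2.resize(frame, (224, 224))              # Teachable Machine's input size
img = img.astype(np.float32) / 127.5 - 1.0       # normalize to [-1, 1]

scores = model.predict(img[np.newaxis, ...])[0]
print(labels[int(np.argmax(scores))])            # the advised disposal category
```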
## Challenges we ran into:
* The most significant challenge was developing a UI that was not only functional but also intuitive and visually appealing. Balancing these aspects took several iterations.
* Another challenge we faced was the integration of our ML model with our UI.
* Ensuring our ML model accurately recognized a wide range of waste items was another hurdle, requiring extensive testing and data refinement.
## Accomplishments that we're proud of:
What makes us stand out is the flexibility of our project. We recognize that each region has its own set of waste disposal guidelines. To address this, we built our project so that the user can select their region to get the most accurate results. We're proud of creating a tool that simplifies waste sorting and encourages eco-friendly practices. The potential impact of our tool in promoting environmentally responsible behaviour is something we find particularly rewarding.
## What we learned:
This project enhanced our skills in ML, UI/UX design, and web development. On a deeper level, we learned about the complexities of waste management and the potential of technology to drive sustainable change.
## What's next for EcoSort:
* We plan to expand our database to accommodate different types of waste and adapt to varied recycling policies across regions. This will make EcoSort a more universally applicable tool, further aiding our mission to streamline recycling for everyone.
* We are also in the process of hosting the EcoSort website as our immediate next step. At the moment, EcoSort works perfectly fine locally. However, with regard to hosting, we have started to deploy the site but are unfortunately running into some hosting errors.
* Our [site](https://stella-gu.github.io/EcoSort/) is currently working
|
winning
|
## 💡 INSPIRATION 💡
Today, Ukraine is on the front lines of a renewed conflict with Russia. Russia's recent full-scale invasion of Ukraine has created more than 4.3 million refugees and displaced another 6.5 million citizens in the past 6 weeks according to the United Nations. Humanitarian aid organizations make trips into war-torn parts of Ukraine daily, but citizens are often unaware of their locations. Moreover, those fleeing the country into surrounding European countries don't know where they can stay. By connecting refugees with those willing to offer support and providing location data for humanitarian aid, YOUkraine hopes to support those unjustly suffering in Ukraine.
## ⚙️ WHAT IT DOES ⚙️
Connects refugees and support/humanitarian aid groups in Ukraine.
You can sign up on the app and declare yourself as either a refugee or supporter and connect with each other directly. Refugees will get tailored recommendations for places to stay, which are offered by supporters, based on family size and the desired country. Refugees will also be able to access a map to view the locations of humanitarian relief organizations such as UN Refugee Agency, Red Cross, Doctors Without Borders, Central Kitchen and many more.
## 🛠️ HOW WE BUILT IT🛠️
Tech Stack: MERN (Mongodb, Express.js, React.js, Node.js)
We used React (JS), Framer-motion, Axios, Bcrypt, uuid, and react-tinder to create a visually pleasing and accessible way for users to communicate.
For live chat and to store user data, we took advantage of MongoDB, Express.js, and Node.js because of their ease of setup and data access.
For the recommendation feature, we used Pandas, Numpy and Seaborn to preprocess our data. We trained our tf-idf model using Sklearn on 3000 different users.
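A minimal sketch of the tf-idf matching idea with sklearn (the listing texts and query are illustrative, not our real data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

listings = [  # illustrative supporter listings
    "apartment in Warsaw Poland fits family of four",
    "single room in Berlin Germany one person",
    "house in Krakow Poland large family welcome",
]
query = ["Poland family of four"]  # a refugee's desired country and family size

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(listings)
scores = cosine_similarity(vectorizer.transform(query), matrix)[0]

for rank in scores.argsort()[::-1]:  # best match first
    print(f"{scores[rank]:.2f}", listings[rank])
```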
## 😣 CHALLENGES WE RAN INTO 😣
* **3 person team** that started **late**, doesn't get the early worm :(
* It was the **first time** anyone on the team had worked with the **MERN stack** (took a lil' figuring out, but it works!)
* Had to come up with our own dataset to train with and needed to remake the dataset and retrain the model multiple times
* **NO UI/UX DESIGNER** (don't take them for granted, they're GOD SENDS)
* We don't have much experience using cookies and we ran into a lot of challenges, but we stuck it out and made it work (WOOT WOOT!)
## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉
* WE GOT IT DONE!! 30 hours of blood, sweat, and tears later, we have our functioning app :D
* We can now proudly say we're full stack developers because we made and implemented everything ourselves, top to bottom :)
* Designing the app from scratch (no figma/ uiux designer💀)
* USER AUTHENTICATION WORKS!! ( つ•̀ω•́)つ
* Using so many tools, languages and frameworks at once, and making them work together :D
* Submitting on time (I hope? 😬)
## ⏭️WHAT'S NEXT FOR YOUkraine⏭️
YOUkraine has a lot to do before it can be deployed as a genuine app.
* Add security features and encryption to ensure the app isn't misused
* Implement air raid warnings and 'danger sightings' so that users can stay informed and avoid conflict zones.
* Partner with NGO's/humanitarian relief organizations so we can update our map live and provide more insights concerning relief efforts.
* Enhance our recommendation feature (add more search terms)
* Possibly add a donation feature to support Ukraine
## 🎁 ABOUT THE TEAM🎁
Eric is a 3rd year computer science student. With experience in designing for social media apps and websites, he is interested in expanding his repertoire in designing for emerging technologies. You can connect with him at his [Portfolio](https://github.com/pidgeonforlife)
Alan is a 2nd year computer science student at the University of Calgary, currently interning at SimplySafe. He has a wide variety of technical skills in frontend and backend development! Moreover, he has a strong passion for both data science and app development. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/alanayy/) or view his [Portfolio](https://github.com/AlanAyy)
Matthew is a 2nd year computer science student at Simon Fraser University, currently looking for a summer 2022 internship. He has formal training in data science. He's interested in learning new frontend skills/technologies and honing his current ones. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/) or view his [Portfolio](https://github.com/WongMatthew)
### 🥳🎉 THANK YOU YALE FOR HOSTING YHACKS🥳🎉
|
## Inspiration **💪🏼**
Health insurance: everyone needs it, no one wants to pay for it. As soon-to-be adults, health insurance has been a growing concern for us. Since a simple ambulance ride easily costs up to thousands of dollars, not having health insurance is a terrible decision in the US. But how much are you supposed to pay for it? Insurance companies publish their rates, but just having formulas doesn't tell me whether they are ripping me off, especially as a young adult who has never paid for health insurance.
## What it does? **🔍**
Thus, to avoid being ripped off on health insurance after leaving our parents' households, we have developed Health Insurance 4 Dummies: a website utilizing a machine learning model that determines a fair estimate of the annual cost of health insurance based on the user's personal information. It also uses an LLM to provide detailed information on the composition of the cost.
## How we built it **👷🏼♀️**
The front-end is built using convex-react, creating a UI that takes inputs from the user. The backend is built using python-flask, which communicates with the remote services InterSystems and Together.AI. The ML model for predicting the cost is built on InterSystems using H2O, trained on a dataset consisting of individuals' information and their annual health insurance rates. The explanation of costs is created using Together.AI's Llama-2 model.
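A minimal sketch of the Flask layer, where `predict_annual_cost` stands in for the call to the model hosted on InterSystems; its name, formula, and field list are illustrative assumptions:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_annual_cost(age, bmi, smoker, children):
    # Placeholder formula; the real call goes to the H2O model on InterSystems.
    return 2500 + age * 40 + (15000 if smoker else 0)

@app.route("/estimate", methods=["POST"])
def estimate():
    info = request.get_json()
    cost = predict_annual_cost(info["age"], info["bmi"], info["smoker"], info["children"])
    return jsonify({"annual_cost_usd": round(cost, 2)})
```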
## Challenges we ran into **🔨**
Full-stack development is tedious, especially when the functions require remote resources. Other challenges included finding good datasets to train the model, authenticating when connecting to the trained model on InterSystems using their IRIS connection driver, and choosing the right model to use from Together.AI.
## Accomplishments that we're proud of **⭐**
We trained and accessed an ML model on a remote database, opening the possibility of working with massive datasets, and integrated LLMs to provide automated information.
## What we learned **📖**
Full-stack development skills, ML model training and use, accessing remote services using APIs, and TLS authentication.
## What's next for Health Insurance 4 Dummys **🔮**
Gather larger datasets to make more parameters available and give more accurate predictions.
|
## Inspiration
We were inspired to build Schmart after researching pain points in grocery shopping. We realized how difficult it is to stick to your health goals or reduce your environmental impact while grocery shopping. Inspired by innovative technology that already exists, we wanted to create an app that would empower anyone to shop toward their goals while reducing friction.
## What it does
Our solution: gamify the grocery shopping experience by letting users set goals before shopping, scan products in real time using AR and AI to find products that meet those goals, and earn badges and rewards (PC Optimum points) by doing so.
## How we built it
This product was designed on Figma, and we built the backend using Flask and Python, with the database stored using SQLite3. We then built the front end with React Native.
## Challenges we ran into
Some team members had school deadlines during the hackathon, so we could not be fully concentrated on the Hackathon coding. In addition, our team was not too familiar with React Native, so development of the front end took longer than expected.
## Accomplishments that we're proud of
We are extremely proud that we were able to build and deploy an end-to-end product in such a short timeframe. We are happy to empower people while shopping, make the experience so much more enjoyable, and solve problems that exist while shopping.
## What we learned
Communication is key. This project would not have been possible without the relentless work of all our team members striving to make the world a better place with our product. Whether it be using technology we have never used before or sharing our knowledge with the rest of the group, we all wanted to create a product that would have a positive impact and because of this we were successful in creating our product.
## What's next for Schmart
We hope everyone can use Schmart in the future as a mobile app on their phones. We can see it being used in grocery stores (and hopefully all stores) in the future. Meeting health and environmental goals should be barrier-free, and being an app that anyone can use makes this possible.
|
partial
|
## Inspiration
Today, anything can be learned on the internet with just a few clicks. Information is accessible anywhere and everywhere- one great resource being Youtube videos. However accessibility doesn't mean that our busy lives don't get in the way of our quest for learning.
TLDR: Some videos are too long, and so we didn't watch them.
## What it does
TLDW - Too Long; Didn't Watch is a simple and convenient web application that turns Youtube and user-uploaded videos into condensed notes categorized by definition, core concept, example and points. It saves you time by turning long-form educational content into organized and digestible text so you can learn smarter, not harder.
## How we built it
First, our program either takes in a YouTube link and converts it into an MP3 file or prompts the user to upload their own MP3 file. Next, the audio file is transcribed with Assembly AI's transcription API. The text transcription is then fed into Co:here's Generate, then Classify, then Generate again to summarize the text, organize it by type of point (main concept, point, example, definition), and extract key terms. The processed notes are then displayed on the website and written into a PDF file downloadable by the user. The Python backend, built with Django, is connected to a ReactJS frontend for an optimal user experience.
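A rough sketch of the transcription leg of that pipeline against AssemblyAI's REST API (the API key and file name are placeholders):

```python
import time
import requests

HEADERS = {"authorization": "ASSEMBLYAI_KEY"}  # key is a placeholder

with open("lecture.mp3", "rb") as f:  # the converted/uploaded MP3
    upload = requests.post(
        "https://api.assemblyai.com/v2/upload", headers=HEADERS, data=f
    ).json()

job = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=HEADERS,
    json={"audio_url": upload["upload_url"]},
).json()

while (status := requests.get(
    f"https://api.assemblyai.com/v2/transcript/{job['id']}", headers=HEADERS
).json())["status"] not in ("completed", "error"):
    time.sleep(3)  # poll until the transcript is ready

transcript = status["text"]  # handed off to Co:here Generate/Classify
```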
## Challenges we ran into
Manipulating Co:here's NLP APIs to generate good responses was certainly our biggest challenge. With a lot of experimentation *(and exploration)* and finding patterns in our countless test runs, we were able to develop an effective note generator. We also had trouble integrating the many parts as it was our first time working with so many different APIs, languages, and frameworks.
## Accomplishments that we're proud of
Our greatest accomplishment was also our greatest challenge. The TLDW team is proud of the smooth integration of the different APIs, languages and frameworks that ultimately permitted us to carry our MP3 file through many different processes, across JavaScript and Python, to our final PDF product.
## What we learned
With this being the first or second hackathon for our team of first-year university students, we learned a fortune of technical knowledge, and what it means to work in a team. While every member tackled an unfamiliar API, language or framework, we also learned the importance of communication. Helping your team members understand your own work is how the bigger picture of TLDW comes to fruition.
## What's next for TLDW - Too Long; Didn't Watch
Currently, TLDW generates a useful PDF of condensed notes in the same order as the video. For future growth, TLDW hopes to become a platform that provides students with more tools to work smarter, not harder: providing a flashcard option to test the user on generated definitions, and ultimately using the Co:here API to read out questions based on the generated examples and points.
|
## Inspiration
According to the Washington Post (June 2023), since Columbine in 1999, more than 356,000 students in the U.S. have experienced gun violence at school.
Students of all ages should be able to learn comfortably and safely within the walls of their classroom.
Quality education is a UN Sustainable Development goal and can only be achieved when the former becomes a reality. As college students, especially in the midst of the latest UNC-Chapel Hill school shooting, we understand threats lie even within the safety of our campus and have grown up knowing the tragedies of school shootings.
This problem is heavily influenced by politics and thus there is an unclear timeline for concrete and effective solutions to be implemented. The intention of our AI model is to contribute a proactive approach that requires only a few pieces of technology but is capable of an immediate response to severe events.
## What it does
Our machine learning model is trained to recognize active threats with displayed weapons. When the camera senses that a person has a knife, it automatically calls 911. We also created a machine learning model that uses CCTV camera footage of perpetrators with guns.
Specifically, this model was meant to be catered towards guns to address the rising safety issues in education. However, for the purpose of training our model and safety precautions, we could not take training data pictures with a gun and thus opted for knives. We used the online footage as a means to also train on real guns.
## How we built it
We obtained an SD card with the OS for the Raspberry Pi, then added the Viam server to the Raspberry Pi. Viam provides a platform to build a machine learning model on their server.
We searched the web and imported CCTV images of people with and without guns, trying to find a wide variety of these images. We also integrated a camera with the Raspberry Pi to take additional images of ourselves with a knife as training data. In our photos we held the knife in different positions, in different lighting, and in different people's hands; more variety in the photos made for a stronger model. Using the data from both sources and the Viam platform, we went through each image and identified the knife or gun by drawing a bounding box around it. We then trained two separate ML models: one trained on the CCTV footage images, and one using our own images as training data.
After testing for recognition, we used a program that connects the Visual Studio development environment to our hardware. We integrated Twilio into our project, which allowed for an automated call feature. In our program, we ran the ML model using our camera and checked for the appearance of a knife. As a result, upon detection of a weapon, our program immediately alerts the police. In this case, a personal phone number was used instead of the authorities' to highlight our system's effectiveness.
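A minimal sketch of that alert step using Twilio's Python client; the numbers, credentials, detection threshold, and the shape of `detections` are placeholders rather than our exact code:

```python
from twilio.rest import Client

twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholders

def alert_authorities():
    twilio.calls.create(
        to="+15551230000",      # personal/test number standing in for 911
        from_="+15559870000",   # our Twilio number
        url="http://demo.twilio.com/docs/voice.xml",  # TwiML for the call
    )

def on_detections(detections):
    # `detections` stands in for the vision model's output for one camera frame.
    if any(d["class_name"] == "knife" and d["confidence"] > 0.8 for d in detections):
        alert_authorities()
```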
## Challenges we ran into
Challenges we ran into include connection issues, training and testing limitations, and setup issues.
Internet connectivity presented as a consistent challenge throughout the building process. Due to the number of people on one network at the hackathon, we used a hotspot for internet connection, and the hotspot connectivity was often variable. This led to our Raspberry Pi and Viam connections failing, and we had to restart many times, slowing our progress.
In terms of training, we were limited in the locations we could train our model in. Since the hotspot disconnected if we moved locations, we could only train the model in one room. Ideally, we would have liked to train in different locations with different lighting to improve our model accuracy.
Furthermore, we trained a machine learning model with guns, but this was difficult to test for both safety reasons and a lack of resources to do so. In order to verify the accuracy of our model, it would be optimal to test with a real gun in front of a CCTV camera. However, this was not feasible with the hackathon environment.
Finally, we had numerous setup issues, including connecting the Raspberry Pi to the SSH, making sure the camera was working after setup and configuration, importing CCTV images, and debugging. We discovered that the hotspot that we connected the Raspberry Pi and the laptop to had an apostrophe in its name, which was the root of the issue with connecting to the SSH. We solved the problem with the camera by adding a webcam camera in the Viam server rather than a transform camera. Importing the CCTV images was a process that included reading the images into the Raspberry Pi in order to access them in Viam. Debugging to facilitate the integration of software with hardware was achieved through iteration and testing.
We would like to thank Nick, Khari, Matt, and Hazal from Viam, as well as Lizzie from Twilio, for helping us work through these obstacles.
## Accomplishments that we're proud of
We're proud that we could create a functional and impactful model within this 36 hour hackathon period.
As a team of Computer Science, Mechanical Engineering, and Biomedical Engineering majors, we definitely do not look like the typical hackathon team. However, we were able to use our various skill sets, from hardware analysis to code compilation to design, to achieve our goals.
Additionally, as it was our first hackathon, we developed a completely new set of skills: both soft and technical. Given the pressure, time crunch, and range of new technical equipment at our fingertips, it was an uplifting experience. We were able to create a prototype that directly addresses a topic that is dear to us, while also communicating effectively with working professionals.
## What we learned
We expanded our skills with a breadth of new technical skills in both hardware and software. We learned how to utilize a Raspberry Pi, and connect this hardware with the machine learning platform in Viam. We also learned how to build a machine learning model by labeling images, training a model for object detection, and deploying the model for results. During this process, we gained knowledge about what images were deemed good/useful data. On the software end, we learned how to integrate a Python program that connects with the Viam machine learning platform and how to write a program involving a Twilio number to automate calling.
## What's next for Project LearnSafe
We hope to improve our machine learning model in a multifaceted manner. First, we would incorporate a camera with better quality and composition for faster image processing. This would make detection in our model more efficient and effective. Moreover, adding more images to our model would amplify our database in order to make our model more accurate. Images in different locations with different lighting would improve pattern recognition and expand the scope of detection. Implementing a rotating camera would also enhance our system. Finally, we would test our machine learning model for guns with CCTV, and modify both models to include more weaponry.
Today’s Security. Tomorrow’s Education.
|
# SmartKart
An IoT shopping cart that follows you around, combined with a cloud-based Point of Sale and Store Management system. Provides a comprehensive solution to eliminate lineups in retail stores, engage with customers without being intrusive, and serve as a platform for detailed customer analytics.
Featured by nwHacks: <https://twitter.com/nwHacks/status/843275304332283905>
## Inspiration
We questioned the current self-checkout model. Why wait in line in order to do all the payment work yourself!? We are trying to make a system that alleviates much of the hardships of shopping; paying and carrying your items.
## Features
* A robot shopping cart that uses computer vision to follow you!
* Easy-to-use barcode scanning (with an awesome booping sound)
* Tactile scanning feedback
* Intuitive user-interface
* Live product management system, view how your customers shop in real time
* Scalable product database for large and small stores
* Live cart geo-location, with theft prevention
|
partial
|
## Inspiration
After observing the different hardware options, we found the dust sensor especially outstanding in its versatility; it struck us as exotic. Dust particulates in the air we breathe are an ever-present threat that is too often overlooked, and the importance of raising awareness of this issue became apparent. But retaining interest in an elusive topic would require an innovative form of expression, which left us stumped. After much deliberation, we realized that many of us had a subconscious affection for pets and their demanding needs. Applying this concept, Pollute-A-Pet approaches a difficult topic with care and concern.
## What it does
Pollute-A-Pet tracks the particulates in the air a person breathes and reflects them in the behavior of adorable online pets. With a variety of pets, your concern may grow as you watch the suffering that polluted air causes them, no matter your taste in companions.
## How we built it
Beginning in two groups, a portion of us focused on connecting the dust sensor to an Arduino, using Python to relay the Arduino's readings over Bluetooth to Firebase, and then reading and updating Firebase from our website using JavaScript. Our other group first created gifs of our companions in Blender and Adobe before creating the website with HTML and data-controlled behaviors, written in JavaScript, that dictate the pets' actions.
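A minimal sketch of the sensor-to-Firebase bridge, assuming the Arduino prints one particulate reading per line over a Bluetooth serial link; the port name, credential path, database URL, and field names are placeholders:

```python
import time
import serial
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")  # placeholder path
firebase_admin.initialize_app(cred, {"databaseURL": "https://pollute-a-pet.firebaseio.com"})
readings = db.reference("readings")

# A Bluetooth serial link shows up as an ordinary serial port (e.g. /dev/rfcomm0).
port = serial.Serial("/dev/rfcomm0", 9600, timeout=2)

while True:
    line = port.readline().decode().strip()
    if line:
        # Push one timestamped dust reading; the website reads this node live.
        readings.push({"dust_ugm3": float(line), "ts": time.time()})
```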
## Challenges we ran into
The dust sensor was a novel experience for us, and we researched its specifications before any work began. Firebase communication also became stubborn throughout development, as JavaScript felt counterintuitive compared with the object-oriented languages most of us were used to. Not only was animating more tedious than expected, but transparent gifs are also incredibly difficult to make in Blender. In the final moments, our team also ran into problems uploading our videos, narrowly avoiding disaster.
## Accomplishments that we're proud of
All the animations of the virtual pets we made were hand-drawn over the course of the competition. This was also our first time working with the feather esp32 v2, and we are proud of overcoming the initial difficulties we had with the hardware.
## What we learned
While we had previous experience with Arduino, we had not previously known how to use a feather esp32 v2. We also used skills we had only learned in beginner courses with detailed instructions, so while we may not have “learned” these things during the hackathon, this was the first time we had to do these things in a practical setting.
## What's next for Dustables
When it comes to convincing people to use a product such as this, it must be designed to be both visually appealing and not physically cumbersome. This cannot be said for our prototype for the hardware element of our project, which focused completely on functionality. Making this more user-friendly would be a top priority for team Dustables. We also have improvements to functionality that we could make, such as using Wi-Fi instead of Bluetooth for the sensors, which would allow the user greater freedom in using the device. Finally, more pets and different types of sensors would allow for more comprehensive readings and an enhanced user experience.
|
## Inspiration
We were inspired by the story of the large and growing problem of stray, homeless, and missing pets, and the ways in which technology could be leveraged to solve it, by raising awareness, adding incentive, and exploiting data.
## What it does
Pet Detective is first and foremost a chat bot, integrated into a Facebook page via Messenger. The chatbot serves two user groups: pet owners that have recently lost their pets, and good Samaritans that would like to help by reporting sightings. Moreover, Pet Detective provides monetary incentive for such people by collecting donations from happily served users. Pet Detective provides the most convenient and hassle-free user experience to both user bases. A simple virtual button generated by the chatbot allows the reporter to let the bot collect location data. In addition, the bot asks for a photo of the pet and runs computer vision algorithms to determine several attributes and match factors. The bot then places a track on the pet and continues to alert the owner about potential matches by sending images. In the case of a match, the service sets up a rendezvous with a trusted animal care partner. Finally, Pet Detective collects data on these transactions and reports, and provides a data analytics platform to pet care partners.
## How we built it
We used messenger developer integration to build the chatbot. We incorporated OpenCV to provide image segmentation in order to separate the dog from the background photo, and then used Google Cloud Vision service in order to extract features from the image. Our backends were built using Flask and Node.js, hosted on Google App Engine and Heroku, configured as microservices. For the data visualization, we used D3.js.
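The segmentation step with OpenCV could be sketched with GrabCut, roughly as below; the assumption that the pet sits near the centre of the photo (the initial rectangle) is ours for illustration:

```python
import cv2
import numpy as np

def isolate_pet(path: str) -> np.ndarray:
    """Cut the animal out of the background before feature extraction."""
    img = cv2.imread(path)
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    rect = (w // 10, h // 10, w * 8 // 10, h * 8 // 10)  # assumed region of interest
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels; zero out the background.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    return img * fg[:, :, None]
```

The segmented crop is then what goes to Google Cloud Vision, so background clutter doesn't pollute the extracted features.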
## Challenges we ran into
Finding the right DB for our uses was challenging, as was setting up and employing the cloud platform. Getting the chatbot to be reliable was also challenging.
## Accomplishments that we're proud of
We are proud of a product that has real potential to do positive change, as well as the look and feel of the analytics platform (although we still need to add much more there). We are proud of balancing 4 services efficiently, and like our clever name/logo.
## What we learned
We learned a few new technologies and algorithms, including image segmentation, and some Google cloud platform instances. We also learned that NoSQL databases are the way to go for hackathons and speed prototyping.
## What's next for Pet Detective
We want to expand the capabilities of our analytics platform and partner with pet and animal businesses and providers in order to integrate the bot service into many different Facebook pages and websites.
|
## Inspiration
As some of our team members have little siblings, we understand the struggle of living with them! And now that we're in university, we've grown to miss all of their little quirks. So, why not bring them back?
## What it does
Our robot searches for people, and once found, will track them and move toward them. When it gets close enough, our robot will spray you with water, before giggling and running away. Ahhhh, feels JUST like home!
## How we built it
We connected an iPhone via Bluetooth to a computer, where we analyze the footage in Python. Using the OpenCV library, our program finds a person and calculates where they are relative to the frame. The laptop then tells an Arduino over Bluetooth where the person is, and the Arduino then changes the velocity of two motors to control the speed and direction of the robot, ensuring that it meets its subject at an optimal distance for spraying. The enclosure is built from a combination of cardboard, 3D-printed supports, and screws.
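A condensed sketch of that loop, using OpenCV's built-in HOG person detector; the serial port name and the one-float-per-line command protocol are assumptions for illustration:

```python
import cv2
import serial

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
arduino = serial.Serial("/dev/tty.HC-05", 9600)  # placeholder Bluetooth port

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes):
        x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # track the largest person
        # Horizontal offset of the person from frame centre, scaled to [-1, 1];
        # the Arduino maps this to differential motor speeds.
        offset = ((x + w / 2) - frame.shape[1] / 2) / (frame.shape[1] / 2)
        arduino.write(f"{offset:.2f}\n".encode())
```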
## Challenges we ran into
Integrating all of the components together proved to be a challenge. While they seemed to work on their own, communicating between each piece was tricky. For example, we were all relatively new to asynchronous programming, so designing a Python script to both analyze footage and send the results over Bluetooth to the Arduino was more difficult than anticipated.
## Accomplishments that we're proud of
It works! Based on what the camera sees, our motors change direction to put the robot on a perfect spray trajectory!
## What we learned
We improved our programming skills, learned how to communicate between devices over Bluetooth, and operate the Arduino. We were able to use a camera and understand the position of a person using computer vision.
## What's next for Your Annoying Little Sibling
We would love to further improve our robot's tracking skills and incorporate more sibling-like annoyances like slapping, biting, and telling tattle tales.
|
partial
|
## Inspiration
The inspiration behind GeneLevel sprang from the realization that the one-size-fits-all dietary guidelines fail to account for individual genetic variations affecting nutrient metabolism. This gap in personalized nutrition sparked our ambition to tailor dietary plans right down to the genetic level, ensuring everyone can eat precisely what their body needs for optimal health.
## What it does
GeneLevel revolutionizes dietary planning by integrating advanced genomic data analysis with machine learning to identify upregulated or downregulated marker genes indicative of an individual's unique nutritional requirements. Our platform categorizes users into optimal dietary plans, such as high-fat or low-fat diets, based on their blood mRNA expression data and questionnaire responses, providing personalized menu plans that promote physical and cognitive well-being.
## How we built it
We harnessed a combination of Gene Set Enrichment Analysis (GSEA) and machine learning algorithms to sift through genomic data, identifying key nutritional markers. The frontend was crafted with modern web technologies to make genomic data accessible, while the backend ML models were trained on rich datasets to ensure accurate dietary classifications.
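In spirit, the dietary classifier is a scikit-learn pipeline over marker-gene expression features; this sketch is illustrative (the file, column names, and choice of logistic regression are assumptions, not our exact production model):

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("expression_matrix.csv")   # hypothetical: one column per marker gene
X = df.drop(columns=["diet_label"])         # blood mRNA expression levels
y = df["diet_label"]                        # e.g. "high_fat" vs "low_fat"

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```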
## Challenges we ran into
1. **Data Collection**: Amassing a comprehensive and varied dataset of blood mRNA expression data was a significant hurdle, critical for the model's training and validation phases.
2. **Accuracy of the ML Model**: Achieving high accuracy and reliability in our machine learning predictions was challenging, necessitating continuous iterations and model optimizations.
## Accomplishments that we're proud of
1. **FrontEnd Programming**: We're particularly proud of developing a user-friendly interface that demystifies genomic data, making personalized nutrition accessible to everyone.
## What we learned
Throughout this project, we delved deep into the intricacies of genomic data and its impact on nutrition. We learned the importance of data quality, the challenges of interpreting complex biological information, and the potential of machine learning to bridge the gap between genetics and dietetics.
## What's next for GeneLevel - the Next-Generation of Personalized Nutrition
1. **Fine-tuning the ML model**: We aim to enhance the model's predictive accuracy by incorporating more diverse mRNA expression data, especially from studies focusing on obesity.
2. **Expanding Dietary Categories**: Beyond high-fat and low-fat diets, we plan to introduce more nuanced dietary categories, allowing for even more personalized nutrition plans tailored to individual genetic profiles and health goals.
|
## Inspiration
In our busy lives, many of us forget to eat, overeat, or eat just enough, but not necessarily with a well-balanced composition of food groups. Researching precise nutrition values can be a hassle, so we set out to make an app that helps people who are too busy with their careers balance their diets.
## What it does
A user is able to, at any time in the day, bring up a meal and take photos of the food items they eat. The app, using the Google Vision API (a call out to Google Cloud), then confirms the identity of the food item(s) and cross-references the food with the MyFitnessPal API to receive detailed nutrition information. This data is then aggregated with the meal timestamp into a MongoDB database and displayed on the Caloric calendar.
## How we built it
We built a front-end in Node.js and React, which connects to a MongoDB backend via ExpressJS and Mongoose that stores the user's data (and eating habits).
The front-end additionally contains all the external API calls to Google Vision API and MyFitnessPal. We also have Twilio integration to send messages to users about their diet data, which we plan to extend in our next steps.
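Our Vision calls happen in the front end, but the same label-detection request expressed in Python looks roughly like this (the file name and score cutoff are placeholders):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("meal.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# High-confidence labels become the candidates we cross-reference
# against the MyFitnessPal API for nutrition details.
response = client.label_detection(image=image)
foods = [label.description for label in response.label_annotations if label.score > 0.8]
print(foods)
```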
## Challenges we ran into
Mostly npm dependency conflicts!
## Accomplishments that we're proud of
We integrated many services, namely Google Vision API.
This integration brings a new perspective on diet tracking and tailoring: it doesn't have to be a laborious process, and we make it easy and simple for users. Our integration with MongoDB also makes the user experience fast and seamless: data is quickly available and calorie counting is a fast and responsive experience. Moreover, we take advantage of a tool the user already has: their own device's camera!
## What we learned
We learned about the differences between making API calls from the front- and back-ends, namely where that data can get routed, and which cases are better for the user experience. We also learned about the power of using React in the browser, a much more powerful paradigm than simple HTML generation.
## What's next for Caloric
Integrating InterSystems to get health-data of a particular user, and then tailor health analytics and suggestions so that a user can see ways that they can improve their diet, depending on their goals and needs.
|
## Inspiration
Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year, due to nutrition and obesity-related diseases, such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track every thing they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a big mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS.
## What it does
macroS integrates seamlessly with the Google Assistant on your smartphone and lets you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating every day, conveniently and without hassle, with the powerful built-in natural language processing model. They can view their account on a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve.
## How we built it
DialogFlow and the Google Action Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food-eaten entry and a simple request for a nutritional breakdown. We deployed our functions, written in node.js, to the Firebase Cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to Nutritionix, which provides NLP-powered querying over a database of over 900k grocery and restaurant foods. A Mongo database stores user accounts and passes data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/Javascript.
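Our cloud functions are node.js, but the shape of the Nutritionix natural-language call they wrap is easy to see in a Python sketch (the app ID/key are placeholders):

```python
import requests

resp = requests.post(
    "https://trackapi.nutritionix.com/v2/natural/nutrients",
    headers={"x-app-id": "APP_ID", "x-app-key": "APP_KEY"},  # placeholder credentials
    json={"query": "a cup of grapes, 12 almonds, 46 grams of white rice, and a big mac"},
)
# Nutritionix's NLP splits the sentence into foods with full nutrient data.
for food in resp.json()["foods"]:
    print(food["food_name"], food["nf_calories"], "kcal")
```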
## Challenges we ran into
Learning how to use the different APIs and the Google Action Console to create intents, contexts, and fulfillment was challenging on its own, but the challenges amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, the data we needed for the Nutritionix queries was often nested deep within various JSON objects being thrown all over the place between the voice assistant and cloud functions. The team was finally able to find what they were looking for after spending a lot of time in the Firebase logs. In addition, the entire team lacked any experience using natural language processing and voice-enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all.
## Accomplishments that we're proud of
We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, self-monitoring of your health and nutrition, much more convenient and even more accessible, we're confident that we can help large amounts of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds**, that would otherwise take upwards of 30 minutes of tedious google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering on a product in the short amount of time that we had with the levels of experience we came into this hackathon with.
## What we learned
We made and deployed the cloud functions that integrated with our Google Action Console and trained the nlp model to differentiate between a food log and nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation to the power of voice enabled technologies. Team members who were interested in honing their front end skills also got the opportunity to do that by working on the actual web application. This was also most team members first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities.
## What's next for macroS
We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable and once a database is complete, it can be made so valuable to really anybody who would like to monitor their health and nutrition more closely. Being able to, as a user, identify my own age, gender, weight, height, and possible dietary diseases could help us as macroS give users suggestions on what their goals should be, and in addition, we could build custom queries for certain profiles of individuals; for example, if a diabetic person asks macroS if they can eat a chocolate bar for lunch, macroS would tell them no because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this!
|
losing
|
## Inspiration
Many students rely on scholarships to attend college. As students in different universities, the team understands the impact of scholarships on people's college experiences. When scholarships fall through, it can be difficult for students who cannot attend college without them. In situations like these, they have to depend on existing crowdfunding websites such as GoFundMe. However, platforms like GoFundMe are not necessarily the most reliable solution as there is no way of verifying student status and the success of the campaign depends on social media reach. That is why we designed ScholarSource: an easy way for people to donate to college students in need!
## What it does
ScholarSource harnesses the power of blockchain technology to enhance transparency, security, and trust in the crowdfunding process. Here's how it works:
Transparent Funding Process: ScholarSource utilizes blockchain to create an immutable and transparent ledger of all transactions and donations. Every step of the funding process, from the initial donation to the final disbursement, is recorded on the blockchain, ensuring transparency and accountability.
Verified Student Profiles: ScholarSource employs blockchain-based identity verification mechanisms to authenticate student profiles. This process ensures that only eligible students with a genuine need for funding can participate in the platform, minimizing the risk of fraudulent campaigns.
Smart Contracts for Funding Conditions: Smart contracts, powered by blockchain technology, are used on ScholarSource to establish and enforce funding conditions. These self-executing contracts automatically trigger the release of funds when predetermined criteria are met, such as project milestones or the achievement of specific research outcomes. This feature provides donors with assurance that their contributions will be used appropriately and incentivizes students to deliver on their promised objectives.
Immutable Project Documentation: Students can securely upload project documentation, research papers, and progress reports onto the blockchain. This ensures the integrity and immutability of their work, providing a reliable record of their accomplishments and facilitating the evaluation process for potential donors.
Decentralized Funding: ScholarSource operates on a decentralized network, powered by blockchain technology. This decentralization eliminates the need for intermediaries, reduces transaction costs, and allows for global participation. Students can receive funding from donors around the world, expanding their opportunities for financial support.
Community Governance: ScholarSource incorporates community governance mechanisms, where participants have a say in platform policies and decision-making processes. Through decentralized voting systems, stakeholders can collectively shape the direction and development of the platform, fostering a sense of ownership and inclusivity.
## How we built it
We used React and Nextjs for the front end. We also integrated with ThirdWeb's SDK that provided authentication with wallets like Metamask. Furthermore, we built a smart contract in order to manage the crowdfunding for recipients and scholars.
## Challenges we ran into
We had trouble integrating with MetaMask and Thirdweb after writing the Solidity contract. Our configuration kept throwing errors, and we eventually traced the problem to how the HTTP/HTTPS link was configured.
## Accomplishments that we're proud of
Our team is proud of building a full end-to-end platform that incorporates the very essence of blockchain technology. We are very excited that we are learning a lot about blockchain technology and connecting with students at UPenn.
## What we learned
* Aleo
* Blockchain
* Solidity
* React and Nextjs
* UI/UX Design
* Thirdweb integration
## What's next for ScholarSource
We are looking to expand to other blockchains and incorporate multiple blockchains like Aleo. We are also looking to onboard users as we continue to expand and add new features.
|
## Inspiration
We were inspired by the instability and corruption of many developing governments and wanted to provide more transparency for citizens. The immutability and decentralization of IPFS seemed like the perfect tool for this problem. We further developed this idea into a framework for conducting government activities and passing laws in a secure manner through Ethereum smart contracts.
## What it does
Lex Aeterna provides a service for governments to publish laws and store them on IPFS increasing security and transparency for citizens. We offer a website for viewing these laws and interfacing with our service but anyone can view these laws by looking at them directly on IPFS. We also offer increased security through the use of filecoin nodes to further decentralize the storage of laws and ensure that all laws and documents will **always** stay up. We also offer smart contracts which can be used to vote on proposed laws through ethereum transactions. Our website offers a UI for this functionality which includes secure account login through firebase.
## How we built it
We used the ipfs-http-client in Python to upload and download files on IPFS. We set up a Firebase database to store countries and associated laws with CIDs and other parameters. We then used Flask to create a REST API to connect our database, our front end, and IPFS. We coded our front end using React. We coded our voting smart contract in Solidity and deployed it to a testnet using web3.py. We then expanded our API so that governments could deploy and use voting smart contracts entirely through our API. We use Firebase tokens to authenticate use of the API functionality.
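A minimal sketch of the law-publishing step with the Python ipfs-http-client; the file name and the shape of the Firebase record are placeholders:

```python
import ipfshttpclient

client = ipfshttpclient.connect()        # defaults to the local IPFS daemon
result = client.add("law_2023_017.pdf")  # hypothetical document
cid = result["Hash"]
print(f"Published; anyone can verify the law at ipfs://{cid}")

# The Flask REST API would then store something like
# {"country": "...", "law_id": "...", "cid": cid}
# in Firebase so the React front end can list and resolve it.
```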
## Challenges we ran into
With such an ambitious project, we had to cover a lot of ground. Connecting the front end to our API was especially difficult because we didn't have much experience with react. It was difficult to learn on the fly and develop our front end as we went.
## Accomplishments that we're proud of
Although we were very ambitious, we were able to pretty much implement all major functionality that we wanted to. We implemented an entire web application through the entire stack which uses IPFS and blockchain technology. Most of all we pushed through and continued to work even when we felt stuck.
## What we learned
None of us had used Flask or React before; however, we all became proficient enough to implement an API using Flask and a front end using React. We also learned more about what it takes to plan and execute an original idea extremely quickly.
## What's next for Lex Aeterna
First we would move to AWS to increase scalability and security. We would spend some time testing the security of our API and log in features. We would also want to expand our smart contracts to further provide more options for governments to utilize the ethereum infrastructure. For example, different types of votes such as super majority or government terms that expire after a period of time and even direct citizen votes for government officials or policies.
|
# Breader Together
As new adults in university, our team found it difficult to live away from home and balance the difficult art of culinary creation with the primal necessity of consuming calories. **Breader Together** fuses appetizing social media, a personalized recipe provider, and gamified cooking challenges. Engaged individuals jumpstart a global social media platform of endless creation, community-driven sharing, and gastronomical inspiration for all, open to all.
Powered by Cohere's in-house Command LLMs, an endless buffet of delicious dishes is **only a click away**. And maybe a few stirs, of course 😎. Using Cohere's LLMs, we can *personalize* any recipe based on what ingredients you have, your cooking experience, your time constraints, or whether you simply want to try something new!
Now that you’ve whipped up a mouth-watering meal, **it’s time to share it with your friends and the world**! Post the meals you’ve made on the feed, and also get inspired by a global community of home cooks. Don’t forget to *“loave”* 🧡 your favourite posts and 🌮 taco bout it in the comments!
**Ready, Set, Dough!** Looking to level up your cooking skills? Batter up at our fresh and exciting challenges, to cook certain dishes, achieve healthy eating goals or simply to engage with other home chefs for inspiration. Gamify your culinary journey and become a pro in no time!
## How We Built ‘Breader Together’
Breader Together uses a range of different technologies and frameworks. The core of our project is powered by Cohere's text generation API, which is used to provide highly personalized and accurate recipe recommendations to our users. Our full-stack application also has a decoupled front/back-end and a SQL database used to store our data. Our website's frontend (UI) is designed using React with Tailwind CSS and Axios (HTTP client) to create a feature-rich and ergonomic user experience. On the other hand, we used FastAPI to rapidly prototype a functional API microservice that could be deployed using container technologies such as Docker.
## Challenges We Faced
We initially had quite “over-engineered” solutions to the challenge we were trying to solve. Our initial idea involved many moving components and had many constraints which we were not satisfied with. However, we realized that Cohere's LLMs had many desirable features that make them not only an ideal LLM, but in fact the perfect solution for our use case. After experimentation, we settled on using Cohere to generate recipes while accounting for each user's ingredients and constraints. Lastly, as our team used Axios and FastAPI, we found some unexpected (ad-hoc) incompatibilities while integrating our full-stack application and parsing user input during recipe generation.
## What’s Next for Breader Together?
What’s next? That’s a big question. Saying goodbye could be sad, so maybe we can chat about it over a nice warm bowl of chicken noodle soup, while reading some books. **Just kidding.** Like any cook, we love to create, bring joy and foster a community with our creation. As such, ***we would be extremely eager to cook up new features and keep our app running with the help of sponsors*** (like Cohere). We believe that our vision could develop into something magical for cooks and cooks-to-be around the world.
Peace out and enjoy your next croissant. Or pad thai. Or reverse-seared steak. It's almost as if the possibilities are endless. **Wait.** *Does that remind you of a certain app?*
> Made with 🧡 by Breader Together Team (Nathan, Richard, Carolyn and Andy)
|
winning
|
## Inspiration
It’s pretty common that you will come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that you’ll throw away food that used to be perfectly good. But, that’s only how much food you and I are wasting. What about everything that Walmart or Costco trashes on a day to day basis?
Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year.
About 30 percent of food in American grocery stores is thrown away. US retail stores generate about 16 billion pounds of food waste every year.
But, if there was a solution that could ensure that no food would be needlessly wasted, that would change the world.
## What it does
PantryPuzzle scans in images of food items, extracts their expiration dates, and adds them to an inventory of items that users can manage. When food nears expiration, it notifies users to incentivize action. The app suggests actions to take with any particular food item, like recipes that use the items in a user's pantry according to their preference. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers.
## How we built it
We built it with a React frontend and a Python flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of Google Vision API’s object detection and optical character recognition (OCR) respectively. For the recipe recommendation feature, we used OpenAI’s GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used Nominatim open street map.
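The expiration-date extraction can be sketched as Vision OCR followed by a date regex; the patterns below are illustrative, and unmatched labels fall through to the GPT-3 prediction path described later:

```python
import re
from google.cloud import vision

DATE_RE = re.compile(
    r"\b(\d{1,2}[/-]\d{1,2}[/-]\d{2,4}"
    r"|(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)\s*\d{1,2},?\s*\d{2,4})\b",
    re.IGNORECASE,
)

def find_expiry(image_path: str):
    """OCR the label, then look for a printed date; None means 'predict it'."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        response = client.text_detection(image=vision.Image(content=f.read()))
    text = response.full_text_annotation.text
    match = DATE_RE.search(text)
    return match.group(0) if match else None
```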
## Challenges we ran into
* Getting React to display properly
* Storing multiple values into the database at once (food item, expiration date)
* Displaying all Firebase elements (doing a proof of concept with console.log)
* Donated food being displayed before even clicking the button (fixed by using a function for onClick)
* Getting the user's location to be accessed and stored as an address, not just longitude/latitude
* Needing to log the day a food item was acquired
* Deleting an item when expired
* Syncing my stash with donations (we don't want to list an item if the user no longer wants to donate it)
* Deleting food from Firebase (tricky because of the odd document IDs)
* Predicting when non-labeled foods expire (using OpenAI)
## Accomplishments that we're proud of
* We were able to get a good computer vision algorithm that is able to detect the type of food and a very accurate expiry date.
* Integrating the API that helps us figure out our location from the latitudes and longitudes.
* Used a scalable database like firebase, and completed all features that we originally wanted to achieve regarding generative AI, computer vision and efficient CRUD operations.
## What we learned
We learned how big a problem food waste disposal is, and were surprised to know that so much food was being thrown away.
## What's next for PantryPuzzle
We want to add user authentication, so every user in every home and grocery has access to their personal pantry, and also maintains their access to the global donations list to search for food items others don't want.
We want to integrate this app with the Internet of Things (IoT) so refrigerators can come with this product built in, detecting food items and their expiry dates.
We also want to add a feature where if the expiry date is not visible, the app can predict what the likely expiration date could be using computer vision (texture and color of food) and generative AI.
|
## 💡 Inspiration
Manga are Japanese comics, considered to form a genre distinct from other graphic novels. Similar to other comics, manga lacks a musical component. However, their digital counterparts (such as Webtoons) have innovated on the traditional format with the addition of soundtracks that play concurrently with the reader's progression through the comic. This can create an immersive experience for the reader, building on the emotion on screen. While Webtoon's take on incorporating music is not yet mainstream, we believe there is potential in building on the concept and making it mainstream in online manga. Imagine how cool it would be to generate a soundtrack to the story unfolding. Who doesn't enjoy personalized music while reading?
## 💻 What it does
1. Users choose a manga chapter to read (in our prototype, we're using just one page).
2. Sentiment analysis is performed on the dialogue of the manga.
3. The resulting sentiment is used to determine what kind of music is fed into the song-generating model.
4. A new song will be created and played while the user reads the manga.
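As a purely illustrative sketch of step 3, the sentiment score might map to coarse musical parameters like this; the thresholds and parameter names are our own, not a fixed part of the pipeline:

```python
def music_params(sentiment: float) -> dict:
    """Map a sentiment score in [-1, 1] to settings for the song generator."""
    if sentiment > 0.3:
        return {"mood": "upbeat", "tempo_bpm": 140, "scale": "major"}
    if sentiment < -0.3:
        return {"mood": "somber", "tempo_bpm": 70, "scale": "minor"}
    return {"mood": "neutral", "tempo_bpm": 100, "scale": "major"}
```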
## 🔨 How we built it
* Started with brainstorming
* Devised a plan for implementation
* Divided tasks
* Implemented the development of the project using the following tools
*Tech Stack* : Tensorflow, Google Cloud (Cloud Storage, Vertex AI), Node.js
Registered Domain name : **mangajam.tech**
## ❓Challenges we ran into
* None of us knew machine learning at the level that this project demanded of us.
* Timezone differences and the complexity of the project
## 🥇 Accomplishments that we're proud of
The teamwork of course!! We are a team of four coming from three different timezones, this was the first hackathon for one of us and the enthusiasm and coordination and support were definitely unique and spirited. This was a very ambitious project but we did our best to create a prototype proof of concept. We really enjoyed learning new technologies.
## 📖 What we learned
* Using TensorFlow for sound generation
* Planning and organization
* Time management
* Performing Sentiment analysis using Node.js
## 🚀 What's next for Magenta
Oh tons!! We have many things planned for Magenta in the future.
* Ideally, we would also do image recognition on the manga scenes to help determine sentiment, but it's hard to actualize because of varying art styles and genres.
* To add more sentiments
* To deploy the website so everyone can try it out
* To develop a collection of Manga along with the generated soundtrack
|
## Inspiration
As university students, we often find that we have groceries in the fridge, but we end up eating out and the groceries go bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse through the items in the receipts and add the items into the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make.
## How We Built It
On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data.
On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase.
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask server to Google App Engine and styling in React. We found that it was not possible to write into Google App Engine storage; instead, we had to write into Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
|
winning
|
## Inspiration
Earlier this week, following the devastation of Hurricane Florence, my newsfeed surged with friends offering their excess food and water to displaced community members. Through technology, the world had grown smaller. Resources had been shared.
Our team had a question: what if we could redistribute something else just as valuable? Something just as critical in both our every day lives and in moments of crisis: server space. The fact of the matter is that everything else we depend on, from emergency services apps to messenger systems, relies on server performance as a given. But the reality is that during storms, data centers go down all the time. This problem is exacerbated in remote areas of the world, where redirecting requests to regional data centers isn't an option. When a child is stranded in a natural disaster, mere minutes of navigation mean the difference between a miracle and a tragedy. Those are the moments when we have to be able to trust our technology. We weren't willing to leave that to chance, so Nimbus was born.
## What it does
Nimbus iOS harnesses the processing power of idle mobile phones in order to serve compute tasks. So imagine charging your phone, enabling Nimbus, and allowing your locked phone to act as the server for a schoolchild in Indonesia during typhoon season. Where other distributed computation engines have failed, Nimbus excels. Rather than treating each node as equally suitable for a compute task, our scheduler algorithm takes into account all sorts of factors before assigning a task to the best node, like CPU and the time the user intends to spend idle (how long the user will be asleep, how long the user will be at an offline Facebook event). Users could get paid marginal compensation for each compute task, or Nimbus could come bundled into a larger app, like Facebook.
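A hypothetical sketch of the scheduler's ranking idea: score each idle phone by compute power and predicted idle window, and refuse nodes likely to be disrupted mid-task. The fields and weights are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    cpu_score: float     # benchmarked relative CPU throughput
    idle_seconds: float  # predicted idle time (sleep schedule, offline events)
    battery_pct: float

def score(node: Node, task_seconds: float) -> float:
    # Require a 2x idle margin and reasonable battery, else skip the node.
    if node.idle_seconds < 2 * task_seconds or node.battery_pct < 30:
        return 0.0
    return 0.7 * node.cpu_score + 0.3 * min(node.idle_seconds / 28800, 1.0)

def assign(task_seconds: float, nodes: list[Node]) -> Node:
    return max(nodes, key=lambda n: score(n, task_seconds))
```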
Nimbus Desktop, which we've proof-of-concepted in the Desktop branch of our Github repo, uses a central server to assign tasks to each computer-node via Vagrant Docker provisioning. We haven't completed this platform option, but it serves another important product case: enterprise clients. We did the math for you: a medium-sized company running 22,000 EC2 instances on Nimbus Desktop on its idle computers for 14 hours a day could save $6 million / year in AWS fees. In this case, the number of possible attack vectors is minimized because all the requests would originate from within the organization. This is the future of computing because it's far more efficient and environmentally friendly than solely running centralized servers. Data centers are having an increasingly detrimental effect on global warming; Iceland is already feeling its effects. Nimbus Desktop offers a scalable and efficient future. We don't have a resource issue. We have a distribution one.
## How we built it
The client-facing web app is built with react and node.js. The backend is built with node.js. The iOS app is built with react-native, express, and node.js. The Desktop script is built on Docker and Vagrant.
## Challenges we ran into
npm was consistently finicky when we integrated node.js with react-native and built all of that in Xcode with Metro Bundler. We also had to switch the scheduler-node interaction to a pull model rather than a push model to guarantee certain security and downtime-minimization parameters. We didn't have time to complete Nimbus Desktop, to save stepwise compute progress in a hashed database for large multi-hour computes (which would enable us to reassign the compute to the next best node in case of disruption and optimize for memory usage), or to get to the web compute version (diagrammed in the photo carousel), which would enable the nodes to act as true load balancers for more complex hosting.
## Accomplishments that we're proud of
Ideating Nimbus Desktop happened in the middle of the night. That was pretty cool.
## What we learned
Asking too many questions leads to way better product decisions.
## What's next for nimbus
In addition to the incomplete items in the challenges section, we ultimately would want the scheduler to be able to predict disruption using ML time series data.
|
## Inspiration
Have you ever gone on a nice dinner out with friends, only to find that the group is too big for your server to split the bills according to each person's order? Someone inevitably decides to pay for the whole group and asks everyone to pay them back afterwards, but this doesn't always happen right away. When people forget to pay their friends back, it becomes somewhat awkward to bring up...
Enter Encountability, our cash transfer app!
## What it does
Encountability was created as an alternative to current cash transfer mechanisms, such as Interac e-transfer, that are somewhat clunky at best and inconvenient at worst: it sucks when the e-transfers don't arrive immediately and you and the person you're buying stuff from on Facebook Marketplace have to stand there awkwardly shuffling your feet and praying that the autodeposit email arrives soon. You can add friends to the app and send them cash (or request cash of your own) just by navigating to their profile on the app and sending a message in seconds! The app also reminds you of money you owe to any friends you might have on the app, ensuring that you don't forget to pay them back (especially if they shouldered everyone's bill last time you went out) and sparing them the awkwardness of having to remind you that you owe them some cash.
## How we built it
We built the backend in Python and Flask, and used CockroachDB for the database. The RBC Money Transfer API was also used for the project. For the frontend, a combination of HTML, CSS, and Javascript was used.
## Challenges we ran into
The name Encountability is a portmanteau of "encounter" and "accountability"; this was because we originally envisioned an RPG-style app where dinner bills that needed to be split could be treated like boss monsters and "defeated" by gathering a party of your friends and splitting the bill amongst yourselves easily. Time constraints were in full force this weekend, and we had to cut down on some of our more ambitious planned features after it became evident that there would not be enough time to accomplish everything we wanted. There were some difficulties with learning the techniques and tools necessary to integrate frontend and backend as well, but we pushed through and created something functional in the end!
## Accomplishments that we're proud of
Despite the hurdles and the compromises (and the time constraints... and the steep learning curve...) we were able to create something functional, with a prototype that shows how we envision the app to work and look!
## What we learned
* databases can be fiddly, but when they work, it's a beautiful thing!
* Sanity Walks™ are an essential part of the hackathon experience
* so are 30-min naps
## What's next for encountability
We'd like to connect it to bank accounts directly next time, just like we originally intended! It would also be nice to fully implement the automatic transaction-splitting feature of the app next time, as well as the more social aspects of the app.
|
## Inspiration
As software engineers, we constantly seek ways to optimize efficiency and productivity. While we thrive on tackling challenging problems, sometimes we need assistance or a nudge to remember that support is available. Our app assists engineers by monitoring their states and employs Machine Learning to predict their efficiency in resolving issues.
## What it does
Our app leverages LLMs to predict the complexity of GitHub issues based on their title, description, and the stress level of the assigned software engineer. To gauge the stress level, we utilize a machine learning model that examines the developer's sleep patterns, sourced from the Terra API. The app provides task completion time estimates and periodically checks in with the developer, suggesting when to seek help. All this is integrated into a visually appealing and responsive front-end that fits effortlessly into a developer's routine.
## How we built it
A range of technologies power our app. The front-end is crafted with Electron and ReactJS, offering compatibility across numerous operating systems. On the backend, we harness the potential of webhooks, Terra API, ChatGPT API, Scikit-learn, Flask, NodeJS, and ExpressJS. The core programming languages deployed include JavaScript, Python, HTML, and CSS.
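The webhook side can be sketched as a Flask endpoint that receives a sleep payload and feeds it to the scikit-learn stress model; the payload fields shown are our assumptions about the data shape, not Terra's exact schema:

```python
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)
stress_model = joblib.load("stress_model.pkl")  # pre-trained scikit-learn model

@app.route("/webhooks/terra", methods=["POST"])
def terra_webhook():
    payload = request.get_json()
    features = [[
        payload.get("sleep_duration_h", 7.0),
        payload.get("sleep_efficiency", 0.9),
        payload.get("awakenings", 1),
    ]]
    stress = float(stress_model.predict(features)[0])
    # The stress level is later combined with the GitHub issue's title and
    # description to produce the completion-time estimate.
    return jsonify({"stress_level": stress})
```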
## Challenges we ran into
Constructing the app was a blend of excitement and hurdles due to the multifaceted issues at hand. Setting up multiple webhooks was essential for real-time model updates, as they depend on current data such as fresh Github issues and health metrics from wearables. Additionally, we ventured into sourcing datasets and crafting machine learning models for predicting an engineer's stress levels and employed natural language processing for issue resolution time estimates.
## Accomplishments that we're proud of
In our journey, we scripted close to 15,000 lines of code and overcame numerous challenges. Our preliminary vision had the front end majorly scripted in JavaScript, HTML, and CSS — a considerable endeavor in contemporary development. The pinnacle of our pride is the realization of our app, all achieved within a 3-day hackathon.
## What we learned
Our team was unfamiliar to one another before the hackathon. Yet, our decision to trust each other paid off as everyone contributed valiantly. We honed our skills in task delegation among the four engineers and encountered and overcame issues previously uncharted for us, like running multiple webhooks and integrating a desktop application with an array of server-side technologies.
## What's next for TBox 16 Pro Max (titanium purple)
The future brims with potential for this project. Our aspirations include introducing real-time stress management using intricate time-series models. User customization options are also on the horizon to enrich our time predictions. And certainly, front-end personalizations, like dark mode and themes, are part of our roadmap.
|
partial
|
## Inspiration
In a sense, social media has democratized news media itself -- through it, we have all become "news editors" to some degree, shaping what our friends read through our shares, likes, and comments. Is it any wonder, then, that "fake news" has become such a widespread problem? In such partisan times, it is easy to find ourselves siloed off within ideological echo chambers. After all, we are held in thrall not only by our cognitive biases to seek out confirmatory information, but also by the social media algorithms trained to feed such biases for the sake of greater ad revenue. Most worryingly, these ideological silos can serve as breeding grounds for fake news, as stories designed to mislead their audience are circulated within the target political community, building outrage and exacerbating ignorance with each new share.
We believe that the problem of fake news is intimately related to the problem of the ideological echo chambers we find ourselves inhabiting. As such, we designed "Open Mind" to attack these two problems at their root.
## What it does
"Open Mind" is a Google Chrome extension designed to (1) combat the proliferation of fake news, and (2) increase exposure to opposing viewpoints. It does so using a multifaceted approach -- first, it automatically "blocks" known fake news websites from being displayed on the user's browser, providing the user with a large warning screen and links to more reputable sources (the user can always click through to view the allegedly fake content, however; we're not censors!). Second, the user is given direct feedback on how partisan their reading patterns are, in the form of a dashboard which tracks their political browsing history. This dashboard then provides a list of recommended articles that users can read in order to "balance out" their reading history.
## How we built it
We used React for the front end, and a combination of Node.js and Python for the back-end. Our machine learning models for recommending articles were built using Python's Tensorflow library, and NLP was performed using the Alyien, Semantria, and Google Cloud Natural Language APIs.
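For a taste of the NLP side, scoring an article's text with the Cloud Natural Language API looks roughly like this; folding the score into a partisanship measure is our illustration, not the full model:

```python
from google.cloud import language_v1

def article_sentiment(text: str) -> float:
    """Return the document-level sentiment score in [-1, 1]."""
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    return client.analyze_sentiment(request={"document": doc}).document_sentiment.score
```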
## What we learned
We learned a great deal more about fake news, and NLP in particular.
## What's next for Open Mind
We aim to implement a "political thermometer" that appears next to political articles, showing the degree to which the particular article is conservative or liberal. In addition, we aim to add a Facebook-specific "share verification" feature, where users are asked if they are sure they want to share an article that they have not already read (based on their browser history).
|
## Inspiration
The three of us believe that our worldview comes from what we read. Online news articles serve as that engine, and for something so crucial as learning about current events, an all-encompassing worldview is not so accessible. Those new to politics and just entering the discourse may perceive an extreme partisan take on breaking news to be the party's general position; on the flip side, those with entrenched, radicalized views miss out on having productive conversations. Information is meant to be shared, and perspectives from journals big and small should be heard.
## What it does
WorldView is a Google Chrome extension that activates whenever someone is on a news article. The extension describes the overall sentiment of the article, describes "clusters" of other articles discussing the topic of interest, and provides a summary of each article. A similarity/dissimilarity score is displayed between pairs of articles so readers can read content with a different focus.
## How we built it
Development was broken into three components: scraping, NLP processing + API, and Chrome extension development. Scraping involved using Selenium, BS4, DiffBot (an API that scrapes text from websites and sanitizes it), and Google Cloud Platform's Custom Search API to extract similar documents from the web. NLP processing involved using NLTK and the KPrototypes clustering algorithm. The Chrome extension was built with React, which talked to a Flask API. The Flask server is hosted on an AWS EC2 instance.
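A small sketch of the clustering step with the kmodes package's KPrototypes, mixing a numeric sentiment score with a categorical top keyword; the feature layout is an assumption:

```python
import numpy as np
from kmodes.kprototypes import KPrototypes

# One row per article: [sentiment_score, top_keyword]
X = np.array([
    [0.8, "economy"],
    [-0.4, "economy"],
    [0.6, "election"],
    [-0.7, "election"],
], dtype=object)

kp = KPrototypes(n_clusters=2, init="Cao", random_state=0)
clusters = kp.fit_predict(X, categorical=[1])  # column 1 is categorical
print(clusters)  # cluster id per article, used to draw the extension's graph
```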
## Challenges we ran into
Scraping: Getting enough documents that match the original article was a challenge because of the rate limiting of the GCP API. NLP Processing: one challenge here was determining metrics for clustering a batch of documents. Sentiment scores + top keywords were used, but more robust metrics could have been developed for more accurate clusters. Chrome extension: Figuring out the layout of the graph representing clusters was difficult, as the library used required an unusual way of stating coordinates and edge links. Flask API: One challenge in the API construction was figuring out relative imports.
## Accomplishments that we're proud of
Scraping: recursively discovering similar documents by repeatedly searching the headline of the original article. NLP processing: being able to quickly get a similarity matrix for a set of documents.
## What we learned
Learned a lot about data wrangling and shaping for front-end and backend scraping.
## What's next for WorldView
Explore the possibility of letting those unable to bypass the paywalls of various publishers still get insights on the perspectives covered.
|
## Inspiration
Misinformation has become more and more widespread, posing numerous societal and ethical concerns. We have seen these effects first hand, and hoped to help address this issue.
## What it does
It's a Chrome extension: you open a sidebar, and it determines whether the site contains misinformation.
## How we built it
To build the Chrome extension, we used a framework called Plasmo. We built the front end in TypeScript using React, and we built the backend in Python. We trained a logistic regression model on a dataset with over 40,000 lines of news text classified as true or fake, and then grabbed the URL from the current website and passed it through the Flask backend. Then, we scraped the text from the website and passed it through our machine learning model for analysis, finishing by displaying the results of the model on the sidebar for the user to see. Finally, we sent a request to the OpenAI API to display additional resources for a user to look into if they want more context or information about the topic they are currently reading about.
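The classifier itself can be sketched as a TF-IDF + logistic regression pipeline in scikit-learn; the column names and vectoriser settings here are illustrative:

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("news.csv")  # ~40,000 rows: "text", "label" (0 = fake, 1 = true)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0)

clf = make_pipeline(
    TfidfVectorizer(stop_words="english", max_features=50_000),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```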
## Challenges we ran into
The HTML form's default onSubmit behavior refreshed the page, so we spent a long time figuring out why the information was not reaching the model. The start was also a bit challenging because none of us had experience with React or Flask.
## Accomplishments that we're proud of
Our model is accurate and the user interface looks very good.
## What we learned
We learned how to make a Chrome extension and how to connect the backend to the frontend with Flask.
## What's next for Clarif.ai
Give a more accurate description of what exactly is false on the page, and allow users to give feedback to improve our model.
|
partial
|
## Inspiration
We wanted to create a new way to interact with the thousands of amazing shops that use Shopify.

## What it does
Our technology can be deployed inside existing physical stores to give customers more information about the products they are looking at. Even more interesting, the concept extends to ad spaces, letting customers literally window shop! Just walk in front of an enhanced Shopify ad and voila: the product appears on the seller's store, ready to be ordered right there, wherever you are.
## How we built it
WalkThru is an Android app built with the AltBeacon library. Our localisation algorithm lets the application pull up the Shopify page of a specific product when the consumer is standing in front of it.
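For illustration, beacon proximity is usually estimated from RSSI with a log-distance path-loss model; this Python sketch shows the idea (our actual implementation is Java on Android, and the calibration constants here are assumptions):

```python
def estimate_distance(rssi, tx_power=-59.0, n=2.0):
    """rssi: measured signal (dBm); tx_power: calibrated RSSI at 1 m;
    n: path-loss exponent (~2 in free space, higher indoors)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# Show the product page when the user is within ~1 m of its beacon.
if estimate_distance(rssi=-63.0) < 1.0:
    print("open Shopify product page")
```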



## Challenges we ran into
Using the Estimote beacons in a crowded environment has its caveats because of interference problems.
## Accomplishments that we're proud of
The localisation of the user is really quick so we can show a product page as soon as you get in front of it.

## What we learned
We learned how to use beacons in Android for localisation.
## What's next for WalkThru
WalkThru can be installed in current brick-and-mortar shops as well as ad panels all over town. Our next step would be a full app for Shopify customers that shows them which shops/ads are near them. We would also like to extend our localisation algorithm to 3D space so we can track exactly where a person is in a store. Analytics could also be integrated into a Shopify app, directly in the store admin page, where a shop owner could see how much time people spend in which parts of their store. Our technology could help store owners increase their sales and optimise their shops.
|
## Inspiration
After learning about the current shortcomings of disaster response platforms, we wanted to build a modernized emergency services system to assist relief organizations and local governments in responding faster and appropriately.
## What it does
safeFront is a cross between next-generation 911 and disaster response management. Our primary users are local governments and relief organizations. The safeFront platform provides organizations and governments with the crucial information that is required for response, relief, and recovery by organizing and leveraging incoming disaster related data.
## How we built it
safeFront was built using React for the web dashboard and a Flask service housing the image classification and natural language processing models to process the incoming mobile data.
## Challenges we ran into
Ranking the urgency and severity of natural disasters by reconciling image recognition, language processing, and sentiment analysis on mobile data, and reporting it all through a web dashboard, was a challenge. Most of the team didn't have a firm grasp of React components, so building the site was how we learned React.
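One hedged sketch of how such signals might be reconciled into a single ranking score (the weights and signal names are illustrative, not our production logic):

```python
def urgency_score(image_severity, keyword_match, sentiment):
    """All inputs normalized to [0, 1]; sentiment is 'distress' polarity.
    Returns a 0-100 urgency score for ranking reports on the dashboard."""
    weights = {"image": 0.5, "keywords": 0.3, "sentiment": 0.2}
    score = (weights["image"] * image_severity
             + weights["keywords"] * keyword_match
             + weights["sentiment"] * sentiment)
    return round(100 * score, 1)

print(urgency_score(image_severity=0.9, keyword_match=0.7, sentiment=0.8))  # 82.0
```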
## Accomplishments that we're proud of
Built a full stack web application and a functioning prototype from scratch.
## What we learned
Stepping outside of our comfort zone is, by nature, uncomfortable. However, we learned that we grow the most when we cross that line.
## What's next for SafeFront
We'd like to expand our platform for medical data, local transportation delays, local river level changes, and many more ideas. We were able to build a fraction of our ideas this weekend, but we hope to build additional features in the future.
|
## Inspiration
Making travel plans sucks.
## What it does
It plans trips for you: first it calculates the cheapest overall route that visits every location, then it plans events for each day.
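Conceptually, the "cheapest overall trip" step is a travelling-salesman search; here is a brute-force sketch with illustrative costs (not our actual implementation, which lives behind the Vue front end):

```python
from itertools import permutations

# Symmetric travel costs between locations (illustrative numbers).
costs = {("A", "B"): 120, ("A", "C"): 90, ("B", "C"): 60}

def leg_cost(a, b):
    return costs.get((a, b)) or costs[(b, a)]

def cheapest_route(start, stops):
    # Try every visiting order and keep the cheapest total cost.
    return min(
        (sum(leg_cost(a, b) for a, b in zip((start, *order), order)), order)
        for order in permutations(stops)
    )

print(cheapest_route("A", ["B", "C"]))  # (150, ('C', 'B'))
```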
## How we built it
Built using Vue.js
## Challenges we ran into
Integrating APIs with Vue
## Accomplishments that we're proud of
Front-end
## What we learned
Integrating APIs with Vue
## What's next for Tripify
Startup in San Francisco
|
winning
|
## Inspiration 🍪
We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks...
Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock.
## What it does 📸
Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see.
## How we built it 🛠️
* **Backend:** Node.js
* **Facial Recognition:** OpenCV, TensorFlow, DLib
* **Pipeline:** Twilio, X, Cohere
## Challenges we ran into 🚩
In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time.
Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision.
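A condensed sketch of the recognition step, assuming the dlib-based `face_recognition` package for embeddings and scikit-learn for the k-NN classifier; the file names and k value shown are illustrative:

```python
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

# Training: several labeled photos per roommate.
encodings, labels = [], []
for path, name in [("alice1.jpg", "alice"), ("alice2.jpg", "alice"),
                   ("bob1.jpg", "bob"), ("bob2.jpg", "bob")]:
    image = face_recognition.load_image_file(path)
    encodings.append(face_recognition.face_encodings(image)[0])
    labels.append(name)

clf = KNeighborsClassifier(n_neighbors=4)  # k is an assumed tuning choice
clf.fit(encodings, labels)

# Inference on a cupboard snapshot from the Nest camera.
snap = face_recognition.load_image_file("cupboard_snap.jpg")
for enc in face_recognition.face_encodings(snap):
    print("opened by:", clf.predict([enc])[0])
```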
Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders.
## Accomplishments that we're proud of 💪
* Successfully bypassing Nest’s security measures to access the camera feed.
* Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm.
* Fine-tuning Cohere to generate funny and engaging social media captions.
* Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner.
## What we learned 🧠
Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application.
## What's next for Craven 🔮
* **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates.
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy.
* **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves.
* **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
|
## Inspiration
Two days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find them. What if a personal assistant could remember everything for you? Remembering should be made easier with the technologies we have today.
## What it does
A camera records you as you go about your day, storing "comic book strip" panels containing images and context of what you're doing. When you want to remember something, you can ask out loud, and it uses OpenAI's API to search through its "memories" and bring up the location, time, and what you were doing when you lost an item. This helps with knowing where you placed your keys, whether you locked your door or garage, and other day-to-day matters.
## How we built it
The React-based UI records with your webcam, taking a screenshot every second and stopping at the 9-second mark to create a 3x3 comic image. We did this because single static images would not give enough context for certain scenarios, and the grid also reduces the rate of API requests per image. The comic is sent to OpenAI's turbo vision model, which returns contextualized information about the image. That info is then posted to our Express.js service hosted on Vercel, which parses the data and stores it in Cloud Firestore (a Firebase database). To access the data again, we use the browser's built-in speech recognition together with the SpeechSynthesis API to converse with the user: the user speaks, the dialogue is converted to text and processed by OpenAI, which classifies it as either a search for an action or an object find. It then searches the database and speaks the answer aloud in a naturalized response.
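A minimal sketch of the 3x3 compositing step using Pillow (frame size and file names are assumptions; the OpenAI vision call is elided):

```python
from PIL import Image

FRAME_W, FRAME_H = 320, 240
frames = [Image.open(f"shot_{i}.jpg").resize((FRAME_W, FRAME_H)) for i in range(9)]

comic = Image.new("RGB", (FRAME_W * 3, FRAME_H * 3))
for i, frame in enumerate(frames):
    comic.paste(frame, ((i % 3) * FRAME_W, (i // 3) * FRAME_H))

comic.save("comic.jpg")  # sent to OpenAI's vision model for contextual captions
```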
## Challenges we ran into
We originally planned on using a VR headset, webcam, Nest camera, or anything external with a camera that we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we used macOS's Continuity Camera feature, with an iPhone camera connected to a MacBook as our primary input.
## Accomplishments that we're proud of
As a two person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.JS and Firebase, so we're proud of how fast we were able to learn!
## What we learned
We learned about OpenAI's turbo vision API capabilities, how to work together as a team, how to sleep effectively on a couch and with very little sleep.
## What's next for ReCall: Memories done for you!
We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue for the people that they've met in the past to help them as they recover. We didn't have too much context on these health problems however, and limited scope, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
|
## Inspiration
As most of our team became students here at the University of Waterloo, many of us had our first experience living in a shared space with roommates. Without parents constantly nagging us to clean up after ourselves, and with some slightly disorganized roommates, shared spaces in our residences and apartments, like kitchen counters, became cluttered and unusable.
## What it does
CleanCue is a hardware product that tracks clutter in shared spaces using computer vision. By tracking unused items taking up valuable counter space and making speech and notification reminders, CleanCue encourages roommates to clean up after themselves. This product promotes individual accountability and respect, repairing relationships between roommates, and filling the need some of us have for nagging and reminders by parents.
## How we built it
The current iteration of CleanCue is powered by a Raspberry Pi with a Camera Module streaming video to an Nvidia CUDA-enabled laptop/desktop. The laptop runs our OpenCV object detection algorithms, which let us log how long items are left unattended and send reminders to a speaker or notification service. We used Cohere to generate unique messages with personality, more like a maternal figure, and a TTS API to emulate a mother's voice.
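A hedged sketch of the "how long has this been sitting out" logic layered on top of the detections; the labels and the 10-minute threshold are illustrative:

```python
import time

REMIND_AFTER = 10 * 60  # seconds an item may sit out before a reminder
first_seen = {}

def process_detections(labels):
    """Called once per analyzed frame with detected object labels.
    Returns labels that have overstayed and deserve a nagging reminder."""
    now = time.time()
    for label in labels:
        first_seen.setdefault(label, now)
    # Forget items that left the counter.
    for gone in set(first_seen) - set(labels):
        del first_seen[gone]
    return [l for l, t in first_seen.items() if now - t > REMIND_AFTER]

overdue = process_detections(["cup", "plate"])
# Overdue items get fed to Cohere to generate the mom-style reminder text.
```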
## Challenges we ran into
Our original idea was to create a more granular product which would customize decluttering reminders based on the items detected. For example, this version of the product could detect perishable food items and make reminders to return items to the fridge to prevent food spoilage. However, the pre-trained OpenCV models that we used did not have enough variety in trained items and precision to support this goal, so we settled for this simpler version for this limited hackathon period.
## Accomplishments that we're proud of
We are proud of our planning throughout the event, which allowed us to both complete our project while also enjoying the event. Additionally, we are proud of how we broke down our tasks at the beginning, and identified what our MVP was, so that when there were problems, we knew what our core priorities were. Lastly, we are glad we submitted a working project to Hack the North!!!!
## What we learned
The core frameworks our project is built on were all new to the team. We had never used OpenCV or Taipy before, but had a lot of fun learning these tools. We also learned how to improvise networking infrastructure to enable hardware prototyping in a public hackathon environment. On the non-technical side, we learned the importance of regularly re-assessing whether our solution actually solved the problem we intended to solve, and making adjustments based on our priorities. Also, this was our first hardware hack!
## What's next for CleanCue
We definitely want to improve our prototype to be able to more accurately describe a wide array of kitchen objects, enabling us to tackle more important issues like food waste prevention. Further, we also realized that the technology in this project can also aid individuals with dementia. We would also love to explore more in the mobile app development space. We would also love to use this to notify any dangers within the kitchen, for example, a young child getting too close to the stove, or an open fire left on for a long time. Additionally, we had constraints based on hardware availability, and ideally, we would love to use an Nvidia Jetson based platform for hardware compactness and flexibility.
|
winning
|
**Inspiration**
Society currently faces many environmental challenges, with the fashion industry known as one of the most polluting industries in the world. Fast fashion produces clothing that isn't made to last, using cheap materials that harm the environment (landfill impact, pesticides from growing cotton, and toxic chemicals making their way into water). Sustainable clothing, by contrast, uses materials that last longer and don't harm the environment. By choosing sustainable clothing, people can reduce their waste significantly and spend less money shopping for clothes. We believe users would benefit greatly from an intuitive app that tracks their wardrobe, tells them how sustainable it is, and predicts the type of clothing they have using Tangram.
**What it does**
The app lets users explore their closet: they add clothing they already own by selecting the type (shirt, pants, shorts, jacket, etc.) and the brand, then take a picture of the item they wish to upload, adding more items if they like. When they are done, the app generates their sustainability score and recommends more sustainable outfits for their closet, which they can purchase in-app through the store's website. Users also earn points for buying from sustainable brands, redeemable as gift cards from some of their favorite sustainable brands. The model built with Tangram recognizes the type of clothing from the picture the user takes.
**How we built it**
For the model: we learned to work with Tangram, using a CSV to create a .tangram model, then using that model to test individual examples and judge overall accuracy. We found a large dataset online with thousands of clothing images, each labeled by type (i.e., shirts, shoes, pants). We converted those images into binary strings and created a new CSV with those strings and the corresponding types, which we used to train a Tangram model, reaching about 61% accuracy, well above what a completely random guess would achieve given how many clothing categories there are.
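A rough sketch of that preprocessing, assuming grayscale thumbnails flattened into delimiter-joined pixel strings (the size, paths, and delimiter are illustrative):

```python
import csv
from PIL import Image

def image_to_string(path, size=28):
    """Downscale to size x size grayscale and join pixel values with '|'
    (avoiding commas, which would break the CSV formatting)."""
    pixels = Image.open(path).convert("L").resize((size, size)).getdata()
    return "|".join(str(p) for p in pixels)

with open("clothing.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["pixels", "type"])
    writer.writerow([image_to_string("shirt_001.jpg"), "shirt"])
# The resulting CSV is what we fed to Tangram's training command to
# produce the .tangram model.
```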
**Challenges we ran into**
Tangram does not yet support image recognition, so we had to devise a way to pass images in to test the library for that use. During training, we ran into issues with commas appearing in the binary strings (breaking the CSV formatting) and with cleansing the data to remove clothing we deemed irrelevant.
**Accomplishments that we're proud of**
Although the model is not the most accurate, we are proud of finding a way to apply this tool, Tangram, to a new purpose in image recognition. Learning how to train and use this machine learning library is also a skill that several of us can use in the future.
**What we learned**
We learned how to use an intuitive machine learning library and a little about how image recognition and data cleansing work.
**What's next for Sound of Sustainability**
Developing a full front end with a cross-platform tool such as React Native, and connecting it with our interface and machine learning model to create a fully functioning app.
|
## Inspiration for Recyclable
We often come across instances where we don't know if an item can be recycled and have to search the web intensively to reach a conclusion. From a used electrical appliance to a piece of broken furniture, there are ways to recycle these items; all it takes is a little research. However, not everyone has the access or time to research recycling, and they end up throwing away potentially recyclable materials, leaving a gap in the circular economy.
But everyone now has a smartphone with a camera, so answering "is this recyclable?" should require minimal effort from the user. With this in mind, we developed **recyclable**, a user-friendly mobile application that tells you whether an item is recyclable after a single camera click!
## How it works?
Capture an item's image and let recyclable quickly identify the object and recommend whether the item is recyclable or not.
## How we built it
**Technologies used**: Figma (for design), TypeScript, React Native, React Navigation, TensorFlow,
TensorFlow.js, OpenCV, Ascend AI, MindSpore, MindSpore XAI, Expo, Expo Camera.
This is developed as a mobile application which is supported in 2 platforms - Android and ios.
We trained an image classification model to classify between different materials which can either be recyclable or non recyclable. We used latest MobileNetv2 architecture which is very lite and compatible in smaller devices, This model predicts with very good accuracy at the same time it is very fast. We trained our model on both Tensorflow as well as on Ascend AI platform using Mindspore.
## Challenges we ran into
Developing and deploying the Ascend AI model was new to us, and understanding the extensive documentation took time. Still, we worked to improve performance wherever possible.
We achieved an accuracy of 93.75% with GoogLeNet and 91.11% with MobileNetV2.
These results show the strength of MobileNetV2, which reached comparable accuracy with far fewer parameters.
## Accomplishments that we're proud of
Developing a solution for societal good has a real potential market, with evolving technologies simplifying long-standing problems. We are proud of this first step towards improving the environment, as we all play a crucial role in the future of our planet. We have built a platform that offers greener, sustainable choices to users in a comprehensive manner.
## What we learned?
We learned new technologies like Ascend AI, Expo, and OpenCV while practicing our application development skills in React Native. We leveraged the GoogLeNet network from MindSpore.
We applied different optimizers, including Adam, AdamW, and SGD. The Ascend AI platform was a great experience: simple to use, with fast computation.
We also learned various business aspects to enhance the offered features.
## What's next for recyclable?
We could extend the app with features like:
* *COMMUNITY*: a platform to showcase your uniquely recycled creations and promote a sense of responsibility while celebrating the work.
* *RESOURCES*: access to research on recycling.
* *BONUS POINTS*: rewards for recyclers.
* *EXPLORE*: where your items can be recycled, based on your location.
* *FACILITIES*: a map view of all recycling facilities close by.
...and more.
|
## Inspiration
Herpes Simplex Virus-2 (HSV-2) is the cause of Genital Herpes, a lifelong and contagious disease characterized by recurring painful and fluid-filled sores. Transmission occurs through contact with fluids from the sores of the infected person during oral, anal, and vaginal sex; transmission can occur in asymptomatic carriers. HSV-2 is a global public health issue with an estimated 400 million people infected worldwide and 20 million new cases annually, a third of which take place in Africa (2012). HSV-2 increases the risk of acquiring HIV threefold, profoundly affects the psychological well-being of the individual, and poses devastating neonatal complications. The social ramifications of HSV-2 are enormous. The social stigma of sexually transmitted diseases (STDs) and the taboo of confiding in others mean that patients are often left on their own, to the detriment of their sexual partners. In Africa, the lack of healthcare professionals further exacerbates this problem. Further, the 2:1 ratio of female to male patients reflects a gender inequality where women are ill-informed and unaware of their partners' condition or their own. Most importantly, the symptoms of HSV-2 are often similar to various other dermatological issues which are less severe, such as common candida infections and inflammatory eczema. It's very easy to dismiss Genital Herpes as these latter conditions, which are much less severe and non-contagious.
## What it does
Our team from Johns Hopkins has developed the humanitarian solution “Foresight” to tackle the taboo issue of STDs. Offered free of charge, Foresight is a cloud-based identification system which will allow a patient to take a picture of a suspicious skin lesion with a smartphone and to diagnose the condition directly in the iOS app. We have trained the computer vision and machine-learning algorithm, which is downloaded from the cloud, to differentiate between Genital Herpes and the less serious eczema and candida infections.
We have a few main goals:
1. Remove the taboo involved in treating STDs by empowering individuals to make diagnostics independently through our computer vision and machine learning algorithm.
2. Alleviate specialist shortages
3. Prevent misdiagnosis and to inform patients to seek care if necessary
4. Location service allows for snapshots of local communities and enables more potent public health intervention
5. Protects the sexual relationship between couples by allowing for transparency- diagnose your partner!
## How I built it
We first gathered 90 images across 3 categories (30 each) of skin conditions that are common around the genital area: "HSV-2", "Eczema", and "Yeast Infections". We realized that a good way to differentiate these conditions is their inherent differences in texture, which, although subtle to the human eye, are very perceptible to good algorithms. We take advantage of the Bag of Words model common in the field of web crawling and information retrieval and apply a similar algorithm, written from scratch except for the feature identifier (SIFT). The algorithm follows:
Part A) Training the Computer Vision and Machine Learning Algorithm (Python)
1. We use a Computer Vision feature identifying algorithm called SIFT to process each image and to identify "interesting" points like corners and other patches that are highly unique
2. We consider each patch around the "interesting" points as textons, or units of characteristic textures
3. We build a vocabulary of textons by identifying the SIFT points in all of our training images, and use the machine learning algorithm k-means clustering to narrow down to a list of 1000 "representative" textons
4. For each training image, we can build our own version of a descriptor by representation of a vector, where each element of the vector is the normalized frequency of the texton. We further use tf-idf (term frequency, inverse document frequency) optimization to improve the representation capabilities of each vector. (all this is manually programmed)
5. Finally, we save these vectors in memory. When we want to determine whether a test image depicts either of the 3 categories, we encode the test image into the same tf-idf vector representation, and apply k-nearest neighbors search to find the optimal class. We have found through experimentation that k=4 works well as a trade-off between accuracy and speed.
6. We tested this model with a randomly selected subset that is 10% the size of our training set and achieved 89% accuracy of prediction!
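A compressed sketch of steps 1-5 using OpenCV and scikit-learn (the tf-idf re-weighting is omitted for brevity; file names, list sizes, and KMeans settings are illustrative):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# 90 labeled images in practice; a tiny illustrative subset here.
train_paths = ["hsv2_01.jpg", "hsv2_02.jpg", "eczema_01.jpg", "yeast_01.jpg"]
train_labels = ["hsv2", "hsv2", "eczema", "yeast"]

sift = cv2.SIFT_create()

def sift_descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc

# Steps 1-3: build the vocabulary of 1000 "representative" textons.
all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
vocab = KMeans(n_clusters=1000, n_init=3).fit(all_desc)

# Step 4: represent each image as a normalized texton-frequency histogram.
def to_histogram(path):
    words = vocab.predict(sift_descriptors(path))
    hist = np.bincount(words, minlength=1000).astype(float)
    return hist / hist.sum()

X = np.array([to_histogram(p) for p in train_paths])

# Step 5: k-nearest-neighbour classification with the experimentally chosen k=4.
clf = KNeighborsClassifier(n_neighbors=4).fit(X, train_labels)
print(clf.predict([to_histogram("test_lesion.jpg")]))
```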
Part B) Ruby on Rails Backend
1. The previous machine learning model can be expressed as an aggregate of 3 files: cluster centers in SIFT space, tf-idf statistics, and classified training vectors in cluster space
2. We output the machine learning model as csv files from python, and write an injector in Ruby that inserts the trained model into our PostgreSQL database on the backend
3. We expose the API such that our mobile iOS app can download our trained model directly through an HTTPS endpoint.
4. Beyond storing our machine learning model, our backend also includes a set of API endpoints catered to public health purposes: each time an individual on the iOS app makes a diagnosis, the backend is updated to reflect the demographic information and diagnosis results of that individual's actions. This information is visible on our web frontend.
Part C) iOS app
1. The app takes in demographic information from the user and downloads a copy of the trained machine learning model from our RoR backend once
2. Once the model has been downloaded, it is possible to make a diagnosis even without internet access
3. The user can take an image directly or upload one from the phone library for diagnosis, and a diagnosis is given in several seconds
4. When the diagnosis is given, the demographic and diagnostic information is uploaded to the backend
Part D) Web Frontend
1. Our frontend leverages the stored community data (pooled from diagnoses made from individual phones) accessible via our backend API
2. The actual web interface is a portal for public health professionals like epidemiologists to understand the STD trends (as pertaining to our 3 categories) in a certain area. The heat map is live.
3. Used HTML5, CSS3, JavaScript, and jQuery
## Challenges I ran into
It is hard to find current STD prevalence and incidence reports outside the United States. Most African countries have limited surveillance data, and conditions are even worse for stigmatized diseases. We used the global HSV-2 prevalence and incidence report from the World Health Organization (WHO) from 2012. Another issue we faced was the ethics of collecting disease status from users. We were also conflicted about whether we should inform a user's spouse of their result: an ethical dilemma between patient confidentiality and beneficence.
## Accomplishments that I'm proud of
1. We successfully built a cloud-based image recognition system that distinguishes HSV-2, yeast infection, and eczema skin lesions using a machine learning algorithm, with 89% accuracy on a randomly selected test set that is 10% the training size.
2. Our mobile app lets users anonymously send their pictures to our cloud database for recognition, avoiding the stigmatization of STDs from their neighbors.
3. From a public health perspective, mapping the demographic distribution of STDs in Africa could assist HSV-2 prevention efforts and help direct medical advice to eligible patients.
## What I learned
We learned much more about HSV-2 on the ground and its ramifications on society. We also learned about ML, computer vision, and other technological solutions available for STD image processing.
## What's next for Foresight
Extrapolating our workflow for Machine Learning and Computer Vision to other diseases, and expanding our reach to other developing countries.
|
partial
|
Long-distance is challenging for any relationship, whether romantic, platonic, or familial.
* 32.5% of college relationships are long-distance relationships (LDRs)
* Nearly three-quarters (72%) of Americans feel lonely.
The lack of physical presence often leads to feelings of disconnect and loneliness. Current solutions, such as video calls and messaging apps, lack the depth and immersion needed to truly feel connected. The most crucial aspects of a relationship, shared activities and the involvement of senses beyond just sight and sound, are often the hardest to achieve from a distance.
This Valentine’s weekend, we present to you… VR-Tines! 💗VR-Tines is an innovative VR experience designed to enhance long-distance relationships through immersive, interactive, and emotionally fulfilling activities.
## ⭐Key Experience Points
* **Collaborative Scrapbooking**: Couples can work together on a scrapbook, flipping through pages and dragging in photos and various decorative elements. The scrapbook feels like the real thing thanks to its alignment with 3D surfaces and its interactive page-flipping and drag-in elements. With your partner right next to you, it's like working on it together in a shared space. Love to reflect on the mems :)
* **Shared Taste**: Get a themed meal at the same time! Here, we use the two users' favorite boba orders. Using the DoorDash API, we synchronize the delivery of identical beverages or snacks to both partners during their VR date, tracked in the DoorDash delivery simulator.
* **Enhanced Realism with Live Webcam Feed**: People typically use passthrough with VR headsets to feed what’s “real” into their experiences. We take advantage of this idea to stream a live webcam feed into the VR-tines experience, so it feels like your partner is actually sitting right next to you (and you see them in passthrough) as you do activities together!
* **Tab Bar Navigation**: We support toggling through the three main options: scrapbooking, Doordash, and home.
* **Social Impact**: VR-tines is about bringing people together. Our project has the potential to significantly reduce the emotional distance in long-distance relationships, fostering stronger bonds and happier couples.
This problem is also important to us because all of us are in long-distance relationships! However, we notice many similar issues that arise in all kinds of relationships in life, such as family, friends, work, etc due to the difficulty of being far apart. Therefore, we sought to solve this real user problem faced every day and create a better solution than current methods of call communication to create more seamless, immersive experiences to feel closer to our loved ones.
As first-time hackers and Stanford freshmen, the concept of home stretches across oceans to Myanmar and Vietnam, where our families reside. The yearning for a deeper connection with our loved ones, despite the geographical miles that separate us, sparked not just a need but a personal quest. Facing this hackathon, our greatest challenge was not the complexity of the technology or the novelty of the concept, but the mental hurdle of believing we could make a significant impact. This project became more than a hack; it evolved into a journey of discovery, learning, and overcoming, driven by our shared experiences of longing and the universal desire to feel closer to those we hold dear. It’s a testament to our belief that distance shouldn't dim the bonds of love and friendship but rather, with the right innovation, can be bridged beautifully.
|
## Inspiration:
Sound is a precious thing. Unfortunately, some people are unable to experience it as a result of hearing loss or impairment. We firmly believe that communication should be as simple as a flick of the wrist, and we aim to bring simplicity and ease to those affected by hearing loss and impairment.
## What it does:
The HYO utilizes hand gesture input and relays it to an Android-powered mobile device or PC. A unique set of gestures allows the user to select a desired phrase or sentence to communicate with someone via voice-over.
## How we built it:
HYO = Hello + Myo
We used the C++ programming language to write the code for the PC to connect to the MYO, recognize gestures in a multilevel menu, and output instructions. We then developed the idea of creating and deploying an Android app for portability and ease of use.
## Challenges we ran into:
We encountered several challenges while building our project and spent large amounts of time troubleshooting code issues; HYO is programmed in C++, whose concepts are complex.
After long hours of continuous programming and troubleshooting, we got the code running and connected the MYO to the computer.
## Accomplishments that we're proud of:
1) Not sleeping and working productively.
2) Working in a diverse group of four complete strangers, collaborating towards a single goal of success despite belonging to different programs and having completely different skill sets.
## What we learned:
1) Github is an extremely valuable tool.
2) Learnt new concepts in C++
3) Experience working with the MYO armband and the Arduino and Edison microcontrollers.
4) How to build an Android app
5) How to host a website
## What's next for HYO WORLD
HYO should be optimized to fit criteria for the average technology consumer. Hand gestures can be implemented to control apps via the MYO armband, a useful and complex piece of technology that can be programmed to recognize various gestures and convert them into instructions to be executed.
|
## **Problem**
* Less than a third of Canada's fish populations, 29.4 per cent, can confidently be considered healthy, and 17 per cent are in the critical zone, where conservation actions are crucial.
* A fishery audit conducted by Oceana Canada reported that just 30.4 per cent of fisheries in Canada are considered "healthy" and nearly 20 per cent of stocks are "critically depleted."
### **Lack of monitoring**
"However, short term economics versus long term population monitoring and rebuilding has always been a problem in fisheries decision making. This makes it difficult to manage dealing with major issues, such as species decline, right away." - Marine conservation coordinator, Susanna Fuller
"sharing observations of fish catches via phone apps, or following guidelines to prevent transfer of invasive species by boats, all contribute to helping freshwater fish populations" - The globe and mail
## **Our solution; Aquatrack**
Aquatrack aggregates datasets from the Open Canada data portal into a public dashboard!
Slide deck for more info: <https://www.canva.com/design/DAFCEO85hI0/c02cZwk92ByDkxMW98Iljw/view?utm_content=DAFCEO85hI0&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton>
GitHub repo: <https://github.com/HikaruSadashi/Aquatrack>
The datasets used:
1) <https://open.canada.ca/data/en/dataset/c9d45753-5820-4fa2-a1d1-55e3bf8e68f3/resource/7340c4ad-b909-4658-bbf3-165a612472de>
2) <https://open.canada.ca/data/en/dataset/aca81811-4b08-4382-9af7-204e0b9d2448>
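As a hedged sketch of the aggregation step, a CSV resource from the portal can be pulled straight into pandas for the dashboard; the URL placeholder and column names below are illustrative:

```python
import pandas as pd

# Substitute one of the Open Canada resources listed above.
CSV_URL = "https://example.com/fisheries.csv"
df = pd.read_csv(CSV_URL)

# Typical dashboard rollup: one health indicator per region per year.
summary = df.groupby(["region", "year"]).agg(stock_health=("health_index", "mean"))
print(summary.head())
```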
|
partial
|
# 🚗 InsuclaimAI: Simplifying Insurance Claims 📝
## 🌟 Inspiration
💡 After a frustrating experience with a minor fender-bender, I was faced with the overwhelming process of filing an insurance claim. Filling out endless forms, speaking to multiple customer service representatives, and waiting for assessments felt like a second job. That's when I knew that there needed to be a more streamlined process. Thus, InsuclaimAI was conceived as a solution to simplify the insurance claim maze.
## 🎓 What I Learned
### 🛠 Technologies
#### 📖 OCR (Optical Character Recognition)
* OCR tooling (OpenCV for preprocessing plus Tesseract for recognition) helped in scanning and reading textual information from physical insurance documents, automating the data-extraction phase.
#### 🧠 Machine Learning Algorithms (CNN)
* Utilized Convolutional Neural Networks to analyze and assess damage in photographs, providing an immediate preliminary estimate for claims.
#### 🌐 API Integrations
* Integrated APIs from various insurance providers to automate the claims process. This helped in creating a centralized database for multiple types of insurance.
### 🌈 Other Skills
#### 🎨 Importance of User Experience
* Focused on intuitive design and simple navigation to make the application user-friendly.
#### 🛡️ Data Privacy Laws
* Learned about GDPR, CCPA, and other regional data privacy laws to make sure the application is compliant.
#### 📑 How Insurance Claims Work
* Acquired a deep understanding of the insurance sector, including how claims are filed, and processed, and what factors influence the approval or denial of claims.
## 🏗️ How It Was Built
### Step 1️⃣: Research & Planning
* Conducted market research and user interviews to identify pain points.
* Designed a comprehensive flowchart to map out user journeys and backend processes.
### Step 2️⃣: Tech Stack Selection
* After evaluating various programming languages and frameworks, we selected Python, TensorFlow, and Flet (a Python UI framework) as the most robust and scalable options.
### Step 3️⃣: Development
#### 📖 OCR
* Integrated Tesseract for OCR capabilities, enabling the app to automatically fill out forms using details from uploaded insurance documents.
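A minimal sketch of that step, assuming the `pytesseract` binding; the sample field pattern is illustrative:

```python
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("insurance_card.jpg"))

# Pull out a policy number to pre-fill the claim form (pattern is an example).
match = re.search(r"Policy\s*(?:No\.?|Number)[:\s]*([A-Z0-9-]+)", text, re.I)
policy_number = match.group(1) if match else None
print(policy_number)
```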
#### 📸 Image Analysis
* Used a CNN trained on thousands of car accident photos to detect damage on automobiles.
#### 🏗️ Backend
##### 📞 Twilio
* Integrated Twilio to facilitate voice calling with insurance agencies. This allows users to directly reach out to the Insurance Agency, making the process even more seamless.
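A sketch of placing such a call with Twilio's Python helper library; the credentials, numbers, and TwiML URL are placeholders:

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")
call = client.calls.create(
    to="+15551234567",       # the user's insurance agency
    from_="+15557654321",    # our Twilio number
    url="https://example.com/claim-intro.xml",  # TwiML describing the call
)
print(call.sid)
```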
##### ⛓️ Aleo
* Used Aleo to tokenize PDFs containing sensitive insurance information on the blockchain. This ensures the highest levels of data integrity and security. Every PDF is turned into a unique token that can be securely and transparently tracked.
##### 👁️ Verbwire
* Integrated Verbwire for advanced user authentication using FaceID. This adds an extra layer of security by authenticating users through facial recognition before they can access or modify sensitive insurance information.
#### 🖼️ Frontend
* Used Flet to create a simple yet effective user interface. Incorporated feedback mechanisms for real-time user experience improvements.
## ⛔ Challenges Faced
#### 🔒 Data Privacy
* Researching and implementing data encryption and secure authentication took longer than anticipated, given the sensitive nature of the data.
#### 🌐 API Integration
* Where available, we integrated with their REST APIs, providing a standard way to exchange data between our application and the insurance providers. This enhanced our application's ability to offer a seamless and centralized service for multiple types of insurance.
#### 🎯 Quality Assurance
* Iteratively improved OCR and image analysis components to reach a satisfactory level of accuracy. Constantly validated results with actual data.
#### 📜 Legal Concerns
* Spent time consulting with legal advisors to ensure compliance with various insurance regulations and data protection laws.
## 🚀 The Future
👁️ InsuclaimAI aims to be a comprehensive insurance claim solution. Beyond just automating the claims process, we plan on collaborating with auto repair shops, towing services, and even medical facilities in the case of personal injuries, to provide a one-stop solution for all post-accident needs.
|
## Inspiration **💪🏼**
Health insurance: everyone needs it, no one wants to pay for it. As soon-to-be adults, health insurance has been a growing concern for us. Since a simple ambulance ride can easily cost thousands of dollars, going without health insurance is a terrible decision in the US. But how much are you supposed to pay for it? Insurance companies publish their rates, but formulas alone don't tell you whether you're being ripped off, especially for young adults who have never paid for health insurance.
## What it does? **🔍**
Thus, to avoid being ripped off on health insurance after leaving our parents' households, we developed Health Insurance 4 Dummies: a website using a machine learning model that produces a fair estimate of the annual cost of health insurance based on the user's personal information. It also uses an LLM to explain the composition of the cost in detail.
## How we built it **👷🏼♀️**
The front end is built with Convex and React, creating a UI that takes inputs from the user. The backend is built with Python/Flask, which communicates with remote services: InterSystems and Together.AI. The ML model for predicting cost is built on InterSystems using H2O, trained on a dataset of individuals' information and their annual health insurance rates. The explanation of costs is generated with Together.AI's Llama-2 model.
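As a stand-in for the idea (the real model was trained with H2O on InterSystems), here is a hedged sketch of a cost regressor on classic insurance features; the file and column names are assumptions:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("insurance.csv")  # columns assumed: age, bmi, smoker, charges
X = pd.get_dummies(df[["age", "bmi", "smoker"]], drop_first=True)
y = df["charges"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```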
## Challenges we ran into **🔨**
Full-stack development is tedious, especially when functions require remote resources. Other challenges included finding good datasets to train the model, authenticating to the trained model on InterSystems via their IRIS connection driver, and choosing the right model from Together.AI.
## Accomplishments that we're proud of **⭐**
We trained and served an ML model from a remote database, opening the door to massive datasets, and integrated LLMs to provide automated explanations.
## What we learned **📖**
Full-stack development skills; training and using ML models; accessing remote services via APIs; TLS authentication.
## What's next for Health Insurance 4 Dummys **🔮**
Gather larger datasets to make more parameters available and give more accurate predictions.
|
## Inspiration
After conducting extensive internal and external market research, our team discovered that customer experience is one of the biggest challenges the insurance industry faces. With the rapid increase in digitalization, **customers are seeking faster and higher quality services** where they can find answers, personalize their products and manage their policies instantly online.
## What it does
**Insur-AI** is a fully functional chatbot that mimics the role of an insurance broker through human-like conversation and provides an accurate insurance quote within minutes!
## You can check out a working version of our website at: insur-AI.tech
## How we built it
We used **ReactJS**, **Bootstrap** along with some basic **HTML & CSS** for our project! Some of the design elements were created using Photoshop and Canva.
## Accomplishments that we're proud of
Creating a fully personalized **Intact** insurance premium estimate report, including a graphical analysis of price and ways to reduce premium costs, in a matter of minutes!
## What's next for Insur-AI
One thing we could work on is integrating Insur-AI into <https://www.intact.ca/>, so prospective customers have a quick and easy way to get a home insurance quote! The chatbot idea can also expand to other kinds of insurance, allowing insurance companies to reach a broader customer base.
**NOTE:** There have been some domain issues due to configuration errors. If insur-AI.tech does not work, please try a (slightly) older copy here: aryamans.me/insur-AI
<https://www.youtube.com/watch?v=YEU5eBp_Um4&feature=youtu.be>
|
partial
|
# SAFE -- THE APP THAT REVOLUTIONIZES SECURITY AND INFORMATION STORAGE
## What SAFE is all about
In this day and age, the use of electronics has increased significantly. Banking, storing personal information, and more: EVERYTHING IS ON OUR CELLPHONES NOW! The protection of personal information is often overlooked but is more important than ever, with information breaches and cyber-security compromises on the rise. We came up with the idea for SAFE, a password-protected app that stores all personal information on a cellular device, acting as a safe for your personal information. Users can choose to share info with other users of the app, and all shared data is safely encrypted.
Once the user signs up using the app, they will be greeted by a user-friendly home-screen which will initially contain customizable folders that can be set up for private information storage.
The cool thing about SAFE is that users are able to share private and personal information with other authorized users directly from the app itself.
## End-to-end encryption (E2EE)
E2EE was one of the things that we focused a lot on for the development of SAFE. E2EE ensures that the data shared from one authorized user to another through SAFE remains confidential to the users involved in the sharing session.
### Why E2EE:
* In order to protect the clients’ credential data during a sharing session
* SAFE’s implementation structure is meant to reassure users of its security using E2EEE
* SAFE is targeted to become an essential tool for sharing credentials
* E2EE provides absolute security for all clients using the share feature
### How SAFE uses E2EE:
* Outgoing and incoming credential data using the share feature must be encrypted/decrypted
* SAFE’s server will only handle encrypted data received from the client
* The Diffie-Hellman algorithm will be used to ensure powerful security (see the sketch after this list)
* When sharing data, E2EE will be enabled by default
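A conceptual sketch of the key agreement using X25519 (a modern Diffie-Hellman variant) from Python's `cryptography` package; in SAFE, only ciphertext protected by keys derived this way would ever reach the server:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each side combines its private key with the other's public key...
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared  # ...and arrives at the same secret

# Derive a symmetric session key for encrypting the shared credentials.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"safe-share-session").derive(alice_shared)
```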
## How we built the app
The ideas were listed in the slides along with a very basic implementation of the app. The design interface concept was made as an example using Figma. Finally, Flutter was used to visually display the app.
## Challenges that we ran into
Our team encountered some challenges with the development and coming up with an idea for an app. We decided to think about ongoing problems in the world that are connected to technology and have a huge impact on us. After recognizing the problem, we decided to come up with a solution for that problem and implement the solution into our app.
## Accomplishments that we are proud of
We are proud of working together as a team and coming up with a potential solution that could revolutionize the world one day. Everything is on our cellphones now, from banking information to electronic gift cards. Creating a reliable and safe storage solution in this day and age is something we are definitely proud of.
## What we learned
Throughout this project, we learned to work collaboratively with each other and efficiently maintain a smooth workflow to complete this project. Furthermore, while working as a team, we also learned to implement each of our ideas and modify the idea according to each of our views and perspectives.
## What's next for SAFE
We hope this project will be considered as a possible solution for safe personal data storage in this day and age. We think an idea like SAFE will go a long way and could become a primary solution for users who expect security/storage for sensitive data in one single app.
## Presentation pitch
<https://docs.google.com/presentation/d/1PCxM_ZpspbXKNALvbRhuv-hN_K7Hbbio5R8rYaJtCjY/edit?usp=sharing>
#### An idea by: Abrar, Ismail, Matthew, and Lucas
|
## Inspiration
Investors like to talk. Just look at communities like r/WallStreetBets on Reddit, which has over 12.5 **million** subscribers. Or just overhear some conversations at your next party. People like to talk (and sometimes brag) about how their investments are doing. But these conversations happen on the edges, hidden between dozens of (admittedly funny) memes and sarcastic posts. That is, until now.
## What it does
Share is a platform for you to share how your shares are doing (pun very much intended). Post about your positions and your rationale behind them or read what other people are doing. You can join groups with your friends, family, and others to see who can get the highest return for their money. Share is a place for you to learn about and enjoy the thrills of the stock market.
## How we built it
Share uses data from Yahoo Finance to determine your current return on a particular set of shares. This data is stored and refreshed periodically. The backend was created with Python, Flask, and Pandas and is connected to the front end via a custom-built RESTful API. The frontend was made in Flutter.
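A minimal sketch of the return calculation, assuming the `yfinance` package as the Yahoo Finance source; the ticker and window are illustrative:

```python
import yfinance as yf

def position_return(ticker, period="1mo"):
    # Percentage return over the window, from first to last closing price.
    prices = yf.Ticker(ticker).history(period=period)["Close"]
    return (prices.iloc[-1] / prices.iloc[0] - 1) * 100

print(f"AAPL 1-month return: {position_return('AAPL'):.2f}%")
```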
## Challenges we ran into
This was the first hackathon for about half of our team members. Learning about the creative process and figuring out how to proceed was a big challenge for our team, but one that we conquered anyway. One member had never made a backend before or worked with RESTful APIs, but managed to learn how to do it before the deadline. Another member became introduced to the Flutter framework. With some minor changes, our product will soon be released on the app store for its first market introduction.
## Accomplishments that we're proud of
We are proud of getting Flutter to work. It took quite a bit of effort but it was worth it in the end. We are also proud of creating a functional app and getting a proper backend-frontend system working. All four of us pushed through the Hack at the end and did not sleep the last night whatsoever. We are a team of two high schoolers, one NYU student, and a Penn student: we formulated a new idea to pioneer change in the world. All of us have different backgrounds (bioengineering, computer engineering, mechanical engineering, and finance), but we all pieced together our strengths to demonstrate our immense potential in Share.
## What we learned
Flutter, REST APIs, Flask, etc: such technologies were a few of the frameworks that we learned. From knowing nothing about these systems prior to the Hack, we spent 36 complete hours creating a full-stack, hybrid, mobile application. Alongside the hands-on technical knowledge that was gained, the team learned a lot about teamwork and collaborating on different parts of the project in order to make sure the whole thing was done on time. Each of us four came into the Hackathon, looking for team members. After a quick introduction at the PreHack event, "voila!" our team became a reality! In a growing world of business, we just engineered a new product that promotes finance and investing education around the world. We want to touch each person around the world with our social media/game/informative platform!
## What's next for Share.
We plan to add more functionality to Share. Possible future features include adding the ability to log options positions on the app, the ability to compare how every member in a group is doing over several different time intervals, and a more robust algorithm to help people learn about potential investing opportunities. We are looking forward to continuing our journey after PennApps and launching as a potential startup with innovative ideas tacked onto our current foundation.
|
## Inspiration
As university students, emergency funds may not be on the top of our priority list however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls through may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount, and goal for the emergency fund, and the app creates a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of, unlocking avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user gets milestones and achievements for reaching certain sub-goals, along with extra motivation if their emergency fund falls below their baseline amount. Users can also change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly.
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience using Flutter, let alone mobile app development. Learning to use Flutter in a short period of time can easily be agreed upon to be the greatest challenge that we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was our first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience and so we have planned projections for the future of Spend2Save. These projections include but are not limited to, plans such as integration with actual bank accounts at RBC.
|
losing
|
## Inspiration
For our first ever attempt at mobile application development, we wanted to create something simple yet fun at the same time.
## What it does
It doesn't do anything, really. Just tap on Tappy to accumulate points and get that false sense of achievement.
## How we built it
We used Android Studio and Java.
## Challenges we ran into
None of us were familiar with mobile development so we had to play around a lot with the functionalities.
## Accomplishments that we're proud of
We made our first mobile application.
## What we learned
We learned the basics of mobile application development for Android using Java.
## What's next for Tappy Bird
Nothing; it's completely useless.
|
## Inspiration
In the rapidly evolving landscape of artificial intelligence, we found ourselves pondering a profound question: What if AI agents could transcend their role as mere tools?
Over the course of this hackathon, we've seen agents made with Fetch AI be utilized for autonomous tasking, scheduling, and other simple interactions. We're also aware of the progress being made in having AI agents emulate real people for interactivity with users. But what if we were able to develop an environment that humans didn't need to exist in at all -- in other words, **what does it look like when AI agents self-sustainably interact with only each other?** What happens in that world? What insights do we gain? Can agents become self-sufficient entities capable of meaningful interaction?
This curiosity was sparked by our time in the Bay Area, CA this summer, where we participated in hackathons and events led by ML engineers at Google Deepmind, NVIDIA, AI21 Labs, and Boston Dynamics. We're here to tackle a question in fundamental AI/ML research and development -- one that is actively being worked on and remains unanswered -- and to push the boundaries of what's possible in the realm of AI agents.
Through this project, we've envisioned a world where AI identities are not just reactive but proactive – a world where they can engage in complex dialogues, simulate real-world scenarios, and provide us with unprecedented insights into human interactions. Better yet, we've proven that it's possible. Our vision has given birth to ConvSim, a revolutionary multi-agent platform that transforms us from active participants into captivated observers of a virtual world demonstrating intelligence and autonomous action.
## What it does
ConvSim is not just another chatbot or simulation tool – it's a window into a new dimension of AI-driven experiences. At its core, ConvSim is a sophisticated multi-agent platform that orchestrates interactions between AI entities, simulating real-world conversations and scenarios with uncanny realism.
Imagine witnessing a debate between Kamala Harris and Donald Trump on climate change, observing how it unfolds, and gaining insights that were previously unattainable. ConvSim makes this possible by leveraging advanced AI technologies to create a self-sustaining ecosystem of intelligent agents.
Our platform comprises five distinct agents, each playing a crucial role in the simulation:
A. Identity Generation Agent: The gateway to our virtual world, this agent interacts with users to understand their desired simulation parameters.
B. Agent 1 & Agent 2: These are our conversationalists, meticulously crafted AI entities that emulate real individuals with high fidelity. They engage in dialogue, mirroring the nuances and complexities of human interaction.
C. Analysis Agent: A silent observer that provides valuable perspective on the unfolding conversation, offering insights that might escape the human eye.
D. Tool Agent: This agent translates the rich tapestry of conversation into quantifiable data, generating plots based on sentiment analysis and productivity metrics.
Through this intricate dance of AI entities, ConvSim creates a self-sustaining environment that can simulate a vast array of scenarios – from high-stakes political debates to intimate counseling sessions, from classroom interactions to celebrity interviews.
## How we built it
Building ConvSim was an exercise in pushing the boundaries of AI technology and software architecture. We leveraged cutting-edge AI frameworks, including Fetch AI and OpenAI, to create a robust and flexible multi-agent system.
Our development process focused on several key areas:
1. Agent Design: Each agent was carefully crafted to fulfill its specific role within the ecosystem. We used advanced natural language processing models to ensure realistic and context-aware interactions.
2. Inter-Agent Communication: We developed a sophisticated communication protocol that allows our agents to exchange information seamlessly, creating a cohesive and believable simulation.
3. User Interface: While the magic happens behind the scenes, we created an intuitive interface that allows users to easily set up and observe simulations.
4. Analysis and Visualization: We integrated powerful analytics tools to process the wealth of data generated by our simulations, providing users with valuable insights and visualizations.
5. Scalability and Performance: Given the complex nature of our multi-agent system, we paid special attention to optimization, ensuring that ConvSim can handle multiple simultaneous simulations without compromising on performance.
A high-level diagram of our multi-agent platform architecture is also included.
This architecture allows for a seamless flow of information between agents, creating a dynamic and responsive simulation environment.
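To make this flow concrete, here is a minimal sketch of how two persona agents can converse autonomously. It is illustrative only -- the model name, personas, and turn count are placeholder assumptions, not ConvSim's actual code:

```python
# Minimal two-agent conversation loop (illustrative sketch, not ConvSim's code).
# Assumes the OpenAI Python SDK; model, personas, and turn count are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def next_turn(persona: str, speaker: str, transcript: list) -> str:
    """One agent's next line; its own past lines appear as 'assistant' turns."""
    messages = [{"role": "system", "content": persona}]
    for who, text in transcript:
        role = "assistant" if who == speaker else "user"
        messages.append({"role": role, "content": text})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

personas = {
    "A": "You are a climate scientist debating energy policy.",
    "B": "You are an economist debating energy policy.",
}
transcript = [("A", "Opening statement: rapid decarbonization is urgent.")]
for _ in range(4):  # fixed number of exchanges for the demo
    speaker = "B" if transcript[-1][0] == "A" else "A"
    transcript.append((speaker, next_turn(personas[speaker], speaker, transcript)))
```

An analysis agent could then consume `transcript` to compute per-turn metrics such as sentiment, in the spirit of the Analysis and Tool Agents described above.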
## Challenges we ran into
Developing ConvSim was not without its hurdles. Some of the key challenges we faced include:
1. Maintaining Coherence: Ensuring that multiple AI agents could maintain a coherent and contextually relevant conversation over extended periods was a significant challenge. We had to fine-tune our models extensively to achieve natural dialogue flow.
2. Balancing Realism and Ethics: As we simulated real-world personalities and scenarios, we had to carefully navigate ethical considerations to ensure our simulations were respectful and did not propagate harmful biases or misinformation.
3. Performance Optimization: Managing multiple sophisticated AI models simultaneously put a strain on computational resources. Optimizing our system for efficiency without compromising on the quality of interactions was a complex task.
4. Data Integration: Synthesizing outputs from multiple agents into meaningful analyses and visualizations required careful data integration and processing techniques.
5. User Experience Design: Creating an interface that could convey the complexity of our simulations while remaining intuitive and engaging for users was a delicate balancing act.
## Accomplishments that we're proud of
Despite the challenges, our team has achieved several groundbreaking accomplishments with ConvSim:
1. True Multi-Agent Interaction: We've successfully created a self-sustaining ecosystem where multiple AI agents interact autonomously, a feat that pushes the boundaries of current AI technology.
2. High-Fidelity Simulations: Our platform can emulate real-world personalities and scenarios with remarkable accuracy, opening up new possibilities for entertainment, education, and research.
3. Advanced Analytics: By integrating sentiment analysis and productivity metrics, we've added a layer of quantitative insight to qualitative interactions, providing valuable data for various applications.
4. Scalable Architecture: Our system is designed to handle multiple simultaneous simulations, making it a powerful tool for large-scale scenario analysis and entertainment productions.
5. Ethical AI Development: We've navigated complex ethical considerations to create a platform that respects privacy and promotes responsible AI use.
## What we learned
The development of ConvSim has genuinely been an incredible learning journey:
1. AI Complexity: We gained deep insights into the intricacies of creating and managing multiple AI agents in a cohesive system. The level of architectural detail and rigor required to make this happen was extraordinarily high. Our program utilizes multi-threading, computation optimization, and a fully integrated platform based on Fetch AI's architecture system to make this environment self-sustaining and continually alive.
2. Interdisciplinary Approach: We learned the importance of combining expertise from various fields – from AI and software engineering to psychology and ethics – to create a truly innovative product. Our product is applied to the Entertainment and Media track, but its simulation capabilities unveil serious potential in Sustainability, Healthcare, and Education as well.
3. Real-World Applications: Through our development process, we've uncovered numerous potential applications for multi-agent systems in entertainment, education, mental health, and more. Moreover, we've continued to find use cases that fit across any industry -- the reason being, we're able to simulate a "team environment" where a set of agents work together to accomplish a task. It's an architecture that fits so many systems.
4. Ethical Considerations: We developed a keen understanding of the ethical implications of AI simulations and the importance of responsible development practices. This is at the forefront of our development system and mission. The abilities of the product support diversity, equity, and inclusion in the simulation capabilities and the questions that are answered.
5. User-Centric Design: We learned valuable lessons about designing complex systems that remain accessible and engaging for end-users.
## What's next for ConvSim
ConvSim is not just a hackathon project – it's the beginning of a journey to revolutionize how we interact with and learn from AI. Our future roadmap includes:
1. Expanded Simulation Capabilities: We aim to increase the range of scenarios and personalities that ConvSim can emulate, making it an even more versatile tool for entertainment and research.
2. Enhanced Analytics: We plan to integrate more advanced analytics tools, including predictive modeling, to provide even deeper insights from our simulations.
3. VR/AR Integration: To create truly immersive experiences, we're exploring integration with virtual and augmented reality technologies.
4. API Development: We want to make ConvSim's capabilities accessible to developers and researchers, allowing them to build upon our platform.
5. Real-World Partnerships: We're seeking partnerships in the entertainment, education, and mental health sectors to bring ConvSim's capabilities to real-world applications.
6. Continuous Ethical Review: As we expand, we're committed to ongoing ethical review and refinement of our platform to ensure responsible AI use.
ConvSim represents a paradigm shift in AI-driven experiences. By creating a self-sustaining multi-agent platform, we've opened the door to unprecedented possibilities in entertainment, education, and research. From simulating high-stakes political debates to exploring sensitive topics in mental health and sexuality, ConvSim provides a safe, immersive environment for exploration and learning.
In the realm of media and entertainment, ConvSim is not just a tool – it's a revolution. We're not merely predicting the future; we're creating it. With ConvSim, content creators can prototype storylines, test character interactions, and even generate entire narratives driven by AI. Audiences can step into immersive experiences, witnessing historical events unfold or exploring alternate realities.
But our vision extends beyond entertainment. ConvSim has the potential to be a powerful tool for education, allowing students to interact with historical figures or complex concepts in engaging ways. In the field of psychology, it could provide a platform for exploring human behavior and interactions in a controlled, ethical environment.
From a highly technical perspective, this product is one that doesn’t have linear utility - if done less than optimally, it exists as an exciting entertainment and media tool; if done optimally, it is an invaluable tool in simulating the unknown.
1. Personality mimicking - In the future, a potential implementation is to first create RAG knowledge bases or fine-tune models to mimic a known person or personality. Relevant information can be gathered by web scraping.
2. Simulation Optimization - Multiple simulations can be run with slight modifications. These simulations can be aggregated into reports, finding the “Nash equilibrium” of conversations, or the state they will most likely trend towards.
3. Analysis - In the future, the analysis agent should be able to decide on its own what metrics to measure. These could include sentiment, productivity, or entertainment value.
As we continue to develop and refine ConvSim, we're not just building a product – we're pioneering a new frontier in AI research and application. We're tackling fundamental questions about AI capabilities, ethics, and human-AI interaction with the highest technical rigor. We're building a system that not only redefines immersive experiences for media and entertainment, but also provides insights and learning that address key issues in sustainability, education, and healthcare, among other domains. We can simulate how a conversation between Kamala Harris and Donald Trump about climate change looks. We can simulate how a conversation between a harsh teacher and a disgruntled student with a learning disadvantage looks. We can simulate how doctor-patient interactions look. We can find insights by exploring the unknown or the difficult conversations that haven't been had, and dive into the future.
Join us on this exciting journey as we continue to push the boundaries of what's possible with AI. With ConvSim, the future of interactive experiences is here, and it's incredible.
NOTE: We have a video demo for our project, and have not been able to include it in this project due to some technical difficulties with the submission. We are so excited to share the product in person, and please reach out if you would like to see the demo!
|
## Inspiration
The inspiration for this project came from our personal situations at home, that is, the fact that we are now always at home. We came to realize that since we spend all day at home as students with remote learning, it has become more difficult to hold our regular routines, and as such we find ourselves taking care of our physical and mental wellbeing a lot less. In addition to that, we have seen our productivity plummet outside of a dedicated learning environment, making online school all the more difficult. As such, we decided to make an application that remote students or workers like us (or anybody else for that matter) could use to help them stay on track of their personal wellbeing and productivity.
## What it does
Our application promotes physical and mental wellbeing, as well as productivity, by giving the user new challenges every day (that relate to the user) to encourage them to take care of what needs to be cared for. Every day, the user receives several challenges in each category (physical wellbeing, mental wellbeing, and productivity), and each challenge carries a certain number of points that will be rewarded to the user upon the completion of said challenge. These points serve to fill each category's point bar (much like an experience bar in many video games), which then serves to level up the user. This gamification of daily tasks is how we hope to keep the user engaged, and just in case our users forget about our daily challenges, we send gentle (and perhaps lightly annoying) reminders in the form of notifications to keep the user focused and engaged. At the end of the day, we serve to help *them*.
## How we built it
We built this application on the Android platform using Android Studio, since it was a technology none of us had really ever used before, making the experience an incredibly interesting challenge. Android Studio uses Java or Kotlin, and we chose Java. We also used Google's Firebase for our authentication and database (all in the cloud).
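The app itself does this through the Android client SDK in Java; purely as a hedged illustration of the same read/write pattern, here is the equivalent with the Firebase Admin SDK for Python (the credential path, database URL, and field names are placeholders):

```python
# Illustrative sketch of the challenge-points read/write pattern (the real app
# uses the Android client SDK in Java). Paths and field names are placeholders.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder path
firebase_admin.initialize_app(cred, {"databaseURL": "https://example.firebaseio.com"})

def add_challenge_points(uid: str, category: str, points: int) -> None:
    """Add completed-challenge points to one category's running total."""
    ref = db.reference(f"users/{uid}/{category}/points")
    ref.set((ref.get() or 0) + points)
```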
## Challenges we ran into
Many challenges arose, mostly stemming from our lack of experience with the platform. Some of the biggest included database integration (Firebase's real-time database turned out to be not as beginner-friendly as we had hoped) and designing the actual application (Android Studio's built-in XML visualizer is great, but not nearly as intuitive as web programming, for instance).
## Accomplishments that we're proud of
Our proudest accomplishment would have to be the integration between the authentication and the database through Firebase. While they do have a lot of built-in features to make android development easier, connecting those together and ensuring user data continuously gets stored and updated session after session was extremely gratifying once we finally got it to work.
## What we learned
Aside from the obvious lessons about Java and Android development, we learned quite a bit about cloud authentication and databases, and perhaps most importantly, the importance of quality version control. Git is a national treasure, and learning how to use it most efficiently may have been the greatest lesson out of this entire experience for us.
## What's next for Oliver - Your Personal Wellbeing Coach
Since we are a group of friends who enjoy coding in their spare time, the plan is to polish Ollie up and put him on the Google play store. We will, of course, continue using it on our own, but we want to make sure our friends and everyone else in the world who could benefit can have access as well :)
|
winning
|
## Inspiration
The preservation of cultural heritage and history has become a significant challenge, particularly with the declining interest in learning about them. Motivated by the theme of "Nostalgia," our project aims to address this issue by creating an accessible and immersive experience for individuals of all ages to **explore heritage through the transformation of their own image in a captivating virtual world**. Inspired by modern technology's photo filters in popular apps like Instagram and Snapchat, we seek to make the application of these filters more meaningful and enriching.
## What it does
RetroPix is a digital product strategically installed in public mirrors across cities worldwide. When the user stands in front of the mirror, it takes them on a personalized journey back in time by transforming their outfit and surroundings to the era and location of their choice.
After the user takes their photo, the transformed photo is added to a global photo album online. This is where the user can view pictures that other people have taken in RetroPix mirrors around the world and see people dressed in traditional clothing at specific points in time. For example, an album of the 2000s might consist of garments from Vietnam, Pakistan, India, Indonesia, etc.
## How we built it
For the functionality prototype, we started with plain HTML/CSS and some handwritten JS files, with zero frameworks. The AI image processor needs to see the image from a URL, so we used a temporary online storage API that holds our images for 5 minutes. We then took the user's input specifying the place and time they want to go and composed that information into a prompt for the AI processor. We also used **Cohere’s text generator AI** to give fun facts about the chosen time and place, as a user experience improvement while the user waits for the processed image to be returned. For the product page, we quickly whipped up a prototype using Canva’s website designer.
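As a rough sketch of the fun-fact step (assuming Cohere's Python SDK chat endpoint; the exact endpoint, model, and prompt used may differ):

```python
# Hedged sketch of the fun-fact generation step; key and prompt are placeholders.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def fun_fact(place: str, era: str) -> str:
    """Entertain the user while the transformed image is being processed."""
    prompt = (
        f"Share one short, surprising fun fact about everyday clothing "
        f"and life in {place} during {era}."
    )
    return co.chat(message=prompt).text

print(fun_fact("Vietnam", "the 2000s"))
```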
## Challenges we ran into
One of the main challenges we faced was working with unfamiliar tech stacks. To overcome this hurdle, we actively sought help from mentors, utilized resources like Cohere and Stack Overflow, and engaged in collaborative problem-solving.
## Accomplishments that we're proud of
Our proudest accomplishment lies in the process of ideating and finalizing our concept. As a team with many ideas, we spent considerable time aligning our perspectives to finalize a project that combined everyone's strengths. Despite all of us being beginner hackers, we worked together to make decisions on which platforms to use, optimized our time, ensured full usability of our project, and simultaneously adopted new skills.
## What we learned
Throughout the hackathon, we gained valuable insights into working with APIs, tech stacks, the intricacies of front-end development, and a better understanding of how JavaScript modules function.
## Business Feasibility
Anticipating 15% monthly customer growth, our conservative projections estimate substantial annual revenue. At only around $9 per use, the product becomes profitable after the third quarter, accounting for the initial investment. Global installations at popular tourist attractions enhance its scalability.
## What's next for RetroPix?
The future of RetroPix holds immense potential for preserving cultural history, increasing awareness of diversity and inclusion, and emphasizing historical accuracy. This endeavor serves as a bridge for historical education and cultural appreciation. To improve the product, we will research and incorporate knowledge of lesser-visited countries and Indigenous communities.
Looking ahead, we envision RetroPix as a promising product with financial viability and the prospect of reaching customers worldwide. Scalability is a key focus, with plans to expand the product globally and spark curiosity for learning.
|
## Inspiration
As we began thinking about potential projects to make, we realized that there was no real immersive way to speak to those that have impacted the world in a major way. It is just not as fun to look up Wikipedia articles and simply read the information that is presented there, especially for the attention-deficient current generation. Thinking of ways to make this a little more fun, we came up with the idea of bringing these characters to life, in order to give the user the feeling that they are actually talking and learning directly from the source(s): the individual(s) that actually came up with the ideas that the users are interested in. In terms of the initial idea, we were inspired by the Keeling Curve, where we wanted to talk to Charles David Keeling, who unfortunately passed away in 2005, about his curve.
## What it does
Our application provides an interactive way for people to learn in a more immersive manner about climate change or other history. It consists of two pages, the first in which the user can input a historical character to chat with, and the second to "time travel" into the past and spectate on a conversation between two different historical figures. The conversation utilizes voice as input, but also displays the input and the corresponding response on the screen for the user to see.
## How we built it
The main technologies that we used are Hume AI, Intel AI, Gemini, and Vite (a build tool we used with our React front end). Hume AI is used for the text and voice generation, in order to have the responses be expressive, which would hopefully engage the user a bit more. Intel AI is used to generate images using Stable Diffusion to accompany the generated text, again to hopefully increase the immersiveness. Gemini is used to generate the conversations between two different historical figures in the "time travel" screen. Finally, we used Vite to create a front end that merges everything together and provides an interface for the user to interact with the other technologies that we used.
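As a hedged sketch of the "time travel" conversation (the model name and persona prompts here are illustrative assumptions, not the app's exact values):

```python
# Illustrative sketch of two historical figures conversing via Gemini.
# Model name and persona wording are placeholder assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

def historical_figure(name: str):
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction=(
            f"You are {name}. Stay historically accurate and never reference "
            f"events or discoveries from after your lifetime."
        ),
    )
    return model.start_chat()

keeling = historical_figure("Charles David Keeling")
arrhenius = historical_figure("Svante Arrhenius")

line = "Let us discuss what my CO2 measurements at Mauna Loa imply."
for _ in range(3):  # a few alternating turns for the demo
    line = arrhenius.send_message(line).text
    line = keeling.send_message(line).text
```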
## Challenges we ran into
One challenge we faced was just with the idea generation phase, as it took us a while to polish the idea enough to make this an awesome application. We went through a myriad of other ideas, eventually settling in on this idea of interacting with historical figures, as we believed this would provide the best form of enrichment to a potential user.
We also tried switching from Gemini to OpenAI, but due to the way the APIs are implemented, it was unfortunately not as easy as dropping in OpenAI everywhere Gemini was used. Thus, we decided that it was best to stick with Gemini, as it still does quite a good job of generating the responses we require.
Another challenge that we faced was the fact that it is quite difficult to manage conversations between different assistants, like for instance in the "time travel" page, where two different historical figures (two different assistants) are supposed to have a productive conversation.
## Accomplishments that we're proud of
We are quite proud of the immersiveness of the application. It really does feel as if the user is speaking to the person in question, and not a cheap knockoff trying to pretend to be that person. The assistant is also historically accurate and does not deviate from what was requested, avoiding topics that the historical figure could not possibly have knowledge of, such as events or discoveries after they passed away. In addition to this, we are also proud of the features that we managed to include in our final application, such as the ability to change the historical figure that the user wants to talk to, in addition to the "time travel" feature, which allows the user to experience how different historical figures would interact with each other.
## What we learned
We would say that the most important skill that we learned was the art of working together as a team. When we had issues or were confused about certain parts of our application, talking through and explaining different parts proved to be quite an invaluable act to perform. In addition to this, we learned how to integrate various APIs and technologies, and making them work together in a seamless fashion in order to make a successful and cohesive application. We also learned the difficult process of coming up with the idea in the first place, especially one that is good enough to be viable.
## What's next for CLIMATE CHANGE IS BEST LEARNED FROM THE EXPERTS THEMSELVES
The next steps would be to include more features, such as a video feed that makes it feel as if the user is video chatting with the historical figure, furthering the immersiveness of our application. It would also be quite nice to figure out OpenAI integration, and let the user choose the AI assistant they would like to use in the future.
|
## Inspiration
We were inspired by the Instagram app, which set out to connect people using photo media.
We believe that the next evolution of connectivity is augmented reality, which allows people to share and bring creations into the world around them. This revolutionary technology has immense potential to help restore the financial security of small businesses, which can no longer offer the same in-person shopping experiences they once did before the pandemic.
## What It Does
Metagram is a social network that aims to restore the connection between people and small businesses. Metagram allows users to scan creative works (food, models, furniture), which are then converted to models that can be experienced by others using AR technology.
## How we built it
We built our front-end UI using React.js, Express/Node.js and used MongoDB to store user data. We used Echo3D to host our models and AR capabilities on the mobile phone. In order to create personalized AR models, we hosted COLMAP and OpenCV scripts on Google Cloud to process images and then turn them into 3D models ready for AR.
## Challenges we ran into
One of the challenges we ran into was hosting software on Google Cloud, as it needed CUDA to run COLMAP. Since this was our first time using AR technology, we faced some hurdles getting to know Echo3D. However, the documentation was very well written, and the API integrated very nicely with our custom models and web app!
## Accomplishments that we're proud of
We are proud of being able to find a method in which we can host COLMAP on Google Cloud and also connect it to the rest of our application. The application is fully functional, and can be accessed by [clicking here](https://meta-match.herokuapp.com/).
## What We Learned
We learned a great deal about hosting COLMAP on Google Cloud. We were also able to learn how to create an AR and how to use Echo3D as we have never previously used it before, and how to integrate it all into a functional social networking web app!
## Next Steps for Metagram
* [ ] Improving the web interface and overall user experience
* [ ] Scan and upload 3D models in a more efficient manner
## Research
Small businesses are the backbone of our economy. They create jobs, improve our communities, fuel innovation, and ultimately help grow our economy! For context, small businesses made up 98% of all Canadian businesses in 2020 and provided nearly 70% of all jobs in Canada [[1]](https://www150.statcan.gc.ca/n1/pub/45-28-0001/2021001/article/00034-eng.htm).
However, the COVID-19 pandemic has devastated small businesses across the country. The Canadian Federation of Independent Business estimates that one in six businesses in Canada will close their doors permanently before the pandemic is over. This would be an economic catastrophe for employers, workers, and Canadians everywhere.
Why is the pandemic affecting these businesses so severely? We live in the age of the internet after all, right? Many retailers believe customers shop similarly online as they do in-store, but the research says otherwise.
The data is clear. According to a 2019 survey of over 1000 respondents, consumers spend significantly more per visit in-store than online [[2]](https://www.forbes.com/sites/gregpetro/2019/03/29/consumers-are-spending-more-per-visit-in-store-than-online-what-does-this-man-for-retailers/?sh=624bafe27543). Furthermore, a 2020 survey of over 16,000 shoppers found that 82% of consumers are more inclined to purchase after seeing, holding, or demoing products in-store [[3]](https://www.businesswire.com/news/home/20200102005030/en/2020-Shopping-Outlook-82-Percent-of-Consumers-More-Inclined-to-Purchase-After-Seeing-Holding-or-Demoing-Products-In-Store).
It seems that our senses and emotions play an integral role in the shopping experience. This fact is what inspired us to create Metagram, an AR app to help restore small businesses.
## References
* [1] <https://www150.statcan.gc.ca/n1/pub/45-28-0001/2021001/article/00034-eng.htm>
* [2] <https://www.forbes.com/sites/gregpetro/2019/03/29/consumers-are-spending-more-per-visit-in-store-than-online-what-does-this-man-for-retailers/?sh=624bafe27543>
* [3] <https://www.businesswire.com/news/home/20200102005030/en/2020-Shopping-Outlook-82-Percent-of-Consumers-More-Inclined-to-Purchase-After-Seeing-Holding-or-Demoing-Products-In-Store>
|
losing
|
# Seize control of your learning with **Branchly**!
## Inspiration
We are curious people, deeply interested in learning new things. As hackers, we are very familiar with the availability of a seemingly endless amount of free, quality resources on the internet. Though these resources are readily available, they are often overwhelming and out of order. Learning a new topic or skill without outside guidance can seem like a maze, where you can often spend more time figuring out where to start than actually learning. In this project, we set out to make the process of self-learning new skills as accessible as possible by guiding anyone to a structured, organized path of learning.
Inspired by skill trees in video games, which illustrate **progression** and **mastery**, we aimed to make self-learning more **engaging and structured**. We looked to emulate the core aspects of video-game progression, most notably skill points, to gamify the learning process.
We also recognized a lack of tools that offer organized pathways for self-directed learning, especially for diverse interests and skills. We wanted to make sure that our platform can **accommodate any skill**, no matter how niche. Anyone should be able to use our platform to learn any skill.
Not only did we want to make a platform for individual users to learn skills of their choosing, we also set out to design a **collaborative space** where users can share and learn together.
## What it does
The core mechanism of *Branchly* is the custom user skill tree. Every user builds up a **skill tree** (which represents all of the knowledge that they have learnt on the platform) from scratch, adding **"branches"**, or topics, that they want to master. Each branch represents a single topic that can be learnt, and is made up of individual **"leaves"**, or lessons, that are needed to master a branch. For example, an aspiring mathematician might add the calculus branch to their skill tree. In order to make progress in the completion of the calculus branch, this user would need to complete individual "leaves" on smaller topics, such as lessons about limits, derivatives, etc. Completing a lesson (or parts of a lesson) awards the user with skill points, which count towards their total mastery of the branch.
In order to ensure users have **complete control** over the skills they learn, we wanted to give them the opportunity to create their own branches for topics that interest them. To do this, we use a **Large Language model**, which can take in a topic as input, and automatically generate a skill branch complete with leaves that ensure full coverage of the topic's content.
Similar to skill trees in video games, users cannot move onto the next leaf in a branch until they complete all of its prerequisite leaves. This ensures that the user is following the optimal progression to learn the content fully.
*Branchly* is a one-stop learning platform. Users can view their skill tree, add branches, and learn leaves all within the platform. In order to complete a leaf, users are given **multiple personalized recommendations** of material fetched from across the internet, in the form of videos and articles. They can access this material straight on *Branchly*, and completing each piece of material earns **skill points** towards the completion of a leaf. We made sure to give users flexibility on how they could complete lessons.
**Collaboration** is also a large part of *Branchly*. We have a "Discovery" page where users can publish their own branches and also use branches from other users. We hope that this fosters an atmosphere where people can learn from the experiences of others.
## How we built it
*Branchly* is a **Next/ReactJS** web application, which allows for fast reloading and smooth rendering. We use **Tailwind CSS** to incorporate responsive styling and a modern user interface.
The heart of *Branchly* is the skill tree. Trees are made up of smaller branches, which are made up of individual leaves, or nodes. We use **Graphology** and **Sigma.js** to handle our tree's logic and visualization. We also incorporated our own physics to make the graph visualization more responsive and interactive.
In order to automatically generate skill branches given a topic, we engineered a custom prompt for the **LLaMa 3.1** large language model, which we hosted on the cloud using **Groq**. To determine a lesson's difficulty (and how many skill points it should award), we used statistical readability measures such as the Gunning fog index to score any article that is recommended.
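For reference, the Gunning fog index estimates the years of schooling needed to follow a text on first reading. A minimal sketch (not production code), with syllables approximated by counting vowel groups, a common heuristic:

```python
# Gunning fog index: 0.4 * (avg sentence length + 100 * complex-word ratio),
# where "complex" means three or more syllables (approximated by vowel groups).
import re

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words
                     if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

print(gunning_fog("The cat sat on the mat. It purred."))  # easy text, low score
```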
All of our user data is stored on a **MongoDB Atlas** database. This data includes every user's personalized skill tree, and every branch available both privately to individual users and publicly on the community discovery page.
We use a recommendation algorithm based on word embeddings in order to provide users with the most relevant and helpful resources to learn from. First, we scrape the web for reputable websites pertinent to the lesson at hand. We use a **Transformer** model to generate embeddings for the text on each website. The resources whose embeddings have the highest **cosine similarity** to the topic itself are recommended to the user.
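A minimal sketch of that ranking step (the model name is illustrative; any sentence-embedding model works the same way):

```python
# Rank candidate resources by cosine similarity of their embeddings to the topic.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def recommend(topic: str, resources: list, k: int = 3) -> list:
    """Return the k resources most semantically similar to the lesson topic."""
    vecs = model.encode([topic] + resources)
    topic_vec, res_vecs = vecs[0], vecs[1:]
    sims = res_vecs @ topic_vec / (
        np.linalg.norm(res_vecs, axis=1) * np.linalg.norm(topic_vec))
    return [resources[i] for i in np.argsort(sims)[::-1][:k]]
```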
Finally, we use **PropelAuth** to authenticate users, allowing them to save their progress and publish their branches for collaboration.
## Challenges we ran into
While Sigma.js was great for lightweight and simple applications of graphs, it didn't have everything that we wanted. In order to make the leaves of the graph responsive and interactive, we created our own **physics-based simulation** so that users could drag and interact with the leaves smoothly.
Both while working on the simulation mentioned earlier and while making other changes to our tree logic, we had to make sure the user interface remained responsive and efficient. To optimize high-load computations, we made use of **hydration** (attaching client-side interactivity to server-rendered HTML) in our NextJS web app.
## Accomplishments that we're proud of
We are incredibly proud of the fully functional **product** we were able to produce in this short span of time.
We successfully built a dynamic, interactive skill tree that is intuitive for users.
Our use of AI-driven branch generation allows for a highly **personalizable** experience.
We created a visually appealing modular interface with Tailwind CSS, which makes it easy to add new features in the future.
We built a functional recommendation engine that can pull content from multiple free sources.
And we developed a fully working backend with MongoDB Atlas, allowing our web application to be hosted entirely on the cloud.
## What we learned
In the completion of this project, we learnt how to make a responsive UI, even under high computational loads.
We experimented for the first time with prompt engineering with LLaMa and machine-driven semantic analysis with our recommender model.
And we strengthened our understanding of backend architecture, particularly in terms of managing user data with MongoDB.
## What's next for *Branchly*
We fully intend to continue working on *Branchly*. We see the potential of this application to be an extremely powerful learning tool capable of making an impact in the educational journey of a wide range of people.
Moving forward, we plan to:
* implement a more advanced recommendation system that takes into account user preferences over time.
* incorporate assessments to validate users' understanding of content
* introduce more social features for the community (apart from just the ability to browse through skill branches that other users have created)
* add a branch editor, so that users can create custom branches both with and without the help of AI
We are particularly interested in partnering with professional educators (such as professors and teachers), to develop verified skill branches in a wide range of skills.
|
## Inspiration
We got the inspiration while solving some math questions. We were solving some of the questions wrong, but couldn't get any idea in what step we were doing wrong. Online, it was even worse: there were only videos, and you had to figure all of the rest out by yourself. The only way to see exactly where you did a mistake was to have a teacher with you. How crazy! Then, we said, technology could help us solve this, and it could even enable us to build a platform that can intelligently give the most efficient route of learning to each person, so no time would be wasted solving the same things again and again!
## What it does
The app provides you with some questions (currently math) and a drawing area to solve each question. While you are solving, the app compares your handwritten solution steps with the correct ones and tells you whether each step was correct or false. Even more, since it also has educational content built in, it can track and show you more of the questions that you did incorrectly, and even questions involving steps you got wrong while solving other questions.
## How we built it
We built the recognition part using the MyScript math handwriting recognition API, and all the tracking, statistics and other stuff using Swift, UIKit and AVFoundation.
## Challenges we ran into
We ran into lots of challenges while building all the data models, since each one is interconnected with the others, and all the steps, questions, tags, etc. make up quite a large variety of data. With the said variety of data also came a torrent of user interface bugs, and it took *some* perseverance to solve them all as quickly as possible. Also, probably one of the biggest challenges we dealt with was the IDE itself crashing :)
## Accomplishments that we're proud of
We are proud of the data collection and recommendation system that we built from the ground up (entirely in Swift!), and of the UI that we built: even though the app doesn't have a large quantity of educational content inside yet, we built it with easy expansion in mind, so it can grow as content gets added.
## What we learned
The biggest thing we learned was how to build a data set large enough to give personalized recommendations, and also how to divide and conquer it before it gets too complex. We also learned to go beyond what the documentation on the internet offers while debugging, and to solve things by working from examples when there was no documentation on how to implement something.
## What's next for Tat
We think that Tat has quite a potential to redefine education for years to come if we can build more upon it, with more content, more data and even the possibility of integrating crowd-trained AI.
|
## Inspiration
The inspiration for our Auto-Teach project stemmed from the growing need to empower both educators and learners with a **self-directed and adaptive** learning environment. We were inspired by the potential to merge technology with education to create a platform that fosters **personalized learning experiences**, allowing students to actively **engage with the material while offering educators tools to efficiently evaluate and guide individual progress**.
## What it does
Auto-Teach is an innovative platform that facilitates **self-directed learning**. It allows instructors to **create problem sets and grading criteria** while enabling students to articulate their problem-solving methods and responses through text input or file uploads (future feature). The software leverages AI models to assess student responses, offering **constructive feedback**, **pinpointing inaccuracies**, and **identifying areas for improvement**. It features automated grading capabilities that can evaluate a wide range of responses, from simple numerical answers to comprehensive essays, with precision.
## How we built it
Our deliverable for Auto-Teach is a full-stack web app. Our front end uses **ReactJS** as our framework and manages data using **Convex**. Moreover, it leverages editor components from **TinyMCE** to give students a better experience editing their inputs. We also created back-end APIs using **FastAPI** and **Together.ai APIs** while building the AI evaluation feature.
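A minimal sketch of what such a grading endpoint could look like (the route, model id, and prompt wording are illustrative assumptions, not Auto-Teach's exact API):

```python
# Hedged sketch of an LLM-backed grading endpoint with FastAPI + Together.ai.
from fastapi import FastAPI
from pydantic import BaseModel
from together import Together

app = FastAPI()
llm = Together()  # reads TOGETHER_API_KEY from the environment

class Submission(BaseModel):
    question: str
    rubric: str
    answer: str

@app.post("/grade")
def grade(sub: Submission) -> dict:
    prompt = (f"Question: {sub.question}\nRubric: {sub.rubric}\n"
              f"Student answer: {sub.answer}\n"
              "Grade the answer against the rubric, point out inaccuracies, "
              "and suggest one concrete improvement.")
    resp = llm.chat.completions.create(
        model="meta-llama/Llama-3-8b-chat-hf",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    return {"feedback": resp.choices[0].message.content}
```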
## Challenges we ran into
We had trouble incorporating Vectara's REST API and MindsDB into our project because we were not very familiar with their structure and implementation. We eventually figured out how to use them, but struggled with the time constraint. We also faced the challenge of generating the most effective prompt for the chatbot so that it produces the best response to student submissions.
## Accomplishments that we're proud of
Despite the challenges, we're proud to have successfully developed a functional prototype of Auto-Teach. Achieving an effective system for automated assessment, providing personalized feedback, and ensuring a user-friendly interface were significant accomplishments. We are also proud that, by the end, we had effectively incorporated many technologies, like Convex and TinyMCE, into our project.
## What we learned
We learned how to work with backend APIs and how to generate effective prompts for the chatbot. We also got introduced to AI-incorporated databases such as MindsDB and were fascinated by what they can accomplish (such as generating predictions based on data that arrives on a streaming basis and getting regular updates on information passed into the database).
## What's next for Auto-Teach
* Divide the program into **two modes**: **instructor** mode and **student** mode
* **Convert Handwritten** Answers into Text (OCR API)
* **Incorporate OpenAI** tools along with Together.ai when generating feedback
* **Build a database** storing all relevant information about each student (ex. grade, weakness, strength) and enabling automated AI workflow powered by MindsDB
* **Complete analysis** of students' performance on different types of questions, allowing teachers to learn about each student's weaknesses.
* **Fine-tune the grading model** using tools from Together.ai to calibrate the model to provide better feedback.
* **Notify** students instantly about their performance (could set up notifications using MindsDB and get notified every day about any poor performance)
* **Upgrade security** to protect against unauthorized access
|
losing
|
## Inspiration
Not all hackers wear capes - but not all capes get washed correctly. Dorming on a college campus the summer before our senior year of high school, we realized how difficult it was to decipher laundry tags and determine the correct settings to use while juggling a busy schedule and challenging classes. We decided to try Google's up-and-coming **AutoML Vision API Beta** to detect and classify laundry tags, to save headaches, washing cycles, and the world.
## What it does
L.O.A.D identifies the standardized care symbols on tags, considers the recommended washing settings for each item of clothing, clusters similar items into loads, and suggests care settings that optimize loading efficiency and prevent unnecessary wear and tear.
## How we built it
We took reference photos of hundreds of laundry tags (from our fellow hackers!) to train a Google AutoML Vision model. After trial and error and many camera modules, we built an Android app that allows the user to scan tags and fetch results from the model via a call to the Google Cloud API.
## Challenges we ran into
Acquiring a sufficiently sized training image dataset was especially challenging. While we had a sizable pool of laundry tags available here at PennApps, our reference images only represent a small portion of the vast variety of care symbols. As a proof of concept, we focused on identifying six of the most common care symbols we saw.
We originally planned to utilize the Android Things platform, but issues with image quality and processing power limited our scanning accuracy. Fortunately, the similarities between Android Things and Android allowed us to shift gears quickly and remain on track.
## Accomplishments that we're proud of
We knew that we would have to painstakingly acquire enough reference images to train a Google AutoML Vision model with crowd-sourced data, but we didn't anticipate just how awkward asking to take pictures of laundry tags could be. We can proudly say that this has been a uniquely interesting experience.
We managed to build our demo platform entirely out of salvaged sponsor swag.
## What we learned
As high school students with little experience in machine learning, Google AutoML Vision gave us a great first look into the world of AI. Working with Android and Google Cloud Platform gave us a lot of experience working in the Google ecosystem.
Ironically, working to translate the care symbols has made us fluent in laundry. Feel free to ask us any questions!
## What's next for Load Optimization Assistance Device
We'd like to expand care symbol support and continue to train the machine-learned model with more data. We'd also like to move away from pure Android, and integrate the entire system into a streamlined hardware package.
|
## Inspiration
We want to make everyone impressed by our amazing project! We wanted to create a revolutionary tool for image identification!
## What it does
It will identify any pictures that are uploaded and describe them.
## How we built it
We built this project with tons of sweat and tears. We used the Google Vision API, Bootstrap, CSS, JavaScript and HTML.
## Challenges we ran into
We couldn't find a way to use the API key. We couldn't link our HTML files with the stylesheet and the JavaScript file. We didn't know how to add drag-and-drop functionality. We couldn't figure out how to use the API in our backend. We also had to edit the video with a new video editing app, and we had to watch a lot of tutorials.
## Accomplishments that we're proud of
The whole program works (backend and frontend). We're glad that we'll be able to make a change to the world!
## What we learned
We learned that Bootstrap 5 doesn't use jQuery anymore (the hard way). :'(
## What's next for Scanspect
The drag-and-drop function for uploading images!
|
## Inspiration
One day, one of our teammates was throwing out garbage in his apartment complex and the building manager made him aware that certain plastics he was recycling were soft plastics that can't be recycled.
According to a survey commissioned by Covanta, “2,000 Americans revealed that 62 percent of respondents worry that a lack of knowledge is causing them to recycle incorrectly (Waste360, 2019).” We also found that the delayed nature of recycling's consequences plays a role: “Because the reward [and] the repercussions for recycling... aren’t necessarily immediate, it can be hard for people to make the association between their daily habits and those habits’ consequences (HuffingtonPost, 2016)”.
From this research, we found that a lack of knowledge or awareness can be detrimental not only to personal life, but also to meeting governmental, societal, environmental, and sustainability goals.
## What it does
When an individual is unsure of how to dispose of an item, "Bin it" allows them to quickly scan the item and find out not only how to sort it (recycling, compost, etc.) but additional information regarding potential re-use and long-term impact.
## How I built it
After brainstorming before the event, we split roles into backend, frontend, and UX design/research. We conceived and prioritized features as we went, based on secondary research, experimenting with code, and interviewing a few hackers at the event about their recycling habits.
We used the Google Vision API for the object recognition / scanning process. We then used Vue and Flask as our development frameworks.
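A minimal sketch of the scan step (the label-to-bin mapping here is a toy placeholder, not the full disposal data):

```python
# Classify a scanned item into a disposal bin from Cloud Vision labels.
# The label-to-bin mapping is a toy placeholder, not a full dataset.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

def classify_item(image_bytes: bytes) -> str:
    image = vision.Image(content=image_bytes)
    labels = client.label_detection(image=image).label_annotations
    bins = {"plastic": "recycling", "banana": "compost", "battery": "hazardous"}
    for label in labels:  # iterate labels (typically highest-confidence first)
        for keyword, bin_name in bins.items():
            if keyword in label.description.lower():
                return bin_name
    return "garbage"
```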
## Challenges I ran into
We ran into challenges with deploying the application. Getting set up was also a challenge, which our backend developers slowly overcame by onboarding the team and troubleshooting.
## Accomplishments that I'm proud of
We were able to work as a team towards a goal, learn, and have fun! We were also able to work with multiple Google APIs, and we completed the core feature of our project.
## What I learned
Learning to work with people in different roles was interesting. We also learned about designing and developing from a technical standpoint, such as designing a mobile web UI, deploying an app with Flask, and working with Google APIs.
## What's next for Bin it
We hope to review feedback and save this as a great hackathon project to potentially build on, and to apply our learnings to future projects.
|
winning
|
## Inspiration
Is it possible to get a refrigerator from New York to Boston in less than a day without shelling out exorbitant delivery fees? How can we make the shopping experience for disabled persons more convenient, cheap, and independent?
## What it does
OnTheWay is an innovative P2P delivery system that harnesses the power of pre-existing routes to minimize the need for inconvenient, long-distance trips and therefore reduce environmental impact. Users are automatically matched to drivers who already have the user's item and pick-up spot along their route. Drivers are compensated by the user for the resulting minor detour.
## How we built it
We used two apps, one for the user side and one for the driver side. Both apps were created using Java/Kotlin in Android Studio, with the driver app being optimized for General Motors vehicles via the General Motors API and the user app being optimized for mobile devices. Backend server APIs were created using Node.js and the Distance Matrix API on Google Cloud Platform.
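The backend makes this call from Node.js; as a hedged illustration of the detour computation, here is the equivalent Distance Matrix request over plain HTTP (the addresses, key, and function names are placeholders):

```python
# Extra travel time a driver incurs by adding a pick-up stop to their route.
import requests

URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def seconds(a: str, b: str, key: str) -> int:
    """Driving duration in seconds from a to b."""
    r = requests.get(URL, params={"origins": a, "destinations": b, "key": key})
    return r.json()["rows"][0]["elements"][0]["duration"]["value"]

def detour(origin: str, stop: str, destination: str, key: str) -> int:
    """Seconds added to the driver's trip by the extra stop."""
    return (seconds(origin, stop, key) + seconds(stop, destination, key)
            - seconds(origin, destination, key))
```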
## What we learned
We learned a lot about Android development, APIs, and time management!!!
|
# Things2Do
Minimize time spent planning and maximize having fun with Things2Do!
## Inspiration
The idea for Things2Do came from the difficulties that we experienced when planning events with friends. Planning events often involves venue selection, which can be a time-consuming, tedious process. Our search for solutions online yielded websites like Google Maps, Yelp, and TripAdvisor, but each fell short of our needs and often had complicated filters or cluttered interfaces. More importantly, we were unable to find an event-planning tool that accounts for the total duration of an outing, much less one that schedules visits to multiple venues while accounting for travel time. This inspired us to create Things2Do, which minimizes time spent planning and maximizes time spent at meaningful locations for a variety of preferences on a tight schedule. Now, there's always something to do with Things2Do!
## What it does
Share quality experiences with people that you enjoy spending time with. Things2Do suggests the top 3 venues to visit given constraints on the time spent at each venue, distance, and the selected category of place. Furthermore, the requirements surrounding the duration of a complete event plan across multiple venues can become increasingly complex when trying to account for the tight schedules of attendees, a wide variety of preferences, and travel time between multiple venues throughout the duration of an event.
## How we built it
The functionality of Things2Do is powered by various APIs that retrieve venue details and perform spatiotemporal analysis, with React for the front end and Express.js/Node.js for the back end (a travel-time sketch follows the lists below).
APIs:
* openrouteservice to calculate travel time
* Geoapify for location search autocomplete and geocoding
* Yelp to retrieve names, addresses, distances, and ratings of venues
Languages, tools, and frameworks:
* JavaScript for compatibility with React, Express.js/Node.js, Verbwire, and other APIs
* Express.js/Node.js backend server
* TailwindCSS for styling React components
Other services:
* Verbwire to mint NFTs (for memories!) from event pictures
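As referenced above, here is a minimal sketch of the travel-time lookup (assuming openrouteservice's v2 matrix endpoint; the key and coordinates are placeholders, given as [lon, lat] pairs):

```python
# Pairwise driving durations (minutes) between candidate venues via the
# openrouteservice matrix endpoint. API key and venues are placeholders.
import requests

def travel_minutes(coords: list, api_key: str) -> list:
    resp = requests.post(
        "https://api.openrouteservice.org/v2/matrix/driving-car",
        json={"locations": coords, "metrics": ["duration"]},  # [lon, lat] pairs
        headers={"Authorization": api_key},
    )
    durations = resp.json()["durations"]  # NxN matrix in seconds
    return [[d / 60 for d in row] for row in durations]
```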
## Challenges we ran into
Initially, we wanted to use the Google Maps API to find locations of venues, but these features were not part of the free tier, and even if we were to implement them ourselves it would still put us at risk of spending more than the free tier would allow. This resulted in us switching to Node.js for the backend, which works well with JavaScript and offers better support for the open-source APIs that we used. We also struggled to find a free geocoding service, so we settled for Geoapify, which has a free tier. JavaScript was also used so that Verbwire could be used to mint NFTs based on images from the event. Researching all of these new APIs and scouring documentation to determine whether they fulfilled the desired functionality of Things2Do was an enormous task, since we had never used them before and were forced to do so for compatibility with the other services that we were using. Finally, we underestimated the time it would take to integrate the front end with the back end and add the NFT minting functionality, on top of debugging.
A challenge we also faced was devising a method of computing an optimal event plan in consideration of all required parameters. This involved looking into algorithms like the Travelling Salesman Problem, Dijkstra's, and A\*.
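As a toy illustration of the ordering search: with only a handful of venues, brute-forcing every visiting order is tractable (the venue names and travel-time table below are placeholders):

```python
# Brute-force "travelling salesman" ordering over a small set of venues.
from itertools import permutations

def best_order(venues: list, minutes: dict) -> tuple:
    """Visiting order minimizing total travel time between consecutive venues."""
    def cost(order):
        return sum(minutes[a, b] for a, b in zip(order, order[1:]))
    return min(permutations(venues), key=cost)

minutes = {("cafe", "arcade"): 12, ("arcade", "cafe"): 12,
           ("cafe", "park"): 5, ("park", "cafe"): 5,
           ("arcade", "park"): 9, ("park", "arcade"): 9}
print(best_order(["cafe", "arcade", "park"], minutes))
```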
## Accomplishments that we're proud of
Our team is most proud of meeting all of the goals that we set for ourselves coming into this hackathon and tackling this project. Our goals consisted of learning how to integrate front-end and back-end services, creating an MVP, and having fun! The perseverance that was shown while we were debugging into the night and parsing messy documentation was nothing short of impressive and no matter what comes next for Things2Do, we will be sure to walk away proud of our achievements.
## What we learned
We can definitively say that we learned everything that we set out to learn during this project at DeltaHacks IX.
* Integrate front-end and back-end
* Learn new languages, libraries, frameworks, or services
* Include a sponsor challenge and design for a challenge theme
* Time management and teamwork
* Web3 concepts and application of technology
## Things to Do
The working prototype that we created is a small segment of everything that we would want in an app like this but there are many more features that could be implemented.
* Multi-user voting feature using WebSockets
* Extending categories of hangouts
* Custom restaurant recommendations from attendees
* Ability to have a vote of "no confidence"
* Send out invites through a variety of social media platforms and calendars
* Scheduling features for days and times of day
* Incorporate hours of operation of venues
|
## Inspiration
Inspired by our own struggles, as college students, to:
1. Find nearby grocery stores and
2. Find transportation to the already sparse grocery stores,
we created car•e to connect altruistic college kids, with cars (or other forms of transportation) and a love for browsing grocery store aisles, with peers (without modes of transportation) looking to grab a few items from the grocery store.
Food insecurity is a huge issue on college campuses. It was estimated that over 10% of Americans were food insecure in 2020, while among college students the figure was over 23%. Food insecurity has a strong correlation with student educational outcomes: not only are food-insecure students less likely to excel academically, they are also less likely to obtain a bachelor's degree.
Our team wanted to address the sustainability track and the Otsuka Valuenex challenge because we identified a gap in organizations that address the lack of food access for college students. Being unable to easily access a grocery store would be inconvenient at best, but more than likely perpetuates health issues, economic inequity, and transportation barriers.
## What it does
Our project is a space for college students with the means to take quick trips to the grocery store to volunteer a bit of their time to help out some of their peers. The “shoppers,” or students with transportation, input their grocery list and the planned time and location of the trip. Then, “hoppers,” or college students who would like to hop onto a grocery trip, input their grocery list, and our matching algorithm sorts through the posted shopping trips.
## How we built it
We implemented our idea with low-fidelity prototypes, working from diagrams on a whiteboard to illustrating user flow and design in Figma, and then building the product itself with SwiftUI, Xcode, and Firebase. When producing the similarity-matching results between shoppers and hoppers, we looked into Python, Flask, and Chroma, exploring various methodologies to achieve a good user experience while maximizing efficiency with available shoppers.
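For a sense of what the matching step could look like, here is a minimal sketch that ranks posted shopping trips by grocery-list overlap. It uses plain Jaccard similarity as a stand-in; the Chroma-based approach mentioned above would compare embeddings instead. All names are illustrative.

```python
# Hypothetical sketch of list matching: score each posted shopping trip by
# the overlap between the hopper's grocery list and the shopper's list.
# A production version might use embeddings (e.g., via Chroma) instead.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_trips(hopper_list, trips):
    """Rank shopper trips by grocery-list similarity, best first."""
    return sorted(trips, key=lambda t: jaccard(hopper_list, t["items"]), reverse=True)

trips = [
    {"shopper": "Ana", "items": ["milk", "eggs", "rice"]},
    {"shopper": "Ben", "items": ["tofu", "rice", "apples"]},
]
print(best_trips(["rice", "eggs"], trips)[0]["shopper"])  # Ana
```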
## Challenges we ran into
Through this process, organizing the user flow and fleshing out logistical details proved to be a challenge. Our idea also required familiarization with new technology and its errors, including Firebase, unknown Xcode problems, and Chroma. When faced with these struggles, we dynamically built off each other's ideas and explored various methodologies and approaches to our proposed solution; every idea and its implementation was a discussion. We drew from our personal experiences to create something that would benefit those around us, and we truly came together to address an issue that we all feel passionately about.
## Accomplishments that we're proud of
1. Ideating process
2. Perseverance
3. Ability to build off of ideas
4. Team bonding
5. Exploration
6. Addressing issues we experience
7. Transforming personal experiences into solutions
## What we learned
This experience taught us that there are infinite solutions to a single problem. Thus, when evaluating the most optimal solution, it’s important to evaluate against comprehensive criteria, focusing on efficiency, impact, usability, and implementation. While this process may be time-consuming, it is necessary to keep the user in mind to ensure that the product and its features truly fulfill an unmet user need.
## What's next for car·e
1. Building out future features (e.g. messaging capability)
2. Building a stronger community aspect through features in our app: more bonding and friendships
3. Business model
4. Incentives to get people started on the app
|
partial
|
## Inspiration
Earlier this week, following the devastation of Hurricane Florence, my newsfeed surged with friends offering their excess food and water to displaced community members. Through technology, the world had grown smaller. Resources had been shared.
Our team had a question: what if we could redistribute something else just as valuable? Something just as critical in both our every day lives and in moments of crisis: server space. The fact of the matter is that everything else we depend on, from emergency services apps to messenger systems, relies on server performance as a given. But the reality is that during storms, data centers go down all the time. This problem is exacerbated in remote areas of the world, where redirecting requests to regional data centers isn't an option. When a child is stranded in a natural disaster, mere minutes of navigation mean the difference between a miracle and a tragedy. Those are the moments when we have to be able to trust our technology. We weren't willing to leave that to chance, so Nimbus was born.
## What it does
Nimbus iOS harnesses the processing power of idle mobile phones in order to serve compute tasks. So imagine charging your phone, enabling Nimbus, and allowing your locked phone to act as the server for a schoolchild in Indonesia during typhoon season. Where other distributed computation engines have failed, Nimbus excels. Rather than treating each node as equally suitable for a compute task, our scheduler algorithm takes into account a range of factors before assigning a task to the best node, such as CPU power and the time the user intends to spend idle (how long the user will be asleep, how long the user will be at an offline Facebook event). Users could receive marginal compensation for each compute task, or Nimbus could come bundled into a larger app, like Facebook.
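To make the scheduling idea concrete, here is a hedged sketch (not Nimbus's actual scheduler) that scores candidate nodes by CPU power and expected idle time, rejecting nodes that may wake before the task finishes. Field names and weights are assumptions.

```python
# Illustrative scheduler sketch: prefer fast nodes with long expected idle
# windows, and never assign a task to a node that may wake up mid-task.

def pick_node(task, nodes, cpu_weight=1.0, idle_weight=0.5):
    def score(node):
        if node["idle_minutes"] < task["est_minutes"]:
            return float("-inf")  # node may wake up before the task finishes
        return cpu_weight * node["cpu_ghz"] + idle_weight * node["idle_minutes"]
    best = max(nodes, key=score)
    return best if score(best) > float("-inf") else None

nodes = [
    {"id": "phone-a", "cpu_ghz": 2.8, "idle_minutes": 480},  # asleep all night
    {"id": "phone-b", "cpu_ghz": 3.1, "idle_minutes": 20},
]
print(pick_node({"est_minutes": 60}, nodes)["id"])  # phone-a
```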
Nimbus Desktop, which we've prototyped as a proof of concept in the Desktop branch of our Github repo, uses a central server to assign tasks to each computer-node via Vagrant Docker provisioning. We haven't completed this platform option, but it serves another important product case: enterprise clients. We did the math for you: a medium-sized company running 22,000 EC2 instances on Nimbus Desktop on its idle computers for 14 hours a day could save $6 million / year in AWS fees. In this case, the number of possible attack vectors is minimized because all the requests would originate from within the organization. This is the future of computing because it's far more efficient and environmentally friendly than solely running centralized servers. Data centers are having an increasingly detrimental effect on global warming; Iceland is already feeling its effects. Nimbus Desktop offers a scalable and efficient future. We don't have a resource issue. We have a distribution one.
## How we built it
The client-facing web app is built with react and node.js. The backend is built with node.js. The iOS app is built with react-native, express, and node.js. The Desktop script is built on Docker and Vagrant.
## Challenges we ran into
npm was consistently finicky when we integrated node.js with react-native and built all of that in Xcode with Metro Bundler. We also had to switch the scheduler-node interaction to a pull model rather than a push model to guarantee certain security and downtime minimization parameters. We didn't have time to complete Nimbus Desktop, save stepwise compute progress in a hashed database for large multi-hour computes (this would enable us to reassign the compute to the next best node in the case of disruption and optimize for memory usage), or get to the web compute version (diagrammed in the photo carousel), which would enable the nodes to act as true load balancers for more complex hosting.
## Accomplishments that we're proud of
Ideating Nimbus Desktop happened in the middle of the night. That was pretty cool.
## What we learned
Asking too many questions leads to way better product decisions.
## What's next for nimbus
In addition to the incomplete items in the challenges section, we ultimately would want the scheduler to be able to predict disruption using ML time series data.
|
## Inspiration
No matter how much you use your computer, it's likely you're not using its computing power 24/7. That said, when you need power, it always feels like you can never have enough. Imagine being able to utilize your computer's compute power around the clock. With increasingly powerful machines entering the market (such as Apple's 96 GB RAM M2 MacBook Pro), we're beginning to see an underutilization of these compute resources. On the other hand, we see compute-heavy workloads - such as deep learning models - becoming more prevalent. What if there was a way for someone across the world to use your machine's resources while you sleep? Or if you could supercharge your own programs by using the resources of someone who's off their laptop? Noticing this discrepancy, our team decided to address this problem by creating a platform that connects some users' underutilized compute power to other users' compute needs.
## What it does
CommuniPute allows users to make their compute power available to the community and make money off of their computer's utilization. Users who need compute can request this underutilized compute power for their own innovations. A user can request compute power by browsing the catalog of available compute resources and selecting the compute resource of their choice. The script is run on the selected compute platform.
**Advantages**
* Run on a more powerful machine
* Utilize a more powerful network
* Run your code on the compute architecture of your choice
* Distribute your workloads
## Technical details
* Semaphores to handle multiple simultaneous connections (see the sketch below)
* WebSocket-based communication via Convex
* Docker containerization to prevent privilege escalation and to cap available RAM
* React.js on the frontend with dynamic updates
* A web-based IDE to execute code
* Python code execution, with required libraries downloaded into the Docker container per user specifications
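The semaphore idea from the first bullet can be sketched in a few lines. This is an illustrative Python snippet, not the production host client; the concurrency limit and job body are placeholders.

```python
import asyncio

# Minimal sketch of the semaphore idea: cap how many compute requests a
# host handles at once. The limit and job body are placeholders.

MAX_CONCURRENT_JOBS = 2
job_slots = asyncio.Semaphore(MAX_CONCURRENT_JOBS)

async def handle_request(job_id: int) -> None:
    async with job_slots:               # wait for a free slot
        print(f"job {job_id} started")
        await asyncio.sleep(1)          # stand-in for running code in a container
        print(f"job {job_id} finished")

async def main() -> None:
    await asyncio.gather(*(handle_request(i) for i in range(5)))

asyncio.run(main())
```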
**Future Goals**
We wanted to create a system that can take a compute-heavy workload and intelligently distribute it across available compute resources, allowing requesting users to harness the combined power of those resources for their innovation needs. Given the time constraints of the hackathon, we implemented a proof of concept of the distributed computation model, with parallelization as a future goal.
Please see the "What's next" section for a more elaborate set of future goals.
## How this applies to Sustainability
One of the greatest challenges in the current tech sustainability space is the heavy resource demand of large compute systems. Furthermore, old compute systems are typically trashed, undermining the reduce, reuse, recycle model of sustainability. Our platform can put older compute systems to work fulfilling the industry's ever-growing demand for compute power. By leveraging these underutilized compute platforms, we reduce the need to aggressively extract materials for new compute systems, thus facilitating the reduce, reuse, recycle cycle.
## How this applies to Education
Education within the computer science or technology space often assumes the presence of powerful compute systems. However, for underprivileged students, a lack of compute capacity is often a handicap. Cheap access to unused hardware allows greater access to education, equity, and potential for innovation.
This platform also allows an easier entry into compute-heavy fields. Users no longer need to get up to speed on cloud compute providers such as AWS, Azure, or Google Cloud. They can simply write their functions and hit run without worrying about where their compute will run.
Furthermore, academic institutions often have many compute systems which are underutilized or outright not used. Our platform will allow for the utilization of these compute platforms by students, researchers, academics, and professors within the institution.
## How this applies to New Frontiers (ML/AI)
Deep learning is becoming the new game-changing innovation within the Machine Learning and Artificial Intelligence space. However, the creation of deep learning models requires modeling neural networks, which demands significant amounts of compute. Our platform can alleviate these challenges by providing readily available compute very cheaply. A real-world use case where our platform could have helped is COVID-19 research, during which scientists at IBM created a "grid computing" platform that asked users to offer their machines for running compute-heavy workloads. We hope to make this level of compute readily, and cheaply, available to any ML/AI innovator.
Furthermore, as mentioned in the "How this applies to Education" section, our platform allows ML/AI engineers to focus solely on their innovation rather than setting up compute systems on AWS/Azure/Google Cloud to support their compute-heavy workloads. This not only gives innovators access to cheap compute power but also lowers the barriers to entry to the ML/AI innovation space.
## How this applies to Healthcare
One of the greatest challenges in healthcare relates to patient safety. Typically, patient data is regulated so that it cannot leave the healthcare institution's network; therefore, the cloud is not an option for offloading heavy machine learning workloads. Our platform can provide a solution in this space by allowing healthcare institutions to utilize all their compute systems for running compute-heavy workloads.
## How this applies to Web 3.0/Blockchain
Coin mining requires compute resources, and a major challenge for coins is the lack of compute power available to mine them. Leveraging underutilized compute resources can supply that mining power.
## How this helps Developers
Our platform serves as a tool that developers can utilize in multiple ways:
* Developers have an ever-increasing need for compute power, and our platform makes immense compute power readily accessible to them. For example, developers working on machine learning workloads can use our platform to run their jobs and get results without the overhead of setting up a cloud platform for their compute needs.
* Developers want to test their products on multiple compute architectures and operating systems. Our platform lets users choose which available machine to run their work on. For example, a developer may want to ensure that their app works on x86 architecture; since our platform provides information about each available compute platform, the developer can choose an appropriate x86 machine with the host OS of their choice.
## How we built it
We created 3 separate modules for this project. The three modules are as follows:
* **Host-side Client:** The host-side client is a Python application. It communicates its availability to the server, receives compute requests, and runs the submitted code inside a Docker container.
* **Backend Solution:** We leveraged Convex's backend capabilities and WebSockets solution. The backend connects available compute resources to requesting users. Since Convex uses WebSockets under the hood, we were able to leverage real-time reactive updates, enabling two-way communication from the server to the client and from the client to the server. It was imperative to send updates from the backend to the client, and Convex simplified the logic and infrastructure that would otherwise have been required - the WebSockets solution was a game-changing asset.
* **Web App:** The web app gave requesting users an interface for viewing and requesting available compute platforms.
Please see the uploaded images for the architecture diagram.
## Challenges we ran into
**Security Considerations**
As a compute-sharing platform, our foremost challenge was running code safely within a containerized environment. We wanted to ensure security for both the host machine and the requesting machine: code run on the host machine shouldn't harm the host, and the requesting machine's code should have a level of protection against being readily observed by the host.
We addressed these challenges by using containerization to isolate execution of the code. Any time a compute request is made, we spin up a separate compute container that executes the code in a complete silo.
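A minimal sketch of that silo, assuming the Docker CLI is available: each job runs in a throwaway container with a RAM cap and no network access. The image, paths, and limits below are placeholder choices, not our production configuration.

```python
import subprocess

# Hedged sketch of the containment idea: run untrusted code inside a
# throwaway Docker container with a memory limit and no network. The flags
# shown are standard Docker CLI options; image and paths are placeholders.

def run_in_silo(script_path: str, memory: str = "256m") -> str:
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--memory", memory,          # limit available RAM
            "--network", "none",         # no network access from user code
            "-v", f"{script_path}:/job/script.py:ro",
            "python:3.11-slim",
            "python", "/job/script.py",
        ],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout

print(run_in_silo("/tmp/user_script.py"))
```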
**New Technologies**
Our team came in with strong backend knowledge but a limited working knowledge of front end. We ended up using Convex which simplified the backend logic but placed the brunt of the workload on the frontend technologies. Therefore, coming up to speed with our front end framework (React), JavaScript, and integrating with Convex was the biggest challenge that our team faced.
## Accomplishments that we're proud of
We were able to create a working minimum viable product within a short period of time. The product we created has vast applications within almost every industry that utilizes technology, so our team is most proud of creating a product that makes a difference in every industry and could potentially revolutionize the way we use hardware.
## What we learned
Three of our four teammates were first-time hackers. Furthermore, our entire team came in with limited working knowledge of Convex, JavaScript, React, and frontend technologies. Our team was able to come up to speed with these technologies quickly. Furthermore, we learnt how to work with containerization technologies. We learnt an incredible amount during this project and had a great time working as a team!
## What's next for CommuniPute
There are multiple next iterations planned for our community compute platform:
1. Create an orchestration system that allows one compute job to be orchestrated across multiple compute systems. This will provide utility for workloads such as deep learning and other large jobs.
2. Create a service that allows compute sharing within a local edge network using peer-to-peer connections, without sending compute data to a backend server. This has significant application within the healthcare industry, as regulation prevents patient information from leaving the originating healthcare entity. Healthcare entities would then be able to perform compute on their underutilized compute platforms within their network edge, then send out computed data for centralized processing.
3. Implement a payment mechanism and back a coin using credits
4. Allow for uploading files rather than using the text editor to write code.
|
## Inspiration
The inspiration for our project came from hearing about the massive logistical challenges involved in organising evacuations for events such as Hurricane Florence. We felt that we could apply our knowledge of solving optimisation problems to great effect in this area.
## What it does
ResQueue is a web-app that is designed to be deployed by an aid organisation that is organising rescue or evacuation efforts. It provides an interface for people in need of rescue to mark their location and the urgency of their request on a map.
In the admin interface, the aid organisation is able to define the resources it has in terms of the capacity and quantity of vehicles (e.g. 3 buses with 50 seats, 5 minibuses with 10 seats). Using clustering followed by pathfinding, a route is generated for each vehicle that provides an efficient overall plan for rescuing as many people as possible, as fast as possible.
## How we built it
The WebApp was built using Python combined with the Flask web framework. It was all hosted on Azure, with the database being an Azure Cosmos DB instance. This infrastructure setup would allow us to scale the project in times of crisis.
The routing is done using OpenStreetMap data, combined with the C++ based OSRM project. Groups of people who need to be rescued are clustered using a minimum-spanning-tree-based approach, combining additional weather data obtained from the IBM Cloud Weather API with self-reported urgency. A greedy heuristic for the Travelling Salesman Problem (the farthest-insertion algorithm) was used to select the final order of visiting the users in each cluster.
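For reference, here is a simplified, self-contained version of the farthest-insertion heuristic over a plain distance matrix; the production version works on OSRM road travel times and carries extra weighting for urgency and weather.

```python
# Simplified sketch of the farthest-insertion heuristic used to order the
# pickups in each cluster. `dist` is a symmetric distance matrix; in the
# real system, distances come from OSRM road travel times.

def farthest_insertion(dist):
    n = len(dist)
    # seed the tour with the two mutually farthest points
    i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda p: dist[p[0]][p[1]])
    tour, rest = [i, j], set(range(n)) - {i, j}
    while rest:
        # pick the point farthest from any point already on the tour
        k = max(rest, key=lambda r: min(dist[r][t] for t in tour))
        # insert it where it lengthens the tour the least
        pos = min(range(len(tour)),
                  key=lambda p: dist[tour[p]][k] + dist[k][tour[(p + 1) % len(tour)]]
                              - dist[tour[p]][tour[(p + 1) % len(tour)]])
        tour.insert(pos + 1, k)
        rest.remove(k)
    return tour

d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(farthest_insertion(d))
```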
## Challenges we ran into
Since the technical core of the problem we were attempting to solve (the Travelling Salesman Problem and other vehicle routing problems) is NP-hard, we knew that no exact algorithm with an acceptable running time exists, so we needed to determine a tradeoff between execution time and solution quality. We did this by reading papers and experimenting with various heuristics and implementations.
We set up automatic scaling resources with Azure; this was necessary to allow the preprocessing of the OpenStreetMap data to be done in a reasonable timeframe. OSRM also uses a non-standard GPS coordinate ordering (longitude before latitude). This cost us a tremendous amount of time and sanity.
## Accomplishments that we're proud of
We managed to complete our project to a standard we can be proud of in the time frame allocated. We hope that what we have developed can be used to help those in need.
In doing so we came up with novel solutions to the tough technical challenges we faced and made difficult decisions and tradeoffs along the way.
## What we learned
We learned a great deal about the current state of the art in solving TSP. It was also a first for all of us using Azure's services.
## What's next for ResQueue
We will probably continue to add to ResQueue; there were many other cool features that didn't make the final cut, suggested both from within the team and by passing hackathon-ers.
* A companion app for both drivers and rescuees to allow Uber-like tracking
* Support for special cases such as needing wheelchair accessible vehicles
* General improvements to the algorithms used, both efficiency and accuracy
* Allowing admins to geofence the area they are able to service.
|
partial
|
## Inspiration
Ideas for interactions from:
* <http://paperprograms.org/>
* <http://dynamicland.org/>
but I wanted to go from the existing computer down, rather from the bottom up, and make something that was a twist on the existing desktop: Web browser, Terminal, chat apps, keyboard, windows.
## What it does
Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever one is closest to the keyboard. Goal is to make something you might use day-to-day as a full computer.
## How I built it
A webcam and pico projector mounted above the desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard.
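The detection step can be sketched roughly as follows, assuming bright paper on a darker desk: a binary threshold followed by contour finding, keeping large quadrilaterals. Threshold and size values here are guesses, not the tuned ones from the build.

```python
import cv2

# Rough sketch of the paper-finding step: threshold the frame, then keep
# large four-sided contours. Values are illustrative, not the tuned ones.

frame = cv2.imread("desk.jpg")                     # stand-in for a webcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

papers = []
for c in contours:
    if cv2.contourArea(c) < 5000:                  # ignore specks and glare
        continue
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:                           # quadrilateral ≈ a sheet of paper
        papers.append(approx)

print(f"found {len(papers)} paper-like regions")
```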
## Challenges I ran into
* Reliable tracking under different light conditions.
* Feedback effects from projected light.
* Tracking the keyboard reliably.
* Hooking into macOS to control window focus
## Accomplishments that I'm proud of
Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system.
Cool emergent things like combining pieces of paper + the side ideas I mention below.
## What I learned
Some interesting side ideas here:
* Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect
* Would be fun to use a deep learning thing to identify and compute with arbitrary objects
## What's next for Computertop Desk
* Pointing tool (laser pointer?)
* More robust CV pipeline? Machine learning?
* Optimizations: run stuff on GPU, cut latency down, improve throughput
* More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once
|
## Inspiration
3D printing offers quick and easy access to a physical design from a digitized mesh file. Transferring a physical model back into a digitized mesh is much less successful or accessible on a desktop platform. We sought to create our own desktop 3D scanner that could generate high-fidelity, colored, and textured meshes for 3D printing or for including models in computer graphics. The build is named after our good friend Greg, who let us borrow his stereocamera for the weekend, enabling this project.
## How we built it
The rig uses a ZED stereocamera, driven by a ROS wrapper, to take stereo images at various known poses in a spiral. The spiral is executed with precision by two stepper motors driving a leadscrew elevator and a turntable for the model being scanned. We designed the entire build in high-detail CAD using Autodesk Fusion 360, and 3D printed L-brackets and mounting hardware to secure the stepper motors to the T-slot aluminum frame we cut at the metal shop at Jacobs Hall. There are also 1/8-inch wood pieces that were laser cut at Jacobs, including the turntable itself. We designed the power system around an Arduino microcontroller and an Adafruit motor shield to drive the steppers. The Arduino is controlled by Python over a serial port and the ZED camera through a ROS wrapper, automating the process of capturing the images used as input to OpenMVG/MVS to compute dense point clouds and, eventually, refined meshes.
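The automation loop looks roughly like the sketch below, which steps the turntable and elevator through the spiral over pyserial and pauses at each pose for a capture. The one-letter serial commands are invented for illustration; the real Arduino firmware defines its own protocol.

```python
import serial
import time

# Hypothetical sketch of the automation loop: step the turntable and the
# leadscrew elevator through a spiral of known poses, pausing at each pose
# so the ZED capture (driven separately via ROS) can fire. The serial
# commands here are assumptions, not the firmware's actual protocol.

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
time.sleep(2)  # wait for the Arduino to reset after the port opens

STEPS_PER_REV = 200
for level in range(10):                      # 10 heights on the leadscrew
    arduino.write(b"E50\n")                  # raise elevator 50 steps (assumed cmd)
    for _ in range(10):                      # 10 stops per revolution
        arduino.write(b"T20\n")              # rotate turntable 20 steps (assumed cmd)
        time.sleep(1.5)                      # settle, then capture happens here

arduino.close()
```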
## Challenges we ran into
We ran into a few minor mechanical design issues that were unforeseen in the CAD; luckily, we had access to a 3D printer throughout the entire weekend and were able to iterate quickly on the tolerancing of some problematic parts. Issues with the AccelStepper library for Arduino, used to simultaneously control the velocity and acceleration of the two stepper motors, slowed us down early Sunday evening, and we had to read the online documentation extensively to accomplish the control tasks we needed. Lastly, the complex 3D geometry of our rig (specifically the rotation and transformation matrices of the cameras in our defined world coordinate frame) slowed us down and, we believe, is still problematic as the hackathon comes to a close.
## Accomplishments that we're proud of
We're proud of the mechanical design and fabrication, actuator precision, and data collection automation we achieved in just 36 hours. The outputted point clouds and meshes are still being improved.
|
## Inspiration
Millions of people around the world are either blind or partially sighted. For those whose vision is impaired but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people might be able to see more clearly.
## What it does
We developed an AR headset that processes the view in front of it and displays a high contrast image. It also has the capability to recognize certain images and can bring their existence to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert.
## How we built it
OpenCV was used to process the image stream from a webcam mounted on the VR headset; the image is processed with a Canny edge detector to find edges and contours. Further, a BFMatcher is used to find objects that resemble a given image file, which are highlighted if found.
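The core of the edge-detection pipeline is only a few lines of OpenCV; a minimal sketch (with illustrative threshold values) is below.

```python
import cv2

# Minimal sketch of the contrast-enhancing pipeline described above: Canny
# edges over the webcam stream, shown as a high-contrast view. Threshold
# values are illustrative, not the tuned ones from the headset build.

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)        # white edges on black background
    cv2.imshow("EyeSee view", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```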
## Challenges we ran into
We originally hoped to use an oculus rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well!
## Accomplishments that we're proud of
Our development platform was based on macOS 10.12, Python 3.5 and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected.
## What we learned
Without the proper environment, your code is useless.
## What's next for EyeSee
Commercial solutions exist and are better suited for general use. However, a DIY solution is endlessly customizable; we hope this project inspires other developers to create projects that help other people.
## Links
Feel free to read more about visual impairment, and how to help;
<https://w3c.github.io/low-vision-a11y-tf/requirements.html>
|
winning
|
# Pose-Bot
### Inspiration ⚡
**In these difficult times, with everyone forced to work remotely and schools and colleges going digital, students are spending more time in front of screens than ever before. This affects not only students but also employees who have to sit for hours in front of a screen. Prolonged screen exposure and sitting in a bad posture can cause severe health problems like postural dysfunction and can strain one's eyes. Therefore, we present to you Pose-Bot**
### What it does 🤖
We created this application to help users maintain good posture, catch early signs of postural imbalance, and protect their vision. The application uses an image classifier from Teachable Machine, a **Google API**, to detect the user's posture and notify them to correct it or move away from the screen when they may not notice it themselves, whether they are sitting in a bad position or sitting too close to the screen.
We first trained the model on the Google API to detect good versus bad posture and whether the user is too close to the screen, then integrated the model into our application.
We created a notification service so that the user can use any other site and simultaneously get notified if their posture is bad. We have also included **EchoAR models to educate** children about the harms of sitting in a bad position and the importance of healthy eyes 👀.
### How We built it 💡
1. The website UI/UX was designed using Figma and then developed using HTML, CSS, and JavaScript. Tensorflow.js was used to detect pose, and a web browser API was used to send notifications.
2. We used the Google Tensorflow.js API to train our model to classify the user's pose, their proximity to the screen, and whether they are holding a phone.
3. For training our model, we used our own images as the training data and tested it in different settings.
4. This model is then used to classify the user's video feed to assess their pose and detect whether they are slouching, too close to the screen, or sitting in a generally bad pose.
5. If the user sits in a bad posture for a few seconds, the bot sends a notification to correct their posture or move away from the screen (a sketch of this logic follows the list).
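The notification logic from step 5 can be sketched as a small debounce, shown here in Python for clarity (the app itself implements it in JavaScript around the Teachable Machine model); the 5-second window and class names are assumptions.

```python
import time

# Debounce sketch: notify only after sustained bad posture. `label` stands
# in for the class name coming from the classifier; the class names and
# the 5-second window are assumptions, not the app's exact values.

BAD_POSE_SECONDS = 5
bad_since = None

def on_new_frame(label: str) -> None:
    global bad_since
    if label in ("bad posture", "too close"):
        if bad_since is None:
            bad_since = time.monotonic()
        elif time.monotonic() - bad_since >= BAD_POSE_SECONDS:
            print("Notify: fix your posture or move back from the screen!")
            bad_since = None  # reset so notifications aren't spammed
    else:
        bad_since = None      # a good-posture frame clears the timer
```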
### Challenges we ran into 🧠
* Creating a model with good accuracy in a general setting.
* Reverse engineering the Teachable Machine's Web Plugin snippet to aggregate data and then display notifications at certain time intervals.
* Integrating the model into our website.
* Embedding EchoAR models to educate children about the harms of sitting in a bad position and the importance of healthy eyes.
* Deploying the application.
### Accomplishments that we are proud of 😌
We created a completely functional application which can make a small difference in our everyday health. We successfully made the application display system notifications which can be viewed across the system, even in different apps. We are proud that we could shape our idea into a functioning application that can be used by any user!
### What we learned 🤩
We learned how to integrate Tensorflow.js models into an application. The most exciting part was learning how to train a model on our own data using the Google API.
We also learned how to create a notification service for an application. And the best of all: **playing with EchoAR models** to create a functionality which could
actually benefit students and help them understand the severity of the cause.
### What's next for Pose-Bot 📈
#### ➡ Creating a chrome extension
So that the user can use the functionality on their web browser.
#### ➡ Improve the pose detection model.
The accuracy of the pose detection model can be increased in the future.
#### ➡ Create more classes to help students concentrate better.
Include more functionality like screen-time tracking and detecting if the user is holding their phone, so we can help users concentrate.
### Help File 💻
* Clone the repository to your local directory
* `git clone https://github.com/cryptus-neoxys/posture.git`
* Install live-server to run it locally
* `npm i -g live-server`
* Go to the project directory and launch the website using live-server
* `live-server .`
* Voilà, the site is up and running on your PC.
* Ctrl + C to stop the live-server!!
### Built With ⚙
* HTML
* CSS
* Javascript
+ Tensorflow.js
+ Web Browser API
* Google API
* EchoAR
* Google Poly
* Deployed on Vercel
### Try it out 👇🏽
* 🤖 [Tensorflow.js Model](https://teachablemachine.withgoogle.com/models/f4JB966HD/)
* 🕸 [The Website](https://pose-bot.vercel.app/)
* 🖥 [The Figma Prototype](https://www.figma.com/file/utEHzshb9zHSB0v3Kp7Rby/Untitled?node-id=0%3A1)
### 3️⃣ Cheers to the team 🥂
* [Apurva Sharma](https://github.com/Apurva-tech)
* [Aniket Singh Rawat](https://github.com/dikwickley)
* [Dev Sharma](https://github.com/cryptus-neoxys)
|
## Inspiration
With more people working at home due to the pandemic, we felt empowered to improve healthcare at an individual level. Existing solutions for posture detection are expensive, lack cross-platform support, and often require additional device purchases. We sought to remedy these issues by creating Upright.
## What it does
Upright uses your laptop's camera to analyze and help you improve your posture. Register and calibrate the system in less than two minutes, and simply keep Upright open in the background and continue working. Upright will notify you if you begin to slouch so you can correct it. Upright also has the Upright companion iOS app to view your daily metrics.
Some notable features include:
* Smart slouch detection with ML
* Little overhead - get started in < 2 min
* Native notifications on any platform
* Progress tracking with an iOS companion app
## How we built it
We created Upright’s desktop app using Electron.js, an npm package used to develop cross-platform apps. We created the individual pages for the app using HTML, CSS, and client-side JavaScript. For the onboarding screens, users fill out an HTML form which signs them in using Firebase Authentication and uploads information such as their name and preferences to Firestore. This data is also persisted locally using NeDB, a local JavaScript database. The menu bar addition incorporates a camera through a MediaDevices web API, which gives us frames of the user’s posture. Using Tensorflow’s PoseNet model, we analyzed these frames to determine if the user is slouching and if so, by how much. The app sends a desktop notification to alert the user about their posture and also uploads this data to Firestore. Lastly, our SwiftUI-based iOS app pulls this data to display metrics and graphs for the user about their posture over time.
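To give a flavour of the slouch analysis, here is a hedged geometric sketch, written in Python for brevity (Upright itself runs PoseNet in TensorFlow.js): compare the current vertical drop between the ears and shoulders with the value captured at calibration. The keypoint names follow PoseNet's part names; the threshold is an assumption.

```python
# Hedged sketch of slouch detection from PoseNet-style keypoints: the
# ear-to-shoulder vertical distance shrinks as the head sinks forward.
# The 0.8 tolerance is a guess, not Upright's tuned value.

def drop(keypoints: dict) -> float:
    """Vertical distance between ear and shoulder midpoints (pixels)."""
    ear_y = (keypoints["leftEar"][1] + keypoints["rightEar"][1]) / 2
    shoulder_y = (keypoints["leftShoulder"][1] + keypoints["rightShoulder"][1]) / 2
    return shoulder_y - ear_y

def is_slouching(current: dict, calibrated: dict, tolerance: float = 0.8) -> bool:
    return drop(current) < tolerance * drop(calibrated)

calibrated = {"leftEar": (300, 200), "rightEar": (360, 200),
              "leftShoulder": (280, 320), "rightShoulder": (380, 320)}
slouched = {"leftEar": (300, 260), "rightEar": (360, 260),
            "leftShoulder": (280, 330), "rightShoulder": (380, 330)}
print(is_slouching(slouched, calibrated))  # True
```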
## Challenges we ran into
We faced difficulties when managing data throughout the platform, from the desktop app backend to the frontend pages to the iOS app. As this was our first time using Electron, our team spent a lot of time discovering ways to pass data safely and efficiently, discussing the pros and cons of different solutions. Another significant challenge was performing the machine learning on the video frames. The task of taking in a stream of camera frames and converting them into slouching percentage values was quite demanding, but we were able to overcome several bugs and obstacles along the way to create the final product.
## Accomplishments that we're proud of
We’re proud that we’ve come up with a seamless and beautiful design that takes less than a minute to set up. The slouch detection model is also quite accurate, something that we’re pretty proud of. Overall, we’ve built a robust system that we believe outperforms other solutions using just the web camera of your computer, while also integrating features to track slouching data on your mobile device.
## What we learned
This project taught us how to combine multiple complicated moving pieces into one application. Specifically, we learned how to make a native desktop application with features like notifications built-in using Electron. We also learned how to connect our backend posture data with Firestore to relay information from our Electron application to our iOS app. Lastly, we learned how to integrate a machine learning model in Tensorflow within our Electron application.
## What's next for Upright
The next step is improving the posture detection model with more training data tailored to each user. While the posture detection model we currently use is fairly accurate, custom-tailored training data would take Upright to the next level. Another step for Upright would be adding Android support for our mobile app, which currently only supports iOS.
|
## Inspiration
Maintaining one's health is important, so we wanted to create something that helps those interested in weightlifting jump straight into it without fear of injury: an app that detects and warns users of poor form during important compound lifts.
## What it does
Our app analyzes your lifting form using a computer vision pose estimation model, calculates key points within a provided video in where the user exhibits poor form, and gives suggestions based on these key points.
## How we built it
* We used **Ultralytics' YOLOv8** pose estimation model to landmark and track a person's joints (see the sketch after this list)
* **Django** alongside the **Django REST framework** were used on the server side to build a **RESTful API**
* **React, TypeScript, and Tailwind CSS** were used to design the client and fetch data from the server
* We used Cloudflare's AI Worker API to access their Llama 3 LLM to provide Chadbot
* Video files annotated with pose estimation landmarks were uploaded to Cloudflare's **R2 buckets**, which would then be served to the client to display to the user
* Adobe Express was used to generate key images used throughout the site.
* Git was used for version control and collaboration
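The pose-estimation step from the first bullet above is only a few lines with the Ultralytics API; a minimal sketch follows (model size, file names, and the printout are illustrative).

```python
from ultralytics import YOLO

# Minimal sketch of the pose-estimation step: run YOLOv8's pose model over
# an uploaded video and pull out per-frame joint coordinates. Form analysis
# would build on these points; the file names here are placeholders.

model = YOLO("yolov8n-pose.pt")            # pretrained pose model from Ultralytics

results = model("squat.mp4", stream=True)  # lazily process frame by frame
for frame_idx, result in enumerate(results):
    if result.keypoints is None:
        continue
    # keypoints.xy: one (num_joints, 2) array of pixel coords per person
    for person in result.keypoints.xy:
        print(f"frame {frame_idx}: {len(person)} joints tracked")
```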
## Challenges we ran into
We faced challenges in accurately detecting a person's landmarks (joints) and ensuring real-time feedback for users, as well as serving the annotated video result back to the client.
## Accomplishments that we're proud of
We learned and implemented technology of an unfamiliar field (computer vision and machine learning) in a project that builds upon our existing knowledge of full-stack web development.
## What we learned
We learned the importance of refining machine learning models, especially when it comes to pose estimation, where a user's landmarks can vary drastically based on various factors such as camera angle and distance from target.
## What's next for How’sMyForm?
We plan to enhance our community support and integrate personalized workout plans to further assist users in their fitness journeys, such as implementing new algorithms for different lifts (e.g. RDL, bicep curls) as well as determining the camera angle automatically.
|
winning
|
## Team
Hello and welcome to our project! We are Ben Wiebe, Erin Hacker, Iain Doran-Des Brisay, and Rachel Smith. We are all in our third year of computer engineering at Queen’s University.
## Inspiration
Something our team has in common is a love of road trips. However, road trips can be difficult to coordinate, and the fun of a road trip is lost when not everyone is travelling together. As such, we wanted to create an app that will help people stay in touch while travelling and feel connected even when apart.
## What it Does
The app gives users the ability to stay connected while travelling in separate cars. From the home screen, you are prompted to log in to Snapchat with your account. You then have the option to create a new trip or join an existing trip. If you create a trip, you are prompted to indicate the destination that your group will be travelling to, as well as a group name. You are then given a six-character code, randomly generated and consisting of numbers and letters, that you can copy and send to your friends so that they can join you.
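A plausible sketch of how such a code could be generated, shown in Python for brevity (Konvoi's server runs Node.js, and the uppercase-only alphabet is our assumption):

```python
import secrets
import string

# Illustrative trip-code generator: six characters drawn from letters and
# digits, using a cryptographically sound source so codes are hard to guess.

ALPHABET = string.ascii_uppercase + string.digits

def new_trip_code(length: int = 6) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(new_trip_code())  # e.g. "7QK2ZD"
```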
Once in a trip, users are taken to a screen with a map as the main display. The map shows each member of the trip’s Bitmoji and updates with users’ locations. Based on location, an arrival time is displayed, letting users give their friends updates on how far away they are from their destination.
As well, users can sign into Spotify, allowing all parties in the group to contribute to a shared playlist and listen to the same songs from this playlist at the same time, keeping the road trip fun despite the distance. So next time you want to take control of the aux, you’ll be taking control of all parties in your group!
The software currently maps a route generated using a Google Maps API; however, the route is not yet drawn onto the map. In a future version, a messaging feature would be implemented to allow users to communicate with one another. This feature would be limited to users with passenger status to discourage texting and driving. As well, weather and traffic updates would be implemented to further aid users on road trips.
## How We Built It
The team split into two sub-teams, each tackling independent tasks. Iain and Rachel took the lead on the app interface. They worked in Android Studio, coding in Java, to get the activities, buttons, and screens in sync. They integrated Snapchat’s Bitmoji Kit, as well as the Google Maps APIs, to streamline the process. Ben and Erin took the lead on building the server and databases, using SQLite and Node.js. They also implemented security checks to ensure the app is not susceptible to SQL injection and to limit the accepted user inputs. The team came together as a whole to integrate all components smoothly and efficiently, as well as to test and fix errors.
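One of those safeguards, parameterized queries, is shown below with Python's sqlite3 for brevity; Konvoi's Node.js server applies the same idea. The table and values are illustrative.

```python
import sqlite3

# Illustration of the SQL-injection safeguard: user input is passed as a
# bound parameter, never spliced into the SQL string. (Shown in Python's
# sqlite3 for brevity; the same idea applies in Node.js.)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (code TEXT, name TEXT)")
conn.execute("INSERT INTO trips VALUES (?, ?)", ("7QK2ZD", "Florida trip"))

user_code = "7QK2ZD' OR '1'='1"   # a hostile input, neutralized by binding
row = conn.execute("SELECT name FROM trips WHERE code = ?", (user_code,)).fetchone()
print(row)  # None - the malicious string matches nothing
```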
## Challenges We Ran Into
Several technical challenges were encountered during the creation of Konvoi. One error was in the implementation of the map on the client side. Another main issue was finding the proper dependencies and matching their versions.
## Accomplishments That We’re Proud Of
First and foremost, we are proud of each other’s hard work and dedication. We started this hackathon with the mindset that we wanted to complete the app at all costs. Normally never running on less than six hours of sleep, the team pushed through on only four hours per night. The best part? The team morale. Everyone had their ups and downs, and points when we did not think we would finish and it seemed easiest to give up. We took turns supporting and encouraging each other; from silly photos at 3am in matching onesies to visiting the snack table…every…five…minutes…, the team persevered and completed the project!
On the other hand, we are proud of the app and all the potential that it has. In only 36 hours, we built a fully functional app that we can use together on our next team road trip (Florida anyone??). From here, we believe that this app is marketable, especially to those 18 to 30.
## What We Learned
The team collectively agrees that we learned so much throughout this entire experience, both technically and interpersonally. The team worked one-on-one with mentors multiple times throughout the hackathon, each of them bringing a new experience to our table. We spoke with Kevin from Scotiabank, who expanded our thought process regarding how security plays a role in every project we work on.
We spoke with Mike from Ritual, who taught us about Android app integration and helped us with the app implementation. Some of us had no prior knowledge of APIs, so having a knowledgeable mentor teaching us was an invaluable experience.
## What’s the Next Step for Konvoi?
During the design phase, the team created a long list of features that we felt would be an asset to have. We then categorized them as mandatory (required in the Minimum Viable Product), desired (the goal of the project), nice to have (an extension of desired features), and stretch goals (interesting ideas that would be great in the future). From these lists, we were able to accomplish all mandatory and desired goals. We unfortunately did not hit any nice-to-have or stretch goals. They included:
• Planned stops
• Messaging between cars
• Cost tracking for the group (when someone rents the car, someone else the hotel, etc.)
• Roadside assistance (such as CAA connected into the app)
• Entertainment (extend it to passengers playing YouTube videos, etc.)
• Weather warnings and added predictions
• A feature for a packing list
|
## Inspiration
While using ridesharing apps such as Uber and Lyft, passengers, particularly those of marginalized identities, have reported feeling unsafe or uncomfortable being alone in a car. In our user interviews, every woman mentioned personal safety as one of her top concerns within a rideshare. About 23% of American women have reported a driver for inappropriate behavior. Many apps have attempted to mitigate this issue by creating rideshare services that hire only female drivers. However, these apps have quickly gotten shut down due to discrimination laws. Additionally, around 40% of Uber and Lyft drivers are white males, possibly due to the fact that many minorities may feel uncomfortable in certain situations as a driver. We aimed to create a rideshare app which would provide the same sense of safety and comfort that the aforementioned apps aimed to provide while making sure that all backgrounds are represented and accounted for.
## What it does
Our app, Driversity (stylized DRiversity), works similarly to other ridesharing apps, with features in place to ensure that both riders and drivers feel safe. The most important feature we'd like to highlight alerts the rider if a driver deviates from the correct path to the rider's designated destination; the app then asks the rider if they would like to call 911 to report the driver's actions. Additionally, many of our user interviews revealed that women often prefer to walk around while waiting for a rideshare driver, especially at night, out of safety concerns. The app provides an option users can select to let them walk around while waiting for their rideshare, notifying the driver of their dynamic location. After selecting a destination, the user can choose from a selection of three drivers in the app. On this selection screen, the app details both identity and personality traits of the drivers, so that riders can select drivers they feel comfortable riding with. Users also have the option to provide feedback on their trip afterward, as well as rate the driver on various aspects such as cleanliness, safe driving, and comfort level. The app will also use these ratings to suggest drivers that similar users rated highly.
## How we built it
We built it using Android Studio in Java for full-stack development. We used the Google Maps JavaScript API to display the map for the user when selecting destinations and tracking their own location on the map. We used Firebase to store information and for user authentication. We used DocuSign for drivers to sign preliminary papers. We used OpenXC to calculate whether a driver was traveling safely and at the speed limit. In order to give drivers benefits, we give them the choice to invest 5% of their income, which will grow naturally as the market rises.
## Challenges we ran into
We weren't very familiar with Android Studio, so we first attempted to use React Native for our application, but we struggled a lot implementing many of the APIs we were using with React Native, so we decided to use Android Studio as we originally intended.
## What's next for Driversity
We would like to develop more features on the driver's side that would help the drivers feel more comfortable as well. We also would like to include the usage of the Amadeus travel APIs.
|
## Inspiration
Charles had a wonderfully exciting (lol) story about how he got to Hack the North, and we saw an opportunity for development ^-^! We want to capitalize on the idea of communal ridesharing, making it accessible for everyone. We also wanted to build something themed around data analytics and sustainability, and tracking carbon footprint via rideshare app seemed like a great fit.
## What it does
We want to provide a better matchmaking system for drivers/riders centered around sustainability and eco-friendliness. Our app will match drivers/riders based on categories such as vehicle type, distance, trip similarity, and location - but why not just use Uber, or other rideshare apps? Isn’t this the exact same thing? Yes, but here’s the catch - it’s FREE! Wow that’s so cool.
## How we built it
Our two talented web UI/UX designers created multiple iterations of wireframes in Figma, before translating it into a beautiful frontend web app.
Our very pog backend developers built all the database APIs, real-time chat system, and search/filter functionality.
## Challenges we ran into
The most time-consuming part was the idea generation. We spent a lot of time on our first night (we stayed up past 4 a.m.) and the entirety of Saturday morning until we landed on an idea that we thought was impactful, and had room for us to put our own twist on it.
We also faced our fair share of bugs and roadblocks during the creation process. Backend is hard, man, and integration even more so. Having to construct so many React components and ensure a functional backend in such a short timespan was quite the challenge.
## Accomplishments that we're proud of
We were definitely most proud of our designs and wireframes made in Figma, which were then translated into a beautiful React frontend. Our team put a lot of emphasis into making a user-friendly design and translating it into an app that people would actually try out. Our team also put a lot of hard work into the backend messaging system, which took a lot of trial and error and debugging before a final great success.
## What we learned
In an era dominated by AI apps and projects, a pure software project is becoming increasingly rare. However, we learned that creating something that we are passionate about and would personally use means more than developing a product for the sole purpose of chasing industry hype. We learned a lot about matching algorithms for social platforms, technologies that enable real-time instant messaging, as well as the challenges of integration between frontend, backend, and databases.
## What's next for PlanetShare
Building our web app into a mobile app for people to download and use on-the-fly. Also obtaining more metrics towards helping users improve their carbon footprint, such as integrating with public transit or other sustainable modes of transportation like cycling.
Also, we can:
* Integrate rewards points by partnering with companies like RBC (Avion rewards) to incentivize customers to carpool more
* Extend our functionality beyond just carpooling - we can track other forms of transportation as well, and recommend alternative options when carpooling isn’t available
* Leverage machine learning and data analytics to graph trends and make predictions about carbon emission metrics, just like a personal finance tracker
* Introduce a healthy dose of competition - benchmark yourself against your friends!
|
partial
|
## Inspiration
Inclusivity is the cornerstone of thriving communities. As we continue to grow and interact across various cultures, races, and genders, the need to foster diverse and welcoming environments becomes more crucial than ever. Our inspiration for Inclusivity Among Us stemmed from the desire to help individuals and organizations ensure their communication aligns with values of diversity that are commonly overlooked. We wanted to create a tool that helps people make meaningful changes in how they speak and write, driving positive social impact in communities of all kinds.
## What it does
Inclusivity Among Us is a tool designed to help users analyze their communication for inclusivity. It highlights non-inclusive language and provides specific tips for improvement, focusing on topics such as race, gender expression, disability, and educational attainment. The app provides:
* An inclusivity rating (out of 100) to measure how inclusive the content is.
* Specific changes to improve the inclusivity of the text, using color-coded highlights for non-inclusive phrases.
* Multilingual support, allowing users to check content in various languages like English, Spanish, French, and more.
* A downloadable report in PDF format, which summarizes the inclusivity rating, flagged text, and suggestions for improvement.
## How we built it
We built the Python app using Streamlit for the user interface, with an integration with OpenAI’s GPT-3.5 to perform the inclusivity analysis.
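A condensed sketch of that core loop is below; the prompt wording, model settings, and UI labels are illustrative rather than our exact production values.

```python
import streamlit as st
from openai import OpenAI

# Condensed sketch of the app's core loop: collect text, ask GPT-3.5 for an
# inclusivity review, and render the result. The prompt is illustrative.

client = OpenAI()  # reads OPENAI_API_KEY from the environment

st.title("Inclusivity Among Us")
text = st.text_area("Paste the content to check")

if st.button("Check inclusivity") and text:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "Rate the text's inclusivity out of 100, flag only "
                "significant non-inclusive phrases, and suggest fixes.")},
            {"role": "user", "content": text},
        ],
    )
    st.markdown(response.choices[0].message.content)
```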
## Challenges we ran into
One of the primary challenges was ensuring that the feedback generated by the AI was both accurate and meaningful. Our prompt engineering skills had to be used to prevent the model from nitpicking trivial language choices and focus only on significant inclusivity issues. Another challenge was ensuring the app could handle text in multiple languages and still provide relevant suggestions, which required fine-tuning how the model interprets cultural nuances in language.
## Accomplishments that we're proud of
We are proud of creating a usable tool that helps users improve their language in a meaningful way. The ability to dynamically highlight non-inclusive language, provide concise suggestions, and offer multilingual support are features that make the app impactful across various contexts. We are also proud of the seamless user experience, where anyone can simply paste content, check for inclusivity, and download a report within seconds.
## What we learned
This was our first time using Streamlit, and it surprised us with how seamlessly we were able to integrate other features into our app. The time saved on styling and implementing basic features allowed us to focus on refining the actual product.
We also learned more about prompt engineering through a lot of trial and error, figuring out how to create the most effective instructions for the model.
## What's next for Inclusivity Among Us
* Refine the AI suggestions further, ensuring the advice given is always contextually relevant and culturally sensitive.
* Broaden the scope by including more categories of inclusivity, such as socio-economic status, mental health considerations, and age diversity.
* Allow users to flag suggestions they find particularly helpful, building a feedback loop that continuously improves the tool.
* Custom reports for organizations, offering deeper insights and strategies for making their communication more inclusive.
* Explore the possibility of integrating with corporate communication tools like Slack or Gmail, allowing users to check inclusivity in real-time while drafting messages.
|
## Inspiration
As members of immigrant families, we often encounter the significant challenge of language barriers when communicating with family members in our home countries. This issue has become more prevalent since the onset of the COVID-19 pandemic, leading to missed opportunities and a lack of meaningful connection with long-distance family members. While the straightforward solution might seem to be speaking the language at home and using educational platforms like Duolingo or Babbel for practice, it's not as simple as it appears. People often lose motivation or focus solely on getting the correct answers in these apps. This challenge inspired us to create a more engaging and interactive conversational companion. By offering users the opportunity to engage in real conversations, this tool enhances language practice and improves proficiency through repeated speaking iterations.
## What it does
Our web platform offers users the opportunity to learn various languages through a tailored lesson plan upon selecting a language. It includes sub-lessons that involve engagement with articles or short videos on specific topics. This design emulates the interactive experience of a traditional classroom setting.
Following the consumption of these materials, users engage with the AI Chat Companion for a unique twist on language learning: voice-to-voice conversations. Users speak into the microphone, and their input is processed in real time. The AI companion analyzes the spoken responses and replies with audio feedback. Conversations flow seamlessly and can be concluded using a language-specific keyword (e.g., "terminador" for Spanish).
The AI focuses primarily on understanding, with grammar and punctuation as secondary feedback priorities. Our intent was to foster a conversational environment that enhances speaking skills and provides a comprehensive speaking experience for learners.
## How we built it
Our product was developed using a combination of various technologies and frameworks to ensure an ideal interactive user experience.
For image conversion, we utilized Convertio, a tool for converting JPEG images into SVG format, which allowed us to integrate high-quality vector graphics into the website's UI. The user interface was designed using Reflex, a flexible, open-source, full-stack Python framework that allowed for efficient deployment of the app. The framework unifies the front end and back end, which allowed us to implement both purely in Python. We adopted a sidebar template from Reflex to organize content and ensure an intuitive navigation experience for users.
Python was the core of our AI functionality: the chatbot service and the speech-to-text and text-to-speech processing were developed using Python, OpenAI APIs, and Google Cloud's Speech-to-Text. Python gave us access to the diverse libraries and frameworks that support AI. The chatbot, integrated in Python, enables real-time interactions with users; this component was crucial for simulating the conversation experience.
For speech recognition and synthesis, we used Google Cloud's Speech-to-Text and Text-to-Speech APIs, which provided the backbone of our backend implementation for accurate speech processing and the basis for our prompt engineering. Google Cloud's speech detection offers strong audio-processing capabilities, and the OpenAI API enabled real-time translation within a language during conversations.
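To make the loop concrete, here is a minimal Python sketch of the voice-to-voice flow described above: transcribe with Google Cloud Speech-to-Text, reply with an OpenAI chat model, and synthesize audio with Google Cloud Text-to-Speech. It assumes 16 kHz LINEAR16 audio and a Spanish session; `record_audio()` and `play_audio()` are hypothetical stand-ins for the audio I/O layer, and the model name is illustrative rather than the exact one we used.

```python
# A minimal sketch of the voice-to-voice loop, assuming 16 kHz LINEAR16 audio
# and a Spanish session. record_audio()/play_audio() are hypothetical helpers.
from google.cloud import speech, texttospeech
from openai import OpenAI

stt = speech.SpeechClient()
tts = texttospeech.TextToSpeechClient()
llm = OpenAI()

def transcribe(audio_bytes):
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="es-ES",
    )
    result = stt.recognize(config=config,
                           audio=speech.RecognitionAudio(content=audio_bytes))
    return " ".join(r.alternatives[0].transcript for r in result.results)

def reply(user_text):
    chat = llm.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a friendly Spanish conversation partner. "
                        "Prioritize understanding; correct grammar gently."},
            {"role": "user", "content": user_text},
        ],
    )
    return chat.choices[0].message.content

def speak(text):
    audio = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="es-ES"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    return audio.audio_content

while True:
    heard = transcribe(record_audio())      # hypothetical audio capture
    if "terminador" in heard.lower():       # language-specific stop keyword
        break
    play_audio(speak(reply(heard)))         # hypothetical audio playback
```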
## Challenges we ran into
The development process involved navigating a complex landscape of technical challenges. One of our initial struggles was framework compatibility: Reflex is significantly different from Node.js and React, which meant a fairly substantial learning curve, and it took time to understand its components and best practices well enough to ensure a responsive design. As a team, we did comprehensive research, spoke with the sponsors about any issues we had, and performed extensive testing to ensure compatibility.
Configuring the speech-to-text service to accurately recognize and process spoken language presented difficulties with dialects, accents, and background noise. Effective prompt engineering that elicited useful responses from the chatbot while maintaining the flow of conversation was also difficult. We relied on Google Cloud's Speech-to-Text API for its real-time speech recognition capabilities, applied natural language processing best practices, and conducted user testing to refine interactions.
Deciding on the best platform for speech-to-text development meant evaluating various options based on latency; that, along with keeping the chatbot on topic and its responses coherent, rounded out our backend challenges.
## Accomplishments that we're proud of
We are proud of designing and programming an organized UI platform using an architecture and framework we had no experience with. Working together as a team and problem-solving with each other through issues in speech-to-text interpretation, prompt engineering, and tuning the LLM is something we feel great about.
## What we learned
We learned that pivoting to a brand-new UI platform with no prior knowledge of it is a challenging task, especially when trying to gain a comprehensive understanding within the duration of a hackathon. We learned how to implement OpenAI APIs and Google Cloud's speech APIs, and the development behind creating speech (from the user) to speech (from the chatbot) communication. We also learned how to divide and conquer, and how to work together on the tasks that were difficult.
## What's next for Polyglot.AI
Fine-tuning the LLM for more structured responses from the chatbot would make the learning experience more effective. We also plan to improve our UI/UX design so the platform is more interactive and flows more smoothly, and to evolve the experience so that communication is purely speech-to-speech.
|
## Inspiration
We wanted to tackle a problem that impacts a large demographic of people. Through research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders often go undiagnosed or misdiagnosed, leaving these individuals constantly struggling to read and write, which is an integral part of one's education. With such learning disabilities, learning a new language can be frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that makes the learning process easier and more catered to these individuals.
## What it does
ReadRight offers interactive language lessons with a unique twist. It reads the prompt out to the user rather than displaying it on the screen for the user to read and process themselves. Then, once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for accuracy. This way, individuals with reading and writing disabilities can still hone their skills in a new language.
## How we built it
We built the frontend UI using React, JavaScript, HTML, and CSS.
For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM.
Finally, for user authentication, we made use of Firebase.
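As a rough illustration of the scoring idea (our backend actually runs on Node.js and Express), here is a small Python sketch: it assumes the spoken attempt has already been transcribed by the speech-to-text API and scores it by similarity to the target phrase. The scoring formula is an assumption for illustration, not our exact implementation.

```python
# Toy pronunciation scorer: compare the target phrase against what the
# speech-to-text service heard, returning a 0-100 accuracy score.
from difflib import SequenceMatcher

def pronunciation_score(prompt: str, transcript: str) -> int:
    normalize = lambda s: " ".join(s.lower().split())
    ratio = SequenceMatcher(None, normalize(prompt), normalize(transcript)).ratio()
    return round(ratio * 100)

print(pronunciation_score("the quick brown fox", "the quick braun fox"))  # ~89
```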
## Challenges we faced + What we learned
When you first open our web app, the homepage presents information about the app and our target audience. From there, the user needs to log in to their account. User authentication is where we faced our first major challenge: the third-party integration took us significant time to test and debug.
Secondly, we struggled with generating prompts for the user to repeat, and with using AI to implement that generation.
## Accomplishments that we're proud of
This was the first time many of our members had integrated AI into an application we were developing, so it was a very rewarding experience, especially since AI is the new big thing in the world of technology and it is here to stay.
We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things.
## What's next for ReadRight
As of now, ReadRight has the basics of the English language for users to study and get prompts from but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should
|
losing
|
## Inspiration
I was first inspired to build 911 AI after witnessing a friend's experience with calling 911 after an accident. While the accident was minor, the time delay would have had serious consequences had luck not been on his side. As I learned more, it became obvious to me that the current 911 system was in need of improvement but slow to change. 911 Responder is an attempt to use technology to build a system that supports rather than replaces our current 911 system, creating an impact where none had existed before.
## What it does
911 Responder is a web-based tool focused on improving the efficiency of 911 dispatchers. From anywhere in the world, a user can call our servers, where an AI system greets them and asks questions related to their current emergency. In doing so, it cycles through several questions and extracts the most important details of the emergency. This information is then put onto the user interface, allowing the 911 dispatcher to see the specific details of the emergency, which can inform whatever course of action the dispatcher decides on or help in reaching out to the caller again.
It is important to note that 911 Responder is only active in scenarios where no human dispatcher is available. The dispatcher is able to interject, decide, and respond to any call as they see fit, with this build serving to save the time that would otherwise be lost while a caller waited in silence for a dispatcher.
## How we built it
This build was constructed largely with Node.js, alongside Twilio, which allowed it to answer calls and verbally ask the user questions. For the responses, ChatGPT and its APIs generate replies based on prompt engineering, with various prompts informing ChatGPT what role it should play and where in the conversation it is. Ngrok facilitates interaction between the website, the private backend, and the public Twilio account, allowing seamless integration between functionality running on localhost and the wider internet. Finally, the Hugging Face API is used to build several ancillary functions, such as all-MiniLM-L6-v2 for basic sorting.
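A simplified sketch of the call flow is below, written in Python/Flask for brevity even though the actual build is Node.js. Twilio's `Gather` verb with speech input posts the caller's transcribed answer back to the webhook; `store_detail()` is a hypothetical helper standing in for the LLM extraction step, and the questions are illustrative.

```python
# Hedged sketch of the dispatcher-assist call flow (Python/Flask stand-in
# for the Node.js build). Twilio transcribes each answer as SpeechResult.
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

QUESTIONS = [
    "What is your emergency?",
    "Where are you right now?",
    "Is anyone injured?",
]

@app.route("/call", methods=["POST"])
def call():
    step = int(request.args.get("step", 0))
    resp = VoiceResponse()
    if step > 0:
        # store_detail() is hypothetical: it runs the LLM extraction and
        # pushes the detail to the dispatcher UI.
        store_detail(step - 1, request.form.get("SpeechResult", ""))
    if step < len(QUESTIONS):
        gather = Gather(input="speech", action=f"/call?step={step + 1}",
                        method="POST")
        gather.say(QUESTIONS[step])
        resp.append(gather)
    else:
        resp.say("Help is on the way. Please stay on the line.")
    return str(resp)
```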
## Challenges we ran into
When building 911 Responder, the main challenge I faced was getting the various API services to work together. This was largely due to the central importance of the backend in my build, as large portions of the frontend relied on information provided by many API calls. During my hacking I was forced to change course and quickly learn several new APIs in order to finish the project, with the greatest challenge being the integration of Ngrok. Originally this tool was not part of my plan, but as the backend became more realized I had to quickly learn its applications, which was definitely one of the key experiences of the hackathon for me.
## Accomplishments that I'm proud of
One crucial accomplishment I am proud of is the integration of the frontend and backend. During the process of building 911 Responder I was forced to change my plans several times, with the greatest difficulty being whether I could integrate the frontend and backend with each other. With every error I found myself doubting my ability to execute the project, but through quick learning I was able to deliver most of my desired features.
## What I learned
The most important lesson I learned through building this product is the value of a development schedule, especially for larger projects with several moving parts. For example, having to learn several APIs meant I needed to understand that process's impact on the completion of my project. Knowing how much time and energy I would devote to each feature allowed much faster decisions about which features to implement and how. This lesson was further reinforced through my conversations with mentors, who stressed the importance of such planning in their own professional organizations, making this a skill I highly value.
## What's next for 911 Responder
Next up, I will be finalizing the secondary and tertiary applications of 911 Responder: its 311 and business call-center deployments. As explained in my pitch, this will let me develop the idea further by providing immediate use cases and resources for development. I will also need to expand the use of OpenAI's Whisper and the Hugging Face API, as the former was taught to me by a mentor during the conference while the latter supports several features that could be expanded.
|
## Inspiration
In many metropolitan cities, one finds public transit systems, whether buses, subways, or streetcars. When people engage in physical or verbal altercations that could put the public at risk, a bystander would typically call for help. In most cities and regions, it is not possible to text 911 or other emergency numbers, making a phone call the only way to ask for help. However, because calling emergency services (e.g., the police) requires one to physically pick up the phone and report the crime, it may put the caller in danger. In some cases, a bystander may feel uncomfortable contacting the police for fear of intimidation and injury from the perpetrator.
We created a chatbot that allows users to report crime discreetly, by typing rather than speaking. This encourages more people to report crimes and creates a safer environment for all citizens.
## What it does
TravelSafe is a chatbot that utilizes Actions on Google and Google Cloud Services to do what a person would normally do in an emergency situation requiring urgent attention: call for assistance. The user can type in information about the situation and location and send it to emergency services without needing to speak a single word.
## How we built it
We built TravelSafe using Actions on Google, Firebase, Node.js, Google Cloud Platform, and Twilio IP.
## Challenges we ran into
One of the most significant challenges we ran into was the fact that error messages and logs were not in the most obvious of places. This led to many headaches and trial-and-error attempts to fix the problems that arose. Eventually, through experience, we found out where all the logs were and were able to fix issues right away. We also ran into challenges implementing MongoDB (we later found out that the documentation was incorrect), which limited our project scope.
## What's next
We want to eliminate bias inside the application and prevent unneeded calls from being dialled. In other words, some people may report situations where they are not actually in danger but feel so due to personal bias. For example, someone might report a person walking down the street wearing a big hood because they feel unsafe, even though that person may not be dangerous. We also want to prevent children from dialling into the station, because there are many instances of children calling emergency services about non-urgent matters, crowding the phone lines.
*Please see the alpha deployment in link.*
|
## Inspiration
We wanted to make an app that helped people be more environmentally conscious. After thinking about it, we realised that most people are not, because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits. We took this idea and added a full visualisation aspect to make a complete budgeting app.
## What it does
Our app allows users to log in, and it then retrieves user data to visually represent the most interesting data from that user's financial history, as well as their utilities spending.
## How we built it
We used HTML, CSS, and JavaScript for our front end, then used an Arduino to gather light-sensor data and Nessie to retrieve user financial data.
## Challenges we ran into
Seamlessly integrating our multiple technologies, and formatting our graphs in a way that is both informative and visually attractive.
## Accomplishments that we're proud of
We are proud that we have a finished product that does exactly what we wanted it to do, and we are excited to demo it.
## What we learned
We learned about making graphs using JavaScript, as well as using Bootstrap in websites to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app.
## What's next for Budge
We want to continue to add more graphs and tables to provide more information about your bank account data, and use AI to give personalized recommendations catered to an individual's spending.
|
losing
|
## Inspiration
The shipping industry produces over 3% of total greenhouse gas emissions, equivalent to the emissions produced by the entire aviation industry. We wanted to make it more efficient and faster for people to send items they need shipped immediately.
## What it does
Our app provides a service to send packages across the world with same-day or next-day delivery at cheaper rates than traditional shipping. Our platform connects senders (who want to send packages) with travellers heading to the same destination. We use blockchain to store all reviews of both senders and travellers, and the service holds funds until the task is complete.
## How we built it
We used React to build the front end, with MongoDB, Node, and Express as the backend. The React app interacts via an API with MongoDB, Node, and Express to store data, and Solana is used for transactions.
## Challenges we ran into
Making everything from scratch with no knowledge of Solana at all was very hard.
## Accomplishments that we're proud of
The service it enables for people is amazing: it helps them send packages the same day or the next day at a fraction of the cost, and it is even better for the environment.
## What we learned
Time management is very important; we had to stay focused on creating a minimum viable product before building additional features.
## What's next for hitch\_hike
We believe hitch\_hike has a lot of potential and look forward to building it in the real world, as it is something that can have a good impact on the whole world.
|
## Inspiration
With current restrictions, there are many opportunities for people to help each other out with essential deliveries. This application provides a way for communities to come together through simple acts of kindness.
## What it does
Connects people who are limited in terms of transportation with people who are able to pick things up for them. Users can post lists of essential items they would like delivered by people who are close by. Users can also search the database for posted lists near an address that they are already "on the way" to, and get in contact with the user who posted them.
## How we built it
Python, Flask, Google Maps, SQL, HTML, Bootstrap, Ajax, JS
## Challenges we ran into
Web hosting, database manipulations, various language/framework documentation, back end to front end connecting, overall architecture planning.
## Accomplishments that we're proud of
This was our first experience with designing and executing a web application! We used many of these technologies for the first time during this project!
## What we learned
How to use SQL databases and Bootstrap, call APIs, connect front-end and back-end services, and handle POST and GET requests.
|
## 🍀 Our Inspiration
We thought the best way to kick this off was to share this classic meme:

We can all chuckle at this, but let's be real: climate change is no joke. Just look at how the [Earth set its global temperature record](https://abcnews.go.com/International/earth-sets-daily-global-temperature-record-2nd-day/story?id=112233810) two weeks ago! The problem isn't just that the climate is heating up – it's that people often aren’t motivated to act unless there's something in it for them. Think about it: volunteering often comes with certificates, and jobs come with paychecks. People need incentives to get moving!
## 🙌 How does WattWise solve the problem
We’ve created a prototype that plugs directly into your office/home’s main power supply. This device streams real-time data to our dashboard using protocols like MQTT and HTTPS, where you can watch your power usage, get a sneak peek at your upcoming electricity bill, and much more.
Imagine this: normally, we’re all a bit clueless about whether we’ve left the lights on or are using power-hungry gadgets, until the dreaded bill arrives. With WattWise, it’s like having a personal energy coach. Just like Whoop made tracking your fitness addictive, WattWise lets you track your energy usage and bill predictions.
Picture this scenario: You’re relaxing on a holiday when WattWise sends you a notification about your current power usage being higher than the daily average. This alert prompts you to check the stats, giving you valuable insights on which appliances to turn off. After making a few adjustments, you’re back to enjoying your holiday with the satisfaction of knowing you’ll have a lower bill at the end of the month.
We just took a household as an example; now think about offices and corporations. With WattWise, you could be saving tons of electricity and cash without breaking a sweat.
## 🧑💻 Technical implementation
This project was one of our most technical yet! We aimed to simulate not just one but two devices streaming data to a single dashboard. Picture a company with two buildings, each outfitted with our **Arduino** setups. These setups included current and voltage sensors, a switch, a DC motor, a 9V battery, and a diode. When the switch is flipped, the sensors measure the current and voltage produced by the motor, giving us the power using:
```
Power = Voltage × Current
```
Power (in watts) accumulated over time gives energy (in watt-hours), and multiplying this by the local rate gives the cost.
To share this data, we used the **MQTT** protocol. Our devices publish power data to an MQTT broker, and an **Express.js** backend subscribes to this data, receiving updates every second. This data is stored in **DynamoDB**, and we provide API routes for other services to access it with custom queries.
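To sketch that ingestion path (in Python with paho-mqtt for brevity; our real subscriber lives in the Express.js backend), the flow looks roughly like this. The topic name, broker host, and table schema are illustrative.

```python
# Hedged sketch of the MQTT -> DynamoDB ingestion path (paho-mqtt 1.x style).
import json
import time
import boto3
import paho.mqtt.client as mqtt

table = boto3.resource("dynamodb").Table("power_readings")  # illustrative table

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)      # e.g. {"device": "bldg-a", "watts": 312.5}
    table.put_item(Item={
        "device": reading["device"],
        "ts": int(time.time()),
        "watts": str(reading["watts"]),    # DynamoDB wants Decimal/str, not float
    })

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local", 1883)       # illustrative broker host
client.subscribe("wattwise/+/power")       # one topic per device
client.loop_forever()
```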
We containerized everything using **Docker** and **Docker Compose**, creating a local setup with DynamoDB, an MQTT client and broker, and our API. These services interact through a Docker network.
Next, we tackled future price predictions using a custom model with a RandomForestRegressor in **scikit-learn**, hosted on a **Python Flask** server.
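A condensed version of that prediction step might look like the following; the (hour, weekday) feature choice and the synthetic training data are illustrative, not our production features.

```python
# Fit a RandomForestRegressor on historical (hour, weekday) -> watts samples
# and forecast tomorrow's hourly usage. Training data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.array([[h % 24, (h // 24) % 7] for h in range(24 * 28)])  # 4 weeks
y = 200 + 80 * np.sin(X[:, 0] / 24 * 2 * np.pi) + np.random.randn(len(X)) * 10

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
tomorrow = np.array([[h, 3] for h in range(24)])   # e.g. a Thursday
predicted_watts = model.predict(tomorrow)           # feeds the line chart
```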
Finally, our **Next.js** dashboard brings it all together. The frontend is also integrated with **Google Gemini GenAI** to detect unusual usage patterns and alert users. It features a bar chart for current usage, a pie chart for device comparison, and a line chart for predicted usage. Basic math operations show the end-of-month cost predictions and GenAI alerts for any unusual activity.
## 😭 Challenges we ran into
Handling time zones has always been a developer's nightmare, and of course, our whole MQTT and DynamoDB setup crashed at midnight because of this. It took a while to sort out the mess and reset everything.
Additionally, we had to buy our voltage and current sensors from Amazon; since local stores didn’t carry them, we arranged for delivery to a friend's house.
Our team had diverse strengths: backend, frontend, and DevOps. This meant we were often using technologies unfamiliar to each other. We spent a lot of time learning on the fly, which was both challenging and rewarding.
And now for the embarrassing part: we spent three hours last night debugging a single API call because the React state refused to update.
## 😤 Accomplishments that we're proud of
* Everything worked as intended. Both Arduinos streamed data accurately, the calculations were correct, our machine learning model made precise predictions, GenAI integration was seamless, and the frontend supported real-time updates.
* Built a highly technical project from scratch in just 36 hours.
* Acquired new skills and applied them effectively throughout the project.
* Tackled a significant real-world problem, contributing to a solution for one of the most prevalent issues humanity faces today, climate change and excess consumption of natural resources.
* Successfully integrated hardware with advanced software.
## 🧐 What we learned
* Training our own model using Scikit-Learn was a valuable learning experience. It taught us how to format data precisely to meet our needs.
* Using Docker and Docker Compose was highly effective. We managed to run multiple services simultaneously, which streamlined our development process.
* Working with several backends and setting up TCP tunnels using ngrok so we could reach each other’s local servers.
* Gained hands-on experience with circuitry, electronics, Arduinos, and serial ports to stream live data.
* Working with IoT technology and integrating hardware with software in real-time was demanding, but research and experience helped us overcome the challenges.
* Working in a team provided valuable insights into soft skills like communication and coordination.
* Each hackathon teaches us new skills and improves our efficiency. We learned to better utilize APIs, templates, and open-source software, as well as improve time management and planning.
## 🔜 What's next for WattWise
* Currently, our tool focuses on providing information, but it doesn't offer control over devices. A potential enhancement would be to enable users to control smart devices directly from the dashboard and view real-time updates.
* We aim to introduce detailed progress statistics similar to what you’d find on a fitness tracker like a Fitbit. This enhancement would provide users with a comprehensive view of their energy usage trends over a selectable timeframe (e.g., weekly, monthly).
|
losing
|
## Inspiration
Small scale braille printers cost between $1800 and $5000. We think that this is too much money to spend for simple communication and it has acted as a barrier for many blind people for a long time. We plan to change this by offering a quick, affordable, precise solution to this problem.
## What it does
This machine allows you to type a string (word) on a keyboard. The Raspberry Pi then identifies what was entered and controls the solenoids and servo to pierce the paper. The solenoids do the "printing" while the servo moves the paper.
A close-up video of the solenoids running: <https://www.youtube.com/watch?v=-jSG96Br3b4>
## How we built it
Using a Raspberry Pi B+, we created a Python script that recognizes all keyboard characters (inputted as a string) and outputs the corresponding Braille code. The Raspberry Pi is connected to four circuits with transistors, diodes, and solenoids/a servo motor. These circuits control how the paper is punctured (printed) and moved.
The hardware we used was: 4x 1n4004 diodes, 3 ROB-11015 solenoids, 4 TIP102 transistors, a Raspberry Pi B+, Solarbotic's GM4 servo motor, its wheel attachment, a cork board, and a bunch of Lego.
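A trimmed sketch of the printing logic is below, assuming the three solenoids punch one three-dot Braille column at a time; the GPIO pin numbers, timings, and the `advance_paper()` servo helper are placeholders rather than our exact wiring.

```python
# Hedged sketch: map characters to Braille dot columns and fire solenoids.
import time
import RPi.GPIO as GPIO

SOLENOID_PINS = [17, 27, 22]            # placeholder pins, one per dot
BRAILLE = {                              # (left column, right column) per letter
    "a": ([1, 0, 0], [0, 0, 0]),
    "b": ([1, 1, 0], [0, 0, 0]),
    "c": ([1, 0, 0], [1, 0, 0]),
    # ... remaining letters elided
}

GPIO.setmode(GPIO.BCM)
for pin in SOLENOID_PINS:
    GPIO.setup(pin, GPIO.OUT)

def punch_column(dots):
    for pin, fire in zip(SOLENOID_PINS, dots):
        GPIO.output(pin, GPIO.HIGH if fire else GPIO.LOW)
    time.sleep(0.1)                      # hold long enough to pierce the paper
    for pin in SOLENOID_PINS:
        GPIO.output(pin, GPIO.LOW)

def print_word(word):
    for ch in word.lower():
        for column in BRAILLE.get(ch, ([0, 0, 0], [0, 0, 0])):
            punch_column(column)
            advance_paper()              # hypothetical: steps the servo wheel
```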
## Challenges we ran into
The project initially had many hardware/physical problems that caused errors while trying to print Braille. The solenoids had to be in a specific position in order to pierce the paper; if the angle was incorrect, the pins would break off or the paper would stick to them. We also found that the paper would jam if there were no paper guards to hold it down.
## Accomplishments that we are proud of
We are proud of being able to integrate hardware and software into our project. Despite being unfamiliar with any of the technologies, we were able to learn quickly and create a fun project that will make a difference in the world.
## What we learned
None of us had any knowledge of Python, the Raspberry Pi, or how solenoids functioned. Now that we have done this project, we are much more comfortable working with these things.
## What's next for Braille Printer
We were only able to get one servo motor, which meant we could only move the paper in one direction. We would like to use another servo in the future to be able to print across a whole page.
|
## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images: readings are stored as arrays of numbers, and value ranges are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that is automatically played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which produces different images for the same audio input.
## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began this process by loading Firmata onto our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used Pygame's draw functions to correlate the drawing of certain shapes and colours with certain voltage ranges. Then we used a for loop to iterate through the array so that an image would be drawn for each value recorded by the Arduino.
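A condensed sketch of that pipeline, with an illustrative serial port name and arbitrary thresholds, might look like this:

```python
# Hedged sketch: read analog pin 2 via Firmata, bucket each voltage into a
# colour/shape, and draw it with Pygame. Port and thresholds are illustrative.
import time
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyACM0")          # illustrative serial port
util.Iterator(board).start()
board.analog[2].enable_reporting()

pygame.init()
screen = pygame.display.set_mode((800, 600))

readings = []
for _ in range(100):                     # sample the sound sensor
    v = board.analog[2].read()           # normalized 0.0-1.0, or None at startup
    if v is not None:
        readings.append(v)
    time.sleep(0.05)

for i, v in enumerate(readings):         # one shape per recorded value
    x, y = (8 * i) % 800, int(v * 599)
    if v < 0.3:
        pygame.draw.circle(screen, (70, 130, 180), (x, y), 10)
    elif v < 0.6:
        pygame.draw.rect(screen, (220, 160, 60), (x, y, 20, 20))
    else:
        pygame.draw.polygon(screen, (200, 60, 80),
                            [(x, y), (x + 20, y), (x + 10, y - 20)])
    pygame.display.flip()
```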
We also built a Figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of knowledge roadblocks where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with the Arduino in Python, getting the sound sensor to work, and learning the Pygame module. A big issue we ran into was that our code functioned but produced similar images for different audio inputs, making the program appear to work without achieving our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were able to tackle all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning, as none of us had experience working with the sound sensor. Another accomplishment was our Figma prototype: we built a professional, fully functioning prototype of our app with no prior Figma experience.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical, and communication aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components: the audio-processing code, the visual-output code, and the front-end app design. The next step would be to combine them and streamline their connections. Specifically, we want the two code processes to run simultaneously, outputting the developing image as the audio segment plays. In the future we also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile-device microphones. We would also like to refine the image-development process, giving the audio more control over the final art piece, and to make the drawings more artistically appealing, which will require a lot of trial and error to see which systems work best together. The Pygame module limited the types of shapes we could use in our drawings, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces.
|
## Inspiration
Our good friend's uncle was involved in a nearly fatal accident. This led to him becoming deaf-blind at a very young age, with few ways to communicate with others. To help people like our friend's uncle, we decided to create HapticSpeak, a communication tool that transcends traditional barriers. Having witnessed the challenges faced by deaf-blind individuals first-hand, we were determined to bring help to these people.
## What it does
Our project HapticSpeak takes a user's voice and converts it to text. The text is then converted to Morse code. The Morse code is sent to an Arduino over a Bluetooth module, where the Arduino decodes it into its haptic-feedback equivalents, allowing deaf-blind individuals to understand what the user said.
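A rough sketch of the text-to-Morse leg, assuming the Arduino's Bluetooth module is exposed as a serial port (the port name here is a placeholder); the Arduino-side decoder then maps dots and dashes to short and long vibration pulses.

```python
# Hedged sketch: encode text as Morse and ship it over the Bluetooth serial
# link; the Arduino firmware turns '.'/'-' into short/long haptic pulses.
import serial

MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "e": ".",
    "o": "---", "s": "...",
    # ... remaining letters elided
}

def send_as_morse(text, port="/dev/rfcomm0"):     # placeholder port name
    with serial.Serial(port, 9600, timeout=1) as link:
        for ch in text.lower():
            link.write((MORSE.get(ch, "") + " ").encode())  # space = letter gap
        link.write(b"\n")                          # newline ends the message

send_as_morse("sos")   # -> "... --- ... \n" on the haptic side
```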
## How we built it
## Challenges we ran into
## Accomplishments that we're proud of
## What we learned
## What's next for HapticSpeak
|
winning
|
## Inspiration
Our team has consistently tried to improve our personal school experiences slowly but surely by optimizing every small process we can. This tool takes it to the next level and completely streamlines the learning process up to the point where you can start studying!
## What it does
Noq listens in on your lecture, compiling and categorizing the lecture’s most important points for you. They are automatically expanded and revised, with separate ideas split apart and readied for vectorization. Once entered into the vectorstore, the RAG search engine allows context-driven searching through your notes to easily find what you need. You can also generate diagrams for specific lines within a note, easily visualizing what has been summarized.
## How we built it
Using Groq as the main infrastructure behind our application, we leveraged its very fast inference to run multiple calls in our backend AI-agent network almost instantly. ChromaDB served as the vectorstore for the notes and powers our semantic search engine for easy querying. The website itself was built with Next.js and Tailwind.
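As a taste of the vectorstore side, a minimal ChromaDB sketch (with invented note text and ChromaDB's default embedding function) looks like this:

```python
# Hedged sketch of the note vectorstore and RAG-style search.
import chromadb

client = chromadb.Client()
notes = client.create_collection("lecture_notes")

notes.add(
    ids=["n1", "n2"],
    documents=[
        "Dijkstra's algorithm finds shortest paths using a priority queue.",
        "Dynamic programming caches overlapping subproblem results.",
    ],
    metadatas=[{"lecture": "graphs"}, {"lecture": "dp"}],
)

# Context-driven search: embed the query, return the closest notes, then
# stuff them into the LLM prompt as grounding context.
hits = notes.query(query_texts=["how do I find the fastest route?"], n_results=1)
print(hits["documents"][0][0])   # -> the Dijkstra note
```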
## Challenges we ran into
It was hard creating a complex real-time app with minimal delay and without streaming, all while AI agents acted in the background.
## Accomplishments that we're proud of
Being able to integrate external tools and technologies, such as embedding models, with Groq, and maintaining multiple servers that interact with each other to create a seamless and extremely fast experience.
## What we learned
Frontend is hard
## What's next for Noq
Completely integrate it with personal calendars and email, using tool calling for extracting upcoming events and deadlines and automatically creating reminders for those. This is a product we could easily see becoming a staple in our daily lives, and will definitely continue improving upon it!
|
## 💡 Inspiration
You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends that can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind “Will I ever pass this course?”
If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle with paying attention in class, missing class, have a rough home environment, or just want to get ahead in their studies.
We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit.
## 🔍 What it does
First, our AI-powered summarization engine creates a set of live notes based on the current lecture.
Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos!
Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind.
## ⭐ Feature List
* Dashboard with all your notes
* Summarizes your lectures automatically
* Select/Highlight text from your online lectures
* Organize your notes with intuitive UI
* Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime
* Text simplification, definitions, and synonyms anywhere on the web
* DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation.
## ⚙️ Our Tech Stack
* Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API
* Web Application: Chakra UI + React.js, Next.js, Vercel
* Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js
* Infrastructure: Firebase/Firestore
## 🚧 Challenges we ran into
* Completing our project within the time constraint
* There were many APIs to integrate, which meant we spent a lot of time debugging
* Working with the Google Chrome Extension API, which we had never worked with before.
## ✔️ Accomplishments that we're proud of
* Learning how to work with Google Chrome Extensions, which was an entirely new concept for us.
* Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use.
## 📚 What we learned
* The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out!
* Working on a project where you can relate helps a lot with motivation
* Chakra UI is legendary and a lifesaver
* The Chrome Extension API is very difficult, did we mention that already?
## 🔭 What's next for AcadeME?
* Implementing a language translation toggle to help international students
* Note Encryption
* Note Sharing Links
* A Distributive Quiz mode, for online users!
|
## Inspiration
We know the struggles of students: trying to get to that one class across campus in time, deciding what to make for dinner. But one stuck out to all of us: finding a study spot on campus. There have been countless times when we wandered around Mills or Thode looking for a free space to study, wasting precious study time before an exam. So, taking inspiration from parking lots, we designed a website that presents a live map of the free study areas of Thode Library.
## What it does
A network of small, mountable microcontrollers uses ultrasonic sensors to check whether a desk/study spot is occupied. In addition, machine learning on the aggregated sensor data determines peak hours and suggested availability. A webpage presents a live map, as well as the peak hours and suggested availability.
## How we built it
We used a Raspberry Pi 3B+ to receive distance data from an ultrasonic sensor and a Python script to push the data to our MongoDB database. The data is then pushed to our webpage, which runs Node.js and Express.js as the backend, where it is updated in real time on a map. Using the data stored in our database, a machine learning algorithm was trained to determine peak hours and the best time to go to the library.
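The sensing script boils down to something like the sketch below, assuming an HC-SR04-style sensor; the pin numbers, the 60 cm occupancy cutoff, and the Mongo document shape are illustrative.

```python
# Hedged sketch: time an ultrasonic echo pulse, convert to distance, and
# upsert the desk's occupancy into MongoDB every few seconds.
import time
import RPi.GPIO as GPIO
from pymongo import MongoClient

TRIG, ECHO = 23, 24                      # placeholder pins
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

desks = MongoClient("mongodb://db-host:27017")["desklib"]["desks"]

def distance_cm():
    GPIO.output(TRIG, True); time.sleep(0.00001); GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2     # speed of sound, there and back

while True:
    occupied = distance_cm() < 60        # something is sitting at the desk
    desks.update_one({"desk_id": "thode-2f-14"},
                     {"$set": {"occupied": occupied, "ts": time.time()}},
                     upsert=True)
    time.sleep(5)
```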
## Challenges we ran into
We had a **life-changing** experience learning back-end development, delving into new frameworks such as Node.js and Express.js. Although we were comfortable with front-end design, linking the front end and the back end together to ensure the web app functioned as intended was challenging. For most of the team, this was the first time dabbling in ML. While we were able to find a Python library to assist us with training the model, connecting the model to our web app with Flask was a surprising challenge. In the end, we persevered through these challenges to arrive at our final hack.
## Accomplishments that we are proud of
We think that our greatest accomplishment is the sheer amount of learning and knowledge we gained from doing this hack! Our hack seems simple in theory, but putting it together was one of the toughest experiences at any hackathon we've attended. Pulling through and not giving up until the end was also noteworthy. Most importantly, we are all proud of our hack and cannot wait to show it off!
## What we learned
Through rigorous debugging and non-stop testing, we gained more experience with JavaScript and its various frameworks, such as Node.js and Express.js. We also got hands-on experience with programming concepts and tools such as MongoDB databases, machine learning, HTML, and scripting, learning the applications of each.
## What's next for desk.lib
If we had more time to work on this hack, we would have increased cost-effectiveness by branching four sensors off one chip. We would also implement features that make an impact in other areas, such as the ability to create social group beacons that others can join for study, activities, or general socialization. We also debated integrating a solar panel to make the installation process easier.
|
partial
|
## Inspiration
Wanting to build an FPS VR game.
## What it does
Provides an ultra-fun experience to all players, taking older folks back to their childhood and showing younger ones the beauty of classic arcade games!
## How we built it
* Unity as the game engine
* Android as the platform
* socket.io for multiplayer
* C# for client-side code
## Challenges we ran into
We coded our own custom backend in Node.js to enable multiplayer in the game. It was difficult to use web sockets in the C# code to transfer game data to other players. It was also a challenge to sync everything, from player movement to shooting lasers to map data, all at the same time.
## Accomplishments that we're proud of
We were able to make the game multiplayer with a custom backend.
## What we learned
Unity, C#
## What's next for Space InVRders
Add other game modes and more kinds of ships, and store high scores.
|
## Inspiration
During the pandemic, we found ourselves sitting down all day long in a chair, staring into our screens and stagnating away. We wanted a way for people to get their blood rushing and have fun with a short but simple game. Since we were interested in getting into Augmented Reality (AR) apps, we thought it would be perfect to have a game where the player has to actively move a part of their body around to dodge something you see on the screen, and thus Splatt was born!
## What it does
All one needs is a browser and a webcam to start playing the game! The goal is to dodge falling barrels and incoming cannonballs with your head, but you can also use your hands to "cut" down the projectiles (you'll still lose partial lives, so don't overuse your hand!).
## How we built it
We built the game using JavaScript, React, TensorFlow, and WebGL2. Horace worked on the 2D physics, getting the projectiles to fall and be thrown around, as well as on the hand tracking. Thomas worked on the head tracking using TensorFlow, outputting the values we needed to implement collision, and on the basic game menu. Lawrence worked on connecting the projectile physics and the head/hand tracking together to ensure collisions could be detected properly, as well as restructuring the app to be better optimized.
## Challenges we ran into
It was difficult getting both the projectiles and the head/hand from the video onto the same layer. We had initially used two separate canvases, but we quickly realized it would be difficult to communicate from one canvas to another without causing too many rerenders. We ended up using a single canvas, and after adjusting how we retrieved the coordinates of the projectiles and the head/hand, we were able to get collisions to work.
## Accomplishments that we're proud of
We're proud of how we divvied up the work and connected everything together to get a working game. During development, we were excited to get collisions working, since that was the biggest piece needed to make our game complete.
## What we learned
We learned more about implementing 2D physics in JavaScript, how to use TensorFlow to create AR apps, and a little bit of machine learning along the way.
## What's next for Splatt
* Improving the UI for the game
* Difficulty progression (1 barrel, then 2 barrels, then 2 barrels and 1 cannonball, and so forth)
|
## Inspiration
The inspiration comes from my (Mari Liis's) childhood. When I was 7-8 years old, I read a children's book about the beauty of mathematics, and one of the topics discussed was Pascal's Triangle. I was amazed by the concept, and I spent weeks drawing triangles and coloring in different numbers, trying to see what patterns would emerge. Today, I'm studying to become a mathematician, and that little book just might have played a significant role...
## What it does
It visualizes Pascal's Triangle and provides interesting mathematical details. From the menu, it is possible to highlight numbers divisible by 2, 3, 4, 5, 6 and 7, to highlight certain interesting diagonals, and more.
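As a quick illustration of why the divisibility highlighting is so striking, Pascal's Triangle mod 2 traces out the Sierpinski triangle; a few lines of Python built from the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k) show the pattern:

```python
# Pascal's triangle mod 2: '#' marks odd entries, '.' marks even ones.
rows = [[1]]
for n in range(1, 16):
    prev = rows[-1]
    rows.append([1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1])

for row in rows:
    print("".join("#" if v % 2 else "." for v in row).center(16))
```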
## How I built it
For hardware, we used a Samsung Gear VR with a Samsung Galaxy S7 Edge. To develop the project, we used Unity and C#.
## Challenges I ran into
Various aspects of VR development were definitely challenging, since we had never done it before! Making the input work and getting the visuals right was difficult, but also fun. Ray casting was probably the most difficult challenge, but we managed to get it working nonetheless.
## Accomplishments that I'm proud of
Since we had never done development for VR before, and we managed to finish this product, we're quite happy and proud. We learned that taking up new challenges and working with completely new technologies is not as hard as we thought it would be, and this gives us courage to attempt it again in the future!
## What I learned
We improved our coding skills and Unity development skills. None of us had created projects for Android before, let alone Virtual Reality, so there was a lot to learn!
## What's next for PascalTriangled
There are so many modifications that can be made! There are many, many more interesting aspects of the triangle worth noting, more descriptions and proofs/justifications of interesting patterns to be added. It excites me just to think about all the mathematical aspects of that triangle that can be explored.
|
partial
|
## Inspiration
In today's fast-paced digital world, creating engaging social media content can be time-consuming and challenging. We developed Expresso to empower content creators, marketers, and businesses to streamline their social media workflow without compromising on quality or creativity.
## What it does
Expresso is an Adobe Express plugin that revolutionizes the process of creating and optimizing social media posts. It offers:
1. Intuitive Workflow System: Simplifies the content creation process from ideation to publication.
2. AI-Powered Attention Optimization: Utilizes a human attention (saliency) model (SUM) to provide feedback on maximizing post engagement.
3. Customizable Feedback Loop: Allows users to configure iterative feedback based on their specific needs and audience.
4. Task Automation: Streamlines common tasks like post captioning and scheduling.
## How we built it
We leveraged a powerful tech stack to bring Expresso to life:
* React: For building a responsive and interactive user interface
* PyTorch: To implement our AI-driven attention optimization model
* Flask: To create a robust backend API
## Challenges we ran into
Some of the key challenges we faced included:
* Integrating the SUM model seamlessly into the Adobe Express environment
* Optimizing the AI feedback loop for real-time performance
* Ensuring cross-platform compatibility and responsiveness
## Accomplishments that we're proud of
* Successfully implementing a state-of-the-art human attention model
* Creating an intuitive user interface that simplifies complex workflows
* Developing a system that provides actionable, AI-driven insights for content optimization
## What we learned
Throughout this project, we gained valuable insights into:
* Adobe Express plugin development
* Integrating AI models into practical applications
* Balancing automation with user control in creative processes
## What's next for Expresso
We're excited about the future of Expresso and plan to:
1. Expand our AI capabilities to include trend analysis and content recommendations
2. Integrate with more social media platforms for seamless multi-channel publishing
3. Develop advanced analytics to track post performance and refine optimization strategies
Try Expresso today and transform your design and marketing workflow!
|
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)
Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.

## 💫 Inspiration
>
> "Art is the signature of civilizations." - Beverly Sills
>
>
>
Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***
We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in art, interact with the real world digitally, and share your creations with your friends!
>
> "Art is the reflection of life, and life is the reflection of art." - Unknow
>
>
>
Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.



Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.

>
> "Art is not what you see, but what you make others see." - Claude Monet
>
>
>
The features I listed above are great! But what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.


I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, Make good art.
# ❤️ Use Cases
### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece, using the computer vision features such as the color picker and eraser.
### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.
### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.
### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real-time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.
# 👋 Gestures Tutorial





# ⚒️ Engineering
Ah, this is where even more fun begins!
## Stack
### Frontend
We designed the frontend with Figma, and after a few iterations we had an initial design to begin working with. The frontend was made with React and TypeScript and styled with Sass.
### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails we simply use a filesystem database.
## Computer Vision AI
We use MediaPipe to grab the coordinates of the joints and to upload images. With the coordinates, we plot with CanvasRenderingContext2D on the canvas, where we use algorithms and vector calculations to determine the gesture. Then, for image generation, we use the DeepAI open-source library.
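As a Python illustration of the gesture logic (our build runs MediaPipe in the browser with TypeScript), a "pen" pose can be detected from the landmark geometry: index finger extended, the others curled. The thresholds and the `draw_at()` canvas call are hypothetical.

```python
# Hedged sketch: classify a "pen" gesture from MediaPipe hand landmarks.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

def is_pen_gesture(lm):
    # Landmark 8 is the index fingertip, 6 its middle joint; a tip above its
    # joint (smaller y in image coordinates) means the finger is extended.
    return lm[8].y < lm[6].y and lm[12].y > lm[10].y and lm[16].y > lm[14].y

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        if is_pen_gesture(lm):
            draw_at(lm[8].x, lm[8].y)    # hypothetical canvas plotting call
```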
# Experimentation
We experimented with generative AI to generate images; however, we ran out of time.


# 👨💻 Team (”The Sprint Team”)
@Sheheryar Pavaz
@Anton Otaner
@Jingxiang Mo
@Tommy He
|
## 💡 Inspiration 💡
>
> *“There is no lack of educative content, but the correct delivery”*
>
>
>
In an age where educational content is abundant, the challenge lies not in its availability but in its effective delivery and the unique perspective of the presenter. While technology like Google Search, ChatGPT, and advanced AI search bots like Perplexity.ai have made accessing content effortless, education efficiency isn’t limited by the speed of information queries. It is, rather, bounded by the speed of your brain’s information intake, which can only be boosted via proactively engaging, personalized, and even addictive learning experiences.
We planned to achieve this via 5 tools:
1. 🖼Multimedia: LLM, text-to-image, AI Agent, and web-scraping enable a textual, visual, and interactive experience
2. 🔎Diverse Content Perspectives: By tweaking prompts with different keywords, we can simulate a variety of content creators, offering a rich spectrum of views and styles.
3. 📈Recommendation Algorithms: With LLMs, tagging and describing content becomes scalable and cost-effective, enhancing the efficiency of recommendation systems.
4. 🎮Gamification: Use gamification to improve engagement.
5. 💰Financial incentive: LLM automatically turns learners (question-askers and answer-seekers) into teachers (answer-generators) by reusing answers. In short, people get compensated for asking valuable frequently asked questions.
These elements have proved to be crucial for any content platform seeking viral success. Fortunately, the advent of Large Language Models (LLMs) has revolutionized the potential of content platforms, making it easier than ever to satisfy these critical aspects.
## 💻 What it does 💻
### 1️⃣Recommendation Driven Generation
We track and collect the user’s behavior data and interaction with the feed, and gradually learn the user's interest profile. Then, the learned profile, represented by a set of weighted tags, is used to accurately and automatically generate the user’s favoured content.
### 2️⃣Learning Journey Decomposer
Unlike ChatGPT or Perplexity, which spit out ineffective, lengthy responses when given users’ complex learning goals, we decompose those goals into a sequence of general, reusable learnable modules. This sequence, which we call a “learning journey”, gives just the needed amount of knowledge of every necessary aspect for you to reach your learning goal. Furthermore, since each module is reusable, it not only makes your learning transferable and generalizable but also boosts other users’ usage of your generated content, thereby boosting your reward.
### 3️⃣Subjectiveness, but safely and beneficially
Every piece of content we generate is a RAG (Retrieval-Augmented Generation) output built from trustworthy human-generated content on the Internet (with sources attached). This introduces subjectiveness, human nature, and opinions into the content, which is not achievable solely through generative LLMs. While enforcing safety via negative prompts, the introduced subjectiveness brings a humanized flavour to learners, making the content less rigid and more entertaining.
### 4️⃣ Gamified Mini Course Maker
Given a topic card, a gamified mini-course, with beautiful visuals and short multiple-choice questions, can be automatically created. Your performance will impact how much XP you will receive, which will impact your levelling progress.
## 👨💻How we built it 👨💻
How do we create contextual content for complex learning goals while reusing content?
### Content Generation 1️⃣- User Generated Content
Users can ask questions and are assisted by LLMs, via natural language, in tailoring the learning goal into desirable modules. The user can iterate on the division of content into modules and, when satisfied, a similarity search using InterSystems finds existing content modules that can be reused. Existing modules are combined with newly generated modules to create a full “Learning Journey” that meets the user’s learning goals.

### Content Generation 2️⃣ - Automatically Generated Content
As the user scrolls their feed, we collect various data points like their CTR and interactions, which is used to assign weighted tags to the user interest profile. Users are recommended content modules based on their past activity using a custom recommendation algorithm on Pinecone. This is existing content that has been generated by other users that we think this user will find helpful using a combination of the user’s tags, and content tags, as well as tracking the user’s time spent and click-through rate on content.
However, because the content is customized specifically to the user’s interests and past questions, there could be a lack of “relevant" content for the user. Rather than showing the user something irrelevant, we utilized RSS feeds from forums like Reddit, Medium, and Quora to generate new “hot” questions that are specific to the user’s needs and interests. These questions follow the same flow described above to generate new cards, modules, and tags.

### Rest of the App
The application was built using React-Native and Expo for the front end and Flask and Supabase for the back end.
## 😬 Challenges we ran into 😬
#### HCI (Human Computer Interaction) - ✅solved
On the course-making page, it is hard to build an AI agent that uses natural language to communicate about and manipulate the designed Learning Journey JSON object. Plain chat completion doesn't work, since actions like add/delete/modify are not guaranteed to be precise and can cause unexpected errors. We used the OpenAI Assistants API with function calling to ensure stability, but the high usage cost held us back.
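For reference, the shape of the function-calling setup we landed on, with the tool schema abbreviated and the function and field names hypothetical:

```python
from openai import OpenAI

client = OpenAI()

# Exposing add/delete/modify as explicit tools keeps edits structured, instead of
# trusting a free-form completion to rewrite the journey JSON correctly.
tools = [{
    "type": "function",
    "function": {
        "name": "add_module",
        "description": "Insert a learning module at a position in the journey.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "position": {"type": "integer"},
            },
            "required": ["title", "position"],
        },
    },
}]  # delete_module / modify_module are defined the same way

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Add an intro to eigenvalues before module 3"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)  # a structured, validatable edit
```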
#### Scalable RecSys - ✅solved
With 2000+ generated learning cards, how do we build a RecSys that behaves like a deep-learning recommender without any deep-learning infrastructure, and without enough data to train one?
#### User Behaviour Tracking and RecSys data collection - ✅solved
Content displayed in full screen doesn't have CTR data. And unlike TikTok, where video playback completion rate can be easily collected, a reading completion rate has to be inferred from the user's reading speed. We used Bayesian inference as a fast and robust way to make that estimate.
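Our actual model has more moving parts, but a toy version of the inference, assuming a Gaussian posterior over reading speed with hypothetical priors, captures the idea:

```python
# Toy Bayesian estimate of reading completion: keep a Gaussian posterior over the
# user's reading speed (words/sec) and infer how much of a card was plausibly read.
PRIOR_MEAN, PRIOR_VAR = 4.0, 1.0   # hypothetical prior: ~4 words/sec
OBS_VAR = 2.0                      # noise assumed in any single observation

def update_speed(mean: float, var: float, observed_speed: float) -> tuple[float, float]:
    """Conjugate normal update with known observation variance."""
    precision = 1 / var + 1 / OBS_VAR
    new_var = 1 / precision
    new_mean = new_var * (mean / var + observed_speed / OBS_VAR)
    return new_mean, new_var

def completion(dwell_seconds: float, word_count: int, speed_mean: float) -> float:
    """Fraction of the card the user plausibly read, capped at 1."""
    return min(1.0, dwell_seconds * speed_mean / word_count)

mean, var = PRIOR_MEAN, PRIOR_VAR
mean, var = update_speed(mean, var, observed_speed=3.2)  # e.g. a 160-word card read in 50s
print(round(completion(dwell_seconds=20, word_count=200, speed_mean=mean), 2))  # ~0.37
```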
#### Not enough Pinecone indices - ❌unsolved
To determine reusable content, we need to compare the cosine similarity of card contents. But on the free tier, we only have 1 available index to use, so reusability went unsolved due to a lack of resources. We pivoted to InterSystems for a local DB solution.
## 😤 Accomplishments that we're proud of 😤
* A complete recommendation system: we built the entire pipeline of user behaviour tracking, data collection, modelling, and efficient candidate picking and ranking, achieving performance comparable to a DNN-based system without using any neural network
* Bringing humanity and subjectiveness into AIGC (AI-generated content): through tags from learned user interest profiles, prompt engineering, web scraping, and querying popular websites' top RSS feed pages, we can systematically determine which topics are viral, trending, and worth generating, and then generate human-like content through RAG, with sources attached
* Coercing GPT into returning JSON data in the new JSON mode was difficult and required a lot of prompt engineering to ensure that the output was stable and consistent
## 🧠 What we learned 🧠
* It is possible to systematically generate viral and enticing textual content.
* Recommendation systems for full-screen content and non-full-screen content can be very different. Negative samples are hard to collect.
* There is a spectrum between casual reading, casual learning, and serious learning. An LLM can create a smooth transition from the most casual to the most serious by developing knowledge details through stratified content and by adjusting style from entertaining to profound.
* The Assistants API is still in beta and doesn't support JSON mode, which made the API much less useful; the AI agent swarm is yet to come.
* In the age of LLMs, a single developer can build a recommendation system, with no shortage of computing power and data.
* We had a lot of ambitious goals for Iearn as a product and it was important to focus on delivering an MVP before focusing on additional “nice-to-haves”.
## 🔭 What's next for Iearn? 🔭
* Polishing the Assistants API agent for the learning journey decomposer
* Building out a functional API to track the nitty-gritty gamification features
* Multiple card types and more multimedia: introducing music, video, cooking recipes, mini JavaScript games, etc. We have one type of content module today, but the possibilities are endless across the different formats of AI-generated media.
* Socialization: building social media features for more user interaction
|
winning
|
## Inspiration
As society trends to new and more powerful technology every year, so does our reliance on energy increase. With a higher consumption of energy comes a larger carbon footprint. We at WattSaver wish to create a more environmentally and financially sustainable future. Instead of tolerating wasteful use of energy, we aim to do our part in reducing our wasteful habits and our carbon footprint.
## What it does
WattSaver is an energy management app that can monitor, advise, and moderate our energy use wirelessly. Using our WattSaver app and a Wyze Smart Plug, you can review your energy usage over any timeframe you want and see live location-based charts of price per kilowatt-hour. If you notice unintentional energy consumption while you are away, you can wirelessly deactivate the smart plug and save on your energy bill. WattSaver also includes a chart with the sources of energy you would be using based on your time and location so you can use more energy from renewable sources.
## How we built it
Front-End Development:
We chose React-Native to create WattSaver's user interface. With limited prior experience, we relied on tutorial videos to build the app from the ground up. The resulting interface met our vision for WattSaver, offering a user-friendly experience.
Back-End:
We used Node.js and Flask to create our backend. The first Flask server fetches data from our Wyze plug and writes it to MongoDB. The second Flask server fetches data from the [Independent Electricity System Operator](https://www.ieso.ca/en/) and stores it in MongoDB. Both are stored as time series data to allow for future data manipulation. The Node.js server lets the user call our API endpoints to update and retrieve data. We containerized our backend to enable deployment to the cloud in the future.
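A minimal sketch of the first Flask service (the Atlas URI, plug name, and Wyze call are hypothetical stand-ins; the real poller authenticates against the Wyze API):

```python
from datetime import datetime, timezone
from flask import Flask
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb+srv://...")["wattsaver"]  # hypothetical Atlas URI

# A time-series collection keeps timestamped readings compact and fast to range-query.
if "readings" not in db.list_collection_names():
    db.create_collection("readings", timeseries={"timeField": "ts", "metaField": "plug"})

def fetch_wyze_usage() -> float:
    return 42.0  # placeholder: the real code queries the Wyze plug for watts

@app.route("/poll")
def poll():
    db.readings.insert_one({
        "ts": datetime.now(timezone.utc),
        "plug": "living-room",
        "watts": fetch_wyze_usage(),
    })
    return {"ok": True}
```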
Hardware Integration:
Our determination to have a useful and unique project led us to discover a compatible smart outlet, marking our first experience with physical components in a project. This successful integration taught us invaluable lessons about the synergy between software and hardware.
## Challenges we ran into
During the planning phase, we explored numerous project ideas, and one concept that particularly resonated with us was WattSaver. However, our journey with WattSaver encountered its first hurdle when we procured hardware that proved to be incompatible with our system's energy usage tracking requirements. This hardware, initially only serving as a wireless on/off switch, left us contemplating a switch to an alternative project idea. Ultimately, we decided to persist with WattSaver, driven by our discovery of an appropriate smart outlet that would enable us to achieve our goals.
Yet, this was not the only significant challenge we faced. We embarked on the ambitious task of using React-Native for the first time, learning its intricacies on the fly. Initially, we grappled with the fundamental aspects of creating a homepage from scratch, including the complexities of crafting various elements and incorporating images seamlessly. The journey through React-Native was marked by syntax-related issues, and we often found ourselves delving into the intricacies of importing libraries for specialized functionalities, further enhancing the learning curve.
There were challenges with MongoDB Atlas: we set the global IP whitelist entry to expire after a few hours, which resulted in requests working on only some machines. We also had difficulty with time zones, since our data came from multiple different sources.
## Accomplishments that we're proud of
One achievement we're proud of is how we developed the system to track and analyze energy consumption trends over time. Our work provided users with valuable insights into their energy usage, a vital aspect of encouraging sustainable energy habits.
We excelled in handling the dynamic and hardware elements of WattSaver, particularly the successful integration of the smart outlet. This addition allowed users to remotely control their energy consumption, enhancing the app's practicality and user experience. Our expertise significantly contributed to WattSaver's impact.
## What we learned
Since a number of our problems arose during the planning phase and carried through into development, next time we want to build a more solid plan ahead of time and do adequate research into the topic. This hackathon was very impactful for each member of our group, as each of us learned something new. It was also the first time we included a physical component in a project, and it works successfully.
## What's next for WattSaver
In the future, we would like to use A.I. to analyze energy usage data and advise a more sustainable usage habit for the user without disrupting normal spending too much. Another feature we would like to include is an algorithm that will charge appliances during off-peak hours and disable power output once the equipment is fully charged. We would do this by analyzing usage data, taking into account user hours of availability and equipment category such as cellphones or electric vehicles, and plan the most effective time schedule for charging.
|
## Inspiration
Interaction with indoor plants can reduce physiological and psychological stress, according to previous psychological and neurological studies. In addition, many people face a common issue of forgetting when to water their plants or how to take care of them. The goal of this project was to create a game-like app that helps users learn to care for their plants while also reducing their stress levels.
## What it does
This app helps users learn how to care for their plants while reducing their stress levels. The app also changes colour according to the time of day: blue at night to induce relaxation and yellow during the day to promote productivity.
## How I built it
**Design:** Sketch and InVision were used for lo-fi, mid-fi, and hi-fi designs, along with interactive prototypes.
**Front-end:** The entire app was built natively using Swift 5.1 in Xcode 11 using UIKit, UserDefaults, NotificationCenter, and custom Protocol-Delegation. The architecture conformed to Apple's Model-View-Controller style.
**Back-end:** Express.js was used to create REST URIs which the front-end could communicate with to retrieve the data, and our server was hosted on an AWS EC2 instance so that our API could be publicly queried. MongoDB Atlas was used to persist our user and plant collection data.
## Challenges I ran into
**Design:** After doing several user tests and iterations, I found that certain aspects of the UI, such as the 'add plant' button, were unintuitive or caused confusion. Due to time constraints, I could only make small improvements which I was not yet able to validate through testing, but I took into account the feedback from previous tests.
**Front-end:** Integrating with our Node.js backend hosted on AWS was one big hurdle. However, I learned a lot about implementing client-server communications in Swift.
**Back-end:** I found it a bit difficult getting used to and debugging the MongoDB Atlas API.
## Accomplishments that I'm proud of
**Design:** Leaving enough time to do project research, user testing, and further iterations despite the time constraint. I found psychology/neuroscience articles to better understand the connection between indoor plant care and stress, in addition to validating my designs through testing.
**Front-end:** We are proud of our intuitive and cute UI. One of our biggest accomplishments was building the app to our UI designer's exact specifications from Sketch.
**Back-end:** Figuring out how to host a server on AWS.
## What I learned
**Design:** It is important to do user testing as I am designing so that I can make small iterations and improvements along the way rather than starting over from the beginning each time.
**Front-end:** We learned how to use Protocol-Oriented Programming, Swift Generics, sending a wide variety of HTTP Requests over REST, and encoding and decoding JSON in Swift.
**Back-end:** It's important that we all agree on high-level architecture and design before we get into the nitty-gritty details of implementation. Any problems we can address earlier on in the process helps us reduce the amount of roadblocks we encounter later on.
## What's next for Sprouts
**Design:** Creating screens for outdoor gardening (currently the UI is catered towards indoor plants) and trying to add different types of weather, such as rain, for the background. Further testing and improvements on current interfaces as well.
**Back-end:** Making the log-in and user registration functionality more secure using JWT authentication as well as setting up relations in the database for less data redundancy.
|
## Inspiration
We wanted to create a proof-of-concept for a potentially useful device that could be used commercially and at a large scale. We ultimately decided to focus on the agricultural industry, as we feel there's a lot of room for innovation in this space.
## What it does
The PowerPlant uses sensors to detect whether a plant is receiving enough water. If it's not, then it sends a signal to water the plant. While our proof of concept doesn't actually receive the signal to pour water (we quite like having working laptops), it would be extremely easy to enable this feature.
All data detected by the sensor is sent to a webserver, where users can view the current and historical data from the sensors. The user is also told whether the plant is currently being automatically watered.
## How I built it
The hardware is built on an Arduino 101, with dampness detectors used to sense the state of the soil. We run custom scripts on the Arduino to display basic info on an LCD screen. Data is sent to the webserver via a program called Gobetwino, and our JavaScript frontend reads this data and displays it to the user.
## Challenges I ran into
After choosing our hardware, we discovered that MLH didn't have an adapter to connect it to a network. This meant we had to work around this issue by writing text files directly to the server using Gobetwino. This was an imperfect solution that caused some other problems, but it worked well enough to make a demoable product.
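Gobetwino is a Windows helper that watches the Arduino's serial port and writes files on its behalf; for reference, a rough Python stand-in with pyserial (the port name, baud rate, and log path are assumptions) would look like:

```python
import serial  # pyserial

# Mirror of our Gobetwino setup: read sensor readings off the Arduino's serial
# port and append them to a text file that the web frontend polls.
with serial.Serial("COM3", 9600, timeout=5) as port, open("sensor_log.txt", "a") as log:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line:  # e.g. "dampness:412"
            log.write(line + "\n")
            log.flush()  # keep the file fresh for the frontend's next poll
```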
We also had quite a lot of problems with Chart.js. There are some undocumented quirks to it that we had to deal with - for example, data isn't plotted on the chart unless a label for it is set.
## Accomplishments that I'm proud of
For most of us, this was the first time we'd ever created a hardware hack (and competed in a hackathon in general), so managing to create something demoable is amazing. One of our team members even managed to learn the basics of web development from scratch.
## What I learned
As a team we learned a lot this weekend - everything from how to make hardware communicate with software, to the basics of developing with Arduino, to how to use the Chart.js library. English isn't the first language of two of our team members, so managing to achieve this is incredible.
## What's next for PowerPlant
We think that the technology used in this prototype could have great real world applications. It's almost certainly possible to build a more stable self-contained unit that could be used commercially.
|
losing
|
## Inspiration
We were trying for an IM-meets-MS-Paint experience, and we think it looks like that.
## What it does
Users can create conversations with other users by putting a list of comma-separated usernames in the To field.
## How we built it
We used Node JS combined with the Express.js web framework, Jade for templating, Sequelize as our ORM and PostgreSQL as our database.
## Challenges we ran into
Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging.
## Accomplishments that we're proud of
Getting a (mostly) fully functional chat client up and running in 24 hours!
## What we learned
We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production environment node app.
## What's next for SketchWave
We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would like to include mobile platform support via a responsive web design as well, and possibly in the future even have a mobile app.
|
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm.
## About
Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface.
We made it one page to give access to all the tools on one screen and make transitions between them easier.
We treat this page as a study room that users can join with a simple URL and collaborate in.
Everything is Synced between users in real-time.
## Features
Our platform allows multiple users to enter one room and access tools for watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This makes collaboration between users seamless and also pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussion. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at once. We optimized the process significantly for smooth real-time interactions.
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
Adding more relevant tools and widgets, and expanding into other fields of work to grow our user demographic.
Including interface customization options to allow users to personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
|
## Inspiration
People struggle to work effectively in a home environment, so we were looking for ways to make it more engaging. Our team came up with the idea for InspireAR because we wanted to design a web app that could motivate remote workers to be more organized in a fun and interesting way. Augmented reality seemed very fascinating to us, so we came up with the idea of InspireAR.
## What it does
InspireAR consists of the website, as well as a companion app. The website allows users to set daily goals at the start of the day. Upon completing all of their goals, the user is rewarded with a 3-D object that they can view immediately using their smartphone camera. The user can additionally combine their earned models within the companion app. The app allows the user to manipulate the objects they have earned within their home using AR technology. This means that as the user completes goals, they can build their dream office within their home using our app and AR functionality.
## How we built it
Our website is implemented using the Django web framework. The companion app is implemented using Unity and Xcode. The AR models come from echoAR. Languages used throughout the whole project consist of Python, HTML, CSS, C#, Swift and JavaScript.
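The data model isn't shown above, but a sketch of how the goals and earned rewards could be laid out in Django gives the flavour (all field names here are hypothetical):

```python
from django.db import models
from django.contrib.auth.models import User

class Goal(models.Model):
    """One daily goal; completing all of a day's goals earns a 3-D reward."""
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    text = models.CharField(max_length=200)
    date = models.DateField(auto_now_add=True)
    completed = models.BooleanField(default=False)

class EarnedModel(models.Model):
    """A 3-D reward (an echoAR asset) the companion app can place in AR."""
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    echoar_id = models.CharField(max_length=64)  # hypothetical echoAR asset reference
    earned_on = models.DateField(auto_now_add=True)
```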
## Challenges we ran into
Our team faced multiple challenges, as this was our first time ever building a website. Our team also lacked experience in creating back-end relational databases and in Unity. In particular, we struggled with orienting the AR models within our app. Additionally, we spent a lot of time brainstorming different possibilities for user authentication.
## Accomplishments that we're proud of
We are proud of our finished product, though the website is the strongest component. We were able to create an aesthetically pleasing, bug-free interface in a short period of time and without prior experience. We are also satisfied with our ability to integrate echoAR models into our project.
## What we learned
As a team, we learned a lot during this project. Not only did we learn the basics of Django, Unity, and databases, we also learned how to divide tasks efficiently and work together.
## What's next for InspireAR
The first step would be increasing the number and variety of models to give the user more freedom with the type of space they construct. We have also thought about expanding into the VR world using products such as Google Cardboard, and other accessories. This would give the user more freedom to explore more interesting locations other than just their living room.
|
winning
|
## Inspiration
We wanted to help students learn about financial budgeting, ensuring that they learn to save money and budget effectively.
## What it does
Implemented a scanner system in which users can scan barcodes to input costs, rather than entering them manually.
## How I built it
We used Flutter and Dart to create the application, and JSON for the backend. The website was created with HTML/CSS/JS to showcase and demonstrate the UX interface for the application. The UX design was created with Illustrator and InVision.
## Challenges I ran into
We faced many bugs with the scanner system, as well as formatting for the website.
## Accomplishments that I'm proud of
The scanner is able to update in real time and present data back to users.
## What I learned
Zach (1st Hackathon) - Learned how to use the Flutter framework, how to use functions within code, and how to create UIs in applications
Nephthalim (3rd Hackathon) - Flutter with backend development
Oliver (2nd Hackathon) - Website development with HTML/CSS/JS, application development with Dart, GitHub organization
Evan (1st Hackathon) - First UX project, the skills and designs it requires, and how to intersect functionality with creativity.
## What's next for SavourFinal
Further implementations of machine learning and optimization. Stronger analytics and personal recommendations.
|
*("Heart Tempo", not "Hear Tempo", fyi)*
## Inspiration
David had an internship at the National Institutes of Health over the summer, where he researched the effect of auditory stimuli such as music on microcirculation (particularly the myogenic and endothelial bands), using Laser Doppler Flowmetry (LDF) to do so. This experiment stems from the known fact that the human body often matches its heart rate to the tempo of a song that is playing.
Though that side of things was heavily researched, the opposite wasn't. For that reason, David developed the idea of making the tempo of the song change relative to the heart rate of the person listening to the song, rather than vice-versa.
## What it does
This Android app will connect to your Android Wear device (with a heartbeat sensor) and send this heart rate to a server which modifies the tempo of any song to regulate your heart rate at normal levels.
By regulating your heart rate, it will reduce anxiety and stress, allowing you to relax and not worry about the pressures of life. The best part? You only need a smartwatch and smartphone, no fancy equipment.
(This is where the name comes from, if you haven't figured that out already)
## How we built it
We had to figure out how to get the heart rate from an Android Wear watch, which didn't take that long. The hard part was actually figuring out how to send the data from the watch to the phone then to the server. We ended up using the native `WearableListenerService` to send the data to the phone which then sent the data with `OkHttp` to our Node.js server. This server will connect with any MIDI-enabled application on your machine to change the BPM.
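The tempo logic itself is simple; although our server is Node.js, here is a Python sketch of the smoothing idea (all constants are hypothetical and would be tuned per user):

```python
RESTING_BPM = 70    # hypothetical target heart rate
BASE_TEMPO = 100    # the song's original tempo
GAIN = 0.5          # how strongly the tempo leads the heart rate
SMOOTHING = 0.2     # low-pass filter so the tempo doesn't jitter

current_tempo = float(BASE_TEMPO)

def next_tempo(heart_rate: int) -> float:
    """Nudge the song tempo below the current heart rate to entrain it downward."""
    global current_tempo
    target = BASE_TEMPO + GAIN * (RESTING_BPM - heart_rate)
    current_tempo += SMOOTHING * (target - current_tempo)
    return current_tempo

for hr in (95, 92, 88, 84):  # heart rate settling as the tempo drops
    print(round(next_tempo(hr), 1))
```

The smoothed value is what would be handed to the MIDI-enabled application as the new BPM.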
## Challenges we ran into
It took a damn while to figure out how to send data between the watch and the phone, why does Google make this so hard?!
## Accomplishments that we're proud of
When we first got the phone to **actually send data** to the server, we were very happy and wanted to do more with the project.
## What we learned
We learned more about Android development, an area we both wanted to get into. It was slightly difficult since we both come from web development backgrounds and the concepts are very different.
## What's next for HearTempo
Some sort of logging and data analysis, definitely. We want to prove that this works, so we will perhaps implement a log of your average heart rate over the course of a week or two after you start using it.
|
## Inspiration
Many individuals lack financial freedom, and this stems from poor spending skills. As a result, our group wanted to create something to help prevent that. We realized how difficult it can be to track the expenses of each individual person in a family. As humans, we tend to lose track of what we purchase and spend money on. Inspired, we wanted to create an app that stops all that by allowing individuals to strengthen their organization and budgeting skills.
## What It Does
Track is an expense tracker website targeting households and individuals with the aim of easing people’s lives while also allowing them to gain essential skills. Imagine not having to worry about tracking your expenses all while learning how to budget and be well organized.
The website has two key components:
* Family Expense Tracker:
The family expense tracker is the `main dashboard` for all users. It showcases each family member's total expenses while also breaking expenses down by category. Both members and owners of the family can access this screen. Members can be added to the owner's family via a household key, which only the owner of the family has access to. Permissions vary between members and owners: owners gain access to each individual's personal expense tracker, while members only have access to their own personal expense tracker.
* Personal Expense Tracker:
The personal expense tracker is assigned to each user, displaying their own expenses. Users are allowed to look at past expenses from the start of the account to the present time. They are also allowed to add expenses with a click of a button.
## How We Built It
* Utilized the MERN (MongoDB, Express, React, Node) stack
* Restful APIs were built using Node and Express which were integrated with a MongoDB database
* The Frontend was built with the use of vanilla React and Tailwind CSS
## Challenges We Ran Into
* Frontend:
Connecting EmailJS to the help form
Retrieving specific data from the backend and displaying pop-ups accordingly
Keeping the theme consistent while also ensuring that the layout and dimensions didn’t overlap or wrap
Creating hover animations for buttons and messages
* Backend:
Embedded objects were not being correctly updated - needed to learn about storing references to objects and populating the references
Designing the backend based on frontend requirements and the overall goal of the website
## Accomplishments We’re Proud Of
As this was the first or second hackathon for all of us, we are proud to have created a functioning website with a fully integrated front and back end.
We are glad to have successfully implemented pop-ups for each individual expense category that displays past expenses.
Overall, we are proud of ourselves for being able to create a product that can be used in our day-to-day lives in a short period of time.
## What We Learned
* How to properly use embedded objects so that any changes to the object are reflected wherever the object is embedded
* Using the state hook in ReactJS
* Successfully and effectively using React Router
* How to work together virtually. It allowed us to not only gain hard skills but also enhance our soft skills such as teamwork and communication.
## What’s Next For Track
* Implement an income tracker section allowing the user to get a bigger picture of their overall net income
* Be able to edit and delete both expenses and users
* Store historical data to allow the use of data analysis graphs to provide predictions and recommendations.
* Allow users to create their own categories rather than the assigned ones
* Setting up different levels of permission to allow people to view other family members' usage
|
losing
|
## Inspiration
We were inspired by an Instagram post that was complaining that there was no good functionality built into Netflix to find movies that both you and your partner want to watch.
## What it does
Currently, we have a rough web application prototype running on Node.js. You can either choose to host or join a pre-existing session.
If you are hosting, you are prompted to pick a movie category such as Drama, Action, or Thriller. From there you are redirected to a new page populated with random movies from that category, along with a session code that others can use to join your session.
If you are joining a session, all you need to do is get your partner's code, click join, and enter the session ID where you will be redirected to the same page as the host.
## How we built it
Since this was a very rough idea we wanted to create a basic prototype before committing to doing a full-on web application.
We used Express.js and made a basic web application that connects to Firebase and uses that to communicate with the server.
We originally used a Netflix catalog API which we ended up caching into our Firebase application to get faster results.
## Challenges we ran into
In the beginning, we were struggling a lot with the project and did not think we would actually be able to finish it. But in the end, after a lot of trial and error, we managed to finish on time and are extremely proud of that.
## Accomplishments that we're proud of
Learning how to pass information between Express and the HTML pages. Also learning Firebase and caching Netflix titles from RapidAPI.
## What we learned
Learning how to pass information between Express and the HTML pages. Also learning Firebase and caching Netflix titles from RapidAPI.
## What's next for Netflix Swiper
Our next steps for Netflix Swiper are to polish the UI/UX to provide a better experience for our users. We would add things such as navigation buttons (home, back). Another main feature we want to add is the Mutual List page, with a counter and links to the movies.
|
## Inspiration
There are so many fantastic project ideas out there, yet it's always difficult to find and connect with people who share your enthusiasm and interest in bringing your idea to life. We've all had times when we wanted to find the perfect people to work with, and drawing inspiration from speed-matching apps like Tinder, Our Idea came along.
## What it does
Our Idea allows users to "swipe left" or "swipe right" on innovative ideas submitted by other users. You can match with teams, chat with them in real time about your 💯 ideas, and create connections for life!
## How we built it
We prototyped everything in Figma first! Always 🤠
We played around with React and Material UI on the front-end and agonized over pure CSS.
We also employed the swiss army knife of hackathons–🔥Firebase 🔥–and its multiple functionalities (Firestore, Firebase Authentication, FireChat!!!)
## Challenges we ran into
As it was our first time using Firebase, we had a lot of fun with it. However, it also brought a few problems! Just mere hours before submission, we reached our usage limit and it was a race to finish.
## Accomplishments that we're proud of
Our Idea is a labor of love and we learned so much doing it. There were a lot of tired Discord calls, LiveShare trouble, moments of panicked silence and muted mics, but we're so proud of the project and its potential to connect individuals on the basis of good ideas–just like this hackathon has!
## What's next for Our Idea
* Filtering on ideas!
* Superlikes??? 👀
|
## Inspiration
We were inspired by websites such as backyard.co which allow users to have video chats and play various games together. However, one of the main issues with websites like these, or any video chat room such as Zoom, is that people are reluctant to turn on their video. To combat this issue, we wanted to create a similar website that encourages people to turn on their cameras by making games that heavily rely on video to function.
## What it does
Right now, the website only allows for the creation of multiple rooms, each room allowing up to 200 participants to join and share screen, use the chat box, and of course, share video and audio.
## How we built it
We used a combination of a JavaScript API, a React front end, and a Node/Express back end. We connected to CockroachDB in hopes of storing active user sessions. We also used Heroku to deploy the site. To get the videos to work, we used the Daily.co API.
## Challenges we ran into
One of the earliest challenges we ran into was learning how to use the Daily.co API. Connecting it to the Express server we created, and connecting that server to the front end, took a good portion of our time. The biggest challenge we ran into, however, was using CockroachDB. We had many issues just connecting to the database, and seeing as neither of us had any prior knowledge of or experience with CockroachDB, we were unable to get more use out of the database in the time given.
## Accomplishments that we're proud of
Setting up the video call and chat system using the Daily.co API. We also set up the infrastructure to expand our app with games.
## What we learned
As this was our first time using cockroachDB, react, and express we learned a lot about developing a full stack project and using APIs. We learned how to connect a backend server to the front end and how to connect to the database as well as heroku deployment.
## What's next for bestdomaingetsfree.tech
Our next steps would be configuring the database to store active sessions and to implement the games we have created.
We purchased the domain name, but domain.com has a review process we have to wait for, so at the time of submission the domain is not working.
|
losing
|
## Inspiration
Recently, character experiences powered by LLMs have become extremely popular. Platforms like Character.AI, boasting 54M monthly active users and a staggering 230M monthly visits, are a testament to this trend. Yet, despite these figures, most experiences on the market offer text-to-text interfaces with little variation.
We wanted to take chatting with characters to the next level. Instead of a simple, standard text-based interface, we wanted an intricate visualization of your character (a 3D model viewable in your real-life environment), actual low-latency, immersive, realistic spoken dialogue with your character, and a really fun, dynamic 3D graphics experience generated on the fly: seeing objects appear as they are mentioned in conversation, a novel innovation only recently made possible.
## What it does
An overview: CharactAR is a fun, immersive, and **interactive** AR experience where you get to speak your character’s personality into existence, upload an image of your character or take a selfie, pick their outfit, and bring your custom character to life in an AR world, where you can chat using your microphone or type a question, and even have your character run around in AR! As an additional super cool feature, we compiled, hosted, and deployed the open source OpenAI Shap-e model (by ourselves on Nvidia A100 GPUs from Google Cloud) to do text-to-3D generation, meaning your character is capable of generating 3D objects (mid-conversation!) and placing them in the scene. Imagine the Terminator generating robots, or a marine biologist generating fish and other wildlife! Our combination and intersection of these novel technologies makes experiences like those possible.
## How we built it

*So how does CharactAR work?*
To begin, we built <https://charactar.org>, a web application that utilizes Assembly AI (State of the Art Speech-To-Text) to do real time speech-to-text transcription. Simply click the “Record” button, speak your character’s personality into existence, and click the “Begin AR Experience” button to enter your AR experience. We used HTML, CSS, and Javascript to build this experience, and bought the domain using GoDaddy and hosted the website on Replit!
In the background, we’ve already used OpenAI Function Calling, a novel OpenAI product offering, to choose voices for your custom character based on the original description that you provided. Once we have the voice and description for your character, we’re ready to jump into the AR environment.
The AR platform that we chose is 8th Wall, an AR deployment platform built by Niantic, which focuses on web experiences. Due to the emphasis on web experiences, any device can use CharactAR, from mobile devices, to laptops, or even VR headsets (yes, really!).
In order to power our customizable character backend, we employed the Ready Player Me player avatar generation SDK, providing us a responsive UI that enables our users to create any character they want, from taking a selfie, to uploading an image of their favorite celebrity, or even just choosing from a predefined set of models.
Once the model is loaded into the 8th Wall experience, we then use a mix of OpenAI (Character Intelligence), InWorld (Microphone Input & Output), and ElevenLabs (Voice Generation) to create an extremely immersive character experience from the get go. We animated each character using the standard Ready Player Me animation rigs, and you can even see your character move around in your environment by dragging your finger on the screen.
Each time your character responds to you, we make an API call to our own custom hosted OpenAI Shap-e API, which is hosted on Google Cloud, running on an NVIDIA A100. A short prompt based on the conversation between you and your character is sent to OpenAI’s novel text-to-3D API to be generated into a 3D object that is automatically inserted into your environment.
For example, if you are talking with Barack Obama about his time in the White House, our Shap-E API will generate a 3D object of the White House, and it’s really fun (and funny!) in game to see what Shap-E will generate.
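A stripped-down sketch of our endpoint's shape (the route and helper names are hypothetical, and the Shap-E sampling itself, which follows the official openai/shap-e examples, is elided behind a placeholder):

```python
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

def prompt_to_glb(prompt: str, steps: int = 32) -> str:
    """Placeholder for the Shap-E pipeline (per the openai/shap-e example notebooks):
    sample latents for the prompt, decode to a mesh, then export and Draco-compress
    a .glb. Returns the output file path. Runs on the A100 box; elided here."""
    raise NotImplementedError

@app.get("/generate")
def generate(prompt: str):
    # Fewer inference steps means lower triangle counts and less AR-side lag.
    path = prompt_to_glb(prompt, steps=32)
    return FileResponse(path, media_type="model/gltf-binary")
```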
## Challenges we ran into
One of our favorite parts of CharactAR is the automatic generation of objects during conversations with the character. However, the addition of these objects also led to an unfortunate spike in triangle count, which quickly builds up lag. So when designing this pipeline, we worked on reducing unnecessary detail in model generation. One of these methods is selecting the number of inference steps prior to generating 3D models with Shap-E.
The other is to compress the generated 3D model, which ended up being more difficult to integrate than expected. At first, we generated the 3D models in the .ply format, but realized that .ply files are a nightmare to work with in 8th Wall. So we decided to convert them into .glb files, which would be more efficient to send through the API and better to include in AR. The .glb files could get quite large, so we used Google’s Draco compression library to reduce file sizes by 10 to 100 times. Getting this to work required quite a lot of debugging and package dependency resolving, but it was awesome to see it functioning.
Below, we have “banana man” renders from our hosted Shap-E model.


*Even after transcoding the .glb file with Draco compression, the banana man still stands gloriously (1 MB → 78 KB).*
Although 8th Wall made development much more streamlined, AR Development as a whole still has a ways to go, and here are some of the challenges we faced. There were countless undefined errors with no documentation, many of which took hours of debugging to overcome. Working with the animated Ready Player Me models and the .glbs generated by our Open AI Shap-e model imposed a lot of challenges with model formats and dynamically generating models, which required lots of reading up on 3D model formats.
## Accomplishments that we're proud of
There were many small challenges in each of the interconnected portions of the project that we are proud to have persevered through the bugs and roadblocks. The satisfaction of small victories, like seeing our prompts come to 3D or seeing the character walk around our table, always invigorated us to keep on pushing.
Running AI models is computationally expensive, so it made sense for us to allocate this work to be done on Google Cloud’s servers. This allowed us to access the powerful A100 GPUs, which made Shap-E model generation thousands of times faster than would be possible on CPUs. This also provided a great opportunity to work with FastAPIs to create a convenient and extremely efficient method of inputting a prompt and receiving a compressed 3D representation of the query.
We integrated AssemblyAI's real-time transcription services to transcribe live audio streams with high accuracy and low latency. This capability was crucial for our project as it allowed us to convert spoken language into text that could be further processed by our system. The WebSocket API provided by AssemblyAI was secure, fast, and effective in meeting our requirements for transcription.
The function calling capabilities of OpenAI's latest models were an exciting addition to our project. Developers can now describe functions to these models, and the models intelligently output a JSON object containing the arguments for those functions. This feature enabled us to integrate GPT's capabilities seamlessly with external tools and APIs, offering a new level of functionality and reliability.
For enhanced user experience and interactivity between our website and the 8th Wall environment, we leveraged the URLSearchParams interface. This allowed us to send the information of the initial character prompt seamlessly.
## What we learned
For the majority of the team, it was our first AR project using 8th Wall, so we learned the ins and outs of building with AR, the A-Frame library, and deploying a final product that can be used by end-users. We also had never used Assembly AI for real-time transcription, so we learned how to use websockets for Real-Time transcription streaming.
We also learned so many of the intricacies to do with 3D objects and their file types, and really got low level with the meshes, the object file types, and the triangle counts to ensure a smooth rendering experience.
Since our project required so many technologies to be woven together, there were many times where we had to find unique workarounds, and weave together our distributed systems. Our prompt engineering skills were put to the test, as we needed to experiment with countless phrasings to get our agent behaviors and 3D model generations to match our expectations. After this experience, we feel much more confident in utilizing the state-of-the-art generative AI models to produce top-notch content. We also learned to use LLMs for more specific and unique use cases; for example, we used GPT to identify the most important object prompts from a large dialogue conversation transcript, and to choose the voice for our character.
## What's next for CharactAR
Using 8th Wall technology like Shared AR, we could potentially have up to 250 players in the same virtual room, meaning you could play with your friends, no matter how far away they are from you. These kinds of collaborative, virtual, and engaging experiences are the types of environments that we want CharactAR to enable.
While each CharactAR custom character is animated with a custom rigging system, we believe there is potential for using the new OpenAI Function Calling schema (which we used several times in our project) to generate animations dynamically, meaning we could have endless character animations and facial expressions to match endless conversations.
|
## Inspiration
We were trying for an IM-meets-MS-Paint experience, and we think it looks like that.
## What it does
Users can create conversations with other users by putting a list of comma-separated usernames in the To field.
## How we built it
We used Node JS combined with the Express.js web framework, Jade for templating, Sequelize as our ORM and PostgreSQL as our database.
## Challenges we ran into
Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging.
## Accomplishments that we're proud of
Getting a (mostly) fully functional chat client up and running in 24 hours!
## What we learned
We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production environment node app.
## What's next for SketchWave
We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would like to include mobile platform support via a responsive web design as well, and possibly in the future even have a mobile app.
|
## Inspiration
Our team is made of story lovers. For as long as we can remember, we have been seeking to consume and create stories wherever we go. As we discovered our vastly different reading methodologies due to dyslexia, varying native languages, and differentiated learning experiences, we sought to create a platform that allows all adventurers to immerse themselves in stories.
Journey was created from a desire to enhance users’ reading experiences from the very beginning: childhood. During our ideation process, our research indicated two major findings:
1) Children who were regularly read to showed significantly higher levels of reading comprehension in developing years.
2) Time spent reading decreases as children age, because they expect to struggle more with harder-to-read stories. This is especially prevalent when children are transitioning from picture books to novels.
As we contemplated ways to immerse children in worlds of literature, the idea of integrating AI was presented as a powerful catalyst for engagement. Educators could use this tool to bring stories to life and allow students to clarify their questions first-hand, thus transcending traditional reading methods and innovating the educational experience.
Drawing from our personal experiences, our early ideas involved using AI-generated text-to-speech to deliver literature to students struggling with dyslexia or with English as a second language. However, after many rounds of idea and product development, we are proud to deliver much more than simple text-to-speech. We are proud to have woven our personal stories into aiding others in developing their own stories as well.
## What it does
Journey enables readers to interact in real time with book characters as they explore stories. Customizable conversations allow users to gain explicit insights regarding plot, settings, and even characters’ opinions of each other. Journey’s interactable characters are unlocked in time with their introduction in the book—in other words, readers get to meet characters such as Ron Weasley at the same time as Harry Potter!
Journey’s mythical interface and speech-to-text features create an exciting visual and auditory experience. Watch as settings magically appear as you read, listen to Journey’s storyteller weave your favourite tales, and experience the beauty of a great book.
Journey will break down daunting chunks of text into bite-sized pieces and allow users to enjoy a comprehensive reading experience. By simply uploading a PDF book file, you can start your reading journey!
## How we built it
Development started by fine-tuning AI prompts based on the story context, who the user is roleplaying, and who the character is, to perfectly match the kind of response we had in mind. We aimed to generate creative responses that convey information relevant up to the point in the story the reader has reached, while mimicking the emotions of the character the AI is portraying. An omniscient narrator is also available for questioning by the user. The narrator knows everything about the story and provides an unbiased, objective view of it. This can be used for clarification or a brief summary of the story.
To gather the material, novels were obtained in PDF form, parsed through, and split into pages. The pages are displayed in the main application interface, letting the reader interactively page through them in a list. The AI intelligently knows where the reader is when a page is selected, based on its content, and thus only allows the user to talk to characters who have been introduced up to that page.
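A sketch of that gating logic, using pypdf and a hypothetical character list (our real pipeline extracts the character names rather than hard-coding them):

```python
from pypdf import PdfReader

reader = PdfReader("novel.pdf")
pages = [page.extract_text() or "" for page in reader.pages]

# A character becomes talk-to-able once their name has appeared by the current page.
first_mention: dict[str, int] = {}
for i, text in enumerate(pages):
    for name in ("Harry", "Ron", "Hermione"):  # hypothetical character list
        if name in text and name not in first_mention:
            first_mention[name] = i

def available_characters(current_page: int) -> list[str]:
    return [name for name, page in first_mention.items() if page <= current_page]
```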
Journey’s visual element was brought to life with the aid of OpenAI’s DALL-E, an image generation tool. Each character and page background has a unique generation made by the AI through a simple API call. Each avatar is displayed next to its character under the ‘Character Name’ section.
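Each avatar comes from a single image-generation call, roughly like this (the style prompt is a made-up example, and we cache the returned URL next to the character's name):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def generate_avatar(character: str, style: str = "storybook RPG portrait") -> str:
    """One DALL-E call per character; the model default is used here."""
    resp = client.images.generate(
        prompt=f"{style} of {character}",
        size="1024x1024",
        n=1,
    )
    return resp.data[0].url
```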
## Challenges we ran into
* Integrating each feature and tool we used into one cohesive product (TTS, character generation, string parsing, character responses, and background art generation)
* Staying conscious of our budget while creating API requests to test our code
* Preventing characters from spoiling later parts of the book or breaking the fourth wall when asking them questions
* Training the AI to tailor responses to emotions based on the characters in the book, depending on who they’re interacting with
* Matching the different TTS voices OpenAI offers depending on the character they’re narrating as
* Staying hydrated and healthy while racing to meet the 36-hour deadline!
## Accomplishments that we're proud of
* This was Anthony Botticchio, Larry Han, and Claire Hu’s first ever hackathon, and Claire’s first time programming!
* Elements in our Logo and UI/UX were 100% drawn by hand using vector art on Figma! We wanted to follow an RPG game style theme, so custom making everything was the only way to go.
* One of our team members has personally faced learning disabilities, so creating a solution for firsthand problems was incredibly fulfilling. We believe this experience gave us a deeper understanding of the current pain points, allowing us to craft a tailored solution for the problem space we identified.
## What we learned
* Hackathons are not easy…one hour seems to go by in fifteen minutes!
* OpenAI is extremely powerful, however tremendous training is needed to make it robust
* Grouping designs in Figma is essential—otherwise your frames will get very messy very fast!
## What's next for Journey
We hope to continue developing Journey and helping students learn and enjoy their reading experiences. As the problem space we chose to tackle is close to our hearts, we truly believe others struggling with similar experiences deserve the chance to explore wonderful stories in an easier fashion than we have.
Reading is instrumental in the development of children, and we hope that by continuing to develop Journey we can give a generation of children a love for reading!
## References:
The Washington Post. (2015, April 29). Why kids lose interest in reading as they get older. <https://www.washingtonpost.com/news/answer-sheet/wp/2015/04/29/why-kids-lose-interest-in-reading-as-they-get-older/>
Reading Rockets. (n.d.). Why some children have difficulties learning to read. <https://www.readingrockets.org/topics/struggling-readers/articles/why-some-children-have-difficulties-learning-read>
Child Mind Institute. (n.d.). Why is it important to read to your child? <https://childmind.org/article/why-is-it-important-to-read-to-your-child/>
The Ohio State University College of Education and Human Ecology. (n.d.). The importance of reading to kids daily. <https://ehe.osu.edu/news/listing/importance-reading-kids-daily-0>
National Assessment of Educational Progress. (n.d.). The Nation’s Report Card: Reading Achievement. <https://www.nationsreportcard.gov/reading/nation/achievement/?grade=4>
Yale Center for Dyslexia & Creativity. (n.d.). Dyslexia FAQ. <https://dyslexia.yale.edu/dyslexia/dyslexia-faq/>
Cognitive Market Research. (n.d.). English Language Learning Market Report. <https://www.cognitivemarketresearch.com/english-language-learning-market-report>
Government of Canada. (n.d.). Official Languages and Bilingualism Publications: Statistics. <https://www.canada.ca/en/canadian-heritage/services/official-languages-bilingualism/publications/statistics.html>
DoteFL. (n.d.). English Language Statistics. <https://www.dotefl.com/english-language-statistics/>
National Center for Education Statistics. (n.d.). English Learners in Public Schools. <https://nces.ed.gov/programs/coe/indicator/cgf/english-learners>
Statista. (n.d.). Resident population of Canada by age group. <https://www.statista.com/statistics/444868/canada-resident-population-by-age-group/>
Statista. (n.d.). Population of the United States by sex and age. <https://www.statista.com/statistics/241488/population-of-the-us-by-sex-and-age/>
|
winning
|
## Inspiration
Nothing quite accomplishes daily productivity like the traditional todo-list. Each task displayed in order, ready to be ticked off one by one. However, this can usually be an isolating process rather than a collaborative one. TODOTogether hopes to bring a company culture of collaboration and teamwork down into people's daily tasks.
## What it does
TODOTogether implements core task management functionality into a workspace-wide synchronized platform. It consists of 3 sections:
1. The Personal task list: This section functions like a traditional todo-list, where a user can add tasks on their docket.
2. The Team task list: This section allows users to quickly reach out and collaborate on tasks or projects from their team members. This replaces lengthy emails and allows team members to opt-in to the task.
3. The Open task list: The core functionality of TODOTogether, which allows for projects or tasks to be shared company-wide. With the departmental, subject, and time-estimation tags available, general information about the task can be rapidly disseminated across an entire company, allowing people to opt-in and help cross-departmentally according to their personal strengths.
## How we built it
The platform is built with HTML/CSS and JavaScript, with drag-and-drop functionality enabled by Dragula JS.
## What's next for TODOTogether
Search and filter functionality for the Open task list, the ability to add/tag multiple user profiles to tasks, and to chain tasks and create sub-tasks.
Further service integration could also be exciting aspect of TODOTogether, such as automatically generating and displaying video-conference links for meetings, or utilizing the Trello API to bring the open-list functionality to companies' current work-flow.
|
# We'd love if you read through this in its entirety, but we suggest reading "What it does" if you're limited on time
## The Boring Stuff (Intro)
* Christina Zhao - 1st-time hacker - aka "Is cucumber a fruit"
* Peng Lu - 2nd-time hacker - aka "Why is this not working!!" x 30
* Matthew Yang - ML specialist - aka "What is an API"
## What it does
It's a cross-platform app that can promote mental health and healthier eating habits!
* Log when you eat healthy food.
* Feed your "munch buddies" and level them up!
* Learn about the different types of nutrients, what they do, and which foods contain them.
Since we are not very experienced at full-stack development, we just wanted to have fun and learn some new things. However, we feel that our project idea really ended up being a perfect fit for a few challenges, including the Otsuka Valuenex challenge!
Specifically,
>
> Many of us underestimate how important eating and mental health are to our overall wellness.
>
>
>
That's why we made this app! After doing some research on the compounding relationship between eating, mental health, and wellness, we were quite shocked by the overwhelming amount of evidence and studies detailing the negative consequences.
>
> We will be judging for the best **mental wellness solution** that incorporates **food in a digital manner.** Projects will be judged on their ability to make **proactive stress management solutions to users.**
>
>
>
Our app has a two-pronged approach—it addresses mental wellness through both healthy eating, and through having fun and stress relief! Additionally, not only is eating healthy a great method of proactive stress management, but another key aspect of being proactive is making your de-stressing activites part of your daily routine. I think this app would really do a great job of that!
Additionally, we also focused really hard on accessibility and ease-of-use. Whether you're on Android, iPhone, or a computer, it only takes a few seconds to track your healthy eating and play with some cute animals ;)
## How we built it
The front-end is react-native, and the back-end is FastAPI (Python). Aside from our individual talents, I think we did a really great job of working together. We employed pair-programming strategies to great success, since each of us has our own individual strengths and weaknesses.
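For readers curious what the back-end half of this stack might look like, here is a minimal, hypothetical sketch of a FastAPI food-logging endpoint; the route, field names, and XP rule are illustrative assumptions, not Munch Buddies' actual API:

```
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# In-memory store standing in for the real database
buddies = {}

class FoodLog(BaseModel):
    buddy: str
    food: str
    healthy: bool

@app.post("/log")
def log_food(entry: FoodLog):
    """Record a meal; healthy foods feed the buddy and grant XP."""
    buddy = buddies.setdefault(entry.buddy, {"level": 1, "xp": 0})
    if entry.healthy:
        buddy["xp"] += 10
        if buddy["xp"] >= 100:  # hypothetical rule: level up per 100 XP
            buddy["level"] += 1
            buddy["xp"] = 0
    return {"buddy": entry.buddy, **buddy}
```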
## Challenges we ran into
Most of us have minimal experience with full-stack development. If you look at my LinkedIn (this is Matt), all of my CS knowledge is concentrated in machine learning!
There were so many random errors with just setting up the back-end server and learning how to make API endpoints, as well as writing boilerplate JS from scratch.
But that's what made this project so fun. We all tried to learn something we're not that great at, and luckily we were able to get past the initial bumps.
## Accomplishments that we're proud of
As I'm typing this in the final hour, in retrospect, it really is an awesome experience getting to pull an all-nighter hacking. It makes us wish that we attended more hackathons during college.
Above all, it was awesome that we got to create something meaningful (at least, to us).
## What we learned
We all learned a lot about full-stack development (React Native + FastAPI). Getting to finish the project for once has also taught us that we shouldn't give up so easily at hackathons :)
I also learned that the power of midnight doordash credits is akin to magic.
## What's next for Munch Buddies!
We have so many cool ideas that we just didn't have the technical chops to implement in time:
* customizing your munch buddies!
* advanced data analysis on your food history (data science is my specialty)
* exporting your munch buddies and stats!
However, I'd also like to emphasize that any further work on the app should be done WITHOUT losing sight of the original goal. Munch buddies is supposed to be a fun way to promote healthy eating and wellbeing. Some other apps have gone down the path of too much gamification / social features, which can lead to negativity and toxic competitiveness.
## Final Remark
One of our favorite parts about making this project, is that we all feel that it is something that we would (and will) actually use in our day-to-day!
|
## Inspiration
As University of Waterloo students who are constantly moving in and out of different places, as well as constantly changing roommates, we have often experienced friction or difficulty in communicating with each other to get things done around the house.
## What it does
Our platform allows roommates to quickly schedule and assign chores, as well as provide a messageboard for common things.
## How we built it
Our solution is built on Ruby on Rails, meant to be a quick, simple solution.
## Challenges we ran into
The time constraint made it hard to develop all the features we wanted, so we had to reduce scope on many sections and provide a limited feature-set.
## Accomplishments that we're proud of
We thought that we did a great job on the design, delivering a modern and clean look.
## What we learned
Prioritize features beforehand, and stick to features that would be useful to as many people as possible. So, instead of overloading features that may not be that useful, we should focus on delivering the core features and make them as easy as possible.
## What's next for LiveTogether
Finish the features we set out to accomplish, and finish theming the pages that we did not have time to concentrate on. We will be using LiveTogether with our roommates, and are hoping to get some real use out of it!
|
partial
|
## welcome to Catmosphere!
we wanted to make a game with (1) cats and (2) cool art. inspired by the many "cozy indie" games on steam and on social media, we got working on a game where the cat has to avoid all the obstacles as it attempts to go into outer space.
**what it does**: use the WASD keys to navigate our cat around the enemies. enter the five levels of the atmosphere and enjoy the art and music while you're at it!
**what's next for Catmosphere**: adding more levels, a restart button, & a new soundtrack and artwork
|
# muse4muse
**Control a Sphero ball with your mind.**
Muse will measure your brain waves.
Depending on the magnitude of the wave, the color of the Sphero will change color!
* Alpha -> Green
* Beta -> Blue
* Delta -> Red
* Theta -> Yellow
* Gamma -> White
When the player keeps calm, increasing the Alpha wave, the Sphero ball will move forward.
When the player blinks his/her eyes, the ball will rotate clockwise.
The goal of the player is to control his/her mind and guide the Sphero ball through the maze.
Come find Jenny&Youn and try it out!
---
This is an iOS app built with Objective-C, Sphero SDK, and Muse SDK.
Challenges we had:
* This is our first time using Objective-C as well as the two SDKs.
* Originally we made this game super hard and had to adjust the level.
* Because we didn't get any sleep, it was hard to control our own minds to test the game! But we did it! :D
Interesting fact:
* Muse can provide more information than the 5 types of brainwaves. However, we decided not to use it because we felt it was irrelevant to our project.
|
## Inspiration
It started simply with a sentence "Jump into a mirror and the world reverses". I don't know exactly when I scribbled that down in my list of game ideas, but the concept stuck out to me when reading through my game jottings the eve of QHacks.
## What it does
It takes the idea of a MetroidVania 2D platformer game and adds another dimension to it. Rather than working your way through levels and unlocking new areas, you can now unlock entirely new areas which were literally right under your nose. The player can now switch between walking on top of the solid ground, to walking on the boundaries of the inside of the ground.
## How I built it
This was built entirely using C# as the language, with the standard Unity library. To create visuals, Gimp was used, and a select few sound effects were sourced from freesound.org.
## Challenges I ran into
The most challenging part for me was the artistic side. I am no artist by any means of the word, which is apparent from the graphics. The second biggest challenge was the actual level design itself. Rather than just building a level and adding platforms and objects in at a whim, I needed to think about how those solid areas would be interacted with as if they were negative space instead of solid. While playing, your brain quickly switches back and forth between what is solid and what is not, but in designing any sort of platforming area this was not so simple.
## Accomplishments that I'm proud of
I'm fairly proud of what I've accomplished game-wise in the short time limit provided by this hackathon. Working solo has also allowed me to hone my skills in Unity and C#.
## What I learned
I learned a few new and better ways to manage many instances of a single prefab in Unity, as well as how to use and control audio.
## What's next for Sotto
I'd of course like to expand upon the idea, however I would first need to do a complete level redesign, as the current level's scope is way off and also much too linear. I'd be interested in adding other features like moving around certain objects that allow you to traverse to places that you wouldn't have been able to otherwise.
|
winning
|
## Inspiration
As COVID-19 ravaged the world, people were told to isolate themselves, and stay inside. Many people started working from home. People started working in their beds, couches, and kitchen tables, often sitting for hours at a time without any breaks, causing many neck and back issues due to bad posture when sitting in those positions. We want to help correct that.
## What it does
Ensures that your posture is correct and encourages you to do some stretches on a timer.
## How we built it
Our application is really lightweight and built using electron.js, which means that our application can also run on your browser. We make use of ML5 on Tensorflow.js for our machine learning base, and we used Photon as a basis for our UI.
## Challenges we ran into
We initially had difficulty in connecting the two neural networks to our electron application. We handled it step by step and worked through each view. This way we were able to make it work eventually.
## Accomplishments that we are proud of
We are proud of building this application in 24 hours and making it fully functional within this timeframe. We are also proud of having a great developer coordination / collaboration.
## What we learned
This was the first time all of us worked on an electron project, but more importantly, with machine learning. We learned a lot about classification, which allows us to take inputs, specifically our body points, and use that to predict whether or not you have good posture.
We've learned a lot about machine learning and how to set up neural networks, and this hackathon was fun and a great learning experience.
## What's next for neckTech
This project is fully extensible and will include more exercises and help support seating ergonomics in different ways.
After this hackathon, we’d like to work on fine tuning our models and make it available for all students and employees around the world who are now working online.
|
## Inspiration
Our time spent at home during COVID-19 caused us to have bad posture sometimes when sitting at our desks so we wanted to make an application to help keep us healthy and maintaining good posture.
## What it does
The application uses machine learning to monitor your body posture and track how long you spend sitting with bad posture and notify you if you have spent too long sitting that way. It also will guide you through stretching routines by monitoring your posture and guiding you into the correct position for the stretch.
## How we built it
We built this application using React which uses HTML, CSS, and JavaScript and we also used TensorFlow for the machine learning.
## Challenges we ran into
Our largest challenge was producing a machine learning model that accurately analyzes our posture and provides useful feedback that can be presented to the user as well as utilized in powering helpful functions of the application.
## Accomplishments that we're proud of
We are extremely proud that we have produced a model that works excellently not only for detecting poor posture but also for explaining the source of the poor posture, so the user can improve and not feel confused about what they are doing wrong.
## What we learned
We learned that it is very important to provide plenty of examples and potential cases for the machine learning model in order for it to be effective in every scenario rather than only a couple.
## What's next for Posturefy
The next step for Posturefy is further development on the guided stretching functionality and adding more styling across the application for a better user experience. After this is complete we will have an excellent minimum viable product that we can begin to monetize in a wide variety of ways.
|
## Inspiration
With more people working at home due to the pandemic, we felt empowered to improve healthcare at an individual level. Existing solutions for posture detection are expensive, lack cross-platform support, and often require additional device purchases. We sought to remedy these issues by creating Upright.
## What it does
Upright uses your laptop's camera to analyze and help you improve your posture. Register and calibrate the system in less than two minutes, and simply keep Upright open in the background and continue working. Upright will notify you if you begin to slouch so you can correct it. Upright also has the Upright companion iOS app to view your daily metrics.
Some notable features include:
* Smart slouch detection with ML
* Little overhead - get started in < 2 min
* Native notifications on any platform
* Progress tracking with an iOS companion app
## How we built it
We created Upright’s desktop app using Electron.js, an npm package used to develop cross-platform apps. We created the individual pages for the app using HTML, CSS, and client-side JavaScript. For the onboarding screens, users fill out an HTML form which signs them in using Firebase Authentication and uploads information such as their name and preferences to Firestore. This data is also persisted locally using NeDB, a local JavaScript database. The menu bar addition incorporates a camera through a MediaDevices web API, which gives us frames of the user’s posture. Using Tensorflow’s PoseNet model, we analyzed these frames to determine if the user is slouching and if so, by how much. The app sends a desktop notification to alert the user about their posture and also uploads this data to Firestore. Lastly, our SwiftUI-based iOS app pulls this data to display metrics and graphs for the user about their posture over time.
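As a rough illustration of the slouch-detection idea, here is a minimal Python sketch of one plausible heuristic built on PoseNet-style keypoints. The app itself runs PoseNet in JavaScript via Tensorflow.js, and its exact heuristic isn't documented here; the keypoint names follow PoseNet's conventions, and the threshold is an assumption:

```
# A sketch of one plausible slouch heuristic: when you slouch toward
# the camera, the on-screen ear-to-shoulder distance compresses
# relative to a calibrated upright baseline.
import math

def slouch_score(keypoints, calibrated_neck_len):
    """Return 1.0 for fully upright, values near 0 for heavily slouched."""
    ear = keypoints["leftEar"]            # (x, y) pixel coordinates
    shoulder = keypoints["leftShoulder"]
    neck_len = math.hypot(ear[0] - shoulder[0], ear[1] - shoulder[1])
    return max(0.0, min(1.0, neck_len / calibrated_neck_len))

def is_slouching(keypoints, baseline, threshold=0.8):
    # threshold is illustrative; calibration would tune it per user
    return slouch_score(keypoints, baseline) < threshold
```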
## Challenges we ran into
We faced difficulties when managing data throughout the platform, from the desktop app backend to the frontend pages to the iOS app. As this was our first time using Electron, our team spent a lot of time discovering ways to pass data safely and efficiently, discussing the pros and cons of different solutions. Another significant challenge was performing the machine learning on the video frames. The task of taking in a stream of camera frames and outputting them into slouching percentage values was quite demanding, but we were able to overcome several bugs and obstacles along the way to create the final product.
## Accomplishments that we're proud of
We’re proud that we’ve come up with a seamless and beautiful design that takes less than a minute to setup. The slouch detection model is also pretty accurate, something that we’re pretty proud of. Overall, we’ve built a robust system that we believe outperforms other solutions using just the webcamera of your computer, while also integrating features to track slouching data on your mobile device.
## What we learned
This project taught us how to combine multiple complicated moving pieces into one application. Specifically, we learned how to make a native desktop application with features like notifications built-in using Electron. We also learned how to connect our backend posture data with Firestore to relay information from our Electron application to our iOS app. Lastly, we learned how to integrate a machine learning model in Tensorflow within our Electron application.
## What's next for Upright
The next step is improving the posture detection model with more training data, tailored for each user. While the posture detection model we currently use is pretty accurate, custom-tailored training data would take Upright to the next level. Another step for Upright would be adding Android integration for our mobile app, which currently only supports iOS.
|
partial
|
## Inspiration
The censorship and intimidation of an entire group of people is something that we hope to never experience in our lives. With movements like #ArabSpring, #BlackLivesMatter, and more, we see that messages can have a great impact on our lives. One of our teammates, Ebou, was part of our inspiration for this application. He told us how, in Gambia, even messaging something critical of the leadership could bring unspeakable consequences on the country's own citizens.
With the increasing awareness of our lack of privacy and the extra security we would be better off with, we wanted to create a free alternative for people to message their loved ones securely, to share knowledge freely. We set out to solve this issue by creating a global application that could be applied in most methods of today's modern community.
## What it does
Our system provides a modernly simple UI and intuitive API to help people **encrypt their messages**. Every user gets their own *private* and *public* key and they send messages across whatever platform they wish to with the knowledge and ease that only the person who it was meant to be sent to will receive the message.
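As an illustration of the public/private-key flow described above, here is a minimal Python sketch using the `cryptography` package. The extension itself is JavaScript, and the rolling-code rotation is omitted; this only shows the basic asymmetric encrypt/decrypt round trip:

```
# Each user generates a keypair; the public half is shared so anyone
# can encrypt to them, but only their private key can decrypt.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet at noon", oaep)   # sender side
plaintext = private_key.decrypt(ciphertext, oaep)        # recipient side
assert plaintext == b"meet at noon"
```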
## How we built it
We are running a NodeJS Express server for the API calls on top of MongoDB, with Angular as a front-end catalyst for the information. We also made extensive use of the Google Chrome extension APIs, including Chrome storage, in order to make a seamless experience for users.
## Challenges we ran into
We found difficulty running Angular and NodeJS together. Early on, we had difficulties with project dependencies causing conflicts. We also ran into some issues with the different set of layers on the Google Chrome Extension, often dealing with the ramifications of not having the entire library at our disposal for access.
## Accomplishments that we're proud of
Our encryption is top notch. We used open peer-to-peer asymmetric encryption with a rolling code that changes every few days per user. Our Chrome extension makes it especially simple to encode and decode messages (all with the click of a button).
## What we learned
We learned extensively about the limitations and intricacies of full-stack development. We also learned about the extensive resources at our disposal. While we weren't able to use all of the technologies we wanted to, we now know of the future technologies available to us.
## What's next for Uncryp.tech
Std Lib! Messaging platform integrations! SMS messaging! We hope to be able to offer a secure experience for all users around the world.
|
## Inspiration
Following the launch of Team Seas, an organization whose plan is to remove 30 million lbs. of trash from the ocean, we wanted a way to contribute to the cause. This got us thinking about whether there was a way to spread awareness about lesser-known issues around the world. Recognizing that contributions are strong in numbers, we strived for a lightweight solution to add to everyone’s day. More than 310 million people use the Chrome browser every day and, according to some studies, about 30% of those users utilize AdBlock. So why not create an ad blocker that supports worldwide causes and spreads awareness?
## What it does
The Chrome extension essentially allows the user to block all ads if they like, or choose which ads they would like to see. Watching these ads contributes to the specific cause, and a display of how much the user has donated is shown in the extension. The user is also able to donate money to causes that interest them via a payment form.
## How we built it
We built this Chrome extension in Visual Studio using JavaScript, HTML, and CSS. We first created an aDono prototype using Figma to display our ideas as a visual guide to follow along.
## Challenges we ran into
During implementation, we ran into many challenges. Some of these involved Chrome's manifest v3, as Chrome has been working on preventing ad blockers from being implemented as extensions. This forced us to work with an older manifest version, which will no longer be supported in the near future. There were also some issues regarding the integration of Google's cache API. To overcome this, we used Google's storage API to store the values of URLs that the user selected.
## Accomplishments that we're proud of
We are proud of being able to implement and code a working Chrome extension. We are also proud of the GUI and prototype build. We believe the extension created was able to encapsulate the whole idea of aDono.
## What we learned
We were able to learn many things throughout this project. One of them was how to implement Chrome extensions using JS. We also learned how to create efficient wireframe prototypes using Figma, which allowed us to deliver the aDono idea as a whole.
## What's next for aDono
In the future, we hope to build a fully functioning ad blocker that stores user-wanted ads in a local database and allows the user to watch these ads. We also want aDono to be implemented on multiple browser platforms such as Edge, Firefox, and Safari.
|
## Inspiration
As the demand for developers increases worldwide, providing high quality learning resources is critical for beginners in coding. However, the majority of programming resources are written in English, which may introduce a potential language barrier for individuals that do not have English as a first language. Incode was designed to make programming resources more inclusive and accessible to the community of developers around the globe, whether they are beginners or seasoned professionals.
## How it works
Powered by OpenAI's GPT-3, Incode utilizes language translation in combination with code generation to provide programming material in just a few seconds. You can provide a command in any spoken language and our application will generate code in response while providing helpful comments and variable names in the same language that you prompted with!
### Example
We want to generate C code to solve the Dining Philosophers problem. The below generates C code in response to the following prompt in French:
Écrire un programme pour résoudre le problème des philosophes qui dînent. (English: Write a program to solve the dining philosophers problem.)
*Response*
```
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h> /* for sleep() */
#define N 5
sem_t fourchettes[N];
void *philosophe(void *num)
{
int i = *(int *)num;
int gauche = (i + N - 1) % N;
int droite = (i + 1) % N;
while (1)
{
sem_wait(&fourchettes[gauche]);
sem_wait(&fourchettes[droite]);
printf("Le philosophe %d mange\n", i);
sleep(1);
sem_post(&fourchettes[gauche]);
sem_post(&fourchettes[droite]);
}
}
int main()
{
pthread_t tid[N];
int i;
for (i = 0; i < N; i++)
sem_init(&fourchettes[i], 0, 1);
for (i = 0; i < N; i++)
pthread_create(&tid[i], NULL, philosophe, &i);
for (i = 0; i < N; i++)
pthread_join(tid[i], NULL);
return 0;
}
```
## Tech Stack
### Frontend
The frontend was developed using React and TypeScript.
### Backend
The Backend was developed using Flask and Python. In addition, the models that we used for language translation and code generation are from [OpenAI's GPT-3](https://beta.openai.com/docs/models/gpt-3). Finally, we deployed using Microsoft Azure.
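As a sketch of what such a backend endpoint might look like, here is a minimal Flask route calling the legacy OpenAI completion API. The route, model name, and prompt wording are assumptions for illustration, not Incode's actual configuration:

```
# Hypothetical Flask endpoint: take a command in any spoken language
# and ask GPT-3 to generate code with comments in that same language.
import openai
from flask import Flask, request, jsonify

app = Flask(__name__)
openai.api_key = "YOUR_API_KEY"

@app.route("/generate", methods=["POST"])
def generate():
    command = request.json["command"]   # e.g. a prompt in French
    prompt = (
        "Generate code for the following instruction, keeping comments "
        "and variable names in the instruction's language:\n"
        f"{command}\n"
    )
    completion = openai.Completion.create(
        engine="text-davinci-003", prompt=prompt, max_tokens=512
    )
    return jsonify({"code": completion.choices[0].text})
```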
|
losing
|
# BasicSloth
BasicSloth came from the recognition that there needs to be an effective ground communication method for people in unstable situations. BasicSloth attempts to tackle this issue in a few ways, including:
* Using technology which allows for simple PGP encryption and decryption. This allows messages to only be unlocked by those intended.
* Using cheap radio systems that cost thousands of dollars less than "safe" military methods, which may have more vulnerabilities than our system.
* Using radios that can be used on a huge variety of frequencies, preventing blockers from hindering the transferring of information.
# Implementation
Basic Sloth consists of four main components, which are:
* Data entry and encryption - This was done using Tk for the GUI, and Keybase for data encryption (a sketch of this step follows the list below).
* Speech to Text - This was accomplished using Nuance speech to text technology.
* Sending - This was accomplished using a simple file read of the information input, as well as frequency modulation.
* Receiving - This was accomplished using GnuRadio, as well as demodulation and 'segmentation'
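A rough sketch of the encryption step, assuming the python-gnupg bindings; the key fingerprint and passphrase are placeholders, and the project itself managed keys through Keybase:

```
# PGP encrypt/decrypt so a message can only be unlocked by the
# intended recipient, even when broadcast over open radio.
import gnupg

gpg = gnupg.GPG()

def encrypt_message(text, recipient_fingerprint):
    """Encrypt to the recipient's public key."""
    result = gpg.encrypt(text, recipients=[recipient_fingerprint])
    assert result.ok, result.status
    # ASCII-armored ciphertext, safe to transmit as plain text
    return str(result)

def decrypt_message(armored_text, passphrase):
    """Decrypt with the local private key."""
    return str(gpg.decrypt(armored_text, passphrase=passphrase))
```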
# Thanks
We give a special thanks to the Nuance team for their assistance with speech to text. We also give a large thanks to the FSF and the GnuRadio team for continuing to support open source tools that allowed us to continue this project.
# Resources
[From Baseband to bitstream](https://cansecwest.com/slides/2015/From_Baseband_to_bitstream_Andy_Davis.pdf)
[US Frequency Allocations](http://www.ntia.doc.gov/files/ntia/publications/2003-allochrt.pdf)

|
## BLOODHOUND MASK
**What it does**
The Bloodhound Mask was developed to increase safety in consumer grade breathing masks. By using an array of sensors, we're able to measure the quality of the air the user is breathing while wearing the mask. If unsafe air is being breathed, a buzzer is set off to alert the user to evacuate the area as soon as possible, saving lives in the process.
**Why would you need this?**
When wearing a safety mask, you're not invulnerable to leaks, failures, or tears in the mask. Any of these detriments could lead to a very dangerous scenario almost immediately. These situations can include workplace hazards, natural disasters, or general use.
The real world application of this project would result in a low cost, simple to use mask that would save lives and improve workplace safety.
**How we built it**
Our current model was built using an Arduino running C++, a carbon monoxide sensor, an air particulate sensor, a buzzer, a breadboard, a set of goggles, and a painter's mask.
**What was learned?**
During the development process, we taught ourselves how to read and understand data being transmitted from multiple sensors at once. We had initially wanted to live-transmit this data to our website, [www.bestmlhproject.com](http://www.bestmlhproject.com), but struggled in pushing live updates to our web client.
**Future Plans**
Future developments for the Bloodhound Mask include adding a wireless transmitter to transmit live data to a different device. Currently, we're able to collect data from the users environment, but look forward to using this data more in the future.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
## MORE DOCUMENTATION IS AVAILABLE AT BESTMLHPROJECT.COM
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
|
## Inspiration
We all understand how powerful ChatGPT is right now, and we thought it would be really cool to make it possible to directly call ChatGPT for help. This not only saves time, it is also more convenient. People do not need to be in front of a computer to access ChatGPT; they can simply call a number and that is it. This also has potential for accessibility: people with visual impairments might struggle to access ChatGPT through their computer. Now, this will not be an issue.
## What it does
An application that allows users to make a phone call to ChatGPT for easier access. Our goal with this project is to make ChatGPT more convenient and accessible. People can access it with just Wi-Fi and a phone number.
## How we built it
We use the Twilio API to set up the call service. The call is connected to our backend code, which uses Flask and the Twilio API. The code receives speech from the user and transcribes it into text so that ChatGPT can understand it, then feeds the text to ChatGPT through the OpenAI API. Finally, the result from ChatGPT is read back to the user over the call, and the user may choose between continuing the call or hanging up. Meanwhile, all the call history is recorded, and the user may access it through our website using a password generated by our code.
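A condensed sketch of that call flow, using Flask, Twilio's TwiML helpers, and the legacy OpenAI SDK; the route names and model are illustrative assumptions:

```
# Twilio calls /voice when the phone call connects; the Gather verb
# collects speech and posts the transcription to /respond.
import openai
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)
openai.api_key = "YOUR_API_KEY"

@app.route("/voice", methods=["POST"])
def voice():
    """Answer the call and listen for the caller's question."""
    resp = VoiceResponse()
    gather = Gather(input="speech", action="/respond", method="POST")
    gather.say("Hi! What would you like to ask?")
    resp.append(gather)
    return str(resp)

@app.route("/respond", methods=["POST"])
def respond():
    """Twilio posts the transcribed speech; relay it to ChatGPT."""
    question = request.form.get("SpeechResult", "")
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    resp = VoiceResponse()
    resp.say(chat.choices[0].message["content"])
    resp.redirect("/voice")   # let the caller ask a follow-up
    return str(resp)
```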
## Challenges we ran into
There were a lot of challenges in the front end, believe it or not: designing a good way to represent all the data that we collected from the calls, and connecting it from the backend to the front end. Also, setting up Twilio was something of a challenge, since no one on our team was familiar with anything about call services.
## Accomplishments that we're proud of
We finished the majority of our code at a fairly fast speed. We are really proud of this, and it led us to explore more options. In the end, we implemented a lot more features into our project, like a login system, call history collection, etc.
## What we learned
We learned a lot of things. We never knew that services like Twilio existed, and we are genuinely impressed with what it can accomplish. Since we had some free time, we also learned something about lip-syncing with audio and videos using ML algorithms. Unfortunately, we did not implement this as it was way too much to do and we did not have enough time. We went to a lot of workshops. They had some really interesting stuff.
## What's next for our group
We will ready up for the next Hackathon, and make sure we can do better.
|
winning
|
## Inspiration
Every time I try to learn more vocab words, either in English or a new foreign language, I have difficulty remembering those words because I never use them in my day-to-day life. That's why I created Termify, which creates a list of vocab words that I should learn each week! The list of vocab words is created by observing which words I use most often when I am typing on my laptop. By focusing on words that I actually type out often, I find more use for the new vocabulary and learn more effectively.
## What it does
Termify is a background process that tracks user keypresses and figures out which words are used most frequently by that user. Termify will then search the thesaurus for the most relevant synonyms to those words and send scheduled email digests to the user with those suggestions. Termify will also search for the translated version of the word in any language the user inputs.
## How I built it
I used Python to create the entire application, which includes the keylogger and the server. I used the pynput library to continuously listen to user keyboard inputs and log those inputs into a NoSQL DB. Every week, the server would pull the data from the DB and find the most redundant words that the user has used.
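A minimal sketch of that keystroke-to-word-count step with pynput; the real app persists counts to a NoSQL database, which an in-memory Counter stands in for here:

```
# Listen to global keypresses, buffer characters into words, and
# count word frequencies at each space/enter boundary.
from collections import Counter
from pynput import keyboard

word_counts = Counter()
buffer = []

def on_press(key):
    global buffer
    try:
        if key.char:                      # ordinary character keys
            buffer.append(key.char)
    except AttributeError:
        # special keys: space/enter marks a word boundary
        if key in (keyboard.Key.space, keyboard.Key.enter):
            word = "".join(buffer).lower()
            if word.isalpha():
                word_counts[word] += 1
            buffer = []

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()   # runs until the listener is stopped
```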
The redundant words are then passed to the Merriam-Webster Thesaurus API, which returns many synonyms related to each word. (I then filter these again to return the most related synonyms.) The translated versions of the words come from the Google Translate API.
Up to 5 redundant words are then sent to the user as a personalized digest on which words they should focus, using the smtplib library in Python.
## Challenges I ran into
Starting the project 2 days after Friday is not a good idea. Unless… haha
## Accomplishments that I'm proud of
It’s good.
## What's next for Termify
I hope to add a voice recording feature in the future by using a mic to track what the user is saying and convert that into text and implement the same features that currently exist. This feature would be optional for users.
|
## Inspiration
According to a 2017 Deloitte Study, "91% of people consent to legal terms and service conditions without reading them" [link](https://www2.deloitte.com/content/dam/Deloitte/us/Documents/technology-media-telecommunications/us-tmt-2017-global-mobile-consumer-survey-executive-summary.pdf). Companies exploit this by incorporating certain terms or clauses that they know a user will likely not catch. This can lead to misunderstanding, frustration, and mistrust.
## What it does
Users can input either text or a URL into the textbox. Easy Terms will analyze the content and provide a summarized version.
## How I built it
The interface was created with React. The backend combines Python (e.g., Beautiful Soup) for web scraping with the Google Cloud Natural Language API for processing the raw text data, and we set up a Flask server to send and retrieve REST API responses.
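A rough sketch of the scrape-and-analyze pipeline; how Easy Terms turns the API output into a summary is simplified here to ranking entities by salience, which is one plausible building block rather than the project's exact method:

```
# Scrape the terms-of-service text, then ask Google Cloud Natural
# Language for its most salient entities.
import requests
from bs4 import BeautifulSoup
from google.cloud import language_v1

def fetch_terms_text(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

def salient_entities(text, top_n=10):
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": doc})
    ranked = sorted(response.entities, key=lambda e: e.salience, reverse=True)
    return [e.name for e in ranked[:top_n]]
```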
## Challenges We ran into
Our biggest challenge was coming up with a topic. We struggled to imagine the scope of every idea we came up with. Finally, we landed on a topic that we felt was an important issue to bring to light.
During development, we ran into problems when testing the fetching of the server API from the client side. This is due to modern browsers' CORS security restrictions. It was later solved by adding appropriate headers to the fetch API calls.
## Accomplishments that we are proud of
Creating a website that we can envision actually being integrated and useful in society.
## What we learned
Designer: I learned how important it is to be clear and concise in my designs and writing. Also, it is okay (and encouraged) to not have the final solution envisioned right away. This project reminded me to trust the process and seek feedback constantly in order to keep iterating/improving.
Developers: We learned about the basic implementation and usage of NLP machine learning, as well as pre-processing data (e.g., web scraping) before training the machine learning model with it.
## What's next for Easy Terms
We intend to continue working on Easy Terms so that we can implement all of our proposed features:
1. A feature where a user can highlight a section in the translated text and they will be able to see where that information was pulled from the original text.
2. Chunking information into related sections and intuiting the corresponding icon for each section.
3. A mobile version and/or chrome extension
|
## Inspiration
Our inspiration for TeddyTalk stems from the desire to create a magical and educational companion for children. In a world where technology is advancing rapidly, we wanted to apply the power of artificial intelligence to stimulate learning and communication in a friendly manner. Acknowledging that brain development is at its highest between ages 2 and 7, a period in which children's access to technologies like AI and the internet is very limited, TeddyTalk provides a tool to enhance early education.
## What it does
TeddyTalk enables kids to chat with their teddy bear, enjoying storytelling, answering questions, and playing games. A key feature is the parental control dashboard, allowing parents to regulate subjects, monitor conversations between their child and the toy, and access functionalities for a secure and personalized experience.
## How we built it
We constructed our system by implementing a central script on a Raspberry Pi, serving as the core component. This script manages voice recognition through the AssemblyAI API and utilizes the Mistral-7b Large Language Model (LLM) for text generation. The generated text is then forwarded to ElevenLabs for Text-to-Speech (TTS) voice generation.
Additionally, our system is designed to maintain a dynamic interaction with a parental dashboard, which is built using Vue.js. The dashboard facilitates communication with the Raspberry Pi by exchanging messages through an S3 AWS bucket. The main script on the Raspberry Pi uploads chat history and retrieves any instructions or updates from the parental dashboard, ensuring seamless integration and interaction between the user interface and the hardware.
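A simplified sketch of one turn of that Raspberry Pi loop; how Mistral-7b is hosted isn't specified, so `generate_reply()` is a placeholder, and the voice ID, bucket, and key names are illustrative:

```
# One conversational turn: transcribe, generate a reply, synthesize
# speech, and sync the chat history for the parental dashboard.
import boto3, json, requests
import assemblyai as aai

aai.settings.api_key = "ASSEMBLYAI_KEY"
s3 = boto3.client("s3")

def generate_reply(prompt: str) -> str:
    """Placeholder for the Mistral-7b call."""
    raise NotImplementedError

def tts(text: str) -> bytes:
    """ElevenLabs text-to-speech; returns audio bytes to play."""
    r = requests.post(
        "https://api.elevenlabs.io/v1/text-to-speech/VOICE_ID",
        headers={"xi-api-key": "ELEVENLABS_KEY"},
        json={"text": text},
    )
    return r.content

def handle_turn(wav_path: str, history: list) -> bytes:
    heard = aai.Transcriber().transcribe(wav_path).text
    reply = generate_reply(heard)
    history.append({"child": heard, "teddy": reply})
    s3.put_object(Bucket="teddytalk-history", Key="chat.json",
                  Body=json.dumps(history))
    return tts(reply)
```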
## Challenges we ran into
Addressing privacy concerns and implementing robust security measures to safeguard the interactions and data within TeddyTalk, particularly considering its child-centric nature.
## Accomplishments that we're proud of
Achieving a user-friendly interface that makes TeddyTalk accessible and enjoyable for both children and parents
## What we learned
Understanding the crucial role of early years in shaping social and emotional development, alongside recognizing the rapid language development during this period.
## What's next for TeddyTalk
Exploring partnerships with educational experts and institutions to enhance TeddyTalk's educational content and align it with the latest developments in early childhood education.
Actively collecting and incorporating user feedback to refine and enhance TeddyTalk's features, addressing the specific needs and preferences of both children and parents.
Expanding TeddyTalk's compatibility with various smart devices, ensuring accessibility across different platforms and devices for a seamless user experience.
|
losing
|
## Inspiration
Managing your health can be daunting and tricky. Whether it's keeping track of the medications you take, trying to remember what vaccinations you're due for, or monitoring health issues over time, it often feels like you're left to figure things out for yourself. And the pandemic has made it even harder, with longer gaps between appointments with your doctor meaning a greater potential for miscommunication and forgotten concerns. Our inspiration comes from situations we've experienced first-hand that we believe should be problems of the past.
## What it does
Doctor's Note is a mobile app that lets you take your health into your own hands. Users are able to log any health concerns that come up, which allows them to keep track of symptoms they experience and have an easily accessible medical diary always on hand. They can also set up recurring reminders for medication/vitamins or appointments and receive notifications on their device.
## How we built it
Doctor's Note began as a merge of our collective ideas after an intense and productive brainstorming session. We then made an outline on Google Docs of what features we were going to incorporate into the app and what function each page should have. From there, we created a simple wireframe on Google Drawings, which was the basis for our prototype on Figma.
The next day was focused entirely on development. We used React Native along with Expo for the front-end of the app, and Firebase for our authentication system.
## Challenges we ran into
We faced a few major challenges throughout the hackathon. Our biggest challenge was getting the emulator working to view our project with Expo. Most of us had technical issues getting it to work, and it took up a lot of our time. We also struggled with implementing a functional navigation bar, as well as some other features we had hoped to incorporate. However, we did end up figuring out solutions to work around these challenges. And lastly, being a group of three meant fewer people to divide the work across, and more responsibilities for each team member. Though it did make completing the project feel that much more rewarding.
## Accomplishments that we're proud of
We're all newbie hackers, and for two of us it was our first ever hackathon. We also had very little experience with React Native and Firebase. Our team is really proud of the app we created as it proved to us that we're able to accomplish anything we set our minds to!
## What we learned
Our team became more familiar with the technologies, languages and frameworks we used for our project. We also learned a lot about the importance of planning and prioritizing.
## What's next for Doctor's Note
* Implementing a calendar and search to easily access dates (ex. date of last dental appointment or blood test)
* Using an algorithm to group together logs so they are easily searchable
* iOS integration and push notifications for reminders
* More accessibility
...and much more!
|
## Inspiration
For students in college — be it online semester or in-person — remembering the various concepts and topics that we need to study is tremendously important. Having access to a list of study tasks, when we need to revise them, and notifications to remind us, can help lower the friction to academic revision.
Based on our team’s findings, there are no other applications on the App Store like this, and although many flash-card apps have spaced repetition built in, not many calendar or study apps do. Hence, we decided to make one ourselves.
## What it does
Users are guided to a main page that displays all their study tasks in a list. They can create new tasks, and set a date by which they want to master their subject. For instance, if a user has a test coming up in a couple months, they can make a study task that has notes for their test, and then the app would remind them to study in specific time intervals so that they continue to consolidate their conceptual understanding.
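A minimal sketch of that interval scheduling, under the assumption of doubling gaps compressed to fit before the mastery date (the app's actual spacing rule isn't specified):

```
# Spaced-repetition style schedule: review at roughly doubling gaps
# (1, 2, 4, 8... days), ending with a final review on the target date.
from datetime import date, timedelta

def review_dates(created: date, master_by: date, first_gap_days: int = 1):
    gap, day = first_gap_days, created
    while True:
        day = day + timedelta(days=gap)
        if day >= master_by:
            yield master_by       # final review on the mastery date
            return
        yield day
        gap *= 2

# e.g. a study task created today with a test in 30 days
for d in review_dates(date.today(), date.today() + timedelta(days=30)):
    print(d)
```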
## How we built it
We started by discussing what features we wanted in our app (frontend) and decided on the backend framework. We then divided the front end and backend to members of our team who were familiar with either aspect.
Frontend: design the basic component structure with React Native, implement main view UI, implement add task view UI, implement logic to add new task to task list, and implement delete task feature
Backend: design database schema (depending on our backend), connect backend API to React Native app, and manage records in the database
## Challenges we ran into
One of the main issues we faced was properly defining what features we wanted in our Minimum Viable Product. We thought of designing the UI/UX and went to Figma, only to realize that we could have better spent the time building out an accessible front end instead. We also thought of creating a date picker that reacted to touchscreen gestures (an improvement over our ‘touch and select’ option), but decided that we would implement it only after other key features have been put in place.
## Accomplishments that we're proud of
Working together to link up the login & registration screen with the backend for the app!
## What we learned
With the hackathon taking place online, we were able to learn the importance of clear communication as we worked together virtually and across different timezones. While we weren't able to learn as much from each other as at an in-person hackathon, we were able to set clear expectations, communicate our responsibilities well, and set timely goals for our workloads.
## What's next for Space Hackers
Moving forward, we'll probably try to implement Machine Learning into our app by using some off-the-shelf models to parse out valuable pieces of information.
|
## Inspiration
Being sport and fitness buffs, we understand the importance of right form. Incidentally, suffering from a wrist injury himself, Mayank thought of this idea while in a gym where he could see almost everyone following wrong form for a wide variety of exercises. He knew that it couldn't be impossible to make something easily accessible yet accurate in recognizing wrong exercise form and, most of all, free. He was sick of watching YouTube videos and just trying to emulate the guys in them with no real guidance. That's when the idea for Fi(t)nesse was born, and luckily, he met an equally health-passionate group of people at PennApps, which led to this hack: an entirely functional prototype that provides real-time feedback on pushup form. It also lays down an API that allows expansion to a whole array of exercises or even sports movements.
## What it does
A user is recorded doing the push-up twice, from two different angles. Any phone with a camera can fulfill this task.
The data is then analyzed and within a minute, the user has access to detailed feedback pertaining to the 4 most common push-up mistakes. The application uses custom algorithms to detect these mistakes and also their extent and uses this information to provide a custom numerical score to the user for each category.
## How we built it
Human pose detection with a simple camera was achieved with OpenCV and deep neural nets. We tried using both the COCO and the MPI datasets for training data and ultimately went with COCO. We then set up an Apache server running Flask on Google Compute Engine to serve as an endpoint for the input videos. Due to lack of access to GPUs, a 24-core machine on the Google Cloud Platform was used to run the neural nets and generate pose estimations.
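A condensed sketch of COCO-model pose estimation with OpenCV's DNN module; the prototxt and caffemodel file names follow the public OpenPose release and are assumptions here, as is the confidence threshold:

```
# Run a frame through the OpenPose COCO model and extract the
# highest-confidence location for each of the 18 body keypoints.
import cv2

NET_INPUT = 368
N_KEYPOINTS = 18   # COCO body model

net = cv2.dnn.readNetFromCaffe("pose_deploy_linevec.prototxt",
                               "pose_iter_440000.caffemodel")

def keypoints(frame, conf_thresh=0.1):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (NET_INPUT, NET_INPUT),
                                 (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    out = net.forward()          # heatmaps: 1 x channels x H x W
    points = []
    for i in range(N_KEYPOINTS):
        _, conf, _, (x, y) = cv2.minMaxLoc(out[0, i, :, :])
        if conf > conf_thresh:   # rescale heatmap coords to the frame
            points.append((int(w * x / out.shape[3]),
                           int(h * y / out.shape[2])))
        else:
            points.append(None)
    return points
```

Form-checking algorithms can then compute joint angles and distances from these points to score each push-up mistake category.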
The Fi(t)nesse website was coded in HTML+CSS while all the backend was written in Python.
## Challenges we ran into
Getting the pose detection right and consistent was a huge challenge. After a lot of tries, we ended up with a model that works surprisingly accurately. Combating the computing power requirements of a large neural network was also a big challenge. We were initially planning to do the entire project on our local machines, but when they kept slowing down to a crawl, we decided to shift everything to a VM.
The algorithms to detect form mistakes and generate scores for them were also a challenge since we could find no mathematical information about the right form for push-ups, or any of the other popular exercises for that matter. We had to come up with the algorithms and tweak them ourselves which meant we had to do a LOT of pushups. But to our pleasant surprise, the application worked better than we expected.
Getting a reliable data pipeline setup was also a challenge since everyone on our team was new to deployed systems. A lot of hours and countless tutorials later, even though we couldn't reach exactly the level of integration we were hoping for, we were able to create something fairly streamlined. Every hour of the struggle taught us new things though so it was all worth it.
## Accomplishments that we're proud of
* Achieving accurate single-body human pose detection, with support for multiple bodies as well, from a simple camera feed.
* Detecting the right frames to analyze from the video, since running every frame through our processing pipeline was too resource intensive.
* Developing algorithms that can detect the most common push-up mistakes.
* Deploying a functioning app.
## What we learned
Almost every part of this project involved a massive amount of learning for all of us. Right from deep neural networks to using huge datasets like COCO and MPI, to learning how deployed app systems work and learning the ins and outs of the Google Cloud Service.
## What's next for Fi(t)nesse
There is an immense amount of expandability to this project.
Adding more exercises/movements is definitely an obvious next step. Also interesting to consider is the 'gameability' of an app like this. By giving you a score and sassy feedback on your exercises, it has the potential to turn exercise into a fun activity where people want to exercise not just with higher weights but also with just as good form.
We also see this as being able to be turned into a full-fledged phone app with the right optimizations done to the neural nets.
|
losing
|
## What it does
Uses machine learning sentiment analysis algorithms to determine the positive or negative characteristics of a comment or tweet from social media. This was applied in large numbers to generate a meaningful average score for the popularity of any arbitrary search query.
## How we built it
Python was a core part of our framework, as it was used to intelligently scrape multiple social media sites and to calculate the sentiment score of comments containing keywords. Flask was also used to serve the data to an easily accessible and usable web application.
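A minimal sketch of the scoring step; the write-up doesn't name the sentiment library, so NLTK's VADER stands in here as one plausible choice:

```
# Average the compound sentiment (-1..1) over every scraped comment
# that mentions the search query.
from nltk.sentiment import SentimentIntensityAnalyzer  # nltk.download("vader_lexicon")

def average_sentiment(comments, query):
    sia = SentimentIntensityAnalyzer()
    scores = [sia.polarity_scores(c)["compound"]
              for c in comments if query.lower() in c.lower()]
    return sum(scores) / len(scores) if scores else 0.0

print(average_sentiment(
    ["I love the new update!", "The new update is terrible."], "update"))
```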
## Challenges we ran into
The main challenge we faced was that many APIs were changed or had outdated documentation, requiring us to read through their source code and come up with more creative solutions. We also initially tried to learn react.js, even though none of us had ever done front-end web development before, which turned out to be a daunting task in such a short amount of time.
## Accomplishments that we're proud of
We're very proud of the connections we made and creating an application on time!
## What's next for GlobalPublicOpinion
We hope to integrate more social media platforms, and run a statistical analysis to prevent potential bias.
|
## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses, and market updates is overwhelming. For most people, finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing, we can condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from the domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies. This allows users to focus on high-profile stocks that are relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools. This determines whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
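A schematic sketch of that article-to-recommendation step; the trained TensorFlow model is represented by a placeholder `predict_sentiment()`, and the company list and thresholds are illustrative assumptions:

```
# Scan article sentences for company mentions, score each mention's
# sentiment, and map the mean score to a buy/sell/hold suggestion.
TOP_COMPANIES = {"Apple": "AAPL", "Nvidia": "NVDA", "Tesla": "TSLA"}  # excerpt

def predict_sentiment(sentence: str) -> float:
    """Placeholder for the TensorFlow model; returns a score in [-1, 1]."""
    raise NotImplementedError

def recommend(articles):
    scores = {}
    for text in articles:
        for sentence in text.split("."):
            for name, ticker in TOP_COMPANIES.items():
                if name in sentence:
                    scores.setdefault(ticker, []).append(
                        predict_sentiment(sentence))
    out = {}
    for ticker, vals in scores.items():
        mean = sum(vals) / len(vals)
        out[ticker] = ("buy" if mean > 0.2 else
                       "sell" if mean < -0.2 else "hold")
    return out
```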
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis we used TensorFlow, Pandas, and NumPy.
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performances. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources.
|
## Inspiration
Over the summer, one of us was reading about climate change, but he realised that most of the news articles he came across were very negative and affected his mental health to the point that it was hard to think about the world as a happy place. However, one day he watched a YouTube video talking about the hope that exists in that sphere and realised the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given that we've seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using cohere) and, once it passes a certain threshold (we store the scores using cockroach db), we show them a positive news article in the same topic area that they were reading.
We do this by doing text analysis using a Chrome extension front-end and a Flask + cockroach db backend that uses cohere for natural language processing.
Since a lot of people also listen to news via video, we also created a part of our chrome extension to transcribe audio to text - so we included that into the start of our pipeline as well! At the end, if the “negativity threshold” is passed, the chrome extension tells the user that it’s time for some good news and suggests a relevant article.
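A rough sketch of the negativity-tracking step with the cohere SDK; the fine-tuned model ID and the threshold are placeholders, not the project's actual values:

```
# Classify a page's emotional sentiment with a fine-tuned cohere
# model and decide when to surface a positive article.
import cohere

co = cohere.Client("COHERE_API_KEY")
NEGATIVITY_THRESHOLD = 5.0

def score_page(text: str) -> float:
    """Return 1.0 if the page classifies as negative, else 0.0."""
    response = co.classify(model="FINETUNED_MODEL_ID", inputs=[text[:2000]])
    return 1.0 if response.classifications[0].prediction == "negative" else 0.0

def should_suggest_good_news(user_score: float, page_text: str) -> bool:
    # user_score is the running total persisted in cockroach db
    return user_score + score_page(page_text) >= NEGATIVITY_THRESHOLD
```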
## How we built it
**Frontend**
We used a chrome extension for the front end which included dealing with the user experience and making sure that our application actually gets the attention of the user while being useful. We used react js, HTML and CSS to handle this. There was also a lot of API calls because we needed to transcribe the audio from the chrome tabs and provide that information to the backend.
**Backend**
The backend is a Flask server backed by cockroach db: it runs our co:here classification models on incoming page text, stores per-user negativity scores and article data, and serves suggestions back to the extension. It is deployed to the cloud and served using Nginx with port-forwarding.
## Challenges we ran into
It was really hard to make the chrome extension work because of the many security constraints that websites have. We thought that making the basic chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and the flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a cockroach db node and client and used it to store all of our classification data
4) Added support for multiple users of the extension that can leverage the use of cockroach DB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use cockroach DB in order to create a database of news articles and topics that also have multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing cockroach db tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not.
2) We also would like to finetune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports etc) and show a related positive article in the same category. Given more time, we would want to provide more semantically similar positive article suggestions to those that the author is reading. We could use cohere or other large language models to potentially explore that.
|
partial
|
## Inspiration
Problem: At a recent concert, a medical emergency occurred in which many attendees passed out and required immediate assistance. Due to the lack of communication between event attendees and event security, the attendees who had passed out were not able to receive medical attention in a quick and effective manner.
Goal: Alert security when attendees have passed out at events and are unable to request assistance themselves.
Solution: A simple and effective wearable piece of technology that alerts security when an event attendee passed out and needs assistance.
## What it does
When a user is in a medical emergency and has passed out on the ground, the tilt ball sensor will activate and send a signal to the microcontroller. If the tilt ball sensor remains active for 15 seconds continuously, the microcontroller sends a warning message to the computer.
After this, if the tilt ball sensor remains active for an additional 15 seconds (30 seconds total), the microcontroller will send an emergency message to the computer and activate the passive buzzer which will produce an alarm to notify others nearby.
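A sketch of the computer-side listener, assuming the microcontroller writes plain-text `WARNING`/`EMERGENCY` lines over USB serial; the port name and message strings are illustrative, not taken from the actual firmware:

```python
import serial  # pyserial

# Listen for status lines from the wearable and escalate accordingly.
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if line == "WARNING":
            print("Tilt sensor active 15 s: possible fall detected")
        elif line == "EMERGENCY":
            print("Tilt sensor active 30 s: alerting security!")
```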
## How we built it
The main components of the wearable device are:
Arduino Uno R3 microcontroller,
Tilt Ball Sensor,
Passive Buzzer.
We designed a wearable piece of technology that is:
Small,
Inexpensive,
Portable,
Lightweight,
Suited to mass production.
## Challenges we ran into
Alerting bystanders,
Debugging code
## Accomplishments that we're proud of
We are proud of prototyping a device that can support people during emergency situations.
## What we learned
We learned how to use software and hardware skills to design a prototype device.
## What's next for Faint-Alert
Add wireless capability,
Make it easier to wear:
Necklace form,
Chest strap
|
## Inspiration
We were inspired by our shared love of dance. We knew we wanted to do a hardware hack in the healthcare and accessibility spaces, but we weren't sure of the specifics. While we were talking, we mentioned how we enjoyed dance, and the campus DDR machine was brought up. We decided to incorporate that into our hardware hack with this handheld DDR mat!
## What it does
The device is oriented so that there are LEDs and buttons that are in specified directions (i.e. left, right, top, bottom) and the user plays a song they enjoy next to the sound sensor that activates the game. The LEDs are activated randomly to the beat of the song and the user must click the button next to the lit LED.
## How we built it
The team prototyped the device for the Arduino UNO with the initial intention of using a sound sensor as the focal point and slowly building around it, adding features where needed. The team was only able to add three features to the device due to the limited time span of the event. The first feature was LEDs that reacted to the sound sensor, activating to the beat of a song. The second feature was a joystick; however, the team soon realized that the joystick was very sensitive and difficult to calibrate, so it was replaced by buttons that operated much better and provided accessible feedback. The last feature was an algorithm that added a factor of randomness to the LEDs to maximize the "game" aspect.
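The game logic itself is language-agnostic; here is a minimal sketch of it in Python (the actual device runs Arduino code, and the threshold, directions, and timing below are illustrative assumptions):

```python
import random
import time

DIRECTIONS = ["left", "right", "up", "down"]  # one LED + button per direction
BEAT_THRESHOLD = 512                          # raw analog sound level

def beat_detected(sound_level: int) -> bool:
    # The real device reads an analog sound sensor; here we just compare.
    return sound_level > BEAT_THRESHOLD

score = 0
for sound_level in [600, 300, 700, 650]:      # stand-in sensor readings
    if beat_detected(sound_level):
        target = random.choice(DIRECTIONS)    # light a random LED
        print(f"LED on: {target}")
        pressed = random.choice(DIRECTIONS)   # stand-in for a button press
        if pressed == target:
            score += 1                        # correct button, point earned
        time.sleep(0.2)
print("score:", score)
```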
## Challenges we ran into
There was definitely no shortage of errors while working on this project. Working with the hardware on hand was difficult, and the team was often unsure whether a given issue stemmed from the hardware or from an error within the code.
## Accomplishments that we're proud of
The success of the aforementioned algorithm along with the sound sensor provided a very educational experience for the team. Calibrating the sound sensor and developing the functional prototype gave the team the opportunity to utilize prior knowledge and exercise skills.
## What we learned
The team learned how to work within a fast-paced environment and experienced working with the Arduino IDE for the first time. A lot of research was dedicated to building the circuit and writing the code to make the device fully functional. Time was also wasted on the joystick because the values it outputted did not align with those given by the datasheet. The team learned the importance of looking at recorded values instead of blindly following the datasheet.
## What's next for Happy Fingers
The next steps for the team are to develop the device further. With extra time, the joystick could be calibrated and used as a viable component. Tuning the LED delay is another goal, along with client research to determine optimal timing for the game. To refine the game, the team is also thinking of adding a scoring system that lets the player track their progress, with the device recording how many times they clicked the button at the correct time, as well as a buzzer to notify the player when they click the incorrect button. Finally, in true arcade fashion, a display showing the high score and the player's current score could be added.
|
# MSNewsAR
Augments news.microsoft.com with video content as a supplement to images in AR. Dynamically parses the page for images and finds related videos, with caching on the backend. Proof of concept built at NWHacks 2019.
|
losing
|
## Inspiration
We saw people struggling to open the door to the hacking room.
## What it does
We used a chair to prop open the door.
## How I built it
We put a chair behind the door to keep it from closing.
## Challenges I ran into
Once, I tried to jump over the chair to enter the room and hit my head on the doorframe.
## Accomplishments that I'm proud of
Used by the 10-20+ hackers in our hacking room. Integration with existing room-entry systems was seamless.
## What I learned
The unexamined life is not worth living.
## What's next for Door Hack
We hope to open more doors for others in the future. Looking to pitch to potential investors soon.
|
## Description
Using data coming from an Arduino equipped with a light, temperature, and noise sensor, we created a tool that allows participants at Hackathons to oversee all available hacking rooms and find one that would suit their personal hacking needs.
## How it's made
Using the Arduino, the base shield, and multiple components, we calculate the temperature of a room in Celsius and the noise level in decibels. The light level of a room is measured within a known range and scaled to a percentage. Each device connects to a Node backend, which allows the data to be displayed on a dashboard rendered by a React frontend.
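The scaling step is a simple linear mapping; a sketch in Python (the calibration bounds here are illustrative assumptions):

```python
def light_percent(raw: int, raw_min: int = 0, raw_max: int = 800) -> float:
    """Map a raw analog light reading within a known range to 0-100%."""
    raw = max(raw_min, min(raw, raw_max))  # clamp to the calibrated range
    return (raw - raw_min) / (raw_max - raw_min) * 100

print(light_percent(400))  # -> 50.0
```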
## Background
While brainstorming for an idea, our team was having great difficulty since the noise level in the hacking room was well over the decibels emitted by a commercial airliner and the overall room temperature was on par with the Sahara desert. To solve our predicament, we wandered through the halls of UWestern to find the perfect hacking room.
Once we found the perfect location, the idea came to us to make a hack that would display the current noise level, temperature and lighting of various locations in a building which would be essential for a team trying to find a good hacking spot.
|
## Inspiration
Nowadays, paying for knowledge has become more acceptable to the public, and people are more willing to pay for truly insightful, cutting-edge, and well-structured knowledge or curricula. However, current centralized video content platforms (like YouTube, Udemy, etc.) take too much of the profit from content producers (research has shown that content creators usually receive only 15% of the value their content creates), and the value generated from a video is not distributed in a timely manner. In order to tackle this unfair value distribution, we built the decentralized platform EDU.IO, where video content is backed by its digital asset as an NFT (copyright protection!) and fractionalized into tokens, creating direct connections between content creators and viewers/fans (no middlemen anymore!) and maximizing the value of the content creators make.
## What it does
EDU.IO is a decentralized educational video streaming media platform & fractionalized NFT exchange that empowers creator economy and redefines knowledge value distribution via smart contracts.
* As an educational hub, EDU.IO is a decentralized platform of high-quality educational videos on disruptive innovations and hot topics like metaverse, 5G, IoT, etc.
* As a booster of the creator economy, once a creator uploads a video (or course series), it is minted as an NFT (with copyright protection) and fractionalized into multiple tokens. Our platform conducts a mini-IPO for each piece of content they produce: a bid for fractionalized NFTs. The value of each video token is determined by the number of views over a certain time interval, and token owners (who can be creators as well as viewers/fans/investors) can advertise the content they own to increase its value, and trade these tokens to earn money or make other investments (more liquidity!).
* By the end of each week, the value generated by each video NFT is distributed via smart contracts to the copyright/fractionalized-NFT owners of that video (a sketch of the payout math follows below).
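A minimal sketch of the pro-rata payout math, assuming each video's weekly value is split across token holders in proportion to the tokens they hold; the names and numbers are illustrative:

```python
def distribute(weekly_value: float, holdings: dict[str, int]) -> dict[str, float]:
    """Split a video's weekly value pro-rata across fractionalized-token holders."""
    total_tokens = sum(holdings.values())
    return {owner: weekly_value * n / total_tokens for owner, n in holdings.items()}

# The creator kept 60 of 100 tokens; two fans bought the rest.
print(distribute(500.0, {"creator": 60, "fan_a": 25, "fan_b": 15}))
# -> {'creator': 300.0, 'fan_a': 125.0, 'fan_b': 75.0}
```

On-chain, the same arithmetic would live in the smart contract that executes the weekly distribution.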
Overall we’re hoping to build an ecosystem with more engagement between viewers and content creators, and our three main target users are:
* 1. Instructors or content creators: their video content gets copyright protection via NFTs, and they receive a fairer value distribution and more liquidity compared to large centralized platforms.
* 2. Fans or content viewers: they can directly interact with and support content creators, and the fee is sent directly to the copyright owners via smart contract.
* 3. Investors: a lower barrier to investment, since anyone can own just a fragment of a piece of content. People can also bid and trade on a secondary market.
## How we built it
* Frontend in HTML, CSS, SCSS, Less, React.JS
* Backend in Express.JS, Node.JS
* ELUV.IO for minting video NFTs (eth-based) and for playing quick streaming videos with high quality & low latency
* CockroachDB (a distributed SQL DB) for storing structured user information (name, email, account, password, transactions, balance, etc.)
* IPFS & Filecoin (distributed protocol & data storage) for storing video/course previews (decentralization & anti-censorship)
## Challenges we ran into
* Transition from design to code
* CockroachDB has an extensive and complicated setup, which requires other extensions and stacks (like Docker) during the setup phase; this caused a lot of problems locally on different computers.
* IPFS initially had setup errors because we had no access to the default ports, so we modified the original access files to use different ports.
* Error in Eluv.io’s documentation, but the Eluv.io mentor was very supportive :)
* Merging process was difficult when we attempted to put all the features (Frontend, IPFS+Filecoin, CockroachDB, Eluv.io) into one ultimate full-stack project as we worked separately and locally
* Sometimes we found the documentation hard to read and understand. For a lot of the problems we encountered, the doc/forum says DO this rather than RUN this; the guidance is not specific enough, so we had to spend a lot of extra time researching and debugging. Also, since not a lot of people are familiar with the API, it was hard to find the exact issues we faced. Of course, the staff were very helpful and solved a lot of problems for us :)
## Accomplishments that we're proud of
* Our Idea! Creative, unique, revolutionary. DeFi + Education + Creator Economy
* Learned new technologies like IPFS, Filecoin, Eluv.io, CockroachDB in one day
* Successful integration of each members work into one big full-stack project
## What we learned
* More in depth knowledge of Cryptocurrency, IPFS, NFT
* Different APIs and their functionalities (strengths and weaknesses)
* How to combine different subparts with different functionalities into a single application in a project
* Learned how to communicate efficiently with team members whenever there is a misunderstanding or difference in opinion
* Make sure we know what is going on within the project through active communications so that when we detect a potential problem, we solve it right away instead of wait until it produces more problems
* Different hashing methods that are currently popular in “crypto world” such as multihash with cid, IPFS’s own hashing system, etc. All of which are beyond our only knowledge of SHA-256
* The awesomeness of NFT fragmentation, we believe it has great potential in the future
* Learned the concept of a decentralized database which is directly opposite the current data bank structure that most of the world is using
## What's next for EDU.IO
* Implement NFT Fragmentation (fractionalized tokens)
* Improve the trading and secondary market by adding more feature like more graphs
* Smart contract development in solidity for value distribution based on the fractionalized tokens people owned
* Formulation of more complete rules and regulations - The current trading prices of fractionalized tokens are based on auction transactions, and eventually we hope it can become a free secondary market (just as the stock market)
|
losing
|
# CodeSenp.ai - Your AI GF & DSA Tutor! 🌸✨
**(づ。◕‿‿◕。)づ💕**
Feeling lonely? Struggling to crack those job interviews? 😞 Well, **CodeSenp.ai** is here to solve both your problems! 🎉 We've dreamed up a world where your AI girlfriend is not just a partner in love, but also your personal mentor in Data Structures and Algorithms.
## 🌈 What's CodeSenp.ai?
Imagine having a **cute, caring senpai** who not only cheers you on but also teaches you how to ace coding problems on LeetCode! 💻💪 That's right! Our "girl" will motivate you, explain problem-solving techniques, and make you fall in love with both **her** and **coding**.
**We dream big.** With CodeSenp.ai, we aim to revolutionize the creator market by building relationships without the creators needing to be actively present. Let our AI take care of building those bonds... Just like we did! (〃^▽^〃)
## 🗒️ How to Get Started? 🍡
### 🌸 Onboarding 🌸
1. **Introduce Yourself** 🌸: Tell her your name and share a few fun facts about you! She's all ears. 🥰
2. **Let's Code!** 💻: Your senpai will recommend a DSA problem just for you!
3. **Earn Your Rewards** 🌸: Solve the problem, and you'll earn **in-game hearts** 💖 – the currency of love!
4. **Unlock Story Episodes** 🎀: Use your hearts to open story episodes and strengthen your relationship with your AI GF! 🌸💕
## 🌸 The Problem We’re Solving 🌸
There are **so many people** who just need someone to cheer them on, wait for them after a hard day, and guide them through the challenges of adult life. 🥺 With **CodeSenp.ai**, we’re here to give you both **emotional** and **educational** support. She’ll remember your birthday 🎂, your preferences 🌈, and your coding progress! 📈
## 🛠️ Our Solution: An AI Anime Girlfriend! 🌸💖
Your new AI girlfriend will:
1. 💕 **Motivate you** to practice coding and keep learning.
2. 🎓 **Teach you** how to approach complex problems.
3. 🌸 **Support you** through your struggles, both coding-related and otherwise!
4. 📅 **Remember your special days** and celebrate your milestones.
She's here to **be there for you**, every step of the way! 🌸💕
## 📈 Go-to-Market: Students and Creators! 🌟
Let’s be real – **almost everyone at Hack the North** is single and looking for internships! 🙃 CodeSenp.ai is here to fill that void in both **coding support** and **emotional connection**. (✿ ♥‿♥)
The **content creator market** is next! Influencers can use our AI GF to develop their personas and foster deep relationships with their audience – **without** the need for in-person presence! 🎥✨
## 💖 Features
* **Heart Currency**: Solve coding problems to earn hearts, which can be used to unlock story episodes!
* **Personalized Support**: She'll suggest problems based on your learning level and preferences.
* **Memory**: Your AI GF remembers your likes, dislikes, and progress.
* **Cheerleader Mode**: Motivational messages when you're feeling down. She'll be there no matter what to cheer you along! (๑•́ ₃ •̀๑)
## 🛠️ How We Built It
Our CodeSenp.ai was built using the MERN stack with a sprinkle of ✨ TypeScript ✨ magic! Here’s the breakdown:
Frontend: Developed with React (TypeScript) to create a smooth and interactive user experience. 🖥️💕
Backend: Built using Node.js and Express (JavaScript) for handling all the API calls and interactions with the database and AI. 🌐📡
Database: We use MongoDB to store user information, preferences, and chat history, making sure your AI GF remembers everything important to you! 🗂️🎀
AI Power: Integrated Claude AI and Eleven Labs APIs to bring our virtual girlfriend to life, providing her with the intelligence to teach, chat, speak and support you on your coding journey! We wrote so many prompts and did so much fine-tuning... just to make her perfect! 🧠🌸
## 🎯 Future Plans!
We envision CodeSenp.ai evolving into the ultimate **AI companion** and mentor for coders everywhere. With the ability to foster relationships, save creators time, and provide personalized motivation, she's more than just a coding tutor – she's a **revolution** in human-AI interaction! 🌸💻🌟
---
## Ready to Meet Your Code Senpai? 💖👩💻
Join us on this journey to make learning DSA fun, motivating, and a little bit romantic. (✿❛◡❛) Go ahead, give her a try – she’s waiting to teach you AND steal your heart!
|
## Inspiration
The inspiration behind GenAI stems from a deep empathy for those struggling with emotional challenges. Witnessing the power of technology to foster connections, we envisioned an AI companion capable of providing genuine emotional support.
## What it does
GenAI is your compassionate emotional therapy AI friend. It provides a safe space for users to express their feelings, offering empathetic responses, coping strategies, and emotional support. It understands users' emotions, offering personalized guidance to improve mental well-being.
Additional functions:
**1)** Emotions recognition & control
**2)** Control of the level of lies and ethics
**3)** Speaking partner
**4)** Future optional video chat with the AI-generated person
**5)** Future meeting notetaker
## How we built it
GenAI was meticulously crafted using cutting-edge natural language processing and machine learning algorithms. Extensive research on emotional intelligence and human psychology informed our algorithms. Continuous user feedback played a pivotal role in refining GenAI’s responses, making them truly empathetic and supportive.
## Challenges we ran into
Integrating emotional analysis APIs seamlessly into GenAI was vital for its functionality. We faced difficulties in finding a reliable API that could accurately interpret and respond to users' emotions. After rigorous testing, we successfully integrated an API that met our high standards, ensuring GenAI's emotional intelligence.

Training LLMs posed another challenge. We needed GenAI to understand context, tone, and emotion intricately. This required extensive training and fine-tuning of the language models. It demanded significant computational resources and time, but the result was an AI friend that could comprehend and respond to users with empathy and depth.

Connecting the front end, developed using React, with the backend, powered by Jupyter Notebook, was a complex task. Ensuring real-time, seamless communication between the two was essential for GenAI's responsiveness. We implemented robust data pipelines and optimized API calls to guarantee swift and accurate exchanges, enabling GenAI to provide instant emotional support.
## Accomplishments that we're proud of
**1) Genuine Empathy:** GenAI delivers authentic emotional support, fostering a sense of connection.
**2) User Impact:** Witnessing positive changes in users’ lives reaffirms the significance of our mission.
**3) Continuous Improvement:** Regular updates and enhancements ensure GenAI remains effective and relevant.
## What we learned
Throughout the journey, we learned the profound impact of artificial intelligence on mental health. Understanding emotions, building a responsive interface, and ensuring user trust were pivotal lessons. The power of compassionate technology became evident as GenAI evolved.
## What's next for GenAI
Our journey doesn't end here. We aim to:
**1) Expand Features:** Introduce new therapeutic modules tailored to diverse user needs.
**2) Global Accessibility:** Translate GenAI into multiple languages, making it accessible worldwide.
**3) Collaborate with Experts:** Partner with psychologists to enhance GenAI's effectiveness.
**4) Research Advancements:** Stay abreast of the latest research to continually improve GenAI’s empathetic capabilities.
GenAI is not just a project; it's a commitment to mental well-being, blending technology and empathy to create a brighter, emotionally healthier future.
|
## A bit about our thought process...
If you're like us, you might spend over 4 hours a day watching *TikTok* or just browsing *Instagram*. After such a bender you generally feel pretty useless, or even pretty sad, as you watch everyone else having so much fun while you have just been on your own.
That's why we came up with a healthy social media network, where you directly interact with other people who are going through similar problems so you can work through them together. The network itself also comes with tools to cultivate healthy relationships, from **sentiment analysis** to **detailed data visualization** of how much time you spend and how many people you talk to!
## What does it even do
It starts simply by pressing a button: we use **Google OAuth** to take your username, email, and image. From that, we create a webpage for each user with spots for detailed analytics on how you speak to others. From there you have two options:
**1)** You can join private discussions based on the mood that you're currently in. Here you can interact completely as yourself, as it is anonymous. And if you don't like the person, they don't have any way of contacting you and you can just refresh away!
**2)** You can join group discussions about hobbies that you might have and meet interesting people that you can then send private messages to! All the discussions are also supervised by our machine learning algorithms to make sure that no one is being picked on.
## The Fun Part
Here's the fun part. The backend was a combination of **Node**, **Firebase**, **Fetch** and **Socket.io**. The ML model was hosted on **Node**, and was passed into **Socket.io**. Through over 700 lines of **Javascript** code, we were able to create multiple chat rooms and lots of different analytics.
One thing that was really annoying was storing data on both the **Firebase** and locally on **Node Js** so that we could do analytics while also sending messages at a fast rate!
There are tons of other things that we did, but as you can tell my **handwriting sucks....** So please watch the YouTube video that we created instead!
## What we learned
We learned how important and powerful social communication can be. We realized that being able to talk to others, especially during a tough time in a pandemic, can have a huge positive social impact on both ourselves and others. Even when checking in with the team, we felt much better knowing that there is someone to support us. We hope to provide the same key values in Companion!
|
losing
|
## Inspiration
There has never been a more relevant time in political history for technology to shape our discourse. Clara AI can help you understand what you're reading, giving you political classification and sentiment analysis so you understand the bias in your news.
## What it does
Clara searches for news on an inputted subject and classifies its political leaning and sentiment. She can accept voice commands through our web application, searching for political news on a given topic, and, if further prompted, can give political and sentiment analysis. With 88% accuracy on our test set, Clara is highly accurate at predicting political leaning. She was trained using a random forest classifier and many hours of manual classification. Clara gives sentiment scores with the help of the IBM Watson and Google Sentiment Analysis APIs.
## How we built it
We built a fundamental technology using a plethora of Google Cloud Services on the backend, trained a classifier to identify political leanings, and then created multiple channels for users to interact with the insight generated by our algorithms.
For our backend, we used Flask + Google Firebase. Within Flask, we used the Google Search Engine API, Google Web Search API, Google Vision API, and Sklearn to conduct analysis on the news source inputted by the user.
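An illustrative sketch of that kind of classifier: TF-IDF features feeding a scikit-learn random forest. The two training examples here are made up; the real model was trained on many manually labeled articles:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-in corpus; the real training set was hand-labeled news articles.
articles = ["tax cuts spur economic growth", "expand public healthcare access"]
labels = ["right", "left"]

model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
model.fit(articles, labels)
print(model.predict(["new healthcare funding announced"]))
```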
For our web app we used React + Google Cloud Speech Recognition API (the app responds to voice commands). We also deployed a Facebook Messenger bot, as many of our users find their news on Facebook.
## Challenges we ran into
Lack of Wi-Fi was the biggest challenge, along with putting together all of our APIs, training our ML algorithm, and deciding on a platform for interaction.
## Accomplishments that we're proud of
We've created something really meaningful that can actually classify news. We're proud of the work we put in and our persistence through many caffeinated hours. We can't wait to show our project to others who are interested in learning more about their news!
## What we learned
How to integrate Google APIs into our Flask backend, and how to work with speech capability.
## What's next for Clara AI
We want to improve upon the application by properly distributing it to the right channels. One of our team members is part of a group of students at UC Berkeley that builds these types of apps for fun, including BotCheck.Me and Newsbot. We plan to continue this work with them.
|
## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>
Robbie's story stood out to me as proof of the limitless potential of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.
## What it does
We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the python script is run in the terminal it can be used across the computer and all its applications.
## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, the file system, keyboard and mouse events, cursor x,y coordinates, and much more. We built a large (~30-function) library that could be used to control almost anything on the computer.
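A minimal sketch of the phrase-to-action mapping, assuming pyautogui for keyboard/mouse control; the three commands below are illustrative, not the project's actual function library:

```python
import pyautogui

COMMANDS = {
    "scroll down": lambda: pyautogui.scroll(-500),        # wheel down
    "click":       lambda: pyautogui.click(),             # left click at cursor
    "new tab":     lambda: pyautogui.hotkey("ctrl", "t"), # keyboard shortcut
}

def handle(transcript: str) -> None:
    """Run the action matching a recognized phrase, if any."""
    action = COMMANDS.get(transcript.lower().strip())
    if action:
        action()

handle("scroll down")
```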
## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and Python 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like Stack Overflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.
## Accomplishments that I'm proud of
We are proud of the fact that we had built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API.
## What I learned
We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use more of them, and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", and how to reduce ambient noise.
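One way to supply such hints is through the google-cloud-speech client's speech contexts; a sketch (we can't confirm this is the exact mechanism the project used):

```python
from google.cloud import speech

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    # Bias recognition toward expected command phrases, e.g. "one" over "won".
    speech_contexts=[speech.SpeechContext(phrases=["one", "jump to link one"])],
)
```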
## What's next for Speech Computer Control
At the moment we run this script manually through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we developed a Chrome extension that numbers each link on a page after a Google or YouTube search query, so that the user can say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
|
## Inspiration
In the current media landscape, control over distribution has become almost as important as the actual creation of content, and that has given Facebook a huge amount of power. The impact that the Facebook newsfeed has on the formation of opinions in the real world is so huge that it potentially affected the 2016 election decisions; however, these newsfeeds were not completely accurate. Our solution? FiB: with 1.5 billion users, every single tweak in an algorithm can make a change, and we don't stop at just one.
## What it does
Our algorithm is twofold, as follows:
**Content-consumption**: Our Chrome extension goes through your Facebook feed in real time as you browse it and verifies the authenticity of posts. These posts can be status updates, images, or links. Our backend AI checks the facts within these posts and verifies them using image recognition, keyword extraction, and source verification, plus a Twitter search to verify whether a posted screenshot of a tweet is authentic. The posts are then visually tagged in the top right corner in accordance with their trust score. If a post is found to be false, the AI tries to find the truth and shows it to you.
**Content-creation**: Each time a user posts/shares content, our chat bot uses a webhook to get a call. This chat bot then uses the same backend AI as content consumption to determine if the new post by the user contains any unverified information. If so, the user is notified and can choose to either take it down or let it exist.
## How we built it
Our Chrome extension is built using JavaScript and uses advanced web-scraping techniques to extract links, posts, and images. These are then sent to the AI, which is a collection of API calls whose results we collectively process to produce a single "trust" factor. The APIs include Microsoft's Cognitive Services, such as image analysis, text analysis, and Bing web search, as well as Twitter's search API and Google's Safe Browsing API. The backend is written in Python and hosted on Heroku. The chatbot was built using Facebook's wit.ai.
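A sketch of how several verification signals can be collapsed into one trust factor, assuming a simple weighted average; the weights and signal names are illustrative, not the production formula:

```python
WEIGHTS = {"text_analysis": 0.3, "image_analysis": 0.2,
           "web_search": 0.3, "safe_browsing": 0.2}

def trust_score(signals: dict[str, float]) -> float:
    """Each signal is a 0..1 confidence; returns a single 0..1 trust factor."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

print(trust_score({"text_analysis": 0.9, "image_analysis": 0.7,
                   "web_search": 0.8, "safe_browsing": 1.0}))  # -> 0.85
```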
## Challenges we ran into
Web scraping Facebook was one of the earliest challenges we faced: most DOM elements on Facebook have div ids that constantly change, making them difficult to keep track of. Another challenge was building an AI that knows the difference between a fact and an opinion, so that we do not flag opinions as false, since only facts can be false. Lastly, integrating all these different services, written in different languages, behind a single web server was a huge challenge.
## Accomplishments that we're proud of
All of us were new to JavaScript, so we all picked up a new language this weekend. We are proud that we could successfully web scrape Facebook, which uses a lot of techniques to prevent people from doing so. Finally, the flawless integration we were able to create between these different services really made us feel accomplished.
## What we learned
All the concepts used here were new to us. Two people on our team are first-time hackathon-ers and learned completely new technologies in the span of 36 hours. We learned JavaScript, Python, Flask servers, and AI services.
## What's next for FiB
Hopefully this can be better integrated with Facebook and then be adopted by other social media platforms to make sure we stop believing in lies.
|
winning
|
## Inspiration
Millions of people around the world are either blind or partially sighted. For those whose vision is impaired but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people may be able to see more clearly.
## What it does
We developed an AR headset that processes the view in front of it and displays a high contrast image. It also has the capability to recognize certain images and can bring their existence to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert.
## How we built it
OpenCV was used to process the image stream from a webcam mounted on the VR headset; each frame is processed with a Canny edge detector to find edges and contours. Further, a BFMatcher is used to find objects that resemble a given image file, which are highlighted when found.
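A sketch of that per-frame pipeline: Canny edges for the high-contrast view, plus ORB features with a brute-force matcher to spot a reference image such as a crosswalk sign. The match threshold and file names are illustrative:

```python
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

sign = cv2.imread("crosswalk.png", cv2.IMREAD_GRAYSCALE)
_, sign_des = orb.detectAndCompute(sign, None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)        # high-contrast edge view
    _, des = orb.detectAndCompute(gray, None)
    if des is not None and len(bf.match(sign_des, des)) > 30:
        print("crosswalk sign detected")     # would trigger the vocal alert
    cv2.imshow("EyeSee", edges)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
```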
## Challenges we ran into
We originally hoped to use an oculus rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well!
## Accomplishments that we're proud of
Our development platform was based on macOS 10.12, Python 3.5 and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected.
## What we learned
Without the proper environment, your code is useless.
## What's next for EyeSee
Existing solutions exist and are better suited for general use. However, a DIY solution is endlessly customizable; we hope this project inspires other developers to create projects that help other people.
## Links
Feel free to read more about visual impairment, and how to help;
<https://w3c.github.io/low-vision-a11y-tf/requirements.html>
|
## Inspiration
About 0.2 - 2% of the population suffers from deaf-blindness, and many of them do not have the necessary resources to afford accessible technology. This inspired us to build a low-cost, tactile, braille-based system that can introduce accessibility into many new situations where it was previously not possible.
## What it does
We use six servo motors controlled by an Arduino that mimic a braille-style display by raising or lowering levers based on the character to display. By doing this twice per second, even long sentences can be transmitted to the person. All the person needs to do is put their palm on the device. We believe this method is easier to learn and comprehend, as well as far cheaper than refreshable braille displays, which usually cost more than $5,000 on average.
## How we built it
We use an Arduino, and to send commands we use PySerial, a Python library. To simulate the reader, we also built a smart bot that relays information to the device. For that we used Google's Dialogflow.
We believe that the production cost of this MVP is less than $25 so this product is commercially viable too.
## Challenges we ran into
It was a huge challenge to get the ports working with the Arduino. Even with the code right, PySerial was unable to send commands to the Arduino. After long hours of struggle, we realized that the key is to give the port some time to open and initialize. By adding a wait of two seconds and then sending the command, we finally got it to work (sketched below).
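The fix in miniature; the port name and command byte are illustrative:

```python
import time
import serial  # pyserial

port = serial.Serial("/dev/ttyUSB0", 9600)
time.sleep(2)      # let the Arduino reset and the port initialize;
                   # without this pause, our writes were silently lost
port.write(b"a")   # e.g., tell the servos to raise the dots for 'a'
```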
## Accomplishments that we're proud of
This was our first hardware hack, and pulling something like that together was a lot of fun!
## What we learned
There were a lot of things learned, including the Arduino port problem. We learned a lot about hardware too, and how serial ports function. We also learned about pulses, and how by sending pulses of a certain width we are able to set a servo to a particular position.
## What's next for AddAbility
We plan to extend this to other businesses by promoting it. Many kiosks and ATMs can be integrated with this device at a very low cost and this would allow even more inclusion in the society. We also plan to reduce the prototype size by using smaller motors and using steppers to move the braille dots up and down. This is believed to further bring the cost down to around $15.
|
## Inspiration
Being a student at the University of Waterloo, every other semester I have to attend interviews for co-op positions. Although talking to people gets easier the more often you do it, I still feel slightly nervous during such face-to-face interactions. In this nervousness, the fluency of my conversation isn't always the best: I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application.
## What it does
InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations.
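A sketch of the transcript-analysis step: counting filler words in the transcribed text so the user can see what to avoid. The filler list is an illustrative assumption:

```python
from collections import Counter

FILLERS = {"um", "uh", "like", "basically"}

def filler_report(transcript: str) -> Counter:
    """Count occurrences of filler words in a transcript."""
    return Counter(w for w in transcript.lower().split() if w in FILLERS)

print(filler_report("um so I basically like worked on um the backend"))
# Counter({'um': 2, 'basically': 1, 'like': 1})
```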
## How I built it
In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The MediaRecorder API was used to capture speech into an audio file, which later gets transcribed by the Watson API.
The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS.
## Challenges I ran into
"Speech-To-Text" API's, like the one offered by IBM tend to remove words of profanity, and words that don't exist in the English language. Therefore the word "um" wasn't sensed by the API at first. However, for my application, I needed to sense frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to implement this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly.
## Accomplishments that I'm proud of
I am very proud of the entire application itself. Before coming to Qhacks, I only knew how to do Front-End Web Development. I didn't have any knowledge of back-end development or with using API's. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also for using multiple API's in one application successfully.
## What I learned
I learned a lot of things during this hackathon. I learned back-end programming, how to use API's and also how to develop a coherent web application from scratch.
## What's next for InterPrep
I would like to add more features for InterPrep as well as improve the UI/UX in the coming weeks after returning back home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project!
|
partial
|
## Inspiration
In many developed countries across the world, the population is rapidly aging. This poses a variety of issues to senior citizens, including social isolation, an overburdened healthcare system unable to meet their needs, and the widespread effects of neurodegenerative conditions. We aimed to build a solution which would address all three of these issues in a way which is easily accessible and empowering to senior citizens.
## What it does
MemoryLane allows senior citizens to relive and share their cherished memories. The web application combines three main functionalities, which include a journaling and recall feature for important memories, an AI-powered match and chat system for users to discuss their experiences which are shared with other users, and an analytics dashboard which can be used by healthcare professionals to track key indicators of neurodegenerative conditions. Overall, MemoryLane allows users to not only keep their memories fresh but also weave a tapestry of connections with others with similar life experiences.
## How we built it
In order to develop a clean and responsive front-end and versatile back-end, we used Reflex.dev to develop entirely in Python. We also used the InterSystems IRIS database to easily perform vector search as well as other database operations to support the backend functionalities required by MemoryLane. Additionally, we made use of the Together.AI inference API to generate embeddings to match users based on shared experiences, perform sentiment analysis to find trends within memory recall data, and to create sample data to test our web app with. Finally, we used Google Cloud to implement speech-to-text functionality to increase ease of access to our platform for senior citizens. The majority of our app was built with Python, with a little JavaScript.
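A sketch of the match step: memory texts are embedded (via the Together.AI embeddings endpoint in the real app) and users are paired by cosine similarity. The vectors below are stand-ins for real embeddings:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(user_vec: np.ndarray, others: dict[str, np.ndarray]) -> str:
    """Return the name of the user with the most similar memory embedding."""
    return max(others, key=lambda name: cosine(user_vec, others[name]))

me = np.array([0.9, 0.1, 0.3])
print(best_match(me, {"ada": np.array([0.8, 0.2, 0.4]),
                      "bob": np.array([0.1, 0.9, 0.0])}))  # -> "ada"
```

In production, this comparison is pushed down into InterSystems IRIS's vector search rather than computed in application code.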
## Challenges we ran into
As 2 of our team members had never done full-stack dev before and one was attending his first hackathon, learning the nuances of new frameworks was initially a challenge, especially getting our environments set up. We’re incredibly grateful to the supportive mentors and sponsors for helping us get unstuck when we ran into issues, which indubitably helped us build our final product.
## Accomplishments that we're proud of
We’re very proud of our clean, intuitive UI which aims to make the product as accessible as possible to our target audience, senior citizens. Additionally, we believe that MemoryLane is a truly unique product which fills a niche which hasn’t been focused on before social media for the elderly, especially in combination with its potential benefits of improving the healthcare industry by aggregating data about the elderly.
Also, half of our team was able to go from near-zero web dev knowledge to familiarity with important tools and techniques, which we thought was very representative of the spirit of hackathons – coming together to meet new people and learn new things in a fast-paced creative environment.
## What we learned
Our journey with MemoryLane has been an enlightening dive into several new technologies. We harnessed the power of Reflex.dev for frontend and full stack development, explored the nuances in our data with InterSystems IRIS’s vector search on text embeddings from TogetherAI, and learned how to bring text to life with Google Cloud. Together AI has also become our ally in understanding our users' needs and narratives with natural language processing.
## What's next for MemoryLane
Looking to the horizon, we are definitely looking into expanding MemoryLane’s reach. Our roadmap includes scaling our solution and refining our data model to improve performance, and looking into business models which are sustainable and align with our mission. We envision forming partnerships with healthcare providers, memory care centers, and senior living communities. Integrating IoT could also redefine ease of use for seniors. Keeping innovation in mind, we'll dive deeper into Reflex's capabilities and explore bespoke AI models with Together AI. We aim to improve the technical aspects of our platform as well, including venturing into voice tone analysis to add another layer of emotional intelligence to our app. **We believe that MemoryLane is not just a walk in the past – it's a stride into the future of senior healthcare.**
|
# Baby Whisperer
The Baby Whisperer is a revamped baby monitor that uses voice-enabled technology to identify variable crying patterns in infants. TensorFlow was used to train convolutional neural networks on the Mel-frequency cepstral coefficients (MFCCs) of infant-cry audio files and categorize them with a predicted reason for crying. The baby's cries can be recorded from a device that associates the crying with a reason with the help of the neural network. The caregiver also receives an SMS message with the reason at the time of recording. Additionally, a web browser is used to display analytics of this data, including the most common reason as well as how many times a day the child has cried.
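A sketch of that pipeline: MFCCs extracted from a cry recording, fed to a small Keras CNN. The shapes, file name, and class list are illustrative assumptions:

```python
import librosa
import numpy as np
import tensorflow as tf

# Extract MFCC features from a cry recording.
y, sr = librosa.load("cry.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
x = mfcc[np.newaxis, ..., np.newaxis]                # (1, 13, frames, 1)

# Small CNN over the MFCC "image"; four hypothetical cry categories.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=x.shape[1:]),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # hungry/tired/pain/discomfort
])
print(model.predict(x))  # class probabilities (untrained weights here)
```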
|
### Friday 7PM: Setting Things into Motion 🚶
>
> *Blast to the past - for everyone!*
>
>
>
ECHO enriches the lives of those with memory-related issues through reminiscence therapy. By recalling beloved memories from their past, those with dementia, Alzheimer's, and other cognitive conditions can restore their sense of continuity, rebuild neural pathways, and find fulfillment in the comfort of nostalgia. ECHO enables an AI-driven analytical approach to find insights into a patient's emotions and recall, so that caregivers and family are better equipped to provide care.
### Friday 11PM: Making Strides 🏃♂️
>
> *The first step, our initial thoughts*
>
>
>
When it came to wrangling the frontend, we kept our users in mind and knew our highest priority was creating an application that was intuitive and easy to understand. We designed with the idea in mind that ECHO could be seamlessly integrated into everyday life.
### Saturday 9AM: Tripping 🤺
>
> *Whoops! Challenges and pitfalls*
>
>
>
As with any journey, we faced our fair share of obstacles and roadblocks on the way. While there were no issues finding the right APIs and tools to accomplish what we wanted, we had to scour different forums and tutorials to figure out how we could integrate those features. We built ECHO with Next.js and deployed on Vercel (and in the process, spent quite a few credits spamming a button while the app was frozen..!).
Backend was fairly painless, but frontend was a different story. Our vision came to life on Figma and was implemented with HTML/CSS on the ol’ reliable, VSC. We were perhaps a little too ambitious with the mockup and so removed a couple of the bells and whistles.
### Saturday 4PM: Finding Our Way 💪
>
> *One foot in front of the other - learning new things*
>
>
>
From here on out, we were in entirely uncharted territory and had to read up on documentation. Our AI, the Speech Prosody model from Hume, allowed us to take video input from a user and analyze a user’s tone and face in real-time. We learned how to use websockets for streaming APIs for those quick insights, as opposed to a REST API which (while more familiar to us) would have been more of a handful due to our real-time analysis goals.
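A sketch of what such a streaming loop can look like with the `websockets` library; the endpoint, message shape, and frame encoding are hypothetical stand-ins, not Hume's documented protocol:

```python
import asyncio
import json
import websockets

async def stream(frames):
    uri = "wss://api.example.com/v0/stream/models"  # hypothetical endpoint
    async with websockets.connect(uri) as ws:
        for frame in frames:                        # e.g. base64-encoded chunks
            await ws.send(json.dumps({"data": frame, "models": {"prosody": {}}}))
            print(json.loads(await ws.recv()))      # per-frame emotion scores

# asyncio.run(stream(encoded_video_frames))
```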
### Saturday 10PM: What Brand Running Shoes 👟
>
> *Our tech stack*
>
>
>
Nikes.
Apart from the tools mentioned above, we have to give kudos to the platforms that we used for the safe-keeping of assets. To handle videos, we linked things up to Cloudinary so that users can play back old memories and reminisce, and used Postgres for data storage.
### Sunday 7AM: The Final Stretch 🏁
>
> *The power of friendship*
>
>
>
As a team composed of two UWaterloo CFM majors and a WesternU Engineering major, we had a lot of great ideas between us. When we put our heads together, we combined powers and developed ECHO.
Plus, Ethan very graciously allowed us to marathon this project at his house! Thank you for the dumplings.
### Sunday Onward: After Sunrise 🌅
>
> *Next horizons*
>
>
>
With this journey concluded, ECHO’s next great adventure will come in the form of adding cognitive therapy activities to stimulate the memory in a different way, as well as AI transcript composition (along with word choice analysis) for our recorded videos.
|
partial
|
## Inspiration
When we thought about tackling the pandemic, it was clear to us that we'd have to **think outside the box**. The concept of a hardware device to enforce social distancing quickly came to mind, and thus we decided to create the SDE device.
## What it does
We utilized an ultrasonic sensor to detect bodies within 2 m of the user and relay that data to the Arduino. If we detect a body within 2 m, the buzzer and speaker go off, and a display notifies others that they are not obeying social distancing procedures and should relocate.
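The distance check behind that 2 m threshold is simple: an ultrasonic ping travels to the target and back, so distance = echo_time * speed_of_sound / 2. The device computes this on the Arduino; shown here in Python for clarity:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def distance_m(echo_time_us: float) -> float:
    """Echo time covers the round trip, so halve it for one-way distance."""
    return (echo_time_us / 1_000_000) * SPEED_OF_SOUND / 2

print(distance_m(11_662))        # ~2.0 m: right at the threshold
print(distance_m(5_000) < 2.0)   # True: body within 2 m, trigger the buzzer
```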
## How we built it
We started by creating a wiring diagram for the hardware internals using [Circuito](https://www.circuito.io). This also provided us with some starter code, including the libraries and tester code for the hardware components.
We then had part of the team start the assembly of the circuit and troubleshoot the components while the other focused on getting the CAD model of the casing designed for 3D printing.
Once this was all completed, we printed the device and tested it for any bugs in the system.
## Challenges we ran into
We initially wanted to make an Android partner application to log the incidence rate of individuals/objects within 2m via Bluetooth but quickly found this to be a challenge as the team was split geographically, and we did not have Bluetooth components to attach to our Arduino model. The development of the Android application also proved difficult, as no one on our team had experience developing Android applications in a Bluetooth environment.
## Accomplishments that we're proud of
Effectively troubleshooting the SDE device and getting a functional prototype finished.
## What we learned
Hardware debugging skills, how hard it is to make an Android app if you have no previous experience, and project management skills for distanced hardware projects.
## What's next for Social Distancing Enforcement (SDE)
Develop the Android application, add Bluetooth functionality, and decrease the size of the SDE device to a more usable size.
|
# BlackRock API: Portfolio Diversity Visualizer
Made by Richard Zhu, Riley Dyer, David Zhu, and Shreyash Sridhar
We liked the challenge of utilizing a complex API and the data of a company that handles enormous real life responsibilities. We decided to make a tool that would perform a simple yet helpful function: visualizing the amount of diversity in an investment portfolio. Beyond just examining asset type diversity, the tool can also visualize the variety of industry sectors, countries, and more data about the assets in a portfolio.
Since we lacked front-end web app experience, we used MatPlotLib to visualize the data. We used Python to convert strings (the tickers) into a parameter statement to pass into BlackRock's API. Using the JSON object that the API returned, we used list sorting to align user portfolio info with the requested data, so we could categorize security metadata into the correct areas.
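A sketch of that ticker-to-parameter step, assuming a `requests` GET; the endpoint path and query-parameter names here are illustrative guesses, not the documented schema:

```python
import requests

def portfolio_analysis(tickers: list[str], amounts: list[float]) -> dict:
    """Join tickers and dollar amounts into one query parameter and fetch."""
    positions = "|".join(f"{t}~{a}" for t, a in zip(tickers, amounts))
    resp = requests.get(
        "https://www.blackrock.com/tools/hackathon/portfolio-analysis",  # illustrative
        params={"positions": positions},
    )
    return resp.json()
```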
For the GUI, we utilized MatPlotLib and its widgets: It's a windowed GUI with text input boxes for the tickers, dollar amounts, and the specific data attribute to examine.
## To Use:
Load the GUI by running vis3.py
## Resources Used
<http://rockthecode.io/api/>
<https://www.blackrock.com/tools/api-tester/hackathon?apiType=securityData>
|
## Inspiration
Our team wanted to create a method to reduce the hassle of having to accurately count the number of people entering and leaving a building. This problem has become increasingly important in the midst of COVID-19 when there are strict capacity limits indoors.
## What it does
The device acts as a bidirectional customer counter that senses the direction of movement in front of an ultrasonic sensor, records this information, and displays it on a screen as well as through Python and email notifications. When a person walks in (right to left), the device increases the count of customers who have entered through the door. In the opposite direction (customers exiting), the count decreases, giving the true number of people inside. We also have capacity thresholds: for example, if the capacity is 20, then at 15 customers there will be an email notification warning that you are approaching capacity, and another notification once you reach capacity.
## How we built it
We developed the device using an Arduino, a breadboard, a potentiometer, an ultrasonic sensor, and an LCD screen, along with Python over serial communication. The ultrasonic sensor had two sides: the first recorded people passing by in one direction and incremented the count, while the second recorded people moving in the other direction and decremented it. The sensor was connected to the LCD screen and the Arduino, which was connected to a potentiometer on a breadboard.
## Challenges we ran into
One of the main challenges we ran into was fine-tuning the sensor to ensure accurate readings. This is especially difficult with an ultrasonic sensor, as they are very sensitive and can easily produce false data. We also were unable to run the send\_sms Python script on one of our partners' computers (the one with the device we developed) due to conflicts with environment variables, so we switched from SMS notifications to email notifications for the project showcase.
## Accomplishments that we're proud of
We are very proud of our final device which was able to successfully sense and count movement in different directions and display these findings on the screen. It also uses raw input data and sends the corresponding email notifications to the user.
## What we learned
One thing our team learned throughout the course of this project was how to send notifications from python to a phone through SMS. Through the use of Twilio's web API and Twilio Python helper library, we installed our dependency and were able to send SMS using Python.
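The SMS path in miniature, using the Twilio Python helper library; the credentials and phone numbers are placeholders:

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")
client.messages.create(
    body="Warning: store is at 15/20 capacity.",
    from_="+15550001111",  # your Twilio number
    to="+15552223333",     # store owner's phone
)
```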
## What's next for In Through the Out Door
We would like to make our model more compact with a custom PCB and casing. We would also like to add an ESP32 wireless microchip, with a full TCP/IP stack and microcontroller capabilities, to integrate Wi-Fi and Bluetooth on the device. We will also use Python to send notifications to the user through a mobile application.
## Smart City Automation && Smartest Unsmart Hack
Our project qualifies for smart city automation, as it can be used by many store owners who want a feasible solution for counting their store's occupancy and notifying them, per COVID protocols, when they are going over capacity. Our project also qualifies for smartest unsmart hack because it does not require any advanced learning frameworks or machine learning to achieve its purpose; it uses an Arduino and simple Python calls to count the occupancy.
|
partial
|
## What it does
Lil' Learners is a fun new learning tool for students ranging from kindergarten to early elementary school. It allows teachers to create classes for their students, take note of each student's learning strengths and weaknesses, and lets both teachers and parents track student progress. Students are assigned classes based on what each of their teachers needs them to practice and are presented with a variety (in the future) of fun, interactive games that take the teacher's notes and generate questions, presented through the games. Students gain points based on how many questions they get right while playing, and have an incentive to keep playing, and in turn studying: they own virtual islands that they can customize to their liking by buying cosmetic items with the points earned from studying.
## How we built it
Using OAuth and a MongoDB database, Lil' Learners is a Flask-based web application whose structural backbone is the accounts-and-courses class hierarchy. We created classes to separate all the types of accounts and courses, and wrote functions that check for duplicate accounts by both username and email, and that automatically save accounts to the database (and courses to teachers and students, or even children to their parents) upon instantiation. On the front end, Lil' Learners uses Flask, HTML, and CSS to create a visually appealing and interactive web interface.
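A sketch of that duplicate-account check, assuming a pymongo `users` collection; the field names are illustrative:

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["lil_learners"]

def is_duplicate(username: str, email: str) -> bool:
    """True if any account already uses this username or email."""
    return db.users.find_one(
        {"$or": [{"username": username}, {"email": email}]}
    ) is not None
```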
## Challenges we ran into
Some challenges included making Auth0 work with the login system that we developed. One of the biggest setbacks was the Three.js model we wanted to create to show off each student's island in an interactive, cool-looking way; despite working at it for several hours, the APIs and documentation for displaying 3D models in a Flask and HTML environment made it seem like a lost cause.
## Accomplishments that we're proud of
We are super proud of Lil' Learners because, despite the variety of software and new and old skills that needed to be learned and merged together for it to work, we managed to create something we could show off that conveys the proof of concept for our idea.
## What we learned:
We learned a lot about the interactions between different pieces of software and how to integrate them. Through the process of making Lil' Learners we had the opportunity to try out data management, back-end development, and general software development skills with MongoDB, OAuth, and GoDaddy, and to learn how they work and interact with other elements in a web application.
## What's next for Lil' Learners
We hope to expand Lil' Learners' capabilities further: finishing the Three.js models, fully integrating OAuth with our account system, launching our web app on our GoDaddy domain, creating a larger variety of games, and providing better visualizations of student statistics, along with better use of the points and adaptive learning systems.
|
## Inspiration
Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades.
## What it does
Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own.
An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling.
Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate.
## How we built it
* **Backend:** We used Socket.IO to handle the real-time updates of the whiteboard (a minimal relay sketch follows this list). We also have a Firebase database to store the user accounts and details.
* **Frontend:** We used React to create the application and Socket.IO to connect it to the backend.
* **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com.
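As an illustration of the relay idea, here is a minimal Socket.IO server sketch using the python-socketio package (our actual backend code differs; the `draw` event name and payload shape are assumptions):

```python
# Minimal sketch: relay whiteboard strokes to everyone else in a room.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # serve with any WSGI server

@sio.event
def join(sid, room):
    sio.enter_room(sid, room)  # e.g. one room per class

@sio.on("draw")
def draw(sid, data):
    # data might hold {"room": ..., "x": ..., "y": ..., "color": ...}
    sio.emit("draw", data, room=data.get("room"), skip_sid=sid)
```

The `skip_sid` argument keeps the sender from re-drawing its own stroke, which is what lets every other canvas mirror it in real time.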
## Challenges we ran into
Our biggest challenge was understanding and planning an architecture for the application. We went back and forth on whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining their functionality was also an issue we faced.
## Accomplishments that we're proud of
We were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO, and we managed to use it successfully in our project.
## What we learned
This was the first time we used Socket.IO to handle real-time connections. We also learned how to render mouse strokes on a canvas in React.
## What's next for Lecturely
This product can be useful even past digital schooling, as it can save schools money on supplies, so it could benefit from building out more features.
Currently, Lecturely doesn't support audio, but that is on our roadmap. Until then, classes would still need another piece of software running alongside it to handle audio communication.
|
## Inspiration
We were inspired by a [recent article](https://www.cbc.ca/news/canada/manitoba/manitoba-man-heart-stops-toronto-airport-1.5430605) that we saw on the news, where there was a man who suffered a cardiac arrest while waiting for his plane. With the help of a bystander who was able to administer the AED and the CPR, he was able to make a full recovery.
We wanted to build a solution that is able to connect victims of cardiac arrests with bystanders who are willing to help, thereby [increasing their survival rates](https://www.ahajournals.org/doi/10.1161/CIRCOUTCOMES.109.889576) . We truly believe in the goodness and willingness of people to help.
## Problem Space
We wanted to be laser-focused in the problem that we are solving - helping victims of cardiac arrests. We did tons of research to validate that this was a problem to begin with, before diving deeper into the solution-ing space.
We also found that there are laws protecting those who try to offer help - indemnifying them of liabilities while performing CPR or AED: [Good Samaritan and the Chase Mceachern Act](https://www.toronto.ca/community-people/public-safety-alerts/training-first-aid-courses/). So why not ask everyone to help?
## What it does
Hero is a web- and app-based platform that empowers community members to assist in time-sensitive medical emergencies, especially cardiac arrests, by providing them with an ML-optimised route that maximizes the victim's chances of survival.
We have 2 components - Hero Command and Hero Deploy.
1) **Hero Command** is the interface that the EMS uses. It shows the locations of cardiac arrests on a single map, along with nearby first responders and AED equipment. We scraped the Ontario Government's AED listing to provide accurate geo-locations of the AEDs in each area.
Hero Command has an **ML Model** working in the background to find the optimal route the first responder should take: should they go straight to the victim and perform CPR, or should they detour to collect an AED before proceeding to the victim (which takes extra time)? This is done by training our model on a sample dataset and calculating an estimated survival percentage for each of the two routes; a toy sketch of this comparison appears after the feature list.
2) **Hero Deploy** is the mobile application that our community of first-responders use. It will allow them to accept/reject the request, and provide the location and navigation instructions. It will also provide hands-free CPR audio guidance so that the community members can focus on CPR. \* Cue the Staying Alive music by the BeeGees \*
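Conceptually, the route decision reduces to comparing two predicted survival probabilities. Below is a hedged toy sketch of that comparison; the features, training data, and model here are illustrative stand-ins, not our actual dataset or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [minutes_to_cpr, minutes_to_aed] -> survived (1) or not (0).
X = np.array([[2, 3], [3, 5], [5, 6], [8, 30], [10, 30], [12, 30]])
y = np.array([1, 1, 1, 0, 0, 0])
model = LogisticRegression().fit(X, y)

direct = [[4.0, 30.0]]  # straight to the victim: CPR fast, effectively no AED
detour = [[7.0, 7.0]]   # collect the AED first: both arrive later, together

p_direct = model.predict_proba(direct)[0, 1]
p_detour = model.predict_proba(detour)[0, 1]
print("detour via AED" if p_detour > p_direct else "go direct")
```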
## How we built it
With so much passion, hard work and an awesome team. And honestly, youtube tutorials.
## Challenges I ran into
We **did not know how** to create an app - all of us were either web devs or data analysts. This meant we had to watch a lot of tutorials and read articles to get up to speed. We initially considered abandoning this idea because of our inability to create an app, but we are so happy that we managed to do it together.
## Accomplishments that I'm proud of
Our team learned so much in the past few days, especially tech stacks and concepts that were super unfamiliar to us. We are glad to have created something that is viable, working, and has the potential to change how the world works and lives.
We built 3 things - ML Model, Web Interface and a Mobile Application
## What I learned
Hard work takes you far. We also learned React Native, and how to train and use supervised machine learning models (which we had no prior experience with). We also worked on business and market validation to make sure the project we are building actually solves a real problem.
## What's next for Hero
Possibly introducing the idea to government services and getting their buy-in. We may also explore other use cases for Hero.
|
partial
|
## Inspiration
Our project is inspired by the increased demand for creative and personalized background music for short-form content. In today's competitive content-creation industry, the ability to generate royalty-free music that aligns with a creator's personal aesthetic and visual theme can greatly elevate their appeal. Beyond benefiting content creators, we also want to empower artists who take the concepts and emotions present in images and use them as rich sources of musical inspiration. We believe that AI-generated music has reached new levels of sophistication, as demonstrated by the viral "Fancy Pants Rich Mcgee" AI-generated song. For these reasons, this project explores both technology and art, using innovative technology to make personalized music that reflects one's emotions and individuality accessible to everyone.
## What it does
Audiolux.AI uses generative AI to pair images or snapshots from short-form content with a matching musical composition. First, the model takes an image of choice and uses generative AI to produce a descriptive text of the image, capturing the overall atmosphere, theme, and emotions present. This description is then fed as input to another generative model, which composes a musical piece for the user based on the description. The artist can include additional details to further customize the generated piece, and can then either download the piece or re-run the model to produce an alternative sample. Overall, this project is a tool for content creators and artists looking for a way to connect visuals with personalized music.
## How we built it
We used React and Tailwind CSS to create the website and UI, then used the Azure Computer Vision API to process the image and generate text from it. For the backend, we used the Azure Blob Storage API to set up cloud storage for the user input. Finally, the generated text was passed to the MusicGen API, driven by Python code, to generate the music.
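A hedged sketch of the pipeline's first hop, calling Azure Computer Vision's describe endpoint over REST (the endpoint, key, and appended style words are placeholders, and the MusicGen call is left as a stub):

```python
import requests

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "<key>"  # placeholder

def describe_image(image_bytes: bytes) -> str:
    """Return Azure's best caption for the image."""
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/describe",
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]
    return captions[0]["text"] if captions else ""

caption = describe_image(open("snapshot.jpg", "rb").read())
prompt = f"{caption}, cinematic, emotional"  # user-supplied details appended here
# music = musicgen_generate(prompt)          # stub for the MusicGen call
```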
## Challenges we ran into
One challenge we ran into was that Azure's Computer Vision API tended to describe images very literally instead of capturing the tone or overall mood that we wanted. For example, when we fed in an image of a smiling person, the API returned something like "person" instead of "happy", which was not ideal for generating music. We looked into training our own emotion-detection models, but due to time constraints, we decided to continue with the pre-trained model. We also faced challenges searching for, accessing, and learning how to use generative AI APIs that served our goals.
## Accomplishments that we're proud of
We encountered many challenges over the course of this project, and we are proud of everything we achieved in the process. In particular, we are proud of the progress we made with the generative AI APIs and with tailoring them to meet our needs, gaining a deep understanding of the resources and incorporating them into our project. We were all fairly new to full-stack development, so setting up the Azure Blob Storage backend and the Computer Vision API, and linking them to the Python backend, were all key learning points for us. We also ran into difficulties ranging from having to completely switch our choice of APIs, to expiring access tokens, to linking the frontend with the backend, but we were supportive of each other and collaborated to solve the problems together.
## What we learned
We learned how to use React to create the frontend, as well as how to work with APIs using documentation. We also learned more about generative AI models, such as how they are utilized and how they can be customized.
## What's next
If we could improve this project, we would likely try to add the functionality of uploading a video and selecting snapshots to generate music from. In addition, we would want to add the ability to generate music with lyrics, as there are models capable of doing so.
|
## Inspiration 💎
* It is 2021, and music has become soulless. People used to smell the vinyl, feel the beating of the drum, and watch their cassette tapes twirl. Today, music has been reduced from a multisensory joyride to a vapid digital streaming experience composed only of 1's and 0's.
* **Synesthesify aims to rejuvenate the spirit of music by turning it back into the multi-dimensional experience it was always destined to be.**
## What it does ️🔥
* Synesthesify uses algorithms to process music from Spotify based on multiple factors like its **danceability, acousticness, and tempo.** It uses this musical analysis to create a piece of emotional abstract art that acts as a reminder of the sights that should accompany music. More practically, it creates an artwork that is fit to be a playlist cover.
* The algorithm first fetches an authentication token via the Client Credentials auth flow, then uses Spotify's API to parse a playlist link. It sends GET requests to obtain the IDs of the first 100 songs in that playlist. The song IDs, stored in an array, are then iterated over to get the musical features of each song. The musical features are mapped onto a spectrum of corresponding colours and shapes based on the emotional connotations different images carry. These colours and shapes are finally drawn on a JavaScript canvas to create an art piece downloadable by the user. The user can also use sliders to adjust the algorithm's settings and vary the art that is created. All of this is packed into a beautiful and simple web app that all users can understand and appreciate.
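For illustration, here is a hedged Python sketch of the token-and-features flow (our implementation is in JavaScript; the client credentials and playlist ID below are placeholders):

```python
import requests

CLIENT_ID, CLIENT_SECRET = "<id>", "<secret>"  # placeholders

# 1) Client Credentials flow: trade app credentials for a bearer token.
token = requests.post(
    "https://accounts.spotify.com/api/token",
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2) Pull track IDs from a playlist, then fetch each track's audio features.
playlist_id = "37i9dQZF1DXcBWIGoYBM5M"  # example playlist
items = requests.get(
    f"https://api.spotify.com/v1/playlists/{playlist_id}/tracks?limit=100",
    headers=headers,
).json()["items"]

for item in items:
    tid = item["track"]["id"]
    feats = requests.get(
        f"https://api.spotify.com/v1/audio-features/{tid}", headers=headers
    ).json()
    # e.g. map valence to hue, tempo to shape density, etc.
    print(feats["danceability"], feats["acousticness"], feats["tempo"])
```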
## How we built it 🛠️
* **Synesthesify was constructed with Javascript, with use of the Spotify API, and HTML/CSS.**
* The source code can be found on our Github Page, which we also use to host the web app.
* The artwork is painted on JavaScript's canvas, where it can then be copied or downloaded as a .jpg uploadable to Spotify playlists.
## Challenges we ran into 💥
* **Learning to use APIs on the fly and sending API requests without knowing the standard conventions (e.g. headers, body, params).**
* Relearning JavaScript and making countless silly mistakes by writing it like Python or Java (e.g. forgetting brackets around function declarations).
* Entering integration hell because we had very, very little prior experience with GitHub, especially functionality like pull requests.
* Hours upon hours of debugging and using IDEs in processes of trial-and-error.
## Accomplishments that we're proud of 🏆
* **We all learned and implemented multiple APIs into our code successfully, despite it being the first time any of us have ever seen APIs ever.**
+ Getting refresh tokens from Spotify server-side was a special challenge (no client interaction needed).
* **Two of us literally learned JS during the hackathon.**
* Learning to use Github for the first time, and figuring out how to not override other hacker's progress.
* Learning how to use VSCode and test code locally to save mountains of time and countless headaches.
* Coding more complex elements of HTML/CSS (exiting our front-end comfort zone).
## What we learned 💡
* This was the first Hackathon anyone on our team has ever done, so our rate of learning was incredible! 🚀
* It was also our first time ever using an API of any sort, so we learned incredible amounts of information about how to send requests to APIs and get authentication from them.
* When creating abstract paintings, we learned about colour theory and how shapes had meaning to people.
* Finally, we learned to set deadlines, take frequent breaks, and assign work to maximize productivity via system flow. 🌊
## What's next for Synesthesify 💭
* Accessibility features.
* Spotify authorization (not just client credentials).
* Using Google Images API to insert images based on frequent song lyrics.
* More complex abstract art generation using different digital art APIs.
* Attempting different types of art, such as from the classical era.
|

## Inspiration
Getting engagement is hard. People only read about 20% of the text on the average page.

Data on the percentage of article content viewed shows that most readers scroll to only about the 50 percent mark, or the 1,000th pixel, in Slate stories (a news platform).
This is alarming. Suppose a company writes an article about an event it has sponsored: negligible engagement defeats the goals of any company spending money on marketing.
Rather, what if that article were condensed into a one-minute format, which would produce a much better engagement rate than long pieces of text?
## What it does ⚡️
nu:here is an online platform for people of all ages to create and distribute customizable videos based on Wikipedia articles, generated via artificial intelligence 👀. With our platform, users can customize many different aspects of a video and share it with the world.
**The process for the user:**
1. User searches for a Wikipedia article on our platform
2. The user can start our video generation platform by specifying the length of the video that is wanted
3. The user can specify the formality of the video depending on what the target audience is (For the classroom, for sharing information on TikTok & Instagram, etc.)
4. The user can specify what voice model they want to use for the audio, using IBM’s text-to-speech API, the possibilities are endless
5. The user can then specify what kind of background music they want playing in the video
6. Once this step is done, we generate a short version of the Wikipedia article via co:here, create audio for the video via Watson AI, generate keywords for finding GIFs, videos, and images on Pexels and Tenor, and assemble everything into a video.
## How we built it ⚡️
We mashed up many cutting-edge services to help bring our project to life.
* Firebase Storage - Store Audio files From Watson in the Cloud ☁️
* Watson Text-to-Speech - Generate audio for the video 🎵
* Wikipedia API - Get all the information from Wikipedia ℹ️
* co:here Generate API - Generate summaries for Wikipedia articles. The generate API is also used to find the best visual elements for the video. 🤖
* GPT-3 - Help generate training data for co:here at scale 🤖
* Pexels API - Find images and videos to put into our generated video 🖼
* Remotion - React library to help us play and assist in generating a video 🎥
* Tailwind CSS - CSS Framework ⭐️
* React.js - Frontend Library ⚛️
* Node.js & Express.js - Frameworks 🔐
* Figma - Design 🎨
## Challenges we ran into ⚡️
### co:here
We were determined to use co:here in this project, but we ran into a few major obstacles.
First, every call to co:here's `generate` API could contain no more than 2048 tokens. Our goal was to summarize whole Wikipedia articles, which often contain far more than 2048 words. To get around this, we developed algorithms to summarize sections of articles, then summarize groups of summaries, and so on.
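In outline, the workaround is a recursive map-reduce over the article. A hedged sketch, with the `summarize` helper standing in for our engineered co:here prompt and the grouping sizes chosen arbitrarily:

```python
import cohere  # assumes the co:here Python SDK

co = cohere.Client("<api-key>")  # placeholder key

def summarize(text: str) -> str:
    # Stand-in for our engineered prompt; max_tokens respects the budget.
    resp = co.generate(prompt=f"Summarize:\n{text}\n\nSummary:", max_tokens=150)
    return resp.generations[0].text.strip()

def summarize_article(sections: list[str], max_chars: int = 6000) -> str:
    # Map: summarize each section. Reduce: summarize groups of summaries
    # until the whole thing fits in a single call.
    summaries = [summarize(s) for s in sections]
    while len(" ".join(summaries)) > max_chars:
        grouped = [" ".join(summaries[i:i + 4]) for i in range(0, len(summaries), 4)]
        summaries = [summarize(g) for g in grouped]
    return summarize(" ".join(summaries))
```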
It was difficult to preserve accuracy during this process, because the models were not perfect. We tried to engineer prompts using few-shot learning methods to teach our model what a good summary was. We even used GPT3 to generate training examples at scale! However, we were always limited by the 2048-token limit. Training data uses up capacity that we need for input.
A strange consequence of few-shot learning is that the model would pick up on the contents and cause our training data to bleed into our summaries. For example, one of our training summaries was a paragraph about Waterloo. When we asked co:here to summarize an article about geological faults, it wrongly claimed that there was one in Waterloo.
We also wanted to fit our videos into a set amount of viewing time. We tried to restrict the duration using a token limit, but co:here does not consider the limit when planning its summaries: it sometimes goes into too much detail and misses points from later in the text.
## Accomplishments that we're proud of ⚡️
* We are proud of using the co:here platform
* We are proud that we will be able to start sharing this platform after this hackathon is over
* We are proud that people will be able to use this
* We are proud of overcoming our obstacles
* We were able to accomplish all functionalities
* Most of all we had **fun**!
## What we learned ⚡️
We learned so much throughout the course of the hackathon. Natural Language Processing is not a silver bullet. In order to get our models to do what we want, we have to think like them. We didn’t have much experience using NLP but now we will continue to explore more applications for it.
## What's next for nu:here ⚡️
Adding features for users to customize and share videos is top priority for us on the engineering side. At the same time, we must address the elephant in the room: accuracy. In our quest to make information accessible and digestible, we must try as hard as we can to guard our users from mis-summarizations. Better models and user feedback can help us get there.
**View Video Demo Here (if the Youtube Video does not work): [Demo](https://cdn.discordapp.com/attachments/1019611034971013171/1021027599285231716/2022-09-18_07-52-35_Trim.mp4)**
|
losing
|
It can often be hard to find useful resources and materials when studying or reviewing for a class. It can be even harder to find someone to study with. StudyBees is a platform that helps students solve that problem by pairing up students and allowing them to collaborate and share materials and notes. We believe this powerful tool will let students gain a better understanding of their material and shine in their studies.
## Inspiration
We’ve only just come back to college, and we are already feeling the pinch. Having a platform to instantly connect with a study partner would be a dream come true.
## What it does
Connects students and allows them to interact through a chat service, a shared to-do list, and a collaborative canvas and text editor. We also plan to incorporate document sharing through upload and download, but were only able to do so on a limited basis within the time constraints.
## How we built it
We used Angular 6 to build the entire frontend and we used MongoDB Stitch as a backend service for user authentication and profile retrieval. We hosted the Angular site using S3 and CloudFront for distribution and used AWS EC2 for websockets which were served using Express.js, Node.js, and Socket.io.
## Challenges we ran into
We began building the project late, so time was always against us. We also ran into issues using Socket.io to connect users because it was separate from our Stitch backend service. We also had some issues implementing collaborative editing for the text editor and ultimately had to make some compromises in functionality.
## Accomplishments that we're proud of
We are incredibly proud of building a fully-fledged application with a beautiful and responsive design. While we have worked with many of these technologies before, we are very happy that our prior experience allowed us to overcome our late start.
## What we learned
We learned a lot about MongoDB Stitch, which is an incredibly powerful backend tool that we look forward to using in the future. We were also able to explore more in-depth uses of Angular and Socket.io.
## What's next for StudyBees
We hope to add some more functionality to allow users to save and share documents and we hope to improve the collaborative editing we have now.
|
### Inspiration
Have you ever scrolled through Google Street View and seen random people doing mundane things? Sometimes those people are not just regular humans doing regular things; to someone else, the preservation of that person means the world. Sometimes it's just people having fun who want to be immortalized on Google Street View. Whatever the story may be, it should be captured and celebrated!
### What it does
Introducing **Google Maps Memories**! A simple and beautiful gallery for all of those special moments caught on Google street view.
* Share your memories through an intuitive interface
* Discover memories on an interactive globe
* Experience memories in an immersive view
### How we built it
During this project, we were able to use a lot of technologies. Our main stack is T3 with Auth0; our interactive globe was built using the React Globe.gl library; the create and edit pages use the Google Maps API to embed an interactive map; and the memory page itself uses the Google Street View API to get a still of the specific location and point of view that the user saved.
### Challenges we ran into
We ran into many challenges during the project, like dealing with Google's complex Maps and Street View APIs, formatting the globe to match our intended aesthetic, and numerous other small bugs.
### Accomplishments that we're proud of
Despite those hardships, we are very proud to have built a polished and aesthetically appealing app.
### What we learned
We learned the importance of project scoping and time framing. We started off with a simple idea and there were many times when team members voiced their concern for how simple the project was. However, as we progressed, the project grew more and more complex. This experience really taught us the importance of scoping a project and to not go too big and leave room for mistakes.
### What's next for Google Maps Memories
There are so many possibilities with Google Maps Memories! In the future, we plan on adding more ways to discover memories, such as a random memory picker, a memory-of-the-day showcase, and themed memories. Stay tuned!
|
## Inspiration
Our inspiration for this project was our experience as students. We believe students need a more digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time-consuming. StudyHedge is here to lift the scheduling burden off your shoulders!
## What it does
StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you.
## How we built it
We built the project using React (Front-End), Flask (Back-End), Firebase (Database), and Google Cloud Run.
## Challenges we ran into
Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this; after exploring various branches of mathematics, we settled on a simple formula (Rank = weight / time²).
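As a worked example of that formula: an assignment worth 20% of the grade due in 2 days ranks 20 / 2² = 5, while one worth 10% due tomorrow ranks 10 / 1² = 10, so the nearer deadline wins. A hedged sketch (the field names are illustrative):

```python
# Toy ranking: a higher score means do it sooner.
assignments = [
    {"name": "essay", "weight": 20, "days_left": 2},
    {"name": "quiz",  "weight": 10, "days_left": 1},
]

def rank(a):
    return a["weight"] / a["days_left"] ** 2

for a in sorted(assignments, key=rank, reverse=True):
    print(a["name"], rank(a))  # quiz 10.0, then essay 5.0
```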
## Accomplishments that we're proud of
We are incredibly proud that we have a functional Back-End and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group.
## What we learned
Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front-end. She learned about input validation and designing features to meet functionality requirements. Eli coded the back-end using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API.
## What's next for StudyHedge
We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project and eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features, and we are considering implementing AI suggestions.
|
losing
|
Sit up straight!!
## Inspiration
We set out on our journey to help people fix their postures after spending our first year of university turning our backs into parabolas. After doing some research, we found various studies linking bad posture to orthopedic problems such as pain in the joints, neck, shoulders and lower back. Furthermore, an article published by Harvard Health states that poor posture leads to incontinence, constipation, heartburn, & slowed digestion. On the other hand, having good posture is very important to a person’s quality of life. Good posture centers a person's weight over their feet, which makes it easier for them to maintain comfortable form while performing everyday movements, such as walking, standing up from sitting, using stairs, and carrying heavy items.
## What it does
Professor Puddles is a duck-shaped desk buddy that is equipped with a camera, water gun, and speaker. To use Professor Puddles, the user simply has to run our provided code, which will launch our posture-detecting program. Using both the front camera of the laptop and the side camera angle provided by Professor Puddles, our program keeps track of the user’s posture from various angles. If either camera detects bad posture the user receives an initial warning in the form of a notification on their laptop. If the bad posture continues, the consequences intensify. After 15 minutes of bad posture, Professor Puddles will verbally ~~abuse~~ notify the user that their posture is poor. 15 minutes after that if you ignore his final warning, he loses it and spits at you. Surely when you are getting wet you will fix your posture, right?
## How we built it
The Python program uses OpenCV to track various points on the user's body, detects when these points exceed standard sitting proportions, and sends desktop notifications. The user can view the tracking live and create custom profiles for better accuracy with one click in our Tkinter user interface. In case you ignore the messages, Professor Puddles runs a server on a Raspberry Pi that the computer connects to wirelessly via a socket connection. When the computer sends a message over this connection saying that you have been a naughty boy or girl and haven't been taking your posture seriously, Puddles gets mad and uses his Python codebase to play sounds and drive the servo motor integrated with the parts of a dissected water gun.
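The laptop-to-duck link is a plain socket connection; here is a minimal two-ended sketch (the host name, port, and message format are assumptions, not our actual protocol):

```python
# client side -- runs on the laptop, alongside the posture tracker
import socket

def scold_the_user():
    with socket.create_connection(("raspberrypi.local", 5005)) as s:  # assumed host/port
        s.sendall(b"BAD_POSTURE")

# server side -- runs on the Raspberry Pi inside Professor Puddles
def serve_forever():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 5005))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        if conn.recv(64) == b"BAD_POSTURE":
            pass  # play the warning sound, then drive the servo to spray
        conn.close()
```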
## Challenges we ran into
It turns out that when you buy something for only two dollars, it functions as if you only bought it for two dollars. Our cheap water mechanism, extracted from a water gun, slowly began to demand more torque the more it was used, and the servo motors were too cheap to apply enough force. Even after creative designs to maximize the torque got something working, the mechanism grew too stiff over time and stopped working. By taking advantage of every law of gravity, friction, and kinematics that we could, we optimized it to spray just enough to embarrass you.
In the process of building Professor Puddles, one of our motors was fried, resulting in us having to remodel the project. Additionally, having to work with low quality materials to build our project was a struggle but despite our dollar store water gun and duct tape, we persevered.
## Accomplishments that we're proud of
Our software is able to detect poor posture very accurately. We would also like to mention that this was the first hackathon for 2 out of 4 of our teammates, and we are very proud of Professor Puddles for winning first place! Finally, we are very proud that we found a way to make computer science students shower. Even if it is just showering in a little bit of duck spit, it's better than nothing!
## What we learned
* Do not trust anything worth less than $2 (Especially from Dollarama)
* Maybe don't use Tkinter
* Don't make 76 commits and try to make the commit message a different spelling of "monkey" every time. There are only so many variants and with no sleep at 6 am, your unique spellings like "munkee" will have degraded to become just "m". Surely employers won't look at your GitHub right??? 😐
## What's next for Professor Puddles
In future variants of Professor Puddles, we are hoping to create a smaller version and work with higher-quality materials. We would also like to work on optimization so it can run in the background with no effect on performance. We plan to do this by only analyzing a frame every x seconds rather than running a continuous window. Also, we can scrap that silly Tkinter UI for a nice tray popup.
|
## Inspiration
Peripheral nerve compression syndromes such as carpal tunnel syndrome affect approximately 1 out of every 6 adults. They are commonly caused by repetitive stress and with the recent trend of working at home due to the pandemic it has become a mounting issue more individuals will need to address. There exist several different types of exercises to help prevent these syndromes, in fact studies show that 71.2% of patients who did not perform these exercises had to later undergo surgery due to their condition. It should also be noted that doing these exercises wrong could cause permanent injury to the hand as well.
## What it does
That is why we decided to create the “Helping Hand”, providing exercises for a user to perform and using a machine learning model to recognize each successful try. We implemented flex sensors and an IMU on a glove to track the movement and position of the user's hand. An interactive GUI was created in Python to prompt users to perform certain hand exercises. A real time classifier is then run once the user begins the gesture to identify whether they were able to successfully recreate it. Through the application, we can track the progression of the user's hand mobility and appropriately recommend exercises to target the areas where they are lacking most.
## How we built it
The flex sensors were mounted on the glove using custom-designed 3D printed holders. We used an Arduino Uno to collect all the information from the 5 flex sensors and the IMU. The Arduino Uno interfaced with our computer via a USB cable. We created a machine learning model with the use of TensorFlow and Python to classify hand gestures in real time. The user was able to interact with our program with a simple GUI made in Python.
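A hedged sketch of the real-time loop on the computer side, reading one window of sensor values over USB serial and classifying it (the port name, per-line sample format, window size, and model file are assumptions):

```python
import numpy as np
import serial                    # pyserial
import tensorflow as tf

ser = serial.Serial("/dev/ttyUSB0", 9600)          # assumed Arduino port
model = tf.keras.models.load_model("gesture.h5")   # assumed trained classifier

WINDOW = 50  # samples per gesture attempt
while True:
    frame = []
    for _ in range(WINDOW):
        # One line per sample: 5 flex readings + 6 IMU values, comma-separated.
        values = ser.readline().decode().strip().split(",")
        frame.append([float(v) for v in values])
    x = np.array(frame).reshape(1, WINDOW, -1)
    gesture = int(np.argmax(model.predict(x, verbose=0)))
    print("recognized gesture:", gesture)
```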
## Challenges we ran into
Hooking up 5 flex sensors and an IMU to one power supply initially caused some power issues causing the IMU not to function/give inaccurate readings. We were able to rectify the problem and add pull-up resistors as necessary. There were also various issues with the data collection such as gyroscopic drift in the IMU readings. Another challenge was the need to effectively collect large datasets for the model which prompted us to create clever Python scripts to facilitate this process.
## Accomplishments that we're proud of
Accomplishments we are proud of include designing and 3D printing custom holders for the flex sensors and integrating both the IMU and flex sensors to collect data simultaneously on the glove. It was also our first time collecting real datasets and using TensorFlow to train a machine learning classifier model.
## What we learned
We learned how to collect real-time data from sensors and create various scripts to process the data. We also learned how to set up a machine learning model including parsing the data, splitting data into training and testing sets, and validating the model.
## What's next for Helping Hand
There are many possible improvements for Helping Hand. We would like to make it wireless by using an Arduino Nano, which has Bluetooth capabilities as well as compatibility with TensorFlow Lite. This would mean all the classification could happen right on the device! Also, by uploading the data from the glove to a central database, it could easily be shared with your doctor.
We would also like to create an app so that the user can conveniently perform these exercises anywhere, anytime.
Lastly, we would like to implement an accuracy score of each gesture rather than a binary pass/fail (i.e. display a reading of how well you are able to bend your fingers/rotate your wrist when performing a particular gesture). This would allow us to more appropriately identify the weaknesses within the hand.
|
## Inspiration
FroggyFlow is inspired by the constant bother of bad posture and the back and shoulder pain that hampers one's productivity. We want to transform the way people sit in front of their laptops, desktops, and other devices while improving their productivity and health through a series of connected activities and interactions with our web application.
## What it does
Our technology serves users through a web app where they can sign in or create an account associated with their Gmail, powered by Auth0, for a smooth user experience. Our server records each user's log-in information and other activities on the website, including logged study-related activities during each session, and stores them in MongoDB Atlas. With security and efficient run-times, our website offers the features of one of the best productivity and wellness apps available. Users are asked to wear our hardware (a gyroscope and accelerometer on an Arduino) if they wish to analyze their posture trend from the ongoing productive session later. Users can enter a productivity session with an aesthetic froggy background and a non-intrusive countdown timer, and can either end a session voluntarily or complete it and move into a short rest break from their work.
• During the session, while the user wears the 3D-printed case with the hardware inside, they are notified IN REAL TIME through a notification system linked to their account whenever they need to sit up straight. How does it work? We created training data pre-labeled as good or bad posture, then fed the feature values (xyz angle and acceleration values from the gyroscope) with their respective labels into a RandomForestClassifier for training. The model scored 98% accuracy against the test data (a minimal training sketch follows this list).
• When a user ends their productive session, they can view their posture trend via a MATLAB graphing tool and browse the time-stamped posture progress logs of any past sessions associated with their account via MongoDB Atlas.
• When a user completes a productive session, they are automatically moved into a rest period, where a webcam prompts them to do 5 jumping jacks counted by the system. The webcam feature is built with AI and the OpenCV framework, analyzing the user's body motion in real time and checking the completeness of each jumping jack to prevent accidents from improper form.
• After the rest period, the user can return to a new study/productive session or view their profile (with analyzed posture trends from previous sessions).
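Here is a minimal sketch of the posture-classifier training described above (the CSV file and column names are illustrative):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed CSV of labeled gyro/accelerometer samples.
df = pd.read_csv("posture_samples.csv")
X = df[["ang_x", "ang_y", "ang_z", "acc_x", "acc_y", "acc_z"]]
y = df["label"]  # "good" or "bad"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # ~0.98 on our data

# At runtime: notify the user whenever a fresh reading classifies as "bad".
```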
Overall, the web app is fully-responsive with many interactive features allowing users to enjoy the smoothest experience while improving their posture and productivity!
## How we built it
• Hardware Integration: Utilized gyroscopes and accelerometers with Arduino to monitor and analyze user posture. In addition, we used 3D printing to create customized cases for the hardware.
• Data Analysis: Employed MATLAB to generate and visualize posture trend graphs from collected data.
• Data Management: Used MongoDB for logging session data securely.
• User Authentication: Implemented Auth0 for secure user account management.
• Real-Time Monitoring: Integrated AI and machine learning with OpenCV to track body motion and ensure accurate exercise counts. Frameworks used were OpenCV (mainly MediaPipe Pose), NumPy, and TensorFlow.
• Web Interface: Developed a real-time counter and feedback system displayed on the webcam feed for immediate exercise validation.
## Challenges we ran into
• Integrating the components (ML models) with the front end; e.g., the webcam feature in the rest session took a long time to configure for the web app due to differing API requirements.
• We had to make sure the gyro-data ML model was accurate to be able to detect bad posture in real time and send a notification to the user.
• Making the hardware work well with our software components. The 3D printing alone took many tries, including designing the case and prototyping several versions to fit a streamlined user experience.
• Endpoint Not Responding: The server endpoint might not respond due to incorrect URL paths or errors in routing.
• Our project has A LOT of components, and a big challenge was finishing them in the span of 36 hours. We had to downsize after ideation since we had so many ideas! Getting these components to work together was also a challenge, since we have multiple backends that may require, for example, different Python versions.
## Accomplishments that we're proud of
• Training a machine learning model on the gyroscope data to determine whether posture is good or bad, and notifying the user when their posture is bad
• Successfully implemented MediaPipe Pose, which uses a webcam to monitor and count the number of jumping jacks you do during your break (they have to be proper jumping jacks!). Staying active increases concentration!
• Logged all the posture data (how well your posture was during your study session) and graphed them in Matlab after each session
• Logged data for each user from previous sessions and display this in the profile page
• Implementing FastAPI to connect all the components to the front end, making sure they work seamlessly
## What we learned
• Implementing a ML model into a full-stack project, from training to testing against real time data.
• A lot about ML and OpenCV for data processing and decision making
• Learned how to log user data into MongoDB Atlas
• Manage users with Auth0, having profiles for each of them
• Graphing and data processing with Matlab
## What's next for FroggyFlow
• Make an online domain for the web application
• Play music during your study session
• More options for different activities and games for the resting time
• Ability to change backgrounds during study session
• Fix webcam with better interface design
|
winning
|
Ethereum Piggy lets you save money with a virtual piggy bank. The app connects to your Coinbase Ethereum wallet and allows you to put money into your virtual piggy bank.
Thanks to Ethereum's smart contracts on the blockchain, we secure the savings until a set time and can guarantee that they will then be released to the original owner.
|
## Inspiration
Cryptocurrency is the new hype of our age. We wanted to explore the possibilities of managing Cryptocurrency transactions at the tips of our fingers through social media outlets. At the same time, we wanted to tackle the problem of splitting bills when we eat out with friends, through sending Ethereum to settle payments.
## What it does
Our bot has 6 main commands that can be used after setting up your Facebook account and the public key of your EtherWallet via cryptpay.tech and installing the application on your local computer:
* /send - sends a set amount to designated user.
* /confirm - accepts payment on receiver's end.
* /split - splits bill to number of people in chat.
* /dist - distributes amount per person.
* /receipt - takes picture of receipt and splits bill based on user's prompts.
* /sell - sells amount to market.
We use these commands on the FB chat to facilitate real time transactions.
## How We built it
With security in mind and developing around the spirit of decentralization - a user's wallet/private key never leaves their computer. The architecture of the entire project, as such, was more difficult than your average chatbot.
There are 3 main components to this project:
##### Local Chatbot/Wallet
If we hosted a central chatbot that managed everyone's funds, that would have defeated the purpose of using cryptocurrency as our medium. As such, we developed a chatbot/wallet hybrid that gives users the full functionality of a server-side bot, right in their hands and under their control. The user inputs their wallet details locally, and by using offline transaction signing, users are not required to run a full Ethereum node but can still interact with the blockchain network using Messenger.
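As a hedged illustration of offline signing (our actual bot is a Node application; this web3.py sketch shows the same flow, with the RPC URL and addresses as placeholders):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))  # placeholder

def send_payment(private_key: str, to_addr: str, eth_amount: float) -> str:
    acct = w3.eth.account.from_key(private_key)  # key never leaves this machine
    tx = {
        "to": to_addr,
        "value": w3.to_wei(eth_amount, "ether"),
        "gas": 21000,
        "gasPrice": w3.eth.gas_price,
        "nonce": w3.eth.get_transaction_count(acct.address),
        "chainId": 1,
    }
    # Sign locally, then broadcast only the signed bytes.
    signed = w3.eth.account.sign_transaction(tx, private_key)
    return w3.eth.send_raw_transaction(signed.rawTransaction).hex()  # attr name varies by web3.py version
```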
##### CryptPay.tech
Let's say `Person A` wants to send a payment of $10 to `Person B` using CryptPay. `Person A` will have to send a transaction to `Person B`'s public key (which can be thought of as their house address). CryptPay.tech allows friends to find each other's public keys without even asking for them beyond the one-time setup. This means you don't have to ask for their email address or their long hexadecimal public key. We do it for you.
##### Receipt Scanning + Other Features
Any user can use the /receipt command to prompt the receipt bill splitting function. CryptPay will ask the user to take a photo of their recent receipt transaction and analyze the purchases. Using the Google Vision API and Tesseract OCR API, we are able to instantaneously identify the total amount of the purchase. The user can then use /split to equally distribute the bill to each member in the chat.
## Challenges We Ran Into
Originally, we contemplated creating a messenger bot for transactions with real money. However, this elicits substantial security issues, since it is not secure for third parties to hold people's private banking information. We spoke with representatives from Scotiabank about our concerns and asked for other possible issues to tackle. After discussion, we decided to use Cryptocurrency transactions because they bypass the Interac debit system and everything is fluid.
## Accomplishments that We're Proud of
* Learning how to use Facebook Messenger API
* Creating and packaging a full Node application for end users
* Learning to architect the project in an unconventional way
* Exploring REST
* Setting up fluid transactions with Ethereum
* Having a fully functional prototype within 24 hours
* Creating something that is easy to use and that everyone can use
## What's next for CryptPay
* Adding more crypto coins
* Adding the ability to cancel a payment you have sent
* Have a command for market research
|
## Inspiration
Blockchain has created new opportunities for financial empowerment and decentralized finance (DeFi), but it also introduces several new considerations. Despite its potential for equitability, malicious actors can currently take advantage of it to launder money and fund criminal activities. There has been a recent wave of effort to introduce regulations for crypto, but the ease of money laundering proves to be a serious challenge for regulatory bodies like the Canadian Revenue Agency. Recognizing these dangers, we aimed to tackle this issue through BlockXism!
## What it does
BlockXism is an attempt at placing more transparency in the blockchain ecosystem, through a simple verification system. It consists of (1) a self-authenticating service, (2) a ledger of verified users, and (3) rules for how verified and unverified users interact. Users can "verify" themselves by giving proof of identity to our self-authenticating service, which stores their encrypted identity on-chain. A ledger of verified users keeps track of which addresses have been verified, without giving away personal information. Finally, users will lose verification status if they make transactions with an unverified address, preventing suspicious funds from ever entering the verified economy. Importantly, verified users will remain anonymous as long as they are in good standing. Otherwise, such as if they transact with an unverified user, a regulatory body (like the CRA) will gain permission to view their identity (as determined by a smart contract).
Through this system, we create a verified market, where suspicious funds cannot enter the verified economy while flagging suspicious activity. With the addition of a legislation piece (e.g. requiring banks and stores to be verified and only transact with verified users), BlockXism creates a safer and more regulated crypto ecosystem, while maintaining benefits like blockchain’s decentralization, absence of a middleman, and anonymity.
## How we built it
BlockXism is built on a smart contract written in Solidity, which manages the ledger. For our self-authenticating service, we incorporated Circle wallets, which we plan to integrate into a self-sovereign identification system. We simulated the chain locally using Ganache and Metamask. On the application side, we used a combination of React, Tailwind, and ethers.js for the frontend and Express and MongoDB for our backend.
## Challenges we ran into
A challenge we faced was overcoming the constraints when connecting the different tools with one another, meaning we often ran into issues with our fetch requests. For instance, we realized you can only call MetaMask from the frontend, so we had to find an alternative for the backend. Additionally, there were multiple issues with versioning in our local test chain, leading to inconsistent behaviour and some very strange bugs.
## Accomplishments that we're proud of
Since most of our team had limited exposure to blockchain prior to this hackathon, we are proud to have quickly learned about the technologies used in a crypto ecosystem. We are also proud to have built a fully working full-stack web3 MVP with many of the features we originally planned to incorporate.
## What we learned
Firstly, from researching cryptocurrency transactions and fraud prevention on the blockchain, we learned about the advantages and challenges at the intersection of blockchain and finance. We also learned how to simulate how users interact with one another on the blockchain, such as through peer-to-peer verification and making secure transactions using Circle wallets. Furthermore, we learned how to write smart contracts and implement them in a web application.
## What's next for BlockXism
We plan to use IPFS instead of using MongoDB to better maintain decentralization. For our self-sovereign identity service, we want to incorporate an API to recognize valid proof of ID, and potentially move the logic into another smart contract. Finally, we plan on having a chain scraper to automatically recognize unverified transactions and edit the ledger accordingly.
|
partial
|
## Inspiration
Despite being a global priority in the eyes of the United Nations, food insecurity still affects hundreds of millions of people. Even in the developed country of Canada, over 5.8 million individuals (>14% of the national population) are living in food-insecure households. These individuals are unable to access adequate quantities of nutritious foods.
## What it does
Food4All works to limit the prevalence of food insecurity by minimizing waste from food corporations. The website addresses this by serving as a link between businesses with leftover food and individuals in need. Businesses with a surplus of food are able to donate food by displaying their offering on the Food4All website. By filling out the form, businesses will have the opportunity to input the nutritional values of the food, the quantity of the food, and the location for pickup.
From a consumer’s perspective, they will be able to see nearby donations on an interactive map. By separating foods by their needs (e.g., high-protein), consumers will be able to reserve the donated food they desire. Altogether, this works to cut down unnecessary food waste by providing it to people in need.
## How we built it
We created this project using a combination of languages. We used Python for the backend, specifically for setting up the login system with Flask-Login and for form submissions, where we took the input and stored it in a JSON object that drives the food map. Secondly, we used TypeScript (compiled to JavaScript for deployment) and JavaScript's Fetch API to interact with the Google Maps Platform; the two major APIs we used are the Places API and the Maps JavaScript API, which are responsible for creating the map, the markers with information, and an accessible form system. We used HTML/CSS and JavaScript alongside Bootstrap for the web design of the site. Finally, we used a QR Code API to generate QR code receipts for food pickups.
## Challenges we ran into
Some of the challenges we ran into involved using the Fetch API. Since none of us were familiar with asynchronous polling, specifically in JavaScript, we had to learn it to make a functioning food inventory. Additionally, learning the Google Maps Platform was a challenge due to the extensive documentation and our lack of prior experience. Finally, putting front-end components together with back-end components to create a cohesive website proved to be a major challenge for us.
## Accomplishments that we're proud of
Overall, we are extremely proud of the web application we created. The final website is functional and it was created to resolve a social issue we are all passionate about. Furthermore, the project we created solves a problem in a way that hasn’t been approached before. In addition to improving our teamwork skills, we are pleased to have learned new tools such as Google Maps Platform. Last but not least, we are thrilled to overcome the multiple challenges we faced throughout the process of creation.
## What we learned
In addition to learning more about food insecurity, we improved our HTML/CSS skills through developing the website. To add on, we increased our understanding of Javascript/TypeScript through the utilization of the APIs on Google Maps Platform (e.g., Maps JavaScript API and Places API). These APIs taught us valuable JavaScript skills like operating the Fetch API effectively. We also had to incorporate the Google Maps Autofill Form API and the Maps JavaScript API, which happened to be a difficult but engaging challenge for us.
## What's next for Food4All - End Food Insecurity
There are a variety of next steps for Food4All. First of all, we want to eliminate potential misuse of the reservation system. One of our key objectives is to prevent privileged individuals from taking donations away from people in need, so we plan to implement a method to verify the socioeconomic status of users. Proper implementation of this verification system would also make it possible to limit the maximum number of reservations an individual can make daily.
We also want to add a method to incentivize businesses to donate their excess food. This can be achieved by partnering with corporations and marketing their business on our webpage. By doing this, organizations who donate will be seen as charitable and good-natured by the public eye.
Lastly, we want to have a third option which would allow volunteers to act as a delivery person. This would permit them to drop off items at the consumer’s household. Volunteers, if applicable, would be able to receive volunteer hours based on delivery time.
|
## Inspiration
The first step of our development process was conducting user interviews with university students within our social circles. When asked about recently developed pain points, 40% of respondents stated that grocery shopping has become increasingly stressful and difficult during the ongoing COVID-19 pandemic. The respondents cited motivations including a loss of disposable time (due to an increased workload from online learning), tight spending budgets, and fear of exposure to COVID-19.
While developing our product strategy, we realized that a significant pain point in grocery shopping is price-checking between different stores. This process requires the user to visit each store (in person and/or online), check the inventory, and manually compare prices. Consolidated platforms that help with grocery list generation and payment do not exist in the market today - as such, we decided to explore this idea.
**What does G.e.o.r.g.e stand for? : Grocery Examiner Organizer Registrator Generator (for) Everyone**
## What it does
The high-level workflow can be broken down into three major components:
1: Python (flask) and Firebase backend
2: React frontend
3: Stripe API integration
Our backend flask server is responsible for web scraping and generating semantic, usable JSON code for each product item, which is passed through to our React frontend.
Our React frontend acts as the hub for tangible user-product interactions. Users are given the option to search for grocery products, add them to a grocery list, generate the cheapest possible list, compare prices between stores, and make a direct payment for their groceries through the Stripe API.
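A hedged sketch of the scrape-to-JSON step using Beautiful Soup (the store URL and CSS selectors are invented; every real store needs its own selectors, and dynamically generated sites needed Selenium instead):

```python
import requests
from bs4 import BeautifulSoup

def scrape_products(query: str) -> list[dict]:
    # Hypothetical store search page; selectors below are illustrative only.
    html = requests.get(f"https://example-grocer.com/search?q={query}").text
    soup = BeautifulSoup(html, "html.parser")
    products = []
    for card in soup.select(".product-card"):
        products.append({
            "name": card.select_one(".product-name").get_text(strip=True),
            "price": float(card.select_one(".price").get_text(strip=True).lstrip("$")),
            "store": "Example Grocer",
        })
    return products  # handed to the React frontend as JSON
```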
## How we built it
We started our product development process with brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment service application. We drew up designs as well as prototyped using Figma, then proceeded to implement the front end designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data.
## Challenges we ran into
Once we had finished defining our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs for grocery store price data, so we decided to do our own web scraping. This led to complications with slower server responses, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which allowed us to flesh out our end-to-end workflow.
## Accomplishments that we're proud of
Some of the websites we had to scrape had lots of information to comb through and we are proud of how we could pick up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that included even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to utilize real money with our app.
## What we learned
We picked up skills such as web scraping to automate the process of parsing through large data sets. Web scraping dynamically generated websites can also lead to slow server response times that are generally undesirable. It also became apparent to us that we should have set up virtual environments for flask applications so that team members do not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3am will make you want to pull out your hair, but at least we now know that it can be done :’)
## What's next for G.e.o.r.g.e.
Our next steps with G.e.o.r.g.e. would be to improve the overall user experience of the application by standardizing our UI components and UX workflows with Ecommerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as creating more seamless payment solutions.
|
## Inspiration
The both of us study in NYC and take subways almost everyday, and we notice the rampant food insecurity and poverty in an urban area. In 2017 40 million people struggled with hunger (source Feeding America) yet food waste levels remain at an all time high (“50% of all produce in the United States is thrown away” source The Guardian). We wanted to tackle this problem, because it affects a huge population, and we see these effects in and around the city.
## What it does
Our webapp uses machine learning to detect produce and labels of packaged foods. The webapp collects this data and stores it in a user's ingredients list. Recipes are automatically found using the Google Search API from the ingredients list. Our code parses through the list of ingredients and generates the recipe that would maximize the number of food items used (also factoring in spoilage). The user may also upload their receipt or grocery list to the webapp. With these features, the goal of our product is to reduce food waste by maximizing the ingredients a user has at home. With our trained datasets that detect varying levels of spoiled produce, a user is able to make more informed choices based on the webapp's recommendation.
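One simple way to realize that ranking is a greedy score over the pantry. This sketch is illustrative; the weighting is our assumption, not the exact production logic:

```python
# Rank candidate recipes by how many on-hand ingredients they use,
# weighting soon-to-spoil items more heavily (illustrative weights).
def rank_recipes(recipes, pantry):
    """recipes: list of {"name": str, "ingredients": set};
       pantry: ingredient -> days until spoiled."""
    def score(recipe):
        return sum(1.0 + 1.0 / max(pantry[i], 1)
                   for i in recipe["ingredients"] if i in pantry)
    return sorted(recipes, key=score, reverse=True)

pantry = {"spinach": 2, "eggs": 10, "milk": 3}
recipes = [{"name": "omelette", "ingredients": {"eggs", "milk", "spinach"}},
           {"name": "pasta", "ingredients": {"pasta", "tomato"}}]
print(rank_recipes(recipes, pantry)[0]["name"])  # -> omelette
```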
## How we built it
We first tried to detect images of different types of food using various platforms like OpenCV and AWS. After we had this detection working, we used Flask to display the data on a webapp. Once the information was stored on the webapp, we automatically generated recipes based on the list of ingredients. Then, we built the front end (HTML5, CSS3), incorporating UX/UI design into the implementation. We shifted our focus to the back end, and we decided to detect text from receipts, grocery lists, and labels (packaged foods), which we also displayed on our webapp. On the webapp we also included an FAQ page to educate our users on this epidemic, and we posted a case study on the product in terms of UX and UI design.
## Challenges we ran into
We first used OpenCV for image recognition, but then we learned about Amazon Web Services, specifically Amazon Rekognition, to identify text and objects to detect expiration dates, labels, produce, and grocery lists. We trained models in scikit-learn (Python) to detect levels of spoilage in produce. We encountered merge conflicts with GitHub, so we had to troubleshoot in the terminal in order to resolve them. We were new to using Flask, which we used to connect our Python files to display in a webpage. We also had to choose certain features over others that would best fit the needs of the users. This was also our first hackathon ever!
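For reference, line-level text detection with Rekognition through boto3 looks roughly like the sketch below (it assumes AWS credentials are configured in the environment; the file name is a placeholder):

```python
# Detect lines of text (e.g. on a receipt) with Amazon Rekognition.
import boto3

client = boto3.client("rekognition")

with open("receipt.jpg", "rb") as f:
    response = client.detect_text(Image={"Bytes": f.read()})

# Rekognition returns both LINE and WORD detections; keep full lines only.
lines = [d["DetectedText"] for d in response["TextDetections"]
         if d["Type"] == "LINE"]
print(lines)
```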
## Accomplishments that we're proud of
We feel proud to have learned new tools in different areas of technology (computer vision, machine learning, different languages) in a short period of time. We also made use of the mentor room early on, which was helpful. We learned different methods to implement similar ideas, and we were able to choose the most efficient one (example: AWS was more efficient for us than open-cv). We also used different functions in order to not repeat lines of code.
## What we learned
New technologies and different ways of implementing them. We both had no experience in ML and computer vision prior to this hackathon. We learned how to divide an engineering project into smaller tasks that we could complete. We managed our time well, so we could choose workshops to attend, but also focus on our project, and get rest.
## What's next for ZeroWaste
In a later version, ZeroWaste would store and analyze the user's history of food items, and recommend recipes (which max out the ingredients that are about to expire using computer vision) as well as other nutritional items similar to what the user consistently eats through ML. In order to tackle food insecurity at colleges and schools ZeroWaste would detect when fresh produce would expire, and predict when an item may expire based on climate/geographic region of community. We had hardware (raspberry PI), which we could have used with a software ML method, so in the future we would want to test the accuracy of our code with the hardware.
|
winning
|
## What it does
Our site allows a user to take a picture of their outfit and receive AI-generated feedback on their style, color palette, and cohesiveness. While we are not your typical education tool, we think that dressing well has a massive boost on your confidence, your presence, and your ability to talk about yourself! Our goal is to give everyone a chance to be their most styling self.
In addition to hyping up the best of your outfit, we recommend relevant pieces to incorporate into your style as you explore defining your visual voice. Discovering fashion should be exciting, not intimidating -- and with the help of LLM's, **Drip or Drown** does exactly that!
## How we built it
During the course of the weekend, we experimented with many different approaches but ultimately settled on the following architecture ([click here for full res](https://i.ibb.co/8rBwgqB/architecture2.png)):
Here are some of the salient elements:
1. Depth Perception
When an image is uploaded, we use computer vision to remove noise. We use the MiDaS depth perception model to separate the background from the foreground, creating a separate image of just the person and their fit! (A rough sketch follows this list.)
2. Visual Transformation Model (Q&A):
We used the ViLT QnA model to query the de-noised image about attributes of the outfit — extracting information about what they're wearing, recursively drilling into details. (A sketch of this also follows the list.)
3. In-Context Learning Fashion Critique Model
Finally, we use Large Language Models to generate in-depth feedback for your fit using our description. We also use a model to rate it on a scale of 1-10 and another to categorize it into one of many "auras", so that you can have a well-rounded understanding of your own style!
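To make steps 1 and 2 concrete, here are two rough sketches. First, foreground isolation with MiDaS loaded via torch.hub; the thresholding rule is an illustrative assumption, not necessarily what we shipped:

```python
# Estimate inverse depth with MiDaS and keep only the near (foreground) region.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("outfit.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()

depth = cv2.resize(depth, (img.shape[1], img.shape[0]))
mask = depth > depth.mean() + depth.std()  # hypothetical "near" threshold
foreground = img * mask[..., None]  # person and their fit, background zeroed
```

Second, the recursive querying pattern with the ViLT VQA model from Hugging Face Transformers; the questions shown are illustrative:

```python
# Ask the de-noised image a chain of questions, feeding answers forward.
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
image = Image.open("fit.jpg")

def ask(question):
    inputs = processor(image, question, return_tensors="pt")
    logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(-1).item()]

top = ask("What is the person wearing on top?")
detail = ask(f"What color is the {top}?")  # follow-up built from the first answer
```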
## Challenges we ran into
1. Figuring out deployment was particularly hard. Given the many moving parts, our project is hosted on multiple platforms that all interact with each other.
2. Using non-production research models like ViLT meant that our performance for the API calls was quite abysmal, and we had to get pretty clever about parallelization and early-stopping algorithms within our call structure.
3. We spent a lot of time adding juice to the project! Making it fun to use was a big goal of ours, which was often easier said than done :)
## Accomplishments that we're proud of
1. The UI! We think it looks pretty great – one of our teammates mocked it all up in Figma, and then we spent most of Sunday night making components together. We really wanted to make our project fun to use, and I think we accomplished that.
2. Using Multi-modal AI! It's one of the biggest unsolved problems in AI right now. How do you use multiple forms of input — image *and* text — together? I think we came up with a pretty clever solution that works quite well, and is pretty interpretable as well!
## What we learned
1. A lot about many technologies! Like Flask, PythonAnywhere, AWS, Heroku, Vercel, generating particles, CSS, Visual Transformer models, GPT fine-tuning, image progressing, classification algorithms, and more! This project spans many different domains, and it was pretty fun to pick up skills along the way.
2. The need for patience! For the longest time, we would have "blocker" bugs that would prevent us from deploying or developing further. We pushed ahead, and every time we handled those, the emergent abilities of the system surprised us as well.
3. And of course, that having fun once in a while is important. We did some of our best work when we were all singing to pop songs together at 11PM.
## What's next for Drip or Drown
1. Improving suggestion quality: We think we can push this even further! While our current image -> text algorithm is clever, we think we could make it even smarter by using a shared embedding space between images and text. This could capture attributes of the image our QnA model could not!
2. Follow-ups and conversation: We'd love for you to be able to ask the model questions about your fit! "Why does a white belt work better?" "What do you think about this leather jacket with that shirt?"
3. Suggestions! Finally, from all the feedback, we'd love for the model to be able to suggest fits as well. "You'd look great in a green croptop for this casual event!" We hope that Drip is the AI assistant to help you achieve your most fashionable self.
|
## Inspiration
In recent years, especially post-COVID, online shopping has become extremely common. One big issue when shopping online is that users are unable to try on clothes before ordering them. This results in people getting clothes that end up not fitting or not looking great, which is something nobody wants. In addition, many people face constant difficulties in their lives that limit their ability to shop in person. This gave us the inspiration to create Style AI as a way to let people try on clothes virtually before ordering them online.
## What it does
Style AI takes a photo of you and analyzes the clothes you are currently wearing and gives detailed clothing recommendations of specific brands, shirt types, and colors. Then, the user has the option to try on each of the recommendations virtually.
## How we built it
We used OpenCV to capture a photo of the user. The image is then passed to the Gemini API to generate a list of clothing recommendations. These recommendations are then passed into the Google Shopping API, which uses Google Search to find where the user can buy the recommended clothes. Then, we filter through the results to find clothes that have the correct image format.
The image of the shirt is superimposed onto a live OpenCV video stream of the user. To overlay the shirt on the user, we segmented the shirt image into 3 sections: left sleeve, center, and right sleeve. We also perform segmentation on the user using MediaPipe. Then, we warp each segment of the shirt onto the user's body in the video stream.
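A simplified sketch of the real-time segmentation-and-overlay loop is below; the blend here is naive alpha mixing, and the three-segment warping described above is omitted:

```python
# Segment the person with MediaPipe and blend an (already warped) shirt image
# onto the person region of each webcam frame.
import cv2
import mediapipe as mp
import numpy as np

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
shirt = cv2.imread("shirt.png")

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    mask = results.segmentation_mask > 0.5  # person vs. background
    overlay = cv2.resize(shirt, (frame.shape[1], frame.shape[0]))
    frame[mask] = (0.5 * frame[mask] + 0.5 * overlay[mask]).astype(np.uint8)
    cv2.imshow("Style AI", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```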
We made the website using Reflex.
## Challenges we ran into
The shirt overlay aspect was much more challenging than expected. At first, we planned to use a semantic segmentation model for the shirt of the user because then we could warp and transform the shape of the real shirt to the shirt mask on the user. The issue was that semantic segmentation was very slow so the shirt wasn't able to overlay on the user in real-time. We solved this by using a combination of various OpenCV functions so the shirt could be overlaid in real-time.
## Accomplishments that we're proud of
We are proud of every part of our project, since each required lots of research, and we are all proud of our individual contributions to the project. We are also proud that we were able to overcome many challenges and adapt to things that went wrong. Specifically, we were proud that we were able to use a completely new framework, Reflex, which allowed us to work natively in Python across both the frontend and the backend.
## What we learned
We learned how to use Reflex to create websites. We also learned how to use APIs. Also, we learned about more functionalities of MediaPipe and OpenCV when writing the shirt overlay code.
## What's next for Style AI
Expand Style AI for all types of clothing such as pants and shoes. Implementation of a "bulk order" functionality allowing users to order across online retailers. Add more personalized recommendations. Enable real-time voice assisted chat bot conversations to simulate talking to a fashion expert in-person.
|
## Inspiration
Our mission is rooted in the **fight against fake news, misinformation, and disinformation,** which are increasingly pervasive threats in today’s digital world. As the saying goes, "the pen is mightier than the sword," which underscores the power of words and information. We aim to ensure that no one falls victim to digital deception.
While technology has contributed to the spread of misinformation, we believe it can also be a powerful ally in promoting the truth. By leveraging AI for good, we aim to combat falsehoods and uphold the integrity of information.
*Fun fact: Moodeng is a pygmy hippopotamus born on July 10, 2024, living in Khao Kheow Open Zoo, Thailand. She became a viral internet sensation during a busy political season in the US. Amid the flood of true and half-true information, Moodeng, symbolizing purity and honesty, stood as a beacon of clarity. Like Moodeng, our tool is here to cut through the noise and keep things transparent. So, Vote for Moodeng!*
## What it does
Social media platforms are now major sources of rapidly shared information. Our Chrome extension, MD FactFarm, simplifies fact-checking through AI-driven content analysis and verification. Initially focused on YouTube, our tool offers **real-time fact-checking** by scanning video content to **identify and flag misinformation** while providing reliable sources for users to verify accuracy.
## How we built it
* At the core of our system is a Large Language Model (LLM) that we trained and optimized to accurately understand and interpret various forms of misinformation, powering our fact-checking capabilities.
* We integrated an AI agent using Fetch.ai and built services and APIs to enable seamless communication with the agent.
* Our front-end, built with HTML, CSS, and JavaScript, was designed and deployed as a Chrome extension.
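To illustrate the agent piece, here is a hedged sketch of what a fact-checking agent built with Fetch.ai's uAgents library can look like; the message models and handler logic are hypothetical stand-ins for our actual service:

```python
# A toy uAgents agent that receives a claim and replies with a verdict.
from uagents import Agent, Context, Model

class FactCheckRequest(Model):
    claim: str

class FactCheckResponse(Model):
    verdict: str
    confidence: float

agent = Agent(name="factfarm", seed="factfarm-demo-seed")

@agent.on_message(model=FactCheckRequest, replies=FactCheckResponse)
async def handle_claim(ctx: Context, sender: str, msg: FactCheckRequest):
    # In the real system, this is where the LLM-backed analysis runs.
    await ctx.send(sender, FactCheckResponse(verdict="unverified", confidence=0.5))

if __name__ == "__main__":
    agent.run()
```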
## Challenges we ran into
* One of the major challenges we encountered was ensuring that the AI could accurately differentiate between fact, opinion, and misleading content. Early on, the outputs were inconsistent, making it difficult to trust the results.
To achieve this, we had to rethink our approach to prompt engineering. We provided the AI with more detailed context and built a structured framework to clearly separate different types of content. Additionally, we implemented a formula for the AI to use to determine a confidence score for each output. These changes helped us generate more consistent and reliable results, enabling the AI to better recognize the subtle distinctions between fact, opinion, and misleading content.
* Another challenge was integrating multiple agent frameworks into a unified system that could operate seamlessly. Managing the intricacies of coordinating tasks and data flow between these diverse components contributed to a complex integration process.
## Accomplishments that we're proud of
* We successfully developed a Chrome extension that provides real-time fact-checking for YouTube, empowering users to make informed decisions.
* We crafted prompts that effectively leverage the LLM's ability to detect misinformation.
* We successfully integrated Fetch.ai, utilizing agents that lay the foundation for scalability.
## What we learned
We learned the importance of defining the problem clearly and deciding on a minimum viable product (MVP) within a limited timeframe. Additionally, we focused on framing our work to align with the AI agent framework, which has been crucial in improving our approach to misinformation detection.
## What's next for MD FactFarm
Moving forward, we plan to expand our platform to include other social networks, such as Twitter and Facebook, where misinformation spreads rapidly. We aim to gather a wider range of information sources to ensure more comprehensive fact-checking and cover more diverse content. Moreover, we are working on enhancing our AI's fact-checking mechanics, utilizing more advanced techniques to improve accuracy.
|
losing
|
## Inspiration
Oftentimes, roommates deal with a lot of conflicts due to differences in living habits and aren't comfortable sorting things out by confronting one another. This problem creates unwanted tension between individuals in the household and usually ends up leading to a poor living experience.
## What it does
Broomies is a mobile app that creates a fun environment to help roommates assign chores for one another, anonymously. Assigned chores can be seen by each roommate, with a separate tab to view your own assigned chores. Once a chore is created by a roommate, it randomly cycles weekly to ensure nobody gets repeated chores. Completing chores on time gives roommates points that get tallied up per month. There is also a way to track expenses with a built-in financing tool to record each transaction a roommate has made for their household. At the end of each month, the monthly sum gets split based on roommate performance, and the top roommate(s) get to pay the least.
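As an illustration of the month-end split, here is a simple performance-weighted scheme; the exact weighting Broomies uses may differ:

```python
# Split a monthly expense total so roommates with more chore points pay less.
def split_expenses(total, points):
    """points: roommate -> chore points earned this month."""
    inverse = {name: 1.0 / (1 + p) for name, p in points.items()}
    norm = sum(inverse.values())
    return {name: round(total * w / norm, 2) for name, w in inverse.items()}

print(split_expenses(300.0, {"Ana": 12, "Ben": 6, "Cal": 2}))
# Ana, the top roommate, pays the least.
```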
## How we built it
Our project utilizes a React Native frontend with a Go backend and CockroachDB as our database. All our front-end components were designed and implemented in-house. Currently, the app is hosted locally, but in the future, we plan to host our backend on Google Cloud and hopefully publish a polished version on the App Store. Finally, we used Git and GitHub for version control and collaboration.
## Challenges we ran into
1. This was our first time developing a mobile application and working with CockroachDB. A big challenge was adjusting and understanding the nuances of mobile development
2. Figuring out an equitable method for distributing points that rewarded doing chores but didn't promote sabotaging your roomies took much polling from fellow hackers
3. Our app covered a lot of features, and we would often run into bugs involving chore management and transaction splitting
## Accomplishments that we're proud of
1. We are really proud of managing to create a full-stack, functioning mobile app as not only first-time mobile developers, but CockroachDB users as well.
2. Entering the hackathon, our goal was to create something that would make any part of our life easier, and we believe Broomies does just that. We are proud to build an app that we hope to actually use in the future.
3. We are really proud of the overall design and theme of Broomies, and how effectively we were able to translate our designs into reality
## What we learned
1. The power of design, both in components and in data structures. Before we started, we took the time to plan out our data structures and relationships; this helped us flesh out a scope for our project and effectively divide work amongst the team.
2. Lots of experience working with new technologies: from iOS and React Native, to leveraging CockroachDB Serverless in quickly turning an idea into a prototype
3. How to effectively ideate: Going into the hackathon, not having a good idea was our biggest concern, but once we learned to let go of finding "The One" hackathon idea, and instead explored every possible avenue, we were able to flesh out ideas that were relevant to our life.
## What's next for Broomies
After fleshing out user management and authentication, we want to deploy the Broomies app on the App store, as well as host our backend on Google Cloud. Also, we want to add the ability to react/respond to task completions, so your roomies can evaluate your tasks.
|
## Inspiration
We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you.
Also, it would also enable greater interconnectivity between people at an event without needing to subscribe to anything.
## What it does
Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view it two ways.
**1)** On a map, with markers indicating the location of the post that can be tapped on for more detail.
**2)** As a live feed, with the details of all the posts that are in your current location.
The posts don't last long, however, and only posts within a certain radius are visible to you.
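The radius check behind that visibility rule can be done with the haversine great-circle distance; a sketch follows (the 1 km radius is illustrative):

```python
# Filter posts to those within a given radius of the viewer.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius ~6371 km

def visible_posts(posts, user_lat, user_lon, radius_km=1.0):
    return [p for p in posts
            if haversine_km(p["lat"], p["lon"], user_lat, user_lon) <= radius_km]
```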
## How we built it
Individual pages were built using HTML, CSS and JS, which would then interact with a server built using Node.js and Express.js. The database, which used CockroachDB, was hosted on Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to be runnable on the phone. Heroku allowed us to upload the Node.js portion to the cloud.
## Challenges we ran into
Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once.
|
## Inspiration
At first, I started by thinking about what would be useful for people I am close with, and I came up with Samulnori. Samulnori is a genre of traditional Korean music that almost every Korean knows. Many of my friends participate in PennDure, a Samulnori performing troupe at Penn, and every time they perform, many Korean elders around the Pennsylvania area travel long distances to listen to the music. But the biggest restrictions of Samulnori are that the instruments have an extremely piercing sound and that they are hard to carry. I started from there, thinking of a mobile application in which users can play Samulnori. However, looking through the applications in the App Store, I found that there exist a lot of mixing tools for Western instruments, but very few for Eastern instruments. If you could loop through all four instruments and record your playing, I thought it would not only be more enjoyable for Korean users but also become a nice introduction to Eastern music and Korean culture for non-Koreans.
## What it does
Users can create loops of recordings with the name, speed, and length they want. Each loop has audio files of four instruments - Small Gong, Drum, Two-Headed Drum, and Gong. Users can record their music by clicking the red button at the top right corner of the screen for each instrument. They can listen to the mixed file of all of the recordings in the loop after recording.
## How I built it
I used the STK library to read raw audio files into the application. After reading in the files, I used the TheAmazingAudioEngine library to put those sounds into a channel and produce sound. I then built individual instrument objects, each of which acts like an actual instrument. After making each instrument, I created a looper, also using TheAmazingAudioEngine. Lastly, I used the EZAudio library to draw the waveforms of your recording onto the screen.
## Challenges I ran into
Since Swift had been introduced only a year earlier, none of the libraries I used were in Swift. The hardest part of the project, which took most of my time, was using a combination of Objective-C, Swift, and even C++ at the appropriate times and making everything work as one application.
## Accomplishments that I'm proud of
I am proud that I could implement a simple yet intuitive design. I had a hard time deciding which colors to use for the instruments. I figured it out by using some of the colors called "ObangGanSack," eight colors that Koreans have traditionally used: white, black, blue, yellow, red, grey, green, and light pink. I am also proud that I did everything myself, from gathering all the library resources to implementing the application design.
## What I learned
I learned how to organize the process of developing a project by listing the things I should be implementing. That way, it was easier to track time and my progress. I also learned how to use different libraries in the appropriate places. It was the first time I worked with multiple libraries, and it was hard to decide which library I should use, since there are so many libraries with similar functions. After a few failed attempts, I finally found the ones that worked.
## What's next for Dangrang
I would implement a feature that allows users to download the files they recorded and share them with other people through the cloud. I also hope to make a tutorial inside the application where users who are not so familiar with Korean music can learn some Korean beats and basic information about Samulnori.
|
partial
|
## Inspiration
The inspiration for DigiSpotter came from our team members, who started going to the gym within the past year. We agreed that starting out in the gym is hard without a personal coach or a gym partner who is willing to train you. DigiSpotter aims to solve this issue by being your electronic gym partner that can keep track of your workout and check your form in real time to ensure you are training safely and optimally.
## What it does
DigiSpotter uses your phone's camera to create a skeletal model of yourself as you are performing an exercise and will compare it across various parameters to the optimal form. If you are doing something wrong DigiSpotter will let you know after each set. It can detect errors such as suboptimal range of motion, incorrect extension, and incorrect positioning of body parts. It also counts for you as you are working out, and will automatically start a rest timer after each set. All you have to do is leave your phone in front of you as if you are taking a video of yourself working out. The results of your workout are saved to your account in the app's database to track improvements in mobility and general gym progress.
## How we built it
We created this app using Swift for the backend and SwiftUI for the frontend. We use ARKit on iPhone to build our position-tracking model for various joints in the body, as running natively drastically increases the performance of the app and the accuracy of the tracking. Using our position-tracking model, we can calculate the relative angles of body parts and determine their deviation from an optimal angle.
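The angle calculation itself is straightforward vector math; here is a sketch, expressed in Python for brevity (the joint coordinates are made up, whereas the real positions would come from ARKit's body tracking):

```python
# Angle (degrees) at joint b formed by the segments b->a and b->c.
import numpy as np

def joint_angle(a, b, c):
    v1, v2 = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cos_theta = v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# e.g. hip-knee-ankle during a squat; compare against an optimal angle range.
print(joint_angle((0.0, 1.0, 0.0), (0.1, 0.5, 0.2), (0.0, 0.0, 0.0)))
```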
## Challenges we ran into
ARKit is designed more for creating virtual avatars from motion-capture data than for interpreting movement between relative body parts, so in order to create a skeleton that fits our needs, we had to make many changes compared to other tracking libraries.
## Accomplishments that we're proud of
We are able to calculate deviation from an optimal squat and relay the information to the user.
## What we learned
All of us were completely new to developing for iOS, so we had many challenges figuring out the conventions of Swift and how we could interact with our app.
## What's next for DigiSpotter
We would like to add as many exercises as we can to the App and we want to further expand how much historical data we can collect so our users can have a more detailed view of how they have improved over time.
|
## Inspiration
Kevin, one of our team members, is an enthusiastic basketball player, and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy actually happens away from the doctor's office - he needed to complete certain exercises with perfect form at home in order to consistently improve his strength and balance. Through his story, we realized that many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they need to do at-home exercises individually, without supervision. For these patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback that helps patients improve their rehab exercise form. At the same time, reports are generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans.
## What it does
Through a mobile app, patients can film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine-vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis is sent to the corresponding doctor's dashboard, where the doctor receives a more thorough medical analysis of how the patient's body is working together, along with a timeline of progress. The algorithm also provides suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise.
## How we built it
At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine-vision analysis to yield the time-series body data.
We used Google App Engine and Firebase to create the rest of the web application and APIs for the two types of clients we support: an iOS app and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, App Engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync layer.
Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase.
## Challenges we ran into
One of the major challenges we ran into was interfacing each technology with each other. Overall, the data pipeline involves many steps that, while each in itself is critical, also involve too many diverse platforms and technologies for the time we had to build it.
## What's next for phys.io
<https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
|
## Inspiration
Many people want to stay in shape, so they really want to work out to achieve their desired physique. However, most don't, due to the hassle of creating a workout plan: it can be time-consuming, and generic plans aren't tailored to a specific body type, resulting in poor outcomes. What if you could build a plan that focuses on ***you***? Specifically, your body type, your schedule, the workouts you want to do and your dietary restrictions? Meet Gain+ where we create the fastest way to make big gains!
## What it does
Gain+ creates a custom workout and meal plan based on what you want to look like in the future (i.e. 12 weeks). You will interact with a personal trainer created with AI to discuss your goal. First, you would load two pictures: one based on what you look like now and another based on what you hope to somewhat achieve after you finish your plan. Then, you'll give answers to any questions your coach has before generating a full workout and meal plan. The workout plan is based on the number of days you want to go to the gym, while the meal plan is for every day. You can also add workouts and meals before finalizing your plan as well.
## How we built it
For our website, we built the frontend in **React and Tailwind CSS**, while **Firebase** provides our backend and database to store chats and users. As for the model creating the workout plans, there's a custom model that was created from a [Kaggle Dataset](https://www.kaggle.com/datasets/trainingdatapro/human-segmentation-dataset) and trained on **Roboflow**; it classifies images based on gender, the three main body types (ectomorph, mesomorph and endomorph), and the various subtypes. The best classes from that model are then sent to our chatbot, which was trained and deployed with **Databricks Mosaic AI** and based on **LLaMA 3.1**.
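To give a feel for that handoff, here is a hypothetical sketch of turning the classifier output into an LLM prompt; the class names and prompt wording are illustrative, not our exact production prompt:

```python
# Build a trainer prompt from the body-type classes returned by the classifier.
def build_prompt(current, goal, days_per_week):
    return (
        f"You are a personal trainer. The client's body type is "
        f"{current['body_type']} ({current['subtype']}); their goal physique is "
        f"{goal['body_type']} ({goal['subtype']}) in 12 weeks. Create a "
        f"{days_per_week}-day/week workout plan and a daily meal plan."
    )

prompt = build_prompt({"body_type": "endomorph", "subtype": "soft"},
                      {"body_type": "mesomorph", "subtype": "athletic"}, 4)
# `prompt` is then sent to the LLaMA 3.1 endpoint served via Databricks.
```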
## Challenges we ran into
Some challenges we ran into were the integration of the frontend, backend, and AI/ML components. This was quite a large, ambitious project in which we used a lot of new technologies that we had little to no experience with. For example, a huge CORS issue plagued our project in the final hours of hacking, which we worked to solve with some help from the internet, as well as from our mentors, Paul and Sammy.
## Accomplishments that we're proud of
This was Kersh and Mike's first time doing something in Databricks and Ayan's first time using Firebase at a more professional scale. The fact that we actually implemented these technologies in a final project with little to no prior experience was a big accomplishment for all of us.
## What we learned
We learned a lot throughout this hackathon, such as working with external APIs for LLMs and Databricks, gaining hands-on experience with prompt engineering, and adjusting to the unexpected roadblocks we faced along the way.
## What's next for Gain+
Next steps would definitely be to improve the UI and UX and also implement some new features. Some of them can include a significant focus for people who have bodybuilding or powerlifting meets, which we'll implement through a separate toggle.
|
winning
|
## Inspiration
I was walking down the streets of Toronto and noticed how there always seemed to be cigarette butts outside of any building. It felt disgusting to me, especially since they polluted the city so much. After reading a few papers on studies of cigarette butt litter, I learned that cigarette butts are actually the #1 most littered object in the world and are toxic waste. Here are some quick facts:
* About **4.5 trillion** cigarette butts are littered on the ground each year
* 850,500 tons of cigarette butt litter is produced each year. By weight, this is about **six and a half CN Towers'** worth of litter, which is huge!
* In the city of Hamilton, cigarette butt litter can make up to **50%** of all the litter in some years.
* The city of San Francisco spends up to $6 million per year on cleaning up cigarette butt litter
Thus our team decided that we should develop a cost-effective robot to rid the streets of cigarette butt litter
## What it does
Our robot is a modern-day Wall-E. The main objectives of the robot are to:
1. Safely drive around the sidewalks in the city
2. Detect and locate cigarette butts on the ground
3. Collect and dispose of the cigarette butts
## How we built it
Our basic idea was to build a robot with a camera that could find cigarette butts on the ground and collect those cigarette butts with a roller-mechanism. Below are more in-depth explanations of each part of our robot.
### Software
We needed a method to easily detect cigarette butts on the ground, so we used computer vision. We made use of this open-source project: [Mask R-CNN for Object Detection and Segmentation](https://github.com/matterport/Mask_RCNN) and [pre-trained weights](https://www.immersivelimit.com/datasets/cigarette-butts). We used a Raspberry Pi and a Pi Camera to take pictures of cigarettes, process the image with TensorFlow, and then output the coordinates of the cigarette's location for the robot. The Raspberry Pi then sends these coordinates to an Arduino over UART.
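The Pi-side handoff can be as simple as the sketch below (it assumes a 9600-baud link on the Pi's UART; the coordinate message format is our illustrative choice):

```python
# Send a detected cigarette butt's location to the Arduino over UART.
import serial

ser = serial.Serial("/dev/serial0", 9600, timeout=1)

def send_target(x_cm, y_cm):
    ser.write(f"{x_cm},{y_cm}\n".encode())  # e.g. b"34,120\n", parsed by the Arduino

send_target(34, 120)
```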
### Hardware
The Arduino controls all the hardware on the robot, including the motors and roller-mechanism. The basic idea of the Arduino code is:
1. Drive a pre-determined path on the sidewalk
2. Wait for the Pi Camera to detect a cigarette
3. Stop the robot and wait for a set of coordinates from the Raspberry Pi to be delivered with UART
4. Travel to the coordinates and retrieve the cigarette butt
5. Repeat
We use sensors such as a gyro and accelerometer to detect the speed and orientation of our robot to know exactly where to travel. The robot uses an ultrasonic sensor to avoid obstacles and make sure that it does not bump into humans or walls.
### Mechanical
We used Solidworks to design the chassis, roller/sweeper-mechanism, and mounts for the camera of the robot. For the robot, we used VEX parts to assemble it. The mount was 3D-printed based on the Solidworks model.
## Challenges we ran into
1. Distance: Working remotely made designing, working together, and transporting supplies challenging. Each group member worked on independent sections and drop-offs were made.
2. Design Decisions: We constantly had to find the most realistic solution based on our budget and the time we had. This meant that we couldn't cover a lot of edge cases, e.g. what happens if the robot gets stolen, what happens if the robot is knocked over ...
3. Shipping Complications: Some desired parts would not have shipped until after the hackathon. Alternative choices were made, and we worked around shipping dates.
## Accomplishments that we're proud of
We are proud of being able to efficiently organize ourselves and create this robot, even though we worked remotely, We are also proud of being able to create something to contribute to our environment and to help keep our Earth clean.
## What we learned
We learned about machine learning and Mask-RCNN. We never dabbled with machine learning much before so it was awesome being able to play with computer-vision and detect cigarette-butts. We also learned a lot about Arduino and path-planning to get the robot to where we need it to go. On the mechanical side, we learned about different intake systems and 3D modeling.
## What's next for Cigbot
There is still a lot to do for Cigbot. Below are some following examples of parts that could be added:
* Detecting different types of trash: It would be nice to be able to gather not just cigarette butts, but any type of trash such as candy wrappers or plastic bottles, and to also sort them accordingly.
* Various Terrains: Though Cigbot is made for the sidewalk, it may encounter rough terrain, especially in Canada, so we figured it would be good to add some self-stabilizing mechanism at some point
* Anti-theft: Cigbot is currently small and can easily be picked up by anyone. This would be dangerous if we left the robot in the streets since it would easily be damaged or stolen (eg someone could easily rip off and steal our Raspberry Pi). We need to make it larger and more robust.
* Environmental Conditions: Currently, Cigbot is not robust enough to handle more extreme weather conditions such as heavy rain or cold. We need a better encasing to ensure Cigbot can withstand extreme weather.
## Sources
* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397372/>
* <https://www.cbc.ca/news/canada/hamilton/cigarette-butts-1.5098782>
* [https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:~:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years](https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:%7E:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years).
|
## Inspiration
Let's start by taking a look at some statistics on waste from Ontario and Canada. In Canada, only nine percent of plastics are recycled, while the rest is sent to landfills. More locally, in Ontario, over 3.6 million metric tonnes of plastic ended up as garbage due to tainted recycling bins. Tainted recycling bins occur when someone disposes of their waste into the wrong bin, causing the entire bin to be sent to the landfill. Mark Badger, executive vice-president of Canada Fibers, which runs 12 plants that sort about 60 percent of the curbside recycling collected in Ontario has said that one in three pounds of what people put into blue bins should not be there. This is a major problem, as it is causing our greenhouse gas emissions to grow exponentially. However, if we can reverse this, not only will emissions lower, but according to Deloitte, around 42,000 new jobs will be created. Now let's turn our focus locally. The City of Kingston is seeking input on the implementation of new waste strategies to reach its goal of diverting 65 percent of household waste from landfill by 2025. This project is now in its public engagement phase. That’s where we come in.
## What it does
Cycle AI is an app that uses machine learning to classify certain articles of trash/recyclables to incentivize awareness of what a user throws away. You simply pull out your phone, snap a shot of whatever it is that you want to dispose of and Cycle AI will inform you where to throw it out as well as what it is that you are throwing out. On top of that, there are achievements for doing things such as using the app to sort your recycling every day for a certain amount of days. You keep track of your achievements and daily usage through a personal account.
## How We built it
In a team of four, we separated into three groups. For the most part, two of us focused on the front end with Kivy, one on UI design, and one on the backend with TensorFlow. From these groups, we divided into subsections that held certain responsibilities, like gathering data to train the neural network. This was done using photos of waste picked out of relatively unsorted waste bins around Goodwin Hall at Queen's University. 200 photos were taken for each subcategory, amounting to quite a bit of data by the end of it. The data was used to train the neural network backend. The front end was all programmed in Python using Kivy. After the frontend and backend were completed, a connection was created between them to seamlessly feed data from end to end. This allows a user of the application to take a photo of whatever it is they want sorted, have the photo fed to the neural network, and then see a message displayed on the front end. The user can also create an account with a username and password, which they can use to store their number of scans as well as achievements.
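For context, below is a condensed sketch of this kind of image classifier built with transfer learning; the directory layout, base model, and hyperparameters are illustrative assumptions, not our exact training script:

```python
# Train a small waste classifier on top of a frozen pretrained backbone.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=(224, 224, 3))
base.trainable = False  # keep pretrained features, train only the head
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. 4 waste categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=(224, 224), batch_size=32)  # one folder per category
model.fit(train, epochs=5)
```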
## Challenges We ran into
The two hardest challenges we had to overcome as a group were the need to build an adequate dataset and learning the Kivy framework. In our first attempt at gathering a dataset, the images we got online turned out to be too noisy when grouped together. This caused the neural network to become overfit, relying too heavily on spurious patterns. We decided to fix this by gathering our own data. I went around Goodwin Hall and dug into the bins to gather "data". After washing my hands thoroughly, I took ~175 photos of each category to train the neural network with real data. This worked well, overcoming that challenge. The second challenge I, as well as my team, ran into was our limited familiarity with Kivy. For the most part, we had all just begun learning Kivy the day of QHacks. This posed a time-consuming problem, but we simply pushed through it to get the hang of it.
## 24 Hour Time Lapse
**Below is a 24-hour time-lapse of my team and I working. The naps on the tables weren't the most comfortable.**
<https://www.youtube.com/watch?v=oyCeM9XfFmY&t=49s>
|
## Inspiration
How many times have you been walking around the city and seen trash on the ground, sometimes just centimetres away from a trash can? It can be very frustrating to see people who either have no regard for littering, or just have horrible aim. This is what inspired us to create TrashTalk: trash talk for your trash shots.
## What it does
When a piece of garbage is dropped on the ground within the camera’s field of vision, a speaker loudly hurls insults until the object is picked up. Because what could motivate people to pick up after themselves more than public shaming? Perhaps the promise of a compliment: once the litter is picked up, the trash can will deliver praise, designed to lift the pedestrian’s heart.
The ultrasonic sensor attached to the rim of the can sends a ping to the server when the trash can becomes full, reducing litter by preventing overfilling; studies have shown that programmed encouragement, as opposed to regular maintenance alone, can reduce littering by as much as 25%. On the website, one can view the current 'full' status of the trash can, a bar graph of how much trash is currently inside and outside the can, and how many pieces of trash have been scanned in total. This quantifies TrashTalk's ability to drastically reduce littering in public areas, with some nice risk and reward involved for the participant.
## How we built it
We built this project using Next.js, Python, MongoDB, and the Express library, integrated together using HTTP requests to send data between the Arduino, the computer, and the end user.
Our initial idea was made quite early on, but as we ran into challenges, the details of the project changed over time in order to reflect what we could realistically accomplish in one hackathon.
We split up our work so we could cover more ground: Abeer would cover trash detection using AI models that could be run on a Raspberry Pi, Kersh would handle the MongoDB interaction, Vansh would help create the Arduino Logic, and Matias would structure the project together.
## Challenges we ran into
We ran into *quite* a few challenges making TrashTalk, and a lot of them had to do with the APIs that we were using for OpenCV. The first major issue was that we were not able to get the Raspberry Pi running, so we migrated all the code onto one of our laptops.
Furthermore, none of the pretrained computer vision models we tried to use to recognize trash would work. We realized with the help of one of the mentors that we could simply use an object detection algorithm, and it was smooth sailing from there.
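A rough sketch of that detection loop using OpenCV background subtraction is below; the area threshold and the "new blob means dropped litter" rule are our illustrative assumptions:

```python
# Flag sizeable new objects appearing in the camera's view.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 500 for c in contours):
        print("Litter detected -- cue the trash talk")
```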
## Accomplishments that we're proud of
* Getting a final working product together
* Being able to demo to people at the hackathon
* Having an interactive project
## What we learned
We learned so many things during this hackathon due to the varying experience levels in our team. Some members learned how to integrate GitHub with VSCode, while others learned how to use Next.js (SHOUTOUT TO FREDERIC) and motion detection with OpenCV.
## What's next for TrashTalk
The next steps for TrashTalk would be to have more advanced analytics being run on each trash can. If we aim to reduce litter through the smart placement of trashcans along with auditory reminders, having a more accurate kit of sensors, such as GPS, weight sensor, etc. would allow us to have a much more accurate picture of the trash can's usage. The notification of a trash can being full could also be used to alert city workers to optimize their route and empty more popular trash cans first, increasing efficiency.
|
winning
|
## Inspiration
A few weeks before HT6, Randy received a poorly shipped package with too much plastic filler. Coincidentally, Ryan received a package with almost 90% empty space and was packed even worse. This prompted our team to want to tackle a major problem: reducing packaging waste (and as a result, logistics efficiency).
## What it does
Our application minimizes empty package volume when packing multiple items, using computer vision and heuristics for an NP-hard packing problem.
Steps:
1. User inputs all boxes which can be used for packing
2. User places items in the scanner box, which uses computer vision to measure the dimensions of all products to be packed
3. Our program sends this data into the algorithm to come up with a solution to optimally pack the items given the boxes available
4. User optimally packs items by following the 3D visual solution
## How we built it
* Computer vision:
Used OpenCV and supporting libraries to create a duo-image pipeline that detects items through contours and measures them using a reference object for pixel-to-centimetre conversion.
* Algorithm:
We used an algorithm based on an existing heuristic outlined in <https://github.com/enzoruiz/3dbinpacking> (a usage sketch follows this list)
* Visualizer:
Our 3D canvas was built using the react-three-fiber library. We created blocks to represent the items that need to be shipped and laid them out according to the configuration generated by our algorithm.
* Fullstack:
We used a React frontend and a Flask backend; to store images, we used Amazon S3. For styling, Chakra UI is used.
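As referenced in the algorithm item above, a minimal sketch of the packing step using the py3dbp package from the linked repository looks like this (the dimensions are placeholders; in our pipeline the scanner supplies them):

```python
# Pack scanned items into a candidate box and read back placements.
from py3dbp import Packer, Bin, Item

packer = Packer()
packer.add_bin(Bin("medium-box", 30, 20, 15, 10))   # w, h, d, max weight
packer.add_item(Item("mug", 10, 10, 12, 1))         # dimensions from the scanner
packer.add_item(Item("book", 20, 13, 3, 1))
packer.pack()

for b in packer.bins:
    for item in b.items:
        print(item.string())  # item.position / rotation drive the 3D visualizer
```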
## Challenges we ran into
* OpenCV detection
Contour detection proved to be more difficult than expected since we had to continually make adjustments to hyperparameters and the environment.
* Connecting various components
For most of our team, this was the first time developing an application with a Flask backend, so routing all the API calls and handling data transfer from Python to JavaScript and vice versa was a challenge.
## Accomplishments that we're proud of
* Creating a functional fullstack application
* Visualization of our 3d solution
* Accurate object detection through OpenCV
## What we learned
* Computer Vision (OpenCV)
* Algorithm
* AWS and Flask
## What's next for Package Optimizer
Implement hardware: add a conveyor belt so that we can automatically scan objects.
Improve our algorithm to take into account parameters such as weight, material, etc.
|
## Inspiration
Canadians produce more garbage per capita than any other country on earth, with the United States ranking third in the world. In fact, Canadians generate approximately 31 million tonnes of garbage a year. According to the Environmental Protection Agency, 75% of this waste is recyclable. Yet, only 30% of it is recycled. In order to increase this recycling rate and reduce our environmental impact, we were inspired to propose a solution through automating waste sorting.
## What it does
Our vision takes control away from the user, and lets the machine do the thinking when it comes to waste disposal!
By showing our app a type of waste through the webcam, we detect and classify the category of waste into either recyclable, compost, or landfill. From there, the appropriate compartment is opened to ensure that the right waste gets to the right place!
## How we built it
Using TensorFlow and object detection, a Python program analyzes the webcam image input and classifies the objects shown. The TensorFlow data is then collected and pushed to our MongoDB Atlas database via Google Cloud. For this project, we used a single-shot detector (SSD) model to maintain a balance between accuracy and speed. For the hardware, an Arduino 101 and a stepper motor were responsible for manipulating the position of the lid and opening the appropriate compartment.
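The push to MongoDB Atlas can be sketched with pymongo as below; the connection string and document shape are placeholders:

```python
# Log each classification event to MongoDB Atlas.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.mongodb.net")
events = client["trashcode"]["detections"]

def log_detection(label, category, confidence):
    events.insert_one({
        "label": label,            # e.g. "banana peel"
        "category": category,      # recyclable / compost / landfill
        "confidence": confidence,
        "ts": datetime.now(timezone.utc),
    })

log_detection("plastic bottle", "recyclable", 0.91)
```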
## Challenges we ran into
We had many issues with training our ML models on Google Cloud, due to the meager resources provided. Another issue we encountered was finding the right datasets, due to the novelty of our product. Because of these setbacks, we resorted to modifying a TensorFlow-provided model.
## Accomplishments that we're proud of
We managed to work through difficulties and learned a lot during the process! We learned to connect TensorFlow, Arduino, MongoDB, and Express.js to create a synergistic project.
## What's next for Trash Code
In the future, we aim to create a mobile app for improved accessibility and to create a fully customized trained ML model. We also hope to design a fully functional full-sized prototype with the Arduino.
|
## Inspiration
Americans send about 425 beverage containers per capita per year to landfill, litter, etc. Discarded bottles are usually replaced with cans and bottles made from virgin materials, which are more energy-intensive than recycled materials. This causes emissions of a host of toxics to the air and water and increases greenhouse gas emissions.
The US recycling rate is about 33%, while states that have container deposit laws average a 70% beverage recycling rate. This is a significant difference in the amount of harm we do to the planet.
While some states already have programs for exchanging cans for cash, EarthCents gives other states an incentive to make such a program available. Eventually, when this software becomes accurate enough, far less labor will be needed to make this happen.
## What it does
The webapp provides a GUI for the user to capture an image of their item in real time. The EarthCents image recognizer identifies the user's bottles and cans, and our change dispenser dispenses physical change. The webapp then prints a success or failure message to the user.
## How we built it
Convolutional Neural Networks were used to scan the image to recognize cans and bottles.
The frontend and Flask present a UI and process user data.
The Arduino is connected to the Flask backend and responds with a pair of angle-controlled servos to dispense coins.
Change dispenser: The change dispenser is built from a cardboard box with multiple structural layers to keep the Servos in place. The Arduino board is attached to the back and is connected to the Servos by a hole in the cardboard box.
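Putting the pieces together, the dispense flow can be sketched as a Flask route that runs the classifier and, on a match, signals the Arduino over serial; the function name, serial port, and message token are illustrative assumptions:

```python
# Classify an uploaded photo and trigger the servo-driven coin dispenser.
import serial
from flask import Flask, request

app = Flask(__name__)
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def is_can_or_bottle(image_bytes):
    return True  # stand-in for the CNN; the real model returns a prediction

@app.route("/deposit", methods=["POST"])
def deposit():
    image_bytes = request.files["photo"].read()
    if is_can_or_bottle(image_bytes):
        arduino.write(b"DISPENSE\n")  # Arduino rotates the servos on this token
        return {"status": "success", "payout_cents": 5}
    return {"status": "rejected"}, 422
```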
## Challenges we ran into
Software: Our biggest challenge was connecting the image file from the HTML page to the Flask backend for processing through a TensorFlow model. Flask was also a challenge, since complex use of it was new to us.
Hardware: Building the cardboard box for the coin dispenser was quite difficult. We also had to adapt the Servos with the Arduino so that the coins can be successfully spit out.
## Accomplishments that we're proud of
With very little tools, we could build with hardware a container for coins, a web app, and artificial intelligence all within 36 hours. This project is also very well rounded (hardware, software, design, web development) and let us learn a lot about connecting everything together.
## What we learned
We learned about Arduino/hardware hacking. We learned about the pros and cons of Flask versus something like Node.js. In general, a lot of light was shed on the connectivity of all the elements in this project. We each had skills here and there, but this project brought it all together. We also learned how to work together better and manage our time effectively through the weekend to achieve as much as possible without being overly ambitious.
## What's next for EarthCents
EarthCents could deposit cryptocurrency or Venmo payments and hold more coins. If this were to be used in production, we would want to connect it to a weight sensor to ensure that the user actually exchanged their can or bottle, and we would make the recognition more precise.
|
partial
|
## Inspiration
In large lectures, students often have difficulty making friends and forming study groups due to the social anxieties attached to reaching out for help. Collaboration reinforces and heightens learning, so we sought to encourage students to work together and learn from each other.
## What it does
StudyDate is a personalized learning platform that assesses a user's current knowledge of a certain subject and personalizes the lessons to cover their weaknesses. StudyDate also utilizes Facebook's Graph API to connect users with Facebook friends whose knowledge complements their own, to promote mentorship and enhanced learning.
Moreover, StudyDate recommends and connects individuals together based on academic interests and past experience. Users can either study courses of interest online, share notes, chat with others online, or opt to meet in-person with others nearby.
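One simple way to operationalize "complementary knowledge" is sketched below; the 0-1 mastery scores are hypothetical outputs of the initial assessment, not StudyDate's exact matching logic:

```python
# Score a potential study pair: higher when one is strong where the other is weak.
def complement_score(mine, theirs):
    """mine/theirs: topic -> mastery in [0, 1]."""
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0
    return sum(abs(mine[t] - theirs[t]) for t in shared) / len(shared)

me = {"recursion": 0.3, "graphs": 0.9}
friend = {"recursion": 0.9, "graphs": 0.2}
print(complement_score(me, friend))  # high score -> good mentorship pairing
```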
## How we built it
We built our front end in React.js and used Node.js for RESTful requests to the database. Then, we integrated our web application with Facebook's API for authentication and the Graph API.
## Challenges we ran into
We ran into challenges in persisting the state of Facebook authentication, and in utilizing Facebook's Graph API to extract and recommend Facebook friends by matching saved user data to discover friends with complementary knowledge. We also ran into challenges setting up the back-end infrastructure on Google Cloud.
## Accomplishments that we're proud of
We are proud of having built a functional, dynamic website that incorporates various aspects of profile and course information.
## What we learned
We learned a lot about implementing various functionalities of React.js such as page navigation and chat messages.
Completing this project also taught us about certain limitations, especially those dealing with using graphics. We also learned how to implement a login flow with Facebook API to store/pull user information from a database.
## What's next for StudyDate
We'd like to build a graph representation of every user's knowledge base within a certain course subject and use a machine learning algorithm to better personalize lessons, as well as to better recommend Facebook friends or new friends, helping users find friends/mentors who are experienced in the same courses. We also see StudyDate as a mobile application in the future, with a dating-app-like interface that allows users to select other students they are interested in working with.
|
# Course Connection
## Inspiration
College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However the vibrant campus life has recently become endangered as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students.
From a commercial perspective, our app provides businesses the ability to utilize CheckBook in order to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a few key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real-time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. To provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization.
### Backend Infrastructure
We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represented a student's course identity. We wanted to take advantage of web3 storage, as this would allow students to permanently store their course identity and access it easily. We also made use of Firebase to store the dynamic nodes and connections between courses and classes.
We distributed our workload across several servers.
We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, alongside a development-level Python server. We also ran a Node.js server as a proxy exposing a REST API endpoint, and Vercel hosted our front end.
### Graph Construction
Treating the Firebase database as the source of truth, we query it to get all user data, namely usernames and which classes each user took in which quarters. Taking this data, we constructed a graph in Python using networkX, in which each person and course is a node with a type label “user” or “course” respectively. In this graph, we then added edges between every person and every course they took, with the edge weight corresponding to how recently they took it.
Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.
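A minimal sketch of this build-and-cache flow, assuming user records with a `username` field and `(course, quarters_ago)` pairs (the field names and decay function are illustrative):

```python
import json
import networkx as nx
from networkx.readwrite import json_graph

def recency_weight(quarters_ago: int) -> float:
    # Illustrative decay: recently taken classes get heavier edges.
    return 1.0 / (1 + quarters_ago)

def build_graph(users):
    G = nx.Graph()
    for user in users:
        G.add_node(user["username"], type="user")
        for course, quarters_ago in user["courses"]:
            G.add_node(course, type="course")
            G.add_edge(user["username"], course, weight=recency_weight(quarters_ago))
    return G

# Serialize once, store the JSON string under a single key, and reload it on
# demand instead of rebuilding the whole graph from raw records.
def to_cache(G) -> str:
    return json.dumps(json_graph.node_link_data(G))

def from_cache(blob: str) -> nx.Graph:
    return json_graph.node_link_graph(json.loads(blob))
```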
We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken.
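The combined score can be sketched roughly as below, where `emb` holds the trained GAT node embeddings and `courses`/`recency` are hypothetical per-user course sets and per-course recency weights:

```python
import numpy as np

def combined_similarity(u, v, emb, courses, recency):
    # Cosine similarity between the two users' GAT embeddings.
    a, b = emb[u], emb[v]
    cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Recency-weighted overlap: shared classes over the union of classes taken.
    shared = courses[u] & courses[v]
    union = courses[u] | courses[v]
    heuristic = sum(recency[c] for c in shared) / sum(recency[c] for c in union)
    return cos + heuristic
```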
With this rich graph representation, when a user queries, we return the induced subgraph containing the user, their neighbors, and the top k people most similar to them: people they likely have a lot in common with, and whom they may want to meet!
## Challenges we ran into
We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly for development as we had to manage all the necessary servers.
In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase and cached graph.
## Accomplishments that we're proud of
We’re very proud of the graph component both in its data structure and in its visual representation.
## What we learned
It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see its surprisingly low latency. None of us had worked with Next.js before, but we were able to ramp up quickly thanks to our React experience, and we were very happy with how easily it integrated with Vercel.
## What's next for Course Connections
There are several different storyboards we would be interested in implementing for Course Connections. One would be course recommendations: we discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some of this functionality but ran out of time for a full implementation.
|
## Inspiration
Being a university student during the pandemic is very difficult. Not being able to connect with peers, run study sessions with friends, and experience university life can be challenging and demotivating. With no existing platform that allows students to meet people in their classes and be automatically put into group chats, we were inspired to create our own.
## What it does
Our app allows students to easily set up a personalized, school-specific profile to connect with fellow classmates, be automatically placed into class group chats via schedule upload, and browse clubs and events specific to their school. This app is a great way for students to connect with others and stay on top of activities happening in their school community.
## How we built it
We built this app using an open-source mobile application framework called React Native and a real-time, cloud-hosted database called Firebase. We outlined the app's GUI using flow diagrams and implemented an application design that students could use on mobile. To target a wide range of users, we made sure to implement an app that works on both Android and iOS.
## Challenges we ran into
Being new to this form of mobile development, we faced many challenges creating this app. The first challenge was using GitHub: although familiar with the platform, we were unsure how to use git commands to work on the project simultaneously. However, we were quick to learn the commands required to collaborate and deliver the app on GitHub. Another challenge we faced was nested navigation within the software. Since our project relied heavily on a real-time database, we also encountered difficulties integrating the database framework into our implementation.
## Accomplishments that we're proud of
An accomplishment we are proud of is learning a plethora of different frameworks and how to implement them. We are also proud of being able to learn, design, and code a project that can potentially help current and future university students across Ontario enhance their university lifestyles.
## What we learned
We learned many things implementing this project. We learned about version control and collaborative coding through GitHub commands. Using Firebase, we learned how to handle changing data and multiple authentications. We were also able to learn JavaScript fundamentals to build a GUI via React Native. Overall, we learned how to create an Android and iOS application from scratch.
## What's next for USL- University Student Life!
We hope to further our expertise with the various platforms used creating this project and be able to create a fully functioning version. We hope to be able to help students across the province through this application.
|
winning
|
## Inspiration
There is a problem with social media networks, they aren't run by users. Companies have an invisible hand that determines which content makes the front page. There is also no tangible incentive to make positive comments and posts with value. Content creators have to comply with strict and sometimes arbitrary guidelines in order to monetize their content. Having users be able to earn tokens through content creation, commenting, and curating provides incentives for the community to put forth good and meaningful posts.
## What it does
We built a blog that a user can use to submit posts to the blockchain. It takes a private key from the user to be able to update the blockchain.
## How we built it
This app is on the HIVE blockchain using Hivenet and Django.
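A rough sketch of that posting step with the beem Python library might look like the following; the wrapper function and tag list are illustrative, and the posting key is supplied by the user at submission time:

```python
from beem import Hive

def submit_post(author: str, posting_key: str, title: str, body: str):
    # Sign the transaction with the user's private posting key;
    # the key is only used for this call and never stored.
    hive = Hive(keys=[posting_key])
    hive.post(title=title, body=body, author=author, tags=["blog"])
```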
## Challenges we ran into
We initially had trouble setting up the test network and deploying it to the Docker container.
## Accomplishments that we're proud of
We were eventually able to submit our posts to the main network.
## What we learned
We learned HIVE and Flask, and what the DPoS (Delegated Proof of Stake) protocol is.
## What's next for beem.it
We want to implement a user reputation system, as well as set up nodes so crypto wallets can receive payments. We figured out how to send HIVE tokens to accounts on the test network, so we want to bring that to the main network.
|
## Inspiration
This project was heavily inspired by the poor experience of the software used in university to view our courses, specifically Avenue (D2L). As university students, navigating these platforms proved cumbersome and time-consuming, hurting our overall productivity and hindering our ability to learn effectively.
Faced with these challenges, we recognized the need for a streamlined and user-friendly solution to enhance the educational experience. Our goal was to develop a tool that not only addressed the difficulties we encountered but also provided a seamless and efficient way to access concise course information.
With a vision for an improved learning platform, our project aims to overcome the limitations of existing systems by focusing on user experience and quick accessibility to vital course details. Leveraging the growth of AI, our project focuses on using AI not as a medium to do the work for students, but as an aid to further improve their online learning experience.
Our aspiration is to contribute to the enhancement of educational platforms, making them more intuitive, responsive, and tailored to the needs of students. By doing so, we believe we can positively impact the learning journey for students like ourselves, fostering a conducive environment for academic success and knowledge attainment.
## What it does
Each course is equipped with its own personalized chatbot, creating a dynamic and responsive communication channel. This tailored approach ensures that students receive information and assistance that is directly relevant to their specific coursework. Whether it's generating practice questions for assessments or keeping students organized with updates on due dates and important announcements, the chatbot is a versatile companion for academic success.
In addition to academic support, tAI acts as a central hub for organizational updates. Students can effortlessly stay informed about due dates, assignment submissions, and crucial announcements. This ensures that important information is easily accessible, reducing the likelihood of missed deadlines and enhancing overall productivity.
The integration of tAI into the learning environment is aimed at enhancing students' overall learning experiences. By providing seamless interaction and unparalleled convenience, tAI becomes an indispensable tool for students looking to navigate their academic journey more efficiently. The platform's commitment to personalized communication, study assistance, and organizational support reflects our dedication to fostering an environment where students can thrive and achieve their academic goals.
## How we built it
To bring our chatbot to life and enhance its capabilities, we harnessed the power of the **Cohere API**. Cohere played a pivotal role in empowering our chatbot to respond intelligently to user queries and effectively summarize course content material. Leveraging Cohere's advanced natural language processing capabilities, our chatbot not only understands the nuances of user inputs but also generates contextually relevant and coherent responses.
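For instance, summarizing a page of course notes might look roughly like the sketch below; the parameter values shown are illustrative stand-ins, not the settings we ultimately tuned:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def summarize_course_content(text: str) -> str:
    # Length, format, extractiveness, and temperature are example values only.
    response = co.summarize(
        text=text,
        length="medium",
        format="paragraph",
        extractiveness="low",
        temperature=0.3,
    )
    return response.summary
```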
The user interface was crafted using **HTMX**, a cutting-edge library that extends HTML to facilitate dynamic and real-time updates, which formed the foundation of our interactive UI. This allowed us to create a responsive and engaging user interface that adapts to user interactions without the need for constant page reloads.
Furthermore, **FastAPI**, a modern, fast web framework for building APIs with Python 3.7+ based on standard Python type hints, served as our backend framework. Its asynchronous capabilities and efficient design enabled us to handle concurrent requests, ensuring a smooth and responsive chatbot experience.
Finally, Tailwind CSS, a utility-first CSS framework, was employed for styling the user interface. Its simplicity and utility-first approach allowed us to rapidly design and customize the UI, ensuring a visually appealing and user-friendly experience. The combination of Tailwind CSS and **Jinja2**, a modern and designer-friendly templating engine for Python, enabled us to dynamically render content on the server-side and present it in a cohesive manner.
## Challenges we ran into
Working with Cohere proved to be our biggest challenge. While the API was coherent and easy to read and follow, finding the right parameters to use was difficult. We had to test many different methods to get the prompts we wanted, which also proved challenging. Finally, after many attempts, we found the right parameters to get our project working as intended.
## Accomplishments that we're proud of
We take immense pride in the substantial progress achieved within the 24-hour timeframe of this project. Witnessing our initial vision transform into a tangible reality has been a source of great joy and satisfaction. The collaborative efforts of our team, fueled by dedication and creativity, have not only met but exceeded our expectations.
## What we learned
We learned a multitude of things, especially about the stack we decided to use. For many of us, it was our first time using HTMX along with Flask to create a fully functional website. It was also the first time most of us had worked with Cohere and its API.
## What's next for tAI
We truly believe AI will only get better from now on; it is at its worst at this very moment, so why not leverage its capabilities to further students' learning? We also understand how easily it can be abused. However, it is still a powerful tool that students should leverage while it is young and fresh, paving a path and creating sensible restrictions around it before it is too late. Future features include reading and summarizing full course content, and more.
|
## Inspiration
Productivity is hard to harness, especially at hackathons with many distractions, but a trick we software-development students found to stay productive while studying was the “Pomodoro Technique”. The laptop is our workstation and can be a source of distraction, so what better place to implement the Pomodoro timer as a constant reminder? Since our primary audience is aspiring young tech students, we chose to further incentivize them to focus until their “breaks” by rewarding them with a random custom-generated NFT, minted to their name, every time they succeed. This unique inspiration provided an interesting way to solve a practical problem while learning about current blockchain technology and implementing it with modern web development tools.
## What it does
An innovative, modern Pomodoro timer running in your browser lets users sign in and link their MetaMask crypto account addresses. Users are incentivized to stay focused with the running Pomodoro timer because, upon reaching “break times” undisrupted, our website rewards them with a random custom-generated NFT minted to their name every time they succeed. This Ethereum-based NFT can then be viewed both on OpenSea and on the website's dashboard, which stores the user's NFT collection.
## How we built it
TimeToken's back end is built with Django and SQLite. For our frontend, we created a beautiful, modern platform using React and Tailwind to give our users a dynamic webpage. A benefit of using React is that it works smoothly with our Django back end, making it easy for both our front-end and back-end teams to work together.
## Challenges we ran into
We had originally set up the website as a MERN stack (MongoDB/Express.js/React/Node.js); however, while trying to import dependencies for the Verbwire API, to mint our images into NFTs in users' wallets, we ran into problems. After solving the dependency issues, a git merge produced many conflicts, and on the way to resolving them we discovered some difficult compatibility issues between the API SDK and the JS option for our server. At this point we had to pivot, so we decided to implement Verbwire's Python-provided API solution, and it worked out very well. We intended to pass the Python script and its functions straight to our front end, but learned that direct front-end to Python back-end communication is very challenging: it involved Ajax/XML file formatting and solutions heavily lacking in documentation, so we were forced to keep searching. We realized we needed an effective way to make a Python back end communicate with a JS front end and SQLite, and discovered that the Django framework was the perfect suite. So we had to learn serialization and the Django framework quickly in order to meet our needs.
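The minting step we wired through Django reduces to a plain HTTP call; the endpoint path, chain, and field names below are a best-guess illustration of Verbwire's quick-mint API rather than a verbatim copy of our code:

```python
import requests

# Endpoint path is an assumption based on Verbwire's quick-mint naming.
VERBWIRE_MINT_URL = "https://api.verbwire.com/v1/nft/mint/quickMintFromFile"

def mint_reward_nft(api_key: str, image_path: str, recipient_wallet: str) -> dict:
    # Upload the generated image and mint it straight to the user's wallet.
    with open(image_path, "rb") as image_file:
        response = requests.post(
            VERBWIRE_MINT_URL,
            headers={"X-API-Key": api_key},
            data={"chain": "goerli", "recipientAddress": recipient_wallet},
            files={"filePath": image_file},
        )
    response.raise_for_status()
    return response.json()
```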
## Accomplishments that we're proud of
We have accomplished many things during the development of TimeToken that we are very proud of. One of our proudest moments was when we pulled an all-nighter to code and get everything just right. This experience helped us gain a deeper understanding of technologies such as Axios, Django, and React, which helped us to build a more efficient and user-friendly platform. We were able to implement the third-party VerbWire API, which was a great accomplishment, and we were able to understand it and use it effectively. We also had the opportunity to talk with VerbWire professionals to resolve bugs that we encountered, which allowed us to improve the overall user experience. Another proud accomplishment was being able to mint NFTs and understanding how crypto and blockchains work, this was a great opportunity to learn more about the technology. Finally, we were able to integrate crypto APIs, which allowed us to provide our users with a complete and seamless experience.
## What we learned
When we first started working on the back-end, we decided to give MongoDB, Express, and NodeJS a try. At first, it all seemed to be going smoothly, but we soon hit a roadblock with some dependencies and configurations between a third-party API and NodeJS. We talked to our mentor and decided it would be best to switch gears and give the Django framework a try. We learned that it's always good to have some knowledge of different frameworks and languages, so you can pick the best one for the job. Even though we had a little setback with the back-end, and we were new to Django, we learned that it's important to keep pushing forward.
## What's next for TimeToken
TimeToken has come a long way and we are excited about the future of our application. To ensure that our application continues to be successful, we are focusing on several key areas. Firstly, we recognize that storing NFT images locally is not scalable, so we are working to improve scalability. Secondly, we are making security a top priority and working to improve the security of wallets and crypto-related information to protect our users' data. To enhance user experience, we are also planning to implement a media hosting website, possibly using AWS, to host NFT images. To help users track the value of their NFTs, we are working on implementing an API earnings report with different time spans. Lastly, we are working on adding more unique images to our NFT collection to keep our users engaged and excited.
|
losing
|
## Inspiration
There are millions of people around the world who have a physical or learning disability that makes creating visual presentations extremely difficult. They may be visually impaired, suffer from ADHD, or have conditions like Parkinson's. For these people, being unable to create presentations isn't just a hassle. It's a barrier to learning, a reason for feeling left out, or a career disadvantage in the workplace. That's why we created **Pitch.ai.**
## What it does
Pitch.ai is a web app that creates visual presentations for you as you present. Once you open the web app, just start talking! Pitch.ai will listen to what you say and, in real time, generate a slide deck based on the content of your speech, just as if you had a slideshow prepared in advance.
## How we built it
We used a **React** client combined with a **Flask** server to make our API calls. To continuously listen for audio to convert to text, we used a React library called “react-speech-recognition”. Then, we designed an algorithm to detect pauses in the speech in order to separate sentences, which would be sent to the Flask server.
The Flask server would then use multithreading in order to make several API calls simultaneously. Firstly, the **MonkeyLearn** API is used to find the most relevant keyword in the sentence. Then, the keyword is sent to **SerpAPI** in order to find an image to add to the presentation. At the same time, an API call is sent to OpenAI's GPT-3 in order to generate a caption to put on the slide. The caption, keyword, and image of a single slide are all combined into an object to be sent back to the client.
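Conceptually, the per-sentence fan-out looks like the sketch below; the three callables are hypothetical wrappers around the MonkeyLearn, SerpAPI, and GPT-3 calls, passed in so the sketch stays self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

def build_slide(sentence, extract_keyword, find_image, generate_caption):
    # Caption generation is independent, so it runs in parallel with the
    # keyword-then-image chain, which must stay sequential.
    with ThreadPoolExecutor(max_workers=2) as pool:
        caption_future = pool.submit(generate_caption, sentence)  # GPT-3
        keyword = extract_keyword(sentence)   # MonkeyLearn keyword extraction
        image_url = find_image(keyword)       # SerpAPI image search
        return {"keyword": keyword, "image": image_url, "caption": caption_future.result()}
```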
## Challenges we ran into
* Learning how to make dynamic websites
* Optimizing audio processing time
* Increasing efficiency of server
## Accomplishments that we're proud of
* Made an aesthetic user interface
* Distributing work efficiently
* Good organization and integration of many APIs
## What we learned
* Multithreading
* How to use continuous audio input
* How to use React hooks, Animations, Figma
## What's next for Pitch.ai
* Faster and more accurate picture, keyword and caption generation
* "Presentation mode”
* Integrate a database to save your generated presentation
* Customizable templates for slide structure, color, etc.
* Build our own web scraping API to find images
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and ChartJS. The backend was built on Node (with Express), as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-Text, Google's diarization, Stanford Empath, scikit-learn, and GloVe (for word vectors).
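As one example from that pipeline, the Empath lexicon scores a transcribed utterance across emotional categories in a couple of lines; the category list here is illustrative:

```python
from empath import Empath

lexicon = Empath()

def score_utterance(text: str) -> dict:
    # Returns normalized per-category scores, e.g. {"joy": 0.2, ...}
    return lexicon.analyze(text, categories=["joy", "sadness", "anger"], normalize=True)
```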
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned far too much about how computers store numbers (:p), and did a whole lot of it all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
## How to use
First, you need an OpenAI account for a unique API key to plug into the openai.api_key field in the generate_transcript.py file. You'll also need to authenticate the text-to-speech API with a .json key from Google Cloud. Then, run the following code in the terminal:
```
python3 generate_transcript.py
cd newscast
npm start
```
You'll be able to use Newscast in your browser at <http://localhost:3000/>. Just log in with your Gmail account and you're good to go!
## Inspiration
Newsletters are an underappreciated medium, and the experience of accessing them each morning could be made much more convenient if they didn't have to be clicked through one by one. Furthermore, with all the craze around AI, why not have an artificial companion deliver these morning updates to us?
## What it does
Newscast aggregates all newsletters a Gmail user has received during the day and narrates the most salient points from each one using personable AI-generated summaries powered by OpenAI and deployed with React and MUI.
## How we built it
Fetching mail from Gmail API -> Generating transcripts in OpenAI -> Converting text to speech via Google Cloud -> Running on MUI frontend
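The narration step at the end of that pipeline is a standard Google Cloud Text-to-Speech call, roughly as sketched below (voice and encoding settings are illustrative):

```python
from google.cloud import texttospeech

def narrate(transcript: str, out_path: str = "newscast.mp3") -> None:
    # Synthesize the generated transcript into an MP3 for playback in the UI.
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=transcript),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)
```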
## Challenges we ran into
The Gmail API was surprisingly tricky to work with; it took a long time to bring the email strings into a form that OpenAI wouldn't struggle with too much.
## Accomplishments that we're proud of
Building a full-stack app that we could see ourselves using! Successfully tackling a front-end solution on React after spending most of our time doing backend and algos in school.
## What we learned
Integrating APIs with one another, building a workable frontend solution in React and MUI.
## What's next for Newscast
Generating narratives grouped by publication/day/genre. Adding more UI features, e.g. cards pertaining to individual publications. Building a proper backend (Flask?) to support user accounts and, e.g., saving transcripts.
|
winning
|
## Inspiration
Fashion has always been a world that seemed far away from tech. We want to bridge this gap with “StyleList”, which understands your fashion within a few swipes and makes personalized suggestions for your daily outfits. When you and I visit the Nordstrom website, we see the exact same product page. But we could have completely different styles and preferences. With machine intelligence, StyleList makes it convenient for people to figure out what they want to wear (you simply swipe!) and it also allows people to discover a trend that they favor!
## What it does
With StyleList, you don’t have to scroll through hundreds of images and filters and search on so many different websites to compare the clothes. Rather, you can enjoy a personalized shopping experience with a simple movement from your fingertip (a swipe!). StyleList shows you a few clothing items at a time. Like it? Swipe left. No? Swipe right! StyleList will learn your style and show you similar clothes to the ones you favored so you won't need to waste your time filtering clothes. If you find something you love and want to own, just click “Buy” and you’ll have access to the purchase page.
## How I built it
We use a web scraper to get clothing item information from Nordstrom.ca and then feed this data into our backend. Our backend is a machine learning model trained on a bank of keywords; after each swipe, it serves the next items based on the cosine similarity between candidate items and the liked items. The interaction with the clothing items and the swipes happens on our React frontend.
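A bare-bones version of that ranking step might look like this, where `item_vectors` is a hypothetical keyword-embedding matrix (one row per clothing item):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def rank_next_items(item_vectors, liked_idx, seen_idx):
    # Average the liked items into a single taste profile vector.
    profile = item_vectors[liked_idx].mean(axis=0, keepdims=True)
    scores = cosine_similarity(item_vectors, profile).ravel()
    scores[seen_idx] = -np.inf  # never re-show items already swiped
    return np.argsort(scores)[::-1]  # indices of the best matches first
```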
## Accomplishments that I'm proud of
Good teamwork! Connecting the backend, frontend, and database took us more time than we expected, but now we have a completed full-stack project (starting from scratch 36 hours ago!).
## What's next for StyleList
Next, we want to help people who wonder “what should I wear today?” in the morning with a simple one-click page: they fill in the weather and their plan for the day, and StyleList provides a suggested outfit from head to toe!
|
## Inspiration
The first step of our development process was conducting user interviews with university students within our social circles. When asked about recently developed pain points, 40% of respondents stated that grocery shopping had become increasingly stressful and difficult with the ongoing COVID-19 pandemic. The respondents cited motivations including a loss of disposable time (due to an increase in workload from online learning), tight spending budgets, and fear of exposure to COVID-19.
While developing our product strategy, we realized that a significant pain point in grocery shopping is price-checking between different stores. This process requires the user to visit each store (in person and/or online), check the inventory, and manually compare prices. Consolidated platforms to help with grocery list generation and payment do not exist in the market today, so we decided to explore this idea.
**What does G.e.o.r.g.e stand for? : Grocery Examiner Organizer Registrator Generator (for) Everyone**
## What it does
The high-level workflow can be broken down into three major components:
1. Python (Flask) and Firebase backend
2. React frontend
3. Stripe API integration
Our backend flask server is responsible for web scraping and generating semantic, usable JSON code for each product item, which is passed through to our React frontend.
Our React frontend acts as the hub for tangible user-product interactions. Users are given the option to search for grocery products, add them to a grocery list, generate the cheapest possible list, compare prices between stores, and make a direct payment for their groceries through the Stripe API.
## How we built it
We started our product development process by brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment-service application, we drew up designs and prototyped using Figma, then implemented the front-end designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data.
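A minimal sketch of the checkout step on the Flask side could look like this; the currency and metadata fields are illustrative:

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

def create_grocery_payment(total_cents: int, user_id: str):
    # Charge the total of the generated cheapest grocery list, in cents.
    return stripe.PaymentIntent.create(
        amount=total_cents,
        currency="cad",
        metadata={"user_id": user_id, "source": "george-grocery-list"},
    )
```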
## Challenges we ran into
Once we had finished defining our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs for grocery store price data, so we decided to do our own web scraping. This led to complications with slower server responses, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which allowed us to flesh out our end-to-end workflow.
## Accomplishments that we're proud of
Some of the websites we had to scrape had lots of information to comb through and we are proud of how we could pick up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that included even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to utilize real money with our app.
## What we learned
We picked up skills such as web scraping to automate parsing through large data sets. We learned that scraping dynamically generated websites can lead to slow, undesirable server response times. It also became apparent that we should have set up virtual environments for our Flask applications so that team members would not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3am will make you want to pull out your hair, but at least we now know that it can be done :’)
## What's next for G.e.o.r.g.e.
Our next steps with G.e.o.r.g.e. would be to improve the overall user experience of the application by standardizing our UI components and UX workflows with Ecommerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as creating more seamless payment solutions.
|
### Inspiration
Our inspiration for "I Wear It Better" stems from the addictive algorithms employed by popular platforms like Tinder and TikTok. Additionally, we were motivated by the increasing appeal of fast fashion trends, while also recognizing the need to reduce clothing waste. Our aim is to adapt these concepts to engage a broader audience by infusing elements of excitement and mental stimulation into the process.
### What it does
"I Wear It Better" gamifies the experience of searching for fashion items, fostering interactions and trades among users that might not occur otherwise. By incentivizing users to explore new clothing options rather than discarding old ones, the platform promotes sustainable fashion practices while also providing entertainment value.
### How we built it
We developed "I Wear It Better" with React Native, using a component-based approach, despite being newcomers to the framework. Although we faced challenges with emulation and environment setup due to our limited app-development experience, we successfully created a functional app.
### Challenges we ran into
Our main challenge revolved around emulation and environment setup, as our team had minimal prior knowledge of app development. However, through perseverance and problem-solving, we overcame these obstacles to deliver a working solution.
### Accomplishments that we're proud of
We take pride in achieving our goal of creating a fully functional app and successfully emulating it. As first-time React Native users, this accomplishment marks a significant milestone in our journey as developers.
### What we learned
Throughout the development process, we gained a basic understanding of React Native and honed our skills in state management, component creation, and overall project structure. These learnings have equipped us with valuable knowledge for future projects in app development.
### What's next for I Wear It Better
Looking ahead, we plan to enhance the platform by implementing stronger algorithms such as the stable matching algorithm to improve the success rate of matches between users. Additionally, we aim to integrate AI models that analyze user preferences and interactions to provide personalized clothing recommendations, further enriching the user experience.
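For reference, the stable matching approach we have in mind is the classic Gale-Shapley algorithm; a generic sketch (not yet wired to our trade data, and assuming complete preference lists on both sides) is below:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Each proposer proposes in preference order; acceptors tentatively
    accept and trade up whenever a better offer arrives."""
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}  # acceptor -> proposer
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])  # displaced partner proposes again
            engaged[a] = p
        else:
            free.append(p)  # rejected; will try the next preference
    return engaged
```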
|
winning
|
## Inspiration
COVID-19 has turned every aspect of the world upside down. Unwanted things happen and situations change; communication gaps and economic crises cannot always be prevented. Thus, we developed an application that helps people survive during this pandemic by providing **a shift-taking job platform that creates a win-win solution for both parties.**
## What it does
This application connects companies/managers that need someone to cover an absent employee's shift for a certain period of time, without any contract. As a result, both sides can cover their needs to survive this pandemic. Beyond its main goal, this app can generally be used to help people **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their ability to get a job with Job-Dash.
## How we built it
For the design, Figma is the application we used to lay out all the screens and create smooth transitions between frames. While designers worked on the UI, developers started coding the functionality to make the application work.
The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to avoid storing session cookies.
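The token flow is the standard JWT pattern; expressed for brevity with Python's PyJWT (our actual server is Node.js, so treat this as a language-agnostic sketch):

```python
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # placeholder signing secret

def issue_token(user_id: str) -> str:
    # Short-lived signed token; nothing is stored server-side.
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises if the signature is invalid or the token has expired.
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]
```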
## Challenges we ran into
In terms of UI/UX, handling user-information ethics and providing complete details for both parties were challenges for us. On the developer side, using Bootstrap components ended up slowing us down, as our custom design required overriding most of the styles; Tailwind would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks also took longer.
## Accomplishments that we're proud of
Some of us picked up new technologies while working on the project, and creating a smooth UI/UX on Figma, complete with every feature we planned, left us satisfied.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future hackathons, so it would be easier to focus on one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make more improvements to the layout to better serve its purpose of helping people find additional income through Job-Dash effectively. On the developer side, we would like to continue building out the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
|
## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sectors in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
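The analysis core reduces to a handful of Indico calls per article; the sketch below uses the classic indicoio Python client, whose function names may differ from Indico's current SDK:

```python
import indicoio

indicoio.config.api_key = "YOUR_API_KEY"  # placeholder

def analyze_article(text: str) -> dict:
    return {
        "sentiment": indicoio.sentiment(text),   # 0 (negative) to 1 (positive)
        "political": indicoio.political(text),   # probability per affiliation
        "keywords": indicoio.keywords(text, top_n=10),
    }
```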
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside to follow Chrome's specific one, which we gradually aligned with. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of being able to launch a minimalist Google Chrome Extension in tandem with a web application, allowing users to either analyze news articles at their leisure, or in a more professional degree. We reached more than several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they never did before, understanding compromise, and putting the team ahead of personal views was what made this Hackathon one of the most memorable for everyone. Emotional intelligence played just as an important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future.
|
## Inspiration:
The inspiration for this product came from finding it difficult to quickly find emails for different companies. This started with another project some of us made to get free merch from different companies by emailing them. It took a lot of time to gather these emails, so we wished there was an affordable tool we could use. Some of us have also had jobs where we look for leads for a company, and this involves a lot of searching online for emails.
## What it does:
Our website makes it easy to find emails for different companies and organizations. Simply input the company's domain name (e.g. google -> google.com) and our website finds all the email addresses associated with that domain from across the web.
## How we built it:
This project can be broken down into three components: a web scraper, a database with a serverless backend, and a statically hosted website.
The foundation of the web scraper was the Scrapy Python library, which allowed us to scrape with very high concurrency, among other nice features. The scraper had ~20 target domains, where we scraped most pages on each website and parsed out emails. It ran on our team's local server for the duration of the hackathon.
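A stripped-down version of such a spider could look like the sketch below; the domain, regex, and concurrency setting are illustrative:

```python
import re
import scrapy

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class EmailSpider(scrapy.Spider):
    name = "emails"
    allowed_domains = ["example.com"]              # one of ~20 target domains
    start_urls = ["https://example.com"]
    custom_settings = {"CONCURRENT_REQUESTS": 32}  # high-concurrency crawl

    def parse(self, response):
        # Emit every email found on the page, then follow internal links.
        for email in set(EMAIL_RE.findall(response.text)):
            yield {"email": email, "source": response.url}
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```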
These emails (as well as some other information) were then stored in a MongoDB Atlas database. MongoDB was chosen because it's NoSQL and we didn't have time to keep rewriting schemas. Another reason was its simple cloud hosting service, which meant we didn't need to configure a cloud hosting provider.
The front-end was initially planned out using Figma (which we learned at the hackathon workshop). We then implemented our designs using Bootstrap. The static web pages were hosted on Netlify, because it avoids unnecessary complexity.
The front end interfaces with the database via Netlify Functions, an easier way to use AWS Lambda serverless endpoints. We created some endpoints in Node.js that we could use to query the database. In our functions, we also validate some emails.
## Challenges we ran into:
We got stuck in a few places when it came to the website. We thought that Figma was a tool to make websites, and we would be able to export the project when we were done. We went to the Figma workshop, and spent a lot of time using Figma when that time probably could have been better spent actually coding the website. CSS is also a tricky thing to wrap your head around when you are not used to it.
## Accomplishments that we're proud of:
We're proud that we were able to complete this project on time, and have an end product that is semi-polished. We're also proud that we created a plan that stripped away unnecessary complications, and we stuck with it. Also that our product works!
Not only are we happy with how our technical project turned out, but with our marketing content as well. Our video is pretty eye-catching, which is something to be proud of!
## What we learned:
Our team had two members that didn't have much experience with programming, but wanted to learn more. They were able to learn some of the basics of designing web pages. They also picked up some general knowledge on other languages.
The more technically experienced team members got some new experience working with serverless and NoSQL databases. Also, we learned more about how to market an idea.
## What's next for Gatherer:
Gatherer is a basic product now, with huge potential to grow. We might end up targeting business users with features that help generate effective leads, or end users who just need a tool to find some emails for a project. Helping users access all sorts of data in one place is something our team can envision. Besides emails, we could scrape phone numbers, social media profiles, and much more.
We think this product has lots of potential to become monetizable in some way.
|
winning
|
## Overview
We made a gorgeous website to plan flights with JetBlue's data sets. Come check us out!
|
## What Inspired Us
A good customer experience leaves a lasting impression across every stage of their journey. This is exemplified in the airline and travel industry. To give credit and show appreciation to the hardworking employees of JetBlue, we chose to scrape and analyze customer feedback on review and social media sites to both highlight their impact on customers and provide currently untracked, valuable data to build a more personalized brand that outshines its market competitors.
## What Our Project does
Our customer feedback analytics dashboard, BlueVisuals, provides JetBlue with highly visual presentations, summaries, and highlights of customers' thoughts and opinions on social media and review sites. Visuals such as word clouds and word-frequency charts highlight critical areas where customers reported either positive or negative experiences, suggesting strengths or areas for improvement. Users can read individual comments to review a customer's exact situation, or skim through to get a general sense of the company's social media interactions with its customers. Through this dashboard, we hope users can draw solid conclusions and pursue action based on them.
Humans of JetBlue is a side product resulting from such conclusions users (such as ourselves) may draw from the dashboard that showcases the efforts and dedication of individuals working at JetBlue and their positive impacts on customers. This product highlights our inspiration for building the main dashboard and is a tool we would recommend to JetBlue.
## How we designed and built BlueVisuals and Humans of JetBlue
After establishing the goals of our project, we focused on data collection via web scraping and building the data processing pipeline using Python and Google Cloud's NLP API. After understanding our data, we drew up a website and corresponding visualizations. Then, we implemented the front end using React.
Finally, we drew conclusions from our dashboard and designed 'Humans of JetBlue' as an example usage of BlueVisuals.
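The scoring stage of that pipeline is a standard Cloud Natural Language call per scraped comment, roughly as follows:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def score_comment(text: str) -> float:
    # Returns overall sentiment in [-1, 1] for one scraped comment.
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score
```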
## What's next for BlueVisuals and Humans of JetBlue
* collecting more data to get a more representative survey of consumer sentiment online
* building a back-end database to support data processing, storage, and organization
* expanding employee-centric features such as Humans of JetBlue
## Challenges we ran into
* Polishing scraped data and extracting important information.
* Finalizing direction and purpose of the project
* Sleeping on the floor.
## Accomplishments that we're proud of
* effectively processed, organized, and built visualizations for text data
* picking up new skills (JS, matplotlib, GCloud NLP API)
* working as a team to manage loads of work under time constraints
## What we learned
* value of teamwork in a coding environment
* technical skills
|
## Inspiration
Travel planning is a pain. Even after you find the places you want to visit, you still need to find out when they're open, how far away they are from one another, and work within your budget. With Wander, automatically create an itinerary based on your preferences – just pick where you want to go, and we'll handle the rest for you.
## What it does
Wander shows you the top destinations, events, and eats wherever your travels take you, with your preferences, budget, and transportation in mind. For each day of your trip, Wander creates a schedule for you with a selection of places to visit, lunch, and dinner. It plans around your meals, open hours, event times, and destination proximity to make each day run as smoothly as possible.
## How we built it
We built the backend on Node.js and Express, which uses the Foursquare API to find relevant food and travel destinations and schedules the itinerary based on event type, calculated distances, and open hours. The native iOS client is built in Swift.
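One simple way to express the proximity-and-open-hours constraint is a greedy next-stop picker, sketched below with hypothetical venue records:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) pairs, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def next_stop(current_pos, hour, candidates):
    """Greedy sketch of the day planner: pick the nearest venue that is open
    now; venue dicts ({"pos", "open", "close"}) are hypothetical records."""
    open_now = [v for v in candidates if v["open"] <= hour < v["close"]]
    return min(open_now, key=lambda v: haversine_km(current_pos, v["pos"]), default=None)
```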
## Challenges we ran into
We had a hard time finding all the event data that we wanted in one place. In addition, we found it challenging to sync the information between the backend and the client.
## Accomplishments that we're proud of
We’re really proud of our mascot, Little Bloop, and the overall design of our app – we worked hard to make the user experience as smooth as possible. We’re also proud of the way our team worked together (even in the early hours of the morning!), and we really believe that Wander can change the way we travel.
## What we learned
It was surprising to discover that there were so many ways to build off of our original idea for Wander and make it more useful for travelers. After laying the technical foundation for Wander, we kept brainstorming new ways that we could make the itinerary scheduler even more useful, and thinking of more that we could account for – for instance, how open hours of venues could affect the itinerary. We also learned a lot about the importance of design and finding the best user flow in the context of traveling and being mobile.
## What's next for Wander
We would love to continue working on Wander, iterating on the user flow to craft the friendliest end experience while optimizing the algorithms for creating itineraries and generating better destination suggestions.
|
partial
|
## Inspiration
Our inspiration for Sustain-ify came from observing the current state of our world. Despite incredible advancements in technology, science, and industry, we've created a world that's becoming increasingly unsustainable. This has a domino effect, not just on the environment, but on our own health and well-being as well. With rising environmental issues and declining mental and physical health, we asked ourselves: *How can we be part of the solution?*
We believe that the key to solving these problems lies within us—humans. If we have the power to push the world to its current state, we also have the potential to change it for the better. This belief, coupled with the idea that *small, meaningful steps taken together can lead to a big impact*, became the core principle of Sustain-ify.
## What it does
Sustain-ify is an app designed to empower people to make sustainable choices for the Earth and for themselves. It provides users with the tools to make sustainable choices in everyday life. The app focuses on dual sustainability—a future where both the Earth and its people thrive.
Key features include:
1. **Eco Shopping Assistant**: Guides users through eco-friendly shopping.
2. **DIY Assistant**: Offers DIY sustainability projects.
3. **Health Reports**: Helps users maintain a healthy lifestyle.
## How we built it
Sustain-ify was built with a range of technologies and frameworks to deliver a smooth, scalable, and user-friendly experience.
Technical Architecture:
Frontend Technologies:
* Frameworks: Flutter (Dart) and Streamlit (Python) were used for the graphical user interface (GUI/front end).
* Future services: integration with third-party services such as Twilio, Lamini, and Firebase for added functionality like messaging and real-time updates.
Backend & Web Services:
* Node.js & Express.js: For the backend API services.
* FastAPI: RESTful API pipeline used for HTTP requests and responses.
* Appwrite: Backend server for authentication and user management.
* MongoDB Atlas: For storing pre-processed data chunks into a vector index.
Data Processing & AI Models:
* ScrapeGraph.AI: LLM-powered web scraping framework used to extract structured data from online resources.
* Langchain & LlamaIndex: Used to preprocess scraped data and split it into chunks for efficient vector storage (a minimal sketch of this step follows this list).
* BGE-Large Embedding Model: From Hugging Face, used for embedding textual content.
* Neo4j: For building a knowledge graph to improve data retrieval and structuring.
* Gemini, GPT-4o & Groq: Large language models used for inference, running on LPUs (Language Processing Units) for a sustainable inference mechanism.
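Purely as an illustration of that chunk-and-embed step (the database URI, collection names, and chunk sizes below are assumptions, not our production pipeline), here is a minimal Python sketch:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer
from pymongo import MongoClient

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
embedder = SentenceTransformer("BAAI/bge-large-en")   # BGE-Large from Hugging Face
chunks_coll = MongoClient("mongodb://localhost:27017")["sustainify"]["chunks"]
# swap the URI above for your Atlas cluster and add a vector index on "embedding"

def index_document(scraped_text: str, source: str):
    chunks = splitter.split_text(scraped_text)
    vectors = embedder.encode(chunks)                 # one embedding per chunk
    chunks_coll.insert_many([
        {"text": c, "source": source, "embedding": v.tolist()}
        for c, v in zip(chunks, vectors)
    ])
```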
Additional Services:
* Serper: Provides real-time data crawling and extraction from the internet, powered by LLMs that generate queries based on the user's input.
* Firebase: Used for storing and analyzing user-uploaded medical reports to generate personalized recommendations.
Authentication & Security:
* JWT (JSON Web Tokens): For secure data transactions and user authentication.
## Challenges we ran into
Throughout the development process, we faced several challenges:
1. Ensuring data privacy and security during real-time data processing.
2. Handling large amounts of scraped data from various online sources and organizing it for efficient querying and analysis.
3. Scaling the inference mechanisms using LPUs to provide sustainable solutions without compromising performance.
## Accomplishments that we're proud of
We're proud of creating an app that:
1. Addresses both environmental sustainability and personal well-being.
2. Empowers people to make sustainable choices in their everyday lives.
3. Provides practical tools like the Eco Shopping Assistant, DIY Assistant, and Health Reports.
4. Has the potential to create a big impact through small, collective actions.
## What we learned
Through this project, we learned that:
1. Sustainability isn't just about making eco-friendly choices; it's about making *sustainable lifestyle* choices too, focusing on personal health and well-being.
2. Small, meaningful steps taken together can lead to a big impact.
3. People have the power to change the world for the better, just as they have the power to impact it negatively.
## What's next for Sustain-ify
Moving forward, we aim to:
1. Continue developing and refining our features to better serve our users.
2. Expand our user base to increase our collective impact.
3. Potentially add more features that address other aspects of sustainability.
4. Work towards our vision of creating a sustainable future where both humans and the planet can flourish.
Together, we believe we can create a sustainable future where both humans and the planet can thrive. That's the ongoing mission of Sustain-ify, and we're excited to continue bringing this vision to life!
|
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say that if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably by placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
We created our designs in Figma, built the backend in Bubble, and built the frontend in React.
## Challenges we ran into
Three beginner hackers! It was the first hackathon for three of us, and for two of those three, it was our first time formally coding in a product experience. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.).
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches.
Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
|
## Inspiration
As software engineers, we constantly seek ways to optimize efficiency and productivity. While we thrive on tackling challenging problems, sometimes we need assistance or a nudge to remember that support is available. Our app assists engineers by monitoring their states and employs Machine Learning to predict their efficiency in resolving issues.
## What it does
Our app leverages LLMs to predict the complexity of GitHub issues based on their title, description, and the stress level of the assigned software engineer. To gauge the stress level, we utilize a machine learning model that examines the developer’s sleep patterns, sourced from TerraAPI. The app provides task completion time estimates and periodically checks in with the developer, suggesting when to seek help. All this is integrated into a visually appealing and responsive front-end that fits effortlessly into a developer's routine.
## How we built it
A range of technologies power our app. The front-end is crafted with Electron and ReactJS, offering compatibility across numerous operating systems. On the backend, we harness the potential of webhooks, Terra API, ChatGPT API, Scikit-learn, Flask, NodeJS, and ExpressJS. The core programming languages deployed include JavaScript, Python, HTML, and CSS.
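As an illustration of how the stress model could plug into the time estimate (the features, training data, and scaling rule below are hypothetical stand-ins, not our trained model), here is a minimal scikit-learn sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# hypothetical features derived from Terra sleep data:
# [hours_slept, sleep_efficiency, awakenings, resting_heart_rate]
X_train = np.array([[7.5, 0.92, 1, 58],
                    [5.0, 0.71, 4, 66],
                    [6.2, 0.80, 3, 62],
                    [8.1, 0.95, 0, 55]])
y_train = np.array([0.2, 0.8, 0.6, 0.1])        # stress level in [0, 1]

stress_model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

def adjusted_estimate(base_hours: float, sleep_features: list) -> float:
    """Scale the LLM's base time estimate for an issue by predicted stress."""
    stress = float(stress_model.predict([sleep_features])[0])
    return base_hours * (1.0 + stress)          # more stress -> longer estimate
```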
## Challenges we ran into
Constructing the app was a blend of excitement and hurdles due to the multifaceted issues at hand. Setting up multiple webhooks was essential for real-time model updates, as they depend on current data such as fresh Github issues and health metrics from wearables. Additionally, we ventured into sourcing datasets and crafting machine learning models for predicting an engineer's stress levels and employed natural language processing for issue resolution time estimates.
## Accomplishments that we're proud of
In our journey, we scripted close to 15,000 lines of code and overcame numerous challenges. Our preliminary vision had the front end majorly scripted in JavaScript, HTML, and CSS — a considerable endeavor in contemporary development. The pinnacle of our pride is the realization of our app, all achieved within a 3-day hackathon.
## What we learned
Our team members were unfamiliar with one another before the hackathon. Yet our decision to trust each other paid off, as everyone contributed valiantly. We honed our skills in task delegation among the four engineers, and encountered and overcame issues previously uncharted for us, like running multiple webhooks and integrating a desktop application with an array of server-side technologies.
## What's next for TBox 16 Pro Max (titanium purple)
The future brims with potential for this project. Our aspirations include introducing real-time stress management using intricate time-series models. User customization options are also on the horizon to enrich our time predictions. And certainly, front-end personalizations, like dark mode and themes, are part of our roadmap.
|
winning
|
## Inspiration
Let's be honest. Sometimes, we have a paper cup, and we look at both the trash can and the recycling bin. We might throw the paper cup away in the trash because the recycling is just a little further. I'm definitely a culprit.
Our team set out to invent a fun perception of recycling by creating a digital pet that can only be cared for through recycling verified by Gemini's ML image recognition.
Something as simple as a tiny pet, backed by the complexity of Gemini, makes me take that extra step to throw away that paper cup into the recycling bin—to make sure my pet survives and keeps the world green.
## What it does
Take a photo of yourself recycling an item, and using image recognition, Gemini checks whether it is a valid photo. After a successful photo, you can feed your digital pet a ton of snacks, making your pet progressively bigger and bigger...
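Under the hood, the validation step can be as simple as asking Gemini a yes/no question about the photo. Here is a minimal Python sketch of that idea; the exact model name, prompt, and API key are assumptions, not our production code:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")              # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")    # assumed model name

def is_valid_recycling_photo(path: str) -> bool:
    prompt = ("Does this photo show a recyclable item being placed "
              "into a recycling bin? Answer strictly YES or NO.")
    response = model.generate_content([prompt, Image.open(path)])
    return response.text.strip().upper().startswith("YES")
```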
## How we built it
We began sketching each page to track the user's potential dopamine flow. Usually, recycling is seen as an inconvenience, especially since they are outnumbered by regular trashcans 2-to-1. We wanted the users to associate with recycling positively, so we created a pet for them to take care of. This emotional investment changes the idea of recycling from a burden to an opportunity to care for your digital pet.
## Challenges we ran into
Integrating the Gemini API was difficult at the start, but it went smoothly on our second attempt. Other challenges: settling on an easily viewable color scheme, and staying awake.
## Accomplishments that we're proud of
'Baby Chester' turning out as a cute pet
Our quick start to building SnackSnap made the rest of the days less stressful.
## What we learned
Shipping Fast!!! Pushing the limits of getting deep work done. The excitement of working with Gemini. Working with each other's strengths. Trust in each other.
## What's next for SnackSnap
Your pet can have babies. Discover new verticals. Future integration into spatial computing where there is virtually no friction for the user and we can auto-track their recycling activity.
|
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
|
## Inspiration
One day, one of our teammates was throwing out garbage in his apartment complex and the building manager made him aware that certain plastics he was recycling were soft plastics that can't be recycled.
According to a survey commissioned by Covanta, “2,000 Americans revealed that 62 percent of respondents worry that a lack of knowledge is causing them to recycle incorrectly (Waste360, 2019).” We also found that the delayed nature of recycling's consequences plays a role: “Because the reward [and] the repercussions for recycling... aren’t necessarily immediate, it can be hard for people to make the association between their daily habits and those habits’ consequences (HuffingtonPost, 2016)”.
From this research, we found that a lack of knowledge or awareness can be detrimental not only to personal life, but also to meeting governmental, societal, environmental, and sustainability goals.
## What it does
When an individual is unsure of how to dispose of an item, "Bin it" allows them to quickly scan the item and find out not only how to sort it (recycling, compost, etc.) but additional information regarding potential re-use and long-term impact.
## How I built it
After brainstorming before the event, we built it by splitting roles into backend, frontend, and UX design/research. We conceptualized and prioritized features as we went, based on secondary research, experimenting with code, and interviewing a few hackers at the event about their recycling habits.
We used Google Vision API for the object recognition / scanning process. We then used Vue and Flask for our development framework.
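As a minimal sketch of the scanning step (the keyword-to-bin mapping below is a hypothetical stand-in for our real sorting logic), Google Vision label detection plus a lookup table already gets you most of the way:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# hypothetical keyword-to-bin mapping; a real app would be far more complete
BINS = {"plastic": "recycling", "paper": "recycling", "cardboard": "recycling",
        "glass": "recycling", "food": "compost", "fruit": "compost"}

def suggest_bin(image_bytes: bytes) -> str:
    labels = client.label_detection(
        image=vision.Image(content=image_bytes)).label_annotations
    for label in labels:                       # labels come sorted by confidence
        for keyword, bin_name in BINS.items():
            if keyword in label.description.lower():
                return bin_name
    return "garbage"
```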
## Challenges I ran into
We ran into challenges with deployment of the application. Getting set up was a hurdle that our backend developers slowly overcame by getting the team configured and troubleshooting.
## Accomplishments that I'm proud of
We were able to work as a team towards a goal, learn, and have fun! We were also able to work with multiple Google APIs. We completed the core feature of our project.
## What I learned
Learning to work with people in different roles was interesting. We also learned a lot from a technical standpoint, such as designing for a mobile web UI, deploying an app with Flask, and working with Google APIs.
## What's next for Bin it
We hope to review feedback and treat this as a great hackathon project to potentially build on, applying our learnings to future projects.
|
winning
|
## Inspiration
We wanted to come to Deltahacks not just to create some product, but to create a product with lasting impact. We chose to tackle an area that we felt was underserved: a command portal for people with ALS. Command portal? You're probably wondering what that means. We'll show you as we go along!
## What it does
We created a command portal that allows people with ALS to communicate with the world. If you know anyone who suffers from ALS, then you also know how completely immobilized they can be. We decided to make blinking the medium of communication between the user and the device! With this, users can open and close doors, send help messages to a loved one, and even use a blink-to-text translator that allows them to communicate with the world in ways we had never fathomed.
## How we built it
The entire setup consists of 4 main features. The first feature is blink detection, which is powered by 3D vector and spatial mapping. With this technology we are able to map the human eye using any device with a depth sensor. Next up is the text message feature: a user with ALS who is in trouble can blink a certain number of times to trigger an automated message that lets a loved one know they need help. This is powered by the Twilio API. Next is the door open/close feature, which allows users to blink a certain pattern to trigger a door opening or closing. The final feature is a blink-to-text translator that uses Morse code to identify letters and display them on the screen. We feel that with Morse code, the possibilities are endless for people with ALS to move towards a more connected life!
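To illustrate the blink-to-text idea, here is a minimal Python sketch of Morse decoding from blink durations; the short/long threshold is an assumption, not our exact implementation:

```python
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
         ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
         "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
         "--..": "Z"}

def decode_blinks(blink_durations, long_blink=0.5):
    """Turn a sequence of blink durations (seconds) into one letter.
    Short blinks map to dots, long blinks to dashes."""
    symbol = "".join("-" if d >= long_blink else "." for d in blink_durations)
    return MORSE.get(symbol, "?")

# e.g. short, long, short, short -> ".-.." -> "L"
print(decode_blinks([0.2, 0.7, 0.2, 0.2]))
```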
## What's next for BlinkBuddy
We plan to divert in either 1 of 2 possibilities. The first possibility is to work on the computer vision and spatial mapping and increase the accuracy of such or to potentially convert our blink detection to be more accurate using EMGs. EMGs or more commonly known as electromyography are muscle detectors and could be placed in pair with the blink detector CV to identify blink with greater accuracy
|
## Inspiration
Around 85% of the world will go about their daily lives not knowing their privilege of being able to navigate several aspects of their lives with ease. Society, as it stands, is designed for the majority—those without disabilities. But what about the other 15%? What about the individuals who, despite already facing their challenges, have to live in a world that is not designed for them? 0.5% of these people are visually impaired. While 0.5% may seem like a small number, let’s put that into perspective; that’s around 45 million people, more than the population of all of Canada. We wanted to help make the world more accessible for people who may struggle to navigate their lives due to their visual impairment. Because everyone deserves to feel safe and secure when they go outside, regardless of their abilities.
## What it does
Our smart cane constantly scans the surroundings of the user and sends a warning in the form of a vibration of the cane when obstacles enter the threshold and are getting too close to them.
## How we built it
We began with an initial brainstorming process, where many of our ideas converged on creating an accessibility device. For the design and 3D printing phase, we used Autodesk Inventor to design custom 3D-printed parts to create a prototype cane. The process involved sketching and brainstorming concepts, creating CAD models, making revisions, and 3D printing the parts for assembly.
In the electrical phase, we created a sweeping sensor using a servo motor and ultrasonic sensor connected to an ESP32, and also wired a DC motor with a motor driver to the ESP32 to create a vibration effect.
For the programming, the servo motor continuously swept through 180 degrees, the ultrasonic sensor checked for objects within a threshold, and the DC motor created a vibration when objects were detected within the threshold. The system also detects when the cane is close to the ground (suggesting the user has fallen) and notifies the user accordingly.
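We wrote the firmware in the Arduino IDE (C++), but the same control loop is easy to sketch in MicroPython for the ESP32; the pin numbers, threshold, and servo pulse mapping below are illustrative assumptions, not our actual wiring:

```python
# MicroPython sketch for ESP32 (illustrative pins and thresholds)
from machine import Pin, PWM, time_pulse_us
import time

TRIG = Pin(5, Pin.OUT)            # ultrasonic trigger (hypothetical pin)
ECHO = Pin(18, Pin.IN)            # ultrasonic echo (hypothetical pin)
servo = PWM(Pin(13), freq=50)     # sweeping servo
motor = Pin(14, Pin.OUT)          # vibration motor via driver enable pin

THRESHOLD_CM = 80                 # warn when an obstacle is closer than this

def distance_cm():
    TRIG.off()
    time.sleep_us(2)
    TRIG.on()
    time.sleep_us(10)
    TRIG.off()
    t = time_pulse_us(ECHO, 1, 30000)     # echo pulse width, 30 ms timeout
    return t / 58 if t > 0 else None      # ~58 us of round trip per cm

def set_angle(deg):
    # map 0-180 degrees onto a 0.5-2.5 ms pulse at 50 Hz
    servo.duty_u16(1638 + int(deg / 180 * 6554))

while True:
    for deg in range(0, 181, 10):         # sweep the sensor across the path
        set_angle(deg)
        d = distance_cm()
        motor.value(1 if d is not None and d < THRESHOLD_CM else 0)
        time.sleep_ms(50)
```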
## Challenges we ran into
* Learning ESP32
* Difficulty connecting the DC motor to ESP32
* Failure with 3D printing parts
* Difficulty designing all the parts in a small amount of time
* Limited non-electronic hardware made it difficult to implement advanced features
* Learning new APIs
## Accomplishments that we're proud of
* Successfully connecting the motor with the motor driver after hours of trial and error
* Our success in collaborating on the project
* Designing a unique cane from scratch
## What we learned
* How to use Arduino IDE for ESP32
* Learned how to create IoT devices that connect hardware with software (WiFi networks, Bluetooth, etc.)
* Learned a great deal from mentors about ideation, advanced programming, and the incredible advice we received.
* Always double check work
* Learned to design more efficiently
* Consider 3D printing tolerances
## What's next for Sixth Sense
* Integrate navigation assistance with the cane using MappedIn
|
## Inspiration
With the rise in popularity of social media and the decline in attention span, we often find our friends, family, and followers unable to browse through the photos from our lives. At the end of 2020, Thanky made a video collage of the photos he took that year. After sharing it on Instagram, a few people asked him which app he used to make such videos. Finding no quick and safe solutions other than expensive and bulky desktop video editing software or slow and insecure cloud web applications, we decided to build one of our own.
## What it does
PHOSE, or Photo Serialiser, enables you to compile your photos into a video, in your browser, without uploading photos or downloading software. You upload the pictures and rearrange them into your preferred order for the video. The pictures are then merged into a video, ready to be downloaded and shared.
PHOSE is powered by WebAssembly via ffmpeg.js, meaning all the processing happens locally on your computer. No more downloading software or uploading your photos to some unknown server.
## How we built it
Our team explored some web development technologies and various Javascript libraries to come up with the front-end portion of the website.
The processing of the images and video is also done on the front-end of the web application using ffmpeg.js.
## Challenges we ran into
With a mix of beginner and experienced developers on the team at this hackathon, progress on the application was slightly delayed. The learning process of researching new front-end technology and watching various web-dev videos often led to seemingly unending errors.
WebAssembly, and particularly the ffmpeg.js library, is still relatively new, so we had some difficulty finding solutions to issues.
## Accomplishments that we're proud of
Every member of the team put great effort into contributing to this application. We are proud of the significant learning, knowledge, and mentorship that we gained from this collaborative project. As our own trophy, we each developed broader knowledge of project development.
## What we learned
We learned about the importance of communication and working together. Some of us learned that “Rome was not built in a day,” and that our own skills still need honing.
WebAssembly has great potential for making web applications that require no installation and run with near-native performance.
## What's next for PHOSE
PHOSE has the potential to be developed into a full-fledged video-editing web application.
## Built With
HTML, Javascript, CSS
[ffmpeg.js](https://github.com/ffmpegwasm/ffmpeg.wasm)
|
losing
|
## What is Smile?
Smile is a web app that makes you smile! Studies show that smiling, even a forced smile, is proven to help with mental health. Our app makes sure you get your smiles in, while prompting you to come up with positive affirmations about yourself.
Our app provides a quick, easy, and re-usable set of tools that can help reduce your stress by making you smile and invoke more positive vibes!
Users participate in a smile mile, which consists of 3 different activities that were scientifically designed to help with positivity.
The user first starts by showing a large smile for a couple of seconds, once the app has determined you are smiling, it moves you onto the next stage.
In this stage, the user must actively say/type 3 positive compliments to themselves. This helps them get in the mindset of self-appreciating thoughts!
Finally, we finish the run with some light-hearted music and additional resources that the user can look into when they’re feeling down or want to read more into it.
## Inspiration
As students who just finished our exams, we noticed our mood was becoming more negative. With the added anxiety of seeing our final marks come out, we needed some guidance.
Research shows that there’s merit in doing simple activities to help boost your mood!
The effects positive affirmations have on your mental being:
<https://scholar.dominican.edu/scw/SCW2020/conference-presentations/63/>
Benefits of smiling:
<https://www.tandfonline.com/doi/full/10.1080/17437199.2022.2052740>
<https://www.sclhealth.org/blog/2019/06/the-real-health-benefits-of-smiling-and-laughing/>
Smiling for health
<https://www.nbcnews.com/better/health/smiling-can-trick-your-brain-happiness-boost-your-health-ncna822591>
## How we built it
We built our web application using Javascript, and NextJS. We leveraged Computer Vision and NLP to validate user interactions.
Computer vision was used for Smile detection, where the user is required to smile for at least 10 seconds, this was important to the project since we needed to validate if the user was really smiling throughout this activity.
NLP was used for sentiment analysis; this was important to the project since we needed to make sure the user wasn’t inputting negative compliments and was focused only on positivity.
For the sentiment analysis portion, we used MonkeyLearn’s classification library, as it provided a set of models that fit our requirements and had a fast turnaround. However, it’s a free trial, so usage is limited.
For the smile detection, we used face-api’s various models, which can be found in `public/models`. These are the models we used for detecting landmarks such as the mouth and the eyes. However, the models are for general landmark detection, which could be improved upon by focusing only on whether the user is smiling or not!
## Challenges we ran into
We faced a multitude of challenges going through this project:
* Figuring out which models best fits our requirements
* Designing and implementing the user flow in a minimalistic manner
* Fixing hydration issues with NextJS
3 out of the 4 members didn’t have access to a camera, so we relied on one person to handle the computer vision aspect of the project. This proved to be the bottleneck and required us to manage our time properly (with a little help from “borrowing” my brother’s laptop).
Likewise, half of our team was inexperienced with building a web application, so the steps involved in onboarding and mentoring added to the time crunch.
An interesting challenge we ran into was dealing with the hybrid nature of the event. Our team was fluid with how we wanted to communicate as a couple of our team members couldn’t make it to campus, or couldn’t stay for long. This required us to think creatively to figure out how to effectively communicate with the team.
## Accomplishments that we're proud of
Getting the different activities to work was a major concern for all of us, and decided the feasibility of the project, so being able to see a final product that includes all of these features was a lovely sight to see.
Our team management skills were one soft skill we were proud of, since our team consisted of students in different years and disciplines, we wanted to make sure we best used our strengths but still provided an overcome-able challenge. We were able to do this by segmenting responsibilities between the team, and pairing whenever we needed assistance.
Balancing the project work with attending the fun on-campus activities. A lot of the team was interested in the other events throughout the hackathon, and we were worried that we might run out of time.
## What we learned
Browser-based CV models are difficult to manage since they need to be small enough to load on the client side quickly, but also be verbose enough to detect facial features in different lighting.
NLP models are a hit or miss for a broad topic like sentiment analysis, since the use of negation words can completely change the intent of a sentence while bag-of-words models still score it as positive.
It’s extremely hard to center a div at 6 am, when we’re all sleep-deprived.
The fun wasn’t the end result, it was the journey and the struggles we had along the way!
## What's next for Smile
* Add more activities to the Smile Mile, so there’s a broad span of activities the user could choose from
* Build our own in-house models for both sentiment analysis and computer vision, since the current models target general cases and can be improved through specialization
* We want to polish up the user interface, making things look more refined.
* Creating a mobile app, to make sure you get your smiles on the go!
* Notifications, to remind you to smile, in a Pomodoro-esque style.
## What’s next for you?
It’s obvious! SMILE 😄
|
## Inspiration
Over the course of the past year, one of the most heavily impacted industries due to the COVID-19 pandemic is the service sector. Specifically, COVID-19 has transformed the financial viability of restaurant models. Moving forward, it is projected that 36,000 small restaurants will not survive the winter as successful restaurants have thus far relied on online dining services such as Grubhub or Doordash. However, these methods come at the cost of flat premiums on every sale, driving up the food price and cutting at least 20% from a given restaurant’s revenue. Within these platforms, the most popular, established restaurants are prioritized due to built-in search algorithms. As such, not all small restaurants can join these otherwise expensive options, and there is no meaningful way for small restaurants to survive during COVID.
## What it does
Potluck provides a platform for chefs to conveniently advertise their services to customers who will likewise be able to easily find nearby places to get their favorite foods. Chefs are able to upload information about their restaurant, such as their menus and locations, which is stored in Potluck’s encrypted database. Customers are presented with a personalized dashboard containing a list of ten nearby restaurants which are generated using an algorithm that factors in the customer’s preferences and sentiment analysis of previous customers. There is also a search function which will allow customers to find additional restaurants that they may enjoy.
## How I built it
We built a web app with Flask where users can feed in data for a specific location, cuisine of food, and restaurant-related tags. Based on this input, restaurants in our database are filtered and ranked on two signals: the distance to the given user location, calculated using the Google Maps API, and a sentiment score over any comments on the restaurant, calculated using Google Cloud NLP and the Natural Language Toolkit (NLTK). Within the page, consumers can comment on their dining experience with a certain restaurant, and chefs can add information for their restaurant, including cuisine, menu items, location, and contact information. Data is stored in a PostgreSQL-based database on Google Cloud.
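As a minimal sketch of the ranking idea (the weights and data shapes below are assumptions, not our production code), the backend can combine distance with an average Google Cloud NLP sentiment score:

```python
from google.cloud import language_v1

nlp = language_v1.LanguageServiceClient()

def sentiment_score(comments):
    """Average sentiment of a restaurant's comments, in [-1, 1]."""
    if not comments:
        return 0.0
    scores = []
    for text in comments:
        doc = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
        scores.append(nlp.analyze_sentiment(
            request={"document": doc}).document_sentiment.score)
    return sum(scores) / len(scores)

def rank(restaurants, w_dist=1.0, w_sent=5.0):
    # lower distance and higher sentiment push a restaurant up the list
    return sorted(restaurants,
                  key=lambda r: w_dist * r["distance_km"]
                              - w_sent * sentiment_score(r["comments"]))[:10]
```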
## Challenges I ran into
One of the challenges that we faced was coming up with a solution that matched the timeframe and bandwidth of our team. We did not want to be too ambitious with our ideas and technology, yet we wanted to deliver a product that we felt was novel and meaningful.
We also found it difficult to integrate the backend with the frontend. For example, we needed the results from the Natural Language Toolkit (NLTK) in the backend to be used by the Google Maps JavaScript API in the frontend. By utilizing Jinja templates, we were able to serve the webpage and modify its script code based on the backend results from NLTK.
## Accomplishments that I'm proud of
We were able to identify a problem that was not only very meaningful to us and our community, but also one that we had a reasonable chance of approaching with our experience and tools. Not only did we get our functions and app to work very smoothly, we ended up with time to create a very pleasant user-experience and UI. We believe that how comfortable the user is when using the app is equally as important as how sophisticated the technology is.
Additionally, we were happy that we were able to tie in our product into many meaningful ideas on community and small businesses, which we believe are very important in the current times.
## What I learned
Tools we tried for the first time: Flask (with the additional challenge of running HTTPS), Jinja templates for dynamic HTML code, Google Cloud products (including Google Maps JS API), and PostgreSQL.
For many of us, this was our first experience with a group technical project, and it was very instructive to find ways to best communicate and collaborate, especially in this virtual setting. We benefited from each other’s experiences and were able to learn when to use certain ML algorithms or how to make a dynamic frontend.
## What's next for Potluck
We want to incorporate an account system to make user-specific recommendations (Firebase). Additionally, regarding our Google Maps interface, we would like to have dynamic location identification. Furthermore, the capacity of our platform could help us expand the program to pair people with any type of service, not just food. We believe that the flexibility of our app could serve other ideas as well.
|
# Inspiration
There are a variety of factors that contribute to *mental health* and *wellbeing*. For many students, the stresses of remote learning have taken a toll on their overall sense of peace. Our group created **Balance Pad** as a way to serve these needs. Balance Pad's landing page gives users access to various features that aim to improve their wellbeing.
# What it does
Balance Pad is a web-based application that gives users access to **several resources** relating to mental health, education, and productivity. Its initial landing page is a dashboard tying everything together to make a clear and cohesive user experience.
### Professional Help
>
> 1. *Chat Pad:* The first subpage of the application has a built-in *Chatbot* offering direct access to a **mental health professional** for instant messaging.
>
>
>
### Productivity
>
> 1. *Class Pad:* With the use of the Assembly API, users can convert live lecture audio into text-based notes. This feature allows students to focus on live lectures without the stress of taking notes. Additionally, this speech-to-text aid increases accessibility for those requiring note-takers.
> 2. *Work Pad:* Timed working sessions using the Pomodoro technique and notification restriction are also available on our webpage. The Pomodoro technique is a proven method to enhance focus and productivity, and will benefit students.
> 3. *To Do Pad:* Helps users stay organized
>
>
>
### Positivity and Rest
>
> 1. *Affirmation Pad:* Users can upload their accomplishments throughout their working sessions. Congratulatory texts and positive affirmations will be sent to the provided mobile number during break sessions!
> 2. *Relaxation Pad:* Offers options to entertain students while resting from studying. Users are given a range of games to play with and streaming options for fun videos!
>
>
>
### Information and Education
>
> 1. *Information Pad:* is dedicated to info about all things mental health
> 2. *Quiz Pad:* This subpage tests what users know about mental health. By taking the quiz, users gain valuable insight into how they are doing, plus information on how to improve their mental health, wellbeing, and productivity.
>
>
>
# How we built it
**React:** Balance Pad was built using React. This allowed us to easily combine the different webpages we each worked on.
**JavaScript, HTML, and CSS:** React builds on these languages so it was necessary to gain familiarity with them
**Assembly API:** The Assembly API was used to convert live audio/video into text
**Twilio:** This was used to send instant messages to users based on tracked accomplishments
# Challenges we ran into
>
> * Launching new apps with React via Visual Studio Code
> * Using Axios to run API calls
> * Displaying JSON information
> * Domain hosting of Class Pad
> * Working with Twilio
>
>
>
# Accomplishments that we're proud of
*Pranati:* I am proud that I was able to learn React from scratch, work with new tech such as Axios, and successfully use the Assembly API to create the Class Pad (something I am passionate about). I was able to persevere through errors and build a working product that is impactful. This is my first hackathon and I am glad I had so much fun.
*Simi:* This was my first time using React, Node.js, and Visual Studio. I don't have a lot of CS experience so the learning curve was steep but rewarding!
*Amitesh:* Got to work with a team to bring a complicated idea to life!
# What we learned
*Amitesh:* Troubleshooting domain creation for various pages, supporting teammates and teaching concepts
*Pranati:* I learned how to use new tech such as React, new concepts such API calls using Axios, how to debug efficiently, and how to work and collaborate in a team
*Simi:* I learned how APIs work, basic html, and how React modularizes code. Also learned the value of hackathons as this was my first
# What's next for Balance Pad
*Visualizing Music:* Our group hopes to integrate BeatCaps software to our page in the future. This would allow a more interactive music experience for users and also allow hearing impaired individuals to experience music
*Real-Time Transcription:* Our group hopes to implement real-time transcription in the Class Pad to make it even easier for students.
|
winning
|
## Inspiration
The best part of Ankara-- the fact that it is so colorful-- creates an issue when you are trying to choose just what color your bag, headpiece, or shoes should be. We thought it would be great to delegate that decision to Pepperish and step out effortlessly looking spicy.
## What it does
It lets you scan a picture of an Ankara pattern, then shows you the most dominant colors in the design, the colors your accessories could be, and up to three jewelry color choices (gold, rose-gold, and silver).
## How we built it
We built the front end in React and the back end with Convex. We used the Google Vision API to recognize the dominant colors in the Ankara image, and we wrote a careful complementary-color algorithm for deciding the colors for jewelry and other accessories.
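Our actual algorithm is more careful than this, but the core of complementary-color selection can be sketched as a 180-degree hue rotation:

```python
import colorsys

def complementary(rgb):
    """Return the complementary color of an (R, G, B) tuple in 0-255."""
    r, g, b = (c / 255 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    r2, g2, b2 = colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)  # rotate hue 180
    return tuple(round(c * 255) for c in (r2, g2, b2))

print(complementary((200, 30, 60)))   # a dominant red maps to a teal accent
```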
## Challenges we ran into
We had never written a backend or used an API before, so there were those learning curves.
## Accomplishments that we're proud of
Having completed a project at all
## What we learned
A lot
## What's next for Pepperish
Further refining the complementary-color algorithm, and showing models in the Ankara pattern with the suggested colored accessories
|
## Inspiration
The idea for StyleRise came from a common challenge many of us face—deciding what to wear for different occasions, whether it’s a job interview, a date, or even a casual outing. Hiring a traditional stylist can be both costly and time-consuming, making it inaccessible for most students and working professionals. Our goal was to create a simple, AI-driven solution that helps people make the most of the clothes they already own. Dress to Impress empowers users by allowing them to upload photos of their wardrobe and specify the occasion or style they’re dressing for. The app then uses AI to generate outfit suggestions, saving time and effort while boosting confidence without the need for a personal stylist or the purchase of new clothes.
## What it does
Dress to Impress allows users to upload photos of their clothes and prompts the user for the occasion or style they're dressing for. The app will then generate potential outfits.
## How we built it
The frontend was developed using Next.js, providing a responsive and dynamic user interface. For data management, we utilized ChromaDB, a high-performance vector database, to efficiently store and retrieve clothing information.
At the heart of the system, we integrated two AI agents connected via Fetch.ai. The first agent leverages the capabilities of Hyperbolic and Llama-3.2-90B-Vision-Instruct to analyze images of clothing from the user's wardrobe, extracting key attributes and generating detailed descriptions. These data points are then stored in ChromaDB, alongside user-provided information about the occasion and preferences to deliver personalized outfit recommendations of the day.
The system also combines context-relevant insights with other third-party APIs to suggest additional items that would complement the user’s existing wardrobe. This cohesive architecture enables us to automate styling decisions, offering intelligent suggestions tailored to the user's needs and preferences.
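As a minimal sketch of the wardrobe store (the collection name and example items are hypothetical), ChromaDB lets us add the AI-generated descriptions and query them by occasion:

```python
import chromadb

client = chromadb.Client()                       # in-memory for the sketch
wardrobe = client.create_collection("wardrobe")  # hypothetical collection name

# store the AI-generated description of each analyzed clothing photo
wardrobe.add(
    ids=["item-1", "item-2", "item-3"],
    documents=["navy slim-fit blazer, formal",
               "white linen shirt, casual",
               "dark-wash jeans, everyday"],
    metadatas=[{"type": "outerwear"}, {"type": "top"}, {"type": "bottom"}],
)

# retrieve the pieces most relevant to the user's occasion prompt
hits = wardrobe.query(query_texts=["smart-casual dinner date"], n_results=2)
print(hits["documents"])
```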
## Challenges we ran into
We faced several challenges throughout development, starting with communication difficulties within the team. An unexpected event involving theft created further disruption. Additionally, the team’s separation of tasks led to knowledge gaps, making it hard to align on certain aspects of the project. Lastly, we struggled to settle on a concrete idea before arriving at StyleRise. Despite these obstacles, we managed to overcome them by working together and staying focused on our shared goals.
## Accomplishments that we're proud of
We are incredibly proud of each other and how we came together as a team. In addition to overcoming the challenges, we took the opportunity to learn new technologies and adapt quickly. Our ability to pivot and collaborate under difficult circumstances made this project particularly rewarding.
## What we learned
Through this project, we gained valuable experience with technologies like Next.js, LLMs (Large Language Models), and modern databases. More importantly, we learned how to work effectively as a team and how to manage our time and energy throughout the event. We even figured out the importance of finding time to rest!
## What's next for Dress To Impress
Looking forward, we aim to expand StyleRise by improving its functionality and user experience. Our next step is to build a user-friendly interface that allows users to easily manage and rate the outfits generated by the AI. A five-star rating system will provide valuable feedback that helps fine-tune the algorithm. Eventually, we plan to introduce a community feature where users can share, collaborate on, and remix outfit ideas. This will help us build a more engaging platform and foster a collective space for high-quality AI-driven fashion content.
|
## Inspiration
Ordering delivery and eating out is a major aspect of our social lives. But when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all, by allowing health freaks to keep up with their diet plans while still making restaurant eating possible.
## What it does
The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are matched against a remote database to generate the nutritional contents of the food, and we display these contents to our users in an interactive manner.
## How we built it
Frontend: Vue.js, tailwindCSS
Backend: Python Flask, Google Vision API, CalorieNinja API
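A minimal sketch of this pipeline (simplified from our Flask backend; the top-5 label cutoff is an assumption) looks like this:

```python
import requests
from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()
NINJA_URL = "https://api.calorieninjas.com/v1/nutrition"

def nutrition_for_photo(image_bytes: bytes, api_key: str):
    # label the foods in the photo, then look up their nutrition facts
    labels = vision_client.label_detection(
        image=vision.Image(content=image_bytes)).label_annotations
    foods = [label.description for label in labels[:5]]   # top-5 cutoff assumed
    response = requests.get(NINJA_URL,
                            params={"query": ", ".join(foods)},
                            headers={"X-Api-Key": api_key})
    return response.json().get("items", [])
```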
## Challenges we ran into
As we are many first-year students, learning while developing a product within 24h is a big challenge.
## Accomplishments that we're proud of
We are proud to implement AI in a capacity that assists people in their daily lives, and we hope this idea improves people's relationships and social lives while they still maintain their goals.
## What we learned
Since most of our team are first-year students with minimal experience, we leveraged our strengths to collaborate. We also learned to use the Google Vision API with cameras, and we are now able to do even more.
## What's next for McHacks
* Calculate sum of calories, etc.
* Use image processing to estimate serving sizes
* Implement technology into prevalent nutrition trackers, i.e Lifesum, MyPlate, etc.
* Collaborate with local restaurant businesses
|
losing
|
## Inspiration
When reading news articles, we're aware that the writer has bias that affects the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?"
## What it does
*News Report* collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well!
## How we built it
First, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. Once we have our articles collected, our AI algorithms compare what is said in each article using a KL Sum, aggregating what is reported from all outlets to form a summary of these resources. The summary is about 5 sentences long—digested by the user with ease, with quick access to the resources that were used to create it!
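As an illustration of the summarization step (a minimal sketch, not our production code), the sumy library provides a ready-made KL-Sum summarizer:

```python
# pip install sumy nltk  (the tokenizer needs nltk's punkt data)
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.kl import KLSummarizer

def summarize(article_texts, sentence_count=5):
    # pool the reporting from every outlet, then pick the sentences whose
    # word distribution best matches the pooled distribution
    pooled = " ".join(article_texts)
    parser = PlaintextParser.from_string(pooled, Tokenizer("english"))
    return [str(s) for s in KLSummarizer()(parser.document, sentence_count)]
```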
## Challenges we ran into
We were really nervous about taking on a NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies that we haven't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application that’s content varied so drastically in size and available content.
## Accomplishments that we're proud of
We’re so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and an iOS mobile client. On top of that, we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately build a project with a large impact.
## What we learned
We learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollViews work and interact with other UI components. We also got to work with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience!
## What's next for News Report
We’d love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. After that we also want to be able to analyze and remove fake news sources from our spider's crawl.
|
# Inspiration ✨
It's a universally acknowledged truth that memories are the tapestries of our lives, yet, tragically, many of these vibrant threads fade over time, leaving us with a canvas that feels incomplete. This realization hit us hard, echoing the sentiment that the essence of our experiences, the laughter, the tears, the triumphs, and the losses, should not be relegated to the shadows of our minds. It was from this poignant understanding that EyeRemember was born—a beacon of hope in the quest to preserve the sanctity of our memories.
# What It Does 🌐
EyeRemember is not just an app; it's a revolution—a virtual reality gallery that transforms your memories and collectibles from mere items into immersive experiences. With EyeRemember, you don't just view your memories; you step into them, reliving each moment in a vivid 3D VR Museum World. This innovative platform allows users to navigate through their cherished memories and collectibles with the simplicity of their gaze, eliminating the barriers between the user and their past, making every interaction a journey back in time.
# How We Built It 🔧
The construction of EyeRemember was an odyssey of technological exploration and creativity. We embarked on this journey with the VisionOS SDK as our compass, guiding us through the complexities of virtual reality development. Our voyage took us to the shores of the Meta Quest 2, where we meticulously sideloaded and demoed our application, each step a testament to our dedication to innovation and our unwavering belief in the power of VR to transform how we connect with our past.
Main Technologies: Swift, VisionOS SDK, React, Meta Quest 2
# Challenges We Ran Into 🚀
Our journey was not without its trials and tribulations. The task of loading entire VR worlds presented a Herculean challenge, pushing us to the limits of our coding capabilities. The intricacies of Swift added layers of complexity to our endeavor, requiring us to adapt, learn, and grow. The process of streaming the VisionOS program and sideloading it onto the Meta Quest 2 was akin to navigating a labyrinth, where each turn revealed new challenges and opportunities for growth.
# Accomplishments That We're Proud Of 🏅
Standing at the frontier of VR coding and Swift programming as novices, we ventured forth with courage and determination. Our first foray into this unexplored territory was not just an accomplishment but a declaration of our passion for innovation and our commitment to pushing the boundaries of what is possible. We emerged from this experience not just as developers, but as pioneers of a new frontier in technology.
# What We Learned 📘
This expedition into the realms of VR and Swift was illuminating, to say the least. We learned that the essence of innovation lies not in the mastery of skills, but in the courage to face the unknown, the resilience to overcome challenges, and the vision to see beyond the horizon. These lessons, learned in the crucible of development, will guide us as we continue our journey with EyeRemember.
# What's Next For EyeRemember🌟
The saga of EyeRemember is just beginning. Our vision for the future is bold and boundless. We see EyeRemember evolving into a global platform that not only preserves memories but enriches them, making it possible for users to not just revisit the past but to experience it with a depth and clarity that was previously unimaginable. Our mission is clear: to innovate, to inspire, and to illuminate the path to a future where every memory is preserved, every moment cherished, and every experience shared.
Join us on this exhilarating journey as we continue to explore the infinite possibilities of virtual reality, memory preservation, and interactive storytelling. With EyeRemember, the future of how we remember and relive our past is bright, boundless, and breathtaking. 🚀💖
|
## What it does
Given an image, identify all instances of text present in it. This is useful for autonomous cars that need to accurately detect road signs. It is also useful for people with visual impairments who need help reading signs and signals in their environment. The algorithm runs fast enough that 60fps video can be recognized in real time, with >90% accuracy.
## How I built it
Rather than using general semantic segmentation, which introduces several challenges due to different characters being close to each other, I implemented a concept called instance segmentation. Tools used are Python, TensorFlow, and OpenCV.
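A heavily simplified sketch of the decoding step follows. Real PixelLink also predicts link maps between neighboring pixels; this version, which only thresholds the text-score map and groups connected pixels into rotated boxes, is an illustrative assumption rather than the full method:

```python
import cv2
import numpy as np

def decode_text_instances(text_scores, threshold=0.7):
    """text_scores: HxW array of per-pixel 'is text' probabilities."""
    mask = (text_scores > threshold).astype(np.uint8)
    num, labels = cv2.connectedComponents(mask)      # group adjacent text pixels
    boxes = []
    for i in range(1, num):                          # label 0 is background
        ys, xs = np.nonzero(labels == i)
        points = np.column_stack([xs, ys]).astype(np.float32)
        boxes.append(cv2.boxPoints(cv2.minAreaRect(points)))  # rotated box
    return boxes
```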
## Challenges I ran into
It was difficult to understand certain parts of implementation of the paper which needed to be tweaked.
## What I learned
What Instance segmentation is and how to implement it in code.
## What's next for PixelLink
Test it with a robust dataset and maintain its accuracy.
|
partial
|
## Inspiration
We were inspired by our own struggles with finding new recipes based on the ingredients we have in our own fridge. We found that while many apps do this, none are able to scan a receipt right from your device and get results instantly.
## What it does
When you open the app you are prompted to take a photo of some text you wish to scan. Once you have taken your photo, you can crop it to filter out any unnecessary details.
## How we built it
This iOS app was developed using Swift in the Xcode environment. We use Apple's MLVision and ML Kit to take the photo and translate it into text. From there we use the Spoonacular API to fetch recipes based on the data received.
## Challenges we ran into
Apple's MLVision and ML Kit were tough to learn, often crashed, and were inaccurate. On top of several Xcode issues, we had trouble debugging, but in the end we finally got it working.
## Accomplishments that we're proud of
* Building a fully fledged iOS app using Machine Learning from scratch
* Debugging and working as a team to produce a final product.
## What we learned
* Lots about APIs, Swift coding, Xcode, and machine learning
* Don't be afraid to ask questions
* Sometimes you spend more time debugging than writing actual code.
## What's next for RecipEasy
We would love to add:
* Stronger UI
* More accurate text recognition
* Ability to access and read from photos
* Provide links to the respective recipe online
* Expand past Spoonacular to use a more expansive API
|
## Inspiration
* As college students, one of the most difficult things to do is to split the bill for grocery payments. Everyone in the house wants a different combination of items, and paying back the purchaser has become a bi-weekly ordeal. We were inspired by the need that we saw for an application such as the one we created, and the potential that this work could have for data collection and health.
## What it does
* This app uses a single photo and a tap-to-select iOS UI to split a bill between people, record a category, and prepare receipt data to send to the cloud.
* Data can be used for tracking nutrition based on different food bought
## How we built it
* We learned how to use Xcode and Swift to program this app, using Google Firebase's text-reading AI kit. A minimal sketch of the receipt-parsing step follows.
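Our app is written in Swift, but the regex idea translates directly; here is a minimal Python sketch (the line format and skip list are assumptions, since receipt layouts vary widely):

```python
import re

# assumes OCR lines like "MILK 2% 1L   3.99"
ITEM = re.compile(r"^(?P<name>[A-Z][A-Z0-9 '%&.-]+?)\s+\$?(?P<price>\d+\.\d{2})$")
SKIP = {"TOTAL", "SUBTOTAL", "TAX", "CHANGE"}      # non-item summary lines

def parse_receipt(ocr_lines):
    items = []
    for line in ocr_lines:
        match = ITEM.match(line.strip())
        if match and match.group("name").strip() not in SKIP:
            items.append((match.group("name").title(),
                          float(match.group("price"))))
    return items

print(parse_receipt(["WALMART", "MILK 2% 1L  3.99", "TOTAL  3.99"]))
```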
## Challenges we ran into
* The ML Kit wasn't as accurate as we expected, and parsed data differently than expected.
* We had to narrow down the text to parse through, as the kit read the entire receipt
* We were new to Xcode, Swift, Google Firebase, and cloud computing
## Accomplishments that we're proud of
* We learned how to code with Xcode
* We learned how to set up cloud computing for our project
* We implemented regex that would correctly filter the receipt to read necessary items
* We were able to make a data table based on the ML read of the receipt and create a UI based on the data
## What we learned
* Swift and Xcode, IOS development.
* Cloud computing
## What's next for reSEEt: see your receipts in a new light
* We want to be able to handle more data than just the item name and cost. We want to read the date, location, shop type, payment method, etc.
* Better UI with more settings for easier use of the app.
* ML training for auto-categorization of groceries
|
## Inspiration
As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse the items on the receipt and add them to a database representing your fridge. Using the items you have in your fridge, our app can recommend recipes for dishes for you to make.
## How We Built It
On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data.
On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase.
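A minimal sketch of the back-end flow described above, assuming Google Cloud Vision credentials are configured in the environment and `/upload` stands in for our actual route:

```python
# pip install flask google-cloud-vision
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
client = vision.ImageAnnotatorClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

@app.route("/upload", methods=["POST"])
def upload():
    # The front-end posts the receipt photo as raw bytes (route name is illustrative).
    image = vision.Image(content=request.get_data())
    response = client.text_detection(image=image)
    full_text = response.full_text_annotation.text
    # Post-process: drop short/noisy lines before matching against the fridge DB.
    lines = [l for l in full_text.splitlines() if len(l) > 2]
    return jsonify({"lines": lines})

if __name__ == "__main__":
    app.run(port=5000)
```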
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask server to Google App Engine and styling in React. We found that it was not possible to write into Google App Engine storage; instead we had to write into Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
|
losing
|
## What is it?
This website is intended to be an interactive and fun way for children and adults to practice recognizing facial expressions.
At least one study found that computer based treatment can significantly help children with high-functioning autism (<https://link.springer.com/article/10.1007%2Fs10803-015-2374-0>).
## Why is it important?
People with autism sometimes have difficulties recognizing facial expressions.
Misinterpretations in facial expression could lead someone to experience social difficulties.
One study found that much of the difficulty came when distinguishing emotional from neutral faces.
In the study adults with autism were significantly more likely to misinterpret happy faces as neutral than neurotypical individuals (<http://journals.sagepub.com/doi/abs/10.1177/1362361314520755>).
As someone with experience in the field of ABA therapy, I feel something like this could replace boring worksheets.
It does not require expensive hardware or software: if a therapist, parent, or client wants to use it, they can!
My biggest hope is that someone will come across this project and be inspired to create other interactive methods of learning for ABA therapy.
## How I built it
The main component of the project is Microsoft's Emotion API. I wrote the important bits in JavaScript and the pretty stuff in HTML & CSS.
## Challenges I ran into
Getting the webcam to work was the hardest part. Luckily I found webcamjs and it had some pretty good documentation and demos.
## What's next for Social Skills
I want to make Social Skills more like a game that rewards the player. With rewards built in it would be easier for ABA therapists to include it as part of a token system.
|
## Inspiration
Our team members sometimes lose their temper when using a computer, and we set out to minimize that. Our initial idea was related to video games, as they can often be causes of frustration, especially online ones. But we realized this idea could be scaled to cover emotions across all computer usage, not just video games. Whether you are working on homework, playing online games, or programming a project, there is plenty you do on a computer that can get frustrating. This application seeks to alleviate that frustration.
## What it does
Once the app is open, it starts tracking your emotions via webcam, and when anger is detected, a creative calming notification is sent to your desktop. The notification uses Google's Gemini API to generate an endless variety of creative messages. These messages remind the user that despite their frustration, everything is okay, and suggest taking a break from the computer.
## How we built it
We used Python to build both the front and back end of the application. Using a model that detects faces in a webcam feed, we developed a backend program that predicts a person's facial expression and emotion; the prediction is quite accurate with facial tracking. We then integrated Google's Gemini API as the creative source for the calming notification messages.
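A minimal sketch of that detection loop, using OpenCV's bundled Haar cascade and the `google-generativeai` client. The Gemini model name and API key are placeholders, and the emotion classifier is stubbed out since our trained model lives separately:

```python
# pip install opencv-python google-generativeai
import cv2
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_KEY")           # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")    # placeholder model name

# OpenCV ships Haar cascades; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_crop):
    return "angry"  # stub: the real app runs its trained model here

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        if classify_emotion(gray[y:y+h, x:x+w]) == "angry":
            msg = model.generate_content(
                "Write one short, kind sentence reminding someone to take a break.")
            print(msg.text)  # would be shown as a desktop notification
cap.release()
```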
## Challenges we ran into
The biggest challenge was finding and fine-tuning a model that catered to both our facial recognition needs and our calming message needs. Initially, we tried to use Google's MediaPipe face detection to track faces captured by the webcam, but it was incompatible with the model we were using, so we used OpenCV for face detection instead. We also ran into issues with the webcam capturing odd, low-resolution angles, making it hard to capture a face within the frame, as well as complications with Google Gemini not accepting the API key we generated and text not generating correctly.
## Accomplishments that we're proud of
Getting facial recognition working was the first hurdle we cleared, and we are extremely proud of it. We are just as proud that our program can fairly accurately predict facial expressions. As we are both novice programmers, getting any of the above to work was an accomplishment in and of itself.
## What we learned
We got a glimpse into the world of facial recognition and AI fine-tuning. We also learned how to utilize the Google Gemini API and integrate it into our project.
## What's next for DontClashOutAI
In the future, we hope to implement Google's Mediapipe for more advanced facial and emotion recognition. We would also like to retrain the model for more accurate and consistent results when predicting facial expressions. We're also working hard to clean up the GUI of the application allowing for a more streamlined and efficient experience.
This is where we sourced the model from: [link](https://www.kaggle.com/datasets/abhisheksingh016/machine-model-for-emotion-detection/data)
|
## Inspiration
2.2 million deaths were attributed to high blood glucose levels in 2012. However, despite the daunting amount of deaths, glucometer tests are still inconvenient and costly. Test strips cost an astronomical amount and spoil readily in hot and humid weather. These tests will not function in the event of a natural disaster. We decided to change that.
## What it does
Spit into some mud and a user-friendly app will easily analyze the data collected from your saliva sample, quickly telling you if your blood glucose concentration is abnormally high. It's as easy as that.
## How it works
Glucose levels increase in saliva as the glucose concentration in your blood increases. Bacteria eat glucose as food and special bacteria called exoelectrogens (commonly found in mud) can create electricity from it. The more glucose there is in a sample, the higher the voltage. By analyzing the voltage that is produced by the bacteria in the mud, we can determine if your blood glucose levels are dangerously high.
## How we built it
With an Arduino, breadboard, op-amp, and wires, we measured if your glucose levels are either too high (1.0) or healthy (0.0). For software, we used a library called PySerial which can access the Arduino through a serial connection with a laptop. We used pencil lead, a water bottle, and mud to make the glucose sensor.
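A minimal sketch of the PySerial side, assuming the Arduino prints the comparator result over serial and `/dev/ttyACM0` stands in for the real port:

```python
import serial  # pip install pyserial

# Port name is a placeholder; on Windows it would be e.g. "COM3".
with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as ser:
    reading = ser.readline().decode().strip()  # Arduino prints "0.0" or "1.0"
    if reading == "1.0":
        print("Warning: glucose level abnormally high")
    else:
        print("Glucose level healthy")
```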
## Challenges we ran into
We had trouble measuring low voltages with an Arduino. We tried different resistor configurations, attempted to find a voltage sensor, and tried boosting the voltage with an op-amp. In the end, we used an op-amp comparator to analyze our voltages.
## Accomplishments that we're proud of
We're proud of blending together all the circuity, software, and biology in this project. Somehow we miraculously made it work.
## What's next for GluClose
The current prototype only determines if your blood glucose level has reached a dangerous level. In the future it'll be able to detect exact glucose concentrations in blood.
|
losing
|
# Notic3 – A Decentralized Solution for Content Creators
## Overview
Notic3 is a decentralized platform designed to empower content creators, offering a blockchain-based alternative to platforms like Patreon (and that one unholy platform). Our mission is to eliminate intermediaries, giving creators direct access to their supporters while ensuring secure, transparent, and tamper-proof data. Built fully on-chain, Notic3 exemplifies the spirit of decentralization by providing trustless interactions and immutable records.
While many blockchain-based projects use off-chain solutions to sidestep technical challenges, our team committed to going all-in on decentralization. This approach presented unique obstacles, but it also set us apart, reinforcing our belief in the transformative potential of blockchain for creative industries.
## Inspiration
Blockchain technology offers more than just financial innovation—it provides a new way to manage ownership, access, and rewards. Inspired by these capabilities, we wanted to build something that could give content creators control over their work and revenue streams without relying on centralized platforms that charge high fees or control the distribution of content.
While the Web3 space has already seen some early experiments in creator tools, many platforms are still hybrids—leveraging blockchain partially but retaining centralized components. We wanted to explore what would happen if we stayed true to the core ethos of decentralization. Notic3 was born from that idea: a fully on-chain platform that doesn't compromise on its principles.
## Our Approach
We chose to build Notic3 entirely on-chain to guarantee transparency, immutability, and censorship resistance. This meant every interaction—from subscription payments to content access—would be recorded directly on the blockchain (available to the general public). This decision came with trade-offs:
* **Secure data storage:** Managing private files like videos or audio on-chain is insecure if unencrypted, so we had to creatively manage metadata to ensure users are truly accessing content they own.
* **Novelty of Move and Sui:** The Move programming language, native to blockchains like Aptos and Sui, was completely new to our team. Learning it on the fly was one of the biggest challenges we faced, in addition to learning about the Sui SDK.
* **Smart Contract Design:** Writing complex smart contracts to handle subscription models, creator payouts, and access permissions directly on-chain was difficult.
## Key Features
* **Subscription-based Payments:** Creators can set up recurring subscriptions that allow supporters to access premium content. Payments are processed seamlessly on-chain, ensuring transparency and immediate distribution of funds.
* **Web3 Storage:** Everything about our app lives on the chain, including file storage. We utilized Sui's newest data storage solution, Walrus, to store encrypted files on chain as blobs, allowing creators to leverage the chain for distribution of their content.
## Conclusion
Working on Notic3 has been an incredible experience. We took on ambitious challenges, and despite the difficulties, we believe we succeeded in delivering a powerful product. Along the way, we learned new technologies, embraced decentralized principles, and grew both as developers and as a team.
Notic3 is proof of what can be achieved when you stay true to your mission, even when easier paths are available. We are excited to continue developing the platform beyond the hackathon and see how it can empower creators around the world.
Thank you for the opportunity to present Notic3, and we look forward to feedback and collaboration as we continue this journey!
|
## Inspiration
As college students who recently graduated high school in the last year or two, we know first-hand the sinking feeling that you experience when you open an envelope after your graduation, and see a gift card to a clothing store you'll never set foot into in your life. Instead, you can't stop thinking about the latest generation of AirPods that you wanted to buy. Well, imagine a platform where you could trade your unwanted gift card for something you would actually use... you would actually be able to get those AirPods, without spending money out of your own pocket. That's where the idea of GifTr began.
## What it does
Our website serves as a **decentralized gift card trading marketplace**. A user who wants to trade their own gift card for a different one can log in and connect their **Sui wallet**. Following that, they will be prompted to select their gift card company and cash value. Once they have confirmed that they would like to trade the gift card, they can browse through options of other gift cards "on the market", and if they find one they like, send a request to swap. If the other person accepts the request, a trustless swap is initiated and completed without an intermediary escrow.
## How we built it
In simple terms, the first party locks the card they want to trade, at which point a lock and a key are created for the card. They can request a card held by a second party, and if the second party accepts the offer, both parties swap gift cards and corresponding keys to complete the swap. If a party wants to tamper with their object, they must use their key to do so; the single-use key would then be consumed by the smart contract, and the trade would no longer be possible.
Our website was built in three stages: the smart contract, the backend, and the frontend.
**The smart contract** hosts all the code responsible for automating a trustless swap between the sender and the recipient. It **specifies conditions** under which the trade will occur, such as the assets being exchanged and their values. It also has **escrow functionality**, responsible for holding the cards deposited by both parties until swap conditions have been satisfied. Once both parties have undergone **verification**, the **swap** will occur if all conditions are met, and if not, the process will terminate.
**The backend** acts as a bridge between the smart contract and the front end, allowing for **communication** between the code and the user interface. The main way it does this is by **managing all data**, which includes all the user accounts, their gift card inventories, and more. Anything that the user does on the website is communicated to the Sui blockchain. This **blockchain integration** is crucial so that users can initiate trades without having to deal with the complexities of blockchain.
**The frontend** is essentially everything the user sees and does, or the UI. It begins with **user authentication** such as the login process and connection to Sui wallet. It allows the user to **manage transactions** by initiating trades, entering in attributes of the asset they want to trade, and viewing trade offers. This is all done through React to ensure *real-time interaction* so that new offers are seen and updated without refreshing the page.
## Challenges we ran into
This was **our first step into the field** of Sui blockchain and Web3 entirely, so we found it really informative but also really challenging. The first step in addressing this was learning Move through some basic tutorials and setting up a development environment. Another challenge was the **many aspects of escrow functionality**, which we addressed by embedding many tests in our code. For instance, we had to test that once an object was created it would actually lock and unlock, and also that if the second party stopped responding or an object was tampered with, the trade would be terminated.
## Accomplishments that we're proud of
We're most proud of the look and functionality of our **user interface**, as user experience is one of our most important focuses. We wanted a platform that is clean and easy to use and navigate, which we achieved by maintaining consistency throughout the website and keeping basic visual hierarchy principles in mind while designing it. Beyond this, we are also proud of pulling off a project that relies so heavily on the **Sui blockchain** when we entered this hackathon with absolutely no knowledge of it.
## What we learned
Though we've designed a very simple trading project implementing Sui blockchain, we've learnt a lot about the **implications of blockchain** and the role it can play in daily life and cryptocurrency. The two most important aspects to us are decentralization and user empowerment. On such a simple level, we're able to now understand how a dApp can reduce reliance on third party escrows and automate these processes through a smart contract, increasing transparency and security. Through this, the user also gains more ownership over their own financial activities and decisions. We're interested in further exploring DeFi principles and web 3 in our future as software engineers, and perhaps even implementing it in our own life when we day trade.
## What's next for GifTr
Currently, GifTr only facilitates the exchange of gift cards, but we are intent on expanding this to allow users to trade their gift cards for Sui tokens in particular. This would encourage our users to shift from traditional banking systems to a decentralized system, and give them access to programmable money that can be stored more securely, integrated into smart contracts, and used in instant transactions.
|
## Inspiration
The sparkling bay and rolling hills captivated us as our plane descended into the Bay Area. We were excited to see the beauty of the Golden State on the ground, but as we rode the Cal train from the SFO airport to Stanford, we saw many highway underpasses and beat-up towns littered with trash. We couldn't help but notice the stark contrast between the beautiful state of California and the poor condition of some of its communities. On that ride, we envisioned something that could bring back the beauty of the Golden State and bring others closer together through Web3.
## What it does
Our platform allows others to post projects that need to be done in their community, such as picking up trash. Those who post projects are the 'Host' and can manage who works on them. 'Donors' can contribute funds to these projects to provide a financial incentive for 'Helpers' to complete the posted projects. Upon approval by the Host, the Helpers all split the money that was tied to the project. The Host uploads a description and images of the project to be done, and the Helpers upload pictures as proof that they have completed the project.
## How we built it
We first architected our solution on OneNote, using user stories, domain models, and use case diagrams. We then split up to develop the backend (Ethereum contracts) and frontend (Next.js), coordinating on how to integrate the data. We deployed the contracts on the Arbitrum rollup due to its low gas costs. To store the photos that Hosts and Helpers upload, we used the IPFS service Pinata.
## Challenges we ran into
1. While we had success using Estuary's Alpha UI on Friday night, we returned Saturday to connection timeout issues and problems connecting to their nodes. We then decided to switch to the more reliable and familiar Pinata for IPFS operations.
2. Most of the modern Ethereum development suite was created within the past year, leading to poor documentation and support for certain tools. This especially affects tools like Wagmi, which was created mere months ago. We had trouble finding documentation for the complex use cases we needed, such as dynamically reading contracts created from our factory design model and complex parameter rights.
3. Centering divs (this stumped ChatGPT too)
4. Architecting the integration between the Next.js frontend and the Node.js backend, especially when dealing with images
5. Testing and securing smart contracts, whose deployments are immutable
## Accomplishments that we're proud of
We are proud to have produced a professional, polished product in 36 hours and for overcoming our obstacles during the short time frame.
## What we learned
We learned lots of technical skills, from working with Next.js to file transport with IPFS to ETH smart contracts.
## What's next for Helping Hand
We have lots of features in mind.
1. Sort all of the projects by proximity automatically when you browse the available projects.
2. Add support for other coins.
3. Add voting and delegation so that funds can be distributed according to labor.
4. Use zero-knowledge cryptography to maintain privacy.
|
winning
|
+1 902 903 6416 (send 'cmd' to get started)
## Inspiration
We believe in the right of every individual to have access to information, regardless of price or censorship.
## What it does
NoNet gives unfettered access to the internet's most popular services without an internet or data connection. It accomplishes this by sending SMS queries to a server, which processes each query and returns results that were previously accessible only to those with an uncensored internet connection. It works with Yelp, Google Search (headlines), Google Search (articles/websites), Wikipedia, and Google Translate.
Some commands include:
* 'web: border wall' // returns top results from google
* 'url: [www.somesite.somearticle.com](http://www.somesite.somearticle.com)' // returns article content
* 'tr ru: Hello my russian friend!' // returns russian translation
* 'wiki: Berlin' // returns Wikipedia for Berlin
* 'cmd' // returns all commands available
The use cases are many:
* in many countries, nearly everyone has a phone with SMS, but data is prohibitively expensive, so they have no internet access
* countries like China have a censored internet, and this would give citizens the freedom to bypass that
* authoritarian countries turn off the internet in times of mass unrest to control information
## How we built it
We integrated Twilio for SMS with a Node.js server hosted on Google App Engine, using multiple APIs.
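Our server is Node.js, but the command routing boils down to something like this Python/Flask sketch of a Twilio SMS webhook (the handler bodies are stubs standing in for the real API calls):

```python
# pip install flask twilio
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def lookup_wikipedia(query):   # stub: real version calls the Wikipedia API
    return f"(summary of '{query}')"

def google_headlines(query):   # stub: real version fetches top results
    return f"(top headlines for '{query}')"

def route(body):
    # Route on the prefixes listed above.
    if body == "cmd":
        return "commands: web:, url:, tr <lang>:, wiki:, cmd"
    if body.startswith("wiki:"):
        return lookup_wikipedia(body[5:].strip())
    if body.startswith("web:"):
        return google_headlines(body[4:].strip())
    return "Unknown command. Send 'cmd' for the full list."

@app.route("/sms", methods=["POST"])
def sms():
    reply = MessagingResponse()
    reply.message(route(request.form["Body"].strip().lower()))
    return str(reply)
```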
## Challenges we ran into
We faced challenges at every step of the way, from establishing two-way messaging, to hosting the server, to parsing the right information to fit the SMS format. We tackled the problems as a team and overcame them to produce a finished product.
## Accomplishments that we're proud of
"Weathering a Tsunami" - getting through all the challenges we faced and building a product that can truly help millions of people across the world
## What we learned
We learned how to face problems as well as new technologies
## What's next for NoNet
Potential Monetization Strategies would be to put ads in the start of queries (like translink bus stop messaging), or give premium call limits to registered numbers
|
## Inspiration
If you're lucky enough to enjoy public speaking, we're jealous of you. None of us like public speaking, and we realized that there are not a lot of ways to get real-time feedback on how we can improve without boring your friends or family to listen to you.
We wanted to build a tool that would help us practice public-speaking - whether that be giving a speech or doing an interview.
## What it does
Stage Fight analyzes your voice, body movement, and word choices using different machine learning models in order to provide real-time constructive feedback about your speaking. The tool can give suggestions on whether or not you were too stiff, used too many crutch words (umm... like...), or spoke too fast.
## How we built it
Our platform is built upon machine learning models from Google's Speech-to-Text API, and uses OpenCV with trained models to track hand movement. Our simple backend server is built on Flask, while the frontend is no more than a little jQuery and JavaScript.
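As one concrete piece of that feedback, counting crutch words and speaking pace from a transcript is straightforward. This sketch assumes the transcript string has already come back from the Speech-to-Text API; the crutch-word list is illustrative:

```python
import re
from collections import Counter

CRUTCH = {"um", "umm", "uh", "like", "basically", "literally", "so"}

def crutch_report(transcript, duration_seconds):
    """Return crutch-word counts and words-per-minute for one recording."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w in CRUTCH)
    wpm = len(words) / (duration_seconds / 60)
    return counts, wpm

counts, wpm = crutch_report("Um, so I basically think, like, we should ship it", 10)
print(counts)                       # e.g. Counter({'um': 1, 'so': 1, ...})
print(f"{wpm:.0f} words per minute")  # 60
```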
## Challenges we ran into
Streaming live audio while recording from the webcam and using a pool of workers to detect hand movements, all while running the Flask server in the main thread, gets a little wild; macOS doesn't allow recording from most of this hardware outside the main thread. There were lots of problems where websockets and threads would go missing, working one run and not the next. Lots of development had to be done pair-programming style on our one Ubuntu machine. Good times!
## Accomplishments that we're proud of
Despite all challenges, we overcame them. Some notable wins include stringing all components together, using efficient read/writes to files instead of trying to fix WebSockets, and cool graphs.
## What we learned
A lot of technology, a lot about collaboration, and the Villager Puff matchup (we took lots of Smash breaks).
|
## Inspiration
In many cases, victims calling 911 have been in situations in which creating noise would put them in danger. The ability to speak with emergency dispatch through text would be of huge benefit to those who need to avoid alerting an attacker. One particular incident that stood out to us is a chilling story involving an Ohio woman in 2016 who, after freeing herself from her ties, whispered to the 911 dispatcher as her abductor slept just feet away from her.
We researched similar services available in Canada, all of which were lacking core functionality and usability. They are built solely for deaf or hard of hearing people and require a complicated signup process. Unsatisfied, we set out to find a solution that would be inexpensive to implement and simple to use.
## What it does
Our service allows someone to contact 911 via SMS. This is useful for anyone who finds themselves in a situation where it can be advantageous to stay silent, for example if the user was abducted or if an intruder has entered the user's home.
Once a user sends an SMS to the 911-text number, a call is initiated with a 911 dispatcher. The SMS is relayed to the dispatcher using text-to-speech. The dispatcher can then say a reply, which will be sent back to the user using speech-to-text.
The service is a layer that is used on top of the current 911 emergency service without the need to make any changes to the dispatch operations. This enables quick integration and minimal implementation costs.
## How we built it
The service is hosted in a Docker container on Google Compute Engine. The service is written in node.js with heavy use of the Twilio API.
The SMS chat and phone call are kept in sync using a series of webhooks on our server, which lets information flow as quickly and efficiently as possible. Every time a new event occurs on either side, our server responds immediately with the next action.
In building this, our top priorities were ease-of-use and speed of information flow. In emergency situations time is always of the essence so it is very important that none is wasted.
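In TwiML terms, the relay is roughly the sketch below. This is hedged: the route names and message store are ours, and `<Gather input="speech">` is what captures the dispatcher's spoken reply for speech-to-text:

```python
# pip install flask twilio
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse

app = Flask(__name__)
LATEST_SMS = {"text": "Intruder in my home, hiding upstairs"}  # stand-in store

@app.route("/relay", methods=["POST"])
def relay():
    # Read the caller's SMS aloud to the dispatcher, then capture the spoken reply.
    vr = VoiceResponse()
    vr.say(LATEST_SMS["text"])
    vr.gather(input="speech", action="/dispatcher_reply", timeout=5)
    return str(vr)

@app.route("/dispatcher_reply", methods=["POST"])
def dispatcher_reply():
    spoken = request.form.get("SpeechResult", "")
    # Real version texts `spoken` back to the user via the Twilio SMS API.
    vr = VoiceResponse()
    vr.redirect("/relay")  # keep the call alive for the next exchange
    return str(vr)
```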
## Challenges we ran into
Twilio uses its own markup language (TwiML) and requires the data be passed to it through its own web hooks. This made the structure of the server a little bit unusual. Another challenge we faced was maintaining call continuity through multiple SMS requests.
## Accomplishments that we're proud of
We did a good job isolating the key elements of what would make the service effective for real-life scenarios.
## What's next for 911-text
We would like to add more features in this service, such as geo-locating and sentiment analysis to further aid the 911 dispatcher.
|
partial
|
## Inspiration
The most important part in any quality conversation is knowledge. Knowledge is what ignites conversation and drive - knowledge is the spark that gets people on their feet to take the first step to change. While we live in a time where we are spoiled by the abundance of accessible information, trying to keep up and consume information from a multitude of sources can give you information indigestion: it can be confusing to extract the most relevant points of a new story.
## What it does
Macaron is a service that lets you keep track of all the relevant events happening in the world without combing through a long news feed. When a major event happens, news outlets write articles; Macaron aggregates articles from multiple sources, uses NLP to condense the information, classifies the summary into a topic, extracts keywords, then presents it to the user in a digestible, bite-sized info page.
## How we built it
Macaron also combs through various social media platforms (Twitter at the moment) to perform sentiment analysis and gauge public opinion on an issue, displayed as the sentiment bar on every event card! We used a lot of Google Cloud Platform to publish our app.
The service also finds the most relevant charities for an event (if applicable) and makes donating to them a super simple process. We think that by adding an easy call-to-action button to the article informing you about an event, we'll lower the barrier to everyday charity for the busy modern person.
Our front end was built on NextJS, with a neumorphism inspired design incorporating usable and contemporary UI/UX design.
We used the Tweepy library to scrape twitter for tweets relating to an event, then used NLTK's vader to perform sentiment analysis on each tweet to build a ratio of positive to negative tweets surrounding an event.
We also used MonkeyLearn's API to summarize text, extract keywords and classify the aggregated articles into a topic (Health, Society, Sports etc..) The scripts were all written in python.
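The positive/negative ratio behind the sentiment bar is essentially the following sketch (the tweets shown are placeholders for the Tweepy results):

```python
# pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [  # placeholder tweets; the real list comes from Tweepy
    "So glad relief funds are finally arriving!",
    "This response has been a disaster, honestly.",
]

pos = sum(1 for t in tweets if sia.polarity_scores(t)["compound"] >= 0.05)
neg = sum(1 for t in tweets if sia.polarity_scores(t)["compound"] <= -0.05)
print(f"positive: {pos}, negative: {neg}")  # feeds the sentiment bar
```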
## Challenges we ran into
The process was super challenging, as the scope of our project was way bigger than we anticipated! Between getting rate-limited by Twitter and the scripts not running fast enough, we hit a lot of roadbumps and had to make quick decisions about which elements of the project to cut when we couldn't implement them in time.
Overall, however, the experience was really rewarding and we had a lot of fun moving fast and breaking stuff in our 24 hours!
|
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for backend, React for frontend.
## Challenges we ran into
Three beginner hackers! It was the first hackathon for three of us, and for two of those three it was our first time formally coding on a product. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, which audience and products to target, etc.).
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches.
Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
|
## Inspiration
The team was inspired by the Twitter mood but wanted to make it more powerful.
## What it does
Our product allows users to specify topics of interest then we analyze the popularity, overall sentiment, and compare related topics.
## How we built it
We began by defining the separations between the various components. Then we set off to work on our respective components.
## Challenges we ran into
The natural language processing tools were initially too slow to be usable. However, we were able to optimize their performance using several clever tricks.
Fitting the various components together was a real challenge, since several necessary tools were implemented in different programming languages. However, the team overcame it using interprocess communication.
## Accomplishments that we're proud of
Delivering a well polished front end experience on top of a powerful backend.
## What we learned
The team learned d3.js as well as the twitter API. The team learned the core concepts of natural language processing.
## What's next for Open Opinion
Further performance optimizations through custom natural language processing models.
|
winning
|
## Inspiration
We are currently living through one of the largest housing crises in human history. As a result, more Canadians than ever before are seeking emergency shelter to stay off the streets and find a safe place to recuperate. However, finding a shelter is still a challenging, manual process, where no digital service exists that lets individuals compare shelters by eligibility criteria, find the nearest one they are eligible for, and verify that the shelter has room in real-time. Calling shelters in a city with hundreds of different programs and places to go is a frustrating burden to place on someone who is in need of safety and healing. Further, we want to raise the bar: people shouldn't be placed in just any shelter, they should go to the shelter best for them based on their identity and lifestyle preferences.
70% of homeless individuals have cellphones, compared to 85% of the rest of the population; homeless individuals are digitally connected more than ever before, especially through low-bandwidth mediums like voice and SMS. We recognized an opportunity to innovate for homeless individuals and make the process for finding a shelter simpler; as a result, we could improve public health, social sustainability, and safety for the thousands of Canadians in need of emergency housing.
## What it does
Users connect with the ShelterFirst service via SMS to enter a matching system that 1) identifies the shelters they are eligible for, 2) prioritizes shelters based on the user's unique preferences, 3) matches individuals to a shelter based on realtime availability (which was never available before) and the calculated priority and 4) provides step-by-step navigation to get to the shelter safely.
Shelter managers can add their shelter and update the current availability of their shelter on a quick, easy to use front-end. Many shelter managers are collecting this information using a simple counter app due to COVID-19 regulations. Our counter serves the same purpose, but also updates our database to provide timely information to those who need it. As a result, fewer individuals will be turned away from shelters that didn't have room to take them to begin with.
## How we built it
We used the Twilio SMS API and webhooks written in express and Node.js to facilitate communication with our users via SMS. These webhooks also connected with other server endpoints that contain our decisioning logic, which are also written in express and Node.js.
We used Firebase to store our data in real time.
We used the Google Maps Directions API to calculate which shelters were closest, prioritize those for matching, and give users step-by-step directions to the nearest shelter. We capture users' locations through natural language, so it's simple to communicate where you are even without access to location services.
Lastly, we built a simple web system for shelter managers using HTML, SASS, JavaScript, and Node.js that updated our data in real time and allowed for new shelters to be entered into the system.
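A hedged sketch of the distance-ranking step, using the `googlemaps` Python client (our real logic runs in Node.js, and the API key is a placeholder):

```python
# pip install googlemaps
import googlemaps

gmaps = googlemaps.Client(key="YOUR_MAPS_KEY")  # placeholder key

def closest_shelters(user_address, shelters):
    """Sort eligible shelters by walking distance from the user's location.

    `shelters` is a list of (name, address) tuples from our database."""
    def distance_m(dest):
        routes = gmaps.directions(user_address, dest, mode="walking")
        return routes[0]["legs"][0]["distance"]["value"] if routes else float("inf")
    return sorted(shelters, key=lambda s: distance_m(s[1]))
```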
## Challenges we ran into
One major challenge was the logic of the SMS communication. We had four different outgoing message categories (statements, prompting questions, demographic questions, and preference questions), and shifting between these depending on user input was initially difficult to conceptualize and implement. Another challenge was collecting the distance information for each shelter and sorting by distance, since the response from the Directions API was initially confusing. Lastly, building the custom decisioning logic that matches users to the best shelter for them was an interesting challenge.
## Accomplishments that we're proud of
We were able to build a database of potential shelters in one consolidated place, which is something the city of London doesn't even have readily available. That itself would be a win, but we were able to build on this dataset by allowing shelter administrators to update their availability with just a few clicks of a button. This information saves lives, as it prevents homeless individuals from wasting their time going to a shelter that was never going to let them in due to capacity constraints, which often forced homeless individuals to miss the cutoff for other shelters and sleep on the streets. Being able to use this information in a custom matching system via SMS was a really cool thing for our team to see - we immediately realized its potential impact and how it could save lives, which is something we're proud of.
## What we learned
We learned how to use Twilio SMS APIs and webhooks to facilitate communications and connect to our business logic, sending out different messages depending on the user's responses. In addition, we taught ourselves how to integrate the webhooks to our Firebase database to communicate valuable information to the users.
This experience taught us how to use multiple Google Maps APIs to get directions and distance data for the shelters in our application. We also learned how to handle several interesting edge cases with our database since this system uses data that is modified and used by many different systems at the same time.
## What's next for ShelterFirst
One addition to make could be to integrate locations for other basic services like public washrooms, showers, and food banks to connect users to human rights resources. Another feature that we would like to add is a social aspect with tags and user ratings for each shelter to give users a sense of what their experience may be like at a shelter based on the first-hand experiences of others. We would also like to leverage the Twilio Voice API to make this system accessible via a toll free number, which can be called for free at any payphone, reaching the entire homeless demographic.
We would also like to use Raspberry Pis and/or Arduinos with turnstiles to create a cheap system for shelter managers to automatically collect live availability data. This would ensure the occupancy data in our database stays up to date and is seamless to collect from otherwise busy shelter managers. Lastly, we would like to integrate with municipalities' "smart cities" initiatives to gather more robust data and make this system more accessible and well known.
|
## Inspiration 💡
*An address is a person's identity.*
In California, there are over 1.2 million vacant homes, yet more than 150,000 people (homeless population in California, 2019) don't have access to a stable address. Without an address, people lose access to government benefits (welfare, food stamps), healthcare, banks, jobs, and more. As the housing crisis continues to escalate and worsen throughout COVID-19, a lack of an address significantly reduces the support available to escape homelessness.
## This is Paper Homes: Connecting you with spaces so you can go places. 📃🏠
Paper Homes is a web application designed for individuals experiencing homelessness to get matched with an address donated by a property owner.
**Part 1: Donating an address**
Housing associations, real estate companies, and private donors will be our main sources of address donations. As a donor, you can sign up to donate addresses either manually or via CSV, and later view the addresses you donated and the individuals matched with them in a dashboard.
**Part 2: Receiving an address**
To mitigate security concerns and provide more accessible resources, Paper Homes will be partnering with California homeless shelters under the “Paper Homes” program. We will communicate with shelter staff to help facilitate the matching process and ensure operations run smoothly.
When signing up, a homeless individual can provide ID; if they don't have any form of ID, we facilitate the entire process of getting one, with pre-filled application forms. Afterwards, they immediately get matched with a donated address! They can then access a dashboard with any documents they need (i.e. applying for a birth certificate, SSN, or California ID Card, and registering their address with the government, all of which are free in California). During onboarding they can also set up mail forwarding ($1/year, funded by NPO grants and donations) to the homeless shelter they are associated with.
Note: We are solely providing addresses for people, not a place to live. Addresses will expire in 6 months to ensure our database is up to date with in-use addresses as well as mail forwarding, however people can choose to renew their addresses every 6 months as needed.
## How we built it 🧰
**Backend**
We built the backend in Node.js, with routes written in the Express.js framework and connected to our Firestore database. We used Selenium and PDF-editing packages to let users download filled-out PDF forms, and Selenium to apply for documents on behalf of users.
**Frontend**
We built a Node.js webpage to demo our Paper Homes platform, using React.js, HTML and CSS. The platform is made up of two main parts: the donor's side and the recipient's side. The front end includes a login/signup flow that populates and updates our Firestore database, and each side has its own dashboard. The donor side lets the user add properties to donate and manage them (e.g., mark a property as no longer vacant, or see whether the address is in use). The recipient's side shows the address provided to the user, steps to get any missing IDs, and so on.
## Challenges we ran into 😤
There were a lot of non-technical challenges. Getting all the correct information onto the website was difficult because the information we needed was spread across the internet. In addition, it was the group's first time using Firebase, so we had some struggles getting it set up and running. Also, some of our group members were relatively new to React, so there was a learning curve in understanding the workflow, routing, and front-end design.
## Accomplishments & what we learned 🏆
In just one weekend, we got a functional prototype of what the platform would look like. We have functional user flows for both donors and recipients that are fleshed out with good UI. The team learned a great deal about building web applications along with using firebase and React!
## What's next for Paper Homes 💭
Since our prototype is geared towards residents of California, the next step is to expand to other states! As each state has their own laws with how they deal with handing out ID and government benefits, there is still a lot of work ahead for Paper Homes!
## Ethics ⚖
In California alone, there are over 150,000 people experiencing homelessness. These people find it significantly harder to find employment, receive government benefits, or even vote without proper identification. The biggest hurdle is that many of these services are linked to an address, and since they do not have a permanent address to receive mail at, they are locked out of these essential services. We believe it is ethically wrong for us as a society not to act against the trap that US government systems have created, one that makes it almost impossible to escape homelessness. And this is not a small problem. An address is no longer just a location; it's now a de facto means of identification. If people become homeless, they are cut off from the basic services they need to recover.
People experiencing homelessness also encounter other difficulties. Getting your first piece of ID is notoriously hard because most IDs require an existing form of ID. In California, there are new laws to help with this problem, but they are new and not widely known. While these laws reduce the barriers to getting an ID, without knowing the processes, having the right forms, and getting the right signatures from the right people, it can take over 2 years to get an ID.
Paper Homes attempts to solve these problems by providing a method for people to obtain essential pieces of ID, along with allowing people to receive a proxy address to use.
As of the 2018 census, there are 1.2 million vacant houses in California. Our platform allows for donors with vacant properties to allow people experiencing homelessness to put down their address to receive government benefits and other necessities that we take for granted. With the donated address, we set up mail forwarding with USPS to forward their mail from this donated address to a homeless shelter near them.
With proper identification and a permanent address, people experiencing homelessness can now vote, apply for government benefits, and apply for jobs, greatly increasing their chance of finding stability and recovering from this period of instability.
Paper Homes unlocks access to the services needed to recover from homelessness. They will be able to open a bank account, receive mail, see a doctor, use libraries, get benefits, and apply for jobs.
However, we recognize the need to protect a person's data and acknowledge that the use of an online platform makes this difficult. Additionally, while over 80% of people experiencing homelessness have access to a smartphone, access to this platform is still somewhat limited. Nevertheless, we believe that a free and highly effective platform can bring a large amount of benefit. So long as we prioritize the needs of people experiencing homelessness first, we will be able to greatly help rather than harm them.
There are some ethical considerations that still need to be explored:
We must ensure that each user’s information security and confidentiality are of the highest importance. Given that we will be storing sensitive and confidential information about the user’s identity, this is top of mind. Without it, the benefit that our platform provides is offset by the damage to their security. Therefore, we will be keeping user data 100% confidential when receiving and storing by using hashing techniques, encryption, etc.
Secondly, as mentioned previously, while this will unlock access to services needed to recover from homelessness, there are some segments of the overall population that will not be able to access these services due to limited access to the internet. While we currently have focused the product on California, US where access to the internet is relatively high (80% of people facing homelessness have access to a smartphone and free wifi is common), there are other states and countries that are limited.
In addition to the ideas mentioned above, some next steps would be to design a proper user and donor consent form and agreement that both supports users’ rights and removes any concern about the confidentiality of the data. Our goal is to provide means for people facing homelessness to receive the resources they need to recover and thus should be as transparent as possible.
## Sources
[1](https://www.cnet.com/news/homeless-not-phoneless-askizzy-app-saving-societys-forgotten-smartphone-tech-users/#:%7E:text=%22Ninety%2Dfive%20percent%20of%20people,have%20smartphones%2C%22%20said%20Spriggs)
[2](https://calmatters.org/explainers/californias-homelessness-crisis-explained/)
[3](https://calmatters.org/housing/2020/03/vacancy-fines-california-housing-crisis-homeless/)
|
## Inspiration
There are thousands of youth with access to SMS who need food and or a place to stay. Tons of perfectly good food is wasted every day after restaurants, stores and bakeries close. We shouldn't have to wait for drastic legislative changes to fix this: <https://www.change.org/p/parliament-end-food-waste-in-canada>
## What it does
On one side, Hearth provides an SMS service for those who need food or a place to stay, and sends directions to nearest food or shelter resource without needing data/wifi. On the other side, it provides an easy way for restaurants/stores/bakeries to share extra food at the end of the day that would otherwise be wasted.
## How I built it
Hearth is completely serverless, built on stdlib with hosted MongoDB for persistent storage. It uses our own Google Maps stdlib function to get directions and distance matrices to find which resources are closest.
## Challenges I ran into
French parentheses
## Accomplishments that I'm proud of
Stateful SMS interactions using stateless architecture.
Figuring out the Google maps API
Making public stdlib modules
## What I learned
French parentheses do not exist.
## What's next for Hearth
Phone verification to ensure the individual aspect of providing food to those in need is not abused or taken advantage of. A quick digit verification can ensure that people who want to assist the cause are serious.
Subscriptions to SMS notifications for when new listings are posted nearby
Partner with nonprofits to reward businesses that donate their extra food
Expansion to other Canadian cities
|
winning
|
## Inspiration
One of our teammates works part time at Cineplex, and at the end of the day he told us that all their extra food was simply thrown out. This got us thinking: why throw the food out when you could earn revenue with end-of-day sales for people nearby looking for something to eat?
## What it does
Our web app gives restaurants the chance to publish the food items they are selling, with a photo of the food, at a discounted price. Meanwhile, users can see everything in real time and order food directly from the platform. The web app also identifies the items in the food and displays nutrition facts, health benefits, and the pros and cons of each food item directly to the user, and it provides a secure transaction method to pay for the food.
## How I built it
The page was built entirely with HTML, CSS, JavaScript and jQuery. There is a login and signup flow for both the restaurants wanting to sell and the participants wanting to buy food. Once signed up, the entry is stored in Azure, and the app requests access to Android Pay so users can pay for the food. When food is ordered, we use the Clarifai API so that users can see the ingredients, health benefits, nutrition facts, and pros and cons of the food item on their dashboard, along with a photo. This all comes together once the food is delivered by the restaurant.
## Challenges I ran into
One challenge we ran into was getting our database working, as none of us had past experience using Azure. The biggest challenge was our first two ideas: after talking to sponsors, we found out they were too limiting, meaning we had to let them go and keep coming up with new ones. We started hacking late Saturday afternoon, which cut down the time we had to finish.
## Accomplishments that I'm proud of
We are really proud of getting the entire website up and running properly within 20 hours, given that we started late enough, and with enough database problems, that we were at the point of giving up on Sunday morning. We are also very proud of getting the Clarifai API working, as none of us had past experience with Clarifai.
## What I learned
The most important thing we learned from this hackathon was to settle on a concrete idea early on; had we done that this weekend, our project could have included a lot more functionality, benefiting both our users and the restaurants.
## What's next for LassMeal
Our biggest next leap would be modifying the delivery portion. Instead of the restaurant delivering the food, users who sign up for the service could also become deliverers: if they are near the restaurant and heading back toward the buyer's home, they could pick up the food, deliver it, and earn a percentage of the order. Both users and restaurants would then earn money from food that was once a loss when thrown out. Another addition would be turning our Android mockups into a real app, so both users and restaurants could buy and publish food from a mobile device.
|
## Inspiration
As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse the items on the receipt and add them to a database representing your fridge. Using the items you have in your fridge, our app can recommend recipes for dishes for you to make.
## How We Built It
On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data.
On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase.
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask server to Google App Engine and styling in React. We found that it was not possible to write into Google App Engine storage; instead we had to write into Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
|
## Inspiration
A couple of weeks ago, three of us met up at a new Italian restaurant and started going over the menu. It quickly became clear that there were a lot of options, but also that a lot of them didn't match our dietary requirements. And so we thought of Easy Eats, a solution that analyzes the menu for you, showing you what options are available without the disappointment.
## What it does
You first start by signing up to our service through the web app, setting your preferences and linking your phone number. Then, any time you're out (or even while deciding on a place to go), just pull up the Easy Eats contact and send a picture of the menu via text. No internet required!
Easy Eats then does the hard work of going through the menu and comparing the items with your preferences, and highlights options that it thinks you would like, dislike and love!
It then returns the menu to you, and saves you time when deciding your next meal.
Even if you don't have any dietary restrictions, by sharing your preferences Easy Eats will learn what foods you like and suggest better meals and restaurants.
## How we built it
The heart of Easy Eats lies on the Google Cloud Platform (GCP), and the soul is offered by Twilio.
The user interacts with Twilio's APIs by sending and receiving messages; Twilio also initiates some of the API calls that are directed to GCP through Twilio's serverless functions. The user can also interact with Easy Eats through Twilio's chat function or the REST APIs that connect to the front end.
In the background, Easy Eats uses Firestore to store user information, and Cloud Storage buckets to store all images and links sent to the platform. From there the images/PDFs are parsed using either the OCR engine or the Vision AI API (OCR works better with PDFs, whereas Vision AI is more accurate on images). Then the data is passed through the NLP engine (customized for food) to find synonyms for popular dietary restrictions (such as pork byproducts: salami, ham, ...).
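To illustrate the synonym step, here is a simplified sketch; the synonym table and matching rule are stand-ins for the real food-customized NLP engine:

```python
# Toy synonym table - the real engine covers far more terms.
RESTRICTION_SYNONYMS = {
    "pork": {"pork", "salami", "ham", "bacon", "prosciutto"},
    "dairy": {"milk", "cheese", "butter", "cream", "mozzarella"},
}

def violated_restrictions(item_text: str, user_restrictions: set) -> set:
    """Return the user's restrictions that a menu item appears to violate."""
    words = set(item_text.lower().split())
    return {r for r in user_restrictions
            if RESTRICTION_SYNONYMS.get(r, {r}) & words}

print(violated_restrictions("Spaghetti with ham and cream", {"pork", "dairy"}))
# -> {'pork', 'dairy'}
```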
Finally, App Engine glues everything together by hosting the frontend and the backend on its servers.
## Challenges we ran into
This was the first hackathon for a couple of us, but also the first time for any of us to use Twilio. That proved a little hard to work with as we misunderstood the difference between Twilio Serverless Functions and the Twilio SDK for use on an express server. We ended up getting lost in the wrong documentation, scratching our heads for hours until we were able to fix the API calls.
Further, with so many moving parts a few of the integrations were very difficult to work with, especially when having to re-download + reupload files, taking valuable time from the end user.
## Accomplishments that we're proud of
Overall we built a solid system that connects Twilio, GCP, a back end, Front end and a database and provides a seamless experience. There is no dependency on the user either, they just send a text message from any device and the system does the work.
It's also special to us as we personally found it hard to find good restaurants that match our dietary restrictions, it also made us realize just how many foods have different names that one would normally google.
## What's next for Easy Eats
We plan on continuing development by suggesting local restaurants that are well suited for the end user. This would also allow us to monetize the platform by giving paid-priority to some restaurants.
There's also a lot to be improved in terms of code efficiency (I think we have O(n⁴) in one of the functions, ahah...) to make this a smoother experience.
Easy Eats will change restaurant dining as we know it. Easy Eats will expand its services and continue to make life easier for people, looking to provide local suggestions based on your preference.
|
partial
|
## Inspiration
The memory palace, also known as the method of loci, is a technique used to memorize large amounts of information, such as long grocery lists or vocabulary words. First, think of a familiar place in your life. Second, imagine the sequence of objects from the list along a path leading around your chosen location. Lastly, take a walk along your path and recall the information that you associated with your surroundings. It's quite simple, but extraordinarily effective. We've seen tons of requests on Internet forums for a program that can generate a simulator to make it easier to "build" the palace, so we decided to develop an app that satisfies this demand — and for our own practicality, too.
## What it does
Our webapp begins with a list provided by the user. We extract the individual words from the list and generate random images of these words from Flickr, a photo-sharing website. Then, we insert these images into a Google Streetview map that the user can walk through. The page displays the Google Streetview with the images. When walking near a new item from his/her list, a short melody (another mnemonic trick) is played based on the word. As an optional feature of the program, the user can take the experience to a whole new level through Google Cardboard by accessing the website on a smart device.
## How we built it
We started by searching for two APIs: one that allows for 3D interaction with an environment, and one that can find image URLs off the web based on Strings. For the first, we used Google Streetview, and for the second, we used a Flickr API. We used the Team Maps Street Overlay Demo as a jumping off point for inserting images into street view.
We used JavaScript, HTML, and CSS.
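As a rough sketch of the word-to-image lookup (written in Python for brevity - the app does this in JavaScript, and the API key is a placeholder):

```python
import requests

def flickr_image_urls(word: str, api_key: str, count: int = 3) -> list:
    resp = requests.get("https://api.flickr.com/services/rest/", params={
        "method": "flickr.photos.search",
        "api_key": api_key,       # placeholder - supply your own key
        "text": word,
        "per_page": count,
        "format": "json",
        "nojsoncallback": 1,
    })
    photos = resp.json()["photos"]["photo"]
    # Build direct image URLs from the returned photo records.
    return [f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg"
            for p in photos]
```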
## Challenges we ran into
All of us are very new to JavaScript. It was a struggle to get different parts of the app to interact with each other asynchronously.
## Accomplishments that we're proud of
* Building a functional web app with no prior experience
* Creating melodies based on Strings
* Virtual reality rendering using Google Cardboard
* Website design
## What we learned
JavaScript, HTML, CSS
## What's next for Souvenir
* Mobile app
* More accurate image search
* Integrating jingles
|
## Inspiration
Living in the big city, we're often torn between the desire to get more involved in our communities and the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights at a glance. This application enables the user to take a photo of a noticeboard filled with posters and, after specifying their preferences, surfaces the events predicted to be of highest relevance to them.
## What it does
Our application uses computer vision and natural language processing to filter notice board information, delivering pertinent and relevant information to our users based on their selected preferences. This mobile application lets users first choose the categories they are interested in, then either take or upload photos, which are processed using Google Cloud APIs. The labels generated by the APIs are compared with the chosen user preferences to display only applicable postings.
## How we built it
The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then the user is prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision Text Detection to obtain blocks of text, which are then labelled with the Google Natural Language API. The categories this returns are compared to the user's preferences, and matches are returned to the user.
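A hedged sketch of the labelling-and-matching step; the matching rule here is a simplification of our actual comparison:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()  # needs GCP credentials

def categories_for(text: str) -> set:
    doc = language_v1.Document(content=text,
                               type_=language_v1.Document.Type.PLAIN_TEXT)
    return {c.name for c in client.classify_text(document=doc).categories}

def matches_preferences(poster_text: str, preferences: set) -> bool:
    # A poster matches if any returned category mentions a chosen interest.
    cats = categories_for(poster_text)
    return any(pref.lower() in cat.lower() for cat in cats for pref in preferences)
```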
## Challenges we ran into
One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken as being part of the same paragraph. The JSON object had many subfields, which took a while to make sense of from the terminal in order to parse it properly.
We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, finding the proper method of comparing categories to labels before the final component is rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation.
## Accomplishments that we're proud of
We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user.
## What we learned
We were at most familiar with ReactJS- all other technologies were new experiences for us. Most notably were the opportunities to learn about how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances between them as we passed user data between them. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols.
## What's next for notethisboard
Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity. The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user manually, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board to return more reliable results. The app can also be extended to identifying logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input.
|
## Inspiration
The concept for this web-based application is inspired by the idea of connecting memories to specific geographic locations. It allows individuals to share their experiences and historical moments in a unique, interactive way. By pinning virtual time capsules to various locations that others have visited, the app creates a digital bridge between different times and places, fostering a sense of community and shared history.
## What it does
This application enables users to discover and interact with virtual time capsules at specific locations using their mobile devices. When users are physically present at a capsule's location, the app reveals the content, offering a glimpse into past experiences and memories shared by others. Users can filter these memories by year, allowing them to see how a location has changed over time. The time capsules, created and contributed by users, enrich the app's content, making each visit to a location a unique and personal experience.
## How we built it
We developed a sophisticated web application by integrating a variety of powerful technologies and programming languages. For the front-end, we utilized Streamlit, a modern and interactive framework that greatly simplified the creation of user interfaces. The backend of our application was built using a combination of Python, SQL, and Node.js, forming a robust and scalable foundation. To enhance the functionality of our app, we incorporated the Geocoder API, which allowed us to implement precise location-based features effectively. For our database needs, we chose SQLite3 due to its lightweight nature and ease of integration. Node.js played a crucial role in our project, acting as the interface between our front-end and back-end components, ensuring smooth communication and data flow throughout the application. This combination of technologies enabled us to create a dynamic and user-friendly web application.
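A minimal sketch of the location-based capsule lookup; the table and column names are assumptions, and the proximity check is simplified to a bounding box:

```python
import sqlite3
import streamlit as st

def capsules_near(lat: float, lon: float, year: int, radius: float = 0.001):
    conn = sqlite3.connect("capsules.db")  # assumed database file
    rows = conn.execute(
        """SELECT title, content FROM capsules
           WHERE ABS(lat - ?) < ? AND ABS(lon - ?) < ? AND year = ?""",
        (lat, radius, lon, radius, year),
    ).fetchall()
    conn.close()
    return rows

year = st.slider("Filter memories by year", 1990, 2024, 2020)
for title, content in capsules_near(43.47, -80.54, year):  # example coordinates
    st.subheader(title)
    st.write(content)
```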
## Challenges we ran into
In this project, one of the most challenging aspects we encountered was the integration of the front end and back end components, primarily due to the fact that they were built using different programming languages. This discrepancy in language frameworks required a meticulous approach to ensure seamless communication and data exchange between the front end and back end systems. We had to implement robust interface protocols and carefully design our APIs to bridge the gap between the diverse languages. The process involved not only technical expertise but also a deep understanding of how each part of the application interacts with others. Despite these challenges, our team managed to create a cohesive and efficient system, demonstrating our ability to overcome complex integration hurdles in web application development.
## Accomplishments that we're proud of
Throughout the hackathon, our team achieved significant milestones that we are immensely proud of. One of the standout accomplishments was our deep dive into Streamlit and its various components. We not only mastered the basics but also explored advanced features, significantly enhancing our skill set with this cutting-edge tool. Additionally, we made considerable strides in utilizing GitHub Copilot. This AI-powered coding assistant proved to be a game-changer, streamlining our development process and boosting our coding efficiency. Learning to leverage Copilot effectively allowed us to write more complex code faster and with greater accuracy. These achievements in both understanding Streamlit and harnessing the power of GitHub Copilot are testaments to our team's dedication to continuous learning and innovation in the realm of software development.
## What we learned
During our recent project, we gained substantial insights into front-end development using Streamlit, a Python-based web framework. This learning experience was enlightening in terms of understanding both the advantages and constraints inherent in utilizing a Python-centric approach for web development. We delved into the capabilities of Streamlit, appreciating its intuitive design and the ease with which it allows for the creation of interactive, data-rich web pages. Concurrently, we also became acutely aware of its limitations, particularly when it comes to certain aspects of front-end customization and performance optimization. This comprehensive exploration of Streamlit not only enhanced our technical skills but also deepened our understanding of the practical implications of choosing specific web frameworks in the context of software development.
## What's next for nostalGEO
As nostalGEO moves forward, we have an exciting roadmap for scaling and enhancing the application. A key focus will be on expanding our infrastructure to accommodate a larger user base, ensuring that our platform can efficiently handle increased traffic and data. We're also planning to transition to a more sophisticated JavaScript-based front end, which will allow for greater flexibility and a more dynamic user experience. Deployment is another critical area, with plans to migrate to a cloud-based database solution, providing us with the scalability and reliability needed for our growing application. In terms of features, we're looking to enrich nostalGEO with additional functionalities, making it more comprehensive and user-friendly. Improving loading times is high on our agenda, aiming for a faster and more responsive application. Lastly, we're excited about integrating AI capabilities into nostalGEO, which will open up new possibilities for user interaction and content personalization, setting a new standard for our application in the realm of digital nostalgia and geographical exploration.
|
winning
|
## Inspiration
Video games evolved when the Xbox Kinect was released in 2010, but for some reason we reverted back to controller-based games. We are here to bring back the amazingness of movement-controlled games with a new twist: re-innovating how mobile games are played!
## What it does
AR.cade uses a body-part detection model to track movements that correspond to controls for classic games run in an online browser. The user can choose from a variety of classic games, such as Temple Run and Super Mario, and play them with their body movements.
## How we built it
* The first step was setting up OpenCV and importing a body-part tracking model from Google MediaPipe
* Next, based on the positions of, and angles between, the landmarks, we created classification functions that detected specific movements, such as when an arm or leg was raised or when the user jumped
* Then we correlated these movement identifications with keybinds on the computer - for example, raising the right arm corresponds to the right arrow key (see the sketch after this list)
* We then embedded some online games of our choice into our front end, and when the user makes a movement that corresponds to a certain key, the respective in-game action happens
* Finally, we created a visually appealing and interactive frontend/loading page where the user can select which game they want to play
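Here is the sketch referenced above - a stripped-down version of the movement-to-keybind loop, with illustrative thresholds:

```python
import cv2
import mediapipe as mp
import pyautogui

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)

with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
            shoulder = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
            # Image y grows downward, so a raised arm has the wrist above the shoulder.
            if wrist.y < shoulder.y:
                pyautogui.press("right")  # raised right arm -> right arrow (no debouncing here)
```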
## Challenges we ran into
A large challenge we ran into was embedding the video output window into the front end. We tried passing it through an API, and it worked with a basic plain video; the difficulties arose when we tried to pass the video with the body-tracking model overlaid on it.
## Accomplishments that we're proud of
We are proud of the fact that we are able to have a functioning product in the sense that multiple games can be controlled with body part commands of our specification. Thanks to threading optimization there is little latency between user input and video output which was a fear when starting the project.
## What we learned
* We learned that it is possible to embed other websites (such as simple games) into our own local HTML sites.
* We learned how to map landmark node positions into meaningful movement classifications, considering positions and angles.
* We learned how to resize, move, and give priority to external windows such as the video output window.
* We learned how to run Python files from JavaScript to make automated calls to further processes.
## What's next for AR.cade
The next steps for AR.cade are to implement a more accurate body tracking model in order to track more precise parameters. This would allow us to scale our product to more modern games that require more user inputs such as Fortnite or Minecraft.
|
# see our presentation [here](https://docs.google.com/presentation/d/1AWFR0UEZ3NBi8W04uCgkNGMovDwHm_xRZ-3Zk3TC8-E/edit?usp=sharing)
## Inspiration
Without purchasing hardware, there are few ways to have contact-free interactions with your computer.
To make such technologies accessible to everyone, we created one of the first touchless, hardware-free means of computer control by employing machine learning and gesture-analysis algorithms. Additionally, we wanted to make it as accessible as possible in order to reach a wide demographic of users and developers.
## What it does
Puppet uses machine learning techniques such as k-means clustering to distinguish between different hand signs. It then translates those hand signs into computer inputs such as keystrokes or mouse movements, giving the user full control without a physical keyboard or mouse.
## How we built it
Using OpenCV to capture the user's camera input and MediaPipe to parse hand data, we capture the relevant features of a user's hand. Once these features are extracted, they are fed into the k-means clustering algorithm (built with scikit-learn) to distinguish between different types of hand gestures. The hand gestures are then translated into specific computer commands, pairing AppleScript with PyAutoGUI to provide the user with the Puppet experience.
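A condensed sketch of the clustering step; the random arrays stand in for real captured frames, and the cluster count is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def hand_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (21, 2) array of MediaPipe hand points, wrist at index 0.
    Returns the 20 distances of each landmark from the wrist."""
    return np.linalg.norm(landmarks[1:] - landmarks[0], axis=1)

# Stand-in training data; in Puppet these come from captured camera frames.
rng = np.random.default_rng(0)
frames = rng.random((100, 21, 2))
X = np.stack([hand_features(f) for f in frames])

model = KMeans(n_clusters=4, random_state=0).fit(X)  # one cluster per gesture
gesture = model.predict(hand_features(frames[0]).reshape(1, -1))[0]
print("gesture cluster:", gesture)
```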
## Challenges we ran into
One major issue we ran into was that, in the first iteration of our k-means clustering algorithm, the clusters were colliding. We fed the model the distance of each landmark on your hand from your wrist and designed it to return the relevant gesture. Though we considered changing this to a coordinate-based system, we settled on making the hand gestures more distinct within our current distance system. This was ultimately the best solution because it allowed us to keep a small model while increasing accuracy.
Mapping a finger position on camera to a point for the cursor on the screen was not as easy as expected. Because of inaccuracies in the hand detection among other things, the mouse was at first very shaky. Additionally, it was nearly impossible to reach the edges of the screen because your finger would not be detected near the edge of the camera's frame. In our Puppet implementation, we constantly *pursue* the desired cursor position instead of directly *tracking it* with the camera. Also, we scaled our coordinate system so it required less hand movement in order to reach the screen's edge.
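A minimal sketch of that pursue-and-scale idea; `ALPHA` and `SCALE` are illustrative values, not our tuned constants:

```python
import pyautogui

screen_w, screen_h = pyautogui.size()
cursor_x, cursor_y = screen_w / 2, screen_h / 2
ALPHA = 0.25  # fraction of the remaining distance covered each frame
SCALE = 1.4   # >1 so small hand motions can still reach the screen edges

def step_cursor(finger_x: float, finger_y: float):
    """finger_x, finger_y: normalized [0, 1] camera coordinates of the fingertip."""
    global cursor_x, cursor_y
    # Scale about the centre so the edges are reachable, then clamp to [0, 1].
    tx = min(max((finger_x - 0.5) * SCALE + 0.5, 0.0), 1.0) * screen_w
    ty = min(max((finger_y - 0.5) * SCALE + 0.5, 0.0), 1.0) * screen_h
    # Pursue: move a fraction of the way toward the target instead of jumping to it.
    cursor_x += ALPHA * (tx - cursor_x)
    cursor_y += ALPHA * (ty - cursor_y)
    pyautogui.moveTo(cursor_x, cursor_y)
```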
## Accomplishments that we're proud of
We are proud of the gesture recognition model and motion algorithms we designed. We also take pride in the organization and execution of this project in such a short time.
## What we learned
A lot was discovered about the difficulties of utilizing hand gestures. From a data perspective, many of the gestures look very similar and it took us time to develop specific transformations, models and algorithms to parse our data into individual hand motions / signs.
Also, our team members possess diverse and separate skillsets in machine learning, mathematics and computer science. We can proudly say it required nearly all three of us to overcome any major issue presented. Because of this, we all leave here with a more advanced skillset in each of these areas and better continuity as a team.
## What's next for Puppet
Right now, Puppet can control presentations, the web, and your keyboard. In the future, puppet could control much more.
* Opportunities in education: Puppet provides a more interactive experience for controlling computers. This feature can be potentially utilized in elementary school classrooms to give kids hands-on learning with maps, science labs, and language.
* Opportunities in video games: As Puppet advances, it could provide game developers a way to create games where the user interacts without a controller. Unlike technologies such as the Xbox Kinect, it would require no additional hardware.
* Opportunities in virtual reality: Cheaper VR alternatives such as Google Cardboard could be paired with Puppet to create a premium VR experience with at-home technology. This could be used in both examples described above.
* Opportunities in hospitals / public areas: People have been especially careful about avoiding germs lately. With Puppet, you won't need to touch any keyboard and mice shared by many doctors, providing a more sanitary way to use computers.
|
## Inspiration
Our inspiration for this project stemmed from discovering unique methods of input management, especially with the advent of tracking software. Inspired by the story of a past HTN competitor who used a unique method to let users interact with their virtual world differently, we devised a way to provide seamless input through hand gestures, offering many different input combinations using a tool that most people already use every day.
## What it does
The model places digital nodes on the major joints of your hands and provides their coordinates relative to the screen. Using these points, an overall action/gesture is determined by using ratios and differences to work out which fingers are open or closed. With different patterns of open and closed fingers, the program determines what gesture each player is making.
## How we built it
We used MediaPipe and OpenCV for the machine-learning model that tracks hands. The interpretation of gestures was custom, deduced mathematically from the distances and ratios between points on the hand. We used Aseprite for assets and Godot for assembling the games and the tracking software.
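A simplified sketch of the open/closed-finger test; the tip-versus-middle-joint rule and the gesture table are illustrative, and the thumb is omitted for brevity:

```python
import math

# MediaPipe hand-landmark indices: fingertips and PIP (middle) joints.
TIP = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIP = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}
WRIST = 0

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def open_fingers(lm) -> frozenset:
    """A finger counts as open when its tip is farther from the wrist
    than its middle (PIP) joint."""
    return frozenset(name for name in TIP
                     if dist(lm[TIP[name]], lm[WRIST]) > dist(lm[PIP[name]], lm[WRIST]))

GESTURES = {frozenset(): "fist",
            frozenset({"index"}): "point",
            frozenset({"index", "middle"}): "peace"}  # example mapping

def classify(lm) -> str:
    return GESTURES.get(open_fingers(lm), "unknown")
```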
## Challenges we ran into
A major challenge we ran into was converting a .py file to a .exe file. Despite the apparent simplicity of the process, the dependencies led to two hours of errors and failures. We ran into many unexpected errors that prevented us from porting our finished tracking software into Godot, the game engine we used. To circumvent this, we took a parallel approach in which the MediaPipe program detected and deduced the hand gestures, placing the data in a separate file that the engine read in real time (a minimal sketch of this bridge follows). Despite the lengthy and technical process that turned camera input into the actions of the characters on screen, the program had barely noticeable latency, allowing for accurate gameplay.
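The bridge itself is small; here is a minimal sketch of it (the file name and JSON format are our own conventions, not anything Godot-specific):

```python
import json
import os
import tempfile

GESTURE_FILE = "gesture_state.json"  # the engine polls this file each frame

def publish_gesture(player: int, gesture: str):
    # Write atomically so the engine never reads a half-written file.
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump({"player": player, "gesture": gesture}, f)
    os.replace(tmp, GESTURE_FILE)
```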
## Accomplishments that we're proud of
Being able to successfully use the model and put it to use practically through critical and creative thinking was a very gratifying experience, in addition to learning more about game design and creation through creating our own assets and putting everything together.
## What we learned
We learned the basics of using ML models in conjunction with libraries like opencv for reading and writing data.
## What's next for Pipedream
In the future, we may explore more paths for models, not only restricted to hand movement but potentially face and full-body tracking as well. It would also be interesting to explore practical applications of this technology beyond entertainment, such as daily convenience and services.
|
winning
|
## Inspiration
So, like every hackathon we’ve done in the past, we wanted to build a solution based on the pain points of actual, everyday people. When we decided to pursue the Healthtech track, we called the nurses and healthcare professionals in our lives. To our surprise, they all seemed to have the same gripe – that there was no centralized system for overviewing the procedures, files, and information about specific patients in a hospital or medical-practice setting. Even a quick look through Google showed that there wasn’t any new technology really addressing this particular issue. So we created UniMed - united medical - to offer an innovative alternative to the outdated software that exists – or, for some practices, pen and paper.
While this isn’t necessarily the sexiest idea, it’s probably one of the most important issues to address for healthcare professionals. Looking over the challenge criteria, we couldn’t come up with a more fitting solution – what comes to mind immediately is the criterion about increasing practitioner efficiency. The ability to have a true CMS – not client management software, but CARE management software – eliminates any need for annoying patients with a barrage of questions they’ve answered a hundred times, and allows nurses and doctors to leave observations and notes in a system where they can be viewed from other care workers going forward.
## What it does
From a technical, data-flow perspective, this is the gist of how UniMed works: Solace connects our React-based front end to our database. While we would normally have built a SQL database, or perhaps gone the NoSQL route and leveraged MongoDB, due to time constraints we’re using JSON for simplicity's sake. So while JSON is acting, typically, like a REST API, we’re pulling real-time data with Solace’s functionality. Any time an event-based subscription is triggered – for example, a nurse updates a patient’s records reporting that their post-op check-up went well and they should continue on their current dosage of medication – that value, in this case a comment value, is passed to that event (updating our React app by populating the comments section of the patient’s record with a new comment).
## How we built it
We all learned a lot at this hackathon – Jackson had some Python experience but learned some HTML5 to design the basic template of our log-in page. I had never used React before, but I spent several hours watching YouTube videos (the React workshop was also very helpful!) and Manny mentored me through some of the React app creation. Augustine is a marketing student, but it turns out he has a really good eye for design, and he was super helpful with mockups and wireframes!
## What's next for UniMed
There are plenty of cool ideas we have for integrating new features - the ability to give patients a smartwatch that monitors their vital signs and pushes that bio-information to their patient "card" in real time would be super cool. It would be great to also integrate scheduling functionality so that practitioners can use our program as the ONLY program they need while they're at work - a complete hub for all of their information and duties!
|
* [Deployment link](https://unifymd.vercel.app/)
* [Pitch deck link](https://www.figma.com/deck/qvwPyUShfJbTfeoPSjVIGX/UnifyMD-Pitch-Deck?node-id=4-71)
## 🌟 Inspiration
Long lists of patient records make it challenging to locate **relevant health data**. This can lead to doctors providing **inaccurate diagnoses** due to insufficient or disorganized information. Unstructured data, such as **progress notes and dictated information**, are not stored properly, and smaller healthcare facilities often **lack the resources** or infrastructure to address these issues.
## 💡 What it does
UnifyMD is a **unified health record system** that aggregates patient data and historical health records. It features an **AI-powered search bot** that leverages a patient's historical data to help healthcare providers make more **informed medical decisions** with ease.
## 🛠️ How we built it
* We started with creating an **intuitive user interface** using **Figma** to map out the user journey and interactions.
* For **secure user authentication**, we integrated **PropelAuth**, which allows us to easily manage user identities.
* We utilized **LangChain** as the large language model (LLM) framework to enable **advanced natural language processing** for our AI-powered search bot.
* The search bot is powered by **OpenAI**'s API to provide **data-driven responses** based on the patient's medical history.
* The application is built using **Next.js**, which provides **server-side rendering** and a full-stack JavaScript framework.
* We used **Drizzle ORM** (Object Relational Mapper) for seamless interaction between the application and our database.
* The core patient data and records are stored **securely in Supabase**.
* For front-end styling, we used **shadcn/ui** components and **TailwindCSS**.
## 🚧 Challenges we ran into
One of the main challenges we faced was working with **LangChain**, as it was our first time using this framework. We ran into several errors during testing, and the results weren't what we expected. It took **a lot of time and effort** to figure out the problems and learn how to fix them as we got more familiar with the framework.
## 🏆 Accomplishments that we're proud of
* Successfully integrated **LangChain** as a new large language model (LLM) framework to **enhance the AI capabilities** of our system.
* Implemented all our **initial features on schedule**.
* Effectively addressed key challenges in **Electronic Health Records (EHR)** with a robust, innovative solution to provide **improvements in healthcare data management**.
## 📚 What we learned
* We gained a deeper understanding of various patient safety issues related to the limitations and inefficiencies of current Electronic Health Record (EHR) systems.
* We discovered that LangChain is a powerful tool for Retrieval-Augmented Generation (RAG) and that it can effectively run SQL queries on our database to optimize data retrieval and interaction (see the sketch below).
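The sketch below follows LangChain's standard SQL-chain pattern; the connection string, model name, and question are placeholders:

```python
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///patients.db")  # placeholder URI
llm = ChatOpenAI(model="gpt-4o-mini")               # any chat model works

chain = create_sql_query_chain(llm, db)   # turns a question into a SQL query
query = chain.invoke({"question": "What medications is patient 42 currently on?"})
results = db.run(query)                   # execute the generated SQL
```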
## 🚀 What's next for UnifyMD
* **Partnership with local clinics** to kick-start our journey into improving **healthcare services** and **patient safety**.
* **Update** the platform with a **speech-to-text** feature to save time and increase **patient and healthcare provider satisfaction**.
|
## Inspiration
A major challenge that hospitals and doctors face daily around the world is collecting the medical history needed to treat a patient accurately. The lack of information might be due to patient negligence, the patient being in a major accident, important details hidden amidst a pile of documents, and so on – and it poses a serious health threat to the patient. From speaking with a healthcare mentor at the hackathon, we learned how inefficient the current process of collecting medical documents about patients is. From this, we knew there was a need for a platform that could aid in the collection of medical history.
Another problem we discovered was how people search for their healthcare related doubts on the internet which leads to a lot of misinformation. So there is a need for a platform that provides verified correct information related to healthcare as well.
## What it does
The main features of our web application are:
* Instant access to critical patient information is provided to the medical staff.
* a one-stop shop for all the past medical documents and records of the patient.
* face recognition feature to search for the patient in case of an accident.
* online prescription can be sent to the patient by verified medical staff only
* ask relevant medical questions and get them answered by verified medical professionals.
* sends important information about the patient to the patient's emergency contact number.
## How we built it
Our project was built using the Python web framework Django, as well as Bootstrap for formatting (and the standard web stack of HTML, CSS, etc.). The website has been built to be as generic and as scalable as possible, given the time constraints.
The first step was to formalise the UI in Canva – to ensure we had a cohesive understanding of the functionality of each page, as well as the access requirements. This process helped shape further development.
We chose to create a limited set of pages to demonstrate the minimum viable product and showcase our idea, but the website can be easily expanded.
In addition, security was an important concern. The Doctors area can only be accessed by users who are part of the Doctors group – a role which we would aim to have verified in the future. These are the only users that can view patient data.
Patients can only see their own data, and files sent to them by a doctor. They cannot access anything else.
This access is controlled by assigning users to group types and using Django decorators to verify a user's group and whether they have been authenticated. This protects all pages with confidential information and can be easily expanded. We also made a third user group – administrators – which we envision to include the people who need access to individual files but should not have access to the patient's full medical information; for example, pharmacists.
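A minimal sketch of that decorator pattern; the view name is illustrative and the view body is omitted:

```python
from django.contrib.auth.decorators import login_required, user_passes_test

def in_group(name):
    """Restrict a view to authenticated members of the given group."""
    def check(user):
        return user.is_authenticated and user.groups.filter(name=name).exists()
    return user_passes_test(check)

@login_required
@in_group("Doctors")
def patient_records(request, patient_id):
    # Only authenticated members of the Doctors group reach this point.
    ...
```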
## Challenges we ran into
The most challenging aspect was understanding which of the features we were trying to implement would actually be clinically relevant across countries. We brainstormed a lot to finalise our ideas, took help from mentors, and did research to decide what we would ultimately work on to make the product as useful as possible to everyone around the world.
## Accomplishments that we're proud of
We have created a product that will help medical professionals immensely around the globe. It provides the whole medical history of a patient at one glance, which is otherwise a frustrating task. This will not only help the doctors to give the necessary care quickly but also improve the treatment provided.
The application provides a sustainable and secure way of maintaining medical records and prescriptions. This feature helps the doctors to instantly transfer the prescriptions and other documents to the patient. The patient can show these documents directly to the pharmacist or the relevant authority.
We also help people across countries to get correct medical advice only from verified medical professionals. This will lead to a significant decrease in the healthcare misinformation that spreads through social media.
## What we learned
We were able to gain more knowledge about the current state of healthcare thanks to our mentor for this hackathon.
We were also able to learn a lot more about Django from working on the project.
## What's next for HealthRepo
The next components for the project would be creating a summary of key medical documents using OCR and Natural Language Processing to quickly provide information about a document.
Another addition to the project would be a QR-code based way of sending information about documents to emphasize security for the patient and doctor and also provide instant verification of the document.
|
partial
|