# giphyFaceSwap
Replace faces in GIFs with your own.
Make sure to check out some snapshots of GIFs made with **GIPHY Face Swap** in the slideshow above.
## Inspiration
GIFs are supposed to be fun and social. Enabling users to customize any of the GIFs from GIPHY's expansive library makes them even more fun and the ability to swap faces with friends or celebrities makes them even more social. Generate a face-swapped GIF and download or share!
## What it Does
Three steps:
* Use the GIPHY API Search Bar to find your favorite GIF with a face in it
* Upload a photo from your computer (selfies work best here, friends)
* Press the face swap button to see the original animated GIF, but now animated with your face on it!
## How I Built It
The front end of the website is built with HTML, CSS, and JavaScript. I also use jQuery to interface with the GIPHY API, which searches the GIPHY database. The back end is coded entirely in Python, using the OpenCV library to analyze the facial structures in the images.
GIFs are well suited to this because they are low-resolution and contain few frames, which lets OpenCV process the GIF frame by frame, creating a custom fit for each frame that accounts for the slight differences in orientation, lighting, skin color, and size between the two faces. The result is extremely fast processing and a remarkably smooth mask.
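For illustration, here is a minimal Python sketch of that kind of frame-by-frame pipeline, using a stock Haar-cascade face detector; the file names and the naive paste step are stand-ins for the project's actual mask fitting and blending:

```python
import cv2
import imageio

# Stock Haar cascade for frontal faces (ships with OpenCV).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Illustrative inputs: the source GIF and the user's selfie.
frames = imageio.mimread("source.gif")   # list of frames as numpy arrays
selfie = cv2.imread("selfie.jpg")

swapped = []
for frame in frames:
    bgr = cv2.cvtColor(frame[:, :, :3], cv2.COLOR_RGB2BGR)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Find faces in this frame; each detection is (x, y, w, h).
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(selfie, (w, h))   # fit the selfie to this frame's face box
        bgr[y:y + h, x:x + w] = face        # naive paste; the real mask blends color and lighting
    swapped.append(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))

imageio.mimsave("swapped.gif", swapped)
```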
## Challenges, Accomplishments, and Learning
I started this project about 12 hours before it was due, after my team left the competition. I started from scratch with my own idea and very little web-dev experience. This weekend I learned how to write jQuery calls, set up a local server, interface with Flask, and access web APIs, with the help of the amazing PennApps mentors from Qualtrics.
## What's Next
After the amazing feedback received from testers, I aim to take GIPHY FaceSwap to mobile. This may be a Facebook Messenger app, Android App, iOS App, or all three! This hackathon was an amazing proof-of-concept that users not only want to have GIPHY FaceSwap on their phones, but that they would even be willing to pay for it. I'm looking forward to developing my mobile coding skills and bringing my hack to the world!
---
## Inspiration
What inspired us to build this application was spreading mental health awareness in connection with the ongoing COVID-19 pandemic around the world. While it is easy to brush off signs of fatigue and emotional stress as just "being tired", oftentimes there is a deeper problem at the root of it. We designed this application to be as approachable and user-friendly as possible and allowed it to scale and rapidly change based on user trends.
## What it does
The project takes a scan of a face from a video stream and interprets that data using machine learning and specially trained models for emotion recognition. Given the facial data, the model outputs the probability of the user's current emotion. After clicking the "Recommend Videos" button, the probability data is exported as an array and processed internally to determine the right query to send to the YouTube API. Once the query is sent and a response is received, the response is validated and the videos are served to the user. This process is scalable, and the videos change as newer ones get released and the YouTube algorithm serves new content. In short, this project identifies your emotions using face detection and suggests videos based on how you feel.
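For illustration only, here is a minimal Python sketch of the kind of probability-to-query mapping described above (the app itself does this in JavaScript, and the emotion labels and query strings below are assumptions, not the project's actual tables):

```python
# Map the dominant detected emotion to a YouTube search query.
# Labels loosely follow face-api.js's expression set; the queries are made-up examples.
QUERY_BY_EMOTION = {
    "happy": "upbeat music playlist",
    "sad": "calming guided meditation",
    "angry": "relaxing nature sounds",
    "neutral": "popular music this week",
}

def pick_query(probabilities: dict) -> str:
    """probabilities: e.g. {"happy": 0.72, "sad": 0.05, ...} from the detector."""
    dominant = max(probabilities, key=probabilities.get)
    return QUERY_BY_EMOTION.get(dominant, "feel good videos")

print(pick_query({"happy": 0.72, "sad": 0.05, "angry": 0.03, "neutral": 0.20}))
```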
## How we built it
The project was built as a React app leveraging face-api.js to detect the emotions and youtube-music-api for the music recommendations. The UI was designed using Material UI.
The project was built on the [React](https://reactjs.org/) framework, powered by [NodeJS](https://nodejs.org/en/). While it is possible to simply link to the `package.json` file, the core libraries that were used were the following:
* **[Redux](https://react-redux.js.org/)**
* **[Face-API](https://justadudewhohacks.github.io/face-api.js/docs/index.html)**
* **[GoogleAPIs](https://www.npmjs.com/package/googleapis)**
* **[MUI](https://mui.com/)**
* The rest were sub-dependencies that were installed automagically using [npm](https://www.npmjs.com/)
## Challenges we ran into
We faced many challenges throughout this hackathon, both programming and logistical; most of them involved dealing with React and its handling of objects and props. Here are some of the hardest challenges we encountered with React while working on the project:
* Integration of `face-api.js`: initially, figuring out how to map the user's face and add a canvas on top of the video stream proved to be a challenge, given that none of us had really worked with that library before.
* Integration of `googleapis`' YouTube API v3: the documentation was not very clear, and it was difficult not only to get the API key required to access the API itself, but also to find the correct URL to properly formulate our search query. Another challenge with this library is that it does not properly communicate its rate limiting. In this case, we did not know we could only make a maximum of 100 requests per day, so we quickly reached our API limit and had to get a new key. Beware!
* Correctly setting the camera refresh interval so that the canvas could update and be displayed to the user. Finding the correct timing and making sure the camera would be disabled when the recommendations are displayed, as well as when switching pages, was a big challenge, as there was no really good documentation or existing solution for what we were trying to do. We ended up implementing it, but the process was filled with hurdles!
* Finding the right theme. It was very important to us from the very start to make the app presentable and easy to use. Because of that, we took a lot of time to carefully select a color palette that users would (hopefully) be pleased by. This required many hours of trial and error, so it took us quite some time to figure out which colors to use, all while working on completing the project we had set out to do at the start of the hackathon.
## Accomplishments that we're proud of
While we did face many challenges and setbacks, as we've outlined above, the results were something we can really be proud of. Going into specifics, here are some of our best and most satisfying moments throughout the challenge:
* Building a well-functioning app with a nice design. This was the initial goal. We did it. We're super proud of the work we put in and the hours we spent debugging and fixing issues, and it filled us with confidence knowing that we were able to plan everything out and implement everything we wanted, given the amount of time that we had. An unforgettable experience, to say the least.
* Solving the API integration issues which plagued us from the start. We knew, once we set out to develop this project, that meddling with APIs was never going to be an easy task, but we were very unprepared for the amount of pain we were about to go through with the YouTube API. Part of that was on us: we chose libraries and packages we were not very familiar with, so not only did we have to learn how to use them, we also had to adapt them to our codebase and integrate them into our product. That was quite a challenge, but finally seeing it work after all the long hours we put in was absolutely worth it, and we're really glad it turned out this way.
## What we learned
To keep this section short, here are some of the things we learned throughout the Hackathon:
* How to work with new APIs
* How to debug UI issues and use components to build our applications
* Understand and fully utilize React's suite of packages and libraries, as well as other styling tools such as MaterialUI (MUI)
* Rely on each other's strengths
* And much, much more, but if we kept talking, the list would go on forever!
## What's next for MoodChanger
Well, given how the name **is** *Moodchanger*, there is one thing that we all wish we could change next. The world!
PS: Maybe add file support one day? :pensive:
PPS: Pst! The project is accessible on [GitHub](https://github.com/mike1572/face)!
---
## Inspiration
* You can search for images with words (Google Search)
* You can search for words with images (Google Image Search)
* Why can't you *search for images with images???*
## What it does
* Translates camera image to Giphy search query using Core ML Image Recognition
* Keep tapping your screen to add more GIFs!!!
## Controls
* Long Press: Switch camera mode
* Shutter button: Take photo (and load first GIF)
* Tap: Load another GIF
* Shake phone: Clear photo & gifs, go back to camera mode
## Best at detecting
* Computers
* Sunglasses
* Sneakers
* Water bottles
* Pill bottles
* Phone/iPod
* You tell me.....
## How I built it
* [AV Foundation](https://medium.com/@rizwanm/https-medium-com-rizwanm-swift-camera-part-1-c38b8b773b2) for building custom camera view
* [Inceptionv3](https://developer.apple.com/machine-learning/build-run-models/) for object recognition model ported to Core ML
* [Alamofire](https://github.com/Alamofire/Alamofire) and [SwiftyJSON](https://github.com/SwiftyJSON/SwiftyJSON) for calling the [Giphy API](https://developers.giphy.com) (a sketch of the underlying request follows this list)
* [SwiftyGif](https://github.com/kirualex/SwiftyGif) for displaying GIFs
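The Giphy search call referenced above is a plain HTTP request regardless of the client language; here is a minimal Python sketch of it (the app itself uses Alamofire and SwiftyJSON in Swift, and the API key and label below are placeholders):

```python
import requests

API_KEY = "YOUR_GIPHY_API_KEY"   # placeholder key
label = "sunglasses"             # e.g. the top Core ML prediction

resp = requests.get(
    "https://api.giphy.com/v1/gifs/search",
    params={"api_key": API_KEY, "q": label, "limit": 5},
)
resp.raise_for_status()
# Print the URL of each matching GIF returned by the search.
for gif in resp.json()["data"]:
    print(gif["images"]["original"]["url"])
```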
## Challenges I ran into
1. What to do with inaccurate predictions (just show 'em all!!! It'll be fun!!!)
2. Sometimes hit Giphy's API rate limit after just a few calls (likely too many other hackers were calling their API from the same IP address)
## What I learned
* Classifying images using Core ML/Vision APIs
* Using Giphy API
* Creating a customizable camera module
## What's next for GoofyGiphyCamera
1. Allow the user to select from top 5 predictions
2. Social Sharing
3. Use BulletinBoard context cards to add on-boarding tutorial (not needed now -- will be doing hackathon demo in person)
4. Publication on Apple App Store
---
# GREENTRaiL
## Inspiration
Hiking has exploded in popularity since the pandemic, with more than 80 million Americans hiking in 2022 alone. Hiking has large mental and physical health benefits; however, it can be daunting to select routes as a beginner. It is difficult to imagine how a route would feel before going on it, especially for those without prior experience to draw on.
In addition, hikers often don't take wildlife into account when choosing routes. Animals such as elk have been shown to change their behavior up to a mile away from hiking trails, and this has far-reaching implications for the greater biosphere. With climate change threatening traditional migration paths, increased human activity can be detrimental to these already fragile patterns.
GREENTRaiL is an app that will give users personalized recommendations and help make hiking more eco-friendly.
## What it does
Using biometric and environmental data, GREENTRaiL recommends users hiking trails based on average statistics of others who have completed the hike and synthesizes difficulty ratings. It will also use migratory and wildlife data to suggest less obtrusive hikes to local migratory patterns.
## How we built it
UI/UX prototyping was sketched first traditionally, and then brought into Procreate to develop final color and brand identity. High fidelity wire-framing was then done on Figma, and then the final UI/UX was refined using those prototypes.
GREENTRaiL was coded in Swift and integrates the Terra API to get wearable data and aggregate data from all the people who have taken the trail before.
## Challenges we ran into
All of us were new to Swift, and one of us couldn't run Xcode on their computer at all. Our UX/UI designer had never designed for iOS before either, so there was a bit of a learning curve. Our coders ran into a lot of difficulty integrating the Terra API into the code, as well as general problems with front-end and back-end integration.
## What we learned
We learned how to develop using Swift, prototype for iOS in Figma, and integrate the Terra API.
## What's next for GREENTRaiL
Future areas of development include syncing with other nature apps such as iNaturalist's API and AllTrails to give the user even more comprehensive data on wildlife and qualitative description.
## Figma Design
<https://www.figma.com/file/S9wlv984UYBPaX8IiPqRJe/greentrAIl?type=design&node-id=2%3A87&mode=design&t=mIexhgpxiinAegGd-1>
---
## Inspiration
Throughout quarantine and the global pandemic that we are all currently experiencing, I have begun to reflect on my own life and health in general. We may believe we have a particular ailment when, in fact, it is actually our fear getting the best of us. But how can one be sure that the symptoms they are having are as severe as they seem to be? As a result, we developed the Skin Apprehensiveness Validator and Educator App to help not only maintain our mental balance with regard to paranoia about our own health, but also to help front-line staff tackle the major pandemic that has plagued the planet.
## What it does
The home page is the most critical part of the app. It has a simple user interface that lets the user choose between using an old photo or taking a new photo of any skin issue they may have. They can then pick or take the photo and, after submission, we run our model in the cloud and receive results from it, thanks to Google's Cloud ML Kit. We get an overall diagnosis of what condition our model thinks it is, along with a confidence level. Following that, we have the choice of viewing more information about this disease through a Wikipedia link or sending the result to a specialist for confirmation.
The first machine learning algorithm was built with TensorFlow as a convolutional neural network with several hidden layers. It uses the Adam optimizer and ReLU activations. We also improved the accuracy by using cross-validation.
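A minimal Keras sketch of the kind of model described, assuming 224×224 RGB inputs; the layer sizes and class count are illustrative, not the project's actual architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # assumption: number of skin conditions in the training set

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),          # hidden layer with ReLU, as described
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Adam optimizer, as mentioned above; cross-entropy loss for the multi-class output.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```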
The second section of the app is the specialist section, where you can view doctors who can check your diagnosis online. You may send them a text or email, or leave a voicemail, to request a consultation and learn more about your diagnosis. This covers questions like what my treatment should be, when I should begin, where I should go, how much it will cost, and others.
The third section of the app is the practices section, which helps you locate dermatology practices in your area that can assist you with your care. You can see a variety of information about a location, including its average Google Reviews rating, total number of reviews, phone number, address, and other relevant details. You also get a glimpse into how their office is decorated. You can also tap the location to be redirected to Google Maps directions to that location.
The tips section of the app is where you can find links from reputable sources that offer advice on care, diagnosis, or skin disorders in your geographic area.
## How we built it
ReactJS, Expo, Google Cloud ML Kit, TensorFlow, Practo API, Places API
---
## Inspiration
Our inspiration was the recent heatwave, a byproduct of the global warming that has been happening over the past few decades. As an individual, it is difficult to make substantial change in the fight against climate change. The issues are highly systemic and influenced by large corporations whose energy usage overshadows the individual's. However, once connected to a large enough community, people can join together and make waves, which is what this app is all about.
## What it does
Users can select a region of the world with unique environmental issues, such as water shortages, plastic pollution, or deforestation, and complete daily tasks that help contribute to resolving the issue. Some tasks include researching answers to environmental questions or reducing your carbon footprint by eating vegan for a meal. The app aims to create a community of eco-conscious members, allowing them to connect and organize wide-scale events. You can also compete with them through the leaderboard system.
## How we built it
We built EcoTracker completely with SwiftUI. From the start, we built one feature at a time and added on top of them.
## Challenges we ran into
Our lack of familiarity with Swift and iOS development made the entire project pretty challenging. Debugging was incredibly difficult and time-consuming, especially while sleep-deprived, leading to some frustrating moments and features that were never implemented. Getting the layout right for the daily tasks was difficult as well. Finally, managing all of the Views and how everything connected was very confusing, especially because we did almost everything in a single file.
## Accomplishments that we're proud of
We are proud of getting a functioning app done within the time period with only two people in the team. We didn't have much experience with Swift beforehand, so it was a struggle at first—the learning curve was slightly steep. The whole UI turned out better than expected as well, which was a nice bonus.
## What we learned
To use multiple files! It makes life much easier and finding things quicker, something that would have been helpful when tired. We also learned a lot about iOS development in general, ranging from how the different VStack, HStack, and ZStack layouts work to the different Views available. This being the first or second in-person hackathon for each of us, we also learned to fit in naps to optimize performance.
## What's next for EcoTasker
There are many features we'd like to implement in the future, given more time. By giving some advertising space to companies, we could further incentivize users to complete tasks by offering monetary rewards. There could be coins users earn and then donate toward resolving an issue.
We also want to have a chatbot using the OpenAI API that allows users to ask questions about environmental stewardship (eg: what bin do glass bottles go into?). We could let them customize the avatar based on areas they have completed/saved, like a coral reef pet for Australia.
---
## Inspiration
One of the makers of the project volunteers at a long-term care facility for two hours every week. He works with residents who have chronic back pain and other physical disabilities. We wanted to create a device that reduces the chances of developing back pain from a young age, because in our world of technology even schools give students computers from the time they're 7 years old.
## What it does
Ouch! DeSlouch uses OpenCV machine vision functions to calculate the ratio between the user's shoulder width and face size. As the ratio decreases, the user is alerted that they are slouching by a web app that plays an audible tone. The user currently adds Ouch! DeSlouch to their work routine by opening the web app.
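A simplified sketch of that ratio check in Python, assuming a Haar-cascade face detector; the shoulder-width estimate and the alert threshold are placeholder assumptions, not the project's actual measurement code:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

SLOUCH_RATIO = 2.2  # placeholder: shoulders narrower than ~2.2 face-widths counts as slouching

def estimate_shoulder_width(frame, face_box):
    """Stand-in for the real shoulder measurement (e.g. edge detection below the face)."""
    x, y, w, h = face_box
    return 2.5 * w  # dummy value so the sketch runs end to end

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        ratio = estimate_shoulder_width(frame, (x, y, w, h)) / w
        if ratio < SLOUCH_RATIO:
            print("Ouch! You're slouching. Sit up straight.")
cap.release()
```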
## How I built it
Ouch! DeSlouch uses OpenCV, C++, React, JavaScript, and Reactstrap.
## Challenges I ran into
Tweaking the machine vision to accurately detect the user's posture and provide reliable feedback proved challenging. The front-on view provided by most users' webcams is suboptimal for detecting posture, but the team used stochastic testing to overcome this limitation and produce a robust, deployable system.
## Accomplishments that I'm proud of
One team member learned the basics of React over the weekend to develop the web application. Several team members had never used MV prior to this experience and uOttaHack3 proved a valuable learning experience and exposure to exciting, cutting-edge technologies.
## What I learned
Team members learned React and greatly expanded our knowledge of Javascript. Two members learned a great deal about OpenCV, its implementation and limitations.
## What's next for Ouch! DeSlouch
The team will continue to improve the algorithm and make the technology more reliable. An executable script that operates in the background upon start-up and works independently of the web browser is an ideal delivery option for this service that will be deployed in the future.
---
## Inspiration
Approximately 107.4 million Americans choose walking as a regular mode of travel for both social and work purposes. In 2015, about 70,000 pedestrians were injured in motor vehicle accidents and over 5,300 were killed. Catastrophic accidents like these are usually caused by negligence or inattentiveness on the driver's part.
With the help of **Computer Vision** and **Machine Learning**, we created a tool that helps the driver maintain attention and stay aware of their surroundings and any nearby pedestrians. Our goal is to create a product that provides social good and potentially saves lives.
## What it does
We created **SurroundWatch**, which helps detect nearby pedestrians and notify the driver. The driver can attach their phone to the dashboard, click start on the simple web application, and **SurroundWatch** processes the live video feed, sending notifications to the driver in the form of audio or visual cues when they are in danger of hitting a pedestrian. Since we designed it as an API, it can be incorporated into various ridesharing and navigation applications such as Uber and Google Maps.
## How we built it
Object detection and image processing were done using **OpenCV** and **YOLO-9000**. A web app that can run on both Android and iOS was built using **React**, **JavaScript**, and **Expo.io**. For the backend, **Flask** and **Heroku** were used, with **Node.js** as the realtime environment.
## Challenges we ran into
We struggled with getting the backend and frontend to transmit information to one another, along with converting the images to base64 to send as a POST request. We encountered a few hiccups in terms of Node.js, Ubuntu, and React crashes, but we were able to resolve them. Streaming a live video feed was difficult given the limited bandwidth, so we resorted to sending images every 1000 ms.
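A stripped-down sketch of that image hand-off, assuming a Flask route that accepts a base64-encoded JPEG and hands it to a detector; the route name and the detector stub are illustrative, not our exact backend:

```python
import base64

import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/detect", methods=["POST"])  # hypothetical endpoint name
def detect():
    # The client POSTs {"image": "<base64 JPEG>"} roughly once per second.
    encoded = request.get_json()["image"]
    buf = np.frombuffer(base64.b64decode(encoded), dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)

    # Stand-in for the YOLO pedestrian detector: report the frame size only.
    h, w = frame.shape[:2]
    return jsonify({"pedestrian_detected": False, "frame_size": [w, h]})

if __name__ == "__main__":
    app.run()
```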
## Accomplishments that we're proud of
We were able to process and detect images using YOLO-9000 and OpenCV, send image information using the React app and communicate between the front end and the Heroku/Flask backend components of our project. However, we are most excited to have built and shipped meaningful code that is meant to provide social good and potentially save lives.
## What we learned
We learned the basics of creating dynamic web apps using React and Expo along with passing information to a server where processing can take place. Our team work and hacking skills definitely improved and have made us more adept at building software products.
## What's next for SurroundWatch
The next step for SurroundWatch would be to offload the processing to AWS or Google Cloud Platform to improve the speed of real-time image processing of live video streams. We'd also like to create a demo site so users can see the power of SurroundWatch, and to keep improving our backend.
---
## Inspiration
We've all heard horror stories of people with EVs running out of battery during a trip and not being able to find a charging station. Then, even if they do find one, they have to wait so long for their car to charge that it throws off their whole trip. We wanted to make that process better for EV owners.
## What it does
RouteEV makes the experience of owning and routing with an electric vehicle easy. It takes in a trip and, based on the user's current battery, weather conditions, and route, determines whether the trip is feasible. RouteEV then displays and recommends EV charging stations with free spots near the route and readjusts the route to show whether charging at that station will help the user reach the destination.
## How we built it
We built RouteEV as a JavaScript web app with React. It acts as a user interface for an electric Ford car that a user would interact with. Under the hood, we use various APIs, such as the Google Maps API, to display the map, markers, and routing and to find EV charging stations nearby. We also use APIs to collect weather information and provide Spotify integration.
## Challenges we ran into
Many members of our team hadn't used React before, and we were all relatively inexperienced with front-end work. Trying to style and lay out our application was a big challenge. The Google Maps API was also difficult to use at first and required lots of debugging to get it functional.
## Accomplishments that we're proud of
The main thing we're proud of is that we were able to complete all the features we set out to build at the beginning, with time to spare. With our extra time we were able to have some fun and add integrations like Spotify.
## What we learned
We learned a lot about using React as well as using the Google Maps API and more about APIs in general. We also all learned a lot about front-end web development and working with CSS and JSX in React.
---
## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses, and market updates is overwhelming. For most people, finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing, we condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from a domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies, which lets users focus on high-profile stocks relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools to determine whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing, and we used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis we used TensorFlow, Pandas, and NumPy.
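A minimal sketch of the scraping and company-mention scan described above, using requests and Beautiful Soup; the URL and company list are placeholders, and the sentiment scoring done by our TensorFlow model is only noted in a comment:

```python
import requests
from bs4 import BeautifulSoup

COMPANIES = ["Apple", "Microsoft", "Amazon"]  # stand-in for the top-100 list

def fetch_article_text(url: str) -> str:
    """Download a page and pull the visible paragraph text out of it."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

def mentions(text: str) -> dict:
    """Collect the sentences that mention each tracked company."""
    sentences = text.split(". ")
    return {c: [s for s in sentences if c in s] for c in COMPANIES}

article = fetch_article_text("https://example.com/markets-news")  # placeholder URL
for company, hits in mentions(article).items():
    # Each hit would then be scored by the sentiment model to drive buy/sell/hold suggestions.
    print(company, len(hits), "mention(s)")
```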
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performance. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise, and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources.
---
## ✨ Inspiration
Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are rarely taught in school, and even less so in developing countries, even though, used well, investing can help many people rise above the poverty line. We seek to help students and adults learn more about stocks and what drives companies' stock value up or down, and to use that information to make more informed decisions.
## 🚀 What it does
Users are guided to a search bar where they can search for a company's stock, for example "AAPL", and almost instantly see the stock price over the last two years as a graph, with green and red dots spread along the line. When they hover over the dots, the green dots explain why there is a general increasing trend in the stock, with a news article to back it up, along with the price change from the previous day and what it was predicted to be. An image of the company also shows up beside the graph.
## 🔧 How we built it
When a user enters a stock name, the app accesses the Yahoo Finance API and gets the stock price data from the last three years. It converts that data to a JSON file served on localhost:5000; then, using Flask, we expose it as our own API that populates the Chart.js graph with the stock data. Using a MATLAB server, we then analyze that data to find the areas of most significance (where the absolute value of the slope is over a certain threshold). Those data points are marked green if the change is positive and red if it is negative. The dates of those points are fed to Gemini, which is asked why it thinks the stock shifted the way it did and the price changed on that day. Gemini also handles a second request for a phrase that an image-search API can easily use to find a photo of the company, which is then shown on screen.
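For illustration, here is a Python sketch of the significance step; the actual project does this on a MATLAB server and fetches the Yahoo Finance data separately, so the yfinance call and the 4% threshold below are stand-in assumptions:

```python
import yfinance as yf

# Pull daily closes for the ticker (the project fetches Yahoo Finance data similarly).
closes = yf.Ticker("AAPL").history(period="2y")["Close"]

THRESHOLD = 0.04  # assumption: flag days with a >4% absolute move

# The project runs this step on a MATLAB server; the logic is the same:
# mark dates whose day-over-day change exceeds the threshold, green for up, red for down.
returns = closes.pct_change().dropna()
flags = [
    {"date": str(day.date()), "change": round(chg, 4), "color": "green" if chg > 0 else "red"}
    for day, chg in returns.items()
    if abs(chg) > THRESHOLD
]
print(flags[:5])  # these dates are what would be sent to Gemini for an explanation
```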
## 🤯 Challenges we ran into
Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. Getting stock data to a MATLAB server also took a lot of time, as it was the first time any of us had used it. The POST and fetch commands were new to us and took us a while to get used to.
## 🏆 Accomplishments that we're proud of
* Connecting a prompt to a well-crafted stock portfolio
* Learning MATLAB in a time crunch
* Connecting all of our APIs successfully
* Making a website that we believe has serious positive implications for the world
## 🧠 What we learned
* MATLAB integration
* Flask integration
* The Gemini API
## 🚀What's next for StockSee
* Bringing it to different mediums, such as VR, so users can see in real time how stocks shift in front of them in an interactive way.
* Adding a small questionnaire on different aspects of a stock to ask whether it is a good buy at the time.
* Using modern portfolio theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them.
---
## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sectors in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside to follow a specific one, to which we gradually aligned. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of being able to launch a minimalist Google Chrome Extension in tandem with a web application, allowing users to either analyze news articles at their leisure, or in a more professional degree. We reached more than several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they never did before, understanding compromise, and putting the team ahead of personal views was what made this Hackathon one of the most memorable for everyone. Emotional intelligence played just as an important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future.
---
## Inspiration
Every year, our school does a Grand Challenges Research project focused on an important topic in the world. This year, the focus is on mental health: providing cost-effective treatment and making it accessible to everyone. We all may know someone who has a phobia, came back from a tour in the military, or is living with another mental illness (1 in 5 Americans, to be exact). As mental health awareness rises, the availability of appointments with counselors and treatments lessens. Additionally, this could be used to provide at-home inquiries for people who are hesitant to get help. With Alexa M.D. we hope to use IoT (the Internet of Things) to bring the necessary treatment to the patient for better access and cost, and also to reduce the stigma of mental illness.
## What it does
The user can receive information and various treatment options through Alexa M.D. First, the user speaks to Alexa through the Amazon echo, the central interface, and they can either inquire about various medical information or pick at-home treatment options. Through web-scraping of Web M.D. and other sites, Alexa M.D. provides information simply by asking. Next, Alexa M.D. will prompt the user with various treatment options which are a version of exposure therapy for many of the symptoms. The user will engage in virtual reality treatment by re-enacting various situations that may usually cause them anxiety or distress, but instead in a controlled environment through the Oculus Rift. Treatments will incrementally lessen the user's anxieties; they can use the Leap Motion to engage in another dimension of treatment when they are ready to move to the next step. This virtualizes an interaction with many of the stimuli that they are trying to overcome. When the treatment session has concluded, Alexa M.D. will dispense the user's prescribed medication through the automated medicine dispenser, powered by the Intel Edison. This ensures users take appropriate dosages while also encouraging them to go through their treatment session before taking their medication.
## How we built it
We used Alexa Skills to teach the Amazon Echo to recognize new commands. This enables communication with both the Oculus and our automated medicine dispenser through our backend on Firebase. We generated various virtual environments through Unity; the Leap Motion is connected to the Oculus, which lets the user interact with their virtual environment. When prompted with a medical question, Alexa M.D. uses web scraping from various medical websites, including Web M.D., to produce accurate responses. To make the automated medicine dispenser, we 3D printed the dispensing mechanism and laser cut acrylic to provide structural support. The dispenser is controlled by a servo motor via the Intel Edison and controls the output of the medication as prescribed by Alexa M.D.
## Challenges we ran into
We found it difficult to sync the various components together (Oculus, Intel Edison, Amazon Alexa), and communicating between all 3 pieces.
## Accomplishments that we're proud of
The Internet of Things is the frontier of technology, and we are proud of integrating three very distinct components. Additionally, the pill dispenser was sketched and created entirely within the span of the hackathon, and we were able to use new fabrication methods such as laser cutting.
## What we learned
Over the weekend, we learned a great deal about working with Amazon Web Services, as well as Amazon Alexa, and how to integrate these technologies. Additionally, we learned about using modeling software for both 3D printing and laser cutting. Furthermore, we learned how to set up the Arduino shield for the Intel Edison and how to integrate the Leap Motion with the Oculus Rift.
## What's next for Alexa M.D.
We hope that this can become available for all households and that it can reduce the cost of treatment as well as improve access to it. Costs for regular treatment include transportation, doctors and nurses, pharmacy visits, and more. It can be a first step for people who are hesitant to consult a specialist, or a main component of long-term treatment. Some mental illnesses, such as PTSD, even prevent patients from being able to interact with the outside world, which makes it difficult to go seek treatment. Additionally, we hope this can reduce the stigma around treating mental illness by integrating such treatments easily into the daily lives of users. Patients can continue their treatments in the privacy of their own home, where they won't feel any pressure.
---
## Story
Mental health is a major issue especially on college campuses. The two main challenges are diagnosis and treatment.
### Diagnosis
Existing mental health apps require the user to proactively input their mood, their thoughts, and their concerns. With these apps, it's easy for users to hide their true feelings.
We wanted to find a better solution using machine learning. Mira uses visual emotion detection and sentiment analysis to determine how users are really feeling.
At the same time, we wanted to use an everyday household object to make it accessible to everyone.
### Treatment
Mira focuses on being engaging and keeping track of the user's emotional state. She lets users see their emotional state and history, and then analyze why they're feeling that way using the journal.
## Technical Details
### Alexa
The user's speech is picked up by the Amazon Alexa, which parses the speech and passes it to a backend server. Alexa listens to the user's description of their day, or whatever is on their mind, and replies with encouraging responses matched to the user's speech.
### IBM Watson/Bluemix
The speech transcribed by Alexa is sent to IBM Watson, which performs sentiment analysis to see how the user is actually feeling from their text.
### Google App Engine
The backend server is being hosted entirely on Google App Engine. This facilitates the connections with the Google Cloud Vision API and makes deployment easier. We also used Google Datastore to store all of the user's journal messages so they can see their past thoughts.
### Google Vision Machine Learning
We take photos using a camera built into the mirror. The photos are sent to the Vision ML API, which finds the user's face and infers the user's emotions from each photo. The results are then stored directly in Google Datastore, which integrates well with Google App Engine.
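A condensed sketch of that call using the Cloud Vision Python client; the file path is a placeholder and error handling is omitted:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Placeholder path for a photo captured by the mirror's camera.
with open("mirror_capture.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Vision returns a likelihood (VERY_UNLIKELY ... VERY_LIKELY) per emotion.
    print("joy:", face.joy_likelihood,
          "sorrow:", face.sorrow_likelihood,
          "anger:", face.anger_likelihood,
          "surprise:", face.surprise_likelihood)
```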
### Data Visualization
Each user can visualize their mental history through a series of graphs. The graphs are color-coded to certain emotional states (e.g., red for anger, yellow for joy). Users can then follow their emotional states through those time periods and reflect on their actions or thoughts in the mood journal.
---
## Inspiration
Mental health issues are hard enough to experience, let alone to actively address. There are many valid therapeutic methods of addressing mental health, but sometimes a simple metric is a great place to start. When I was struggling with my mental health, I wished I could know how often I was actively struggling. When I got anxious, it felt like I had always been anxious and would always be anxious. I tried to get an official diagnosis but struggled to accurately convey how much my mental health was impacting my ability to function. Experience can be subjective, but data can offer clarity.
## What it does
This device records your mood whenever you feel like recording it. There are three mood options to choose from: high, neutral, and low. While this measure is coarse, it allows the user to choose quickly without overthinking or being overwhelmed by options. The device records this data with just the tap of a button and does not require a cellphone. This was intentional, as personal devices are often filled with distracting or distressing notifications. Sometimes it's useful just to check in without having to find the energy to wade through a sea of distractions.
The device also features two buttons related to focus. Another common mental health issue is feeling regularly unfocused. This can easily spiral into assuming it is impossible to focus, when perhaps there was a great deal of energy put towards focusing recently. This is also a valuable metric to track for a student's mental health.
This device has a few additional data collection points. There is a mounted camera paired with a model trained to assess mood from facial expression, and a temperature sensor meant to capture skin temperature during mood recording. Together, the mood, temperature, and facial expression are meant to feed into a larger algorithm. This data is stored in MongoDB (with Mongo Atlas) and prepared for eventual use in a neural network. Over time, more associations could be made that further aid in understanding the effects of mental health. How valuable would it be to gain insight into one's mental state through a correlation with a non-invasive temperature measurement?
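For illustration, a small Python sketch of the kind of record the device could write on each button press, using pymongo; the connection string, database, and field names are assumptions:

```python
from datetime import datetime, timezone

from pymongo import MongoClient

# Placeholder Atlas connection string and collection layout.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
readings = client["psyche_tracker"]["readings"]

def record_checkin(mood: str, skin_temp_c: float, predicted_emotion: str) -> None:
    """Store one check-in: button mood, skin temperature, and the camera model's guess."""
    readings.insert_one({
        "timestamp": datetime.now(timezone.utc),
        "mood": mood,                    # "high" | "neutral" | "low"
        "skin_temp_c": skin_temp_c,
        "predicted_emotion": predicted_emotion,
    })

record_checkin("neutral", 33.8, "calm")
```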
## How we built it
We focused on using open-source materials so that we could focus on creating a cohesive unit. The system is controlled by a Raspberry Pi 3 and an Arduino Leonardo. The computer vision model was developed using TensorFlow. The housing was created primarily with laser-cut pieces, with some fanciful 3D-printed additions.
We wanted to make this device functional, but we also wanted to make it pleasant. The camera is mounted on a spring to create a "bobble head" to amuse the user during a mental health episode. There are LEDs in the shape of hearts that light up when the user indicates that their mood is low. The two robot arms have unique movements that respond to the neutral mood indicator and focus mode.
We wanted this device to feel like something of a companion. A cute enhancement to a desk space that provides valuable insights about the user.
## Challenges we ran into
Primarily time. There are some integration steps not yet implemented due to a lack of time. We tried to dream feasibly, but still managed to get caught up in what we wanted this to be.
Specifically, the computer vision model took its time to train. This is not surprising but difficult to mitigate; it is hard to speed up training while maintaining model integrity.
There were a few errors along the way! Some design choices were abandoned partway through because, while desirable, they were taking more time than could be allotted.
## Accomplishments that we're proud of
We are proud of the functionality of our project and of how much we learned. Our design is well made given the time constraints and should already stand up to extended use. Furthermore, we laid the groundwork for a lot of further development. Analysis of the harvested data could lead to much more exploration in the realm of passive mental health assessment.
We're proud of how we operated as a team. None of us knew each other before this, but we were able to come together and communicate effectively to create a project we're all quite satisfied with.
## What we learned
We learned how to generate ideas together. We learned new technologies that we hadn't used before - specifically implementing interrupts for the buttons was an interesting endeavour!
We learned how to design mechanically well during a hackathon - and the answer is laser cutting. 3D printing is glamorous, but the speed of laser cutting simply can't be beat.
We learned to get creative with what we had on hand. We found it difficult to check out the hardware items we were looking for and so we needed to get creative with what we had.
## What's next for Psyche Tracker
More design iterations. There are small aspects of many parts of the build that could be improved, and fine-tuning what we have done so far will go a long way toward elevating this project into something truly special. After that, the project would benefit from more sensors with investigated links to mental health. Then, all of that data can come together to create a dataset describing the physical effects of mental health on a day-to-day basis.
---
## Inspiration
SustainaPal is a project that was born out of a shared concern for the environment and a strong desire to make a difference. We were inspired by the urgent need to combat climate change and promote sustainable living. Seeing the increasing impact of human activities on the planet's health, we felt compelled to take action and contribute to a greener future.
## What it does
At its core, SustainaPal is a mobile application designed to empower individuals to make sustainable lifestyle choices. It serves as a friendly and informative companion on the journey to a more eco-conscious and environmentally responsible way of life. The app helps users understand the environmental impact of their daily choices, from transportation to energy consumption and waste management. With real-time climate projections and gamification elements, SustainaPal makes it fun and engaging to adopt sustainable habits.
## How we built it
The development of SustainaPal involved a multi-faceted approach, combining technology, data analysis, and user engagement. We opted for a React Native framework, and later incorporated Expo, to ensure the app's cross-platform compatibility. The project was structured with a focus on user experience, making it intuitive and accessible for users of all backgrounds.
We leveraged React Navigation and React Redux for managing the app's navigation and state management, making it easier for users to navigate and interact with the app's features. Data privacy and security were paramount, so robust measures were implemented to safeguard user information.
## Challenges we ran into
Throughout the project, we encountered several challenges. Integrating complex AI algorithms for climate projections required a significant amount of development effort. We also had to fine-tune the gamification elements to strike the right balance between making the app fun and motivating users to make eco-friendly choices.
Another challenge was ensuring offline access to essential features, as the app's user base could span areas with unreliable internet connectivity. We also grappled with providing a wide range of educational insights in a user-friendly format.
## Accomplishments that we're proud of
Despite the challenges, we're incredibly proud of what we've achieved with SustainaPal. The app successfully combines technology, data analysis, and user engagement to empower individuals to make a positive impact on the environment. We've created a user-friendly platform that not only informs users but also motivates them to take action.
Our gamification elements have been well-received, and users are enthusiastic about earning rewards for their eco-conscious choices. Additionally, the app's offline access and comprehensive library of sustainability resources have made it a valuable tool for users, regardless of their internet connectivity.
## What we learned
Developing SustainaPal has been a tremendous learning experience. We've gained insights into the complexities of AI algorithms for climate projections and the importance of user-friendly design. Data privacy and security have been areas where we've deepened our knowledge to ensure user trust.
We've also learned that small actions can lead to significant changes. The collective impact of individual choices is a powerful force in addressing environmental challenges. SustainaPal has taught us that education and motivation are key drivers for change.
## What's next for SustainaPal
The journey doesn't end with the current version of SustainaPal. In the future, we plan to further enhance the app's features and expand its reach. We aim to strengthen data privacy and security, offer multi-language support, and implement user support for a seamless experience.
SustainaPal will also continue to evolve with more integrations, such as wearable devices, customized recommendations, and options for users to offset their carbon footprint. We look forward to fostering partnerships with eco-friendly businesses and expanding our analytics and reporting capabilities for research and policy development.
Our vision for SustainaPal is to be a global movement, and we're excited to be on this journey towards a healthier planet. Together, we can make a lasting impact on the world.
---
## Inspiration
We wanted to build a sustainable project, which gave us the idea of planting crops on farmland in a way that gives the farmer the maximum profit. The program also accounts for crop rotation, which means the land gets time to replenish its nutrients and the quality of the soil improves.
## What it does
It does many things. It first checks which crops can be grown on a given piece of land depending on the area's weather, the soil, the nutrients in the soil, the amount of precipitation, and much more information that we get from the APIs used in the project. It then forms a plan that accounts for the crop rotation process. This helps the land regain its lost nutrients while increasing the profit the farmer gets from the land, meaning lost nutrients are regained without stopping the harvest. It also gives the farmer daily updates on the weather in the area so that they can be prepared for severe weather.
## How we built it
For most of the backend, we used Python.
For the front end of the website we used HTML, with CSS for styling. We also used JavaScript for form handling and to connect the Python backend to the HTML front end.
We used the Twilio API to send daily messages that help the user prepare for severe weather conditions.
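A minimal sketch of that daily alert using the Twilio Python helper library; the credentials, phone numbers, and message text are placeholders:

```python
from twilio.rest import Client

# Placeholder credentials and numbers.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"
client = Client(ACCOUNT_SID, AUTH_TOKEN)

def send_daily_weather(to_number: str, forecast: str) -> None:
    """Send the farmer the day's forecast so they can prepare for severe weather."""
    client.messages.create(
        body=f"ECO-HARVEST daily update: {forecast}",
        from_="+15005550006",   # Twilio-provided number (placeholder)
        to=to_number,
    )

send_daily_weather("+15551234567", "Thunderstorms likely this afternoon; secure equipment.")
```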
## Challenges we ran into
The biggest challenge we faced while making this project was connecting the Python code with the HTML code so that the website could display crop rotation patterns after executing the Python backend script.
## Accomplishments that we're proud of
While making this each of us in the group has accomplished a lot of things. This project as a whole was a great learning experience for all of us. We got to know a lot of things about the different APIs that we have used throughout the project. We also accomplished making predictions on which crops can be grown in an area depending on the weather of the area in the past years and what would be the best crop rotation patterns. On the whole, it was cool to see how the project went from data collection to processing to finally presentation.
## What we learned
We learned a lot over the course of this hackathon. We learned team management and time management, and we got hands-on experience with machine learning, implementing linear regression, random decision trees, and SVM models. Finally, using APIs became second nature to us because of how many we had to use to pull data.
## What's next for ECO-HARVEST
For now, our data is limited to the United States; in the future we plan to expand it to the whole world and improve our accuracy in predicting which crops can be grown in an area. Using the crops that can be grown in the area, we want to produce better crop rotation models so the soil regains its lost nutrients faster. We also plan to send better and more informative daily messages to the user.
---
## Inspiration
At Carb0, we're committed to empowering individuals to take control of their carbon footprint and contribute to a more sustainable future. Our inspiration comes from the fact that 72% of CO2 emissions could be reduced by changes in consumer behavior, yet many companies lack the motivation to conduct ESG reports if not required by investors or the government. We believe that establishing consumer-driven ESG can drive companies to be accountable and take action to provide more sustainable products and services.
## What it does
We created **a personal carbon tracker** that **incentivizes** customers to adopt low-carbon lifestyles and **democratizes carbon footprint data**, making it easier for everyone to contribute to a sustainable future. Our platform provides information to influence consumers' purchase decisions and provides alternatives to help them make sustainable decisions. This way, we can encourage companies, investors, and the government to take responsibility and be more sustainable.
## How we built it
We began by identifying the problem and then went through an intense ideation process to converge on our consumer-driven ESG idea. We defined the user journey and pain points to create a convenient, incentivizing, and user-centric platform. Our reward system easily links to digital payment details and helps track CO2 emissions with data visualization and cashback based on monthly summaries. We also make product carbon footprint data easily accessible and searchable.
## Challenges we ran into
Our biggest challenges were integrating the front end and back end and defining scope. We also had to make technical assumptions, since an accurate database was not available within the time constraints.
## Accomplishments that we're proud of
Despite these challenges, we are proud of our self-sustaining system to establish consumer-driven ESG, successful integration of front-end and back-end with a user-friendly interface, and the intense ideation process we went through.
## What we learned
During this project, we learned how to rapidly prototype a digital app with limited time and resources, and gained a deeper understanding of ESG, its current challenges, and potential solutions.
## What's next for Carb0 - Empower your carbon journey
Our next steps are to conduct user testing and iterations for a higher-fidelity prototype, and to enrich the coverage and accuracy of our carbon footprint database. We also plan to potentially add Carb0 as an add-on for digital wallets to reach a broader audience and engage more people in a more sustainable lifestyle.
Our vision is that **consumer-driven ESG** will incentivize governments, investors, and companies to take more initiatives in creating a more sustainable world. Join us on our journey to a sustainable future with Carb0!
|
winning
|
# Journally - A journal entry a day. All through text.
## Welcome to Journally! Where we restore our memories one journal, one day at a time.
## Inspiration and What it Does
With everyone returning to their busy lives of work, commuting, school, and other commitments, people need an opportunity to restore their peace of mind. Journalling has been shown to improve mental health and can help restore memories, so that you don't get too caught up in the minutiae of life and can instead appreciate the big picture. *Journally* encourages you to quickly and easily record a daily journal entry - it's all done through text!
*Journally* sends you a daily text message reminder and then you simply reply back with whatever you want to record about your day. Your journal entries are available to view through the Journally website later, for whenever you want to take a walk down memory lane.
## Challenges and Major Accomplishments
This was the first full-stack project that either of us has completed, so there was definitely a lot of learning involved. In particular, integrating the many different servers was difficult -- Python Flask for sending and receiving text messages via the Twilio messaging API, a MySQL database, and the Node.js webserver. With so many complex parts, we were very proud of our ability to get it all running in under 24 hours! Moreover, we realized that this project was quite a bit for two people to complete. We weren't able to get everything to work perfectly, but at least we have a working product!
## What we learned
It was our first time working with API routings in Node.js and interacting with databases, so we learned a lot from that! We also learned how to work with Twilio's API using Flask. We had lots of fun sending ourselves a ton of test SMS messages.
## How we built it
* **Twilio** to send our registered users *daily* messages to Journal!
* Secure `MySQL` database to store user registration info and their Journally entries
* `Flask` to *send* SMS from a user database of phone numbers
* `Flask` to *receive* SMS and store the user's Journallys into the database (a minimal sketch follows this list)
* `Node.JS` for server routings, user registration on site, and storing user data into the database
* `Express.js` backend to host Journally
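A minimal sketch of the receiving side, assuming a MySQL table named `entries` (the table, column names, and credentials here are placeholders; the real app also handles registration and validation):

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
import mysql.connector

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def receive_entry():
    """Twilio POSTs here whenever a user replies to their daily reminder."""
    phone = request.form["From"]   # sender's phone number
    entry = request.form["Body"]   # the journal entry text

    db = mysql.connector.connect(host="localhost", user="journally",
                                 password="secret", database="journally")
    cursor = db.cursor()
    cursor.execute(
        "INSERT INTO entries (phone, entry, created_at) VALUES (%s, %s, NOW())",
        (phone, entry),
    )
    db.commit()
    db.close()

    # Confirm to the user that their entry was saved
    resp = MessagingResponse()
    resp.message("Got it! Today's Journally entry has been saved.")
    return str(resp)
```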
## Next Steps:
* allow simple markups like bolds in texts
* allow user to rate their day on a scale
* sort by scale feature
* Feel free to contribute!! Let's Journally together
# Check out our GitHub repo:
[GitHub](https://github.com/natalievolk/UofTHacks)
|
# TextMemoirs
## nwHacks 2023 Submission by Victor Parangue, Kareem El-Wishahy, Parmvir Shergill, and Daniel Lee
Journalling is a well-established practice that has been shown to have many benefits for mental health and overall well-being. Some of the main benefits of journalling include the ability to reflect on one's thoughts and emotions, reduce stress and anxiety, and document progress and growth. By writing down our experiences and feelings, we are able to process and understand them better, which can lead to greater self-awareness and insight. We can track our personal development, and identify the patterns and triggers that may be contributing to our stress and anxiety. Journalling is a practice that everyone can benefit from.
Text Memoirs is designed to make the benefits of journalling easy and accessible to everyone. By using a mobile text-message based journaling format, users can document their thoughts and feelings in a real-time sequential journal, as they go about their day.
Simply text your assigned number, and your journal text entry gets saved to our database. Your journal text entries are then displayed on our web app GUI, where you can view all of your entries for any given day.
You can also text commands to your assigned number using /EDIT and /DELETE to update your text journal entries on the database and the GUI (see the image gallery).
Text Memoirs utilizes Twilio's API to receive and store users' text messages in a CockroachDB database. The frontend interface for viewing a user's daily journals is built using Flutter.
# TextMemoirs API
This API allows you to insert users, get all users, add texts, get texts by user and day, delete texts by id, get all texts and edit texts by id.
## Endpoints
### Insert User
Insert a user into the system.
* Method: **POST**
* URL: `/insertUser`
* Body:
`{
"phoneNumber": "+17707626118",
"userName": "Test User",
"password": "Test Password"
}`
### Get Users
Get all users in the system.
* Method: **GET**
* URL: `/getUsers`
### Add Text
Add a text to the system for a specific user.
* Method: **POST**
* URL: `/addText`
* Body:
`{
"phoneNumber": "+17707626118",
"textMessage": "Text message #3",
"creationDate": "1/21/2023",
"creationTime": "2:57:14 PM"
}`
### Get Texts By User And Day
Get all texts for a specific user and day.
* Method: **GET**
* URL: `/getTextsByUserAndDay`
* Parameters:
+ phoneNumber: The phone number of the user.
+ creationDate: The date of the texts in the format `MM/DD/YYYY`.
### Delete Texts By ID
Delete a specific text by ID.
* Method: **DELETE**
* URL: `/deleteTextsById`
* Body:
`{
"textId": 3
}`
### Edit Texts By ID
Edit a specific text by ID.
* Method: **PUT**
* URL: `/editTextsById`
* Parameters:
+ id: The ID of the text to edit.
* Body:
`{
"textId": 2,
"textMessage": "Updated text message"
}`
### Get All Texts
Get all texts in the database.
* Method: **GET**
* URL: `/getAllTexts`
|
## Inspiration
The idea for SlideForge came from the struggles researchers face when trying to convert complex academic papers into presentations. Many academics spend countless hours preparing slides for conferences, lectures, or public outreach, often sacrificing valuable time they could be using for research. We wanted to create a tool that could automate this process while ensuring that presentations remain professional, audience-friendly, and adaptable to different contexts.
## What it does
SlideForge takes LaTeX-formatted academic papers and automatically converts them into well-structured presentation slides. It extracts key content such as equations, figures, and citations, then organizes them into a customizable slide format. Users can easily adjust the presentation based on the intended audience—whether it’s for peers, students, or the general public. The platform provides customizable templates, integrates citations, and minimizes the time spent on manual slide creation.
## How we built it
We built SlideForge using a combination of Python for the backend and JavaScript with React for the frontend. The backend handles the LaTeX parsing, converting key elements into slides using Flask to manage the process. We also integrated JSON files to store and organize the structure of presentations, formulas, and images. On the frontend, React is used to create an interactive user interface where users can upload their LaTeX files, adjust presentation settings, and preview the output.
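As a simplified illustration of the parsing step (the real parser also handles figures, citations, and nested environments; the regular expressions and slide structure here are only a sketch):

```python
import re

def latex_to_slides(tex_source: str) -> list[dict]:
    """Split a LaTeX document into one slide dict per \\section."""
    slides = []
    # Split on \section{...}; the first chunk (preamble/abstract) is skipped
    parts = re.split(r"\\section\{([^}]*)\}", tex_source)
    for i in range(1, len(parts), 2):
        title, body = parts[i], parts[i + 1]
        # Pull out display equations so they can be rendered separately
        equations = re.findall(r"\\begin\{equation\}(.*?)\\end\{equation\}",
                               body, flags=re.DOTALL)
        text = re.sub(r"\\begin\{equation\}.*?\\end\{equation\}", "",
                      body, flags=re.DOTALL)
        slides.append({"title": title.strip(),
                       "equations": [eq.strip() for eq in equations],
                       "text": text.strip()})
    return slides

# The resulting list of slide dicts is what gets stored as the JSON structure
# the React frontend reads when previewing and editing the presentation.
```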
## Challenges we ran into
One of the biggest challenges we faced was ensuring that the LaTeX parser could accurately extract and format complex equations and figures into slide-friendly content. Maintaining academic rigor while making the content accessible to different audiences also required a lot of trial and error with the customizable templates. Finally, integrating the backend and frontend in a way that made the process seamless and efficient posed technical hurdles that required collaboration and creative problem-solving.
## Accomplishments that we're proud of
We’re proud of the fact that SlideForge significantly reduces the time required for researchers to create professional presentations. What used to take hours can now be done in minutes. We’re also proud of the adaptability of our templates, which allow users to target different audiences without needing to redesign their slides from scratch. Additionally, the successful integration of LaTeX parsing and slide generation is a technical achievement we’re particularly proud of.
## What we learned
Throughout this project, we learned a lot about LaTeX and how to parse and handle its complex structures programmatically. We also gained a deeper understanding of user experience design, ensuring that our platform was both intuitive and powerful. From a technical standpoint, integrating the backend and frontend and ensuring smooth communication between the two taught us valuable lessons in full-stack development.
## What's next for SlideForge
Next, we plan to expand SlideForge’s functionality by adding more customization options for users, such as advanced styling and animation features. We’re also looking into integrating cloud storage solutions so users can save and edit their presentations across devices. Additionally, we hope to support more document formats beyond LaTeX, making SlideForge a universal tool for academics and professionals alike.
|
partial
|
## Inspiration: As per the stats provided by the Annual Disability Statistics Compendium, 19,344,883 civilian veterans ages 18 years and over lived in the community in 2013, of which 5,522,589 were individuals with disabilities. DAV (Disabled American Veterans) has spent about $61.8 million to buy and operate vehicles to act as a transit service for veterans, but the reach of this program is limited.
Following these stats, we wanted to support veterans with something more feasible and efficient.
## What it does: It is a web application that serves as a common platform between DAV and Uber. Instead of spending a huge amount on buying cars, DAV instead pays Uber, and Uber then provides free rides to veterans. Any veteran can register with their Veteran ID and SSN. During the application process, our portal matches the details with DAV to prevent non-veterans from using this service. After registration, veterans can request rides on our website, which uses the Uber API, and can commute for free.
## How we built it: We used the following technologies:
Uber API, Google Maps, Directions, and Geocoding APIs, and WAMP as a local server.
Bootstrap to create the website, phpMyAdmin to maintain the SQL database, and webpages designed using HTML, CSS, JavaScript, Python scripts, etc.
## Challenges we ran into: Using the Uber API effectively, by parsing through data and code to make JavaScript files that use the API endpoints. Also, the Uber API has problematic network/server permission issues.
Another challenge was figuring out how to prevent misuse of this service by non-veterans. To solve that, we created a dummy database where each Veteran ID is associated with a corresponding 4-digit SSN. The pair is matched when the user registers for free Uber rides. For a real-time application, the same data can be provided by DAV and used to authenticate a veteran.
## Accomplishments that we're proud of: Finishing the project well ahead of time, almost 4 hours before the deadline. Starting as a team of strangers, brainstorming ideas for hours, and then having a finished product in less than 24 hours.
## What we learned: We learnt to use third party APIs and gained more experience in web-development.
## What's next for VeTransit: We plan to launch a smartphone app that will be developed for the same service.
It will also include speech recognition. We will display location services for nearby hospitals and medical facilities based on veterans' needs. Using the APIs of online job providers, veterans will receive data on jobs.
To access the website, please register as a user first.
During that process, it will ask for a Veteran ID and four digits of the SSN.
The pair should match for successful registration.
Please use one of the following key pairs from our dummy data to do that:
VET00104 0659
VET00105 0705
VET00106 0931
VET00107 0978
VET00108 0307
VET00109 0674
|
## Inspiration
Around 43.3% of NFT users are victims of NFT fraud. To prevent this and benefit society, we created a publicly available website, nftlaundromat.tech, where you can see and track NFT fraudsters. This way, NFT fraudsters will hesitate to commit fraud in the future, knowing they would be publicly shamed, and a healthier NFT space would be created.
## What it does
It pulls publicly available data from NFT wallets and extracts all the users who committed wash trading or rug pulling. We identified the fraudsters using the machine learning graph theory algorithm we developed based on past research papers. To clarify, a rug pull is a scam that promotes a crypto token via social media; after the price has been driven up, the scammer sells, and the price generally falls to zero. Wash trading, on the other hand, is a dishonest scheme in which a buyer and seller drive up the price of an NFT by selling the piece back and forth while only publicly reporting the first sale; the money and NFT are returned to the original seller in the following exchange. Users can go to our website, read about each fraudster, and then shame the fraudster's social account with a simple click of a button.
## How we built it
We first read research papers on NFT rug pulling and wash trading. After that, we improved the algorithm by reframing the identification of fraudsters as a graph theory problem. We extracted the data using SQL queries from transpose.io. After extracting the data, we ran our algorithm to identify all the fraudsters. We store the data in Firebase and show it to users on the front end using JS.
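A bare-bones sketch of the wash-trading check on an in-memory list of transfers (the field layout and cycle-length threshold are illustrative; the full algorithm also compares sale prices against the standard deviation of comparable trades):

```python
import networkx as nx

def find_suspicious_wallets(transfers):
    """transfers: iterable of (seller, buyer, price) tuples for one NFT collection."""
    g = nx.DiGraph()
    for seller, buyer, price in transfers:
        g.add_edge(seller, buyer, price=price)

    suspicious = set()
    # Wallets that pass the same NFT around in a closed loop are wash-trading candidates
    for cycle in nx.simple_cycles(g):
        if len(cycle) >= 2:
            suspicious.update(cycle)
    return suspicious

# Example: A sells to B, B sells back to A -> both wallets are flagged
# print(find_suspicious_wallets([("A", "B", 10.0), ("B", "A", 12.0), ("C", "D", 1.0)]))
```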
## Challenges we ran into
There is very little research in the space of NFT rug pulling and wash trading. It took us a few hours to improve the algorithm and the accuracy of existing identification approaches. The algorithms written in research papers were unclear and not working. To develop our own, we first had to optimize finding all the cycles in the graph, and then identify what falls within the standard deviation.
## Accomplishments that we're proud of
Teamwork and team energy were at a maximum, which helped us develop the project. The diversity of the team and our different backgrounds played a huge role in our accomplishment. In just 36 hours, we managed to improve an algorithm from research that took a few years. Furthermore, each member of the team had to deal with parts of the project that were not the most comfortable for them, which helped us learn a lot.
## What we learned
We learned that we can definitely continue to build and fully deploy this project. There are so many externalities that need to be taken into account to achieve a perfect accuracy score. We learned that APIs are not that easy to integrate into an application, and that graph theory comes in handy surprisingly often.
## What's next for NFT Laundromat
The next steps are:
-> Improve UI/UX
-> Identify more externalities and add them into the algorithm
-> Train the algorithm through machine learning to tweak the parameters
-> Market the product to reach a wider audience
|
## Inspiration
During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before.
## What It Does
Our application, called Hack2, is an intelligent search engine that uses Machine Learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions.
## How We Built It
We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML.
Next was the Machine Learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API.
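Conceptually, the document-similarity step reduces to embedding each description as a vector and comparing with cosine similarity; a stripped-down sketch using plain word-vector averaging (the real pipeline applies the Sentence2Vec weighting and the custom-trained word2vec matrix described below):

```python
import numpy as np

def doc_vector(text: str, word_vectors: dict[str, np.ndarray], dim: int = 300) -> np.ndarray:
    """Average the vectors of known words to embed a whole description."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def similarity(idea: str, product: str, word_vectors: dict[str, np.ndarray]) -> float:
    """Cosine similarity between the user's idea and an existing product description."""
    a, b = doc_vector(idea, word_vectors), doc_vector(product, word_vectors)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```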
We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions.
After finishing the Android app and backend development, we decided to add a web app to make the service more accessible, made using React.
## Challenges We Ran Into
From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as Github READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw a degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models.
We also ran into quite a few issues getting our backend up and running, as it was our first using Microsoft Azure. Specifically, Azure functions do not currently support Python fully, meaning that we did not have the developer tools we expected to be able to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this issue by refactoring our cloud compute functions with multithreaded capabilities.
## What We Learned
We learned a lot about Microsoft Azure’s Cloud Service, mobile development and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time.
On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work).
## What’s Next for Hack2
The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore are LSTM Siamese Neural Networks, which “remember” previous classifications moving forward.
|
partial
|
# DriveWise: Building a Safer Future in Route Planning
Motor vehicle crashes are the leading cause of death among teens, with over a third of teen fatalities resulting from traffic accidents. This represents one of the most pressing public safety issues today. While many route-planning algorithms exist, most prioritize speed over safety, often neglecting the inherent risks associated with certain routes. We set out to create a route-planning app that leverages past accident data to help users navigate safer routes.
## Inspiration
The inexperience of young drivers contributes to the sharp rise in accidents and deaths as can be seen in the figure below.

This issue is further intensified by challenging driving conditions, road hazards, and the lack of real-time risk assessment tools. With limited access to information about accident-prone areas and little experience on the road, new drivers often unknowingly enter high-risk zones—something traditional route planners like Waze or Google Maps fail to address. However, new drivers are often willing to sacrifice speed for safer, less-traveled routes. Addressing this gap requires providing insights that promote safer driving choices.
## What It Does
We developed **DriveWise**, a route-planning app that empowers users to make informed decisions about the safest routes. The app analyzes 22 years of historical accident data and utilizes a modified A\* heuristic for personalized planning. Based on this data, it suggests alternative routes that are statistically safer, tailoring recommendations to the driver’s skill level. By factoring in variables such as driver skill, accident density, and turn complexity, we aim to create a comprehensive tool that prioritizes road safety above all else.
### How It Works
Our route-planning algorithm is novel in its incorporation of historical accident data directly into the routing process. Traditional algorithms like those used by Google Maps or Waze prioritize the shortest or fastest routes, often overlooking safety considerations. **DriveWise** integrates safety metrics into the edge weights of the routing graph, allowing the A\* algorithm to favor routes with lower accident risk.
**Key components of our algorithm include:**
* **Accident Density Mapping**: We map over 3.1 million historical accident data points to the road network using spatial queries. Each road segment is assigned an accident count based on nearby accidents.
* **Turn Penalties**: Sharp turns are more challenging for new drivers and have been shown to contribute to unsafe routes. We calculate turn angles between road segments and apply penalties for turns exceeding a certain threshold.
* **Skillfulness Metric**: We introduce a driver skill level parameter that adjusts the influence of accident risk and turn penalties on route selection. New drivers are guided through safer, simpler routes, while experienced drivers receive more direct paths.
* **Risk-Aware Heuristic**: Unlike traditional A\* implementations that use distance-based heuristics, we modify the heuristic to account for accident density, further steering the route away from high-risk areas.
By integrating these elements, **DriveWise** offers personalized route recommendations that adapt as the driver's skill level increases, ultimately aiming to reduce the likelihood of accidents for new drivers.
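A stripped-down version of the routing idea, assuming the OSMnx road network has been simplified to a NetworkX digraph with `length` and `accidents` attributes on each edge (the blending constant and the skill scaling below are illustrative, not the values tuned for the app):

```python
import networkx as nx

def add_safety_weights(graph: nx.DiGraph, skill: float) -> None:
    """Precompute a blended edge weight; lower skill -> accidents penalised more heavily."""
    for u, v, data in graph.edges(data=True):
        length = data.get("length", 1.0)        # metres, from the road network
        accidents = data.get("accidents", 0)    # mapped historical crashes on this segment
        data["risk_weight"] = length + (1.0 - skill) * 500.0 * accidents

def safest_route(graph: nx.DiGraph, origin, destination):
    # A zero heuristic keeps A* admissible; the app swaps in its risk-aware heuristic here
    return nx.astar_path(graph, origin, destination,
                         heuristic=lambda a, b: 0.0,
                         weight="risk_weight")
```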
## Accomplishments We're Proud Of
We are proud of developing an algorithm that not only works effectively but also has the potential to make a real difference in road safety. Creating a route-planning tool that factors in historical accident data is, to our knowledge, a novel approach in this domain. We successfully combined complex data analysis with an intuitive user interface, resulting in an app that is both powerful and user-friendly.
We are also kinda proud of our website. Learn more about us at [idontwannadie.lol](https://idontwannadie.lol/)
## Challenges We Faced
This was one of our first hackathons, and we faced several challenges. Having never deployed anything before, we spent a significant amount of time learning, debugging, and fixing deployment issues. Designing the algorithm to analyze accident patterns while keeping the route planning relatively simple added considerable complexity. We had to balance predictive analytics with real-world usability, ensuring that the app remained intuitive while delivering sophisticated results.
Another challenge was creating a user interface that encourages engagement without overwhelming the driver. We wanted users to trust the app’s recommendations without feeling burdened by excessive information. Striking the right balance between simplicity and effectiveness through gamified metrics proved to be an elegant solution.
## What We Learned
We learned a great deal about integrating large datasets into real-time applications, the complexities of route optimization algorithms, and the importance of user-centric design. Working with the OpenStreetMap and OSMnx libraries required a deep dive into geospatial analysis, which was both challenging and rewarding. We also discovered the joys and pains of deploying an application, from server configurations to domain name setups.
## Future Plans
In the future, we see the potential for **DriveWise** to go beyond individual drivers and benefit broader communities. Urban planners, law enforcement agencies, and policymakers could use aggregated data to identify high-risk areas and make informed decisions about where to invest in road safety improvements. By expanding our dataset and refining our algorithms, we aim to make **DriveWise** functional in more regions and for a wider audience.
## Links
* **Paper**: [Mathematical Background](https://drive.google.com/drive/folders/1Q9MRjBWQtXKwtlzObdAxtfBpXgLR7yfQ?usp=sharing)
* **GitHub**: [DriveWise Repository](https://github.com/pranavponnusamy/Drivewise)
* **Website**: [idontwannadie.lol](https://idontwannadie.lol/)
* **Video Demo**: [DriveWise Demo](https://www.veed.io/view/81d727bc-ed6b-4bba-95c1-97ed48b1738d?panel=share)
|
## Inspiration
Most of us have probably donated to a cause before — be it $1 or $1000. As a result, most of us here have probably also had the same doubts:
* who is my money really going to?
* what is my money providing for them...if it’s providing for them at all?
* how much of my money actually gets used by the individuals I'm trying to help?
* is my money really making a difference?
Carepak was founded to break down those barriers and connect more humans to other humans. We were motivated to create an application that could create a meaningful social impact. By creating a more transparent and personalized platform, we hope that more people can be inspired to donate in more meaningful ways.
As an avid donor, CarePak is a long-time dream of Aran’s to make.
## What it does
CarePak is a web application that seeks to simplify and personalize the charity donation process. In our original designs, CarePak was a mobile app. We decided to make it into a web app after a bit of deliberation, because we thought that we’d be able to get more coverage and serve more people.
Users are given options of packages made up of predetermined items created by charities for various causes, and they may pick and choose which of these items to donate towards at a variety of price levels. Instead of simply donating money to organizations,
CarePak's platform appeals to donors since they know exactly what their money is going towards. Once each item in a care package has been purchased, the charity has a complete package to send to those in need. Through donating, the user builds up a history, which is used by CarePak to recommend similar packages and charities based on the user's preferences. Users have the option to see popular donation packages in their area, as well as popular packages worldwide.
## How I built it
We used React with the Material UI framework, and NodeJS and Express on the backend. The database is SQLite.
## Challenges I ran into
We initially planned on using MongoDB but discovered that our database design did not seem to suit MongoDB too well and this led to some lengthy delays. On Saturday evening, we made the decision to switch to a SQLite database to simplify the development process and were able to entirely restructure the backend in a matter of hours. Thanks to carefully discussed designs and good teamwork, we were able to make the switch without any major issues.
## Accomplishments that I'm proud of
We made an elegant and simple application with ideas that could be applied in the real world. Both the front-end and back-end were designed to be modular and could easily support some of the enhancements that we had planned for CarePak but were unfortunately unable to implement within the deadline.
## What I learned
We learned to have a more careful selection process for tools and languages at the beginning of the hackathon, reviewing their suitability for building an application that achieves our planned goals. Any extra time we could have spent on the planning process would definitely have been more than saved by not having to make major backend changes near the end of the hackathon.
## What's next for CarePak
* We would love to integrate Machine Learning features from AWS in order to gather data and create improved suggestions and recommendations towards users.
* We would like to add a view for charities, as well, so that they may be able to sign up and create care packages for the individuals they serve. Hopefully, we would be able to create a more attractive option for them as well through a simple and streamlined process that brings them closer to donors.
|
## Inspiration
Our inspiration came from the fact that we are all relatively new drivers and terrified of busy intersections. Although speed is extremely important when travelling from one spot to another, safety should always be highlighted when it comes to the road, because car accidents are a leading cause of death in the world.
## What it does
When the website is first opened, the user can see the map with many markers indicating where a fatal collision happened. As noted in the legend at the top, the colours represent different levels of collision frequency. When the user specifies an address for the starting and ending location, our algorithm will find the safest route in order to avoid all potentially dangerous or busy intersections. However, if the route must pass through a dangerous intersection, our algorithm will still return it.
## How we built it
For the backend, we used JavaScript functions that took in the latitude and longitude of collisions in order to mark them with the Google Maps API. We also had several functions to not only check if the user's path would come across a collision, but also check alternatives in which the user would avoid that intersection.
We were able to find an Excel spreadsheet listing all of Toronto's fatal collisions in the past 5 years and copied that into a SQL database. That was then connected to Google Cloud SQL to act as a public host, and using Node.js, data was taken from it to mark the specified collisions.
For the frontend, we also used a mix of HTML, CSS, JavaScript and Node.js to serve the web app to the user. Once the request is made for the two specific locations, Express reads the .JSON file and sends information back to other JavaScript files in order to display the most optimal and safest path using the Google Maps API.
To host the website, a domain was registered on Domain.com and the site was launched by creating a simple virtual machine on Compute Engine. After creating a Linux machine, a basic Node.js server was set up and the domain was connected to Google Cloud DNS. After verifying that we did own our domain via a DNS record, a bucket containing all the files was stored on Google Cloud and set to be publicly accessible.
## Challenges we ran into
None of us had ever used JavaScript or Google Cloud services before, so a challenge that kept arising was our unfamiliarity with new functions (e.g. callbacks). In addition, it was difficult to set up and host the Domain.com domain since we were new to web hosting. Lastly, Google Cloud was challenging since we were mainly using it to combine all aspects of the project together.
## Accomplishments that we're proud of
We're very proud of our final product. Although we were very new to JavaScript, Google Cloud services, and APIs, our team is extremely proud of utilizing all the resources provided at the hackathon. We searched the web, as well as asked mentors for assistance. It was our determination and great time management that pushed us to ultimately finish the project.
## What we learned
We learned about Javascript, Google APIs, and Google Cloud services. We were also introduced to many helpful tutorials (through videos, and online written tutorials). We also learned how to deploy it to a domain in order for worldwide users to access it.
## What's next for SafeLane
Currently, our algorithm will return the most optimal path avoiding all dangerous intersections. However, there may be cases where the travel time needed is tremendously more than on the quickest path. We hope to only show paths that take at most 20-30% more travel time than the fastest path. The user will be given multiple options for paths they may take. If the user chooses a path with a potentially dangerous intersection, we will issue a warning stating all areas of danger.
We also believe that SafeLane can be expanded first to all of Ontario, and then eventually to a national/international scale. SafeLane can also be used by government/police departments to observe all common collision areas and investigate how to make the roads safer.
|
winning
|
## Idea
An improved way of delivering PDT (Photodynamic Therapy) for surface-level lesions. The idea is to create a modified array of LED lights (two types with spectral peak differences of 10 - 15nm), and vary the current through the lights slightly to produce an excitation light custom to each patient. Blue-light photos will be taken of the treatment area after each treatment to track progress. The goal is, along with the doctor's qualitative input, to implement a machine learning algorithm to deliver the most ideal spectrum of light to the patient depending on their previous progress, required light penetration depth, and the similarity of physiology with other patients.
## How we built it
Web app built using Angular, Node.js, MongoDB
Machine learning algorithm implemented using Python Scipy
Image processing implemented using Python PIL
Hardware built using Arduino
## Challenges we ran into
Had to change language from MATLAB to Python with less than 24 hours left due to compatibility issues
## Accomplishments that we're proud of
Built circuit from scratch
Machine learning algorithm built from scratch
Built custom imaging algorithm
One guy did the whole front end and back end of the web app
## What we learned
New languages: syntax, libraries associated with them
## What's next for Photodynamic Therapy
Optimization to be done with more data and fine tuning algorithms for edge cases
More realistic patient trials
|
## Inspiration
Inspired by Traditional Chinese Medicine (TCM), we are developing an AI-powered medical diagnostic service to address the needs of individuals who struggle to access healthcare. Our focus is on providing timely support for those unable to visit a doctor due to mobility issues, busy schedules, long wait times, or discomfort with in-person consultations.
## What it does
Our service utilizes computer vision to analyze facial and tongue images to assess the likelihood of specific health conditions. This allows healthcare professionals to deliver quick and accurate diagnoses, enabling patients to receive essential medical advice without unnecessary delays.
## How we built it
We integrated advanced AI algorithms with a user-friendly interface, ensuring that patients can easily upload their images for analysis. Our team collaborated with medical experts to align the AI assessments with TCM diagnostic methods, enhancing the service's credibility.
1. Frontend Development with React Native:
We opted to use **React Native** for building the frontend, taking advantage of its robust ecosystem for cross-platform mobile development. React Native's reusable components and modular architecture enabled us to efficiently create a responsive, high-performance UI that works seamlessly across multiple platforms. Its rich community and powerful libraries streamlined our development process, reducing overhead while maintaining flexibility for future scaling.
2. Backend Integration with **RESTful APIs**:
For communication between the backend and frontend, we implemented a RESTful API to handle data exchange. HTTP requests serve as the backbone of this interaction, allowing the frontend to send and receive data from the server efficiently.
3. Algorithm Related:
We captured an image of a face and an image of a tongue. For the face image, we first use a model to extract the eye region, cheek region, and lip region. Each of these regions is then processed using independently trained models to predict a corresponding result label, which is subsequently mapped to an observation description. The obtained observation descriptions are then passed to our custom-designed LLM API to generate a diagnostic response based on traditional Chinese medicine (TCM).
4. DeFang Deployment:
We have deployed the diagnostic service through **DeFang**.
## Challenges we ran into
We faced challenges in ensuring the accuracy of our AI models and the sensitivity of the data used for training. Additionally, addressing privacy concerns while delivering a seamless user experience required careful planning and implementation.
## Accomplishments that we're proud of
We successfully developed a prototype that demonstrates our technology's potential to reduce patient deterioration by providing timely diagnostic support. Our initial testing showed a promising accuracy rate, validating our approach and inspiring further development.
## What we learned
We learned the importance of combining traditional knowledge with modern technology to improve healthcare access. Engaging with potential users highlighted the necessity for quick, reliable diagnostics to prevent patient deterioration, reinforcing our mission.
## What's next for ChiBalance
Moving forward, we aim to refine our AI algorithms and expand our user base, targeting not only Chinese communities but also individuals worldwide facing similar healthcare barriers. Our goal is to democratize access to fast diagnostics and improve overall patient security through innovation.
|
## Inspiration
We came into the hackathon knowing we wanted to focus on a project concerning sustainability. While looking at problems, we found that lighting is a huge contributor to energy use, accounting for about 15% of global energy use (DOE, 2015). Our idea for this specific project came from the dark areas in the Pokémon video games, where the player only has limited visibility around them. While we didn't want to have as harsh of a limit on field of view, we wanted to be able to dim the lights in areas that weren't occupied, to save energy. We especially wanted to apply this to large buildings, such as Harvard's SEC, since oftentimes all of the lights are left on despite very few people being in the building. In the end, our solution is able to dynamically track humans, and adjust the lights to "spotlight" occupied areas, while also accounting for ambient light.
## Methodology
Our program takes video feed from an Intel RealSense D435 camera, which gives us both RGB and depth data. With OpenCV, we use Histogram of Oriented Gradients (HOG) feature extraction combined with a Support Vector Machine (SVM) classifier to detect the locations of people in a frame, which we then stitch with the camera's depth data to identify the locations of people relative to the camera’s location. Using user-provided knowledge of the room’s layout, we can then determine the position of the people relative to the room, and then use a custom algorithm to determine the power level for each light source in the room. We can visualize this in our custom simulator, developed to be able to see how multiple lights overlap.
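The person-detection step can be sketched with OpenCV's built-in HOG pedestrian detector (the stride, padding, and confidence threshold below are typical defaults rather than our tuned values, and the RealSense depth stitching is omitted):

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return (x, y, w, h) bounding boxes of people found in a BGR frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    weights = np.asarray(weights).ravel()
    # Keep only reasonably confident detections
    return [tuple(int(v) for v in box)
            for box, w in zip(boxes, weights) if w > 0.5]

# The centre of each box is then looked up in the RealSense depth frame to get
# the person's position relative to the camera.
```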
## Ambient Light Sensing
We also implement a sensor system to sense ambient light, to ensure that when a room is brightly lit energy isn't wasted in lighting. Originally, we set up a photoresistor with a SparkFun RedBoard, but after having driver issues with Windows we decided to pivot and use camera feedback from a second camera to detect brightness. To accomplish this we use a 3-step process, within which we first convert the camera's input to grayscale, then apply a box filter to blur the image, and then finally sample random points within the image and average their intensity to get an estimate of brightness. The random sampling boosts our performance significantly, since we're able to run this algorithm far faster than if we sampled every single point's intensity.
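A minimal version of that 3-step brightness estimate (the kernel size and sample count are illustrative):

```python
import cv2
import numpy as np

def estimate_brightness(frame, samples: int = 500) -> float:
    """Estimate ambient brightness (0-255) from a BGR camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # step 1: grayscale
    blurred = cv2.blur(gray, (15, 15))               # step 2: box filter
    h, w = blurred.shape
    ys = np.random.randint(0, h, samples)            # step 3: sample random points
    xs = np.random.randint(0, w, samples)
    return float(blurred[ys, xs].mean())             # average sampled intensities
```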
## Highlights & Takeaways
One of our group members focused on using the video input to determine people’s location within the room, and the other worked on the algorithm for determining how the lights should be powered, as well as creating a simulator for the output. Given that neither of us had worked on a large-scale software project in a while, and one of us had practically never touched Python before the start of the hackathon, we had our work cut out for us.
Our proudest moment was by far when we finally got our video code working, and finally saw the bounding box appear around the person in front of the camera as the position data started streaming across our terminal. However, between calibrating devices, debugging hardware issues, and a few dozen driver installations, we learned the hard way that working with external devices can be quite a challenge.
## Future steps
We've brainstormed a few ways to improve this project going forward: more optimized lighting algorithms could improve energy efficiency, and multiple cameras could be used to detect orientation and predict people's future states. A discrete ambient light sensor module could also be developed, for mounting anywhere the user desires. We also could develop a bulb socket adapter to retrofit existing lighting systems, instead of rebuilding from the ground up.
|
losing
|
## Inspiration
Large Language Models are what everyone in the AI industry is talking about. We decided to leverage LLMs and create an application that students around the world could use, especially those who might not have access to a professor and want to efficiently learn complex topics using their favorite STEM YouTube playlists and/or notes. EffiSTEM does just that and more, offering a customizable learning experience for each user based on their learning preferences and even hobbies, using analogies and language that a given user can better understand. Learning concepts in STEM has never been this easy.
## What it does
EffiSTEM allows users to specify their preferred way of learning material, which allows the large language model to approach explanations in a way that benefits the student. We discovered that if a user specifies their hobbies, for instance, the LLM will use that to its advantage and explain a technical concept using vocabulary and analogies that the given student understands. The user specifies a YouTube playlist and also has the choice of uploading notes for a STEM subject, which gives the Large Language Model access to relevant material that interests the student, i.e., the student is able to talk with the content embedded within the YouTube playlist and uploaded notes. EffiSTEM allows you to study better: whether you have an exam the next day or want to learn without an instructor, EffiSTEM is here to assist you at any time, anywhere.
## How we built it
Our team leveraged React.js on the front end and Flask on the back end. We used multiple APIs such as Google's Speech-to-Text API, MathPix for SOTA STEM OCR, and LangChain for LLM development and deployment. Instruction fine-tuning was used as a way to reduce model hallucinations (temperature is also set to 0).
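As a rough sketch of the LLM side using the classic LangChain interfaces (module paths differ across LangChain versions, and the prompt wording here is illustrative rather than a production prompt):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Temperature 0 keeps outputs deterministic and reduces hallucinated explanations
llm = OpenAI(temperature=0)  # reads OPENAI_API_KEY from the environment

prompt = PromptTemplate(
    input_variables=["concept", "hobby", "context"],
    template=("Explain {concept} to a student whose main hobby is {hobby}, "
              "using analogies from that hobby. Base the answer only on this "
              "material:\n{context}"),
)

explain_chain = LLMChain(llm=llm, prompt=prompt)
# answer = explain_chain.run(concept="gradient descent", hobby="basketball",
#                            context=retrieved_playlist_and_notes_text)
```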
## Challenges we ran into
Organizing a team of only 3 individuals to complete a very large project in such a short time frame.
Finding creative ways to use LLMs.
## Accomplishments that we're proud of
As a team we were able to learn and develop a unique application that students around the world would use in a heartbeat. EffiSTEM has the potential to reduce the amount of time students have to study, and greatly improves the experience students have when interacting with chatbots from a STEM standpoint.
## What we learned
We learned how to leverage LLMs to create a customized learning experience for the user. As a team, we are now much more comfortable working on large projects and implementing functionality from a full-stack development standpoint.
## What's next for EffiSTEM
The next step for EffiSTEM is to add the capability to receive feedback from the user and adapt the outputs based on that feedback (RLHF). Our team also wants to look into efficiently fine-tuning LLMs so that they are better adapted to our use case, and into further instruction fine-tuning our LLMs to reduce the chances of hallucination.
|
Driven by a shared passion for leveraging AI in education, our team of four embarked on a hackathon journey to revolutionize how learners access information. We envisioned a world where lengthy textbooks, lectures, and videos could be transformed into concise, engaging learning experiences.
Pooling our diverse skill sets, we collaborated on building an AI pipeline that ingested various lengthy media formats such as textbooks and lectures, leveraging Large Language Models (LLMs) to extract key insights and generate summaries. We harnessed the power of text-to-speech (TTS) AI engine LMNT to give voice to the summaries and combined them with informative visuals through an AI powered video renderer. The result is an informative video lecture that concisely captures the essence of the original content.
But our vision went beyond content delivery. We integrated question generation and AI-powered grading into the pipeline, allowing users to assess their comprehension and receive personalized feedback. All outputs – video lectures, questions, and feedback – were seamlessly integrated into a user-friendly web application deployed on the cloud.
Throughout the hackathon, we faced challenges as a team, from integrating complex APIs to handling massive amounts of data that resulted in lengthy query times. We tackled each obstacle with collaboration and creative problem-solving, drawing strength from our shared commitment to the project's potential impact. The moments of triumph, when we saw the pipeline seamlessly transform content and the web application deliver a smooth learning experience, reinforced our belief in the power of teamwork and the potential of AI to reshape education.
This hackathon was not just about building a project; it was about proving that a team of passionate individuals could leverage AI to make knowledge more accessible, engaging, and personalized. The journey continues as we explore new possibilities for adaptive learning, expanded media support, and global reach. Our hackathon experience has instilled in us the confidence that, together, we can transform education and empower learners worldwide.
|
## Inspiration
In a world full of information but limited time, with everyone so busy and occupied, we wanted to create a tool that helps students make the most of their learning—quickly and effectively. We imagined a platform that could turn complex material into digestible, engaging content tailored for a fast-paced generation.
## What it does
Lemme Learn More (LLM) transforms study materials into bite-sized, TikTok-style trendy and attractive videos or reels, flashcards, and podcasts. Whether it's preparing for exams or trying to stay informed on the go, LLM breaks down information into formats that match how today's students consume content. If you're an avid listener of podcasts during your commute to work, this is the best platform for you.
## How we built it
We built LLM using a combination of AI-powered tools like OpenAI for summaries, Google TTS for podcasts, and pypdf2 to pull data from PDFs. The backend runs on Flask, while the frontend is React.js, making the platform both interactive and scalable. We also used Fetch.ai AI agents, deployed on the blockchain testnet.
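A rough sketch of the notes-to-podcast path (the gTTS package stands in for the Google TTS service here, and `summarise_with_openai` is a hypothetical helper wrapping the OpenAI summarisation call):

```python
from PyPDF2 import PdfReader
from gtts import gTTS

def pdf_to_text(path: str) -> str:
    """Pull the raw text out of an uploaded study PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def text_to_podcast(script: str, out_path: str = "podcast.mp3") -> str:
    """Render a narration track from an already-summarised script."""
    gTTS(text=script, lang="en").save(out_path)
    return out_path

# notes = pdf_to_text("lecture_notes.pdf")
# script = summarise_with_openai(notes)   # hypothetical helper around the OpenAI call
# text_to_podcast(script)
```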
## Challenges we ran into
Due to the highly limited time, we ran into a deployment challenge: we had difficulty setting up on the Heroku cloud platform. It was an internal issue where we were supposed to change config files; I personally spent 5 hours on that, and my team spent some time as well. We could not figure it out by the time the hackathon ended, so we decided not to deploy today.
In the brainrot generator module, the audio timing could not be matched with the captions. This is something for future scope.
One of the other biggest challenges was integrating the sponsor Fetch.ai Agentverse AI agents, which we did locally and are proud of!
## Accomplishments that we're proud of
Our biggest accomplishment would be that we were able to run and integrate all 3 of our modules, and a working front-end too!!
## What we learned
We learned that we cannot know everything - and cannot fix every bug in a limited time frame. It is okay to fail and it is more than okay to accept it and move on - work on the next thing in the project.
## What's next for Lemme Learn More (LLM)
Coming next:
1. realistic podcast with next gen TTS technology
2. shorts/reels videos adjusted to the trends of today
3. Mobile app if MVP flies well!
|
losing
|
## Inspiration
We have family members who have autism, and Cinthya told us about her history with imaginary friends and how she interacted with them in her childhood, so we started researching these two topics and came up with "Imaginary Friends".
## What it does
We are developing an application that allows kids of all kinds to draw their imaginary friends, visualize them using augmented reality, and keep them in the app, with the objective of improving social skills, based on studies showing that imaginary friends help children build better social relationships and communication. This application is also capable of detecting moods like joy, sadness, etc., using IBM Watson Speech to Text and Watson Tone Analyzer, in order to give information of interest to the parents of these children or to their psychologist through a web page built with WIX showing statistical data and their imaginary friends.
## Challenges we ran into
We didn't know some of the technologies that we used, so we had to learn them in the process.
## Accomplishments that we're proud of
Finishing the WIX application and almost completing the mobile app.
## What we learned
How to use WIX and IBM Watson
## What's next for ImaginaryFriends
We think Imaginary Friends can go further if we implement the idea in theme parks such as Disneyland, with the idea that the kid could be guided by their own imaginary friend.
|
## Inspiration
Having previously volunteered and worked with children with cerebral palsy, we were struck by the monotony and inaccessibility of traditional physiotherapy. We came up with a cheaper, more portable, and more engaging way to deliver treatment by creating virtual reality games geared towards 12-15 year olds. We targeted this age group because puberty is a crucial period for retention of plasticity in a child's limbs. We implemented interactive games in VR using Oculus' Rift and Leap Motion's controllers.
## What it does
We designed games that targeted specific hand/elbow/shoulder gestures and used a leap motion controller to track the gestures. Our system improves motor skill, cognitive abilities, emotional growth and social skills of children affected by cerebral palsy.
## How we built it
Our games use of leap-motion's hand-tracking technology and the Oculus' immersive system to deliver engaging, exciting, physiotherapy sessions that patients will look forward to playing. These games were created using Unity and C#, and could be played using an Oculus Rift with a Leap Motion controller mounted on top. We also used an Alienware computer with a dedicated graphics card to run the Oculus.
## Challenges we ran into
The biggest challenge we ran into was getting the Oculus running. None of our computers had the ports and the capabilities needed to run the Oculus because it needed so much power. Thankfully we were able to acquire an appropriate laptop through MLH, but the Alienware computer we got was locked out of windows. We then spent the first 6 hours re-installing windows and repairing the laptop, which was a challenge. We also faced difficulties programming the interactions between the hands and the objects in the games because it was our first time creating a VR game using Unity, leap motion controls, and Oculus Rift.
## Accomplishments that we're proud of
We were proud of our end result because it was our first time creating a VR game with an Oculus Rift and we were amazed by the user experience we were able to provide. Our games were really fun to play! It was intensely gratifying to see our games working, and to know that it would be able to help others!
## What we learned
This project gave us the opportunity to educate ourselves on the realities of not being able-bodied. We developed an appreciation for the struggles people living with cerebral palsy face, and also learned a lot of Unity.
## What's next for Alternative Physical Treatment
We will develop more advanced games involving a greater combination of hand and elbow gestures, and hopefully get testing in local rehabilitation hospitals. We also hope to integrate data recording and playback functions for treatment analysis.
## Business Model Canvas
<https://mcgill-my.sharepoint.com/:b:/g/personal/ion_banaru_mail_mcgill_ca/EYvNcH-mRI1Eo9bQFMoVu5sB7iIn1o7RXM_SoTUFdsPEdw?e=SWf6PO>
|
## Inspiration
One charge of the average EV's battery uses as much electricity as a house uses every 2.5 days. This puts a huge strain on the electrical grid: people usually plug in their car as soon as they get home, during what is already peak demand hours. At this time, not only is electricity the most expensive, but it is also the most carbon-intensive; as much as 20% generated by fossil fuels, even in Ontario, which is not a primarily fossil-fuel dependent region. We can change this: by charging according to our calculated optimal time, not only will our users save money, but save the environment.
## What it does
Given an interval in which the user can charge their car (ex., from when they get home to when they have to leave in the morning), ChargeVerte analyses live and historical data of electricity generation to calculate an interval in which electricity generation is the cleanest. The user can then instruct their car to begin charging at our recommended time, and charge with peace of mind knowing they are using sustainable energy.
## How we built it
ChargeVerte was made using a purely Python-based tech stack. We leveraged various libraries, including requests to make API requests, pandas for data processing, and Taipy for front-end design. Our project pulls data about the electrical grid from the Electricity Maps API in real-time.
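The core scheduling step can be sketched as a rolling-window search over the plug-in interval (the series in the commented example uses made-up intensity numbers; the real values come from the Electricity Maps API):

```python
import pandas as pd

def cleanest_window(intensity: pd.Series, hours_needed: int) -> pd.Timestamp:
    """
    intensity: hourly carbon intensity (gCO2eq/kWh) indexed by timestamp,
               restricted to the interval the car is plugged in.
    Returns the start time of the contiguous window with the lowest average intensity.
    """
    # Rolling mean over the charge duration, relabelled at each window's start
    window_avg = intensity.rolling(window=hours_needed).mean().shift(-(hours_needed - 1))
    return window_avg.idxmin()

# Example with made-up numbers: charging takes 4 hours overnight
# series = pd.Series([180, 150, 90, 60, 55, 70, 120, 200],
#                    index=pd.date_range("2024-01-01 22:00", periods=8, freq="h"))
# print(cleanest_window(series, 4))   # -> 2024-01-02 00:00 in this toy example
```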
## Challenges we ran into
Our biggest challenges were primarily learning how to handle all the different libraries we used within this project, many of which we had never used before, but were eager to try our hand at. One notable challenge we faced was trying to use the Flask API and React to create a Python/JS full-stack app, which we found was difficult to make API GET requests with due to the different data types supported by the respective languages. We made the decision to pivot to Taipy in order to overcome this hurdle.
## Accomplishments that we're proud of
We built a functioning predictive algorithm, which, given a range of time, finds the timespan of electricity with the lowest carbon intensity.
## What we learned
We learned how to design critical processes related to full-stack development, including how to make API requests, design a front-end, and connect a front-end and backend together. We also learned how to program in a team setting, and the many strategies and habits we had to change in order to make it happen.
## What's next for ChargeVerte
A potential partner for ChargeVerte is power-generating companies themselves. Generating companies could package ChargeVerte and a charging timer, such that when a driver plugs in for the night, ChargeVerte will automatically begin charging at off-peak times, without any needed driver oversight. This would reduce costs significantly for the power-generating companies, as they can maintain a flatter demand line and thus reduce the amount of expensive, polluting fossil fuels needed.
|
winning
|
## What we built
We built trending-news-annotator to facilitate the creation of question-answer pairs for trending-news articles that appear on Facebook. The question-answer pairs are formatted to resemble the Stanford Question Answering Dataset (SQuAD) for machine comprehension.
## Submission
Our submission is contained in the Github repo <https://github.com/cherls/trending-news-annotator>
The repository contains two subfolders, app/ and dashboard/, which hold the backend Flask API and the React front end respectively.
|
## Inspiration
As young adults, we're navigating the new waves of independence and university life, juggling numerous responsibilities and a busy schedule. Amidst the hustle, we often struggle to keep track of everything, including our groceries. It's all too common for food to get pushed to the back of the fridge, only to be rediscovered when it's too late and has gone bad. That’s how we came up with preservia - a personal grocery smart assistant designed to help you save money, reduce food waste, and enjoy fresher meals.
## What it does
**Catalogue food conveniently:** preservia.tech allows grocery shoppers to keep track of their purchased food, ensuring less goes to waste. Users take photos of their receipts and the app will identify the food items bought, estimate reasonable expiry timeframes, and catalogue them within a user-friendly virtual inventory. Users also have the option of directly photographing their grocery items and the app will add them to the database as well, or even manually enter items.
**Inventory:** The user interface offers intuitive control, allowing users to delete items from the inventory at their will once items are used. Users can also request the application to reevaluate expiry dates if they suspect any mistakes in the AI predictions.
**Recipes:** Additionally, users can select food items in their grocery inventory and prompt the application to suggest a recipe based on selected ingredients.
## How we built it
Preservia.tech is built around leveraging Large Language Models (**Cohere**) as flexible databases and answer engines that give nuanced answers about expiration, even for the **most specific foods!** This allows us to enter any possible food item, and the AI systems will do their best to understand and classify it. The predictive power of Preservia.tech will only expand as LLMs grow.
**OpenAI’s GPT-4** was also used as a flexible system to accurately decipher cryptic and generally unstandardized receipts, a task probably impossible without such models. GPT-4 is also the engine generating recipes.
We employed Google’s **MediaPipe** for food item classification, and converted images to text with **API Ninjas** to read the receipts.
Our app is primarily built on a Python backend for computation, with Flask to handle the web app and MySQL as the database to track items. The web pages are written in HTML with some CSS and JavaScript.
We can connect it to a smartphone through a local network to take pictures more easily.
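As an illustration of the LLM-as-flexible-database idea, here is a hedged Python sketch of how a grocery item might be sent to Cohere for an expiry estimate; the client call, prompt wording, and parsing below are our own assumptions rather than the app's exact code:

```
import re
import cohere

co = cohere.Client("YOUR_API_KEY")  # assumed Cohere Python SDK client

def estimate_expiry_days(item):
    """Ask the LLM for a rough shelf life, in days, for a grocery item."""
    prompt = (
        f"How many days does '{item}' typically last in a home fridge or pantry? "
        "Answer with a single integer number of days."
    )
    response = co.chat(message=prompt)
    # Keep the first integer in the reply; fall back to a week if parsing fails.
    match = re.search(r"\d+", response.text)
    return int(match.group()) if match else 7

print(estimate_expiry_days("fresh spinach"))
```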
## Challenges we ran into
Working with cutting edge APIs and AI was a brand-new challenge for the entire team, so we had to navigate different types of models and documentations, overcoming integration hell to eventually arrive at a successful project. We also found prompt engineering hard, especially trying to get the most accurate results possible.
It was all of our first times working with Flask, so there was a learning curve there. Deploying our app to online services like Replit or Azure also posed a major challenge.
## Accomplishments that we're proud of
Our team is especially proud of successfully integrating such a broad range of AI features we had never worked with before. From image classification to Optical Character Recognition, and leveraging LLMs in novel ways as flexible databases and parsers.
For our team members, this marked the beginning of our deep dive into the realm of APIs and AI, making the experience all the more exciting. We were impressed with our quick progress in bringing the project to life. Finally, we're proud that our vision was realized in the app and our brand, preservia.tech, a clever play on the words — preserve [food] via technology.
## What we learned
Our team learned how to use different kinds of **APIs**, the workings and **applications of LLMs and image models**, as well as **Flask** and **MySQL** principles to build future projects with easy web interfaces.
Our team was new to working with APIs and image-to-text models like MediaPipe. To integrate the image-to-text, text classification, image classification, and text interpretation features into our project, we strengthened our fundamental coding skills and learned how to weave APIs in to create a viable product.
## What's next for preservia.tech
In the future, we hope to enhance our image recognition software to recognize multiple food items within a single image, and with better accuracy, surpassing the current capability of one at a time. Additionally, we’re looking into other AI LLM models that can exhibit high precision in estimating food expiry dates. We may even be able to train machine-learning models ourselves to elevate the accuracy of our backend expiry date prediction system. It’ll also be interesting to build a mobile app to make uploading content even easier, as well as accelerating the LLMs we are using.
|
This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built it
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we call for each user interaction (see the sketch after this list)
* We wrote our own Botfront database during the last day and night
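The servers here run on Node.js, but the shape of a call to the Rasa-based Botfront REST channel is easy to show; a minimal Python sketch against the standard Rasa REST webhook, with a placeholder host and sender ID:

```
import requests

BOTFRONT_URL = "http://localhost:5005/webhooks/rest/webhook"  # placeholder host

def ask_bot(user_id, text):
    """Send one user message to the Rasa-based bot and return its replies."""
    payload = {"sender": user_id, "message": text}
    resp = requests.post(BOTFRONT_URL, json=payload, timeout=10)
    resp.raise_for_status()
    # Rasa's REST channel answers with a list of {"recipient_id", "text", ...} objects.
    return [m.get("text", "") for m in resp.json()]

print(ask_bot("demo-user", "How do I reset my online banking password?"))
```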
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes and CORS brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data>
|
losing
|
## ♫♫ Inspiration ♫♫
Wouldn't everyone like someone to sit in their house and serenade them on the piano? With any song they want? Wouldn't it be easier for everyone to learn songs on the piano if there was a machine that simply listened to songs and taught it to them? Why should you listen to music through a speaker when you could have it played live in your house? Introducing **Happy Keys**.
## 🎹🎹 What it does 🎹🎹
Happy Keys will...
1. Listen to any song that is either played/sang at the machine or an audio file uploaded to our software.
2. Simplify the song to a series of notes that can be played on the piano and their respective durations
3. Play this song for you on the piano
## ⚙️⚙️ How it works ⚙️⚙️
Software (identifying how to play the music): Fourier analysis is a method for expressing a function as a sum of its periodic components. Using a Python library called Librosa, we apply this analysis to simplify a complicated song into the series of notes composing the melody, and we then measure how long each note is held to determine its duration. This information gets exported to an Arduino, which in turn controls the hardware.
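A hedged sketch of the note-extraction step using Librosa's pYIN pitch tracker; the file name and the note-grouping logic are illustrative, and the actual pipeline may differ:

```
import numpy as np
import librosa

y, sr = librosa.load("song.wav")  # placeholder audio file

# Frame-by-frame fundamental-frequency estimate (pYIN).
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"))
times = librosa.times_like(f0, sr=sr)

# Collapse consecutive frames of the same pitch into (note, start, end) segments.
segments = []
for t, freq, v in zip(times, f0, voiced):
    if not v or np.isnan(freq):
        continue
    name = librosa.hz_to_note(freq)
    if segments and segments[-1][0] == name:
        segments[-1][2] = t              # extend the current note
    else:
        segments.append([name, t, t])    # start a new note

for name, start, end in segments:
    print(f"{name}: {end - start:.2f}s")  # what gets handed to the Arduino
```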
Hardware (playing the notes):
The hardware consists entirely of 2 microcontrollers, 4 stepper motors, and their drivers.
We were fortunate to have had 2 stepper motors beforehand, which made access to resources a little easier, but still challenging. The power must come from multiple sources since the piano and the different motors have different consumption requirements. The piano must be plugged into the wall, and the NEMA motors use batteries in series to reach the recommended voltage. The 28BYJ stepper motors take advantage of the voltage regulators on the Arduinos and obtain their 5 volts from the batteries as well. An external battery pack is also used in case of insufficient current.
## 💪💪 Challenges we ran into 💪💪
1. **Integrating python and C++ code in the Arduino IDE**. Our software was coded in Python but we coded our hardware in C++. Unfortunately, Arduino IDE best works with one file in one language, so this integration was generally difficult.
2. **Power source resources and wire management**. We had trouble finding enough power, with the right voltage and current to power our hardware.
## 🎩🎩 Accomplishments that we're proud of 🎩🎩
1. Some of us learned to code in C++ or Python.
2. Powering the stepper motors
3. Using Python to control the Arduinos
4. Using the LT93D Arduino Nano and NEMA motor together
## What's next for Happy Keys
Currently, our project can only play 4 keys. This is due to a lack of materials and sufficient power. Going forward, we plan to simply replicate the same machinery across more keys, possibly over the whole keyboard. We also coded the project in Python, so the code is not as efficient as it could have been. Coding the project in C++ or another more efficient language would reduce the delay times we encountered and allow it to play very advanced songs, hopefully at the same speed as a human. Finally, we would like to attach our machinery to the inner workings of the piano in order to make our product work better.
|
## Inspiration
As a team of musicians, we wanted to share our love of music with others by creating a way to teach people who are traditionally unable to learn music due to special circumstances. This was our inspiration to create pianoEd, a program to teach visually impaired people how to play piano.
## What it does
Given some input music, the system uses a camera to detect whether any of the player's fingers is over the right note, and sends a vibration to the finger that is above the correct note.
## How we built it
We used OpenCV on a Logitech camera to locate positions of fingertips through contour detection. If any of the fingertips’ locations matched the location of the required note, a request was sent to the Arduino to cause the respective motor on that finger to vibrate.
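A hedged sketch of that loop: threshold the frame, treat the top of each contour as a fingertip, and nudge the Arduino over serial when a fingertip overlaps the target note's region (the camera index, serial port, and note coordinates are placeholders):

```
import cv2
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600)   # placeholder serial port
NOTE_REGION = (200, 300, 400, 480)              # x1, y1, x2, y2 of the target key

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        tip = (x + w // 2, y)                   # rough fingertip: top of the blob
        x1, y1, x2, y2 = NOTE_REGION
        if x1 <= tip[0] <= x2 and y1 <= tip[1] <= y2:
            arduino.write(b"1")                 # buzz the motor on that finger
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
```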
## Challenges we ran into
Since our project has both a hardware and software component, we ran into some challenges in creating and integrating the 2 components together.
For hardware:
* We had to build a circuit differently from our original plan due to a lack of critical circuit components
* We had to use alternative diodes due to the lack of proper diodes
* Power consumption was an issue, so we had to scale down to 1 motor for haptic feedback instead of 5 (one per finger)
* We did not have access to a 3D printer, so we had to improvise a camera stand
For software:
* We had difficulty setting up a server/client to send data from our OpenCV program to the ESP8266, so we adapted by using serial communication
* We had originally used a pretrained deep convolutional neural network to predict the locations of the fingertips. However, the computations for the predictions were too expensive for each frame, so we switched to using OpenCV's contour detection.
## Accomplishments that we're proud of
Despite all the challenges we faced in building this project, we never gave up on our idea, persisted as a team, and gave our all to try to make it work. We had to improvise many alternative solutions to the designs and plans we originally had, and we each slept a total of less than 6 hours this weekend working on this, but I am incredibly proud of my team for our joint effort and dedication to this project.
## What we learned
* How to work with advanced circuit components
* Some of us had the opportunity to work with hardware for the first time (and learned to solder)
* How computer vision works
* How to use OpenCV
* How to set up an ESP8266
* How to work as a team
## What's next for pianoEd
* Wireless communication between the ESP8266 and the OpenCV program
* Expansion to 5 fingers for haptic feedback instead of just 1
* Potential use in teaching people how to type
|
## Inspiration
The failure of a certain project using Leap Motion API motivated us to learn it and use it successfully this time.
## What it does
Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks.
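One simple way to do that comparison (not necessarily the exact check we use) is to resample both motion traces to the same length and accept when the average point-to-point distance stays under a threshold; a minimal sketch:

```
import numpy as np

def matches(recorded, attempt, tolerance=30.0):
    """Compare two hand-motion traces, each a list of (x, y, z) palm positions."""
    recorded = np.asarray(recorded, dtype=float)
    attempt = np.asarray(attempt, dtype=float)
    # Resample the attempt so both traces have the same number of samples.
    idx = np.linspace(0, len(attempt) - 1, num=len(recorded)).round().astype(int)
    attempt = attempt[idx]
    # Mean Euclidean distance between corresponding samples, in millimetres.
    error = np.linalg.norm(recorded - attempt, axis=1).mean()
    return error < tolerance
```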
## How we built it
We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors.
## Challenges we ran into
Learning the Leap Motion API and debugging was the toughest challenge to our group. Hot glue dangers and complications also impeded our progress.
## Accomplishments that we're proud of
All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it.
## What we learned
The Leap Motion API is more difficult than expected, and communicating between Python programs and Arduino programs is simpler than expected.
## What's next for Toaster Secure
- Wireless connections
- Sturdier building materials
- User-friendly interface
|
losing
|
## Inspiration
Vehicle emissions account for a large percentage of North America's yearly carbon dioxide emissions. However, despite many efforts in the past, people remain reluctant to proactively make even small contributions to this generational issue. We believe that this lack of proactivity is due to a lack of reliable metrics to inform people about the changes they are able to enact upon their livelihoods to improve the world. To help people better visualize the changes they are capable of, we were inspired to create our DeltaHacks project.
## What it does
ClearSky Travelytics is a travel companion that grants insightful information about travel routes. Beyond just location data, Travelytics shares detailed information to promote good health and well being as well as environmental stewardship. Our vision is to provide travelers with a convenient way to analyze their carbon footprint along with various other greenhouse gas metrics accrued throughout their journey. Additionally, Travelytics encourages users to proactively seek a healthier lifestyle by demonstrating how walking and cycling can be an effective form of exercise, posing the question: "does practicing environmental mindfulness necessarily have to come as a disadvantage to yourself?"
## How we built it
We used the Google Maps API platform to create a Progressive Web App along with basic HTML, CSS, and JavaScript.
We chose Google Maps because we knew that we would need to retrieve location data in order to plot trips for our users, calculate various environmental and wellness metrics, and display a map of the overall route.
After discussing relevant features and the tools we wanted to use, we set off creating a simple prototype of our web app's design, building up important features one at a time.
Eventually, we were able to produce a solid final iteration of the app after major reworks, and we used Google Firebase to host our app online.
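The app itself runs on JavaScript, but the metric math is simple enough to sketch in a few lines of Python; the per-kilometre emission factors and calorie figures below are rough illustrative values, not the numbers the app ships with:

```
# Approximate emission factors in kg CO2 per passenger-kilometre (illustrative).
EMISSION_FACTORS = {"driving": 0.192, "transit": 0.041, "bicycling": 0.0, "walking": 0.0}
# Rough calories burned per kilometre for the active modes (illustrative).
CALORIES_PER_KM = {"bicycling": 25, "walking": 55}

def route_metrics(distance_km, mode):
    """Carbon and wellness metrics for one route returned by the Maps API."""
    return {
        "co2_kg": round(distance_km * EMISSION_FACTORS[mode], 2),
        "co2_saved_vs_driving_kg": round(
            distance_km * (EMISSION_FACTORS["driving"] - EMISSION_FACTORS[mode]), 2),
        "calories": distance_km * CALORIES_PER_KM.get(mode, 0),
    }

print(route_metrics(5.0, "bicycling"))
```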
## Challenges we ran into
One of the most difficult challenges we ran into was deciphering the Google Maps API documentation. We had great trouble interpreting the code demos made available to us, and many times we felt we had found seemingly blatant contradictions and impossibilities. Our team came up with the idea for our project very early on, but we were unable to quickly agree upon a final design that met all our expectations.
Only at the very end did we produce something we are proud to display.
## Accomplishments that we're proud of
We are proud of coming together to work together on an ambitious project, despite the fact that none of us are very well-versed in back-end architecture.
We are proud of the effort we put into creating such a difficult project, and we are happy that our app might present genuine utility to people who are concerned about their carbon footprint and health.
We are exceedingly proud that our project did not fail in its final stages!
## What we learned
We learned that during web design, it is never a good idea to develop responsive design starting from the desktop version. A small screen translates much better to a big screen than the other way around, where you have to squeeze around elements to make them fit on the page!
We learned that it is important to ideate and come together with a cohesive idea of what the project should strive to be.
We learned that hackathons are exceedingly fun, and we are all looking forward to more events in the future.
## What's next for ClearSky Travelytics
One of our group members has expressed interest in maintaining the current version of our app, perhaps even improving upon it and marketing it in the future.
|
## Inspiration
Urban areas are increasingly polluted by traffic, and although people make an effort to ride share, cars still produce a large portion of urban carbon dioxide emissions. App-based directions offer an option between distance-optimized or time-optimized routes, but never present the possibility of an eco-friendly route. In addition, many people want to be more green, but don't know what factors most impact the carbon footprint of their vehicle.
## What it does
Our interface provides an information-based solution. By synthesizing millions of very precise data points from Ford's OpenXC platform, we can isolate factors like idling, tailgating, aggressive driving, and analyze their impacts on fuel efficiency. We strip the noise in raw data to find the important trends and present them in clear visualizations.
## How we built it
The bulk of data analysis is handled by python which processes the raw json data files. Then, we use pandas to streamline data into tables, which are easier to handle and filter. Given the extreme precision of the data points (records in fractions of a second), the data was initially very difficult to interpret. With the help of numpy, we were able to efficiently calculate MPG figures and overlay additional trends on several visuals.
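A hedged sketch of the MPG calculation on the flattened records; the column names and unit conversions are assumptions about the OpenXC data rather than the exact code in our notebooks:

```
import pandas as pd

def compute_mpg(df):
    """Rolling MPG from cumulative odometer (km) and fuel consumed (L) columns."""
    df = df.sort_values("timestamp")
    miles = df["odometer"].diff() * 0.621371                        # km -> miles
    gallons = df["fuel_consumed_since_restart"].diff() * 0.264172   # L -> gallons
    # Refills show up as negative fuel deltas; drop them along with zero-fuel frames.
    valid = gallons > 0
    return (miles[valid] / gallons[valid]).rolling(50, min_periods=1).mean()
```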
## Challenges we ran into
Data points for specific vehicle attributes are taken very irregularly and do not line up at the same timestamps. User interactions also skewed the data; for example, filling the tank produced negative fuel-usage figures. Column names were inconsistent across sets (e.g., Odometer vs. Fine Odometer Since Restart), and plenty of files had missing data for certain attributes, resulting in a scattering of NaNs across the dataset. Given this, we had to be clever with data filtering and condense the data so important metrics could be compared.
## Accomplishments that we're proud of
Beautiful visuals indicating clear trends in data. Clean filtering of extremely noisy raw data. A fun frontend that's visually appealing to the user.
## What we learned
Big data is not as easy as running a few functions on data that's simply downloaded from a database. Much of analytics is the filtration and data handling, and trends may often be surprising.
## What's next for MPGreen
We could integrate Maps and Directions APIs to find more eco friendly routes in order to directly provide the user with ways to reduce their carbon footprint. As it stands, our system is a strong tool to view and share information, but has potential to actually impact the environment.
|
## Inspiration
Understanding and expressing emotions can be a complex and challenging journey. Many people struggle to connect and identify with their feelings, which can be overwhelming and confusing. Let It Out was born from the desire to create a supportive space where users can explore and engage with their emotions, fostering self-awareness and personal growth. Whether Let It Out is used as a safe place to vent, to recount good memories, or to explore sources of anxiety, Let It Out is here to support users with any emotion they may be experiencing.
## What it does
The user is first prompted to record a vocal burst, to express their emotions in a purely primitive and natural way. Even when the user isn't sure what emotion lies at the source of this vocal expression, with the power of Hume AI, Let It Out analyzes the user's expression and identifies an emotion present in the user. The user is then routed to a personalized journal prompt and template, designed to guide the user through a short session of self-discovery, compassion, and reflection. The user can also view a ChatGPT analysis of past entries in their journal, which provides insights about the user's emotional experiences across the dates they have journaled.
## How we built it
Let It Out is a full stack web app. The front end is built with Next.js, Typescript, Chakra UI, and TinyMCE API for the custom journaling templates and embedded text editor. The back end is built with Python and Flask, which connects to Hume AI’s Streaming API to analyze the user’s vocal burst, OpenAI’s ChatGPT API to analyze the user’s journals, and MongoDB to integrate user authentication and store the user’s journals for future reflection.
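A small sketch of the routing step that happens once Hume's scores come back: pick the strongest emotion and return a matching journal prompt. The emotion names and prompts are placeholders, and the real app covers far more of Hume's categories:

```
PROMPTS = {
    "joy": "What made this moment feel so good? How can you revisit it later?",
    "sadness": "Describe what weighs on you right now, without judging it.",
    "anxiety": "List what is in your control today, and one small next step.",
    "anger": "What boundary feels crossed? What would a fair outcome look like?",
}

def pick_prompt(scores):
    """scores: emotion name -> probability, as parsed from the Hume response."""
    emotion = max(scores, key=scores.get)
    return emotion, PROMPTS.get(emotion, "Write freely about whatever comes to mind.")

print(pick_prompt({"joy": 0.12, "anxiety": 0.61, "sadness": 0.27}))
```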
## Challenges we ran into
The main challenges we ran into came from our first project idea, in which we faced API paywalls and a lack of ideas to go forward with. However, after attending Hume's workshop we made a quick transition into this project and adapted well. We also ran into issues with slow run times, which we greatly reduced by integrating Hume's Streaming API rather than the Batch API and optimizing other aspects of our application.
## Accomplishments we’re proud of
We are proud of how complete the project turned out. At first it felt vague and without much direction, but as we continued to develop it, new ideas formed and we managed to reach something fairly well-rounded.
## What we learned
We learned how to integrate modern technologies into our projects to create a rich and complex application. We learned how to connect different parts of a complex program, like building the front and back end separately but in parallel. Our beginner hacker learned how fun it can be to create in a fast-paced environment like a hackathon.
## What’s next for Let It Out
We want to improve the journal analysis ability of our application by incorporating an emotionally intelligent model rather than just base ChatGPT. We think we can do this by creating a custom model with Hume that provides the summarization and analysis capabilities of ChatGPT while also including the emotional intelligence of Hume's models.
|
partial
|
## Inspiration
Inspired by blockchain systems and the security of decentralization, P2Secure allows an intermediary to handle collateral when lending or borrowing assets, with security and credibility.
## What it does
P2Secure allows parties to place collateral in a contract with specified return dates when lending or borrowing assets, as well as a service fee. This insures the lender against risk and liability and ensures that the borrower will return the assets.
## How we built it
The contracts are made through Ethereum, a blockchain app platform, and stored on the blockchain. The front-end website accesses the blockchain by communicating with the backend, which is written in Express.js, and the data is retrieved from there. We used Mongoose to access the MongoDB database that holds user data, and we store each password encrypted with AES-256-CTR, decrypting it to compare with the password given from the front end. If authentication succeeds, a token is generated from the stored password and used in every API call by the front end to ensure security.
## Challenges we ran into
- Limited knowledge of blockchain applications
- Poor documentation of Ethereum
- Unforeseen JavaScript package interface changes
- Fatigue
## Accomplishments that we're proud of
- We completed most of the project
- Nobody collapsed from lack of sleep
## What we learned
- JS Promises
- Node package manager
- Resolving merge conflicts and how to designate tasks to minimize merge conflicts
- How blockchains work
## What's next for P2Secure
- Transact using a widely accepted cryptocurrency (Ethereum)
- Implement a reputation system for lenders and borrowers
|
## Inspiration
The counterfeiting industry is anticipated to grow to $2.8 trillion in 2022, costing 5.4 million jobs. These counterfeiting operations push real producers toward bankruptcy as cheaper knockoffs of unknown origin flood the market. To solve this issue, we developed a blockchain-powered service with tags that uniquely identify products, cannot be faked or duplicated, and provide transparency, since consumers today value not only the product itself but also the story behind it.
## What it does
Certi-Chain uses a Python-based blockchain to authenticate any product with a Certi-Chain NFC tag. Each tag contains a unique ID attached to the blockchain that cannot be faked. Users can tap their phones on any product containing a Certi-Chain tag to view the authenticity of the product through the Certi-Chain blockchain. Additionally, if the product is authentic, users can also see where the product's materials were sourced and assembled.
## How we built it
Certi-Chain uses a simple python blockchain implementation to store the relevant product data. It uses a proof of work algorithm to add blocks to the blockchain and check if a blockchain is valid. Additionally, since this blockchain is decentralized, nodes (computers that host a blockchain) have to be synced using a consensus algorithm to decide which version of the blockchain from any node should be used.
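A minimal sketch of the proof-of-work step at the heart of the chain, with the difficulty, block fields, and hashing scheme simplified for illustration:

```
import hashlib, json, time

DIFFICULTY = 4  # number of leading zeros required in the block hash

def hash_block(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(product_data, previous_hash):
    """Find a nonce so the block hash starts with DIFFICULTY zeros."""
    block = {"timestamp": time.time(), "data": product_data,
             "previous_hash": previous_hash, "nonce": 0}
    while not hash_block(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    block["hash"] = hash_block(block)
    return block

genesis = mine_block({"tag_id": "demo-001", "origin": "Toronto, CA"}, "0" * 64)
print(genesis["hash"])
```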
In order to render web pages, we used Python Flask with our web server running the blockchain to fetch relative information from the blockchain and displayed it to the user in a style that is easy to understand. A web client to input information into the chain was also created using Flask to communicate with the server.
## Challenges we ran into
For all of our group members this project was one of the toughest we have had. The first challenge was that, once our idea was decided, we quickly realized only one group member had the appropriate hardware to test our product in real life. Additionally, we deliberately chose an idea in which none of us had experience, which meant we had to spend a portion of our time understanding concepts such as blockchain and frameworks like Flask. Beyond those starting choices, we also hit several roadblocks: we were unable to get the blockchain running on the cloud for a significant portion of the project, hindering development. However, we were eventually able to work through these issues and achieve a product that exceeded our expectations going in. In the end we were all extremely proud of the result, and we all believe the struggle was definitely worth it.
## Accomplishments that we're proud of
Our largest achievement was that we accomplished all our wishes for this project in the short time span we were given. Not only did we learn Flask, more Python, web hosting, NFC interactions, blockchain, and more, we were also able to combine these ideas into one cohesive project. Being able to see the blockchain run for the first time after hours of troubleshooting was a magical moment for all of us. As for the smaller wins sprinkled throughout the weekend, we worked with physical NFC tags and created labels that we stuck on just about any product we had. We also came out more confident in the skills we already knew and developed new skills along the way.
## What we learned
In the development of Certi-Chain we learnt so much about blockchains, hashes, encryption, Python web frameworks, product design, and the counterfeiting industry. We came into the hackathon with only a rudimentary idea of what blockchains even were, and throughout the development process we came to understand the nuances of blockchain technology and security. As for web development and hosting, using the Flask framework to create pages populated with Python objects was certainly a learning curve, but one we overcame. Lastly, we learned more about each other and about the difficulties and joys of pursuing a project that seemed almost impossible at the start.
## What's next for Certi-Chain
Our team really believes that what we made in the past 36 hours can make a real tangible difference in the world market. We would love to continue developing and pursuing this project so that it can be polished for real world use. This includes us tightening the security on our blockchain, looking into better hosting, and improving the user experience for anyone who would tap on a Certi-Chain tag.
|
## Inspiration
**An·thro·po·cene:** relating to or denoting the current geological age, viewed as the period during which human activity has been the dominant influence on climate and the environment.
Every year Toronto residents accumulate 491,747 tonnes of waste that end up in landfills. A solution to this waste is an activity we've all learned since elementary school: recycling. But over 26% of Toronto's recycling is contaminated and has to be thrown away. Jim McKay, the General Waste Manager of the City of Toronto, stated that the city could save $600,000 to $1 million for each percentage-point decrease in contamination. 12 Hours wants to help everyday waste-conscious citizens by helping them identify where their garbage should end up.
## What it does
This project helps consumers reduce the everyday recycling mistakes that cause contamination and lead to large, avoidable losses for taxpayers. When unsure where a waste item belongs, consumers can bring up our web app and quickly take a picture. Our solution identifies where or how the trash should be disposed of.
We use Microsoft's Azure Computer Vision API to identify waste products, match them against Toronto's waste database, and classify whether or not they should be recycled.
## How we built it
We used JavaScript for the front end and Python's Flask framework for the back end, while incorporating Microsoft Azure's Computer Vision API and the Waste Wizard database.
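A hedged Python sketch of what that backend call can look like: tag the photo with the Computer Vision REST API, then look the top tags up in the waste data. The endpoint version, key handling, and the tiny waste table below are placeholders and assumptions, not the production code:

```
import requests

AZURE_ENDPOINT = "https://<region>.api.cognitive.microsoft.com/vision/v3.2/analyze"
AZURE_KEY = "YOUR_KEY"
# Tiny stand-in for Toronto's waste database.
WASTE_BINS = {"banana": "Green Bin", "bottle": "Blue Bin", "chip bag": "Garbage"}

def classify_waste(image_bytes):
    resp = requests.post(
        AZURE_ENDPOINT,
        params={"visualFeatures": "Tags"},
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    resp.raise_for_status()
    tags = [t["name"] for t in resp.json().get("tags", [])]
    for tag in tags:
        if tag in WASTE_BINS:
            return f"{tag}: {WASTE_BINS[tag]}"
    return "Item not found in the waste database"
```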
## Challenges we ran into
Android apps are a lot more complex than we thought they would be.
## Accomplishments that we're proud of
Incorporating machine learning APIs. Coming into the project, we were unsure whether to use something as scary as machine learning, but with the workshops provided over the weekend and the help of the mentors, we were able to implement these concepts while having fun.
## What we learned
Chip bags aren't actually recyclable... We learned a lot more about web development and a bit of Android development. Procrastination is a real thing, but so is last-minute stress work.
## What's next for 12 Hours
Translating the mobile web app into fully functioning iOS and Android apps.
|
partial
|
## Inspiration
Autism is the fastest growing developmental disorder worldwide – preventing 3 million individuals worldwide from reaching their full potential and making the most of their lives. Children with autism often lack crucial communication and social skills, such as recognizing emotions and facial expressions in order to empathize with those around them.
The current gold-standard for emotion recognition therapy is applied behavioral analysis (ABA), which uses positive reinforcement techniques such as cartoon flashcards to teach children to recognize different emotions. However, ABA therapy is often a boring process for autistic children, and the cartoonish nature of the flashcards doesn't fully capture the complexity of human emotion communicated through real facial expressions, tone of voice, and body language.
## What it does
Our solution is KidsEmote – a fun, interactive mobile app that leverages augmented reality and deep learning to help autistic children understand emotions from facial expressions. Children hold up the phone to another person's face – whether its their parents, siblings, or therapists – and cutting-edge deep learning algorithms identify the face's emotion as one of joy, sorrow, happiness, or surprise. Then, four friendly augmented reality emojis pop up as choices for the child to choose from. Selecting the emoji correctly matching the real-world face creates a shower of stars and apples in AR, and a score counter helps gamify the process to encourage children to keep on playing to get better at recognizing emotions.
The interactive nature of KidsEmote helps make therapy seem like nothing more than play, increasing the rate at which children improve their social abilities. Furthermore, compared to cartoon faces, the real facial expressions that children with autism recognize in KidsEmote are exactly the same as the expressions they'll face in real life – giving them greater security and confidence to engage with others in social contexts.
## How we built it
KidsEmote is built on top of iOS in Swift, and all augmented reality objects were generated through ARKit, which provided easy to use physics and object manipulation capabilities. The deep learning emotion classification on the backend was conducted through the Google Cloud Vision API, and 3D models were generated through Blender and also downloaded from Sketchfab and Turbosquid.
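The app itself is written in Swift, but the emotion-classification call is easiest to illustrate with the Cloud Vision Python client; a minimal sketch, assuming the google-cloud-vision library and default credentials:

```
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def detect_emotion(image_bytes):
    """Return the most likely emotion for the first face in the image."""
    response = client.face_detection(image=vision.Image(content=image_bytes))
    if not response.face_annotations:
        return "no face found"
    face = response.face_annotations[0]
    likelihoods = {
        "joy": face.joy_likelihood,
        "sorrow": face.sorrow_likelihood,
        "anger": face.anger_likelihood,
        "surprise": face.surprise_likelihood,
    }
    # Likelihood enums run from VERY_UNLIKELY (1) to VERY_LIKELY (5).
    return max(likelihoods, key=likelihoods.get)
```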
## Challenges we ran into
Since it was our first time working with ARKit and mobile development, learning the ins and outs of Swift as well as creating augmented reality objects was truly an eye-opening experience. Also, since the backend calls to the Vision API were asynchronous, we had to carefully plan and track the flow of inputs (i.e., taps) and outputs for our app. Finding suitable 3D models also required much work; most online models we found were quite costly, so we ultimately generated our own 3D facial-expression emoji models with Blender.
## Accomplishments that we're proud of
Building a fully functional app, working with Swift and ARKit for the first time, successfully integrating the Vision API into our mobile backend, and using Blender for the first time!
## What we learned
ARKit, Swift, physics for augmented reality, and using 3D modeling software. We also learned how to tailor the user experience of our software specifically to our audience to make it as usable and intuitive as possible. For instance, we focused on minimizing the amount of text and making sure all taps would function as expected inside our app.
## What's next for KidsEmote
KidsEmote represents a complete digital paradigm shift in the way autistic children are treated. While much progress has been made in the past 36 hours, KidsEmote opens up so many more ways to equip children with autism with the necessary interpersonal skills to thrive in social situations. For instance, KidsEmote can be easily extended to help autistic children distinguish between different emotions from the tone of one's voice, and understand another's mood based on their body gestures. Integration between all these modalities only yields more avenues for exploration further down the line. In the future, we also plan on incorporating video streaming into KidsEmote to enable autistic children from all over the world to play with each other and meet new friends. This would greatly facilitate social interaction on an unprecedented scale between children with autism, since they might not otherwise have the opportunity in traditional social contexts. Lastly, therapists can also instruct parents to use KidsEmote as an at-home tool to track the progress of their children – helping parents become part of the process and truly understand how their kids are improving first-hand.
|
## Inspiration
My cousin recently had children, and as they enter their toddler years, I saw how they use their toys. They buy them, use them for a couple weeks, but then get bored. To be honest, however, I can't blame them. Currently, toys are used in one way, with no real interaction coming from both the child and the toy. What I really wanted to do was bring Toy Story to real life, and allow children to talk and learn from their toys, maximizing their happiness and education.
## What it does
We use Hume's API to allow you to talk with your toys and have full blown conversations with them. We utilize prompt engineering and allow you to have math lessons embedded within their choose your own adventure stories.
## How we built it
We embedded a raspberry pi, speaker, and microphone in the animal, which hosts a web app through which it can speak.
## Challenges we ran into
The Hume API was sometimes down, which was tough to navigate and halted our ability to make a self-improving prompt (tracking the progress of kids' lessons). We also had a broken Raspberry Pi for the first 12 hours of the hackathon. Hyperbolic randomly stopped working for us, so our interactive story pictures stopped working.
## Accomplishments that we're proud of
Learning about how to prompt image generation (feeding the text transcription into Hyperbolic)
## What we learned
Image generation, prompt engineering, function calling
## What's next for Teddy.AI
Memory and lesson progress reports
|
## Inspiration
Touch screen devices have been used increasingly in the domain of cognitive healing, as an aid for children with autism, Down syndrome, and traumatic brain injury, and to help treat amnesia, dementia, and post-stroke symptoms.
There is generally no one-size-fits-all solution to help rehabilitate these patients.
This is validated by a list of methods to treat amnesia from psychologytoday.com, which includes hypnosis, energy psychology, cognitive therapy, nutrition, and technical assistance from an iPhone, iPad, or tablet as potential treatment methods.
## What it does
CORTEX features two exercises for people with different cognitive needs: autistic people learning to read and understand facial expressions, and people who suffer cognitive impairment following intermediate and severe strokes.
The first group can use the game called *Sweet Emotion* to correctly identify people's facial expressions with the power of their camera. The second game, *Magnifying Glass 2*, lets people point their phone at various objects and attempt to correctly identify them through a quiz-like interface.
All the user's progress can be tracked via a separate web interface.
## How we built it
We used Swift to build the iOS app, the InceptionV3 model for object classification, and CNNEmotions for expression classification. We used React to build the web app, and the app posts its information to a Firebase backend, which we use for authentication and as a database.
## Challenges we ran into
The ML models weren't as accurate in portrait mode as expected, and we faced design challenges since we were designing for a group with special requirements.
## Accomplishments that we're proud of
We implemented accessibility services for illiterate users, and the object detection works quite well.
## What we learned
Learned many skills in Firebase, more advanced iOS development skills and web development skills
## What's next for Cortex
User testing, more background research, beginning to refine the exercises based on user input and academic research.
|
partial
|
## Inspiration
A couple of weeks ago, a friend was hospitalized for taking Advil–she accidentally took 27 pills, which is nearly 5 times the maximum daily amount. Apparently, when asked why, she responded that that's just what she had always done and how her parents had told her to take Advil. The maximum amount of Advil you are supposed to take is 6 pills per day before it becomes a hazard to your stomach.
#### PillAR is your personal augmented reality pill/medicine tracker.
It can be difficult to remember when to take your medications, especially when there are countless different restrictions for each different medicine. For people that depend on their medication to live normally, remembering and knowing when it is okay to take their medication is a difficult challenge. Many drugs have very specific restrictions (e.g., no more than one pill every 8 hours, 3 max per day, take with food or water), which can be hard to keep track of. PillAR helps you keep track of when you take your medicine and how much you take, keeping you safe by ensuring you don't over- or under-dose.
We also saw a need for a medicine tracker due to the aging population and the number of people who have many different medications that they need to take. According to health studies in the U.S., 23.1% of people take three or more medications in a 30 day period and 11.9% take 5 or more. That is over 75 million U.S. citizens that could use PillAR to keep track of their numerous medicines.
## How we built it
We created an iOS app in Swift using ARKit. We collect data on the pill bottles from the iPhone camera and pass it to the Google Vision API. From there we receive the name of the drug, which our app then forwards to a Python web-scraping backend that we built. This web scraper collects usage and administration information for the medications we examine, since this information is not available in any accessible API or queryable database. We then use this information in the app to keep track of pill usage and power the core functionality of the app.
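A hedged sketch of the scraping backend's shape: Flask receives the drug name that the Vision API read off the bottle and scrapes a dosage page for it. The URL pattern and HTML selectors below are hypothetical placeholders, since the real source site isn't named here:

```
import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/dosage/<drug_name>")
def dosage(drug_name):
    # Hypothetical drug-information page; the real scraper targets a different site.
    page = requests.get(f"https://example-drug-info.org/drugs/{drug_name.lower()}")
    soup = BeautifulSoup(page.text, "html.parser")
    section = soup.find("div", {"id": "dosage"})          # hypothetical selector
    text = section.get_text(" ", strip=True) if section else "No dosage info found"
    return jsonify({"drug": drug_name, "dosage": text})

if __name__ == "__main__":
    app.run()
```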
## Accomplishments that we're proud of
This is our first time creating an app using Apple's ARKit. We also did a lot of research to find a suitable website to scrape medication dosage information from and then had to process that information to make it easier to understand.
## What's next for PillAR
In the future, we hope to be able to get more accurate medication information for each specific bottle (such as pill size). We would like to improve the bottle recognition capabilities, by maybe writing our own classifiers or training a data set. We would also like to add features like notifications to remind you of good times to take pills to keep you even healthier.
|
**check out the project demo during the closing ceremony!**
<https://youtu.be/TnKxk-GelXg>
## Inspiration
On average, half of patients with chronic illnesses like heart disease or asthma don't take their medication. Reports estimate that poor medication adherence could be costing the country $300 billion in increased medical costs.
So why is taking medication so tough? People get confused and people forget.
When the pharmacy hands over your medication, it usually comes with a stack of papers and stickers on the pill bottles, and in addition the pharmacist tells you a bunch of mumbo jumbo that you won't remember.
<http://www.nbcnews.com/id/20039597/ns/health-health_care/t/millions-skip-meds-dont-take-pills-correctly/#.XE3r2M9KjOQ>
## What it does
The solution:
How are we going to solve this? With a small scrap of paper.
NekoTap helps patients access important drug instructions quickly and when they need it.
On the pharmacist’s end, he only needs to go through 4 simple steps to relay the most important information to the patients.
1. Scan the product label to get the drug information.
2. Tap the cap to register the NFC tag. Now the product and pill bottle are connected.
3. Speak into the app to make an audio recording of the important dosage and usage instructions, as well as any other important notes.
4. Set a refill reminder for the patients. This will automatically alert the patient once they need refills, a service that most pharmacies don’t currently provide as it’s usually the patient’s responsibility.
On the patient’s end, after they open the app, they will come across 3 simple screens.
1. First, they can listen to the audio recording containing important information from the pharmacist.
2. If they swipe, they can see a copy of the text transcription. Notice how there are easy to access zoom buttons to enlarge the text size.
3. Next, there’s a youtube instructional video on how to use the drug in case the patient need visuals.
Lastly, the menu options here allow the patient to call the pharmacy if he has any questions, and also set a reminder for himself to take medication.
## How I built it
* Android
* Microsoft Azure mobile services
* Lottie
## Challenges I ran into
* Getting the backend to communicate with the clinician and the patient mobile apps.
## Accomplishments that I'm proud of
Translations to make it accessible for everyone! Developing a great UI/UX.
## What I learned
* UI/UX design
* android development
|
## Inspiration
When you are prescribed medication by a doctor, it is crucial that you complete the dosage cycle in order to ensure that you recover fully and quickly. Unfortunately, forgetting to take your medication is something that we have all done. Failing to run the full course of medicine often results in a delayed recovery and leads to more suffering through the painful and annoying symptoms of illness. This has inspired us to create Re-Pill. With Re-Pill, you can automatically generate scheduling and reminders to take your medicine by simply uploading a photo of your prescription.
## What it does
A user uploads an image of their prescription, which is then processed by image-to-text algorithms that extract the details of the medication. Data such as the name of the medication, its dosage, and the total number of tablets are stored and presented to the user. The application synchronizes with Google Calendar and automatically sets reminders for taking pills in the user's schedule based on the dosage instructions on the prescription. The user can view their medication details at any time by logging into Re-Pill.
## How we built it
We built the application using the Python web framework Flask. Simple endpoints were created for login, registration, and viewing the user's medication. User data is stored in Google Cloud's Firestore. Images are uploaded and sent to a processing endpoint through an HTTP request, which delivers the medication information. Reminders are set using the Google Calendar API.
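A hedged sketch of the reminder step using the Google Calendar API Python client; credential setup is omitted and the event fields are illustrative:

```
from googleapiclient.discovery import build

def schedule_dose(creds, medication, start_iso, end_iso):
    """Insert one 'take your medication' event into the user's primary calendar."""
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": f"Take {medication}",
        "start": {"dateTime": start_iso, "timeZone": "America/Toronto"},
        "end": {"dateTime": end_iso, "timeZone": "America/Toronto"},
        "reminders": {"useDefault": False,
                      "overrides": [{"method": "popup", "minutes": 10}]},
    }
    return service.events().insert(calendarId="primary", body=event).execute()

# Example (creds obtained through the usual OAuth flow):
# schedule_dose(creds, "Amoxicillin 500mg", "2024-01-01T08:00:00", "2024-01-01T08:15:00")
```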
## Challenges we ran into
We initially struggled to figure out the right tech stack to use for building the app. We struggled with Android development before settling on a web app. One big challenge we faced was merging all the different parts of our application into one smoothly running product. Another challenge was finding a method to inform/notify the user of their medication time through a web-based application.
## Accomplishments that we're proud of
There are a couple of things that we are proud of. One of them is how well our team was able to communicate with one another. All team members knew what the others were working on, and the work was divided so that each teammate worked on the project using their strengths. One important accomplishment is that we were able to overcome a huge time constraint and come up with a prototype of an idea that has the potential to change people's lives.
## What we learned
We learned how to set up and leverage Google APIs, manage non-relational databases, and process images to text using various Python libraries.
## What's next for Re-Pill
The next steps for Re-Pill would be to move to a mobile environment and explore useful features that we can implement. Building a mobile application would make it easier for the user to stay connected with the schedules and upload prescription images at a click of a button using the built in camera. Some features we hope to explore are creating automated activities, such as routine appointment bookings with the family doctor and monitoring dietary considerations with regards to stronger medications that might conflict with a patients diet.
|
winning
|
# Highlights
A product of [YHack '16](http://www.yhack.org/). Built by Aaron Vontell, Ali Benlalah & Cooper Pellaton.
## Table of Contents
* [Overview](#overview)
* [Machine Learning and More](#machine-learning-and-more)
* [Our Infrastructure](#our-infrastructure)
* [API](#api)
## Overview
The first thing you're probably thinking is what this ambiguously named application is, and secondly, you're likely wondering why it has any significance. Firstly, Highlights is the missing component of your YouTube life, and secondly, it's important because we leverage machine learning to find out what content is most important in a particular piece of media, in a way it has never been done before.
Imagine this scenario: you subscribe to 25+ YouTube channels but over the past 3 weeks you simply haven't had the time to watch videos because of work. Today, you decide that you want to watch one of your favorite vloggers, but realize you might lack the context to understand what has happened in her/his life since you last watched, which led her/him to this current place. Here enters Highlights. Simply download the Android application, log in with your Google credentials, and you will be able to watch the so-called *highlights* of your subscriptions for all of the videos which you haven't seen. Rather than investing hours in watching your favorite vlogger's past weeks' worth of videos, you can get caught up in 30 seconds to 1 minute by simply being presented with all of the most important content in those videos in one place, seamlessly.
## Machine Learning and More
Now that you understand the place and significance of Highlights, a platform that can distill any media into bite-sized chunks that can be consumed quickly in order of importance, it is important to explain the technical details of how we achieve such a gargantuan feat.
Let's break down the pipeline.
1. We start by accessing your Google account within the YouTube scope and get a list of your current subscriptions, 'activities' such as watched videos, comments, etc., your recommended videos and your home feed.
2. We take this data and extract the key features from it. Some of these include:
* The number of videos watched on a particular channel.
* The number of likes/dislikes you have and the categories on which they center.
* The number of views a particular video has/how often you watch videos after they have been posted.
* Number of days after publication. This is most important in determining the significance of a recommended video to a particular user.
We go about this process for every video that the user has watched, or which exists in his or her feed to build a comprehensive feature set of the videos that are in their own unique setting.
3. We proceed by feeding the data and probabilities from the aforementioned investigation into a new machine learning model, which we use to determine the likelihood of a user watching any particular recommended video, etc.
4. For each video in the set we are about to iterate over, the video is either a recommended watch or a video in the user's feed which she/he has not seen. The key to this process is a system we like to call 'video quantization'. In this system we break each video down into its components. We look at the differences between images and end up analyzing roughly every 2, 3, or 4 frames in a video. This reduces the amount of video that we need to analyze while ensuring that we don't miss anything important. As you will note here, a lot of the processes we undertake have bases in very comprehensive and confusing mathematics. We've done our best to keep the math out of this, but know that one of the most important tools in our toolset is the exponential moving average.
5. This is the most important part of our entire process: scene detection. Distilled down to its most basic principles, we use features like lighting, edge/shape detection, and more to determine how similar or different every frame is from the next. Using this methodology of finding the frames that differ, we coin each change in setting a 'scene'. Now, 'scenes' by themselves are not exciting, but coupled with our knowledge of the context of the video we are analyzing, we can come up with very apt scenes. For instance, in a horror movie we know we would be looking for something like 5-10 seconds of differences between the first frame of that series and the last frame; this is what is referred to as a 'jump' or 'scare' cut. So, using our exponential moving average and background subtraction, we are able to figure out the changes in between and validate scenes (a sketch of this frame-differencing idea appears after this list).
6. We pass this now deconstructed video into the next part of our pipeline, where we generate unique vectors for each scene to be used in the next stage. What we are looking for here are the key features that define a frame. We are trying to understand, for example, what makes a 'jump' cut a 'jump' cut. Features that we most commonly look for include:
* Intensity of an analyzed area.
+ EX: The intensity of a background coloring vs edges, etc.
* The length of each scene.
* Background.
* Speed.
* Average Brightness
* Average background speed.
* Position
* etc.
Armed with this information we are able to derive a unique column vector for each scene, which we then feed into our neural net.
7. The meat and bones of our operation: the **neural net**! What we do here is not terribly complicated. At its most basic level, we take each of the above column vectors and feed it into this specialized machine learning model. What we are looking for is a sort order for these features. Our initial training set, a group of 600 YouTube videos which @Ali spent a significant amount of time labeling, is used to help advance this net. The gist of what we are trying to do is this: given a certain vector, we want to determine its significance in the context of the YouTube universe in which each of our users lives. To do this we abide by a semi-supervised learning model in which we look over the shoulder of the model to check the output. As time goes on, this model begins to tweak its own parameters and produce the best possible output given any input vector.
8. Lastly, now having a sorted order of every scene in a user's YouTube universe, we go about reconstructing the top 'highlights' for each user. IE in part 7 of our pipeline we figured out which vectors carried the greatest weight. Now we want to turn these back into videos that the user can watch, quickly, and derive the greatest meaning from. Using a litany of Google's APIs we will turn the videoIds, categories, etc into parameterized links which the viewer is then shown within our application.
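To make step 5 concrete, here is a hedged Python/OpenCV sketch of the frame-differencing core: sample every few frames, keep an exponential moving average of the inter-frame difference, and flag a new scene when a frame jumps well above that average (the sampling rate and thresholds are illustrative, not the tuned values in our pipeline):

```
import cv2

def detect_scenes(path, step=3, alpha=0.1, jump=3.0):
    """Return frame indices where a scene cut likely occurs."""
    cap = cv2.VideoCapture(path)
    cuts, prev, ema, i = [], None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:                      # analyze every `step`-th frame
            gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = cv2.absdiff(gray, prev).mean()
                if ema is not None and diff > jump * max(ema, 1e-3):
                    cuts.append(i)             # big jump vs the running average: new scene
                ema = diff if ema is None else (1 - alpha) * ema + alpha * diff
            prev = gray
        i += 1
    cap.release()
    return cuts
```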
## Our Infrastructure
Our service is currently broken down into the following core components:
* Highlights Android Application
+ Built and tested on Android 7.0 Nougat, and uses the YouTube Android API Sample Project
+ Also uses various open source libraries (OkHTTP, Picasso, ParallaxEverywhere, etc...)
* Highlights Web Service (Backs the Pipeline)
* The 'Highlighter' or rather our ML component
## API
### POST
* `/api/get_subscriptions`
This requires the client to `POST` a body of the nature below. This will then trigger the endpoint to go and query the YouTube API for the user's subscriptions, and then build a list of the most recent videos which he/she has not seen yet.
```
{
"user":"Cooper Pellaton"
}
```
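As a rough illustration, a client could call this endpoint as follows (the base URL is a placeholder, not our production host):

```
import requests

BASE_URL = "http://localhost:5000"  # placeholder host for illustration

resp = requests.post(f"{BASE_URL}/api/get_subscriptions",
                     json={"user": "Cooper Pellaton"})
resp.raise_for_status()
print(resp.json())
```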
* `/api/get_videos`
*DEPRECATED*. This endpoint requires the client to `POST` a body similar to that below and then will fetch the user's most recent activity in list form from the YouTube API.
```
{
"user":"Cooper Pellaton"
}
```
### GET
* `/api/fetch_oauth`
Optimally, calling this endpoint prompts the user to enter her/his Google credentials, authorizing the application to access her/his YouTube account.
- As currently architected, a user's entrance into our platform immediately triggers learning on their videos. We have since *DEPRECATED* our ML training endpoint in favor of one `GET` endpoint to retrieve this info.
* `/api/fetch_subscriptions`
To get the subscriptions for the current user in list form, simply send a `GET` request to this endpoint. Additionally, a call here will trigger the ML pipeline to begin based on the output of the subscriptions and user data.
* `/api/get_ml_data`
For each user there is a queue of their Highlights. When you query this endpoint the response will be the return of a dequeue operation on said queue. Hence, you are guaranteed to never have overlap or miss a video.
- To note: in testing we have a means to bypass the dequeue and instead append, constantly, directly to the queue so that you can ensure you are retrieving the appropriate response.
|
## Inspiration
Over the summer, one of us was reading about climate change and realised that most of the news articles he came across were very negative, affecting his mental health to the point that it was hard to think of the world as a happy place. However, one day he watched a YouTube video that talked about the hope that exists in that sphere and realised the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given that we've seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using Cohere), and once it passes a certain threshold (we store the scores in CockroachDB), we show them a positive news article in the same topic area they were reading about.
We do this by doing text analysis with a Chrome extension front end and a Flask + CockroachDB backend that uses Cohere for natural language processing.
Since a lot of people also listen to news via video, we also created a part of our chrome extension to transcribe audio to text - so we included that into the start of our pipeline as well! At the end, if the “negativity threshold” is passed, the chrome extension tells the user that it’s time for some good news and suggests a relevant article.
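A minimal sketch of that threshold logic (`classify_sentiment`, `get_score`, `set_score`, and `find_positive_article` are placeholders standing in for the Cohere and CockroachDB calls described below, and the threshold value is illustrative):

```
NEGATIVITY_THRESHOLD = 5.0  # illustrative value

def handle_page_text(user_id: str, text: str) -> dict:
    """Score a page the user just read and decide whether to suggest good news."""
    # classify_sentiment stands in for the finetuned co:here classifier; assumed
    # to return a negativity score between 0 and 1 plus a topic label.
    negativity, topic = classify_sentiment(text)

    # get_score / set_score stand in for reads and writes of the running total in CockroachDB.
    total = get_score(user_id) + negativity
    set_score(user_id, total)

    if total >= NEGATIVITY_THRESHOLD:
        set_score(user_id, 0.0)  # reset after recommending
        return {"suggest": True, "article": find_positive_article(topic)}
    return {"suggest": False}
```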
## How we built it
**Frontend**
We used a Chrome extension for the front end, which included handling the user experience and making sure that our application actually gets the user's attention while being useful. We used React.js, HTML and CSS to handle this. There were also a lot of API calls, because we needed to transcribe the audio from the Chrome tabs and provide that information to the backend.
**Backend**
## Challenges we ran into
It was really hard to make the Chrome extension work because of the many security constraints that websites have. We thought that building the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a cockroach db node and client and used it to store all of our classification data
4) Added support for multiple users of the extension that can leverage the use of cockroach DB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use cockroach DB in order to create a database of news articles and topics that also have multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing cockroach db tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not.
2) We also would like to finetune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports etc) and show a related positive article in the same category. Given more time, we would want to provide more semantically similar positive article suggestions to those that the author is reading. We could use cohere or other large language models to potentially explore that.
|
## Inspiration
The inspiration for our Auto-Teach project stemmed from the growing need to empower both educators and learners with a **self-directed and adaptive** learning environment. We were inspired by the potential to merge technology with education to create a platform that fosters **personalized learning experiences**, allowing students to actively **engage with the material while offering educators tools to efficiently evaluate and guide individual progress**.
## What it does
Auto-Teach is an innovative platform that facilitates **self-directed learning**. It allows instructors to **create problem sets and grading criteria** while enabling students to articulate their problem-solving methods and responses through text input or file uploads (future feature). The software leverages AI models to assess student responses, offering **constructive feedback**, **pinpointing inaccuracies**, and **identifying areas for improvement**. It features automated grading capabilities that can evaluate a wide range of responses, from simple numerical answers to comprehensive essays, with precision.
## How we built it
Our deliverable for Auto-Teach is a full-stack web app. Our front end uses **ReactJS** as our framework and manages data using **Convex**. Moreover, it leverages editor components from **TinyMCE** to give students a better experience editing their inputs. We also created back-end APIs using **FastAPI** and the **Together.ai APIs** along the way to build the AI evaluation feature.
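A minimal sketch of what such an evaluation endpoint could look like (the route name and the `grade_with_llm` helper are illustrative placeholders, not our exact implementation):

```
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Submission(BaseModel):
    question: str
    rubric: str
    answer: str

@app.post("/evaluate")
def evaluate(sub: Submission) -> dict:
    # grade_with_llm is a placeholder for the call to the Together.ai model;
    # assumed to return a numeric score and written feedback based on the rubric.
    score, feedback = grade_with_llm(sub.question, sub.rubric, sub.answer)
    return {"score": score, "feedback": feedback}
```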
## Challenges we ran into
We were having troubles with incorporating Vectara's REST API and MindsDB into our project because we were not very familiar with the structure and implementation. We were able to figure out how to use it eventually but struggled with the time constraint. We also faced the challenge of generating the most effective prompt for chatbox so that it generates the best response for student submissions.
## Accomplishments that we're proud of
Despite the challenges, we're proud to have successfully developed a functional prototype of Auto-Teach. Achieving an effective system for automated assessment, providing personalized feedback, and ensuring a user-friendly interface were significant accomplishments. Another thing we are proud of is that we effectively incorporated many technologies like Convex and TinyMCE into our project by the end.
## What we learned
We learned how to work with backend APIs and how to generate effective prompts for the chatbot. We also got introduced to AI-incorporated databases such as MindsDB and were fascinated by what they can accomplish (such as generating predictions based on data arriving on a streaming basis and getting regular updates on information passed into the database).
## What's next for Auto-Teach
* Divide the program into **two modes**: **instructor** mode and **student** mode
* **Convert Handwritten** Answers into Text (OCR API)
* **Incorporate OpenAI** tools along with Together.ai when generating feedback
* **Build a database** storing all relevant information about each student (ex. grade, weakness, strength) and enabling automated AI workflow powered by MindsDB
* **Complete analysis** of students' performance on different types of questions, allowing teachers to learn about students' weaknesses.
* **Fine-tuned grading model** using tools from Together.ai to calibrate the model to better provide feedback.
* **Notify** students instantly about their performance (could set up notifications using MindsDB and get notified every day about any poor performance)
* **Upgrade security** to protect against unauthorized access
|
winning
|
## Background
Scanning Tunneling Microscopes are devices that allow you to image atomic-scale features, commonly used in physics and semiconductor research. The STM was developed in 1981 by Gerd Binnig and Heinrich Rohrer, winning them a Nobel Prize.
A small handful of hobbyists have constructed DIY versions of these devices that would typically cost upwards of $30,000 (<https://dberard.com/home-built-stm/>, <http://www.e-basteln.de/index.htm> ). Our goal was to replicate this work within the timeframe of a hackathon, a feat that we believe to be the fastest construction to date.
## How it Works
Atoms are small. Imaging atoms requires a probe equally as small. Ideally a probe is perfectly sharp and comes down to a single atom, and we were able to create a probe close to that. We sweep the probe across the surface, and as it moves, electrons quantum-mechanically tunnel from the probe into the surface. We measure the current, the rate of electrons leaving the probe, for each position on the surface. Higher currents (more atomic nuclei) make a brighter area in the final result, and lower currents (fewer atomic nuclei) make a dark area in the final result.
This technique is similar to scanning electron microscopes, but has the advantage of not requiring a vacuum: there is simply no space between the probe and the surface to fit any air atoms, so no vacuum is necessary. Maintaining a vacuum requires a vacuum pump (which consumes power) and requires the use of expensive materials.
## Challenges we ran into
When the angstroms matter and make the difference between a working device and a non-functional device, you will run into challenges.
Acrylic, the material easiest to use at PennApps, has a large thermal expansion coefficient, which is gigantic in the scale of our project: blowing on the sample area impacted the calibration for about ten minutes. This limited the accuracy of the system.
STMs require some oddball parts. [Walks into the hardware checkout.] “Hey, you wouldn’t happen to have a teflon standoff or even just a chunk of teflon?” [And] “Have you got a chunk of steel that weighs maybe... 20, 25 lbs?”
Don’t solder your parts in backwards.
Noise is the enemy! Thermal, vibrational, electromagnetic, your own computer's power :(
## What we learned
We learned how to take pictures of atoms, and how to work at the nano scale and smaller.
|
## Inspiration
Nikola Tesla, green energy advancements, fighting climate change, promoting sustainability, and the NASA x BYU collab (compliant mechanisms)
## What it does
Tracks the sun to make the solar panels more efficient
## How we built it
We used the Arduino IDE with an Uno, photoresistors to send signals to the Arduino, recycled materials from other hackers and students at the University of Ottawa's Makerspace, and various cables and resistors from other projects.
## Challenges we ran into
We originally wanted to use a bending machine (compliant mechanism) for the movement, but the print did not turn out well (the support was too hard to remove without damaging the needed structure) and we had a very strict timeline. We switched to a simpler axle design made from a chopstick that would have otherwise been thrown out.
## Accomplishments that we're proud of
Using reused and recycled materials almost exclusively and creating a device that actually works and is complete (and our code working basically on the first try).
## What we learned
We gained lots of experience with Arduino, soldering, using photoresistors, using solar panels as a power source and working as a team under pressure.
## What's next for Recycle Everything Under the Sun
Working on more sophisticated solar or green energy projects in future hackathons or other engineering competitions or events. Expanding the range of motion from 180 degrees to half a sphere's.
|
## Inspiration
We love computer science, and wikipedia.
## What it does
Converts most of the numbers on wikipedia pages to binary
## How we built it
On top of many failed projects and some coffee
## Challenges we ran into
Other projects
## Accomplishments that we're proud of
We came to Montreal, had fun, and made something fun
## What we learned
Don't eat ALL the free food
## What's next for wikibin
First it was numbers, then we move onto words, other websites, and finally hash them all together to make for a more readable 2017 Internet.
|
losing
|
## Inspiration
Our inspiration for creating this project stems from a pressing global issue: the proliferation of counterfeit products in the market. Counterfeit goods not only harm consumers by delivering subpar quality but also pose significant risks to businesses, eroding trust in brands and potentially causing health and safety hazards (in the case of fake medical products). We became aware of the dire consequences of this issue through the experiences of a mutual friend who hails from a third-world country. Our friend shared how his grandmother fell seriously ill and the family, facing limited healthcare options, purchased what they believed to be life-saving medication. Tragically, the medication turned out to be fake. The counterfeit product not only failed to alleviate her condition but also worsened it. This demonstrated the life-and-death implications of counterfeit products and gripped our minds, inspiring us to find a solution.
## What it does
We set out to create a system that could provide instantaneous verification by utilizing blockchain's data storage capabilities to securely store QR code data and product information. Companies can create products by simply entering the product name, number of products going into production and a product description. A new block will be made in the blockchain for each product. For each block/product, there will be a QR code generated. The extracted data of this QR code will be saved into the block of that product.
When a QR code is scanned, our system cross-references the extracted data from the scanned QR code with the QR code information stored in the blockchain. A match signifies authenticity, while a mismatch immediately alerts the user to the presence of a counterfeit product.
In essence, our project marries the potential of blockchain technology with the concept of QR codes to offer a strong and user-friendly solution to the counterfeit problem. Our aim is to provide consumers with a reliable tool to make informed purchasing decisions and assist businesses in protecting their brand integrity.
By participating in Hack the North, we hope to not only showcase our innovative solution but also inspire others to explore the intersection of technology and social impact. We believe that by leveraging blockchain in this manner, we can contribute to a safer, more trustworthy marketplace, benefiting both consumers and businesses on a global scale.
## How we built it
With a core focus on safeguarding product authenticity, our system leverages blockchain technology in conjunction with QR code scanning to provide a comprehensive means of verifying product legitimacy. This project includes frontend with React and Tailwind CSS, while the backend includes Python.
*Blockchain Technology:*
* At the heart of our system is the blockchain structure, an immutable ledger renowned for its security and trustworthiness.
* The blockchain consists of a series of blocks, each containing vital information about a product, such as its index, previous block's hash, timestamp, product name, product ID, nonce, and hash. Each block represents a product.
* The blockchain is secured using the SHA-256 hashing algorithm, ensuring the integrity of stored data (a minimal sketch of such a block follows this list).
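As a rough illustration of that structure (field names mirror the list above; the constructor and nonce handling here are simplified, not our exact code):

```
import hashlib
import json
import time

class Block:
    """Minimal sketch of one product block; one block represents one product."""
    def __init__(self, index, previous_hash, product_name, product_id, nonce=0):
        self.index = index
        self.previous_hash = previous_hash
        self.timestamp = time.time()
        self.product_name = product_name
        self.product_id = product_id
        self.nonce = nonce
        self.hash = self.compute_hash()

    def compute_hash(self) -> str:
        # Serialize the block's contents deterministically and hash with SHA-256.
        payload = json.dumps({
            "index": self.index,
            "previous_hash": self.previous_hash,
            "timestamp": self.timestamp,
            "product_name": self.product_name,
            "product_id": self.product_id,
            "nonce": self.nonce,
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()
```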
*Block Creation and QR Code Generation:*
* Every legitimate product is associated with a unique block in the blockchain.
* A QR code is generated for each authentic product, which encodes critical product information, including its authenticity status, product name, and product ID.
* The QR code image is created using the qrcode library, and it is saved as an image file (e.g., static/block\_{product\_id}\_qr.png).
* Additionally, the binary data of the QR code is stored within the block as qr\_code\_data through:
```
qr_byte_io = io.BytesIO()
img.save(qr_byte_io)
qr_byte_data = qr_byte_io.getvalue()
```
*Verification Process:*
* When a QR code is scanned using our dedicated web application, the verification process begins.
* The application extracts the data encoded within the scanned QR code.
* The extracted data is then compared with the QR code data information stored in the blocks of the blockchain, seeking an exact match.
* If the extracted data corresponds precisely to the information saved in the blockchain, the product is confirmed as genuine.
* In cases where there is no match, the system immediately alerts the user, unequivocally identifying the product as counterfeit (see the sketch below).
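A minimal sketch of that comparison (the `blockchain.blocks` attribute is assumed here for illustration; the real system also re-validates block hashes before trusting their contents):

```
def verify_product(scanned_qr_data: bytes, blockchain) -> bool:
    """Return True if the scanned QR data matches a block stored in the chain."""
    for block in blockchain.blocks:
        if block.qr_code_data == scanned_qr_data:
            return True   # exact match -> genuine product
    return False          # no block matches -> counterfeit
```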
*User Interface:*
* Users interact with our system via a web interface.
* The system's user interface provides login/sign-up opportunities to companies trying to make a fake-product identifier for their products. To make these identifiers, users can input the product's name, the quantity of items, and a product description (which can be automatically generated by AI). The system will create a corresponding number of blocks, each representing a product. Within each block or product, a QR code will be generated, and the data from the QR code will be stored along with additional security measures such as hashing.
* When a match is found, users are directed to a page confirming the product's legitimacy, assuring them of their purchase's authenticity.
* In instances of mismatch, users are directed to a page clearly stating that the product is counterfeit, empowering them to make informed decisions.
## Challenges we ran into
We encountered several challenges during the development of our project, including:
* **Data Extraction from QR Codes:** Extracting data from QR codes proved to be a complex task, requiring a precise process of decoding the data into a byte file to ensure accuracy and reliability in the verification process.
* **Cross-Checking with the Blockchain:** Integrating the QR code scanner with the blockchain posed another challenge. It necessitated creating a seamless communication system to cross-check QR code data with the information stored within the blockchain. Cross-checking results differed on VS Code compared to the macOS Terminal. It took a while, but with perseverance, we were able to get past this error.
* **ICP Decentralized Platform Setup:** Setting up the ICP (Internet Computer Protocol) decentralized platform for our blockchain was a crucial step. However, it turned out to be time-consuming and challenging due to compatibility issues with our laptops. We faced difficulties during the initial setup, leading to an unexpected delay.
Despite these challenges, our team persevered and successfully overcame them. We dedicated considerable effort to refining data extraction processes, enhancing communication between components, and eventually finding a workaround to set up the ICP decentralized platform. These obstacles, while demanding, ultimately contributed to the validity of our project and the expertise of our team in addressing complex technical issues.
## Accomplishments that we're proud of
* **Blockchain Implementation:** Successfully implementing a blockchain structure and security system is a remarkable accomplishment in itself. Our team was able to create a secure and tamper-resistant ledger that stores product data, ensuring data integrity and authenticity verification.
* **QR Code Integration:** Integrating QR code technology into our project was a pivotal achievement. We devised a method to generate QR codes for each product, enhancing user-friendliness and enabling quick and accurate verification.
* **Global Impact:** Our project's potential for addressing the global issue of counterfeit products is an accomplishment we hold in high regard. By providing a reliable solution, we aim to make a substantial impact on consumer safety and brand protection worldwide.
## What we learned
Our journey with the ICP (Internet Computer Protocol) Decentralized Platform provided us with valuable insights and expertise. We learned to navigate the platform's architecture and manage data securely in a decentralized environment. One of our most significant lessons was overcoming compatibility challenges with our laptops, which led us to explore alternative solutions and adapt our setup process. We also honed our skills in optimizing resource usage, engaging with the ICP community, and applying our knowledge to create a secure and efficient blockchain system for counterfeit product detection. This project required more than just our programming skills, helping us learn new concepts, such as blockchain. This experience not only enhanced our technical abilities but also reinforced our problem-solving and adaptability in real-world blockchain development scenarios.
## What's next for Unfake
Our project's future involves several key strategies:
* **Global Reach:** We plan to expand internationally, collaborating with organizations and regulatory bodies.
* **Team Expansion:** We'll grow our developer team to accelerate development and explore new features.
* **Use Case Diversification:** We'll adapt our system for various industries beyond counterfeit detection.
* **Verification Team:** A dedicated team will vet and verify companies, ensuring platform credibility.
* **Improved User Interface:** We'll enhance user-friendliness for easier onboarding.
* **Education and Outreach:** Initiatives will promote awareness about counterfeit risks and our technology's role.
* **Feedback Integration:** User feedback will guide continuous improvements.
* **Security and Compliance:** Strengthening security measures and compliance will remain paramount.
|
## Inspiration
No one wants to be handed the aux on a 5-hour road trip. What song do you pick? What artist? Immeasurable amounts of stress can arise from this one simple task. No one wants to be the one to kill the mood. CarJam seeks to resolve this universal issue for all of your future road trips and save some friendships. We strive to enable all listeners by truly embracing the idea of "going with the flow".
## What it does
CarJam generates a unique and personal song queue by leveraging powerful speech recognition technology. The user is prompted to record ambient audio during their drive, be it a sentence or a whole conversation. This audio snippet is then processed via Google Speech-to-Text. The text is analyzed for general mood and broken down into factors such as danceability, tempo, etc., which are used to query the Spotify API for the perfect combination of songs. The user then promptly receives a newly generated queue of songs to be played and can continue on their drive.
## How we built it
We built the frontend with React and Material UI. It allows the user to record their speech and then plays the song. The audio recording is sent to the backend, which was built using Node.js and Python. Using the Spotify API, Google Speech-to-Text API, and Google Natural Language API, the backend uses the last-cached sentiment data to pick a song based on danceability, tempo, loudness, etc., and sends the song's URL to the frontend.
## Challenges we ran into
Some of the challenges we faced were: working around the Spotify API web token repeatedly expiring, learning object principles of javascript, and figuring out a way to route the audio recorded on the front end to be processed by the backend.
## Accomplishments that we're proud of
We are proud of the fact that we were able to create an efficient and effective product. We met all of our goals and are particularly proud of the fact that we were able to successfully fetch audio and process it accordingly.
## What we learned
Through making this project, we were exposed to JavaScript and using Spotify API.
## What's next for CarJam
Some features we'd love to explore in regards to the future of CarJam include:
• Implementing a Cloud API storage to cache previously loaded songs
• Leveraging a machine learning-based algorithm to enhance text processing and song selection
• Integrate an autoplay system to allow the users a hands-free option
|
## Inspiration
There are 1.1 billion people without Official Identity (ID). Without this proof of identity, they can't get access to basic financial and medical services, and often face many human rights offences due to the lack of accountability.
The concept of a Digital Identity is extremely powerful.
In Estonia, for example, everyone has a digital identity: a solution developed in tight cooperation between public and private sector organizations.
Digital identities are also the foundation of our future, enabling:
* P2P Lending
* Fractional Home Ownership
* Selling Energy Back to the Grid
* Fan Sharing Revenue
* Monetizing data
* Bringing the unbanked, banked
## What it does
Our project starts by getting the user to take a photo of themselves. Through the use of Node.js and AWS Rekognition, we do facial recognition in order to allow the user to log in or create their own digital identity. Through the use of both S3 and Firebase, that information is passed to both our dashboard and our blockchain network!
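For illustration, the core Rekognition comparison looks roughly like this in Python/boto3 terms (our actual backend makes the equivalent call from Node.js, and the bucket and key names here are placeholders):

```
import boto3

rekognition = boto3.client("rekognition")

def face_matches(selfie_bytes: bytes, user_id: str, threshold: float = 90.0) -> bool:
    """Compare a freshly captured selfie against the reference photo stored in S3."""
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": selfie_bytes},
        TargetImage={"S3Object": {"Bucket": "credid-users", "Name": f"{user_id}.jpg"}},
        SimilarityThreshold=threshold,
    )
    return len(resp["FaceMatches"]) > 0
```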
It is stored on the Ethereum blockchain, enabling one source of truth that corrupt governments nor hackers can edit.
From there, users can get access to a bank account.
## How we built it
Front End: | HTML | CSS | JS
APIs: AWS Rekognition | AWS S3 | Firebase
Back End: Node JS | mvn
Crypto: Ethereum
## Challenges we ran into
Connecting the front end to the back end!!!! We had many different databases and components. As well, there are a lot of access issues with APIs, which makes it incredibly hard to do things on the client side.
## Accomplishments that we're proud of
Building an application that can better the lives of people!!
## What we learned
Blockchain, facial verification using AWS, databases
## What's next for CredID
Expand on our idea.
|
losing
|
## Inspiration
The main focus of our project is creating opportunities for people to interact virtually and pursue their interests while remaining active. We hoped to accomplish this through a medium that people are already interested in and providing them with a tool to take that interest to the next level. From these intentions came our project- TikTok Dance Trainer.
In our previous hackathon, we gained experience with computer vision using OpenCV2 in python, and we wanted to look further in this field. Gaining inspiration from other projects that we saw, we wanted to create a project that could not only recognize hand movements but full body motion as well.
## What it does
TikTok Dance Trainer is a new Web App that enables its users to learn and replicate popular dances from TikTok. While using the app, users will receive a score in real time that gives them feedback on how well they are dancing compared to the original video. This web app is an encouraging way for beginners to hone dance skills and improve their TikTok content as well as a fun way for advanced users to compete against one another in perfecting dances.
## How we built it
To create this project, we split into teams. One team experimented with comparison metrics to compare body poses while the other built up the UI with HTML, CSS and Javascript.
The pose estimation is implemented with an open source pre-trained neural network in tensorflow called posenet. This model can pinpoint the key points on the human body such as wrists, elbows, hips, knees, and joints on the head. The two dancers each have a set of 17 joints, which are then compared to each other, frame by frame. In order to compare these arrays of coordinates, we researched various distance metrics to use such as the Euclidean Metric, Cosine Similarity, the weighted Manhattan distance, and Procrustes Analysis (Affine Transformation). Through data collection and trial and error, the cosine distance gave the best results in the end. The resulting distances were then fed into a function to map the values to viable player scores.
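As a rough sketch of the cosine comparison (in practice the keypoints are normalized before comparison, and the score mapping below is illustrative):

```
import numpy as np

def pose_similarity(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Cosine similarity between two poses, each a (17, 2) array of joint coordinates."""
    a = pose_a.flatten().astype(float)
    b = pose_b.flatten().astype(float)
    # 1.0 means the joint vectors point in the same direction; lower means more mismatch.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_to_score(cos: float) -> int:
    """Map cosine similarity onto a 0-100 player score (mapping is illustrative)."""
    return int(max(0.0, min(1.0, cos)) * 100)
```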
The UI is built up in HTML with CSS styling and Javascript to run its functions. It has a hand-drawn background and an easy-to-use design packed with function. The menu bar has a file selector for choosing and uploading a dance video to compare to. The three main cards of the UI have the reference video and live cam side by side, with pose-estimated skeletons of each in the middle to aid in matching the reference dance. The whole UI is built up in general keeping in mind ease of use, simplicity, visual appeal and functionality.
## Challenges we ran into
As a result of splitting into two teams for different parts of the project, one challenge we faced was merging the two parts. It was difficult to both combine the code but as well to connect the different parts of it, returning outputs from one part as acceptable inputs for another. Through perseverance and a lot of communication we managed to effectively merge the two parts.
## Accomplishments that we're proud of
We managed to create a clean looking app that performs the algorithm well despite the time pressure and complexity of the project. In addition, we were able to allocate time into making a presentation with a skit to tie everything together.
## What we learned
Coming into this hackathon, only one of our members was experienced in web development, but coming out, all of us four felt that we gained valuable experience and insight into the ins and outs of webpages. We learned how to effectively use Node.js to create a backend and connect it with our frontend. Along with this, we gained experience using npm and many of javascript's potpourri of packages such as browserify.
## What's next for TikTok Dance Trainer
We also looked into using Dynamic Time Warping to help with the comparison. This would help primarily when the videos were different lengths or if the dancers were slightly mismatched. However, we realized that this would not be needed if the user is dancing against the TikTok video in their own live feed. In the future, we would like to add a functionality that allows two pre-recorded videos to be compared that would then use Dynamic Time Warping.
All open source repositories/packages that were used:
[link](https://github.com/tensorflow/tfjs-models/tree/master/posenet)
[link](https://github.com/compute-io/cosine-similarity)
[link](https://github.com/GordonLesti/dynamic-time-warping)
[link](https://github.com/browserify/browserify)
[link](https://github.com/ml5js/ml5-library)
|
## Inspiration
The idea arose from the current political climate. At a time where there is so much information floating around, and it is hard for people to even define what a fact is, it seemed integral to provide context to users during speeches.
## What it does
The program first translates speech in audio into text. It then analyzes the text for relevant topics for listeners, and cross references that with a database of related facts. In the end, it will, in real time, show viewers/listeners a stream of relevant facts related to what is said in the program.
## How we built it
We built a natural language processing pipeline that begins with a speech to text translation of a YouTube video through Rev APIs. We then utilize custom unsupervised learning networks and a graph search algorithm for NLP inspired by PageRank to parse the context and categories discussed in different portions of a video. These categories are then used to query a variety of different endpoints and APIs, including custom data extraction API's we built with Mathematica's cloud platform, to collect data relevant to the speech's context. This information is processed on a Flask server that serves as a REST API for an Angular frontend. The frontend takes in YouTube URL's and creates custom annotations and generates relevant data to augment the viewing experience of a video.
## Challenges we ran into
None of the team members were very familiar with Mathematica or advanced language processing. Thus, time was spent learning the language and how to accurately parse data, given the huge amount of unfiltered information out there.
## Accomplishments that we're proud of
We are proud that we made a product that can help people become more informed in their everyday life, and hopefully give deeper insight into their opinions. The general NLP pipeline and the technologies we have built can be scaled to work with other data sources, allowing for better and broader annotation of video and audio sources.
## What we learned
We learned from our challenges. We learned how to work around the constraints of a lack of a dataset that we could use for supervised learning and text categorization by developing a nice model for unsupervised text categorization. We also explored Mathematica's cloud frameworks for building custom API's.
## What's next for Nemo
The two big things necessary to expand on Nemo are larger data base references and better determination of topics mentioned and "facts." Ideally this could then be expanded for a person to use on any audio they want context for, whether it be a presentation or a debate or just a conversation.
|
## Inspiration
We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool.
## What it does
AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures.
The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch.
## How we built it
In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between the body parts to map to a dance move. Although successful with a few gestures, more complex gestures like the "shoot" were not ideal for this method. We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model works with 98% accuracy on the test data set.
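For reference, a small CNN of this kind can be sketched as follows (layer sizes and input resolution are illustrative, not our exact architecture):

```
import tensorflow as tf

def build_gesture_model(num_classes: int = 10, input_shape=(64, 64, 1)):
    """A compact CNN that classifies a frame into one of the gesture classes."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```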
We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done with the use of dlib and OpenCV to detect facial features and map a static image over these features.
## Challenges we ran into
We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time.
It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in.
## Accomplishments that we're proud of
It was one of our first experiences training an ML model for image recognition and it's a lot more accurate than we had even expected.
## What we learned
All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new!
## What's next for AirTunes
The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
|
partial
|
## Inspiration
Improve the productivity of people, groups, and businesses by keeping checklists simple, reusable, sharable, and extendable.
## What it does
A real-time collaborative, sharable and reusable checklist for any platform.
This platform provides easy to use APIs to extend functionality. For example, users can interface with checklists using Google Assistant, Alexa, Siri, Slack, and anything that can send HTTP requests!
* Common reusable use-cases: daily reminders, weekly chores, HR onboarding, travel, instructions, opening/closing retail stores
* Common sharable use-cases: grocery, chores, party setup
## How I built it
* Love and passion for the platform
* Vue.js for the frontend
* Golang for the backend
* Firebase Real-Time Database for datastore
## Challenges I ran into
Designing the interface, UI/UX is tough!
## Accomplishments that I'm proud of
Had a lot of fun meeting a bunch of peers, organizers, mentors, and sponsors!
Developed this hack on the side for fun and deployed it within a few hours :)
## What I learned
Vue, Golang, and Firebase Real-Time Database.
## What's next for Check-It
Design the interface to be more aesthetically pleasing!
|
## Inspiration
Forgetting my deadlines and missing important dates and work.
## What it does
It helps us keep track of our things: whether you are going shopping and want to note down all the grocery items you want to purchase, or you have a project deadline and want to make note of your progress on that project.
## How we built it
This project was mainly built using **Bootstrap, HTML, CSS, and JavaScript**, and we used netlify.com's free services for deployment.
## Challenges we ran into
The challenge was dealing with the JavaScript part of this project, since the project works on the basis of lists and writing the JavaScript was a bit on the far side for us.
## Accomplishments that we're proud of
We created a **fully working web application**.
## What we learned
How to use Bootstrap efficiently.
## What's next for Check List WebApp
Not sure now!!
|
## Inspiration
Everyone in society is likely going to buy a home at some point in their life. They will most likely meet realtors, see a million listings, gather all the information they can about the area, and then make a choice. But why make the process so complicated?
MeSee recommends regions of potential housing interest based on the user's input settings, and returns details such as crime rate, public transportation accessibility, number of schools, ratings of nearby local businesses, etc.
## How we built it
Data was sampled by an online survey on what kind of things people looked for when house hunting. The most repeated variables were then taken and data on them was collected. Ratings were pulled from Yelp, crime data was provided by CBC, public transportation data by TTC, etc. The result is a very friendly web-app.
## Challenges we ran into
Collecting data in general was difficult because it was hard to match different datasets with each other and present them consistently, since they were all from different sources. It's still a little patchy now, but the data is there!
## Accomplishments that we're proud of
Finally choosing an idea 6 hours into the hackathon, getting the data, getting at least four hours of sleep, and establishing open communication with each other, as we didn't really know each other until today!
## What we learned
Our backend dev learned to use different callbacks, our front-end dev learned that the Google Maps API is definitely out to get him, and our designer learned Adobe XD to better illustrate what the design looked like and how it functioned.
## What's next for MeSee
There's still a long way to go before MeSee can cover more regions, but if it continues, it'd definitely be something our team would look into. Furthermore, collecting more sample data would help improve the variables MeSee makes available to users. Finally, making MeSee mobile would also be a huge plus.
|
losing
|
## Inspiration
Between my friends and me, when there is a task everyone wants to avoid, we play a game to decide quickly. These tasks may include ordering pizza or calling an Uber for the group. The game goes like this: whoever thinks of the game first says "shotty not" and then touches their nose. Everyone else reacts and touches their nose as fast as they can. The person with the slowest reaction time is chosen to do the task. I often fall short when it comes to reaction time, so I had to do something about it.
## What it does
The module sits on top of your head, waiting to hear the phrase "shotty not." When it is recognized the finger will come down and touch your nose. You will never get caught off guard again.
## How I built it
The finger moves via a servo and is controlled by an Arduino, which is connected to a Python script that recognizes voice commands offline. The finger is mounted to the hat with some 3D-printed parts.
## Challenges I ran into
The hardware lab did not have a voice recognition module or a Bluetooth module for the Arduino. I had to figure out how to implement voice recognition and connect it to the Arduino.
## Accomplishments that I'm proud of
I was able to model and print all the parts to create a completely finished hack to the best of my abilities.
## What I learned
I learned to use a voice recognition library and use PySerial to communicate with an Arduino from a Python program.
## What's next for NotMe
I will replace the python program with a bluetooth module to make the system more portable. This allows for real life use cases.
|
## Inspiration
We wanted to promote an easy learning system to introduce verbal individuals to the basics of American Sign Language. Often people in the non-verbal community are restricted by the lack of understanding outside of the community. Our team wants to break down these barriers and create a fun, interactive, and visual environment for users. In addition, our team wanted to replicate a 3D model of how to position the hand as videos often do not convey sufficient information.
## What it does
**Step 1** Create a Machine Learning Model To Interpret the Hand Gestures
This step provides the foundation for the project. Using OpenCV, our team was able to create datasets for each of the ASL alphabet hand positions. Based on a model trained using TensorFlow and Google Cloud Storage, a video data stream is started and interpreted, and the letter is identified.
**Step 2** 3D Model of the Hand
The Arduino UNO starts a series of servo motors to activate the 3D hand model. The user can input the desired letter and the 3D printed robotic hand can then interpret this (using the model from step 1) to display the desired hand position. Data is transferred through the SPI Bus and is powered by a 9V battery for ease of transportation.
## How I built it
Languages: Python, C++
Platforms: TensorFlow, Fusion 360, OpenCV, UiPath
Hardware: 4 servo motors, Arduino UNO
Parts: 3D-printed
## Challenges I ran into
1. Raspberry Pi Camera would overheat and not connect leading us to remove the Telus IoT connectivity from our final project
2. Issues with incompatibilities with Mac and OpenCV and UiPath
3. Issues with lighting and lack of variety in training data leading to less accurate results.
## Accomplishments that I'm proud of
* Able to design and integrate the hardware with software and apply it to a mechanical application.
* Create data, train and deploy a working machine learning model
## What I learned
How to integrate simple low resource hardware systems with complex Machine Learning Algorithms.
## What's next for ASL Hand Bot
* expand beyond letters into words
* create a more dynamic user interface
* expand the dataset and models to incorporate more
|
## Inspiration
The Canadian winter's erratic bouts of chilling cold have caused people who have to be outside for extended periods of time (like avid dog walkers) to suffer from frozen fingers. The current method of warming up your hands using hot pouches that don't last very long is inadequate in our opinion. Our goal was to make something that kept your hands warm and *also* let you vent your frustrations at the terrible weather.
## What it does
**The Screamathon3300** heats up the user's hand based on the intensity of their **SCREAM**. It interfaces an *analog electret microphone*, *LCD screen*, and *thermoelectric plate* with an *Arduino*. The Arduino continuously monitors the microphone for changes in volume intensity. When an increase in volume occurs, it triggers a relay, which supplies 9 volts, at a relatively large amperage, to the thermoelectric plate embedded in the glove, thereby heating the glove. Simultaneously, the Arduino will display an encouraging prompt on the LCD screen based on the volume of the scream.
## How we built it
The majority of the design process was centered around the use of the thermoelectric plate. Some research and quick experimentation helped us conclude that the thermoelectric plate's increase in heat was dependent on the amount of supplied current. This realization led us to use two separate power supplies -- a 5 volt supply from the Arduino for the LCD screen, electret microphone, and associated components, and a 9 volt supply solely for the thermoelectric plate. Both circuits were connected through the use of a relay (dependent on the Arduino output) which controlled the connection between the 9 volt supply and thermoelectric load. This design decision provided electrical isolation between the two circuits, which is much safer than having common sources and ground when 9 volts and large currents are involved with an Arduino and its components.
Safety features directed the rest of our design process, like the inclusion of a kill-switch which immediately stops power being supplied to the thermoelectric load, even if the user continues to scream. Furthermore, a potentiometer placed in parallel with the thermoelectric load gives control over how quickly the increase in heat occurs, as it limits the current flowing to the load.
## Challenges we ran into
We tried to implement a feedback loop with ambient temperature sensors, but even with a large temperature change at the plate, the sensors registered only very small changes. Our goal of having an optional, non-scream-controlled system ultimately failed because we did not have a working sensor feedback system.
Since we did not own components such as the microphone, relay, or battery pack, we could not solder many connections, so we could not make a permanent build.
## Accomplishments that we're proud of
We're proud of using a unique transducer (thermoelectric plate) that uses an uncommon trigger (current instead of voltage level), which forced us to design with added safety considerations in mind.
Our design was also constructed of entirely sustainable materials, other than the electronics.
We also used a seamless integration of analog and digital signals in the circuit (baby mixed signal processing).
## What we learned
We had very little prior experience interfacing thermoelectric plates with an Arduino. We learned to effectively leverage analog signal inputs to reliably trigger our desired system output, as well as manage physical device space restrictions (for it to be wearable).
## What's next for Screamathon 3300
We love the idea of people having to scream continuously to get a job done, so we will expand our line of *Scream* devices, such as the scream-controlled projectile launcher, scream-controlled coffee maker, scream-controlled alarm clock. Stay screamed-in!
|
partial
|
## Realm Inspiration
Our inspiration stemmed from our fascination in the growing fields of AR and virtual worlds, from full-body tracking to 3D-visualization. We were interested in realizing ideas in this space, specifically with sensor detecting movements and seamlessly integrating 3D gestures. We felt that the prime way we could display our interest in this technology and the potential was to communicate using it. This is what led us to create Realm, a technology that allows users to create dynamic, collaborative, presentations with voice commands, image searches and complete body-tracking for the most customizable and interactive presentations. We envision an increased ease in dynamic presentations and limitless collaborative work spaces with improvements to technology like Realm.
## Realm Tech Stack
Web View (AWS SageMaker, S3, Lex, DynamoDB and ReactJS): Realm's stack relies heavily on AWS. We begin by receiving images from the frontend and passing them into SageMaker, where the images are tagged corresponding to their content. These tags, and the images themselves, are put into an S3 bucket. Amazon Lex is used for dialog flow, where text is parsed and tools, animations or simple images are chosen. The Amazon Lex commands are completed by parsing through the S3 bucket, selecting the desired image, and storing the image URL with all other on-screen images in DynamoDB. The list of URLs is posted to an endpoint that Swift calls to render.
AR View (ARKit, Swift): The Realm app renders text, images, slides and SCN animations as pixel-perfect AR models that are interactive and integrate with a physics engine. Some of the models we have included in our demo are presentation functionality and rain interacting with an umbrella. Swift 3 allows full-body tracking, and we configure the tools to provide optimal tracking and placement gestures. Users can move objects in their hands, place objects and interact with 3D items to enhance the presentation.
## Applications of Realm:
We hope to see our idea implemented in real workplaces in the future. We see classrooms using Realm to provide students interactive spaces to learn, professionals creating interactive presentations, teams coming together to collaborate as easily as possible, and so much more. Extensions include creating more animated/interactive AR features and real-time collaboration methods. We hope to further polish our features for usage in industries such as AR/VR gaming & marketing.
|
## 💡 Inspiration💡
Our team is saddened by the fact that so many people think that COVID-19 is obsolete when the virus is still very much relevant and impactful to us. We recognize that there are still a lot of people around the world that are quarantining—which can be a very depressing situation to be in. We wanted to create some way for people in quarantine, now or in the future, to help them stay healthy both physically and mentally; and to do so in a fun way!
## ⚙️ What it does ⚙️
We have a full-range of features. Users are welcomed by our virtual avatar, Pompy! Pompy is meant to be a virtual friend for users during quarantine. Users can view Pompy in 3D to see it with them in real-time and interact with Pompy. Users can also view a live recent data map that shows the relevance of COVID-19 even at this time. Users can also take a photo of their food to see the number of calories they eat to stay healthy during quarantine. Users can also escape their reality by entering a different landscape in 3D. Lastly, users can view a roadmap of next steps in their journey to get through their quarantine, and to speak to Pompy.
## 🏗️ How we built it 🏗️
### 🟣 Echo3D 🟣
We used Echo3D to store the 3D models we render. Each rendering of Pompy in 3D and each landscape is a different animation that our team created in a 3D rendering software, Cinema 4D. We realized that, as the app progresses, we can find difficulty in storing all the 3D models locally. By using Echo3D, we download only the 3D models that we need, thus optimizing memory and smooth runtime. We can see Echo3D being much more useful as the animations that we create increase.
### 🔴 An Augmented Metaverse in Swift 🔴
We used Swift as the main component of our app, and used it to power our Augmented Reality views (ARViewControllers), our photo views (UIPickerControllers), and our speech recognition models (AVFoundation). To bring our 3D models to Augmented Reality, we used ARKit and RealityKit in code to create entities in the 3D space, as well as listeners that allow us to interact with 3D models, like with Pompy.
### ⚫ Data, ML, and Visualizations ⚫
There are two main components of our app that use data in a meaningful way. The first and most important is using data to train ML algorithms that are able to identify a type of food from an image and to predict the number of calories of that food. We used OpenCV and TensorFlow to create the algorithms, which are called in a Python Flask server. We also used data to show a choropleth map that shows the active COVID-19 cases by region, which helps people in quarantine to see how relevant COVID-19 still is (which it is still very much so)!
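As a rough sketch of how a food photo could be handed to that Flask server (`classify_food` and `estimate_calories` are placeholders for the OpenCV/TensorFlow models, not our exact function names):

```
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/calories", methods=["POST"])
def calories():
    # The Swift client uploads the photo of the user's food as the request body.
    image_bytes = request.get_data()
    # classify_food / estimate_calories stand in for the OpenCV and TensorFlow
    # models described above.
    food_label = classify_food(image_bytes)
    kcal = estimate_calories(food_label, image_bytes)
    return jsonify({"food": food_label, "calories": kcal})
```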
## 🚩 Challenges we ran into
We wanted a way for users to communicate with Pompy through words and not just tap gestures. We planned to use voice recognition in AssemblyAI to receive the main point of the user and create a response to the user, but found a challenge when dabbling in audio files with the AssemblyAI API in Swift. Instead, we overcame this challenge by using a Swift-native Speech library, namely AVFoundation and AVAudioPlayer, to get responses to the user!
## 🥇 Accomplishments that we're proud of
We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for while interacting with it, virtually traveling places, talking with it, and getting through quarantine happily and healthily.
## 📚 What we learned
For the last 36 hours, we learned a lot of new things from each other and how to collaborate to make a project.
## ⏳ What's next for ?
We can use Pompy to help diagnose the user’s conditions in the future; asking users questions about their symptoms and their inner thoughts which they would otherwise be uncomfortable sharing can be more easily shared with a character like Pompy. While our team has set out for Pompy to be used in a Quarantine situation, we envision many other relevant use cases where Pompy will be able to better support one's companionship in hard times for factors such as anxiety and loneliness. Furthermore, we envisage the Pompy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene, exercise tips and even lifestyle advice, Pompy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
\*\*we had to use separate github workspaces due to conflicts.
|
## Inspiration
Not all products are designed in a user-friendly and intuitive way. We often come across devices that are annoying and unclear to use. This is especially true for people with less exposure to tech, such as seniors. Whether it’s setting up a new tech gadget or controlling the AC in a new rental car, reading long user manuals or finding a random YouTube tutorial is currently the best course of action. But what if an AI could generate the tutorial specifically for you directly on your phone and visually explain the product using interactive AR?
## What it does
We leave AI chatbots in the dust by combining them with 3D stable diffusion and Augmented Reality to create a user experience as if an expert is physically next to you, visually answering your question with a helpful virtual demonstration.
## Workflow
1. User wants to know how to interact with an object.
2. They open the app and place their camera in front of the object.
3. The user asks their question e.g. How do I do 'X'?
4. The object detection model detects the item in front of the user.
5. Speech to text understands the user’s question and sends the label and prompt to the backend LLM instruction agent.
6. The instruction agent takes the user's prompt and generates a list of clear instructions to resolve the user’s problem.
7. The detected object and contextualised instructions are fed into a 3D stable diffusion model which generates a digital twin that is displayed alongside the real object in AR.
8. The 3D models are positioned in AR space as a visual guidance for the written instructions, which are also shown to the user.
## How we built it
**FrontEnd:**
The core frontend was developed using Swift UI, using ARKit for rendering the tutorials in space and CoreML as the on-device model to detect the object in front of the camera. We also used AVFoundation to enable speech-to-text capabilities to simplify the user experience. For more complex and involved tutorials we aim to make the frontend compatible with the Apple Vision Pro in the near future.
**Instruction Agent:**
The instruction agent simplifies user guidance by generating concise instructions in three clear steps. It receives prompts via a REST API from the front-end, incorporating them into the output JSON format. These instructions are then contextualised for the Text-to-3D model, which facilitates the generation and positioning of AR objects. This process involves passing the question and label through a LLM to produce the finalised JSON.
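As a rough illustration of that flow (the endpoint name, JSON fields, and `call_llm` helper below are hypothetical, not our exact backend):

```python
# Illustrative sketch of the instruction agent: it receives the detected object
# label and the user's question, asks an LLM for exactly three steps, and
# returns them as JSON for the AR frontend. `call_llm` is a placeholder for
# whichever LLM API is used.
from flask import Flask, request, jsonify

app = Flask(__name__)

def call_llm(prompt: str) -> list[str]:
    # Placeholder: in the real agent this queries an LLM and parses its output.
    return ["Step 1 ...", "Step 2 ...", "Step 3 ..."]

@app.route("/instructions", methods=["POST"])
def instructions():
    data = request.get_json()
    prompt = (
        f"The user is looking at a {data['label']} and asks: {data['question']}. "
        "Answer with exactly three short, numbered instructions."
    )
    steps = call_llm(prompt)
    # The same steps are later contextualised for the text-to-3D model.
    return jsonify({"label": data["label"], "steps": steps})
```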
**Text to 3D Stable Diffusion:**
The text to 3D stable diffusion model was developed using a pre-trained 2D text-to-image diffusion model to perform text-to-3D synthesis. We used probability density distillation loss to optimise a NeRF model using gradient descent. The resulting model can be viewed from any angle and requires no 3D training data or modifications to the image diffusion model. Because querying each ray in a NeRF requires a lot of computation we used a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering. This involved reformulation of the architecture using a sparse voxel grid representation with learned feature vectors. We used USDPython with ARConvert for Usdz compatibility on iOS.
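For background, the probability density distillation objective referenced above is the score distillation sampling (SDS) loss from DreamFusion. Roughly, the NeRF parameters θ are updated with a gradient of the form below, where g(θ) renders the NeRF to an image x, ε̂\_φ is the frozen diffusion model's noise prediction for text prompt y, and w(t) weights the noise level (notation follows the DreamFusion paper, not our exact code):

```latex
\nabla_{\theta}\, \mathcal{L}_{\mathrm{SDS}}\big(\phi,\, x = g(\theta)\big)
  \triangleq \mathbb{E}_{t,\,\epsilon}\!\left[\, w(t)\,
  \big(\hat{\epsilon}_{\phi}(x_t;\, y,\, t) - \epsilon\big)\,
  \frac{\partial x}{\partial \theta} \,\right]
```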
The following papers were used as technical support and inspiration:
* <https://instruct-nerf2nerf.github.io/>
* <https://phog.github.io/snerg/>
* <https://dreamfusion3d.github.io/>
## Challenges we ran into
Rendering 3D models at high speed and quality turned out to be very tough. Our model started out producing low quality AR objects within one minute, and after precomputing and storing the NeRF into a SNeRG, we were able to cut that time down to several seconds. Producing the highest quality models takes longer and is a challenge that we want to address in the future. For now, the lower-quality version suffices, and on the small screen of a smartphone it is not much of an issue.
## Accomplishments that we're proud of
We made a fully functional demo and MVP! Despite facing many technical challenges along the way, we managed to overcome them all and are proud of the functionality and complexity of our product. We were able to integrate many packages and models into a complex pipeline that seamlessly converts the user’s question into a visual tutorial. The technical complexity of our solution was both challenging and rewarding, and we are excited to work on this further and see how far we can push the performance and quality of the model, especially considering how close to the edge of research it is.
## What we learned
We used many new packages and techniques in this project, significantly expanding our skillset. Our biggest breakthrough was getting the 3D stable diffusion algorithm to work, as this was something we had never done before. We also expanded our AR capabilities by learning about ARKit, RealityKit and AVFoundation as well as using the ‘Combine’ and ‘Speech’ packages to transcribe the user’s spoken prompt and ensure a smooth experience.
## What's next for Aira
Our next goal is to improve the model to animate the AR objects generated using 3D stable diffusion. This involves identifying each moving component as a separate object, generating them separately, and then using the contextualisation ability of the instruction agent to understand the desired movement of the components relative to each other, and outputting the motion in polar coordinates. Following this, we will further fine-tune and optimise our model to cut down the time it takes to generate the 3D AR models. To improve the UX, we also plan to add arrows visualising the actions the user needs to take.
Deck: <https://www.canva.com/design/DAF9EZRlAW8/lDw9k8mMUDGqLUeVQBfBbw/edit?utm_content=DAF9EZRlAW8&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton>
|
winning
|
## Inspiration
There are many misconceptions around solar panels being truly a green energy alternative. Many people want to help the Earth and decide to purchase private solar panels for their home. One major flaw in solar power is the variable energy output depending on where the panels are located - hours of sunlight per day, cloud coverage, temperature, etc. In some cases solar panels may actually be worse for the environment because of the resources required to create one. Canada sits right on this threshold: in some places solar is a net benefit to the Earth and in others it is not. We created **SolEarth** to help people decide if solar panels are right for their home and the possible energy that they can expect from their location.
## What it does
**SolEarth** is a **web application** that takes a user's address and provides them the amount of energy that can be produced from setting up **solar panels** at their home. It also does an assessment on whether or not it is actually eco-friendly for solar panels to be built at that location.
## How we built it
We built the back end of the application with *Node.js*, *Express.js*, and a few APIs: *PositionStack* for forward geocoding and *Open Meteo* for weather data. Given an address, we retrieve weather conditions such as hourly solar radiation, average them over a year, and plug them into a *formula* to calculate the potential amount of solar energy that a panel could generate per year at that location.
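For illustration, a commonly used back-of-the-envelope estimate for yearly output is E = A × r × H × PR (panel area × efficiency × yearly irradiation × performance ratio). A minimal sketch with illustrative constants (our actual backend does this in Node.js with live Open Meteo data):

```python
# Illustrative estimate of yearly solar output; all constants here are assumptions.
def yearly_energy_kwh(panel_area_m2: float,
                      efficiency: float,
                      annual_irradiation_kwh_m2: float,
                      performance_ratio: float = 0.75) -> float:
    """E = A * r * H * PR, a standard back-of-the-envelope formula."""
    return panel_area_m2 * efficiency * annual_irradiation_kwh_m2 * performance_ratio

# Example: a 1.6 m^2 panel, 20% efficient, with 1200 kWh/m^2/year of irradiation.
print(yearly_energy_kwh(1.6, 0.20, 1200))  # ~288 kWh per year
```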
## Challenges we ran into
Some challenges that we faced when creating **SolEarth** were using *Node.js* and *Express.js*. Our team had never written a line of *javascript* prior to **Hack the 6ix** and wanted to challenge ourselves and learn something outside of our comfort zone over the weekend. Two of our team members have done 1 hackathon before and **Hack the 6ix is the first hackathon for the other 2 team members**. We also had some troubles with connecting the backend to frontend and deploying the web application online.
## Accomplishments that we're proud of
We are proud of having a complete product to show and having our project revolve around being environmentally sustainable. Both frontends of our website look amazing and the backend works exactly as we planned.
## What we learned
We learned what client side and server side mean for web apps. We also became familiar with *REST* APIs and how to call and create our own APIs for the server side. We are now able to semi-confidently say that we are familiar with JavaScript and the *Node.js* and *Express.js* frameworks.
## What's next for SolEarth
Next steps for SolEarth is to provide support through education and more information to raise awareness on proper usage of green energy starting with solar first. After, we would like to promote wind energy as an alternative to solar energy for private homes.
|
## Inspiration
We wanted to make a **fun** and **interactive** tool which allows people to very easily see ways in which they can live more sustainably. Given that in a year or two, our team will have to consider renting an apartment / buying a home, we have considered what is truly important! In that spirit, we wanted to make a tool which brings to the foreground the environmental impact of these homes / apartments and improvements that can be made in the short / long term in order to live more sustainably :)
## What it does
We ask a user for some information, including their (prospective) home address, and use ScaleAI's API to help the user interact with and highlight different areas of the home which may need upgrades in the future to live a greener life!!
## How we built it
We used ScaleAI heavily in order to process the user's information and create a fully interactive experience. We used an HTML template to help with the framework, but from there most of our work was focused on JavaScript calls to our backend server, which connected with ScaleAI's API.
## Challenges we ran into
This is the first time either of us have been to a hackathon, so there was a big learning curve in understanding how to use APIs in general and how to accomplish what we wanted. There was not enough time to fully finish and polish our product, but we still think it (overall) looks really good!
## Accomplishments that we're proud of
We're both very proud of the final result and are happy to have made something so incredible!!
## What we learned
We learned how to use APIs! (And in the process, we made over 2000 somewhat spammy requests, plus 39 real, not spammy, essential requests that each help our product greatly!) We also learned a lot about HTML, JavaScript, and Python, how to set up our own Python server in order to create real API calls, and how to connect a webpage to Python in general!!
## What's next for Sustainability Hack
We would like to consider the inside of houses as well and allow people to upload images where we can use ScaleAI's tools further in order to expand upon the interactive displays. We imagine a future where people can easily snap a pic of their living room and our interactive tool tells them exactly what things they specifically can do to lower their carbon output! Right now, all of our information is general, however, we'd love to create enough information so that we can help everyone with their own room / kitchen / house :)
## (Stanford Center for Ethics) Most Ethically Engaged Hack -- Written Narrative
When developing our idea and prototype, our team considered the ethical implications of our product each step of the way. The idea for our project was inspired by our team’s interest in looking for solutions to fight climate change. We decided to focus on spreading awareness of the climate impact of certain elements of a house. These include the size of the house, the pool, foliage, driveway material, and whether there are solar panels on the roof. We know that homes in the United States are being bought at a high rate, yet the climate impact of newly sold homes is not a focus of the general public in the fight against climate change. Thus, by bringing awareness to the possible climate hazards of a home, we can make an impact on the reduction of energy usage, water usage, and the spread of hazardous materials.
During development, our team recognized that our project had not been inclusive for underrepresented groups, who are less likely to be new home buyers. Therefore, a next step in the development of our project will be to include features that take into account appliances inside the house as well. We can recommend ways that users can use less energy, water, and hazardous materials so that users can save money and improve their health while lessening their negative environmental impact. This would have the benefit of involving underrepresented groups in the fight against climate change. This is important because many of the ways that most people can lessen their climate impact require a certain income level, such as solar panels, which also requires one to be a homeowner.
We hope that it is clear that the ethical considerations of our project are known to users because the website gives information about the climate impact of each of the elements of the house mentioned above. In future versions of our project, we also wish to include information about the effect our website will have on spreading awareness to specific underrepresented groups, such as those who are low-income, racial minorities, and women.
Lastly, our team was diligent in identifying the ethical considerations in the deployment of our project. We will be making use of web servers and APIs which are computationally expensive, and thus, will use more energy than the usual amount to host a website. However, we believe that if our project can make a significant impact in changing peoples’ decisions in home designing to consider the effects on climate change, including their energy usage, this will be a net positive.
|
## Inspiration
We wanted to create a proof-of-concept for a potentially useful device that could be used commercially and at a large scale. We ultimately decided to focus on the agricultural industry as we feel that there's a lot of innovation possible in this space.
## What it does
The PowerPlant uses sensors to detect whether a plant is receiving enough water. If it's not, then it sends a signal to water the plant. While our proof of concept doesn't actually receive the signal to pour water (we quite like having working laptops), it would be extremely easy to enable this feature.
All data detected by the sensor is sent to a webserver, where users can view the current and historical data from the sensors. The user is also told whether the plant is currently being automatically watered.
## How I built it
The hardware is built on an Arduino 101, with dampness detectors being used to detect the state of the soil. We run custom scripts on the Arduino to display basic info on an LCD screen. Data is sent to the webserver via a program called Gobetwino, and our JavaScript frontend reads this data and displays it to the user.
## Challenges I ran into
After choosing our hardware, we discovered that MLH didn't have an adapter to connect it to a network. This meant we had to work around this issue by writing text files directly to the server using Gobetwino. This was an imperfect solution that caused some other problems, but it worked well enough to make a demoable product.
We also had quite a lot of problems with Chart.js. There are some undocumented quirks to it that we had to deal with - for example, data isn't plotted on the chart unless a label for it is set.
## Accomplishments that I'm proud of
For most of us, this was the first time we'd ever created a hardware hack (and competed in a hackathon in general), so managing to create something demoable is amazing. One of our team members even managed to learn the basics of web development from scratch.
## What I learned
As a team we learned a lot this weekend - everything from how to make hardware communicate with software, the basics of developing with Arduino and how to use the Chart.js library. Two of our team members' first language isn't English, so managing to achieve this is incredible.
## What's next for PowerPlant
We think that the technology used in this prototype could have great real world applications. It's almost certainly possible to build a more stable self-contained unit that could be used commercially.
|
losing
|
## Inspiration
It's lunchtime, you are looking for somewhere to go to eat so you open Yelp and look for recommendations. After scrolling through many pages, you are overwhelmed by the number of restaurants around you and can't decide where to eat so you end up going to the fast food restaurant you always go to. We've all been there. What others like may not be what you like, but you also do not want to waste time entering all your preferences on the app.
But wait, you always like photos on social media, so shouldn't your phone know what you like already?
## What it does
Doko will collect data about the photos of foods/restaurants the user has liked on social media, and the next time the user passes by that location, we will notify them. The user can also see restaurants around them in a convenient map view. Since the user has already shown interest in these restaurants, we are confident in our recommendations.
## How we built it
We used the Twitter API to query the tweets a given user has liked every 10 seconds. The backend is written in Python, serving as the API connecting MongoDB and iOS.
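A rough sketch of that polling loop (the bearer token, user id, and collection names are placeholders; error handling omitted):

```python
# Rough sketch of the liked-tweets polling loop described above.
import time
import requests
from pymongo import MongoClient

BEARER_TOKEN = "..."   # placeholder credential
USER_ID = "..."        # placeholder Twitter user id
likes = MongoClient("mongodb://localhost:27017")["doko"]["liked_tweets"]

while True:
    resp = requests.get(
        f"https://api.twitter.com/2/users/{USER_ID}/liked_tweets",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )
    for tweet in resp.json().get("data", []):
        # Upsert so repeated polls don't create duplicates.
        likes.update_one({"_id": tweet["id"]}, {"$set": tweet}, upsert=True)
    time.sleep(10)
```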
## Challenges we ran into
Getting trapped by the MongoDB Stitch iOS SDK. It took us nearly 2 hours of reading the SDK's source code to find the issue in our project (the documentation was also not explicit).
## Accomplishments that we're proud of
## What we learned
## What's next for Doko
Our first step will be to support social media other than Twitter.
Then we can include additional features such as restaurant recommendations using machine learning algorithms, or making a reservation within the app.
|
## Inspiration
Over the summer, one of us was reading about climate change, but he realised that most of the news articles he came across were very negative and affected his mental health to the point that it was hard to think about the world as a happy place. However, one day he watched a YouTube video that talked about the hope that exists in this sphere and realised the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people’s lives given that we’ve seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives make us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using co:here), and once it passes a certain threshold (we store the scores using CockroachDB), we show them a positive news article in the same topic area they were reading about.
We do this through text analysis, using a Chrome extension front-end and a Flask + CockroachDB backend that uses co:here for natural language processing.
Since a lot of people also listen to news via video, we also created a part of our chrome extension to transcribe audio to text - so we included that into the start of our pipeline as well! At the end, if the “negativity threshold” is passed, the chrome extension tells the user that it’s time for some good news and suggests a relevant article.
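A toy sketch of the threshold bookkeeping described above, assuming a placeholder classifier (the real scoring uses a finetuned co:here model and the scores live in CockroachDB; the threshold value here is illustrative):

```python
# Sketch of the negativity-threshold logic; `classify_sentiment` and the
# threshold are placeholders for the real co:here model and tuned value.
NEGATIVITY_THRESHOLD = 5.0

def classify_sentiment(article_text: str) -> float:
    """Placeholder: returns a negativity score between 0 and 1."""
    return 0.8

def update_user(user_scores: dict, user_id: str, article_text: str) -> bool:
    """Add this article's score and report whether to suggest good news."""
    user_scores[user_id] = user_scores.get(user_id, 0.0) + classify_sentiment(article_text)
    if user_scores[user_id] >= NEGATIVITY_THRESHOLD:
        user_scores[user_id] = 0.0   # reset after recommending a positive article
        return True
    return False
```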
## How we built it
**Frontend**
We used a Chrome extension for the front end, which included handling the user experience and making sure that our application actually gets the user's attention while being useful. We used React.js, HTML and CSS to handle this. There were also a lot of API calls because we needed to transcribe the audio from the Chrome tabs and provide that information to the backend.
**Backend**
## Challenges we ran into
It was really hard to make the chrome extension work because of a lot of security constraints that websites have. We thought that making the basic chrome extension would be the easiest part but turned out to be the hardest. Also figuring out the overall structure and the flow of the program was a challenging task but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a cockroach db node and client and used it to store all of our classification data
4) Added support for multiple users of the extension that can leverage the use of cockroach DB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use cockroach DB in order to create a database of news articles and topics that also have multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing cockroach db tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not.
2) We also would like to finetune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports etc) and show a related positive article in the same category. Given more time, we would want to provide more semantically similar positive article suggestions to those that the author is reading. We could use cohere or other large language models to potentially explore that.
|
# Omakase
*"I'll leave it up to you"*
## Inspiration
On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how?
## What It Does
We have built an app that recommends recipes based on the food that is in your fridge right now. Using Google Cloud Vision API and Food.com database we are able to detect the food that the user has in their fridge and recommend recipes that uses their ingredients.
## What We Learned
Most of the members in our group were inexperienced in mobile app development and backend. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more.
## How We Built It
We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel Architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again, to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID and a set of recipes for that ingredient was obtained. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP.
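A simplified sketch of that final matching step, assuming illustrative recipe ids (the real ids come from the Food.com database, and the real ranking also folds in how many ingredients each recipe needs):

```python
# Each detected ingredient maps to a set of recipe ids; keep the recipes that
# use the most of the user's ingredients.
from collections import Counter

def match_recipes(recipes_per_ingredient: dict[str, set[int]], top_n: int = 3) -> list[int]:
    counts = Counter()
    for recipe_ids in recipes_per_ingredient.values():
        counts.update(recipe_ids)
    # Rank recipes by how many of the detected ingredients they use.
    return [recipe_id for recipe_id, _ in counts.most_common(top_n)]

print(match_recipes({
    "salsa":  {101, 102, 103},
    "orange": {102, 104},
    "beans":  {102, 103},
}))  # -> [102, 103, 101]
```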
## What We Are Proud Of
We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty looking recipes including a Caribbean black bean and fruit salad that uses oranges and salsa.
## Challenges You Faced
Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult.
## Whats Next
We have big plans for our app in the future. Some next steps we would like to implement are allowing users to include their dietary restrictions and food preferences so we can better match the recommendations to the user. We also want to make this app available on smart fridges; currently, fridges like Samsung's have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods.
|
winning
|
# Echo Chef
### TreeHacks 2016 @ Stanford
#### <http://echochef.net/>
Ever wanted a hands-free way to follow recipes while you cook?
Echo Chef can guide you through recipes interactively, think of it as your personal assistant in the kitchen.
Just add your favorite recipes to our web interface and they'll be available on your Amazon Echo!
Ask for step by step instructions, preheat temperatures, and more!
In addition to Echo Chef's use in the kitchen, we track your data and deliver it to you in an easily digestible way.
From your completion time of each recipe, to your most often used ingredients.
#### Features:
* Data Analytics and Visualization
* Amazon Alexa Skill Kit using the Amazon Echo
* AWS and DynamoDB
* Qualtrics API
* Responsive Site
#### Team
* Brandon Cen
* Cherrie Wang
* Elizabeth Chu
* Izzy Benavente
|
## Inspiration
Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year, due to nutrition and obesity-related diseases, such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track every thing they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a big mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS.
## What it does
macroS integrates seamlessly with the Google Assistant on your smartphone and lets you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating every day conveniently and without hassle with the powerful built-in natural language processing model. They can view their account on a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve.
## How we built it
DialogFlow and the Google Action Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food-eaten entry and a simple request for a nutritional breakdown. We deployed our functions, written in Node.js, to the Firebase Cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to Nutritionix, which provides NLP for querying a database of over 900k grocery and restaurant foods. A Mongo database is to be used to store user accounts and pass data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/JavaScript.
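For illustration, a query to the Nutritionix natural-language endpoint might look like the sketch below (our real fulfillment code is a Node.js cloud function; the header and field names here should be checked against the Nutritionix docs):

```python
# Illustrative Nutritionix natural-language query; credentials are placeholders.
import requests

def total_calories(meal: str, app_id: str, app_key: str) -> float:
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": app_id, "x-app-key": app_key},
        json={"query": meal},
    )
    foods = resp.json().get("foods", [])
    # Sum calories across every food the NLP model found in the sentence.
    return sum(food.get("nf_calories", 0) for food in foods)

# e.g. total_calories("a cup of grapes, 12 almonds and a big mac", APP_ID, APP_KEY)
```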
## Challenges we ran into
Learning how to use the different APIs and the Google Action Console to create intents, contexts, and fulfillment was challenging on its own, but the challenges amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, the data we needed to make the queries to Nutritionix was often nested deep within various JSON objects that were being thrown all over the place between the voice assistant and cloud functions. The team was finally able to find what they were looking for after spending a lot of time in the Firebase logs. Finally, the entire team lacked any experience using natural language processing and voice-enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all.
## Accomplishments that we're proud of
We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, self-monitoring of your health and nutrition, much more convenient and even more accessible, we're confident that we can help large amounts of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds**, that would otherwise take upwards of 30 minutes of tedious google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering on a product in the short amount of time that we had with the levels of experience we came into this hackathon with.
## What we learned
We made and deployed the cloud functions that integrated with our Google Action Console and trained the nlp model to differentiate between a food log and nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation to the power of voice enabled technologies. Team members who were interested in honing their front end skills also got the opportunity to do that by working on the actual web application. This was also most team members first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities.
## What's next for macroS
We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable and once a database is complete, it can be made so valuable to really anybody who would like to monitor their health and nutrition more closely. Being able to, as a user, identify my own age, gender, weight, height, and possible dietary diseases could help us as macroS give users suggestions on what their goals should be, and in addition, we could build custom queries for certain profiles of individuals; for example, if a diabetic person asks macroS if they can eat a chocolate bar for lunch, macroS would tell them no because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this!
|
# Cook It!
**Cook It!** is a web service that is personalized to your tastes and for your taste. It uses the Amazon AWS Machine Learning API to learn your food preferences and to recommend recipes that you can make with the ingredients in your fridge. Just enter your ingredient list and select your meal type (from Breakfast, Main Course, and Dessert), and simply choose your dish from the many recipes that Cook It! has to offer.
# Inspiration
Being huge foodies, and very recently overworked college students, making delicacies that could satisfy our palates while being practical at the same time had started becoming increasingly impossible in these last few months. That is when we decided to make Cook It, something that would help us in our food exploration.
# How does it Work
We've collected data from two of the largest food recipe sources on the internet, *Yummly* and *Spoonacular*, and ran Amazon AWS' industry-standard regression on it to create an ML model that predicts the correlational success of a given set of ingredients. Moreover, this model evolves over time based on the user's own personal choices and the recipes they choose to click on. All of this is invisible to the user; all one has to do is enter a list of ingredients on hand and wait for the magic to happen. Using web ratings and past recorded user data, our algorithm creates a sorted list of recipes for the user to choose from, starting from the top left.
# Challenges we Faced
Being just freshmen, exploring the field of ML was especially hard for us. Applying this to a genre like food where subjectivity prevails and reliable data was extremely hard to find, we had to hand sort a lot of our sources and train our model on around 10,000 existing ingredient combinations and their ratings derived from social networks to achieve a reliably consistent prediction model.
Integrating, consolidating and making the different technologies work together was another aspect that gave us a huge challenge.
# Accomplishments that we're Proud Of
Making something that we and our friends are extremely excited to use on a daily basis!
# What's Ahead
While our ML model is reasonably reliable right now, we aim to include a few more datasets and run some more training to make it better.
We are also planning to improve our recipe generation to get better suggestions.
|
winning
|
## FLEX [Freelancing Linking Expertise Xchange]
## Inspiration
Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away.
## What it does
Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks about any other factors they need in a candidate. This data is then analyzed and matched against our vast database of freelancers to find the best candidates. The AI then talks back to the recruiter, showing the top candidates based on the recruiter's requirements. Once the recruiter picks the right candidate, they can create a smart contract that's securely stored and managed on the blockchain for transparent payments and agreements.
## How we built it
We started with the frontend, built using **Next.JS**, and deployed the entire application with **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**.
Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on factors provided by the client. For secure transactions, we utilized **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through Smart Contracts developed in **Move**. We also used Flask and **Express.js** to manage backend and routing efficiently.
## Challenges we ran into
We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable Speech to Text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full stack application.
## Accomplishments that we're proud of
We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies.
## What we learned
We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration.
## What's next for FLEX
Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
|
## Inspiration
We were inspired by the instability and corruption of many developing governments and wanted to provide more transparency for citizens. The immutability and decentralization of IPFS seemed like the perfect tool for this problem. We further developed this idea into a framework for conducting government activities and passing laws in a secure manner through ethereum smart contracts
## What it does
Lex Aeterna provides a service for governments to publish laws and store them on IPFS increasing security and transparency for citizens. We offer a website for viewing these laws and interfacing with our service but anyone can view these laws by looking at them directly on IPFS. We also offer increased security through the use of filecoin nodes to further decentralize the storage of laws and ensure that all laws and documents will **always** stay up. We also offer smart contracts which can be used to vote on proposed laws through ethereum transactions. Our website offers a UI for this functionality which includes secure account login through firebase.
## How we built it
We used the ipfs-http-client in python to upload and download files on IPFS. We set up a firebase database to store countries and associated laws with CIDs and other parameters. We then used flask to create a rest API to connect our database, our front end and IPFS. We coded our front end using react. We coded our voting smart contract using solidity and deployed it to a test net using web3 on python. We then expanded our API so that governments could deploy and use voting smart contracts all through our API. We use firebase tokens to authenticate the use of API functionality.
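A minimal sketch of publishing a document to IPFS with the ipfs-http-client (the file name is illustrative; the real app also records the resulting CID in Firebase alongside the country and law metadata):

```python
# Sketch: add a law document to a local IPFS daemon and return its CID.
import ipfshttpclient

def publish_law(path_to_pdf: str) -> str:
    # Connects to a local IPFS daemon on the default API port (5001).
    with ipfshttpclient.connect() as client:
        result = client.add(path_to_pdf)
        return result["Hash"]   # the content identifier (CID) of the document

cid = publish_law("law_2023_001.pdf")   # illustrative file name
print(f"Stored at ipfs://{cid}")
```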
## Challenges we ran into
With such an ambitious project, we had to cover a lot of ground. Connecting the front end to our API was especially difficult because we didn't have much experience with react. It was difficult to learn on the fly and develop our front end as we went.
## Accomplishments that we're proud of
Although we were very ambitious, we were able to pretty much implement all major functionality that we wanted to. We implemented an entire web application through the entire stack which uses IPFS and blockchain technology. Most of all we pushed through and continued to work even when we felt stuck.
## What we learned
None us had used flask or react before however, we all became proficient enough to implement and API using flask and a front end using react. We also learned more about what it takes to plan and execute an original idea extremely quickly.
## What's next for Lex Aeterna
First we would move to AWS to increase scalability and security. We would spend some time testing the security of our API and log in features. We would also want to expand our smart contracts to further provide more options for governments to utilize the ethereum infrastructure. For example, different types of votes such as super majority or government terms that expire after a period of time and even direct citizen votes for government officials or policies.
|
## Inspiration
Inspired by the need to streamline the job application and interview process, our project aims to eliminate the tediousness of traditional methods. With the growing competition in the job market, we envisioned a platform that not only assists candidates in applying for jobs but also meticulously prepares them for technical interviews. By leveraging advanced AI technologies, we aspire to make the journey efficient, personalized, and insightful, offering clear feedback on areas of improvement and enhancing overall interview preparedness.
## What it does
Our application, HireMeAI, integrates multiple functionalities to assist job seekers throughout the application process. Key features include:
Real-time Video Interview with Personalization: Engage in real-time video calls with a bot that asks questions based on the user's resume and the job description, with personalization that replicates human interviews even though they are conducted by a bot.
Pseudo Code Testing: Test and evaluate pseudo code using Generative AI, provided with the topic and context.
Resume Generator: Generate professional resumes tailored to specific job descriptions using the user's profile, job descriptions, and existing resumes.
Cover Letter Generator: Create personalized cover letters based on job descriptions and user profiles.
Interview Preparation: Prepare for coding interviews with dynamically generated questions based on job descriptions and difficulty levels.
Application Tracking: Track applications and interview statuses.
## How we built it
We built the application using a combination of modern web technologies and AI services:
LLMs: Leveraged OpenAI's Whisper for speech-to-text, Anthropic Claude 3 on AWS Bedrock for reasoning, and LMNT for real-time text-to-speech, which was used to mimic any person's voice.
Frontend: Developed with React, ensuring a responsive and interactive user interface. We utilized React Router for navigation and Axios for API requests.
Backend: Built using Flask, which handles generating resumes and cover letters as well as all the other endpoints that keep the app running.
Data Storage: Implemented MongoDB to manage user profiles, job listings, and application statuses.
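For illustration, a Bedrock call for generating an interview question might look like the sketch below (the model id and request format are assumptions to verify against the current Bedrock docs, not our exact backend code):

```python
# Illustrative call to Claude 3 on AWS Bedrock for one interview question.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def next_question(resume: str, job_description: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",   # assumed version string
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": f"Given this resume:\n{resume}\n\nand this job description:\n"
                       f"{job_description}\n\nask one new technical interview question.",
        }],
    }
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",   # assumed model id
        body=json.dumps(body),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]
```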
## Challenges we ran into
Building this comprehensive application presented several challenges:
AI Integration: Effectively integrating OpenAI's API to generate meaningful and relevant questions required fine-tuning the prompts and handling the API responses.
Personalization: AI-based interview questions shouldn't be repeated between interviews to prevent redundancy in the questions.
## Accomplishments that we're proud of
Successfully integrating AI to generate dynamic and relevant interview questions that are not redundant but vary based on several factors to match a human-based interview.
With the architecture in place, we can tailor the interviews based on the needs of the companies.
Resumes and cover letters are successfully edited without hallucination, meeting the needs of the job description and the user's experience without exaggerating it.
Ensuring the application is scalable and secure, with efficient handling of user data.
We have created an interview bot that can replicate the interviewing style of a person once we have an input of the person's previous interview data
## What we learned
Throughout the development of this project, we learned:
How to effectively integrate AI services to enhance application functionalities.
The importance of user experience design in building applications with multiple features.
Techniques for secure and efficient data handling in web applications.
The value of iterative development and continuous testing to ensure application robustness.
## What's next for HireMeAI
Moving forward, we plan to:
Enhance the AI capabilities to include more personalized feedback and suggestions.
Expand the job listing database to include more companies and roles.
Integrate video interview functionalities to cover a broader range of interview types.
Develop mobile applications to provide users with access on the go.
|
winning
|
## Inspiration
Toronto suffers from a 26% contamination rate in its recycling; that is, nearly a quarter of everything put out for recycling does not actually end up being recycled. Contamination happens when non-recyclable materials or garbage end up in the recycling system, from leftover food in containers to non-recyclable plastics to clothing and propane tanks.
To address this problem, we created sort-it.
## What it does
The app helps everyone do their part in working towards a greener planet. Users open the app and take a photo of the trash that they are unsure about, and the app tells them where it goes! In addition to this feature, users can enter their phone number and the day of the week on which they wish to be reminded about garbage day.
## How we built it
Android app built with Java and Firebase ML kit, webserver with Go and Firebase Firestore
## Challenges we ran into
Successfully implementing the Firebase ML Kit.
## Accomplishments that we're proud of
Having a finished product at the end of our first hackathon!
|
## Inspiration
Have you ever gone on a nice dinner out with friends, only to find that the group is too big for your server to split the bills according to each person's order? Someone inevitably decides to pay for the whole group and asks everyone to pay them back afterwards, but this doesn't always happen right away. When people forget to pay their friends back, it becomes somewhat awkward to bring up...
Enter Encountability, our cash transfer app!
## What it does
Encountability was created as an alternative to current cash transfer mechanisms, such as Interac e-transfer, that are somewhat clunky at best and inconvenient at worst - it sucks when the e-transfers don't arrive immediately and you and the person you're buying stuff from on Facebook Marketplace have to stand there awkwardly shuffling your feet and praying that the autodeposit email arrives soon. You can add friends to the app and send them cash (or request cash of your own) just by navigating to their profile on the app and sending a message in seconds! The app also reminds you of money you owe to any friends you might have on the app, ensuring that you don't forget to pay them back (especially if they shouldered everyone's bill last time you went out) and you spare them the awkwardness of having to remind you that you owe them some cash.
## How we built it
We built the backend in Python and Flask, and used CockroachDB for the database. The RBC Money Transfer API was also used for the project. For the frontend, a combination of HTML, CSS, and Javascript was used.
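A minimal sketch of the IOU-recording flow, assuming illustrative table names and connection details (the actual money movement goes through the RBC Money Transfer API and is omitted here):

```python
# Sketch: record an IOU between two friends in CockroachDB.
import psycopg2
from flask import Flask, request, jsonify

app = Flask(__name__)
# CockroachDB speaks the Postgres wire protocol, so psycopg2 works against it.
conn = psycopg2.connect("postgresql://user:pass@localhost:26257/encountability")

@app.route("/iou", methods=["POST"])
def create_iou():
    data = request.get_json()
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO ious (debtor, creditor, amount) VALUES (%s, %s, %s)",
            (data["debtor"], data["creditor"], data["amount"]),
        )
    return jsonify({"status": "recorded", **data})
```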
## Challenges we ran into
The name Encountability is a portmanteau of "encounter" and "accountability"; this was because we originally envisioned an RPG-style app where dinner bills that needed to be split could be treated like boss monsters and "defeated" by gathering a party of your friends and splitting the bill amongst yourselves easily. Time constraints were in full force this weekend, and we had to cut down on some of our more ambitious planned features after it became evident that there would not be enough time to accomplish everything we wanted. There were some difficulties with learning the techniques and tools necessary to integrate frontend and backend as well, but we pushed through and created something functional in the end!
## Accomplishments that we're proud of
Despite the hurdles and the compromises (and the time constraints... and the steep learning curve...) we were able to create something functional, with a prototype that shows how we envision the app to work and look!
## What we learned
* databases can be fiddly, but when they work, it's a beautiful thing!
* Sanity Walks™ are an essential part of the hackathon experience
* so are 30-min naps
## What's next for encountability
We'd like to connect it to bank accounts directly next time, just like we originally intended! It would also be nice to fully implement the automatic transaction-splitting feature of the app next time, as well as the more social aspects of the app.
|
## Inspiration
As university students, we and our peers have found that our garbage and recycling have not been taken by the garbage truck for some unknown reason. They give us papers or stickers with warnings, but these get lost in the wind, chewed up by animals, or destroyed because of the weather. For homeowners or residents, the lack of communication is frustrating because we want our garbage to be taken away and we don't know why it wasn't. For garbage disposal workers, the lack of communication is detrimental because residents do not know what to fix for the next time.
## What it does
This app allows garbage disposal employees to communicate to residents about what was incorrect with how the garbage and recycling are set out on the street. Through a checklist format, employees can select the various wrongs, which are then compiled into an email and sent to the house's residents.
## How we built it
The team built this by using a Python package called **Kivy**, which allowed us to create a GUI that can then be packaged into an iOS or Android app.
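A tiny Kivy sketch of the employee checklist screen (the violation labels are examples; the real app also compiles the selections into an email to the resident):

```python
# Minimal Kivy checklist: tick violations, press a button, print the report.
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.checkbox import CheckBox
from kivy.uix.label import Label
from kivy.uix.button import Button

VIOLATIONS = ["Food left in containers", "Non-recyclable plastics", "Bin overfilled"]

class ChecklistApp(App):
    def build(self):
        root = BoxLayout(orientation="vertical")
        self.boxes = []
        for violation in VIOLATIONS:
            row = BoxLayout()
            box = CheckBox()
            row.add_widget(box)
            row.add_widget(Label(text=violation))
            self.boxes.append((box, violation))
            root.add_widget(row)
        root.add_widget(Button(text="Send report", on_press=self.send))
        return root

    def send(self, _button):
        selected = [v for box, v in self.boxes if box.active]
        print("Would email the resident about:", selected)  # email step omitted

if __name__ == "__main__":
    ChecklistApp().run()
```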
## Challenges we ran into
The greatest challenge we faced was the learning curve when we began to code the app. None of the team members had ever created an app or worked with back-end and front-end coding before. However, it was an excellent day of learning.
## Accomplishments that we're proud of
The team is proud of having a working user interface to present. We are also proud of our easy-to-use, interactive, and aesthetic UI/UX design.
## What we learned
We learned skills in front-end and back-end coding. We also furthered our skills in Python by using a new library, Kivy. We gained skills in teamwork and collaboration.
## What's next for Waste Notify
Further steps for Waste Notify would likely involve collecting data from Utilities Kingston and the city. It would also require more back-end coding to set up these databases and ensure that data is secure. Our target area was University District in Kingston, however, a further application of this could be expanding the geographical location. However, the biggest next step is adding a few APIs for weather, maps and schedule.
|
partial
|
## Inspiration
We found that as we made more friends, keeping track of events became hard and texting friends to figure out plans for the weekend became tedious. With a little inspiration from friends, we decided to develop LitUp.
## What it does
Too lazy to check all your event invites? Want to see what your friends are up to on the weekend? Get lit.
LitUp enables users to see which events their friends are going to, and how lit it's going to be. Using data from Facebook, LitUp can show you the best events in your area, even ones your friends are not going to, as long as it's lit. LitUp aims to minimize planning time and maximize time to get lit.
## How we built it
Building the framework of the website with Axure, we then developed the rest just through our host's native code editor using HTML, CSS, Jquery, and Javascript.
## Challenges we ran into
Integrating both the Facebook API and the Google Maps API to create a visual representation of data created some tricky problems for us. It was challenging on both the front-end and the back-end. Filtering out irrelevant events, monitoring event times, and tracking friend activities created a large roadblock for us.
## Accomplishments that we're proud of
Going from 0 to 100 real quick, we both delved deep into learning and trying things we were not used to. By using different APIs and integrating them into the website, we tackled a problem that we would normally not think of fixing.
## What we learned
Planning and hard work is very important. That being said, sleeping is also very important. Enjoying the work you do and realizing what you're working towards is for your own benefit will make the entire experience that much more enjoyable.
## What's next for LitUp
There's no other way but up. Lit up. Get lit, get it?
|
## Inspiration
When you're planning an evening out with friends, it can be very difficult to agree on what to do. We wanted to ease that pain.
## What it does
We created a mobile web app that is like tinder for events. Simply give a location, number of friends, date and time, and you'll get a unique url that you can share with your friends. Share the url and start clicking! If anyone dislikes 5 events from a single category, no one will see any events from that category to speed up the process. When everyone agrees on an event, you'll be redirected to a page that shows you what you'll be doing.
## How we built it
We used angular.js for our front-end and firebase for our back-end. We got the events from the eventful API.
## Challenges we ran into
Changing our idea at midnight, but we think it worked out for the better.
## Accomplishments that we're proud of
Learning angular and firebase on the fly.
## What we learned
You don't have to stick with your initial idea, you can always pivot.
## What's next for planz
Incorporate other APIs and improve the sharing system for sharing the link.
|
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
|
losing
|
## Inspiration
We have a problem! We have a new generation of broke philanthropists.
The majority of students do not have a lot of spare cash so it can be challenging for them to choose between investing in their own future or the causes that they believe in to build a better future for others.
On the other hand, large companies have the capital needed to make sizeable donations but many of these acts go unnoticed or quickly forgotten.
## What it does
What if I told you that there is a way to support your favourite charities while also saving money? Students no longer need to choose between investing and donating!
Giving tree changes how we think about investing. Giving tree focuses on a charity driven investment model providing the ability to indulge in philanthropy while still supporting your future financially.
We created a platform that connects students to companies that make donations to the charities that they are interested in. Students will be able to support charities they believe in by investing in companies that are driven to make donations to such causes.
Our mission is to encourage students to invest in companies that financially support the same causes they believe in. Students will be able to not only learn more about financial planning but also help support various charities and services.
## How we built it
### Backend
The backend of this application was built using python. In the backend, we were able to overcome one of our largest obstacles, that this concept has never been done before! We really struggled finding a database or API that would provide us with information on what companies were donating to which charities.
So, how did we overcome this? We wanted to avoid having to manually input the data we needed as this was not a sustainable solution. Additionally, we needed a way to get data dynamically. As time passes, companies will continue to donate and we needed recent and topical data.
Giving Tree overcomes these obstacles using a 4 step process:
1. Using a google search API, search for articles about companies donating to a specified category or charity.
2. Identify all the nouns in the header of the search result.
3. Using the nouns, look for companies with data in Yahoo Finance that have a strong likeness to the noun.
4. Get the financial data of the company mentioned in the article and return the financial data to the user.
This was one of our greatest accomplishments of this project. We were able to overcome an obstacle that almost made us want to do a different project. Although the algorithm can occasionally produce false positives, it works more often than not and gives us a self-sustaining platform to build off of.
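As a rough illustration of the 4-step pipeline (not our exact code), the sketch below stubs out the search step, matches capitalized words in a headline against a small hypothetical ticker map, and then pulls financial data with the third-party yfinance package:

```python
# Hypothetical sketch of the 4-step lookup. `search_headlines` and KNOWN_TICKERS stand in
# for the real search API call and company-matching step; yfinance provides step 4.
import yfinance as yf

KNOWN_TICKERS = {"Microsoft": "MSFT", "Walmart": "WMT", "Google": "GOOGL"}  # illustrative only

def search_headlines(query):
    # Placeholder for the Google search API call used in step 1.
    return ["Microsoft donates $10M to ocean conservation charity"]

def companies_donating_to(cause):
    results = []
    for headline in search_headlines(f"company donation {cause}"):
        # Step 2: crude noun extraction -- capitalized words in the headline.
        nouns = [w.strip(",.") for w in headline.split() if w[:1].isupper()]
        # Step 3: match nouns against companies with Yahoo Finance data.
        for noun in nouns:
            ticker = KNOWN_TICKERS.get(noun)
            if ticker:
                # Step 4: return the company's financial data to the user.
                info = yf.Ticker(ticker).info
                results.append({"company": noun, "price": info.get("currentPrice")})
    return results

print(companies_donating_to("ocean conservation"))
```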
### Flask
```shell script
$ touch application.py
```

```python
from flask import Flask

application = Flask(__name__)

@application.route('/')
def hello_world():
    return 'Hello World'
```
```shell script
$ export FLASK_APP="application.py"
$ flask run
```
Now runs locally:
<http://127.0.0.1:5000/>
### AWS Elastic Beanstalk
Create a Web Server Environment:
```shell script
AWS -> Services -> Elastic beanstalk
Create New Application called hack-western-8 using Python
Create New Environment called hack-western-8-env using Web Server Environment
```
### AWS CodePipeline
Link to Github for Continuous Deployment:
```shell script
Services -> Developer Tools -> CodePipeline
Create Pipeline called hack-western-8
GitHub Version 2 -> Connect to Github
Connection Name -> Install a New App -> Choose Repo Name -> Skip Build Stage -> Deploy to AWS Elastic Beanstalk
```
This link is no longer local:
<http://hack-western-8-env.eba-a5injkhs.us-east-1.elasticbeanstalk.com/>
### AWS Route 53
Register a Domain:
```shell script
Route 53 -> Registered Domains -> Register Domain -> hack-western-8.com -> Check
Route 53 -> Hosted zones -> Create Record -> Route Traffic to IPv4 Address -> Alias -> Elastic Beanstalk -> hack-western-8-env -> Create Records
Create another record but with alias www.
```
Now we can load the website using:
* [hack-western-8.com](http://hack-western-8.com)
* www.hack-western-8.com
* <http://hack-western-8.com>
* <http://www.hack-western-8.com>

Note that it says "Not Secure" beside the link.
### AWS Certificate Manager
Add SSL to use HTTPS:
```shell script
AWS Certificate Manager -> Request a Public Certificate -> Domain Name "hack-western-8.com" and "*.hack-western-8.com" -> DNS validation -> Request
$ dig +short CNAME -> No Output? -> Certificate -> Domains -> Create Records in Route 53
Elastic Beanstalk -> Environments -> Configuration -> Capacity -> Enable Load Balancing
Load balancer -> Add listener -> Port 443 -> Protocol HTTPS -> SSL certificate -> Save -> Apply
```
Now we can load the website using:
* <https://hack-western-8.com>
* <https://www.hack-western-8.com>

Note that there is a lock icon beside the link to indicate that we are using an SSL certificate, so we are secure.
## Challenges we ran into
The most challenging part of the project was connecting the charities to the companies. We allowed the user to either type the charity name or choose a category that they would like to support. Once we knew what charity they were interested in, we could use this query to scrape information concerning donations from various companies and then display the stock information related to those companies. We were able to successfully complete this query and we can display the donations made by various companies in the command line; however, further work would need to be done in order to display all of this information on the website. Despite these challenges, the current website is a great prototype and proof of concept!
## Accomplishments that we're proud of
We were able to successfully use the charity name or category to scrape information concerning donations from various companies. We not only tested our code locally, but also deployed this website on AWS using Elastic Beanstalk. We created a unique domain for the website and we made it secure through a SSL certificate.
## What we learned
We learned how to connect Flask to AWS, how to design an eye-catching website, how to create a logo using Photoshop and how to scrape information using APIs.
We also learned about thinking outside the box. To find the data we needed we approached the problem from several different angles. We looked for ways to see what companies were giving to charities, where charities were receiving their money, how to minimize false positives in our search algorithm, and how to overcome seemingly impossible obstacles.
## What's next for Giving Tree
Currently, students have 6 categories they can choose from, in the future we would be able to divide them into more specific sub-categories in order to get a better query and find charities that more closely align with their interests.
* Health
  - Medical Research
  - Mental Health
  - Physical Health
  - Infectious Diseases
* Environment
  - Ocean Conservation
  - Disaster Relief
  - Natural Resources
  - Rainforest Sustainability
  - Global Warming
* Human Rights
  - Women's Rights
  - Children
* Community Development
  - Housing
  - Poverty
  - Water
  - Sanitation
  - Hunger
* Education
  - Literacy
  - After School Programs
  - Scholarships
* Animals
  - Animal Cruelty
  - Animal Health
  - Wildlife Habitats
We would also want to connect the front and back end.
|
## Inspiration
With the rise of meme stocks taking over the minds of gen-z, vast amounts of young people are diving into the world of finance. We wanted to make a platform to make it easy for young people to choose stocks based on what matters most: the environment.
## What it does
Loraxly speaks for the trees: it aggregates realtime stock data along with articles about the environmental impact a company has on the world. It then uses OpenAI's powerful GPT-3 api to summarize and classify these articles to determine if the company's environmental impact is positive or not.
## How we built it
Figma, React, Javascript, Kumbucha, Python, Selenium, Golang, goroutines, Cally, firebase, pandas, OpenAI API, Alphavantage stock api, Doppler, rechart, material-ui, and true love.
## Challenges we ran into
* Some goroutines were getting ahead of others; we fixed this with channels.
* Article summaries weren't making sense, so we had to be more granular with our article selection.
* The chart was tough to set up.
* We didn't get going until Saturday afternoon.
## Accomplishments that we're proud of
getting things working
## What we learned
The Alphavantage API has some major rate limiting.
## What's next for Lorax.ly
Adding a trading function and creating an ETF comprised of environmentally friendly companies that people can invest in.
|
## Inspiration
In the past three months, there have been over 60 current events that have resulted in over 200 deaths and have affected over 2 million people. From natural disasters such as Hurricane Dorian and the wildfires burning in the Amazon to events such as the El Paso shooting that prompt discussions around social issues, these events shook individuals, communities, and nations.
While the opportunities to donate to these causes are endless, there is a disconnect between wanting to donate and following through with the action. In fact, from a survey we collected with 64 responses, 81% of respondents have thought about or wanted to donate to a cause in the last three months, but only 27% actually did. Despite all the different charities and organizations that provide disaster relief and funding for social causes, respondents cite “confusion,” “too many options”, “trust” and “too lazy to do own research” as reasons for why they haven’t donated. These reasons are precisely why we chose to create ++Giving.
## What it does
++Giving is a platform that empowers individuals to donate to causes that speak to them, bridging the gap between people who wish to donate and the organizations that can help. On the ++Giving web application, a user is presented with recently occurring natural disasters and social issues that they can donate to, a description for each event, and where the event is located on a map. ++Giving decreases the amount of time that individuals have to spend doing their own research on what to give to and how. Through the web application, users can also follow our donate link to organization sites, allowing the user to give money to the cause.
## How I built it
++Giving is hosted in an Azure Web Application built with a .NET Core 2.1 framework, and a user interface implemented with HTML and CSS. Our front end application allows people to view the world's trending issues, learn about these issues through the top trending news articles, and directly link to ways to donate to these causes while staying all within our application. The data storage for our application is configured in four data tables in an Azure SQL Database. In order to keep our databases up to date with issues that people care about, we use Time Triggered Azure functions to continually update our databases. These Serverless Functions rely on APIs like NASA's EONET API to obtain all currently occurring natural disasters, Google-News to obtain trending political and social issues, and Charity Navigator to match the right charities to these issues to empower our users to give to the causes that speak to them.
## Challenges I ran into **and**
## Accomplishments that I'm proud of **and**
## What I learned
We are really proud that we all did something we haven’t done before at this hackathon, whether it was coding in a new language, using an API we weren’t familiar with, or learning new concepts. In addition, we are glad that we took the time to work together and take feedback from one another during a planning stage. It made sure that everyone was on the same page and had a mutual understanding of what we were trying to accomplish and build. However, we were challenged by data flow because we were dealing with so many APIs. We learned that it’s important to communicate, work as a team, and appreciate all viewpoints. Taking advantage of team members’ different skill sets and ideas helps move the project forward.
## What's next for ++Giving
During this hackathon we were able to implement the core functionality of our idea: making giving to charities more accessible. Moving forward, we have a couple of ideas to further improve our platform. Firstly, we would like to implement direct payment functionality, so that a user can donate to their select charity without needing to leave our
site. Next, we would like to implement a round-up system, to further incentivize our user base to donate to important causes. When linked to a debit card, our site would give the option to "round up" purchases, similar to the investment platform "Acorns". If one of our users spends $13.43 at Trader Joe's, we will send them a notification asking if they would like to round up that purchase to a flat $14, and invest the $0.57 in one of their favorited charities.
|
winning
|
## Inspiration
We wanted to be able to allow people to understand the news they read in context because often times, we ourselves will read about events happening on the other side of the globe, but we have no idea where it is. So we wanted a way to visualize the news along with it's place in the world.
## What it does
Visualize the news as it happens in real-time, all around the world. Each day, GLOBEal aggregates news and geotags it, allowing the news to be experienced in a more lucid and immersive manner. Double click on a location and see what's happening there right now. Look into the past and see how the world shifts as history is made.
## How we built it
We used WebGL's Open Globe Platform to display magnitudes of popularity that were determined by a "pagerank" we made by crawling Google ourselves and using Webhose APIs. We used Python scripts to create these crawlers and API calls, and then populated JSON files. We also used JavaScript with the Google Maps, HERE Maps, and Google News APIs in order to allow a user to double click on the globe to see the news from that location.
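As a rough sketch of the data-prep step (the flat `[lat, lng, magnitude, ...]` layout is the one the WebGL Globe examples expect, but treat the exact format and the values here as assumptions), a small Python script that turns geotagged popularity scores into the globe's JSON could look like:

```python
# Illustrative sketch: convert geotagged "pagerank" scores from the crawlers into the
# JSON layout used by the WebGL Globe examples. Coordinates and scores are made up.
import json

# (lat, lng, popularity) tuples produced by the crawlers / Webhose calls
points = [
    (40.7128, -74.0060, 0.9),   # New York
    (48.8566, 2.3522, 0.6),     # Paris
    (35.6762, 139.6503, 0.4),   # Tokyo
]

flat = []
for lat, lng, magnitude in points:
    flat.extend([lat, lng, magnitude])

with open("news.json", "w") as f:
    json.dump([["2016-01-22", flat]], f)  # one named series per day
```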
## Challenges we ran into
Google blocked our IPs because our web crawler made too many queries/second
Our query needs were too many for the free version of WebHose, so we called them and got a deal where they gave us free queries in exchange for attribution. So shout out to [webhose.io](http://webhose.io)!!!
## Accomplishments that we're proud of
* Learned how to make web crawlers, how to use javascript/html/css, and developed a partnership with Webhose
* Made a cool app!
## What we learned
Javascript, Firebase, webhose, how to survive without sleep
## What's next for GLOBEal News
* INTERGALACTIC NEWS!
* Work more on timelapse
* Faster update times
* Tags on globe directly
* Click through mouse rather than camera ray
|
## Inspiration
We realized that while there are many news sources out there, with this, we can see what countries in the world are news hotspots - the ones that have the most things going on. You can view the world from a new perspective with our World New Map, seeing which countries are the centers of action and events.
## What it does
It uses UiPath for web scraping to gather data from news sites. The program then creates a heat map based on the number of news reports, sorted by category and severity. The mobile app then notifies its users when there is a problem in an area they are in or near.
## How we built it:
#### Web Scraping (UiPath)
We used UiPath to gather data from news sites via web scraping. We accumulated around 5000 different data entries and exported them into CSVs, which are then put in a MongoDB Atlas database.
#### Cloud Storage (MongoDB Atlas)
We created a main database and sorted our data into about 100 different subfolders for different countries around the world.
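As a minimal sketch of this step (the connection string, database, and collection names are placeholders, not our actual setup), loading one of the exported CSVs into an Atlas collection with pymongo looks roughly like this:

```python
# Hypothetical sketch: load a UiPath-exported CSV into a MongoDB Atlas collection.
# The connection string, database, and collection names are placeholders.
import csv
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["world_news"]["canada"]

with open("canada_news.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))  # each row: headline, category, severity, region, ...

if rows:
    collection.insert_many(rows)
print(f"Inserted {len(rows)} articles")
```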
#### Web Application (NodeJS)
Using Node.js and Google Charts, we created a heat map based on the media coverage of each area. We exported the MongoDB data as JSON and turned it into a chart. We had a separate file for each country to show its news; clicking on Canada, for example, shows each province and how many articles each province has.
#### Android App (Radar.io)
We used Kotlin for the mobile app and used Radar.io to get the location of the user and notify them if they are in an area with a safety concern.
## Deploy with Heroku
We deployed this software to the web with Heroku. Originally we had it running locally for testing, afterwards, we converted to a version that we could deploy with Heroku.
## Assign a domain name
We used the free domain code to assign it to a .online domain.
## Challenges we ran into
Radar.io's software was complicated and there was very little documentation. Radar.io also had a bug on their end that did not allow the location tracking to work properly, which forced us to hard-code parts of the app. There were also numerous issues with Heroku as we struggled to convert the local version into one we could deploy.
MongoDB's official documentation was vague and confusing, we had to resort to third-party documentation to use it.
We also had to create a heat map, which we were originally going to make in plot.ly.
## Accomplishments that we're proud of
Learning how to use Radar.io, UiPath, and MongoDB Atlas.
## What we learned
How to use Radar.io, UiPath, MongoDB Atlas, Heroku, Custom Domains, Static web-hosting.
We also learned fast json manipulation to create graphs and output files.
## What's next for WorldNewsMap
We could use machine learning to allow the application to predict future media coverage. We could then alert users that there could be potential danger in the area in the future. We also need more data from UiPath, and the data should be proportional.
|
## Inspiration
While observing the vast amount of climate data available, we realized there wasn't a singular, user-friendly platform to visualize this data in real-time. We wanted to make global climate data accessible and understandable to everyone.
## What it does
GlobeGlance offers an interactive 3D earth view, allowing users to zoom into regions and visualize climate impacts, from carbon emissions to tidal metrics. The data is updated in real-time, giving an accurate representation of our planet's health.
## How we built it
We used React for the earth visualization and the lightning-quick FastAPI framework on the backend to aggregate multiple APIs from various global climate data sources into a seamless, real-time updating platform. The UI/UX was designed to be intuitive.
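A minimal sketch of the aggregation idea, assuming hypothetical upstream URLs and response shapes (they are not the actual sources we used), could look like this:

```python
# Hypothetical sketch of a FastAPI endpoint that aggregates climate data for a region.
# The upstream URLs and response fields are placeholders.
import httpx
from fastapi import FastAPI

app = FastAPI()

SOURCES = {
    "emissions": "https://example.org/api/emissions",
    "tides": "https://example.org/api/tides",
}

@app.get("/climate/{region}")
async def climate(region: str):
    async with httpx.AsyncClient(timeout=10) as client:
        data = {}
        for name, url in SOURCES.items():
            resp = await client.get(url, params={"region": region})
            data[name] = resp.json()
    return {"region": region, "data": data}
```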
## Challenges we ran into
Accurately aggregating diverse sets of climate data and presenting it in a unified format was challenging. Balancing the detail of data with user-friendliness was also a considerable hurdle.
## Accomplishments that we're proud of
Successfully integrating real-time data from multiple sources into one platform. We're also proud of our user interface!
## What we learned
The importance of data visualization in conveying complex information. We also learned how diverse the realm of climate data is, and the challenges of standardizing it.
## What's next for GlobeGlance
We're planning to integrate more data sources, especially from underrepresented regions. We also aim to add a feature for users to contribute their local climate data and observations, making GlobeGlance a community-driven platform.
|
partial
|
## Inspiration
Every year, millions of infants die from diseases such as Birth Asphyxia, Kawasaki disease, Sudden Infant Death Syndrome (SIDS), etc. This could be prevented via a responsive system to monitor and care for the baby during emergencies while keeping the parents more medically involved, and saving important data akin to a dashcam.
## What it does
It is a hardware device that monitors a baby's vitals such as body temperature and heart pulse. It uses time-series analysis and predictive modeling to raise alarms if the baby might reach an emergency state and warn the parents. It then interacts with the parents via speech to instruct them with first aid, while also providing the LLM with the sensor data as context for better treatment.
## How we built it
We used an Arduino to interface with the sensors, and a Raspberry Pi as the master controller. We initially planned to use the InterSystems Integrated ML Cloud for our predictive modeling, but the cloud did not support this feature yet, so we wrote our own signal processing model. Once the alarms are raised, the parents interact via speech, and it is translated to text using the Google Cloud speech-to-text API, and interacts with OpenAI's ChatGPT API with the sensor data as context. We do all the processing on the hardware itself to reduce reliance on mobile devices and speed up processing for critical events such as this.
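A stripped-down sketch of the "sensor data as context" step is shown below; the model name, prompt wording, and hard-coded vitals are illustrative assumptions, and it assumes the openai>=1.0 Python client rather than our exact code:

```python
# Illustrative sketch: pass recent vitals to the chat API as context for first-aid guidance.
# Assumes the openai>=1.0 client; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vitals = {"body_temp_c": 39.2, "pulse_bpm": 180}  # example readings from the Arduino
parent_question = "My baby feels hot and is crying, what should I do?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": f"You are an infant first-aid assistant. Current sensor readings: {vitals}. "
                    "Give calm, step-by-step first-aid instructions."},
        {"role": "user", "content": parent_question},
    ],
)
print(response.choices[0].message.content)
```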
## Challenges we ran into
We did not have enough time to source a USB Microphone to present real-time interaction, so we instead ran with pre-recorded talks. The InterSystems Cloud also could not support our requirements, so we had to switch to a custom processor at the last minute.
## Accomplishments that we're proud of
We worked on a very serious real-world problem and learned how to integrate cutting-edge AI/ML APIs with the hardware, especially as beginner hackers.
## What's next for PediBeat
Making the device more modular and compact. Then we want to transition from purely infant-related emergencies to a device to tackle any general emergency in households.
|
## Inspiration
The inspiration for this project was the recent wildfires in Canada, which have polluted and damaged the air quality throughout the world. With this website, we strive to raise awareness of the environment, which has been getting damaged throughout the years, and educate people about the air quality.
## What it does
It shows the air quality data of any location worldwide requested by the client. The data includes Carbon Monoxide, Nitrogen Dioxide, Ozone, Sulphur Dioxide, and Particulate Matter 2.5 and 1.0.
## How we built it
This website is built using HTML and Bootstrap for the front end and JavaScript and Air Quality by API-Ninja for the back end. We divided the work and had different members work on different parts of the code. By doing so, we had to communicate with other teammates about the code we were working on, the different ideas we had, and the challenges we encountered.
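As a rough sketch of the air-quality lookup (shown in Python for brevity, even though the site's back end is JavaScript; the endpoint and header follow API Ninjas' documented pattern, but the exact response fields should be treated as assumptions):

```python
# Rough sketch of the air-quality lookup. API key is a placeholder; response fields
# (CO, NO2, O3, SO2, PM2.5, PM10) are best-effort assumptions.
import requests

API_KEY = "YOUR_API_NINJAS_KEY"  # placeholder

def air_quality(city):
    resp = requests.get(
        "https://api.api-ninjas.com/v1/airquality",
        params={"city": city},
        headers={"X-Api-Key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # pollutant concentrations for the requested city

print(air_quality("Vancouver"))
```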
## Challenges we ran into
During the hackathon, we encountered many challenges. At first, one challenge that we encountered was coming up with ideas. Since it was the first hackathon for most of us, we did not know what to expect. After coming up with an idea, we encountered many more problems. First, we encountered the problem of learning JavaScript. Although some of us had experience with JavaScript and HTML, we also had some members not familiar with these languages and the usage of APIs. However, after this hackathon, we believe that this has been a learning experience for everyone and an experience that enhanced our technical and communication skills.
## Accomplishments that we're proud of
As a beginner-friendly group, we were able to create a fully functioning air-indexing website that contributes to sustainability and environmental purposes. Furthermore, we are proud that we were able to work effectively as a team and enhance our technical abilities.
## What we learned
We learned how to use APIs more effectively as well as the value of collaboration between team members and learned how to communicate our work.
## What's next for Air Quality Indexing Website
In the future, we want to implement features that also raise awareness for the environment. Some examples are displaying plastic usage from all around the world and garbage detection. We also want to find ways to further optimize our code, reducing the amount of energy required to run it, which will help the environment.
|
## Inspiration
Tired, sleep-deprived, and hungry-- the first thing you’d want once you retreat back to your humble abode is a perfectly cooked bowl of noodles just the way YOU like it. Last thing you want to do... is make it yourself.
## What it does
Through our iOS app, customize a bowl of noodles to your liking. Medium spice, al dente noodles, veggies, meat, and chili flakes? You got it! Our pre-loaded noodle-making machine will handle the rest and add in the toppings of your desire and cook your noodles just the way you like it.
## How we built it
We built the iOS app using Swift, SwiftUI, and Alamofire, with the design prototyped in Figma and Procreate. The backend + hardware was written entirely in Python and deployed on a raspberry pi that used Flask to expose API endpoints to our frontend. All the physical actuation of the noodle/chili/water was done so using servo motors and PWM commands. You can watch how it works here: <https://streamable.com/r1z867>
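A hypothetical sketch of the Raspberry Pi side is shown below: a Flask endpoint that drives a topping-dispenser servo with PWM. The GPIO pin, duty cycles, and request schema are placeholders, not our exact wiring or protocol:

```python
# Hypothetical sketch: Flask endpoint on the Pi that pulses a dispenser servo via PWM.
import time
from flask import Flask, request, jsonify
import RPi.GPIO as GPIO

SERVO_PIN = 18
GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz servo signal
servo.start(0)

app = Flask(__name__)

@app.route("/dispense", methods=["POST"])
def dispense():
    order = request.get_json()            # e.g. {"topping": "chili", "amount": 2}
    for _ in range(order.get("amount", 1)):
        servo.ChangeDutyCycle(7.5)        # open the dispenser
        time.sleep(0.5)
        servo.ChangeDutyCycle(2.5)        # close it again
        time.sleep(0.5)
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```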
## Challenges we ran into
* Fluid mechanics (realized the practical implications of hydrostatic pressure)
* Servo mechanisms not mechanism-ing
* iOS dependencies with Alamofire
* Sticky spices (impeded with our spice dispenser)
* Extreme sleep-deprivation
* Procrastinating devpost
## Accomplishments that we're proud of
* IT WORKSSSS
* Fully integrated iOS app and the hardware mechanisms
## What we learned
* Spices are sticky
* Life Sciences Institute gets diabolically cold at night (especially if you plan to stay overnight)
* McDonald's cantaloupe chunks are bad (no questions asked)
## What's next for Noodle Doodle
* Better motors and structural components
* Integrate a camera to allow a video livefeed
* Dynamically calculate cook times
|
losing
|
## Inspiration
There are so many apps out there that track your meals, but they often don't work for people who struggle with access to healthy and affordable food. FoodPrint is creating a meal-tracking app that is designed with people who struggle with food security in mind. By using the app, anonymous data can be collected to help researchers learn about community-specific gaps in food security and nutrition.
By taking a picture of your food, FoodPrint assesses its nutritional content and tells you what’s in it in real time as well as easy and affordable ways to improve your nutritional intake. With FoodPrint, it is not just people taking care of their own health but, collectively across potentially diverse populations, rich data is being gathered to help better understand and improve public health – something that’s especially useful for highlighting the nutrition challenges of at-risk communities.
Every meal you enter is not just a personal learning experience about how to eat better, but it also helps your community address food insecurity.
## What it does
FoodPrint is an app that allows users to take a picture of their meal, analyzes its nutritional content, and provides suggestions on how to improve the biggest gaps in nutrition using affordable and accessible ingredients. It is designed with people in food deserts in mind, who have limited access to fresh and affordable ingredients. Through the anonymous collection of these photos, it contributes to data on the types of foods consumed and the largest areas where nutrition is lacking in local communities. The data from this app contributes to valuable insights for researchers working to address food insecurity in areas lacking local and community-specific context.
## How we built it
1. User Interface Design:
Designed a user-friendly mobile app interface with screens for food scanning, meal calendar viewing and preferences/allergies input.
Created a camera interface for users to take pictures of food.
2. User Personal Information:
Developed a user profile system within the app.
Allowed users to input their dietary preferences and allergies, storing this information for future recommendations.
3. Image Recognition:
Used OpenAI's image recognition to extract the ingredients from the picture of the food (see the rough sketch after this list).
4. Nutrition Data Processing:
Processed and cleaned the extracted ingredients into different kinds of nutrients.
Identified and categorized nutrient types and provided a list of the nutrition.
5. Database Integration:
Stored and managed food calendar data, user profiles, and recommendation data in a database.
6. Recommendation Engine:
Implemented a recommendation engine that factors in user preferences and allergies.
Using AI, the algorithm we built suggests nutritious and affordable meals based on the user's preferences and food history (calendar).
Depending on complexity, integrated machine learning models that learn user preferences over time.
7. Integration Testing:
Tested the integration of different components to ensure that food recognition, user preferences, and recommendations work together.
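The ingredient-extraction step (item 3 above) could look roughly like the sketch below; the model name, prompt, and base64 image handling are assumptions rather than our exact implementation:

```python
# Illustrative sketch of extracting ingredients from a meal photo with OpenAI's vision input.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_ingredients(image_path):
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "List the ingredients visible in this meal, comma-separated."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return [i.strip() for i in response.choices[0].message.content.split(",")]

print(extract_ingredients("meal.jpg"))
```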
## Challenges we ran into
One challenge we ran into was coordinating our time well and delegating tasks early on was something important to do.
## Accomplishments that we're proud of
Implemented an image recognition system that identifies ingredients from images.
Developed a user profile system to keep track of long-term dietary habits and changes.
Designed a user-friendly mobile app interface to enhance the overall user experience.
We are proud of the team-building and how we grew as a team.
## What we learned
We learned that it's important to have a clear vision about the project for everyone on the team before moving forward.
## What's next for Food Print
There are many ways that we can expand FoodPrint. Some directions we were thinking about are working with food banks to show where ingredients can be found and working with research organizations or non-profits to perform more in-depth analysis of the research data to better understand the conditions of food security in communities. A potential expansion for FoodPrint includes enabling family members to share and view each other’s dietary data, fostering better awareness of their family members' nutritional habits and conditions. This feature could help families support each other in making healthier choices. Additionally, users across different communities would be able to learn from one another’s dietary patterns, offering insights into various cultural eating habits and solutions to food insecurity.
|
## Inspiration
Two in every five Americans are classified as medically obese. Obesity is the leading cause of suffering in many developed countries, not because of the lack of healthy food available but more so due to the lack of information that many brands provide.
In our quest to combat this ever-growing issue globally, we decided to make a nutrition app that not only helps people track their food but also proactively helps them make better choices.
## What it does
Fiber uses your camera to scan barcodes that are located on the back of most products. Using the OpenFoodFacts API we access the biggest collection of nutrition information in current human history allowing us to intelligently identify the ingredients and allergens.
We parse this information into the OpenAI API using industry-leading generative AI to inform the user of the pros and cons of the product.
## How we built it
We created a mobile application front end using React Native, allowing for great cross-platform functionality. It talks to our backend, written in Python with Flask, which provides an endpoint for searching the OpenFoodFacts database for product information.
We further used OpenAI's GPT-3.5 Turbo model to summarize and list the benefits and disadvantages of the ingredients.
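A minimal sketch of the barcode-lookup endpoint is shown below; the OpenFoodFacts response fields are best-effort assumptions, and the OpenAI summarization step is omitted:

```python
# Minimal sketch: Flask endpoint that looks up a scanned barcode in OpenFoodFacts.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/product/<barcode>")
def product(barcode):
    resp = requests.get(
        f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json", timeout=10
    )
    data = resp.json()
    if data.get("status") != 1:
        return jsonify(error="product not found"), 404
    p = data["product"]
    return jsonify(
        name=p.get("product_name"),
        ingredients=p.get("ingredients_text"),
        allergens=p.get("allergens"),
    )

if __name__ == "__main__":
    app.run()
```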
## Challenges we ran into
Due to the millions of variations of mobile devices that are available globally, we ran into a few issues ensuring that Fiber ran well on a wide variety of devices. In addition, permissions for the camera and the barcode scanning functionality were a small hurdle.
The biggest challenge we ran into was learning how to work together as a team, but a few hours in we got the hang of it, using tools on GitHub to optimize our ability to work together.
## Accomplishments that we're proud of
We are proud to learn more about the various technologies involved and working with AI to push our purpose through an application. Through this, we were able to make learning more about various products quicker, easier, and more efficient.
## What we learned
During the creation of the project, we learned a lot more about the pros and cons of different everyday products. It was surprising to see the various ingredients and potential hazards as well. After the creation of the project, our team was able to gain a deeper understanding of implementing AI into an application and revealed potential issues with everyday products.
## What's next for Fiber: Your AI Nutrition Companion
In the future, we hope to add more features that will specify additional information about the product, including the nutrition facts and some potential ways that the product could be used. For example, some food products could include recipes on various nutritional dishes.
|
## Inspiration
Despite being a global priority in the eyes of the United Nations, food insecurity still affects hundreds of millions of people. Even in the developed country of Canada, over 5.8 million individuals (>14% of the national population) are living in food-insecure households. These individuals are unable to access adequate quantities of nutritious foods.
## What it does
Food4All works to limit the prevalence of food insecurity by minimizing waste from food corporations. The website addresses this by serving as a link between businesses with leftover food and individuals in need. Businesses with a surplus of food are able to donate food by displaying their offering on the Food4All website. By filling out the form, businesses will have the opportunity to input the nutritional values of the food, the quantity of the food, and the location for pickup.
From a consumer’s perspective, they will be able to see nearby donations on an interactive map. By separating foods by their needs (e.g., high-protein), consumers will be able to reserve the donated food they desire. Altogether, this works to cut down unnecessary food waste by providing it to people in need.
## How we built it
We created this project using a combination of multiple languages. We used Python for the backend, specifically for setting up the login system using Flask Login. We also used Python for form submissions, where we took the input and allocated it to a JSON object which interacted with the food map. Secondly, we used Typescript (JavaScript for deployable code) and JavaScript’s Fetch API in order to interact with the Google Maps Platform. The two major APIs we used from this platform are the Places API and Maps JavaScript API. This was responsible for creating the map, the markers with information, and an accessible form system. We used HTML/CSS and JavaScript alongside Bootstrap to produce the web-design of the website. Finally, we used the QR Code API in order to get QR Code receipts for the food pickups.
## Challenges we ran into
One of the challenges we ran into was using the Fetch API. Since none of us were familiar with asynchronous polling, specifically in JavaScript, we had to learn this to make a functioning food inventory. Additionally, learning the Google Maps Platform was a challenge due to the comprehensive documentation and our lack of prior experience. Finally, putting front-end components together with back-end components to create a cohesive website proved to be a major challenge for us.
## Accomplishments that we're proud of
Overall, we are extremely proud of the web application we created. The final website is functional and it was created to resolve a social issue we are all passionate about. Furthermore, the project we created solves a problem in a way that hasn’t been approached before. In addition to improving our teamwork skills, we are pleased to have learned new tools such as Google Maps Platform. Last but not least, we are thrilled to overcome the multiple challenges we faced throughout the process of creation.
## What we learned
In addition to learning more about food insecurity, we improved our HTML/CSS skills through developing the website. To add on, we increased our understanding of Javascript/TypeScript through the utilization of the APIs on Google Maps Platform (e.g., Maps JavaScript API and Places API). These APIs taught us valuable JavaScript skills like operating the Fetch API effectively. We also had to incorporate the Google Maps Autofill Form API and the Maps JavaScript API, which happened to be a difficult but engaging challenge for us.
## What's next for Food4All - End Food Insecurity
There are a variety of next steps of Food4All. First of all, we want to eliminate the potential misuse of reserving food. One of our key objectives is to prevent privileged individuals from taking away the donations from people in need. We plan to implement a method to verify the socioeconomic status of users. Proper implementation of this verification system would also be effective in limiting the maximum number of reservations an individual can make daily.
We also want to add a method to incentivize businesses to donate their excess food. This can be achieved by partnering with corporations and marketing their business on our webpage. By doing this, organizations who donate will be seen as charitable and good-natured by the public eye.
Lastly, we want to have a third option which would allow volunteers to act as a delivery person. This would permit them to drop off items at the consumer’s household. Volunteers, if applicable, would be able to receive volunteer hours based on delivery time.
|
losing
|
View presentation at the following link: <https://youtu.be/Iw4qVYG9r40>
## Inspiration
During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group took medication for health-related reasons, and as a result, had certain external medications that result in negative drug interactions. More often than not, one of us is unable to have certain other medications (e.g. Advil, Tylenol) and even certain foods.
Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of the adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take.
## What it does
Our mobile application scans barcodes of medication and outputs to the user what the medication is, and any negative interactions that follow it to ensure that users don't experience negative side effects of drug mixing.
## How we built it
Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done through java and stored in a CSV file for the API to access when requests were made. This API was then integrated with a python backend and flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request, which then scans the barcode and sends the drug information back to the flutter mobile application.
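The lookup endpoint could be sketched roughly as below; the CSV column names, file path, and request schema are placeholders, and the barcode decoding is shown with the pyzbar library as one plausible choice:

```python
# Rough sketch: receive a photo via POST, decode the barcode, and return the matching
# row from the interactions CSV. Column names and the CSV path are placeholders.
import csv
import cv2
import numpy as np
from flask import Flask, request, jsonify
from pyzbar.pyzbar import decode

app = Flask(__name__)

with open("drug_interactions.csv", newline="") as f:
    DRUGS = {row["barcode"]: row for row in csv.DictReader(f)}

@app.route("/scan", methods=["POST"])
def scan():
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    frame = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    codes = decode(frame)
    if not codes:
        return jsonify(error="no barcode found"), 400
    barcode = codes[0].data.decode()
    drug = DRUGS.get(barcode)
    if drug is None:
        return jsonify(error="unknown product"), 404
    return jsonify(name=drug["name"], interactions=drug["interactions"])
```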
## Challenges we ran into
The consistent challenge that we seemed to run into was the integration between our parts.
Another challenge we ran into was that one group member's laptop just imploded (and stopped working) halfway through the competition; Windows recovery did not pull through, and the member had to grab a backup laptop and set the entire thing up again for smooth coding.
## Accomplishments that we're proud of
During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end.
Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative.
## What we learned
Throughout the project, we gained valuable experience working with various skills such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. All these skills were something we've only seen or heard elsewhere, but learning and subsequently applying it was a new experience altogether. Additionally, throughout the project, we encountered various challenges, and each one taught us a new outlook on software development. Overall, it was a great learning experience for us and we are grateful for the opportunity to work with such a diverse set of technologies.
## What's next for Medisafe
Medisafe has all 3-dimensions to expand on, being the baby app that it is. Our main focus would be to integrate the features into the normal camera application or Google Lens. We realize that a standalone app for a seemingly minuscule function is disadvantageous, so having it as part of a bigger application would boost its usage. Additionally, we'd also like to have the possibility to take an image from the gallery instead of fresh from the camera. Lastly, we hope to be able to implement settings like a default drug to compare to, dosage dependency, etc.
|
## Inspiration
How many times have you forgotten to take your medication and damned yourself for it? It has happened to us all, with different consequences. Indeed, missing out on a single pill can, for some of us, throw an entire treatment process out the window. Being able to keep track of our prescriptions is key in healthcare, this is why we decided to create PillsOnTime.
## What it does
PillsOnTime allows you to load your prescription information, including the daily dosage and refills, as well as reminders in your local phone calendar, simply by taking a quick photo or uploading one from the library. The app takes care of the rest!
## How we built it
We built the app with React Native and Expo, using Firebase for authentication. We used the built-in Expo module to access the device's camera and store the image locally. We then used the Google Cloud Vision API to extract the text from the photo. We used this data to create a (semi-accurate) algorithm which can identify key information about the prescription/medication to be added to your calendar. Finally, the event was added to the phone's calendar with the built-in Expo module.
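The text-extraction step could look roughly like the sketch below (shown in Python for illustration, even though the app itself runs on React Native/Expo); the dosage regex is a toy heuristic, not our actual algorithm:

```python
# Illustrative sketch: pull label text with the Google Cloud Vision API, then apply a
# toy heuristic to guess the daily dose. Assumes GOOGLE_APPLICATION_CREDENTIALS is set.
import re
from google.cloud import vision

def read_label(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    return annotations[0].description if annotations else ""

label_text = read_label("prescription.jpg")
match = re.search(r"take\s+(\d+)\s+tablet", label_text, re.IGNORECASE)
daily_dose = int(match.group(1)) if match else None
print(label_text, daily_dose)
```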
## Challenges we ran into
As our team has a diverse array of experiences, the same can be said about the challenges that each of us encountered. Some had to get accustomed to new platforms in order to design an application in less than a day, while figuring out how to build an algorithm that would efficiently analyze data from prescription labels. None of us had worked with machine learning before, and it took a while for us to process the incredibly large amount of data that the API gives back to you. Working with the permissions for writing to someone's calendar was also time-consuming.
## Accomplishments that we're proud of
Just going into this challenge, we faced a lot of problems that we managed to overcome, whether it was getting used to unfamiliar platforms or figuring out the design of our app.
We ended up with a rather satisfying result given the time constraints, and we learned quite a lot.
## What we learned
None of us had worked with ML before but we all realized that it isn't as hard as we thought!! We will definitely be exploring more similar API's that google has to offer.
## What's next for PillsOnTime
We would like to refine the algorithm to create calendar events with more accuracy
|
## Inspiration
This project is inspired by our personal experiences of seeing elderly relatives struggle with keeping track of their prescriptions as well as with the complexity of modern tech. By creating simple, no-signup-required, and user-friendly apps, we aim to enhance their quality of life and improve their health as well.
## What it does
Althea allows users to enter which prescription drugs they are taking so that they can have a checklist to remember what they have taken on a certain day. Althea then asks the user if they felt any symptoms the same day and notifies the user if the symptoms they had could be side effects of a drug they use. The user can rate the severity of the symptoms they felt. Finally, users are able to see past logs and even export them as PDFs if they would like to share the information easily with their primary care providers.
## How we built it
For the front-end, we used JavaScript with Tailwind CSS and HTML for a smooth and sleek user experience. For the back-end, we used Python with Django. We organized schemas and used SQLite for patient data because of its sensitive nature. We used PostgreSQL for medicine data since we wanted it to be shared among different people and used Gemini AI in conjunction with it. For the mobile app alternative, we used pure Flutter and emulated on our machine.
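A hypothetical sketch of the core Django models is shown below; the field names and the side-effect check are illustrative, not our exact schema:

```python
# Hypothetical sketch of Althea's core models: a prescription the user takes and a daily
# symptom log tied to it. Field names are illustrative placeholders.
from django.db import models

class Prescription(models.Model):
    name = models.CharField(max_length=100)
    daily_doses = models.PositiveSmallIntegerField(default=1)
    known_side_effects = models.TextField(blank=True)  # comma-separated, checked against symptoms

class SymptomLog(models.Model):
    prescription = models.ForeignKey(Prescription, on_delete=models.CASCADE, related_name="logs")
    date = models.DateField(auto_now_add=True)
    symptom = models.CharField(max_length=200)
    severity = models.PositiveSmallIntegerField()  # e.g. 1 (mild) to 5 (severe)

    def possible_side_effect(self):
        """True if the reported symptom appears in the drug's known side effects."""
        effects = [e.strip().lower() for e in self.prescription.known_side_effects.split(",")]
        return self.symptom.lower() in effects
```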
## Challenges we ran into
One significant challenge we ran into early on was deciding what tech to build our app with. We originally agreed on React Native since we were all familiar with React, but we learned that they're not that similar. This caused us to lose quite some time as we struggled to figure it out. After that, we decided to build a React + Django web app with a Flutter mobile app in parallel, with the ambitious goal of interconnecting them. However, time constraints and technical challenges didn't allow us to achieve our initial goals.
Additionally, our web team was challenged by the undertaking of linking the frontend and backend via API endpoints.
## Accomplishments that we're proud of
We're proud of a few things. First, we have a large codebase with a well-maintained structure, tech, and features. Second, we have a neat, streamlined, and user-friendly navigation system for both the web and mobile app. Third, we put together a decent design considering we are all primarily backend developers with minor experience in React or any other frontend tech. Lastly, we worked on two projects in parallel, even though linking them together didn't work out.
## What we learned
Teamwork and communication are a must. Hackathons are the grindiest grind out there (lol). Gradual development and constant updates via Git are the path to success. We made sure to work on separate branches and then merge them to avoid conflicts, as well as keeping a structured file system that promotes collaboration. And lastly, simplicity is key; trying too hard or aiming too high often won't end well.
## What's next for Althea
The next thing is implementing all the features that stayed on the whiteboard, as well as connecting the web and mobile apps into one ecosystem, allowing seamless access for a wide audience. We definitely want to make our UI/UX even better, but also streamlined and simple so any user, even the least tech-savvy, can use it. A cool and useful feature would be allowing users to securely scan their prescription label with their phone camera.
|
winning
|
## Video Demo
<https://youtu.be/_edJf_7ZcLk>
## Project GitHub
<https://github.com/AnselZeng/Teamote>
## Inspiration
As more and more classes are moving online due to the pandemic, it is very crucial to make sure that students are able to learn properly and teachers are able to teach as they did in classrooms.
## What it does
This web app provides real time analysis of the students’ expressions and reports the average emotion back to the teacher in order to provide an accurate representation of understanding and attention.
## How I built it
Teamote contains three major views: a home page, a student's view, and an instructor's view.
**Home page:**
The home page will have two options: student or instructor. Users must input the classroom code into the text box to enter the classroom.
**Student's View:**
Students will be provided with a special classroom code. They can enter this code into the box and click the button to join the classroom. They must turn on their video in order to participate
**Instructor's View:**
Upon entering the classroom code, the teacher is taken to the video page. The teacher's view will display an average percentage of all the students' emotion data. This will help the instructor recognize if the students are understanding the concepts properly or if a concept should be further clarified.
We built this project using **React**, **Django** and **Azure Cognitive Services**
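A simplified sketch of the per-frame emotion call is shown below; the endpoint and key are placeholders, the SDK names follow the azure-cognitiveservices-vision-face package, and note that Azure has since restricted access to emotion attributes, so treat this as illustrative:

```python
# Simplified sketch of the per-frame emotion call using the Azure Face SDK.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",   # placeholder endpoint
    CognitiveServicesCredentials("<face-api-key>"),            # placeholder key
)

def frame_emotions(frame_path):
    """Return the emotion scores (0..1) for the first face found in a video frame."""
    with open(frame_path, "rb") as stream:
        faces = face_client.face.detect_with_stream(
            stream, return_face_attributes=["emotion"]
        )
    if not faces:
        return None
    return faces[0].face_attributes.emotion.as_dict()  # happiness, neutral, surprise, ...

# Average one emotion across all student frames captured in the same interval.
scores = [s for s in (frame_emotions(p) for p in ["s1.jpg", "s2.jpg"]) if s]
class_average = sum(s["happiness"] for s in scores) / max(len(scores), 1)
print(class_average)
```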
## Challenges I ran into
One of the challenges was creating the backend of this project and implementing the different APIs and components to make the web app function correctly.
## What we learned
As a team we learned to work together to pitch different ideas and features to each other. We learned or refreshed our knowledge of various programming languages, APIs and frameworks.
## What's next for Teamote
With the increase in online learning due to the pandemic, many online tools that aid in remote learning are becoming more popular. Once our web app becomes more popular, we will introduce subscriptions for educational institutions in order to increase the longevity of our product and business.
|
## Inspiration
COVID-19 has drastically transformed education from in-person to online. While being more accessible, e-learning imposes challenges in terms of attention for both educators and students. Attention is key to any learning experience, and it could normally be assessed approximately by the instructor from the physical feedback of students. However, it is not feasible for instructors to assess the attention levels of students in a remote environment. Therefore, we aim to build a web app that could assess attention based on eye-tracking, body-gesture, and facial expression using the Microsoft Azure Face API.
## What it does
C.L.A.A.S takes the video recordings of students watching lectures (with explicit consent and ethics approval) and process them using Microsoft Azure Face API. Three features including eye-tracking, body posture, and facial expression with sub-metrics will be extracted from the output of the API and analyzed to determine the attention level of the student during specific periods of time. An attention average score will be assigned to each learner at different time intervals based on the evaluation of these three features, and the class attention average score will be calculated and displayed across time on our web app. The results would better inform instructors on sections of the lecture that gain attraction and lose attention in order for more innovative and engaging curriculum design.
## How we built it
1. The front end of the web app is developed using Python and the Microsoft Azure Face API. Video streaming decomposes the video into individual frames from which key features are extracted using the Microsoft Azure Face API.
2. The back end of the web app is also written in Python. Based on a literature review, we created an algorithm that assesses attention from three metrics (blink frequency, head position, leaning) drawn from two of the above-mentioned features (eye-tracking and body gesture); a rough sketch of this scoring appears below. Finally, we output the attention scores averaged across all students over time on our web app.
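The rule-based scoring could be sketched as follows; the thresholds and weights here are illustrative placeholders, not the values we derived from the literature:

```python
# Rough sketch of the rule-based attention score (thresholds and weights are illustrative).
def attention_score(blinks_per_min, head_yaw_deg, lean_forward):
    score = 0.0
    # Blink frequency: very frequent blinking is treated as a sign of fatigue.
    score += 1.0 if blinks_per_min <= 20 else 0.5 if blinks_per_min <= 35 else 0.0
    # Head position: looking roughly at the screen counts as attentive.
    score += 1.0 if abs(head_yaw_deg) <= 15 else 0.0
    # Leaning: leaning toward the screen is weighted as engagement.
    score += 1.0 if lean_forward else 0.0
    return score / 3.0  # normalized to [0, 1]

# Class average for one time interval.
students = [(18, 5, True), (40, 30, False), (25, 10, True)]
interval_avg = sum(attention_score(*s) for s in students) / len(students)
print(round(interval_avg, 2))
```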
## Challenges we ran into
1. Lack of online datasets and limitation on time prevents us from collecting our own data or using machine learning models to classify attention.
2. Insufficient literature to provide quantitative measure for the criteria of each metric.
3. Decomposing a video into frames of image on a web app.
4. Lag during data collection.
## Accomplishments that we're proud of
1. Relevance of the project for education
2. Successfully extracting features from video data using the Microsoft Azure Face API
3. Web design
## What we learned
1. Utilizing the Face API to obtain different facial data
2. Computer vision features that could be used to classify attention
## What's next for C.L.A.A.S.
1. Machine learning model after collection of accurate and labelled baseline data from a larger sample size.
2. Address the subjectiveness of the classification algorithm by considering more scenarios and doing more lit review
3. Test the validity of the algorithm with more students
4. Improve web design, functionalities
5. Address limitations of the program from UX standpoint, such as lower resolution camera, position of their webcam relative to their face
|
## Inspiration
Today we live in a world that is all online, with the pandemic forcing us to stay home. Because of this, our team and the people around us were forced to rely on video conference apps for school and work. Although these apps function well, there was always something missing, and we were faced with new problems we weren't used to facing. Personally, I would forget to mute my mic when going to the door to yell at my dog, accidentally disturbing the entire video conference. For others, it was a lack of accessibility tools that made the experience more difficult. Then for some, it was simply being scared of something embarrassing happening during class while it is being recorded, to be posted and seen on repeat! We knew something had to be done to fix these issues.
## What it does
Our app essentially takes over your webcam to give the user more control of what it does and when it does. The goal of the project is to add all the missing features that we wished were available during all our past video conferences.
Features:
Webcam:
1 - Detect when user is away
This feature will automatically blur the webcam feed when a User walks away from the computer to ensure the user's privacy
2- Detect when user is sleeping
We all fear falling asleep on a video call and being recorded by others, our app will detect if the user is sleeping and will automatically blur the webcam feed.
3- Only show registered user
Our app allows the user to train a simple AI face recognition model in order to only allow the webcam feed to show if they are present. This is ideal to prevent one's children from accidentally walking in front of the camera and putting on a show for all to see :)
4- Display Custom Unavailable Image
Rather than blur the frame, we give the option to choose a custom image to pass to the webcam feed when we want to block the camera
Audio:
1- Mute Microphone when video is off
This option allows users to additionally have the app mute their microphone when the app changes the video feed to block the camera.
Accessibility:
1- ASL Subtitle
Using another AI model, our app will translate your ASL into text allowing mute people another channel of communication
2- Audio Transcriber
This option will automatically transcribe all you say to your webcam feed for anyone to read.
Concentration Tracker:
1- Tracks the user's concentration level throughout their session, making them aware of the time they waste and giving them the chance to change their bad habits.
## How we built it
The core of our app was built with Python using OpenCV to manipulate the image feed. The AI's used to detect the different visual situations are a mix of haar\_cascades from OpenCV and deep learning models that we built on Google Colab using TensorFlow and Keras.
The UI of our app was created using Electron with React.js and TypeScript, using a variety of different libraries to help support our app. The two parts of the application communicate using WebSockets from socket.io as well as synchronized Python threads.
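A minimal sketch of the "user away" feature is shown below; it uses the stock OpenCV Haar cascade and a local preview window, whereas the real app also routes the processed frames on to the video conference feed:

```python
# Minimal sketch: blur the webcam feed whenever no face is detected (user away).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Nobody in front of the camera: blur the whole feed for privacy.
        frame = cv2.GaussianBlur(frame, (51, 51), 0)
    cv2.imshow("Boom preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```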
## Challenges we ran into
Damn, where to start haha...
Firstly, Python is not a language any of us are too familiar with, so from the start, we knew we had a challenge ahead. Our first main problem was figuring out how to hijack the webcam video feed and pass it on to be used by any video conference app, rather than making our app for a specific one.
The next challenge we faced was mainly figuring out a method of communication between our front end and our python. With none of us having too much experience in either Electron or in Python, we might have spent a bit too much time on Stack Overflow, but in the end, we figured out how to leverage socket.io to allow for continuous communication between the two apps.
Another major challenge was making the core features of our application communicate with each other. Since the major parts (speech-to-text, camera feed, camera processing, socket.io, etc) were mainly running on blocking threads, we had to figure out how to properly do multi-threading in an environment we weren't familiar with. This caused a lot of issues during the development, but we ended up having a pretty good understanding near the end and got everything working together.
## Accomplishments that we're proud of
Our team is really proud of the product we have made and have already begun proudly showing it to all of our friends!
Considering we all have an intense passion for AI, we are super proud of our project from a technical standpoint, finally getting the chance to work with it. Overall, we are extremely proud of our product and genuinely plan to optimize it further so we can use it within our courses and work conferences, as it is really a tool we need in our everyday lives.
## What we learned
From a technical point of view, our team has learnt an incredible amount over the past few days. Each of us tackled problems using technologies we had never used before that we can now proudly say we understand how to use. For me, Jonathan, I mainly learnt how to work with OpenCV, following a 4-hour long tutorial to learn the inner workings of the library and how to apply it to our project. For Quan, it was mainly creating a structure that would allow our Electron app and Python program to communicate without killing the performance. Finally, Zhi worked for the first time with the Google API in order to get our speech-to-text working; he also learned a lot of Python and about multi-threading in Python to set everything up together. Together, we all had to learn the basics of AI in order to implement the various models used within our application and to finally attempt (not a perfect model by any means) to create one ourselves.
## What's next for Boom. The Meeting Enhancer
This hackathon is only the start for Boom as our team is exploding with ideas!!! We have a few ideas on where to bring the project next. Firstly, we want to finish polishing the existing features in the app. Then we would love to make a marketplace that allows people to choose from any kind of trained AI to determine when to block the webcam feed. This would allow for limitless creativity from us and anyone who would want to contribute!!!!
|
losing
|
## Inspiration
Ever since ChatGPT came out, my most frequent use for it *by far* was to help me study for my courses. Over the past two years, I've gotten quite good at using it in a way that is most helpful for making content as digestible and easily understandable as possible for myself. Our goal was not only to streamline this for students already quite experienced at "prompt engineering" to effectively study, but also for students unfamiliar with this technology to take advantage of it.
We believe that increasing the rate of learning for students by even a small amount results in compounding effects that are incredibly significant.
## What it does
Our project helps you study by making bullet points, flash cards, and quizzes from study materials like the course textbook, slides, and notes.
The user drops the various study materials they have. This may include presentation slides from class, a textbook, notes, documents, or all of the above! Once the user has submitted this, the user is greeted with a screen with bullet points summarizing the material. In addition to this, there are flashcards and quizzes that are available to the user to review and practice their material.
## How we built it
* Frontend: React and Tailwind
* Backend: Express and JavaScript
* Database: Supabase
* OpenAI API
* SerpAPI
## Challenges we ran into
Text extraction from multiple files and smartly prompting the model was tricky, but we did it well.
We used users' wrong answers to multiple-choice questions to identify weak areas and propose resources from the internet.
Speed was a key challenge we tackled: API calls to models with large token counts are costly, so we had to come up with clever ways to reduce the time cost on the user's end, producing a smooth experience.
## Accomplishments that we're proud of
We successfully made an app that is user-facing and aims to make the user experience as frictionless as possible.
## What we learned
You can just do things. Conceiving an idea and bringing its first iteration to reality is both fun and very doable.
## What's next for Peachy Prep
We plan on adding memory to allow Peachy Prep to review all past study sessions with it so that it can help review for midterms and finals based on how the student did in each study session.
|
## Inspiration
Coming from South Texas, two of the team members saw ESL (English as a Second Language) students being denied a proper education. Our team created a tool to break down language barriers that traditionally perpetuate socioeconomic cycles of poverty by providing detailed explanations of word problems using ChatGPT. Traditionally, people from this group would not have access to tutoring or 1-on-1 support, and this website is meant to rectify this glaring issue.
## What it does
The website takes in a photo as input, and it uses optical character recognition to get the text from the problem. Then, it uses ChatGPT to generate a step-by-step explanation for each problem, and this output is tailored to the grade level and language of the student, enabling students from various backgrounds to get assistance they are often denied.
## How we built it
We coded the backend in Python with two parts: OCR and ChatGPT API implementation. Moreover, we considered the multiple parameters, such as grade and language, that we could implement in our code and eventually query ChatGPT with to make the result as helpful as possible. On the other side of the stack, we coded it in React with TypeScript to be as simple and intuitive as possible. It has two sections that clearly show what it is outputting and what ChatGPT has generated to assist the student.
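A minimal sketch of that backend flow, assuming pytesseract for the OCR step and the standard OpenAI Python client; the prompt wording, model choice, and function names are illustrative rather than the production code.

```python
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def explain_problem(image_path: str, grade: str, language: str) -> str:
    # Step 1: pull the word problem out of the photo (OCR).
    problem_text = pytesseract.image_to_string(Image.open(image_path))

    # Step 2: ask the model for a step-by-step explanation tailored to the student.
    prompt = (
        f"Explain this problem step by step for a grade {grade} student, "
        f"written in {language}:\n\n{problem_text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: explain_problem("worksheet.jpg", grade="5", language="Spanish")
```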
## Challenges we ran into
During the development of our product, we often struggled with deciding the optimal way to apply different APIs and learning how to implement them; many of these, such as the IBM API, we ended up not using or changing how we applied them. Through this process, we had to change our high-level plan for the backend functions and consequently reimplement our frontend user interface to fit the operations. This also provided the compounding challenge of having to re-establish and discuss new ideas while communicating as a team.
## Accomplishments that we're proud of
We are proud of the website layout. Personally the team is very fond of the color and the arrangement of the site’s elements. Another thing that we are proud of is just that we have something working, albeit jankily. This was our first hackathon, so we were proud to be able to contribute to the hackathon in some form.
## What we learned
One invaluable skill we developed through this project was learning more about the unique plethora of APIs available and how we can integrate and combine them to create new revolutionary products that can help people in everyday life. We not only developed our technical skills, including git familiarity and web development, but we also developed our ability to communicate our ideas as a team and gain the confidence and creativity to create and carry out an idea from thought to production.
## What's next for Homework Helper
As part of our mission to increase education accessibility and combat common socioeconomic barriers, we hope to use Homework Helper to not only translate and minimize the language barrier, but to also help those with visual and auditory disabilities. Some functions we hope to implement include having text-to-speech and speech-to-text features, and producing video solutions along with text answers.
|
## 💡 Inspiration
You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends that can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind “Will I ever pass this course?”
If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle with paying attention in class, missing class, have a rough home environment, or just want to get ahead in their studies.
We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit.
## 🔍 What it does
First, our AI-powered summarization engine creates a set of live notes based on the current lecture.
Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos!
Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind.
## ⭐ Feature List
* Dashboard with all your notes
* Summarizes your lectures automatically
* Select/Highlight text from your online lectures
* Organize your notes with intuitive UI
* Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime
* Text simplification, definitions, and synonyms anywhere on the web
* DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation (a rough sketch of the summarization step follows this list).
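For reference, here is a rough open-source stand-in for the summarization step, using Hugging Face's BART rather than the NLP Cloud and DCP setup the app actually relies on; the chunk size and generation parameters are assumptions.

```python
from transformers import pipeline

# Rough open-source stand-in for the summarization step (the app itself calls
# NLP Cloud and distributes work over DCP); model choice and chunk size are
# illustrative assumptions.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_lecture(transcript: str, chunk_chars: int = 3000) -> list[str]:
    # BART has a limited input window, so split the transcript into chunks
    # and summarize each one into a "live note".
    chunks = [transcript[i:i + chunk_chars] for i in range(0, len(transcript), chunk_chars)]
    notes = []
    for chunk in chunks:
        result = summarizer(chunk, max_length=120, min_length=30, do_sample=False)
        notes.append(result[0]["summary_text"])
    return notes
```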
## ⚙️ Our Tech Stack
* Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API
* Web Application: Chakra UI + React.js, Next.js, Vercel
* Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js
* Infrastructure: Firebase/Firestore
## 🚧 Challenges we ran into
* Completing our project within the time constraint
* There were many APIs to integrate, which made us spend a lot of time debugging
* Working with the Google Chrome Extension API, which we had never worked with before.
## ✔️ Accomplishments that we're proud of
* Learning how to work with Google Chrome Extensions, which was an entirely new concept for us.
* Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use.
## 📚 What we learned
* The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out!
* Working on a project where you can relate helps a lot with motivation
* Chakra UI is legendary and a lifesaver
* The Chrome Extension API is very difficult, did we mention that already?
## 🔭 What's next for AcadeME?
* Implementing a language translation toggle to help international students
* Note Encryption
* Note Sharing Links
* A Distributive Quiz mode, for online users!
|
losing
|
## Inspiration
Every time I talk to someone about board games, a few games always slip my mind because there are just so many good games to keep track of! If only there was a convenient location where I could access all of the board games that I own or am interested in.
## What it does
Boardhoard allows you to access your library of games from anywhere! It will conveniently give you the ability to search for board games and you will be able to add them to your library so that you can access them with ease whenever you want!
## How we built it
We leveraged the versatility of React to create a beautiful UI and the depth of information from BoardGameGeek's API to provide us with the necessary information to display the games. We used Charles and Postman to generate queries and used Java and HTTP libraries to fetch sample data to test our implementation.
## Challenges we ran into
BoardGameGeek's (BGG) API returns XML responses, which are not ideal. We found an alternative server that converted the responses to JSON, which we then used to populate our app. Another challenge involved fetching a complete catalogue of all games on BGG; it simply could not be done. We had to come up with workarounds to fetch large amounts of data. We had trouble implementing individual user databases.
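To illustrate the XML headache, here is a small Python sketch of querying BGG and flattening the response into JSON-friendly dictionaries (the app itself uses React and Java; the endpoint and fields shown are assumptions based on BGG's public XML API).

```python
import requests
import xml.etree.ElementTree as ET

# Illustrative sketch of turning BGG's XML search results into plain dicts
# so nothing XML-shaped leaks into the frontend. Endpoint and fields are
# assumptions based on BGG's public XML API.
def search_games(query: str) -> list[dict]:
    resp = requests.get(
        "https://boardgamegeek.com/xmlapi2/search",
        params={"query": query, "type": "boardgame"},
        timeout=10,
    )
    root = ET.fromstring(resp.text)
    games = []
    for item in root.findall("item"):
        name = item.find("name")
        games.append({
            "id": item.get("id"),
            "name": name.get("value") if name is not None else "unknown",
        })
    return games
```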
## Accomplishments that we're proud of
It worked! It was a great accomplishment that we were able to maintain code quality and styling throughout the project.
## What we learned
We learned about the importance of setting proper headers and authorization on POST requests, and how to persevere and make something work when faced with a limited set of APIs.
## What's next for Boardhoard
Add the ability to share your library with other people. Include more metadata in each game's details.
|
## Inspiration
Every Thursday night when my friend group meets up, we always spend at least 10 minutes or more debating which board game to play for the night. Because we have a varying number of people in the room each time and we don't know which board games we own and how many players each game can support, we always waste time on this.
## What it does
It lets me enter the board games I own and the maximum number of players each can support, and an accompanying command-line tool lets me enter the number of players I have and randomly suggests a game for me.
## How I built it
I used Google Cloud Firestore as the database and backend, a pure JS web app for the data-entry tool, and a Python tool that reads data from the Firestore database and randomizes the results.
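A minimal sketch of the command-line side, assuming a "games" collection with "name" and "max_players" fields; the actual schema may differ.

```python
import argparse
import random
from google.cloud import firestore

# Minimal sketch of the CLI tool; the "games" collection name and the
# "name"/"max_players" fields are assumptions for illustration.
def suggest_game(player_count: int) -> str:
    db = firestore.Client()
    candidates = []
    for doc in db.collection("games").stream():
        game = doc.to_dict()
        if game.get("max_players", 0) >= player_count:
            candidates.append(game["name"])
    if not candidates:
        return "No game in your library supports that many players."
    return random.choice(candidates)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Suggest a board game for tonight.")
    parser.add_argument("players", type=int, help="number of players present")
    args = parser.parse_args()
    print(suggest_game(args.players))
```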
## Challenges I ran into
This was the first time I used Cloud Firestore, so understanding the data modeling was a bit of a challenge, but the docs were pretty helpful. I also didn't know how to take in CLI arguments before, but that was solved fairly quickly.
## Accomplishments that I'm proud of
I am proud that I learned how to use a tool by attending the workshop and then used it to solve an actual problem that I have.
## What I learned
I learned Firestore modeling, NoSQL data storage, and writing queries. I also learned how to use different languages within a single project (I have not quite done that before).
## What's next for Pick Board Game
I would like to implement Firebase Authentication for the webapp so that my friends can log in with their emails and enter the games they own as well so there is some form of verification for our database.
|
## Inspiration
GeoGuessr is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations in addition to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers!
## What it does
The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then shown a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie from which we selected the location. You get points based on how close you are to the location and whether you got the bit of trivia correct.
## How we built it
We used the *discord.py* library for actually coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs.
## Challenges we ran into
For initially storing the data, we thought to use a playlist class while storing the playlist data as an array of playlist objects, but instead used excel for easier storage and updating. We also had some problems with the Google Maps Static Streetview API in the beginning, but they were mostly syntax and understanding issues which were overcome soon.
## Accomplishments that we're proud of
Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine Formula for Distances on Spheres was also an accomplishment we're proud of.
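For reference, a small sketch of the Haversine-based scoring idea; the Earth-radius constant is standard, but the point values and decay below are illustrative rather than the bot's actual tuning.

```python
from math import radians, sin, cos, asin, sqrt

# Haversine great-circle distance between two (lat, lon) points, in km.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def location_points(guess, answer, max_points=5000, scale_km=2000):
    # Full marks for a perfect guess, decaying linearly as the guess gets
    # farther away; constants here are illustrative, not the bot's tuning.
    distance = haversine_km(*guess, *answer)
    return max(0, round(max_points * (1 - distance / scale_km)))
```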
## What we learned
We learned better syntax and practices for writing Python code. We learnt how to use the Google Cloud Platform and Streetview API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about Human Computer Interaction as designing an interface for gameplay was rather interesting on Discord.
## What's next for Geodude?
Possibly adding more topics, and refining the loading of streetview images to better reflect the actual location.
|
losing
|
Bet for Bit is a premium, high-security Bitcoin betting service for dedicated sports fans.
It is built with Python, Django, and the Coinbase API.
Our platform scrapes live sports stats, and allows users to place bitcoin bets on their sports teams.
|
## Inspiration
Prolonged COVID restrictions have caused immense damage to the economy and local markets alike. Shifts in this economic landscape have led to many individuals seeking alternate sources of income to account for the losses imparted by lack of work or general opportunity. One major sector that has seen a boom, despite local market downturns, is investment in the stock market. While stock market trends, at first glance, seem to be logical and fluid, they're in fact the opposite. Beat earnings expectations? New products on the market? *It doesn't matter!*, because at the end of the day, a stock's value is inflated by speculation and **hype**. Many see the allure of rapidly increasing ticker charts and booming social media trends, and hear the talk of the town saying how someone made millions in a matter of a day *cough* **GameStop** *cough*, but more often than not, individual investors lose money when market trends spiral. It is *nearly* impossible to time the market. Our team sees these challenges and wanted to create a platform which can account for social media trends that may be indicative of early market changes, so that small-time investors can make smart decisions ahead of the curve.
## What it does
McTavish St. Bets is a platform that aims to help small-time investors gain insight on when to buy, sell, or hold a particular stock on the DOW 30 index. The platform uses the recent history of stock data along with tweets from the same time period in order to estimate the future value of the stock. We assume there is a correlation between tweet sentiment towards a company and its future valuation.
## How we built it
The platform was built using a client-server architecture and is hosted on a remote computer made available to the team. The front-end was developed using React.js and Bootstrap for quick and efficient styling, while the backend was written in Python with Flask. The dataset was constructed by the team using a mix of tweets and article headers. The public Twitter API was used to scrape tweets according to popularity, and they were ranked against one another using an engagement scoring function. Tweets were processed using a natural language processing module with BERT embeddings that was trained for sentiment analysis. Time series prediction was accomplished through the use of a neural stochastic differential equation which incorporated text information as well. In order to incorporate this text data, the latent representations were combined based on the aforementioned scoring function. This representation is then fed directly to the network for each timepoint in the series estimation in an attempt to guide model predictions.
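A minimal sketch of the engagement-weighted averaging step, assuming per-tweet BERT embeddings are already available; the log-scaled scoring function here is an assumption, not the exact weighting used.

```python
import numpy as np

# Sketch of engagement-weighted averaging: the embeddings would come from the
# BERT sentiment module, and the log-scaled score below is an illustrative
# assumption rather than the platform's exact weighting function.
def engagement_score(likes: int, retweets: int) -> float:
    return np.log1p(likes + 2 * retweets)

def daily_tweet_representation(embeddings: np.ndarray, likes, retweets) -> np.ndarray:
    """embeddings: (n_tweets, dim) array of per-tweet BERT representations."""
    weights = np.array([engagement_score(l, r) for l, r in zip(likes, retweets)])
    weights = weights / weights.sum()
    # The weighted average becomes the single text vector fed to the neural SDE
    # at that timepoint.
    return weights @ embeddings
```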
## Challenges we ran into
Obtaining data to train the neural SDE proved difficult. The free Twitter API only provides high engagement tweets for the last seven days. Obtaining older tweets requires an enterprise account costing thousands of dollars per month. Unfortunately, we didn’t feel that we had the data to train an end-to-end model to learn a single representation for each day’s tweets. Instead, we use a weighted average tweet representation, weighing each tweet by its importance computed as a function of its retweets and likes. This lack of data extends to the validation side too, with us only able to validate our model’s buy/sell/hold prediction on this Friday's stock price.
Finally, without more historical data, we can only model the characteristics of the market this week, which has been fairly uncharacteristic of normal market conditions. Adding additional data for the trajectory modeling would have been invaluable.
## Accomplishments that we're proud of
* We used several API to put together a dataset, trained a model, and deployed it within a web application.
* We put together several animations introduced in the latest CSS revision.
* We commissioned a McGill-themed banner in keeping with the /r/wallstreetbets culture. Credit to Jillian Cardinell for the help!
* Some jank nlp
## What we learned
We learned to use several new APIs, including Twitter's, as well as web scrapers.
## What's next for McTavish St. Bets
Obtaining much more historical data by building up a dataset over several months (using Twitter's 7-day API). We would have also liked to scale the framework to be reinforcement-learning based, which is data hungry.
|
## Inspiration
The anonymity of cryptocurrencies is a blessing and a curse. One of Bitcoin's major shortcomings is its usefulness in criminal activity. Money laundering, drug sales, and other illegal activities use Bitcoin to evade law enforcement. This makes it hard for legitimate people and businesses to avoid dealing with criminals and their dirty (aka tainted) currency.
## What it does
BitTrace analyzes a Bitcoin address's involvement in potentially illegitimate transactions. The concept of marking an entity as 'tainted' has existed for a long time in many cryptocurrencies, but there is no easy way to track the flow of tainted money. When given a Bitcoin address, BitTrace painstakingly scans its transaction history, looking for dealings with tainted addresses. It then scores the target address based on its previous dealings. This allows the community to quickly build a crowdsourced map of bad actors.
## How I built it
We built our app in Expo and React Native. It connects to a backend written in NodeJS and Express, which calculates the taint level of a given address. It does this using BlockTrail and BlockCypher. We also maintain a separate MongoDB database of taint values in MLab (hosted in Google Cloud Platform), which allows us to record more detailed data on each address. Our server itself is run on Azure, which we chose for its speed, reliability, and ease of use.
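As a conceptual sketch of the scoring idea (the real service is Node/Express and pulls history from BlockCypher/BlockTrail), taint can be propagated backwards through an address's inputs with a depth limit and decay; the data structures and constants below are illustrative.

```python
# Conceptual Python sketch of taint propagation. `inputs_of` maps an address
# to the addresses that have sent it funds, and `known_tainted` is the
# crowdsourced set of flagged addresses -- both are illustrative assumptions,
# not the actual BlockCypher/BlockTrail data model.
def taint_score(address, inputs_of, known_tainted, depth=3, decay=0.5):
    if address in known_tainted:
        return 1.0
    if depth == 0:
        return 0.0
    senders = inputs_of.get(address, [])
    if not senders:
        return 0.0
    # An address inherits a decayed fraction of its worst sender's taint.
    return decay * max(taint_score(s, inputs_of, known_tainted, depth - 1, decay)
                       for s in senders)
```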
## Challenges I ran into
We were very inexperienced with React Native and even more inexperienced with Expo. We had no idea how to accomplish simple effects like gradient buttons. Luckily, with the help of Expo's excellent documentation, we were able to overcome most issues.
## Accomplishments that I'm proud of
We have a very clean UI. We're proud of the cross-platform compatibility, speed, and reliability we were able to achieve with Expo/React Native. Our algorithm, which is highly optimized for a large network of transactions, is stable and fast.
## What I learned
We learned a huge amount about using React Native and Expo. We also learned much about analyzing transaction records on the blockchain.
## What's next for BitTrace
We want to add support for more currencies, like Ether and LiteCoin. We also want to show a more detailed analysis of an address's transactions (we already generate this data, but do not show it in the app).
|
winning
|
## Inspiration
Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book.
Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars!
Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible.
## What it does
Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud.
## How we built it
**Software (Alex K)** -
On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS.
The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words.
Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation.
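A rough illustration of the unigram filtering idea, using edit-distance matching against a word-frequency table; the cutoff and tie-breaking below are assumptions rather than the exact implementation.

```python
from difflib import get_close_matches

# Rough illustration of unigram filtering: snap each OCR word to the most
# frequent close dictionary match. `word_freq` would be built from a large
# English corpus; cutoff and tie-breaking are illustrative assumptions.
def correct_word(ocr_word: str, word_freq: dict[str, int]) -> str:
    if ocr_word.lower() in word_freq:
        return ocr_word
    candidates = get_close_matches(ocr_word.lower(), word_freq.keys(), n=5, cutoff=0.8)
    if not candidates:
        return ocr_word  # leave unknown tokens (names, numbers) untouched
    # Among visually similar words, prefer the one the unigram model says is likeliest.
    return max(candidates, key=lambda w: word_freq[w])

# e.g. correct_word("Intreduction", word_freq) -> "introduction"
```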
**Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses.
## Challenges we ran into
The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging!
Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time.
Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length.
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together.
## Acknowledgements
The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
|
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized brail menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life.
Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with XCode. We use Apple's native vision and speech API's to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways:
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc.
* To run Optical Character Recognition on text in the real world which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world which the user can then query about.
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable in. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys proved to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put together app.
Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack.
Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis.
Zak learned about building a native iOS app that communicates with a data-rich APIs.
We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service.
Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app.
Ultimately, we plan to host the back-end on Google App Engine.
|
We started off wanting to make a program that makes inputting equations into online platforms easier, but quickly realized that our text-based idea had much more potential than that. Now it has become a tool that both the visually impaired and children can use to better their lives and combat illiteracy at any age. It converts video to text and then to audio so you can read almost anything on the spot!
We built it using OpenCV to analyze and filter an image. Then we ran Tesseract on that image to convert all the words to text. After that, we used Google's text-to-speech API to convert it to speech and read the words out loud.
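A minimal sketch of that pipeline, using gTTS as a stand-in for Google's text-to-speech and Otsu thresholding as the OpenCV filter; the preprocessing shown is illustrative rather than the exact filtering used.

```python
import cv2
import pytesseract
from gtts import gTTS

# Minimal capture -> filter -> OCR -> speech sketch. gTTS stands in for
# Google's text-to-speech, and the Otsu threshold is an illustrative filter.
def read_frame_aloud(frame, out_path="page.mp3") -> str:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, clean = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(clean)
    if text.strip():
        gTTS(text).save(out_path)  # play the mp3 with any audio player
    return text

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(read_frame_aloud(frame))
cap.release()
```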
Right now we have a rough algorithm for comparing word similarity to sense if a page has been flipped or an image has shifted, so we can save time. We also have a rough way to sense a finger and find a word near it, but not well enough to isolate a single word, and it is definitely a future project for us.
|
winning
|
## Inspiration
Fresh fruit, vegetables, nuts, and legumes don’t play the **central role** they need to in our food system. The result is a **chronic health epidemic**, disastrous impacts on the **climate**, and **unequal access** to food. We wanted to find a tangible solution to this very real and consequential problem.
## What it does
Cropscape analyzes location-specific **soil**, **climate**, and **hardiness** data to provide personalized crop recommendations for a thriving backyard garden. The user inputs a location, and Cropscape uses it to recommend crops suited to those local conditions.
## How we built it
We built our project using React, Express, Node.js, and MySQL.
## Challenges we ran into
ChatGPT required tokens that cost money. We had difficulties implementing React on the frontend, so we had to opt for vanilla HTML/CSS.
## Accomplishments that we're proud of
We are proud of our ability to leverage several APIs to create a centralized source of gardening-relevant information.
## What's next for Cropscape
Gaining access to more robust databases of soil information, implementing a better soil analysis system.
|
## Inspiration
Agriculture is the backbone of our society. Not only providing a supply of food, agriculture is responsible for the production of raw materials such as textiles, sugar, coffee, cocoa, and oils. For many, agriculture is not only an occupation but it is a way of life. This is especially true for those farmers in developing regions around the world. Without having access to smart agriculture technology and mass amounts of weather data, these farmers face challenges such as diminishing crop yields due to inconsistent weather patterns.
Due to climate change in recent years, large-scale developed farms have experienced a decline of nearly 17% in crop yield despite having massive amounts of resources and support to back up their losses. For those in developing countries, these responses cannot be replicated, and farmers are left with insufficient harvests after a season of growing. The widespread impact of agriculture on communities in developing countries led to the creation of Peak PerFARMance - a data-driven tool designed to provide farmers with the information necessary to make informed decisions about crop production.
## What it does
Peak PerFARMance is a holistic platform providing real-time hardware data and historical weather indexes to allow small-scale farmers to grow crops efficiently and tackle the challenges introduced by climate change. Our platform features a unique map that allows users to draw a polygon of any shape on the map in order to retrieve in-depth data about the geographical area, such as normalized difference vegetation index, UV index, temperature, humidity, pressure, cloud coverage, and wind conditions. The platform not only retrieves real-time data but also provides historical data and compares it with current conditions, allowing farmers to get a holistic understanding of weather trends and make more informed decisions about the crops they grow. In addition, our platform combines data retrieved from real-time hardware beacons composed of environmental sensors to provide further information about the conditions on the farm.
## How we built it
The backend of the project was built in Go. We created APIs for creating polygons, retrieving polygons, and retrieving specific data for the polygon. We integrated the backend with external APIs such as Agro APIs and Open Weather APIs. This allowed us to retrieve data for specific date ranges. The frontend would make API requests to the backend in order to display the data in the frontend dashboards.
The frontend of the project was built using ReactJS. This is the user entry point of the project and it is where the user draws the polygon to retrieve the data. The hardware sensor data is retrieved by the frontend from Google Firebase, where it is then processed and displayed. We integrated the frontend with several styling libraries such as ApexCharts.js and Bootstrap in order to create the UI for the website.
The hardware for the project was built on top of the Arduino platform, and the data was communicated over serial to a host computer where we read it using a Python script and uploaded it to a Google Firebase Realtime Database.
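A minimal sketch of that host-side script, assuming the Arduino prints one JSON object per line; the serial port, credential path, database URL, and field names are illustrative.

```python
import json
import serial
import firebase_admin
from firebase_admin import credentials, db

# Sketch of the host-side bridge: read sensor lines from the Arduino over
# serial and push them to the Realtime Database. Port, credential path, and
# the JSON-per-line assumption are illustrative, not the exact setup.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://example-project.firebaseio.com"})
readings = db.reference("beacon_readings")

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
while True:
    line = ser.readline().decode("utf-8", errors="ignore").strip()
    if not line:
        continue
    try:
        readings.push(json.loads(line))  # e.g. {"temp": 21.3, "humidity": 44}
    except json.JSONDecodeError:
        continue  # skip partial or malformed lines
```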
## Challenges we ran into
A challenge we faced when building the backend was working with Go. None of our team members had any prior knowledge of using Go to create web applications. We wanted to learn a new language during this hackathon, which was challenging but rewarding. An issue we ran into was converting the data between JSON and structs in Go. Furthermore, the APIs had a limited number of past dates that we could get data for.
## Accomplishments that we're proud of
For the backend, an accomplishment we’re proud of was how we were able to learn using a new language for creating multiple APIs and also successfully integrating it to the rest of the project. While there were multiple hours spent debugging, we’re proud of how our team members collaborated in troubleshooting bugs and issues together.
## What we learned
Having had no experience with Go prior to this weekend, we were able to learn the language and how to leverage it in creating web applications.
From the frontend perspective, I had not worked with SASS before. I was also able to leverage this new styling format to create more effective stylesheets for our React app.
Also, for several of our team members, this was their first online hackathon so we spent a lot of time learning how to create an effective virtual pitch and to make a video for our virtual submission.
## What's next for Peak PerFARMance
With Peak PerFARMance, we were able to combine a ton of data from numerous sources into a simple-to-use dashboard. In the future, we would like to use machine learning to extract even more insight out of this wealth of data and provide tailored suggestions to farmers.
For this project, we wanted to incorporate LoRaWAN (long range low-power wide-area network) technology to connect many of the IoT sensors over vast distances --- a feat which would be impossible and/or expensive with traditional Wi-Fi and cellular technologies. Unfortunately, the hardware components for this did not arrive in time for this hackathon. We are really excited for it to arrive --- we believe that Peak PerFARMance is the perfect project to show off this technology.
|
## Overview
Crop diseases pose a significant threat to global food security, especially in regions lacking proper infrastructure for rapid disease identification. To address this challenge, we present a web application that leverages the widespread adoption of smartphones and cutting-edge transfer learning models. Our solution aims to streamline the process of crop disease diagnosis, providing users with insights into disease types, suitable treatments, and preventive measures.
## Key Features
* **Disease Detection:** Our web app employs advanced transfer learning models to accurately identify the type of disease affecting plants. Users can upload images of afflicted plants for real-time diagnosis.
* **Treatment Recommendations:** Beyond disease identification, the app provides actionable insights by recommending suitable treatments for the detected diseases. This feature aids farmers and agricultural practitioners in promptly addressing plant health issues.
* **Prevention Suggestions:** The application doesn't stop at diagnosis; it also offers preventive measures to curb the spread of diseases. Users receive valuable suggestions on maintaining plant health and preventing future infections.
* **Generative AI Interaction:** To enhance user experience, we've integrated generative AI capabilities for handling additional questions users may have about their plants. This interactive feature provides users with insightful information and guidance.
## How it Works ?
* **Image Upload:** Users upload images of plant specimens showing signs of disease through the web interface.
* **Transfer Learning Model:** The uploaded images undergo real-time analysis using an advanced transfer learning model, enabling the accurate identification of diseases with the help of the PlantID API (a rough sketch of the idea follows this list).
* **Treatment and Prevention Recommendations:** Once the disease is identified, the web app provides detailed information on suitable treatments and preventive measures, empowering users with actionable insights.
* **Generative AI Interaction:** Users can engage with generative AI to seek additional information, ask questions, or gain knowledge about plant care beyond disease diagnosis.
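As a generic sketch of the transfer-learning idea behind the classifier (the deployed app also leans on the PlantID API), a pretrained backbone can be frozen and a small classification head trained on labelled leaf images; the base model, image size, and class count below are assumptions.

```python
import tensorflow as tf

# Generic transfer-learning sketch: freeze a pretrained backbone and train a
# small head on labelled leaf images. Base model, image size, and class count
# are illustrative assumptions, not the deployed model.
NUM_CLASSES, IMG_SIZE = 38, (224, 224)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the pretrained features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10) with a labelled leaf-image dataset
```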
|
winning
|
## Inspiration
Business cards haven't changed in years, but cARd can change this! Inspired by the rise of augmented reality applications, we see potential for creative networking. Next time you meet someone at a conference, a career fair, etc., simply scan their business card with your phone and watch their entire online portfolio enter the world! The business card will be saved, and the experience will be unforgettable.
## What it does
cARd is an iOS application that allows a user to scan any business card to bring augmented reality content into the world. Using OpenCV for image rectification and OCR (optical character recognition) with the Google Vision API, we can extract both the business card and the text on it. Feeding the extracted image back to the iOS app, ARKit can effectively track our "target" image. Furthermore, we use the OCR result to grab information about the business card owner in real time! Using Selenium, we effectively gather information from Google and LinkedIn about the individual. When returned to the iOS app, the user is presented with information populated around the business card with augmented reality!
## How I built it
Some of the core technologies that go into this project include the following:
* ARKit for augmented reality in iOS
* Flask for the backend server
* selenium for collecting data about the business card owner on the web in real-time
* OpenCV to find the rectangular business card in the image and use a homography to map it into a rectangle for AR tracking
* Google Vision API for optical character recognition (OCR)
* Text to speech
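A minimal Python sketch of the rectification step mentioned above: find the largest four-cornered contour and warp it flat with a perspective transform. The output size and edge-detection thresholds are illustrative.

```python
import cv2
import numpy as np

def order_corners(pts):
    # Order corners top-left, top-right, bottom-right, bottom-left.
    s, d = pts.sum(axis=1), np.diff(pts, axis=1).ravel()
    return np.float32([pts[s.argmin()], pts[d.argmin()], pts[s.argmax()], pts[d.argmax()]])

# Sketch of card rectification: assume the largest quadrilateral contour is
# the card and warp it into a flat rectangle for OCR and AR tracking.
def rectify_card(image, out_w=600, out_h=350):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            src = order_corners(approx.reshape(4, 2).astype(np.float32))
            dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
            H = cv2.getPerspectiveTransform(src, dst)
            return cv2.warpPerspective(image, H, (out_w, out_h))
    return None
```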
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for cARd
Get cARd on the app store for everyone to use! Stay organized and have fun while networking!
|
💡
## Inspiration
49 percent of women reported feeling unsafe walking alone after nightfall according to the Office for National Statistics (ONS). In light of recent sexual assault and harassment incidents in the London, Ontario and Western community, women now feel unsafe travelling alone more than ever.
Light My Way helps women navigate their travel through the safest and most well-lit path. Women should feel safe walking home from school, going out to exercise, or going to new locations, and taking routes with well-lit areas is an important precaution to ensure safe travel. It is essential to always be aware of your surroundings and take safety precautions no matter where and when you walk alone.
🔎
## What it does
Light My Way visualizes data on London, Ontario's street lighting and recent nearby crimes in order to calculate the safest path for the user to take. Upon opening the app, the user can access "Maps" and search up their destination or drop a pin on a location. The app displays the safest route available and prompts the user to "Send Location", which sends the path that the user is taking to three contacts via messages. The user can then click on the Google Maps button in the lower corner, which switches over to the Google Maps app to navigate the given path. In the "Alarm" tab, the user has access to emergency alert sounds that they can use when in danger; upon clicking, the sounds play at a loud volume to alert nearby people that help is needed.
🔨
## How we built it
React, JavaScript, and Android Studio were used to make the app. React Native Maps and Directions were also used to allow user navigation through Google Cloud APIs. GeoJSON files of street lighting data were imported from the open data website for the City of London to visualize street lights on the map. Figma was used for designing the UX/UI.
🥇
## Challenges we ran into
We ran into a lot of trouble visualizing the large amount of GeoJSON street light data that we exported. We overcame that by learning about useful mapping functions in React that made marking the locations easier.
⚠️
## Accomplishments that we're proud of
We are proud of making an app that can potentially help women be safer when walking alone. It is our first time using and learning React, as well as using Google Maps, so we are proud of our unique implementation of the app using real data from the City of London. It was also our first time doing UX/UI on Figma, and we are pleased with the results and visuals of our project.
🧠
## What we learned
We learned how to use React, how to implement Google Cloud APIs, and how to import GeoJSON files into our data visualization. Through our research, we also became more aware of the issue that women face daily of feeling unsafe walking alone.
💭
## What's next for Light My Way
We hope to expand the app to include more data on crimes, as well as expand to cities surrounding London. We want to continue developing additional safety features in the app, as well as a chatting feature with the close contacts of the user.
|
## Inspiration
Course selection is an exciting but frustrating time to be a Princeton student. While you can look at all the cool classes that the university has to offer, it is challenging to aggregate a full list of prerequisites and borderline impossible to find what courses each of them leads to in the future. We recently encountered this problem when building our schedules for next fall. The amount of searching and cross-referencing that we had to do was overwhelming, and to this day, we are not exactly sure whether our schedules are valid or if there will be hidden conflicts moving forward. So we built TigerMap to address this common issue among students.
## What it does
TigerMap compiles scraped course data from the Princeton Registrar into a traversable graph where every class comes with a clear set of prerequisites and unlocked classes. A user can search for a specific class code using a search bar and then browse through its prereqs and unlocks, going down different course paths and efficiently exploring the options available to them.
## How we built it
We used React (frontend), Python (middle tier), and a MongoDB database (backend). Prior to creating the application itself, we spent several hours scraping the Registrar's website, extracting information, and building the course graph. We then implemented the graph in Python and had it connect to a MongoDB database that stores course data like names and descriptions. The prereqs and unlocks are found through various graph traversal algorithms, and the results are sent to the frontend to be displayed in a clear and accessible manner.
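A minimal sketch of the traversal behind "unlocks", assuming the graph is stored as a dictionary from a course code to the courses it directly unlocks; the actual schema in MongoDB may differ.

```python
from collections import deque

# Sketch of the "unlocks" traversal: breadth-first search over a dict that maps
# a course code to the set of courses it directly unlocks. The mirrored dict
# for prerequisites works the same way. Field names are illustrative.
def all_unlocks(course: str, unlocks: dict[str, set[str]]) -> set[str]:
    seen, queue = set(), deque([course])
    while queue:
        current = queue.popleft()
        for nxt in unlocks.get(current, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen  # every course reachable by taking `course` first

# Example: all_unlocks("COS 226", unlocks) might return {"COS 333", "COS 423", ...}
```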
## Challenges we ran into
Data collection and processing was by far the biggest challenge for TigerMap. It was difficult to scrape the Registrar pages given that they are rendered by JavaScript, and once we had the pages downloaded, we had to go through a tedious process of extracting the necessary information and creating our course graph. The prerequisites for courses are not written in a consistent manner across the Registrar's pages, so we had to develop robust methods of extracting data. Our main concern was ensuring that we would get a graph that completely covered all of Princeton's courses and was not missing any references between classes. To accomplish this, we used classes from both the Fall and Spring 21-22 semesters, and we can proudly say that, apart from a handful of rare occurrences, we achieved full course coverage and consistency within our graph.
## Accomplishments that we're proud of
We are extremely proud of how fast and elegant our solution turned out to be. TigerMap definitely satisfies all of our objectives for the project, is user-friendly, and gives accurate results for nearly all Princeton courses. The amount of time and stress that TigerMap can save is immeasurable.
## What we learned
* Graph algorithms
* The full stack development process
* Databases
* Web-scraping
* Data cleaning and processing techniques
## What's next for TigerMap
We would like to improve our data collection pipeline, tie up some loose ends, and release TigerMap for the Princeton community to enjoy!
## Track
Education
## Discord
Leo Stepanewk - nwker#3994
Aaliyah Sayed - aaligator#1793
|
winning
|
## Inspiration
Peripheral nerve compression syndromes such as carpal tunnel syndrome affect approximately 1 out of every 6 adults. They are commonly caused by repetitive stress and with the recent trend of working at home due to the pandemic it has become a mounting issue more individuals will need to address. There exist several different types of exercises to help prevent these syndromes, in fact studies show that 71.2% of patients who did not perform these exercises had to later undergo surgery due to their condition. It should also be noted that doing these exercises wrong could cause permanent injury to the hand as well.
## What it does
That is why we decided to create the “Helping Hand”, providing exercises for a user to perform and using a machine learning model to recognize each successful try. We implemented flex sensors and an IMU on a glove to track the movement and position of the user's hand. An interactive GUI was created in Python to prompt users to perform certain hand exercises. A real time classifier is then run once the user begins the gesture to identify whether they were able to successfully recreate it. Through the application, we can track the progression of the user's hand mobility and appropriately recommend exercises to target the areas where they are lacking most.
## How we built it
The flex sensors were mounted on the glove using custom-designed 3D printed holders. We used an Arduino Uno to collect all the information from the 5 flex sensors and the IMU. The Arduino Uno interfaced with our computer via a USB cable. We created a machine learning model with the use of TensorFlow and Python to classify hand gestures in real time. The user was able to interact with our program with a simple GUI made in Python.
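A rough sketch of the classifier, assuming fixed-length windows of the 5 flex readings plus 6 IMU values; the window length, layer sizes, and gesture count are illustrative, not the trained model's exact architecture.

```python
import tensorflow as tf

# Rough sketch of a gesture classifier over glove readings: windows of
# 5 flex values + 6 IMU values. Window length, layer sizes, and the number
# of gesture classes are illustrative assumptions.
N_TIMESTEPS, N_FEATURES, N_GESTURES = 50, 11, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_TIMESTEPS, N_FEATURES)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# x_train: (n_samples, 50, 11) sensor windows, y_train: gesture labels 0..5
# model.fit(x_train, y_train, validation_split=0.2, epochs=30)
# Real-time use: model.predict(window[None, ...]) -> pass/fail for the prompted gesture
```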
## Challenges we ran into
Hooking up 5 flex sensors and an IMU to one power supply initially caused some power issues causing the IMU not to function/give inaccurate readings. We were able to rectify the problem and add pull-up resistors as necessary. There were also various issues with the data collection such as gyroscopic drift in the IMU readings. Another challenge was the need to effectively collect large datasets for the model which prompted us to create clever Python scripts to facilitate this process.
## Accomplishments that we're proud of
Accomplishments we are proud of include, designing and 3D printing custom holders for the flex sensors and integrating both the IMU and flex sensors to collect data simultaneously on the glove. It was also our first time collecting real datasets and using TensorFlow to train a machine learning classifier model.
## What we learned
We learned how to collect real-time data from sensors and create various scripts to process the data. We also learned how to set up a machine learning model including parsing the data, splitting data into training and testing sets, and validating the model.
## What's next for Helping Hand
There are many improvements for Helping Hand. We would like to make Helping Hand wireless by using an Arduino Nano, which has Bluetooth capabilities as well as compatibility with TensorFlow Lite. This would mean that all the classification would happen right on the device! Also, by uploading the data from the glove to a central database, it can be easily shared with your doctor.
We would also like to create an app so that the user can conveniently perform these exercises anywhere, anytime.
Lastly, we would like to implement an accuracy score of each gesture rather than a binary pass/fail (i.e. display a reading of how well you are able to bend your fingers/rotate your wrist when performing a particular gesture). This would allow us to more appropriately identify the weaknesses within the hand.
|
## Inspiration
Physiotherapy is expensive for what it provides you with. A therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution, and technology exists to help people in need of medical aid despite not having the time and money to see a real therapist every week.
## What it does
IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they are performing simple arm exercises for recovery. A 3d interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application.
## How WE built it
On the embedded system side of things, we used a single Raspberry Pi for all the sensor processing. The Pi is in charge of interfacing with one IMU, while an Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device which displays the 3D interactive GUI and processes the ML data. All the data in the backend is relayed and managed using ROS. This data is then uploaded to Firebase, where the information is saved on the cloud and can be accessed anytime by a smartphone. Firebase also handles plotting the data to give accurate numerical feedback on values such as orientation, trajectory, and improvement over time.
## Challenges WE ran into
Hooking up 2 IMUs to the same RPi is very difficult. We attempted to create a multiplexer system with little luck.
To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult.
Another challenge we ran into was creating training data that was general enough and creating a preprocessing script that was able to overcome the variable size input data issue.
The last one was setting up a firebase connection with the app that supported the high data volume that we were able to send over and to create a graphing mechanism that is meaningful.
|
## Inspiration
Course selection is an exciting but frustrating time to be a Princeton student. While you can look at all the cool classes that the university has to offer, it is challenging to aggregate a full list of prerequisites and borderline impossible to find what courses each of them leads to in the future. We recently encountered this problem when building our schedules for next fall. The amount of searching and cross-referencing that we had to do was overwhelming, and to this day, we are not exactly sure whether our schedules are valid or if there will be hidden conflicts moving forward. So we built TigerMap to address this common issue among students.
## What it does
TigerMap compiles scraped course data from the Princeton Registrar into a traversable graph where every class comes with a clear set of prerequisites and unlocked classes. A user can search for a specific class code using a search bar and then browse through its prereqs and unlocks, going down different course paths and efficiently exploring the options available to them.
## How we built it
We used React (frontend), Python (middle tier), and a MongoDB database (backend). Prior to creating the application itself, we spent several hours scraping the Registrar's website, extracting information, and building the course graph. We then implemented the graph in Python and had it connect to a MongoDB database that stores course data like names and descriptions. The prereqs and unlocks are found through various graph traversal algorithms, and the results are sent to the frontend to be displayed in a clear and accessible manner.
## Challenges we ran into
Data collection and processing was by far the biggest challenge for TigerMap. It was difficult to scrape the Registrar pages given that they are rendered by JavaScript, and once we had the pages downloaded, we had to go through a tedious process of extracting the necessary information and creating our course graph. The prerequisites for courses are not written in a consistent manner across the Registrar's pages, so we had to develop robust methods of extracting data. Our main concern was ensuring that we would get a graph that completely covered all of Princeton's courses and was not missing any references between classes. To accomplish this, we used classes from both the Fall and Spring 21-22 semesters, and we can proudly say that, apart from a handful of rare occurrences, we achieved full course coverage and consistency within our graph.
## Accomplishments that we're proud of
We are extremely proud of how fast and elegant our solution turned out to be. TigerMap definitely satisfies all of our objectives for the project, is user-friendly, and gives accurate results for nearly all Princeton courses. The amount of time and stress that TigerMap can save is immeasurable.
## What we learned
* Graph algorithms
* The full stack development process
* Databases
* Web-scraping
* Data cleaning and processing techniques
## What's next for TigerMap
We would like to improve our data collection pipeline, tie up some loose ends, and release TigerMap for the Princeton community to enjoy!
## Track
Education
## Discord
Leo Stepanewk - nwker#3994
Aaliyah Sayed - aaligator#1793
|
winning
|
## ✨ Inspiration
Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are very rarely taught in school, and in third-world countries even less, though if used right, they can help many people rise above the poverty line. We seek to help students and adults learn more about stocks and what drives companies' stock values to rise or fall, and use that information to make more informed decisions.
## 🚀 What it does
Users are guided to a search bar where they can search for a company stock, for example "AAPL", and almost instantly see the stock price over the last two years as a graph, with green and red dots spread along the line. When they hover over a dot, the green dots explain why there is a general increasing trend in the stock, with a news article to back it up, along with the price change from the previous day and what it is predicted to be. An image of the company also shows up beside the graph.
## 🔧 How we built it
When a user enters a stock name, the app accesses the Yahoo Finance API and gets the stock price data from the last 3 years. It converts the data to a JSON file served on localhost:5000. Then, using Flask, it is exposed as our own API that populates the Chart.js graph with the stock data. Using a MATLAB server, we then take that data to find the areas of most significance (where the absolute value of the slope is over a certain threshold). Those data points are marked green if the change is positive or red if it is negative. The specific dates of those data points are fed to Gemini, which is asked why it thinks the stock shifted as it did and why the price changed on that day. Gemini also handles a second request for a phrase that is easy for the JSON search API to use to find a photo of the company, which is then shown on screen.
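The "areas of most significance" step is essentially a threshold on the day-over-day change. The real computation ran on the MATLAB server; the Python sketch below only illustrates the idea, with a placeholder threshold and toy prices:

```python
import pandas as pd

def significant_moves(prices: pd.Series, threshold: float = 0.03):
    """Flag dates where the absolute daily change exceeds a threshold.

    Returns a DataFrame with the percent change and a green/red colour,
    mirroring the dots drawn on the Chart.js graph.
    """
    pct = prices.pct_change().dropna()
    big = pct[pct.abs() > threshold]
    return pd.DataFrame({
        "pct_change": big,
        "colour": ["green" if v > 0 else "red" for v in big],
    })

# Toy price series standing in for the Yahoo Finance data
prices = pd.Series(
    [150.0, 151.2, 160.5, 158.0, 148.9],
    index=pd.date_range("2023-01-02", periods=5, freq="B"),
)
print(significant_moves(prices))
```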
## 🤯 Challenges we ran into
Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. As well, getting stock data to a MATLAB server took a lot of time, as it was everyone's first time using it. POST and fetch requests were new for us and took a lot of time to get used to.
## 🏆 Accomplishments that we're proud of
* Connecting a prompt to a well-crafted stocks portfolio
* Learning MATLAB in a time crunch
* Connecting all of our APIs successfully
* Making a website that we believe has serious positive implications for this world
## 🧠 What we learned
* MATLAB integration
* Flask integration
* Gemini API
## 🚀What's next for StockSee
* Incorporating it on different mediums such as VR, so users can see in real time how stocks shift in front of them in an interactive way.
* Making a small questionnaire on different aspects of a stock to ask whether it is good to buy at the time.
* Using Modern Portfolio Theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them.
|
## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses and market updates is overwhelming. For most people finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing we can condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from the domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies. This allows users to focus on high-profile stocks that are relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools. This determines whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
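Conceptually, the recommendation step maps aggregated sentiment scores to an action. A simplified sketch (the thresholds and scores are illustrative, not the trained model's actual output):

```python
def recommend(sentiments: list[float], buy_cutoff: float = 0.3, sell_cutoff: float = -0.3) -> str:
    """Aggregate per-mention sentiment scores (-1 negative .. +1 positive)
    for one company and map the average to a suggested action."""
    if not sentiments:
        return "hold"  # no coverage, no signal
    avg = sum(sentiments) / len(sentiments)
    if avg >= buy_cutoff:
        return "buy"
    if avg <= sell_cutoff:
        return "sell"
    return "hold"

# Example: sentiment scores for one company's mentions in scraped articles
print(recommend([0.8, 0.4, 0.1]))    # buy
print(recommend([-0.6, -0.2, 0.0]))  # sell
```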
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis we used TensorFlow, Pandas, and NumPy.
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performance. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources.
|
## Inspiration
The inspiration behind MyStock derives from our own personal experience and views on investing. Both of us are looking to invest in stocks in the future; however, we don't know much about the investing world and the stock market. We've also noticed that some of our family and friends are in similar situations. We wanted to create a project that is something we can potentially use to benefit ourselves, as the project has that sort of personal connection. The concept of MyStock itself was also inspired by articles on mediums that we've both shown interest in in the past.
## What it does
Our program uses the Yahoo Finance API along with other libraries to run analysis on a number of specified stocks, finding each stock's volatility and safety by comparing daily returns over the past month or year. Our program then sorts the stocks from most to least risky based on each stock's variance and asks the user whether they prefer risky or safe stocks.
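Conceptually, the risk ranking is just the variance of daily returns. A small sketch of the calculation with made-up prices (the real data comes from the Yahoo Finance API):

```python
import pandas as pd

def rank_by_risk(price_history: dict[str, list[float]]) -> pd.Series:
    """Rank tickers from most to least risky by the variance of daily returns."""
    variances = {}
    for ticker, prices in price_history.items():
        returns = pd.Series(prices).pct_change().dropna()
        variances[ticker] = returns.var()
    return pd.Series(variances).sort_values(ascending=False)

# Toy closing prices for two illustrative tickers
history = {
    "STABLE": [100, 100.5, 101, 100.8, 101.2],
    "WILD":   [100, 110, 95, 120, 90],
}
print(rank_by_risk(history))  # WILD first (riskier), STABLE last (safer)
```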
## How we built it
The project is programmed in Python and was built in Google Colab. We used a variety of libraries and APIs to extract data from current, real-world stocks, compile the data, and create our own program using our technical skills in Python.
## Challenges we ran into
One challenge we ran into was working with LSTMs (Long Short-Term Memory networks); we didn't end up using them, as our code was unsuccessful when we attempted it, but we were close. An idea came up to use Monte Carlo simulations, but we also came up short with that, as we couldn't get it to run.
## Accomplishments that we're proud of
We're proud of pushing ourselves beyond our comfort zone by learning new libraries, running code we've never run before, and trying out a whole range of things. We're also proud of how much our coding has improved since our previous hackathons, and of how we were able to be more organized with our time.
## What we learned
Our experience with programming MyStock led us to investigate and discover a handful of new topics, such as machine learning and deep learning APIs like Keras, stock and market analysis, and Monte Carlo simulation models. This project also helped us refine our own technical skills in Python programming, along with improving our soft skills as we continued to communicate with each other throughout the programming process.
## What's next for MyStock
The next steps for MyStock include creating a front-end web application or a mobile app for our program to create an organized, practical, and user-friendly product. MyStock also looks to improve scaling, such as the number of stocks on the market it can take as input and the number of simulations it can run through, to create more accurate results.
|
partial
|
**Inspiration**
Currently, society faces many environmental challenges, with the fashion industry known as one of the most polluting industries in the world. Fast fashion produces clothing that isn't made to last, since it is made with cheap materials that harm the environment (landfill impact, pesticides in growing cotton, and toxic chemicals making their way into water). However, sustainable clothing uses materials that are made to last longer and are non-harmful to the environment. By choosing sustainable clothing, one can reduce their waste significantly and spend less money when shopping for clothes. We believe that users would benefit greatly from an intuitive app that can be used to track their wardrobe, let them know how sustainable their wardrobe is, and predict the type of clothing they have using Tangram.
**What it does**
The app allows users to explore their closet by letting them add clothing that they already own, selecting the type of clothing (shirt, pants, shorts, jacket, etc.) and then picking the brand. The user can then take a picture of the clothing item that they wish to upload and can add more items if they wish. After they are done adding their items, the app generates their sustainability score and gives them outfit recommendations that are more sustainable for their closet, taking them to the store's website so they can purchase the item in-app. Users can also get points for buying from a sustainable brand, which they can redeem as gift cards from some of their favorite sustainable brands. The model built using Tangram recognizes the type of clothing based on the picture the user takes.
**How we built it**
For the model: we learned how to work with Tangram, using a CSV to create a .tangram model, and then using that model to test individual data and judge the overall accuracy. Then, we were able to find a large dataset online that had thousands of images of different clothing, all labeled for type (i.e. shirts, shoes, pants). We converted those images into binary strings and created a new CSV with those strings and the corresponding types. That was then used to train a Tangram model and we came out with about 61% accuracy, which is slightly better than a completely random guess would be expected to be, with so many clothing options.
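A rough sketch of the image-to-CSV step (the folder layout and the base64 encoding are assumptions for illustration; base64 also happens to sidestep the comma problem described below):

```python
import base64
import csv
from pathlib import Path

def image_to_string(path: Path) -> str:
    """Encode an image file as a base64 string (comma-free, so it is CSV-safe)."""
    return base64.b64encode(path.read_bytes()).decode("ascii")

def build_training_csv(image_dir: str, out_csv: str) -> None:
    """Write one row per image: encoded pixels plus the clothing-type label,
    taken here from the parent folder name (e.g. dataset/shirt/img1.png)."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "label"])
        for img in Path(image_dir).glob("*/*.png"):
            writer.writerow([image_to_string(img), img.parent.name])

# build_training_csv("dataset", "clothing.csv")  # example invocation
```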
**Challenges we ran into**
Tangram does not yet support image recognition, so we had to think of a way to pass in the images to test the library for that use. When training, there were issues with commas appearing in the binary strings (messing up the CSV formatting) and with the data cleansing we performed to remove clothing we deemed irrelevant.
**Accomplishments that we're proud of**
Although the model is not the most accurate, we are proud of trying to find a way to apply this tool, Tangram, to a new purpose in image recognition. Also, learning how to train and use this machine learning library was a useful skill that multiple people on our team can use in the future.
**What we learned**
We learned how to use an intuitive machine learning library and a little about how image recognition and data cleansing work.
**What's next for Sound of Sustainability**
The development of a full front-end using some cross-platform tool such as React Native, and the connection of that with our user interface and machine learning model to create a fully functioning app.
|
## Inspiration
The fashion industry is often overlooked when we think about the main suspects of pollution. The industry has been burdened by hidden supply chains, unethical labor practices, and significant environmental damage in various sectors. The UN Environment Programme (UNEP) reports that the fashion industry is the second-largest consumer of water and accounts for approximately 10% of global carbon emissions—exceeding the combined emissions of all international flights and maritime shipping. As consumers, we often only see the final product, overlooking or not even realizing the harmful impacts caused throughout the fashion production process. In our quest for transparency, we investigated the potential of blockchain and identified safety contracts as a crucial solution for ensuring accountability at every stage of the manufacturing process, so users are encouraged to become more selective with the products they purchase.
## What it does
There are two primary participants in the use of safety contracts. First, the admins (distributors, manufacturers, or suppliers) sign off on successfully transferring the physical product and its digital twin to the next part of the distribution chain. Second, the end users scan the final QR code, which contains a unique hash tied to the garment and the blockchain, allowing them to access and collect a digitized version of the item. This ensures both transparency in the supply chain and a digital representation of the product for users to track.
This decentralized auditing system adds another layer of accountability, as multiple parties independently validate the successful transfer of both the physical product and its digital twin. The distributed nature of this system reduces the risk of corruption or errors that may occur in a centralized system, ensuring that every step in the supply chain is transparent, verified, and traceable.
Greenwashing, the practice of falsely portraying products or companies as environmentally friendly, is so widespread in the fashion industry that full transparency is critical to combat it. Ultimately, this level of transparency allows consumers to trust the product and make informed decisions, knowing that the garment's sustainability credentials are genuinely aligned with the practices behind it, eliminating the deceptive practices of greenwashing.
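As a purely illustrative sketch (our actual backend is Motoko on the Internet Computer), the idea of tying a garment's serial number to a hash embedded in a QR code looks roughly like this in Python; the URI scheme and serial format are made up:

```python
import hashlib
import qrcode  # pip install qrcode[pil]

def garment_hash(serial: str, batch: str) -> str:
    """Derive the unique hash recorded on-chain for one physical garment."""
    return hashlib.sha256(f"{serial}:{batch}".encode()).hexdigest()

def make_label(serial: str, batch: str, out_path: str) -> str:
    """Embed the hash in a QR code printed on the garment's tag.

    Scanning it lets the app look up the digital twin and its audit trail.
    """
    digest = garment_hash(serial, batch)
    qrcode.make(f"verithread://{digest}").save(out_path)
    return digest

# print(make_label("SN-0001", "BATCH-42", "label.png"))  # hypothetical serial and batch
```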
## How we built it
Frontend - TypeScript and Figma
Backend - Motoko
## Challenges we ran into
The first big hurdle we ran into was setting up the frontend of our website. We also ran into issues figuring out how ICP tokens work and how to deploy our project on the blockchain using the mainnet.
## Accomplishments that we're proud of
We're extremely proud of being able to implement a difficult concept, as none of us had any blockchain experience. We entered the hackathon with knowledge of front-end and back-end development. We finished with an end-to-end application addressing an important real-world problem, increasing sustainability by integrating blockchains.
## What we learned
For this hackathon, we decided to push ourselves and develop a project utilizing blockchain. As this was a new technical area for all of us, these past 36 hours have created an environment for non-stop learning. For one, we learned what ICP (Internet Computer Protocol) is and the benefits of using its backend software. Furthermore, smart contracts (or canisters) are computational units that developers deploy to the Internet Computer and that interact with one another automatically. We also learned about Layer 2 scaling solutions, which is the concept of building blockchains on top of each other. After researching and learning about all the ways we could incorporate blockchain into an app, we came to the concept of tracking supply chains using blockchains, where all the participants in the supply chain have access to a shared, decentralized ledger and each transaction or change in product status is recorded as a block in this ledger. After we understood the logic in the backend, we evaluated implementation in the frontend and came to the idea of QR scanning. This was another learning curve for us, because none of us had worked extensively with embedding information within a QR code, especially to retrieve the serial code and send a request to the blockchain. The development of this process came with a lot of research as we learned about dfx libraries, the usage of Node.js, and development using Motoko. Another lesson for us was the practice, use, and importance of user interface design and translating Figma to TypeScript. Finally, we implemented API calls to integrate the backend with the frontend. Overall, the team learned a lot about blockchain, front-end design and development, and API integration through the Hack the Valley 2024 hackathon.
## What's next for VeriThread
Aside from continuing to develop a dedicated mobile app, we would like to implement an incentive where people receive a partial ICP fund, encouraging them to continue shopping sustainably while also adding a gamified element to the experience. We would also like to expand into new markets and integrate VeriThread's blockchain technology with more retail partners, making it easier for people to shop ethically.
|
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for backend, React for frontend.
## Challenges we ran into
Three beginner hackers! It was the first time at a hackathon for three of us, and for two of those three it was our first time formally coding in a product setting. Ideation was also challenging, both in deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and in determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.).
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
* Systems to reward sustainable product purchases
* Storing data over time and tracking sustainable purchases
* Incorporating a community aspect, where small businesses can link their products or websites to certain searches
* Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business
* More tailored or specific product recommendations that recognize style, scent, or other niche qualities
|
partial
|
# ImperialSim
## Background
Imperialism and colonialism are viewed as relics of days gone by, but numerous modern institutional problems - ranging from racism to economic inequality and global warming - stem from these phenomena.
There is also a lack of understanding of colonialism's global impact, with many localizing its influence and thus underestimating the extent to which Western influence - and malevolence - has spread throughout the world.
This website offers a compact look at imperialism's spread since Columbus sailed the ocean blue, through a chronological survey at how different nations and reaches of the world have been affected by policies at home and across the pond.
# How We Built It
We used BeautifulSoup4 and a [Wikipedia Python library](https://pypi.org/project/wikipedia/) to scrape [Wikipedia timelines of Western colonialism](https://en.wikipedia.org/wiki/Chronology_of_Western_colonialism). Then, we used IBM Watson's Natural Language API to parse through the information we scraped in order to build a large dataset of events from the timeline. Using that data, we manipulated SVG components to effectively show where colonialism has had its impact. We also enabled an option in the backend to use agent-based simulation data based on the historical data we collected. The web app was created using Python Flask, and we deployed it using Google Cloud's App Engine.
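A condensed sketch of the scraping step (the CSS selector and parsing details are simplified assumptions; the real pipeline also ran the text through Watson's NLP API):

```python
import requests
from bs4 import BeautifulSoup

URL = "https://en.wikipedia.org/wiki/Chronology_of_Western_colonialism"

def scrape_timeline_entries(url: str = URL) -> list[str]:
    """Pull the bullet-point timeline entries from the Wikipedia chronology page.

    Each list item roughly corresponds to one historical event that is later
    tagged with countries and event type.
    """
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    entries = []
    for li in soup.select("div.mw-parser-output li"):
        text = li.get_text(" ", strip=True)
        if text:
            entries.append(text)
    return entries

if __name__ == "__main__":
    events = scrape_timeline_entries()
    print(len(events), "entries scraped; first one:", events[0][:80])
```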
# Simulation

RED countries - or landmasses that were formerly not of any nation - have been either negatively impacted by imperialism or have had their influence severely diminished, either way, resulting in loss of life or structural violence.
ORANGE countries are undergoing conflict or dispute - perhaps an assassination has occurred, or there is a controversial conference - and GREEN countries are freer of imperial grasp, either having gained their independence or forming Empires of their own.
|
## Inspiration
We were inspired by sentiment analysis in the news, as well as crisis trackers. We decided to use co:here to run our own sentiment analysis for current events to pick out petitions one might want to sign!
## What it does
Our project aims to visualise global turmoil for the user through news headlines. Heatmaps are established based on negative headlines and the user can navigate to anywhere on the world to take action and sign petitions that our model generates.
* Displays a globe with a heat map which visualises countries which have the most negative recent news articles
* Allows the user to select an area of turmoil and take action relevant to helping this region
## How we built it
In order to implement the heatmaps for our globe, we needed to create a ratio between negative and positive news articles in order to gauge whether a country was in a state of turmoil or had many negative headlines. To do this, we had to train co:here's NLP model so that we could identify the sentiment of an article - whether it was positive or negative. In order to do this, we used a news search API to pull recent news articles from a variety of countries and then created a program so that we could go through these headlines and descriptions manually and quickly decide whether they are positive or negative, thus creating a dataset to use to train co:here.
Once we had a gauge of the negative-to-positive headline ratio for each country, we created the heat map, which identifies the most contentious countries at the moment. In order to prompt the user to take action in countries that appear to be in turmoil, we used change.org's search to automatically find and display the few most relevant charitable initiatives within the selected country.
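The heat intensity for each country reduces to a negativity ratio over its classified headlines. A minimal sketch (the sentiment labels would come from the trained co:here classifier):

```python
def heat_intensity(labels: list[str]) -> float:
    """Return a 0..1 heat value for a country from its headline sentiment labels.

    0 means every recent headline was classified positive, 1 means every one was negative.
    """
    if not labels:
        return 0.0
    negatives = sum(1 for label in labels if label == "negative")
    return negatives / len(labels)

# Example: labels the classifier might assign to one country's recent headlines
print(heat_intensity(["negative", "negative", "positive", "negative"]))  # 0.75
```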
## Challenges we ran into
One of the biggest challenges during our project was classifying our news articles as negative or positive - this was particularly difficult because of the various nuances a news article's headline and description have, making them often difficult to fit into a strict category. We therefore needed to train co:here's model in order to identify what kinds of news articles were negative or positive.
## Accomplishments that we're proud of
We are proud of being able to produce a globe with a heat map that visualises the relative "negativity" of news articles.
## What we learned
3D modelling on the web browser is a lot harder than it needs to be :/
Sentiment analysis has a long way to go.
There is an upper limit for caffeine saturation in the human body
## What's next for The World in Colour
From the beginning our overarching aim was for The World in Colour to implement a chatbot that can guide the user through the experience
|
## Inspiration
Memes have become a cultural phenomenon and a huge pastime for many young adults, including ourselves. For this hackathon, we decided to take the sociability aspect of the popular site Twitter and combine it with a way of visualizing the activity of memes in various neighborhoods. We hope that through this application, we can create a multicultural collection of memes and expose memes trending in popular cities to a widespread community of memers.
## What it does
NWMeme is a data visualization of memes that are popular in different parts of the world. Entering the application, you are presented with a rich visual of a map with Pepe the Frog markers on the different cities that have dank memes. Pepe markers are sized by their popularity score, which is composed of retweets, likes, and replies. Clicking on a Pepe marker brings up an accordion that displays the top 5 memes in that city, pictures of each meme, and information about that meme. We also have a chatbot that is able to reply to simple queries about memes like "memes in Vancouver."
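The popularity score behind the marker size is a simple weighted sum of engagement counts. A sketch with made-up weights (the real weights were tuned by eye):

```python
def popularity_score(retweets: int, likes: int, replies: int) -> float:
    """Combine tweet engagement into one score used to size a city's Pepe marker.

    The weights below are illustrative; retweets count the most since they
    spread the meme furthest.
    """
    return 3.0 * retweets + 1.0 * likes + 2.0 * replies

def marker_radius(score: float, base: float = 10.0, scale: float = 0.01) -> float:
    """Map a popularity score to a pixel radius for the map marker."""
    return base + scale * score

score = popularity_score(retweets=420, likes=1337, replies=69)
print(score, marker_radius(score))
```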
## How we built it
We wanted to base our tech stack on the tools that the sponsors provided. This started from the bottom with CockroachDB as the database that stores all the data about memes that our Twitter web crawler scrapes. Our web crawler was written in Python, which Google gave an advanced-level talk about. Our backend server was in Node.js, for which CockroachDB provided a wrapper, hosted on Azure. Calling the backend APIs is a vanilla JavaScript application that uses Mapbox for the maps API. Alongside the data visualization on the map, we also have a chatbot application using Microsoft's Bot Framework.
## Challenges we ran into
We had many ideas we wanted to implement, but for the most part we had no idea where to begin. A lot of the challenge came from figuring out how to implement these ideas; for example, figuring out how to link a chatbot to our map. At the same time, we had to think of ways to scrape the dankest memes from the internet. We ended up choosing Twitter as our resource and tried to come up with the hypest hashtags for the project.
A big problem we ran into was that our database completely crashed an hour before the project was due. We had to redeploy our Azure VM and database from scratch.
## Accomplishments that we're proud of
We were proud that we were able to use as many of the sponsor tools as possible instead of the tools that we were already comfortable with. We really enjoyed the learning experience, and that is the biggest accomplishment. Bringing all the pieces together into a cohesive working application was another accomplishment. It required lots of technical skill, communication, and teamwork, and we are proud of what came out of it.
## What we learned
We learned a lot about the different tools and APIs available from the sponsors, and got first-hand mentoring in working with them. It's been a great technical learning experience. Aside from technical learning, we also learned a lot about communication and timeboxing. The largest part of our success relied on all of us working on parallel tasks that did not block one another and then coming together for integration.
## What's next for NWMemes2017Web
We really want to work on improving interactivity for our users. For example, we could have chat for users to discuss meme trends. We also want more data visualization to show trends over time and other statistics. It would also be great to grab memes from different websites to make sure we cover as much of the online meme ecosystem.
|
losing
|
## Inspiration
As our team nears the point in our lives in which buying our first home is on our radar, we've realized how cumbersome and daunting the task is, especially for new homeowners. Just the first step, determining which house you can afford, is already difficult, as a home's listing price doesn’t even begin to cover all of the hidden costs and fees. Terms like credit scores, down payments, mortgages, and homeowner's insurance just add to the confusion. So how can an aspiring homeowner ensure they're financially prepared to make one of the biggest purchases of their life? Enter all-in.
## What it does
all-in is a web app that calculates the all-in cost (n. the total cost of a transaction after commissions, interest rates, and other expenses) of a house, including all hidden costs, fees, and taxes based on state and local laws, with just an address (+ some extra details). It breaks down each individual cost, allowing the user to see and learn about where their money is going. Through this comprehensive list of expenses, all-in is able to provide the user with a realistic estimate of how much they should expect to pay for their new home so they can be financially prepared for this monumental moment.
all-in has two main value propositions: time and preparedness. It's significantly faster to automatically calculate the total cost of a home than to enter every expense manually. The process is also less stressful and more exciting when the homeowner is fully prepared to make the purchase.
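At its core, the calculation is an itemized sum of up-front and recurring costs on top of the listing price. A simplified sketch with placeholder rates (real rates depend on the state and local laws the app looks up per address):

```python
def all_in_first_year(
    listing_price: float,
    down_payment_rate: float = 0.20,
    closing_cost_rate: float = 0.03,      # placeholder; varies by state
    property_tax_rate: float = 0.011,     # placeholder; varies by county
    annual_insurance: float = 1500.0,     # placeholder homeowner's insurance
    annual_mortgage_rate: float = 0.065,  # placeholder interest rate
    term_years: int = 30,
) -> dict:
    """Break down the cash needed up front plus the first year of ownership."""
    down_payment = listing_price * down_payment_rate
    closing_costs = listing_price * closing_cost_rate
    principal = listing_price - down_payment
    r = annual_mortgage_rate / 12
    n = term_years * 12
    monthly_mortgage = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
    breakdown = {
        "down_payment": down_payment,
        "closing_costs": closing_costs,
        "first_year_mortgage": 12 * monthly_mortgage,
        "first_year_property_tax": listing_price * property_tax_rate,
        "first_year_insurance": annual_insurance,
    }
    breakdown["total"] = sum(breakdown.values())
    return breakdown

print(all_in_first_year(500_000))
```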
## Challenges we ran into
Our team was split between 2 different time zones, so communication and coordination was a challenge.
## Accomplishments that we're proud of
For two of our members, this was their first hackathon!
## What's next for all-in
In the future, we hope to make all-in available on all platforms.
|
## Inspiration
One of our team members is a community manager for a real estate development group that often has trouble obtaining certifications in their attempts to develop eco-friendly buildings. The trouble that they go through leaves them demotivated because of the cost and effort, leading them to avoid the process altogether and instead develop buildings that are not good for the environment.
If there was some way that they could more easily see what tier of LEED certification they could fall into and furthermore, what they need to do to get to the NEXT tier, they would be more motivated to do so, benefiting both their building practices as well as the Earth.
## What it does
Our product is a model that takes in building specifications and is trained on LEED codes. We take your building specifications and then answer any questions you may have on your building as well as put it into bronze, silver, gold, or platinum tiering!
## How we built it
The project is built with Next.js, React, and Tailwind, and for the AI component we used a custom OpenAI API integration contextualized using past building specs and their certification levels. We also used Stack AI for testing and feature analysis.
## Challenges we ran into
The most difficult part of our project was figuring out how to make the model understand what buildings fall into different tiers.
## Accomplishments that we're proud of
GETTING THIS DONE ON TIME!!
## What we learned
This is our first full-stack project using AI.
## What's next for LEED Bud
We're going to bring this to builders across Berkeley for them to use! Starting of course at the company of our team member!
|
## Inspiration
Through a discussion about Critical Studies of Race and Ethnicity with Professor Michael Wilcox, the topic of financial literacy and the disparity it has caused in the US was brought up, and we noticed how alarming the situation was. The statistics revealed serious problems rooted in information asymmetry among socioeconomic classes, so we wanted to develop an app that would educate people of all ages, especially the youth, on various financial topics to help them establish financial literacy and confidence.
## What it does
We designed paperwall., a mobile app which teaches children, teenagers, young adults, and really everybody, about personal finance through a series of fun interactive quizzes, videos, and other resources. For each question that is asked, four possible answers appear, and the user has to guess the correct one. Our interface will then reveal if the answer was correct or not, and regardless of the guess made, we also provide an explanation page detailing the reason for the correct answer, as well as a link to a website where the user can learn more about that topic.
## How we built it
We built this app entirely using Bubble. We used text, buttons, and additional features to provide an interactive experience. We designed our logo using Notability.
## Challenges we ran into
Various challenges were faced while building our product. We initially wanted to build our app through code (using the C++ language), and we actually have a basic working version of our process running in terminal. However, we struggled with implementing our code into an application platform that was presentable and accessible for people to use, so we ended up having to take a no-code approach and focused on delivering a functional app using Bubble. We also had a lot of ideas for features that we wanted to incorporate, but given our lack of time we were not able to implement them into our product. Hence, we had to make team decisions on what to prioritize.
## Accomplishments that we're proud of
We are all so proud of what we were able to accomplish. This was the first time participating in a hackathon for all of us, so we had to overcome a steep learning curve. Writing code for an assignment outside of our classes was a big challenge, partly because we are not given instructions on what/how to write, and mainly because we had to learn to work with new software and interfaces. We are also extremely proud of how we came together and made collective agreements in times when we needed to pivot. Working in a team, it was crucial for all of us to discuss ideas, be open to contributions, and put our different skillsets to use. We were able to do that successfully, yet the biggest accomplishment for this event was doing it all while having fun.
## What we learned
Our main takeaways from this experience is that coding should not be restricted to the classroom. There is so much that can be programmed in this world, and a lot of it can make a lasting positive impact. And it's so much fun to implement! Problem-solving and seeing the results of your work are highly rewarding.
## What's next for paperwall.
Moving forward, we want our app to contain some of the features that we did not have the time or experience to implement during the event. We would like to create an open-source project where users can contribute individual questions about financial literacy to a CSV file, which we would then review and approve for our app. This feature would be a great way for individuals to meaningfully share their knowledge, democratizing access to information about personal finance. Another feature we would like to implement is a rewards-based system, where users can collect points for every question they answer correctly. Such a system will motivate users to engage with the app more and thus learn more indispensable knowledge.
|
partial
|
## Inspiration
We all know that you shouldn't simply throw away used batteries or broken lightbulbs in the bin.
But we also know that we might often be too lazy to go all the way to a recycling centre for a couple of batteries. The solution? Lazy disposal!
## What it does
Submit the items you want to get rid of in a couple of seconds! Leave the box containing these items right in front of your house, or in other place of your choice.
Alternatively, be the environmental hero! Check out the list of boxes that are waiting to be collected, collect the ones closer to you and bring them to the closest recycling centre.
You can see the status of a box in real time - if it's already "booked" by another volunteer, or if it's available for you to book it.
## How we built it
We mainly used the Convex platform and Formik and Yup for creating and submitting forms.
## Challenges we ran into
Incorporating the geolocation API was a bigger challenge than we expected.
## Accomplishments that we're proud of
We are proud that our 2-back-end-people team finally learned some front-end.
## What we learned
We learned how to use Convex, and more about TypeScript and React and their libraries.
## What's next for Lazy Disposal
We plan to
* extend our database of recycling centres
* provide an estimate of the amount of money a recycling centre can offer based on the box content
* gamify the process!
|
## Inspiration
This system was designed to make waste collection more efficient, organized and user-friendly.
Keeping the end users in mind, we have created a system that detects what type of waste has been inserted in the bin and categorizes it as recyclable or garbage.
The system then opens the appropriate chute (using motors) and turns on an LED corresponding to the type of waste that was just disposed of, to educate the user.
## What it does
The system sorts waste into recycling or garbage, using the Google Vision API to identify the waste object, Python to classify it as recycling or garbage, and an Arduino to move the bin and light the LED indicating the appropriate waste bin.
## How we built it
We built our hack using the Google Cloud Vision API and Python to process the data received from the API, then transmitted that data to the Arduino to tell it which bin to open. The bin was operated using a stepper motor and an LED that indicated the appropriate bin, recycling or garbage, so that the waste object can automatically be disposed of correctly. We built our hardware model out of cardboard: we split a box into 2 sections and attached a motor to the centre of a platform that allows it to rotate to each of the sections.
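A hedged sketch of the classify-and-dispatch loop (the label list, serial port, and byte commands are assumptions for illustration, and Google Cloud credentials are assumed to be configured):

```python
import serial  # pyserial
from google.cloud import vision

RECYCLABLE_HINTS = {"bottle", "plastic", "can", "paper", "cardboard"}  # illustrative list

def classify_waste(image_path: str) -> str:
    """Label an image with the Vision API and decide recycling vs. garbage."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    words = {label.description.lower() for label in labels}
    return "recycling" if words & RECYCLABLE_HINTS else "garbage"

def dispatch(bin_type: str, port: str = "/dev/ttyACM0") -> None:
    """Tell the Arduino which chute to open ('R' and 'G' are made-up commands)."""
    with serial.Serial(port, 9600, timeout=2) as arduino:
        arduino.write(b"R" if bin_type == "recycling" else b"G")

# dispatch(classify_waste("item.jpg"))  # example invocation
```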
## Challenges we ran into
We were planning on using a camera interfaced with the Arduino to analyze the garbage at the input; unfortunately, the hardware component that was going to act as our camera ended up failing, forcing us to find an alternative way to analyze the garbage. Another challenge we ran into was setting up the Google Cloud Vision API, but we stayed motivated and got it all to work. One of the biggest challenges was trying to use the Dragonboard 410c: due to inconsistent wifi and the controller crashing frequently, it was hard for us to get anything concrete.
## Accomplishments that we're proud of
Something that we are really proud of is that we were able to come up with the hardware portion of our hack overnight. We finalized our idea late into the hackathon (around 7pm) and spent most of the night splitting our resources between the hardware and software components of our hack. Another accomplishment that we are proud of is that our hack has positive implications for the environment and society, something that all of our group members are really passionate about.
## What we learned
We learned a lot through our collaboration on this project. What stands out is our exploration of APIs and attempts at using new technologies like the Dragonboard 410c and sensors. We also learned how to use serial communications, and that there are endless possibilities when we look to integrate multiple different technologies together.
## What's next for Eco-Bin
In the future, we hope to have a camera that is built in with our hardware to take pictures and analyze the trash at the input. We would also like to add more features like a counter that keeps up with how many elements have been recycled and how many have been thrown into the trash. We can even go into specifics like counting the number of plastic water bottles that have been recycled. This data could also be used to help track the waste production of certain areas and neighbourhoods.
|
## Inspiration
As college students more accustomed to having meals prepared by someone else than to doing so ourselves, we are not the best at keeping track of ingredients' expiration dates. As a consequence, money is wasted and food waste piles up, undercutting the financially advantageous aspect of cooking for yourself. With this problem in mind, we built an iOS app that easily allows anyone to record and track expiration dates for groceries.
## What it does
The app, iPerish, allows users to either take a photo of a receipt or load a pre-saved picture of the receipt from their photo library. The app uses Tesseract OCR to identify and parse through the text scanned from the receipt, generating an estimated expiration date for each food item listed. It then sorts the items by their expiration dates and displays the items with their corresponding expiration dates in a tabular view, such that the user can easily keep track of food that needs to be consumed soon. Once the user has consumed or disposed of the food, they could then remove the corresponding item from the list. Furthermore, as the expiration date for an item approaches, the text is highlighted in red.
## How we built it
We used Swift, Xcode, and the Tesseract OCR API. To generate expiration dates for grocery items, we made a local database with standard expiration dates for common grocery goods.
## Challenges we ran into
We found out that one of our initial ideas had already been implemented by one of CalHacks' sponsors. After discovering this, we had to scrap the idea and restart our ideation stage.
Choosing the right API for OCR on an iOS app also required time. We tried many available APIs, including the Microsoft Cognitive Services and Google Computer Vision APIs, but they do not have iOS support (the former has a third-party SDK that unfortunately does not work, at least for OCR). We eventually decided to use Tesseract for our app.
Our team met at Cubstart; this hackathon *is* our first hackathon ever! So, while we had some challenges setting things up initially, this made the process all the more rewarding!
## Accomplishments that we're proud of
We successfully managed to learn the Tesseract OCR API and made a final, beautiful product - iPerish. Our app has a very intuitive, user-friendly UI and an elegant app icon and launch screen. We have a functional MVP, and we are proud that our idea has been successfully implemented. On top of that, we have a promising market in no small part due to the ubiquitous functionality of our app.
## What we learned
During the hackathon, we learned both hard and soft skills. We learned how to incorporate the Tesseract API and make an iOS mobile app. We also learned team building skills such as cooperating, communicating, and dividing labor to efficiently use each and every team member's assets and skill sets.
## What's next for iPerish
Machine learning can optimize iPerish greatly. For instance, it can be used to expand our current database of common expiration dates by extrapolating expiration dates for similar products (e.g. milk-based items). Machine learning can also serve to increase the accuracy of the estimates by learning the nuances in shelf life of similarly-worded products. Additionally, ML can help users identify their most frequently bought products using data from scanned receipts. The app could recommend future grocery items to users, streamlining their grocery list planning experience.
Aside from machine learning, another useful update would be a notification feature that alerts users about items that will expire soon, so that they can consume the items in question before the expiration date.
|
losing
|
## Inspiration
The system Coursera uses to authenticate users, but for arbitrary text fields.
## What it does
Identifies potentially fraudulent activity through keystroke dynamics.
## How we built it
JavaScript to implement the data collection, MATLAB to train the algorithm, and a fancy Bootstrap theme to jazz it all up.
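Our feature extraction ran in JavaScript and the training in MATLAB; purely as an illustration, the dwell-time and flight-time features at the heart of keystroke dynamics look like this, sketched here in Python with an assumed event format:

```python
def keystroke_features(events):
    """Turn a typing session into dwell/flight-time features.

    `events` is a list of (key, key_down_ms, key_up_ms) tuples, in typing order.
    Returns per-key dwell times and the flight times between consecutive keys,
    which together form the feature vector a classifier can compare against a
    user's enrolled typing profile.
    """
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"dwell_ms": dwells, "flight_ms": flights}

# Example session for the word "hi!"
session = [("h", 0, 95), ("i", 160, 240), ("!", 410, 520)]
print(keystroke_features(session))
# {'dwell_ms': [95, 80, 110], 'flight_ms': [65, 170]}
```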
## Challenges we ran into
JavaScript makes it difficult to collect data. Machine learning is difficult when you don't start with a data set already prepared. We weren't sure what we were trying to accomplish for a large part of the project.
## Accomplishments that we're proud of
The demo site is pretty. We managed to come up with something relatively unique.
## What we learned
JavaScript is not a great language for implementing machine learning.
## What's next for Protktme
Further research into machine learning, potentially not with JavaScript.
|
## Inspiration
Coming from South Texas, two of the team members saw ESL (English as a Second Language) students being denied a proper education. Our team created a tool that uses ChatGPT to provide detailed explanations of word problems, breaking down language barriers that traditionally perpetuate socioeconomic cycles of poverty. Traditionally, people from this group would not have access to tutoring or 1-on-1 support, and this website is meant to rectify this glaring issue.
## What it does
The website takes in a photo as input, and it uses optical character recognition to get the text from the problem. Then, it uses ChatGPT to generate a step-by-step explanation for each problem, and this output is tailored to the grade level and language of the student, enabling students from various backgrounds to get assistance they are often denied.
## How we built it
We coded the backend in Python with two parts: OCR and the ChatGPT API integration. We also considered the parameters, such as grade and language, that we could build into our code and eventually query ChatGPT with to make the result as helpful as possible. On the other side of the stack, we coded the frontend in React with TypeScript to be as simple and intuitive as possible. It has two sections that clearly show what the OCR is outputting and what ChatGPT has generated to assist the student.
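A trimmed-down sketch of the two backend parts (the prompt wording, model name, and environment setup are assumptions; the real service also validates the upload and handles errors):

```python
import pytesseract                 # OCR wrapper around Tesseract
from PIL import Image
from openai import OpenAI          # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

def extract_problem(image_path: str) -> str:
    """Pull the word-problem text out of the uploaded photo."""
    return pytesseract.image_to_string(Image.open(image_path)).strip()

def explain(problem: str, grade: str, language: str) -> str:
    """Ask the model for a step-by-step explanation tailored to the student."""
    prompt = (
        f"Explain this problem step by step for a grade {grade} student, "
        f"in {language}:\n\n{problem}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# explain(extract_problem("worksheet.jpg"), grade="5", language="Spanish")
```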
## Challenges we ran into
During the development of our product, we often struggled with deciding the optimal way to apply different APIs and learning how to implement them; many of these, such as the IBM API, we ended up not using or changed how we applied. Through this process, we had to change our high-level plan for the backend functions and consequently reimplement our frontend user interface to fit the new operations. This also provided a compounding challenge of having to re-establish and discuss new ideas while communicating as a team.
## Accomplishments that we're proud of
We are proud of the website layout; the team is very fond of the colors and the arrangement of the site's elements. Another thing that we are proud of is simply that we have something working, albeit jankily. This was our first hackathon, so we were proud to be able to contribute to the hackathon in some form.
## What we learned
One invaluable skill we developed through this project was learning more about the unique plethora of APIs available and how we can integrate and combine them to create new revolutionary products that can help people in everyday life. We not only developed our technical skills, including git familiarity and web development, but we also developed our ability to communicate our ideas as a team and gain the confidence and creativity to create and carry out an idea from thought to production.
## What's next for Homework Helper
As part of our mission to increase education accessibility and combat common socioeconomic barriers, we hope to use Homework Helper to not only translate and minimize the language barrier, but to also help those with visual and auditory disabilities. Some functions we hope to implement include having text-to-speech and speech-to-text features, and producing video solutions along with text answers.
|
## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>
Robbie's story stuck out to me as a testament to the endless possibilities of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his house into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.
## What it does
We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script is run in the terminal, it can be used across the computer and all its applications.
## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30 function) library that could be used to control almost anything on the computer.
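A cut-down sketch of the command loop (the phrase-to-action table is a tiny, made-up subset of our ~30-function library, and recognition is shown here via the SpeechRecognition wrapper's free Google Web Speech recognizer rather than the Cloud API we actually used):

```python
import pyautogui
import speech_recognition as sr

def handle(phrase: str) -> None:
    """Map a recognized phrase to a computer action (illustrative subset)."""
    phrase = phrase.lower()
    if phrase.startswith("type "):
        pyautogui.write(phrase[len("type "):])
    elif "click" in phrase:
        pyautogui.click()
    elif "scroll down" in phrase:
        pyautogui.scroll(-300)

def listen_loop() -> None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as mic:
        while True:
            audio = recognizer.listen(mic)
            try:
                handle(recognizer.recognize_google(audio))
            except sr.UnknownValueError:
                pass  # could not understand, keep listening

# listen_loop()  # runs until interrupted
```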
## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like Stack Overflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.
## Accomplishments that I'm proud of
We are proud of the fact that we had built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API.
## What I learned
We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use it more and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", or how to reduce ambient noise.
## What's next for Speech Computer Control
At the moment we are manually running this script through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we developed a Chrome extension that numbers each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
|
losing
|
## Inspiration 🎮
We wanted to make an open world game with cool graphics and lighting. In addition, we wanted to play around with the Unity game engine to create something 3D.
## What's it About 🌃
**Escape from Polis** is an open world game set in a dystopian city. You play as a survivor trying to break the barrier surrounding the city by picking up energy capsules to use in the generator. While trying to find energy capsules, you realize you are **not alone**.
## How we built it 🛠
The game engine we used was **Unity** which uses **C#** as its programming language. We used free public assets for our textures and animations.
## Challenges we ran into 🧱
This was our first ever 3D game made using Unity. We had a lot of trouble deciding what we wanted to do and ended up wasting a lot of time brainstorming ideas. GitHub gave us a lot of trouble, and we weren't able to upload some files to the repo due to the size of our assets.
## Accomplishments that we're proud of ❤️
We added a lot of cool features:
* Double tap to sprint
* Seamless jumping animations
* Enemy AI tracks and follows players within a certain distance
* Light rendering
## What we learned 📚
We are now a lot more familiar with Unity and using C#. Our passion of game development has also greatly increased as we continue exploring our love for video games.
## What's next for Escape From Polis 💸
1. Menu and Pause screen
2. More Levels
3. Multiplayer support
4. Hopefully publish the game onto Steam
|
## Inspiration
We were inspired by the UofTHacks X theme, **Exploration**. The game itself requires **players to explore** all the clues inside the rooms. We also wanted to **"explore" ourselves** at UofTHacks X. Since our team does not have experience in game development, we also got to witness our own 24-hour accomplishment in this new area.
We were also inspired by the **Metaverse** AND **Virtual Reality (VR)**. We believe that the Metaverse will be the next generation of the Internet. The Metaverse is a collective virtual shared space, formed by a combination of physical reality, augmented reality (AR), and virtual reality (VR) to enable users to interact virtually. VR is widely used in game development. Therefore, we decided to design a VR game.
## What it does
Escape room is a first-person, multiplayer VR game that allows users to discover clues, solve puzzles, and accomplish tasks in rooms in order to accomplish a specific goal in a limited amount of time.
## How we built it
We found 3D models and animations in the Unity Asset Store and imported them into different scenes in our project. We used **Unity** as our development platform and **GitHub** to manage our project. In order to allow multiplayer in our game, we used the **Photon engine**. For our VR development, we used the **OpenXR** plug-in in Unity.
## Challenges we ran into
One of the challenges we ran into was setting up the VR hardware. We used a **Valve Index** as our VR device, which requires a DisplayPort connection, while our laptop only had HDMI. We spent lots of time looking for an adapter and could not find one. After asking around for university laptops and friends' machines, we finally found a device with DisplayPort support.
Another challenge was that we were not experienced in game development and started the project from scratch. However, we found awesome tutorials on YouTube and learned game development in a short period of time.
## Accomplishments that we're proud of
We are proud of supporting multiplayer in our game: we learned the Photon engine within an hour and applied it to our project. We are also proud of creating a VR game using the OpenXR toolkit with no previous experience in game development.
## What we learned
We learned about Unity and C# from YouTube. We also learned the Photon engine, which enables multiplayer in our game. Moreover, we learned the OpenXR plug-in for our VR development. To better manage our project, we also learned more about GitHub.
## What's next for Escape Room
We want to allow users to self-design their rooms and create puzzles by themselves.
We plan to design more puzzles in our game.
We also want to improve the overall user experience by making the game run more smoothly.
|
## Inspiration
My college friends and brother inspired me to take on such a good project. This is an addictive game, the same one we used to play on keypad phones.
## What it does
This is a 2D game that includes tunes, graphics, and much more. You can command the snake to move up, down, right, and left.
## How we built it
I built it using the pygame module in Python.
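Below is a minimal pygame sketch of the core arrow-key movement loop, purely as an illustration of the approach rather than the project's actual code; the window size, grid size, and colors are arbitrary choices.

```python
# Minimal pygame sketch of snake-style arrow-key movement (illustrative only).
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))
clock = pygame.time.Clock()

x, y = 200, 200          # head position
dx, dy = 20, 0           # current direction: one grid cell per tick

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_UP:
                dx, dy = 0, -20
            elif event.key == pygame.K_DOWN:
                dx, dy = 0, 20
            elif event.key == pygame.K_LEFT:
                dx, dy = -20, 0
            elif event.key == pygame.K_RIGHT:
                dx, dy = 20, 0

    x, y = (x + dx) % 400, (y + dy) % 400   # wrap around the edges
    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (0, 255, 0), (x, y, 20, 20))
    pygame.display.flip()
    clock.tick(10)

pygame.quit()
```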
## Challenges we ran into
Many bugs arose, such as runtime errors, but I finally managed to fix all of these problems.
## Accomplishments that we're proud of
I am proud that I built a user-interactive program of my own.
## What we learned
I learned to use pygame in Python, and this project also drew me toward Python programming.
## What's next for Snake Game using pygame
Next, I am working on various Python projects, such as an alarm clock, a virtual assistant, a Flappy Bird clone, a health management system, and a library management system.
|
losing
|
## Inspiration
Everyone in society is likely going to buy a home at some point in their life. They will most likely meet realtors, see a million listings, gather all the information they can about the area, and then make a choice. But why make the process so complicated?
MeSee lets users pick and recommend regions of potential housing interest based on their input settings, and returns details such as: crime rate, public transportation accessibility, number of schools, ratings of local nearby business, etc.
## How we built it
Data was sampled by an online survey on what kind of things people looked for when house hunting. The most repeated variables were then taken and data on them was collected. Ratings were pulled from Yelp, crime data was provided by CBC, public transportation data by TTC, etc. The result is a very friendly web-app.
## Challenges we ran into
Collecting data in general was difficult because it was hard to match different datasets with each other and present them consistently, since they all came from different sources. It's still a little patchy now, but the data is there!
## Accomplishments that we're proud of
Finally choosing an idea 6 hours into the hackathon, getting the data, getting at least four hours of sleep, and establishing open communication with each other, as we didn't really know each other until today!
## What we learned
Our backend developer learned to use different callbacks, our front-end developer learned that the Google Maps API is definitely out to get him, and our designer learned Adobe XD to better illustrate what the design looked like and how it functioned.
## What's next for MeSee
There's still a long way to go before MeSee can cover more regions, but if it continues, it'd definitely be something our team would look into. Furthermore, collecting more survey data would definitely be beneficial in improving the variables MeSee makes available to users. Finally, making MeSee mobile would also be a huge plus.
|
## Inspiration
As the lines between AI-generated and real-world images blur, the integrity and trustworthiness of visual content have become critical concerns. Traditional metadata isn't as reliable as it once was, prompting us to seek out groundbreaking solutions to ensure authenticity.
## What it does
"The Mask" introduces a revolutionary approach to differentiate between AI-generated images and real-world photos. By integrating a masking layer during the propagation step of stable diffusion, it embeds a unique hash. This hash is directly obtained from the Solana blockchain, acting as a verifiable seal of authenticity. Whenever someone encounters an image, they can instantly verify its origin: whether it's an AI creation or an authentic capture from the real world.
## How we built it
Our team began with an in-depth study of the stable diffusion mechanism, pinpointing the most effective point to integrate the masking layer. We then collaborated with blockchain experts to harness Solana's robust infrastructure, ensuring seamless and secure hash integration. Through iterative testing and refining, we combined these components into a cohesive, reliable system.
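The write-up does not include code, but as a rough illustration of the general idea of carrying a verifiable hash inside an image, here is a minimal least-significant-bit watermarking sketch in Python. It is a generic stand-in, not the project's diffusion-integrated masking layer, and the hash would be hypothetical here rather than fetched from Solana.

```python
# Illustrative LSB watermark: embed a hex hash into the least-significant bits
# of the red channel. Generic stand-in, not the project's diffusion-level mask.
import numpy as np
from PIL import Image

def embed_hash(img: Image.Image, hex_hash: str) -> Image.Image:
    """Write the hash bits into the red channel's LSBs (image must have
    at least 8 * len(hash bytes) pixels)."""
    bits = np.array([int(b) for byte in bytes.fromhex(hex_hash)
                     for b in format(byte, "08b")], dtype=np.uint8)
    pixels = np.array(img.convert("RGB"))
    red = pixels[..., 0].reshape(-1).copy()
    red[:bits.size] = (red[:bits.size] & 0xFE) | bits
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def extract_hash(img: Image.Image, n_bytes: int) -> str:
    """Read the hash back out of the red channel's LSBs."""
    red = np.array(img.convert("RGB"))[..., 0].reshape(-1)
    bits = red[:n_bytes * 8] & 1
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, n_bytes * 8, 8)).hex()
```

In this toy version the seal survives lossless formats like PNG but not heavy recompression, which is one reason a diffusion-integrated mask is a stronger approach.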
## Challenges we ran into
Melding the complex world of blockchain with the intricacies of stable diffusion was no small feat. We faced hurdles in ensuring the hash's non-intrusiveness, so it didn't distort the image. Achieving real-time hash retrieval and embedding while maintaining system efficiency was another significant challenge.
## Accomplishments that we're proud of
Successfully integrating a seamless masking layer that does not compromise image quality.
Achieving instantaneous hash retrieval from Solana, ensuring real-time verification.
Pioneering a solution that addresses a pressing concern in the AI and digital era.
Garnering interest from major digital platforms for potential integration.
## What we learned
The journey taught us the importance of interdisciplinary collaboration. Bringing together experts in AI, image processing, and blockchain was crucial. We also discovered the potential of blockchain beyond cryptocurrency, especially in preserving digital integrity.
## What's next for The Mask
We envision "The Mask" as the future gold standard for digital content verification. We're in talks with online platforms and content creators to integrate our solution. Furthermore, we're exploring the potential to expand beyond images, offering verification solutions for videos, audio, and other digital content forms.
|
## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now!
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we wanted to target. Next, we discussed the main features needed to ensure full functionality for these populations. We collectively decided to use Android Studio to build an Android app and the Google Maps API for an interactive map display.
## Challenges we ran into
Our team had little to no exposure to the Android SDK before, so we experienced a steep learning curve while developing a functional prototype in 36 hours. Getting the Google Maps API working and figuring out certain UI elements took a lot of patience. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display.
## What we learned
As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue's Perspective:
* Progress bar, fancier rating system
* Crowdfunding

A Finder's Perspective:
* Filter Issues, badges/incentive system

A Fixer's Perspective:
* Filter Issues by score, Trending Issues
|
partial
|
## Inspiration **💪🏼**
Health insurance: everyone needs it, and no one wants to pay for it. As soon-to-be adults, health insurance has been a growing concern for us. Since a simple ambulance ride easily costs thousands of dollars, going without health insurance is a terrible decision in the US. But how much are you supposed to pay for it? Insurance companies publish their rates, but raw formulas tell you nothing about whether you are being ripped off, especially for young adults who have never paid for health insurance before.
## What it does **🔍**
Thus, to avoid being ripped off on health insurance after leaving our parents' household, we developed Health Insurance 4 Dummies: a website that uses a machine learning model to produce a fair estimate of the annual cost of health insurance based on the personal information the user provides. It also uses an LLM to explain in detail how that cost breaks down.
## How we built it **👷🏼♀️**
The front-end is built using Convex with React, creating a UI that takes inputs from the user. The backend is built with Python and Flask, which communicates with two remote services, InterSystems and Together.AI. The ML model for predicting the cost is built on InterSystems using H2O and trained on a dataset consisting of individuals' information and their annual health insurance rates. The explanation of costs is generated with Together.AI's Llama-2 model.
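The project's model runs on InterSystems with H2O, but as a generic illustration of the kind of tabular cost-prediction step involved, here is a minimal scikit-learn stand-in; the dataset file and column names are hypothetical.

```python
# Generic tabular cost-prediction sketch (scikit-learn stand-in for the
# project's H2O/InterSystems model). File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("insurance.csv")                      # hypothetical dataset
X = pd.get_dummies(df.drop(columns=["annual_cost"]))   # one-hot encode categoricals
y = df["annual_cost"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```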
## Challenges we ran into **🔨**
Full-stack development is tedious, especially when functions depend on remote resources. Other challenges included finding good datasets to train the model, authenticating when connecting to the trained model on InterSystems through their IRIS connection driver, and choosing the right model to use from Together.AI.
## Accomplishments that we're proud of **⭐**
We trained and accessed an ML model on a remote database, opening up the possibility of working with massive datasets, and integrated LLMs to provide automated explanations.
## What we learned **📖**
Full-stack development skills, ML model training and use, accessing remote services through APIs, and TLS authentication.
## What's next for Health Insurance 4 Dummies **🔮**
Gather larger datasets to make more parameters available and give more accurate predictions.
|
## Inspiration
This past year, we've seen the effects of uncontrolled algorithmic amplification on society. From widespread [riot-inciting misinformation on Facebook](https://www.theverge.com/2020/3/17/21183341/facebook-misinformation-report-nathalie-marechal) to the explosive growth of TikTok - a platform that serves content [entirely on a black-box algorithm](https://www.wired.com/story/tiktok-finally-explains-for-you-algorithm-works/), we've reached a point where [social media algorithms rule how we see the world](https://www.wsj.com/articles/social-media-algorithms-rule-how-we-see-the-world-good-luck-trying-to-stop-them-11610884800) - and it seems like we've lost our individual ability to control these incredibly intricate systems.
From a consumer's perspective, it's difficult to tell what your social media feed prioritizes – sometimes, it shows you content related to products you might have searched the internet for; other times, you might see [eerily accurate friend recommendations](https://www.theverge.com/2017/9/7/16269074/facebook-tinder-messenger-suggestions). If you've watched [The Social Dilemma](https://www.thesocialdilemma.com), you might think that your Facebook feed is managed directly by Mark Zuckerberg & his three dials: engagement, growth, and revenue.
The bottom line: we need significant innovation around the algorithms that power our digital lives.
## Feeds: an Open-Sourced App Store for Algorithmic Choice
On Feeds, you're in control over what information is prioritized. You're no longer bound to a hyper-personalized engine designed to maximize your engagement: instead, you have the ability to set your own utility function & design your own feed.
## How we built it
We built Feeds on a React Native frontend & serverless Google Cloud Functions backend! Our app pulls data live from Twitter using [Twint](https://pypi.org/project/twint/) (an open-source Twitter OSINT tool). To prototype our algorithms, we employed a variety of techniques to prioritize different emotions & content –
* "Positivity" - optimized for positive & optimistic content (powered by [OpenAI](http://openai.com))
* "Virality" - optimized for viral content (powered by Twint)
* "Controversy" - optimized for controversial content (powered by [Textblob/NLTK](https://textblob.readthedocs.io/en/dev/))
* "Verified" - optimized for high-quality & verified content
* "Learning" - optimized for educational content
Additionally, to add to the ability to break out of your own echo chamber, we added a feature that puts you into the social media feed of influencers – so if you want to see exactly what Elon Musk or Vice President Kamala Harris sees on Twitter, you can switch to those Feeds with just a tap!
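Below is a rough sketch of how sentiment-style scoring for feeds like "Positivity" or "Controversy" might look. It is a stand-in using TextBlob rather than the project's actual OpenAI-powered Positivity scoring, the controversy heuristic is an assumption, and the sample tweets are made up.

```python
# Minimal sketch of sentiment-based feed ranking with TextBlob.
# A stand-in for the project's scoring, not its actual implementation.
from textblob import TextBlob

tweets = [
    "What a beautiful launch, congrats to the whole team!",
    "This policy is a disaster and everyone defending it is wrong.",
    "Interesting thread on how transformers handle long contexts.",
]

def positivity(text: str) -> float:
    return TextBlob(text).sentiment.polarity          # -1 (negative) .. 1 (positive)

def controversy(text: str) -> float:
    s = TextBlob(text).sentiment
    # Crude heuristic: opinionated (high subjectivity) and negative-leaning text.
    return s.subjectivity * max(0.0, -s.polarity)

for feed, score_fn in [("Positivity", positivity), ("Controversy", controversy)]:
    ranked = sorted(tweets, key=score_fn, reverse=True)
    print(feed, "->", ranked[0])
```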
## Challenges we ran into
Twitter's hardly a developer-friendly platform - scraping Tweets to use for our prototype was probably one of our most challenging tasks! We also ran into many algorithmic design choices (e.g. how to detect "controversy") - and drew inspiration from a variety of resource papers & open-source projects.
## Accomplishments that we're proud of
We built a functioning full-stack product over the course of ~10 hours - and we truly believe this emphasis on algorithmic choice is one critical component to the future of social media!
## What we learned
We learned a lot about natural language processing & the different challenges when it comes to designing algorithms using cutting-edge tools like GPT-3!
## What's next for Feeds
We'd love to turn this into an open-sourced platform that plugs into different content sources -- and allows anyone (any developer) to create a custom Feed & share it with the world!
|
## Inspiration
The American Healthcare system is expensive and complicated. Everyone wants the best, most cost-effective insurance plan, but choosing one can feel like a daunting task. We were motivated to build a product that could deal with large benefits summaries containing opaque language and support our users' unique medical needs, all the while maintaining a high level of user personalization.
## What it does
PredictAPulseAI provides users with a health questionnaire whose answers serve as input to an ML model for heart attack risk classification. The model was trained on a dataset of heart attack risk factors and predicts the likelihood of a future heart attack. Users then upload the summaries of benefits of insurance policies to PredictAPulseAI, which combines all of this data to find the most cost-effective policy given your heart attack risk.
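As a rough illustration of that final comparison step, here is a minimal sketch that combines a predicted risk probability with a few policy parameters to rank plans by expected annual cost. The cost formula, sample plans, and numbers are hypothetical, not the project's actual pricing model.

```python
# Illustrative plan comparison: expected annual cost = premiums plus the
# risk-weighted out-of-pocket exposure. Formula and sample plans are hypothetical.
def expected_annual_cost(plan: dict, risk: float, event_cost: float = 50_000) -> float:
    out_of_pocket = min(event_cost, plan["oop_max"])
    return plan["monthly_premium"] * 12 + risk * out_of_pocket

plans = [
    {"name": "Bronze", "monthly_premium": 280, "oop_max": 9_000},
    {"name": "Gold",   "monthly_premium": 450, "oop_max": 3_500},
]

risk = 0.12  # e.g. probability output by the classification model
for plan in sorted(plans, key=lambda p: expected_annual_cost(p, risk)):
    print(plan["name"], round(expected_annual_cost(plan, risk)))
```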
## How we built it
Frontend: JS, React, Next.js, TypeScript, HTML/CSS, Material UI
Backend: Python, Flask, MindsDB API, Tesseract OCR, OpenAI GPT-3.5
Database: SQL, CockroachDB
Classification Model: Heart Attack Kaggle Dataset, MindsDB
## Challenges we ran into
Some of the challenges we ran into were dealing with the limitations of Intel Cloud, specifically its inability to connect to Cockroach DB and port-forward for our custom backend API.
## Accomplishments that we're proud of
We successfully implemented our core functions and replaced cookie usage.
We also bridged the front end to the back end, integrated our APIs, and implemented the database.
## What we learned
We learned how insurance comparisons can help users based on their current health conditions. We also expanded our knowledge of SQL with CockroachDB, ML prediction models with MindsDB, and Flask web servers with our custom-written API endpoints.
## What's next for PredictAPulseAI
This project is not limited to predicting heart attacks and reducing treatment costs for our users. Other leading causes of death, such as cancer, could be predicted through the same pipeline. Most importantly, our project could save lives.
We plan to adapt our project to an intuitive mobile application to increase accessibility. We also plan to effectively market our idea and utilize sponsorships from insurance companies to gain funding.
|
partial
|
## Inspiration
Seeing that physical fitness was a rather under-served aspect of quarantine, we were excited to pursue an opportunity that would both increase interpersonal engagement and get people up and moving, ultimately improving both mental and physical fitness.
## What it does
Our application connects individuals through two major approaches: community and competition. By letting users search for nearby groups that share their interests, we better connect them with people they can relate to. Within these groups we offer various minigames; whether it is lifting a metaphorical elephant up a mountain (that is, doing a large number of squats and pushups to accomplish the task) or racing a metaphorical zebra (running and/or biking a long distance), the objective is not only to complete the task but to do so with as much group involvement as possible. The more evenly spread the involvement is, the higher the score rewards. Additionally, as teams work together to complete challenges, their continuity streaks (how many days in a row a significant portion of the group is active) contribute additional score increases.
Aside from this collaborative aspect, however, we also introduce a competitive side to the platform with challenges. Challenges can be hosted by anybody, and they can be set to either public or private. The concept of these challenges is to compete against everyone else in the group for the specified exercises. For example, if the exercises are to run, bike, and do squats, the user who does the most overall (has the highest score) wins the tournament (gets additional score bonus + profile awards).
To encourage fair and safe competition, PhysiPal also includes a GCP-hosted ML model that receives live camera footage from the mobile application and analyzes form to verify push-ups while also providing feedback if form is incorrect. It also tracks distances using GPS for accurate measurements during runs and biking. Through these features, we create a community of users who are interested in both fitness as well as social interconnectivity.
## How we built it and Challenges we ran into
This application was primarily built with Flutter on the front-end. The back-end is a mix of GCP products (including Firebase), Python, websockets, PyTorch, and OpenCV integrations. The front-end was quite challenging and still has a few large holes compared to the original UI/UX plan, mainly because Flutter was a relatively new framework for many of us.

The backend consists of two endpoints: a websocket control-plane endpoint between server and clients, and an RTMP data-plane endpoint. The RTMP endpoint is managed by an RTMP server, and the code here consists of the websocket server. All communication between client and server is handled as JSON messages. The protocol has two fields: the type of the message and an optional data payload. After connecting, the client sends a key-type message (without data) to request the stream key for the RTMP endpoint, and the server responds with a key-type message carrying the key in the data field. The client then establishes a connection to the RTMP endpoint and serves the video stream (for example, from the camera feed of a mobile device). Once the RTMP connection is established, the client sends a connected-type message (without data) to inform the server. Upon receiving it, the server begins the inference subroutine (running on a separate thread) and continuously sends the current result to the client as push-type messages containing the workout statistics derived from the machine-learning evaluation of the video stream. The client can terminate the RTMP stream and websocket connection at any time; once the websocket connection closes, the ML evaluation stops immediately. A minimal sketch of this handshake appears below.

With RTMP clients on mobile devices, this system allows real-time, machine-learning-based analysis of a workout, and the client can display those statistics as the application needs. The server-based design has many limitations, and we chose it entirely for the convenience of implementing the ML inference system in a server environment. With more mature machine-learning support on mobile devices (such as the OpenCV and PyTorch libraries), the entire system could be integrated into the device itself with no server dependency, enabling lower latency, lower bandwidth and battery consumption, and offline usage. On the back-end we also had a lot of trouble making the exercise analysis run in real time, building efficient data structures for larger queries, and learning how to use GCP to host the ML model.
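The sketch below shows the client side of that control-plane handshake using the Python `websockets` library. The endpoint URL and exact message field names are assumptions drawn from the description above, not the project's actual code.

```python
# Minimal sketch of the client side of the websocket control-plane handshake.
# The URL and field names are assumptions, not the project's actual client.
import asyncio
import json
import websockets

async def control_plane(uri: str = "ws://example.com:8765"):
    async with websockets.connect(uri) as ws:
        # 1. Request the RTMP stream key.
        await ws.send(json.dumps({"type": "key"}))
        reply = json.loads(await ws.recv())
        stream_key = reply["data"]

        # 2. (The client would now push camera video to the RTMP endpoint
        #    using this key, then report that the stream is live.)
        await ws.send(json.dumps({"type": "connected"}))

        # 3. Receive rolling workout statistics pushed by the server.
        async for message in ws:
            msg = json.loads(message)
            if msg["type"] == "push":
                print("workout stats:", msg["data"])

if __name__ == "__main__":
    asyncio.run(control_plane())
```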
## What's next for PhysiPal
Next up, we aim to complete the application. There are quite a few holes (both in functionality as well as design) from our original thought processes. Additionally, the idea itself has room for growth. By incorporating more exercises and challenges, PhysiPal aims to appeal towards a larger audience. The application can even be expanded to involve sponsors of various sorts, and transform the platform into one that has individuals pursuing fitness, supported by sponsors who in turn donate funds to charities.
|
## Inspiration
We had never worked with augmented reality before and believed there was no better time to learn how to use it than at HackMIT.
## What it does
Our prototype recognizes landmarks in images, and gives a brief synopsis on each travel destination as well as links for further information.
## How we built it
We used the Unity engine, alongside Vuforia and Paint3D in order to make an augmented reality user experience. Paint3D was utilized to create all the 3D models showcased, and all information was obtained via Wikipedia. Photos obtained via Google Images.
## Challenges we ran into
All of us came into this hackathon with little relevant coding experience – and a lot of time was spent learning how to build an Android app with Unity, and how to integrate an AR engine into our solution. We had further plans for integrating multiple APIs, but because of the time constraint this proved extremely difficult, as we just finished our prototype application by the end of the hackathon. This said, we’re hopeful for the development of these features in the future!
## Our Accomplishments
We were successful in gaining proficiency in AR development and were able to program an application that had our own 3D models over a wide array of destination images. It also provided an accurate summary of the location recognized consistently and in a variety of lighting conditions.
## What's next for BURST?
There are a lot of different directions we could take our prototype. For one, we could integrate machine learning into our target detection scripts – so we’re able to identify cities worldwide beyond what we’ve pre-specified. We’d like to superimpose multiple labels at once over images we use – enabling us to label points of interest within each image we take. In addition, we could integrate third party APIs to potentially provide weather conditions, a map, or even travel information to your scanned location – making the app like a one stop shop towards paradise.
Not to mention, while the app currently runs on Android, we built it under Unity to ensure seamless, cross-platform compatibility – we’d like to test iOS and Windows capabilities soon.
|
## Inspiration
Being sports and fitness buffs, we understand the importance of proper form. Incidentally, while suffering from a wrist injury himself, Mayank thought of this idea at a gym where he could see almost everyone using incorrect form for a wide variety of exercises. He knew it couldn't be impossible to make something easily accessible, accurate at recognizing incorrect exercise form, and, most of all, free. He was sick of watching YouTube videos and just trying to emulate the guys in them with no real guidance. That's when the idea for Fi(t)nesse was born, and luckily he met an equally health-passionate group of people at PennApps, which led to this hack: an entirely functional prototype that provides real-time feedback on push-up form. It also lays down an API that allows expansion to a whole array of exercises or even sports movements.
## What it does
A user is recorded doing the push-up twice, from two different angles. Any phone with a camera can fulfill this task.
The data is then analyzed and within a minute, the user has access to detailed feedback pertaining to the 4 most common push-up mistakes. The application uses custom algorithms to detect these mistakes and also their extent and uses this information to provide a custom numerical score to the user for each category.
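As a rough illustration of the kind of geometric check such scoring can involve, here is a minimal sketch that scores body alignment (e.g. sagging or piking hips) from three pose keypoints. The keypoint coordinates and the tolerance value are hypothetical, and this is not the project's actual algorithm.

```python
# Illustrative form check: measure the shoulder-hip-ankle angle from pose
# keypoints and score how straight the body line is. The keypoints and the
# 15-degree tolerance are hypothetical, not the project's actual scoring.
import numpy as np

def angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def body_line_score(shoulder, hip, ankle, tolerance=15.0):
    """Score 0-100: 100 means a perfectly straight shoulder-hip-ankle line."""
    deviation = 180.0 - angle(shoulder, hip, ankle)
    return max(0.0, 100.0 * (1.0 - deviation / tolerance))

# Example with made-up (x, y) pixel coordinates from a pose estimator.
print(body_line_score(shoulder=(100, 200), hip=(220, 215), ankle=(340, 230)))
```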
## How we built it
Human pose detection from a simple camera was achieved with OpenCV and deep neural nets. We tried using both the COCO and MPI datasets for training data and ultimately went with COCO. We then set up an Apache server running Flask on Google Compute Engine to serve as an endpoint for the input videos. Lacking access to GPUs, we used a 24-core machine on the Google Cloud Platform to run the neural nets and generate pose estimations.
The Fi(t)nesse website was coded in HTML+CSS while all the backend was written in Python.
## Challenges we ran into
Getting the pose detection right and consistent was a huge challenge. After a lot of tries, we ended up with a model that works surprisingly accurately. Combating the computing power requirements of a large neural network was also a big challenge. We were initially planning to do the entire project on our local machines, but when they kept slowing down to a crawl, we decided to shift everything to a VM.
The algorithms to detect form mistakes and generate scores for them were also a challenge since we could find no mathematical information about the right form for push-ups, or any of the other popular exercises for that matter. We had to come up with the algorithms and tweak them ourselves which meant we had to do a LOT of pushups. But to our pleasant surprise, the application worked better than we expected.
Getting a reliable data pipeline setup was also a challenge since everyone on our team was new to deployed systems. A lot of hours and countless tutorials later, even though we couldn't reach exactly the level of integration we were hoping for, we were able to create something fairly streamlined. Every hour of the struggle taught us new things though so it was all worth it.
## Accomplishments that we're proud of
-- Achieving accurate single body human pose detection with support for multiple bodies as well from a simple camera feed.
-- Detecting the right frames to analyze from the video since running every frame through our processing pipeline was too resource intensive
-- Developing algorithms that can detect the most common push-up mistakes.
-- Deploying a functioning app
## What we learned
Almost every part of this project involved a massive amount of learning for all of us: from deep neural networks and huge datasets like COCO and MPI, to learning how deployed app systems work and the ins and outs of the Google Cloud Platform.
## What's next for Fi(t)nesse
There is an immense amount of expandability to this project.
Adding more exercises/movements is definitely an obvious next step. Also interesting to consider is the 'gameability' of an app like this. By giving you a score and sassy feedback on your exercises, it has the potential to turn exercise into a fun activity where people want to exercise not just with higher weights but also with just as good form.
We also see this as being able to be turned into a full-fledged phone app with the right optimizations done to the neural nets.
|
losing
|
# Summary
**Just two weeks ago, two devastating earthquakes struck Turkey and Syria, leaving 46,000 dead and many more lost under the remains of buildings.**
**Emergency response efforts were significantly impeded by inefficient recovery protocols to screen the large area of land affected, depleting the chances of survival for those still trapped under buildings.**
**This urgent crisis demands an effective solution. We present Aziz, a technological device that screens for humans still trapped under debris, empowers rescue workers on their mission to save lives, and revolutionizes our preparedness for future crises.**
# Inspiration
On February 6th, 2023, two earthquakes of magnitude 7.8 and 7.5 struck Turkey and Syria, causing the collapse of over 6,000 buildings. Because these earthquakes took place in the middle of the night, very few people had time to escape, trapping many underground without a route to safety.
Since that night, many rescue workers have been mobilized to dig through the rubble in search of remaining victims. However, these rescue efforts have been extremely time-consuming and labor-intensive, which have compounded the consequences for those still trapped underground, waiting with waning hope and praying that someone will hear their feeble voices.
One of our team members has lost both family and friends living in Adana, Turkey, to these earthquakes. Those who have survived from her home city have been forced to evacuate to other parts of the country, causing a prosperous region to convert to what resembled a waste zone overnight. Our solution Aziz was motivated by the devastating state that Turkey and Syria are now in and aims to respond to the harsh lessons learned from this incident.
One of the major factors that impeded an effective response to earthquake recovery was the inefficient and demanding process that was needed in order to identify bodies amongst the rubble. With over 6,000 buildings collapsed and each one requiring either an overwhelming number of hours or highly advanced construction equipment to dig through, there was no feasible method to rescue victims in time. As a result, groups of volunteers resorted to waiting in silence, listening for a voice to emerge from under the rubble, and upon hearing a voice digging with construction tools, pieces of debris, or in some cases, just their hands. This process was repeatedly followed until potential victims could be recovered, alive or not.
As rescue workers persisted day and night with this labor-intensive process, victims still trapped under buildings were forced to wait in agony as they lost strength and hope that someone would come to the rescue. Even at the present moment, rescue workers continue to dig through remnants of buildings, with many bodies discovered daily. This crisis has blatantly exposed many technological and ethical limitations that pushed this situation to be even worse than it already was.
Aziz seeks to provide the technological capacity needed for rescue workers to answer the hopes and prayers of trapped victims wishing to see sunlight once again. The name “Aziz” itself is a Turkish word that means “dear,” representing the care that unites communities in times of emergency. Our platform complements the care that is shared between communities by equipping them with robust technology to respond to times of crisis and to identify humans still trapped underground.
Still, the devastating situation in Turkey is not a one-time event. Natural disasters and international crises are bound to occur yet again, each time testing our preparedness and capacity to respond. Through Aziz, we lay a technological foundation to redefine how crises are dealt with and bring equity, security, and safety as we let our endearing care unite us closer, especially during emergencies.
# What Aziz does
Aziz is an all-in-one, comprehensive monitoring system that integrates various metrics to assess signs of human life from beneath rubble, allowing rescue workers to find victims trapped underground. First, our sensing technology contains a sensitive voice detection system to identify voices originating from under the rubble that may be difficult for the human ear to hear. Furthermore, our system contains a carbon dioxide monitor, of which elevated levels measured through ppm-range changes in atmospheric CO2 composition indicate respiration. Our platform also integrates an altitude sensor for gauging the depth of potential victims with respect to sea level to a high degree of accuracy. Finally, our system harnesses user-controlled ultrasonic sensors that scan across areas and provides real-time information on the 3D landscape of a rescue worker’s surroundings, even in complete darkness. Ultimately, these four capabilities work in complement with one another to support rescue teams as they try to identify signs of life amongst difficult terrain and in dangerous environments.
While Aziz provides the technological foundation for a robust sensing system with responsive and live data on metrics including audio, respiration levels, altitude, and surrounding landscape, this sensor system also carries societal implications that can be realized during rescue missions. Most simply, Aziz can be used as a compact and portable device that can be attached to and controlled by rescue workers digging through rubble. For example, if a rescue worker comes across a small opening too small for them to fit through while crawling through tunnels under rubble, they may use this device to survey the area that they are not able to reach and decide whether to initiate efforts in that direction. Alternatively, Aziz, which is lightweight and small in size, can be integrated with drones for rapid scanning over large areas. Aziz could be assembled onto drones and first used to scan over buildings and then survey inside buildings using an additional LED to light up its surroundings and provide stability in its flying movement. Finally, Aziz could be coupled with small robots that can be fished inside small openings and retrieved from deep underground. While the possibilities for implementation of Aziz are broad and well-defined, even on its own, this technology is a powerful and impactful device critical to rescue missions.
However, beyond just the technological level, Aziz works on a societal level to provide an effective solution that empowers communities under crisis, especially those with low access to advanced technology or financial flexibility. Aziz is particularly impactful in developing or underprivileged regions as it has low-resource and low-cost capabilities. We designed this platform to operate without WiFi, which alleviates a significant burden in developing areas and immensely widens its potential for impact. Moreover, Aziz is very inexpensive as it is composed of standard, low-cost parts assembled onto a 3D printed scaffold, all of which sum to a small price that will ensure that all have access to this critical tool. With its technologically robust and societally impactful capabilities, Aziz provides an effective solution to disaster preparedness, which is a significant concern that carries consequences beyond the situation in Turkey and that will continue to impact future generations.
# How we built Aziz
Aziz was built using an Arduino microcontroller and complementary modules. The core of the circuit is the Arduino MKR WiFi 1010, which is connected to an Arduino MKR IoT board equipped with several built-in sensors, such as temperature, humidity, barometric pressure, gas (air quality, VOC), ambient light, and a gyroscope. This combination gives the device sufficient computational power and access to many useful sensors while maintaining a compact build. Initially, the team planned to implement the preliminary detection via drone using the RCWL-0516 microwave radar module for Arduino; however, hardware limitations led us to use the HC-SR04 ultrasonic sensor as a complementary device that serves as a temporary alternative. The 3D-printed scaffold holds all the components together, with an attached SG90 servo motor holding and directing the HC-SR04 ultrasonic sensor on one side and the joystick on the other. The electrical circuit was soldered to a common board and connected to a computer to upload and test code in C++ using the Arduino IDE.
# Challenges we ran into
**Technological limitations:** The first challenge we faced was the limited variety of sensors and other hardware that could be used to generate inputs for screening for signs of life. After reading into the literature, we decided that RCWL-0516 microwave radar, which can sense heartbeat and heart rate through walls, would be most suited to our needs, but were unable to obtain this. Hence, we chose the next best alternative, an ultrasonic sensor, which still provided similar insight into spatial organization in the dark. Nonetheless, in the future, it would be possible to implement alternatives like RCWL-0516 microwave radars at any point without dramatic impact on the weight of the device.
**Processing external metrics for detecting the likelihood of life:** While we were able to employ various data-collecting sensors to gather external information on abiotic factors, the process of converting these abiotic metrics to biotic predictions was challenging. We particularly struggled while trying to determine a threshold for carbon dioxide ppm concentration that was indicative of respiration. We had to do extensive reading of scientific literature to understand carbon dioxide levels across various terrains before deciding upon a threshold value that separated carbon dioxide levels in outdoors places without human inhabitants from carbon dioxide levels of indoor and outdoor human-inhabited places.
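As a rough illustration of that thresholding step (shown in Python for readability; the actual firmware runs in C++ on the Arduino), here is a minimal sketch. The ~420 ppm ambient baseline and 600 ppm alert cutoff are typical published values used for illustration, not the team's calibrated threshold.

```python
# Illustrative respiration check from CO2 readings. The ambient baseline and
# alert cutoff are typical literature values, not the team's calibrated threshold.
AMBIENT_PPM = 420.0      # approximate outdoor atmospheric CO2
ALERT_PPM = 600.0        # illustrative cutoff suggesting nearby respiration

def respiration_likely(readings_ppm):
    """Flag possible respiration if the sustained average exceeds the cutoff."""
    if not readings_ppm:
        return False
    avg = sum(readings_ppm) / len(readings_ppm)
    return avg >= ALERT_PPM and avg > AMBIENT_PPM * 1.25

print(respiration_likely([455, 610, 702, 688]))  # True for this made-up sample
```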
**Practical limitations:** In order to keep our solution low-cost, low-resource, and hence widely accessible, we placed additional constraints on ourselves during the design process to ensure the final product would meet these visions. For example, in order to ensure that our system could work without a WiFi connection, we decided to take a hardware approach where all code could be loaded onto a microcontroller and used without compromised impact in remote regions, rather than developing a WiFi-reliant website.
# Accomplishments that we're proud of
**An interdisciplinary approach to designing a technological solution:** In order to develop our final product, we drew from software coding skills in C++, 3D modeling skills in Fusion 360, societal knowledge of the state of Turkey and Syria’s crisis, and more. We are happy to see how we were able to draw from different skill sets to develop a cohesive solution that is a product of interdisciplinary collaboration. Our multidisciplinary approach allowed us to come up with a more comprehensive solution that embeds essential knowledge from different fields, and we’re happy to see how these all came together in the end to support a more robust final product.
**Going from strangers to a tightly knit team:** Prior to coming to TreeHacks, none of us had met in person and we were barely familiar with one another. However, after 36 hours of hacking together, we have formed a closely knit, collaborative team and feel very close with one another. Without knowing whether we’d get along, we were open to each others’ ideas and willing to take risks, allowing us to foster effective collaboration both when our ideas agreed and when our ideas differed.
**Integrating interests and strengths:** We are proud that our final product is a mosaic of everyone’s interests and strengths. While we integrated Sam’s strengths in 3D modeling and Dilnaz’s interests developing biomedical models from abiotic data, we coupled these with Selin’s interest in designing tools for identifying earthquake victims trapped under rubble. When we look at our final design, we see a reflection of our own ideas and visions as well as those of our teammates’, each time being able to pinpoint how ideas were proposed and how they developed through collaboration to become a part of this final mosaic.
# What we learned
Throughout this weekend, we learned how to overcome challenges by optimizing our product design path. As we faced technological limitations, we creatively brainstormed suitable alternatives that would allow us to preserve the initial project vision but reach that vision through an alternate path, whether it be replacing radio wave sensors with an ultrasound sensor to establish a proof-of-concept model or 3D-printing a scaffold to hold the various electronic components together rather than leave them connected by flimsy wires. Additionally, even when we achieved our general vision, we still performed iterations of testing to find alternate approaches that potentially worked even better. For example, as we were implementing our ultrasonic sensor to detect distances and outline the surrounding landscape, we initially implemented auditory signals whose frequency increased as the sensor approached the nearest object. Even though this achieved our fundamental vision of providing a readout in response to distance from objects, we realized that the mix of auditory signals with visual signals displayed on the user interface created too many senses for the user to focus on, so we decided to just represent distance readouts using the visual user interface.
Additionally, given that we were under an extreme time constraint this weekend, we learned the importance of fully thinking through ideas early on before diving headfirst into the build phase. We learned that 5 minutes of early brainstorming can save 5 hours down the road and that fully fleshing out ideas gives a stronger team vision and paves a clearer development path. We particularly experienced this when deciding on how to implement our product; we realized we could either pursue a drone add-on or a system connected to robotic platforms that would be used like a fish hook in small cracks. Uncertainty on how to approach this decision as we were constructing created some hesitation and we realized that the best course of action at that time was to thoroughly address the decision before moving forward with a half-clear idea in mind. Once we discussed and came to a conclusion, we felt much more confident in development and were able to resume at a quicker pace than before, achieving a more cohesive vision at the end.
# What's next for Aziz
We plan to implement thermal infrared sensors, which we were unable to obtain this weekend, to replace our ultrasonic sensors. Infrared imaging will enable us to capture body-heat signatures and draw more precise conclusions about the presence and location of humans. In addition, adding inputs that can detect a person breathing at close range will increase the overall accuracy of prediction. Possible candidates are sensors for acetone, ammonia, and isoprene, metabolic tracers emitted by human breath and skin, all of which have precedent in the literature as markers for the presence of humans.
Another important next step is creating real-world impact through implementation. We see the future of the project as a device that can be dropped into the rubble from a drone that has its own microwave/thermal IR sensors to detect possible life signals within an area. Rescuers can use drones to deliver Aziz deep inside the rubble, to places humans can't reach. This could be achieved by pairing our sensor system with a spherical robot, specifically a polyhex edge skeleton, which has adjustable sides/legs capable of maneuvering the electronic components across obstacles inside buildings. The adjustable polyhex design would ensure that Aziz can move inside the rubble and transform into different shapes depending on the environment. The shapes would be determined through pressure on the legs and would allow thorough screening of building remnants before rescue workers' labor-intensive efforts.
# Ethics
The situation in Turkey and Syria was a very large demonstration of an ethical crisis in that those who lived in more remote regions were unable to receive the life-saving support they needed—including heavy duty construction equipment, search and rescue teams, and medical attention. As a result, depending on the regions in which they lived, certain groups of people were more likely to be rescued in a quick enough time to still be found alive, as opposed to others who were less fortunate.
This ethical crisis creates a need for more equitable emergency recovery protocols that not only provide equal treatment depending on location and status, but that also provide equal chances of being saved instead of equal chances of not being found. Through technological innovation, we can remedy these ethical dilemmas and move toward a more equitable future, although development of these fair technologies will also require more ethical considerations. Our proposed life sensing system has both positive and negative ethical implications, each of which must be thoroughly considered to ensure that the platform reaches its intended goal rather than amplify any unintended consequences.
**Ethical implications of Aziz:**
* At the highest level, Aziz is playing a foundational role in determining whether or not lives are saved. If not advertised or implemented properly, this aspect could create major repercussions for Aziz, especially in the situation where Aziz fails to detect humans and causes search and rescue teams to overlook those victims (type II error). To carefully navigate this ethical concern, we will first be very deliberate during marketing efforts to clearly state that we are purely a data collecting platform and that we make no ardent statements on our ability to save the lives of those trapped under rubble, as this could lead us into consequences where we receive blame for overlooking humans in need of saving. Second, to address this ethical challenge, we will develop our platform to be as objective as possible; we will give explicit data values when possible instead of making any human-influenced subjective statements and will ensure that every indication has a quantitative basis.
* Moreover, our platform needs to be carefully reviewed to prevent biases in function and output, which favors the survival of some victims over others. Although we have a CO2 sensor that is read out as either too high or too low, the question becomes what is this CO2 threshold with respect to? What part of the globe? How applicable is it to other parts? Given that these thresholds will vary from region to region, we need to ensure that we either use thresholds that are inherently bias-free or remove these thresholds and purely reflect quantitative values. One particularly successful way in which our system avoids preferential search and rescue is due to the fact that Aziz doesn’t rely on WiFi. Because WiFi access can be highly variable in remote parts of the world, we’ve specifically designed our system to operate WiFi-free and in a wireless manner, ensuring that WiFi availability does not create an ethical issue of unequal accessibility.
* Another ethical risk to consider are the political implications of this device, given tensions in cross-national relationships. Since this product has been designed and developed in the US, when it is implemented abroad, it may imply messages about the US political system or create resentment toward the country. For example, if the device were to miss a person trapped under rubble, this could deflect blame onto the US and heighten political tensions between other countries. Likewise, depending on where this product is implemented, this could also suggest political inclinations of the US and create tension between nations. If this technology were to reach Syria in support of efforts to save the lives of those trapped under rubble, an initiative which happens to currently be spearheaded in rebel regions by the White Helmets, the Syrian government could see this as a threat and further expand its resentment toward the US.
* Because our platform collects data from its surroundings, there could be data privacy concerns and cases of the unintentional collection of sensitive data. The microphone on our system detects audio signals to determine if voices are present, but this could pick up on voices of people who do not consent to it. Since surveying everyone in the region to verify consent of the use of this technology conflicts with the purpose of uncovering hidden victims, we may need to remove certain functionalities due to this or use our platform solely for real-time data monitoring without involving any data storage.
Discussing these ethical considerations has been one of our early steps in ensuring that we do not create unintended ethical implications. The next steps in addressing ethical concerns will involve two aspects: product design and marketing. With respect to product design, although we already have features implemented to bolster the ethical side of our device, there are further revisions we can make. For example, we could replace the carbon dioxide sensor output from “high” or “low” levels to be a spectrum of values or to provide simply the quantitative value of carbon dioxide content in the atmosphere. Second, when it comes to marketing our platform, we need to cleverly develop a marketing strategy that doesn’t overpromise its benefits to users. Rather than having the slogan of rescuing the lives of those trapped under collapsed buildings, we will focus on objective data collection-oriented capabilities of this comprehensive and widely accessible technology.
Ultimately, through Aziz, we hope to provide fair and equitable technological solutions that promote wellbeing rather than compound ethical consequences. Aziz has a powerful potential to embolden underrepresented populations and provide critical technologies that will be widely accessible during emergencies. Achieving this vision will require careful planning, clever design, and strategic marketing at each step of the way to ensure that this platform can reach its full potential for impact.
# Bibliography
[1] <https://www.technology.org/2018/04/23/portable-device-to-aid-rescue-workers-in-searching-for-humans-trapped-under-rubble/>
[2] <https://spinoff.nasa.gov/FINDER-Finds-Its-Way-into-Rescuers-Toolkits>
[3] <https://www.jpl.nasa.gov/videos/finder-radar-for-locating-disaster-victims>
[4] <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1455483/>
[5] <https://www.mdpi.com/2218-6581/1/1/3>
|
# How it started
We were brainstorming ideas, passionate about a project with social impact. The IBM and Microsoft sponsor challenges were especially interesting, and we began discussing problem areas during natural disasters. **Communication is the most valuable asset during a disaster.** Not only do communication lines go down and cell coverage get congested, but phones break and lose battery. Knowing where people are, and having a way of connecting with anyone who can help, provides peace of mind for civilians and priceless information for first responders.
# What we made:
During natural disasters, friends and loved ones go missing. The phone networks go down and the first responders have no idea where to start looking or even if they need to. Did he/she leave and is safely in a hotel hundreds of miles away, or are they stranded in a flooded home in need of immediate help? **It is critical for first responders to know if they should be looking for an individual and where to begin**.
That is why we have developed Response, an app that will automatically and preemptively begin sending an **encrypted packet of info containing the user's position, phone battery level, and other relevant info to a server.** This server will provide tools for first responders to quickly generate a list of last known locations around a geographic point to begin their search with some knowledge of each individual’s context.
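As a rough sketch of what building and encrypting such a status packet might look like, here is a minimal Python example using Fernet symmetric encryption as a stand-in; the payload fields, key handling, and crypto scheme of the actual app are not specified in this write-up and are assumptions here.

```python
# Illustrative status packet: serialize location/battery info and encrypt it
# before sending. Fernet and the field names are stand-ins; the app's actual
# payload format and encryption scheme are not described in the write-up.
import json
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, a pre-shared/provisioned key
cipher = Fernet(key)

packet = {
    "user_id": "demo-user",        # hypothetical identifier
    "lat": 29.7604,
    "lon": -95.3698,
    "battery_pct": 41,
    "timestamp": int(time.time()),
}

ciphertext = cipher.encrypt(json.dumps(packet).encode())
print(ciphertext[:40], b"...")     # opaque bytes ready to send or cache on a peer

# A relay node or the server holding the same key can decrypt:
print(json.loads(cipher.decrypt(ciphertext)))
```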
One of our main concerns was the cellular network becoming congested or even going down, so we have also developed a hardware add on that enables peer to peer communication. This will allow these data packets to be cached locally on nearby devices and propagated outwards until the information reaches a node that is connected to the internet.
# How we made it:
The project has two distinctive components, hardware and software.
## Hardware
The hardware was initially constructed using breadboards, Arduinos, and other simple hardware components. The RadioHead open-source library was utilized to facilitate radio communications. The hardware was then rebuilt on a soldered breadboard to create a neater demonstration package.
## Software
The software was the larger of the two components, utilizing SQL databases, Azure tools, and the Android Dev SDK. We connected user accounts, along with status updates, to the SQL database through the Azure webserver.
No team members had used Azure before, which presented challenges in integrating the SQL database and the server backend. Comically enough, one of the larger challenges was that phones only have one port, which made it impossible to simultaneously connect to a laptop for Android debugging and attach our hardware.
## What we learned:
Inherently, the team became much more familiar with Azure and wireless communication. With Azure, we are now comfortable hosting web servers for database management and backend support. We also learned more about the challenging issues faced during disasters, and the current projects being done for relief efforts.
## What’s next:
The next step is to develop a data visualization functionality on the server, so first responders can view the data on a map in an intuitive way and quickly judge a situation to create a plan. Additionally, using speech-to-text for the status updates would speed up the process in stressful situations, and allow users to continue evacuating as normal.
On the hardware side, we would like to integrate the hardware module with a battery pack to extend the phone's battery life and support higher-power transceivers for peer-to-peer communication. We would also like to add sensors to the hardware module to give more situational context, such as air quality, and integrate with a smartwatch for heart-rate monitoring.
Overall, we wish we had narrowed our scope more for this hackathon, but are looking to further develop this for future competitions.
|
## Inspiration
eCommerce is a field that has seen astronomical growth in recent years and shows no signs of slowing down. With a forecasted growth rate of 10.4% this year, up to $6.3 trillion in global revenue, we decided to tackle Noibu's challenge to develop an extension that aids eCommerce developers with the impossible task of staying ahead amid the fierce competition in this space, all whilst providing tremendous, unique value to shoppers and eCommerce brands alike.
## What it does
Our extension, ShopSmart, aims to provide developers and brands with an accurate idea of how their website is naturally being used. Unlike A/B testing, which forces a participant to use a given platform and provide feedback, ShopSmart analyzes user activity on any given website and produces a heatmap showing their exact usage patterns, all without collecting user-identifying data. In tandem with the heatmap, ShopSmart provides insights into the sequences of actions taken on the website, correlated with the heatmap, allowing an even deeper understanding of what average usage truly looks like. To incentivize consumers, brands may elect to provide exclusive discount codes only available through ShopSmart, giving shoppers a kickback for their invaluable input to the brand partners.
## How we built it
ShopSmart was built using the classic web languages HTML, CSS, and Javascript, keeping it simple, lightweight, and speedy.
## Challenges we ran into
We ran into several challenges throughout our development process, largely because an extension that is complex in theory had to be executed entirely in HTML, CSS, and JavaScript (the only languages allowed for developing extensions). One issue we had was finding a way to overlay the heatmap over the website so as to visually show the paths the user took. While we were able to solve that challenge, we were sadly unable to finish fully integrating our database into the extension within the given timeframe, due to the frequency of data collection/communication and the complexity of the data itself.
## Accomplishments that we're proud of
Our team is very proud of being able to put out a working extension capable of tracking usage and overlaying the resulting heatmap data over the used website, especially as neither of us had any experience with developing extensions. Despite not being able to showcase our extensive database connections in the end, as they were not finalized, we are proud of achieving reliable and consistent data flow to our cloud-based database within our testing environment. We are also proud of coming together and solving a problem none of us had considered before, and of course, of the sheer amount we learned over this short time span.
## What we learned
Our hackathon experience was truly transformative, as we not only gained invaluable technical knowledge in Javascript, but also cultivated essential soft skills that will serve us well in any future endeavors. By working together as a team, we were able to pool our unique strengths and collaborate effectively to solve complex problems and bring our ideas to life.
## What's next for ShopSmart
The next steps for ShopSmart are to focus on expanding its capabilities and increasing its reach. One area of focus could be on integrating the extension with more e-commerce platforms to make it more widely accessible to developers and brands. Another area for improvement could be on enhancing the heatmap visualization and adding more advanced analytics features to provide even deeper insights into user behavior. With the help of Machine Learning, developers and brands can utilize the data provided by ShopSmart to better recognize patterns within their customer's usage of their site to make better adjustments and improvements. Additionally, exploring partnerships with e-commerce brands to promote the extension and offer more exclusive discount codes to incentivize consumers could help increase its adoption. Overall, the goal is to continuously improve the extension and make it an indispensable tool for e-commerce businesses looking to stay ahead of the competition.
|
losing
|
## Inspiration
Why settle for 2D when you can have 3D? As avid entrepreneurs and salesmen, we always strive for the ultimate sales tools to improve customer conversion. After recognizing that the majority of online shopping is done on mobile, we noticed a gap in the market. 2-dimensional images don't let customers truly experience a product before the point of purchase the way a 3-dimensional representation would, yet there are absolutely no easily-accessible libraries for embedding Augmented Reality representations of products in the mobile app industry. As passionate mobile developers ourselves, we spent this weekend building an iOS library that enables other developers to do exactly this.
## What it does
shopAR allows developers to upload 3D representations of their products to our REST API and instantly retrieve and display them to users through an Augmented Reality portal, which our library integrates into their existing iOS app. In return, shoppers get to experience products before purchasing in a significantly more interactive and better-converting way.
## How we built it
We began by building our REST API in node.js and express.js, using React for front-end web. We connected it to an AWS s3 bucket for file storage, and deployed it on Heroku. Developers can either directly make POST requests to our API or upload files via drag-and-drop in our React front-end. We then developed an iOS CocoaPod for Swift with a variety of functions that interact with our API (documented in more detail on our github), including a file retrieval system connecting s3 to the iOS app via signedURL request. Finally, we built an iOS Demo App utilizing our technology and displaying the ease of integration. It is included in our github as well.
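The retrieval service itself is written in Node.js/Express, but the signed-URL idea is easy to illustrate. The sketch below shows an equivalent call in Python with boto3; the bucket and key names are hypothetical and only meant to show the shape of the request.

```python
import boto3


def model_download_url(bucket, key, expires_seconds=300):
    """Return a short-lived signed URL for a 3D model stored in S3.

    The mobile client fetches this URL from the API and downloads the
    .scn/.usdz file directly from S3 without needing AWS credentials.
    """
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_seconds,
    )


if __name__ == "__main__":
    # hypothetical bucket/key names, for illustration only
    print(model_download_url("shopar-models", "products/chair_01.usdz"))
```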
## Challenges we ran into
We struggled working with a native apple filetype .scn for 3D object representations, since it was not well documented. We also struggled with latency issues in ensuring that our servers allowed for the fastest possible file transfers. Finally, making a library for other developers was something neither of us had attempted before, and posed more organizational challenges than we anticipated.
## Accomplishments that we're proud of
We are proud of publishing the library as an open source tool on CocoaPods.com, the leading library host for iOS. We also managed a very large stack, including front-end, back-end, mobile, and code distribution. We are also proud of introducing a new tool that can greatly contribute to the mobile retail industry.
## What we learned
In a technical sense, we learned that it is extremely difficult to make an open source library very accessible and easy to integrate into existing projects, considering the varying scopes of peoples' projects. More importantly, we learned resilience, the power of planning, and the perks of sleep-deprivation.
## What's next for shopAR
We seriously plan to pursue this as we progress through our Junior year. We have big ideas about potential retail partners, supporting more file formats, integrating 3D scanning tools, and much more!
|
## Inspiration
Shopping can be a very frustrating experience at times. Nowadays, almost everything is digitally connected, yet some stores fall behind when it comes to their shopping experience. We've unfortunately encountered scenarios where we weren't able to find products stocked at our local grocery store, and there have been times when we had no idea how much stock was left or whether we needed to hurry! Our app solves this issue by displaying various data relating to each ingredient to the user.
## What it does
Our application aims to guide users to the nearest store that stocks the ingredient they're looking for. This is done on the maps section of the app, and the user can redirect to other stores in the area as well to find the most suitable option. Displaying the price also enables the user to find the most suitable product for them if there are alternatives, ultimately leading to a much smoother shopping experience.
## How we built it
The application was built using React Native and MongoDB. While there were some hurdles to overcome, we were finally able to get a functional application that we could view and interact with using Expo.
## Challenges we ran into
Despite our best efforts, we weren't able to fit the integration of the database within the allocated timeframe. Given that MongoDB was a fairly new experience for us, we struggled to implement it correctly within our React Native code, which resulted in having to rely on hard-coded ingredients.
## Accomplishments that we're proud of
We're very proud of the progress we managed to get on our mobile app. Both of us have little experience ever making such a program, so we're very happy we have a fully functioning app in so little time.
Although we weren't able to get the database loaded into the search functionality, we're still quite proud that we were able to create the database, connect every team member to it, correctly upload documents to it, and even get its contents printing through our code. Just being able to connect to the database and correctly output it, as well as being able to implement a query functionality, was quite a positive experience since this was unfamiliar territory for us.
## What we learned
We learnt how to create and use databases with MongoDB and were able to enhance our React Native skills through importing Google Cloud APIs and being able to work with them (particularly through react-native-maps).
## What's next for IngredFind
In the future, we would hope to improve the front and back end of our application. Aside from visual tweaks, enhancing our features, and fixing any bugs that may occur, we would also hope to get the database fully functional and perhaps create a companion application that enables grocery stores to add and alter products on their end.
|
## Inspiration
3-D Printing. It has been around for decades, yet the printing process is often too complex to navigate, labour intensive and time consuming. Although the technology exists, it is only used by those who are trained in the field because of the technical skills required to operate the machine. We want to change all that. We want to make 3-D printing simpler, faster, and accessible for everyone. By leveraging the power of IoT and Augmented Reality, we created a solution to bridge that gap.
## What it does
Printology revolutionizes the process of 3-D printing by allowing users to select, view and print files with a touch of a button. Printology is the first application that allows users to interact with 3-D files in augmented reality while simultaneously printing it wirelessly. This is groundbreaking because it allows children, students, healthcare educators and hobbyists to view, create and print effortlessly from the comfort of their mobile devices. For manufacturers and 3-D Farms, it can save millions of dollars because of the drastically increased productivity.
The product is composed of a hardware and a software component. Users can download the iOS app on their devices and browse a catalogue of .STL files. They can drag and view each of these items in augmented reality and print it to their 3-D printer directly from the app. Printology is compatible with all models of printers on the market because of the external Raspberry Pi that generates a custom profile for each unique 3-D printer. Combined, the two pieces allow users to print easily and wirelessly.
## How I built it
We built an application in XCode that uses Apple’s AR Kit and converts STL models to USDZ models, enabling the user to view 3-D printable models in augmented reality. This had never been done before, so we had to write our own bash script to convert these models. Then we stored these models in a local server using node.js. We integrated functions into the local servers which are called by our application in Swift.
In order to print directly from the app, we connected a Raspberry Pi running Octoprint (a web based software to initialize the 3-D printer). We also integrated functions into our local server using node.js to call functions and interact with Octoprint. Our end product is a multifunctional application capable of previewing 3-D printable models in augmented reality and printing them in real time.
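For reference, the "print from the app" step ultimately boils down to a call against OctoPrint's REST API. Our server makes that call from Node.js; the hedged Python sketch below shows the same upload-select-print request, with a placeholder host and API key.

```python
import requests

OCTOPRINT = "http://raspberrypi.local:5000"  # address of the Pi running OctoPrint (placeholder)
API_KEY = "YOUR_OCTOPRINT_API_KEY"           # placeholder API key


def upload_and_print(gcode_path):
    """Upload a sliced file to OctoPrint, select it, and start printing."""
    with open(gcode_path, "rb") as fh:
        resp = requests.post(
            f"{OCTOPRINT}/api/files/local",
            headers={"X-Api-Key": API_KEY},
            files={"file": fh},
            data={"select": "true", "print": "true"},  # start as soon as the upload lands
        )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(upload_and_print("model.gcode"))
```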
## Challenges I ran into
We created something that had never been done before hence we did not have a lot of documentation to follow. Everything was built from scratch. In other words this project needed to be incredibly well planned and executed in order to achieve a successful end product. We faced many barriers and each time we pushed through. Here were some major issues we faced.
1. No one on our team had done iOS development before, and we learned a lot through online resources and trial and error. Altogether we watched more than 12 hours of YouTube tutorials on Swift and XCode - it was quite a learning curve. Ultimately, with insane persistence, a full all-nighter, and the generous help of the Deltahacks mentors, we troubleshot errors and found new ways of getting around problems.
2. No one on our team had experience in bash or node.js. We learned everything from Google and our mentors. It was exhausting and sometimes downright frustrating. Learning the connection between our JavaScript server and our Swift UI was extremely difficult, and we went through loads of troubleshooting for our networks and IP addresses.
## Accomplishments that I'm proud of and what I've Learned
We're most proud of learning to integrate multiple languages, APIs, and devices into one synchronized system. It was the first time this had been done, and most of the software was made in-house. We learned command line functions and figured out how to centralize several applications to provide a solution. It was so rewarding to learn an entirely new language and create something valuable in 24 hours.
## What's next for Print.ology
We are working on a scan feature in the app that allows users to do a 3-D scan of any object with their phone and produce a 3-D printable STL file from the photos. This has also never been accomplished before, and it would allow for major advancements in rapid prototyping. We look forward to integrating machine learning techniques to analyze a 3-D model and generate settings that reduce the number of support structures needed. This would reduce the waste involved in 3-D printing. A future step would be to migrate our STL files to a cloud-based service to which users can upload their 3-D models.
|
losing
|
## Inspiration
Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book.
Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars!
Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible.
## What it does
Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud.
## How we built it
**Software (Alex K)** -
On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS.
The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words.
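To give a flavor of the unigram filtering, here is a simplified sketch with made-up frequencies (not the exact code used): each OCR'd word is snapped to the closest dictionary word, with ties broken by corpus frequency.

```python
from difflib import get_close_matches

# Tiny unigram "language model": relative word frequencies (illustrative numbers only).
UNIGRAM_FREQ = {
    "introduction": 0.00012,
    "interruption": 0.00003,
    "the": 0.056,
    "reading": 0.00021,
}


def correct_word(ocr_word, freqs=UNIGRAM_FREQ, cutoff=0.8):
    """Replace an OCR'd word with the most frequent close English match.

    If no dictionary word is close enough, the original prediction is kept.
    """
    candidates = get_close_matches(ocr_word.lower(), freqs.keys(), n=3, cutoff=cutoff)
    if not candidates:
        return ocr_word
    # Prefer the candidate the unigram model considers most likely.
    return max(candidates, key=lambda w: freqs[w])


if __name__ == "__main__":
    print(correct_word("Intreduction"))  # -> "introduction"
```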
Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation.
**Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses.
## Challenges we ran into
The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging!
Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time.
Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens at the seller’s configuration, and dial it to the right distance for holding a book at arm’s length.
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together.
## Acknowledgements
The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
|
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized brail menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life.
Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with XCode. We use Apple's native vision and speech API's to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways:
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc.
* To run Optical Character Recognition on text in the real world which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world which the user can then query about.
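Our back-end makes these calls from Go, but for illustration, the three Vision API requests look roughly like the following with Google's official Python client (credentials and the sample file name are placeholders, not part of our codebase).

```python
from google.cloud import vision


def describe_image(image_bytes):
    """Run face sentiment, OCR, and label detection on one image."""
    client = vision.ImageAnnotatorClient()  # requires GCP credentials in the environment
    image = vision.Image(content=image_bytes)

    faces = client.face_detection(image=image).face_annotations
    text = client.text_detection(image=image).text_annotations
    labels = client.label_detection(image=image).label_annotations

    return {
        "joy": [f.joy_likelihood for f in faces],        # emotion estimates per face
        "text": text[0].description if text else "",     # full OCR'd text block
        "labels": [l.description for l in labels[:5]],   # top surrounding objects
    }


if __name__ == "__main__":
    with open("scene.jpg", "rb") as fh:  # any local photo
        print(describe_image(fh.read()))
```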
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable with. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys proved to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put together app.
Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack.
Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis.
Zak learned about building a native iOS app that communicates with a data-rich APIs.
We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service.
Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app.
Ultimately, we plan to host the back-end on Google App Engine.
|
## Inspiration
The inspiration behind HumanFT comes from the desire to revolutionize the way people receive feedback and approach personal development. The project aims to harness the power of advanced technology to provide individuals, educational institutions, and organizations with a comprehensive feedback system that can drive positive change and improvement in various aspects of life.
## What it does
HumanFT serves as a multifaceted platform that collects, analyzes, and delivers feedback to users across different domains. It offers a central hub for personal development, empowers educators and students to enhance the learning experience, and enables organizations to optimize workplace performance. By leveraging data-driven insights and gamification, HumanFT engages users in a meaningful journey of self-improvement.
## How we built it
HumanFT is built upon a foundation of cutting-edge technology, including machine learning and AI algorithms. It combines a user-friendly interface with robust data analysis to ensure efficient feedback delivery. Privacy and security are fundamental aspects of its construction, ensuring that user data remains confidential and protected.
## Challenges we ran into
Developing HumanFT presented several challenges, including the integration of gamification elements, the development of secure data handling processes, and the creation of a dynamic and engaging user experience. Overcoming these obstacles required a dedicated team effort and continuous innovation.
## Accomplishments that we're proud of
One of our proudest accomplishments with HumanFT is the creation of a thriving community of individuals who are passionate about personal development and feedback. We've also successfully integrated gamification elements to keep users engaged and motivated on their journey towards self-improvement.
## What we learned
Throughout the development of HumanFT, we've learned the significance of personalized feedback in driving positive change. We've also gained valuable insights into the power of data-driven recommendations and the importance of maintaining user privacy and security.
## What's next for HumanFT
The future of HumanFT holds exciting possibilities. We aim to expand its reach and impact, incorporating more domains, refining the user experience, and continuously improving the AI algorithms that drive feedback and recommendations. Additionally, we plan to further strengthen the HumanFT community, fostering connections and support among like-minded individuals on their journey of self-improvement.
|
winning
|
## Inspiration
We wanted to make something that linked the virtual and real worlds, but in a quirky way. On our team we had people who wanted to build robots and people who wanted to make games, so we decided to combine the two.
## What it does
Our game portrays a robot (Todd) finding its way through obstacles that are only visible in one dimension of the game. It is a multiplayer endeavor where the first player is given the task to guide Todd remotely to his target. However, only the second player is aware of the various dangerous lava pits and moving hindrances that block Todd's path to his goal.
## How we built it
Todd was built with style, grace, but most of all an Arduino on top of a breadboard. On the underside of the breadboard, two continuous rotation servo motors & a USB battery allows Todd to travel in all directions.
Todd receives communications from a custom built Todd-Controller^TM that provides 4-way directional control via a pair of Bluetooth HC-05 Modules.
Our Todd-Controller^TM (built with another Arduino, four pull-down buttons, and a bluetooth module) then interfaces with Unity3D to move the virtual Todd around the game world.
## Challenges we ran into
The first challenge of the many that we ran into on this "arduinous" journey was having two Arduinos send messages to each other over the Bluetooth wireless network. We had to manually configure the settings of the HC-05 modules by putting each into AT mode, setting one as the master and one as the slave, making sure the passwords and the default baud rate were the same, and then syncing the two with different code to echo messages back and forth.
The second challenge was to build Todd, the clean wiring of which proved to be rather difficult when trying to prevent the loose wires from hindering Todd's motion.
The third challenge was building the Unity app itself. Collision detection was an issue at times: if movements were imprecise or we collided at a weird corner, our object would fly up in the air and cause very strange behavior. So, we resorted to restraining the movement of the player to certain axes. We also had to make sure the scene looked nice by having good lighting and a pleasant camera view; we tried out many different combinations until we decided that a top-down view of the scene was the optimal choice. Because of the limited time, and because we wanted the game to look good, we resorted to free assets (models and textures only) and used them to our advantage.
The fourth challenge was establishing a clear communication between Unity and Arduino. We resorted to an interface that used the serial port of the computer to connect the controller Arduino with the unity engine. The challenge was the fact that Unity and the controller had to communicate strings by putting them through the same serial port. It was as if two people were using the same phone line for different calls. We had to make sure that when one was talking, the other one was listening and vice versa.
## Accomplishments that we're proud of
The biggest accomplishment from this project in our eyes, was the fact that, when virtual Todd encounters an object (such as a wall) in the virtual game world, real Todd stops.
Additionally, the fact that the margin of error between the real and virtual Todd's movements was lower than 3% significantly surpassed our original expectations of this project's accuracy, and goes to show that our vision of having a real game with virtual obstacles is achievable.
## What we learned
We learned how complex integration is. It's easy to build self-sufficient parts, but their interactions introduce exponentially more problems. Communicating via Bluetooth between Arduino's and having Unity talk to a microcontroller via serial was a very educational experience.
## What's next for Todd: The Inter-dimensional Bot
Todd? When Todd escapes from this limiting world, he will enter a Hackathon and program his own Unity/Arduino-based mastery.
|
## Inspiration
Minecraft has an interesting map mechanic where your character holds a map which "draws itself" while exploring the world. I am also very interested in building a plotter, which is a printer that uses a pen and (XY) gantry to produce images. These ideas seemed to fit together quite well.
## What it does
Press a button, copy GPS coordinates and run the custom "gcode" compiler to generate machine/motor driving code for the arduino. Wait around 15 minutes for a 48 x 48 output.
## How we built it
Mechanical assembly - Tore apart 3 dvd drives and extracted a multitude of components, including sled motors (linear rails). Unfortunately, they used limit switch + DC motor rather than stepper, so I had to saw apart the enclosure and **glue** in my own steppers with a gear which (you guessed it) was also glued to the motor shaft.
Electronics - I designed a simple algorithm to walk through an image matrix and translate it into motor code, that looks a lot like a video game control. Indeed, the stepperboi/autostepperboi main source code has utilities to manually control all three axes like a tiny claw machine :)
* U - Pen Up
* D - Pen Down
* L - Pen Left
* R - Pen Right
* Y/T - Pen Forward (top)
* B - Pen Backwards (bottom)
* Z - Zero the calibration
* O - Return to the previously zeroed position
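To make the translation step concrete, here is a heavily simplified Python sketch of walking a binary image matrix and emitting commands in the spirit of the key list above. The real compiler also handles calibration, zeroing, and finer motion, so treat this as an outline rather than the actual generator.

```python
def matrix_to_commands(pixels):
    """Walk a binary image row by row and emit plotter-style commands.

    'D'/'U' drop and lift the pen, 'R'/'L' step along a row, 'B' advances
    to the next row.  Rows alternate direction so the head never rewinds.
    """
    commands = []
    for y, row in enumerate(pixels):
        step = "R" if y % 2 == 0 else "L"            # serpentine scan
        cells = row if y % 2 == 0 else row[::-1]
        for filled in cells:
            commands.append("D" if filled else "U")  # mark or skip this cell
            commands.append(step)
        commands.append("U")
        commands.append("B")                          # move on to the next row
    return "".join(commands)


if __name__ == "__main__":
    tiny_map = [
        [1, 1, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 1],
    ]
    print(matrix_to_commands(tiny_map))
```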
## Challenges we ran into
* I have no idea about basic mechanics / manufacturing so it's pretty slipshod, the fractional resolution I managed to extract is impressive in its own right
* Designing my own 'gcode' simplification was a little complicated, and produces strange, pointillist results. I like it though.
## Accomplishments that we're proud of
* 24 hours and a pretty small cost in parts to make a functioning plotter!
* Connected to mapbox api and did image processing quite successfully, including machine code generation / interpretation
## What we learned
* You don't need to take MIE243 to do low precision work, all you need is superglue, a glue gun and a dream
* GPS modules are finnicky and need to be somewhat near to a window with built in antenna
* Vectorizing an image is quite a complex problem
* Mechanical engineering is difficult
* Steppers are *extremely* precise, and I am quite surprised at the output quality given that it's barely held together.
* Iteration for mechanical structure is possible, but difficult
* How to use rotary tool and not amputate fingers
* How to remove superglue from skin (lol)
## What's next for Cartoboy
* Compacting the design so it can fit in a smaller profile, and work more like a polaroid camera as intended. (Maybe I will learn SolidWorks one of these days)
* Improving the gcode algorithm / tapping into existing gcode standard
|
## Inspiration
Our teammate, Anchit, worked in ESG investing for 5 years and understood that ESG scores are broken. Despite ESG investors spending millions of $ hiring consultants to do this research, ESG scores are not accurate and do not reflect public sentiment. For example, ESG rating agencies awarded EV maker Tesla with an ESG score of 37/100 against a score of 84 awarded to tobacco company Phillip Morris, which is completely non-sensical.
## What it does
EcoMetrics scores companies on ESG, based on public perception. Investors focusing on ESG can utilize these scores to formulate their investment decisions, as these scores are real-time and more accurate. Also, unlike ESG consultants, EcoMetrics is much cheaper.
## How we built it
We used Twitter as a data source and then extracted ESG-related posts in a time period. We then used sentiment analysis to assign a score to each post, and aggregated these scores to get a company score. Amongst the tools, we used:
* LangChain: For orchestrating data flows and integrating with Large Language Models (LLMs).
* Flask / FastAPI: For backend web development and API creation.
* Hume AI: For sentiment analysis and advanced emotional AI processing.
* Streamlit: For creating an interactive user interface.
* Replit: For development and deployment.
* LLMs (Groq): For natural language understanding and generation.
* Crew AI: For Multi-AI Agents Collaboration
Please find our detailed tech stack here: [link](https://miro.com/app/board/uXjVK5Mght4=/?userEmail=anchit.jain@berkeley.edu&track=true&utm_source=notification&utm_medium=email&utm_campaign=add-to-team-and-board&utm_content=go-to-board)
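As a simplified sketch of the aggregation step described above (the scoring scale and field names are illustrative, not our production pipeline): per-post sentiment in [-1, 1] is averaged per ESG pillar and rescaled to 0-100.

```python
from statistics import mean


def company_esg_score(posts):
    """Aggregate per-post sentiment into 0-100 pillar scores for one company.

    Each post is a dict like {"pillar": "E", "sentiment": 0.4}, where the
    sentiment is assumed to already be in [-1, 1] from the sentiment model.
    """
    scores = {}
    for pillar in ("E", "S", "G"):
        sentiments = [p["sentiment"] for p in posts if p["pillar"] == pillar]
        if sentiments:
            scores[pillar] = round((mean(sentiments) + 1) * 50, 1)  # map [-1, 1] -> [0, 100]
    scores["overall"] = round(mean(scores.values()), 1) if scores else None
    return scores


if __name__ == "__main__":
    sample_posts = [
        {"pillar": "E", "sentiment": 0.8},
        {"pillar": "E", "sentiment": -0.2},
        {"pillar": "S", "sentiment": 0.1},
        {"pillar": "G", "sentiment": -0.5},
    ]
    print(company_esg_score(sample_posts))
```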
## Challenges we ran into
Twitter APIs to extract data are expensive. We found some free APIs, but they unfortunately took a long time to extract data. We had to find a way that would allow us to extract social media posts for our project, which was challenging. Ultimately, we used Langchain with SerperAPI to find social media posts on the internet. Unfortunately, this process took a long time, so we had to scale down our initial plans of also presenting a lot of visualization.
One of the big challenges was also prompt engineering of AI agents, in order to get better results. Also, there were other challenges like token limitations with Groq.
## Accomplishments that we're proud of
For our project, we had to integrate multiple moving parts, including social media posts, Hume, Groq, Crew AI and Streamlit. We worked together as a great team for integrating these parts, and are proud of that.
## What we learned
We learned a lot on how to make best use of pre-built APIs/ infrastructure to create a large and complex project.
## What's next for EcoMetrics
We have a working prototype ready. The next step is to go out to the market and sign up ESG investors to use Ecometrics. We also intend to raise a seed round to be able to integrate the paid Twitter APIs and develop more features.
|
winning
|
## What it does
This Google Chrome extension checks if an online article has traces that might indicate it is fake news.
## How we built it
We coded the majority of our logic in python, using a large dataset and cosine similarity to find intersections among different articles. We then coded the extension using javascript that connects the article to the python code that runs locally. The little notification window was written in Java.
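A minimal sketch of the cosine-similarity comparison at the heart of the Python logic, using bag-of-words term counts (the real script works over a much larger dataset and more preprocessing):

```python
import math
import re
from collections import Counter


def cosine_similarity(text_a, text_b):
    """Cosine similarity between two articles' bag-of-words term counts."""
    vec_a = Counter(re.findall(r"[a-z']+", text_a.lower()))
    vec_b = Counter(re.findall(r"[a-z']+", text_b.lower()))
    shared = set(vec_a) & set(vec_b)
    dot = sum(vec_a[w] * vec_b[w] for w in shared)
    norm = math.sqrt(sum(c * c for c in vec_a.values())) * \
           math.sqrt(sum(c * c for c in vec_b.values()))
    return dot / norm if norm else 0.0


if __name__ == "__main__":
    claim = "The senator was arrested for fraud on Monday"
    report = "On Monday the senator faced fraud charges and was arrested"
    print(round(cosine_similarity(claim, report), 3))
```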
## Challenges we ran into
The first challenge was finding the right calculation method to use. We decided to use cosine similarity, thanks to Hyunjoon's great idea, and adapted it to fit our purpose.
Another difficulty was using a language that we had never used before: JavaScript. Xinyue and Bu Geun did this part, using JavaScript to make the extension.
## Accomplishments that we're proud of
We ran different test cases, using actual articles that we have found on various websites, such as New York Times, National Geographic, Fox News, and Politico. Although our test script achieved only 60% accuracy, we were able to distinguish most of the fake news when field testing with the extension.
## What we learned
We learned how to make a Chrome extension. We learned that combining different programming languages is difficult, but possible and sometimes necessary. We've also learned the limits of computers in the fields of text analysis. Lastly, we learned the amazing things we can do by combining software engineering with data science.
## What's next for FakeNewsDetector
We have a few bugs to fix. We also have to increase the accuracy of our detector. We can also improve the GUI.
|
## Inspiration
Disinformation and misinformation loom large in today's world. As it becomes easier to share information, it becomes more important that we develop quick and intuitive ways of viewing all perspectives. This project aims to begin this process by offering a ‘bias reverser’, in which users can view articles of the same topic from different sides.
According to the FBI, online radicalization is on the rise. Much of this occurs due to an echo chamber. Views become more and more extreme, and a well-trained recommendation system only shows users content that will confirm their beliefs. Therefore, it’s extremely important that browsing is consciously inclusive of all sides.
## What it does
BeyondBias is a web app which promotes the spread of information by offering readers of political articles the opportunity to consult similar articles written from different perspectives. We find the political bias of a given article by matching its URL with the top-level domain, which is then matched against a list of classified sources. We then retrieve articles about the same topic with reversed biases, to allow users to consider all sides of an argument. Finally, we return the three most relevant articles. This offers users the opportunity to learn about arguments that would otherwise have been unfamiliar to them.
## How I built it
We built the application using a top-down approach, beginning from the Minimum Viable Product that addressed our problem. We determined that we needed a program that took the URL of a seed article as input and produced the URLs of articles on the same topic from reverse biases as output. Then we split this task into the two following components: a front-end GUI and a back-end model.
## Challenges I ran into
The first problem we ran into was retrieving articles that addressed the same topic as the seed article. Even if two articles share many of the same words, they may not belong to the same time period, and they might still be about different topics. To fix this issue, we implemented a cosine similarity metric that we computed between the original article and candidate articles. This allowed us to select only those articles that scored above a given similarity threshold.
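For illustration, the thresholding idea looks roughly like the following scikit-learn sketch; the threshold value and sample texts are placeholders, not the values we actually tuned.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def similar_enough(seed_article, candidate_articles, threshold=0.3):
    """Keep candidate articles whose TF-IDF cosine similarity to the seed
    article exceeds the threshold, sorted from most to least similar."""
    docs = [seed_article] + list(candidate_articles)
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    ranked = sorted(zip(candidate_articles, sims), key=lambda pair: -pair[1])
    return [(article, round(s, 3)) for article, s in ranked if s >= threshold]


if __name__ == "__main__":
    seed = "Congress debates new climate bill amid record heat waves"
    candidates = [
        "Lawmakers spar over climate legislation as temperatures soar",
        "Local team wins championship after dramatic overtime finish",
    ]
    print(similar_enough(seed, candidates))
```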
Although we originally planned to create a Chrome Extension, we decided that a web app would offer more potential for wide use. Chrome is only used by a subset of those online, but websites are accessible to all.
Prior to this project, none of us were familiar with Django template tags. Troubleshooting all of the integrations was challenging, but rewarding.
## Accomplishments that I'm proud of
We began with just one mission: reduce bias when reading the news. Although we didn’t have a specific plan, we brainstormed and thought about bias reversal. This idea allowed us to develop an idea that has not been implemented before. Also, our diverse team allowed us to have some very interesting experiences this weekend.
We are also proud of our perseverance through numerous technical challenges, and our commitment to this project’s mission. We plan to continue related work in the future.
## What I learned
We learned a lot about full-stack web development and the challenges and triumphs that come from this venture. As we did more research throughout the weekend, it became more and more apparent that this project is vital in today’s age.
Also, we learned a lot about each other. It was very interesting to hear about our different backgrounds.
## What's next for BeyondBias
We hope to continue working together remotely in the future on similar projects. Also, we hope to develop this website into a cross-platform extension, to keep the audience while increasing the utility.
|
## Inspiration
1. Affordable pet doors with simple "flap" mechanisms are not secure
2. Potty trained pets requires the door to be manually opened (e.g. ring a bell, scratch the door)
## What it does
The puppy *(or cat, we don't discriminate)* can exit without approval as soon as the sensor detects an object within the threshold distance. When entering back in, the ultrasonic sensor will trigger a signal that something is at the door and the camera will take a picture and send to the owner's phone through a web app. The owner may approve or deny the request depending on the photo. If the owner approves the request, the door will open automatically.
## How we built it
Ultrasonic sensors relay the distance from the sensor to an object to the Arduino, which sends this signal to Raspberry Pi. The Raspberry Pi program handles the stepper motor movement (rotate ~90 degrees CW and CCW) to open and close the door and relays information to the Flask server to take a picture using the Kinect camera. This photo will display on the web application, where an approval to the request will open the door.
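A simplified sketch of the Raspberry Pi control loop is below; the distance reading, the stepper routine, and the Flask endpoint URL are stand-ins for the real serial/GPIO code and server route.

```python
import time
import requests

THRESHOLD_CM = 30
SERVER = "http://localhost:5000"  # Flask server; URL is a placeholder


def read_distance_cm():
    """Stand-in for the ultrasonic reading relayed from the Arduino over serial."""
    return 25


def open_door():
    """Stand-in for rotating the stepper ~90 degrees, waiting, then closing."""
    print("opening door")


def main_loop():
    while True:
        if read_distance_cm() < THRESHOLD_CM:
            # Something is at the door: ask the server to snap a photo and
            # wait for the owner to approve or deny it in the web app.
            resp = requests.post(f"{SERVER}/request-entry", timeout=60)
            if resp.ok and resp.json().get("approved"):
                open_door()
        time.sleep(1)


if __name__ == "__main__":
    main_loop()
```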
## Challenges we ran into
1. Connecting everything together (Arduino, Raspberry Pi, frontend, backend, Kinect camera) despite each component working well individually
2. Building cardboard prototype with limited resources = lots of tape & poor wire management
3. Using multiple different streams of I/O and interfacing with each concurrently
## Accomplishments that we're proud of
This was super rewarding as it was our first hardware hack! The majority of our challenges lie in the camera component as we're unfamiliar with Kinect but we came up with a hack-y solution and nothing had to be hardcoded.
## What we learned
Hardware projects require a lot of troubleshooting because the sensors will sometimes interfere with each other, or the signals are not processed properly when there is too much noise. Additionally, with multiple different pieces of hardware, we learned how to connect all the subsystems together and interact with the software components.
## What's next for PetAlert
1. Better & more consistent photo quality
2. Improve frontend notification system (consider push notifications)
3. Customize 3D prints to secure components
4. Use thermal instead of ultrasound
5. Add sound detection
|
losing
|
## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support at its capacity, we sought to build something to connect trained volunteer companions with people in distress in several ways for convenience.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frames into React, using Acovode for back-end development.
## Challenges I ran into
Setting up the firebase to connect to the front end react app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential but still unmet despite all the recent efforts. We also learned to use Figma and Firebase, and tried out many open-source platforms for building apps.
## What's next for HearMeOut
We hope to expand the chatbot’s support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
|
## Inspiration
We are currently living through one of the largest housing crises in human history. As a result, more Canadians than ever before are seeking emergency shelter to stay off the streets and find a safe place to recuperate. However, finding a shelter is still a challenging, manual process, where no digital service exists that lets individuals compare shelters by eligibility criteria, find the nearest one they are eligible for, and verify that the shelter has room in real-time. Calling shelters in a city with hundreds of different programs and places to go is a frustrating burden to place on someone who is in need of safety and healing. Further, we want to raise the bar: people shouldn't be placed in just any shelter, they should go to the shelter best for them based on their identity and lifestyle preferences.
70% of homeless individuals have cellphones, compared to 85% of the rest of the population; homeless individuals are digitally connected more than ever before, especially through low-bandwidth mediums like voice and SMS. We recognized an opportunity to innovate for homeless individuals and make the process for finding a shelter simpler; as a result, we could improve public health, social sustainability, and safety for the thousands of Canadians in need of emergency housing.
## What it does
Users connect with the ShelterFirst service via SMS to enter a matching system that 1) identifies the shelters they are eligible for, 2) prioritizes shelters based on the user's unique preferences, 3) matches individuals to a shelter based on realtime availability (which was never available before) and the calculated priority and 4) provides step-by-step navigation to get to the shelter safely.
Shelter managers can add their shelter and update the current availability of their shelter on a quick, easy to use front-end. Many shelter managers are collecting this information using a simple counter app due to COVID-19 regulations. Our counter serves the same purpose, but also updates our database to provide timely information to those who need it. As a result, fewer individuals will be turned away from shelters that didn't have room to take them to begin with.
## How we built it
We used the Twilio SMS API and webhooks written in express and Node.js to facilitate communication with our users via SMS. These webhooks also connected with other server endpoints that contain our decisioning logic, which are also written in express and Node.js.
We used Firebase to store our data in real time.
We used Google Cloud Platform's Directions API to calculate which shelters were the closest and prioritize those for matching and provide users step by step directions to the nearest shelter. We were able to capture users' locations through natural language, so it's simple to communicate where you currently are despite not having access to location services.
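The decisioning logic itself lives in the Node.js backend; purely as an illustration of its shape, a Python sketch of the matching might look like this: filter shelters by eligibility and live availability, then rank by preference fit, breaking ties by distance. The field names and sample data are made up.

```python
def match_shelter(user, shelters):
    """Pick the best shelter: eligible, has beds free, then ranked by how
    well it matches the user's preferences and how close it is."""
    def eligible(shelter):
        return user["demographic"] in shelter["accepts"] and shelter["beds_free"] > 0

    def score(shelter):
        preference_fit = len(set(user["preferences"]) & set(shelter["features"]))
        return (preference_fit, -shelter["distance_km"])  # closer shelters break ties

    candidates = [s for s in shelters if eligible(s)]
    return max(candidates, key=score) if candidates else None


if __name__ == "__main__":
    user = {"demographic": "adult_male", "preferences": {"pet_friendly"}}
    shelters = [
        {"name": "A", "accepts": {"adult_male"}, "beds_free": 2,
         "features": {"pet_friendly"}, "distance_km": 3.1},
        {"name": "B", "accepts": {"adult_male"}, "beds_free": 0,
         "features": set(), "distance_km": 0.5},
    ]
    print(match_shelter(user, shelters)["name"])  # -> "A" (B is full)
```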
Lastly, we built a simple web system for shelter managers using HTML, SASS, JavaScript, and Node.js that updated our data in real time and allowed for new shelters to be entered into the system.
## Challenges we ran into
One major challenge was with the logic of the SMS communication. We had four different outgoing message categories (statements, prompting questions, demographic questions, and preference questions), and shifting between these depending on user input was initially difficult to conceptualize and implement. Another challenge was collecting the distance information for each of the shelters and sorting between the distances, since the response from the Directions API was initially confusing. Lastly, building the custom decisioning logic that matched users to the best shelter for them was an interesting challenge.
## Accomplishments that we're proud of
We were able to build a database of potential shelters in one consolidated place, which is something the city of London doesn't even have readily available. That itself would be a win, but we were able to build on this dataset by allowing shelter administrators to update their availability with just a few clicks of a button. This information saves lives, as it prevents homeless individuals from wasting their time going to a shelter that was never going to let them in due to capacity constraints, which often forced homeless individuals to miss the cutoff for other shelters and sleep on the streets. Being able to use this information in a custom matching system via SMS was a really cool thing for our team to see - we immediately realized its potential impact and how it could save lives, which is something we're proud of.
## What we learned
We learned how to use Twilio SMS APIs and webhooks to facilitate communications and connect to our business logic, sending out different messages depending on the user's responses. In addition, we taught ourselves how to integrate the webhooks to our Firebase database to communicate valuable information to the users.
This experience taught us how to use multiple Google Maps APIs to get directions and distance data for the shelters in our application. We also learned how to handle several interesting edge cases with our database since this system uses data that is modified and used by many different systems at the same time.
## What's next for ShelterFirst
One addition to make could be to integrate locations for other basic services like public washrooms, showers, and food banks to connect users to human rights resources. Another feature that we would like to add is a social aspect with tags and user ratings for each shelter to give users a sense of what their experience may be like at a shelter based on the first-hand experiences of others. We would also like to leverage the Twilio Voice API to make this system accessible via a toll free number, which can be called for free at any payphone, reaching the entire homeless demographic.
We would also like to use Raspberry Pis and/or Arduinos with turnstiles to create a cheap system for shelter managers to automatically collect live availability data. This would ensure the occupancy data in our database is up to date and seamless to collect from otherwise busy shelter managers. Lastly, we would like to integrate into municipalities "smart cities" initiatives to gather more robust data and make this system more accessible and well known.
|
## Inspiration
Over **15% of American adults**, over **37 million** people, are either **deaf** or have trouble hearing according to the National Institutes of Health. One in eight people have hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only an estimated 250,000 – 500,000 people in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themself to the world, and so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using Natural Language Processing to analyze spoken text and Computer Vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world.
## What it does
Our app connects a deaf person, who signs American Sign Language into their device's camera, to a non-deaf person, who then listens through a text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation.
We believe real-time communication is the key to having a fluid conversation, and thus we use automatic speech-to-text and text-to-speech translations. Our app is a web app designed for desktop and mobile devices for instant communication, and we use a clean and easy-to-read interface that ensures a deaf person can follow along without missing out on any parts of the conversation in the chat box.
## How we built it
For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives:
1. Precision in Real-Time Object Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time object detection. We understood the urgency of efficient item recognition and the pivotal role it played in our image detection model.
2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating any potential obstacles for our users.
* Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience.
* Backend Powered by Flask: For the robust foundation of our API and backend framework, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data.
* Speech-to-Text Transformation: To enable the transformation of spoken language into text, we integrated the webkitSpeechRecognition library. This technology forms the backbone of our speech recognition system, facilitating communication with our app.
* NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. This aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible by sign language users.
* Translating Hand Motions to Sign Language: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into a visual form. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It proficiently classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, skillfully pieced together to form complete sentences
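As a rough sketch of how such a MobileNetV2 classifier can be set up with Keras (the class count, input size, and training details here are assumptions for illustration, not our exact training code):

```python
import tensorflow as tf

NUM_CLASSES = 29        # 26 letters + 3 extra signs (assumed)
IMG_SIZE = (224, 224)   # MobileNetV2's default input resolution


def build_sign_classifier():
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False  # start with transfer learning on frozen features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    model = build_sign_classifier()
    model.summary()
    # Each webcam frame is resized to 224x224, preprocessed with
    # tf.keras.applications.mobilenet_v2.preprocess_input, and classified;
    # the predicted letters are buffered and joined into words and sentences.
```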
## Challenges we ran into
Since we used multiple AI models, it was tough for us to integrate them seamlessly with our Vue frontend. Since we are also using the webcam through the website, it was a massive challenge to seamlessly use video footage, run real-time object detection and classification on it, and show the results on the webpage simultaneously. We also had to find as many open-source datasets for ASL as possible, which was definitely a challenge; with a short budget and limited time we could not get all the words in ASL, and thus had to resort to spelling words out letter by letter. We also had trouble figuring out how to do real-time computer vision on a stream of ASL hand gestures.
## Accomplishments that we're proud of
We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of are:
* Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community.
* Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement.
* User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority.
* Speech Recognition: Our success in implementing speech recognition technology is a source of pride.
* Multiple AI Models: We also loved merging natural language processing and computer vision in the same application.
## What we learned
We learned a lot about how accessibility works for individuals that are from the deaf community. Our research led us to a lot of new information and we found ways to include that into our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNN's. We learned new technologies this weekend. As a team of individuals with different skillsets, we were also able to collaborate and learn to focus on our individual strengths while working on a project.
## What's next?
We have a ton of ideas planned for Sign Sync next!
* Translate between languages other than English
* Translate between other sign languages, not just ASL
* Native mobile app with no internet access required for more seamless usage
* Usage of more sophisticated datasets that can recognize words and not just letters
* Use a video image to demonstrate the sign language component, instead of static images
|
winning
|
## FLEX [Freelancing Linking Expertise Xchange]
## Inspiration
Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away.
## What it does
Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks follow-up questions about any other factors they need in a candidate. This data is then analyzed and matched against our database of freelancers to find the best-fitting candidates. The AI then talks back to the recruiter, presenting the top candidates based on the recruiter's requirements. Once the recruiter picks the right candidate, they can create a smart contract that's securely stored and managed on the blockchain for transparent payments and agreements.
## How we built it
We started with the frontend, built with **Next.JS**, and deployed the entire application with **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**.
Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on factors provided by the client. For secure transactions, we utilized **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through Smart Contracts developed in **Move**. We also used Flask and **Express.js** to manage backend and routing efficiently.
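A simplified sketch of the kind of candidate lookup described above, assuming a hypothetical `freelancers` table with a full-text index on its `skills` column; SingleStore speaks the MySQL wire protocol, so a standard MySQL client works for illustration:

```python
# Hedged sketch: query SingleStore for freelancers matching the keywords
# extracted from the recruiter's speech. Table and column names are
# illustrative, not the project's actual schema.
import pymysql

conn = pymysql.connect(host="svc-example.singlestore.com", user="admin",
                       password="...", database="flex")

def top_candidates(keywords: list[str], limit: int = 5):
    query = """
        SELECT name, skills,
               MATCH(skills) AGAINST (%s) AS relevance
        FROM freelancers
        WHERE MATCH(skills) AGAINST (%s)
        ORDER BY relevance DESC
        LIMIT %s
    """
    terms = " ".join(keywords)
    with conn.cursor() as cur:
        cur.execute(query, (terms, terms, limit))
        return cur.fetchall()

print(top_candidates(["react", "solidity", "smart contracts"]))
```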
## Challenges we ran into
We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable speech-to-text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full-stack application.
## Accomplishments that we're proud of
We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies.
## What we learned
We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration.
## What's next for FLEX
Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
|
# Inspiration
Traditional startup fundraising is often restricted by stringent regulations, which make it difficult for small investors and emerging founders to participate. These barriers favor established VC firms and high-net-worth individuals, limiting innovation and excluding a broad range of potential investors. Our goal is to break down these barriers by creating a decentralized, community-driven fundraising platform that democratizes startup investments through a Decentralized Autonomous Organization, also known as a DAO.
# What It Does
To achieve this, our platform leverages blockchain technology and the DAO structure. Here’s how it works:
* **Tokenization**: We use blockchain technology to allow startups to issue digital tokens that represent company equity or utility, creating an investment proposal through the DAO.
* **Lender Participation**: Lenders join the DAO, where they use cryptocurrency, such as USDC, to review and invest in the startup proposals.
* **Startup Proposals**: Startup founders create proposals to request funding from the DAO. These proposals outline key details about the startup, its goals, and its token structure. Once submitted, DAO members review the proposal and decide whether to fund the startup based on its merits.
* **Governance-based Voting**: DAO members vote on which startups receive funding, ensuring that all investment decisions are made democratically and transparently. The voting is weighted based on the amount lent in a particular DAO (a toy tally illustrating this weighting is sketched below).
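The weighted-vote tally itself lives inside the Solidity smart contracts; purely as an illustration of the rule described above, here is a toy Python sketch with made-up members and stakes:

```python
# Toy illustration of stake-weighted voting (the real logic lives in the
# Solidity smart contracts). Member names and amounts are made up.
def tally(votes: dict[str, bool], stakes: dict[str, float]) -> bool:
    """Return True if the stake-weighted 'yes' votes outweigh the 'no' votes."""
    yes = sum(stakes[m] for m, v in votes.items() if v)
    no = sum(stakes[m] for m, v in votes.items() if not v)
    return yes > no

stakes = {"alice": 5000.0, "bob": 1500.0, "carol": 800.0}  # USDC lent to the DAO
votes = {"alice": True, "bob": False, "carol": False}
print(tally(votes, stakes))  # True: alice's larger stake carries the proposal
```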
# How We Built It
### Backend:
* **Solidity** for writing secure smart contracts to manage token issuance, investments, and voting in the DAO.
* **The Ethereum Blockchain** for decentralized investment and governance, where every transaction and vote is publicly recorded.
* **Hardhat** as our development environment for compiling, deploying, and testing the smart contracts efficiently.
* **Node.js** to handle API integrations and the interface between the blockchain and our frontend.
* **Sepolia** where the smart contracts have been deployed and connected to the web application.
### Frontend:
* **MetaMask** Integration to enable users to seamlessly connect their wallets and interact with the blockchain for transactions and voting.
* **React** and **Next.js** for building an intuitive, responsive user interface.
* **TypeScript** for type safety and better maintainability.
* **TailwindCSS** for rapid, visually appealing design.
* **Shadcn UI** for accessible and consistent component design.
# Challenges We Faced, Solutions, and Learning
### Challenge 1 - Creating a Unique Concept:
Our biggest challenge was coming up with an original, impactful idea. We explored various concepts, but many were already being implemented.
**Solution**:
After brainstorming, the idea of a DAO-driven decentralized fundraising platform emerged as the best way to democratize access to startup capital, offering a novel and innovative solution that stood out.
### Challenge 2 - DAO Governance:
Building a secure, fair, and transparent voting system within the DAO was complex, requiring deep integration with smart contracts, and we needed to ensure that all members, regardless of technical expertise, could participate easily.
**Solution**:
We developed a simple and intuitive voting interface, while implementing robust smart contracts to automate and secure the entire process. This ensured that users could engage in the decision-making process without needing to understand the underlying blockchain mechanics.
## Accomplishments that we're proud of
* **Developing a Fully Functional DAO-Driven Platform**: We successfully built a decentralized platform that allows startups to tokenize their assets and engage with a global community of investors.
* **Integration of Robust Smart Contracts for Secure Transactions**: We implemented robust smart contracts that govern token issuance, investments, and governance-based voting, and verified them by writing extensive unit and e2e tests.
* **User-Friendly Interface**: Despite the complexities of blockchain and DAOs, we are proud of creating an intuitive and accessible user experience. This lowers the barrier for non-technical users to participate in the platform, making decentralized fundraising more inclusive.
## What we learned
* **The Importance of User Education**: As blockchain and DAOs can be intimidating for everyday users, we learned the value of simplifying the user experience and providing educational resources to help users understand the platform's functions and benefits.
* **Balancing Security with Usability**: Developing a secure voting and investment system with smart contracts was challenging, but we learned how to balance high-level security with a smooth user experience. Security doesn't have to come at the cost of usability, and this balance was key to making our platform accessible.
* **Iterative Problem Solving**: Throughout the project, we faced numerous technical challenges, particularly around integrating blockchain technology. We learned the importance of iterating on solutions and adapting quickly to overcome obstacles.
# What’s Next for DAFP
Looking ahead, we plan to:
* **Attract DAO Members**: Our immediate focus is to onboard more lenders to the DAO, building a large and diverse community that can fund a variety of startups.
* **Expand Stablecoin Options**: While USDC is our starting point, we plan to incorporate more blockchain networks to offer a wider range of stablecoin options for lenders (EURC, Tether, or Curve).
* **Compliance and Legal Framework**: Even though DAOs are decentralized, we recognize the importance of working within the law. We are actively exploring ways to ensure compliance with global regulations on securities, while maintaining the ethos of decentralized governance.
|
## Inspiration
Initially, we struggled to find a project idea. After circling through dozens of ideas and the occasional hacker's block, we were still faced with a huge ***blank space***. In the midst of all our confusion, it hit us that this feeling of desperation and anguish is familiar to all thinkers and creators. There came our inspiration - the search for inspiration. Tailor is a tool that enables artists to overcome their mental blocks in a fun and engaging manner, while leveraging AI technology. AI is very powerful, but finding the right prompt can sometimes be tricky, especially for children or those with special needs. With our easy to use app, anyone can find inspiration as swiftly as possible.
## What it does
The site helps artists generate creative prompts for DALL-E. By clicking the "add" button, a React component containing a random noun is added to the main container. Users can then specify the color and size of this noun. They can add as many nouns as they want, then specify the style and location of the final artwork. After hitting submit, a prompt is generated and sent to OpenAI's API, which returns an image.
## How we built it
It was built using React Remix, OpenAI's API, and a random noun generator API. TailwindCSS was used for styling which made it easy to create beautiful components.
## Challenges we ran into
Getting Tailwind installed, and installing dependencies in general. Sometimes our API wouldn't connect, and OpenAI rotated our keys since we were developing together. Even with Tailwind, it was sometimes hard to get the CSS to do what we wanted. Passing functions and state between parent and child components in React was also difficult. We tried to integrate Twilio with an API call but it wouldn't work, so we had to set up a separate backend on Vercel and manually paste the image link and phone number. Also, we learned Remix can't use react-speech libraries, so that was annoying.
## Accomplishments that we're proud of
* Great UI/UX!
* Connecting to the OpenAI Dalle API
* Coming up with a cool domain name
* Sleeping more than 2 hours this weekend
## What we learned
We weren't really familiar with React, as none of us had really used it before this hackathon. We really wanted to up our frontend skills and selected Remix, a metaframework based on React, to do multipage routing. It turned out to be a little overkill, but we learned a lot and are thankful to the mentors. They showed us how to avoid overuse of Hooks, troubleshoot API connection problems, and use asynchronous functions. We also learned many more Tailwind CSS classes and how to use gradients.
## What's next for Tailor
It would be cool to have this website as a browser extension, maybe just to make it more accessible, or even to have it scrape websites for AI prompts. Also, it would be nice to implement speech to text, maybe through AssemblyAI
|
winning
|
## Inspiration
A few weeks before HT6, Randy received a poorly shipped package with too much plastic filler. Coincidentally, Ryan received a package with almost 90% empty space that was packed even worse. This prompted our team to tackle a major problem: reducing packaging waste (and, as a result, improving logistics efficiency).
## What it does
Our application reduces the empty volume in packages when packing multiple items, using computer vision and heuristics for the NP-hard bin-packing problem.
Steps:
1. User inputs all boxes which can be used for packing
2. User places items in the scanner box, which uses computer vision to measure the dimensions of all products to be packed
3. Our program sends this data into the algorithm to come up with a solution to optimally pack the items given the boxes available
4. User optimally packs items by following the 3D visual solution
## How we built it
* Computer vision:
Used OpenCV and supporting libraries to create a duo-image pipeline that detects items through contours and measures them using reference pixel-to-centimetre conversions (see the sketch after this list).
* Algorithm:
We used an algorithm based on an existing heuristic outlined in <https://github.com/enzoruiz/3dbinpacking>
* Visualizer:
Our 3D canvas was built using the react-three-fiber library. We created blocks to represent the items that need to be shipped and laid them out according to the configuration generated by our algorithm.
* Fullstack:
Used a React frontend and a Flask backend; to store images we used Amazon S3. For styling, Chakra UI is used.
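A condensed sketch of the measurement step from the computer-vision bullet above: detect item contours, then convert pixel dimensions to centimetres using a reference object of known size. Thresholds, the reference width, and the "reference is the left-most contour" rule are placeholders, not the project's exact pipeline.

```python
# Hedged sketch of contour-based dimensioning with OpenCV. Parameter values
# and the reference-object convention are illustrative.
import cv2

REFERENCE_WIDTH_CM = 5.0  # known width of the reference object in the scanner box

img = cv2.imread("scanner_frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
boxes.sort(key=lambda b: b[0])  # assume the reference object is left-most

ref_w_px = boxes[0][2]
px_per_cm = ref_w_px / REFERENCE_WIDTH_CM

for x, y, w, h in boxes[1:]:
    print(f"item at ({x},{y}): {w / px_per_cm:.1f} cm x {h / px_per_cm:.1f} cm")
```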
## Challenges we ran into
* OpenCV detection
Contour detection proved to be more difficult than expected since we had to continually make adjustments to hyperparameters and the environment.
* Connecting various components
For most of our team, this was the first time developing an application with a Flask backend, so routing all the API calls and handling data transfer between Python and JavaScript (and vice versa) was a challenge.
## Accomplishments that we're proud of
* Creating a functional fullstack application
* Visualization of our 3d solution
* Accurate object detection through OpenCV
## What we learned
* Computer Vision (OpenCV)
* Algorithm
* AWS and Flask
## What's next for Package Optimizer
Implement hardware: Add a conveyor belt to it so that we can automatically scan objects
Improve our algorithm to take into account parameters such as weight, material etc.
|
## Inspiration
Therapy is all about creating a trusting relationship between clients and their therapist. Building rapport, or trust, is the main job of a therapist, especially at the beginning. But in current practice, therapists have to take notes throughout the session to keep track of their clients. This does two things:
* Distracts therapists from being fully involved in the sessions.
* Makes clients feel disconnected from their therapists (due to minimal or no eye contact, more focus on note-taking than on "connecting" with patients, etc.)
## What it does
Enter **MediScript**.
MediScript is an AI-powered android application that:
* documents the conversation in therapy sessions
* supports speaker diarization (multiple speaker labeling)
* determines the theme of the conversation (eg: negative news, drug usage, health issues, etc.)
* transparently share session transcriptions with clients or therapists as per their consent
With MediScript, we aim to automate the tedious note-taking procedures in therapy sessions and as a result, make therapy sessions engaging again!
## How we built it
We built an Android application, adhering to Material Design UI guidelines, and integrated it with the Chaquopy module to run python scripts directly via the android application. Moreover, the audio recording of each session is stored directly within the app which sends the recorded audio files over to an AWS S3 bucket. We made AssemblyAI API calls via the python scripts and accessed the session recording audio files over the same S3 bucket while calling the API.
Documenting conversations, multi-speaker labeling, and conversation theme detection - all of this was made possible by using the brilliant API by **AssemblyAI**.
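A pared-down sketch of the AssemblyAI call described above: submit the S3 audio URL with speaker diarization and topic detection enabled, then poll for the transcript. The bucket URL, key handling, and polling interval are placeholders.

```python
# Hedged sketch: transcribe a session recording stored in S3 with AssemblyAI's
# speaker_labels (diarization) and iab_categories (topic/theme) options.
import time
import requests

API_KEY = "..."  # AssemblyAI API key
HEADERS = {"authorization": API_KEY}

def transcribe_session(audio_url: str) -> dict:
    job = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers=HEADERS,
        json={"audio_url": audio_url,
              "speaker_labels": True,     # multiple speaker labeling
              "iab_categories": True},    # conversation theme detection
    ).json()

    while True:  # poll until the transcription job finishes
        result = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}",
            headers=HEADERS,
        ).json()
        if result["status"] in ("completed", "error"):
            return result
        time.sleep(3)

transcript = transcribe_session("https://my-bucket.s3.amazonaws.com/session-01.mp3")
for utt in transcript.get("utterances", []):
    print(f"Speaker {utt['speaker']}: {utt['text']}")
```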
## Challenges we ran into
Configuring Python scripts with the Android application proved to be a big challenge initially. We had to experiment with lots of modules before finding Chaquopy, which was a perfect fit for our use case. The AssemblyAI API was quite easy to use, but we had to figure out a way to host our .mp3 files over the internet so that the API could access them instantly.
## Accomplishments that we're proud of
None of us had developed an Android app before so this was certainly a rewarding experience for all 3 of us. We weren't sure we'd be able to build a functioning prototype in time but we're delighted with the results!
## What's next for MediScript
* Privacy inclusion: we wish to use more privacy-centric methods to share session transcripts with the therapists and their clients
* Make a more easy-to-use and clean UI
* Integrate emotion detection capabilities for better session logging.
|
## Inspiration
Have you ever faced the problem of trying to fit too many takeout containers into your fridge? Or tried to stock your kitchen closet with an obscene amount of ramen cups? We started by looking at common problems so niche that there wasn't a solution readily available online yet. We wanted to solve something that affects ordinary people and is simple and easy to use.
## What it does
The goal of the project was to fit objects onto a shelf based on their sizes. We got up to an implementation where objects are stacked first based on size, and a shelf is populated with stacks of objects of decreasing size.
## How we built it
Using React.js, we take user inputs and send them in a GET request to set up the information for parsing. With Django, we extract the necessary information from the request and pass it into Python for the script to make its computations.
We wrote Python classes for Pantry, which consists of Shelves that contain Stacks, which hold Foodstuffs. We wrote an APIReader to parse JSON files to initialize the Foodstuffs. To streamline development, we wrote unit tests to help us debug and trace our program along the way.
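A condensed sketch of the class hierarchy described above; attribute names and the stacking rule are simplified for illustration rather than copied from the project.

```python
# Hedged sketch of the Pantry / Shelf / Stack / Foodstuff hierarchy.
class Foodstuff:
    def __init__(self, name: str, size: float):
        self.name = name
        self.size = size

class Stack:
    """Items piled on top of each other, largest at the bottom."""
    def __init__(self):
        self.items: list[Foodstuff] = []

    def can_add(self, item: Foodstuff) -> bool:
        return not self.items or item.size <= self.items[-1].size

    def add(self, item: Foodstuff) -> None:
        self.items.append(item)

class Shelf:
    def __init__(self, width: float):
        self.width = width
        self.stacks: list[Stack] = []

class Pantry:
    def __init__(self, shelves: list[Shelf]):
        self.shelves = shelves
```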
## Challenges we ran into
The foundation (i.e. the classes) took the longest because we ensured that everything was going according to plan before moving on to the next step. Some time was lost here, but it was important that we had the fundamentals down.
## Accomplishments that we're proud of
The React site looks great! We got a working PantrySorter up with the classes all interacting with each other. Some of us had only learned Python briefly or for just a semester.
## What we learned
We learned a lot about different tools from each other, such as git, React, Django, and Python. We also reaped the benefits of planning thoroughly and early using a mockup and UML so that it was easier to structure classes and find roadblocks.
## What's next for Pantry Optimizer
Our original idea was to implement PantryOptimizer in 3D with length, width, and height to consider as dimensions. Given more time, we would also think about rotating the objects to fit them in even more efficiently, and we are in the process of adding a dynamic shelf method in PantrySorter to let the user insert items after the initial filling of the Pantry. This application can also be stretched to include fridges, closets, bookshelves, you name it! It comes down to reducing the items to simple shapes to work with. We can also move away from using local host and set up PantryOptimizer on a public domain.
|
partial
|
# Inspiration
There are a variety of factors that contribute to *mental health* and *wellbeing*. For many students, the stresses of remote learning have taken a toll on their overall sense of peace. Our group created **Balance Pad** as a way to serve these needs. Balance Pad's landing page gives users access to various features that aim to improve their wellbeing.
# What it does
Balance Pad is a web-based application that gives users access to **several resources** relating to mental health, education, and productivity. Its initial landing page is a dashboard tying everything together to make a clear and cohesive user experience.
### Professional Help
>
> 1. *Chat Pad:* The first subpage of the application has a built-in *Chatbot* offering direct access to a **mental health professional** for instant messaging.
>
>
>
### Productivity
>
> 1. *Class Pad:* With the use of the Assembly API, users can convert live lecture content into text-based notes. This feature will allow students to focus on live lectures without the stress of taking notes. Additionally, this speech-to-text aid will increase accessibility for those requiring note takers.
> 2. *Work Pad:* Timed working sessions using the Pomodoro technique and notification restriction are also available on our webpage. The Pomodoro technique is a proven method to enhance focus and productivity and will benefit students.
> 3. *To Do Pad:* Helps users stay organized
>
>
>
### Positivity and Rest
>
> 1. *Affirmation Pad:* Users can upload their accomplishments throughout their working sessions. Congratulatory texts and positive affirmations will be sent to the provided mobile number during break sessions!
> 2. *Relaxation Pad:* Offers options to entertain students while resting from studying. Users are given a range of games to play with and streaming options for fun videos!
>
>
>
### Information and Education
>
> 1. *Information Pad:* is dedicated to info about all things mental health
> 2. *Quiz Pad:* This subpage tests what users know about mental health. By taking the quiz, users gain valuable insight into how they are and information on how to improve their mental health, wellbeing, and productivity.
>
>
>
# How we built it
**React:** Balance Pad was built using React. This allowed for us to easily combine the different webpages we each worked on.
**JavaScript, HTML, and CSS:** React builds on these languages so it was necessary to gain familiarity with them
**Assembly API:** The assembly API was used to convert live audio/video into text
**Twilio:** This was used to send instant messages to users based on tracked accomplishments (a minimal sketch follows below)
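The Affirmation Pad's congratulatory texts go out through Twilio's messaging API; a minimal sketch, with credentials and phone numbers as placeholders:

```python
# Hedged sketch: send a congratulatory SMS during a break, as the Affirmation
# Pad does. Account credentials and phone numbers are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def send_affirmation(to_number: str, accomplishment: str) -> None:
    client.messages.create(
        body=f"Great work finishing '{accomplishment}'! Enjoy your break.",
        from_="+15551234567",   # Twilio-provisioned number
        to=to_number,
    )

send_affirmation("+15557654321", "chapter 3 problem set")
```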
# Challenges we ran into
>
> * Launching new apps with React via Visual Studio Code
> * Using Axios to run API calls
> * Displaying JSON information
> * Domain hosting of Class Pad
> * Working with Twilio
>
>
>
# Accomplishments that we're proud of
*Pranati:* I am proud that I was able to learn React from scratch, work with new tech such as Axios, and successfully use the Assembly API to create the Class Pad (something I am passionate about). I was able to persevere through errors and build a working product that is impactful. This is my first hackathon and I am glad I had so much fun.
*Simi:* This was my first time using React, Node.js, and Visual Studio. I don't have a lot of CS experience so the learning curve was steep but rewarding!
*Amitesh:* Got to work with a team to bring a complicated idea to life!
# What we learned
*Amitesh:* Troubleshooting domain creation for various pages, supporting teammates and teaching concepts
*Pranati:* I learned how to use new tech such as React, new concepts such API calls using Axios, how to debug efficiently, and how to work and collaborate in a team
*Simi:* I learned how APIs work, basic html, and how React modularizes code. Also learned the value of hackathons as this was my first
# What's next for Balance Pad
*Visualizing Music:* Our group hopes to integrate BeatCaps software to our page in the future. This would allow a more interactive music experience for users and also allow hearing impaired individuals to experience music
*Real Time Transcription:* Our group hopes to implement in real time transcription in the Class Pad to make it even easier for students.
|
## Inspiration
**A lot of people have stressful things on their mind right now.** [According to a Boston University study, "depression symptom prevalence was more than 3-fold higher during the COVID-19 pandemic than before."](https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2770146)
Sometimes it’s hard to sleep or get a good night’s rest because of what happened that day. If you say or write everything down, it helps get it out of your mind so you’re not constantly thinking about it. Diaries take a long time to write in, and sometimes you want to talk. **Voice diaries aren't common and they are quicker and easier to use than a real diary.**
## What it does
When you're too tired to write out your thoughts at the end of the day, **you can simply talk aloud and our app will write it down for you, easy right?**
Nite Write asks you questions to get you thinking about your day. It listens to you while you speak. You can take breaks and continue speaking to the app. You can go back and look at old posts and reflect on your days!
## How we built it
* We used **Figma** to plan out the design and flow of our web app.
* We used **WebSpeech API** and **JavaScript** to hook up the speech-to-text transcription.
* We used **HTML** and **CSS** for the front-end of the web app.
* And lastly, we used **Flask** to put the entire app together.
## Challenges we ran into
Our first challenge was understanding how to use Flask: its routes, templates, and syntax. Another challenge was the lack of time and integrating the different parts of the app while working virtually. It was difficult to coordinate and use our time efficiently since we lived all over the country in different timezones.
## Accomplishments that we're proud of
**We are proud of being able to come together virtually to address this problem we all had!**
## What we learned
We learned how to use Flask, the WebSpeech API, and CSS. We also learned how to put together a demo with slides and how to work together virtually.
## What's next for Nite Write?
* Show summaries and trends on a person's most frequent entry topics or emotion
* Search feature that filters your diary entries based on certain words you used
* Light Mode feature
* Ability to sort entries based on topic/etc
|
## Inspiration
College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user.
Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health.
## What it does
Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline.
The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity.
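A stripped-down sketch of the greedy matching pass described above; the graph augmentation and preference rebalancing steps are omitted, and the task fields shown here are simplified assumptions.

```python
# Hedged sketch of the greedy pass: sort tasks by due date, then fill each
# day's available hours with the earliest-due unfinished task.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_left: float
    due_day: int  # day index of the deadline

def greedy_schedule(tasks: list[Task], daily_hours: list[float]) -> dict:
    schedule: dict[int, list[tuple[str, float]]] = {}
    pending = sorted(tasks, key=lambda t: t.due_day)
    for day, free in enumerate(daily_hours):
        schedule[day] = []
        for task in pending:
            if free <= 0:
                break
            if task.hours_left <= 0 or task.due_day < day:
                continue  # finished or already past its deadline
            spend = min(free, task.hours_left)
            task.hours_left -= spend
            free -= spend
            schedule[day].append((task.name, spend))
    return schedule

tasks = [Task("pset 4", 6, due_day=2), Task("essay draft", 4, due_day=5)]
print(greedy_schedule(tasks, daily_hours=[3, 3, 3, 3, 3, 3]))
```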
The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress.
## How we built it
The scheduling algorithm uses a greedy algorithm and is implemented with Python, Object Oriented Programming, and MatPlotLib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python.
## Challenges we ran into
The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug various bugs that arose. We initially struggled setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple times before reaching the greedy algorithm solution presented here.
## Accomplishments that we're proud of
We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. We also feel strongly that this product has real-world usability, and are proud of validating the idea during YHack.
## What we learned
We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package also was a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.
## What's next for Clockwork
Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
|
partial
|
## Inspiration
The inspiration for this project came from one of the sponsors at **HTN (Co:here)**. Their goal is to make AI/ML accessible to devs, which gave me the idea that I could build a platform where people who do not even know how to code can build their own machine learning models.
Coding is a great skill to have, but we need to ensure that it doesn't become a necessity to survive. There are a lot of people who prefer to work with the UI and cannot understand code. As developers, it is our duty to cater to this audience as well. This is my inspiration and goal through this project.
## What it does
The project works by taking in the necessary details of the machine learning model that are required to build it. Then it works in the backend to dynamically generate code and build the model. It can even decide whether or not to convert data in the dataset to vectors, based on checks on the data at run time, ensuring that the model doesn't fail. It then returns the required metric for the user to check.
## How I built it
I first built a Flask backend that took in information regarding the model using JSON. Then I built a service to parse and evaluate the necessary conditions for the Scikit Learn models, and then train and predict with it. After ensuring that my backend was working properly, I moved to the front-end where I spent a lot of my time, building a clean UI/UX design so that the users can have the best and the most comfortable experience while using my application.
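A boiled-down sketch of how a backend like this can turn a JSON-style payload into a trained scikit-learn model at run time; the supported model names, config fields, and the built-in dataset stand-in are illustrative, not the app's actual schema.

```python
# Hedged sketch: build and evaluate a scikit-learn model from a config dict,
# roughly the pattern a Flask endpoint could follow.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

MODELS = {
    "logistic_regression": LogisticRegression,
    "random_forest": RandomForestClassifier,
}

def train_from_config(config: dict) -> float:
    X, y = load_iris(return_X_y=True)  # stand-in for the user's uploaded dataset
    model = MODELS[config["model"]](**config.get("params", {}))
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model.fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))

print(train_from_config({"model": "random_forest", "params": {"n_estimators": 100}}))
```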
## Challenges I ran into
One of the key challenges of this project is to generate code dynamically at run-time upon user input. This requirement is a very hefty one as I had to ensure that the inputs won't break the code. I read through the documentation of Scikit Learn and worked with it, while building the web app.
## Accomplishments that I'm proud of
I was able to build a full-fledged working application on my own, building the entire frontend and backend from scratch. The application is able to take in the features of the model and the dataset, and display the results of the training. This allows the user to tweak their model and check the results every time to see what works best for them. I'm especially proud of being able to generate and run code dynamically based on user input.
## What I learned
This project, more than anything, was a challenge to myself, to see how far I had come from my last HackTheNorth experience. I wanted to get the full experience of building every part of the project, and that's why I worked solo. This gave me experience of all the aspects of building a software from scratch in limited amount of time, allowing me to grasp the bigger picture.
## What's next for AutoML
My very first step will be to work on integrating TensorFlow to this project. My initial goal was to have a visual representation of Neural Network layers for users to drag and drop. Due to time and technical constraints, I couldn't fully idealize my goal. So this is the first thing I am going to work with. After this, I'll probably work with authentication so that people can work on their projects and store their progresses.
|
## Inspiration
As university students, we empathize with the struggle of paying off student loans and working tireless hours trying to complete schoolwork. Money spent on ordering food over the course of a week often becomes an afterthought during midterm week, when the last thing we want to spend our time on is making meals. It is too easy to be reckless with our money. That is why we spoke to Melissa, a third-year engineering student who knows the classic difficulties of university life first-hand. We realized that there is a need in the market to better visualize people's personal finances.
## What it does
Our solution has two components; one being the website which harnesses and gathers user data, the second being the AR experience powered by EchoAR’s system. With these two hand in hand, we hope to deliver an interactive user experience where people see how they stand financially in real-time in a unique way.
## How we built it
1) Miro board to better ideate what issues are prevalent in our lives and what we want to address.
2) The prototype and demo are displayed using Figma.
3) Discussed potential stacks we could use to best execute the project.
4) The original idea used the same software and languages as mentioned below. However, we changed from Excel and MongoDB to using EchoAR as our back-end and React as our front-end.
## Challenges we ran into
1) EchoAR can only create custom animations using Unity. With our original plan being to use Excel to generate the 3D charts and present them using EchoAR, we had to pivot and discover new ways to deliver a similar message with different content.
2) React routers.
3) Using MERN stack for the first time.
4) Tried to incorporate MongoDB.
## Accomplishments that we're proud of
1) Working AR prototype that helps visualize finances by placing users in 5 different categories.
2) Introduction to the back-end (MongoDB and Node.js) for some members.
## What we learned
1) Learned how to use React, Bootstrap, and AR in daily life.
2) Trying to figure out how to create custom images in augmented reality.
## What's next for Value Visualiz-AR
1) Custom graphs and charts in augmented reality.
2) Helping users track their progress in achieving their budgeting and saving goals.
3) Creating special AR models for users when they reach their milestones.
|
## Inspiration
We wanted to build a technical app that is actually useful. Scott Forstall's talk at the opening ceremony really spoke to each of us, and we decided then to create something that would not only show off our technological skill but also be genuinely useful. Going to the doctor is inconvenient and not usually immediate, and a lot of the time it ends up being a false alarm. We wanted to remove this inefficiency to make everyone's lives easier and make healthy living more convenient. We did a lot of research on health-related data sets and found a lot of data on different skin diseases. This made it very easy for us to choose to build a model using this data that would allow users to self-diagnose skin problems.
## What it does
Our ML model has been trained on hundreds of samples of diseased skin to identify a wide variety of malignant and benign skin diseases. We have a mobile app that lets you take a picture of a patch of skin that concerns you, runs it through our model, and tells you how the model classified your picture. Finally, the picture also gets sent to a doctor along with our model's result, and the doctor can override that decision. The new classification is then fed back into our model to reinforce correct outputs and penalize wrong ones, i.e. adding a reinforcement learning component to our model as well.
## How we built it
We built the ML model in IBM Watson from public skin disease data from ISIC(International Skin Imaging Collaboration). We have a platform independent mobile app built in React Native using Expo that interacts with our ML Model through IBM Watson's API. Additionally, we store all of our data in Google Firebase's cloud where doctors will have access to them to correct the model's output if needed.
## Challenges we ran into
Watson had a lot of limitations in terms of data loading and training, so it had to be done in extremely small batches, and it prevented us from utilizing all the data we had available. Additionally, all of us were new to React Native, so there was a steep learning curve in implementing our mobile app.
## Accomplishments that we're proud of
Each of us learned a new skill at this hackathon, which is the most important thing for us to take away from any event like this. Additionally, we came in wanting to implement an ML model, and we implemented one that is far more complex than we initially expected by using Watson.
## What we learned
Web frameworks are extremely complex with very similar frameworks being unable to talk to each other. Additionally, while REST APIs are extremely convenient and platform independent, they can be much harder to use than platform-specific SDKs.
## What's next for AEye
Our product is really a proof of concept right now. If possible, we would like to polish both the mobile and web interfaces and come up with a complete product for the general user. Additionally, as more users adopt our platform, our model will get more and more accurate through our reinforcement learning framework.
See a follow-up interview about the project/hackathon here! <https://blog.codingitforward.com/aeye-an-ai-model-to-detect-skin-diseases-252747c09679>
|
losing
|
## **What is Cointree?**
Cointree is a platform where users get paid to go green.
Because living more sustainably shouldn't be more expensive. In fact, we should be rewarded for living sustainably – and that's exactly what Cointree does.
Cointree connects companies looking to offset carbon emissions with users looking to live a more sustainable life.
## **How does Cointree accomplish this?**
More and more companies want to become carbon neutral. Carbon offsets are a means for companies to become carbon neutral even if they still have to emit carbon dioxide into the air – by paying a third party to remove or avoid emitting carbon dioxide through means such as reducing driving pollution, cutting down fewer trees, or building wind farms. But as these third parties have nearly quadrupled in size in just the past two years, debates have arisen about the effectiveness and value these carbon offsetting companies really provide.
Cointree takes a drastically different approach, instead connecting individual people to these companies who are willing to pay carbon offsets.
Cointree accomplishes this by having two different clients: an iOS app, and a web client.
The web client is for the companies paying carbon offsets, who can sign in, deposit currency, and view the progress on their carbon offset goals. In the process, we take a small cut out of the company's deposit.
Meanwhile users install our Cointree iOS app. There they can announce that they, say, installed solar panels, or bought an electric vehicle, or even planted a tree. Then they demonstrate proof of completion (by scanning an invoice for instance), and they get paid. Simple as that.
You might be wondering, how exactly do we connect the two, and more importantly how do we store data in a safe, efficient, and accountable system? The answer, ***blockchain***.
## **What is unique about Cointree?**
At Cointree, all of our data is on the blockchain. And to us, that’s really important. We want the radical transparency that blockchain offers – it means that anyone can see what carbon offsets companies are paying, and keep them accountable. Indeed, the web client also acts as a log where anyone can see all the carbon offsets that a certain company bought. Real transparency.
We use Polygon's MATIC currency and Ethereum platform in order to develop a system where companies deposit MATIC into a smart contract that functions almost like a vault. When users demonstrate proof of completion of a certain task, we send money to their wallet (as a function of how much CO2 they removed / won't put into the atmosphere thanks to their task). Thanks to the speed and security of Polygon, we offer a really great experience here.
Check out our video for a deep-dive into how Cointree works on the blockchain. There's some pretty novel stuff in there (also check out our attached slides).
## Challenges we ran into
The biggest challenge was interfacing with the blockchain from a native iOS app. It's nearly impossible – blockchain tooling is almost exclusively made for the web. But we didn't want to ditch the iOS app, since we wanted the smoothest possible experience for the end user. So instead we had to come up with clever workarounds to offload any interfacing with the blockchain to our express.js backend.
## Accomplishments that we're proud of
We're really proud of the range of things we were able to make – from an iOS client to a web client, from smart contracts to REST APIs. All of our past experience as developers across our whole (short) lives came into use here.
## Want to view the source code?
[Cointree iOS App](https://github.com/nikitamounier/Cointree-iOS)
[Cointree Smart Contracts & REST API](https://github.com/sidereior/cointree-smartcontract)
[Cointree web client](https://github.com/jmurphy5613/cointree-web)
[Cointree backend](https://github.com/jmurphy5613/cointree-backend)
## What's next for Cointree
* Expanding to new sustainable projects (planting and growing trees, using public transport, etc.)
* Third-party verification of invoices & receipts: these companies will check their own databases to confirm that invoices are not fraudulent
* Giving sustainable companies and retailers ways to benefit: companies that sell products we offer payment for (for example, electric cars) can give a percentage discount and better reach their market segment
* Improving security of the Vault smart contract and the communication between the Vault smart contract and the iOS app
* Reworking the NFT minting process: rather than minting NFTs, which are expensive, we can have a parent smart contract, create child smart contracts for each company, and use the data in these to verify proofs of transactions without the cost
|
## Inspiration
People are increasingly aware of climate change but lack actionable steps. Everything in life has a carbon cost, but it's difficult to understand, measure, and mitigate. Information about carbon footprints of products is often inaccessible for the average consumer, and alternatives are time consuming to research and find.
## What it does
With GreenWise, you can link email or upload receipts to analyze your purchases and suggest products with lower carbon footprints. By tracking your carbon usage, it helps you understand and improve your environmental impact. It provides detailed insights, recommends sustainable alternatives, and facilitates informed choices.
## How we built it
We started by building a tool that uses computer vision to read information off a receipt, an API to gather information about the products, and finally the ChatGPT API to categorize each of the products. We also set up an alternative way of gathering information, in which the user forwards digital receipts to a unique email address.
Once we finished the process of getting information into storage, we built a web scraper to gather the carbon footprints of thousands of items for sale in American stores, and built a database containing these along with an AI-vectorized form of each product's description.
Vectorizing the product titles allowed us to quickly judge the linguistic similarity of two products by doing a quick mathematical operation. We utilized this to make the application compare each product against the database, identifying products that are highly similar with a reduced carbon output.
This web application was built with a Python Flask backend and Bootstrap for the frontend, and we utilize ChromaDB, a vector database that allowed us to efficiently query through vectorized data.
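A trimmed-down sketch of the similarity lookup described above using ChromaDB's default embedding function; the collection name, metadata fields, example products, and footprint numbers are illustrative.

```python
# Hedged sketch: store product descriptions with carbon footprints in ChromaDB,
# then find lower-carbon alternatives for a purchased item by vector similarity.
import chromadb

client = chromadb.Client()
products = client.create_collection("products")

products.add(
    ids=["p1", "p2", "p3"],
    documents=["organic oat milk 1L", "whole dairy milk 1L", "almond milk 1L"],
    metadatas=[{"kg_co2e": 0.9}, {"kg_co2e": 3.2}, {"kg_co2e": 0.7}],
)

def greener_alternatives(purchased: str, purchased_co2e: float, n: int = 3):
    hits = products.query(query_texts=[purchased], n_results=n)
    return [
        (doc, meta["kg_co2e"])
        for doc, meta in zip(hits["documents"][0], hits["metadatas"][0])
        if meta["kg_co2e"] < purchased_co2e
    ]

print(greener_alternatives("2% dairy milk 1 litre", purchased_co2e=3.0))
```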
## Accomplishments that we're proud of
In 24 hours, we built a fully functional web application that uses real data to provide real actionable insights that allow users to reduce their carbon footprint
## What's next for GreenWise
We'll be expanding e-receipt integration to support more payment processors, making the app seamless for everyone, and forging partnerships with companies to promote eco-friendly products and services to our consumers
[Join the waitlist for GreenWise!](https://dea15e7b.sibforms.com/serve/MUIFAK0jCI1y3xTZjQJtHyTwScsgr4HDzPffD9ChU5vseLTmKcygfzpBHo9k0w0nmwJUdzVs7lLEamSJw6p1ACs1ShDU0u4BFVHjriKyheBu65k_ruajP85fpkxSqlBW2LqXqlPr24Cr0s3sVzB2yVPzClq3PoTVAhh_V3I28BIZslZRP-piPn0LD8yqMpB6nAsXhuHSOXt8qRQY)
|
## Inspiration
I wanted to create something that used the OpenAI image generation API in some way, as well as use my familiarity with Python's PyQt5 module to build the user interface.
## What it does
This app allows the user to enter several prompts, once the user is satisfied they can click generate and the images will get created. From there users can click on the photos to preview them, download them, copy them to the clipboard, or even make some edits such as sharpening the image, converting the image to black and white, and rotating the image.
## How we built it
I built this using PyQt5 to do all the work surrounding the user interface, as well as connecting all my buttons to functions that get executed when clicked. I made use of OpenAI to generate the images and then convert them into QPixmap objects in order to display them. The PIL module was used to edit the images on the editor page.
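A condensed sketch of the generate-and-display flow described above; it uses the pre-1.0 `openai.Image.create` interface, and the prompt, key, and widget wiring are assumptions rather than the app's exact code.

```python
# Hedged sketch: generate an image from a prompt and show it in a PyQt5 label.
import sys
import requests
import openai
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication, QLabel

openai.api_key = "sk-..."  # placeholder key

def generate_pixmap(prompt: str) -> QPixmap:
    # Request one 512x512 image and download it from the returned URL.
    resp = openai.Image.create(prompt=prompt, n=1, size="512x512")
    image_bytes = requests.get(resp["data"][0]["url"]).content
    pixmap = QPixmap()
    pixmap.loadFromData(image_bytes)
    return pixmap

app = QApplication(sys.argv)
label = QLabel()
label.setPixmap(generate_pixmap("a lighthouse at dawn, watercolor"))
label.show()
sys.exit(app.exec_())
```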
## Challenges we ran into
I ran into some challenges toward the end where I had some user interface elements randomly disappear and images were not resizing correctly, but luckily I was able to find a solution.
## Accomplishments that we're proud of
I am happy to have built an easy-to-use UI and to have incorporated the OpenAI API, and I hope to use it a lot more as it improves over time.
## What we learned
I learned more about designing a UI and navigating between several screens, as well as how to use Openai to generate images.
## What's next for AI Generation Image Editor
Next, I would like to add additional options, such as picking the image size when downloading, and rework some of the styling elements in the UI.
|
winning
|
## What it does
YapTrack is a real-time meeting guide that forces you to close the loop on all of your ideas
## Inspiration
One of our team members spent 5 hours in a meeting just to MOVE SOME COLUMNS IN A DATABASE. The meeting should have been two hours, but it got stretched out because every time we proposed a small change, we weren't able to keep track of all of the prior requirements to our solution and the resulting implications of the change. This resulted in a feedback loop where we kept following the chain of ideas, forgetting why we changed something and reverting back and forth. Five software engineers, paid about $500 in company resources per hour, got dragged out for 3 extra hours ($1500 in unnecessary costs) just to make a decision that they had considered from the start.
Topic fluctuations constantly arise in our meetings. We are tired of wasting our time trying to remember how to circle back to a concrete starting point. So we intended to solve this problem by creating a meeting guide that highlights questions that have yet to be explored to completion and previous decisions that informed our decision making process.
## How we built it
We take speech audio and convert it into text in real time. Because Groq requires complete audio files, we send many small audio segments to create real-time transcriptions of spoken language, constantly slicing our audio stream into 5-second increments. We query LLMs to predict the intended sentence structure when a split occurs in the middle of a word. For example, when the split text looked like "A", "wire", the LLM could predict that we intended to say "Acquire". This assistive LLM merging of segments seamlessly builds on our ideas as we gain more speech-to-text information.

As information keeps arriving, we also use llama-index with an LLM to extract entity-relationship-entity triplets for building nodes and edges in our knowledge graph. Once the knowledge graph is built, we visualize current idea branches and the full graph. Additionally, the entities can be re-evaluated for extra correlation edges by using a transformer model to embed them and thresholding cosine similarity scores.
For the backend, a graph database, Neo4j, keeps track of both our initial node relationships based on the text and the additional correlations derived after generating the knowledge graph.
Repo: a Next.js frontend with a Flask backend and Aceternity UI.
## Challenges we ran into
**Similarity search and similarity thresholding:** When generating different entities and relationships using our LLM, we realized that some nodes might be very closely correlated but not connected by any sentence structure. Therefore, we used the BERT (Bidirectional Encoder Representations from Transformers) model, a pre-trained language model developed by Google. It gives us a semantic encoding of all our entities, which allows us to compute cosine similarities on our embeddings and find new correlations in our knowledge graph. Based on a threshold, we create correlation edges on the graph. This could be useful for finding new relationships or merging highly related nodes that represent near-identical ideas.
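A reduced sketch of that embedding-and-threshold step; the mean-pooling strategy and the 0.85 cutoff are illustrative choices, not the project's tuned values.

```python
# Hedged sketch: embed entity strings with BERT, then flag pairs whose cosine
# similarity clears a threshold as candidate correlation edges.
from itertools import combinations
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # mean-pooled sentence vector

entities = ["sleeping on the couch", "spending time sleeping on the couch", "database migration"]
vectors = {e: embed(e) for e in entities}

THRESHOLD = 0.85  # placeholder cutoff; the real value depends on the use case
for a, b in combinations(entities, 2):
    sim = torch.nn.functional.cosine_similarity(vectors[a], vectors[b], dim=0).item()
    if sim >= THRESHOLD:
        print(f"correlation edge: {a!r} <-> {b!r} (cos={sim:.2f})")
```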
**Redundancies in knowledge graph entity generation:** We queried Llama for (Source Entity -> Edge -> Destination Entity) mappings. Though it immediately began to form pairings, it often created high-similarity pairings like ("greg", "likes", "sleeping on the couch") and ("greg", "enjoys", "spending time sleeping on the couch"), a redundancy that we wanted to avoid when querying the graph. We used three techniques to mitigate these redundancies:
* GraphDB input to our RAG queries provided context for matching new entities to closely related existing ones.
* Prompt engineering using examples of closely matching inputs being generalized into the same entities.
* Vector cosine similarity scores: depending on the use case of our knowledge graph (i.e. learning, brainstorming, requirements analysis), we set different similarity thresholds.
**Knowledge graph real-time visual display:** Because the knowledge graph was dynamically changing due to updates from real-time audio streaming, it was difficult to keep the UI live. To address this issue, we had to extensively research React components and their operations.
## Accomplishments that we're proud of
* Making use of the Neo4j Graph Database, allowing us to query data using the Cypher query language to extract paths and edges with less logic on the backend, and more logic in-database.
* Using Groq and llama-index to quickly streamline speech recognition into a knowledge graph database.
* Leveraging prompt engineering to get full power from our LLMs.
* Finding more discrete relationships using transformer model embedding on text entities.
## What we learned
* Learning how to use Groq (speech recognition and llama-index node extractions).
* Fine-tuning LLM prompts for specific output formats.
* Embedding text information using machine learning.
## What's next for YapTrack
* Adding a large list of prompt options for knowledge graph styles, or allowing users to make their own prompts for knowledge graph entities to fit their specific needs.
* Merging nodes based on correlation calculations from similarity search.
|
## Inspiration
Our inspiration for this project came from recent research showing that models can perform the work of data engineers and provide accurate tools for analysis. We realized that such work is impactful in various sectors, including finance, climate change, medical devices, and much more. We decided to test our solution on various datasets to see its potential impact.
## What it does
Stratify lets users query a SQL database through natural-language conversation with a chatbot, applies sequential reasoning to handle multi-step analysis questions, and visualizes the results on an interactive dashboard.
## How we built it
For our project, we developed a sophisticated query pipeline that integrates a chatbot interface with a SQL database. This setup enables users to make database queries effortlessly through natural language inputs. We utilized SQLAlchemy to handle the database connection and ORM functionalities, ensuring smooth interaction with the SQL database. To bridge the gap between user queries and database commands, we employed LangChain, which translates the natural language inputs from the chatbot into SQL queries. To further enhance the query pipeline, we integrated Llama Index, which facilitates sequential reasoning, allowing the chatbot to handle more complex queries that require step-by-step logic. Additionally, we added a dynamic dashboard feature using Plotly. This dashboard allows users to visualize query results in an interactive and visually appealing manner, providing insightful data representations. This seamless integration of chatbot querying, sequential reasoning, and data visualization makes our system robust, user-friendly, and highly efficient for data access and analysis.
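The final hop of the pipeline, running the generated SQL and rendering it on the dashboard, looks roughly like the sketch below; the connection string, table, column names, and chart type are placeholders, and the LLM-to-SQL translation step is omitted.

```python
# Hedged sketch of the last stage: execute the SQL produced by the
# LangChain/LlamaIndex layer and chart the result with Plotly.
import pandas as pd
import plotly.express as px
from sqlalchemy import create_engine

engine = create_engine("sqlite:///analytics.db")   # stand-in connection string

generated_sql = """
    SELECT region, SUM(revenue) AS total_revenue
    FROM sales
    GROUP BY region
    ORDER BY total_revenue DESC
"""  # in the real app, this string comes from the natural-language query

df = pd.read_sql(generated_sql, engine)
fig = px.bar(df, x="region", y="total_revenue",
             title="Revenue by region (from chatbot query)")
fig.show()
```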
## Challenges we ran into
Participating in the hackathon was a highly rewarding yet challenging experience. One primary obstacle was integrating a large language model (LLM) and chatbot functionality into our project. We faced compatibility issues with our back-end server and third-party APIs, and encountered unexpected bugs when training the AI model with specific datasets. Quick troubleshooting was necessary under tight deadlines.
Another challenge was maintaining effective communication within our remote team. Coordinating efforts and ensuring everyone was aligned led to occasional misunderstandings and delays. Despite these hurdles, the hackathon taught us invaluable lessons in problem-solving, collaboration, and time management, preparing us better for future AI-driven projects.
## Accomplishments that we're proud of
We successfully employed sequential reasoning within the LLM, enabling it to not only infer the next steps but also to accurately follow the appropriate chain of actions that a data analyst would take. This advanced capability ensures that complex queries are handled with precision, mirroring the logical progression a professional analyst would utilize. Additionally, our integration of SQLAlchemy streamlined the connection and ORM functionalities with our SQL database, while LangChain effectively translated natural language inputs from the chatbot into accurate SQL queries. We further enhanced the user experience by implementing a dynamic dashboard with Plotly, allowing for interactive and visually appealing data visualizations. These accomplishments culminated in a robust, user-friendly system that excels in both data access and analysis.
## What we learned
We learned how to integrate various APIs, and got hands-on experience with the sequential process a data engineer and analyst follows by implementing our agent pipeline.
## What's next for Stratify
For our next steps, we plan to add full UI integration to enhance the user experience, making our system even more intuitive and accessible. We aim to expand our data capabilities by incorporating datasets from various other industries, broadening the scope and applicability of our project. Additionally, we will focus on further testing to ensure the robustness and reliability of our system. This will involve rigorous validation and optimization to fine-tune the performance and accuracy of our query pipeline, chatbot interface, and visualization dashboard. By pursuing these enhancements, we strive to make our platform a comprehensive, versatile, and highly reliable tool for data analysis and visualization across different domains.
|
## Inspiration
Over this past semester, Alp and I were in the same data science class together, and we were really interested in how data can be applied through various statistical methods. Wanting to utilize this knowledge in a real-world application, we decided to create a prediction model using machine learning. This would allow us to apply the concepts that we learned in class, as well as to learn more about various algorithms and methods that are used to create better and more accurate predictions.
## What it does
This project takes a dataset of over 280,000 real credit card transactions made by European cardholders over a two-day period in September 2013, each labelled with a ground-truth variable indicating whether the transaction was fraudulent. After conducting exploratory data analysis, we split the dataset into training and testing data and trained the classification algorithms on the training data. We then observed how accurately each algorithm performed on the testing data to determine the best-performing algorithm.
## How we built it
We built it in Python using Jupyter notebooks, importing the libraries we needed for plotting, visualizing, and modeling the dataset. We began with exploratory data analysis to understand the class imbalance and the different variables, and discovered that several variables were anonymized for customer confidentiality. We applied principal component analysis (PCA) to reduce the dimensionality of the dataset, setting aside the anonymized variables and analyzing the data using the only two variables known to us: the amount and time of each transaction. We then balanced the training data with the SMOTE technique, since the overwhelming majority of transactions were not fraudulent; to detect fraud reliably, the training set needed an equal proportion of fraudulent and non-fraudulent examples. Next, we trained six classification algorithms on the balanced training data: Naive Bayes, Decision Tree, Random Forest, K-Nearest Neighbors, Logistic Regression, and XGBoost. After training, we applied each algorithm to the testing data and observed how accurately it predicted fraudulent transactions. We cross-validated each algorithm across subsets of the dataset to reduce overfitting, and finally compared them using evaluation metrics such as accuracy, precision, recall, and F1 score to determine which performed best at predicting fraudulent transactions.
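A condensed sketch of that pipeline is below. It assumes the standard Kaggle file layout with a `Class` label column, and shows only one of the six classifiers (Random Forest); the other models and the cross-validation step follow the same pattern.

```python
# Minimal sketch: balance the training split with SMOTE, train one classifier,
# and score it on the untouched test set.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("creditcard.csv")           # ~284,000 transactions, `Class` = 1 for fraud
X, y = df.drop(columns=["Class"]), df["Class"]

# Hold out a test set first, then balance only the training data with SMOTE
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# One of the six classifiers we compared (Random Forest shown here)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_bal, y_bal)

# Accuracy, precision, recall, and F1 on the untouched test set
print(classification_report(y_test, model.predict(X_test)))
```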
## Challenges we ran into
The biggest challenge was the sheer amount of research and trial and error required to build this model. As this was our first time building a prediction model, we had to do a lot of reading to understand the various steps and concepts needed to clean and explore the dataset, as well as the theory and mathematical concepts behind the classification algorithms in order to model the data and check for accuracy.
## Accomplishments that we're proud of
We are very proud that we are able to create a working model that is able to predict fraudulent transactions with very high accuracy, especially since this was our first major ML model that we have made.
## What we learned
We learned a lot about the process of building a machine learning application, such as cleaning data, conducting exploratory data analysis, creating a balanced sample, and modeling the dataset using various classification strategies to find the model with the highest accuracy.
## What's next for Credit Card Fraud Detection
We want to do more research into the theory and concepts behind the modeling process, especially the classification strategies, as we work towards fine-tuning this model and building more machine learning projects in the future.
|
losing
|
## Inspiration
*Mafia*, also known as *Werewolf*, is a classic in-person party game that university and high school students play regularly. It's been popularized by hit computer games such as Town of Salem and Epic Mafia that serve hundreds of thousands of players, but where these games go *wrong* is that they replace the in-person experience with a solely online experience. We built Super Mafia as a companion app that people can use while playing Mafia with their friends in live social situations to *augment* rather than *replace* their experience.
## What it does
Super Mafia replaces the role of the game's moderator, freeing up every student to play. It also allows players to play character roles which normally aren't convenient or even possible in-person, such as the *gunsmith* and *escort*.
## How we built it
Super Mafia was built with Flask, Python, and MongoDB on the backend, and HTML, CSS, and Javascript on the front-end. We also spent time learning about mLab which we used to host the database.
## Challenges we ran into
Our biggest challenge was making sure that our user experience would be simple to use and approachable for young users, while still accommodating the extra features we built.
## Accomplishments that we're proud of
We survived the deadly combo of a cold night and the 5th floor air conditioning.
## What we learned
How much sleeping during hackathons actually improves your focus...lol
## What's next for Super Mafia
* Additional roles (fool, oracle, miller, etc) including 3rd party roles. A full list of potential roles can be found [here](https://epicmafia.com/role)
* Customization options (length of time/day)
* Last words/wills
* Animations and illustrations
|
## Inspiration
It's hard to believe we're nearly **a full year** into this awful pandemic. Two semesters of online school for students, hundreds of virtual meetings for employees, and the general monotony that is everyday life will make even Tom Hanks' character in *Groundhog Day* bored. Every Zoom session seems to be the same; it's nearly impossible to agree on something for the group to do, and sometimes the activity just doesn't fit the vibe. It's something we all know way too well. This introspection made us ask ourselves: How can we use technology to make virtual social spaces at school and at the workplace more unique and engaging?
## What it does
For some, there are so many games to play online that choosing one is a long and drawn out endeavor. For others, virtual games are a completely foreign world. The Water Cooler Space, aptly named after the common area where employees gather to chat and forget about the stress of work life, aims to bridge that gap.
It provides an area where participants of the same organization can gather and play online games together. Joining the virtual Water Cooler Space is simple; it requires just the organization ID and PIN. Based on the number of players in that game session, the app then suggests three ideal games for the group to play, and the players can vote on which game to play. And multiple sessions can run at once, so there's no need to wait for people to join the current session. Finally, the app provides a meeting link and a link to the game. And just like that, everyone is ready to rock and roll!
## How we built it
As three amateur developers with limited experience in both frontend and backend technologies, we knew from the get-go that delegating the workload would be pivotal. Therefore, we decided to use a tech stack that everyone had some familiarity with - a ReactJS frontend paired with a NodeJS backend with a Google Cloud database.
One developer began by crafting the UI and the frontend of the web application. While not the most technologically sophisticated part of the stack, each page, modal, and component had to be designed carefully, especially around the variables that could change. For example, a basic algorithm had to be designed to split the array of total participants into two arrays for display on the web page.
Meanwhile, the other two developers began to work on the backend by creating the NodeJS server and connecting that server with the Google Cloud database. After the database connection was complete, the backend was programmed to communicate with the client. Because real-time updates are pivotal in any virtual space (i.e. someone who joins the room should be added immediately), the Socket.io protocol was used to keep the server and web page in sync. Messages could now be broadcast from the frontend and backend to update the server state.
Lastly, the logic of the game, which includes voting and game selection, was implemented. Again, Socket.io was heavily used in this stage because voting is also a real-time process. The games, which are stored in a Google Cloud database, can be passed back and forth between the backend and frontend as an object.
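The broadcast pattern at the heart of the voting flow looks roughly like the sketch below. Our actual backend is NodeJS with Socket.io; this equivalent is written with the python-socketio package purely for illustration, and the event names and in-memory tally are assumptions.

```python
# Illustrative sketch of the real-time voting broadcast pattern.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)   # serve `app` with any WSGI server

votes = {}  # game name -> vote count for the current session (assumed in-memory state)

@sio.on("join_session")
def join_session(sid, data):
    # Add the new participant to the session room and push the current state
    sio.enter_room(sid, data["session_id"])
    sio.emit("state", {"votes": votes}, room=data["session_id"])

@sio.on("vote")
def vote(sid, data):
    # Record the vote and broadcast the updated tally to everyone immediately
    votes[data["game"]] = votes.get(data["game"], 0) + 1
    sio.emit("vote_update", {"votes": votes}, room=data["session_id"])
```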
## Challenges we ran into
The past two nights have been quite an adventure for our team. We faced obstacles, foreseen and unforeseen. But no matter how insurmountable these challenges seemed, we were proud that we always found a way to overcome them.
From the conception of the idea, we knew we'd encounter some major hurdles. How can we make the user experience as seamless as possible? How do we handle the case where some users are already in a session but others still want to play a game? And how in the world does Socket.io even work? Jumping past these barriers simply took some time, discussion, experimentation, and **many** frustrating errors. Just like learning a new language, learning new libraries is difficult at the beginning, but once we had a solid grasp of the new technologies and a clear direction in which to proceed, the workflow became much more smooth-sailing.
However, we also encountered a fair number of unanticipated problems, the biggest of which involved our initial choice of databases. Our first choice was CockroachDB, but after a few hours of tinkering around with the database, it became clear that integrating CockroachDB with NodeJS would be difficult. That prompted us to switch to Google Cloud, which was far more accessible and versatile. And to that, we say: Thank you, Google.
## Accomplishments that we're proud of
As a group of relatively inexperienced hackers and developers, it goes without saying that we were immensely proud that we actually completed a complete web application using ReactJS, NodeJS, and Google Cloud. The whole experience was a fantastic learning process, and each of us came away with the sweet taste of victory.
More specifically, we felt proud that we learned and integrated a new library that has so many practical applications: Socket.io. At the beginning of uOttaHacks, nobody had even heard of the technology, but by the end of the event, we felt pretty comfortable working with the sockets and transmitting events between the server and the frontend.
## What we learned
The programming and technical skills we acquired are indubitably crucial, but big picture, we learned the importance of having a good workflow. We learned to build the app starting from the bottom to the top. For instance, neither the frontend nor the backend team can lag too far behind; otherwise, the integration process would be much slower. This notion applies not just to hackathons but also to larger-scale projects, where workflow is key to success.
## What's next for The Water Cooler Space
So far, The Water Cooler Space has limited functionalities. But we have many ideas on improving the web app. The app could optimize game suggestions based on previous ratings for the game and the number of people to find the ideal game for a given number of participants. It could also utilize the Zoom API or Google Calendar API to provide a custom Zoom/Google Meet link for the participants. That way, the whole virtual space experience is fully integrated into our application. Once minor bugs are fixed and these larger features are added, our app can be tested and rolled out to some clubs and members of the UPenn community. Hopefully, if all goes well, The Water Cooler Space can make virtual meetings at work or school just a tad *cooler* :)
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
losing
|
**Finding a problem**
Education policy and infrastructure tend to neglect students with accessibility issues. They are oftentimes left on the backburner while funding and resources go into research and strengthening the existing curriculum. Thousands of college students struggle with taking notes in class due to various learning disabilities that make it difficult to process information quickly or write down information in real time.
Over the past decade, Offices of Accessible Education (OAE) have been trying to help support these students by hiring student note-takers and increasing ASL translators in classes, but OAE is constrained by limited funding and low interest from students to become notetakers.
This problem has been particularly relevant for our TreeHacks group. In the past year, we have become notetakers for our friends because there are not enough OAE notetakers in class. Being note writers gave us insight into what notes are valuable for those who are incredibly bright and capable but struggle to write. This manual process where we take notes for our friends has helped us become closer as friends, but it also reveals a systemic issue of accessible notes for all.
Coming into this weekend, we knew note taking was an especially interesting space. GPT3 had also been on our mind as we had recently heard from our neurodivergent friends about how it helped them think about concepts from different perspectives and break down complicated topics.
**Failure and revision**
Our initial idea was to turn videos into transcripts and feed these transcripts into GPT-3 to create the lecture notes. This idea did not work out because we quickly learned the transcript for a 60-90 minute video was too large to feed into GPT-3.
Instead, we decided to incorporate slide data to segment the video and use slide changes to organize the notes into distinct topics. Our overall idea had three parts: extract the timestamps at which the transcript should be split by detecting slide changes in the video, transcribe the text for each video segment, and pass each segment of text into a GPT-3 model fine-tuned with prompt engineering and examples of good notes.
We ran into challenges every step of the way as we worked with new technologies and dealt with the beast of multi-gigabyte video files. Our main challenge was identifying slide transitions in a video so we could segment the video based on these slide transitions (which signified shifts in topics). We initially started with heuristics-based approaches to identify pixel shifts. We did this by iterating through frames using OpenCV and computing metrics such as the logarithmic sum of the bitwise XORs between images. This approach resulted in several false positives because the compressed video quality was not high enough to distinguish shifts in a few words on the slide. Instead, we trained a neural network using PyTorch on both pairs of frames across slide boundaries and pairs from within the same slide. Our neural net was able to segment videos based on individual slides, giving structure and organization to an unwieldy video file. The final result of this preprocessing step is an array of timestamps where slides change.
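For reference, the initial heuristic looked roughly like the sketch below: sample frames with OpenCV and score consecutive frames by the logarithmic sum of their bitwise XOR. The sampling rate and threshold shown are illustrative, and the final version swapped this score for the PyTorch classifier on frame pairs.

```python
# Sketch of the first slide-change heuristic (later replaced by a neural net).
import cv2
import numpy as np

def slide_change_timestamps(path, sample_every=30, threshold=1e6):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    timestamps, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                # Logarithmic sum of the bitwise XOR between sampled frames
                score = np.sum(np.log1p(cv2.bitwise_xor(gray, prev)))
                if score > threshold:   # large pixel shift -> likely new slide
                    timestamps.append(idx / fps)
            prev = gray
        idx += 1
    cap.release()
    return timestamps
```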
Next, this array was used to segment the audio input, which we did using Google Cloud’s Speech to Text API. This was initially challenging as we did not have experience with cloud-based services like Google Cloud and struggled to set up the various authentication tokens and permissions. We also ran into the issue of the videos taking a very long time, which we fixed by splitting the video into smaller clips and then implementing multithreading approaches to run the speech to text processes in parallel.
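A rough sketch of the parallel transcription step is shown below. It assumes each segment has already been exported as a short LINEAR16 WAV clip and that credentials come from the usual `GOOGLE_APPLICATION_CREDENTIALS` environment variable; the encoding settings, sample rate, and thread count are assumptions.

```python
# Sketch: transcribe pre-split audio clips in parallel with Speech-to-Text.
from concurrent.futures import ThreadPoolExecutor
from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,   # assumed clip format
    language_code="en-US",
)

def transcribe(clip_path):
    with open(clip_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def transcribe_segments(clip_paths):
    # Run the clips through Speech-to-Text in parallel to cut total wall time
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(transcribe, clip_paths))
```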
**New discoveries**
Our greatest discoveries lay in the fine-tuning of our multimodal model. We implemented a variety of prompt engineering techniques to coax our generative language model into producing the type of notes we wanted from it. In order to overcome the limited context size of the GPT-3 model we utilized, we iteratively fed chunks of the video transcript into the OpenAI API at once. We also employed both positive and negative prompt training to incentivize our model to produce output similar to our desired notes in the output latent space. We were careful to manage the external context provided to the model to allow it to focus on the right topics while avoiding extraneous tangents that would be incorrect. Finally, we sternly warned the model to follow our instructions, which did wonders for its obedience.
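In outline, the chunked note-generation loop looked something like the sketch below, using the OpenAI completions endpoint available at the time (pre-1.0 SDK). The chunk size, model name, and prompt wording here are illustrative rather than our exact fine-tuned prompt.

```python
# Sketch: feed each slide-aligned transcript segment through the completions API.
import openai

openai.api_key = "YOUR_OPENAI_KEY"

NOTE_PROMPT = (
    "You are a careful note-taker. Summarize the following lecture segment "
    "into clear, well-organized bullet notes. Follow these instructions exactly.\n\n"
)

def notes_for_transcript(segments, max_chars=6000):
    notes = []
    for segment in segments:            # one segment per slide
        chunk = segment[:max_chars]     # stay under the context limit
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=NOTE_PROMPT + chunk,
            max_tokens=400,
            temperature=0.3,
        )
        notes.append(response.choices[0].text.strip())
    return "\n\n".join(notes)
```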
These challenges and solutions seem seamless, but our team was on the brink of not finishing many times throughout Saturday. The worst was around 10 PM. I distinctly remember my eyes slowly closing, a series of crumpled papers scattered near the trash can. Each of us was drowning in new frameworks and technologies. We began to question how a group of students, barely out of intro-level computer science, could hope to improve education.
The rest of the hour went in a haze until we rallied around a text from a friend who sent us some amazing CS notes we had written for them. Their heartfelt words of encouragement about how our notes had helped them get through the quarter gave us the energy to persevere and finish this project.
**Learning about ourselves**
We found ourselves, after a good amount of pizza and a bit of caffeine, diving back into documentation for react, google text to speech, and docker. For hours, our eyes grew heavy, but their luster never faded. More troubles arose. There were problems implementing a payment system and never-ending CSS challenges. Ultimately, our love of exploring technologies we were unfamiliar with helped fuel our inner passion.
We knew we wanted to integrate Checkbook.io’s unique payments tool, and though we found their API well architectured, we struggled to connect to it from our edge-compute centric application. Checkbook’s documentation was incredibly helpful, however, and we were able to adapt the code that they had written for a NodeJS server-side backend into our browser runtime to avoid needing to spin up an entirely separate finance service. We are thankful to Checkbook.io for the support their team gave us during the event!
Finally, at 7 AM, we connected the backend of our website with the fine-tuned GPT-3 model. I clicked on CS106B and was greeted with an array of lectures to choose from. After choosing last week's lecture, a clean set of notes was exported in LaTeX, perfect for me to refer to when working on the PSET later today!
We jumped off of the couches we had been sitting on for the last twelve hours and cheered. A phrase bounced inside my mouth like a rubber ball, “I did it!”
**Product features**
* Real-time video-to-notes upload
* Multithreaded video upload framework
* Database of lecture notes for popular classes
* Neural network to organize video into slide segments
* Multithreaded video-to-transcript pipeline
|
## Inspiration 💡
With the introduction of online-based learning, a lot of video tutorials are being created for students and learners to gain knowledge. As excellent as the idea is, there is a constraint: tutorials may contain long hours of content, and in some cases they are inaccessible to users with disabilities. Seeing this as a real problem in the world today, we built **Vid2Text**, an innovative and creative solution. It is a web application that gives all types of users easy access to audio and video text transcription, so whether the file is in an audio or video format, it can always be converted to readable text.
## 🍁About
Vid2Text is a web app that allows users to upload audio and video files with ease, which then generates Modified and Customized Audio and Video Transcriptions.
Some of the features it provides are:
### Features
* Automatically transcribe audio and video files with high accuracy.
* Modified and Customized Audio and Video Transcriptions.
* Easy keyword search and highlighting through the transcript text.
## How we built it
We built our project using Django, a Python web framework that uses the MVC architecture to develop full-stack web applications. When the user uploads the video they want transcribed, a script saves the video through the Django model database and then uploads it to the AssemblyAI server, which responds with the *upload\_url*. Finally, we send a request with the video transcript ID and receive the transcript text as the response. We used AssemblyAI to match and search the transcript text for keywords, and built an accessible, pleasant user experience on the client side.
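A condensed sketch of that AssemblyAI round trip (upload, create a transcript job, poll until it finishes) is shown below; the endpoint paths follow the v2 REST API, and the API key placeholder and polling interval are assumptions.

```python
# Sketch: upload a media file to AssemblyAI and poll for the finished transcript.
import time
import requests

BASE = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": "YOUR_ASSEMBLYAI_KEY"}

def transcribe(file_path):
    # 1. Upload the media file and get back an upload_url
    with open(file_path, "rb") as f:
        upload_url = requests.post(f"{BASE}/upload", headers=HEADERS, data=f).json()["upload_url"]

    # 2. Create the transcription job
    job = requests.post(f"{BASE}/transcript", headers=HEADERS,
                        json={"audio_url": upload_url}).json()

    # 3. Poll the job by ID until it completes, then return the text
    while True:
        result = requests.get(f"{BASE}/transcript/{job['id']}", headers=HEADERS).json()
        if result["status"] == "completed":
            return result["text"]
        if result["status"] == "error":
            raise RuntimeError(result["error"])
        time.sleep(3)
```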
## Challenges we ran into
Over the course of the hackathon, we faced some issues with limited time and with integrating AssemblyAI to dynamically determine the duration of the uploaded videos. Initially, we were confused about how to do that, but we finally figured it out.
## Accomplishments that we're proud of
Finally, after long hours of work, we were able to build and deploy the full web application. The team was able to put in extra effort towards making it work.
## What we learned
This hackathon gave us the opportunity to learn how to use a Django project together with utilizing the Django and AssemblyAI API and also we were able to work together as a team despite the fact we were from different timezones.
## What's next for Vid2Text ⏭
For our next steps:
We plan to include more features like multi-language transcription and exporting transcripts as PDF files.
Also, improve the user experience and make it more accessible.
|
## Inspiration
Heart rate is a primary indicator of how effective a workout is, yet it is often unmeasured and untargeted. While brainstorming ways to stimulate the body for increased intensity, we realized that music often tracks the physiological transitions within an exercise through its BPM, pitch, decibels, and so on, so we decided to build an app that targets a specific heart rate level using music to help the user reach their physical limit.
## What it does
Allows for users to input target workout levels and recommends songs based on ML algorithms that will help them reach those specific levels and meet their workout goals.
## How we built it
Frontend in Swift; backend using ML in Google Colab with Python.
## Challenges we ran into
Determining model selection for the clustering component. Because of the model's high dimensionality, we needed Principal Component Analysis to visually aid in mapping the clusterings to the heart rate ranges. watchOS development was tricky since it is a more niche development system with less existing documentation. A major roadblock was integrating Python scripts with SwiftUI, due to incomplete documentation and limited means of converting from Python to SwiftUI.
## Accomplishments that we're proud of
Segmenting the UI workflow to optimize segues through parallel structures in the Swift storyboard interface was a major step in progressing through the app development, as there were limited constructs for updating selections. The UI/UX design and the resulting intuitive user integration were vital in ensuring that our hack provides a user-friendly environment. We are proud of choosing and implementing a machine-learning-based clustering algorithm to associate a wide variety of relevant musical features from the identified dataset. The dimensionality reduction through PCA and the subsequent visualization of multidimensional feature vectors were important in enabling us to separate these clusters and assign them to workout intensity levels.
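A minimal sketch of the clustering approach is below: standardize the musical feature vectors, reduce them with PCA for visual inspection, and cluster with K-Means. The feature columns, file name, and the choice of three intensity levels are assumptions for illustration.

```python
# Sketch: cluster songs by musical features and map clusters to intensity levels.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

songs = pd.read_csv("song_features.csv")          # bpm, pitch, loudness, etc. (assumed columns)
features = StandardScaler().fit_transform(songs[["bpm", "pitch", "loudness"]])

# Project to 2 components so the clusters can be inspected visually
projected = PCA(n_components=2).fit_transform(features)
songs["pc1"], songs["pc2"] = projected[:, 0], projected[:, 1]

# One cluster per workout intensity level (low / medium / high)
songs["intensity_cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```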
## What we learned
Swift, Principal Component Analysis, unsupervised machine learning, K-Means algorithms, Xcode, PyObjC, learning about and considering various clustering algorithms and methods such as k-nearest neighbors, reading prior literature about the strong correlation between a variety of musical factors and heartbeat, and considering other ML models such as Recurrent Neural Networks (RNNs) as a solution using time series data.
## What's next for Heart Beats
Enabling autoplay Functionality / song shuffle, optimal integration of python scripts in SwiftUI, dynamic heartbeat and song updates over time window, backend model optimization, and playlist compatibility
|
winning
|
## Inspiration
IoT devices are extremely useful; however, they come at a high price. A key example of this is a smart fridge, which can cost thousands of dollars. Although many people can't afford this type of luxury, they can still greatly benefit from it. A smart fridge can eliminate food waste by keeping an inventory of your food and its freshness. If you don't know what to do with leftover food, a smart fridge can suggest recipes that use what you have in your fridge. This can easily expand to guiding your food consumption and shopping choices.
## What it does
FridgeSight offers a cheap, practical solution for those not ready to invest in a smart fridge. It can mount on any existing fridge as a touch interface and camera. By logging what you put in, take out, and use from your fridge, FridgeSight can deliver the very same benefits that smart fridges provide. It scans barcodes of packaged products and classifies produce and other unprocessed foods. FridgeSight's companion mobile app displays your food inventory, gives shopping suggestions based on your past behavior, and offers recipes that utilize what you currently have.
## How we built it
The IoT device is powered by Android Things with a Raspberry Pi 3. A camera and touchscreen display serve as peripherals for the user. FridgeSight scans UPC barcodes in front of it with the Google Mobile Vision API and cross-references them with the UPCItemdb API in order to get the product's name and image. It can also classify produce and other unpackaged products with the Google Cloud Vision API. From there, the IoT device uploads this data to its Hasura backend.
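The barcode detection itself runs on-device through the Mobile Vision API; the cross-reference step amounts to a simple lookup call, sketched below in Python purely for illustration. The UPCItemdb trial endpoint and response fields shown are assumptions.

```python
# Illustrative sketch of the UPC cross-reference step (the device runs Android Things).
import requests

def lookup_upc(upc_code):
    resp = requests.get(
        "https://api.upcitemdb.com/prod/trial/lookup",   # assumed trial endpoint
        params={"upc": upc_code},
        timeout=10,
    )
    items = resp.json().get("items", [])
    if not items:
        return None
    item = items[0]
    # Name and image are what FridgeSight stores in its inventory
    return {"name": item.get("title"), "image": (item.get("images") or [None])[0]}
```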
FridgeSight's mobile app is built with Expo and React Native, allowing it to dynamically display information from Hasura. Besides using the data to display inventory and log absences, it pulls from the Food2Fork API in order to suggest recipes. Together, the IoT device and mobile app have the capability to exceed the functionality of a modern smart fridge.
## Challenges we ran into
Android Things provides a flexible environment for an IoT device. However, we had difficulty with initial configuration. At the very start, we had to reflash the device with an older OS because the latest version wasn't able to connect to WiFi networks. Our setup would also experience power issues, where the camera took too much power and shut down the entire system. In order to avoid this, we had to convert from video streaming to repeated image captures. In general, there was little documentation on communicating with the Raspberry Pi camera.
## Accomplishments that we're proud of
In line with Android Things's philosophy, we are proud of making previously unaffordable IoT capabilities accessible. We're also proud of integrating a multitude of APIs across different fields in order to solve this issue.
## What we learned
This was our first time programming with Android Things, Expo, Hasura, and Google Cloud - platforms that we are excited to use in the future.
## What's next for FridgeSight
We've only scratched the surface for what the FridgeSight technology is capable of. Our current system, without any hardware modifications, can notify you when food is about to expire or hasn't been touched recently. Based on your activity, it can conveniently analyze your diet and provide healthier eating suggestions. FridgeSight can also be used for cabinets and other kitchen inventories. In the future, a large FridgeSight community would be able to push the platform with crowd-trained neural networks, easily surpassing standalone IoT kitchenware. There is a lot of potential in FridgeSight, and we hope to use PennApps as a way forward.
|
## Inspiration
How many clicks does it take to upload a file to Google Drive? TEN CLICKS. How many clicks does it take for PUT? **TWO** **(that's 1/5th the amount of clicks)**.
## What it does
Like the name, PUT is just as clean and concise. PUT is a storage universe designed for maximum upload efficiency, reliability, and security. Users can simply open our Chrome sidebar extension and drag files into it, or just click on any image and tap "upload". Our AI algorithm analyzes the file content and organizes files into appropriate folders. Users can easily access, share, and manage their files through our dashboard, chrome extension or CLI.
## How we built it
We used the TUS protocol for secure and reliable file uploads, Cloudflare Workers for AI content analysis and sorting, React and Next.js for the dashboard and Chrome extension, Python for the backend, and Terraform to let anyone deploy the Workers and S3 bucket used by the app to their own account.
## Challenges we ran into
TUS. Let's preface this by saying that one of us spent the first 18 hours of the hackathon on a Golang backend and then had to throw the code away due to a TUS protocol incompatibility. TUS, Cloudflare's AI suite, and Chrome extension development were completely new to us, and we ran into many difficulties implementing and combining these technologies.
## Accomplishments that we're proud of
We managed to take 36 hours and craft them into a product that each and every one of us would genuinely use.
We actually received 30 downloads of the CLI from people interested in it.
## What's next for PUT
If given more time, we would make our platforms more interactive by utilizing AI and faster client-server communications.
|
# grocery\_smart
Grocery\_Smart was created to solve a common problem that second-year students face when moving into their first home: we never have anything to eat, and all those vegetables we keep buying simply get left in the fridge to go bad! Grocery\_Smart is an IoT device that can turn any fridge into a smart fridge. The barcode of any product can be scanned by the device, and the device will display useful information about all the food stored inside the fridge. A monitor uses data visualization to intuitively show the user the stock of different foods as well as how long before each will spoil. The device can alert users through text or email before an item is about to go bad so that it can be eaten in time. This reduces food waste as well as money spent on food.
|
winning
|
## Inspiration
We were inspired to reduce the amount of time it takes to seek medical attention. By directing patients immediately to a doctor specific to their needs, one may reduce the wait time commonly associated with seeking medical aid.
## What it does
Destination Doc asks users how they are feeling and determines what type of doctor they need by screening for flagged words. It then searches a 10 km radius for establishments such as dentist offices, walk-in clinics, physiotherapy centers, or other need-specific locations. Using Microsoft's Bing API, Destination Doc determines which destination is the shortest time away using real-time traffic. A map is then displayed directing the user from their home location to the optimal medical center.
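A toy sketch of the flagged-word screening step is shown below; the keyword lists and the fallback to a walk-in clinic are purely illustrative.

```python
# Toy sketch: map a symptom description to a doctor type via flagged words.
FLAGGED_WORDS = {
    "dentist": {"tooth", "toothache", "gum", "cavity"},
    "physiotherapist": {"sprain", "muscle", "joint", "stretch"},
    "walk-in clinic": {"fever", "cough", "rash", "headache"},
}

def doctor_type(description: str) -> str:
    words = set(description.lower().split())
    # Pick the specialty with the most keyword matches; default to a walk-in clinic
    best = max(FLAGGED_WORDS, key=lambda kind: len(words & FLAGGED_WORDS[kind]))
    return best if words & FLAGGED_WORDS[best] else "walk-in clinic"

print(doctor_type("my tooth hurts and my gum is swollen"))  # -> dentist
```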
## How we built it
We built the application's frontend using Angular and the backend with Flask. We incorporated the Cisco Meraki and Twilio APIs along with Azure.
## Challenges we ran into
Our biggest challenge was putting all the different components together as well as doing a lot within a short time constraint.
## Accomplishments that we're proud of
We're proud to take steps in creating a more efficient wait time service and also aiding the cause of better health and being safer.
## What we learned
We learned how to leverage the functionality of AngularJS to create a responsive frontend page. We also learned how to use REST API HTTP GET and POST requests to communicate between the frontend and the backend.
## What's next for Destination Doc
We plan to develop Destination Doc to the point where anyone can enter their needs and find the best place to get help.
|
## Inspiration
The inspiration for our application stemmed from the desire to solve an issue that we, our friends, and our families have experienced. We noticed that hospitals work individually to combat the issue of long patient wait times, and the Canadian government has spent over 100 million dollars trying to fix it in the past year alone. Introducing a service that works with all hospitals: a collaborative approach to better the lives of all Canadian citizens.
## What it does
TimeToCare is a web-based application designed to mitigate the long wait times experienced in hospital ER settings. Our service tackles the root of the problem by directing patient streams to hospitals better suited to accommodate them. This smooths out demand at each health care centre, reduces patient-demand spikes, and thus shortens the time to be treated.
## How we built it
We built the components of our app with a few different languages and tools. They include HTML/CSS to build the framework of our website, Javascript for the general functionality of the website, and data from the Google Maps API.
## Challenges we ran into
Challenges in our project mostly arose from the learning curve of what was required to build our application.
## Accomplishments that we're proud of
We are very proud to have developed a website that we believe could have an impact in the health field. We came to HackWestern with the hope of solving this problem, and despite our relative inexperience with any of the APIs or new languages, we feel like we have accomplished something amazing.
## What we learned
Beyond the programs and tools we used to build TimeToCare, we also learned teamwork, the importance of clear communication, and good design principles.
## What's next for TimeToCare
Our next step for TimeToCare would be to create a scalable build. Now that we have supported our theory of directing patients to hospitals based on wait times and distance, we would like to see it run using real-time data, in numerous locations. We could see the required data being pulled from a government website or provided by the hospitals directly to the app.
|
## Inspiration
What would you do with 22 hours of your time? I could explore all of Ottawa - from sunrise at parliament, to lunch at Shawarma palace, and end the night at our favourite pub, Heart and Crown!
But imagine you hurt your ankle and go to the ER. You're gonna spend that entire 22 hours in the waiting room, before you even get to see a doctor for this. This is a critical problem in our health care system.
We're first year medical students, and we've seen how much patients struggle to get the care they need. From the overwhelming ER wait time, to travelling over 2 hours to talk to a family doctor (not to mention only 1/5 Canadians having a family doctor), Canada's health care system is currently in a crisis. Using our domain knowledge, we wanted to take a step towards solving this problem.
## What is PocketDoc?
PocketDoc is your own personal physician available on demand. You can talk to it like you would to any other person, explaining what you're feeling, and PocketDoc will tell you what you may be experiencing at the moment. But can't WebMD do that? No! Our app uses your personalized portfolio - consisting of user-inputted vaccinations, current medications, allergies, and more - and PocketDoc uses that information to figure out the best diagnosis for your body. It tells you what your next steps are: go to your pharmacist, who in Ontario can now prescribe the appropriate medication, or maybe use your puffer for an acute allergic reaction, or maybe you do need to go to the ER. But wait, it doesn't stop there! PocketDoc uses your location to find the closest walk-in clinics, pharmacies, and hospitals - and it's all in one app!
## How we built it
We've all dealt with the healthcare system in Canada, and with all the pros it offers, there are also many cons. From the perspective of a healthcare provider, we recognized that a more efficient solution is feasible. We used a dataset from Kaggle that provides long text data on symptoms and the associated diagnoses. After trying various ML systems for classification, we decided to use Cohere to implement a natural language processing model that classifies any user input into one of 21 possible diagnoses. We used Xcode to implement login and Auth0 to provide an authenticated login experience, ensuring users feel safe inputting and storing their data in the app. We fully prototyped our app in Figma to show the range of functionality we wish to implement beyond this hackathon.
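A hedged sketch of the Cohere classification step is below. The symptom examples and labels are made up, and the `Example` import path differs between SDK versions, so treat it as illustrative rather than our exact code.

```python
# Illustrative sketch of few-shot symptom classification with Cohere's classify endpoint.
import cohere
from cohere.classify import Example  # import path varies across SDK versions

co = cohere.Client("YOUR_COHERE_KEY")

EXAMPLES = [
    Example("wheezing and tight chest after exercise", "asthma"),
    Example("itchy hives after eating peanuts", "allergic reaction"),
    Example("burning when urinating", "urinary tract infection"),
    # ... more labelled symptom descriptions drawn from the Kaggle dataset
]

def diagnose(symptom_text: str) -> str:
    response = co.classify(inputs=[symptom_text], examples=EXAMPLES)
    return response.classifications[0].prediction

print(diagnose("I get short of breath and wheeze when I climb stairs"))
```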
## Challenges we ran into
We faced challenges at every step of the design and implementation process. As computer science beginners, we took on an ML-based classification task that required a lot of new learning. The first step was the most difficult: choosing a dataset. There were many ML systems we were considering, such as TensorFlow, PyTorch, Keras, and scikit-learn, and each one worked best with a certain type of dataset. The dataset we chose also had to give us verified diagnoses for a set of symptoms, and we narrowed it down to 3 different sets. Choosing one of these sets took up a lot of time and effort.
The next challenge we faced was cross-platform incompatibility: Xcode was used for app development, but the ML algorithm was built on Python 3. A huge struggle was getting this model to run in the app directly. We found our only solution was to build a Python API that could be accessed by Xcode, a task that we had no time to learn and implement.
Hardware was also a bottleneck for our productivity. With limited storage and computing power on our devices, we were compelled to use smaller datasets and simpler algorithms. This used up lots of time and resources as well.
The final and most important challenge was the massive learning curve under the short time constraints. For the majority of our team, this was our first hackathon and there is a lot to learn about the hackathon expectations/requirements while also learning new skills on the fly. The lack of prior knowledge made it difficult for us to manage resources efficiently throughout the 36 hours. This brought on more unexpected challenges throughout the entire process.
## Accomplishments that we're proud of
As medical students, we're proud to have been introduced to the field of computer science and the intersection between computer science and medicine as this will help us become well-versed and equipped physicians.
**Project Planning and Ideation**: Our team spent the initial hours of the hackathon discussing various ideas using the creative design process and finally settled on the healthcare app concept. Together, we outlined the features and functionalities the app would offer, considering user experience and technical feasibility.
**Learning and Skill Development**: Since this was our first time coding, we embraced the opportunity to learn new programming languages and technologies. We used our time carefully to learn from tutorials, online resources, and guidance from hackathon mentors.
**Prototype Development**: Despite the time constraints, we worked hard to develop a functional prototype of the app. We divided and conquered -- some team members focused on front-end development including designing the user interface and implementing navigation elements while others tackled back-end tasks like cleaning up the dataset and building our machine learning model.
**Iterative Development and Feedback**: We worked tirelessly on the prototype based on feedback from mentors and participants. We remained open to suggestions for improvement to enhance the app's functionality.
**Presentation Preparation**: As the deadline rapidly approached, we prepared a compelling presentation to showcase our project to the judges using the skills we learned from the public speaking workshop with Ivan Wanis Ruiz.
**Final Demo and Pitch**: In the final moments of the hackathon, we confidently presented our prototype to the judges and fellow participants. We demonstrated the key functionalities of the app, emphasizing its user-friendly design and its potential to improve the lives of individuals managing chronic illnesses.
**Reflection**: The hackathon experience itself has been incredibly rewarding. We gained valuable coding skills, forged strong bonds with our teammates, and contributed to a meaningful project with real-world applications.
Specific tasks:
1. Selected a high quality medical-based dataset that was representative of the Canadian patient population to ensure generalizability
2. Learned Cohere AI through YouTube tutorials
3. Learned Figma through trial and error and YouTube tutorials
4. Independently used XCode
5. Learned a variety of ML systems - TensorFlow, PyTorch, Keras, scikit-learn
6. Acquired skills in public speaking to captivate an audience with our unique solution to enhance individual quality of life, improve population health, and streamline the use of scarce healthcare resources.
## What we learned
1. Technical skills in coding, problem-solving, and utilizing development tools.
2. Effective time management under tight deadlines.
3. Improved communication and collaboration within a team setting.
4. Creative thinking and innovation in problem-solving.
5. Presentation skills for effectively showcasing our project.
6. Resilience and adaptability in overcoming challenges.
7. Ethical considerations in technology, considering the broader implications of our solutions on society and individuals.
8. Experimental learning by fearlessly trying new approaches and learning from both successes and failures.
Most importantly, we developed a passion for computer science and we’re incredibly eager to build off our skills through future independent projects, hackathons, and internships. Now more than ever, with rapid advancements in technology and the growing complexity of healthcare systems, as future physicians and researchers we must embrace computational tools and techniques to enhance patient care and optimize clinical outcomes. This could be through Electronic Health Records (EHR) management, data analysis and interpretation, diagnosing complex medical conditions using machine learning algorithms, and creating clinician decision support systems with evidence-based recommendations to improve patient care.
## What's next for PocketDoc
Main goal: connecting our back end with our front end through an API
NEXT STEPS
**Enhancing Accuracy and Reliability**: by integrating more comprehensive medical databases, and refining the diagnostic process based on user feedback and real-world data.
**Expanding Medical Conditions**: to include a wider range of specialties and rare diseases.
**Integrating Telemedicine**: to facilitate seamless connections between users and healthcare providers. This involves implementing features such as real-time video consultations, secure messaging, and virtual follow-up appointments.
**Personalizing Health Recommendations**: along with preventive care advice based on users' medical history, lifestyle factors, and health goals to empower users to take control of their health and prevent health issues before they arise. This can decrease morbidity and mortality.
**Health Monitoring and Tracking**: this would enable users to monitor their health metrics, track progress towards health goals, and receive actionable insights to improve their well-being.
**Global Expansion and Localization**: having PocketDoc available to new regions and markets along with tailoring the app to different languages, cultural norms, and healthcare systems.
**Partnerships and Collaborations**: with healthcare organizations, insurers, pharmaceutical companies, and other stakeholders to enhance the app's capabilities and promote its adoption.
|
winning
|
## Inspiration
To introduce a more impartial and verifiable form of vote submission in response to the controversy surrounding democratic electoral polling after the 2018 US midterm elections. That event was surrounded by doubt, with citizen voters questioning the authenticity of the results, which propelled the idea of bringing enforced and much-needed decentralized security to the polling process.
## What it does
Allows voters to vote on a blockchain through a web portal. The portal is written in HTML and JavaScript using the Bootstrap UI framework and jQuery, which sends Ajax HTTP requests to a Flask server written in Python that communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses that passphrase to cast their ballot anonymously and securely. Their vote, along with the passphrase, goes to the Flask web server, where it is parsed and sent to the ARK blockchain and recorded as a transaction. Each transaction is delegated by one ARK coin, which represents the count. Finally, a paper trail is generated after the vote is submitted on the web portal, in case public verification is needed.
## How we built it
The initial approach was to use Node.js; however, we opted for Python with Flask, which proved to be the more practical solution to implement. Visual Studio Code was used as the basis for the HTML and CSS frontend that provides the visual voting interface, while the ARK blockchain ran in a Docker container. Together, these deliver the web-based application.
## Challenges I ran into
* Integrating the frontend and backend seamlessly into one app
* Using Flask as an intermediary to bridge to the backend
* Understanding the incorporation, use, and capability of blockchain for security in the context we applied it to
## Accomplishments that I'm proud of
* Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern
## What I learned
* Application of ARK.io blockchain and security protocols
* The multitude of transcriptional stages for encryption involving pass-phrases being converted to private and public keys
* Utilizing jQuery to compile a comprehensive program
## What's next for Block Vote
Expand Block Vote’s applicability in other areas requiring decentralized and trusted security, hence, introducing a universal initiative.
|
## Inspiration
A deep and unreasonable love of xylophones
## What it does
An air xylophone right in your browser!
Play such classic songs as twinkle twinkle little star, ba ba rainbow sheep and the alphabet song or come up with the next club banger in free play.
We also added an air guitar mode where you can play any classic 4 chord song such as Wonderwall
## How we built it
We built a static website using React, which uses PoseNet from TensorFlow.js to track the user's hand positions and translate them to specific xylophone keys.
We then extended this by creating Xylophone Hero, a fun game that lets you play your favourite tunes without requiring any physical instruments.
## Challenges we ran into
Fine tuning the machine learning model to provide a good balance of speed and accuracy
## Accomplishments that we're proud of
I can get 100% on Never Gonna Give You Up on XylophoneHero (I've practised since the video)
## What we learned
We learnt about fine tuning neural nets to achieve maximum performance for real time rendering in the browser.
## What's next for XylophoneHero
We would like to:
* Add further instruments including a ~~guitar~~ and drum set in both freeplay and hero modes
* Allow for dynamic tuning of Posenet based on individual hardware configurations
* Add new and exciting songs to Xylophone
* Add a multiplayer jam mode
|
## Ark Platform for an IoT powered Local Currency
## Problem:
Many rural communities in America have been underinvested in the modern age. Even urban areas such as Detroit, MI and Scranton, PA have been left behind as their local economies struggle to reach a critical mass from which to grow. This underinvestment has left millions of citizens in a state of economic stagnation with little opportunity for growth.
## Big Picture Solution:
Cryptocurrencies allow us to implement new economic models to empower local communities and spark regional economies. With Ark.io and their blockchain solutions, we implemented a location-specific currency with a unique economic model. Using this currency, experiments can be run on a regional scale before being implemented more widely, all without an increase in government debt and with the security of blockchains.
## To Utopia!:
By implementing local currencies in economically depressed areas, we can incentivize investment in the local community and thus provide more citizens with economic opportunities. As the local economy improves, the currency becomes more valuable, which further spurs growth. This positive feedback could help raise standards of living in areas currently in a state of stagnation.
## Technical Details
**LocalARKCoin (LAC)**
LAC is based on a fork of the ARK cryptocurrency, with its primary feature being its relation to geographic location. Only a specific region can use the currency without fees, and any fees collected are sent back to the region being helped economically. The fees are raised dynamically based on the distance from the geographic region in question. All of these rules are implemented within the logic of the blockchain and so cannot be bypassed by individual actors.
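A toy sketch of that location-dependent fee rule is below; the region centre, free radius, and fee curve are all illustrative rather than the parameters baked into the chain logic.

```python
# Toy sketch: transactions inside the home region are free, and the fee grows
# with distance from the region's centre (all constants are illustrative).
from math import radians, sin, cos, asin, sqrt

REGION_CENTRE = (42.3314, -83.0458)   # e.g. Detroit, MI
FREE_RADIUS_KM = 25

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def transaction_fee(amount, location):
    distance = haversine_km(REGION_CENTRE, location)
    if distance <= FREE_RADIUS_KM:
        return 0.0
    # Fee rises linearly with distance, capped at 5% of the transaction
    return round(min(0.05, 0.001 * (distance - FREE_RADIUS_KM)) * amount, 8)
```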
**Point of Sale Terminal**
Our proof of concept point of sale terminal consists of the Adafruit Huzzah ESP32 micro-controller board, which has integrated WiFi to connect to the ARK API to verify transactions. The ESP32 connects to a GPS board which allows verification of the location of the transaction, and a NFC breakout board that allows contactless payment with mobile phone cryptocurrency wallets.
**Mobile Wallet App**
In development is a mobile wallet for our local currency which would allow any interested citizen to enter the local cryptocurrency economy. Initiating transactions with other individuals will be simple, and contactless payments allow easy purchases with participating vendors.
|
winning
|
## The beginning
We had always been fascinated by the incredibly varied talents that people could use for good. Serena had worked at a nonprofit and witnessed firsthand how eager the volunteers they found were to contribute their skills and knowledge to the cause. There was a glaring problem, though: not enough people knew which organizations fit their particular skillset. Nonprofits needed skills - and there were people sitting at home, with those skills, wondering what to do.
Jackson was one of those people. During the pandemic, he developed an acute awareness of the needs of the community around him. He looked for places to volunteer - but it was difficult to determine which places were in need of which skills. He wanted to use his abilities to maximize his impact on the community - but he needed something to connect him to the right nonprofits.
## The idea
We decided to build a website - one with both a volunteer and organization portal. The idea was that volunteers would sign up, providing a ranked list of their skills, while organizations would create events, which looked for certain skills.
## The build
The frontend of the website was built in React, while the backend was built with Django, and the server was deployed with Heroku. We took responses from organizations and volunteers, stored them in tables, and created connections between them. The process was hindered by hiccups in some utilities - Visual Studio Code’s Git extension, as well as npm, both gave us problems. However, through the unending wisdom of the internet, we were able to overcome those difficulties. With Serena working on the backend and Jackson on the frontend, the result was a website that satisfied our vision - one that could take volunteer skills and match them to the needs of nonprofit events.
## The future
In the future, we hope to incorporate more information about volunteers, to help them better match with nonprofits - using an API to determine distance, for example. We also hope to introduce a friends feature to the site, which will allow volunteers to see events that other volunteers are interested in. There are dozens of other ways in which we could expand our site - but our ultimate goal was, and still is, to take human skill and plug it into human need - and ultimately bring more hope to the community around us.
|
## Inspiration
Many people that we know want to get more involved in the community but don't have the time for regular commitments. Furthermore, many volunteer projects require an extensive application, and applications for different organizations vary, so it can be a time-consuming and discouraging process. We wanted to find a way to remove these barriers by streamlining the volunteering process so that people can get involved in one-time projects without needing to apply every time.
## What it does
It is a website aimed at streamlining volunteer hiring and application processes. There are two main users: volunteer organizations and volunteers. Volunteers sign up once, registering preset documents, waivers, etc. These then qualify them to volunteer at any of the projects posted by organizations. Organizations can post event dates, locations, etc., and volunteers can sign up with the touch of a button.
## How I built it
We used Node.js, Express, and MySQL for the backend. We used Bootstrap for the front-end UI design and Google APIs for some of the functionality. Our team divided the work based on our strengths and interests.
## Challenges I ran into
We ran into problems with integrating MongoDB and the Mongo Daemon so we had to switch to MySQL to run our database. MySQL querying and set-up had a learning curve that was very discouraging, but we were able to gain the necessary skills and knowledge to use it. We tried to set up a RESTful API, but ultimately, we decided there was not enough time/resources to efficiently execute it, as there were other tasks that were more realistic.
## Accomplishments that I'm proud of
We are proud to have all completed our first 24-hour hackathon. Throughout this process, we learned to brainstorm as a team, create a workflow, and communicate our progress and ideas, and we all acquired new skills. We are proud that we built something with cohesive, functioning components and that we completed our first non-academic collaborative project. We all ventured outside of our comfort zones, using a language that we weren't familiar with.
## What I learned
This experience has taught us a lot about working in a team and communicating with other people. There is so much we can learn from our peers. Skillwise, many of our members gained experience in node.js, MySQL, endpoints, embedded javascript, etc. It taught us a lot about patience and persevering because oftentimes, problems could seem unsolvable but yet we still were able to solve them with time and effort.
## What's next for NWHacks2020
We are all very proud of what we have accomplished and would like to continue this project, even though the hackathon is over. The skills we have all gained are sure to be useful and our team has made this a very memorable experience.
|
## Inspiration
Tinder but Volunteering
## What it does
Connects people to volunteering organizations. Makes volunteering fun, easy and social
## How we built it
React for web and React Native for mobile
## Challenges we ran into
So MANY
## Accomplishments that we're proud of
Getting a really solid idea and a decent UI
## What we learned
SO MUCH
## What's next for hackMIT
|
losing
|
## Inspiration
Throughout my life, the news has been littered with stories of bridges collapsing and infrastructure decaying all across the continent. You have probably seen them too. The quality of our infrastructure is a major challenge which smart cities could address.
But did you know that repairing and replacing all this infrastructure (in particular anything with large amounts of steel or cement) is a major contributor to greenhouse gases? Globally, cement is responsible for 7% of all emissions, and construction overall accounts for 23%. Reducing our need to construct would go a long way towards reducing greenhouse gas emissions, curbing climate change, and lowering the particulates in the air.
## What it does
One of the major causes of bridges decaying before their time is unexpected environmental factors such as a far wider range of temperatures (a consequence of climate change). As temperatures rise, materials expand. As temperatures fall, materials contract. This is what leads to cracks in concrete and stone.
This is where our technology comes in: it tracks the temperature fluctuations at the specific point where the material is, allowing highly precise modeling of when the structure should be checked for smaller cracks. Smaller cracks are orders of magnitude cheaper to repair than larger cracks.
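As a back-of-the-envelope illustration of why these temperature swings matter, linear thermal expansion follows ΔL = αLΔT; the coefficient below is a typical value for concrete, not a measured one.

```
ALPHA_CONCRETE = 10e-6  # per degree Celsius, a typical (approximate) coefficient

def expansion_mm(span_m, delta_t_c, alpha=ALPHA_CONCRETE):
    """Length change in millimetres for a span of span_m metres over a
    temperature swing of delta_t_c degrees Celsius."""
    return alpha * span_m * delta_t_c * 1000.0

# A 50 m concrete span cycling through a 40 degree Celsius seasonal swing moves
# on the order of 20 mm, enough to open hairline cracks if joints were not
# sized for that range.
print(expansion_mm(span_m=50, delta_t_c=40))
```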
## How I built it
It was built with an ESP32, a Google Compute instance running Flask, Grafana, and a variety of other technologies which I am too tired to mention. I tried to include an Uno and a camera as well, but that did not get far given the time allotted.
## Challenges I ran into
Poor WiFi prevented us from downloading much of anything in the way of libraries. A lot of work needed to be done in the cloud.
## What's next for The Bridge Owl
We shall see.
Source code: <https://drive.google.com/open?id=11JEiwNHQvUu0oYLax53ANwNo4ygarcdb>
|
## Inspiration
We were intrigued by the City of London's Open Data portal and wanted to see what we could do with it. We also wanted to give back to the city, which houses UWO and Hack Western, as well as many of our friends. With The London Bridge, we aim to enable communication between the community and its citizens, highlight the most important points of infrastructure to maintain or build upon, and ultimately make London citizens feel involved in and proud of their city.
## What it does
The London Bridge is a web app aiming to bridge communication between changemakers and passionate residents in the city of London. Citizens can submit requests for the construction/maintenance of public infrastructure, including street lights, bike lanes, traffic lights, and parks. Using our specially designed algorithm, The London Bridge uses a variety of criteria such as public demand, proximity to similar infrastructure, and proximity to critical social services to determine the most important issues to bring to the attention of city employees, so that they may focus their efforts on what the city truly needs.
## How we built it
First and foremost, we consulted City of London booth sponsors, the City of London Open Data portal, colleagues studying urban planning, and the 2019 edition of the London Plan to determine the most important criteria that would be used in our algorithm.
We created a simple citizen portal where one can submit requests using PugJS templates. We stored geotagged photos in Google Cloud Storage, and relevant geographical/statistical data in MongoDB Atlas, to be used in our score-calculating algorithm. Finally, we used Node.js to implement our algorithm, calculating scores for requests and sending an email to Ward Councillors when a request meets a threshold score.
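A simplified illustration of the scoring idea is below; the criteria, weights, and threshold are placeholders rather than the values we actually tuned against the London Plan.

```
# Placeholder weights and threshold - the real ones were tuned by hand.
WEIGHTS = {"demand": 0.5, "near_social_services": 0.3, "far_from_similar": 0.2}
ALERT_THRESHOLD = 70.0

def score_request(signatures, metres_to_social_service, metres_to_similar_infra):
    demand = min(signatures, 100)                           # cap public demand at 100
    social = max(0.0, 100 - metres_to_social_service / 10)  # closer to services scores higher
    novelty = min(100.0, metres_to_similar_infra / 10)      # farther from duplicates scores higher
    return (WEIGHTS["demand"] * demand
            + WEIGHTS["near_social_services"] * social
            + WEIGHTS["far_from_similar"] * novelty)

score = score_request(signatures=85, metres_to_social_service=200, metres_to_similar_infra=900)
if score >= ALERT_THRESHOLD:
    print(f"score {score:.1f}: email the ward councillor")
```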
## Challenges we ran into
Integrating and picking up a variety of new technologies proved to be a difficult challenge, as we had never used any of these technologies before. We also discussed and revised our algorithm many times throughout the hackathon, in hopes of creating a scoring system that would truly reflect London's needs.
## Accomplishments that we're proud of
We're proud of our team's commitment to our hack's vision and goals, especially when things looked hairy.
## What we learned
We learned more about a variety of the aforementioned web technologies, as well as the struggles of integrating them together.
## What's next for The London Bridge
In the future, we'd hope to:
* Refine and add to our algorithm
* Implement additional request types
* Enhance data visualization and add workflow integration
* Add a web interface for city employees
* Create a user login system and impact tracking
|
## Inspiration
Our inspiration was Find My by Apple. It allows you to track your Apple devices and see them on a map, giving you relevant information such as last time pinged, distance, etc.
## What it does
Picks up signals from beacons using the Eddystone protocol. Using this data, it will display the beacon's possible positions on Google Maps.
## How we built it
Node.js for scanning the beacons, as well as our routing and our API, which is hosted on Heroku. We use React.js for the front end, with Google Maps as the main component of the web app.
## Challenges we ran into
None of us had experience with mobile app development, so we had to improvise with our skill set. Node.js was our choice; however, we had to rely on old, deprecated modules to make things work. It was tough, but in the end it was worth it as we learned a lot.
Calculating the distance from the given data was also a challenge, but we managed to get it quite accurate.
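One common way to do this with Eddystone beacons is the log-distance path-loss model sketched below; the calibration constants are typical illustrative values, not the ones BeaconTracker actually uses.

```
def estimate_distance_m(rssi_dbm, measured_power_dbm=-59, env_factor=2.0):
    """measured_power_dbm: expected RSSI at 1 m (advertised in the beacon frame).
    env_factor: roughly 2.0 in free space, higher indoors with obstructions."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * env_factor))

# An RSSI of -75 dBm works out to roughly 6-7 m under these assumptions.
print(round(estimate_distance_m(-75), 2))
```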
## Accomplishments that I'm proud of
Using hardware was an interesting experience, as I (Olivier) had never done a hackathon project with it before. I usually stick to web apps, as they are my comfort zone, but this time we merged the two together.
## What we learned
Some of us learned front-end web development and even got started with React. I've learned that hardware hacks don't need to be some low-level programming nightmare (which is what they seemed like to me).
## What's next for BeaconTracker
The Eddystone technology is deprecated, and beacons are everywhere in everyday life. I don't think there is a future for BeaconTracker, but we have all learned much from this experience and it was definitely worth it.
|
winning
|
# My Eyes
Helping people with dyslexia read by using vision and text-to-speech.
## Mission Statement
Dyslexia is a reading disability that affects 5% of the population. Individuals with dyslexia have difficulty decoding written word with speed and accuracy. To help those afflicted with dyslexia, many schools in BC provide additional reading classes. But even with reading strategies, it can take quite a bit of concentration and effort to comprehend text.
Listening to an audio book is more convenient than reading a physical one. There are text-to-speech services which can read off digital text on your tablet or computer. However, there aren't any easily accessible services which offer reading off physical text.
Our mission was to provide an easily accessible service that could read off physical text. Our MOBILE app at 104.131.142.126:3000 or eye-speak.org allows you to take a picture of any text and play it back. The site's UI was designed with those with dyslexia in mind: the fonts and color scheme were purposely chosen to be as easy to read as possible.
This site attempts to provide an easy free service for those with reading disabilities.
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Installing
Create a file called '.env' and a folder called 'uploads' in the root folder
Append API keys from [IBM Watson](https://www.ibm.com/watson/services/text-to-speech/)
Append API keys from Google Cloud Vision
1. [Select or create a Cloud Platform project](https://console.cloud.google.com/project)
2. [Enable billing for your project](https://support.google.com/cloud/answer/6293499#enable-billing)
3. [Enable the Google Cloud Vision API API](https://console.cloud.google.com/flows/enableapi?apiid=vision.googleapis.com)
4. [Set up authentication with a service account so you can access the API from your local workstation](https://cloud.google.com/docs/authentication/getting-started)
.env should look like this when you're done:
```
USERNAME=<watson_username>
PASSWORD=<watson_password>
GOOGLE_APPLICATION_CREDENTIALS=<path_to_json_file>
```
Install dependencies and start the program:
```
npm install
npm start
```
Take a picture of some text and press play to activate text-to-speech.
## Built With
* [Cloud Vision API](https://cloud.google.com/vision/) - Used to read text from images
* [Watson Text to Speech](https://console.bluemix.net/catalog/services/text-to-speech) - Used to convert the extracted text to speech
## Authors
* **Zachary Anderson** - *Frontend* - [ZachaRuba](https://github.com/ZachaRuba)
* **Håvard Estensen** - *Google Cloud Vision* - [estensen](https://github.com/estensen)
* **Kristian Jensen** - *Backend, IBM Watson* - [HoboKristian](https://github.com/HoboKristian)
* **Charissa Sukontasukkul** - *Design*
* **Josh Vocal** - *Frontend* - [joshvocal](https://github.com/joshvocal)
|
## Inspiration
We realized how difficult it is for visually impaired people to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible!
## What it does
This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection and integrates Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant provides voice directions (which can easily be routed to Bluetooth devices), and the sensors help in avoiding obstacles, which increases self-awareness. Another beta feature is to identify moving obstacles and play sounds so the person can recognize those moving objects (e.g. barking sounds for a dog).
## How we built it
It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone.
## Challenges we ran into
It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two members are high school students. Also, multi-threading was a challenge for us in the embedded architecture.
## Accomplishments that we're proud of
After hours of grinding, we were able to get the Raspberry Pi working and to implement depth perception, location tracking using Google Assistant, and object recognition.
## What we learned
Working with hardware is tough; even though you can see what is happening, it is hard to interface software with hardware.
## What's next for i4Noi
We want to explore more ways in which i4Noi can make things more accessible for blind people. Since we already have Google Cloud integration, we could add another feature where we play sounds for living obstacles so special care can be taken; for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference in people's lives.
|
## Inspiration
We wanted to find a way to make transit data more accessible to the public as well as provide fun insights into their transit activity. As we've seen in Spotify Wrapped, people love seeing data about themselves. In addition, we wanted to develop a tool to help city organizers make data-driven decisions on how they operate their networks.
## What it does
Transit Tracker is simultaneously a tool for operators to analyze their network as well as an app for users to learn about their own activities and how it lessens their impact on the environment. For network operators, Transit Tracker allows them to manage data for a system of riders and individual trips. We developed a visual map that shows the activity of specific sections between train stations. For individuals, we created an app that shows data from their own transit activities. This includes gallons of gas saved, time spent riding, and their most visited stops.
## How we built it
We primarily used Palantir Foundry as the platform for our back-end data management. We used objects within Foundry to facilitate dataset transformations using SQL and Python, and utilized Foundry Workshop to create a user interface to display information.
## Challenges we ran into
Working with the geoJSON file format proved to be particularly challenging, because it is semi-structured data and not easily compatible with the datasets we were working with. Another large challenge we ran into was learning how to use Foundry. Since this was our first time using the software, we had to learn the basics before we could even begin tackling our problem.
## Accomplishments that we're proud of
With TreeHacks being the first hackathon for all of us, we're proud of making it to the finish line and building something that is both functional and practical. Additionally, we're proud of the skills we've gained from learning to deal with large data, as well as our ability to learn and use Foundry in the short time frame we had.
## What we learned
We learned just how much we take everyday data analysis for granted. The amount of information being processed every day is unreal. We only tackled a small level of data analysis, and even we had a multitude of difficult issues to deal with. The understanding we've gained from dealing with data is valuable, and the skills we've gained in using a completely foreign application to build something in such a short amount of time have been truly insightful.
## What's next for Transit Tracker
The next step for Transit Tracker would be to translate our data (generated through objects) onto a visual map where the routes constantly change in response to the data being collected. Being able to visually represent that change on a map would be a valuable step, as it would mean we are working our way towards a fully functional application.
|
winning
|
## Inspiration
With the frequency of climate disasters occurring as a result of global warming, droughts are becoming more common across the country. One of our members, as a California native, has brought up their desire to keep plants in their home while not wasting the precious resource of water in the process. Realizing this may be a desire among many people, we decided to develop a way to efficiently water plants based on the species and the average moisture levels needed for them to survive.
## What it does
Using AI, the product recognizes the flower/plant species and calculates the optimal soil moisture necessary for it to survive. By using this as a threshold value, the moisture sensor tracks the moisture level every hour, and waters the plant if below that threshold.
## How we built it
We utilized an Arduino and a water pump for the hardware aspect to control the watering, as well as a SeeSaw moisture sensor for moisture detection. In terms of software, we used TensorFlow (running on Google Colab) to upload pictures of the plant and perform image recognition, and then Python/Arduino code to use the plant type to calculate the moisture threshold and run the loop that waters the plant based on that threshold.
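The Python side of that loop could look roughly like the sketch below, assuming a pyserial connection and a simple line-based protocol; the port name, thresholds, and message format are illustrative.

```
import time
import serial  # pyserial

# Illustrative per-plant moisture thresholds (sensor units).
THRESHOLDS = {"succulent": 300, "fern": 600, "orchid": 450}

def watering_loop(plant_type, port="/dev/ttyACM0"):
    threshold = THRESHOLDS[plant_type]
    with serial.Serial(port, 9600, timeout=2) as arduino:
        while True:
            line = arduino.readline().decode().strip()  # e.g. "MOISTURE:412"
            if line.startswith("MOISTURE:"):
                moisture = int(line.split(":")[1])
                if moisture < threshold:
                    arduino.write(b"WATER\n")           # Arduino sketch runs the pump
            time.sleep(3600)                            # check roughly every hour

# watering_loop("fern")
```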
## Challenges we ran into
* Image formatting: It was difficult to capture an image of the plant and upload it to the database in an automated way.
## Accomplishments that we're proud of
* Trained an AI model to recognize flowers and various other plant types.
* Python to Arduino communication for watering and moisture sensing system.
## What we learned
* How to communicate with Arduino using Python through the serial port
* How to create an image classification AI with TensorFlow using Google Colab
## What's next for OptiGrow
* Streamlining image capture and automatic input for plant type
* Implementing app/IoT to allow users to manually water and monitor plant health (possibly using Blynk IoT or Adafruit software)
* Improved material, such as acrylic or injection molded parts to provide greater product stability
|
## Inspiration
Jeremy, one of our group members, always buys new house plants with excitement and confidence that he will take care of them this time. Unfortunately, he neglects his plant every time and lets it die within three weeks. We decided to give our plant a persona that gives him frequent reminders whenever the soil does not have enough moisture, as well as personalized conversations whenever Jeremy walks by.
## What it does
Using four Arduino sensors - soil moisture, temperature, humidity, and light - users can see an up-to-date overview of how their plant is doing. This is shown on the display and bar graph through the emotions of an animal of their choice! Using the webcam built into the device, your pet will have in-depth conversations with you using ChatGPT and image recognition.
For example, if you were holding a water bottle and the soil moisture levels were low, your sassy cat plant might ask if the water is for them since they haven't been watered in so long!
## How we built it
The project is composed of Python and C++. The four sensors and two displays on the front are connected through an Arduino, which monitors the stats of the plant and also sends them to our Python code. The Python code utilizes the ChatGPT API, OpenCV, text-to-speech, speech-to-text, and data from the sensors to have a conversation with the user based on their mood.
## Challenges we ran into
Our project consisted of two very distinct parts. The software was challenging, as it was difficult to tame an AI like ChatGPT and get it to behave like we wanted; figuring out the exact prompt to give it was a meticulous process. Additionally, the hardware posed a challenge as we were working with new I/O parts. Another challenge was combining these two distinct but complex components to send and receive data smoothly.
## Accomplishments that we're proud of
We're very proud of how sleek the final product looks as well as how smoothly the hardware and software connect. Most of all we're proud of how the plant really feels alive and responds to its environment.
## What we learned
Making this project, we definitely learned a lot about sending and receiving messages from the ChatGPT API, TTS, STT, configuring different Arduino I/O methods, and communicating between the Arduino and Python code using serial.
## What's next for Botanical Bestie
We have many plans for the future of Botanical Bestie. We'd like to make the product more diverse and include different language options to be applicable to international markets. We'd also like to collab with big brands to include their characters as AI plant personalities (Batman plant? Spongebob plant?). On the hardware side, we'd obviously want to put speakers and microphones on the plant/plant pot itself, since we used the laptop speaker and phone microphone for this hackathon. We also have plans for the plant pot to detect what kind of plant is in it, and change its personality accordingly.
|
## Inspiration
Every year, roughly 25% of recyclable material cannot be recycled due to contamination. We set out to reduce the amount of material that is needlessly sent to landfill by reducing how often people put the wrong things into recycling bins (i.e. no coffee cups).
## What it does
This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically does it.
To do this it follows the following process:
1. Waits for object to be placed on lid
2. Take picture of object using webcam
3. Does image processing to normalize image
4. Sends image to Tensorflow model
5. Model predicts material type and confidence ratings
6. If material isn't recyclable, it sends a *YEET* signal and if it is it sends a *drop* signal to the Arduino
7. Arduino performs the motion sent to it (aka slaps it *Happy Gilmore* style or drops it)
8. System resets and waits to run again
## How we built it
We used an Arduino Uno with an Ultrasonic sensor to detect the proximity of an object, and once it meets the threshold, the Arduino sends information to the pre-trained TensorFlow ML Model to detect whether the object is recyclable or not. Once the processing is complete, information is sent from the Python script to the Arduino to determine whether to yeet or drop the object in the recycling bin.
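The decision step between the model and the Arduino might look like the sketch below, assuming pyserial and a classifier that returns (label, confidence) pairs; the labels, confidence threshold, and serial commands are illustrative.

```
import serial  # pyserial

RECYCLABLE = {"paper", "cardboard", "metal", "glass", "plastic"}

def act_on_prediction(predictions, port="/dev/ttyUSB0"):
    """predictions: list of (label, confidence) pairs from the TensorFlow model."""
    label, confidence = max(predictions, key=lambda p: p[1])
    command = b"DROP\n" if (label in RECYCLABLE and confidence > 0.6) else b"YEET\n"
    with serial.Serial(port, 9600, timeout=2) as arduino:
        arduino.write(command)  # Arduino moves the servo accordingly
    return command

# act_on_prediction([("coffee cup", 0.83), ("paper", 0.11)]) would send b"YEET\n".
```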
## Challenges we ran into
A main challenge we ran into was integrating the individual hardware and software components, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we spent a lot of time debugging the servo and working through many issues with the ML model.
## Accomplishments that we're proud of
We are proud of successfully integrating both software and hardware components together to create a whole project. Additionally, it was all of our first times experimenting with new technology such as TensorFlow/Machine Learning, and working with an Arduino.
## What we learned
* TensorFlow
* Arduino Development
* Jupyter
* Debugging
## What's next for Happy RecycleMore
Currently, the model tries to classify everything in the picture, which leads to inaccuracies: it detects things in the background, like people's clothes, which aren't recyclable, causing it to yeet the object when it should drop it. To fix this, we'd like to use only the object in the centre of the image in the prediction model, or reorient the camera so it cannot see anything else.
|
losing
|
## Inspiration
According to Statistics Canada, nearly 48,000 children are living in foster care. In the United States, there are ten times as many. Teenagers aged 14-17 are the most at risk of aging out of the system without being adopted. Many choose to opt-out when they turn 18. At that age, most youths like our team are equipped with a lifeline back to a parent or relative. However, without the benefit of a stable and supportive home, fostered youths, after emancipation, lack the consistent security for their documents, tacit guidance for practical tasks, and moral aid in building meaningful relationships through life’s ups and downs.
Despite the success possible during foster care, there is overwhelming evidence that shows how our conventional system alone inherently cannot guarantee the necessary support to bridge a foster youth’s path into adulthood once they exit the system.
## What it does
A virtual, encrypted, and decentralized safe for essential records. There is a built-in scanner function and a resource of contacts who can mentor and aid the user. Alerts can prompt the user to tasks such as booking the annual doctors' appointments and tell them, for example, about openings for suitable housing and jobs. Youth in foster care can start using the app at age 14 and slowly build a foundation well before they plan for emancipation.
## How we built it
The essential decentralized component of this application, which stores images on an encrypted blockchain, was built on the Internet Computer Protocol (ICP) using Node.js and Azle. Node.js and React were also used to build our user-facing component. Encryption and decryption were done using CryptoJS.
## Challenges we ran into
ICP turned out to be very difficult to work with - attempting to publish the app to a local but discoverable device was nearly impossible. Apart from that, working with such a novel technology through an unfamiliar library caused many small yet significant mistakes that we wouldn't be able to resolve without the help of ICP mentors. There were many features we worked on that were put aside to prioritize, first and foremost, the security of the users' sensitive documents.
## Accomplishments that we're proud of
Since this was the first time any of us worked on blockchain, having a working application make use of such a technology was very satisfying. Some of us also worked with react and front-end for the first time, and others worked with package managers like npm for the first time as well. Apart from the hard skills developed throughout the hackathon, we're also proud of how we distributed the tasks amongst ourselves, allowing us to stay (mostly) busy without overworking anyone.
## What we learned
As it turns out, making a blockchain application is easier than expected! The code was straightforward and ICP's tutorials were easy to follow. Instead, we spent most of our time wrangling with our coding environment, and this experience gave us a lot of insight into computer networks, blockchain organization, CORS, and methods of accessing blockchain applications through code run in standard web apps like React.
## What's next for MirrorPort
Since the conception of MirrorPort, it has always been planned to become a safe place for marginalized youths. Often, they also lose contact with adults who have mentored or housed them; the app will provide this contact information to the user, with the consent of the mentor. Additionally, alerts will be implemented to prompt the user to tasks such as booking annual doctors' appointments and to tell them, for example, about openings for suitable housing and jobs. It could also be a tool for tracking progress against their aspirations and providing tailored resources that map out a transition plan. We're looking to migrate the dApp to mobile for more accessibility and portability, and 2FA would be implemented for login security. Adding a document translation feature would also make the dApp work well with immigrant documents across borders.
|
## Inspiration
With the excitement around blockchain and the ever-growing concerns regarding privacy, we wanted to disrupt one of the largest technology standards yet: email. Email accounts are mostly centralized and contain highly valuable data, so one small breach or corrupt act can seriously jeopardize millions of people. The solution lies with the blockchain, providing encryption and anonymity, with no chance of anyone but you reading your email.
Our technology is named after Soteria, the goddess of safety and salvation, deliverance, and preservation from harm, which we believe perfectly represents our goals and aspirations with this project.
## What it does
First off is the blockchain and message protocol. Similar to the PGP protocol, it offers *security* and *anonymity*, while also **ensuring that messages can never be lost**. On top of that, we built a messenger application loaded with security features, such as our facial recognition access option. The only way to communicate with others is by sharing your 'address' with each other through a convenient QR code system. This prevents anyone from obtaining a way to contact you without your **full discretion** - goodbye spam/scam email.
## How we built it
First, we built the blockchain with a simple Python Flask API interface. The overall protocol is simple and can be built upon by many applications. Next, all that remained was making an application to take advantage of the blockchain. To do so, we built a React Native mobile messenger app, with quick testing through Expo. The app features key and address generation, which can then be shared through QR codes, so we implemented a scan-and-be-scanned flow for engaging in communications - a fully consensual agreement, so that not just anyone can message anyone. We then added an extra layer of security by harnessing Microsoft Azure's Face API cognitive services with facial recognition: every time users open the app, they must scan their face for access, ensuring only the owner can view their messages, if they so desire.
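To give a feel for how small the chain side can be, here is a minimal sketch of hash-chained message blocks behind a Flask endpoint; the block fields and route are illustrative, not the actual Soteria protocol.

```
import hashlib
import json
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
chain = [{"index": 0, "timestamp": 0, "messages": [], "prev_hash": "0" * 64}]

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

@app.route("/messages", methods=["POST"])
def add_message():
    # The payload is expected to already be ciphertext addressed to a recipient
    # key, so the chain itself never sees plaintext.
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "messages": [request.get_json()],
        "prev_hash": block_hash(chain[-1]),
    }
    chain.append(block)
    return jsonify(block), 201

# app.run() exposes this as the simple API the messenger app talks to.
```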
## Challenges we ran into
Our biggest challenge came from the encryption/decryption process that we had to integrate into our mobile application. Since our platform was react native, running testing instances through Expo, we ran into many specific libraries which were not yet supported by the combination of Expo and React. Learning about cryptography and standard practices also played a major role and challenge as total security is hard to find.
## Accomplishments that we're proud of
We are really proud of our blockchain for its simplicity, while taking on a huge challenge. We also really like all the features we managed to pack into our app. None of us had too much React experience but we think we managed to accomplish a lot given the time. We also all came out as good friends still, which is a big plus when we all really like to be right :)
## What we learned
Some of us learned our appreciation for React Native, while some learned the opposite. On top of that we learned so much about security, and cryptography, and furthered our beliefs in the power of decentralization.
## What's next for The Soteria Network
Once we have our main application built, we plan to start working on the tokens and distribution. With a bit more work and adoption, we will find ourselves in a very good position to pursue an ICO. This would then enable us to further develop and enhance our protocol and messaging app. We see lots of potential in our creation and believe privacy and consensual communication are essential factors in our increasingly social, networked world.
|
## Inspiration
While the use cases for web3 expand every day from healthcare to polling systems, we wanted to explore the implementation of web3 in the entertainment sector. As the world of cryptocurrency expands, people would want to play games using crypto and win crypto.
## What it does
The user gets to draw a picture and set the answer for the picture. The other players can then try to guess the answer. If they get it right, they are rewarded with crypto. In order to guess, the player needs to put in some crypto. As a result, the prize pool for that particular picture increases. The artist will get a portion of the prize pool as an incentive for drawing.
## How we built it
To start off, we used Solana's Twitter example and other social-media-on-the-blockchain implementations we found online. Through that, we were able to set up a wallet on our local machines that could be used to test functions. Our next issue was uploading an image to the blockchain so that the data itself was decentralized. We used IPFS for this task but ran into issues while connecting the uploading API to the function for creating a post. For our front end we had to flip-flop between React and Vue, as Vue was already connected to our backend and could be used to fetch data; however, our team felt more comfortable using React for front-end development.
## Challenges we ran into
We ran into some challenges in building the blockchain and saving the drawn image. Moreover, the time crunch was also a big challenge for us. While we were able to learn many individual technologies - creating a wallet on our local machine, uploading images with IPFS, and sending posts through the blockchain - combining all those elements with our front end is what posed an issue within the constrained timeframe. Another problem was picking technologies: for our front end, React was the framework most of us were accustomed to; however, Vue was better integrated with our backend calls and with capturing the user's drawing.
## Accomplishments that we're proud of
We are proud that we were able to learn and overcome so many challenges in a short period of time. Despite it having been 24 hours it feels like we have gained decent experience in Web3 and Solana specifically.
## What we learned
None of us had ever worked on web3 before. This was our first time developing a decentralized application (dapp). We also learned about the various use cases of Web3 and its advantages. Furthermore, we explored building smart contracts.
## What's next for Cryptionary
In the future, we hope that cryptionary will become an end-to-end game that anyone on the blockchain can enjoy in a safe way.
|
partial
|
## Inspiration
Everyone loves to eat. But whether you’re a college student, a fitness enthusiast trying to supplement your gains, or have dietary restrictions, it can be hard to come up with meal ideas. LetMeCook is an innovative computer vision-powered web application that combines a scan of a user’s fridge or cupboard with dietary needs to generate personalized recipes based on the ingredients they have.
## What it does
When opening LetMeCook, users are first prompted to take an image of their fridge or cupboard. After this, the taken image is sent to a backend server where it is entered into an object segmentation and image classification machine-learning algorithm to classify the food items being seen. Next, the app sends this data to the Edamam API, which then returns comprehensive nutritional facts for each ingredient. After this, users are presented with an option to add custom dietary needs or go directly to the recipe page. When adding dietary needs, users fill out a questionnaire regarding allergies, dietary preferences (such as vegetarian or vegan), or specific nutritional goals (like high-protein or low-carb). They are also prompted to select a meal type (like breakfast or dinner), time-to-prepare limit, and tools available for preparation (like microwave or stove). Next, the dietary criteria, classified ingredients, and corresponding nutritional facts are sent to the OpenAI API, and a personalized recipe is generated to match the user's needs. Finally, LetMeCook displays the recipe and step-by-step instructions for preparation onscreen. If users are unsatisfied with the recipe, they can add a comment and generate a new recipe.
## How we built it
The frontend was designed using React with Tailwind for styling, allowing the UI to be dynamic and adjust seamlessly across varying devices. A component library called Radix-UI was used for prefabricated components, and Lucide was used for icon components. To use the device's local camera in the app, a library called react-dashcam was utilized, and to edit the photos, a library called react-image-crop was used. After the initial image and dietary restrictions are entered, the image is encoded to base64 and included as a parameter in an HTTP request to the backend server. The backend server is exposed using ngrok and passes the received image to the Google Cloud Vision API. A response containing the classified ingredients is then passed to the Edamam API, which returns nutritional facts for each respective ingredient. All of the information gathered to this point (ingredients, nutritional facts, dietary needs) is then passed to the OpenAI API, where a custom recipe is generated and returned. Finally, a response containing the meal name, ingredients, step-by-step instructions for preparation, and nutritional information is returned to the interface and displayed onscreen.
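Sketched in Python for brevity (the actual frontend does this in React), the handoff to the backend amounts to encoding the photo and posting it with the dietary criteria; the endpoint URL and JSON shape are illustrative.

```
import base64

import requests

def request_recipe(image_path, dietary_needs, backend_url="https://example.ngrok.app/recipe"):
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "image": image_b64,     # decoded server-side and sent to Cloud Vision
        "diet": dietary_needs,  # e.g. {"vegetarian": True, "max_minutes": 30}
    }
    response = requests.post(backend_url, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()      # recipe name, steps, and nutrition facts

# recipe = request_recipe("fridge.jpg", {"vegetarian": True, "max_minutes": 30})
```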
## Challenges we ran into
One of the biggest challenges we ran into was creating the model to accurately and rapidly classify the objects in the taken picture. Because we were trying to classify multiple objects from the same image, we sought to create an object segmentation and classification model, but this required hardware capabilities incompatible with our laptops. As a result, we had to switch to using Google Cloud's Vision API, which would allow us to perform the same data extraction necessary. Additionally, we ran into many issues when working on the frontend and allowing it to be responsive regardless of device type, size, or orientation. Finally, we had to troubleshoot the sequence of HTTP communication between the interface and the backend server for specific data types and formatting.
## Accomplishments that we're proud of
We are proud to have recognized a very prevalent problem around us and engineered a seamless and powerful tool to solve it. We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. Additionally, we are proud to have learned many new tools and technologies to create a successful mobile application. Ultimately, our efforts and determination culminated in an innovative, functional product we are all very proud of and excited to present. Lastly, we are proud to have created a product that could reduce food waste and revolutionize the home cooking space around the world.
## What we learned
First and foremost, we've learned the profound impact that technology can have on simplifying everyday challenges. In researching the problem, we learned how pervasive the problem of "What to make?" is in home cooking around the world. It can be painstakingly difficult to make home-cooked meals with limited ingredients and numerous dietary criteria. However, we also discovered how effective intelligent-recipe generation can be when paired with computer vision and user-entered dietary needs. Finally, the hackathon motivated us to learn a lot about the technologies we worked with - whether it be new errors or desired functions, new ideas and strategies had to be employed to make the solution work.
## What's next for LetMeCook
There is much potential for LetMeCook's functionality and interfacing. First, the ability to take photos of multiple food storages will be implemented. Additionally, we will add the ability to manually edit ingredients after scanning, such as removing detected ingredients or adding new ingredients. A feature allowing users to generate more detailed recipes with currently unavailable ingredients would also be useful for users willing to go to a grocery store. Overall, there are many improvements that could be made to elevate LetMeCook's overall functionality.
|
# Omakase
*"I'll leave it up to you"*
## Inspiration
On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how?
## What It Does
We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use those ingredients.
## What We Learned
Most of the members in our group were inexperienced in mobile app development and backend. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more.
## How We Built It
We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel Architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again, to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID and a set of recipes for that ingredient was obtained. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP.
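The final "majority" step can be sketched as follows: rather than a strict set intersection (which is often empty), keep the recipes that appear in the recipe sets of most detected ingredients. The data here is illustrative.

```
from collections import Counter

def recipes_for_fridge(recipe_sets):
    """recipe_sets: dict mapping ingredient -> set of recipe IDs that use it."""
    counts = Counter()
    for recipes in recipe_sets.values():
        counts.update(recipes)
    needed = len(recipe_sets) // 2 + 1  # "majority of the ingredients"
    return {recipe for recipe, n in counts.items() if n >= needed}

sets = {
    "salsa":  {101, 205, 317},
    "orange": {205, 317, 412},
    "beans":  {205, 317},
}
print(recipes_for_fridge(sets))  # {205, 317}
```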
## What We Are Proud Of
We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we put an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty looking recipes including a Caribbean black bean and fruit salad that uses oranges and salsa.
## Challenges You Faced
Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API, since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult.
## Whats Next
We have big plans for our app in the future. Some next steps we would like to implement are allowing users to include their dietary restrictions and food preferences so we can better match recommendations to the user. We also want to make this app available on smart fridges; currently, fridges like Samsung's have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods.
|
## Inspiration
An individual living in Canada wastes approximately 183 kilograms of solid food per year. This equates to $35 billion worth of food. A study that asked why so much food is wasted found that about 57% of people thought their food goes bad too quickly, while another 44% said the food was past its expiration date.
## What it does
LetsEat is an assistant - comprising a server, an app, and a Google Home Mini - that reminds users of food that is going to expire soon and encourages them to cook it in a meal before it goes bad.
## How we built it
We used a variety of leading technologies, including Firebase for the database and cloud functions, and the Google Assistant API with Dialogflow. On the mobile side, we have a system for effortlessly uploading receipts using Microsoft Cognitive Services optical character recognition (OCR). The Android app is written using RxKotlin, RxAndroid, and Retrofit on an MVP architecture.
## Challenges we ran into
One of the biggest challenges that we ran into was fleshing out our idea. Every time we thought we had solved an issue in our concept, another one appeared. We iterated over our system design, app design, Google Action conversation design, and integration design, over and over again, for around the first 6 hours of the event. During development, we faced the learning curve of Firebase Cloud Functions, setting up Google Actions using Dialogflow, and setting up socket connections.
## What we learned
We learned a lot more about how voice user interaction design worked.
|
partial
|
## Inspiration
My friend Pablo used to throw me the ball when playing beer pong. He moved away, so I replaced him with a much better robot.
## What it does
It tosses you a ping pong ball right when you need it; you just have to show it your hand.
## How we built it
With love, sweat, tears, and lots of energy drinks.
## Challenges we ran into
Getting OpenCV and Arduino to communicate.
## Accomplishments that we're proud of
Getting the Arduino to communicate with Python.
## What we learned
OpenCV.
## What's next for P.A.B.L.O (pong assistant beats losers only)
Use hand tracking to track the cups, and actually play and win the game.
|
## Inspiration
We wanted to make the interactions with our computers more intuitive while giving people with special needs more options to navigate in the digital world. With the digital landscape around us evolving, we got inspired by scenes in movies featuring Tony Stark, where he interacts with computers within his high-tech office. Instead of using a mouse and computer, he uses hand gestures and his voice to control his work environment.
## What it does
Instead of a mouse, Input/Output Artificial Intelligence, or I/OAI, uses a user's webcam to move their cursor to where their face OR hand is pointing towards through machine learning.
Additionally, I/OAI allows users to map their preferred hand movements for commands such as "click", "minimize", "open applications", "navigate websites", and more!
I/OAI also allows users to input data using their voice, so they don't need to use a keyboard and mouse. This increases accessibility for those who don't readily have access to these peripherals.
## How we built it
Face tracker -> Dlib
Hand tracker -> Mediapipe
Voice Recognition -> Google Cloud
Graphical User Interface -> tkinter
Mouse and Keyboard Simulation -> pyautogui
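To make the cursor step concrete, here is a minimal sketch assuming the tracker hands us landmark coordinates normalised to [0, 1] (as Mediapipe does) and pyautogui moves the mouse; the smoothing factor is an illustrative choice.

```
import pyautogui

SMOOTHING = 0.3  # blend toward the new target to reduce jitter
screen_w, screen_h = pyautogui.size()
prev_x, prev_y = screen_w / 2, screen_h / 2

def move_cursor(norm_x, norm_y):
    """norm_x, norm_y: landmark coordinates in [0, 1] from the tracker."""
    global prev_x, prev_y
    target_x, target_y = norm_x * screen_w, norm_y * screen_h
    prev_x += SMOOTHING * (target_x - prev_x)
    prev_y += SMOOTHING * (target_y - prev_y)
    pyautogui.moveTo(prev_x, prev_y)

# Called once per webcam frame with the tracked fingertip (or nose) position:
# move_cursor(0.42, 0.55)
```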
## Challenges we ran into
Running this many programs at the same time slowed things down considerably, so we had to selectively choose which ones to keep during the implementation. We solved this by using multithreading and carefully investigating efficiency.
We also had a hard time mapping the face because of the angles of rotation of the head, increasing the complexity of the matching algorithm.
## Accomplishments we're proud of
We were able to implement everything we set out to do in a short amount of time, as there was a lot of integrations with multiple frameworks and our own algorithms.
## What we learned
How to use multithreading for multiple trackers, OpenCV for easy camera frames, tkinter for GUI building, and pyautogui for automation.
## What's next for I/OAI
We need to figure out a way to incorporate features more efficiently or get a supercomputer like Tony Stark!
By improving on these features, people will gain more accessibility at their computers by simply downloading a program instead of buying expensive products like an eye tracker.
|
## Inspiration
Our project was inspired by the age-old, famously known prisoner's dilemma. A fairly simple problem to understand, yet a difficult one to come up with a solution to. Individuals can either defect or cooperate; however, they must come to a decision based on what they believe the other prisoner will do. If two individuals defect, then they each get 10 years in prison. If only one defects and the other cooperates, the one who defects gets 1 year and the one who cooperates gets 5 years. If both decide to cooperate, they both get 3 years. It's truly a battle of wits, luck, and strategy. We decided to do a hardware implementation of the game in the form of points.
## What it does
Our project has two interface boards with three buttons each. The first button allows a player to defect. The second button allows a player to cooperate. The third button allows players to speed up the round time. Once both players agree to speed up the round, or the timer hits zero, the points are calculated and the results are revealed. The game can be played player versus player or player versus bot. The bots run various algorithms that make different decisions based on previous player output. For example, one algorithm may solely cooperate, but once it sees that its opponent has defected once, it only defects from then on (see the sketch below). We also show various messages on LCD screens for an easier-to-use interface.
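For reference, the round scoring and the "grudger" style bot described above can be sketched in a few lines of Python (the device firmware implements the same logic in Arduino C++):

```
def sentence(a, b):
    """Return (years_a, years_b) for choices 'defect' or 'cooperate'."""
    if a == "defect" and b == "defect":
        return 10, 10
    if a == "defect" and b == "cooperate":
        return 1, 5
    if a == "cooperate" and b == "defect":
        return 5, 1
    return 3, 3

class Grudger:
    """Cooperates until the opponent defects once, then defects forever."""
    def choose(self, opponent_history):
        return "defect" if "defect" in opponent_history else "cooperate"

bot = Grudger()
history = ["cooperate", "defect"]  # what the human has played so far
print(bot.choose(history), sentence("cooperate", "defect"))
```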
## How we built it
We built the hardware portion using an Arduino to connect our hardware inputs. We used digital design to develop our interface, which allows the user to seamlessly navigate and understand that their inputs have been processed. The bot is built using C++ Arduino code that implements the main logic.
## Challenges we ran into
One of the major challenges we ran into was power consumption. Initially our interface had much more circuitry and wiring, but this ate up our power sources and required a high constant voltage. We had to reduce the amount of circuitry involved in the project to be able to develop something feasible. It took a lot of effort to build some portions of the circuit, but they had to be removed due to the immense power consumption. We also did not have enough time to implement the algorithm with the hardware, so parts of the firmware are incomplete.
## Accomplishments that we're proud of
We are proud of being able to develop a cohesive project as a group using all the fundamentals we learned in school: logic design, circuit design and analysis, and coding. We are also proud of developing good code which incorporates the principles of object-oriented design, as well as creating a good player interface with the circuitry despite the major setbacks we faced along the way.
## What we learned
We definitely learned a lot from this project. We learned a lot about the design process for embedded systems which incorporate both hardware and software portions, as well as how to divide work between teams and seamlessly integrate the work between them.
## What's next for Programmer's Dilemma
For the prisoner's dilemma, we hope to find a way to implement all the circuitry we had in the initial prototype, but with much less power consumption. Furthermore, we would add more algorithms and store statistics from previous games.
|
winning
|
## Inspiration:
ChatTeach.io was inspired by the need for a more personalized and interactive learning experience. As college students, something we have all faced is the dreaded two-hour lecture video, where it is quite literally impossible to pay attention completely and to not get distracted. This oftentimes leads to important content in the video being missed, and a usually unhappy student. We saw a gap in the market for a system that could provide customized content to each individual user and offer them the opportunity to learn from virtual teachers that look and talk like humans, making the learning process more engaging and fun.
## What it Does:
ChatTeach.io is an online learning platform that uses advanced AI technologies such as GPT and deepfake to provide a more personalized and interactive learning experience for users. Users can input their questions and receive responses in natural language from virtual teachers that are created using deepfake technology. Imagine having Spiderman teaching you physics or Zendaya teaching you differential equations. In this way, students are less likely to get bored and are more inclined to pay attention to the video.
## Accomplishments That We’re Proud Of:
We use advanced AI technologies like GPT and deepfake to create a personalized and engaging learning experience. Our speech-to-text and text-to-speech conversions enable natural conversations between users and virtual teachers. This approach benefits both children, who can learn from their favorite superhero, and college students, who can make long lectures more interesting. We are proud of our technology and the positive impact it can have on learners of all ages.
## What We Learned:
We learned about the potential of AI in education and the importance of accurate speech-to-text and text-to-speech conversion. We also learned to integrate APIs and collaborate efficiently. Our experience gave us a deeper appreciation for AI's capabilities and the power of teamwork in bringing innovative ideas to work.
## How We Built It:
Our project involved several steps. Firstly, we integrated a speech-to-text API that transcribed the user's voice input into text. Then, we implemented ChatGPT to generate a relevant response to the user's question based on the script. Finally, we utilized a text-to-speech API to convert the text output into verbal speech. Our ultimate goal was to provide users with the ability to choose the voice and appearance of the virtual teacher in the video, allowing for a more personalized and engaging learning experience.
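The middle step of that pipeline (transcribed question in, scripted answer out) could look roughly like the sketch below, assuming the OpenAI Python client; the model name and prompt are illustrative, and the speech-to-text and text-to-speech calls on either side are provider-specific, so they are omitted.

```
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_from_script(question_text, lecture_script):
    """Generate the virtual teacher's reply, grounded in the lecture script."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer as the lecturer, using only this script:\n" + lecture_script},
            {"role": "user", "content": question_text},
        ],
    )
    return response.choices[0].message.content

# reply = answer_from_script("Why is momentum conserved here?", script)
# The reply text would then be handed to the text-to-speech step.
```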
## Challenges We Ran Into:
Challenges we faced when building this project included uploading a visual character onto the video screen (we were not able to implement this due to the constrained time period, but plan to in the future). We also ran into issues choosing which API to use, as many of them only worked on Windows computers, and all four members of our team have MacBooks.
## What is Next for Chat Teach:
We envision expanding our project to offer users the ability to input custom audio and character selections, providing even greater personalization to the learning experience.
For instance, Spiderman could teach physics with Whitney Houston's voice. Additionally, our AI-generated content has the potential to not only deliver pre-existing information but also to tailor content to each user's needs.
By taking a simple quiz, a student watching an AP Calculus review video can indicate which chapters they already know, and the AI will generate a custom video without those sections.
Our platform can also be used to conduct mock interviews by generating a random person for users to ask questions to.
Overall, our aim is to offer a highly customizable and interactive learning tool with a wide range of potential applications.
We hope that ChatTeach can be used to make learning more fun and engaging for students worldwide.
|
Driven by a shared passion for leveraging AI in education, our team of four embarked on a hackathon journey to revolutionize how learners access information. We envisioned a world where lengthy textbooks, lectures, and videos could be transformed into concise, engaging learning experiences.
Pooling our diverse skill sets, we collaborated on building an AI pipeline that ingests lengthy media such as textbooks and lectures, leveraging Large Language Models (LLMs) to extract key insights and generate summaries. We harnessed the text-to-speech (TTS) AI engine LMNT to give voice to the summaries and combined them with informative visuals through an AI-powered video renderer. The result is an informative video lecture that concisely captures the essence of the original content.
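A simplified sketch of the summarization stage is below; the chunk size and the `summarize_with_llm` callback are placeholders for whichever LLM client is plugged in, and the LMNT and video-rendering steps are omitted.

```python
# Long source text is split into chunks that fit a model's context window,
# each chunk is summarized, and the partial summaries are condensed into a
# single lecture script for the downstream TTS and video-rendering stages.
def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_document(text: str, summarize_with_llm) -> str:
    partial = [
        summarize_with_llm(f"Summarize the key points:\n\n{chunk}")
        for chunk in chunk_text(text)
    ]
    # A second pass merges the per-chunk summaries into one coherent script.
    return summarize_with_llm(
        "Combine these notes into a short lecture script:\n\n" + "\n".join(partial)
    )
```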
But our vision went beyond content delivery. We integrated question generation and AI-powered grading into the pipeline, allowing users to assess their comprehension and receive personalized feedback. All outputs – video lectures, questions, and feedback – were seamlessly integrated into a user-friendly web application deployed on the cloud.
Throughout the hackathon, we faced challenges as a team, from integrating complex APIs to handling massive amounts of data that resulted in lengthy query times. We tackled each obstacle with collaboration and creative problem-solving, drawing strength from our shared commitment to the project's potential impact. The moments of triumph, when we saw the pipeline seamlessly transform content and the web application deliver a smooth learning experience, reinforced our belief in the power of teamwork and the potential of AI to reshape education.
This hackathon was not just about building a project; it was about proving that a team of passionate individuals could leverage AI to make knowledge more accessible, engaging, and personalized. The journey continues as we explore new possibilities for adaptive learning, expanded media support, and global reach. Our hackathon experience has instilled in us the confidence that, together, we can transform education and empower learners worldwide.
|
## Inspiration
We were inspired by hard-working teachers and students. Although everyone was working hard, there was still a disconnect: many students were not able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a python script that would help bridge the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site.
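A condensed sketch of that bridge is below, assuming the Google Cloud Speech-to-Text streaming API; `microphone_chunks` is a placeholder generator yielding raw audio chunks from the instructor's open audio stream.

```python
# Stream audio chunks to Speech-to-Text and yield final transcripts, which are
# then pushed to every enrolled student for the live-transcription view.
from google.cloud import speech

def stream_transcripts(microphone_chunks):
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=True)
    requests = (speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in microphone_chunks)

    for response in client.streaming_recognize(streaming_config, requests):
        for result in response.results:
            if result.is_final:
                yield result.alternatives[0].transcript
```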
## Accomplishments that we’re proud of
We are proud of being able to process the text data to the point that we could generate a summary and extract tone/emotion information from it. We are also extremely proud of the live transcription experience that ties the classroom and the web portal together.
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design layout. We also learned that first creating a mockup of what we want facilitates coding, since everyone is on the same page about what is going on and everything that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and we were able to work around their constraints to create a working product. We also learned more about the other technologies we used: Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens.
|
losing
|
# Flash Computer Vision®
### Computer Vision for the World
Github: <https://github.com/AidanAbd/MA-3>
Try it Out: <http://flash-cv.com>
## Inspiration
Over the last century, computers have gained superhuman capabilities in computer vision. Unfortunately, these capabilities are not yet empowering everyday people because building an image classifier is still a fairly complicated task.
The easiest tools that currently exist still require a good amount of computing knowledge. A good example is [Google AutoML: Vision](https://cloud.google.com/vision/automl/docs/quickstart) which is regarded as the "simplest" solution and yet requires an extensive knowledge of web skills and some coding ability. We are determined to change that.
We were inspired by talking to farmers in Mexico who wanted to identify ready / diseased crops easily without having to train many workers. Despite the technology existing, their walk of life had not lent them the opportunity to do so. We were also inspired by people in developing countries who want access to the frontier of technology but lack the education to unlock it. While we explored an aspect of computer vision, we are interested in giving individuals the power to use all sorts of ML technologies and believe similar frameworks could be set up for natural language processing as well.
## The product: Flash Computer Vision
### Easy to use Image Classification Builder - The Front-end
Flash Magic is a website with an extremely simple interface. It has one functionality: it takes in a variable number of image folders and gives back an image classification interface. Once the user uploads the image folders, they simply click the Magic Flash™ button. There is a short training process during which the website displays a progress bar. The website then returns an image classifier and a simple interface for using it. The user can use the interface (which is built directly into the website) to upload and classify new images. The user never has to worry about any of the “magic” that goes on in the backend.
### Magic Flash™ - The Backend
The front end’s connection to the backend sets up a Train image folder on the server. The name of each folder is the category that the pictures inside of it belong to. The backend takes the folder and transfers its contents into a CSV file. From this CSV file it creates a [Pytorch.utils.Dataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html) class by inheriting Dataset and overriding three of its methods. When the dataset is ready, the data is augmented using a combination of augmentations: CenterCrop, ColorJitter, RandomAffine, RandomRotation, and Normalize. By applying these transformations we get roughly 10x the amount of data that we have for training. Once the data is in a CSV file and has been augmented, we are ready for training.
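A sketch of that augmentation stack using torchvision transforms is below; the exact parameters here are illustrative rather than the tuned values we used.

```python
# The augmentation pipeline applied to each training image; running several
# randomized passes of it over each folder multiplies the effective dataset size.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.CenterCrop(224),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```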
We import [SqueezeNet](https://pytorch.org/hub/pytorch_vision_squeezenet/) and use transfer learning to adjust it to our categories. What this means is that we erase the last layer of the original net, which was originally trained for ImageNet (1000 categories), and initialize a layer whose size equals the number of categories that the user defined, making sure to accurately match dimensions. We then run back-propagation on the network with all the weights “frozen” in place except for the ones in the last layer. As the model is training, it informs the front-end of the progress being made, allowing us to display a progress bar. Once the model converges, the final model is saved into a file that the API can easily call for inference (classification) when the user asks for a prediction on new data.
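A minimal sketch of that transfer-learning step is below: load SqueezeNet, freeze the pretrained feature extractor, and re-initialize only the final classifier for the user-defined number of categories.

```python
# SqueezeNet's classifier ends in a 1x1 conv with one output channel per class,
# so swapping that layer is all that is needed to re-target the network.
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int) -> nn.Module:
    model = models.squeezenet1_1(pretrained=True)
    for param in model.features.parameters():
        param.requires_grad = False          # "freeze" the pretrained weights
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model
```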
## How we built it
The website was built using Node.js and JavaScript, and it has three main functionalities: allowing the user to upload pictures into folders, sending the picture folders to the backend, and making API calls to classify new images once the model is ready.
The backend is built in Python and PyTorch. On top of core Python we used torch (torch.nn, torch.optim, and lr_scheduler), NumPy, torchvision (datasets, models, and transforms), and Matplotlib, along with the standard-library modules time, os, copy, json, and sys.
## Accomplishments that we're proud of
Going into this, we were not sure if 36 hours were enough to build this product. The proudest moment of this project was the successful testing round at 1am on Sunday. While we had some machine learning experience on our team, none of us had experience with transfer learning or this sort of web application. At the beginning of our project, we sat down, learned about each task, and then drew a diagram of our project. We are especially proud of this step because coming into the coding portion with a clear API functionality understanding and delegated tasks saved us a lot of time and helped us integrate the final product.
## Obstacles we overcame and what we learned
Machine learning models are often finicky in their training patterns. Because our application is aimed at users with little experience, we had to come up with a robust training pipeline. Designing this pipeline took a lot of thought and a few failures before we converged on a few data augmentation techniques that do the trick. After this hurdle, integrating a deep learning backend with the website interface was quite challenging, as the training pipeline requires very specific labels and file structure. Iterating the website to reflect the rigid protocol without overcomplicating the interface was thus a challenge. We learned a ton over this weekend. Firstly, getting the transfer learning to work was enlightening, as freezing parts of the network and writing a functional training loop for specific layers required diving deep into the PyTorch API. Secondly, the human-design aspect was a really interesting learning opportunity, as we had to draw the right line between abstraction and functionality. Finally, and perhaps most importantly, constantly meeting and syncing code taught us the importance of keeping a team on the same page at all times.
## What's next for Flash Computer Vision
### Application companion + Machine Learning on the Edge
We want to build a companion app with the same functionality as the website. The companion app would be even more powerful than the website because it would have the ability to quantize models (compression for ml) and to transfer them into TensorFlow Lite so that **models can be stored and used within the device.** This would especially benefit people in developing countries, where they sometimes cannot depend on having a cellular connection.
### Charge to use
We want to make a payment system within the website so that we can scale without worrying about computational cost. We do not want to make a business model out of charging per API call, as we do not believe this will pave a path forward for rapid adoption. **We want users to own their models, this will be our competitive differentiator.** We intentionally used a smaller neural network to reduce hosting and decrease inference time. Once we compress our already small models, this vision can be fully realized as we will not have to host anything, but rather return a mobile application.
|
## Inspiration
Every musician knows that moment of confusion, that painful silence as onlookers shuffle awkwardly while you frantically turn the page of the sheet music in front of you. While large solo performances may have people in charge of turning pages, for larger scale ensemble works this obviously proves impractical. At this hackathon, inspired by the discussion around technology and music at the keynote speech, we wanted to develop a tool that could aid musicians.
Seeing AdHawk's MindLink demoed at the sponsor booths ultimately gave us a clear vision for our hack. MindLink, a deceptively ordinary looking pair of glasses, has the ability to track the user's gaze in three dimensions, recognize events such as blinks, and even display the user's view through an external camera. Blown away by the possibilities and opportunities this device offered, we set out to build a hands-free sheet music tool that simplifies working with digital sheet music.
## What it does
Noteation is a powerful sheet music reader and annotator. All the musician needs to do is to upload a pdf of the piece they plan to play. Noteation then displays the first page of the music and waits for eye commands to turn to the next page, providing a simple, efficient and most importantly stress-free experience for the musician as they practice and perform. Noteation also enables users to annotate on the sheet music, just as they would on printed sheet music and there are touch controls that allow the user to select, draw, scroll and flip as they please.
## How we built it
Noteation is a web app built using React and Typescript. Interfacing with the MindLink hardware was done on Python using AdHawk's SDK with Flask and CockroachDB to link the frontend with the backend.
## Challenges we ran into
One challenge we came across was deciding how to optimally allow the user to turn page using eye gestures. We tried building regression-based models using the eye-gaze data stream to predict when to turn the page and built applications using Qt to study the effectiveness of these methods. Ultimately, we decided to turn the page using right and left wink commands as this was the most reliable technique that also preserved the musicians' autonomy, allowing them to flip back and forth as needed.
Strategizing how to structure the communication between the front and backend was also a challenging problem to work on as it is important that there is low latency between receiving a command and turning the page. Our solution using Flask and CockroachDB provided us with a streamlined and efficient way to communicate the data stream as well as providing detailed logs of all events.
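A pared-down sketch of that bridge is below. The route names and payload shape are made up for illustration, and the CockroachDB event logging is omitted; the idea is simply that the Python/AdHawk process posts wink events and the React frontend polls for the current page.

```python
# Minimal Flask bridge: gaze process POSTs wink events, frontend GETs page state.
from flask import Flask, jsonify, request

app = Flask(__name__)
state = {"page": 0}

@app.post("/event")
def receive_event():
    event = request.get_json()            # e.g. {"type": "wink", "side": "right"}
    if event.get("type") == "wink":
        state["page"] += 1 if event.get("side") == "right" else -1
        state["page"] = max(state["page"], 0)
    return jsonify(state)

@app.get("/page")
def current_page():
    return jsonify(state)                 # frontend polls this to know which page to render
```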
## Accomplishments that we're proud of
We're so proud we managed to build a functioning tool that we genuinely believe is super useful. As musicians, this is something we've legitimately wished for in the past, and being granted access to pioneering technology to make it happen was super exciting, all while working with a piece of cutting-edge hardware that we had zero experience using before this weekend.
## What we learned
One of the most important things we learnt this weekend was the best practices to use when collaborating on a project in a time crunch. We also learnt to trust each other to deliver on our sub-tasks and helped where we could. The most exciting thing we learnt while using these cool technologies is that the opportunities in tech are endless and the impact, limitless.
## What's next for Noteation: Music made Intuitive
Some immediate features we would like to add to Noteation are enabling users to save the PDF with their annotations and adding a landscape mode where two pages can be displayed at a time. We would also really like to explore more features of MindLink and allow users to customize their control gestures. There's even the possibility of expanding the feature set beyond just changing pages, especially for non-classical musicians who might have other electronic devices to control. The possibilities really are endless and are super exciting to think about!
|
## Inspiration
Students often do not have a financial background and want to begin learning about finance, but the sheer amount of resources online makes it difficult to know which articles are worth reading. We thought the best way to tackle this problem was to use a machine learning technique known as sentiment analysis to determine the tone of articles, allowing us to recommend more neutral options to users and provide a visual view of the available articles so that users can make more informed decisions about what they read.
## What it does
This product is a web-based application that performs sentiment analysis on a large set of articles to help users find biased or unbiased articles. We also offer three data visualizations for each topic: an interactive graph showing the distribution of sentiment scores across articles, a heatmap of the sentiment scores, and a word cloud showing common keywords among the articles.
## How we built it
Around 80 unique articles from 10 different domains were scraped from the web using Scrapy. This data was then processed with the help of Indico's machine learning API, which gave us the tools to perform sentiment analysis on all of our articles, the main feature of our product. We then used the API's summarize feature to create shorter descriptions of each article for our users. The Indico API also powers the other two data visualizations we provide. The first is the heatmap, created in Tableau, which uses the SentimentHQ scores to better visualize and compare the differences in sentiment between articles. The second is powered by the wordcloud library, which is built on top of Pillow and Matplotlib; it takes keywords generated by the Indico API and displays the most frequent keywords across all articles. The web application is powered by Django and a SQLite database on the backend, Bootstrap for the frontend, and is hosted on a Google Cloud Platform App Engine instance.
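A small sketch of the word-cloud step is below, assuming the keywords returned for each article have already been collected into one list of strings.

```python
# Build and save the keyword word cloud; word size reflects how often a keyword
# repeats across the collected articles.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

def build_wordcloud(keywords: list[str], out_path: str = "cloud.png") -> None:
    cloud = WordCloud(width=800, height=400, background_color="white")
    cloud.generate(" ".join(keywords))
    plt.figure(figsize=(10, 5))
    plt.imshow(cloud, interpolation="bilinear")
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight")
```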
## Challenges we ran into
The project itself was a challenge since it was our first time building a web application with Django and hosting on a cloud platform. Another challenge arose in data scraping: when finding the titles of the articles, different domains placed their titles in different locations and tags, making it difficult to write one scraper that could generalize to many websites. On top of that, the data returned by the scraper was not in a format we could easily manipulate, so unpacking dictionaries and other small tasks had to be done along the way. On the data visualization side, there was no graphics library that fit our vision for the interactive graph, so we had to build that on our own!
## Accomplishments that we're proud of
Being able to accomplish the goals that we set out for the project and actually generating useful information in our web application based on the data that we ran through Indico API.
## What we learned
We learned how to build websites using Django, generate word clouds using Matplotlib and pandas, host websites on Google Cloud Platform, and utilize the Indico API, and we researched various data visualization techniques.
## What's next for DataFeels
Lots of improvements could still be made to this project; here are just a few. The current scraper requires us to manually run the script for every new link, so an automated scraper that builds the correct data structures and pipelines them directly to our website would be far better. Next, we would expand our website beyond financial categories to any topic that has articles written about it.
|
winning
|
## Inspiration
Nothing quite accomplishes daily productivity like the traditional todo-list. Each task displayed in order, ready to be ticked off one by one. However, this can usually be an isolating process rather than a collaborative one. TODOTogether hopes to bring a company culture of collaboration and teamwork down into people's daily tasks.
## What it does
TODOTogether implements core task management functionality into a workspace-wide synchronized platform. It consists of 3 sections:
1. The Personal task list: This section functions like a traditional todo-list, where a user can add tasks on their docket.
2. The Team task list: This section allows users to quickly reach out and collaborate on tasks or projects from their team members. This replaces lengthy emails and allows team members to opt-in to the task.
3. The Open task list: The core functionality of TODOTogether, which allows for projects or tasks to be shared company-wide. With the departmental, subject, and time-estimation tags available, general information about the task can be rapidly disseminated across an entire company, allowing people to opt-in and help cross-departmentally according to their personal strengths.
## How we built it
The platform is built with HTML/CSS and JavaScript, with drag-and-drop functionality enabled by Dragula JS.
## What's next for TODOTogether
Search and filter functionality for the Open task list, the ability to add/tag multiple user profiles to tasks, and to chain tasks and create sub-tasks.
Further service integration could also be an exciting aspect of TODOTogether, such as automatically generating and displaying video-conference links for meetings, or utilizing the Trello API to bring the open-list functionality to companies' current workflows.
|
## Inspiration
Research shows that many people face mental or physical health problems due to an unhealthy daily diet or symptoms ignored at the early stages. This app will help you track your diet and your symptoms daily and provide recommendations for an overall healthy diet. We were inspired by MyFitnessPal's ability to access the nutrition information of foods at home, in restaurants, and at the grocery store. Diet is extremely important to the body's wellness, but something that is hard for any one person to narrow down is: what foods should I eat to feel better? It is a simple question, but actually very hard to answer. We eat so many different things in a day; how do you know what is making a positive impact on your health, and what is not?
## What it does
Right now, the app is in a pre-alpha phase. It takes carb, fat, protein, vitamin, and electrolyte intake in a day as input, sends this data to a Mage API, and Mage predicts how well the user will feel that day. The Mage AI is trained on sample data rather than real-world data, but as the app gains users it will become more accurate. Based on our data set and the model type, the AI maintains 96.4% accuracy at predicting the wellness of a user on a given day. This is based on 10,000 users over 1 day, or 1 user over 10,000 days, or somewhere in between. The idea is that the AI will be constantly learning as the app gains users and individual users enter more data.
## How we built it
We built it in Swift, using Mage.ai for data processing and its API
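The shape of the prediction call is sketched below in Python for illustration only (the app itself makes this request from Swift); the endpoint URL and payload keys are placeholders, not Mage's actual route names.

```python
# Post the day's nutrient totals and read back the predicted wellness score.
import requests

def predict_wellness(carbs, fats, protein, vitamins, electrolytes):
    payload = {
        "carbs": carbs, "fats": fats, "protein": protein,
        "vitamins": vitamins, "electrolytes": electrolytes,
    }
    resp = requests.post("https://example-mage-endpoint/predict", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["prediction"]   # wellness score displayed in the app
```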
## Challenges we ran into
Outputting the result in the app after the API returns the final prediction. We had the prediction score displayed in the terminal, but initially we could not display it in the app; after a lot of struggle we got it working. All of us built an app and implemented an API for the very first time.
## Accomplishments that we're proud of
-- Successfully implementing the API with our app
-- Building an App for the very first time
-- Creating a model for AI data processing with a 96% accuracy
## What we learned
-- How to implement an API and how it works
-- How to build an iOS app
-- Using AI in our application without actually knowing AI in depth
## What's next for NutriCorr
--Adding different categories of symptoms
-- giving the user recommendations on how to change their diet
-- Add food object to the app so that the user can enter specific food instead of the nutrient details
-- Connect our results to mental health wellness and recommendations. Research suggests that people with higher sugar intake in their diet tend to report more depressive symptoms.
|
## Inspiration
Ordering delivery and eating out is a major aspect of our social lives, but when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all by letting health-conscious eaters keep up with their diet plans while still making restaurant eating possible.
## What it does
The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are matched against a remote database to generate the nutritional contents of the food, which we display to our users in an interactive manner.
## How we built it
Frontend: Vue.js, tailwindCSS
Backend: Python Flask, Google Vision API, CalorieNinja API
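A condensed sketch of the backend flow is below: label the foods in the photo with the Google Vision API, then look up nutrition facts for each label. The CalorieNinjas endpoint and header shown here reflect our understanding of that API and should be treated as assumptions.

```python
# Label the photo, then query nutrition facts for the detected foods.
import requests
from google.cloud import vision

def analyze_food_photo(image_bytes: bytes, calorieninja_key: str):
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    labels = [label.description for label in response.label_annotations[:5]]

    nutrition = requests.get(
        "https://api.calorieninjas.com/v1/nutrition",
        params={"query": ", ".join(labels)},
        headers={"X-Api-Key": calorieninja_key},
        timeout=10,
    ).json()
    return labels, nutrition
```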
## Challenges we ran into
As we are many first-year students, learning while developing a product within 24h is a big challenge.
## Accomplishments that we're proud of
We are proud to implement AI in a capacity that assists people in their daily lives, and we hope this idea can improve people's relationships and social lives while still helping them maintain their goals.
## What we learned
As most of our team are first-year students with minimal experience, we leveraged our strengths to collaborate effectively. We also learned to use the Google Vision API with cameras, and we are now able to do even more.
## What's next for NutroPNG
* Calculate sum of calories, etc.
* Use image processing to estimate serving sizes
* Implement technology into prevalent nutrition trackers, i.e Lifesum, MyPlate, etc.
* Collaborate with local restaurant businesses
|
partial
|
## Inspiration
Elementary school kids are very savvy with searching via Google, and while the content returned is sometimes relevant, it may not be at a suitable reading level, for instance when the first search result talks about something like phytochemicals or pharmacology. Is there a way to assess whether the links in a search result are at the level users want to read?
That's why we created Readabl.
Readability is about the reader, and different personas will have their own perspective on how readability metrics can help them. Our vision is to enable users to find content suitable for their needs and help make content accessible to everyone.
## What it does
Readabl offers search results along with readability metrics so that users can at a glance see what search results are suitable for them to read.
## How we built it
The entire application is hosted in a monorepo consisting of a Javascript frontend framework - Svelte with a FastAPI backend endpoint. The frontend is hosted on Netlify while the backend is hosted using GCP's Cloud Run. The search and processing that takes place in the backend is built using both Google Cloud Custom Search JSON API and the py-readability-metrics library.
### Backend
Hosted on GCP's Cloud Run using Docker, the backend uses FastAPI to receive the user's search term from the frontend and return the ranked results. The FastAPI service talks to the Google Search API, retrieving results and passing them along. Before passing them to the frontend, we parse each page using a Python library, BeautifulSoup, to get the text to be ranked for readability. We also explored concurrent programming in Python on the backend so that we can parse multiple webpages in parallel and speed up processing.
backend -> <https://api.readabl.tech/>
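A trimmed sketch of the scoring path is below, assuming the py-readability-metrics `Readability` class and the Custom Search JSON API; the API key and search-engine ID are placeholders, and error handling is reduced to the essentials.

```python
# Fetch search results, strip each page to text, and attach a readability grade.
import requests
from bs4 import BeautifulSoup
from fastapi import FastAPI
from readability import Readability

app = FastAPI()
API_KEY, CX = "your-api-key", "your-search-engine-id"  # placeholders

def get_search_results(q: str) -> list[str]:
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": q},
        timeout=5,
    ).json()
    return [item["link"] for item in resp.get("items", [])]

def score_url(url: str):
    html = requests.get(url, timeout=5).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    try:
        return Readability(text).flesch_kincaid().score  # needs ~100+ words
    except Exception:
        return None

@app.get("/search")
def search(q: str):
    return [{"url": u, "readability": score_url(u)} for u in get_search_results(q)]
```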
### Frontend
The frontend uses the Svelte framework as the main driver due to its fast runtime and minimalistic structure with little boilerplate code. We explored using a UI framework to speed up the development workflow, but few of the existing UI frameworks suited the project due to limited functionality and poor documentation.
frontend -> <https://readto.beabetterhuman.tech/>
## Challenges we ran into
We explored multiple new technologies during this hackathon. Since we were all new to the technologies we used, we faced steep learning curves and issues revolving around navigating GCP:
* backend processing takes a lot longer and times out the search results when there is too much content to parse (e.g. philosophical questions). We are also limited by the Google API to requesting only 10 links per search, so we needed to do this recursively, which added to the processing time
* couldn't redeem MLH GCP credits
* lack of knowledge of Svelte.js framework
* lack of UI libraries to speed up development time
* GCP's Cloud Run deployment blocked due to Python requirements versioning
* deployment on Netlify and setting up custom domains
* constantly having Git merge conflicts
## Accomplishments that we're proud of
We made a working search engine! We learned a ton about development with GCP and deployment using cloud technologies!
Each of us was able to challenge ourselves by working with new tools and APIs. Moreover, we have been very supportive and helpful to each other by assisting them to the best of our knowledge. In the end, the team has made a functional product with most of the features we have envisioned from the start, and we bring home new knowledge, as well as new tools to explore later on. We knew we took on an ambitious project and we are really proud of what we were able to achieve in this hackathon.
## What we learned
We integrated and tried many APIs from various providers, which was a valuable learning experience. Solving conflicts helped us understand more thoroughly how things work behind the scenes. In addition, as a team with different skill sets working from different time zones, we learned how to communicate and work together effectively. We also learned how to help each other, since each teammate had varying levels of experience with certain tech stacks and applications. It was everyone's first experience working with Svelte and GCP services, so wiring up all the additional APIs while reducing the processing time on top of that was rather challenging.
We also learned a lot about accessibility and about leveraging cloud technology.
## What's next for Readabl
We plan to improve the search and ranking algorithm to improve performance. We also hope to build a community that contributes back and makes the world a bit easier to navigate, at least readability-wise. We are also searching for new datasets that include more information, such as scrolling-speed data and information about color vision deficiencies on webpages, to implement a more inclusive search function.
# How to Contact Us
* {ben}#5927 - Benedict Neo
* ceruleanox#7402 - Anita Yip
* Pravallika#2768 - Pravallika Myneni
* weichun#3945 - Wei Chun
|
## Inspiration
We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders often go undiagnosed or misdiagnosed, leaving these individuals constantly struggling to read and write, which is an integral part of their education. With such learning disabilities, learning a new language can be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered toward individuals.
## What it does
ReadRight offers interactive language lessons but with a unique twist. It reads the prompt out to the user, as opposed to displaying it on the screen for the user to read and process themselves. Once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way, individuals with reading and writing disabilities can still hone their skills in a new language.
## How we built it
We built the frontend UI using React, Javascript, HTML and CSS.
For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM.
Finally, for user authentication, we made use of Firebase.
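A rough sketch of the lesson loop is below, assuming Cohere's classic generate endpoint for prompt creation; the naive word-overlap score stands in for our real pronunciation-accuracy logic, and the speech-to-text transcript is assumed to come from the Google Cloud API mentioned above.

```python
# Generate a sentence for the learner to repeat, then score the transcript of
# what they actually said against it.
import cohere

def next_prompt(co: cohere.Client, level: str = "beginner") -> str:
    response = co.generate(
        prompt=f"Write one short {level} English sentence for a learner to repeat aloud.",
        max_tokens=30,
    )
    return response.generations[0].text.strip()

def accuracy_score(prompt: str, transcript: str) -> float:
    expected, heard = prompt.lower().split(), set(transcript.lower().split())
    return 100.0 * sum(word in heard for word in expected) / max(len(expected), 1)
```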
## Challenges we faced + What we learned
When you first open our web app, the homepage presents a lot of information about the app and our target audience. From there, the user needs to log in to their account. User authentication is where we faced our first major challenge: third-party integration took us significant time to test and debug.
Secondly, we struggled with generating prompts for the user to repeat and with using AI to implement that.
## Accomplishments that we're proud of
This was the first time many of our members integrated AI into an application we were developing, so it was a very rewarding experience, especially since AI is the new big thing in the world of technology and it is here to stay.
We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things.
## What's next for ReadRight
As of now, ReadRight has the basics of the English language for users to study and get prompts from, but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. We would also like to improve the accuracy of our voice recognition.
|
## Inspiration
Coming from South Texas, two of the team members saw ESL (English as a Second Language) students being denied a proper education. Our team created a tool that breaks down language barriers that traditionally perpetuate socioeconomic cycles of poverty by providing detailed explanations of word problems using ChatGPT. Traditionally, people in this group would not have access to tutoring or 1-on-1 support, and this website is meant to rectify that glaring issue.
## What it does
The website takes a photo as input and uses optical character recognition to extract the text of the problem. It then uses ChatGPT to generate a step-by-step explanation for each problem, and this output is tailored to the grade level and language of the student, enabling students from various backgrounds to get assistance they are often denied.
## How we built it
We coded the backend in Python with two parts: OCR and the ChatGPT API implementation. We also considered the multiple parameters, such as grade and language, that we could implement in our code and eventually query ChatGPT with to make the result as helpful as possible. On the other side of the stack, we coded the frontend in React with TypeScript to be as simple and intuitive as possible. It has two sections that clearly show what the OCR has read and what ChatGPT has generated to assist the student.
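A simplified sketch of those two backend steps is below; pytesseract stands in for the OCR stage (our write-up does not name the exact OCR library), and the OpenAI chat call carries the grade and language parameters in the system prompt.

```python
# Read the problem off the photo, then ask ChatGPT for a tailored explanation.
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()

def explain_problem(image_path: str, grade: str, language: str) -> str:
    problem_text = pytesseract.image_to_string(Image.open(image_path))
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Explain word problems step by step for a grade {grade} student, in {language}."},
            {"role": "user", "content": problem_text},
        ],
    )
    return completion.choices[0].message.content
```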
## Challenges we ran into
During the development of our product, we often struggled with deciding the optimal way to apply different APIs and learning how to implement them; many of them we ended up not using or changing our plans around, such as the IBM API. Through this process, we had to change our high-level plan for the backend functions and consequently reimplement our frontend user interface to fit the new operations. This created the compounding challenge of having to re-establish and discuss new ideas while communicating as a team.
## Accomplishments that we're proud of
We are proud of the website layout; the team is very fond of the colors and the arrangement of the site's elements. Another thing we are proud of is simply that we have something working, albeit jankily. This was our first hackathon, so we were proud to be able to contribute to it in some form.
## What we learned
One invaluable skill we developed through this project was learning more about the unique plethora of APIs available and how we can integrate and combine them to create new revolutionary products that can help people in everyday life. We not only developed our technical skills, including git familiarity and web development, but we also developed our ability to communicate our ideas as a team and gain the confidence and creativity to create and carry out an idea from thought to production.
## What's next for Homework Helper
As part of our mission to increase education accessibility and combat common socioeconomic barriers, we hope to use Homework Helper to not only translate and minimize the language barrier, but to also help those with visual and auditory disabilities. Some functions we hope to implement include having text-to-speech and speech-to-text features, and producing video solutions along with text answers.
|
winning
|
## Inspiration
Herpes Simplex Virus-2 (HSV-2) is the cause of genital herpes, a lifelong and contagious disease characterized by recurring painful, fluid-filled sores. Transmission occurs through contact with fluids from the sores of an infected person during oral, anal, and vaginal sex, and can occur even in asymptomatic carriers. HSV-2 is a global public health issue, with an estimated 400 million people infected worldwide and 20 million new cases annually, one third of which take place in Africa (2012). HSV-2 increases the risk of acquiring HIV threefold, profoundly affects the psychological well-being of the individual, and poses a devastating neonatal complication. The social ramifications of HSV-2 are enormous. The social stigma of sexually transmitted diseases (STDs) and the taboo of confiding in others mean that patients are often left on their own, to the detriment of their sexual partners. In Africa, the lack of healthcare professionals further exacerbates this problem. Further, the 2:1 ratio of female to male patients reflects a gender inequality in which women are ill-informed and unaware of their partners' condition or their own. Most importantly, the symptoms of HSV-2 are often similar to various less severe dermatological issues, such as common candida infections and inflammatory eczema, so it is very easy to dismiss genital herpes as one of these less serious, non-contagious conditions.
## What it does
Our team from Johns Hopkins has developed the humanitarian solution “Foresight” to tackle the taboo issue of STDs. Offered free of charge, Foresight is a cloud-based identification system which will allow a patient to take a picture of a suspicious skin lesion with a smartphone and to diagnose the condition directly in the iOS app. We have trained the computer vision and machine-learning algorithm, which is downloaded from the cloud, to differentiate between Genital Herpes and the less serious eczema and candida infections.
We have a few main goals:
1. Remove the taboo involved in treating STDs by empowering individuals to make diagnostics independently through our computer vision and machine learning algorithm.
2. Alleviate specialist shortages
3. Prevent misdiagnosis and to inform patients to seek care if necessary
4. Location service allows for snapshots of local communities and enables more potent public health intervention
5. Protects the sexual relationship between couples by allowing for transparency- diagnose your partner!
## How I built it
We first gathered 90 different images across 3 categories (30 each) of skin conditions that are common around the genital area: "HSV-2", "Eczema", and "Yeast Infections". We realized that a good way to differentiate between these conditions is their inherent differences in texture, which, although subtle to the human eye, are very perceptible to good algorithms. We take advantage of the Bag of Words model common in the fields of web crawling and information retrieval and apply a similar algorithm, written from scratch except for the feature identifier (SIFT). The algorithm follows (a condensed code sketch of Part A appears after the steps below):
Part A) Training the Computer Vision and Machine Learning Algorithm (Python)
1. We use a Computer Vision feature identifying algorithm called SIFT to process each image and to identify "interesting" points like corners and other patches that are highly unique
2. We consider each patch around the "interesting" points as textons, or units of characteristic textures
3. We build a vocabulary of textons by identifying the SIFT points in all of our training images, and use the machine learning algorithm k-means clustering to narrow down to a list of 1000 "representative" textons
4. For each training image, we build our own descriptor represented as a vector, where each element of the vector is the normalized frequency of a texton. We further use tf-idf (term frequency, inverse document frequency) weighting to improve the representational power of each vector (all of this is manually programmed).
5. Finally, we save these vectors in memory. When we want to determine which of the 3 categories a test image depicts, we encode the test image into the same tf-idf vector representation and apply a k-nearest-neighbors search to find the optimal class. We found through experimentation that k=4 works well as a trade-off between accuracy and speed.
6. We tested this model with a randomly selected subset that is 10% the size of our training set and achieved 89% accuracy of prediction!
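The sketch below compresses Part A using OpenCV's SIFT plus scikit-learn for the clustering and nearest-neighbor steps; the tf-idf weighting and the hand-rolled bookkeeping from our original code are omitted for brevity.

```python
# Bag-of-visual-words: SIFT textons -> k-means vocabulary -> normalized
# histograms -> k-nearest-neighbors classification.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_descriptors(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bovw_histogram(desc: np.ndarray, vocab: KMeans) -> np.ndarray:
    hist = np.bincount(vocab.predict(desc), minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() or 1.0)          # normalized texton frequencies

def train(paths: list, labels: list, k: int = 1000):
    all_desc = np.vstack([sift_descriptors(p) for p in paths])
    vocab = KMeans(n_clusters=k, n_init=4).fit(all_desc)   # the texton "vocabulary"
    X = np.array([bovw_histogram(sift_descriptors(p), vocab) for p in paths])
    clf = KNeighborsClassifier(n_neighbors=4).fit(X, labels)
    return vocab, clf
```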
Part B) Ruby on Rails Backend
1. The previous machine learning model can be expressed as an aggregate of 3 files: cluster centers in SIFT space, tf-idf statistics, and classified training vectors in cluster space
2. We output the machine learning model as csv files from python, and write an injector in Ruby that inserts the trained model into our PostgreSQL database on the backend
3. We expose the API such that our mobile iOS app can download our trained model directly through an HTTPS endpoint.
4. Beyond storage of our machine learning model, our backend also includes a set of API endpoints catered to public health purposes: each time an individual on the iOS app makes a diagnosis, the backend is updated to reflect the demographic information and diagnosis results of that individual's actions. This information is visible on our web frontend.
Part C) iOS app
1. The app takes in demographic information from the user and downloads a copy of the trained machine learning model from our RoR backend once
2. Once the model has been downloaded, it is possible to make diagnoses even without internet access
3. The user can take an image directly or upload one from the phone library for diagnosis, and a diagnosis is given in several seconds
4. When the diagnosis is given, the demographic and diagnostic information is uploaded to the backend
Part D) Web Frontend
1. Our frontend leverages the stored community data (pooled from diagnoses made from individual phones) accessible via our backend API
2. The actual web interface is a portal for public health professionals like epidemiologists to understand the STD trends (as pertaining to our 3 categories) in a certain area. The heat map is live.
3. Used HTML5, CSS3, JavaScript, and jQuery
## Challenges I ran into
It is hard to find current STD prevalence and incidence data reported outside the United States. Most African countries have limited surveillance data, and the situation is even worse for stigmatized diseases. We collected the global HSV-2 prevalence and incidence figures from the World Health Organization (WHO) report from 2012. Another issue we faced was the ethics of collecting disease status from users. We were also conflicted on whether we should inform a user's spouse of their result; it is an ethical dilemma between patient confidentiality and beneficence.
## Accomplishments that I'm proud of
1. We successfully built a cloud-based picture recognition system to distinguish the differences between HSV-2, yeast infection and eczema skin lesion by machine learning algorithm, and the accuracy is 89% for a randomly selected test set that is 10% the training size.
2. Our mobile app allows users to anonymously send their pictures to our cloud database for recognition, avoiding the stigmatization of STDs by their neighbors.
3. From a public health perspective, mapping the demographic distribution of STDs in Africa could assist in the prevention of HSV-2 infection and in providing more medical advice to eligible patients.
## What I learned
We learned much more about HSV-2 on the ground and its ramifications on society. We also learned about ML, computer vision, and other technological solutions available for STD image processing.
## What's next for Foresight
Extrapolating our workflow for Machine Learning and Computer Vision to other diseases, and expanding our reach to other developing countries.
|
## Inspiration
How many times have you forgotten to take your medication and damned yourself for it? It has happened to us all, with different consequences. Indeed, missing out on a single pill can, for some of us, throw an entire treatment process out the window. Being able to keep track of our prescriptions is key in healthcare, which is why we decided to create PillsOnTime.
## What it does
PillsOnTime allows you to load your prescription information, including the daily dosage and refills, as well as reminders, into your local phone calendar, simply by taking a quick photo or uploading one from the library. The app takes care of the rest!
## How we built it
We built the app with React Native and Expo, using Firebase for authentication. We used the built-in Expo module to access the device's camera and store the image locally. We then used the Google Cloud Vision API to extract the text from the photo, and used this data to create a (semi-accurate) algorithm which can identify key information about the prescription/medication to be added to your calendar. Finally, the event was added to the phone's calendar with the built-in Expo module.
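A boiled-down sketch of the label-parsing step is below: pull the raw text off the photo with the Vision API, then pick out dosage- and refill-like phrases with simple patterns. The regexes here are illustrative; the real heuristic is more involved and, as noted, only semi-accurate.

```python
# Extract text from the prescription label photo, then mine it for key fields.
import re
from google.cloud import vision

def parse_prescription(image_bytes: bytes) -> dict:
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    text = response.full_text_annotation.text

    dosage = re.findall(r"take\s+\d+\s+(?:tablet|capsule|pill)s?.*", text, re.IGNORECASE)
    refills = re.search(r"refills?\s*[:#]?\s*(\d+)", text, re.IGNORECASE)
    return {"dosage_lines": dosage, "refills": int(refills.group(1)) if refills else 0}
```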
## Challenges we ran into
As our team has a diverse array of experiences, the same can be said about the challenges each of us encountered. Some had to get accustomed to new platforms in order to design an application in less than a day, while figuring out how to build an algorithm that could efficiently analyze data from prescription labels. None of us had worked with machine learning before, and it took a while for us to process the incredibly large amount of data that the API gives back. Working with the permissions for writing to someone's calendar was also time consuming.
## Accomplishments that we're proud of
Just going into this challenge, we faced a lot of problems that we managed to overcome, whether it was getting used to unfamiliar platforms or figuring out the design of our app.
We ended up with a rather satisfying result given the time constraints, and we learned quite a lot.
## What we learned
None of us had worked with ML before, but we all realized that it isn't as hard as we thought!! We will definitely be exploring more of the similar APIs that Google has to offer.
## What's next for PillsOnTime
We would like to refine the algorithm to create calendar events with more accuracy
|
## Inspiration
By 2050, 16% of the global population will be elderly; around 1.5 billion people will be above the age of 65. Professionals will not be able to cope with this increased demand for quality healthcare. Many elders don't get timely treatment, and emergencies are a constant fear for their children.
Artificial Intelligence is the solution.
## What it does
* Diagnose disease
* Offer medicine recommendations
* Send daily reports
* Create emergency calls to 911
* Process injury images
## Technology we used
* MongoDB
* Node.js
* Express.js
* Python
* JavaScript
* Twilio
* Amazon Echo (hardware)
* Camera (hardware)
* Machine Learning
* Computer Vision
## Challenges we ran into
* Integrating the Naive Bayes and decision tree models with our limited test set data
* Running a Python file from Node.js
## Accomplishments that we're proud of
Integrating and building the backend for Alexa.
## What we learned
How to integrate the backend with cloud services and an intelligent speech-to-text system
## What's next for Dr. Jarvis
A larger data set, and utilizing deep learning convolutional neural networks for multiclass classification.
A high-resolution camera integrated with the system to detect visible skin diseases from a persistent trained data set.
|
winning
|
## Inspiration
Our inspiration came from the challenge proposed by Varient, a data aggregation platform that connects people with similar mutations together to help doctors and users.
## What it does
Our application works by allowing the user to upload an image file. The image is then sent to Google Cloud's Document AI to extract the body of text, which is processed and then matched against the datastore of gene names.
## How we built it
While we had originally planned to feed this body of text to a Vertex AI model for entity extraction, the trained model was not accurate due to a small dataset. We also attempted to build a BigQuery ML model but struggled to format the data in the target column as required. Due to time constraints, we explored a different path and downloaded a list of gene symbols from the HUGO Gene Nomenclature Committee's website (<https://www.genenames.org/>). Using Node.js and Multer, the image is processed and the text contents are efficiently matched against the datastore of gene names. The app returns a JSON of the matching strings in order of highest frequency. The web app is hosted on Google Cloud through App Engine at (<https://uofthacksix-chamomile.ue.r.appspot.com/>).
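The matching idea is sketched below in Python for clarity (our implementation does the same thing in Node.js): tokenize the Document AI text and count hits against the HGNC gene-symbol list, returning matches ordered by frequency.

```python
# Count occurrences of known gene symbols in the extracted document text.
from collections import Counter

def match_genes(document_text: str, gene_symbols: set) -> list:
    tokens = document_text.upper().replace(",", " ").split()
    counts = Counter(token for token in tokens if token in gene_symbols)
    return counts.most_common()              # e.g. [("BRCA1", 4), ("TP53", 2)]
```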
The UI, while very simple, is easy to use. The intent of this project was to create something that could easily be integrated into Varient's architecture; converting this project into an API that passes the JSON to the client would be very simple.
## How it meets the theme "restoration"
The overall goal of this application, which has been partially implemented, was to match mutated gene names from user-uploaded documents and connect users with resources and with others who share the same gene mutation. This would allow them to share strategies or items that have helped them live with the gene mutation, helping these individuals restore some normalcy to their lives.
## Challenges we ran into
Some of the challenges we faced:
* having a small data set to train the Vertex AI on
* time constraints on learning the new technologies, and the best way to effectively use it
* formatting the data in the target column when attempting to build a BigQuery ML model
## Accomplishments that we're proud of
The accomplishment we are all proud of is the exposure we gained to all the new technologies we discovered and used this weekend. We had no idea how many AI tools Google offers. Stepping out of our comfort zones and taking the risk to learn and use these tools in such a short amount of time is something we are all proud of.
## What we learned
This entire project was new to all of us. We had never used Google Cloud in this manner before, only for Firestore. We were unfamiliar with Express, and working with machine learning was something only one of us had a small amount of experience with. We learned a lot about Google Cloud and how to access its APIs through Python and Node.js.
## What's next for Chamomile
The hack is not as complete as we would like: ideally there would be a machine learning component to confirm the guesses made by the substring matching, and more data to improve the Vertex AI model. Improving on this would be a great next step for the project. We would also add a more polished UI to match the theme of this application.
|
## Inspiration
Data is becoming increasingly prevalent in every field, and bioinformatics is ripe for innovation in data acquisition, with new insights being derived from experiments daily; a streamlined way to query and use this data for analysis will be very important in the future of personalized cancer research.
## What it does
The web application contains form fields where the user simply selects the disease for which they want all gene expression values available at the National Cancer Institute Genomic Data Commons. Upon request, an API call is made to acquire the data, which is parsed into a table and displayed.
## How we built it
The API queries were performed in Python, and the web integration was built on the Django framework.
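A sketch of the kind of query the Django view issues is below, assuming the public GDC API's /files endpoint and its JSON filter syntax; the field names follow our reading of the GDC documentation and may need adjusting.

```python
# Query the GDC API for gene expression files matching a selected disease.
import json
import requests

def gene_expression_files(disease: str, size: int = 50) -> dict:
    filters = {
        "op": "and",
        "content": [
            {"op": "in", "content": {"field": "cases.disease_type", "value": [disease]}},
            {"op": "in", "content": {"field": "data_type", "value": ["Gene Expression Quantification"]}},
        ],
    }
    params = {
        "filters": json.dumps(filters),
        "fields": "file_id,file_name,cases.submitter_id",
        "format": "JSON",
        "size": size,
    }
    return requests.get("https://api.gdc.cancer.gov/files", params=params, timeout=30).json()
```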
## Challenges we ran into
Neither of us had experience using the bulk of these technologies, so we were continually learning as we went on. A particular challenge was figuring out how to use Django to do form requests.
## Accomplishments that we're proud of
Creating a UI that makes it seamless to acquire particular data from NCI and Pubmed
## What we learned
The basics of the Django framework, and how to use data portal APIs
## What's next for UTH
Now that our web app can retrieve the data, we would like to offer online services to automatically analyze it with open-source software in both Python and R.
|
## Inspiration
We got the idea when we asked ourselves: how can we better make use of the large amounts of data in the world to meet people's privacy and healthcare needs? We all recall being at the doctor's office and seeing the doctor write long notes. From there, we came up with a potentially impactful idea that is rewarding to build, both conceptually and technically.
## What it does
Imagine your research has finally reached the stage of clinical trials. Finding the right participants meeting the strict trial criteria (specific combinations of illnesses, prescription drugs, medical devices, surgical history, and anatomy) is paramount to a successful trial.
Currently, researchers have to manually search through many medical notes to identify potential candidates. Doctors already write medical notes to a centralized repository, but unstructured data is not useful on its own. TrialLink uses privacy-minded natural language processing to extract medical information from unstructured medical notes. We offer researchers a platform to perform advanced queries and easily find candidates for medical trials that exactly match their strict requirements.
Our platform uses HIPAA-compliant technologies. It allows participants who have previously consented to participate in clinical trials to receive timely notifications when they are matched to a trial. We also implemented a proxy server to securely route our API calls.
## How we built it
Our final application involved a full frontend, a full Spring-based backend server, the integration of database tables, Velo code, the Google Cloud API, JavaScript, and CSS.
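An illustrative sketch of the entity-extraction step is below, using the Google Cloud Natural Language API's analyzeEntities call; treat this as one possible way to pull candidate terms from an unstructured note, not as our exact production pipeline.

```python
# Extract candidate entity names from a medical note; downstream code maps
# these terms onto the trial-criteria fields researchers query against.
from google.cloud import language_v1

def extract_terms(note_text: str) -> list:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=note_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": document})
    return [entity.name for entity in response.entities]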
## Challenges we ran into
Using the enterprise version of Google Cloud has a high overhead and required an understanding of different security models.
The database schemas and querying functions were complex.
## Accomplishments that we're proud of
We are proud of using NLP in creative ways to make use of a preexisting, abundant source of unstructured notes to improve access to healthcare for patients and to assist researchers by minimizing the time they spend looking for appropriate trial candidates.
## What's next for TrialLink
Add more querying options.
Connect to existing databases of medical notes.
Make it a startup!
|
losing
|
## Inspiration
College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user.
Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health.
## What it does
Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline.
The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity.
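As a rough illustration of the greedy matching step, here is a simplified sketch; the Task fields and availability format are hypothetical and much simpler than our real data model (which also handles dependencies and preference-based rebalancing).

```python
# Simplified sketch of the greedy matching step (Task fields and the
# availability format are illustrative, not our exact data model).
from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    name: str
    hours_needed: float
    due: date

def greedy_schedule(tasks, availability):
    """Assign task hours to daily free time, earliest deadline first.

    availability: dict mapping date -> free hours on that day.
    Returns dict mapping date -> list of (task name, hours assigned).
    """
    schedule = {day: [] for day in sorted(availability)}
    for task in sorted(tasks, key=lambda t: t.due):
        remaining = task.hours_needed
        for day in sorted(availability):
            if remaining <= 0 or day > task.due:
                break
            chunk = min(remaining, availability[day])
            if chunk > 0:
                schedule[day].append((task.name, chunk))
                availability[day] -= chunk
                remaining -= chunk
    return schedule
```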
The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress.
## How we built it
The scheduling algorithm uses a greedy algorithm and is implemented with Python, Object Oriented Programming, and MatPlotLib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python.
## Challenges we ran into
The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug various bugs that arose. We initially struggled setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple times before reaching the greedy algorithm solution presented here.
## Accomplishments that we're proud of
We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. We also feel strongly that this product has real-world usability, and are proud of validating the idea during YHack.
## What we learned
We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package also was a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.
## What's next for Clockwork
Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
|
# Course Connection
## Inspiration
College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However, the vibrant campus life has recently become endangered as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students.
From a commercial perspective, our app provides businesses the ability to utilize CheckBook in order to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a couple key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization.
### Backend Infrastructure
We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represented a students course identity. We wanted to take advantage of the web3 storage as this would allow students to permanently store their course identity to be easily accessed. We also made use of Firebase to store the dynamic nodes and connections between courses and classes.
We distributed our workload across several servers.
We utilized Nginx to deploy a production-level Python server that would perform the graph operations described below, alongside a development-level Python server. We also had a Node.js server acting as a proxy and REST API endpoint, and Vercel hosted our front-end.
### Graph Construction
Treating the firebase database as the source of truth, we query it to get all user data, namely their usernames and which classes they took in which quarters. Taking this data, we constructed a graph in Python using networkX, in which each person and course is a node with a type label “user” or “course” respectively. In this graph, we then added edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it.
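A minimal sketch of this construction is below; the shape of the user documents is hypothetical, but the bipartite structure and recency-based edge weights follow the description above.

```python
# Sketch of the user-course graph build (document fields are illustrative).
import networkx as nx

def build_graph(users):
    """users: iterable of dicts like
    {"username": str, "courses": [(course_code, quarters_ago), ...]}"""
    G = nx.Graph()
    for user in users:
        G.add_node(user["username"], type="user")
        for course_code, quarters_ago in user["courses"]:
            G.add_node(course_code, type="course")
            # More recently taken courses get a heavier edge.
            G.add_edge(user["username"], course_code,
                       weight=1.0 / (1 + quarters_ago))
    return G
```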
Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.
We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken.
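The sketch below shows one way to combine the two components; the mixing weight and the exact recency weights are illustrative, not the tuned values we use.

```python
# Hedged sketch of the combined similarity score; the weighting between the
# embedding term and the heuristic term is a placeholder.
import numpy as np

def recency_weighted_jaccard(courses_a, courses_b):
    """courses_*: dict mapping course code -> recency weight (higher = more recent)."""
    common = set(courses_a) & set(courses_b)
    union = set(courses_a) | set(courses_b)
    num = sum(min(courses_a[c], courses_b[c]) for c in common)
    den = sum(max(courses_a.get(c, 0), courses_b.get(c, 0)) for c in union)
    return num / den if den else 0.0

def similarity(emb_a, emb_b, courses_a, courses_b, alpha=0.5):
    # Cosine similarity between the two users' GAT node embeddings.
    cos = float(np.dot(emb_a, emb_b) /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return alpha * cos + (1 - alpha) * recency_weighted_jaccard(courses_a, courses_b)
```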
With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, who they likely have a lot in common with and whom they may want to meet!
## Challenges we ran into
We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly for development as we had to manage all the necessary servers.
In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase and cached graph.
## Accomplishments that we're proud of
We’re very proud of the graph component both in its data structure and in its visual representation.
## What we learned
It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with Next.js before. We were able to quickly ramp up to using it as we had React experience, and we were very happy with how easily it integrated with Vercel.
## What's next for Course Connections
There are several different storyboards we would be interested in implementing for Course Connections. One would be course recommendations: we discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation.
|
## Introducing Nuisance
### Inspiration
When prompted with the concept of **Useless Inventions**, and the slight delay from procrastinating the brainstorming process of our idea, we suddenly felt very motivated to make a little friend to help us. Introducing **Nuisance**. A (not so friendly) Bot that will sense when you have given him your phone. Promptly running away, and screaming if you get too close. An interesting take on the game of manhunt.
### What it does
**Nuisance** detects when a *phone* is placed in its possession. It then embarks on a random journey, in an effort to play everyone's favourite game, keep away. If a daring human approaches before *Nuisance* is ready to end the game, he screams and runs away; only a genuine scream of horror stands a chance of reclaiming the device. Adding a perfect touch of embarrassment and a loss of dignity.
### How we built it
1. Arduino Due
2. 2 wheels
3. Caster Wheel
4. H-bridge/ Motor Driver
5. Motors
6. 2 UltraSonic Sensors
7. Noise Sound Audio Sensor
8. PIR Motion sensor
9. 1 Grove Buzzer v1.2
10. large breadboard
11. 2 small breadboard
12. OLED display
13. Force sensor
14. 9V battery
15. 3\*1.5 = 4.5 V battery
16. A bit of wires
and a **lot** of cardboard
*and some software*
### Challenges we ran into
* different motor powers / motors not working anymore
We had an issue during the debugging phase of our code regarding the *Ultrasonic Sensors*. No matter what was done, they just seemed to constantly be timing out. After looking extensively into the issue, we figured out that it was neither hardware nor software related: the breadboard had sporadic faulty pins that we had to account for, which forced us to test the rest of the breadboard for integrity.
Furthermore, we had a lot of coding issues regarding the swap between our Arduino Uno and Due. The Arduino Due did not support the same built-in libraries, such as tone (for the buzzer).
We also had issues with the collision detection algorithm at first. However, with a lil tenacity, *and the power of friendship*, you too can solve this problem. We originally had the wrong values being processed, causing our algorithm to disregard the numbers we required to gauge distance accurately.
### Accomplishments that we're proud of
* completed project..?
### What we learned
* yell at a nuisance if you want ur stuff back?
Never doubt the power of friendship.
## What's next for Nuisance
Probably more crying
|
winning
|
## Inspiration
I know that times have been tough for local restaurant owners, and so I set out to create SeeFood, a project that could help them engage with potential customers.
## What it does
SeeFood is a REST API that stores and shares data about restaurants. When restaurant owners create meals on SeeFood, a QR code is automatically generated for them to use to share their specialties.
## How we built it
The backend API and website routing were all done in Python Flask and SQL, using CockroachDB for cloud storage. The website was done using Jinja2 templates mixed with CSS and JavaScript. A Flutter app was also created but unfortunately never completed.
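The QR generation itself is straightforward; a minimal sketch using the qrcode library is below (the URL scheme and output path are placeholders, not the actual routes).

```python
# Minimal sketch of the meal QR generation; the base URL and storage path are
# illustrative, and the output directory is assumed to exist.
import qrcode

def make_meal_qr(meal_id, base_url="https://seefood.example.com/meals"):
    # Encode the shareable meal URL into a QR image and save it for serving.
    img = qrcode.make(f"{base_url}/{meal_id}")
    path = f"static/qr/{meal_id}.png"
    img.save(path)
    return path
```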
## Challenges we ran into
Trying to set up the augmented reality cost me a lot of time, and I was ultimately stumped by problems with Unity and flutter\_unity. I sank a good half day into trying to make the AR work, which was tough, but a good lesson to learn.
## Accomplishments that we're proud of
Setting up a REST API on the first night and setting up my first Flutter app.
## What we learned
Don't spread yourself too thin/don't bite off more than you can chew.
## What's next for SeeFood
An eventual implementation of AR capabilities for a unique restaurant experience!
|
**Inspiration**
Our inspiration behind dineAR was our frustration of not being able to see our food before ordering at a restaurant. Too many times have we gone to a restaurant and ordered something we ended up not liking because we had no idea what it was or looked like based on the vague menu descriptions. To prevent this from happening to others just like us, and to reduce potential food waste and unhappy customers, we developed dineAR to address this problem.
**What it does**
dineAR allows customers to view how their food looks on their table before ordering their meals at a restaurant. This allows the user to choose their meals with confidence. The additional visual information supplied by AR technology provides for a more enjoyable dining experience.
**How we built it**
We used Google's ARCore library as a framework for our augmented reality project, with Java and Android Studio serving as the backbone of our mobile application development. In addition, we applied Firebase's real time database to provide up-to-date information and images for our project.
**Challenges we ran into**
The toughest challenge we encountered was the implementation of the ARCore library. Given our previous lack of experience with mobile AR development, we needed to understand the documentation and nuances of the capabilities of the library. After reading many guides that attempted to explain the fundamentals of ARCore, we eventually gained enough understanding of the library that we could render a 3D object in real life. Furthermore, we also ran into difficulties with the many different versions of libraries and their lack of compatibility within the Android Studio framework, resulting in numerous hours of debugging and cross-referencing documentation and forum posts.
**Accomplishments that we're proud of**
Being our first AR project, we are very proud of all that we accomplished over the last 24 hours. In particular, we are especially pleased with the seamless integration of augmented reality into real life. As beginner hackers, we enjoyed this experience and are happy to have developed our skills in AR and mobile development in general.
**What we learned**
Through the development of dineAR, we were able to learn how to work at a more advanced capability in Android Studio and work with asynchronous requests to the Firebase API.
**What's next for dineAR**
In the future, we hope to expand upon the quality of the 3D models as well as increasing the amount of reference images where we can display our food models. We also want to improve the scalability of the project by taking advantage of Firebase's powerful database management capabilities.
|
# imHungry
## Inspiration
As Berkeley students, we are always prioritizing our work over our health! Students often don't have much time to go buy food. Why not pick up food for other students while buying your own and make a quick buck? Or perhaps, place an order with a student who was already planning to head your way after buying some food for themselves? This revolutionary business model enables students to participate in the food delivery business while avoiding the hassles associated with the typical food delivery app. Our service does not require students to do anything differently from what they already do!
## What it does
This application allows students to purchase or help purchase food from/for their fellow students. The idea is that students already purchase food often before heading out to the library or another place on campus. This app allows these students to list their plans in advance and let other students put in their orders as well. These buyers will then meet the purchaser wherever they are expected to meet. That way, the purchaser doesn't need to make any adjustments to their plan besides buying a few extra orders! The buyers also have the convenience of picking up their order near campus to avoid walking. This app enables students to get involved in the food delivery business while doing nearly nothing additional!
## How we built it
We used Flask, JavaScript, HTML/CSS, and Python. Some technologies we used include Mapbox API, Google Firebase, and Google Firestore. We built this as a team at CalHacks!
## Challenges we ran into
We had some trouble getting started with using Google Cloud for user authentication. One of our team members went to the Google Cloud sponsor stand and was able to help fix part of the documentation!
## Accomplishments that we're proud of
We're proud of our use of the Mapbox API because it enabled us to use some beautiful maps in our application! As a food delivery app, we found it quite important to be able to display the restaurants and on-campus delivery locations that we support, and Mapbox made that quite easy. We are also quite proud of our use of Firebase and Firestore because we were able to use these technologies to authenticate users as Berkeley students while also quickly storing and retrieving data from the cloud.
## What we learned
We learned how to work with some great APIs provided by Mapbox and Google Cloud!
## What's next for imHungry
We hope to complete our implementation of the user and deliverer interface and integrate a payments API to enable users to fully use the service! Additional future plans are to add time estimates, improve page content, improve our back-end algorithms, and to improve user authentication.
|
losing
|
## Inspiration
Users, especially those in the 15-29 age group, tend to spend unreasonable amounts on unnecessary things. We want them to have a better financial life, help them understand their expenses better, and guide them towards investing that money in stocks instead.
## What it does
It points out the user's unnecessary expenses and shows what income they could have accumulated over time if they had invested that money in stocks instead.
So, basically, the app shows you two kinds of investment insights:
1. What you could have earned by now if you had invested around 6 months ago.
2. The most favorable companies to invest in at the moment, based on the Warren Buffett model.
## How we built it
We built a Python script that scrapes the web, analyzes the stock market, and suggests to the user the most promising companies to invest in based on the Warren Buffett model.
## Challenges we ran into
Initially the web scraping was hard; we tried multiple approaches and different automation tools to get the details, but somehow we were not able to incorporate them fully. So we had to write the web scraper code completely by ourselves and set various parameters to shortlist the companies for investment.
## Accomplishments that we're proud of
We were able to come up with a good idea for helping people have a financially better life.
We learned so many things on the spot and made them work for satisfactory results, but we think there are many more ways to make this effective.
## What we learned
We learned Firebase, and we also learned how to scrape data from sites with complex structures.
Since we are a team of three new members who only just formed at the hackathon, we had to learn to cooperate with each other.
## What's next for Revenue Now
We can study each user's spending behavior and build customized profiles that guide them toward the best use of their income, suggesting saving and investment patterns that the user is comfortable with.
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client to complete a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, working without any sleep was definitely the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at a company's logo, identifies the company, grabs its financial information, recent news, and financial statements, and displays the data in an augmented reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to save and invest.
## How we built it
Built using the Wikitude SDK, which handles augmented reality for mobile applications, along with a mix of financial data APIs, Highcharts, and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
There is potential to build more charts, financials, and better speech/chatbot abilities into our application. There is also a direction toward more interactivity, using hands to play around with our dashboard once we figure that part out.
|
partial
|
## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
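Our moderation check runs in the Java backend, but the Python sketch below illustrates the equivalent call to Watson Tone Analyzer; the service URL, blocked tone, and threshold shown here are placeholders for illustration.

```python
# Illustrative sketch of the tone-based moderation check (the real check lives
# in our Java backend; the threshold and blocked tone are placeholders).
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

analyzer = ToneAnalyzerV3(version="2017-09-21",
                          authenticator=IAMAuthenticator("YOUR_API_KEY"))
analyzer.set_service_url("https://api.us-south.tone-analyzer.watson.cloud.ibm.com")

def is_publishable(text, anger_threshold=0.75):
    """Block strongly angry posts, but let venting (sadness, fear) through."""
    result = analyzer.tone({"text": text},
                           content_type="application/json").get_result()
    tones = result.get("document_tone", {}).get("tones", [])
    for tone in tones:
        if tone["tone_id"] == "anger" and tone["score"] > anger_threshold:
            return False
    return True
```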
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten even better at speedrunning the UX process than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kind of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
|
## Inspiration
The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to; commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet as not to become a burden on those around them. Over time, this takes a toll on one’s well being, so we decided to tackle this issue in a creative yet simple way.
## What it does
VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match 2 users who are looking to have a similar conversation. VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others.
## How we built it
We began by building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm that detects common words from the user inputs and pairs up two users in the queue to start messaging. Then we integrated, tested, and refined how the app worked.
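The app itself is React Native, but the matching idea can be sketched briefly; the stopword list and scoring below are illustrative stand-ins for our actual keyword logic, shown here in Python for readability.

```python
# Simplified sketch of the pairing logic: pick the queued user whose topic
# words overlap most with the new user's input (stopword list is illustrative).
STOPWORDS = {"the", "a", "an", "and", "i", "my", "to", "of", "about"}

def keywords(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

def best_match(new_input, queue):
    """queue: list of (user_id, topic_text). Returns the best user_id or None."""
    new_kw = keywords(new_input)
    best, best_score = None, 0
    for user_id, topic_text in queue:
        score = len(new_kw & keywords(topic_text))
        if score > best_score:
            best, best_score = user_id, score
    return best
```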
## Challenges we ran into
One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work we finally got the API working with Axios.
## Accomplishments that we're proud of
In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of minds, it's a good idea to have some resources available to them.
## What we learned
Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are connotated with emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result!
## What's next for VenTalk
There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even a webapp version. We also want to add more personal features, such as a personal locker of stuff that makes you happy (such as a playlist, a subreddit or a netflix series).
|
## Inspiration
Studies show that drawing, coloring, and other art-making activities can help people express themselves artistically and explore their art's psychological and emotional undertones [1]. Before this project, many members of our team had already caught on to the stress-relieving capabilities of art-centered events, especially when they involved cooperative interaction. We realized that we could apply this concept in a virtual setting in order to make stress-relieving art events accessible to those who are homeschooled, socially-anxious, unable to purchase art materials, or otherwise unable to access these groups in real life. Furthermore, virtual reality provides an open sandbox suited exactly to the needs of a stressed person that wants to relieve their emotional buildup. Creating art in a therapeutic environment not only reduces stress, depression, and anxiety in teens and young adults, but it is also rooted in spiritual expression and analysis [2]. We envision an **online community where people can creatively express their feelings, find healing, and connect with others through the creative process of making art in Virtual Reality.**
## VIDEOS:
<https://youtu.be/QXY9UfquwNI>
<https://youtu.be/u-3l8vwXHvw>
## What it does
We built a VR application that **learns from the user's subjective survey responses** and then **connects them with a support group who might share some common interests and worries.** Within the virtual reality environment, they can **interact with others through anonymous avatars, see others' drawings in the same settings, and improve their well-being by interacting with others in a liberating environment.** To build the community outside of VR, there is an accompanying social media website allowing users to share their creative drawings with others.
## How we built it
* We used SteamVR with the HTC Vive HMD and Oculus HMD, as well as Unity to build the interactive environments and develop the softwares' functionality.
* The website was built with Firebase, Node.js, React, Redux, and Material UI.
## Challenges we ran into
* Displaying drawing real-time on a server-side, rather than client-side output posed a difficulty due to the restraints on broadcasting point-based cloud data through Photon. Within the timeframe of YHack, we were able to build the game that connects multiple players and allows them to see each other's avatars. We also encountered difficulties with some of the algorithmic costs of the original line-drawing methods we attempted to use.
## Citation:
[1] <https://www.psychologytoday.com/us/groups/art-therapy/connecticut/159921?sid=5db38c601a378&ref=2&tr=ResultsName>
[2] <https://www.psychologytoday.com/us/therapy-types/art-therapy>
|
winning
|
## Inspiration
One of the biggest roadblocks during disaster relief is reestablishing the first line of communication between community members and emergency response personnel. Whether it is the aftermath of a hurricane devastating a community or searching for individuals in the backcountry, communication is the key to speeding up these relief efforts and ensuring a successful rescue of those at risk.
In the event of a hurricane, blizzard, earthquake, or tsunami, cell towers and other communication nodes can be knocked out leaving millions stranded and without a way of communicating with others. In other instances where skiers, hikers, or travelers get lost in the backcountry, emergency personnel have no way of communicating with those who are lost and can only rely on sweeping large areas of land in a short amount of time to be successful in rescuing those in danger.
This is where Lifeline comes in. Our project is all about leveraging communication technologies in a novel way to establish communication quickly without the need for preexisting infrastructure such as cell towers, satellites, or wifi access points, thereby speeding up natural disaster relief efforts and search and rescue missions, and helping provide real-time metrics for emergency personnel to leverage.
Lifeline uses LoRa and Wifi technologies to create an on-the-fly mesh network to allow individuals to communicate with each other across long distances even in the absence of cell towers, satellites, and wifi. Additionally, Lifeline uses an array of sensors to send vital information to emergency response personnel to assist with rescue efforts thereby creating a holistic emergency response system.
## What it does
Lifeline consists of two main portions. First is a homebrewed mesh network made up of IoT and LoRaWAN nodes built to extend communication between individuals in remote areas. The second is a control center dashboard to allow emergency personnel to view an abundance of key metrics of those at risk such as heart rate, blood oxygen levels, temperature, humidity, compass directions, acceleration, etc.
On the mesh network side, Lifeline has two main nodes. A control node and a network of secondary nodes. Each of the nodes contains a LoRa antenna capable of communication up to 3.5km. Additionally, each node consists of a wifi chip capable of acting as both a wifi access point as well as a wifi client. The intention of these nodes is to allow users to connect their cellular devices to the secondary nodes through the local wifi networks created by the wifi access point. They can then send emergency information to response personnel such as their location, their injuries, etc. Additionally, each secondary node contains an array of sensors that can be used both by those in danger in remote communities or by emergency personnel when they venture out into the field so members of the control center team can view their vitals. All of the data collected by the secondary nodes is then sent using the LoRa protocol to other secondary nodes in the area before finally reaching the control node where the data is processed and uploaded to a central server. Our dashboard then fetches the data from this central server and displays it in a beautiful and concise interface for the relevant personnel to read and utilize.
Lifeline has several main use cases:
1. Establishing communication in remote areas, especially after a natural disaster
2. Search and Rescue missions
3. Providing vitals for emergency response individuals to control center personnel when they are out in the field (such as firefighters)
## How we built it
* The hardware nodes used in Lifeline are all built on the ESP32 microcontroller platform along with a SX1276 LoRa module and IoT wifi module.
* The firmware is written in C.
* The database is a real-time Google Firebase.
* The dashboard is written in React and styled using Google's Material UI package.
## Challenges we ran into
One of the biggest challenges we ran into in this project was integrating so many different technologies together. Whether it was establishing communication between the individual modules, getting data into the right formats, working with new hardware protocols, or debugging the firmware, Lifeline provided our team with an abundance of challenges that we were proud to tackle.
## Accomplishments that we're proud of
We are most proud of having successfully integrated all of our different technologies and created a working proof of concept for this novel idea. We believe that combining LoRa and wifi in this way can pave the way for a new era of fast communication that doesn't rely on heavy infrastructure such as cell towers or satellites.
## What we learned
We learned a lot about new hardware protocols such as LoRa as well as working with communication technologies and all the challenges that came along with that such as race conditions and security.
## What's next for Lifeline
We plan on integrating more sensors in the future and working on new algorithms to process our sensor data to get even more important metrics out of our nodes.
|
## Inspiration
The three of us love lifting at the gym. We always see apps that track cardio fitness but haven't found anything that tracks lifting exercises in real time. Oftentimes when lifting, people tend to employ poor form, leading to gym injuries that could have been avoided by being proactive.
## What it does
Our product tracks body movements using EMG signals from a Myo armband the athlete wears. During the activity, the application provides real-time tracking of muscles used, distance specific body parts travel and information about the athlete’s posture and form. Using machine learning, we actively provide haptic feedback through the band to correct the athlete’s movements if our algorithm deems the form to be poor.
## How we built it
We trained an SVM based on employing deliberately performed proper and improper forms for exercises such as bicep curls. We read properties of the EMG signals from the Myo band and associated these with the good/poor form labels. Then, we dynamically read signals from the band during workouts and chart points in the plane where we classify their forms. If the form is bad, the band provides haptic feedback to the user indicating that they might injure themselves.
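A hedged sketch of that pipeline is below; the window features are simple stand-ins (mean and standard deviation of rectified EMG per channel), not the exact signal properties we extract.

```python
# Hedged sketch of the form classifier: features computed from EMG windows are
# fed to an sklearn SVM trained on deliberately good/bad form examples.
import numpy as np
from sklearn.svm import SVC

def emg_features(window):
    """window: (n_samples, 8) array of raw Myo EMG channels."""
    rectified = np.abs(window)
    # Per-channel mean and spread of the rectified signal (illustrative features).
    return np.concatenate([rectified.mean(axis=0), rectified.std(axis=0)])

def train_form_classifier(windows, labels):
    """windows: list of EMG windows; labels: 1 for proper form, 0 for poor form."""
    X = np.array([emg_features(w) for w in windows])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, np.array(labels))
    return clf
```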
## Challenges we ran into
Interfacing with the Myo band's API was not the easiest task for us, since we ran into numerous technical difficulties. However, after we spent copious amounts of time debugging, we finally managed to get a clear stream of EMG data.
## Accomplishments that we're proud of
We made a working product by the end of the hackathon (including a fully functional machine learning model) and are extremely excited for its future applications.
## What we learned
It was our first time making a hardware hack so it was a really great experience playing around with the Myo and learning about how to interface with the hardware. We also learned a lot about signal processing.
## What's next for SpotMe
In addition to refining our algorithms and the depth of insights we can provide, we definitely want to expand the breadth of activities we cover (since we're currently focused primarily on weight lifting).
The market we want to target is sports enthusiasts who want to play like their idols. By collecting data from professional athletes, we can come up with “profiles” that the user can learn to play like. We can quantitatively and precisely assess how close the user is playing their chosen professional athlete.
For instance, we played tennis in high school and frequently had to watch videos of our favorite professionals. With this tool, you can actually learn to serve like Federer, shoot like Curry or throw a spiral like Brady.
|
## Inspiration
In light of ongoing global conflicts, many civilians in war-torn countries face hardships. Recognizing these challenges, LifeLine Aid was inspired to direct vulnerable groups to essential medical care, health services, shelter, food and water assistance, and other deprivation relief.
## What it does
LifeLine Aid provides multifunctional tools that enable users in developing countries to locate resources and identify dangers nearby. Utilizing the user's location, the app alerts them about the proximity of a situation and centers for help. It also facilitates communication, allowing users to share live videos and chat updates regarding ongoing issues. An upcoming feature will highlight available resources, like nearby medical centers, and notify users if these centers are running low on supplies.
## How we built it
Originally, the web backend was to be built using Django, a trusted framework in the industry. As we progressed, we realized that the effort required was not sustainable, as we made no progress within the first day; Django proved to be a roadblock. Drawing on one team member's extensive research into asyncio, we ultimately decided to switch to FastAPI, a trusted framework used by Microsoft. Using this framework had both its benefits and costs.
Our backend proudly uses CockroachDB, an unstoppable force to be reckoned with. CockroachDB allowed our code to scale and continue to serve those who suffer from the effects of war.
## Challenges we ran into
In order to pinpoint hazards and help, we would need to obtain, store, and reverse-engineer geospatial coordinate points, which we would then present to users in a map-centric manner. We initially struggled with converting the geospatial data from a degrees, minutes, seconds format to decimal degrees and storing the converted values as points on the map, which were then stored as unique 50-character SRID values. Luckily, one of our teammates had some experience with processing geospatial data, so drafting coordinates on a map wasn't our biggest hurdle to overcome. Another challenge we faced was certain edge cases in our initial Django backend that resulted in invalid data. Since some outputs would be relevant to our project, we had to make an executive decision to change backends midway through, and we decided to go with FastAPI. Although FastAPI brought its own challenge of processing SQL into usable data, it was our way of overcoming our Django situation. One last challenge we ran into was our overall source control: a mixture of slow and unbearable WiFi, combined with tedious local git repositories not correctly syncing, created some frustrating deadlocks and holdbacks. To combat this downtime, we resorted to physically drafting and planning out how each component of our code would work.
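For reference, the core of that conversion is small; here is a minimal sketch (the hemisphere convention shown is the one we assumed).

```python
# Minimal sketch of the DMS -> decimal degrees conversion we needed.
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """hemisphere: one of 'N', 'S', 'E', 'W'; S and W become negative."""
    value = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere in ("S", "W") else value

# Example: 43° 39' 11" N, 79° 23' 45" W -> roughly (43.6531, -79.3958)
# lat = dms_to_decimal(43, 39, 11, "N")
# lon = dms_to_decimal(79, 23, 45, "W")
```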
## Accomplishments that we're proud of
Three out of the four members of our team are attending their first hackathon. The experience of crafting an app and seeing the fruits of our labor is truly rewarding, and the opportunity to acquire and apply new tools in our project has been exhilarating. Through this hackathon, our team members were all able to learn different aspects of turning an idea into a scalable application: designing and learning UI/UX, implementing the React Native framework, emulating iOS and Android devices to test and program for compatibility, and creating communication between the frontend and the backend/database.
## What we learned
This challenge aimed to dive into technologies that are widely used in our daily lives. Spearheading the competition with a framework trusted by huge companies such as Meta, Discord, and others, we chose to explore the capabilities of React Native. Joining our team are three students attending their first hackathon, and the opportunity to explore these technologies has given us a skill set for a lifetime.
With the concept of the application in mind, we researched and discovered that the best way to represent our data is through geospatial data. CockroachDB's extensive tooling and support allowed us to investigate the usage of geospatial data extensively as our backend team traversed the complexity and sheer scale of the technology. We are extremely grateful to have this opportunity to network and to use these tools that will be useful in the future.
## What's next for LifeLine Aid
There are a plethora of avenues to further develop the app, including enhanced verification, rate limiting, and improved hosting using Azure Kubernetes Service (AKS). This hackathon project is planned to be maintained further into the future as a project for others, new or experienced in this field, to collaborate on.
|
winning
|
## Inspiration
Our team decided to analyze the current COVID-19 testing procedures in our region. Then, we found that residents were spending a disproportionate amount of time waiting in line at local health centers to get tested for COVID-19. Additionally, they barely respected social distancing guidelines due to the regular overcrowdedness of those centers. With those observations in mind, we agreed to come up with **Test-on-the-Fly**, a drone-based system that local health centers can use to offer COVID-19 tests to residents within minutes from the moment they request it, all that while respecting social distancing guidelines from the comfort of their homes!
## What it does
1. The user initiates a COVID-19 test request using **Test-on-the-Fly**'s Android app while making sure that their location is on.
2. The local health center receives the request information, which includes the user's location and username, in its Firestore database.
3. The request information is encoded into a QR code that is sent to a drone holding a COVID-19 test kit, as well as to the user for validation purposes.
4. The drone flies to the user's location using a real-time navigation algorithm leveraging GPS and compass modules.
5. Once the drone arrives at the user's location, it proceeds to land within a safe range; the user can then retrieve the COVID-19 test kit from the drone receptacle and follow instructions to test themselves.
6. Once the user completes the COVID-19 test procedure, they drop the test sample in the drone receptacle and show the QR code, which was sent to them through the Test-on-the-Fly app, to the drone camera so that we can validate the user's identity.
7. Finally, the drone flies back to the local health center with the test sample ready for analysis!
## How we built it
**Test-on-the-Fly** is a system made up of multiple components. First, the drone structure was assembled with 3D-printed (test kit receptacle, microcontroller and battery holders) and off-the-shelf components (carbon fiber frame). Then, the drone's flight system included carbon fiber propellers, brushless motors and a STM-type flight controller. For data acquisition and signal processing, GPS and compass modules, as well as a Raspberry Pi 4 board were used; the RPi4 board was in charge of executing the real-time navigation algorithm and then sending the updated flight parameters (yaw, throttle, pitch, roll) to the drone's flight controller. Also, note that a soldering iron was used, among many other instances, to solder the GPS and compass pins to the jumper cables that are connected to the RPi4 board.
Second, the Android user app was built entirely using Java (main classes used include the Android suite, ZXing, and Firebase/Firestore for authentication and data retrieval), with user information being stored in a Cloud Firestore database.
Third, the real-time navigation algorithm was developed in Python. For GPS and compass interfacing, the gpsd, pynmea2, and i2clibraries modules were mainly used; for QR code recognition, we used OpenCV. Then, for PPM communication between the RPi4 board and the flight controller, the pigpio module was imported.
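As a hedged illustration of one navigation step, the sketch below computes the great-circle bearing from the current GPS fix to the target and the signed yaw correction relative to the compass heading; sensor setup, filtering, and the PPM output are omitted.

```python
# Illustrative sketch of one navigation step (sensor reading and calibration
# are omitted; this only shows the bearing and yaw-error math).
import math

def bearing_to_target(lat, lon, target_lat, target_lon):
    """Initial great-circle bearing in degrees from the current fix to the target."""
    phi1, phi2 = math.radians(lat), math.radians(target_lat)
    dlon = math.radians(target_lon - lon)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2) -
         math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def yaw_correction(compass_heading, desired_bearing):
    """Signed error in degrees in [-180, 180); the sign tells the drone which way to yaw."""
    return (desired_bearing - compass_heading + 180) % 360 - 180
```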
## Challenges we ran into
We ran into several challenges! First, we noticed that the real-time navigation algorithm would only generate satisfying results if we were to fly the drone outdoors; this is because the GPS module is inherently less accurate when located indoors. Unfortunately, due to snow being present on most outdoor spaces near our places, we were restricted to fly the drone indoors. Nevertheless, by slowing down the data acquisition rate of the RPi4 board from the GPS and compass modules, we were able to build a navigation algorithm that could adjust to drone drift relatively well.
Then, another challenge was that one our teammates was located in Greece during the hackathon; with him being 7 hours ahead of us, this made it hard for our team to have regular video meetings, which slowed down the exchange of thoughts and ideas. Fortunately, we were able to maintain solid team communication using exhaustive code documentation on GitHub and discussions on Messenger.
Finally, since the hardware component of our project was quite involved, part of our team, based in Montreal, had to meet in person for drone testing purposes; the main tests included communication with the RPi4 board and real-time navigation with the GPS and compass modules. Unfortunately, given that an 8PM-5AM curfew is currently in effect in our region, we had to be very effective at troubleshooting issues related to configuration, interfacing, and so on.
## Accomplishments that we're proud of
We are very proud of the Test-on-the-Fly Android app that we developed. Its simple, user-friendly and straightforward layout allows the user to quickly sign up with just a user name and a password, and then request a COVID-19 test after enabling their location. Then, a QR code is displayed on their screen, and they simply have to show it to the drone camera for validation after they prepared their test sample accordingly.
Another accomplishment we are proud of is the successful interfacing between the Android app, the RPi4 board and the drone. We initially thought that it would be virtually impossible to make all three components interact smoothly due to the diverse communication protocols governing each component. However, we surprised ourselves since we managed to integrate all three components by extensively consulting documentation on the Internet (tutorials, articles, etc.) and managing our stress adequately as a team. Even though the drone navigation is less accurate than we expected, we think it is still an accomplishment worth mentioning.
The final accomplishment, and perhaps the most important one, is the effectiveness and dedication of our team despite unfavorable circumstances. From driving to another teammate's place to pick up a crucial hardware component, to meeting with our teammate in Greece at 3 AM in the morning to discuss essential feature specifications, we took actions which demonstrated everyone's awareness that, regardless of unfavorable circumstances, we could build, step by step, something impressive, innovative and useful for the health system.
## What we learned
This hackathon was a great learning experience for all of us! From organizing our project and collaborating from different time zones to learning how to use new sensors and components, frameworks and APIs, we all learned something new.
For instance, the Raspberry Pi 4 board was not a technology that most of our team members were familiar with. Indeed, our prior projects were based on Arduino boards, which are simple, high-level electronics prototyping platforms. On the other hand, Raspberry Pi boards have much more dependencies. For instance, to be able to execute Python scripts on the RPi4 board, we had to put the desired files and the Raspberry Pi operating system in a microSD card, and then insert it in the appropriate port on the board.
Then, we also learned that the Raspberry Pi 4 signals did not have the same format (10-bit) as those processed by the flight controller (8-bit). Therefore, we wrote an additional Python script to convert 10-bit signals from the RPi4 board to 8-bit signals (throttle, yaw, pitch, roll) that could be recognized by the flight controller.
## What's next for Test-on-the-Fly
When weather conditions will be more favorable, we plan on validating the **Test-on-the-Fly** system in an ideal outdoors setting so that the drone can navigate and arrive at the destination location more accurately. Then, we plan on making this project fully-functional and fully-documented so that local health centers can test it on a small scale to reduce their waiting times for COVID-19 tests. Eventually, we also plan on improving the drone hardware to extend the scope of this system. For instance, residents with mobility problems could use it to fetch household items and delivered packages.
|
## Inspiration
The inspiration for our hackathon idea stemmed from an experience of one of our team members who had recently been to the hospital. They noticed the large number of staff required at every entrance to ensure that patients and visitors had their masks on properly, ask COVID-19 screening questions, and record each person's time of entry into the hospital. They thought about the potential problems and implications this might have, such as health care workers having a higher chance of getting sick due to more frequent exposure to other individuals, as well as the resources required to complete this task.
We also discussed the scalability of this procedure and how it could apply to schools and businesses. Hiring an employee to perform these tasks may be financially unfeasible for small businesses and schools, but the social benefit that these services would provide would definitely help towards the containment of COVID-19.
Our team decided to see if we could use a combination of Machine Learning, AI, Robotics, and Web development in order to automate this process and create a solution that would be financially feasible and reduce the workload on already hard-working individuals who work every day to keep us safe.
## What it does
Our stand-alone solution consists of three main elements: the hardware, the mobile app, and the software that connects everything together.
**Camera + Card Reader**
The hardware is meant to be placed at an entry point for a business/school. It automatically detects the presence of a person through an ultrasonic sensor. From there, it adjusts the camera to center the view for a better image and takes a screenshot. The screenshot is sent in an API request to the Microsoft Azure Computer Vision Prediction API, which returns a confidence value for each tag (Mask / No Mask). Once the person is confirmed to be wearing a mask, the individual is prompted to scan their RFID tag. The hardware looks up the owner of the RFID id and records a check-in or check-out time for their profile in a cloud database (Firestore).
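As a rough sketch of that check-in step using the google-cloud-firestore Python client on the Pi (the collection name, field names, and the idea of storing check-ins as a timestamp array are illustrative assumptions, not our exact schema):

```
# Hedged sketch: look up the scanned RFID tag and record a check-in timestamp.
from datetime import datetime, timezone
from google.cloud import firestore

db = firestore.Client()

def record_checkin(rfid_id: str) -> None:
    """Append a check-in time to the profile of whoever owns the scanned tag."""
    doc_ref = db.collection("employees").document(rfid_id)   # hypothetical collection
    if not doc_ref.get().exists:
        raise ValueError(f"Unknown RFID tag: {rfid_id}")
    # ArrayUnion appends the new timestamp without overwriting earlier check-ins.
    doc_ref.update({"checkins": firestore.ArrayUnion([datetime.now(timezone.utc)])})
```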
**Mobile Application**
The mobile application is intended for the administrator/business owner who would like to be able to manage the hardware settings and observe analytics *(we did not have enough time to complete that, unfortunately)*. Additionally, the mobile app can be used to perform basic contact tracing through an API request to a custom-made Autocode API that checks the database and determines recent potential instances of exposure between employees based on check-in and check-out times. It will also determine the employees affected and automatically send them an email with the dates of the potential exposure instances.
**The software**
Throughout our application, we had many smaller pieces of software that were used to run our overall prototype. From the Python scripts on our Raspberry Pi that communicate with the database, to the custom API made on Autocode, there were many small pieces that we had to put together in order for this prototype to work.
## How we built it
For all of our team members, this was our first hackathon and we had to think creatively about how we were going to make our idea into a reality. Because of this, we used many well-documented, beginner-friendly services to create a "stack" that we were able to manage with our limited expertise. Our team background came mainly from robotics and hardware, so we definitely wanted to incorporate a hardware element into our project; however, we also wanted to take full advantage of this amazing opportunity at Hack The 6ix and apply the knowledge that we learned in the workshops.
**The Hardware**
In order to make our hardware, we utilized a Raspberry Pi and various sensors that we had on hand. Our hardware consisted of an RFID reader, Ultrasonic Sensor, Servo Motor, and Web Camera to perform the tasks mentioned in the section above. Additionally, we had access to a 3D printer and were able to print some basic parts to mount our electronics and create our device. **(Although our team has a stronger mechanical background, we spent most of our time programming haha)**
**Mobile Application**
In order to program our mobile app, we utilized a framework called Flutter, which is developed by Google and is a very easy way to rapidly prototype a mobile application that can be supported by both Android and iOS. Because Flutter is based on the Dart language, it was very easy to follow tutorials and documentation, and some members had previous experience with Flutter. We also decided to go with Firestore as our database, as there was quite a lot of documentation and support for using the two together.
**Software**
In order to put everything together, we had to utilize a variety of skills and get creative with how we were going to connect our backend, considering our limited experience in programming and computer science. To run the mask detector, we first used some Python scripts on a Raspberry Pi to center our camera on the subject and perform very basic face detection to determine whether to take a screenshot to send to the cloud for processing. We did not want to stream our entire camera feed to the cloud, as that could be costly due to a high rate of API requests, as well as impractical due to hardware limitations. Because of that, we used lower-end face detection to decide whether a screenshot should be taken; from there we send it through an API request to the Microsoft Azure Computer Vision Prediction API, where we had trained a model to detect two classes (Mask and No Mask). We were very impressed with how easy it was to set up the Azure Prediction API, and it really helped our team with reliable, accurate, and fast mask detection.
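To make that gating idea concrete, here is a minimal sketch of the flow; the prediction URL, project details, and key are placeholders, and the exact endpoint format depends on how the Azure prediction resource is published:

```
# Sketch: cheap local face detection decides when to call the paid cloud classifier.
from typing import Optional

import cv2
import requests

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

PREDICTION_URL = "https://<region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/classify/iterations/<iteration>/image"  # placeholder
HEADERS = {"Prediction-Key": "<key>", "Content-Type": "application/octet-stream"}

def check_mask(frame) -> Optional[str]:
    """Return the most likely tag ('Mask' / 'No Mask'), or None if nobody is in frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                 # nobody in frame, skip the API call
    ok, jpg = cv2.imencode(".jpg", frame)
    resp = requests.post(PREDICTION_URL, headers=HEADERS, data=jpg.tobytes())
    resp.raise_for_status()
    best = max(resp.json()["predictions"], key=lambda p: p["probability"])
    return best["tagName"]
```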
Since we did not have much experience with back-end work in Flutter, we decided to utilize a very powerful tool, Autocode, which we learned about during a workshop on Saturday. Given Autocode's ease of use and utility, we created a back-end API that our mobile app could call with a simple HTTP request; through it, our Autocode program could interact with our Firebase database to perform basic calculations and achieve the basic contact tracing that we wanted in our project. The Autocode project can be found here!
[link](https://autocode.com/src/samsonwhua81421/unmasked-api/)
## Challenges we ran into
The majority of the challenges we ran into were due to our limited experience in back-end development, which left us with a lot of gaps in the functionality of our project. However, the mentors were very friendly and helped us connect the different parts of our project. Our creativity also helped us tie our portions together.
Another challenge that we ran into was our hardware. Because of quarantine, many of us were at home and did not have access to lab equipment that could have been very helpful in diagnosing most of our hardware problems (multimeters, oscilloscopes, soldering irons). However, we were able to solve these problems, albeit using very precious hackathon time to do so.
## What we learned
* Hackathons are very fun, we definitely want to do more!
* Sleep is very important. :)
* Microsoft Azure Services are super easy to use
* Autocode is very useful and cool
## What's next for Unmasked
The next steps for Unmasked would be to further develop the contact tracing feature of the app, as knowing who was in the same building at the time does not provide enough information to determine who may actually be at risk. One potential solution would be to have employees scan their IDs by location as well, making it possible to determine whether any individuals were actually near those with the virus.
|
## Inspiration
Our solution is named in remembrance of Mother Teresa.
## What it does
Our solution is robotic technology that assists nurses and doctors with medicine delivery and patient handling across the hospital, including ICUs. We are also planning to build a low-code/no-code app that helps COVID patients scan themselves: the mobile app is integrated with the CT scanner to save doctors' time and prevent human error. We trained a CNN model on CT scans of COVID cases and integrated it into our application to help COVID patients. The datasets were collected from Kaggle and tested with an efficient algorithm reaching an accuracy of around 80%, and doctors can maintain the patients' records. The primary beneficiaries of the app are patients.
## How we built it
Robots are often described as the most promising and advanced form of human-machine interaction. The bot we designed can be operated manually through an app and cloud technology with a predefined database of actions, and further moves are controlled manually through the mobile application. At the same time, to reduce the workload of doctors, a customized feature is included that processes x-ray images through the app using convolutional neural networks (CNNs) as part of the image processing system. CNNs are deep learning algorithms that are very powerful for image analysis, giving a quick and accurate classification of disease based on the information in digital x-ray images. To obtain better detection accuracy, we used an open-source Kaggle dataset.
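For illustration, a binary classifier of the kind described could be sketched in Keras as below; the input size, layer widths, and training settings are assumptions rather than our exact architecture:

```
# Illustrative Keras sketch of a binary COVID / normal image classifier.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(224, 224, 3)):
    model = keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # 1 = COVID, 0 = normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Training would use the Kaggle scan images, e.g. loaded with
# keras.utils.image_dataset_from_directory("scans/", image_size=(224, 224)).
```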
## Challenges we ran into
The data for the initial stage could be collected from Kaggle, but during real-time implementation the working model and the Flutter mobile application need datasets collected from nearby hospitals, which was the challenge.
## Accomplishments that we're proud of
* Counselling and Entertainment
* Diagnosing therapy using pose detection
* Regular checkup of vital parameters
* SOS to doctors with live telecast
* Supply of medicines and food
## What we learned
* CNN
* Machine Learning
* Mobile Application
* Cloud Technology
* Computer Vision
* Pi Cam Interaction
* Flutter for Mobile application
## Implementation
* The bot is designed to have the supply carrier at the top and the motor driver connected with 4 wheels at the bottom.
* The battery will be placed in the middle and a display is placed in the front, which will be used for selecting the options and displaying the therapy exercise.
* The image aside is a miniature prototype with some features
* The bot will be integrated with path planning; this is done with the help of Mission Planner, where we configure the controller and select the destination location as a node
* If an obstacle is present in the path, it will be detected with lidar placed at the top.
* In some scenarios, if medicines need to be bought, the bot's attached audio receiver and speaker come into play: once the bot reaches a certain spot through mission planning, it announces the medicines required so they can be placed in the carrier.
* The bot will have a carrier at the top, where the items will be placed.
* This carrier will also have a sub-section.
* So if the bot is carrying food for the patients in a ward, once it reaches a certain patient, the LED on the section containing that patient's food will blink.
|
losing
|
## Inspiration
We wanted to leverage machine learning to help lenders to Kiva learn where their money could make the greatest impact. Kiva is an international nonprofit, founded in 2005 and based in San Francisco, that works with microfinance institutions on five continents to provide loans to people without access to traditional banking systems. Lenders invest money in small businesses and fundraisers in underprivileged parts of the world.
Our goal was to quantify the impact of people's donations and show people how even tiny amounts of money can help multiple families across the globe start businesses that are self-sustaining.
## What it does
Kinvest is trained on Kiva's large datasets. We used these to train a predictive model that can score the value of a dollar amount in a certain country and accurately predict the number of families that are directly impacted by a donor's donation. This encourages people to donate more.
## How we built it
Kinvest is built in `python`. We used data and machine learning libraries like `pandas` and `sklearn`. We also integrated Flask, Beaker Notebook, and Firebase.
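As a rough sketch of that pipeline (the column names in the Kiva CSV are assumptions, not the exact fields we used):

```
# Illustrative sketch: predict families impacted from basic loan features.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

loans = pd.read_csv("kiva_loans.csv")                        # Kiva snapshot data
X = pd.get_dummies(loans[["country", "sector", "loan_amount"]])
y = loans["families_impacted"]                               # hypothetical target column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out loans:", model.score(X_test, y_test))
```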
## Challenges we ran into
With the data we had, it was very difficult to define what a successful loan was, or who a successful lender was. It was also difficult to learn how to leverage all of our large tech stack in just 24 hours.
## Accomplishments that we're proud of
* The potential impact of Kinvest is huge. We are excited to see where Kinvest will impact those in need!
* Finishing our hack in 24 hours
* Going from being unfamiliar with these technologies to properly implementing them in an application was no small feat
## What we learned
* The satisfaction of hacking for good
* How to work with a large dataset in a limited amount of time
* The importance of picking up languages and technologies on the fly
## What's next for Kinvest
We want to study Kiva's dataset more deeply so that we can better predict what a good choice for a loan is. The possibilities for linking up with other datasets (World Bank, Census, economic data) is nearly limitless. We want to see where the societal impact of Kinvest can go.
|
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy, store, collect dust inspired us to develop Lendit: a product that aims to slow the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built around Raspberry Pi 3 (64-bit, 1 GB RAM, ARM-64) boards and are connected to our app through Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API.
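A minimal sketch of the locker's facial-recognition check could look like the following; it uses OpenCV's LBPH recognizer (from opencv-contrib-python), and the model file and distance threshold are assumptions:

```
# Hedged sketch: decide whether the person at the locker matches the registered owner.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("owner_model.yml")            # trained offline on the owner's selfies

def owner_at_locker(frame, max_distance=60.0) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if distance < max_distance:           # lower distance means a closer match
            return True
    return False
```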
For our app, we've used Flutter/Dart and interfaced with Firebase. To ensure *trust*, which is core to borrowing and lending, we've experimented with Ripple's API to create an escrow system.
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and the Raspberry Pi would work. Our app developer had previously worked only with web apps backed by SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resolve to unconventional methods to utilize its capabilities with an internet that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and arrived at a pretty good proof of concept, giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our object detection and facial recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers, we believe our product LendIt will grow with us. We would be honoured if we can contribute in any way to reducing the growing waste economy.
|
#### PLEASE WATCH THE DEMO VIDEO IN THE HEADING OF THIS DEVPOST
## Auxilium Inspiration
In many low income areas around the world, people are forced to rely on unaccredited institutions for loans because their jobs do not provide them with a formal/stable source of income. This issue is primarily prominent in India where workers like rickshaw drivers and food-stop owners have to rely on unregulated loans to sustain and grow their businesses. These unregulated loans may come with unfavourable conditions that can harm the borrowers.
Auxilium aims to create value for the grey financial system in 3 key ways:
1. Help borrowers build credit history to make them eligible for loans from accredited institutions
2. Serve as a mediator between lenders and borrowers to avoid bounty hunting
3. Provide charitable microloans that directly improve people’s quality of life
In order to actualize on these goals we envision a network of low cost ATMs designed specifically for loan management. To keep deployment costs low, we intend to use cheap telecommunications infrastructure, like text messaging and phone calls as a user interface. Lenders use our web application to extend credit to individuals through a regulated interest schedule. In the case of loan default, lenders can negotiate with Auxilium for reasonable insurance instead of head hunting individual borrowers. Transactions for borrowers and lenders will be recorded on the Stellar blockchain as an immutable credit history that could eventually be used to prove creditworthiness for home mortgages or other large payments. Since many of the users we intend to reach may not have a government issued ID, we intend to use facial recognition software to validate identities.
The scope of our hackathon project was:
1. A hardware ATM that is interfaced with our server.
2. A blockchain schema on the Stellar Network that provides a publicly visible and immutable record that can be used to evaluate creditworthiness. *(Stellar)*
3. A facial recognition based registration process that doesn’t allow for fraudulent duplicate account creation. *(AWS Rekognition)*
4. A server that retains user information and coordinates Twilio, ATMs and the Stellar blockchain
## Auxilium Tech Stack
**Hardware** *(Raspberry Pi 3, IR Break-Beam Sensor, Servo Motor, 3D Printer, Laser Cutter)* : We created a miniature ATM to satisfy a pivotal need in this project. First, we used the 3D printer and laser cutter to create the housing for the ATM, with one slot for coin deposit and one for coin withdrawal. The coin deposit mechanism uses the break-beam sensor to count coins as they pass, and the coin withdrawal mechanism uses the servo motor and custom parts cut on the laser cutter. This information is collected via Python scripts running alongside a Node server on the Raspberry Pi 3.
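As an illustration, the coin-counting part of those scripts could be sketched roughly as below; the GPIO pin number and debounce time are assumptions:

```
# Sketch: each interruption of the IR break-beam means one coin passed the slot.
import RPi.GPIO as GPIO

BEAM_PIN = 17            # hypothetical BCM pin wired to the break-beam receiver
deposited = 0

def on_beam_broken(channel):
    global deposited
    deposited += 1
    print(f"Coins deposited this session: {deposited}")

GPIO.setmode(GPIO.BCM)
GPIO.setup(BEAM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
# Falling edge = beam just interrupted by a coin; debounce to avoid double counts.
GPIO.add_event_detect(BEAM_PIN, GPIO.FALLING, callback=on_beam_broken, bouncetime=100)
```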
**Blockchain** *(Stellar)* : Stellar serves as an immutable transaction ledger that can be used by financial institutions to evaluate creditworthiness of borrowers. Every time a transaction occurs on our network, it is pushed to the Stellar ledger. Our web-view populates the transactions for users from the ledger and is meant to serve as a portal for lenders to extend credit to borrowers on our platform.
**Web View** *(React.js)* : The web-view allows the user and financial institutions to view the registered transaction history of users. They can toggle the settings to view a feed of live transactions or view their own. It also allows users to create accounts and displays statistics about Auxilium.
**AWS** *(Amazon Rekognition, S3 Bucket)*: We used Amazon Rekognition to bolster authentication for our web platform. We upload the user's image into the bucket and use the AI library to compare this image against every other user's photo. Since improving credit history is a critical motivating factor for people to repay loans, we want to ensure that no individual can reset their credit score by creating an alternate identity.
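A minimal boto3 sketch of that duplicate-identity check might look like this; the bucket name and the idea of looping over every stored photo are illustrative assumptions:

```
# Hedged sketch: reject a new signup photo that matches an existing user's photo.
import boto3

rekognition = boto3.client("rekognition")

def is_duplicate(new_photo_key, existing_keys, bucket="auxilium-user-photos"):
    """Return True if the new photo matches any existing user's photo in S3."""
    for key in existing_keys:
        resp = rekognition.compare_faces(
            SourceImage={"S3Object": {"Bucket": bucket, "Name": new_photo_key}},
            TargetImage={"S3Object": {"Bucket": bucket, "Name": key}},
            SimilarityThreshold=90,
        )
        if resp["FaceMatches"]:               # any match above the threshold
            return True
    return False
```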
**Backend** *(Node.js, Express.js, MongoDB)* : The backend of our service acts as a liaison between all the other services. It interacts with the Stellar network in order to populate the web view with the transaction history. It also relays vital information between the ATM and Twilio, such as withdraw limit and number of coins deposited.
**Twilio**: Twilio is a pivotal part of our application and the means which we use to securely communicate with the client. Upon receiving a text message or phone call from the user, the Twilio flow verifies the user identify by hitting authentication endpoints in the back-end. From there we allow the user to conduct many operations over the phone such as withdrawing and depositing money at the ATM. Attached above is a screenshot of the Twilio flow diagram.
|
winning
|
## Inspiration
Covid-19 has turned every aspect of the world upside down. Unwanted things happen and situations change; breakdowns in communication and economic crises cannot always be prevented. Thus, we developed an application that helps people get through this pandemic by providing them with **a shift-taker job platform which creates a win-win solution for both parties.**
## What it does
This application connects companies/managers that need someone to cover a shift for an absent employee for a certain period of time, without any contract, with workers who are available. As a result, both parties can cover their needs and get through this pandemic. Beyond its main goal, this app can generally be used to help people **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their ability to get a job with job-dash.
## How we built it
For the design, Figma is the application we used to lay out all the screens and add smooth transitions between frames. While the UI was being worked on, the developers started coding the functionality to make the application work.
The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done with the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to avoid storing session cookies.
## Challenges we ran into
In terms of UI/UX, handling user information ethically and providing complete details for both parties was a challenge for us. On the developer side, using Bootstrap components ended up slowing us down because our design was custom, requiring us to override most of the styles. It would have been better to use Tailwind, as it would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks also took longer.
## Accomplishments that we're proud of
Some of us picked up new technologies while working on the project, and creating a smooth UI/UX on Figma, with every feature included, was satisfying in itself.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future hackathons, so it would be easier to focus on one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make some more improvements to the layout so it better serves its purpose of helping people find additional income effectively through job-dash. On the developer side, we would like to continue developing the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part of it, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
|
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we achieved successful teamwork and collaboration. Even though we formed our team later than other groups, we were able to communicate effectively and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
|
## Inspiration
We’re a team of **developers** and **hackers** who love tinkering with new technologies. Obviously, this means that we have been super excited about building projects in the ML space, especially applications involving LLMs. However, we realized two crucial issues with using LLMs in production applications. First, LLMs hallucinate, confidently responding with incorrect information. Second, we cannot explain on what basis the LLM gives its answer. This is why essentially all production LLM applications use retrieval augmented generation (RAG). Through RAG, you supply the LLM with relevant, factual, and citable information, significantly increasing the quality of its responses. Our initial idea for **Treehacks** was to build an app based on such a system: we wanted to build a fully automated literature review. Yet, when building the system, we spent most of our time sourcing, cleaning, processing, embedding, and maintaining data for the retrieval process.
After talking with other developers, we have realized that this is a significant hurdle many in the AI community face: the LLM app ecosystem provides robust abstractions for most parts of the backend infrastructure, yet it falls short in offering solutions for the critical data component needed for retrieval. This gap significantly impacts the development of RAG applications, making it a slow, expensive, and arduous journey to embed data into vector databases. The challenge of sourcing, embedding, and maintaining data, with its high costs and slow processing times, threw us off our initial course, making it an issue we were determined to solve.
We observed that most RAG applications require similar types of data, such as legal documents, health records, research papers, news articles, educational material, and books. Each time developers create a RAG application, they find themselves having to reinvent the wheel to populate their vector databases—collecting, pre-processing, and managing data instead of focusing on the actual application development.
**To solve this problem**, we have built an API that lets developers retrieve relevant data for their AI/LLM application without collecting, preprocessing, and managing it. Our tool sits in between developers and vector databases, abstracting away all the complexity of sourcing and managing data for RAG applications. This allows developers to focus on what they do best: build applications.
Our solution also addresses a critical mismatch for developers: the vast amount of data they need to preprocess versus how much they actually utilize. Given the steep prices of embedding models, developers must pay for all the data they ingest, regardless of how much is ultimately used. Our experience suggests that a small subset of the embedding data is frequently queried, while the vast majority is unread. Blanket eliminates this financial burden for developers.
Finally, we are also building the infrastructure to process and embed unstructured data, giving developers access to ten times the amount of data that they previously could harness, significantly enhancing the capabilities of their applications. For example, until now only the abstracts of ArXiv research papers had been embedded, as the full papers are stored in difficult-to-process PDF files. Over the course of Treehacks, we were able to embed the actual paper content itself, unlocking an incredible wealth of knowledge.
In the current RAG development stack, despite advancements and abstractions provided by tools like Langchain, open-source vector databases like Chroma, and APIs to LLM models, collecting relevant data remains the sole significant hurdle for developers building AI/LLM applications. Blanket emerges as the final piece of this puzzle, offering an API that allows developers to query the data they need with a single line of code, thereby streamlining the development process and significantly reducing overhead.
We want to emphasize that this is not a theoretical solution. We have actively demonstrated its efficacy. For our demo, we built an application that automatically generates a literature review from a research question, utilizing the Langchain and Blanket’s API. Achieved in merely **six lines of code**, this showcases the power and efficiency of our solution, making Blanket a groundbreaking tool for developers in the AI space.
## What it does
Blanket is an API which lets developers retrieve relevant data for their AI/LLM application. We are a **developer tool** that sits between developers and vector databases (such as ChromaDB and Pinecone), abstracting away all the complexity of sourcing and managing data for RAG applications. We aim to embed large, high-quality, citable datasets (using both structured and unstructured data) from major verticals (Legal, Health, Education, Research, News, ...) into vector databases such as Chroma DB. Our service will ensure that the data is up-to-date, accurate, and citable, freeing developers from the tedious work of data management.
During Treehacks, we embedded the full contents and abstracts of around 20,000 (due to time and cost constraints) computer science related ArXiv research papers. We built an easy-to-use API that lets users query our databases in their AI/LLM application removing the need for them to deal with data. Finally, we built our original idea of an app that generates an academic literature review of a research question using the blanket API with only **6 lines of code**.
```
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from blanket.utils.api import Blanket

def get_lit_review(query):
    prompt_template = """
    Create a 600 word literature review on the following topic: {query}.
    Use the following papers and context. Cite the authors and the title of the paper when you quote.
    Only use the context that is relevant, dont add a references section or a title just the review itself.
    Context: \n\n
    {context}
    """
    prompt = ChatPromptTemplate.from_template(prompt_template)
    model = ChatOpenAI()
    chain = prompt | model
    context = Blanket().get_research_data_sort_paper(query, num_results=10)
    return chain.invoke({"query": query, "context": context}).content, context
```
The API we have built currently only allows for querying data related to research papers. Below are the three user-facing functions, each returning the data in a different format so that developers can choose the format most suited to them.
```
def get_research_data(self, query: str, num_results: int = 10) -> list[dict]:
    """
    Retrieves research data based on a specified query, formatted for client-facing applications.

    This method conducts a search for research papers related to the given query and compiles
    a list of relevant papers, including their metadata. Each item in the returned list represents
    a single result, formatted with both the textual content found by the configured Vector DB and structured
    metadata about the paper itself.

    Parameters:
    - query (str): The query string to search for relevant research papers.
    - num_results (int, optional): The number of results to return. Defaults to 10.

    Returns:
    - list[dict]: A list where each element is a dictionary containing:
        - "text": The textual content related to the query as found by the Vector DB, which may include
          snippets from the paper or generated summaries.
        - "meta": A dictionary of metadata for the paper, including:
            - "title": The title of the paper.
            - "authors": A list or string of the paper's authors.
            - "abstract": The abstract of the paper.
            - "source": A URL to the full text of the paper, typically pointing to a PDF on arXiv.

    The return format is designed to be easily used in client-facing applications, where both
    the immediate context of the query's result ("text") and detailed information about the source
    ("meta") are valuable for end-users. This method is particularly useful for applications
    requiring quick access to research papers' metadata and content based on specific queries,
    such as literature review tools or academic search engines.

    Example Usage:
    >>> api = YourAPIClass()
    >>> research_data = api.get_research_data("deep learning", 5)
    >>> print(research_data[0]["meta"]["title"])
    "Title of the first relevant paper"

    Note:
    Multiple elements of the list may relate to the same paper, to return results batched by paper
    please use the `get_research_data_sort_paper` method instead.
    """


def get_research_data_sort_paper(self, query: str, num_results: int = 10) -> dict[dict]:
    """
    Retrieves and organizes research data based on a specified query, with a focus on sorting
    and structuring the data by paper ID.

    This method searches for research papers relevant to the given query. It then organizes
    the results into a dictionary, where each key is a paper ID, and its value is another
    dictionary containing detailed metadata about the paper and its contextual relevance
    to the query.

    Parameters:
    - query (str): The query string to search for relevant research papers.
    - num_results (int, optional): The desired number of results to return. Defaults to 10.

    Returns:
    - dict[dict]: A nested dictionary where each key is a paper ID and each value is a
      dictionary with the following structure:
        - "title": The title of the research paper.
        - "authors": The authors of the paper.
        - "abstract": The abstract of the paper.
        - "source": A URL to the full text of the paper, typically pointing to arXiv.
        - "context": A dictionary where each key is an index (starting from 0) and each value
          is a text snippet or summary relevant to the query, as found in the paper or generated.

    This structure is especially useful for client-facing applications that require detailed
    information about each paper, along with contextual snippets or summaries that highlight
    the paper's relevance to the query. The `context` dictionary within each paper's data allows
    for a granular presentation of how each paper relates to the query, facilitating a deeper
    understanding and exploration of the research landscape.

    Example Usage:
    >>> api = YourAPIClass()
    >>> sorted_research_data = api.get_research_data_sort_paper("neural networks", 5)
    >>> for paper_id, paper_info in sorted_research_data.items():
    >>>     print(paper_info["title"], paper_info["source"])
    "Title of the first paper", "https://arxiv.org/pdf/paper_id.pdf"
    """


def get_research_data_easy_cite(self, query: str, num_results: int = 10) -> list[str]:
    """
    Generates a list of easily citable strings for research papers relevant to a given query.

    This method conducts a search for research papers that match the specified query and formats
    the key information about each paper into a citable string. This includes the title, authors,
    abstract, and a direct source link to the full text, along with a relevant text snippet or
    summary that highlights the paper's relevance to the query.

    Parameters:
    - query (str): The query string to search for relevant research papers.
    - num_results (int, optional): The desired number of results to return. Defaults to 10.

    Returns:
    - list[str]: A list of strings, each representing a citable summary of a research paper.
      Each string includes the paper's title, authors, abstract, source URL, and a relevant
      text snippet. This format is designed to provide a quick, comprehensive overview suitable
      for citation purposes in academic or research contexts.

    Example Usage:
    >>> api = YourAPIClass()
    >>> citations = api.get_research_data_easy_cite("deep learning", 5)
    >>> for cite in citations:
    >>>     print(cite)
    Paper title: [Title of the Paper]
    Authors: [Authors List]
    Abstract: [Abstract Text]
    Source: [URL to the paper]
    Text: [Relevant text snippet or summary]
    """
```
## How we built it
We built our solution by blending innovative tech (such as vectorDBs), optimization techniques, and a seamless design for developers. Here’s how we pieced together our project:
**1. Cloud Infrastructure**
We established our cloud infrastructure by creating two Azure cloud instances. One instance is dedicated to continuously managing the embedding process, while the other manages the deployed vector database.
**2. Vector Database Selection**
For our backend database, we chose Chroma DB. This decision was driven by Chroma DB's compatibility with our goals and ethos of seamless developer tooling. Chroma DB serves as one of the backbone tools of our system, storing the embedded databases and enabling fast, reliable retrieval of embedded information.
**3. Embedding Model**
We embed documents using VoyageAI’s voyage-lite-02-instruct model. We selected it for its strong semantic similarity performance on the Massive Text Embedding Benchmark (MTEB) Leaderboard. However, it's important to note that while this model offers superior accuracy, it comes with higher costs and slower embedding times—a trade-off we accepted for the sake of quality.
**4. Data Processing and Ingestion Pipeline**
With our infrastructure in place, we focused on building a robust data processing and ingestion pipeline. Written in Python, this pipeline is responsible for collecting, processing, embedding, and storing the academic papers into our database. This step was crucial for automating the data flow and ensuring our database remains extensive and comprehensive.
**5. Optimization Techniques**
We also optimized our data processing. By leveraging a wide array of systems optimization techniques, including batch processing and parallelization, we ensured our infrastructure could handle large volumes of data efficiently. These techniques allowed us to maximize our system's performance and speed, laying the groundwork for quickly processing new data.
**6. Literature Review Demo App**
The culmination of our efforts is the literature review demo application. Utilizing our API and integrating with Langchain, we developed an application capable of generating accurate, high-quality literature reviews for research questions in a matter of seconds. This demonstration not only showcases the power of our API but also the practical application of our system in academic research.
**7. Frontend Development**
Finally, to make our application accessible and user-friendly, we designed a simple yet effective frontend using HTML. This interface allows users to interact with our demo app easily, submitting research questions and receiving comprehensive literature reviews in return.
## Challenges we ran into
Over the course of this project, we ran into a few challenges:
**1. Optimizing chunking and retrieval accuracy.**
In order to ensure accurate and relevant retrieval of data, we needed to choose smart chunking strategies. We thus had to experiment with many different strategies, measure which ones performed better compared to others, and ultimately make a decision based on data we collected.
**2. Dealing with embedding models.**
A crucial part of the system is the generation of embeddings for data. However, most high-quality embedding models are run through APIs. This makes them expensive. In addition, accessing these embedding APIs is at times very slow.
**3. Dealing with PDFs.**
As PDFs use specific encoding formats, extracting and processing data from PDFs is not straightforward. We had to deal with quite a few error cases and had to find ways to filter for badly-formatted data. This took more time and effort than we had initially expected.
**4. Deploying the database.**
In order to be able to access our database through our API, we deployed Chroma on Azure. We ran it in a docker container. However, the database crashed twice due to memory constraints, leading to us losing our generated embeddings. So, we figured out how to use the disk by directly inspecting ChromaDB’s source code.
## Accomplishments that we're proud of
**1. Embedding full-text ArXiv papers.**
We are the first team to embed the full texts of thousands of ArXiv papers into a widely accessible database. We believe that this can have a wide range of use cases, from application development, to education and academic research.
**2. Pivoting during the Hackathon.**
We successfully pivoted from creating an LLM application to building a developer tool after identifying a key point of friction in the development pipeline. Ultimately, we were able to create our initial application–in 6 lines of code on top of our new API.
**3. Optimizing our code.**
When we initially created our data processing and embedding pipeline, it was fairly slow. However, through a combination of systems optimizations, we were able to achieve 10x speedups over our original approach.
**4. Creating cloud architecture.**
We built and configured a server to run the ArXiv embedding pipeline in perpetuity until all papers are embedded. In addition, we created a different server that fully manages our backend database infrastructure.
## What we learned
Over the course of Treehacks, we learned a tremendous amount about the development process of LLM applications. We delved deeply into exploring the tradeoffs between different tools and architectures. Having a wide variety of technical requirements in our own project, we were able to explore and learn more about these tradeoffs. In addition, we gained experience in applying optimization strategies. On the one hand, we optimized our data processing on a systems level. On the other hand, we optimized our retrieval accuracy by applying different chunking and embedding strategies. Overall, we have gained a much greater appreciation for the problem of data management for RAG-based applications and for the LLM application ecosystem as a whole.
## What's next for Blanket.ai
After Treehacks, we want to start working closely with LLM application developers to better understand their data and infrastructure needs. In addition, we plan to embed the full texts of all of ArXiv’s (approximately 3 million) research papers into a database accessible through our API to any developer. To do so, we aim to make the API production-ready, decreasing response times, increasing throughput capabilities, and releasing documentation. Furthermore, we want to spread the word about the Blanket API by advertising on forums and developer meetups. Finally, we aim to build widely-available databases for data in other verticals, such as legal, health, and education.
|
winning
|
## Inspiration
In one calendar year, approximately 1 in 6 children are sexually victimized within the United States. Unfortunately, technology enabling instant messaging and social media has been identified as a large source that many of these grave events trace back to. With this information, we knew that supporting efforts to undermine sexual predators was a must, and one that could be advanced through machine learning and blockchain technologies, combined with an easy-to-use user interface.
## What it does
This app takes as input a sentence, phrase, or a few words from a conversation and, using text analysis and machine learning, determines whether the dialogue in the conversation may potentially be considered harassment. If so, the input transcript is stored on a blockchain, which can then generate a report that can be reviewed and signed by authorities to verify the harassment claim; this becomes proof for any subsequent claim of abuse or harassment.
## How we built it
1: scraping the web for dialogue and conversation data
2: extracting raw chat logs using STDLib from perverted justice (to catch a predator NBC series) archives which resulted in actual arrests and convictions (600+ convictions)
3: curating scraped and extracted data into a labelled dataset
4: building a neural network (3 layers, 40 neurons)
5: using the nltk toolkit to extract keywords, stems and roots from the corpus
6: sanitizing input data
7: training neural network
8: evaluating neural network and retraining with modified hyperparameters
9: curating and uploading dataset to google containers
10: setup automl instance on google cloud
11: train a batch of input corpora with automl
12: evaluate model, update overall corpus and retrain automl model
13: create a blockchain to store immutable and verified copies of the transcript along with author
14: wrap machine learning classifiers around with flask server
15: attach endpoints of blockchain service as pipelines from classifiers.
16: setup frontend for communication and interfacing
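As a rough illustration of steps 4-7 above (the file name, vectorizer, and hyperparameters are assumptions, not our exact setup):

```
# Sketch: stem the chat corpus with NLTK, vectorize it, and train a small network.
import pandas as pd
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

stemmer = PorterStemmer()

def preprocess(text: str) -> str:
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

data = pd.read_csv("labelled_chats.csv")        # hypothetical columns: text, is_harassment
X = CountVectorizer().fit_transform(data["text"].map(preprocess))
y = data["is_harassment"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(40, 40, 40), max_iter=500)   # 3 layers, 40 neurons
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```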
## Challenges we ran into
Extracting and curating raw conversation data is slow, tedious and cumbersome. To do this well, a ton of patience is required.
The ARK blockchain does not have smart contracts fully implemented yet. We used some shortcuts and hacky tricks, but ideally the harassment reports would be generated using a Solidity-like contract on the blockchain.
Google's AutoML, although promising, takes a very long time to train a model (~7 hours for one model).
There is a serious paucity of publicly available social media interaction dialogue corpora, especially for one-to-one conversations. Those that are publicly available often have many labeling, annotation and other errors which are challenging to sanitize. Google Cloud SDK libraries, especially for newer products like AutoML, often have conflicts with earlier versions of the Google Cloud SDK (at least from what we saw using the Python SDK).
## Accomplishments that we're proud of
Cross-validation gave our model a very high score on the test set. However, there needs to be much more data from a generic (non-abuse/harassment) conversation corpus, as the model seems "eagerly" biased towards the harassment label.
tl;dr: the model works for almost all phrases we considered to be "harassment".
The scraper and curation code for the Perverted Justice transcripts are now publicly available functions on STDLib; these can be used for future research and development work.
## What we learned
Scraping, extracting and curating data actually consumes most of the time in a machine learning project.
## What's next for To Blockchain a Predator
Integration with current chat interfaces like Facebook Messenger, WhatsApp, Instagram, etc. An immutable record of possibly harassing messages, especially to children using these platforms, is a very useful tool to have, especially with the increasing prevalence of sexual predators using social media to interact with potential victims.
## Video Link
<https://splice.gopro.com/v?id=bJ2xdG>
|
## Inspiration:
The inspiration for this project was finding a way to incentivize healthy activity. While the watch shows people data like steps taken and calories burned, that alone doesn't encourage many people to exercise. By making the app, we hope to make exercise into a game that people look forward to doing rather than something they dread.
## What it does
Zepptchi is an app that gives the user their own virtual pet to take care of, similar to a Tamagotchi. The watch tracks the steps that the user takes and rewards them with points depending on how much they walk. With these points, the user can buy food to nourish their pet, which incentivizes exercise. Beyond this, they can earn points to customize the appearance of their pet, which further promotes healthy habits.
## How we built it
To build this project, we started by setting up the environment on the Huami OS simulator on a Macbook. This allowed us to test the code on a virtual watch before implementing it on a physical one. We used Visual Studio Code to write all of our code.
## Challenges we ran into
One of the main challenges we faced with this project was setting up the environment to test the watch's capabilities. Out of the 4 of us, only one could successfully install it. This was a huge setback for us since we could only write code on one device. This was worsened by the fact that the internet was unreliable so we couldn't collaborate through other means. One other challenge was
## Accomplishments that we're proud of
Our group was most proud of solving the issue where we couldn't get an image to display on the watch. We had been trying for a couple of hours to no avail but we finally found out that it was due to the size of the image. We are proud of this because fixing it showed that our work hadn't been for naught and we got to see our creation working right in front of us on a mobile device. On top of this, this is the first hackathon any of us ever attended so we are extremely proud of coming together and creating something potentially life-changing in such a short time.
## What we learned
One thing we learned is how to collaborate on projects with other people, especially when we couldn't all code simultaneously. We learned how to communicate with the one who *was* coding by asking questions and making observations to get to the right solution. This was much different than we were used to since school assignments typically only have one person writing code for the entire project. We also became fairly well-acquainted with JavaScript as none of us knew how to use it(at least not that well) coming into the hackathon.
## What's next for Zepptchi
The next step for Zepptchi is to include a variety of animals/creatures for the user to have as pets along with any customization that might go with it. This is crucial for the longevity of the game since people may no longer feel incentivized to exercise once they obtain the complete collection. Additionally, we can include challenges(such as burning x calories in 3 days) that give specific rewards to the user which can stave off the repetitive nature of walking steps, buying items, walking steps, buying items, and so on. With this app, we aim to gamify a person's well-being so that their future can be one of happiness and health.
|
## Inspiration
CereStyle was born out of a personal challenge that all of us have faced—struggling to choose clothes that truly complement our looks and define our fashion sense. We often found ourselves unsure of what colors suited us best, leading to a lack of confidence in our wardrobe choices. In one of our classes, we learned about the power of color theory and how it can be used to highlight natural features by selecting the right colors. This sparked the idea to create a solution using AI, blending color theory with technology to solve a real-life problem. CereStyle aims to help thousands of people who face the same challenge we did, guiding them to discover the perfect fashion style that enhances their confidence.
## What it does
CereStyle is an AI-powered fashion assistant that leverages color theory to provide personalized clothing recommendations. Users upload a photo of themselves, and the platform analyzes their skin tone and eye color to suggest outfits that enhance their natural features. The app provides tailored suggestions for casual, professional, and special occasions, helping users feel more confident in every setting.
## How we built it
We built CereStyle using a combination of front-end and back-end technologies. The image analysis is powered by the Cerebras API, which processes the uploaded photos to identify skin tone, hair color, and eye color. The backend is developed using Node.js, Flask, and Python to handle user requests and integrate the AI features, while the front-end is built using React, HTML, CSS, and JavaScript, providing a sleek, responsive user interface. We also integrated fashion retailer APIs to source real-time clothing recommendations that match the user's profile.
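For illustration only, a simple way to pull a dominant colour out of a face crop is k-means clustering with OpenCV; this is not the Cerebras-powered analysis described above, just a sketch of the underlying idea:

```
# Illustrative sketch: dominant colour of an image via k-means clustering.
import cv2
import numpy as np

def dominant_color(image_path, k=3):
    img = cv2.imread(image_path)
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    counts = np.bincount(labels.flatten())
    b, g, r = centers[counts.argmax()].astype(int)   # OpenCV stores pixels as BGR
    return int(r), int(g), int(b)

print(dominant_color("selfie.jpg"))   # e.g. (201, 164, 140)
```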
## Challenges we ran into
One of the major challenges we encountered was ensuring the accuracy of skin tone detection. Variations in lighting and image quality made it difficult to standardize results across different users. Another challenge was integrating multiple APIs, each with its own set of authentication protocols, data structures, and response times. Additionally, fine-tuning the color theory algorithms to provide recommendations that are both accurate and fashionable required extensive testing and refinement.
## Accomplishments that we're proud of
We’re proud of successfully merging AI and color theory into a user-friendly platform that provides real, actionable fashion advice. Overcoming the technical challenges of integrating the Cerebras API and ensuring smooth interaction between multiple APIs was a significant achievement. Most importantly, we’ve created a tool that can genuinely help users feel more confident by understanding their personal style.
## What we learned
Throughout this project, we deepened our understanding of color theory and how it relates to fashion. We also gained hands-on experience working with AI-powered image analysis and integrating complex APIs. The project taught us the importance of user experience design, as we needed to make sure the platform was intuitive and accessible for all users. Additionally, balancing technical constraints with fashion-forward recommendations was a key learning experience.
## What's next for CereStyle
Looking ahead, we plan to enhance CereStyle by integrating even more advanced AI models to improve the accuracy of our color recommendations. We aim to partner with additional fashion retailers to broaden our range of product suggestions. Moreover, we want to introduce features that promote sustainable fashion by recommending eco-friendly clothing options. Our goal is to keep refining CereStyle to better serve users in their journey to find their unique style.
|
partial
|
## Inspiration
We were really excited to hear about the self-driving bus Olli using IBM's Watson. However, one of our grandfathers is rather forgetful due to his dementia, and because of this would often forget things on a bus if he went alone. Memory issues like this would prevent him, and many people like him, from taking advantage of the latest advancements in public transportation, and prevent him from freely traveling even within his own community.
To solve this, we thought that Olli and Watson could work to take pictures of luggage storage areas on the bus, and if it detected unattended items, alert passengers, so that no one would forget their stuff! This way, individuals with memory issues like our grandparents can gain mobility and be able to freely travel.
## What it does
When the bus stops, we use a light sensitive resistor on the seat to see if someone is no longer sitting there, and then use a camera to take a picture of the luggage storage area underneath the seat.
We send the picture to IBM's Watson, which checks to see if the space is empty, or if an object is there.
If Watson finds something, it identifies the type of object, and the color of the object, and vocally alerts passengers of the type of item that was left behind.
## How we built it
**Hardware**
Arduino - Senses whether there is someone sitting based on a light sensitive resistor.
Raspberry Pi - Processes whether it should take a picture, takes the picture, and sends it to our online database.
**Software**
IBM's IoT Platform - Connects our local BlueMix on Raspberry Pi to our BlueMix on the Server
IBM's Watson - to analyze the images
Node-RED - The editor we used to build our analytics and code
## Challenges we ran into
Learning IBM's Bluemix and Node-RED was a challenge all members of our team faced. The software that ran in the cloud and that ran on the Raspberry Pi were both coded using these systems. It was exciting to learn these tools, even though it was often challenging.
Getting information to properly reformat between a number of different systems was challenging. From the 8-bit Arduino, to the 32-bit Raspberry Pi, to our 64-bit computers, to the ultra powerful Watson cloud, each needed a way to communicate with the rest and lots of creative reformatting was required.
## Accomplishments that we're proud of
We were able to build a useful internet of things application using IBM's APIs and Node-RED. It solves a real world problem and is applicable to many modes of public transportation.
## What we learned
Across our whole team, we learned:
* Utilizing APIs
* Node-RED
* BlueMix
* Watson Analytics
* Web Development (html/ css/ js)
* Command Line in Linux
|
## Inspiration
There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life.
## What it does
We utilized a smartphone camera to analyze the surroundings and warn visually impaired people about obstacles in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with, and we stored their names and facial attributes so they can be recalled later. An Alzheimer's patient can use the face recognition feature to be reminded of who a person is and when they last saw them.
## How we built it
We built our app around **Azure's APIs**, we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it.
## Challenges we ran into
Through the process of coding and developing our idea, we ran into several technical difficulties. Our first challenge was to design a simple UI, so that visually impaired people could use it effectively without getting confused. The next challenge was grabbing frames from the camera feed and running them through the Azure services fast enough to get a quick response. Another challenging task was creating and training our own neural network with relevant data.
## Accomplishments that we're proud of
We are proud of several accomplishments throughout our app. First, we are especially proud of setting up a clean UI with two gestures, and voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network, that was capable of identifying faces and objects.
## What we learned
We learned how to implement **Azure Custom Vision and Azure Face APIs** into **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for the rest of us to learn about neural networking.
## What's next for BlindSpot
In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device.
|
## Inspiration
Metaverse, Vision Pro, spatial video. There's no doubt that 3D content is the future. But how can I enjoy or make 3D content without spending over 3K? Or strapping massive goggles to my head? Let's be real, wearing a Vision Pro while recording your child's birthday party is pretty [dystopian.](https://youtube.com/clip/UgkxXQvv1mxuM06Raw0-rLFGBNUqmGFOx51d?si=nvsDC3h9pz_ls1sz) And spatial video only gets you so far in terms of being able to interact; it's more like a 2.5D video with only a little bit of depth.
How can we relive memories in 3d without having to buy new hardware? Without the friction?
Meet 3dReal, where your best memories got realer. It's a new feature we imagine being integrated in BeReal, the hottest new social media app that prompts users to take an unfiltered snapshot of their day through a random notification. When that notification goes off, you and your friends capture a quick snap of where you are!
The difference with our feature is based on this idea where if you have multiple images of the same area ie. you and your friends are taking BeReals at the same time, we can use AI to generate a 3d scene.
So if the app detects that you are in close proximity to your friends through bluetooth, then you’ll be given the option to create a 3dReal.
## What it does
With just a few images, the AI-powered Neural Radiance Fields (NeRF) technology produces an AI reconstruction of your scene, letting you keep your memories in 3D. NeRF is great in that it only needs a few input images from multiple angles, taken at nearly the same time, all of which is the core mechanism behind BeReal anyways, making it a perfect application of NeRF.
So what can you do with a 3dReal?
1. View in VR, and be able to interact with the 3D mesh of your memory. You can orbit, pan, and modify how you see this moment captured in the 3dReal
2. Since the 3d mesh allows you to effectively view it however you like, you can do really cool video effects like flying through people or orbiting people without an elaborate robot rig.
3. TURN YOUR MEMORIES INTO THE PHYSICAL WORLD - one great application is connecting people through food. When looking through our own BeReals, we found that a majority of group BeReals were when getting food. With 3dReal, you can savor the moment by reconstructing your friends + food, AND you can 3D print the mesh, getting a snippet of that moment forever.
## How it works
Each of the phones using the app has a countdown then takes a short 2-second "video" (think of this as a live photo) which is sent to our Google Firebase database. We group the videos in Firebase by time captured, clustering them into a single shared "camera event" as a directory with all phone footage captured at that moment. While one camera would not be enough in most cases, by using the network of phones to take the picture simultaneously we have enough data to substantially recreate the scene in 3D. Our local machine polls Firebase for new data. We retrieve it, extract a variety of frames and camera angles from all the devices that just took their picture together, use COLMAP to reconstruct the orientations and positions of the cameras for all frames taken, and then render the scene as a NeRF via NVIDIA's instant-ngp repo. From there, we can export, modify, and view our render for applications such as VR viewing, interactive camera angles for creating videos, and 3D printing.
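As an illustration of the grouping step described above (not the team's actual code), here is a minimal Python sketch that clusters captures into shared "camera events" by time. The record layout, field names, and 5-second window are our own assumptions, not the real Firebase schema.

```python
from datetime import datetime, timedelta

# Hypothetical capture records pulled from Firebase: (device_id, capture_time, video_path)
captures = [
    ("phone_a", datetime(2023, 10, 1, 12, 30, 0), "videos/a.mov"),
    ("phone_b", datetime(2023, 10, 1, 12, 30, 1), "videos/b.mov"),
    ("phone_c", datetime(2023, 10, 1, 12, 30, 2), "videos/c.mov"),
    ("phone_a", datetime(2023, 10, 1, 18, 5, 0), "videos/a2.mov"),
]

def group_into_camera_events(captures, window=timedelta(seconds=5)):
    """Cluster captures taken within `window` of each other into shared camera events."""
    events = []
    for record in sorted(captures, key=lambda r: r[1]):
        # Start a new event if this capture is too far from the last event's start time
        if not events or record[1] - events[-1][0][1] > window:
            events.append([record])
        else:
            events[-1].append(record)
    return events

for i, event in enumerate(group_into_camera_events(captures)):
    print(f"camera_event_{i}: {[path for _, _, path in event]}")
```

Each resulting event directory would then be handed to COLMAP and the NeRF renderer as one scene.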
## Challenges we ran into
We lost our iOS developer team member right before the hackathon (he's still goated, just unfortunate with school work) and our team was definitely not as strong as him in that area. Some compromises on functionality were made for the MVP, so we focused on core features like getting images from multiple phones to export the cool 3dReal.
There were some challenges with splicing the videos for processing into the NeRF model as well.
## Accomplishments that we're proud of
Working final product and getting it done in time - very little sleep this weekend!
## What we learned
A LOT of things out of all our comfort zones - Sunny doing iOS development and Phoebe working outside of hardware was very left field, so lots of learning was done this weekend. Alex learned lots about NeRF models.
## What's next for 3dReal
We would love to refine the user experience and also improve our implementation of NeRF - instead of generating a static mesh, our team thinks with a bit more time we could generate a mesh video which means people could literally relive their memories - be able to pan, zoom, and orbit around in them similar to how one views the mesh.
BeReal pls hire 👉👈
|
partial
|
## Inspiration
In the time of corona, it can be great to interact with friends over group video chat, and especially to play games. Similarly to Cards Against Humanity or other collaborative games, we thought it would be great to implement Mafia in a virtual setting.
## What it does
Mafia, or variants like One Night Werewolf, involve several players with secret roles including mafia, detectives, protectors, and townspeople. The goal is to have a central game room from which events are announced, while keeping secret events hidden, such as the mafia gathering to decide on a victim or the detectives meeting to choose suspects. Thus, some players' video and audio need to be turned on periodically without others knowing.
## How I built it
We used Twilio's programmable video API and group rooms with React.JS to host the game rooms and implement the game logic.
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for PennAppsMafia
|
## Inspiration
The inspiration behind MyStock derives from our own personal experience and views on investing. Both of us are looking to invest in stocks in the future; however, we don't know much about the investing world or the stock market. We've also noticed that some of our family and friends are in similar situations. We wanted to create a project that is something we can potentially use to benefit ourselves, as the project has that sort of personal connection. The concept of MyStock itself was also inspired by articles on topics that we've both shown interest in.
## What it does
Our program uses the Yahoo Finance API along with other libraries to run analysis on a number of specified stocks, and finds each stock's volatility and safety by comparing daily returns over the past month or year. Our program then sorts the stocks into most and least risky based on each stock's variance and asks the user whether they prefer risky or safe stocks.
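As a rough illustration of that analysis (not the team's actual code), the sketch below pulls a month of prices via the community yfinance wrapper around the Yahoo Finance API and ranks tickers by the variance of their daily returns. The ticker list and period are placeholders.

```python
import yfinance as yf  # community wrapper around the Yahoo Finance API (an assumption here)

tickers = ["AAPL", "MSFT", "TSLA"]  # placeholder watchlist

def daily_return_variance(ticker, period="1mo"):
    """Variance of daily returns over the chosen period (a simple risk proxy)."""
    closes = yf.download(ticker, period=period, progress=False)["Close"].squeeze()
    return closes.pct_change().dropna().var()

# Lower variance -> "safer"; higher variance -> "riskier"
ranked = sorted(tickers, key=daily_return_variance)
print("Safest to riskiest:", ranked)
```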
## How we built it
The project is programmed in Python and was built in Google Colab. We used a variety of libraries and APIs to extract data from current, real-world stocks, compile the data, and create our own program using our technical skills in Python.
## Challenges we ran into
One challenge we ran into was working with LSTM (Long Short-Term Memory) networks; we didn't end up using them since our attempts were unsuccessful, though we came close. An idea also came up to use Monte Carlo simulations, but we came up short with that as well because we couldn't get it to run.
## Accomplishments that we're proud of
We're proud of pushing ourselves beyond our comfort zone by learning new libraries, running code we've never run before, and trying out a whole range of things. We're also proud of how much our coding has improved since our previous hackathons, and of how we were able to be more organized with our time.
## What we learned
Our experience with programming MyStock led us to investigate and discover a handful of new topics such as machine learning and deep learning APIs like Keras, stock and market analysis, and Monte Carlo simulation models. This project also helped us refine our own technical skills in Python programming, along with improving our soft skills as we continued to communicate with each other throughout the programming process.
## What's next for MyStock
The next steps for MyStock include creating a front-end web application or a mobile app for our program to create an organized, practical, and user-friendly product. MyStock also looks to improve scaling, such as the number of stocks on the market it can take as input and the number of simulations it can run, to create more accurate results.
|
## Inspiration
We were inspired by websites such as backyard.co which allow users to have video chats and play various games together. However, one of the main issues with websites such as these, or any video chat room such as Zoom, is that people are reluctant to turn on their video. To combat this issue, we wanted to create a similar website that encourages people to turn on their video cameras by making games that heavily rely on video to function.
## What it does
Right now, the website only allows for the creation of multiple rooms, each room allowing up to 200 participants to join and share screen, use the chat box, and of course, share video and audio.
## How we built it
We used a combination of a JavaScript API, a React frontend, and a Node/Express backend. We connected to CockroachDB in hopes of storing active user sessions. We also used Heroku to deploy the site. To get the videos to work we used the Daily.co API.
## Challenges we ran into
One of the earliest challenges we ran into was learning how to use the Daily.co API. Connecting it to the Express server we created and connecting the server to the front end took a good portion of our time. The biggest challenge we ran into, however, was using CockroachDB. We had many issues just connecting to the database, and seeing as neither of us had any prior knowledge or experience with CockroachDB, we were unable to get more use out of the database in the time given.
## Accomplishments that we're proud of
Setting up the video call and chat system using the Daily.co API. We also set up the infrastructure to expand our app with games.
## What we learned
As this was our first time using CockroachDB, React, and Express, we learned a lot about developing a full-stack project and using APIs. We learned how to connect a backend server to the front end, how to connect to the database, and how to deploy with Heroku.
## What's next for bestdomaingetsfree.tech
Our next steps would be configuring the database to store active sessions and to implement the games we have created.
We purchased the domain name but domain.com had a review process we have to wait for so at the time of submission the domain is not working.
|
losing
|
## Inspiration
We wanted to be able to connect with mentors. There are very few opportunities to do that outside of LinkedIn, where many of the mentors are in fields foreign to our interests.
## What it does
A networking website that connects mentors with mentees. It uses a weighted matching algorithm based on mentors' specializations and mentees' interests to prioritize matches.
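As a minimal sketch of how such a weighted match score might look (the field names and weights below are our own illustration, not Pyre's actual algorithm):

```python
def match_score(mentor, mentee, specialization_weight=2.0, interest_weight=1.0):
    """Score a mentor/mentee pair by overlap between specializations and interests."""
    spec_overlap = len(set(mentor["specializations"]) & set(mentee["interests"]))
    shared_topics = len(set(mentor.get("topics", [])) & set(mentee.get("topics", [])))
    return specialization_weight * spec_overlap + interest_weight * shared_topics

mentor = {"specializations": ["machine learning", "career advice"], "topics": ["startups"]}
mentee = {"interests": ["machine learning"], "topics": ["startups", "grad school"]}
print(match_score(mentor, mentee))  # higher-scoring pairs are prioritized when matching
```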
## How we built it
Google Firebase is used for our NoSQL database which holds all user data. The other website elements were programmed using JavaScript and HTML.
## Challenges we ran into
There was no suitable matching algorithm module on Node.js that did not have version mismatches so we abandoned Node.js and programmed our own weighted matching algorithm. Also, our functions did not work since our code completed execution before Google Firebase returned the data from its API call, so we had to make all of our functions asynchronous.
## Accomplishments that we're proud of
We programmed our own weighted matching algorithm based on interest and specialization. Also, we refactored our entire code to make it suitable for asynchronous execution.
## What we learned
We learned how to use Google Firebase, Node.js and JavaScript from scratch. Additionally, we learned advanced programming concepts such as asynchronous programming.
## What's next for Pyre
We would like to add interactive elements such as integrated text chat between matched members. Additionally, we would like to incorporate distance between mentor and mentee into our matching algorithm.
|
## 💡Inspiration
Gaming is often associated with sitting for long periods of time in front of a computer screen, which can have negative physical effects. In recent years, consoles such as the Kinect and Wii have been created to encourage physical fitness through games such as "Just Dance". However, these consoles are simply incompatible with many of the computer and arcade games that we love and cherish.
## ❓What it does
We came up with Motional at HackTheValley wanting to create a technological solution that pushes the boundaries of what we’re used to and what we can expect. Our product, Motional, delivers on that by introducing a new, cost-efficient, and platform-agnostic solution to universally interact with video games through motion capture, and reimagining the gaming experience.
Using state-of-the-art machine learning models, Motional can detect over 500 features on the human body (468 facial features, 21 hand features, and 33 body features) and use these features as control inputs to any video game.
Motional operates in 3 modes: using hand gestures, face gestures, or full-body gestures. We ship certain games out-of-the-box such as Flappy Bird and Snake, with predefined gesture-to-key mappings, so you can play the game directly with the click of a button. For many of these games, jumping in real-life (body gesture) /opening the mouth (face gesture) will be mapped to pressing the "space-bar"/"up" button.
However, the true power of Motional comes with customization. Every single possible pose can be trained and clustered to provide a custom command. Motional will also play a role in creating a more inclusive gaming space for people with accessibility needs, who might not physically be able to operate a keyboard dexterously.
## 🤔 How we built it
First, a camera feed is taken through Python OpenCV. We then use Google's Mediapipe models to estimate the positions of the features of our subject. To learn a new gesture, we first take a capture of the gesture and store its feature coordinates generated by Mediapipe. Then, for future poses, we compute a similarity score using euclidean distances. If this score is below a certain threshold, we conclude that this gesture is the one we trained on. An annotated image is generated as an output through OpenCV. The actual keyboard presses are done using PyAutoGUI.
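A stripped-down sketch of that distance-and-threshold idea follows (the landmark format, values, and threshold are illustrative; the real pipeline works on Mediapipe landmark objects and triggers PyAutoGUI key presses):

```python
import math

def gesture_distance(stored, current):
    """Mean Euclidean distance between corresponding (x, y, z) feature points."""
    dists = [math.dist(a, b) for a, b in zip(stored, current)]
    return sum(dists) / len(dists)

THRESHOLD = 0.05  # tuned empirically; illustrative value only

stored_gesture = [(0.10, 0.20, 0.0), (0.30, 0.40, 0.0), (0.50, 0.60, 0.0)]
live_frame     = [(0.11, 0.21, 0.0), (0.29, 0.41, 0.0), (0.50, 0.59, 0.0)]

if gesture_distance(stored_gesture, live_frame) < THRESHOLD:
    print("Gesture recognized -> press the mapped key (e.g. via pyautogui.press('space'))")
```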
We used Tkinter to create a graphical user interface (GUI) where users can switch between different gesture modes, as well as select from our current offering of games. We used MongoDB as our database to keep track of scores and make a universal leaderboard.
## 👨🏫 Challenges we ran into
Our team didn't have much experience with any of the stack before, so it was a big learning curve. Two of us didn't have a lot of experience in Python. We ran into many dependency issues and package import errors, which took a lot of time to resolve. When we initially were trying to set up MongoDB, we also kept timing out for weird reasons. But the biggest challenge was probably trying to write code while running on 2 hours of sleep...
## 🏆 Accomplishments that we're proud of
We are very proud to have been able to execute our original idea from start to finish. We managed to actually play games through motion capture, both with our faces, our bodies, and our hands. We were able to store new gestures, and these gestures were detected with very high precision and low recall after careful hyperparameter tuning.
## 📝 What we learned
We learned a lot, both from a technical and non-technical perspective. From a technical perspective, we learned a lot about the tech stack (Python + MongoDB + working with Machine Learning models). From a non-technical perspective, we got much better at working together as a team and dividing up tasks!
## ⏩ What's next for Motional
We would like to implement a better GUI for our application and release it for a small subscription fee, as we believe there is a market of people who would be willing to invest money in an application that helps them automate and speed up everyday tasks while providing the ability to play any game they want the way they would like. Furthermore, this could be an interesting niche market to help gamify muscle rehabilitation, especially for children.
|
# Tender - watch our video!
## Inspiration
Online dating is frustrating. This is because 1) dating strangers is overrated — we are looking in the wrong place and 2) current algorithms value quantity over quality. There are no apps that match people within your network. This is where Tender comes in.
## What it does
Find out who your crushes are in a college/network. We believe that your relationships should be intentional and private. Via search, our app allows you to select people in your school and, through a quadratic cost function, express how much you like them. We announce one mutual match per week with an algorithm that values mutually strong liking. We model love instead of gamifying it.
Each user will be given 20 tokens per week, and you can express how intensely you like someone through a quadratic voting algorithm. Watch our video for an explanation.
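To make the quadratic cost concrete, here is a tiny sketch of standard quadratic voting (our own illustration, not Tender's exact implementation): casting v votes for one person costs v² tokens, so spreading the 20-token budget thinly is cheap while concentrating it is expensive.

```python
WEEKLY_TOKENS = 20

def cost(votes):
    """Quadratic voting: casting `votes` for one person costs votes**2 tokens."""
    return votes ** 2

def max_votes(budget=WEEKLY_TOKENS):
    """Largest number of votes a user can put on a single crush in one week."""
    v = 0
    while cost(v + 1) <= budget:
        v += 1
    return v

print(cost(2), cost(4))   # 4 tokens for 2 votes, 16 tokens for 4 votes
print(max_votes())        # with 20 tokens, at most 4 votes on one person
```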
Additionally, we can invite people to the app by inserting their email address and we will send a private invite link to their .edu emails.
## How we built it
We learned and built the backend with MongoDB with a client interface using React and Redux.
## Challenges we ran into
We are proud to have built a functional backend and frontend from the ground up. However, we have bugs in the API that are preventing the two from connecting. Currently, the user can interact with other users, but the email functionality has not been implemented yet.
## Accomplishments that we're proud of
On the first day of the hackathon, we thought of the algorithm for matching two people. It is based on the minimum standard deviation between two people's likings. We spent a significant portion of time designing and refining this central algorithm.
Additionally, regarding the technical aspects, we learnt a lot about implementing a new database ecosystem: MongoDB. We also successfully implemented a React and Redux login system, storing encrypted user information in our MongoDB database.
## What we learned
We learned about MongoDB, Redux, and also how to quickly transform an idea into something that is usable.
## What's next for Tender
We intend to integrate this with existing social networks such as Instagram and LinkedIn.
Additionally, we think that this would be a great idea for a start-up aiming at university students and beyond!
|
partial
|
## Inspiration
Documenting and analyzing a crime scene is a very tedious and difficult task. There are many things that hinder a crime scene investigator from properly doing their job.
First off, photographs are a common method of documenting pieces of evidence in a crime. However, oftentimes, disjointed pieces of imagery do not give the investigators the full picture. It is possible that only a few photos were taken at the crime scene, and investigators later need close-ups of pieces of evidence that are no longer available to them.
## What it does
Detecto Mode is a mobile Augmented Reality (AR) crime scene annotation tool that allows investigators to spatially map out crime scenes and document pieces of evidence. This tool allows a crime scene investigator to:
* Spatially map the environment in real time, using AR
* Collaborate with other crime scene investigators to place notes, highlight important pieces of evidence in AR.
* Send collected data points from notes and spatial mapping to the cloud, to be processed at the police station.
## How we built it
* ARCore
* Google Cloud API
* C#
* Unity Engine
## Challenges we ran into
During this hackathon, we were using a lot of technology such as spatial mapping (photogrammetry) and networking. There were a lot of problems when it came to getting these technologies to work in Unity.
## Accomplishments that we're proud of
We were able to successfully combine two technologies that we as a team were completely unfamiliar with. In addition, we also made a polished user interface for the final product.
## What we learned
An important lesson we took away from this hackathon is to spend time understanding and quantifying a problem. Doing proper research will help inform design decisions. In addition, we also learned that time management is a key part of being able to complete a project on time. As a team, we tried to track progress and set milestones during development of the software we were making.
## What's next for Detecto Mode
We would explore the possibility of using technology such as Computer Vision/Machine Learning to have the software auto tag points of evidence. In addition, we would want to create a backend system that would parse the data collected by the crime scene investigators and create useful graphs and visualizations.
|
## Inspiration
The inspiration for our project mainly stems from past as well as recent events involving sexual harassment, robbery, violence, and several other forms of misconduct. The rate at which these misconducts happen is enormously high all over the world, and as students we receive 2-3 emails per day on average alerting us about robbery crimes. This brought us to thinking about a plausible solution to detect such crimes in real time and help the victims before damage is done, while alerting others in the nearby area.
## What it does
The objective of our project is to stream the videos recorded by surveillance cameras in real time as a dataset, feed them into our system, apply a Deep Learning model to detect suspicious activity or possible misconduct, and send alerts to nearby safety departments.
## How we built it
1. The Deep Learning model we thought best suited to detecting suspicious behavior was based on Convolutional Neural Networks, since a CNN makes the explicit assumption that the input is an image, and a video can be broken down into image frames.
2. The first neural network we used was convolutional, to extract high-level features and reduce input complexity. For this, we used a pretrained model called Inception, by Google. Since Inception is trained on ImageNet, which categorizes images into basic classes, we further used this model to apply the technique of transfer learning. This technique classifies the videos into one of three categories, namely criminal activity, potentially suspicious, and safe (a minimal Keras sketch of this setup follows).
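The sketch below illustrates the transfer-learning setup from step 2, with a frozen InceptionV3 backbone and a new 3-class head. The class count comes from the write-up; the head layers, input size, and hyperparameters are generic assumptions, and in practice each video frame would be fed through this model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Pretrained Inception backbone, with its ImageNet classifier head removed
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained feature extractor frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # criminal / potentially suspicious / safe
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```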
## Challenges we ran into
The biggest hurdle we faced was using Inception to implement our first network. We figured out two approaches: one was to construct the model using TensorFlow, but we noticed the documentation was lacking; the second was to use an existing API, but it didn't serve our purpose of extracting features from the input.
## Accomplishments that we're proud of
This was our first time dabbling with Deep Learning models, and we were able to get acquainted with which models are best suited to solving our problem. There are plenty of algorithms for video/image classification, but choosing the model best suited to our specific problem is a crucial step.
## What we learned
We learned a ton about Deep Learning models such as CNNs and RNNs (LSTM), transfer learning, and Python libraries such as TensorFlow.
## What's next for Eagle-eye
This was an ambitious project to implement in a 36-hour hackathon. We were able to work out the methodology to solve the problem, but were short on time to provide a fully functional solution. The next step for Eagle-eye is to implement a complete system that can detect safety threats in videos in real time and fulfill our objective.
|
## Inspiration
With COVID-19, the world has been forced to stay safe inside their homes and avoid social contact, a measure which has taken a noticeable toll on everyone’s mental well being. With All of the Lights, individuals can connect in new and fun ways with products that they likely already own - RGB light strips.
## What it does
All of the Lights is a web-enabled LED strip control system. It allows friends to synchronize their lights and remotely participate in each other's lives.
**Note:** The devices made for this project use only a short LED strip as proof of concept. In real use, the device would be mounted with the user's LED strip that typically runs around the perimeter of their ceiling, controlling the lights for an entire room.
Users access our web app to choose a different light pattern depending on whether they want to study together, party, or just chill. Each All of the Lights device is updated with the new pattern, immediately changing everyone's lights.
All of the Lights has several different modes or patterns, including:
* White Light (On or Off)
* Slow colour fading for vibing
* Fast colour jumping for parties
* Custom colour patterns (such as Blue-Orange fading)
* Pomodoro Study Mode
With the Pomodoro Study Mode, users can use their LED lights as a way to boost their productivity by changing colour when they should take a break from studying, then returning to the original colour to notify the user to resume studying.
## How we built it
All of the Lights is primarily a hardware hack. We began with a rough device circuit diagram to determine the necessary components and used CAD to design an enclosure to be 3D printed. While waiting for the prints, we split up to work on the two major components: creating a circuit to control high-power LEDs and interfacing between Raspberry Pi's to synchronize the devices.
The control circuit uses an ATtiny84 microcontroller to drive 3 MOSFET transistors which adjust the brightness of each 12V RGB channel. This utilizes Pulse Width Modulation (PWM) to access the entire range of colour values. To control the light patterns, the Raspberry Pi sends a 32 bit serial packet to the ATtiny. This packet contains the red, green, and blue values, as well as information about whether the colours should fade or not and the duration of the current pattern element. Using a system inspired by floating point integers, an accurate duration between 10 milliseconds and 3 hours can be specified using just 9 bits.
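To illustrate the floating-point-style duration idea, here is a sketch of one plausible 9-bit encoding (5-bit mantissa, 4-bit exponent, 10 ms base unit), which covers roughly 10 ms up to about 2.8 hours with coarser resolution at the long end. This split is our own guess; the team's exact packet format may differ.

```python
def encode_duration(ms):
    """Encode a duration (ms) into 9 bits: 5-bit mantissa + 4-bit exponent,
    duration ~= mantissa * 10 ms * 2**exponent. One plausible split only."""
    units = max(1, round(ms / 10))          # base resolution of 10 ms
    exponent = 0
    while units > 31 and exponent < 15:     # shift down into the 5-bit mantissa range
        units = (units + 1) // 2
        exponent += 1
    return (min(units, 31) << 4) | exponent

def decode_duration(bits):
    mantissa, exponent = bits >> 4, bits & 0xF
    return mantissa * 10 * (2 ** exponent)  # milliseconds

for ms in (10, 500, 60_000, 3 * 60 * 60 * 1000):
    enc = encode_duration(ms)
    print(f"{ms:>10} ms -> 0b{enc:09b} -> ~{decode_duration(enc)} ms")
```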
All of the Lights supports several nodes in the local network using Python threading and sockets combined with Flask to submit GET requests from the localhost. One Raspberry Pi is used as the server node, which retrieves a string from the Flask server containing information about desired light pattern. The server Pi supports multiple client Pi’s to join its network and updates each with the pattern data upon a new POST to the server. The clients and the server all send a serial message to the ATTiny on the LED driver board to change the light colours.
## Challenges we ran into
With the tight time constraints of this hackathon, waiting for 3D prints to finish could be the difference between completing a product and not. To avoid this, we had to design our 3D-printed case before having a concrete list of parts that would be enclosed. This required making intelligent design decisions to estimate how parts would eventually fit together in the case, without being too tight or oversized.
The serial communication between the Raspberry Pi web client and the ATtiny LED driver board was made difficult by the different logic levels of the two devices. A voltage step-up circuit was needed to convert the 3.3V serial output from the Pi to a 5V serial input for the ATtiny. This required several prototype circuits that tried using diodes or MOSFETs, but the final solution uses a double bipolar transistor inverter to accomplish the step-up.
With the current system, one of the All of the Lights devices acts as both a client to the web app and a server to the other devices. This means that it must simultaneously fetch data from the web app, relay this information to each client, and control its own LEDs via serial. Organizing all of these concurrent tasks required lots of integration testing to get right.
## Accomplishments that we're proud of
We focused heavily on modularizing both the hardware and software components of this project to facilitate future development. This was a rewarding endeavour as we got to see all of the modules, such as the LED driver board, power circuit and LED strip being seamlessly integrated.
As a project that required many interactions between hardware and software, there were many challenges and bugs during the Hackathon. However, after finally fixing all of the issues, it was a great accomplishment to see a physical, real world device behaving exactly as we had designed, even if that meant pulling an all-nighter to see it work at 7:30am! We are especially excited about this device since we intend on further developing All of the Lights for us and friends to use.
## What we learned
One of the main features of this project is the various device interactions. We learned how to use sockets to interface between Raspberry Pi's, how to collect information from a web server with Flask, and how to communicate over serial between devices with different logic levels.
We also improved our engineering soft skills, primarily teamwork and communication. Throughout the competition, our team members frequently discussed the objective of each component of the project, allowing us to work in parallel and design hardware or code that would be relatively easy to integrate later on.
## What's next for All of the Lights
With all of the technical groundwork complete, All of the Lights possesses the necessary hardware and software foundation to expand and create more intricate and useful LED patterns. The localhost server is a crucial aspect of the build, and currently allows people in the same household to connect to and control the lights from any browser. The server will eventually be deployed to the web, allowing people to connect their lights from anywhere in the world. Additionally, All of the Lights will allow users to use the Spotify API to synchronize music on a device with their LED lights. Finally, more productivity features will be implemented to allow users to structure their day. All of the Lights will launch a custom alarm setting, and let users be naturally woken with lights simulating the sunrise. Thanks to its modular design, launching custom settings on a device has never been easier!
|
losing
|
## Inspiration
As the world progresses into the digital age, there is a huge simultaneous focus on creating various sources of clean energy that are sustainable and affordable. Unfortunately, there is minimal focus on ways to sustain the increasingly rapid production of energy. Energy is wasted every day as utility companies oversupply power to certain groups of consumers.
## What It Does
Thus, we bring you Efficity, a device that helps utility companies analyze and predict the load demand of a housing area. By leveraging the expanding, ubiquitous arrival of Internet of Things devices, we can access energy data in real time. Utility companies could then estimate the ideal power to supply to a housing area while still satisfying the load demand. With this, less energy is wasted, improving energy efficiency. On top of that, everyday consumers can also have easy access to their own personal usage for tracking.
## How We Built It
Our prototype is built primarily around a Dragonboard 410c, where a potentiometer is used to represent the varying load demand of consumers. By using the analog capabilities of a built-in Arduino (ATMega328p), we can calculate the power consumed by the load in real time. A Python script is then run on the Dragonboard to receive the data from the Arduino through serial communication. The Dragonboard further complements our design with built-in WiFi capabilities. With this in mind, we can send HTTP requests to a web server hosted by energy companies. In our case, we explored sending this data to a free IoT platform web server, which allows a user anywhere to track energy usage and perform analytics, for example using MATLAB. In addition, the Dragonboard comes with a fully usable GUI and a compatible HDMI monitor for users who are less familiar with command-line controls.
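A rough sketch of the Dragonboard side of that flow is shown below; the serial port path, baud rate, payload format, and endpoint URL are all placeholders rather than the team's actual configuration.

```python
import serial      # pyserial, to read the Arduino's power measurements
import requests    # to push readings to the IoT platform / utility web server

PORT = "/dev/ttyUSB0"                                         # placeholder serial device
ENDPOINT = "https://example-iot-platform.test/api/readings"   # placeholder URL

with serial.Serial(PORT, 9600, timeout=2) as link:
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            power_watts = float(line)  # Arduino assumed to print one numeric reading per line
        except ValueError:
            continue
        requests.post(ENDPOINT, json={"power_w": power_watts}, timeout=5)
```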
## Challenges We Ran Into
There were many challenges throughout the hackathon. First, we had trouble grasping the operation of a Dragonboard. The first 12 hours were spent only on learning how to use the device itself—it also did not help that our first Dragonboard was defective and did not come with a pre-flashed operating system! Next time, we plan to ask more questions early on rather than fixating on problems we believed were trivial. Next, we had a hard time coding the Wi-Fi functionality of the DragonBoard. This was largely due to the lack of expertise in the area among most members. For future reference, we find it advisable to have a greater diversity of team members to facilitate faster development.
## Accomplishments That We're Proud Of
Overall, we are proud of what we have achieved, as this was our first time participating in a hackathon. Our team ranged from first-year all the way to fourth-year students! From learning how to operate the Dragonboard 410c to having hands-on experience implementing IoT capabilities, we thoroughly believe that HackWestern has broadened all of our perspectives on technology.
## What's Next for Efficity
If this pitch is successful at this hackathon, we plan to iterate further and develop the full potential of the Dragonboard prototype. There are numerous algorithms we would love to implement and explore to process the collected data, since the Dragonboard is quite a powerful device with its own operating system. We may also want to include extra hardware add-ons such as silent alarms for over-usage, or solar panels to allow a fully self-sustained device. To take this one step further--if we were able to have a fully functional product, we could opt to pitch this idea to investors!
|
## Inspiration and what it does
As smart as our homes (or offices) become, they do not fully account for the larger patterns in the electricity grids and weather systems around them. They still waste energy cooling empty buildings, or waste money by purchasing electricity during peak periods. Our project, The COOLest hACk, solves these problems. We use sensors to detect both the ambient temperature in the room and on-body temperature. We also increase the amount of cooling when electricity prices are cheaper, which in effect uses your building as an energy storage device. These features simultaneously save you money and the environment.
## How we built it
We built it using Particle Photons with infrared and ambient temperature sensors. These Photons also control a fan motor and LEDs, representing air conditioning. We have a machine learning stack to forecast electricity prices. Finally, we built an iPhone app to show what's happening behind the scenes.
## Challenges we ran into
Our differential equation models for room temperature were not solvable, so we used a stepwise approach. In addition, we needed to find a reliable source of time-of-day peak electricity prices.
## Accomplishments that we're proud of
We're proud that we created an impactful system to reduce energy used by the #1 energy hungry appliance, Air Conditioning. Our solution has minimal costs and works through automated means.
## What we learned
We learned how to work with hardware, Photons, and Azure.
## What's next for The COOLest hACk
For the developers: sleep, at the right temperature ~:~
|
## Inspiration
In online documentaries, we saw visually impaired individuals and their vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue.
## What it does
When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to.
## How we built it
We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a Python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API.
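For context, a stripped-down version of the OCR step might look like the sketch below, which calls the Computer Vision OCR REST endpoint directly with requests. The endpoint version, region, and key handling are assumptions, and the extracted text would then be handed to the Speech service rather than printed.

```python
import requests

# Placeholder values -- a real deployment would read these from configuration
KEY = "YOUR_AZURE_KEY"
ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/vision/v3.2/ocr"

def extract_text(image_path):
    """Send an image to the Computer Vision OCR endpoint and join the detected words."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
            timeout=10,
        )
    resp.raise_for_status()
    words = [
        word["text"]
        for region in resp.json().get("regions", [])
        for line in region["lines"]
        for word in line["words"]
    ]
    return " ".join(words)

print(extract_text("snapshot.jpg"))  # this string would be sent on to the Speech API
```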
## Challenges we ran into
Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity in favour of both the user and us, the developers.
## Accomplishments that we're proud of
This is our very first Makeathon, and we are proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project.
## What we learned
We learned how to operate and program a DragonBoard, as well as connect various APIs together.
## What's next for Aperture
We want to implement hot-key detection instead of the push button to eliminate the need for tactile input altogether.
|
partial
|
## 💡Inspiration
Gaming is often associated with sitting for long periods of time in front of a computer screen, which can have negative physical effects. In recent years, consoles such as the Kinect and Wii have been created to encourage physical fitness through games such as "Just Dance". However, these consoles are simply incompatible with many of the computer and arcade games that we love and cherish.
## ❓What it does
We came up with Motional at HackTheValley wanting to create a technological solution that pushes the boundaries of what we’re used to and what we can expect. Our product, Motional, delivers on that by introducing a new, cost-efficient, and platform-agnostic solution to universally interact with video games through motion capture, and reimagining the gaming experience.
Using state-of-the-art machine learning models, Motional can detect over 500 features on the human body (468 facial features, 21 hand features, and 33 body features) and use these features as control inputs to any video game.
Motional operates in 3 modes: using hand gestures, face gestures, or full-body gestures. We ship certain games out-of-the-box such as Flappy Bird and Snake, with predefined gesture-to-key mappings, so you can play the game directly with the click of a button. For many of these games, jumping in real-life (body gesture) /opening the mouth (face gesture) will be mapped to pressing the "space-bar"/"up" button.
However, the true power of Motional comes with customization. Every single possible pose can be trained and clustered to provide a custom command. Motional will also play a role in creating a more inclusive gaming space for people with accessibility needs, who might not physically be able to operate a keyboard dexterously.
## 🤔 How we built it
First, a camera feed is taken through Python OpenCV. We then use Google's Mediapipe models to estimate the positions of the features of our subject. To learn a new gesture, we first take a capture of the gesture and store its feature coordinates generated by Mediapipe. Then, for future poses, we compute a similarity score using euclidean distances. If this score is below a certain threshold, we conclude that this gesture is the one we trained on. An annotated image is generated as an output through OpenCV. The actual keyboard presses are done using PyAutoGUI.
We used Tkinter to create a graphical user interface (GUI) where users can switch between different gesture modes, as well as select from our current offering of games. We used MongoDB as our database to keep track of scores and make a universal leaderboard.
## 👨🏫 Challenges we ran into
Our team didn't have much experience with any of the stack before, so it was a big learning curve. Two of us didn't have a lot of experience in Python. We ran into many dependency issues and package import errors, which took a lot of time to resolve. When we initially were trying to set up MongoDB, we also kept timing out for weird reasons. But the biggest challenge was probably trying to write code while running on 2 hours of sleep...
## 🏆 Accomplishments that we're proud of
We are very proud to have been able to execute our original idea from start to finish. We managed to actually play games through motion capture, both with our faces, our bodies, and our hands. We were able to store new gestures, and these gestures were detected with very high precision and low recall after careful hyperparameter tuning.
## 📝 What we learned
We learned a lot, both from a technical and non-technical perspective. From a technical perspective, we learned a lot about the tech stack (Python + MongoDB + working with Machine Learning models). From a non-technical perspective, we got much better at working together as a team and dividing up tasks!
## ⏩ What's next for Motional
We would like to implement a better GUI for our application and release it for a small subscription fee, as we believe there is a market of people who would be willing to invest money in an application that helps them automate and speed up everyday tasks while providing the ability to play any game they want the way they would like. Furthermore, this could be an interesting niche market to help gamify muscle rehabilitation, especially for children.
|
## Inspiration
While a member of my team was conducting research at UCSF, he noticed a family partaking in a beautiful, albeit archaic, practice. They gave their grandfather access to a Google Doc, where each family member would write down the memories that they have with him. Nearly every day, the grandfather would scroll through the doc and look at the memories that he and his family wanted him to remember.
## What it does
Much like the Google Doc does, our site stores memories inputted by either the main account holder themselves or other people who have access to the account, perhaps through a shared family email. From there, the memories show up on the user's feed and are tagged with the emotion they indicate. Someone with Alzheimer's can easily search through their memories to find what they are looking for. In addition, our chatbot feature, trained on their memories, also allows users to talk to the app directly and ask for what they are looking for.
## How we built it
Next.js, React, Node.js, Tailwind, etc.
## Challenges we ran into
It was difficult implementing our chatbot in a way where it is automatically updated with data that our users input into the site. Moreover, we were working with React for the first time and faced many challenges trying to build out and integrate the different technologies into our website, including setting up MongoDB, Flask, and different APIs.
## Accomplishments that we're proud of
Getting this done! Our site is polished and carries out our desired functions well!
## What we learned
As beginners, we were introduced to full-stack development!
## What's next for Scrapbook
We'd like to introduce Scrapbook to medical professionals at UCSF and see their thoughts on it.
|
## 💡 Inspiration
Manga are Japanese comics, considered to form a genre distinct from other graphic novels. Similar to other comics, they lack a musical component. However, their digital counterparts (such as sites like Webtoons) have innovated on the traditional format with the addition of soundtracks, playing concurrently with the reader's progression through the comic. This can create an immersive experience for the reader, building on the emotion on screen. While Webtoon's take on incorporating music is not yet mainstream, we believe there is potential in building on the concept and making it mainstream in online manga. Imagine how cool it would be to generate a soundtrack for the story unfolding. Who doesn't enjoy personalized music while reading?
## 💻 What it does
1. Users choose a manga chapter to read (in our prototype, we're using just one page).
2. Sentiment analysis is performed on the dialogue of the manga.
3. The resulting sentiment is used to determine what kind of music is fed into the song-generating model.
4. A new song will be created and played while the user reads the manga.
## 🔨 How we built it
* Started with brainstorming
* Planned and devised a plan for implementation
* Divided tasks
* Implemented the development of the project using the following tools
*Tech Stack* : Tensorflow, Google Cloud (Cloud Storage, Vertex AI), Node.js
Registered Domain name : **mangajam.tech**
## ❓Challenges we ran into
* None of us knew machine learning at the level that this project demanded of us.
* Timezone differences and the complexity of the project
## 🥇 Accomplishments that we're proud of
The teamwork, of course!! We are a team of four coming from three different time zones; this was the first hackathon for one of us, and the enthusiasm, coordination, and support were definitely unique and spirited. This was a very ambitious project, but we did our best to create a prototype proof of concept. We really enjoyed learning new technologies.
## 📖 What we learned
* Using TensorFlow for sound generation
* Planning and organization
* Time management
* Performing Sentiment analysis using Node.js
## 🚀 What's next for Magenta
Oh tons!! We have many things planned for Magenta in the future.
* Ideally, we would also do image recognition on the manga scenes to help determine sentiment, but it's hard to actualize because of varying art styles and genres.
* To add more sentiments
* To deploy the website so everyone can try it out
* To develop a collection of Manga along with the generated soundtrack
|
partial
|
## Inspiration
Since the arrival of text messaging into the modern-day world, users have had a love-hate relationship with this novel form of communication. Instant contact with those you love comes at the cost of losing an entire facet of conversation - emotion. However, one group of individuals has been affected by this more than most. For those with autism, who already have a difficult time navigating emotional cues in person, the world of text messages is an even more challenging situation. That's where NOVI comes in.
## What it does
NOVI utilizes Natural Language Processing to identify a range of emotions within text messages from user to user. Then, by using visual and text cues and an intuitive UI/UX design, it informs the user (based on their learning preferences) of what emotions can be found in the texts they are receiving. NOVI is a fully functional app with a back-end utilizing machine learning and a heavily researched front end to cater to our demographic and help them as much as possible.
## How I built it
Through the use of React Native, CSS, JavaScript, Google Cloud, and plenty of hours, NOVI was born. We focused our back-end implementation on machine learning and natural language processing, and our front-end on research-based intuition that could maximize the effectiveness of our app for our users. We ended up with a brand new, fully functional messaging app that caters to our demographic's exact needs.
## Challenges I ran into
As this was the first time many of us had touched anything related to machine learning, there was no real intuition behind a lot of the things we tried to implement. This meant a lot of learning potential and many hours poured into developing new skills. By the end of it, however, we ended up learning a lot about not only new topics, but also the process of discovering new information and content in order to create our own products.
## Accomplishments that I'm proud of
Something we put a genuinely large amount of effort into was researching our target demographic. As every member in our group had very individual experiences with someone with autism, there were a lot of assumptions we had to avoid making. We avoided these generalizations by looking into as many research papers backing our theories as we could find. This was the extra step we chose to take to assure a genuinely effective UI/UX for our users.
## What I learned
We learned how to use React Native, how to use a backend, and, among many other things, simply how to learn new things. We learned how to research to maximize the effectiveness of interfaces and experiences, and we learned how to make an app for a specific user base.
## What's next for NOVI
NOVI is an app with much to offer and a lot of potential for collaboration with a variety of organizations and other companies. It is also possible to adapt the concept of NOVI to adapt to other areas of aid for other possible demographics, such as for those with Asperger's.
|
## Inspiration
We were inspired to build Loki to illustrate the plausibility of social media platforms tracking user emotions to manipulate the content (and advertisements) that they view.
## What it does
Loki presents a news feed to the user much like other popular social networking apps. However, in the background, it uses iOS’ ARKit to gather the user’s facial data. This data is piped through a neural network model we trained to map facial data to emotions. We use the currently-detected emotion to modify the type of content that gets loaded into the news feed.
## How we built it
Our project consists of three parts:
1. Gather training data to infer emotions from facial expression
* We built a native iOS application view that displays the 51 facial attributes returned by ARKit.
* On the screen, a snapshot of the current face can be taken and manually annotated with one of four emotions [happiness, sadness, anger, and surprise]. That data is then posted to our backend server and stored in a Postgres database.
2. Train a neural network with the stored data to map the 51-dimensional facial data to one of four emotion classes (a minimal sketch of such a network follows this list). To do this, we:
* Format the data from the database in a preprocessing step to fit into the purely numeric neural network
* Train the machine learning algorithm to discriminate different emotions
* Save the final network state and transform it into a mobile-enabled format using CoreMLTools
3. Use the machine learning approach to discreetly detect the emotion of iPhone users in a Facebook-like application.
* The iOS application utilizes the neural network to infer user emotions in real time and show posts that fit the emotional state of the user
* With this proof of concept we showed how easily applications can use the camera to spy on users.
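As a rough illustration of the network trained in step 2 (only the 51-feature input and 4-class output come from the write-up; the layer sizes, placeholder data, and hyperparameters are our own guesses), a Keras version might look like this, after which coremltools would convert the saved model for on-device prediction:

```python
import numpy as np
from tensorflow.keras import layers, models

NUM_FEATURES = 51   # the 51 facial attributes returned by ARKit
NUM_EMOTIONS = 4    # happiness, sadness, anger, surprise

model = models.Sequential([
    layers.Input(shape=(NUM_FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder training data standing in for the annotated snapshots stored in Postgres
X = np.random.rand(200, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_EMOTIONS, size=200)
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

model.save("emotion_model.h5")  # then converted with coremltools for on-device prediction
```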
## Challenges we ran into
One of the challenges we ran into was the problem of converting the raw facial data into emotions. Since there are 51 distinct data points returned by the API, it would have been difficult to manually encode notions of different emotions. However, using our machine learning pipeline, we were able to solve this.
## Accomplishments that we're proud of
We’re proud of managing to build an entire machine learning pipeline that harnesses CoreML — a feature that is new in iOS 11 — to perform on-device prediction.
## What we learned
We learned that it is remarkably easy to detect a user’s emotion with a surprising level of accuracy using very few data points, which suggests that large platforms could be doing this right now.
## What's next for Loki
Loki is currently not saving any new data that it encounters. One possibility is for the application to record the expression of the user mapped to the social media post. Another possibility is to expand on our current list of emotions (happy, sad, anger, and surprise) as well as train on more data to provide more accurate recognition. Furthermore, we can utilize the model’s data points to create additional functionalities.
|
## Inspiration
make access to everyday therapy faster, cheaper (free), and easier while keeping your identity anonymous.
## What it does
hippo checks up on you every day, records how you are feeling in an unintrusive manner throughout the conversation, analyzes the emotions behind your language and patterns in your mood, as well as provides helpful resources to manage your feelings.
## How we built it
We designed in Figma and translated the designs to HTML and CSS, trained our machine learning model on a Kaggle GPU, and integrated our components into a Flask full-stack app. We used a [Chatterbot](https://chatterbot.readthedocs.io/en/stable/) Python bot for conversational interaction.
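For reference, getting a basic ChatterBot instance talking takes only a few lines; the bot name and training corpus below are illustrative rather than hippo's actual configuration.

```python
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

bot = ChatBot("hippo")                      # illustrative bot name
trainer = ChatterBotCorpusTrainer(bot)
trainer.train("chatterbot.corpus.english")  # bundled English small-talk corpus

reply = bot.get_response("I had a rough day today.")
print(reply)  # this text would be shown in the chat UI and logged for mood analysis
```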
## Challenges we ran into
Time was the biggest challenge, especially for a machine learning project where just training the model can take 5+ hours. In addition, the team was introduced to a completely new programming language, and most of us had no experience making a chatbot. We experimented with various AI chatbot platforms, including Dialogflow, before settling on Chatterbot, which was the most suitable.
## Accomplishments that we're proud of
* Successfully analyzing moods in conversational language
* Creating a working chatbot that can maintain a short conversation
* Coming up with a solution that helps those unable to access a real therapist due to family situation, finances, social stigma. This anonymous, time-efficient tool can help users keep track of the emotions surrounding their thoughts.
* Our neat mascot, hippo!
## What we learned
* Configuring AI chatbots and integrating them into our web applications
* Designing intuitive applications with Figma, HTML and CSS
* Building machine learning models to classify emotion based on text
## What's next for hippo
Product market fit! Getting this out there to users to use, get feedback, improve and iterate. We want to develop our mood tracking features to parse diary entries and provide our users with live therapists for when they need expert attention.
|
partial
|