# Liber Populus

## Vote like your life depends on it.

By many accounts, the upcoming 2020 election will be one of the most contentious to date. With a global pandemic, an economic crisis, racial unrest, two starkly different candidates, and a polarized nation, it is important that everyone who is able partakes in the democratic process at the heart of the American story. This is where **Liber Populus** comes in.

**Liber Populus** – Latin for *"free people"*

**Liber Populus** provides an assortment of tools and information to equip every voter with what they need to register and cast their ballot while avoiding obstacles such as voter suppression, ballot filing mistakes, and more!
## Inspiration

We were inspired to create this site to raise awareness of the importance of being a global citizen in our modern world. Our biggest goal was to create a site that would achieve just that.

## What it does

Our site serves as a beacon of knowledge for those looking for an educational tool or, more importantly, those looking to become global citizens. Users can search for countries or US states, see them visually on a map, and learn important basic, demographic, and cultural information about each one.

## How we built it

For the backend, we used Flask as a base. This allowed us to use POST and GET requests for the Wolfram|Alpha Short Answers API, which accepts queries for specific information on countries around the world as well as US states. The logic for the API calls was written in Python and JavaScript. On top of this, Flask allowed us to host the site. To build out the frontend, we used a combination of HTML, CSS, and JavaScript.

## Challenges we ran into

Our biggest challenge was figuring out how to use Flask to actually send POST and GET requests to the Wolfram|Alpha Short Answers API. We eventually figured out how to pipe the output of the Short Answers API to the frontend using a combination of our knowledge in Python and JavaScript.

## Accomplishments that we're proud of

We are so proud that we managed to build a full-stack website in just a day! We achieved our mission of serving as a beacon of knowledge for users anywhere.

## What we learned

The most important thing we learned was how to use Flask for websites, and more specifically, in the context of calling APIs and piping the results to the frontend. On top of this, we sharpened our skills in JavaScript, HTML, and CSS while designing the frontend pages for aesthetic results!

## What's next for MAPIFY

Our next goal is to add a sharing function to let users share the information they receive with others. On top of this, we plan to add a way for users to ask specific questions about a given US state or country.
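To illustrate the piping described above, here is a minimal Flask sketch of the idea; the route, parameter names, and `WOLFRAM_APPID` are placeholders rather than the project's actual code:

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
WOLFRAM_APPID = "YOUR-APPID"  # placeholder app id

@app.route("/query")
def query():
    # e.g. GET /query?q=population+of+France
    q = request.args.get("q", "")
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": q},
    )
    # The Short Answers API returns a single plain-text line, which the
    # frontend JavaScript can fetch and render directly.
    return jsonify({"answer": resp.text})

if __name__ == "__main__":
    app.run()
```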
Our full case study, including our user research and ideation process, can be read here: <https://medium.com/@annakambhampaty/pocket-democracy-empowering-voters-using-the-google-cloud-vision-api-ibm-watson-and-revspeech-61268791fcd3>

## Inspiration

"If I don't know either candidate in the race, I'd go by picking the Democrat over the Republican, then women over men, then names that sound like they come from like some kind of racial minority or something, then from there we're just straight up guessing."

Over 30% of voters fail to complete their ballots every year. Political scientists attribute this to an absence of information, which causes the SAT effect: if you don't know, don't answer it. Even more, researchers have found that candidates listed first on the ballot can receive up to 5% more votes. When voters don't have the information they need, candidates' names, ethnicity, and gender can affect how they make decisions. The above quote from a voter we interviewed vividly illustrates this fact.

There are several issues surrounding voter engagement, voter registration, and disenfranchisement policy, but, for the scope of this project, we focus on the specific interaction of the registered voter filling out their ballot. We ask: how might we help a voter make a more informed, more personal decision at the booth? Through interviewing users on the day of the hackathon and drawing on past observations of this issue, we ideated and came up with the following solution, which utilizes a wide range of technologies.

## What it does

Our solution is an augmented reality experience that allows a user to scan their smartphone over their ballot. Our app, Pocket Democracy, will pick up the names on the ballot and allow the user to click them to reveal relevant information, popular news links, and a sentiment analysis of articles relating to the candidate. Pocket Democracy also supports speech-to-text and text-to-speech processing.

## How we built it

We developed a web app that first processes an image of the ballot using Google Cloud Vision's Optical Character Recognition API to detect and then extract the text from the image of the ballot. We grab the candidate names in text form and pass them in queries to IBM Watson's Discovery News API. We use this API to scrape the web and gather relevant information on the candidate: stances on prominent policy issues, relevant news links, and a sentiment analysis of news articles. We also utilize RevSpeech's API to implement a speech-to-text feature, for accessibility reasons. A user can say a name into the app, and it will pull up the same relevant information on the candidate. The app also has the ability, thanks to Google Cloud's text-to-speech, to speak the relevant information it scraped back to the user. Beyond just accessibility, this also means the user does not need to be in front of a ballot and can get informed ahead of time as well.

## What's next for Pocket Democracy

Before moving forward with our project, extensive research in information ethics and user testing for accessibility and usability will be required. Then, we can iterate on our design in an informed manner to make it as accessible and equitable as possible. Algorithmic and news source bias should also be addressed in the future. We'd like to implement a personalization feature as well as a simple text input. We also need to more smoothly connect the varying components of our project.
A reminder of our original mission: to help voters make an informed decision with ease for themselves!
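As a rough sketch of the OCR step described under "How we built it" (assuming the `google-cloud-vision` client library and default credentials; an illustration, not the project's code):

```python
from google.cloud import vision

def extract_ballot_text(image_bytes: bytes) -> str:
    """Run Cloud Vision OCR on a ballot photo and return the raw text."""
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    annotations = response.text_annotations
    # The first annotation holds the full detected text block; candidate
    # names would then be parsed out of it and passed to Watson Discovery.
    return annotations[0].description if annotations else ""
```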
## Inspiration

There were two primary sources of inspiration. The first was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading).

The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, in noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.

## What it does

The user presses a button on the side of the glasses to begin recording, and presses it again to end recording. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a post to a web server along with the uploaded file name. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds that into a transformer network which transcribes the video. Upon finishing, a front-end web application is notified through socket communication, and the front-end then streams the video from Google Cloud and displays the transcription output from the back-end server.

## How we built it

The hardware platform is a Raspberry Pi Zero interfaced with a Pi camera. A Python script runs on the Raspberry Pi to listen for GPIO, record video, upload to Google Cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
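As a sketch of the Haar cascade facial-detection stage (the parameters here are illustrative defaults, not the project's tuned values):

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_faces(frame):
    """Return face regions of one video frame for the lip-reading network."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]
```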
## Challenges we ran into

* TensorFlow proved difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB tethering with a mobile device

## Accomplishments that we're proud of

* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype

## What we learned

* How to set up a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to set up a web server through localhost tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end

## What's next for Synviz

* With a stronger on-board battery, a 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed that can be implemented on an existing platform like North's Focals to deliver a promising business appeal
## Inspiration

Determined to create a project that could make impactful change, we sat and discussed our own lived experiences, thoughts, and opinions as a group. We quickly realized how greatly the lack of thorough sexual education in our adolescence impacted each of us as we made the transition to university. Furthermore, we began to see how this kind of information wasn't readily available to female-identifying individuals (and others who would benefit from it) in an accessible and digestible manner. We chose to name our idea 'Illuminate' as we are bringing light to a very important topic that has been in the dark for so long.

## What it does

This application is a safe space for women (and others who would benefit from this information) to learn more about themselves and their health regarding their sexuality and relationships. It covers everything from menstruation to contraceptives to consent. The app also includes a space for women to ask questions, find which products are best for them and their lifestyles, and a way to find their local sexual health clinics. Not only does this application shed light on a taboo subject, but it empowers individuals to make smart decisions regarding their bodies.

## How we built it

Illuminate was built using Flutter as our mobile framework in order to support both iOS and Android. We learned the fundamentals of the Dart language to fully take advantage of Flutter's fast development and created a functioning prototype of our application.

## Challenges we ran into

As individuals who had never used either Flutter or Android Studio, the learning curve was quite steep. For a long time we were unable to create anything, as we struggled quite a bit with the basics. However, with lots of time, research, and learning, we quickly built up our skills and were able to carry out the rest of our project.

## Accomplishments that we're proud of

In all honesty, we are so proud of ourselves for being able to learn as much as we did about Flutter in the time that we had. We really came together as a team and created something we are all genuinely proud of. This will definitely be the first of many stepping stones in what Illuminate will do!

## What we learned

Despite this being our first time, by the end of all of this we learned how to successfully use Android Studio and Flutter, and how to create a mobile application!

## What's next for Illuminate

In the future, we hope to add an interactive map component that will show users where their local sexual health clinics are using a GPS system.
## Inspiration

Currently, it is difficult for people coming from different language backgrounds to share pictures and describe them to each other. For example, if someone wanted to describe what is happening in a picture to someone who speaks a different language than they do, it would be nearly impossible for that caption generation to occur.

## What it does

Our project allows users to select a picture from Google Drive and upload it, and the project will generate a caption for the image. A word cloud is also generated, to capture the "feeling" of a user's profile so far.

## How we built it

We built this project on top of IBM's current machine learning image caption generator. We added a feature for choosing pictures from Google Drive, and also tried to add a Facebook profile feature. The idea was to allow users to generate word clouds that reflected their entire profile.

## Challenges we ran into

Our team is not particularly advanced in skill, so we spent a lot of time just learning basic web skills and familiarizing ourselves with the technology stack we would be working on. We also encountered difficulties with both the Facebook and Google APIs, making it difficult for certain parts of our project to work.

## Accomplishments that we're proud of

Overall, our team learned a lot!! And we worked well together, despite being a team that just met. We helped each other learn and figure things out, as well as being supportive and pushing each other to learn our hardest during this hackathon!

## What we learned

Basic HTML, JavaScript, and Python, especially in a web app, as well as how to deal with large companies' APIs that can be difficult to manage and understand.

## What's next for Word Cloud Generator

Continue fixing the bugs! We're at table O7.
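As a small sketch of the word cloud step, here is how it could be generated in Python with the `wordcloud` package; the function name and image size are illustrative assumptions, not the project's code:

```python
from wordcloud import WordCloud  # pip install wordcloud

def profile_cloud(captions, out_path="cloud.png"):
    """Render a word cloud image from the captions generated so far."""
    text = " ".join(captions)
    WordCloud(width=800, height=400, background_color="white") \
        .generate(text).to_file(out_path)
```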
## Inspiration

We were inspired by our own experiences and by those of our classmates. Despite having started our degree in person, before the pandemic, most of our time in college has been spent online. We know people who started post-secondary studies during the pandemic, and some of them have never even been to campus. We noted that when pandemic recovery plans are discussed, economic recovery seems to be the focus. And while that is quite important, restoring people's social lives and mental health is also crucial to building a better world after this situation. Our concept is designed to facilitate both social and economic restoration.

## What it does

We created the concept prototype for an app that helps students, especially those who have done school online for the last two years, make real-life connections with their classmates and people around them. During the onboarding process, users are asked about their interests along with some personal information, like what school they go to and what program/major they are taking. Because we understand that safety and boundaries are important, users are asked COVID-safety-related questions as well. Things like vaccination status, how many people they would feel comfortable meeting with at a time, and what kind of settings they would feel safe in (restaurants, parks, malls, etc.) are all inputted by the user.

Using the information provided, users are matched with a person or group of people whom the app has determined they are compatible with. Users are then prompted to start a chat or group chat. After the app detects that a certain number of messages have been exchanged, it will suggest meeting up. Based on the users' interests, the app will suggest a small local venue where they can hang out.

## How we built it

We used Figma for the whole process, from brainstorming to user flows and wireframes. Our final prototype can be viewed in Figma.

## Challenges we ran into

When we were brainstorming, we found it difficult to find an idea that would help people's mental health recover along with the economy. After we came up with our concept, we found ourselves having to narrow down our scope. We had to curate the user flow that we showed in our video to display the core idea of our project.

## Accomplishments that we're proud of

We managed to find a way to effectively display our project within the required timeframe. We overcame feature creep and made sure that we only included what was essential for our project. We created a fun and effective visual identity for our project.

## What we learned

We refined our skills in Figma. We got to apply creative problem-solving techniques we had been taught in classes in a practical situation.

## What's next for IRL✨

User research and testing. Improving our prototype and eventually making it a real thing!
## Inspiration

As our world becomes more digitalized and interactions become more permanent, our team noticed a rise in online anxiety stemming from an innate fear of being judged or making a mistake. In Elizabeth Armstrong's book *Paying for the Party*, Armstrong mentions the negative impacts of being unique at a school where it pays to not stand out. This exact sentiment can now be seen online, except now everything can be traced back to an identity indefinitely. Our thoughts, questions, and personal lives are constantly being ridiculed and monitored for mistakes. Even after a decade of growth, we will still be tainted by the person we were years before. Contrary to this social fear, many of us started childhood with a confidence and naivety of social norms that allowed us to simply make friends based on interests. Every day was made for show-and-tell and asking questions. Through this platform, we seek to develop a web app that lets us reminisce about the days when making friends was as easy as turning to a stranger on the playground and asking to play.

## What it does

Our web app is designed to make befriending strangers with shared interests easier and making mistakes less permanent. When opening the app, users are given a pseudonym and can choose their interests from a word cloud. Afterwards, the user can follow one of three paths. The first is a friend-matching path, where the user receives eight different people who share common interests with them (see the matching sketch at the end of this writeup). In these profiles, each person's face is blurred and the only things shared are interests and age. The user can select up to two people to message per day. The second path allows for learning: once a user selects a topic they'd like to learn more about, they are matched with someone who is volunteering to share information. The third consists of a random match in the system for anyone who is feeling spontaneous; this was inspired by Google's "I'm feeling lucky" button. Once messaging begins, both people have the ability to reveal their identity at any point, which resolves the blurred image on their profile for the user they are unlocking it for. The overall objective is to create a space for users to share without their identity being attached.

## How we built it

Our team built this by taking time to learn UI design in Figma and then implementing the frontend through HTML and CSS. We then attempted to build the backend in Python using Flask, and hosted the web app on an Azure server.

## Challenges we ran into

Our team is made up of 100% beginners with extremely limited coding experience, so finding the starting point for web app development was the biggest challenge we ran into. In addition, we ran into a significant amount of software installation issues, which we worked with a mentor for several hours to resolve. Due to these issues, we never fully implemented the program.

## Accomplishments that we're proud of

Our team is extremely proud of the progress we have made thus far on the project. Coming in, most of us had very limited skills, so being able to learn Figma and launch a website in 36 hours feels incredible. Through this process, all of us were able to learn something new, whether that be a software, a language, or simply the process of website design and execution.
As a group coming from four different schools in different parts of the world, we are also proud of the general enthusiasm, friendship, and team skills we built through this journey.

## What we learned

Coming in as beginner programmers, our team learned a lot about the process of creating and designing a web app from start to finish. Through talking to mentors, we were able to learn more about the different software, frameworks, and languages many applications use, as well as the flow of going from frontend to backend. In terms of technical skills, we picked up Figma, HTML, and CSS through this project.

## What's next for Playground

In the future, we hope to continue designing the frontend of Playground and then implement the backend in Python, since we never got to the point of completion. As a web app, we hope to later implement better matching algorithms and expand into communities for different "playgrounds."
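Since the backend was never completed, here is one plausible sketch (our illustration, not Playground's actual code) of interest-based matching using Jaccard similarity:

```python
def interest_overlap(a: set, b: set) -> float:
    """Jaccard similarity between two interest sets (0 = none, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def top_matches(user: set, others: dict, k: int = 8):
    """Rank other pseudonymous users by shared interests; the app shows eight."""
    ranked = sorted(others.items(),
                    key=lambda kv: interest_overlap(user, kv[1]),
                    reverse=True)
    return ranked[:k]
```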
## Inspiration

Covid-19 has turned every aspect of the world upside down. Unwanted things happen, and situations change. Communication breakdowns and economic crisis cannot be prevented. Thus, we developed an application that helps people survive during this pandemic by providing **a shift-taker job platform which creates a win-win solution for both parties.**

## What it does

This application connects companies/managers that need employees to cover a shift for an absent employee in a certain period of time, without any contract. As a result, they will be able to cover their needs to survive in this pandemic. Beyond its main goal, this app can generally be used to help people **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their abilities to get a job with Job-Dash.

## How we built it

For the design, Figma is the application we used to design all the layouts and give smooth transitions between frames. While working on the UI, the developers started to code the functionality to make the application work. The frontend was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to not store session cookies (a sketch of this pattern follows below).

## Challenges we ran into

In terms of UI/UX, dealing with user information ethics was a challenge for us, as was providing complete details for both parties. On the developer side, using Bootstrap components ended up slowing us down, as our design was custom and required us to override most of the styles. It would have been better to use Tailwind, as it would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks took longer.

## Accomplishments that we're proud of

Some of us picked up new technologies while working on the project, and creating a smooth UI/UX in Figma, including every feature, has satisfied us.

Here's the link to the Figma prototype - user point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)

Here's the link to the Figma prototype - company/business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)

## What we learned

We learned that we should narrow down the scope more for future hackathons, so it would be easier to focus on one unique feature of the app.

## What's next for Job-Dash

In terms of UI/UX, we would love to make more improvements to the layout to better serve the app's purpose of helping people find additional income effectively. On the developer side, we would like to continue developing the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
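Job-Dash's backend is Node.js/Express; purely to illustrate the stateless JWT pattern mentioned under "How we built it", here is a minimal sketch using Python's PyJWT, with a placeholder secret and an assumed 12-hour token lifetime:

```python
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # placeholder; a real deployment loads this from config

def issue_token(user_id: str) -> str:
    """Sign a short-lived token at login; nothing is stored server-side."""
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(hours=12),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    """Return the user id, or raise jwt.InvalidTokenError if tampered/expired."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]
```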
## Inspiration

In an increasingly polarized America, it is often too easy to live inside an echo chamber of political thought. This is especially true in colleges across America, where once-civil discussion of political issues can often devolve into shouting matches or violence. We wanted to create a tool to make it easier to broaden your perspective on political issues and better understand opposing viewpoints.

## What it does

When you are reading a political news article online, you can click the Enigma Chrome extension icon to be taken to a side-by-side comparison of your article with a similar article written with an equal amount of political bias, but in the opposite direction. For instance, a New York Times article may be compared with a Wall Street Journal article, and a Breitbart article may be compared with a similar HuffPost article.

## How we built it

We used HTML/CSS/JS to build the Chrome extension and comparison page, and we used Node.js and the Bing API to find appropriate comparison articles.

## Challenges we ran into

We found it difficult to properly embed news articles on the comparison page due to issues with XSS protections in the browser. It was also challenging to create a convenient flow for users to see article comparisons.

## Accomplishments that we're proud of

We're proud that we were able to make an easy-to-use tool that helps broaden your perspective and, in its own little way, fight political divisions and polarization in America.

## What we learned

Embedding articles is more complicated than it seems...

## What's next for Enigma

Public availability?
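As an illustration of the article lookup: the team used Node.js, so this Python sketch only shows the shape of a Bing Web Search query, and the key and `site:` restriction heuristic are our assumptions:

```python
import requests

BING_KEY = "YOUR-KEY"  # placeholder subscription key

def find_counterpart(headline: str, opposite_outlet: str) -> list:
    """Search Bing for the same story as covered by an outlet across the aisle."""
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": BING_KEY},
        params={"q": f"{headline} site:{opposite_outlet}", "count": 3},
    )
    resp.raise_for_status()
    # Each result has a name, url, and snippet to populate the comparison page.
    return resp.json().get("webPages", {}).get("value", [])
```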
## Our Project - CounterPoint

**What**: To hack for a social good, we created a website that uses Google Cloud's AI services to bridge the divide, whether the debate is political or scientific!

**How**: You give our site a link to an article that you agree with (e.g., a Fox News anti-climate-change article), and we use natural language processing to give back relevant articles that represent different views on the topic! We use Google's NLP API services to analyze the article, extract keywords, and determine the article's viewpoint, and then we use Google's Custom Search API to find relevant articles that defend agreeing, opposing, and neutral viewpoints!

There's more! If you read an article that changed your mind or opened you to a new viewpoint, mark it with a delta. Deltas will help us sort which articles were most effective in making people think.

**Who**: For people who want to listen to both sides of the debate and make informed decisions for themselves.

## Technologies

* Google NLP and Custom Search APIs
* Flask
* Python
* Vue.js

## Challenges

* Front-end development
* Determining viewpoint and keywords from the article
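Given the Python/Flask stack, the analysis step might look roughly like the following sketch with the `google-cloud-language` client; the five-keyword cut-off and function name are our assumptions:

```python
from google.cloud import language_v1

def article_signals(text: str):
    """Extract top keywords and overall sentiment from an article's text."""
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    entities = client.analyze_entities(document=doc).entities
    sentiment = client.analyze_sentiment(document=doc).document_sentiment
    # Salience ranks how central each entity is to the article.
    keywords = [e.name for e in sorted(entities, key=lambda e: e.salience,
                                       reverse=True)[:5]]
    return keywords, sentiment.score  # score < 0 leans negative, > 0 positive
```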
# NewsBlind

## Summary

A machine-learning-powered fake news detector with a user-oriented web interface that includes a concrete judgment on the article, a detailed and rigorous summary, and a crowdsourced poll for users to vote on whether or not they agree with the application's judgment.

## Our Hack

We created a multi-layered algorithm which uses machine learning, sentiment analysis, and other facets of natural language processing to holistically evaluate news articles for bias and falsehoods. We created a weighted percentage based on the results of:

* Judgment of the intentions of the text based on the results of a natural-language-processing-powered linear support vector machine (a machine learning algorithm) trained on thousands of real and fake articles acquired via a web crawler (a toy sketch of this step appears at the end of this README)
* Response of polarity- and subjectivity-based sentiment analysis of the article headline and text
* Judgment of a naive Bayesian classifier (a machine learning algorithm) regarding the extent to which the headline aims to sway the reader
* Grammar analysis of article text
* Cross-referencing an established database of questionable, problematic, and trustworthy top-level domains and secondary domains

Then, we created an interface using web technologies such as Flask, Ajax, and HTML/CSS/JavaScript to create an in-browser experience which, in response to a user's input, runs the algorithm and returns a detailed summary of the article's performance in terms of these metrics. In addition to this more detailed information, we also provide a comprehensive percentage and progress bar to give a more direct summary. Furthermore, we have included a poll for users to comment on whether or not they agree with the machine's judgment: doing so allows for more open communication and democracy, as we do not intend on censoring any information. Instead, our goal is to increase the extent to which citizens understand the sources and elements (and any related biases or falsehoods) associated with consuming media.

In the future, we would love to host this project on the internet completely, such that users can access it online directly. From there, we could explore options such as browser and social media extensions. It would also be an exciting data science project to incorporate the crowdsourced poll results into the algorithm results.

## About Us

We are NewsBlind, a team of engineers from Olin College of Engineering. Our product is a web app that takes URLs inputted by the user and determines whether the article in question contains false, biased, and/or questionable information. Our interconnectedness on the web and the lightning speed at which data is shared creates an environment that makes it very easy for falsehoods and misinformation to spread. Easy access to accurate information on the internet is crucial to the continued success of advancing technology and the success of the human race as a whole. While it is of the utmost importance to minimize the pertinence of fake news, we firmly believe that outright censorship of information is wrong. Our product is aimed at informing viewers about the accuracy of the media they consume, but ultimately leaves the decision up to them whether they wish to view and/or share the article or not.

## How to Install

All of the dependencies for this application exist in the requirements.txt file in this directory. To install them, you'll first need Python 2.7.
Then, if you don't have pip, install pip:

```
sudo apt-get install python-pip
```

Then, you can install the requirements with

```
pip install -r requirements.txt
```

## How to Run

To run this program, clone this repository. To interface with the application, you will need to start the Flask server and open up the HTML pages in your favorite browser. First, navigate to the top directory and run

```
cd layers
python detect_prod.py
```

Then, open up the index.html page located in the /web directory. You can do this from the graphical file-structure user interface, or you can use the terminal. From the terminal, navigate again to the top of the project and run

```
cd web
[browser] index.html
```

From there, the application should work like a usual web page.
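For illustration only (the project targets Python 2.7; this sketch uses a modern scikit-learn pipeline and toy data rather than the crawled corpus), the linear-SVM text-classification layer could be reproduced like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the crawled corpus of thousands of labelled articles.
texts = [
    "shocking miracle cure that doctors hate",
    "senate passes annual budget bill after debate",
]
labels = ["fake", "real"]

# TF-IDF features feeding a linear support vector machine.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["miracle weight loss secret revealed"]))
```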
## Inspiration

In the future we will be driving electric cars. They will have various automatic features, including electronic locks. This system may be vulnerable to hackers who want to unlock cars in public parking lots. So we would like to present **CARSNIC**, a solution to this problem.

## What it does

The device runs a continuous loop in which the camera is checked in order to detect theft/car-unlocking activity. If something looks suspect, the program iterates through the list of frequencies for the unlocking signal (*315 MHz* in the US and *433.92 MHz* in the rest of the world). If the signal is detected, the antenna starts to transmit a mirrored signal in order to neutralize the hacker's signal. We used the property that the signals from car keys are sinusoidal and respect the identity sin(−x) = −sin(x), as the sketch below illustrates.

## How I built it

We used a **Raspberry Pi 3** as the SBC, an RPi camera, and an **RTL-SDR** antenna for RX/TX operations. In order to detect the malicious activity and analyze the plots of the signals, I used Python and the **Custom Vision** API from Azure. The admin platform was created using **PowerApps** and **Azure SQL** databases.

## Challenges I ran into

The main challenge was that I was not experienced in electronics, so learning to work with the components was harder.

## Accomplishments that I'm proud of

The main accomplishment was that the MVP was ready for the competition, in order to demonstrate the proposed idea.

## What I learned

In this project I mostly learned how to work with embedded hardware systems. This is my first project with a Raspberry Pi and an RTL-SDR antenna.

## What's next for CARSNIC

In the next couple of months, I would like to finish the MVP with all the features in the plan: 3D scanning of the structure, acclimatization, and automatic parking from an intelligent service sent directly to your car. Then I think the project should be ready to be presented to investors and accelerators.
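A minimal sketch of the mirroring idea, using the stated identity (pure NumPy; the burst here is a stand-in for a baseband capture, not real SDR code):

```python
import numpy as np

def mirrored(burst: np.ndarray) -> np.ndarray:
    """Phase-mirror a captured sinusoidal burst: since sin(-x) = -sin(x),
    negating the samples yields the cancelling counter-signal."""
    return -burst

# Stand-in for a baseband capture of a 315/433.92 MHz key-fob burst.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
capture = np.sin(2 * np.pi * 5 * t)
assert np.allclose(capture + mirrored(capture), 0.0)  # the two signals cancel
```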
## Inspiration

We found that the current price of smart doors on the market is incredibly high. We wanted to improve the current technology of smart doors at a fraction of the price. In addition, smart locks are not usually hands-free, requiring either the press of a button or opening an app on the user's phone. We wanted to make it as easy and fast as possible for users to securely unlock their door while blocking intruders.

## What it does

Our product acts as a smart door with two-factor authentication to allow entry. A camera cross-matches your face with an internal database, and voice recognition is also used to confirm your identity. Furthermore, the smart door provides useful information for your departure such as the weather and temperature, and even controls the lights in your home. This way, you can decide how much to put on at the door even if you forgot to check, and you won't forget to turn off the lights when you leave the house.

## How we built it

For the facial recognition portion, we used a Python script and OpenCV on the Qualcomm DragonBoard 410c, where we trained the algorithm to recognize correct and wrong individuals (see the sketch below). For the user interaction, we used the Google Home to talk to the user and allow for vocal confirmation as well as control over all other actions. We then used an Arduino to control a motor that would open and close the door.

## Challenges we ran into

OpenCV was incredibly difficult to work with. We found that the setup on the Qualcomm board was not well documented, and we ran into several errors.

## Accomplishments that we're proud of

We are proud of getting OpenCV to work flawlessly and providing a seamless integration between the Google Home, the Qualcomm board, and the Arduino. Each part was well designed to work on its own, and allowed for relatively easy integration together.

## What we learned

We learned a lot about working with the Google Home and the Qualcomm board. More specifically, we learned about all the steps required to set up a Google Home, the processes needed to communicate with hardware, and many challenges in developing computer vision algorithms.

## What's next for Eye Lock

We plan to market this product extensively and see it in stores in the future!
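The writeup doesn't name the specific OpenCV recognizer, so as one common approach, an LBPH-based check might look like the following sketch; the confidence threshold is an assumption that would be tuned on real enrollment data:

```python
import cv2
import numpy as np

# LBPH face recognizer (requires opencv-contrib-python), trained on
# grayscale face crops of residents, each with an integer label.
recognizer = cv2.face.LBPHFaceRecognizer_create()

def enroll(faces: list, labels: list) -> None:
    recognizer.train(faces, np.array(labels))

def is_resident(gray_face, threshold: float = 60.0) -> bool:
    """Lower LBPH confidence means a closer match."""
    _label, confidence = recognizer.predict(gray_face)
    return confidence < threshold
```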
## Inspiration

The failure of a previous project using the Leap Motion API motivated us to learn it and use it successfully this time.

## What it does

Our hack records a motion password chosen by the user. Then, when the user wishes to open the safe, they repeat the hand motion, which is analyzed and compared to the set password (one plausible comparison is sketched below). If it passes the analysis check, the safe unlocks.

## How we built it

We built a cardboard model of our safe and motion-input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors.

## Challenges we ran into

Learning the Leap Motion API and debugging were the toughest challenges for our group. Hot glue dangers and complications also impeded our progress.

## Accomplishments that we're proud of

All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement, and if given the chance to develop this further, we would take it.

## What we learned

The Leap Motion API is more difficult than expected, and communication between Python programs and Arduino programs is simpler than expected.

## What's next for Toaster Secure

* Wireless connections
* Sturdier building materials
* User-friendly interface
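One plausible sketch of the analysis check (entirely our assumption; the real comparison logic isn't described): resample the attempted motion to the stored password's length and threshold the mean point-to-point distance:

```python
import numpy as np

def matches_password(recorded: np.ndarray, attempt: np.ndarray,
                     tolerance_mm: float = 25.0) -> bool:
    """Compare a hand-motion attempt against the stored motion password.

    Both arguments are (n, 3) arrays of palm positions sampled from the
    Leap Motion. The attempt is resampled to the stored length and accepted
    if the mean point-to-point distance falls under the tolerance.
    """
    idx = np.linspace(0, len(attempt) - 1, len(recorded)).astype(int)
    distances = np.linalg.norm(recorded - attempt[idx], axis=1)
    return float(distances.mean()) < tolerance_mm
```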
# PixelShare: An Online Pixelated Canvas

An online pixel canvas where users can collaborate with each other and contribute to the canvas. Check it out and contribute [here](http://104.197.33.114:8000/)!
## Inspiration

MillionPixelHomepage, and how people actually bought all the pixels.

## What it does

It lets people spend their excess money to draw on a public canvas.

## How I built it

We used *HTML/JS/PHP* and *Bootstrap* to build the website. We also used the *Braintree API* to handle the payment system.

## Challenges I ran into

Using PHP image manipulation was not quite easy.

## Accomplishments that I'm proud of

The source code is under **4K bytes!**

## What I learned

*Braintree API, Bootstrap, PHP image manipulation*

## What's next for Public Canvas

We will convince **Donald Trump** and **Kanye West** to print their portraits on the canvas. They will keep paying us as they keep overwriting each other's pictures.
## About the Project

### TL;DR:

Caught a fish? Take a snap. Our AI-powered app identifies the catch, keeps track of stats, and puts that fish in your 3D, virtual, interactive aquarium! Simply click on any fish in your aquarium, and all its details — its rarity, location, and more — appear, bringing your fishing memories back to life. Also, depending on the fish you catch, reel in achievements, such as your first fish caught (ever!), or your first 20-incher. The cherry on top? All users' catches are displayed on an interactive map (built with Leaflet), where you can discover new fishing spots, or plan to get your next big catch :)

### Inspiration

Our journey began with a simple observation: while fishing creates lasting memories, capturing those moments often falls short. We realized that a picture might be worth a thousand words, but a well-told fish tale is priceless. This spark ignited our mission to blend the age-old art of fishing with cutting-edge AI technology.

### What We Learned

Diving into this project was like casting into uncharted waters – exhilarating and full of surprises. We expanded our skills in:

* Integrating AI models (Google's Gemini LLM) for image recognition and creative text generation
* Crafting seamless user experiences in React
* Building robust backend systems with Node.js and Express
* Managing data with MongoDB Atlas
* Creating immersive 3D environments using Three.js

But beyond the technical skills, we learned the art of transforming a simple idea into a full-fledged application that brings joy and preserves memories.

### How We Built It

Our development process was as meticulously planned as a fishing expedition:

1. We started by mapping out the user journey, from snapping a photo to exploring their virtual aquarium.
2. The frontend was crafted in React, ensuring a responsive and intuitive interface.
3. We leveraged Three.js to create an engaging 3D aquarium, bringing caught fish to life in a virtual environment.
4. Our Node.js and Express backend became the sturdy boat, handling requests and managing data flow.
5. MongoDB Atlas served as our net, capturing and storing each precious catch securely.
6. The Gemini AI was our expert fishing guide, identifying species and spinning yarns about each catch.

### Challenges We Faced

Like any fishing trip, we encountered our fair share of challenges:

* **Integrating Gemini AI**: Ensuring accurate fish identification and generating coherent, engaging stories required fine-tuning and creative problem-solving.
* **3D Rendering**: Creating a performant and visually appealing aquarium in Three.js pushed our graphics programming skills to the limit.
* **Data Management**: Structuring our database to efficiently store and retrieve diverse catch data presented unique challenges.
* **User Experience**: Balancing feature-rich functionality with an intuitive, streamlined interface was a constant tug-of-war.

Despite these challenges, or perhaps because of them, our team grew stronger and more resourceful. Each obstacle overcome was like landing a prized catch, making the final product all the more rewarding. As we cast our project out into the world, we're excited to see how it will evolve and grow, much like the tales of fishing adventures it's designed to capture.
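As a sketch of the identification call: the project's backend is Node.js/Express, so this Python version with the `google-generativeai` SDK is illustrative only, and the model name and API key are placeholders:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR-KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name assumed

def identify_catch(photo_path: str) -> str:
    """Ask Gemini to name the species and describe the catch."""
    prompt = ("Identify the fish species in this photo and give a one-line "
              "note on its rarity and typical habitat.")
    response = model.generate_content([prompt, Image.open(photo_path)])
    return response.text
```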
## Inspiration

The team's members are part of a working/studying community that tends to rely on caffeine to improve performance and focus. Over the years, this community has been expanding, as drinking a coffee or energy drink has become a trending activity, fostering new types of beverages in the market and coffee shops. While caffeine benefits one's performance, people tend to overestimate the advantages and neglect caffeine's side effects. If you consume it in the wrong amount or at the wrong time, caffeine can turn the entire productivity equation against you: you will feel more fatigued, or suffer headaches, excessive anxiety, or harmed sleep quality. The team's members and many others have experienced these side effects without knowing the underlying cause, feeling even more frustrated because they assumed that caffeine would enhance their performance. Therefore, the team created the Caffeine Intake Advisor to prevent people from consuming caffeine in an ineffective and potentially unhealthy way.

Zepp's smartwatch also contributed to the ideation process. The advisor relies on a person's biological data to recommend accurate caffeine amounts, thus making the user's life easier. Given that Zepp's smartwatch is capable of collecting relevant and varied data, such as sleep quality and stress level, the team is confident that the product will impact people's well-being.

## What it does

In terms of direct interaction with the user: when the user wants to consume caffeine ahead of a productive session, they enter the Caffeine Intake Advisor app on the smartwatch. The app asks about their caffeinated beverage or food preference and their goal (duration of the work session), and then, based on datasets that reveal the user's past biological responses to specific amounts of caffeine, the user's on-time health metrics, and the time of day, recommends a final amount of caffeine to the user.

In terms of what happens in the back end: to recommend the amount of caffeine a user can drink per day, the app needs to know the user's regular caffeine intake and the user's caffeine sensitivity. These variables are measured through Step 2 below.

## How the user's data will be collected and used:

### 1. First, the app gives an initial questionnaire when the person creates the account

The questionnaire mainly aims to gather this information:

1. **Serving size:** the amount of caffeinated drinks the person drinks per day (as well as the amount of caffeinated food)
2. **Time:** the usual time period in which the person is used to intaking caffeine
3. **Variability:** does the person consume the same serving size every day, or do they drink more at specific times of the day?
4. **Duration of habit:** how long the person has had this habit

### 2. After, the app will measure, for 5 days, the caffeine sensitivity based on this real-time data:

1. Sleeping pattern: sleep time and wake-up time
2. Here is how the machine will track the sleeping pattern in these 5 days:
   1. **Identify the sleeping-pattern baseline:** the usual sleep time and wake-up time before the 5-day observation period. This can be extracted from the questionnaire above.
   2. **Identify the effects on the sleeping pattern based on intake variation:** during the monitoring week, the person can consume different amounts of caffeine and vary the timing of caffeine intake on different days. For example, they might have a standard amount of caffeine (their usual) on some days and a reduced or increased amount on other days.
They can also adjust the time of caffeine consumption, such as having caffeine earlier or later in the day. Record the sleep-related data at regular intervals throughout the monitoring week.

**After getting these data, the machine should:**

* Compare sleep patterns on days with standard caffeine intake to days with reduced or increased caffeine intake.
* Regular timing and sleep pattern: compare sleep patterns on days with standard caffeine intake according to how many hours before sleeping time the caffeine was consumed (this can be extracted from the time of day the person consumes caffeine).

1. Stress level and heart rate
2. Here is how the machine will track stress and heart rate in these 5 days:
   1. **Identify the person's stress-level and heart-rate baseline:** start recording stress levels and heart rate from the moment the individual wakes up in the morning, before consuming any caffeine.
   2. **Measure caffeine's direct impact on the person:** measure the stress and heart rate immediately after the person consumes their first dose of caffeine for the day.

### 3. After learning the user's regular caffeine intake over 5 days, the watch can now recommend a caffeine amount every time the user wants to consume.

To do that, the machine needs to calculate the `x amount of caffeine per y hours of focus` without yielding negative effects, using this formula:

**Caffeine Amount (mg) = Regular Caffeine Intake × Caffeine Sensitivity Factor × Study Duration × Time Gap Factor**

EXAMPLE:

```
**Hypothetical Values:**

- Regular Caffeine Intake: 200 mg (the individual's typical daily caffeine consumption)
- Caffeine Sensitivity Factor: 0.5 (a multiplier representing the individual's moderate caffeine sensitivity)
- Study Goal: Stay awake and enhance focus
- Study Duration: 4 hours (the intended duration of the study session)
- Time of Study: 7:00 PM to 11:00 PM (4 hours before the individual's typical bedtime at 11:00 PM)
- Desired Sleep Quality: The individual prefers to have high-quality sleep without disruptions.

**Simplified Calculation:**

Now, we'll consider the timing of caffeine consumption and its impact on sleep quality to estimate the amount of caffeine needed:

1. **Assessing Timing and Sleep Quality:**
   - Calculate the time gap between the end of the study session (11:00 PM) and bedtime (11:00 PM). In this case, it's zero hours, indicating the study session ends at bedtime.
   - Since the individual desires high-quality sleep, we aim to minimize caffeine's potential effects on sleep disruption.

2. **Caffeine Amount Calculation:**
   - To achieve the study goal (staying awake and enhancing focus) without impacting sleep quality, we aim to use the caffeine primarily during the study session.
   - We'll calculate the amount of caffeine needed during the study session to maintain focus, which is the 4-hour duration.
```

* *Time gap factor* = hours before sleep time (time of caffeine intake − sleep time)
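To make the formula concrete, here is a minimal sketch that plugs in the hypothetical values from the example above; the time gap factor value is assumed purely for illustration:

```python
def recommended_caffeine_mg(regular_intake_mg: float,
                            sensitivity_factor: float,
                            study_duration_h: float,
                            time_gap_factor: float) -> float:
    """Caffeine Amount (mg) = Regular Intake x Sensitivity x Duration x Time Gap."""
    return (regular_intake_mg * sensitivity_factor
            * study_duration_h * time_gap_factor)

# Values from the hypothetical example: 200 mg regular intake, 0.5
# sensitivity, a 4-hour session ending at bedtime. Since the session ends
# at bedtime, a small time gap factor (assumed here) caps the dose to
# protect sleep quality.
print(recommended_caffeine_mg(200, 0.5, 4, 0.25))  # -> 100.0 mg
```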
## How we built it

We used intel data to train an algorithm and used MindsDB to connect the machine learning algorithm with our software. The process can be categorized into several components:

### Researching key factors and features to include in the smartwatch

We went through several pieces of scientific research about what factors affect people's caffeine intake, and how different amounts of caffeine impact performance and trigger side effects based on people's caffeine sensitivity level and regular caffeine intake. We also organized a table that specifies types of caffeinated beverages and food according to the amount of caffeine based on their quantity in different units (for example, in grams or ounces).

### UI/UX design / product management

We first designed two types of wireframes for the smartwatch and tested which provided an easier and smoother process, so the user can get their caffeine intake recommendation as easily, conveniently, and accurately as possible. After deciding on the user flow, we looked into specific features and their placements on the interfaces, brainstorming the questions:

* Which feature is needed to accomplish task x?
* Which feature is relevant (but not necessary) to improve the user's experience during task x?
* What is the hierarchy of the visual and textual elements that prevents cognitive load (considering the smartwatch interface) and appeals to the user's intuitive navigation in the app?

### Training algorithms

We make use of MindsDB pre-trained models to predict the amount of caffeine each person can have according to their goal and their body's condition. This algorithm is based on two types of datasets:

* existing datasets online backed up by scientific research, which include the biological factors that contribute to a user's caffeine sensitivity
* real-time data collected from each user's smartwatch about their on-time body reactions (through heartbeats, stress level, and sleep quality) in the period of caffeine consumption

After research and training, the algorithm derives the formula: Caffeine Amount (mg) = Regular Caffeine Intake × Caffeine Sensitivity Factor × Study Duration × Time Gap Factor.

### Code

To implement the back-end code on the hardware and display it in the smartwatch's interface, the team used JavaScript in VS Code. Additionally, we integrated our work with the Zepp OS API and its AutoGUI, and used Figma to visualize the UI/UX aspect.

## Challenges we ran into

One of the most significant challenges we met was discerning which features of the app to focus on, so we could maximize the social impact given the time and resource constraints of the hackathon. There are not enough real user datasets available to the public because of the confidentiality of human biological data and the lack of existing solutions that use such datasets (the topic of caffeine intake has been limited to scholarly research, and not applied broadly in today's enterprise solutions). This was time-consuming and frustrating at first, since we didn't know which problem to work on, as we didn't have previous user experiences to refer to. Therefore, we had to put extra effort and time into the research outlined in the section above, in which we had to calculate the mathematical formulas with MindsDB, and from them, hypothesize the values and elaborate our own datasets.

## Accomplishments that we're proud of

One of our most notable achievements is integrating software with hardware, given that no one on the team had any prior experience in this kind of development. Another accomplishment was working around our constraints to come up with a realistic and effective solution. Since there were no datasets regarding people's reactions to varying amounts of caffeine, we had to draw on other types of data to estimate approximate statistics about a user's caffeine sensitivity and regular intake.
For example, we researched how factors like heart rate and sleep quality affect caffeine's effect on the user, and applied the insights to a mathematical formula to generate the data we needed.

## What we learned

The team improved its ability to connect the application's front end with the back end. Additionally, we enhanced our skills in critical thinking, helping us decide which datasets to gather and how to use them effectively to benefit the user. Moreover, we honed our problem-solving skills to explore methods that can have a substantial impact on the user. Lastly, we enhanced our communication skills by presenting the key aspects of our solution concisely and providing clear responses to the judges' questions.

## What's next for Caffeine Intake Recommender

As we continue to develop our app, our aim is to make it more tailored to our users' needs. To provide even more personalized recommendations, we will add questionnaire features for individual factors such as age, medications, pregnancy, menstrual cycles, and caffeine preferences. Our algorithms will monitor real-time data on users' responses to caffeine consumption and refine the predictions accordingly. Moreover, we are working on integrating our app with other health and fitness apps and devices to create a more comprehensive view of users' health and fitness data. With this approach, users can get a more holistic understanding of their health and fitness. Specifically, we plan to add caffeine intake tracking to AI assistants such as Apple's Siri and Amazon's Alexa, with simple commands like "Alexa, log a cup of espresso." These advancements will enable users to keep track of their caffeine intake more effectively and help them make better decisions for their overall health and wellness.
## Inspiration

The inspiration for this project was finding a way to incentivize healthy activity. While the watch shows people data like steps taken and calories burned, that alone doesn't encourage many people to exercise. With this app, we hope to turn exercise into a game that people look forward to rather than something they dread.

## What it does

Zepptchi is an app that gives the user their own virtual pet to take care of, similar to a Tamagotchi. The watch tracks the steps the user takes and rewards them with points depending on how much they walk. With these points, the user can buy food to nourish their pet, which incentivizes exercise. Beyond this, they can earn points to customize the appearance of their pet, which further promotes healthy habits.

## How we built it

To build this project, we started by setting up the environment in the Huami OS simulator on a MacBook. This allowed us to test the code on a virtual watch before implementing it on a physical one. We used Visual Studio Code to write all of our code.

## Challenges we ran into

One of the main challenges we faced with this project was setting up the environment to test the watch's capabilities. Out of the 4 of us, only one could successfully install it. This was a huge setback for us, since we could only write code on one device. This was worsened by the fact that the internet was unreliable, so we couldn't collaborate through other means. One other challenge was

## Accomplishments that we're proud of

Our group was most proud of solving the issue where we couldn't get an image to display on the watch. We had been trying for a couple of hours to no avail, but we finally found out that it was due to the size of the image. We are proud of this because fixing it showed that our work hadn't been for naught, and we got to see our creation working right in front of us on a mobile device. On top of this, this is the first hackathon any of us have ever attended, so we are extremely proud of coming together and creating something potentially life-changing in such a short time.

## What we learned

One thing we learned is how to collaborate on projects with other people, especially when we couldn't all code simultaneously. We learned how to communicate with the one who *was* coding by asking questions and making observations to get to the right solution. This was much different from what we were used to, since school assignments typically only have one person writing code for the entire project. We also became fairly well-acquainted with JavaScript, as none of us knew how to use it (at least not that well) coming into the hackathon.

## What's next for Zepptchi

The next step for Zepptchi is to include a variety of animals/creatures for the user to have as pets, along with any customization that might go with them. This is crucial for the longevity of the game, since people may no longer feel incentivized to exercise once they obtain the complete collection. Additionally, we can include challenges (such as burning x calories in 3 days) that give specific rewards to the user, which can stave off the repetitive nature of walking steps, buying items, walking steps, buying items, and so on. With this app, we aim to gamify a person's well-being so that their future can be one of happiness and health.
## Inspiration

Our team wanted to improve the daily lives of people in our society and in third-world countries. We realized that a lot of fatigue is caused by dehydration, which is easily remedied by drinking more water. However, we often forget to as our lives get busy, but what we don't forget is to check our phones every minute! We wanted to pair a healthier habit with our phones, to help remind us to drink enough water every day. We also realized the importance of drinking clean, pure water, and that some people in this world are not privileged enough to have it. Our product promotes the user's physical well-being, encourages better drinking habits, and raises awareness of the impure water that many individuals have to drink.

## What it does

The bottle senses the resistance of the water and uses this data to determine whether or not the water is safe to drink. The bottle also senses the change in its mass to determine your daily intake. Using this data, it will send a text message to your phone to remind you to drink water and to tell you whether the water you are about to drink is safe.

## How we built it

The resistance sensor is essentially a voltage divider. The voltage produced by the Photon is split between a known resistance and the water of unknown resistance. The voltage across the water, the total voltage, and the resistance of one resistor are known. From there, the program conducts multiple trials and chooses the most accurate data to calculate the water's resistance. The pressure sensor senses the pressure placed on it and changes its resistance accordingly; its voltage is then recorded and processed within our code.

The changes in pressure and resistance sent from the sensors first pass through the Google Cloud Platform publisher/subscriber API. They then proceed to a Python script, which sends the data back to Google Cloud, but this time to the datastore, which, optimally, would use machine learning to analyze the data and figure out the information to return. This processed information is then sent to a Twilio script so it can be delivered as a text message to the designated individual's phone number.

## Challenges we ran into

Our biggest challenge was learning the new material in a short amount of time. A lot of the concepts were quite foreign to us, and learning them took a lot of time and effort. Furthermore, there were several issues and inconsistencies with our circuits and sensors. They were quite time-consuming to fix, and required us to trace back our circuits and modify the program. However, these challenges were more than enjoyable to overcome and an amazing learning opportunity for our entire team.

## Accomplishments that we're proud of

Our team is firstly proud of finishing the entire project while using unfamiliar software and hardware. It was our first time using Google Cloud Platform and the Particle Photon, and a lot of the programming was quite new to us. The project required a lot of intricate design and programming. There were a lot of small and complex parts to the project, and given the time constraint and minor malfunctions, it was very difficult to accomplish everything.

## What we learned

Our team built on our previous knowledge of programming and sensors. We learned how to integrate things with Google Cloud Platform, how to operate Twilio, and how to set up and use a Particle Photon. Our team also learned about the engineering process of designing, prototyping, and pitching a novel idea.
This gives us a better idea of what to expect if any of us decides to do a startup.

## What's next for SmartBottle

In the future, we want to develop an app that sends notifications to your phone instead of texts, and use machine learning to monitor your water intake and recommend how you should incorporate it into your life. More importantly, we want to integrate the electrical components within the bottle instead of the external prototype we have now. We imagine the force sensor still being at the bottom, and a sleeker design for the resistance sensor.
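A sketch of the voltage-divider arithmetic described under "How we built it", assuming a 3.3 V supply, a 10 kΩ known resistor, and an illustrative safety threshold. The actual firmware runs on the Particle Photon, so this Python version captures only the math:

```python
from statistics import median

def water_resistance(v_total: float, v_water: float, r_known: float) -> float:
    """Solve the voltage divider for the unknown water resistance:
    v_water = v_total * R_water / (r_known + R_water)."""
    return r_known * v_water / (v_total - v_water)

# Hypothetical ADC readings (volts across the water) over several trials;
# taking the median mirrors "choosing the most accurate data".
readings = [1.62, 1.58, 1.65, 1.60]
r_est = median(water_resistance(3.3, v, r_known=10_000) for v in readings)

# Illustrative rule of thumb: purer water conducts less, so unusually low
# resistance suggests dissolved contaminants. A real threshold needs calibration.
is_safe = r_est > 20_000
print(f"{r_est:.0f} ohms -> {'safe' if is_safe else 'check before drinking'}")
```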
partial
## Inspiration

Humans left behind nearly $1,000,000 in change at airports worldwide in 2018. Imagine what that number is outside of airports. Now, imagine the impact we could make if that leftover money went toward charities that make the world a better place. This is why we should SpareChange: bringing change, with change.

## What it does

This app rounds up each purchase to the nearest dollar and donates the accumulated difference to charities and nonprofits.

## How we built it

We built a cross-platform mobile app using Flutter, powered by a Firebase backend. We set up Firebase Authentication to ensure secure storage of user info, and used Firebase Cloud Functions to keep developer credentials locked away in our secure cloud. We used the Capital One hackathon API to simulate bank accounts, transactions between accounts, and withdrawals.

## Challenges we ran into

1. Implementing a marketplace of organizations that could automatically suggest tangible items for the user to donate to nonprofits in lieu of directly gifting money.
2. Getting Cloud Functions to work.
3. Properly implementing the APIs the way we needed them to function.

## Accomplishments that we're proud of

Making a functioning app. For several of us this was our first hackathon, so it was amazing to have such a great first experience. Overcoming obstacles: it was empowering to prevail over hardships that we had previously thought impossible to clear. And creating something with the potential to help other people live better lives.

## What we learned

On the surface, tools such as Flutter, Dart, and Firebase, which we found very useful. More importantly, we realized how quickly an idea can come to fruition. The perspective was really enlightening. If we could make something this helpful in a weekend, what could we do in a year? Or a lifetime?

## What's next for SpareChange

We believe that SpareChange could work at large scale. We would love to experiment with real bank accounts and see how it works, and also build more features, such as using machine learning to recommend the best charities to donate to based on real-time news.
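The round-up rule at the heart of the app is simple enough to sketch. Using `Decimal` avoids the float errors you would accumulate over many small donations; in the real app this logic would presumably live in a Cloud Function next to the simulated Capital One transaction data:

```python
from decimal import Decimal, ROUND_CEILING

def round_up_donation(amount: str) -> Decimal:
    """Return the spare change between a purchase and the next whole dollar."""
    spent = Decimal(amount)
    rounded = spent.quantize(Decimal("1"), rounding=ROUND_CEILING)
    return rounded - spent

# A whole-dollar purchase yields no donation; everything else rounds up.
assert round_up_donation("4.35") == Decimal("0.65")
assert round_up_donation("7.00") == Decimal("0")

# Accumulate a week's transactions into one donation amount.
week = ["4.35", "12.10", "3.99"]
print(sum(round_up_donation(t) for t in week))  # 1.56
```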
## Github Repository

<https://github.com/deltahacksiii/deltahacksiii>

## Inspiration

Taking out a loan can be difficult, especially when interest rates are so high, and many loan sharks seem to have ulterior motives when you can't find other means. Some groups have an increased difficulty due to their situation; they may be immigrants with a language barrier, refugees without a credit history, or people with a lower income striving for an education. What if there were an app that provided a trustworthy platform for more open-minded lenders and put the focus back on benefiting borrowers as much as possible?

## What it does

Lendr is a reverse-auction loan community where borrowers can post an amount of money they need to borrow, and lenders bid lower and lower interest rates; the lowest rate takes the deal. Lenders are matched to borrowers in a Tinder-style queue. A lender can swipe right on a loan and bid a lower interest rate if they find the borrower's profile promising and trustworthy, or swipe left if they are not interested. This way, every borrower gets an ideal match with minimal interest charged and a personal connection to a lender. The process becomes a lot more fun and welcoming.

How can the world of finance benefit from this idea? These loans and money transfers can take place on the platforms of financial institutions, and can help future customers build up a sense of responsibility. Banks can take part in the bidding too and earn some extra income. Tracking all the activity occurring in the community can provide some interesting insights and analytics about the industry and the current state of the economy.

## How we built it

Lendr is a web application built on Node.js with the Express framework and MySQL. We made use of the Cloud9 IDE for quick setup and collaboration. We also have a fancy landing page made with Wix.

## Challenges we ran into

Sending information to ourselves from the frontend proved to be harder than we expected. We had to look into some hacks and workarounds, and ended up settling on an invisible form method. We were all new to Node.js, so that was also a challenge to get started on.

## Accomplishments that we're proud of

A working product! We are excited to see how people will use and interact with our project.

## What we'd do differently...

Node.js was rewarding to learn, but we would have worked on a mobile application if we had more time for the setup and learning curve. A full software stack such as MEAN would have made it easier to set up the database and build nicer-looking views. We'd also like to reorganize and separate our code and implement a real sign-up/login process rather than having everything wide open.

## What's next for Lendr

A mobile version for users on the go! Payments done completely through the app/website and/or partnerships with banks! Integration of algorithmic, real-time bidding! A media centre with testimonials, articles, and follow-ups from users! Machine learning to prioritize the loan match queue!
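The reverse-auction matching rule described above reduces to picking the minimum-rate bid. A minimal sketch with hypothetical lenders (the real app stores bids in MySQL rather than in memory):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    lender: str
    rate: float  # annual interest rate offered

def winning_bid(bids: List[Bid]) -> Optional[Bid]:
    """Reverse auction: the lowest offered rate takes the deal."""
    return min(bids, key=lambda b: b.rate, default=None)

bids = [Bid("alice", 0.085), Bid("bob", 0.062), Bid("carol", 0.071)]
print(winning_bid(bids))  # Bid(lender='bob', rate=0.062)
```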
## Our Inspiration, what it does, and how we built it

We wanted to work on something that challenged the engineering of today's consumer economy. As college students across different campuses, we noticed the common trend of waste, hoarding, and overspending among students. At the core of this issue is a first instinct to buy a solution, whether a service or a product, when a problem arises. We did some market research among fellow hackers and on our colleges' subreddits, finding that students have no choice but to pay for items and services or go without them. To solve this, we wanted to introduce a platform that gives students an alternative way to pay for items, letting them leverage the typically illiquid assets they already have.

## Challenges we ran into

We wanted to keep development light, so we chose to use React and Convex to abstract away many of the details associated with full-stack development. Still, one of our biggest challenges was getting everyone up to par in terms of technical ability. We are students from all sorts of backgrounds (from cognitive science to business to CS majors!) with varying levels of development experience.

## Accomplishments that we're proud of and what we learned

That's why, as we finished the final steps of the hackathon, we felt so proud of being able to power through and produce a functional product of our vision. All of us grew and learned immensely about software development, converting ideas into tangible designs (using tools such as Figma and DALL-E), and, most importantly, the "hacker" mindset. We have all taken so much away from this experience.

## What's next for BarterBuddies

Our long-term vision for the app is to become the go-to platform for bartering and item trading among young adults. We plan to expand and grow beyond the college student market by developing partnerships with other organizations and by continually iterating on the platform to meet the changing needs of our users.
partial
## Inspiration

In the maze of social interactions, we've all encountered the awkward moment of not remembering the name of someone we've previously met. Face-Book is our solution to the universal dilemma of social forgetfulness.

## What it does

Face-Book is an iPhone app that (discreetly) records and analyzes faces and conversations using your phone's camera and audio. Upon recognizing a familiar face, it instantly retrieves their name, gathered during past saved conversations, as well as past interactions and interesting tidbits: a true social lifesaver.

## How we built it

Swift, Xcode, AWS Rekognition for face matching, speaker diarization, and OpenAI.

## Challenges we ran into

We navigated the uncharted waters of ethical boundaries and technical limitations, but our vision of a seamlessly connected world guided us. We didn't just build an app; we redefined social norms.

## Accomplishments that we're proud of

We take pride in Face-Book's unparalleled ability to strip away the veil of privacy, presenting it as the ultimate tool for social convenience. Our app isn't just a technological triumph; it's a gateway to omnipresent social awareness.

## What we learned

Our journey revealed the immense potential of data in understanding and predicting human behavior. Every interaction is a data point, contributing to an ever-growing atlas of human connections.

## What's next for Face-Book

The future is limitless. We envision Face-Book as a standard feature in smartphones, working hand-in-hand with governments worldwide. Imagine a society where every face is known, every interaction logged: a utopia of safety and social convenience. Porting it to an AR platform would also be nice.
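The face-lookup step described above maps naturally onto Rekognition's collection search. This sketch uses boto3 calls from the public API, but the collection name and thresholds are placeholders, not the app's actual configuration (and the app itself is written in Swift, not Python):

```python
import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured

def identify(face_jpeg: bytes, collection_id: str = "acquaintances"):
    """Look a captured face up in a Rekognition collection of past encounters.
    "acquaintances" is a hypothetical collection the app would index faces
    into as conversations are saved."""
    resp = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": face_jpeg},
        FaceMatchThreshold=90,
        MaxFaces=1,
    )
    matches = resp.get("FaceMatches", [])
    if matches:
        # ExternalImageId would hold the person's name set at indexing time.
        return matches[0]["Face"].get("ExternalImageId"), matches[0]["Similarity"]
    return None, 0.0
```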
## Inspiration

In our day-to-day lives, social media claims the majority of our attention. We recognize the addiction that exists within society and how it glues users to their phones instead of letting them engage with their surroundings. We were inspired to create an environment that allows users to be sociable with the world around them without enclosing them in the digital one.

## What it does

Our app takes the essential features of major social media platforms (Facebook messaging and the Instagram image gallery) to create an online space that reflects the relationships between people. We want a space where memories are cherished and where people can connect after an initial get-together. The app uses facial recognition technology as the gateway to access and interact within this platform: only when a user is physically hanging out with another person (or people) can they use the app to add one another.

## How I built it

We split the roles evenly between our 3 team members. Our back-end developer worked with Django, incorporating the Face API and a MySQL database from Microsoft Azure, while our full-stack/front-end developer built the app in React Native. Our UX designer worked on the visuals and user flow simultaneously.

## Challenges I ran into

On top of pivoting ideas MANY times, we had quite a few system challenges. The biggest was connecting the Microsoft Azure database to our environment. We also came across OS permission issues on our Android phone when trying to access the camera functionality.

## Accomplishments that I'm proud of

We are proud of implementing Microsoft Azure and the Face API, resolving our issues, and building a cross-platform app in React Native with little to no prior experience.

## What I learned

We learned about cloud deployment and database migration.

## What's next for Linkr

Changing header colours to match our emotions as we communicate online with our friends (to build awareness of our own and our peers' emotional states).
## Inspiration

Security and facial recognition were our main interests when planning our project for this Makeathon. We were inspired by a documentary in which a man was convicted of murder by the police department in Los Angeles. The man was imprisoned for six months, away from his daughter and wife. He had been wrongfully convicted, and this was discovered through a video showing evidence that he was present at an LA Dodgers game at the time of the alleged murder. Thus, he was set free. This story truly impacted us from an emotional standpoint, because the man had to pay the hefty price of six months of prison time for no reason. It exposed us to the world of facial recognition and software that can help identify faces that are not explicitly shown. We wanted to employ software that would help identify faces using preloaded, pretrained models.

## What it does

The webcam takes a picture of the user's face and compares it to preloaded images of the user's face from the database. The algorithm then draws boxes around the user's face and eyes.

## How I built it

To build this project, we used a PYNQ board, a computer with an Ethernet cable, several cables to power the PYNQ board, pretrained classifier models (XML files) to identify faces, and Python to power the software. We used a microprocessor, an Ethernet cable, an HDMI cable, and a webcam as the peripherals for the PYNQ board. The Python code, coupled with the XML files trained to recognize faces and eyes, ran on a Jupyter platform to display the picture taken, with boxes drawn around the face and eyes.

## Challenges I ran into

We faced a plethora of problems while completing this project, ranging from gaps in technical knowledge to unexpected hardware malfunctions. The first issue was being given an SD card for the PYNQ board that was not preloaded with the required information. This meant we had to download a 1.5 GB PYNQ image from pynq.io. Since this could lead to further difficulties, we decided to swap the SD card for one that was preloaded, which cost us valuable time debugging the PYNQ board. Another issue was that the SD card became corrupted. This happened because we unintentionally uploaded files to the Jupyter platform by clicking "Upload" and choosing the files from our personal computer. What we should have done was map a network drive to load the files from our personal computer into Jupyter successfully; that way, we would have been able to load pictures for recognition. The final issue was trying to import the face recognition API that was developed by the Massachusetts Institute of Technology. We did not know how to import the library for use, and given more time, we would explore that avenue further, as this was our very first hackathon. We would export it to the PYNQ folder and not the data folder, a detail that was elaborated upon by the Xilinx representative.

## Accomplishments that I'm proud of

Loading code and images from our computers onto the PYNQ board. We were also able to link a web camera to the board and analyze the pictures taken from it.
## What I learned

As a team, we learned more about neural networks and how PYNQ board technology could be incorporated into various areas, including our intended purpose of security. Specifically, we learned how to use Jupyter and Python as tools to create these embedded systems, and we even got to explore ideas from machine learning.

## What's next for PYNQ EYE

Our project can recognize users by their facial features, which has a huge application in the security industry. In workplaces where security staff check employees and their IDs before they enter company premises, this technology could prove useful. Automated facial recognition would allow employees to show their face to a camera and be granted access to the building, removing the need for extra security detail and for identification that could easily be falsified, making the premises much safer. Another application is home security, where the facial recognition system would disable home alarms upon recognizing the faces of the property's residents. Such applications show that this project has the potential to boost security in the workplace and at home.
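The box-drawing step described under "What it does" is the classic OpenCV cascade pattern; the pretrained cascade XML files shipped with OpenCV stand in here for the classifier files loaded onto the PYNQ board. A sketch:

```python
import cv2

# Load OpenCV's pretrained cascade XML files for faces and eyes.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)   # the USB webcam attached to the board
ok, frame = cap.read()      # take one picture of the user's face
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)  # box around the face
    roi_gray = gray[y:y + h, x:x + w]                             # search for eyes inside the face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
        cv2.rectangle(frame[y:y + h, x:x + w], (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imwrite("annotated.jpg", frame)  # view the result in the Jupyter notebook
```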
losing
## Inspiration

As a team, we've all witnessed the devastation that muscular-degenerative diseases such as Parkinson's inflict on the family members of the afflicted. Because we didn't have the money, resources, or time to research and develop a new drug or other treatment for the disease, we wanted to make the medicine already available as effective as possible. So, we decided to focus on detection: the earlier a patient can recognize the disease and report it to their physician, the more effective existing treatments become.

## What it does

HandyTrack uses three tests: a Flex Test, which tests the user's ability to bend their fingers into a fist; a Release Test, which tests the user's speed in releasing the fist; and a Tremor Test, which measures the user's hand stability. The results of all three tests are stored and used, over time, to look for trends that may indicate symptoms of Parkinson's: a decrease in muscle strength and endurance (the ability to make a fist), an increase in time spent releasing the fist (muscle stiffness), and an increase in hand tremors.

## How we built it

For the software, we built the entire application in the Arduino IDE using C++. As for the hardware, we used 4 continuous-rotation servo motors, an Arduino Uno, an accelerometer, a microSD card, a flex sensor, and an absolute abundance of wires. We also used a 3D printer to make rings for the user's individual fingers. The 4 continuous-rotation servos provide resistance against the user's hand. The flex sensor, which is attached to the user's palm, controls the servos; the more bent the sensor is, the faster the servos rotate. The flex sensor is also used to measure the time it takes for the user to release the fist, i.e., the time it takes the sensor to return to its original position. The accelerometer detects changes in the position of the user's hand, which represent hand tremors. All of this data is sent to the SD card, which in turn allows us to review trends over time.

## Challenges we ran into

Calibration was a real pain in the butt. Every time we changed the circuit, the flex sensor values would change. Also, developing accurate algorithms for the functions we wanted to write was kind of difficult. Time was a challenge as well; we had to stay up all night to put out a finished product. And because the hack is so hardware-intensive, we only had one person working on the code for most of the time, which really limited our options for front-end development. If we had an extra team member, we probably could have made a much more user-friendly application that looks quite a bit cleaner.

## Accomplishments that we're proud of

Honestly, we're happy that we got all of our functions running. It's kind of difficult having only one person code most of the time. Also, we think our hardware is on point. We mostly used cheap products and Arduino parts, yet we were able to make a device that can help users detect symptoms of muscular-degenerative diseases.

## What we learned

We learned that we should always have a person dedicated to front-end development, because no matter how functional a program is, it also needs to be easily navigable.

## What's next for HandyTrack

Well, we obviously need to make a much more user-friendly app.
We would also like to create a database that stores values from multiple users, so that we can not only track individuals but also aggregate data of our own, using trends across users as a baseline against which to compare each individual and produce more accurate diagnostics.
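The trend-watching described above (rising release times suggesting stiffness) can be sketched as a least-squares slope over the session log read from the SD card. The measurements and the 0.02 flag threshold here are hypothetical:

```python
from statistics import mean

def trend_slope(values: list) -> float:
    """Least-squares slope of a measurement series over session index.
    A sustained positive slope in fist-release time suggests stiffness."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Hypothetical fist-release times (seconds) loaded from the SD card log.
release_times = [0.82, 0.85, 0.84, 0.91, 0.95, 1.02]
if trend_slope(release_times) > 0.02:
    print("Release time trending upward -- flag for physician review")
```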
## Inspiration

Currently, the insurance claims process is quite labour-intensive. A person has to inspect the car to approve or deny a claim, so we aim to alleviate this cumbersome process and make it smooth and easy for policyholders.

## What it does

Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout.

## How we built it

The frontend is built with just static HTML, CSS, and JavaScript. We used Materialize CSS to achieve some of our UI mocks created in Figma. Conveniently, we also created our own "state machine" to make our web app more responsive.

## Challenges we ran into

> I've never done any machine learning before, let alone tried to create a model for a hackathon project. It definitely took me quite a bit of time to understand some of the concepts in this field. *- Jerry*

## Accomplishments that we're proud of

> This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *- Jerry*

## What we learned

> Attempting a challenge with very little description of what it is actually asking for is like being a toddler stranded on an island. *- Jerry*

## What's next for Quick Quote

Things on our roadmap to improve Quick Quote:

* Apply Google Analytics to track users' movements and collect feedback to enhance our UI.
* Enhance our neural network model to enrich our knowledge base.
* Train our model with more evaluation to give it more depth.
* Include ads (mostly from auto companies).
## Inspiration

In third-world countries and in remote areas, there are few doctors available to diagnose disease. Among these diseases are strokes, which require immediate medical attention to avoid serious injury. Unfortunately, many do not receive treatment in time, simply because they cannot tell whether they are indeed having a stroke. Telemedicine offers a solution to this problem. However, it is only a partial solution: doctors can gain insight through visual and audio cues, but much is left out in regard to sensory and musculoskeletal information. Nervetelligence bridges this knowledge gap by providing physicians with data that could previously only be gathered through direct interaction with patients.

## What it does

Nervetelligence is a machine a patient inserts their arm into, along with an accompanying web application. The machine gathers three different data types: the existence of proprioception; light touch sensation in the palm and forearm; and musculoskeletal strength. The web application allows physicians to video chat with patients and record patients' responses to stimuli for further analysis.

To detect the existence of proprioception, patients extend their arm into the Nervetelligence box until their finger fits snugly in the farthest compartment. Then, a rotating wheel with two spokes pushes the phalanges either up or down, and patients are asked to report the direction of the push. Since stroke victims have difficulty recognizing their joints' positions in space, correct answers are a good sign.

Light touch sensation is tested through two hanging servo motors, one fitted with a pencil and the other with a leaf. These two objects are dragged across the skin as they oscillate back and forth, and patients report whether or not they felt the object. Missing or partial sensation can indicate a stroke.

Finally, musculoskeletal strength is measured with a Myo armband. The Nervetelligence box supports the testing of arm vitality through two distinct arm movements.

## How I built it

Nervetelligence was prototyped in a cardboard box. Attached are two breadboards, three 180-degree servo motors, two Arduino Unos, and one force sensor. The force sensor detects when a finger is inserted into the proprioception cavity. The Arduino Unos control the servo motors responsible for light touch sensation and are triggered at the doctor's command.

## Challenges I ran into

We faced challenges when trying to control the servo motors in real time. This proved difficult since they were controlled by Arduinos, which are microcontrollers and require a re-upload to change behavior. Furthermore, we faced obstacles when implementing video messaging functionality in the web application.

## Accomplishments that I'm proud of

We are proud of all of the hardware we used and its accompanying software. Most of us worked with hardware for the first time ever, so it was a tremendous learning experience. Also, I am proud of our exceptional teamwork. We were all strangers when we met through Facebook; now we can say we are all good friends!

## What I learned

We learned about breadboards, resistors, jumper wires, soldering, Arduinos, servos, force sensors, web development, the Myo armband, and the diagnosis of stroke.

## What's next for Nervetelligence

We want to expand the capabilities of Nervetelligence to gather information about other diseases, such as diabetes and rheumatoid arthritis.
A host of illnesses require physicians to probe the arm. Information such as pulse oximetry, peripheral nervous system response, blood glucose levels, respiratory rate, facial gestures, and temperature is valuable for assessing the progression of disease. We want Nervetelligence to grow into a one-stop shop for arm sensitivity tests. But we don't intend to stop there! Similar diseases afflict the feet and other body parts, which can be examined in the same way as arms.
winning
# Travel Itinerary Generator

## Inspiration

Traveling is an experience that many cherish, but planning for it can often be overwhelming. With countless events, places to visit, and activities, it's easy to miss out on experiences that could have made the trip even more memorable. This realization inspired us to create the **Travel Itinerary Generator**. We wanted to simplify the travel planning process by providing users with curated suggestions based on their preferences.

## What It Does

The **Travel Itinerary Generator** is a web application that assists users in generating travel itineraries. Users receive tailored suggestions on events or places to visit by simply entering a desired location and activity type. The application fetches this data using the Metaphor API, ensuring the recommendations are relevant and up-to-date.

## How We Built It

We began with a React-based frontend, leveraging components to create a user-friendly interface. Material-UI was our go-to library for the design, ensuring a consistent and modern look throughout the application.

To fetch relevant data, we integrated the Metaphor API. Initially, we faced CORS issues when fetching data directly from the front end. To overcome this, we set up a Flask backend to act as a proxy, making requests to the Metaphor API on behalf of the front end.

We utilized the `framer-motion` library for animations and transitions, enhancing the user experience with smooth and aesthetically pleasing effects.

## Challenges We Faced

1. **CORS Issues**: One of the significant challenges was dealing with CORS when trying to fetch data from the Metaphor API. This required us to rethink our approach and implement a Flask backend to bypass these restrictions.
2. **Routing with GitHub Pages**: After adding routing to our React application, we encountered issues deploying to GitHub Pages. It took some tweaking and adjustments to the base URL to get it working seamlessly.
3. **Design Consistency**: Ensuring a consistent design across various components while integrating multiple libraries was challenging. We had to make sure that the design elements from Material-UI blended well with our custom styles and animations.

## What We Learned

This project was a journey of discovery. We learned the importance of backend proxies in handling CORS issues, the intricacies of deploying single-page applications with client-side routing, and the power of libraries like `framer-motion` in enhancing user experience. Moreover, integrating various tools and technologies taught us the value of adaptability and problem-solving in software development.

## Conclusion

This journey was like a rollercoaster: thrilling highs and challenging lows. We discovered the art of bypassing CORS, the nuances of SPAs, and the sheer joy of animating everything! It reinforced our belief that we can create solutions that make a difference with the right tools and a problem-solving mindset. We're excited to see how travelers worldwide will benefit from our application, making their travel planning a breeze!

## Acknowledgements

* [Metaphor API](https://metaphor.systems/) for the search engine.
* [Material-UI](https://mui.com/) for styling.
* [Framer Motion](https://www.framer.com/api/motion/) for animations.
* [Express API](https://expressjs.com/) hosted on [Google Cloud](https://cloud.google.com/).
* [React.js](https://react.dev/) for web framework.
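The proxy pattern from the "CORS Issues" paragraph, sketched in Flask. The Metaphor endpoint path, payload shape, and header name are assumptions based on the public docs, so verify them against the current API before relying on this:

```python
# Minimal sketch of a same-origin Flask proxy. Because the browser only ever
# talks to this route, the cross-origin request below happens server-to-server
# and CORS never applies to it.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
METAPHOR_URL = "https://api.metaphor.systems/search"  # assumed endpoint

@app.route("/api/search", methods=["POST"])
def search_proxy():
    body = request.get_json(force=True)
    resp = requests.post(
        METAPHOR_URL,
        json={"query": body.get("query", ""), "numResults": 10},
        headers={"x-api-key": os.environ["METAPHOR_API_KEY"]},
        timeout=10,
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5000)
```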
## Inspiration

We were sitting together as a team after dinner when one of our team members pulled out her phone and mentioned she needed to log her food, noting how tedious she found the app she used (MyFitnessPal). This sentiment is shared by many users we've encountered, and we decided there must be a way to make this process simple and smooth!

## What it does

Artemis is an Amazon Alexa experience that changes the way you engage in fitness and meal tracking. Log your food, track your caloric intake, and know the breakdown of your daily diet with a simple command. All you have to do is tell Artemis that you ate something, and she'll automatically record it for you, retrieve all pertinent nutrition information, and see how it stacks up against your daily goals. Check how you're doing at any time by asking Artemis, "How am I doing?" or by looking up your stats, presented in a clear and digestible way, at [www.artemisalexa.com](http://www.artemisalexa.com)

## How we built it

We took the foods parsed from the spoken request, made a call to the Nutritionix API to get the caloric breakdown, and updated the backend server, which live-updates the dashboard. The smart-sensor water bottle tracks its water level using ultrasonic waves that bounce back with distance data.

## Challenges we ran into

It's difficult to model data beyond the two days we've been working on this project, and we wanted to model a much richer data set in our dashboard.

## Accomplishments that we're proud of

We're really proud of the product we've built!

* Polished and pleasant user experience
* Thorough conversational coverage: Artemis can sustain a pertinent conversation about healthy eating
* Wide breadth of data visualization
  + Categorical breakdown
  + Variance in caloric intake over the course of the day
  + Items consumed as percentages of the daily nutritional breakdown
* Light sensor for fluid color detection (anything besides water gets flagged, so no cheating with soda!)
* Ultrasonic sensor that measures water level

## What's next for Artemis

* We're hoping to build Fitbit integration so that Alexa can directly log your food into one app.
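The Nutritionix call described in "How we built it" might look like this server-side sketch. The endpoint and field names follow the public v2 natural-language API docs, but treat them as assumptions to verify:

```python
import requests

def nutrition_for(utterance: str, app_id: str, app_key: str) -> list:
    """Send the food phrase Alexa heard to Nutritionix's natural-language
    endpoint and pull back a per-item caloric breakdown. Field names follow
    the public v2 docs; confirm against the current API."""
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        json={"query": utterance},
        headers={"x-app-id": app_id, "x-app-key": app_key},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"food": f["food_name"], "calories": f["nf_calories"]}
        for f in resp.json().get("foods", [])
    ]

# e.g. nutrition_for("one bowl of oatmeal and a banana", APP_ID, APP_KEY)
# -> [{"food": "oatmeal", "calories": 158.0}, {"food": "banana", "calories": 105.0}]
```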
## Inspiration

There are many occasions where we see a place in a magazine, or in any image source online, and we don't know where the place is. There is no description anywhere, and a possible vacation destination may simply disappear into thin air. We certainly did not want to miss out.

## What it does

Take a picture of a place. Any place. And upload it onto our web app. We will not only tell you where that place is located, but also immediately generate a possible trip plan from your current location. That way, you will know how far away you are from your desired destination, as well as how feasible the trip is in the near future.

## How we built it

We first figured out how to use Google Cloud Vision to retrieve the data we wanted. We then processed pictures uploaded to our Flask application, retrieved the location, and wrote the location to a text file. We then used Beautiful Soup to read the location from the text file, and integrated the Google Maps API, along with numerous tools within the API, to display possible vacation plans and the route to the location.

## Challenges we ran into

This was our first time building a dynamic web app and using so many APIs, so it was pretty challenging. Our final obstacle, reading from a text file using JavaScript, turned out to be our toughest challenge: we realized it was not possible due to security concerns, so we had to do it through Beautiful Soup.

## Accomplishments that we're proud of

We're proud of being able to integrate many different APIs into our application, and of making significant progress on the front end, despite having only two beginner members. We encountered many difficulties throughout the building process, and had some doubts, but we were still able to pull through and create a product with an aesthetically pleasing GUI that users can easily interact with.

## What we learned

We got better at reading documentation for different APIs, learned how to integrate multiple APIs in a single application, and realized we could create something useful with just a bit of knowledge.

## What's next for TravelAnyWhere

TravelAnyWhere can definitely be taken to a whole other level. Users could be provided with different potential routes, along with recommended trip plans that visit other locations along the way. We could also allow users to add multiple pictures corresponding to the same location to get a more precise reading on the destination through machine learning techniques.
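The landmark lookup at the core of the pipeline is a single Cloud Vision call. A sketch using the google-cloud-vision client (credentials assumed to be configured); the returned name and coordinates are what would feed the Maps trip-planning step:

```python
from google.cloud import vision

def locate_place(image_path: str):
    """Use Vision API landmark detection to name the place in a photo and
    return its coordinates for the trip-planning step."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    resp = client.landmark_detection(image=image)
    if not resp.landmark_annotations:
        return None  # no recognizable landmark in the photo
    landmark = resp.landmark_annotations[0]
    latlng = landmark.locations[0].lat_lng
    return landmark.description, (latlng.latitude, latlng.longitude)

# e.g. locate_place("magazine_scan.jpg") -> ("Eiffel Tower", (48.858, 2.294))
```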
partial
## Inspiration

As programmers, we collectively realized how much we disliked the process of creating a pitch site/landing page to explain our project; it took away precious time from working on the actual product! We recognized our shared need for a quick landing page solution that would sum up the basics of a project, idea, and solution for any viewer to understand.

## What it does

MyLandingPage creates a landing page for an emerging project within seconds. Based on an elevator pitch of a project, MyLandingPage uses Cohere's large language model to generate informative copy for the website (the headline, product description, benefits/solutions, and a call to action).

## How we built it

We used MongoDB, Express, React, Node, TypeScript, Cohere, Google Cloud Platform, CI/CD, and App Engine to create our final product.

## Challenges we ran into

We struggled to think of an idea early on in our coding process. We initially wanted to create a texting device with vision-tracking glasses, or use NLP to summarize complex textbooks into simpler text, and didn't come up with our final idea until Saturday morning. We also struggled to delegate all aspects of the project among our team members and to manage our time efficiently to get everything done before the deadline. However, we got better at settling on our main idea/problem statement and figured out how to allocate roles efficiently between the four team members.

## Accomplishments that we're proud of

* Finishing a prototype for demo day.
* Starting our work by defining the problem and empathizing with our users, rather than starting with the software product.
* Being able to whip together a successful NLP model in such a short amount of time.
* Successfully creating a clean user interface for the prototype.

## What we learned

It's important to have a plan early on in the development process: not just for the project itself, but for who is ultimately responsible for each aspect of the project and how long we want to allocate to it. It's also a good idea to lean on our team members' areas of expertise when creating a project in such a short period of time (e.g. one of our members has past experience working with NLP text generation, so we should have recognized this as a competitive advantage within our project earlier)!

## What's next for MyLandingPage

1. The option to customize the landing page by making edits/additions to the text and images, altering the placement of elements on the website, etc.
2. Giving users the option to personalize their domain, allowing for the shareability of the site.
3. Slideshow/pitch deck generation as an add-on to our landing site generation, to allow hackers and entrepreneurs to pitch easily.
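The copy-generation step might be sketched like this. The calls follow the classic cohere-python `generate` interface, which newer SDK versions have replaced with a chat-style API, so treat the exact signature as an assumption; the prompt wording is ours, not the team's:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def landing_copy(elevator_pitch: str) -> str:
    """Expand an elevator pitch into headline, description, benefits, and a
    call to action, mirroring the four copy sections the site generates."""
    prompt = (
        "Write landing-page copy with a headline, a product description, "
        "three benefits, and a call to action for this product: "
        f"{elevator_pitch}"
    )
    resp = co.generate(prompt=prompt, max_tokens=300, temperature=0.7)
    return resp.generations[0].text

# e.g. landing_copy("An app that turns your elevator pitch into a landing page")
```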
## Why Soundscape

Hacking for the hack of it. It is a great mantra, and one that we often take to heart. While there is significant value in hackathon projects that offer aid in difficult and demanding tasks, sometimes the most interesting hacks are those that exist for their own sake.

Soundscape takes a novel approach to an activity that many of us love: discovering music. Instead of letting the user simply respond "yea" or "nay" to an ever-increasing list of songs, Soundscape places you in the midst of the action and shows you a world of music right under your feet. Users can then pursue avenues they find interesting, search for new or exciting pieces, or merely wander through a selection of dynamically curated music. With Soundscape, you have a hack-of-a-lot of power.

## Functional Overview

Soundscape is a virtual reality application based on the Google Daydream platform. It curates data by crawling SoundCloud and building a relationship model of the songs in its repository. From there, it uses advanced graph search techniques to identify songs that are similar to each other, so that users can start with one song and shift the genre and style until they find something new that they enjoy.

## Technical Overview

Soundscape is built on top of Google's yet-unreleased platform for high-quality mobile virtual reality, Daydream. Developing most of the application's front end in Unity, we make use of this framework in conjunction with the existing Google Cardboard technology to help power a virtual experience that has high fidelity, low stutter, and intuitive input. The application itself is built in Unity, with custom hooks into the Daydream infrastructure to allow for a high-quality user interface.

The core functionality of Soundscape lies in our backend aggregation server, which runs a Node, MongoDB, and Express.js stack on top of Linode. This server fetches song, user, and playlist data through the SoundCloud API to generate similarity scores between songs, calculated from user comments and track favorites. This conglomerated data is then queried by the Unity application, alongside the standard SoundCloud data and audio stream. Search functionality within the app is also enabled through voice recognition powered by IBM's Watson Developer Cloud speech-to-text service. All of this works seamlessly together to power one versatile and unique music visualization and exploration app.

## Looking Forward

We are excited about Soundscape, and look forward to perfecting it for the final release of Google Daydream. Until then, we have exciting ideas about better search and ways to incorporate other APIs.
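One simple way to realize the favorites-based similarity scores described above is Jaccard overlap between the sets of users who favorited each track. The data here is invented, and the production server computes this in Node rather than Python:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two songs' favoriter sets as a similarity score."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical SoundCloud data: song id -> set of user ids who favorited it.
favorites = {
    "song_a": {1, 2, 3, 4},
    "song_b": {3, 4, 5},
    "song_c": {9, 10},
}

def similar_songs(seed: str, threshold: float = 0.2) -> list:
    """Edges of the similarity graph radiating out from a seed song."""
    return [
        other for other in favorites
        if other != seed and jaccard(favorites[seed], favorites[other]) >= threshold
    ]

print(similar_songs("song_a"))  # ['song_b']
```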
## Inspiration

Imagine you're sitting in your favorite coffee shop and a unicorn startup idea pops into your head. You open your laptop and choose from a myriad selection of productivity tools to jot your idea down. It's so fresh in your brain that you don't want to waste any time, so fervently you type, thinking of your new idea and its tangential components. After a rush of pure ideation, you take a breath to admire your work, but disappointment: now the hard work begins. You go back through your work, excavating key ideas and organizing them.

***Eddy is a brainstorming tool that brings autopilot to ideation. Sit down. Speak. And watch Eddy organize your ideas for you.***

## Learnings

Melding speech recognition and natural language processing tools required us to learn how to transcribe live audio, determine sentences from a corpus of text, and calculate the similarity of each sentence. Using complex and novel technology, each team member took a holistic approach and learned new implementation skills on all sides of the stack.

## Features

1. **Live mindmap**: Automatically organize your stream of consciousness by simply talking. Using semantic search, Eddy organizes your ideas into coherent groups to help you find the signal through the noise.
2. **Summary Generation**: Helpful for live note-taking, our summary feature converts the graph into a Markdown-like format.
3. **One-click UI**: Simply hit the record button and let your ideas do the talking.
4. **Team Meetings**: No more notetakers; facilitate team discussions through visualizations and generated notes in the background.

![The Eddy TechStack](https://i.imgur.com/FfsypZt.png)

## Challenges

1. **Live Speech Chunking**: To extract coherent ideas from a user's speech while processing the audio live, we had to design a paradigm that parses overlapping intervals of speech, creates a disjoint union of the sentences, and then sends these two distinct groups to our NLP model for similarity.
2. **API Rate Limits**: OpenAI rate limits required a more efficient processing mechanism for the audio and fewer round-trip requests for keyword extraction and embeddings.
3. **Filler Sentences**: Not every sentence contains a concrete and distinct idea. Some sentences go nowhere, and these can clog up the graph visually.
4. **Visualization**: The force graph is a premium feature of React Flow. To mimic this intuitive design as closely as possible, we added some randomness to node placement; however, building a better node placement system could help declutter and prettify the graph.

## Future Directions

**AI Inspiration Enhancement**: Using generative AI, it would be straightforward to add enhancement capabilities such as generating images for coherent ideas, or business plans.

**Live Notes**: Eddy can be a helpful tool for transcribing and organizing meeting and lecture notes. With improvements to our summary feature, Eddy will be able to create detailed notes from a live recording of a meeting.

## Built with

**UI:** React, Chakra UI, React Flow, Figma

**AI:** HuggingFace, OpenAI Whisper, OpenAI GPT-3, OpenAI Embeddings, NLTK

**API:** FastAPI

# Supplementary Material

## Mindmap Algorithm

![Mindmap Algorithm](https://i.imgur.com/QtqeBjG.png)
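A stripped-down sketch of the semantic grouping step: embed each sentence (e.g. with the OpenAI embeddings named in the tech stack), then greedily attach it to the first sufficiently similar group. The 0.8 threshold and the greedy single-pass scheme are simplifying assumptions, not Eddy's exact algorithm:

```python
import numpy as np

def group_sentences(embeddings: np.ndarray, threshold: float = 0.8) -> list:
    """Greedy single-pass clustering: each sentence joins the first group
    whose representative vector it is cosine-similar to, otherwise it starts
    a new group. Returns one group label per sentence."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    reps, labels = [], []
    for vec in unit:
        sims = [float(vec @ r) for r in reps]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            reps.append(vec)             # this sentence seeds a new idea-group
            labels.append(len(reps) - 1)
    return labels

# Tiny demo with made-up 3-d "embeddings": the first two sentences cluster.
demo = np.array([[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [0.0, 1.0, 0.0]])
print(group_sentences(demo))  # [0, 0, 1]
```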
partial
## Inspiration

Nearly 800,000 people in the US experience strokes every year, and over 140,000 of them die. The majority of strokes block blood flow to the brain, resulting in serious paralysis and difficulty with motor control. Despite the growing number of people suffering strokes every year, there are very few therapies targeting stroke rehabilitation. Our augmented reality app uses cutting-edge technology to bring accessible tools for building a personalized regimen, so users can regain mobility from the comfort of their own homes.

## What it does

Patients are guided through a simple rehabilitation exercise composed of hand poses known to help restore mobility. The app visualizes a holographic hand and encourages the user to follow along with it. It gives real-time feedback, using a machine learning model to detect how well the user has completed the exercise.

## How we built it

We trained a machine learning model on ~100 pictures through Azure Custom Vision. We then implemented this model with Core ML and Swift to accurately assess the position of the user's hand poses. Using ARKit, we created an augmented reality application that guides users to correctly perform certain hand poses.

## Challenges we ran into

Developing our first ML model. Creating an augmented reality iOS app for the first time and using ARKit/Swift.

## Accomplishments that we're proud of

Training our own ML model, building our first mobile app, and learning a new language.

## What we learned

The basics of training an ML model, a new programming language (Swift), and the ARKit workflow.

## What's next for Stroke Saver

Training on a more accurate dataset with more poses and less bias. Better gamification to encourage users to complete daily exercises. Integration with the Stanford Stroke Center app.
## Inspiration

* Whenever I have a pain in my chest, leg, or arm, I never know what to think or what to look up. Searching for "leg pain" is far too vague, and I am not able to name each part of my leg. There had to be a better way to assess that. After all, only 2 in 3,000 people are trained medical professionals. What if we could enable anyone to determine what their problem is?

## What it does

* Our iOS app provides an augmented reality experience backed by a computer vision algorithm to assess your symptoms when you are feeling ill and provide you with the most probable diagnosis. If you are feeling sick or having some sort of pain, you place pinpoints on the painful areas of your body. The app then processes those pinpoints to provide you with a list of possible issues. AR allows you to be extremely precise in indicating what area of your body is hurting or uncomfortable.

## How I built it

* We built our app using ARKit and Swift. Our API is built in Node.js and hosted on GCP. Our machine learning algorithms use Caffe and OpenCV for computer vision. Our website is written in Vue.js and also hosted on GCP. The website is live as well.

## Challenges I ran into

* We had a ton of issues with everything from domain deployment to POST requests.
* Figuring out the best way to translate 3-dimensional nodes in ARKit into usable coordinates for the ML algorithm, so it can figure out the exact body part each node points to.

## Accomplishments that I'm proud of

* The iOS app is working, the API is live, and the website is almost done.

## What I learned

* Learned about SceneKit, which can be used for making iOS games, and about ARKit, which is for augmented reality.
* We learned a lot about API calls and how different technologies integrate and work together.

## What's next for ExaminAR

* Better visualization of the AR, using, for example, an anatomy overlay. We set this idea aside because of the cost of those anatomy models.
* The ability to use a front-facing camera and thus not require assistance to operate.
## Inspiration

Millions of people around the world are either blind or partially sighted. For those whose vision is impaired but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people may be able to see more clearly.

## What it does

We developed an AR headset that processes the view in front of it and displays a high-contrast image. It can also recognize certain images and bring them to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert.

## How we built it

OpenCV processes the image stream from a webcam mounted on the VR headset: each frame is run through a Canny edge detector to find edges and contours. A BFMatcher is then used to find objects that resemble a given image file, which are highlighted when found.

## Challenges we ran into

We originally hoped to use an Oculus Rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well!

## Accomplishments that we're proud of

Our development platform was based on macOS 10.12, Python 3.5, and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected.

## What we learned

Without the proper environment, your code is useless.

## What's next for EyeSee

Existing solutions are better suited for general use. However, a DIY solution is endlessly customizable, and we hope this project inspires other developers to create projects that help other people.

## Links

Feel free to read more about visual impairment, and how to help: <https://w3c.github.io/low-vision-a11y-tf/requirements.html>
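A sketch of the two passes described above, using the same OpenCV primitives: Canny for the contrast-boosted view, and a Hamming-distance BFMatcher over ORB features as a common stand-in for whichever descriptor the original used. File names and thresholds are illustrative:

```python
import cv2

# Grab one frame from the headset-mounted webcam; a real build loops forever.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Pass 1: Canny edges give the high-contrast outline view shown on the display.
edges = cv2.Canny(gray, threshold1=50, threshold2=150)

# Pass 2: ORB features matched against a stored target (e.g. a crosswalk sign).
orb = cv2.ORB_create()
target = cv2.imread("crosswalk_sign.jpg", cv2.IMREAD_GRAYSCALE)
kp_t, des_t = orb.detectAndCompute(target, None)
kp_f, des_f = orb.detectAndCompute(gray, None)
if des_t is not None and des_f is not None:
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_t, des_f) if m.distance < 40]
    if len(good) > 15:  # enough strong matches -> the sign is probably in view
        print("Crosswalk sign detected")  # would trigger the outline + vocal alert
```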
losing
## Inspiration

Noise sensitivity is common in autism, but it can also affect individuals without autism. Research shows that 50 to 70 percent of people with autism experience hypersensitivity to everyday sounds. This inspired us to create a wearable device to help individuals with heightened sensory sensitivities manage noise pollution. Our goal is to provide a dynamic solution that adapts to changing sound environments, offering a more comfortable and controlled auditory experience.

## What it does

SoundShield is a wearable device that adapts to noisy environments by automatically adjusting calming background audio and applying noise reduction. It helps individuals with sensory sensitivities block out overwhelming sounds while keeping them connected to their surroundings. The device also alerts users if someone is behind them, enhancing both awareness and comfort. It filters out unwanted noise using real-time audio processing and only plays calming music if the noise level becomes too high. If it detects a person speaking, or if the sound is quiet and meaningful, such as human speech, it applies no filters or background music.

## How we built it

We developed SoundShield using a combination of real-time audio processing and computer vision, integrated on a Raspberry Pi Zero with headphones and a camera. The system continuously monitors ambient sound levels and dynamically adjusts the music accordingly. It filters noise based on amplitude and frequency, applying noise reduction techniques such as spectral subtraction and dynamic range compression to ensure users only hear filtered audio. The system plays calming background music when noise levels become overwhelming. If the detected noise is low, such as human speech, it leaves the sound unfiltered. Additionally, if a person is detected behind the user and the sound amplitude is high, the system alerts the user, ensuring they are aware of their surroundings.

## Challenges we ran into

Processing audio in real time while distinguishing sounds based on frequency was a significant challenge, especially with the limited computing power of the Raspberry Pi Zero. Additionally, building the hardware and integrating it with the software posed difficulties, especially in ensuring smooth, real-time performance across audio and computer vision tasks.

## Accomplishments that we're proud of

We successfully integrated computer vision, audio processing, and hardware components into a functional prototype. Our device provides a real-world solution, offering a personalized and seamless sensory experience for individuals with heightened sensitivities. We are especially proud of how the system dynamically adapts to both auditory and visual stimuli.

## What we learned

We learned about the complexities of real-time audio processing and how difficult it can be to distinguish between different sounds based on frequency. We also gained valuable experience integrating audio processing with computer vision on a resource-constrained device like the Raspberry Pi Zero. Most importantly, we deepened our understanding of the sensory challenges faced by individuals with autism and how technology can be tailored to assist them.

## What's next for SoundShield

We plan to add a heart rate sensor to detect when the user is becoming stressed, which would increase the amount of noise reduction and automatically play calming music. Additionally, we want to improve the system's processing power and enhance its ability to distinguish between human speech and other noises.
We're also researching specific frequencies that can help differentiate between meaningful sounds, like human speech, and unwanted noise to further refine the user experience.
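Of the noise-reduction techniques named above, spectral subtraction is the easiest to sketch: estimate the noise magnitude spectrum from a quiet interval, subtract it frame by frame, and resynthesize with the original phase. A textbook single-frame version, not the exact DSP running on the Pi:

```python
import numpy as np

def spectral_subtraction(frame: np.ndarray, noise_mag: np.ndarray,
                         alpha: float = 2.0) -> np.ndarray:
    """Subtract an estimated noise magnitude spectrum from one audio frame,
    keeping a small spectral floor to avoid musical-noise artifacts, then
    resynthesize with the frame's original phase."""
    spectrum = np.fft.rfft(frame)
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    clean_mag = np.maximum(mag - alpha * noise_mag, 0.05 * mag)  # spectral floor
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

# Demo: a 440 Hz tone buried in synthetic noise, with the noise spectrum
# estimated from a separate "quiet interval" recording.
rate = 16_000
t = np.arange(512) / rate
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(512)
noise_mag = np.abs(np.fft.rfft(0.3 * np.random.randn(512)))
clean = spectral_subtraction(noisy, noise_mag)
```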
Bet for Bit is a premium, high-security Bitcoin betting service for dedicated sports fans. It is built with Python, Django, and the Coinbase API. Our platform scrapes live sports stats and allows users to place Bitcoin bets on their sports teams.
## Inspiration

All of us have friends or know of someone who has ADHD. ADHD is characterized by being easily distracted, especially by everyday sounds such as a cough or a car honk. From this idea of noise distraction, we decided to try creating noise-cancelling headphones for anyone who just wants to focus (because anyone can be distracted at times).

## What it does

Focus is a web application that controls your headphones. It cancels noise and adjusts the volume of certain sounds depending on the environment or activity you want to focus in. For example, Focus allows you to hear traffic while you're jogging, but not the conversations of your neighbors.

## How we built it

We used pyAudioAnalysis to isolate sounds. Through machine learning, we were able to recognize the sounds of voices, music, alarms, clapping, and more. We used HTML, CSS, and JS for the frontend web app. We spent time on prototyping with Photoshop and PowerPoint.

## Challenges we ran into

Real-time noise cancellation is tricky to implement because we need to first listen to and identify a sound before being able to cancel it. We tried using a Raspberry Pi but had problems with installation and ultimately did not get it to work in the end. :(

## Accomplishments that we're proud of

We are proud that we figured out how to use pyAudioAnalysis and got the CSS looking nice. We also met each other during team-building and are proud of our collaboration in pulling this off.

## What we learned

We learned how to use pyAudioAnalysis and to work with audio in our hardware hack. This was Karisa's very first hardware hack, and she learned more about the Raspberry Pi even though it did not work in the end. We all learned more CSS using W3Schools. Md, Suparit, and Yunqi learned more about Material Design and UX practices.

## What's next for Focus

Improvements to real-time noise cancellation through Bluetooth and the Raspberry Pi. We would also like to give users more customizability in their audio environments. And market research into which high-value features the general and ADHD populations would want most.
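Classification with pyAudioAnalysis reduces to a single call against a pretrained model. The function name here follows the current docs (older releases used camelCase variants like `aT.fileClassification`), and the model name and per-environment policy are invented placeholders:

```python
from pyAudioAnalysis import audioTrainTest as aT

# Classify a short captured clip against a pretrained SVM model.
# "svm_env_model" is a hypothetical model trained on voices, music, alarms, etc.
class_id, probabilities, class_names = aT.file_classification(
    "captured_clip.wav", "svm_env_model", "svm"
)
label = class_names[int(class_id)]

# Hypothetical per-environment passthrough policy: jogging lets safety-relevant
# sounds through; studying cancels everything recognized.
ALLOW = {"jogging": {"traffic", "alarm"}, "studying": set()}
activity = "jogging"
if label in ALLOW[activity]:
    print(f"Pass through: {label}")
else:
    print(f"Cancel: {label}")
```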
winning
## Inspiration
When thinking about how we could make a difference within local communities impacted by Covid-19, what came to mind were our frontline workers. Our doctors, nurses, grocery store workers, and Covid-19 testing volunteers have tirelessly been putting themselves and their families on the line. They are the backbone and heartbeat of our society during these past 10 months and counting. We want them to feel the appreciation and gratitude they deserve. With our app, we hope to bring moments of positivity and joy to the difficult and trying moments of our frontline workers. Thank you!
## What it does
Love 4 Heroes is a web app to support our frontline workers by expressing our gratitude for them. We want to let them know they are loved, cared for, and appreciated. In the app, a user can make a thank-you card, save it, and share it with a frontline worker. A user's card is also posted to the "Warm Messages" board, a community space where you can see all the other thank-you cards.
## How we built it
Our backend is built with Firebase. The frontend is built with Next.js, and our design framework is Tailwind CSS.
## Challenges we ran into
* Working with different time zones [12-hour time difference].
* Figuring out how to save our thank-you cards to a user's phone or laptop was tricky.
* Persisting likes with Firebase and local storage.
## Accomplishments that we're proud of
* Our first hackathon!
	+ We're not in the same state, but came together to be here!
	+ Some of us used new technologies like Next.js, Tailwind CSS, and Firebase for the first time!
	+ We're happy with how the app turned out from a user's experience.
	+ We liked that we were able to create our own custom card designs and logos, utilizing custom-made design textiles.
## What we learned
* New technologies: Next.js, Firebase
* Managing time-zone differences
* How to convert a DOM element into a .jpeg file
* How to make a responsive web app
* Coding endurance and mental focus
* Good Git workflow
## What's next for love4heroes
More cards, more love! Hopefully, we can share this with a wide community of frontline workers.
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. We have also found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client to complete a brief questionnaire about themselves. Then, using their banking history, it generates three "demons", a.k.a. bad spending habits, to kill. After the client chooses a habit to work on, the app brings them to a dashboard where they can monitor their weekly progress on the task. Once the week is over, the app declares whether the client successfully beat the mission; if they did, they get rewarded with points, which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication plus a cloud database to keep track of users. For user and transaction data, as well as making and managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Beyond API integration, working without any sleep was definitely the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!). The biggest reward from this hackathon is the new friends we've found in each other :)
## What we learned
Each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at Git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
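As an illustration of the goal-generation step, here's a hedged sketch using Cohere's classic Python SDK; the prompt, API key, and transaction summary are all placeholders, not our production setup.

```python
# Hedged sketch: asking Cohere to turn a transaction summary into a named
# "spending demon" plus a weekly goal, via the classic SDK's generate() endpoint.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical key

summary = "14 coffee-shop purchases totalling $87 in the last 30 days"
response = co.generate(
    prompt=f"Name a bad spending habit as a playful 'demon' and give a one-week "
           f"goal to beat it, based on: {summary}",
    max_tokens=60,
)
print(response.generations[0].text)
```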
Hello and thank you for judging my project. I am listing two links below, along with an explanation of the two videos. Due to the time constraints of some hackathons, I have a shorter video for judges with limited time. By default, I am placing the shorter video up top, but if you have time, or your hackathon allows it, please go ahead and watch the full video at the link below. Thanks!

[3 Minute Video Demo](https://youtu.be/8tns9b9Fl7o)

[5 Minute Demo & Presentation](https://youtu.be/Rpx7LNqh7nw)

For any questions or concerns, please email me at [joshiom28@gmail.com](mailto:joshiom28@gmail.com)

## Inspiration
Resource extraction has tripled since 1970. That leaves us on track to run out of non-renewable resources by 2060. To fight this extremely dangerous issue, I used my app development skills to help everyone support the environment. As a person residing in this environment, I felt that I needed to use my technological skills to help us take better care of the environment, especially in industrial countries such as the United States. I used the symbolism of the Lorax to name the LORAX app, inspired to help the environment.

*Side note: when referencing Firebase, I mean Firebase as a whole, since two different databases were used: one to upload images and the other to upload data (e.g. form data) in real time. Firestore is the realtime database for user data, versus Firebase Storage for image uploading.*

## Main Features of the App
To start out, we are prompted with the **authentication panel**, where we can either sign in with an existing email or sign up with a new account. Since we are new, we will go ahead and create a new account. Here I will type in my name, email, and password and log in. Now, if we go back to Firebase Authentication, we see a new user pop up, and a new user is added to Firestore with their associated data such as their **points, user ID, name, and email**.

Back at the main app's home page, we can see the various things we can do. Let's start with the Rewards tab, where we can choose rewards depending on the amount of points we have. If we press redeem rewards, it takes us to the rewards tab, where we can choose various coupons from companies and redeem them with the points we have. Since we start out with zero points, we can't redeem any rewards right now, so let's go back to the home page.

The first three pages I will introduce are part of the point-incentive system for purchasing items that help the environment. If we press the view requests button, we are navigated to a page where we can view the requests we have made in the past. These requests are used to redeem points for items you have purchased that help support the environment. Here we would be able to **view some details and the status of the requests**, but since we haven't submitted any yet, we see there are none upon refreshing. Let's come back to this page after submitting a request.

If we go back, we can now press the request rewards button. By pressing it, we are navigated to a form where we can **submit details regarding our purchase and an image as proof, to ensure the user truly did purchase the item**. After pressing submit, **this data and image are pushed to Firebase Storage (for the picture) and Firestore (for the other data)**, which I will show in a moment. If we go to Firebase, we see a document with the details of the request we submitted, and in Storage we can **view the image that we submitted**. Here we can review the details, approve the status, and assign points to the user based on their request. Now let's go back to the app itself.

If we go to the view requests tab again, now that we have submitted our request, we see the request, its status, and other details such as how many points you received if the request was approved, along with the time and date.

Now to the Footprint Calculator tab, where you can input some details and see the global footprint you have on the environment and its resources based on your housing, food, and overall lifestyle. Here I will type in some data and see the results. **It says I would take up 8 Earths if everyone used the same amount of resources as me.** The goal is to reach only one Earth, since then the Earth and its resources would be able to sustain us for much longer. We can also share the result with friends to encourage them to do the same.

The last tab is the Savings tab. Here we can find simple daily tasks that not only save thousands and thousands of dollars but also heavily help sustain the environment. **Here we have some things we can do to save on transportation, and by clicking on a saving, we are navigated to a website where we can see how to achieve these savings ourselves.**

This has been the demonstration of the LORAX app; thank you for listening.

## How I built it
For navigation, I used React Native Navigation to create the authentication navigator and the tab and stack navigators in each of the respective tabs.

## For the incentive system
I used Google Firebase's Firestore to view, add, and upload details and images to the cloud for review and data transfer. For authentication, I used **Google Firebase's Authentication**, which allowed me to create custom user data such as the user's points and the requests associated with their **user ID**. Overall, **Firebase made it EXTREMELY easy** to create a high-level application; I used Google Firebase for this entire application's backend.

## For the UI
For tabs such as the Request Submitter and Request Viewer, I used the react-native-base library to create modern-looking components, which allowed me to build a modern-looking application.

## For the Prize Redemption and Savings sections
I created the UI from scratch, trialing different designs and shadow effects to make it look cool. I used react-native-deeplinking to navigate to the specific websites in the Savings tab.

## For the Footprint Calculator
I embedded the **Global Footprint Network's footprint calculator** into this tab for the user's reference. The website is shown **in the tab and is functional in that UI**, just like the original site.

I used Expo for wireless testing, allowing me to develop the app over the WiFi network without any cables. For the request submission tab, I used react-native-base components to create the form UI elements and Firebase to upload the data. For the Request Viewer, I used Firebase to retrieve and view the data.

## Challenges I ran into
One last-second challenge I ran into was manipulating the database on Google Firebase. While creating the video, in fact, I realized that some of the parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated, both in the state and in Firebase, had gotten mixed up. Another issue I encountered was retrieving the image from Firebase: I was able to log the URL, but due to some issues with state, I wasn't able to pass the URI to the image component, and for lack of time I left that off. Firebase made it very easy to push, read, and upload files after installing its dependencies, and thanks to the great documentation and tutorials I was able to implement the rest effectively.

## What I learned
I learned a lot. Prior to this, I had no experience with **data modelling and creating custom user data points**. However, thanks to my previous experience with **Firebase** and some documentation, I was able to use Firebase's built-in commands to query and add specific user IDs to the database, allowing me to search for data based on UIDs. Overall, it was a great experience learning how to model data, use authentication, and create and modify custom user data with Google Firebase.

## Theme and How This Helps the Environment
Overall, this application uses **incentives and education** about users' impact on the environment to better help the environment.

## Design
I created a comprehensive and simple UI to make it easy for users to navigate and understand the purposes of the application. Additionally, I used the previously mentioned libraries to create a modern look.

## What's next for LORAX (Luring Others to Retain our Abode Extensively)
I hope to build my **own backend in the future**, using **ML** and **AI** to classify the submitted images and details to automate the approval process, and to **create my own footprint calculator** rather than using the one provided by the Global Footprint Network.
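To make the Firestore flow concrete, here's a hedged sketch of the request write as it might look from the Python admin SDK (the app itself writes from React Native); every field and name below is illustrative.

```python
# Sketch of the reward-request document write using the firebase_admin Python SDK.
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("service-account.json")  # hypothetical key file
firebase_admin.initialize_app(cred)
db = firestore.client()

# One "reward request" document, keyed the same way the user data is (by UID).
db.collection("requests").add({
    "uid": "user123",                            # hypothetical user ID
    "item": "reusable water bottle",
    "imagePath": "proofs/user123/receipt.jpg",   # image itself goes to Storage
    "status": "pending",                         # reviewer later approves + assigns points
})
```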
partial
## Inspiration
We created Stage Hand with the hope of equipping people with the skills to become poised and confident speakers. Upon realizing how big a fear public speaking is for most people, we wanted to help people cope with that fear while gaining the skill set needed to become better presenters. Stage Hand is our solution: a real-time feedback tool that helps users rapidly improve their skills while overcoming their fears.
## What it does
Stage Hand is a web application that offers users real-time feedback on their speaking pace, expression, and coherence by having them record videos of themselves delivering speeches. The goal is to help users pinpoint and improve the weak aspects of their public speaking. As you practice with Stage Hand, it essentially coaches you on how to polish your speeches by tracking and displaying vital statistics like your average speaking speed, the main emotion your facial expression conveys, and how many filler words you have used. We hope that making this information available in real time will let users learn how to adapt while speaking, craft the optimal speech, and become holistically better public speakers.
## How we built it
The Stage Hand interface was built using the React library for the user interface and the MediaDevices API to access the user's video camera. The data we collect from each recording is sent to Microsoft's Cognitive Services suite: we use the Bing Speech API's speech-to-text function, and we developed our own custom language and acoustic models through the Custom Speech Service. Frames from various points in the recording are passed to the Microsoft Emotion API, which we use to analyze the speaker's emotional expression at any given moment. Throughout the entire recording, live feedback is given to the user so they can keep adapting and learning the skills needed to become an excellent public speaker.
## Challenges we ran into
The biggest challenge in our final rounds of debugging was a persistent 403 error when calling the Bing Speech API, which our speech-to-text pipeline depends on. We raised this with a Microsoft mentor and determined that the root cause was on Microsoft's end. We ended up pinging a Microsoft employee in India who had recently hit a similar issue, and we are awaiting a response so that we can optimize our application. In the meantime, we found a temporary workaround: calling the REST API instead of the WebSocket API, even though REST only supports fifteen seconds of recording. This gave us a working product to demo (although it is less efficient), and we hope to fix it properly once we hear back.
Another chief issue was recognizing filler words in normal speech, because most speech-to-text APIs automatically filter filler words out, making them nearly impossible to detect. We overcame this by developing custom language and acoustic models on Microsoft's Custom Speech Service. These models let us detect filler words so we could count them and give the user feedback on minimizing fillers in their speech.
## Accomplishments that we're proud of
We are especially proud of our team's ability to adapt to the variety of issues we came across while building the application. Every time we made a breakthrough, it seemed like we ran into a new issue; for example, just as we were finishing the backbone of the application, we hit the Bing Speech API problems. Regardless of the challenges thrown at us, we were always able to overcome them one way or another. In that sense, we are most proud of our team's resilience.
## What we learned
From this experience, we learned how to use and integrate many technologies that none of us had used before. We learned how to use Microsoft's Cognitive Services for tasks like detecting emotion and converting speech to text, and how to develop custom language and acoustic models with Microsoft's Custom Speech Service.
## What's next for Stage Hand
Since we ran into difficulty with the WebSocket API, we had trouble opening a tunnel from our client to the server, which bottlenecked our ability to stream the user's video data directly to Microsoft Cognitive Services. In the future, we hope to overcome these issues and create an app that truly allows live, interactive feedback.
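For reference, the fifteen-second REST workaround looks roughly like this; the endpoints are the historical Bing Speech URLs as we understood them, so treat them as assumptions rather than current documentation.

```python
# Hedged sketch of the 15-second REST workaround, using only the requests library.
import requests

SUBSCRIPTION_KEY = "YOUR_KEY"  # hypothetical

# 1. Exchange the subscription key for a short-lived access token.
token = requests.post(
    "https://api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
).text

# 2. POST up to ~15 seconds of WAV audio for transcription.
with open("speech.wav", "rb") as f:
    result = requests.post(
        "https://speech.platform.bing.com/speech/recognition/"
        "interactive/cognitiveservices/v1?language=en-US&format=simple",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "audio/wav; codec=audio/pcm; samplerate=16000"},
        data=f,
    )
print(result.json())
```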
## Inspiration
Almost all undergraduate students, especially at large universities like the University of California, Berkeley, will take a class in a huge lecture format, with several hundred students listening to a single professor speak. At Berkeley, students (including three of us) took CS61A, the introductory computer science class, alongside over 2,000 other students. Besides forcing some students to watch the class on webcasts, the sheer size of classes like these impairs the lecturer's ability to take questions from students: both the audience and the lecturer are frequently unable to hear the question, and notably the question does not register on webcasts at all. This led us to seek a solution that would enable everyone to be heard in a practical manner.
## What it does
*Questions?* solves this problem using something we all carry at all times: our phones. Using a peer-to-peer connection to the lecturer's laptop, a student can speak into their smartphone's microphone and have that audio transmitted directly to the lecture hall's audio system. This eliminates any precarious hand-off of a physical microphone and the chance that a question goes unheard. Beyond lecture halls, this could also be used in online education or live broadcasts, letting participants engage directly with the speaker instead of feeling disconnected behind a traditional chatbox.
## How we built it
We started with a fail-fast strategy to determine the feasibility of our idea. After some experiments, we were confident it would work. We then split our work streams, tackling the design and backend implementation in parallel. In the end, we had some time to make it shiny, with the whole team working together on the frontend.
## Challenges we ran into
We tried the WebRTC protocol but ran into problems with the implementation, the available frameworks, and the documentation. We then shifted to WebSockets and tried to make them work on mobile devices, which is easier said than done. Furthermore, we had some issues with web security, so we used an AWS EC2 instance with Nginx and Let's Encrypt TLS/SSL certificates.
## Accomplishments that we're (very) proud of
With most of us being very new to the hackathon scene, we are proud to have developed a platform that enables collaborative learning, where whatever someone has to say, everyone can hear. With *Questions?*, it is not just a conversation between a student and a professor; it can be a discussion involving the whole class. *Questions?* enables users' voices to be heard.
## What we learned
WebRTC looks easy but did not work, at least in our case. Today everything has to be encrypted, even in dev mode. TreeHacks 2020 was fun.
## What's next for *Questions?*
In the future, we could integrate polls and iClicker features, and also extend the functionality for presenters and attendees at conferences, showcases, and similar events. *Questions?* could also be applied even more broadly to any situation that normally requires a microphone, anywhere people need to hear someone's voice.
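A minimal sketch of the laptop-side relay using the third-party websockets package; the audio format and the playback via sounddevice are illustrative stand-ins for our actual setup.

```python
# The lecturer's laptop runs a WebSocket server that plays whatever audio
# chunks a student's phone sends over the connection.
import asyncio
import numpy as np
import sounddevice as sd
import websockets

async def relay(websocket):  # older websockets versions also pass a `path` argument
    async for chunk in websocket:
        # Each binary message is assumed to be raw 16-bit PCM at 16 kHz.
        samples = np.frombuffer(chunk, dtype=np.int16)
        sd.play(samples, samplerate=16000)  # out to the lecture hall's speakers

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```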
## Inspiration
One of our team members underwent speech therapy as a child, and the therapy helped him gain a sense of independence and self-esteem. In fact, over 7 million Americans, ranging from children with gene-related diseases to adults who suffer from stroke, go through some sort of speech impairment. We wanted to create a solution that could help amplify the effects of in-person treatment by giving families a way to practice at home, and to make speech therapy accessible to everyone who cannot afford the cost or time of institutional help.
## What it does
BeHeard makes speech therapy interactive, insightful, and fun. We present a hybrid text-and-voice-assistant visual interface that guides patients through voice exercises. First, we have them say sentences designed to exercise specific nerves and muscles in the mouth. We use deep learning to identify mishaps and disorders on a word-by-word basis and show users exactly where they could use more practice. Then, we lead patients through mouth exercises that target those neural pathways: they imitate a sound and mouth shape, and we use deep computer vision to display the desired lip shape directly on their mouth. Finally, when they can hold the position for a few seconds, we celebrate their improvement by showing them wearing fun augmented-reality masks in the browser.
## How we built it
* On the frontend, we used Flask, Bootstrap, Houndify, and JavaScript/CSS/HTML to build our UI. We used Houndify extensively to navigate around the site and process speech during exercises.
* On the backend, we used two Flask servers that split the processing load, with one handling server IO with the frontend and the other running the machine learning.
* On the algorithms side, we used deep_disfluency to identify speech irregularities and filler words, and the IBM Watson speech-to-text (STT) API for a more raw, fine-resolution transcription.
* We used the tensorflow.js deep learning library to extract 19 points representing the mouth of a face. With exhaustive vector analysis, we determined the correct mouth shape for pronouncing basic vowels and gave real-time guidance for lip movements. To increase users' motivation to practice, we even incorporated AR to draw the desired lip shapes on users' mouths, rewarding them with fun masks when they get it right!
## Challenges we ran into
* It was quite challenging to smoothly incorporate voice into our platform for navigation while staying sensitive to the fact that our users may have trouble with voice AI. We help those who are still improving gain competence and feel at ease with a chat-bubble interface that reads messages to users and also accepts text and clicks.
* We also ran into issues finding the balance between noisy, unreliable STT transcriptions and transcriptions that autocorrected our users' mistakes. We ended up employing a balance of the Houndify and Watson APIs. We also adapted a dynamic programming solution to the Longest Common Subsequence problem to create the most accurate and intuitive visualization of our users' mistakes.
## Accomplishments that we are proud of
We're proud of building one of the first easily accessible digital solutions we know of that both conducts interactive speech therapy and deeply analyzes our users' speech to surface insights. We're also really excited to have created a pleasant and intuitive user experience given our time constraints. Finally, we're proud to have implemented a speech practice program whose mouth-shape detection and correction customizes the AR mouth goals to every user's facial dimensions.
## What we learned
We learned a lot about the strength of the speech therapy community and the patients who inspire us to persist through this hackathon. We also learned about the fundamental challenges of detecting anomalous speech and the need for more NLP research to strengthen the technology in this field. We learned how to work with facial recognition systems in interactive settings; all the vector calculations and geometric analyses needed to make detection more accurate and the guidance look more natural were a challenging but great learning experience.
## What's next for BeHeard
We have demonstrated how technology can effectively assist speech therapy by building a working prototype. From here, we will first develop more models to detect stutters and mistakes in speech, diving deeper into audio- and language-related algorithms and machine learning techniques, so we can diagnose problems at a more personal level. We will then develop an in-house facial recognition system that captures more points on the human mouth, enabling more types of pronunciation practice and more sophisticated lip guidance.
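The LCS-based comparison can be sketched in a few lines; this is a minimal illustration of the alignment idea, not our exact scoring or visualization code.

```python
def lcs_table(expected, spoken):
    """Classic DP table for the Longest Common Subsequence of two word lists."""
    dp = [[0] * (len(spoken) + 1) for _ in range(len(expected) + 1)]
    for i, e in enumerate(expected):
        for j, s in enumerate(spoken):
            dp[i + 1][j + 1] = dp[i][j] + 1 if e == s else max(dp[i][j + 1], dp[i + 1][j])
    return dp

def missed_words(expected, spoken):
    """Words from the expected sentence that the transcription failed to match."""
    dp, missed = lcs_table(expected, spoken), []
    i, j = len(expected), len(spoken)
    while i > 0 and j > 0:
        if expected[i - 1] == spoken[j - 1]:
            i, j = i - 1, j - 1                       # matched word, part of the LCS
        elif dp[i - 1][j] >= dp[i][j - 1]:
            missed.append(expected[i - 1]); i -= 1    # expected word was missed
        else:
            j -= 1                                    # extra/misheard word in transcription
    missed.extend(reversed(expected[:i]))             # leftover prefix was missed too
    return list(reversed(missed))

print(missed_words("she sells sea shells".split(),
                   "she sea shells".split()))         # -> ['sells']
```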
partial
## Inspiration
Pranit and Sashank are part of Boy Scouts, and we both spend a lot of time helping our community and giving back; we have had personal encounters with the organizations we aim to help. Krish is also a local advocate for sustainability who regularly looks for opportunities to help. We saw a need and thought of a cool way to collect and process data, and we wanted to build it out. And who wouldn't want to earn money while feeling like they're playing a game?
## What it does
Datability is an app and web platform that gamifies and incentivizes crowdsourcing data. The problem we are trying to solve is simple: sustainable organizations struggle to get the data they need because it is scattered, expensive, or extremely difficult to access. Without this data, organizations can't take action where their efforts make the most difference. Datability works by letting organizations request data, which we crowdsource from users by gamifying and incentivizing the process. Organizations post data requests specifying what data they want to capture and from where. Users inside the geofence of a challenge are eligible to participate and can upload the data they collect; in return, they get points and real money. At the end, organizations get actionable data, and users get to compete with each other and earn real money, all while helping the environment. Our business model is simple too: the top 3 contributing users of a challenge share 35% of the pot, and the rest is distributed proportionally to all contributors. We take a small cut of the payouts offered by organizers, and we want to give 50% of our profits back to sustainability efforts.
## How we built it
We built two platforms: an iOS app for users, and a web app for the nonprofits and businesses requesting data. Our tech stack included Swift, SwiftUI, Firebase, Google Cloud APIs, plant identification, Hugging Face ML models, NLP, JavaScript, CSS, HTML, Bootstrap, ApexCharts, the Google Maps API, and Apple Maps. On the organization side, we streamlined the onboarding process and added a smooth Stripe setup. Using multithreading, we get immediate API responses and dynamic updating on screen. We allow organizations to set geofences and location boundaries. On top of that, we provide advanced analytics with the Google Maps APIs as well as data aggregation and collection, plus the handy feature of exporting all the data to JSON. On the consumer side, we created an easy-to-use application that makes all of its features easy to interact with and understand. We generated NLP-based descriptions for each of the plants to make sure everyone learns something about sustainability, and all of the user data, images, and coordinates are uploaded to our database so that the web app can interact with that data too.
## Challenges we ran into
This was our 4th hackathon, but our first time competing on a college campus. We went through multiple phases of ideation and several technical challenges. For example, when we first discussed how to implement a PSP (payment service provider) into our platform, we exhaustively went through all the potential options before landing back on Stripe as the optimal solution for us. Though these discussions could feel tedious at times, they taught us a great deal about why planning is incredibly important, especially in computer science. The lack of sleep is also always a challenge :)
## Accomplishments that we're proud of
We have a working product that, with a few tweaks, would be beta-launch ready. That means we wrote a whole lot of code this weekend and are happy to see the final product. We're proud of making a full-stack iOS and web application that ties together various pieces of machine learning and other APIs to deliver an impactful yet clean user interface. In addition, we are proud that the iOS and web apps work together seamlessly to make the overall platform better.
## What we learned
We learned a whole lot. Through several technical challenges and countless debugging hours, we learned the ins and outs of Swift/SwiftUI, integrating Stripe for web payments and mobile payouts, the Google Maps API, and more.
## What's next for Datability
Over the weekend, we loved the process of building and iterating on Datability, and we want to continue: we are starting a beta launch in just two weeks in the Bay Area community. We want to keep iterating on our tech and get it ready for scale. We look forward to what's next.
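A minimal sketch of the payout math described above; the platform cut and the equal top-3 share are our illustrative reading, since the exact formula isn't reproduced here.

```python
def payouts(pot, contributions, platform_cut=0.05):
    """Split a challenge pot: 35% shared by the top 3 contributors, the other
    65% distributed in proportion to points. The 5% platform cut is a placeholder."""
    pool = pot * (1 - platform_cut)
    top3 = sorted(contributions, key=contributions.get, reverse=True)[:3]
    result = {user: pool * 0.35 / len(top3) for user in top3}
    total_points = sum(contributions.values())
    for user, points in contributions.items():
        result[user] = result.get(user, 0) + pool * 0.65 * points / total_points
    return result

print(payouts(100.0, {"ana": 50, "ben": 30, "caz": 15, "dev": 5}))
```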
## Inspiration
Save the World is a mobile app meant to promote sustainable practices, one task at a time.
## What it does
Users begin with a colorless Earth prominently displayed on their screens, along with a list of possible tasks. After completing a sustainable task, such as saying no to a straw at a restaurant, users earn points toward their goal of saving this empty world. As points are earned and users level up, they receive lively stickers to add to their world. Suggestions for activities are given based on the time of day, and users can also connect with their friends to compete for the best scores and sustainability. Both the fun stickers and the friendly competition encourage heightened sustainability practices from all users!
## How I built it
Our team created an iOS app with Swift. For the backend of tasks and users, we utilized a Firebase database, and we connected the two using CocoaPods.
## Challenges I ran into
Half of our team had not used iOS before this hackathon; we worked together to get past the learning curve so everyone could contribute to the app. Additionally, we initially set up Xcode for the wrong type of database, and at that point we decided to change the Xcode setup rather than create a different database. Finally, we found that using CocoaPods in conjunction with GitHub is tricky, because every computer needs to run pod init anyway. We carefully worked through this issue along with several other merge conflicts.
## Accomplishments that I'm proud of
We are proud of our ability to work as a team even with the majority of our members having limited Xcode experience. We are also excited that we delivered a functional app with almost all of the features we had hoped to complete. We had some other project ideas at the beginning but decided they did not have a high enough challenge factor; the ambition worked out, and we are excited about what we produced.
## What I learned
We learned that it is important to triage which tasks should be attempted first. We tried to prioritize the most important app functions and leave some of the fun features for the end; it was often tempting to work on exciting UI or other finishing touches, but having a strong project foundation was more important. We also learned to keep working hard even when the due date seemed far away: the first several hours were just as important as the final minutes of development.
## What's next for Save the World
Save the World has some wonderful features that could be implemented after this hackathon. For instance, the social aspect could be extended to give users more points if they meet up to do a task together. There could also be forums for sustainability blog posts from users, along with chat areas. Additionally, the app could recommend personal tasks for users and start to "learn" their schedules and most-completed tasks.
## Inspiration
The modern world faces a materials and resource crisis. As per The Guardian, about one pound of food is wasted per person. Our global gas supply steadily shrinks as more carbon emissions are sent into our atmosphere. And there are countless clothes sitting in people's closets that they can no longer wear but that could be donated to others. We sought to target these specific crises by making an app that lets communities share these resources and interact with each other.
## What it does
Matter is a social network platform that lets individuals see and make posts with users and friends within a certain radius. Users can make posts letting others in their community know that they have leftover food they are willing to share, clothes they are willing to give away, or a trip on which they are willing to give someone a ride. Others can respond to these posts to let the poster know they are coming. These interactions are incentivized with points that can be redeemed for gift cards. Not only does this improve sustainability, it also makes communities more tight-knit by encouraging people to get to know their local community.
## How we built it
We built Matter using React Native with the Expo API for the frontend of our app prototype, coded primarily in JavaScript. Our backend was developed using Google Firebase.
## Challenges we ran into
The first few challenges we faced came during our brainstorming phase: coming up with ideas that fell within the scope of the competition and the resources we had access to took longer than we expected. The other challenges were more technical. Working with the Expo API and React Native was relatively new to us, so we hit some issues with debugging, and one of the older libraries was outdated and not running properly.
## Accomplishments that we're proud of
We are proud of the idea we came up with, as we believe it is a great way to improve connectivity within a community while also building a better, more sustainable local environment. From a more technical standpoint, we are proud of learning all that we have, from picking up as much React and JavaScript as we could to actually developing an app.
## What we learned
As mentioned, we learned a lot of new skills, especially in React, JavaScript, and general app development. We also learned about the process of developing a product and everything that goes into it. From a business standpoint, we learned about judging the marketability of the ideas we come up with.
## What's next for Matter
Our next step with Matter would be expanding its categories, such as gardening or volunteer events, and partnering with non-profit organizations to boost the app's presence. We would also focus on further developing our UI to accommodate these new changes and improve it for our users.
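The "within a certain radius" feed boils down to a distance filter; here's a minimal haversine sketch, where the field names and the 5 km default are illustrative (a production app would push this into the database query).

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearby_posts(posts, me, radius_km=5.0):
    """Keep only posts whose authors are within the given radius of the user."""
    return [p for p in posts
            if haversine_km(me["lat"], me["lon"], p["lat"], p["lon"]) <= radius_km]
```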
partial
## Inspiration
Students are often put in a position where they have neither the time nor the experience to budget their finances effectively. Unfortunately, this leads many students to fall into debt and struggle to keep up with their finances. That's where wiSpend comes to the rescue! Our objective is to help students make healthy financial choices and stay aware of their spending behaviours.
## What it does
wiSpend is an Android application that analyses students' financial transactions and creates a predictive model of their spending patterns. Our application requires no effort from the user to input their own information, as all bank transaction data is synced to the application in real time. Our advanced financial analytics allow us to create effective budget plans tailored to each user and to provide financial advice that helps students stay on budget.
## How I built it
wiSpend is an Android application that makes REST requests to our hosted Flask server. This server periodically makes requests to the Plaid API to obtain financial information and processes the data. The Plaid API gives us access to users' banking data at major financial institutions, including transactions, balances, assets and liabilities, and much more. We focused on analysing the credit and debit transaction data and applied statistical techniques to identify trends in it. Based on the analysed results, the server decides what financial advice, in the form of a notification, to send to the user at any given point in time.
## Challenges I ran into
Integration, and creating our data-processing algorithm.
## Accomplishments that I'm proud of
This was the first time we as a group successfully brought together all our individual work on a project and integrated it! This is a huge accomplishment for us, as integration is usually the blocking factor for a successful hackathon project.
## What I learned
Interfacing the Android app with the web server was a huge challenge, but it pushed us as developers to find clever solutions to the roadblocks we encountered and thereby develop our own skills.
## What's next for wiSpend
Our next feature would be a sophisticated budgeting tool to assist users with their budgeting needs. We also plan to create a mobile UI that provides even more insights to users in the form of charts, graphs, and infographics, and to further develop our web platform into a seamless experience across devices.
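For the curious, pulling transactions from Plaid's sandbox boils down to one REST call; the credentials below are placeholders, and the category roll-up is a deliberately naive sketch of the analytics step.

```python
# Hedged sketch of fetching transactions via Plaid's /transactions/get REST endpoint.
import requests

resp = requests.post("https://sandbox.plaid.com/transactions/get", json={
    "client_id": "PLAID_CLIENT_ID",  # placeholder credentials
    "secret": "PLAID_SECRET",
    "access_token": "ACCESS_TOKEN",  # obtained earlier via Link + token exchange
    "start_date": "2019-01-01",
    "end_date": "2019-02-01",
})
transactions = resp.json()["transactions"]

# Naive spending breakdown by top-level category - the seed of a budget model.
totals = {}
for t in transactions:
    category = (t.get("category") or ["Uncategorized"])[0]
    totals[category] = totals.get(category, 0) + t["amount"]
print(totals)
```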
## Inspiration
So, like every hackathon we've done in the past, we wanted to build a solution based on the pain points of actual, everyday people. When we decided to pursue the Healthtech track, we called the nurses and healthcare professionals in our lives. To our surprise, they all seemed to have the same gripe: there was no centralized system for overviewing the procedures, files, and information about specific patients in a hospital or medical practice setting. Even a quick look through Google showed that no new technology was really addressing this particular issue. So we created UniMed, for "united medical", to offer an innovative alternative to the outdated software that exists, or, for some practices, pen and paper. While this isn't necessarily the sexiest idea, it's probably one of the most important issues to address for healthcare professionals. Looking over the challenge criteria, we couldn't come up with a more fitting solution; what comes to mind immediately is the criterion about increasing practitioner efficiency. A true CMS, not client management software but CARE management software, eliminates the need to annoy patients with a barrage of questions they've answered a hundred times, and lets nurses and doctors leave observations and notes in a system where other care workers can view them going forward.
## What it does
From a technical, data-flow perspective, this is the gist of how UniMed works: Solace connects our React-based frontend to our database. While we would normally have built a SQL database, or perhaps gone the NoSQL route with MongoDB, due to time constraints we are using JSON for simplicity's sake. So while JSON acts like a typical REST data store, we pull real-time data through Solace's eventing. Any time an event-based subscription fires, for example when a nurse updates a patient's record reporting that their post-op check-up went well and that they should continue their current dosage of medication, that value, in this case a comment, is passed along with the event, and our React app updates by populating the comments section of the patient's record with the new comment.
## How we built it
We all learned a lot at this hackathon. Jackson had some Python experience but picked up HTML5 to design the basic template of our log-in page. I had never used React before, but spent several hours watching YouTube videos (the React workshop was also very helpful!), and Manny mentored me through some of the React app creation. Augustine is a marketing student, but it turns out he has a really good eye for design, and he was super helpful with mockups and wireframes!
## What's next for UniMed
There are plenty of cool ideas we have for new features: giving patients a smartwatch that monitors their vital signs and pushes that bio-information to their patient "card" in real time would be super cool. It would also be great to integrate scheduling functionality so that practitioners can use our program as the ONLY program they need at work: a complete hub for all of their information and duties!
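Since Solace brokers also speak open protocols like MQTT, here's a hedged sketch of publishing the comment-update event with paho-mqtt; the broker address, topic, and payload are illustrative, not our actual deployment.

```python
import json
import paho.mqtt.client as mqtt

# paho-mqtt 1.x constructor; 2.x needs mqtt.CallbackAPIVersion.VERSION1 as the
# first argument.
client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical Solace MQTT endpoint

event = {
    "patientId": "p-1042",
    "author": "nurse.jackson",
    "comment": "Post-op check-up went well; continue current dosage.",
}
# Subscribers on this topic update the matching patient card in real time.
client.publish("unimed/patient/p-1042/comments", json.dumps(event))
```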
## Our Inspiration, what it does, and how we built it
We wanted to work on something that challenged the engineering of today's consumer economy. As college students across different campuses, we noticed the common trend of waste, hoarding, and overspending among students. At the core of this issue is a first instinct to buy a solution, whether a service or a product, whenever a problem arises. We did some market research among fellow hackers and on our colleges' subreddits, finding that students have no choice but to pay for items and services or go without them. To solve this, we wanted to introduce a platform that gives students an alternative way to pay for items, letting them leverage the typically illiquid assets they already have.
## Challenges we ran into
We wanted to keep development light, so we chose React and Convex to abstract away many of the details of full-stack development. Still, among our biggest challenges was getting everyone up to par in terms of technical ability. We are students from all sorts of backgrounds (from cognitive science to business to CS majors!) with varying levels of development experience.
## Accomplishments that we're proud of and what we learned
That's why, as we finished the final steps of the hackathon, we felt so proud of being able to power through and produce a functional product of our vision. All of us grew and learned immensely about software development, converting ideas into tangible visions (using tools such as Figma and DALL-E), and, most importantly, the "hacker" mindset. We all have so much to take away from this experience.
## What's next for BarterBuddies
Our long-term vision for the app is to become the go-to platform for bartering and item trading among young adults. We plan to expand and grow beyond the college student market by developing partnerships with other organizations and by continually iterating on the platform to meet the changing needs of our users.
partial
## Inspiration
After surfing online, we found some cool videos of 3D tracking with the eyes. We thought taking those concepts and bringing them to a mobile game would be a wonderful combination of the two, bringing more physical activity into a game.
## What it does
This game provides a source of endless activity and encourages people to be active with their body and eyes, providing a fuller health experience. Travelling down a slippery slope, users get to have a good time dealing with obstacles and enjoying the ride.
## How we built it
This app was built in Unity, popular 3D software used by major app developers and companies. Some renders were also done in Blender, and OpenCV was explored in both Python and Unity.
## Challenges we ran into:
* Learning Unity in general was a challenge. A number of strange issues came up: large installs, learning the syntax (e.g. meshes, font assets).
* Source control with GitHub was more challenging with 3D renderings. The files were more dependent on one another, making it difficult for multiple users to work at a time.
* Computer vision libraries worked well in Python but did not work in Unity.
## Accomplishments that we're proud of
* The render/app looks amazing and gives users a good experience.
* A really cool idea in the slide and the randomness of the colours.
## What we learned
* App development can be a challenge.
* Great UI/UX makes for an impressive project.
## What's next for Slippery Slope
* Exploring iOS mobile development is something that might be done for the app in the future.
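Here's a sketch of the kind of Python-side eye detection we experimented with, using OpenCV's bundled Haar cascade; mapping the eye position to a steering value is an illustrative choice, not the game's actual control scheme.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(eyes) > 0:
        x, y, w, h = eyes[0]
        # Map the eye's centre to a -1..1 steering value for the slide.
        steer = ((x + w / 2) / frame.shape[1]) * 2 - 1
        print(f"steer: {steer:+.2f}")
    cv2.imshow("eye tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```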
## TRY IT ON YOUR OWN DEVICE!
<https://ecoalchemy.vercel.app/>
## Inspiration 🌍
We built EcoAlchemy to be the ultimate educational game on ecology and sustainability, friendly to all age groups. EcoAlchemy is inspired by the urgent need for environmental awareness 🌱 and the captivating power of gaming 🎮. It's our answer to engaging a wider audience in the critical conversation about sustainability and ecological balance: we want to gamify our approach to spreading awareness of carbon footprints and the permanence of pollution.
## What it Does 🚀
The game educates players on the interconnectedness of nature 🌳, the impact of human activities 🏭, and the importance of sustainable living through fun, interactive gameplay. EcoAlchemy is a puzzle game where players combine basic elements like air, fire, earth, and water to create new items, substances, and concepts. The game mechanics revolve around experimentation and discovery as players mix and match elements to unlock new combinations, eventually leading to the creation of more complex items. The goal is to uncover as many combinations as possible and to observe how unlocking different elements shapes the environment we live in, fostering creativity and problem-solving skills along the way.
## How We Built It 🛠️
With a blend of creativity, scientific research 📚, and cutting-edge technology in Three.js and React 💻, our interdisciplinary team crafted a game that's as informative as it is engaging.
## Challenges We Ran Into 🚧
Balancing educational content with compelling gameplay was a tightrope walk 🎢; simplifying complex concepts without oversimplifying them required a thoughtful approach. We also had to handle rendering the 3D assets in our 3D environment, which changes in accordance with the elements the user combines and discovers.
## Accomplishments That We're Proud Of 😊
Creating a platform that makes learning about the environment an adventure 🌟, and receiving positive feedback from players and educators alike, has been incredibly rewarding.
## What We Learned 📖
The project was a masterclass in collaboration, innovation, and the transformative potential of educational technology 🌐. We learned new skills, from 3D rendering to incorporating NFT credentials. Thanks Crossmint : )
## What's Next for EcoAlchemy 🌈
EcoAlchemy will continue to grow 🌱, with new content, challenges, and partnerships on the horizon, aiming to inspire actionable change for a greener planet 🌎. We plan to expand the project into a full game, hopefully simulating even more accurate footprint cycles, element combinations, and smoother UI experiences.
## Core Learning Mechanisms
* Analogy: helps players understand complex ecological concepts by relating them to familiar experiences.
* Contrasting cases: showcases the effects of sustainable vs. unsustainable practices on the environment, enhancing understanding through comparison.
* Generation: encourages players to create solutions for environmental challenges, fostering creativity and problem-solving skills.
* Just-in-time telling: provides information and feedback right when players need it, enhancing learning without overwhelming them.
* Hands-on: offers interactive experiences that mimic real-world environmental activities, promoting active learning.
* Visualization: uses detailed graphics and simulations to represent ecological processes and the impact of human actions, aiding comprehension and retention.
## CODE!
<https://github.com/TheMoon2000/little-alchemy-3d>
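As a flavor of the combination mechanic, here's a minimal sketch in Python; the recipes are illustrative stand-ins, and the real game implements this inside its React/Three.js codebase.

```python
# An unordered pair of elements maps to a discovery.
RECIPES = {
    frozenset({"air", "water"}): "rain",
    frozenset({"earth", "water"}): "mud",
    frozenset({"fire", "earth"}): "lava",
    frozenset({"rain", "earth"}): "plant",
}

def combine(a, b, discovered):
    result = RECIPES.get(frozenset({a, b}))
    if result and result not in discovered:
        discovered.add(result)  # a new discovery reshapes the 3D environment
    return result

found = {"air", "fire", "earth", "water"}
print(combine("air", "water", found))   # rain
print(combine("rain", "earth", found))  # plant
```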
## Inspiration
We were inspired by the genetic-algorithm-based Super Mario AI known as MarI/O, made by SethBling. MarI/O uses genetic algorithms to teach a neural net to beat levels of Super Mario by maximizing an objective function. Inspired by this, we wanted to create a game that maximizes an objective function using genetic algorithms in order to present the player with a challenge.
## What it does
A game designed to take EEG input, monitoring and parsing brain-wave and stress-related data, and produce a machine-learned environment for the user to interact with. The game generates obstacles in the form of hurdles and walls, and the user controls their speed and position using real-time data streamed from the Muse headband.
## How we built it
We utilized the Muse API and research tools to write a script in Java that connected to the Muse port via TCP. We then coded the graphics and movement interfaces in Java, and we developed a machine learning neural network that constructs each game stage, generating obstacle models from previous iterations to increase the difficulty level.
## Challenges we ran into
Parsing and converting the data feed from the Muse headband into a usable format was definitely a challenge that took us several hours to overcome. In addition, adjusting the parameters of our progressive machine learning to produce a non-repetitive but feasible set of obstacles was another major challenge.
## Accomplishments that we're proud of
Just having the game environment we produced, and being able to run through and interact with it, is rewarding on its own. Knowing that this game has the potential to relieve stress levels and produce positive user feedback and impact, we all feel tremendously good about what we have produced.
## What we learned
We learned a lot about breeding neural networks and about the novel and unique ways different forms of data can be used.
## What's next for iamhappy
We definitely want to up our game on the UI and design side. We can allow more user-adjusted parameters and settings to help fine-tune user preferences. In addition, we want to improve the visuals and design of each level and environment. With a more aesthetically appealing background, we can definitely reach a higher mark in our objective of reducing user stress levels.
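Re-sketched in Python rather than our Java code, the TCP ingestion step looks roughly like this; the port and the comma-separated record format are assumptions about the Muse research tools, not their documented protocol.

```python
import socket

# Hypothetical local relay exposing the headband's stream over TCP.
sock = socket.create_connection(("127.0.0.1", 5000))
buffer = b""
while True:
    buffer += sock.recv(4096)
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        if not line:
            continue
        fields = line.decode(errors="ignore").split(",")
        # e.g. a hypothetical "eeg,alpha,beta,gamma,theta" record
        if fields[0] == "eeg":
            alpha = float(fields[1])
            print("relaxation proxy:", alpha)  # feeds the obstacle generator
```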
losing
## Inspiration
In the exciting world of hackathons, where innovation meets determination, **participants like ourselves often ask, "Has my idea been done before?"** While originality is the cornerstone of innovation, there's a broader horizon to explore: the evolution of an existing concept. Through our AI-driven platform, hackers can gain insight into the uniqueness of their ideas. By identifying gaps or exploring similar projects' functionality, participants can refine, iterate on, or even revolutionize existing concepts, ensuring that their projects truly stand out.

For **judges, the evaluation process is daunting.** With a multitude of projects to review in a short time frame, an impartial and comprehensive assessment becomes extremely challenging. An AI tool doesn't aim to replace the human element but to enhance it: by swiftly and objectively analyzing projects against certain quantifiable metrics, judges can allocate more time to the intricacies, stories, and passion driving each team.
## What it does
This project is a smart tool designed for hackathons. It measures the similarity and originality of new ideas against similar projects, if any exist; we use web scraping and OpenAI to gather data and draw conclusions.
**For hackers:**
* **Idea validation:** before diving deep into development, participants can check the uniqueness of their concept, ensuring they're genuinely breaking new ground.
* **Inspiration:** by browsing similar projects, hackers can draw inspiration and identify ways to enhance or diversify their own innovations.
**For judges:**
* **Objective assessment:** by inputting a project's Devpost URL, judges can swiftly gauge its novelty, benefiting from AI-generated metrics that benchmark it against historical data.
* **Informed decisions:** with insights about a project's originality at their fingertips, judges can make more balanced, data-backed evaluations that appreciate true innovation.
## How we built it
**Frontend:** developed with React JS, our interface is user-friendly, allowing easy input of ideas or Devpost URLs.
**Web scraper:** upon input, our web scraper dives into the content, extracting the essential information that feeds our objective metrics.
**Keyword extraction with ChatGPT:** OpenAI's ChatGPT detects keywords in the Devpost project description, capturing the project's essence.
**Project similarity search:** using the extracted keywords, we query Devpost for similar projects and get back a curated list ranked by relevance.
**Comparison & analysis:** each incoming project is meticulously compared with the list of similar ones. This analysis is multi-faceted, examining both the number of similar projects and the depth of their similarities.
**Result compilation:** after the analysis, we present users with an "originality score" alongside explanations of the metrics, keeping things transparent.
**Output display:** all insights and metrics are neatly organized and presented on our frontend for easy consumption.
## Challenges we ran into
**Metric prioritization:** given the time-restricted nature of a hackathon, one of our first challenges was deciding which metrics to prioritize. Striking the balance between data points that were meaningful and ones that were feasible to attain was crucial.
**Algorithmic efficiency:** we struggled with concerns over time complexity, especially around potential recursive scenarios. Optimizing our algorithms, prompt engineering, and simplifying the architecture was the solution.
*Finding a good spot to sleep.*
## Accomplishments that we're proud of
We took immense pride in developing a solution tailored to an environment we're deeply immersed in; crafting a tool for hackathons while participating in one showcases our commitment to enhancing such events. Furthermore, we not only conceptualized and executed the project but also established a robust framework and thoughtfully designed architecture from scratch. Another accomplishment was our team's synergy: we made sure everyone was aligned, invested in, and championing the idea, equally excited and comfortable with it. This unified vision and collaboration were instrumental in bringing HackAnalyzer to life.
## What we learned
We delved into the intricacies of full-stack development, gathering hands-on experience with databases, backend and frontend development, and AI integration. Navigating API calls and using web scraping were other key takeaways. Prompt engineering taught us to carefully balance the trade-offs of leveraging AI, especially when juggling cost, time, and efficiency considerations.
## What's next for HackAnalyzer
We aim to expand the metrics derived from the Devpost data while making the search function more efficient. Our longer-term objective is to bring the application to mobile: by letting students generate a QR code, judges can swiftly pull up HackAnalyzer data, making the evaluation process even more streamlined and effective.
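To show the shape of the comparison step, here's a minimal sketch that scores originality from keyword overlap; the weights and saturation point are illustrative choices, not our tuned metric.

```python
def jaccard(a, b):
    """Overlap of two keyword sets, 0.0 (disjoint) to 1.0 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def originality_score(project_keywords, similar_projects):
    """1.0 = nothing like it found; lower = more (and closer) prior projects."""
    if not similar_projects:
        return 1.0
    overlaps = [jaccard(project_keywords, p) for p in similar_projects]
    depth = max(overlaps)                 # how close the closest prior project is
    breadth = len(similar_projects) / 25  # saturates at 25 hits, arbitrary choice
    return round(max(0.0, 1.0 - 0.7 * depth - 0.3 * min(breadth, 1.0)), 2)

print(originality_score(
    ["hackathon", "judging", "similarity", "ai"],
    [["hackathon", "teams", "matching"], ["ai", "similarity", "search"]],
))
```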
## Inspiration

We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to build a solution.

## What it does

It helps developers find projects to work on, and helps project leaders find group members. Using data from GitHub commits, it can determine what kind of projects a person is suited for.

## How we built it

We decided on building an app for the web, then chose a GraphQL, React, and Redux tech stack.

## Challenges we ran into

The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't fetch all the data we needed (see the sketch at the end of this writeup). Authentication was hard to implement, since we had to try a number of approaches before one worked. The last challenge was determining how to relate users to the projects they could be paired with.

## Accomplishments that we're proud of

We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database, and the authentication are all ready to show.

## What we learned

We learned that every API poses its own unique challenges.

## What's next for Hackr\_matchr

Scaling up is next: supporting more kinds of projects, with more robust matching algorithms and higher user capacity.
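For reference, here is a small sketch of the kind of GitHub call involved, including the rate-limit check that bit us. The per-repo language tally as a skill signal is a simplification of the real matching input, not our production code:

```python
import requests

def user_language_tally(username: str, token: str = "") -> dict:
    """Tally each public repo's primary language -- a rough skill signal."""
    headers = {"Authorization": f"token {token}"} if token else {}
    resp = requests.get(f"https://api.github.com/users/{username}/repos",
                        headers=headers, timeout=10)
    if resp.status_code == 403 and resp.headers.get("X-RateLimit-Remaining") == "0":
        raise RuntimeError("GitHub rate limit hit -- authenticate or back off")
    resp.raise_for_status()
    tally = {}
    for repo in resp.json():
        lang = repo.get("language")
        if lang:
            tally[lang] = tally.get(lang, 0) + 1
    return tally

print(user_language_tally("octocat"))
```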
## Inspiration

In our fast-paced, bustling world, cooking is nothing short of a tedious task. Most importantly, finding the motivation to cook a healthy, nutritious meal at home, with McDonald's around every corner, has always been extremely difficult. That is, until now. Meet OnlyChefs.

## What it does

OnlyChefs is a Progressive Web App (PWA) that lets individuals truly enjoy and crave the home-cooking journey through a gamified user interface. When users log onto the app, they are given the option to view a collection of recipes they have previously unlocked, based on whether they wish to gain or lose weight, along with a navigation bar leading to the gacha, recipes, fitness calculator, information, and quiz pages.

### Gacha Page

The gacha button is one of the main ways that users can unlock new recipes. Essentially, the gacha mechanic allows individuals to "roll" for featured recipes, with the roll involving RNG (a rough sketch of the mechanic follows below). It's similar to buying Willy Wonka chocolate to potentially receive a golden ticket, but for recipes.
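For illustration, a weighted roll could look like the sketch below. The rarity tiers, drop rates, and recipe names are placeholders, not OnlyChefs' actual numbers (and the app implements this in Svelte/TypeScript rather than Python):

```python
import random

# Hypothetical rarity tiers and drop rates -- placeholders, not the app's values.
RARITY_WEIGHTS = {"common": 0.70, "rare": 0.25, "legendary": 0.05}

def roll_recipe(pool: dict) -> str:
    """One gacha roll: pick a rarity tier by weight, then a recipe within it."""
    tier = random.choices(list(RARITY_WEIGHTS), weights=list(RARITY_WEIGHTS.values()))[0]
    return random.choice(pool[tier])

pool = {
    "common": ["Omelette", "Fried Rice"],
    "rare": ["Shoyu Ramen"],
    "legendary": ["Beef Wellington"],
}
print(roll_recipe(pool))
```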
### Recipes Page

The recipes button is synonymous with the home page. It displays all the recipes that the user has unlocked in a nice, symmetrical, trading-card fashion. The user can click on any recipe they wish to cook, and it'll provide step-by-step instructions, ingredients, and nutritional facts.

### Fitness Calculator Page

The fitness calculator page is a way for users to determine an estimated amount of calories and macronutrients they should consume to achieve their fitness goals (lose/gain weight). It provides options to enter gender and weight (both metric and imperial), and outputs a detailed table displaying calories and other macronutrients.

### Information Page

The information page provides a nice, compact overview of all the macronutrients and the part they play in our diets.

### Quiz Page

The quiz page is one of the main ways that users earn points to spend on gacha rolls. Users are presented with a recipe for a certain dish, but with one ingredient missing. It is then the user's goal to determine what that missing key ingredient is.

## How we built it

We built OnlyChefs using Svelte, a free and open-source front-end compiler. This was a huge aid in enabling us to simultaneously develop cross-platform UIs for desktop and mobile, including iOS. Along with this, we used TypeScript to develop our data models and a temporary database.

## Challenges we ran into

There was definitely no shortage of challenges throughout our hacking journey. Implementing the actual gacha banners took quite a while, as we had to design assets and animations for them. Along with this, expanding a specific recipe card and displaying its contents was a tedious task due to all the layout constraints in place. Last but not least, we had to refactor our data model for the recipes on the last night of the hackathon, which led to a fun night of debugging :)

## Accomplishments that we're proud of

Honestly, we're pretty glad that we were able to develop a functioning web app. Developing all our assets in-house was also a fun learning experience throughout the hackathon. Along with this, Svelte was a new technology for four of us, so we were glad to have picked up another tool in our toolkit.

## What we learned

Similar to the above, Svelte was a new technology for most of us, so learning how to use and model with this framework was definitely a cool experience. We also got to play around with different ways of designing assets, and learned about many useful features of GitHub that we never knew existed!

## What's next for OnlyChefs

Gordon Ramsay, we're coming for you.
## Inspiration

A neural learning seminar. We wanted to try a project with some artificial intelligence.

## What it does

An unsolved Sudoku puzzle is provided in a text file, and the program solves the corresponding puzzle.

## How we built it

We used the AC-3 constraint propagation algorithm to limit domains. Each cell of the puzzle is treated as a constraint variable (a generic sketch follows below).

## Challenges we ran into

Too many to list.

## Accomplishments that we're proud of

Building something that works.

## What we learned

How to implement a constraint satisfaction algorithm.

## What's next for McHacks-2017

Learn some Node.js.
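Here is a generic AC-3 sketch for inequality constraints like Sudoku's (illustrative, not our hackathon code): `domains` maps each variable (e.g. each cell) to its candidate values, and `neighbors` maps each variable to the variables it shares a constraint with.

```python
from collections import deque

def revise(domains, xi, xj):
    """Drop values of xi that leave xj no consistent (unequal) value."""
    removed = False
    for v in list(domains[xi]):
        if all(v == w for w in domains[xj]):  # no w != v remains in xj
            domains[xi].discard(v)
            removed = True
    return removed

def ac3(domains, neighbors):
    """Propagate constraints until every arc is consistent or a domain empties."""
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj):
            if not domains[xi]:
                return False  # inconsistency: no solution
            queue.extend((xk, xi) for xk in neighbors[xi] if xk != xj)
    return True

# Tiny demo: two mutually unequal cells, one already fixed to 1.
domains = {"a": {1}, "b": {1, 2}}
neighbors = {"a": ["b"], "b": ["a"]}
ac3(domains, neighbors)
print(domains)  # {'a': {1}, 'b': {2}}
```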
## Inspiration

We wanted to create a game that helped us further our understanding of some core methods of machine learning.

## What it does

This game encompasses a machine learning model that learns how to play a simplified version of the popular game Super Smash Bros. It uses a population of individuals with neural networks that evolve over time using a genetic algorithm.

## How I built it

Each machine learning algorithm is implemented in native Java using Eclipse.

## Challenges I ran into

Implementing the genetic algorithm to evolve individuals who already have neural networks within them was difficult. We solved this by abstracting away the neural networks and treating every individual solely according to the fitness value calculated from the network within it (the sketch below illustrates the idea).

## Accomplishments that I'm proud of

We used no libraries, implementing the machine learning models from scratch. This is not typically done due to the complex nature of such algorithms -- we learned a lot about the inner workings of genetic algorithms and neural networks by doing so.

## What I learned

We gained a deeper understanding of genetic algorithms and neural networks by cross-implementing them. We also learned about the nuances of having two ML algorithms in a single model.

## What's next for MITSSAI

We hope to further customize the program with more platforms and maybe even levels.
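To make the abstraction concrete, here is a minimal genetic-algorithm loop sketched in Python (the project itself is in Java): the genome is a flat list of network weights, and the stand-in fitness function is where a real game rollout would go. Population size, mutation rate, and selection scheme are illustrative choices.

```python
import random

def fitness(weights):
    """Stand-in objective; the real version would run the game and score the agent."""
    return -sum(w * w for w in weights)

def evolve(pop_size=50, genome_len=20, generations=100):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):         # point mutation
                if random.random() < 0.05:
                    child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```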
## Inspiration

Noise sensitivity is common in autism, but it can also affect individuals without autism. Research shows that 50 to 70 percent of people with autism experience hypersensitivity to everyday sounds. This inspired us to create a wearable device to help individuals with heightened sensory sensitivities manage noise pollution. Our goal is to provide a dynamic solution that adapts to changing sound environments, offering a more comfortable and controlled auditory experience.

## What it does

SoundShield is a wearable device that adapts to noisy environments by automatically adjusting calming background audio and applying noise reduction. It helps individuals with sensory sensitivities block out overwhelming sounds while keeping them connected to their surroundings. The device also alerts users if someone is behind them, enhancing both awareness and comfort. It filters out unwanted noise using real-time audio processing and only plays calming music if the noise level becomes too high. If it detects a person speaking, or if the noise is quiet enough to be important -- such as human speech -- it applies neither filters nor background music.

## How we built it

We developed SoundShield using a combination of real-time audio processing and computer vision, integrated with a Raspberry Pi Zero, headphones, and a camera. The system continuously monitors ambient sound levels and dynamically adjusts the music accordingly. It filters noise based on amplitude and frequency, applying noise-reduction techniques such as spectral subtraction and dynamic range compression to ensure users only hear filtered audio (a toy sketch of the spectral-subtraction step follows below). The system plays calming background music when noise levels become overwhelming. If the detected noise is quiet, such as human speech, it leaves the sound unfiltered. Additionally, if a person is detected behind the user while the sound amplitude is high, the system alerts the user, ensuring they are aware of their surroundings.
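Here is a toy numpy sketch of the spectral-subtraction idea. It is illustrative only -- the on-device pipeline also handles framing, overlap, and compression, and the floor value is an assumption:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.02):
    """Subtract an estimated noise magnitude spectrum from one audio frame."""
    spectrum = np.fft.rfft(frame)
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    # The spectral floor keeps bins from going negative ("musical noise").
    cleaned = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))

# Usage: estimate the noise spectrum from a noise-only segment, then filter.
rng = np.random.default_rng(0)
noise_frames = rng.normal(0, 0.1, (10, 512))
noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

tone = np.sin(2 * np.pi * 440 * np.arange(512) / 16000)
noisy = tone + rng.normal(0, 0.1, 512)
clean = spectral_subtract(noisy, noise_mag)
```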
## Challenges we ran into

Processing audio in real time while distinguishing sounds based on frequency was a significant challenge, especially with the limited computing power of the Raspberry Pi Zero. Additionally, building the hardware and integrating it with the software posed difficulties, especially when ensuring smooth, real-time performance across the audio and computer vision tasks.

## Accomplishments that we're proud of

We successfully integrated computer vision, audio processing, and hardware components into a functional prototype. Our device provides a real-world solution, offering a personalized and seamless sensory experience for individuals with heightened sensitivities. We are especially proud of how the system dynamically adapts to both auditory and visual stimuli.

## What we learned

We learned about the complexities of real-time audio processing and how difficult it can be to distinguish between different sounds based on frequency. We also gained valuable experience integrating audio processing with computer vision on a resource-constrained device like the Raspberry Pi Zero. Most importantly, we deepened our understanding of the sensory challenges faced by individuals with autism and how technology can be tailored to assist them.

## What's next for SoundShield

We plan to add a heart-rate sensor to detect when the user is becoming stressed, which would increase the degree of noise reduction and automatically play calming music. Additionally, we want to improve the system's processing power and enhance its ability to distinguish between human speech and other noises. We're also researching specific frequencies that can help differentiate between meaningful sounds, like human speech, and unwanted noise, to further refine the user experience.
## Inspiration

Every few days, a new video of a belligerent customer refusing to wear a mask goes viral across the internet. On neighborhood platforms such as NextDoor and local Facebook groups, neighbors often recount their sightings of the mask-less minority. When visiting stores today, we must always remain vigilant if we wish to avoid finding ourselves embroiled in a firsthand encounter. With the mask-less on the loose, it’s no wonder that the rest of us have chosen to minimize our time spent outside the sanctuary of our own homes.

For anti-maskers, words on a sign are merely suggestions—for they are special and deserve special treatment. But what can’t even the most special of special folks blow past? Locks. Locks are cold and indiscriminate, providing access only to those who pass a test. Normally, this test is a password or a key, but what if instead we tested for respect for the rule of law and order? Maskif.ai does this by requiring masks as the token for entry.

## What it does

Maskif.ai allows users to transform old phones into intelligent security cameras. Our app continuously monitors approaching patrons and uses computer vision to detect whether they are wearing masks. When a mask-less person approaches, our system automatically triggers a compatible smart lock. This system requires no human intervention to function, saving employees and business owners the tedious and at times hopeless task of arguing with an anti-masker. Maskif.ai provides reassurance to staff and customers alike with the promise that everyone let inside is willing to abide by safety rules. In doing so, we hope to rebuild community trust and encourage consumer activity among those respectful of the rules.

## How we built it

We used Swift to write this iOS application, leveraging AVFoundation for recording functionality and Socket.io to deliver data to our backend. Our backend was built using Flask and leveraged Keras to train a mask classifier (a sketch of such a classifier appears below).

## What's next for Maskif.ai

While members of the public are typically discouraged from calling the police about mask-wearing, businesses are generally able to take action against someone causing a disturbance. As an additional deterrent to these people, Maskif.ai can be improved by giving staff the ability to call the police.
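As a sketch of the classifier side, a small Keras model for binary mask/no-mask prediction could look like this. The architecture, input size, and threshold here are illustrative, not Maskif.ai's exact model:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Tiny binary CNN: outputs P(mask) for a cropped face image.
model = keras.Sequential([
    layers.Input((128, 128, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)
# At inference time, a score below a chosen threshold (e.g. 0.5) would keep
# the smart lock closed.
```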
## Inspiration 🍪

We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks...

Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock.

## What it does 📸

Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see.

## How we built it 🛠️

* **Backend:** Node.js
* **Facial Recognition:** OpenCV, TensorFlow, DLib
* **Pipeline:** Twilio, X, Cohere

## Challenges we ran into 🚩

In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time.

Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision (a sketch of this step appears at the end of this writeup).

Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders.

## Accomplishments that we're proud of 💪

* Successfully bypassing Nest’s security measures to access the camera feed.
* Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm.
* Fine-tuning Cohere to generate funny and engaging social media captions.
* Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner.

## What we learned 🧠

Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application.

## What's next for Craven 🔮

* **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates.
* **Machine learning improvement:** Experiment with more advanced, deep-learning-based facial recognition models for even better accuracy.
* **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves.
* **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
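As promised, here is a minimal sketch of the KNN face-classification step, assuming the popular `face_recognition` wrapper around dlib plus scikit-learn; our exact training set handling and night-vision tuning differ:

```python
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

def train_classifier(images_by_name):
    """Fit a KNN model on 128-d dlib face encodings, one label per roommate."""
    X, y = [], []
    for name, paths in images_by_name.items():
        for path in paths:
            image = face_recognition.load_image_file(path)
            for encoding in face_recognition.face_encodings(image):
                X.append(encoding)
                y.append(name)
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, y)
    return clf

def identify(clf, snapshot_path):
    """Return the predicted name for each face in a cupboard snapshot."""
    image = face_recognition.load_image_file(snapshot_path)
    return [clf.predict([enc])[0] for enc in face_recognition.face_encodings(image)]

# Hypothetical usage (file paths are placeholders):
# clf = train_classifier({"sam": ["sam1.jpg", "sam2.jpg"], "alex": ["alex1.jpg"]})
# print(identify(clf, "cupboard_snapshot.jpg"))
```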
## Inspiration

The general challenge of UottaHack 4 was to create a hack surrounding COVID-19. We were inspired by a COVID-19 restriction in the province of Quebec which requires stores to limit the number of people allowed inside at once (depending on the store's floor size). As a result, many stores must place an employee at the door to monitor the people entering and exiting, check that they are wearing masks, and make sure they disinfect their hands. Dedicating an employee to the entrance can be a financial drain on a store, and this is where our idea kicks in: delegate the task of monitoring the door to a machine, so the human resources can be put to better use elsewhere in the store.

## What it does

Our hack monitors the entrance of a store and does the following:

1. Counts how many people are currently in the store by tracking the number of people entering and leaving.
2. Verifies that the person entering is wearing PPE (a mask). If no PPE is recognized, a reminder to wear a mask is played from a speaker on the Raspberry Pi.
3. Verifies that the person entering has used the sanitation station and displays a message thanking them for using it.
4. Displays information to people entering, such as how many people are in the store and the store's maximum capacity, reminders to wear a mask, and thanks for using the sanitation station.
5. Provides useful stats to the shop owner about the monitoring of the shop.

## How we built it

**Hardware:** The hack uses a Raspberry Pi and its PiCam to monitor the entrance.

**Monitoring backend:** The program starts by monitoring the floor in front of the door for movement, which is done using OpenCV. Once movement is detected, pictures are captured and stored, and the movement is analyzed to estimate whether the person is entering or leaving the store. Following an entry/exit event, a secondary program analyzes the collection of pictures taken and chooses one of them to be analyzed by the Google Cloud Vision API. The picture sent to the API is checked for three features: faces, object locations (to identify people's bodies), and labels (to look for PPE). Using the info from the Vision API, we can determine whether the person has PPE, and estimate the balance of people entering and leaving by comparing the count of faces to the count of bodies detected: if there are fewer faces than bodies, some people are leaving; if the counts match, everyone is entering (a sketch of this logic follows below). Back in the first program, another point is monitored: the sanitation station. If there is an interaction (movement) with it, we know the person entering has used it.
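For a flavour of the Vision API calls involved, here is a condensed sketch of the entrance check; the "Person" object filter and the mask-label match are simplifications of our actual logic:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def analyze_entrance(jpeg_bytes):
    """One-shot check of an entrance photo: people counts and PPE presence."""
    image = vision.Image(content=jpeg_bytes)
    faces = client.face_detection(image=image).face_annotations
    objects = client.object_localization(image=image).localized_object_annotations
    labels = client.label_detection(image=image).label_annotations

    bodies = [o for o in objects if o.name == "Person"]
    has_ppe = any("mask" in label.description.lower() for label in labels)
    return {
        "faces": len(faces),                          # face => walking in, toward the camera
        "leaving": max(len(bodies) - len(faces), 0),  # body without a face => walking out
        "ppe": has_ppe,
    }
```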
**Cloud backend:** The frontend and monitoring hardware need a unified API to broker communication between the services, as well as storage in the MongoDB data lake; this is where the cloud backend shines. It handles events triggered by the monitoring system, as well as user-defined configurations from the frontend, logging, and storage -- all from a highly available, containerized Kubernetes environment on GKE.

**Cloud frontend:** The frontend allows the administration to set the bounding-box parameters for where the monitored objects are in the store. If a person is wearing a mask and has sanitized their hands, a message appears stating "Thank you for slowing the spread." However, if they are not wearing a mask or have not sanitized their hands, a message states "Please put on a mask." By doing so, those who follow protocols are rewarded, and those who do not are reminded to follow them.

## Challenges we ran into

On the monitoring side, we ran into problems because of pant colors: bright-colored pants registered as PPE to Google's Cloud Vision API (they looked too similar to reflective PPE pants). On the backend architecture side, developing event-driven code was a challenge, as it was our first time working with such technologies.

## Accomplishments that we're proud of

The efficiency of our computer vision is something we are proud of. We initially processed a frame every 50 milliseconds; by optimizing the computer vision code to only process a fraction of our camera feed while maintaining the same accuracy, we went from 50 milliseconds down to 10 milliseconds.

## What we learned

**Charles:** I've learned how to use the Google API.

**Mingye:** I've furthered my knowledge of computer vision and learned about Google's Vision API.

**Mershab:** I built and deployed my first Kubernetes cluster in the cloud. I also learned event-driven architecture.

## What's next for Sanitation Station Companion

We hope to continue improving our object detection and, later on, detect whether customers in the store are at least six feet apart from the person next to them, reminding them to keep their distance throughout the store as well. There is also the planned feature of monitoring more than one point of entry (door) at the same time.
## Inspiration

We are reimagining what social interactions empowered by technology can be. Technology will continue to be a huge factor in everyone’s life, and it is about time it facilitates conversation rather than acting as a barrier to real interactions.

The first area we will tackle is forming the initial meaningful connection between two individuals. Currently, the market is filled with dating apps that gamify meeting people with whom you share sexual chemistry, while solutions for forming platonic relationships are lacking. Not only will our platform provide access to others trying to meet new people, but it goes a step beyond, using data to intelligently predict which people will be compatible. This means we facilitate relationships that are relevant, meaningful, and time-efficient. Understanding each user’s preferences, we can go further and recommend a time and location. This is well suited for partnerships with local businesses, to which our app will drive traffic. The need for this is massive: loneliness is skyrocketing.

## What it does

With Munchies you will **never eat alone again.** Walk into any venue and instantly notify all app users that a new conversation is to be had. You have the option of scheduling it for now or for the future, and of choosing a location close in proximity. Those nearby can choose to accept and will automatically be added to your event (a sketch of the "nearby" selection follows below).

## How I built it

* Node.js backend with an API built with Express.
* Data models designed critically on paper, normalized and beautified in a relational database: PostgreSQL.
* Front-end mobile app in React Native. Expo worked well to speed up the development process.
* GCP for cloud hosting. This was definitely the hardest part -- connecting Google App Engine with the Cloud SQL datastore.

## Challenges I ran into

This was my first time deploying a Node app to GCP App Engine. Turns out, ES6 support is not common. I toyed with using AWS, Azure, etc., but found nothing supporting ES6. This was important, as my entire backend was built by the time I was thinking about hosting it, which meant it was filled with new syntax and async/awaits... It turned out all I needed was Babel. Having never used it before, it took a bit of playing around. It definitely gave me a better understanding of the JavaScript ecosystem :)

## Accomplishments that I'm proud of

Building a fully functioning end-to-end system, complete with a relational DB, mobile app, and backend API.

## What's next for Munchies

So much! As described above, our world is lacking in meaningful connections. Despite more "likes" on social media than ever, loneliness is skyrocketing, and mental health rates are grim. This can't all be solved by having a lunch companion, but we have to start somewhere. An easy next addition is questions that prompt solid conversation, or at a minimum, ice breakers. Beyond this, an entire platform that matches people based on data. We have amazing insight into what will create a good platonic relationship -- we need to stop leaving it up to chance, and take fate into our hands.
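How might the "nearby users" check work? A back-of-the-envelope haversine sketch (the 2 km radius and the data shape are assumptions; a production version would more likely query PostGIS than scan in application code):

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearby_users(event, users, radius_km=2.0):
    """Users close enough to be notified about a new lunch event."""
    return [u for u in users
            if km_between(event["lat"], event["lon"], u["lat"], u["lon"]) <= radius_km]

event = {"lat": 45.5048, "lon": -73.5772}  # a venue in Montreal
users = [{"name": "sam", "lat": 45.5088, "lon": -73.5617},
         {"name": "alex", "lat": 43.6532, "lon": -79.3832}]
print([u["name"] for u in nearby_users(event, users)])  # ['sam']
```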
## Inspiration

With the world in a technology age, it is easy to lose track of human emotions when developing applications to make the world a better place. Searching for restaurants using multiple filters and reading reviews is oftentimes inefficient, leading the customer to give up searching and settle for something familiar. With a more personal approach, we hope to connect people to restaurants that they will love.

## What it does

Using Indico's machine learning API for text analysis, we are able to create personality profiles for individuals and recommend them restaurants enjoyed by people with similar personalities (a sketch of the matching idea follows below).

## How we built it

Backend: We started by drafting the architecture of the application, then defined the languages, frameworks, and APIs to be used within the project. On the day of the hackathon, we proceeded to create a set of mock data from the Yelp dataset. The dataset was then imported into MongoDB and managed through mLab. To query the data, we used Node.js and Mongoose to communicate with the database.

Frontend: The front end is built off the Semantic UI framework. We used default layouts to start and then built on top of them as new functionality was required. The landing page was developed from something a member had done in the past, using Modernizr and Bootstrap slideshow functionality to rotate through background images. Lastly, we used EJS as our templating language, as it integrates with Express very easily.

## Challenges we ran into

1. We realized that the datasets we compiled were not diverse enough to show a wide range of possible results.
2. The team faced a big learning curve throughout the weekend, as we were all picking up new languages along the way.
3. We hit unexpected access limits on the resources we were using for testing.

## Accomplishments that we're proud of

1. Learning new web technologies, frameworks, and APIs that are available and hot in the market at the moment!
2. Using the time before the hackathon to brainstorm and discuss each team member's task in a little more depth.
3. Working together collaboratively using version control through Git!
4. Asking for help and guidance when needed, which leads to a better understanding of how to implement certain features.

## What we learned

Node.js, Mongoose, mLab, Heroku, NoSQL databases, API integration, machine learning, and sentiment analysis!

## What's next for EatMotion

We hope that with our web app and continued effort, we may be able to predict restaurant preferences with a higher degree of accuracy than before.
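One simple way to match profiles is cosine similarity over personality scores. A sketch, where the trait names and numbers are placeholders rather than Indico's actual output schema:

```python
from math import sqrt

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two trait-score dictionaries."""
    keys = a.keys() & b.keys()
    dot = sum(a[k] * b[k] for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

me = {"openness": 0.8, "extraversion": 0.3, "agreeableness": 0.6}
reviewer = {"openness": 0.7, "extraversion": 0.4, "agreeableness": 0.5}
print(cosine(me, reviewer))  # high score => recommend this reviewer's favourites
```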
## See our live demo!

**On Rinkeby testnet blockchain (recommended):** <https://rinkeby.kelas.dev>

**On xDAI blockchain (Warning: uses real money):** <https://xdai.kelas.dev>

## Check out our narrative StoryMaps here!

**Greenery in your Community:** <https://arcg.is/1vu448>

**Culture & Diversity in choosing your Home:** <https://arcg.is/DH511>

## Inspiration

BlockFund's mission is to build a platform that empowers communities with tools and data. We aim to improve outcomes in **community civic engagement and community sustainability.**

*How we do so, BlockFund:*

1. Democratises community funds through blockchain and voting technology - allowing community members to submit their own project proposals and vote.
2. Highlights the need for community environmental sustainability projects by identifying local areas lacking in tree foliage. Importantly, we educate the community through a narrative in an ArcGIS StoryMap. Image processing and deep learning enable the identification of even the smallest tree's foliage. **TeamTreesMini**
3. Aids potential new residents and migrants in looking for a home (and community) that fits their unique cultural heritage, beliefs, and diversity needs, by outlining demographic breakdowns, religious institutions, and amenities -- also educating on the importance and factors to consider through a narrative in an ArcGIS StoryMap.

**1. Democratises community funds through blockchain and voting technology**

In the US, Homeowner Associations (HOAs) are the main medium through which resident members pay community upkeep fees to maintain grounds, master insurance, community utilities, as well as overall community finances. Financial transparency varies between HOAs, but often they only reflect past fund usage and the choices of a few representative members. We sought a solution that democratises the project-funding process – allowing residents to contribute and vote for projects that **actually matter** to them. It's easy for community minorities to go unheard, so our voting system helps to account for that. We adjust and increase the voting weight of residents whose vote has not funded a successful project after a few attempts – thus improving the representation of minorities in any community (a toy model of this weighting appears at the end of this writeup).

**2. Highlights the need for environmental sustainability projects #TeamTreesMini**

Additionally, we empower communities to engage in green urban planning. We mimic #TeamTrees on a communal scale. Climate change is an increasingly prevalent topic, and we believe illustrating the dangers in your backyard is an excellent way to encourage local action. Our StoryMap solution maps the green foliage coverage in your neighbourhood. Then, we empower the community to propose projects on the platform to fund tree planting at each home and in common areas.

**3. Your home: why cultural fit and diversity matter**

After a community profile is made, we also assist new members in choosing a community aligned with their cultural, religious, and diversity interests. When one of our members moved to a neighbourhood heavily skewed toward a different racial group, he faced both explicit and subtle racism growing up. Home seekers already take demographics into consideration, and our solution aids them in making a more informed decision from a cultural perspective. It can also support urban planning for community planners.
We map diversity index scores, demographic data (generational and racial), and religious institutions and amenities – aiding new home seekers in choosing their home. The proverb "Birds of a feather flock together" describes how those of similar taste congregate in groups. However, in our world today, the importance of diversity and of exposing oneself to different opinions and people is crucial to thriving in the workforce.

> Diversity is having a seat at the table. Inclusion is having a voice. And belonging is having that voice be heard. - Liz Fosslien

BlockFund believes that, more than just price or transport convenience, diversity, belonging, and inclusion are key concepts in choosing a place to live. BlockFund is a decentralised autonomous organisation (DAO) that pools community funds, engages the community, and allows transparent voting for projects.

## How we built it

We built and deployed the Decentralized Autonomous Organisation (DAO) smart contract on two EVM-based blockchains: Rinkeby (testnet) and xDAI. We use AlchemyAPI as a node endpoint for our Rinkeby deployment for better data availability and consistency, while our xDAI deployment uses POA's official community node.

We deployed a React.js frontend for quick delivery of our application, leveraging Axios to asynchronously communicate with the OpenAI API (which powers an intuitive Q&A feature promoting universal proposal comprehension) and Ant.Design/Sal for a modern, sleek, and animated user interface.

We use ethers.js to communicate with blockchain nodes, and it supports two main cryptocurrency wallets:

* Burner wallet (our homebrew in-browser wallet made for easy user onboarding)
* Metamask (a popular web3-enabled wallet for those who want better security)

On top of that, our Community Learning Kits are made using ESRI ArcGIS StoryMaps for highly visual storytelling of geographic data. Last but not least, we use Hardhat for smart contract deployment automation.

**Here are some other technologies we used:**

For blockchain:

* Ethereum
* Solidity
* Hardhat

For the front-end client:

* React.js (+ Hooks + Router)
* Axios — asynchronous communication with the OpenAI API
* OpenAI GPT-3 — intuitive Q&A feature for universal proposal comprehension
* Sal — sleek animations
* Ant.Design — modern user interface system

For mapping:

* ArcGIS WebMap
* ArcGIS StoryMap
* ArcGIS-Rest-API
* Custom functions

Datasets:

* 2010 US Census Data
* 2018 US Census Data
* Pima AZ Foliage Data

## Challenges we ran into

Our main challenge was integrating the ArcGIS APIs in a limited timeframe. As it was a new technology for us, we really had to crunch our brainpower. On top of that, deploying a fully working website for other people to try takes a lot of effort, making sure that all of the integrations also work beyond localhost.

## Accomplishments that we're proud of

* We have a live website!
* We launched on two different blockchains: xDAI and Rinkeby.
* React state management!

## What we learned

* Working remotely with colleagues across 4 different timezones is challenging.
* Good React state management practices save a lot of time.

## What's next for BlockFund

* Explore how we can work with local communities to deploy this.
* Run more DAO experiments in smaller scopes (family, small neighborhood, etc.)
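As promised above, here is a toy model of the minority-boosting vote weight. The boost step, cap, and tallies are illustrative; the deployed smart contract's parameters differ:

```python
BOOST_PER_LOSS = 0.25   # extra weight per funding round lost (assumed value)
MAX_WEIGHT = 2.0        # cap so no single vote dominates (assumed value)

def vote_weight(consecutive_losses):
    """A member's weight grows while their supported proposals keep losing."""
    return min(1.0 + BOOST_PER_LOSS * consecutive_losses, MAX_WEIGHT)

def tally(votes):
    """votes: proposal -> list of (member, consecutive_losses) pairs."""
    return {p: sum(vote_weight(losses) for _, losses in vs) for p, vs in votes.items()}

print(tally({
    "plant-trees": [("alice", 3), ("bob", 0)],
    "new-fence": [("carol", 0), ("dave", 0)],
}))  # plant-trees: 1.75 + 1.0 = 2.75 vs new-fence: 2.0
```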
# ConnectWith

We want people to make the best use of their first-degree connections on LinkedIn to achieve their goals in life. Oftentimes, we have amazing ideas that go to waste because we weren't able to find enough people to help fulfill them, and we don't know where to look. It is surprising how much you can learn from a person's LinkedIn profile: LinkedIn's search feature digs through a user's whole profile and searches for keywords throughout. And sometimes we may even miss out on big career opportunities because we are too anxious to strike up a conversation with a stranger on LinkedIn. Our tool allows people to move faster, and it makes sure that no one misses out on their dreams because they were nervous about sending that first message.

We've found this to be useful in the following example scenarios:

* A community director wants to organize a rather big dance class, but she is not sure how she could recruit multiple volunteers to help. Using our tool, she can quickly search through her LinkedIn connections for people with dance experience and use our templates to send them messages. The more messages sent, the more likely someone is to respond.
* A person is interested in building a startup and wants to talk to as many potential partners as possible.
* A fresh grad is desperately looking for jobs and has to send many personal messages and e-mails to recruiters.
* etc.

Since LinkedIn's APIs are not readily available to non-exclusive users, we decided to use Puppeteer to automate most of the process (a sketch of the idea follows below).
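ConnectWith itself uses Puppeteer (Node.js); purely to illustrate the automation pattern, here is an equivalent flow sketched with Playwright in Python. The selectors are placeholders -- LinkedIn's real markup changes often and is not reproduced here -- and the final send step is deliberately left commented out:

```python
from playwright.sync_api import sync_playwright

TEMPLATE = "Hi {name}! I'm organizing a dance class and saw your experience -- interested?"

def message_connections(names):
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.goto("https://www.linkedin.com/login")
        input("Log in manually, then press Enter...")   # avoids storing credentials
        for name in names:
            page.goto("https://www.linkedin.com/search/results/people/?keywords=" + name)
            page.click(f"text={name}")                  # placeholder selector
            page.click("button:has-text('Message')")    # placeholder selector
            page.fill("div.msg-form__contenteditable", TEMPLATE.format(name=name))
            # page.click("button:has-text('Send')")     # left commented out for safety
        browser.close()
```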
## Inspiration

Imagine this: You’re overwhelmed, scrolling through countless LinkedIn profiles, trying to figure out which clubs or activities will help you land your dream job. It feels like searching for a needle in a haystack! Here’s where UJourney steps in: We simplify your career planning by providing personalized paths tailored specifically to your goals. UJourney uses LinkedIn data from professionals in your dream job to recommend the exact clubs to join, events to attend, skills to acquire, and courses to take at your university. Our mission is to transform career exploration into a clear, actionable journey from aspiration to achievement.

## What it does

UJourney is like having a career GPS with a personality. Tell it your dream job, and it will instantly scan the LinkedIn career cosmos to reveal the paths others have taken. No more endless profile scrolling! Instead, you get a curated list of personalized steps—like joining that robotics club or snagging that perfect internship—so you can be the most prepared candidate out there. With UJourney, the path to your dream job isn’t just a distant vision; it’s a series of clear, actionable steps right at your fingertips.

## How we built it

The UJourney project is built on three core components:

1. Gathering Personal Information: We start by seamlessly integrating LinkedIn authorization to collect essential details like name and email. This allows users to create and manage their profiles in our system. For secure login and sign-up, we leveraged Auth0, ensuring a smooth and safe user experience.
2. Filtering LinkedIn Profiles: Next, we set up a MongoDB database by scraping LinkedIn profiles, capturing a wealth of career data. Using Python, we filtered this data based on keywords related to company names and job roles. This process helps us pinpoint relevant profiles and extract meaningful insights.
3. Curating Optimal Career Paths: Our AI model takes it from here. By feeding the filtered data and user information into an advanced model via the Gemini API, we generate personalized career paths, complete with timelines and actionable recommendations. The model outputs these insights in a structured JSON format, which we then translate into an intuitive, user-friendly UI design (a sketch of this step appears at the end of this writeup).

## Challenges we ran into

Problem: LinkedIn scraping restrictions. Our initial plan was to directly scrape LinkedIn profiles based on company names and job roles to feed data into our AI model. However, LinkedIn’s policies prevented us from scraping directly from their platform. We turned to a third-party LinkedIn scraper, but this tool had significant limitations, including a restriction of only 10 profiles per company and no API for automation. While we utilized automation tools like Zapier and HubSpot CRM to streamline part of our workflow, we ultimately faced a significant roadblock.

Solution: Manual database creation. To work around these limitations, we manually built a database focused on the top five most commonly searched companies and job roles. While this approach allowed us to gather essential data, it also meant that our database was initially limited in scope. This manual effort was crucial for ensuring we had enough data to effectively train our AI model and provide valuable recommendations. Despite these hurdles, we adapted our approach to ensure UJourney could deliver accurate and practical career insights.

## Accomplishments that we're proud of
1. Rapid Development: We successfully developed and launched UJourney in a remarkably short period of time. Despite the tight timeline, we managed to pull everything together efficiently and effectively.
2. Making the Most of Free Tools: Working with limited resources and relying on free versions of various software, we still managed to create a fully functional version of UJourney. Our resourcefulness allowed us to overcome budget constraints and still deliver a high-quality product.
3. University-Specific Career Plans: One of our standout achievements is the app’s ability to provide personalized career plans tailored to specific universities. By focusing on actionable steps relevant to users' educational contexts, UJourney offers unique value that addresses individual career planning needs with precision.

## What we learned

1. Adaptability is Key: Our journey taught us that flexibility is crucial in overcoming obstacles. When faced with limitations like LinkedIn's scraping restrictions, we had to quickly pivot our approach. This experience reinforced the importance of adapting to challenges and finding creative solutions to keep moving forward.
2. Data Quality Over Quantity: We learned that the quality of data is far more important than sheer volume. By focusing on the most commonly searched companies and job roles, we ensured that our AI model could provide relevant and actionable insights, even with a limited dataset. This underscored the value of precision and relevance in data-driven projects.
3. Resourcefulness Drives Innovation: Working within constraints, such as using free software and limited resources, highlighted our team’s ability to innovate under pressure. We discovered that resourcefulness can turn limitations into opportunities for creative problem-solving, pushing us to explore new tools and methods.
4. User-Centric Design Matters: Our focus on creating university-specific career plans taught us that understanding and addressing user needs is essential for success. Providing tailored, actionable steps for career planning showed us the impact of designing solutions with the user in mind, making the tool genuinely useful and relevant.

## What's next for UJourney

What exciting features are on the horizon?

1. Resume Upload Feature: To kick things off, we’re introducing a resume upload feature. This will allow users to gather personal information directly from their resumes, streamlining profile creation and reducing manual data entry.
2. Real-Time University Information: Next, we’ll be scraping university websites to provide real-time updates on campus events and activities. This feature will enable users to see upcoming events and automatically add them to their calendars, keeping them informed and organized.
3. Enhanced Community Involvement: We’ll then roll out features that allow users to view their friends' dream jobs and career paths. This will facilitate connections with like-minded individuals and foster a community where students can share experiences related to jobs and university clubs.
4. Automated LinkedIn Web Scraping: To improve data collection, we’ll automate LinkedIn data scraping. This will help expand our database with up-to-date and relevant career information, enhancing the app’s ability to provide accurate recommendations.
5. AI-Driven Job Recommendations: Finally, we’ll leverage real-time market information and AI to recommend job opportunities that are ideal for the current year.
Users will also be able to apply for these jobs directly through the app, making the job application process more efficient and seamless.

These upcoming features are designed to enhance the UJourney experience, making career planning, networking, and job applications more intuitive and effective. Stay tuned for these exciting updates!
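As referenced in "How we built it", the path-generation step could be sketched like this with the google-generativeai Python package; the prompt wording and JSON schema here are illustrative, not our production values:

```python
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

def career_path(dream_job, profiles, user):
    """Ask the model for a timeline of actions, returned as structured JSON."""
    prompt = (
        f"A student at {user['university']} wants to become a {dream_job}.\n"
        f"LinkedIn profiles of people in that role: {json.dumps(profiles)}\n"
        'Respond with bare JSON only: {"timeline": [{"year": 1, "actions": ["..."]}]}'
    )
    response = model.generate_content(prompt)
    return json.loads(response.text)  # assumes the model returns bare JSON
```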
# vly.ai: generating full-stack SaaS applications in just 1 click

We generate full-stack web apps optimized for SaaS use cases (front end, back end, and integrations such as Stripe, email, texting, and more) completely using AI, without the need for any programming knowledge.

### going for: best use of AI Agents / AI project, best use of Reflex, most viable startup / most commercially viable startup (YC, Pear, etc.)

This is a no-code system that converts our very own natural-language programming framework into full-stack Reflex-based code, allowing us to achieve unparalleled performance from raw code without the trade-offs of a no-code system. This means unlimited flexibility and scalability in enterprise-grade software, all generated using AI. **We quite literally replace the need to hire a web developer.**

*built by Stanford, Berkeley, and UW CS students. majority first hackathon project (beginner hack)*

## THE PROBLEM:

Building a SaaS web application is hard. If you were trying to build one, here are your options:

1. Hiring a developer or agency: a. $10,000-$100,000, 2-6 months b. Expensive to iterate, prone to miscommunication
2. Developing an app on your own: a. Free + software costs, 2-12 months b. Requires significant amounts of personal time; not many people can do it
3. Using a no-code tool like bubble.io: a. Free + software costs, 1-6 months b. Highly restrictive and limited, with a learning curve requiring lots of time
4. Hiring a no-code developer or agency: a. $5,000-$50,000, 1-4 months b. Still highly restrictive and limited, prone to miscommunication

## THE SOLUTION: vly.ai

A component-based natural-language framework that takes specific SaaS project ideas and turns them into code. Through our framework, we are able to laser-focus on specific features exactly the way the user wants them. These features can be produced reliably through our component-based system. Essentially: we abstract layer by layer. This means we do not rely on AI to produce bottom-level code (where it can be inaccurate). Finally, our abstractions and context system make it very easy for the AI to understand the code base and locate edits.

## What sets us apart

Here is how we compare to other tools out there trying to solve the same problem:

* Reliable and robust without the technical issues
* Scales effectively in size and complexity
* Significantly faster load times and optimizations not possible in no-code
* More flexibility and complexity in terms of what can be produced
* Faster integration of external APIs and capabilities
* Automatic generation of front-end and back-end code not possible in no-code
* Ownership of code and the ability to export and expand

Our solution also maintains the benefits of a no-code system:

* Automatic deployment and hosting on the web for both front end and back end
* Automatic scaling and optimization
* User-friendly environment for interacting with data
* Ability to make changes quickly and re-deploy instantly

## Our solution builds what a no-code tool can build, but in hours instead of months:

You can build enterprise-grade software with unlimited features, custom to exactly what you need:

* CRM systems
* Custom internal tools
* Niche platforms
* Dashboards and client portals
* Interfaces on data
* GPT integrations and wrappers on custom data
* Marketplaces, web apps, and more
* SaaS applications for founders to launch and make money

We also allow business owners to create custom software at dirt-cheap costs.
So instead of paying for 5 subscriptions to manage your business, you can now combine everything into 1 super-app, handling, for example:

* Managing employee payroll
* Tracking hours
* POS and inventory tracking
* A front site and order processing

# The Technology built at TreeHacks: breakdown

Here is how our TreeHacks project operates differently.

## attention is all you need

We mean context. You can't just say "build me a full-stack blogging site" and expect the AI to produce the next Medium. You need to be specific -- as specific as possible. You need to describe every page, feature, and component, or else it may not give you what you want, and most of the time the AI doesn't have the capacity to build out this much logic on its own. So, here's the process:

1. The user enters a broader prompt
2. The AI conversationally details the prompt and reviews it with the user
3. This cycle repeats for a description and a list of features & user flow
4. Then, the AI helps build out the database schema in natural language
5. The process continues with descriptions for each page, all the way down to the component level

Eventually, you create one giant configuration file split up from the top (more context-related) to the bottom (more specific to exact operations). What you have at the bottom isn't too far off from just using Reflex and calling pre-built components with pre-defined parameters. This form of abstraction makes it much more straightforward for the AI.

## The .vly programming language

We standardized the format of this config file and optimized its AI-friendliness to the point where it has pretty much become its own language. It's now called the vly programming language (with .vly file extensions), which uses natural language arranged in an intuitive format. This allows people with no programming knowledge to write out what they want in immense detail for our AI to implement. This specificity is required to ensure that the desired level of depth and complexity is reached -- something current AI code generators lack. This is also how we set ourselves apart in comprehensiveness.

## AI-Agent specific capabilities

Due to the specificity of the Reflex framework, we prompt-engineer different AI agents for each step of the process to produce exactly what we need. For example, we have one agent for configuring the database specifically, based on the .vly framework. We have others for different parts of the process, from selecting to implementing components from our library. Also, the limited component library requires the generated vly language to choose from the existing library, which is fed directly into the system message. The AI then knows the exact way to implement each component, which takes the form of a function with parameters.

## Component Library

The limitation is that we rely on pre-built components rather than components built on the fly. We chose this route because we didn't want to rely on AI to write bottom-level code; you definitely don't want the AI writing your Stripe payment scripts.

## Integrated SaaS Specific Technologies

We reliably integrate Stripe, email, text, and more into our tech stack. The vertical approach we take allows us to ensure correctness in our work and creates reliability, albeit sacrificing flexibility in the short term.

## Use of Reflex and language abstraction

Our full-stack web app uses Reflex to allow for simple, intuitive abstraction of web applications. Rather than the verbose setups seen in Next.js, Reflex abstracts all of that for us already, so the AI can focus on higher-level operations rather than specific syntax.
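To ground this, here is the kind of Reflex code the generator might emit for a trivial counter component -- illustrative output only, not our actual codegen, which targets far richer components:

```python
import reflex as rx

# State holds the app's data; event handlers mutate it.
class CounterState(rx.State):
    count: int = 0

    def increment(self):
        self.count += 1

# A page is just a function returning a component tree.
def index() -> rx.Component:
    return rx.vstack(
        rx.heading(CounterState.count),
        rx.button("Increment", on_click=CounterState.increment),
    )

app = rx.App()
app.add_page(index)
```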
# The future of vly.ai: committed to becoming a startup

We are highly committed to turning this venture into a startup company. We have already contacted clients to build projects for, in order to raise funds and expand our component library. We hope to someday generate software cheaper, faster, and more reliably than other companies, bringing consumers technologies fit for their needs that don't cost large sums of money.

## Expanding library of components

We are actively working on expanding the AI's capabilities to tackle more and more components and specific external features based on demand from clients. This could mean instant implementation of large-scale features that often replace existing tools. All potential prize money will go towards funding this project and its extension.
## What it does

* Connects people in need of food with those who are able and willing to donate their surplus food supply. The app utilizes geolocation to determine real-time locations.

## Challenges I ran into

* Implementing Firebase to allow client-server interaction

## What's next for Nourish

* Implementing a fully functional supplier and volunteer portal using the Firebase database and Node.js.
* Implementing geofencing to allow for pop-up notifications when someone in need of food is in the vicinity of a food provider.
* Registering the domain and going live.
## Inspiration

The first step of our development process was conducting user interviews with university students within our social circles. When asked about recently developed pain points, 40% of respondents stated that grocery shopping has become increasingly stressful and difficult with the ongoing COVID-19 pandemic. The respondents cited motivations including a loss of disposable time (due to an increase in workload from online learning), tight spending budgets, and fear of exposure to COVID-19.

While developing our product strategy, we realized that a significant pain point in grocery shopping is the process of price-checking between different stores. This process requires the user to visit each store (in person and/or online), check the inventory, and manually compare prices. Consolidated platforms to help with grocery list generation and payment do not exist in the market today - as such, we decided to explore this idea.

**What does G.e.o.r.g.e stand for?: Grocery Examiner Organizer Registrator Generator (for) Everyone**

## What it does

The high-level workflow can be broken down into three major components:

1. Python (Flask) and Firebase backend
2. React frontend
3. Stripe API integration

Our backend Flask server is responsible for web scraping and generating semantic, usable JSON for each product item, which is passed through to our React frontend. Our React frontend acts as the hub for tangible user-product interactions. Users are given the option to search for grocery products, add them to a grocery list, generate the cheapest possible list (a toy sketch of this step appears at the end of this writeup), compare prices between stores, and make a direct payment for their groceries through the Stripe API.

## How we built it

We started our product development process by brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment-service application, we drew up designs and prototyped using Figma, then proceeded to implement the front-end designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data.

## Challenges we ran into

Once we had settled on our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs for grocery store price data, so we decided to do our own web scraping. This led to complications with slower server responses, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which allowed us to flesh out our end-to-end workflow.

## Accomplishments that we're proud of

Some of the websites we had to scrape had lots of information to comb through, and we are proud of how we picked up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that includes even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to handle real money with our app.

## What we learned

We picked up skills such as web scraping to automate the process of parsing through large data sets.
Web scraping dynamically generated websites can also lead to slow server response times, which are generally undesirable. It also became apparent that we should have set up virtual environments for our Flask applications, so that team members would not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3am will make you want to pull out your hair, but at least we now know that it can be done :’)

## What's next for G.e.o.r.g.e.

Our next steps with G.e.o.r.g.e. are to improve the overall user experience of the application by standardizing our UI components and UX workflows with e-commerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as to create more seamless payment solutions.
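As referenced above, the "cheapest possible list" step can be illustrated with a simple single-store minimization. The price data and its shape are made up for the example; the real app works from scraped prices:

```python
PRICES = {  # store -> item -> price (made-up numbers)
    "StoreA": {"milk": 4.29, "eggs": 3.99, "bread": 2.49},
    "StoreB": {"milk": 3.99, "bread": 2.89},
}

def cheapest_store(grocery_list):
    """Pick the single store with the lowest total that stocks every item."""
    totals = {
        store: sum(prices[item] for item in grocery_list)
        for store, prices in PRICES.items()
        if all(item in prices for item in grocery_list)
    }
    if not totals:
        raise ValueError("no single store stocks every item")
    store = min(totals, key=totals.get)
    return store, round(totals[store], 2)

print(cheapest_store(["milk", "bread"]))  # ('StoreA', 6.78)
```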
## Inspiration

After witnessing the abundance of food waste here in our community at Stanford, we became discouraged by the social attitudes pertaining to this issue. Members of our team come from backgrounds that place emphasis on mindfulness surrounding food consumption and security. We hope to provide a resource to people who may need additional access to food, such as low-income families and unhoused people.

## What it does

FoodSource provides a way for surplus food (from sports games and other venues, or imperfect produce) to be redistributed. Food donors post about extra food they have to donate, which distributors (e.g., non-profits, food drives, church orgs) can pick up and move to a location where it can be redistributed. Consumers in need can view upcoming and current events held by distributors, giving them an additional resource for food.

## How we built it

We used Bootstrap, PyCharm, and Procreate for website design, front-end visuals, and back-end engineering.

## Challenges we ran into

Our biggest challenge was finding a back-end framework, hosting, and accessing a database. Because we could not develop our back-end infrastructure without most of our front-end, we were pressed to complete our front-end development quickly.

## Accomplishments that we're proud of

We are super proud of getting through our first hackathon! We're also proud of creating a product whose causes we resonate with.

## What we learned

We learned how to use Bootstrap and PostgreSQL. Some of our team members learned how to create a website from scratch, along with planning and implementing all the details involved in web development.

## What's next for FoodSource

We hope to allow distributors to upload images of food for consumers, set up a messaging system between distributors and food donors, and enable consumers to filter results by food allergens and location. Once our prototype becomes a more fleshed-out and structured product, we hope to gain more publicity, as the effectiveness of our site depends on public awareness.
winning
## Inspiration
COVID-19 has made it difficult for low-income families and seniors to access food banks. In addition to long line-ups at existing food banks, individuals need to travel for longer periods of time due to transit delays. People need food but they also need to access it safely and in a way that upholds their dignity.
## What it does
Freshco is a web app that allows food banks to input details of the recipients (addresses, allergies, etc.) into its database. It also allows food banks to keep track of their volunteer list and assign deliveries to those volunteers. On the recipient's end, they can also update their grocery list to personalize their delivered food.
## Accomplishments that we're proud of
Learning React (I've always wanted to learn this). Also, Luigi and Eric are proud of coding a web app in Python and learning how to work with frameworks!
## What's next for Freshco
Our app is meant to be three-pronged: in addition to the organization's side, we also want to allow food recipients to browse through the organization's inventory (grocery-style). We also want to allow volunteers to create their own accounts and see and choose their deliveries.
## Inspiration
Both of us study in NYC and take subways almost every day, and we notice the rampant food insecurity and poverty in an urban area. In 2017, 40 million people struggled with hunger (source: Feeding America), yet food waste levels remain at an all-time high ("50% of all produce in the United States is thrown away," source: The Guardian). We wanted to tackle this problem because it affects a huge population, and we see these effects in and around the city.
## What it does
Our webapp uses machine learning to detect produce and labels of packaged foods. The webapp collects this data and stores it in a user's ingredients list. Recipes are automatically found using the Google Search API from the ingredients list. Our code parses through the list of ingredients and generates the recipe that would maximize the number of food items used (also based on spoilage). The user may also upload their receipt or grocery list to the webapp. With these features, the goal of our product is to reduce food waste by maximizing the ingredients a user has at home. With our trained models that detect varying levels of spoiled produce, a user is able to make more informed choices based on the webapp's recommendations.
## How we built it
We first tried to detect images of different types of food using various platforms like OpenCV and AWS. After we had this detection working, we used Flask to display the data on a webapp. Once the information was stored on the webapp, we automatically generated recipes based on the list of ingredients. Then, we built the front end (HTML5, CSS3), including UX/UI design, into the implementation. We then shifted our focus to the back end, where we decided to detect text from receipts, grocery lists, and labels (packaged foods) that we also displayed on our webapp. On the webapp we also included an FAQ page to educate our users on this epidemic, and we posted a case study on the product in terms of UX and UI design.
## Challenges we ran into
We first used OpenCV for image recognition, but then we learned about Amazon Web Services, specifically Amazon Rekognition, to identify text and objects in order to detect expiration dates, labels, produce, and grocery lists. We trained models with scikit-learn in Python to detect levels of spoilage in produce. We encountered merge conflicts with GitHub, so we had to troubleshoot with the terminal in order to resolve them. We were new to using Flask, which we used to connect our Python files to a webpage for display. We also had to choose certain features over others that would best fit the needs of the users. This was also our first hackathon ever!
## Accomplishments that we're proud of
We feel proud to have learned new tools in different areas of technology (computer vision, machine learning, different languages) in a short period of time. We also made use of the mentor room early on, which was helpful. We learned different methods to implement similar ideas, and we were able to choose the most efficient one (for example, AWS was more efficient for us than OpenCV). We also used different functions in order to avoid repeating lines of code.
## What we learned
New technologies and different ways of implementing them. We both had no experience in ML or computer vision prior to this hackathon. We learned how to divide an engineering project into smaller tasks that we could complete. We managed our time well, so we could choose workshops to attend but also focus on our project and get rest.
## What's next for ZeroWaste
In a later version, ZeroWaste would store and analyze the user's history of food items and recommend recipes (which max out the ingredients that are about to expire, using computer vision), as well as other nutritional items similar to what the user consistently eats, through ML. In order to tackle food insecurity at colleges and schools, ZeroWaste would detect when fresh produce will expire and predict when an item may expire based on the climate/geographic region of the community. We had hardware (a Raspberry Pi) which we could have used with a software ML method, so in the future we would want to test the accuracy of our code with the hardware.
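As a rough illustration of the detection step described above, here is what a call to Amazon Rekognition's label detection can look like with boto3. The image path is a placeholder, and this assumes AWS credentials and a region are already configured; the spoilage-level models were trained separately in scikit-learn:

```
# Sketch of produce/label detection with Amazon Rekognition (boto3).
import boto3

rekognition = boto3.client("rekognition")  # assumes credentials/region configured

def detect_food_labels(image_path, min_confidence=80.0):
    with open(image_path, "rb") as f:
        response = rekognition.detect_labels(
            Image={"Bytes": f.read()},
            MinConfidence=min_confidence,
        )
    # Each detected label carries a name and a confidence score.
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]

print(detect_food_labels("fridge_photo.jpg"))  # placeholder image
```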
## Inspiration
With evolutions in bipedal robotics, balance has become something that we take for granted. Yet despite the many steps taken forward in advancing technological walking, we've neglected to innovate and support our elderly populations with their mobility issues. The older we get, the more dangerous a fall can become. We set out to create a solution that provides instant aid to seniors who have suffered a loss of coordination, applying advanced technology and APIs to locate and support them.
## What it does
SecureStep is a smart walking cane designed for seniors in old age homes, responsible for notifying care workers when a fall has occurred. SecureStep's built-in sensors detect falls and transmit an exact geolocation of the incident. Paired with the power of the MappedIn SDK, the geolocation is published to an in-depth floor plan of the old age home, alerting caregivers of the incident and showing the best indoor route to reach the victim. Communication over Wi-Fi between the cane's microcontroller and a local webpage ensures urgent care for a fallen senior. Any location covered by the Wi-Fi network will allow the cane to connect to the webpage, enabling long-range data communication. The webpage displays a list of the cane's acceleration and gyroscope data, along with the longitude and latitude. A map of the floor plans is also included and displays a route to a senior after a fall has occurred.
## How we built it
SecureStep was developed with many technologies. The mechanical enclosure was designed in SolidWorks, with space allocated for an ESP8266 microcontroller, a 6V battery pack, an MPU6050 accelerometer/gyro sensor, and an RGB LED used for displaying cane status (blinking blue - connecting to Wi-Fi, solid blue - searching for the React webpage server, green - connected and upright, red - fall detected). The ESP microcontroller transmits the MPU data to a React webpage via a built-in Wi-Fi module, processing the acceleration of the cane to verify whether a fall has occurred. The ESP also calls the Google Cloud Geolocation API to triangulate its latitude and longitude based on other Wi-Fi devices on the network. The React webpage receives and displays all raw data, along with a MappedIn map of E7. The map is updated with the latitude and longitude of the cane only when a fall is detected, and a path is drawn from a dedicated medical zone to the incident location. Once the cane is returned to its upright position, the fall detection warning is removed along with the path. This ensures that a cane falling on its own and being picked back up does not generate a false positive.
## Challenges we ran into
The first (and definitely most annoying) challenge we ran into was uploading code to the ESP module. After an hour of verifying drivers, chugging coffee, and praying to C itself, we realized that our micro-USB cable was unable to transfer data... :/
Besides small annoyances and hurdles to overcome, a significant amount of time was spent trying to perfect the communication between the ESP and the webpage. With over 1000 people using the HTN Wi-Fi network, our solution had to find the right balance between polling for information and not blowing up our ESP!
## Accomplishments that we're proud of
Our team is incredibly proud of the speed with which we developed our idea and brought a fully functional prototype to fruition over the course of the weekend. With no pre-planned idea or resources, we let the technology speak to us and guide us over the past two days! It's truly rewarding to have finished a project that not only helped us learn more skills but also created a solution to a very real problem that affects elderly people across the world. We were also excited to work with amazing sponsored technology and implement solutions to expand on their API functionality, taking our project to the next level :)
## What we learned
It wouldn't be a hackathon if we had learned nothing. SecureStep has been a fantastic opportunity for our team to try out new APIs, take a crack at web development with React, and expand our understanding of network communication and synchronization.
## What's next for SecureStep
Moving forward, there are a few changes we'd love to add:
* Make an enclosure that can withstand a full drop
* Create better compatibility for mobile devices
* Speed up the Wi-Fi communication
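The fall-detection logic itself runs on the ESP8266, but the core heuristic is simple enough to sketch in a few lines of Python. The thresholds below are illustrative assumptions, not the tuned values from the cane:

```
# Sketch of the fall heuristic: an acceleration spike followed by the cane
# resting far from upright. Threshold values are made up for illustration.
import math

def is_fall(ax, ay, az, pitch_deg, spike_g=2.5, tilt_deg=60.0):
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)  # total acceleration in g's
    impact = magnitude > spike_g        # sudden jolt from hitting the floor
    tipped = abs(pitch_deg) > tilt_deg  # cane no longer upright afterwards
    return impact and tipped

# A 3g jolt ending with the cane nearly horizontal reads as a fall:
print(is_fall(2.1, 1.8, 1.2, pitch_deg=80.0))  # True
```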
losing
## Inspiration
At reFresh, we are a group of students looking to revolutionize the way we cook and use our ingredients so they don't go to waste. Today, America faces a problem of food waste. Wasted food contributes to the acceleration of global warming, as more produce is needed to maintain the same levels of demand. In a startling report from The Atlantic, "the average value of discarded produce is nearly $1,600 annually" for an American family of four. In terms of Double-Doubles from In-N-Out, that comes to around 400 burgers. At reFresh, we believe that this level of waste is unacceptable in our modern society; imagine every family in America throwing away 400 perfectly fine burgers. Therefore, we hope that our product can help reduce food waste and help the environment.
## What It Does
reFresh offers users the ability to input ingredients they have lying around and find the corresponding recipes that use those ingredients, making sure nothing goes to waste! Then, from the ingredients left over from a recipe that we suggested to you, more recipes utilizing those same ingredients are suggested, so you get the most usage possible. Users have the ability to build weekly meal plans from our recipes, and we also offer a way to search for specific recipes. Finally, we provide an easy way to view how much of an ingredient you need and the cost of those ingredients.
## How We Built It
To make our idea come to life, we utilized the Flask framework to create a web application that users can work with easily and smoothly. In addition, we utilized a Walmart Store API to retrieve various ingredient information such as prices, and a Spoonacular API to retrieve recipe information such as the ingredients needed. All the data is then backed by SQLAlchemy, which stores ingredient, recipe, and meal data.
## Challenges We Ran Into
Throughout the process, we ran into various challenges that helped us grow as a team. In a broad sense, some of us struggled with learning a new framework in such a short period of time and using that framework to build something. We also had issues with communication and ensuring that the features we wanted implemented were made clear. There were times when we implemented things that could have been done better if we had communicated more. In terms of technical challenges, it definitely proved to be a challenge to parse product information from Walmart, to use the SQLAlchemy database to store various product information, and to utilize Flask's framework to continuously update the database every time we added a new recipe. However, these challenges definitely taught us a lot of things, ranging from a better understanding of programming languages to learning how to work and communicate better in a team.
## Accomplishments That We're Proud Of
Together, we are definitely proud of what we have created. Highlights of this project include the implementation of a SQLAlchemy database, a pleasing and easy-to-read splash page complete with an infographic, and being able to get two different APIs to feed off of each other and provide users with a new experience.
## What We Learned
This was the first hackathon for all of us, and needless to say, we learned a lot. As we tested our physical and mental limits, we familiarized ourselves with web development, became more comfortable with stitching together multiple platforms to create a product, and gained a better understanding of what it means to collaborate and communicate effectively in a team. Members of our team gained more knowledge in databases, UI/UX work, and popular frameworks like Bootstrap and Flask. We also definitely learned the value of concise communication.
## What's Next for reFresh
There are a number of features that we would like to implement going forward. Possible avenues of improvement would include:
* User accounts to allow ingredients and plans to be saved and shared
* Improvement in our search to fetch more mainstream and relevant recipes
* Simplification of the ingredient selection page by combining ingredients and meals in one centralized page
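For a sense of what the Spoonacular side of this could look like, here is a hedged sketch of a recipe lookup by ingredients. The API key is a placeholder, and `ranking=2` asks Spoonacular to minimize missing ingredients first:

```
# Sketch of a recipe lookup with Spoonacular's findByIngredients endpoint.
import requests

API_KEY = "YOUR_SPOONACULAR_KEY"  # placeholder

def recipes_for(ingredients):
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),
            "number": 5,
            "ranking": 2,  # prefer recipes with the fewest missing ingredients
            "apiKey": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return sorted(resp.json(), key=lambda r: r["missedIngredientCount"])

for r in recipes_for(["apples", "flour", "sugar"]):
    print(r["title"], "- missing", r["missedIngredientCount"])
```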
## Inspiration
Almost 2.2 million tonnes of edible food is discarded each year in Canada alone, resulting in over 17 billion dollars in waste. A significant portion of this is due to the fact that keeping track of expiry dates for the wide range of groceries we buy is, put simply, a huge task. While brainstorming ideas to automate the management of these expiry dates, discussion came to the increasingly outdated usage of barcodes for Universal Product Codes (UPCs); when the largest QR codes can store [thousands of characters](https://stackoverflow.com/questions/12764334/qr-code-max-char-length), why use so much space for a 12-digit number? By building upon existing standards and the everyday technology in our pockets, we're proud to present **poBop**: our answer to food waste in homes.
## What it does
Users are able to scan the barcodes on their products and enter the expiration date written on the packaging. This information is securely stored under their account, which keeps track of what the user has in their pantry. When products have expired, the app triggers a notification. As a proof of concept, we have also made several QR codes which show how the same UPC codes can be encoded alongside expiration dates in a similar amount of space, simplifying this scanning process.
In addition to this expiration date tracking, the app is also able to recommend recipes based on what is currently in the user's pantry. In the event that no recipes are possible with the provided list, it will instead recommend the recipes with the fewest missing ingredients.
## How we built it
The UI was made with native Android, with the exception of the scanning view, which made use of the [code scanner library](https://github.com/yuriy-budiyev/code-scanner). Storage and hashing/authentication were taken care of by [MongoDB](https://www.mongodb.com/) and [Bcrypt](https://github.com/pyca/bcrypt/) respectively. Finally, in regards to food, we used [Buycott](https://www.buycott.com/)'s API for UPC lookup and [Spoonacular](https://spoonacular.com/) to look for valid recipes.
## Challenges we ran into
As there is no official API or publicly accessible database for UPCs, we had to test and compare multiple APIs before determining Buycott had the best support for Canadian markets. This caused some issues, as a lot of processing was needed to connect product information between the two food APIs. Additionally, our decision to completely segregate the Flask server and apps occasionally resulted in delays when prioritizing which endpoints should be written first, how they should be structured, etc.
## Accomplishments that we're proud of
We're very proud that we were able to use technology to do something about an issue that bothers not only everyone on our team on a frequent basis (something something cooking hard) but also has large-scale impacts at the macro level. Some of our members were also very new to working with REST APIs, but coded well nonetheless.
## What we learned
Flask is a very easy-to-use Python library for creating endpoints and working with REST APIs. We learned to work with endpoints and web requests using Java/Android. We also learned how to use local databases on Android applications.
## What's next for poBop
We thought of a number of features that unfortunately didn't make the final cut, like tracking the nutrition of the items you have stored and consumed. By tracking nutrition information, the app could also act as a nutrient planner for the day. The application would also have included a shopping list feature, where you can quickly add items you are missing from a recipe, or use it to help track nutrition for your upcoming weeks. We were also hoping to allow the user to add more details to the items they are storing, such as notes on what they plan to use them for, or the quantity of items. Some smaller features that didn't quite make it included getting a notification when the expiry date is getting close, and data sharing for people sharing a household. We were also thinking about creating a web application, so that poBop would be more widely available.
Finally, we strongly encourage you, if you are able and willing, to consider donating to food waste reduction initiatives and hunger elimination charities. One of the many ways to get started can be found here:
<https://rescuefood.ca/>
<https://secondharvest.ca/>
<https://www.cityharvest.org/>
# Love,
# FSq x ANMOL
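The proof-of-concept QR codes mentioned above are easy to reproduce; here is a sketch using the Python `qrcode` library. The `UPC|date` payload format is our own illustrative convention, not an existing standard:

```
# Sketch: pack a UPC and an expiration date into one QR code.
import qrcode

def make_expiry_qr(upc, expiry_iso, out_path):
    payload = f"{upc}|{expiry_iso}"  # e.g. "012345678905|2021-03-14"
    img = qrcode.make(payload)       # a 12-digit UPC plus a date fits easily
    img.save(out_path)

make_expiry_qr("012345678905", "2021-03-14", "milk_qr.png")
```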
# Inspiration
As a new hire, you probably know the pain of trying to get familiar with company culture, spending hours digging through large, complicated codebases, or having to dig through a library of technical messages just to understand coworkers. In the end, a tremendous amount of essential, work-related knowledge gets lost in conversations on company apps like Slack or Discord. This information isn't stored in a structured, easily navigable format the way it is in collaboration tools like Notion or Confluence, where content can be organized deliberately. This results in much of the context and background on key decisions and discussions being inaccessible to team members. Solving this key workflow issue would enable teams to better document their decisions and refer to this conversational knowledge in the future, which would help increase productivity.
We devised an approach to solving this problem that takes advantage of backlinks, a powerful concept used by note-taking apps like Roam Research and Obsidian. Backlinks are essentially bidirectional links; with traditional links, you can follow a link to a destination but cannot see all the places that link to this destination. Backlinks/bidirectional links enable this functionality, allowing users to build a network of related ideas through backlinks.
# What it does
Backlink enables users to quickly save conversations from Slack into relevant pages in their Notion database. All they need to do is add a backlink with a topic name (e.g. [[Topic Name]]) to their message (or add backlinks in the replies to a message) and they will be able to access all mentions of that topic through a "Topic Name" page within their Notion database. This enables users to seamlessly view all the conversations around a particular topic, feature, or idea, drastically improving their ability to extract knowledge from their chat logs. If a topic page doesn't exist, simply adding a backlink to a reply or message will dynamically create a new page to store relevant references. Another key feature is that the Notion page will automatically include links to the messages in addition to the content, allowing users to jump to the context and understand the conversation.
# How we built it
Tools used: Golang, CockroachDB, Slack API, Notion API, GCP
CockroachDB was crucial to store the mappings we generated to connect backlink names to Notion pages. We also store our configurations for Slack workspaces so we can connect the correct Slack and Notion instances together. We decided to use CockroachDB due to its extremely high reliability, which helps us ensure that our users' valuable information stays available, even in the worst of situations. Additionally, CockroachDB is compatible with Postgres, so we were able to easily apply all of our existing experience with confidence, which was a huge bonus under the strict time constraints of the hackathon.
The project has 2 main components: a Slack bot and a Notion integration.
Slack bot: The Slack bot looks out for user messages and scans them for backlinks in the text. It then uses regex to isolate the backlinks and retrieves the Notion page IDs from the CockroachDB database. If the backlink doesn't exist, it creates a new page in Notion for that backlink and adds the content to that page. If the backlink does exist, it simply adds the message content and link to the page.
Notion integration: The Notion integration is a Go package custom-written to interact with Notion's new API (which is in beta). This package can create subpages and add content to pages, and it is what allows the Slack bot to integrate with Notion.
# Challenges we ran into
We ran into issues with the Notion API. The Notion API is very new and only in beta. As a result, it's missing many features and has documentation that is hard to understand. Because of how hard it was to understand, iterating on the Notion integration proved to be difficult.
The Slack API was also hard to navigate, but for the opposite reason! Layers upon layers of documentation were provided, but not all of it was up to date, and a lot of it was deprecated. There were also issues with bugs and weird behaviour that made it difficult to quickly identify the correct API endpoints to use for a given task.
# Accomplishments that we're proud of
We are proud that we built a service that will be useful, not only to us, but potentially to others as well.
# What we learned
We learned how to work with CockroachDB and Google Cloud Platform. We faced issues with the APIs we wanted to use and integrate, but we worked through them and ended up with better skills and knowledge involving different APIs.
# What's next for Slack Backlinks
We'd love to productize and ship our project! We think it'd be extremely useful to teams in its current form, and have identified areas for potential improvements:
* Automatically capture conversation/context: currently, users can save individual messages. Using natural language processing and examining timestamps, we can capture the entire conversation/context and store it in a nested representation. This would allow users to quickly understand the context of a message. A related improvement is to use abstractive text summarization to summarize the content of a conversation.
* Allow message group capture: implementing the ability to select multiple messages to capture together, and getting all the replies for a particular message to be saved along with it.
* Embed HTML Slack messages: replace the plaintext representation of the message with a styled HTML embed in Notion that improves the user experience.
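The backlink-isolation step described in the Slack bot section is essentially a one-liner with a regular expression; here it is sketched in Python for brevity (the actual bot is written in Go):

```
# Sketch of extracting every [[Topic Name]] backlink from a message.
import re

BACKLINK = re.compile(r"\[\[(.+?)\]\]")  # non-greedy: handles multiple links

def extract_backlinks(message):
    return BACKLINK.findall(message)

print(extract_backlinks("Decided on [[Auth Flow]] changes, see [[Q3 Roadmap]]"))
# ['Auth Flow', 'Q3 Roadmap']
```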
partial
## What it does
KokoRawr at its core is a Slack app that facilitates new types of interactions via chaotic cooperative gaming through text. Every user is placed on a team based on their Slack username and tries to increase their team's score by playing games such as Tic Tac Toe, Connect 4, Battleship, and Rock Paper Scissors. Teams must work together to play. However, a "Twitch Plays Pokemon" sort of environment can easily be created where multiple people are trying to execute commands at the same time and step on each other's toes. Additionally, people can visualize the games via a web app.
## How we built it
We jumped off the deep end into the land of microservices. We made liberal use of StdLib with Node.js to deploy a service for every feature in the app, amounting to 10 different services. The StdLib services all talk to each other and to Slack. We also have a visualization of the game boards that is hosted as a Flask server on Heroku that talks to the microservices to get information.
## Challenges we ran into
* not getting our Slack app banned by HackPrinceton
* having tokens show up correctly on the canvas
* dealing with all of the madness of callbacks
* global variables causing bad things to happen
## Accomplishments that we're proud of
* actually chaotically playing games with each other on Slack
* having actions automatically show up on the web app
* the fact that we have **10 microservices**
## What we learned
* the StdLib way of microservices
* Slack integration
* HTML5 canvas
* how to have more fun with each other
## Possible Use Cases
* A friendly, competitive way for teams at companies to get to know each other better and learn to work together
* A new form of concurrent game playing for friend groups with "unlimited scalability"
## What's next for KokoRawr
We want to add more games to play and expand the variety of visualizations that are shown to include more games. Some service restructuring would need to be done to reduce the Slack latency. Also, game state would need to be more persistent for the services.
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy-to-join, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm.
## About
Our platform provides a simple yet efficient user experience with a straightforward, easy-to-use one-page interface. We made it one page so that all the tools are accessible on one screen and transitions between them are easier. We identify this page as a study room where users can collaborate, joining with a simple URL. Everything is synced between users in real time.
## Features
Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies we used for both the front and back end
We use Node.js and Express on the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them (sketched after this writeup). We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit, to add more relevant tools and widgets, and to expand to other fields of work to increase our user demographic. We would also include interface customization options to allow users to personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hoped-for product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
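As a rough sketch of the room-broadcast pattern described above, here is what the whiteboard relay could look like, shown with python-socketio for illustration (the actual backend is Node.js + Socket.IO). Event names and the room scheme are assumptions:

```
# Sketch: relay each whiteboard stroke to everyone else in the study room.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join(sid, room):
    sio.enter_room(sid, room)  # one study room per shareable URL

@sio.on("draw")
def draw(sid, stroke):
    # Send only the new stroke (not the whole canvas) to cut message volume.
    for room in sio.rooms(sid):
        if room != sid:  # skip the client's own private room
            sio.emit("draw", stroke, room=room, skip_sid=sid)
```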
# PennApps18-Chrome-Messenger
## All instructions will go here
An MQTT-based chat messenger for the Google Chrome extension bar. It consists of two parts: 1) a front-end Chrome extension and 2) a back end built on MQTT + Node.js + MySQL.
The front end offers the UI with register, login, and chat features. The back end provides the backbone of the communication layer. It works on the MQTT protocol with the open-source Mosca MQTT broker. The back end is written in Node.js with several Node.js modules, such as a) bunyan, b) debug, c) mysql, d) mqtt, and e) bcrypt. There is a MySQL database with a supporting schema for user data entry and user-to-user mapping.
The project UX needs more development, with support for multi-user chatting.
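For flavor, here is what the MQTT side of such a chat can look like, sketched with Python's paho-mqtt against a local Mosca-style broker (the real back end is Node.js). The `chat/<user>` topic scheme is an assumption:

```
# Sketch of MQTT publish/subscribe chat (paho-mqtt 1.x style client).
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"[{msg.topic}] {msg.payload.decode()}")

client = mqtt.Client()  # paho-mqtt 2.x additionally wants a CallbackAPIVersion
client.on_message = on_message
client.connect("localhost", 1883)      # default MQTT port for a local broker
client.subscribe("chat/alice")         # receive messages addressed to alice
client.publish("chat/bob", "hi bob!")  # send a message to bob
client.loop_forever()
```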
winning
## Inspiration
The general challenge of UottaHack 4 was to create a hack surrounding COVID-19. We were inspired by a COVID-19 restriction in the province of Quebec which requires stores to limit the number of people allowed in the store at once (depending on the store's floor size). This results in many stores having to place an employee at the door of the shop to monitor the people entering/exiting, check whether they are wearing a mask, and make sure they disinfect their hands. Having an employee dedicated to monitoring the entrance can be a financial drain on a store, and this is where our idea kicks in: dedicating the task of monitoring the door to a machine so the human resources can be best used elsewhere in the store.
## What it does
Our hack monitors the entrance of a store and does the following:
1. It counts how many people are currently in the store by monitoring the number of people that are entering/leaving the store.
2. It verifies that the person entering is wearing PPE (a mask). If no PPE is recognized, a reminder to wear a mask is played from a speaker on the Raspberry Pi.
3. It verifies that the person entering has used the sanitation station and displays a message thanking them for using it.
4. It displays information to people entering, such as how many people are in the store and the store's max capacity, reminders to wear a mask, and thanks for using the sanitation station.
5. It provides useful stats to the shop owner about the monitoring of the shop.
## How we built it
**Hardware:** The hack uses a Raspberry Pi and its PiCam to monitor the entrance.
**Monitoring backend:** The program starts by monitoring the floor in front of the door for movement; this is done using OpenCV. Once movement is detected, pictures are captured and stored. The movement is also analyzed to estimate whether the person is leaving or entering the store. Following an entering/exiting event, a secondary program analyses the collection of pictures taken and chooses one of them to be analyzed by the Google Cloud Vision API. The picture sent to the Google API is analyzed for three features: faces, object location (to identify people's bodies), and labels (to look for PPE). Using the info from the Vision API, we can determine first whether the person has PPE, and second the difference in the number of people leaving and entering by comparing the number of faces to the number of bodies detected. If there are fewer faces than bodies, that means people have left; if there is the same amount, then only people entered. Back in the first program, another point is being monitored, which is the sanitation station. If there is an interaction (movement) with it, then we know the person entering has used it.
**Cloud backend:** The front end and monitoring hardware need a unified API to broker communication between the services, as well as storage in the MongoDB data lake; this is where the cloud backend shines. It handles events triggered by the monitoring system, as well as user-defined configurations from the front end, logging, and storage, all from a highly available containerized Kubernetes environment on GKE.
**Cloud frontend:** The frontend allows the administration to set the box parameters for where the objects will be in the store. If a person is wearing a mask and has sanitized their hands, a message will appear stating "Thank you for slowing the spread." However, if they are not wearing a mask or have not sanitized their hands, then a message will state "Please put on a mask." By doing so, those who are following protocols will be rewarded, and those who are not will be reminded to follow them.
## Challenges we ran into
On the monitoring side, we ran into problems because of the color of people's pants: bright-colored pants registered as PPE to Google's Cloud Vision API (they looked too similar to reflective PPE pants). On the backend architecture side, developing event-driven code was a challenge, as it was our first time working with such technologies.
## Accomplishments that we're proud of
The efficiency of our computer vision is something we are proud of. We initially started with processing each frame every 50 milliseconds; however, we optimized the computer vision code to only process a fraction of our camera feed yet maintain the same accuracy. We went from 50 milliseconds to 10 milliseconds.
## What we learned
**Charles:** I've learned how to use the Google API.
**Mingye:** I've furthered my knowledge of computer vision and learned about Google's Vision API.
**Mershab:** I built and deployed my first Kubernetes cluster in the cloud. I also learned event-driven architecture.
## What's next for Sanitation Station Companion
We hope to continue improving our object detection and, later on, detect whether customers in the store are at least six feet apart from the person next to them. We will also remind them to keep their distance throughout the store. There is also the feature of having more than one point of entry (door) monitored at the same time.
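The movement-monitoring loop described above boils down to frame differencing; here is a hedged OpenCV sketch. The door-zone coordinates and pixel threshold are placeholders that would be tuned per camera:

```
# Sketch of motion detection at the entrance via frame differencing.
import cv2

cap = cv2.VideoCapture(0)  # PiCam exposed as the default camera
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    door_zone = mask[100:400, 200:500]      # hypothetical door region
    if cv2.countNonZero(door_zone) > 5000:  # enough changed pixels = movement
        print("movement at the door - capture frames for the Vision API")
    prev = gray
```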
OUR VIDEO IS IN THE COMMENTS!! THANKS FOR UNDERSTANDING (WIFI ISSUES)
## Inspiration
As a group of four students having completed 4 months of online school, going into our second internship and our first fully remote internship, we were all nervous about how our internships would transition to remote work. When reminiscing about pain points that we faced in the transition to an online work term this past March, the one pain point that we all agreed on was a lack of connectivity and loneliness. Trying to work alone in one's bedroom after experiencing life in the office, where colleagues were a shoulder's tap away for questions about work, with the noise of keyboards clacking and people zoned into their work, is extremely challenging and demotivating, which decreases happiness and energy, and thus productivity (which decreases energy, and so on...). Having a mentor and steady communication with our teams is something that we all valued immensely during our first co-ops. In addition, some of our workplaces had designated exercise times, or even pre-planned one-on-one activities, such as manager-co-op lunches or walk breaks with company walking groups. These activities and rituals bring structure into a sometimes mundane day, which allows the brain to recharge and return to work fresh and motivated. Upon the transition to working from home, we've all found that some days we'd work through lunch without even realizing it, and some days we would be endlessly scrolling through Reddit as there would be no one there to check in on us and make sure that we were not blocked. Our once much-too-familiar workday structure seemed to completely disintegrate when there was no one there to introduce structure, hold us accountable, and gently enforce proper, suggested breaks. We took these gestures for granted in person, but now they seemed like a luxury, almost impossible to attain.
After doing research, we noticed that we were not alone: a 2019 Buffer survey asked users to rank their biggest struggles working remotely, and unplugging after work and loneliness were the most common (22% and 19%, respectively): <https://buffer.com/state-of-remote-work-2019>
We set out to create an application that would allow us to facilitate that same type of connection between colleagues and make remote work a little less lonely and socially isolating. We were also inspired by our own recent online term, finding that we had been motivated when we were held accountable by our friends through tools like shared Google Calendars and Notion workspaces. As one of the challenges we'd like to enter for the hackathon, the 'RBC: Most Innovative Solution' challenge, in the area of addressing a pain point associated with working remotely in an innovative way, truly captured the issue we were trying to solve. Therefore, we decided to develop aibo, a centralized application which helps those working remotely stay connected, stay accountable, and maintain relationships with their co-workers, all of which improves a worker's mental health (which in turn has a direct positive effect on their productivity).
## What it does
Aibo, meaning "buddy" in Japanese, is a suite of features focused on increasing the productivity and mental wellness of employees. We focused on features that allow genuine connections in the workplace and help to motivate employees.
First and foremost, aibo uses a matching algorithm to match compatible employees together, focusing on career goals, interests, roles, and time spent at the company, following the completion of a quick survey. These matchings occur multiple times over a customized timeframe selected by the company's host (likely the People Operations team), to ensure that employees receive a wide range of experiences in this process. Once you have been matched with a partner, you are assigned weekly meet-ups with your partner to build that connection. Using aibo, you can video call your partner and start creating a shared to-do list; by developing this list together, you can bond over the common tasks to perform despite potentially having seemingly very different roles. Partners would have 2 meetings a day: once in the morning, where they would go over to-do lists and goals for the day, and once in the evening, in order to track progress over the course of that day and tasks that need to be transferred over to the following day.
## How We built it
This application was built with React, JavaScript, and HTML/CSS on the front end, along with Node.js and Express on the back end. We used the Twilio chat room API along with Autocode to store our server endpoints and enable a Slack bot notification that POSTs a message in your specific buddy Slack channel when your buddy joins the video calling room. In total, we used **4 APIs/tools** for our project:
* Twilio chat room API
* Autocode API
* Slack API for the Slack bots
* Microsoft Azure to work on the machine learning algorithm
When we were creating our buddy app, we wanted to find an effective way to match partners together. After looking over a variety of algorithms, we decided on the K-means clustering algorithm. This algorithm is simple in its ability to group similar data points together and discover underlying patterns, and it looks for a set number of clusters within the data set. This was my first time working with machine learning, but luckily, through Microsoft Azure, I was able to create a working training and inference pipeline. The dataset marked the user's role and preferences, and we created n/2 clusters, where n is the number of people searching for a match. This API was then deployed and tested on a web server. Although we weren't able to actively test this API on incoming data from the back end, this is something that we are looking forward to implementing in the future. Working with ML was mainly trial and error, as we had to experiment with a variety of algorithms to find the optimal one for our purposes.
Upon working with Azure for a couple of hours, we decided to pivot towards leveraging another clustering algorithm in order to group employees together based on their answers to the form they fill out when they first sign up on the aibo website. We looked into PuLP, a Python LP modeler, and then looked into hierarchical clustering. This seemed similar to our initial approach with Azure, and after looking into the advantages of this algorithm over others for our purpose, we decided to choose this one for the clustering of the form responders. Some pros of hierarchical clustering include:
1. We do not need to specify the number of clusters required for the algorithm; the algorithm determines this for us, which is useful as it automates sorting through the data to find similarities in the answers.
2. Hierarchical clustering was quite easy to implement in a Spyder notebook.
3. The dendrogram produced was very intuitive and helped me understand the data in a holistic way.
The type of hierarchical clustering used was agglomerative clustering, or AGNES. It's known as a bottom-up algorithm as it starts from singleton clusters; pairs of clusters are then successively merged until all clusters have been merged into one big cluster containing all objects. In order to decide which clusters had to be combined and which ones had to be divided, we needed methods for measuring the similarity between objects. I used Euclidean distance to calculate this (dis)similarity information. (A small sketch of this clustering step follows the writeup.)
This project was designed solely in Figma, with the illustrations and the product itself created there. These designs required hours of deliberation and research to determine the customer requirements and engineering specifications, to develop a product that is accessible and could be used by people in all industries. In terms of determining which features we wanted to include in the web application, we carefully read through the requirements for each of the challenges we wanted to compete within and decided to create an application that satisfied all of these requirements. After presenting our original idea to a mentor at RBC, we learned more about remote work at RBC, and having not yet completed an online internship ourselves, we learned about the pain points and problems being faced by online workers, such as:
1. Isolation
2. Lack of feedback
From there, we were able to select the features to integrate, including the Task Tracker, Video Chat, Dashboard, and Matching Algorithm, which will be explained in further detail later in this post.
Technical implementation for Autocode: Using Autocode, we were able to easily and successfully link popular APIs like Slack and Twilio to ensure the productivity and functionality of our app. The Autocode source code is linked below: <https://autocode.com/src/mathurahravigulan/remotework/>
**Creating the slackbot**
```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});

/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request event
* @returns {object} result Your return value
*/
module.exports = async (context) => {
  console.log(context.params)
  if (context.params.StatusCallbackEvent === 'room-created') {
    await lib.slack.channels['@0.7.2'].messages.create({
      channel: `#buddychannel`,
      text: `Hey! Your buddy started a meeting! Hop on in: https://aibo.netlify.app/ and enter the room code MathurahxAyla`
    });
  }
  // do something
  let result = {};
  // **THIS IS A STAGED FILE**
  // It was created as part of your onboarding experience.
  // It can be closed and the project you're working on
  // can be returned to safely - or you can play with it!
  result.message = `Welcome to Autocode! 😊`;
  return result;
};
```
**Connecting Twilio to Autocode**
```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const twilio = require('twilio');
const AccessToken = twilio.jwt.AccessToken;
const { VideoGrant } = AccessToken;

const generateToken = () => {
  return new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY,
    process.env.TWILIO_API_SECRET
  );
};

const videoToken = (identity, room) => {
  let videoGrant;
  if (typeof room !== 'undefined') {
    videoGrant = new VideoGrant({ room });
  } else {
    videoGrant = new VideoGrant();
  }
  const token = generateToken();
  token.addGrant(videoGrant);
  token.identity = identity;
  return token;
};

/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request event
* @returns {object} result Your return value
*/
module.exports = async (context) => {
  console.log(context.params)
  const identity = context.params.identity;
  const room = context.params.room;
  const token = videoToken(identity, room);
  return {
    token: token.toJwt()
  }
};
```
From the product design perspective, it is possible to explain certain design choices: <https://www.figma.com/file/aycIKXUfI0CvJAwQY2akLC/Hack-the-6ix-Project?node-id=42%3A1>
1. As shown in the prototype, the user has full independence to move through the designs as one would in a typical website, and this supports the non-sequential flow of the upper navigation bar, as each feature does not need to be viewed in a specific order.
2. As Slack is a common productivity tool in remote work and we're participating in the Autocode challenge, we chose to use Slack as an alerting feature, as sending text messages to phones could be expensive and potentially distract the user and break their workflow, which is why Slack has been integrated throughout the site.
3. The to-do list that is shared between the pairing has been designed in a simple and dynamic way that allows both users to work together (building a relationship) to create a list of common tasks, and duplicate this same list to their individual workspace to add tasks that could not be shared with the other (such as confidential information within the company).
In terms of the overall design decisions, I made an effort to create each illustration by hand simply using Figma and the trackpad on my laptop! Potentially a non-optimal way of doing so, but this allowed us to be very creative in our designs and bring that individuality and innovation to the designs. The website itself relies on consistency in terms of colours, layouts, buttons, and more, and by developing these components to be used throughout the site, we've developed a modern and coherent website.
## Challenges We ran into
Some challenges that we ran into were:
* Using data science and machine learning for the very first time ever! We were definitely overwhelmed by the different types of algorithms out there, but we were able to persevere and create something amazing.
* React was difficult for most of us to use at the beginning, as only one of our team members had experience with it. But by the end of this, we all felt like we were a little more confident with this tech stack and front-end development.
* Lack of time - there were a ton of features that we were interested in (like user authentication and a Google Calendar implementation), but for the sake of time we had to abandon those functions and focus on the more pressing ones that were integral to our vision for this hack. These, however, are features we hope to complete in the future.
We learned how to successfully scope a project and deliver upon the technical implementation.
## Accomplishments that We're proud of
* Created a fully functional end-to-end full-stack application incorporating both the front end and back end to enable to-do lists and the interactive video chat that can happen between the two participants. I'm glad I discovered Autocode, which made this process simpler (shoutout to Jacob Lee, a mentor from Autocode, for the guidance).
* Solving an important problem that affects an extremely large number of individuals. According to investmentexecutive.com, StatsCan reported that five million workers shifted to home working arrangements in late March. Alongside the 1.8 million employees who already work from home, the combined home-bound employee population represents 39.1% of workers: <https://www.investmentexecutive.com/news/research-and-markets/statscan-reports-numbers-on-working-from-home/>
* From doing user research we learned that people can feel isolated when working from home and miss the social interaction and accountability of a desk buddy. We're solving two problems in one, tackling social problems and improving worker mental health while also increasing productivity, as their buddy will keep them accountable!
* Creating a working matching algorithm for the first time in a time crunch and learning more about Microsoft Azure's capabilities in machine learning
* Creating all of our icons/illustrations from scratch using Figma!
## What We learned
* How to create and trigger Slack bots from React
* How to have a live video chat on a web application using Twilio and React hooks
* How to use a hierarchical clustering algorithm (agglomerative clustering) to create matches based on inputted criteria
* How to work remotely in a virtual hackathon, and what tools would help us work remotely!
## What's next for aibo
* We're looking to improve on our pairing algorithm. We learned that 36 hours is not enough time to create a new Tinder algorithm, and that with more time these pairings can be improved and perfected.
* We're looking to code more screens, add user authentication to the mix, and integrate more test cases in the designs rather than using Figma prototyping to prompt the user.
* It is important to consider the security of the data as well, since not all teams can discuss tasks at length due to the specificity of their work. That is why we encourage users to create a simple to-do list with their partner during their meeting and use their best judgement to keep it vague. In the future, we hope to incorporate machine learning that takes in whether a user's project is under NDA and, if so, provides warnings for sensitive information as the user types.
* Add a dashboard! As can be seen in the designs, we'd like to integrate a dashboard per user that pulls data from different components of the website, such as your match information and progress on your task tracker/to-do lists. This feature could be highly effective for optimizing productivity, as the user simply has to click on one page and they'll be provided a high-level summary of these two details.
* Create our own Slackbot to deliver individualized Kudos to a co-worker, and pull this data onto a Kudos board on the website so all employees can see how their coworkers are being recognized for their hard work, which can act as a motivator for all employees.
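For readers curious what the agglomerative matching step referenced above could look like in code, here is a small sketch with scikit-learn. The four-feature survey encoding and the n/2 cluster count are illustrative assumptions (the prototype itself ran on Azure and in a Spyder notebook):

```
# Sketch: bottom-up clustering of encoded survey answers into buddy pairs.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# One row per employee; values are made-up numeric encodings of
# (role, career goals, interests, tenure) survey answers.
answers = np.array([
    [0, 2, 1, 1],   # employee A
    [0, 2, 1, 2],   # employee B - similar to A
    [3, 0, 4, 5],   # employee C
    [3, 1, 4, 6],   # employee D - similar to C
])

n_pairs = len(answers) // 2  # n/2 clusters, as in the prototype
model = AgglomerativeClustering(n_clusters=n_pairs, linkage="ward")  # Euclidean
labels = model.fit_predict(answers)
print(labels)  # e.g. [0 0 1 1]: A-B and C-D become buddy pairs
```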
## Inspiration
We as a team shared the same interest in learning more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas on how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area.
## What it does
We have set up a signal that, if done in front of the camera, a machine learning algorithm is able to detect, notifying authorities that maybe they should check out this location, for the possibility of catching a potentially suspicious person or even being present to keep civilians safe.
## How we built it
First, we collected data off the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning module. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.
## Challenges we ran into
Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project.
Donya: Getting to know the basics of how machine learning works.
Alok: Learning how to deal with unexpected challenges and look at them as a positive change.
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What we learned
Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no or incomplete information.
## What's next for Smart City SOS
Hopefully working with Innovation Factory to grow our project, as well as inspiring individuals with similar passion or a desire to create change.
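The writeup above doesn't name the pre-trained model that replaced the scrapped one, so as a purely illustrative sketch, here is how a distress pose (both wrists raised above the nose) could be checked with MediaPipe Pose, one readily available pre-trained option:

```
# Illustrative sketch only: MediaPipe Pose stands in for whichever pre-trained
# model was actually used; "both wrists above the nose" is a made-up signal.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture("camera_footage.mp4")  # placeholder footage path

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        P = mp.solutions.pose.PoseLandmark
        # Image y grows downward, so "above" means a smaller y value.
        if (lm[P.LEFT_WRIST].y < lm[P.NOSE].y and
                lm[P.RIGHT_WRIST].y < lm[P.NOSE].y):
            print("distress signal detected - flag this camera's location")
```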
partial
## Inspiration
We want to make healthcare more accessible through our skINsight app.
## What it does
Identifies skin conditions using a picture of the affected skin area. A chatbot provides help and information on treating the skin condition.
## How we built it
App built with React Native and Node.js. Custom classifier model with the Microsoft Azure Cognitive Services Computer Vision API. Chatbot with the QnA Maker API. Web crawler written in Python to create a dataset of pictures.
## Challenges we ran into
We didn't have an existing dataset to work with, so we created our own! The functionality to take a live picture of the suspected skin area could not be tested, as the camera app does not work in the Xcode simulator.
## Accomplishments that we're proud of and What we learned
Learning how to make a web crawler, and using the Microsoft Azure machine learning platform.
## What's next for skINsight
Integrate all components of the app, and publish to the App Store!
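Calling a published Azure Custom Vision classifier like the one described above is a single HTTP request; here is a hedged sketch. The endpoint, project ID, and iteration name are placeholders that would be copied from the Custom Vision portal in a real deployment:

```
# Sketch: classify a skin photo against a published Custom Vision iteration.
import requests

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com"  # placeholder
URL = (f"{ENDPOINT}/customvision/v3.0/Prediction/PROJECT_ID"
       "/classify/iterations/ITERATION_NAME/image")

def classify_skin_image(image_path, prediction_key):
    with open(image_path, "rb") as f:
        resp = requests.post(
            URL,
            headers={
                "Prediction-Key": prediction_key,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    best = max(resp.json()["predictions"], key=lambda p: p["probability"])
    return best["tagName"], best["probability"]
```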
## Inspiration
The inspiration for our application came from our own laziness and unwillingness to go to the doctor for anything. We wanted to create an application with machine learning and artificial intelligence models in order to improve the daily lives of the lazy. Our goal is to create an application to change the future of skin care and self-analysis of skin conditions at home.
## What it does
Our application takes in a picture of the user's skin taken using their phone camera and uses that picture to conduct a smart inference on what skin condition the user might have, based on our trained machine learning model that uses image classification.
## How we built it
For our front end, we used Android and Java to build the ultimate experience for the mobile user. We also used Android's built-in camera in order to snap a picture of the user's skin. For the machine learning model, we used the Microsoft Azure cloud computing platform - more specifically, the Custom Vision AI.
## Challenges we ran into
One challenge we ran into was figuring out how to use TensorFlow to integrate the trained model, the camera, and Android Studio. Additionally, the training of the model was challenging because the platform that was used is not extremely sensitive to subtle differences. The dataset that we used had to be carefully selected in order to increase the precision and accuracy of our model.
## Accomplishments that we're proud of
We, as a team, are proud of the fact that we reached our front-end goal and finished a complete, marketable application. We are also proud of our machine learning model and Android integration. Our final user interface is user-friendly to its fullest extent.
## What we learned
We, as a team, learned how to integrate Microsoft's Custom Vision AI with Android. We also learned how to create a sleek Android user interface for the user. Some team members learned how to step aside and ask other team members for their support and input on their part of the project. This got the foundation of the application running and boosted team morale.
## What's next for Skinmergency
We would like to increase the accuracy of our current model, and we would also like to start including and supporting other well-known skin conditions. The end goal of Skinmergency is to get real doctor certification in order to improve the trust and reliability of our machine learning model. We would also like to possibly take our application to enterprise and become a trusted source of skin condition diagnoses.
## Inspiration Due to the shortage of doctors and clinics in rural areas, early diagnosis of skin diseases that may seem harmless on the outside but can become life-threatening is a real problem. The lockdown has not helped either, with the shortage of doctors worsening as many of them go on COVID duty. Keeping the goal of helping out our community in any way we can, Bhuvnesh Nagpal and Mehul Srivastava decided to create this AI-enabled project to help the underprivileged, with one slogan in mind – “Prevention is better than Cure”. ## What it does MediDerma uses computer vision to predict skin diseases and list the associated underlying symptoms, focusing on diseases prominent in rural India. ## How we built it The image classification model is integrated with a web app. There is an option to either take a picture or upload a saved one. The model, based on the resnet34 architecture, then classifies the image of the skin disease into one of 29 classes and shows the predicted disease and its common symptoms. We trained it on a custom dataset using the fastai library in Python. ## Challenges we ran into Collecting the dataset was a big problem, as medical datasets are not freely available. We collected the data from various sources, including Google Images and several websites. ## Accomplishments that we're proud of We were able to build an innovative solution to a real-world problem. This solution might help a lot of people in the rural parts of India. We are really proud of what we have built. The app aims to provide a simple and accurate diagnosis of skin disease in rural parts of India where medical facilities are scarce. ## What we learned We brainstormed a lot of ideas during the ideation part of this project and realized that there was a dire need for this app. While developing the project, we learned about the Streamlit framework, which allows us to easily deploy ML projects. We also learned about the various sources from which we can collect image data. ## What's next for MediDerma We plan to improve this model to a level where it can be certified and deployed in a real-world setting. We can do this by collecting and feeding more data to the model. We also plan to increase the number of diseases that the app can detect.
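Given the description (resnet34 via fastai over a folder-per-class dataset), the training loop was presumably close to this sketch; the paths, validation split, and epoch count are assumptions:

```python
from fastai.vision.all import *

# Assumes images sorted into skin_data/<disease_name>/*.jpg (29 class folders)
dls = ImageDataLoaders.from_folder(
    Path("skin_data"), valid_pct=0.2, item_tfms=Resize(224), bs=32
)
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(5)

# Classify a single uploaded photo
pred, _, probs = learn.predict(PILImage.create("uploaded_photo.jpg"))
print(pred, probs.max().item())
```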
partial
Introducing Melo-N – where your favourite tunes get a whole new vibe! Melo-N combines "melody" and "Novate" to bring you a fun way to switch up your music. Here's the deal: you pick a song and a genre, and we do the rest. We keep the lyrics and melody intact while changing up the music style. It's like listening to your favourite songs in a whole new light! How do we do it? We use cool tech tools like Spleeter to separate vocals from instruments, so we can tweak things just right. Then, with the help of the MusicGen API, we switch up the genre to give your song a fresh spin. Once everything's mixed, we deliver your custom version – ready for you to enjoy. Melo-N is all about exploring new sounds and having fun with your music. Whether you want to rock out to a country beat or chill with a pop vibe, Melo-N lets you mix it up however you like. So, get ready to rediscover your favourite tunes with Melo-N – where music meets innovation, and every listen is an adventure!
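For the separation step, Spleeter's Python API is compact; a two-stem split along these lines would yield the vocal and accompaniment tracks the pipeline works with (file names are placeholders):

```python
from spleeter.separator import Separator

# "spleeter:2stems" splits a track into vocals + accompaniment
separator = Separator("spleeter:2stems")
separator.separate_to_file("song.mp3", "stems/")
# -> stems/song/vocals.wav and stems/song/accompaniment.wav
```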
## Inspiration We both love karaoke, but there are lots of obstacles: * going to a physical karaoke bar is expensive and inconvenient * YouTube karaoke videos don't always match your vocal (key) range, and there is also no playback * existing karaoke apps have limited songs and aren't flexible with your music taste ## What it does Vioke is a karaoke web app that supports pitch changing, on/off vocal switching, and real-time playback, simulating the real karaoke experience. Unlike traditional karaoke machines, Vioke is accessible anytime, anywhere, from your own devices. ## How we built it **Frontend** The frontend is built with React, and it handles settings including on/off playback, on/off vocals, and pitch changing. **Backend** The backend is built in Python. It leverages a source-separation ML library to extract instrumental tracks. It also uses a pitch-shifting library to adjust the key of a song. ## Challenges we ran into * Playback latency * Backend library compatibility conflicts * Integration between frontend and backend * Lack of GPU / computational power for audio processing ## Accomplishments that we're proud of * We were able to learn and implement audio processing, an area we did not have experience with before. * We built a product that can be used in the future. * Scrolling lyrics is epic * It works!! ## What's next for Vioke * Caching processed audio to eventually create a data source that we can draw from to reduce processing time. * Training models for source separation in other languages (we found that the pre-built library mostly supports English vocals). * If time and resources allow, scaling it into a platform where people can share their karaoke playlists and post their covers.
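The pitch-shifting library isn't named above; librosa is one common choice, and shifting the instrumental by a couple of semitones might look like this (file names are placeholders):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("accompaniment.wav", sr=None)
# Shift up two semitones; a negative n_steps shifts down instead
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)
sf.write("accompaniment_plus2.wav", shifted, sr)
```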
## Inspiration Have you ever wanted to listen to music based on how you’re feeling? Now, all you need to do is message MoodyBot a picture of yourself or text your mood, and you can listen to the Spotify playlist MoodyBot provides. Whether you’re feeling sad, happy, or frustrated, MoodyBot can help you find music that suits your mood! ## What it does MoodyBot is a Cisco Spark bot linked with Microsoft’s Emotion API and Spotify’s Web API that can detect your mood from a picture or a text. All you have to do is click the Spotify playlist link that MoodyBot sends back. ## How we built it Using Cisco Spark, we created a chatbot that takes in portraits and gives the user an optimal playlist based on his or her mood. The chatbot itself was implemented on built.io, which controls feeding image data through Microsoft’s Emotion API. Microsoft’s API outputs into a small Node.js server in order to compensate for the limited features of built.io, like its limitations when importing modules. From the external server, we use the moods classified by Microsoft’s API to select a Spotify playlist via Spotify’s Web API, which is then sent back to the user on Cisco Spark. ## Challenges we ran into Spotify’s Web API requires a new access token every hour. In the end, we were not able to find a solution to this problem. Our inexperience with Node.js also led to problems with concurrency, and built.io's limited APIs hindered our project. ## Accomplishments that we're proud of We were able to code around the fact that built.io would not encode our images correctly, and we worked around built.io's inability to support the other solutions we tried. ## What we learned Sometimes the shortcut is more work, or it won't work at all. Writing the code ourselves solved all the problems we were having with built.io. ## What's next for MoodyBot MoodyBot has the potential to have its own app and automatically open the Spotify playlist it suggests. It could also connect over Bluetooth to a speaker.
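On the hourly token problem: Spotify's client-credentials flow can simply be re-run shortly before expiry. A hedged sketch of that refresh, one approach we could try next (the credentials are placeholders):

```python
import time

import requests

TOKEN_URL = "https://accounts.spotify.com/api/token"
_token, _expiry = None, 0.0

def get_spotify_token(client_id: str, client_secret: str) -> str:
    """Return a cached app token, refreshing it a minute before it expires."""
    global _token, _expiry
    if time.time() >= _expiry - 60:
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials"},
            auth=(client_id, client_secret),  # HTTP basic auth, per Spotify's docs
            timeout=10,
        ).json()
        _token = resp["access_token"]
        _expiry = time.time() + resp["expires_in"]
    return _token
```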
winning
## Inspiration We spend a lot of our time sitting in front of the computer. The idea is to use the video feed from the webcam to determine the emotional state of the user, analyze it, and provide feedback accordingly in the form of music, pictures, and videos. ## How I built it Using Microsoft Cognitive Services (Video + Emotion API), we get the emotional state of the user through the webcam feed. We pass that to the Bot Framework, which in turn sends responses based upon changes in the values of the emotional state. ## Challenges I ran into Passing data between the Bot Framework and the desktop application which captured the webcam feed. ## Accomplishments that I'm proud of A fully functional bot which provides feedback to the user based upon changes in their emotions. ## What I learned Visual Studio is a pain to work with. ## What's next for ICare Use a recurrent neural network to keep track of the emotional state of the user before and after, and improve the content provided to the user over time.
## Inspiration After years of teaching methods remaining constant, technology has not yet infiltrated the classroom to its full potential. One day in class, it occurred to us that there must be a correlation between students' behaviour in classrooms and their level of comprehension. ## What it does We leveraged Apple's existing APIs around facial detection and combined them with the newly added Core ML features to track students' emotions based on their facial cues. The app can follow and analyze up to roughly ten students and provide information in real time using our dashboard. ## How we built it The iOS app integrated Apple's Core ML framework to run a [CNN](https://www.openu.ac.il/home/hassner/projects/cnn_emotions/) that detects people's emotions from facial cues. The model was used in combination with Apple's Vision API to identify and extract students' faces. This data was then propagated to Firebase to be analyzed and displayed on a dashboard in real time. ## Challenges we ran into Throughout this project, there were several issues regarding how to improve the accuracy of the facial results. Furthermore, there were issues regarding how to properly extract and track users throughout the length of the session. As for the dashboard, we ran into problems around how to display data in real time. ## Accomplishments that we're proud of We are proud of the fact that we were able to build such a real-time solution. Beyond that, we are happy to have met such a great group of people to have worked with. ## What we learned Ozzie learned more about the Core ML and Vision frameworks. Haider gained more experience with front-end development as well as working on a team. Nakul gained experience with real-time graphing and helped develop the dashboard. ## What's next for Flatline In the future, Flatline could grow its dashboard features to provide more insight for teachers. Also, the accuracy of the results could be improved by training a model to detect emotions that are more closely related to learning and students' behaviours.
## Inspiration ✨ Seeing friends' lives being ruined through **unhealthy** attachment to video games. Struggling to regulate your emotions properly is one of the **biggest** negative effects of video games. ## What it does 🍎 YourHP is a web app / Discord bot designed to improve the mental health of gamers. Using ML and AI, when specific emotion spikes are detected, voice recordings are queued *accordingly*. When the sensor detects anger, calming reassurance is played. When happy, encouragement is given to keep it up, etc. The Discord bot is an additional fun feature that sends messages with the same intention of improving mental health. It sends advice, motivation, and GIFs when commands are sent by users. ## How we built it 🔧 Our entire web app is made using JavaScript, CSS, and HTML. For facial emotion detection, we used a JavaScript library built on the TensorFlow API called face-api.js. Emotions are detected from patterns found on the face, such as eyebrow direction, mouth shape, and head tilt. We used the library's probability values to determine the emotional level and played voice lines accordingly. The timer is a simple build that alerts users when they should take breaks from gaming and plays sound clips when the timer is up. It uses JavaScript, CSS, and HTML. ## Challenges we ran into 🚧 Capturing images in JavaScript, making the Discord bot, and hosting on GitHub Pages were all challenges we faced. We were constantly thinking of more ideas as we built our original project, which led us to face time limitations, and we were not able to produce some of the more unique features of our web app. This project was also difficult because we were fairly new to a lot of the tools we used. Before this hackathon, we didn't know much about TensorFlow, domain names, or Discord bots. ## Accomplishments that we're proud of 🏆 We're proud to have finished this product to the best of our abilities. We were able to make the most of our circumstances and adapt our skills when obstacles arose. Despite being sleep deprived, we still managed to persevere and almost kept up with our planned schedule. ## What we learned 🧠 We learned many priceless lessons about new technology, teamwork, and dedication. Most importantly, we learned about failure and redirection. Throughout the last few days, we were humbled and pushed to our limits. Many of our ideas were ambitious and unsuccessful, which redirected us through new doors and opened our minds to other possibilities that worked better. ## Future ⏭️ YourHP will continue developing in our search for new ways to combat the mental health issues caused by video games. Technological improvements to our systems, such as speech-to-text, could also greatly raise the efficiency of our product and bring us closer to our goals!
partial
## Inspiration TreeHacks's 'Most Impactful Hack' and theme of sustainability encouraged us to think about one of the **Sustainable Development Goals**, **Zero Hunger**, and work on solving the global issue. Since hunger is not something we can eradicate completely, we analyzed what we could implement in our own homes; the change begins with our refrigerator. Factors such as the growing concern about food waste, advancements in technology, consumer demand, social responsibility, and business opportunities led us to the idea of **FridgeSpace**. ## What it does Our project tackles the global problem of food waste by empowering individuals to take control of their food's expiration dates. Our solution provides a simple and user-friendly platform that allows users to easily track their food inventory and receive alerts before it expires. With this technology, users can plan their meals better, reduce unnecessary purchases, and ultimately reduce food waste. ## How we built it We designed web pages and created prototypes using Figma and framed the UI with it. We used front-end languages like HTML, CSS, and JavaScript to develop the website. The system required accurate and up-to-date data on food items and their expiration dates, so we collected data from various sources. ## Challenges we ran into It was challenging to maintain accurate details on food items and their expiration dates, as new products are introduced to the market regularly. We also struggled with building the recipe page of the website. ## Accomplishments that we're proud of We are proud of our idea's **environmental and social impact**: it creates awareness among the public and encourages them to adopt sustainable habits. The software also helps users plan their meals better, ensuring that they consume fresh and healthy food, reducing the likelihood of illness and improving **overall nutrition**. We like the **efficiency** of our website, which helps users save time and money by reducing unnecessary grocery purchases and ensuring that they use their food items before they expire. Providing a **user-friendly platform** that helps users save money and time while reducing food waste can lead to increased user satisfaction and loyalty. ## What we learned The importance of **data accuracy** and the value of **user-centered design**, along with user adoption to gain market traction. The role of technology in creating a more sustainable future. ## What's next for FridgeSpace Implementing a notification tab and new custom recipes. Secondly, gaining marketing and traction with a better UI. "Join us in making a positive impact on our environment and our community by taking a step towards a more sustainable future."
## Inspiration *I was and am always curious about the trading market*. Both the **stock exchange** and the **crypto world** have been friends of mine for some time now; I started investing in them as soon as I turned 18. So, as a trainee myself, I wanted to create a platform where everyone can train themselves in a playful environment. ## What it does It is a **cryptocurrency training game**, where you can *buy and sell* coins and earn a score for yourself on the scoreboard, all while learning how to use crypto tools. ## How we built it I built the whole web application with an **HTML5, CSS3, and JS** frontend, **Node.js** as its backend, and **MongoDB** as the database. ## Challenges we ran into **Time** was the biggest challenge for me: I built the project from scratch in 3 days, and I am really proud of my work. ## Accomplishments that we're proud of I was really nervous about the *complexity of the project* and the *time constraint*; I was not sure I would be able to complete the project. For the first 5 minutes after completing it, I felt I was in a dream. ## What we learned I learned to use **Chart.js** in the process, learned about **transactions**, and brushed up on my knowledge of **session management**. ## What's next for Crypto-Train I want to extend this project with more coins. I want to add a section for users to see their past trades, and a search option where users can study other players' tactics. I also want a section for discussions, because I can't imagine what the discussions would be like.
# Omakase *"I'll leave it up to you"* ## Inspiration On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how? ## What It Does We have built an app that recommends recipes based on the food in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we detect the food the user has in their fridge and recommend recipes that use those ingredients. ## What We Learned Most of the members of our group were inexperienced in mobile app development and backend work. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more. ## How We Built It We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel architecture and the X library were used. The application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID, and a set of recipes for that ingredient was obtained. We then algorithmically intersected the sets of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone over HTTP. ## What We Are Proud Of We gained skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we fed in an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty-looking recipes, including a Caribbean black bean and fruit salad that uses oranges and salsa. ## Challenges We Faced Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API, since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult. ## What's Next We have big plans for our app in the future. Some next steps we would like to implement are allowing users to include their dietary restrictions and food preferences so we can better match recommendations to the user. We also want to make this app available on smart fridges: some current fridges, like Samsung's, have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest-expiring foods.
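The recipe-intersection step described above reduces to a counting problem; here is a sketch in Python for clarity (the production logic itself lives in the Flask server), with the ingredient-to-recipes index assumed to come from the Food.com data:

```python
from collections import Counter

def match_recipes(recipes_by_ingredient: dict[str, set[int]],
                  detected: list[str]) -> list[int]:
    """Return recipe IDs that use a majority of the detected ingredients."""
    counts = Counter()
    for ingredient in detected:
        counts.update(recipes_by_ingredient.get(ingredient, set()))
    threshold = len(detected) / 2
    return [rid for rid, n in counts.items() if n > threshold]

# e.g. match_recipes(food_com_index, ["salsa", "orange", "black beans"])
```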
losing
## The Problem 🤔 In the midst of an ecological crisis, one of Canada's largest environmental problems is being caused by one of its smallest critters - the dreaded mountain pine beetle. Between 1990 and 2012, the beetle ate its way through approximately 723 million cubic metres (53%) of all the merchantable pine in BC, and it's on the rise again today. Technology provides an obvious solution to this problem - drones and satellites can be used to scope out large areas of forest without needing to send forest rangers out, saving timber managers time and money. The beetle can then be eradicated via targeted bursts of insecticide sprayed from drones. Yet existing insecticide-spraying drones are expensive ($18,000+ CAD) and can only spray on a predefined route, which is laborious to plan and program. ## The Solution 💡 Hence, we present an all-in-one system for combatting this epidemic, comprising: * A convolutional neural network model for detecting standing dead pines from satellite imagery, to enable automatic targeting of regions suffering from the beetle. * A drone attachment for available hobby drones (cheap, and could be "borrowed" by the national park service when not in use by local citizens) to enable spraying trees on the periphery of an affected area with insecticide to create a "firebreak". * A web server with a 3D globe for viewing the drone's progress in eradicating the beetle. The satellite imagery dataset could easily be replaced with data collected from the drone camera itself when moving to production, and the model retrained in a few minutes, though we weren't able to create this dataset ourselves since we don't live in Canada. ## How we built it 🤖 A [BBC micro:bit](https://en.wikipedia.org/wiki/Micro_Bit) is mounted on the drone and attached to a servo motor used to release the insecticide. The micro:bit uses standard 2.4 GHz radio to communicate with another micro:bit on the ground, using protocol buffers as the communication medium. We use protocol buffers because they let us send a large amount of data efficiently within the small number of bytes available over the radio. The second micro:bit uses serial to communicate with the computer running the web server that displays the status of the project. A WebGL Earth 3D globe with Mapbox satellite tiles depicts where the drone is and the area it needs to cover. We use an API request to feed this data into the JS, after the server decodes it from protobuf. ![A schematic diagram explaining the structure of the project](https://media.discordapp.net/attachments/1010468691567202394/1010601723699679354/unknown.png?width=1080&height=386 "Structure of the Project") We tested several machine learning models on satellite imagery and found that a CNN was the most effective. ## Challenges we ran into 😬 * Getting protobufs to work on the micro:bit, which has only 16 KiB of usable RAM. To solve this, we developed a custom protocol buffer implementation covering just the fields we need to load onto the micro:bit, which interfaces with standard protocol buffer code running on the server computer. * Learning about different architectures in TensorFlow and fine-tuning them. * Building a hardware project remotely with only one team member having access to the hardware, and getting effective drone footage for the video! ## Accomplishments that we're proud of 🏆 * Having the confidence to try a challenging project using technology we hadn't used before in a hackathon, given that we weren't sure it was going to work. * Integrating protocol buffers into the micro:bit via a custom library! * Achieving 89% accuracy with the binary classifier model used to detect standing dead trees. ## What we learned 📚 * The Protocol Buffers serialisation format, use of proto files, and use of `buf lint` and `protoc`. The buf CLI was invaluable in ensuring that our proto schema was correct, efficient, and designed with backwards compatibility in mind. * How to collect a good dataset for machine learning and tune hyperparameters to get good results (the full pipeline for a real-world scenario, when you haven't been given a pre-built dataset). * Controlling a servo motor with a micro:bit. ## What's next for Pine Protection 🔮 To be scaled up, the project would need to be supported by forest owners. The model could easily be retrained on drone photos to ensure the precision is better than you would get from satellites; then the product would need to be mass-manufactured as a small PCB which could be added onto commercially available hobby drones.
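For a sense of how small the custom implementation can be: protobuf's varint wire format is simple enough to hand-roll in MicroPython on the micro:bit. A minimal encoder for non-negative integer fields (the field numbers below are illustrative, not our actual schema):

```python
def encode_varint(value: int) -> bytes:
    """Base-128 varint, least-significant group first (protobuf wire format).
    Assumes a non-negative integer."""
    out = bytearray()
    while True:
        group = value & 0x7F
        value >>= 7
        if value:
            out.append(group | 0x80)  # high bit set: more bytes follow
        else:
            out.append(group)
            return bytes(out)

def encode_int_field(field_number: int, value: int) -> bytes:
    # Key byte = (field_number << 3) | wire_type, where wire type 0 = varint
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# e.g. a two-field status message: field 1 = trees sprayed, field 2 = battery %
payload = encode_int_field(1, 42) + encode_int_field(2, 87)
```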
## Inspiration We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution. ## What it does It helps developers find projects to work on, and helps project leaders find group members. By using the data from GitHub commits, it can determine what kinds of projects a person is suited for. ## How we built it We decided on building an app for the web, then chose a GraphQL, React, Redux tech stack. ## Challenges we ran into The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't get all the data we needed, and the authentication was hard to implement, since we had to try a number of approaches to get it to work. The last challenge was determining how to form a relationship between users and the projects they could be paired up with. ## Accomplishments that we're proud of We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database, and the authentication are all ready to show. ## What we learned We learned that working with APIs brings its own unique challenges. ## What's next for Hackr\_matchr Scaling up is next: supporting more kinds of projects, with more robust matching algorithms and higher user capacity.
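As an illustration of the matching signal: commit-level analysis is richer than this, but a first approximation of a user's language profile can be pulled from repo metadata via the public GitHub REST API (unauthenticated calls are rate-limited, which is exactly the constraint described above):

```python
from collections import Counter

import requests

def language_profile(username: str) -> Counter:
    """Tally the primary language of each public repo for a GitHub user."""
    repos = requests.get(
        f"https://api.github.com/users/{username}/repos",
        params={"per_page": 100},
        timeout=10,
    ).json()
    return Counter(r["language"] for r in repos if r.get("language"))

# e.g. language_profile("octocat") -> Counter({"Ruby": 3, "JavaScript": 2, ...})
```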
## 💡 Inspiration From farmers' protests around the world and the subsidies needed to keep agriculture afloat, to the routine use of pesticides that kill organisms and pollute the environment, the agriculture industry has a problem optimizing resources. So, we want to build technology that efficiently manages a farm through AI, fully automated, to reduce human energy costs. Beyond that, we open up crowdfunding for farm plants as a form of environmental investment that rewards you with money and carbon-credit offsets. ## 💻 What it does Drone: The drone communicates with the ground sensors, which include UV, pest vision detection, humidity, CO2, and more. Based on this data, the drone executes a cloud command to respond (the rule loop is sketched after this section). For example, if it detects a pest, it calls the second drone carrying the pest spray; if a plant is lacking water, it commands the pump over Wi-Fi to pump water, creating an efficient, fully automated cycle that conserves resources because it acts only on need. Farmer's dashboard: View the latest data on your plants, from growth to pest status, watering status, fertilizing status, etc. Open your farm for crowdfunding as a land share for extra money; harvest proceeds are split based on that share. Plant adopter: Adopt a plant and see how much carbon it has offset in real time until harvest. Besides collecting carbon points, you could also realize a capital gain from the sale of the harvest. It is a low-worry investment, since you can check on your plant anytime, with extra data such as its height and when it was last sprayed. On-field sensor array and horticulture system: Collects various information about the plants using a custom-built sensor array, then automatically adjusts lighting, heat, irrigation, and fertilization accordingly. The sensor data is stored in CockroachDB using an on-ramping function deployed on Google Cloud, which also hosts the pest-detection and weed-detection machine learning models. ## 🔨 How we built it * Hardware setup: SoC hub: Raspberry Pi; sensor MCU: Arduino Mega 2560; actuation MCU: Arduino Uno R3; temperature (outdoor/indoor): SHT40, CCS811, MR115A2; humidity: SHT40; barometric pressure: MR115A2; soil temperature and moisture: Adafruit STEMMA soil sensing module; carbon dioxide emitted/absorbed: CCS811; UV index/incidence: VEML6070; ventilation control: SG90 mini servo; lighting: Adafruit NeoPixel strip x8; irrigation pump: EK1893 3-5V submersible pump * Drones: DJI Tello, RoboMaster TT * Database: CockroachDB * Cloud: Google Cloud services * Machine learning (for pest and weed detection): Cloud Vision, AutoML * Design: Figma * In short: Arduino, Google Cloud Vision, Raspberry Pi, drones, CockroachDB, etc. We trained ML models for pest (saddleback caterpillar, true armyworm) and weed detection using an image dataset from "ipmimages". We used Google Cloud AutoML to train our model. ## 📖 What we learned This is the first time some of us have programmed a drone, so it was an amazing experience to be able to automate it like that. It was also a struggle to find a solution that can realistically be implemented in a business sense.
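The need-based control cycle referenced above can be sketched as a rule loop over the sensor readings; the thresholds and the send_command helper below are assumptions for illustration:

```python
SOIL_MOISTURE_MIN = 30.0   # percent; assumed threshold
CO2_MAX = 1000.0           # ppm; assumed threshold

def act_on_readings(readings: dict, send_command) -> None:
    """Map one batch of sensor readings to actuator commands, acting only on need."""
    if readings["soil_moisture"] < SOIL_MOISTURE_MIN:
        send_command("pump", "on")          # irrigate over Wi-Fi
    if readings["pest_detected"]:
        send_command("spray_drone", "dispatch")
    if readings["co2"] > CO2_MAX:
        send_command("vent_servo", "open")

# readings would come off the Arduino sensor array via the Raspberry Pi hub
```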
winning
## Inspiration Jessica here - I came up with the idea for BusPal out of the expectation that the skill already existed. With my Amazon Echo Dot, I was already doing everything from checking the weather to turning my lights on and off with Amazon skills and routines. The fact that Alexa could not check when my bus to school was going to arrive was surprising at first - until I realized that Amazon and Google are two tech giants locked in one of the biggest rivalries there is. However, I realized that the combination of Alexa's genuine personality and the powerful location abilities of Google Maps would fill a need that I'm sure many people have. That was when the idea for BusPal was born: a convenient Alexa skill that will improve my morning routine - and everyone else's. ## What it does This skill enables Amazon Alexa users to ask Alexa when their bus to a specified location is going to arrive and to text the directions to a phone number - all hands-free. ## How we built it Through the Amazon Alexa builder, the Google API, and AWS. ## Challenges we ran into We originally wanted to use stdlib; however, with a lack of documentation for the new Alexa technology, the team made an executive decision to migrate to AWS roughly halfway into the hackathon. ## Accomplishments that we're proud of Completing Phase 1 of the project: giving Alexa the ability to take in a destination and deliver a bus time, route, and stop to leave for. ## What we learned We learned how to use AWS, work with Node.js, and use Google APIs. ## What's next for BusPal Improve the texting ability of the skill, and enable calendar integration.
## Inspiration We as a team shared the same interest in learning more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for a solution related to smart cities. By looking at the different information available in the camera data, we landed on the idea of taking the raw footage itself and using it to look for what we would call a distress signal, for anyone who feels unsafe in their current area. ## What it does We defined a signal that, if performed in front of the camera, a machine learning algorithm can detect; it then notifies authorities that they should check out this location, whether to catch a suspect or simply to be present and keep civilians safe. ## How we built it First, we collected data off the Innovation Factory API and inspected the code carefully to learn what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compilation issues, we had to scrap the training pipeline we made and went with a comparable pre-trained model to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API, the cameras being located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate of the footage playback when running the algorithm over it. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project. Donya: Getting to know the basics of how machine learning works. Alok: Learning how to deal with unexpected challenges and see them as positive change. Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What we learned Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and minor things we were able to accomplish this hackathon, all with either no information or incomplete information. ## What's next for Smart City SOS Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with similar passion or desire to create change.
## Inspiration Have you ever missed your bus stop? Was it because you were sleeping, too busy socializing, or too focused on procrastinating? Whatever it was, we’ve all experienced the disappointment of getting off at the wrong bus stop because we were occupied with our own lives. We want to fix this problem for you through our app, TranSleep. ## What it does Welcome to TranSleep, an app that wakes you up mid-nap and transforms your stressful TranSit experience into a relaxing TranSleep experience! The app is programmed to wake you up on your transit journey. All you have to do is let us know where you are going. The application uses your phone’s location, and when your phone is within 1 km of your destination, it rings and vibrates to wake you up from your nap. You can also use this as a reminder app even when you are not sleeping, so that you get off at your desired bus stop. ## How we built it To build this app, we broke our program down into smaller pieces. While brainstorming the required steps, the team recognized the importance of developing the front end and back end separately. This was necessary for several reasons: due to the enormous size of the transit data, a back-end service was appropriate to address the challenges of efficient data processing. That is why we built a RESTful API powered by Flask, loading open public transit data from multiple transit agencies into a database. On the front end, we used Android Studio to design and develop the app and how the user interacts with the data provided by the back-end server. ## Challenges we ran into For the back end, MySQL was not cooperating, and we had to switch gears to other databases. In addition, the open data from transit agencies was extremely difficult to interpret. Lastly, the remote server's performance was very limited, so we needed to experiment with different compilers and interpreters for Python to make our program run faster. For the front end, we faced technical challenges with the Google Maps SDK. The SDK was difficult and cumbersome to work with, as it provided numerous irrelevant functionalities. We also ran into graphical layout issues due to our lack of familiarity with the Android Studio environment. Additionally, integrating the back-end data required extensive logic, so our crew faced roadblocks in completing our app. ## Accomplishments that we're proud of Nonetheless, with patience and perseverance, our team refused to give up. After watching various online tutorials and doing extensive research on Android Studio, we found the bugs in our code and successfully figured out ways to connect the back-end data with the front end. In a matter of 36 hours, our team overcame all these challenges and built a working application designed to tackle an everyday issue. ## What we learned Over this weekend, our team was stretched and challenged in our knowledge of code. Most importantly, we learned to persevere even when things are not working out. ## What's next for TranSleep We plan to support more transit agencies and continue to improve the TranSleep experience.
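The wake-up trigger reduces to a great-circle distance check between the phone and the destination; a haversine sketch of the 1 km test, with the alarm hook left as a placeholder:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def maybe_wake(phone: tuple, stop: tuple, ring) -> None:
    """Fire the alarm once the phone is within 1 km of the chosen stop."""
    if haversine_km(phone[0], phone[1], stop[0], stop[1]) <= 1.0:
        ring()  # placeholder: vibrate + alarm on the device
```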
winning
## Inspiration We were inspired by a [recent article](https://www.cbc.ca/news/canada/manitoba/manitoba-man-heart-stops-toronto-airport-1.5430605) that we saw on the news, about a man who suffered a cardiac arrest while waiting for his plane. With the help of a bystander who was able to administer the AED and CPR, he made a full recovery. We wanted to build a solution that connects victims of cardiac arrest with bystanders who are willing to help, thereby [increasing their survival rates](https://www.ahajournals.org/doi/10.1161/CIRCOUTCOMES.109.889576). We truly believe in the goodness and willingness of people to help. ## Problem Space We wanted to be laser-focused on the problem that we are solving: helping victims of cardiac arrest. We did tons of research to validate that this was a problem to begin with, before diving deeper into the solution space. We also found that there are laws protecting those who try to offer help, indemnifying them from liability while performing CPR or using an AED: the [Good Samaritan and the Chase McEachern Act](https://www.toronto.ca/community-people/public-safety-alerts/training-first-aid-courses/). So why not ask everyone to help? ## What it does Hero is a web- and app-based platform that empowers community members to assist in time-sensitive medical emergencies, especially cardiac arrests, by providing them an ML-optimised route that maximizes the victim's chances of survival. It has two components: Hero Command and Hero Deploy. 1) **Hero Command** is the interface that the EMS uses. It shows the locations of cardiac arrests on a single map, as well as the nearby first responders and AED equipment. We scraped the Ontario Government's AED listing to provide an accurate geolocation of the AEDs in each area. Hero Command has an **ML model** working in the background to find the optimal route that the first responder should take: should they go straight to the victim and perform CPR, or should they detour to collect an AED before proceeding to the victim (which will take some time)? This is done by training our model on a sample dataset and calculating an estimated survival percentage for each of the two routes. 2) **Hero Deploy** is the mobile application that our community of first responders uses. It allows them to accept or reject a request, and provides the location and navigation instructions. It also provides hands-free CPR audio guidance so that community members can focus on CPR. *Cue the Stayin' Alive music by the Bee Gees* ## How we built it With so much passion, hard work, and an awesome team. And honestly, YouTube tutorials. ## Challenges I ran into We **did not know how** to create an app - all of us were either web devs or data analysts. This meant we had to watch a lot of tutorials and articles to get up to speed. We initially considered abandoning this idea because of our inability to create an app, but we are so happy that we managed to do it together. ## Accomplishments that I'm proud of Our team learned so many things in the past few days, especially tech stacks and concepts that were super unfamiliar to us. We are glad to have created something that is viable and working, with the potential to change how the world works and lives. We built three things: an ML model, a web interface, and a mobile application. ## What I learned Hard work takes you far. We also learned React Native, and how to train and use supervised machine learning models (which we did not have any experience in). We also worked on business and market validation to make sure the project we were building actually solves a real problem. ## What's next for Hero Possibly introducing the idea to government services and getting their buy-in. We may also explore other use cases for Hero.
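Concretely, the two-route comparison Hero Command makes can be sketched as follows. The decay constants and the AED multiplier below are illustrative stand-ins for the trained estimator, not medical guidance:

```python
def survival_estimate(minutes_to_arrival: float, aed_on_route: bool) -> float:
    """Toy stand-in for the trained model: estimated survival decays with
    each minute before intervention, with a bump if an AED arrives too."""
    base = 0.9 * (0.93 ** minutes_to_arrival)   # illustrative constants
    return min(1.0, base * (1.5 if aed_on_route else 1.0))

def pick_route(t_direct_min: float, t_via_aed_min: float) -> str:
    """Compare going straight to the victim vs. detouring for an AED."""
    direct = survival_estimate(t_direct_min, aed_on_route=False)
    via_aed = survival_estimate(t_via_aed_min, aed_on_route=True)
    return "direct" if direct >= via_aed else "via_aed"

# e.g. pick_route(t_direct_min=3.0, t_via_aed_min=5.5)
```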
## Inspiration Imagine you broke your EpiPen but need it immediately for an allergic reaction. Imagine being lost in the forest, bleeding from cut wounds after a fall, with no first aid kit. How will you take care of your health with no hospitals or pharmacies nearby? Well, good thing for you, we have **MediFly**!! MediFly is inspired by how emergency vehicles such as ambulances take too long to reach the person in need of aid because of traffic and other cars on the road. Every second spent waiting risks someone's life. To combat that issue, we use **drones** as the first emergency responders, sending medicine to save people's lives or keep them in a stable condition before human responders arrive. ## What it does MediFly allows the user to request emergency help or medication such as an EpiPen and epinephrine. First you download the MediFly app and create a personal account. Then you can log into your account and use its features when necessary. If you are in an emergency, press the "EMERGENCY" button and a list of common medication options will appear to pick from. There is also an option to search for the medication you need. Once a choice is selected, the local hospital sees the request and sends a drone to deliver the medication to the person; human first responders are also called. The drone has a GPS tracker and the GPS location of the person it needs to reach. When the drone is close, a message is sent telling the person to go outside to where the drone can see them. The camera uses facial recognition to confirm the person is indeed the registered user who ordered the medication. This level of security is important to ensure that the medication is delivered to the correct person. When the person is confirmed, the lid of the medication compartment opens so the person can take their medication. ## How we built it On the software side, the front end of the app was made with React, coded in JavaScript, and the back end was made with Django in Python. The text messages work through Twilio. Twilio is used to tell the user that the drone is nearby with the medication ready to hand over; it sends a message telling the person to go outdoors where the drone will be able to find them. On the hardware side, many different components make up the drone: four motors, four propeller blades, an electronic speed controller, a flight controller, and 3D-printed parts such as the camera mount, the medication box holder, and some components of the drone frame. There is also a Raspberry Pi SBC attached to the drone for controlling the on-board systems, such as the door that unloads the cargo bay, and for streaming the video to a server that runs the face recognition algorithm. ## Challenges we ran into Building the drone from scratch was a lot harder than we anticipated. There was a lot of setup needed for the hardware, and the building aspect was not easy: it consisted of a lot of taking apart, rebuilding, soldering, cutting, hot-gluing, and rebuilding again. Some of the video streaming systems did not work well at first, due to CORS blocking the requests, given that we were using two different computers to run two different servers. Traditional geolocation techniques often take too long; as such, we needed to build a scheme to cache a user's location before they decided to send a request, to prevent lag. Additionally, the number of pages required to build, stylize, and connect together made building the site a notable challenge of scale. ## Accomplishments that we're proud of We are extremely proud of the way the drone works and how it's able to move at quick, steady speeds while carrying the medication compartment and battery. On the software side, we are super proud of the facial recognition code and how it's able to tell the difference between different people's faces. The front and back end of the website/app are also really well done: we first made the front-end UI design in Figma and then implemented the design on our final website. ## What we learned For software, we learned how to use React, as well as various user authorization and authentication techniques. We also learned how to use Django. We learned how to build an accurate, efficient, and resilient face detection, recognition, and tracking system to make sure the package is always delivered to the correct person. We experimented with and learned various ways to stream real-time video over a network, including over the longer ranges needed for the drone. For hardware, we learned how to set up and construct a drone from scratch! ## What's next for MediFly In the future we hope to add a live GPS view so that the person who orders the medication can see where the drone is on its path. We would also expand the Twilio text messages so that when the drone is within a close radius of the user, it sends a message notifying the person to go outside and wait for the drone to deliver the medication.
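For readers curious about the confirmation step: the writeup doesn't name a specific library, but with the open-source face_recognition package (used here purely to illustrate the embedding-comparison approach), the check might look like:

```python
import face_recognition

def is_registered_user(profile_photo: str, drone_frame: str,
                       tolerance: float = 0.5) -> bool:
    """Compare the account photo against every face in a frame from the drone."""
    known = face_recognition.face_encodings(
        face_recognition.load_image_file(profile_photo)
    )[0]
    frame = face_recognition.load_image_file(drone_frame)
    candidates = face_recognition.face_encodings(frame)
    return any(
        face_recognition.compare_faces([known], enc, tolerance=tolerance)[0]
        for enc in candidates
    )

# The medication compartment would open only when this returns True
```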
## Inspiration Guardian Angel was born from the need for reliable emergency assistance in an unpredictable world. Our experiences with the elderly, such as our grandparents, who may fall when we’re not around, and the challenges we may face in vulnerable situations motivated us to create a tool that automatically reaches out for help when it’s needed most. We aimed to empower individuals to feel safe and secure, knowing that assistance is just a call away, even in their most vulnerable moments. ## What it does Core to Guardian Angel, our life-saving Emergency Reporter AI speech app, is an LLM and text-to-speech pipeline that provides real-time, situation-critical responses to 911 dispatchers. The app automatically detects distress signals—such as falls or other emergencies—and contacts dispatch services on behalf of the user, relaying essential information like patient biometric data, medical history, current state, and location. By integrating these features, Guardian Angel enhances efficiency and improves success in time-sensitive situations where rapid, accurate responses are crucial. ## How we built it We developed Guardian Angel using React Native with Expo, leveraging Python and TypeScript for enhanced code quality. The backend is powered by FastAPI, allowing for efficient data handling. We integrated AI technologies, including Google Gemini for voice transcription and Deepgram for audio processing, which enhances our app’s ability to communicate effectively with dispatch services. ## Challenges we ran into Our team faced several challenges during development, including difficulties with database integration and frontend design. Many team members were new to React Native, leading to styling and compatibility issues. Additionally, figuring out how to implement functions in the API for text-to-speech and speech-to-text during phone calls required significant troubleshooting. ## Accomplishments that we're proud of We are proud of several milestones achieved during this project. First, we successfully integrated a unique aesthetic into our UI by incorporating hand-drawn elements, which sets our app apart and creates a friendly, approachable user experience. Additionally, we reached a significant milestone in audio processing by effectively transcribing audio input using the Gemini model, allowing us to capture user commands accurately, and converting the transcribed text back to voice with Deepgram for seamless communication with dispatch. We’re also excited to share that our members have only built websites, making the experience of crafting an app and witnessing the fruits of our labor even more rewarding. It’s been exciting to acquire and apply new tools throughout this project, diving into various aspects of transforming our idea into a scalable application—from designing and learning UI/UX to implementing the React Native framework, emulating iOS and Android devices for testing compatibility, and establishing communication between the frontend and backend/database. ## What we learned Through this hackathon, our team learned the importance of effective collaboration, utilizing a “divide and conquer” approach while keeping each other updated on our progress. We gained hands-on experience in mobile app development, transitioning from our previous focus on web development, and explored new tools and technologies essential for creating a scalable application. 
## What's next for Guardian Angel Looking ahead, we plan to enhance Guardian Angel by integrating features such as smartwatch compatibility for monitoring vital signs like heart rate and improving fall detection accuracy. We aim to refine our GPS location services for better tracking and continue optimizing our AI speech models for enhanced performance. Additionally, we’re exploring the potential for spatial awareness and microphone access to record surroundings during emergencies, further improving our response capabilities.
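For a sense of the backend shape: a FastAPI endpoint that accepts a distress audio clip and returns a transcript can be this small. The transcribe stub below stands in for the Gemini call, whose exact wiring is app-specific:

```python
from fastapi import FastAPI, UploadFile

app = FastAPI()

def transcribe(audio: bytes) -> str:
    # Stub: the real app would send the audio to the Gemini transcription model
    return "<transcript goes here>"

@app.post("/report")
async def report(audio: UploadFile):
    transcript = transcribe(await audio.read())
    # Downstream, the dispatcher pipeline would run TTS over the LLM's reply
    return {"transcript": transcript}
```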
winning
## Inspiration Old-school bosses don't want to see you slacking off and always expect you to be all movie-hacker in the terminal 24/7. As professional slackers, we also need our fair share of coffee and snacks. We initially wanted to create a terminal app to order Starbucks and have it delivered to the E7 front desk, then bribe a volunteer to bring it up using directions from Mappedin. It turned out to be quite hard to reverse engineer Starbucks. We then tried UberEats, which was even worse. After exploring bubble tea, cafes, and even Lazeez, we decided to order pizza instead. Because if we're suffering, we might as well suffer in a food coma. ## What it does Skip the Walk brings food right to your table with the help of volunteers. In exchange for you not taking a single step, volunteers are paid in what we like to call bribes. These can be the swag hackers received, food, or money. ## How we built it We used commander.js to create the command-line interface, Next.js to run Mappedin, and Vercel to host our API endpoints and frontend. We integrated a few Slack APIs to create the Slack bot. To actually order the pizzas, we employed Terraform. ## Challenges we ran into Our initial idea was to order coffee through a command line, but we soon realized there weren't suitable APIs for that. When we tried manually sending POST requests to Starbucks' website, we ran into reCAPTCHA issues. After examining many companies' websites and nearly ordering three pizzas from Domino's by accident, we found ourselves back at square one—three times. By the time we settled on our final project, we had only nine hours left. ## Accomplishments that we're proud of Despite these challenges, we're proud that we managed to get a proof of concept up and running with a CLI, backend API, frontend map, and a Slack bot in less than nine hours. This achievement highlights our ability to adapt quickly and work efficiently under pressure. ## What we learned Through this experience, we learned that planning is crucial, especially when working within the tight timeframe of a hackathon. Flexibility and quick decision-making are essential when initial plans don't work out, and being able to pivot effectively can make all the difference. ## Terraform We used Terraform this weekend for ordering Domino's. We had many close calls and actually did order once by accident, but luckily we got that cancelled. We created a Node.js app and wrote Terraform files to run it, used Terraform template .tf files to place the Domino's order, and finally used Terraform to deploy our map on Render. We always thought it would be funny to use infrastructure-as-code to do something other than pure infrastructure. Gotta eat too! ## Mappedin Mappedin was an impressive tool to work with. Its documentation was clear and easy to follow, and the product itself was highly polished. We leveraged its room labeling and pathfinding capabilities to help volunteers efficiently deliver pizzas to hungry hackers with accuracy and ease. ## What's next for Skip the Walk We plan to enhance the CLI with features such as reordering, randomizing orders, and tips for volunteers. These improvements aim to enrich the user experience and make the platform more engaging for both hackers and volunteers.
💭 **Inspiration** 💭 We know what it's like to be indecisive about where to eat (trust us, we're both extremely indecisive), so we created a solution that'll solve just that! 🍴 **What it does** 🍴 For all the indecisive people, our program will randomly select a restaurant as well as recommend the highest-rated restaurant within a given radius! If neither of these satisfies your cravings, Find Dine will then provide you a full list of restaurants in the area. 🔨 **How we built it** 🔨 We wanted to work with a coding language that was familiar to both of us, hence we used Python. The IDE we chose is Visual Studio Code, which has a live-sharing tool that made collaboration a breeze. Lastly, to make our code come together, we used Google's Geocoding API to turn the inputted location into coordinates. Along with that, we gathered filters based on additional user inputs to then generate the restaurants using Google's Places API. 😭 **Challenges we ran into** 😭 We are both fairly new to coding and this is our first ever hackathon! That being said, we had never worked with APIs, so it took a good chunk of our time to learn their uses and functions. As a result, we never had the time to create an interface. ✨ **Accomplishments that we're proud of** ✨ 1. WE FINISHED OUR FIRST HACKATHON *(kinda)*! 2. We incorporated something we've never used or learnt about before into our code. 3. We pushed ourselves to use something outside of our comfort zones. 4. We persevered till the end. 📚 **What we learned** 📚 1. We learned about APIs and how to work with them. 2. How to use Git and GitHub properly and effectively. 🔮 **What's next for Find Dine** 🔮 We would like to finish an interface to get our code out to the public one day (hopefully an app, so we can take this to go)! We'd also like the program to retrieve a user's location by itself, rather than them typing it in.
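Since the program is plain Python over Google's Geocoding and Places web endpoints, the core flow condenses to a couple of requests calls; a sketch, with API-key handling and error checks omitted:

```python
import random

import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def find_dine(address: str, radius_m: int, key: str):
    """Return a random pick, the top-rated pick, and the full restaurant list."""
    loc = requests.get(GEOCODE_URL, params={"address": address, "key": key},
                       timeout=10).json()["results"][0]["geometry"]["location"]
    places = requests.get(PLACES_URL, params={
        "location": f"{loc['lat']},{loc['lng']}",
        "radius": radius_m, "type": "restaurant", "key": key,
    }, timeout=10).json().get("results", [])
    random_pick = random.choice(places)
    top_rated = max(places, key=lambda p: p.get("rating", 0))
    return random_pick["name"], top_rated["name"], [p["name"] for p in places]
```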
## Inspiration A couple of weeks ago, 3 of us met up at a new Italian restaurant and we started going over the menu. It became very clear to us that there were a lot of options, but also a lot of them didn't match our dietary requirements. And so, we though of Easy Eats, a solution that analyzes the menu for you, to show you what options are available to you without the dissapointment. ## What it does You first start by signing up to our service through the web app, set your preferences and link your phone number. Then, any time you're out (or even if you're deciding on a place to go) just pull up the Easy Eats contact and send a picture of the menu via text - No internet required! Easy Eats then does the hard work of going through the menu and comparing the items with your preferences, and highlights options that it thinks you would like, dislike and love! It then returns the menu to you, and saves you time when deciding your next meal. Even if you don't have any dietary restricitons, by sharing your preferences Easy Eats will learn what foods you like and suggest better meals and restaurants. ## How we built it The heart of Easy Eats lies on the Google Cloud Platform (GCP), and the soul is offered by Twilio. The user interacts with Twilio's APIs by sending and recieving messages, Twilio also initiates some of the API calls that are directed to GCP through Twilio's serverless functions. The user can also interact with Easy Eats through Twilio's chat function or REST APIs that connect to the front end. In the background, Easy Eats uses Firestore to store user information, and Cloud Storage buckets to store all images+links sent to the platform. From there the images/PDFs are parsed using either the OCR engine or Vision AI API (OCR works better with PDFs whereas Vision AI is more accurate when used on images). Then, the data is passed through the NLP engine (customized for food) to find synonym for popular dietary restrictions (such as Pork byproducts: Salami, Ham, ...). Finally, App Engine glues everything together by hosting the frontend and the backend on its servers. ## Challenges we ran into This was the first hackathon for a couple of us, but also the first time for any of us to use Twilio. That proved a little hard to work with as we misunderstood the difference between Twilio Serverless Functions and the Twilio SDK for use on an express server. We ended up getting lost in the wrong documentation, scratching our heads for hours until we were able to fix the API calls. Further, with so many moving parts a few of the integrations were very difficult to work with, especially when having to re-download + reupload files, taking valuable time from the end user. ## Accomplishments that we're proud of Overall we built a solid system that connects Twilio, GCP, a back end, Front end and a database and provides a seamless experience. There is no dependency on the user either, they just send a text message from any device and the system does the work. It's also special to us as we personally found it hard to find good restaurants that match our dietary restrictions, it also made us realize just how many foods have different names that one would normally google. ## What's next for Easy Eats We plan on continuing development by suggesting local restaurants that are well suited for the end user. This would also allow us to monetize the platform by giving paid-priority to some restaurants. 
There's also a lot to be improved in terms of code efficiency (we think we have an O(n⁴) loop in one of the functions ahah...) to make this a smoother experience. Easy Eats will change restaurant dining as we know it: we will keep expanding its services and making life easier for people, looking to provide local suggestions based on your preferences.
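For the curious, here is a stripped-down sketch of the menu-scanning step, assuming the Vision AI path (the real pipeline also handles PDFs through the OCR engine, and the pork-synonym set below is just a stand-in for the full NLP output):

```python
from google.cloud import vision

# Example synonym set; the real NLP engine expands restrictions much further.
PORK = {"pork", "ham", "salami", "bacon", "prosciutto"}

def scan_menu(image_bytes: bytes, restricted=PORK) -> dict:
    # Run Vision AI text detection over the photographed menu.
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    if not response.text_annotations:
        return {}
    menu_lines = response.text_annotations[0].description.splitlines()
    # Flag each line that mentions a restricted ingredient.
    return {line: ("avoid" if restricted & set(line.lower().split()) else "ok")
            for line in menu_lines if line.strip()}
```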
partial
## Inspiration
In today's age, people have become more and more divided in their opinions. We've found that discussion nowadays can devolve into people shouting instead of trying to understand each other.

## What it does
**Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics, ranging from silly scenarios to more serious situations. (E.g. Is a hot dog a sandwich? Is mass surveillance needed?)
Once you've picked a topic and your opinion of it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 minutes to chat with this person and hopefully discover your similarities and differences in perspective. After the chat is over, we ask you to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions, as both matched users will have reputations for maturity.

## How we built it
**Tech Stack**
* Front-end/UI
  + Flutter and Dart
  + Adobe XD
* Backend
  + Firebase
    - Cloud Firestore
    - Cloud Storage
    - Firebase Authentication

**Details**
* Front end was built after developing UI mockups/designs
* Heavy use of advanced widgets and animations throughout the app
* Creation of multiple widgets that are reused around the app
* Backend uses Gmail authentication with Firebase
* Topics for debate are uploaded using Node.js to Cloud Firestore and are displayed in the app using specific Firebase packages
* Images are stored in Firebase Storage to keep the source files together

## Challenges we ran into
* Initially connecting Firebase to the front-end
* Managing state while implementing multiple complicated animations
* Designing the backend, matching users with each other, and allowing them to chat

## Accomplishments that we're proud of
* The user interface we made and the animations on the screens
* Sign up and login using Firebase Authentication
* Saving user info into Firestore and storing images in Firebase Storage
* Creation of beautiful widgets

## What we learned
* A deeper dive into state management in Flutter
* How to design UI/UX with fonts and colour palettes
* How to use Cloud Functions in Google Cloud Platform
* Built on top of our knowledge of Firestore

## What's next for Change My Mind
* More topics and user settings
* Implementing ML to match users based on maturity and other metrics
* Potential monetization of the app: premium analysis of user conversations
* Clean up the Coooooode! Better implementation of state management, specifically Provider or BLoC
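The matching logic boils down to a Firestore query for the opposite stance on a topic. A simplified sketch using the Python Admin SDK (our app does this from Dart, and the collection and field names here are illustrative):

```python
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()
db = firestore.client()

def find_opponent(topic_id: str, my_stance: bool):
    # Grab one waiting user who picked the opposite side of this topic.
    pool = (db.collection("topics").document(topic_id)
              .collection("waiting")
              .where("stance", "==", (not my_stance))
              .limit(1)
              .get())
    # If someone is waiting, pair them into a 10-minute chat room.
    return pool[0].id if pool else None
```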
## Inspiration We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday. ## What it does Our mobile app looks at company logos, identifies the company and grabs the financial information, recent company news and company financial statements of the company and displays the data in an Augmented Reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to better save and invest. ## How we built it Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard. ## Challenges we ran into Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder. ## Accomplishments that we're proud of Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use. ## What we learned Lots of things about Augmented Reality, graphics and Android mobile app development. ## What's next for ARnance Potential to build in more charts, financials and better speech/chatbot abilities into out application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out.
## Clique

### Inspiration + Description
Our inspiration came from really missing those real-life encounters that we make with people on a daily basis. In college, we quite literally have the potential to meet someone new every day: at the dining hall, the gym, discussion section, anywhere. However, with remote learning put into place at most universities, these encounters are now nonexistent. The interactions with people on Zoom just weren't cutting it for meeting new people and starting organic relationships.
We created Clique: an app that encourages university students to organically meet each other. To remove barriers of social anxiety and awkwardness, Clique makes its users anonymous. We only let users upload an image that is not of themselves and a 10-word bio to describe themselves. This way it's similar to the real-life encounters they would've made on campus, where they wouldn't even know the other person's name. This also removes the inherent bias of judging people by their appearance. After signing up, users can swipe through other users' profiles and decide if an image and bio are interesting enough for them to strike up a conversation.

### Technical Details
We used React for the frontend and Google Firebase for the backend.

#### Front End
Clique contains 4 core pages: Login/Signup, Profile, Match, and Conversations. We used React Bootstrap for many of the components to create a minimalistic design. All of the pages interact with the user and also interact with the backend. For finishing touches we added a navigation bar for easy access and loading animations to improve the user experience.

#### Back End
We used Firebase Auth to handle logins and signups. Upon signup, we add a new entry in Firestore, hashed with the unique user id; the entry contains a default bio. When the user uploads an image to their profile, we hash that image with their uid as well for fast lookup. This way we can easily pull data relevant to the current user, because we can search for data tagged with their unique user id. Our chat functionality creates a new entry for every conversation between two users and continually updates that entry in a sub-entry as the conversation goes on. Our matching functionality randomly generates users that we have never matched with before by checking hashes in the user's match history (see the sketch below).
We really embraced our will to break down the barriers that prevent bringing people together, building a user-focused product to help people make connections in a socially distant way. :-)
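A rough Python sketch of that matching rule (never re-show someone you've already matched with), with illustrative collection names; `db` stands in for a Firestore client:

```python
import random

def pick_match(db, my_uid: str):
    # Hashes of everyone this user has matched with before, for O(1) exclusion.
    seen = {doc.id for doc in db.collection("users").document(my_uid)
                                .collection("matches").get()}
    # Any other user not already seen is a candidate.
    candidates = [doc.id for doc in db.collection("users").get()
                  if doc.id != my_uid and doc.id not in seen]
    return random.choice(candidates) if candidates else None
```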
winning
## Inspiration
We are tinkerers and builders who love getting our hands on new technologies. When we discovered that the Spot robot dog from Boston Dynamics was available to build our project upon, we devised different ideas about the real-world benefits of robot dogs. From a conversational companion to a navigational assistant, we bounced around different ideas and ultimately decided to use the Spot robot to detect explosives in the surrounding environment, as we realized the immense amount of time and resources that go into training real dogs to perform these dangerous yet important tasks.

## What it does
Lucy uses the capabilities of the Spot robot dog to help identify potentially threatening elements in a surrounding area through computer vision and advanced wave-sensing capabilities. A user can command the dog to inspect a certain geographic area, and the dog autonomously walks around the entire area and flags objects that could be a potential threat. It captures both raw and thermal images of a given object in multiple frames, which are then stored in a vector database and can be searched through semantic search. This project is a simplified approach inspired by the research "Atomic Magnetometer Multisensor Array for rf Interference Mitigation and Unshielded Detection of Nuclear Quadrupole Resonance" (<https://link.aps.org/accepted/10.1103/PhysRevApplied.6.064014>).

## How we built it
We've combined the capabilities of OpenCV with a thermal-sensing camera to allow the Spot robot to identify and flag potentially threatening elements in a given surrounding. To simulate these elements in the surroundings, we built a simple Arduino application that emits light waves in irregular patterns. The robot dog operates independently through speech instructions, which are powered by Deepgram's Speech-to-Text and a Llama-3-8b model hosted on the Groq platform. Furthermore, we've leveraged ChromaDB's vector database to tokenize images, allowing people to easily search through images, which are captured in the range of 20-40 fps.

## Challenges we ran into
The biggest challenge we encountered was executing and testing our code on Spot due to the unreliable internet connection. We also faced configuration issues, as some parts of modules were not supported and used an older version, leading to multiple errors during testing. Additionally, the limited space made it difficult to effectively run and test the code.

## Accomplishments that we're proud of
We are proud that we took on the challenge of working with something that we had never worked with before, and that even after many hiccups and obstacles we were able to convert the idea in our heads into a physical reality.

## What we learned
We learned how to integrate and deploy our program onto Spot. We also learned how to work around the limitations of the technology and of our experience with it.

## What's next for Lucy
We want to integrate LiDAR into our approach, providing more accurate results than cameras alone. We plan to experiment beyond light to include different waveforms, thus helping improve the reliability of the results.
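The image search piece is essentially "embed every captured frame, store it, query by text." A condensed sketch of how that looks with ChromaDB (the `embed` function below is a placeholder for whatever encoder actually produces the vectors, e.g. a CLIP-style model):

```python
import chromadb

def embed(data) -> list[float]:
    # Placeholder: stand-in for a real image/text encoder.
    return [0.0] * 512

client = chromadb.Client()
frames = client.create_collection("spot_frames")

def store_frame(frame_id: str, image, is_thermal: bool, flagged: bool):
    # Each captured frame becomes one vector plus searchable metadata.
    frames.add(ids=[frame_id],
               embeddings=[embed(image)],
               metadatas=[{"thermal": is_thermal, "flagged": flagged}])

def search(description: str):
    # Semantic search over everything Spot has captured so far.
    return frames.query(query_embeddings=[embed(description)], n_results=5)
```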
# 🚗 InsuclaimAI: Simplifying Insurance Claims 📝

## 🌟 Inspiration 💡
After a frustrating experience with a minor fender-bender, I was faced with the overwhelming process of filing an insurance claim. Filling out endless forms, speaking to multiple customer service representatives, and waiting for assessments felt like a second job. That's when I knew there needed to be a more streamlined process. Thus, InsuclaimAI was conceived as a solution to simplify the insurance claim maze.

## 🎓 What I Learned

### 🛠 Technologies

#### 📖 OCR (Optical Character Recognition)
* OCR tooling built on OpenCV helped in scanning and reading textual information from physical insurance documents, automating the data extraction phase.

#### 🧠 Machine Learning Algorithms (CNN)
* Utilized Convolutional Neural Networks to analyze and assess damage in photographs, providing an immediate preliminary estimate for claims.

#### 🌐 API Integrations
* Integrated APIs from various insurance providers to automate the claims process. This helped in creating a centralized database for multiple types of insurance.

### 🌈 Other Skills

#### 🎨 Importance of User Experience
* Focused on intuitive design and simple navigation to make the application user-friendly.

#### 🛡️ Data Privacy Laws
* Learned about GDPR, CCPA, and other regional data privacy laws to make sure the application is compliant.

#### 📑 How Insurance Claims Work
* Acquired a deep understanding of the insurance sector, including how claims are filed and processed, and what factors influence the approval or denial of claims.

## 🏗️ How It Was Built

### Step 1️⃣: Research & Planning
* Conducted market research and user interviews to identify pain points.
* Designed a comprehensive flowchart to map out user journeys and backend processes.

### Step 2️⃣: Tech Stack Selection
* After evaluating various programming languages and frameworks, Python, TensorFlow, and Flet (a Python UI framework) were selected, as they provided the most robust and scalable solutions.

### Step 3️⃣: Development

#### 📖 OCR
* Integrated Tesseract for OCR capabilities, enabling the app to automatically fill out forms using details from uploaded insurance documents.

#### 📸 Image Analysis
* Applied a computer vision model trained on thousands of car accident photos to detect the damage on automobiles.

#### 🏗️ Backend

##### 📞 Twilio
* Integrated Twilio to facilitate voice calling with insurance agencies. This allows users to directly reach out to the insurance agency, making the process even more seamless.

##### ⛓️ Aleo
* Used Aleo to tokenize PDFs containing sensitive insurance information on the blockchain. This ensures the highest levels of data integrity and security. Every PDF is turned into a unique token that can be securely and transparently tracked.

##### 👁️ Verbwire
* Integrated Verbwire for advanced user authentication using FaceID. This adds an extra layer of security by authenticating users through facial recognition before they can access or modify sensitive insurance information.

#### 🖼️ Frontend
* Used Flet to create a simple yet effective user interface. Incorporated feedback mechanisms for real-time user experience improvements.

## ⛔ Challenges Faced

#### 🔒 Data Privacy
* Researching and implementing data encryption and secure authentication took longer than anticipated, given the sensitive nature of the data.

#### 🌐 API Integration
* Where available, we integrated with providers' REST APIs, providing a standard way to exchange data between our application and the insurance providers.
This enhanced our application's ability to offer a seamless and centralized service for multiple types of insurance.

#### 🎯 Quality Assurance
* Iteratively improved the OCR and image analysis components to reach a satisfactory level of accuracy. Constantly validated results with actual data.

#### 📜 Legal Concerns
* Spent time consulting with legal advisors to ensure compliance with various insurance regulations and data protection laws.

## 🚀 The Future 👁️
InsuclaimAI aims to be a comprehensive insurance claim solution. Beyond just automating the claims process, we plan on collaborating with auto repair shops, towing services, and even medical facilities in the case of personal injuries, to provide a one-stop solution for all post-accident needs.
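The form auto-fill step boils down to running Tesseract over the uploaded document and pattern-matching the fields we care about. A minimal sketch (the regex is illustrative; real policy documents need far more robust parsing):

```python
import re
import pytesseract
from PIL import Image

def extract_policy_fields(path: str) -> dict:
    # OCR the scanned insurance document into plain text.
    text = pytesseract.image_to_string(Image.open(path))
    # Pull out a policy number if one is present.
    match = re.search(r"Policy\s*(?:No\.?|Number)[:\s]*([A-Z0-9-]+)", text, re.I)
    return {"raw_text": text,
            "policy_number": match.group(1) if match else None}
```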
## Inspiration
We wanted to build something that would help those with limited mobility or eyesight. We wanted to make something that was as simple and intuitive as possible, while performing complex tasks. To this end, we designed a system to allow users to locate items around their home that they might not normally be able to see.

## What it does
Our fleet of robots allows the user to speak the name of the object they are looking for; the robots will then set off autonomously to track down the item. They report back to the user once they have found the item, while the user can watch every step along the way via a live video stream. The user can also take manual control of the robots at any time if they so wish.

## How I built it
The robots were built using laser-cut plates, a Raspberry Pi, DC motors, and a dual-voltage power system. The software used a TCP/IP library for streaming video called FireEye to send video and data from the Raspberry Pi to our Node.js server. This server performed image processing and natural language processing to determine what the user was trying to find, and to identify it when the camera picked the object up. The front end was built using React.js, with Socket.io acting as the method of communication between server and UI.

## Challenges I ran into
We ran into many challenges. Many. Our first problems lay with trying to get a consistent video stream from the robot to our server, and then things only grew more difficult. We faced challenges trying to communicate data from our server to the robot, and from our server to the front-end UI. We also have very little experience designing user interfaces, and ran into many implementation problems. Additionally, this was the first project we have coded with Node.js, which we learned was substantially different from Python. (Looking back, Python probably should have been the way to go...)

## Accomplishments that I'm proud of
We are particularly proud of the overall tech stack we ended up using. There are many technologies that we had to get working, and then get communicating with each other, before our system would become functional. We learned about TCP and WebSockets, as well as coding for hardware constraints, and how to perform cloud image processing.

## What I learned
We learned a substantial amount overall, mostly as it related to socket programming and how to have multiple components share stateful data. We also learned how to deal with the constraints of network speed and Raspberry Pi processing power. As such, we learned about multi-threading programs to make them run more efficiently.

## What's next for MLE
We would like to expand our robots to include a robot arm, such that they would be able to retrieve and interact with the objects they are searching for. We would also like to make the robots bigger so that they can navigate more effectively. We also have plans to increase the overall speed of the system, and to try to eliminate network and streaming latency.
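Our server was Node.js, but the core trick in this kind of TCP video streaming is length-prefixed frames (FireEye's actual wire format may differ; this is a sketch of the general idea, and, as noted above, in Python because that probably should have been the way to go):

```python
import socket
import struct

def recv_exact(conn: socket.socket, n: int) -> bytes:
    # TCP recv can return partial data, so loop until n bytes arrive.
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed early")
        buf += chunk
    return buf

def recv_frame(conn: socket.socket) -> bytes:
    # Each frame arrives as a 4-byte big-endian length prefix, then JPEG bytes.
    (size,) = struct.unpack(">I", recv_exact(conn, 4))
    return recv_exact(conn, size)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))   # port is arbitrary
server.listen(1)
```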
partial
## Inspiration
Type 1 diabetes is a chronic disease that requires intensive management of blood glucose levels, resulting in countless interrupted meals, sleepless nights, and stress. Tools to automate insulin delivery have been slow to come to market, and are often inaccessible due to their higher costs; all current systems also offer few customization options, locking patients into algorithms that are not tailored to their specific physiology.

## What it does
Loop is an open-source automated insulin dosing app that uses historical blood glucose, carbohydrate, and insulin data to automatically set basal (background) insulin rates, minimizing the cognitive burden of diabetes. The app is interoperable with several insulin pumps and continuous glucose monitors to provide maximum patient choice. One important feature is that the algorithm can accommodate the use of multiple types of insulin, a commonly requested feature that is not found in any other open-source or commercial system; it helps tailor the system to the needs of each patient, giving them the ability, for example, to inject long-acting basal insulin and use Loop for automatic dosing adjustments. Parents of diabetics can actually sleep through the night, while their child enjoys a lower average A1c (a measure of average blood glucose), which has been shown to reduce the risk of diabetes-related complications.

## How I built it
I took advantage of already-existing frameworks and tools in the DIY diabetes open-source community to accelerate my development, improving the user interface and algorithms as I went along. The hardware and transmission protocols to communicate with insulin pumps and continuous glucose monitors are publicly available, so I could really focus on improving the software. I collaborated with a physician at the Stanford School of Medicine to gain access to insulin modeling data for the next generation of ultra-rapid-acting insulins, which are commonly used by patients but which current automated insulin dosing systems cannot accommodate, forcing patients to choose between advanced insulin formulations and cutting-edge technology.

## Challenges I ran into
I spent more than 6 hours debugging issues with Core Data migration, which we found was ultimately caused by a simple SQL file misnaming, despite what the (40-page) error message might have said!

## Accomplishments that I'm proud of
* Incorporating experimental data from research experiments in order to customize insulin curve modeling
* Crafting a smooth user interface to minimize the number of taps or swipes needed to do common actions, like entering mealtime insulin doses
* Enabling patients to enter multiple types of insulin into the system (including insulin doses that weren't given by the pump!) and have them be correctly accounted for, letting these patients benefit from automated insulin delivery when they previously could not

## What I learned
I learned SO MUCH about how the Core Data frameworks work under the hood, and about the best debugging practices for Swift/Xcode.

## What's next for Loop: Accessible & Customized Automatic Insulin Delivery
The improvements that were made will be tested in patients and contributed back to the #WeAreNotWaiting open-source diabetes community so that as many patients as possible can benefit from the work done at TreeHacks.

### If you would like to interact with the app on your device, please comment so I can invite you to the TestFlight build!
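To give a feel for the kind of modeling involved, here is a toy single-exponential decay curve. This is for intuition only: Loop's actual algorithm uses far more careful, clinically grounded insulin activity models, and every number below is illustrative:

```python
import math

def insulin_remaining(t_min: float, tau_min: float) -> float:
    """Toy fraction of a dose still active t_min minutes after injection."""
    return math.exp(-t_min / tau_min) if t_min >= 0 else 0.0

def total_active_insulin(doses, now_min: float) -> float:
    # doses: iterable of (units, start_min, tau_min). Each insulin type gets
    # its own tau, which is how multiple formulations can coexist in one model.
    return sum(u * insulin_remaining(now_min - t0, tau) for u, t0, tau in doses)
```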
## Inspiration
We were originally inspired by the use of emojis to create mosaic art from photos. We thought it was interesting how emojis are often used in modern communication to replace words. We wanted to explore how emojis could be used to visualize the meanings of songs in a fun and interactive way.

## What it does
Emojic takes the lyrics in a song and translates the words into emojis. Then, based on the frequencies in the music's audio profile, it displays these emojis in a visualization that interacts with the beat of the music. The result is a fun and unique visualization of the song through the use of emojis.
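The lyric-to-emoji step is conceptually a dictionary lookup. A tiny sketch (the mapping here is a toy example; the real translation covers far more vocabulary, and the app additionally animates the emojis using the audio's frequency content):

```python
# Toy mapping; a real table would cover far more vocabulary.
EMOJI_MAP = {"love": "❤️", "fire": "🔥", "rain": "🌧️", "dance": "💃", "sun": "☀️"}

def lyrics_to_emojis(lyrics: str) -> list[str]:
    # Replace each recognized word with its emoji and drop the rest.
    return [EMOJI_MAP[w] for w in lyrics.lower().split() if w in EMOJI_MAP]

print("".join(lyrics_to_emojis("dance in the rain under the sun")))  # 💃🌧️☀️
```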
(Local program for data in + hardware firmware) <https://github.com/assasin2gamer/CalHacks>
(Website with Reflex) <https://github.com/mishcoder/calhacks>
(Intel integration) <https://github.com/AKUMAR0019/calhacks>

## Inspiration
Our project draws inspiration from the challenge of classifying emotions, a complex task. We aim to provide a reliable and cost-effective solution for training EEGs (electroencephalograms), which can contribute to better understanding and analysis of emotions. We used a research-grade EEG headset to get reliable data and do accurate sentiment analysis.

## What it does
Our project leverages machine learning to train on vast datasets of EEG data and Hume's audio analysis. We use this training to develop a sentiment analysis model specifically tailored to EEGs. EEGs capture brainwaves resulting from the neuro-physiological interactions in the brain. By employing techniques like the Fast Fourier Transform (FFT) and random tree (RT) modeling of time series data, we can identify EEG characteristics associated with different emotions. Our model allows us to generate visually appealing images using TogetherAI's image generation service based on the emotion classification, and even go further!

## How we built it
We built our product using five main components:
1) Hume for sentiment data labelling
2) CockroachDB as our database for storing EEG data
3) Intel Cloud Compute to compute our model
4) TogetherAI to generate images based on the emotion classification
5) Reflex to host our website, which combines the multiple data streams into one websocket

## Challenges we ran into
We ran into a few challenges:
1) We could not figure out why, but when we pinged the CockroachDB specifically from the Intel Compute instance, the instance would freeze.
2) Our concept of using the model as an input source for a VR game fell through when we realized the computer we brought could not handle the processing required.

## Accomplishments that we're proud of
We take pride in successfully integrating multiple services, including Intel's cloud computing resources, CockroachDB for data storage, and TogetherAI for image generation, to create a cohesive solution, leveraging each platform's advantages to build a complete and scalable tech stack.

## What's next for MindScape
In the future, we plan to refine our model further and explore additional applications, such as incorporating it into a VR game or expanding our analysis capabilities for emotion classification.

## We can read your mind
The details: using reverse referencing through Bloom filters, we figured out a way to introduce entropy with specific feature matches, and using advanced software and hardware noise reduction we can do something most teams cannot. Through EEG time series data we can estimate brain activity and, from that, classify emotions among other brain activities.
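The FFT step mentioned above reduces each EEG window to band powers that the downstream model can consume as features. A compact NumPy sketch (the band edges follow the usual alpha/beta conventions; our actual feature set is richer):

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    # Mean spectral power of the window between lo and hi Hz.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(power[mask].mean())

fs = 256                              # samples per second
window = np.random.randn(fs * 4)      # stand-in for 4 s of real EEG
features = {band: band_power(window, fs, lo, hi)
            for band, (lo, hi) in {"alpha": (8, 13), "beta": (13, 30)}.items()}
```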
losing
## Inspiration
We were interested in machine learning and data analytics and decided to pursue a real-world application that could prove to have practical use for society. Many themes of this project were inspired by hip-hop artist Cardi B.

## What it does
Money Moves analyzes data about financial advisors and their attributes and uses unsupervised deep-learning algorithms to predict whether certain financial advisors will most likely be beneficial or detrimental to an investor's financial standing.

## How we built it
We partially created a custom deep-learning library in which we built a Self Organizing Map. The Self Organizing Map is a neural network that takes data and creates a layer of abstraction, essentially reducing the dimensionality of the data (see the sketch below). To make this happen we had to parse several datasets, using the Beautiful Soup, pandas, and NumPy libraries. Once the data was parsed, we were able to pre-process it and feed it to our neural network (the Self Organizing Map). After we successfully analyzed the data with the deep-learning algorithm, we uploaded the neural network and dataset to our Google server, where we are hosting a Django website. The website shows investors the best possible advisor within their region.

## Challenges we ran into
Due to the nature of this project, we struggled with moving large amounts of data through the internet, cloud computing, and designing a website to display analyzed data, because of the difficulty with WiFi connectivity that many hackers faced at this competition. We mostly overcame this through working late nights and lots of frustration. We also struggled to find an optimal data structure for storing both raw and output data. We ended up using .csv files organized in a logical manner so that the data is easily accessible through a simple parser.

## Accomplishments that we're proud of
We successfully parsed the datasets needed for preprocessing and analysis with deep learning, and we were able to analyze our data with the Self Organizing Map neural network.
Side note: our team member Mikhail Sorokin placed 3rd in the YHack Rap Battle.

## What we learned
We learned how to implement a Self Organizing Map and how to build a good file system and code base with Django. This led us to learn about Google's cloud service, where we host our Django-based website. In order to analyze the data, we had to parse several files and format the data that we had to send through the network.

## What's next for Money Moves
We are looking to expand our Self Organizing Map to accept data from other financial datasets beyond stock advisors; this way we are able to have different models that work together. One idea is to combine unsupervised and supervised deep-learning systems, where the unsupervised network finds the patterns that would be challenging to find, and the supervised algorithm directs the system toward a goal that could help investors choose the best possible decision for their financial options.
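For readers unfamiliar with Self Organizing Maps, the core update rule is short. A minimal NumPy version (no learning-rate or neighbourhood decay, so this is a teaching sketch rather than our production network):

```python
import numpy as np

def train_som(data: np.ndarray, grid=(10, 10), epochs=100, lr=0.5, sigma=3.0):
    rng = np.random.default_rng(0)
    weights = rng.random((*grid, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the node whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), grid)
            # Pull the BMU's neighbourhood toward x, weighted by grid distance.
            gy, gx = np.indices(grid)
            dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights
```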
## Inspiration
The inspiration for *SurgeVue* came from the high risks associated with brain surgeries, where precision is critical and the margin for error is minimal. With a mortality rate that increases 15-fold for non-elective neurosurgeries, we were motivated to create a solution that leverages augmented reality (AR) and machine learning to enhance the precision of brain tumor removal surgeries. By merging real-time data with surgical tools, we aimed to provide surgeons with an advanced assistant to reduce risks and save lives.

## What it does
*SurgeVue* is an AR-powered surgical assistant that provides neurosurgeons with real-time visual overlays during brain surgery. Using machine learning, the system outlines tumors on the patient's brain and classifies the presence of foreign objects. Integrated with an Arduino gyroscope for hand tracking, SurgeVue offers surgeons real-time feedback on tool movements, sweat sensor data, and critical hand movement trends, all displayed within a secure mobile app that ensures patient data privacy using facial recognition and RFID technology. The system empowers surgeons to make more informed, precise decisions during the most delicate procedures.

## How we built it
We built *SurgeVue* using a combination of cutting-edge technologies:
* **OpenCV** for real-time tumor detection and hand movement detection for the augmented view.
* **PyTorch** for classifying tumors.
* **Swift and SceneKit** to create an immersive AR environment that overlays tumor outlines onto the surgeon's view.
* **Arduino gyroscope** for tracking the surgeon's hand movements and tool positioning.
* **PropelAuth** to ensure secure access to sensitive patient data via facial recognition and RFID.
* **Flask backend** to process machine learning models and serve image classification results via API.
* **Mobile app** that visualizes gyroscope, sweat sensor, and hand movement trends.

## Challenges we ran into
One of the biggest challenges was ensuring that the AR overlay, tumor detection, and hand tracking happened in real time without latency. We had to optimize our models to ensure seamless performance in the fast-paced environment of an operating room. Integrating hardware components like the Arduino gyroscope and managing precise hand tracking also posed challenges, as did creating a user-friendly interface that was informative without being overwhelming during a surgery.

## Accomplishments that we're proud of
* Successfully implementing real-time AR overlays that provide surgeons with critical information at a glance.
* Developing a machine learning model that accurately classifies tumors and detects foreign objects.
* Integrating hardware sensors (gyroscope, sweat sensors) to provide surgeons with hand movement insights, enhancing precision during surgeries.
* Ensuring patient data security through advanced authentication measures like facial recognition and RFID.

## What we learned
We learned how to combine AR and machine learning into a cohesive solution that can operate in real time under intense conditions like surgery. We also gained experience in integrating hardware components, optimizing machine learning models for low latency, and handling large datasets like medical imaging. Furthermore, the system could spare OR nurses and surgeons the intense radiation exposure from the Medtronic devices, exposure that can prevent them from continuing an operation. Additionally, building an intuitive, non-intrusive interface for surgeons highlighted the importance of user-centered design in healthcare applications.
## What's next for SurgeVue
Next, we plan to:
* **Refine the machine learning model**: Enhance tumor classification accuracy and expand it to detect other conditions and anomalies.
* **Clinical trials**: Test *SurgeVue* in real-world surgical settings and gather feedback from neurosurgeons.
* **Tool tracking**: Further refine the hand tracking and integrate more advanced surgical tools into the AR environment.
* **Global expansion**: Implement support for other AR platforms like HoloLens and explore expanding the use of the system to other complex surgeries beyond neurosurgery.
* **3D implementation**: Create 3D models of the brain for the surgeon to interact with in real time.
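At its core, the AR outline step is contour extraction over the model's predicted mask. A minimal OpenCV sketch of that overlay (the mask itself comes from the PyTorch classifier, which is abstracted away here):

```python
import cv2
import numpy as np

def outline_tumor(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Draw the predicted tumor region as a red outline on the camera frame.

    frame: BGR image; mask: single-channel uint8 0/255 prediction, same size.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, (0, 0, 255), thickness=2)
    return frame
```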
## Inspiration
How does a first-time freshman hacker compete with teams possessing several or even dozens of hackathons under their belt? Our team's response: buzzwords, all of them. With a diverse team of majors ranging from Mechanical Engineering to International and Public Affairs, we each possessed distinct skillsets. Laughing about buzzwords brought our team together, and after some introspection, we realized that behind all the fanfare and satire there were real hidden opportunities. All of us had previously tinkered with Bitcoin and other cryptocurrencies, and as millennials we're well aware of the power of social media and big data. Our project comes from this intersection of our personal experiences and the hidden opportunities within buzzwords.

## What it does
Leverages social media analytics for forecasting cryptocurrency trends, and provides an efficient, automated trading algorithm. Synthesizes moving-averages analysis, recent Twitter data, and a modified relative strength index to provide a robust strategy for algorithmic trading. Uses sentiment analysis to parse relevant tweets, extract consumer sentiment, and predict its influence on highly volatile cryptocurrency markets.

## How I built it
Frontend built with Adobe Dreamweaver in HTML5/CSS3, JavaScript, jQuery, and Bootstrap. Backend written in JavaScript and Python. Leveraged the Twitter REST API and Python's tweepy API for scraping social media data, the Quandl and Alpha Vantage APIs for collecting historical financial data, the TraderView API and Rickshaw.js for data visualization, nltk for natural language processing, and pandas and numpy for data analysis.

## Challenges I ran into
The Twitter API was very tricky to use: rate limits made large-scale data analysis difficult and time-consuming. Error handling and optimizing our trading algorithm consumed the bulk of our time.

## Accomplishments that I'm proud of
Simulated on historical cryptocurrency data, our strategy yields a 562.60% return on investment for Bitcoin over the past 20 days (compared to the 486.8% ROI achieved by holding). Our Naive Bayes classifier was trained on a dataset of 30,000 tweets and achieved a 24847.01% ROI between August 2016 and September 2016 (compared to the 19.4% ROI achieved by holding).

## What I learned
Learned large-scale data analysis and financial analytics. Became familiar with the Twitter REST and Streaming APIs and realtime data collection.

## What's next for Everest Futures
After finalizing risk management systems for cryptocurrency trading, we would like to apply our method to other financial instruments that are similarly influenced by general sentiment. We also plan to further develop our trading algorithm and seek investment and mentorship.
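At its simplest, the synthesis described above blends a moving-average signal with the tweet-derived sentiment score. A toy sketch of that blend (the weights and thresholds here are illustrative, not our tuned parameters, and the real strategy also folds in the modified RSI):

```python
import numpy as np

def trade_signal(prices, sentiment: float, short=5, long=20, w=0.3) -> str:
    """prices: recent closes; sentiment: classifier score scaled to [-1, 1]."""
    p = np.asarray(prices, dtype=float)
    crossover = np.sign(p[-short:].mean() - p[-long:].mean())  # MA crossover
    score = (1 - w) * crossover + w * sentiment                # sentiment blend
    return "buy" if score > 0.2 else ("sell" if score < -0.2 else "hold")
```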
winning
## Inspiration
Our team understands the importance of pursuing and, ultimately, living your passion. Spending time at the job you desire will eventually have it 'work' for you, as you are never wasting your energy, but rather, enjoying yourself. Hence, our slogan. That is exactly why we decided to create a website that utilizes the Myers-Briggs Type Indicator (MBTI) personality types alongside the OCEAN model, giving quiz takers a broad discipline and several career examples best suited to their character.

## What it does
Our website, 16Pathways, evaluates the visitor's MBTI personality type with a series of questions. They are then provided with a single career pathway and a handful of professions linked to the aforementioned type. It is important to keep in mind that, because the MBTI is a conceptual THEORY, the given results are in no way precise. Rather, they give the quiz taker some possibilities to consider moving forward.

## How we built it
We built a couple of website layouts from scratch using HTML and CSS (frontend). Meanwhile, the quiz and its response system were built with Python (backend).

## Challenges we ran into
It became very clear, very quickly, that we were bound to face many, MANY challenges throughout the making of 16Pathways. Whether it was coming up with a memorable name and slogan for the project or coding a working backend for the quiz, with 3 first-time hackathoners on the team, mistakes became a very habitual thing for us. Not knowing HTML or CSS prior to the hackathon proved very time-consuming as well, but a rewarding challenge nonetheless. We also had trouble linking the frontend with the backend, that is, processing the quiz results and sending them over to the backend. At first, we thought about using a text field, but after getting some help from a mentor (thank you Tanvir!), we learned how to properly read the values of checkboxes (see the sketch below).

## Accomplishments that we're proud of
For us, submitting a fully functioning project on time was a huge accomplishment on its own merits. Add to this the fact that we had to learn several programming languages within a very strict time frame, and that Hack the 6ix 2021 was, for 3 out of 4 of us, our very first hackathon. Mastering the coding software used (i.e. Visual Studio Code and GitHub) and applying the field of psychology to web development were also accomplishments in our eyes.

## What we learned
Without question, this project has given the team a higher degree of familiarity with website structure and the building process. The opportunity to work in a group setting has also inevitably improved our communication and teamwork skills. We also finally realize the limitless potential of web development and technology and the many unexpected ways it can be used to make the world a better place.

## What's next for 16Pathways
We are definitely looking to expand 16Pathways and improve the accuracy of its results by collaborating with both professional psychologists and web developers alike. We believe their advice and support can bring more personalized feedback to quiz participants, which, in turn, can launch our project to mainstream spaces.
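That checkbox lesson is worth spelling out. A minimal Flask sketch of reading quiz checkboxes and tallying one MBTI axis (the route and field names are illustrative, not our exact ones):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/result", methods=["POST"])
def result():
    # Checkboxes arrive as repeated form fields; getlist collects them all.
    answers = request.form.getlist("answer")
    # Tally one MBTI axis: more 'E' answers than 'I' answers means Extravert.
    e = sum(a == "E" for a in answers)
    i = sum(a == "I" for a in answers)
    return "E" if e >= i else "I"
```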
## Inspiration
Looking around you in your day-to-day life, you see so many people eating so much food. Trust me, this is going somewhere. All that stuff we put in our bodies, what is it? What are all those ingredients that seem more like chemicals that belong in nuclear missiles than in your 3-year-old cousin's Coke? Answering those questions is what we set out to accomplish with this project. But answering a question doesn't mean anything if you don't answer it well, meaning your answer raises as many or more questions than it answers. We wanted everyone, from pre-teens to senior citizens, to be able to understand it. So, in summary: we wanted to give all these lazy couch potatoes (us included) an easy, efficient, and most importantly, comprehensible method of knowing what it is exactly that we're consuming by the metric ton on a daily basis.

## What it does
Our program takes input in the form of either text or an image, and uses it as input for an API from which we extract our final output using specific prompts. Some of our outputs are the nutritional values, a nutritional summary, the amount of exercise required to burn off the calories gained from the meal, (its recipe), and its health in comparison to other foods.

## How we built it
Using Flask, HTML, CSS, and Python for the backend.

## Challenges we ran into
We are all first-timers, so none of us had any idea how the whole thing worked, and individually we all faced our fair share of struggles with our food, our sleep schedules, and our timidness, which led to miscommunication.

## Accomplishments that we're proud of
Making it through the week and keeping our love of tech intact. Other than that, we really did meet some amazing people and got to know so many cool folks. As a collective group, we are proud of our teamwork and ability to compromise, work with each other, and build on each other's ideas. For example, we all started off with different ideas and different goals for the hackathon, but we all managed to find a project we liked and found it in ourselves to bring it to life.

## What we learned
How hackathons work and what they are. We also learned so much more about building projects within a small team, and what should be done when our scope of what to build was so wide.

## What's next for NutriScan
- Working ML
- Use of the camera as an input to the program
- Better UI
- Responsiveness
- Release
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!

## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. After the client chooses a habit to work on, the app brings them to a dashboard where they can monitor their weekly progress on the task. Once the week is over, the app declares whether the client successfully beat the mission; if they did, they get rewarded with points which they can exchange for RBC loyalty points!

## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication and a cloud database to keep track of users. For our user and transaction data, as well as making/managing loyalty points, we used the RBC API.

## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Beyond API integration, working without any sleep was definitely the hardest part!

## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!). The biggest reward from this hackathon is the new friends we've found in each other :)

## What we learned
Each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).

## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
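The weekly goal generation is essentially one Cohere call. A rough sketch assuming the classic `generate` endpoint of Cohere's Python SDK (the prompt wording is illustrative, not our production prompt):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

def weekly_goal(habit: str) -> str:
    # Ask the model for one concrete, measurable one-week savings mission.
    prompt = (f"A banking client overspends on {habit}. "
              "Suggest one specific, measurable savings goal for the next week.")
    response = co.generate(prompt=prompt, max_tokens=60)
    return response.generations[0].text.strip()
```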
losing
## Inspiration
The lack of quality education in third-world countries.

## What it does
Connects bright students to the best professors.

## How we built it
React for the front-end and Python/Flask for the backend, with AssemblyAI for transcribing the calls and WebRTC for communication.

## Challenges we ran into
Technical challenges around the APIs, and team communication.

## Accomplishments that we're proud of
Building a smart solution for students who crave education.

## What we learned
AssemblyAI and WebRTC.

## What's next for !Learn
Scaling the application.
## Inspiration
As we all know, the world has come to a halt in the last couple of years. Our motive behind this project was to help people come out of their shells and express themselves. Connecting various people around the world and making them feel that they are not the only ones fighting this battle was our main objective.

## What it does
People share their thoughts by speaking, and our application identifies the problem the speaker is facing and connects them to a specialist in that particular domain so that their problem can be resolved. There is also a **Group Chat** option available where people facing similar issues can discuss their problems among themselves. For example, if our application identifies that the topics spoken about by the speaker are related to mental health, then it connects them to a specialist in the mental health field, and the user also has the option to join a group discussion with other people who are discussing mental health.

## How we built it
The front-end of the project was built using HTML, CSS, JavaScript, and Bootstrap. The back-end was written exclusively in Python and developed using the Django framework. We integrated the **Assembly AI** transcription module, built using Assembly AI's API, into our back-end (a sketch of the flow is below), and were successful in creating a fully functional web application within 36 hours.

## Challenges we ran into
The first challenge was to understand how Assembly AI works. None of us had used it before, and it took us time to understand it. Integrating the audio part into our application was also a major challenge. Apart from Assembly AI, we also faced issues while connecting our front-end to the back-end. Thanks to the internet and the mentors of **HackHarvard**, especially the **Assembly AI mentors**, who were very supportive and helped us resolve our errors.

## Accomplishments that we're proud of
Firstly, we are proud of creating a fully functional application within 36 hours, taking into consideration all the setbacks we had. We are also proud of building an application from which society can benefit. Finally, and mainly, we are proud of exploring and learning new things, which is the very reason for hackathons.

## What we learned
We learned how working as a team can do wonders. Working under a time constraint can be a really challenging task; aspects such as time management, working under pressure, a never-give-up attitude, and solving errors we had never come across are some of the few but very important things we were successful in learning.
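For anyone integrating Assembly AI the same way, the flow is: upload the audio, request a transcript, then poll. A condensed sketch against the public v2 REST API (topic detection via `iab_categories` is one way to route speakers to the right specialist; exact result handling is simplified):

```python
import time
import requests

API = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": "YOUR_API_KEY"}

def transcribe(file_path: str) -> dict:
    # 1) Upload the raw audio file.
    with open(file_path, "rb") as f:
        upload_url = requests.post(f"{API}/upload", headers=HEADERS,
                                   data=f).json()["upload_url"]
    # 2) Request a transcript with topic detection enabled.
    job = requests.post(f"{API}/transcript", headers=HEADERS,
                        json={"audio_url": upload_url,
                              "iab_categories": True}).json()
    # 3) Poll until the job finishes.
    while True:
        result = requests.get(f"{API}/transcript/{job['id']}",
                              headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            return result  # text plus topic labels for specialist routing
        time.sleep(3)
```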
## 💫 Inspiration
Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally. We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours. We present to you.... **Locall!**

## 🏘 What it does
Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours!
For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help! By using Locall, she's saving money on the fees that the big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours.

## 🛠 How we built it
We first prototyped our app design using Figma, and then moved on to Flutter for the actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase.

## 🦒 What we learned
Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app!

## 📱 What's next for Locall
* We would want to train a TensorFlow model to better recommend services to users, as well as improve the user experience
* Implementing chat and payment directly in the app would help improve requests and offers of services
losing
## Our Inspiration for Quadrant.
Our group first discussed an alternative to headphones, as they can typically become cumbersome and annoying. However, we realized that while solving a comfort issue, we could also solve an accessibility issue for people who play video games and may have impaired hearing. Approximately 360+ million people worldwide experience hearing loss at a mild to profound level, according to the World Health Organization (2016). From our discussions and the research we came across, we began to understand the issue's importance and concluded that we wanted our project to serve a greater purpose.

## What Quadrant. does
Those with hearing loss can be at a severe disadvantage with regard to interacting with media, especially in the world of competitive gaming. However, by developing a technology that allows the player to use a system of visual cues, not only would hearing-impaired players feel more included, but their ability to play and perform would be elevated as well. This technology, in essence, is something that we have developed called **Quadrant.**. It is an accessibility tool that takes in game audio and outputs it as a visual spectrum, creating greater user awareness and object/player detection during gameplay. The technology interfaces with LEDs using the microcontroller, while a Python sound-capturing background process (pyaudio) processes and serializes data (using numpy) to be sent over a USB wire. The Arduino calculates which light to turn on from the sent data (see the sketch after this writeup).

## What we used to build Quadrant.
The implementation and execution of our idea began with an Arduino microcontroller as the primary component of our design. Using the Arduino, our team successfully assembled a uniform collection of LEDs strategically positioned on a breadboard, which serves as the hardware. We complemented the hardware aspect of our product with an embedded software program written in Python, developing serial communication between our program and the Arduino.

## Challenges our team encountered
The first challenge we came across was recording the audio. Our team first attempted this by dissecting an AUX cable and using it as the input receiver for the audio. However, after dissecting the AUX cable, we realized that the actual wire was too thin to use as a receiver, and it thus became unusable for our project. Additionally, the alligator clamps that we attempted to use kept attaching to the wire casing, as opposed to the wire itself, due to its aforementioned thinness and lack of girth.

## Accomplishments our team is proud of
Although we were confronted with several hardware and software problems and errors, we were more than willing and able to find group resiliency and determination. One achievement that we are particularly proud of is the presentation video, which took a three-way group effort to accomplish. On the creation side, since we knew nothing about audio sampling and processing, finally getting our own functioning algorithm to find the "dominant side" to work was a great achievement.

## Next steps for Quadrant.
For **Quadrant.** and our team, we would be elated to improve and expand upon our product for the hearing impaired, from a technological as well as a business perspective, in order to reach as many of those who would find it helpful and effective as possible!
One idea is to make the lights more accessible by using some kind of mount (like glasses). Another is to use a more advanced algorithm that can pick out certain sounds, improving the overall precision of the device.
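The capture loop itself is compact: read a stereo chunk, compare channel energy, tell the Arduino which side to light. A simplified sketch of our background process (the serial port name and the one-byte protocol are illustrative):

```python
import numpy as np
import pyaudio
import serial

CHUNK, RATE = 1024, 44100
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=2, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)
arduino = serial.Serial("/dev/ttyUSB0", 9600)  # adjust the port per machine

while True:
    samples = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    left, right = samples[0::2], samples[1::2]   # interleaved stereo samples
    balance = np.abs(left).mean() - np.abs(right).mean()
    # One byte tells the Arduino which side is "dominant" and should light up.
    arduino.write(b"L" if balance > 0 else b"R")
```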
Welcome to our project, RepMe! We are a team of four university students studying Computer Science and Mathematics at the University of Waterloo. We created RepMe as a solution for those looking to track their exercise progress and stay motivated. Our app allows users to easily input and record the number of reps for various exercises, providing a simple and convenient way to track progress over time. Our team is dedicated to providing a high-quality and user-friendly experience for all of our users, and we are constantly working to improve and update the app to ensure that it meets their needs.

## Inspiration
As students at the University of Waterloo, we have noticed that many of our peers weren't getting the exercise they needed. Many students find it difficult to prioritize exercise amidst the demands of their studies, and an app that helps them keep track of their progress and goals could be a useful tool. Additionally, regular exercise has been shown to have many benefits, including increased productivity and improved overall health and well-being.

## What it does
RepMe uses the camera on a user's device to keep track of reps for the exercise chosen by the user. It is capable of automatically counting reps with the help of ML, as well as providing a workout customized by skill level.

## How we built it
We built this web app using a Python backend, where we implemented the computer vision portion of the project, and React.js for the frontend.

## Challenges we ran into
Implementing the server and connecting the backend to the frontend was the biggest challenge, and took up 90% of our time. Getting the webcam feed into the frontend was also challenging.

## Accomplishments that we're proud of
Although we went through many miscalculations and frustrations, we are most proud of implementing the algorithm that counts the reps. We used OpenCV and MediaPipe to algebraically calculate the movement of the user (see the sketch below).

## What we learned
We learned the fundamentals of OpenCV and its uses in real life.

## What's next for RepMe
Add a stats page, where users can review and analyze their rep stats and see how much they've improved. We also wish to add functionality such as a reminder that notifies the user if they haven't logged in to the app for a few days. Finally, to improve user productivity, we wish to add scheduling functionality. Overall, our goal is to continue to improve and develop the app so that it meets the needs of all of our users and helps them stay motivated and on track with their fitness goals.
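The rep-counting idea is joint-angle math: compute the elbow angle from three pose landmarks and count a rep on each full flex-extend cycle. A condensed sketch with MediaPipe (the angle thresholds are illustrative for a bicep curl, not our tuned values):

```python
import cv2
import mediapipe as mp
import numpy as np

def angle(a, b, c) -> float:
    """Angle at joint b (degrees), given three (x, y) landmark points."""
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos, -1, 1))))

pose = mp.solutions.pose.Pose()
cap, reps, down = cv2.VideoCapture(0), 0, False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.pose_landmarks:
        lm = res.pose_landmarks.landmark
        P = mp.solutions.pose.PoseLandmark
        elbow = angle((lm[P.LEFT_SHOULDER].x, lm[P.LEFT_SHOULDER].y),
                      (lm[P.LEFT_ELBOW].x, lm[P.LEFT_ELBOW].y),
                      (lm[P.LEFT_WRIST].x, lm[P.LEFT_WRIST].y))
        if elbow < 60:                 # arm fully flexed
            down = True
        elif elbow > 150 and down:     # full extension after a flex: one rep
            reps, down = reps + 1, False
```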
## What it does
Think "virtual vision stick on steroids"! It is a wearable device that AUDIBLY provides visually impaired people with information on the objects in front of them, as well as their proximity.

## How we built it
We used computer vision with Python and OpenCV to recognize objects such as "chair" and "person", and we used an Arduino to interface with an ultrasonic sensor to receive distance data in REAL TIME. On top of that, the sensor was mounted on a servo motor connected to a joystick, so the user can control where the sensor scans within their field of vision.

## Challenges we ran into
The biggest challenge we ran into was integrating the ultrasonic sensor data from the Arduino with the OpenCV live object detection data, because we had to grab data from the Arduino (the code is in C++) and use it in our OpenCV program (written in Python). We solved this by using PySerial (see the sketch below) and calling our friends Phoebe, Simon, Ryan, and Olivia from the Anti Anti Masker Mask project for help!

## Accomplishments that we're proud of
Using hardware and computer vision for the first time!

## What we learned
How to interface with hardware, work as a team, and be flexible (we changed our idea and mechanisms like 5 times).

## What's next for All Eyez On Me
Refine our design so it's more STYLISH :D
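On the Python side, the PySerial bridge is only a few lines once the Arduino sketch prints one distance per line (the port name and line format are illustrative):

```python
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # adjust per machine

def read_distance_cm():
    # The Arduino prints one ultrasonic reading per line, e.g. b"123\n".
    line = arduino.readline().decode(errors="ignore").strip()
    return float(line) if line else None
```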
losing
## Inspiration
What if I want to take an audio tour of a national park or a university campus on my own time? What if I want to take an audio tour of a place that doesn't even offer audio tours? With Toor, we are able to harness people's passion for the places they love to serve the curiosity of our users.

## What it does
We enable users to submit their own audio tours of the places they love, and we allow them to listen to other users' submissions as well. Users can also elect to receive a text alert when a new audio tour has been uploaded for a specific location.

## How we built it
We built the front-end using React, and the back-end with multiple REST API endpoints using Flask. Flask uses SQLAlchemy, an ORM, to submit records to and query data from the SQLite3 database. The audio files are stored in a Google Firebase database, and the front end is also hosted on Firebase.

## Challenges we ran into
Enabling users to listen to audio without having to repeatedly download the files was our first major obstacle. With some research we found that either an AWS S3 bucket or a Google Firebase database would solve our problem. After permission issues with the AWS S3 bucket, we decided that Google Firebase was the more apt solution.

## Accomplishments that we're proud of
Enabling audio streaming was a big win for us. We are also proud of our team synergy and how quickly we got things done. We are also proud that we applied a lot of the things we learned from our internships this summer.

## What we learned
* Audio streaming and audio file upload
* Building an audio player in React
* Thinking about a minimal viable product
* Flask
* Soft skills, such as interpersonal communication with fellow hackers

## What's next for Toor
Adding the ability to comment on an audio tour, expanding the scope beyond college campuses, and using Google Cloud Platform's Speech-to-Text and NLP to filter out "bad" comments and words in audio files.
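A trimmed sketch of the tour-lookup endpoint, showing how the SQLAlchemy records point at audio files living in Firebase (the model fields and route are illustrative, not our exact schema):

```python
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///toor.db"
db = SQLAlchemy(app)

class Tour(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    location = db.Column(db.String(120), index=True)
    audio_url = db.Column(db.String(500))  # points at the file in Firebase

@app.route("/tours/<location>")
def tours(location):
    # The client streams each audio_url directly, so nothing is re-downloaded.
    rows = Tour.query.filter_by(location=location).all()
    return jsonify([{"id": t.id, "audio_url": t.audio_url} for t in rows])
```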
## Inspiration
We were inspired by hard-working teachers and students. Although everyone was working hard, there was still a disconnect: many students were not able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.

## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and can also give students a few warm-up exercises with a built-in clicker functionality. The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture that includes key points. The backend also generates further reading material based on keywords from the lecture, which further solidifies the students' understanding of the material.

## How we built it
We built the mobile portion using React Native for the front end and Firebase for the back end. The web app is built with React for the front end and Firebase for the back end. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.

## Challenges we ran into
One major challenge was capturing and processing live audio and serving a real-time transcription of it to all students enrolled in the class. We solved this with a Python script that bridges the gap between opening an audio stream and operating on it, while still serving the student a live version of the rest of the site.

## Accomplishments that we're proud of
Being able to process text data to the point that we could extract a summary and information on tone/emotions from it. We are also extremely proud of the

## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially with many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding: everyone is on the same page about what is going on, and everything that needs to be done is made very evident. We used APIs such as the Google Speech-to-Text API and a summary API, and worked around their constraints to create a working product. We also learned more about other technologies we used: Firebase, Adobe XD, React Native, and Python.

## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that automatically integrates with their native grading platform, so that clicker data and other quiz material can be instantly graded and imported without any issues. Beyond that, we see potential for Gradian in office scenarios as well, so that people never miss a beat thanks to live transcription.
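A minimal sketch of the kind of bridge script described in the challenges section, using the Google Cloud Speech-to-Text streaming API. The `audio_chunks` generator (microphone capture) and the fan-out of finished sentences to students are assumptions:

```python
# Stream microphone audio to Google Cloud Speech-to-Text and yield final sentences.
from google.cloud import speech

def transcribe_stream(audio_chunks):
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(config=config,
                                                         interim_results=True)
    requests = (speech.StreamingRecognizeRequest(audio_content=chunk)
                for chunk in audio_chunks)
    for response in client.streaming_recognize(streaming_config, requests):
        for result in response.results:
            if result.is_final:
                # Push the finished sentence to every enrolled student,
                # e.g. by writing it to Firebase for the web app to pick up.
                yield result.alternatives[0].transcript
```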
# Nexus, **Empowering Voices, Creating Connections**.

## Inspiration
The inspiration for our project, Nexus, comes from our experience as individuals with unique interests and challenges. Often, it isn't easy to meet others with these interests or who can relate to our challenges through traditional social media platforms. With Nexus, people can effortlessly meet and converse with others who share these common interests and challenges, creating a vibrant community of like-minded individuals. Our aim is to foster meaningful connections and empower our users to explore, engage, and grow together in a space that truly understands and values their uniqueness.

## What it Does
In Nexus, we empower our users to tailor their conversational experience. You have the flexibility to choose how you want to connect with others. Whether you prefer one-on-one interactions for more intimate conversations or want to participate in group discussions, Nexus has you covered. We allow users to either get matched with a single person, fostering deeper connections, or join one of the many voice chats to speak in a group setting, promoting diverse discussions and the opportunity to engage with a broader community. With Nexus, the power to connect is in your hands, and the choice is yours to make.

## How we built it
We built our application using a multitude of services, frameworks, and tools:

* React.js for the core client frontend
* TypeScript for robust typing and abstraction support
* Tailwind for a utility-first CSS framework
* DaisyUI for animations and UI components
* 100ms for real-time audio infrastructure and its client SDK
* Clerk for a seamless, drop-in OAuth provider
* React-icons for drop-in, pixel-perfect icons
* Vite for simplified building and a fast dev server
* Convex for the real-time server, vector search over our database, and end-to-end type safety
* React-router for client-side navigation
* MLH for our free .tech domain

## Challenges We Ran Into
* Navigating new services and needing to read **a lot** of documentation -- since this was the first time any of us had used Convex and 100ms, it took a lot of research and heads-down coding to get Nexus working.
* Being **awake** to work as a team -- since this hackathon is both **in-person** and **through the weekend**, we had many sleepless nights to ensure we could successfully produce Nexus.
* Working with **very** poor internet throughout the duration of the hackathon; we estimate it cost us multiple hours of development time.

## Accomplishments that we're proud of
* Finishing our project and getting it working! We were honestly surprised at our progress this weekend and are super proud of our end product, Nexus.
* Learning a ton of new technologies we would never have come across without Cal Hacks.
* Being able to code for at times 12-16 hours straight and still be having fun!
* Integrating 100ms well enough to experience bullet-proof audio communication.

## What we learned
* Tools are tools for a reason! Embrace them, learn from them, and utilize them to make your applications better.
* Sometimes, more sleep is better -- as humans, sleep can be the basis for our mental ability!
* How to work together on a team project with many commits and iterate fast on our moving parts.

## What's next for Nexus
* Make Nexus rooms open only at a cadence, ideally twice each day, formalizing the "meeting" aspect for users.
* Allow users to favorite or persist their favorite matches to possibly re-connect in the future.
* Create more options for users within rooms to interact with not just their own audio and voice but other users as well.
* Establish a more sophisticated and bullet-proof matchmaking service and algorithm.

## 🚀 Contributors 🚀

| | | | |
| --- | --- | --- | --- |
| [Jeff Huang](https://github.com/solderq35) | [Derek Williams](https://github.com/derek-williams00) | [Tom Nyuma](https://github.com/Nyumat) | [Sankalp Patil](https://github.com/Sankalpsp21) |
partial
# Healthy.ly
An Android app that can tell whether you are allergic to a food just by taking its picture. It can likewise show the food's health benefits, ingredients, and recipes.

## Inspiration
We are a group of students from India. The food provided here is completely new to us, and we don't know the ingredients. One of our teammates is dangerously allergic to seafood and has to take extra precautions while eating at new places. So we wanted to make an app that can detect whether a given food is an allergy risk using computer vision. We also got inspiration from the HBO show **Silicon Valley**, where a character tries to make a **Shazam for Food** app. Over time our idea grew bigger, and we added nutritional values and recipes to it.

## What it does
This is an Android app that uses computer vision to identify food items in a picture and shows you whether you are allergic to them by comparing the ingredients to the restrictions you provided earlier. It can also give the nutritional values and recipes for that food item.

## How we built it
We developed a deep learning model using **TensorFlow** that can classify between 101 different food items. We trained it using **Google Compute Engine** with 2 vCPUs, 7.5 GB RAM, and 2 Tesla K80 GPUs. This model can classify 101 food items with over 70% accuracy. From the predicted food item, we were able to get its ingredients and recipes from a rapidAPI API called "Recipe Puppy". We cross-validate the ingredients with the items the user is allergic to and tell them if it's safe to consume.

We made a native **Android application** that lets you take an image and upload it to **Google Storage**. The Python backend runs on **Google App Engine**. The web app takes the image from Google Storage and, using **TensorFlow Serving**, finds the class of the given image (the food name). It uses the name to get the food's ingredients, nutritional values, and recipes, and returns these values to the Android app via **Firebase**. The Android app then displays them to the user.

Since most of the heavy lifting happens in the cloud, our app is very light (7 MB) and **computationally efficient**. It does not need a lot of resources to run; it can even run on a cheap, underperforming Android phone without crashing.

## Challenges we ran into
> 1. We had trouble converting our TensorFlow model to tflite (tflite\_converter could not convert a multi\_gpu\_model to tflite). So we ended up hosting it on the cloud, which made the app lighter and computationally efficient.
> 2. We are all new to using Google Cloud, so it took us a long time to figure out even the basic stuff. Thanks to the GCP team, we were able to get our app up and running.
> 3. We couldn't get Google App Engine to support TensorFlow, so we hosted our web app on Google Compute Engine.
> 4. We did not have a UI/UX designer or a frontend developer on our team, so we had to learn basic frontend work and design the app ourselves.
> 5. We could only get around 70% validation accuracy due to the high computation needs and limited time.
> 6. We were using an API from rapidAPI, but since yesterday they stopped supporting it and it wasn't working. So we had to make our own database to run our app.
> 7. We couldn't use AutoML for vision classification because our dataset was too large to upload.

## What we learned
Before coming to this hack, we had no idea about using cloud infrastructure like Google Cloud Platform.
In this hack, we learned a lot about using Google Cloud Platform and came to understand its benefits. We are pretty comfortable using it now. Since we didn't have a frontend developer, we had to learn frontend development to build our app. Making this project gave us a lot of exposure to **Deep Learning**, **Computer Vision**, **Android app development**, and **Google Cloud Platform**.

## What's next for Healthy.ly
1. We are planning to integrate the **Google Fit** API so that we can compare the number of calories consumed against the number burnt, giving better insight to the user. We couldn't do it now due to time constraints.
2. We are planning to integrate **Augmented Reality** to make the app predict in real time and look better.
3. We want to improve the **User Interface** and **User Experience** of the app.
4. Spend more time training the model and **increase the accuracy**.
5. Increase the **number of labels** of food items.
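A minimal sketch of the ingredient cross-check described above: compare the ingredients returned for the predicted dish against the user's stored restrictions. The data values are illustrative:

```python
# Cross-validate a dish's ingredients against the user's allergy restrictions.
def allergy_check(ingredients, restrictions):
    """Return the ingredients the user must avoid (empty list means safe)."""
    restricted = {r.lower() for r in restrictions}
    return [i for i in ingredients
            if any(r in i.lower() for r in restricted)]

user_restrictions = ["shrimp", "peanut", "shellfish"]
dish_ingredients = ["rice", "shrimp paste", "garlic", "fish sauce"]

conflicts = allergy_check(dish_ingredients, user_restrictions)
print("Unsafe:" if conflicts else "Safe to eat!", conflicts)
# -> Unsafe: ['shrimp paste']
```

Substring matching catches compound ingredients like "shrimp paste" that an exact-match check would miss.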
## Inspiration
We were inspired by issues we had ourselves during COVID-19: we learned a ton about tech and coding and wanted to share our passion with others in software development, but hit a wall in finding others like ourselves. We were all too familiar with recruiting someone for a hackathon only to have them ghost the team, leaving us scrambling last minute, or finding others to work on an open-source project with and then making all of the contributions ourselves. Simply finding someone with similar interests was difficult: you could go to hackathons, but what if you want to work more long-term? Enter Open4Collab, our solution to this problem.

COVID-19 has taught many people new skills, but at the same time made it apparent that finding dedicated collaborators on projects is difficult. Current social media is wonderful for meeting new people, but so dependent on first impressions: are they from your hometown, and do they have a pretty face? Open4Collab takes the first impressions out of meeting a new person and focuses on what really counts: their projects. Whether it's learning together, building the next big startup, or looking for developers to start an open-source project, Open4Collab uses machine learning to cluster you with similar projects.

## What it does
Open4Collab is a platform that, unlike current social media, allows for more active collaboration, which in turn creates a lot more engagement. It does this by using the skills and interests you've listed to cluster you with projects you would want to work on. It also works the other way: you can create a project that requires people with certain skills and get those people! Our model is easy to understand; it's not a black box like those seen on so many sites today, so you know what data you're giving. We want you to be aware that your skills are used with a k-means model to find projects that fit your specific interests and skill set. To understand which technologies are related, we downloaded all of StackOverflow's tags to find the correlation between them: if two technologies had questions tagged together, they were more similar and should be matched together. Based on this, we generate a list of suitable projects for you, whose owners you can then contact. We believe social media should engage with people's lives in a positive way, and it can do that by being a simple and transparent tool that encourages people to collaborate and connect.

## How we built it
For a chance at the @ Company prize, Open4Collab was built using the Flutter UI framework, the @ platform, and Firebase. The @ platform was used to give everyone a unique sign. Cloud Firestore was used to store project data and handle real-time updates, ensuring that our application would scale and stay responsive even with massive amounts of project data. The model used correlations between StackOverflow tags: if a question contained tags of two technologies, they were deemed similar. Each set of technologies specified by the user was given a similarity score, which was then minimized to give a more relevant suggestions page. This was deployed as a Flask API through App Engine.

## Challenges we ran into
Using the @ protocol and a service like Firebase together while ensuring that the @ company's beliefs are still respected. We planned the app so that user data is not stored on the cloud and is instead managed using the @ platform; however, relevant data used for the clustering on GCP still has to be stored there.
What we would have preferred was a solution utilizing GCP Cloud Functions tied to the @ platform in a permission-based manner, but we could not find support for this in the limited time.

## Accomplishments that we're proud of
Integrating the @ platform with our project was difficult, but we kept working on it even after recording our demo and eventually got it working (see project media). We're also proud of successfully setting up a Flutter UI and connecting Firebase. Most of us were not familiar with Flutter, but we now have a better understanding of its purpose and why it's growing in popularity.

## What we learned
We learned about Flutter and gained a better understanding of the Google Cloud Platform.

## What's next for Open4Collab - Social Media for Developers
We'll improve the UI and add more features based on user feedback. When we started, we set out to make a platform that makes it possible to find other dedicated users. We would love to add a feedback/rating system to facilitate this.
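A minimal sketch of the matching idea behind Open4Collab's model: score technology similarity by StackOverflow tag co-occurrence, then cluster user and project profiles over those scores with k-means. The tag set and counts are illustrative:

```python
# Cluster users and projects by StackOverflow tag co-occurrence similarity.
import numpy as np
from sklearn.cluster import KMeans

tags = ["python", "flask", "react", "javascript"]
# co[i][j] = number of questions tagged with both tags[i] and tags[j]
co = np.array([[0, 900, 50, 400],
               [900, 0, 30, 150],
               [50, 30, 0, 2000],
               [400, 150, 2000, 0]], dtype=float)
similarity = co / co.max()  # normalize to [0, 1]

def profile_vector(skills):
    """Average similarity of every tag to the skills a user/project lists."""
    idx = [tags.index(s) for s in skills if s in tags]
    return similarity[idx].mean(axis=0)

vectors = np.array([profile_vector(["python", "flask"]),    # a user
                    profile_vector(["react"]),               # a project
                    profile_vector(["javascript", "react"])])  # another project
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
print(labels)  # users and projects sharing a cluster get suggested together
```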
# Healthy.ly
An Android app that can tell whether you are allergic to a food just by taking its picture. It can likewise show the food's health benefits, ingredients, and recipes.

## Inspiration
We are a group of students from India. The food provided here is completely new to us, and we don't know the ingredients. One of our teammates is dangerously allergic to seafood and has to take extra precautions while eating at new places. So we wanted to make an app that can detect whether a given food is an allergy risk using computer vision. We also got inspiration from the HBO show **Silicon Valley**, where a character tries to make a **Shazam for Food** app. Over time our idea grew bigger, and we added nutritional value and recipes to it.

## What it does
This is an Android app that uses computer vision to identify the food item in a picture and shows you whether you are allergic to it by comparing the ingredients to the restrictions you provided earlier. It can also give the nutritional values and recipes for that food item.

## How we built it
We used the **Google Vision API** to predict the food item. We also made an alternate TensorFlow model to predict the food: a deep learning model, developed using **TensorFlow**, that can classify between 101 different food items. We trained it using **Google Compute Engine** with 2 vCPUs, 7.5 GB RAM, and 2 Tesla K80 GPUs. This model can classify 101 food items with over 70% accuracy. From the predicted food item, we were able to get its ingredients and recipes from a rapidAPI API called "Recipe Puppy". We cross-validate the ingredients with the items the user is allergic to and tell them if it's safe to consume.

We made a native **Android application** that lets you take an image and upload it to **Google Storage**. The Python backend runs on **Google App Engine**. The web app takes the image from Google Storage and, using **TensorFlow Serving**, finds the class of the given image (the food name). It uses the name to get the food's ingredients, nutritional values, and recipes, and returns these values to the Android app via **Firebase**. The Android app then displays them to the user.

Since most of the heavy lifting happens in the cloud, our app is very light (7 MB) and **computationally efficient**. It does not need a lot of resources to run; it can even run on a cheap, underperforming Android phone without crashing.

## Challenges we ran into
> 1. We had trouble converting our TensorFlow model to tflite (tflite\_converter could not convert a multi\_gpu\_model to tflite). So we ended up hosting it on the cloud, which made the app lighter and computationally efficient.
> 2. We are all new to using Google Cloud, so it took us a long time to figure out even the basic stuff. Thanks to the GCP team, we were able to get our app up and running.
> 3. We couldn't get Google App Engine to support TensorFlow, so we hosted our web app on Google Compute Engine.
> 4. We did not have a UI/UX designer or a frontend developer on our team, so we had to learn basic frontend work and design the app ourselves.
> 5. We could only get around 70% validation accuracy due to the high computation needs and limited time.
> 6. We were using an API from rapidAPI, but since yesterday they stopped supporting it and it wasn't working. So we had to make our own database to run our app.
> 7. We couldn't use AutoML for vision classification because our dataset was too large to upload.
## What we learned
Before coming to this hack, we had no idea about using cloud infrastructure like Google Cloud Platform. In this hack, we learned a lot about using Google Cloud Platform and came to understand its benefits. We are pretty comfortable using it now. Since we didn't have a frontend developer, we had to learn frontend development to build our app. Making this project gave us a lot of exposure to **Deep Learning**, **Computer Vision**, **Android app development**, and **Google Cloud Platform**.

## What's next for Healthy.ly
1. We are planning to integrate the **Google Fit** API so that we can compare the number of calories consumed against the number burnt, giving better insight to the user. We couldn't do it now due to time constraints.
2. We are planning to integrate **Augmented Reality** to make the app predict in real time and look better.
3. We want to improve the **User Interface** and **User Experience** of the app.
4. Spend more time training the model and **increase the accuracy**.
5. Increase the **number of labels** of food items.
partial
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts, and finding the motivation to do the same alone can be difficult.

## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and all your developer friends can come together to learn, code, and brainstorm.

## About
Our platform provides a simple yet efficient user experience with a straightforward, easy-to-use one-page interface. We made it one page to give access to all the tools on one screen and make transitions between them easier. We identify this page as a study room where users can collaborate, joinable with a simple URL. Everything is synced between users in real time.

## Features
Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This makes collaboration between users seamless and pushes them to become better developers.

## Technologies you used for both the front and back end
We use Node.js and Express for the backend and React on the front end, with Socket.IO establishing bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.

## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions, and we realized communication was key for us to succeed in building our project under a time constraint. We also ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.

## What's next for Study Buddy
While we were working on this project, we came across several ideas this could be a part of. Our next step is to have each page categorized as an individual room that users can visit; to add more relevant tools, widgets, and work fields to increase our user demographic; and to include interface customization options that let users personalize their rooms.

Try it live here: <http://35.203.169.42/>

Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>

Thanks for checking us out!
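The team built this relay in Node with Socket.IO, but the same rebroadcast pattern is easy to sketch in Python with python-socketio; the event and field names here are illustrative, not Study Buddy's actual protocol:

```python
# Relay whiteboard strokes to everyone else in a room, skipping the sender.
import socketio  # pip install python-socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join(sid, data):
    sio.enter_room(sid, data["room"])

@sio.event
def draw(sid, data):
    # One broadcast per stroke segment; batching segments client-side keeps
    # the message rate low, which is the fix for the too-many-broadcasts issue.
    sio.emit("draw", data, room=data["room"], skip_sid=sid)
```

Skipping the sender's own socket (`skip_sid`) avoids echoing each stroke back to the client that drew it.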
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution.

## What it does
It helps developers find projects to work on and helps project leaders find group members. By using the data from GitHub commits, it can determine what kind of projects a person is suitable for.

## How we built it
We decided on building a web app, then chose a GraphQL, React, Redux tech stack.

## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble: the limit on API calls meant we couldn't get all the data we needed. Authentication was hard to implement, since we had to try a number of approaches before one worked. The last challenge was determining how to relate users to the projects they could be paired with.

## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database, and the authentication are all ready to show.

## What we learned
We learned that working with external APIs brings its own unique challenges.

## What's next for Hackr\_matchr
Scaling up is next: supporting more kinds of projects, with more robust matching algorithms and higher user capacity.
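A minimal sketch of rate-limit-aware GitHub fetching, the constraint that caused the team trouble above. The endpoint and headers are GitHub's documented v3 REST API; the choice to extract repo languages is an illustrative assumption:

```python
# Fetch a user's repos from the GitHub API, waiting out the rate limit if hit.
import time
import requests

def fetch_repo_languages(user, token=None):
    headers = {"Authorization": f"token {token}"} if token else {}
    resp = requests.get(f"https://api.github.com/users/{user}/repos",
                        headers=headers, params={"per_page": 100})
    if resp.status_code == 403 and resp.headers.get("X-RateLimit-Remaining") == "0":
        reset = int(resp.headers["X-RateLimit-Reset"])
        time.sleep(max(0, reset - time.time()))  # wait out the rate-limit window
        return fetch_repo_languages(user, token)
    resp.raise_for_status()
    return [repo["language"] for repo in resp.json() if repo["language"]]
```

Authenticated requests raise the hourly quota substantially, which is the usual first fix for the call-limit problem.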
# So Many Languages
A web application that converts one programming language's code to another within seconds, while also letting the user generate code using just logic.

## Inspiration
Our team consists of three developers, and we all realized we face the same problem: it's very hard to memorize every syntax, since each language has its own. This not only causes confusion but also takes up a lot of our time.

## What it does
So Many Languages has various features to motivate students to learn competitive coding while also making the process easier. SML helps you:

1) Save time
2) Convert between languages immediately
3) Enjoy one-of-its-kind language freedom
4) Use voice-to-code templating
5) Code accurately
6) Write programs by just knowing the logic (no need to remember syntax)
7) Take tests and practice while earning rewards

## How to run
```
1) git clone https://github.com/akshatvg/So-Many-Languages
2) pip install -r requirements.txt
3) python3 run.py
```

## How to use
1) Run the software as mentioned above.
2) Use the default page to upload code in one programming language and convert it into any of the other listed languages in the dropdown menu.
3) Use the Voice to Code Templating page to speak intents that get converted into code, e.g. "Open C++", "Show me how to print a statement", etc.
4) Use the Compete and Practice page to try out language-specific programs, test what you've learnt, compete against your peers, and earn points.
5) Use the Rewards page to redeem the earnt points.

## Its advantages
1) Run the code from the built-in compiler to get the desired result in the same place.
2) Easy to use and fast processing.
3) Save time scrolling through Google for different answers and syntaxes by having everything come up on its own in one single page.
4) Learn and earn at the same time through the Compete and Rewards pages.

## Target audience
Students: learning has no age, and developers need to keep learning to stay updated with trends.

## Business model
We intend to provide free code templating and conversion for common languages like C++, Python, and Java, and paid packs for exclusive languages like Swift, PHP, and JavaScript.

## Marketing strategy
1) For every referral, points are earned that go towards premium and exclusive language packs once enough are saved. These points can also be used to purchase schwag.
2) Schwag and discount benefits for Campus Ambassadors at different universities and colleges.

## How we built it
We built the assistive educative technology using:

1) HTML/CSS/JavaScript/Bootstrap (frontend web development),
2) Flask (backend web development),
3) IBM Watson (to gather the user's intent via NLU),
4) PHP, C++, Python (test programming languages).

## Challenges we ran into
Other than the jet lag we still have from travelling all the way from India, and hence the lack of sleep, we came across a few technical challenges too. Creating algorithms to convert PHP code wasn't easy at first, but we managed to pull it off in the end.

## Accomplishments that we're proud of
Creating a one-of-its-kind product:

1) We are the first educative technological assistant to help users learn and migrate between programming languages, while also giving them a platform to practice and test what they learnt using language-specific problems.
2) We also help users completely convert one language's code to another language's code accurately within seconds.

## What we learned
This was our team's first international hackathon.
We met hundreds of inspiring coders and developers who tried and tested our product and gave their views and suggestions, which we then tried implementing. We saw how other teams functioned and what we may have been doing wrong before. We also each learnt a technical skill for the project (Akshat learnt animations and the basics of Flask, Anand learnt to use IBM Watson to its greatest extent, and Sandeep learnt PHP just to implement it in this project).

## What's next for So Many Languages
We intend to add support for more programming languages as soon as possible, while also making sure any upcoming bugs are rectified.
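A toy illustration of the template idea behind SML's conversion and voice-to-code features: map one abstract operation ("print") onto per-language syntax templates. The real converter parses whole programs; these templates are illustrative only:

```python
# Map one abstract "print" intent onto several languages via syntax templates.
TEMPLATES = {
    "python": 'print({args})',
    "c++":    'std::cout << {args} << std::endl;',
    "java":   'System.out.println({args});',
    "php":    'echo {args};',
}

def emit_print(language, args):
    return TEMPLATES[language.lower()].format(args=args)

for lang in TEMPLATES:
    print(emit_print(lang, '"Hello, SML!"'))
```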
winning
## Inspiration
Our journey with PathSense began with a deeply personal connection. Several of us have visually impaired family members, and we've witnessed firsthand the challenges they face navigating indoor spaces. We realized that while outdoor navigation has seen remarkable advancements, indoor environments remained a complex puzzle for the visually impaired.

This gap in assistive technology sparked our imagination. We saw an opportunity to harness the power of AI, computer vision, and indoor mapping to create a solution that could profoundly impact lives. We envisioned a tool that would act as a constant companion, providing real-time guidance and environmental awareness in complex indoor settings, ultimately enhancing independence and mobility for visually impaired individuals.

## What it does
PathSense, our voice-centric indoor navigation assistant, is designed to be a game-changer for visually impaired individuals. At its heart, our system aims to enhance mobility and independence by providing accessible, spoken navigation guidance in indoor spaces. Our solution offers the following key features:

1. Voice-Controlled Interaction: hands-free operation through intuitive voice commands.
2. Real-Time Object Detection: continuous scanning and identification of objects and obstacles.
3. Scene Description: verbal descriptions of the surrounding environment to build mental maps.
4. Precise Indoor Routing: turn-by-turn navigation within buildings using indoor mapping technology.
5. Contextual Information: relevant details about nearby points of interest.
6. Adaptive Guidance: real-time updates based on user movement and environmental changes.

What sets PathSense apart is its adaptive nature. Our system continuously updates its guidance based on the user's movement and any changes in the environment, ensuring real-time accuracy. This dynamic approach allows for a more natural and responsive navigation experience, adapting to the user's pace and preferences as they move through complex indoor spaces.

## How we built it
In building PathSense, we embraced the challenge of integrating multiple cutting-edge technologies:

1. Voice Interaction: Voiceflow, which manages conversation flow, interprets user intents, and generates appropriate responses.
2. Computer Vision Pipeline: Detectron for object detection, DPT (Dense Prediction Transformer) for depth estimation, and GPT-4 Vision (mini) for scene analysis.
3. Data Management: a Convex database storing CV data and mapping information in JSON format.
4. Semantic Search: Cohere's Rerank API, performing semantic search over CV tags and mapping data.
5. Indoor Mapping: the MappedIn SDK, providing floor plan information and generating routes.
6. Speech Processing: a Groq model (based on OpenAI's Whisper) for speech-to-text, and Unreal Engine for text-to-speech.
7. Video Input: multiple TAPO cameras streaming 1080p video of the environment over Wi-Fi.

To tie it all together, we leveraged Cohere's Rerank API for semantic search, allowing us to find the most relevant information based on user queries. For speech processing, we chose a Groq model based on OpenAI's Whisper for transcription, and Unreal Engine for speech synthesis, prioritizing low latency for real-time interaction. The result is a seamless, responsive system that processes visual information, understands user requests, and provides spoken guidance in real time.

## Challenges we ran into
Our journey in developing PathSense was not without its hurdles.
One of our biggest challenges was integrating the various complex components of our system. Combining the computer vision pipeline, Voiceflow agent, and MappedIn SDK into a cohesive, real-time system required careful planning and countless hours of debugging. We often found ourselves navigating uncharted territory, pushing the boundaries of what these technologies could do when working in concert.

Another significant challenge was balancing the diverse skills and experience levels within our team. While our diversity brought valuable perspectives, it also required us to be intentional about task allocation and communication. We had to step out of our comfort zones, often learning new technologies on the fly. This steep learning curve, coupled with the pressure of working on parallel streams while ensuring all components meshed seamlessly, tested our problem-solving skills and teamwork to the limit.

## Accomplishments that we're proud of
Looking back at our journey, we're filled with a sense of pride and accomplishment. Perhaps our greatest achievement is creating an application with genuine, life-changing potential. Knowing that PathSense could significantly improve the lives of visually impaired individuals, including our own family members, gives our work profound meaning.

We're also incredibly proud of the technical feat we've accomplished. Successfully integrating numerous complex technologies - from AI and computer vision to voice processing - into a functional system within a short timeframe was no small task. Our ability to move from concept to a working prototype that demonstrates the real-world potential of AI-driven indoor navigation assistance is a testament to our team's creativity, technical skill, and determination.

## What we learned
Our work on PathSense has been an incredible learning experience. We've gained invaluable insights into the power of interdisciplinary collaboration, seeing firsthand how diverse skills and perspectives can come together to tackle complex problems. The process taught us the importance of rapid prototyping and iterative development, especially in a high-pressure environment like a hackathon.

Perhaps most importantly, we've learned the critical importance of user-centric design in developing assistive technology. Keeping the needs and experiences of visually impaired individuals at the forefront of our design and development process not only guided our technical decisions but also gave us a deeper appreciation for the impact technology can have on people's lives.

## What's next for PathSense
As we look to the future of PathSense, we're brimming with ideas for enhancements and expansions. We're eager to partner with more venues to increase our coverage of mapped indoor spaces, making PathSense useful in a wider range of locations. We also plan to refine our object recognition capabilities, implement personalized user profiles, and explore integration with wearable devices for an even more seamless experience.

In the long term, we envision PathSense evolving into a comprehensive indoor navigation ecosystem. This includes developing community features for crowd-sourced updates, integrating augmented reality capabilities to assist sighted companions, and collaborating with smart building systems for ultra-precise indoor positioning. With each step forward, our goal remains constant: to continually improve PathSense's ability to provide independence and confidence to visually impaired individuals navigating indoor spaces.
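A minimal sketch of the semantic search step in the stack above: rerank CV tags and map entries against the user's spoken query with Cohere's Rerank API. The model name, API key placeholder, and documents are illustrative:

```python
# Rerank environment descriptions against a spoken query with Cohere.
import cohere

co = cohere.Client("YOUR_API_KEY")
documents = [
    "exit door, 4 metres ahead on the left",
    "wooden chair, 1 metre ahead",
    "elevator bank, 10 metres ahead past the lobby",
]
results = co.rerank(model="rerank-english-v3.0",
                    query="where is the closest way out?",
                    documents=documents, top_n=1)
best = results.results[0]
print(documents[best.index], best.relevance_score)
```

Reranking works well here because the query ("way out") and the best document ("exit door") share meaning but no keywords.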
## 🌟**Inspiration**
There are over **7.2 million** people in the U.S. who are legally blind, many of whom rely on others to help them navigate and understand their environment. While technology holds the promise of increased independence, current solutions for the visually impaired often fall short, either lacking accessibility features like text-to-speech or offering overly complex interfaces. Optica was born out of a desire to bridge **this gap**. Our app empowers visually impaired individuals by giving them a simple, intuitive tool to perceive the world independently. Through clear, human-like descriptions of their surroundings, Optica provides not just information, but confidence, autonomy, and a deeper connection to their environment.

## 🛠️ **What it does**
Optica transforms a smartphone into a tool of empowerment for the visually impaired, enabling users to independently understand their surroundings. With the press of a button, users receive clear, succinct, vivid audio descriptions of what the phone's camera captures. Optica doesn't just list objects; it paints a picture, communicating the relationships between objects and creating a true sense of place. Optica enables its users to engage with their environment without outside assistance.

## 🧱 **How we built it**
We developed Optica using the ML Kit Object Detection API, which enabled us to identify and classify objects in real time. These object classifications were then fed into a custom Large Language Model (LLM) powered by TuneStudio and Cerebras, which we trained to generate coherent, natural-language descriptions. The output from this LLM was integrated with Google Cloud's text-to-speech API to provide users with real-time audio feedback. Throughout development, we maintained a user-first mindset, ensuring that the interface was intuitive and fully accessible.

## ⚔️ **Challenges we ran into**
Developing Optica presented numerous technical and logistical challenges, particularly when it came to integrating various cutting-edge technologies. Deploying our object detection model in Android Studio took longer than anticipated, which limited the time we had to refine other components. Communication between our computer vision model and TuneStudio's LLM proved to be complex, requiring us to overcome issues with API integration and SDK compatibility. Additionally, managing the project across GitHub repositories introduced git-related challenges, particularly when merging contributions from different team members. However, these difficulties only strengthened our resolve and pushed us to learn new skills, especially in debugging, collaboration, and working across frameworks. Mentors played a crucial role in helping us push through these roadblocks, and the experience has made us better engineers and problem solvers!

## 🎖️ **Our Accomplishments**
We are incredibly proud of our **integration of computer vision and natural language processing**, a combination that allows Optica to go beyond standard object recognition! Starting from a basic CV-based idea, we pushed the boundaries by incorporating an LLM to enhance the descriptions and truly serve the visually impaired community. None of us had experience with these APIs, and we learned so much on this journey! Our ability to bring together these powerful technologies to create a tool that can have a tangible, positive impact on people's lives is an accomplishment we hold in high regard. Successfully deploying this onto a user-friendly platform was a milestone we are excited about.
## 📖 **What we learned**
Although we might have learned new languages, APIs, and git commands on a technical level, the lessons we've learned **go beyond the pages**:

* Setbacks are an inevitable part of the creative process, and staying adaptable allows you to turn challenges into opportunities!
* Starting without all the answers taught us that taking the first step is crucial for personal and project development. We learned not to get ahead of ourselves and to take it slow!
* Reaching out for help from our mentors showed us the power of collaboration and shared knowledge. We would like to specifically thank Nifaseth and Harsh Deep for their help!

## ⏭️ **What's next for Optica**
We plan to continually enhance the app by improving the accuracy and breadth of the image classification model, training it on more diverse datasets that include non-conventional settings and real-world complexity. Additionally, we aim to incorporate advanced depth sensing with the Google AR Depth API to provide even more nuanced scene descriptions. On the accessibility front, we will refine the voice activation and gesture-based navigation to make the app even more intuitive. We also look forward to partnering with organizations and sponsors, like Cerebras and TuneStudio, to ensure that **Optica continues to push the boundaries of AI for social good**, helping us realize our vision of full independence for the visually impaired.
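A minimal sketch of the final step in Optica's pipeline: turning the LLM's scene description into audio with Google Cloud Text-to-Speech. The voice parameters are illustrative:

```python
# Synthesize a scene description into MP3 audio with Google Cloud TTS.
from google.cloud import texttospeech

def speak(description: str) -> bytes:
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=description),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US",
            ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    return response.audio_content  # MP3 bytes, ready to play on the device

audio = speak("A wooden table two steps ahead, with a chair to your right.")
```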
## Inspiration
A study recently done in the UK found that 69% of people above the age of 65 lack the IT skills needed to use the internet. Our world's largest resource for information, communication, and so much more is shut off to such a large population. We realized that we can leverage artificial intelligence to simplify completing online tasks for senior citizens or people with disabilities. Thus, we decided to build a voice-powered web agent that can execute user requests (such as booking a flight or ordering an iPad).

## What it does
The first part of Companion is a conversation between the user and a voice AI agent, in which the agent understands the user's request and asks follow-up questions for specific details. After this call, the web agent generates a plan of attack and executes the task by navigating to the appropriate website and typing in relevant search details or clicking buttons. While the agent is navigating the web, we stream its actions to the user in real time, allowing the user to monitor how it is browsing and using the web. In addition, each user request is stored in a Pinecone database, so the agent has context about similar past user requests and preferences. The user can also see the live state of the web agent's navigation in the app.

## How we built it
We developed Companion using a combination of modern web technologies and tools to create an accessible and user-friendly experience.

For the frontend, we used React, providing a responsive and interactive user interface. We utilized components for input fields, buttons, and real-time feedback to enhance usability, and integrated VAPI, a voice recognition API, to enable voice commands, making the app easier to use for people with accessibility needs. For the backend, we used Flask to handle API requests and manage the server-side logic. For web automation tasks we leveraged Selenium, allowing the agent to navigate websites and perform actions like filling forms and clicking buttons. We stored user interactions in a Pinecone database to maintain context and improve future interactions by learning user preferences over time; the user can also view past flows. We hosted the application on a local server during development, with plans for cloud deployment to ensure scalability and accessibility. Thus, Companion can effectively assist users in navigating the web, particularly benefiting seniors and individuals with disabilities.

## Challenges we ran into
We ran into difficulties getting the agent to accurately complete each task. Getting it to take the right steps and always execute the task efficiently was a hard but fun problem. It was also challenging to prompt the voice agent so that it effectively communicates with the user and understands their request.

## Accomplishments that we're proud of
Building a complete, end-to-end agentic flow that is able to navigate the web in real time. We think this project is socially impactful and can make a difference for those with accessibility needs.

## What we learned
The small things that can make or break an AI agent, such as the way we display memory, how we ask it to reflect, and what supplemental info we give it (images, annotations, etc.).

## What's next for Companion
Making it work without CSS selectors; training a model to highlight all the places the computer can click, because certain buttons can currently be unreachable for Companion.
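A minimal sketch of the request memory described above: embed each user request, store it in Pinecone, and pull similar past requests as context for the web agent. The index name and the `embed_fn` callable are assumptions:

```python
# Store user requests in Pinecone and retrieve similar past requests.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("companion-requests")  # hypothetical index name

def remember(request_id: str, text: str, embed_fn):
    index.upsert(vectors=[{"id": request_id,
                           "values": embed_fn(text),
                           "metadata": {"request": text}}])

def similar_requests(text: str, embed_fn, k: int = 3):
    hits = index.query(vector=embed_fn(text), top_k=k, include_metadata=True)
    return [match.metadata["request"] for match in hits.matches]
```

The retrieved requests can then be prepended to the agent's planning prompt so it benefits from past preferences (e.g. a previously chosen airline).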
winning
## Inspiration
Philadelphia, like many urban cities, is grappling with rising temperatures due to climate change, industrialization, and the urban heat island effect. We noticed that extreme heat is making it unsafe for many communities, especially during the summer months. Chilladelphia was inspired by the need to provide residents with real-time resources and actionable insights to help them stay cool and safe.

## What it does
Help cool down Philly! The main page features a heat map that visually highlights the hottest and coolest areas around Philadelphia. By entering your address, you can instantly see how "chill" your neighborhood is. Using our computer vision algorithm, we analyze the ratio of greenery in your area, giving you a personalized chill rating. This rating helps you understand the immediate state of your environment. Chilladelphia goes beyond just information: it provides actionable suggestions like planting trees, painting rooftops lighter colors, and other eco-friendly tips to actively cool down your community. Plus, you can easily find nearby cooling centers, water stations, and shaded areas to help you beat the heat on the go.

## How we built it
We built Chilladelphia with a strong focus on user experience and seamless access to location-based data. For user authentication, we integrated **PropelAuth**, which provided a quick and scalable solution for user sign-ups and logins. This allowed us to securely manage user sessions, ensuring that personal data, like location preferences, is handled safely.

On the frontend, we used **React** to create a dynamic and responsive user interface, enabling smooth interactions from entering an address to viewing real-time temperature and air quality updates. To style the app, we utilized **Tailwind CSS**, which allowed us to rapidly prototype and design components with minimal code. **Axios** handles API requests, efficiently fetching environmental data and user-specific suggestions. The frontend also leverages **React Router** to manage navigation, making it easy for users to explore different parts of the app.

For the backend, we set up a **Node.js** server with **Express** to handle API requests and data routing. The core of our data storage is **MongoDB**, where we store geospatial information like cooling center locations and tree-planting sites. MongoDB's flexibility allowed us to efficiently store and query data based on the user's location. We also integrated external APIs to get coordinates and map data. To manage authentication securely across both the backend and frontend, we utilized **PropelAuth** to handle user session tokens and login states.

For the data generation, we used Python to compile images of University City by downloading sections of it from satellite imagery. We then used detectree, a Python library with a pre-trained model that identifies tree pixels in aerial images, and calculated what percentage of each image was green space to give users an idea of how green the area around them is.

## Challenges we ran into
One of the biggest challenges was getting high-resolution satellite imagery that would work well for our purposes. After testing over five different APIs, we ended up having to wrap a Google Maps scraper, which worked best for our needs.

## Accomplishments that we're proud of
We're proud of creating a solution that can have real impact in our neighboring Philly communities.
The recent heat waves in the northeast have been dangerous and put our peers and community at risk, and we are excited to take steps in the right direction to mitigate the issue.

## What we learned
We expanded our tech stack: several of us used MongoDB, Express.js, PropelAuth, and many other tools for the first time this weekend.

## What's next for Chilladelphia
Next, we plan to scale Chilladelphia by integrating more data. We had limited storage in our database and weren't able to cover as much of Philly as we wanted to, but we hope to do more in the future! We also want to partner with local governments and environmental organizations to further expand the app's resource database and promote city-wide efforts to cool down Philadelphia.
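A minimal sketch of the chill-rating calculation described above: given a binary tree-pixel mask (as produced by a detector such as detectree over a satellite tile), the green-space ratio is simply the fraction of tree pixels. The mask below is synthetic:

```python
# Compute the green-space percentage of a satellite tile from a tree-pixel mask.
import numpy as np

def green_ratio(tree_mask: np.ndarray) -> float:
    """tree_mask: 2-D array where nonzero pixels were classified as trees."""
    return float(np.count_nonzero(tree_mask)) / tree_mask.size

mask = np.zeros((512, 512), dtype=np.uint8)
mask[:128, :] = 1  # pretend the top quarter of the tile is tree canopy
print(f"Chill rating input: {green_ratio(mask):.0%} green space")  # -> 25%
```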
As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool that provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible, and anyone can learn about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction.

We deployed our web app using Microsoft Azure and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system that texts users whenever the air quality warrants action, such as staying indoors or wearing a P2 mask.

We have taken many approaches to improving our platform's scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batches. We used Azure Functions to achieve this.

Azure Notebooks and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia.

Our web app was designed using React, Python, and d3.js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a colour-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include these features; however, we hope to implement them in later stages of Phoenix.
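A minimal sketch of the scalability pattern described above (connection reuse plus batched, asynchronous API calls), using asyncio and aiohttp; the URLs are placeholders:

```python
# Batch asynchronous API calls over one shared connection pool.
import asyncio
import aiohttp

async def fetch_json(session, url):
    async with session.get(url) as resp:
        resp.raise_for_status()
        return await resp.json()

async def fetch_batch(urls):
    # One ClientSession per batch reuses connections across all requests.
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_json(session, u) for u in urls))

urls = [f"https://example.com/fires?page={i}" for i in range(5)]
results = asyncio.run(fetch_batch(urls))
```

Compared to sequential requests, the batch completes in roughly the time of the slowest single call, which matters during traffic spikes.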
## Inspiration
Because of COVID-19 and the holiday season, we are feeling increasingly guilty over the carbon footprint caused by our online shopping. This is not a coincidence: Amazon alone contributed over 55.17 million tonnes of CO2 in 2019, the equivalent of 13 coal power plants. We have seen many carbon footprint calculators that aim to measure individual carbon pollution. However, a raw mass of carbon is too abstract and has little meaning to average consumers. After calculating our footprints, we feel guilty about the carbon consumption caused by our lifestyles and maybe, maybe donate once to offset the guilt inside us. The problem is, climate change cannot be eliminated by a single contribution because it's a continuous process, so we thought to gamify the carbon footprint to cultivate engagement, encourage donations, and raise awareness over the long term.

## What it does
We built a Google Chrome extension to track the user's Amazon purchases and determine the carbon footprint of each product in real time using all available variables scraped from the page, including product type, weight, distance, and shipping options. We set up Google Firebase to store users' account information and purchase history, and created a gaming system in the backend to track user progression, achievements, and pet status.

## How we built it
We created the front end using React.js, developed our web scraper in JavaScript to extract Amazon information, and used Netlify for deploying the website. We developed the back end in Python using Flask, storing our data in Firestore, calculating shipping distance using Google's Distance Matrix API, and hosting on Google Cloud Platform. For the user authentication system, we used SHA-256 hashes with salts to store passwords securely in the cloud.

## Challenges we ran into
This was the first time developing a web app for most of us, since our backgrounds are in Mechatronics Engineering and Computer Engineering.

## Accomplishments that we're proud of
We are very proud that we were able to accomplish an app of this magnitude, as well as its potential impact on social good by reducing carbon footprint emissions.

## What we learned
We learned about utilizing the Google Cloud Platform and integrating the front end and back end to make a complete web app.

## What's next for Purrtector
Our mission is to build tools that gamify our fight against climate change, cultivate user engagement, and make it fun to save the world. We see ourselves as a non-profit, and we would welcome collaboration with third parties to offer additional perks and discounts to users who reduce carbon emissions by unlocking designated achievements with their pet. This would provide additional incentives towards a carbon-neutral lifestyle on top of the emotional attachment to their pet.

## Domain.com Link
<https://purrtector.space>

Note: We weren't able to register this via domain.com due to site errors, but Sean said we could have this domain considered.
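A minimal sketch of the salted SHA-256 scheme described above; in production, a slow KDF such as bcrypt or argon2 would be the safer choice, so treat this as illustrative of the hackathon approach:

```python
# Salted SHA-256 password hashing and verification.
import hashlib
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt.hex(), digest      # store both alongside the user record

def verify(password, salt_hex, stored_digest):
    _, digest = hash_password(password, bytes.fromhex(salt_hex))
    return digest == stored_digest

salt, digest = hash_password("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("wrong", salt, digest)
```

The per-user salt is what prevents two users with the same password from sharing a hash, defeating precomputed rainbow tables.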
winning
## Inspiration
Over the past 30 years, the percentage of American adults who read literature has dropped about 14%. We found our inspiration. The issue we discovered is that, due to the rise of modern technologies, movies and other films are more captivating than reading a book. We wanted to change that.

## What it does
By combining Google's Mobile Vision API, Firebase, IBM Watson, and Spotify's API, Immersify first scans text through our Android application using Google's Mobile Vision API. After the text is stored in Firebase, IBM Watson's Tone Analyzer deduces its emotion. A dominant emotional score is then sent to Spotify's API, and appropriate music is played to the user. With Immersify, text can finally be brought to life, and readers can feel more engaged with their novels.

## How we built it
On the mobile side, the app was developed using Android Studio. The app uses Google's Mobile Vision API to recognize and detect text captured through the phone's camera. The text is then uploaded to our Firebase database. On the web side, the application pulls the text sent by the Android app from Firebase. The text is then passed into IBM Watson's Tone Analyzer API to determine the tone of each individual sentence within the paragraph. We then run our own algorithm to determine the overall mood of the paragraph based on the tones of each sentence. A final mood score is generated, and based on this score, specific Spotify playlists play to match the mood of the text.

## Challenges we ran into
Getting Firebase to cooperate with both our mobile app and our web app was difficult for the whole team. Querying the API took multiple attempts, as our POST requests to IBM Watson were out of sync. In addition, the text recognition in our mobile application did not perform as accurately as we anticipated.

## Accomplishments that we're proud of
Some accomplishments we're proud of are successfully using Google's Mobile Vision API and IBM Watson's API.

## What we learned
We learned how to push information from our mobile application to Firebase and pull it through our web application. We also learned how to use new APIs we had never worked with before. Aside from the technical aspects, as a team we learned to collaborate to tackle all the tough challenges we encountered.

## What's next for Immersify
The next step for Immersify is to incorporate this software with Google Glass. This would eliminate the two-step process of taking a picture in an Android app and going to the web app to generate a playlist.
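A minimal sketch of the dominant-mood step: average each tone's score across the sentence-level results Tone Analyzer returns, then pick the strongest. The response shape follows Tone Analyzer v3; the sample values are illustrative:

```python
# Pick the dominant mood from a Watson Tone Analyzer sentence-level response.
from collections import defaultdict

def dominant_mood(watson_response):
    totals, counts = defaultdict(float), defaultdict(int)
    for sentence in watson_response.get("sentences_tone", []):
        for tone in sentence["tones"]:
            totals[tone["tone_id"]] += tone["score"]
            counts[tone["tone_id"]] += 1
    if not totals:
        return None
    return max(totals, key=lambda t: totals[t] / counts[t])

response = {"sentences_tone": [
    {"tones": [{"tone_id": "joy", "score": 0.81}]},
    {"tones": [{"tone_id": "joy", "score": 0.62},
               {"tone_id": "fear", "score": 0.55}]},
]}
print(dominant_mood(response))  # -> "joy", mapped to an upbeat Spotify playlist
```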
## Inspiration
We set out to create a tool that unleashes dancers' creativity by syncing their moves with AI-generated music that matches perfectly. Inspired by the vibrant dance scenes on TikTok and Instagram, where beats and moves are inseparable, we wanted to take it to the next level. Imagine dancing to music made just for your style, effortlessly turning your moves into shareable, jaw-dropping videos with custom soundtracks. With our tool, dancers don't just follow the beat, they create it! It's like having your own DJ that grooves with you.

## What it does
KhakiAI allows users to upload or record short 6-second dance videos, which are analyzed by our AI-powered system. The AI tracks the dancer's movements, tempo, and style, generating a custom music track that perfectly matches the rhythm and energy of the performance. Users can further customize the music by selecting different genres or adding sound effects. The tool then syncs the music with the video, creating a seamless, high-quality dance video that can be shared directly on social media.

## How we built it
We built this project with a complex tech stack involving several APIs, LLMs, and programming languages. Throughout our programming process, we broke the task into various parts and pieced them together as we went. To begin, we focused on the key functionality of dance movement recognition with OpenPose/OpenCV. This recognition outputs a JSON document that gets stored in a MongoDB database. Then we use Llama, Tune AI, and Cerebras to pass the JSON through an LLM quickly, keeping latency low so the user gets their generated prompt fast. The Suno API then uses the generated prompt to create music for the video, which we attach with Python and output.

## Challenges we ran into
There were many challenges in creating this project. Suno doesn't have an official API, so we had to rely on an unofficial version that authenticates with cookies instead of an actual API key, which slowed down our progress.

## Accomplishments that we're proud of
We are proud of using computer vision to turn dance moves into a music-generation prompt.

## What we learned
We learned about computer vision, Flask/Next.js integration, and React. We made proper use of version control, and we gained experience with new AI technologies like Cerebras.
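An illustrative sketch of one way to estimate dance tempo from pose keypoints, the kind of signal that could feed the music-generation prompt above: track one wrist's vertical position per frame and count direction reversals as beats. The keypoint layout and frame rate are assumptions, not KhakiAI's actual algorithm:

```python
# Estimate BPM from per-frame wrist positions by counting motion reversals.
import numpy as np

def estimate_bpm(wrist_y: np.ndarray, fps: float = 30.0) -> float:
    """wrist_y: per-frame vertical coordinate of one wrist over a 6 s clip."""
    velocity = np.diff(wrist_y)
    # A sign change in velocity means the hand reversed direction.
    reversals = np.count_nonzero(np.diff(np.sign(velocity)) != 0)
    seconds = len(wrist_y) / fps
    return 60.0 * reversals / (2 * seconds)  # two reversals per full up-down cycle

frames = np.sin(np.linspace(0, 12 * np.pi, 180))  # 6 s of bouncy motion at 30 fps
print(f"{estimate_bpm(frames):.0f} BPM")  # -> 60 BPM for the synthetic clip
```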
## Inspiration *It's lunchtime. You quickly grab a bite to eat and sit by yourself while you munch it down alone. While food clearly has the power to bring people together, the pandemic has separated us from the friends and family with whom we love to eat* Muk means Eat in Korean 🍽️ Eating is such an essential and frequent part of our daily lives, but eating alone has become the "norm" nowadays. With the addition of the lockdown and working from home, it is even harder to be social over food. Even worse, some young people are too busy to eat at home at all. This brings significant health drawbacks, including higher risks of obesity, prediabetes, metabolic syndrome, high blood pressure, and high cholesterol. This is because loneliness and lack of social support lead to physical wear and anxiety 🤒 We rely on relationships for emotional support and stress management ✨🧑‍🤝‍🧑 This is why we became inspired to make eating social again. With LetsMuk, we bring back the interactions of catching up with friends or meeting someone new, so that you won't spend your next meal alone anymore. This project targets the health challenge because of the profound problems related to loneliness and mental health. For young people working from home, it brings back the social side of lunching with coworkers. Among seniors, it lifts them out of isolation during a lonely meal. For friends over a distance, it provides a chance to re-connect and catch up. 💬🥂 Here are several health studies on the problems around eating alone that inspired us * [eating-alone-metabolic-syndrome](https://time.com/4995466/eating-alone-metabolic-syndrome/) * [eating-alone-good-or-bad-for-your-health](https://globalnews.ca/news/6123020/eating-alone-good-or-bad-for-your-health/#:%7E:text=The%20physical%20implicati%5B%E2%80%A6%5D0and%20elevated%20blood%20pressure) * [eating-alone-is-the-norm](https://ryersonian.ca/eating-alone-is-the-norm-but-ryerson-grads-app-works/) ## How we built it We chose Flutter as our mobile framework to support iOS and Android and leveraged its tight integrations with Google Firebase. We used Firebase's real-time database as our user store and built a Ruby on Rails API with PostgreSQL to serve as a broker and source of truth. Our API takes care of two workflows: storing and querying schedules, and communicating with Agora's video chat API to query the active channels and the users within them. Here is the video demo of what happens when one joins a room [YouTube Demo on entering a room](https://youtu.be/3EYfO5VVVHU) ## Challenges we ran into This was the first time any of us had worked with Flutter, and none of us had in-depth mobile development experience. We went through initial hurdles simply setting up the mobile environment as well as getting the Flutter app running. Dart is a new language to us, so we took the time to learn it from scratch and met challenges with calling async functions, building the UI scaffold, and connecting our backend APIs to it. We also ran into issues calling the Agora API for our video chat, as our users' uids did not fit the API's integer size restriction (see the sketch below). ## Accomplishments that we're proud of It works!! * Building out a polished UI * Seeing your friends who are eating right now * Ease of starting a meal and joining a room * "Mukking" together with our team!
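As a concrete illustration of the uid mismatch: Agora expects a 32-bit unsigned integer uid, while Firebase issues string uids. One common workaround (an assumption here, not necessarily the fix the team shipped) is to hash the string into the 32-bit range:

```python
import zlib

def agora_uid(firebase_uid: str) -> int:
    # CRC32 maps any string deterministically into [0, 2**32 - 1], the range
    # Agora's integer uid accepts. Collisions are possible but rare at small
    # user counts; a production system would store the mapping explicitly.
    return zlib.crc32(firebase_uid.encode("utf-8"))

print(agora_uid("someFirebaseUid123"))  # same stable int for this user every call
```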
## What we learned * Mobile development with Flutter and Dart * User authentication with Firebase * Video call application with Agora * API development with Ruby on Rails * Firebase and Postgres databases (NoSQL and SQL) ## What's next for LetsMuk * Schedule integration - put in when you eat every day * Notifications - reminders to your friends and followers when you're eating * LetsMuk for teams/enterprise - eat with coworkers or students * Better social integration - Facebook, Twitter, "followers"
partial
## Inspiration I woke up on Saturday morning (10/23/2021). It was 9 am. My mom had just gotten back from the grocery store. She complained to me that the groceries were too expensive and her salary wasn't keeping up. "Damn," I thought, "sounds like a problem for DeFi smart contracts to solve!" ## What it does These smart contracts pay employees their inflation-adjusted salary based on the most recent CPI data pulled in through an API. ## How we built it Chainlink Keepers automatically trigger the smart contracts to pay employees when it's time. Chainlink API calls fetch the latest CPI from the BLS. Uniswap lets the company/employer store their assets in whatever currency they want but swap to the employee's preferred currency/token when it comes time to pay them. Right now, the employer has to keep lots of cash in the smart contract to pay the employees. In the future, we want this cash to always be in use, so the employer could add it to a Uniswap liquidity pool or lend it out to earn yields. We read the Solidity, Uniswap, and Chainlink docs. A few StackExchange/StackOverflow Q&As also helped. ## Challenges we ran into Time was our biggest challenge. We started coding at 9 am on 10/23/2021 and realized we couldn't fully finish the implementations of all the functions. I don't like staying up until 2 am coding, so at about 9 pm on 10/23/2021 I decided to call it a night and submit what I had. I hope the judges will get the idea. ## Accomplishments that we're proud of I'd never used Chainlink Keepers, so I was stoked to successfully use them. I tested my Chainlink Keepers functions in Remix and they worked great! ## What we learned I learned a lot about Solidity mappings, Uniswap v3, and Chainlink Keepers. ## What's next for Inflation Adjusted Salary I'm not sure if there's already an app that does this, but I want to add a plug-in or separate app that lets anybody pay and receive in whatever crypto they want. E.g., let's say you go to the grocery store and the cashier asks for 20 USDC but all you want to own and carry is ETH. You should be able to put 20 USDC into your app, see the ETH conversion, hit confirm, and then the app automatically takes whatever currency you have (ETH in this case), does a swap on Uniswap, and transfers it to the cashier all in one transaction.
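The core payout arithmetic is simple enough to sketch off-chain. Here is a minimal Python illustration of the inflation adjustment the contract would apply, assuming the CPI at hire time is stored alongside the base salary (the CPI figures below are illustrative, not live BLS data):

```python
def inflation_adjusted_salary(base_salary: float, base_cpi: float, latest_cpi: float) -> float:
    """Scale the salary by CPI growth since the employee was hired."""
    return base_salary * (latest_cpi / base_cpi)

# Illustrative values: hired when CPI was 261.6, latest CPI 274.1.
print(round(inflation_adjusted_salary(5000, 261.6, 274.1), 2))  # 5238.91
```

On-chain, the same ratio would be computed in fixed-point integer math (Solidity has no floats), with the latest CPI delivered by the Chainlink API call described above.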
## Inspiration We live in a world where kiosks and card machines break down, and where owners of your favorite local restaurant only take cash, amid the shift to a cashless, contactless society. We want to make this transition a positive experience by offering a next-generation alternative that incentivizes both customers and owners. The current card and online transaction industries bully restaurant owners that operate on very tight margins, forcing them to just survive and compete under 3% + 0.5-cent transaction fees. Without an alternative, as the shift happens, this is just plain monopoly and bullying. We can potentially capture a $114B (cash) + $247B (debit) + $554B (credit) = $915B sized market across both online and offline POS, just in Canada. ## What it does We replace cash, credit, and debit transactions with blockchain technology. However, we overcome the traditional challenges of blockchain by using Algorand's pure proof-of-stake consensus. This brings the energy, time, and cost of a traditional blockchain transaction down to a near-instant time frame, just like current commercial infrastructure, with minimal fees. We provide beautiful and easy-to-use customer and merchant Android apps for Paysy. For commercial settings, we also provide a Java application that merchants can use on any device. The merchant sends a bill to the user, the user approves or declines, and the smart contract is fulfilled through a given node. ## How we built it We integrated Algorand's services into both the desktop and mobile applications using their SDK. We built a working point-of-sale (POS) service by successfully adding the different stages of a blockchain transaction to our application: creating and storing public/private keys for users, generating unsigned transactions for exchange, signing and verifying transactions, and submitting them to the blockchain (sketched below). ## Challenges we ran into It was our first time learning about blockchain technology. There were some issues with protocol communication, but with the help of mentors and company employees, we were able to successfully build the infrastructure. ## Accomplishments that we're proud of We are now familiar with blockchain technology and are able to use a state-of-the-art blockchain platform to create an application that will change the world for the better. ## What we learned We learned about blockchain technology. ## What's next for Paysy We will implement a recommendation algorithm for restaurants so that users can save time, and provide an ML-based analytics dashboard for small restaurant owners to enable strategic business decisions. This will allow them to go from surviving to THRIVING.
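For readers unfamiliar with the flow, here is a minimal sketch of the key-generation, signing, and submission stages using Algorand's official Python SDK (algosdk); the node address, token, and amount are placeholders:

```python
from algosdk import account, transaction
from algosdk.v2client import algod

# Placeholder node credentials; a real deployment points at its own node.
client = algod.AlgodClient("YOUR_API_TOKEN", "https://testnet-api.algonode.cloud")

# 1. Create and store a keypair for a new user.
private_key, address = account.generate_account()

def pay(sender_sk, sender_addr, receiver_addr, microalgos):
    # 2. Build an unsigned payment transaction with suggested network params.
    params = client.suggested_params()
    txn = transaction.PaymentTxn(sender_addr, params, receiver_addr, microalgos)
    # 3. Sign with the sender's private key and submit to the network.
    signed = txn.sign(sender_sk)
    return client.send_transaction(signed)  # returns the transaction id
```

The merchant's billing flow described above wraps this same sequence: the merchant constructs the bill, the customer approves, and the signed payment transaction is submitted through a node.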
## Inspiration After observing the news about the use of police force for so long, we asked ourselves how to help solve it. We realized that, in some ways, the problem is made worse by a lack of trust in law enforcement. We then realized that we could use blockchain to create a better system for accountability in the use of force. We believe it can help people trust law enforcement officers more and diminish the use of force when possible, saving lives. ## What it does Chain Gun is a modification for a gun (a Nerf gun for the purposes of the hackathon) that sits behind the trigger mechanism. When the gun is fired, the GPS location and ID of the gun are put onto the Ethereum blockchain. ## Challenges we ran into Some things did not work well with the new updates to Web3, causing a continuous stream of bugs. To add to this, the major updates broke most old code samples. Android lacks a good implementation of any Ethereum client, making it a poor platform for connecting the gun to the blockchain. Sending raw transactions is not very well documented, especially when signing the transactions manually with a public/private keypair. ## Accomplishments that we're proud of * Combining many parts to form a solution including an Android app, a smart contract, two different back ends, and a front end * Working together to create something we believe has the ability to change the world for the better. ## What we learned * Hardware prototyping * Integrating a bunch of different platforms into one system (Arduino, Android, Ethereum blockchain, Node.js API, React.js frontend) * Web3 1.0.0 ## What's next for Chain Gun * Refine the prototype
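The project itself used Web3 1.0.0 on the JavaScript side; purely as an illustration of the manual sign-and-send flow described above, here is a hedged sketch using Python's web3.py. The node URL, key, and transaction fields are placeholders, and the signed-transaction attribute is `raw_transaction` in web3.py 7+ (`rawTransaction` in older releases):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-eth-node.invalid"))  # placeholder node

def record_shot(gun_id: int, lat: float, lon: float, sender: str, private_key: str):
    # Embed the shot event in the transaction's data field; a real deployment
    # would instead call a method on the deployed smart contract.
    payload = "0x" + f"{gun_id}:{lat}:{lon}".encode().hex()
    tx = {
        "to": sender,           # self-send carrying data, illustration only
        "value": 0,
        "gas": 100_000,
        "gasPrice": w3.to_wei(10, "gwei"),
        "nonce": w3.eth.get_transaction_count(sender),
        "chainId": 1,           # placeholder; set per target network
        "data": payload,
    }
    signed = w3.eth.account.sign_transaction(tx, private_key)
    return w3.eth.send_raw_transaction(signed.raw_transaction)
```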
partial
# talko Hello! talko is a project for nwHacks 2022. Interviews can be scary, but they don't have to be! We believe that practice and experience are what give you the confidence you need to show interviewers what you're made of. talko is made for students and new graduates who want to learn to fully express their skills in interviews. With everything online, it's even more important now to be able to get your thoughts across clearly in a virtual setting. As students who have been and will be looking for co-ops, we know very well how stressful interview season can be; we took this as our source of inspiration for talko. talko is an app that helps you practice for interviews. Record and time your answers to interview questions to get feedback on how fast you're talking and view previous answers. ## Features * View answer history for previous answers - playback recordings, words per minute, standard deviation of talking speed, and overall answer quality. * Integrated question bank with a variety of topics. * Skip questions you aren't ready to answer. * Adorable robots!! ## Technical Overview For talko’s front-end, we used React to create a web app that can be used on both desktop and mobile devices. We used Figma for the wireframing and Adobe Fresco for some of the aesthetic touches and character designs. We created the backend using Node.js and Express. The API handles uploading, saving, and retrieving recordings, as well as fetching random questions from our question bank. We used Google Cloud Firestore to save data about past answers, and Microsoft Azure to store audio files and run speech-to-text on our clips. In our API, we calculate the average words per minute over the entire answer, as well as the variance in the rate of speech (sketched below). ## Challenges & Accomplishments Creating this project in just 24 hours was quite the challenge! While we have worked with some of these tools before, it was our first time working with Microsoft Azure. We're really proud of what we managed to put together over this weekend. Another issue we had was that it can take a while to get speech-to-text results from Azure. We wanted to send a response back to the frontend quickly, so we decided to calculate the rate-of-speech variance afterwards and patch our data in Firestore. ## What's next for talko? * Tagged questions: get questions most relevant to your industry * Better answer analysis: use different NLP APIs and assess the text to give better stats and pointers + Are there lots of pauses and filler words in the answer? + Is the answer related to the question? + Given a job description selected or supplied by the user, does the answer cover the keywords? + Is the tone of the answer formal, assertive? * View answer history in more detail: option to show transcript and play back audio recordings * Settings to personalize your practice experience: customize number of questions and answer time limit.
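The writeup doesn't show the exact calculation, so here is a minimal sketch of how per-answer words-per-minute and rate variance can be derived from timestamped speech-to-text output (the word/timestamp shape is an assumption):

```python
from statistics import pvariance

def speech_stats(words, window_sec=10.0):
    """words: list of (text, start_sec) pairs from a speech-to-text service,
    assumed sorted by start time. Returns (overall_wpm, wpm_variance)."""
    if len(words) < 2:
        return 0.0, 0.0
    duration = words[-1][1] - words[0][1]
    overall_wpm = len(words) / duration * 60 if duration else 0.0
    # Bucket words into fixed windows and measure how much the pace swings.
    counts = {}
    for _, start in words:
        bucket = int(start // window_sec)
        counts[bucket] = counts.get(bucket, 0) + 1
    window_wpms = [c / window_sec * 60 for c in counts.values()]
    return overall_wpm, pvariance(window_wpms)
```

A high variance flags a speaker who rushes some sentences and stalls on others, which matches the "standard deviation of talking speed" feature above.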
## Built using ![image](https://img.shields.io/badge/Node.js-339933?style=for-the-badge&logo=nodedotjs&logoColor=white) ![image](https://img.shields.io/badge/React-20232A?style=for-the-badge&logo=react&logoColor=61DAFB) ![image](https://img.shields.io/badge/Express.js-000000?style=for-the-badge&logo=express&logoColor=white) ![image](https://img.shields.io/badge/microsoft%20azure-0089D6?style=for-the-badge&logo=microsoft-azure&logoColor=white) ![image](https://img.shields.io/badge/firebase-ffca28?style=for-the-badge&logo=firebase&logoColor=black) ![drawing](https://github.com/nwhacks-2022/.github/blob/main/assets/rainbow.png?raw=true) ### Thanks for visiting!
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore background noise and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast-food chains.
## Inspiration From our experience renting properties from private landlords, we think the rental experience is broken. Payments and communication are fragmented for both landlords and tenants. As tenants, we have to pay landlords through various payment channels, and that process is even more frustrating if you have roommates. On the other hand, landlords have trouble reconciling payments coming from these several sources. We wanted to build a rental companion that initially tackles this problem of payments, but extends to saving time and headaches in other aspects of the rental experience. Since we are improving convenience for landlords and tenants, we focused solely on a mobile application. ## What it does * Allows tenants to make payments quickly in fewer than three clicks * Chatbot interface that has information about the property's lease and state-specific rental regulation * Landlords monitor the cash flow of their properties transparently and granularly ## How we built it * Full-stack React Native app * Convex backend and storage * Stripe credit card integration * Python backend for Modal & GPT-3 integration ## Challenges we ran into * Choosing a payment method that is reliable and fast to implement * Parsing lease agreements and training GPT-3 models * Deploying and running modal.com for the first time * Ensuring transaction integrity and idempotency on Convex (see the sketch below) ## Accomplishments that we're proud of * Shipped the chatbot although we didn't plan to * Pleased with the UI design ## What we learned * Mobile apps are tough for hackathons * Payment integrations have become very accessible ## What's next for Domi Rental Companion * Validate that we provide value for target customers
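On the transaction-integrity point: Stripe's API supports idempotency keys, so a retried request can't double-charge a tenant. A minimal sketch with Stripe's official Python library (the key derivation, amounts, and metadata fields are illustrative assumptions, not Domi's exact code):

```python
import stripe

stripe.api_key = "sk_test_placeholder"  # placeholder test key

def collect_rent(tenant_id: str, lease_id: str, month: str, amount_cents: int):
    # Deriving the key from (lease, month) means retries of the same rent
    # payment return the original PaymentIntent instead of charging twice.
    return stripe.PaymentIntent.create(
        amount=amount_cents,
        currency="usd",
        metadata={"tenant": tenant_id, "lease": lease_id, "month": month},
        idempotency_key=f"rent-{lease_id}-{month}",
    )
```

The same (lease, month) key can also serve as the dedupe handle when recording the transaction in the backend store.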
winning
## Inspiration Ever since we started using Discord voice channels to chill, play games, and listen to music, we always thought of various features that the "ideal" music bot would have. Because popular bots such as Groovy and Rythm were recently discontinued, we thought it would be the perfect time to make this "ideal" bot. ## What it does Orpheus is a self-hosted, completely free Discord music bot with a web GUI to observe and manipulate the queue, as well as a smart-queueing algorithm to evenly distribute music-playing time amongst users in a voice channel (one possible version is sketched below). ## How we built it We built this bot's backend primarily using various libraries and modules in Go. In particular, we used the open-source project youtube-dl to download YouTube videos from their links and process much of their information, as well as other libraries like discordgo, which has specific support for Discord bots. The frontend was built using HTML, JavaScript, and CSS. ## Challenges we ran into One of the challenges we encountered was the audio streaming portion of our code, which was very susceptible to latency in code execution, as well as tricky conversions between encodings that were necessary to communicate with Discord. We also had to plug into many different APIs and frameworks, with our project including a Go business-logic part, a web part, a Discord bot part, an audio part, etc., which made it hard to finish an MVP within a short amount of time. ## Accomplishments that we're proud of We're proud of the fact that we were able to create the beginnings of a much better bot, one that refines the UI/UX of its predecessors through our own experience. We think that, once finished, this will definitely be the best music bot available, especially as open source. ## What we learned I'm personally glad that I was able to begin learning Go's extensive multithreading and parallelism support, which was necessary to cleanly handle a theoretically high throughput of data. As a team, we learned how to work together and divide up tasks, which for many of the members was a great learning experience, as it was the first time working on an extensive hackathon or even a software engineering project in general. ## What's next for orpheus First on the to-do list for orpheus is clearly to fix our many bugs regarding slash commands in Discord, as well as to add various other commands we'd like to support, like skipping between songs in the queue, removing songs from the queue, and moving songs within the queue. Furthermore, we'd like to make our web GUI more robust and easier to interact with. Finally, we'd like to improve the smart-queueing algorithm and add others, so that users have the most customizable and enjoyable experience.
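The writeup doesn't specify the smart-queueing algorithm; a natural candidate (an assumption on our part, sketched in Python rather than the project's Go) is round-robin interleaving by requester, so no single user monopolizes playback:

```python
from collections import OrderedDict, deque

class FairQueue:
    """Round-robin song queue: cycle through requesters, one track each."""
    def __init__(self):
        self.per_user = OrderedDict()  # user -> deque of their queued songs

    def add(self, user, song):
        self.per_user.setdefault(user, deque()).append(song)

    def next_song(self):
        if not self.per_user:
            return None
        user, songs = next(iter(self.per_user.items()))
        song = songs.popleft()
        # Rotate the user to the back; drop them if their queue is empty.
        del self.per_user[user]
        if songs:
            self.per_user[user] = songs
        return song

q = FairQueue()
q.add("alice", "song A1"); q.add("alice", "song A2"); q.add("bob", "song B1")
print([q.next_song() for _ in range(3)])  # ['song A1', 'song B1', 'song A2']
```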
Check out our slides here: shorturl.at/fCLR4 ## Inspiration In a digital age increasingly dictated by streaming services’ editorial teams and algorithms, we want to bring the emotion back to the music discovery experience. Your playlists have become an echo chamber of your own past preferences and the short, chorus-led song mold artists now follow to maximize playlist performance, giving music streaming platforms much of the control over who succeeds in this industry and what types of music users can easily access. We want to democratize the process of finding unique, emotion-driven sound and bring your personal emotions back to the forefront of your listening experience. ## What It Does Thus Maestro was born: a chatbot that integrates with virtually any messaging service and enables a user to send an emoji corresponding to how they’re feeling and receive a personalized new song recommendation through YouTube. ## How We Built It To achieve this maximized integrability, we used Gupshup to build the Messenger chatbot and developed the back-end on a Python Flask-JSON localhost server exposed via Ngrok. The AI sentiment analysis is trained on a large public Kaggle dataset of tweets scraped together with their emojis. Given the sentiment an emoji conveys, we then match it to a corresponding YouTube playlist and scrape the playlist for a suitable song track (a simplified sketch of this routing follows this writeup). ## Challenges We Ran Into The FB Messenger API was hard to access - we wasted some time initially trying to gain access to build on top of it and to integrate it with AWS. That’s when we switched over to Gupshup.io. However, it was extremely clunky. It slowed us down massively and required a lot of duplicative work. We also struggled to combine our front end and back end in one continuous integration pipeline. After receiving a tip from a mentor on what resources we could use to achieve this, it took a massive effort to integrate it all. ## Accomplishments We're Proud Of A working Messenger bot that can be used by anyone, at any time! These are emoji-song recommendations that are actually GOOD :) ## What We Learned Through our hack, we learned to perform sentiment analysis on not just text but emojis as well. We also learned how to integrate back-end and front-end -- especially hosting a server and exposing it on a URL, as well as setting up a Messenger chatbot smoothly. ## What's Next For Maestro We're very excited to develop Maestro further, including building a feedback loop so that users can tell us if they like their song, adding personalization so that music recommendations are tailored to the user, and catering to more streaming platforms so users can listen on Spotify/Apple Music/YouTube/etc. We'd also like to port to other platforms to achieve integration with WhatsApp, SMS, Instagram DMs, and others, as well as build on human curators to better select music.
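As a simplified stand-in for the trained sentiment model, here is a sketch of the routing step: map an emoji's inferred sentiment to a playlist and return a link (the sentiment labels and playlist IDs are illustrative assumptions, not Maestro's actual data):

```python
# Hypothetical sentiment labels mapped to hypothetical YouTube playlist IDs;
# in Maestro the label comes from the model trained on the emoji-tweet dataset.
PLAYLISTS = {
    "joy": "PLexampleUpbeat",
    "sadness": "PLexampleMellow",
    "anger": "PLexampleHeavy",
}

def recommend(emoji_sentiment: str) -> str:
    playlist = PLAYLISTS.get(emoji_sentiment, PLAYLISTS["joy"])  # default mood
    return f"https://www.youtube.com/playlist?list={playlist}"

print(recommend("sadness"))  # link to the mellow playlist
```

The real pipeline then scrapes the chosen playlist for a specific track rather than returning the playlist link itself.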
# About Us Discord Team Channel: #Team-25 secretage001#6705, Null#8324, BluCloos#8986 <https://friendzr.tech/> ## Inspiration Over the last year the world has been faced with an ever-growing pandemic. As a result, students have faced increased difficulty in finding new friends and networking for potential job offers. Based on Tinder’s UI and LinkedIn’s connect feature, we wanted to develop a web application that would help students find new people to connect and network with in an accessible, familiar, and easy-to-use environment. Our hope is that people will be able to use Friendzr to network successfully. ## What it does Friendzr allows users to log in with their email or Google account and connect with other users. Users can record a video introduction of themselves for other users to see. When looking for connections, users can choose to connect or skip on someone’s profile. Choosing to connect allows the user to message the other party and network. ## How we built it The front-end was built with HTML, CSS, and JS using React. On our back-end, we used Firebase for authentication, CockroachDB for storing user information, and Google Cloud to host our service. ## Challenges we ran into Throughout the development process, our team ran into many challenges. Determining how to upload videos recorded in the app directly to the cloud was a long and strenuous process, as there are few resources about this online. Early on, we discovered that the scope of our project may have been too large, and towards the end, we ended up in a time crunch. Real-time messaging also proved incredibly difficult to implement. ## Accomplishments that we're proud of As a team, we are proud of our easy-to-use UI. We are also proud of getting recorded video to upload directly to the cloud. Additionally, figuring out how to authenticate users and develop a viable platform was very rewarding. ## What we learned We learned that when collaborating on a project, it is important to communicate and manage time. Version control is important, and code needs to be organized and planned in a well-thought-out manner. Video and messaging are difficult to implement, but rewarding once completed. In addition, one member learned how to use HTML, CSS, JS, and React over the weekend. The other two members were able to further develop their database management skills and both front- and back-end development. ## What's next for Friendzr Moving forward, the messaging system can be further developed. Currently, the UI of the messaging service is very simple and can be improved. We plan to add more sign-in options to give users more ways of logging in. We also want to implement AssemblyAI’s API for speech-to-text on the profile videos so the platform can reach people who are hard of hearing. Friendzr functions on both mobile and web, but our team hopes to further optimize each platform.
losing
## Inspiration The project is an **educational learning app** designed to teach English through a **structured roadmap**, particularly targeting **youth and students with *learning disabilities*.** It breaks down English learning into multiple levels, starting from the basics like alphabets and progressing to reading full sentences. Each level contains a variety of **mini-games that engage different senses**, using **visual and auditory cues** to enhance understanding and maintain the attention of students. Successfully completing games *rewards students with coins*, which they can use to purchase **AI-generated books** tailored to their preferences. The app provides continuous **guidance and motivation** through audio support, helping students when they get stuck, and **offering a *clear path* for next steps in their learning journey.** ## What our project does The project is an **innovative educational learning app** designed to address the unique challenges faced by youth, especially those with learning disabilities, in mastering English. It provides a **comprehensive, structured approach to language learning**, starting with the very basics like *alphabets* and gradually progressing to more advanced skills, such as *reading* and comprehending full sentences. The app is divided into multiple levels, each focused on specific topics, ensuring that students build a solid foundation before moving on to more complex concepts. Unlike existing educational games, this app offers a concise and effective ***roadmap*** that guides students *step-by-step* through the learning process, **reducing the overwhelming choice that can hinder progress for students with learning disabilities.** Each level includes a variety of mini-games, designed to be highly engaging and interactive, using a combination of visual and auditory cues to captivate students' attention. These games not only test knowledge but also **promote *multi-sensory learning*,** catering to short attention spans by being visually appealing and concise. A unique feature of the app is its **reward system**: when students successfully complete games, they *earn coins* that can be used to purchase **AI-generated books** within the app. These books are *custom-made* based on the student's preferences in topics, genres, and styles, offering personalized content that further strengthens their reading skills. Additionally, the app provides motivational support through **audio guidance**, helping students when they struggle and encouraging them to continue learning. Through this systematic, engaging, and supportive approach, the project empowers students to improve their literacy skills while making learning *fun and rewarding*. --- ### **Key Features:** * **Structured roadmap:** Guides students from basic to advanced English learning. * **Multi-sensory engagement:** Visual and auditory cues enhance the learning experience. * **Reward system:** *Earn coins* to purchase personalized AI-generated books. * **Inclusivity:** Audio support helps students when they face challenges. * **Motivational design:** Short attention span-friendly and visually appealing games. ## How we built it The project was built using **NextJS** and **React** for both the frontend and backend. We integrated **GPT-4o**, **DALL-E 3**, and **Google's Web Speech** APIs for *generating AI images, AI-powered stories*, and *speech recognition* functionalities.
To manage user data and in-game currency within the application, we utilized the **Prisma** library and **SQLite** for our database system. In addition, we developed an **Adobe Add-on** using **JavaScript**, enabling users to easily upload avatars by leveraging **React's built-in camera** library. This seamless integration enhances user interaction by providing a smooth, intuitive experience for customizing avatars. ## Challenges we ran into One of the hardest challenges was working with Adobe Express. We set out to create our own add-on, but the process was far more complex and time-consuming than we expected. The limited documentation made things even trickier, and connecting the playground with our code led to a lot of trial and error. After hours of hard work, we finally got it working, and that moment felt like a huge win! We were also overambitious at the start of the hackathon. We had all these big ideas and plans, but as we got deeper into the project, it became clear that some of them were far more complicated than we anticipated. This forced us to take a step back and re-evaluate what was actually achievable within the time limit. We had to compromise and shift our focus to more realistic goals, scaling back some features while making sure we could still deliver a polished final product. It was a tough decision, but it taught us the importance of balancing ambition with practicality. Even though these challenges pushed us to the limit, solving them was incredibly rewarding. We learned so much along the way, and by the end of it, we were proud of what we achieved! ## What's next Looking ahead, we have some exciting plans for the future of our project. One of our main goals is to expand the game to support teaching in multiple languages, making it accessible to a wider audience. We also want to integrate more AI features to make the application even more responsive and efficient. By doing this, we hope to offer users more personalized support and improve accessibility, helping them on their learning journey in an even more interactive and engaging way. The possibilities are endless, and we’re excited to see where we can take it next! We’re incredibly passionate about the impact this project can have. With literacy rates dropping and children with special needs not always having access to the extra resources they need, we believe this tool can play a crucial role in supporting their success. Education is the foundation of opportunity, and by expanding our game to offer multi-language teaching and integrating AI for more personalized support, we hope to bridge some of those gaps. We see this project as more than just a game—it’s a way to give children, especially those who need extra help, the tools they need to thrive in their learning journey.
## Inspiration From fashion shows to strutting down downtown Waterloo, you **are** the main character… Unless… your walk is less of a confident strut and more of a hobble. I’ve always wanted to know if my walking/running form is bad, and *Runway Ready* is here to tell me just that. ## What it does Your phone *is* the hardware. By leveraging the built-in accelerometer within our mobile devices, we’ve built an app that sends the accelerometer data in the x, y, and z directions collected from our phones to our server, which then analyzes the accelerometer patterns to deduce whether your walk is a ‘strut’ or a ‘limp’, without any additional hardware other than the phone in your pocket! ## How we built it We used Swift to build the simple mobile app that gathers and transmits the accelerometer data to our server. The server, powered by Node.js, collects, processes, and analyzes the data in real time using JavaScript. Finally, the website used to display the results is built on Express. ## Challenges we ran into Signal processing was a huge challenge. The seemingly easy task of differentiating between a good and a bad walk is not straightforward. After hours of graphing and signal analysis, we created an algorithm that can separate the two (a simplified sketch appears below). Connecting the data sent from the mobile app to our backend signal processing also took way too much time. ## Accomplishments that we're proud of After hours of rigorous data collection and graphical and algebraic signal analysis, we *finally* came up with a simple algorithm that works, with basically zero previous signal processing knowledge! ## What we learned While we've used languages such as Swift and JavaScript separately, this project was a chance for us to combine all our knowledge and skills in these software tools to create something with potential to be expanded across various fields. ## What's next for *Runway Ready*? It’s a silly idea; we know. But it actually sparked from our hatred of the piercing pain of shin splints that occurs on long runs. Shin splints are partially caused by improper running form, which *Runway Ready* could be used to fix. *Runway Ready* has a diverse range of applications including sport performance, physiotherapy, injury/surgery recovery, and of course, making you an **everyday runway supermodel**. Given a bit more time, we could collect more data and employ ML to make a more robust algorithm that can detect even the most subtle issues in walking/running form.
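The team's actual classifier came from hours of bespoke signal analysis; as a simplified sketch of the general idea (our assumption, not their exact algorithm), one can look at the regularity of step peaks in the acceleration magnitude, since a limp tends to alternate short and long steps:

```python
import numpy as np
from scipy.signal import find_peaks

def gait_label(ax, ay, az, fs=100):
    """Classify a walk from raw accelerometer axes sampled at fs Hz."""
    mag = np.sqrt(np.asarray(ax)**2 + np.asarray(ay)**2 + np.asarray(az)**2)
    mag = mag - mag.mean()                        # remove the gravity offset
    peaks, _ = find_peaks(mag, distance=fs // 4)  # candidate step impacts
    if len(peaks) < 3:
        return "not enough steps"
    intervals = np.diff(peaks) / fs
    # Coefficient of variation of step intervals: irregular timing reads as a limp.
    cv = intervals.std() / intervals.mean()
    return "strut" if cv < 0.2 else "limp"  # 0.2 is an illustrative threshold
```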
# Doctors Within Borders ### A crowdsourcing app that improves first response time to emergencies by connecting city 911 dispatchers with certified civilians ## 1. The Challenge In Toronto, ambulances get to the patient within 9 minutes 90% of the time. We all know that the first few minutes after an emergency occurs are critical, and just a few minutes can mean the difference between life and death. Doctors Within Borders aims to get the closest responder within 5 minutes of the patient to arrive on scene, so the patient gets the help they need earlier. ## 2. Main Features ### a. Web view: The Dispatcher The dispatcher takes down information about an ongoing emergency from a 911 call and dispatches a Doctor with the help of our dashboard. ### b. Mobile view: The Doctor A Doctor is a certified individual who is registered with Doctors Within Borders. Each Doctor is identified by their unique code. The Doctor can choose when they are on duty. On-duty Doctors are notified whenever a new emergency occurs that is both within a reasonable distance and matched to the Doctor's certified skill level (see the sketch below). ## 3. The Technology The app uses *Flask* to run a server, which communicates between the web app and the mobile app. The server supports an API which is used by the web and mobile apps to get information on Doctor positions, identify emergencies, and dispatch Doctors. The web app was created in *Angular 2* with *Bootstrap 4*. The mobile app was created with *Ionic 3*. Created by Asic Chen, Christine KC Cheng, Andrey Boris Khesin and Dmitry Ten.
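The writeup doesn't give the dispatch logic, so here is a minimal sketch of one plausible version (field names, skill encoding, and the 2.5 km radius are assumptions): filter on-duty Doctors by skill level, then pick the nearest by great-circle distance.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2)**2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2)**2
    return 2 * 6371 * asin(sqrt(a))

def dispatch(emergency, doctors, max_km=2.5):
    # Keep only on-duty Doctors certified at or above the required skill level.
    candidates = [
        d for d in doctors
        if d["on_duty"] and d["skill_level"] >= emergency["required_skill"]
    ]
    in_range = [
        d for d in candidates
        if haversine_km(d["lat"], d["lon"], emergency["lat"], emergency["lon"]) <= max_km
    ]
    # Nearest qualifying Doctor, or None if nobody is close enough.
    return min(
        in_range,
        key=lambda d: haversine_km(d["lat"], d["lon"], emergency["lat"], emergency["lon"]),
        default=None,
    )
```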
losing
![Blik Logo](https://i.dhr.wtf/r/Small_stuff_(1).png) ## Inspiration Over the last five years, we've seen the rise and the slow decline of the crypto market. It has made some people richer, and many have suffered because of it. We realized that this problem can be solved with data and machine learning - what if we could accurately forecast crypto tokens, so that decisions are always calculated? What if we also added a chatbot, so that crypto is a lot less overwhelming for users? ## What it does *Blik* is an app and a machine learning model, made using MindsDB, that forecasts cryptocurrency data. Not only that, but it also comes with a chatbot that you can talk to in order to make calculated decisions for your next trades. The questions can be as simple as *"How's bitcoin been this year?"* or something as personal as *"I want to buy a Tesla worth $50,000 by the end of next year. My salary is $4,000 per month. Which currency should I invest in?"* We believe that this functionality can help users make proper, calculated decisions about what they want to invest in, and in return get high returns for their hard-earned money! ## How we built it Our tech stack includes: * **Flutter** for the mobile app * **MindsDB** for the ML model + real-time finetuning * **Cohere** for the AI model and NLP from user input * **Python** backend to interact with MindsDB and Cohere * **FastAPI** to connect frontend and backend * **Kaggle** to source the datasets of historic crypto prices ## Challenges we ran into We started off with MindsDB's default model training; however, we realized that we would need many specific things like forecasting at specific dates, with a higher horizon, etc. The mentors at the MindsDB counter helped us a lot. With their help, we were able to set up a working prototype and were getting confident about our plan. One more challenge we ran into was that the forecasts for a particular crypto would always end up spitting out the same numbers, making the predictions hard for users to rely on. We then switched to NeuralTS as our engine, which was perfect. Getting the forecasts to be as accurate as possible while keeping the model performant enough was definitely a challenge for us. Solving every small issue would give rise to another one, but thanks to the mentors and the amazing documentation, we were able to figure out the MindsDB part. Then, we implemented the AI chat feature using Cohere. We had a great experience with the API, as it was easy to use and the chat completions were really good. We wanted the generated output from Cohere to produce an SQL query to run on MindsDB. Getting this right was challenging, as we'd always need the same data types in a structured format in order to stitch together an SQL command. We figured this out using advanced prompting techniques and by changing the way we pass the data into the SQL. We also used some code to clean up the generated text and make sure that it's always compatible (a simplified sketch appears at the end of this writeup). ## Accomplishments that we're proud of Honestly, going from an early ideation phase to an entire product in just two days, for an indie team of two college freshmen, is really a moment of pride. We created a fully working product with an AI chatbot and more. Even though we were both new to all of this - integrating crypto with AI technologies is a challenging problem - thankfully MindsDB was very fun to work with.
We are extremely happy about the MindsDB learnings, as we can now apply them in our other projects to enhance them with machine learning. ## What we learned We learnt AI and machine learning using MindsDB, interacting with AI through advanced prompting, understanding users' needs, designing beautiful apps, and presenting data in a useful yet beautiful way in the app. ## What's next for Blik At Blik, long term, we plan on expanding this into a full-fledged crypto trading solution, where users can sign up and create automations that they can run to "get rich quick". Short term, we plan to increase the model's accuracy by aggregating news into it, along with cryptocurrency information like founder details and the market ownership of the currency. All this data can help us further develop the model to be more accurate and helpful.
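As a simplified sketch of the SQL-cleanup step mentioned above (our illustration of the idea, not Blik's exact code): strip markdown fences and trailing chatter from the LLM output and accept only a single SELECT statement before sending it to MindsDB.

```python
import re

def clean_generated_sql(text: str) -> str:
    """Best-effort cleanup of LLM output into one safe SELECT statement."""
    # Drop markdown code fences the model may wrap the query in.
    text = re.sub(r"`{3}(?:sql)?", "", text).strip()
    # Keep only the first statement; models often append explanations.
    statement = text.split(";")[0].strip()
    if not statement.lower().startswith("select"):
        raise ValueError(f"Refusing non-SELECT statement: {statement[:40]!r}")
    return statement + ";"

fence = chr(96) * 3  # three backticks, built at runtime so this block renders cleanly
print(clean_generated_sql(fence + "sql\nSELECT * FROM btc_forecast;\n" + fence + " Hope this helps!"))
# SELECT * FROM btc_forecast;
```

Rejecting anything that is not a lone SELECT also acts as a cheap guard against the model emitting destructive statements.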
## Inspiration **With the world producing more waste than ever recorded, sustainability has become a very important topic of discussion.** Whether social, environmental, or economic, sustainability has become a key factor in how we design products and how we plan for the future. Especially during the pandemic, we turned to becoming more efficient and resourceful with what we had at home. That's where home gardens come in. Many started home gardens as a hobby or a cool way to grow your own food from the comfort of your own home. However, with the pandemic slowly coming to a close, many may no longer have the time to micromanage their plants, and those who are interested in starting this hobby may not have the patience. Enter *homegrown*, an easy way for anyone interested in starting their own mini garden to manage their plants and enjoy the pleasures of gardening. ## What it does *homegrown* monitors each individual plant, adjusted depending on the type of plant. Equipped with different sensors, *homegrown* monitors the plant's health, whether that's its exposure to light, moisture, or temperature. When it detects fluctuations in these levels, *homegrown* sends a text to the owner, alerting them about the plant's condition and suggesting changes to alleviate these problems (a sketch of this pipeline follows). ## How we built it *homegrown* was built using Python, an Arduino, and other hardware components. The different sensors connected to the Arduino take measurements and record them. They are then sent as one JSON file to a Python script, where the data is further parsed and sent by text to the user through the Twilio API. ## Challenges we ran into We originally planned on using CockroachDB as a database but scrapped the idea, since dealing with initializing the database and trying to extract data out of it proved to be too difficult. We ended up using an Arduino instead to send the data directly to a Python script that would handle it. Furthermore, ideation took quite a while because it was our first time meeting each other. ## Accomplishments that we're proud of Forming a team when we'd never met and had limited experience, and still building something in the end that brought together each of our respective skills, is something that we're proud of. Combining hardware and software was a first for some of us, so we're proud of adapting quickly to cater to each other's strengths. ## What we learned We learned more about Python and its various libraries, building on each other's work to create more and more complex programs. We also learned about how different hardware components can interact with software components to increase functionality and allow for more possibilities. ## What's next for homegrown *homegrown* has the possibility to grow bigger, not only in terms of the number of plants it can monitor, but also the amount of data it can take in surrounding each plant. With more data comes more functionality, which allows for more thorough analysis of the plant's conditions to provide a better and more efficient growing experience for the plant and the user.
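As a minimal sketch of the alerting pipeline, assuming the Arduino writes one JSON object per line over USB serial (the serial port, JSON fields, threshold, and Twilio credentials are placeholders):

```python
import json
import serial                    # pyserial, for the Arduino's USB serial link
from twilio.rest import Client

ser = serial.Serial("/dev/ttyACM0", 9600)          # placeholder Arduino port
sms = Client("TWILIO_SID", "TWILIO_AUTH_TOKEN")    # placeholder credentials

MOISTURE_MIN = 300  # illustrative sensor threshold, tuned per plant type

while True:
    reading = json.loads(ser.readline())           # e.g. {"plant": "basil", "moisture": 250}
    if reading["moisture"] < MOISTURE_MIN:
        sms.messages.create(
            body=f"homegrown: {reading['plant']} looks thirsty "
                 f"(moisture={reading['moisture']}). Time to water!",
            from_="+15550001111",  # placeholder Twilio number
            to="+15552223333",     # placeholder owner number
        )
```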
## Inspiration My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor who didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals. ## What it does Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about their clients and financial assets, as well as past interactions, to create a deep and objective measure of interaction quality and maximize it through optimal matches. ## How we built it The whole program is built in Python, using several libraries for gathering financial data, processing it, and building scalable models on AWS. The main differentiator of our model is its full utilization of past data during training to make analyses more holistic and accurate. Instead of going with a classification solution or neural network, we combine several models to analyze specific user features and classify broad features before the main model, where we build a regression model for each category. ## Challenges we ran into Our group member crucial to building a front-end could not make it, so our designs are not fully interactive. We also had much to code but not enough time to debug, which leaves the software unable to fully work. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as to rate clients' investment habits on a numerical scale. We assigned a numerical bonus to clients who consistently invest at a certain rate. The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea. ## Accomplishments that we're proud of Learning a whole new machine learning framework using SageMaker and crafting custom, objective algorithms for measuring interaction quality, while fully utilizing past interaction data during training through an innovative approach to categorical model building. ## What we learned Coding might not take that long, but making it fully work takes just as much time. ## What's next for Resonance Finish building the model and possibly try to incubate it.
partial
## Inspiration One of our team members has firsthand experience working in retail, where shoplifting was always a looming threat. We wanted to help small business owners take back control of their stores and feel more secure. TheftWatch aims to reduce the anxiety and losses that come from theft by providing better insights and actionable alerts. * 🚨 According to the National Retail Federation (NRF), shoplifting cost retailers $112.1 billion in 2022 alone in the United States. * 🚨 Organized crime groups have become increasingly involved in shoplifting, targeting high-value items and utilizing sophisticated techniques to evade detection. * 🚨 Studies indicate that shoplifting is a common occurrence, with some estimates suggesting that it affects a large percentage of retail establishments. Smart cities often incorporate technologies such as: * 📸 Surveillance Cameras: These can be used to monitor store entrances and exits, identify potential shoplifters, and provide evidence for law enforcement. * 📊 Analytics and Data: Advanced analytics can analyze patterns in shoplifting behavior, identify hotspots, and predict potential incidents. **How can smart city technologies be effectively integrated into retail environments to deter shoplifting?** ## What it does We decided to go much further and make our own live representation of a store. 1. Used our workspace to simulate a store environment ![test](https://harvard-devpost-images.s3.amazonaws.com/s3_1.png) We even tested on the MLH stand! ![test](https://harvard-devpost-images.s3.amazonaws.com/s3_2.png) 2. Positioned 4 cameras (our mobile devices) on each corner to simulate a store's surveillance cameras ![test](https://harvard-devpost-images.s3.amazonaws.com/s3_5.png) 3. Recorded 5 samples from each camera view to get insights from our ML model 4. Created different graphs that users can view to see shoplifts over time ![test](https://harvard-devpost-images.s3.amazonaws.com/s3_3.png) 5. Created a heatmap of the most dangerous areas of the store to get a clear view of dangerous zones (a sketch of the binning idea appears below) ![test](https://harvard-devpost-images.s3.amazonaws.com/s3_4.png) After running this experiment on our workspace, we realized that the information is very valuable for the event's security staff * ✅ Upload security footage videos to analyze and provide insights to store owners * ✅ Establish LLM conversations with the security footage to learn more about the shoplift * ✅ 2D mapping and heatmap of the store to view dangerous zones * ✅ Facial recognition and timestamped snapshot when dangerous activity is detected * ✅ Live dashboard to view data fetched from security cameras * ✅ Created and trained our own ML model from open data to improve detection: dangerous, suspicious, safe ## How we built it We leveraged computer vision tools and machine learning models to classify customer behavior and identify suspicious actions. The system integrates with WhatsApp for real-time alerts and uses object recognition and facial recognition technologies to provide clear insights into incidents. The intricate dashboard was created to give shop owners actionable data in a simple, visual way. ## Challenges we ran into Developing an accurate classification system for customer behavior was challenging. Defining clear metrics for what constitutes suspicious or dangerous activity, while minimizing false positives, required extensive tuning and testing. We also faced difficulties integrating multiple technologies into a seamless product.
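The store heatmap can be produced by binning detection coordinates onto a 2D floor grid; here is a minimal sketch of that idea (the grid size, floor dimensions, and detection format are assumptions, not TheftWatch's exact pipeline):

```python
import numpy as np

def store_heatmap(detections, width_m=10.0, depth_m=8.0, bins=20):
    """detections: list of (x_m, y_m, label) positions mapped to floor space.

    Returns a bins x bins grid counting suspicious/dangerous events,
    ready to render with e.g. matplotlib's imshow.
    """
    risky = [(x, y) for x, y, label in detections
             if label in ("suspicious", "dangerous")]
    if not risky:
        return np.zeros((bins, bins))
    xs, ys = zip(*risky)
    grid, _, _ = np.histogram2d(
        xs, ys, bins=bins, range=[[0, width_m], [0, depth_m]]
    )
    return grid
```

Cells with high counts correspond to the "dangerous zones" highlighted on the dashboard.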
## Accomplishments that we're proud of * 🎯 Hosted and live website * 🎯 Visual representation of our workspace as a store * 🎯 Creation of our own ML model to detect shoplifting * 🎯 Live alerts whenever our cameras detect suspicious activity We're proud to have built a functional system that addresses a real problem for small businesses. Creating a reliable classification model for customer behavior and successfully integrating different tech components—including real-time notifications and a detailed dashboard—was a huge achievement for our team. ## What we learned We learned a lot about the complexities of building a computer vision solution that works in real time and can make meaningful decisions based on nuanced data. We also gained insights into balancing the need for accurate detection with ensuring a positive customer experience. ## What's next for TheftWatch We're planning to refine our classification model to improve accuracy and reduce false alarms. We also want to add more features to the dashboard, such as predictive analytics to help shop owners anticipate theft risks before they happen. Additionally, expanding our notification system to integrate with other platforms, such as SMS or email, is on our roadmap. ## SDGs 11 & 9: Sustainable Cities, Communities, and Industry Innovation TheftWatch contributes to both **SDG 11** and **SDG 9** by promoting safer and more resilient urban environments while fostering innovation and digital transformation in the retail industry. By helping small businesses reduce losses from theft, TheftWatch strengthens the economic stability and safety of cities, enabling more sustainable and inclusive communities. It also empowers smaller enterprises by offering affordable, tech-driven solutions that enhance security and efficiency, directly supporting the development of resilient infrastructure and fostering innovation in retail. This solution provides small businesses with the tools they need to proactively manage risks and optimize their operations, leading to a thriving, secure urban ecosystem. ## Smart Cities TheftWatch aligns with the vision of **smart cities** by contributing to the digital infrastructure that enhances urban living. In smart cities, data-driven solutions play a significant role in improving safety and security, and TheftWatch integrates seamlessly into this ecosystem. By providing real-time theft alerts and insights, TheftWatch helps create smarter, more secure retail environments, contributing to the overall intelligence and efficiency of city infrastructure. It supports the development of smarter business operations while ensuring safety and resilience in urban areas.
## Inspiration As the world grapples with challenges like climate change, resource depletion, and social inequality, it has become imperative for organizations to not only understand their environmental, social, and governance (ESG) impacts but also to benchmark and improve upon them. However, one of the most significant hurdles in this endeavor is the complexity and inaccessibility of sustainability data, which is often buried in lengthy official reports and varied formats, making it challenging for stakeholders to extract actionable insights. Recognizing the potential of AI to transform this landscape, we envision Oasis as a solution to democratize access to sustainability data, enabling more informed decision-making and fostering a culture of continuous improvement toward global sustainability goals. By conversing with AI agents, companies are able to collaborate in real time to gain deeper insights and work towards solutions. ## What it does Oasis is a groundbreaking platform that leverages AI agents to streamline the parsing, indexing, and analysis of sustainability data from official government and corporate ESG reports. It provides an interface for companies to assess their records and converse with an AI agent that has access to their sustainability data. The agent helps them benchmark their practices against those of similar companies and narrow down ways they can improve through conversation. Companies can effortlessly benchmark their current sustainability practices, assess their current standing, and receive tailored suggestions for enhancing their sustainability efforts. Whether it's identifying areas for improvement, tracking progress over time, or comparing practices against industry standards, Oasis offers a comprehensive suite of features to empower organizations in their sustainability journey. ## How we built it Oasis uses a sophisticated blend of the following: 1. LLM (LLaMA 2) parsing to extract data from complex reports. We fine-tuned an instance of `meta-llama/Llama-2-7b-chat-hf` on the HuggingFace dataset [Government Report Summarization](https://huggingface.co/datasets/ccdv/govreport-summarization) using MonsterAPI. We use this model to parse data points from ESG PDF text, since these documents are in a non-standard format, into a JSON format. LLMs are incredibly powerful at extracting key information and summarization, which is why we see such a strong use case here. 2. An open-source text embedding model (SentenceTransformers) to index data, including metrics and data points, within a vector database. LLM-parsed data points contain key descriptors. We use an embedding model to index these descriptors in semantic space, allowing us to compare similar metrics across companies. Two key points may not have the same descriptions but can be semantically similar, which is why indexing with embeddings is beneficial. We use the SentenceTransformer model `msmarco-bert-base-dot-v5` for text embeddings. We also use the InterSystems IRIS Data Platform to store embedding vectors, on top of the LangChain framework. This is useful for finding similar metrics across different companies and also for RAG, as discussed next. 3. Retrieval augmented generation (RAG) to incorporate relevant metrics and data points into conversation. To enable users to converse with the agent and inspect and make decisions based on real data, we use RAG integrated with our IRIS vector database, running on the LangChain framework. We have a frontend UI for interacting with our agent in real time. 4.
4. Embedding similarity to semantically align data points for benchmarking across companies. Our frontend UI also presents key metrics for benchmarking a user's company. It uses embedding similarity to match a company's metrics with relevant metrics from other companies (a minimal sketch of this alignment appears at the end of this section).

## Challenges we ran into

One of the most challenging parts of the project was prompting the LLM and running numerous experiments until the LLM output matched what was expected. Since LLMs are non-deterministic in nature and we required outputs in a consistent JSON form (for parsed results), we needed to prompt the LLM and reinforce the constraints multiple times. This was a valuable lesson that helped us learn how to leverage LLMs in intricate ways for niche applications.

## Accomplishments that we're proud of

We are incredibly proud of developing a platform that not only addresses a critical global challenge but does so with a level of sophistication and accessibility that sets a new standard in the field. Successfully training AI models to navigate the complexities of ESG reports marks a significant technical achievement. The ability to turn dense reports into clear, actionable insights represents a leap forward in sustainability practice.

## What we learned

Throughout the process of building Oasis, we learned the importance of interdisciplinary collaboration in tackling complex problems. Combining AI and sustainability expertise was crucial in understanding both the technical and domain-specific challenges. We also gained insights into the practical applications of AI in real-world scenarios, particularly in how NLP and machine learning can be leveraged to extract and analyze data from unstructured sources. The iterative process of testing and feedback was invaluable, teaching us that user experience is as important as the underlying technology in creating impactful solutions.

## What's next for Oasis

The journey for Oasis is just beginning. Our next steps involve expanding the corpus of sustainability reports to cover a broader range of industries and geographies, enhancing the platform's global applicability. We are also exploring the integration of predictive analytics to offer forward-looking insights, enabling users to not just assess their current practices but also to anticipate future trends and challenges. Collaborating with sustainability experts and organizations will remain a priority, as their insights will help refine our models and ensure that Oasis continues to meet the evolving needs of its users. Ultimately, we aim to make Oasis a cornerstone in the global effort towards more sustainable practices, driving change through data-driven insights and recommendations.
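As a sketch of the metric-alignment idea described above, here is a minimal example; the model name is the one the project uses, while the descriptor strings are hypothetical stand-ins for LLM-parsed data points:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("msmarco-bert-base-dot-v5")

# Hypothetical descriptors parsed from two companies' ESG reports.
ours = ["Scope 1 greenhouse gas emissions (tCO2e)", "Water withdrawn (megalitres)"]
peers = ["Total direct GHG emissions", "Annual water consumption"]

emb_ours = model.encode(ours, convert_to_tensor=True)
emb_peers = model.encode(peers, convert_to_tensor=True)

# This model is trained for dot-product similarity, so score with dot products.
scores = util.dot_score(emb_ours, emb_peers)
best_match = scores.argmax(dim=1)  # align each of our metrics with its closest peer metric
```

The same descriptor vectors can be stored in the IRIS vector database, so alignment becomes a nearest-neighbor query rather than an in-memory comparison.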
# Inspiration and Product

There's a certain feeling we all have when we're lost. It's a combination of apprehension and curiosity – and it usually drives us to explore and learn more about what we see. It happens to be the case that there's a [huge disconnect](http://www.purdue.edu/discoverypark/vaccine/assets/pdfs/publications/pdf/Storylines%20-%20Visual%20Exploration%20and.pdf) between that which we see around us and that which we know: the building in front of us might look like a historic and famous structure, but we might not be able to understand its significance until we read about it in a book, at which time we lose the ability to visually experience that which we're in front of.

Insight gives you actionable information about your surroundings in a visual format that allows you to immerse yourself in your surroundings: whether that's exploring them, or finding your way through them. The app puts the true directions of obstacles around you where you can see them, and shows you descriptions of them as you turn your phone around. Need directions to one of them? Get them without leaving the app. Insight also supports deeper exploration of what's around you: everything from restaurant ratings to the history of the buildings you're near.

## Features

* View places around you heads-up on your phone - as you rotate, your field of vision changes in real time.
* Facebook Integration: trying to find a meeting or party? Call your Facebook events into Insight to get your bearings.
* Directions, wherever, whenever: surveying the area and found where you want to be? Touch and get instructions instantly.
* Filter events based on your location. Want a tour of Yale? Touch to filter only Yale buildings, and learn about the history and culture. Want to get a bite to eat? Change to a restaurants view. Want both? You get the idea.
* Slow day? Change your radius to a short distance to filter out locations. Feeling adventurous? Change your field of vision the other way.
* Want to get the word out on where you are? Automatically check in with Facebook at any of the locations you see around you, without leaving the app.

# Engineering

## High-Level Tech Stack

* NodeJS powers a RESTful API hosted on Microsoft Azure.
* The API server takes advantage of a wealth of Azure's computational resources:
  + A Windows Server 2012 R2 instance and an Ubuntu 14.04 Trusty instance, each of which handles different batches of geospatial calculations
  + Azure internal load balancers
  + Azure CDN for asset pipelining
  + Azure automation accounts for version control
* The Bing Maps API suite, which offers powerful geospatial analysis tools:
  + RESTful services such as the Bing Spatial Data Service
  + Bing Maps' Spatial Query API
  + Bing Maps' AJAX control, externally through direction and waypoint services
* iOS Objective-C clients interact with the server RESTfully and display results as parsed

## Application Flow

iOS handles the entirety of the user interaction layer and the authentication layer for user input. Users open the app and, if logging in with Facebook or Office 365, proceed through the standard OAuth flow, all on-phone. Users can also opt to skip the authentication process with either provider (in which case they forfeit the option to integrate Facebook events or Office 365 calendar events into their views). After sign-in (assuming the user grants permission for use of these resources), and upon startup of the camera, requests are sent with the user's current location to a central server on an Ubuntu box on Azure.
The server parses that location data and initiates a multithreaded Node process via Windows 2012 R2 instances. These processes do the following, and more:

* Geospatial radial search schemes with data from Bing
* Location detail API calls from Bing Spatial Query APIs
* Review data about relevant places from a slew of APIs

After the data is all present on the server, it's combined and analyzed, also on R2 instances, via the following:

* Haversine calculations for distance measurements, in accordance with radial searches
* Heading data (to make client-side parsing feasible)
* Condensation and dynamic merging – asynchronously cross-checking the collected data to determine which events are closest

Ubuntu brokers and manages the data, sends it back to the client, and prepares for and handles future requests.

## Other Notes

* The most intense calculations involved the application of the [Haversine formulae](https://en.wikipedia.org/wiki/Haversine_formula), i.e. for two points on a sphere, the central angle between them can be described as:

  $$\Delta\sigma = 2\arcsin\!\sqrt{\sin^2\!\left(\tfrac{\phi_2-\phi_1}{2}\right) + \cos\phi_1\,\cos\phi_2\,\sin^2\!\left(\tfrac{\lambda_2-\lambda_1}{2}\right)}$$

  and the distance as:

  $$d = R\,\Delta\sigma$$

  where $\phi$ denotes latitude, $\lambda$ longitude, and $R$ the Earth's radius (the result of which is non-Euclidean due to the Earth's curvature). The results of these formulae translate into the placement of locations on the viewing device. These calculations are handled by the Windows R2 instance, essentially running as a computation engine. All communications are RESTful between all internal server instances.

## Challenges We Ran Into

* *iOS and rotation*: there are a number of limitations in iOS that prevent interaction with the camera in landscape mode (a problem, given the need for users to see a wide field of view). For one thing, the requisite data registers aren't even accessible via daemons when the phone is in landscape mode. This was the root of the vast majority of our problems in iOS: since we were unable to use any inherited or pre-made views (we couldn't rotate them), we had to build all of our views from scratch.
* *Azure deployment specifics with Windows R2*: running a pure calculation engine (written primarily in C# with ASP.NET network interfacing components) was tricky at times to set up and get logging data for.
* *Simultaneous and asynchronous analysis*: simultaneously parsing asynchronously-arriving data with uniform Node threads presented challenges. Our solution was ultimately a recursive one that involved checking the status of other resources upon reaching the base case, then using that knowledge to better sort data as the bottoming-out step bubbled up.
* *Deprecations in Facebook's Graph APIs*: we needed to use the Facebook Graph APIs to query specific Facebook events for their locations: a feature only available in a slightly older version of the API. We thus had to use that version concurrently with the newer version (which also had unique location-related features we relied on), creating some degree of confusion and requiring care.

## A few of Our Favorite Code Snippets

A few gems from our codebase:

```
var deprecatedFQLQuery = '...
```

*The story*: in order to extract geolocation data from events via the Facebook Graph API, we were forced to use a deprecated API version for that specific query, which proved challenging in how we versioned our interactions with the Facebook API.
```
addYaleBuildings(placeDetails, function(bulldogArray) {
  addGoogleRadarSearch(bulldogArray, function(luxEtVeritas) {
    ...
```

*The story*: dealing with quite a lot of Yale API data meant we needed to be creative with our naming...

```
// R is the earth's radius in meters
var a = R * 2 * Math.atan2(
    Math.sqrt(
        Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) +
        Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) *
        Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)
    ),
    Math.sqrt(1 - (
        Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) +
        Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) *
        Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)
    ))
);
```

*The story*: while it was changed and condensed shortly after we noticed its proliferation, our implementation of the Haversine formula became cumbersome very quickly. Degree/radian mismatches between APIs didn't make things any easier.
partial
## Inspiration

Herpes Simplex Virus-2 (HSV-2) is the cause of Genital Herpes, a lifelong and contagious disease characterized by recurring painful and fluid-filled sores. Transmission occurs through contact with fluids from the sores of the infected person during oral, anal, and vaginal sex; transmission can occur in asymptomatic carriers. HSV-2 is a global public health issue, with an estimated 400 million people infected worldwide and 20 million new cases annually, one third of which take place in Africa (2012). HSV-2 increases the risk of acquiring HIV threefold, profoundly affects the psychological well-being of the individual, and poses a devastating neonatal complication.

The social ramifications of HSV-2 are enormous. The social stigma of sexually transmitted diseases (STDs) and the taboo of confiding in others mean that patients are often left on their own, to the detriment of their sexual partners. In Africa, the lack of healthcare professionals further exacerbates this problem. Further, the 2:1 ratio of female to male patients reflects the gender inequality whereby women are ill-informed and unaware of their partners' condition or their own. Most importantly, the symptoms of HSV-2 are often similar to various other, less severe dermatological issues, such as common candida infections and inflammatory eczema. It's very easy to dismiss Genital Herpes as these latter conditions, which are much less severe and non-contagious.

## What it does

Our team from Johns Hopkins has developed the humanitarian solution "Foresight" to tackle the taboo issue of STDs. Offered free of charge, Foresight is a cloud-based identification system which allows a patient to take a picture of a suspicious skin lesion with a smartphone and diagnose the condition directly in the iOS app. We have trained the computer vision and machine-learning algorithm, which is downloaded from the cloud, to differentiate between Genital Herpes and the less serious eczema and candida infections. We have a few main goals:

1. Remove the taboo involved in treating STDs by empowering individuals to make diagnoses independently through our computer vision and machine learning algorithm
2. Alleviate specialist shortages
3. Prevent misdiagnosis and inform patients to seek care if necessary
4. Provide location-based snapshots of local communities, enabling more potent public health intervention
5. Protect the sexual relationship between couples by allowing for transparency – diagnose your partner!

## How I built it

We first gathered 90 different images of 3 categories (30 each) of skin conditions that are common around the genital area: "HSV-2", "Eczema", and "Yeast Infections". We realized that a good way to differentiate between these conditions is the inherent differences in texture, which, although subtle to the human eye, are very perceptible to good algorithms.

We take advantage of the Bag of Words model common in the field of Web Crawling and Information Retrieval, and apply a similar algorithm, which is written from scratch except for the feature identifier (SIFT). The algorithm is as follows (a minimal code sketch appears at the end of this section):

Part A) Training the Computer Vision and Machine Learning Algorithm (Python)

1. We use a Computer Vision feature-identifying algorithm called SIFT to process each image and to identify "interesting" points like corners and other patches that are highly unique
2. We consider each patch around the "interesting" points as a texton, or unit of characteristic texture
3. We build a vocabulary of textons by identifying the SIFT points in all of our training images, and use the machine learning algorithm k-means clustering to narrow down to a list of 1000 "representative" textons
4. For each training image, we build our own descriptor as a vector, where each element is the normalized frequency of a texton. We further use tf-idf (term frequency, inverse document frequency) optimization to improve the representational power of each vector. (All of this is manually programmed.)
5. Finally, we save these vectors in memory. When we want to determine whether a test image depicts any of the 3 categories, we encode the test image into the same tf-idf vector representation and apply a k-nearest neighbors search to find the optimal class. We have found through experimentation that k=4 works well as a trade-off between accuracy and speed.
6. We tested this model with a randomly selected subset that is 10% the size of our training set and achieved 89% prediction accuracy!

Part B) Ruby on Rails Backend

1. The previous machine learning model can be expressed as an aggregate of 3 files: cluster centers in SIFT space, tf-idf statistics, and classified training vectors in cluster space
2. We output the machine learning model as CSV files from Python, and write an injector in Ruby that inserts the trained model into our PostgreSQL database on the backend
3. We expose the API such that our mobile iOS app can download our trained model directly through an HTTPS endpoint
4. Beyond storage of our machine learning model, our backend also includes a set of API endpoints catering to public health purposes: each time an individual on the iOS app makes a diagnosis, the backend is updated to reflect the demographic information and diagnosis results of the individual's actions. This information is visible on our web frontend.

Part C) iOS app

1. The app takes in demographic information from the user and downloads a copy of the trained machine learning model from our RoR backend once
2. Once the model has been downloaded, it is possible to make a diagnosis even without internet access
3. The user can take an image directly or upload one from the phone library for diagnosis, and a diagnosis is given in several seconds
4. When the diagnosis is given, the demographic and diagnostic information is uploaded to the backend

Part D) Web Frontend

1. Our frontend leverages the stored community data (pooled from diagnoses made on individual phones) accessible via our backend API
2. The actual web interface is a portal for public health professionals like epidemiologists to understand the STD trends (as pertaining to our 3 categories) in a certain area. The heat map is live.
3. Built with HTML5, CSS3, JavaScript, and jQuery

## Challenges I ran into

It is hard to find current STD prevalence and incidence data outside the United States. Surveillance data is limited in most African countries, and the situation is even worse for stigmatized diseases. We used the global HSV-2 prevalence and incidence report from the World Health Organization (WHO), published in 2012.

Another issue we faced was the ethics of collecting disease status from users. We were also conflicted on whether we should inform the user's spouse of their result. It is an ethical dilemma between patient confidentiality and beneficence.

## Accomplishments that I'm proud of
1. We successfully built a cloud-based image recognition system that distinguishes between HSV-2, yeast infection, and eczema skin lesions using a machine learning algorithm, with 89% accuracy on a randomly selected test set that is 10% the training size.
2. Our mobile app lets users anonymously send their pictures to our cloud database for recognition, avoiding the stigmatization of STDs by their neighbors.
3. From a public health perspective, mapping the demographic distribution of STDs in Africa could assist in preventing HSV-2 infection and in providing more medical advice to eligible patients.

## What I learned

We learned much more about HSV-2 on the ground and its ramifications for society. We also learned about ML, computer vision, and other technological solutions available for STD image processing.

## What's next for Foresight

Extrapolating our workflow for Machine Learning and Computer Vision to other diseases, and expanding our reach to other developing countries.
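Here is the minimal sketch of the bag-of-visual-words pipeline from Part A promised above. It is an illustration under stated assumptions, not the project's from-scratch implementation: `train_paths` and `train_labels` are hypothetical lists of image paths and category labels, and scikit-learn stands in for the hand-written clustering and tf-idf code:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_descriptors(path):
    """Extract SIFT descriptors (one per 'interesting' point) from an image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc

# Steps 1-3: pool descriptors from all training images, cluster into 1000 textons.
all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
vocab = KMeans(n_clusters=1000).fit(all_desc)

def texton_histogram(path):
    """Step 4: describe an image as a histogram over its nearest textons."""
    words = vocab.predict(sift_descriptors(path))
    return np.bincount(words, minlength=1000)

# Steps 4-5: tf-idf reweighting, then k-nearest-neighbors classification with k=4.
tfidf = TfidfTransformer()
X = tfidf.fit_transform(np.array([texton_histogram(p) for p in train_paths]))
clf = KNeighborsClassifier(n_neighbors=4).fit(X, train_labels)

def predict(path):
    return clf.predict(tfidf.transform([texton_histogram(path)]))[0]
```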
## Problem Statement

As the elderly population constantly grows, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care settings, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs.

## Solution

The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions.

## Developing Process

Prior to development, our designer used Figma to create a prototype, which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJS. For the cloud-based machine learning algorithms, we used computer vision, OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time (a minimal sketch of this detection loop appears at the end of this section). Because of limited resources, we decided to use our phones in place of dedicated cameras to provide the live streams for real-time monitoring.

## Impact

* **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury.
* **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response.
* **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision.
* **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times.
* **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency.

## Challenges

One of the biggest challenges was integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly.

## Successes

The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions.
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals.

## Things Learnt

* **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results.
* **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution.
* **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model.

## Future Plans for SafeSpot

* First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms, to provide a safer and more secure environment for vulnerable individuals.
* Apart from the web, the platform could also be implemented as a mobile app. In this scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it.
* The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
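The detection loop referenced in the Developing Process section might look roughly like the sketch below. It assumes the phone's live stream posts JPEG frames to a Flask endpoint; `classify_pose` and `notify_contacts` are hypothetical stand-ins for the trained pose model and the alerting hook, not the project's actual functions:

```python
import cv2
import numpy as np
from flask import Flask, request

app = Flask(__name__)

def classify_pose(frame):
    """Hypothetical stand-in for the model trained on poses and movements."""
    return "ok"  # the real model would return e.g. "ok", "fall", or "injury"

def notify_contacts(label):
    """Hypothetical alerting hook for designated emergency contacts."""

@app.route("/frame", methods=["POST"])
def handle_frame():
    # The phone's live stream posts one JPEG frame per request; decode and classify it.
    frame = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)
    label = classify_pose(frame)
    if label in ("fall", "injury"):
        notify_contacts(label)  # trigger the real-time alert
    return {"label": label}
```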
## Inspiration

Due to the shortages of doctors and clinics in rural areas, early diagnosis of skin diseases that may seem harmless on the outside but can become life-threatening is a real problem. MediDerma uses computer vision to predict skin diseases and surface the associated symptoms, focusing on conditions prominent in rural India. The lockdown has not helped either, with the increasing shortage of doctors due to many of them going on COVID duties. Keeping the goal of helping out our community in any way we can, Bhuvnesh Nagpal and Mehul Srivastava decided to create this AI-enabled project to help the underprivileged with one slogan in mind – “Prevention is better than Cure”

## What it does

MediDerma classifies a photo of a skin lesion into one of 29 disease classes and displays the predicted disease along with its common symptoms, focusing on conditions prominent in rural India.

## How we built it

The image classification model is integrated with a web app. There is an option to either click a picture or upload a saved one. The model, based on the ResNet34 architecture, then classifies the image of the skin disease into one of the 29 classes and shows the predicted disease and its common symptoms. We trained it on a custom dataset using the fastai library in Python (a minimal training sketch appears at the end of this section).

## Challenges we ran into

Collecting the dataset was a big problem, as medical datasets are not freely available. We collected the data from various sources, including Google Images and various medical websites.

## Accomplishments that we're proud of

We were able to build an innovative solution to a real-world problem. This solution might help a lot of people in the rural parts of India. We are really proud of what we have built. The app aims to provide a simple and accurate diagnosis of skin disease in rural parts of India where medical facilities are scarce.

## What we learned

We brainstormed a lot of ideas during the ideation part of this project and realized that there was a dire need for this app. While developing the project, we learned about the Streamlit framework, which allows us to easily deploy ML projects. We also learned about the various sources from which we can collect image data.

## What's next for MediDerma

We plan to improve this model to a level where it can be certified and deployed in a real-world setting. We can do this by collecting and feeding more data to the model. We also plan to increase the number of diseases that this app can detect.
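The fastai training setup described above can be sketched in a few lines. This is a minimal illustration assuming a hypothetical `skin_dataset/` folder organized by class; the project's custom dataset is not public:

```python
from fastai.vision.all import *

# Hypothetical folder layout: skin_dataset/<disease_name>/<image>.jpg, 29 classes.
dls = ImageDataLoaders.from_folder(
    "skin_dataset", valid_pct=0.2, item_tfms=Resize(224)
)

# Transfer-learn a ResNet34 backbone, as the writeup describes.
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(5)

# Single-image inference: returns (predicted_class, class_index, probabilities).
pred, idx, probs = learn.predict(PILImage.create("lesion.jpg"))
```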
winning
## Inspiration

Last year we had to go through the hassle of retrieving a physical key from a locked box in a hidden location in order to enter our AirBnB. After seeing the August locks, we thought there must be a more convenient alternative. We thought of other situations where you would want to grant access to your locks. In many cases where you would want to grant only temporary access, such as AirBnB, escape rooms, or visitors or contractors at a business, you would want the end user to sign an agreement before being granted access, so naturally we looked into the DocuSign API.

## What it does

The app has two pieces: a way for homeowners to grant temporary access to their clients, and a way for the clients to access the locks. The property owner fills out a simple form with the phone number of their client as a way to identify them, the address of the property, the end date of their stay, and the details needed to access the August lock. Our server then generates a custom DocuSign Click form and waits for the client. When the client accesses the server, they first have to agree to the DocuSign form, which is mostly our agreement, but includes details about the time and location of the access granted, and includes a section for the property owners to add their own details. Once they have agreed to the form, they are able to use our website to lock and unlock the August lock they have been granted access to via the internet, until the period of access specified by the property owner ends.

## How we built it

We set up a Flask server and made an outline of what the website would be. Then we worked on figuring out the API calls we would need to make in local Python scripts. We developed the DocuSign and August pieces separately. Once the pieces were ready, we began integrating them into the Flask server. Then we worked on debugging and polishing our product.

## Challenges we ran into

Some of the API calls were complex, and it was difficult figuring out which pieces of data were needed and how to format them in order to use the APIs properly. The hardest API piece to implement was programmatically generating DocuSign documents. Also, debugging was difficult once we were working on the Flask server, but once we figured out how to use Flask debug mode, it became a lot easier.

## Accomplishments that we're proud of

We successfully implemented all the main pieces of our idea, including ensuring users signed via DocuSign, controlling the August lock, rejecting users after their access expires, and including both the property owner and client sides of the project. We are also proud of the potential security of our system. The renter is given absolutely minimal access. They are never given direct access to the lock info, removing potential security vulnerabilities. They log in to our website, and both the verification that they have permission to use the lock and the API calls to control the lock occur on our server.

## What we learned

We learned a lot about web development, including how to use cookies, forms, and URL arguments. We also gained a lot of experience in implementing third-party APIs.

## What's next for Unlocc

The next steps would be expanding the rudimentary account system with a more polished one, having a lawyer help us draft the legalese in the DocuSign documents, and contacting potential users such as AirBnB property owners or escape room companies.
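A minimal sketch of the server-side access check described above, assuming hypothetical helper functions in place of the real DocuSign Click and August API calls (those calls are not reproduced here, and the route and data shapes are illustrative):

```python
from datetime import datetime, timezone
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"

# phone number -> {"lock_id": ..., "expires": datetime}, filled in by the owner's form
GRANTS = {}

def has_signed_clickwrap(phone):
    """Hypothetical stand-in for checking agreement status via the DocuSign Click API."""
    return False

def set_august_lock(lock_id, locked):
    """Hypothetical stand-in for the August lock/unlock API call."""

@app.route("/lock/<action>", methods=["POST"])
def lock(action):
    grant = GRANTS.get(session.get("phone"))
    if grant is None or datetime.now(timezone.utc) > grant["expires"]:
        abort(403)  # no grant, or the owner-specified access period has ended
    if not has_signed_clickwrap(session["phone"]):
        abort(403)  # the client must agree to the DocuSign form first
    set_august_lock(grant["lock_id"], locked=(action == "lock"))
    return {"status": "ok"}
```

Keeping the expiry check and the lock credentials on the server, as the writeup notes, means the renter never sees the lock info directly.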
## Inspiration

Recently, security has come to the forefront of media with the events surrounding Equifax. We took that fear and distrust and decided to make something to secure and protect data such that only those who should have access to it actually do.

## What it does

Our product encrypts QR codes such that, if scanned by someone who is not authorized to see them, they present an incomprehensible amalgamation of symbols. However, if scanned by someone with proper authority, they reveal the encrypted message inside.

## How we built it

This was built using cloud functions and Firebase as our back end and a React Native front end. The encryption algorithm was RSA, and the QR scanning was open source.

## Challenges we ran into

One major challenge we ran into was writing the back end cloud functions. Despite how easy and intuitive Google has tried to make it, it still took a lot of man-hours of effort to get it operating the way we wanted it to. Additionally, making React Native compile and run on our computers was a huge challenge, as every step of the way it seemed to want to fight us.

## Accomplishments that we're proud of

We're really proud of introducing encryption and security into this previously untapped market. Nobody, to our knowledge, has tried to encrypt QR codes before, and being able to segment the data in this way is sure to change the way we look at QR.

## What we learned

We learned a lot about Firebase. Before this hackathon, only one of us had any experience with Firebase, and even that was minimal; however, by the end of this hackathon, all the members had some experience with Firebase and appreciate it a lot more for the technology that it is. A similar story can be told about React Native, as that was another piece of technology that only a couple of us really knew how to use. Getting both of these technologies off the ground and making them work together, while not a gargantuan task, was certainly worthy of a project in and of itself, let alone rolling cryptography into the mix.

## What's next for SeQR Scanner and Generator

Next, if this gets some traction, is to try and sell this product on the marketplace. Particularly for corporations with, say, QR codes used for labelling boxes in a warehouse, such a technology would be really useful to prevent people from gaining unnecessary and possibly damaging information.
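To illustrate the idea, here is a minimal sketch of RSA-encrypting a payload and packing the ciphertext into a QR code, using the `cryptography` and `qrcode` Python packages; the project's actual Firebase cloud-function implementation differs, and the message text is illustrative:

```python
import qrcode
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The authorized scanner holds the private key; the generator only needs the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Encrypt, then hex-encode so the payload is plain QR-safe text.
ciphertext = public_key.encrypt(b"pallet 42: medical supplies", oaep)
qrcode.make(ciphertext.hex()).save("encrypted_qr.png")

# An unauthorized scan sees only hex gibberish; an authorized one decrypts it.
plaintext = private_key.decrypt(bytes.fromhex(ciphertext.hex()), oaep)
```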
## Inspiration We were inspired by the need for better accessibility tools for students with special needs. Observing the struggle of these students in accessing reading materials, and the lack of adaptive technologies in schools to support their unique learning requirements, motivated us to develop a solution. Our frustration stemmed from the gap in resources that should be helping these students succeed. ReadOn was born out of a desire to bridge this gap and provide an inclusive tool that empowers students with special needs to access educational content more effectively. ## What it does ReadOn is an accessible reading assistant designed to cater to students with different learning abilities. It simplifies text, generates voiceovers, and uses adaptive reading speeds based on the user’s needs. The app can highlight key information, break down complex paragraphs, and convert text to audio for students who struggle with traditional reading methods. By using ReadOn, students can customize their reading experience to suit their personal needs, improving comprehension and retention of material. ## How we built it We built ReadOn using Next.js for the frontend, ensuring a fast and responsive user experience. The app integrates with natural language processing (NLP) APIs to provide text simplification and summary features. We used text-to-speech libraries to convert text into audio, making the content accessible for users with visual impairments or dyslexia. The backend is powered by Node.js. We also focused on making the user interface intuitive and easy to navigate, ensuring that students and educators can use the tool without a steep learning curve. ## Challenges we ran into One of the major challenges we faced was integrating an accurate and responsive text-to-speech feature that could handle a wide range of educational content without errors. Specifically, we encountered difficulties with phonetic breakdowns, as many words have multiple pronunciations or complex phonetic structures. Ensuring that the text-to-speech system correctly pronounced these words based on context proved to be a significant hurdle. Additionally, optimizing the application’s performance when processing large documents in real-time, while ensuring that all accessibility features like accurate phonetics were properly implemented, presented further obstacles. Balancing these needs without sacrificing user experience was one of our biggest technical hurdles. ## Accomplishments that we’re proud of We’re proud of building an application that can truly make a difference in the lives of students with special needs. Creating an adaptable, multi-functional reading tool that simplifies content while keeping it engaging and accessible is an achievement we’re excited about. Additionally, we successfully integrated key accessibility features like audio generation, customizable reading levels, and visual aids that enhance the overall user experience. Most importantly, we are proud that this tool can empower students who have previously struggled with traditional learning methods. ## What we learned Throughout this journey, we learned the importance of accessibility in education and how small changes in technology can significantly impact the learning experience for students with special needs. We also deepened our understanding of text processing, audio synthesis, and front-end optimization to create a seamless and supportive user experience. 
Additionally, we gained valuable insight into how inclusive design principles can be applied to software development, ensuring that the app caters to a diverse set of users. ## What’s next for ReadOn Looking ahead, we plan to incorporate AI-driven personalization to automatically adjust reading levels and recommend content based on each user’s progress. We aim to further improve text summarization and make the app compatible with a wider range of devices, including mobile phones and tablets. Our next steps include building a comprehensive dashboard for teachers, where they can monitor their students’ progress and provide personalized guidance. Finally, we hope to collaborate with schools and educational institutions to bring ReadOn into classrooms and assist a larger population of students in their learning journey.
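ReadOn's stack is Node.js, but the adaptive reading-speed idea can be sketched in a few lines; the following illustrative Python version uses the `pyttsx3` text-to-speech library as a hypothetical substitute for the project's actual TTS libraries, and the rate value is an arbitrary example rather than ReadOn's real logic:

```python
import pyttsx3

engine = pyttsx3.init()
# Slow the speaking rate well below the default (~200 wpm) for struggling readers.
engine.setProperty("rate", 120)
engine.say("The water cycle describes how water moves above and below Earth's surface.")
engine.runAndWait()
```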
partial
# SpeakEasy

## Overview

SpeakEasy: AI Language Companion

Visiting another country but don't want to sound like a robot? Want to learn a new language but can't get your intonation to sound like other people's? SpeakEasy can make you sound like, well, you!

## Features

SpeakEasy is an AI language companion which centers around localizing your own voice into other languages. If, for example, you wanted to visit another country but didn't want to sound like a robot or Google Translate, you could still talk in your native language. SpeakEasy can then automatically repeat each statement in the target language in exactly the intonation you would have if you spoke that language. Say you wanted to learn a new language but couldn't quite get your intonation to sound like the source material you were learning from. SpeakEasy is able to provide you phrases in your own voice so you know exactly how your intonation should sound.

## Background

SpeakEasy is the product of a group of four UC Berkeley students. For all of us, this is our first submission to a hackathon and the result of several years of wanting to get together to create something cool. We are excited to present every part of SpeakEasy: from the remarkably accurate AI speech to just how much we've all learned about rapidly developed software projects.

### Inspiration

Our group started by thinking of ways we could make an impact. We then expanded our search to include using and demonstrating technologies developed by CalHacks' generous sponsors, as we felt this would be a good way to demonstrate how modern technology can be used to help everyday people. In the end, we decided on SpeakEasy and used Cartesia to realize many of the AI-powered functions of the application. This enabled us to make something which addresses a specific real-world problem (robotic-sounding translations) many of us have either encountered or are attempting to avoid.

### Challenges

Our group has varying levels of software development experience, and especially given our limited hackathon experience (read: none), there were many challenging steps. For example: deciding on project scope, designing high-level architecture, implementing major features, and especially debugging. What was never a challenge, however, was collaboration. We worked quite well as a team and had a good time doing it.

### Accomplishments / Learning

We are proud to say that despite the many challenges we accomplished a great deal with this project. We have a fully functional Flask backend with React frontend (see "Technical Details") which uses multiple different APIs. This project successfully ties together audio processing, asynchronous communication, artificial intelligence, UI/UX design, database management, and so much more. What's more is that many of our group members learned this from base fundamentals.

## Technical Details

As mentioned in an earlier section, SpeakEasy is designed with a Flask (Python) backend and React (JavaScript) frontend. This is a very standard setup that is used often at hackathons due to its easy implementation and relatively limited required setup. Flask only requires two lines of code to make an entirely new endpoint, while React can make a full audio-playing page with callbacks that looks absolutely beautiful in less than an hour. For storing data, we use SQLAlchemy (backed by SQLite).

1. When a user opens SpeakEasy, they are first sent to a landing page.
2. After pressing any key, they are taken to a training screen.
Here they will record a 15-20 second message (ideally the one shown on screen) which will be used to create an embedding. This is accomplished with the Cartesia "Clone Voice from Clip" endpoint. A Cartesia Voice (abbreviated as "Voice") is created from the returned embedding (using the "Create Voice" endpoint), which contains a Voice ID. This Voice ID is used to uniquely identify each voice, which itself is in a specific language. The database then stores this voice and creates a new user which this voice is associated with.

3. When the recording is complete and the user clicks "Next", they will be taken to a split screen where they can choose between the two main program functions of SpeakEasy.
4. If the user clicks on the vocal translation route, they will be brought to another recording screen. Here, they record a sound in English which is then sent to the backend. The backend encodes this MP3 data into PCM, sends it to a speech-to-text API, and then transfers it into a text translation API. Separately, the backend trains a new Voice (using the Cartesia Localize Voice endpoint, wrapped by get/create Voice since Localize requires an embedding instead of a Voice ID) with the intended target language and uses the Voice ID it returns. The backend then sends the translated text to the Cartesia "Text to Speech (Bytes)" endpoint using this new Voice ID. This is then played back to the user as a response to the original backend request. All created Voices are stored in the database and associated with the current user. This is done so returning users do not have to retrain their voices in any language.
5. If the user clicks on the language learning route, they will be brought to a page which displays a randomly selected phrase in a certain language. It will then query the Cartesia API to pronounce that phrase in that language, using the preexisting Voice ID if available (or prompting to record a new phrase if not). A request is made to the backend to capture some microphone input, which is then compared to Cartesia's estimation of your speech in the target language. The backend then returns feedback based on the difference between the two pronunciations, and displays that to the user on the frontend.
6. After each route is selected, the user may choose to go back and select either route (the same route again or the other route).

## Cartesia Issues

We were very impressed with Cartesia and its abilities, but noted a few issues which, if addressed, would improve the development experience.

* Clone Voice From Clip endpoint documentation
  + The documentation for the endpoint in question details a `Response` which includes a variety of fields: `id`, `name`, `language`, and more. However, the endpoint only returns the embedding in a dictionary. It is then required to send the embedding into the "Create Voice" endpoint to create an `id` (and other fields), which are required for some further endpoints.
* Clone Voice From Clip endpoint length requirements
  + The clip supplied to the endpoint in question appears to require a duration of greater than a second or two. See "Error reporting" for further details.
* Text to Speech (Bytes) endpoint output format
  + The TTS endpoint requires an output format to be specified. This JSON object notably lacks an `encoding` field in the MP3 configuration, which is present for the other formats (raw and WAV). The solution to this is to send an `encoding` field with the value for one of the other two formats, despite this functionally doing nothing (see the request sketch at the end of this section).
* Embedding format
  + The embedding is specified as a list of 192 numbers, some of which may be negative. Python's JSON parser does not like the minus sign and frequently encounters issues with this. If possible, it would be good to allow this encoding to be base64 encoded, hashed, or something else to prevent negatives. Optimally, embeddings would not have negatives, though this seems difficult to realize.
* Response code mismatches
  + Some response codes returned from endpoints do not match their listed function. For example, a response code of 405 should not be returned when there is a formatting error in the request. Similarly, 400 is returned before 404 when using invalid endpoints, making it difficult to debug. There are several other instances of this, but we did not collate a list.
* Error reporting
  + If (most) endpoints return in JSON format, errors should also be returned in JSON format. This prevents many parsing issues and would simplify design. In addition, error messages are too vague to glean any useful information from. For example, 500 is always "Bad request" regardless of the underlying error cause. This is the same as the error name.

## Future Improvements

In the future, it would be interesting to investigate the following:

* Proper authentication
* Cloud-based database storage (with redundancy)
* Increased error checking
* Unit and integration test coverage, with CI/CD
* Automatic recording quality analysis
* Audio streaming (instead of buffering) using WebSockets
* Mobile device compatibility
* Reducing audio processing overhead
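For reference, here is a minimal sketch of the Text to Speech (Bytes) call with the `encoding` workaround described above. The URL, header names, model ID, and encoding value are our assumptions from memory, not authoritative Cartesia documentation:

```python
import requests

resp = requests.post(
    "https://api.cartesia.ai/tts/bytes",  # assumed endpoint URL
    headers={"X-API-Key": API_KEY, "Cartesia-Version": "2024-06-10"},  # assumed headers
    json={
        "model_id": "sonic-english",  # assumed model ID
        "transcript": translated_text,
        "voice": {"mode": "id", "id": localized_voice_id},
        "output_format": {
            "container": "mp3",
            "sample_rate": 44100,
            # MP3 has no documented encoding, but the endpoint rejects the request
            # without one, so we borrow a value from the raw/WAV formats.
            "encoding": "pcm_f32le",
        },
    },
)
with open("translated.mp3", "wb") as f:
    f.write(resp.content)
```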
## Inspiration

In today's world, public speaking is one of the greatest skills any individual can have. From pitching at a hackathon to simply conversing with friends, being able to speak clearly, be passionate, and modulate your voice are key features of any great speech. To tackle this problem of becoming a better public speaker, we created Talky.

## What it does

It helps you improve your speaking skills by giving you suggestions based on what you said to the phone. Once you finish presenting your speech to the app, an audio file of the speech is sent to a Flask server running on Heroku. The server analyzes the audio file by examining pauses, loudness, accuracy, and how fast the user spoke. In addition, the server does a comparative analysis with the past data stored in Firebase. Then the server returns the performance of the speech. The app also provides community functionality, which allows the user to check out other people's audio files and view community speeches.

## How we built it

We used Firebase to store the users' speech data. Having past data allows the server to do a comparative analysis and inform users whether they have improved or not. The Flask server uses several audio Python libraries to extract meaningful patterns: the SpeechRecognition library to extract the words, Pydub to detect silences, and SoundFile to find the length of the audio file. On the iOS side, we used Alamofire to make the HTTP requests to our server to send data and retrieve a response.

## Challenges we ran into

Everyone on our team was unfamiliar with the properties of audio, so discovering the nuances of wavelengths in particular, and the information they provide, was a challenging and integral part of our project.

## Accomplishments that we're proud of

We successfully recognize the speeches and extract parameters from the sound file to perform the analysis. We successfully provide users with an interactive bot-like UI. We successfully bridge iOS to the Flask server and perform efficient connections.

## What we learned

We learned how to upload audio files properly and process them using Python libraries. We learned to utilize Azure voice recognition to perform speech-to-text operations. We learned fluent UI design using dynamic table views. We learned how to analyze audio files from different perspectives and give an overall judgment of the performance of a speech.

## What's next for Talky

We added the community functionality, though it is still basic. In the future, we can expand this functionality and add more social aspects to the existing app. Also, the current version is focused only on audio files. In the future, we can add video files to enrich the post libraries and support video analysis, which is promising.
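A minimal sketch of the server-side analysis described above, using the same libraries the team names (SpeechRecognition, Pydub, SoundFile); the silence thresholds and file name are illustrative assumptions:

```python
import soundfile as sf
import speech_recognition as sr
from pydub import AudioSegment
from pydub.silence import detect_silence

# Length of the speech, via SoundFile.
data, samplerate = sf.read("speech.wav")
duration_s = len(data) / samplerate

# Pauses: silences of at least 700 ms, 16 dB below the clip's average loudness.
audio = AudioSegment.from_wav("speech.wav")
pauses = detect_silence(audio, min_silence_len=700,
                        silence_thresh=audio.dBFS - 16)

# Words spoken, via SpeechRecognition, to estimate speaking pace.
recognizer = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    text = recognizer.recognize_google(recognizer.record(source))

words_per_minute = len(text.split()) / (duration_s / 60)
print(f"{len(pauses)} pauses, {words_per_minute:.0f} wpm, loudness {audio.dBFS:.1f} dBFS")
```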
## Inspiration

Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings, but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.

## What it does

While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting, but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.

## How I built it

We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and Chart.js. The backend was built on Node (with Express), with Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box. For all relevant processing, we used Google Speech-to-Text, Google diarization, Stanford Empath, scikit-learn, and GloVe (for word-to-vec).

## Challenges I ran into

Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours. Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format. Apart from that, we didn’t encounter any major roadblocks, but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.

## Accomplishments that I'm proud of

We are super proud of the fact that we were able to pull it off, as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market, so being first is always awesome.

## What I learned

We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned far too much about how computers store numbers (:p), and did a whole lot of stuff all in real time.

## What's next for Knowtworthy Sentiment

Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings, so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
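To illustrate the sentiment step, here is a minimal sketch using the Stanford Empath lexicon mentioned above, scoring a single transcribed, diarized utterance; the utterance text and category choices are illustrative, and the full pipeline also mixes in scikit-learn models and GloVe vectors:

```python
from empath import Empath

lexicon = Empath()

# Score one transcribed utterance against emotion-related lexical categories.
utterance = "I think this quarter went really well and I'm excited about the roadmap."
scores = lexicon.analyze(
    utterance,
    categories=["positive_emotion", "negative_emotion"],
    normalize=True,  # normalize by token count so utterances of any length compare fairly
)
print(scores)  # e.g. {'positive_emotion': 0.07, 'negative_emotion': 0.0}
```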
partial
# Stegano

## End-to-end steganalysis and steganography tool

#### Demo at <https://stanleyzheng.tech>

Please see the video before reading the documentation, as the video is more brief: <https://youtu.be/47eLlklIG-Q>

One technicality: GitHub user RonanAlmeida ghosted our group after committing React template code, which has been removed in its entirety.

### What is steganalysis and steganography?

Steganography is the practice of concealing a message within a file, usually an image. It can be done in one of three ways: JMiPOD, UNIWARD, or UERD. These are beyond the scope of this hackathon, but each algorithm must have its own unique brute-force tools and methods, contributing to the massive compute required to crack it.

Steganalysis is the opposite of steganography: either detecting or breaking/decoding steganographs. Think of it like cryptanalysis and cryptography.

### Inspiration

We read an article about the use of steganography in Al Qaeda, notably by Osama Bin Laden [1]. The concept was interesting. The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest.

Another curious case was its use by Russian spies, who communicated in plain sight through images uploaded to public websites hiding steganographed messages. [2]

Finally, we were utterly shocked by how difficult these steganographs were to decode - 2 images sent to the FBI claiming to hold a plan to bomb 11 airliners took a year to decode. [3]

We thought to each other, "If this is such a widespread and powerful technique, why are there so few modern solutions?" Therefore, we were inspired to do this project to deploy a model to streamline steganalysis, and also to educate others on steganography and steganalysis, two underappreciated areas.

### What it does

Our app is split into 3 parts. Firstly, we provide users a way to encode their images with a steganography technique called least significant bit, or LSB. It's a quick and simple way to encode a message into an image (a usage sketch with the stegano library appears after the references).

This is followed by our decoder, which decodes PNGs downloaded from our LSB steganograph encoder. In this image, our decoder can be seen decoding a previously steganographed image:

![](https://i.imgur.com/dge0fDw.png)

Finally, we have a model (learn more about the model itself in the section below) which classifies an image into 4 categories: unsteganographed, JMiPOD, UNIWARD, or UERD. You can input an image into the encoder, then save it, and input the encoded and original images into the model, and they will be distinguished from each other. In this image, we are inferencing our model on the image we decoded earlier, and it is correctly identified as steganographed.

![](https://i.imgur.com/oa0N8cc.png)

### How I built it (very technical machine learning)

We used data from a previous Kaggle competition, [ALASKA2 Image Steganalysis](https://www.kaggle.com/c/alaska2-image-steganalysis). This dataset presented a large problem in its massive size of 305,000 512x512 images, or about 30 GB. I first tried training on it with my local GPU alone, but at over 40 hours for an EfficientNet-B3 model, it wasn't within our timeline for this hackathon. I ended up running this model on dual Tesla V100s with mixed precision, bringing the training time to about 10 hours. We then inferred on the train set and distilled a second model, an EfficientNet-B1 (a smaller, faster model). This was trained on the RTX 3090.
The entire training pipeline was built with PyTorch and optimized with a number of small optimizations and tricks I used in previous Kaggle competitions.

Top solutions in the Kaggle competition use techniques that marginally increase score while hugely increasing inference time, such as test-time augmentation (TTA) or ensembling. In the interest of scalability and low latency, we used neither of these. These are by no means the most optimized hyperparameters, but with only a single fold, we didn't have good enough cross-validation, or enough time, to tune them more. Considering we achieved 95% of the performance of the state of the art with a tiny fraction of the compute power needed, due to our use of mixed precision and lack of TTA and ensembling, I'm very proud.

One aspect of this entire pipeline I found very interesting was the metric. The metric is a weighted area under the receiver operating characteristic curve (AUROC, often abbreviated as AUC), biased towards the true positive rate and against the false positive rate. This way, as few unsteganographed images are mislabelled as possible.

### What I learned

I learned about a ton of resources I would have never learned about otherwise. I've used GCP for cloud GPU instances, but never for hosting, and was super surprised by the utility; I will definitely be using it more in the future.

I also learned about steganography and steganalysis; these were fields I knew very little about but was very interested in, and this hackathon proved to be the perfect place to learn more and implement ideas.

### What's next for Stegano - end-to-end steganalysis tool

We put a ton of time into the steganalysis aspect of our project, expecting a simple, easy-to-use steganography library to exist in Python. We found 2 libraries, one of which had not been updated for 5 years; ultimately we chose stegano [4], the namesake for our project. We'd love to create our own module, adding more algorithms for steganography and incorporating audio data and models.

Scaling to larger models is also something we would love to do - EfficientNet-B1 offered us the best mix of performance and speed at this time, but further research into the new NFNet models or others could yield significant performance uplifts on the modelling side, though many GPU hours are needed.

## References

1. <https://www.wired.com/2001/02/bin-laden-steganography-master/>
2. <https://www.wired.com/2010/06/alleged-spies-hid-secret-messages-on-public-websites/>
3. <https://www.giac.org/paper/gsec/3494/steganography-age-terrorism/102620>
4. <https://pypi.org/project/stegano/>
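As promised in the "What it does" section, here is the LSB encode/decode flow using the stegano library [4]; the file names and message are illustrative:

```python
from stegano import lsb

# Encode: hide the message in the least significant bit of each pixel channel.
secret = lsb.hide("./original.png", "meet at noon")
secret.save("./stego.png")

# Decode: recover the hidden message from the encoded PNG.
print(lsb.reveal("./stego.png"))  # -> "meet at noon"
```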
## See our live demo!

**On Rinkeby testnet blockchain (recommended):** <https://rinkeby.kelas.dev>

**On xDAI blockchain (Warning: uses real money):** <https://xdai.kelas.dev>

## Check out our narrative StoryMaps here!

**Greenery in your Community:** <https://arcg.is/1vu448>

**Culture & Diversity in choosing your Home:** <https://arcg.is/DH511>

## Inspiration

BlockFund's mission is to build a platform to empower communities with tools and data. We aim to improve outcomes in **community civic engagement and community sustainability.**

*How we do so, BlockFund:*

1. Democratises community funds through blockchain and voting technology - allowing community members to submit their own project proposals and vote.
2. Highlights the need for community environment sustainability projects by identifying local areas lacking in tree foliage. Importantly, we educate the community through a narrative in an ArcGIS StoryMap. Image processing and deep learning enable the identification of even the smallest tree's foliage. **TeamTreesMini**
3. Aids prospective residents and migrants in looking for a home (and community) that fits their unique cultural heritage, beliefs, and diversity needs, by outlining demographic breakdowns, religious institutions, and amenities. We also educate home seekers on the importance of, and the factors to consider in, this choice through a narrative in an ArcGIS StoryMap.

**1. Democratises community funds through blockchain and voting technology**

In the US, Homeowner Associations (HOA) are the main medium through which resident members pay community upkeep fees to maintain grounds, master insurance, community utilities, as well as overall community finances. Financial transparency varies between HOAs, but reports often reflect only past fund usage and the choices of a few representative members. We sought a solution that democratises the project funding process – allowing residents to contribute and vote for projects that **actually matter** to them.

It's easy for community minorities to go unheard, so our voting system helps to account for that. We adjust and increase the voting weight of residents whose vote has not funded a successful project after a few attempts – thus improving the representation of minorities in any community.

**2. Highlights the need for environment sustainability projects #TeamTreesMini**

Additionally, we empower communities to engage in green urban planning. We mimic #TeamTrees on a communal scale. Climate change is an increasingly prevalent topic, and we believe illustrating the dangers in your backyard is an excellent way to encourage local action. Our StoryMap solution maps the green foliage coverage in your neighbourhood. Then, we empower the community in proposing projects on the platform to fund tree planting in each home and in common areas.

**3. Your home: why Cultural Fit and Diversity matter**

After a community profile is made, we also assist new members in choosing a community aligned with their cultural, religious, and diversity interests. When one of our members moved to a neighbourhood heavily skewed towards a different racial group, he faced both explicit and subtle racism growing up. Home seekers already take demographics into consideration, and our solution aids home seekers in making a more informed decision from a cultural perspective. It also can support urban planning for community planners.
We map diversity index scores, demographic data (generational and race), and religious institutions and amenities – aiding new home seekers in choosing their home.

The proverb "Birds of a feather flock together" describes how those of similar taste congregate in groups. However, in today's world, diversity and exposure to different opinions and people are crucial to thriving in the workforce.

> Diversity is having a seat at the table. Inclusion is having a voice. And belonging is having that voice be heard. - Liz Fosslien

BlockFund believes that more than just price or transport convenience – diversity, belonging, and inclusion are key concepts in choosing a place to live.

BlockFund is a decentralised autonomous organisation (DAO) that pools community funds, engages the community, and allows transparent voting on projects.

## How we built it

We built and deployed the Decentralized Autonomous Organisation (DAO) smart contract on two EVM-based blockchains: Rinkeby (testnet) and xDAI. We use AlchemyAPI as a node endpoint for our Rinkeby deployment for better data availability and consistency, while our xDAI deployment uses POA's official community node.

We deployed a React.js frontend for quick delivery of our application, leveraging Axios to asynchronously communicate with external services, OpenAI to provide an intuitive Q&A feature promoting universal proposal comprehension, and Ant.Design/Sal for a modern, sleek, and animated user interface.

We use ethers.js to communicate with blockchain nodes, and it supports two main cryptocurrency wallets:

* Burner wallet (our homebrew in-browser wallet, made for easy user onboarding)
* MetaMask (a popular web3-enabled wallet for those who want better security)

On top of that, our Community Learning Kits are made using ESRI ArcGIS StoryMaps for highly visual storytelling of geographic data. Last but not least, we use Hardhat for smart contract deployment automation.

**Here are some other technologies we used:**

For blockchain:

* Ethereum
* Solidity
* Hardhat

For front-end client:

* React.js (+ Hooks + Router)
* Axios — asynchronous communication with OpenAI
* OpenAI GPT-3 — intuitive Q&A feature for universal proposal comprehension
* Sal — sleek animations
* Ant.Design — modern user interface system

For mapping:

* ArcGIS WebMap
* ArcGIS StoryMap
* ArcGIS-Rest-API
* Custom Functions

Datasets:

* 2010 US Census Data
* 2018 US Census Data
* Pima AZ Foliage Data

## Challenges we ran into

Our main challenge was integrating the ArcGIS APIs in a limited timeframe. As it was a new technology for us, we really had to crunch our brainpower. On top of that, deploying a fully working website for other people to try takes a lot of effort, to make sure that all of the integrations also work beyond localhost.

## Accomplishments that we're proud of

* We have a live website!
* We launched to two different blockchains: xDAI and Rinkeby.
* React state management!

## What we learned

* We learned that working remotely with colleagues from 4 different timezones is challenging.
* Good React state management practices will save a lot of time.

## What's next for BlockFund

* Explore ways we can work with local communities to deploy this.
* Run more DAO experiments in smaller scopes (family, small neighbourhood, etc.)
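The minority-amplifying vote weighting described in the Inspiration section can be sketched as follows. This is a hypothetical Python illustration (the real logic lives in the Solidity smart contract), and every threshold and factor below is an assumption:

```python
def adjusted_vote_weight(base_weight: float, losing_streak: int,
                         boost_after: int = 3, boost_step: float = 0.25,
                         cap: float = 2.0) -> float:
    """Boost the weight of a resident whose votes have repeatedly failed to fund a project.

    `losing_streak` counts consecutive funding rounds in which none of the
    resident's chosen proposals were funded; boost_after, boost_step, and cap
    are assumed tuning parameters.
    """
    extra_rounds = max(0, losing_streak - boost_after)
    return min(cap, base_weight * (1.0 + boost_step * extra_rounds))
```

Capping the boost keeps any single voter from dominating a round while still letting persistently outvoted residents be heard over time.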
## Inspiration

The advancement of image classification, notably in facial recognition, has some very interesting applications. [Microsoft](https://azure.microsoft.com/en-ca/services/cognitive-services/face/) and Amazon currently provide facial recognition services to parties ranging from corporations checking on their employees, to law enforcement officials locating serious criminals. This invasive method of facial recognition is problematic ethically, but brings forward an interesting concept: with an increasing dependence on ML/DL and computer vision, is there a method whereby state-of-the-art systems can be fooled? Inversely, is there a method to help future facial recognition models overcome spoofing?

We are a team of students with an interest in Deep Learning and Computer Vision, as well as academic research experience in the field (ML applied to astrophysics, engineering, etc.). As such, we were interested in exploring projects that expanded our views on these topics with some real-world application. **We wanted to create software to mask the identity of a person's photo to current image/facial recognition systems, as well as help future models overcome masking.**

We looked at two major objectives for this project:

1) Allow people to maintain their privacy. An employer may search a job candidate's LinkedIn photo using existing facial recognition software to see appearances in crime databases or other online sources. Personal and professional personas often differ, and as such, there should be a method to restrict invasive image or facial recognition. Our software should pass a person's photo through a model to add perturbations to the image and return an image that looks the same to the human eye, but is unrecognizable to a computer-vision algorithm.

2) Help developers build more robust image classification models. A tool to trick existing models can be applied to improve future models. Developers may use this tool to include some spoofed images in their training set to help the model resist being tricked. Packaging this as a data augmentation library is something we are considering.

When first looking for academic inspiration, we referred to [this paper](https://arxiv.org/pdf/1312.6199.pdf), which looks at how changing certain pixels shifts the image's classification in vector space after being processed through a model; this may force a network to provide the incorrect classification without significantly changing how the photo itself looks. We also looked at [this article](http://karpathy.github.io/2015/03/30/breaking-convnets/), which points out how a gradient method may help intentionally misclassify an image by applying adversarial perturbations, limiting their visibility with a constraint. The classic example is classifying a panda as a vulture using the GoogLeNet model. The original source of the article is [this paper](https://arxiv.org/pdf/1412.6572.pdf).

## How it Works

**Fast Gradient Sign Method**

Traditional neural networks update and improve through gradient descent and backpropagation -- a process whereby an input is fed into a randomly initialized network, the model's prediction for the input is compared to the true value for the input, and the network's weights/biases are updated to more closely match the desired (correct) output. This process is repeated thousands of times over many inputs to train a model.

To trick an image recognition model, the above process can be applied not to the model, but to the picture.
An image may be passed into the model, and the output compared to a desired output. After this comparison, the photo may be tweaked (gradient descent) through **adversarial perturbations** many thousands of times (backpropagation) to create an image mask that the network identifies as the desired output. This mask is often pixel noise that is not understandable to humans. As such, it acts as a mask, or a filter, that can be partially applied to the original image. A constraint restricts how visible the mask is, to ensure the image looks similar to the original to humans. The adversarial data is inserted, but the mask itself is barely visible.

An example of this is in the figure below. A photo of a panda passed into a model can be compared to the output of a nematode classification. Using the Fast Gradient Sign Method, the nematode filter is applied to the panda image. The resulting composite image may look the same due to the constraint on the filter, but the image recognition network will grossly misclassify it.

![Panda adversarial](https://brunolopezgarcia.github.io/img/adversarial_faces/panda_adversarial.png)

However, this approach is time-consuming, as it requires multiple cycles to add perturbations to an image. It also requires a target class to push the input image towards. Lastly, it is also quite possible to overcome this method by simply adding images with perturbations to the training set when creating a model.

At a high level, an algorithm can be constructed with inspiration from [this paper](https://www.andrew.cmu.edu/user/anubhava/fooling.pdf):

Algorithm: Gradient Method

* x = input image
* g = gradient of the loss with respect to x
* s = sign(g) at each pixel
* x' = x + ε·s, which generates the adversarial input
* return x'

This method works because neural networks are vulnerable to adversarial perturbation due to their linear nature. These linear deep models are significantly impacted by a large number of small variations in higher dimensions, such as those of the input space. These facial recognition models have not been proven to understand the task at hand, and are instead simply "overly confident at points that do not occur in the data distribution" (from Goodfellow's paper).

**Generative Adversarial Networks**

A cleverer solution can be applied with the use of GANs, which are currently a huge area of research in machine vision. At a high level, GANs can piece together an image based on other images and textures. After applying an imperceptible perturbation with a GAN, it is possible for an image recognition model to completely misclassify an image by maximizing prediction error. There are many ways to approach this, such as adding a new object that seems to fit into the image to distort the classifying algorithm, or applying a mask-like approach, as seen above, that retains the original image's characteristics.

The approach we chose to look at would take an adversarial gradient computed using the above method, in addition to a GAN, and apply it to create a new image from the original and the mask. GANs are extremely interesting but can be computationally expensive to train and implement. One potential implementation is outlined [here](https://www.andrew.cmu.edu/user/anubhava/fooling.pdf).
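As a concrete sketch of the gradient method above, here is a minimal single-step targeted attack in Python with TensorFlow/Keras; the model, input scaling, and ε are assumptions, and the project's actual modifier iterates with a learning rate until a cost threshold is reached rather than taking one step:

```python
import tensorflow as tf

def fgsm_targeted(model, image, target_class, epsilon=0.01):
    """One gradient step that nudges `image` toward `target_class`.

    Assumes `image` is a float array scaled to [-1, 1] (Inception-style)
    and `model` outputs class probabilities.
    """
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    target = tf.one_hot([target_class], depth=model.output_shape[-1])
    with tf.GradientTape() as tape:
        tape.watch(x)
        prediction = model(x)
        loss = tf.keras.losses.categorical_crossentropy(target, prediction)
    gradient = tape.gradient(loss, x)
    # Descend the loss toward the target class; clip to keep a valid image.
    adversarial = tf.clip_by_value(x - epsilon * tf.sign(gradient), -1.0, 1.0)
    return adversarial[0].numpy()
```

Repeating this step in a loop with a small ε, plus a cap on the total change, recovers the iterative, constrained procedure described above.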
## Our Implementation

**Image/Facial Recognition**

The image recognition is done using an implementation of [Inception-v3](https://arxiv.org/pdf/1512.00567v3.pdf), ImageNet, and [Facenet](https://github.com/davidsandberg/facenet). The paper for this pre-trained algorithm is found [here](https://arxiv.org/pdf/1503.03832.pdf). Inception is a CNN model that has proven to be quite robust. Trained on ImageNet, the baseline image recognition model V3 has been trained on over a thousand different classes. Facenet uses the Inception model, and we applied transfer learning from the ImageNet application to Facenet, which has learned over 10K identities based on over 400K face images. We also added the face of Drake (the musician) to this model.

This model, made in **Python** using *Keras* (TensorFlow backend), allows the classification/recognition of an input image into a text label, and serves as the baseline model that we chose to attack and exploit. When calling the function to classify an image, the image is loaded, resized to 300x300 pixels, and its intensities are scaled. Thereafter, a dimension is added to the image to work with it in Keras. The image is then passed through our modified Inception\_ImageNet\_Facenet model, and a class is outputted, along with a confidence. Please note that the pre-trained model in the GitHub code uses the default Inception model, as the size of the model and execution time were factors that crashed our NodeJS implementation. Our custom model will be demoed and available later.

**Modifier**

In Python, Keras, NumPy, and the Python Imaging Library were used to implement the Fast Gradient Sign Method to modify photos of a person's face. The first and last layers of the facial recognition CNN are referenced, a baseline object is chosen to match gradients with (we use a jigsaw piece because it was efficient at finding gradients for human faces), and the image to transform is loaded. The input image is appropriately scaled, and the constraints defined. A copy of the image is made, and a learning rate is defined. We chose a learning rate of 5, which is extremely high, but allows for quick execution and output. A lower learning rate will likely result in a slower but better mask. Keras is used to define cost and gradient calculation functions, and the resultant master function is called in a loop until the cost exceeds a threshold. At each iteration, the model prints the cost. After the mask is chosen and applied to the image of a person's face with constraints, the image is normalized once again and returned to the user. The output is relatively clean and maintains the person's facial features and the general look of the image.

We decided not to implement a GAN, as it was extremely difficult to train and to implement meaningfully. Moreover, it did not provide results superior to the Fast Gradient Sign Method. A potential implementation of the model would first use a boundary equilibrium GAN to generate photos from the source image, then a conditional GAN, and finally a Pix2Pix translation to transfer the mask onto a new image. The code to implement this solution is available, but not well-developed.

**User Interface**

We used HTML and CSS to create an extremely basic front-end that allows a user to drag and drop an image into the model. The image is first classified, then the modifications are made, and the final classification is outputted after modification. The image is also returned. The back-end of the application is in **NodeJS** and ExpressJS.
The model takes considerably long to compile, and for some images this underdeveloped front-end may exceed the 60-second window the NodeJS server may run for.

## Challenges we ran into

Finding and training a facial recognition model proved difficult. Most high-quality models by companies such as Amazon and Microsoft that are state-of-the-art and applied in industry are closely guarded secrets. As such, we needed to find a model that worked generally well at facial recognition. We found Inception V3 to be a good baseline that knows 1000 general classes, and Facenet to be a great application of transfer learning from this base to facial recognition. As such, our initial problem was solved.

Second, we ran into the problem of how to modify the images. The initial approaches we took with logistic regression and [pixel removal](https://arxiv.org/pdf/1710.08864.pdf) were slow, or modified the image too heavily to the point that it lost meaning, and human tests would also fail to classify the image. We had to work to tune the parameters of our model to get a version that worked effectively for face classification.

Lastly, working with GANs proved difficult. Training these models requires extensive computational power and time. As such, we were unable to effectively train a GAN model in the time allotted.

## Accomplishments that we're proud of

We used a recognizable face to test that we could trick a CNN. Aubrey Graham (Drake) is a Toronto-based musician who is also an alum of my high school. He has strong facial features, making him recognizable to the human eye. Drake's face was detected at 97.22% confidence through our modified Inception model's initial predictions. A reconstructed image from our model can be passed into the Inception model and will yield 4.12% confidence that the image is a bassoon (the highest likelihood). Although the two at least share a musical connection, we are proud to see that our model can change an image's data enough to misclassify it without making it look significantly different to the human eye.

## What we learned

We expanded our knowledge of machine vision and CNN-based deep learning. We were exposed to the concept of GANs and their implementations. We also learned that the exploitation and tricking of current image recognition and general neural network models is an area of intense research, as it has the capacity to cause damage. For example, if one were to apply a filter or a mask to the video feed of a self-driving car, it is possible that the machine vision model used to drive may misinterpret a red light as a green light, and crash. Exploring this area will be very interesting in the coming months.

## What's next for Un-CNN

We want to further explore GAN implementations and speed up the model. GANs seem to be an extremely powerful tool for this application that could be robust in fooling models while maintaining recognizability. Also, it would be interesting to reverse engineer current state-of-the-art facial recognition models, such as Microsoft's Face API model, which is used by corporations and law enforcement in the real world. In doing so, we could use that model to calculate gradients and modify images to fool it.

## Research

* Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition - Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
* [Explaining and Harnessing Adversarial Examples - Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy](https://arxiv.org/pdf/1412.6572.pdf)
* [Paper Discussion: Explaining and harnessing adversarial examples - Mahendra Kariya](https://medium.com/@mahendrakariya/paper-discussion-explaining-and-harnessing-adversarial-examples-908a1b7123b5)
* [Crafting adversarial faces - Bruno Lopez Garcia](https://brunolopezgarcia.github.io/2018/05/09/Crafting-adversarial-faces.html)
* [Intentionally Tricking Neural Networks - Adam Geitgey](https://medium.com/@ageitgey/machine-learning-is-fun-part-8-how-to-intentionally-trick-neural-networ-97a379b2)
* [Adversarial Examples in PyTorch - Akshay Chawla](https://github.com/akshaychawla/Adversarial-Examples-in-PyTorch)
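Circling back to the classification path described in the implementation section (load, resize to 300x300, scale intensities, add a batch dimension, predict), a minimal Python/Keras sketch might look like the following; the exact scaling convention and the loaded model are assumptions:

```python
import numpy as np
from tensorflow.keras.preprocessing import image as keras_image

def classify(model, path, class_names):
    """Classify one image file and return the top label with its confidence."""
    img = keras_image.load_img(path, target_size=(300, 300))
    x = keras_image.img_to_array(img) / 127.5 - 1.0  # scale intensities to [-1, 1]
    x = np.expand_dims(x, axis=0)                    # add the batch dimension Keras expects
    probs = model.predict(x)[0]
    top = int(np.argmax(probs))
    return class_names[top], float(probs[top])
```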
![](https://img.shields.io/github/license/MoroccanGemsbok/ReviewRecap) ![](https://img.shields.io/github/contributors/MoroccanGemsbok/ReviewRecap) ![](https://img.shields.io/github/last-commit/MoroccanGemsbok/ReviewRecap)

## Inspiration

Any seasoned online shopper knows that one of the best ways to get information about a product is to check the reviews. However, reading through hundreds of reviews can be time-consuming and overwhelming, leading many shoppers to give up on their search for the perfect product. On top of that, many reviews can be emotionally driven, unhelpful, or downright nonsensical, with no truly effective way to filter them out of the aggregated star rating displayed on the product. Wouldn't it be great if a shopper could figure out why the people who liked the product liked it, and why the people who hated the product hated it, without wading through endless irrelevant information?

## What it does

Review Recap goes through the reviews of an Amazon product to extract keywords using NLP. The frequency of the keywords and the average rating of the reviews containing each keyword are presented to the user in a bar graph in the extension. With Review Recap, shoppers can now make informed buying decisions, with confidence, in just a matter of seconds.

## How we built it

When a user is on a valid Amazon product page, the Chrome extension sends a GET request to our RESTful backend. The backend checks if the product page already exists in a cache. If not, the program scrapes through hundreds of reviews, compiling the data into review bodies and star ratings. This data is then fed into Cohere's text summarization natural language processing API, which we trained using a variety of prompts to find keywords in Amazon reviews. We also used Cohere to generate a list of meaningless keywords (such as "good", "great", "disappointing", etc.) to filter out unhelpful information. The data is returned and rendered as a bar graph using D3.

## Challenges we ran into

Django features many ways to build similar RESTful APIs. It was a struggle to find a guide online that had the syntax and logic that suited our purpose best. Furthermore, being stuck with the free tier of many APIs meant that these APIs were the bottleneck of our program. The content security policies for the Chrome extension also made it difficult for us to implement D3 into our program.

## Accomplishments that we're proud of

We were able to effectively work as a team, with each of us committing to our own tasks as well as coming together at the end to bring all our work together. We had an ambitious vision, and we were able to see it through.

## What we learned

All members of our team learned a new tech stack during this project. Our frontend members learned how to create a web extension using the Chrome API, while our backend members learned how to use Django and Cohere. In addition, we also learned how to connect the frontend and backend together using a RESTful API.

## What's next for Review Recap

We have several goals for Review Recap:

* Optimize the data-gathering algorithm
* Add more configuration in the Chrome extension
* Implement a loading animation while the data is being fetched
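A hedged sketch of the cache-first GET endpoint described above, in Python with Django; the helper functions and cache timeout are hypothetical stand-ins for the scraping and Cohere keyword-extraction steps:

```python
from django.core.cache import cache
from django.http import JsonResponse

def product_keywords(request, product_id):
    """GET endpoint: serve cached keyword data, or scrape and analyze on a miss."""
    cached = cache.get(product_id)
    if cached is not None:
        return JsonResponse(cached)
    reviews = scrape_reviews(product_id)     # hypothetical: (review_text, stars) pairs
    keywords = extract_keywords(reviews)     # hypothetical: Cohere-backed extraction
    payload = {"product": product_id, "keywords": keywords}
    cache.set(product_id, payload, timeout=24 * 60 * 60)  # assumed one-day freshness
    return JsonResponse(payload)
```

Caching by product ID is what keeps the free-tier API limits mentioned below from being hit on every page view.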
## Inspiration

Despite the advent of the information age, misinformation remains a big issue in today's day and age. Yet mass media accessibility for newer language speakers, such as younger children or recent immigrants, remains lacking. We want these people to be able to do their own research on various news topics easily and reliably, without being limited by their understanding of the language.

## What it does

Our Chrome extension allows users to shorten and simplify any article of text to a basic reading level. Additionally, if a user is not interested in reading the entire article, it comes with a tl;dr feature. Lastly, if a user finds the article interesting, our extension will find and link related articles that the user may wish to read later. We also include warnings to the user if the content of the article contains potentially sensitive topics, or comes from a source that is known to be unreliable.

Inside the settings menu, users can choose a range of dates for the related articles which our extension finds. Additionally, users can disable the extension from working on articles that feature explicit or political content, alongside being able to disable thumbnail images for related articles if they do not wish to view such content.

## How we built it

The front-end Chrome extension was developed in pure HTML, CSS, and JavaScript. The CSS was done with the help of [Bootstrap](https://getbootstrap.com/), but still mostly written on our own. The front-end communicates with the back-end using REST API calls.

The back-end server was built using [Flask](https://flask.palletsprojects.com/en/2.0.x/), which is where we handled all of our web scraping and natural language processing. We implemented text summaries using various NLP techniques (SMMRY, TF-IDF), which were then fed into the OpenAI API in order to generate a simplified version of the summary. Source reliability was determined using a combination of research data provided by [Ad Fontes Media](https://www.adfontesmedia.com/) and [Media Bias/Fact Check](https://mediabiasfactcheck.com/). To save time (and spend less on API tokens), parsed articles are saved in a [MongoDB](https://www.mongodb.com/) database, which acts as a cache and saves considerable time by skipping all the NLP for previously processed news articles. Finally, [GitHub Actions](https://github.com/features/actions) was used to automate our builds and deployments to [Heroku](https://www.heroku.com/), which hosted our server.

## Challenges we ran into

Heroku was having issues with API keys, causing very confusing errors which took a significant amount of time to debug. In regards to web scraping, news websites have wildly different formatting, which made extracting the article's main text difficult to generalize across different sites. This difficulty was compounded by the closure of many prevalent APIs in this field, such as the Google News API, which shut down in 2011. We also faced challenges with tuning the prompts in our requests to OpenAI to generate the output we were expecting. A significant amount of work done in the Flask server is pre-processing the article's text in order to feed OpenAI a more suitable prompt while retaining the meaning.

## Accomplishments that we're proud of

This was everyone on our team's first time creating a Google Chrome extension, and we felt that we were successful at it. Additionally, we are happy that our first attempt at NLP was relatively successful, since none of us had any prior experience with NLP.
Finally, we slept at a hackathon for the first time, so that's pretty cool.

## What we learned

We gained knowledge of how to build a Chrome extension, as well as various natural language processing techniques.

## What's next for OpBop

* Increasing the types of text that can be simplified, such as academic articles.
* Making summaries and simplifications more accurate to what a human would produce.
* Improving the hit rate of the cache by web crawling and scraping new articles while idle.

## Love,

## FSq x ANMOL x BRIAN
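As an illustration of the TF-IDF summarization step mentioned above, here is a minimal extractive-summary sketch in Python; the sentence splitting and the choice of keeping the top three sentences are assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_summary(sentences, k=3):
    """Score each sentence by its mean TF-IDF weight and keep the top k, in order."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(matrix.mean(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-k:])
    return " ".join(sentences[i] for i in keep)
```

The condensed text can then be sent to the language model with a "rewrite at a basic reading level" style prompt, which also keeps token costs down.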
## Inspiration

The extreme potential of foundation models to make high-quality biological and scientific illustration accessible to individuals without special-effects or animation skills.

## What it does

Uses the GPT API to create high-quality biological illustrations of various macromolecules from prompts, via various Blender add-ons.

## How we built it

Used the GPT API and prompt-engineered the GPT model to properly drive MolecularNodes and other Blender add-ons through GPTBlender, allowing prompts to generate high-quality illustrations.

## Challenges we ran into

Adapting the GPT API and prompt-engineering GPTBlender to properly drive MolecularNodes and other Blender add-ons.

## Accomplishments that we're proud of

We were able to integrate the GPTBlender and MolecularNodes add-ons in Blender to seamlessly produce macromolecules with only prompting.

## What we learned

Foundation models and NLP will be extremely effective not only for biological inquiry and treatment, but also for illustrating and communicating valuable topics, by and for individuals with novice experience with such tools.

## What's next for BioBlender

Integrate more Blender add-ons for biological structures (e.g. BlenderSpike, BlenderBrain)
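The prompt-to-script loop described above can be sketched as follows in Python. This is a hypothetical illustration, not the project's code: the model name, system prompt, and the use of `exec` inside Blender are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You write Blender Python (bpy) scripts. Use the MolecularNodes add-on to "
    "import and style molecular structures. Reply with executable code only."
)

def blender_script_for(request: str) -> str:
    """Ask the model for a bpy script that fulfils a natural-language request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

# Inside Blender's scripting tab one might then run (with care -- this executes
# generated code): exec(blender_script_for("Load PDB 1BNA as a cartoon and frame it"))
```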
## Inspiration

2020 has definitely been the year of chess. Between 2020 locking everyone indoors and Netflix's The Queen's Gambit raking in 62 million viewers, everyone is either talking about chess or watching others play chess.

## What it does

**Have you ever wanted to see chess through the eyes of chess prodigy Beth Harmon?**

Where prodigies and beginners meet, BethtChess is an innovative piece of software that takes any picture of a chessboard and instantly returns the next best move given the situation of the game. Not only does it create an experience to help improve your own chess skills and strategies, but you can now analyze chessboards in real-time while watching your favourite streamers on Twitch.

## How we built it

IN A NUTSHELL:

1. Take a picture of the chessboard
2. Turn the position into text (by extracting its FEN code using a machine learning model)
3. Run it through a chess engine (we send the FEN code to Stockfish, a chess engine)
4. The chess engine returns the next best move to us
5. Display the results to the user

Some of our inspiration came from Apple's Camera app's ability to identify the URL of QR codes in an instant -- without even having to take a picture.

**Front-end Technology**

* Figma - Used for prototyping the front end
* ReactJS - Used for making the website
* HTML5 + CSS3 + Fomantic-UI
* React-webcam
* Styled-components
* Framer-motion

**Back-end Technology**

* OpenCV - Convert image to an ortho-rectified chess board
* Kaggle - Data set which has 100,000 chess board images
* Keras - Deep Learning (DL) model to predict FEN string
* Stockfish.js - The most powerful chess engine
* NodeJS - To link front-end, DL model, and Stockfish

**User Interface**

Figma was the main tool we used to design a prototype for the UI/UX page. Here's the link to our prototype: <https://www.figma.com/proto/Vejv1dzQyZ2ZGOMoFw5w2L/BethtChess?node-id=4%3A2&scaling=min-zoom>

**Website**

React.js and Node.js were mainly used to create the website for our project (as it is a web app).

**Predicting the next best move using the FEN string**

To predict the next best move, Node.js (the Express module) and stockfish.js were used to communicate with the most powerful chess engine, so that we could receive information from the API and deliver it to our user. We also trained the deep learning model with **Keras** and predicted the FEN string for the image taken from the webcam, after image processing using **OpenCV**.

## Challenges we ran into

Whether it's 8pm, 12am, or 4am, it doesn't matter to us. Literally. Each of us lives in a different timezone, and a large challenge was working around these differences. But that's okay. We stayed resilient, optimistic, and determined to finish our project off with a bang!

**Learning Curves**

It's pretty safe to say that all of us had to learn SOMETHING on the fly. Machine learning, image recognition, computing languages, and navigating through GitHub are only some of the huge learning curves we had to overcome. Not to mention, splitting the work, and especially connecting all the components together, was a challenge that we had to work extra hard to achieve.

Here's what Melody has to say about her personal learning curve:

*At first, it felt like I didn't know ANYTHING. Literally nothing. I had some Python and Java experience, but now I realize there's a whole other world out there full of possibilities, opportunities, etc. What the heck is an API? What's this? What's that? What are you doing right now? What is my job? What can I do to help?
The infinite loop of questions kept racing through my head. Honestly, though, the only thing that got me through all this was my extremely supportive team!!! They were extremely understanding, supportive, and kind, and I couldn't have asked for a better team. Also, they're so smart??? They know so much!!*

## Accomplishments that we're proud of

Only one hour into the hackathon (while we were still trying to work out our idea), one of our members already had a huge component of the project (a website + active camera component + "capture image" button) as a rough draft. Definitely a pleasant surprise for all of us, and we're very proud of how far we've gotten together in terms of learning, developing, and bonding!

As it was most of our members' first hackathon ever, we didn't know what to expect by the end of it. But we managed to deliver a practically **fully working application** that connected all the components we originally planned. Obviously, there is still lots of room for improvement, but we are super proud of what we achieved in these twenty-four hours, as well as how it looks and feels.

## What we learned

Our team consisted of students ranging from high school all the way to recent graduates, and our levels of knowledge vastly differed. Although almost all of our team were newbies to hackathons, we didn't let that stop us from creating the coolest chess-analyzing platform on the web. Learning curves were huge for some of us: APIs, JavaScript, Node.js, React.js, GitHub, etc. were just some of the concepts we had to wrap our heads around and learn on the fly. Meanwhile, more experienced members explored their limits by understanding how the stockfish.js engine works with APIs, how to run Python and Node.js simultaneously, and how the two communicate in real time.

Because each of our members lives in a different time zone (including one across the world), adapting to each other's schedules was crucial to our team's success and efficiency. But we stayed positive and worked hard through dusk and dawn together to achieve goals, complete tasks, and collaborate on GitHub.

## What's next for BethtChess?

Maybe we'll turn it into an app available for iOS and Android mobile devices? Maybe we'll get rid of the "capture photo" step so that before you even realize, it has already returned the next best move? Maybe we'll make it read out the instructions for those with impaired vision so that they know where to place the next piece? You'll just have to wait and see :)
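The project wires steps 3 and 4 of the nutshell above through Node.js and stockfish.js; as a hedged sketch, the same FEN-to-best-move round trip looks like this in Python with the `python-chess` package, assuming a local Stockfish binary on the PATH:

```python
import chess
import chess.engine

def best_move(fen: str, think_time: float = 0.5) -> str:
    """Ask Stockfish for the best move in the position described by `fen`."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        result = engine.play(board, chess.engine.Limit(time=think_time))
    return board.san(result.move)  # e.g. "e4" from the starting position

print(best_move("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))
```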
## Inspiration

The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators, utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.

## What it does

Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.

## How we built it

With blood, sweat, and tears! We used many tools offered to us throughout the challenge to simplify our life. We used JavaScript, HTML, and CSS for the website, and used it to communicate with a Flask backend that runs our Python scripts involving API calls and such. We have API calls to OpenAI text embeddings, to Cohere's xlarge model, to the GPT-3 API, to OpenAI's Whisper speech-to-text model, and to several modules for getting an MP4 from a YouTube link, text from a PDF, and so on.

## Challenges we ran into

We had problems getting the Flask backend to run on an Ubuntu server, and later had to run it on a Windows machine instead. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected. Finally, since the latency of sending information back and forth between the front end and the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to JavaScript.

## Accomplishments that we're proud of

Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 on the extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions about it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness of it).

## What we learned

As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up

What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is summarizing text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or to attempt to make the website go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU time running Whisper.
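The embedding search at the heart of the product can be sketched in a few lines of Python. The model name below is an assumption (and the modern OpenAI SDK shown here postdates the hackathon); in practice the page vectors would be precomputed and cached rather than embedded per query:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Return one embedding vector per input string."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def top_pages(question, pages, k=3):
    """Rank pages by cosine similarity to the question and keep the best k."""
    page_vecs = embed(pages)
    q = embed([question])[0]
    sims = page_vecs @ q / (np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q))
    return [pages[i] for i in np.argsort(sims)[::-1][:k]]
```

The selected pages are then stuffed into the completion prompt so the language model answers from the most relevant material only.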
## Chess Bird Chess Bird is a web app designed to let you play a game of chess while broadcasting your game on Twitter!
## Inspiration

Motion controls for the Wii remote, smartphones, etc.

## What it does

Uses acceleration data from an accelerometer module to control a tilting maze within a Unity 3D application.

## How we built it

An Arduino Nano board was used to interface with the MPU6050 accelerometer/gyroscope using I2C. The raw acceleration data was sent to the computer using a serial interface (USB). The Unity 3D application requested, received, filtered, and processed the incoming data to turn it into a rotation vector. The maze was built in Blender and imported into Unity 3D. It was then rotated in synchronization with the physical device in the user's hand. The ball on the maze can be rolled around to collect green orbs and navigate the maze.

## Challenges we ran into

Finding the correct Arduino library. Implementing the correct data processing algorithm. Filtering the rotation vector to remove jitter. Dealing with USB connection issues.

## Accomplishments that we're proud of

Getting real-time orientation data based on the direction of gravity using a cheap accelerometer module. Developing the Unity 3D simulation and maze.

## What we learned

Interfacing with an accelerometer module. Converting raw acceleration data to filtered orientation data. Unity 3D application development. Blender 3D modelling. Communication between Arduino and Unity 3D.

## What's next for Accele-maze

Using a smartphone accelerometer for more reliable data, making the device wireless. Continuing to add features to the Unity 3D simulation.
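The core conversion the Unity side performs (raw acceleration to a filtered tilt estimate) is language-agnostic; here is a minimal sketch in Python, with the smoothing constant as an assumption:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (radians) from the gravity direction an accelerometer measures."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def low_pass(prev, new, alpha=0.15):
    """Exponential smoothing to remove the jitter mentioned above; alpha is assumed."""
    return prev + alpha * (new - prev)
```

Because gravity dominates the acceleration reading when the device is held roughly still, these two angles are enough to rotate the maze; the low-pass filter trades a little latency for a much steadier board.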
## Inspiration

**Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms in helping us automate tasks is extreme. **Fairness** has become one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness.

## Problem

Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data on which past candidates received interviews. However, what if, in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in. Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting.

## What is fairness?

There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) between unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicate that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group.

## What our app does

**jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provides interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness.

### Reweighing Algorithm

If the data were unbiased, we would expect the probability of being accepted and the probability of being a woman to be independent (so their joint probability would be the product of the two probabilities). By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight (for the woman + accepted category) as the expected probability divided by the actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1). Otherwise, they are given a lower weight. This formula is applied to the other 3 out of 4 combinations of gender x acceptance. The reweighed sample is then used for training.

## How we built it

We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without.
We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem at prediction time, the candidate is classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier.

## Challenges we ran into

Training and choosing models with appropriate fairness constraints. After reading the relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric.

## Accomplishments that we're proud of

We are proud that we saw tangible differences in the fairness metrics between the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example where the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her.

## What we learned

Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts.

## What's next for jobFAIR

Hopefully we can make the machine learning more transparent to those without a technical background, such as by showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
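AIF360 packages this preprocessing step (as `aif360.algorithms.preprocessing.Reweighing`), but the weight computation itself is simple enough to sketch from scratch. Here is a minimal Python version of the expected/actual rule described above, with hypothetical 0/1 encodings for gender and acceptance:

```python
import numpy as np

def reweighing_weights(protected, label):
    """Kamiran-Calders reweighing: weight = expected joint prob / actual joint prob.

    `protected` and `label` are 0/1 arrays (e.g. woman=1, accepted=1).
    The returned weights can be passed as `sample_weight` when fitting, e.g.,
    sklearn's LogisticRegression.
    """
    protected, label = np.asarray(protected), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (protected == g) & (label == y)
            expected = (protected == g).mean() * (label == y).mean()
            actual = mask.mean()
            if actual > 0:
                weights[mask] = expected / actual
    return weights
```

With biased data, the (woman, accepted) cell has an actual probability below the expected one, so those examples get weights above 1, exactly as the writeup describes.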
I've never made a project with Unity before, so I decided to give it a shot with a maze-based running game. The goal of this project is to support a VR headset and use the gyroscope to make the maze an immersive experience, but in its current stage it is controlled with left and right keyboard input.
## Coinbase Analytics

**Sign in with your Coinbase account, and get helpful analytics specific to your investment.**

See in-depth returns and a simple profit-and-cost analysis for Bitcoin, Ethereum, and Litecoin. We hope to help everyone who uses, or will use, Coinbase to purchase cryptocurrency.
## Inspiration

The cryptocurrency market is an industry which is expanding at an exponential rate. Every day, thousands of new investors of all kinds are getting into this volatile market. With more than 1,500 coins to choose from, it is extremely difficult for those new investors to choose the wisest investment. Our goal is to make it easier for them to select the pearl amongst the sea of cryptocurrencies.

## What it does

To directly tackle the challenge of selecting which cryptocurrency to choose, our website has a compare function which can add up to 4 different cryptos. All of the information on the chosen cryptocurrencies is pertinent and displayed in an organized way. We also have a news feature for investors to follow the trendiest news concerning their precious investments. Finally, we have an awesome bot which will answer any questions the user has about cryptocurrency. Our website is simple and elegant, providing a hassle-free user experience.

## How we built it

We started by building a design prototype of our website using Figma. As a result, we had a good idea of our design pattern, and Figma provided us some CSS code from the prototype. Our front-end is built with React.js and our back-end with Node.js. We used Firebase to host our website. We fetched cryptocurrency data from multiple APIs (CoinMarketCap.com, CryptoCompare.com, and NewsApi.org) using Axios. Our website is composed of three components: the coin comparison tool, the news feed page, and the chatbot.

## Challenges we ran into

Throughout the hackathon, we ran into many challenges. First, since we had a huge amount of data at our disposal, we had to manipulate it very efficiently to keep the website fast and performant. Then, there were many bugs we had to solve when integrating Cisco's widget into our code.

## Accomplishments that we're proud of

We are proud that we built a web app with three fully functional features. We worked well as a team and had fun while coding.

## What we learned

We learned to use many new APIs, including Cisco Spark and Nuance Nina. Furthermore, we learned to always keep a backup plan for when APIs are not working in our favor. The distribution of the work was good; overall, a great team experience.

## What's next for AwsomeHack

* New stats for the crypto compare tool, such as the number of Twitter and Reddit followers, and tracking GitHub commits to gauge the level of development activity.
* Sign in, register, portfolio, and watchlist.
* Support for desktop applications (Mac/Windows) with Electron
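As a hedged sketch of the data-fetching side (the project uses Axios in JavaScript; the equivalent idea is shown here in Python), the endpoint below is CryptoCompare's public multi-price route as an assumption; verify against the current docs before relying on it:

```python
import requests

def fetch_prices(symbols=("BTC", "ETH", "LTC", "XRP"), currency="USD"):
    """Fetch spot prices for up to four coins to feed the comparison view."""
    url = "https://min-api.cryptocompare.com/data/pricemulti"  # assumed endpoint
    params = {"fsyms": ",".join(symbols), "tsyms": currency}
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"BTC": {"USD": ...}, "ETH": {"USD": ...}}

print(fetch_prices())
```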
## Inspiration:

Our whole group had a strong interest in Natural Language Processing, and there has been a lot of excitement about cryptocurrencies recently. We are also very aware of the impact social media can have, and we wanted to show that there is a relationship between tweets about cryptocurrencies and the cryptocurrencies' prices.

## What It Does:

People are often doubtful about analyzing public sentiment to predict stock prices, because people tend to react to market changes rather than cause them. However, we believe that with cryptocurrency, it's the opposite. For example, when public sentiment is highly positive and many people are tweeting good things about Bitcoin, we think that the future value of Bitcoin from that moment would increase. We do the exact same for Ethereum. On our website, we display two graphs: cryptocurrency exchange rate data (price) alongside the public sentiment regarding that cryptocurrency over time. Our program uses a weighted algorithm to determine the overall sentiment of the day, including factors such as the number of retweets, favorites, etc. The base of the total score is the sentiment score of the tweet. Looking at the data, we can see that a high sentiment drives an increase in Bitcoin prices the next day, while a low sentiment causes a decrease the next day.

## How We Built It:

We used CoinAPI to collect hourly exchange rate data for Bitcoin, and the Twitter Search API to collect tweets that mentioned Bitcoin. We then used a Python library to perform sentiment analysis on these tweets. Then, we implemented our own algorithm to combine that sentiment score and other metadata about the tweet into a weighted, overall daily sentiment score. Finally, we built data visualizations using Python's Bokeh library and built the rest of the front end using HTML/CSS/Bootstrap.

## Challenges We Ran Into:

The biggest challenge we ran into was the limits on our API calls. For example, Twitter only gives us access to 7 days of tweets, when it'd be great to see more. Another challenge we faced was embedding a dynamic graph (created in Python) in our HTML code, and we had difficulties deciding which visualizations would best represent our data.

## Accomplishments That We're Proud of:

Aside from solving the challenges we ran into, we're proud of how well we worked together as a team, and we felt that we were very efficient with our time management.

## What We Learned:

We learned many things while working on this project, including collecting market data using cryptocurrency APIs, creating data visualizations using Python libraries, and performing sentiment analysis using TextBlob, another Python library.

## What's Next for CryptoSentiment:

We would like to get premium access to the APIs we utilized in order to have more data to show. It would also allow us to update our information more frequently, which is very important. Another aspect we'd like to improve on is the algorithm for determining the overall score of a tweet. We currently take the original sentiment score from -1 to 1, but it is important to account for retweets, favorites, and perhaps how many followers the user has, to give it a weighted score from -100 to 100. Accounting for these in an accurate way is very difficult, and we need to do more research on how to analyze Twitter data. We'd also like to expand beyond Bitcoin, which simply involves more API calls.
Finally, we would like to utilize machine learning to make predictions about the exchange rates based on our sentiment analysis.
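A minimal sketch of the weighted daily score described above, using TextBlob in Python; the engagement weighting (log of retweets plus favorites) is an assumption standing in for the project's exact formula:

```python
import math
from textblob import TextBlob

def tweet_score(text, retweets, favorites):
    """Polarity in [-1, 1], weighted by engagement so popular tweets count more."""
    polarity = TextBlob(text).sentiment.polarity
    weight = 1.0 + math.log1p(retweets + favorites)  # assumed weighting scheme
    return polarity * weight, weight

def daily_sentiment(tweets):
    """Weighted average polarity for one day's tweets (dicts with text/retweets/favorites)."""
    scored = [tweet_score(t["text"], t["retweets"], t["favorites"]) for t in tweets]
    total_weight = sum(w for _, w in scored)
    return sum(s for s, _ in scored) / total_weight if total_weight else 0.0
```

Using `log1p` keeps a single viral tweet from completely drowning out the rest of the day's sentiment.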
## Inspiration

While doing preliminary research, we found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increase long-term self-worth and decrease depression.

While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media do not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood, and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that they only lasted for the duration of the online class.

With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and to maximize the chance that the user would get heartfelt and intimate replies.

## What it does

Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other.

Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies containing mean-spirited content from being published, while still letting posts of a venting nature through. There is also reCAPTCHA to block bot spamming.

## How we built it

* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: reCAPTCHA v3 and IBM Watson Tone Analyzer
* Cloud: Heroku

## Challenges we ran into

We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)

## Accomplishments that we're proud of

Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*

## What we learned

As our first virtual hackathon, this has been a learning experience in remote collaborative work.

UXer: I feel like I've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kind of things people can code was exciting to see.

## What's next for Reach

If this were a real project, we'd work on implementing VR features for those who missed certain physical spaces.
We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
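The tone gate that lets venting through but blocks hostility can be sketched as follows; this is a minimal Python illustration using IBM's Watson SDK (the project's real backend is Java/Spring Boot), and the anger threshold and service URL are assumptions:

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url("https://api.us-south.tone-analyzer.watson.cloud.ibm.com")

BLOCKED_TONES = {"anger": 0.75}  # hypothetical: block clearly mean-spirited posts only

def may_publish(text: str) -> bool:
    """Allow venting tones (sadness, fear) through; block posts that read as hostile."""
    result = tone_analyzer.tone({"text": text}, content_type="application/json").get_result()
    for tone in result["document_tone"]["tones"]:
        threshold = BLOCKED_TONES.get(tone["tone_id"])
        if threshold is not None and tone["score"] >= threshold:
            return False
    return True
```

Keying the block list on specific tone IDs, rather than overall negativity, is what distinguishes a user venting sadness from one attacking another user.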
## Inspiration

For this hackathon, we wanted to build something that could have a positive impact on its users. We've all been to university ourselves, and we understood the toll stress took on our minds. Demand for mental health services among youth across universities has increased dramatically in recent years. A Ryerson study of 15 universities across Canada shows that all but one university increased their budget for mental health services; the average increase has been 35 per cent. A major survey of over 25,000 Ontario university students done by the American College Health Association found that there was a 50 per cent increase in anxiety, a 47 per cent increase in depression, and an 86 per cent increase in substance abuse since 2009.

This can be attributed to the increasingly competitive job market that doesn't guarantee you a job if you have a degree, increasing student debt and housing costs, and a weakening Canadian middle class and economy. It can also be attributed to social media, where youth are becoming increasingly digitally connected to environments like Instagram. People on Instagram only share the best, the funniest, and most charming aspects of their lives, while leaving the boring beige stuff like the daily grind out of it. This indirectly perpetuates the false narrative that everything you experience in life should be easy, when in fact, life has its ups and downs.

## What it does

One good way of dealing with overwhelming emotion is to express yourself. Journaling is an often overlooked but very helpful tool, because it can help you manage your anxiety by helping you prioritize your problems, fears, and concerns. It can also help you recognize your triggers and learn better ways to control them. This brings us to our application, which firstly lets users privately journal online. We implemented the IBM Watson API to automatically analyze the journal entries. Users receive automated tonal and personality data which can indicate whether they're feeling depressed or anxious. It is also key to note that medical practitioners only have access to the results, and not the journal entries themselves. This is powerful because it takes away a common anxiety felt by patients, who are reluctant to take the first step in healing themselves because they may not feel comfortable sharing personal and intimate details up front.

MyndJournal allows users to log on to our site and express themselves freely, exactly as if they were writing a journal. The difference being, every entry in a person's journal is sent to IBM Watson's natural language processing tone analysis APIs, which generate a data-driven picture of the person's mindset. The results of the API are then rendered into a chart to be displayed to medical practitioners. This way, all the user's personal details/secrets remain completely confidential, while still providing enough data to counsellors to allow them to take action if needed.

## How we built it

On the back end, all user information is stored in a PostgreSQL users table, and all journal entry information is stored in a results table. This aggregate data can later be used to detect trends in university lifecycles. The EJS templating engine is used to render the front end. After user authentication, a journal entry, when submitted, is sent to the back end to be fed asynchronously into all the IBM Watson language processing APIs. The results are then stored in the results table, associated with a user\_id (a one-to-many relationship).
Data is pulled from the database, serialized, and displayed intuitively on the front end. All data is persisted.

## Challenges we ran into
Rendering the data into a chart that was both visually appealing and provided clear insights. Storing all API results in the database and creating join tables to pull data out.

## Accomplishments that we're proud of
Building an entire web application within 24 hours. Data is persisted in the database!

## What we learned
IBM Watson APIs, Chart.js, and the differences across the full tech stack and how everything works together.

## What's next for MyndJournal
A key feature we wanted to add was for the web app to automatically book appointments with appropriate medical practitioners (like nutritionists or therapists) if the tonal and personality results came back negative. This would streamline the appointment-making process and make it easier for people to get access and gain referrals. Another feature we would have liked to add was for universities to gain insight into which courses or programs are causing the most problems for the most students, so that policymakers, counsellors, and people in authoritative positions could make proper decisions and allocate resources accordingly. Funding please
## Inspiration
In this time of the pandemic, most of us are stuck in our own homes. Not all people can cope with this "new normal" state of not being able to go outside much. We have realized that the emotional factor in life is essential. In the Philippines, mental health issues are not taken seriously. In this project, we created a web app where people can vent and seek advice from trusted volunteers and have a meaningful talk. With that in line, we came up with Safe Space, which also means a place or environment where a person or category of people can feel confident that they will not be exposed to discrimination, criticism, harassment, or any other emotional or physical harm.

## What it does
This project is a user-friendly web application for everyone who wants to talk about life without criticism and for those who seek a place where everyone can vent and have a meaningful talk. Three services are in the web app: talk, mood booster, and inspire. These three categories provide essential functions for helping a person cope with their struggles in life. The talk service pairs users with a volunteer to talk things through and get advice. The mood booster shows memes that you can relate to and have fun with. Lastly, the inspire section lets users post inspirational messages.

## How we built it
We collaborated using Repl.it to build the web app interface using HTML5, CSS, and JavaScript, while the talk service is built with React.

## Challenges we ran into
A lot of challenges came up in creating this project. The first was time zones: it was difficult to adjust and make our schedules compatible. The second was internet speed and connectivity, because our internet service providers had frequent issues. The last was brainstorming what we would like to build that could help others while being fun to build at the same time.

## Accomplishments that we're proud of
Our group consists of beginner programmers, and being in this hackathon is itself an accomplishment for us, having created an idea and built it within a short amount of time.

## What we learned
We learned a lot in this hackathon, explored different technologies, and had a great time learning from the workshops, not just about programming but also about various things from the mini-events in TreeHacks.

## What's next for Safe Space
A community that helps improve mental health awareness and advocates for mental health issues.
## Inspiration
There were two primary sources of inspiration. The first was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading). The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, in noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.

## What it does
The user presses a button on the side of the glasses to begin recording, and presses it again to end recording. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and sends a POST request to a web server along with the uploaded file name. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds the result into a transformer network that transcribes the video. When finished, a front-end web application is notified through socket communication, and it streams the video from Google Cloud and displays the transcription output from the back-end server.

## How we built it
The hardware platform is a Raspberry Pi Zero interfaced with a Pi camera. A Python script runs on the Raspberry Pi to listen for GPIO input, record video, upload to Google Cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python; it runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front end is implemented using React in JavaScript.
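As an illustration of the face-detection step in that pipeline, here is a minimal sketch, assuming OpenCV's bundled Haar cascade; the crop size and detector parameters are placeholders, not the project's exact settings:

```python
# Hypothetical sketch: crop the speaker's face from each video frame with a
# Haar cascade before handing the region to the lip-reading transformer.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_crops(video_path: str):
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        for (x, y, w, h) in faces:
            # Resize to the fixed input size the network expects (illustrative).
            yield cv2.resize(frame[y:y + h, x:x + w], (160, 160))
    cap.release()
```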
## Challenges we ran into
* TensorFlow proved difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB tethering with a mobile device

## Accomplishments that we're proud of
* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* The design of the glasses prototype

## What we learned
* How to set up a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to set up a web server through localhost tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between components such as hardware, back-end, and front-end

## What's next for Synviz
* With a stronger on-board battery, a 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed, which could be implemented on an existing platform like North's Focals to deliver a promising business appeal
## Inspiration
Inspired by a team member's desire to study for his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist in any app on the market, we realized there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would all benefit from a highly personalized app.

## What it does
Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (it only needs to be a few seconds long) and a PDF of a textbook, and uses existing deepfake technology to synthesize the dictation of the textbook in the user's favorite voice. The deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding from a voice sample only a few seconds long, which characterizes the unique features of the voice. This embedding is then used in conjunction with a seq2seq synthesis network that generates a mel spectrogram from the text (obtained via optical character recognition on the PDF). Finally, the mel spectrogram is converted into the time domain via the WaveRNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details; a sketch of this pipeline appears at the end of this writeup). The user then automatically downloads a .WAV file of their favorite voice reading the PDF contents!

## How we built it
We combined a number of different APIs and technologies to build this app. For scalable machine learning and compute, we relied heavily on the Google Cloud APIs, including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing deepfake code written for Python and TensorFlow (see the GitHub repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced the web server together with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app manipulates.

## Challenges we ran into
Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front end initially seemed trivial (what's more to it than a page with two upload buttons?), but the intricacies of communicating with Google Cloud meant we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the back end, 10 excruciating hours were spent (successfully) integrating the existing deepfake/voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process there was much learning.

## Accomplishments that we're proud of
We're immensely proud of piecing all of these disparate components together quickly and arriving at a functioning build. What started out as merely an idea manifested itself into a usable app within hours.

## What we learned
We learned that sometimes the seemingly simplest things (like dealing with Python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful.
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products.

## What's next for EduVoicer
EduVoicer still has a long way to go before it can gain users. Our first step is to implement functionality, possibly with some image segmentation techniques, to decide which parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, handling only a single-page PDF. Thus, we plan to increase efficiency (time-wise) and scale the app by splitting PDFs into fragments, processing them in parallel, and collating the individual text-to-speech outputs before returning them to the user. In the same vein, the voice-cloning algorithm was restricted by the length of the input text, so this is another area we seek to scale and parallelize. Finally, we are considering some server-side caching mechanisms to reduce waiting time for the output audio file.
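For reference, the encoder/synthesizer/vocoder flow described under "What it does" maps closely onto the forked repo's demo script. The sketch below follows that structure; the model paths are placeholders, and exact module names may differ between repo versions:

```python
# Hypothetical sketch of the cloning pipeline, after the style of the
# Real-Time-Voice-Cloning demo script the project forked.
from pathlib import Path
import soundfile as sf
from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

encoder.load_model(Path("encoder/saved_models/pretrained.pt"))   # placeholder paths
synthesizer = Synthesizer(Path("synthesizer/saved_models/pretrained"))
vocoder.load_model(Path("vocoder/saved_models/pretrained.pt"))

def clone(voice_sample_path: str, textbook_text: str, out_path: str) -> None:
    # 1. Encoder: a few seconds of audio -> fixed voice embedding.
    wav = encoder.preprocess_wav(voice_sample_path)
    embedding = encoder.embed_utterance(wav)
    # 2. Synthesizer: (text, embedding) -> mel spectrogram.
    spec = synthesizer.synthesize_spectrograms([textbook_text], [embedding])[0]
    # 3. Vocoder: mel spectrogram -> time-domain waveform.
    waveform = vocoder.infer_waveform(spec)
    sf.write(out_path, waveform, Synthesizer.sample_rate)
```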
## Inspiration
There are thousands of people worldwide who suffer from conditions that make it difficult for them both to understand speech and to speak for themselves. According to the Journal of Deaf Studies and Deaf Education, the loss of a personal form of expression (through speech) can impact affected individuals' internal stress and lead to detachment. One of our main goals in this project was to address this problem by developing a tool that would be a step forward in the effort to make it seamless for everyone to communicate. By exclusively utilizing mouth movements to predict speech, we can introduce a unique modality for communication. While developing this tool, we also realized how helpful it would be to ourselves in daily usage. In areas of commotion, or while our hands are busy, the ability to simply mouth words naturally in front of a camera and have them transcribed to text would make communication much easier.

## What it does
**The speakinto.space website-based hack has two functions: first and foremost, it is able to 'lip-read' a video stream from the user (discarding the audio) and transcribe it to text; secondly, it is capable of mimicking one's speech patterns to generate accurate vocal recordings of user-inputted text with very little latency.**

## How we built it
We have a Flask server running on an AWS server (thanks for the free credit, AWS!), which is connected to a machine learning model running on the server, with a front end made with HTML and MaterializeCSS. The model was trained to transcribe people mouthing words, using the millions of words in the LRW and LRS datasets (from the BBC and TED). This algorithm's integration is the centerpiece of our hack. We used the HTML MediaRecorder to take 8-second clips of video to initially implement the video-to-mouthed-words function on the website, as a direct application of the machine learning model. We later added an encoder model to translate audio into an embedding containing vocal information, and then a decoder to convert the embeddings to speech. To convert the text from the first function to speech output, we use the Google Text-to-Speech API (see the sketch at the end of this writeup), and this would be the main point of future development of the technology, in having noiseless calls.

## Challenges we ran into
The machine learning model was quite difficult to create and required a large amount of testing (and caffeine) to finally arrive at a model that was fairly accurate for visual analysis (72%). Preprocessing and formatting such a large amount of data to train the algorithm took the most time, but it was extremely rewarding when we finally saw our model begin to train.

## Accomplishments that we're proud of
Our final product is much more than any of us expected, especially given that it seemed like an impossibility when we first started. We are very proud of the optimizations that were necessary to make the webpage run fast enough to be viable in an actual use scenario.

## What we learned
Developing such a wide array of computing concepts, from web development to statistical analysis to the development and optimization of ML models, was an amazing learning experience over the last two days. We all learned so much from each other, as each one of us brought special expertise to our team.

## What's next for speakinto.space
As a standalone site, it has its use cases, but they are limited by the need to navigate to the page.
The next steps are to integrate it with other services, such as Facebook Messenger or the Google Keyboard, to make it available whenever it is needed, just as conveniently as its inspiration.
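Since the text-to-speech step is the most self-contained piece of the pipeline, here is a minimal sketch of calling the Google Text-to-Speech API as described above; the voice settings and output path are illustrative defaults, not the project's exact configuration:

```python
# Hypothetical sketch: turn the lip-read transcript into audio with
# Google Cloud Text-to-Speech.
from google.cloud import texttospeech

def synthesize(transcript: str, out_path: str = "speech.mp3") -> None:
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=transcript),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US",
            ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)
```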
## Inspiration "My Little Web Dev" was inspired by the simplicity and beginner-friendly nature of Scratch. We wanted to make a product that had the same approachability, but tackled a more complex topic like web development. Furthermore, we wanted to build a project that allows children in marginalized groups to shrink the gap in early coding confidence so that all children can learn to code on a level playing field. ## What it does This educational tool allows users to drag and drop code blocks into a canvas, and link them together to create a web application. Furthermore, the app also allows users to save their work into a file and load it into the canvas when they decide to continue working on their project. ## How we built it We created the "My Little Web Dev" webpage using Next.js and React. We then used a library called Blockly to build the code blocks and parsed the blocks into HTML using JavaScript. ## Challenges we ran into We ran into some issues with building our own HTML-specific code blocks and parsing them properly into HTML. ## Accomplishments that we're proud of We're very proud of how we integrated Blockly with Next.js, and how we parsed the block code into HTML. ## What we learned We learned how to a lot about building websites with Next.js, rendering HTML based on block code, and working with Blockly. ## What's next for My Little Web Dev Now that we've created a project encapsulates frontend development into block code, we hope to extend the project to also cover backend development, evolving the app into "My Little Fullstack Dev".
# BlockOJ

> Boundless creativity.

## What is BlockOJ?
BlockOJ is an online judge built around Google's Blockly library that teaches children how to code. The library allows us to implement a code editor that lets the user program with various blocks (function blocks, variable blocks, etc.).

![Figure 1. Image of BlockOJ Editor](https://i.imgur.com/UOmBhL4.png)

On BlockOJ, users can sign up and use our LEGO-like code editor to solve instructive programming challenges! Solutions can be verified by pitting them against numerous test cases hidden in our servers :) -- simply click the "submit" button and we'll take care of the rest. Our lightning-fast judge, painstakingly written in C, will provide instantaneous feedback on the correctness of your solution (i.e. how many of the test cases did your program evaluate correctly?).

![Figure 2. Image of entire judge submission page](https://i.imgur.com/N898UAw.jpg)

## Inspiration and Design Motivation
Back in late June, our team came across the article announcing the "[new Ontario elementary math curriculum to include coding starting in Grade 1](https://www.thestar.com/politics/provincial/2020/06/23/new-ontario-elementary-math-curriculum-to-include-coding-starting-in-grade-1.html)." During Hack The 6ix, we wanted to build a practical application that could help our hard-working elementary school teachers deliver the coding aspect of this new curriculum. We wanted a tool that was

1. Intuitive to use,
2. Instructive, and most important of all
3. Engaging

Using the Blockly library, we were able to implement a code editor that resembles building with LEGO: the block-by-block assembly process is **procedural**, and children can easily see the **big picture** of programming by looking at how the blocks interlock with each other. Our programming challenges aim to gamify learning, making it less intimidating and more appealing to younger audiences. Not only will children using BlockOJ **learn by doing**, but they will also slowly accumulate basic programming know-how through our carefully designed sequence of problems.

Finally, not all our problems are easy. Some are hard (in fact, the problem in our demo is extremely difficult for elementary students). In our opinion, it is beneficial to mix one or two difficult challenges into problem sets, for they give children the opportunity to gain valuable problem-solving experience. Difficult problems also pave room for students to engage with teachers. Solutions are saved, so children can easily come back to a difficult problem after they gain more experience.

## How we built it
Here's the tl;dr version.

* AWS EC2
* PostgreSQL
* NodeJS
* Express
* C
* Pug
* SASS
* JavaScript

*We used a link shortener for our "Try it out" link because DevPost doesn't like URLs with ports.*
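The judge itself is written in C, but the test-harness idea is language-agnostic. Here is a hedged Python sketch of the compare loop; the executable path and test cases are illustrative, not BlockOJ's actual judge:

```python
# Hypothetical sketch (the real BlockOJ judge is written in C): run a
# submitted program against hidden test cases and count correct outputs.
import subprocess

def judge(executable: str, cases: list[tuple[str, str]],
          time_limit: float = 2.0) -> tuple[int, int]:
    passed = 0
    for stdin_data, expected in cases:
        try:
            result = subprocess.run(
                [executable], input=stdin_data, capture_output=True,
                text=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            continue  # exceeding the time limit counts as a failed case
        if result.stdout.strip() == expected.strip():
            passed += 1
    return passed, len(cases)

# e.g. judge("./solution", [("1 2\n", "3"), ("5 7\n", "12")]) -> (2, 2)
```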
## Inspiration
We were inspired by dog lovers like us: curious and fascinated whenever we see a cool dog, but also eternally perplexed by the sheer number of dog breeds that exist.

## What it does
What's that Doggo enables anyone to upload a picture of a dog, and then identifies exactly what breed the dog is!

## How we built it
The front end of the app was built using React.js. Using an open-source labeled dataset of over 20,000 dog pictures and 120 breeds, we trained a deep learning model hosted on AWS SageMaker notebook cluster instances, utilizing Keras, scikit-learn, and TensorFlow. We trained two convolutional neural networks with two different architectures and then combined their outputs in a logistic regression classifier to create a final multiclass classification model (a sketch of this stacking step appears at the end of this writeup). We then deployed this model on AWS SageMaker to expose an endpoint for our front end to send requests to and gather responses.

We used AWS for most of our infrastructure. We created an API endpoint for the front end, which enabled the web app to send PUT requests to the endpoint, triggering a Lambda function that put the input picture onto S3. This in turn triggered our deployed SageMaker model, which ran the model, gathered a prediction, and pushed a .json output to S3. The web app then gathered the output and displayed the prediction and confidence interval to the user.

## Challenges
We ran into some challenges setting up our AWS stack, including S3 bucket permissions, API Gateway endpoints, and SageMaker deployments. Moreover, training our model was difficult given the size of the dataset we were using; we would often run out of memory on our clusters, and were thus forced to compress our images before training to overcome that barrier. The data pipelines between AWS and our front end were also a challenge.

## Accomplishments
We're proud of the architecture of our convolutional neural networks and our data preprocessing, as we were able to compress the data significantly (to 299 × 299) and still achieve a classification accuracy of 89.8% and a log loss of 0.30.

## What we learned
We learned a lot about the general AWS setup procedures, including ACLs and IAM, and specifically how to utilize SageMaker to deploy a trained machine learning model. We also learned more about various techniques and architectures for optimizing the performance of a CNN.

## Surprises
While experimenting with training photos, we realized that photos of humans would still produce some classification, albeit with low confidence across all dog breeds. Rather than returning a "The system does not believe this is a dog, please submit a different photo" error, we took advantage of the moment -- now, you can use What's that Doggo? to identify the dog that you most look like. Or your friends, family, and even your boss!

## What's next for What's that Doggo?
After adding support for canine crossbreeds, we believe the only logical next step would be to aggressively classify the feline species.
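As an illustration of the two-CNN-plus-logistic-regression stacking described under "How we built it", here is a minimal sketch; the function names and the assumption that each CNN emits per-breed probabilities are simplified stand-ins, not the team's exact networks:

```python
# Hypothetical sketch: combine two CNNs' class probabilities with a
# logistic-regression meta-classifier for the final breed prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta_classifier(cnn_a, cnn_b, X_train, y_train):
    # Each Keras CNN outputs an (n_samples, 120) matrix of breed probabilities.
    features = np.concatenate(
        [cnn_a.predict(X_train), cnn_b.predict(X_train)], axis=1)
    meta = LogisticRegression(max_iter=1000)
    meta.fit(features, y_train)
    return meta

def predict_breed(cnn_a, cnn_b, meta, X):
    features = np.concatenate([cnn_a.predict(X), cnn_b.predict(X)], axis=1)
    return meta.predict_proba(features)  # confidence per breed
```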
## Inspiration
Helping people who are visually and/or hearing impaired have better and safer interactions.

## What it does
The sensor beeps when the user comes too close to an object or to a hot beverage or food. The sign language recognition system translates sign language from a hearing-impaired individual into English for a caregiver. The glasses capture pictures of the surroundings and convert them into speech for a visually impaired user.

## How we built it
We used Microsoft Azure's vision API, OpenCV, scikit-learn, NumPy, and Django + REST Framework to build the technology.

## Challenges we ran into
Making sure the computer recognizes the different signs.

## Accomplishments that we're proud of
Making a glove with a sensor that helps the user navigate their path, recognizing sign language, and converting images of surroundings to speech.

## What we learned
Different technologies such as Azure and OpenCV.

## What's next for Spectrum Vision
Hoping to gain more funding to increase the scale of the project.
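As a hedged sketch of the glasses' picture-to-speech flow: the writeup names the Azure vision API but not a speech engine, so pyttsx3 stands in here for the speech half, and the endpoint and key are placeholders:

```python
# Hypothetical sketch: caption a captured frame with Azure Computer Vision,
# then speak the caption aloud for the visually impaired user.
import pyttsx3
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://YOUR_REGION.api.cognitive.microsoft.com",  # placeholder endpoint
    CognitiveServicesCredentials("YOUR_KEY"))           # placeholder key

def describe_and_speak(image_path: str) -> None:
    with open(image_path, "rb") as image:
        analysis = client.describe_image_in_stream(image)
    caption = (analysis.captions[0].text if analysis.captions
               else "nothing recognized")
    engine = pyttsx3.init()
    engine.say(caption)
    engine.runAndWait()
```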
## Inspiration
We realized how hard it is for visually impaired people to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible!

## What it does
This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" the Assistant can do. The Assistant provides voice directions (which pair easily with Bluetooth devices) and the sensors help in avoiding obstacles, which increases self-awareness. Another beta feature was to identify moving obstacles and play sounds so the person can recognize those moving objects (e.g. barking sounds for a dog).

## How we built it
It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone.

## Challenges we ran into
It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two members are high school students. Multi-threading in an embedded architecture was also a challenge for us.

## Accomplishments that we're proud of
After hours of grinding, we got the Raspberry Pi working, implemented depth perception and location tracking using Google Assistant, and added object recognition.

## What we learned
Working with hardware is tough: even though you can see what is happening, it is hard to interface software and hardware.

## What's next for i4Noi
We want to explore more ways i4Noi can make things more accessible for blind people. Since we already have Google Cloud integration, we could extend our other feature of playing sounds for living obstacles so special care can be taken; for example, when a dog comes in front of the user, we produce barking sounds to alert them. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference in people's lives.
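To illustrate the sensor-plus-buzzer loop described above: the writeup doesn't name the exact depth sensor, so this sketch assumes a common HC-SR04 ultrasonic module, and the pin numbers and 50 cm threshold are placeholders:

```python
# Hypothetical sketch: beep the buzzer when an obstacle is within ~50 cm,
# assuming an HC-SR04 ultrasonic sensor wired to the Raspberry Pi.
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 25  # placeholder BCM pin numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm() -> float:
    # Fire a 10-microsecond trigger pulse, then time the echo.
    GPIO.output(TRIG, True)
    time.sleep(1e-5)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound, round trip

try:
    while True:
        GPIO.output(BUZZER, distance_cm() < 50)  # beep within ~50 cm
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```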
## Inspiration
I've played the game Ricochet Robots before and thought it would be a cool project to try to replicate, but with a twist!

## What it does
Given a pre-designed game board layout (in the form of a txt file), a player can choose a player character (a dot, for example!). They can then choose a starting position and decide whether the puzzle should be solved automatically ("Auto") or figured out interactively by themselves ("Inter").

## How I built it
I wrote the code in C++ using the std library.

## Challenges I ran into
I ran into a few challenges with understanding an efficient way to find solutions to puzzles. I ended up learning a lot about graph search algorithms like BFS and DFS.

## Accomplishments that I'm proud of
It works!

## What I learned
I've learned a lot about debugging trees.

## What's next for Ricochet Robots -- Interactive and Auto-solver!
I originally wanted to integrate a NeoPixel matrix + Arduino so you could visualize the game better and be more engaged; however, I ran out of time.
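The solver itself is written in C++; as a hedged illustration of the BFS idea behind the auto-solver, here is a Python sketch, simplified to a single robot that slides in a direction until it hits a wall or the board edge:

```python
# Hypothetical sketch: BFS over board positions with Ricochet Robots-style
# sliding moves; returns the shortest move sequence to the goal.
from collections import deque

def solve(board, start, goal):
    """board[r][c] is True for walls; start/goal are (row, col) tuples."""
    rows, cols = len(board), len(board[0])
    moves = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

    def slide(pos, direction):
        r, c = pos
        dr, dc = moves[direction]
        # Keep moving until the next cell is off the board or a wall.
        while 0 <= r + dr < rows and 0 <= c + dc < cols and not board[r + dr][c + dc]:
            r, c = r + dr, c + dc
        return (r, c)

    queue, seen = deque([(start, [])]), {start}
    while queue:
        pos, path = queue.popleft()
        if pos == goal:
            return path  # e.g. ["R", "D", "L"]
        for direction in moves:
            nxt = slide(pos, direction)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [direction]))
    return None  # unreachable
```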
## Inspiration
Our spark to tackle this project was ignited by a teammate's immersive internship at a prestigious cardiovascular research society, where they served as a dedicated data engineer. Their firsthand encounters with the intricacies of healthcare data management and the pressing need for innovative solutions led us to the product we present to you here. Additionally, our team members drew motivation from a collective passion for pushing the boundaries of generative AI and natural language processing. As technology enthusiasts, we were collectively driven to harness the power of AI to revolutionize the healthcare sector, ensuring that our work would have a lasting impact on improving patient care and research. With these varied sources of inspiration fueling our project, we embarked on a mission to develop a cutting-edge application that seamlessly integrates AI and healthcare data, ultimately paving the way for advancements in data analysis and processing with generative AI in the healthcare sector.

## What it does
Fluxus is an end-to-end workspace for data processing and analytics for healthcare workers. We leverage LLMs to translate text to SQL, with preprocessing specifically tuned to handle InterSystems IRIS SQL syntax. We chose InterSystems as our database for storing electronic health records (EHRs) because it enabled us to leverage IntegratedML queries. Not only can healthcare workers generate fully functional SQL queries for their datasets with simple text prompts, they can now also perform instantaneous predictive analysis on datasets with no effort. The power of AI is incredible, isn't it?

For example, a user can simply type in "Calculate the average BMI for children and youth from the Body Measures table." and our app will output "SELECT AVG(BMXBMI) FROM P_BMX WHERE BMDSTATS = '1';", which you can run directly on the built-in InterSystems database. With InterSystems IntegratedML, the simple input "create a model named DemographicsPrediction to predict the language of ACASI Interview based on age and marital status from the Demographics table." makes our app output "CREATE MODEL DemographicsPrediction PREDICTING (AIALANGA) FROM P_DEMO TRAIN MODEL DemographicsPrediction VALIDATE MODEL DemographicsPrediction FROM P_DEMO SELECT * FROM INFORMATION_SCHEMA.ML_VALIDATION_METRICS;" to instantly create, train, and validate an ML model on which you can perform predictive analysis with IntegratedML's "PREDICT" command. It's THAT simple!

Researchers and medical professionals working with big data no longer need to worry about the intricacies of SQL syntax, the obscurity of healthcare record formatting (column and table names that give little information), or the need to manually dive into large datasets to find what they're looking for. With simple text prompts, data processing becomes an effortless task, and predictive modelling with ML models becomes equally effortless. See how tables come together, without having to browse through large datasets, with our DAG visualizations of connected tables/schemas.

## How we built it
Our project incorporated a multitude of components, and it was both overwhelming and satisfying to see so many parts come together.

Frontend: The front end was developed in Vue.js and utilizes many modern component libraries to give off a friendly UI.
We also incorporated a visualization tool using third-party graph libraries to draw directed acyclic graph (DAG) workflows between tables, showing the connection from one table to another that is created by querying the original table. To show this workflow in real time, we used a SQL parser (node-sql-parser) to extract the list of source tables used in the LLM-generated query and rendered DAGs to visually represent those source tables in connection to the newly modified or created table.

Backend: We used Flask for the back end of our web service, handling multiple API endpoints for our data sources and the LLM/prompt engineering functionality.

InterSystems: We connected an IRIS InterSystems database to our application and loaded it with healthcare data, leveraging InterSystems' Python connector libraries.

LLMs: We originally looked into OpenAI's Codex models and their integration, but ultimately worked with GPT-3.5 Turbo, which made it easy to fine-tune on our data (to a certain degree) so our LLM could read prompts and generate syntactically accurate queries with a high degree of accuracy. We wrapped the LLM and prompt engineering preprocessing features as an API endpoint to integrate with our backend.

## Challenges we ran into
* LLMs are not as magical as they look. There was nothing available to train on for the kinds of datasets used in healthcare. We had to manually push entire database schemas for our LLM to recognize and attempt to fine-tune on in order to get queries that were accurate. This was labour-intensive manual work with a lot of frustrating failures while trying to fine-tune both current and legacy OpenAI models. Ultimately we reached a promising result that delivered a solid degree of accuracy with some fine-tuning.
* Integrating everything together: putting together countless API endpoints (it honestly felt like writing production code at a certain point), hosting the frontend, and wrapping the LLM as an API endpoint. There are definitely pain points that still need to be addressed, and we plan to make this a long-term project, identifying the bottlenecks we didn't have time to address within these 24 hours while simultaneously expanding the application.

## Accomplishments that we're proud of
We were all aware of how much we aimed to get done in a mere span of 24 hours. It seemed near impossible. But we were all on a mission, with the drive to bring a whole new experience to data analytics and processing in the healthcare industry by leveraging the incredible power of generative AI. The satisfaction of seeing our LLM work, fine-tuning hundreds of lines of manually configured data and having it accurately give us queries for IRIS (including IntegratedML queries), the front end coming to life, the countless API endpoints working, and the integration of all our services into a highly functional application. Our team came together from different parts of the globe for this hackathon, but we instantly clicked as a team and made the most of these past 24 hours, powering through day and night to deliver this product.

## What we learned
Just how insane AI honestly is. A lot about SQL syntax, working with InterSystems, the highs and lows of generative AI, and about all there is to know about current natural-language-to-SQL approaches leveraging generative AI, thanks to 5+ research papers.
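To show the shape of the text-to-SQL call described under "How we built it", here is a minimal sketch using the pre-1.0 openai Python package's chat completion interface; the schema snippet and system prompt are illustrative stand-ins, not our production prompt:

```python
# Hypothetical sketch: translate a plain-text request into IRIS SQL by
# grounding GPT-3.5 Turbo in the table schema through the system prompt.
import openai

SCHEMA = "P_BMX(BMXBMI, BMDSTATS); P_DEMO(AIALANGA, ...)"  # illustrative snippet

def text_to_sql(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic SQL generation
        messages=[
            {"role": "system",
             "content": "You translate questions into InterSystems IRIS SQL, "
                        "including IntegratedML statements. Schema: " + SCHEMA},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]
```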
## What's next for Fluxus
* Develop an admin platform so users can put in their own datasets
* Fine-tune the LLM for larger schemas and more prompts
* Buying a hard drive
## Inspiration
When we joined the hackathon, we began brainstorming about problems in our lives. After discussing constant struggles with many friends and family, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body. As a result, we decided to create a user-friendly multi-modal model that can identify skin discomfort from a simple picture. Then, through accessible communication with a dermatologist-like chatbot, users can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families who struggle with insurance money or with finding the time to visit and wait for a doctor, it is an accessible way to immediately understand the blemishes that appear on one's skin.

## What it does
The app is a skin-detection model that detects skin diseases from pictures. Through a multi-modal neural network trained on thousands of data entries from actual patients, we attempt to identify the disease. We then provide users with information on their disease, recommendations on how to treat it (such as using specific-SPF sunscreen or over-the-counter medications), and, finally, their nearest pharmacies and hospitals.

## How we built it
Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. After finding a diverse dataset of roughly 2,000 patients with multiple diseases, we implemented a multi-modal neural network. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating both clinical and image data to predict possible skin conditions (a sketch of this architecture appears at the end of this writeup). Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we are making strides toward personalized medicine.

## Challenges we ran into
The first challenge we faced was finding appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, including the dermatology and weighted dermatology labels. We also encountered overfitting on the training set, so we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We chose the most suitable epoch count by plotting loss vs. epoch and accuracy vs. epoch. Another challenge involved the free Google Colab TPU, which we resolved by switching between devices. Last but not least, we had problems with our chatbot outputting random text that tended to hallucinate in response to specific inputs. We fixed this by grounding its output in the information the user gave.

## Accomplishments that we're proud of
We are all proud of the model we trained and put together, as this project had many moving parts.
This experience has had its fair share of learning moments and pivots. However, through a great deal of discussion about how to adequately address our problem, and by supporting each other, we came up with a solution. Additionally, in the past 24 hours, we learned a lot about thinking quickly on our feet and moving forward. Last but not least, we all bonded so much with each other over these past 24 hours. We've seen each other struggle and grow; this experience has been deeply gratifying.

## What we learned
One thing we learned from this experience was how to use prompt engineering effectively and how to ground an AI model in user information. We also learned how to feed multi-modal data into a combined convolutional and feed-forward neural network, and in general we gained more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience by building a comprehensive model like SkinSkan, we also solved a real-world problem. From learning about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to try the app on our own skin and several friends' skin using a simple smartphone camera to validate the model's performance. It's so gratifying to see the work we've built being put to use and benefiting people.

## What's next for SkinSkan
We are incredibly excited about the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people catch conditions they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could become a viable tool that hospitals around the world use to direct patients to the right treatment plan. Lastly, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services to people of all backgrounds.
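Here is a hedged Keras sketch of the multi-modal architecture described under "How we built it", fusing a ResNet image branch with a feed-forward branch for clinical features; the input shapes, layer sizes, clinical-field count, and 20-class head are illustrative, not our tuned values:

```python
# Hypothetical sketch: fuse a ResNet image encoder with a clinical-feature
# branch, then classify the skin condition from the joint representation.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

image_in = layers.Input(shape=(224, 224, 3))
clinical_in = layers.Input(shape=(16,))  # illustrative number of clinical fields

# Image branch: pretrained ResNet backbone with global average pooling.
backbone = ResNet50(include_top=False, weights="imagenet", pooling="avg")
img_feat = backbone(image_in)

# Clinical branch: a small feed-forward encoder for tabular data.
clin_feat = layers.Dense(64, activation="relu")(clinical_in)

# Fusion and classification head.
fused = layers.Concatenate()([img_feat, clin_feat])
hidden = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(20, activation="softmax")(hidden)  # illustrative class count

model = Model([image_in, clinical_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```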
## Inspiration
This project came to us when one of our teammates mentioned his grandma struggling to keep track of her cholesterol and the medications she was taking to lower it. We realized that to help alleviate this, we would need to approach the problem from both sides. High blood cholesterol causes **4.4 million** deaths each year (World Heart Federation, 2019), and other nutrient deficiencies and surpluses take many more. By leveraging new multimodal LLMs, we set out to solve this complex and multi-faceted problem.

## What it does
MultiMed Vision+ allows users to track both their medications and their nutrients with the snap of a picture. It then uses the scanned information to generate advice in the context of the user's current health situation. MultiMed Vision+ integrates with our Raspberry Pi "watch", desktop app, and mobile app, easing access and improving user-friendliness for this demographic. The project comprises several key components:

* Integration of prescription and nutrition data: incorporating scanned prescriptions and food items to provide personalized recommendations based on individual health contexts, and analyzing prescription data to offer tailored health advice and medication-adherence reminders.
* Smartwatch integration: facilitating easy access to health data without the need for a smartphone, and streamlining the monitoring of vital health indicators for elderly individuals.
* User-friendly interface: an intuitive and straightforward interface tailored specifically to the needs of older users, offering clear and concise advice on dietary choices and providing real-time health monitoring.
* Real-time sensor data analysis: machine learning models integrated with real-time sensor data to predict the risk of heart attacks, providing timely alerts and notifications to both the user and their family members, enabling proactive health management and intervention.

## How we built it
Our project is split into a frontend and a backend stack. The front end includes all of our UI/UX designs and uses Next.js + TypeScript, with authentication from Firebase + Clerk.js and Tailwind CSS for styling. In the backend, we have our machine learning pipeline in Python as well as our API routes through FastAPI. We utilized OpenAI, Azure AI, Hugging Face, and InterSystems IntegratedML.

## Challenges we ran into
One of the main issues we ran into was understanding and integrating InterSystems into our product. Since this was the first time our entire team was working with the IntegratedML tool, we had to spend quite a bit of time debugging and reading the tool's documentation to understand how we could implement this pipeline.

## Accomplishments that we're proud of
We were able to simulate a real-life scenario where a user can seamlessly scan their prescriptions and dishes from either their smartwatch (simulated through a Raspberry Pi) or their mobile phone. Integrating our tool into a wearable device allows a user to go about their entire day while also keeping track of their health in just a few seconds. We were able to hyper-personalize our context window and integrate an ML model so that our tool can give reliable insights to users based on their pre-existing conditions and eating habits.

## What we learned
We learned a lot about integrations with different systems and models. Specifically, we learned how to use InterSystems as well as how to integrate with a Raspberry Pi.
## What's next for MultiMed Vision+
The next step for MultiMed Vision+ is a full launch: expanding our data sources and improving our hardware and software systems. We are also considering expanding the platform to tell users more about where their food came from, such as where it was produced and processed. We could potentially integrate blockchain technology, bridging the real world with the web and web3.
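To illustrate the FastAPI route layer mentioned under "How we built it", here is a minimal sketch of a scan endpoint; the route name and the analyze_image helper are placeholders, not the actual routes:

```python
# Hypothetical sketch: accept a prescription or meal photo over HTTP and
# return model-generated advice; analyze_image stands in for the pipeline.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def analyze_image(data: bytes) -> dict:
    # Placeholder for the OpenAI/Azure/IntegratedML analysis pipeline.
    return {"items": [], "advice": "stub"}

@app.post("/scan")
async def scan(photo: UploadFile = File(...)) -> dict:
    contents = await photo.read()
    return analyze_image(contents)
```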
## Inspiration
As we began to look at the TreeHacks 10 tracks, all team members were immediately drawn to the sustainability track. In a world with increasing temperatures, excessive greenhouse gas emissions, biodiversity loss, and pollution, among numerous other ecological challenges, we know we all have an individual responsibility to help preserve and revitalize our environment. As a result, we began brainstorming how we could individually contribute to a more sustainable future.

Our first thoughts centered on how we could encourage contributions to environmental nonprofits. Still, we struggled to name localized organizations where impact was possible on an individual scale. With three of us originally from Iowa, we did a quick Google search for organizations whose missions aligned with our goal and found over 20 around the state (including 3 within 20 minutes of our hometown) that could use people's resources in various ways. The contributions they were seeking consisted primarily of volunteering and monetary donations. If this was the case in Iowa, most other states would likely have even more opportunities. But how could we make people aware of them?

Looking at the communities of people we know, it's clear there is no shortage of people interested in environmental sustainability. But just being passionate about an issue doesn't lead to improvement. A streamlined way to identify tangible ways to catalyze change, though? That is what's needed to bridge the gap between someone's desire to make change and their ability to follow through. We realized our platform's goal: to allow organizations to make themselves known to those people who already have a planted **spark** and want to help preserve their environment for future generations.

## What it does
**Your spark can create change.** Spark is a platform that allows environmental organizations to create a campaign outlining their mission, vision, and goals, to attract people who have an existing **spark** but don't know where to direct their desire to make a difference. Our platform works in two parts. First, organizations post their campaigns, which are added to a database holding all posted campaigns. Next, contributors can browse available campaigns to find one or more that resonate with their goals. Once they identify such organizations, they can choose which of the organization's goals they want to contribute to and earn spark points. These spark points (1) allow contributors to see the tangible impact of their continuing efforts and (2) motivate these individuals to keep contributing to more organizations.

## How we built it
Iterations:
1. We started with a basic outline: list an organization and its needs, and allow an individual to sign up to help.
2. To explore the broader stakeholders beyond just contributors, we spoke to an Executive Director at a nonprofit local to us (someone who might create a campaign page).
We learned what features would make this platform more useful for them:
* "Because it is so hard for nonprofits to receive funding [the application process is often long and rarely fruitful because of the number of competitors], individual contributions go a long way," so we made monetary donations the first fully built-out contribution type, with future plans for a page showing all volunteering opportunities
* Within the organization's dashboard view, they should be able to view and manage all of their own campaigns, so we added this functionality
* "Nonprofits benefit greatly from being able to receive feedback from participants," so in a future iteration we hope to allow some form of communication between participants and the organization (if valuable and often unused, this may be required for someone to earn their spark points)
3. We then showed our product to a hackathon mentor to gain more feedback on how to address the pain points of a potential user:
* She suggested the usefulness of being able to "visually" observe opportunities "physically nearby." We used this feedback and incorporated the Google Maps API to display the physical locations of the opportunities. She also noted this would remind users "how accessible" it is to make change.
4. After speaking to friends (people who would hold a future contributor role), we added a few more features to better encourage people:
* A link to the nonprofit's website, if applicable, to allow for deeper learning. To keep the UI easy to use (especially for those more tech-averse), we wanted to avoid a cluttered card and instead redirect contributors to the organization's website.
* The ability to favorite a nonprofit for future engagements, which we hold as a goal for a future iteration
* Rotating information on our home page to serve as motivation for contributors (also not yet implemented but planned for the next iteration)

Technology used:
(1) We used Next.js to build and host the front-end portion of our application. This decision allows us to scale easily with a growing user base. Next.js is a very popular framework with a lot of open-source support, which let us build the website quickly.
(2) We use Convex for our API and backend. We enjoyed their presentation during the opening ceremony, which convinced us to use its extensive functionality. Its lightweight nature helped us develop much more quickly than other software would have allowed.
(3) Ant Design is a very popular UI framework and made it easy to translate our Figma designs into our final product with a clean, modern interface.
(4) Visual Studio Code + extensions to make the development environment easier.

## What makes us different
The main features of Spark that set it apart from existing services can be grouped into three main parts:
1. The focus on individual contributions to environmental challenges
* We couldn't find any existing websites focused on sustainability. Many environmental organizations have their donation and/or funding pages in hard-to-find places, with little tangible impact associated with them.
2. The motivation through point earning
* This feature is a unique motivator we didn't find in other platforms. People like to do things when they feel it is worthwhile. Providing the ability to track the "significance" of their cumulative, not just one-time, contributions does precisely that.
3. Allowing individuals to see that their monetary or physical donations are tied to specific goals, not just a cause
* By having organizations outline why they're asking for contributions in a certain manner, people are more aware of their individual part in these large-scale problems. A general donation fund or volunteering list is less valuable to contributing individuals.

## Challenges we ran into
* Configuration and integration challenges, like getting things to talk to each other and installing libraries
* Agreeing on design and idea choices like UI, brand, and features
* Working through exhaustion, stress, and frustration at times
* Navigating a new environment of learning and networking

## Accomplishments that we're proud of
* Our app successfully makes round trips! Data is rendered from the database to our front end, and we can successfully demonstrate our MVP
* 3/4 of our team's first hackathon project!
* We met new people with great ideas and enjoyed sharing them throughout the weekend
* We got the chance to learn and experiment with technologies new to us (Next and Convex) and succeeded in making them work
* We had fun!!!
* We were a successful team and enjoyed collaborating together :D

## What we learned
About sustainability:
(1) What do nonprofits and organizations need when looking for support?
* Access to a large user base: this can be especially key for smaller organizations and the funding of their projects
* Passionate contributors: they are the key to spreading ideas through word of mouth, and they are our platform's target demographic
(2) How beneficial individual change can be
* Ecological organizations have already done the research: they know what needs to be done to improve our environments. Once they've identified useful ways to use people, getting people to them is the important new goal.
* The more involved people are individually, the better equipped they are to elect representatives who can further change on a national and even global scale. Individuals spark greater contributions.

About technology:
(3) The web development space is constantly evolving
* Many tools out there are robust enough to scale applications with growing user bases. Backend frameworks like Convex make spinning up a cloud server a breeze, and frameworks like Next.js ensure that front-end applications are production-ready.
* It's crucial to evaluate a project's needs before starting, to find the right tech stack to handle them. Additionally, when encountering bugs or issues, evaluate the simplest potential culprits first, as the technologies being used are well-tested and unlikely to be the issue.

## What's next for Spark
We want to see Spark develop into a general platform for all kinds of organizations, allowing them to receive support in ways not currently built into the website. While our motivation started with achieving improvements in ecology, we learned that many nonprofits and small organizations also struggle with day-to-day costs for small things, even items as simple as plates or cups. Schools have underprivileged students who struggle to get the school supplies they need. Both of these cases could be addressed with an additional contribution mode: purchasing individual items. Spark has the potential to become *the* platform for social good: if you want to help a specific industry, that industry and your ability to contribute are *literally* at your fingertips.
Once this generalization is implemented, we hope to add a recommendation feature that matches contributors with projects they will likely find fulfilling, based on their previous interests and engagements.

## Broader Stakeholders, Context, and Ethicality
### Accessibility
While the platform is hosted online, the only requirements of people are their time or money. Money can be a barrier to contributing to projects one resonates with, but our platform encourages people to give their time if that is more accessible to them. And while not everyone has internet access at home, they can easily access this platform from a public source (e.g., a library), allowing them to make individual contributions in whichever way is most suitable to their desire and ability. Our "add a campaign" process is extremely simple for an organization that may not have tech-savvy employees: organizations provide as little information as they want, and in a few simple clicks they are listed, with minimal technology required. Additionally, existing organizations may see Spark as competition, but Spark works to surface an organization's existing needs to a broader range of people looking to better their community and environment.

### Contributor Motivations
A potential unintended consequence may be motivation driven by the gamification of the process through spark points and hours, but regardless of motivation, contributions are impactful. If people begin to look for ways to maximize their points or hours, they will ultimately create more change for the better.

### Other Considerations and Research
Our largest stakeholders besides contributors are the organizations that post to Spark. By speaking with someone deeply involved with a nonprofit, its funding, and its difficulties finding volunteers and donors, we better understood the pain points of potential users on both ends of the platform.

Addressing environmental issues is the responsibility of all people. For one, poor environmental conditions disproportionately harm marginalized individuals; these communities are more likely to be exposed to lead, air pollution, hazardous waste, and extreme temperatures. We have individual responsibilities to improve the state of our environment to minimize this disproportionate impact, and it starts with awareness and individual contributions. Secondly, there is a moral obligation to leave the world livable for future generations; without intervention and prioritization of these projects, we fail this ethical duty. Projects identified by local organizations often occur where the need for better conditions is very visible: beach or park cleanups, invasive species removal, recycling, or food-waste minimization, to name a few. They can also work to improve these places through projects such as tree planting or beautification in neglected neighborhoods. Improving living conditions is a social issue that can help decrease the disproportionate impacts faced by those living in areas identified by environmentally focused organizations.

There are potential ethical concerns that we also must consider with the creation of our platform:
1. A bias in which organizations are displayed to prospective contributors. To combat this, we want to incorporate technology that cycles organizations' recommendations to individuals.
Especially in larger cities, we wouldn’t want small organizations to lose their ability to gain contributors to larger organizations on name recognition alone. 2. The risk of greenwashing. Organizations looking to gain participation may falsely signal an interest in ecological progress, drawing attention away from organizations genuinely focused on improving sustainable practices. This would need to be handled delicately: if a vetting process for adding a campaign is introduced, there may be bias in which types of organizations are filtered out, or some may find the additional steps technologically challenging. 3. Community displacement. Projects that take place in a community may displace the residents of that area. To prevent this, there may be terms and conditions requiring organizations to ensure that their projects meet specific standards and don’t cause issues in the areas where they work. Largely, though, ecological organizations are very aware of the footprint they leave in places where they work and are careful to be considerate of these communities. Sources: <https://www.apha.org/Topics-and-Issues/Environmental-Health/Environmental-Justice> <https://iep.utm.edu/envi-eth/>
## Inspiration With the Berkeley enrollment period just around the corner, everyone is stressed about what classes to take. Recently, we had a conversation with one of our friends who was especially stressed about taking CS 162 next semester; her main concern was that the course has so much content that it would be hard for her to process and digest all the information before midterms. We got the idea to create SecondSearch, where she and all other students in any class can quickly and efficiently review class material by searching through lectures directly. ## What it does SecondSearch answers any question about a course with a direct link to the lecture that explains it. It performs a vector similarity search to determine which portion of a lecture is most likely to answer your question and then displays that video (a sketch of this lookup follows below). ## How we built it We built SecondSearch on the Milvus open-source vector database, using OpenAI to help with the search, then completed the product with a companion React frontend built with the Chakra UI component library. We implemented the backend using FastAPI and populated the Milvus Docker containers with a Jupyter Notebook. ## Challenges we ran into We had trouble setting up Milvus and Docker at first, but were quickly able to find thorough documentation for the setup process. Working with React and frontend in general for the first time, we took a couple of hours ramping up. It was smooth sailing after the difficult ramp-up process :) ## Accomplishments that we're proud of We're proud of getting a full-stack product working in the short span of the hackathon: the client, the server, and the Milvus Docker instance. ## What we learned We learned how to use Docker, FastAPI, and React, as well as the basics (struggles) of full-stack development. ## What's next for SecondSearch After creating the minimum viable product, we wanted to make the UI friendlier by using OpenAI to summarize the caption display from the video segments. However, we quickly realized that adding this change would slow the search time down from its current ~1 second to ~20 seconds. As we ran out of time to speed up this feature, we decided to temporarily remove it. However, we will be reimplementing it more efficiently as soon as possible. As for the big picture and the more distant future, our product currently works with lecture series uploaded to YouTube - we want to expand to lecture videos uploaded to other platforms, as some Berkeley classes upload recordings to bCourses, and other institutions use different platforms. After we expand the project further, some stretch goals for the far future include advertising the completed product to all university students, as lectures are often recorded and uploaded in some form. We also want to add new features in future patches, such as saving previous searches, and more.
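The heart of that search is a small amount of glue code. A minimal sketch, assuming a Milvus collection named `lectures` with `embedding`, `video_url`, and `timestamp` fields; the schema and the legacy-style OpenAI embeddings call are illustrative, not the project's actual code:

```python
# Hypothetical lookup: embed the question, then vector-search pre-embedded lecture segments.
import openai
from pymilvus import connections, Collection

connections.connect(host="localhost", port="19530")
collection = Collection("lectures")  # assumed collection name
collection.load()

def search_lectures(question: str, top_k: int = 3):
    # Embed the student's question (older openai SDK style).
    vec = openai.Embedding.create(
        model="text-embedding-ada-002", input=[question]
    )["data"][0]["embedding"]
    # Similarity search over lecture-segment embeddings.
    hits = collection.search(
        data=[vec],
        anns_field="embedding",
        param={"metric_type": "L2", "params": {"nprobe": 10}},
        limit=top_k,
        output_fields=["video_url", "timestamp"],
    )
    # Return (url, timestamp) pairs for the closest segments.
    return [(h.entity.get("video_url"), h.entity.get("timestamp")) for h in hits[0]]
```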
## Inspiration 🌱 Climate change is affecting every region on earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness. ## What it does 🌎 Inspired by BeReal, the most popular app of 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Every time you make a sustainable choice, snap a photo, upload it, and you’ll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability, and even claim prizes once you have enough points 😍. ## How we built it 🧑‍💻 We used React with JavaScript to create the app, coupled with Firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo. ## Challenges we ran into 🥊 One of our biggest obstacles was settling on an idea, as there were so many great challenges to draw inspiration from. ## Accomplishments that we're proud of 🏆 We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team, and we’re all leaving uOttaHack with new capabilities. ## What we learned 💚 Everyone was able to work with new technologies they’d never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI, Firebase, and routers in React. ## What's next for BeGreen ✨ It would be amazing to collaborate with brands to give more rewards as an incentive to make more sustainable choices. We’d also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
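For illustration only (the app itself is JavaScript), a Python sketch of the scoring pipeline described above: tag the photo with Azure's Computer Vision REST API, then ask OpenAI to rate the act's impact. The endpoint path, prompt, placeholder keys, and legacy-style OpenAI call are all assumptions:

```python
# Hypothetical Green-points scorer: Azure tags the photo, OpenAI rates the impact.
import requests
import openai

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "..."  # placeholder

def green_points(image_url: str) -> int:
    # Ask Azure Computer Vision for tags describing the photo.
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags"},
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY},
        json={"url": image_url},
    )
    tags = [t["name"] for t in resp.json().get("tags", [])]
    # Ask OpenAI to score the sustainable act from 0-100 based on the tags.
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Rate 0-100 the environmental impact of an act shown "
                       f"by these photo tags: {', '.join(tags)}. Reply with a number.",
        }],
    )
    return int(chat.choices[0].message.content.strip())
```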
partial
# Inspiration and Product There's a certain feeling we all have when we're lost. It's a combination of apprehension and curiosity – and it usually drives us to explore and learn more about what we see. It happens to be the case that there's a [huge disconnect](http://www.purdue.edu/discoverypark/vaccine/assets/pdfs/publications/pdf/Storylines%20-%20Visual%20Exploration%20and.pdf) between that which we see around us and that which we know: the building in front of us might look like an historic and famous structure, but we might not be able to understand its significance until we read about it in a book, at which time we lose the ability to visually experience that which we're in front of. Insight gives you actionable information about your surroundings in a visual format that allows you to immerse yourself in your surroundings: whether that's exploring them, or finding your way through them. The app puts the true directions of obstacles around you where you can see them, and shows you descriptions of them as you turn your phone around. Need directions to one of them? Get them without leaving the app. Insight also supports deeper exploration of what's around you: everything from restaurant ratings to the history of the buildings you're near. ## Features * View places around you heads-up on your phone - as you rotate, your field of vision changes in real time. * Facebook Integration: trying to find a meeting or party? Call your Facebook events into Insight to get your bearings. * Directions, wherever, whenever: surveying the area and found where you want to be? Touch and get instructions instantly. * Filter events based on your location. Want a tour of Yale? Touch to filter only Yale buildings, and learn about the history and culture. Want to get a bite to eat? Change to a restaurants view. Want both? You get the idea. * Slow day? Change your radius to a short distance to filter out locations. Feeling adventurous? Change your field of vision the other way. * Want to get the word out on where you are? Automatically check in with Facebook at any of the locations you see around you, without leaving the app. # Engineering ## High-Level Tech Stack * NodeJS powers a RESTful API hosted on Microsoft Azure. * The API server takes advantage of a wealth of Azure's computational resources: + A Windows Server 2012 R2 instance and an Ubuntu 14.04 Trusty instance, each of which handles different batches of geospatial calculations + Azure internal load balancers + Azure CDN for asset pipelining + Azure automation accounts for version control * The Bing Maps API suite, which offers powerful geospatial analysis tools: + RESTful services such as the Bing Spatial Data Service + Bing Maps' Spatial Query API + Bing Maps' AJAX control, externally through direction and waypoint services * Objective-C iOS clients interact with the server RESTfully and display the parsed results ## Application Flow iOS handles the entirety of the user interaction layer and the authentication layer for user input. Users open the app and, if logging in with Facebook or Office 365, proceed through the standard OAuth flow, all on-phone. Users can also opt to skip the authentication process with either provider (in which case they forfeit the option to integrate Facebook events or Office 365 calendar events into their views). After sign-in (assuming the user grants permission for use of these resources), and upon startup of the camera, requests are sent with the user's current location to a central server on an Ubuntu box on Azure.
The server parses that location data and initiates a multithreaded Node process on the Windows 2012 R2 instances. These processes do the following, and more: * Geospatial radial search schemes with data from Bing * Location detail API calls from Bing Spatial Query APIs * Review data about relevant places from a slew of APIs After the data is all present on the server, it's combined and analyzed, also on the R2 instances, via the following: * Haversine calculations for distance measurements, in accordance with the radial searches * Heading data (to make client-side parsing feasible) * Condensation and dynamic merging - asynchronous cross-checking of the collected data to determine which events are closest Ubuntu brokers and manages the data, sends it back to the client, and prepares for and handles future requests. ## Other Notes * The most intense calculations involved the application of the [Haversine formulae](https://en.wikipedia.org/wiki/Haversine_formula): for two points on a sphere, the central angle $\theta$ between them satisfies $\operatorname{hav}(\theta) = \sin^2\left(\frac{\varphi_2-\varphi_1}{2}\right) + \cos(\varphi_1)\cos(\varphi_2)\sin^2\left(\frac{\lambda_2-\lambda_1}{2}\right)$, and the distance is $d = 2r\arcsin\left(\sqrt{\operatorname{hav}(\theta)}\right)$ (the result of which is non-standard/non-Euclidean due to the Earth's curvature). The results of these formulae translate into the placement of locations on the viewing device. These calculations are handled by the Windows R2 instance, essentially running as a computation engine. All communications between internal server instances are RESTful. ## Challenges We Ran Into * *iOS and rotation*: there are a number of limitations in iOS that prevent interaction with the camera in landscape mode, which was a problem given the need for users to see a wide field of view. For one thing, the requisite data registers aren't even accessible via daemons when the phone is in landscape mode. This was the root of the vast majority of our problems in iOS, since we were unable to use any inherited or pre-made views (we couldn't rotate them) - we had to build all of our views from scratch. * *Azure deployment specifics with Windows R2*: running a pure calculation engine (written primarily in C# with ASP.NET network interfacing components) was tricky at times to set up and get logging data for. * *Simultaneous and asynchronous analysis*: simultaneously parsing asynchronously-arriving data with uniform Node threads presented challenges. Our solution was ultimately a recursive one that involved checking the status of other resources upon reaching the base case, then using that knowledge to better sort data as the bottoming-out step bubbled up. * *Deprecations in Facebook's Graph APIs*: we needed to use the Facebook Graph APIs to query specific Facebook events for their locations: a feature only available in a slightly older version of the API. We thus had to use that version concurrently with the newer version (which also had unique location-related features we relied on), creating some degree of confusion and requiring care. ## A few of Our Favorite Code Snippets A few gems from our codebase: ``` var deprecatedFQLQuery = '... ``` *The story*: in order to extract geolocation data from events vis-a-vis the Facebook Graph API, we were forced to use a deprecated API version for that specific query, which proved challenging in how we versioned our interactions with the Facebook API.
```
addYaleBuildings(placeDetails, function(bulldogArray) {
  addGoogleRadarSearch(bulldogArray, function(luxEtVeritas) {
...
```

*The story*: dealing with quite a lot of Yale API data meant we needed to be creative with our naming...

```
// R is the earth's radius in meters
var a = R * 2 * Math.atan2(
  Math.sqrt(
    Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) +
    Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) *
    Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)
  ),
  Math.sqrt(1 - (
    Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) +
    Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) *
    Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)
  ))
);
```

*The story*: while it was changed and condensed shortly after we noticed its proliferation, our implementation of the Haversine formula became cumbersome very quickly. Degree/radian mismatches between APIs didn't make things any easier.
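For contrast with the snippet above, here is the same haversine calculation condensed into a small helper, written in Python rather than the project's JavaScript purely for brevity:

```python
# Great-circle distance between two (degree) coordinates via the haversine formula.
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Distance in meters; r is the Earth's mean radius."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    # hav(theta) = sin^2(dp/2) + cos(p1)*cos(p2)*sin^2(dl/2)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return r * 2 * math.atan2(math.sqrt(h), math.sqrt(1 - h))
```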
## Inspiration We wanted to take advantage of AR and object-detection technologies to help people gain a safer walking experience and to communicate distance information that helps people with vision loss navigate. ## What it does It augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts the results to speech to alert the user. ## How we built it ARKit; RealityKit, using the LiDAR sensor to detect distance; AVFoundation for text-to-speech; CoreML with the YOLOv3 real-time object detection machine learning model; SwiftUI. ## Challenges we ran into Computational efficiency. Going through all pixels of the LiDAR sensor in real time wasn’t feasible, so we had to optimize by cropping the sensor data to the center of the screen. ## Accomplishments that we're proud of It works as intended. ## What we learned We learned how to combine AR, AI, LiDAR, ARKit, and SwiftUI to make an iOS app in 15 hours. ## What's next for SeerAR Expand to Apple Watch and Android devices; improve the accuracy of object detection and recognition; connect with Firebase and Google Cloud APIs.
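A minimal sketch of the cropping optimization described above, in Python/numpy purely for illustration (the app itself is Swift with ARKit): take only a center window of the depth map and derive a beep rate from the nearest reading. The window size and beep constants are assumptions:

```python
# Illustrative center-crop of a LiDAR depth map plus a distance-to-beep mapping.
import numpy as np

def nearest_obstacle_m(depth_map: np.ndarray, window: int = 64) -> float:
    """Scan only a window x window patch at the image center, not every pixel."""
    h, w = depth_map.shape
    cy, cx = h // 2, w // 2
    center = depth_map[cy - window // 2 : cy + window // 2,
                       cx - window // 2 : cx + window // 2]
    return float(np.nanmin(center))  # closest point in the cropped region

def beep_interval_s(distance_m: float) -> float:
    # Beep faster as obstacles get closer (constants are illustrative).
    return max(0.1, min(1.0, distance_m / 3.0))
```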
### Inspiration Have you ever found yourself wandering in a foreign land, eyes wide with wonder, yet feeling that pang of curiosity about the stories behind the unfamiliar sights and sounds? That's exactly where we found ourselves. All four of us share a deep love for travel and an insatiable curiosity about the diverse cultures, breathtaking scenery, and intriguing items we encounter abroad. It sparked an idea: why not create a travel companion that not only shares our journey but enhances it? Enter our brainchild, a fusion of VR and AI designed to be your personal travel buddy. Imagine having a friend who can instantly transcribe signs in foreign languages, identify any object from monuments to local flora, and guide you through the most bewildering of environments. That's what we set out to build—a gateway to a richer, more informed travel experience. ### What it does Picture this: you're standing before a captivating monument, curiosity bubbling up. With our VR travel assistant, simply speak your question, and it springs into action. This clever buddy captures your voice, processes your command, and zooms in on the object of your interest in the video feed. Using cutting-edge image search, it fetches information about just what you're gazing at. Wondering about that unusual plant or historic site? Ask away, and you'll have your answer. It's like having a local guide, historian, and botanist all rolled into one, accessible with just a glance and a word. ### How we built it We initiated our project by integrating Unity with the Meta XR SDK to bring our VR concept to life. The core of our system, a server engineered with Python and FastAPI, was designed to perform the critical tasks, enhanced by AI capabilities for efficient processing. We leveraged Google Lens via the SERP API for superior image recognition and OpenAI's Whisper for precise voice transcription. Our approach was refined by adopting techniques from a Meta research paper, enabling us to accurately crop images to highlight specific objects. This method ensured that queries were efficiently directed to the appropriate AI model for quick and reliable answers. To ensure smooth operation, we encapsulated our system within Docker and established connectivity to our VR app through ngrok, facilitating instantaneous communication via websockets and the SocketIO library. ![architecture](https://i.ibb.co/cLqW8k3/architecture.png) ### Challenges we ran into None of us had much, if any, experience with either Unity3D or developing VR applications, so there were many challenges in learning how to use the Meta XR SDK and how to build a VR app in general. Additionally, Meta imposed a major restriction that added to the complexity of the application: we could not capture the passthrough video feed through any third-party screen recording software. This meant we had to, in the last few hours of the hackathon, create a new server on our network that would capture the cast video feed from the headset (which had no API) and then send it to the backend. This was a major challenge and we are proud to have overcome it. ### Accomplishments that we're proud of From web developers to VR innovators, we've journeyed into uncharted territories, crafting a VR application that's not just functional but truly enriching for the travel-hungry soul. Our creation stands as a beacon of what's possible, painting a future where smart glasses serve as your personal AI-powered travel guides, making every journey an enlightening exploration.
### What we learned The journey was as rewarding as the destination. We mastered the integration of Meta Quest 2s and 3s with Unity, weaving through the intricacies of Meta XR SDKs. Our adventure taught us to make HTTP calls within Unity, transform screenshots into Base64 strings, and leverage Google Cloud for image hosting, culminating in real-time object identification through Google Lens. Every challenge was a lesson, turning us from novices into seasoned navigators of VR development and AI integration.
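To make the flow described above concrete, here is a hedged Python sketch of the backend's two main steps: transcribe the spoken question with Whisper, then identify the cropped object through SerpApi's Google Lens engine. The legacy-style OpenAI call, response fields, and key handling are illustrative, not the project's actual code:

```python
# Hypothetical query handler: voice in, object guesses out.
import openai
from serpapi import GoogleSearch  # pip package: google-search-results

def handle_query(audio_path: str, cropped_image_url: str) -> dict:
    # 1. Speech-to-text with OpenAI Whisper (older openai SDK style).
    with open(audio_path, "rb") as f:
        question = openai.Audio.transcribe("whisper-1", f)["text"]
    # 2. Reverse image search on the cropped object via Google Lens.
    lens = GoogleSearch({
        "engine": "google_lens",
        "url": cropped_image_url,   # must be a publicly reachable image URL
        "api_key": "SERPAPI_KEY",   # placeholder
    }).get_dict()
    matches = lens.get("visual_matches", [])[:3]
    return {"question": question,
            "guesses": [m.get("title") for m in matches]}
```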
winning
## Inspiration Building and maintaining software is complex, time-consuming, and can quickly become expensive, especially as your application scales. Developers, particularly those in startups, often overspend on tools, cloud services, and server infrastructure without realizing it. In fact, nearly 40% of server costs are wasted due to inefficient resource allocation, and servers often remain idle for up to 80% of their runtime. As your traffic and data grow, so do your expenses. Managing these rising costs while ensuring your application's performance is critical—but it's not easy. This is where Underflow comes in. It automates the process of evaluating your tech stack and provides data-driven recommendations for cost-effective services and infrastructure. By analyzing your codebase and optimizing for traffic, Underflow helps you save money while maintaining the same performance and scaling capabilities. ## What it does Underflow is a **command-line tool** that helps developers optimize their tech stack by analyzing the codebase and identifying opportunities to reduce costs while maintaining performance. With a single command, developers can input a **GitHub repository** and the number of monthly active users, and Underflow generates a detailed report comparing the current tech stack with an optimized version. The report highlights potential cost savings and performance improvements, and suggests more efficient external services. The tool also provides a clear breakdown of why certain services were recommended, making it easier for developers to make informed decisions about their infrastructure. ## How we built it ![Blank board (6)](https://github.com/user-attachments/assets/f94681f3-4716-465b-b155-c8f0c13e2b02) Underflow is a command-line tool designed for optimizing software architecture and minimizing costs based on projected user traffic. It is executed with a single command and two arguments: ``` underflow <github-repository-identifier> <monthly-active-users> ``` Upon execution, Underflow leverages the **OpenAI API** to analyze the provided codebase, identifying key third-party services integrated into the project. The extracted service list and the number of monthly active users are then sent to a **FastAPI backend** for further processing. The backend queries an **AWS RDS**-hosted **MySQL** database, which contains a comprehensive inventory of external service providers, including cloud infrastructure, CI/CD platforms, container orchestration tools, distributed computing services, and more. The database stores detailed information such as pricing tiers, traffic limits, service categories, and performance characteristics. The backend uses this data to identify alternative services that provide equivalent functionality at a lower cost while supporting the required user traffic (a sketch of this lookup appears below). The results of this optimization process are cached, and a comparison report is generated using the OpenAI API. This report highlights the cost and performance differences between the original tech stack and the proposed optimized stack, along with a rationale for selecting the new services. Finally, Underflow launches a GUI built with **Next.js** that presents a detailed analytics report comparing the original and optimized tech stacks. The report provides key insights into cost savings, performance improvements, and the reasoning behind service provider selections.
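A minimal sketch, with an assumed table schema and placeholder credentials, of what the FastAPI optimization endpoint could look like: for each detected service, pick the cheapest same-category alternative that can handle the projected traffic:

```python
# Hypothetical optimization endpoint; the "services" table and its columns are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import pymysql

app = FastAPI()

class StackRequest(BaseModel):
    services: list[str]        # service names extracted from the codebase
    monthly_active_users: int

@app.post("/optimize")
def optimize(req: StackRequest):
    conn = pymysql.connect(host="rds-host", user="...", password="...", db="underflow")
    suggestions = {}
    with conn.cursor() as cur:
        for name in req.services:
            # Cheapest alternative in the same category that supports the traffic.
            cur.execute(
                """SELECT alt.name, alt.monthly_cost FROM services s
                   JOIN services alt ON alt.category = s.category
                   WHERE s.name = %s AND alt.traffic_limit >= %s
                   ORDER BY alt.monthly_cost ASC LIMIT 1""",
                (name, req.monthly_active_users),
            )
            row = cur.fetchone()
            if row:
                suggestions[name] = {"alternative": row[0], "monthly_cost": row[1]}
    return {"suggestions": suggestions}
```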
This technical solution offers developers a streamlined way to evaluate and optimize their tech stacks based on real-world cost and performance considerations. ## Accomplishments that we're proud of We’re proud of creating a tool that simplifies the complex task of optimizing tech stacks while reducing costs for developers. Successfully integrating multiple components, such as the OpenAI API for codebase analysis, a FastAPI backend for processing, and an AWS-hosted MySQL database for querying external services, was a significant achievement. Additionally, building a user-friendly command-line interface that provides clear, data-driven reports about tech stack performance and cost optimization is something we're excited about. We also managed to create a streamlined workflow that allows developers to assess cost-saving opportunities without needing deep knowledge of infrastructure or services. ## What's next for Underflow * Building a better database containing a more comprehensive list of available external services and dependencies * Migrating traffic determination from manual user input to server-level measurement, such as using Elasticsearch on server logs to determine true third-party service usage
## Inspiration: Millions of active software developers use GitHub to manage their software development; however, our team believes the platform lacks incentives and an engagement factor. As users of GitHub, we are aware of this problem and wanted to apply a twist that could open doors to a new, innovative experience. We had also considered ways to make GitHub more accessible, such as addressing language barriers, but ultimately decided that wouldn't be especially useful or creative. Our official final project was instead inspired by combining the idea of building on GitHub with supporting youth going into CS. Combining these ideas, our team came up with DevDuels. ## What It Does: Introducing DevDuels! A video game hosted on a website whose goal is to make GitHub more entertaining to users. One of our target audiences is the rising younger generation, who may struggle to learn to code or enter the coding world due to complications such as a lack of motivation or a lack of discoverable resources. Keeping in mind how many young people play video games, we created a game that hopes to introduce users to the world of coding (how to improve, open sources, troubleshooting) with a competitive aspect that leaves users wanting to use our website beyond the first time. We've applied this to our 3 major features: our immediate AI feedback on code, Duo Battles, and the leaderboard. In more depth, our AI feedback looks at the given code and analyses it. Very shortly after, it provides a rating out of 10 and comments on what is good and bad about the code, such as the code's syntax and conventions. ## How We Built It: The web app is built using Next.js as the framework. HTML, Tailwind CSS, and JS were the main languages used in the project’s production. We used MongoDB to store information such as user account info, scores, and commits, which were pulled from the GitHub API using octokit. The Langchain API was utilised to help rate the commit code that users sent to the website while also providing its rationale for said ratings. ## Challenges We Ran Into The first roadblock our team experienced occurred during ideation. Though we generated multiple problem-solution ideas, our main issue was that the ideas either were too common in hackathons, had little ‘wow’ factor that could captivate judges, or were simply too difficult to implement given the time allotted and team member skillset. While working on the project itself, we struggled a lot with getting MongoDB to work alongside the other technologies we wished to utilise (Langchain, GitHub API). The frequent problems with getting the backend to work quickly diminished team morale as well. Despite these shortcomings, we consistently worked on our project down to the last minute to produce this final result. ## Accomplishments that We're Proud of: Our proudest accomplishment is being able to produce a functional game following the ideas we’ve brainstormed. When we were building this project, the more we coded, the less optimistic we got about completing it. This was largely attributed to the sheer number of error messages and the lack of progression we observed for an extended period of time. Our team was incredibly lucky to have members with such high perseverance, which allowed us to continue working, resolving issues and rewriting code until features worked as intended.
## What We Learned: DevDuels was the first step into the world of hackathons for many of our team members. As such, there was so much learning throughout this project’s production. During the design and ideation process, our members learned a lot about website UI design and Figma. Additionally, we learned HTML, CSS, and JS (Next.js) in building the web app itself. Some members learned APIs such as Langchain while others explored the GitHub API (octokit). All the new hackers navigated the GitHub website and Git itself (push, pull, merge, branch, etc.). ## What's Next for DevDuels: DevDuels has a lot of potential to grow and become more engaging for users. This could be through additional features added to the game, such as weekly/daily goals that users can complete, the ability to create accounts, and expanding beyond just a pair connecting with one another in Duo Battles. Within these 36 hours, we worked hard on the main features we believed were the best. These features can be improved with more time and thought put into them. There can be small to large changes, ranging from configuring our backend to fixing our user interface.
## FLEX [Freelancing Linking Expertise Xchange] ## Inspiration Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away. ## What it does Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks about any other factors they would want in a candidate. This data is then analyzed and matched against our database of freelancers to find the best-fitting candidates. The AI then talks back to the recruiter, showing the top candidates based on the recruiter’s requirements. Once the recruiter picks the right candidate, they can create a smart contract that’s securely stored and managed on the blockchain for transparent payments and agreements. ## How we built it We built the frontend using **Next.JS** and deployed the entire application with **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**. Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on the factors provided by the client. For secure transactions, we utilized the **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through Smart Contracts developed in **Move**. We also used Flask and **Express.js** to manage the backend and routing efficiently. ## Challenges we ran into We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable speech-to-text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full-stack application. ## Accomplishments that we're proud of We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies. ## What we learned We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration. ## What's next for FLEX Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
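Since SingleStore speaks the MySQL wire protocol, the candidate lookup can be pictured as a full-text query. A hedged sketch with assumed table and column names (not FLEX's actual schema):

```python
# Hypothetical candidate search over a SingleStore "freelancers" table
# with a FULLTEXT index on the "skills" column.
import pymysql

def find_candidates(keywords: list[str], limit: int = 5):
    conn = pymysql.connect(host="singlestore-host", user="...", password="...", db="flex")
    query = " ".join(keywords)  # e.g. keywords extracted from the recruiter's speech
    with conn.cursor() as cur:
        cur.execute(
            """SELECT name, skills,
                      MATCH(skills) AGAINST (%s) AS score
               FROM freelancers
               WHERE MATCH(skills) AGAINST (%s)
               ORDER BY score DESC LIMIT %s""",
            (query, query, limit),
        )
        return cur.fetchall()
```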
partial
## Inspiration All 3 of us are University of Waterloo students and have experienced many scary geese in our days here, so we wanted to create a visual representation of the way the Waterloo geese act while learning new skills: specifically, how certain aggressive geese decide to follow you as you are walking to class. ## What it does Mr. Goose on the Loose is a very smart goose that can learn the fastest and most rewarding path to the Waterloo student as it continues to attack the same student. There are buildings that Mr. Goose cannot get through (brown boxes) and cages (black tents) that send Mr. Goose back to the start. This happens without any user input and is shown on the screen using GUI libraries in Python. ## How we built it We harnessed reinforcement learning strategies in Python to have Mr. Goose learn from every successful attack on a Waterloo student (a sketch of the core update follows below). Using libraries such as Tkinter, NumPy, Sys, and Pandas, we were able to display a visual design with a grid, walls, grass, and cages. ## Challenges we ran into We ran into challenges such as visually showing Mr. Goose in a friendly way, since we began with him as a red square, as well as improving the algorithm used for reinforcement learning on larger mazes. The bigger the maze, the longer learning would take, so adding a revisit penalty for Mr. Goose improved his efficiency in getting to the student. Since this was our first time doing work with reinforcement learning, there were some challenges in understanding its mechanics, and some general debugging. ## Accomplishments that we're proud of We are proud that we were able to work together as a team, as we had never worked together before and this was our first in-person hackathon. We are also proud of our ability to learn about reinforcement learning in such a short time and produce a final product, as none of us had any previous experience with it, and of our ability to adapt to the different challenges that arose throughout the weekend. ## What we learned We learned a lot about what reinforcement learning is, the mechanics of reinforcement learning, and many libraries associated with Python. We also learned how to work collaboratively as a team on a coding project and how to manage our time. By attending workshops, including the Discord bot workshop, the one on using Ruby to build a webpage, and the introduction to machine learning, all team members were able to pick up many valuable skills. ## What's next for Mr. Goose on the Loose Mr. Goose will continue to terrorize Waterloo students and become faster at attacking them. This will be done by improving his reinforcement learning algorithm. Another step would be creating more obstacles for Mr. Goose and improving the visual design of Mr. Goose on the Loose.
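The core of this approach is the tabular Q-learning update, plus the revisit penalty mentioned above. A minimal sketch with illustrative hyperparameters (not the team's actual values):

```python
# Tabular Q-learning with a revisit penalty, for a 10x10 grid world.
import numpy as np

N_STATES, N_ACTIONS = 100, 4          # 10x10 grid, up/down/left/right
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps, revisit_penalty = 0.1, 0.9, 0.1, -0.5  # illustrative

def step_update(state, action, reward, next_state, visited):
    # Penalize returning to an already-visited cell so Mr. Goose keeps exploring.
    if next_state in visited:
        reward += revisit_penalty
    # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

def choose_action(state):
    # Epsilon-greedy exploration.
    if np.random.rand() < eps:
        return np.random.randint(N_ACTIONS)
    return int(Q[state].argmax())
```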
## Inspiration As coders, we often don’t have the time (or the will) to cook, which leads to one inevitable outcome—fast food, and not always the healthiest choices! We wanted to help fellow coders in workspaces and universities make better eating decisions. That’s where Goose the Duck comes in, our solution to encourage healthier choices without too much thinking involved. Because who doesn't trust a duck? And let’s be honest, when we hit a coding problem, we already talk to rubber ducks for advice (don’t deny it). P.S. Yes, we know—it’s ironic that the duck is named Goose, but that just makes it even more fun! 😉 ## What it does Goose, the not-so-goosey duck, is your personal food advisor. It is trained to help you make better food choices by identifying what's healthy and what's not! Simply present your food, and our duck will let you know if you're about to make a good or bad decision. ## How we built it We used a Raspberry Pi 3 to power Goose’s brain and trained a machine learning model (a CNN) to distinguish between healthy and unhealthy foods using image classification. Additionally, with a bit of hardware hacking, we created a fun, feathery friend that learned the difference between a kale salad and a Dunkin’ Donuts glazed donut. ## Challenges we ran into Training a duck to recognize food turned out to be trickier than we anticipated! We struggled to find large enough healthy and unhealthy food datasets, and to make matters worse, our Raspberry Pi often refused to boot at critical moments, making us question not just the project, but our career choices! On top of that, we didn’t receive all the hardware we ordered, forcing us to improvise with the limited resources we had. But in the end, the duck was quacking, and working just as we’d hoped! ## Accomplishments that we're proud of Despite the setbacks, we built everything ourselves—from the coding to the hardware setup to teaching a duck to judge your lunch! It was a proud moment when Goose finally quacked out its first food decision. He's quite the food critic now! ## What we learned We learned that building a project at Hack the North is a lot like feeding a duck—patience and persistence are key! We gained a deeper understanding of food nutrition and the importance of datasets in training models. We also learned (from experience) the intricacies of Raspberry Pi and hardware troubleshooting. Everything related to coding a system to process the webcam display was likewise learned on the spot. ## What's next for JustDuckIt Goose might not stop at food! Who knows, maybe Goose will soon help you pick your outfit, plan your day, or even make life decisions like “Should I binge-watch this series or be productive?” We also didn't have a camera available during development, so our next step will include attaching a camera directly to Goose to make it more independent. We'd also like to add a cooling system for Goose's brain (the Raspberry Pi) in the future to prevent overheating. Unfortunately, we didn’t have a fan available this time around, but it's definitely on our list for next time! We're also excited to add quacky sound effects to Goose in the future, with cheerful 'good' quacks and grumpy 'bad' quacks to keep things fun and engaging! Stay tuned—Goose is always evolving!
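As a rough illustration of the kind of binary classifier described above (layer sizes, input shape, and training setup are assumptions, not the team's actual model):

```python
# A small Keras CNN for a healthy/unhealthy food decision.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),  # normalize pixels
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(unhealthy)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# On the Pi, a single webcam frame would be resized to 128x128 and passed
# through model.predict to get the duck's verdict.
```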
## Inspiration Amid the fast-paced rhythm of university life at Waterloo, one universal experience ties us all together: the geese. Whether you've encountered them on your way to class, been woken up by honking at 7 am, or spent your days trying to bypass flocks of geese during nesting season, the geese have established themselves as a central fixture of the Waterloo campus. How can we turn the staple bird of the university into an asset? Inspired by the quintessential role the geese play in campus life, we built an app to integrate our feathered friends into our academic lives. Our app, Goose on the Loose, allows you to take pictures of geese around the campus and turn them into your study buddies! Instead of being intimidated by the foul fowl, we can now all be friends! ## What it does Goose on the Loose allows the user to "capture" geese across the Waterloo campus and beyond by snapping a photo using their phone camera. If there is a goose in the image, it is uniquely converted into a sprite added to the player's collection. Each goose has its own student profile and midterm grade. The more geese in a player's collection, the higher each goose's final grade becomes, as they are all study buddies who help one another. The home page also contains a map where the player can see their own location, as well as the locations of nearby goose sightings. ## How we built it This project is made using Next.js with TypeScript and TailwindCSS. The frontend was designed using TypeScript React components and styled with TailwindCSS. MongoDB Atlas was used to store various data across our app, such as goose data and map data. We used the React Google Maps library to integrate the Google Maps display into our app. The player's location data is retrieved from the browser. Cohere was used to help generate the names and quotations assigned to each goose. OpenAI was used for goose identification as well as converting the physical geese into sprites. All in all, we used a variety of different technologies to power our app, many of which we were beginners to. ## Challenges we ran into We were very unfamiliar with Cohere and found ourselves struggling to use some of its generative AI technologies at first. After playing around with it for a bit, we were able to get it to do what we wanted, and this saved us a lot of head pain. Another major challenge we underwent was getting the camera window to display properly on a smartphone. While it worked completely fine on a computer, only a fraction of the window would display on the phone, and this really harmed the user experience in our app. After hours of struggle, debugging, and thinking, we were able to fix this problem, and now our camera window is very functional and polished. One severely unexpected challenge we went through was one of our computers' files corrupting. This caused us HOURS of headache, and we spent a lot of effort trying to identify and rectify this problem. What made this problem worse was that we were at first using Microsoft VS Code Live Share, with that computer happening to be the host. This was a major setback in our initial development timeline, and we were absolutely relieved to figure out and finally solve this problem. A last-minute issue that we discovered had to do with our Cohere API usage. Since the prompt did not always generate a response within the required bounds, we had looped it until it landed within the requirements. We fixed this by setting a maximum limit on the number of tokens that could be used per response (sketched below).
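A small sketch of that token-cap fix, using Cohere's Python client with an illustrative prompt (the real prompt, limits, and client code are the team's own):

```python
# Hypothetical goose-profile generator with a hard max_tokens cap.
import cohere

co = cohere.Client("COHERE_API_KEY")  # placeholder

def goose_name_and_quote() -> str:
    resp = co.generate(
        prompt="Invent a funny student name and a one-line motto for a campus goose.",
        max_tokens=40,  # hard cap so the response stays within bounds
    )
    return resp.generations[0].text.strip()
```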
One final issue that we ran into was with the Google Maps API. For some reason, we kept running into a problem where the map would force its centre to be where the user was located, effectively preventing the user from viewing other areas of the map. ## Accomplishments that we're proud of During this hacking period, we built long-lasting relationships and an even more amazing project. There were many things throughout this event that were completely new to us: various APIs, frameworks, libraries, experiences; and most importantly: the sleep deprivation. We are extremely proud to have been able to construct, for the very first time, a mobile-friendly website developed using Next.js, TypeScript, and Tailwind. These were all entirely new to much of our team, and we have learned a lot about full-stack development throughout this weekend. We are also proud of our beautiful user interface. We were able to design extremely funny, punny, and visually appealing UIs, despite this being most of our members' first time working with such things. Most importantly of all, we are proud of our perseverance; we never gave up throughout the entire hacking period, despite all of the challenges we faced, especially the stomach aches from staying up for two nights straight. This whole weekend has been an eye-opening experience, one that will always live in our hearts and remind us of why we should be proud of ourselves whenever we are working hard. ## What we learned 1. We learned how to use many new technologies that we had never laid our eyes upon. 2. We learned of a new study spot in E7 that is open to any students of UWaterloo. 3. We learned how to problem-solve and deal with problems that affected the workflow, namely those that caused our program to be unable to run properly. 4. We learned that the W Store is open on weekends. 5. We learned one another's stories! ## What's next for GooseOnTheLoose In the future, we hope to implement more visually captivating transitional animations, which will really enhance the UX of our app. Furthermore, we would like to add more features surrounding the geese, such as a "playground" where the geese can interact with one another in a funny and entertaining way.
losing
## Inspiration Being frequent visitors of the UTM Esports Club, there are a lot of things we would like to make more convenient for ourselves: things like looking up certain information about characters in the games we play, organizing tournaments with large numbers of people, and making lobbies for people to play in. This bot hopes to make these situations much less of a pain. ## What it does This bot has three features. The main feature is the frame data feature. So far, this only works for Guilty Gear Strive (one of the main games we play). This command allows users to input a character followed by a move that they have (for example, Ky 2K), and the bot will print out up-to-date information about that move sourced from Dustloop (an online wiki for the game); a rough sketch of this flow appears below. The second feature is a tournament organization feature. This allows people to start and maintain tournaments through the bot. Players can report their wins to the bot, which updates the tournament accordingly. Once all outstanding matches for the current round have ended (e.g. pools, semi-finals, etc.), the next round of matches can begin. This repeats until there is only one person left. The last feature we have is a lobby system. This feature allows users to create lobbies with lobby codes, which other users can look up. This is particularly useful when there are multiple concurrent lobbies between different players, as it can be quite confusing trying to figure out which lobby to join, not to mention having to repost the same lobby code many times. ## How we built it This bot was mainly built on discord.py. We used beautifulsoup4 to scrape Dustloop's HTML character page for the information on the move we wanted, which would then be neatly organized into an embed and printed. The other two features were built mainly with basic Python and the aforementioned discord.py. ## Challenges we ran into Web scraping was completely foreign to most of us; even the concept of web scraping was confusing for some. Having to figure out what web scraping even was, and how to implement it in our Discord bot, was a major challenge we had to overcome. As for Discord bots, while we were somewhat more comfortable with them, we still did not know too much about how to make one at first. Figuring out how to do basic things, like setting one up and even just getting it to print things, was difficult at first. ## Accomplishments that we're proud of We're surprised that we even got anything to function in the first place. This is the first hackathon for all of us, and it was pretty nerve-racking. ## What we learned How to use beautifulsoup4 and discord.py. We have also gained valuable experience working as a team. ## What's next for UTM Esports Bot Adding support for more games to the frame data feature is a major one; there are many video games on the market, after all, so we will have to scrape different websites for different games. Making the tournament feature actually ping people when their next-round matches begin is another, as well as making group numbers clearer when starting a tournament.
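A condensed sketch of that frame-data flow: scrape the move's row from the character's Dustloop frame-data page and post it as a Discord embed. The URL pattern and HTML selectors are assumptions that would need checking against Dustloop's real markup:

```python
# Hypothetical frame-data command; selectors are illustrative, not Dustloop's actual layout.
import discord
import requests
from bs4 import BeautifulSoup
from discord.ext import commands

# Prefix commands also need the message-content intent enabled in production.
bot = commands.Bot(command_prefix="!", intents=discord.Intents.default())

@bot.command()
async def frames(ctx, character: str, move: str):
    page = requests.get(f"https://www.dustloop.com/w/GGST/{character}/Frame_Data")
    soup = BeautifulSoup(page.text, "html.parser")
    for row in soup.find_all("tr"):
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if cells and cells[0].lower() == move.lower():  # assume move name is column 0
            embed = discord.Embed(title=f"{character} {move}",
                                  description=" | ".join(cells[1:]))
            await ctx.send(embed=embed)
            return
    await ctx.send("Move not found.")
```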
## Inspiration The inspiration we drew when creating this Discord bot was to personalize a bot that could do a plethora of different things. We wanted the main purpose of our bot to be entertainment, sort of like an airplane seat-back screen. To navigate PikaBot, only a single person is needed. ## What it does When using the bot, one may use $help to uncover PikaBot's commands and abilities. This reveals 5 interesting commands: 1. $rps — a rock-paper-scissors game where the user plays against the computer 2. $weather — a command which reports the weather of any major city in real time 3. $quotes — provides an inspirational quote to uplift users 4. $mimic — a command which mimics the user 5. $quiz — provides fun questions in various forms: T/F, MC, or one-word answers ## How we built it The overall design falls into two parts: the Discord bot itself and the website that links the Discord bot to the web. # Website It utilizes JS, HTML, CSS, and a Bootstrap framework. # Discord Bot Uses Flask and UptimeRobot to "live forever," and we leveraged the discord.py documentation to code this bot. The weather, quotes, and quiz commands were developed using RESTful endpoints with the OpenWeather API, ZenQuotes API, and TheTriviaAPI to display data to the user. Furthermore, we attempted to mimic users using a trie data structure to partially autocomplete based on previous responses. ## Challenges we ran into Some of the main challenges were trying to come up with logical responses when mimicking the user and configuring the quiz.
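A bare-bones version of the kind of trie $mimic relies on: insert previously seen words, then complete a partial word by walking the prefix and searching beneath it. This is a generic illustration, not PikaBot's actual code:

```python
# Minimal trie with prefix completion.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str):
        node = self.root
        for ch in word.lower():
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix: str):
        # Walk down to the prefix node, then DFS for any full word beneath it.
        node = self.root
        for ch in prefix.lower():
            if ch not in node.children:
                return None
            node = node.children[ch]
        stack = [(node, prefix.lower())]
        while stack:
            n, word = stack.pop()
            if n.is_word:
                return word
            stack.extend((child, word + ch) for ch, child in n.children.items())
        return None
```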
## Inspiration GeoGuessr is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing playlists of iconic locations, in addendum to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers! ## What it does The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie from which we selected the location. You get points for how close you are to the location and for whether you got the bit of trivia correct. ## How we built it We used the *discord.py* library for actually coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets, which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs. ## Challenges we ran into For initially storing the data, we thought to use a playlist class, storing the playlist data as an array of playlist objects, but instead used Excel for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were overcome soon. ## Accomplishments that we're proud of Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system, based on the Haversine formula for distances on spheres, was also an accomplishment we're proud of. ## What we learned We learned better syntax and practices for writing Python code. We learned how to use the Google Cloud Platform and the Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about human-computer interaction, as designing an interface for gameplay was rather interesting on Discord. ## What's next for Geodude? Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
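A hedged sketch of pulling a round's image with the google-streetview library, where a playlist row supplies the coordinates (parameter values and the output directory are illustrative):

```python
# Hypothetical round-image fetch via the google-streetview helper library.
import google_streetview.api

def fetch_location_image(lat: float, lng: float, out_dir: str = "round_images"):
    params = [{
        "size": "640x400",
        "location": f"{lat},{lng}",
        "key": "GOOGLE_MAPS_KEY",  # placeholder
    }]
    results = google_streetview.api.results(params)
    results.download_links(out_dir)  # saves the image for the bot to post
    return results.links
```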
losing
# Inspiration Traditional startup fundraising is often restricted by stringent regulations, which make it difficult for small investors and emerging founders to participate. These barriers favor established VC firms and high-net-worth individuals, limiting innovation and excluding a broad range of potential investors. Our goal is to break down these barriers by creating a decentralized, community-driven fundraising platform that democratizes startup investments through a Decentralized Autonomous Organization, also known as a DAO. # What It Does To achieve this, our platform leverages blockchain technology and the DAO structure. Here’s how it works: * **Tokenization**: We use blockchain technology to allow startups to issue digital tokens that represent company equity or utility, creating an investment proposal through the DAO. * **Lender Participation**: Lenders join the DAO, where they use cryptocurrency, such as USDC, to review and invest in the startup proposals. * **Startup Proposals**: Startup founders create proposals to request funding from the DAO. These proposals outline key details about the startup, its goals, and its token structure. Once submitted, DAO members review the proposal and decide whether to fund the startup based on its merits. * **Governance-based Voting**: DAO members vote on which startups receive funding, ensuring that all investment decisions are made democratically and transparently. Votes are weighted by the amount lent to a particular DAO. # How We Built It ### Backend: * **Solidity** for writing secure smart contracts to manage token issuance, investments, and voting in the DAO. * **The Ethereum Blockchain** for decentralized investment and governance, where every transaction and vote is publicly recorded. * **Hardhat** as our development environment for compiling, deploying, and testing the smart contracts efficiently. * **Node.js** to handle API integrations and the interface between the blockchain and our frontend. * **Sepolia**, where the smart contracts have been deployed and connected to the web application. ### Frontend: * **MetaMask** integration to enable users to seamlessly connect their wallets and interact with the blockchain for transactions and voting. * **React** and **Next.js** for building an intuitive, responsive user interface. * **TypeScript** for type safety and better maintainability. * **TailwindCSS** for rapid, visually appealing design. * **Shadcn UI** for accessible and consistent component design. # Challenges We Faced, Solutions, and Learning ### Challenge 1 - Creating a Unique Concept: Our biggest challenge was coming up with an original, impactful idea. We explored various concepts, but many were already being implemented. **Solution**: After brainstorming, the idea of a DAO-driven decentralized fundraising platform emerged as the best way to democratize access to startup capital, offering a novel and innovative solution that stood out. ### Challenge 2 - DAO Governance: Building a secure, fair, and transparent voting system within the DAO was complex, requiring deep integration with smart contracts, and we needed to ensure that all members, regardless of technical expertise, could participate easily. **Solution**: We developed a simple and intuitive voting interface, while implementing robust smart contracts to automate and secure the entire process. This ensured that users could engage in the decision-making process without needing to understand the underlying blockchain mechanics.
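To picture the voting flow end to end, here is a hedged Python sketch of casting a vote on Sepolia with web3.py; the contract address, ABI fragment, and function name are assumptions, and in the app itself this call happens through MetaMask in the browser:

```python
# Hypothetical script-side vote on a deployed DAO contract.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://sepolia.infura.io/v3/<project-id>"))  # placeholder

DAO_ABI = [{  # assumed ABI fragment: vote(uint256 proposalId, bool support)
    "name": "vote", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "proposalId", "type": "uint256"},
               {"name": "support", "type": "bool"}],
    "outputs": [],
}]
dao = w3.eth.contract(address="0x...", abi=DAO_ABI)  # placeholder address

def cast_vote(account: str, private_key: str, proposal_id: int, support: bool):
    tx = dao.functions.vote(proposal_id, support).build_transaction({
        "from": account,
        "nonce": w3.eth.get_transaction_count(account),
    })
    signed = w3.eth.account.sign_transaction(tx, private_key)
    # Newer web3.py versions rename this attribute to raw_transaction.
    return w3.eth.send_raw_transaction(signed.rawTransaction)
```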
## Accomplishments that we're proud of * **Developing a Fully Functional DAO-Driven Platform**: We successfully built a decentralized platform that allows startups to tokenize their assets and engage with a global community of investors. * **Integration of Robust Smart Contracts for Secure Transactions**: We implemented robust smart contracts that govern token issuance, investments, and governance-based voting, and validated them by writing extensive unit and e2e tests. * **User-Friendly Interface**: Despite the complexities of blockchain and DAOs, we are proud of creating an intuitive and accessible user experience. This lowers the barrier for non-technical users to participate in the platform, making decentralized fundraising more inclusive. ## What we learned * **The Importance of User Education**: As blockchain and DAOs can be intimidating for everyday users, we learned the value of simplifying the user experience and providing educational resources to help users understand the platform's functions and benefits. * **Balancing Security with Usability**: Developing a secure voting and investment system with smart contracts was challenging, but we learned how to balance high-level security with a smooth user experience. Security doesn't have to come at the cost of usability, and this balance was key to making our platform accessible. * **Iterative Problem Solving**: Throughout the project, we faced numerous technical challenges, particularly around integrating blockchain technology. We learned the importance of iterating on solutions and adapting quickly to overcome obstacles. # What’s Next for DAFP Looking ahead, we plan to: * **Attract DAO Members**: Our immediate focus is to onboard more lenders to the DAO, building a large and diverse community that can fund a variety of startups. * **Expand Stablecoin Options**: While USDC is our starting point, we plan to incorporate more blockchain networks to offer a wider range of stablecoin options for lenders (EURC, Tether, or Curve). * **Compliance and Legal Framework**: Even though DAOs are decentralized, we recognize the importance of working within the law. We are actively exploring ways to ensure compliance with global regulations on securities, while maintaining the ethos of decentralized governance.
## Inspiration
As a startup founder, it is often difficult to raise money, and the amount of equity given up can be alarming for founders who are unsure they want the gasoline of traditional venture capital. With VentureBits, startup founders take a royalty deal and dictate exactly the amount of money they are comfortable raising. Also, everyone can take risks on startups, as there are virtually no starting minimums to invest.
## What it does
VentureBits allows consumers to browse a plethora of early-stage startups that are looking for funding. In exchange for giving them money anonymously, investors gain access to a royalty deal proportional to the amount of money they've put into a company's fund. Investors can support their favorite founders every month with a subscription, or they can stop giving money to less promising companies at any time. VentureBits also allows startup founders who feel competent to raise just enough money to sustain themselves and their teams full-time, without losing a lot of long-term value via an equity deal.
## How we built it
We drew out the schematics on the whiteboards after coming up with the idea at YHack. We thought about our own experiences as founders and used that to guide the UX design.
## Challenges we ran into
We ran into challenges with finance APIs, as we were not familiar with them. A lot of finance APIs require approval to use in any official capacity outside of pure testing.
## Accomplishments that we're proud of
We're proud that we were able to create flows for our app and even get a lot of it implemented in React Native. We also began structuring the data for all of the companies on the network in Firebase.
## What we learned
We learned that finance backends and the logic to manage small payments and crypto payments can take a lot of time and a lot of fees. It is a hot space to be in, but ultimately one that requires a lot of research and careful study.
## What's next for VentureBits
We plan to see where the project takes us after running it by some people in the community who may be our target demographic.
## Inspiration
Finding interesting and genuinely educational content on YouTube has become increasingly difficult in recent years. While YouTube has ample content targeted towards learning and skill acquisition, its algorithm steers users away from intellectual long-form content because short-form content increases user engagement and ad revenue. Personally, many of our members have felt frustrated with the platform’s persistent recommendations of addictive and unproductive videos. Additionally, we believe the divide between education and entertainment is unreasonably strong. The formalized learning environments that we grow up in have created a false notion that education and entertainment are mutually exclusive. In line with our educational goals, as students we also felt that educational content, such as lecture videos, was too passive. Gen Z students have an average attention span of eight seconds, and with our current technology, students often space out and disengage while passively consuming educational content.
## What it does
Edutain is a video learning platform that efficiently sorts through YouTube’s overwhelming attention-grabbing content to generate recommendations based on the user’s subject interest. An algorithm generates checkpoints based on the video’s transcript and asks students multiple-choice questions at these key moments to ensure involved learning. We transform learning so that students are no longer merely passively consuming content, but instead actively grapple with the concepts introduced. Through this model, Edutain extrinsically motivates students, turning amotivation within the student population into intrinsic motivation.
## How we built it
Our team used Node.js, React.js, and Next.js to develop Edutain as an efficient and scalable web application. Using the YouTube API, we dynamically queried for educational content and extracted the audio data with the corresponding timestamps using AssemblyAI. After translating this speech to text, we ran Gemini on the transcript to dynamically generate multiple-choice questions based on the video content, and embedded these questions into the video.
## Challenges we ran into
Working with the different LLMs and agents was a challenge, since it was the first time any of us had used them and we had to explore lots of different options and APIs.
## Accomplishments that we're proud of
Since this was the first (in-person) hackathon for all of us, we were really proud of all the learning that happened. We used novel machine learning technologies to handle a variety of data formats and really developed our problem-solving skills when we ran into bugs. We also went through several prototyping phases with non-coded versions and leveraged the skill set of everyone on the team.
## What we learned
Our team learned how to use a variety of new APIs and machine learning tools, including Google’s YouTube API, Gemini, and speech-to-text services such as AssemblyAI. We also learned how to integrate the frontend and backend for web development.
## What's next for Edutain
We hope to target specific age groups next to further reinforce the concept of lifelong learning. Specifically, we are looking to create specific interfaces for kids under the age of six and seniors over 65. We want to add more elements of gamification for the kids' version, so that when parents want to occupy their kids they can give them Edutain instead of YouTube and games.
For seniors, we hope to add an AI chatbot to individualize their experience and reduce their technological barriers to learning. Ultimately, Edutain aims to bridge the gap between education and entertainment, make learning casual, and transform the way social media influences intellectual development.
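Our question generation runs in Node, but the idea is easy to show in Python. A minimal sketch, assuming the `google-generativeai` package and a `GEMINI_API_KEY` environment variable; the prompt wording and model name are illustrative:

```python
# Hedged sketch: generate one checkpoint question from a transcript chunk
# with Gemini. Our production pipeline does this in Node.js.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

def make_checkpoint_question(transcript_chunk: str) -> str:
    prompt = (
        "Write one multiple-choice question (four options, mark the answer) "
        "testing comprehension of this lecture excerpt:\n\n" + transcript_chunk
    )
    return model.generate_content(prompt).text

print(make_checkpoint_question(
    "Photosynthesis converts light energy into chemical energy stored in glucose."))
```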
winning
## Inspiration
For physical therapy patients, doing your home exercise program is a crucial part of therapy and recovery. These exercises improve the body and allow patients to remain pain-free without having to pay for costly repeat visits. However, doing these exercises incorrectly can hinder progress and put you back in the doctor’s office.
## What it does
PocketPT uses deep learning to detect and correct a patient's form across a broad range of physical therapy exercises.
## How we built it
We used the NVIDIA Jetson Nano computer and a Logitech webcam to build a deep learning model. We trained the model on over 100 images in order to assess the accuracy of physical therapy postures.
## Challenges we ran into
Since our group was using new technology, we struggled at first with setting up the hardware and figuring out how to train the deep learning model.
## Accomplishments that we're proud of
We are proud that we created a working deep learning model despite no prior experience with hardware hacking or machine learning.
## What we learned
We learned the principles of deep learning, hardware, and IoT. We learned how to use the NVIDIA Jetson Nano computer across various disciplines.
## What's next for PocketPT
In the future, we want to expand to include more physical therapy postures. We also want to implement our product for use on Apple Watch and Fitbit, which would allow a more seamless workout experience for users.
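With only ~100 training images, transfer learning on a pretrained backbone is a reasonable way to build this kind of classifier. A minimal sketch of the approach, not our exact model; the directory layout and hyperparameters are placeholders:

```python
# Hedged sketch of a small posture classifier trainable on ~100 images.
# "postures/" is assumed to contain one subfolder per class
# (e.g. correct/ and incorrect/).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained features for a tiny dataset

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1, input_shape=(224, 224, 3)),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # correct vs. incorrect form
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

train = tf.keras.utils.image_dataset_from_directory(
    "postures/", image_size=(224, 224), batch_size=8)
model.fit(train, epochs=10)
```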
## Inspiration
r.unear.me helps you get exactly where your friends are. When you are supposed to meet them but can't quite find them, r.unear.me guides you to them. r.unear.me is a web app that enables you and your friends to share each other's locations on the same page, with locations updating continuously as people move.
## How I built it
You open the web app and get a custom code at the end of the URL tied to your location. You can then send the URL to your friends, and their location markers will also be added to the page.
## Challenges I ran into
Updating the location coordinates, and trying to get Firebase, Mapbox, and Azure working together.
## Inspiration
The pandemic has affected the health of billions worldwide, and not just through COVID-19. Studies have shown a worrying decrease in physical activity due to quarantining and the closure of clubs, sports, and gyms. Not only does this discourage an active lifestyle, but it can also lead to serious injuries from working out alone at home. Without a gym partner or professional trainer to help spot and correct errors in movements, one can continue to perform inefficient and often damaging exercises without even being aware of it.
## What it does
Our solution to this problem is **GymLens**, a virtual gym trainer that allows anyone to work out at home with *personal rep tracking* and *correct posture guidance*. During the development of our Minimum Viable Product, we implemented a pose tracker using TensorFlow to track the movement of the person’s key body points. The posture during exercises such as pushups can then be evaluated by processing the data points through a linear regression algorithm. Based on the severity of the posture error, a hint is provided on the screen to correct the posture.
## How we built it
We used a TensorFlow MoveNet model to detect the positions of body parts. These positions were used as inputs for our machine learning algorithm, which we trained to identify specific stances. Using this, we were able to identify repetitions between each pose. (A sketch of the pose-detection step appears after this writeup.)
## Challenges we ran into
From the beginning, our team had to navigate the code editor of Sublime Text and Floobits, which proved to be more difficult than we imagined, since members had trouble logging in and signing in to GitHub. Our front-end members who were coding with HTML and CSS ran into problems with margins and padding on divs and buttons. Aligning elements and making sure they were proportional caused a lot of frustration, but with moral support and many external sources, we were able to get a sleek website on which we could host our project.
Incompatibilities between machine learning tools and the library used for pose detection were a major hurdle. We were able to solve this issue by using our own custom-coded machine learning library with a simple feed-forward neural network.
Lastly, our struggles with Floobits ended up being one of our biggest setbacks. It turned out, our entire team soon realized, that when two people were on the same file, the lines of code would severely glitch out, causing uncontrollable chaos when typing. Due to the separate nature of front-end and back-end programmers, it was inevitable that members would step on each other's toes in the same file and accidentally undo, delete, or add too many characters to one line of code. We ended up having to code cautiously for fear of deleting valuable code, but we had many laughs over the numerous errors that transpired due to this glitch. Furthermore, Floobits’ ability to overwrite code turned out to be both an asset and a liability. Although we were able to work on the same files in real time, destruction from one member of the team became collateral. On the last evening of the hackathon, one of our team members accidentally overwrote the remote files that the rest of the team had worked hours on instead of the local ones. In a frantic effort to get our code back, our group tried pressing ctrl-z to get to the point where the deletion occurred, but it was too late. Unfortunately, there was nothing we could do to get about 3 hours of work back.
Luckily, with our excellent team morale, we separated into groups to repair what had been lost. However, our problems with this code editor did not end here. Nearing the end of the hackathon, our front-end and back-end duos came together triumphantly as we presented our accomplishments to each other. This final step turned out to be an unsuspected hurdle once more. As the back end merged their final product into the website, many errors with GitHub pushes and the integration with Floobits became apparent. Progress had not been saved from branch to branch, and the front-end code ended up being set back another 2 hours. Having dealt with this problem before, our team put our heads down, pushed away the frustration of restarting, and began to mend the lost progress once more.
## Accomplishments that we're proud of
One significant milestone within our project was the successful alignment of the canvas-drawn posture overlay with the body of a user. Its occurrence brought the team to a video call, where we offered congratulations while hiding our faces behind our freshly made overlay. Its successful tracking later became the main highlight of multiple demonstrations of exercises and jokes surrounding bad posture. The front-end development team enjoyed the challenge of coding in unfamiliar territory. Their encounters with unfamiliar functions, as well as their first attempts at JavaScript to create a stopwatch with working buttons, all resulted in greater feelings of pride as they were incorporated into the site. They are proud to have designed a self-assured, visually appealing website by applying the knowledge they’ve gained over the last 36 hours. Personally, we're proud to have stayed awake for 21 hours.
## What we learned
This hackathon became a giant learning experience for all of our members, despite the range of coding abilities and experience. Our more experienced back-end members tackled TensorFlow’s MoveNet pose detection library for the first time. Meanwhile, the members responsible for the user interface, design, and website navigated Floobits and Sublime Text as beginners. Our understanding of the different CSS functions greatly increased, most notably with layering and element positioning using inspect element. Additionally, members ventured into the world of JavaScript for the first time and realized its handiness with HTML and CSS. Overall, our team surprised ourselves with our mental fortitude when numerous obstacles were thrown our way, and with our ability to learn different functions, languages, software, and platforms so quickly. We worked cohesively and efficiently, all the while discovering the true extent of our capabilities under time pressure and during late-night calls.
## What's next for GymLens
In the future, we hope to expand the functionality of our project by making greater use of its movement correction and movement recognition capabilities. We plan to offer a greater library of exercises sourced from online trainers’ videos used as training data. This extra variety will allow users to take total command of their workout regimes, all while ensuring the safety and efficacy of all exercises.
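The MoveNet step referenced in "How we built it" can be sketched in a few lines. A minimal version, assuming `tensorflow_hub` and MoveNet's public single-pose Lightning model; the downstream classifier that consumes these keypoints is our own and is omitted:

```python
# Hedged sketch: extract 17 body keypoints from a frame with MoveNet,
# producing the feature vector our stance classifier consumes.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")

def keypoints(frame: np.ndarray) -> np.ndarray:
    """frame: HxWx3 uint8 image -> (17, 3) array of (y, x, confidence)."""
    img = tf.image.resize_with_pad(tf.expand_dims(frame, 0), 192, 192)
    img = tf.cast(img, tf.int32)  # Lightning expects int32 192x192 input
    out = movenet.signatures["serving_default"](img)
    return out["output_0"].numpy()[0, 0]  # shape (17, 3)

dummy = np.zeros((480, 640, 3), dtype=np.uint8)
print(keypoints(dummy).shape)  # (17, 3)
```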
winning
## Inspiration
We believe technology should connect people and allow us to get out of our comfort zones. So we came up with random ideas focused on the premise of socially connecting strangers. Then it hit us: an app that connects users based on personality traits and shared event interests.
## What it does
During registration, users are asked to fill out a personality questionnaire. After registration is complete, users can look through a database of local events and see lists of users attending those events. The users are ranked from most similar to least similar according to the user's interests, allowing users to message and chat with the most compatible people. From there, the user can find other users to attend events with.
## How I built it
Backend:
* AWS EC2
* Flask
* Firebase
* PredictHQ

Client-side:
* Android Studio
* Twilio Programmable Chat
## Challenges I ran into
* Incorporating chat into the app was rough
* The Firebase structure with the Python wrapper wasn't supported that well
* Generating users with a bitmap image converted to a base64 string on the backend didn't work out that well
## Accomplishments that I'm proud of
The whole entire app:
* User management system on the server side
* In-app chat
* UI/UX design
## What I learned
* Database user management
* Chat room creation
## What's next for EventBuddy
Expanding it to the iOS platform
## Inspiration
It sometimes feels so natural to find inspiration in everyday routine. One such case was when my teammates and I were on a group call and discussed how we got **calls from spammers** almost every day, and how convincing they seemed to everyone who talked to them for a while. As we discussed further, we found the problem continues in the current technological world of social media, where we find **spam on Twitter** as well. So, we decided to create an app for the problems people face in the virtual world.
## What it does
The application is designed for easy usage while keeping every user in mind. The app targets the **phone numbers** and **Twitter accounts** of spammers and lets users review the person behind them. Users can also look up reviews added by other people on the platform, along with relevant photos and tags. To make the app more secure, we have **integrated OTP services** for new-user registration.
## How we built it
We tried to build the application with a simple idea in mind, keeping the everyday challenges that we faced in view and using the technologies and software that we are good at. For seamless integration into all mobile operating systems, we developed the frontend using **React Native**, and the backend was made using **Node.js** hosted on **Google Cloud** App Engine for synchronous code execution. For our messaging and OTP services, we decided to go with **Twilio** for reliable connectivity anywhere in the world. We even used the Twitter API for collecting and analysing tweets. Finally, for database management we incorporated **NoSQL Cloud Firestore** and consolidated **Google Cloud** storage for blob files.
## Challenges we ran into
The primary challenge that we ran into was **long-distance communication** within the team. As the event was online, our team was based in different parts of the world, with different time zones. This led to a lack of communication and longer response times. As we proceeded further into the event, we **adapted to each other’s schedules and work**, which all in all was a great experience in itself.
## Accomplishments that we're proud of
While creating the whole project we accomplished a lot and overcame many challenges. Looking back, we think that making a **fully functional app in less than 36 hours** is the greatest accomplishment in itself.
## What we learned
Our team enjoyed the project a lot as we learned to make our way through it. The key things all of us can say we learned were **cooperation** and **efficiency**, all while staying online the whole time. There was an unspoken understanding that we developed over the past few hours that we can say we are really proud of.
## What's next for SafeView
We developed SafeView keeping the future in mind. Our team has a lot of plans and hopes for the application. First, we will try to **improve our UI** and make it more user-friendly, along with covering **more test cases** to make the app more robust and cover all possibilities. Finally, we plan to **add location support to our app** to help real-world users stay aware of bad neighbourhoods, all while fixing any bugs that we can find in the app.
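The OTP flow we wired up with Twilio can be sketched as follows. A minimal version, assuming Twilio Verify and environment variables for the credentials; the service SID is a placeholder:

```python
# Hedged sketch of OTP registration with Twilio Verify.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
VERIFY_SID = os.environ["TWILIO_VERIFY_SERVICE_SID"]  # placeholder service

def send_otp(phone: str) -> None:
    # Texts a one-time code to the new user's phone.
    client.verify.v2.services(VERIFY_SID).verifications.create(
        to=phone, channel="sms")

def check_otp(phone: str, code: str) -> bool:
    # Returns True only when the submitted code matches.
    result = client.verify.v2.services(VERIFY_SID).verification_checks.create(
        to=phone, code=code)
    return result.status == "approved"
```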
## Inspiration 📍
In an era where digital platforms are omnipresent, the challenge of fostering genuine connections while promoting ethical technology use has never been more pressing. The inspiration for "Beacons" came from our own experiences of struggling to coordinate plans and communicate effectively with groups. We recognized a void in digital solutions that not only streamline planning but do so with a focus on community, inclusivity, and sustainability. Our aim was to create an all-in-one solution that not only simplifies event coordination but also embodies the principles of socially responsible platforms, nurturing connections that extend beyond the digital realm. The inspiration for the name Beacons comes from the fact that a beacon is a light that brings people together and creates an opportunity for all to become a better version of themselves by working together. With this in mind, we built our app with the intention of it being a beacon for beacons, allowing all these individual events to shine in one place.
## What it does 👭👬
Beacons is a pioneering app crafted to redefine the planning landscape, ensuring ease of communication and organization for events of all scales while adhering to ethical digital practices. It emerges as a beacon of responsible technology, providing:

* **Login Page:** Secure entry to a personalized planning hub, emphasizing data protection and privacy.
* **Feed:** An updated view of invites with an intuitive response mechanism, promoting digital wellness by reducing clutter and enhancing focus.
* **Map:** A sustainable approach to event planning, featuring eco-friendly suggestions and local event highlights to minimize environmental impact.
* **Create Event:** A feature that encourages community engagement and inclusivity, allowing for the easy organization of gatherings that cater to diverse interests and needs.
* **Profile Area:** A space for users to express their preferences and manage settings, ensuring a tailored and responsible user experience.
* **Create Group/Add Friend:** Facilitates building community ties, enabling users to forge and nurture connections based on shared interests and activities that bring people together in our digital age.
## How we built it ⌨️
In constructing Beacons, we used a cohesive tech stack that reflects our commitment to responsible innovation. This included:

* **Frontend:** Utilizing React Native for a universally accessible, cross-platform user interface compatible with devices of all builds and sizes.
* **Backend:** Node.js with Express, ensuring efficient processes and ethical data handling, easily integrable with our frontend.
* **Database:** MongoDB, chosen for its flexibility and scalability, which aligns with our sustainable development goals and can easily be controlled through our Node.js backend.
* **APIs:** Implementing Apple Maps for its commitment to privacy and environmental sustainability in location-based services, with easily implementable functions.
## Challenges we ran into ⏰
Throughout our experience building Beacons, we faced and overcame a variety of obstacles. To start with, in order to fulfill the theme of building a socially responsible platform, we had to factor ethical considerations into every aspect of Beacons. From seamless map functionality with an eco-friendly focus to encryption and data security techniques at all levels, we went the extra mile to ensure our app aligned with our standards of digital wellbeing.
An additional challenge we faced was within our user authentication protocol, which we initially planned to do with Auth0. However, Auth0 and Expo, the platform we used to develop our React Native app, were incompatible with a variety of documented issues. Our alternative plan, which involved Firebase, also faced similar challenges, leading us to develop our own authentication and authorization system from scratch. We also made many small mistakes in terms of file names and API routes, which meant that we had to spend time documenting many details within the backend and frontend, taking more time than it should have. All in all, these obstacles were a learning opportunity that allowed us all to become more professional and experienced programmers. ## Accomplishments that we're proud of 🎉 Developing Beacons has been an enriching journey, deeply rooted in our shared commitment to use technology for societal good. Our team, a diverse group united by this common goal, has overcome numerous challenges through close collaboration and innovative problem-solving. The impact Beacons has the potential to make is what we are most proud of. From testing the app, its potential usage highlights its role in enhancing real-world connections and community engagement, affirming our vision of technology as a tool for positive change. With what we have developed so far, Beacons definitely serves a purpose within our communities to not only bring people together who would never have met otherwise but also add a new dynamic to relationships that already exist. ## What we learned 🗞️ This hackathon fostered in us a variety of skills that we will be able to apply within future hackathons as well as our careers within the technological sector. Not only did we deepen our understanding of the soft skills required to work within a team, but we also were able to practice our programming skills and learn more about the technologies that we will eventually be using within internships/jobs. We distributed our work not according to what we were most comfortable with, but instead based on what we had the least experience with, meaning that all members of our team were assigned a task that they were not as familiar with. We also learned more about the balance between technology and social responsibility, and what goes into integrating sustainable practices within software development, a pursuit that we hope to match within our future ventures. ## What's next for Beacons 🔍 Our vision for Beacons includes evolving it into an even more impactful platform. We plan to integrate features for personalized event recommendations that prioritize sustainability and inclusivity, enhance location-based services to support local economies, and introduce new ways for users to engage with and contribute to their communities responsibly. We plan to continue to build out this application within the near future and are open to any feedback/advice that can be provided!
losing
## Inspiration
Jorge's sister asked a question that he didn't want to answer because it was too much work. That's where ReplAI comes in.
## What it does
ReplAI is your own generative-AI-powered Discord representative that learns your speech patterns and diction. Whether you're too busy or tired of reading messages, have it reply to a conversation on your behalf.
## How we built it
ReplAI was built using a Python backend and HTML/CSS for the frontend. ReplAI utilizes OpenAI's and Discord's robust APIs to deliver a fast, personalized response to any conversation.
## Challenges we ran into
ReplAI was challenging to build because of the learning curve of Discord's APIs and developer tools.
## Accomplishments that we're proud of
ReplAI gives you time back to spend doing the things you love. Finishing an MVP for a product that could be integrated with other social media platforms was rewarding, especially for a first hackathon.
## What we learned
Coming up with a project idea isn't easy. It takes iteration and deep knowledge of a subject area to build something that solves a complex problem with a simple solution in 24 hours.
## What's next for ReplAI
ReplAI could be made to integrate with other, more mainstream social networking services like Messenger or WhatsApp.
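The core loop can be sketched as a small Discord bot that drafts a reply from recent channel context. A minimal sketch, not our exact code; the model name, persona prompt, and `!replai` trigger are illustrative:

```python
# Hedged sketch of ReplAI's core loop: pull recent messages as context and
# ask an OpenAI model to draft a reply in the user's voice.
import os
import discord
from openai import OpenAI

ai = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
intents = discord.Intents.default()
intents.message_content = True  # required to read message text
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message: discord.Message):
    if message.author == bot.user or not message.content.startswith("!replai"):
        return
    # Use the last few channel messages as conversational context.
    history = [m.content async for m in message.channel.history(limit=10)]
    completion = ai.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Reply in the user's usual tone and diction."},
            {"role": "user", "content": "\n".join(reversed(history))},
        ],
    )
    await message.channel.send(completion.choices[0].message.content)

bot.run(os.environ["DISCORD_TOKEN"])
```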
## Inspiration
Hearing a Huawei representative discuss the issues of mental health resonated strongly with our group. Younger generations are increasingly affected by mental health issues, especially as many of us experience the impacts of isolation and loneliness due to COVID-19. We were also inspired by the challenge of creating something ‘warm and fuzzy’. We wanted to create a fun and lighthearted solution that attempts to combat some of these issues by providing one with a friend for whatever mood they are in.
## What it does
This website provides users with a “fuzzy friend”, a chatbot tailored to provide feel-good suggestions based on the user’s emotions. Users are introduced to 6 fuzzy friends, each with their own personality, which they can then select based on their mood. Different bots offer various responses to messages, whether links to videos, jokes, or friendly commentary.
## How we built it
We built this project using Python’s ChatterBot and Flask frameworks, and HTML. We worked on both front-end and back-end development, using Bootstrap to help with the front end. We also attempted to use Heroku as a server to link the two components.
## Challenges we ran into
Our main struggle was connecting our website to the server, so that users without Flask or ChatterBot could still use our FuzzyFriend chat service. We were new to GitHub and struggled with merging our front-end and back-end components without creating conflicts. We also struggled to download and import the libraries necessary for our website to work, such as Flask. Building an AI chatbot capable of improving itself is also harder than it seems!
## Accomplishments that we're proud of
We managed to build a clean, user-friendly website that matches our warm, friendly theme. Another accomplishment was working with new libraries and improving our technical skills, such as learning to use Flask and how to collaborate on GitHub. As our first hackathon ever, and our first collaborative project, learning how to use GitHub and creating common code that would work on all of our computers is an accomplishment that we are proud of. There were a lot of new components to this project, but we were all able to adapt and work with them to create what we wanted. Finally, we are proud of our teamwork. We were all incredibly supportive, determined, and helpful through each step of the process; we were organized in planning our project and assigning and delegating tasks, as well as quick to help each other out.
## What we learned
We learned how to collaborate on GitHub, how to implement libraries and frameworks we were not familiar with, and how to have fun coding together!
## What's next for FuzzyFriend
We want to make our chatbots smarter to make user conversations more fluid. Currently, our bots are limited to very restricted conversations and often misunderstand messages. In the future we want to train our bots to interact more appropriately with all sorts of prompts from the user while also staying true to their characters. We would also like to successfully connect our product to a server and domain. This way, users could more easily access its contents.
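One fuzzy friend can be sketched as a ChatterBot instance behind a Flask route. A minimal sketch, assuming the `chatterbot` package; the "Sunny" personality and training lines are illustrative:

```python
# Hedged sketch of one personality bot served behind a Flask endpoint.
from flask import Flask, request, jsonify
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

app = Flask(__name__)
sunny = ChatBot("Sunny")  # one of the six fuzzy friends (illustrative)
ListTrainer(sunny).train([
    "I feel sad today",
    "I'm sorry to hear that! Want a joke to cheer you up?",
])

@app.route("/chat", methods=["POST"])
def chat():
    # The frontend posts {"message": "..."} and gets the bot's reply back.
    user_message = request.json["message"]
    return jsonify({"reply": str(sunny.get_response(user_message))})

if __name__ == "__main__":
    app.run()
```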
## Inspiration
As someone who has always wanted to speak in ASL (American Sign Language), I have always struggled with practicing my gestures, as I, unfortunately, don't know any ASL speakers to try and have a conversation with. Learning ASL is an amazing way to foster an inclusive community for those who are hearing impaired or deaf. DuoASL is the solution for practicing ASL for those who want to verify their correctness!
## What it does
DuoASL is a learning app where users can sign in to their respective accounts and learn/practice their ASL gestures through a series of levels. Each level has a *"Learn"* section, with a short video on how to do the gesture (e.g. 'hello', 'goodbye'), and a *"Practice"* section, where the user can use their camera to record themselves performing the gesture. This recording is sent to the backend server, where it is validated with our action recognition neural network to determine if you did the gesture correctly!
## How we built it
DuoASL is built up of two separate components:
**Frontend** - The frontend was built using Next.js (a React framework), Tailwind, and TypeScript. It handles the entire UI, as well as video capture during the *"Practice"* section, which it uploads to the backend.
**Backend** - The backend was built using Flask, Python, Jupyter Notebook, and TensorFlow. It runs as a Flask server that communicates with the frontend and stores the uploaded video. Once a video has been uploaded, the server runs the Jupyter Notebook containing the action recognition neural network, which uses OpenCV and TensorFlow to apply the model to the video and determine the most prevalent ASL gesture. It saves this output to an array, which the Flask server reads and returns to the frontend.
## Challenges we ran into
As this was our first time using a neural network and computer vision, it took a lot of trial and error to determine which actions should be detected using OpenCV, and how the landmarks from MediaPipe Holistic (which we used to track the hands and face) should be converted into formatted data for the TensorFlow model. We, unfortunately, ran into a very specific and undocumented bug with using Python to run Jupyter Notebooks that import TensorFlow, specifically on M1 Macs. I spent a short amount of time (6 hours :) ) trying to fix it before giving up and switching to a different computer.
## Accomplishments that we're proud of
We are proud of how quickly we were able to get most components of the project working, especially the frontend Next.js web app and the backend Flask server. The neural network and computer vision setup was finished pretty quickly too (excluding the bugs), especially considering that for many of us this was our first time even using machine learning on a project!
## What we learned
We learned how to integrate a Next.js web app with a backend Flask server to upload video files through HTTP requests. We also learned how to use OpenCV and MediaPipe Holistic to track a person's face, hands, and pose through a camera feed. Finally, we learned how to collect videos and convert them into data to train and apply an action detection network built using TensorFlow.
## What's next for DuoASL
We would like to:

* Integrate video feedback that provides detailed steps on how to improve (using an LLM?)
* Add more words to our model!
* Create a practice section that lets you form sentences!
* Integrate full mobile support with a PWA!
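The landmark-extraction step can be sketched as follows: MediaPipe Holistic turns one video frame into the flat keypoint vector the TensorFlow model consumes. A minimal sketch of the idea, not our exact preprocessing:

```python
# Hedged sketch: MediaPipe Holistic landmarks -> flat feature vector.
import numpy as np
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic()

def frame_to_features(rgb_frame: np.ndarray) -> np.ndarray:
    """RGB frame -> concatenated pose + hand landmarks (zeros if missing)."""
    results = holistic.process(rgb_frame)

    def flatten(landmarks, count):
        # Each landmark carries (x, y, z); missing parts become zeros.
        if landmarks is None:
            return np.zeros(count * 3)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()

    return np.concatenate([
        flatten(results.pose_landmarks, 33),        # 33 pose landmarks
        flatten(results.left_hand_landmarks, 21),   # 21 per hand
        flatten(results.right_hand_landmarks, 21),
    ])
```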
losing
## Inspiration
While a member of my team was conducting research at UCSF, he noticed a family partaking in a beautiful, albeit archaic, practice. They gave their grandfather access to a Google Doc, where each family member would write down the memories that they have with him. Nearly every day, the grandfather would scroll through the doc and look at the memories that he and his family wanted him to remember.
## What it does
Much like that Google Doc, our site stores memories inputted by either the main account holder or other people who have access to the account, perhaps through a shared family email. From there, the memories show up on the user's feed, tagged with the emotion they indicate. Someone with Alzheimer's can easily search through their memories to find what they are looking for. In addition, our chatbot feature, trained on their memories, allows users to talk to the app directly and ask for what they are looking for.
## How we built it
Next.js, React, Node.js, Tailwind, etc.
## Challenges we ran into
It was difficult implementing our chatbot in a way where it is automatically updated with the data our users input into the site. Moreover, we were working with React for the first time and faced many challenges trying to build out and integrate the different technologies into our website, including setting up MongoDB, Flask, and different APIs.
## Accomplishments that we're proud of
Getting this done! Our site is polished and carries out our desired functions well!
## What we learned
As beginners, we were introduced to full-stack development!
## What's next for Scrapbook
We'd like to introduce Scrapbook to medical professionals at UCSF and see their thoughts on it.
## Inspiration
We started this project because we observed the emotional toll that loneliness, grief, and dementia can take on individuals and their caregivers, and we wanted to create a solution that could make a meaningful difference in their lives. Loneliness is especially tough on seniors, affecting both their mental and physical health. Our aim was simple: to offer them some companionship and comfort, a reminder of the good memories, and a way to feel less alone.
## What it does
Using the latest AI technology, we create lifelike AI clones of their closest loved ones, whether living or passed away, allowing seniors to engage in text or phone conversations whenever they wish. For dementia patients and their caregivers, our product goes a step further by enabling the upload of cherished memories to the AI clones. This not only preserves these memories but also allows dementia patients to relive them, reducing the need to constantly ask caregivers about their loved ones. Moreover, for those who may forget that a loved one has passed away, our technology helps provide continuous connection, eliminating the need to repeatedly go through the grief of loss. It's a lifeline to cherished memories and emotional well-being for our senior community.
## How we built it
We've developed a platform that transforms seniors' loved ones into AI companions accessible through web browsers, SMS, and voice calls. This technology empowers caregivers to customize AI companions for the senior they are in charge of, defining the companion's personality and backstory with real data to foster the most authentic interaction, resulting in more engaging and personalized conversations. These interactions are stored indefinitely, enabling companions to learn and enhance their realism with each use. We used the following tools:

* Auth --> Clerk + Firebase
* App logic --> Next.js
* VectorDB --> Pinecone
* LLM orchestration --> Langchain.js
* Text model --> OpenAI
* Text streaming --> ai sdk
* Conversation history --> Upstash
* Deployment --> Ngrok
* Text and call with companion --> Twilio
* Landing page --> HTML, CSS, JavaScript
* Caregiver frontend --> Angular
* Voice clone --> ElevenLabs
## Challenges we ran into
One of the significant challenges we encountered was in building and integrating two distinct front ends. We developed a Next.js front end tailored for senior citizens' ease of use, while a more administrative Angular front end was created for caregivers. Additionally, we designed a landing page using a combination of HTML, CSS, and JavaScript. The varying languages and technologies posed a challenge in ensuring seamless connectivity and cohesive operation between these different components. We faced difficulties in bridging the gap and achieving the level of integration we aimed for, but through collaborative efforts and problem-solving, we managed to create a harmonious user experience.
## Accomplishments that we're proud of
We all tackled frameworks we weren't so great at, like Next.js. And, together, we built a super cool project with tons of real-world uses. It's a big deal for us because it's not just tech, it's helping seniors and caregivers in many ways. We've shown how we can learn and innovate, and we're stoked about it!
## What we learned
We learned how to construct vector databases, which was an essential part of our project. Additionally, we discovered the intricacies of connecting the front-end and back-end components through APIs.
## What's next for NavAlone We would like to improve the AI clone phone call feature. We recognize that older users often prefer phone calls over texting, and we want to make this experience as realistic, fast, and user-friendly as possible.
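Our retrieval step runs in Langchain.js, but a Python analogue conveys the idea: embed the senior's question, pull the closest stored memories from Pinecone, and feed them into the companion's prompt. A hedged sketch; the index name, model choices, and metadata schema are all illustrative:

```python
# Hedged Python analogue of our Langchain.js memory-retrieval step.
import os
from openai import OpenAI
from pinecone import Pinecone

ai = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("memories")

def recall(question: str, top_k: int = 3) -> list[str]:
    # Embed the question, then fetch the nearest stored memory snippets.
    vector = ai.embeddings.create(
        model="text-embedding-3-small", input=question).data[0].embedding
    response = index.query(vector=vector, top_k=top_k, include_metadata=True)
    return [match.metadata["text"] for match in response.matches]

print(recall("Tell me about our trip to the lake"))
```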
## Inspiration
We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders often go undiagnosed or misdiagnosed, leaving these individuals constantly struggling to read and write, which is an integral part of education. With such learning disabilities, learning a new language can be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered toward these individuals.
## What it does
ReadRight offers interactive language lessons, but with a unique twist. It reads the prompt out to the user, as opposed to displaying it on the screen for the user to read and process themselves. Then, once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way, individuals with reading and writing disabilities can still hone their skills in a new language.
## How we built it
We built the frontend UI using React, JavaScript, HTML, and CSS. For the backend, we used Node.js and Express.js. We made use of Google Cloud's Speech-to-Text API. We also utilized Cohere's API to generate text using their LLM. Finally, for user authentication, we made use of Firebase.
## Challenges we faced + What we learned
When you first open our web app, our homepage consists of a lot of information on our app and our target audience. From there, the user needs to log in to their account. User authentication is where we faced our first major challenge: third-party integration took us significant time to test and debug. Secondly, we struggled with the generation of prompts for the user to repeat and with using AI to implement that.
## Accomplishments that we're proud of
This was the first time many of our members integrated AI into an application we were developing, so it was a very rewarding experience, especially since AI is the new big thing in the world of technology and it is here to stay. We are also proud of the fact that we are developing an application for individuals with learning disabilities, as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things.
## What's next for ReadRight
As of now, ReadRight has the basics of the English language for users to study and get prompts from, but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should explore more robust speech-recognition models.
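The accuracy-scoring idea can be sketched in Python: transcribe the learner's audio with Google Cloud Speech-to-Text, then compare the transcript to the target phrase. A hedged sketch; the sequence-similarity metric here is a simplification of our scoring, and the audio is assumed to be a WAV payload:

```python
# Hedged sketch of pronunciation scoring: transcribe, then compare.
import difflib
from google.cloud import speech

client = speech.SpeechClient()

def pronunciation_score(audio_bytes: bytes, target_phrase: str) -> float:
    config = speech.RecognitionConfig(language_code="en-US")
    response = client.recognize(
        config=config, audio=speech.RecognitionAudio(content=audio_bytes))
    heard = " ".join(r.alternatives[0].transcript for r in response.results)
    # 0.0 (no overlap) .. 1.0 (exact match) -- a stand-in accuracy score.
    return difflib.SequenceMatcher(
        None, heard.lower(), target_phrase.lower()).ratio()
```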
partial
## 💡 Inspiration
Generation Z is all about renting - buying land is simply out of our budgets. But the tides are changing: with Pocket Plots, an entirely new generation can unlock the power of land ownership without a budget.
Traditional land ownership goes like this: you find a property, spend weeks negotiating a price, and secure a loan. Then, you have to pay out agents, contractors, utilities, and more. Next, you have to go through legal documents, processing, and more. All while you are shelling out tens to hundreds of thousands of dollars. Yuck.
Pocket Plots handles all of that for you. We, as a future LLC, buy up large parcels of land, stacking over 10 acres per purchase. Under the company name, we automatically generate internal contracts that outline a customer's rights to a certain portion of the land, defined by 4 coordinate points on a map. Each parcel is then divided into individual plots ranging from 1,000 to 10,000 sq ft, and only one person can own the contract to each plot. This is what makes us fundamentally novel: we simulate land ownership without needing to physically create deeds for every person. This skips all the costs and legal details of creating deeds and gives everyone the opportunity of land ownership. These contracts run 99 years and are infinitely renewable, so when it's time to sell, you'll have buyers flocking to buy from you first.
You can try out our app here: <https://warm-cendol-1db56b.netlify.app/> (AI features are available locally. Please check our Github repo for more.)
## ⚙️What it does
### Buy land like it's eBay:
![](https://i.imgur.com/PP5BjxF.png)
We aren't just a business: we're a platform. Our technology allows for fast transactions, instant legal document generation, and resale of properties like it's the world's first eBay-style land marketplace. We're not just a business. We've got what it takes to launch your next biggest investment.
### Pocket as a new financial asset class...
In fintech, the last boom was in blockchain. But after FTX and the bitcoin crash, cryptocurrency has been shaken up: blockchain is no longer the future of finance. Instead, the market is shifting into tangible assets, and at the forefront of this is land. However, land investments have been gatekept by the wealthy, leaving little opportunity for an entire generation. That's where Pocket comes in. By following our novel perpetual-lease model, we sell contracts to tangible, buildable plots of land on our properties for pennies on the dollar. We buy the land, and you buy the contract. It's that simple. We take care of everything legal: the deeds, easements, taxes, logistics, and costs. No more expensive real estate agents, commissions, and hefty fees. With the power of Pocket, we give you land for just $99, no strings attached.
With our resale marketplace, you can sell your land the exact same way we sell ours: on our very own website. We handle all logistics, from the legal forms to the system data - and give you 100% of the sale value, with no seller fees at all. We will even run ads for you, giving your investment free attention. So how much return does a Pocket Plot bring? Well, once a parcel sells out its plots, it's gone - whoever wants to buy land from that parcel has to buy from you. We've seen plots sell for 3x the original investment value in under one week. Now how insane is that? The tides are shifting, and Pocket is leading the way.
### ...powered by artificial intelligence
**Caption generation**
*Pocket Plots* scrapes data from sites like Landwatch to find plots of land available for purchase. Most land postings lack insightful descriptions of their plots, making it hard for users to find the exact type of land they want. With *Pocket Plots*, we transform listing links into images, and images into helpful captions.
![](https://i.imgur.com/drgwbft.jpg)
**Captions → Personalized recommendations**
These captions also inform the user's recommended plots and what parcels they might buy. Along with inputting preferences like desired price range or size of land, the user can submit a text description of what kind of land they want. For example, do they want a flat terrain or a lot of mountains? Do they want to be near a body of water? This description is compared with the generated captions to help pick the user's best match!
![](https://i.imgur.com/poTXYnD.jpg)
### **Chatbot**
*Pocket Plots* can be confusing. All the legal details, the way we work, and how we make land so affordable make our operations a mystery to many. That is why we developed a supplemental AI chatbot that has learned our system and can answer questions about how we operate. *Pocket Plots* offers a built-in chatbot service to automate question-answering for clients with questions about how the application works. Powered by OpenAI, our chatbot reads our community forums and uses previous questions to best help you.
![](https://i.imgur.com/dVAJqOC.png)
## 🛠️ How we built it
Our AI-focused products (chatbot, caption generation, and recommendation system) run on Python, OpenAI products, and Hugging Face transformers. We also used a collection of other related libraries as needed. Our front end was primarily built with Tailwind, Material UI, and React. For AI-focused tasks, we also used Streamlit to speed up deployment.
### We run on Convex
We spent a long time mastering Convex, and it was worth it. With Convex's powerful backend services, we did not need to spend infinite amounts of time building out our own, and instead we could focus on making the most aesthetically pleasing UI possible.
### Checkbook makes payments easy and fast
We are an e-commerce site for land and rely heavily on payments. While Stripe and other platforms offer that capability, nothing compares to what Checkbook has allowed us to do: send invoices with just an email. Utilizing Checkbook's powerful API, we were able to integrate Checkbook into our system for safe and fast transactions, and down the line, we will use it to pay out our sellers without needing them to jump through Stripe's 10 different hoops.
## 🤔 Challenges we ran into
Our biggest challenge was synthesizing all of our individual features together into one cohesive project, with a compatible front end and back end. Building a project that relied on so many different technologies was also pretty difficult, especially with regard to AI-based features. For example, we built a downstream task where we had to both generate captions from images and use those outputs to create a recommendation algorithm.
## 😎 Accomplishments that we're proud of
We are proud of building several completely functional features for *Pocket Plots*. We're especially excited about our applications of AI, and how they make users' *Pocket Plots* experience more customizable and unique.
## 🧠 What we learned
We learned a lot about combining different technologies and fusing our diverse skillsets with each other.
We also learned a lot about using some of the hackathon's sponsor products, like Convex and OpenAI. ## 🔎 What's next for Pocket Plots We hope to expand *Pocket Plots* to have a real user base. We think our idea has real potential commercially. Supplemental AI features also provide a strong technological advantage.
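The caption-to-preference matching step can be sketched with sentence embeddings: embed the user's land description and every generated parcel caption, then rank parcels by cosine similarity. A minimal sketch, assuming the `sentence-transformers` package; the model choice, parcel IDs, and captions are illustrative:

```python
# Hedged sketch of the recommendation step: rank parcel captions by
# similarity to the user's free-text description of their ideal land.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

captions = {
    "parcel-001": "Flat grassy plot near a small creek with road access.",
    "parcel-002": "Steep forested hillside with mountain views.",
}

def recommend(user_description: str, top_k: int = 1) -> list[str]:
    query = model.encode(user_description, convert_to_tensor=True)
    docs = model.encode(list(captions.values()), convert_to_tensor=True)
    scores = util.cos_sim(query, docs)[0]
    ranked = sorted(zip(captions, scores), key=lambda pair: -pair[1])
    return [parcel_id for parcel_id, _ in ranked[:top_k]]

print(recommend("I want flat land near water"))  # ['parcel-001']
```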
## Inspiration
Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies while creating a product with several socially impactful use cases, including group insurance, financial literacy, and personal investment. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the many use cases associated with these technologies.
## What it does
PoolNVest allows users in shared communities to pool their funds and use our platform to easily invest in the different stocks and companies they are passionate about, with decreased, shared risk.
## How we built it
* A smart contract for the transfer of funds on the blockchain, written in Solidity.
* A robust backend and authentication system made using Node.js, Express.js, and MongoDB.
* An elegant frontend made with React Native and Celo's SDK.
## Challenges we ran into
We were unfamiliar with the tech stack used to create this project and with blockchain technology.
## What we learned
We learned many new languages and frameworks. This includes building cross-platform mobile apps in React Native, as well as the underlying principles of blockchain technology such as smart contracts and decentralized apps.
## What's next for *PoolNVest*
Expanding our API to select low-risk stocks and allowing the community to vote on where to invest the funds. Refining and improving the proof of concept into a marketable MVP, and tailoring the UI towards the specific use cases mentioned above.
## Inspiration
As new adults and near-graduates entering a challenging job market, we've faced the uncertainty of housing affordability and real estate decisions firsthand. The complexity of fluctuating markets—driven by location, economics, and global events—can make homeownership and investment feel out of reach. We realized this wasn’t just our struggle; millions are asking the same questions: "Where should I invest?" or "Can I afford to live here?" Our project was inspired by this shared challenge. Existing tools often lack accessibility or accuracy for the average person, so we set out to build a solution that simplifies real estate data. By using two decades of historical trends and future projections, we empower users with clear, actionable insights into the market, helping them make smarter decisions.
## What it does
Our project predicts real estate value for any city that the user inputs. It analyzes past real estate trends over the last two decades and generates real-time analytics and future projections for the coming years, all the way until 2035. The core features include:

* Quantitative Predictions: Our tool provides exact predicted median housing prices for each selected city, alongside the current percentage change in housing value. This quantitative insight gives users a concrete understanding of the financial landscape in their chosen areas, helping them compare options effectively.
* 3D Heatmap: A visually dynamic heatmap that overlays the U.S. map, highlighting cities expected to have the highest growth and property value increases. It gives users an at-a-glance understanding of where opportunities lie across the country.
* Trend Graphs: Users can view a detailed graph showing historical trends from the past and projected trends up until 2035. This gives users insight into how the market has evolved and where it’s likely to go, helping them make informed decisions about where and when to invest.
* Real-Time Analytics: By leveraging real-time data, users can receive up-to-date information about cities of their choice, allowing them to stay on top of rapidly changing market conditions.

Our submission to the "Open Source Data" track emphasizes the power of shared data to drive innovation and solve complex, real-world problems like affordable housing and investment planning. By utilizing open-source data, we’re not just predicting market trends—we're also advocating for a future where data accessibility helps people make better, informed decisions.
## How we built it
We used multiple open-source datasets from Hugging Face, mostly tracking Zillow data, as the main data source for our models. We utilized scikit-learn, NumPy, and PyTorch to train the AI prediction models and perform testing and evaluation. For the backend, we used Flask and SQLite to create API endpoints for the model, AI-powered suggestions, and the AI assistant. For the frontend, we used Next.js and Tailwind CSS. Lastly, we used AWS to host and deploy.
## Challenges we ran into
Because of the large number of API endpoints, we had challenges during the hosting and deployment process, getting all of them to run live. We worked on deployment for almost six hours, and we're so proud that it finally worked!
## Accomplishments that we're proud of
We were able to create a model that predicts real estate prices well, along with a very interactive user interface with helpful suggestions and assistant features. The visuals and analytics are easy to understand and show very detailed projections.
## What we learned
We learned a lot about training and fine-tuning a model pretty much from the ground up using open-source datasets, as well as a lot about hosting our own backend through AWS. Each member originally had pretty different strengths, and though we played to them, our collaboration allowed us to learn a lot of skills in areas we had less exposure to before.
## What's next for Realytics
A compare-two-cities function, so that users can compare the analytics for two different regions, as well as adding even more regions/cities to the map for users to explore!
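The projection idea can be sketched with scikit-learn: fit a per-city model on two decades of median prices, then extrapolate to 2035. A minimal sketch of the approach, not our production model; the synthetic price series here is a placeholder for the Zillow data:

```python
# Hedged sketch: fit a simple trend model on historical median prices
# and extrapolate to 2035. Placeholder data stands in for Zillow series.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

years = np.arange(2004, 2024).reshape(-1, 1)
prices = 200_000 + 8_000 * (years.ravel() - 2004) + np.random.randn(20) * 5_000

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(years, prices)

future = np.arange(2024, 2036).reshape(-1, 1)
for year, price in zip(future.ravel(), model.predict(future)):
    print(f"{year}: ${price:,.0f}")
```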
winning
## Inspiration
Our inspiration for this project was the theme of restoration. With the recent circumstances of COVID-19, small businesses have suffered tremendously. In an effort to restore families and economies back to normal, we created a web application in which small businesses can create postings for the projects or tasks they require.
## What it does
Small businesses can sign up as a business owner to create their business profile and create postings for what they are looking for. Similar to freelancing, students as well as other individuals looking for opportunities and experience can apply to these postings and possibly earn an incentive.
## How we built it
We used React for the client side of the application, Firebase for user authentication, MongoDB as the database, and Node.js (Express.js) for the backend.
## Challenges we ran into
Source code: due to the amount of school work we have, we couldn't complete the entire project, so the following is the client side of Gigs. We are planning to complete this project and deploy it in the near future. GitHub repo -> [link](https://github.com/JadKharboutly/Gigs)
## Accomplishments that we're proud of
We are proud of being able to come up with and work through multiple aspects of the project alongside our additional school work and assignments. We are satisfied with the simple yet effective design we were able to construct, as it aligns well with the theme of restoration and is useful to the many stakeholders involved.
## What we learned
While looking into possible solutions, we learned that there is a lack of recognition of the challenges and hardships small businesses have faced as a result of COVID-19. Bringing awareness to these issues will help people recognize the status of smaller businesses and identify ways they can support them.
## What's next for Gigs
Next steps are to complete the rest of the client side of Gigs, and then the backend.
## Inspiration
Memes are becoming increasingly popular among teenagers and young adults, providing a critical source of sentiment information for age groups that are otherwise less expressive of their thoughts. We seek to analyse and provide useful business insights from popular memes that reach thousands of people.
## What it does
Satirate systematically analyses new memes for sentiment scores and the topics being discussed in them, to provide valuable feedback to the companies and organisations involved in those discussions.
## How we built it
Our project is supported by Google Cloud APIs: the Vision, Natural Language, and AutoML APIs for sentiment analysis, OCR extraction of text from images, and training supervised ML models to determine baseline sentiment scores for custom meme formats. Our backend is supported by SQLite3, and our frontend is written in Plotly Dash, an analytical dashboard framework in Python.
## Challenges we ran into
Our largest hurdle was collaboration. Coming from an academic background, we had learnt how to code in a vacuum. But YHack showed us how important integration is in the real world. We learnt to seek help from mentors and to work together to produce a product that we are proud of.
## Accomplishments that we're proud of
We were able to successfully produce a real-world analysis tool that uses supervised machine learning and computer vision to reveal critical insights about consumer sentiment.
## What we learned
We have learnt to be flexible in terms of the tools we use. Learning to use Google APIs showed us a new set of great tools that we can use in our future projects. Always communicate with other competitors to exchange ideas and skills.
## What's next for YaleHack
We will be back next year!
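The core meme pipeline can be sketched in a single function: OCR the meme text with the Vision API, then score it with the Natural Language API. A minimal sketch, assuming Google Cloud credentials are configured; the AutoML baseline-scoring step is omitted:

```python
# Hedged sketch of the meme pipeline: Vision OCR -> sentiment score.
from google.cloud import vision
from google.cloud import language_v1

def analyze_meme(image_bytes: bytes) -> float:
    # Extract any text the meme contains.
    ocr = vision.ImageAnnotatorClient()
    response = ocr.text_detection(image=vision.Image(content=image_bytes))
    text = response.text_annotations[0].description if response.text_annotations else ""

    # Score the extracted text's overall sentiment.
    nlp = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = nlp.analyze_sentiment(request={"document": document})
    return sentiment.document_sentiment.score  # -1 (negative) .. 1 (positive)
```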
## Inspiration Fannie Mae, the largest provider of financing for mortgage lenders, faces a daunting task of micromanaging nearly every house in the country. One problem they have is the need to maintain a house property to certain standards when it is foreclosed so that its value does not depreciate significantly; a house that is not inhabited constantly needs maintenance to make sure the lawn is cut, A/C and heating are working, utilities are intact, etc. ## What it does Our team built a mobile app and web app that help simplify the task of maintaining houses. The mobile app is used by house inspectors, who would use the app to take pictures of and write descriptions for various parts of the house. For example, if an inspector discovered that the bathtub was leaking on the second floor, he would take a picture of the scene and write a brief description of the problem. The app would then take the picture and description and load them into a database, which can be accessed later by both the mobile and web apps. On the web side, pictures and descriptions for each part of the house can be accessed. Furthermore, the web app features an interface that displays the repair status for each section of the house - whether it needs repair, is currently being repaired, or is in good condition; users can make repair requests on the website. ## How I built it We used an Angular framework to construct the web app, with the Firebase API to upload and download images and information, and a bit of Bootstrap to enhance aesthetics. For the mobile side, we used Swift to build the iOS app and Firebase to upload and download images and information. ## Challenges I ran into Since this was the first time any of us had used Firebase Storage, learning the API and even getting basic functions to work was difficult. In addition, making sure the right information was being uploaded, and in turn, the correct information downloaded and parsed, was also difficult, since we were not familiar with Firebase. We also ran into a lot of JavaScript issues, not only because it was our first time using Angular, but also because we were not familiar with many aspects of JavaScript, such as scope and closure issues, as well as asynchronous and synchronous calls. ## Accomplishments that I'm proud of We are happy that we were able to accomplish our original goal of providing a mobile and web app that work together to provide information about various parts of the house, and give companies like Fannie Mae the ability to micromanage a large number of houses in a simple and compact way. ## What I learned The team members that worked on the mobile app learned a great deal about formatting data and uploading and downloading files on Firebase. The team members that worked on the web app increased their proficiency in JavaScript and Angular. Everybody learned a good amount about the side of Firebase (mobile or web) that they had to work in. ## What's next for House Maed As Fannie Mae is looking to eventually deploy an app like House Maed in the future to make their management of house properties more efficient, we hope our app provides inspiration and a guide for how such an app can be developed.
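As a rough sketch of the inspector-side upload flow, here is the same idea in Python with the Firebase Admin SDK (the actual apps used Swift and Angular); the bucket, database URL, and field names are all assumptions:

```python
# Hypothetical House Maed upload: photo to Firebase Storage, description
# and status to the Realtime Database. All names here are placeholders.
import firebase_admin
from firebase_admin import credentials, db, storage

cred = firebase_admin.credentials.Certificate("serviceAccount.json")  # assumed key file
firebase_admin.initialize_app(cred, {
    "storageBucket": "house-maed.appspot.com",           # assumed bucket
    "databaseURL": "https://house-maed.firebaseio.com",  # assumed database
})

def report_issue(photo_path: str, room: str, description: str) -> None:
    # Upload the inspector's photo to Firebase Storage...
    blob = storage.bucket().blob(f"inspections/{room}/{photo_path}")
    blob.upload_from_filename(photo_path)
    # ...then record the description and a repair status in the database.
    db.reference(f"houses/demo-house/{room}").push({
        "description": description,
        "photo": blob.name,
        "status": "needs repair",
    })

report_issue("bathtub.jpg", "second-floor-bath", "Bathtub leaking at the drain")
```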
losing
## Inspiration We are inspired by... ## What it does Doc Assist is a physician tool that helps automate and expedite the patient diagnosis process. The Doc Assist MVP specifically takes information about a user's symptoms and imaging records, such as chest X-ray images, to predict the chance of pneumonia. ## How we built it The front end was made with React and MUI to create the checkboxes, text, and image-uploading fields. Python was used for the backend: for the visualization, preprocessing, data augmentation, and training of the CNN model that predicted the chance of pneumonia based on chest X-ray images. Flask was used to connect the front end to the backend. ## Challenges we ran into * Data collection * Integration of front end components with backend ## Accomplishments that we're proud of * Getting an MVP completed ## What's next for DocAssist * Include the capability to diagnose other diseases * Add the ability to insert personal data * Create more accurate predictions with classification models trained on personal data and symptoms, and add more CNN models to predict diagnoses for types of imaging other than X-rays
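A minimal sketch of the Flask bridge between the React front end and the pneumonia CNN; the saved-model filename, input size, and route are our assumptions, not the project's actual code:

```python
# Sketch: serve the trained CNN behind a Flask endpoint the React app can POST to.
import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("pneumonia_cnn.h5")  # assumed saved model

@app.route("/predict", methods=["POST"])
def predict():
    # The front end posts the uploaded chest X-ray as multipart form data.
    xray = Image.open(request.files["image"].stream).convert("L").resize((150, 150))
    batch = np.asarray(xray, dtype="float32")[None, :, :, None] / 255.0
    probability = float(model.predict(batch)[0][0])  # sigmoid output in [0, 1]
    return jsonify({"pneumonia_probability": probability})

if __name__ == "__main__":
    app.run(port=5000)
```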
## Inspiration In this modern technological age, so many things are created with the sole purpose of monetization. Ads are plastered everywhere, truths may be altered, and privacy may be breached by companies in an attempt to make an extra buck. We hoped to create something that would keep at least one of the most important things in people's lives - their health - accurate and private. We were also inspired by our own busy lives and how often we may forget things. Together, these two driving forces culminated in our creation of MedAssist. ## What it does MedAssist aids people's understanding of their prescriptions by providing information on their prescriptions' manufacturers as well as active ingredients, as a starting point for future research. Using the Drug Product Database API created by an extremely trustworthy source, the Government of Canada, users can see what active ingredients are present in their drugs/medications. MedAssist also comes with a built-in reminder system to remind people to take their prescriptions, as well as audio message recording so that people can take notes about their medications for future reference. ## How we built it We used different languages such as HTML, CSS, and JavaScript, all of which were lightly bundled in Bootstrap. JavaScript, specifically fetch, came into effect when implementing some of our more in-depth features, such as accessing the API to find details for medication, voice recording, and the reminder system. ## Challenges There were many challenges we faced as a team. The most frequently experienced one was coding. Previously, we were mostly versed in Python, and so we were forced to adapt and learn other languages/markup languages extremely quickly. One of our biggest hurdles may have been the prominence of JavaScript Promises within our project. We hadn't had much experience with asynchronous functions, and even with mentors' help, we still struggled. But these challenges made our accomplishments all the more satisfying. ## Accomplishments We are proud to have tried our best competing at Deltahacks 8. As a team composed of first-year university students and a grade-11 high school student, we were intimidated by the amount of work our idea posed. But despite this, we chose to continue and give it our best shot. Apart from this, another thing we are proud of is figuring out the code that helped us get what we wanted from MedAssist. Figuring out the code and understanding APIs was the most stressful part of our journey, but in the end, we managed to make it all work. Overall, our project did not ride on the cutting edge of technology, but the sheer amount of skills and information that we learned in such a short period of time made each member of the group feel extremely accomplished. ## What you learned We learned to work as a team and converse over ideas that we believed would be key features of MedAssist. We did this by analyzing each feature on an importance scale through brainstorming and comparing. We asked each other, “What would you as a consumer want from MedAssist?”. This was a perfect starting point for us as a team to get a general idea of what a typical consumer would want. We learned to work together and make choices together. On a more technical side, as we mentioned previously, we learned lots about other languages, especially JavaScript. We were also forced to learn the concept of asynchronous functions. But most of all, we developed our problem-solving skills even further.
Originally, we had struggled to use the API through JavaScript, and had used it through Python instead. As a result, we learned about technologies that we didn't even ultimately use, such as Flask and PHP. ## What’s next for MedAssist MedAssist won’t be forgotten; we will continue to work on this project after Deltahacks and figure out even more ways this website can help people. We hope to further develop both the front end and back end (the front end through React and the back end through Node.js). We had many other plans for our project that did not come to fruition due to time constraints, which can be seen in the specks of code and files floating around our GitHub repo, so we will definitely be going back and exploring those. Finally, of course, what's the point of a website aimed at helping others if it's not going to be hosted? Using our .tech domain (we already purchased medt.tech), we will be hosting MedAssist on the internet for anybody to use in the future.
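For a flavour of the lookup MedAssist performs, here is a Python sketch against the public Drug Product Database API (the site itself uses JavaScript fetch); the endpoint paths and field names below are our reading of the public API documentation and may need adjusting:

```python
# Sketch: look up a brand name in Canada's Drug Product Database, then fetch
# its active ingredients. Endpoints/fields are assumptions, verify before use.
import requests

BASE = "https://health-products.canada.ca/api/drug"  # assumed base URL

def lookup_drug(brand_name: str):
    products = requests.get(
        f"{BASE}/drugproduct/", params={"brandname": brand_name}, timeout=10
    ).json()
    for product in products:
        code = product.get("drug_code")          # assumed field name
        ingredients = requests.get(
            f"{BASE}/activeingredient/", params={"id": code}, timeout=10
        ).json()
        yield product.get("company_name"), [
            item.get("ingredient_name") for item in ingredients
        ]

for company, ingredients in lookup_drug("tylenol"):
    print(company, ingredients)
```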
## Inspiration In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs, adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core. ## What it does Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. The process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification. ## Challenges we ran into At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it had already been done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up ML concepts and databasing at an accelerated pace. We were challenged as students, as upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon. ## Accomplishments that we're proud of We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely passed each user a unique and near-impossible-to-reverse hash to allow them to check the status of their evaluation. ## What we learned We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry.
This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user-friendly interface. ## What's next for InfantXpert We're looking to develop mobile applications for iOS and Android. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
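A toy sketch of the two pieces InfantXpert describes, as we read them: a probability-outputting model (sketched here with logistic regression, since a probability output suggests logistic rather than ordinary linear regression) and the SHA-256 status token. The feature names and training rows are made up for illustration:

```python
# Sketch: probability model + hashed status token. All data is toy data.
import hashlib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training rows: [temperature_C, hours_since_meal, fluid_ml]
X = np.array([[36.8, 2, 120], [39.5, 6, 30], [37.0, 3, 100], [40.1, 8, 10]])
y = np.array([0, 1, 0, 1])  # 1 = needed medical attention

model = LogisticRegression().fit(X, y)
needs_attention = model.predict_proba([[39.2, 5, 40]])[0][1]
print(f"probability of needing attention: {needs_attention:.2f}")

# Each user gets a hard-to-reverse token derived from their own data,
# used to look up the evaluation status securely.
def status_token(user_id: str, infant_name: str, salt: str) -> str:
    return hashlib.sha256(f"{user_id}:{infant_name}:{salt}".encode()).hexdigest()

print(status_token("parent42", "June", "s3cret-salt"))
```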
losing
## What it does KokoRawr at its core is a Slack app that facilitates new types of interactions via chaotic cooperative gaming through text. Every user is placed on a team based on their Slack username and tries to increase their team's score by playing games such as Tic Tac Toe, Connect 4, Battleship, and Rock Paper Scissors. Teams must work together to play. However, a "Twitch Plays Pokemon" sort of environment can easily be created where multiple people are trying to execute commands at the same time and stepping on each other's toes. Additionally, people can visualize the games via a web app. ## How we built it We jumped off the deep end into the land of microservices. We made liberal use of StdLib with Node.js to deploy a service for every feature in the app, amounting to 10 different services. The StdLib services all talk to each other and to Slack. We also have a visualization of the game boards that is hosted as a Flask server on Heroku, which talks to the microservices to get information. ## Challenges we ran into * not getting our Slack app banned by HackPrinceton * having tokens show up correctly on the canvas * dealing with all of the madness of callbacks * global variables causing bad things to happen ## Accomplishments that we're proud of * actually playing games chaotically with each other on Slack * having actions automatically show up on the web app * The fact that we have **10 microservices** ## What we learned * the StdLib way of microservices * Slack integration * HTML5 canvas * how to have more fun with each other ## Possible Use Cases * Friendly competitive way for teams at companies to get to know each other better and learn to work together * New form of concurrent game playing for friend groups with "unlimited scalability" ## What's next for KokoRawr We want to add more games to play and expand the variety of visualizations to include more games. Some service restructuring would need to be done to reduce the Slack latency. Also, game state would need to be more persistent for the services.
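To give a feel for one of those game services, here is a minimal stand-in written as a Python/Flask route rather than a StdLib Node.js service (the command format and board handling are illustrative, not KokoRawr's actual code):

```python
# Sketch: a Slack slash-command handler for a shared tic-tac-toe board.
# Slack posts slash-command payloads as form data (channel_id, text, ...).
from flask import Flask, jsonify, request

app = Flask(__name__)
boards = {}  # channel_id -> 3x3 board (in-memory, hence the persistence problem!)

@app.route("/slack/tictactoe", methods=["POST"])
def tictactoe():
    channel = request.form["channel_id"]
    row, col = (int(n) for n in request.form["text"].split())  # e.g. "/ttt 1 2"
    board = boards.setdefault(channel, [[" "] * 3 for _ in range(3)])
    if board[row][col] != " ":
        return jsonify(response_type="ephemeral", text="That square is taken!")
    board[row][col] = "X"
    rendered = "\n".join("|".join(r) for r in board)
    # "in_channel" makes the move visible to everyone, inviting the chaos.
    return jsonify(response_type="in_channel", text=f"```{rendered}```")

if __name__ == "__main__":
    app.run(port=3000)
```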
## Inspiration The inspiration for our project came from three of our members being involved with Smash in their communities. With one of us being an avid competitor, one an avid watcher, and one working in an office where Smash is played quite frequently, we agreed that the way Smash Bros. games are matched and organized needed to be leveled up. We hope that this becomes a frequently used bot for big and small organizations alike. ## How it Works We broke the project up into three components: the front end made using React, the back end made using Golang, and a middle part connecting the back end to Slack by using StdLib. ## Challenges We Ran Into A big challenge we ran into was understanding how exactly to create a bot using StdLib. There were many nuances that had to be accounted for. However, we were helped by amazing mentors from StdLib's booth. Our first specific challenge was getting messages to be ephemeral for the user that called the function. Another difficulty was getting DMs to work using our custom bot. Finally, we struggled to get the input from the buttons and commands in Slack to the back-end server. However, it was fairly simple to connect the front end to the back end. ## The Future for 'For Glory' Due to the time constraints and difficulty, we did not get to implement a tournament function. This is a future goal, because it would allow workspaces and other organizations that use a Slack channel to run a casual tournament that would keep the environment light-hearted, competitive, and fun. Our tournament function could also extend to help hold local competitive tournaments within universities. We also want to extend the range of rankings to have different types of rankings in the future. One thing we want to integrate in the future for the front end is a more interactive display for matches and tournaments, with live updates and useful statistics.
[Check out the GitHub repo.](https://github.com/victorzshi/better) ## Inspiration In our friend group, we often like to bet against each other on goals for motivation and fun. We realized that in many situations, exposing personal milestones to a group of friends can provide great social encouragement and strengthen bonds within a community. ## What it does Our app provides a Slack bot and web-app interface for co-workers or friends to share and get involved with each other's goals. ## How we built it We built the core of our app on the Standard Library platform, which allowed us to quickly develop our serverless Slack bot implementation. ## Challenges we ran into Getting to understand Standard Library and its unique features was definitely the steepest part of the learning curve this weekend. We had to make changes to play to certain weaknesses and strengths of the platform. ## Accomplishments that we're proud of We are proud that we were able to finish a functioning Slack bot, as well as present a pleasing website interface. ## What we learned We learned how to better take different types of user input/interactions into account when designing an application, as it was the first time most of our team had developed a bot. ## What's next for Better There are definitely many directions we could go with Better in the future. As a Slack bot, this prototype acts as an entry point with an HR-focused application. However, that is just the beginning. We could eventually spin off Better into a standalone app, or integrate robust and convenient payment solutions (such as options to donate to charity or other places). We could also build this idea into a sustainable business, with percentage cuts of the money pool in mind.
winning
## Inspiration 💡 *An address is a person's identity.* In California, there are over 1.2 million vacant homes, yet more than 150,000 people (the homeless population in California, 2019) don't have access to a stable address. Without an address, people lose access to government benefits (welfare, food stamps), healthcare, banks, jobs, and more. As the housing crisis continues to escalate and worsen throughout COVID-19, the lack of an address significantly reduces the support available to escape homelessness. ## This is Paper Homes: Connecting you with spaces so you can go places. 📃🏠 Paper Homes is a web application designed for individuals experiencing homelessness to get matched with an address donated by a property owner. **Part 1: Donating an address** Housing associations, real estate companies, and private donors will be our main sources of address donations. As a donor, you can sign up to donate addresses either manually or via CSV, and later view the addresses you donated and the individuals matched with them in a dashboard. **Part 2: Receiving an address** To mitigate security concerns and provide more accessible resources, Paper Homes will be partnering with California homeless shelters under the “Paper Homes” program. We will communicate with shelter staff to help facilitate the matching process and ensure operations run smoothly. When signing up, a homeless individual can provide ID; however, if they don’t have any form of ID, we facilitate the entire process of getting them one, with pre-filled application forms. Afterwards, they immediately get matched with a donated address! They can then access a dashboard with any documents (i.e., applying for a birth certificate, SSN, or California ID card, and registering their address with the government - all of which are free in California). During onboarding they can also set up mail forwarding ($1/year, funded by NPO grants and donations) to the homeless shelter they are associated with. Note: We are solely providing addresses for people, not a place to live. Addresses will expire in 6 months to ensure our database is up to date with in-use addresses as well as mail forwarding; however, people can choose to renew their addresses every 6 months as needed. ## How we built it 🧰 **Backend** We built the backend in Node.js and utilized Express to connect to our Firestore database. The routes were written with the Express.js framework. We used Selenium and PDF-editing packages to allow users to download any filled-out PDF forms. Selenium was used to apply for documents on behalf of the users. **Frontend** We built a Node.js webpage to demo our Paper Homes platform, using React.js, HTML, and CSS. The platform is made up of 2 main parts: the donor’s side and the recipient’s side. The front end includes a login/signup flow that populates and updates our Firestore database. Each side has its own dashboard. The donor side allows the user to add properties to donate and manage their properties (i.e., mark if a property is no longer vacant, see if the address is in use, etc.). The recipient’s side shows the address provided to the user, steps to get any missing IDs, etc. ## Challenges we ran into 😤 There were a lot of non-technical challenges we ran into. Getting all the correct information into the website was challenging, as the information we needed was spread out across the internet. In addition, it was the group’s first time using Firebase, so we had some struggles getting that all set up and running.
Also, some of our group members were relatively new to React, so it was a learning curve to understand the workflow, routing, and front-end design. ## Accomplishments & what we learned 🏆 In just one weekend, we got a functional prototype of what the platform would look like. We have functional user flows for both donors and recipients that are fleshed out with good UI. The team learned a great deal about building web applications, along with using Firebase and React! ## What's next for Paper Homes 💭 Since our prototype is geared towards residents of California, the next step is to expand to other states! As each state has its own laws for how it deals with handing out ID and government benefits, there is still a lot of work ahead for Paper Homes! ## Ethics ⚖ In California alone, there are over 150,000 people experiencing homelessness. These people will find it significantly harder to find employment, receive government benefits, or even vote without proper identification. The biggest hurdle is that many of these services are linked to an address, and since they do not have a permanent address where they can receive mail, they are locked out of these essential services. We believe that it is ethically wrong for us as a society not to act against the hole that US government systems have put in place, which makes it almost impossible to escape homelessness. And this is not a small problem. An address is no longer just a location - it's now a de facto means of identification. If a person becomes homeless, they are cut off from the basic services they need to recover. People experiencing homelessness also encounter other difficulties. Getting your first piece of ID is notoriously hard, because most IDs require an existing form of ID. In California, there are new laws to help with this problem, but they are new and not widely known. While these laws do reduce the barriers to getting an ID, without knowing the processes, having the right forms, and getting the right signatures from the right people, it can take over 2 years to get an ID. Paper Homes attempts to solve these problems by providing a method for people to obtain essential pieces of ID, along with allowing people to receive a proxy address to use. As of the 2018 census, there are 1.2 million vacant houses in California. Our platform allows donors with vacant properties to let people experiencing homelessness put down their address to receive government benefits and other necessities that we take for granted. With the donated address, we set up mail forwarding with USPS to forward mail from the donated address to a homeless shelter near them. With proper identification and a permanent address, people experiencing homelessness can now vote, apply for government benefits, and apply for jobs, greatly increasing their chance of finding stability and recovering from this period of instability. Paper Homes unlocks access to the services needed to recover from homelessness. They will be able to open a bank account, receive mail, see a doctor, use libraries, get benefits, and apply for jobs. However, we recognize the need to protect a person’s data and acknowledge that the use of an online platform makes this difficult. Additionally, while over 80% of people experiencing homelessness have access to a smartphone, access to this platform is still somewhat limited. Nevertheless, we believe that a free and highly effective platform could bring a large amount of benefit.
So long as we prioritize the needs of people experiencing homelessness first, we will be able to greatly help them rather than harm them. There are some ethical considerations that still need to be explored: We must ensure that each user’s information security and confidentiality are of the highest importance. Given that we will be storing sensitive and confidential information about the user’s identity, this is top of mind. Without it, the benefit that our platform provides is offset by the damage to their security. Therefore, we will be keeping user data 100% confidential when receiving and storing it, by using hashing techniques, encryption, etc. Secondly, as mentioned previously, while this will unlock access to services needed to recover from homelessness, there are some segments of the overall population that will not be able to access these services due to limited access to the internet. While we have currently focused the product on California, US, where access to the internet is relatively high (80% of people facing homelessness have access to a smartphone, and free wifi is common), there are other states and countries where access is limited. In addition to the ideas mentioned above, some next steps would be to design a proper user and donor consent form and agreement that both supports users’ rights and removes any concern about the confidentiality of the data. Our goal is to provide a means for people facing homelessness to receive the resources they need to recover, and thus we should be as transparent as possible. ## Sources [1](https://www.cnet.com/news/homeless-not-phoneless-askizzy-app-saving-societys-forgotten-smartphone-tech-users/#:%7E:text=%22Ninety%2Dfive%20percent%20of%20people,have%20smartphones%2C%22%20said%20Spriggs) [2](https://calmatters.org/explainers/californias-homelessness-crisis-explained/) [3](https://calmatters.org/housing/2020/03/vacancy-fines-california-housing-crisis-homeless/)
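As a hypothetical sketch of the matching step at the heart of Paper Homes, here is the idea in Python with the Firebase Admin SDK (the real backend is Node/Express); the collection and field names are assumptions:

```python
# Sketch: claim one unclaimed donated address for a recipient in Firestore.
# Collection/field names ("addresses", "claimed", etc.) are placeholders.
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.Certificate("serviceAccount.json"))
db = firestore.client()

def match_address(recipient_id: str):
    # Grab one donated, unclaimed address and assign it for six months.
    unclaimed = db.collection("addresses").where("claimed", "==", False).limit(1).get()
    if not unclaimed:
        return None  # no donated addresses left
    doc = unclaimed[0]
    doc.reference.update({
        "claimed": True,
        "claimed_by": recipient_id,
        "expires_in_days": 180,  # addresses expire after 6 months
    })
    return doc.get("street_address")

print(match_address("recipient-0042"))
```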
## Inspiration There are 1.1 billion people without an official identity (ID). Without this proof of identity, they can't get access to basic financial and medical services, and they often face many human rights offences due to the lack of accountability. The concept of a digital identity is extremely powerful. In Estonia, for example, everyone has a digital identity - a solution developed in tight cooperation between public- and private-sector organizations. Digital identities are also the foundation of our future, enabling: * P2P Lending * Fractional Home Ownership * Selling Energy Back to the Grid * Fan Sharing Revenue * Monetizing data * bringing the unbanked, banked. ## What it does Our project starts by getting the user to take a photo of themselves. Through the use of Node.js and AWS Rekognition, we do facial recognition in order to allow the user to log in or create their own digital identity. Through the use of both S3 and Firebase, that information is passed to both our dashboard and our blockchain network! It is stored on the Ethereum blockchain, enabling one source of truth that neither corrupt governments nor hackers can edit. From there, users can get access to a bank account. ## How we built it Front End: HTML | CSS | JS. APIs: AWS Rekognition | AWS S3 | Firebase. Back End: Node.js | mvn. Crypto: Ethereum. ## Challenges we ran into Connecting the front end to the back end!!!! We had many different databases and components. As well, there are a lot of access issues for APIs, which makes it incredibly hard to do things on the client side. ## Accomplishments that we're proud of Building an application that can better the lives of people!! ## What we learned Blockchain, facial verification using AWS, databases. ## What's next for CredID Expand on our idea.
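A minimal sketch of the face-verification step with boto3 (the hack itself used Node.js); the S3 bucket and key are placeholders:

```python
# Sketch: compare a fresh selfie against the registered photo in S3 using
# Amazon Rekognition. Bucket/key names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def verify_face(selfie_path: str, registered_key: str) -> bool:
    with open(selfie_path, "rb") as f:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": f.read()},
            TargetImage={"S3Object": {"Bucket": "credid-users", "Name": registered_key}},
            SimilarityThreshold=90,
        )
    # Any match above the threshold counts as a successful login.
    return len(response["FaceMatches"]) > 0

print(verify_face("selfie.jpg", "users/alice/registered.jpg"))
```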
# Inspiration I recently got attached to Beat Saber, so I thought it'd be fun to build something similar to it. # Objective The objective of the game is to score higher than your opponent. Points are scored if a player triggers a hitbox when a note is in contact with it. **Points Chart:** **Green:** Perfect hit! The hitbox was triggered when a note was in full contact: **full points + combo bonus** **Yellow:** The hitbox was triggered when a note was in partial contact: **partial points** **Red:** The hitbox was triggered when a note was not in contact: **no points** **Combo (Bonus Points):** Combos are achieved when a hitbox is triggered **Green** more than once in a row. Combos add a great amount of bonus to your score and progressively increase in value as the pace of the notes progresses. # Controls & Info **HitBox:** The blue circles at the bottom of each player's half of the screen **Notes:** The orange circles that fall from the top of the screen down to the hitboxes **Player 1 (Left Side):** Key "A": Triggers the left hitbox Key "S": Triggers the center hitbox Key "D": Triggers the right hitbox **Player 2 (Right Side):** Key "J": Triggers the left hitbox Key "K": Triggers the center hitbox Key "L": Triggers the right hitbox # What's next for Rhythm Flow 1. Support for tablets. The game is very much playable on the computer, but its mechanics can also be ported to tablets where the touch screen is large enough for the controls. 2. More game modes. Currently, there is only one game mode, where two people compete directly against each other. I have ideas for other game modes where, instead of competing, two players would have to collaborate to beat the round.
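A small Python sketch of the scoring rule from the points chart above; the distance thresholds and point values are made-up tuning numbers, not the game's actual constants:

```python
# Sketch: judge a triggered hitbox against the nearest note's position.
def judge_hit(note_y: float, hitbox_y: float, radius: float, combo: int):
    """Return (colour, points) for a triggered hitbox given the note position."""
    distance = abs(note_y - hitbox_y)
    if distance <= radius * 0.25:          # note in full contact
        return "green", 100 + 10 * combo   # full points + combo bonus
    if distance <= radius:                 # note in partial contact
        return "yellow", 50                # partial points (combo resets)
    return "red", 0                        # no contact, no points

print(judge_hit(note_y=392, hitbox_y=400, radius=40, combo=3))  # ('green', 130)
```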
winning
## Inspiration Firstly, as a team, we were inspired to spend this weekend making something for the betterment of society. Each one of the team members feels tech can do a lot more in helping people solve problems at scale and impact a large number of people. Climate change is one such problem, and we are inspired to build products mitigating its effects through new technologies. ## What it does Wildfire Protect is a parametric wildfire insurance product that provides instant payouts to the insured in case of a wildfire damaging their property. The other significant feature of Wildfire Protect is helping potential householders make an informed decision when purchasing a house by looking at areas that are vulnerable to wildfires. ## How we built it We built the backend insurance product on an Ethereum-based blockchain, making use of a library called Etherisc, which uses Node.js and Solidity. 'The Oracle' - the part of the product that evaluates the trigger criteria - was built using real-time imagery from the MODIS, Landsat, and Sentinel satellites. The database used for storing customer information was MySQL, and the front end was created using React and Express.js. ## Challenges we ran into Lack of real-time satellite data in multiple spectral bands and at the desired granularity. Images with thick cloud cover were rendered useless, as the geographic damage could not be assessed. We also faced the challenges of making a blockchain app, especially connecting the backend to the client side. Dealing with the challenges in satellite data involved pre-processing the obtained data to combine the RGB+NIR bands and localizing to the building in question. ## Accomplishments that we're proud of First and foremost, we are proud of making a hack around social good and with the intention of helping people face the challenges of climate change. Technology-wise, it was the first time we worked with blockchain and satellite data. ## What we learned How to work with blockchain technology and satellite data, and domain knowledge of the insurance product. ## What's next for Wildfire Protect Trying to make this app into a full-fledged product! We will be sharing updates soon.
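A toy Python sketch of an oracle-style trigger check over the RGB+NIR bands: compare NDVI (computed from the red and NIR bands) before and after an event, and flag the insured parcel as burned if vegetation collapses. The drop threshold, and the use of NDVI rather than whatever index the hack actually used, are our assumptions:

```python
# Sketch: NDVI-drop trigger for a parametric wildfire payout.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def wildfire_trigger(before: dict, after: dict, drop_threshold: float = 0.3) -> bool:
    """before/after map band name -> reflectance array for the insured parcel."""
    drop = ndvi(before["red"], before["nir"]) - ndvi(after["red"], after["nir"])
    # Trigger the payout if the mean NDVI drop over the parcel is severe.
    return float(drop.mean()) > drop_threshold

rng = np.random.default_rng(0)
healthy = {"red": rng.uniform(0.05, 0.15, (64, 64)), "nir": rng.uniform(0.4, 0.6, (64, 64))}
burned = {"red": rng.uniform(0.2, 0.3, (64, 64)), "nir": rng.uniform(0.2, 0.3, (64, 64))}
print(wildfire_trigger(healthy, burned))  # True -> instant payout
```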
# butternut ## `buh·tr·nuht` -- `bot or not?` Is what you're reading online written by a human, or AI? Do the facts hold up? `butternut` is a chrome extension that leverages state-of-the-art text generation models *to combat* state-of-the-art text generation. ## Inspiration Misinformation spreads like wildfire these days, and it is only aggravated by AI-generated text and articles. We wanted to help fight back. ## What it does Butternut is a chrome extension that analyzes text to determine just how likely it is that a given article is AI-generated. ## How to install 1. Clone this repository. 2. Open your Chrome Extensions 3. Drag the `src` folder into the extensions page. ## Usage 1. Open a webpage or a news article you are interested in. 2. Select a piece of text you are interested in. 3. Navigate to the Butternut extension and click on it. 3.1 The text should be auto-copied into the input area. (You could also manually copy and paste text there.) 3.2 Click on "Analyze". 4. After a brief delay, the result will show up. 5. Click on "More Details" for further analysis and a breakdown of the text. 6. "Search More Articles" will do a quick Google search of the pasted text. ## How it works Butternut is built off the GLTR paper <https://arxiv.org/abs/1906.04043>. It takes any text input and then finds out what a text generation model *would've* predicted at each word/token. This array of every single possible prediction and its related probability is cross-referenced with the input text to determine the 'rank' of each token in the text: where on the list of possible predictions was the token in the text. Text whose tokens consistently rank near the top of the prediction list is more likely to be AI-generated, because current AI text generation models all work by selecting words/tokens that have the highest probability given the words before them. On the other hand, human-written text tends to have more variety. Here are some screenshots of butternut in action with some different texts. Green highlighting means predictable, while yellow and red mean unlikely and more unlikely, respectively. Example of human-generated text: ![human_image](https://cdn.discordapp.com/attachments/795154570442833931/797931974064865300/unknown.png) Example of GPT text: ![gpt_text](https://cdn.discordapp.com/attachments/795154570442833931/797931307958534185/unknown.png) This was all wrapped up in a simple Flask API for use in a chrome extension. For more details on how GLTR works, please check out their paper. It's a good read. <https://arxiv.org/abs/1906.04043> ## Tech Stack Choices Two backends are defined in the [butternut backend repo](https://github.com/btrnt/butternut_backend). The Salesforce CTRL model is used for butternut. 1. GPT-2: GPT-2 is a well-known general-purpose text generation model and is included in the GLTR team's [demo repo](https://github.com/HendrikStrobelt/detecting-fake-text) 2. Salesforce CTRL: [Salesforce CTRL](https://github.com/salesforce/ctrl) (1.6 billion parameters) is bigger than all GPT-2 variants (117 million - 1.5 billion parameters) and is purpose-built for data generation. A custom backend with CTRL was selected for this project because CTRL is trained on an especially large dataset, meaning that it has a larger knowledge base to draw from to discriminate between AI- and human-written texts. This, combined with its greater complexity, enables butternut to stay a step ahead of AI text generators.
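A condensed Python sketch of the GLTR-style rank computation, here with Hugging Face's GPT-2 rather than the CTRL backend butternut deploys: for each token, find where it ranked among the model's predictions given the preceding context.

```python
# Sketch: per-token rank under GPT-2 (0 = the model's top prediction).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_ranks(text: str):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        # Sort the whole vocabulary by predicted probability at this position...
        order = logits[0, pos].argsort(descending=True)
        # ...and find where the token that actually came next landed.
        actual = ids[0, pos + 1]
        ranks.append(int((order == actual).nonzero()[0]))
    return ranks

# Consistently tiny ranks (lots of "green") suggest machine-generated text.
print(token_ranks("The quick brown fox jumps over the lazy dog."))
```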
## Design Decisions * Used approachable soft colours to create a warm approach towards news and data * Used a colour legend to assist users in interpreting the highlighted language ## Challenges we ran into * Deciding how to best represent the data * How to design a good interface that *invites* people to fact check instead of being scared of it * How to best calculate the overall score given a tricky rank distribution ## Accomplishments that we're proud of * Making stuff accessible: implementing a paper in such a way as to make it useful **in under 24 hours!** ## What we learned * Using CTRL * How simple it is to make an API with Flask * How to make a chrome extension * Lots about NLP! ## What's next? Butternut may be extended to improve on its fact-checking abilities: * Text sentiment analysis for fact checking * Updated backends with more powerful text prediction models * Perspective analysis & showing other perspectives on the same topic Made with care by: ![Group photo](https://cdn.discordapp.com/attachments/795154570442833931/797730842234978324/unknown.png) ``` // our team: { 'group_member_0': [brian chen](https://github.com/ihasdapie), 'group_member_1': [trung bui](https://github.com/imqt), 'group_member_2': [vivian wi](https://github.com/vvnwu), 'group_member_3': [hans sy](https://github.com/hanssy130) } ``` Github links: [butternut frontend](https://github.com/btrnt/butternut) [butternut backend](https://github.com/btrnt/butternut_backend)
partial
## Inspiration - Did you know that pandas were once carnivores, then became vegetarian, then vegan? According to Google, the vegan trend quadrupled between 2012 and 2017. Today, at least one in three Americans has stopped or reduced their meat consumption. However, almost 90% of the Impossible Burgers sold at fast food chains are being purchased by meat eaters. Who are these people? There is a fast-growing number of people who want to help the planet by giving up meat and dairy, but who also like meat or dairy too much to give them up - the so-called REDUCETARIANS. If you have ever thought of maybe not eating so much meat today, for reasons including health, ethics, the environment, etc., then you are part of the group too! A study published in the journal Science indicated that giving up meat and dairy is the SINGLE BIGGEST step any of us can take to reduce our carbon footprint on Earth. However, we know it's hard! Here it comes - Pandafy - the meat consumption tracking app that will encourage anyone to eat less meat (aka learn from pandas). For our mother Earth, what counts right now is not whether we can give up meat entirely, but how much more we can improve to help. ## What it does - Pandafy is a habit-tracking app that records whether or not you eat meat on a given day. If you did not eat meat, the app encourages you by showing you a tree that you saved! Pandafy also shows you the amount of carbon footprint you have reduced by eating less meat. Doesn't it feel good to know immediately how much you have helped to save Earth? ## How we built it - Android Studio We learnt to use Android Studio on the first day of PennApps XX. We then built the tracking structure from scratch. We then improved the UX with graphics. ## Challenges we ran into - Many! We had never joined a hackathon before, let alone built an app! We ran into challenges on every single line of code - how to count the cumulative number of days, how to insert pictures, and so on. ## Accomplishments that we're proud of - Many more! We made our first app ever at our first hackathon! We are especially proud of our courage to take on a tool we just learnt how to use yesterday. ## What we learned - So much, thanks to PennApps! We learned that we should never be afraid to build a skill right when we need it. We know how much potential we have to help the world with technology, regardless of how small the contribution is. ## What's next for Pandafy - Exciting! 1) We will add more functions to Pandafy, including recording by meals and showing a calendar with the recorded data. 2) We will use the data collected to generate more services for users, e.g. recommend new vegan restaurants / products. 3) Launch the app. 4) Add in ways to monetize the app: (a) create a premium version which has better functionality and generates more detailed reports; (b) design rarer trees (and animals) you could grow by paying for special seeds. Sources: * Scarborough, et al. (2014), "Dietary greenhouse gas emissions of meat-eaters, fish-eaters, vegetarians and vegans in the UK." * <https://www.jing.fm/iclipt/u2q8a9e6u2t4i1e6/> * <https://www.vectorstock.com/royalty-free-vector/home-icon-vector-20131404>
## Inspiration We were inspired by Plaid's challenge to "help people make more sense of their financial lives." We wanted to create a way for people to easily view where they are spending their money so that they can better understand how to conserve it. Plaid's API allows us to see financial transactions, and the Google Maps API serves as a great medium to display the flow of money. ## What it does GeoCash starts by prompting the user to log in through the Plaid API. Once the user is authorized, we are able to send requests for transactions, given the public\_token of the user. We then display the locations of these transactions using the Google Maps API. ## How I built it We built this using JavaScript, including the Meteor, React, and Express frameworks. We also utilized the Plaid API for the transaction data and the Google Maps API to display the data. ## Challenges I ran into Data extraction/responses from the Plaid API, InfoWindow displays in Google Maps ## Accomplishments that I'm proud of Successfully implemented a Meteor web app, and integrated two different APIs into our product ## What I learned Meteor (Node.js and React.js), Plaid API, Google Maps API, Express framework ## What's next for GeoCash We plan on integrating real user information into our web app; we are currently only using the sandbox user, which has a very limited scope of transactions. We would like to implement differently sized markers on the map to represent the amount of money spent at each location. We would like to display different colors based on the time of day, which was not included in the sandbox user's data. We would also like to display multiple different users at the same time, so that we can better describe the market based on the different categories of transactions.
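A small Python sketch of the data flow, calling Plaid's sandbox REST endpoint directly (the app itself is Meteor/Express); the credentials are placeholders, and the location field names are our reading of Plaid's transaction schema:

```python
# Sketch: pull sandbox transactions from Plaid and keep the geolocated ones,
# which become map markers. Credentials below are placeholders.
import requests

def fetch_transaction_points(access_token: str):
    response = requests.post(
        "https://sandbox.plaid.com/transactions/get",
        json={
            "client_id": "PLAID_CLIENT_ID",   # placeholder
            "secret": "PLAID_SECRET",         # placeholder
            "access_token": access_token,
            "start_date": "2019-01-01",
            "end_date": "2019-12-31",
        },
        timeout=10,
    ).json()
    # Keep only transactions Plaid could geolocate.
    for txn in response.get("transactions", []):
        loc = txn.get("location", {})
        if loc.get("lat") is not None and loc.get("lon") is not None:
            yield txn["name"], txn["amount"], loc["lat"], loc["lon"]

for name, amount, lat, lon in fetch_transaction_points("access-sandbox-token"):
    print(f"{name}: ${amount} at ({lat}, {lon})")
```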
## Inspiration: We thought about how we could provide an easy educational tool that would help people who are trying to eat more sustainably. ## What it does: We take an online recipe and give it a sustainability score based on the ingredients in the recipe. We also give suggestions on how to eat more sustainably. ## How we built it: We scraped the page for the ingredients and, based on what was in the recipe, gave suggestions on how to eat more sustainably along with a sustainability score. ## Challenges we ran into: No one on our team knew how to code in JavaScript or HTML, so we all had to learn from scratch. There were a lot of syntactical and configuration issues as a result. ## Accomplishments that we're proud of: We're proud of how much we could do without much experience, and also of how our final project ended up looking. ## What's next for Food for Thought: There are so many ways we could improve our web extension, including having a more robust list of foods or adding a goals tracker.
partial
## What it does This app uses a user's location to determine whether or not they are in the proximity of a disaster, and if they are, it will passively push their location to a database where first responders (who run admin accounts) can access and map out their locations. Thus, if people are stuck in a fire, flood, earthquake, or other natural disaster, they can be found and rescued as soon as possible. Users can indicate that they are safe to turn off this functionality, and can use a panic button to give an auditory signal as to where they are. The app actively shows users a regularly updated map of disasters from the GDACS server to let them know about the disasters in their proximity. ## How we built it This app was built using Google Cloud and the Google Maps API, integrated with React, Node.js, Firebase, and the Expo client. We also used files from GDACS, the Global Disaster Alerting and Coordination System.
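A minimal sketch of the proximity check behind the passive-tracking trigger: haversine distance between the user and each GDACS event, with an assumed alert radius.

```python
# Sketch: is the user within an alert radius of any active disaster?
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def in_danger(user, disasters, radius_km: float = 50) -> bool:
    # If any active event falls inside the radius, start pushing location.
    return any(haversine_km(*user, d["lat"], d["lon"]) <= radius_km for d in disasters)

events = [{"lat": 34.05, "lon": -118.24}]   # e.g. a fire near Los Angeles
print(in_danger((34.10, -118.30), events))  # True -> begin passive tracking
```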
## Inspiration As students with family in West Africa, we’ve been close to personal situations in which the difference between life and death was quick access to medical aid. One call, quicker service, or a nearby expert or volunteer could’ve made a big difference, preventing a life from being lost. We were inspired by this problem to pursue a community solution that would hopefully help to save lives. Oftentimes in developing countries, even in big cities, speedy access to medical care or a centralized emergency service is not possible, for many reasons. In the case of an emergency, time is often of the essence, and a few minutes can make a vital difference. We wondered: what if you could crowdsource the power of communities nearby and even far away to aid in these situations? What if at the push of a button you could notify not only nearby health professionals, but also family members who may be able to quickly come and help? With this in mind, we thought creating a platform to connect these professionals to those that need help would fill a significant gap, and would use the power of communities to achieve that. ## What it does Though we didn't get the chance to fully complete the app, the general idea was to build a phone app that acts as a LifeAlert for those who live in developing countries or rely on community-based healthcare. Specifically, we want to build a system that allows people to send alerts to a list of emergency contacts. The app would then search for others within a certain radius who have the app and are signed up as helpers, and send the location to them. ## How we built it The app is built using React Native. We prototyped it using Figma, and before that drew our prototypes. ## Challenges we ran into As none of us had completed a hackathon prior to this one, we found that the scope of our project didn't quite match our team's skill level and time availability. Setup was a bit of a challenge, and we also ran into a couple of issues integrating various APIs and building a backend. We spent hours figuring out how to use APIs, learning React Native and all its packages, and uploading to Firebase. One big thing was figuring out how to collaborate with React Native. We stayed up late trying Atom and GitHub, before finally resorting to a creative solution with Expo Snack, working on different components apart and bringing them together in one centralized screen. ## Accomplishments that we're proud of We're proud of the fact that we were able to produce a semi-working app in the short span of time that we were given, considering our experience at the time. From prototyping to an app in a day, we learned, used, and synthesized skills in ways that we could have never imagined a week ago. ## What we learned We learned a lot about React and about app development in general. Workshops and social learning allowed us to pick up skills that we wouldn’t have otherwise, and building an app from front end to back end was relatively new to us. It’s awesome what the power of hackathons, with the time pressure and drive, can accomplish! Thanks TreeHacks!
## Inspiration In light of the ongoing global conflict between war-torn countries, many civilians face hardships. Recognizing these challenges, LifeLine Aid was inspired to direct vulnerable groups to essential medical care, health services, shelter, food and water assistance, and other deprivation relief. ## What it does LifeLine Aid provides multifunctional tools that enable users in developing countries to locate resources and identify dangers nearby. Utilizing the user's location, the app alerts them about the proximity of a situation and centers for help. It also facilitates communication, allowing users to share live videos and chat updates regarding ongoing issues. An upcoming feature will highlight available resources, like nearby medical centers, and notify users if these centers are running low on supplies. ## How we built it Originally, the web backend was to be built using Django, a trusted framework in the industry. As we progressed, we realized that the effort Django demanded was not sustainable, as we made no progress within the first day. Drawing on one team member's extensive research into asyncio, we decided to switch to FastAPI, a trusted framework used by Microsoft. Using this framework had both its benefits and costs, but after Django proved to be a roadblock on our first day, we ultimately decided the switch was worth it. Our backend proudly uses CockroachDB, an unstoppable force to be reckoned with. CockroachDB allowed our code to scale and continue to serve those who suffer from the effects of war. ## Challenges we ran into In order to pinpoint hazards and help, we would need to obtain, store, and reverse-engineer geospatial coordinate points, which we would then present to users in a map-centric manner. We initially struggled with converting the geospatial data from a degrees-minutes-seconds format to decimal degrees, and with storing the converted values as points on the map, which were then stored as unique 50-character-long SRID values. Luckily, one of our teammates had some experience with processing geospatial data, so drafting coordinates on a map wasn't our biggest hurdle to overcome. Another challenge we faced was certain edge cases in our initial Django backend that resulted in invalid data. Since some outputs would be relevant to our project, we had to make an executive decision to change backends midway through. We decided to go with FastAPI. Although FastAPI brought its own challenge of processing SQL into usable data, it was our way of overcoming our Django situation. One last challenge we ran into was our overall source control. A mixture of slow and unbearable WiFi, combined with tedious local git repositories not correctly syncing, created some frustrating deadlocks and holdups. To combat this downtime, we resorted to physically drafting and planning out how each component of our code would work. ## Accomplishments that we're proud of Three out of the four in our team are attending their first hackathon. The experience of crafting an app and seeing the fruits of our labor is truly rewarding. The opportunity to acquire and apply new tools in our project has been exhilarating. Through this hackathon, our team members were all able to learn different aspects of turning an idea into a scalable application.
From designing and learning UI/UX, implementing the React Native framework, and emulating iOS and Android devices to test and program compatibility, to creating communication between the frontend and backend/database. ## What we learned With this challenge, we aimed to dive into technologies that are widely used in our daily lives. Spearheading the competition with a framework trusted by huge companies such as Meta, Discord, and others, we chose to explore the capabilities of React Native. Our team includes three students attending their first hackathon, and the opportunity to explore these technologies has left us with a skillset of a lifetime. With the concept of the application in mind, we researched and discovered that the best way to represent our data is through the usage of geospatial data. CockroachDB's extensive tooling and support allowed us to investigate the usage of geospatial data extensively, as our backend team traversed the complexity and sheer scale of said technology. We are extremely grateful to have had this opportunity to network and to use these tools that will be useful in the future. ## What's next for LifeLine Aid There are a plethora of avenues to further develop the app, including enhanced verification, rate limiting, and many others. Other options include improved hosting using Azure Kubernetes Service (AKS). This hackathon project is planned to be maintained further into the future, as a project for others, whether new or experienced in this field, to collaborate on.
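A small Python sketch of the coordinate clean-up that gave us trouble above: converting degrees/minutes/seconds strings into the decimal degrees the map expects. The string format we parse is an assumption about the data:

```python
# Sketch: DMS (e.g. 48°51'24.0"N) -> signed decimal degrees.
import re

def dms_to_decimal(dms: str) -> float:
    """Convert a degrees-minutes-seconds string into signed decimal degrees."""
    match = re.match(r"""(\d+)[°\s](\d+)['\s](\d+(?:\.\d+)?)["\s]?([NSEW])""", dms.strip())
    if not match:
        raise ValueError(f"unrecognised DMS string: {dms!r}")
    degrees, minutes, seconds, hemisphere = match.groups()
    decimal = float(degrees) + float(minutes) / 60 + float(seconds) / 3600
    # South and West hemispheres are negative in decimal degrees.
    return -decimal if hemisphere in "SW" else decimal

print(dms_to_decimal('48°51\'24.0"N'))  # 48.8566...
print(dms_to_decimal('2°21\'08.0"E'))   #  2.3522...
```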
losing
## Inspiration Pivoted to a completely new project to allow users to access trading analysis tools normally not available to an everyday or beginner trader. ## What it does ## How we built it ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for OpenAccessToQuantTools
## Inspiration The inspiration came from lectures on several of the innovations recently created by fintech and proprietary trading companies in the domain of algorithmic/high-frequency/learning-based trading strategies, and my desire to open these technologies up to everyone else, especially those with insufficient capital to be considered for such advanced tech-based financial schemes. ## What it does It provides a UI and IDE (originally for Julia, switched over to JavaScript for security reasons) for building trading algorithms in an intuitive, iterative manner. It also provides built-in validation for algorithms and pseudo-native incorporation of large sources of relevant data (currently limited to Quandl). ## How I built it The platform was built on a Node.js server using Express (with shelljs/bash for unsafe calls to Julia). ## Challenges I ran into The first big problem in this hackathon was that my two other group members were not able to make it to the hackathon due to family issues and upcoming midterms. This, I think, dealt a heavy but not insurmountable blow to my progress. The second major problem was trying to come up with a method of effectively sandboxing the inputted Julia code: since it has the ability to directly modify low-level structures, there is effectively no way to safely run this server-side without setting up a dedicated virtual machine for each process. Ultimately, this was resolved by just switching to JavaScript, which eliminated a lot of overhead and increased safety greatly (simply disabling the require() Node.js function makes it very difficult to inject malicious code). ## What I learned I learned quite a bit about Node.js, which I had studied but not previously used for a significant project, and also about some of the challenges of HTTPS and of keeping servers safe against injection when the whole point of the server is to run foreign code. ## What's next for Γ trade Unfortunately, the security issues remain a big deal, and obviously the entire UI/UX needs a rehaul, so the next couple of weeks will be spent rewriting the entire system in a safer, more scalable fashion to allow the system to function as originally planned.
## Inspiration After being overwhelmed by the volume of financial educational tools available, we discovered that the majority of products are aimed at institutions or are expensive. We decided there needed to be an easy approach to learning about stocks in a more casual environment. Interested in the simplicity of Tinder’s yes-or-no swiping mechanics, we decided to combine the two ideas to create Tickr! ## What it does Tickr is a stock screening tool designed to help beginner retail investors discover their next trade! With an intuitive yes-or-no discovery system built on swiping mechanics, Tickr is the next Tinder for stocks. For a more in-depth video demo, see our [original screen recorded demo video!](https://youtu.be/dU6rF8vymKE) ## How we built it Our team created this web app using a Node and Express back end paired with a React front end. The back end used three linked Supabase tables to host authenticated user information and static information about stocks from the New York Stock Exchange and NASDAQ. We also used the [Finnhub API](https://finnhub.io/) to get real-time metrics about the stocks we were showing our users. ## Challenges we ran into Our biggest challenge was setting the scope to something our team could complete in a weekend. We hadn't used Node and Express in a long time, so getting comfortable with our stack again took more time than we thought. We were also completely new to Supabase and decided to try it because it sounded really interesting. While Supabase turned out to be incredibly useful and user-friendly, its learning curve also took a bit more time than we expected. ## Accomplishments that we're proud of The two accomplishments we are most proud of are our finished UI and the successful integration of the Finnhub API. Drawing inspiration from Tinder, we were able to recreate a similar UI/UX design with minimal help from pre-existing libraries. Further, we designed our backend to make seamless API calls to fetch relevant data for our application. ## What we learned During this project we learned a lot about the power of friendship and anime. Some of us learned what a market cap was and how to write a viable business proposal, while others learned more about full-stack development and how to host a database on Supabase. Overall it was a very fun project and we're really glad we were able to get our MVP done 😁✌️ ## What's next for Tickr Our next goal for Tickr is to finish the aggregate news feed function. This would entail a news feed of all stocks swiped on, along with notifications. It would help improve our north star metrics of time spent on the platform and daily active users!
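As a concrete illustration of the real-time metrics step, here is a hedged sketch of the Finnhub quote lookup. The project's backend is Node/Express; this Python version is for illustration only, and the API key handling is a placeholder.

```python
# Fetch a real-time price snapshot from Finnhub's /quote endpoint.
import requests

FINNHUB_TOKEN = "your-api-key"  # placeholder; keep real keys in env vars

def get_quote(symbol: str) -> dict:
    resp = requests.get(
        "https://finnhub.io/api/v1/quote",
        params={"symbol": symbol, "token": FINNHUB_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    # Response fields include c (current), h (high), l (low), o (open).
    return resp.json()

print(get_quote("AAPL"))
```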
losing
## Problem Statement As the elderly population constantly grows, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. Elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care settings, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This can have serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype, which served as a reference point while the developers built the platform in HTML, CSS, and ReactJS. For the cloud-based machine learning algorithms, we used computer vision, OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time (a simplified sketch of one detection heuristic follows this write-up). Because of limited resources, we decided to use our phones in place of cameras for the live streams used in real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges was integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions.
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms, to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In that scenario, the alert would pop up privately on the user’s phone and notify only the people who have been given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
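The write-up above doesn't pin down the exact detection model, so the following is only a simplified sketch of one common heuristic: background-subtract the video stream and flag a person whose bounding box becomes wider than it is tall. The area threshold and aspect ratio are illustrative tuning parameters, not the project's trained model.

```python
# Simplified fall heuristic: background subtraction + bounding-box shape.
import cv2

cap = cv2.VideoCapture(0)                 # stand-in for the phone's stream
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 5000:     # ignore small blobs of noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w > h * 1.3:                   # wider than tall: likely lying down
            print("possible fall detected")  # real app: send an alert
```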
[Play The Game](https://gotm.io/askstudio/pandemic-hero) ## Inspiration Our inspiration comes from the concern of **misinformation** surrounding **COVID-19 Vaccines** in these challenging times. As students, not only do we love to learn, but we also yearn to share the gifts of our knowledge and creativity with the world. We recognize that a fun and interactive way to learn crucial information related to STEM and current events is rare. Therefore we aim to give anyone this opportunity using the product we have developed. ## What it does In the past 24 hours, we have developed a pixel art RPG game. In this game, the user becomes a scientist who has experienced the tragedies of COVID-19 and is determined to find a solution. Become the **Hero of the Pandemic** by overcoming challenging puzzles that give you a general understanding of the Pfizer-BioNTech vaccine's development process, myths, and side effects. Immerse yourself in the original artwork and touching story-line. At the end, complete a short feedback survey, get an immediate analysis of your responses through our **Machine Learning Model**, and receive additional learning resources tailored to your experience to further your knowledge and curiosity about COVID-19. Team A.S.K. hopes that through this game, you become further educated by the knowledge you attain and inspired by your potential for growth when challenged. ## How I built it We built this game primarily using the Godot Game Engine, a cross-platform open-source game engine that provides the design tools and interfaces to create games. This engine mostly uses GDScript, a Python-like dynamically typed language designed explicitly for the Godot Engine. We chose Godot for its cross-platform support via the OpenGL API and for GDScript, a relatively programmer-friendly language. We started off using **Figma** to plan out and identify a theme based on type and colour. Afterwards, we separated components into groupings that maintain similar characteristics, such as outlined labels and movable objects with no outlines. Finally, as we discussed new designs, we added them to our pre-made categories to create a consistent, user-experience-driven UI. Our machine learning model is a content-based recommendation system built with Scikit-learn, which works with data that users provide through a brief feedback survey at the end of the game (a sketch of this approach follows this write-up). Additionally, we made a server using the Flask framework to serve our model. ## Challenges I ran into Our first significant challenge was navigating through the plethora of game features possible with GDScript and continually referring to the documentation. Although Godot is heavily documented, as an open-source engine it has frequent bugs with rendering, layering, event handling, and more, which we creatively overcame. A prevalent design challenge was learning and creating pixel art with the time constraint in mind. To accomplish this, we methodically used as many shortcuts and tools as possible to copy/paste or select repetitive sections. Additionally, incorporating machine learning into our project was a challenge in itself. Sending requests, displaying JSON, and making the recommendations selectable were also considerable challenges using Godot and GDScript. Finally, the biggest challenge of game development for our team was the **UX-driven** consideration of finding a balance between a fun, challenging puzzle game and an educational experience that leaves some form of impact on the player.
Brainstorming and continuously modifying the story-line while implementing the animations in Godot required a lot of adaptability and creativity. ## Accomplishments that I'm proud of We are incredibly proud of our ability to bring our past gaming experiences into the development process and to incorporate modified versions of our favourite gaming memories. The development process was exhilarating and took the team down the path of nostalgia, which dramatically increased our motivation. We are also impressed by our teamwork and team chemistry, which allowed us to divide tasks efficiently and incorporate all the original artwork designs into the game with only a few hiccups. We accomplished so much more within the time constraint than we expected, such as training our machine learning model (although with limited data), getting a server up and running quickly, and designing an entirely original pixel art concept for the game. ## What I learned As a team, we learned the benefit of incorporating software development processes such as the **Agile software development cycle.** We focused on specific software development stages chronologically while returning and adapting to changes as they came along. The Agile process allowed us to maximize our efficiency and organization while minimizing forgotten tasks and leftover bugs. We also learned to use entirely new software, languages, and skills, such as Godot, GDScript, pixel art, and the design and evaluation of a serious game. Finally, by implementing a machine learning model to analyze and provide tailored suggestions to users, we learned the importance of a great dataset. Following the **Scikit-learn** model selection flowchart or using cross-validation techniques is ineffective without a solid dataset as a foundation. It is equally important to manipulate the structure of the data to match task requirements and improve the model's score. ## What's next for Pandemic Hero We hope to continue developing **Pandemic Hero** into an educational game that supports various age ranges and is worthy of distribution among school districts. Our goal is to teach as many people as possible about the arriving COVID-19 vaccines and inspire students everywhere to engage with STEM in a fun and intuitive manner. We aim to find support from **mentors** along the way who can help us better understand game development and education practices that will propel the game into a deployment-ready product. ### Use the gotm.io link below to play the game on your browser or follow the instructions on Github to run the game using Godot
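Here is a hedged sketch of the content-based recommendation idea: vectorize the player's free-text survey feedback with TF-IDF and rank candidate learning resources by cosine similarity. The resource texts are placeholders, not the game's real dataset.

```python
# Content-based recommendation: TF-IDF + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resources = [
    "How mRNA vaccines like Pfizer-BioNTech are developed and tested",
    "Common myths about COVID-19 vaccine side effects, debunked",
    "A timeline of clinical trial phases for new vaccines",
]

def recommend(feedback: str, top_k: int = 2) -> list[str]:
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(resources + [feedback])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [resources[i] for i in ranked]

print(recommend("I'm still unsure about side effects after the second dose"))
```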
## FallCall FallCall: Passive Reactive Safety Notifier ## Inspiration: One of our teammates' grandfathers in Japan has an increased susceptibility to falling as a result of getting older. Even though there are times when people are around to help, we cannot account for every situation in which he may fall. While smartphone applications can help many people, especially the elderly, many individuals simply do not have access to that technology. We want to make helping such people as accessible and predictive as possible. Meet FallCall. ## What It Does: FallCall is an affordable wearable device that detects if a person falls to the ground and contacts the appropriate person. When they fall, we automatically place a call to the first emergency contact they designate. If the first contact does not respond, we automatically trigger a call to the second emergency contact. If this person also doesn't respond, we finally resort to calling emergency services (911) with the appropriate details to help the person who fell. This escalation procedure sets us apart from similar products that automatically call 911, which can result in unnecessary charges. If the person who falls does not wish to reach out for help, they can prevent any calls/messages from being sent by simply pressing a button. This is a very intentional design choice, as there may be scenarios in which the person who falls is capable of getting up or the trigger was an accident. We always default to helping the person who fell, and simply allow them to cancel notifications when necessary. ## How We Built It: We connected an MPU6050, an accelerometer and gyroscope, to an Arduino, which processes the raw data. We implemented an algorithm (C++) that uses the raw data to predict with high confidence whether a person has fallen (a simplified sketch of the idea follows this write-up). We made sure to calibrate the algorithm to reduce the likelihood of false positives and false negatives. The Arduino is connected to a Particle Photon, which is responsible for taking the prediction value from the Arduino and making an HTTP POST request (C++) to a REST endpoint built using the StdLib-Twilio Hub (Node.js). The logic within the StdLib-Twilio Hub is essentially our intelligent escalation notification system. Finally, we took our device and created an accessible, user-friendly wearable for any consumer. ## Challenges We Ran Into: None of us had used the Particle Photon before, so we spent a long time working through problems with the device and integrating it with our Arduino data. We also struggled with connecting our Arduino to the Particle Photon, because there was not much documentation on the issues we faced. ## What’s Next for FallCall: While we spent much of our time tweaking our fall-detection algorithm, we can take it a step further by using machine learning to customize the fall-detection algorithm more accurately based on a user's physical characteristics. We would also love to improve the physical wearable itself and make it more user-friendly to accommodate all potential users.
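The actual firmware is C++ on the Arduino, so the following Python sketch only illustrates the classic two-phase heuristic this family of algorithms uses: a near-free-fall dip in acceleration magnitude followed shortly by an impact spike. All thresholds are illustrative and would be tuned during calibration, as described above.

```python
# Two-phase fall heuristic over MPU6050 accelerometer samples (in g-units).
import math

FREE_FALL_G = 0.4    # magnitude well below 1g suggests free fall
IMPACT_G = 2.5       # a large spike shortly afterwards suggests impact
WINDOW = 20          # samples allowed between the dip and the spike

def detect_fall(samples):
    """samples: iterable of (ax, ay, az) accelerometer readings."""
    dip_at = None
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag < FREE_FALL_G:
            dip_at = i
        elif dip_at is not None and mag > IMPACT_G:
            if i - dip_at <= WINDOW:
                return True          # trigger the escalation calls
            dip_at = None
    return False
```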
winning
## Inspiration This project was inspired by one of our group members. Since they do not have Discord Nitro or any other convenient means of sending large files to their friends, we decided to solve this ourselves. ## What it does FilePortal creates a temporary session between two people and allows them to choose any file they would like to share; it is sent to the other person immediately, in real time. ## How we built it The frontend of the web app was created in React.js. We used Node.js to communicate with our backend, a C++ server. ## Challenges we ran into We faced countless challenges when creating this project. Figuring out how the web app would communicate with the C++ server was a big problem, as Node.js does not offer many ways of doing this. Other problems we faced included difficulty in deploying the web app, and, due to time constraints and many errors, we were not able to fully implement the ability to transfer files from a phone to a desktop. ## Accomplishments that we're proud of We were able to quickly transfer gigabyte-sized files from one person to another through our project. ## What we learned This was a great learning experience, as we learned a lot about the development process and the amount of work it takes to create a project of this scale. ## What's next for FilePortal We hope to eventually integrate Twilio into our project to allow people to text files from their phones to their desktops.
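To make the temporary-session idea concrete, here is a minimal Python sketch of a relay that pipes bytes from one peer to the other. The real server is C++; this illustration uses placeholder ports and chunk sizes, relays one direction only, and omits session cleanup.

```python
# Minimal one-way relay: the first peer to connect uploads, the second downloads.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    while chunk := src.recv(65536):   # empty bytes means the sender closed
        dst.sendall(chunk)
    dst.close()

server = socket.socket()
server.bind(("0.0.0.0", 9000))
server.listen(2)

sender, _ = server.accept()
receiver, _ = server.accept()
threading.Thread(target=pipe, args=(sender, receiver)).start()
```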
## Inspiration This project was inspired by Leon's father's first-hand experience with the lack of electronic medical records, which made us realize the need for a more accessible patient experience. ## What it does The system stores patients' medical records. It also allows patients to fill out medical forms using their voice, as well as electronically sign using their voice. Our theme while building it was accessibility, hence the voice control integration, a simple and easy-to-understand UI, and big, bold colours. ## How I built it The frontend is built on React Native, while the backend is built in Node.js using MongoDB Atlas as our database. For our speech-to-text processing, we used the Google Cloud Platform. We also used Twilio for our SMS reminder component (a sketch of this endpoint follows this write-up). ## Challenges We ran into There were three distinct challenges that we ran into. The first was trying to get Twilio to function correctly within the app. We were trying to use it on the frontend, but due to the nature of React Native and some of the Node.js libraries being used, it was not working. We solved the problem by deploying to a Heroku server and making REST calls. A second challenge was trying to get the database queries to work from our backend. Although everything seemed right, it still did not work, but thanks to attention to detail and going over the code multiple times, the mistake was spotted and corrected. The third and likely biggest challenge we faced was getting the speech-to-text streaming input to cooperate. In the beginning, it did not stop recording at the correct times and would capture a lot of background noise. This problem was eventually solved by redoing it while following a tutorial online. ## Accomplishments that I'm proud of **WE FINISHED!** If you had asked us at 10 pm on Saturday night, we honestly did not expect to finish. However, things came together well, which we were really proud of. We are also really proud of our UI/UX and think it is a very sleek and clean design. Two other highlights are accurate speech-to-text processing and values dynamically filled from our database at runtime. ## What I learned **Joshua** - How to write server-side JavaScript using Node.js **Leon** - Twilio **Joy** - Speech-to-text streaming with React Native **Kevin** - React Native ## What's next for MediSign If we were to continue working on this project, we would first start by dynamically filling all values from our database. We would then focus a lot of attention on security, as medical records are sensitive information. Thirdly, we would upgrade the UI/UX to be even better than before.
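As a hedged sketch of the SMS reminder component, here is roughly what the Heroku-deployed REST endpoint could look like using Twilio's Python helper library. The route, field names, credentials, and phone numbers are placeholders; the project's actual endpoint may differ.

```python
# Hypothetical reminder endpoint: the app POSTs here, and Twilio sends the SMS.
from flask import Flask, request
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholders

@app.route("/remind", methods=["POST"])
def remind():
    data = request.get_json()
    twilio.messages.create(
        to=data["phone"],                      # patient's number
        from_="+15550000000",                  # your Twilio number
        body=f"Reminder: {data['appointment']}",
    )
    return {"status": "sent"}
```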
## Inspiration Electra was inspired by the complexity of modern elections. Traditional polls and static models often fail to capture dynamic voter behavior, especially in battleground states where every news cycle can shift the race. By leveraging LLM agents and a custom framework, Electra brings a cutting-edge solution to predicting elections far more accurately than conventional methods. ## What We Learned Throughout this project, we gained valuable insights into the intricacies of building multi-agent systems and leveraging them for simulations. Specifically, we learned how to: * Model voter behavior based on census data and AI agent backstories. * Create a collaborative framework where agents can interact and chat with one another to accomplish any task. * Utilize modern AI/ML solutions like Cerebras and LLMs like Llama 3.1 for near-instantaneous inference. * Generate agents on the fly that are fully representative of the general populace, while ensuring the system remains explainable and accurate. ## How We Built Electra Electra was built using a custom multi-agent framework designed from scratch. Our goal was to simulate election results in battleground states by creating agents that behave like real voters. Here's how the process unfolded: 1. We began by using US census data to replicate voter districts. 2. We developed agents with rich backstories, programming them to "think" like voters across various demographics (a sketch of this step follows this write-up). 3. Agents were then embedded into conversational group chats to simulate how they would react to hypothetical scenarios (e.g., Donald Trump launching his own cryptocurrency). 4. We visualized the results in real time using an interactive map and communication logs, ensuring full transparency of the agents' interactions and decision-making processes. The entire backend was written in Python. Node.js manages server-side logic to facilitate smooth communication between the frontend and backend. For the user interface, we used React to create a responsive experience, while D3.js handles dynamic data visualizations that allow users to interpret sentiment trends easily. Geographic data is represented using GeoJSON, enabling users to visualize public sentiment across regions effectively. ## Challenges We Faced Building a simulation that accurately captures the diversity of voters across different districts was one of our biggest challenges. Keeping voter behavior representative of the census data while maintaining the system's performance required fine-tuning agent interactions and balancing different political perspectives. Additionally, making the results both highly explainable and scalable to larger elections presented difficulties in data handling and system optimization. ## Accomplishments & Learning We successfully built our very own LLM Python framework to quickly scale our eventual project. None of us had ever done anything at this scale before, so it was both exciting and rewarding. We also had to do a lot of learning on the fly: from LLM inference to structured data extraction, this project significantly improved our ability to build accessible full-stack projects while giving us a ton of valuable experience integrating modern AI solutions into our tech stack, from managing data flows to optimizing performance. ## What's Next? We plan to improve the tool by incorporating real-time data feeds from social media and news sources.
This would enhance the accuracy of our analysis and provide even deeper insight into public opinion: rather than simulating scenarios on demand, the system could run continually on live data. Additionally, we plan to keep iterating on and refining the UI, making the tool accessible to an even broader audience. **A live demo is included in the Canva deck**
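Here is a hedged sketch of how census-weighted agents might be generated. `llm_complete` is a stub standing in for the Cerebras/Llama 3.1 inference call described above, and the demographic marginals are toy numbers, not real census data.

```python
# Sample agent traits from toy census marginals, then ask an LLM for a backstory.
import random

CENSUS = {  # illustrative marginals for one district
    "age":  {"18-29": 0.21, "30-49": 0.33, "50-64": 0.26, "65+": 0.20},
    "educ": {"no degree": 0.55, "college degree": 0.45},
}

def llm_complete(prompt: str) -> str:
    """Stub for the Cerebras-hosted Llama 3.1 call the team describes."""
    return f"[backstory generated from: {prompt}]"

def sample(dist: dict) -> str:
    return random.choices(list(dist), weights=list(dist.values()))[0]

def make_agent(district: str) -> dict:
    traits = {k: sample(v) for k, v in CENSUS.items()}
    backstory = llm_complete(
        f"Write a short backstory for a voter in {district} who is "
        f"{traits['age']} with {traits['educ']}."
    )
    return {"district": district, **traits, "backstory": backstory}

print(make_agent("Erie County, PA"))
```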
losing
## Inspiration One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars' worth of food wasted* annually. For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste. We wanted to work with voice recognition and computer vision - so we used these tools to develop a user-friendly app to help track and manage food and expiration dates. ## What it does greenEats is an all-in-one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire (a sketch of this check follows this write-up). Furthermore, greenEats can even make recipe recommendations based on items you select from your inventory, inspiring creativity while promoting the use of items closer to expiration. ## How we built it We built an Android app with Java, using Android Studio for the frontend and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase ML Kit Vision API for optical character recognition of receipts. We also wrote a custom API with StdLib that takes ingredients as inputs and returns recipe recommendations. ## Challenges we ran into With all of us being completely new to cloud computing, it took us around 4 hours just to get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through. When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with it. To tackle these tasks, we decided to split up and take them on individually: Alex worked on scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development in Android Studio. ## Accomplishments that we're proud of We're super stoked that we offer 3 completely different grocery input methods: camera, speech, and manual input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time. ## What we learned For most of us this is the first application we've built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application. ## What's next for greenEats We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based on food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience. These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app.
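A minimal sketch of the expiry-notification logic behind My Fridge: scan the inventory and flag anything expiring within the next couple of days. The field names are illustrative; the real inventory lives in Firebase.

```python
# Flag inventory items that expire within a configurable window.
from datetime import date, timedelta

today = date.today()
fridge = [
    {"item": "milk",    "expires": today + timedelta(days=1)},
    {"item": "spinach", "expires": today + timedelta(days=5)},
]

def expiring_soon(inventory, days: int = 2):
    cutoff = today + timedelta(days=days)
    return [entry["item"] for entry in inventory if entry["expires"] <= cutoff]

for item in expiring_soon(fridge):
    print(f"Heads up: {item} is about to expire")  # real app: push notification
```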
## Inspiration Despite being a global priority in the eyes of the United Nations, food insecurity still affects hundreds of millions of people. Even in a developed country like Canada, over 5.8 million individuals (>14% of the national population) live in food-insecure households. These individuals are unable to access adequate quantities of nutritious food. ## What it does Food4All works to limit the prevalence of food insecurity by minimizing waste from food corporations. The website addresses this by serving as a link between businesses with leftover food and individuals in need. Businesses with a surplus of food can donate it by displaying their offering on the Food4All website. By filling out the form, businesses can input the nutritional values of the food, the quantity of the food, and the location for pickup. From a consumer's perspective, they are able to see nearby donations on an interactive map. By filtering foods by their needs (e.g., high-protein), consumers can reserve the donated food they desire. Altogether, this cuts down on unnecessary food waste by providing it to people in need. ## How we built it We created this project using a combination of languages. We used Python for the backend, specifically for setting up the login system using Flask-Login. We also used Python for form submissions, where we took the input and allocated it to a JSON object that interacted with the food map (a sketch of this flow follows this write-up). Secondly, we used TypeScript (compiled to JavaScript for deployment) and JavaScript's Fetch API to interact with the Google Maps Platform. The two major APIs we used from this platform are the Places API and the Maps JavaScript API, which were responsible for creating the map, the markers with information, and an accessible form system. We used HTML/CSS and JavaScript alongside Bootstrap for the web design of the site. Finally, we used the QR Code API to generate QR code receipts for food pickups. ## Challenges we ran into One of the challenges we ran into was using the Fetch API. Since none of us were familiar with asynchronous polling, specifically in JavaScript, we had to learn it to make a functioning food inventory. Additionally, learning the Google Maps Platform was a challenge due to the comprehensive documentation and our lack of prior experience. Finally, putting frontend components together with backend components to create a cohesive website proved to be a major challenge for us. ## Accomplishments that we're proud of Overall, we are extremely proud of the web application we created. The final website is functional, and it was created to resolve a social issue we are all passionate about. Furthermore, the project we created solves a problem in a way that hasn't been approached before. In addition to improving our teamwork skills, we are pleased to have learned new tools such as the Google Maps Platform. Last but not least, we are thrilled to have overcome the multiple challenges we faced throughout the process of creation. ## What we learned In addition to learning more about food insecurity, we improved our HTML/CSS skills through developing the website. To add on, we increased our understanding of JavaScript/TypeScript through the use of the APIs on the Google Maps Platform (e.g., the Maps JavaScript API and the Places API). These APIs taught us valuable JavaScript skills like operating the Fetch API effectively.
We also had to incorporate the Google Maps Autofill Form API and the Maps JavaScript API, which happened to be a difficult but engaging challenge for us. ## What's next for Food4All - End Food Insecurity There are a variety of next steps of Food4All. First of all, we want to eliminate the potential misuse of reserving food. One of our key objectives is to prevent privileged individuals from taking away the donations from people in need. We plan to implement a method to verify the socioeconomic status of users. Proper implementation of this verification system would also be effective in limiting the maximum number of reservations an individual can make daily. We also want to add a method to incentivize businesses to donate their excess food. This can be achieved by partnering with corporations and marketing their business on our webpage. By doing this, organizations who donate will be seen as charitable and good-natured by the public eye. Lastly, we want to have a third option which would allow volunteers to act as a delivery person. This would permit them to drop off items at the consumer’s household. Volunteers, if applicable, would be able to receive volunteer hours based on delivery time.
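To make the donation flow concrete, here is a hedged sketch of a Flask route that accepts the business's form submission and appends it to the JSON feed the map reads. Route, field names, and file layout are illustrative assumptions.

```python
# Hypothetical donation endpoint: form input -> JSON feed consumed by the map.
import json
import os
from flask import Flask, request

app = Flask(__name__)
DONATIONS = "donations.json"

@app.route("/donate", methods=["POST"])
def donate():
    entry = {
        "food": request.form["food"],
        "quantity": request.form["quantity"],
        "nutrition": request.form["nutrition"],
        "pickup_address": request.form["pickup_address"],
    }
    feed = []
    if os.path.exists(DONATIONS):
        with open(DONATIONS) as f:
            feed = json.load(f)
    feed.append(entry)
    with open(DONATIONS, "w") as f:
        json.dump(feed, f)
    return {"status": "listed"}  # the map frontend polls this feed for markers
```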
# Omakase *"I'll leave it up to you"* ## Inspiration On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what good food can I make, and how? ## What It Does We built an app that recommends recipes based on the food in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we detect the food the user has in their fridge and recommend recipes that use those ingredients. ## What We Learned Most of the members in our group were inexperienced in mobile app development and backend work. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more. ## How We Built It We started with an Android application with access to the user's phone camera. The app was created using Kotlin and XML, with Android's ViewModel architecture and the X library. The application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. The server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image into multiple regions of interest. These images are fed into the API again to classify the objects in them as specific ingredients, while circumventing the API's imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm so we could detect more objects. This produced a list of recognized ingredients. Each ingredient was mapped to a numerical ID, and a set of recipes for that ingredient was retrieved. We then algorithmically intersected the sets of recipes to get a final set that uses the majority of the ingredients (a sketch of this step follows this write-up). These were then passed back to the phone through HTTP. ## What We Are Proud Of We gained skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we fed in an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty-looking recipes, including a Caribbean black bean and fruit salad that uses oranges and salsa. ## Challenges We Faced Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also faced a challenge with the Google Vision API, since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each shelf was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult. ## What's Next We have big plans for our app. One next step we would like to implement is allowing users to include their dietary restrictions and food preferences, so we can better match recommendations to the user. We also want to make this app available on smart fridges: current fridges, like Samsung's, already have a function where the user inputs the expiry dates of food, which would allow us to make recommendations based on the soonest-expiring foods.
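The recipe-matching step boils down to set logic; here is a minimal sketch of ranking recipes by how many of the detected ingredients they use. The data shapes and thresholds are illustrative.

```python
# Rank recipes by ingredient coverage, given ingredient -> recipe-ID sets.
def match_recipes(recipe_sets: dict[str, set[int]], min_overlap: int = 2):
    """recipe_sets maps each detected ingredient to recipe IDs that use it."""
    counts: dict[int, int] = {}
    for recipes in recipe_sets.values():
        for rid in recipes:
            counts[rid] = counts.get(rid, 0) + 1
    # Keep recipes that use at least min_overlap of the detected ingredients,
    # best coverage first.
    return sorted(
        (rid for rid, n in counts.items() if n >= min_overlap),
        key=lambda rid: -counts[rid],
    )

print(match_recipes({"salsa": {1, 2}, "orange": {2, 3}, "hot sauce": {2}}))
```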
winning
As an avid fitness geek, I was always on the lookout for fast and easy ways to monitor my nutrition. By taking a picture of your meal and texting it to us through Twilio, we can determine what you are eating using Cloud Vision and then calculate its nutrition using the NutritionIX database (a sketch of this flow follows this write-up). It was hard to integrate the different APIs to work with each other, but by unifying the languages we were able to connect them seamlessly. Our next step for NutriSnap is adding user accounts; we plan to do this by connecting to MongoDB for reliable data storage. We also plan to refine the Cloud Vision step to scan multiple dishes for even more convenient usage.
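Here is a hedged sketch of the inbound-MMS flow: Twilio POSTs the photo's URL to a webhook, and once the dish is named, the Nutritionix natural-language endpoint returns the nutrition facts. `classify_dish` is a stub standing in for the Cloud Vision step, and the API keys are placeholders.

```python
# Hypothetical webhook: Twilio MMS in, dish name + calories out.
import requests
from flask import Flask, request

app = Flask(__name__)

def classify_dish(photo_url: str) -> str:
    """Stub for the Cloud Vision step that labels the meal photo."""
    return "grilled chicken salad"

def nutrition_for(dish: str) -> dict:
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": "APP_ID", "x-app-key": "APP_KEY"},  # placeholders
        json={"query": dish},
        timeout=10,
    )
    return resp.json()

@app.route("/sms", methods=["POST"])
def incoming_mms():
    photo_url = request.form.get("MediaUrl0")  # Twilio's first media attachment
    dish = classify_dish(photo_url)
    calories = nutrition_for(dish)["foods"][0]["nf_calories"]
    return (f"<Response><Message>{dish}: ~{calories} kcal</Message></Response>",
            200, {"Content-Type": "application/xml"})
```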
## Inspiration One of our teammates works part-time at Cineplex, and he told us that at the end of the day all their extra food is simply thrown out. This got us thinking: why throw the food out when you could earn revenue from end-of-day sales to people in the local proximity who are looking for something to eat? ## What it does Our web app gives restaurants a chance to publish the food items they are selling, with a photo of the food. Meanwhile, users can see everything in real time and order food directly from the platform. The web app also identifies the items in the food and displays the ingredients, nutrient facts, health benefits, and pros and cons of each food item directly to the user. It also provides a secure transaction method that can be used to pay for the food. The food would be sold by the restaurant at a discounted price. ## How I built it The page was fully made with HTML, CSS, JavaScript, and jQuery. There is both a login and a signup, for restaurants wanting to sell and for participants wanting to buy food. Once someone signs up, the entry is stored in Azure, and the app requests access to Android Pay, which allows users to pay for the food. When food is ordered, we use the Clarifai API so that users can see the ingredients, health benefits, nutrient facts, and pros and cons of the food item on their dashboard, along with a photo. This all comes together once the food is delivered by the restaurant. ## Challenges I ran into One challenge was getting our database working, as none of us had past experience using Azure. The biggest challenge was that our first two ideas, after talking to sponsors, turned out to be too limiting, meaning we had to let them go and keep coming up with new ones. We only started hacking late Saturday afternoon, which cut our time to finish the entire thing. ## Accomplishments that I'm proud of We are really proud of getting the entire website up and running properly within 20 hours, given that we started late enough, with database problems, that we were at the point of giving up on Sunday morning. We are also very proud of getting our Clarifai API integration working, as none of us had past experience with Clarifai. ## What I learned The most important thing we learned from this hackathon was to start with a concrete idea early on; had we done that this weekend, our idea could have included a lot more functionality, benefiting both our users and consumers. ## What's next for LassMeal Our biggest next leap would be modifying the delivery portion. Instead of the restaurant delivering the food, users who sign up for the service would also have the chance to become deliverers: if they are within range of the restaurant and headed back toward the buyer's home, they could pick up the food, deliver it, and earn a percentage of the order. This means both users and restaurants would now earn money from food that was once losing them money as it was thrown out. Another addition would be turning our Android mockups into an app, so both users and restaurants could buy and publish food via a mobile device.
## Inspiration Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year due to nutrition- and obesity-related diseases such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track everything they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a Big Mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS. ## What it does macroS integrates seamlessly with the Google Assistant on your smartphone and lets you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating every day, conveniently and without hassle, thanks to the powerful built-in natural language processing model. They can view their account in a browser to set nutrition goals and see rich visualizations of their nutrition habits that help them outline the steps they need to take to improve. ## How we built it DialogFlow and the Google Action Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food-eaten entry and a simple request for a nutritional breakdown (a sketch of this branching follows this write-up). We deployed our functions, written in Node.js, to the Firebase Cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to Nutritionix, which provides NLP-powered querying over a database of more than 900k grocery and restaurant foods. A MongoDB database will store user accounts and pass data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/JavaScript. ## Challenges we ran into Learning how to use the different APIs and the Google Action Console to create intents, contexts, and fulfillment was challenging on its own, but the challenge was amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, the data we needed for the Nutritionix queries was often nested deep within various JSON objects being thrown all over the place between the voice assistant and cloud functions.
The team was finally able to find what they were looking for after spending a lot of time in the Firebase logs. In addition, the entire team lacked any experience using natural language processing and voice-enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all. ## Accomplishments that we're proud of We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, the self-monitoring of your health and nutrition, much more convenient and accessible, we're confident that we can help large numbers of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds** that would otherwise take upwards of 30 minutes of tedious Google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice-enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering a product in the short amount of time that we had, with the levels of experience we came in with. ## What we learned We made and deployed the cloud functions that integrate with our Google Action Console and trained the NLP model to differentiate between a food log and a nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation for the power of voice-enabled technologies. Team members who were interested in honing their front-end skills also got the opportunity to do so by working on the actual web application. This was also most team members' first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project, but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities. ## What's next for macroS We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable, and once the database is complete, it can be made valuable to anybody who would like to monitor their health and nutrition more closely. Letting users record their age, gender, weight, height, and any dietary diseases could help macroS suggest what their goals should be; in addition, we could build custom queries for certain profiles of individuals. For example, if a diabetic person asks macroS whether they can eat a chocolate bar for lunch, macroS would tell them no, because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this!
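For illustration, here is a hedged Python sketch of the fulfillment branching (the real functions are Node.js on Firebase): the webhook reads which intent Dialogflow matched (log a meal versus request a breakdown) and responds accordingly. The intent and parameter names are illustrative assumptions.

```python
# Hypothetical Dialogflow fulfillment webhook with two branches.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fulfill", methods=["POST"])
def fulfill():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    foods = body["queryResult"]["parameters"].get("food", [])
    if intent == "LogMeal":                      # hypothetical intent name
        reply = f"Logged: {', '.join(foods)}"    # real app: write to MongoDB
    else:                                        # nutritional breakdown request
        reply = f"Here's the breakdown for {', '.join(foods)}"  # query Nutritionix
    return jsonify({"fulfillmentText": reply})
```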
partial
## Inspiration The idea for Green Eats started after we recognized the growing crisis regarding greenhouse gas emissions globally, and through some quick research discovered that food production alone (including farming, transportation, packaging, etc.) was responsible for over **one quarter** of these emissions. Armed with this information, we felt strongly that we could meaningfully address these concerning metrics by incentivizing individuals around the world to make small changes in their food-purchasing habits in favour of foods that require fewer emissions to produce. ## What it does Green Eats is a web-based tool that analyzes an image of a particular item of food, classifies it into the corresponding product category in the database, and returns quantitative information describing the environmental resources that are consumed or expended during the production of that product category. From this data, the user can make more informed decisions about which products to purchase by comparing metrics between different foods. To incentivize low-emission food purchases, points will be awarded on a per-item basis and will accumulate over time towards reaching long-term sustainability goals. ## How we built it We built a React frontend with dynamic scripting. This was linked to a Node Express server backend, which handles business logic and interfaces with the Google Cloud Vision API to identify objects in images and perform analytics on the environmental impact of the identified objects. ## Challenges we ran into One of the challenges we faced was trying to work with, and draw correct implications from, the vast amounts of data pertaining to resource consumption and expenditure during various stages of the supply chain of food production. Although we were fortunate to have had access to high-quality data to inform our product design process, it was still difficult to understand how it was all organized. ## Accomplishments that we're proud of We're most proud of building a full-stack prototype that is actually functional, because it is ten times more satisfying when it works than if we hard-coded the whole thing. ## What we learned We learned how to overcome the difficulties of passing images through restful APIs and of optimizing HTML for mobile users. ## What's next for Green Eats We would like to continue to enhance our mobile user experience, and expand the database of emissions metrics.
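Here is a hedged sketch of the analysis step: Cloud Vision labels the photo, the best-matching label is looked up in an emissions table, and points scale inversely with the footprint. The backend is actually Node; the Python client is shown for brevity, and the per-kilogram figures and scoring rule are illustrative, not the app's real data.

```python
# Label a food photo with Cloud Vision, then score it against an emissions table.
from google.cloud import vision

EMISSIONS_KG_CO2E = {"beef": 60.0, "cheese": 21.0, "tofu": 3.0, "lentils": 0.9}

def score_item(image_bytes: bytes):
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    for label in response.label_annotations:       # highest confidence first
        food = label.description.lower()
        if food in EMISSIONS_KG_CO2E:
            footprint = EMISSIONS_KG_CO2E[food]
            points = max(1, int(100 / footprint))  # fewer emissions, more points
            return food, footprint, points
    return None  # nothing recognized in our table
```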
## Inspiration As developers, we waste a lot of time and energy on repetitive compilations and testing of web and C technologies. We lose our flow and creativity in those monotonous, recurring steps, which shows in the final web or low-level-language product. We wanted to solve that by giving developers access to tools that preserve flow and increase creativity. ## What it does With ..., developers can save a lot of time through access to premier algorithms and JS scripts, with compilation results reflected and displayed in real time. Every script and part is compiled and its result displayed at the end of every keystroke, and the work is safely deployed via the cloud. Simple, but elegant and efficient. ## How I built it For the MVP, I built it off the Emacs editor, with the functionality added as a JS plugin and a server running to monitor and deploy changes in real time (a sketch of such a watcher follows this write-up). ## Challenges I ran into Configuring the server to actively log and monitor changes in the code, and to simultaneously push them to the local frontend and the backend, was the biggest barrier for me. After 24 hours of sleeplessness, I also learned that Emacs is a very difficult kid to handle. ## Accomplishments that I'm proud of Making everything "just work". ## What I learned I learned a ton about how to make servers do your bidding (that is, work), and about plenty of bad practices in code deployment. I also came to know that when you are sleep-deprived and it's 4 in the morning, please use Vim, not Emacs.
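A hedged sketch of the watch-and-redeploy loop, using Python's watchdog library for illustration (the MVP itself hooks into Emacs with a JS plugin): every save triggers a rebuild and push. The build command and watched path are placeholders.

```python
# Watch a source tree and rebuild on every save.
import subprocess
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class Recompile(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith((".js", ".c")):
            subprocess.run(["make", "build"])   # placeholder build/deploy step
            print(f"rebuilt after change to {event.src_path}")

observer = Observer()
observer.schedule(Recompile(), path="src", recursive=True)
observer.start()
observer.join()
```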
## 🌱 Inspiration With the ongoing climate crisis, we recognized a major gap in the incentives for individuals to make greener choices in their day-to-day lives. People want to contribute to the solution, but without tangible rewards, it can be hard to motivate long-term change. That's where we come in! We wanted to create a fun, engaging, and rewarding way for users to reduce their carbon footprint and make eco-friendly decisions. ## 🌍 What it does Our web app is a point-based system that encourages users to make greener choices. Users can: * 📸 Scan receipts using AI, which analyzes purchases and gives points for buying eco-friendly products from partner companies. * 🚴‍♂️ Earn points by taking eco-friendly transportation (e.g., biking, public transit) by tapping their phone via NFC. * 🌿 See real-time carbon emission savings and get rewarded for making sustainable choices. * 🎯 Track daily streaks, unlock milestones, and compete with others on the leaderboard. * 🎁 Browse a personalized rewards page with custom suggestions based on trends and current point total. ## 🛠️ How we built it We used a mix of technologies to bring this project to life: * **Frontend**: Remix, React, ShadCN, Tailwind CSS for smooth, responsive UI. * **Backend**: Express.js, Node.js for handling server-side logic. * **Database**: PostgreSQL for storing user data and points. * **AI**: GPT-4 for receipt scanning and product classification, helping to recognize eco-friendly products. * **NFC**: We integrated NFC technology to detect when users make eco-friendly transportation choices. ## 🔧 Challenges we ran into One of the biggest challenges was figuring out how to fork the RBC points API, adapt it, and then code our own additions to match our needs. This was particularly tricky when working with the database schemas and migration files. Sending image files across the web also gave us some headaches, especially when incorporating real-time processing with AI. ## 🏆 Accomplishments we're proud of One of the biggest achievements of this project was stepping out of our comfort zones. Many of us worked with a tech stack we weren't very familiar with, especially **Remix**. Despite the steep learning curve, we managed to build a fully functional web app that exceeded our expectations. ## 🎓 What we learned * We learned a lot about integrating various technologies, like AI for receipt scanning and NFC for tracking eco-friendly transportation. * The biggest takeaway? Knowing that pushing the boundaries of what we thought was possible (like the receipt scanner) can lead to amazing outcomes! ## 🚀 What's next We have exciting future plans for the app: * **Health app integration**: Connect the app to health platforms to reward users for their daily steps and other healthy behaviors. * **Mobile app development**: Transfer the app to a native mobile environment to leverage all the features of smartphones, making it even easier for users to engage with the platform and make green choices.
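Here is a hedged sketch of the receipt-analysis step: the OCR'd line items are sent to the OpenAI API and labeled eco-friendly or not. The model name, prompt, and output format are illustrative assumptions, not the app's exact pipeline.

```python
# Classify receipt line items with the OpenAI chat API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_items(line_items: list[str]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the team describes using GPT-4
        messages=[
            {"role": "system",
             "content": "Label each grocery line item ECO or NOT_ECO, one per line."},
            {"role": "user", "content": "\n".join(line_items)},
        ],
    )
    return resp.choices[0].message.content

print(classify_items(["oat milk 1L", "ground beef 500g", "reusable tote bag"]))
```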
losing
## Inspiration We got the idea when we asked ourselves the question: how can we better make use of the large amounts of data in the world to meet people's privacy and healthcare needs? We all recall being at the doctor's office and seeing the doctor write long notes. We then came up with a potentially impactful idea that is rewarding to create, both conceptually and technically. ## What it does Imagine your research has finally reached the stage of clinical trials. Finding the right participants who meet the strict trial criteria (specific combinations of illnesses, prescription drugs, medical devices, surgical history, and anatomy) is paramount to a successful trial. Currently, researchers have to manually search through many medical notes to identify potential candidates. Doctors already write medical notes to a centralized repository, but unstructured data is not useful on its own. TrialLink uses privacy-minded natural language processing to extract medical information from unstructured medical notes (an illustrative sketch follows this write-up). We offer researchers a platform for advanced queries to easily find candidates for medical trials that exactly match their strict requirements. Our platform uses HIPAA-compliant technologies. It allows participants who have previously consented to participate in clinical trials to receive timely notifications when they are matched to a trial. We also implemented a proxy server to securely route our API calls. ## How we built it Our final application involved a full frontend, a full Spring-based backend server, integrated database tables, Velo code, the Google Cloud API, JavaScript, and CSS. ## Challenges we ran into Using the enterprise version of Google Cloud had high overhead and required an understanding of different security models. The database schemas and querying functions were complex. ## Accomplishments that we're proud of We are proud of using NLP in creative ways to extract information from a preexisting, abundant source of unstructured notes, improving access to healthcare for patients and minimizing the time researchers spend looking for appropriate trial candidates. ## What's next for TrialLink Add more querying options. Connect to existing databases of medical notes. Make it a startup!
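To illustrate the extraction idea, here is a deliberately simple dictionary matcher that pulls structured condition and drug mentions out of an unstructured note so they become queryable. The production system uses privacy-minded NLP rather than keyword lists; this sketch and its vocabularies are illustrative only.

```python
# Turn an unstructured note into queryable structured fields.
CONDITIONS = {"type 2 diabetes", "hypertension", "asthma"}
DRUGS = {"metformin", "lisinopril", "albuterol"}

def extract(note: str) -> dict:
    text = note.lower()
    return {
        "conditions": sorted(c for c in CONDITIONS if c in text),
        "drugs": sorted(d for d in DRUGS if d in text),
    }

note = "Pt with type 2 diabetes, well controlled on metformin."
print(extract(note))  # {'conditions': ['type 2 diabetes'], 'drugs': ['metformin']}
```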
* [Deployment link](https://unifymd.vercel.app/) * [Pitch deck link](https://www.figma.com/deck/qvwPyUShfJbTfeoPSjVIGX/UnifyMD-Pitch-Deck?node-id=4-71) ## 🌟 Inspiration Long lists of patient records make it challenging to locate **relevant health data**. This can lead to doctors providing **inaccurate diagnoses** due to insufficient or disorganized information. Unstructured data, such as **progress notes and dictated information**, are not stored properly, and smaller healthcare facilities often **lack the resources** or infrastructure to address these issues. ## 💡 What it does UnifyMD is a **unified health record system** that aggregates patient data and historical health records. It features an **AI-powered search bot** that leverages a patient's historical data to help healthcare providers make more **informed medical decisions** with ease. ## 🛠️ How we built it * We started with creating an **intuitive user interface** using **Figma** to map out the user journey and interactions. * For **secure user authentication**, we integrated **PropelAuth**, which allows us to easily manage user identities. * We utilized **LangChain** as the large language model (LLM) framework to enable **advanced natural language processing** for our AI-powered search bot. * The search bot is powered by **OpenAI**'s API to provide **data-driven responses** based on the patient's medical history. * The application is built using **Next.js**, which provides **server-side rendering** and a full-stack JavaScript framework. * We used **Drizzle ORM** (Object Relational Mapper) for seamless interaction between the application and our database. * The core patient data and records are stored **securely in Supabase**. * For front-end styling, we used **shadcn/ui** components and **TailwindCSS**. ## 🚧 Challenges we ran into One of the main challenges we faced was working with **LangChain**, as it was our first time using this framework. We ran into several errors during testing, and the results weren't what we expected. It took **a lot of time and effort** to figure out the problems and learn how to fix them as we got more familiar with the framework. ## 🏆 Accomplishments that we're proud of * Successfully integrated **LangChain** as a new large language model (LLM) framework to **enhance the AI capabilities** of our system. * Implemented all our **initial features on schedule**. * Effectively addressed key challenges in **Electronic Health Records (EHR)** with a robust, innovative solution to provide **improvements in healthcare data management**. ## 📚 What we learned * We gained a deeper understanding of various patient safety issues related to the limitations and inefficiencies of current Electronic Health Record (EHR) systems. * We discovered that LangChain is a powerful tool for Retrieval-Augmented Generation (RAG), and it can effectively run SQL queries on our database to optimize data retrieval and interaction. ## 🚀 What's next for UnifyMD * **Partnership with local clinics** to kick-start our journey into improving **healthcare services** and **patient safety**. * **Update** to include **speech-to-text** feature to increase more time **patient and healthcare provider’s satisfaction**.
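Here is a hedged sketch of the retrieval step behind the search bot, shown without the LangChain framework to keep it self-contained: embed the clinician's question, rank chunks of the patient's history by cosine similarity, and hand the top matches to the LLM. The `embed` function below is a toy stand-in for the pipeline's real embedding model.

```python
# RAG-style retrieval over patient history chunks.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

def top_history(question: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(question)
    vecs = np.array([embed(c) for c in chunks])
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

history = [
    "2021: diagnosed with type 2 diabetes; started metformin.",
    "2023: flu vaccination administered.",
    "2024: HbA1c improved to 6.8% on current regimen.",
]
# The retrieved chunks are prepended to the prompt so the model answers from
# the patient's actual record rather than from general knowledge.
print(top_history("How is the patient's diabetes trending?", history, k=2))
```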
## Inspiration
An estimated 33 million disability-adjusted life years (DALYs) are lost each year due to medical errors, making bad health care as grave a public health threat as malaria or tuberculosis. At least two-thirds of medical errors occur in the global south. In the US, we know that an iterative, data-driven approach to health care improvement is effective in reducing medical errors. But in the global south, providers don't have the data analytics tools to do this. Thanks to programs like Open Data Kit and DHIS2, we're learning a tremendous amount about healthcare performance in remote settings. Unfortunately, this information is designed for policymakers, who create one-size-fits-all solutions, and not for health care providers. If we informed providers with their own performance data, wouldn't they be better positioned to create site-specific solutions?

## What it does
CLIPaed receives CSV files containing health care performance data and creates statistical process control charts that can be easily interpreted by clinicians. Health care teams can identify operational problems affecting their patients by analyzing outlying data, process variation, and mean performance.

## How we built it
This was built in R using Shiny, and is hosted on shinyapps.io.

## Challenges we ran into
We wanted to create an interface that received data directly, as this would guide the type and quality of data collected. We were unable to do this in the time allotted.

## Accomplishments that we're proud of
We have built a web app that generates gold-standard data visualizations for health care quality improvement. To our knowledge, this has not been done before.

## What we learned
We are physicians, not web developers! We learned 1) that wrappers and other tools put web development within reach of people with limited experience in it; 2) that Shiny can leverage our skills in R to build powerful web apps; and 3) that the provider perspective is important in understanding and using health care data.

## What's next for CLIPaed
We have constructed a prototype and proof of concept at TreeHacks Health. We now have three tasks ahead of us:

* First, to make data collection as easy as possible, we need to integrate our data visualization tool with a live data entry system.
* Second, we need to help users identify the root cause of a problem, determine how to measure it, and identify the best solution. Fortunately, a form-based tool is already being developed by the Baylor International Pediatric AIDS Initiative (BIPAI); we hope to integrate our app with it as an API.
* Third, we need to beta test our app with users in the global south.
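The writeup doesn't include the R/Shiny code, but the arithmetic behind the charts is compact. Below is a minimal sketch in Python (chosen for illustration; the actual app is written in R) of an individuals (XmR) control chart: a center line at the mean, limits at plus or minus 3 sigma estimated from the mean moving range, and out-of-limit points flagged as special-cause signals. The sample data is hypothetical, and a real implementation would add the usual run rules.

```python
import numpy as np
import matplotlib.pyplot as plt

def control_chart(values):
    """Plot an individuals (XmR) control chart and return the flagged points."""
    values = np.asarray(values, dtype=float)
    center = values.mean()
    # XmR convention: sigma is estimated from the mean moving range / d2 (1.128)
    sigma = np.abs(np.diff(values)).mean() / 1.128
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    flagged = (values > ucl) | (values < lcl)   # special-cause signals

    x = np.arange(len(values))
    plt.plot(x, values, marker="o", label="measure")
    plt.scatter(x[flagged], values[flagged], color="red", zorder=3,
                label="special-cause signal")
    for level in (center, ucl, lcl):
        plt.axhline(level, linestyle="--", color="gray")
    plt.xlabel("observation")
    plt.legend()
    plt.show()
    return flagged

# hypothetical monthly complication rate per 100 admissions
print(control_chart([4.1, 3.8, 4.4, 3.9, 4.0, 4.2, 3.7, 4.3, 3.9, 4.1, 8.0, 4.0]))
```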
partial
## Inspiration
We hate making resumes and customizing them for each employer, so we created a tool to speed that up.

## What it does
A user creates "blocks" which are saved. They can then pick and choose which ones they want to use.

## How we built it
* [Node.js](https://nodejs.org/en/)
* [Express](https://expressjs.com/)
* [Nuxt.js](https://nuxtjs.org/)
* [Editor.js](https://editorjs.io/)
* [html2pdf.js](https://ekoopmans.github.io/html2pdf.js/)
* [mongoose](https://mongoosejs.com/docs/)
* [MongoDB](https://www.mongodb.com/)
## Inspiration
Algorithm interviews... suck. They're more a test of sanity (and your willingness to "grind") than a true performance indicator. That being said, large language models (LLMs) like Cohere and ChatGPT are rather *good* at doing LeetCode, so why not make them do the hard work...?

Introducing: CheetCode. Our hack takes the problem you're currently screensharing, feeds it to an LLM target of your choosing, and gets the solution. But obviously, we can't just *paste* in the generated code. Instead, we wrote a non-malicious (we promise!) keylogger to override your key presses with the next character of the LLM's solution. Mash your keyboard and solve hards with ease.

The interview doesn't end there, though. An email notification will appear on your computer afterward with the subject "Urgent... call asap." Who is it? It's not mom! It's CheetCode, with a detailed explanation including both the time and space complexity of your code. Ask your interviewer to 'take this quick' and then breeze through the follow-ups.

## How we built it
The hack is the combination of three major components: a Chrome extension, a Node (actually... Bun) service, and a Python script.

* The **extension** scrapes LeetCode for the question and function header, and forwards the context to the Node (Bun) service
* The **Node service** prompts an LLM (e.g., Cohere, gpt-3.5-turbo, gpt-4) and forwards the response to a keylogger written in Python
* Finally, the **Python keylogger** lets the user toggle cheats on (or off...) and replaces the user's input with the LLM output, seamlessly

(Why the complex stack? Well... the extension makes it easy to interface with the DOM, the LLM prompting is best written in TypeScript to leverage the [TypeChat](https://microsoft.github.io/TypeChat/) library from Microsoft, and Python had the best tooling for creating a fast keylogger.)

(P.S. hey Cohere... I added support for your LLM to Microsoft's project [here](https://github.com/michaelfromyeg/typechat). gimme job plz.)

## Challenges we ran into
* HTML `Collection` data types are not fun to work with
* There were no actively maintained cross-platform keyloggers for Node, so we needed another service
* LLM prompting is surprisingly hard... the models were not as smart as we were hoping (especially at producing reliable, consistent outputs)

## Accomplishments that we're proud of
* We can now solve any LeetCode hard in 10 seconds
* What else could you possibly want in life?!
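As a rough illustration of the key-substitution trick described above, here is a minimal Python sketch using pynput (an assumption; the writeup doesn't name the keylogging library). Each real keypress is suppressed and one character of a pre-fetched solution is typed in its place; `solution.py` is a hypothetical file holding the LLM output.

```python
# Minimal sketch of the key-substitution idea (pynput assumed).
from pynput import keyboard

solution = open("solution.py").read()   # hypothetical LLM-generated solution
cursor = 0
controller = keyboard.Controller()

def on_press(key):
    global cursor
    if key == keyboard.Key.esc:          # ESC toggles cheats off
        return False                     # stops the listener
    if cursor < len(solution):
        controller.type(solution[cursor])  # emit the next solution character
        cursor += 1

# suppress=True swallows the real keystrokes so only the injected characters
# reach the editor. Beware: on some platforms the injected events are also
# captured by the listener, so a real tool needs a guard against that loop.
with keyboard.Listener(on_press=on_press, suppress=True) as listener:
    listener.join()
```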
## Inspiration
When you've travelled, have you ever been curious about other tourists' experiences? Or been eager to share your own with someone? Tourist attractions welcome travellers from all over the world, making them a unique opportunity to see the world from another person's perspective.

## What it does
* Leave voice memos at tourist attractions
* Hear voice memos that others have left behind
* Rate the memos so that the most interesting ones rise to the top

## How we built it
* React Native for the mobile application
* Google Places API for location purposes
* MongoDB database for storing account information and voice memos

## Challenges we ran into
Quickly learning new tools and debugging errors to get them to work.

## Accomplishments that we're proud of
We all learned something new!

## What we learned
How to quickly learn new tools and create a functional prototype.

## What's next for Voice Memo
Adding more features and functionality.
winning
## Inspiration
This page is driven by 'Onefan' itself, which has been a true labour of love and devotion for about 10 months. 'Onefan' is a site that aims to give people amazing conversations through a unique chat UI, using an algorithm to match you with someone who shares your interest in a TV show (more kinds of content coming soon). I built this cleanly designed beta sign-up page because my old sign-up page was incredibly mediocre: I looked at the problem and asked what I could do to improve it. I always strive for better. This beta sign-up page was only about a third finished before Hack Western, but now it is all done!

## What it does
This page uses Firebase, JSON/JS, and EmailJS to collect sign-up data, package it as JSON, and push it to a server so I know who signed up for the beta. It also uses the clever EmailJS library and some tricky code to email the person who just signed up.

## How I built it
Just the regular web cocktail of:

* JS
* CSS
* HTML

as well as some external APIs:

* Firebase
* EmailJS

## Challenges I ran into
* Asynchronous callback issues with the JSON object of user data.
* Working with the EmailJS library.
* [unrelated] Creating the matchmaking algorithm for the website.

## Accomplishments that I'm proud of
* The clean, minimalistic, functional UI

## What I learned
* How to send parsed JSON to Firebase quickly

## What's next for Onefan Beta Page
* Once the site launches, this page will have already served its purpose :).
## Inspiration
Making things efficient was our main inspiration for this project.

## What it does
Uses an algorithm to determine the best route for postal trucks delivering parcels.

## How we built it
The back end was written entirely in Java, while the front end was constructed with HTML, CSS, JavaScript, and Python. We first worked out the core logic: an algorithm that minimizes delivery time while also minimizing the costs incurred along the optimized path. The back end was then wired up to the front-end visualization in the final code.

## Challenges we ran into
The front end did not turn out well; we forgot to add variables to the color properties in CSS.

## Accomplishments that we're proud of
Figuring out how the algorithm works and actually putting it to use was our greatest achievement during this project. Bug fixing was also very satisfying in the end.

## What we learned
On the back end we learned a lot about abstraction and how to represent a real-world problem in code. We also learned about algorithms, coming up with our own and combining it with existing ones.

## What's next for RoadOptimization
RoadOptimization needs a better front end that is actually finished, and it could be exposed as an API for real-time optimization.
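The writeup doesn't specify which routing algorithm the team used, so as a stand-in, here is the classic greedy nearest-neighbour heuristic in Python (their backend is Java; Python is used here for brevity): always drive to the closest undelivered parcel next.

```python
# Greedy nearest-neighbour routing: an illustrative heuristic for the
# parcel-delivery problem described above, not the team's actual algorithm.
import math

def nearest_neighbour_route(depot, stops):
    """Start at the depot and always visit the closest unvisited stop next."""
    route, current = [depot], depot
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# hypothetical (x, y) coordinates for a depot and four delivery stops
print(nearest_neighbour_route((0, 0), [(2, 3), (5, 1), (1, 1), (4, 4)]))
```

Nearest-neighbour is fast but not optimal; a production router would refine its output with 2-opt swaps or hand the problem to a proper VRP solver.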
Hello and thank you for judging my project. Below are two links and an explanation of the two videos. Due to the time constraints of some hackathons, I have a shorter video for judges who need one. By default I am placing the shorter video up above, but if you have time, or your hackathon allows it, please go ahead and watch the full video at the link below. Thanks!

[3 Minute Video Demo](https://youtu.be/8tns9b9Fl7o)

[5 Minute Demo & Presentation](https://youtu.be/Rpx7LNqh7nw)

For any questions or concerns, please email me at [joshiom28@gmail.com](mailto:joshiom28@gmail.com)

## Inspiration
Resource extraction has tripled since 1970. That leaves us on track to run out of non-renewable resources by 2060. To fight this extremely dangerous issue, I used my app development skills to help everyone support the environment. As a person residing in this environment, I felt that I needed to use my technological skills to help us take better care of it, especially in industrial countries such as the United States. I named the LORAX app after the symbolism of the Lorax, who speaks for the trees.

(Side note: when referencing Firebase, I mean Firebase as a whole, since two different databases were used: one to upload images and the other to upload data, e.g. form data, in real time. Firestore is the realtime database for user data; Firebase Storage is for image uploads.)

## Main Features of the App
To start out, we are prompted with the **authentication panel**, where we can either sign in with an existing email or sign up with a new account. Since we are new, we will create a new account. After registering, we are signed in and land on the home page of the app. Here I type in my name, email, and password and log in. If we go back to Firebase Authentication, we see a new user pop up, and a new user is added to Firestore with their associated data, such as their **points, user ID, name, and email**.

Now let's go back to the main app. On the home page we can see the various things we can do. Let's start with the Rewards tab, where we can choose rewards depending on the number of points we have. Pressing "redeem rewards" takes us to the rewards tab, where we can choose coupons from various companies and redeem them with our points. Since we start out with zero points, we can't redeem any rewards right now, so let's go back to the home page.

The first three pages I will introduce are part of the point incentive system for purchasing items that help the environment. If we press the "view requests" button, we are navigated to a page where we can view the requests we have made in the past. These requests are used to redeem points for items you have purchased that help the environment. Here we can **view details and the status of each request**, but since we haven't submitted any yet, the list is empty upon refreshing. Let's come back to this page after submitting a request.

If we go back, we can press the "request rewards" button, which navigates to a form where we can **submit details about our purchase along with an image of proof, to ensure the user truly did purchase the item**.

After pressing submit, **this data and image are pushed to Firebase Storage (for the picture) and Firestore (for the other data)**, which I will show in a moment. In Firebase, we see a document with the details of the request we submitted, and in Storage we can **view the submitted image**. From there, we can review the details, approve the status, and assign points to the user based on the request.

Now let's go back to the app itself and open the "view requests" tab again now that we have submitted a request. We see our request, its status, and other details such as how many points we received if the request was approved, the time, and the date.

Next is the Footprint Calculator tab, where you can input some details and see the global footprint your house, food, and overall lifestyle have on the environment and its resources. Here I type in some data and see the results. **It says I would take up 8 Earths if everyone used the same amount of resources as me.** The goal is to reach only one Earth, since then the Earth and its resources would be sustainable for much longer. We can also share the result with our friends to encourage them to do the same.

The last tab is the Savings tab. Here we can find simple daily tasks that not only save thousands and thousands of dollars but also significantly help sustain the environment. **For example, there are things we can do to save on transportation; clicking on a saving navigates to a website where we can see how to achieve it ourselves.**

This has been the demonstration of the LORAX app. Thank you for watching.

## How I built it
For navigation, I used React Native Navigation to create the authentication navigator and the tab and stack navigators in each of the respective tabs.

### For the incentive system
I used Google Firebase's Firestore to view, add, and upload details and images to the cloud for review and data transfer. For authentication, I also used **Google Firebase's Authentication**, which allowed me to create custom user data such as the user, the points associated with them, and the requests associated with their **user ID**. Overall, **Firebase made it EXTREMELY easy** to create a high-level application. I used Google Firebase for this entire application's backend.

### For the UI
For tabs such as the request submitter and request viewer, I used the react-native-base library to create modern-looking components, which allowed me to build a modern-looking application.

### For the prize redemption and savings sections
I created the UI from scratch, trialing and erroring different designs and shadow effects to make it look cool. I used react-native-deeplinking to navigate to the specific websites for the savings tab.

### For the Footprint Calculator
I embedded the **Global Footprint Network's Footprint Calculator** in this tab for the user's reference. The website is shown in the **tab and is functional in that UI**, just like the original site.

I used Expo for wireless application testing, allowing me to develop the app without any cables over the Wi-Fi network. For the request submission tab, I used react-native-base components to create the form UI elements and Firebase to upload the data. For the request viewer, I used Firebase to retrieve and display the data, as seen in the demo.

## Challenges I ran into
One last-second challenge was manipulating the database on Google Firebase. While creating the video, I realized that some parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated, both in the state and in Firebase, had gotten mixed up. Another issue I encountered was retrieving the image from Firebase: I was able to log the URL, but due to some issues with the state I wasn't able to pass the URI to the image component, and for lack of time I left that out. Firebase made it very easy to push, read, and upload files after installing its dependencies. Thanks to all the great documentation and tutorials, I was able to implement the rest effectively.

## What I learned
I learned a lot. Prior to this, I had no experience with **data modelling or creating custom user data points**. However, thanks to my previous experience with **Firebase** and some reading of the documentation, I found Firebase's built-in commands for querying and adding specific user IDs to the database, allowing me to look up data by UID. Overall, it was a great experience learning how to model data, use authentication, and create and modify custom user data with Google Firebase.

## Theme and How This Helps The Environment
Overall, this application uses **incentives and education** about the user's impact on the environment to better help the environment.

## Design
I created a comprehensive yet simple UI to make it easy for users to navigate and understand the purpose of the application. Additionally, I used the previously mentioned utilities to create a modern look.

## What's next for LORAX (Luring Others to Retain our Abode Extensively)
I hope to build my **own backend in the future**, using **ML** and **AI** to classify the submitted images and details to automate the request process, and to **create my own footprint calculator** rather than using the one provided by the Global Footprint Network.
losing
## Inspiration
I wanted to find the next big thing, like Bitcoin. I once saw it happen with Binance on the cryptocurrency exchange market, via a Python package repo. Is there a way to systematically discover the next big thing? It boils down to getting access to asymmetric information, a concept explored in economic theory: individuals can make more informed decisions because they have access to more information than the other party.

## What it does
This project gives you access to asymmetric information. It analyzes thousands of potential data entries from various data feeds to identify hot topics. It is essentially a custom "trends" finder.

## How I built it
It finds trends by scraping the package registries of Python, Ruby, and JS packages, and by analyzing arXiv research papers. It then creates a custom dashboard for the user to consume the content of these registries.

## Challenges I ran into
Performing keyword extraction proved difficult without much knowledge of NLP. Additionally, building a set of scrapers robust to API changes, across millions of scraped records, was difficult.

## Accomplishments that I'm proud of
1. Coming up with non-trivial, non-obvious data feeds such as the Python package registry.
2. Discovering asymmetric information as a theoretical underpinning.
3. Auto-generating code through a clever repurposing of a templating framework (Jinja).
4. Generating millions of data entries after writing only a few lines of code for each data source.

## What I learned
1. Building resilient APIs is difficult.
2. Why asymmetric information captures the problem well.
3. How important interfaces and inheritance can be in removing code duplication.

## What's next for Asymmetry: Towards finding the next big thing like Bitcoin
Building the last step: a data analysis layer using ML or NNs on the raw data. Planned features include keyword extraction, line charts for trends over time, and word clouds.
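As a taste of the registry-scraping feed described above, here is a small Python sketch that polls PyPI's public recent-updates RSS feed and tallies name tokens as a crude trend signal. The tokenization and the "trend" heuristic are illustrative simplifications, not the project's actual pipeline.

```python
# Poll PyPI's recent-updates RSS feed and count name tokens as a rough
# trend signal. Illustrative only; the real project scrapes several
# registries plus arXiv and feeds a dashboard.
import re
from collections import Counter
from xml.etree import ElementTree

import requests

FEED = "https://pypi.org/rss/updates.xml"

def trending_tokens(top=15):
    xml = requests.get(FEED, timeout=10).text
    titles = [item.findtext("title", "")
              for item in ElementTree.fromstring(xml).iter("item")]
    tokens = []
    for title in titles:
        # entries look like "some-package 1.2.3"; keep the name, split it up
        name = title.split(" ")[0].lower()
        tokens += [t for t in re.split(r"[-_.]", name) if t]
    return Counter(tokens).most_common(top)

print(trending_tokens())
```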
## Inspiration
A [paper](https://arxiv.org/pdf/1610.09225.pdf) by Indian Institute of Technology researchers described how stock predictions using sentiment analysis achieved a higher accuracy rate than those analyzing previous trends. We decided to implement that idea and create a real-time, self-updating web app that visually shows how the public feels about the big-name stocks. And what better way than to use the most popular and relatable images on the web: memes?

## What it does
The application retrieves text content from Twitter, performs sentiment analysis on tweets, and generates meme images based on the sentiment.

## How we built it
The implementation is divided into four parts: scraping data, processing data, analyzing data, and visualizing data. For scraping, we planned to use Python data-scraping libraries, targeting websites where users are active and speak their minds; we wanted unbiased, representative data to give us a more accurate result. For processing, since scraping the web yields a lot of noise and we wanted concise data that is cheap to feed to our algorithm, we used regular expressions to build a generic template that ignores emoticons and the like (see the sketch below).

## Challenges we ran into
We encountered technical, architectural, and timing issues. In terms of technical problems, when scraping data from Twitter we ran into noise: many users include emoticons and uncommon symbols in their tweets, and that information doesn't help us gauge how users actually feel about certain things. To solve this, we used regular expressions to form a template that keeps only the useful data. However, given the limited time at a hackathon, we also increased efficiency by using Twitter's Search API. Furthermore, towards the end of the project we realized that the MemeAPI had been discontinued, so generating memes with it was no longer possible.

## Accomplishments that we're proud of
* Designing the project around a multi-server architecture
* Utilizing Google Cloud Platform, the Twitter API, and the MemeAPI

## What we learned
* Google Cloud Platform, especially the Natural Language and Vision APIs
* AWS
* React

## What's next for $MMM
* Getting real-time big data, probably with Spark
* Including more data visualization methods, possibly with D3.js
* Designing a better algorithm to find memes reflecting public sentiment towards a company
* Creating more dank memes
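Here is a minimal Python sketch of the regular-expression cleanup template described above (the exact patterns are assumptions): strip URLs, mentions, cashtags/hashtags, and emoticon characters before the text reaches sentiment analysis.

```python
# Illustrative tweet-cleaning template; the project's actual patterns differ.
import re

def clean_tweet(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)     # drop URLs
    text = re.sub(r"[@$#]\w+", "", text)         # drop mentions, cashtags, hashtags
    text = re.sub(r"[^\x00-\x7F]+", "", text)    # drop emoji / emoticon symbols
    return re.sub(r"\s+", " ", text).strip()     # collapse leftover whitespace

print(clean_tweet("$TSLA to the moon 🚀🚀 @elonmusk https://t.co/xyz"))
# -> "to the moon"
```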
## Inspiration
Students often don't have a financial background and want to begin learning about finance, but the sheer number of resources online makes it difficult to know which articles are good to read. We thought the best way to tackle this problem was to use a machine learning technique known as sentiment analysis to determine the tone of articles, allowing us to recommend more neutral options to users and provide a visual view of the available articles so that users can make more informed decisions about what they read.

## What it does
This product is a web-based application that performs sentiment analysis on a large set of articles to help users find biased or unbiased articles. We also offer three data visualizations per topic: an interactive graph showing the distribution of article sentiment scores, a heatmap of the sentiment scores, and a word cloud of common keywords across the articles.

## How we built it
Around 80 unique articles from 10 different domains were scraped from the web using Scrapy. This data was then processed with the help of Indico's machine learning API, which gave us the tools to perform sentiment analysis on all of our articles, the main feature of our product. We further used the API's summarize feature to create shorter descriptions of each article for our users. The Indico API also powers the other two data visualizations. The heatmap, created with Tableau, takes the sentiment HQ scores to better visualize and compare articles and the differences between their sentiment scores. The word cloud is generated with the wordcloud package, which is built on top of Pillow and matplotlib; it takes keywords produced by the Indico API and displays the most frequent ones across all articles. The web application is powered by Django with a SQLite database on the backend, Bootstrap on the frontend, and is hosted on Google Cloud Platform App Engine.

## Challenges we ran into
The project itself was a challenge, since it was our first time building a web application with Django and hosting on a cloud platform. Another challenge arose in data scraping: different domains place their article titles in different locations and tags, making it difficult to write one scraper that generalizes to many websites. On top of that, the data returned by the scraper was not in a format we could easily manipulate, so small tasks like unpacking dictionaries were needed along the way. On the data visualization side, no graphics library fit our vision for the interactive graph, so we built that on our own!

## Accomplishments that we're proud of
Accomplishing the goals we set out for the project and actually generating useful information in our web application from the data we ran through the Indico API.

## What we learned
We learned how to build websites using Django, generate word clouds using matplotlib and pandas, host websites on Google Cloud Platform, and utilize the Indico API, and we researched various data visualization techniques.

## What's next for DataFeels
Lots of improvements could still be made to this project; here are a few. Our scraper required us to manually run the script for every new link, so an automated scraper that builds the correct data structures and pipelines them directly to our website would be much more ideal. Next, we would expand the website beyond financial categories to any topic that has articles written about it.
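For the word-cloud visualization described above, the `wordcloud` package accepts keyword weights directly; a minimal sketch follows, with made-up keyword frequencies standing in for the Indico API output.

```python
# Render a word cloud from keyword weights. The weights below are
# hypothetical placeholders for what the Indico keyword API would return.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

keyword_weights = {"earnings": 0.9, "dividend": 0.7, "volatility": 0.6,
                   "merger": 0.4, "forecast": 0.35}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(keyword_weights)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```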
losing
## Inspiration
* The Sky: Children of the Light HackMIT challenge

## What it does
* This project looks at a dataset provided by thatgamecompany and identifies meaningful relationships within the game, from simple analyses, such as which countries are more likely to buy items in the Sky shop, to more complex ones, such as detecting abnormal activity.

## How we built it
* Trial runs of various ways to interact with the data in JSON and pickle (.pkl) formats

## Challenges we ran into
* Loading in 4 GB of file content

## Accomplishments that we're proud of
* Pinpointing outlier users while playing around with the dataset and diving deeper into why they stand out.

## Presentation slides
<https://docs.google.com/presentation/d/1jJLsmSnjX9uQ8o-3D2L0lydEbP5nNOnRlI7CTMGZUfY/edit?usp=sharing>
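On the 4 GB loading challenge: one standard remedy is streaming the file in chunks rather than reading it whole. The sketch below assumes the dump is newline-delimited JSON with hypothetical `event` and `country` fields; the actual dataset's schema isn't shown in the writeup.

```python
# Stream a large newline-delimited JSON dump in chunks so the whole 4 GB
# never sits in memory at once. Field names are illustrative assumptions.
import pandas as pd

totals = {}
for chunk in pd.read_json("sky_events.jsonl", lines=True, chunksize=100_000):
    # e.g. tally sky-shop purchases per country, chunk by chunk
    counts = chunk[chunk["event"] == "purchase"]["country"].value_counts()
    for country, n in counts.items():
        totals[country] = totals.get(country, 0) + int(n)

print(sorted(totals.items(), key=lambda kv: -kv[1])[:10])
```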
## Inspiration
We needed a system to operate our Hackspace utilities shop, where we sell materials, snacks, and even the use of some restricted equipment. It needed to be instantaneous, to allow a small amount of debt in case of emergency, and to work with our college ID cards so we could keep track of who is purchasing what.

## What it does
Each student has a set amount of credit assigned to them, which they may spend on our shop's products. To start a transaction, they tap their college ID on the RFID scanner and, after checking their current credit, scan the barcode of the product they want to buy. If the transaction would leave them with less than £5 of debt, they may scan more items or proceed to checkout. Their credit can be topped up through our College Union website, which in turn updates our database with the new amount.

## How we built it
The interface is built from Bootstrap-generated web pages (HTML) that we control with Python, and these are hosted locally. We host all of our databases on Firebase, accessing them through the Firebase API with a Python wrapper.

## Challenges we ran into
Connecting the database to the GUI without the Python program crashing took the majority of our debugging time, but getting it to work in the end was one of the most incredible moments of the weekend.

## Accomplishments that we're proud of
We've never made a web app before, and we have been pleasantly surprised with how well it turned out: clean, easy to use, and modular, making it easy to update and develop. Instead of using technology we wouldn't have available back in London and doing a project with no real future outlook, we chose to tackle a problem we actually needed to solve, and whose solution we will be able to use many times in the future. This means that completing the hackathon will have a real impact on the everyday transactions that happen in our lab. We're also very proud of developing the database on a system we knew nothing about at the beginning of this event: Firebase. It was challenging, but the final result was as great as we expected.

## What we learned
During the hackathon, we improved our coding, teamwork, database management, and GUI development skills, among many others that we will be able to use in our future projects and careers.

## What's next for ICRS Checkout
Because we concentrated on a specific range of useful tasks for this hackathon, we would love to develop a more general system that can be used by different societies, universities, and even schools, operated under the same principles but with a wider range of card identification possibilities, products, and debt allowances.
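Here is a minimal sketch of the checkout logic described above, written against the Firebase Realtime Database REST interface (the team used a Python wrapper; the project URL, schema, and the omission of authentication here are all illustrative assumptions).

```python
# Sketch of the credit check + debit flow. Project URL and data layout are
# hypothetical; a real deployment would also send an auth token.
import requests

BASE = "https://icrs-checkout.firebaseio.com"   # hypothetical project URL
DEBT_LIMIT = -5.00                              # up to £5 of debt is allowed

def purchase(card_uid: str, price: float) -> bool:
    url = f"{BASE}/students/{card_uid}/credit.json"
    credit = requests.get(url, timeout=5).json() or 0.0
    if credit - price < DEBT_LIMIT:
        return False                            # would exceed the debt allowance
    requests.put(url, json=round(credit - price, 2), timeout=5)
    return True

if purchase("04A2B9C1", 1.20):                  # UID from the RFID scanner
    print("Transaction approved")
else:
    print("Insufficient credit")
```

In production, the read-check-write above should be a single transactional update so two scanners can't race each other past the debt limit.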
## Inspiration
Witnessing the atrocities (protests, vandalism, etc.) caused by the recent presidential election, we want to help the general public, especially minorities and the oppressed, stay safe.

## What it does
It provides users with live news updates happening near them, alerts them when they travel near dangerous areas, and gives them an emergency tool to contact their loved ones if they get into a dangerous situation.

## How we built it
* We crawl the latest happenings/events using the Bing News API and summarize them using the SMMRY API.
* Thanks to Alteryx's API, we also crawl tweets that inform users about the latest news around them with good accuracy.
* All of this data is then projected onto Google Maps, which informs users about nearby happenings in an easy-to-understand, summarized format.
* Using Pitney Bowes' API (GeoCode function), we alert the user's closest contacts with the address where the user is located.

## Challenges we ran into
Determining the credibility of tweets is incredibly hard.

## Accomplishments that we're proud of
Actually getting this thing to work.

## What's next for BeSafe
Better UI/UX, and maybe a predictive capability.
losing
## Chess Bird Chess Bird is a web app designed to let you play a game of chess while broadcasting your game on Twitter!
It's a cool video game.
## Inspiration
This game was inspired by the classic game Connect Four, in which players insert disks into a vertical board and try to get four in a row. As big fans of the game, our team sought to improve it by adding new features.

## What it does
The game plays like a regular game of Connect Four, except each player may choose to use their turn to rotate the board left or right and let gravity force the pieces to fall downwards. This seemingly innocent change adds many new layers of strategy and fun to what was already a strategic and fun game. We developed two products to run the game: an iOS app and a web app. Both feature:

1) Local "pass and play" multiplayer
2) Play against multiple AIs we crafted, each of a different skill level
3) Live online games against random opponents, including those on different devices!

## How we built it
The iOS app was built in Swift and the web app was written with JavaScript's canvas. The bulk of the backend, which is crucial for both our online multiplayer and our AIs, came from Firebase's services.

## Challenges we ran into
None of us are particularly artistic, so getting a visually pleasant UI wasn't exactly easy...

## Accomplishments that we're proud of
We are most proud of successfully running cross-platform online multiplayer, which we could not possibly have done without the help of Firebase and its servers and APIs. We are also proud of the AIs we developed, which so far tend to beat us almost every time.

## What we learned
Most of us had very little experience working with backend servers, so Firebase provided us with a lovely introduction to letting our applications flourish online.

## What's next for Gravity Four
Let's get Gravity Four onto even more types of devices and into the App Store!
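The rotate-then-fall rule is the heart of the game, and it is pleasantly compact. Here is a language-neutral Python sketch (the real apps are Swift and JavaScript): the board is modeled as a list of columns with the bottom cell first, so gravity is just dropping the empty cells out of each rotated column.

```python
# Quarter-turn a Gravity-Four board and let the pieces settle.
# Board layout: list of columns, bottom cell first. For non-square boards
# the dimensions swap after a rotation (width becomes height).
def rotate(board, num_rows, clockwise=True):
    # pad each column with None up to the board height
    grid = [col + [None] * (num_rows - len(col)) for col in board]
    if clockwise:
        turned = [list(col) for col in zip(*grid[::-1])]
    else:
        turned = [list(col) for col in zip(*grid)][::-1]
    # gravity: pieces fall, so drop the empty cells out of each new column
    return [[cell for cell in col if cell is not None] for col in turned]

board = [["R"], ["Y", "R"], [], ["Y"]]        # 4 columns, pieces from the bottom
print(rotate(board, num_rows=4))
# -> [['Y', 'Y', 'R'], ['R'], [], []]
```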
losing
## Inspiration
In 2018 there are apps to track almost everything. You can track your financial well-being, food consumption, or even how well you sleep. Mental health is a prominent topic in today's world that is constantly in the public eye. We were surprised to realise that there are no good solutions for tracking your mental health. Our team decided to create an app requiring only a small amount of user interaction (<10 seconds) that could monitor an individual's mental health and reach out to their support network when they may be at risk.

## What it does
HappiMe is a mental health tracking app. Our app sends users a push notification at a random time every day and asks them a simple question: "How are you feeling right now?". Users choose one of five preset answers. This data is captured along with their current location and the time.

## How we built it
The mobile application is built in React Native and connects to a Python Flask server, which applies our machine learning model and stores important information about the user's well-being. The model used to determine unusual behaviour is a density-based clustering algorithm fit over historical data (see the sketch below).

## Challenges we ran into
We ran into issues with idea generation and with using Python on StdLib. It took hours to finally settle on HappiMe, and we cycled through more ideas than we can count. We also ran into issues using StdLib with Python, but were able to work around them by sending our request directly to their MessageBird API using the Python requests library. Additionally, we wanted to show how machine learning could be leveraged if this data were aggregated. However, since we built this app just this weekend, getting historical user data to train our ML models was not very feasible. So, to really show the power of the app, we generated this data by designing a series of Markov decision processes and probability distributions to model how the data might look if a user logged for 100 days.

## Accomplishments that we're proud of
Our application can effectively detect abnormal behaviour for any given user. If a user responds to their daily push notification indicating they are very sad, at a time and place where they are historically happy or content, an option appears to have the app send a text to a close friend. We also implemented data visualisations the user can view, clearly showing the locations and times that often have a negative impact on their mental health.

## What we learned
This weekend we learned a lot about API development, React, and machine learning. The team member who developed the API had never done this type of work before, so it was a brand new experience. We had never used React Native before this weekend, and our prior machine learning knowledge was limited. This project allowed us to use these new tools in a very engaging fashion!

## What's next for HappiMe
The next steps for HappiMe centre on integrating new data sources and creating a therapist portal. HappiMe could be connected to other tracking apps (Fitbit, MyFitnessPal) to capture additional data, including consumption, physical activity, and hours slept. Additionally, weather APIs could be leveraged to include the weather at the time of notification. This data would be used to create a more robust model that can detect anomalies and more accurately describe why a person may be feeling a certain way.

With the additional information we could provide users with context as to why they may be feeling a certain way. This context could help reinforce good habits and point out why a person may be feeling poorly. A therapist portal would enable our app to be used as a tool to monitor mental health remotely and help provide therapists with more information than they may usually have.
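Here is a minimal sketch of the density-based anomaly check described above, using scikit-learn's DBSCAN (the writeup names the technique but not the library). The features, `eps` value, and unscaled units are illustrative; a real model would scale hour-of-day, coordinates, and mood onto comparable ranges.

```python
# Flag a new mood reading as anomalous if it falls outside the dense regions
# of a user's (hour, latitude, longitude, mood) history. Illustrative only.
import numpy as np
from sklearn.cluster import DBSCAN

# columns: hour of day, latitude, longitude, mood (1 = very sad .. 5 = very happy)
history = np.array([
    [9,  43.47, -80.54, 4],
    [10, 43.47, -80.54, 5],
    [9,  43.46, -80.52, 4],
    [21, 43.48, -80.55, 3],
])
new_point = np.array([[9, 43.47, -80.54, 1]])   # very sad at a "happy" time/place

EPS = 2.0
model = DBSCAN(eps=EPS, min_samples=2).fit(history)

# approximate assignment: anomalous if the new point is not within eps of
# any historical point that belongs to a cluster (label != -1 means clustered)
dists = np.linalg.norm(history - new_point, axis=1)
is_anomaly = not np.any((dists <= EPS) & (model.labels_ != -1))
if is_anomaly:
    print("Unusual reading: offer to text a close friend")
```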
## Inspiration
Medical device companies spend on average $20-40K on FDA consulting services and around 9 months just to complete the 510(k) approval process, which allows you to legally market your device. With the rise of software-enabled devices (such as wearables), development times for these devices are shrinking while regulatory timelines haven't changed, hindering and even discouraging startups from going through the FDA process. Our team of 3 MIT graduate students has previously developed wearable medical devices and spent months designing study protocols, revising IRB applications, and doing literature review before even starting pilot testing. After numerous conversations with other medical device entrepreneurs and FDA consulting services, we uncovered widespread frustration with the FDA's overwhelming, non-intuitive, and scattered information landscape, particularly for medical devices. Driven by these challenges, we developed FastTrackFDA: a transformative platform engineered to streamline the FDA journey, from device conception through to FDA clearance.

## What it does
Demystifying the FDA journey, our platform empowers medical device startups and companies by guiding them seamlessly through the approval process. It aids in identifying predicate devices and crafting clinical trial protocols, significantly reducing the reliance on consultants. The FDA's website, often criticized for its cluttered guidance documents, constant revisions, voluminous and complex databases, and lack of personalization, poses a significant challenge, especially for medical device manufacturers. Our solution is an intuitive user interface, crafted with input from device developers, designed to streamline the description and intended use case of your device. It enables a comprehensive understanding of all market entry requirements, facilitating the identification of predicate devices and the design of clinical trials. This approach accelerates the approval process, reduces costs, and diminishes dependence on external consulting services.

(1) Finding substantially equivalent devices: Identifying a substantially equivalent device is crucial in determining the appropriate regulatory pathway for your device. For FDA 510(k) approval, it's imperative that your device is matched with a substantially equivalent "predicate" device. Traditionally, consultants might spend 20-40 hours on this task alone. Our platform leverages vector search and matrix similarities to pinpoint the most compatible predicate devices through a semantic comparison with your device.

(2) Personalizing the regulatory workflow: With the FDA's mandate from October 2023 requiring all 510(k) submissions to follow an e-submission template, our platform standardizes yet personalizes the steps necessary for compliance.

(3) Generating clinical trial designs: To streamline the development of study designs, our platform displays clinical trials for devices that are semantically similar to yours, sourced from the clinicaltrials.gov database. We then generate potential designs, incorporating inclusion/exclusion criteria, intervention modules (control and experimental groups), study procedures, and outcome measures, based on a comprehensive analysis of clinical trial data.

This refined approach not only clarifies the path to FDA approval but also positions our platform as a pivotal tool in bringing medical devices to market more efficiently and cost-effectively.

## How we built it
Our application consists of a Next.js frontend and a Flask backend. To generate our dataset of 510(k) summaries, we developed a custom PDF extractor that pulls the different sections and tables out of the 510(k) summary documents on the FDA site. To find the best predicate device, we developed a matrix similarity algorithm built on vector similarity search, with the vectors stored in Pinecone. We then used OpenAI's GPT-4 Turbo to generate comparison tables between the two devices, which forms one section of the 510(k) document. To visualize the database (so users can see where their device lies in the space of all similar devices), we used Nomic's Atlas module. Lastly, we automatically collected a dataset of clinical trial designs from clinicaltrials.gov, used the Together API to fine-tune a collection of clinical-trial-generation models, and also used OpenAI's LLM for trial generation.

## Challenges we ran into
Table extraction: each 510(k) summary PDF has a different format (some are scanned copies, and some are PDFs with differently formatted tables and various non-standard headers). We tried multiple approaches using computer vision, OCR, table extraction libraries, and heuristics, ultimately creating our own custom approach that ran faster than the others, in order to populate our database with all of the previous 510(k) records.

Deployment: due to some dependencies, we had an issue deploying the full application, so we deployed the frontend and backend separately.

## Accomplishments that we're proud of
We're really proud of solving a use case we've seen a huge need for first-hand, with no real existing solution. We truly believe that cost, time, or ambiguity should not be the reason a medical device that could transform a person's life doesn't get to market soon enough.

## What we learned
* Prioritization of features, and shipping an MVP that addresses the most impactful ones
* Table extraction from PDFs isn't a solved task, and there are many nuances
* The FDA process is very complex

## What's next for FastTrackFDA
Our overall mission is to be the central platform that device companies start using from the moment they conceive a device. Future steps we plan to build soon:

* Expanding to other areas of the FDA process, such as quality assurance: 75% of a consultant's time is spent revising existing drafted material and making sure it abides by the Refuse to Accept (RTA) checklist documentation and the guidance documents for each section. We plan to use our table extraction tool to extract the RTA document and check submissions automatically before the table is created.
* Improving table extraction to be more generalizable: we want users to be able to ask interactive questions about guidance documents, but only once they've been parsed perfectly.
* Continuous learning: since the newest FDA device approvals matter most, we plan to integrate a continuous learning approach that updates the data as soon as a new device is approved.
* Multi-agent review: for each part of this workflow, we hope to embed more of the decision-making process of an FDA consultant. For example, for clinical trial design we hope to embed a "clinical trial expert" agent that checks whether the trial's inclusion/exclusion criteria reduce bias, whether the design is safe, and what could be improved.
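The predicate-matching step reduces to embedding device descriptions and ranking candidates by vector similarity. Below is a minimal Python sketch of that idea using OpenAI embeddings and plain cosine similarity in NumPy; the embedding model name is an assumption, and the production system stores and searches the vectors in Pinecone rather than in memory.

```python
# Rank candidate 510(k) summaries by cosine similarity to a device
# description. Model name and example texts are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

device = "Wrist-worn wearable measuring heart rate via photoplethysmography"
predicates = [
    "K210001: wrist-worn pulse oximeter with optical heart-rate sensing",
    "K193456: adhesive ECG patch for ambulatory cardiac monitoring",
]

vecs = embed([device] + predicates)
query, candidates = vecs[0], vecs[1:]
scores = candidates @ query / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(query))

best = int(np.argmax(scores))
print(predicates[best], float(scores[best]))
```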
## Inspiration
What inspired us to build this application was spreading mental health awareness in relation to the ongoing COVID-19 pandemic around the world. While it is easy to brush off signs of fatigue and emotional stress as just "being tired", oftentimes there is a deeper problem at the root of it. We designed this application to be as approachable and user-friendly as possible, and allowed it to scale and change rapidly based on user trends.

## What it does
The project scans a face from a video stream and interprets that data using machine learning and specially trained emotion-recognition models. Receiving the facial data, the model processes it and outputs the probability of each of the user's current emotions. After clicking the "Recommend Videos" button, the probability data is exported as an array and processed internally to determine the right query to send to the YouTube API. Once the query is sent and a response is received, the response is validated and the videos are served to the user. This process is scalable, and the recommended videos change as newer ones get released and the YouTube algorithm serves new content. In short, this project can identify your emotions using face detection and suggest videos based on how you feel.

## How we built it
The project was built as a React app leveraging face-api.js to detect the emotions and youtube-music-api for the music recommendations. The UI was designed using Material UI. The project was built using the [REACT](https://reactjs.org/) framework, powered by [NodeJS](https://nodejs.org/en/). While it is possible to simply check the `package.json` file, the core libraries we used were the following:

* **[Redux](https://react-redux.js.org/)**
* **[Face-API](https://justadudewhohacks.github.io/face-api.js/docs/index.html)**
* **[GoogleAPIs](https://www.npmjs.com/package/googleapis)**
* **[MUI](https://mui.com/)**
* The rest were sub-dependencies that were installed automagically using [npm](https://www.npmjs.com/)

## Challenges we ran into
We faced many challenges throughout this hackathon, both programming-related and logistical; most involved dealing with React and its handling of objects and props. Here are some of the hardest ones we encountered while working on the project:

* Integrating `face-api.js`. Initially, figuring out how to map the user's face and overlay a canvas on top of the video stream proved to be a challenge, given that none of us had really worked with that library before.
* Integrating googleapis' YouTube API v3. The documentation was not very obvious, and it was difficult not only to get the API key required to access the API itself, but also to find the correct URL in order to properly formulate our search query. Another challenge with this library is that it does not clearly communicate its rate limiting. In this case, we did not know we could only make a maximum of 100 requests per day, so we quickly reached our API limit and had to get a new key. Beware!
* Correctly setting the camera refresh interval so that the canvas updates and is displayed to the user. Finding the correct timing, and making sure the camera is disabled when the recommendations are displayed as well as when switching pages, was a big challenge, as there was no good documentation or existing solution for what we were trying to do. We ended up implementing it, but the entire process was filled with hurdles and challenges!
* Finding the right theme. It was very important to us from the very start to make the app presentable and easy to use. Because of that, we took a lot of time to carefully select a color palette that users would (hopefully) be pleased by. This required many hours of trial and error, all while working to complete the project we had set out to build at the start of the hackathon.

## Accomplishments that we're proud of
While we did face many challenges and setbacks, as outlined above, the results are something we can really be proud of. Going into specifics, here are some of our best and most satisfying moments throughout the challenge:

* Building a well-functioning app with a nice design. This was the initial goal. We did it. We're super proud of the work we put in and the hours we spent debugging and fixing issues, and it filled us with confidence to know we were able to plan everything out and implement everything we wanted, given the time we had. An unforgettable experience, to say the least.
* Solving the API integration issues which plagued us from the start. We knew, once we set out to develop this project, that meddling with APIs was never going to be an easy task. We were still unprepared for the amount of pain we were about to go through with the YouTube API. Part of that is on us: we chose libraries and packages we were not very familiar with, so we not only had to learn how to use them but also adapt them to our codebase to integrate them into our product. That was quite a challenge, but finally seeing it work after all the long hours we put in was absolutely worth it, and we're really glad it turned out this way.

## What we learned
To keep this section short, here are some of the things we learned throughout the hackathon:

* How to work with new APIs
* How to debug UI issues
* How to use components to build our applications
* How to understand and fully utilize React's suite of packages and libraries, as well as other styling tools such as Material UI (MUI)
* How to rely on each other's strengths
* And much, much more, but if we kept talking, the list would go on forever!

## What's next for MoodChanger
Well, given how the name **is** *MoodChanger*, there is one thing that we all wish we could change next. The world!

PS: Maybe add file support one day? :pensive:

PPS: Pst! The project is accessible on [GitHub](https://github.com/mike1572/face)!
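As an illustration of the recommendation step described above (the app itself does this in JavaScript; the per-emotion query strings here are guesses at theirs), the dominant emotion picks a query that is sent to the YouTube Data API v3 search endpoint:

```python
# Map an emotion-probability dict to a YouTube search and return video links.
# Query wording per mood is a hypothetical stand-in for the app's mapping.
import requests

QUERIES = {"happy": "upbeat music", "sad": "comforting relaxing music",
           "angry": "calming meditation", "surprised": "fun facts",
           "neutral": "lofi beats"}

def recommend(probabilities: dict, api_key: str, n=5):
    mood = max(probabilities, key=probabilities.get)    # dominant emotion
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={"part": "snippet", "q": QUERIES[mood], "type": "video",
                "maxResults": n, "key": api_key},
        timeout=10,
    ).json()
    return [(item["snippet"]["title"],
             f"https://youtu.be/{item['id']['videoId']}")
            for item in resp.get("items", [])]

print(recommend({"happy": 0.1, "sad": 0.7, "neutral": 0.2}, "YOUR_API_KEY"))
```

Note the quota the team hit: each search call is relatively expensive against the daily API quota, which is consistent with their roughly 100-requests-per-day ceiling.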
losing