## Inspiration

Not all hackers wear capes - but not all capes get washed correctly. Dorming on a college campus the summer before our senior year of high school, we realized how difficult it was to decipher laundry tags and determine the correct settings to use while juggling a busy schedule and challenging classes. We decided to try Google's up-and-coming **AutoML Vision API Beta** to detect and classify laundry tags, to save headaches, washing cycles, and the world.

## What it does

L.O.A.D identifies the standardized care symbols on tags, considers the recommended washing settings for each item of clothing, clusters similar items into loads, and suggests care settings that optimize loading efficiency and prevent unnecessary wear and tear.

## How we built it

We took reference photos of hundreds of laundry tags (from our fellow hackers!) to train a Google AutoML Vision model. After trial and error and many camera modules, we built an Android app that allows the user to scan tags and fetch results from the model via a call to the Google Cloud API (a rough sketch of that call appears at the end of this post).

## Challenges we ran into

Acquiring a sufficiently sized training image dataset was especially challenging. While we had a sizable pool of laundry tags available here at PennApps, our reference images represent only a small portion of the vast variety of care symbols. As a proof of concept, we focused on identifying six of the most common care symbols we saw. We originally planned to utilize the Android Things platform, but issues with image quality and processing power limited our scanning accuracy. Fortunately, the similarities between Android Things and Android allowed us to shift gears quickly and remain on track.

## Accomplishments that we're proud of

We knew that we would have to painstakingly acquire enough reference images to train a Google AutoML Vision model with crowd-sourced data, but we didn't anticipate just how awkward asking to take pictures of laundry tags could be. We can proudly say that this has been a uniquely interesting experience. We managed to build our demo platform entirely out of salvaged sponsor swag.

## What we learned

As high school students with little experience in machine learning, Google AutoML Vision gave us a great first look into the world of AI. Working with Android and Google Cloud Platform gave us a lot of experience in the Google ecosystem. Ironically, working to translate the care symbols has made us fluent in laundry. Feel free to ask us any questions!

## What's next for Load Optimization Assistance Device

We'd like to expand care symbol support and continue to train the machine learning model with more data. We'd also like to move away from pure Android and integrate the entire system into a streamlined hardware package.
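As a rough sketch of that Cloud call: the beta AutoML Vision models expose a REST `:predict` endpoint, and our Android client's request looked roughly like the following Python equivalent (the project ID, model ID, and token here are placeholders, and the exact request shape may differ from what we shipped):

```python
import base64
import requests

# Placeholder IDs and token; real values come from the GCP console / gcloud.
PROJECT_ID = "your-gcp-project"
MODEL_ID = "ICN1234567890"
ACCESS_TOKEN = "ya29...."  # e.g. from `gcloud auth print-access-token`

def classify_tag(image_path):
    """Send a laundry-tag photo to an AutoML Vision model and return its labels."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    url = (f"https://automl.googleapis.com/v1beta1/projects/{PROJECT_ID}"
           f"/locations/us-central1/models/{MODEL_ID}:predict")
    body = {"payload": {"image": {"imageBytes": image_b64}}}
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    # Each result carries a care-symbol label and a confidence score.
    return [(r["displayName"], r["classification"]["score"])
            for r in resp.json().get("payload", [])]

print(classify_tag("tag_photo.jpg"))  # e.g. [("machine_wash_cold", 0.97), ...]
```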
## Inspiration

Recent mass shooting events are indicative of a rising, unfortunate trend in the United States. During a shooting, someone may be killed every 3 seconds on average, while it takes authorities an average of 10 minutes to arrive on a crime scene after a distress call. In addition, cameras and live closed-circuit video monitoring are almost ubiquitous now, but are almost always used for post-crime analysis. Why not use them immediately? With the power of Google Cloud and other tools, we can use camera feeds to detect weapons in real time, identify a threat, send authorities a pinpointed location, and track the suspect - all in one fell swoop.

## What it does

At its core, our intelligent surveillance system takes in a live video feed and constantly watches for any sign of a gun or weapon. Once a weapon is detected, the system immediately bounds it, identifies the potential suspect holding it, and sends the authorities a snapshot of the scene and precise location information. In parallel, the suspect is matched against a database for any additional information that could be provided to the authorities.

## How we built it

The core of our project is distributed across the Google Cloud framework and AWS Rekognition. A camera (most commonly a CCTV) presents a live feed to a model, which is constantly looking for anything that looks like a gun using GCP's Vision API. Once a gun is detected, we bound it and nearby people and identify the shooter through a distance calculation (sketched at the end of this post). The backend captures all of this information and checks it against a cloud-hosted database of people. Then, our frontend pulls the identified suspect from the database and presents all necessary information to authorities in a concise dashboard that employs the Maps API. As soon as a gun is drawn, the authorities see the location on a map, the gun holder's current scene, and, if available, their background and physical characteristics. Then, AWS Rekognition uses face matching to run the threat against a database to present more detail.

## Challenges we ran into

There are some careful nuances to the idea that we had to account for in our project. For one, few models are pre-trained on weapons, so we experimented with training our own model in addition to using the Vision API. Additionally, identifying the weapon holder is a difficult task - sometimes the gun is not necessarily closest to the person holding it. This is offset by the fact that we send a scene snapshot to the authorities, and most gun attacks happen from a distance. Testing is also difficult, considering we do not have access to guns to hold in front of a camera.

## Accomplishments that we're proud of

* A clever geometry-based algorithm to predict the person holding the gun.
* Minimized latency when running several processes at once.
* Clean integration with a database updating in real time.

## What we learned

It's easy to say we're shooting for an MVP, but we need to be careful about managing expectations for which features should be part of the MVP and which features are extraneous.

## What's next for HawkCC

As with all machine learning based products, we would train a fresh model on our specific use case. Given the raw amount of CCTV footage out there, this is not a difficult task, but simply a time-consuming one. This would improve accuracy in 2 main respects - cleaner identification of weapons from a slightly top-down view, and better tracking of individuals within the frame.
SMS alert integration is another feature that we could easily plug into the surveillance system, further improving reaction time.
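For the curious, here is a minimal sketch of the geometry-based holder identification described above. The bounding boxes would come from the Vision API; the containment discount is an illustrative heuristic, not the exact weighting we used:

```python
import numpy as np

def box_center(box):
    # box = (x_min, y_min, x_max, y_max) in pixels
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def identify_holder(gun_box, person_boxes):
    """Return the index of the person geometrically closest to the gun."""
    gun_c = box_center(gun_box)
    dists = []
    for pbox in person_boxes:
        d = float(np.linalg.norm(box_center(pbox) - gun_c))
        # A gun whose center falls inside a person's box is almost
        # certainly theirs, so discount that distance heavily.
        if pbox[0] <= gun_c[0] <= pbox[2] and pbox[1] <= gun_c[1] <= pbox[3]:
            d *= 0.25
        dists.append(d)
    return int(np.argmin(dists))

# Toy frame: the gun sits inside the second person's box.
print(identify_holder((200, 300, 240, 330),
                      [(0, 100, 120, 400), (180, 120, 300, 420)]))  # -> 1
```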
## Inspiration

Many women who want abortion services are not aware of the reliable medical access available to them or of the caveats surrounding this area. The most prominent problem in the status quo is that pro-life clinics use both deceptive names and manipulative interactions to dissuade women from getting an abortion. As an example, many Crisis Pregnancy Centers (pro-life) locate themselves right across from real abortion clinics, use names such as "Women's Choice Clinic", and claim to offer "abortion consultation" that uses all means to make the woman feel guilty about her choice. With the belief that every individual deserves autonomy over her body, we designed Preggy to ensure the right to free choice, unbiased information, and ready medical access for women who seek abortion services or consultation.

## What it does

Our product uses an interactive chatbot and map to provide users reliable, detailed information about abortion and to display certified abortion clinics around them.

## How I built it

We built the chatbot using Google's Dialogflow, which is powered by Google Cloud and provides easy construction of decision trees and automatic training of phrases. We achieved fulfillment (for displaying rich responses) by linking our web app built with Node.js to Firebase's Cloud Functions through a webhook call (the webhook shape is sketched at the end of this post). We also used the Google Maps API to display the locations of, and routes to, abortion clinics and crisis pregnancy centers on our website.

## Challenges I ran into

1. Implementation of fulfillments (rich responses) on Dialogflow.
2. Integration of the chatbot into our own website: all the intents and responses work correctly, but the fulfillments used to trigger rich responses only work on the Dialogflow console.

## Accomplishments that I'm proud of

First, we made the interface as simple as possible, so women from different socioeconomic and educational backgrounds can make inquiries and understand the answers. Second, we don't set up any storage for the data input from the user -- each user's data gets removed at the end of each query. Third, we present transportation options and give women the choice to go out of state, due to some states' legislation.

## What I learned

We not only need technical skills to build an app. More importantly, building a web app requires the developer to make a series of value judgements that embed the developer's own beliefs and biases. For example, as pro-choice advocates ourselves, we chose to label only certified abortion clinics and to warn users of the pro-life "fake clinics". Therefore, it's our responsibility to inform users of this design choice and make sure that they consent to the one-sided information before they use the product.

## What's next for Preggy

First, we want to test the app with real users and incorporate their feedback. To achieve this, we will reach out to support groups for women with unwanted pregnancies and send them the link. Second, we want to make the source code open to the public, so any organization or individual working towards the same goal can help us improve upon it. Third, we plan to contact the author of the Safe Place Project to ask for her advice and about opportunities for future collaboration. The Safe Place Project is a website that collects information on abortion clinics and legislative restrictions in each state. If we can collaborate with her, we will be able to get automatic updates to our data about legislation, for example.
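For reference, the webhook contract is small. Our fulfillment actually ran as a Node.js Cloud Function, but the same shape in Python/Flask looks roughly like this (the `FindClinic` intent name is a placeholder):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    # Dialogflow (v2) sends the matched intent and extracted parameters.
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})

    if intent == "FindClinic":  # hypothetical intent name
        state = params.get("geo-state", "your state")
        reply = f"Here are certified abortion clinics in {state}..."
    else:
        reply = "I can share information about abortion care and nearby clinics."

    # Dialogflow reads the bot's reply from `fulfillmentText`.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```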
## Inspiration

My inspiration for this was the fact that pollution is an often-neglected issue. Not only does it harm the environment, but also people and animals.

## What it does

People can post trash cleaning events and spread awareness about places which require cleaning of garbage or other types of pollution. It also has a machine learning feature to detect from an image whether a location has trash or not. There is also an incentivized points system; points can later be redeemed for prizes or rewards.

## How we built it

Using HTML/CSS/JavaScript and machine learning.

## Challenges we ran into

Sometimes the points weren't displaying or showing the correct value. When the page was refreshed, sometimes the events disappeared.

## Accomplishments that we're proud of

Finishing within the time limit and learning more about JavaScript.

## What we learned

I learned some more advanced JavaScript syntax and how to incorporate it with HTML and CSS.

## What's next for EnviroMode

Integrating the Google Maps API to display nearby locations where you can make a positive impact.
## Inspiration

As the council jotted down their ideas, the search for a project to better our lives came to an end when the idea of garbage sorting came to us. It's not uncommon for people to misplace daily objects, and these little misplays and blunders add up quickly in an environment where people give little thought to what they dispose of. Our remedy for these cringe habits is **Sort the 6ix**, an application made to identify and sort an object with your phone camera. With some very convoluted **object detection magic** and **API calling**, the application takes an image, presumably of the debris you're looking to dispose of, and categorizes it, while providing applicable instructions for safe item disposal.

## How we built it

With the help of Expo, the app was built with React Native. We used **Google Cloud's Vision AI** to help **detect and classify** our object by producing a list of labels. The response labels and weights are passed to our **Flask backend** to identify where the objects should be disposed of, using Open Toronto's Waste Wizard **dataset** to classify where each object belongs, along with additional instructions for cleaning items or dealing with hazardous waste.

## Challenges we ran into

A big roadblock in our project was finding a sufficient image detection model, as the trash dataset (double entendre) we used had a lot of detailed objects, and the object detection models we tried were either not working or not expansive enough for the dataset. A decent portion of our time was spent looking for a model that would suit our requirements, and we settled on the compromise of Google Cloud's Vision AI. There were also issues with dependencies that caused some headaches for our group, and the dataset used a lot of HTML formatting, which we had trouble working with.

## Accomplishments that we're proud of

We were proud that we got the app and the object detection working. We successfully navigated Google Cloud's API for the first time and implemented it into the comfort of your phone camera. We also used another model from Hugging Face, called all-MiniLM-L6-v2. We utilized it for semantic search to better **contextualize** the camera output, through the model's ability to map sentences and paragraphs to a **384-dimensional dense vector space**, **comparing** the result to the most relevant trash categories from the dataset (a rough sketch of this step appears at the end of this post).

## What we learned

During the 36 hours, we learned how to make and deal with **APIs**, we learned how to use **object recognition models** and properly apply them in our application, as well as how to implement **semantic search** to produce the result using a comprehensive .json dataset and **call relevant information** from said dataset. And most importantly, we learned that React Native wasn't the play for choosing a frontend framework.

## What's next for Sort the 6ix

The time constraint kept us from implementing this product physically, and we plan to build it into a physical device that can actively scan for objects and quickly output visual feedback. This could then be mounted directly onto garbage grabbers in Toronto, to better help people identify and clean up items and maximize their environmental impact on a whim.
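The semantic search step mentioned above is compact enough to sketch. This is a minimal version using the real `sentence-transformers` API; the category names are placeholders standing in for the Waste Wizard dataset's labels:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder category names; ours came from the Waste Wizard dataset.
categories = ["Blue Bin (Recycling)", "Green Bin (Organics)",
              "Garbage", "Hazardous Waste Depot", "Electronic Waste"]
cat_vecs = model.encode(categories, convert_to_tensor=True)

def best_category(vision_labels):
    """Map Vision AI labels (e.g. ['plastic bottle', 'drink']) to a bin."""
    query = ", ".join(vision_labels)
    q_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(q_vec, cat_vecs)[0]  # cosine similarity per category
    return categories[int(scores.argmax())]

print(best_category(["plastic bottle", "drink", "bottle cap"]))
```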
## Inspiration 🌱

Climate change is affecting every region on earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness.

## What it does 🌎

Inspired by BeReal, the most popular app in 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Every time you make a sustainable choice, snap a photo, upload it, and you'll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability, and even claim prizes once you have enough points 😍.

## How we built it 🧑‍💻

We used React with JavaScript to create the app, coupled with Firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo (a rough sketch of that scoring step appears at the end of this post).

## Challenges we ran into 🥊

One of our biggest obstacles was settling on an idea, as there were so many great challenges for us to be inspired by.

## Accomplishments that we're proud of 🏆

We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team, and we're all leaving uOttahack with new capabilities.

## What we learned 💚

Everyone got to work with technologies they had never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI, Firebase, and routers in React.

## What's next for BeGreen ✨

It would be amazing to collaborate with brands to give more rewards as an incentive to make more sustainable choices. We'd also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
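As a rough sketch of the impact-scoring step (not our exact prompt or model, and shown with the current OpenAI Python SDK rather than the one we used at the hackathon):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_act(caption_from_vision):
    """Ask the model to rate a sustainable act from 1-10 Green points."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Rate the environmental impact of the described act "
                        "from 1 (minor) to 10 (major). Reply with the number only."},
            {"role": "user", "content": caption_from_vision},
        ],
    )
    return int(resp.choices[0].message.content.strip())

# The caption would come from Azure's computer vision service.
print(score_act("a person biking to work instead of driving"))  # e.g. 6
```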
## Inspiration

I got the inspiration from the Mirum challenge, which was to recognize emotion in speech and text.

## What it does

It records speech from people for a set time, separating individual transcripts based on the small pauses between each person talking. It then transcribes the audio to a JSON string using the Google Speech API and passes the text into the IBM Watson Tone Analyzer API to analyze the emotion in each snippet (both calls are sketched at the end of this post).

## How I built it

I had to connect to the Google Cloud SDK and Watson Developer Cloud first, and learn the Python necessary to get them working. I then wrote one script, recording audio with pyaudio and using the two APIs to get JSON data back.

## Challenges I ran into

I had trouble making a GUI, so I abandoned it. I didn't have enough practice making GUIs in Python before this hackathon, and using the APIs was time-consuming already. Another challenge was getting the google-cloud-sdk to work on my laptop, as it seemed that there were conflicting or missing files at times.

## Accomplishments that I'm proud of

I'm proud that I got the google-cloud-sdk set up and the Speech API working, as well as an API I had never heard of before, the IBM Watson one.

## What I learned

To keep trying to get control of APIs, but to ask for help from others who might have set theirs up already. I also learned to manage my time more effectively. This is my second hackathon, and I got a lot more work done than I did last time.

## What's next for Emotional Talks

I want to add a GUI that will make it easy for viewers to analyze their conversations, and perhaps use future Speech APIs to better process the speech itself. This could potentially be sold to businesses for use in customer care calls.
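For reference, a minimal sketch of the two API calls using today's client libraries (both SDKs have changed since the hackathon, and the API key and service setup here are placeholders):

```python
from google.cloud import speech
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

def transcribe(wav_bytes):
    """Turn one recorded snippet into text via Google Cloud Speech."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def analyze_tone(text):
    """Pass the transcript to Watson Tone Analyzer and return its JSON result."""
    analyzer = ToneAnalyzerV3(
        version="2017-09-21",
        authenticator=IAMAuthenticator("YOUR_IBM_API_KEY"),  # placeholder key
    )
    return analyzer.tone(tone_input={"text": text},
                         content_type="application/json").get_result()

with open("snippet.wav", "rb") as f:
    print(analyze_tone(transcribe(f.read())))
```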
## Inspiration

We wanted to make customer service a better experience for both parties. Arguments often arise during phone calls due to misunderstanding, so we wanted to create an app where voice can be analyzed and a course of action is suggested.

## What it does

Our program records a conversation and displays the emotional state of the person speaking. This mobile app allows users, specifically people working in customer service departments, to understand the caller better. As the customer explains the situation, the sadness, joy, fear, anger, and disgust in their voice is calculated and displayed in a graph. Afterwards, a summary of the entire conversation is given, along with the strongest emotional tone, language tone, and social tone. The user can see the specific details of what the strongest tone consisted of. Lastly, a course of action is suggested according to the combination of tones.

## How we built it

We began building our app using traditional paper prototyping and user research. We then designed screens, logos, and icons with Photoshop. Afterwards, we used Keynote to display interactive data showcasing emotion. Lastly, we used InVision to prototype the app. For the backend, we attempted to use the IBM Watson Tone Analyzer API to help detect communication tones from written text.

## Challenges we ran into

The challenges we ran into were slow wifi, a lack of technology, and basic coding knowledge.

## Accomplishments that we're proud of

We are proud of finishing a prototype as a two-person team in a demanding situation. We were able to design the user interface and user experience and tackle learning curves in a short amount of time. The mobile application is beautifully designed and captures our mission of helping the customer service field through understanding and listening.

## What we learned

We learned the importance of the design process: ideate, research, prototype, and iterate. We learned programs such as Principle and developed our skills in Photoshop, Keynote, Sketch, and InVision. We attended the Intro to APIs workshop and learned about HTTP requests and how to use them to interact with APIs.

## What's next for Emotionyze

We are going to continue working with the IBM Watson Tone Analyzer API to further develop the detection of communication tones from written text. We are also going to work on the overall user experience and user interface to create an effective mobile application.
## Inspiration

There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading).

The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, in noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.

## What it does

The user presses a button on the side of the glasses, which begins recording, and upon pressing the button again, recording ends. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a post to a web server along with the name of the uploaded file. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds that into a transformer network which transcribes the video. Once finished, a front-end web application is notified through socket communication, which results in the front-end streaming the video from Google Cloud as well as displaying the transcription output from the back-end server.

## How we built it

The hardware platform is a Raspberry Pi Zero interfaced with a Pi camera. A Python script runs on the Raspberry Pi to listen for GPIO, record video, upload to Google Cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
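Here is a minimal sketch of the face-detection stage, assuming the stock OpenCV Haar cascade; the real pipeline feeds these crops into the transformer network:

```python
import cv2

# OpenCV ships the pre-trained frontal-face cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(video_path):
    """Yield face crops from each frame, for feeding into the lip-reading model."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            yield frame[y:y + h, x:x + w]
    cap.release()
```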
## Challenges we ran into

* TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB-tethering with a mobile device

## Accomplishments that we're proud of

* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype

## What we learned

* How to set up a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to set up a web server through local host tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end

## What's next for Synviz

* With a stronger on-board battery, a 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed that could be implemented on an existing platform like North's Focals to deliver a promising business appeal
## Inspiration

One teammate is quite knowledgeable about finance and stocks, but the rest of the team had very little background in stocks, so our group as a whole thought it would be really nice to create something that made learning about stocks easier.

## What it does

Whitestone has two main functionalities. The first takes in a stock and a budget, and outputs a graph of the stock price over time (based on the 100 most recent days), its linear approximation, and the number of shares you could buy with the budget. The second is a "glossary" functionality that takes in a company name and outputs its ticker abbreviation.

## How we built it

We started with a simple React app and installed the necessary packages using npm. We then used the Alpha Vantage API to fetch stock data, which we could plot on a graph (the fetch-and-fit step is sketched at the end of this post). The layout was done with styled components in React (essentially just CSS).

## Challenges we ran into

One of the main challenges was just getting started on the project. This was each member's first hackathon ever, and it took a while to learn and get the hang of basic web development and using APIs. After that, some repetitive tasks went by faster.

## Accomplishments that we're proud of

We're all proud of our contribution and growth. Each member contributed a significant amount towards the project, and we all learned a ton of new skills regarding web development.

## What we learned

We learned how to create a web app using React and JavaScript syntax, how to use the Alpha Vantage API, and how to read documentation in general. We also learned and used a lot more CSS.

## What's next for Whitestone

Whitestone is still far from complete. We'd like to expand our database of company abbreviations, as well as provide more information and analytics on stocks. Another possibility is to display information about the frequency of mentions of stocks (such as on Twitter), which could help users gain more intuition about which stocks are/were popular and how that correlates with the stock price.
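Our app does this in JavaScript, but the fetch-and-fit step is easy to sketch language-agnostically; here it is in Python with Alpha Vantage's public `demo` key and numpy's least-squares fit:

```python
import requests
import numpy as np

def fetch_closes(symbol, api_key="demo"):
    """Pull the ~100 most recent daily closes from Alpha Vantage."""
    r = requests.get("https://www.alphavantage.co/query", params={
        "function": "TIME_SERIES_DAILY",
        "symbol": symbol,
        "apikey": api_key,
    })
    series = r.json()["Time Series (Daily)"]
    dates = sorted(series)  # oldest -> newest
    return dates, [float(series[d]["4. close"]) for d in dates]

dates, closes = fetch_closes("IBM")  # the demo key only serves IBM
# Linear approximation: least-squares line through (day index, close price).
slope, intercept = np.polyfit(range(len(closes)), closes, deg=1)
budget = 1000.0
print(f"trend: {slope:+.2f} $/day, shares affordable: {int(budget // closes[-1])}")
```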
## Inspiration

During these COVID times, people have pondered two major facets: "finance" and "environment". So we thought about how people today can "finance the environment". We wanted to motivate people to invest in promoting sustainable development.

## What it does

We have built a web application that allows new investors to explore various stocks that promote sustainable development and help the environment. Users can save their preferences, as we have connected InvestGreen to their Google accounts. Apart from exploring various stocks, users can also write blogs and share their ideas and experiences of going green. We also have chat support, which we handle ourselves for now, but soon we wish to hand it off to financial advisors. Basically, we have created a platform to promote sustainable development.

## How we built it

We used ReactJS for the frontend, Google Auth (Firebase) for user authentication, the Alpha Vantage API to get all the companies' stock market data, Ascend for a chat support feature, and various plug-ins to convert data into highly visual graphs for better design.

## Challenges we ran into

We come from a non-CS background, and some of us are new to React.js; most of us were Flutter developers. But we wanted to try React.js, so we built the whole website in React, learning from a workshop that was conducted yesterday as well. It was difficult to find appropriate content for the website, and it was difficult to integrate the chat support and the dashboard for replying to it.

## Accomplishments that we're proud of

We're very happy that we learned a new tech stack like React.js so quickly. We learned how to build chat support into the webpage. We designed the web page ourselves, which we are very proud of.

## What we learned

We learned the importance of teamwork. We split the work among each other, which made things faster. We really enjoyed building stuff overnight.

## What's next for InvestGreen

We will deploy our website in the days to come. We are thinking of introducing an option that would help people invest directly in companies from this website itself, instead of only showing the data.
## Inspiration

The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.

## What it does

Our project is an AR scavenger hunt experience that gamifies shopping. The scavenger hunt encourages customers to explore the store and discover products they may have otherwise overlooked. As customers find specific products, they earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently.

## How we built it

To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.

## Challenges we ran into

One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during development.

## Accomplishments that we're proud of

Despite the challenges we faced and forming the team later than other groups, we were able to communicate effectively and work together to bring our project to fruition. We are proud of the end result: a polished and functional AR scavenger hunt experience that met our objectives.

## What we learned

We learned how difficult it is to truly ship software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects into an Android application.

## What's next for Winnur

Looking forward, we plan to incorporate computer vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration

Did you know that minority groups, on average, have financial literacy scores between 9 and 16 percent lower than those of Caucasians? Additionally, a study conducted by Ohio State University found that immigrant status lowers financial literacy exam scores by about 27 percent. Learning something new can be extremely daunting, especially for less privileged populations; however, it shouldn't be. Being financially literate entails the knowledge needed to make informed decisions in personal and business finance areas like real estate, insurance, investing, saving, tax planning, and budgeting. Financial literacy shouldn't be only for privileged populations - and EasyInvest is built to change that.

## What it does

Taking inspiration from award-winning educational apps like Duolingo, our EasyInvest web app guides people through the basics of financial literacy by taking them through introductory concepts in investing, before reinforcing their learning with a series of case studies that simulate the process of making effective financial decisions. Furthermore, unlike other learning tools which leave you hanging after a barrage of learning materials, our website offers a quiz after each course where you can test your knowledge. If you get stuck on any of the prompts, there are multiple hints along the way, or you can go back to the course page to review the material.

## How we built it

After an extensive brainstorming session, we narrowed the scope of our app from a comprehensive review of financial literacy information to a visually attractive app focused on investing, saving, and budgeting. Next, we prototyped the user interface and flow of the app using Figma. After agreeing on the basic layout and features, we began development in ReactJS, creating a multi-page website with components that matched the feel and flow of our Figma prototypes.

## Challenges we ran into

One major challenge we faced was adapting to the limitations of remote communication, which made it more difficult to demonstrate directly to each other what we thought was viable given the time constraints of the hackathon. On the technical side, our lack of an effective way to handle version control, due to difficulties syncing GitHub with Visual Studio Code, was a big roadblock, because we had to run local copies of the code that might not coordinate well with each other after significant development. A majority of our team were also first-time hackers who had not worked in many collaborative tech environments, so there was a learning curve in understanding each team member's role and how it fit into the project.

## Accomplishments that we're proud of

We are proud that a majority of the visual prototypes created in Figma ended up being implemented in the final product, and that the design of the website is extremely simple to navigate and visually pleasing. We're also proud of helping each other understand our separate disciplines and of bridging the functionality and visual design components of our website together. Lastly, some of our members had to learn new coding methodologies and languages such as JavaScript and ReactJS during the development of EasyInvest, which is an amazing feat.

## What we learned

We learned new coding languages such as React and JavaScript and the use of Figma for visual design. We learned how time-consuming it can be to implement more complex, reusable components such as sidebars, buttons, and animations, and to focus on building the minimum viable product. We also gained insight into indigenous communities and immigrants and the apparent gaps in knowledge regarding budgeting, saving, and investing.

## What's next for EasyInvest

Next steps for EasyInvest include implementing more learning methodologies and best practices for helping less privileged populations. We hope to run user testing to validate and iterate on our product based on collected data. We want to offer different language options for our learning modules to accommodate those who are new to English. In addition, we hope to build out the rest of our website with more robust interactions. Lastly, we hope to optimize our product for mobile platforms in the future.
## Inspiration

Our goal to increase financial literacy started by acknowledging the statistic that 87.3% of college students in the U.S. never keep track of their money. There are a lot of financial applications out there, and people don't seem to use them. To encourage students to track their spending, we broke down and analyzed to the core why people don't use these apps. We believe the solution is to treat this powerful financial tool as a social app (similar to Venmo, but for financial tracking). Thus, we offer people a solution that keeps them from losing track of their money.

## How we built it

This iOS app is built using the native language Swift to ensure reliability and performance. Some of us worked on the software while others worked on the design of the UI/UX and filters. We also imported numerous frameworks, including Apple's ARKit.

## Challenges we ran into

It took us hours to work out Apple's new ARKit framework and get it running in the user's profile page.

## Accomplishments that we're proud of

We were able to create a financial tool that is inviting for college students to use! More potential in adoption rate will eventually lead to an increase in students' financial literacy!

## What we learned

We learned A LOT. None of us had experience in augmented reality in the first place, and it took us hours to find our way and create an amazing filter! Some of us learned how to use GitHub for team collaboration, which is pretty sick!

## What's next for WalletHacks

We are planning to polish the UI! We are going to develop this app even further and ship it to consumers!
# INSPIRATION

Never before has there been something that can teach you, practically, how to manage your money the right way. Our team REVA brings FinLearn, not another budgeting app. Money is one thing around which everyone's life revolves. Yet no one teaches us how to manage it effectively. As hard as earning money is, so is managing it. As a student, when you start to live alone, take a student loan, or plan to study abroad, all of this becomes a pain if you don't understand how to manage your personal finances. We faced this problem ourselves and eventually educated ourselves. Hence, we bring a solution for all.

# WHAT IT DOES

FinLearn is a fin-ed mobile application that teaches you about money and finances in a practical way. You can set practical finance goals for yourself and learn while achieving them. Now, learning personal finance is easier than ever with FinLearn. It has features like a Financial Learning Track, Goal Streaks, reward-based learning management, and a News Feed for all the latest information from the business world.

# HOW WE BUILT IT

* We built the mobile application on the Flutter framework and designed it in Figma.
* It consists of Learning and Goal Tracker APIs built with Flask and Cosmos DB.
* The learning track also has a voice-based feature built with Azure text-to-speech cognitive services (sketched at the end of this post).
* Our Budget Diary feature helps you record all your daily expenses into major categories, which can be visualized over time and can help in forecasting your future expenses.
* These recorded expenses aid in managing your financial goals in the app.
* The reward-based learning system unlocks more learning paths as you complete your goals.

# CHALLENGES WE RAN INTO

Building this project in such a short time was quite a challenge. Building the logic for the whole reward-based learning system was not easy, yet we were able to pull it off. Integrating APIs with proper data/error handling while maintaining a sleek UI and great performance was a tricky task. Making reusable, extractable snippets of widgets helped a lot in overcoming this challenge.

# ACCOMPLISHMENTS WE ARE PROUD OF

We are proud of the effort we put in, having pulled off the entire application within 1.5 days. Going from an idea to an entire beautiful application is more than enough to make us feel content. The Learning Track we made is the charm of the application.

# WHAT'S NEXT

FinLearn will have a lot of other things in the future. Our first agenda is to build a community feature for the students on our app. Building a learning community is gonna give it an edge.

# CREDITS

Video editing: Aaditya VK
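A minimal sketch of the voice feature mentioned above, using Azure's Speech SDK for Python (our actual integration lived behind the Flask API, and the key/region here are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; ours lived in the Flask API's config.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

def read_lesson_aloud(lesson_text):
    """Speak a learning-track lesson through the default audio output."""
    result = synthesizer.speak_text_async(lesson_text).get()
    if result.reason != speechsdk.ResultReason.SynthesizingAudioCompleted:
        print("Synthesis failed:", result.reason)

read_lesson_aloud("A budget is a plan for every dollar you earn.")
```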
# Gait @ TreeHacks 2016

[![Join the chat at https://gitter.im/thepropterhoc/TreeHacks_2016](https://badges.gitter.im/thepropterhoc/TreeHacks_2016.svg)](https://gitter.im/thepropterhoc/TreeHacks_2016?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Diagnosing walking disorders with accelerometers and machine learning

**Based on the original work of Dr. Matt Smuck**

![Walking correctly](https://d30y9cdsu7xlg0.cloudfront.net/png/79275-200.png)

Author: *Shelby Vanhooser*

Mentor: *Dr. Matt Smuck*

---

### Goals

***Can we diagnose patient walking disorders?***

* Log data of walking behavior for a known distance through a smartphone
* Using nothing but an accelerometer on the smartphone, characterize walking behaviors as *good* or *bad* (classification)
* Collect enough meaningful data to distinguish between these two classes, and draw inferences about them

---

### Technologies

* Wireless headphone triggering of sampling
* Signal processing of collected data
* Internal database for storing collection
* Support Vector Machine (machine learning classification)

-> Over the course of the weekend, I was able to test the logging abilities of the app by taking my own phone outside, placing it in my pocket after selecting the desired sampling frequency and distance I would be walking (verified by Google Maps), and triggering its logging using my wireless headphones. This way, I made sure I was not influencing the collected data by having abnormal movements recorded as I placed the phone in my pocket.

****Main screen of app I designed****

![Landing screen](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Screenshots/Screenshot_2.png)

****The logging in action****

![The logging app in action](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Screenshots/Screenshot_1.png)

-> This way, we can go into the field, collect data from walking, and log whether this behavior is 'good' or 'bad' so we can tell the difference on new data!

---

### Data

First, let us observe the time-domain samples recorded from the accelerometer:

![Raw signal recorded](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Time_Domain.png)

It is immediately possible to see where my steps were! Very nice. Let's look at what the spectrums are like after we take the FFT...

*Frequency spectrums of good walking behavior*

![Good walking behavior frequency spectrum](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/images/good_animated.gif)

*Frequency spectrums of bad walking behavior*

![Bad walking behavior frequency spectrum](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/images/bad_animated.gif)

19 'correct' walking samples and 5 'incorrect' samples were collected around the grounds of Stanford across reasonably flat ground with no obstacle interference.

***Let's now take these spectrums and use them as features for a machine learning classification problem***

-> Additionally, I ran numerous simulations to see which SVM kernel would give the best prediction accuracy:

**How many features do we need to get good prediction ability?**

*Linear kernel*

![ROC-like characterization](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Linear_SVM_2000_Sample_FFT.png)

**Look at that characterization for so few features!** Moving right along...
*Quadratic kernel*

![ROC-like characterization](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Quadratic_SVM_2000_Sample_FFT.png)

Not as good as linear. What about cubic?

*Cubic kernel*

![ROC-like characterization](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Cubic_SVM_2000_Sample_FFT.png)

Conclusion: We can get 100% cross-validated accuracy with... ***a linear kernel***. Good to know. We can therefore predict from incoming patient data whether their gait is problematic!

---

### Results

* From analysis of the data, its structure seems to be well-defined at several key points in the spectrum. After feature selection was run on the collected samples, 11 frequencies were identified as dominating its behavior: **[0, 18, 53, 67, 1000, 1018, 1053, 2037, 2051, 2052, 2069]**. ***Note***: it is curious that index 0 has been selected here, implying that the overall angle of an accelerometer on the body while walking has influence over the observed 'correctness' of gait.
* From these initial results it is clear we *can* characterize the 'correctness' of walking behavior using a smartphone application!
* In the future, it would seem very reasonable to have a patient download an application such as this and, using a set of known walking types from measurements taken in the field, diagnose and report to an unknown patient whether they have a gait disorder.

---

### Acknowledgments

* **Special thanks to Dr. Matt Smuck for his original work and aid in pushing this project in the correct direction**
* **Special thanks to [Realm](https://realm.io) for their amazing database software**
* **Special thanks to [JP Simard](https://cocoapods.org/?q=volume%20button) for his amazing code to detect volume changes for triggering this application**
* **Special thanks to everyone who developed [Libsvm](https://www.csie.ntu.edu.tw/%7Ecjlin/libsvm/) and for writing it in C so I could compile it in iOS**
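For readers who want to reproduce the pipeline, here is a minimal re-sketch in Python with scikit-learn (the original app used Libsvm compiled for iOS; the random traces below are placeholders for real accelerometer recordings):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fft_features(signal, n_bins=2000):
    """Magnitude spectrum of one accelerometer recording, truncated/padded."""
    spectrum = np.abs(np.fft.rfft(signal))
    out = np.zeros(n_bins)
    out[:min(n_bins, len(spectrum))] = spectrum[:n_bins]
    return out

# recordings: list of 1-D accelerometer traces; labels: 1 = good gait, 0 = bad.
# (Shown here with random placeholders; the real set was 19 good / 5 bad walks.)
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(4096) for _ in range(24)]
labels = np.array([1] * 19 + [0] * 5)

X = np.array([fft_features(r) for r in recordings])
for kernel in ("linear", "poly"):
    clf = SVC(kernel=kernel, degree=2 if kernel == "poly" else 3)
    print(kernel, cross_val_score(clf, X, labels, cv=3).mean())
```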
# seedHacks

Drone-mounted tree planter with a splash of ML magic!

## Description

**Our planet's in dire straits.** Over the past several decades, as residential, commercial, and industrial demands have skyrocketed across several industries around the globe, deforestation has become a major problem facing humanity. Though we depend on trees and forests for so much, they seem to be one of our fastest-depleting natural resources. As our primary provider of oxygen and one of our biggest global carbon sinks, this is very dangerous news.

**seedHacks is a tool to help save the world.** Through the use of cloud-based image classification, live video feed, geospatial optimization, and robotic flight optimization, we've made seedHacks to facilitate reforestation at a rate and efficiency that people might not be able to offer. Using our tech, a drone collecting simple birds-eye images of a forest can compute the optimal positions for seeds to be planted, aiming to approach a desired forest density collected from the user. Once it has this map, planting them's just a robotics problem! Easy, right?

## How did we build?

We broke the project up into three main parts: video feed and image collection, image tagging and seed location optimization, and user interface/display. To tackle image/video, we landed on using pygame to set up a constant feed and collect images from that feed upon user request. We then send those captured images to a Microsoft cloud computing server, where we trained an object detection model using Azure's Custom Vision platform, which returns a tagged image with the locations of the trees in the overhead image. Finally, we send the result to an optimization algorithm that utilizes all the free space possible, as well as some distance constraints, to fill the available space with as many trees as possible. All this was wrapped up in an elegant and easy-to-interpret UI that allows us to work together with the expertise of users to make the best end result possible!

## Technical Notes

* Azure Custom Vision was used to train an object detection model that could label "tree" and "trees". We used about 33 images found online to train this machine learning model, resulting in a precision of 82.5%.
* We used the Custom Vision API to send our aerial-view images of forests to the prediction endpoint, which returned predictions consisting of a confidence level, a label, and a bounding box.
* We then parsed the output of the object detection by creating a 2D numpy array in Python representing the original image. We filled indices of the array with 1's where pixels were labeled as "tree" or "trees" with at least 50% confidence. At the same time, we extracted the max width and height of the tree canopies to automate the process for users. Users may input a buffer, as a percentage, which increases the bounding box for tree growth based on the current density/present species; this is especially important if the roots of the tree need space to grow or the tree species is competitive.
* After the 2D array was filled with pre-existing trees, we iterated through the array to find places where new trees could be planted such that there was enough space for each tree to mature to its full canopy size. We labeled these indices with 2 to differentiate between existing trees and potential new trees. (A rough sketch of this grid-filling step appears at the end of this write-up.)

## What did we learn?

First off, that selecting and training a good object detector can be complicated and mysterious, but definitely worth putting some time into.
Though our initial models had promise, we needed to optimize for overhead forest views, which is not something many models are trained on.

Second, that keeping it simple is sometimes better for realizing ideas well. We were very excited to get our hands on a Jetson Nano and trick it out with AlwaysAI's amazing technologies, but we realized partway through that because we didn't actually end up using the hardware and software to the fullest of their abilities, they might not be the best approach to our particular problems. So, we simplified!

Finally, that the applicability of cutting-edge environmental robotics carries a lot of promise going forward. Without too much time, we managed to develop a somewhat sophisticated system that could potentially have a huge impact - and we hope to be able to contribute more to the field in the future!

## What's next for seedHacks?

Next steps for our project would include:

* Further optimization of seed locations (a more technical approach using botanical/silvological expertise, etc.)
* Training the object detector further to pick out individual trees and clusters of trees from an overhead view
* More training on burnt trees and forests
* Robotic pathfinding systems to automatically execute paths through a forest space
* Actuators on drones to make seed planting possible
* Generalizing to aquatic and other ecosystems
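The promised sketch of the grid-filling step from the Technical Notes, with a fixed canopy radius standing in for the per-species sizes and buffers we computed:

```python
import numpy as np

def plan_seeds(tree_grid, canopy_radius):
    """Mark plantable cells (2) on a grid where 1 = existing canopy, 0 = free.

    A cell is plantable if the (2r+1)-square around it is entirely free,
    i.e. a mature canopy would not overlap anything already placed.
    """
    grid = tree_grid.copy()
    r = canopy_radius
    h, w = grid.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = grid[y - r:y + r + 1, x - r:x + r + 1]
            if not window.any():   # nothing (tree or planned seed) nearby
                grid[y, x] = 2     # plant here; later windows will see it
    return grid

demo = np.zeros((12, 12), dtype=int)
demo[3:6, 3:6] = 1                 # an existing tree canopy
print(np.count_nonzero(plan_seeds(demo, canopy_radius=2) == 2), "seeds planned")
```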
## Inspiration

Gait, the way that humans walk, is a critical measurement of overall health, efficacy of rehabilitation, state of disease, etc. This measure is significant for neurological diseases such as Parkinson's disease and other pathologies. At the clinical level, the gold standard of gait assessment (or walking assessment) uses highly expensive tools called gait carpets. Doing these assessments at the clinical level requires time from both patients and doctors to schedule, commute, complete, and analyze. Patients also tend to change the way they walk at the clinic (due to the white coat effect), which further undermines the external validity of gait assessments at clinical locations. The inspiration for this idea came from the thought: "What if we could complete a similar assessment at home, requiring an absolute minimum of interaction from the patient?"

## What it does

We used computer vision to map the joints and segments of the lower body. Based on these mappings we created a system of gait event detection, specifically step events. Step events are the backbone of gait analysis; using these events we generate key metrics of gait analysis similar to the metrics extracted by traditional methods using the gait carpet.

## How we built it

We used Mediapipe and OpenCV in Python for the computer vision task, with a Logitech webcam attached to the computer (a rough sketch of the pose-tracking step appears at the end of this post). Since we stored the key gait metrics, we also used the numpy and pandas packages to efficiently store data in CSV files.

## Challenges we ran into

Making fine adjustments to the webcam viewing angle and measuring the precise distance to the body were critical for getting reliable data from the OpenCV pose tracker. Laptop-based cameras were not sufficient, so we had to grab an external webcam for the video streaming. Using a single camera also meant that the angle of video capture was critical, and we learned through trial and error that movement in just one plane is near impossible; moving slightly towards or away from the camera also impacted the vertical threshold calculations we used for step event detection (as the projection of the visual space is not exactly rectangular from the POV of the camera).

The project also went beyond pose tracking and into step event detection. This meant using pose tracking and the kinematics of human movement to mathematically create a step detection algorithm combining time-domain data analysis (using temporal aspects of a landmark's movement) with precise thresholds for detecting footfall (stepping down) and foot-off (lifting the foot). We initially opted to use joint kinematics, but we found this to be a challenge: repeated joint movements and the variation in joint movement across subjects could mean such a system works very poorly for people who do not move their legs as much.

## Accomplishments that we're proud of

We were able to extract step time data from our step detection algorithm with values very similar to clinical-level gait analysis. Considering how expensive a gait carpet can be (~$10,000, plus the cost of the space and time required to install it, plus proprietary software requirements), we were proud to reproduce one key metric of gait analysis in a way that is easy to access with consumer-grade technology.

## What we learned

There is a reason this seemingly simple solution does not exist at the clinical level... yet.
Humans have a hard time walking in a straight line, and this creates multiple sources of error to deal with when doing step detection and even pose estimation. We also learned that laptop cameras are not it; a mounted webcam/camera works best for this type of detection. From our teammates, we learned about the intricacies of applying computer vision to solve real-world health issues, the wide field of gait analysis, and how something as simple as walking can be a very useful tool to measure the overall health of an individual. We also learned from each other while developing tools using frameworks and packages that were new to us, such as pandas and numpy.

## What's next for Gait@Home

Polishing the metrics and making sure we collect more than one aspect of gait analysis is definitely the next step for this project. Ideally, we want to collect almost all the types of data that a traditional gait carpet collects, using this simple system. Reaching the critical "clinical standard" is also a goal as we develop this tool further, as is a mobile app that uses the phone's camera for gait event detection. One important factor in this project is privacy; we could seek external help in utilizing blockchain tech to preserve patient-doctor and overall healthcare privacy.
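The promised sketch of the pose-tracking step: Mediapipe's Pose API gives normalized landmark coordinates per frame, and a simple threshold crossing on the heel's vertical trace stands in for our tuned footfall/foot-off detection (the threshold value below is illustrative, not the one we calibrated):

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def heel_heights(video_path):
    """Per-frame normalized y of the left heel (y grows downward in
    image coordinates); step events come from thresholding this trace."""
    ys = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose() as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                heel = result.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_HEEL]
                ys.append(heel.y)
    cap.release()
    return ys

def count_footfalls(ys, threshold=0.85):
    """Count downward threshold crossings as footfalls (tuned per setup)."""
    return sum(1 for a, b in zip(ys, ys[1:]) if a < threshold <= b)

print(count_footfalls(heel_heights("walk.mp4")))
```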
## Inspiration

A project online detailing a "future cities index," a statistic that aims to calculate the viability of building a future city. After watching the Future Cities presentation, we were interested to see *where* future cities would be built if a project like the one we saw were funded in the US. This prompted us to create a tool that may help social scientists answer that question — as many people work to innovate the various components of future cities, we tried to find possible homes for their ideas.

## What it does

It allows social scientists and amateur researchers to access aggregated census and economic data through the Lightbox API, without writing a single line of code. The program calculates a Future Cities Index based on the resilience of a census tract to natural disaster, its housing availability, and the social vulnerability in the area (a toy version of this scoring is sketched at the end of this post).

## How we built it

Interactive UI built with ReactJS; data parsed from the Lightbox API with JavaScript.

## Challenges we ran into

Loading the census tracts into our interactive map, finding appropriate data to display for each tract, and calculating the Future Cities Index.

## Accomplishments that we're proud of

Creating a working interactive map and successfully displaying a real-time Future Cities Index.

## What we learned

How to use geodata to make interactive maps that behave as we wish. We are able to overlay different raster images and polygons onto a map.

## What's next for Future Cities Index

Using more parameters in the Future Cities Index, displaying data at the county and city level, linking each census tract to available census data, and allowing users to easily compare tracts.
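The toy version of the scoring mentioned above; the component names and weights are placeholders, since the real index is computed from Lightbox-served census data:

```python
import pandas as pd

def future_cities_index(tracts: pd.DataFrame) -> pd.Series:
    """Toy FCI: weighted sum of min-max-normalized components
    (placeholder weights, not the exact ones used in the app)."""
    def norm(col):
        return (col - col.min()) / (col.max() - col.min())

    return (0.4 * norm(tracts["disaster_resilience"])
            + 0.3 * norm(tracts["housing_availability"])
            + 0.3 * (1 - norm(tracts["social_vulnerability"])))  # lower is better

tracts = pd.DataFrame({
    "disaster_resilience":  [0.9, 0.4, 0.7],
    "housing_availability": [0.2, 0.8, 0.5],
    "social_vulnerability": [0.3, 0.6, 0.1],
})
print(future_cities_index(tracts))  # one score per census tract
```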
## Inspiration The failure of a previous project using the Leap Motion API motivated us to learn it and use it successfully this time. ## What it does Our hack records a motion password set by the user. When the user wishes to open the safe, they repeat the hand motion, which is then analyzed and compared to the set password (a hedged sketch of one way to do this comparison follows this entry). If it passes the analysis check, the safe unlocks. ## How we built it We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors. ## Challenges we ran into Learning the Leap Motion API and debugging were the toughest challenges for our group. Hot glue dangers and complications also impeded our progress. ## Accomplishments that we're proud of All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement, and if given the chance to develop this further, we would take it. ## What we learned The Leap Motion API is more difficult than expected, and making Python programs communicate with Arduino programs is simpler than expected. ## What's next for Toaster Secure - Wireless connections - Sturdier building materials - A user-friendly interface
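The sketch below shows one plausible way to do the motion comparison described under "What it does"; it is an illustration, not the team's exact Leap Motion code. It assumes both recordings have been resampled to the same number of palm-position samples, and the tolerance is a made-up placeholder.

```python
import numpy as np

TOLERANCE_MM = 25.0  # illustrative: mean per-sample distance below which motions match

def motions_match(stored, attempt):
    """stored/attempt: (N, 3) arrays of palm x/y/z samples from the Leap Motion."""
    stored = np.asarray(stored, dtype=float)
    attempt = np.asarray(attempt, dtype=float)
    stored = stored - stored[0]    # align both motions to their starting point
    attempt = attempt - attempt[0]
    return np.linalg.norm(stored - attempt, axis=1).mean() < TOLERANCE_MM
```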
# About the Project: U-Plan ## Inspiration We're from Arizona, and yes—it really is incredibly hot. Having lived here for 2.5 years, each year seems to get hotter than the last. During a casual conversation with an Uber driver in Boston, we chatted about the weather. She mentioned that even the snowfall has been decreasing there. This got us thinking deeply about what's really happening to our climate. It's clear that climate change isn't some far-off concern; it's unfolding right now with far-reaching consequences around the world. Take Hurricane Milton in Florida, for example—it was so severe that even scientists and predictive models couldn't foresee its full impact. This realization made us wonder how we could contribute to a solution. One significant way is by tackling the issue of **Urban Heat Islands (UHIs)**. These UHIs not only make cities hotter but also contribute to the larger problem of global warming. But what exactly are Urban Heat Islands? ## What We Learned Diving into research, we learned that **Urban Heat Islands** are areas within cities that experience higher temperatures than their surrounding rural regions due to human activities and urban infrastructure. Materials like concrete and asphalt absorb and store heat during the day, releasing it slowly at night, leading to significant temperature differences. Understanding the impact of UHIs on energy consumption, air quality, and public health highlighted the urgency of addressing this issue. We realized that mitigating UHIs could play a crucial role in combating climate change and improving urban livability. ## How We Built U-Plan With this knowledge, we set out to create **U-Plan**—an innovative platform that empowers urban planners, architects, and developers to design more sustainable cities. Here's how we built it: * **Leveraging Satellite Imagery**: We integrated high-resolution satellite data to analyze temperatures, vegetation health (NDVI), and water content (NDWI) across urban areas. * **Data Analysis and Visualization**: Utilizing GIS technologies, we developed interactive heat maps that users can explore by simply entering a zip code. * **AI-Powered Chatbot**: We incorporated an AI assistant to provide insights into UHI effects, causes, and mitigation strategies specific to any selected location. * **Tailored Recommendations**: The platform offers architectural and urban planning suggestions, such as using reflective materials, green roofs, and increasing green spaces to naturally reduce surface temperatures. * **User-Friendly Interface**: Focused on accessibility, we designed an intuitive interface that caters to both technical and non-technical users. ## Challenges We Faced Building U-Plan wasn't without its hurdles: * **Data Complexity**: Integrating various datasets (temperature, NDVI, NDWI, NDBI) required sophisticated data processing and normalization techniques to ensure accuracy. * **Scalability**: Handling large volumes of data for real-time analysis challenged us to optimize our backend infrastructure. * **Algorithm Development**: Crafting algorithms that provide actionable insights and accurate sustainability scores involved extensive research and testing. * **User Experience**: Striking the right balance between detailed data presentation and user-friendly design required multiple iterations and user feedback sessions. 
## What's Next for U-Plan We started with Urban Heat Islands because they are a pressing issue that directly affects the livability of cities and contributes significantly to global warming. By focusing on UHIs, we could provide immediate solutions to reduce urban temperatures and energy consumption. Moving forward, we plan to expand U-Plan into a comprehensive platform offering a wide range of data-driven insights, making it the go-to resource for urban planners to design sustainable, efficient, and resilient cities. Our roadmap includes: * **Adding More Environmental Factors**: Incorporating air quality indices, pollution levels, and noise pollution data. * **Predictive Analytics**: Developing models to forecast urban growth patterns and potential environmental impacts. * **Collaboration Tools**: Enabling teams to work together within the platform, sharing insights and coordinating projects. * **Global Expansion**: Adapting U-Plan for international use with localized data and multilingual support. --- # What's in it for our Market Audience? * **Data-Driven Insights**: U-Plan empowers urban planners, architects, developers, and property owners with precise, actionable data to make informed decisions. * **Sustainable Solutions**: Helps users design buildings and urban spaces that reduce heat retention, combating Urban Heat Islands and contributing to climate change mitigation. * **Cost and Energy Efficiency**: Offers strategies to lower energy consumption and reduce reliance on air conditioning, leading to significant cost savings. * **Regulatory Compliance**: Assists in meeting environmental regulations and sustainability standards, simplifying the approval process. * **Competitive Advantage**: Enhances reputation by showcasing a commitment to sustainable, forward-thinking design practices. ## Why Would They Use It? * **Comprehensive Analysis Tools**: Access to advanced features like real-time satellite imagery, detailed heat maps, and predictive modeling. * **Personalized Recommendations**: Tailored advice for both new constructions and retrofitting existing buildings to improve energy efficiency and reduce heat retention. * **User-Friendly Interface**: An intuitive platform that's easy to navigate, even for those without technical expertise. * **Expert Support**: Premium users gain access to expert consultants and an AI-powered chatbot for personalized guidance. * **Collaboration Features**: Ability to share maps and data with team members and stakeholders, facilitating better project coordination.
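A hedged sketch of the vegetation and water indices mentioned under "How We Built U-Plan". The formulas (NDVI and NDWI) are standard, but the band ordering (red=1, green=2, NIR=3) and the file path are assumptions about the raster, not fixed conventions.

```python
import numpy as np
import rasterio

with rasterio.open("tile.tif") as src:  # hypothetical GeoTIFF path
    red = src.read(1).astype(float)
    green = src.read(2).astype(float)
    nir = src.read(3).astype(float)

eps = 1e-9  # avoid division by zero over water/shadow pixels
ndvi = (nir - red) / (nir + red + eps)      # vegetation health, in [-1, 1]
ndwi = (green - nir) / (green + nir + eps)  # water content, in [-1, 1]
print(f"mean NDVI {ndvi.mean():.2f}, mean NDWI {ndwi.mean():.2f}")
```

Per-pixel grids like these are what get rendered as the interactive heat maps and folded into the sustainability scores.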
winning
## Inspiration For any hospital, the key to running smoothly is following its orderly system. But, as too many patients know, there are times when a should-be-orderly hospital is thrown into chaos. Patients often need to ask medical staff several times for their medical documents and require assistance getting them in order. This often results in hospital stays hours longer than necessary just to complete the paperwork. The burden of answering patients' and their families' requests often falls to the nurses, an essential yet often overlooked role in the hospital workflow. Interruption of the nursing staff is an issue that is not given priority, though it can have disastrous consequences on the running of the hospital. Interruptions have adverse effects on a nurse's memory and take focus away from the current task. Frequent interruptions can tax a nurse's cognitive load, raising the risk of human error in critical medical procedures. Many communication interruptions could be mitigated through more accessible information systems between patients and their medical staff, which shows the importance of managing and reducing non-urgent communication as much as possible. @sclepius seeks to bridge the communication barrier between medical professionals, patients, and their families by streamlining the transfer of medical information securely and in a way that gives patients 100% control over who has access to their medical information and how much. ## What it does @sclepius is a medical application used by both patients and doctors that allows vital data to be transferred between the two. The app keeps track of and organizes all of the medical documents that a doctor or medical professional may need. The medical professional can then send recent medical test results and forms such as prescriptions or clinical notes directly to your app for your convenience. If authorized by the patient, select information can also be shared with family accounts. ## How we built it Our application is a Flutter application coded in Dart. The Flutter app UI allows the user to access the data in their profile. The @protocol is used to request access to, transfer, and receive documents and medical information. The communication flow is summarized in the image in the header. ## Challenges we ran into * We had to switch projects midway through; initially, we wanted to create an open-source drug development platform. * Setting up the environment for Flutter, alongside an Android phone emulator to test the app. * Being pioneers in working with the @protocol API. ## Accomplishments that we're proud of * Managed to put the product together in the short span of the hackathon * Created an outstanding project despite the limitations of our meetings * Adapted to so many unfamiliar technologies (see the challenges section) in such a short period of time ## What we learned To properly utilize the @protocol API, we also needed to learn to code Flutter apps using the Dart programming language. Given how new the @protocol API is, we all needed to learn it on the fly. Finally, once we conceptualized the idea, we needed to do in-depth research into hospital-patient communication issues to pinpoint where our app could help close the communication gaps.
## What's next for @sclepius * More granularity in access levels for medical information * Adding features to ease information input, such as scanning paper documents * Testing the functionality and usage of the app in a real hospital setting
## Inspiration No one likes waiting around, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty, and this problem affects around 60 million people every year in the US alone. We want to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience. ## What it does We let people upload their medical history and a list of symptoms before they reach the hospital waiting room. They can do this through the voice assistant feature, where they describe their symptoms, related details, and circumstances in a conversational style. They also have the option of entering these in a standard form, if that's easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate', or 'critical' and is added to the virtual queue (a toy sketch of this categorization follows this entry). This way, hospitals can take care of their patients more efficiently with a fair ranking system (which also factors in time of arrival) that determines the queue. Patients are more satisfied as well, because they see a transparent process without the usual uncertainty, and they feel attended to. They can be told an estimated range of waiting time, which frees them from stress, and they are shown a progress bar indicating whether a doctor has already reviewed their case, whether insurance was contacted, or any other status change. Patients are also provided with tips and educational content regarding their symptoms and pains, battling the abundant stream of misinformation that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative; let's try to change that! ## How we built it We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for the voice assistant and the Azure Text Analytics API for processing. The designs were built in Figma. ## Challenges we ran into Brainstorming took longer than we anticipated, and we had to keep our cool and not stress, but in the end we agreed on an idea with enormous potential, so it was worth chewing on it longer. We have had a little experience with voice assistants in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We also considered implementing multiple input languages so that less fluent English speakers could use the app as well. ## Accomplishments that we're proud of Treehacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real, large-scale societal problem, and we enjoyed creating something in the domain. ## What we learned We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, and for some of us this was our first hackathon, so working together with new people with different experiences proved exciting and valuable.
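A toy sketch of the categorization step referenced above. The real system works from the Azure Text Analytics output, so the symptom keywords and weights here are purely illustrative placeholders.

```python
# Illustrative severity weights; in practice these come from processed text analytics.
SEVERITY = {"chest pain": 3, "shortness of breath": 3, "fever": 2,
            "vomiting": 2, "headache": 1, "sore throat": 1}

def triage(symptoms):
    """Map a list of symptom strings to a queue category."""
    score = sum(SEVERITY.get(s, 1) for s in symptoms)  # unknown symptoms count as 1
    if score >= 5:
        return "critical"
    return "moderate" if score >= 3 else "mild"

# The virtual queue is then ordered by category first, arrival time second.
print(triage(["fever", "vomiting"]))  # moderate
```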
## Inspiration The need for fast, effective, and collaborative software development has never been greater in a rapidly expanding tech market. Existing coding helpers such as ChatGPT and Copilot are useful but restricted in scope, particularly when it comes to retaining project-wide context and facilitating real-time team communication. ## What it is GenCollab is an AI-powered collaborative tool integrated within Discord that aims to change the way engineers collaborate on code. Combining generative AI with a novel hierarchical memory retention system, GenCollab not only aids in code generation but also preserves intelligent context around the entire project. This allows it to develop roadmaps automatically, allocate tasks based on roles, and generate code that is both scalable and readily integrated into the existing codebase. ## How we made it We used a combination of cutting-edge NLP techniques designed to produce syntactically and logically cohesive code. The platform contains a hierarchical memory architecture that keeps context at several levels, maintaining consistency and minimizing conflicts in generated code (a sketch of the idea follows this entry). Its backend is designed to be highly scalable by integrating smoothly with MLOps pipelines. ## Challenges we faced Creating a hierarchical memory retention system capable of effectively leveraging generative AI without wasting resources was a huge difficulty. Another challenge was guaranteeing real-time performance and scalability for different Discord users. ## What we discovered We discovered the value of real-time cooperation in today's development environment. We also learned effective approaches for incorporating generative AI into collaborative applications, and for optimizing the management and storage of hierarchical memory with Redis. ## What is the future of GenCollab? Integration with other major development platforms, as well as more AI capabilities to improve code quality and project management functionality, are on the GenCollab agenda. We also intend to provide more granular role-based permissions and features to strengthen the platform.
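A minimal sketch of the hierarchical memory idea using Redis, which handled memory storage as described above. The key names, two-level split, and trim size are illustrative assumptions, not the production design.

```python
import redis

r = redis.Redis()  # assumes a local Redis instance

def remember(project, channel, text, level):
    """level: 'project' (long-lived roadmap context) or 'channel' (recent turns)."""
    key = f"ctx:{project}" if level == "project" else f"ctx:{project}:{channel}"
    r.rpush(key, text)
    if level == "channel":
        r.ltrim(key, -50, -1)  # keep only the 50 most recent turns per channel

def context_for(project, channel):
    # merge broad project context with fine-grained channel context for the prompt
    scope = r.lrange(f"ctx:{project}", 0, -1) + r.lrange(f"ctx:{project}:{channel}", 0, -1)
    return b"\n".join(scope).decode()
```

Keeping long-lived project facts separate from a bounded window of recent turns is what lets the generated code stay consistent with the whole project without the prompt growing without limit.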
partial
## Inspiration We wanted to pioneer the use of computationally intensive image processing and machine learning algorithms in low-resource robotic or embedded devices by leveraging cloud computing. ## What it does CloudChaser (or "Chase" for short) allows the user to input custom objects for Chase to track. To do this, Chase will first rotate counter-clockwise until the object comes into the field of view of the front-facing camera, then continue in the direction of the object, continually updating its orientation. ## How we built it "Chase" was built with four continuous-rotation servo motors mounted onto our custom-modeled 3D-printed chassis. Chase's front-facing camera was built using a Raspberry Pi camera module mounted onto a custom 3D-printed camera mount. The four motors and the camera are controlled by a Raspberry Pi 3B, which streams video to, and receives driving instructions from, our cloud GPU server through TCP sockets. We interpret this cloud data using YOLO (our object recognition library), which is connected through another TCP socket to our cloud-based parser script; the parser interprets the data and tells the robot which direction to move (a sketch of this steering logic appears at the end of this entry). ## Challenges we ran into The first challenge was designing the layout and model for the robot chassis. Because the print for the chassis was going to take 12 hours, we had to make sure we had the perfect dimensions on the very first try, so we took calipers to the motors, dug through the data sheets, and made test mounts to ensure we nailed the print. The next challenge was setting up the TCP socket connections and developing our software so it could accept data from multiple different sources in real time. We ended up solving the connection-timing issue by using a service called cam2web to stream the webcam to a URL instead of through TCP, allowing us to avoid queuing up the data on our server. The biggest challenge by far, however, was dealing with camera latency. We wanted the camera to be as close to live as possible, so we moved all processing to the cloud and kept none on the Pi, but since the Raspbian operating system would frequently context-switch away from our video stream, we still got frequent lag spikes. We ended up solving this problem by decreasing the priority of our driving script relative to the video stream on the Pi. ## Accomplishments that we're proud of We're proud that we were able to model and design a relatively sturdy robot in such a short time. We're also really proud that we were able to interface the Amazon Alexa skill with the cloud server, as nobody on our team had built an Alexa skill before. By far, though, the accomplishment we are most proud of is that our video stream latency from the Raspberry Pi to the cloud is low enough that we can reliably navigate the robot with that data. ## What we learned Through working on the project, our team learned how to write a skill for Amazon Alexa, how to design and model a robot to fit specific hardware, and how to program and optimize a socket application for multiple incoming real-time connections with minimal latency. ## What's next for CloudChaser In the future, we would ideally like Chase to be able to compress a higher-quality video stream and have separate PWM drivers for the servo motors to enable higher-precision turning.
We also want to make Chase aware of his position in a 3D environment and track his distance from objects, allowing him to "tail" objects instead of just chasing them. ## CloudChaser in the news! <https://medium.com/penn-engineering/object-seeking-robot-wins-pennapps-xvii-469adb756fad> <https://penntechreview.com/read/cloudchaser>
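The steering sketch referenced above: a minimal illustration of how a parser can turn a YOLO detection into a driving command over a TCP socket. The detection format, deadband threshold, and socket target are all placeholders, not the team's exact protocol.

```python
import socket

DEADBAND = 0.1  # fraction of frame width where we consider the object centered

def steer(x_center, frame_width):
    """Turn a bounding-box center into a driving command."""
    offset = x_center / frame_width - 0.5
    if abs(offset) < DEADBAND:
        return "FORWARD"
    return "RIGHT" if offset > 0 else "LEFT"

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("robot.local", 9000))             # hypothetical Pi address/port
sock.sendall(steer(420, 640).encode() + b"\n")  # 420/640 = 0.656 -> "RIGHT"
```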
## Inspiration There were two primary sources of inspiration. The first was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading). The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard of hearing, in noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts. ## What it does The user presses a button on the side of the glasses to begin recording, and presses it again to end recording. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a POST request to a web server along with the name of the uploaded file. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier (a sketch follows this entry), and feeds the result into a transformer network that transcribes the video. Upon finishing, the front-end web application is notified through socket communication, and the front-end streams the video from Google Cloud while displaying the transcription output from the back-end server. ## How we built it The hardware platform is a Raspberry Pi Zero interfaced with a Pi camera. A Python script runs on the Raspberry Pi to listen for GPIO input, record video, upload to Google Cloud, and POST to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
## Challenges we ran into * TensorFlow proved difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance * It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB tethering with a mobile device ## Accomplishments that we're proud of * Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application * The design of the glasses prototype ## What we learned * How to set up a back-end web server using Flask * How to facilitate socket communication between Flask and React * How to set up a web server through localhost tunneling using ngrok * How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks * How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end ## What's next for Synviz * With a stronger on-board battery, a 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed that can be implemented on an existing platform like North's Focals to deliver a promising business appeal
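A sketch of the Haar-cascade face detection stage described above, which crops the face region that feeds the lip-reading model. The cascade file ships with OpenCV; the video path is a placeholder.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("clip.mp4")  # hypothetical uploaded recording
crops = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:          # keep the first detected face
        crops.append(frame[y:y + h, x:x + w])
cap.release()
# `crops` is the frame sequence handed to the transformer for transcription
```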
## Inspiration Food is capable of uniting all of us, no matter which demographic we belong to or which cultures we identify with. Our team recognized how challenging it can be for groups to choose a restaurant that accommodates everyone's preferences. Furthermore, food apps like Yelp and Zomato can often cause 'analysis paralysis', as there are too many options to choose from. Because of this, we wanted to build a platform that facilitates the process of coming together for food and makes it as simple and convenient as possible. ## What it does Bonfire is an intelligent food app that takes into account the food preferences of multiple users and provides a fast, reliable, and convenient recommendation based on the aggregate inputs of the group (a toy sketch of the aggregation follows this entry). To remove any friction while decision-making, Bonfire can even make a reservation on behalf of the group using Google's Dialogflow. ## How we built it We used Android Studio to build the mobile application and connected it to a Python back-end. We used Zomato's API for locating restaurants and collecting data, and the Google Sheets API and Google Apps Script to decide the optimal restaurant recommendation given the users' preferences. We then used Adobe XD to create detailed wireframes to visualize the app's UI/UX. ## Challenges we ran into We found that integrating all the APIs into our app was quite challenging, as some required partner-access privileges and restricted the amount of information we could request. In addition, choosing a framework to connect the back-end was difficult. ## Accomplishments that we're proud of As our team is composed of students studying bioinformatics, statistics, and kinesiology, we are extremely proud to have brought an idea to fruition, and we are excited to continue working on this project as we think it has promising applications. ## What we learned We learned that trying to build a full-stack application in 24 hours is no easy task. We managed to build a functional prototype and a wireframe to visualize what the UI/UX experience should be like. ## What's next for Bonfire: the Intelligent Food App For the future of Bonfire, we aim to include options for dietary restrictions and to incorporate Google Duplex for a more natural-sounding linguistic profile. Furthermore, we want to further polish the UI to enhance the user experience. To improve the quality of the recommendations, we plan to implement machine learning in the decision-making process, which will also take into account the user's past food preferences and restaurant reviews.
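The aggregation sketch referenced above: a toy illustration of combining per-user cuisine scores into one group pick. The score format is hypothetical; the real app works from Zomato data and the spreadsheet logic.

```python
from collections import Counter

def group_pick(preferences, restaurants):
    """preferences: list of {cuisine: score} dicts; restaurants: (name, cuisine) pairs."""
    totals = Counter()
    for prefs in preferences:
        totals.update(prefs)  # sum each cuisine's score across the whole group
    return max(restaurants, key=lambda r: totals[r[1]])

users = [{"thai": 2, "pizza": 1}, {"pizza": 3}, {"thai": 1, "sushi": 2}]
print(group_pick(users, [("Som Tam House", "thai"), ("Slice Co", "pizza")]))
# pizza totals 4 vs thai 3, so the group is sent to ('Slice Co', 'pizza')
```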
winning
## Inspiration As recent or soon-to-be graduates, we personally understand the desire to relocate and expand our world views. There is so much potential out there, but it's hard to know which city is best, as we all have unique needs and wants. ## What it does By gathering the aspects that students care about when researching a city, we visualize the data based on selected preferences and suggest potential cities. Clicking on a city shows more information about that city and how it compares to others. ## How I built it We initially narrowed our focus to a set of users: recently graduated students. Then, we discussed several user journeys and sought out specific pain points. We conducted research to find out which criteria people consider when deciding where to move, and then found open datasets from StatsCan and other online sources to support these criteria. We pulled 2016 Canadian Census information on the biggest cities in Canada, sorted this data into specific categories, and compiled static JSON files for the cities. We then fed this information into our web app powered by React, where we visualized it using Mapbox and different graphing techniques. ## Challenges I ran into Going from a well-designed static prototype to an implemented version is a big jump, as the data had to be manipulated to fit the visualization library we used. The StatsCan data was also unreliable and oddly formatted, leading to a lot of difficulties. ## Accomplishments that I'm proud of We managed to design a data visualization that makes use of multiple datasets and combines them in a cohesive way that helps students make an informed decision. ## What I learned We learned that the quality of the datasets depended not only on their sources, but also on the richness of the data in providing value for visualization. In certain fields the data was especially shallow, which made it difficult to draw any useful visualizations. ## What's next for LeaveTheNest We would love to explore how students can learn from our visualization and where to expand next. Right now we focused on Canadian data, but the next step would be to include American cities and beyond. We would also love to explore more intricate visualizations that dig deeper into the data and provide more value.
## Inspiration As first-year students, we have experienced the difficulties of navigating our new home. We wanted to ease the transition to university by helping students learn more about their campus. ## What it does A social media app for students to share daily images of their campus and earn points by accurately guessing the locations of their friends' images. After guessing, students can explore the location in full with detailed maps, including the interiors of university buildings. ## How we built it The Mappedin SDK was used to display user locations relative to surrounding buildings and to help identify different campus areas. React was used to build a mobile website, as the SDK was unavailable for native mobile. Express and Node power the backend, with MongoDB Atlas as the database for its flexible data types. ## Challenges we ran into * Developing in an unfamiliar framework (React) after learning that Android Studio was not feasible * Bypassing CORS permissions when accessing the user's camera ## Accomplishments that we're proud of * Purposefully using a new SDK to address an issue that was relevant to our team * Going through the development process and gaining a range of experiences over a short period of time ## What we learned * Planning time effectively and redirecting our goals accordingly * How to learn by collaborating with everyone from team members to SDK experts, as well as by reading documentation * Our tech stack ## What's next for LooGuessr * Creating more social elements, such as a global leaderboard/tournaments, to increase engagement beyond first-years * Considering freemium components, such as extra guesses, 360° view, and interpersonal wagers * Showcasing a 360° picture view by stitching together a video recording from the user * Addressing privacy concerns with face blurring and an option to delay showing the image
## Inspiration There should be an effective way to evaluate company value by examining the individual values of those who make up the company. ## What it does Simplifies the process of researching a company by showing it in a dynamic web visualization that is free-flowing and easy to follow. ## How we built it It was originally built around a LinkedIn web scraper written in Python. The web visualizer was built using JavaScript and the VisJS library for a dynamic view with aesthetically pleasing physics. Web components were used to keep the display clean. ## Challenges we ran into Gathering and scraping the data was a big obstacle; we had to pattern-match against LinkedIn's data. ## Accomplishments that we're proud of It works!!! ## What we learned Learning to use various libraries and how to set up a website. ## What's next for Yeevaluation Fine-tuning and reimplementing the dynamic node graph and history, and revamping the project, considering it was made in only 24 hours.
partial
## 💡Inspiration * A 2020 US Census survey showed that adults were 3x more likely to screen positive for depression or anxiety in 2020 vs. 2019 * A 2019 review of 18 papers concluded that wearable data can help identify depression and, coupled with behavioral therapy, can help improve mental health * 1 in 5 Americans owns a wearable now, and this adoption is projected to grow 18% every year * Pattrn aims to turn activity and mood data into actionable insights for better mental health. ## 🤔 What it does * Digests activity-monitor data and produces a bullet-point, actionable summary of health status * Allows users to set goals on health metrics and provides daily, weekly, and monthly reviews against those goals * Based on user mood ratings and memo entries, deduces the activities that correlate with good and bad days [![Screen-Shot-2022-10-16-at-1-09-40-PM.jpg](https://i.postimg.cc/MZhjdqRw/Screen-Shot-2022-10-16-at-1-09-40-PM.jpg)](https://postimg.cc/bd9JvX3V) [![Fire-Shot-Capture-060-Pattrn-localhost.png](https://i.postimg.cc/zBQpx6wQ/Fire-Shot-Capture-060-Pattrn-localhost.png)](https://postimg.cc/bDQQJ6B0) ## 🦾 How we built it * Frontend: ReactJS * Backend: Flask, Google Cloud App Engine, InterSystems FHIR, Cockroach Labs DB, Cohere ## 👨🏻‍🤝‍👨🏽 Challenges / Accomplishments * Ideating and validating took up a big chunk of this 24-hour hack * Continuous integration and deployment, plus GitHub collaboration for 4 developers in this short hack * Each team member pushed themselves to try something they had never tried before ## 🛠 Hack for Health * Pattrn is currently able to summarize actionable steps for users to take towards a healthy lifestyle * Apart from health goal setting and reviewing, Pattrn also analyses which activities have historically correlated with "good" and "bad" days ## 🛠 InterSystems Tech Prize * We paginated GET and POST requests (a sketch of the pagination follows this entry) * Generated synthetic data and pushed it at 2 different time resolutions (date, minutes) * Endpoints used: Patient, Observation, Goals, Allergy Intolerance * Optimized API calls by pushing payloads through bundle requests ## 🛠 Cockroach Labs Tech Prize * Spawned a serverless Cockroach Labs instance * Saved user credentials * Stored key mappings for the FHIR user base * Stored sentiment data from users' daily text input ## 🛠 Most Creative Use of GitHub * Implemented CI/CD, protected the master branch, and added pull request checks ## 🛠 Cohere Prize * Used the sentiment analysis toolkit to parse user text input, model human language, and classify sentiments with timestamps tied to each entry * Framework designed to support a continuous learning pipeline in the future ## 🛠 Google Cloud Prize * App Engine to host the React app and Flask server, linked to Compute Engine * Hosted the Cockroach Labs virtual machine ## What's next for Pattrn * Continue improving sentiment analysis of users' health journal entries * Better understand patterns between user health metrics and daily activities and events * Provide personalized recommendations on steps to improve mental health * Provide real-time feedback, e.g. haptic alerts when stressful episodes are predicted Temporary login credentials: Username: [norcal2@hacks.edu](mailto:norcal2@hacks.edu) Password: norcal
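The pagination sketch referenced under the InterSystems prize: a FHIR search returns a Bundle whose link with relation "next" points at the following page, so the client follows it until it disappears. The base URL is a placeholder, not the actual endpoint.

```python
import requests

def fetch_all(url):
    """Follow FHIR Bundle 'next' links until all pages are collected."""
    entries = []
    while url:
        bundle = requests.get(url).json()
        entries += bundle.get("entry", [])
        url = next((link["url"] for link in bundle.get("link", [])
                    if link["relation"] == "next"), None)
    return entries

observations = fetch_all("https://fhir.example.com/Observation?patient=123")
```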
## Inspiration Globally, one in ten people do not know how to interpret their feelings, and there has been a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their emotional states. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. Differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model (an illustrative sketch of this mapping follows this entry). The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We used an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps the essential rite of all hackathons. Initially, we struggled to set up the electrodes on the BCI to ensure they were receptive enough, and to work our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first-ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited programming experience and under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a BeReal-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
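An illustrative sketch of the brainwave-to-keyword mapping referenced above. The alpha/beta band edges are standard and the Ganglion's 200 Hz sample rate is real, but the ratio threshold and prompt keywords are made-up placeholders for whatever a real pipeline would learn or tune.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def mood_keyword(signal, fs=200):  # the Ganglion samples at 200 Hz
    alpha = band_power(signal, fs, 8, 12)          # relaxation-linked band
    beta = band_power(signal, fs, 13, 30) + 1e-12  # alertness/stress-linked band
    # illustrative threshold: relatively more alpha -> calmer imagery
    return "calm, serene landscape" if alpha / beta > 1.0 else "vivid, energetic abstract"
```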
# We'd love if you read through this in its entirety, but we suggest reading "What it does" if you're limited on time ## The Boring Stuff (Intro) * Christina Zhao - 1st-time hacker - aka "Is cucumber a fruit" * Peng Lu - 2nd-time hacker - aka "Why is this not working!!" x 30 * Matthew Yang - ML specialist - aka "What is an API" ## What it does It's a cross-platform app that can promote mental health and healthier eating habits! * Log when you eat healthy food. * Feed your "munch buddies" and level them up! * Learn about the different types of nutrients, what they do, and which foods contain them. Since we are not very experienced at full-stack development, we just wanted to have fun and learn some new things. However, we feel that our project idea really ended up being a perfect fit for a few challenges, including the Otsuka Valuenex challenge! Specifically, > > Many of us underestimate how important eating and mental health are to our overall wellness. > > > That's why we made this app! After doing some research on the compounding relationship between eating, mental health, and wellness, we were quite shocked by the overwhelming amount of evidence and studies detailing the negative consequences. > > We will be judging for the best **mental wellness solution** that incorporates **food in a digital manner.** Projects will be judged on their ability to make **proactive stress management solutions to users.** > > > Our app has a two-pronged approach—it addresses mental wellness through both healthy eating and through having fun and stress relief! Additionally, not only is eating healthy a great method of proactive stress management, but another key aspect of being proactive is making your de-stressing activities part of your daily routine. I think this app would really do a great job of that! Additionally, we also focused really hard on accessibility and ease-of-use. Whether you're on Android, iPhone, or a computer, it only takes a few seconds to track your healthy eating and play with some cute animals ;) ## How we built it The front-end is React Native, and the back-end is FastAPI (Python). Aside from our individual talents, I think we did a really great job of working together. We employed pair-programming strategies to great success, since each of us has our own individual strengths and weaknesses. ## Challenges we ran into Most of us have minimal experience with full-stack development. If you look at my LinkedIn (this is Matt), all of my CS knowledge is concentrated in machine learning! There were so many random errors with just setting up the back-end server and learning how to make API endpoints, as well as writing boilerplate JS from scratch. But that's what made this project so fun. We all tried to learn something we're not that great at, and luckily we were able to get past the initial bumps. ## Accomplishments that we're proud of As I'm typing this in the final hour, in retrospect, it really is an awesome experience getting to pull an all-nighter hacking. It makes us wish that we attended more hackathons during college. Above all, it was awesome that we got to create something meaningful (at least, to us). ## What we learned We all learned a lot about full-stack development (React Native + FastAPI). Getting to finish the project for once has also taught us that we shouldn't give up so easily at hackathons :) I also learned that the power of midnight DoorDash credits is akin to magic. ## What's next for Munch Buddies!
We have so many cool ideas that we just didn't have the technical chops to implement in time * customizing your munch buddies! * advanced data analysis on your food history (data science is my specialty) * exporting your munch buddies and stats! However, I'd also like to emphasize that any further work on the app should be done WITHOUT losing sight of the original goal. Munch buddies is supposed to be a fun way to promote healthy eating and wellbeing. Some other apps have gone down the path of too much gamification / social features, which can lead to negativity and toxic competitiveness. ## Final Remark One of our favorite parts about making this project, is that we all feel that it is something that we would (and will) actually use in our day-to-day!
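As a footnote on the FastAPI + React Native stack described above, here is a minimal sketch of what a logging endpoint could look like. It assumes in-memory storage, and the field names and XP value are illustrative, not the app's actual schema.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
logs: list[dict] = []  # in-memory stand-in for a real datastore

class FoodLog(BaseModel):
    user: str
    food: str
    nutrient: str  # e.g. "protein", "fiber"

@app.post("/log")
def log_food(entry: FoodLog):
    logs.append(entry.dict())
    xp = 10  # every healthy entry feeds (and levels) a munch buddy
    return {"buddy_xp_gained": xp, "total_logs": len(logs)}
```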
winning
It makes sure you never have to see unwanted content again!
## Inspiration YouTube is not incentivized to keep your kids from brain rot like Mr. Beast and Ryan's World. This is how YouTube makes its money. The only alternative for protecting your kids is to pre-approve every video manually (a feature that YouTube hides anyway), which is extremely tedious and not feasible for most parents. We want a world that's not [skibidi toilet](https://www.youtube.com/shorts/tNzN8yi4FHo) and is aligned with nurturing the next generation, so we decided this problem would be the focus of our hack for impact. ## What it does It uses an LLM to filter the videos a child sees on YouTube **dynamically**, based on the parent's filters and values. The parent defines rules like "I don't want anything that's clickbait whose purpose is to take my child's attention" and "Joey has been interested in dinosaurs in school, so emphasize videos like that this week", and Attenbot takes care of the rest. The child sees a clean feed whenever they open YouTube. The parent gets emailed weekly updates on the videos their child is watching. This gives the parent an opportunity to refine Attenbot's choices, helping it better decide which content meets the child's needs. Parents can now avoid relying on the YouTube algorithm, which does not prioritize the well-being of their child. ## How we built it We have a Chrome extension that dynamically blocks content on YouTube and makes API calls to our NextJS server endpoints to get the filtered videos. The filter is based on structured output from gpt-4o, given the YouTube videos the child would have seen without the filter (a hedged sketch follows this entry). The filter prompt is seeded with the filters defined by the parent and with the negative/positive examples the parent flags in their weekly reports. ## Challenges we ran into Chrome extension development is painful. It was a fight against YouTube preventing us from changing the DOM how we'd like, but we got it done. ## Accomplishments that we're proud of The filter is good! We're happy with the result and think it'd be useful for parents, which is great. ## What we learned Chrome extension development was a big thing we got better at. ## What's next for Attenbot Getting it into the hands of users. We're excited to try that out because we think this is a real problem.
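The hedged sketch referenced in "How we built it": one plausible shape for the gpt-4o filtering call using the OpenAI Python SDK. The rule text, video fields, and JSON shape are placeholders, not the exact production prompt.

```python
import json
from openai import OpenAI

client = OpenAI()

def allowed_videos(videos, parent_filters):
    """Return the IDs of videos that comply with the parent's rules."""
    prompt = (f"Parent's rules: {parent_filters}\n"
              f"Candidate videos: {json.dumps(videos)}\n"
              'Return JSON like {"allowed_ids": [...]} keeping only compliant videos.')
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # forces parseable JSON output
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)["allowed_ids"]
```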
Introducing Melo-N – where your favorite tunes get a whole new vibe! Melo-N combines "melody" and "Novate" to bring you a fun way to switch up your music. Here's the deal: You pick a song and a genre, and we do the rest. We keep the lyrics and melody intact while changing up the music style. It's like listening to your favourite songs in a whole new light! How do we do it? We use cool tech tools like Spleeter to separate vocals from instruments, so we can tweak things just right. Then, with the help of the MusicGen API, we switch up the genre to give your song a fresh spin. Once everything's mixed up, we deliver your custom version – ready for you to enjoy. Melo-N is all about exploring new sounds and having fun with your music. Whether you want to rock out to a country beat or chill with a pop vibe, Melo-N lets you mix it up however you like. So, get ready to rediscover your favourite tunes with Melo-N – where music meets innovation, and every listen is an adventure!
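A sketch of Melo-N's stem-separation step using Spleeter's Python API; the file paths are placeholders. "2stems" splits a track into vocals plus accompaniment, which is what lets the lyrics stay intact while the backing track gets regenerated in the target genre.

```python
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")          # vocals + accompaniment model
separator.separate_to_file("song.mp3", "stems/")  # placeholder input/output paths
# stems/song/vocals.wav is kept untouched; stems/song/accompaniment.wav is the
# part regenerated in the new genre (via MusicGen) before mixing both back together.
```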
losing
## Inspiration I, Jennifer Wong, went through many mental health hurdles and struggled to get the specific help that I needed. I was fortunate to find a relatable therapist who gave me an educational understanding of my mental health, which helped me understand my past and accept it. I was able to afford this access to mental health care and connect with a therapist similar to me, but that's not the case for many racial minorities. I saw the power of mental health education and wanted to spread it to others. ## What it does Takes a personalized assessment of your background and mental health in order to provide a curated, self-guided educational program based on your cultural experiences. You can journal your reflections as you learn by watching each video. Videos are curated as an MVP, but eventually we want to hire therapists to create these educational videos. ## How we built it ## Challenges we ran into * Our engineers dropped the project or couldn't attend the working sessions, so there were issues with the workload. There were also issues with technical feasibility, since our knowledge of Swift was limited. ## Accomplishments that we're proud of We're proud that, overall, we were able to create a fully functioning app that still achieves our mission. We were happy to complete the journal tool, which was the most complicated part. ## What we learned We learned how to cut scope when we lost engineers on the team. ## What's next for Empathie (iOS) We will get more customer validation about the problem and see if our idea resonates with people. We are currently getting feedback from therapists who work with people of color. In the future, we would love to partner with schools to provide these types of self-guided services, since there's a shortage of therapists, especially for underserved school districts.
## Inspiration Amidst hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought inspired our web-based app, which provides activities customized to a person's mood to help them relax and rejuvenate. ## What it does We planned to create a platform that detects a user's mood through facial recognition, recommends yoga poses to lift their mood and evaluates pose correctness (a sketch of one way to check correctness follows this entry), and helps users jot their thoughts in a self-care journal. ## How we built it Frontend: HTML5, CSS (framework: Tailwind CSS), JavaScript. Backend: Python, JavaScript. Server side: Node.js, Passport.js. Database: MongoDB (for user login), MySQL (for mood-based music recommendations). ## Challenges we ran into Incorporating OpenCV in our project was a challenge, but it was very rewarding once it all worked. Since all of us were first-time hackers, and due to time constraints, we couldn't deploy our website externally. ## Accomplishments that we're proud of Mental health issues are among the least addressed diseases even though they rank among the top 5 chronic health conditions. We at Umang are proud to have taken notice of such an issue and to help people recognize their moods and cope with the stresses encountered in their daily lives. Through our app we hope to give people a better perspective and push them towards a sounder mind and body. We are really proud that we could create a website that helps break the stigma associated with mental health. It was an achievement that this website includes so many features to improve the user's mental health: letting the user vibe to music curated just for their mood, engaging the user in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture from home with an AI instructor. Furthermore, completing this within 24 hours was an achievement in itself, since this was our first hackathon, which was very fun and challenging. ## What we learned We learned how to implement OpenCV in projects. Another skill we gained was using Tailwind CSS. Besides that, we learned a lot about backends and databases, how to create shareable links, and how to create to-do lists. ## What's next for Umang While the core functionality of our app is complete, it can of course be improved further. 1) We would like to add a chatbot that can be the user's guide/best friend and give advice when the user is in mental distress. 2) We would also like to add a mood log that keeps track of the user's daily mood; if a serious decline in mental health is seen, it can directly connect the user to medical helpers and therapists for proper treatment. This lays the groundwork for further expansion of our website. Our spirits are up and the sky is our limit.
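A sketch of how the yoga-pose correctness check can work on pose landmarks: compute the angle at a joint and compare it against the target for the pose. The landmark values, target angle, and tolerance are illustrative placeholders.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by points a-b-c, each an (x, y) landmark."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. in a warrior pose the front knee should be near 90 degrees
hip, knee, ankle = (0.40, 0.50), (0.45, 0.70), (0.44, 0.90)  # example landmarks
angle = joint_angle(hip, knee, ankle)
print("correct" if abs(angle - 90) < 15 else "adjust your front knee")
```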
## Inspiration Every year, millions of dollars in scholarships and financial aid go unclaimed because students find it difficult to hear about these opportunities. We wanted to create a product that makes hearing about financial aid and other opportunities, such as internships, more accessible. We also wanted to combat the negative mental health effects that come with social media by redefining what social media is. To solve these problems, we created a social networking tool where users learn about opportunities only if they are active and willing to share content that is useful to at least one other user, creating a community of active users who share only uplifting content. ## What it does In order to access other users' posts about opportunities and events for the next week, you have to make your own post and wait for at least one other user to save it to their portfolio, demonstrating that your post has meaning and benefits other users. Each user gets 5 opportunities in 24 hours to make a post with positive impact for others in order to access all of Güey's content. ## How we built it We used Swift to create the iOS app and Firebase as our database. We also used Canva for our UI designs for each of the screens. ## Challenges we ran into This was our first time developing an iOS app and working with Swift, so there was a learning curve in learning Swift in a short amount of time and utilizing it to bring our vision to life. ## Accomplishments that we're proud of In a short amount of time we were able to come up with a product that helps eliminate problems that come with social media, such as comparison and the posting of content that doesn't provide impact. We're also proud that despite our learning curves we were able to make a polished product that handles basic operations like creating a post. ## What we learned We learned how to create an iOS app in a short amount of time, as well as the root causes of the negative effects social media has on mental health. We also learned how to successfully collaborate and create a product from the ground up in a team with different skill sets and experiences. ## What's next for Güey We hope to add more features that allow users to filter how they see other people's posts, such as by how many people saved them, most recently posted, and posts from people in the same groups as you.
partial
## Inspiration We wanted to build something simple that had maximum value. ## What it does It's a Chrome extension that quizzes you on articles once you finish reading them. ## How we built it We used a GPT-3 API to summarize articles and generate questions for the user to answer, with a Python backend (a hedged sketch follows this entry). ## Challenges we ran into Finding a reliable connection to the API, parsing through the data, and prompt engineering. ## Accomplishments that we're proud of Building a product that used GPT-3 and making it widely accessible via a Chrome extension. ## What we learned How to use GPT-3 and make browser extensions. ## What's next for Snippets.AI Backend database infrastructure.
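A hedged sketch of the backend call, assuming the GPT-3-era openai package (pre-1.0); the model name, prompt, and token limit are illustrative rather than the project's exact values.

```python
import openai

def quiz_from_article(article_text, n=3):
    """Ask GPT-3 for short comprehension questions about an article."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write {n} short comprehension questions about:\n\n{article_text}",
        max_tokens=200,
    )
    return resp.choices[0].text.strip().splitlines()
```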
## Inspiration We wanted to take the initiative in coming up with a new idea that optimizes meeting places for groups of people. The idea was inspired by Hack the Valley and other hackathons, where the host location could be predetermined based on the RSVP'd hackers' addresses. ## What it does Lets Find Space allows users to either create or join a group using a group code. They can then wait for other users to join their group until everyone is ready. Group members are then shown on a Google Maps frame and given the location of the meeting. This meeting place is optimized, resulting in a minimal commute for all members (a sketch of the optimization follows this entry). ## How I built it This application runs on Node.js for the backend, integrated with MongoDB, and React for the frontend. It uses advanced geocoding and place-finding based on a user's current location through various APIs offered by Google Cloud Platform. The optimized meeting location is calculated using algebraic and geometric formulas and then passed back to Google's API, which finds the most suitable meeting place based on the members' preferences (coffee shops, restaurants, etc.). The most suitable place is determined by looking for the closest preferred venue around the optimized location. ## Challenges I ran into The team ran into multiple roadblocks throughout the 36 hours of hacking, but our passion and persistence drove us all the way through this experiential adventure. The backend team was challenged to learn new languages, use advanced tools, and integrate APIs for the first time, and proved their skills through communication and collaboration. The frontend team was challenged with connecting their design to the backend functionality and succeeded in doing so with the backend team's help. ## Accomplishments that I'm proud of We finished the product completely and came up with extended features to develop in the future. We got the best spot on campus and were able to apply the concept of working hard and playing hard. ## What I learned We learned to appreciate working within a team to increase productivity. We learned that there is a vast array of APIs, tools, and technology that allow us to develop any idea we have in mind. ## What's next for Lets Find Space We will continue to develop and extend additional features, and keep it open-source and free for the public!
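The optimization sketch referenced above: Weiszfeld's algorithm iterates toward the point minimizing the total straight-line distance to all members, which is one concrete form of the "algebraic and geometric formulas" idea (not necessarily the exact formula the team used). Coordinates are illustrative and treated as planar, which is reasonable at city scale.

```python
import numpy as np

def meeting_point(members, iters=100):
    """Geometric median of member coordinates via Weiszfeld's algorithm."""
    pts = np.asarray(members, dtype=float)
    guess = pts.mean(axis=0)                    # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - guess, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)       # guard against division by zero
        guess = (pts / d[:, None]).sum(axis=0) / (1 / d).sum()
    return guess

# three hypothetical member locations around Toronto
print(meeting_point([(43.66, -79.39), (43.78, -79.19), (43.59, -79.64)]))
```

The resulting point is then handed to the Places API to find the nearest venue matching the group's preference.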
Curate is an AI-enabled browser extension to allow individuals to more effectively and efficiently consume digital information. ## Inspiration Content on the web is rarely able to communicate ideas in an effective manner. Whether it be browsing the latest news or perusing a technical report, content consumers are often inundated with distractions (advertisements and boilerplate, for example). This problem is much more pronounced in an enterprise setting. Journalists and researchers are required to sift through countless sources to collect evidence. Software developers need to parse through lengthy documentation to implement programs. Executives need to gather high-level context from volumes of meeting minutes and reports. As such, the lack of support in ingesting the ever-increasing amount of digital information is a significant productivity drain on both individuals and enterprises. While many tools exist to enable more efficient content creation (ex. Google Docs & Grammarly), few services exist to allow individuals to better *consume* this content. ## What it does Curate is a browser extension that allows for better content consumption. It exposes a distractionless environment, a "reader-mode," so that readers can process and digest digital information more efficiently. Using the latest machine learning models ([BERT](https://arxiv.org/abs/1810.04805) in particular), our service **automatically highlights** the most important sentences within an article. We also recognize that people differ in their preferred learning strategy. To help cater to this preference and to enable content accessibility, we leveraged text-to-speech technology to narrate a given piece of content. ## How I built it The browser extension was built using React.js using common libraries and leveraging this [cross-browser extension boilerplate](https://github.com/abhijithvijayan/web-extension-starter). The backend is a Python server powered by [FastAPI](https://fastapi.tiangolo.com/). We were able to leverage the latest NLP capabilities using Google's [BERT](https://arxiv.org/abs/1810.04805) implementation for extractive summarization (using this [library](https://github.com/dmmiller612/bert-extractive-summarizer)). For text-to-speech capabilities, we used the neural models offered by Google Cloud [text-to-speech](https://cloud.google.com/text-to-speech). We used Google's [App Engine](https://cloud.google.com/appengine) to host an internal endpoint for development. As the BERT ML model requires quite a bit of memory, we needed to use the F4\_1G instance class. ## What's next for Curate We see a significant value and monetization opportunity for Curate, and would like to keep developing this product after the hackathon. We are excited to continue work on this product with an initial market focus on digital-forward content consumers (such as young software engineers & journalists) using a freemium pricing model. Our competitive advantage comes from a unique application of the latest machine learning capabilities to create a unified and efficient platform for content ingestion. As we are one of the first to enter the market with a priority in leveraging ML, we believe that we can maintain our competitive advantage by establishing a data moat. Tracked user behaviors, such as changing the automatically identified texts or the generated transcripts, can be used as training data to fine-tune our ML models. This enables us to offer a significantly more effective product than any subsequent competitors. 
That being said, we want to stress that data privacy is of utmost concern - we have no intention of continuing the history of data exploitation. We also see a significant specialization and monetization opportunity within the corporate market (especially in law, education, and journalism), where the advantage of a data moat is especially clear. It would be immensely difficult for new entrants to compete with ML models fine-tuned to industry-specific content.
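A minimal sketch of the extractive-summarization step described above, assuming the `bert-extractive-summarizer` package is installed (`pip install bert-extractive-summarizer`); the article text is a placeholder:

```python
from summarizer import Summarizer  # bert-extractive-summarizer

# Loads a pretrained BERT model under the hood; this is memory-hungry,
# which is why the writeup mentions needing App Engine's F4_1G class.
model = Summarizer()

article = (
    "Long article text scraped from the page goes here. It can span many "
    "sentences; the model embeds each one and ranks them by centrality. "
    "The top-ranked sentences become the highlights shown to the reader."
)

# Keep roughly the top 30% of sentences; these are what the extension
# would highlight in reader mode.
highlights = model(article, ratio=0.3)
print(highlights)
```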
losing
## Inspiration
During Hack the North, we realized how difficult it was to find suitable teammates to collaborate with. Browsing through Slack channels and random project ideas felt inefficient, leading to frustration. This struggle inspired the idea for Developers Assemble — a platform where developers can quickly find others based on skills and project needs, using a simple swiping model.
## What it does
Developers Assemble connects developers looking to collaborate on projects. Users create profiles highlighting their skills (e.g., frontend, backend, full-stack), and can post or swipe on projects looking for teammates. The platform helps developers match with projects or other developers based on their specialties, making the process of forming a team easy and efficient.
## How we built it
We built Developers Assemble with a focus on real-time interactions and scalability. The frontend was developed using React, Tailwind, and Vite to create a dynamic and responsive user interface. For the backend, we used Django REST Framework, with SQLite as our database.
## Challenges we ran into
One of the biggest challenges we faced was integrating the backend and frontend for the first time. Connecting the React frontend with the Django backend posed difficulties, particularly in ensuring that the APIs and real-time matching features communicated seamlessly. This process taught us the importance of efficient communication between the frontend and backend, laying the groundwork for future scalability.
## Accomplishments that we're proud of
We're proud of creating a seamless, intuitive platform that developers can use to find and connect with others for collaborative work. Successfully integrating real-time matching and building a system that scales with user growth were significant achievements. Additionally, building the platform from the ground up taught us important lessons about matchmaking algorithms and user experience.
## What we learned
We learned that user experience is key — developers want a fast, easy-to-use platform that makes finding collaborators painless. We also learned a lot about building real-time systems and ensuring server efficiency. Fine-tuning the matching algorithm taught us how crucial it is to accurately pair users based on skills and project needs. Overall, the development process helped us gain a deeper understanding of collaboration dynamics in the tech space.
## What's next for DevelopersAssemble
The next steps for **Developers Assemble** include adding features like team messaging, in-app project management tools, and integration with popular developer platforms like GitHub and GitLab. We also plan to introduce a ranking system where developers can be rated based on their contributions and collaboration skills; this will help teams identify the best matches based not only on technical skills but also on teamwork and reliability. We aim to refine the matching algorithm to factor in these new ranking metrics and offer even more refined matches (a sketch of a simple skill-overlap matcher follows this writeup), and to expand the platform to cater to larger developer communities worldwide. By continuously improving these core functionalities, we hope to create a comprehensive platform that truly supports collaborative development.
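The writeup doesn't describe the matching algorithm itself, so here is only a toy sketch of one plausible approach, scoring candidates by Jaccard overlap between a project's required skills and each developer's skills (all names and data are hypothetical):

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two skill sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_candidates(project_skills, developers):
    """Return developers sorted by how well their skills fit the project."""
    scored = [(jaccard(set(project_skills), set(d["skills"])), d["name"])
              for d in developers]
    return sorted(scored, reverse=True)

devs = [
    {"name": "amy",  "skills": ["react", "tailwind", "vite"]},
    {"name": "raj",  "skills": ["django", "sqlite", "react"]},
    {"name": "finn", "skills": ["swift"]},
]
print(rank_candidates(["react", "django"], devs))
# [(0.5, 'raj'), (0.25, 'amy'), (0.0, 'finn')]
```

A production matcher would fold in swipe history and the planned ranking metrics, but a set-overlap score like this is a common first cut.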
## Inspiration
In a post-pandemic world, where hygiene has become of paramount importance and the informed consumer becomes rarer and rarer, we decided that the age of asymmetric information about food-establishment health-code violations was over! Diners are entitled to transparency when it comes to the cleanliness of the businesses they decide to patronize.
## What it does
CleanBite allows users to visualize an establishment's previous violations on a user-friendly map. Users can also file their own complaint, which will be sent to the Inspection Des Aliment de La Ville De Montreal. Food establishments can be conveniently searched through a search bar, by name or address.
## How we built it
We built a web application using React, Vite, ExpressJS, NodeJS, Twilio, and BeautifulSoup. The back end of our service is built using Docker images and containers hosting our database downloader and formatter, which pulls from Montreal's government website and provides data to our web app. We also built a basic API with Express that sends emails to the Montreal food inspectors, using Twilio SendGrid to manage and send them.
## Challenges we ran into
We ran into multiple challenges, such as getting the complaint data sent by clients to the Montreal food inspectors. We had to find a way to make the front end interact with a back-end service, which is when we decided to make a basic email-sending API in Express, which would then send a request to the Twilio SendGrid API. Another challenge in the development of the web application was the conversion of addresses to latitude and longitude values to map onto the Google Maps API. It turns out Google provides a Geocoding API for this exact purpose (a sketch follows this writeup). We also ran into challenges with the search bar: how to organize the data and live-preview the query matches.
## Accomplishments that we're proud of
Managing to make the varying systems and APIs interact and work with each other without any conflicts. We are also proud of the visualization of the data, considering none of us had a strong background in front-end or UI/UX skills.
## What we learned
We learned how to use the Google Maps API to visualize location markers, and how to use the Twilio API to facilitate communication through its API endpoints. We learned to deploy Docker images that self-update the remotely hosted CSV file once a week, ensuring that as new restaurants are added, the website reflects these changes. We also learned that Docker images cannot run two concurrent processes! This came up when we tried to deploy the React application and the NodeJS server at the same time.
## What's next for CleanBites
In the future, we would love to add functionality for users to self-report, perhaps through images or other means more informal and crowdsourced than a governmental email message. We would also like to support sorting the restaurants displayed. By allowing user self-reports, we could eventually also incorporate NLP to parse users' reviews and extract more information about the nature of health-code violations in certain establishments.
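A minimal sketch of the address-to-coordinates step using the official `googlemaps` Python client (the original project did this from Node; the API key and address are placeholders, and the Geocoding API must be enabled on the project):

```python
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

def to_lat_lng(address: str):
    """Convert a street address into (lat, lng) for a map marker."""
    results = gmaps.geocode(address)
    if not results:
        return None  # address could not be geocoded
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(to_lat_lng("801 Rue Sherbrooke O, Montreal, QC"))
```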
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution.
## What it does
It helps developers find projects to work on, and helps project leaders find group members. By using data from GitHub commits, it can determine what kind of projects a person is suited for.
## How we built it
We decided on building an app for the web, then chose a GraphQL, React, Redux tech stack.
## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't get all the data we needed (a sketch of rate-limit-aware fetching follows this writeup). Authentication was hard to implement, since we had to try a number of approaches to get it to work. The last challenge was determining how to relate users to the projects they could be paired up with.
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database, and the authentication are all ready to show.
## What we learned
We learned that working with third-party APIs can be challenging, and that each one brings its own unique hurdles.
## What's next for Hackr\_matchr
Scaling up is next: supporting more kinds of projects, with more robust matching algorithms and higher user capacity.
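A sketch of checking the GitHub REST API's rate-limit budget before fetching commits, using only documented endpoints (the token is a placeholder; an authenticated client gets 5,000 calls per hour versus 60 unauthenticated):

```python
import requests

TOKEN = "ghp_placeholder"  # personal access token (placeholder)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def remaining_calls() -> int:
    """Ask GitHub how many core API calls remain in this window."""
    r = requests.get("https://api.github.com/rate_limit", headers=HEADERS)
    r.raise_for_status()
    return r.json()["resources"]["core"]["remaining"]

def recent_commits(owner: str, repo: str):
    """Fetch recent commits only if we have budget to spare."""
    if remaining_calls() < 10:
        raise RuntimeError("rate limit nearly exhausted; back off")
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    return requests.get(url, headers=HEADERS).json()

print(len(recent_commits("octocat", "Hello-World")))
```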
losing
Everything in this project was planned & completed during TreeHacks.
## Inspiration
Every student knows the week in a course after which nothing makes much sense. When enough "knowledge gaps" form in a course that builds upon itself, it becomes impossible to learn, and easy to stop paying attention, doubt yourself, or even switch majors.
A top-tier TED talk is Sal Khan on [mastery learning](https://www.youtube.com/watch?v=-MTRxRO5SRA). His core idea is that if we teach for mastery rather than passing, we can remedy the problems in learning caused by knowledge gaps, which grow exponentially over time.
There's lots of talk about using AI to catch cheating, to assist in grading and translation, but less about facilitating mastery learning. We want to build a tool that helps educators facilitate this seamlessly.
## What it does
Educators join Ta.ai and get a QR code students can scan to join their class. Before lecture, educators upload their slides, and as students leave lecture, they receive a few short/long-answer questions based on the slides via text, allowing them to answer at their convenience. Each answer gets a quantitative score depending on how thoroughly the student answered the question.
On the flip side, educators are able to track which students are yet to complete the questions, the cumulative % of correctness in each lecture, and the individual-level responses to each question sent out. This helps educators understand which topics need to be better communicated, and where students are failing to grasp the material.
As the class progresses, students have access to a search that provides ChatGPT-style answers to their questions about course content, powered by the content of the slide decks.
## How we built it
We thought about our pain points as students, and then asked educators about theirs. We learned educators needed better ways to track student knowledge gaps. We knew that we needed an easier, more engaging way to demonstrate knowledge: text messaging is an underutilized and highly convenient medium, which made it a good one to try. The product emerged from this intersection.
We used Convex as our backend, Flask to deploy functions, and Next.js as our frontend. We used Chroma, LangChain, and OpenAI to generate multimodal embeddings for the slides and conduct Retrieval-Augmented Generation (RAG); a minimal sketch of the retrieval step follows this writeup. We used Twilio to send text messages to students.
## Challenges we ran into
**Refusing to compromise front-end quality**
Coming from a startup background, we knew that the devil is in the details for product usage, and wanted to build something that was shippable and usable in the real world. This came with a lot of extra time spent on interaction design, animation, and hosting.
**Picking up Convex**
We had to pick up a new backend stack and then play musical chairs switching ownership and membership, as only 2 people were allowed on the free tier. We stuck with it because we loved the ease of using Convex in our stack otherwise :)
**Deploying with Twilio**
We spent hours debugging a niche error which turned out to be the need to run it locally on a specific port :(
## Accomplishments that we're proud of
* First ever hackathon for all of us!
* Shipped a lot, fast, to bring the vision to life
* Lots of laughing with each other
* 20 hours of sleep between the 4 of us
## What we learned
**Don't shoot for the moon**
We originally planned to be a lot more ambitious, hoping to make embeddings for lecture recordings and integrate the search with Canvas and Ed.
**Picking up new stacks isn't easy**
Two of us were completely new to web development 36 hours ago. Tools like v0 and Cursor make it a lot easier to learn, but the learning curve is still there!
## What's next for Ta.ai
* We're going to **expand search functionality** and data availability by allowing search across Canvas and EdStem too.
* We've already sent a few feeler emails to run a pilot program with Stanford CS classes this Spring quarter. We'll be cold-emailing 100 more CS professors across quarter-system schools.
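A minimal sketch of the retrieval half of the RAG pipeline, assuming the `chromadb` package with its default embedding function (slide text, IDs, and the query are placeholders; the team's actual pipeline also involved LangChain and OpenAI):

```python
import chromadb

client = chromadb.Client()  # in-memory instance; nothing persists
slides = client.create_collection("lecture_slides")

# Index slide text; Chroma embeds documents with its default model.
slides.add(
    ids=["lec3-s12", "lec3-s13"],
    documents=[
        "Dynamic programming: overlapping subproblems and memoization.",
        "Example: computing Fibonacci in O(n) with a table.",
    ],
)

# Retrieve the most relevant slides for a student question; the matches
# would then be stuffed into an LLM prompt to generate the answer.
hits = slides.query(query_texts=["why is memoized fib fast?"], n_results=2)
print(hits["documents"][0])
```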
## Inspiration
As a multi-disciplinary team, each team member preferred a different learning style. As we discussed project ideas, we found a common thread: often, we'd struggle with a class or concept not because it was hard, but because we struggled with the teaching style. Our inspiration was bringing the "Aha!" moment to students across the world and personalizing their experiences to improve their learning through AI and technology.
## What it does
Our platform provides the support needed to foster a growth mindset while promoting full engagement in education for both students and instructors. Teach It allows teachers to upload a range of resources based on their syllabus topics. Each resource gets sorted based on the tag the teacher assigns it: "auditory", "visual", or "interactive." Next, their students pick their preferred learning style and see the resources the teacher provided and labeled. They are free to pick more than one, or all three if so inclined, and can change their preference as the semester goes on. Finally, our AI summarizes the documents provided by the teacher and creates quizzes that are standardized across learning methods (a sketch follows this writeup). This ensures each student is being tested on the same level as their peers.
## How we built it
We used Reflex, one of the CalHacks sponsors' products, and coded both the front and back end in Python. We also used OpenAI to create a quizzing structure to test the students.
## Challenges we ran into
We ran into a few issues initially when creating our team, as one of our members left early on, but we were able to come up with our idea and start working on it soon after. One of our team members had technical issues when trying to run the website, but thanks to the CalHacks mentors and the sponsors tabling for Reflex, we were able to get past it!
## Accomplishments that we're proud of
This is our first hackathon, and we're proud of all the effort we put into this! Even though we had a rough start, we were able to build something we're all passionate about. :)
## What we learned
We learned how to use Reflex, push and pull Git code from the VS Code terminal, use LLMs, and do both front-end and back-end work.
## What's next for Teach It
After this hackathon, we want to continue working on this project and improving it! By sharing it with peers, we can get a better idea of how to make it better and better help students.
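One plausible shape for the quiz-generation call, assuming the official `openai` Python client; the prompt, model name, and helper are illustrative, not the team's exact code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_quiz(summary: str, n_questions: int = 3) -> str:
    """Ask the model for a short quiz grounded in a resource summary,
    so every learning style is tested against the same material."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Write quiz questions strictly from the given text."},
            {"role": "user",
             "content": f"Text:\n{summary}\n\nWrite {n_questions} questions."},
        ],
    )
    return resp.choices[0].message.content

print(make_quiz("Photosynthesis converts light energy into glucose."))
```

Grounding the prompt in the teacher-provided summary (rather than the raw resource) is what keeps quizzes standardized across the auditory, visual, and interactive versions of the same topic.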
## 💡 Inspiration
You have another 3-hour online lecture, but you're feeling sick and your teacher doesn't post any notes. You don't have any friends who can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind: "Will I ever pass this course?"
If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle with paying attention in class, miss class, have a rough home environment, or just want to get ahead in their studies. We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackles was the perfect fit.
## 🔍 What it does
First, our AI-powered summarization engine creates a set of live notes based on the current lecture. Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos! Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind.
## ⭐ Feature List
* Dashboard with all your notes
* Summarizes your lectures automatically (a sketch of the summarization step follows this writeup)
* Select/highlight text from your online lectures
* Organize your notes with an intuitive UI
* Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime
* Text simplification, definitions, and synonyms anywhere on the web
* DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation
## ⚙️ Our Tech Stack
* Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API
* Web Application: Chakra UI + React.js, Next.js, Vercel
* Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js
* Infrastructure: Firebase/Firestore
## 🚧 Challenges we ran into
* Completing our project within the time constraint
* There were many APIs to integrate, which meant a lot of time spent debugging
* Working with Google Chrome extensions, which we had never worked with before
## ✔️ Accomplishments that we're proud of
* Learning how to work with Google Chrome extensions, which was an entirely new concept for us.
* Leveraging Distributed Computing, a very handy and intuitive API, to make our application significantly faster and better to use.
## 📚 What we learned
* The Chrome Extension API is incredibly difficult; budget 2x as much time for figuring it out!
* Working on a project you can relate to helps a lot with motivation
* Chakra UI is legendary and a lifesaver
* The Chrome Extension API is very difficult, did we mention that already?
## 🔭 What's next for AcadeME?
* Implementing a language translation toggle to help international students
* Note encryption
* Note sharing links
* A Distributive Quiz mode, for online users!
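A minimal sketch of summarizing a lecture-transcript chunk with BART via Hugging Face `transformers` (the model choice is a common default, not necessarily the team's; the original project also parallelized this across DCP workers):

```python
from transformers import pipeline

# facebook/bart-large-cnn is a standard abstractive-summarization checkpoint.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript_chunk = (
    "Today we covered the causes of the French Revolution, including the "
    "fiscal crisis, food shortages, and Enlightenment ideas spreading "
    "among the Third Estate..."
)

notes = summarizer(transcript_chunk, max_length=60, min_length=15,
                   do_sample=False)
print(notes[0]["summary_text"])
```

Speech-to-text (AssemblyAI in this project) would produce the transcript chunks; fanning them out to multiple workers is what makes the 5-10x speedup from distributed computation plausible, since each chunk summarizes independently.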
losing
## Inspiration
Many fitness enthusiasts struggle with maintaining **proper exercise form, leading to injuries** and ineffective workouts. Observing these common issues motivated us to create a tool that provides real-time feedback to enhance workout safety and effectiveness.
## What it does
Muscle Intelligence **analyzes users' exercise form** in real time using computer vision and machine learning, offering actionable feedback on what to fix and how, to optimize performance and prevent injuries.
## How we built it
We developed Muscle Intelligence by integrating several cutting-edge technologies. For movement detection and analysis, we utilized **TensorFlow and OpenCV**, enabling accurate recognition of various exercise forms. The frontend was built with **React Native** to ensure cross-platform compatibility, allowing users on both iOS and Android devices to benefit from the app. On the backend, we implemented real-time data processing to deliver instant feedback seamlessly within the app. By combining **machine learning** models with an intuitive user interface, we created a smooth and engaging **user experience** that effectively guides users through their workouts.
## Challenges we ran into
One of the main challenges we faced was **ensuring the accuracy of movement detection**. Fine-tuning our machine learning models and building data pipelines to interpret complex body movements required extensive data collection and iterative testing.
## Accomplishments that we're proud of
We are particularly proud of **developing highly accurate machine learning models** that can analyze a wide range of exercises with precision. Creating a user-friendly interface that is both intuitive and engaging was another significant achievement, ensuring that users of all technical backgrounds can easily navigate and benefit from the app. Additionally, achieving seamless real-time feedback has greatly enhanced the user experience, allowing immediate adjustments to improve exercise form.
## What we learned
Through the development of Muscle Intelligence, we gained deep insights into **advanced computer vision techniques** and their application in real-time scenarios. We learned how to **optimize machine learning models** for better accuracy and performance, ensuring that our app provides reliable feedback. Our experience in **user-centered design** underscored the importance of creating interfaces that prioritize usability and engagement, making technology accessible to all users.
## What's next for Muscle Intelligence
Looking ahead, we plan to **implement a Dynamic Time Warping (DTW)** metric, due to its ability to handle slight delays in movement (temporal misalignment); a small sketch follows this writeup. Separately, we will include a broader variety of exercises and types of workouts (cardio, boxing). We aim to develop **personalized training** plans that adapt to each user's performance and goals, providing tailored guidance to enhance their fitness journey. Integrating with wearable devices like fitness trackers and smartwatches is also on our roadmap, allowing for more comprehensive data collection and analysis. Furthermore, we intend to leverage advanced AI to offer more personalized and detailed feedback, making Muscle Intelligence an even more effective tool for users striving to achieve their fitness objectives.
*Stay fit, stay intelligent with Muscle Intelligence!*
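To illustrate the planned DTW metric, here is a minimal NumPy sketch comparing a user's joint-angle sequence against a reference rep; the sequences are made up and the real feature set would come from the pose model:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW between two 1-D sequences,
    e.g. per-frame elbow angles of a user rep vs. a reference rep.
    Small timing offsets between the sequences barely affect the score."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

reference = np.array([170, 150, 110, 70, 110, 150, 170])       # ideal curl
user_rep  = np.array([170, 168, 150, 110, 72, 108, 150, 171])  # slower start
print(dtw_distance(reference, user_rep))  # small value => good form timing
```

Unlike a frame-by-frame difference, the warping path lets a rep that starts half a second late still score well, which is exactly the temporal-misalignment property the roadmap cites.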
## Inspiration
Coach.me was born from a mission to make fitness training inclusive for all, leveraging AI and computer vision. Our aim is to bring personalized exercise feedback to everyone, fostering a friendly and accessible approach to fitness while also helping users refine their form to prevent injuries.
## What it does
Coach.me offers a variety of exercises and provides real-time feedback on form accuracy. Users receive instant notifications when performing exercises correctly and can ask questions for immediate feedback, enhancing their learning and safety during workouts.
## How we built it
Coach.me was built using a combination of technologies to provide a seamless user experience. The UI was developed using the PyQt Python library, ensuring an intuitive interface for users. For the critical task of form detection, we employed MediaPipe and OpenCV for computer vision capabilities, enabling accurate real-time analysis of exercise form (a sketch of the joint-angle computation follows this writeup). Additionally, we integrated Whisper for speech-to-text functionality, ChatGPT for AI prompting, and OpenAI TTS for text-to-speech capabilities, enhancing the interactive experience and accessibility of the app.
## Challenges we ran into
During development, we encountered hurdles installing the required libraries on our Coach.me tablet running Ubuntu on an ARM architecture. We also worked to seamlessly integrate the various technologies, ensuring effective feedback delivery. Through collaboration and innovation, we overcame these obstacles and enhanced the app's functionality.
## Accomplishments that we're proud of
We had a blast when our pose detection algorithms nailed recognizing exercise forms, and we couldn't resist trying them out ourselves! Plus, we're stoked about seamlessly blending all those cool tech tools together to make a smooth user experience. These wins really showcase our team's knack for making fitness training fun and accessible.
## What we learned
Working on Coach.me, we realized that crafting UIs with Python, especially using PyQt, can be quite tricky. Plus, we got a crash course in exercise anatomy, learning all about those nitty-gritty joint angles. These experiences really leveled up our skills and gave us some great stories to share!
## What's next for Coach.me
We're eager to expand Coach.me's functionality by adding more exercises and catering to a broader range of fitness enthusiasts. Imagining Coach.me integrated into actual gyms is an exciting prospect, offering users access to real-time feedback and guidance during their workouts. Additionally, we're aiming to implement full workout detection capabilities, enabling Coach.me to track and log users' performance for valuable insights and progress tracking. These future developments will further enhance Coach.me's utility and effectiveness in helping users achieve their fitness goals.
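A minimal sketch of the MediaPipe joint-angle computation the writeup alludes to, here for a single image frame (the input file and the elbow-angle choice are illustrative; the app processes a live video stream instead):

```python
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def angle(a, b, c):
    """Angle at joint b (degrees) formed by landmarks a-b-c."""
    ang = math.degrees(math.atan2(c.y - b.y, c.x - b.x)
                       - math.atan2(a.y - b.y, a.x - b.x))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

frame = cv2.imread("curl_frame.jpg")  # placeholder input frame
with mp_pose.Pose(static_image_mode=True) as pose:
    res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.pose_landmarks:
        lm = res.pose_landmarks.landmark
        P = mp_pose.PoseLandmark
        elbow = angle(lm[P.LEFT_SHOULDER], lm[P.LEFT_ELBOW], lm[P.LEFT_WRIST])
        print(f"left elbow: {elbow:.0f} deg")  # flag reps outside a safe range
```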
## Inspiration
Falls are the leading cause of injury and death among seniors in the US and cost over $60 billion in medical expenses every year. With one in four seniors in the US experiencing a fall each year, attempts at prevention are badly needed and are currently implemented through careful monitoring and caregiving. However, in the age of COVID-19 (and even before), remote caregiving has been a difficult and time-consuming process: caregivers must either rely on updates given by the seniors themselves or monitor a video camera or other device 24/7. Tracking day-to-day health and progress is nearly impossible, and maintaining and improving strength and mobility presents unique challenges. Having personally experienced this exhausting process in the past, our team decided to create an all-in-one tool that helps prevent such devastating falls from happening and makes remote caregivers' lives easier.
## What it does
NoFall enables smart ambient activity monitoring, proactive risk assessments, a mobile alert system, and a web interface to tie everything together.
### **Ambient activity monitoring**
NoFall continuously watches and updates caregivers on the condition of their patient through an online dashboard. The activity section of the dashboard provides the following information:
* Current action: sitting, standing, not in area, fallen, etc.
* How many times the patient drank water and took their medicine
* Graph of activity throughout the day, annotated with key events
* Histogram of stand-ups per hour
* Daily activity goals and progress score
* Alerts for key events
### **Proactive risk assessment**
Using the powerful tools offered by Google Cloud, a proactive risk assessment can be activated with a simple voice query to a smart speaker like Google Home. When starting an assessment, our algorithms begin analyzing the user's movements against a standardized medical testing protocol for screening a patient's risk of falling. The screening consists of two tasks:
1. Timed Up-and-Go (TUG) test: The user is asked to sit up from a chair and walk 10 feet. The user is timed, and the timer stops when 10 feet have been walked. If the user completes this task in over 12 seconds, the user is said to be at a high risk of falling.
2. 30-second Chair Stand test: The user is asked to stand up and sit down on a chair repeatedly, as fast as they can, for 30 seconds. If the user is not able to sit down more than 12 times (for females) or 14 times (for males), they are considered to be at a high risk of falling.
The videos of the tests are recorded and can be rewatched on the dashboard. The caregiver can also view the results of tests on the dashboard in a graph as a function of time.
### **Mobile alert system**
When the user is in a fallen state, a warning message is displayed on the dashboard and texted via SMS to the assigned caregiver's phone.
## How we built it
### **Frontend**
The frontend was built using React and styled using TailwindCSS. All data is updated from Firestore in real time using listeners, and new activity and assessment goals are also instantly saved to the cloud. Alerts are instantly delivered to the web dashboard and caretakers' phones using IFTTT's SMS Action. We created voice assistant functionality through Amazon Alexa skills and Google Home routines. A voice command triggers an IFTTT webhook, which posts to our Flask backend API and starts risk assessments.
### **Backend**
**Model determination and validation**
To determine the pose of the user, we utilized Google's MediaPipe library in Python. We decided to use the BlazePose model, which is lightweight and can run on real-time security camera footage. The BlazePose model is able to determine the pixel locations of 33 landmarks of the body, corresponding to the hips, shoulders, arms, face, etc., given a 2D picture of interest. We connected the real-time streaming from the security camera footage to continuously feed frames into the BlazePose model. Our testing confirmed the ability of the model to determine landmarks despite occlusion and different angles, which would be commonplace in real security camera footage.
**Ambient sitting, standing, and falling detection**
To determine if the user is sitting or standing, we calculated the angle that the knees make with the hips and set a threshold, where angles (measured from the horizontal) less than that number are considered sitting. To account for the case where the user is directly facing the camera, we also determined the ratio of the hip-to-knee length to the hip-to-shoulder length, reasoning that the 2D landmarks of the knees would be closer to the body when the user is sitting. To determine the fallen status, we checked whether the center of the shoulders and the center of the knees made an angle of less than 45 degrees for over 20 frames at once. If the legs made an angle greater than a certain threshold (close to 90 degrees), we considered the user to be standing. Lastly, if there was no detection of landmarks, we considered the status to be unknown (the user may have left the room/area). A sketch of this threshold logic follows this writeup. Because of the different possible angles of the camera, we also determined the perspective of the camera based on the convergence of straight lines (found with a Hough transform algorithm). The convergence indicates how angled the camera is, and the thresholds for the ratio of lengths can be mathematically transformed accordingly.
**Proactive risk assessment analysis**
To analyze Timed Up-and-Go tests, we first determined whether the user is able to change their status from sitting to standing, and then determined the distance the user has traveled by computing the speed with a finite-difference calculation of the velocity from the previous frame. The pixel distance was then transformed, based on the distance between the user's eyes and the height of the user (which is pre-entered on our website), to determine the real-world distance the user has traveled. Once the user reaches 10 meters of cumulative distance traveled, the timer stops, and the result is reported to the server.
To analyze 30-second Chair Stand tests, the number of transitions between sitting and standing is counted. Once 30 seconds have been reached, the number of times the user sat down is half the number of transitions, and the data is sent to the server.
## Challenges we ran into
* Figuring out port forwarding with a barebones IP camera, then streaming the video to the world wide web for consumption by our model.
* Calibrating the tests (time limits, excessive movements) to follow the standards outlined by research. We had to come up with a way to mitigate random errors that could trigger fast changes between sitting and standing.
* Converting recorded videos to a web-compatible format. The video recording package we used in Python could only save .avi videos, which are not web-compatible, so we had to script ffmpeg to dynamically convert the videos into .mp4.
* Live streaming the processed Python video to the front end required processing frames with ffmpeg and a custom streaming endpoint.
* Finding a model that works on real-time security camera data: we tried OpenPose, PoseNet, tf-pose-estimation, and other models, but finally we found that MediaPipe was the only model that could fit our needs.
## Accomplishments that we're proud of
* Making the model ignore the noisy background, poor-quality video stream, and dim lighting
* Fluid communication from backend to frontend with live-updating data
* Great team communication and separation of tasks
## What we learned
* How to use IoT to simplify and streamline end-user processes
* How to use computer vision models to analyze pose and velocity from a reference length
* How to display data in accessible, engaging, and intuitive formats
## What's next for NoFall
We're proud of all the features we have implemented in NoFall and are eager to implement more. In the future, we hope to generalize to more camera angles (such as a bird's-eye view), support lower-light and infrared ambient activity tracking, enable obstacle detection, monitor for signs of other conditions (heart attack, stroke, etc.), and detect more therapeutic tasks, such as daily cognitive puzzles for fighting dementia.
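A simplified sketch of the threshold logic described above, operating on already-extracted landmark midpoints in normalized image coordinates (the thresholds and helper names are illustrative, not the team's exact values):

```python
import math

def torso_angle(shoulder_mid, knee_mid):
    """Angle (degrees from horizontal) of the shoulder-to-knee line."""
    dx = knee_mid[0] - shoulder_mid[0]
    dy = knee_mid[1] - shoulder_mid[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def classify(shoulder_mid, knee_mid, fallen_frames):
    """Return (status, updated fallen-frame counter) for one frame."""
    a = torso_angle(shoulder_mid, knee_mid)
    if a < 45:                      # body nearly horizontal
        fallen_frames += 1
        if fallen_frames > 20:      # sustained => real fall, not jitter
            return "fallen", fallen_frames
        return "possibly falling", fallen_frames
    return ("standing" if a > 80 else "sitting"), 0

print(classify((0.50, 0.40), (0.52, 0.75), 0))  # upright -> ('standing', 0)
```

The 20-frame counter is the debounce the team mentions for mitigating random errors: a single noisy frame can't flip the status to "fallen" on its own.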
losing
## Inspiration
We want to impress everyone with our amazing project! We wanted to create a revolutionary tool for image identification!
## What it does
It identifies any pictures that are uploaded and describes them.
## How we built it
We built this project with tons of sweat and tears. We used the Google Vision API, Bootstrap, CSS, JavaScript, and HTML (a sketch of the Vision call follows this writeup).
## Challenges we ran into
We couldn't find a way to use the API key. We couldn't link our HTML files with the stylesheet and the JavaScript file. We didn't know how to add drag-and-drop functionality. We couldn't figure out how to use the API in our backend. We had to edit the video with a new video editing app. We had to watch a lot of tutorials.
## Accomplishments that we're proud of
The whole program works (backend and frontend). We're glad that we'll be able to make a change in the world!
## What we learned
We learned that Bootstrap 5 doesn't use jQuery anymore (the hard way). :'(
## What's next for Scanspect
The drag-and-drop function for uploading images!
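The writeup doesn't show the backend call, so here is a minimal sketch using the official `google-cloud-vision` Python client (credentials are assumed to be configured via the GOOGLE_APPLICATION_CREDENTIALS environment variable; the file path is a placeholder):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def describe(path: str):
    """Return (description, score) label pairs for an uploaded picture."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, round(label.score, 2))
            for label in response.label_annotations]

print(describe("upload.jpg"))  # e.g. [('Dog', 0.98), ('Snout', 0.91), ...]
```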
## Inspiration Recent mass shooting events are indicative of a rising, unfortunate trend in the United States. During a shooting, someone may be killed every 3 seconds on average, while it takes authorities an average of 10 minutes to arrive on a crime scene after a distress call. In addition, cameras and live closed circuit video monitoring are almost ubiquitous now, but are almost always used for post-crime analysis. Why not use them immediately? With the power of Google Cloud and other tools, we can use camera feed to immediately detect weapons real-time, identify a threat, send authorities a pinpointed location, and track the suspect - all in one fell swoop. ## What it does At its core, our intelligent surveillance system takes in a live video feed and constantly watches for any sign of a gun or weapon. Once detected, the system immediately bounds the weapon, identifies the potential suspect with the weapon, and sends the authorities a snapshot of the scene and precise location information. In parallel, the suspect is matched against a database for any additional information that could be provided to the authorities. ## How we built it The core of our project is distributed across the Google Cloud framework and AWS Rekognition. A camera (most commonly a CCTV) presents a live feed to a model, which is constantly looking for anything that looks like a gun using GCP's Vision API. Once detected, we bound the gun and nearby people and identify the shooter through a distance calculation. The backend captures all of this information and sends this to check against a cloud-hosted database of people. Then, our frontend pulls from the identified suspect in the database and presents all necessary information to authorities in a concise dashboard which employs the Maps API. As soon as a gun is drawn, the authorities see the location on a map, the gun holder's current scene, and if available, his background and physical characteristics. Then, AWS Rekognition uses face matching to run the threat against a database to present more detail. ## Challenges we ran into There are some careful nuances to the idea that we had to account for in our project. For one, few models are pre-trained on weapons, so we experimented with training our own model in addition to using the Vision API. Additionally, identifying the weapon holder is a difficult task - sometimes the gun is not necessarily closest to the person holding it. This is offset by the fact that we send a scene snapshot to the authorities, and most gun attacks happen from a distance. Testing is also difficult, considering we do not have access to guns to hold in front of a camera. ## Accomplishments that we're proud of A clever geometry-based algorithm to predict the person holding the gun. Minimized latency when running several processes at once. Clean integration with a database integrating in real-time. ## What we learned It's easy to say we're shooting for MVP, but we need to be careful about managing expectations for what features should be part of the MVP and what features are extraneous. ## What's next for HawkCC As with all machine learning based products, we would train a fresh model on our specific use case. Given the raw amount of CCTV footage out there, this is not a difficult task, but simply a time-consuming one. This would improve accuracy in 2 main respects - cleaner identification of weapons from a slightly top-down view, and better tracking of individuals within the frame. 
SMS alert integration is another feature that we could easily plug into the surveillance system, further improving reaction time.
## Inspiration
Our team recently started an iOS bootcamp, and we were inspired to make a beginner app with some of the basics we've learned, as well as to interact with new technologies and APIs we would learn at the hackathon. We wanted to work on a simple, fun educational app, and wanted to bring in [Google Cloud's Vision API](https://cloud.google.com/vision/docs/) as well as the iPhone's built-in camera capabilities. The end result is a culmination of these inspirations.
## What it does
Our app is an interactive game for young learners to come across new English words and photograph those items throughout their days. Players can track their scores throughout the day and confirm that the image matches the vocabulary word with fun audio prompts. The word and score can be reset so players can find new inspiration for their next photo journey.
## How we built it
We worked primarily with Swift in Xcode, using Git for our version control. We used [Google Cloud's ML Kit for Firebase](https://firebase.google.com/docs/ml-kit), a mobile SDK that allowed us to accurately label images that the player took, keeping only labels above a certain confidence score, with Google Cloud Vision. To import this library and Alamofire, we used CocoaPods.
## Challenges we ran into
Because we are all beginners in mobile development, we ran into many challenges related to the intricacies of iOS development. We learned that it is difficult to collaborate on the same storyboards, which made version control and collaboration more challenging than usual. We also had trouble setting up the same environments, versions of Xcode and Swift, and Apple developer teams. After connecting to the Google Cloud Vision API, we began testing our app. To our befuddlement, the API labeled an image of a bagel as a musical instrument and a member of our team as a vehicle. Upon closer inspection, we realized we were using the on-device API instead of the Cloud API. Switching to the Cloud API greatly increased the accuracy of the labels.
## Accomplishments that we're proud of
Overall, we're very proud that we were able to put together this product within a day! We completed our goal of connecting to the Google Cloud Vision API and using its machine learning models to analyze the objects within our images. Our game works exactly as envisioned, and we're proud of our child-friendly, intuitive user design.
## What we learned
We learned an incredible amount over such a short period of time, including:
* Swift intricacies (especially optionals)
* Xcode and general iOS development gotchas
* Auto Layout and general design in iOS
* Connecting to an iPhone camera
* Firebase and Google Cloud setup
* Importing audio files into our app
* Local data persistence for our game
* The joy of pair programming with friends
* Segue modal navigation
* Loading spinners while waiting for API responses
## What's next for Learning Lens
We can envision many ways that Learning Lens can expand its offerings to more young learners, their parents, and even educators. These include:
* Accessing a more complex dictionary that can be adjusted by age or learning level as appropriate
* Parents' input into the specific word of the day or a word bank for their children
* Integrating natural language processing and/or adaptive learning techniques to provide sets of suggestions tailored to a learner's interests
* Integrating with school or study systems to assist educators in their existing curricula
* Text-to-speech capability so that very young learners who have not yet learned to read can play as well (and older learners can improve their pronunciation)
* Database integration so that learners can see previous words they've learned, as well as pictures they've taken, and parents can view their children's finds and life through their children's eyes
* A countdown timer or built-in reset feature for the app to come up with a new word at a certain time every day
* Social networking so that learners can connect with approved peers, view leaderboards for the same word, and share images with one another
## Credits
A special thanks to Alex Takahashi (Facebook Sponsor) for providing mentorship and answering our questions. Also thanks to Google for providing $100 in credit to explore the Google Cloud API offerings. Our logo sans text was sourced from pngtree.com.
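The game's core check — does the photographed object match the vocabulary word? — reduces to comparing the target word against the labels returned above a confidence cutoff. A toy sketch of that check (the label data is made up, and the app itself performs this in Swift):

```python
def matches_word(target: str, labels, min_score: float = 0.7) -> bool:
    """True if any sufficiently confident label contains the target word."""
    target = target.lower()
    return any(score >= min_score and target in desc.lower()
               for desc, score in labels)

vision_labels = [("Street dog", 0.93), ("Snout", 0.81), ("Fur", 0.64)]
print(matches_word("dog", vision_labels))   # True
print(matches_word("cat", vision_labels))   # False
```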
winning
## Inspiration
Apple Wallet is a magical tool for anybody. Enjoy the benefits of your credit card, airline or concert ticket, and more, without the hassle of keeping a physical object. But one crucial limitation is that these cards generally can't be put in Apple Wallet unless the company itself develops a supporting application. Thus, we're stuck in an awkward middle ground with half our cards on our phone and the other half in our wallet.
## What it does
Append scans your cards and collects enough data to create an Apple Wallet pass. This means that anything with a standardized code (most barcodes, QR codes, Aztec codes, etc.) can be scanned in and redisplayed in your Apple Wallet for your use. This includes things like student IDs, venue tickets, and more.
## How we built it
We used Apple's Vision framework to analyze the live video feed from the iPhone and collect the code data. Our app parses the barcode to analyze its data, and then generates its own code to put in the wallet. It uses an external API to help generate the necessary wallet files (a sketch of a minimal pass payload follows this writeup), and then the pass is presented to the user. From the user opening the app to downloading the pass, the process takes about 30 seconds total.
## Challenges we ran into
We ran into some trouble with nonstandardized barcodes, and with ambiguity between different standards. Fortunately, we developed methods around them and reached a point where we can recognize the major standards that exist.
## Accomplishments that we're proud of
A large portion of the documentation on adding custom Apple Wallet passes is outdated. What's worse is that a subset of this outdated documentation is wrong in subtle ways, i.e., the code compiles but has different behavior. Navigating with limited visibility was difficult, but we succeeded in the end.
## What we learned
We learned a lot about Apple's PassKit API. The protocols behind it are well implemented, and seeing them in action gave us even more confidence in using Apple Wallet for our wallet needs in the future.
## What's next for Append
We want to implement our own custom API for producing Apple Wallet files, to make sure that any user data is completely secure. Additionally, we want to take advantage of iPhone hardware to read and reproduce NFC data so that every aspect of the physical card can be replaced.
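For context, the heart of a Wallet pass bundle is its `pass.json` file. A minimal sketch of a generic pass carrying a rescanned QR code might look like the following (the identifiers are placeholders, and a real pass must also be zipped together with a manifest and signed with a pass type certificate):

```python
import json

pass_json = {
    "formatVersion": 1,
    "passTypeIdentifier": "pass.com.example.append",  # placeholder
    "teamIdentifier": "ABCDE12345",                   # placeholder
    "serialNumber": "card-0001",
    "organizationName": "Append",
    "description": "Rescanned student ID",
    "generic": {
        "primaryFields": [
            {"key": "name", "label": "CARD", "value": "Student ID"}
        ]
    },
    "barcodes": [{
        "format": "PKBarcodeFormatQR",   # matches the scanned symbology
        "message": "1234567890",         # payload read by the Vision framework
        "messageEncoding": "iso-8859-1",
    }],
}
print(json.dumps(pass_json, indent=2))
```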
## Inspiration
With the rise of AI-generated content and deepfakes, it's hard for people to identify what's real and what's fake. This leads to fake news and abuse. After seeing the launch of OpenAI's Sora model this week, we decided to build a solution to verify whether an image is real or AI-generated.
## What it does
Aros is an **iOS app that allows you to verify that an image is real and not AI-generated**. It does this by cryptographically proving that you clicked the image on your iPhone, which means that the image is real. This is how it works:
1. When you click a photo using the Aros camera app, Aros uses your iPhone's Secure Enclave to cryptographically sign the image.
2. This signature is posted to the online Aros registry.
3. Anyone can use this signature and your public key to verify that the photo was clicked on your iPhone, and not generated using AI.
We also built a **zero-knowledge prover** that verifies the signature on your image within a ZK circuit. This allows any **blockchain to easily verify** that an image is real.
## How we built it
This is a system architecture diagram for Aros:
![System architecture diagram](https://hackmd.io/_uploads/SkrLOL126.png)
### Secure Enclave
We create a cryptographic key pair in your iPhone's [Secure Enclave](https://developer.apple.com/documentation/security/certificate_key_and_trust_services/keys/protecting_keys_with_the_secure_enclave#2930473) to rely on **hardware security** and ensure that your private keys are never leaked outside your iPhone. Aros uses these keys to sign your photos to prove and verify that you clicked them on your iPhone.
### Zero-Knowledge
To easily verify the image signatures on a blockchain, we decided to build a ZK verifier. We used state-of-the-art cryptographic systems like the **SP1 RISC-V prover** from Succinct Labs to verify the image signatures within a **Plonky3 circuit**. (A plain, non-ZK sketch of the underlying signature check follows this writeup.)
### iOS App and Web Registry
We built the iOS app using **Swift**. The Aros registry is used to store each image's hash and signature, along with users' public keys. It doesn't store the raw image data, so we can protect privacy. We built the Aros registry using Next.js, TypeScript, and Tailwind CSS. We **deployed the registry dashboard and registry API using Vercel**.
## Challenges we ran into
* The Secure Enclave in the iPhone uses the **P-256 elliptic curve**, but we found it hard to find a verifier ZK circuit for this curve within Circom or Halo2. So, we decided to use the SP1 RISC-V prover from Succinct Labs to verify the image signatures and generate a Plonky3 circuit.
* We faced challenges with **base64 encoding and decoding** the public key. However, we realized that we could use the `base64EncodedString` function in Swift to help with this.
## Accomplishments that we're proud of
* It was **our first time developing on iOS and using Swift**, so there was a pretty steep learning curve on the first day. We're really happy that we were able to learn Swift and iOS development over the weekend and successfully build this project.
* It was a stretch goal for us to build a zero-knowledge verifier for the P-256 signature verification. We're proud that we were able to build this, and now anyone can efficiently verify that an image is real on any blockchain as well.
## What we learned
* In terms of technologies, we learned iOS development, Swift, and SwiftUI, and we also learned how to work with RISC-V ZK proving systems like the SP1 prover.
* We learned about hardware security, specifically how to protect private keys using the Secure Enclave on iPhones.
## What's next for Aros
* We want to extend this technology beyond just images, to **prove that audio and video are real** and not AI-generated. We have some ideas for this and we are excited to try them out soon!
* We plan to deploy a **verifier smart contract** for the ZK circuit on Ethereum.
* We hope to **work with social media platforms** to integrate our system, since we think fake news and images are most prevalent on social media, and Aros can help reduce misinformation online.
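Outside the ZK circuit, the registry-side check is ordinary ECDSA-P256 verification. A minimal sketch with the `cryptography` package (the key pair is generated locally here purely for demonstration; in Aros the private key never leaves the Secure Enclave):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Stand-in for the Secure Enclave key; only the public half would
# ever leave the phone in the real system.
private_key = ec.generate_private_key(ec.SECP256R1())  # P-256
public_key = private_key.public_key()

image_bytes = b"raw bytes of the captured photo"
signature = private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
    print("image authentic: signed by this device")
except InvalidSignature:
    print("verification failed: image may be tampered or AI-generated")
```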
## Inspiration
Let's face it: museums, parks, and exhibits need some work in this digital era. Why lean over to read a small plaque when you can get a summary and details by tagging exhibits with a portable device? There is a solution for this, of course: NFC tags are a fun modern technology, and they could be used to help people appreciate both modern and historic masterpieces. Also, there's one on your chest right now!
## The Plan
Whenever a tour group, such as a student body, visits a museum, it can streamline its activities with our technology. When a member visits an exhibit, they can scan an NFC tag to get detailed information and receive a virtual collectible based on the artifact. The goal is to facilitate interaction among the museum patrons for collective appreciation of the culture. At any time, the members (or, as an option, group leaders only) have access to a live Slack feed of the interactions, keeping track of each other's whereabouts and learning.
## How it Works
When a user tags an exhibit with their device, the Android mobile app (built in Java) sends a request to the StdLib service (built in Node.js) that registers the action in our MongoDB database and adds a public notification to the real-time feed on Slack. A sketch of such an endpoint follows this writeup.
## The Hurdles and the Outcome
Our entire team was green to every technology we used, but our extensive experience and relentless dedication let us persevere. Along the way, we gained experience with deployment-oriented web service development, and will put it towards our numerous future projects. Because of our work, we believe this technology could be a substantial improvement for the museum industry.
## Extensions
Our product can be easily tailored for ecotourism, business conferences, and even larger-scale explorations (such as cities and campuses). In addition, we are building extensions for geotags, collectibles, and information trading.
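The original service ran on StdLib/Node; as an illustration of the same flow, here is an equivalent sketch in Python/Flask that records a scan in MongoDB and notifies a Slack incoming webhook (the URLs, database, and field names are hypothetical):

```python
import requests
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
scans = MongoClient("mongodb://localhost:27017")["museum"]["scans"]
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

@app.post("/scan")
def scan():
    event = request.get_json()  # e.g. {"user": "ada", "exhibit": "T-Rex"}
    scans.insert_one(dict(event))  # persist the visit for the group record
    requests.post(SLACK_WEBHOOK, json={
        "text": f"{event['user']} just tagged the {event['exhibit']} exhibit!"
    })
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(port=8000)
```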
partial
## Inspiration
GeoGuessr is a fun game which went viral in the middle of the pandemic, but after having played it for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations, in addition to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers!
## What it does
The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie from which we selected the location. You get points based on how close you are to the location and on whether you got the trivia right.
## How we built it
We used the *discord.py* library for coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets, which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs.
## Challenges we ran into
For initially storing the data, we thought to use a playlist class and store the playlist data as an array of playlist objects, but we used Excel instead for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were overcome soon.
## Accomplishments that we're proud of
Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system, based on the haversine formula for distances on spheres (a sketch follows this writeup), was also an accomplishment we're proud of.
## What we learned
We learned better syntax and practices for writing Python code. We learned how to use the Google Cloud Platform and the Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about human-computer interaction, as designing an interface for gameplay on Discord was rather interesting.
## What's next for Geodude?
Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
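A sketch of haversine-based scoring; the distance formula is standard, while the falloff constants are made up, since the writeup doesn't give the exact scheme:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def score(guess, answer, max_points=5000, scale_km=2000):
    """Exponential falloff with distance, in GeoGuessr's spirit."""
    d = haversine_km(*guess, *answer)
    return round(max_points * math.exp(-d / scale_km))

print(score((48.8566, 2.3522), (51.5074, -0.1278)))  # Paris guess, London answer
```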
## City Bins Roamer: An AI multiplayer game for sustainable cities!
## What it does
We're using Martello's geospatial data to make a Pac-Man-like game played across the board of Montreal's streets. The goal: if you play as the garbage, try to escape from the intelligent system of bins! If you play as the bins (the audience), try to collect the garbage! :open\_mouth:
The idea: the player (garbage) can navigate around the city of Montreal (Microsoft Bing Maps). There is one place on the map that is the player's goal; try to reach it before the bins "eat" you! But the garbage also has to avoid the place on the map that is the audience's goal. The goal of the audience is to prevent the player from reaching their goal: by placing bins, they try to push the player towards the audience's goal.
## How we built it + Implementation
With Python, JavaScript, JSON, CSV, Bing Maps, and a lot of frustration. Because the bins' signals are sometimes weak and noisy, we use Martello's database, which is useful in decision making (which provider should we trust more locally, and how much should we trust the signal versus our prior knowledge — somewhat similar to AI concepts like particle filters). Through the REST API we retrieve information about the city's map structure, which is passed to the pygame framework. All algorithms (navigation, the AI's play style) are implemented from scratch. Therefore: Microsoft Bing Maps (+ REST API) + Python pygame + Flask + AI.
## Accomplishments that we're proud of
Parsing the JSON file, being able to understand and analyze its data, and mapping it to Bing Maps; a first touch with JS and Flask. The best multiplayer-with-AI game ever!
For the future, combine everything:
* combine the ability to make decisions (about the signal) with the navigation algorithms
* combine Bing Maps with pygame (style, retrieve data from the map to get street layouts, etc.)
* combine, via Flask, data from Martello with Bing Maps so that the maps can contain information about signal strength
## Inspiration Whenever I go on vacation, what I always fondly look back on is the sights and surroundings of specific moments. What if there was a way to remember these associations by putting them on a map to look back on? We strived to locate a problem, and then find a solution to build up from. What if instead of sorting pictures chronologically and in an album, we did it on a map which is easy and accessible? ## What it does This app allows users to collaborate in real time on making maps over shared moments. The moments that we treasure were all made in specific places, and being able to connect those moments to the settings of those physical locations makes them that much more valuable. Users from across the world can upload pictures to be placed onto a map, fundamentally physically mapping their favorite moments. ## How we built it The project is built off a simple React template. We added functionality a bit at a time, focusing on creating multiple iterations of designs that were improved upon. We included several APIs, including: Google Gemini and Firebase. With the intention of making the application very accessible to a wide audience, we spent a lot of time refining the UI and the overall simplicity yet useful functionality of the app. ## Challenges we ran into We had a difficult time deciding the precise focus of our app and which features we wanted to have and which to leave out. When it came to actually creating the app, it was also difficult to deal with niche errors not addressed by the APIs we used. For example, Google Photos was severely lacking in its documentation and error reporting, and even after we asked several experienced industry developers, they could not find a way to work around it. This wasted a decent chunk of our time, and we had to move in a completely different direction to get around it. ## Accomplishments that we're proud of We're proud of being able to make a working app within the given time frame. We're also happy over the fact that this event gave us the chance to better understand the technologies that we work with, including how to manage merge conflicts on Git (those dreaded merge conflicts). This is our (except one) first time participating in a hackathon, and it was beyond our expectations. Being able to realize such a bold and ambitious idea, albeit with a few shortcuts, it tells us just how capable we are. ## What we learned We learned a lot about how to do merges on Git as well as how to use a new API, the Google Maps API. We also gained a lot more experience in using web development technologies like JavaScript, React, and Tailwind CSS. Away from the screen, we also learned to work together in coming up with ideas and making decisions that were agreed upon by the majority of the team. Even with being friends, we struggled to get along super smoothly while working through our issues. We believe that this experience gave us an ample amount of pressure to better learn when to make concessions and also be better team players. ## What's next for Glimpses Glimpses isn't as simple as just a map with pictures, it's an album, a timeline, a glimpse into the past, but also the future. We want to explore how we can encourage more interconnectedness between users on this app, so we want to allow functionality for tagging other users, similar to social media, as well as providing ways to export these maps into friendly formats for sharing that don't necessarily require using the app. 
We also seek to better merge AI into our platform by using generative AI to summarize maps and experiences, and also to help plan events and new memories for the future.
winning
## Inspiration
After witnessing the power of collectible games and card systems, our team was determined to prove that this enjoyable and unique game mechanic wasn't just a niche, and could be applied to a social activity game that anyone could enjoy or use to better understand one another (taking a note from Cards Against Humanity's book).
## What it does
Words With Strangers pairs users up with a friend or stranger and gives each user a queue of words that they must make their opponent say without saying the word themselves. The first person to finish their queue wins the game. Players can then purchase collectible new words to build their deck and trade or give words to other friends or users they have given their code to.
## How we built it
Words With Strangers was built on Node.js with core HTML and CSS styling as well as some Bootstrap framework functionality. It is deployed on Heroku and also makes use of TODAQ's TaaS service API to maintain the integrity of transactions as well as the unique rareness and collectibility of words and assets.
## Challenges we ran into
The main area of difficulty was incorporating TODAQ TaaS into our application, since it was a new service that none of us had any experience with. In fact, not only is it not blockchain, but none of us had ever even touched application purchases before. Furthermore, creating a user-friendly UI that was fully functional with all our target features was also a large challenge that we tackled.
## Accomplishments that we're proud of
Our UI not only has all our desired features, but is also user-friendly and stylish (comparable with Cards Against Humanity and other items in the genre), and we were able to add multiple word packages that users can buy and trade or transfer.
## What we learned
Through this project, we learned a great deal about the background of purchase transactions in applications. More importantly, though, we gained knowledge of what TODAQ does and came to grasp what it truly means to have an asset or application online that is utterly unique and one of a kind: passable without infinite duplication.
## What's next for Words With Strangers
We would like to enhance the UI for WwS to look even more user-friendly and be stylish enough for a successful deployment online and in app stores. We want to continue to program packages for it using TODAQ and use dynamic programming principles moving forward to simplify our process.
## Inspiration
I'm interested in NLP, so I wanted to work with one of the provided APIs related to it. Nuance had such a toolkit, and they talked about how you could use it to order food. While talking with a friend, we realized that it was always difficult to keep track of coffee orders, so it would be possible to train a model to recognize the different parts of a coffee order.
## What it does
For everyone who struggles to understand what an "iced coffee with two pumps of caramel and one scoop of matcha powder" is, or to remember that their friend asked for "a venti pumpkin spice latte with no whipped cream and skimmed milk", this app will help them remember all of their friends' orders, and it tags the important parts of each order. So, it recognizes coffee orders and tags the different concepts (drink, size, milk type, amount of sugar, etc.)
## How I built it
* ran through an iOS tutorial to make a basic list app
* developed a model using Nuance's toolkit to have it recognize the different orders
* combined the list application with the sample app provided by Nuance
## Challenges I ran into
* coming up with a way to break down a coffee order
* making iOS work, debugging errors in code (!!)
## Accomplishments that I'm proud of
* having a functioning mobile app
## What I learned
* learning Objective-C
* how to train and test a model
## What's next for CoffeeRun
* fixing minor bugs in the app
* calculating prices for each order: parsing order info and comparing against a menu
* requesting payments from friends
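The real tagging is done by the trained Nuance model; as a rough, hypothetical illustration of the concept-tagging idea (sketched in Python rather than the app's Objective-C, with made-up vocabulary lists), a simple keyword matcher might look like this:

```python
import re

# Hypothetical concept vocabularies; Nuance's trained model learns these
# associations instead of relying on fixed lists.
CONCEPTS = {
    "size": ["tall", "grande", "venti", "small", "medium", "large"],
    "drink": ["latte", "iced coffee", "cappuccino", "americano"],
    "milk": ["skimmed milk", "whole milk", "oat milk", "soy milk"],
    "modifier": ["no whipped cream", "two pumps of caramel", "pumpkin spice"],
}

def tag_order(order: str) -> dict:
    """Return every concept whose phrase appears in the order text."""
    order = order.lower()
    tags = {}
    for concept, phrases in CONCEPTS.items():
        hits = [p for p in phrases if re.search(r"\b" + re.escape(p) + r"\b", order)]
        if hits:
            tags[concept] = hits
    return tags

print(tag_order("a venti pumpkin spice latte with no whipped cream and skimmed milk"))
# {'size': ['venti'], 'drink': ['latte'], 'milk': ['skimmed milk'],
#  'modifier': ['no whipped cream', 'pumpkin spice']}
```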
## Inspiration
**choices** is heavily inspired by both my friends' inability to make decisions and my experiences in scrum planning poker. In both scenarios, I frequently saw group members anxious to share their opinions for very human reasons: fear of seeming like an "outcast" for sharing your opinion first, even in close teams, or being influenced by someone else's responses. **choices** was built to simplify any decision or question, in a way where everyone can know their opinion matters.
## What it does
All users are shown the same question and the same options. They may select an option from the list, and once all players select an answer, all responses are revealed simultaneously, allowing the group to discuss the results.
## How we built it
**choices** is mainly built in raw HTML/CSS, with the idea that it should be responsive, to remove any friction that would discourage users from playing. JS is used to manipulate elements, and the socket.io NPM package is used to set up a server and connect clients to each other.
## Challenges we ran into
A lack of familiarity with the underlying technology. I went into HTN 2023 with very rusty knowledge of JavaScript and was unable to even comprehend how I would build an interface between multiple clients.
## Accomplishments that we're proud of
Truly going from very little knowledge in this area to completing a project that I can show off and feel excited to improve in the future. Being able to visualize my idea, map it out on paper, bounce ideas off of peers, and bring it to life on the screen for others is always such an amazing feeling.
## What we learned
JavaScript can be finicky. It's very easy to end up in the epitome of what can go wrong with a dynamically typed language.
## What's next for choices
* Modularity for questions (especially user-created ones). Turns out local JSON is not trivial to load into JS, who would've guessed?
* AI-generated questions. In some of HTN 2023's talks I found out it was shockingly easy to set up useful AI prompts to create copy for you. I would love to be able to generate new questions on-the-fly, possibly with user-queried topics.
* Proper online support. This would include a domain name, server hosting, and breaking off users into small groups they can create and join on their own.
* Increased user/option count support. This is more of an issue of wrangling UI, as I wrote *choices* to be as expandable as possible.
partial
## Inspiration
From Jarvis to GLaDOS, we were inspired by the fictional AIs that made the lives of our heroes so much more pleasant (if you forget about the neurotoxin). Unlike the movies, today's smart assistants offer unnatural forms of interaction, which makes it extremely painful for one to control aspects of their home. We want to create a seamless and natural smart home device flow that integrates Augmented Reality and gesture control into a centralized hub. One aspiration was a home that allows you to change the luminosity of a light bulb by pointing at it and turning your arm. We also aspired to group smart home appliances into group actions (such as a "go to bed" command that turns off the lights in your living room and kitchen).
## What it does
HomePoint allows homeowners to smarten up their home with natural points of interaction. The user experience in a nutshell:
1. Set customized smart home control flows in our Google Home app (such as "set Go to bed to turn off living room and kitchen lights"). You can set as many as you like.
2. Say "I am going to bed" to the Google Home. The Lutron hub will turn off the living room and kitchen lights in real time.
3. Hold the AR app in front of you; it will identify which light you are pointing at. Point your arm toward a light bulb and tilt it to change its luminosity.
## How we built it
HomePoint is deployed on a Node.js server that uses Firebase's cloud functions to manage all aspects of the smart home. Through the use of Api.ai/DialogFlow, a Google Home pings our Node server and relays the requests it receives to control the state of the lights. The Myo armband sends its IMU data (turning speed, correlated to the rate of change in luminosity) over BLE to the laptop. In conjunction with image recognition data from the AR app, a request is sent to the server. We used a laptop to pull the server data and transfer it to the Android Things board through serial communication. The Android Things board is connected wirelessly to the Lutron hub, which receives the signals, turns lights on and off, and changes their luminosity.
## Challenges we ran into
Setting up the NXP Android Things development board was painful because the new product lacked tutorials and documentation. So was learning how to program DialogFlow and the Google Home, and integrating all components to communicate through a server deployed on Firebase. The Lutron board has a poor system for connecting to devices: once we connected the laptop to the router, the laptop lost internet access because its wireless connection was occupied. Thus, we decided to connect the router to an Android Things board and communicate input signals to Android Things through a serial port. That failed because of the poor documentation of Android Things, so we had to attach a touchscreen to the Android Things board and demo it manually. We were also forced to abandon our original approach with an augmented reality interface because Vuforia and OpenGL refused to display images within Unity. We also spent time integrating a Myo armband to change the luminosity of lights via the rotation of one's arm. However, due to complications with reading accurate information about the arm's position, we were unable to complete the integration with the backend server.
## Accomplishments that we are proud of
We're most proud of connecting the Lutron board to the Android Things board, because our weekend was plagued by router issues and incompatible cables. We messed around with lots of hardware that was not designed to work together and tried our best to create integrations for it. Our picture while hacking was posted in [the Daily Pennsylvanian](http://www.thedp.com/article/2017/09/at-pennapps-xvi-students-made-inter-dimensional-robots-and-hung-out-with-the-founder-of-quora)!
## What we learned
Our team could have focused more on a minimum viable product throughout the weekend. We spent too long exploring different methods of creating a cohesive smart home with devices such as the Google Home and the Myo armband. In hindsight, we should have prioritized the interaction between the NXP board and the Lutron board. But hey, messing with hardware was really fun, especially connecting so much hardware from different companies with different protocols. It is a good skill to practice, as smart home devices and voice assistants are popular and diverse across manufacturers, and the integration of all smart home hardware is extremely important.
## What's next for HomePoint
We hope to use the proper network devices to take advantage of Lutron's wireless capabilities: a hub that integrates all kinds of smart home devices with different protocols, and smart home speakers to control all your devices in one central location.
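For a sense of the point-and-tilt dimming flow described above, here is a minimal sketch, assuming IMU samples arrive as (roll rate, time step) pairs; the gain and clamp range are illustrative guesses, not HomePoint's actual values:

```python
# Integrate the wrist's roll rate into a 0-100% brightness level.
def update_luminosity(level: float, roll_rate: float, dt: float,
                      gain: float = 0.4) -> float:
    level += gain * roll_rate * dt       # faster twist -> faster dimming/brightening
    return max(0.0, min(100.0, level))   # clamp to the bulb's valid range

level = 50.0
for roll_rate, dt in [(90.0, 0.1), (90.0, 0.1), (-45.0, 0.1)]:  # fake IMU stream
    level = update_luminosity(level, roll_rate, dt)
print(f"target luminosity: {level:.1f}%")  # -> 55.4%
```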
## Our Mission
To hack the Lutron Sliver kit using Android Things and Google Vision.
## Inspiration
The idea was to automate your home using just your face. Lutron's Sliver kit is a representation of the modern smart home, open to tinkering and intriguing possibilities. Google's Vision API allows us to go further with the smart home vision, opening our team to machine learning opportunities not possible prior. Keys in the lock, you look up and smile at the camera. You are home, and since it's you, the lights welcome you back. The inspiration for the title is the famous line from the movie "The Shining".
## What it does
Using the Pico i.MX7 development board with the Android Things OS and Google Vision, we communicate over TCP with the Lutron Sliver kit. We imagined a smart home that could welcome you home.
## How we built it
Honey I'm Home uses a variety of different technologies, starting with the Pico development board running the Android Things OS. The intersection of all these technologies allows for the simplicity and potential to control your home without speaking or pressing a button. Honey I'm Home relies on the Android Things OS to drive our camera and the communication to the Lutron home system. We started by trying to just get the lights of the Lutron to turn on by sending Telnet commands to it. We then wrote a bash script that automated entering the username and password while also executing light commands. We tried to take it a step further using the Pico development board. The Pico proved to be difficult: learning how to navigate the Android workflow was a huge challenge.
## Challenges we ran into
Our biggest challenges were getting the Pico board to communicate with the Lutron system over the Telnet communication protocol from Java, and learning how to use the Android APIs.
## Accomplishments that we're proud of
Being able to stretch ourselves to learn multiple new technologies at once, and connect them in this one project.
## What we learned
We went through the challenge of working with hardware and embraced how fun it was. We learned that, even if nothing is working, it's all a part of the learning process. Persistence is important to make whatever we attempt possible.
## What's next for Honey I'm Home
The concept of Honey I'm Home could potentially be extended to tell when you fall asleep and shut off the lights.
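As a hedged sketch of the Telnet automation described above (the host, credentials, and command string below are placeholders; the exact syntax depends on the Lutron hub's integration protocol):

```python
import telnetlib  # stdlib (deprecated in recent Python releases; fine for a sketch)

HOST = "192.168.1.50"  # placeholder hub address

def set_light(zone: int, level: int) -> None:
    """Log in to the hub over Telnet and set a dimmer's output level."""
    tn = telnetlib.Telnet(HOST, 23, timeout=5)
    tn.read_until(b"login: ")
    tn.write(b"lutron\r\n")           # integration username (assumption)
    tn.read_until(b"password: ")
    tn.write(b"integration\r\n")      # integration password (assumption)
    # e.g. "#OUTPUT,<zone>,1,<level>" style command (protocol-dependent)
    tn.write(f"#OUTPUT,{zone},1,{level}\r\n".encode())
    tn.close()

set_light(zone=1, level=75)  # smile detected -> welcome-home lights at 75%
```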
## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>
Robbie's story stuck out to me as an example of the endless possibilities of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that crafted his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.
## What it does
We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script is run in the terminal, it can be used across the computer and all its applications.
## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30 function) library that could be used to control almost anything in the computer.
## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between Mac and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like StackOverflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.
## Accomplishments that I'm proud of
We are proud of the fact that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API.
## What I learned
We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use more, and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", and how to reduce ambient noise.
## What's next for Speech Computer Control
At the moment we manually run this script through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we had developed a Chrome extension that numbers each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
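A minimal sketch of the phrase-to-command layer, assuming the Google Cloud Speech API has already returned a transcript string; the command names here are hypothetical, and pyautogui stands in for our larger function library:

```python
import pyautogui

# Map recognized phrases to input actions (illustrative subset).
COMMANDS = {
    "click": lambda: pyautogui.click(),
    "scroll down": lambda: pyautogui.scroll(-300),
    "copy": lambda: pyautogui.hotkey("ctrl", "c"),
    "paste": lambda: pyautogui.hotkey("ctrl", "v"),
}

def execute(transcript: str) -> None:
    """Run the matching command, or fall back to dictating the text."""
    phrase = transcript.lower().strip()
    action = COMMANDS.get(phrase)
    if action:
        action()
    else:
        pyautogui.write(phrase)

execute("scroll down")
```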
losing
## Inspiration
Primarily YouTube tutorials and Coursera videos. Some of my relatives wanted to learn the latest tools and technologies like programming, machine learning, and psychology. However, the language barrier always stood high and prevented them from accessing the tons of freely available video lectures on the internet. We were surprised to learn that video translation is not supported by even the major learning platforms and decided to explore this area.
## What it does
tongueSpeak essentially translates any given video into a video in another language, in a highly scalable manner. It uses machine learning, speech recognition, speech generation, text translation, signal processing (e.g., chromagram and FFT algorithms), and audio normalization to stitch together a video translation service.
## How we built it
We used numpy and pandas for all mathematical calculations, scikit-learn for machine learning algorithms, librosa for signal processing, pydub for audio stitching and splitting, and gTTS for speech generation.
## Challenges we ran into
One of the biggest challenges was identifying the gender of speakers, as it was necessary to identify the tone of voice of the speakers to preserve the charisma of the original video. Since there is no definite mechanism to do this, we used a RandomForest ensemble machine learning classification algorithm trained on 5000 input audio files. This gave us an appreciable 75% accuracy in identifying the gender of the person, through which we adjusted the pitch of the output audio to mimic the input audio. Apart from gender recognition, we also faced challenges in noise filtering, background music detection, and pitch resolution. Altogether, these cutting-edge challenges gave us an opportunity to explore the latest frontiers of machine learning and use sophisticated algorithms to solve challenging problems.
## Accomplishments that we're proud of
We understood the overall mechanism of the algorithms, got together a working web service, and processed extensive signal inputs, all in less than 36 hours. Since none of us had prior experience in these domains, this opportunity was a fantastic learning experience.
## What we learned
Apart from the obvious gain in technical prowess, especially related to signal processing and machine learning, we also learned essential interpersonal skills such as task distribution, project planning, collaboration, and effective time management.
## What's next for tongueSpeak
* Improved background noise filtering
* Wider range of languages
* Handling multiple overlapping speakers
* Deploying as a Chrome extension for real-time translation
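For illustration, the pitch-adjustment step might look like the following sketch, assuming the RandomForest classifier has already predicted the speaker's gender (the semitone offsets are illustrative, not our tuned values):

```python
import librosa
import soundfile as sf

def match_voice(in_path: str, out_path: str, predicted_gender: str) -> None:
    """Pitch-shift synthetic narration toward the original speaker's voice."""
    y, sr = librosa.load(in_path, sr=None)
    # Shift the generated narration up for female voices, down for male
    # (offsets are guesses for the sketch).
    n_steps = 2.0 if predicted_gender == "female" else -2.0
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(out_path, shifted, sr)

match_voice("tts_output.wav", "tts_matched.wav", predicted_gender="female")
```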
## Inspiration
It is known that the best way to learn a language is to immerse yourself in the media of that language. However, watching media is traditionally a passive process, which many studies have indicated has limited effectiveness for knowledge retention. Our goal was to innovate a solution that transforms the act of video watching into an active, engaging process, ultimately enhancing language learning and expediting the acquisition of new languages for our users.
## What it does
linquiztics analyzes the video you are watching and uses AI to create a list of questions that evaluate your understanding of the video's content, then provides detailed feedback on how you can improve that understanding. Not only can it play audio for learning pronunciation, it can also interpret the words that the user says as input.
## How we built it
Leveraging Taipy for both the frontend and backend development, we seamlessly integrated various APIs to enhance our project's functionality. By tapping into YouTube's API, we efficiently retrieved video transcripts, which were then processed through OpenAI's GPT to dynamically generate quiz questions. Introducing the Whisper API allowed us to incorporate voice input, enabling users to engage in vocal practice during the quiz. Furthermore, we employed gTTS to implement an auditory component, integrating sound playback for an immersive pronunciation learning experience.
## Challenges we ran into
One of our most significant hurdles in the development process was related to Chrome extensions. We encountered compatibility issues when attempting to integrate them with the YouTube and GPT APIs. Faced with this roadblock, we decided to pivot towards using Python, a language we were more familiar with and confident about its seamless integration with APIs. However, the transition came with its own set of challenges, particularly in learning Taipy, a framework we chose for its suitability to our project. Adapting to Taipy's unique syntax posed a learning curve, demanding additional effort to grasp the intricacies of its Markdown-style syntax. Overcoming these challenges became an essential part of our journey, ultimately contributing to the growth and expertise of our team.
## Accomplishments that we're proud of
We are especially proud of learning to use and implement Taipy in a product that works seamlessly. This success is a testament to our commitment to acquiring new technical skills and adapting to innovative tools. Connecting various APIs showcased the effectiveness of our teamwork and technical expertise. Beyond enhancing our technical capabilities, this experience fostered a stronger sense of camaraderie within our team as we collaborated toward a shared objective.
## What we learned
Throughout the hackathon, our team immersed ourselves in learning new skills and technologies. We not only became adept at version control using Git, but connecting APIs also became a breeze, allowing us to seamlessly integrate a variety of functionalities. Gaining proficiency in Taipy brought a distinctive flair to our toolkit, providing a versatile and powerful framework for our project. This learning journey not only expanded our technical know-how but also armed us with the expertise to approach challenges creatively, nurturing a collaborative spirit within the team.
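A condensed sketch of the transcript-to-quiz pipeline described above; the library entry points are real (though their APIs drift across versions), while the prompt and model name are illustrative:

```python
from youtube_transcript_api import YouTubeTranscriptApi
from openai import OpenAI

def quiz_for(video_id: str, n_questions: int = 5) -> str:
    """Fetch a video's transcript and ask an LLM for comprehension questions."""
    lines = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(chunk["text"] for chunk in lines)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Write {n_questions} comprehension questions (with "
                       f"answers) for a language learner about this video "
                       f"transcript:\n\n{transcript[:8000]}",
        }],
    )
    return reply.choices[0].message.content

print(quiz_for("dQw4w9WgXcQ"))  # placeholder video id
```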
## What's next for linquiztics Our vision for linquiztics involves transforming it into a Chrome extension, seamlessly integrating it into the Chrome browser for enhanced accessibility. To elevate the user experience, we're introducing live transcription functionality, highlighting problematic words in real-time when watching videos. This feature not only aids in identifying and addressing language comprehension challenges but also reinforces learning through visual cues. Additionally, we're incorporating a pronunciation coach equipped with a language phonetic guide, utilizing the International Phonetic Alphabet (IPA). This guide will offer users precise assistance in mastering pronunciation, adding an invaluable dimension to language learning.
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto's Entrepreneurship Hatchery's incubator executing on our vision. We've built a sweet platform that solves many of the issues surrounding meetings, but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting, but sentiment can be used anywhere from the boardroom to the classroom to a psychologist's office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket IO, and ChartJS. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box. For all relevant processing, we used Google Speech-to-Text, Google Diarization, Stanford Empath, scikit-learn, and GloVe (for word-to-vec).
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours. Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format. Apart from that, we didn't encounter any major roadblocks, but we were each working for almost the entire 36-hour stretch, as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off, as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market, so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned far too much about how computers store numbers (:p), and did a whole lot of stuff in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup's vision for the future of meetings, so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
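As a bite-sized illustration of the sentiment step, here is a sketch that scores a single diarized utterance with Empath's lexical categories (the category subset is illustrative; the full pipeline also folds in GloVe vectors and scikit-learn models):

```python
from empath import Empath

lexicon = Empath()

def score_utterance(speaker: str, text: str) -> dict:
    """Attach normalized Empath category scores to one utterance."""
    scores = lexicon.analyze(
        text,
        categories=["positive_emotion", "negative_emotion"],
        normalize=True,
    )
    return {"speaker": speaker, "text": text, **scores}

print(score_utterance("Alice", "I love this plan, great work everyone"))
# -> positive_emotion well above negative_emotion for this line
```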
losing
## Inspiration Behind Tangle ⚛️ The idea for Tangle emerged from a recognition that traditional networking methods, such as LinkedIn profiles, have remained static, even as large in-person events like Hack the North make a strong comeback. We saw an opportunity to reimagine how people connect by creating a more dynamic and visual way of representing relationships formed at events. Tangle aims to bridge the gap by providing an interactive web app that helps participants remember, revisit, and reinforce connections made during events, ensuring those relationships don’t just fade into the background.🤝 ## How We Built Tangle 🛠️ Tangle is a web app powered by React for the front end and Convex for the back end, with the entire platform hosted on Vercel. We used a GoDaddy domain to make Tangle easily accessible to all users.🌐 Our backend, built with Convex, integrates a vector embedding search powered by Cohere. This allows us to offer advanced search functionality that helps users locate people they’ve met based on attributes and interactions. On the front end, we designed a simple yet intuitive user interface in React, focusing on ease of navigation. The landing page introduces users to Tangle, while the home page allows them to search for individuals or features related to people they've met at the event. The query is sent to the backend, where it’s processed, and the results are returned in real-time.⚡️ ## Challenges We Overcame Building Tangle 🚧 **Design Challenges 🎨** One of our main challenges was creating a platform that caters to different event attendees—hackers, recruiters, and speakers—all of whom have different goals. Designing Tangle to meet the needs of all these groups while maintaining a cohesive user experience required careful planning and iteration. **Vector Embeddings 🧠** A technical challenge was mastering the use of vector embeddings. These embeddings were critical for enabling intelligent search functions within Tangle. We invested considerable time in optimizing the embedding process to accurately capture the nuanced relationships between event attendees. **Personal Challenges 💪** Balancing external commitments and the time pressure of the hackathon was no easy feat. At one point, we experienced challenges leading to a demotivating lull 10 hours prior to submissions, but we pushed through as a team in the final hours to finish strong. ## Accomplishments We Celebrate at Tangle 🎉 * Successfully integrating Cohere’s vector embeddings for advanced semantic search within our app. * Mastering and utilizing Convex to build a robust and efficient backend. * Designing a user-friendly interface. * Overcoming personal time management challenges to deliver a working, innovative platform within the hackathon’s time constraints. ## Lessons Learned from Tangle's Journey 📚 **The Importance of Planning 📝** One of the key takeaways from building Tangle was the necessity of meticulous planning, especially regarding user flow and system architecture. Diving straight into coding without first understanding how users would interact with our app resulted in some avoidable setbacks and rework. We learned that investing time upfront in planning leads to a more streamlined and efficient development process. **Simplifying Systems 🧩** We initially overcomplicated parts of our application, which led to inefficiencies. Simplifying and focusing on the core functionalities of Tangle allowed us to optimize the system and deliver a better user experience.
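To make the ranking concrete, here is a minimal sketch of cosine-similarity ranking over already-computed profile embeddings (the toy vectors and names are hypothetical; in Tangle the embeddings come from Cohere via Convex):

```python
import numpy as np

def rank_matches(query_vec, profiles):
    """profiles: list of (name, embedding); returns names by cosine similarity."""
    q = np.asarray(query_vec, dtype=float)
    scored = []
    for name, vec in profiles:
        v = np.asarray(vec, dtype=float)
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((sim, name))
    return [name for sim, name in sorted(scored, reverse=True)]

profiles = [("recruiter_jo", [0.9, 0.1]), ("hacker_sam", [0.2, 0.95])]
print(rank_matches([0.85, 0.2], profiles))  # -> ['recruiter_jo', 'hacker_sam']
```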
## Inspiration
Have you ever attended a networking event and felt overwhelmed by the sheer number of people, unsure of how to find the right connections? We've all been there, wishing for a more streamlined way to meet individuals who align with our goals and values. The inspiration for Aligned.ai comes from this common challenge: finding people who will empower you to do your life's best work. Our goal was to create a system that helps individuals build sustainable and long-lasting relationships in the startup space, whether it's founder-to-founder or founder-to-VC.
## What it does
Aligned.ai is a matchmaking platform designed to connect individuals in the startup ecosystem based on deep personality and goal alignment. Users engage in live, in-depth conversations with Aligned Voice, an AI powered by Groq, which simulates a real human interaction. The system then generates a unique personality embedding using Cohere, which is stored in Chroma DB. Using powerful vector similarity algorithms, Aligned.ai ranks potential connections, presenting users with the best matches first. This way, users can find those who are not only aligned with their professional aspirations but also resonate with their personal values.
## How we built it
Aligned.ai was developed with a focus on seamless integration across multiple technologies:
![Alt text](https://d112y698adiu2z.cloudfront.net/photos/production/software_photos/003/026/013/datas/original.png)
* Frontend: Built using Next.js, our frontend integrates Auth0 for tokenization and secure login.
* Data Collection: Once logged in, users create their matchmaking profiles, including data from their Web Summit profile, LinkedIn, GitHub, and more.
* Conversation with Aligned Voice: This **core feature** is powered by Groq; users engage in a live conversation that feels as natural as talking to a real person. We used Groq's integration with Whisper for speech-to-text, then sent this info to our backend server for stateless requests to Groq's blazing-fast Llama 3 models. **We experimented with various platforms but found Groq to give us the latency necessary for our needs.**
* Personality Embedding: This is the **bread and butter** of our app. The conversation data is processed by Cohere's Embed API to generate a personality embedding, which is stored in the Chroma DB vector database.
* Matchmaking: Users can then search for others with similar profiles using vector similarity algorithms. This all runs over Chroma DB's seamless and powerful interface with multiple integrations. Summaries of similarities are generated using Groq and Cohere. The system ranks matches from most to least aligned, providing a curated list of potential connections.
* Reach Out: Once a match is found, users can reach out directly using the embedded social information.
## Challenges we ran into
* One of the main challenges we faced was prompt engineering: ensuring that the AI could generate meaningful and accurate personality embeddings from the conversations.
* Additionally, integrating the full tech stack from frontend to backend, while maintaining real-time AI performance, posed significant difficulties.
* Balancing the scale of integration with providing a clean and intuitive user experience was another key challenge we successfully navigated.
## Accomplishments that we're proud of
* MVP Completion: We successfully built and deployed a minimum viable product that effectively demonstrates the core functionality of Aligned.ai.
* Contribution to Open Source: We made a PR to improve one of the technologies we worked with, contributing back to the community. * User-Centric Design: We invested significant time in UI/UX design to ensure that the user experience is as intuitive and enjoyable as possible. ## What we learned Through this project, we learned the immense potential of AI-driven systems to facilitate meaningful connections between individuals. The ability of autonomous agents to understand and simulate human interaction signals a new paradigm in networking and relationship-building. We also gained valuable insights into prompt engineering, real-time AI processing, and the importance of seamless integration across the tech stack. ## What's next for Aligned.ai The potential for Aligned.ai is vast. Moving forward, we plan to expand the action space of personality embedding and vector based search, integrating more powerful features to enhance the matchmaking process. We aim to refine the AI's ability to simulate even more nuanced human interactions and to improve the accuracy of our personality embeddings. As we continue to develop Aligned.ai, our goal is to make it the go-to platform for building meaningful, long-term relationships in the startup ecosystem.
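For a concrete feel of the embed-and-match flow described above, here is a hedged sketch using the real cohere and chromadb entry points; the collection name, model choice, and metadata are assumptions, not necessarily what Aligned.ai ships:

```python
import cohere
import chromadb

co = cohere.Client()   # reads the Cohere API key from the environment
db = chromadb.Client()
people = db.get_or_create_collection("personality_embeddings")  # hypothetical name

def add_profile(user_id: str, conversation: str) -> None:
    """Embed a finished Aligned Voice conversation and store it."""
    emb = co.embed(texts=[conversation], model="embed-english-v3.0",
                   input_type="search_document").embeddings[0]
    people.add(ids=[user_id], embeddings=[emb], documents=[conversation])

def best_matches(conversation: str, k: int = 3):
    """Rank stored profiles by vector similarity to a new conversation."""
    emb = co.embed(texts=[conversation], model="embed-english-v3.0",
                   input_type="search_query").embeddings[0]
    return people.query(query_embeddings=[emb], n_results=k)["ids"][0]
```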
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm together.
## About
Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page to give access to all the tools on one screen and to make transitions between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real time.
## Features
Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our inbuilt browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies we used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under time constraints. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions (a sketch of the idea follows below).
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit, to add more relevant tools and widgets, and to expand into other fields of work to increase our user demographic. We also plan to include interface customization options to let users personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
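One simple version of that broadcast optimization is coalescing rapid updates and capping the emit rate; here is a Python sketch of the idea (our actual implementation lives in the Node/Socket.IO layer, and the interval is a tuning guess):

```python
import time

class ThrottledBroadcaster:
    """Keep only the latest state and emit at most ~20 messages/second."""

    def __init__(self, emit, min_interval=0.05):
        self.emit, self.min_interval = emit, min_interval
        self.last_sent, self.pending = 0.0, None

    def update(self, state):
        self.pending = state                  # coalesce: remember only the latest
        now = time.monotonic()
        if now - self.last_sent >= self.min_interval:
            self.emit(self.pending)           # e.g. a socket broadcast
            self.last_sent, self.pending = now, None
        # A real version would also flush any pending state on a timer.

b = ThrottledBroadcaster(emit=print)
for i in range(5):
    b.update({"cursor": i})                   # a burst collapses to few emits
    time.sleep(0.01)
```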
partial
## Inspiration
In a study of 1,237 Chrome extensions, each with a minimum of 1,000 downloads, the findings revealed that nearly half of these extensions request permissions that could potentially risk exposing users' personally identifiable information (PII), spread adware and malware, or even track all user activities online, including accessing passwords and financial data. We decided to tackle the problem of potential security risks through network analysis.
## What Redhat Panda Does
Redhat Panda provides a security testing tool known as the LLM Pen Testing Solution. This tool is designed to identify and address potential security risks in web applications, thereby enhancing the overall security of your application. Key features of the tool include:
* **API Security**: The tool can detect exposed API routes and keys.
* **Data Protection**: It identifies instances of Personally Identifiable Information (PII).
* **Additional Features**: The tool offers more features to ensure comprehensive security.
The tool is designed to be affordable and easily integrated into your deployment pipeline, providing comprehensive security checks for all projects.
## How Redhat Panda Was Built
Redhat Panda is built using a variety of technologies:
* **Frontend**: Streamlit Python
* **Backend**: Serverless FastAPI hosted through Modal, Playwright for headless web traffic, and Redis KV for synchronizing published events in a distributed system.
* **Infrastructure**: Modal for autoscaling serverless cloud compute, Upstash Redis for a serverless Redis instance.
* **LLMs**: Anthropic's Claude 3 Haiku through AWS for in-depth network analysis, OpenAI's gpt-4o for user audit summarization.
## Challenges Faced
We faced challenges in achieving synchronization across distributed events. These issues were eventually resolved by employing Redis and introducing a validation mechanism to ensure that routes that have already been examined are not revisited.
## Accomplishments
We developed an innovative security testing tool, the LLM Pen Testing Solution, designed to identify and address potential security risks in web applications. We are proud of the tool's ability to detect exposed API routes and keys, identify instances of Personally Identifiable Information (PII), and offer additional security features. We have also made the tool affordable and easy to integrate into deployment pipelines, ensuring comprehensive security checks for all projects.
## What's Next for Redhat Panda
We plan to continue to enhance our LLM Pen Testing Solution, possibly by adding more features, improving existing ones, and expanding our service into a deployable pipeline.
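A tiny sketch of that don't-revisit mechanism, using a Redis set as the shared marker across distributed workers (the key name and host are hypothetical):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def claim_route(url: str) -> bool:
    """Atomically mark a route as examined; True only for the first claimer."""
    return r.sadd("pentest:visited_routes", url) == 1

for url in ["https://app.example/api/users", "https://app.example/api/users"]:
    if claim_route(url):
        print("scanning", url)   # runs once
    else:
        print("skipping", url)   # duplicate event, already examined
```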
## Inspiration
We were inspired because many sites are attacked by distributed password-cracking attacks. We wanted to provide a solution to help website owners surveil the overall risk level of incoming requests.
## What it does
The project is a solution for website owners to secure their site from malicious login attempts by checking incoming requests against external APIs and giving them a risk score based on the data. It also includes a registration and login system, as well as a dashboard to see the details of each request.
## How we built it
The project was built using Django for the API that checks for risk, NextJS for the registration, login system, and dashboard, and a MySQL database to store data.
## Challenges we ran into
The biggest challenge was deploying the NextJS project onto AWS, which was not successful.
## What we learned
The team learned about the components that make up a dangerous incoming request and which APIs to use to check them.
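As an illustrative sketch of the risk-scoring idea (the signal names and weights below are hypothetical, not the values the Django API uses):

```python
def risk_score(signals: dict) -> int:
    """Combine external-API lookups into a 0-100 risk score."""
    score = 0
    if signals.get("ip_on_blocklist"):
        score += 40
    if signals.get("is_tor_or_proxy"):
        score += 25
    if signals.get("failed_logins_1h", 0) > 5:
        score += 25
    if signals.get("geo_mismatch"):
        score += 10
    return min(score, 100)

print(risk_score({"ip_on_blocklist": True, "failed_logins_1h": 9}))  # -> 65
```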
## Inspiration 🌈
Our team has all experienced the struggle of jumping into a pre-existing codebase and having to process how everything works before starting to add our own changes. This can be a daunting task, especially when commit messages lack detail or context. We also know that when it comes time to push our changes, we often gloss over the commit message to get the change out as soon as possible, not helping any future collaborators or even our future selves. We wanted to create a web app that allows users to better understand the journey of the product, letting them comprehend previous design decisions and see how a codebase has evolved over time. GitInsights aims to bridge the gap between hastily written commit messages and clear, comprehensive documentation, making collaboration and onboarding smoother and more efficient.
## What it does 💻
* Summarizes commits, tracks individual files in each commit, and suggests more accurate commit messages.
* Automatically suggests tags for commits, with the option for users to add their own custom tags for further sorting of data.
* Provides a visual timeline of user activity through commits, across all branches of a repository.
* Allows filtering commit data by user, highlighting the contributions of individuals.
## How we built it ⚒️
The frontend is developed with Next.js, using TypeScript and various libraries for UI/UX enhancement. The backend uses Express.js, which handles our API calls to GitHub and OpenAI. We used Prisma as our ORM to connect to a PostgreSQL database for CRUD operations. For authentication, we utilized GitHub OAuth to generate JWT access tokens, securely accessing and managing users' GitHub information. The JWT is stored in cookie storage and sent to the backend API for authentication. We created a GitHub application that users must add to their accounts when signing up. This allowed us to authenticate on the backend not only as our application, but also as the end user who grants access to this app.
## Challenges we ran into ☣️☢️⚠️
Originally, we wanted to use an open source LLM, like LLaMA, since we were parsing through a lot of data, but we quickly realized it was too inefficient, taking over 10 seconds to analyze each commit message. We also had to learn a stack of new technologies on the fly: d3.js, the GitHub API, Prisma; honestly, nearly everything was new to at least one of us.
## Accomplishments that we're proud of 😁
The user interface is so slay, especially the timeline page. The features work!
## What we learned 🧠
Running LLMs locally saves you money, but LLMs require lots of computation (wow) and are thus very slow when running locally.
## What's next for GitInsights
* Filter by tags, plus more advanced filtering and visualizations
* Adding webhooks to the GitHub repository to enable automatic analysis and real-time changes
* Implementing CRON background jobs, especially for the analysis the application needs to do when it first signs on a user, possibly done with RabbitMQ
* Creating native .gitignore files to refine the summarization process by ignoring files unrelated to development (i.e., package.json, package-lock.json, `__pycache__`)
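For illustration, the commit-summarization flow might look like the following sketch, written in Python for brevity even though the project's backend does this from Express; the GitHub REST endpoint is real, while the prompt and model name are placeholders:

```python
import requests
from openai import OpenAI

def suggest_message(owner: str, repo: str, sha: str, token: str) -> str:
    """Fetch one commit's file diffs and ask an LLM for a clearer message."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits/{sha}",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    commit = resp.json()
    # Truncate each file's patch so the prompt stays small.
    diffs = "\n".join(f["filename"] + ": " + f.get("patch", "")[:500]
                      for f in commit.get("files", []))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Suggest a clear, specific commit message for "
                              "these changes:\n" + diffs}],
    )
    return reply.choices[0].message.content
```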
losing
## Inspiration
Inspired by wearable technology and smart devices, RollX introduces a new way to control everyday technology using the familiar platform of a wristwatch.
## What it does
RollX is a custom-built wearable controller with an embedded accelerometer and gyroscope. The embedded software takes the sensor information, normalizes it, and maps it to various types of input controls. For example, the current sensor mapping is designed to be used with a simple custom-built game, but we have already tested mapping the data to control the cursor on a computer.
## How we built it
Starting from the ground up, the RollX team designed the housing, the electronic layout, and the embedded software to drive the device. The housing was designed with SolidWorks and 3D printed. The electronic components were tested and wired separately, then integrated into one circuit controlled by an Arduino Nano. This involved coordinating the data from the gyroscope and accelerometer to properly display the orientation of the device using the LED ring.
## Challenges we ran into
The initial design used an IoT Particle Photon board, which would have communicated wirelessly over the internet; however, due to complications with which packages were supported, we were forced to switch to an Arduino Nano. Further, due to the hardware change, the 3D modelling had to be updated. Separately, the integration of the OLED screen caused a memory overflow on the Arduino, which was corrected with an updated deployment process.
## Accomplishments that we're proud of
Our team is extremely proud to present a creative wearable device with a unique design that enables greater control over technology. This multidisciplinary project includes the integration of various sensors into one microcontroller, CAD and 3D modelling, and custom embedded software. Altogether, the unit accomplishes what we set out to do in the 36 hours we had. RollX has been fully integrated across hardware and software and is fully functional with a simple custom game as a proof of concept!
## What we learned
Hardware integration from the ground up, in tandem with customized embedded software. We learned a lot about what is required for a multidisciplinary project to be fully integrated and deployed. Although it did not make its way into the final project, we also learned a lot about IoT development and the Particle development environment.
## Potential applications
Potential applications of RollX include its use for educational purposes and as an assistive device. RollX can be utilized in VR classrooms, where the orientation of your hand motion will be recorded, analyzed, and used for various hands-on tasks. For example, during sports lessons, RollX can be used to analyze a player's motion and enhance their skills. Another major application is as an assistive device for individuals with limited mobility. For example, individuals suffering from spinal injuries may not have fine control over their hands and fingers, but RollX can detect movement in the wrist and forearm that can potentially be used to control various technological applications.
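As a small sketch of the sensor-to-input mapping described above, assuming normalized tilt angles stream in from the wristwatch (the function name, sensitivity, and deadzone values are illustrative, and pyautogui stands in for the host-side driver):

```python
import pyautogui

SENSITIVITY = 8.0  # pixels per degree of tilt; tuning guess, not RollX's value

def apply_tilt(pitch_deg: float, roll_deg: float, deadzone: float = 2.0) -> None:
    """Map wrist tilt to relative cursor motion, ignoring tiny jitters."""
    dx = SENSITIVITY * roll_deg if abs(roll_deg) > deadzone else 0
    dy = SENSITIVITY * pitch_deg if abs(pitch_deg) > deadzone else 0
    if dx or dy:
        pyautogui.moveRel(dx, dy)

apply_tilt(pitch_deg=-3.5, roll_deg=5.0)  # tilt right and up -> cursor moves
```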
## Inspiration
The ITE\_McMaster challenge/Google challenge.
## What it does
It is a website that allows registered users to share their rides.
## How we built it
Vue.js/CSS/Bootstrap/Netlify/Firebase/Google Maps API
## Challenges we ran into
We were learning while working on this project, particularly implementing Firebase.
## Accomplishments that we're proud of
We integrated Google Maps into our web app.
## What we learned
Vue/Figma/Google Maps APIs.
## What's next for poolit
Using Firebase to add phone/email authentication.
## How we built it
The sensors consist of the Maxim Pegasus board and any Android phone with our app installed. The two are synchronized at the beginning, and then by moving the "tape" away from the "measure," we can get an accurate measure of distance, even for non-linear surfaces.
## Challenges we ran into
The sensors we made use of, such as Android gyroscopes, can sometimes produce high-variance outputs. Maintaining an inertial reference frame from our board to the ground as it was rotated proved very difficult and required the use of quaternion rotational transforms. Using the Maxim Pegasus board was difficult as it is a relatively new piece of hardware, and thus no APIs or libraries have been written for basic functions yet. We had to query for accelerometer and gyro data manually from internal IMU registers with I2C.
## Accomplishments that we're proud of
Full integration with the Maxim board and the flexibility to adapt the software to many different handyman-style use cases, e.g., as a table level, compass, etc. We experimented with and implemented various noise filtering techniques, such as Kalman filters and low-pass filters, to increase the accuracy of our data. In general, working with the Pegasus board involved a lot of low-level read-write operations within internal device registers, so basic tasks like getting accelerometer data became much more complex than we were used to.
## What's next
Other possibilities were listed above, along with the potential to make even better estimates of absolute positioning in space through different statistical algorithms.
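For a flavor of the noise filtering mentioned above, here is a bare-bones exponential low-pass filter over raw sensor samples (the alpha value is a tuning guess, and the real pipeline also used Kalman filtering):

```python
def low_pass(samples, alpha=0.2):
    """Smooth a stream of sensor readings; smaller alpha = heavier smoothing."""
    filtered, prev = [], samples[0]
    for x in samples:
        prev = alpha * x + (1 - alpha) * prev
        filtered.append(prev)
    return filtered

noisy = [0.0, 2.1, -1.8, 2.4, 0.2, 1.9]   # jittery accelerometer axis
print([round(v, 2) for v in low_pass(noisy)])
```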
losing
## TL; DR * Music piracy costs the U.S. economy [$12.5 billion annually](https://www.riaa.com/wp-content/uploads/2015/09/20120515_SoundRecordingPiracy.pdf). * Independent artists are the [fastest growing segment in the music industry](https://www.forbes.com/sites/melissamdaniels/2019/07/10/for-independent-musicians-goingyour-own-way-is-finally-starting-to-pay-off/), yet lack the funds and reach to enforce the Digital Millennium Copyright Act (DMCA). * We let artists **OWN** their work (stored on InterPlanetary File System) by tracking it on our own Sonoverse Ethereum L2 chain (powered by Caldera). * Artists receive **Authenticity Certificates** of their work in the form of Non-Fungible Tokens (NFTs), powered by Crossmint’s Minting API. * We protect against parodies and remixes with our **custom dual-head LSTM neural network model** trained from scratch which helps us differentiate these fraudulent works from originals. * We proactively query YouTube through their API to constantly find infringing work. * We’ve integrated with **DMCA Services**, LLC. to automate DMCA claim submissions. Interested? Keep reading! ## Inspiration Music piracy, including illegal downloads and streaming, costs the U.S. economy $12.5 billion annually. Independent artists are the fastest growing segment in the music industry, yet lack the funds to enforce DMCA. We asked “Why hasn’t this been solved?” and took our hand at it. Enter Sonoverse, a platform to ensure small musicians can own their own work by automating DMCA detection using deep learning and on-chain technologies. ## The Issue * Is it even possible to automate DMCA reports? * How can a complex piece of data like an audio file be meaningfully compared? * How do we really know someone OWNS an audio file? * and more... These are questions we had too, but by making custom DL models and chain algorithms, we have taken our hand at answering them. ## What we’ve made We let artists upload their original music to our platform where we store it on decentralized storage (IPFS) and our blockchain to **track ownership**. We also issue Authenticity Certificates to the original artists in the form of Non-Fungible Tokens. We compare uploaded music with all music on our blockchain to **detect** if it is a parody, remix, or other fraudulent copy of another original song, using audio processing and an LSTM deep learning model built and trained from scratch. We proactively query YouTube through their API for “similar” music (based on our **lyric hashes**, **frequency analysis**, and more) to constantly find infringing work. For detected infringing work, we’ve integrated with DMCA Services, LLC. to **automate DMCA claim submissions**. ## How we built it All together, we used… * NextJS * Postgres * AWS SES * AWS S3 * IPFS * Caldera * Crossmint * AssemblyAI * Cohere * YouTube API * DMCA Services It’s a **lot**, but we were able to split up the work between our team. Gashon built most of the backend routes, an email magic link Auth platform, DB support, and AWS integrations. At the same time, Varun spent his hours collecting hours of audio clips, training and improving the deep LSTM model, and writing several sound differentiation/identification algorithms. Here’s Varun’s **explanation** of his algorithms: “To detect if a song is a remix, we first used a pre-trained speech to text model to extract lyrics from mp3 files and then analyzed the mel-frequency cepstral coefficients, tempo, melody, and semantics of the lyrics to determine if any songs are very similar. 
Checking whether a song is a parody is much more nuanced, and we trained a dual-head LSTM neural network model in PyTorch to take in vectorized embeddings of lyrics and output the probability of one of the songs being a parody of the other.”
While Varun was doing that, Ameya built out the blockchain services with Caldera and Crossmint, and integrated DMCA Services. Ameya ran an Ethereum L2 chain specific to this project (check it out [here](https://treehacks-2024.explorer.caldera.xyz)) using Caldera. He built out significant infrastructure to upload audio files to IPFS (decentralized storage) and interact with the Caldera chain. He also created the Authenticity Certificate using Crossmint that's delivered directly to each Sonoverse user's account. Ameya and Gashon came together at the end to create the Sonoverse frontend, while Varun pivoted to create our YouTube API jobs that query through recently uploaded videos to find infringing content.
## Challenges we overcame
We couldn't find existing models to detect parodies and had to train a custom model from scratch on training data we had to find ourselves. This was quite challenging: with every audio file being unique, we had to assemble a dataset of hours of audio clips. And, like always, integration was difficult. The power of a team was a huge plus, but also a challenge. Ameya's blockchain infrastructure had Solidity compilation challenges when porting into Gashon's platform (which took some precious hours to sort out). Varun's ML algorithms ran on a Python backend which had to be hosted alongside our NextJS platform. You can imagine what else we had to change, fix, and update, so I won't bore you.
Another major challenge was something we brought on ourselves, honestly. We set our aim high, so we had to use several different frameworks, services, and technologies to add all the features we wanted. This included several hours of us learning new technologies and services, and figuring out how to implement them in our project.
## Accomplishments that we're proud of
Blockchain has a lot of cool and real-world applications, but we're excited to have settled on Sonoverse. We identified a simple (yet technically complex) way to solve a problem that affects many small artists. We also made a sleek web platform, in just a short amount of time, with scalable endpoints and backend services. We also designed and trained a deep learning LSTM model to identify original audios vs. fraudulent ones (remixes, speed-ups, parodies, etc.) that achieved **93% accuracy**.
## What we learned
#### About DMCA
We learned how existing DMCA processes are implemented and the large capital costs associated with them. We became **experts** on digital copyrights and media work!
#### Blockchain
We learned how to combine centralized and decentralized infrastructure solutions to create a cohesive **end-to-end** project.
## What's next for Sonoverse
We're looking forward to incorporating on-chain **royalties** for small artists by detecting when users consume their music, removing the need for formal contracts with big companies to earn revenue. We're excited to also add support for more public APIs in addition to the YouTube API!
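As a pared-down sketch of the frequency-analysis comparison Varun describes (mean-MFCC signatures plus cosine similarity; the threshold is an illustrative guess, and the real system also weighs tempo, melody, and lyric semantics):

```python
import librosa
import numpy as np

def mfcc_signature(path: str) -> np.ndarray:
    """Summarize a track as the mean of its MFCC frames."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # one 20-dim vector per track

def likely_same_song(path_a: str, path_b: str, threshold: float = 0.92) -> bool:
    a, b = mfcc_signature(path_a), mfcc_signature(path_b)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold

print(likely_same_song("original.mp3", "suspected_upload.mp3"))  # placeholder files
```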
## Inspiration
Since this was the first hackathon for most of our group, we wanted to work on a project where we could learn something new while sticking to familiar territory. Thus we settled on programming a Discord bot, something all of us have extensive experience using, that works with UiPath, a tool equally as intriguing as it is foreign to us. We wanted to create an application that would allow us to track the prices and other related information of tech products in order to streamline the buying process and enable the user to get the best deals. We decided to program a bot that utilizes user input, web automation, and web scraping to generate information on various items, focusing on computer components.
## What it does
Once online, our PriceTracker bot runs under two main commands: !add and !prices. Using these two commands, a few external CSV files, and UiPath, the bot stores items input by the user and returns related information found via UiPath's web-scraping features. A concise display of each product's price, stock, and sale discount is shown to the user through the Discord bot.
## How we built it
We programmed the Discord bot using the comprehensive discord.py API. Using its thorough documentation and a handful of tutorials online, we quickly learned how to initialize a bot using Discord's Developer Portal and create commands that work with specified text channels. To scrape web pages, in our case the Canada Computers website, we used a UiPath sequence along with the aforementioned CSV file, which contained input retrieved from the bot's "!add" command. In the UiPath process, each product is searched on the Canada Computers website, and then, through data scraping, the most relevant results from the search and all related information are processed into a CSV file. This CSV file is then parsed to create a concise description, which is returned in Discord whenever the bot's "!prices" command is called.
## Challenges we ran into
The most challenging aspect of our project was figuring out how to use UiPath. Since Python was such a large part of programming the Discord bot, our experience with the language helped exponentially. The same could be said about working with text and CSV files. However, because automation was a topic we had hardly any knowledge of, our first encounter with it was naturally rough. Another big problem with UiPath was learning how to use variables, as we wanted to generalize the process so that it would work for any product input. Eventually, with enough perseverance, we were able to incorporate UiPath into our project exactly the way we wanted to.
## Accomplishments that we're proud of
Learning the ins and outs of automation alone was a strenuous task. Being able to incorporate it into a functional program is even more difficult, but incredibly satisfying as well. Albeit small in scale, this introduction to automation serves as a good stepping stone for further research on the topic of automation and its capabilities.
## What we learned
Although we stuck close to our roots by relying on Python for programming the Discord bot, we learned a ton of new things about how these bots are initialized, the various attributes and roles they can have, and how we can use IDEs like PyCharm in combination with larger platforms like Discord. Additionally, we learned a great deal about automation and how it functions through UiPath, which absolutely fascinated us the first time we saw it in action.
As this was the first hackathon for most of us, we also got a glimpse into what we have been missing out on and how beneficial these competitions can be. Getting the extra push to start working on side projects and indulging in solo research was greatly appreciated.
## What's next for Tech4U
We went into this project with a plethora of different ideas, and although we were not able to incorporate all of them, we finished with something we were proud of. Some other ideas we wanted to integrate include: scraping multiple different websites, formatting output differently on Discord, automating the act of purchasing an item, taking input and giving output under the same command, and more.
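For a flavor of how the bot side fits together, here is a stripped-down sketch of the two commands wired to the CSV files that UiPath reads and writes; file names, columns, and the token placeholder are hypothetical, and we assume discord.py 2.x with the message-content intent enabled:

```python
import csv
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required for prefix commands on discord.py 2.x
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command(name="add")
async def add_item(ctx, *, product: str):
    # append the requested product to the CSV that the UiPath sequence consumes
    with open("items.csv", "a", newline="") as f:
        csv.writer(f).writerow([product])
    await ctx.send(f"Added {product} to the tracking list.")

@bot.command(name="price")
async def price(ctx):
    # read back the results CSV that UiPath scraped from Canada Computers
    with open("results.csv", newline="") as f:
        for name, price_, stock, discount in csv.reader(f):
            await ctx.send(f"{name}: {price_} ({stock}, {discount} off)")

bot.run("YOUR_BOT_TOKEN")
```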
## Inspiration
Viral content, particularly copyrighted material and deepfakes, has huge potential to proliferate widely with Generative AI. This impacts artists, creators, and businesses; for example, copyright infringement causes $11.5 billion in lost profits within the film industry annually. As students who regularly come across copyrighted material on social media, we know that manual reporting by users is clearly ineffective, and this problem lends itself well to the abilities of AI agents. A current solution by companies is to employ people to search for and remove content, which is time consuming and expensive. We are keen to leverage automatic detection through our software, and also serve individuals and businesses.
## What it does
PirateShield is a SaaS solution that automatically finds and flags videos that infringe a copyright owned by a user. We deploy AI agents to search online and flag content using semantic search. We also build agents to scrape this content and classify whether it is pirated, using comparisons to copyright licenses on YouTube. Our prototype focuses on the TikTok platform.
## How we built it
Our platform includes AI agents built on Fetch.ai to perform automatic search and classification. This is split into a retrieval feature with semantic search, and a video classification feature. Our database is built with MongoDB to store videos and search queries. Our frontend uses data visualisation to provide an analytics dashboard for the rate of true-positive classifications over time, as well as rates of video removal.
## Challenges we ran into
We initially considered many features for our platform, and had to distill this into a set of core prototype features. We were also initially unsure how we would implement the classification feature before deciding on using YouTube's database. Moreover, testing our agents end-to-end on queries involved much debugging!
## Accomplishments that we're proud of
As a team, we are proud of identifying this impactful problem to work on, and coordinating to implement a solution while meeting for the first time! In particular, we are proud of successfully building AI agents to search for and download videos, as well as classify them. We're excited to get our first users and deploy the remaining features of the platform.
## What we learned
Our tools were Fetch.ai, Google APIs, fast RAG, and MongoDB. We upskilled quickly in these frameworks, and also gained a lot from the advice of mentors and workshop speakers.
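The retrieval feature runs on Fetch.ai agents in our build, but the semantic-search core can be sketched with an off-the-shelf embedding model as a stand-in; the model name, threshold, and caption-only matching below are all assumptions:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

def flag_candidates(owner_description, scraped_captions, threshold=0.6):
    """Return scraped captions that look semantically close to the owner's work."""
    query = model.encode(owner_description, convert_to_tensor=True)
    docs = model.encode(scraped_captions, convert_to_tensor=True)
    scores = util.cos_sim(query, docs)[0]
    return [(cap, float(s)) for cap, s in zip(scraped_captions, scores) if s > threshold]
```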
partial
* [Deployment link](https://unifymd.vercel.app/)
* [Pitch deck link](https://www.figma.com/deck/qvwPyUShfJbTfeoPSjVIGX/UnifyMD-Pitch-Deck?node-id=4-71)
## 🌟 Inspiration
Long lists of patient records make it challenging to locate **relevant health data**. This can lead to doctors providing **inaccurate diagnoses** due to insufficient or disorganized information. Unstructured data, such as **progress notes and dictated information**, is not stored properly, and smaller healthcare facilities often **lack the resources** or infrastructure to address these issues.
## 💡 What it does
UnifyMD is a **unified health record system** that aggregates patient data and historical health records. It features an **AI-powered search bot** that leverages a patient's historical data to help healthcare providers make more **informed medical decisions** with ease.
## 🛠️ How we built it
* We started with creating an **intuitive user interface** using **Figma** to map out the user journey and interactions.
* For **secure user authentication**, we integrated **PropelAuth**, which allows us to easily manage user identities.
* We utilized **LangChain** as the large language model (LLM) framework to enable **advanced natural language processing** for our AI-powered search bot.
* The search bot is powered by **OpenAI**'s API to provide **data-driven responses** based on the patient's medical history.
* The application is built using **Next.js**, which provides **server-side rendering** and a full-stack JavaScript framework.
* We used **Drizzle ORM** (Object Relational Mapper) for seamless interaction between the application and our database.
* The core patient data and records are stored **securely in Supabase**.
* For front-end styling, we used **shadcn/ui** components and **TailwindCSS**.
## 🚧 Challenges we ran into
One of the main challenges we faced was working with **LangChain**, as it was our first time using this framework. We ran into several errors during testing, and the results weren't what we expected. It took **a lot of time and effort** to figure out the problems and learn how to fix them as we got more familiar with the framework.
## 🏆 Accomplishments that we're proud of
* Successfully integrated **LangChain** as a new large language model (LLM) framework to **enhance the AI capabilities** of our system.
* Implemented all our **initial features on schedule**.
* Effectively addressed key challenges in **Electronic Health Records (EHR)** with a robust, innovative solution to provide **improvements in healthcare data management**.
## 📚 What we learned
* We gained a deeper understanding of various patient safety issues related to the limitations and inefficiencies of current Electronic Health Record (EHR) systems.
* We discovered that LangChain is a powerful tool for Retrieval-Augmented Generation (RAG), and it can effectively run SQL queries on our database to optimize data retrieval and interaction.
## 🚀 What's next for UnifyMD
* **Partnership with local clinics** to kick-start our journey into improving **healthcare services** and **patient safety**.
* **Update** to include a **speech-to-text** feature to save time and increase **patient and healthcare provider satisfaction**.
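The search bot boils down to retrieval plus a grounded LLM call. We went through LangChain, but the essential step can be sketched directly against the OpenAI client; the model name and prompt wording here are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_patient(question: str, records: list[str]) -> str:
    # `records` holds the relevant snippets retrieved from the patient's history
    context = "\n".join(records)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Answer using only the patient records provided."},
            {"role": "user",
             "content": f"Records:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```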
## Inspiration
The inspiration for RehabiliNation comes from a mixture of our love for gaming and our personal experiences researching and working with those who have physical and mental disabilities.
## What it does
RehabiliNation provides an accessible gaming experience for people with physical disabilities and motivates those fighting through the struggles of physical rehabilitation. It can also be used to track the progress people make while going through their healing process.
## How we built it
The motion-control arm band collects data using the gyroscope module linked to the Arduino board. It sends the data back to the Arduino serial monitor in the form of angles. We then use a Python script to read the data from the serial monitor and interpret it into keyboard input, which allows us to interface with multiple games. Currently, it is used to play our Pac-Man game, which is written in Java.
## Challenges we ran into
Our main challenges were determining how to utilize the gyroscope with the Arduino board and figuring out how to receive and interpret the data with a Python script. We also came across some issues with calibrating the motion sensors.
## Accomplishments that we're proud of
Throughout our creation process, we all managed to learn about new technologies, new skills, and programming concepts. We may have been pushed into the pool, but it was quite a fun way to learn, and in the end we came out with a finished product capable of helping people in need.
## What we learned
We learned a great amount about the hardware product process, as well as the utilization of hardware in general. It was a difficult but rewarding experience, and we thank U of T for providing us with this opportunity.
## What's next for RehabiliNation
RehabiliNation will continue to refine our products in the future, including the use of better materials and more responsive hardware than what was shown in today's proof of concept. Hopefully our products will be implemented by physical rehabilitation centres to help brighten the rehab process.
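A minimal sketch of the Python side described above: read angles from the Arduino over serial and turn them into key presses. The port, baud rate, thresholds, and key mapping are assumptions, and pynput stands in for whichever keyboard library is used:

```python
import serial
from pynput.keyboard import Controller, Key

keyboard = Controller()
ser = serial.Serial("COM3", 9600)  # hypothetical port and baud rate

def tap(key):
    keyboard.press(key)
    keyboard.release(key)

while True:
    line = ser.readline().decode(errors="ignore").strip()
    if not line:
        continue
    angle = float(line)  # the Arduino prints one angle per line
    if angle > 30:       # tilt right -> move right
        tap(Key.right)
    elif angle < -30:    # tilt left -> move left
        tap(Key.left)
```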
## Inspiration
We've all left a doctor's office feeling more confused than when we arrived. This common experience highlights a critical issue: over 80% of Americans say access to their complete health records is crucial, yet 63% lack their medical history and vaccination records since birth. Recognizing this gap, we developed our app to empower patients with real-time transcriptions of doctor visits, easy access to health records, and instant answers from our AI doctor avatar. Our goal is to ensure EVERYONE has the tools to manage their health confidently and effectively.
## What it does
Our app provides real-time transcription of doctor visits, easy access to personal health records, and an AI doctor for instant follow-up questions, empowering patients to manage their health effectively.
## How we built it
We used Node.js, Next.js, WebRTC, React, Figma, Spline, Firebase, Gemini, and Deepgram.
## Challenges we ran into
One of the primary challenges we faced was navigating the extensive documentation associated with new technologies. Learning to implement these tools effectively required us to read closely and understand how to integrate them in unique ways to ensure seamless functionality within our website. Balancing these complexities while maintaining a cohesive user experience tested our problem-solving skills and adaptability. Along the way, we also struggled with Git and debugging.
## Accomplishments that we're proud of
Our proudest achievement is developing the AI avatar, as there was very little documentation available on how to build it. This project required us to navigate through various coding languages and integrate the demo effectively, which presented significant challenges. Overcoming these obstacles not only showcased our technical skills but also demonstrated our determination and creativity in bringing a unique feature to life within our application.
## What we learned
We learned the importance of breaking problems down into smaller, manageable pieces to construct something big and impactful. This approach not only made complex challenges more approachable but also fostered collaboration and innovation within our team. By focusing on individual components, we were able to create a cohesive and effective solution that truly enhances patient care. We also learned a valuable lesson on the importance of sleep!
## What's next for MedicAI
With the AI medical industry projected to exceed $188 billion, we plan to scale our website to accommodate a growing number of users. Our next steps include partnering with hospitals to enhance patient access to our services, ensuring that individuals can seamlessly utilize our platform during their healthcare journey. By expanding our reach, we aim to empower more patients with the tools they need to manage their health effectively.
partial
## Inspiration
Around 40% of the lakes in America are too polluted for aquatic life, swimming, or fishing. Although children make up 10% of the world’s population, over 40% of the global burden of disease falls on them, and environmental factors contribute to more than 3 million children under age five dying every year. Pollution kills over 1 million seabirds and 100 million mammals annually, while recycling and composting alone prevented 85 million tons of waste from being dumped in 2010.

There are currently over 500 million cars in the world; by 2030 that number will rise to 1 billion, doubling pollution levels. High-traffic roads have more concentrated air pollution, so people living close to these areas have an increased risk of heart disease, cancer, asthma, and bronchitis. Inhaling polluted air takes away at least 1-2 years of a typical human life, and 25% of deaths in India and 65% of deaths in Asia are a result of air pollution.

Over 80 billion aluminium cans are used every year around the world, and a thrown-away can may stay in that form for 500 years or more. People aren’t recycling as much as they should; meanwhile, rainforests are being cut down at approximately 100 acres per minute.

On top of this, with me living near the Great Lakes and Neeral in the Bay Area, we have both seen not only tremendous amounts of air pollution, but marine pollution as well as pollution in the great freshwater lakes around us. All of this inspired us to create this project.
## What it does
The React Native app connects with the website Neeral made in order to create a comprehensive solution to this problem. There are five main sections in the React Native app:

The first section is an area where users can collaborate by creating posts to reach out to others, meet up, and organize events to reduce pollution. One example could be a passionate environmentalist who is organizing a beach trash pick-up and wishes to bring along more people. With the help of this feature, more people would be able to learn about the event and participate.

The second section is a petitions section where users have the ability to support local groups or sign a petition to effect change. These petitions include placing pressure on large corporations to reduce carbon emissions and so forth. This allows users to take action effectively.

The third section is the forecasts tab, where users are able to retrieve data regarding various data points in pollution. This includes the ability to obtain heat maps of air quality, pollution, and pollen levels, and to retrieve recommended procedures, not only for the general public but also for special-case scenarios, using APIs.

The fourth section is a tips and procedures tab for users to be able to respond to certain situations. They are able to consult this guide and find the situation that matches theirs in order to find the appropriate action to take. This helps the end user stay calm during situations like the one happening in California with dangerously high levels of carbon.

The fifth section is an area where users are able to use machine learning to figure out whether they are in a place of trouble. In many instances, people don't know exactly where they are, especially when travelling or going somewhere unknown.
With the help of machine learning, the user is able to enter certain information regarding their surroundings, and the algorithm is able to decide whether they are in trouble. The algorithm has 90% accuracy and is quite efficient.
## How I built it
For the React Native part of the application, I will break it down section by section.

For the first section, I simply used Firebase as a backend, which allowed a simple, easy, and fast way of retrieving and pushing data to cloud storage. This allowed me to spend time on other features, and due to my ever-growing experience with Firebase, this did not take too much time. I simply added a form which pushed data to Firebase, and when you go to the home page it refreshes and you can see that the cloud was updated in real time.

For the second section, I used NativeBase to create my UI and found an assortment of petitions, which I then linked, adding images from their websites, to create the petitions tab. I then used expo-web-browser to deep-link each petition, opening the link in Safari from within the app.

For the third section, I used breezometer.com’s pollution API, air quality API, pollen API, and heat map APIs to create an assortment of data points, health recommendations, and visual graphics that represent pollution in several ways. The APIs also provided information such as the most common pollutant and the protocols that different age groups and people with certain conditions should follow. With this extensive API, there were many endpoints I wanted to add, but not all were added due to lack of time.

The fourth section is very similar to the second section, as it is an assortment of links, proofread and verified to be truthful sources, so that the end user has a procedure to turn to in extreme emergencies. As we see horrible things happen, such as the wildfires in California, air quality becomes a serious concern for many, and these procedures help the user stay calm and knowledgeable.

For the fifth section, the machine learning algorithm described above takes in information about the user's surroundings and classifies whether they are in a troubled area.
## Challenges I ran into
API query bugs were a big issue, both in formatting the queries and in mapping the returned data back into the UI. It took some time and kept us working until the end, but we were still able to complete our project and goals.
## What's next for PRE-LUTE
We hope to use this in areas where there is much suffering due to extravagantly large amounts of pollution, such as Delhi, where even seeing can be hard due to the pollution. We hope to create a finished product and release it to the App and Play Stores respectively.
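For the third section, each BreezoMeter call is a simple HTTP request; this sketch uses the v2 current-conditions endpoint as we remember it, so the parameters and response fields should be treated as assumptions:

```python
import requests

BREEZOMETER_KEY = "YOUR_API_KEY"  # placeholder

def current_air_quality(lat, lon):
    # fetch the current air-quality snapshot plus health recommendations
    resp = requests.get(
        "https://api.breezometer.com/air-quality/v2/current-conditions",
        params={
            "lat": lat,
            "lon": lon,
            "key": BREEZOMETER_KEY,
            "features": "health_recommendations,dominant_pollutant_concentrations",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]
```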
## Inspiration
We wanted to reduce the global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for environmental activism. We were inspired by the acts of 17-year-old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, combined with our technical knowledge, we created Recycle.space.
## What it does
Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste items simply by holding them up to a camera. This application will be especially useful once permanent fixtures are erected in malls, markets, and large public locations.
## How we built it
Using a Flask-based backend to connect to the Google Vision API, we captured images and categorized which waste category each item belongs to. This was visualized using Reactstrap.
## Challenges we ran into
* Deployment
* Categorization of food items using the Google API
* Setting up a dev environment for a brand-new laptop
* Selecting an appropriate backend framework
* Parsing image files using React
* UI design using Reactstrap
## Accomplishments that we're proud of
* WE MADE IT! We are thrilled to create such an incredible app that makes people's lives easier while helping improve the global environment.
## What we learned
* UI is difficult
* Picking a good tech stack is important
* Good version control practice is crucial
## What's next for Recycle.space
Deploying a scalable and finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
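The backend's core is one Vision API call plus a mapping from labels to bins; a trimmed-down sketch, where the label-to-category table is a toy stand-in for the real one:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# toy mapping from Vision labels to waste categories
CATEGORIES = {"tin can": "Metal", "plastic bottle": "Plastic", "paper": "Paper"}

def classify_waste(image_bytes: bytes):
    image = vision.Image(content=image_bytes)
    labels = client.label_detection(image=image).label_annotations
    for label in labels:  # labels arrive sorted by confidence
        category = CATEGORIES.get(label.description.lower())
        if category:
            return category, label.score
    return "Garbage", 0.0
```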
## Inspiration
Poor water quality can cause a multitude of illnesses, including but not limited to:

* Travelers’ Diarrhea
* Giardia and Cryptosporidium
* Dysentery
* Salmonella
* Escherichia coli O157:H7 (E. coli)
* Typhoid Fever
* Cholera
* Hepatitis A
* Hepatitis E
* Campylobacter

After a few hours of brainstorming, research, and discussion, we decided to address a major root cause. **Poor water well maintenance.**
## What it does
By pulling well data from [The Humanitarian Data Exchange](https://data.humdata.org/) we were able to create a map of existing water wells in Djibouti. We then built an Android application that allows users to report new water wells, make updates to the quality of current ones, and report any water well maintenance, all in real time.
## How we built it
We had our in-house data scientist explore, filter, and compile public health datasets and push them into Google Firestore. Our mobile dev folks worked on making a great framework in React Native for cross-platform app deployment. We all worked together on graphics and ideation.
## Challenges we ran into
API finagling. Data curation and wrangling from .shp to .csv to Firestore collections was a pain; so were React Native + Firebase + Expo integration, and lots and lots of unbalanced brackets. Getting fast cloud retrieval and async programming right took quite a while, but seeing all the pins finally show up on the screen was well worth it.
## Accomplishments that we're proud of
Transforming archaic, inaccessible data into an intuitive, user-friendly interface with real social impact. Drinking 10 Red Bulls.
## What we learned
React Native, Firebase API/Google Cloud Platform, Expo, and the Meaning of Life.
## What's next for LiveWell
More datasets, better analytics, greater impact!
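Pushing curated well records into Firestore and streaming them back for the map pins takes only a few lines with the Python client; the collection and field names below are our own schema, sketched from memory:

```python
from google.cloud import firestore

db = firestore.Client()

def report_well(lat, lon, quality, notes=""):
    # push a new well report (schema is ours, not the HDX source format)
    db.collection("wells").add({
        "location": firestore.GeoPoint(lat, lon),
        "quality": quality,
        "notes": notes,
        "reported_at": firestore.SERVER_TIMESTAMP,
    })

def all_wells():
    # stream every well document for rendering pins on the map
    return [doc.to_dict() for doc in db.collection("wells").stream()]
```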
winning
## Inspiration
When we heard about using food as a means of love and connection from Otsuka x VALUENEX’s Opening Ceremony presentation, our team was instantly inspired to create something that would connect Asian American Gen Z with our cultural roots and immigrant parents. Recently, there has been a surge of instant Asian food in American grocery stores. However, the love that exudes from our mother’s piping hot dishes is irreplaceable, which is why it’s important for us, the loneliest demographic in the U.S., to cherish our immigrant parents’ traditional recipes. As Asian American Gen Z ourselves, we often fear losing beloved cultural dishes, as our parents have recipes ingrained in them through years of repetition and thus have neglected documenting these precious recipes. As a result, many of us don’t have access to recreating these traditional dishes, so we wanted to create a web application that encourages the sharing of traditional, cultural recipes from our immigrant parents to Asian American Gen Z. We hope that this will reinforce cross-generational relationships, alleviate feelings of disconnect and loneliness (especially in immigrant families), and preserve memories and traditions.
## What it does
Through this web application, users have the option to browse through previews of traditional Asian recipes, posted by Asian or Asian American parents, featured on the landing page. Users can filter recipes by culture to get closer to finding the perfect dish that reminds them of home. In the previews, users will find the difficulty of the dish (via the number of knives; more knives means more difficult), the cultural type of dish, and the option to favorite/save a dish. Once they click on a preview, they will be greeted by an expanded version of the recipe, featuring the name and image of the dish, ingredients, and instructions on how to prepare and cook it. Users who want to add recipes to *yumma* can use a modal box to input various details about the dish. Additionally, users can supplement their recipes with stories about the meaning behind each dish, sparking warm memories that will last forever.
## How we built it
We built *yumma* using ReactJS as our frontend, Convex as our backend (made easy!), Material UI for the modal component, CSS for styling, GitHub to manage our versions, a lot of helpful tips and guidance from mentors and sponsors (♡), a lot of hydration from Pocari Sweat (♡), and a lot of love from puppies (♡).
## Challenges we ran into
Since we were all relative beginners in programming, we initially struggled with simply being able to bring our ideas to life through successful, bug-free implementation. We turned to a lot of experienced React mentors and sponsors (shoutout to Convex) for assistance in debugging. We truly believe that learning from such experienced and friendly individuals was one of the biggest and most valuable takeaways from this hackathon. We additionally struggled with styling, because we were incredibly ambitious with our design and wanted to create a high-fidelity functioning app; however, HTML/CSS styling can take large amounts of time when you barely know what a flex box is. We also struggled heavily with getting our app to function, due to one of its main features living in a popup menu (the Modal from Material UI). We worked around this by creating an extra button that let us accomplish the functionality we needed.
## Accomplishments that we're proud of
This is the first hackathon for all of us! All of us also only recently started getting into app development, and each of us has around a year or less of experience, so this was kind of a big deal to each of us. We were excitedly anticipating the challenge of starting something new from the ground up. While we were not expecting to even be able to submit a working app, we ended up accomplishing some of our key functionality and creating high-fidelity designs. Not only that, but each and every one of us got to explore interests we didn’t even know we had. We are not only proud of our hard work in actually making this app come to fruition, but that we were all so open to putting ourselves out of our comfort zone and realizing our passions for these new endeavors. We tried new tools, practiced new skills, and pushed our necks to the most physical strain they could handle. Another accomplishment that we are proud of is simply the fact that we never gave up. It could have been very easy to shut our laptops and run around the Main Quadrangle, but our personal ties and passion for this project kept us going.
## What we learned
On the technical side, Erin and Kaylee learned how to use Convex for the first time (woo!) and learned how to work with components they never knew could exist, while Megan tried her hand for the first time at React and CSS while coming up with some stellar wireframes. Galen was a double threat, going back to her roots as a designer while helping us develop our display component. Beyond those skills, our team was able to connect with some of the company sponsors and reinvigorate our passions for why we chose to go down the path of technology and development in the first place. We also learned more about ourselves: our interests, our strengths, and our ability to connect with each other through this unique struggle.
## What's next for yumma
* Adding the option to upload private recipes that can only be visible to you and any other user you invite to view them (so that your Ba Ngoai's (grandma's) recipes stay a family secret!)
* Adding more dropdown features to the input fields so that some will be easier and quicker to use
* A messaging feature where you can talk to other users and connect with them, so that cooking meetups can happen and you can share this part of your identity with others
* Allowing users to upload photos of what they make from recipes and post them, where the most recent photos for each recipe will be displayed as part of a carousel on each recipe component
* An ingredients list that users can edit to keep track of things they want to grocery shop for while browsing
## Inspiration
Our team is full of food lovers and what’s a better way to show this passion than to design and develop a website related to it! We were inspired by the Hack The 6ix sponsor BMO, who proposed the challenge of answering “What to eat for dinner?”. We realized that this is probably the most asked question during the pandemic, since we can’t go out normally, and because we search our fridges every 15 minutes at home for something to eat. But don’t worry! TastyDinner is here for you!
## What it does
TastyDinner is here to answer the question: “What to eat for dinner?” by:

* Giving inspiration with a gallery of delicious food items to look at!
* Outputting recipes you can make with the ingredients you already have!
* Using Vision AI from Google Cloud’s Vision API for a cool experience!

The gallery presented allows you to scroll and gather inspiration, which can help you find the answer to what you want to eat! It’s done by using the Flickr API, where the application dynamically gets many photos related to delicious food items to display for your eyes!

As for the output of recipes, users are able to input ingredients they already have and our application will handle the rest! Our team implemented two ways for a user to input their ingredients. The first way is sending a photo of their ingredients list or receipt, and from there, it would be passed through the Google Cloud Vision API for processing of text! The second way is a simpler approach, where users could just type in ingredients themselves.

After we efficiently process the received ingredients, we then use the Spoonacular API to receive a list of recipes one could make that best fits the ingredients given!

With this web app, you can enter your list of ingredients available or take a picture of a written note and then we’ll recommend the ideal meal for you!
## How we built it
The project was built in Visual Studio Code with MongoDB, Express, React.js, Node.js, HTML, CSS, and JavaScript. We also built an android version using Android Studio and Java that integrates with the same Node.js server being used on the web app.
## Challenges we ran into
Our team is composed of beginner hackers, and we struggled with some of the most basic things. From trouble with github, and learning what a “pull request” was to being unable to connect our React frontend with our Node.js backend, somehow we were able to push through. After staying up till 3AM on the first day, and then pulling an all nighter on the last day of the hackathon, we worked really hard to get our current results!

In the end, we had a huge blast laughing about the dumbest things past midnight, and we loved the process of fixing 3 hour long bugs. We learned in this hackathon that anything is possible, and that we were able to build a full stack app in just 36 hours!
## Accomplishments that we're proud of
We are proud of developing a website/android app that looks aesthetically pleasing, and with a fully functioning, modularized backend given our skillset. Our team worked really hard together to develop all aspects of our product.
## What we learned
We learned an incredible amount about web development and integrating the frontend and backend. Many of us came into the project with very diverse skills, so we were able to learn a lot from each other.
## What's next for TastyDinner
Stay tuned, stay hungry, cause you are going to get a #TastyDinner.
**TastyLunch coming soon!**
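Once ingredients are extracted (by Vision or typed in), the recipe lookup is a single call to Spoonacular's findByIngredients endpoint. Our server is Node.js, but the idea sketches the same way in Python; the response fields are from the public docs as we remember them:

```python
import requests

API_KEY = "YOUR_SPOONACULAR_KEY"  # placeholder

def recipes_for(ingredients):
    # findByIngredients ranks recipes by how well they use what you already have
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={"ingredients": ",".join(ingredients), "number": 5, "apiKey": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return [(r["title"], r["missedIngredientCount"]) for r in resp.json()]

print(recipes_for(["egg", "rice", "scallion"]))
```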
# DJ Leap
The purpose of the project is to build a DJ system you can control with simple hand movements.
## Instructions
The right hand controls a variety of drum beats; the left hand controls a mix of two songs. The two colors of the sound wave represent the volumes of the two songs currently playing.
### Right Hand
Fingers 1 through 5 each trigger their own beat. Stretching out a finger once enables its beat; stretching it out again turns it off.
### Left Hand
Turning towards the left and right controls the volume of the two songs.
#### Music Source
Users can use any songs through soundcloud.com
partial
## Inspiration
My teammates and I all save posts on Facebook, but never remember to go back to them. We wanted to create a feature that reminds the user to go back to their saved posts.
## What it does
Our app sets a reminder at the time the user selects after they save a post on Facebook for later. They have the option of choosing between being reminded later that day, in a week, or not at all. We analyze the user's most active time on Facebook and set the reminder to display at that time.
## How we built it
We used MongoDB to store posts from the newsfeed in the user's saved-for-later list. We used Python for the backend. We also implemented Bootstrap for the frontend, as well as HTML, CSS, and JavaScript. We used ParallelDots and PyMongo in our app as well.
## Challenges we ran into
We couldn't access the user's newsfeed. However, we found a way around this by taking the user's likes and using the IDs of the pages the user has liked to generate a newsfeed of posts for the user to add to saved-for-later.
## Accomplishments that we're proud of
We're proud of the time analysis feature that determines the best time to display the reminder for the user. We're proud of our app because it's our first hackathon. We're also proud of staying up and giving it our best effort.
## What we learned
We learned that creating a design document, having regular team meetings, and checking up on the schedule regularly are very important to the engineering process.
## What's next for faceboot
We hope to see our feature implemented on Facebook. We hope to attend more hackathons and improve our skills!
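A compressed sketch of the save-and-remind flow with PyMongo: store the saved post with a reminder timestamp computed from the user's most active hour. Collection names, field names, and the fallback hour are assumptions:

```python
from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["faceboot"]

def save_post(user_id, post_id, remind="today"):
    # most_active_hour comes from the user's activity analysis; 20:00 is a fallback
    user = db.users.find_one({"_id": user_id}) or {}
    hour = user.get("most_active_hour", 20)
    base = datetime.now() if remind == "today" else datetime.now() + timedelta(days=7)
    remind_at = base.replace(hour=hour, minute=0, second=0, microsecond=0)
    db.saved.insert_one({"user": user_id, "post": post_id, "remind_at": remind_at})
```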
## Language Barriers & Us
* We both come from immigrant families, and despite knowing the language, we still have trouble conveying certain emotions and feelings that come with our multicultural upbringing; with being a hyphenated American, where we have two identities and aren't exactly able to fit in with one sole identity
* Slowly, it becomes more of a challenge and a struggle for us to continue our relationship, and we might even feel like we want to break away from that “other side” of our cultural identity
* But what about our parents, our families, and other families that also find it difficult to maintain relationships with their own kids amid the struggle that comes with a language barrier?
## What Makes Us Different?
* We thought of a way to bridge that language barrier, especially when it comes to conveying simple or complicated phrases that are commonly used, and to expand it to capture even regional variants and nuances within a specific language (Yue Chinese vs. Xiang Chinese)
* We started off with a Spanish-to-English translation, but plan on refining this into a website with a database behind it to house the phrases/words
* This would increase the diversity that we have and acknowledge cultural differences in the “same” branch of a language (Chilean Spanish vs. Mexican Spanish)
## Target Demographic
* Immigrants assimilating to a new country/culture
* International travellers going to a similar country, but who speak a different dialect of the same native language (British English vs. American English)
## Inspiration
We were frustrated with the normal reminders apps we all have on our phones. They are limited in their usefulness and are not very motivating. With goals being a big part of our lives, we wanted an app that could be so much more alongside our friends. We designed Reminder's Remorse to give people motivation to become consistent goal-setters and create positive habits for themselves. There are four useful components that make Reminder's Remorse better: habit builder, reminder penalties, charity search, and friend exploration.
## What it does
Reminder's Remorse not only keeps track of your reminders; it shows your consistent habits and your all-time total tasks, and penalizes you by sending money to a friend or charity when you don't complete a task on time. This app is designed to help you stay on top of your tasks and help others in the process.
## How we built it
The front end was created using React and the backend with Flask and Python. We utilized **Circle API** to handle money transactions through blockchain between friends and charities. We hosted and deployed our website using Cloudflare Pages and acquired our domain from **GoDaddy**. Our project management and control flow were handled with **GitHub**. Data was stored using **Redis Cloud** for fast access.
## Challenges we ran into
We were first determined to make a project with a different approach to health, and switched midway through upon discussing a new project idea. We found that fitting our specific data into Redis Cloud was difficult, since everything had to be stored as strings. Writing all of the front-end components to make the application flow proved to be difficult given the time restriction as well.
## Accomplishments that we're proud of
We were able to complete our project despite switching our entire idea completely after working on it for a while. Despite it being our first time deploying and hosting a website, we were able to do it quite fast.
## What we learned
* How to utilize Material UI to create vibrant and transitional pages for our website
* Deployment and hosting using Cloudflare Pages
* Acquiring a domain from GoDaddy and linking it to the host server
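Because everything in our Redis Cloud store ended up as strings, per-user reminder lists were JSON-encoded; a minimal sketch of that pattern (host details and the key scheme are placeholders):

```python
import json
import redis

r = redis.Redis(host="your-redis-cloud-host", port=6379,
                password="...", decode_responses=True)

def add_reminder(user_id, reminder):
    # Redis stores strings, so each user's reminder list is one JSON blob
    key = f"reminders:{user_id}"
    items = json.loads(r.get(key) or "[]")
    items.append(reminder)
    r.set(key, json.dumps(items))
```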
losing
## Inspiration
At companies that want to introduce automation into their pipeline, finding the right robot, the cost of a specialized robotics system, and the time it takes to program a specialized robot are all very expensive. We looked for solutions in general-purpose robotics, imagining how these types of systems can be "trained" for certain tasks and "learn" to become a specialized robot.
## What it does
The Simon System consists of Simon, our robot that learns to perform the human's input actions. There are two "play" fields: one for the human to perform actions and the other for Simon to reproduce them. Everything starts with a human action. The Simon System detects human motion and records what happens. Those actions are then interpreted into actions that Simon can take, and Simon performs them in the second play field, making sure to plan efficient paths that take into consideration that it is a robot in the field.
## How we built it
### Hardware
The hardware was really built from the ground up. We CADded the entire model of the two play fields as well as the arches that hold the smartphone cameras here at PennApps. The assembly of the two play fields consists of 100 individual CAD models and took over three hours to fully assemble, making full use of lap joints and mechanical advantage to create a structurally sound system. The LEDs in the enclosure communicate with the offboard field controllers using Unix Domain Sockets that simulate a serial port, allowing color changes that tell the user what state the fields are in.

Simon, the robot, was also constructed completely from scratch. At its core, Simon is an Arduino Nano. It utilizes a dual H-bridge motor driver for controlling its two powered wheels and an IMU for its feedback control system. It uses a MOSFET for controlling the onboard electromagnet for "grabbing" and "releasing" the cubes that it manipulates. On top of all that, the entire motion-planning library for Simon was written from scratch. Simon uses a Bluetooth module for communicating offboard with the path-planning server.
### Software
There are four major software systems in this project. The path-planning system uses a modified BFS algorithm that takes path smoothing into account, with real-time updates from the low-level controls to recalibrate the path plan throughout execution. The computer vision systems intelligently detect when updates are made to the human control field and acquire the normalized grid size of the play field, using QR boundaries to create a virtual enclosure. The CV system also determines the orientation of Simon on the field as it travels around. Servers and clients are also instantiated on every part of the stack for communicating with low latency.
## Challenges we ran into
We lacked acrylic for completing the system, so we had to refactor a lot of our hardware designs to accommodate. Robot rotation calibration and path planning were tricky due to very small inconsistencies in the low-level controllers. We built many things from scratch without using public libraries because they aren't specialized enough. We dealt with smartphone cameras for CV and figured out how to coordinate across phones with similar aspect ratios but different resolutions. Some of the programs we used, such as Unix Domain Sockets, don't run on Windows, so we had to switch to using a Mac as our main system.
## Accomplishments that we're proud of
This thing works, somehow. We wrote modular code this hackathon and kept a solid running GitHub repo that we actually utilized.
## What we learned We got better at CV. First real CV hackathon. ## What's next for The Simon System More robustness.
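The planner's core is a plain breadth-first search over the play-field grid; the smoothing and real-time recalibration described above are layered on top. A self-contained sketch:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (blocked) cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []  # walk the predecessor chain back to the start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable
```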
## Inspiration
1. Affordable pet doors with simple "flap" mechanisms are not secure
2. For potty-trained pets, the door must be manually opened (e.g. the pet rings a bell or scratches the door)
## What it does
The puppy *(or cat, we don't discriminate)* can exit without approval as soon as the sensor detects an object within the threshold distance. When the pet wants back in, the ultrasonic sensor triggers a signal that something is at the door, and the camera takes a picture and sends it to the owner's phone through a web app. The owner may approve or deny the request depending on the photo. If the owner approves the request, the door opens automatically.
## How we built it
Ultrasonic sensors relay the distance from the sensor to an object to the Arduino, which sends this signal to the Raspberry Pi. The Raspberry Pi program handles the stepper motor movement (rotating ~90 degrees CW and CCW) to open and close the door, and relays information to the Flask server to take a picture using the Kinect camera. This photo is displayed on the web application, where approving the request opens the door.
## Challenges we ran into
1. Connecting everything together (Arduino, Raspberry Pi, frontend, backend, Kinect camera) despite each component working well individually
2. Building a cardboard prototype with limited resources = lots of tape & poor wire management
3. Using multiple different streams of I/O and interfacing with each concurrently
## Accomplishments that we're proud of
This was super rewarding, as it was our first hardware hack! The majority of our challenges lay in the camera component, as we were unfamiliar with the Kinect, but we came up with a hack-y solution and nothing had to be hardcoded.
## What we learned
Hardware projects require a lot of troubleshooting, because the sensors will sometimes interfere with each other or the signals are not processed properly when there is too much noise. Additionally, with multiple different pieces of hardware, we learned how to connect all the subsystems together and interact with the software components.
## What's next for PetAlert
1. Better & more consistent photo quality
2. Improve the frontend notification system (consider push notifications)
3. Customize 3D prints to secure components
4. Use thermal instead of ultrasound
5. Add sound detection
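A sketch of the entry-request trigger. For brevity it reads an HC-SR04-style sensor directly on the Pi and pings the Flask server, whereas our build puts the sensor on the Arduino; pins, threshold, and the endpoint path are assumptions:

```python
import time
import requests
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # hypothetical BCM pins
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # 10 microsecond trigger pulse, then time how long the echo stays high
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound in cm/s, halved for round trip

while True:
    if distance_cm() < 30:  # something is waiting at the door
        requests.post("http://localhost:5000/request-entry")  # Flask alerts the owner
        time.sleep(10)  # debounce so the owner isn't spammed
    time.sleep(0.5)
```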
## Inspiration
We wanted to build something that would help those with limited mobility or eyesight. We wanted to make something that was as simple and intuitive as possible, while performing complex tasks. To this end, we designed a system to allow users to locate items around their home they might not normally be able to see.
## What it does
Our fleet of robots allows the user to speak the name of the object they are looking for, and will then set off autonomously to track down the item. The robots report back to the user once they have found the item, while the user can watch every step along the way with a live video stream. The user can also take manual control of the robots at any time if they so wish.
## How we built it
The robots were built using laser-cut plates, a Raspberry Pi, DC motors, and a dual-voltage power system. The software used a TCP/IP library for streaming video called FireEye to send video and data from the Raspberry Pi to our Node.js server. This server performed image processing and natural language processing to determine what the user was trying to find, and identified it when the camera picked the object up. The front end was built using React.js, with Socket.io acting as the method of communication between server and UI.
## Challenges we ran into
We ran into many challenges. Many. Our first problems lay with trying to get a consistent video stream from the robot to our server, and then things only grew more difficult. We faced challenges trying to communicate data from our server to the robot, and from our server to the front-end UI. We also have very little experience designing user interfaces, and ran into many implementation problems. Additionally, this was the first project we have coded with Node.js, which we learned was substantially different from Python. (Looking back, Python probably should have been the way to go...)
## Accomplishments that we're proud of
We are particularly proud of the overall tech stack we ended up using. There are many technologies that we had to get working, and then get to communicate, before our system would become functional. We learned about TCP and web sockets, as well as coding for hardware constraints, and how to perform cloud image processing.
## What we learned
We learned a substantial amount overall, mostly as it related to socket programming and how to have multiple components share stateful data. We also learned how to deal with the constraints of network speed and Raspberry Pi processing power. As such, we learned about multi-threading programs to make them run more efficiently.
## What's next for MLE
We would like to expand our robots to include a robot arm, so that they would be able to retrieve and interact with the objects they are searching for. We would also like to make the robots bigger so that they can more effectively navigate. We also have plans to increase the overall speed of the system and try to eliminate network and streaming latency.
winning
## Inspiration
Domain name: voicebase.tech

We were inspired by the fact that there are over 466 million people in the world with disabling hearing loss, and globally the number of people of all ages with visual impairment is estimated to be over 285 million (WHO, 2018 & 2010, respectively). Yet, despite this pandemic creating numerous additional barriers for people with disabilities, there have not been many accessible, convenient, or affordable solutions. Most people who are deaf or hearing-impaired depend on the ability to read lips to converse with others, and a facial covering that impedes communication can increase frustration and affect their mental health. When the volume of speech is reduced, the listener must concentrate harder to understand and follow the communication. Couple this reduction in volume with the inability to lip-read, and it can become very frustrating for hard-of-hearing and deaf individuals, as well as the general population.
## What it does
Voicebase helps people by transcribing speech into text on their screen in real time.
## How we built it
HTML, CSS, JavaScript, the Twilio API, and the Google Cloud Speech-to-Text API.
## Challenges we ran into
Team members with low bandwidth were unable to communicate at times, and time zone differences made coordination harder.
## Accomplishments that we're proud of
Learning how to use Twilio APIs for the first time, along with Google Cloud, and also reading documentation for other APIs we were considering, like AssemblyAI.
## What we learned
APIs, Google Cloud, HTML, CSS, JavaScript. Teamwork, communication.
## What's next for Voicebase
Continuing to work on the integration, front end, and database aspects.
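For the transcription half, a batch call with Google's Python client looks like this; real-time captioning would use the streaming variant, and the encoding, sample rate, and language here are assumptions:

```python
from google.cloud import speech

client = speech.SpeechClient()

def transcribe(audio_bytes: bytes) -> str:
    audio = speech.RecognitionAudio(content=audio_bytes)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)
```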
## Inspiration
We were trying to code something in C, so we were looking online for tutorials and guides for specific programs. We came across cprogramming.com and realized how old and how terrible the website is. So we decided it'd be fun to learn HTML instead and create a template website that could replace cprogramming.com.
## What it does
It is a template HTML website for the home page of cprogramming.com.
## How we built it
We had to use a lot of HTML and CSS tutorials and lots of sweat and brute force. Both of us had never used HTML or CSS before, so it was a fun challenge to do something new that could be functional.
## Challenges we ran into
Easily the major challenge was working with three languages when neither of us had any previous knowledge of HTML. Any basic functions, syntax, and methods required research. Figuring out how to do everything, from running the website to making sure it could be seen on mobile, was a challenge.
## Accomplishments that I'm proud of
I'm proud of making a template of a website that looks awesome, if I say so myself.
## What we learned
We learned about flex boxes and the general design of a website, which we didn't realize is much harder than we expected.
## What's next for Nocux
Wonderful sleep and a beautiful shower. Maybe a foot mask, if I say so myself.
losing
We have the best app, the best. A tremendous app. People come from all over the world to tell us how great our app is. Believe us, we know apps. With Trump Speech Simulator, write a tweet in Donald Trump's voice and our app will magically stitch a video of Trump speaking the words you wrote. Poof! President Trump often holds long rallies with his followers, where he makes speeches that are then uploaded on Youtube and feature detailed subtitles. We realized that we could parse these subtitles to isolate individual words. We used ffmpeg to slice rally videos and then intelligently stitch them back together.
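The slicing and stitching reduce to two ffmpeg invocations, sketched here with subprocess. Word timestamps come from the parsed subtitles; `-c copy` keeps things fast but may need re-encoding for frame-accurate cuts:

```python
import subprocess

def cut(src, start, duration, out):
    # slice one word out of a rally video at a subtitle-derived timestamp
    subprocess.run(["ffmpeg", "-y", "-ss", str(start), "-i", src,
                    "-t", str(duration), "-c", "copy", out], check=True)

def stitch(clips, out):
    # concat demuxer: stitch the word clips back into one video
    with open("list.txt", "w") as f:
        f.writelines(f"file '{c}'\n" for c in clips)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "list.txt", "-c", "copy", out], check=True)
```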
## What it does
Reworkd AI is a Chrome extension that uses AI to generate customizable responses for various forms of digital communication, like emails, tweets, and comments. It helps users compose well-written messages with simple prompts, generate new responses easily, and customize responses with options like emojis, response length, and level of detail. Try the demo to see how it can improve your digital communication and save time.
## How we built it
* T3 stack
* OpenAI
## Challenges we ran into
Integrating the front-end stack alongside the extension framework.
## Accomplishments that we're proud of
We're proud of our easy-to-use interface and also the polished feel. We also explored a lot of new tools, and although we struggled, we got them to cooperate in the end.
## Inspiration
We wanted to do something fun this hackathon! First, we thought of making a bot that imitates Trump’s speech patterns, generating sentences that sound like things Trump would say. Then we thought, "Let’s expand on that! Make bots that imitate all major presidents’ speech patterns, plus a few surprises!"
## What it does
Two bots that are trained to imitate a famous person's speech patterns are randomly paired up and have a conversation about a random topic. Users then spectate for 60 seconds, and vote on who they think the bots are imitating.
## How we built it
1. Write web scraper to scrape online sources for corpuses of text (speeches, quotes)
2. Generate Markov Chains for each person/bot
3. Generate 100,000 sentences based on the Markov Chains
4. Reverse index sentences based on topic
5. Create web app
## Challenges we ran into
How to efficiently reverse index 100,000 generated sentences
## What we used
* Markov Chain
* Web-scraped corpus
  + Speeches, quotes, scripts
* IBM Bluemix instance
* Flask, Django
## What's next for Robot Flame Wars
* Launch on a webserver to the general public to use
* Feed user response back into the model--e.g. if users consistently misidentify a certain bot's real life analog, automatically adjust sentence generation as appropriate.
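Steps 2 and 3 above are a textbook Markov chain over each person's corpus; a small sketch (the corpus file name is hypothetical, and order and length are tunable):

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    # map each `order`-word state to the words observed to follow it
    chain = defaultdict(list)
    words = corpus.split()
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=25):
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

trump = build_chain(open("trump_speeches.txt").read())  # hypothetical corpus file
print(generate(trump))
```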
partial
Welcome to college life! It’s a fresh start, with a bit of that first-day buzz and the thrill of what’s to come. No old friends around just yet, and it might feel like you're a long way from home. But you're about to dive into a sea of new experiences, with your goals leading the way. Tackling college can be a wild ride. Excited for the lectures but nervous about the social scene? That's completely normal, and that's why we created LectureLink – your sidekick for navigating campus life. LectureLink is about connecting – not just to classmates, but to friends who’ll ride out this adventure with you. It’s about making your college experience a group journey, not a solo mission. We’re all about the connections that come from tackling the ups and downs together. It’s more than study sessions; it’s about friendships that'll stick with you, long after the final exams. When you look back at your time at university, you’ll see a network of friends who’ve been there through it all. LectureLink is here to help you start that journey, fostering bonds and memories that last. Here’s to making every moment of your college days count. Welcome aboard! 🌟📚
## Inspiration We looked at the RBC event and wanted to make a banking website with our own SQL database application. Our concept was to make a simple utility website for people new to Canada: an all-in-one utility for their needs. ## What it does Our all-in-one utility web application is a website where users can link multiple banks and accounts and check their transactions in one place. ## How we built it We first designed the website in Figma and coded basic elements in HTML. Then we imported Tailwind CSS to help us recreate the design we made in Figma. Our back-end group member then created a Java server application that runs all the transactions and user management, which we combined with our web UI. We coded the HTML and CSS from scratch without using APIs or external website builders in order to learn and experience how to build an HTML/CSS website. We tried to adapt and learn from Google's Material Design using Figma. ## Challenges we ran into We had some experience with back-end development with databases, password management, and user information. However, it was our first ever experience designing and creating a website with HTML and CSS. It took us a lot of work to get the UI to work with the back-end systems. We also wanted to implement a customer support application based on ChatGPT; however, due to time constraints, we weren't able to finish it. ## Accomplishments that we're proud of We're very proud of the design of the website. Although the final product might not have fully matched the initial design due to time, we believe our interface is friendly and modern for a concept website. We're also very proud of the custom database and back-end application that we made. We did not use any APIs for the back-end systems dealing with transactions and user management. ## What we learned Programming everything in HTML and CSS in an interactive web application was a big challenge for us. There were a lot of things that we wanted to implement but couldn't due to the limitations of static webpages and time. ## What's next? We could implement the unfinished ChatGPT support feature on the website. We also had a demo of a forum feature that we could later add a UI/UX to so it becomes fully functional.
## Inspiration Lectures around the world last 100.68 minutes on average. That number goes all the way up to 216.86 minutes for art students. As students in engineering, we spend roughly 480 minutes a day listening to lectures. Add an additional 480 minutes for homework (we're told to study an hour for every hour in a lecture), 120 minutes for personal breaks, 45 minutes for hygiene, not to mention tutorials, office hours, et cetera. Thinking about this reminded us of the triangle of sleep, grades and a social life-- and how you can only pick two. We felt that this was unfair and that there had to be a way around it. Most people approach this by attending lectures at home. But often, they just play lectures at 2x speed, or skip sections altogether. This isn't an efficient approach to studying in the slightest. ## What it does Our web-based application takes audio files-- whether from lectures, interviews or your favourite podcast-- and takes out all the silent bits: the parts you don't care about. That is, the intermediate walking, writing, thinking, pausing or any other waiting that happens. By analyzing the waveforms, we can algorithmically select and remove parts of the audio that are quieter than the rest. This is done by our Python script running behind our UI. ## How I built it We used PHP/HTML/CSS with Bootstrap to generate the frontend, hosted on a DigitalOcean LAMP droplet with a Namecheap domain. The droplet runs an Ubuntu web server, which hosts our Python file that gets run on the shell. ## Challenges I ran into For everyone on the team, it was our first time approaching all of our tasks. Going head-on into something we didn't know, in a timed and stressful situation such as a hackathon, was really challenging, and we were very glad that we persevered through it. ## Accomplishments that I'm proud of Creating a final product from scratch, without the use of templates or too much guidance from tutorials, is pretty rewarding. Often in the web development process, templates and guides are used to help someone learn. However, we developed all of the scripting and the UI ourselves as a team. We even went so far as to design the icons and artwork ourselves. ## What I learned We learned a lot about the importance of working collaboratively to create a full-stack project. Each individual on the team was assigned a different compartment of the project-- from web deployment, to scripting, to graphic design and user interface. Each role was vastly different from the next, and it took a whole team to pull this together. We all gained a greater understanding of the work that goes on in large tech companies. ## What's next for lectr.me Ideally, we'd like to develop the idea to include many more features-- perhaps introducing video, and other options. This idea was really a starting point and there's so much potential for it. ## Examples <https://drive.google.com/drive/folders/1eUm0j95Im7Uh5GG4HwLQXreF0Lzu1TNi?usp=sharing>
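As a rough illustration of the silence-trimming idea, here is a sketch using pydub's silence utilities; pydub is a stand-in for the waveform analysis the actual script performs, and the thresholds are assumptions you would tune per recording.

```python
from pydub import AudioSegment
from pydub.silence import split_on_silence

lecture = AudioSegment.from_file("lecture.mp3")

chunks = split_on_silence(
    lecture,
    min_silence_len=1500,              # treat pauses over 1.5 s as dead air
    silence_thresh=lecture.dBFS - 16,  # "quieter than the rest" of the audio
    keep_silence=200,                  # keep 200 ms so words aren't clipped
)

# concatenate the speech-only chunks back into one condensed file
condensed = sum(chunks, AudioSegment.empty())
condensed.export("lecture_condensed.mp3", format="mp3")
```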
losing
## Inspiration In a world where technology is intricately embedded into our lives, security is an exciting area where internet devices can unlock the efficiency and potential of the Internet of Things. ## What it does Sesame is a smart lock that uses facial recognition to grant access. A picture is taken at the door and a call is made to a cloud service to authenticate the user. Once the user has been authenticated, the lock opens and the user is free to enter. ## How we built it We used a variety of technologies to build this project. First, a Raspberry Pi is connected to the internet and has a servo motor, a button, and a camera attached to it. The Pi runs a Python client which makes calls to a Node.js app running on IBM Bluemix. The app handles requests to train and test image classifiers using the Watson Visual Recognition service. We trained a classifier with 20 pictures of each of us and tested it on unseen data by taking a new picture through our system. To control the lock, we connected a servo to the Raspberry Pi and wrote C code using the wiringPi library and PWM to drive it. The lock only opens if we reach an accuracy of 70% or above; we determined this number after several tests. The servo moves the lock through a 3D-printed adapter that connects the servo to the lock. ## Challenges we ran into We wanted to build our whole project in Python, using a library for the GPIO interface of the Pi and OpenCV for the facial recognition. However, we were missing some OpenCV packages and did not have time to rebuild the library. The GPIO library for Python was also not working properly for controlling the servo motor. After encountering these issues, we shifted the direction of our project to focus on building a Node.js app to handle authentication and the Visual Recognition service to handle the classification of users. ## Accomplishments that we're proud of What we are all proud of is that in just one weekend, we learned most of the skills required to finish our project. Ming learned 3D modeling and printing, and how to program the GPIO interface on the Pi. Eddie learned about internet architecture and the process of creating a web app, from the client to the server. Atl learned how to use IBM technologies and to adapt to the unforeseen circumstances of the hackathon. ## What's next for Sesame The prototype we built could be improved with additional features that would make it more convenient to use. A mobile application that sends images directly from an individual's phone to Bluemix would let the user train the visual recognition classifier from anywhere, at any time. Additionally, we plan to replace the button with a proximity sensor so that the camera is efficient and only activates when an individual is present in front of the door.
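A sketch of the Pi-side decision logic: post a frame to the Bluemix app, read back a confidence score, and only drive the servo at 70% or above. The endpoint URL and response shape are hypothetical, and the compiled servo program stands in for the wiringPi C code described above.

```python
import subprocess

import requests

THRESHOLD = 0.70  # confidence required to open the lock, found by testing

def try_unlock(image_path):
    """Classify a captured frame and open the lock if the score clears 70%."""
    # send the frame to the Node.js app on Bluemix (hypothetical endpoint)
    with open(image_path, "rb") as f:
        resp = requests.post(
            "http://sesame-app.mybluemix.net/classify",
            files={"image": f},
            timeout=15,
        )
    resp.raise_for_status()
    score = resp.json().get("score", 0.0)  # assumed response field
    if score >= THRESHOLD:
        # the servo itself is driven by the compiled wiringPi C program
        subprocess.run(["./unlock_servo"], check=True)
        return True
    return False
```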
## Inspiration This project was inspired by a personal anecdote. Two of the teammates, A and B, were hanging out in friend C’s dorm room. When it was time to leave, teammate B needed to grab his bag from teammate A’s dorm room. However, to their dismay, teammate A had accidentally left her keycard in friend C’s dorm room, and C had left for a party. This caused A and B to wait for hours for C to return. This event planted a seed for this project in the back of teammates A and B’s minds, hoping to bring convenience to students’ lives and eliminate the annoyance of forgetting their keycards and being unable to enter their dorm rooms. ## What it does This device automates the dorm room lock by allowing users to control it from a mobile application. The door lock’s movement is driven by a 3D-printed gear on a bar, and the gear is attached to a motor controlled by an Arduino board. There are two simple steps to enter the dorm. First, a phone is paired with the device through Bluetooth by clicking both the “Pair with Device” button in the app and the button on the Bluetooth Arduino board. This only needs to be done the first time the user uses the device. Once a connection is established between the Bluetooth board and the mobile app, the user can simply click the “Unlock door” button in the app, which triggers communication between the Bluetooth board and the motor board, causing the gear to rotate and the rod to pull down the door handle, unlocking the door. ## How we built it We used Android Studio to develop the mobile application in Java. The gear and bar were designed in Fusion 360 and 3D-printed. Two separate Arduino boards were attached to the ESP32-S Bluetooth module and to the motor driving the gear, respectively, and both boards run Arduino programs written in C++. PlatformIO was used to automate the compilation and linking of code between hardware and software components. ## Challenges we ran into Throughout the build process, we encountered countless challenges, a few of the greatest being understanding how the two Arduino boards communicate, figuring out how to deploy the ESP32-S module when our HC-05 turned out to be dysfunctional, and maintaining the correct circuit structure for our motor and LCD. ## Accomplishments that we're proud of Many of our greatest accomplishments stemmed from overcoming the challenges that we faced. For example, the wiring of the motor circuit was a major concern in the initial setup: when we followed online schematics for wiring the NEMA 17 motor, it did not perform full rotations and thus could not be integrated with the other hardware components. This motor is a vital component of our mechanism, and with further research and diligence, we discovered that the issue came down to our core understanding of how the circuit performs and to obtaining the drivers needed for our tasks. This was one of our most prominent hardware accomplishments, as the motor functions as the backbone of our mechanism. A lengthier software achievement was making the ESP32-S microcontroller functional. ## What we learned For several members of our group, this marked their initial exposure to GitHub within a collaborative environment.
Given that becoming acquainted with this platform is crucial in many professional settings, this served as an immensely beneficial experience for our novice hackers. Additionally, for the entire team, this was the first experience operating with Bluetooth technology. This presented a massive learning curve, challenging us to delve into the intricacies of Bluetooth, understand its protocols, and navigate the complexities of integrating it into our project. Despite the initial hurdles, the process of overcoming this learning curve fostered a deeper understanding of wireless communication and added a valuable skill set to our collective expertise. Most importantly, however, we learned that with hard work and perseverance, even the most daunting challenges can be overcome. Our journey with GitHub collaboration and Bluetooth integration served as a testament to the power of persistence and the rewards of pushing beyond our comfort zones. Through this experience, we gained not only technical skills but also the confidence to tackle future projects with resilience and determination. ## What's next for Locked In Some future steps for Locked In would hopefully be to create a more robust authentication system through Firebase. This would allow users to sign in via other account credentials, such as Email, Facebook, and Google, and permit recognized accounts to be stored and managed by a centralized host. This objective would not only enhance security but also streamline user management, ensuring a seamless and user-friendly experience across various platforms and authentication methods. Another objective of Locked In is to enhance the speed of Bluetooth connections, enabling users to fully leverage the convenience of not needing a physical key or card to access their room. This enhancement would offer users a faster and smoother experience, simplifying the process of unlocking doors and ensuring swift entry. One feature that we did not finish implementing was the gyroscope, which automatically detects when the door is open and
## Inspiration In 2010, when Haiti was rocked by an earthquake that killed over 150,000 people, aid workers manned SMS help lines where victims could reach out for help. Even with the international humanitarian effort, there was not enough manpower to effectively handle the volume of communication. We set out to fix that. ## What it does EmergAlert takes the place of a humanitarian volunteer at the phone lines, automating basic contact. It allows victims to request help, tell their location, place calls and messages to other people, and inform aid workers about their situation. ## How we built it We used Mix.NLU to create a Natural Language Understanding model that categorizes and interprets text messages, paired with the Smooch API to handle SMS and Slack contact. We use FHIR to search for an individual's medical history to give more accurate advice. ## Challenges we ran into Mentoring first time hackers was both a challenge and a joy. ## Accomplishments that we're proud of Coming to Canada. ## What we learned Project management is integral to a good hacking experience, as is realistic goal-setting. ## What's next for EmergAlert Bringing more depth to the NLU responses and available actions would improve the app's helpfulness in disaster situations, and is a good next step for our group.
losing
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero setup or management necessary, as the program will completely ignore background noise and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was tricky to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast-food chains.
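For a feel of the backend flow, here is a minimal Flask sketch of an ordering endpoint; the route name, transcription model, and response shape are assumptions, since the writeup doesn't specify which AI transcription service the team used.

```python
import tempfile

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@app.post("/order")
def order():
    # the frontend posts the recording once it detects the user stopped talking
    audio = request.files["audio"]
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        audio.save(tmp.name)
        with open(tmp.name, "rb") as f:
            transcript = client.audio.transcriptions.create(
                model="whisper-1", file=f
            )
    # a follow-up LLM call (omitted here) would extract items, sizes, flavors
    return jsonify({"transcript": transcript.text})

if __name__ == "__main__":
    app.run(port=5000)
```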
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows users to search for a “hub” using the Google Maps API and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and the Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working across 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn a lot from each other because of our various experiences. From a technical perspective, we improved our Java and Android development fluency. From a team perspective, we improved our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon, and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing the user experience through profiles and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
winning
## Inspiration **Read something, do something.** We constantly encounter articles about social and political problems affecting communities all over the world – mass incarceration, the climate emergency, attacks on women's reproductive rights, and countless others. Many people are concerned or outraged by reading about these problems, but don't know how to directly take action to reduce harm and fight for systemic change. **We want to connect users to events, organizations, and communities so they may take action on the issues they care about, based on the articles they are viewing** ## What it does The Act Now Chrome extension analyzes articles the user is reading. If the article is relevant to a social or political issue or cause, it will display a banner linking the user to an opportunity to directly take action or connect with an organization working to address the issue. For example, someone reading an article about the climate crisis might be prompted with a link to information about the Sunrise Movement's efforts to fight for political action to address the emergency. Someone reading about laws restricting women's reproductive rights might be linked to opportunities to volunteer for Planned Parenthood. ## How we built it We built the Chrome extension by using a background.js and content.js file to dynamically render a ReactJS app onto any webpage the API identified to contain topics of interest. We built the REST API back end in Python using Django and Django REST Framework. Our API is hosted on Heroku, the chrome app is published in "developer mode" on the chrome app store and consumes this API. We used bitbucket to collaborate with one another and held meetings every 2 - 3 hours to reconvene and engage in discourse about progress or challenges we encountered to keep our team productive. ## Challenges we ran into Our initial attempts to use sophisticated NLP methods to measure the relevance of an article to a given organization or opportunity for action were not very successful. A simpler method based on keywords turned out to be much more accurate. Passing messages to the back-end REST API from the Chrome extension was somewhat tedious as well, especially because the API had to be consumed before the react app was initialized. This resulted in the use of chrome's messaging system and numerous javascript promises. ## Accomplishments that we're proud of In just one weekend, **we've prototyped a versatile platform that could help motivate and connect thousands of people to take action toward positive social change**. We hope that by connecting people to relevant communities and organizations, based off their viewing of various social topics, that the anxiety, outrage or even mere preoccupation cultivated by such readings may manifest into productive action and encourage people to be better allies and advocates for communities experiencing harm and oppression. ## What we learned Although some of us had basic experience with Django and building simple Chrome extensions, this project provided new challenges and technologies to learn for all of us. Integrating the Django backend with Heroku and the ReactJS frontend was challenging, along with writing a versatile web scraper to extract article content from any site. ## What's next for Act Now We plan to create a web interface where organizations and communities can post events, meetups, and actions to our database so that they may be suggested to Act Now users. 
This update will not only make our application more dynamic but will further stimulate connection by introducing a completely new group of people to the application: the event hosts. It would also include spatial and temporal information, making it easier for users to connect with local organizations and communities.
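The keyword-based relevance method that outperformed the fancier NLP approaches can be sketched in a few lines; the keyword sets, hit threshold, and links below are illustrative stand-ins rather than the extension's actual configuration.

```python
import re

# hypothetical mapping from causes to keyword sets and action links
ACTIONS = {
    "climate": {
        "keywords": {"climate", "emissions", "warming", "fossil", "carbon"},
        "link": "https://www.sunrisemovement.org/",
    },
    "reproductive_rights": {
        "keywords": {"abortion", "reproductive", "roe", "contraception"},
        "link": "https://www.plannedparenthood.org/get-involved",
    },
}

def best_action(article_text, min_hits=3):
    """Return the action link whose keywords best match the article, if any."""
    words = re.findall(r"[a-z']+", article_text.lower())
    hits = {
        cause: sum(word in spec["keywords"] for word in words)
        for cause, spec in ACTIONS.items()
    }
    cause, count = max(hits.items(), key=lambda kv: kv[1])
    return ACTIONS[cause]["link"] if count >= min_hits else None
```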
## Inspiration False news. False news. False news everywhere. Before you read your news article in depth, let us give you a brief overview of what you'll be ingesting. ## What it does Our Google Chrome extension analyzes the news article you're about to read and gives you a heads-up on the article's sentiment (what emotion the article is trying to convey), the top three keywords in the article, and the categories the article's topic belongs to. Our extension also allows you to fact-check any statement by simply highlighting it, right-clicking, and selecting Fact check this with TruthBeTold. ## How we built it Our Chrome extension pulls the URL of the webpage you're browsing and sends it to our Python server, hosted on Google App Engine on Google Cloud Platform. Our server then parses the page and extracts the content of the news article using the Newspaper3k library. The scraped article is sent to Google's Natural Language API client, which assesses the article for sentiment, categories, and keywords. This data is then returned to the extension and displayed in a friendly manner. Fact-checking follows a similar path: the extension sends the highlighted text to our server, which checks it against Google's Fact Check Explorer API. The consensus is then returned and shown as an alert. ## Challenges we ran into * Understanding how to interact with Google's APIs. * Working with Python Flask and creating new endpoints in Flask. * Understanding how Google Chrome extensions are built. ## Accomplishments that I'm proud of * It works!
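The server-side analysis roughly corresponds to a few library calls; here is a sketch using Newspaper3k and the google-cloud-language client, with field names taken from the public library docs (the server's actual code may differ).

```python
from google.cloud import language_v1
from newspaper import Article

def analyze(url):
    """Fetch an article and return its sentiment, keywords, and categories."""
    # download and parse the article body from the page the user is reading
    article = Article(url)
    article.download()
    article.parse()

    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(
        content=article.text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(request={"document": doc}).document_sentiment
    entities = client.analyze_entities(request={"document": doc}).entities
    categories = client.classify_text(request={"document": doc}).categories

    return {
        "sentiment": sentiment.score,  # -1 (negative) .. +1 (positive)
        "keywords": [
            e.name for e in sorted(entities, key=lambda e: -e.salience)[:3]
        ],
        "categories": [c.name for c in categories],
    }
```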
## Inspiration The COVID-19 pandemic resulted in many normal interactions being converted into digital ones; classes shifted to Zoom, extended families held FaceTime holidays instead of congregating, and people texted one another instead of meeting up in person. However, activism of all kinds did not stop for the pandemic. Indeed, discussions about topics such as Black Lives Matter, political activism, and climate change kept going. We wanted to provide a platform where these conversations could all happen in one place instead of being spread out across the internet. We want to make activism more accessible in a digital age where the sheer volume of disparate links and posts can be overwhelming. ## What it does Traction allows users to join and create public and private communities across many different categories of causes they might be passionate about, including social justice, environmental activism, political activism, education policy, healthcare, and so much more. Within these categories are all sorts of communities that focus more specifically on topics that people care about. Once a user joins a community, they can participate in various threads. These threads might facilitate general discussion, provide a space to hold a digital event, or give people a place to brainstorm and rally together. ## How we built it / our technology stack We developed our backend using Django, using WebSockets to facilitate communication between the server and the browser. We developed our front end using React in conjunction with Material-UI, using some Material components in order to construct our own. ## Challenges we ran into We had some difficulties with WebSockets initially, plus it was a challenge to figure out how to connect information from the backend with the front end. ## Accomplishments that we're proud of We’re extremely proud of building a full stack web application in a bit under a week. Additionally, we all learned more about React.js, Material-UI, and Django. At least one of our team members had never used WebSockets before either. ## What we learned We definitely learned more about the various frameworks that we utilized (React.js, Django, etc.). We learned a lot about collaborating in a virtual environment. In addition to our ‘hard’ coding skills, we utilized ‘soft’ skills regarding communication. We also reflected on examples of activism we’ve seen in our everyday lives in order to try to make the product more useful. ## What's next for traction Eventually we want to develop Traction into a mobile app to make it even more accessible- after all, more people have mobile phones than have access to a computer. This could allow people more flexibility in where and how they use Traction. They would not need to be seated at their computer in order to stay in the loop and stay active in discussions.
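Pairing Django with WebSockets is usually done through Django Channels; here is a hypothetical consumer sketch for a community thread. The writeup doesn't name Channels, so this is one plausible wiring rather than necessarily the team's.

```python
import json

from channels.generic.websocket import AsyncWebsocketConsumer

class ThreadConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # one broadcast group per discussion thread, taken from the URL route
        thread_id = self.scope["url_route"]["kwargs"]["thread_id"]
        self.group = f"thread_{thread_id}"
        await self.channel_layer.group_add(self.group, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group, self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        # fan the new message out to everyone viewing this thread
        await self.channel_layer.group_send(
            self.group,
            {"type": "thread.message", "message": json.loads(text_data)},
        )

    async def thread_message(self, event):
        await self.send(text_data=json.dumps(event["message"]))
```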
partial
### We are Ally. Our mission is to save lives by automating drone data management. Continue reading to see the ethical analysis of our project! ## **Inspiration** *Tens of thousands of people are dying every year in natural disasters due to poor infrastructure.* Months after the devastating 2008 earthquake in Sichuan, China, Mindy visited the region and witnessed the horrific consequences of lacking up-to-code infrastructure. She spoke with locals, meeting a grandmother carrying a baby whose father was lost in the rubble when she was only 10 days old. Fast forward to today, the death tolls are now in the tens of thousands due to the Turkey-Syria earthquakes, with at least 20,000 victims still lying beneath the rubble. More than 6,000 buildings collapsed in the region, but a lot of the damage could have been prevented with better infrastructure monitoring. These earthquakes were extremely deadly because these cities did not enforce building codes or treat safe housing as a human right (Vox News). Construction companies have been cutting corners and ignoring building codes for decades, and the government often just lets it slide. Through researching current processes rescue teams use to help on-site, we started piecing together how impactful drones can be in this space. Specifically, we decided to create automated workflows because we’ve worked with drone data before and know that it’s a lot to take on as a first-time user. ## **What it does** Ally makes drone data management simple, giving non-technical people easy access to automated workflows for their drone data. Commercial uses of drone technology have been most prevalent in fields such as agriculture, forestry management, urban planning, and disaster relief, where most users are not technically familiar with image processing. Our end-to-end platform allows users to drag-and-drop tasks (i.e. for our current focus of disaster management: locate people stranded under rubble; map 3D reconstructions of high-priority areas; annotate imagery and send to relevant teams) into a custom workflow. For example, a search-and-rescue official’s workflow could be [search(”people in rubble”), map/locate() results]. Our goal is to eliminate overhead and allow any users to efficiently comb through drone image data to make informed decisions. During the hackathon, we focused on connecting drone images to semantic search, using OpenAI’s Clip neural network that brings together text and images. Users can upload drone images, search for keywords to resurface important timeframes, and sort images based on how close their query is. ## **What we learned** * **Drones**: Though we’ve worked with drones before, Treehacks was the first time we had access to such high-tech drones from Skydio and Parrot. It was super inspiring to learn about how fast the industry is advancing and see how passionate their teams were about their technology. * **Disaster Management**: We learned so much about the problems surrounding disaster management and narrowed down one of the major pain points– bad infrastructure. It was disappointing to hear that many of the lives lost in these natural disasters were very preventable if all parties acted ethically from the start. * **OpenAI CLIP Model**: Most models like ChatGPT use only one form of input: text. OpenAI’s CLIP model is one of the first to effectively connect images and text by embedding them in the same vector space. 
This allows us to basically map out not only how close images are to each other in meaning, but also how close they are to words and phrases. Treehacks was the first time we’ve worked with an existing ML model with pre-trained weights—setting up was definitely a challenge. * **Image search**: We learned that image search is hard to do. We considered alternative image search approaches, like using keywords, or other current AI methods. However, the CLIP model seemed to be the highest performing one. At the moment, CLIP performs well out-of-the-box without fine tuning. In the future, we see ways to improve the search by training CLIP further. ## **Future** There are lots of interesting possibilities for the future. After we create the automated workflow platform, we can expand to different customer bases. Farmers can create workflows to monitor their land (identify areas with pests, monitor soil moisture, track movement and health of livestock) and alert relevant teams. City planners can create workflows to monitor urban growth, detect poor infrastructure, and report any damage. Construction workers can create workflows to send their drones on routine checkups, inspect infrastructure for potential problems, and send alerts to intervening teams. ### Business Model The current plan is a freemium approach, where we give away free 1GB accounts and charge for # of users, additional storage, number of integrations, and/or computer power. We plan on first giving our platform for free out to disaster managers so they can setup their workflows and start using it for prevention methods. This way they are setup for success in the case that disaster strikes. The global disaster management market is expected to grow to 5.2 billion by 2027. The global agricultural drone market is expected to reach USD 1.7 billion by 2025, and the global market for urban planning is expected to reach USD 7.6 billion by 2027. In addition, with the rise of extreme climate events, the need for disaster relief software and tools will only grow. Automating workflows for the current existing manual processes in these fields will save immense time that is currently being used to do mundane, repetitive, logistical tasks, giving back time to spend on growth and innovation. Though money is important to run a business, we’re currently focused on saving critical time from disaster managers who will in turn be able to save more lives. ## **Ethics** We strongly believe that governments should treat safe housing as a human right. After doing a lot of research into the space, we learned that construction companies have been cutting corners and ignoring building codes for decades, and the government often just lets it slide. This was one of the biggest reasons for the high death count after the recent earthquake in Turkey and Syria. It’s ethically wrong for us as a society to allow people living in bad infrastructure to constantly be in worry about their livelihood. Or worse, not even realize how dangerous their living situation is. Thus, for this hackathon, we focused on safety because it was the most high-impact in saving lives. The first step we took was building out the drone image search, giving disaster managers a quick way to sort through their large datasets. Not only will this be extremely useful post-disaster to filter out data, but also to help identify bad infrastructure areas and notify those in the area before it’s too late. Ally makes drone data management simple and easily accessible for non-technical users. 
With such a high-impact space, there exist many ethical issues to further explore: 1. **Privacy** is an especially important consideration when using drones for data collection. The data collected by drones can be used to monitor people without their knowledge or consent, or to gain access to sensitive information. To ensure privacy, drones should only be used with explicit consent and with appropriate oversight or regulation. Additionally, all data collected should be treated with the utmost respect for the rights and privacy of the people involved. 2. **Accuracy** is also a key ethical concern when using drones for data collection. If the data and images collected are not accurate, then decisions made based on this data could be wrong or misguided. To ensure accuracy, the designers of the drones must ensure that the automated workflows and algorithms used are reliable and effective. 3. **Safety and security** are also important ethical considerations when using drones for data collection. If the data is used for disaster relief, for example, then there is the potential for drones to crash or malfunction, endangering the people in the area. Additionally, drone data is vulnerable to hacking, potentially leading to a breach of confidential information. To ensure safety and security, designers must ensure that the drones are equipped with appropriate safeguards and that the data collected is encrypted. 4. **Reliability** is a major ethical consideration when using drones for data collection. If the automated workflows are not reliable, then the data collected could be inaccurate and ineffective for decision making. To ensure reliability, designers must ensure that the drones are equipped with reliable and accurate sensors and that the data is stored securely. By taking these ethical considerations into account when designing our product, Ally, we will ensure that the data is used responsibly and in the best interest of the public. ### Resources * [Applications of drone in disaster management: A scoping review](https://www.sciencedirect.com/science/article/pii/S1355030621001477) * [Vox News: How these buildings made the Turkey and Syria earthquakes so deadly](https://www.vox.com/videos/2023/2/16/23602986/turkey-syria-earthquake-soft-story-buildings-collapse)
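The embed-and-rank idea behind the semantic search described above can be sketched with an open-source CLIP checkpoint; the model name and file paths below are stand-ins for however the hackathon build actually loaded its weights.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # open-source CLIP checkpoint

paths = ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]
images = [Image.open(p) for p in paths]
img_emb = model.encode(images, convert_to_tensor=True)

# text and images land in the same vector space, so one cosine ranks them
query_emb = model.encode(["people in rubble"], convert_to_tensor=True)
scores = util.cos_sim(query_emb, img_emb)[0]

# print the drone frames most relevant to the query, best match first
for path, score in sorted(zip(paths, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:.3f}  {path}")
```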
## Inspiration According to the United States Department of Health and Human Services, 55% of the elderly are non-compliant with their prescription drug orders, meaning they don't take their medication according to the doctor's instructions, and 30% of these cases result in hospital readmissions. Although there are many reasons why seniors don't take their medications as prescribed, memory loss is one of the most common causes. Elders with Alzheimer's or other related forms of dementia are prone to medication management problems. They may simply forget to take their medications, causing them to skip doses. Or, they may forget that they have already taken their medication and end up taking multiple doses, risking an overdose. Therefore, we decided to tackle this issue with Pill Drop, which helps people remember to take their medication. ## What it does Pill Drop dispenses pills at scheduled times throughout the day. It helps people, primarily seniors, take their medication on time. It also saves users the trouble of remembering which pills to take, by automatically dispensing the appropriate medication. It tracks whether a user has taken the dispensed pills by starting an internal timer. If the patient takes the pills and presses a button before the time limit, Pill Drop records this instance as "Pill Taken". ## How we built it Pill Drop was built using a Raspberry Pi and an Arduino, which control servo motors, a button, and a touch sensor. It was coded in Python. ## Challenges we ran into The first challenge we ran into was communication between the Raspberry Pi and the Arduino, since none of us knew how to do that. Another challenge was structurally holding all the components needed in our project, making sure that all the "physics" aligned so that our product is structurally stable. In addition, having the Pi send an SMS text message was also new to all of us; taking inspiration from HyperCare's user interface, we were finally able to send one too! Lastly, bringing our theoretical ideas to fruition was harder than expected, as we ran into multiple roadblocks in our code within the given time frame. ## Accomplishments that we're proud of We are proud that we were able to create a functional final product that incorporates both hardware (Arduino and Raspberry Pi) and software! We were able to apply skills we learned in class, plus learn new ones during our time in this hackathon. ## What we learned We learned how to connect and use a Raspberry Pi and an Arduino together, as well as how to build a user interface on top of them, with text messages sent to the user. We also learned that we can consolidate code at the end when we persevere and keep each other's morale up throughout the long hours of the hackathon - knowing that each of us can be trusted to work individually while staying continuously engaged with the team. (While, obviously, having fun along the way!) ## What's next for Pill Drop Pill Drop's next steps include creating a high-level prototype, testing the device over a long period of time, creating a user-friendly interface so users can adjust pill-dropping times, and incorporating patients and doctors into the system. ## UPDATE! We are now working with MedX Insight to create a high-level prototype to pitch to investors!
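A sketch of the dispense-then-confirm loop on the Pi; the pin numbers, servo duty cycles, and confirmation window are assumptions, and RPi.GPIO here stands in for whatever split of work the Pi and Arduino actually had.

```python
import time

import RPi.GPIO as GPIO

SERVO_PIN, BUTTON_PIN = 18, 23  # assumed wiring
CONFIRM_WINDOW_S = 15 * 60      # assumed window for pressing the button

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

servo = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz hobby-servo signal
servo.start(0)

def dispense():
    servo.ChangeDutyCycle(7.5)  # rotate to the drop position
    time.sleep(0.5)
    servo.ChangeDutyCycle(2.5)  # return home
    time.sleep(0.5)
    servo.ChangeDutyCycle(0)    # stop sending pulses

def wait_for_confirmation():
    """Start the internal timer and watch for the confirmation button."""
    deadline = time.time() + CONFIRM_WINDOW_S
    while time.time() < deadline:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:  # button pressed
            return "Pill Taken"
        time.sleep(0.1)
    return "Pill Missed"  # this is where the SMS reminder would fire

dispense()
print(wait_for_confirmation())
GPIO.cleanup()
```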
## Inspiration: The app was born from the need to respond to global crises like the ongoing wars in Palestine, Ukraine, and Myanmar, which have made real-time, location-based threat awareness more critical than ever. While these conflicts are often headline news, people living far from the conflict zones may lack an immediate understanding of how quickly conditions change on the ground. Our inspiration came from a desire to bridge that gap by leveraging technology to provide a solution that offers real-time updates about dangerous areas, not just in warzones but in urban centers and conflict-prone regions around the world. ## How we built it: Our app was developed with scalability and responsiveness in mind, given the complexity of gathering real-time data from diverse sources. For the backend, we used Python to run a Reflex web app, which hosts our API endpoints and powers the data pipeline. Reflex was chosen for its ability to handle asynchronous tasks, crucial for integrating with a MongoDB database that stores a large volume of data gathered from news articles. This architecture allows us to scrape, store, and process incoming data efficiently without compromising performance. On the frontend, we leveraged React Native to ensure cross-platform compatibility, offering users a seamless experience on both iOS and Android devices. React Native's flexibility allowed us to build a responsive interface where users can interact with the heat map, see threat levels, and access detailed news summaries all within the same app. We also integrated Meta LLaMA, a hyperbolic transformer model, which processes the textual data we scrape from news articles. The model is designed to analyze and assess the threat level of each news piece, outputting both the geographical coordinates and a risk assessment score. This was a particularly complex part of the development process, as fine-tuning the model to provide reliable, context-aware predictions required significant iteration and testing. ## Challenges we faced: The most pressing challenge was data scraping, particularly the obstacles put in place by websites that actively work to prevent scraping. Many news websites have anti-scraping measures in place, making it difficult to gather comprehensive data. To address this, we had to get creative with our scraping methods, using dynamic techniques that could mimic human-like browsing to avoid detection. Another major challenge was iOS integration, particularly in working with location services. iOS tends to have stricter privacy controls, which required us to implement complex authentication mechanisms and permissions handling. Additionally, deploying the backend infrastructure presented challenges in ensuring that it scaled smoothly under heavy data loads, all while maintaining low-latency responses for real-time updates. We also faced hurdles in speech-to-text functionality, as we aim to make the app more accessible by allowing users to interact with it via voice commands. Integrating accurate, multi-language speech recognition that can handle diverse accents and conditions in real-world environments is a work in progress. ## Accomplishments we're proud of: Despite these challenges, we successfully built a dynamic heat map that allows users to visually grasp the intensity of threats in different geographical areas. The Meta LLaMA model was another major achievement, enabling us to not only scrape news articles but also analyze and assign a threat level in real time.
This means that a user can look at the app, see a particular area highlighted as high risk, and read news reports with data-backed assessments. We've created something that helps people stay informed about their environment in a practical, visually intuitive way. Moreover, building a fully functional app with both backend and frontend integration, while using cutting-edge machine learning models for threat assessment, is something we're particularly proud of. The app is capable of processing large datasets and serving actionable insights with minimal delays, which is no small feat given the technical complexity involved. ## What we learned: One of the biggest takeaways from this project was the importance of starting with the fundamentals and building a solid foundation before adding complex features. In the early stages, we focused on getting the core infrastructure right—ensuring the scraping, data pipeline, and database were robust enough to handle scaling before moving on to model integration and feature expansion. This allowed us to pivot more easily when challenges arose, such as working with real-time data or adjusting to API limitations. We also learned a great deal about the nuances of natural language processing and machine learning, especially when it comes to applying those technologies to dynamic, unstructured news data. It’s one thing to build an AI model that processes text in a controlled environment, but real-world data is messy, often incomplete, and constantly evolving. Understanding how to fine-tune models like Meta LLaMA to give reliable assessments on current events was both challenging and incredibly rewarding. ## What’s next: Looking ahead, we plan to expand the app’s capabilities further by integrating speech-to-text functionality. This will make the app more accessible, allowing users to dictate queries or receive voice-based updates on emerging threats without having to type or navigate through screens. This feature will be particularly valuable for users who may be on the move or in situations where typing isn’t practical. We’re also focusing on improving the accuracy and scope of our web scrapers, aiming to gather more diverse data from a broader range of news sources while adhering to ethical guidelines. This includes exploring ways to improve scraping from difficult sites and even partnering with news outlets to gain access to structured data. Beyond these immediate goals, we see potential in scaling the app to include predictive analytics, using historical data to forecast potential danger zones before they escalate. This would help users not only react to current events but also plan ahead based on emerging patterns in conflict areas. Another exciting direction is user-driven content, allowing people to report and share information about dangerous areas directly through the app, further enriching the data landscape.
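One plausible shape for the pipeline's storage layer: each scraped article is stored with its model-assigned threat score and a GeoJSON point, so the heat map can query by proximity. The schema, score range, and connection string below are assumptions, not the project's actual code.

```python
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
articles = client.threats.articles
articles.create_index([("location", GEOSPHERE)])   # enables $near map queries

def store_assessment(url, text, lat, lon, threat_score):
    """Persist one scraped article plus the model's risk assessment."""
    articles.insert_one({
        "url": url,
        "summary": text[:500],
        "threat_score": threat_score,  # assumed 0..1 output of the LLaMA step
        "location": {"type": "Point", "coordinates": [lon, lat]},
    })

def nearby(lat, lon, max_meters=5000):
    """Fetch assessments near a map location for the heat-map overlay."""
    return list(articles.find({
        "location": {"$near": {
            "$geometry": {"type": "Point", "coordinates": [lon, lat]},
            "$maxDistance": max_meters,
        }}
    }))
```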
partial
## Inspiration We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool. ## What it does AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures. The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch. ## How we built it In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between body parts to identify a dance move. Although successful with a few gestures, this method was not ideal for more complex gestures like the "shoot". We ended up training a convolutional neural network in TensorFlow with 1,000 samples of each gesture, which worked better. The model achieves 98% accuracy on the test data set. We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done using dlib and OpenCV to detect facial features and map a static image over them. ## Challenges we ran into We came in with a completely different idea for the Hack for Resistance route, and we spent the first day working on that until we realized it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with Leap Motion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time. It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in. ## Accomplishments that we're proud of It was one of our first experiences training an ML model for image recognition, and it's a lot more accurate than we had even expected. ## What we learned All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new! ## What's next for AirTunes The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application, and add more customization.
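For scale, a 10-gesture classifier of the kind described is only a few Keras layers; the exact architecture and input size below are assumptions, not the trained model from the hack.

```python
import tensorflow as tf

NUM_GESTURES = 10  # one class per dance move

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# with ~1,000 labeled frames per gesture:
# model.fit(train_frames, train_labels, epochs=10, validation_split=0.1)
```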
## Inspiration Video games evolved when the Xbox Kinect was released in 2010, but for some reason we reverted back to controller-based games. We are here to bring back the amazingness of movement-controlled games with a new twist: re-innovating how mobile games are played! ## What it does AR.cade uses a body-part detection model to track movements that correspond to controls for classic games that run in an online browser. The user can choose from a variety of classic games, such as Temple Run and Super Mario, and play them with their body movements. ## How we built it * The first step was setting up OpenCV and importing a body-part tracking model from Google MediaPipe * Next, based on the positions and angles between the landmarks, we created classification functions that detected specific movements, such as when an arm or leg was raised or the user jumped * Then we correlated these movement identifications to keybinds on the computer. For example, when the user raises their right arm, it corresponds to the right arrow key * We then embedded some online games of our choice into our front end, and when the user makes a certain movement corresponding to a certain key, the respective action happens * Finally, we created a visually appealing and interactive frontend/loading page where the user can select which game they want to play ## Challenges we ran into A large challenge we ran into was embedding the video output window into the front end. We tried passing it through an API, and it worked with a basic plain video; however, difficulties arose when we tried to pass the video with the body-tracking model overlaid on it. ## Accomplishments that we're proud of We are proud of the fact that we have a functioning product, in the sense that multiple games can be controlled with body-part commands of our specification. Thanks to threading optimization, there is little latency between user input and video output, which was a fear when starting the project. ## What we learned We learned that it is possible to embed other websites (such as simple games) into our own local HTML sites. We learned how to map landmark node positions into meaningful movement classifications, considering positions and angles. We learned how to resize, move, and give priority to external windows such as the video output window. We learned how to run Python files from JavaScript to make automated calls to further processes. ## What's next for AR.cade The next steps for AR.cade are to implement a more accurate body-tracking model in order to track more precise parameters. This would allow us to scale our product to more modern games that require more user inputs, such as Fortnite or Minecraft.
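The landmark-to-keybind loop from the build steps above can be sketched in a few lines of Python; pynput stands in for whatever key-injection method the project actually used, and the raised-arm rule is the example mapping from the list.

```python
import cv2
import mediapipe as mp
from pynput.keyboard import Controller, Key

mp_pose = mp.solutions.pose
keyboard = Controller()

def tap(key):
    keyboard.press(key)
    keyboard.release(key)

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
            shoulder = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
            if wrist.y < shoulder.y:  # arm raised (image y grows downward)
                tap(Key.right)        # -> the right-arrow keybind
cap.release()
```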
## Inspiration There is a need for an electronic health record (EHR) system that is secure, accessible, and user-friendly. Currently, hundreds of EHRs exist, and different clinical practices may use different systems. If a patient requires an emergency visit to a certain physician, the physician may be unable to access important records and patient information efficiently, requiring extra time and resources that strain the healthcare system. This is especially true for patients traveling abroad, where doctors in one country may be unable to access a centralized healthcare database in another. In addition, there is a strong potential to utilize the data available for improved analytics. In a clinical consultation, a patient's description of symptoms may be ambiguous, and doctors often want to monitor the patient's symptoms for an extended period. With limited resources, this is impossible outside of an acute care unit in a hospital. As access to the internet becomes increasingly widespread, patients may be able to self-report certain symptoms through a web portal if such an EHR exists. With a large amount of patient data, artificial intelligence techniques can be used to analyze the similarity of patients and predict certain outcomes before adverse events happen, so that intervention can occur in a timely manner. ## What it does myHealthTech is a block-chain EHR system that has a user-friendly interface for patients and health care providers to record patient information such as clinical visitation history, lab test results, and self-reporting records from the patient. The system is a web application that is accessible to any end user approved by the patient. Thus, doctors in different clinics can access essential information in an efficient manner. Compared to traditional databases, the block-chain architecture stores patient data securely and anonymously in a decentralized manner, such that third parties cannot access the encrypted information. Artificial intelligence methods are used to analyze patient data for prognostication of adverse events. For instance, a patient's reported mood scores are compared to a database of similar patients whose cases have resulted in self-harm, and myHealthTech will compute a probability that the patient will trend towards a self-harm event. This allows healthcare providers to monitor and intervene if an adverse event is predicted. ## How we built it The block-chain EHR architecture was written in Solidity using Truffle, TestRPC, and Remix. The web interface was written in HTML5, CSS3, and JavaScript. The artificial intelligence predictive behavior engine was written in Python. ## Challenges we ran into The greatest challenge was integrating the back-end and front-end components. We had challenges linking smart contracts to the web UI and executing the artificial intelligence engine from a web interface. Several of these challenges required compatibility troubleshooting and running a centralized Python server, which will be implemented in a consistent environment when this project is developed further. ## Accomplishments that we're proud of We are proud of working with novel architecture and technology, providing a solution to common EHR problems in design, functionality, and implementation of data. ## What we learned We learned the value of leveraging the strengths of different team members, from design to programming and math, in order to advance the technology of EHRs. ## What's next for myHealthTech?
Next is the addition of more self-reporting fields to increase the robustness of the artificial intelligence engine. In the case of depression, there are clinical standards from the Diagnostic and Statistical Manual that identify markers of depression such as mood level, confidence, energy, and feelings of guilt. By monitoring these values for individuals who have recovered, are depressed, or inflict self-harm, the AI engine can predict the behavior of new individuals far more accurately by applying logistic regression to the data and adopting a deep learning approach. There is an issue with the inconvenience of reporting symptoms. Hence, a logical next step would be to implement smart home technology, such as an Amazon Echo, for the patient to interact with for self-reporting. For instance, when the patient is at home, the Amazon Echo will prompt the patient and ask "What would you rate your mood today? What would you rate your energy today?" and record the data in the patient's self-reporting records on myHealthTech. These improvements would further the capability of myHealthTech as a highly dynamic EHR with strong analytical capabilities for understanding and predicting the outcomes of patients to improve treatment options.
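As a rough illustration of the logistic-regression idea above, here is a minimal sketch with hypothetical self-reported markers and toy numbers (not real patient records):
```
# Toy sketch of the regression idea; markers and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: mood, confidence, energy, guilt (self-reported 1-10 averages);
# label 1 = history of self-harm in this synthetic cohort.
X = np.array([[2, 3, 2, 8], [7, 8, 6, 2], [3, 2, 3, 9], [8, 7, 8, 1]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Estimated probability that a new patient trends toward an adverse event.
new_patient = np.array([[3, 4, 2, 7]])
print(model.predict_proba(new_patient)[0][1])
```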
winning
## Inspiration Imagine having to wait 7 hours to receive any kind of personalised assistance when learning a new concept. Sounds like a suboptimal use of time, right? Well, it's the reality of modern educational institutions. With the size of modern university courses growing, it is becoming more and more difficult for students to receive 1-on-1 help. Professors can only help so many students in their limited office hours and TAs can only spend so much time helping other students before the workload gets to them. Due to this, the teaching staff struggles to understand how well students are engaging with the course material. As a team of 4 undergraduate students with one of us having served as a TA for 2 semesters, we've seen these problems in teaching up close from both ends of the spectrum. To solve these problems, we propose tAI. tAI, as a personalised teaching assistant, solves problems for all parties simultaneously: it tackles TA availability and workload, provides personalised guidance to students in an empathetic manner, and shares actionable insights with professors on their strengths and weaknesses with respect to student engagement and understanding of course material. tAI, as opposed to other AI-enabled teaching assistants out there, recognises that not everyone learns through text. There is so much more that can be captured through audio and video that is forgone in current AI tutor interactions. We aim to keep the AI-enabled teaching assistant as close to the real thing--and maybe even better. ## What it does tAI is more than just an AI-powered learning assistant that throws facts at you when you're confused; powered by generative AI, it's multimodal, empathetic, and context-aware. Your professors are still in control of what you learn. Here's how it works: 1. **Custom TA Creation**: teachers can create custom teaching assistants on the fly. They can input the topic they would like the TA to focus on, specify any custom instructions (like making the AI do more of the explaining than the students, specifying areas of focus in the TA, and creating guardrails on topics of conversation), and upload any files they want to make available to students with that TA. Then, voila! We generate a customised teaching assistant powered by Anthropic's state-of-the-art Claude 3.5 Sonnet, which is proven to have [excellent undergraduate level knowledge](https://www.anthropic.com/news/claude-3-5-sonnet). 2. **Multimodal tAI Interaction**: once teachers have generated a TA on a particular topic for their class, students can interact with these TAs on their portal. You can choose how you want to interact with the TA--do you want to text, talk, enact, or do a little bit of all of them? With support for text chat, audio chat, and video input, students are in control of how they want to learn. tAI is flexible and is here to support students in the best ways possible. 3. **Empathetic tAI**: tAI is trained to constantly pick up on emotional cues in all forms of input. If you seem disinterested or frustrated, it will cheer you up. If you seem confused or lost, it will check in on you. If you seem excited or confident, it will join you in your enthusiasm--*just as a human TA would*. This ensures that the AI-powered TA experience seems as personable as possible. 4. 
**Actionable Insights for Teachers**: after students complete sessions with a particular TA, we aggregate emotional and interaction data across students to generate comprehensive insights on class-wide student engagement and understanding of a particular topic or set of topics. We enable teachers to understand which topics might need follow-up in lecture, promoting a flipped classroom structure, which topics students seem confident in, and which topics could be emphasised in future practice. This also encourages professors to track learning engagement beyond test scores. Furthermore, we provide teachers with chat summaries across all chats. This allows them to review any inconsistencies and inform us about any hallucinations in the generated teaching assistants, tackling one of the key problems faced by generative AI assistants today. 5. **Data-Driven Insights for Students**: to encourage reflection and better preparation for tests, we provide students with insights on which topics they seem to have struggled with the most. We also provide students with metrics on their improvement over time (if sufficient data points are available) to positively reinforce their behaviour. ## How we built it tAI was built using the Next.js framework (with TypeScript), Hume AI, Firebase Firestore, and OpenAI. We leveraged Hume AI's Empathic Voice Interface API to create the empathetic teaching assistants and derive insights on the student's emotions expressed via voice. We utilised Hume AI's Expression Measurement API to conduct real-time analysis of facial expressions, including subtle facial movements, to include in our emotional analysis of students. We used OpenAI's GPT-4 model to create chat summaries for students. We used Firebase Firestore to store data. ## Challenges we ran into Since our project utilises various APIs, including the Hume AI, OpenAI, and Firebase APIs, it was challenging to manage all of those calls to external code in a consistent manner. Furthermore, integrating multimodal communication forms was one of the most challenging aspects of the project, as we needed to make a seamless user experience for deciding which mode to communicate with. Finally, we were able to make a single interface for communicating with the model and performing emotional analysis on all forms of input. Additionally, building a clean UI/UX and a simple user flow was one of the most time-intensive elements of the application but also one of the most critical to enable students and teachers to leverage our resources. ## Accomplishments that we're proud of * End-to-End Support for Teaching and Learning: our solution caters to both teachers and students, as opposed to many other AI + education solutions that cater to only one or the other. We need solutions that bridge the gap between them instead of trying to replace the value in teachers, and tAI is one of those solutions. We were able to build an entire flow that connects teachers and students, leaving the instructional power in the teacher's hands but the personalisation power in the students'. * Analytics for Professors: today, it's difficult for professors to account for student engagement and understanding of content before exams roll around. Test scores, however, have an impact on student futures, which is why it is important that professors have early information on student performance that they can transform into actions for improvement. tAI enables this by providing professors with emotional and interaction data drawn from student sessions. 
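As a rough illustration of how per-student signals could roll up into the class-wide insights described above, here is a minimal sketch with hypothetical session records (our real pipeline runs on Hume AI outputs stored in Firestore):
```
# Minimal aggregation sketch; session records and threshold are hypothetical.
from collections import defaultdict
from statistics import mean

# Hypothetical session records: (student, topic, confusion score 0-1).
sessions = [
    ("alice", "recursion", 0.82),
    ("bob", "recursion", 0.74),
    ("alice", "big-O", 0.21),
]

by_topic = defaultdict(list)
for _, topic, confusion in sessions:
    by_topic[topic].append(confusion)

# Topics with high average confusion get flagged for lecture follow-up.
for topic, scores in sorted(by_topic.items(), key=lambda kv: -mean(kv[1])):
    flag = "follow up in lecture" if mean(scores) > 0.5 else "students confident"
    print(f"{topic}: avg confusion {mean(scores):.2f} -> {flag}")
```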
* UI: our user interface is meant to be easy to navigate, self-explanatory, and well-suited to schools. We believe that we ticked all of those boxes. ## What we learned * About Hume AI's Technologies: Hume AI's APIs provided us with some incredibly rich emotional insights that would be valuable in many projects involving sentiment analysis. It was great to learn how to use Hume AI's technologies! A big shoutout to their team and their examples repository for helping us get started :) * Using Next.js: We had never coded with Next.js before, so it was interesting getting to learn a different JS framework. ## What's next for tAI: Personalized Teaching Assistant As we continue to innovate and evolve tAI, our vision extends beyond universities. We are excited to explore how tAI can be a valuable asset in other educational spaces, such as high schools, middle schools, and even independent learning platforms. Our upcoming enhancements will focus on broadening the application of tAI's capabilities to support these diverse fields. Here is what we have in mind: * Support for math packages like MathJax and LaTeX and OCR to enable better TAs for mathematics and the physical sciences * Support for different world languages * Integrating support for internet resources * SSO for schools * Diagram generation: for mathematical visualizations in particular, we identify the manim animation engine, from the famous 3b1b visualizations, as a promising engine for generating visualizations. We plan to fine-tune an open source LLM such as Code Llama on a dataset of natural-language-to-manim code conversions, and then perform inference to generate the best code that fits the current concept being discussed. Join us as we pave the way for smarter, AI-driven ways of learning!
## Inspiration **As Computer Science is a learning-intensive discipline, students tend to look up to their professors**. We were inspired to hack this weekend by our beloved professor Daniel Zingaro (UTM). Answering questions in Dan's classes often ends up being a difficult part of our lectures, as Dan is visually impaired. This means students are expected to yell to get his attention when they have a question, directly interrupting the lecture. Teacher's Pet could completely change the way Dan teaches and interacts with his students. ## What it does Teacher's Pet (TP) empowers students and professors by making it easier to ask and answer questions in class. Our model helps to streamline lectures by allowing professors to efficiently target and destroy difficult and confusing areas in the curriculum. Our module consists of an app, a server, and a camera. A professor, teacher, or presenter may download the TP app and receive a push notification in the form of a discreet vibration whenever a student raises their hand for a question. This eliminates students feeling anxious about keeping their hands up, or professors receiving bad ratings for inadvertently neglecting students while focusing on teaching. ## How we built it We utilized an Azure Cognitive Services backend and had to manually train our AI model with over 300 images from around UofTHacks. Imagine four sleep-deprived kids running around a hackathon asking participants to "put your hands up". The AI is wrapped in a Python interface and takes input from a camera module. The camera module is hooked up to a Qualcomm DragonBoard 410c, which hosts our Python program. Upon registering, you may pair your smartphone to your TP device through our app, and set TP up in your classroom within seconds. Upon detecting a raised hand, TP will send a simple vibration to the phone in your pocket, allowing you to quickly answer a student query. ## Challenges we ran into We had some trouble accurately differentiating when a student was stretching vs. actually raising their hand, so we took a sum of AI-guess-accuracies over 10 frames (250ms). This improved our AI success rate dramatically. Another challenge we faced was installing the proper OS and drivers onto our DragonBoard. We had to "Learn2Google" all over again (for hours and hours). Luckily, we managed to get our board up and running, and with it our project! ## Accomplishments that we're proud of Gosh darn, we stayed up for a helluva long time - longer than any of us had previously. We also drank an absolutely disgusting amount of coffee and Red Bull. In all seriousness, we all are proud of each other's commitment to the team. Nobody went to sleep while someone else was working. Teammates went on snack and coffee runs in freezing weather at 3 AM. Smit actually said a curse word. Everyone assisted on every aspect to some degree, and in the end, that fact likely contributed to our completion of TP. The biggest accomplishment that came from this was knowledge of various new APIs, and the gratification that came with building something to help our fellow students and professors. ## What we learned Among the biggest lessons we took away was that **patience is key**. Over the weekend, we struggled to work with datasets as well as our hardware. Initially, we tried to perfect as much as possible and stressed over what we had left to accomplish in the timeframe of 36 hours. We soon understood, based on words of wisdom from our mentors, that *the first prototype of anything is never perfect*. 
We made compromises, but made sure not to cut corners. We did what we had to do to build something we (and our peers) would love. ## What's next for Teacher's Pet We want to put this in our own classroom. This week, our team plans to sit with our faculty to discuss the benefits and feasibility of such a solution.
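For the curious, here is a minimal sketch of the 10-frame smoothing trick described in our challenges. The per-frame probability source, threshold value, and notify hook are illustrative assumptions:
```
# Sketch of 10-frame smoothing; probability source, threshold, and
# notify() hook are illustrative assumptions.
from collections import deque

WINDOW, THRESHOLD = 10, 0.7   # ~250 ms of frames; threshold tuned by hand
recent = deque(maxlen=WINDOW)

def on_frame(raised_hand_prob, notify):
    """Accumulate per-frame confidences; alert only when the window
    average clears the threshold, then reset to avoid repeat alerts."""
    recent.append(raised_hand_prob)
    if len(recent) == WINDOW and sum(recent) / WINDOW > THRESHOLD:
        notify()        # e.g. push the vibration to the paired phone
        recent.clear()
```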
# CourseAI: AI-Powered Personalized Learning Paths ## Inspiration CourseAI was born from the challenges of self-directed learning in our information-rich world. We recognized that the issue isn't a lack of resources, but rather how to effectively navigate and utilize them. This inspired us to leverage AI to create personalized learning experiences, making quality education accessible to everyone. ## What it does CourseAI is an innovative platform that creates personalized course schedules on any topic, tailored to the user's time frame and desired depth of study. Users input what they want to learn, their available time, and preferred level of complexity. Our AI then curates the best online resources into a structured, adaptable learning path. Key features include: * AI-driven content curation from across the web * Personalized scheduling based on user preferences * Interactive course customization through an intuitive button-based interface * Multi-format content integration (articles, videos, interactive exercises) * Progress tracking with checkboxes for completed topics * Adaptive learning paths that evolve based on user progress ## How we built it We developed CourseAI using a modern, scalable tech stack: * Frontend: React.js for a responsive and interactive user interface * Backend Server: Node.js to handle API requests and serve the frontend * AI Model Backend: Python for its robust machine learning libraries and natural language processing capabilities * Database: MongoDB for flexible, document-based storage of user data and course structures * APIs: Integration with various educational content providers and web scraping for resource curation The AI model uses advanced NLP techniques to curate relevant content and generate optimized learning schedules. We implemented machine learning algorithms for content quality assessment and personalized recommendations. ## Challenges we ran into 1. API Cost Management: Optimizing API usage for content curation while maintaining cost-effectiveness. 2. Complex Scheduling Logic: Creating nested schedules that accommodate various learning styles and content types (a simplified sketch follows the accomplishments list below). 3. Integration Complexity: Seamlessly integrating diverse content types into a cohesive learning experience. 4. Resource Scoring: Developing an effective system to evaluate and rank educational resources. 5. User Interface Design: Creating an intuitive, button-based interface for course customization that balances simplicity with functionality. ## Accomplishments that we're proud of 1. High Accuracy: Achieving a 95%+ accuracy rate in content relevance and schedule optimization. 2. Elegant User Experience: Designing a clean, intuitive interface with easy-to-use buttons for course customization. 3. Premium Content Curation: Consistently sourcing high-quality learning materials through our AI. 4. Scalable Architecture: Building a robust system capable of handling a growing user base and expanding content library. 5. Adaptive Learning: Implementing a flexible system that allows users to easily modify their learning path as they progress. 
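As promised above, here is a deliberately simplified sketch of the scheduling idea, with hypothetical topics and availability; our actual logic also accounts for content types and learning styles:
```
# Simplified scheduling sketch with hypothetical topics and availability.
topics = [("Intro to NNs", 3), ("Backprop", 2), ("CNNs", 4)]  # (name, hours)
days_available, hours_per_day = 4, 3

schedule = {day: [] for day in range(1, days_available + 1)}
remaining = {day: hours_per_day for day in schedule}

# Place the longest topics first, splitting them across days when needed.
for name, hours in sorted(topics, key=lambda t: -t[1]):
    for day in schedule:
        take = min(hours, remaining[day])
        if take:
            schedule[day].append((name, take))
            remaining[day] -= take
            hours -= take
        if hours == 0:
            break

for day, items in schedule.items():
    print(f"Day {day}: " + ", ".join(f"{n} ({h}h)" for n, h in items))
```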
## What we learned This project provided valuable insights into: * The intricacies of AI-driven content curation and scheduling * Balancing user preferences with optimal learning strategies * The importance of UX design in educational technology * Challenges in integrating diverse content types into a cohesive learning experience * The complexities of building adaptive learning systems * The value of user-friendly interfaces in promoting engagement and learning efficiency ## What's next for CourseAI Our future plans include: 1. NFT Certification: Implementing blockchain-based certificates for completed courses. 2. Adaptive Scheduling: Developing a system for managing backlogs and automatically adjusting schedules when users miss sessions. 3. Enterprise Solutions: Creating a customizable version of CourseAI for company-specific training. 4. Advanced Personalization: Implementing more sophisticated AI models for further personalization of learning paths. 5. Mobile App Development: Creating native mobile apps for iOS and Android. 6. Gamification: Introducing game-like elements to increase motivation and engagement. 7. Peer Learning Features: Developing functionality for users to connect with others studying similar topics. With these enhancements, we aim to make CourseAI the go-to platform for personalized, AI-driven learning experiences, revolutionizing education and personal growth.
losing
# Inspiration Traditional startup fundraising is often restricted by stringent regulations, which make it difficult for small investors and emerging founders to participate. These barriers favor established VC firms and high-net-worth individuals, limiting innovation and excluding a broad range of potential investors. Our goal is to break down these barriers by creating a decentralized, community-driven fundraising platform that democratizes startup investments through a Decentralized Autonomous Organization, also known as a DAO. # What It Does To achieve this, our platform leverages blockchain technology and the DAO structure. Here’s how it works: * **Tokenization**: We use blockchain technology to allow startups to issue digital tokens that represent company equity or utility, creating an investment proposal through the DAO. * **Lender Participation**: Lenders join the DAO, where they use cryptocurrency, such as USDC, to review and invest in the startup proposals. * **Startup Proposals**: Startup founders create proposals to request funding from the DAO. These proposals outline key details about the startup, its goals, and its token structure. Once submitted, DAO members review the proposal and decide whether to fund the startup based on its merits. * **Governance-based Voting**: DAO members vote on which startups receive funding, ensuring that all investment decisions are made democratically and transparently. The voting is weighted based on the amount lent in a particular DAO. # How We Built It ### Backend: * **Solidity** for writing secure smart contracts to manage token issuance, investments, and voting in the DAO. * **The Ethereum Blockchain** for decentralized investment and governance, where every transaction and vote is publicly recorded. * **Hardhat** as our development environment for compiling, deploying, and testing the smart contracts efficiently. * **Node.js** to handle API integrations and the interface between the blockchain and our frontend. * **Sepolia**, where the smart contracts have been deployed and connected to the web application. ### Frontend: * **MetaMask** integration to enable users to seamlessly connect their wallets and interact with the blockchain for transactions and voting. * **React** and **Next.js** for building an intuitive, responsive user interface. * **TypeScript** for type safety and better maintainability. * **TailwindCSS** for rapid, visually appealing design. * **Shadcn UI** for accessible and consistent component design. # Challenges We Faced, Solutions, and Learning ### Challenge 1 - Creating a Unique Concept: Our biggest challenge was coming up with an original, impactful idea. We explored various concepts, but many were already being implemented. **Solution**: After brainstorming, the idea of a DAO-driven decentralized fundraising platform emerged as the best way to democratize access to startup capital, offering a novel and innovative solution that stood out. ### Challenge 2 - DAO Governance: Building a secure, fair, and transparent voting system within the DAO was complex, requiring deep integration with smart contracts, and we needed to ensure that all members, regardless of technical expertise, could participate easily. **Solution**: We developed a simple and intuitive voting interface, while implementing robust smart contracts to automate and secure the entire process. This ensured that users could engage in the decision-making process without needing to understand the underlying blockchain mechanics. 
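To illustrate the governance-based voting described above, here is a minimal sketch of stake-weighted tallying with hypothetical members and amounts; on the platform itself this logic is enforced by the Solidity contracts:
```
# Stake-weighted voting sketch; members, amounts, and votes are hypothetical.
lent = {"ana": 600, "ben": 1400, "eve": 1000}        # USDC lent per member
votes = {"ana": "yes", "ben": "no", "eve": "yes"}    # votes on one proposal

total = sum(lent.values())
yes_weight = sum(lent[m] for m, v in votes.items() if v == "yes") / total
outcome = "passes" if yes_weight > 0.5 else "fails"
print(f"Yes weight: {yes_weight:.0%} -> proposal {outcome}")
```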
## Accomplishments that we're proud of * **Developing a Fully Functional DAO-Driven Platform**: We successfully built a decentralized platform that allows startups to tokenize their assets and engage with a global community of investors. * **Integration of Robust Smart Contracts for Secure Transactions**: We implemented robust smart contracts that govern token issuance, investments, and governance-based voting, and verified them by writing extensive unit and e2e tests. * **User-Friendly Interface**: Despite the complexities of blockchain and DAOs, we are proud of creating an intuitive and accessible user experience. This lowers the barrier for non-technical users to participate in the platform, making decentralized fundraising more inclusive. ## What we learned * **The Importance of User Education**: As blockchain and DAOs can be intimidating for everyday users, we learned the value of simplifying the user experience and providing educational resources to help users understand the platform's functions and benefits. * **Balancing Security with Usability**: Developing a secure voting and investment system with smart contracts was challenging, but we learned how to balance high-level security with a smooth user experience. Security doesn't have to come at the cost of usability, and this balance was key to making our platform accessible. * **Iterative Problem Solving**: Throughout the project, we faced numerous technical challenges, particularly around integrating blockchain technology. We learned the importance of iterating on solutions and adapting quickly to overcome obstacles. # What’s Next for DAFP Looking ahead, we plan to: * **Attract DAO Members**: Our immediate focus is to onboard more lenders to the DAO, building a large and diverse community that can fund a variety of startups. * **Expand Stablecoin Options**: While USDC is our starting point, we plan to incorporate more blockchain networks to offer a wider range of stablecoin options for lenders (EURC, Tether, or Curve). * **Compliance and Legal Framework**: Even though DAOs are decentralized, we recognize the importance of working within the law. We are actively exploring ways to ensure compliance with global regulations on securities, while maintaining the ethos of decentralized governance.
## FLEX [Freelancing Linking Expertise Xchange] ## Inspiration Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away. ## What it does Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks about any additional factors they need in a candidate. This data is then analyzed and matched against our vast database of freelancers to surface the best matching candidates. The AI then talks back to the recruiter, showing the top candidates based on the recruiter’s requirements. Once the recruiter picks the right candidate, they can create a smart contract that’s securely stored and managed on the blockchain for transparent payments and agreements. ## How we built it We started with the frontend using **Next.js** and deployed the entire application with **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**. Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on factors provided by the client. For secure transactions, we utilized the **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through smart contracts developed in **Move**. We also used Flask and **Express.js** to manage the backend and routing efficiently. ## Challenges we ran into We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable speech-to-text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full-stack application. ## Accomplishments that we're proud of We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies. ## What we learned We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration. ## What's next for FLEX Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
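As a rough illustration of the matching step, here is a minimal keyword-overlap sketch with hypothetical freelancers; in the real build, this is done by SingleStore's Full-Text Search over far richer profiles:
```
# Keyword-overlap matching sketch; freelancer data is hypothetical.
freelancers = {
    "maya": {"react", "typescript", "ui"},
    "omar": {"solidity", "move", "blockchain"},
    "lin":  {"python", "flask", "nlp"},
}

def top_candidates(keywords, k=2):
    """Rank freelancers by overlap with the recruiter's spoken keywords."""
    scores = {name: len(skills & keywords) for name, skills in freelancers.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_candidates({"python", "nlp", "flask"}))  # -> ['lin', ...]
```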
## Inspiration One of our team members recently went through a fundraising round for his startup. In this experience, he discovered the inefficiencies and flaws of the process. First, over 67% of founders from YC come from the Ivy League, Stanford, and MIT alone. Just imagine what thousands of young entrepreneurs could do with access to funding! After the joy of finally finding an interested investor comes tedious legal work that takes over 6 months on average. Founders should be focusing on their products during this time instead. Crowdfunding platforms allow large numbers of investors to lead funding rounds, but their high fees, legal cost, and the long time it takes to get the money (over 5 months) make them non-viable solutions. ## What it does Angels on the Block stands for angel investors on the blockchain. Our platform transforms equity into NFTs and provides a seamless transference solution via smart contracts. NFTs are used by startups to build a community of early adopters around the company's equity shareholding, and they provide investors with an exclusive opportunity to invest by leveraging NFTs as equity on the Ethereum blockchain. For startups: after going through a financial check, they get accepted into our decentralized platform. They can file a request to raise the amount they are looking for and the equivalent amount in shares (percentage of equity). Each offered share has an equivalent NFT, which can be traded in secondary markets. Unlike other methods, our platform transfers the money to the founders as soon as a transaction is made. This payment schedule adapts better to startups' steady needs. NFTs help create a symbiotic relationship between users and founders: users get discounts, access to exclusive events, and early access to products, while founders get a community of early adopters that can provide valuable feedback. For investors: it democratizes access to investing in startups. Currently, only high-net-worth individuals/firms can invest in them. However, young and middle-aged people from all backgrounds seek to invest in startups for the greater good, to support an ideology, or because it is the best for the planet. These people are covered with Angels On The Block. The platform provides an array of verified startups to invest in, letting investors see the impact of their money. The startup's equity is sold to an investor via a smart contract. The platform only charges a small fee of $750 per round, compared to the over $5,000 and 3-5% commission charged by traditional transactions. ## How we built it Our platform takes the form of a website, which uses smart contracts on Ethereum, written in Solidity, to perform the transactions of selling a company's stock to an investor. The smart contract takes care of minting the NFT which represents a share of a company, and providing the money to the company selling the share. The website's frontend was developed using HTML, CSS, JavaScript, and the Web3 SDK to integrate the smart contracts on Ethereum. The backend was developed in Python with the Flask framework and deployed on a Google Cloud Run instance using Docker, and the database we used was the Google Firebase Realtime Database.
winning
## Inspiration Have you ever wondered what's actually in your shampoo or body wash? Have you ever been concerned about the toxicity of certain chemicals in them, both for your body and for the environment? If you answered yes, you came to the right place. Welcome to the wonderful world of Goodgredients! 😀 Goodgredients provides a simple way to answer these questions. But how, you may ask? ## What it does Goodgredients provides a simple way to check the toxicity of certain chemicals, both for your body and for the environment. Simply take a picture of your shampoo or body wash and check which ingredients might be harmful to you. ## How I built it The project was built with React Native, Node.js, Express.js, and the Einstein API. The backend API has been deployed on Heroku. The core of this application is Salesforce Einstein Vision. In particular, we are using Einstein OCR (Optical Character Recognition), which uses deep learning models to detect alphanumeric text in an image. You can find out more info about Einstein Vision here. Essentially, we've created a backend API service that takes an image request from a client, uses the Einstein OCR model to extract text from the image, compares it to our dataset of chemical details (e.g. toxicity, allergens, etc.), and sends a response containing the comparison results back to the client. ## Challenges I ran into As first-time React Native developers, we encountered a lot of environment setup issues; however, we figured them out in time! ## Accomplishments that I'm proud of We had no experience with React Native but finished a fully functional project within 24 hours. ## What I learned ## What's next for Goodgredients
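To make the comparison step concrete, here is a minimal sketch with a hypothetical chemical dataset; our real dataset has more entries and fields:
```
# Comparison-step sketch; the chemical dataset here is hypothetical.
chemical_db = {
    "sodium lauryl sulfate": {"toxicity": "low-moderate", "allergen": True},
    "methylisothiazolinone": {"toxicity": "moderate", "allergen": True},
    "glycerin": {"toxicity": "low", "allergen": False},
}

def check_ingredients(ocr_text):
    """Return details for every known chemical found in the OCR output."""
    text = ocr_text.lower()
    return {name: info for name, info in chemical_db.items() if name in text}

print(check_ingredients("Ingredients: Water, Glycerin, Sodium Lauryl Sulfate"))
```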
## Inspiration It all started a couple days ago when my brother told me he'd need over an hour to pick up a few items from a grocery store because of the weekend checkout line. This led to us reaching out to other friends of ours and asking them about the biggest pitfalls of existing shopping systems. We got a whole variety of answers, but the overwhelming response was the time it takes to shop and, more particularly, to check out. This inspired us to ideate and come up with an innovative solution. ## What it does Our app uses computer vision to add items to a customer's bill as they place items in the cart. Similarly, removing an item from the cart automatically subtracts it from the bill. After a customer has completed shopping, they can check out on the app with the tap of a button, and walk out of the store. It's that simple! ## How we built it We used React with Ionic for the frontend, and Node.js for the backend. Our main priority was the completion of the computer vision model that detects items being added and removed from the cart. The model we used is a custom YOLO-v3Tiny model implemented in TensorFlow. We chose TensorFlow so that we could run the model using TensorFlow.js on mobile. ## Challenges we ran into The development phase had its fair share of challenges. Some of these were: * Deep learning models can never have too much data! Scraping enough images to get accurate predictions was a challenge. * Adding our custom classes to the pre-trained YOLO-v3Tiny model. * Coming up with solutions to security concerns. * Last but not least, simulating shopping while quarantining at home. ## Accomplishments that we're proud of We're extremely proud of completing a model that can detect objects in real time, as well as our rapid pace of frontend and backend development. ## What we learned We learned and got hands-on experience with transfer learning. This was always a concept that we knew in theory but had never implemented before. We also learned how to host TensorFlow deep learning models in the cloud, as well as make requests to them. Using the Google Maps API with Ionic React was a fun learning experience too! ## What's next for MoboShop * Integrate with customer shopping lists. * Display ingredients for recipes added by customer. * Integration with existing security systems. * Provide analytics and shopping trends to retailers, including insights based on previous orders, customer shopping trends among other statistics.
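As a rough sketch of the billing logic, here is a minimal version assuming a detector that emits add/remove events as items cross the cart boundary; the prices and event interface are hypothetical:
```
# Billing-logic sketch; prices and the detector event interface are hypothetical.
PRICES = {"milk": 3.49, "bread": 2.19, "eggs": 4.99}
bill = {}

def on_detection(event, item):
    """Keep the running bill in sync with what the model sees in the cart."""
    if event == "added":
        bill[item] = bill.get(item, 0) + 1
    elif event == "removed" and bill.get(item):
        bill[item] -= 1

def total():
    return sum(PRICES[i] * n for i, n in bill.items())

on_detection("added", "milk")
on_detection("added", "eggs")
on_detection("removed", "eggs")
print(bill, f"${total():.2f}")  # {'milk': 1, 'eggs': 0} $3.49
```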
## Inspiration We got the idea for this app after one of our teammates shared that during her summer internship in China, she could not find basic over-the-counter medication that she needed. She knew the brand name of the medication in English; however, she was unfamiliar with the local pharmaceutical brands and she could not read Chinese. ## Links * [FYIs for your Spanish pharmacy visit](http://nolongernative.com/visiting-spanish-pharmacy/) * [Comparison of the safety information on drug labels in three developed countries: The USA, UK and Canada](https://www.sciencedirect.com/science/article/pii/S1319016417301433) * [How to Make Sure You Travel with Medication Legally](https://www.nytimes.com/2018/01/19/travel/how-to-make-sure-you-travel-with-medication-legally.html) ## What it does This mobile app allows users traveling to different countries to find the medication they need. They can input the brand name in the language/country they know and get the name of the same compound in the country they are traveling to. The app provides a list of popular brand names for that type of product, along with images to help the user find the medicine at a pharmacy. ## How we built it We used Beautiful Soup to scrape Drugs.com and create a database of the 20 most popular active ingredients in over-the-counter medication. We included in our database the name of the compound in 6 different languages/countries, as well as the associated brand names in the 6 different countries. We stored our database on MongoDB Atlas and used Stitch to connect it to our React Native front-end. Our Android app was built with Android Studio and connected to the MongoDB Atlas database via the Stitch driver. ## Challenges we ran into We had some trouble connecting our React Native app to the MongoDB database since most of our team members had little experience with these platforms. We revised the schema for our data multiple times in order to find the optimal way of representing fields that have multiple values. ## Accomplishments that we're proud of We're proud of how far we got considering how little experience we had. We learned a lot from this hackathon and we are very proud of what we created. We think that healthcare and finding proper medication is one of the most important things in life, and there is a lack of informative apps for getting proper healthcare abroad, so we're proud that we came up with a potential solution to help travellers worldwide take care of their health. ## What we learned We learned a lot about React Native and MongoDB while working on this project. We also learned what the most popular over-the-counter medications are and what they're called in different countries. ## What's next for SuperMed We hope to continue working on our MERN skills in the future so that we can expand SuperMed to include even more data from a variety of different websites. We hope to also collect language translation data and use ML/AI to automatically translate drug labels into different languages. This would provide even more assistance to travelers around the world.
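To show the kind of shape we landed on, here is a minimal sketch of one possible document layout and lookup using pymongo; the field names and connection string are illustrative, not our exact schema:
```
# Document-schema sketch; field names and connection string are illustrative.
from pymongo import MongoClient

doc = {
    "compound": "ibuprofen",
    "names": {"US": "Advil", "FR": "Nurofen", "DE": "Dolormin"},  # per-country brands
    "image_urls": ["https://example.com/advil.jpg"],
}

client = MongoClient("mongodb://localhost:27017")  # or a MongoDB Atlas URI
meds = client["supermed"]["medications"]
meds.insert_one(doc)

# A traveler who knows the US brand and is visiting France:
hit = meds.find_one({"names.US": "Advil"})
print(hit["names"]["FR"])  # -> Nurofen
```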
partial
## Inspiration Considering our team is so diverse (Pakistan, Sweden, Brazil, and Nigeria), it was natural for us to consider worldwide problems when creating our project. This problem in particular has such a large societal impact that we were very motivated to move towards a solution. ## What it does Our service takes requests from users by SMS, which we then convert to an executable query. When the query result is received, we send it back using SMS. Our application makes the process user-friendly and allows for more features when accessing the internet, such as ordering an Uber or ordering food. ## How we built it The app converts the user's selection into text messages, sending them to our Twilio number. We used the Twilio API to automatically manage these texts. Using C# and Python scripts, we convert the text into a Google search, sending the result back as a text message. ## Challenges we ran into The main challenge we faced was making the different protocols interact; it was also challenging to produce and debug everything under the time constraint. ## Accomplishments that we're proud of We are very proud of our presentation and our creative solution, as well as of the effective collaboration that enabled us to complete as much as we did. We are very proud of how we successfully created a novel solution that is simple enough to be applicable on a large scale, having a large impact on the world. ## What we learned We learned how to automate the management of text messages, and how to make the different protocols communicate correctly. ## What's next for Access What's next for Access is to expand our service, fulfilling the large potential that our solution has. We want to make more parts of the internet accessible through our service, make the process more efficient, and most importantly extend our reach to those who need it the most.
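As a minimal sketch of the Twilio side (not our exact scripts, which were split across C# and Python), a Flask webhook that answers each incoming SMS could look like this; the run_search helper is a hypothetical stand-in for the search step:
```
# Twilio webhook sketch; run_search() is a hypothetical stand-in.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def incoming_sms():
    query = request.form.get("Body", "")   # the user's texted request
    answer = run_search(query)             # convert to a search and summarize
    resp = MessagingResponse()
    resp.message(answer[:1600])            # stay under Twilio's body limit
    return str(resp)

def run_search(query):
    return f"Top result for: {query}"      # placeholder for the demo

if __name__ == "__main__":
    app.run(port=5000)
```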
## Inspiration When travelling in a new place, it is often the case that one doesn't have an adequate amount of mobile data to search for information they need. ## What it does Mr.Worldwide allows the user to send queries and receive responses regarding the weather, directions, news, and translations in the form of SMS, and therefore without the need for any data. ## How I built it A natural language understanding model was built and trained with the use of Rasa NLU. This model has been trained to work as well as possible with many variations of query styles, so it can act as a chatbot. The queries are sent up to a server by SMS with the Twilio API. A response is then sent back the same way to function as a chatbot. ## Challenges I ran into Implementing the Twilio API was a lot more time-consuming than we assumed it would be. This was due to the fact that a virtual environment had to be set up and our connection to the server was not working at first. Another challenge was providing the NLU model with adequate information to train on. ## Accomplishments that I'm proud of We are proud that our end result works as we intended it to. ## What I learned A lot about NLU models and implementing APIs. ## What's next for Mr.Worldwide Potentially expanding the scope of what services/information it can provide to the user.
## Inspiration Only 50% of the world has internet access today. But around 65% have SMS access. That's over 1.3 billion people who have SMS access but don't have any access to the internet. Especially in developing countries, the growth of access to the internet is slowing down due to many barriers of access. In a world where internet connectivity is essential for fast information retrieval and for a lot of other applications, we set out to bring the world a bit closer by providing internet access to those in need. ## What it does We built a web browser that allows users to access websites completely offline without the need for WiFi or mobile data, powered by SMS technology. ## How we built it There are three components to our app. We used Flutter for the front-end, to allow user URL entry. We then SMS the URL to our Twilio number. The back-end was written in Python; it waits for incoming SMS messages and scrapes the webpage to get HTML content from it. We then return the HTML using SMS to the front-end, where we parse and render the webpage. ## Challenges we ran into Integrating Twilio's API was a challenge, both in making sure we adhere to the character limits and in implementing the logic behind waiting for incoming messages and replying to them. In addition, SMS technology is difficult to work with because of the unreliability of message speed and ordering. ## Accomplishments that we're proud of We're proud of our parsing algorithm that's able to take HTML and render it as a webpage on Flutter. We're also proud of our SMS communication technology that's verified using message IDs, and of our algorithm to accumulate all the individual SMS messages and aggregate them to form the HTML. ## What we learned This was the first time we made a mobile app using Flutter, and we learned a lot about mobile app development! We also learned how to use Twilio and a lot about how SMS technology works on mobile phones. Accumulating data about internet access, we learned that there are a lot of people out there who, some or all of the time, have access to SMS but not the internet via WiFi or mobile data. ## What's next for Telebrowser Instead of parsing just HTML, we'd like to implement a full-on browser (using, for example, Chromium) that supports CSS, JavaScript, and assets. We'd also like to utilize SMS messaging directly without Twilio for optimal performance.
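As a rough illustration of the aggregation idea, here is a minimal sketch of numbering segments and stitching them back together in order; the "id|index|count|" header format is an illustrative simplification of our message-ID scheme:
```
# Segment numbering + reassembly sketch; the header format is illustrative.
SEG = 16  # tiny for this demo; in practice ~150 chars of payload per SMS

def to_segments(msg_id, html):
    parts = [html[i:i + SEG] for i in range(0, len(html), SEG)]
    return [f"{msg_id}|{n}|{len(parts)}|{p}" for n, p in enumerate(parts)]

def reassemble(received):
    chunks, count = {}, None
    for raw in received:
        _, n, count, payload = raw.split("|", 3)
        chunks[int(n)] = payload
    if count is not None and len(chunks) == int(count):
        return "".join(chunks[i] for i in range(int(count)))
    return None  # still waiting on missing segments

segs = to_segments("a1", "<html><body><h1>Hello</h1></body></html>")
print(reassemble(reversed(segs)))  # arrival order doesn't matter
```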
partial
## Inspiration After being transitioned to an online learning setup, it was inevitable for many students and other internet users to increase their gadget usage, with Google.com as the leading search engine for all their curious minds ending **late at night** (or should I say morning). With the increase in usage, there is also an increase in the need to **decrease the blue light emitted by our gadgets**: as explained in the next section, blue light can cause irreversible damage to the retina, visual fatigue, and insomnia. **Therefore, if we want to reduce the total amount of blue light emitted by the screen, one way is to reduce the blue light component in the spectrum**, that is, **lower the color temperature and adjust the screen to a yellowish tint**. **Yellow screens are better for the eyes**. A yellow screen can effectively reduce the blue light emitted by the screen, relieve eye fatigue, and even help you fall asleep better at night. Thus, the creation of **Night, Google 🌕**, a Google Chrome extension that gives you a better experience browsing Google by changing the page background color to a more eye-friendly tint of yellow. ## What is blue light and why is it bad? Blue light is relatively high-energy light with a wavelength between 400nm and 480nm, at the short end of the visible spectrum. **Blue light can penetrate the lens directly to our retina**, causing the atrophy or even death of retinal pigment epithelial cells. The death of these light-sensitive cells results in decreased vision or even **complete, permanent loss of vision**. This **damage is irreversible**. Blue light can also cause age-related macular degeneration, which can lead to permanent vision loss. The lens of the human eye absorbs part of the blue light and gradually becomes cloudy, forming cataracts, while **most of the blue light still penetrates the lens; children's lenses are clearer and cannot effectively resist blue light, making macular degeneration and cataracts more likely**. Due to the short wavelength of blue light, the eyeballs stay in a state of tension for a long time, causing visual fatigue. You may experience symptoms such as **eye fatigue, headache, blurred vision, dry eyes, and neck and shoulder pain**. Excessive exposure to blue light at night, together with insufficient melatonin secretion, keeps people awake and unable to fall asleep, which causes **insomnia**. ## What it does Night, Google 🌕 is a **Chrome extension** that aims to give users a more **eye-friendly experience while searching on Google.com** by changing the default bright white background color to a more relaxed yellowish tint. ## How I built it To build the extension, I used **HTML, CSS,** and **JavaScript**. 
## Challenges I ran into * Debugging the code when loading it into Chrome in developer mode * Finding the best color to use for the new background color on Google ## Accomplishments that I am proud of I am beyond glad that I was able to create my first ever Chrome extension and have it solve a problem I myself am facing right now. It was also the first time I created a project completely on my own, finished in only a few hours. ## What I learned I learned how to create a Chrome extension from scratch! ## What's next for Night, Google I am looking forward to making the background change even more comprehensive on Google.com and creating more color variations to choose from. I also plan on adding a preview in Chrome, so that when users click on the extension icon, they can choose which color they prefer for the background of the page. I also plan on creating official icons for the extension so I can launch it in the Chrome Web Store in the future.
## Inspiration Because all of our teammates live in condos, we are deeply aware of how busy and crowded food delivery can be during rush hours (dining times). The congestion is not just a matter of inconvenience; it also poses a security risk, with increasing incidents of 'takeout thieves' stealing meals. Our meals have been stolen many times. This shared experience sparked an innovative idea among us: the creation of a 'Last-Meter' Automated Food Delivery System with an automatic food delivery car and security cameras. By integrating the two, we aim not only to enhance the efficiency of food delivery services but also to address and mitigate safety concerns. Our goal is to streamline the delivery process, ensuring that meals reach their rightful recipients securely and swiftly, redefining the last leg of the food delivery journey. ## What it does This "last meter" delivery and safety system contains an automatic car that finds its own way: it starts at the drop station, stops at the pick-up station, and then repeats the cycle. We built a simulated environment of a lobby and automatically detect food on the car. The logic of the car is as follows: first, the car sits at the drop station waiting for food to be placed on it. Once the force sensor detects that food has been put on the car, the car automatically starts to run and stops at the pickup station for customers or custodians to pick up the food. Once the sensor detects that the food has been removed, the car starts running back to the drop station, ready for the next loop. We also have a security camera powered by OpenCV in the system, which can automatically detect the face of anyone picking up the food and record the time of the pickup. That way, a person whose food goes missing can easily track when it was taken and by whom. ## How we built it For the first part, the car, we used a PVC car frame and a recyclable paper box as the body, two gear motors with wheels connected through an L298N motor driver and a 9V battery pack to drive the motors, and a Raspberry Pi Pico at the center managing the whole system. We have two IR sensors at the bottom, separated by an appropriate distance to detect the lines. The logic is that whenever the left IR sensor reads "1", which means it detected the black line, the car turns left (driving the right motor forward and the left motor backward) to adjust itself back to the middle; the same principle applies to the right IR sensor. If neither sensor detects black, the car simply moves forward. We also connected a force sensor at the top: when both IR sensors read 1, the car stops, and when the force sensor then detects a change, the car knows it can go again. Here is the code for it: At first, we import the modules needed. The ADC is for the force sensor, since it outputs an analog signal. 
```
from machine import Pin,PWM,ADC  # importing Pin, PWM and ADC
import time  # importing time
import utime
```
Then, we map the motor controls to the output pinouts. There are two motors, and each can spin both forward and backward, so we define four control pins.
```
motor1=Pin(10,Pin.OUT)
motor2=Pin(11,Pin.OUT)
motor3=Pin(12,Pin.OUT)
motor4=Pin(13,Pin.OUT)
# Defining enable pins and PWM objects
enable1=PWM(Pin(6))
enable2=PWM(Pin(7))
# Defining right and left IR digital pins as input
right_ir = Pin(2, Pin.IN)
left_ir = Pin(3, Pin.IN)
# Defining frequency for enable pins
enable1.freq(1000)
enable2.freq(1000)
# Setting maximum duty cycle for maximum speed
enable1.duty_u16(65025)
enable2.duty_u16(65025)
```
Define the output power of the motors and the threshold for the force sensor. The values of these variables were obtained through constant experimentation to find what worked best.
```
half_speed = 30000
threshold = 3000
```
Define the functions for the motors to move forward, move backward, turn right, and turn left. Motor1 and Motor2 refer to the left motor: Motor1 means moving forward and Motor2 means moving backward. Motor3 and Motor4 refer to the right motor: Motor4 means moving forward and Motor3 means moving backward. The read\_adc() function reads the output of the specific ADC pinout that the force sensor is connected to.
```
# Forward
def move_forward():
    motor1.high()
    motor2.low()
    motor3.low()
    motor4.high()
    enable1.duty_u16(half_speed)
    enable2.duty_u16(half_speed)

# Backward
def move_backward():
    motor1.low()
    motor2.high()
    motor3.high()
    motor4.low()
    enable1.duty_u16(half_speed)
    enable2.duty_u16(half_speed)

# Turn Right
def turn_right():
    motor1.high()
    motor2.low()
    motor3.high()  # Right wheel moving backward
    motor4.low()
    enable1.duty_u16(half_speed)
    enable2.duty_u16(half_speed)

# Turn Left
def turn_left():
    motor1.low()
    motor2.high()  # Left wheel moving backward
    motor3.low()
    motor4.high()
    enable1.duty_u16(half_speed)
    enable2.duty_u16(half_speed)

# Stop
def stop():
    motor1.low()
    motor2.low()
    motor3.low()
    motor4.low()
    enable1.duty_u16(0)
    enable2.duty_u16(0)

def read_adc():
    adc = ADC(Pin(26))
    reading = adc.read_u16()
    return reading
```
When both IR sensors detect white, the car is on the right path. If only one IR sensor detects black, the car turns right or left according to that sensor's position. When both IR sensors detect black (when the car hits the stop line), the code compares two force readings taken one second apart. If the difference in force is greater than a certain threshold, the car moves forward until it crosses the stop line.
```
while True:
    right_val=right_ir.value()  # Getting right IR value (0 or 1)
    left_val=left_ir.value()  # Getting left IR value (0 or 1)
    print(str(right_val)+"-"+str(left_val))
    # Controlling robot direction based on IR values
    if right_val==0 and left_val==0:
        move_forward()
    elif right_val==1 and left_val==0:
        turn_right()
    elif right_val==0 and left_val==1:
        turn_left()
    elif right_val==1 and left_val==1:
        stop()
        previous_value = read_adc()
        time.sleep(1)
        # Check if the difference in force is greater than the threshold
        if abs(read_adc() - previous_value) > threshold:
            move_forward()
            time.sleep(0.7)  # The car moves forward 0.7s to cross the stop line
            stop()
```
And here is the code for the OpenCV-powered camera: There are two major sections to this program. The first identifies human faces that appear in the live video stream from the camera using the cascade classifier, and encircles each face found with a rectangle. 
For the second section, the major purpose is that at a set interval after the camera has been turned on, the program uses the face_recognition library to locate the face and saves that facial image into a file in a folder. The image files are named according to the time they were taken, using the pytz library. Finally, the last piece of code lets the user stop the program by pressing the "q" key.

```
import face_recognition
import cv2
import numpy as np
import time
import os
import pytz
from datetime import datetime

# Initialize the camera and the Haar cascade face detector
cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Timestamp of the last saved face image
last_save_time = time.time()
face_save_interval = 5  # Interval between face saves (in seconds)

# Directory to save face images
save_directory = "FaceImages"
os.makedirs(save_directory, exist_ok=True)

face_id = 0
toronto_tz = pytz.timezone('America/Toronto')

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        break

    # Convert to grayscale and detect faces with the cascade classifier
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)

    # Get the current time
    current_time = time.time()

    # Check if it's time to save a new face
    if current_time - last_save_time >= face_save_interval:
        # Find all face locations in the current frame
        face_locations = face_recognition.face_locations(frame)

        # If faces are detected, save the first one
        if face_locations:
            # Update the last saved time and increment the face ID
            last_save_time = current_time
            face_id += 1

            toronto_time = datetime.now(toronto_tz)
            # Format the timestamp for the filename
            timestamp = toronto_time.strftime('%B %d, %Y -> %I hours %M minutes %S seconds').lower()

            # Save the image of the face to a file
            top, right, bottom, left = face_locations[0]
            face_image = frame[top:bottom, left:right]
            save_path = os.path.join(save_directory, f"Face_{timestamp}.jpg")
            cv2.imwrite(save_path, face_image)
            print(f"Saved face #{face_id} to {save_path}")

    # Draw a rectangle around each detected face
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Display the resulting frame
    cv2.imshow('img', frame)

    # Break the loop when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the webcam and destroy all active windows
cap.release()
cv2.destroyAllWindows()
```

## Challenges we ran into

We ran into many challenges, such as Wi-Fi connectivity issues, and with the limited time we were not able to carry every idea through to completion.

## Accomplishments that we're proud of

Although we only built a prototype in a limited amount of time, we are very proud that we completed the frame of the system, and we are especially proud of its extensibility. We can add many more functions onto this frame and make the system more developed and mature. We are excited about that and hope to grow it into a fully functional system.

## What we learned

We learned so much from this event. The four of us are all first-time hackers with little prior experience in hardware or coding, so we learned and built everything at the same time. We learned how to wire the boards, what each pin on each board stands for, how the logic works on the boards, MicroPython, debugging, how to use glue guns, OpenCV, and so many other new things.
We can definitely go further and do better at the next MakeUofT contest!

## What's next for 'Last-Meter' Automated Food Delivery System

We plan to add more modules to the car, including: automatic obstacle avoidance using an IR sensor; a stabilizer using a touch module to make sure the food is steady before the car starts moving; a more powerful energy system; more intelligent line-following logic; a better-balanced car frame; a more developed OpenCV system that can automatically compare detected faces against the building's occupants; and, last but not least, a bigger basket for food, of course!
## Inspiration 💡

Do your eyes ever feel strained and dry after hours and hours spent staring at screens? Has your eye doctor ever told you about the 20-20-20 rule? Good thing we’ve automated it for you, along with personalized analysis of your eye activity using AdHawk’s eye tracking device. The awesome AdHawk demos blew us away, and we were inspired by its seemingly subtle but powerful features: it can track the user's gaze in three dimensions, recognize blink events, and has an external camera. We knew that our goal to remedy this healthcare crisis could be achieved with AdHawk.

## What it does 💻

Poor eye health has become an increasingly important issue in today’s digital world, and we want to help. While you’re working at your desktop, you’ll wear the wonderful AdHawk glasses. Every 20 minutes or so, our connected app will alert you to look away for a 20-second eye break. With eye tracking, you’ll be required to look at least 20 feet away; otherwise, the timer pauses. We also made an eye exercise game where you move a ball around with your eyes to hit cubes randomly placed on the screen. This engages the eye muscles in a fun and exciting way to improve eye tracking, eye teaming, and myopia.

## How we built it 🛠️

Our frontend uses React.js and Styled Components, with React Three Fiber for the eye exercise game. Our backend uses Python via AdHawk's SDK with Flask, and Firebase for our database.

## Challenges we ran into ⛰️

Setting up the glasses to accurately detect the depth of the user's gaze was difficult, as this was the key metric for ensuring the user was taking a 20-feet eye break for 20 seconds. Connecting this data to the frontend was also a bit of a challenge, though our Flask and React tech stack eventually made for a streamlined integration. We also wanted to record analytics of our user’s screen time by capturing any instances where their viewing distance was closer than a set threshold. This gives users a chance to gauge their eye health and better understand their true viewing habits. It was a bit of a challenge, as it was our first time using CockroachDB.

## Accomplishments that we're proud of 🏅

As coders and avid tech users, we are proud to have built a functioning app that we would actually use in our own lives. Many of us personally struggle with vision problems, and Visionary makes it easy to help reduce these issues, whether it's myopia or eye strain. We’re super proud of the frontend, and of the fact that we were able to successfully incorporate the incredible AdHawk glasses into our project.

## What we learned 📚

Start small and dream big. We made sure the glasses could track viewing distance and send that data to our frontend first, before moving on to other features like a landing page, data analytics, and our database setup.

## What's next for Visionary 🥅

We would love to incorporate other use cases for the AdHawk glasses, including more guided eye exercises with eye tracking, focus tracking by ensuring that the user’s eyes stay on screen, and so much more. Customized settings are also a next step. Visionary would also make for an awesome mobile app, so that users can further reduce eye strain on their phones and tablets. The possibilities are truly, truly endless.
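As a rough illustration of the 20-20-20 timer logic described above, here is a minimal Python sketch. `get_gaze_depth_m` is a hypothetical stand-in for however the AdHawk SDK delivers gaze depth; we are not showing the SDK's real API, just the pause-unless-looking-far-away idea:

```
import time

LOOK_AWAY_DISTANCE_M = 6.1  # roughly 20 feet
BREAK_SECONDS = 20

def run_eye_break(get_gaze_depth_m):
    """Count down a 20-second break, pausing while the gaze is closer than ~20 ft."""
    remaining = BREAK_SECONDS
    while remaining > 0:
        if get_gaze_depth_m() >= LOOK_AWAY_DISTANCE_M:
            remaining -= 1  # only count seconds spent looking far away
        time.sleep(1)
    print("Break complete - back to work!")

# Example with a fake depth source that always looks far away
run_eye_break(lambda: 7.0)
```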
losing
# Inspiration

I recently got attached to Beat Saber, so I thought it'd be fun to build something similar to it.

# Objective

The objective of the game is to score higher than your opponent. Points are scored when a player triggers a hitbox while a note is in contact with it.

**Points Chart:**

**Green:** Perfect hit! The hitbox was triggered when a note was in full contact, **full points + combo bonus**

**Yellow:** Hitbox was triggered when a note was in partial contact, **partial points**

**Red:** Hitbox was triggered when a note was not in contact, **no points**

**Combo (Bonus Points):** Combos are achieved when a hitbox triggers **Green** more than once in a row. Combos add a great amount of bonus to your score and progressively increase in value as the pace of the notes picks up.

# Controls & Info

**HitBox:** The blue circles at the bottom of each player's half of the screen

**Notes:** The orange circles that fall from the top of the screen down to the hitboxes

**Player 1 (Left Side):**

Key "A": Triggers the left hitbox

Key "S": Triggers the center hitbox

Key "D": Triggers the right hitbox

**Player 2 (Right Side):**

Key "J": Triggers the left hitbox

Key "K": Triggers the center hitbox

Key "L": Triggers the right hitbox

# What's next for Rhythm Flow

1. Support for tablets. The game is very much playable on the computer, but its mechanics can also be ported to tablets whose touch screens are large enough for the controls.
2. More game modes. Currently there is only one game mode, where two people compete directly against each other. I have ideas for other game modes where, instead of competing, two players have to collaborate to beat the round.
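To make the points chart above concrete, here is a small Python sketch of how a hit could be scored. The thresholds, point values, and combo formula are illustrative guesses, not the game's actual numbers:

```
def score_hit(overlap, combo, base_points=100):
    """Return (points, new_combo) for a hitbox trigger.

    overlap: fraction of the note inside the hitbox when triggered (0.0-1.0).
    """
    if overlap >= 0.9:                     # Green: (nearly) full contact
        points = base_points + 10 * combo  # combo bonus grows with the streak
        return points, combo + 1
    elif overlap > 0.0:                    # Yellow: partial contact
        return base_points // 2, 0         # partial points, streak resets
    else:                                  # Red: no contact
        return 0, 0

# Three perfect hits in a row build a combo: 100 + 110 + 120 = 330
combo, total = 0, 0
for overlap in (1.0, 0.95, 1.0):
    pts, combo = score_hit(overlap, combo)
    total += pts
print(total)
```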
## Inspiration

With recent booms in AI development, deepfakes have been getting more and more convincing. Social media is an ideal medium for deepfakes to spread, and they can be used to seed misinformation and promote scams. Our goal was to create a system that could be implemented in image/video-based social media platforms like Instagram, TikTok, Reddit, etc. to warn users about potential deepfake content.

## What it does

Our model takes a video as input and analyzes frames to determine instances of that video appearing on the internet. It then outputs several factors that help determine whether a deepfake warning to the user is necessary: URLs of websites where the video has appeared, dates of publication scraped from those websites, previous deepfake IDs (i.e. whether a website already mentions the word "deepfake"), and similarity scores between the content of the video being examined and previous occurrences of it. A warning should be sent to the user if content similarity scores between the video and very similar videos are low (indicating the video has been tampered with), or if the video has previously been IDed as a deepfake by another website.

## How we built it

Our project was split into several main steps:

**a) finding web instances of videos similar to the video under investigation**

We used Google Cloud's Cloud Vision API to detect web entities with content matching the video being examined (including fully matching and partially matching images).

**b) scraping date information from potential website matches**

We utilized the htmldate python library to extract original and updated publication dates from website matches.

**c) determining if a website has already identified the video as a deepfake**

We again used Google Cloud's Cloud Vision API to determine whether the flags "deepfake" or "fake" appeared in website URLs. If they did, we immediately flagged the video as a possible deepfake.

**d) calculating similarity scores between the contents of the examined video and similar videos**

If no deepfake flags have been raised by other websites (step c), we use Google Cloud's Speech-to-Text API to acquire transcripts of the original video and the similar videos found in step a). We then compare pairs of transcripts using a cosine similarity algorithm written in python to determine how similar the contents of two texts are (common, low-meaning words like "the", "and", "or", etc. are ignored when calculating similarity; a minimal sketch of this comparison appears below).

## Challenges we ran into

Neither of us had much experience using Google Cloud, which ended up being a major tool in our project. It took us a while to figure out all the authentication and billing procedures, but it was an extremely useful framework once we got it running. We also found it difficult to find a deepfake online that wasn't already IDed as one (to test our transcript similarity algorithm), so our solution was to create our own amusing deepfakes and test it on those.

## Accomplishments that we're proud of

We're proud that our project mitigates an important problem for online communities. While most current deepfake detection uses AI, malignant AI can simply keep improving to counter detection mechanisms. Our project takes an innovative approach that avoids this problem by instead tracking and analyzing the online history of a video (something that the creators of a deepfake video have no control over).
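To make step d) above concrete, here is a minimal Python sketch of a stopword-filtered cosine similarity between two transcripts. The stopword list and helper names are our own illustration, not the project's actual code:

```
import re
from collections import Counter
from math import sqrt

STOPWORDS = {"the", "and", "or", "a", "an", "of", "to", "in", "is", "it"}

def vectorize(text):
    # Bag-of-words counts, ignoring low-meaning words
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def cosine_similarity(text_a, text_b):
    va, vb = vectorize(text_a), vectorize(text_b)
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Near-identical contents score close to 1.0; a tampered transcript scores lower
print(cosine_similarity(
    "The senator announced a new climate bill today",
    "Today the senator announced a brand new climate bill",
))
```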
## What we learned

While working on this project, we gained experience with a wide variety of tools that we'd never been exposed to before. From Google Cloud to fascinating text analysis algorithms, we got to work with existing frameworks as well as write our own code. We also learned the importance of breaking a big project down into smaller, manageable parts. Once we had organized our workflow into reachable goals, we found that we could delegate tasks to each other and make rapid progress.

## What's next for Deepfake ID

Since our project is (ideally) meant to be integrated with an existing social media app, it's currently a little back-end heavy. We hope to expand this project and get social media platforms on board with using our deepfake detection method to alert their users when a potential deepfake video begins to spread. Since our method of detection has distinct advantages and disadvantages compared to existing AI deepfake detection, the two methods can be combined to create an even more powerful deepfake detection mechanism.

Reach us on Discord: **spica19**
## Intro and Idea

For our team of first-year UofT Engineering Science students, this was our first Makeathon and first project as a team. We have varying levels of experience with software and hardware within our team and decided to approach this competition as both a challenge and a learning experience. After a couple hours of brainstorming based on our collective interests, our team arrived at an idea we were all excited about: an interactive orchestra experience that lets players more easily play together.

Jazz ensembles can easily improvise together because they usually play in the same keys. Classical musicians, on the other hand, are often not able to predict key changes. Our design provides a platform for conductors to change the orchestra in real time according to their vision. By playing chords on a MIDI keyboard, they can “play the orchestra” by transposing and transmitting the chords to the members of the orchestra through wireless connectivity to individual displays powered by Raspberry Pis.

## Planning and Summary

As briefly identified in our introduction, our primary stakeholders for this project are:

* Ourselves (a team of first-year engineering students attempting their first Makeathon)
* Orchestra performers and conductors
* MakeUofT Organizers, Sponsors, and Judges

Based on this, we developed some rough objectives to keep us on track for 24 hours:

* To create a unique but achievable product
* To improve our software and hardware integration skills
* To incorporate sponsor innovations and technologies

To ensure we had something to show after 24 hours, we decided to aim for a minimum viable product (MVP) before adding any bells and whistles. We reached our goals of MIDI communication to the serving computer and note-name communication to the player displays. After we reached our MVP, we expanded the design with visualization of the notes on the staff, differing transposition options, and automated chord analysis. Finally, we implemented chord suggestion and prediction using Azure Machine Learning.

## Features:

* Real-time note communication between MIDI and player displays
* Visual display of notes on the staff
* Automated chord analysis
* Chord suggestions and predictions using Azure Machine Learning
* Multiple differently transposed sections available to accommodate a variety of instruments simultaneously

## Applications:

* Large-group improvisation and composition
* Teaching and training
* Creating new pieces using Azure Machine Learning

## The Process

## Raspberry Pi Setup and Enclosure:

To act as the receiving devices, we use four Raspberry Pis in our project. Each Pi is set up with Raspbian Stretch version 4.14. A 7” touch display screen is attached to one of our Pis, and monitors to the remaining three. Initially, we were going to use small LCD graphic displays, but ruled these out due to size. A 7” touch display was offered to us to borrow, but to keep our cost down, we opted to use monitors for the remaining Pis. Ideally all the Pis would have 7” touch display screens, but we decided one was sufficient for an MVP prototype. Beyond our MVP prototype, we added a push-button that allows the player to cycle through the available sections (more on sections to follow) on the Raspberry Pi. A simple Python script maps the GPIO pin connected to the button to a keystroke (the F5 refresh key for web pages). We left room in the box for more features to be added.
We initially hoped to modify the open-source PIvena Raspberry Pi enclosure, but after beginning our modifications, we realized laser cutting was not offered as a service at MakeUofT. Given the limited fabrication lab hours available to us, we opted to design our enclosure out of foamcore. The display is set at an angle to allow notes to be read easily while playing an instrument. The hardware is located in a box behind the display, making it discreet but easily accessible.

## Network/IoT Setup:

The network setup remained simple throughout the project. The basic concept is to have one computer act as a master, and any number of displays of all shapes and sizes that can receive instructions from the master and join in on the joys of music. To accomplish this, we used a Node.js framework and wrote predominantly JavaScript to manage the interactivity of the different clients. As a result, the final product can run on any platform with a web browser, making it highly accessible and scalable. Furthermore, since it works over a wireless network, it can accept a high volume of hosts without added latency and is very easy to connect to. The network setup is geared towards the IoT model, connecting devices in a collaborative way to enable people to help and support each other in harmony.

Technically speaking, the socketing works using Socket.io, integrating native HTTPS requests to the Azure cloud for machine-learned chord suggestions. The socketing breaks the network into a series of sections, which may all operate and play in different keys, requiring transposition for accessible harmony. The number of sections that can be created is theoretically unbounded, though for our basic demonstration we used four different sections operating in different keys. All sections are shown the classic chord data and have available to them the key in which the orchestra is playing, giving players further liberty to experiment within the piece.

## Machine Learning:

The Azure Machine Learning framework was the center of a feature for the master controller of the project. Provided with the history of chords played, it recommends a good follow-up chord to harmonize. Our machine learning algorithm was fed approximately 3 million data points from pop songs. Though in retrospect we should have trained it with music that features less repetitive, more varied chord progressions, the concept still worked well enough, though in our case there was slight underfitting. The structure that worked well for us was feeding in three points of historical chord data to predict a fourth, to a fair degree of accuracy. This is a useful feature for those who would use this concept to collaborate with others or create something new, as it supports them in the pursuit of a good, harmonic sound.
partial
# 🚗 InsuclaimAI: Simplifying Insurance Claims 📝

## 🌟 Inspiration 💡

After a frustrating experience with a minor fender-bender, I was faced with the overwhelming process of filing an insurance claim. Filling out endless forms, speaking to multiple customer service representatives, and waiting for assessments felt like a second job. That's when I knew there needed to be a more streamlined process. Thus, InsuclaimAI was conceived as a solution to simplify the insurance claim maze.

## 🎓 What I Learned

### 🛠 Technologies

#### 📖 OCR (Optical Character Recognition)

* OCR technologies like OpenCV helped in scanning and reading textual information from physical insurance documents, automating the data extraction phase.

#### 🧠 Machine Learning Algorithms (CNN)

* Utilized Convolutional Neural Networks to analyze and assess damage in photographs, providing an immediate preliminary estimate for claims.

#### 🌐 API Integrations

* Integrated APIs from various insurance providers to automate the claims process. This helped in creating a centralized database for multiple types of insurance.

### 🌈 Other Skills

#### 🎨 Importance of User Experience

* Focused on intuitive design and simple navigation to make the application user-friendly.

#### 🛡️ Data Privacy Laws

* Learned about GDPR, CCPA, and other regional data privacy laws to make sure the application is compliant.

#### 📑 How Insurance Claims Work

* Acquired a deep understanding of the insurance sector, including how claims are filed and processed, and what factors influence the approval or denial of claims.

## 🏗️ How It Was Built

### Step 1️⃣: Research & Planning

* Conducted market research and user interviews to identify pain points.
* Designed a comprehensive flowchart to map out user journeys and backend processes.

### Step 2️⃣: Tech Stack Selection

* After evaluating various programming languages and frameworks, Python, TensorFlow, and Flet (a Python UI framework) were selected as they provided the most robust and scalable solutions.

### Step 3️⃣: Development

#### 📖 OCR

* Integrated Tesseract for OCR capabilities, enabling the app to automatically fill out forms using details from uploaded insurance documents.

#### 📸 Image Analysis

* Used a CNN model trained on thousands of car accident photos to detect damage on automobiles.

#### 🏗️ Backend

##### 📞 Twilio

* Integrated Twilio to facilitate voice calling with insurance agencies. This allows users to reach out to the insurance agency directly, making the process even more seamless.

##### ⛓️ Aleo

* Used Aleo to tokenize PDFs containing sensitive insurance information on the blockchain. This ensures the highest levels of data integrity and security. Every PDF is turned into a unique token that can be securely and transparently tracked.

##### 👁️ Verbwire

* Integrated Verbwire for advanced user authentication using FaceID. This adds an extra layer of security by authenticating users through facial recognition before they can access or modify sensitive insurance information.

#### 🖼️ Frontend

* Used Flet to create a simple yet effective user interface. Incorporated feedback mechanisms for real-time user experience improvements.

## ⛔ Challenges Faced

#### 🔒 Data Privacy

* Researching and implementing data encryption and secure authentication took longer than anticipated, given the sensitive nature of the data.

#### 🌐 API Integration

* Where available, we integrated with insurers' REST APIs, providing a standard way to exchange data between our application and the insurance providers.
This enhanced our application's ability to offer a seamless, centralized service for multiple types of insurance.

#### 🎯 Quality Assurance

* Iteratively improved the OCR and image analysis components to reach a satisfactory level of accuracy, constantly validating results against actual data.

#### 📜 Legal Concerns

* Spent time consulting with legal advisors to ensure compliance with various insurance regulations and data protection laws.

## 🚀 The Future 👁️

InsuclaimAI aims to be a comprehensive insurance claim solution. Beyond just automating the claims process, we plan on collaborating with auto repair shops, towing services, and even medical facilities in the case of personal injuries, to provide a one-stop solution for all post-accident needs.
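As an addendum to the OCR step described above, here is a minimal sketch of extracting text from a scanned insurance document with Tesseract via pytesseract. The file name and the field-matching heuristics are purely illustrative, not the app's actual code:

```
import pytesseract
from PIL import Image

def extract_policy_fields(image_path):
    """OCR an insurance document and pull out a few candidate fields."""
    text = pytesseract.image_to_string(Image.open(image_path))
    fields = {}
    for line in text.splitlines():
        lowered = line.lower()
        if "policy" in lowered:
            fields["policy_line"] = line.strip()  # e.g. "Policy No: 12345"
        elif "name" in lowered:
            fields["name_line"] = line.strip()
    return fields

# Hypothetical usage with a scanned card
print(extract_policy_fields("insurance_card.png"))
```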
## Inspiration

We are inspired by how machine learning can streamline many parts of our lives and minimize possible errors. In the healthcare and financial fields, one of the most common problems in insurance is how to best evaluate a quote for the consumer. Upon seeing the challenge online during the team-formation period, we decided to work on it and devise an algorithm and data model for each consumer, along with a simple app for consumers to use on the front end.

## What it does

Upon starting the app, the user can check different plans offered by the company. They are listed in a ScrollView table, so customers can get a quick idea of what kinds of deals/packages there are. Then, the user can proceed to the "Information" page and fill out their personal information to request a quote from the system; the user data is transmitted to our server, and the predictions are made there. The app then returns a suitable plan for the user, along with data graphs illustrating the general demographics of the program's participants.

## How we built it

The app is built using React Native, which is cross-platform compatible for iOS, Android, and the web. For the model, we used R and Python to train it. We also used Kibana for data visualization and Elasticsearch as the server.

## Challenges we ran into

It is hard to come up with more filters to further perfect our model by observing patterns within the sample data set.

## Accomplishments that we're proud of

Doubling the accuracy of the model we started with by applying different filters and devising different algorithms.

## What we learned

We are now more proficient at training models, developing React Native applications, and using machine learning to solve daily-life problems by spotting patterns in data and turning them into algorithms.

## What's next for ViHack

Further fine-tuning of the recognition model to improve the percentage of correct predictions from our currently trained model.
# butternut

## `buh·tr·nuht` -- `bot or not?`

Is what you're reading online written by a human, or AI? Do the facts hold up? `butternut` is a chrome extension that leverages state-of-the-art text generation models *to combat* state-of-the-art text generation.

## Inspiration

Misinformation spreads like wildfire these days, and it is only aggravated by AI-generated text and articles. We wanted to help fight back.

## What it does

Butternut is a chrome extension that analyzes text to determine just how likely it is that a given article is AI-generated.

## How to install

1. Clone this repository.
2. Open your Chrome Extensions.
3. Drag the `src` folder into the extensions page.

## Usage

1. Open a webpage or a news article you are interested in.
2. Select a piece of text you are interested in.
3. Navigate to the Butternut extension and click on it.
   3.1 The text should be auto-copied into the input area (you can also manually copy and paste text there).
   3.2 Click on "Analyze".
4. After a brief delay, the result will show up.
5. Click on "More Details" for further analysis and a breakdown of the text.
6. "Search More Articles" will do a quick Google search of the pasted text.

## How it works

Butternut is built off the GLTR paper <https://arxiv.org/abs/1906.04043>. It takes any text input and finds out what a text generation model *would've* predicted at each word/token. This array of every possible prediction and its associated probability is cross-referenced with the input text to determine the 'rank' of each token in the text: where on the list of possible predictions the actual token fell. Text whose tokens consistently rank near the top of the prediction list is more likely to be AI-generated, because current text generation models all work by selecting words/tokens that have the highest probability given the words before them. Human-written text, on the other hand, tends to have more variety.

Here are some screenshots of butternut in action with different texts. Green highlighting means predictable, while yellow and red mean unlikely and more unlikely, respectively.

Example of human-generated text:
![human_image](https://cdn.discordapp.com/attachments/795154570442833931/797931974064865300/unknown.png)

Example of GPT text:
![gpt_text](https://cdn.discordapp.com/attachments/795154570442833931/797931307958534185/unknown.png)

This was all wrapped up in a simple Flask API for use in a chrome extension. For more details on how GLTR works, please check out their paper; it's a good read. <https://arxiv.org/abs/1906.04043>

## Tech Stack Choices

Two backends are defined in the [butternut backend repo](https://github.com/btrnt/butternut_backend). The Salesforce CTRL model is used for butternut.

1. GPT-2: GPT-2 is a well-known general-purpose text generation model and is included in the GLTR team's [demo repo](https://github.com/HendrikStrobelt/detecting-fake-text)
2. Salesforce CTRL: [Salesforce CTRL](https://github.com/salesforce/ctrl) (1.6 billion parameters) is bigger than all GPT-2 variants (117 million - 1.5 billion parameters) and is purpose-built for controllable text generation.

A custom CTRL backend was selected for this project because CTRL is trained on an especially large dataset, meaning it has a larger knowledge base to draw from when discriminating between AI- and human-written texts. This, combined with its greater complexity, enables butternut to stay a step ahead of AI text generators.
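To illustrate the per-token rank computation described in "How it works", here is a minimal sketch using GPT-2 through the HuggingFace `transformers` library. Note this is not butternut's actual backend (which uses Salesforce CTRL); GPT-2 is simply the model from the original GLTR demo and keeps the example small:

```
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    """For each token, how highly the model ranked it among all predictions."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits      # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        step_logits = logits[0, pos]    # predictions for the token at pos + 1
        actual = ids[0, pos + 1]
        # rank 1 = the model's single most likely next token
        rank = int((step_logits > step_logits[actual]).sum()) + 1
        ranks.append(rank)
    return ranks

# Consistently low ranks -> highly predictable text, a hint it may be machine-generated
print(token_ranks("The quick brown fox jumps over the lazy dog."))
```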
## Design Decisions

* Used approachable soft colours to create a warm approach towards news and data
* Used a colour legend to assist users in interpreting the highlighted language

## Challenges we ran into

* Deciding how to best represent the data
* Designing a good interface that *invites* people to fact-check instead of being scared of it
* Figuring out how to best calculate the overall score given a tricky rank distribution

## Accomplishments that we're proud of

* Making stuff accessible: implementing a paper in such a way as to make it useful **in under 24 hours!**

## What we learned

* Using CTRL
* How simple it is to make an API with Flask
* How to make a chrome extension
* Lots about NLP!

## What's next?

Butternut may be extended to improve its fact-checking abilities:

* Text sentiment analysis for fact-checking
* Updated backends with more powerful text prediction models
* Perspective analysis & showing other perspectives on the same topic

Made with care by:

![Group photo](https://cdn.discordapp.com/attachments/795154570442833931/797730842234978324/unknown.png)

```
// our team:
{
    'group_member_0': [brian chen](https://github.com/ihasdapie),
    'group_member_1': [trung bui](https://github.com/imqt),
    'group_member_2': [vivian wi](https://github.com/vvnwu),
    'group_member_3': [hans sy](https://github.com/hanssy130)
}
```

Github links:

[butternut frontend](https://github.com/btrnt/butternut)

[butternut backend](https://github.com/btrnt/butternut_backend)
partial
## Inspiration

Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members, a former tennis enthusiast, has always strived to refine their skills, but realized that the post-game analysis process took too much time in a busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights players can use to improve.

## What it does and how we built it

TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural-language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data rendered with React.js, offering a comprehensive overview and further insights into the player's performance.

## Challenges we ran into

Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during development. The depth of the ball was a particular challenge, since we were limited to one camera angle, but we were able to tackle it using machine learning techniques.

## Accomplishments that we're proud of

We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team.

## What we learned

Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also deepened our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users.

## What's next for TRACY

Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to the diverse needs of the tennis community.
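For a sense of what frame-by-frame ball tracking can look like, here is a simplified OpenCV sketch that follows a tennis ball by its colour. TRACY's actual pipeline uses pre-trained neural networks rather than colour thresholding, and the HSV range and file name below are illustrative only:

```
import cv2
import numpy as np

# Rough HSV range for a yellow-green tennis ball; needs tuning per lighting
LOWER = np.array([25, 80, 80])
UPPER = np.array([45, 255, 255])

cap = cv2.VideoCapture("rally.mp4")
trajectory = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)  # assume the biggest blob is the ball
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 3:                          # ignore specks of noise
            trajectory.append((int(x), int(y)))

cap.release()
print(f"Tracked {len(trajectory)} ball positions")
```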
## Inspiration

More money, more problems. Lacking an easy, accessible, and secure method of transferring money? Even more problems. An interesting solution has been the rise of WeChat Pay, which lets merchants use QR codes and social media to make digital payments. But where does this leave people without sufficient bandwidth? Without reliable, adequate Wi-Fi, technologies like WeChat Pay and Google Pay simply aren't options. People looking to make money transfers are forced to choose between bloated fees and dangerously long wait times.

As designers, programmers, and students, we tend to think about how we can design tech. But how do you design tech for that negative space? During our research, we found that of the people who lack adequate bandwidth, 1.28 billion have access to mobile service. This ultimately led to our solution: **Money might not grow on trees, but Paypayas do.** 🍈

## What it does

Paypaya is an SMS chatbot application that allows users to perform simple and safe transfers using just text messages. Users start by texting a toll-free number. Doing so opens a digital wallet that is authenticated by their voice. From that point, users can easily transfer, deposit, withdraw, or view their balance.

Despite being built for low-bandwidth regions, Paypaya also has huge market potential in high-bandwidth areas. Whether you are a small business owner who can't afford a swipe machine or a charity trying to raise funds in a contactless way, the possibilities are endless.

Try it for yourself by texting +1-833-729-0967

## How we built it

We first set up our Flask application in a Docker container on Google Cloud Run to streamline cross-OS development. We then set up our database using MongoDB Atlas. Within the app, we integrated the Twilio and PayPal APIs to create a digital wallet and perform the application's commands. After creating the primary functionality of the app, we implemented voice authentication by collecting voice clips from Twilio and running them through Microsoft Azure's Speaker Recognition API. For our branding and slides, everything was made vector by vector in Figma.

## Challenges we ran into

Man. Where do we start. Although it was fun, working in a two-person team meant that we were both wearing (too) many hats. In terms of technical problems, the PayPal API documentation was archaic, making it extremely difficult for us to figure out how to call the necessary functions. It was also really difficult to convert the audio from Twilio to a byte stream for the Azure API. Lastly, we had trouble keeping track of conversation state in the chatbot, as we were limited by how the webhook was called by Twilio.

## Accomplishments that we're proud of

We're really proud of creating a fully functioning MVP! All 6 of our moving parts came together to form a working proof of concept. All of our graphics (slides, logo, collages) were made from scratch. :))

## What we learned

Anson - As a first-time back-end developer, I learned SO much about using APIs, webhooks, databases, and servers. I also learned that Jacky falls asleep super easily.

Jacky - I learned that Microsoft Azure and Twilio can be a pain to work with and that Google Cloud Run is a blessing and a half. I learned I don't have the energy to stay up 36 hours straight for a hackathon anymore 🙃

## What's next for Paypaya

More language options! English is far from the native tongue of the world. By expanding the languages available, Paypaya will be accessible to even more people.
We would also love to do more with financial planning, providing a log of previous transactions so individuals can track their spending and income. There are also a lot of rough edges and edge cases in the program flow, so patching those up will be important in bringing this to market.
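For anyone curious what the SMS side of a system like this looks like, here is a minimal sketch of a Twilio-style Flask webhook that routes a couple of wallet commands. The command set and the in-memory "wallet" are illustrative stand-ins for Paypaya's real MongoDB/PayPal/voice-auth flow:

```
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
balances = {}  # toy in-memory wallet, purely for illustration

@app.route("/sms", methods=["POST"])
def sms_webhook():
    sender = request.form["From"]  # Twilio posts the sender's number
    body = request.form["Body"].strip().upper()
    resp = MessagingResponse()

    if body == "BALANCE":
        resp.message(f"Your balance is ${balances.get(sender, 0.0):.2f}")
    elif body.startswith("DEPOSIT"):
        amount = float(body.split()[1])
        balances[sender] = balances.get(sender, 0.0) + amount
        resp.message(f"Deposited ${amount:.2f}")
    else:
        resp.message("Commands: BALANCE, DEPOSIT <amount>")
    return str(resp)  # TwiML reply sent back to the user as an SMS

if __name__ == "__main__":
    app.run(port=5000)
```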
# Catch! (Around the World)

## Our Inspiration

Catch has to be one of our favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the world, we thought it'd be nice to play catch with those relatives we haven't seen due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic, so who says we can't play a little game while social distancing?

## What it does

Our application uses AR and Unity to let you play catch with another person from somewhere else in the globe! You can tap a button to throw a ball (or a random object) off into space, and the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chat application, so they can keep some commentary going while they play catch.

## How we built it

For the AR functionality of the application, we used **Unity** with **ARFoundations** and **ARKit/ARCore**. To record a user sending the ball/object to another user, we used a **Firebase Realtime Database** back-end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized **EchoAR** to create/instantiate the different 3D objects that users can choose to throw. The chat application was developed using **Python Flask**, **HTML**, and **Socket.io** to create bi-directional communication between the web user and the server.

## Challenges we ran into

Initially we had a different idea for this hackathon. After a couple of hours of planning and developing, we realized our goal was far too complex and too difficult to complete in the given time frame. As such, our biggest challenge was settling on a project that was doable within the time of this hackathon. This ties into another challenge: the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve. There was also some difficulty learning how to use the EchoAR API with Unity, since it has a specific method of generating the AR objects, but we were able to use the tool without investigating too far into the code.

## Accomplishments

* Working Unity application with AR
* Use of EchoAR and integrating it with our application
* Learning how to use Firebase
* Creating a working chat application between multiple users
winning
## Inspiration

Social media has quickly become the primary source of news for many around the world, and the prevalence of fake news has grown with it. With a range of recent natural disasters and the spread of Coronavirus (COVID-19) raising the stakes of acting on all available information, more and more individuals are falling victim to lies spread by fake news. This interesting issue led us to discover a market gap for a tool to aggregate, filter, and visualize global social media activity over a period of time. Our project, Re:Action, aims to empower users by providing access to an aggregated, formatted set of tweets from around the world. This lets them spot the inconsistencies characteristic of fake news stories relative to what the larger majority reports, so they can be fully informed before they take action on the ever-rapidly changing situation of our world.

## What it does

Users can query keywords and visualize any relevant tweets according to their geographical locations around the world. The keyword is used as a search parameter for both the news article and tweet scrapers, which capture a large collection of relevant elements of information. Using type metadata, geolocation and element information is extracted and then displayed at the corresponding location on the web app's world map.

## How I built it

We began by planning and wireframing our project using Figma. For the front-end, we used React and several custom-built JavaScript libraries to display hotspots and clusters of hits around the world. For the back-end, we used Python with Tweepy/GetOldTweets3 to implement our web scrapers and data processing. After serving our multi-threaded scripts with Flask, all relevant information was stored in a MongoDB Atlas database, where it was then called upon and displayed in the web app.

## Challenges I ran into

The official Twitter Search API that we used severely limited the number of calls we could make (a maximum of 300 queries across 18 requests every 15 minutes), which made obtaining a large enough data set to train our machine learning model difficult. Many of the tweets included incomplete or improperly formatted location data, which made the visualization/plotting process difficult and forced us to rely on other methods to identify a suitable map location for the collected elements.

## Accomplishments that I'm proud of

Some of the libraries featured in the front end of our web app were made from scratch.
## Inspiration

A [paper](https://arxiv.org/pdf/1610.09225.pdf) by Indian Institute of Technology researchers described how stock predictions using sentiment analysis had a higher accuracy rate than those analyzing previous trends. We decided to implement that idea and create a real-time, self-updating web app that visually shows how the public feels about the big-name stock companies. What better way, then, than to use the most popular and relatable images on the web: memes?

## What it does

The application retrieves text content from Twitter, performs sentiment analysis on tweets, and generates meme images based on the sentiment.

## How we built it

The whole implementation is divided into four parts: scraping data, processing data, analysing data, and visualizing data. For scraping, we planned to use a Python data-scraping library, targeting websites where users are active and speak their minds freely; we wanted unbiased, representative data to give us a more accurate result. For processing, since scraping websites yields a lot of noise and we want our data to be concise and quick to feed to our algorithm, we planned to use regular expressions to create a generic template that ignores all the emoticons.

## Challenges we ran into

We encountered some technical, architectural, and timing issues. For example, when we tried to scrape data from Twitter, we ran into noise issues: many users include emoticons and uncommon symbols in their tweets, and that information does not help us find how users actually react to things. To solve this, we came up with the idea of using regular expressions to form a template that only scrapes useful data. However, due to the limited time at a hackathon, we increased efficiency by using Twitter’s Search API instead. Furthermore, we realized towards the end of our project that the MemeAPI had been discontinued and that it was not possible to generate memes with it.

## Accomplishments that we're proud of

* Designing the project around a multi-server architecture
* Utilizing Google Cloud Platform, the Twitter API, and the MemeAPI

## What we learned

* Google Cloud Platform, especially the Natural Language and Vision APIs
* AWS
* React

## What's next for $MMM

* Getting real-time big data, probably with Spark
* Including more data visualization methods, possibly with D3.js
* Designing a better algorithm to find memes reflecting the public's sentiment towards a company
* Creating more dank memes
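As a sketch of the analysis step, here is roughly how tweet sentiment could be scored with Google Cloud's Natural Language API and mapped to a meme mood. The bucket names and thresholds are our own illustration (especially since the MemeAPI we planned to use was discontinued):

```
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def sentiment_to_meme(tweets):
    """Average tweet sentiment, then pick an illustrative meme bucket."""
    scores = []
    for text in tweets:
        doc = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT
        )
        result = client.analyze_sentiment(request={"document": doc})
        scores.append(result.document_sentiment.score)  # ranges -1.0 .. 1.0
    avg = sum(scores) / len(scores)
    if avg > 0.25:
        return "stonks"              # bullish mood
    if avg < -0.25:
        return "not_stonks"          # bearish mood
    return "confused_math_lady"     # mixed signals

print(sentiment_to_meme(["$TSLA to the moon!", "earnings looked great"]))
```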
## Inspiration

**Reddit card threads and Minji's brother's military service**

We know these two things sound a little funny together, but trust us, they formulated an idea. Our group was discussing the multiple threads on Reddit about mailing cards to sick and unfortunate children to cheer them up. We thought there must be an easier, more efficient way to accomplish this. Our group also began to chat about Minji's brother, who served in the Republic of Korea Armed Forces. We talked about his limited Internet access, and how he tried to efficiently manage communication with those who supported him. Light bulb! Why not make a website dedicated to combining everyone's love and support in one convenient place?

## What it does

**Videos and photos and text, oh my!**

A little bit of love can go a long way with Cheerluck. Our user interface is very simple, intuitive, and responsive, so audiences of all ages can post and enjoy the website with little to no hassle. The theme is simple, bright, and lighthearted to create a cheerful experience for the user. Beyond the aesthetic, the core function of the website is creating personal pages for those in stressful or undesirable times, such as patients, soldiers, those in the Peace Corps, and so on. Once a user has created a page for someone, people are welcome to either (a) create a text post, (b) upload photos, or (c) use their webcam/phone camera to record a video greeting to post. The Disqus and Ziggeo APIs allow for moderation of content. These posts are all appended to the user's page, and the recipient can visit the link whenever they want as a great source of love, cheer, and comfort. For example, if this had existed when Jihoon was in the military, he could've used his limited internet time more efficiently by visiting this one page where his family and friends were posting updates on their lives. This visual scrapbook can put a smile on anyone's face, young or old, on desktop or mobile!

## How we built it

• HTML, CSS, Javascript, JQuery, Node.js, Bootstrap (worked off of a theme)
• APIs: Ziggeo (videos), Disqus (commenting/photos)
• Hosted on Heroku using our domain.com name
• Affinity Photo and Affinity Designer were used to create graphic design elements

## Challenges we ran into

**36 hours: Not as long as you’d think**

When this idea first came about, we got a little carried away with the functionality we wanted to add. Our main challenge was racing the clock. Debugging took up a lot of time, as did researching documentation on how to effectively put all of these pieces together. We left some important elements out, but are overall proud of what we have to present based on our prior knowledge!

## Accomplishments that we're proud of

Our group is interested in web development, but all of us had little to no knowledge of it. So, we decided to take on the challenge of tackling it this weekend! We were very excited to test out different APIs to make our site functional, and to work with the different frameworks that all the cool kids talk about. Given the amount of time, we're proud that we have a presentable website that can definitely be built upon in the future. This challenge was more difficult than we thought it would be, but we’re proud of what we accomplished and will use this as a big learning experience going forward.

## What we learned

• A couple of us knew the very basics of HTML, CSS, Bootstrap, Node.js, and Heroku.
We learned how they interact with each other and come together in order to publish a website.

• How to integrate APIs to help our web app be functional
• How to troubleshoot problems related to hosting the website
• How to use the nifty features of Bootstrap (columns! So wonderful!)
• How to host a website on an actual .com domain (thanks domain.com!)

## What's next for Cheerluck

We hope to expand upon this project at some point; there are a lot of features that could be added, and this could become a full-fledged web app someday. There are definitely a lot of security concerns for something as open as this, so we’d hope to add filters to make approving posts easier. Users could view all pages and search for causes they’d like to spread cheer to. We would also like to add the ability to make a page public or private. If we’re feeling really fancy, we’d love to make each page customizable to a certain degree, such as different colored buttons. There will always be people in difficult situations who need support from loved ones, young and old, and this accessible, simple solution could be an appealing platform for anyone with internet access.
losing
## Inspiration

No one likes being stranded at late hours in an unknown place with unreliable transit as the only safe, affordable option to get home. Between paying for an expensive taxi ride yourself or sharing a taxi with random street-goers, the current options aren't looking great. WeGo aims to streamline taxi ride sharing, creating a safe, efficient, and affordable option.

## What it does

WeGo connects you with people around you with similar destinations who are also looking to share a taxi. The application aims to reduce taxi costs by splitting rides, improve taxi efficiency by planning routes intelligently, and improve sustainability by encouraging ride sharing.

### User Process

1. User logs in to the app/web.
2. Nearby riders requesting rides are shown.
3. The user may then choose to "request" a ride by entering a destination.
4. Once the system finds a suitable group of people within close proximity, the user is sent the taxi pickup and rider information. (The taxi request is initiated.)
5. User hops on the taxi, along with other members of the application!

## How we built it

The user begins by logging in through their web browser (ReactJS) or mobile device (Android). Through API calls to our NodeJS backend, our system analyzes outstanding requests and intelligently groups people together based on location, user ratings, and similar destinations, all in real time.

## Challenges we ran into

A big hurdle we faced was the complexity of our ride analysis algorithm. To create the most cost-efficient solution for the user, we wanted to always try to fill up taxi cars completely. This, along with scaling the system to support multiple locations with high taxi-request traffic, was definitely a challenge for our team.

## Accomplishments that we're proud of

Looking back on our work over the 24 hours, our team is really excited about a few things. First, encouraging sustainability on a city-wide scale is something really important to us. With the future leaning towards autonomous vehicles and taxis, we see a system like WeGo as something necessary down the road. On the technical side, we're really excited to have a single, robust backend that can serve multiple front-end apps. We see this as necessary for mass adoption of any product, especially for solving a problem like ours.

## What we learned

Our team members definitely learned quite a few things over the last 24 hours at nwHacks (both technical and non-technical!). Working under a time crunch, we really had to rethink how we managed our time to ensure we were always working efficiently towards our goal. Coming from different backgrounds, team members learned new technical skills such as interfacing with the Google Maps API, using Node.js on the backend, and developing native mobile apps with Android Studio. Through all of this, we learned that persistence is key when solving a new problem outside of your comfort zone. (Sometimes you need to throw everything and the kitchen sink at the problem at hand!)

## What's next for WeGo

The team wants to look at improving the overall user experience with better UI, figuring out tools better suited to specifically what we're looking for, and adding improved taxi and payment integration services.
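To give a flavour of the ride-grouping idea described in "How we built it", here is a toy Python sketch that greedily buckets requests whose pickups and destinations are close together. The real backend also weighs user ratings and road distance; the capacity, radius, and flat-plane coordinates here are illustrative:

```
from math import hypot

CAPACITY = 4    # try to fill taxis completely
MAX_DIST = 0.5  # illustrative "close enough" radius

def near(p, q):
    return hypot(p[0] - q[0], p[1] - q[1]) < MAX_DIST

def group_requests(requests):
    """Greedily group riders with nearby pickups and nearby destinations."""
    groups = []
    for req in requests:
        for group in groups:
            anchor = group[0]
            if (len(group) < CAPACITY
                    and near(req["pickup"], anchor["pickup"])
                    and near(req["dest"], anchor["dest"])):
                group.append(req)
                break
        else:
            groups.append([req])  # no compatible group, so start a new taxi
    return groups

rides = [{"pickup": (0.0, 0.0), "dest": (5.0, 5.0)},
         {"pickup": (0.1, 0.2), "dest": (5.2, 5.1)},
         {"pickup": (9.0, 9.0), "dest": (1.0, 1.0)}]
print([len(g) for g in group_requests(rides)])  # [2, 1]
```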
## Inspiration

While using ridesharing apps such as Uber and Lyft, passengers, particularly those of marginalized identities, have reported feeling unsafe or uncomfortable being alone in a car. In our user interviews, every woman mentioned personal safety as one of her top concerns within a rideshare, and about 23% of American women have reported a driver for inappropriate behavior. Many apps have attempted to mitigate this issue by creating rideshare services that hire only female drivers; however, these apps have quickly been shut down due to discrimination laws. Additionally, around 40% of Uber and Lyft drivers are white males, possibly because many minorities feel uncomfortable in certain situations as a driver. We aimed to create a rideshare app that provides the same sense of safety and comfort the aforementioned apps aimed for, while making sure all backgrounds are represented and accounted for.

## What it does

Our app, Driversity (stylized DRiversity), works similarly to other ridesharing apps, with features in place to assure that both riders and drivers feel safe. The most important feature we'd like to highlight alerts the user if a driver goes off the correct path to the destination designated by the rider. The app will then ask the user if they would like to call 911 to notify them of the driver's actions. Additionally, many of our user interviews indicated that women often prefer to walk around while waiting for a rideshare driver, especially at night, out of safety concerns. The app provides an option that lets users walk around while waiting for their rideshare, notifying the driver of their dynamic location. After selecting a destination, the user can choose from a selection of three drivers in the app. On this selection screen, the app details both identity and personality traits of the drivers, so riders can select drivers they feel comfortable riding with. Users also have the option to provide feedback on their trip afterward, as well as rate the driver on aspects such as cleanliness, safe driving, and comfort level. The app uses these ratings to suggest drivers that users similar to them rated highly.

## How we built it

We built it using Android Studio in Java for full-stack development. We used the Google Maps JavaScript API to display the map when the user selects destinations and to track their own location on the map. We used Firebase to store information and authenticate the user, DocuSign for drivers to sign preliminary papers, and OpenXC to calculate whether a driver was traveling safely and at the speed limit. To give drivers benefits, we give them the choice to invest 5% of their income, letting it grow naturally as the market rises.

## Challenges we ran into

We weren't very familiar with Android Studio, so we first attempted to use React Native for our application, but we struggled a lot implementing many of the APIs we were using with React Native, so we decided to use Android Studio as originally intended.

## What's next for Driversity

We would like to develop more features on the driver's side to help drivers feel more comfortable as well. We would also like to incorporate the Amadeus travel APIs.
## Inspiration
With all team members living in urban cities, it was easy to use up all of our mobile data while on the go. Between looking up nearby restaurants and playing Pokémon Go, it was easy to chew through the limited data we had. We would constantly be looking for a nearby Tim Hortons, just to leech their wifi and look for a bus route to get home safely. Therefore, we drew our inspiration from living out the reality that our phones simply are not as useful without mobile data, and we know that many people around the globe depend on mobile data for both safety and convenience with their devices. With **NAVIGATR**, users will not have to rely on their mobile data to find travel information, weather, and more.
## What it does
NAVIGATR uses machine learning and scrapes real-time data to respond to any inquiry you might have when data isn't available to you. We have kept in mind that the main issues people may have when on the go and out of mobile data are travel times, bus routes, destination information, and weather information. So, NAVIGATR is able to pull all this information together to let users use their phones to the fullest even without access to mobile data; additionally, users can have peace of mind when on the go - they will always have the information they need to get home safely.
## How we built it
We built NAVIGATR using a variety of technical tools; more specifically, we started with Twilio. Twilio catches the SMS messages that are sent, and invokes a webhook to reply back to the message. Next, we use BeautifulSoup to scrape and provide data from Google searches to answer queries; additionally, our machine learning model, GPT-3, can respond to general inquiries. Lastly, this is all tied together using Python, which facilitates communication between tools and catches user input errors.
## Challenges we ran into
Mainly, the TTC API was outdated by twelve years; therefore, we had to shift our focus to web scraping. Web scraping is more reliable than the TTC API, and we were able to create our application knowing all information is accurate. Furthermore, the client is allowed to input any starting point and destination they wish, and our application is now not limited to just the Toronto area.
## Accomplishments that we're proud of
We believe that we were able to address a very relevant issue in modern-day society: safety in urban environments and mobile-data paywalls. With the explosion of technology in the last two decades, there is no reason why innovation cannot be used to streamline information in this way. Moreover, we wanted to create an application that has genuine use for people around the globe; this goal led us to innovate with the aim of improving the daily lives of a variety of people.
## What we learned
We learned how to handle web scraping off Google, as well as creating webhooks and utilizing machine learning models to bring our ideas to life.
## What's next for NAVIGATR
Next, we would like to implement a wider variety of tools that align with our mission of providing users with simple answers to questions that they may have. Continuing on the theme of safety, we would like to add features which inform a user about high-density vs. low-density areas, weather warnings, as well as secure vs. higher-risk travel routes. We believe that all of these features would greatly increase the impact NAVIGATR would have in a user's everyday life.
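A minimal sketch of the Twilio-to-Python flow described above, assuming a Flask webhook; `answer_query` is a hypothetical stand-in for the scraping/GPT-3 logic, not NAVIGATR's actual function:

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def sms_reply():
    # Twilio posts the inbound SMS as form data; "Body" is the message text.
    query = request.form.get("Body", "").strip()
    answer = answer_query(query)

    # Reply with TwiML so Twilio sends the answer back over SMS.
    resp = MessagingResponse()
    resp.message(answer)
    return str(resp)

def answer_query(query: str) -> str:
    # Placeholder: route the query to a Google-results scraper or an LLM.
    return f"You asked: {query}"

if __name__ == "__main__":
    app.run(port=5000)
```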
## Inspiration
Sometimes we don’t think too much about how infrastructure impacts the way we live our lives. Unreported icy sidewalks or a dimly lit alleyway could pose an inconvenience or potential danger to a person’s life. Living in the age of information, we believe the spread of knowledge through community is very important when we are given so many resources that could make everyday living safer. Though this project is only focused on the local areas of its users, these small reports can provide a way for incremental impact – logistical analysis to improve infrastructure, safety, and awareness.
## What it does
SusMap addresses the problem of infrastructure by providing a map of all pinpointed data collected from users. With this data, users from around the local area are notified of the locations (longitude/latitude coordinates) of hazards, suspicious activity, and accessibility issues. When a user submits a report, they choose what type of activity (hazard, suspicion, accessibility) they would like to report, and under that activity they choose a subtype (for instance, if the user chooses hazard, they can then report what kind of hazard: a slippery stairway, or icicle droppings). This is submitted to the database that we have set up with Firebase. Our database stores each data entry, which consists of 5 fields – description, latitude, longitude, type and subtype – in individual documents. In order to keep track of how many incidents have been reported in the same area, the algorithm tallies each time the coordinates match up or are within proximity to one another. These counts also help us assess the credibility of the reports – a lower count after a certain amount of time lowers the priority of the report.
## How we built it
Our app is built on Esri’s Feature and Map APIs. This played a huge part in the foundation of our code because it gave us a user interface for working with geolocations to pinpoint the different issue areas. We bounced back and forth a lot on whether we should use JavaScript/HTML or create a React.js app and just add a map on top of that. After speaking to different mentors, we stuck with what was known best by the majority of the group: JavaScript/HTML. We used Firebase to create a database to store our data on the cloud. This allows us to easily add and retrieve data after asking for user inputs. Then we used App Engine to deploy our web application.
## Challenges we ran into
Our biggest challenge was the stack decision described above: choosing between React.js and plain JavaScript/HTML/CSS took a lot of back and forth before we settled on what the majority of the group knew best.
## Accomplishments that we’re proud of
We're super excited that we were able to successfully utilize and integrate Esri's APIs into our application. In addition, we were able to host our project on the Google Cloud Platform. Lastly, we're proud of designing our project with a UI/UX-first focus, allowing for an easy-to-use and seamless experience for the user.
## What we learned
As a beginner team in web development, we all had to refresh our skills and/or learn HTML/CSS/JavaScript. We also learned a lot about integrating APIs into our project, and how to test and deploy our web app using Google Compute Engine and Google App Engine.
## What’s next for SusMap?
We would like to turn this into a mobile app for better accessibility and to create an easier way of reporting and retrieving data. We would also like to create a function that would allow interested parties, such as school administrators and government officials who oversee public works, to query the information in the database in order to make any necessary improvements.
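The proximity tally and time-based credibility described under "What it does" could look roughly like this Python sketch (the real app is JavaScript with Firebase; the radius, half-life, and helper names are assumptions):

```python
import time

# Each report is a dict with the five stored fields plus a timestamp.
REPORTS = []  # in production this would live in Firebase documents

def proximity_count(report, radius_deg=0.0005):
    """Tally stored reports of the same type that fall within a small
    lat/lon box around the new report (a crude 'same spot' check)."""
    return sum(
        abs(r["latitude"] - report["latitude"]) <= radius_deg
        and abs(r["longitude"] - report["longitude"]) <= radius_deg
        and r["type"] == report["type"]
        for r in REPORTS
    )

def priority(report, now=None, half_life_s=7 * 24 * 3600):
    """Reports gain priority with corroborating neighbours and lose it
    as they age without confirmation."""
    now = now or time.time()
    decay = 0.5 ** ((now - report["timestamp"]) / half_life_s)
    return (1 + proximity_count(report)) * decay
```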
## Inspiration
We went along with the idea from Team Formation but also had to decide on a puzzle that was simple enough to make progress on in 2 days.
## What it does
Lets users sort a shuffled set of colors. The colors are generated using [Android 12's dynamic color APIs](https://m3.material.io/styles/color/dynamic-color/overview).
## How we built it
It's a native Android app, so we relied heavily on Android Studio. We used Figma and other tools for mockups and a couple of Android libraries to help with the UI.
## Challenges we ran into
The biggest challenge was getting the colors to actually swap. The library did not support this out of the box, so we literally *hacked* our way to a solution.
## Accomplishments that we're proud of
The aforementioned challenge was solved after a couple hours of thinking and some [nasty code](https://github.com/imashnake0/Tiles/blob/53307a5db62d9efe93d30a7d65c8f380b9352897/app/src/main/java/com/imashnake/tiles/features/Tiles.kt#L75-L99).
## What we learned
Not to use code like this in production.
## What's next for Tiles
* Have pictures instead of just colors.
* Improve the animation for the `to` block (prevent the current snapping behaviour).
* Add functionality for some buttons.
* Branding: we don't have a logo yet!
## Inspiration
The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed Disasteraid.ai to provide concise, accurate, and timely updates.
## What it does
Disasteraid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico Wildfires and Floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies.
## How we built it
1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises.
2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG).
3. AI Processing: The compiled information was input into Anthropic AI's Claude 3.5 model.
4. Output Generation: The AI model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations.
## Challenges we ran into
1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge.
2. Real-Time Processing: Developing a system capable of processing and summarizing information in real time required sophisticated algorithms and infrastructure.
3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret the information presented by the platform.
## Accomplishments that we're proud of
1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations.
2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform.
3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts.
## What we learned
1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management and the complexities involved in verifying information from various sources.
2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately.
3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve these needs.
## What's next for DisasterAid.ai
1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates.
2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses.
3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface.
4. Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of Disasteraid.ai.
5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
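A minimal sketch of the AI-processing step, using Anthropic's Python SDK; the prompt, the model id, and the `summarize_crisis` helper are our own assumptions, not the project's actual code:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_crisis(snippets: list[str], question: str) -> str:
    """Condense scraped/RAG context into a concise info-bite answering the user."""
    context = "\n\n".join(snippets)
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model id
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": (
                "Using only the sources below, answer the question as a "
                "concise, factual info-bite and list any coordinates "
                "mentioned.\n\nSOURCES:\n" + context +
                "\n\nQUESTION: " + question
            ),
        }],
    )
    return msg.content[0].text
```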
## Reimagining Patient Education and Treatment Delivery through Gamification
Imagine walking into a doctor's office to find out you’ve been diagnosed with a chronic illness. All of a sudden, you have a slew of diverse healthcare appointments, ongoing medication or lifestyle adjustments, lots of education about the condition and more. While in the clinic/hospital, you can at least ask the doctor questions and try to make sense of your condition & management plan. But once you leave to go home, **you’re left largely on your own**. We found that there is a significant disconnect between physicians and patients after patients are discharged and diagnosed with a particular condition. Physicians will hand patients a piece of paper with suggested items to follow as part of a "treatment plan". But after this diagnosis meeting, it is hard for physicians to keep up to date with their patients on the progress of the plan. The result? Not surprisingly, patients **quickly fall off and don’t adhere** to their treatment plans, costing the healthcare system **upwards of $300 billion** as they get readmitted due to worsening conditions that may have been prevented. But it doesn’t have to be that way… We're building an engaging end-to-end experience for patients managing chronic conditions, starting with one of the most prevalent ones - diabetes. **More than 100 million U.S. adults are now living with diabetes or prediabetes.**
## How does Glucose Guardian Work?
Glucose Guardian is a scalable way to gamify education for chronic conditions using an existing clinical technique called “teachback” (see here: [link](https://patientengagementhit.com/features/developing-patient-teach-back-to-improve-patient-education)). We plan to partner with clinics and organizations, scrape their existing websites/documents where they house all their information about the chronic condition, and instantly convert that into short (up to 2 min) voice modules. Glucose Guardian users can complete these short, guided, voice-based modules that teach and validate their understanding of their medical condition. Participation and correctness earn points which go towards real-life rewards, for which we plan to partner with rewards organizations and corporate programs. Glucose Guardian users can also go to the app to enter their progress on various aspects of their personalized treatment plan. Their activity on this part of the app is also incentive-driven. This is inspired by non-health products our team has experience with: very low-barrier, audio-driven games that have been proven to drive user engagement through the roof.
## How we built it
We've simplified how we can use gamification to transform patient education & treatment adherence by making it more digestible and fun. We ran through some design thinking sessions to work out how we could create a solution that wouldn’t simply look great but could be implemented clinically and be HIPAA compliant. We then built Glucose Guardian as a native iOS application using Swift. Behind the scenes, we use Python toolkits to perform some of our text matching for patient education modules, and we utilize AWS for infrastructure needs.
## Challenges we ran into
It was difficult to navigate the pre-existing market of patient adherence apps and create a solution that was unique and adaptable to clinical workflow. To tackle this, we dedicated ample time to stepping through user journeys - patients, physicians and allied health professionals.
Through this strategy, we identified education as our focus because it is critical to treatment adherence and a patient-centric solution.
## We're proud of this
We've built something that has the potential to fulfill a large unmet need in the healthcare space, and we're excited to see how the app is received by beta testers, healthcare partners, and corporate wellness organizations.
## Learning Points
Glucose Guardian has given our cross-disciplinary team the chance to learn more about the intersection of software + healthcare. Through developing speech-to-text features, designing UIs, scraping data, and walking through patient journeys, we've maximized our time to learn as much as possible in order to deliver the biggest impact.
## Looking Ahead
As per the namesake, so far we've implemented one use case (diabetes) but are planning to expand to many other diseases. We'd also like to continue building other flows beyond patient education. This includes components such as the gamified digital treatment plan, which can utilize existing data from wearables and wellness apps to provide a consolidated view of the patient's post-discharge health. Beyond that, we also see potential for our platform to serve as a treasure trove of data for clinical research and medical training. We're excited to keep building and keep creating more impact.
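The teachback validation could be prototyped with nothing more than the standard library; a minimal Python sketch, where the threshold and helper names are our own assumptions rather than the app's actual text-matching toolkit:

```python
from difflib import SequenceMatcher

def teachback_score(expected: str, spoken: str) -> float:
    """Rough similarity between the module's key point and the user's
    transcribed answer, in [0, 1]."""
    norm = lambda s: " ".join(s.lower().split())
    return SequenceMatcher(None, norm(expected), norm(spoken)).ratio()

def grade(expected: str, spoken: str, threshold: float = 0.6) -> bool:
    """Award points when the paraphrase is close enough to the key point."""
    return teachback_score(expected, spoken) >= threshold

# grade("check your blood sugar before meals",
#       "check my blood sugar before meals")  -> True
```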
## Inspiration
We wanted to build a better, more reliable healthcare monitoring system. One of the worst things that can occur to a person is hearing the news of a loved one’s lethal drug overdose or another healthcare crisis. This can occur for a number of reasons: some people have conditions that cause them to forget their proper medication amounts, while others may simply be thinking and acting in a rash manner.
## What it does
The system is designed to securely keep track of medications as well as prescriptions given to people.
## Inspiration:
Our inspiration stems from the identification of two critical problems patients face in the health industry: information overload and inadequate support post-diagnosis, resulting in isolation. We saw an opportunity to leverage computer vision, machine learning, and user-friendly interfaces to simplify the way diabetes patients interact with their health information and to connect individuals with similar health conditions and severity.
## What it does:
Our project is a web app that fosters personalized diabetes communities while alleviating information overload to enhance the well-being of at-risk individuals. Users can scan health documents, receive health predictions, and find communities that resonate with their health experiences. It streamlines the entire process, making it accessible and impactful.
## How we built it:
We built this project collaboratively, combining our expertise in various domains. Frontend development was done using Next.js, React, and Tailwind CSS. We leveraged components from <https://www.hyperui.dev> to ensure scalability and flexibility in our project. Our backend relied on Firebase for authentication and user management, PineconeDB for the creation of curated communities, and TensorFlow for the predictive model. For the image recognition, we used React-webcam and Tesseract for the optical character recognition and data parsing. We also used tools like Figma, Canva, and Google Slides for design, prototyping and presentation. Finally, we used the Discord.py API to automatically generate the user communication channels.
## Challenges we ran into:
We encountered several challenges throughout the development process. These included integrating computer vision models effectively, managing the flow of data between the frontend and backend, and ensuring the accuracy of health predictions. Coordinating a diverse team with different responsibilities was another challenge.
## Accomplishments that we're proud of:
We're immensely proud of successfully integrating computer vision into our project, enabling efficient document scanning and data extraction. Additionally, building a cohesive frontend and backend infrastructure, despite the complexity, was a significant accomplishment. Finally, we take pride in successfully completing our project goal: effectively processing user blood report data, generating health predictions, and automatically placing our users into personalized Discord channels based on common groupings.
## What we learned:
Throughout this project, we learned the value of teamwork and collaboration. We also deepened our understanding of computer vision, machine learning, and front-end development. Furthermore, we honed our skills in project management, time allocation, and presentation.
## What's next for One Health | Your Health, One Community:
In the future, we plan to expand the platform's capabilities. This includes refining predictive models, adding more health conditions, enhancing community features, and further streamlining document scanning. We also aim to integrate more advanced machine-learning techniques and improve the user experience. Our goal is to make health data management and community connection even more accessible and effective.
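A minimal sketch of the Discord channel generation step using discord.py; the guild id, channel name, and welcome message are placeholders, not the project's actual values:

```python
import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)

GUILD_ID = 123456789  # hypothetical server id

@client.event
async def on_ready():
    guild = client.get_guild(GUILD_ID)
    # Create a text channel for one predicted risk cohort and greet it.
    channel = await guild.create_text_channel("type-2-moderate-risk")
    await channel.send("Welcome! You've been matched with members whose "
                       "health profiles are similar to yours.")

client.run("BOT_TOKEN")  # placeholder token
```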
## Inspiration
We wanted to create something fun and cute while learning how to use Pygame.
## What it does
In the game, the player uses the space bar to make the boy jump over obstacles. The objective is to find the boy's fox friend.
## How we built it
The world is built using Pygame, a Python game library.
## Challenges we ran into
It was difficult to merge our code since we used slightly different coding styles.
## Accomplishments that we're proud of
We're proud of the overall aesthetic of the game and the flow of the different frames and mechanics.
## What we learned
We learned how to use Pygame as well as how to work as a team to create the different components.
## What's next for boy and thy fox
We hope to add more features and levels to the game as well as fine-tune the mechanics.
## Inspiration
I always wanted to work with the pygame library but had never gotten around to it. Likewise, I had always wanted to experiment with AI, but hadn't gotten around to that either. So I figured why not kill two birds with one stone?
## What it does
This game utilizes the pygame library to build a Connect Four gameboard. There is the option to play with 2 players, but that's not all. There is also a computer to play against, with 3 different difficulty levels.
## How we built it
I built this using Python and the pygame library. I used an array (list) to represent the gameboard, and created many functions to manage the game. For the AI, I changed the difficulty by changing how many moves ahead it could look. The AI would look at every one of your possible moves after every one of its possible moves to determine which was best. On the hardest difficulty, it looks 4 possible moves ahead and is extremely difficult to beat. The game is a constant loop that stops when someone wins, or the board is filled.
## Challenges we ran into
I had challenges with the AI aspect of this problem. Having never taken any courses on AI before, I self-taught the basics, which was difficult on its own. Then I initially tried just creating functions independent of any class to perform the job of looking ahead at possible moves. Eventually, I discovered it was far simpler to just create an AI class, and contain all of the necessary AI functions within that class.
## Accomplishments that we're proud of
I'm proud of not only making the game work but making it look relatively nice as well. I am also proud that I was able to not just get introduced to two new topics, but turn them into a fleshed-out project.
## What we learned
I learned a lot about AI, specifically the thought process that goes behind creating something that is supposed to emulate a human mind. Furthermore, to put it simply, I learned that the pygame library is awesome. It contains so many helpful features that are specific to creating a game (obviously). Given that this was the first time I have made a game like this, I am glad to know these libraries are available to me.
## What's next for AI Connect Four
I would very much like to make this even more fully fleshed out. I think the next step is making this game into an iOS app. Making an iOS app is something else that I have wanted to do, and now I feel I have the perfect code to test it on.
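A compact sketch of the lookahead idea described above: depth-limited minimax over a Connect Four board. The helper names and the bare-bones evaluation are simplifications for illustration, not the project's code:

```python
import math

ROWS, COLS = 6, 7
EMPTY, AI, HUMAN = 0, 1, 2

def valid_moves(board):
    return [c for c in range(COLS) if board[0][c] == EMPTY]

def drop(board, col, piece):
    """Return a copy of the board with `piece` dropped into `col`."""
    new = [row[:] for row in board]
    for r in range(ROWS - 1, -1, -1):
        if new[r][col] == EMPTY:
            new[r][col] = piece
            break
    return new

def four_in_a_row(board, piece):
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS
                       and board[rr][cc] == piece for rr, cc in cells):
                    return True
    return False

def minimax(board, depth, maximizing):
    """Return (best column, score) after exploring `depth` plies."""
    if four_in_a_row(board, AI):
        return None, 1_000_000
    if four_in_a_row(board, HUMAN):
        return None, -1_000_000
    moves = valid_moves(board)
    if depth == 0 or not moves:
        return None, 0  # a fuller evaluation would score open threats
    best_col, best = moves[0], -math.inf if maximizing else math.inf
    for col in moves:
        child = drop(board, col, AI if maximizing else HUMAN)
        _, score = minimax(child, depth - 1, not maximizing)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_col = score, col
    return best_col, best

# Hardest difficulty: col, _ = minimax(board, depth=4, maximizing=True)
```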
## Inspiration
After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness and we wanted to create a project that would encourage others to take better care of their plants.
## What it does
Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players’ plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants.
## How we built it
### Back-end:
The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for websockets so that we could support multiplayer; this messed us up for hours and hours, until we finally got it working. Aside from this, we have an Arduino to read the moisture of the soil and the brightness of the surroundings, as well as to take a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage the states, and for storage, we used MongoDB to hold all of the data needed.
### Front-end:
The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players.
## Challenges we ran into
We had a lot of difficulty setting up Socket.IO and connecting the API through it to the front end and the database.
## Accomplishments that we're proud of
We are incredibly proud of integrating our web sockets between frontend and backend and using Arduino data from the sensors.
## What's next for Poképlants
* Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard
* Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help
* Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
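A minimal sketch of the websocket relay, assuming python-socketio with eventlet; the event names and the toy health formula are illustrative, not the project's actual protocol:

```python
import socketio
import eventlet

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print("player joined:", sid)

@sio.event
def sensor_update(sid, data):
    # e.g. data = {"moisture": 512, "light": 730}, read off the Arduino
    stats = {"hp": min(100, data["moisture"] // 10)}  # toy health formula
    # Broadcast the plant's new battle stats to every connected player.
    sio.emit("plant_stats", stats)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 5000)), app)
```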
## Inspiration
We were inspired to create such a project since we are all big fans of 2D content, yet have no way of actually animating 2D movies. Hence, the idea for StoryMation was born!
## What it does
Given a text prompt, our platform converts it into a fully-featured 2D animation, complete with music, lots of action, and amazing-looking sprites! And the best part? This isn't achieved by calling some image generation API to generate a video for our movie; instead, we call on such APIs to create lots of 2D sprites per scene, and then leverage the power of LLMs (Cohere) to move those sprites around in a fluid and dynamic manner!
## How we built it
On the frontend we used React and Tailwind, whereas on the backend we used Node.js and Express. However, for the actual movie generation, we used a massive, complex pipeline of AI APIs. We first use Cohere to split the provided story plot into a set of scenes. We then use another Cohere API call to generate a list of characters and many of their attributes, such as their type, description (for image generation), and most importantly, Actions. Each "Action" consists of a transformation (translation/rotation) of some kind, and by interpolating between different "Actions" for each character, we can integrate them seamlessly into a 2D animation. This framework for moving, rotating and scaling ALL sprites using LLMs like Cohere is what makes this project truly stand out. Had we used an image generation API like SDXL to simply generate a set of frames for our "video", we would have ended up with a janky stop-motion video. However, we used Cohere in a creative way, to decide where and when each character should move, scale, rotate, etc., thus ending up with a very smooth and human-like final 2D animation.
## Challenges we ran into
Since our project relies heavily on beta parts of Cohere for many stages of its pipeline, getting Cohere to fit everything into the strict JSON formats we had specified, despite the fine-tuning, was often quite difficult.
## Accomplishments that we're proud of
In the end, we were able to accomplish what we wanted!
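The interpolation between LLM-generated "Actions" could look roughly like this Python sketch (the real pipeline is Node.js; the `Action` fields and the linear easing are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Action:
    t: float      # keyframe time in seconds
    x: float
    y: float
    rot: float    # degrees
    scale: float

def lerp(a, b, u):
    return a + (b - a) * u

def sprite_state(actions, t):
    """Interpolate a sprite's transform at time `t` between the two
    surrounding LLM-generated Actions (keyframes)."""
    actions = sorted(actions, key=lambda a: a.t)
    if t <= actions[0].t:
        return actions[0]
    for a, b in zip(actions, actions[1:]):
        if a.t <= t <= b.t:
            u = (t - a.t) / (b.t - a.t)
            return Action(t, lerp(a.x, b.x, u), lerp(a.y, b.y, u),
                          lerp(a.rot, b.rot, u), lerp(a.scale, b.scale, u))
    return actions[-1]
```

Sampling `sprite_state` once per frame is what turns a handful of discrete keyframes into the smooth motion described above.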
## Inspiration
Our inspiration comes from one of our team member's bedtime story sessions with his little cousin. Whenever he reads him a picture book, his cousin always has questions like, "Why did Goldilocks walk into the house?"—and most of the time, we don’t have the answers. We wished there was a way for him to ask the characters directly, to hear their side of the story. That’s where the idea for **LiveStory** came from. We wanted to bring storybook characters to life and let kids interact with them directly to get answers to their questions in real time.
## What it does
**LiveStory** is a children's storybook that comes alive! At ANY point in the story, interact with a character on the page and chat with them; how are they feeling about what just happened? With industry-leading AI voice-to-voice pipelines, readers can feel the emotion of their favourite characters.
## How we built it
Our web application was built primarily using **Reflex**:
* Reflex for both frontend and backend
* React.js for managing the database of each page and character
* Custom assistants powered by Vapi, Groq, Deepgram, and 11Labs to simulate character interactions
* Reflex for deployment
## Challenges we ran into
* We initially struggled with Reflex, but over time, it became our go-to tool for building the project.
* We had to prevent characters from spoiling the story by restricting their responses to what the reader had already seen. To solve this, we fed the cumulative story log into the voice API, ensuring characters only referenced the relevant parts of the story.
## Accomplishments that we're proud of
* Completing the project on time and getting it fully functional!!!
* Learning Reflex and Lottie from scratch and successfully implementing them over the weekend.
* Collaborating with amazing Reflex engineers to create a solid product based on their platform.
* Committing 20+ hours and $1,000 on travel from Waterloo, Canada, to make this hackathon happen!
## What we learned
* Making a full-stack app in Reflex
* Implementing beautiful vector animations with Lottie
* Implementing voice-to-voice models in web apps
## What's next for LiveStory
As time is the biggest limitation during a hackathon, we would have loved to pour more time into the art to make a more beautiful experience.
* More stories!
* More animations! Characters could, based on the emotions of their speech, have reactive animations
* Character Interaction Interface: a more advanced UI that can note your **emotions**
* **Choose your own adventures!** With supported stories, the conversations with the characters could influence the story!
* Customization for the reader! We can also try feeding the reader's information, such as name, hobbies and academic interests, to serve a better user experience.
## Inspiration
Ollie was inspired by popular translation apps such as Google Translate and Apple Translate. Our idea expands on the basic premise of translating text from an input by including an image scanner and a unique summary feature using Artificial Intelligence (AI).
## What it does
This application takes large amounts of text in any language and translates it into a summary in the chosen language. It can detect, translate, and summarize a significant quantity of text from an image using Cohere as the Application Programming Interface (API). It can also be used to type a selection of text and summarize it in many desired languages. Summarization can help users identify and understand the main points without having to read through large amounts of text.
## How we built it
Ollie was designed and conceptualized on Figma, then coded using HTML/CSS, JavaScript and Cohere as an API. Our tech stack is simply React for the front end and Node.js for the back end. To achieve our translation and summarization we chain a pipeline of API requests from different providers to create a seamless, unique user experience. First, to get the text, we used either Google Cloud’s Vision API for image-to-text detection or a text field for direct text input. Next, we took that text and translated it to the desired language using Google Cloud’s Translation API. Finally, we utilized the Cohere AI JavaScript SDK to call their generation API for text summarization and then presented that text to the user at various levels of verbosity.
## Challenges we ran into
Initially, conceptualization was a challenge that we overcame after lengthy discussions about possible project ideas. Once we settled on creating a translator application, the process became smoother in terms of design and ideation. Moreover, figuring out the user journey within the app was difficult because of the many features and potential routes that could be implemented. Coding also proved to be a challenge, specifically regarding image encoding in the image-to-text conversion. The image size was too large and the quality was too high, resulting in not being able to pass the image URL through web requests because it contained too many characters. We ended up converting the image to binary by passing it through content type multipart/form-data, as well as reducing the image resolution and changing its type to JPEG.
## Accomplishments that we're proud of
We are proud of conceptualizing, designing, and building our mobile application within the 36-hour time constraint. Furthermore, we are proud of our collaboration skills and how we used our individual expertise to distribute the workload. The technical challenge of utilizing multiple APIs we’d never used before and integrating them all together seamlessly was enormous, and so we are proud of what we were able to accomplish technically as well.
## What we learned
We learned how to convert text-based images into text using Google Cloud’s Vision API, then translate that text to and from desired languages, and finally summarize that text using Cohere’s summarize API. We had to read and understand the docs for each of the APIs, as we had no experience using them before, and integrate these APIs together in a seamless way. Our team also learned a lot about collaboration when working with such a tight deadline, forcing us to prioritize the Minimum Viable Product (MVP) and not waste time on unimportant tasks.
We also quickly identified everyone's unique strengths and delegated tasks to those who would be most efficient in completing them.
## What's next for Ollie
In the future, we would like to develop this idea into a mobile app and add more advanced features such as speech-to-text translation.
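A condensed sketch of the three-stage pipeline described above, written with the Google Cloud and Cohere Python SDKs rather than the project's JavaScript; the keys are placeholders and the exact SDK surface may have changed since:

```python
import cohere
from google.cloud import vision
from google.cloud import translate_v2 as translate

def summarize_image(path: str, target_lang: str = "en") -> str:
    # 1. Image -> text (Google Cloud Vision OCR)
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    ocr = vision.ImageAnnotatorClient().text_detection(image=image)
    text = ocr.text_annotations[0].description if ocr.text_annotations else ""

    # 2. Translate to the desired language (Google Cloud Translation)
    translated = translate.Client().translate(
        text, target_language=target_lang)["translatedText"]

    # 3. Summarize (Cohere)
    co = cohere.Client("COHERE_API_KEY")  # placeholder key
    return co.summarize(text=translated, length="short").summary
```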
## Inspiration
We were inspired by the daily struggle of social isolation.
## What it does
Shows the emotion of a text message on Facebook.
## How we built it
We built this using JavaScript, the IBM Watson NLP API, a Python HTTPS server, and jQuery.
## Challenges we ran into
Accessing the message string was a lot more challenging than initially anticipated. Finding the correct API for our needs and updating in real time also posed challenges.
## Accomplishments that we're proud of
The fact that we have a fully working final product.
## What we learned
How to interface JavaScript with a Python backend, and manually scrape a templated HTML doc for specific keywords in specific locations.
## What's next
Incorporate the ability to display alternative messages after a user types their initial response.
## Inspiration
University keeps students really busy and really stressed, especially during midterms and exams. We would normally want to talk to someone about how we feel and how our mood is, but due to the pandemic, therapists have often been closed or fully online. Since people will be seeking therapy online anyway, swapping a real therapist for a chatbot trained in giving advice and guidance isn't a very big leap for the person receiving therapy, and it could even save them money. Further, since all the conversations can be recorded if the user chooses, they can track their thoughts and goals, and have the bot respond to them. This is the idea that drove us to build Companion!
## What it does
Companion is a full-stack web application that allows users to record their mood and describe their day and how they feel, to promote mindfulness and track their goals, like a diary. There is also a companion, an open-ended chatbot, which the user can talk to about their feelings, problems, goals, etc. With real-time speech-to-text functionality, the user can speak out loud to the bot if they feel it is more natural to do so. If the user finds a companion conversation helpful, enlightening or otherwise valuable, they can choose to attach it to their last diary entry.
## How we built it
We leveraged many technologies such as React.js, Python, Flask, Node.js, Express.js, MongoDB, OpenAI, and AssemblyAI. The chatbot was built using Python and Flask. The backend, which coordinates both the chatbot and a MongoDB database, was built using Node and Express. Speech-to-text functionality was added using the AssemblyAI live transcription API, and the chatbot machine learning models and training data were built using OpenAI.
## Challenges we ran into
Some of the challenges we ran into were connecting the front-end, back-end and database. We would accidentally mix up what data we were sending or supposed to send in each HTTP call, resulting in a few invalid database queries and confusing errors. Developing the backend API was a bit of a challenge, as we didn't have a lot of experience with user authentication. Developing the API while working on the frontend also slowed things down, as the frontend person would have to wait for the end-points to be devised. Also, since some APIs were relatively new, working with incomplete docs was sometimes difficult, but fortunately there was assistance on Discord if we needed it.
## Accomplishments that we're proud of
We're proud of the ideas we've brought to the table, as well as the features we managed to add to our prototype. The chatbot AI, able to help people reflect mindfully, is really the novel idea of our app.
## What we learned
We learned how to work with different APIs and create various API end-points. We also learned how to work and communicate as a team. Another thing we learned is how important the planning stage is, as it can really speed up our coding time when everything is set up and everyone understands the plan.
## What's next for Companion
The next steps for Companion are:
* Ability to book appointments with live therapists if the user needs it. Perhaps the chatbot can be swapped out for a real therapist for an upfront or pay-as-you-go fee.
* A machine learning model that adapts to what the user has written in their diary that day, that works better to give people sound advice, and that is trained on individual users rather than on one dataset for all users.
## Sample account
If you can't register your own account for some reason, here is a sample one to log into:
Email: [demo@example.com](mailto:demo@example.com)
Password: password
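A minimal sketch of what the chatbot call might look like with OpenAI's Python SDK; the system prompt and model choice are our own assumptions, not Companion's actual configuration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = ("You are Companion, a warm, non-judgemental listener. "
          "Encourage mindful reflection; never give medical diagnoses.")

def companion_reply(history: list[dict]) -> str:
    """`history` is a list of {'role': 'user'|'assistant', 'content': ...}."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM}] + history,
    )
    return resp.choices[0].message.content
```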
## Inspiration
We were inspired by the challenge: **Mirum and JWT: Can a Computer Hear How You Feel? Seeing the emotion in IM and voice.**
## What it does
By analyzing IM and voice, EmoBot is able to score and visualize the emotional tone of a conversation.
## How we built it
**With love and a dash of unicorn dust... jk.** We built the back-end in Python. We researched APIs that analyze text for sentiment, and used the IBM Watson Tone Analyzer API, which assigns an emotion to input text. Our input text comes from a text file which we read. The Tone Analyzer assigns either anger, joy, disgust, sadness, or fear. It also tells us if the text was analytical, tentative, or confident. The API also returns a score between -1 and 1 for each emotional state to indicate magnitude. We also used a natural language API from Google to detect if a phrase is positive or negative. Google also returns a score between -1 and 1. If the score is above 0, it indicates a positive emotion. If the score is negative, that indicates a negative emotion. In Python, we have our program read a text file and analyze it through both the IBM and Google APIs. We then compare the results from both Google and IBM. We use the Google API as a backup to verify our results from the IBM API. If the APIs return nothing for the outputs, we assume that the phrases are emotionless and mark them as “neutral”. Our program outputs the emotion associated with the text along with the score.
## Challenges we ran into
One of the challenges we faced was implementing the APIs in our program. We were unfamiliar with using these APIs and getting them to work in Python; this took some time and research. Extracting the correct output emotions and logistically inputting text into the API was challenging. We also needed to create a server through Flask to connect the back-end to the front-end, which was challenging at first because it was a new topic.
## Accomplishments that we're proud of
We made EmoBot work! And we all slept at some point during this weekend.
## What we learned
Human emotions are complicated... very complicated.
## What's next for EmoBot
In the big picture, EmoBot can be further developed and implemented into fields like *commercial advertisement, IoT, & daily care*, etc. More specifically, we are still in the process of improving the accuracy of the evaluation. Right now there are emotions that we are pretty confident about, but there are also a couple of feelings that are more difficult to detect. Solving issues like this can improve the general user experience of EmoBot.
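A rough Python sketch of the IBM-plus-Google combination logic described above; the credentials are placeholders, the fallback rules are simplified, and (as a caveat) Watson Tone Analyzer has since been deprecated:

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from google.cloud import language_v1

tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator("IBM_API_KEY"),  # placeholder key
)
gcp = language_v1.LanguageServiceClient()

def analyze(text: str) -> dict:
    # IBM: list of detected tones, empty when nothing is confident enough.
    tones = tone_analyzer.tone(
        {"text": text}, content_type="application/json").get_result()
    found = tones["document_tone"]["tones"]

    # Google: signed sentiment score used as a sanity check / backup.
    doc = language_v1.Document(content=text,
                               type_=language_v1.Document.Type.PLAIN_TEXT)
    score = gcp.analyze_sentiment(
        request={"document": doc}).document_sentiment.score

    if not found and abs(score) < 0.1:
        return {"emotion": "neutral", "score": 0.0}
    top = max(found, key=lambda t: t["score"]) if found else None
    emotion = top["tone_id"] if top else ("joy" if score > 0 else "sadness")
    return {"emotion": emotion, "score": score}
```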
## Inspiration
We created CommunitiCash because we believe that there must be a better way of insuring people's livelihoods. The United States has a large number of people living paycheck to paycheck, for whom even small disturbances in their finances can lead to large consequences. In the status quo, there exists little recourse for such individuals, which is where CommunitiCash comes in.
## What it does
CommunitiCash uses machine learning to analyze the financial histories of its users to aggregate them into Groups with the lowest risk for all members involved. Our algorithm predicts the future of a user's financial situation, and uses this information to connect them with others who can ensure no one falls behind. With Groups being limited to 20 members each, members are encouraged to do their part in supporting one another.
## How We built it
CommunitiCash's machine learning algorithm was developed using Python, particularly with the NumPy mathematical library. After it was run on the original set of data that was provided by Vitech, the new data was hosted on Firebase, where it could be accessed by our front-end application written in JavaScript.
## Challenges We ran into
One of the biggest challenges with this project was ensuring that the algorithm was accurate, especially when dealing with outliers (very high/low income users). Another challenge was transferring all of the data to Firebase, as the original Vitech data was a CSV, and Firebase's database stores JSON objects. The CSV to JSON converter we had planned on using failed on us during the night, so we were forced to convert the data ourselves.
## Accomplishments that We're proud of
Knowing that our application is predicting the future to potentially help people is something we're very proud of. Not to mention, finishing the application at all despite the problems we encountered (particularly with the CSV to JSON conversion) is pretty nice too.
## What We learned
We learned how to apply machine learning and probability to financial data sets, something that we most definitely could use again in the future. (We also learned that just because a CSV to JSON converter worked two weeks ago, doesn't mean it will work when you need it to...)
## What's next for CommunitiCash
Our plan for CommunitiCash is to aggregate enough recorded and predicted financial data on users to be able to present to banks and money lenders, to prove that Groups are stable enough to be able to take out bigger loans and still have the capacity to pay them off. This benefits everyone involved as Group members are able to have more money for carrying out their desires, and money lenders can be more confident in knowing that they will get a return on their investment.
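The CSV-to-JSON conversion the team ended up doing by hand can be done with the standard library alone; a minimal sketch, with file names as placeholders:

```python
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    """Convert a CSV export into the JSON objects a Firebase-style
    document database expects."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))   # the header row becomes the keys
    # Key each record by row index so each record can be addressed individually.
    data = {str(i): row for i, row in enumerate(rows)}
    with open(json_path, "w") as f:
        json.dump(data, f, indent=2)

# csv_to_json("vitech_data.csv", "firebase_upload.json")
```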
## Inspiration Behind Plate-O 🍽️
The inspiration for Plate-O comes from the intersection of convenience, financial responsibility, and the joy of discovering new meals. We all love ordering takeout, but there’s often that nagging question: “Can I really afford to order out again?” For many, budgeting around food choices can be stressful and time-consuming, yet essential for maintaining a healthy balance between indulgence and financial well-being. 🍔💡 Our goal with Plate-O was to create a seamless solution that alleviates this burden while still giving users the excitement of variety and novelty in their meals. We wanted to bridge the gap between smart personal finance and the spontaneity of food discovery, making it easier for people to enjoy new restaurants without worrying about breaking the bank. 🍕✨ What makes Plate-O truly special is its ability to learn from your habits and preferences, ensuring each recommendation is not only financially responsible but tailored to your unique tastes. By combining AI, personal finance insights, and your love for good food, we created a tool that makes managing your takeout spending effortless, leaving you more time to enjoy the experience. Bon Appétit! 📊🍽️
## How We Built Plate-O 🛠️
At the core of Plate-O is its AI-driven recommendation engine, designed to balance two crucial factors: your financial well-being and your culinary preferences. Here’s how we made it happen:
**Backend**: We used FastAPI to build a robust system for handling the user’s financial data, preferences, and restaurant options. By integrating the Capital One API, Plate-O can analyze your income, expenses, and savings to calculate an ideal takeout budget—maximizing enjoyment while minimizing financial strain. 💵📈
**Frontend**: Next.js powers our intuitive user interface. Users input their budget, and with just a few clicks, they get a surprise restaurant pick that fits their financial and taste profile. Our seamless UI makes ordering takeout a breeze. 📱✨
**Data Handling & Preferences**: MongoDB Atlas is our choice for managing user preferences—storing restaurant ratings, past orders, dietary restrictions, and other critical data. This backend allows us to constantly learn from user feedback and improve recommendations with every interaction. 📊🍴
**AI & Recommendation System**: Using Tune’s LLM-powered API, we process natural language inputs and preferences to predict what food users will love based on past orders and restaurant descriptions. The system evaluates each restaurant using criteria like sustainability scores, delivery speed, cost, and novelty. 🎯🍽️
**Surprise Meal Feature**: The magic happens when the system orders a surprise meal for users within their financial constraints. Plate-O delights users by taking care of the decision-making and getting better with each order. 🎉🛍️
## Challenges We Overcame at Plate-O 🚧
**Budgeting Complexity**: One of our first hurdles was integrating the Capital One API in a meaningful way. We had to ensure that our budgeting model accounted for users’ income, expenses, and savings in real time. This required significant computation beyond the API and iteration to create a seamless experience. 💰⚙️
**Recommendation Fine-Tuning**: Balancing taste preferences with financial responsibility wasn’t easy.
Most consumer dining preference data is proprietary, forcing us to spend a lot of time refining the recommendation system to ensure it could accurately predict what users would enjoy with small amounts of data, leveraging open-source Large Language Models to improve results over time. 🤖🎯
**Data Integration**: Gathering and analyzing user preference data in real time presented technical challenges, particularly when optimizing the system to handle large restaurant datasets efficiently while providing quick recommendations. Combining two distinct datasets, the Yelp restaurant data list and an Uber Eats CSV, also required a bit of Word2Vec ingenuity. 🗄️⚡
## Accomplishments at Plate-O 🏆
**Smart Budgeting with AI**: Successfully implemented a model that combines personal finance data with restaurant preferences, offering tailored recommendations that help users stay financially savvy while enjoying variety in their takeout. 📊🍕
**Novel User Experience**: Plate-O’s surprise meal feature takes the stress out of decision-making, delighting users with thoughtful recommendations that evolve with their taste profile. The platform bridges convenience and personalized dining experiences like never before. 🚀🥘
## Lessons Learned from Plate-O’s Journey 📚
**Simplicity Wins**: At first, we aimed to include many complex features, but we quickly realized that simplicity and focus lead to a more streamlined and effective user experience. It’s better to do one thing exceptionally well—help users order takeout wisely. 🌟🍽️
**The Power of Learning**: A key takeaway was understanding the importance of iterative learning in both our recommendation engine and product development process. Every user interaction provided valuable insights that made Plate-O better. 🔄💡
**Balancing Functionality and Delight**: Creating a tool that is both functional and delightful requires finding a perfect balance between user needs and technical feasibility. With Plate-O, we learned to merge practicality with the joy of food discovery. 💼🎉
## The Future of Plate-O 🌟
**Groceries and Beyond**: We envision expanding Plate-O beyond takeout, integrating grocery shopping and other spending categories into the platform to help users make smarter financial choices across their food habits. 🛒📊
**Real-Time AI Assistance**: In the future, we plan to leverage AI agents that proactively guide users through their food budgeting journey, offering suggestions and optimizations for both takeout and groceries. 🤖🍱
**Social Good**: While we already take environmental protection into account when recommending restaurants, we’re excited to explore adding complete restaurant ESG scores to help users make socially responsible dining choices, supporting local businesses and environmentally friendly options. 🌍🍽️
With Plate-O, we're not just changing how you order takeout; we're helping you become a more financially savvy foodie, one delicious meal at a time.
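A toy sketch of the budgeting idea (the real model sits behind the Capital One API and does considerably more); every ratio here is an assumption:

```python
def takeout_budget(monthly_income: float,
                   fixed_expenses: float,
                   savings_rate: float = 0.20,
                   takeout_share: float = 0.30) -> float:
    """Disposable income after fixed costs and a savings target,
    with a fraction earmarked for takeout."""
    disposable = monthly_income - fixed_expenses - savings_rate * monthly_income
    return max(0.0, takeout_share * disposable)

# e.g. takeout_budget(3000, 1800) -> 0.30 * (3000 - 1800 - 600) = 180.0
```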
## Inspiration
We wanted to make an app that helped people to be more environmentally conscious. After we thought about it, we realised that most people are not because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits. We took this idea and added a full visualisation aspect to make a complete budgeting app.
## What it does
Our app allows users to log in, then retrieves their data to visually represent the most interesting parts of their financial history, as well as their utilities spending.
## How we built it
We used HTML, CSS, and JavaScript as our front-end, and then used Arduino to get light sensor data, and Nessie to retrieve user financial data.
## Challenges we ran into
Seamlessly integrating our multiple technologies, and formatting our graphs in a way that is both informative and visually attractive.
## Accomplishments that we're proud of
We are proud that we have a finished product that does exactly what we wanted it to do and are proud to demo.
## What we learned
We learned about making graphs using JavaScript, as well as using Bootstrap to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app.
## What's next for Budge
We want to continue to add more graphs and tables to provide more information about your bank account data, and use AI to make our app give personal recommendations catered to an individual's personal spending.
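Reading the Arduino's light sensor from a companion script could look roughly like this pyserial sketch; the port, baud rate, and one-integer-per-line format are assumptions:

```python
import serial  # pyserial

def read_light_levels(port: str = "/dev/ttyUSB0", baud: int = 9600):
    """Stream photoresistor readings the Arduino prints over serial,
    one integer per line."""
    with serial.Serial(port, baud, timeout=2) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if line.isdigit():
                yield int(line)

# for level in read_light_levels():
#     print("light:", level)  # forward to the web app / cost estimator
```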
## Inspiration
In school, we were given the offer to take a dual enrollment class called Sign Language. A whole class on the subject can be quite time-consuming for most people, children and adults alike. If people are interested in learning ASL, they either watch YouTube videos, which are not interactive, or spend HUNDREDS of dollars on classes (<https://deafchildren.org> charging $70-100). Our product provides a cost-effective, time-efficient, and fun experience for learning this unique language.
## What it does
Of course, you have to first learn the ASL alphabet: A, B, C, D ... Z. Each letter has a unique hand gesture. You also have the option to learn phrases like "Yes", "No", "Bored", etc. The app makes sure you have formed the letter correctly by displaying a circular progress view showing how long you have to hold the gesture. We provide many images to make the learning experience accessible. After learning all the letters and practicing a few words, it's GAME time :). Test your ability to show a gesture and see how long you can go until you give up. The gamified experience leads to more learning and engagement for children.
## How we built it
The product was built using the language Swift. The hand tracking was done using CoreML components. We used hand landmarks and found distances between all points of the hand. Comparing the distances it SHOULD be and what it is at a specific time frame helps us figure out whether the hand pose is occurring. For the UI we planned it out using Figma and later wrote the code in Swift. We used SwiftUI components to save time. For data storage we used UIData, which syncs across devices with the same iCloud account.
## Challenges we ran into
There are 26 letters. That's a lot of arrays, comparison statements, and repetitive work. Testing would sometimes become difficult because the iPhone would eventually become hot and show temperature notifications. We only had one phone to test with, so phone testing was mostly reserved for hand landmarks. The project was extremely lengthy and putting so much content into 36 hours is difficult, so we had to sacrifice sleep. Also, a cockroach in the room.
## Accomplishments that we're proud of
The hand landmark detection for a letter actually works much better than expected. Moving your hand super fast does not glitch the system. A fully functional vision app with clean UI makes the experience fun and open for all people.
## What we learned
Quantity < Quality. We created more than 6 functioning pages with different levels of UI quality. It's very noticeable which views were created quickly because of the time crunch. Instead of having so many pages, decreasing the number of pages and maybe adding more content into each view would make the app appear flawless. Comparing the goal array against the current time-frame array is TEDIOUS. So much time is wasted on testing. We could not figure out the action classifier in Swift, as there was no basic open-source code. Explaining problems to ChatGPT becomes difficult because the LLM never seems to understand basic tasks, yet performs perfectly on complex tasks. Stack Overflow will still be around (for now) if we face problems.
## What's next for Hands-On
The app fits well on my iPhone 11, but on an iPad? I do not think so. The next step to take the project further is to scale the UI so it works for iPads and iPhones of any size. Once we fix that problem, we could release the app to the App Store. Since we do not use any external API, we would have no expenses related to hosting one.
Making the app public could help people of all ages learn a new language in an interactive manner.
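The distance-comparison idea translates to a few lines of illustrative Python (the app itself is Swift/CoreML); the normalisation step and the tolerance are assumptions:

```python
import math

def pairwise_distances(landmarks):
    """Flatten all point-to-point distances of one hand pose.
    `landmarks` is a list of (x, y) positions, e.g. 21 detected points."""
    d = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
            d.append(math.hypot(x2 - x1, y2 - y1))
    # Normalise by the largest distance so hand size / camera depth cancel out.
    m = max(d)
    return [v / m for v in d]

def matches(template, current, tolerance=0.08):
    """True when the live pose is close enough to the stored letter template."""
    t, c = pairwise_distances(template), pairwise_distances(current)
    return all(abs(a - b) <= tolerance for a, b in zip(t, c))
```

Holding a match for the required duration is what drives the circular progress view described above.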
## Inspiration
A twist on the classic game '**WikiRacer**', but better! Players play WikiRacer by navigating from one Wikipedia page to another with the fewest clicks. WikiChess adds layers of turn-based strategy, a dual-win condition, and semantic similarity guesswork that introduces varying playstyles and adaptability in a way that is similar to chess. It introduces **strategic** elements where players can choose to play **defensively or offensively**—misleading opponents or outsmarting them. **Victory can be achieved through both extensive general knowledge and understanding your opponent's tactics.**
## How to Play
**Setup**: Click your chess piece to reveal your WikiWord—a secret word only you should know. Remember it well!
**Gameplay**: Each turn, you can choose to either **PLAY** or **GUESS**.
### **PLAY Mode**:
* You will start on a randomly selected Wikipedia page.
* Your goal is to navigate to your WikiWord's page by clicking hyperlinks.
  + For example, if your WikiWord is "BANANA," your goal is to reach the "Banana" Wikipedia article.
* You can click up to three hyperlinks per turn.
* After each click, you'll see a semantic similarity score indicating how close the current page's title is to your WikiWord.
* You can view the last ten articles you clicked, along with their semantic scores, by holding the **TAB** key.
* Be quick—if you run out of time, your turn is skipped!
### **GUESS Mode**:
* Attempt to guess your opponent’s WikiWord. You have three guesses per turn.
* Each guess provides a semantic similarity score to guide your future guesses.
* Use the article history and semantic scores shown when holding **TAB** to deduce your opponent's target word based on their navigation path.
**Example**: If your opponent’s target is "BANANA," they might navigate through articles like "Central America" > "Plantains" > "Tropical Fruit." Pay attention to their clicks and semantic scores to infer their WikiWord.
## Let's talk strategy!
**Navigate Wisely in PLAY Mode!**
* Your navigation path's semantic similarity indicates how closely related each page's title is to your WikiWord. Use this to your advantage by advancing towards your target without being too predictable. Balance your moves between progress and deception to keep your opponent guessing.
**Leverage the Tug-of-War Dynamic**:
* Since both players share the same Wikipedia path, the article you end on affects your opponent's starting point in their next PLAY turn. Choose your final article wisely—landing on a less useful page can disrupt your opponent's strategy and force them to consider guessing instead.
  + However, if you choose a dead end, your opponent may choose to GUESS and skip their PLAY turn—you’ll be forced to keep playing the article you tried to give them!
**Semantic Similarity May Not Be the Edge You Think It Is**:
* Semantic similarity measures how closely related the page's title is to your target WikiWord, not how straightforward it is to navigate to; use this to make strategic moves that might seem less direct semantically, but can be advantageous to navigate through.
**To Advance or To Mislead?**
* It's tempting to sprint towards your WikiWord, but consider taking detours that maintain a high semantic score but obscure your ultimate destination. This can mislead your opponent and buy you time to plan your next moves.
**Adapt to Your Opponent**:
* Pay close attention to your opponent's navigation path and semantic scores.
This information can offer valuable clues about their WikiWord and inform your GUESS strategy. Be ready to shift your tactics if their path becomes more apparent.

**Use GUESS Mode Strategically**:

* If you're stuck or suspect you know your opponent's WikiWord, use GUESS mode to gain an advantage. Your guesses provide semantic feedback, helping you refine your strategy and close in on their target.
  + Choosing GUESS also automatically skips your PLAY turn and forces your opponent to click more links. You can gain even more semantic feedback from this; however, it may also be risky—the more PLAY moves you give them, the more likely they are to eventually navigate to their own WikiWord.

## How we built it

Several technologies and strategies were used to develop WikiChess. First, we used **web scraping** to fetch and clean Wikipedia content while bypassing iframe issues, allowing players to navigate and interact with real-time data from the site. To manage the game's state and progression, we updated the game status on each hyperlink click and used **Flask** as our framework. We incorporated **semantic analysis** using spaCy to calculate **NLP** similarity scores between articles to display to players (a sketch of this scoring appears below). The game setup is coded in **Python**, featuring five categories—animals, sports, foods, professions, and sciences—and generating two words from the same category to provide a cohesive and engaging experience. Players start from a page outside the common category to add an extra challenge. For the front end, we prioritized a user-friendly and interactive design, focusing on a minimalist aesthetic with **dynamic animations** and smooth transitions. The front-end tech stack was made up of **HTML/CSS, JS, image generation tools, and Figma**.
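As a rough illustration of the similarity scoring described above, here is a minimal sketch, assuming a spaCy model that ships with word vectors (such as `en_core_web_md`; the project's exact model choice isn't recorded here):

```python
import spacy

# Load a spaCy model with real word vectors; the small "en_core_web_sm"
# model has none, so a medium model is assumed here.
nlp = spacy.load("en_core_web_md")

def similarity_score(article_title: str, wiki_word: str) -> float:
    """Return a semantic similarity between a page title and the target word."""
    return nlp(article_title).similarity(nlp(wiki_word))

# Example: scores along a navigation path towards "banana".
for title in ["Central America", "Plantain", "Tropical fruit", "Banana"]:
    print(f"{title:15s} -> {similarity_score(title, 'banana'):.2f}")
```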
## Challenges we ran into

One of our biggest challenges was dealing with **iframe access controls**. We faced persistent issues with blocked access, which prevented us from executing any logic beyond simply displaying the Wikipedia content. Despite trying various methods to bypass this limitation, including using proxy servers, the frequent need to check for user victories made it clear that iframes were not a viable solution. This challenge forced us to pivot from our initial plan of handling much of the game logic on the client side using JavaScript. Instead, we had to **rely heavily on backend solutions**, particularly web scraping, to manage and update game state.

## Accomplishments that we're proud of

Despite the unexpected shift in our approach, which significantly increased the complexity of our backend and required **major design adjustments**, we managed to overcome several challenges. Integrating web scraping with real-time game updates and **ensuring a smooth user experience** were particularly demanding. We tackled these issues by strengthening our backend logic and refining the frontend to enhance user engagement. Despite the text-heavy nature of the Wikipedia content, we aimed to make the interface visually appealing and fun, ensuring a seamless and enjoyable user experience.

## What we learned

As beginners in hacking, we are incredibly proud of our perseverance through these challenges. The experience was a great learning opportunity, and we successfully delivered a product that we find both enjoyable and educational. **Everyone was able to contribute in their own way!**

## What's next for WikiChess

Our first priority would be to implement an **online lobby feature** that would allow users to play virtually with their friends rather than only locally in person. We would also like to introduce more word categories, and to develop a **more customized similarity metric than pure semantics**. Ideally, the similarity score would account for the structure of Wikipedia and the strategies WikiRacers use to reach their target article, rather than relying on word meaning alone. We would also like to introduce **timer-based gameplay**, where players are limited by time instead of turns, to encourage a faster-paced game mode.
## Inspiration

Having been inspired by a family member's journey through life, someone who struggled with being deaf and used sign language all their life, a member of our team pushed forward an idea to celebrate innovative technologies that, if implemented correctly, could bridge communicative gaps by helping people learn ASL and helping deaf people communicate.

## What it does

ASL-Quick-Learn is a website that allows a user to sign different ASL letters and gain feedback based on what they sign. The website generates a random letter, and the user signs it and takes a picture using a handy one-click button on the website. Immediately after they take the picture, they receive a point if their sign was correct, and the website generates a new random letter for them to sign. At any time the user can see their current score at the bottom of the screen.

## How we built it

We followed a well-thought-out, step-by-step process for our code. We started by designing a web-scraping script using Python and the BeautifulSoup library. This gathered images for each sign corresponding to a letter from an online ASL dictionary. Although we currently only support ASL letters, in the future we want to add words, and this script lets us scale fast. Next, after gathering the data, we pre-processed the dataset by utilizing Google's Mediapipe API to help us label 21 distinct locations (each containing x, y, and z values) on the hand for each distinct ASL gesture. From there we labeled each image with the corresponding letter and the 63 values generated from each gesture to make up the model's dataset. Designing a basic machine learning model, we were able to predict the letter corresponding to the sign in a given image. This was done using technologies including TensorFlow, Mediapipe, and OpenCV. Finally, we connected everything to the web interface (a sketch of the landmark extraction step follows below).
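A minimal sketch of the pre-processing step described above, extracting the 21 hand landmarks (63 values) from an image with Mediapipe. The function and file names here are illustrative, not taken from the actual project:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_features(image_path: str):
    """Return 63 values (x, y, z for 21 landmarks) for the first detected hand."""
    image = cv2.imread(image_path)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        # Mediapipe expects RGB input; OpenCV loads images as BGR.
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None  # no hand found in the image
    landmarks = results.multi_hand_landmarks[0].landmark
    return [coord for lm in landmarks for coord in (lm.x, lm.y, lm.z)]

features = extract_features("sign_a.jpg")  # hypothetical training image
print(len(features) if features else "no hand detected")  # -> 63
```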
## Challenges we ran into

The most major issue was the lack of data. With over 20 potential classification classes and barely any data to support an already difficult problem, we had a hard time developing a feasible neural network model whose performance could rival that of a human who knows ASL. Another issue was that we wanted to build something a user could easily use, but none of us had any front-end knowledge. We spent a significant portion of time learning and researching how to use HTML, CSS, and JavaScript to build a locally hosted website. This definitely paid off, though, as we created a relatively clean-looking website with the functionality to live-stream video and take pictures from that stream. Finally, since none of us had front-end experience, we definitely did not have full-stack development experience. Therefore, connecting our website to the machine learning model was quite difficult. Using Python Flask we were eventually able to complete this task and were therefore able to compare images taken live to our data.

## Accomplishments that we're proud of

We're definitely proud of our product, it being our first hackathon!

## What we learned

As mentioned earlier, none of us had any front-end knowledge or experience at all, but we were able to create a working locally hosted website with a built-in video stream and the ability for the user to take images with it.

## What's next for ASL-Quick-Learn

In the future we want to add compatibility for more than just ASL letters. Fortunately, the way our pipeline is designed around web scraping, this would be very easy. Simply adding any signable word to an array in our script would allow us to gather data on that word.
## Inspiration

As music fans, there's nothing we like more than being surrounded by awesome grooves and passionate musicians. As most of you have heard a million times, the pandemic has forced us to discover new and creative ways of engaging with others. This is why we've decided to make CrowdSong: the open-sourced crowdsourcing musical extravaganza!

## What it does

Combine DAW (digital audio workstation) technology and social media, and what do you get? You guessed it! CrowdSong is a platform for sharing and interacting with others' music projects. Using our website, you can either create a track or put your own spin on someone else's. We have support for both MIDI and audio input too!

## How we built it

CrowdSong is primarily built using React for the UI, alongside some JavaScript for processing inputs. The website makes use of the [Web Audio API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API), which also provided the functionality for the basic oscillator synthesizer we implemented.

## Challenges we ran into

Our primary difficulty was the scale of the project: turns out building an in-browser DAW isn't so easy after all! In particular, coding the animations in React and figuring out ways to keep the timings tight proved to be quite challenging.

## Accomplishments that we're proud of

Our team is proud to have produced a functioning prototype at all, given the number of things we had to implement.

## What we learned

Most of us were relatively unfamiliar with JavaScript, providing us with a rich learning experience. We also acquired a better understanding of DSP (digital signal processing) and the complexity that high-quality sound brings to the table.

## What's next for CrowdSong

There's ample room for exciting new features to be implemented. Here are a few that we think could really improve the project:

* A better home page, with a user-personalized feed
* Virtual MIDI keyboard
* More instrument options for the MIDI tracks
* Filters and effects for both MIDI and audio tracks
* Possible support for VSTs created by the user
## Inspiration

Covid was a hard time for all of us; many had to find new ways to cope with all of the new stress and anxiety. And thus came music. Being able to listen to music together is especially important during a time when you can't see your closest friends. Prior to the competition, our group had tried to do a Spotify group session and realised that the system had gone downhill. Users could spam skipping songs and bug out the session. It was overall a bad experience. And so, after hearing about the theme, Groupify was born: an easier way to listen to songs with your friends. We aim to bring back the feeling of laughing at your friend's favourite song, and forcing them to listen to your music because you clearly have a superior taste in music.

## What it does

Groupify allows users to join listening rooms with their friends. Everybody's Spotify account is synced so they can all listen to the same music track. Users can play, skip, and seek through songs with their peers. Groupify is a great way to introduce new music to your friends and build connections through music.

## How we built it

Groupify was built as a website to maximise compatibility with all devices. We decided to use React for our frontend and Flask for our backend. We chose React because it was an opportunity to learn new JavaScript libraries, and we used the react-scroll-motion library to add some scrolling animations and spice up our page. For our login page we used the information collected from Spotify's API to track which user is logged in and using the program. We then got their current song, displayed it on the dashboard, and shared it with the other users in the group. On the backend we used Flask to call Spotify's API and collect and send data, and WebSockets to share song data and its timestamp with other users. Overall this was a very engaging and challenging project that allowed us to learn and improve our skills.

## Challenges we ran into

When we first started out, it looked like we would simply need Spotify's WebSocket API to see when a user changes their song. We had seen an example of this from Discord, which can track what you are listening to. However, Spotify only provides its WebSocket API to authorised developers, forcing us to find another way to get a constantly updated view of each user's currently playing track and keep everyone in a room in sync. We overcame this challenge by polling the playback-state endpoint available to regular Spotify users (a sketch of this polling approach follows below). This isn't the most elegant solution, but it allows us to bypass the unnecessary restrictions and restore group sessions to their former glory.
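A minimal sketch of the polling workaround described above, assuming a valid OAuth access token with the `user-read-playback-state` scope (token handling and room broadcast logic omitted; the token value here is a placeholder):

```python
import time
import requests

ACCESS_TOKEN = "..."  # obtained via Spotify's OAuth flow; placeholder here

def get_playback_state():
    """Poll Spotify's current-playback endpoint for track and progress info."""
    resp = requests.get(
        "https://api.spotify.com/v1/me/player",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=5,
    )
    if resp.status_code != 200:  # 204 means nothing is currently playing
        return None
    data = resp.json()
    return {
        "track_id": data["item"]["id"],
        "progress_ms": data["progress_ms"],
        "is_playing": data["is_playing"],
    }

# Poll once a second and broadcast changes to the room (broadcast not shown).
while True:
    state = get_playback_state()
    if state:
        print(state)
    time.sleep(1)
```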
## Accomplishments that we're proud of

* Building the music player
* The accuracy of the sync between peers

We learned so much in such a short time, and are glad that it was possible for us to participate in such a great opportunity.

## What we learned

Our group learned more about React, Flask, and coding backends. We were able to add animations to our website, which was a first.

## What's next for Groupify

Our next step for Groupify is to add functionality for multiple rooms. Currently, we only have one room available and it's public for all to join. We want to add support for multiple rooms and for private rooms accessed by secret room codes. This will be a major step towards setting Groupify up for public use.

# Links

YouTube: <https://youtu.be/VVfNrY3ot7Y>
Vimeo: <https://vimeo.com/506690155>

# Soundtrack

Emotions and music meet to give a unique listening experience where the songs change to match your mood in real time.

## Inspiration

The last few months haven't been easy for any of us. We're isolated and getting stuck in the same routines. We wanted to build something that would add some excitement and fun back to life, and help people's mental health along the way. Music is something that universally brings people together and lifts us up, but it's imperfect. We listen to the same favourite songs, and it can be hard to find something that fits your mood. You can spend minutes just trying to find a song to listen to. What if we could simplify the process?

## What it does

Soundtrack changes the music to match people's mood in real time. It introduces them to new songs, automates the song selection process, and brings some excitement to people's lives, all in a fun and interactive way. Music has a powerful effect on our mood. We choose new songs to help steer the user towards being calm or happy, subtly helping their mental health in a relaxed and fun way that people will want to use. We capture video from the user's webcam, feed it into a model that can predict emotions, generate an appropriate target tag, and use that target tag with Spotify's API to find and play music that fits. If someone is happy, we play upbeat, "dance-y" music. If they're sad, we play soft instrumental music. If they're angry, we play heavy songs. If they're neutral, we don't change anything.

## How we did it

We used Python with the OpenCV and Keras libraries as well as Spotify's API.

1. Authenticate with Spotify and connect to the user's account.
2. Read the webcam.
3. Analyze the webcam footage with OpenCV and a Keras model to recognize the current emotion.
4. If the emotion lasts long enough, send Spotify's search API an appropriate query and add the result to the user's queue (see the sketch below).
5. Play the next song (with fade out/in).
6. Repeat steps 2-5.

For the web app component, we used Flask and tried to use Google Cloud Platform with mixed success. The app can be run locally, but we're still working out some bugs with hosting it online.
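A minimal sketch of the emotion-to-music mapping from steps 3-4 above; the tag table and the debounce threshold are illustrative assumptions, not values from the project:

```python
import time

# Hypothetical mapping from a detected emotion to a Spotify search query.
EMOTION_QUERIES = {
    "happy": "upbeat dance",
    "sad": "soft instrumental",
    "angry": "heavy metal",
}

MIN_DURATION = 5.0  # seconds an emotion must persist before we react

def pick_query(emotion_log):
    """Return a search query once the latest emotion has lasted long enough.

    emotion_log holds (timestamp, emotion) pairs produced by the classifier.
    """
    if not emotion_log:
        return None
    latest = emotion_log[-1][1]
    if latest == "neutral":
        return None  # neutral: leave the music alone
    # Walk backwards until the emotion changes to measure its duration.
    start = emotion_log[-1][0]
    for ts, emo in reversed(emotion_log):
        if emo != latest:
            break
        start = ts
    if time.time() - start >= MIN_DURATION:
        return EMOTION_QUERIES.get(latest)
    return None
```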
## Challenges we ran into

We tried to host it as a web app and got it running locally with Flask, but had some problems connecting it with Google Cloud Platform. Making calls to the Spotify API pauses the video; reducing the calls to the API helped (faster fade in and out between songs). We also tried to recognize a hand gesture to skip a song, but ran into trouble combining that with other parts of our project and finding decent models.

## Accomplishments that we're proud of

* Making a fun app with new tools!
* Connecting different pieces in a unique way.
* We got to try out computer vision in a practical way.

## What we learned

How to use the OpenCV and Keras libraries, and how to use Spotify's API.

## What's next for Soundtrack

* Connecting it fully as a web app so that more people can use it
* Allowing for a wider range of emotions
* User customization
* Gesture support
## Inspiration

Growing up in the early 2000s, Communiplant's founding team knew what it was like to grow up in vibrant communities, interconnected both interpersonally and with nature. Today's post-Covid, fragmented society lacks the community and optimism that kept us going. That lack of optimism is especially evident in our climate crisis: an issue that falls outside most individuals' loci of control. That said, we owe it to ourselves and future generations to keep hope for a better future alive, **and that future starts at the communal level**. Here at Communiplant, we hope to help communities realize the beauty of street-level biodiversity, shepherding the optimism needed for a brighter future.

## What it does

Communiplant allows community members to engage with their community while realizing their jurisdiction's potential for sustainable development. First, Communiplant analyzes satellite imagery using machine learning and computer vision models to calculate the community's NDMI vegetation indices (a sketch of this calculation follows below). Beyond that, community members can individually contribute on Communiplant by uploading images of the various flora and fauna they see daily in their community. Using computer vision models, our system labels the plant life uploaded to the system, forming a mosaic that represents the community's biodiversity. Finally, to engage further, users can take part in a variety of community events.

## How we built it

Communiplant is a full-stack web application developed using React and Vite for the frontend and Django on the backend. We used AWS's cloud suite for relational data storage, keeping user records there. Beyond that, we used AWS to implement the algorithms necessary for the complex categorizations we needed to make; namely, we used AWS S3 object storage to maintain our various clusters. Finally, we used a variety of browser-level APIs, including but not limited to the Google Maps API and the Google Earth Engine API.
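As a rough illustration of the NDMI calculation mentioned above: NDMI is conventionally defined as (NIR - SWIR) / (NIR + SWIR) over the near-infrared and shortwave-infrared bands. The band arrays below are hypothetical stand-ins for the satellite imagery the app ingests:

```python
import numpy as np

def ndmi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Compute the Normalized Difference Moisture Index per pixel.

    Values range from -1 (dry / built-up) to +1 (well-watered vegetation).
    """
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    # Guard against division by zero where both bands are empty.
    denom = np.where((nir + swir) == 0, 1e-9, nir + swir)
    return (nir - swir) / denom

# Tiny 2x2 example with made-up reflectance values.
nir_band = np.array([[0.45, 0.50], [0.30, 0.05]])
swir_band = np.array([[0.20, 0.15], [0.25, 0.05]])
print(ndmi(nir_band, swir_band))
```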
## Challenges we ran into

While uOttaHack 6 has been incredibly rewarding, it has not been without its challenges. Namely, we found that attempting to use several bleeding-edge technologies we had little experience with, all in conjunction, led to a host of technical issues. First and most significantly, we found it difficult to implement cloud-based artificial intelligence workflows for the first time. We also had a lot of issues with some of the browser-level maps APIs, as we found that the documentation for some of those resources was insufficient for our experience level.

## Accomplishments that we're proud of

Regardless of the final result, we are happy to have made a final product with a concrete use case that has the potential to become a major player in the sustainability space. Above all, though, we are proud that through it all we were able to show technical resilience. There were many late-night moments where we didn't really see a way out, or where we would have to cut a significant amount of functionality from our final product. Regardless, we pushed through, and those experiences are what we will end up remembering uOttaHack for.

## What's next for Communiplant

The future is bright for Communiplant, with many features on the way. Of these, the most significant are related to the mapping functionality. Currently, user-inputted flora and fauna live only in a photo album on the community page. Going forward, we hope to have images linked to geographic points, or pins on the map. Regardless of Communiplant's future direction, however, we will keep our guarantee to support sustainability at all scales.
## Inspiration

SustainaPal is a project that was born out of a shared concern for the environment and a strong desire to make a difference. We were inspired by the urgent need to combat climate change and promote sustainable living. Seeing the increasing impact of human activities on the planet's health, we felt compelled to take action and contribute to a greener future.

## What it does

At its core, SustainaPal is a mobile application designed to empower individuals to make sustainable lifestyle choices. It serves as a friendly and informative companion on the journey to a more eco-conscious and environmentally responsible way of life. The app helps users understand the environmental impact of their daily choices, from transportation to energy consumption and waste management. With real-time climate projections and gamification elements, SustainaPal makes it fun and engaging to adopt sustainable habits.

## How we built it

The development of SustainaPal involved a multi-faceted approach, combining technology, data analysis, and user engagement. We opted for a React Native framework, and later incorporated Expo, to ensure the app's cross-platform compatibility. The project was structured with a focus on user experience, making it intuitive and accessible for users of all backgrounds. We leveraged React Navigation and React Redux for managing the app's navigation and state management, making it easier for users to navigate and interact with the app's features. Data privacy and security were paramount, so robust measures were implemented to safeguard user information.

## Challenges we ran into

Throughout the project, we encountered several challenges. Integrating complex AI algorithms for climate projections required a significant amount of development effort. We also had to fine-tune the gamification elements to strike the right balance between making the app fun and motivating users to make eco-friendly choices. Another challenge was ensuring offline access to essential features, as the app's user base could span areas with unreliable internet connectivity. We also grappled with providing a wide range of educational insights in a user-friendly format.

## Accomplishments that we're proud of

Despite the challenges, we're incredibly proud of what we've achieved with SustainaPal. The app successfully combines technology, data analysis, and user engagement to empower individuals to make a positive impact on the environment. We've created a user-friendly platform that not only informs users but also motivates them to take action. Our gamification elements have been well-received, and users are enthusiastic about earning rewards for their eco-conscious choices. Additionally, the app's offline access and comprehensive library of sustainability resources have made it a valuable tool for users, regardless of their internet connectivity.

## What we learned

Developing SustainaPal has been a tremendous learning experience. We've gained insights into the complexities of AI algorithms for climate projections and the importance of user-friendly design. Data privacy and security have been areas where we've deepened our knowledge to ensure user trust. We've also learned that small actions can lead to significant changes. The collective impact of individual choices is a powerful force in addressing environmental challenges. SustainaPal has taught us that education and motivation are key drivers for change.

## What's next for SustainaPal

The journey doesn't end with the current version of SustainaPal.
In the future, we plan to further enhance the app's features and expand its reach. We aim to strengthen data privacy and security, offer multi-language support, and implement user support for a seamless experience. SustainaPal will also continue to evolve with more integrations, such as wearable devices, customized recommendations, and options for users to offset their carbon footprint. We look forward to fostering partnerships with eco-friendly businesses and expanding our analytics and reporting capabilities for research and policy development. Our vision for SustainaPal is to be a global movement, and we're excited to be on this journey towards a healthier planet. Together, we can make a lasting impact on the world.
## Inspiration

Many people want to find ways to recycle more, make donations, find charities to support, visit local health clinics or nonprofits like Planned Parenthood, or support environmental causes, but don't know how or where to look. This often means looking up a place to donate clothes in your city, or a place that accepts recycling for certain materials like metals, for example. This app solves that problem by putting the locations of all such organizations in one place.

## What it does

The app includes a map where organizations (incentivized because they want to reach more people) and individuals can place a pin on established places (i.e. a junkyard or a building housing a health clinic), upload or take photos of the place, and add comments about it or other places. There are different maps based on interest: a Nonprofit map, Donations map, Volunteer map, and Health map, plus a Profile view where pins from all maps can be seen. Social media is very important to this app. It leverages social media by allowing users to log in with Facebook and post a comment about a location to their wall. To foster further discussion around social good, there is a section of the app where users can chat about these issues. It was inspired by the app Waze, where users can comment on traffic in real time; here, users can comment in real time on different issues.

## How I built it

It is an Android app written in Java. I used Parse for the backend, Facebook APIs for login and sharing posts to a user's wall, and the Google Maps API for pinning to maps.

## Accomplishments that I'm proud of

All of the special features on the map, such as filtering by date and shaking the device for a different version of the map (i.e. hybrid); creating a chat section so that users can communicate; and sharing to Facebook, since social media sharing is an important part of the app.

## What I learned

Setting up a Parse database, and creating functions to both take a photo with the app and upload from the existing photo library on the phone.

## What's next for Contribute

Moderating what is commented and posted. (Note: the GitHub link includes code for a completely different project; the most recent commit is my project for HackPrinceton.)
## Inspiration

Public speaking is greatly feared by many, yet it is a part of life that most of us have to go through. Despite this, effective ways to prepare for presentations are *greatly limited*. Practicing with others is good, but that requires someone willing to listen to you for potentially hours. Talking in front of a mirror could work, but it does not live up to the real environment of a public speaker. As a result, public speaking is dreaded not only for the act itself, but also because it's *difficult to feel ready*. If there were an efficient way of ensuring you aced a presentation, the negative connotation associated with presenting would no longer exist. That is why we have created Speech Simulator, a VR web application for practicing public speaking. With it, we hope to alleviate the stress that comes with speaking in front of others.

## What it does

Speech Simulator is an easy-to-use VR web application. Simply log in with Discord, import your script into the site from any device, then put on your VR headset to enter a 3D classroom, a common location for public speaking. From there, you are able to practice speaking. Behind the user is a board containing the script, split into slides, emulating a real PowerPoint-style presentation. Once you have run through your script, you may exit VR, where you will find results based on the application's recording of your presentation. From your talking speed to how many filler words you said, Speech Simulator provides stats based on your performance, as well as a summary of what you did well and how you can improve (a sketch of this kind of transcript analysis follows below). Presentations can be attempted again and are saved to our database. Additionally, any adjustments to the presentation templates can be made using our editing feature.

## How we built it

Our project was created primarily using the T3 stack. The stack uses **Next.js** as our full-stack React framework. The frontend uses **React** and **Tailwind CSS** for component state and styling. The backend utilizes **NextAuth.js** for login and user authentication and **Prisma** as our ORM. The whole application was kept type-safe using **tRPC**, **Zod**, and **TypeScript**. For the VR aspect of our project, we used **React Three Fiber** for rendering **Three.js** objects, **React XR**, and **React Speech Recognition** for transcribing speech to text. The server is hosted on Vercel and the database on **CockroachDB**.

## Challenges we ran into

Despite finishing, there were numerous challenges that we ran into during the hackathon. The largest problem was the connection between the web app on the computer and the VR headset. As the two were separate web clients, it was very challenging to communicate our site's workflow between the two devices. For example, if a user finished their presentation in VR and wanted to view the results on their computer, how would this be accomplished without the user manually refreshing the page? After debating between WebSockets and polling, we went with polling plus a queuing system, which allowed each client to know what to display. We decided to use polling because it enables a serverless deployment, and we concluded that we did not have enough time to set up WebSockets. Another challenge we ran into was the 3D configuration of the application. As none of us had real experience with 3D web applications, it was a very daunting task to work with meshes and various geometry. However, after a lot of trial and error, we were able to manage a VR solution for our application.
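A minimal sketch of the post-presentation stats described above: words per minute and filler-word counts computed from a transcript. The filler list and function names are illustrative assumptions, not the project's actual values:

```python
from collections import Counter

FILLER_WORDS = {"um", "uh", "like", "basically", "actually", "literally"}

def presentation_stats(transcript: str, duration_seconds: float) -> dict:
    """Compute talking speed and filler-word usage from a speech transcript."""
    words = transcript.lower().split()
    counts = Counter(w.strip(".,!?") for w in words)
    fillers = {w: counts[w] for w in FILLER_WORDS if counts[w] > 0}
    return {
        "word_count": len(words),
        "words_per_minute": len(words) / (duration_seconds / 60),
        "filler_words": fillers,
        "filler_total": sum(fillers.values()),
    }

stats = presentation_stats(
    "So um today I will basically talk about, uh, our project.", 15.0
)
print(stats)
```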
## What we learned

This hackathon provided us with a great amount of experience and many lessons. Although each of us learned a lot on the technological side, there were many other takeaways from this weekend. As this was most of our group's first 24-hour hackathon, we learned to manage our time effectively over the span of a day. With a small time limit and a fairly large project, this hackathon also improved our communication skills and the overall coherence of our team. However, we did not just learn from our own experiences, but also from others. Viewing everyone's creations gave us insight into what makes a project meaningful, and we gained a lot from looking at other hackers' projects and presentations. Overall, this event provided us with an invaluable set of new skills and perspective.

## What's next for VR Speech Simulator

There are a ton of ways that we believe could improve Speech Simulator. The first and potentially most important change is the appearance of our VR setting. As this was our first project involving 3D rendering, we had difficulty adding colour to our classroom. This reduced the immersion that we originally hoped for, so improving our 3D environment would let users practice more realistically. Furthermore, as public speaking implies speaking in front of others, large improvements could be made by adding human models to the VR scene. We also believe that we can improve Speech Simulator by adding more functionality to the feedback it provides. From hand gestures to tone of voice, there are many more ways of differentiating the quality of a presentation that could be added to our application. In the future, we hope to add these new features and further elevate Speech Simulator.
## Inspiration

Public speaking is an incredibly important skill that many seek but few master. This is in part due to the high level of individualized attention and feedback needed to improve while practicing. Therefore, we want to solve this with AI! We have created a VR application that gives you constructive feedback as you present, debate, or perform by analyzing your arguments and speaking patterns. While this was our starting motivation for ArticuLab, we quickly noticed its expansive applications and opportunities for social impact. ArticuLab could be used by people suffering from social anxiety to help improve their confidence in speaking in front of crowds and responding to contrasting opinions. It could also be used by people trying to become more fluent in a language, since it corrects pronunciation and word choice.

## What it does

ArticuLab uses AI in a VR environment to recommend changes to your pace, argument structure, clarity, and body language when speaking. It holds the key to individualized public-speaking practice. In ArticuLab you also have the opportunity to debate directly against an AI, which will point out all the flaws in your arguments and make counterarguments so you can make your defense rock-solid.

## How we built it

For our prototype, we used Meta's Wit.ai natural language processing software for speech recognition, built a VR environment in Unity, and used OpenAI's powerful ChatGPT as the basis of our feedback system for argument construction and presenting ability. Embedding this into an integrated VR app results in a seamless, consumer-ready experience.

## Challenges we ran into

The biggest challenge we ran into was using the VR headset microphone as input for the speech recognition software, and then feeding that directly into our AI system. What made this so difficult was adapting the output format of each API to fit the next. In the same thread, we ran into an issue where the microphone input would only last for a few seconds, limiting the dialogue between the user and the AI in a debate. These issues were also difficult to test because of the loud environment we were working in. Additionally, we had to create a VR environment from scratch, since there were no free assets that fit our needs.

## Accomplishments that we're proud of

We're especially proud of accomplishing such an ambitious project with a team that is mostly beginners! TreeHacks is three of our members' first hackathon, so everyone had to step up and learn more new skills to implement in our project.

## What we learned

We learned a lot about speech-to-text software, designing an environment and programming in Unity, adapting the powerful ChatGPT to our needs, and integrating a full-stack VR application.

## What's next for ArticuLab

Naturally, there would be lots more polishing of the cosmetics and user interface of the program, which are currently restricted by the financial resources and time available. Among these improvements would be making the environment higher definition with better-quality assets, crowd responses, ChatGPT responses with ChatGPT Plus, etc. ArticuLab could be useful both academically and professionally in a variety of fields: education, project pitches like TreeHacks, company meetings, event organizing… the list goes on!
We would also seek to expand the project into alternate versions adapted to users' needs. For example, a simplified iOS version could be used by public speakers to keep notes on their speech and let them know live if they're speaking too fast, too slow, or articulating correctly! Similarly, such a feature could be integrated into the VR version, so a presenter could have notes on their podium and media to present behind them (a PowerPoint, video, etc.), simulating an even more realistic presenting experience. Another idea is adding a multiplayer version, which would exponentially expand the uses for ArticuLab. Our program could allow debate teams to practice live in front of a mix of AI and real crowds; similarly, ArticuLab could host live online debates between public figures and politicians in the VR environment.
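As a rough illustration of the ChatGPT-based feedback loop described in the "How we built it" section above, here is a minimal sketch assuming the official `openai` Python SDK; the prompt, model name, and function are illustrative guesses, not the project's actual configuration:

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env variable

def speech_feedback(transcript: str) -> str:
    """Ask the model for public-speaking feedback on a transcribed speech."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": "You are a public-speaking coach. Critique the "
                           "argument structure, clarity, and pacing.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(speech_feedback("Today I will argue that homework should be optional..."))
```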
> Domain.com domain: IDE-asy.com

## Inspiration

Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry and allowing everyone equal access to express their ideas in code.

## What it does

Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudocode and our platform will interpret the audio command and generate the corresponding JavaScript code snippet in the web-based IDE.

## How we built it

We used React for the frontend and the recorder.js API for the user's voice input, with RunKit for the in-browser IDE. We used Python and Microsoft Azure for the backend; Azure's Cognitive Speech Services modules process the user input and provide the syntactic translation for the frontend's IDE.
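As a rough illustration of the speech-to-text step in the backend described above, here is a minimal sketch assuming the `azure-cognitiveservices-speech` Python SDK and hypothetical credentials; the pseudocode-to-JavaScript translation step is not shown:

```python
import azure.cognitiveservices.speech as speechsdk

# Hypothetical credentials; in practice these come from the Azure portal.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")

def transcribe(wav_path: str) -> str:
    """Run one-shot speech recognition on a recorded audio file."""
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, audio_config=audio_config
    )
    result = recognizer.recognize_once()
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        return result.text
    return ""  # no speech recognized or an error occurred

# The recognized pseudocode would then be translated into JavaScript.
print(transcribe("command.wav"))
```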
## Challenges we ran into

> "Before this hackathon I would usually deal with the back end; however, for this project I challenged myself to experience a different role. I worked on the front end using React. As I do not have much experience with either React or JavaScript, I put myself through the learning curve. It didn't help that this hackathon was only 24 hours, but I did it. I did my part on the front end and I now have another language to add to my resume. The main challenge I dealt with was the fact that many of the Voice reg" *-Iyad*

> "Working with blobs and voice data in JavaScript was entirely new to me." *-Isaac*

> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However, with the aid of recorder.js and Python Flask, we were able to properly implement the Azure model." *-Amir*

> "I had never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing Python to hit API endpoints was unfamiliar to me at first; however, with extended effort and exploration my team and I were able to implement the model into our hack. Now, with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*

## Accomplishments that we're proud of

> "We had a few problems working with recorder.js as it used many outdated modules; as a result, we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*

> "Being able to use Node and recorder.js to send user audio files to our back end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*

> "Generating and integrating the Microsoft Azure Speech to Text model in our back end was a great accomplishment for our project. It allowed us to parse a user's pseudocode into properly formatted code to provide to our website's IDE." *-Amir*

> "Being able to properly integrate and interact with Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*

## What we learned

> "I learned how to connect the backend to a React app, and how to work with the voice recognition and recording modules in React. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure's servers." *-Iyad*

> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*

> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web apps. It was a unique experience integrating a Flask app with Azure Cognitive Services. The challenging part was making Speaker Recognition work, which unfortunately seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speech to Text cognitive models, and I ended up creating a neat API for our app." *-Amir*

> "The biggest thing I learned was how to generate, call, and integrate Microsoft Azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience." *-Kris*

## What's next for QuickCode

We plan on continuing development and making this product available on the market. We first hope to include more functionality within JavaScript, then extend support to other languages. From there, we want to integrate a group development environment where users can work on files and projects together (version control). During the hackathon we also planned to add voice recognition that recognizes and highlights which user is inputting (speaking) which code.
## Inspiration

How can we give millennials confidence in their ability to save money? When you don't believe you have money, you don't think about saving. We aimed to create a simple, fun, and engaging tool that shows you - literally - how effective saving even the smallest amount can be. Rather than display yet another screen full of unclear calculations and pie charts, we use augmented reality to pile up stacks of cash right in front of the user to show them how their small savings can become big. Money has become too abstract and the conversation has become too heavy. We need to make it real again. We need a **Reality Cheque**.

## What it does

**Reality Cheque** is an augmented reality iOS app built using ARKit with custom 3D and 2D assets. The user enters their monthly income and any savings they may currently have. The app then renders their current savings as mathematically accurate AR stacks of cash. The experience takes them through ten- and twenty-five-year jumps to show how that small investment will balloon with compound interest. AR is a perfect conversation starter, so we added a screenshot-and-share function that lets users snap pics with their massive pile of future saved cash. At the end, the app redirects that positive energy into a call to action: go even further by getting in touch with a local bank. It's fun, goofy, and finally paints small savings in a positive and understandable light!

## How we built it

We chose to create the application on iOS so that we could utilize the iPhone's AR capabilities. As the main developer, I began by learning how to place an AR anchor and drawing a simple square on a horizontal plane. I then began generating random objects on that plane and focused on organizing them into neat stacks. By dividing the horizontal plane into a matrix, I was able to safely place objects into neat rows and columns on the plane.

After the core functionality was secure, I began implementing the UI based on the mockups and 2D assets produced by my teammates. Once the flow of the application was established, I used the user inputs to calculate exactly how many stacks of money (5 x $20) would amount to the money the user would expect to have at each preset time period (1 year, 10 years, 25 years).
The AR app was then able to dynamically generate accurate money piles based on the personalized input and expected return on investment. Once all the main parts of the app were completed, I worked on adding a share screen that lets users photograph their piles and send them via various applications on their phones.

## Challenges we ran into

A big starting challenge was figuring out how to place the objects in a random order so that they made "chaotic piles", while also tracking these piles to make sure no objects were floating above the ground. Once a prototype of the money pile was developed, I was given the model for the actual 3D money we were going to use. There were some slight modifications to be made to the model, as the orientation and scale changed when it was converted from .DAE to iOS's .SCN file type. To add dynamic polish, I challenged myself to animate the money falling from the sky. Finally, the calculation for compound interest over time was confusing, but luckily I was able to cross-reference my own numbers with online calculators to make sure our algorithm was correct (a sketch of this kind of calculation follows below).
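As a rough illustration of the savings math described above: compounding monthly deposits and converting the future value into $100 stacks (five $20 bills). The deposit amount and interest rate here are assumed examples, not the app's actual figures:

```python
def future_value(monthly_deposit: float, annual_rate: float, years: int) -> float:
    """Future value of monthly deposits with interest compounded monthly."""
    balance = 0.0
    monthly_rate = annual_rate / 12
    for _ in range(years * 12):
        balance = (balance + monthly_deposit) * (1 + monthly_rate)
    return balance

def stack_count(amount: float) -> int:
    """Number of AR money stacks to render, at 5 x $20 = $100 per stack."""
    return int(amount // 100)

for years in (1, 10, 25):
    fv = future_value(monthly_deposit=50, annual_rate=0.05, years=years)
    print(f"{years:>2} years: ${fv:,.2f} -> {stack_count(fv)} stacks")
```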

From the user-experience point of view, the UX team was challenged in trying to determine how much or how little functionality to add. Ultimately we stayed focused on our user base: unconfident and underpowered millennials who felt they didn't have money to save. We kept the experience incredibly accessible and fun so that people would be more likely to try it, share it, and think about it. We didn't want to replace proper retirement and savings tools - we simply wanted to instigate the desire to use them.

## Accomplishments that we're proud of

Our team really came together to produce what we believe is a strong marriage of modern technology and unique user experience. We aimed from the get-go to keep technology and UX in balance to produce a high-quality app, and we're very proud of what we accomplished. This was achieved through high-quality assets, a simple and relatable user flow, and detailed use of modern technology.

## What we learned

We have a multidisciplinary team, which meant we all learned new things in totally different directions. The main developer had to dig deep into iOS UI and ARKit, the 3D artist had to push their abilities in working with complex UV maps, and the 2D artist extended their practice into vector art for mobile UI.

## What's next for **Reality Cheque**

We envision this app as an added feature alongside a financial institution as a form of social outreach. It could help young people feel more positive about saving money while providing them with stepping stones towards seeking more advanced financial advisement. The app can be white-labelled according to the bank's branding and redirect the user to specific, entry-level investment channels. *"See the value of starting small."*
## Inspiration

Every project aims to solve a problem and address people's concerns. When we walk into a restaurant, we are often bothered that so few photos are printed on the menu, even though we are always eager to find out what a dish looks like. Surprisingly, including a nice-looking picture alongside a food item increases sales by 30%, according to Rapp. So it's a big inconvenience for customers if they don't understand the name of a dish. This is what we are aiming for! This is where we get into the field! We want to create a better impression for every customer and a more customer-friendly restaurant experience. We want every person to immediately know what they would like to eat and get a first impression of a specific dish at a restaurant.

## How we built it

We mainly used ARKit, MVC, and various APIs to build this iOS app. We first start by entering an AR session, and then we crop the image programmatically to feed it to the OCR from Microsoft Azure Cognitive Services, which recognizes the text from the image, though not perfectly. We then feed the recognized text to Azure's Spell Check to further improve the quality of the text. Next, we use the Azure Image Search service to look up the dish image from Bing, using Alamofire and SwiftyJSON to fetch it. We create a virtual card using SceneKit and place it above the menu in the AR view. We used Firebase as the backend database and for authentication, and we built interactions between the virtual card and users so that users can see more information about the dishes they order.

## Challenges we ran into

We ran into various unexpected challenges when developing for augmented reality and using the APIs. First, there is very little documentation on how to use Microsoft APIs in iOS apps; we learned how to use third-party libraries for building HTTP requests and parsing JSON files. Second, we had a really hard time understanding how augmented reality works in general, and how to place a virtual card within SceneKit. Last, we were challenged to develop the same project as a team! It was the first time each of us was pushed to use Git and GitHub, and we learned a lot about branches and version control.

## Accomplishments that we're proud of

Having studied Swift and iOS development for only a month, we created our very first AR app. Taking on such a difficult, high-tech field was a big challenge, and that is what we are most proud of. In addition, we implemented lots of APIs and created many "objects" in AR, and they all work well together. We encountered a few bugs during development, but we worked through all of them. We're proud of combining some of the most advanced technologies in software: AR, cognitive services, and computer vision.

## What we learned

Throughout development, we learned how to create our own AR models, the structure of an AR scene, and how to combine different APIs to achieve our main goal. First of all, we improved our ability to code in Swift, especially for AR. Creating objects in the AR world taught us the tree structure of a scene and the relationships between parent nodes and their child nodes. What's more, we got to know Swift more deeply, specifically its MVC model. Last but not least, the bugs taught us how to solve problems as a team and how to reduce the chance of buggy code next time. Most importantly, this hackathon showed us the strength of teamwork.
## What's next for DishPlay

We want to build more interactions with ARKit, including displaying a collection of dishes on a 3D shelf, and cool animations showing how favorite dishes are made. We also want to build a large-scale database for comments, ratings, and other dish-related information! We are happy that Yelp and OpenTable bring us closer to restaurants. We are excited about our project because it will bring us closer to our favorite food!
## Inspiration

As university students, emergency funds may not be at the top of our priority list; however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that putting a set amount of money away every time income rolls in may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.

## What it does

Spend2Save lets the user set up an emergency fund. The user inputs their employment status, a baseline amount, and a goal for the emergency fund, and the app creates a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user earns milestones or achievements for reaching certain sub-goals, which also gives them extra motivation if their emergency fund falls below the baseline amount they set. Users can also change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly.

## How we built it

We used Flutter to build the interactive prototype of our Android application.

## Challenges we ran into

None of us had prior experience using Flutter, let alone mobile app development. Learning to use Flutter in a short period of time was easily the greatest challenge that we faced. We originally had more features planned, including storing data with Firebase, so having to compromise on our initial goals and focus our efforts on what was achievable in the time available proved to be challenging.

## Accomplishments that we're proud of

This was the first mobile app we developed (as well as our first hackathon).

## What we learned

This being our first hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one. As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.

## What's next for Spend2Save

There is still a long way for us to grow as developers, so the full implementation of Spend2Save will depend on our progress. We believe there is potential for such an application to appeal to its target audience, and so we have plans for the future of Spend2Save. These plans include, but are not limited to, integration with actual bank accounts at RBC.
## Inspiration

We wanted to provide an easy, interactive, and ultimately fun way to learn American Sign Language (ASL). We had the opportunity to work with the Leap Motion hardware, which allowed us to track intricate real-time data about hand movements. Using this data, we thought we would be able to decipher complex ASL gestures.

## What it does

Using the Leap Motion's motion-tracking technology, it prompts the user to replicate various ASL gestures. With real-time feedback, it tells the user how accurate their gesture was compared to the actual hand motion. Using this feedback, users can immediately adjust their technique and ultimately perfect their ASL! (A sketch of this kind of accuracy feedback follows below.)

![Alt Text](https://media.giphy.com/media/3ohc1ef7gHPfHTVPnq/giphy.gif)

## How I built it

It is a web app built with JavaScript, HTML, and CSS. We had to train our data using various machine-learning repositories to ensure accurate recognition, as well as other plugins that allowed us to visualize the hand movements in real time.

## Challenges I ran into

Training the data was difficult, as gestures are complex forms of data, composed of many different data points in the hand's joints and bones, but also in the progression of hand "frames". As a result, we had to take in a lot of data to ensure a thorough dataset that matched these features to a classification with the correct ASL label (or phrase).

## Accomplishments that I'm proud of

The user interface. Training the data. Working on a project that could actually impact others!

## What I learned

Hard work and dedication. Computer vision. Machine learning.

## What's next for Leap Motion ASL

More words? Game mode? Better training? More phrases? More complex combos of gestures?

![Alt Text](https://media.giphy.com/media/vFKqnCdLPNOKc/giphy.gif)
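As a rough illustration of the accuracy feedback described above, comparing a captured gesture's joint positions against a reference gesture with cosine similarity. Shown in Python for brevity, with made-up feature vectors standing in for Leap Motion frames:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two flattened joint-position vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def accuracy_percent(user_frame, reference_frame):
    """Map similarity to a 0-100 accuracy score to show the learner."""
    return max(0.0, cosine_similarity(user_frame, reference_frame)) * 100

# Made-up 6-value frames (two joints x three coordinates each).
reference = [0.1, 0.8, 0.3, 0.5, 0.2, 0.9]
attempt = [0.12, 0.75, 0.33, 0.48, 0.25, 0.85]
print(f"Gesture accuracy: {accuracy_percent(attempt, reference):.1f}%")
```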
# 🎯 The Project Story

### 🔍 **About Vanguard**

In today's fast-paced digital landscape, **cybersecurity** is not just important—it's essential! As threats multiply and evolve, security teams need tools that are **agile**, **compact**, and **powerful**. Enter **Vanguard**, our groundbreaking Raspberry Pi-powered vulnerability scanner and WiFi hacker. Whether you're defending **air-gapped networks** or working on **autonomous systems**, Vanguard adapts seamlessly, delivering real-time insights into network vulnerabilities. It's more than a tool; it's a **cybersecurity Swiss Army knife** for both **blue** and **purple teams**! 🛡️🔐

---

### **Air-Gapped Network Deployability (CSE Challenge)**

* **Databases**: Relying on a dedicated cloud database of vulnerabilities could pose a problem for deployments within air-gapped networks. Luckily, Vanguard can be deployed without the need for an external vulnerability database. A local database is stored on disk and contains precisely the information needed to identify vulnerable services. If necessary, Vanguard can be connected to a station with controlled access and data flow to reach the internet; this station could be used to periodically update Vanguard's databases.
* **Data Flow**: Data flow is crucial in an embedded cybersecurity project. The simplest approach would be to send all data to a dedicated cloud server for remote storage and processing. However, Vanguard is designed to operate in air-gapped networks, meaning it must manage its own data flow for processing collected information. Different data sources are scraped by a Prometheus server, which then feeds into a Grafana server. This setup allows data to be organized and visualized, enabling users to be notified if a vulnerable service is detected on their network. Additionally, more modular services can be integrated with Vanguard, and the data flow will remain compatible and supported.
* **Remote Control**: It is important for Vanguard to be able to receive tasks. Our solution provides various methods for controlling Vanguard's operations. Vanguard can be pre-packaged with scripts that run periodically to collect and process data. Similar to the Assemblyline product, Vanguard can use cron jobs to create a sequence of scripts that parse or gather data. If Vanguard goes down, it will reboot and all its services will restart automatically. Services can also be run as containers. Within an air-gapped network, Vanguard can still be controlled and managed effectively.
* **Network Discovery**: Vanguard scans the internal air-gapped network and keeps track of active IP addresses. This information is then fed into Grafana, where it serves as a valuable indicator for networks that should have only a limited number of devices online.

---

### **Air-Gapped Network Scanning (Example)**

Context: the Raspberry Pi is connected to a hotspot network to mimic an air-gapped network, and Docker containers are run to simulate devices on that network. This example shows how Vanguard identifies a vulnerable device on the air-gapped network.

* **Step 1: Docker container**. A vulnerable Docker container is running at 10.0.0.9.

![alt text](https://i.imgur.com/SvjJNci.png)

* **Step 2: Automated scanning on Vanguard picks up the new IP**. Vanguard automatically scans our network and stores any important information it finds.
Here are the cron scripts:

![Alt text](https://i.imgur.com/XZZpoXx.png)

In /var/log, Vanguard logged a new IP:

![Alt text](https://i.imgur.com/t2LJshz.png)

Vanguard's port scanner found open ports on our vulnerable device:

![Alt text](https://i.imgur.com/acrcg6u.png)

* Step 3: Prometheus scrapes results and Grafana displays them

The IP activity history shows how many times an IP was seen:

![Alt text](https://i.imgur.com/NiI0Dqa.png)

Vulnerability logs are displayed on our Grafana dashboard, where we can see that our ports were flagged as running a vulnerable service (the two red blocks on the right, for ports 21 and 22 only).

![Alt text](https://i.imgur.com/Wsco0uw.png)

* Conclusion

This data flow was able to detect a new device and its vulnerable services without the need for cloud or internet services. Vanguard's automated scripts ran and detected the anomaly!

### 💡 **Inspiration**

Our team was fascinated by the idea of blending **IoT** with **cybersecurity** to create something truly **disruptive**. Inspired by the open-source community and projects like dxa4481’s WPA2 handshake crack, we saw an opportunity to build something that could change the way we handle network vulnerabilities. We didn’t just want a simple network scanner; we wanted **Vanguard** to be **versatile**, **portable**, and **powerful** enough to handle even the most **secure environments**, like air-gapped industrial networks or autonomous vehicles 🚗💻.

---

### 🏆 **Accomplishments**

* **Nmap** automates network scans, finding open ports and vulnerable services 🕵️‍♂️.
* A **SQLite database** of CVEs cross-references scan results, identifying vulnerabilities in real time 🔓📊 (see the sketch below).
* **Grafana** dashboards monitor the Raspberry Pi, providing metrics on **CPU usage**, **network traffic**, and much more 📈.
* The WiFi cracking module captures WPA2 handshakes and cracks them using open-source techniques, automating the process 🔑📶.
* Different services run automatically and return data, and everything comes together seamlessly in the Vanguard dashboard.

Additionally, we integrated **Convex** as our backend data store to keep things **fast**, **reliable**, and easy to adapt for air-gapped networks (swap Convex for MongoDB with a breeze 🌬️; we really wanted to take part in the Convex challenge).

---

### 🔧 **Challenges We Faced**

Building **Vanguard** wasn’t without its obstacles. Here's what we had to overcome:

* 💻 **Air-gapped testing**: Ensuring Nmap runs flawlessly without external network access was tricky. We fine-tuned cron jobs to make the scanning smooth and reliable.
* 🚦 **Data efficiency**: Working with a Raspberry Pi means limited resources. Optimizing how we process and store data was key.
* 🛠️ **Seamless WiFi hacking**: Integrating WPA2 half-handshake cracking without impacting Pi performance required some creative problem-solving.

---

### 🏗️ **How We Built It**

* **Hardware**: Raspberry Pi 🥧 with an external WiFi adapter 🔌.
* **Backend**: We used **Convex** for data storage, with the option to switch to **MongoDB** for air-gapped use 🗃️.
* **Scanning & Exploiting**: Nmap runs on a schedule to scan, and CVEs are stored in **SQLite** for mapping vulnerabilities 🔗.
* **Monitoring**: Metrics and performance insights are visualized through **Grafana**, keeping everything transparent and easy to manage 📊.
* **Frontend**: Built with **React** and **Next.js 14**, the user interface is sleek and efficient 🎨.
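For readers curious how the scan-and-cross-reference step fits together, here is a minimal sketch. It assumes a local SQLite table `cves(service, version, cve_id)` and the subnet from the example; the paths and schema are our assumptions, not Vanguard's exact code:

```python
# Sketch: parse nmap service-detection output and look up each
# service/version pair in a local SQLite CVE table (schema assumed).
import sqlite3
import subprocess
import xml.etree.ElementTree as ET

subprocess.run(["nmap", "-sV", "-oX", "/tmp/scan.xml", "10.0.0.0/24"], check=True)
root = ET.parse("/tmp/scan.xml").getroot()

db = sqlite3.connect("/opt/vanguard/cves.db")
for port in root.iter("port"):
    svc = port.find("service")
    if svc is None:
        continue
    name, ver = svc.get("name", ""), svc.get("version", "")
    hits = db.execute(
        "SELECT cve_id FROM cves WHERE service = ? AND version = ?",
        (name, ver),
    ).fetchall()
    for (cve,) in hits:
        print(f"VULNERABLE: {name} {ver} -> {cve}")  # surfaced in Grafana
```

A cron entry would run this script on a schedule, so detections appear in the dashboard without any operator action.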
A big thanks to <https://github.com/dxa4481> for the open-source WPA2 handshake PoC code.

---

### 🚀 **What’s Next for Vanguard?**

We're just getting started! Here’s what’s in store for Vanguard:

* 🤖 **AI-driven vulnerability prediction**: Imagine knowing where a breach might happen **before** it occurs. We'll use machine learning to predict vulnerabilities based on historical data.
* ⚙️ **Modular add-ons**: Integrate tools like **Metasploit** or **Snort** for more specialized attacks, making Vanguard a **customizable powerhouse**.
* 🧳 **Enhanced portability**: We're optimizing Raspberry Pi hardware to push Vanguard’s limits even further, and exploring even more **compact** versions to make it the ultimate on-the-go tool!

---

Vanguard isn’t just a project; it’s the **future** of portable, proactive **cybersecurity**. 🌐🔐 **Stay secure, stay ahead!**
## What it does
Our project introduces ASL letters, words, and numbers to you in a flashcard manner.

## How we built it
We built our project with React, Vite, and TensorFlowJS.

## Challenges we ran into
Some challenges we ran into included issues with git commits and merging. Over the course of our project we made mistakes while resolving merge conflicts, which resulted in a large part of our project almost being discarded. Luckily we were able to git revert back to the correct version, but time was lost regardless. With our TensorFlow model, we had trouble reading the input/output and getting the webcam working.

## Accomplishments that we're proud of
We are proud of the work we got done in the time frame of this hackathon at our skill level. We learned a lot from the workshops we attended and can't wait to implement those lessons in future projects!

## What we learned
Over the course of this hackathon we learned that it is important to clearly define our project scope ahead of time. We spent a lot of our time on day 1 thinking about what we could do with the sponsor technologies, and should have looked into them in more depth before the hackathon.

## What's next for Vision Talks
We would like to train our own ASL image detection model so that people can practice at home in real time. Additionally, we would like to transcribe their signs into plain text and voice so that they can confirm what they are signing. Expanding our project beyond ASL to other languages is also something we wish to do.
partial
## Inspiration
With the cost of living increasing yearly and inflation at an all-time high, people need financial control more than ever. The problem is that the investment field is not beginner friendly; its confusing vocabulary and abundance of concepts create an environment detrimental to learning. We felt the need to make a clear, well-explained environment for learning about investing and money management, and thus we created StockPile.

## What it does
StockPile provides a simulation environment of the stock market that allows users to create virtual portfolios in real time. With relevant information and explanations built into the UX, the complex world of investments is explained in simple words, one step at a time. Users can set up multiple portfolios to try different strategies, learn vocabulary by seeing exactly where the terms apply, and access articles tailored to their actions in the simulator using AI-based recommendation engines.

## How we built it
Before starting any code, we planned and prototyped the application using Figma and fully planned a backend architecture. We started our project using React Native for a mobile app, but due to connection and network issues while collaborating, we moved to a web app that runs on the phone using React.

## Challenges we ran into
Some challenges we faced were creating a minimalist interface without losing necessary information, and incorporating both learning and interaction simultaneously. We also realized that we would not be able to finish much of our project in time, so we had to single out what to focus on to make our idea presentable.

## Accomplishments that we're proud of
We are proud of our interface, the depth to which we fleshed out our starter concept, and the ease of access of our program.

## What we learned
We learned about

* Refining complex ideas into presentable products
* Creating simple and intuitive UI/UX
* How to use React Native
* Finding stock data from APIs
* Planning backend architecture for an application

## What's next for StockPile
Next up for StockPile would be to actually finish coding the app, preferably as a mobile version over a web version. We would also like to add the more complicated views, such as explanations for candle charts, market volume charts, etc.

## How StockPile approaches its challenges:

#### Best Education Hack
Our entire project is based around encouraging, simplifying, and personalizing the learning process. We believe that everyone should have access to a learning resource that adapts to them while providing a gentle yet complete introduction to investing.

#### MLH Best Use of Google Cloud
Our project uses several Google services at its core.

- GCP App Engine - We can use App Engine to host our React frontend and some of our backend.
- GCP Cloud Functions - We can use Cloud Functions to quickly create microservices for different services, such as a backend for fetching stock chart data from Finnhub (sketched below).
- GCP Compute Engine - To host a CMS for the Learn page content, and to host an instance of CockroachDB.
- GCP Firebase Authentication - To authenticate users securely.
- GCP Recommendations AI - Used with other statistical operations to analyze a user's portfolio and present them with the articles/tutorials best suited for them in the Learn section.

#### MLH Best Use of CockroachDB
CockroachDB is a distributed SQL database - one that can scale.
We understand that buying/selling stocks is transactional in nature, and there is no better solution than using a SQL database. Additionally, we can use CockroachDB as a time-series database; this allows us to effectively cache stock price data so we can optimize the cost of new requests to our stock quote API.
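As an illustration of the quote-fetching microservice mentioned above, here is a minimal Python sketch that calls Finnhub's public quote endpoint; the environment variable and error handling are simplified assumptions, and in production the result would be cached in CockroachDB:

```python
# Sketch: Cloud Function-style handler that fetches a stock quote
# from Finnhub; token handling and caching are simplified here.
import os
import requests

FINNHUB_TOKEN = os.environ["FINNHUB_TOKEN"]  # assumed to be configured

def get_quote(symbol: str) -> dict:
    resp = requests.get(
        "https://finnhub.io/api/v1/quote",
        params={"symbol": symbol, "token": FINNHUB_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"c": current, "h": high, "l": low, ...}

print(get_quote("AAPL"))
```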
Long-distance is challenging for any relationship, whether romantic, platonic, or familial.

* 32.5% of college relationships are long-distance relationships (LDRs)
* Nearly three-quarters (72%) of Americans feel lonely

The lack of physical presence often leads to feelings of disconnect and loneliness. Current solutions, such as video calls and messaging apps, lack the depth and immersion needed to truly feel connected. The most crucial aspects of a relationship, shared activities and the involvement of senses beyond just sight and sound, are often the hardest to achieve from a distance.

This Valentine’s weekend, we present to you… VR-Tines! 💗 VR-Tines is an innovative VR experience designed to enhance long-distance relationships through immersive, interactive, and emotionally fulfilling activities.

## ⭐ Key Experience Points

* **Collaborative Scrapbooking**: Couples can work together on a scrapbook, flipping through pages and dragging in photos and various decorative elements. The scrapbook feels like you’re actually working on it in the real world thanks to its alignment with 3D surfaces and its interactive elements of flipping pages and dragging in items. With your partner right next to you, it’s like you’re working on it together in a shared space. Love to reflect on the mems :)
* **Shared Taste**: Get a themed meal at the same time! Here, we use the two users’ favorite boba orders. Using the DoorDash API, we synchronize the delivery of identical beverages or snacks to both partners during their VR date, which we track in the DoorDash delivery simulator.
* **Enhanced Realism with Live Webcam Feed**: People typically use passthrough with VR headsets to feed what’s “real” into their experiences. We take advantage of this idea to stream a live webcam feed into the VR-Tines experience, so it feels like your partner is actually sitting right next to you (you see them in passthrough) as you do activities together!
* **Tab Bar Navigation**: We support toggling through the three main options: scrapbooking, DoorDash, and home.
* **Social Impact**: VR-Tines is about bringing people together. Our project has the potential to significantly reduce the emotional distance in long-distance relationships, fostering stronger bonds and happier couples.

This problem is also important to us because all of us are in long-distance relationships! We also notice similar issues arising in all kinds of relationships in life, such as family, friends, and work, due to the difficulty of being far apart. Therefore, we sought to solve this real user problem faced every day and create a better solution than current call-based communication: a more seamless, immersive experience for feeling closer to our loved ones.

As first-time hackers and Stanford freshmen, the concept of home stretches across oceans to Myanmar and Vietnam, where our families reside. The yearning for a deeper connection with our loved ones, despite the geographical miles that separate us, sparked not just a need but a personal quest. Facing this hackathon, our greatest challenge was not the complexity of the technology or the novelty of the concept, but the mental hurdle of believing we could make a significant impact. This project became more than a hack; it evolved into a journey of discovery, learning, and overcoming, driven by our shared experiences of longing and the universal desire to feel closer to those we hold dear.
It’s a testament to our belief that distance shouldn't dim the bonds of love and friendship, but that, with the right innovation, it can be bridged beautifully.
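As a sketch of how the "Shared Taste" synchronization could look on the backend, here is illustrative Python where `place_order` is a hypothetical wrapper around the DoorDash API call; the addresses and the drink are made up:

```python
# Sketch: place two identical orders at once so both partners get the
# same boba during the VR date. place_order is a hypothetical wrapper
# around the DoorDash API; in the real app its delivery ids feed the
# delivery simulator view.
import concurrent.futures

def place_order(address: str, item: str) -> str:
    # Placeholder for the actual DoorDash API call; returns an id
    # that the app would track for delivery status.
    return f"delivery-of-{item}-to-{address}"

addresses = ["123 Campus Dr, Stanford", "456 Ocean Ave, San Francisco"]
with concurrent.futures.ThreadPoolExecutor() as pool:
    ids = list(pool.map(lambda a: place_order(a, "brown sugar boba"), addresses))
print(ids)
```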
## Inspiration
During Hack the North, we realized how difficult it was to find suitable teammates to collaborate with. Browsing through Slack channels and random project ideas felt inefficient, leading to frustration. This struggle inspired the idea for Developers Assemble: a platform where developers can quickly find others based on skills and project needs, using a simple swiping model.

## What it does
Developers Assemble connects developers looking to collaborate on projects. Users create profiles highlighting their skills (e.g., frontend, backend, full-stack), and can post or swipe on projects looking for teammates. The platform helps developers match with projects or other developers based on their specialties, making the process of forming a team easy and efficient.

## How we built it
We built Developers Assemble with a focus on real-time interactions and scalability. The frontend was developed using React, Tailwind, and Vite to create a dynamic and responsive user interface. For the backend, we used Django REST, with SQLite as our database.

## Challenges we ran into
One of the biggest challenges we faced was integrating the backend and frontend for the first time. Integrating the React frontend with the Django backend posed difficulties, particularly in ensuring that the APIs and real-time matching features communicated seamlessly. This process taught us the importance of efficient communication between the frontend and backend, laying the groundwork for future scalability.

Moving forward, we aim to add more features that will enhance the user experience and make **Developers Assemble** even more effective for connecting developers. In addition to team messaging, project management tools, and GitHub integration, we plan to introduce a ranking system where developers can be rated based on their contributions and collaboration skills. This feature will help teams identify the best matches not just based on technical skills, but also on teamwork and reliability. We also aim to refine the matching algorithm further to factor in these new ranking metrics, ensuring even more accurate and successful matches between developers and projects. By continuously improving these core functionalities, we hope to create a comprehensive platform that truly supports collaborative development.

## Accomplishments that we're proud of
We’re proud of creating a seamless, intuitive platform that developers can use to find and connect with others for collaborative work. Successfully integrating real-time matching and building a system that scales with user growth were significant achievements. Additionally, building the platform from the ground up taught us important lessons about matchmaking algorithms and user experience.

## What we learned
We learned that user experience is key: developers want a fast, easy-to-use platform that makes finding collaborators painless. We also learned a lot about building real-time systems and ensuring server efficiency. Fine-tuning the matching algorithm taught us how crucial it is to accurately pair users based on skills and project needs. Overall, the development process helped us gain a deeper understanding of collaboration dynamics in the tech space.

## What's next for DevelopersAssemble
The next steps for Developers Assemble include adding features like team messaging, in-app project management tools, and integration with popular developer platforms like GitHub and GitLab.
We also plan to improve the matching algorithm further to offer even more refined matches, and expand the platform to cater to larger developer communities worldwide.
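As a sketch of the skill-based matching idea, here is a minimal Python scoring function; the profile data and simple overlap weighting are illustrative assumptions, not our production algorithm:

```python
# Sketch: rank candidate developers for a project by skill overlap.
# Profiles and the overlap metric are illustrative assumptions.
def match_score(project_needs: set[str], dev_skills: set[str]) -> float:
    if not project_needs:
        return 0.0
    return len(project_needs & dev_skills) / len(project_needs)

project = {"frontend", "react", "tailwind"}
devs = {
    "alice": {"react", "frontend", "django"},
    "bob": {"backend", "sqlite"},
}
ranked = sorted(devs, key=lambda d: match_score(project, devs[d]), reverse=True)
print(ranked)  # ['alice', 'bob']
```

A ranking system like the one described above could be folded in by multiplying this score by a normalized reputation weight.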
partial
## Inspiration
What if **inappropriate comments** could be **flagged as you type them out**? Many of us know the **terrible feeling** of **unintentionally saying** something rude or insensitive. Some of us know that on the global scale of the internet, problems with **miscommunication** and **misunderstanding** are very serious. We believe that **most people mean well** on the internet, and our project aims to **bring awareness** and **enable people** to carry out their **best intentions**.

## What it does
A **browser plug-in** that identifies when the user's **social media comment/post** contains **language** similar to other comments/posts that have **sparked conflict** or **elicited backlash**.

## How we built it
1. A **data processing pipeline** that downloads an online database of Tweets, passes the Tweets through sentiment analysis software (**Google and Microsoft APIs**), and scores each Tweet based on the "consensus sentiment" among its replies and quotes. (All of this is done in parallel across dozens of computing nodes on **Google Cloud**. A sketch of this labeling step follows below.)
2. A **hybrid RNN/CNN-based model** that predicts how likely a post is to elicit a negative response.
3. A **Chrome extension** that uses the AI model to recommend that users reconsider their comments or posts if the model detects a very inflammatory post.

## Challenges we ran into
**Data** is hard to obtain and process. Even though Twitter data is technically open, each day of tweets is **1 GB (!)** and not everyone has **1 TB** free to spare (thankfully Google does). Big data means big power, but also **big costs**, **smart optimizations**, and **effective parallelization across CPUs/GPUs**.

## Accomplishments that we're proud of
Tackling a very serious problem, providing a feasible solution that we can continue to develop, developing effective data-mining pipelines, and using a lot of computing power (100+ CPU-hours).

## What we learned
Truth is objective but hard to quantify. Emotions are subjective and still hard to quantify. Sentiment analysis is cool, but social media (with its abbreviations, emojis, etc.) is nuanced.

## What's next for Trigger Warning
Users! Both end-users and websites that want to incorporate our tech/ideas natively.
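To illustrate step 1 of the pipeline, here is a minimal sketch of labeling a tweet by the consensus sentiment of its replies, using the Google Cloud Natural Language API; the reply texts are made up, and the real pipeline batches this across many nodes:

```python
# Sketch: score a tweet by the average sentiment of its replies; the
# result becomes a training label for the RNN/CNN model.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def reply_sentiment(text: str) -> float:
    doc = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    return client.analyze_sentiment(
        request={"document": doc}
    ).document_sentiment.score

replies = ["wow, that's a terrible take", "completely agree!"]
scores = [reply_sentiment(r) for r in replies]
label = sum(scores) / len(scores)  # consensus sentiment of the replies
print(label)
```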
## Inspiration
We love sharing information with each other online, and user-generated information has become a major news source for us. Websites such as Reddit and Twitter provide us with an unconventional and exciting perspective of the world and deliver the most unexpected information in an expedient way. But all of us have encountered and suffered from online trolling: obnoxious and sometimes malicious behaviours that undermine the prosperity of the community, and without which the World Wide Web would be a much better place. Hence our team embarked upon the endeavor to develop a machine-learning-based bad-behaviour detection system for online communities, and along the way we built an easy-to-use, user-friendly web interface for our project.

## What it does
The system records streams of comments and temporarily stores them with the Amazon DynamoDB service. Periodically, the learning algorithm is invoked to improve the large-scale anomaly detection system, leveraging data streaming techniques and online learning models. As a result, the well-trained model is able to assign a score to each text block, reflecting the probability of good behavior, and to raise red flags for extreme outliers. Fact Checker is the watchdog for your online community.

## How I built it
Instead of building a full-stack server, we fully optimized our design architecture and implemented two Lambda functions with AWS services. We separated out the backend services and compartmentalized them into public APIs to enhance the flexibility of the system, so that we could effortlessly respond to any streaming events and therefore minimize the costs of operation. Furthermore, we implemented our own version of an online machine learning clustering algorithm, catering specifically to the problem in question, in an effort to achieve a better outcome and accuracy. Meanwhile, our easy-to-use, user-friendly web interface serves both as a secure path to our database and as a neat demonstration of the functionality.

## Challenges I ran into
Building a cloud-first system is completely different from implementing a full-stack server. For instance, the standardized means of communication, parameters, and return values are replaced by serialized data transmission (jQuery and JSON, for instance), which calls for a different design architecture. Additionally, getting familiar with the wide variety of AWS services was a great hurdle for us.

## Accomplishments that I'm proud of
All of the team members learned a substantial amount of new knowledge through this experience. We all stepped out of our comfort zones and tackled a rather challenging problem together for social good.

## What I learned
We learnt to prototype and utilize design thinking techniques in the brainstorming, design, and implementation process. And we learnt a lot from each other about good coding practices and useful problem-solving strategies.

## What's next for Fact Checker
It would be really nice if we could connect our services to the big online forums and communities. It would be really nice if one day we could work with Twitter and Reddit, given the amount of daily activity on these websites and consequently the substantial amount of threats and pressure they face.
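As an illustration of the online-learning idea, here is a minimal Python sketch of streaming outlier detection using a running mean and variance (Welford's algorithm); the scores and threshold are assumptions, not our production model:

```python
# Sketch: online anomaly scoring for per-comment "badness" scores.
# Each incoming score is compared against running statistics before
# being folded into them; the warm-up size and z threshold are assumed.
class OnlineOutlierDetector:
    def __init__(self, z_threshold: float = 3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_threshold = z_threshold

    def score_then_update(self, x: float) -> bool:
        """Return True if x is an extreme outlier, then ingest it."""
        is_outlier = False
        if self.n >= 10:  # wait for a small warm-up sample
            std = (self.m2 / (self.n - 1)) ** 0.5
            is_outlier = std > 0 and abs(x - self.mean) / std > self.z_threshold
        # Welford update of running mean and sum of squared deviations
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_outlier

det = OnlineOutlierDetector()
stream = [0.1, 0.2, 0.15, 0.12, 0.18, 0.11, 0.14, 0.16, 0.13, 0.17, 0.95]
for score in stream:
    if det.score_then_update(score):
        print("red flag:", score)  # flags 0.95
```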
## Inspiration
To introduce the most impartial and assured form of vote submission, in response to the controversial democratic electoral polling surrounding the 2018 US midterm elections. That event was encircled by doubt and questions about the authenticity of results from citizen voters. This propelled the idea of bringing enforced and much-needed decentralized security to the polling process.

## What it does
Allows voters to vote through a web portal on a blockchain. The web portal is written in HTML and JavaScript using the Bootstrap UI framework, and uses jQuery to send Ajax HTTP requests to a Flask server written in Python, which communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses said passphrase to cast their ballot anonymously and securely. Following this, their vote, alongside the passphrase, goes to the Flask web server where it is parsed and sent to the ARK blockchain, recording it as a transaction. Each transaction is denominated in one ARK coin, representing the count. Finally, a paper trail is generated following the submission of the vote on the web portal, in case public verification is needed.

## How we built it
The initial approach was to use Node.js; however, Python with Flask proved to be a more readily implementable solution. Visual Studio Code was used to develop the HTML and CSS front end, providing the visual representation of the voting interface. Meanwhile, the ARK blockchain was run in a Docker container. These pieces were used together to deliver the web-based application.

## Challenges I ran into
* Integrating the front end and back end into a seamless app
* Using Flask as an intermediary between the front end and the back end
* Understanding the incorporation, use, and capability of blockchain for security in this application

## Accomplishments that I'm proud of
* Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern

## What I learned
* Application of the ARK.io blockchain and its security protocols
* The multiple stages of encryption involved in converting passphrases into private and public keys
* Utilizing jQuery to compile a comprehensive program

## What's next for Block Vote
Expand Block Vote’s applicability to other areas requiring decentralized and trusted security, introducing a universal initiative.
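A minimal sketch of the Flask intermediary described above; `submit_to_ark` is a hypothetical placeholder for the actual ARK transaction call, not the project's exact code:

```python
# Sketch: Flask endpoint that receives a ballot and forwards it to the
# blockchain. submit_to_ark is a hypothetical wrapper; in the real
# system each vote moves one ARK coin as the count.
from flask import Flask, jsonify, request

app = Flask(__name__)

def submit_to_ark(passphrase: str, candidate: str) -> str:
    # Placeholder for building and broadcasting a one-ARK transaction
    # tagged with the chosen candidate; returns a transaction id.
    return "tx-id"

@app.route("/vote", methods=["POST"])
def vote():
    ballot = request.get_json()
    tx = submit_to_ark(ballot["passphrase"], ballot["candidate"])
    return jsonify({"status": "recorded", "transaction": tx})

if __name__ == "__main__":
    app.run()
```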
losing
## Inspiration
Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming: knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way, and that we would create that better way, which brings us to EZ-Med!

## What it does
This web app changes the logistics involved when visiting people in the hospital. Our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which is meant to connect the family with the medical team assigned to their loved one and provide resources for any grief or hardship associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit, since we know these small details can be frustrating during a hospital visit. Lastly, we have a patient update and login screen; both run on a database we set up, which populates the information on the patient updates screen and validates the login data.

## How we built it
We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project.

## Challenges we ran into
We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion around redirect challenges, we had to completely change course about halfway in :( In the end, we learned a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left.

## Accomplishments that we're proud of
We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting.

## What we learned
We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha!

## What's next for EZ-Med
The next step for EZ-Med is to iron out all the bugs and have it fully functioning.
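As a sketch of how the patient updates screen could pull from the database, here is illustrative Python using PyMongo; the `patient_updates` collection and its field names are our assumptions, not the app's actual schema:

```python
# Sketch: fetch the most recent status logs for a patient from MongoDB
# (collection name and document fields are assumed).
from pymongo import DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
updates = client["ezmed"]["patient_updates"]

def latest_updates(patient_id: str, limit: int = 5):
    return list(
        updates.find({"patient_id": patient_id})
        .sort("timestamp", DESCENDING)
        .limit(limit)
    )

for u in latest_updates("patient-123"):
    print(u["timestamp"], u["note"])
```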
## Inspiration
Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people were in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need.

## What it does
It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.

## How we built it
The app is Android-native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Authentication. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was used to provide text recognition for the app's registration page: users can take a picture of their ID and their information is extracted.

## Challenges we ran into
There were numerous challenges in handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.

## Accomplishments that we're proud of
We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment, and we ended up implementing things that we had never done before.
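A minimal sketch of the 300-meter radius check, using the haversine formula; shown in Python for illustration, with respondent coordinates standing in for data that would come from the Firebase Realtime Database:

```python
# Sketch: find respondents within 300 m of an emergency location.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius in meters

respondents = {"amy": (43.6535, -79.3840), "raj": (43.6510, -79.3470)}
emergency = (43.6532, -79.3832)

nearby = [
    name for name, (lat, lon) in respondents.items()
    if haversine_m(lat, lon, *emergency) <= 300
]
print(nearby)  # these users would be notified
```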
## Inspiration
We had originally planned on creating a healthcare management app, but we attended a brainstorming session on the first day of the hackathon and were immediately inspired by the idea of "Tinder for board games." We liked the idea because it was light-hearted and a fun entertainment project to work on, and all of us would actively use the app if it existed. It is an idea that allows us to connect in real life, in this digitalized world.

## What it does
The web app matches users with others in their area who have similar interests in board games, card games, RPGs - you name it - and allows them to connect. Users are given options of others nearby who are interested in similar games, and the app facilitates exchanging contact information in order to meet up.

## How I built it
On the front end, we used React and a framework for React called Material-UI. On the back end, we hosted an AWS server with a MySQL database, where we stored user information such as username, password, name, interests, location, and so on. We communicate from the front end to the back end using JavaScript that calls PHP files, which in turn communicate with our server and database. The idea was to match people through our queries: users with similar interests, locations, etc. are matched together and returned to the website.

## Challenges I ran into
We had never worked with React before, so it was challenging to understand the intricacies of not only React but also Material-UI, which we understood to be similar to Bootstrap for React. We decided to use Material-UI, a relatively small framework, in order to better style and present our app. Unfortunately, it did not have helpful documentation on issues such as managing and serving form data. On the back end, we ran into several server errors that we were unable to resolve, such as *Error 500: Internal Server Error*. As a result, we are presenting and submitting our design and front end here to display our ideas.

## Accomplishments that I'm proud of
We are proud that we learned some new frameworks and became familiar with the syntax and styling of React applications. We all came across new languages and frameworks and are proud that we were able to make something with the new information. We're also proud of our teamwork and collaboration efforts, and how well we were able to work with strangers from around the world!

## What I learned
We each learned new languages and frameworks, both from the internet and documentation and from one another. One of the main lessons we learned was a developer classic: that we should prioritize making sure our separate roles in the front and back end integrate well.

## What's next for PlayMate
There are many improvements we would like to make, including, on a basic level, working out the server errors and posting data from React. Once this is worked out, we could make a much more sophisticated app, including location with GPS, social media integration, ID security checks, matching based on categories of games, multi-page support, a built-in chat using socket.io technology, etc.
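For reference, here is a sketch of the kind of matching query we were aiming for, shown with Python's MySQL connector rather than PHP; the table and column names are our assumptions:

```python
# Sketch: find nearby users ranked by how many games they share with
# user 42 (schema assumed: users(id, username, city),
# interests(user_id, game)).
import mysql.connector

db = mysql.connector.connect(
    host="localhost", user="playmate", password="...", database="playmate"
)
cur = db.cursor()
cur.execute(
    """
    SELECT u.username, COUNT(*) AS shared_games
    FROM interests mine
    JOIN interests theirs ON mine.game = theirs.game
    JOIN users u ON u.id = theirs.user_id
    WHERE mine.user_id = %s AND theirs.user_id != %s AND u.city = %s
    GROUP BY u.username
    ORDER BY shared_games DESC
    """,
    (42, 42, "Toronto"),
)
print(cur.fetchall())
```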
winning
## Inspiration
The vicarious experiences of friends, and some of our own, made immediately clear the potential public-safety benefit of the City of London’s dataset. We felt inspired to use our skills to make this data more accessible, to improve confidence for those travelling alone at night.

## What it does
By factoring in the locations of street lights and the greater presence of traffic, safeWalk intuitively presents the safest options for reaching your destination within the City of London. By guiding people along routes where they avoid unlit areas and are likely to walk beside other well-meaning citizens, the application can instill confidence in travellers and positively impact public safety.

## How we built it
There were three main tasks in our build.

1) Frontend: Chosen for its flexibility and API availability, we used ReactJS to create a mobile-to-desktop scaling UI. Making heavy use of the customization and data presentation available in the Google Maps API, we were able to achieve a cohesive colour theme and clearly present ideal routes and streetlight density.

2) Backend: We used Flask with Python to create a backend that we used as a proxy for connecting to the Google Maps Directions API and ranking the safety of each route. This was done because we had more experience as a team with Python, and we believed the data processing would be easier in Python.

3) Data Processing: After querying the appropriate dataset from London Open Data, we had to create an algorithm to determine the “safest” route based on streetlight density. This was done by partitioning each route into subsections, determining a suitable geofence for each subsection, and then storing the lights within each geofence. We then determine the total number of lights per km to calculate an approximate safety rating (a sketch of this calculation follows at the end of this writeup).

## Challenges we ran into:
1) Frontend/Backend Connection: Connecting the frontend and backend of our project together via a RESTful API was a challenge. It took some time because we had no experience using CORS with a Flask API.

2) React Framework: None of the team members had experience in React, and only limited experience in JavaScript. Every feature implementation took a great deal of trial and error as we learned the framework and developed the tools to tackle front-end development. Once concepts were learned, however, it was very simple to refine them.

3) Data Processing Algorithms: It took some time to develop an algorithm that could handle our edge cases appropriately. At first, we thought we could build a graph with weighted edges to determine the safest path. Edge cases such as handling intersections properly and considering lights on either side of the road led us to dismiss the graph approach.

## Accomplishments that we are proud of
Throughout our experience at Hack Western, although we encountered challenges, we accomplished a great deal through dedication and perseverance. As a whole, the team was proud of the technical skills developed while learning to deal with the React framework, data analysis, and web development. In addition, the levels of teamwork, organization, and team spirit reached in order to complete the project in a timely manner were great achievements. Given our limited knowledge of the React framework, we were also proud of the sleek UI design that we created.
In addition, the overall system design lent itself well to algorithm protection and process off-loading by utilizing a separate back end and front end. Overall, although a challenging experience, the hackathon allowed the team to reach accomplishments of new heights.

## What we learned
For this project, we learned a lot more about React as a framework and how to leverage it to make a functional UI. Furthermore, we refined our web-based design skills by building both a frontend and a backend while also using external APIs.

## What's next for safewalk.io
In the future, we would like to add more safety factors to safewalk.io. We foresee factors such as:

* Crime rate
* Pedestrian accident rate
* Traffic density
* Road type
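Here is a minimal sketch of the lights-per-km calculation described in the Data Processing section, with simplified rectangular geofences and illustrative coordinates (the buffer size and degree-to-km conversion are rough assumptions):

```python
# Sketch: partition a route into segments, geofence each segment,
# count unique streetlights inside any geofence, and divide by length.
from math import dist

def lights_per_km(route, lights, buffer=0.0005):
    """route: [(lat, lon), ...]; lights: [(lat, lon), ...] from open data."""
    counted = set()
    total_len = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(route, route[1:]):
        total_len += dist((lat1, lon1), (lat2, lon2)) * 111.0  # rough deg->km
        lo_lat, hi_lat = sorted((lat1, lat2))
        lo_lon, hi_lon = sorted((lon1, lon2))
        for i, (la, lo) in enumerate(lights):
            if (lo_lat - buffer <= la <= hi_lat + buffer
                    and lo_lon - buffer <= lo <= hi_lon + buffer):
                counted.add(i)
    return len(counted) / total_len if total_len else 0.0

route = [(42.984, -81.250), (42.985, -81.248), (42.986, -81.246)]
lights = [(42.9845, -81.249), (42.9855, -81.247), (42.990, -81.240)]
print(round(lights_per_km(route, lights), 1), "lights/km")
```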
## Inspiration
Parker was riding his bike down Commonwealth Avenue on his way to work this summer when a car pulled out of nowhere and hit his front tire. Luckily, he wasn't hurt, but he saw his life flash before his eyes in that moment, and it really left an impression on him. (His bike made it out okay as well, other than a bit of tire misalignment!) As bikes become more and more ubiquitous as a mode of transportation in big cities, with the growth of rental services and bike lanes, bike safety is more relevant than ever.

## What it does
We designed *Bikeable*, a Boston directions app for bicyclists that uses machine learning to generate directions for users based on prior bike accidents in police reports. You simply enter your origin and destination, and Bikeable creates a path for you to follow that balances efficiency with safety. While it's comforting to know that you're on a safe path, we also incorporated heat maps so you can see the hotspots where bicycle theft and accidents occur, letting you be more well-informed in the future!

## How we built it
Bikeable is built on Google Cloud Platform's App Engine (GAE) and utilizes the best features of three mapping APIs (Google Maps, HERE.com, and Leaflet) to deliver directions in one seamless experience. Being built in GAE, Flask served as a solid bridge between a Python backend with machine learning algorithms and an HTML/JS frontend. Domain.com allowed us to get a cool domain name for our site, and GCP allowed us to connect many small features quickly as well as host our database.

## Challenges we ran into
We ran into several challenges. Right off the bat we were incredibly productive and got a snappy UI up and running immediately through the accessible Google Maps API. We were off to an incredible start, but soon realized that the only effective way to account for safety while maintaining maximum travel efficiency would be to highlight clusters for waypoints to steer away from. We realized that the Google Maps API would not be ideal for the ML in the back end, simply because our avoidance algorithm did not work well with how the API is set up. We then decided on the HERE Maps API because of its unique ability to avoid areas in the routing algorithm. Once the front end for HERE Maps was developed, we attempted to deploy to Flask, only to find that jQuery somehow hindered our ability to view the map on our website. After hours of working through App Engine and Flask, we found a third map API/JS library called Leaflet that had many of the visual features we wanted. We ended up combining the best components of all three APIs to develop Bikeable over the past two days.

The second large challenge we ran into was the Cross-Origin Resource Sharing (CORS) errors that seemed to never end. In the final stretch of the hackathon we were getting ready to link our front and back end with JSON files, but we kept getting blocked by CORS errors. After several hours of troubleshooting, we realized our mistake of crossing between localhost and the public domain, and that we kept deploying to test rather than running locally through Flask.

## Accomplishments that we're proud of
We are incredibly proud of two things in particular. Primarily, all of us worked on technologies and languages we had never touched before. This was an insanely productive hackathon, in that we honestly got to experience things that we never would have had the confidence to even consider if we were not in such an environment.
We're proud that we all stepped out of our comfort zones and developed something worthy of a pin on GitHub. We were also pretty impressed with what we were able to accomplish in the 36 hours: we set up multiple front ends, developed a full ML model complete with data visualizations, and hosted on multiple different services. We also did not all know each other beforehand, and the team chemistry that we had off the bat was astounding given that fact!

## What we learned
We learned BigQuery, NumPy, scikit-learn, Google App Engine, Firebase, and Flask.

## What's next for Bikeable
Stay tuned! Or invest in us, that works too :)

**Features to be implemented shortly and fairly easily given the current framework:**

* User-reported incidents - like Waze for safe biking!
* Bike parking recommendations based on theft reports
* Avoidance of large altitude increases, to balance comfort with safety and efficiency.
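As an illustration of the clustering idea behind the avoid-areas, here is a minimal sketch using DBSCAN; the coordinates are made up, and the real model was trained on police-report data:

```python
# Sketch: cluster reported bike accidents into hotspots; cluster
# centers become "avoid areas" for the HERE routing request.
import numpy as np
from sklearn.cluster import DBSCAN

accidents = np.array([
    [42.3601, -71.0589], [42.3603, -71.0591], [42.3599, -71.0587],  # hotspot
    [42.3700, -71.1000],                                            # isolated
])
# eps of 0.001 degrees is roughly 100 m at this latitude (assumption)
labels = DBSCAN(eps=0.001, min_samples=3).fit_predict(accidents)

for cluster in set(labels) - {-1}:  # -1 marks noise points
    center = accidents[labels == cluster].mean(axis=0)
    print("avoid area centered at", center)
```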
# Pythia Camera

Check out the [github](https://github.com/philipkiely/Pythia).

![Pythia Diagram](https://raw.githubusercontent.com/philipkiely/Pythia/master/images/PythiaCamera.jpg)

## Inspiration

#### Original Idea:
Deepfakes and more standard edits are a difficult threat to detect. Rather than reactively analyzing footage to attempt to find the marks of digital editing, we sign footage on the camera itself to allow the detection of edited footage.

#### Final Idea:
Using the same technology, but with a more limited threat model allowing for a narrower scope, we can create the world's most secure and intelligent home security camera.

## What it does

Pythia combines robust cryptography with AI video processing to bring you a unique home security camera. The system notifies you in near-real-time of potential incidents and lets you verify by viewing the video. Videos are signed by the camera and the server to prove their authenticity in courts and other legal matters. Improvements of the same technology have potential uses in social media, broadcasting, political advertising, and police body cameras.

## How we built it

* Records video and audio on a camera connected to a basic WiFi-enabled board, in our case a Raspberry Pi 4

At regular intervals:

* Combines video and audio into a .mp4 file
* Signs the combined file
* Sends the file and metadata to AWS

![Signing](https://raw.githubusercontent.com/philipkiely/Pythia/master/images/ChainedRSASignature.jpg)

On AWS:

* Verifies the signature and adds a server signature
* Uses Rekognition to detect violence or other suspicious behavior
* Uses Rekognition to detect the presence of people
* If there are people with detectable faces, uses Rekognition to analyze their faces
* Uses SMS to notify the property owner about the suspicious activity and links a video clip

![AWS](https://raw.githubusercontent.com/philipkiely/Pythia/master/images/AWSArchitecture.jpg)

## Challenges we ran into

None. Just kidding:

#### Hardware

Raspberry Pi
* All software runs on the Raspberry Pi
* WiFi issues
* Compatibility issues
* Finding a screwdriver

The hardware lab didn't have the type of sensors we were hoping for, so no heat map :(.

#### Software

* Continuous batched recording
* Creating complete .mp4 files
* Processing while recording

#### Web Services

* An asynchronous architecture has lots of race conditions

## Accomplishments that we're proud of

* Complex AWS deployment
* Chained RSA signature
* Proper video encoding and processing, combining separate frame and audio streams into a single .mp4

## What we learned

#### Bogdan
* Gained experience designing and implementing a complex, asynchronous AWS architecture
* Practiced with several different Rekognition functions to generate useful results

#### Philip
* Video and audio encoding is complicated, but fortunately we have great command-line tools like `ffmpeg`
* Watchdog is a Python library for watching folders for a variety of events and changes. I'm excited to use it for future automation projects.
* A Raspberry Pi never works right the first time

## What's next for Pythia Camera

A lot of work is required to fully realize our vision for Pythia Camera as a whole solution that resists a wide variety of much stronger threat models, including state actors. Here are a few areas of interest:

#### Black-box resistance:

* A camera pointed at a screen will record and verify the video from the screen
* Solution: Capture IR footage to create a heat map of the video and compare the heat map against Rekognition's object analysis (people should be hot, objects should be cold, etc.)
* Solution: Use a laser dot projector, like the iPhone's FaceID sensor, to measure distance and compare against machine learning models using Rekognition

#### Flexible Cryptography:

* Upgrade the chained RSA signature to a chained RSA additive map signature to allow for combining videos
* Allow basic edits like cuts and filters while recording a signed record of changes

#### More Robust Server Architecture:

* Better RBAC for online assets
* Multi-region failover for constant operation
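For the curious, here is a minimal sketch of the chained-signature idea from the diagram above, using Python's `cryptography` library; key management and the exact padding scheme are simplified assumptions:

```python
# Sketch: a chained RSA signature over video chunks. Each chunk is
# signed together with the previous signature, so reordering or
# dropping chunks breaks verification down the chain.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def sign_chunks(chunks):
    prev_sig = b""
    for chunk in chunks:
        # Sign the previous signature concatenated with this chunk
        sig = key.sign(prev_sig + chunk, padding.PKCS1v15(), hashes.SHA256())
        yield chunk, sig
        prev_sig = sig

for i, (chunk, sig) in enumerate(sign_chunks([b"mp4-segment-0", b"mp4-segment-1"])):
    print(f"chunk {i}: signature {sig[:8].hex()}...")
```

Verification walks the chain in order with the public key, rebuilding `prev_sig + chunk` at each step.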
partial
# Hello Vote

A simple voting app for RightMesh. One of the connected phones starts a vote, the others vote. When voting is done, the master phone can end it, sending voting stats to the others.

## How does it work?

The RightMesh library has an autonomous networking layer which manages the connectivity between devices using Wi-Fi, Bluetooth, and Wi-Fi Direct. It does this by linking together hotspots using RightMesh's patent-pending switching technology. In addition to helping the devices make physical connections to each other, RightMesh implements a neighbour discovery protocol which allows devices to discover each other across many hops. As a developer, you will receive events when other devices running the same app join the network (even if they are connected through devices which are running different apps). You will also receive an event when data is received from another device running the same app as you.

RightMesh abstracts away the idea of IPv4 addresses, IPv6 addresses, MAC addresses, etc., since any given device may have any number of connections into the mesh at a given moment. Instead, every device has a MeshID which is used in place of these other types of addresses. The MeshID is actually an Ethereum-compatible account which will soon be used to keep track of how much data has been forwarded and received, so that people can be incentivized to use the mesh. You can send byte arrays of data to other mesh devices, and RightMesh will handle reliable communication for you.

While the library does its best to ensure connectivity to as many devices as possible at all times, the mobile nature of the devices means they may disconnect from each other. RightMesh implements networking layers to handle this, enabling reliable, congestion-aware, end-to-end communications. Compared to similar libraries, RightMesh does not broadcast to all other devices in the mesh; it actually forms paths and performs routing using this two-layer system. For the developer, this means that when they issue a send call, they can trust that the data will be received on the other side, even when the network grows in size, and despite the mobile nature of the devices.

## Documentation

API reference is available at <https://developer.rightmesh.io/api/>

A detailed step-by-step breakdown of how to get started can be found in our reference guide: <https://developer.rightmesh.io/reference/>

In order for this sample app to work, you need to obtain a RightMesh developer account and an API key from our developer website: <https://developer.rightmesh.io/>

Set your username, password, and key in the app [build.gradle](app/build.gradle) file.

The main source code is available in [MainActivity.java](app/src/main/java/io/left/hellomesh/MainActivity.java)
# JTTPSoft Soundbomb

## What is it?

A simple application that synchronizes audio on multiple phones using RightMesh wireless communication.

## Why choose this project?

RightMesh is an API with a great deal of future potential, and we wanted to join its pioneering creators!
## Reimagining Patient Education and Treatment Delivery through Gamification

Imagine walking into a doctor's office to find out you’ve been diagnosed with a chronic illness. All of a sudden, you have a slew of diverse healthcare appointments, ongoing medication or lifestyle adjustments, lots of education about the condition, and more. While in the clinic/hospital, you can at least ask the doctor questions and try to make sense of your condition and management plan. But once you leave to go home, **you’re left largely on your own**.

We found that there is a significant disconnect between physicians and patients after patients are discharged and diagnosed with a particular condition. Physicians will hand patients a piece of paper with suggested items to follow as part of a "treatment plan." But after this diagnosis meeting, it is hard for physicians to stay up to date on their patients' progress with the plan.

The result? Not surprisingly, patients **quickly fall off and don’t adhere** to their treatment plans, costing the healthcare system **upwards of $300 billion** as they get readmitted due to worsening conditions that may have been prevented. But it doesn’t have to be that way…

We're building an engaging end-to-end experience for patients managing chronic conditions, starting with one of the most prevalent ones: diabetes. **More than 100 million U.S. adults are now living with diabetes or prediabetes.**

## How does Glucose Guardian Work?

Glucose Guardian is a scalable way to gamify education for chronic conditions using an existing clinical technique called “teach-back” (see [here](https://patientengagementhit.com/features/developing-patient-teach-back-to-improve-patient-education)). We plan to partner with clinics and organizations, scrape their existing websites/documents where they house all their information about the chronic condition, and instantly convert that into short (up to 2 minute) voice modules. Glucose Guardian users can complete these short, guided, voice-based modules that teach and validate their understanding of their medical condition. Participation and correctness earn points which go towards real-life rewards, for which we plan to partner with rewards organizations and corporate programs.

Glucose Guardian users can also go to the app to enter their progress on various aspects of their personalized treatment plan. Their activity on this part of the app is also incentive-driven. This is inspired by non-health products our team has experience with: very low-barrier, audio-driven games that have been proven to drive user engagement through the roof.

## How we built it

We've simplified how we can use gamification to transform patient education and treatment adherence by making it more digestible and fun. We ran through some design thinking sessions to work out how we could create a solution that wouldn’t simply look great but could also be implemented clinically and be HIPAA compliant. We then built Glucose Guardian as a native iOS application using Swift. Behind the scenes, we use Python toolkits to perform some of our text matching for patient education modules, and we utilize AWS for infrastructure needs.

## Challenges we ran into

It was difficult to navigate the pre-existing market of patient adherence apps and create a solution that was unique and adaptable to clinical workflows. To tackle this, we dedicated ample time to stepping through user journeys for patients, physicians, and allied health professionals.
Through this strategy, we identified education as our focus, because it is critical to treatment adherence and a patient-centric solution.

## We're proud of this

We've built something that has the potential to fulfill a large unmet need in the healthcare space, and we're excited to see how the app is received by beta testers, healthcare partners, and corporate wellness organizations.

## Learning Points

Glucose Guardian has given our cross-disciplined team the chance to learn more about the intersection of software and healthcare. Through developing speech-to-text features, designing UIs, scraping data, and walking through patient journeys, we've maximized our time to learn as much as possible in order to deliver the biggest impact.

## Looking Ahead

As per the namesake, so far we've implemented one use case (diabetes), but we are planning to expand to many other diseases. We'd also like to continue building other flows beyond patient education. This includes components such as the gamified digital treatment plan, which can utilize existing data from wearables and wellness apps to provide a consolidated view of the patient's post-discharge health. Beyond that, we also see potential for our platform to serve as a treasure trove of data for clinical research and medical training. We're excited to keep building and keep creating more impact.
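As a sketch of the text-matching step behind teach-back grading, here is a minimal example using Python's standard library; the threshold and texts are illustrative assumptions, and the real app transcribes speech first:

```python
# Sketch: grade a spoken teach-back answer against the expected key
# point using fuzzy string similarity.
from difflib import SequenceMatcher

def teachback_score(expected: str, answer: str) -> float:
    return SequenceMatcher(None, expected.lower(), answer.lower()).ratio()

expected = "check blood sugar before every meal"
answer = "i should check my blood sugar before each meal"
score = teachback_score(expected, answer)
print(round(score, 2), "correct" if score > 0.6 else "try the module again")
```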
losing
## Inspiration
In recent years, senior citizens in America have been experiencing a worsening retirement crisis. With savings rates down, the future of Social Security in question, and the American population amassing more debt, financial wellbeing is a source of stress for many households. The elderly in particular struggle with critical financial responsibilities, as many are challenged by mental and physical conditions.

## What it does
Users of this website provide their phone number, and our backend Node.js software utilizes the Twilio API to send custom calls and push notifications. With these messages, our software reminds users to make necessary payments, sends weekly financial recaps, contacts family members about potential scammer activity, and informs people about additional resources for financial assistance like Medicare, retirement benefits, and Supplemental Security Income.

## Accomplishments that we're proud of
UI/UX design.

## What we learned
How to collaborate effectively on a project.

## What's next for Aeonian
Finish implementing the logic and functionality to request a user's financial records and process them accordingly.
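A minimal sketch of the reminder-call flow, shown in Python for illustration (the project uses the same Twilio API from Node.js); credentials and phone numbers are placeholders:

```python
# Sketch: place an automated payment-reminder call with Twilio.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

call = client.calls.create(
    to="+15551234567",      # the user's phone (placeholder)
    from_="+15557654321",   # our Twilio number (placeholder)
    twiml="<Response><Say>Reminder: your utility bill is due this Friday.</Say></Response>",
)
print(call.sid)
```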
## What it does
Alzheimer's disease and dementia affect many of our loved ones every year; in fact, **76,000 diagnoses** of dementia are made every year in Canada. One of the largest issues caused by Alzheimer's is the loss of the ability to make informed, cognitive decisions about one's finances. This makes such patients especially vulnerable to things like scams and high-pressure sales tactics. Here's an unfortunate real-life example of this: <https://www.cbc.ca/news/business/senior-alzheimers-upsold-bell-products-source-1.6014904>

We were inspired by this heartbreaking story to build HeimWallet. HeimWallet is a digital banking solution that allows for **supervision** over a savings account owned by an individual incapable of managing their finances, and is specifically **tailored** to patients with Alzheimer's disease or dementia. It can be thought of as a mobile debit card linked to a savings account that only allows spending if certain conditions set by a designated *guardian* are met.

It allows a family member or other trusted guardian to set a **daily allowance** for a patient and **keep track of their purchases**. It also allows guardians to keep tabs on the **location of patients via GPS** every time a purchase is attempted, and to authorize or refuse attempted purchases that go beyond the daily allowance. This ensures that patients and their guardians can have confidence that the patient's assets are in safe hands. Further, the daily allowance feature empowers patients to be independent and **shop with confidence**, knowing that their disease will not be able to dominate their finances.

The name "HeimWallet" comes from "-heim" in "Alzheimer's". It also alludes to Heimdall, the mythical Norse guardian of the bridge leading to Asgard.

## How we built it
The frontend was built using React Native and Expo, while the backend was made using Python (Flask) and MongoDB. SMS functionality was added using Twilio, and location services were added using the Google Maps API. The backend was also deployed to Heroku.

We chose **React Native** because it allowed us to build our app for both iOS and Android using one codebase. **Expo** enabled rapid testing and prototyping of our app. **Flask**'s lightweight design was key in getting the backend built under tight time constraints, and **MongoDB** was a natural choice for our database since we were building our app using JavaScript. **Twilio** enabled us to create a solution that worked even for guardians who did not have the app installed. Its text-message-based interactions enabled us to build a product accessible to those without smartphones or mobile data. We deployed our backend to **Heroku** so that Twilio could access our backend's webhook for incoming text messages. Finally, the **Google Maps API**'s reverse geocoding feature enables guardians to see the addresses where patients are located when a transaction is attempted.

## Challenges we ran into
* Fighting with Heroku for almost *six hours* to get the backend deployed. The core mistake ended up being that we were trying to deploy our Python-based backend as a Node.js app... oops.
* Learning to use React Native: all of us were new to it, and although we all had experience building web apps, we didn't quite have the same foundation with mobile apps.
* Incorporating Figma designs into React Native so that they were cross-platform between Android, iOS, and web. A lot of styling works differently between these platforms, so it was tricky to make our app look consistent everywhere.
* Managing a mix of team members who were hacking in-person and online. Constant communication to keep everyone in the loop was key!

## Accomplishments that we're proud of
We're super proud that we managed to come together and make our vision a reality! And we're especially proud of how much we learned and took away from this hackathon. From learning React Native, to Twilio, to getting better with Figma and sharpening our video-editing skills for our submission, it was thrilling to have gained exposure to so much in so little time. We're also proud of the genuine hard work every member of our team put in to make this project happen -- we worked deep into the A.M. hours, and constantly sought to improve the usability of our product with continuous suggestions and improvements.

## What's next for HeimWallet
Here are some things we think we can add on to HeimWallet in order to bring it to the next level:

* Proper integration of SOS (e.g. call 911) and Send Location functionality in the patient interface
* Ability to have multiple guardians for one patient, so that there are many eyes safeguarding the same assets
* Better security and authentication features for the app; of course, security is vital in a fintech product
* Feature to allow patients to send a voice memo to a guardian in order to clarify a spending request
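The allowance mechanism described in "How we built it" could be sketched roughly like this (a hypothetical Flask endpoint; the collection names, fields, and connection string are our illustrative assumptions, not the team's actual schema):

```python
# Hypothetical Flask endpoint approximating HeimWallet's daily-allowance
# check. Collection names, fields, and credentials are illustrative only.
from datetime import date
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["heimwallet"]  # placeholder URI

@app.route("/purchase", methods=["POST"])
def attempt_purchase():
    data = request.get_json()
    patient = db.patients.find_one({"_id": data["patient_id"]})
    spent_today = sum(
        p["amount"] for p in db.purchases.find(
            {"patient_id": data["patient_id"], "date": str(date.today())}
        )
    )
    if spent_today + data["amount"] <= patient["daily_allowance"]:
        db.purchases.insert_one({**data, "date": str(date.today())})
        return jsonify({"approved": True})
    # Over the allowance: hold the purchase for guardian approval
    # (in the real app, the guardian is notified by SMS along with the
    # patient's reverse-geocoded location).
    return jsonify({"approved": False, "reason": "guardian approval required"})
```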
## Inspiration
We have often seen the technical gap that older generations face when working with the latest technologies. This restricts them from all the resources and knowledge these technologies offer and leaves them with little choice but to rely on other people. One such scenario concerns financial literacy. Stats show many women depend on men to make financial choices or big purchases, and the reason is a lack of knowledge and resources to learn about finance. To learn more, they could use the internet, which can be less accessible to many due to the technology gap, or they could ask someone else, which is often not an option because people feel judged. This makes people very dependent on others.

After some research, we found that most people who face this issue are women and senior citizens. Research shows most women feel comfortable asking questions of other women or their family members, but this makes them very dependent, and sometimes they are treated badly if they ask a lot of questions. To empower them, we created this web app, Pocket Patrol, so they can become more independent, learn about financial literacy, manage their finances, and make their day-to-day life more accessible. With an easy UI and a goal of serving people with limited tech knowledge, we want to bridge the gap and make our users feel more empowered.

## What it does
While registering, the user chooses whether they are the Chef or the Foodie of the family. If they are the Chef, they choose the number of family members they have and their usernames. Once logged in, the following functionality is provided:

1) Current Financial Situation and Daily Accessibility
Income/Expense Tracker: The user can track all their expenses and incomes by entering the name of the item/service along with a price (+ for income, - for expense). The user can also add specific dishes and cost per person for more functionality.

2) Adopt Behaviours and Changes
Figure out where you are spending the most right now and where you can save. For instance, if you spend x amount of money on food or transport, we predict how much you will end up spending if you continue this way. Moreover, we show where you can save: if you are spending x amount of money on public transit, then if you were to buy a car you would be spending y amount. The application shows quotes from different insurance companies to compare and choose the best options, and it uses your current trends to predict how much you will spend in the next few years versus how much you would save if you took another option.

3) Personal Assistant
Whenever you have any questions, including about financial literacy, you don't have to feel uncomfortable asking anyone else or feel judged. You can simply ask the built-in personal assistant, who can answer questions as simple or as complex as you like -- there are no stupid questions in Pocket Patrol.

## How we built it
A React application using Google Cloud Speech-to-Text on a Node.js backend hosted on a DigitalOcean VPS, with a Python script using TensorFlow for trend prediction.
## Challenges we ran into
* Working in a team with different skill sets and deciding on what technologies to use
* Working in different time zones
* Integrating multiple technologies together and facing many configuration issues
* Implementing multiple features in a limited time
* Training the application with a lot of different datasets
* Working with unsupported APIs
* Getting the microphone to work on a mobile browser using vanilla JS
* Forming a team at the last minute, as the original team didn't show up

## Accomplishments that we are proud of
Despite all the challenges, we stayed motivated to make something and created a product to demo. The high schooler on our team gained valuable experience and learned more about industry technologies and how they are used.

## What we learned
It's a learning process: even if you can't achieve everything, you will still learn a lot. Always have a team in advance and keep in touch to confirm they are all still going to attend. We also learned new frameworks, how to use machine learning/AI for predictions, and how to make a built-in voice assistant.

## What's next for Pocket Patrol
* We want to use advanced ML/AI algorithms to predict how the user is using the tool and understand their habits, so we can provide better suggestions on how they can improve and save money
* We want to integrate an insurance calculator that takes the specifics of a car model and gives specific quotes; by running this calculator across different companies, we can tell the user which is the best option
* We want the voice assistant to understand the data in the app and the user's information, talk to the user like a person, and refer to the user's specific details when giving information
* We want to train our tool with the data so it becomes better at predictions
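The trend prediction above is described only at a high level; one minimal reading of it, a tiny TensorFlow model fit to monthly spending totals (the figures and model shape are invented for illustration, not the team's actual script), might be:

```python
# Illustrative sketch only: fit a linear trend to monthly spending
# totals and extrapolate, the simplest version of the prediction above.
import numpy as np
import tensorflow as tf

months = np.arange(12, dtype=np.float32).reshape(-1, 1)   # month index
spending = np.array([410, 395, 450, 470, 460, 500, 520,   # invented data
                     515, 540, 560, 555, 580], dtype=np.float32)

# Normalize so a single Dense unit converges quickly.
x = months / 12.0
y = (spending - spending.mean()) / spending.std()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(x, y, epochs=500, verbose=0)

# Predict spending 12 months out if current habits continue.
future = np.arange(12, 24, dtype=np.float32).reshape(-1, 1)
pred = model.predict(future / 12.0, verbose=0)
print((pred * spending.std() + spending.mean()).round(2))
```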
winning
## Inspiration
As international students, we often have to navigate around a lot of roadblocks when it comes to receiving money from back home for our tuition. Cross-border payments are gaining momentum with so many emerging markets. In 2021, the top five recipient countries for remittance inflows in current USD were India (89 billion), Mexico (54 billion), China (53 billion), the Philippines (37 billion), and Egypt (32 billion). The United States was the largest source country for remittances in 2020, followed by the United Arab Emirates, Saudi Arabia, and Switzerland.

However, cross-border payments face five main challenges: cost, time, security, standardization, and liquidity.

* Cost: Cross-border payments are typically costly due to currency exchange costs, intermediary charges, and regulatory costs.
* Time: Most international payments take anything between 2-5 days.
* Security: The rate of fraud in cross-border payments is comparatively higher than in domestic payments because a payment is much more difficult to track once it crosses the border.
* Standardization: Different countries tend to follow different sets of rules and formats, which makes cross-border payments even more difficult and complicated at times.
* Liquidity: Most cross-border payments work on the pre-funding of accounts to settle payments; hence it becomes important to ensure adequate liquidity in correspondent bank accounts to meet payment obligations within cut-off deadlines.

## What it does
CashFlow is a solution to all of the problems above. It provides a secure method to transfer money overseas. It uses the Checkbook.io API to verify users' bank information and check for liquidity, and with features such as KYC, it ensures security while enabling instant payments. Further, it uses another API to convert currencies using accurate, non-inflated rates.

Sending money: Our system requests a few pieces of information from you, which pertain to the recipient. After having added your bank details to your profile, you will be able to send money through the platform. The recipient will receive an email message, through which they can deposit the money into their account in multiple ways.

Requesting money: By requesting money from a sender, an invoice is generated for them. They can choose to send money back through multiple methods, which include credit and debit card payments.

## How we built it
We built it using HTML, CSS, and JavaScript. We also used the Checkbook.io API and an exchange rate API.

## Challenges we ran into
Neither of us is familiar with backend technologies or React. Mihir has never worked with JS before, and I haven't worked on many web dev projects in the last 2 years, so we had to engage in a lot of learning and refreshing of knowledge as we built the project, which took a lot of time.

## Accomplishments that we're proud of
We learned a lot and built the whole web app as we were continuously learning. Mihir learned JavaScript from scratch and coded in it for the whole project, all in under 36 hours.

## What we learned
We learned how to integrate APIs when building web apps, JavaScript, and a lot of web dev.

## What's next for CashFlow
We were having a couple of bugs that we couldn't fix; we plan to work on those in the near future.
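The writeup doesn't name the exchange rate API, so the following is a sketch of the conversion step only (the endpoint below is a stand-in, not necessarily the team's actual provider):

```python
# Rough sketch of the currency-conversion step. The API endpoint and
# response shape are assumptions -- the writeup doesn't name the service.
import requests

def convert(amount: float, from_ccy: str, to_ccy: str) -> float:
    """Convert an amount using live mid-market rates (assumed API)."""
    resp = requests.get(f"https://open.er-api.com/v6/latest/{from_ccy}")
    resp.raise_for_status()
    rate = resp.json()["rates"][to_ccy]
    return round(amount * rate, 2)

# e.g., tuition money sent from the US back home
print(convert(5000.0, "USD", "INR"))
```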
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day, cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy-store-collect-dust inspired us to develop LendIt, a product that aims to stem the growing waste economy and generate passive income for the users on the platform.

## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.

## How we built it
Our Smart Lockers are built with Raspberry Pi 3 (64-bit, 1GB RAM, ARM-64) boards and are connected to our app through Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API. For our app, we used Flutter/Dart and interfaced with Firebase. To ensure *trust* -- which is core to borrowing and lending -- we experimented with Ripple's API to create an escrow system.

## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!

On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and the Raspberry Pi would work. Our app developer had previously worked only on web apps with SQL databases. NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky. With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.

## Accomplishments that we're proud of
The project has various hardware and software components like the Raspberry Pi, Flutter, the XRP Ledger escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users is the biggest accomplishment we are proud of.

## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good proof of concept, giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our object detection and facial recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers, we believe our product LendIt will grow with it. We would be honoured if we can contribute in any way to reducing the growing waste economy.
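The locker's facial recognition is mentioned only in passing; a minimal OpenCV detection loop of the kind it implies (identity matching and the Firebase hand-off are omitted, and the camera index is an assumption) might start like this:

```python
# Minimal sketch of the OpenCV side of the locker: detect faces in the
# camera feed. Actual identity matching and Firebase hand-off omitted.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0 (assumption)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A face is present: in the real system this is where the
        # borrower would be verified before the latch opens.
        print(f"detected {len(faces)} face(s)")
cap.release()
```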
## Inspiration
Our inspiration came from the annoying number of times we have had to take out a calculator after a meal with friends to figure out how much to pay each other, make sure we have a common payment method (Venmo, Zelle), and remember a week later whether we paid each other back or not. To answer this, we came up with Split, which can easily divide our expenses for us, organize the amounts we owe friends, and handle payments without a common platform -- all in one place.

## What it does
This application allows someone to record a value that someone owes them or that they owe someone, and organize it. While recording an amount due, you can also split an entire amount with multiple individuals, which is reflected in the amount owed to each person. Additionally, you are able to clear your debts and make payments through the built-in Checkbook service, which allows you to pay someone given just their name, phone number, and an amount.

## How we built it
We built this project using HTML, CSS, Python, and SQL, implemented with Flask. Alongside these languages, we utilized the Checkbook API to streamline the payment process.

## Challenges we ran into
Some challenges we ran into were not knowing how to implement new parts of web development. We had difficulty implementing the API we used, Checkbook, in the Python backend of our website. We had no experience with APIs, so implementing this was a challenge that took some time to resolve. Another challenge was coming up with ideas that were more complex than we could design. During the brainstorming phase we had many ideas for impactful projects but didn't know how to put them into code, so brainstorming, planning, and settling on an attainable solution was another challenge.

## Accomplishments that we're proud of
We were able to create a fully functioning, ready-to-use product with no prior experience in software engineering and very limited exposure to web dev.

## What we learned
First, we learned that communication is the most important thing in the starting phase of a project. While brainstorming, we would agree on an idea, start it, and then consider other ideas, which led to a loss of time. After completing this project, we found that communicating what we could do and committing to that idea would have been the most productive decision toward making a great project. To complement that, we also learned to play to our strengths in building this project. In addition, we learned how to best structure databases in SQL to achieve our intended goals, and we learned how to implement APIs.

## What's next for Split
The next step for Split is to move to a mobile application. This would let users use this convenient application natively instead of in a browser. Right now the app fully supports a mobile phone screen, so iPhone users can also use the "save to Home Screen" feature to treat it as an app while we create a dedicated one. Another feature that could be added is bill scanning using a mobile camera to quickly split and organize payments. In addition, the app could be reframed as a social platform with a messenger and friend system.
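As a toy illustration of the core splitting logic (the function and in-memory ledger are ours for illustration, not the team's SQL schema):

```python
# Toy version of Split's core bookkeeping: divide a bill among friends
# and keep a net balance per person. Names/structure are illustrative.
from collections import defaultdict

balances = defaultdict(float)  # positive: they owe you

def split_bill(total: float, payer: str, participants: list[str]) -> None:
    """Payer covered the bill; everyone else owes an equal share."""
    share = round(total / len(participants), 2)
    for person in participants:
        if person != payer:
            balances[person] += share  # they owe the payer

split_bill(60.00, "me", ["me", "alice", "bob"])
split_bill(24.00, "me", ["me", "alice"])
print(dict(balances))  # {'alice': 32.0, 'bob': 20.0}
```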
winning
> **TL;DR**: We made a search engine designed to solve an information discovery problem that arises when you aren't sure what your query string should be, but rather have a specific document/webpage in mind that you want to find related work to. Instead of relying on the structural links which exist between websites, we employ natural language processing to do this search and clustering contextually. As a result we achieve superhuman classification of not only expository documents but also code snippets. Click [here](https://docs.google.com/presentation/d/1NJb-3AHk8Ew8TBDrgL-6HgXhWXINnqWjI2rWQQkZm5c/edit?usp=sharing) to view the pitch deck.

## See it in action:

#### Identifies, searches and clusters sub-topics relevant to the contextual material on a given page
[View demo on imgur](https://imgur.com/8PlVucq)

#### Determines algorithms and libraries used in code snippets from context
![](https://imgur.com/rmi5XNK.gif)

#### Identifies implicit political context and bias
![](https://imgur.com/BQW5CpU.gif)

#### Our public API endpoint received over 50,000 unique requests in under 36 hours
![](https://imgur.com/YiNcuB7.png)

## Inspiration
With an ever-changing world and a migration towards a digital landscape, the average user can become overwhelmed with data and information. In this digital world, humans have come to rely on Google searches to attain and discover knowledge, as opposed to conventional learning strategies. These searches can be classified into two subsets: knowledge discovery, where the question is undefined, and direct searching, where the question is defined. 60% of searches worldwide are classified as unsuccessful, meaning multiple searches were conducted before the desired result was attained. We were inspired to create a new solution that recommends articles on the topic being searched, to aid in the discovery and education process. We think that by minimizing search time and frustration, finding the right data can be transformed into a journey instead of a pain point. We want everyone to indulge their curiosity in whatever topic, interest or random fact they are looking for.

## What it does
We developed an API to expose the core algorithm, "bubblRank", made available through StdLib; anyone can query our API with a web page and receive a categorized and labelled arrangement of related pages. We show one application of bubblRank by building a Chrome extension that computes the bubbl cluster of any given page and lets the user navigate through the cluster in an intuitive way.

The back end is powered by a state-of-the-art natural language processing and clustering algorithm designed at this hackathon. It scrapes the meaningful text from websites to produce rich vector representations of those websites by averaging word vectors; by comparing their pairwise cosine similarities we build a robust similarity metric, and then perform Hierarchical Density-Based Spatial Clustering in parallel. At every stage in the development of bubblRank we took steps to ensure that accuracy was not compromised while maintaining fast computation. Take, for example, the way we verify the accuracy of our document vector representations: we do a graph analysis using t-SNE plots to reduce the dimensionality of our vector space and compare the presence of clusters. We then take the Spearman's R coefficient with respect to human tests to verify the clusters made.
This attention to detail is prevalent through our entire project and it is something we are very proud of.

## How we built it
bubbl was built primarily on top of Java because of the language's capability for parallelism, which made it more effective than Python (which we had originally considered) since Java allows cores to share memory whereas Python does not. Angular and JavaScript were used in the front end (web app) to facilitate a pleasant user experience. The core of the algorithm and API is exposed using StdLib and Node.js. All preliminary data integration tasks were done in Python.

## Challenges we ran into
The largest challenge the team faced was comparing large sets of websites by similarity, which involved accessing the data through queries, compressing the data into large vectors, semantic analysis comparing vectors using either Euclidean distance or cosine similarity, and then understanding and testing those similarity scores. Other problems later in the project stemmed from parallel clustering and then building a strong back/front end to visually display the topics and similar articles in an innovative fashion.

## Accomplishments that we're proud of
* Parsing a large variety of websites and conducting TextRank with our clustering algorithm
* Building out a Chrome web extension and a back end with API calls to our clustering algorithm
* Comparing large corpora of data and being able to encourage learning through a sophisticated back-end similarity algorithm with a sleek UI

## What we learned
As demonstrated by our challenges, we tackled a lot of machine learning and data integration. Additionally, project management proved a valuable skill.

## What's next for bubbl
Continuing to scale our infrastructure and expand the use cases for our API.
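A stripped-down version of the similarity-and-clustering pipeline described above (in Python rather than the team's Java, with random vectors standing in for the averaged word embeddings):

```python
# Toy version of the bubblRank clustering step: pairwise cosine
# similarity over document vectors, then density-based clustering.
# Vectors here are random stand-ins for averaged word embeddings.
import numpy as np
import hdbscan
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(20, 300))  # 20 pages, 300-dim embeddings

similarity = cosine_similarity(doc_vectors)
distance = (1.0 - similarity).astype(np.float64)  # HDBSCAN wants distances

clusterer = hdbscan.HDBSCAN(metric="precomputed", min_cluster_size=2)
labels = clusterer.fit_predict(distance)
print(labels)  # -1 marks pages that don't belong to any cluster
```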
# 🎓 **Inspiration**
Entering our **junior year**, we realized we were unprepared for **college applications**. Over the last couple of weeks, we scrambled to find professors to work with to possibly land a research internship. There was one big problem though: **we had no idea which professors we wanted to contact**. This naturally led us to our newest product, **"ScholarFlow"**. With our website, we assure you that finding professors and research papers that interest you will feel **effortless**, like **flowing down a stream**. 🌊

# 💡 **What it Does**
Similar to the popular dating app **Tinder**, we provide you with **hundreds of research articles** and papers, and you choose whether to approve or discard them by **swiping right or left**. Our **recommendation system** will then provide you with what we think might interest you. Additionally, you can talk to our chatbot, **"Scholar Chat"** 🤖. This chatbot allows you to ask specific questions like, "What are some **Machine Learning** papers?". Both the recommendation system and chatbot will provide you with **links, names, colleges, and descriptions**, giving you all the information you need to find your next internship and accelerate your career 🚀.

# 🛠️ **How We Built It**
While half of our team worked on **REST API endpoints** and **front-end development**, the rest worked on **scraping Google Scholar** for data on published papers. The website was built using **HTML/CSS/JS** with the **Bulma** CSS framework. We used **Flask** to create API endpoints for JSON-based communication between the server and the front end. To process the data, we used **sentence-transformers from HuggingFace** to vectorize everything. Afterward, we performed **calculations on the vectors** to find the optimal vector for the highest accuracy in recommendations. **MongoDB Vector Search** was key to retrieving documents at lightning speed, which helped provide context to the **Cerebras Llama3 LLM** 🧠. The query is summarized, keywords are extracted, and the top-k similar documents are retrieved from the vector database. We then combined context with some **prompt engineering** to create a seamless and **human-like interaction** with the LLM.

# 🚧 **Challenges We Ran Into**
The biggest challenge we faced was gathering data from **Google Scholar** due to their servers blocking requests from automated bots 🤖⛔. It took several hours of debugging and thinking to obtain a large enough dataset. Another challenge was collaboration – **LiveShare from Visual Studio Code** would frequently disconnect, making teamwork difficult. Many tasks were dependent on one another, so we often had to wait for one person to finish before another could begin. However, we overcame these obstacles and created something we're **truly proud of**! 💪

# 🏆 **Accomplishments That We're Proud Of**
We're most proud of the **chatbot**, both in its front and backend implementations. What amazed us the most was how **accurately** the **Llama3** model understood the context and delivered relevant answers. We could even ask follow-up questions and receive **blazing-fast responses**, thanks to **Cerebras** 🏅.

# 📚 **What We Learned**
The most important lesson was learning how to **work together as a team**. Despite the challenges, we **pushed each other to the limit** to reach our goal and finish the project. On the technical side, we learned how to use **Bulma** and **Vector Search** from MongoDB. But the most valuable lesson was using **Cerebras** – the speed and accuracy were simply incredible!
**Cerebras is the future of LLMs**, and we can't wait to use it in future projects. 🚀

# 🔮 **What's Next for ScholarFlow**
Currently, our data is **limited**. In the future, we’re excited to **expand our dataset by collaborating with Google Scholar** to gain even more information for our platform. Additionally, we have plans to develop an **iOS app** 📱 so people can discover new professors on the go!
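As a sketch of the vectorize-then-retrieve step from "How We Built It" (the model name and paper titles are our assumptions, and the in-memory search below stands in for MongoDB Vector Search):

```python
# Sketch of the vectorize-then-retrieve step with sentence-transformers.
# In ScholarFlow the vectors live in MongoDB Vector Search; here we use
# the library's in-memory semantic search for brevity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

papers = [  # illustrative titles, not the real dataset
    "Attention Is All You Need",
    "Deep Residual Learning for Image Recognition",
    "A Survey of Reinforcement Learning for Robotics",
]
paper_embeddings = model.encode(papers, convert_to_tensor=True)

query = "machine learning papers about transformers"
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, paper_embeddings, top_k=2)[0]
for hit in hits:
    print(papers[hit["corpus_id"]], round(hit["score"], 3))
```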
## Inspiration
Have you ever wanted to search something, but aren't connected to the internet? Data plans too expensive, but you really need to figure something out online quick? Us too, and that's why we created an application that allows you to search the internet without being connected.

## What it does
Text your search queries to (705) 710-3709, and the application will text back the results of your query. Not happy with the first result? Specify a result using the `--result [number]` flag. Want to save the URL to view your result when you are connected to the internet? Send your query with `--url` to get the URL of your result. Send `--help` to see a list of all the commands.

## How we built it
Built on a **Node.js** backend, we leverage **Twilio** to send and receive text messages. When receiving a text message, we send this information to **RapidAPI**'s **Bing Search API**. Our backend is **dockerized** and deployed continuously using **GitHub Actions** onto **Google Cloud Run**. Additionally, we make use of **Google Cloud's Secret Manager** to avoid exposing our API keys to the public. Internally, we use a domain registered with **domain.com** to point our text messages to our server.

## Challenges we ran into
Our team is very inexperienced with Google Cloud, Docker and GitHub Actions, so deploying our app to the internet was a challenge. We recognized that without deploying, we could not allow anybody to demo our application. There was a lot of configuration around permissions and service accounts that had a learning curve; accessing our secrets from our backend, and ensuring that the backend is authenticated to access the secrets, was a huge challenge.

We also have varying levels of skill with JavaScript. It was a challenge trying to understand each other's code and collaborating efficiently to get this done.

## Accomplishments that we're proud of
We honestly think that this is a really cool application. It's very practical, and we can't find any solutions like this that exist right now. There was not a moment where we dreaded working on this project.

This is the most well planned project that we've all made for a hackathon. We were always aware how our individual tasks contribute to the project as a whole. When we were making an important part of the code, we would pair program together, which accelerated our understanding.

Continuously deploying is awesome! Not having to click buttons to deploy our app was really cool, and it really made our testing in production a lot easier. It also reduced a lot of potential user errors when deploying.

## What we learned
Planning is very important in the early stages of a project. We could not have collaborated so well together, and separated the modules that we were coding the way we did, without planning.

Hackathons are much more enjoyable when you get a full night's sleep :D

## What's next for NoData
In the future, we would love to use AI to better suit the search results to the client; some search results have a very large scope right now. We would also like more time to write some tests and add better error handling.
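NoData's backend is Node.js; purely to illustrate the inbound-SMS webhook pattern it relies on (shown in Python/Flask, with the Bing search call stubbed out since the exact RapidAPI endpoint isn't given in the writeup):

```python
# Pattern sketch only: an inbound-SMS webhook that replies with a search
# result. NoData's real backend is Node.js; the search call is stubbed.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def bing_search(query: str) -> str:
    """Stub for the RapidAPI Bing Search call (endpoint not shown here)."""
    return f"(top result for '{query}' would go here)"

@app.route("/sms", methods=["POST"])
def incoming_sms():
    body = request.form.get("Body", "").strip()  # Twilio posts the SMS text
    reply = MessagingResponse()
    if body == "--help":
        reply.message("Send a query, optionally with --result [n] or --url")
    else:
        reply.message(bing_search(body))
    return str(reply)
```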
partial
## Inspiration
As members of the Montreal startup ecosystem and new members of the hacker community, we have been to a lot of events like startup weekends and conferences, and have been confronted with the inconvenience of having to configure WiFi access with complicated and/or long passwords. So we wanted to come up with a solution to make the experience of connecting to a WiFi hotspot less tedious, hassle-free, and more automated.

## What it does
Say an event organizer identifies the visitors who register for his event by their phone number. The visitors' information is stored with their phone number in a database. Visitors use our app, which connects to the database and verifies them by querying their phone number. Upon arriving at the event, a Bluetooth receiver detects their device, automatically exchanges the WiFi password with our smartphone app, and connects them to the event WiFi hotspot.

## How I built it
We used a Raspberry Pi as a Bluetooth receiver and WiFi hotspot. Our app communicates with the RPi through Bluetooth serial communication. Any time a user enters the Bluetooth receiver's range, the app pairs with the receiver and sends an ID (here a phone number) via serial communication. The receiver (here the RPi) then checks against the online database (IBM Bluemix Cloudant) that this ID is registered, and sends back the hotspot SSID and the WPA passphrase to the app. The app then initiates a connection to the WiFi with the acquired credentials. It all goes on in the background without user interaction; the only exception is when the user opens the app during the event, where they need to accept Bluetooth activation first.

## Challenges I ran into
Making the Raspberry Pi behave as a hotspot was really tricky; interfaces kept conflicting for a while. We also really struggled to understand the serial communication between the Android app and the Raspberry Pi through Bluetooth.

## Accomplishments that I'm proud of
This event was great for all of us. We pushed through our comfort zones and decided to take on challenges that we were unfamiliar with, such as hardware hacks, and connecting them through Android apps. The thing we are especially proud of is how much progress we have made in just 24 hours.

## What I learned
We learned a lot about the Bluetooth protocol and stack. We also learned quite a bit about network bridges and wireless specs. And last but not least, we learned that hackathons are also about socializing and not just hacking/coding as I first thought.

## What's next for MiFi
We think MiFi has the potential to be more than a hack, and to evolve into an actual product. It makes it easy for both event organizers and visitors to respectively share and access WiFi hotspots. MiFi could also be used in co-working spaces, schools, offices, etc. There's no limit to the applications of MiFi in real life!
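A minimal sketch of the Pi side of the exchange described above, using PyBluez as one plausible implementation (the database lookup is stubbed and the credentials are placeholders, not the team's actual code):

```python
# Sketch of the Pi-side RFCOMM server: receive a phone number, check it
# against the guest database (stubbed), reply with WiFi credentials.
# Uses PyBluez as one plausible implementation; values are placeholders.
import bluetooth

def is_registered(phone_number: str) -> bool:
    """Stub for the Cloudant lookup of registered attendees."""
    return phone_number == "+15145550123"

server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
server.bind(("", bluetooth.PORT_ANY))
server.listen(1)

client, addr = server.accept()
phone = client.recv(1024).decode().strip()
if is_registered(phone):
    client.send("EventWiFi;s3cretPassphrase".encode())  # SSID;passphrase
else:
    client.send(b"DENIED")
client.close()
server.close()
```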
Are you an upperclassman? Did you finally get that single dorm you've always wanted? Are you suddenly worried that someone may try to "borrow" things from your room without your permission? Do you like music? Do you like Bluetooth speakers? Is this too many questions? WELL HAVE I GOT A SOLUTION FOR YOU!!

## **MusiCAM**
The world's first Bluetooth speaker turned surveillance camera! Equipped with a huge spectrum analyzer, MusiCAM's dock contains a *hidden* web camera that allows users to view a live stream of their dorm from anywhere.

## Inspiration
This was a combination of cool things I wanted to build. I originally wanted to build a Bluetooth speaker, a spectrum analyzer, and a surveillance camera separately. BUT WHY NOT SMUSH THEM TOGETHER??!!

## What it does
A spectrum analyzer that visualizes music and *hides* a Pi camera that streams live video to the internet. The ideal design allows for a detachable speaker and confi

## How I built it
I used a Raspberry Pi 3, a Pi camera, a SparkFun RedBoard with a spectrum analyzer shield, blood, sweat, and tears.

## Challenges I ran into
**WIRELESS.** I spent most of my time setting up a fully headless Raspberry Pi workspace. The Pi is fully configured with static IPs for both Ethernet and WLAN connections and will stream video to a server accessible to any device on the same network. This took quite some time to set up, as the out-of-the-box network settings for the Pi are not very good (I also had no idea what I was doing).

## Accomplishments that I'm proud of
I am extremely proud of the headless setup. I was able to fully configure and program the Pi without a desktop setup. I used SSH, VNC, and *practically magic* to set up the Pi so that I can connect to it merely using the hotspot on my phone. A huge step forward in making this a practical product!

## What I learned
Having never really used a Raspberry Pi for anything, I learned so much about its capabilities, setup, and practical uses. I learned about RPi SSH, VNC servers, flashing images, and many other cool RPi things.

## What's next for MusiCAM
* Motion-tracking notification feature that sends a text or email when a **person** is detected
* Streaming to a private web address, accessible from anywhere
* A custom-built, detachable Bluetooth speaker
## Inspiration
The world is constantly chasing after smartphones with bigger screens and smaller bezels. But why wait for costly display technology, and why get rid of old phones that work just fine? We wanted to build an app to create the effect of the big screen using the power of multiple small screens.

## What it does
InfiniScreen quickly and seamlessly links multiple smartphones to play videos across all of their screens. Breathe life into old phones by turning them into a portable TV. Make an eye-popping art piece. Display a digital sign in a way that is impossible to ignore. Or gather some friends and strangers and laugh at memes together. Creative possibilities abound.

## How we built it
Forget Bluetooth, InfiniScreen seamlessly pairs nearby phones using ultrasonic communication! Once paired, devices communicate with a Heroku-powered server written in node.js, express.js, and socket.io for control and synchronization. After the device arrangement is specified and a YouTube video is chosen on the hosting phone, the server assigns each device a region of the video to play. Left/right sound channels are mapped based on each phone's location to provide true stereo sound support. Socket-emitted messages keep the devices in sync and provide play/pause functionality.

## Challenges we ran into
We spent a lot of time trying to implement all functionality using the Bluetooth-based Nearby Connections API for Android, but ended up finding that pairing was slow and unreliable. The ultrasonic + socket.io based architecture we ended up using created a much more seamless experience but required a large rewrite. We also encountered many implementation challenges while creating the custom grid arrangement feature, and trying to figure out certain nuances of Android (file permissions, UI threads) cost us precious hours of sleep.

## Accomplishments that we're proud of
It works! It felt great to take on a rather ambitious project and complete it without sacrificing any major functionality. The effect is pretty cool, too—we originally thought the phones might fall out of sync too easily, but this didn't turn out to be the case. The larger combined screen area also emphasizes our stereo sound feature, creating a surprisingly captivating experience.

## What we learned
Bluetooth is a traitor. Mad respect for UI designers.

## What's next for InfiniScreen
Support for different device orientations, and improved support for unusual aspect ratios. A larger selection of video sources (Dailymotion, Vimeo, random MP4 URLs, etc.). Seeking/skip controls instead of just play/pause.
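The sync mechanism is described only at the message level; a toy broadcast server in that spirit (Python with python-socketio standing in for the team's node.js/socket.io backend, with invented event names) could look like:

```python
# Toy sync server in the spirit of InfiniScreen's socket.io backend
# (the real server is node.js). Event names are our own invention.
import time
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def play(sid, data):
    # Host pressed play: broadcast the video position plus a shared
    # start timestamp so every device begins the same frame together.
    sio.emit("play", {"position": data["position"], "at": time.time() + 0.5})

@sio.event
def pause(sid, data):
    sio.emit("pause", {"position": data["position"]})

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```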
losing
# HackGile
HackGile is a tool that brings the Agile workflow to hackathons. Our tool lets your team create sprints, assign tasks, and organize your team as you break your hackathon project down into byte-sized pieces.

## Inspiration
A lot of the time, it can be hard staying organized during hackathons. So we thought that bringing in the Agile workflow would ease some of the struggle when it comes to who's doing what.

## Technology
We used a lot of cool technology during this hackathon. We're running a WAMP stack, letting us use PHP for our back-end and JavaScript for our front-end. We used Materialize to style our front-end, giving it that sleek Material UI look.

## What's Next
There are so many opportunities for HackGile. Besides just using it as a workflow tool, it has the capability of supporting proof-of-work blockchain for teams as they confirm the work their colleagues have completed.
## Inspiration
Many people that we know want to get more involved in the community but don't have the time for regular commitments. Furthermore, many volunteer projects require an extensive application, and applications for different organizations vary, so it can be a time-consuming and discouraging process. We wanted to find a way to remove these barriers by streamlining the volunteering process so that people can get involved in one-time projects without needing to apply every time.

## What it does
It is a website aimed at streamlining volunteer hiring and application processes. There are two main users: volunteer organizations and volunteers. Volunteers sign up once, registering preset documents, waivers, etc. These then qualify them to volunteer at any of the projects posted by organizations. Organizations can post event dates, locations, etc., and volunteers can sign up with the touch of a button.

## How I built it
We used Node.js, Express, and MySQL for the backend. We used Bootstrap for the front-end UI design and Google APIs for some of the functionality. Our team divided the work based on our strengths and interests.

## Challenges I ran into
We ran into problems integrating MongoDB and the Mongo daemon, so we had to switch to MySQL for our database. MySQL querying and setup had a learning curve that was very discouraging, but we were able to gain the necessary skills and knowledge to use it. We tried to set up a RESTful API, but ultimately we decided there was not enough time/resources to execute it efficiently, as there were other tasks that were more realistic.

## Accomplishments that I'm proud of
We are proud to have all completed our first 24-hour hackathon. Throughout this process, we learned to brainstorm as a team, create a workflow, and communicate our progress/ideas, and we all acquired new skills. We are proud to have something with cohesive, functioning components and to have completed our first non-academic collaborative project. We all ventured outside of our comfort zones, using languages that we weren't familiar with.

## What I learned
This experience has taught us a lot about working in a team and communicating with other people. There is so much we can learn from our peers. Skill-wise, many of our members gained experience in Node.js, MySQL, endpoints, embedded JavaScript, etc. It taught us a lot about patience and persevering, because oftentimes problems could seem unsolvable, but we were still able to solve them with time and effort.

## What's next for NWHacks2020
We are all very proud of what we have accomplished and would like to continue this project, even though the hackathon is over. The skills we have all gained are sure to be useful, and our team has made this a very memorable experience.
## Inspiration
Elementary school kids are very savvy with searching via Google, and while the content returned is sometimes relevant, it may not be at a suitable reading level, such as when the first search result talks about something like phytochemicals or pharmacology. Is there a way to assess whether links in a search result are at the level users want to read? That's why we created Readabl. Readability is about the reader, and different personas will have their own perspective on how readability metrics can help them. Our vision is to enable users to find content suitable for their needs and help make content accessible to everyone.

## What it does
Readabl offers search results along with readability metrics so that users can see at a glance which search results are suitable for them to read.

## How we built it
The entire application is hosted in a monorepo consisting of a JavaScript frontend framework - Svelte - with a FastAPI backend endpoint. The frontend is hosted on Netlify while the backend is hosted using GCP's Cloud Run. The search and processing in the backend is built using both Google's Custom Search JSON API and the py-readability-metrics library.

### Backend
Hosted on GCP's Cloud Run using Docker, we use FastAPI to receive the user's search term from our frontend and rank the results for the user. FastAPI talks to the Google Search API, retrieving information and passing it along. Before passing results to the frontend, we parse each page using a Python library - BeautifulSoup - to get the text to be ranked for readability. We also explored concurrent programming in Python in the backend so that we can parse multiple webpages in parallel to speed up processing.

backend -> <https://api.readabl.tech/>

### Frontend
The frontend uses the Svelte framework as the main driver due to its fast run time and minimalistic structure with little boilerplate code. We explored using a UI framework to speed up the development workflow, but few of the existing UI frameworks suited the project, due to limited functionality and poor documentation.

frontend -> <https://readto.beabetterhuman.tech/>

## Challenges we ran into
We explored multiple new technologies during this hackathon. Since we were all new to the technology we used, we faced a steep learning curve and issues navigating GCP:

* backend processing takes a lot longer and times out the search results when there is too much to parse (e.g. philosophical questions). We are also limited by the Google API to requesting only 10 links per search, so we needed to do this recursively, which added to the processing time
* couldn't redeem MLH GCP credits
* lack of knowledge of the Svelte.js framework
* lack of UI libraries to speed up development time
* GCP Cloud Run deployment blocked due to Python requirements versioning
* deployment on Netlify and setting up custom domains
* constantly having Git merge conflicts

## Accomplishments that we're proud of
We made a working search engine! We learned a ton about development with GCP and deployment using cloud technologies! Each of us was able to challenge ourselves by working with new tools and APIs. Moreover, we have been very supportive and helpful to each other, assisting one another to the best of our knowledge.
In the end, the team made a functional product with most of the features we envisioned from the start, and we bring home new knowledge as well as new tools to explore later on. We knew we took on an ambitious project, and we are really proud of what we were able to achieve in this hackathon.

## What we learned
We integrated and tried many APIs from various providers, which was a valuable learning experience. Resolving conflicts helped us understand more thoroughly how things work behind the scenes. In addition, as a team with different skill sets spread across different time zones, we learned how to communicate and work together effectively. We also learned how to help each other, since each teammate had varying experience with certain tech stacks and applications. It was everyone's first experience working with Svelte and GCP services, so wiring up all the additional APIs while reducing the processing time on top of that was rather challenging. We also learned a lot about accessibility and leveraging cloud technology.

## What's next for Readabl
We plan to improve the search and ranking algorithm to improve performance. We also hope to build a community that contributes back and makes the world a bit easier to navigate, at least readability-wise. We are also searching for new datasets that include more information, such as scrolling-speed data and color vision deficiency information on webpages, to implement a more inclusive search function.

# How to Contact Us
* {ben}#5927 - Benedict Neo
* ceruleanox#7402 - Anita Yip
* Pravallika#2768 - Pravallika Myneni
* weichun#3945 - Wei Chun
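To make the scoring step from "How we built it" concrete, here is a minimal sketch pairing BeautifulSoup with py-readability-metrics (the URL is a placeholder, and the library requires at least 100 words of text):

```python
# Sketch of Readabl's per-page scoring: strip a page to plain text,
# then compute readability metrics. URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from readability import Readability  # py-readability-metrics

html = requests.get("https://example.com/article").text
text = BeautifulSoup(html, "html.parser").get_text(separator=" ")

r = Readability(text)  # needs >= 100 words of text
print(r.flesch_kincaid().grade_level)
print(r.gunning_fog().grade_level)
```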
losing
## Inspiration
Keep yourself hydrated!

## What it does
Tracks the amount of water you drank.

## How we built it
We connected a Bluetooth module to the Arduino and sent the collected data through to the app.

## Challenges we ran into
The initial setup was quite difficult. We weren't sure what tools we were going to use, and finding the limitations of those tools could be quite challenging and unexpected.

## Accomplishments that we're proud of
We successfully managed to combine the mechanical, electrical, and software aspects of the project. We are also proud that university students from different cultural and technical backgrounds gathered as a team and successfully carried out the project.

## What we learned
We learned how to maximize the usage of various tools: how to calculate the amount of water that has traveled through the tube using the volumetric flow rate and time taken, how to send sensor data to the app, and how to build an app that receives such data while providing an intuitive user experience.

## What's next for Hydr8
A smaller component to fit in a bottle, and more sensors to increase the accuracy of the tracked data. More integration with the app would also be a huge improvement.
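The volume calculation mentioned in "What we learned" is flow rate integrated over time; a sketch (the pulses-per-L/min calibration below is a figure typical of hall-effect flow sensors, an assumption rather than a value from the project):

```python
# Sketch of the volume calculation: a hall-effect flow sensor emits
# pulses at a frequency proportional to flow. The 7.5 Hz per L/min
# calibration is common for YF-S201-style sensors and is an assumption
# here, not a measured value from the project.
PULSES_PER_LPM = 7.5  # pulse frequency (Hz) per litre/minute of flow

def volume_ml(pulse_count: int, seconds: float) -> float:
    """Water volume that passed through the tube during `seconds`."""
    frequency = pulse_count / seconds          # Hz
    flow_lpm = frequency / PULSES_PER_LPM      # litres per minute
    return flow_lpm * (seconds / 60.0) * 1000  # millilitres

print(volume_ml(pulse_count=450, seconds=10))  # -> 1000.0 ml
```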
## Inspiration
We were sitting together as a team after dinner when one of our team members pulled out her phone and mentioned she needed to log her food, and how she found the app she used (MyFitnessPal) to be quite tedious. This is a sentiment shared by many users we've encountered, and we decided there must be a way to make this process simple and smooth!

## What it does
Artemis is an Amazon Alexa experience that changes the way you engage in fitness and meal tracking. Log your food and caloric intake, and know the breakdown of your daily diet with a simple command. All you have to do is tell Artemis that you ate something, and she'll automatically record it for you, retrieve all pertinent nutrition information, and see how it stacks up against your daily goals. Check how you're doing at any time by asking Artemis, "How am I doing?" or by looking up your stats, presented in a clear and digestible way at [www.artemisalexa.com](http://www.artemisalexa.com).

## How we built it
We take the foods parsed from the language request, make a call to the Nutritionix API to get the caloric breakdown, and update the backend server, which live-updates the dashboard. The smart-sensor water bottle tracks the water level using ultrasonic waves that bounce back with distance data.

## Challenges we ran into
It's definitely difficult to model data beyond the two days we've been working on this project, and we wanted to model a much richer data set in our dashboard.

## Accomplishments that we're proud of
We're really proud of the product we've built!

* Polished and pleasant user experience
* Thorough coverage of conversation; Artemis can sustain a pertinent conversation about healthy eating
* Wide breadth of data visualization
  * Categorical breakdown
  * Variances in caloric intake over the course of the day
  * Items consumed as percentages of daily nutritional breakdown
* Light sensor for fluid color detection (aside from water -- no cheating with soda!)
* Ultrasonic sensor that measures water level

## What's next for Artemis
* We're hoping to build Fitbit integration so that Alexa can directly log your food into one app.
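A sketch of the Nutritionix lookup step (endpoint and headers reflect our understanding of the v2 natural-language API; the app ID/key are placeholders, and field names should be verified against the current docs):

```python
# Sketch of the Nutritionix lookup: free-text food in, calories out.
# Endpoint/headers reflect our understanding of the v2 API; the app id
# and key are placeholders, and field names should be double-checked.
import requests

def calories_for(food_text: str) -> float:
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": "YOUR_APP_ID", "x-app-key": "YOUR_APP_KEY"},
        json={"query": food_text},
    )
    resp.raise_for_status()
    return sum(item["nf_calories"] for item in resp.json()["foods"])

print(calories_for("two eggs and a slice of toast"))
```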
## Inspiration
Our team members all struggle to maintain proper hydration.

## What it does
The **h2-AH!** is a water bottle attachment that monitors water consumption and reminds users when they have gone too long without drinking.

## How we built it
We combined an Arduino Nano 33 IoT with an ultrasonic sensor, a speaker, and an attachment band. The current model uses external sensors, but the final product will be self-contained, watertight, and attach to the water bottle cap with a suction cup.

## Challenges we ran into
We had difficulty acquiring materials for this hackathon, which limited the scope of projects we could pursue.

## Accomplishments that we're proud of
We are proud of our overall concept and presentation.

## What we learned
We learned that it is important to have a wide variety of materials available prior to starting hackathons.

## What's next for h2-AH!
The next step for **h2-AH!** is to build a more compact, waterproof design that would be appropriate for the mass market.
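Measuring consumption with an ultrasonic sensor comes down to converting echo time into a fill level and watching it drop; a sketch under an assumed cylindrical-bottle geometry (not the team's exact hardware):

```python
# Sketch of the level computation behind an ultrasonic bottle monitor.
# The geometry below (cylindrical bottle, sensor in the cap) is an
# assumed setup, not the team's actual design.
import math

BOTTLE_HEIGHT_CM = 20.0
BOTTLE_RADIUS_CM = 3.5
SPEED_OF_SOUND = 34300  # cm/s in air

def water_volume_ml(echo_time_s: float) -> float:
    """Echo round-trip time -> distance to surface -> remaining volume."""
    distance_cm = (echo_time_s * SPEED_OF_SOUND) / 2  # one-way distance
    water_height = max(0.0, BOTTLE_HEIGHT_CM - distance_cm)
    return math.pi * BOTTLE_RADIUS_CM**2 * water_height  # 1 cm^3 == 1 ml

before = water_volume_ml(0.000292)  # surface ~5 cm below the cap
after = water_volume_ml(0.000467)   # surface ~8 cm below the cap
print(f"you drank about {before - after:.0f} ml")
```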
partial
## Inspiration
Planning vacations can be hard. Traveling is a very fun experience but often comes with the stress of curating the perfect itinerary with all the best sights to see, foods to eat, and shows to watch. You don't want to miss anything special, but you also want to make sure the trip is still up your alley in terms of your own interests - a balance that can be hard to find.

## What it does
explr.ai simplifies itinerary planning with just a few swipes. After selecting your destination, the duration of your visit, and a rough budget, explr.ai presents you with a curated list of up to 30 restaurants, attractions, and activities that could become part of your trip. With an easy-to-use swiping interface, you choose what sounds interesting to you, and after a minimum of 8 swipes, explr.ai converts your opinions into a full itinerary of activities for your entire visit.

## How we built it
We built this app using React with TypeScript for the frontend and Convex for the backend. The app takes in user input from the homepage regarding the location, price point, and time frame. We pass the location and price range to the Google API to retrieve the highest-rated attractions and restaurants in the area. Those options are presented to the user on the frontend with React and CSS animations that let you swipe each card Tinder-style. Taking into consideration the user's swipes and initial preferences, we query the Google API once again to get additional similar locations that the user may like, and pass this data into an LLM (using Together.ai's Llama2 model) to generate a custom itinerary for the user. For each location in the output, we string together images from the Google API to create a slideshow of what your trip would look like, along with an animated timeline with descriptions of each location.

## Challenges we ran into
Front-end and design require a LOT of skill. It took us quite a while to come up with our project, and we originally were planning a mobile app, but it's quite difficult to learn completely new languages such as Swift along with new technologies, all in a couple of days. Once we started on explr.ai's backend, we also had trouble passing the appropriate information to the LLM and getting back proper data that we could inject into our web app.

## Accomplishments that we're proud of
We're proud of the overall functionality and our ability to get something working by the end of the hacking period :') More specifically, we're proud of parts of our frontend, including the card-swiping and timeline animations, as well as the ability to parse data from various APIs and put it together with lots of user input.

## What we learned
We learned a ton about full-stack development overall, whether that be the importance of Figma and UX design work, or how to best split up a project when every part is moving at the same time. We also learned how to use Convex and Together.ai productively!

## What's next for explr.ai
We would love to see explr.ai become smarter and support more features. In the future, explr.ai could pull availability from hotels, attractions, and restaurants and book reservations straight from the web app. Once you're on your trip, you should also be able to check in to various locations and provide feedback on each component. explr.ai could also have a social media component: sharing your itineraries, plans, and feedback with friends and helping each other better plan trips.
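A sketch of the initial candidate fetch described above (Google Places Text Search, which supports price-level filters 0-4; the API key and query wording are placeholders):

```python
# Sketch of the initial candidate fetch via Google Places Text Search.
# API key is a placeholder; minprice/maxprice take levels 0-4.
import requests

def top_attractions(city: str, max_price_level: int, api_key: str) -> list:
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/textsearch/json",
        params={
            "query": f"top attractions in {city}",
            "maxprice": max_price_level,
            "key": api_key,
        },
    )
    resp.raise_for_status()
    results = resp.json()["results"]
    # Highest-rated first, as the writeup describes
    return sorted(results, key=lambda r: r.get("rating", 0), reverse=True)

for place in top_attractions("Lisbon", 2, "YOUR_API_KEY")[:5]:
    print(place["name"], place.get("rating"))
```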
## Inspiration
Being frugal students, we all wanted to create an app that would tell us what kind of food we could find around us based on a budget that we set. And so that's exactly what we made!

## What it does
You give us a price that you want to spend and the radius that you are willing to walk or drive to a restaurant; then voila! We give you suggestions based on what you can get in that price range at different restaurants, providing all the menu items with price and calculated tax and tips! We keep the user history (the food items they chose), and by doing so we open the door to crowdsourcing massive amounts of user data as well as the opportunity for machine learning, so that we can give better suggestions for the foods that the user likes the most!

But we are not gonna stop here! Our goal is to implement the following in the future for this app:

* We can connect the app to delivery systems to get the food for you!
* Inform you about the food deals, coupons, and discounts near you

## How we built it
### Back-end
We have both an iOS and Android app that authenticates users via Facebook OAuth and stores user eating history in the Firebase database. We also made a REST server that conducts API calls (using Docker, Python and nginx) to amalgamate data from our targeted APIs and refine it for front-end use.

### iOS
Authentication using Facebook's OAuth with Firebase. Created the UI using native iOS UI elements. Send API calls to Soheil's backend server using JSON via HTTP. Used the Google Maps SDK to display geolocation information. Used Firebase to store user data in the cloud, with the capability of updating multiple devices in real time.

### Android
The Android application is implemented with a great deal of material design while utilizing Firebase for OAuth and database purposes. The application utilizes HTTP POST/GET requests to retrieve data from our in-house backend server, and uses the Google Maps API and SDK to display nearby restaurant information. The Android application also prompts the user for a rating of the visited stores based on how full they are; our goal was to create a system that would incentivize food places to produce the highest "food per dollar" rating possible.

## Challenges we ran into
### Back-end
* Finding APIs to get menu items is really hard, at least for Canada.
* An unknown API kept continuously pinging our server and used up a lot of our bandwidth.

### iOS
* First time using OAuth and Firebase
* Creating the tutorial page

### Android
* Implementing modern material design with deprecated/legacy Maps APIs and other various legacy code was a challenge
* Designing the Firebase schema and generating the structure for our API calls was very important

## Accomplishments that we're proud of
**A solid app for both Android and iOS that WORKS!**

### Back-end
* Dedicated server (VPS) on DigitalOcean!

### iOS
* Cool-looking iOS animations and real-time data updates
* Nicely working location features
* Getting the latest data from the server

## What we learned
### Back-end
* How to use Docker
* How to set up a VPS
* How to use nginx

### iOS
* How to use Firebase
* How OAuth works

### Android
* How to utilize modern Android layouts such as the Coordinator, Appbar, and Collapsible Toolbar layouts
* Learned how to optimize applications when communicating with several different servers at once

## What's next for How Much
* If we get a chance, we all want to keep working on it and hopefully publish the app.
* We are thinking of making it open source so everyone can contribute to the app.
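The core affordability check described above is simple arithmetic over menu data; a sketch (the tax and tip rates are illustrative assumptions):

```python
# Sketch of the budget filter: keep menu items whose all-in price
# (base + tax + tip) fits the user's budget. Rates are assumptions.
TAX_RATE = 0.13  # e.g. Ontario HST
TIP_RATE = 0.15

def all_in_price(menu_price: float) -> float:
    return round(menu_price * (1 + TAX_RATE + TIP_RATE), 2)

def affordable(menu: dict[str, float], budget: float) -> dict[str, float]:
    """Return {item: total price} for items within budget."""
    return {
        item: total
        for item, price in menu.items()
        if (total := all_in_price(price)) <= budget
    }

menu = {"poutine": 8.99, "burger": 12.50, "steak": 24.00}
print(affordable(menu, budget=17.00))  # poutine and burger fit
```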
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we achieved successful teamwork and collaboration. Although we formed our team later than other groups, we were able to communicate effectively and work together to bring our project to fruition. We are proud of the end result: a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects into an Android application. ## What's next for Winnur Looking forward, we plan to incorporate computer vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
partial
## Inspiration Mental health is a big issue in society, especially for millennials. It is still quite a stigmatized topic, and our goal was to provide an unobtrusive and subtle tool to help improve your mental health. ## What it does Our application is a journal-writing application where the goal is for the user to write how they feel each day in an unstructured way. We've built a model to predict emotions and changes in behaviour, to notice when a user's mental health may be deteriorating. ## How we built it The front end was built using Angular, and the back end was built using Node.js, Express.js, and a MongoDB database. To predict emotions from text, we built a convolutional neural network in TensorFlow Keras (a sketch of the architecture appears below). The model was trained using data obtained by using the PRAW API to scrape Reddit; Twitter tweets were also obtained from online datasets. ## Challenges we ran into It was very difficult to obtain data for the machine learning model. Although there are many datasets out there, they could only be obtained for research purposes, so we had to scrape our own data, resulting in data of lower quality and quantity. In addition, we tried to train another model using Indico Custom Collections; however, in the Python script, we ran into an Internal Server Error. In the end, we used Indico sentiment analysis instead. ## Accomplishments that we're proud of We are very proud of the user interface. It looks very clean and will definitely be a major factor in attracting and retaining users. ## What we learned We learned that obtaining data and processing it for training is an extremely arduous process, with many small tasks along the way that can easily go wrong. ## What's next for A Note A Day As more users begin to use A Note A Day, we will definitely need to change our database to a relational database. In addition, with more users, we can obtain more relevant data to improve our machine learning model. Currently, the application only warns users when their writing shows signs of mental health deterioration. As a next step, the application could automatically text a friend group, and for serious symptoms, we could suggest professional services. Furthermore, we could incorporate cognitive behavioral therapy techniques to ask questions in more meaningful, impactful ways. In addition, we could create a premium version of A Note A Day, allowing users to connect with professional therapists. This will allow therapists to monitor a large group of users by using the model as a guideline, while also providing resources for users to avoid mental health problems at the earliest sign.
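A minimal sketch of the kind of Conv1D text classifier described above, in TensorFlow 2.x Keras. The vocabulary size, sequence length, and emotion labels are illustrative assumptions, not the team's exact values:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # tokens kept after fitting a tokenizer on the scraped posts
MAX_LEN = 100         # journal entries padded/truncated to 100 tokens
NUM_EMOTIONS = 5      # e.g. joy, sadness, anger, fear, neutral

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, input_length=MAX_LEN),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # n-gram-like filters
    layers.GlobalMaxPooling1D(),                          # strongest signal per filter
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```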
## Inspiration Mental health has become an increasingly vital issue on our campus. The facade of the perfect Stanford student (Stanford Duck Syndrome) means that real emotions and struggles are often suppressed. It is heartwarming to be able to connect with people on campus and see how they feel, in a familiar, yet anonymous way. Having a moment to connect with another person's experience of struggling with a midterm, or their happiness after Stanford beats Cal, can be amazingly uplifting. ## What it does Our cross-platform app allows users to share how they feel in words. Their feelings are geolocated onto a map, anonymously, as timestamped circles. Our NLP sentiment analyzer assigns each feeling a color based on the sentiment of the text (a sketch of one possible mapping appears below). This provides a cool visualization of how people feel across different geographic levels. For example, you can zoom into a building to observe that people in the Huang Engineering Center are generally happy right now because of TreeHacks, and zoom out to see that people at Stanford are generally stressed during midterm season. The ability to zoom in, and even tap on a specific circle to read how a person feels in their own words, allows you to go local, while zooming out allows you to go global and gauge the general sentiment of an area or building as transparent colors overlap into generalized shades. It is a fascinating way to connect with people's deepest feelings and find the humanity in our everyday life. ## How we built it Our front end was built in React Native with Google Maps and uses Node.js. Our backend consists of a Flask server written in Python, which runs our NLP sentiment analysis and determines a color for each circle based on the feeling estimated by the language model. Our database of feelings entries is stored in Firebase on the cloud; data is written to and read from it to overlay feelings entries on the map. We also have a script running on Firebase to remove entries from the map after a certain time period (for example, 6 hours), so only the most recent entries are displayed to the user. Our Flask server is deployed on Heroku. ## Challenges we ran into Getting Flask to communicate with our React Native app to produce the NLP sentiment analysis. Setting up our backend through Firebase to create markers on the map and persist users' responses in the long run. ## Accomplishments that we're proud of Integrating all the different components, from Firebase to React Native to the NLP sentiment analysis through Flask, was fascinating. ## What we learned We had no prior experience with React Native and Node.js, so we learned these from scratch. Integrating all the different aspects of the solution, from the frontend to the backend to cloud storage, was a thrilling experience. ## What's next for CampusFeels We hope to add features to track the emotional well-being of areas in the long run, as well as encourage users to develop the skills to track their own emotional well-being. We hope to apply data analytics to do this and track people's emotions related to different events and criteria, e.g. housing choices, weather, Big Game, etc.
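A minimal sketch of the sentiment-to-color step: the Flask backend scores each entry, and the score is mapped onto a red-yellow-green gradient for the map circle. The linear interpolation here is one reasonable scheme, not necessarily the app's exact one:

```python
def sentiment_to_rgb(score: float) -> str:
    """Map a sentiment score in [-1.0, 1.0] to a hex color:
    -1.0 (very negative) -> red, 0.0 -> yellow, +1.0 (very positive) -> green."""
    t = (max(-1.0, min(1.0, score)) + 1.0) / 2.0    # normalize to [0, 1]
    red = int(255 * (1.0 - max(0.0, t - 0.5) * 2))  # fades out above the midpoint
    green = int(255 * min(1.0, t * 2))              # fades in below the midpoint
    return f"#{red:02x}{green:02x}00"

print(sentiment_to_rgb(-0.9))  # stressed entry -> reddish
print(sentiment_to_rgb(0.0))   # neutral entry  -> yellow
print(sentiment_to_rgb(0.8))   # happy entry    -> greenish
```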
## Inspiration Have you ever run into the situation where you are trying to find a roommate that you are compatible with and that your current roommates will like as well? Are you tired of having the same discussion with every other roommate trying to pick the best candidate, whether it is online or offline? ## What it does Crowd-vote your next roommate at your fingertips! As someone looking for a place to rent, simply go on bunkieballot.tech, select the listings you are interested in checking out, and click submit. The current renters of each selected listing will get notified via a text message containing the candidate's profile. They can each reply back with a score of 1-10 to indicate how much they like this candidate. After all the votes are collected, Bunkie Ballot tallies the scores for each applicant of each listing (a sketch of the tally logic appears below). Finally, the roommates see the list of applicants ordered from highest to lowest rating. ## How we built it We utilized StdLib as a serverless backend to implement the sending and receiving of text messages, MongoDB for the database, and React and JavaScript for the web UI. ## Challenges we ran into As StdLib is new and our team was also learning it hands-on, we ran into a few challenges, mainly revolving around connecting JS API calls to StdLib functions, making DB calls to Mongo, and in general how to send and receive text messages from the StdLib functions. ## Accomplishments that we're proud of We mastered the StdLib technology! ## What we learned We learned how to use StdLib to build a peer roommate voting system! ## What's next for BunkieBallot * A fairer and more robust rating algorithm * A more elaborate user profile * More user-friendly text messages
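A minimal sketch, in Python, of the tally step described above; the real app stores votes in MongoDB and runs on StdLib, so the data shapes here are illustrative:

```python
from collections import defaultdict

def tally_votes(votes: list[dict]) -> dict:
    """votes: [{"listing": ..., "applicant": ..., "score": 1-10}, ...]
    Returns each listing's applicants ranked from highest to lowest average."""
    scores = defaultdict(lambda: defaultdict(list))
    for v in votes:
        if 1 <= v["score"] <= 10:  # ignore malformed text replies
            scores[v["listing"]][v["applicant"]].append(v["score"])
    return {
        listing: sorted(((name, sum(s) / len(s)) for name, s in apps.items()),
                        key=lambda pair: pair[1], reverse=True)
        for listing, apps in scores.items()
    }

votes = [
    {"listing": "12 King St", "applicant": "Sam", "score": 8},
    {"listing": "12 King St", "applicant": "Sam", "score": 6},
    {"listing": "12 King St", "applicant": "Alex", "score": 9},
]
print(tally_votes(votes))  # Alex (9.0) ranks above Sam (7.0)
```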
losing
![alt tag](https://cloud.githubusercontent.com/assets/9488660/24586497/82cac59a-1770-11e7-8301-5ea1ff3adfc3.png) On average, seasonal influenza directly costs U.S. employers $10+ billion due to hospitalization bills, lost productivity, and related medical treatment. Even a non-epidemic flu season causes more than 200,000 people to be hospitalized. Here at Princeton University, students who are sick suffer from sub-optimal learning experiences and become socially deprived when avoiding others to prevent spreading the disease. A large number of these cases are attributed to the human hand. Lackadaisical attitudes towards washing hands, combined with the habit of eating food during computer use, are a recipe for misery. Healthkey promotes your well-being by automatically sanitizing your keyboard when it is not in use, preventing the transmission of pathogens from your hand to your mouth as well as to other people. With its easy-to-use system, Healthkey keeps you healthy. Product concept video found at <https://youtu.be/F4j7RQu8qIs> Demo video found at <https://youtu.be/_PbCMottWjA>
## Inspiration The inspiration for our hackathon idea stemmed from an experience one of our team members had while recently visiting a hospital. They noticed the large number of staff required at every entrance to ensure that patients and visitors had their masks on properly, as well as to ask COVID-19 screening questions and record their time of entry into the hospital. They thought about the potential problems and implications this might have, such as health care workers having a higher chance of getting sick due to more frequent exposure to other individuals, as well as the resources required to complete this task. We also discussed the scalability of this procedure and how it could apply to schools and businesses. Hiring an employee to perform these tasks may be financially unfeasible for small businesses and schools, but the social benefit these services provide would definitely help contain COVID-19. Our team decided to see if we could use a combination of machine learning, AI, robotics, and web development to automate this process and create a solution that would be financially feasible and reduce the workload on already hard-working individuals who work every day to keep us safe. ## What it does Our stand-alone solution consists of three main elements: the hardware, the mobile app, and the software that connects everything together. **Camera + Card Reader** The hardware is meant to be placed at an entry point for a business/school. It automatically detects the presence of a person through an ultrasonic sensor. From there, it adjusts the camera to center the view for a better image and takes a screenshot. The screenshot is used to make an API request to the Microsoft Azure Computer Vision Prediction API, which returns a confidence value for each tag (Mask / No Mask); a sketch of this call appears at the end of this post. Once the person is confirmed to be wearing a mask, the individual is prompted to scan their RFID tag. The hardware looks up the owner of the RFID ID and records a check-in or check-out time on their profile in a cloud database (Firestore). **Mobile Application** The mobile application is intended for the administrator/business owner, who can manage the hardware settings and observe analytics *(we did not have enough time to complete that, unfortunately)*. Additionally, the mobile app can be used to perform basic contact tracing through an API request to a custom-made Autocode API that checks the database and determines recent potential instances of exposure between employees based on check-in and check-out times. It also determines the employees affected and automatically sends them an email with the dates of the potential exposure instances. **The software** Throughout our application, we had many smaller pieces of software that were used to run our overall prototype. From the Python scripts on our Raspberry Pi that communicate with the database, to the custom API made on Autocode, there were many small pieces that we had to put together for this prototype to work. ## How we built it For all of our team members, this was our first hackathon, and we had to think creatively about how we were going to make our idea into a reality. Because of this, we used many well-documented, beginner-friendly services to create a "stack" that we were able to manage with our limited expertise.
Our team's background is mainly in robotics and hardware, so we definitely wanted to incorporate a hardware element into our project; however, we also wanted to take full advantage of this amazing opportunity at Hack The 6ix and apply the knowledge we learned in the workshops. **The Hardware** To make our hardware, we utilized a Raspberry Pi and various sensors we had on hand: an RFID reader, ultrasonic sensor, servo motor, and web camera, which perform the tasks mentioned in the section above. Additionally, we had access to a 3D printer and were able to print some basic parts to mount our electronics and create our device. **(Although our team has a stronger mechanical background, we spent most of our time programming haha)** **Mobile Application** To program our mobile app, we utilized Flutter, a framework developed by Google that is a very easy way to rapidly prototype a mobile application supported by both Android and iOS. Because Flutter is based on the Dart language, it was very easy to follow along with tutorials and documentation, and some members had previous experience with Flutter. We also decided to go with Firestore as our database, as there was quite a lot of documentation and support for using the two together. **Software** To put everything together, we had to utilize a variety of skills and get creative with how we were going to connect our backend, considering our limited experience in programming and computer science. To run the mask detector, we first used some Python scripts on a Raspberry Pi to center our camera on the subject and perform very basic face detection to determine whether to take a screenshot to send to the cloud for processing. We did not want to stream our entire camera feed to the cloud, as that could be costly due to a high rate of API requests, and impractical due to hardware limitations. Because of that, we used some lower-end face detection to determine whether a screenshot should be taken; from there, we send it through an API request to the Microsoft Azure Computer Vision Prediction API, where we had trained a model to detect two classifiers (Mask and No Mask). We were very impressed with how easy it was to set up the Azure Prediction API, and it really helped our team achieve reliable, accurate, and fast mask detection. Since we did not have much experience with back-end development in Flutter, we decided to utilize a very powerful tool called Autocode, which we learned about during a workshop on Saturday. With the ease of use and utility of Autocode, we created a back-end API that our mobile app can call with an HTTP request; through it, our Autocode program interacts with our Firebase database to perform basic calculations and achieve the basic contact tracing we wanted in our project. The Autocode project can be found here! [link](https://autocode.com/src/samsonwhua81421/unmasked-api/) ## Challenges we ran into The majority of the challenges we ran into were due to our limited experience in back-end development, which left us with a lot of gaps in the functionality of our project. However, the mentors were very friendly and helped us connect the different parts of our project. Our creativity also aided in helping us connect our portions together. Another challenge we ran into was our hardware.
Because of quarantine, many of us were at home and did not have access to lab equipment that could have been very helpful in diagnosing most of our hardware problems (multimeters, oscilloscopes, soldering irons). However, we were able to solve these problems, albeit using very precious hackathon time to do so. ## What we learned * Hackathons are very fun; we definitely want to do more! * Sleep is very important. :) * Microsoft Azure services are super easy to use * Autocode is very useful and cool ## What's next for Unmasked The next steps for Unmasked would be to further develop the contact tracing feature of the app, as knowing who was in the same building at the same time does not provide enough information to determine who may actually be at risk. One potential solution would be to have employees scan their IDs based on location as well, enabling the ability to determine whether any individuals were actually near those with the virus.
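A minimal sketch of the screenshot-to-prediction call described in the Unmasked write-up above: POST the captured frame to an Azure vision prediction endpoint and check the confidence of the "Mask" tag. The URL shape follows Azure's Custom Vision prediction route, and every placeholder value plus the threshold is an assumption:

```python
import requests

PREDICTION_URL = (  # placeholders for your own Azure resource/project/iteration
    "https://YOUR_RESOURCE.cognitiveservices.azure.com/customvision/v3.0/"
    "Prediction/YOUR_PROJECT_ID/classify/iterations/YOUR_ITERATION/image"
)
PREDICTION_KEY = "YOUR_PREDICTION_KEY"
MASK_THRESHOLD = 0.80  # assumed confidence cutoff

def is_wearing_mask(screenshot_path: str) -> bool:
    with open(screenshot_path, "rb") as f:
        resp = requests.post(
            PREDICTION_URL,
            headers={"Prediction-Key": PREDICTION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
            timeout=10,
        )
    resp.raise_for_status()
    # The response holds one probability per trained tag (Mask / No Mask).
    return any(p["tagName"] == "Mask" and p["probability"] >= MASK_THRESHOLD
               for p in resp.json()["predictions"])

# if is_wearing_mask("frame.jpg"): prompt the visitor to scan their RFID tag
```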
## What it does Using Blender's API and a whole lot of math, we've created a service that allows you to customize and perfectly fit 3D models to your unique dimensions. No more painstaking adjustments and wasted 3D prints: simply select your print, enter your sizes, and download your fitted prop within a few fast seconds. We take in specific wrist, forearm, and length measurements and dynamically resize preset .OBJ files without any unsavory warping (a sketch of the scaling step appears below). Once the transformations are complete, we export it right back to you, ready to send off to the printers. ## Inspiration There's nothing cooler than seeing your favorite iconic characters coming to life, and we wanted to help bring that magic to 3D printing enthusiasts! Starting off as a beginner with 3D modeling can be a daunting task -- trust us, most of the team is in the same boat as you. By building up these tools and automation scripts, we hope to pave a smoother road for people interested in innovating their hobbies and getting cool customized prints out fast. ## Next Steps With a little bit of preprocessing, we can let any 3D modeler upload their models to our web service and have them dynamically fitted in no time! We hope to grow our collection of available models and make 3D printing much easier and more accessible for everyone. As it grows, we hope to make it a common tool in every 3D artist's arsenal. *Special shoutout to Pepsi for the Dew*
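A minimal sketch, using Blender's Python API, of the fit-to-measurements idea described above. The base dimensions, the measurement-to-axis mapping, and the importer operator are assumptions about one possible setup, not the service's actual pipeline:

```python
import bpy

BASE = {"wrist": 6.0, "forearm": 8.5, "length": 30.0}  # assumed base sizes, cm

def fit_prop(obj_path: str, wrist: float, forearm: float, length: float):
    # Newer Blender OBJ importer (older versions use bpy.ops.import_scene.obj).
    bpy.ops.wm.obj_import(filepath=obj_path)
    prop = bpy.context.selected_objects[0]
    # Assume X tracks wrist girth, Y forearm girth, Z overall length, so each
    # measurement only stretches its own axis.
    prop.scale = (wrist / BASE["wrist"],
                  forearm / BASE["forearm"],
                  length / BASE["length"])
    bpy.ops.object.transform_apply(scale=True)  # bake the scale into the mesh
    return prop

# fit_prop("/models/gauntlet.obj", wrist=7.2, forearm=9.1, length=28.0)
```

In practice the "no unsavory warping" claim implies something smarter than a plain per-axis scale, such as region-weighted deformation, but the ratio idea above is the starting point.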
losing
## Inspiration Mornings are hard. I think that's something everybody around the world can agree on. We're groggy, usually running a bit late, and trying to get our days started as soon as possible. And for us at Morning, that usually involves a complicated procedure every morning: checking our emails from the past night, orienting ourselves with our Google Calendars, catching up on news, checking the weather, checking our messages, etc. Despite how hectic mornings can be, scientific research has shown that mornings are actually when individuals are most in touch with complex emotions, such as love, inspiration, and mood. Here at Morning, we take all the fuss out of mornings so that you can focus on what is truly important: being in tune with yourself. ## What it does Morning provides a unique unified platform designed to make mornings easier. It presents the user with a customizable unified dashboard containing everything a user could need in the morning in one sleek and easy-to-use interface. Furthermore, what makes Morning unique among other lifestyle apps is that it helps you self-diagnose your mood and motivation level by asking you how you're feeling every morning, then generating a daily score based on sentiment analysis of the choices selected. We present graphics of personalized data to make interpretation easy and satisfying to look at. ## How I built it We built Morning with the Angular.js framework and Firebase for storage. ## Challenges I ran into DevOps was more challenging than expected ## Accomplishments that I'm proud of The bubble picker UI is sleek. ## What I learned First time for many of us using Angular.js ## What's next for Morning It only goes uphill from here. We're thinking of adding an alarm feature and making it more of a sleep-oriented app.
## Inspiration We wanted to try building a game, as none of us had much experience in the area. Space Invasion is a simple but fun game that we thought would be a challenge to put together. ## What it does Our web app currently supports an account creation system and rewards players with in-game currency as they play the game. ## How we built it We built the front end using Phaser 3, an HTML game development framework. The backend is done with Node.js and Express, hosted on a Microsoft Azure site. A Firebase database is used to store accounts and balances. We chose to build a web application because it can be played on both mobile and desktop. ## Challenges we ran into We originally wanted to work in Unity, but due to network issues, we were forced to use something with a smaller setup file size. We had a few issues getting set up, and none of us had any real experience with Phaser. Some members of the group were also unfamiliar with JavaScript, so there was some learning involved. Getting Azure set up to host our application also took a bit of troubleshooting before we got it working. ## Accomplishments that we're proud of We made a game! And in our opinion, it looks pretty good! Of course, it could use some polishing up... but for a group with barely any experience, we think we did pretty well. ## What we learned We learned how to use Phaser and how to host a web app on Microsoft Azure. Some of us learned Node.js for the first time. We also got to experience the complexities of developing a game from scratch. ## What's next for Space Invasion We believe that Space Invasion has potential as a hyper-casual game. It has a currency system that can be used not only to purchase new skins for ships, but also to buy power-ups or extra lives. We were also looking to do a "challenge" system where you would be able to challenge your friends to beat your score, with a reward system for winning a match against someone. There is also room for things like achievements and social media leaderboards, along with a potential co-op mode: since it is a web application, it would be possible to leverage WebSockets to support a two-player co-op mode.
## Inspiration We wanted to make an app that helped people be more environmentally conscious. After we thought about it, we realised that most people are not, because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits (a sketch of that calculation appears below). We took this idea and added a full visualisation aspect to make a complete budgeting app. ## What it does Our app allows users to log in, then retrieves user data to visually represent the most interesting data from that user's financial history, as well as their utilities spending. ## How we built it We used HTML, CSS, and JavaScript as our front end, then used Arduino to get light sensor data and Nessie to retrieve user financial data. ## Challenges we ran into Seamlessly integrating our multiple technologies, and formatting our graphs in a way that is both informational and visually attractive. ## Accomplishments that we're proud of We are proud that we have a finished product that does exactly what we wanted it to do and that we are proud to demo. ## What we learned We learned about making graphs using JavaScript, as well as using Bootstrap to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app. ## What's next for Budge We want to continue to add more graphs and tables to provide more information about your bank account data, and use AI to make our app give personal recommendations tailored to an individual's spending.
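A minimal sketch of the "money lost by being lazy" calculation: combine the Arduino's light-on time with an electricity rate. The wattage and rate are illustrative assumptions:

```python
BULB_WATTS = 60       # assumed bulb power draw
RATE_PER_KWH = 0.13   # assumed electricity price, $/kWh

def wasted_cost(light_on_hours: float) -> float:
    """Dollars spent powering one light for the given number of hours."""
    kwh = BULB_WATTS * light_on_hours / 1000.0
    return round(kwh * RATE_PER_KWH, 2)

# e.g. the sensor says a light stayed on 6 hours/day for a month:
print(wasted_cost(6 * 30))  # ~$1.40 -- small per bulb, but it adds up
```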
losing
## Inspiration As college students who are on a budget when traveling from school to the airport, or from campus to a different city, we found it difficult to coordinate rides with other students. The Facebook and GroupMe group chats are always flooded with students scrambling to find people to carpool with at the last minute to save money. ## What it does Ride Along finds and pre-schedules passengers who are headed between the same start and final location as each driver. ## How we built it Built using the Bubble.io framework. Utilized the Google Maps API. ## Challenges we ran into Certain annoyances when using Bubble and figuring out how to use it. We had style issues with alignment, and certain functionalities were confusing at first and required debugging. ## Accomplishments that we're proud of Using the Bubble framework properly, along with its built-in backend data feature. Getting buttons and priority features implemented well, and having a decent MVP to present. ## What we learned There are a lot of challenges when integrating multiple features together. Getting a proper workflow is tricky and takes lots of debugging and time. ## What's next for Ride Along We want to get a Google Maps API key so we can properly deploy the web app and make it fully functional. There are other features we wanted to implement, such as messaging between users.
## Inspiration When we first read Vitech's challenge for processing and visualizing their data, we were collectively inspired to explore a paradigm of programming that very few of us had any experience with: machine learning. The health-care theme of the challenge also gave our project relevant and impactful stakes. We believe that using machine learning and data science to improve the experience of people in the market for insurance plans would not only result in a more profitable model for insurance companies but also improve the lives of the countless people who struggle to choose the best insurance plans for themselves at the right costs. ## What it does Our scripts parse, process, and format the data provided by Vitech's live V3 API database. The data is initially filtered using Solr queries and then formatted into a more adaptable comma-separated values (CSV) file. This data is then run through several machine learning algorithms (a sketch of the h2o flow appears below) to extract meaningful information about the relationship between an individual's personal details and the plan they are most likely to choose. Additionally, we have provided visualizations created in R that helped us interpret the many data points more effectively. ## How we built it We began by exploring all of our ideas for how to process the data, and picked Python as a suitable language in which we believed we could accomplish all of our goals. The first step was parsing and formatting the data, after which we began observing it through the visualization tools provided by R. Once we had a rough idea of how our data is distributed, we continued by building models with the h2o Python library. ## Challenges we ran into Since none of us had much experience with machine learning prior to this project, we dived into many software tools we had never even seen before. Furthermore, the data provided by Vitech had many variables to track, and our limited understanding of the insurance market slowed our progress in making better models of our data. ## Accomplishments that we're proud of We are very proud that we got as far as we did, even though our product is not finalized. Going into this, we did not know how much we could learn and accomplish, and yet we managed to implement fairly complex tools for analyzing and processing data. We have learned greatly from the entire experience as a team and are now inspired to continue exploring data science and the power of data science tools. ## What we learned We have learned a lot about the nuances of processing and working with big data, and about what software tools are available to us for future use. ## What's next for Vitech Insurance Data Processing and Analysis We hope to further improve our modeling to get more meaningful and applicable results. The next barrier to overcome is our lack of field expertise in the insurance market; overcoming it would allow us to make more accurate and representative models of the data.
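A minimal sketch of the h2o modeling flow described above: load the formatted CSV and train a classifier predicting the chosen plan from personal details. The column names are illustrative placeholders:

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

frame = h2o.import_file("vitech_plans.csv")      # output of the parsing script
predictors = ["age", "sex", "income", "smoker"]  # assumed feature columns
response = "chosen_plan"
frame[response] = frame[response].asfactor()     # treat the plan as a class label

train, valid = frame.split_frame(ratios=[0.8], seed=42)
model = H2OGradientBoostingEstimator(ntrees=100, max_depth=5, seed=42)
model.train(x=predictors, y=response,
            training_frame=train, validation_frame=valid)

print(model.model_performance(valid=True))  # hold-out performance per plan
print(model.varimp())                       # which personal details matter most
```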
## Inspiration In the exciting world of hackathons, where innovation meets determination, **participants like ourselves often ask, "Has my idea been done before?"** While originality is the cornerstone of innovation, there's a broader horizon to explore: the evolution of an existing concept. Through our AI-driven platform, hackers can gain insights into the uniqueness of their ideas. By identifying gaps or exploring similar projects' functionalities, participants can aim to refine, iterate, or even revolutionize existing concepts, ensuring that their projects truly stand out. For **judges, the evaluation process is daunting.** With a multitude of projects to review in a short time frame, ensuring an impartial and comprehensive assessment can become extremely challenging. The introduction of an AI tool doesn't aim to replace the human element but rather to enhance it. By swiftly and objectively analyzing projects based on certain quantifiable metrics, judges can allocate more time to delve into the intricacies, stories, and passion driving each team ## What it does This project is a smart tool designed for hackathons. The tool measures the similarity and originality of new ideas against similar projects, if any exist; we use web scraping and OpenAI to gather data and draw conclusions. **For hackers:** * **Idea Validation:** Before diving deep into development, participants can ascertain the uniqueness of their concept, ensuring they're genuinely breaking new ground. * **Inspiration:** By observing similar projects, hackers can draw inspiration, identifying ways to enhance or diversify their own innovations. **For judges:** * **Objective Assessment:** By inputting a project's Devpost URL, judges can swiftly gauge its novelty, benefiting from AI-generated metrics that benchmark it against historical data. * **Informed Decisions:** With insights on a project's originality at their fingertips, judges can make more balanced and data-backed evaluations, appreciating true innovation. ## How we built it **Frontend:** Developed using React JS, our interface is user-friendly, allowing for easy input of ideas or Devpost URLs. **Web Scraper:** Upon input, our web scraper dives into the content, extracting essential information that aids in generating objective metrics. **Keyword Extraction with ChatGPT:** OpenAI's ChatGPT is used to detect keywords from the Devpost project descriptions, which are used to capture the project's essence (a sketch of this step appears at the end of this post). **Project Similarity Search:** Using the extracted keywords, we query Devpost for similar projects, which provides us with a curated list based on project relevance. **Comparison & Analysis:** Each incoming project is meticulously compared with the list of similar ones. This analysis is multi-faceted, examining the number of similar projects and the depth of their similarities. **Result Compilation:** Post-analysis, we present users with an 'originality score' alongside explanations for the determined metrics, maintaining transparency. **Output Display:** All insights and metrics are neatly organized and presented on our frontend website for easy consumption. ## Challenges we ran into **Metric Prioritization:** Given the time-restricted nature of a hackathon, one of the first challenges was deciding which metrics to prioritize. Striking a balance between metrics that were both meaningful and feasible to attain was crucial. **Algorithmic Efficiency:** We struggled with concerns over time complexity, especially with potential recursive scenarios.
Optimizing our algorithms, our prompts, and our architecture was the solution. *Finding a good spot to sleep.* ## Accomplishments that we're proud of We take immense pride in developing a solution directly tailored to an environment we're deeply immersed in. Crafting a tool for hackathons while participating in one, we felt, showcases our commitment to enhancing such events. Furthermore, we not only conceptualized and executed the project, but also established a robust framework and thoughtfully designed architecture from scratch. Another accomplishment was our team's synergy. We made efforts to ensure alignment and dedicated time to collectively invest in and champion the idea, ensuring everyone was on the same page and equally excited and comfortable with the idea. This unified vision and collaboration were instrumental in bringing HackAnalyzer to life. ## What we learned We delved into the intricacies of full-stack development, gathering hands-on experience with databases, backend and frontend development, and the integration of AI. Navigating API calls and using web scraping were also key takeaways. Prompt engineering taught us to meticulously balance the trade-offs when leveraging AI, especially when juggling cost, time, and efficiency considerations. ## What's next for HackAnalyzer We aim to expand the metrics derived from the Devpost data while improving the search function's efficiency. Our longer-term objective is to transition the application to a mobile platform. By enabling students to generate a QR code, judges could swiftly access HackAnalyzer data, ensuring a more streamlined and effective evaluation process.
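A minimal sketch of the keyword-extraction and scoring flow described above, using the OpenAI Python client. The model name, prompt, and the toy scoring formula are assumptions; the real metrics are richer:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_keywords(description: str, n: int = 5) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user",
                   "content": f"Give {n} comma-separated search keywords for "
                              f"this hackathon project:\n{description}"}],
    )
    return [k.strip() for k in resp.choices[0].message.content.split(",")]

def originality_score(num_similar: int, avg_similarity: float) -> float:
    """Toy metric: fewer, less-similar matches push the score toward 100."""
    penalty = min(1.0, num_similar / 20) * avg_similarity
    return round(100 * (1 - penalty), 1)

# keywords = extract_keywords(devpost_text)   # then query Devpost with them
print(originality_score(num_similar=3, avg_similarity=0.4))  # 94.0
```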
partial
**To Access:** Download Expo on your device and then open this link: <http://bit.ly/recoverybotdemo>. Our mobile app can be publicly accessed through this link. ## Inspiration Over 22 million Americans today struggle with an alcohol or drug addiction. More than 130 people every day in the US die from drug overdoses, and this number is only expected to rise. To address this massive public health crisis, we created a solution that will reduce drug-related deaths, strengthen support systems among addicts, and increase the rate of recovery at an individual level. ## What it does Recovery Bot directly addresses the three major aspects of addiction intervention: emotional, mental, and physical needs. Through Recovery Bot, a user can request a conversation with a loved one, a mental health specialist, or the National Drug Helpline regarding their addiction. A user can also have a personalized, one-on-one conversation with a private and anonymous Chat Bot. We have trained the Chat Bot's responses to mimic the style most addiction therapists use with clients, and we have also trained the Chat Bot to provide cognitive behavioral exercises catered to a user's history and substance. Recovery Bot also monitors the heart rate of a user on a Fitbit and sends notifications to loved ones if a user's heart rate dramatically increases, a common sign of withdrawal (a sketch of this alert appears below). ## How I built it We used React Native and Xcode to build the app, and Fitbit Ionic to build the heart rate monitor. We used Microsoft Azure's sentiment analysis and language analytics to create a chat bot that can produce and respond to content in a human-like way. We used Twilio to connect users with the contacts they entered in the app and with the National Drug Helpline, and also to send messages to loved ones if a user experiences an extremely high heart rate. ## Challenges I ran into We had issues with setting up the ECG heart monitor graphic on the app home screen and working with the Fitbit. We also had to do an extensive amount of research on cognitive behavioral therapy because we wanted to make the Chat Bot's dialogue as accurate as possible. We also had some issues initially with connecting the Chat Bot to the app. ## Accomplishments that I'm proud of We are incredibly proud of our Chat Bot and our Fitbit heart monitor. These two unique features have the potential to de-stigmatize addiction and save lives. ## What I learned Fitbit Ionic, Microsoft Azure natural language processing, Twilio, Expo. We also learned more about cognitive behavioral therapy, addiction therapy, and the relapse and withdrawal process. ## What's next for Recovery Bot Creating more dialogue for the Chat Bot and making the heart monitor detect irregular heart rates (another common withdrawal symptom).
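A minimal sketch of the heart-rate alert described above: when a reading spikes past a baseline-relative threshold, text a loved one via Twilio. The phone numbers and the spike factor are placeholders:

```python
from twilio.rest import Client

client = Client("YOUR_ACCOUNT_SID", "YOUR_AUTH_TOKEN")
TWILIO_NUMBER = "+15550000000"  # placeholder sender number
SPIKE_FACTOR = 1.4              # assumed: 40% above baseline counts as a spike

def check_heart_rate(current_bpm: int, baseline_bpm: int, contact: str) -> None:
    if current_bpm >= baseline_bpm * SPIKE_FACTOR:
        client.messages.create(
            to=contact,
            from_=TWILIO_NUMBER,
            body=(f"Recovery Bot alert: heart rate is {current_bpm} bpm "
                  f"(baseline {baseline_bpm}). Your loved one may be "
                  "experiencing withdrawal; consider reaching out."),
        )

# check_heart_rate(current_bpm=118, baseline_bpm=72, contact="+15551234567")
```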
## Inspiration Nicotine addiction is an epidemic gripping the nation by the millions due to the growing popularity of "E-Cigarettes." It has affected over 50 million people in the U.S. alone, including our friends, family, and loved ones. 90% of these users started smoking before reaching 20 years old. Since the outbreak of trendy E-Cig companies like JUUL and Suorin, the number of high schoolers smoking spiked by 71% in just 2 years. With this problem heavily affecting our generation and generations to come, we propose HelpingHand, a solution that could help mitigate the detrimental effects we noticed. ## What it does HelpingHand aims to combat this problem by utilizing Fitbits to detect when drug abusers experience withdrawal symptoms throughout their rehabilitation process. HelpingHand emphasizes the importance of positive reinforcement, peer support, and community in helping those struggling to quit actually quit. Our technology detects when the user is experiencing stress, and therefore a high probability of nicotine relapse, and notifies their loved ones so they can provide a helping hand. ## How we built it Fitbit SDK for the Fitbit app and for sending REST calls to the iOS app. Flutter and Dart for the iOS app and for receiving REST calls. Express.js for the Twilio API and for communication between the Fitbit data on the phone and Firebase. Firebase as an intermediary server through which our platforms can communicate. Java with org.json to create mock JSON containing timestamps, heart rate, and number of steps taken. And lots and lots of coffee! ## Challenges we ran into Due to our unfamiliarity with the Fitbit API, an initial challenge was figuring out how to collect the intraday time-series heart rate data from the Fitbit. Cross-platform communication between the Fitbit and our iOS Flutter app proved to be rather difficult, but once we overcame that hurdle things became much easier. Our team also spent a significant amount of time researching smoking and its health effects, particularly the effect it has on heart rate and heart rate variability (a sketch of one HRV measure appears at the end of this post). Unfortunately, there is no publicly available dataset containing heart rate data for individuals attempting to quit smoking. After interviewing people who have experienced nicotine withdrawal in the past, we discovered that high levels of anxiety greatly increase a smoker's urge to relapse. It was from here that we began to think of the core aspects of HelpingHand and to start building the app. ## Accomplishments that we're proud of We're surprised at how much we all learned this weekend! There were many different APIs, frameworks, and pieces of hardware we weren't entirely comfortable with, and having the opportunity to challenge ourselves was a ton of fun. We're especially proud of the fact that we have been able to finish -- and get working -- the core, defining aspects of HelpingHand. Every one of the team members personally knows someone who has suffered or is currently suffering from nicotine addiction. *By leveraging existing technologies such as the Fitbit, we sincerely hope that HelpingHand will positively impact the substance abuse epidemic in the United States.* **However, HelpingHand's mission doesn't stop at PennApps. We're committed to extending the app to other substances such as opioids, as well as mental health, with depression-related anxiety attacks.** ## What we learned We fiddled a ton with the Fitbit SDK and learned how to make Fitbit apps!
Communication across the different platforms -- between Fitbit, our mobile app, Firebase, and Twilio -- was challenging but incredibly rewarding. In addition to the technology, a big takeaway is the immense knowledge we gained about drug abuse, rehabilitation, and recovery. From the personal stories we shared to the case studies we discovered, this was truly a great opportunity to learn through real-world application. ## What's next for HelpingHand We would like to scale our app so that people across the world can get help coping with nicotine addiction; it would also enable an expansive online support community. We know "machine learning" gets tossed around a lot, but utilizing it to minimize false positives would be super promising. Moreover, only a portion of smartwatch wearers use Fitbit, so integrating HelpingHand with more health watches such as the Apple Watch would make our app more accessible. We also spent time this weekend debating whether to tackle nicotine addiction or depression-related anxiety attacks. While we opted for nicotine addiction, we believe extending HelpingHand to depression and other abused substances will open the door for many more applications.
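A minimal sketch of one standard heart-rate-variability measure (RMSSD) computed from beat-to-beat (RR) intervals, the kind of signal discussed above as an anxiety indicator. The threshold is an assumption; as noted, no public quit-smoking dataset existed to calibrate against:

```python
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def looks_anxious(rr_intervals_ms: list[float], threshold_ms: float = 20.0) -> bool:
    """Low HRV (a small RMSSD) is commonly associated with acute stress."""
    return rmssd(rr_intervals_ms) < threshold_ms

calm = [850, 870, 840, 880, 860]   # varied intervals -> higher RMSSD
tense = [600, 602, 601, 603, 602]  # rigid intervals -> low RMSSD
print(rmssd(calm), looks_anxious(calm))    # ~29 ms, False
print(rmssd(tense), looks_anxious(tense))  # ~1.6 ms, True
```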
## Inspiration Toronto is famous for being tied for the second-longest average commute time of any city (96 minutes, both ways). People love to complain about the TTC, and many people have legitimate reasons for avoiding public transit. With our app, we hope to change this. Our aim is to change the public's perspective of transit in Toronto by creating a more engaging and connected experience. ## What it does We built an iOS app that transforms the subway experience. We display important information to subway riders, such as the ETA and the current/next station, as well as information about events and points of interest in Toronto. In addition, we allow people to connect by participating in a local chat and multiplayer games. We have small web servers running on ESP8266 micro-controllers that will be installed in TTC subway cars. These micro-controllers create a local area network (LAN) intranet and allow commuters to connect with each other on the local network using our app. The ESP8266 micro-controllers also connect to the internet when it is available and can send data to Microsoft Azure (a sketch of this bridging appears below). ## How we built it The front end of our app is built using Swift for iOS devices; however, all devices can connect to the network, and an Android app is planned for the future. The live chat section was built with JavaScript. The back end is built using C++ on the ESP8266 micro-controller, while a Python script handles the interactions with Azure. The ESP8266 micro-controller runs in both access point (AP) and station (STA) modes, and is fitted with a button that can push data to Azure. ## Challenges we ran into Getting the WebView to render properly on the iOS app was tricky. There was a good amount of tinkering with configuration due to the page being served over HTTP on a local area network. Our ESP8266 micro-controller is a very nifty device, but such a low-cost device comes with strict development rules: the RAM and flash sizes are puny, and special care needed to be taken to ensure a stable foundation. This meant only being able to use vanilla JS (no jQuery, too big) and keeping code as optimized as possible. We built the live chat room with XHR and Ajax, as opposed to a WebSocket, which would have been more ideal. ## Accomplishments that we're proud of We are proud of our UI design. We think that our app looks pretty dope! We're also happy to have integrated many different features into our project; we had to learn about communication between many different tech layers. We managed to design a live chat room that can handle multiple users at once and run it on a micro-controller with 80KiB of RAM. All the code on the micro-controller was designed to be as lightweight as possible, as we only had 500KB of total flash storage. ## What we learned We learned how to code as lightly as possible within the tight restrictions of the chip. We also learned how to get set up and deploy on Azure, as well as how to interface between our micro-controller and the cloud. ## What's next for Commutr There is a lot of additional functionality that we can add: Presto integration, geolocation, and an emergency alert system. In order to host and serve larger images, we plan to upgrade the ESP8266's measly 500KB of storage with an SD card module that can increase storage into the gigabytes. Using this, we plan to bring fully fledged WiFi connectivity to Toronto's underground railway.
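A minimal sketch of the LAN-to-cloud bridging described above: a Python script polls the ESP8266's tiny HTTP server and forwards data to Azure whenever an uplink exists. Both URLs are placeholders (192.168.4.1 is just the ESP8266's usual soft-AP address):

```python
import time
import requests

ESP8266_URL = "http://192.168.4.1/status"  # assumed endpoint on the chip
AZURE_URL = "https://example-commutr.azurewebsites.net/api/telemetry"  # placeholder

def poll_and_forward() -> None:
    data = requests.get(ESP8266_URL, timeout=2).json()  # e.g. chat/ETA payload
    try:
        requests.post(AZURE_URL, json=data, timeout=5)
    except requests.ConnectionError:
        pass  # underground: no uplink right now, retry next cycle

while True:
    try:
        poll_and_forward()
    except requests.RequestException:
        pass  # keep polling even if the micro-controller hiccups
    time.sleep(10)
```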
partial
## Team Hello and welcome to our project! We are Ben Wiebe, Erin Hacker, Iain Doran-Des Brisay, and Rachel Smith. We are all in our third year of computer engineering at Queen’s University. ## Inspiration Something our team has in common is a love of road trips. However, road trips can be difficult to coordinate, and the fun of a road trip is lost when not everyone is travelling together. As such, we wanted to create an app that will help people stay in touch while travelling and feel connected even when apart. ## What it Does The app gives users the ability to stay connected while travelling in separate cars. From the home screen, you are prompted to log in to Snapchat with your account. You then have the option to create a new trip or join an existing trip. If you create a trip, you are prompted to indicate the destination that your group will be travelling to, as well as a group name. You are then given a randomly generated six-character code of numbers and letters that you can copy and send to your friends so that they can join you (a sketch of the code generation appears at the end of this post). Once in a trip, users are taken to a screen with a map as the main display. The map displays each trip member's Bitmoji and updates with users' locations. Based on location, an arrival time is displayed, letting users give their friends updates on how far away they are from their destination. As well, users can sign into Spotify, allowing all parties in the group to contribute to a shared playlist and listen to the same songs from this playlist at the same time, keeping the road trip fun despite the distance. So next time you want to take control of the aux, you'll be taking control of all parties in your group! The software currently maps a route generated using the Google Maps API; however, the route is not yet drawn onto the map. A messaging feature would also be implemented to allow users to communicate with one another. This feature would be limited to users with passenger status to discourage drivers from texting and driving. As well, weather and traffic updates would be implemented to further aid users on road trips. ## How We Built It The team split into two sub-teams, each tackling independent tasks. Iain and Rachel took the lead on the app interface. They worked in Android Studio, coding in Java, to get the activities, buttons, and screens in sync. They integrated Snapchat's Bitmoji Kit, as well as the Google Maps APIs, to streamline the process. Ben and Erin took the lead on building the server and databases, using SQLite and Node.js. They also implemented security checks to ensure the app is not susceptible to SQL injections and to limit the accepted user inputs. The team came together as a whole to integrate all components smoothly and efficiently, as well as to test and fix errors. ## Challenges We Ran Into Several technical challenges were encountered during the creation of Konvoi. One error was in the implementation of the map on the client side. Another main issue was finding the proper dependencies and matching their versions. ## Accomplishments That We’re Proud Of First and foremost, we are proud of each other’s hard work and dedication. We started this hackathon with the mindset that we wanted to complete the app at all costs. Normally never running on less than six hours of sleep, the team pushed through on only four hours per night. The best part? The team morale. Everyone had their ups and downs, and points when we did not think that we would finish and it seemed easiest to give up.
We took turns being the support for each other and encouraging each other; from silly photos at 3am in matching onesies, to visiting the snack table…every…five…minutes… the team persevered and finished the project! On the other hand, we are proud of the app and all the potential that it has. In only 36 hours, we built a fully functional app that we can share on our next team road trip (Florida anyone??). From here, we believe that this app is marketable, especially to those 18 to 30. ## What We Learned The team collectively agrees that we learned a lot throughout this entire experience, both technical and interpersonal. The team worked one-on-one with mentors multiple times throughout the hackathon, each of them bringing a new experience to our table. We spoke with Kevin from Scotiabank, who expanded our thought process with regard to how security plays a role in every project we work on. We spoke with Mike from Ritual, who taught us about Android app integration and helped us with the app implementation. Some of us had no prior knowledge of APIs, so having a knowledgeable mentor teach us was an invaluable experience. ## What’s the Next Step for Konvoi? During the design phase, the team created a long list of features that we felt would be an asset to have. We then categorized them as mandatory (required in the Minimum Viable Product), desired (the goal of the project), nice to have (an extension of desired features), and stretch goals (interesting ideas that would be great in the future). From these lists, we were able to accomplish all mandatory and desired goals. We unfortunately did not hit any nice-to-have or stretch goals. They included: • Planned stops • Messaging between cars • Cost tracking for the group (when someone rents the car, someone else the hotel, etc.) • Roadside assistance (such as CAA connected into the app) • Entertainment (extending it to passengers playing YouTube videos, etc.) • Weather warnings and added predictions • A packing list feature
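A minimal sketch of the six-character trip-code generation mentioned above. In the real app this lives in the Node.js server with SQLite; the alphabet and retry loop are reasonable assumptions:

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits  # letters and numbers

def new_trip_code(existing_codes: set[str]) -> str:
    """Generate a random 6-character code that is not already in use."""
    while True:
        code = "".join(secrets.choice(ALPHABET) for _ in range(6))
        if code not in existing_codes:
            return code

taken = {"A1B2C3"}
print(new_trip_code(taken))  # e.g. "7KQ2ZD", ready to send to friends
```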
## Inspiration Car theft is a serious issue that has affected many people in the GTA. Car theft incidents have gone up 60% since 2021. That means thousands of cars are getting stolen PER WEEK, right out of people's driveways. This problem is affecting middle-class communities, most evidently in Markham, Ontario. This issue inspired us to create a tracking app and device that would prevent your car from being stolen, while keeping your friends' cars safe as well. ## What it does We built two components in this project: a hardware security system and an app that connects to it. In the app, you can turn the hardware system on or off by clicking lock/unlock. When on, the hardware component uses ultrasonic sensors to detect motion (a sketch of the distance check appears at the end of this post). If motion is detected, the hardware starts buzzing and connects to Twilio to immediately send an SMS message to your phone. Meanwhile, the app has many more user-friendly features, including location tracking for the car and the option to add additional cars. ## How we built it We built the front-end design with Figma. This was our first time using it, and it took some YouTube videos to get used to the software, but in the end we were happy with our builds. The hardware system incorporated an Arduino Yun that connected to Twilio's API to send SMS text messages. The Arduino also required C code for the SMS calls, LED lights, and buzzer. The hardware also included some wiring and ultrasonic sensors for detection. Finally, wanting to produce an even better product, we used CAD designs to expand upon our original hardware designs. Overall, we are extremely pleased with our final product. ## Business Aspect of SeCARity As for the business side of things, we believe that this product is easily marketable and would attract many consumers. These types of products are in high demand as they solve an issue our society is currently facing. The market for this will be big, and since the hardware parts can be bought cheaply, the product can be reasonably priced. ## Challenges we ran into We had some trouble coming up with an idea, specifically one that would make our project different from other GPS tracker devices. We also ran into the issue of certain areas of our project not functioning the way we had ideally planned, so we had to use quick problem solving to think of an alternative solution. Our project went through many iterations to come up with a final product. There were many challenges we ran into on Figma, especially regarding technical aspects; the most challenging part was the implementation of the design. Finally, hacking virtually made it difficult at times to communicate with each other, but we persisted and were able to work around this. ## Accomplishments that we're proud of We are extremely proud of the polished CAD version of the more complex and detailed car tracker. We are very proud of the app and all the designs. Furthermore, we were really happy with the hardware system and the 3D-printed casing that covers it. ## What we learned We learned how to use Figma as well as an Arduino Yun. We had never used this model of Arduino, and it was definitely something really cool. As it has Wi-Fi capabilities, it was pretty fun to play around with and build new creations on this type of board. As for Figma, we learned how to navigate the application and create designs.
## What's next for SeCARity

- Using OpenCV to add camera detection
- Adding a laser detection hardware system
- The ability to connect with local authorities
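For illustration, here is a minimal Python sketch of the motion-triggered SMS flow described under How we built it. The real system runs C on the Arduino Yun; the Twilio credentials, phone numbers, and sensor helper below are placeholders, not our production values:

```python
import random
import time
from twilio.rest import Client

# Placeholder credentials -- in the real system these live on the Yun/server side.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

ALERT_THRESHOLD_CM = 50  # motion closer than this triggers the alarm

def read_distance_cm():
    """Placeholder for the HC-SR04 ultrasonic reading on the Arduino."""
    return random.uniform(10, 200)  # simulated distance

def monitor(armed=True):
    while armed:
        if read_distance_cm() < ALERT_THRESHOLD_CM:
            # Sound the buzzer on the hardware side, then text the owner.
            client.messages.create(
                body="SeCARity alert: motion detected near your car!",
                from_="+15550000000",  # Twilio number (placeholder)
                to="+15551111111",     # owner's number (placeholder)
            )
            break
        time.sleep(0.2)
```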
## Inspiration

Seeing that the theme of HackHarvard this year is **Connecting the Dots**, we were inspired to create software to *literally* connect the dots.

## What it does

Following the dots and their numbers, our software connects them with black lines.

## What we learned

The floor is *very* cold at night at the SOCH...

## What's next for dots

Working on building an app that completes "Connect the Dots" worksheets in real time by taking a simple picture.
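As a rough sketch of the core drawing step, assuming the dots and their numbers have already been detected, connecting them with OpenCV is a sorted pass over the coordinates (the `dots` positions below are made up for illustration):

```python
import cv2
import numpy as np

# Hypothetical output of the detection stage: number -> (x, y) pixel position.
dots = {1: (50, 80), 2: (200, 60), 3: (260, 210), 4: (90, 240)}

canvas = np.full((300, 320, 3), 255, dtype=np.uint8)  # white worksheet
ordered = [dots[k] for k in sorted(dots)]

# Draw a black line from each dot to the next, following the numbering.
for a, b in zip(ordered, ordered[1:]):
    cv2.line(canvas, a, b, color=(0, 0, 0), thickness=2)

cv2.imwrite("connected.png", canvas)
```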
## Inspiration

I need to get a ray tracer going and understand how they work, especially since I need to be able to write a Monte Carlo-based renderer, and implement my own Monte Carlo-based estimators and integrators with custom sampling algorithms. I thought Unity would be a good base to start from versus OpenGL, WebGL, or Direct3D, since I am more used to Unity. I should be able to simulate most of the things I want with Unity, since I can access the above graphics libraries if necessary.

## What it does

Currently, I did not get my ray tracer fully working, and am running into some errors. However, I have 3D models imported from Unity's built-in assets, and was able to import molecules from ChemSpider and have them displaying in Unity very well.

## How I built it

## Challenges I ran into

- The Wolfram Alpha API did not authenticate me, so I couldn't use it
- PubChem and ChemSpider API access was something I wasn't used to
- Deprecated API usage documentation

## Accomplishments that I'm proud of

Actually getting a molecule imported into Unity and having it work (I can rotate it, and pinch and zoom on my phone).

## What I learned

A team is very good to have.

## What's next for Ariful's Unity Raytracer

Finish up the ray tracer, and try to see if I can make it real time. Implement an SQLite database, load up a bunch of chemical models with their .mol or .json data, and have them load onto the scene.
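As background for where this is headed, the basic Monte Carlo estimator the renderer will build on can be sketched in a few lines of Python (a generic 1D example, not Unity code):

```python
import math
import random

def mc_estimate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] with uniform sampling.

    The estimator is (b - a) * mean(f(x_i)), whose expected value is the
    true integral; the error shrinks as 1/sqrt(n).
    """
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Sanity check: the integral of sin(x) over [0, pi] is 2.
print(mc_estimate(math.sin, 0.0, math.pi))  # ~2.0
```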
## Inspiration

Our inspiration stems from the difficulty and lack of precision that certain online vision tests suffer from. Issues such as requiring a laptop and measuring distance by hand lead to a cumbersome process. Augmented reality and voice recognition allow for a streamlined process that can be accessed anywhere with an iOS app.

## What it does

The app looks for signs of colorblindness, nearsightedness, and farsightedness with Ishihara color tests and Snellen chart exams. The Snellen chart is simulated in augmented reality by placing a row of letters six meters away from the camera. Users can easily interact with the exam by submitting their answers via voice recognition rather than having to manually enter each letter in the row.

## How we built it

We built these augmented reality and voice recognition features by downloading the ARKit and KK Voice Recognition SDKs into Unity 3D. These SDKs exposed APIs for integrating these features into the exam logic. We used Unity's UI API to create the interface, and linked these scenes into a project built for iOS. This build was then exported to Xcode, which allowed us to configure the project and make it accessible via iPhone.

## Challenges we ran into

Errors resulting from complex SDK integrations made the beginning of the project difficult to debug. After this, a lot of time was spent trying to control the scale and orientation of augmented reality features in the scene in order to create a lifelike environment. The voice recognition software presented difficulties, as its API was driven by many complex callback functions, which made the logic flow difficult to follow. The main difficulty in the latter phases of the project was the inability to test features in the Unity editor. The AR and voice-recognition APIs relied upon the iOS operating system, which meant that every change in the code had to be tested through a long build and installation process.

## Accomplishments that we're proud of

With only one of the team members having experience with Unity, we are proud of constructing such a complex UI system with the Unity APIs. Also, this was the team's first exposure to voice-recognition software. We are also proud to have used what we learned to construct a cohesive product that has real-world applications.

## What we learned

We learned how to construct UI elements and link multiple scenes together in Unity. We also learned a lot about C# through manipulating voice-recognition data and working with 3D assets, all of which was new to the team.

## What's next for AR Visual Acuity Exam

Given more time, the app would be built out to send vision exam results to doctors for approval. We could also improve upon the scaling and representation of the Snellen chart.
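One detail worth making explicit is how the AR letters are sized. By the standard Snellen convention (a general optometry fact, not taken from the app's source), a 20/20 letter subtends 5 arcminutes at the test distance, which fixes the physical letter height at six meters:

```python
import math

DISTANCE_M = 6.0    # simulated chart distance in AR
ARCMIN_20_20 = 5.0  # a 20/20 Snellen letter subtends 5 arcminutes

def letter_height_m(acuity_ratio=1.0, distance_m=DISTANCE_M):
    """Letter height for a Snellen line; acuity_ratio=2.0 means the 20/40 line."""
    angle_rad = math.radians(ARCMIN_20_20 * acuity_ratio / 60.0)
    return 2.0 * distance_m * math.tan(angle_rad / 2.0)

for ratio, label in [(1.0, "20/20"), (2.0, "20/40"), (10.0, "20/200")]:
    print(f"{label}: {letter_height_m(ratio) * 1000:.1f} mm")
# The 20/20 line works out to roughly 8.7 mm tall at 6 m.
```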
## Inspiration

Our inspiration for this project was the technological and communication gap between healthcare professionals and patients: restricted access to both one's own health data and physicians, misdiagnosis due to a lack of historical information, and rising demand for distance healthcare due to the shortage of physicians in rural areas and increasing patient medical home practices. Time is of the essence in the field of medicine, and we hope to save time, energy, and money and empower self-care for both healthcare professionals and patients by automating standard vitals measurement and providing simple data visualization and a communication channel.

## What it does

eVital gathers up-to-date daily vitals data from wearable technology and mobile health sources and sends that data to our family doctors, practitioners, or caregivers so that they can monitor our health. eVital also allows for seamless communication and monitoring by letting doctors assign tasks and prescriptions and monitor them through the app.

## How we built it

We built the app on iOS using data from the HealthKit API, which leverages data from the Apple Watch and the Health app. The languages and technologies that we used to create this are MongoDB Atlas, React Native, Node.js, Azure, TensorFlow, and Python (for a bit of machine learning).

## Challenges we ran into

The challenges we ran into are the following:

1) We had difficulty narrowing down the scope of our idea due to constraints like data-privacy laws and the vast possibilities of the healthcare field.
2) Deploying using Azure.
3) Having to use a vanilla React Native installation.

## Accomplishments that we're proud of

We are very proud of the fact that we were able to bring our vision to life, even though in hindsight the scope of our project is very large. We are really happy with how much work we were able to complete given the scope and the time that we had. We are also proud that our idea is not only cool but actually solves a real-life problem that we can work on in the long term.

## What we learned

We learned how to manage time (or how to do it better next time). We learned a lot about the healthcare industry, its pain points, and where technological intervention is possible. We learned how to improve our cross-functional teamwork, since we are a team of one designer, one product manager, one back-end developer, one front-end developer, and one machine learning specialist.

## What's next for eVital

Our next steps are the following:

1) Implement real-time updates for both doctors and patients.
2) Integrate machine learning into the app for automated medical alerts.
3) Add more data visualization and data analytics.
4) Add a functional login.
5) Add functionality for different user types aside from doctors and patients (caregivers, parents, etc.).
6) Add push notifications for patients' tasks for better monitoring.
## Inspiration

Kimoyo is named after the kimoyo beads in Black Panther: beads that allow you to start a 3D video call right in the palm of your hand. Hologram communication, or "holoportation" as we put it, is not a new idea in movies. Similar scenes occur in Star Wars and in Kingsman, for example. However, holoportation is certainly an up-and-coming idea in the real world!

## What it does

In the completed version of Kimoyo, users will be able to use an HTC Vive to view the avatars of others in a video call, while simultaneously animating their own avatar through inverse kinematics (IK). Currently, Kimoyo has a prototype IK system working, and has a sample avatar and sample environment to experience!

## How I built it

Starting this project with only a basic knowledge of Unity and no other VR experience (I wasn't even sure what the HTC Vive was!), I leaned on mentors, friends, and many YouTube tutorials to learn enough about the Vive to put together a working model. So far, Kimoyo has been done almost entirely in Unity using SteamVR, VRTK, and MakeHuman assets.

## Challenges I ran into

My lack of experience was a limiting factor, and I had to spend quite a bit of time watching tutorials, debugging, and trying to solve very simple problems. That being said, the resources available saved me a lot of time, and I feel that I was able to learn enough to put together a good project in the time available. The actual planning of the project, such as deciding which hardware to use and reasoning through design problems, was also challenging, but very rewarding as well.

## Accomplishments that I'm proud of

I definitely could not have built Kimoyo alone, and I'm really glad and very thankful that I was able to learn so much from the resources all around me. There have been bugs, issues, and problems that seemed absolutely intractable, but I was able to keep going with the help of others around me!

## What's next for Kimoyo

The next step for Kimoyo is to get a complete, working version up. First, we plan to expand the hand inverse kinematics so the full upper body moves naturally. We also plan to add additional camera perspectives and settings, integrate sound, begin work with a Unity network manager to allow multiple people to join an environment, and of course build and deploy an app. After that? Future steps might include writing interfaces for the creation of custom environments (including AR?) and custom avatars, as well as developing a UI involving the Vive controllers. Kimoyo has so many possibilities!
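For readers unfamiliar with IK, the core of a two-bone solver, the kind typically used to pose an arm from a tracked controller, is just the law of cosines. This standalone 2D Python sketch illustrates the idea and is not taken from the Unity implementation:

```python
import math

def two_bone_ik(target_x, target_y, upper=0.3, lower=0.25):
    """Return (shoulder, elbow) angles in radians so the hand reaches the target.

    upper/lower are bone lengths in meters; targets beyond reach are clamped.
    """
    dist = min(math.hypot(target_x, target_y), upper + lower - 1e-6)
    # Law of cosines on the triangle with sides upper, lower, and dist
    # gives the interior elbow angle; the bend is its supplement.
    cos_elbow = (upper**2 + lower**2 - dist**2) / (2 * upper * lower)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = direction to target minus the interior correction.
    cos_inner = (upper**2 + dist**2 - lower**2) / (2 * upper * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

print(two_bone_ik(0.4, 0.2))
```

The same triangle relations extend to 3D once an elbow "pole" direction is chosen, which is roughly what full-body IK packages do under the hood.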
## Inspiration

For the last few years, I’ve had back problems, most likely caused by poor posture when using the computer. A few weeks ago, I had a back spasm, and for a few days I was unable to move most of my body without sharp pain in my back. Using the computer was painful, but I still needed to get work done, especially during midterm season. Both sitting in a chair and slouching in bed cause me to crane my neck, which is not good for my posture. As I was lying there in pain, I wished I had a way to have the computer screen right above my face, while keeping the keyboard on my lap, so I could keep my spine fully straight when using the computer. I knew I could do this with a VR headset like the Oculus Rift, but those are expensive, around $400. But what if I could use my phone as the VR headset? Since I already own it, the only cost is the VR headset holder. Google Cardboard costs around $5, and a more comfortable one that I purchased off Amazon was only $20. Simply slide the phone in, and you’re ready to display content. I tried searching for an app that would allow me to mirror my screen in a stereoscopic view on my phone, but I couldn’t find one in existence. So I made one!

## What it does

First, launch the app on your computer. It waits for the companion app to be launched on a phone connected by USB. Once it’s connected, you can start streaming your desktop to your phone, and a stereoscopic view is displayed in the companion app. Although it looks small, modern phones are pretty high-resolution, so it’s really easy to read the text on the screen. Now simply slide it into the holder and put it on your head. It's just like an extra monitor, but you get a full view and you can look in any direction without hurting your neck!

## How we built it

There were a lot of technical challenges in implementing this. In order to have a low-latency stream to the phone, I had to use a wired connection over USB, but Apple doesn’t support this natively. I used an external framework called PeerTalk, but it only allows raw TCP packets to be sent over USB from the computer to the phone, so I had to serialize each frame, deserialize it on the phone, and then display it stereoscopically, all in real time.

## Challenges we ran into

I had a lot of trouble with Objective-C memory safety, especially during deserialization when the data was received by the phone. I was obtaining the data on the phone but was unable to turn it into an actual image for around 24 hours due to numerous bugs. Shoutout to Bhushan, a mentor who helped me debug some annoying memory issues!

## Accomplishments that we're proud of

I'm super proud that it actually works! In fact, I made one of the final commits using the headset instead of my actual computer screen. It's practical and it works well!

## What we learned

Learned a ton about graphics serialization and Objective-C memory safety.

## What's next for LaptopVR

I want to add the optional ability to track movements using the gyroscope, so you can have a slightly more zoomed-in version of the screen and look up/down/left/right to see the extremes of the screen. This would make for a more immersive experience!
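Since PeerTalk only moves raw TCP bytes, each frame must be framed by the sender and reassembled by the receiver. A common approach, shown here as a hedged Python sketch rather than the actual Objective-C code, is a length-prefixed protocol:

```python
import socket
import struct

def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    # Prefix each frame with its length as a 4-byte big-endian integer,
    # so the receiver knows where one frame ends and the next begins.
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```

The length prefix is what makes deserialization safe: the receiver never hands a partial buffer to the image decoder, which is exactly the class of bug that cost 24 hours here.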
## Inspiration

We wanted to create a creative HoloLens experience that truly transformed your space and motivated the user to interact in fun, innovative (and silly!) ways. Re-imagining simple classics seemed like a good place to start, and our redesign of Snake turned out to be more engaging than it had any right to be (:

## What it does

Upon starting the game, the user is prompted to scan their space. Using the HoloLens's Spatial Mapping sensors and some scripts that we wrote, we were able to get a full understanding of the user's space and automatically create a custom play area specific to their surroundings by analyzing the normals of the spatial mesh with raycasts and calculating which areas of the room are empty.

After scanning and generating the playspace, the user can play the game. Users must use their head to collect CyberCubes™ while at the same time avoiding the ever-growing CyberTail™ that follows them. Other special pickups are also available, like the CyberMotivationalVortex™, which attracts all of the surrounding cubes into a single point in space if you say a motivational quote (and explodes, transforming the colors of the space completely), and the CyberGravityPull™, which can help you get out of sticky situations by dropping all of the CyberTail™ spheres on the ground for a few seconds.

The game also has a number of easter eggs and voice commands that can be used to enhance your CyberExperience™. Try saying "Samuel Jackson", for instance. Bonus points for whoever discovers the others.

## How I built it

Unity, HoloLens, C#, coding, caffeine, sheer will.

## Challenges I ran into

Discovering meaningful interactions for the HoloLens is always a challenge given its limited input. Because the documentation on HoloLens development (especially with things like SpatialMapping) is so limited, we also had to develop a lot of our own technology to get the desired final result.

## Accomplishments that I'm proud of

It looks polished and it's very fun to play. We also got to design a lot of our own sound effects, assets, easter eggs, and interactions. The gameplay loop is simple but has depth.

## What I learned

Mixed Reality is TheFuture™, a bunch of Unity and HoloLens development tricks, sound design discoveries, and what makes a HoloLens interaction fun.

## What's next for Cyber Snake

More polish, a story mode, and a release in the Microsoft Store for others to enjoy.
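As a simplified stand-in for the play-area logic (the real version raycasts against Unity's spatial mesh; the triangle data here is hypothetical), classifying surfaces by their normals looks like this:

```python
import numpy as np

UP = np.array([0.0, 1.0, 0.0])
FLAT_THRESHOLD = np.cos(np.radians(15))  # within 15 degrees of horizontal

def find_floor_cells(normals: np.ndarray, centers: np.ndarray, floor_y: float):
    """Return centroids of mesh triangles that are flat and near floor height.

    normals: (N, 3) unit normals; centers: (N, 3) triangle centroids.
    """
    flat = normals @ UP > FLAT_THRESHOLD              # upward-facing surfaces
    near_floor = np.abs(centers[:, 1] - floor_y) < 0.05
    return centers[flat & near_floor]

# Toy data: one upward-facing floor triangle, one wall triangle.
normals = np.array([[0, 1, 0], [1, 0, 0]], dtype=float)
centers = np.array([[1.0, 0.0, 2.0], [0.0, 1.2, 2.0]])
print(find_floor_cells(normals, centers, floor_y=0.0))
```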
## Echocare

**TABLE NUMBER 100**

## Inspiration

In the U.S. and S.F., homelessness continues to be a persistent issue, affecting more than 650,000 individuals in 2023 alone. Despite the lack of stable housing, many people experiencing homelessness own mobile phones. Research indicates that as many as 94% of homeless individuals have access to a cellphone, with around 70% having smartphones. This is crucial because phones act as a lifeline for communication, health services, and access to support networks. Phones help users connect with essential services like medical care, job opportunities, and safety alerts. Unfortunately, barriers such as maintaining a charge or affording phone plans remain common issues for this population.

In addition, the U.S. faces a huge problem of food wastage: 150,000 tonnes of food are wasted each year JUST in S.F. With Echocare, we aim to kill these two problems with one stone. We leverage SOTA artificial intelligence to provide essential services to homeless people and people suffering from food insecurity in a more accessible and intuitive manner. Additionally, we allow restaurants to donate and keep track of leftover food using an inventory tracker, which can be used to feed homeless people.

## What it does

Echocare is an intuitive platform designed to assist users in locating and connecting with nearby care services, such as medical centers, pharmacies, or home care providers. The application uses voice input, interactive maps, and real-time search functionality to help users find the services they need quickly and seamlessly. With user-friendly navigation and smart recommendations, Echocare empowers people to get help with minimal effort. In addition, it offers a separate platform for restaurants to keep track of leftover food and donate it at the end of each business day to homeless people.

## How we built it

* Next.js for server-side rendering and static generation.
* React to build interactive and modular UI components for the front end.
* TypeScript for type safety and a robust backend.
* Tailwind CSS for rapid, utility-first styling of the application.
* Framer Motion for smooth and declarative animations.
* GSAP (GreenSock Animation Platform) for high-performance animations (used for Echo).
* Three.js for creating 3D graphics in the browser, adding depth and interactivity (Echo has 3D graphics built into it).
* Vercel Postgres to communicate with Neon DB (serverless Postgres).
* Neon Database, a serverless PostgreSQL option, for robust backend storage of the restaurants' donated food items. It uses minimal compute and is good for developing low-latency applications.
* Clerk to implement secure and seamless user authentication for managers of the restaurants who want to donate food.
* Google Maps API to power the mapping functionality and location services (this was embedded with Echo to provide precise directions).
* Google Places API for autocomplete suggestions and retrieving detailed place information.
* Ant Design and Aceternity UI for building the forms on the food donation page and giving the landing page a clean look (inspired by the multicolored and vibrant lights of S.F. at night).
* Axios for easily making API requests to external services (Google Maps and Places APIs).
* Lucide React for all of the icons used in the application.
* Vapi.ai for creating the one and only assistant.
* Google Gemini Flash 1.5 to potentially assist with generating user-facing responses.
* Groq 3.1 70b versatile (fine-tuned) to assist Vapi.ai with insights.
* Cartesia to provide hyper-realistic service for users.
* Deepgram for encoding.

## Challenges we ran into

We found it very difficult to transcribe the conversations between the user and the Vapi.ai Echo agent; Rishit had to code for 16 hours straight to get it working. We also found the Google Gemini integration to be hard because the multimodal functionality wasn't easy to implement, especially with TypeScript, which isn't as well documented as Python. Finally, stitching the backend and frontend together on the food donation page also took a lot of time.

## Accomplishments that we're proud of

* Getting 4 sponsored tools seamlessly integrated into the application
* Using a grand total of 18 tools to build Echocare from the ground up
* Finishing the entire product 10 hours before the deadline
* Creating a product which can truly be used to help burdened communities thrive
* Having a lot of fun and enjoying the process of building Echocare!

## What we learned

TypeScript - We had never used it to build a project before, and through this hackathon we gained an understanding of how to use it to build cohesive applications.

Vapi.ai - Using the dashboard and integrating custom APIs into Vapi was a tricky operation, but we toughed it out and made it work.

NeonDB - We used this DB and learned basic SQL queries to insert and fetch data, which we used to set up the donation page.

Google Maps/Places APIs - Although they were relatively easy to implement, some time had to be spent initializing them.

Groq, Cartesia - They were definitely tricky to implement with Vapi's dashboard.

Gemini integration with TS - Although it was < 100 lines of code, we kept running into errors. It turned out that Gemini Pro Vision was deprecated and we had to use Gemini 1.5 Flash instead ;(

## What's next for Echocare

We would like to train our Vapi RAG with more datasets from other cities in the United States. We would also like to improve the responsive design of our website.
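For reference, the restaurant lookup boils down to the Places API's nearby-search endpoint. A minimal Python sketch of that call (the production app makes it from the Express backend via Axios; the key and coordinates below are placeholders):

```python
import requests

PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def nearby_restaurants(lat: float, lng: float, radius_m: int = 1500):
    """Return names and addresses of restaurants near a point."""
    params = {
        "location": f"{lat},{lng}",
        "radius": radius_m,
        "type": "restaurant",
        "key": "YOUR_API_KEY",  # placeholder
    }
    resp = requests.get(PLACES_URL, params=params, timeout=10)
    resp.raise_for_status()
    return [
        (place["name"], place.get("vicinity", ""))
        for place in resp.json().get("results", [])
    ]

# Example: restaurants around downtown San Francisco.
for name, addr in nearby_restaurants(37.7749, -122.4194):
    print(name, "-", addr)
```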
## Inspiration

Homelessness is a rampant problem in the US, with over half a million people facing homelessness daily. We want to empower these people to have access to relevant information. Our goal is to pioneer technology that prioritizes the needs of displaced persons and tailor software to uniquely address the specific challenges of homelessness.

## What it does

Most homeless people have basic cell phones with only calling and SMS capabilities. Using kiva, they can use their cell phones to leverage technologies previously accessible only with the internet. Users are able to text the number attached to kiva and interact with our intelligent chatbot to learn about nearby shelters and obtain directions to head to a shelter of their choice.

## How we built it

We used freely available APIs such as Twilio and Google Cloud to create the beta version of kiva. We search for nearby shelters using the Google Maps API and communicate formatted results to the user’s cell phone via Twilio’s SMS API.

## Challenges we ran into

The biggest challenge was figuring out how to best utilize technology to help those with limited resources. It would be unreasonable to expect our target demographic to own smartphones and be able to download apps off the app market like many other customers would. Rather, we focused on providing a service that would maximize accessibility. Consequently, kiva is an SMS chatbot, as this allows the most users to access our product at the lowest cost.

## Accomplishments that we're proud of

We succeeded in creating a minimum viable product that produced results! Our current model allows homeless people to find a list of the nearest shelters and obtain walking directions. We built the infrastructure of kiva to be flexible enough to include additional capabilities (i.e., weather and emergency alerts), thus providing a service that can be easily leveraged and expanded in the future.

## What we learned

We learned that intimately understanding the particular needs of your target demographic is important when hacking for social good. Often, it’s easier to create a product and then find people who it might apply to, but this is less realistic in philanthropic endeavors. Most applications these days tend to be web-focused, but our product is better targeted to people facing homelessness by using SMS capabilities.

## What's next for kiva

Currently, kiva provides information on homeless shelters. We hope to refine kiva to let users further customize their requests. In the future, kiva should be able to provide information about other basic needs such as food and clothing. Additionally, we would love to see kiva become a crowdsourced information platform where people could mark certain places as shelters to improve our database and build a culture of alleviating homelessness.
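The heart of kiva is the SMS webhook loop: Twilio forwards each incoming text to the server, which replies with formatted shelter results. A minimal sketch using Flask (the actual server framework isn't specified above, and the shelter lookup is a stand-in for the real Google Maps query):

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def find_shelters(query: str) -> str:
    """Stand-in for the Google Maps API lookup used in the real system."""
    return "1. Hope Shelter, 2 km\n2. Safe Haven, 3 km"

@app.route("/sms", methods=["POST"])
def sms_reply():
    incoming = request.form.get("Body", "").strip()  # the user's text message
    reply = MessagingResponse()
    reply.message("Nearby shelters:\n" + find_shelters(incoming))
    return str(reply)  # TwiML that Twilio turns into an SMS reply

if __name__ == "__main__":
    app.run(port=5000)
```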
## Inspiration

We took inspiration from the multitude of apps that help connect those who are missing to those who are searching for their loved ones, and others affected by natural disasters, especially flooding. We wanted to design a product that not only helped to locate those individuals, but also to rescue those in danger. Through the combination of these services, the process of recovering after natural disasters is streamlined and much more efficient than other solutions.

## What it does

Spotted uses a drone to capture and send real-time images of flooded areas. Spotted then extracts human shapes from these images, maps the location of each individual onto a map, and assigns each victim a volunteer so that everyone in need of help is covered. Volunteers can see the locations of victims in real time through the mobile or web app and are provided with the best routes for the recovery effort.

## How we built it

The backbone of both our mobile and web applications is HERE.com’s intelligent mapping API. The two APIs that we used were the Interactive Maps API, to provide a forward-facing client for volunteers to get an understanding of how an area is affected by flooding, and the Routing API, to connect volunteers to those in need via the most efficient route possible. We also used machine learning and image recognition to identify victims and where they are in relation to the drone. The app was written in Java, and the mobile site was written with HTML, JS, and CSS.

## Challenges we ran into

All of us had little experience with web development, so we had to learn a lot, especially because we wanted to implement a web app that was similar to the mobile app.

## Accomplishments that we're proud of

We are most proud that our app collects and stores data that is available for flood research and provides real-time assignments to volunteers to ensure everyone is covered in the shortest time.

## What we learned

We learned a great deal about integrating different technologies, including Xcode. We also learned a lot about web development and the intertwining of different languages and technologies like HTML, CSS, and JavaScript.

## What's next for Spotted

We think the future of Spotted is going to be bright! Certainly, it is tremendously helpful for the users, and at the same time, the program improves its own functionality as the available data increases. We might implement a machine learning feature to better utilize the data and predict the situation in target areas. What's more, we believe the accuracy of this prediction function will grow exponentially as the data size increases. Another important feature is that we will be developing optimization algorithms to provide a real-time, most-efficient assignment solution for the volunteers. Other future developments might include involvement with specific charity and research groups, and work on specific locations outside the US.
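The assign-each-victim-a-volunteer step is a classic assignment problem. One optimal way to solve it, sketched here with SciPy's Hungarian-algorithm solver rather than Spotted's actual code, is to minimize total travel distance over a cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical coordinates (e.g., meters in a local map projection).
victims = np.array([[0, 0], [50, 10], [20, 80]])
volunteers = np.array([[5, 5], [60, 0], [25, 70]])

# cost[i, j] = distance from victim i to volunteer j.
cost = np.linalg.norm(victims[:, None, :] - volunteers[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)  # minimizes total travel distance
for v, w in zip(rows, cols):
    print(f"victim {v} -> volunteer {w} ({cost[v, w]:.0f} m)")
```

In production the straight-line costs would be replaced by actual route times from the HERE Routing API, but the assignment structure stays the same.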
## Inspiration

Having used OpenCV for several projects, mostly machine learning detections, one of the things we had never done was anything AR-related. We thought one possible use case for AR codes was showing IDs, so that doctors, officers, and users would have quick and easy access to their information. We also added a verification step for privacy protection in the form of Twilio.

## What it does

A user uploads their ID into the backend via a frontend user prompt. The image is stored alongside their name, and an ARUCO code is generated to pair with the image. The user can point the code towards the screen, where they will be sent a verification text. If not verified, the program will not screen the ARUCO code. If verified, the camera takes the ARUCO code and transposes the ID/image onto it via a homography transformation. There is no limit to how many it can display, or at what orientation or size.

## How we built it

Our first task was to tackle the hard thing first: the backend and the actual AR processing. We played around in Python with OpenCV to create an app that takes still-frame images and checks for any matching ARUCO codes. Once that was done, we decided to implement it into a web framework so it is accessible by any device. We implemented a model for IDs in Django alongside a camera module and a user upload module. Certain things needed to be changed, as OpenCV did not function ideally in Django, so we had to come up with a workaround in our detection function. Lastly, after it was implemented, we decided to host on AWS/Heroku for a demo.

## Challenges we ran into

#### aiortc and the browser camera

So far, when we host the project on AWS, the camera does not work; it only does so locally. We found out that we need to implement aiortc in order to gain access to the browser camera so we can relay it to the backend.

#### Matching

One of the main challenges we ran into was matching IDs to their respective generated ARUCO codes. A possible solution we came up with was renaming the user-uploaded files to match their ARUCO-generated ones. Additionally, this helped deal with duplicates, as we implemented a check for those.

#### Out of bounds

The way the detection matches an ARUCO code with an ID is via a list. One of the earlier problems was ARUCO codes being misinterpreted as different ones, causing the lookup to go out of bounds because it was accessing an ID/image that didn't exist. We can solve this by changing the list implementation into a hash/dict, or by adding metadata/tupled info into the image. For now, we made a simple logical check: if the index is in bounds we use it, and if not we just display the base/default ID.

#### Standalone to Django

Implementing OpenCV in Django was quite the task. It required implementing multithreading and constant updates to the video stream, unlike a standalone Python app, and thus also results in a lower framerate. We are unsure of what fixes we could add, but this is one of our bigger challenges so far. Additionally, modifying camera data would result in nothing happening, or in completely breaking the server with no error messages.

## Accomplishments that we're proud of

We're proud that it manages to work: the user upload, verification, and camera, each to a varying degree. It is still somewhat buggy and not up to standard; there's a lot of room for improvements and modifications to increase speed, user accessibility, security, and overall design. However, this is a good start that we're proud of.
## What we learned

We learned how to utilize OpenCV and web frameworks to create an ARUCO detector and image transposer. Additionally, we learned how to implement it in web services rather than a standalone Python application, with things like multithreading. We also learned how to host our code on Heroku/AWS Lightsail.

## What's next for AR Info

* Better security implementation of Twilio
* UI/UX design
* Modifications to the camera for a better framerate
* Multiple image/object transposes on ARUCO
* Calibration
* Implementing aiortc
* Auto-angle adjustment on images
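For reference, the detect-and-transpose pipeline described above can be reproduced in a few lines of OpenCV. This is a hedged sketch using the pre-4.7 `cv2.aruco` module names (newer versions moved to an `ArucoDetector` class), not our Django code; note the dict lookup, which sidesteps the out-of-bounds issue mentioned earlier:

```python
import cv2
import numpy as np

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def transpose_ids(frame: np.ndarray, id_images: dict) -> np.ndarray:
    """Warp each user's ID image onto its matching ARUCO marker in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is None:
        return frame
    for quad, marker_id in zip(corners, ids.flatten()):
        id_img = id_images.get(int(marker_id))  # dict avoids index errors
        if id_img is None:
            continue
        h, w = id_img.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(src, quad.reshape(4, 2).astype(np.float32))
        warped = cv2.warpPerspective(id_img, M, (frame.shape[1], frame.shape[0]))
        mask = warped.sum(axis=2) > 0
        frame[mask] = warped[mask]  # overlay the warped ID onto the marker
    return frame
```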
## Inspiration

There are millions of people around the world who have a physical or learning disability which makes creating visual presentations extremely difficult. They may be visually impaired, suffer from ADHD, or have disabilities like Parkinson's. For these people, being unable to create presentations isn’t just a hassle. It’s a barrier to learning, a reason for feeling left out, or a career disadvantage in the workplace. That’s why we created **Pitch.ai.**

## What it does

Pitch.ai is a web app which creates visual presentations for you as you present. Once you open the web app, just start talking! Pitch.ai will listen to what you say and, in real time, generate a slide deck based on the content of your speech, just as if you had a slideshow prepared in advance.

## How we built it

We used a **React** client combined with a **Flask** server to make our API calls. To continuously listen for audio to convert to text, we used a React library called “react-speech-recognition”. Then, we designed an algorithm to detect pauses in the speech in order to separate sentences, which would be sent to the Flask server. The Flask server would then use multithreading in order to make several API calls simultaneously. Firstly, the **MonkeyLearn** API is used to find the most relevant keyword in the sentence. Then, the keyword is sent to **SerpAPI** in order to find an image to add to the presentation. At the same time, an API call is sent to OpenAI’s GPT-3 in order to generate a caption to put on the slide. The caption, keyword, and image of a single slide are all combined into an object to be sent back to the client.

## Challenges we ran into

* Learning how to make dynamic websites
* Optimizing audio processing time
* Increasing the efficiency of the server

## Accomplishments that we're proud of

* Made an aesthetic user interface
* Distributed work efficiently
* Good organization and integration of many APIs

## What we learned

* Multithreading
* How to use continuous audio input
* How to use React hooks, animations, Figma

## What's next for Pitch.ai

* Faster and more accurate picture, keyword, and caption generation
* "Presentation mode"
* Integrate a database to save your generated presentations
* Customizable templates for slide structure, color, etc.
* Build our own web-scraping API to find images
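The parallel fan-out on the Flask side can be expressed compactly with a thread pool. In this hedged sketch the fetcher functions are stand-ins for the MonkeyLearn, SerpAPI, and GPT-3 calls:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_keyword(sentence: str) -> str:
    """Stand-in for the MonkeyLearn keyword-extraction call."""
    return max(sentence.split(), key=len)

def fetch_image(keyword: str) -> str:
    """Stand-in for the SerpAPI image search; returns an image URL."""
    return f"https://example.com/images/{keyword}.jpg"

def generate_caption(sentence: str) -> str:
    """Stand-in for the GPT-3 caption call."""
    return sentence[:60]

def build_slide(sentence: str) -> dict:
    with ThreadPoolExecutor() as pool:
        # Keyword and caption requests run concurrently; the image search
        # starts as soon as the keyword is known.
        keyword = pool.submit(extract_keyword, sentence)
        caption = pool.submit(generate_caption, sentence)
        kw = keyword.result()
        image = pool.submit(fetch_image, kw)
        return {"keyword": kw, "caption": caption.result(), "image": image.result()}

print(build_slide("Augmented reality is transforming vision testing"))
```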
## Inspiration

With recent booms in AI development, deepfakes have been getting more and more convincing. Social media is an ideal medium for deepfakes to spread, and can be used to seed misinformation and promote scams. Our goal was to create a system that could be implemented in image/video-based social media platforms like Instagram, TikTok, Reddit, etc. to warn users about potential deepfake content.

## What it does

Our model takes in a video as input and analyzes frames to determine instances of that video appearing on the internet. It then outputs several factors that help determine if a deepfake warning to a user is necessary: URLs corresponding to websites where the video has appeared, dates of publication scraped from websites, previous deepfake IDs (i.e., if a website already mentions the word "deepfake"), and similarity scores between the content of the video being examined and previous occurrences of the deepfake. A warning should be sent to the user if content similarity scores between the video and very similar videos are low (indicating the video has been tampered with), or if the video has been previously IDed as a deepfake by a different website.

## How we built it

Our project was split into several main steps:

**a) Finding web instances of videos similar to the video under investigation**

We used Google Cloud's Cloud Vision API to detect web entities that have content matching the video being examined (including fully matching and partially matching images).

**b) Scraping date information from potential website matches**

We utilized the htmldate Python library to extract original and updated publication dates from website matches.

**c) Determining if a website has already identified the video as a deepfake**

We again used Google Cloud's Cloud Vision API to determine if the flags "deepfake" or "fake" appeared in website URLs. If they did, we immediately flagged the video as a possible deepfake.

**d) Calculating similarity scores between the contents of the examined video and similar videos**

If no deepfake flags have been raised by other websites (step c), we use Google Cloud's Speech-to-Text API to acquire transcripts of the original video and the similar videos found in step a). We then compare pairs of transcripts using a cosine similarity algorithm written in Python to determine how similar the contents of two texts are (common, low-meaning words like "the", "and", "or", etc. are ignored when calculating similarity).

## Challenges we ran into

Neither of us had much experience using Google Cloud, which ended up being a major tool in our project. It took us a while to figure out all the authentication and billing procedures, but it was an extremely useful framework for us once we got it running. We also found that it was difficult to find a deepfake online that wasn't already IDed as one (to test out our transcript similarity algorithm), so our solution was to create our own amusing deepfakes and test the algorithm on those.

## Accomplishments that we're proud of

We're proud that our project mitigates an important problem for online communities. While most current deepfake detection uses AI, malicious AI can simply keep improving to counter detection mechanisms. Our project takes an innovative approach that avoids this problem by instead tracking and analyzing the online history of a video (something that the creators of a deepfake video have no control over).
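Concretely, the transcript comparison in step d) amounts to cosine similarity over word-count vectors with stopwords removed. A small self-contained sketch (the stopword list is abbreviated, and this is an illustration rather than our exact code):

```python
import math
from collections import Counter

STOPWORDS = {"the", "and", "or", "a", "an", "of", "to", "is", "in"}  # abbreviated

def vectorize(text: str) -> Counter:
    """Bag-of-words counts with low-meaning words dropped."""
    return Counter(w for w in text.lower().split() if w not in STOPWORDS)

def cosine_similarity(a: str, b: str) -> float:
    va, vb = vectorize(a), vectorize(b)
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

original = "The senator announced a new climate policy today"
candidate = "The senator announced a new tax policy today"
print(cosine_similarity(original, candidate))  # < 1.0 hints at tampering
```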
## What we learned

While working on this project, we gained experience with a wide variety of tools that we'd never been exposed to before. From Google Cloud to fascinating text analysis algorithms, we got to work with existing frameworks as well as write our own code. We also learned the importance of breaking down a big project into smaller, manageable parts. Once we had organized our workflow into reachable goals, we found that we could delegate tasks to each other and make rapid progress.

## What's next for Deepfake ID

Since our project is (ideally) meant to be integrated with an existing social media app, it's currently a little back-end heavy. We hope to expand this project and get social media platforms on board with using our deepfake detection method to alert their users when a potential deepfake video begins to spread. Since our method of detection has distinct advantages and disadvantages compared to existing AI deepfake detection, the two methods can be combined to create an even more powerful deepfake detection mechanism.

Reach us on Discord: **spica19**
## Inspiration

The prevalence of fake news has been on the rise. It has led to the public's inability to receive accurate information and has placed a heightened amount of distrust in the media. With it being easier than ever to propagate and spread information, the line between fact and fiction has become blurred in the public sphere. Concerned by this situation, we built a mobile application to detect fake news on websites and alert people when information is found to be false or unreliable, thereby hopefully bringing about a more informed electorate.

## What it does

enlightN is a mobile browser with built-in functionality to detect fake news and alert users when the information they are reading - on Facebook or Twitter - is either sourced from a website known for disseminating fake news or known to be false itself. The browser highlights which information has been found to be false and provides the user sources to learn more about that particular article.

## How we built it

The **front end** is built using Swift and Xcode. The app uses Alamofire for HTTP networking, and WebKit for the browser functionality. Alamofire is the only external dependency used by the front end; other than that, it's all Apple's SDKs. The webpage HTML is parsed and sent to the backend, and the response is parsed on the front end.

The **back end** is built using Python, Google App Engine, Microsoft Cognitive Services, HTML, JavaScript, CSS, BeautifulSoup, the Hoaxy API, and the Snopes archives. After receiving the whole HTML text from the front end, we scrape text from Facebook and Twitter posts with the use of the BeautifulSoup module in Python. Using the keywords of the texts extracted by the Microsoft Key Phrase Extraction API (which uses Microsoft Office's natural language processing toolkit) as an anchor, we extract relevant information (tags for latent fake news) from both Snopes.com's database and the results coming back from the Hoaxy API, and send this information back to the front end.

The **database** contains about 950 websites that are known as unreliable (e.g., fake/conspiracy/satire) news sources and about 15 well-known trustworthy news source websites.

## Challenges we ran into

One challenge we ran into was trying to implement the real-time text search in order to cross-reference article headlines and tweets with fact-checking websites. Our initial idea was to utilize Google's ClaimReview feature on their public search, but Google does not have an API for their public search feature, and after talking to some of the Google representatives, automating this with a script would not have been feasible. We then decided to implement this feature by utilizing Snopes. Snopes does not have an API to access their article information and loads their webpage dynamically, but we were able to isolate the API call that Snopes uses to provide their website with results from an article query. The difficult part of recreating this API call was figuring out the proper way to encode the POST payload and request header information before the HTTP function call.

## Accomplishments that we're proud of

We were able to successfully detect false information from any site, with special handling for Facebook and Twitter. The app works and makes people aware of disinformation in real time!

## What we learned

We applied APIs that were completely new to us - the Snopes API, the Hoaxy API, and the Key Phrase Extraction API - in our project within the past 36 hours.
## What's next for enlightN

Building a fully functional browser and an app which detects false information in any third-party app. We also plan to publicize our API as it matures.
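As a simplified illustration of the scraping step described above (real Facebook and Twitter markup is messier and changes often; the CSS class below is hypothetical), the BeautifulSoup extraction looks like this:

```python
from bs4 import BeautifulSoup

html = """
<div class="post-text">Breaking: miracle cure discovered!</div>
<div class="post-text">Local team wins championship.</div>
"""

soup = BeautifulSoup(html, "html.parser")

# "post-text" is a stand-in class name; the real selectors target
# Facebook's and Twitter's actual post containers.
posts = [div.get_text(strip=True) for div in soup.find_all("div", class_="post-text")]
for text in posts:
    print(text)  # each post body is then sent to keyword extraction
```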
## Inspiration

The intention behind NintAudio is, aside from recreating the feel of old arcade games, to offer new ways of exploring gaming. It was actually Xavier who came up with the idea, being interested in the accessibility of browsers; the question that naturally came next was "but what about video games?". This project is aimed at:

* Visually impaired people, so they can share our delight when playing video games
* Developers, to raise their awareness of the condition of visually impaired people in the digital era
* Everybody, to realize how much we rely on sight and how difficult it is to understand an environment with only sound.

## What it does

NintAudio is a collection of retro-inspired games relying entirely on sounds: Pong, Atari Breakout, and Whac-a-Mole.

## Challenges

Developing the logic and sound design with very few visual cues was quite the task. We had to work around our habitual use of sight for even very simple UX design. Rust is very new, both to the community and to our team: although documentation is available, the use of this language and many features still under development was quite ambitious. Implementation of all libraries isn't guaranteed across operating systems, and there are bugs left to be resolved with some of them (particularly the .mp3 format and SineWave stuttering with rodio). We're expecting to send patches upstream in the near future to fix the bugs with rodio.

## Why Rust?

Real-time audio games meant we needed absolutely minimal latency. This meant a garbage collector was definitely a no-go, as even a 50ms pause for the garbage collector to perform its duty would destroy the gameplay. Rust is a bare-metal programming language, so it was a perfect fit, and its memory-safety guarantees meant that we were safe from the toughest bugs, which was even more important given the experience level of the team members. Three out of four teammates were new to coding. We felt like the advanced compiler would give us greater confidence in the code we produced, even though we were dealing with low-level bindings.

## What's next for NintAudio

Unfortunately, as of now, the player must type the name of the game into a terminal in order to play; adding a voice user interface would likely make the experience more immersive and overall more user-friendly. More games are also to be added.

## Support

**MacOS** No gamepad support.

**Windows with ANSI terms** No support.

*Tested on: Linux (Arch & Ubuntu), Windows 10. Should work on: MacOS, Windows XP & up, Linuxes with ALSA, BSDs, DragonFly.*
# 💡 Inspiration

Meeting new people is an excellent way to broaden your horizons and discover different cuisines. Dining with others is a wonderful opportunity to build connections and form new friendships. In fact, eating alone is one of the primary causes of unhappiness, second only to mental illness and financial problems. Therefore, it is essential to make an effort to find someone to share meals with. By trying new cuisines with new people and exploring new neighbourhoods, you can make new connections while enjoying delicious food.

# ❓ What it does

PlateMate is a unique networking platform that connects individuals in close proximity and sets up an impromptu meeting over some great food! It enables individuals to explore new cuisines and meet new people by using Cohere to process human-written text and discern an individual’s preferences, interests, and other attributes. This data is then aggregated to optimize a matching algorithm that pairs users. Along with the matchmaking feature, PlateMate utilizes Google APIs to highlight nearby restaurant options that fit into users’ budgets. The app’s recommendations consider a user’s budget to help regulate spending habits and make managing finances easier. PlateMate takes many factors into account to ensure that users have an enjoyable and reliable experience on the platform.

# 🚀 Exploration

PlateMate provides opportunities for exploration by expanding social circles with interesting individuals with different life experiences and backgrounds. You are matched to other nearby users with similar cuisine preferences but differing interests. Restaurant suggestions are also provided based on your characteristics and your match’s characteristics. This provides invaluable opportunities to explore new cultures and identities. As the world emerges from years of lockdown and the COVID-19 pandemic, it is more important than ever to find ways to reconnect with others and explore different perspectives.

# 🧰 How we built it

**React, Tailwind CSS, Figma**: The client side of our web app was built using React and styled with Tailwind CSS, based on a high-fidelity mockup created in Figma.

**Express.js**: The backend server was made using Express.js and managed routes that allowed our frontend to call third-party APIs and obtain results from Cohere’s generative models.

**Cohere**: User-specific keywords were extracted from brief user bios using Cohere’s generative LLMs. Additionally, after two users were matched, Cohere was used to generate a brief justification of why the two users would be a good match and provide opportunities for exploration.

**Google Maps Platform APIs**: The Google Maps API was used to display a live and dynamic map on the homepage and provide autocomplete search suggestions. The Google Places API obtained lists of nearby restaurants, as well as specific information about restaurants that users were matched to.

**Firebase**: User data for both authentication and matching purposes, such as preferred cuisines and interests, was stored in a Cloud Firestore database.
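The keyword-extraction step looks roughly like the Python sketch below using Cohere's generate endpoint (the app actually calls Cohere from Express.js, and the prompt wording, model settings, and key are illustrative):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def extract_attributes(bio: str) -> str:
    """Pull cuisine preferences and interests out of a free-text bio."""
    prompt = (
        "Extract comma-separated keywords describing this person's "
        f"cuisine preferences and interests:\n\nBio: {bio}\n\nKeywords:"
    )
    response = co.generate(prompt=prompt, max_tokens=40, temperature=0.3)
    return response.generations[0].text.strip()

bio = "Grad student who loves hiking, jazz, and trying Ethiopian and Thai food."
print(extract_attributes(bio))  # e.g. "hiking, jazz, Ethiopian food, Thai food"
```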
# 🤔 Challenges we ran into

* Obtaining the desired output and formatting from Cohere with longer and more complicated prompts
* Lack of current, updated libraries for the Google Maps API
* Creating functioning Express.js routes that connected to our React client
* Maintaining a cohesive and productive team environment while sleep-deprived

# 🏆 Accomplishments that we're proud of

* This was the first hackathon for two of our team members
* Creating a fully functioning full-stack web app with several new technologies we had never touched before, including Cohere and Google Maps Platform APIs
* Extracting keywords and generating JSON objects with a high degree of accuracy using Cohere

# 🧠 What we learned

* Prompt engineering, keyword extraction, and text generation in Cohere
* Server and route management in Express.js
* Design and UI development with Tailwind CSS
* Dynamic map display and search autocomplete with Google Maps Platform APIs
* UI/UX design in Figma
* REST API calls

# 👉 What's next for PlateMate

* Provide restaurant suggestions that are better tailored to users’ budgets by using Plaid’s financial APIs to accurately determine their average spending
* Connect users directly through an in-app chat function
* A friends and network system
* An improved matching algorithm
## Inspiration

Access to course resources is often fragmented and lacks personalization, making it difficult for students to optimize their learning experiences. When students use Large Language Models (LLMs) for academic insights, they often encounter limitations due to LLMs' inability to interpret various data formats, like lecture audio or photos of notes. Additionally, context often gets lost between sessions, leading to fragmented study experiences. We created OpenContext to offer a comprehensive solution that enables students to manage, customize, and retain context across multiple learning resources.

## What it does

OpenContext is an all-in-one platform designed to provide students with personalized, contextually aware resources using LLMs. It aims to make course data open source by allowing students to share their course materials.

### Features:

* Upload and process documents: Users can upload or record audio, PDF, and image files related to their classes.
* Chat assistant: Users can chat with an assistant which has the context of all the uploaded documents and can refer to course materials for any given question.
* Real-time audio transcription: Record lectures directly in the browser; the audio is transcribed and processed in real time.
* Document merging and quiz generation: Users can combine documents from different formats and generate quizzes that mimic Quizlet-style flashcards.
* Progress tracking: After completing quizzes, users receive a detailed summary of their performance.

**Full user flow**: The user lands on a page where they are prompted with three options: create a new chat, upload documents, or generate quizzes. If the user navigates to uploading documents, they have the option to record their current lecture in real time from their browser. Alternatively, they can upload documents in audio, PDF, or image form. We are using Tesseract for Optical Character Recognition on image files, OpenAI's Speech-to-Text API on audio files, and our own PDF parser for other class documents. The user can also record the lecture in real time, and it will be transcribed and processed as it happens.

After the transcription of a lecture or another class document is finished, it is displayed to the user. They can then create a new chat and ask our AI assistant anything related to their course materials. The assistant has the full context of the uploaded documents and can answer with references to those documents.

The user also has the option to generate a quiz based on the transcription of the lecture that was just recorded, and can merge multiple class documents to generate a custom quiz from all of them. The quizzes have the format of Quizlet flashcards: a question is asked, four answers are provided as options, and after an option is chosen, the website indicates whether the chosen answer is correct or incorrect. The score for each question is calculated, and at the end of the quiz a summary of the student's performance is written so they can track their progress.

## How We Built It

* Frontend: Built with React for a responsive and dynamic user interface.
* Backend: Developed with FastAPI, handling various tasks from file processing to vector database interactions.
* AI Integration: Utilized OpenAI's Whisper for real-time speech-to-text transcription and embedding functionality.
* OCR: Tesseract is used for Optical Character Recognition on uploaded images, allowing us to convert handwritten or printed text into machine-readable text.
* Infrastructure: Hosted with Defang for production-grade API management, alongside Cloudflare for data operations and performance optimization.

### Tech Stack:

* Figma: We used Figma to design our warm color palette and the simplistic theme for our pages and logo.
* Python: Python scripts are used ubiquitously in OpenContext. Whether it's for our Defang deployment scripts, pipelines to our vector databases, our API servers, OpenAI wrappers, or other miscellaneous tasks, we utilize Python.
* React: The React framework is used to build all the components, pages, and routes on the frontend.
* Tesseract OCR: For converting images to text.
* FastAPI: We have multiple FastAPI apps that we use for multiple purposes. Endpoints that respond to different types of file requests from the user, connections to our vector DB, and other scripting tasks are all handled by the FastAPI endpoints that we have built.
* OpenAI API: We are using multiple OpenAI services, such as Whisper for ASR and Text Embeddings for later function calling and vector storage.

### Sponsor Products:

* Defang: We used Defang to test and host both of our API systems in a production environment. Here is the production PDF API: <https://alpnix-pdf-to-searchable--8080.prod1b.defang.dev/docs>
* Terraform: We used Terraform (a main.tf script) to test and validate our configuration for our deployment services, such as our API hosting with Defang and nginx settings.
* Midnight: For open-source data sharing, Midnight provides us the perfect tool to encrypt the shared information. We created our own wallet as the host server, and each user is able to create their own private wallet to share files securely.
* Cloudflare: We are using multiple Cloudflare services…
  + Vectorize: In addition to using Pinecone, we have fully utilized Vectorize, Cloudflare's vector DB, at a high level.
  + Cloudflare Registrar: We used Cloudflare's domain registration to buy our domain.
  + Proxy Traffic: We are using Cloudflare's proxy traffic service to handle requests in a secure and efficient manner.
  + Site Analytics: Cloudflare's data analytics tool helps us analyze the traffic as the site is launched.
* Databricks: We fully utilized the Databricks Starter Application to familiarize ourselves with the efficient open-source data-sharing feature of our product. After running some tests, we also decided to integrate LangChain in the future to enhance the context-aware nature of our system.

## Challenges we ran into

One significant challenge was efficiently extracting text from images. This required converting images to PDFs, running OCR to overlay text onto the original document, and accurately placing the text for quiz generation. Ensuring real-time transcription accuracy and managing the processing load on our servers were also challenging.

## Accomplishments that we're proud of

* Tool mastery: In a short time, we learned and successfully implemented production-environment tools like NGINX, Pinecone, and Terraform.
* API integration: Seamlessly integrated OpenAI's Whisper and Tesseract for multi-format document processing, enhancing the utility of LLMs for students.
* Quiz generation pipeline: Developed an efficient pipeline for custom quiz generation from multiple class resources.
## What we learned

* Infrastructure management: Gained experience using Defang, Terraform, and Midnight to host and manage a robust data application.
* Prompt engineering: Through David Malan's session, we enhanced our ability to prompt-engineer, configuring ChatGPT's API to fulfill specific roles and restrictions effectively.

## What's next for OpenContext

We aim to develop a secure information-sharing system within the platform, enabling students to share their study materials safely and privately with their peers. Additionally, we plan to introduce collaborative study sessions where students can work together on quizzes and share notes in real time. This could involve shared document editing and group quiz sessions to enhance the open-source spirit of the platform.
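The OCR step can be reproduced in miniature with pytesseract, the Python wrapper around the same Tesseract engine. This sketch is a stand-in for the FastAPI endpoint and assumes the Tesseract binary is installed locally:

```python
from PIL import Image
import pytesseract

def ocr_notes(image_path: str) -> str:
    """Convert a photo of handwritten or printed notes to plain text."""
    image = Image.open(image_path)
    # Tesseract works best on clean, high-contrast input; the production
    # pipeline converts images to PDF and overlays the recognized text.
    return pytesseract.image_to_string(image)

print(ocr_notes("lecture_notes.jpg"))  # placeholder path
```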
Wanted to try something low-level!

MenuMate is a project aimed at enhancing dining experiences by ensuring that customers receive quality, safe, and delicious food. It evaluates restaurants using health inspection records and food-site reviews, initially focusing on Ottawa with plans for expansion. Built on React, the tool faced integration challenges with frameworks and databases, yet achieved a seamless frontend-backend connection. The current focus includes dataset expansion and technical infrastructure enhancement. The tool scrapes data from websites and reads JSON files for front-end display, primarily using technologies like BeautifulSoup, React, HTML, CSS, and JavaScript. The team encountered challenges, as this was their first experience with web scraping, and they faced difficulties in displaying data.
partial
## Inspiration

How much carbon does farming really sequester? This is the question that inspired us to create this solution. With governments around the world showing rising interest in taxing farmers for their emissions, we wanted to find a way to calculate them.

## What it does

A drone with a variety of sensors measures the CO2, CH4, and albedo of the land underneath it to estimate the actual carbon offset. The data collected by the drone is sent to our online server, where MATLAB fetches it to calculate the carbon offset. The drone also has sensors to measure water quality. In the future, the drone will also detect soil moisture using microwaves, similar to remote-sensing satellites. From the offset we can calculate carbon credits, which can then be traded over the Pi platform. By using blockchain we enable:

1) No double counting of credits
2) Wider participation from around the world (Pi already has over 35 million users)
3) Only algorithmically calculated credits in circulation

## How we built it

Using Arduino, Pi, and MATLAB.

## Challenges we ran into

Pi was a tough challenge to implement. Loading the sensors onto the drone was another big challenge.

## Accomplishments that we're proud of

We were able to get all the sensors to work, collect data in real time, and run MATLAB analysis on it.

## What we learned

## What's next for We Are Sus Farms
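To make the offset idea above concrete, here is an illustrative Python sketch (the team's actual analysis runs in MATLAB and is not shown). The only hard number used is methane's 100-year global warming potential of roughly 28x CO2 from IPCC AR5; the fluxes and the credit conversion are assumptions.

```python
GWP_CH4 = 28.0  # IPCC AR5 100-year global warming potential of methane

def net_offset_kg_co2e(co2_flux_kg: float, ch4_flux_kg: float) -> float:
    # Negative fluxes mean the land is absorbing gas (sequestration),
    # so the offset is the negated CO2-equivalent sum.
    return -(co2_flux_kg + GWP_CH4 * ch4_flux_kg)

# Assumption: one tonne of CO2e sequestered corresponds to one credit.
readings = {"co2_flux_kg": -1200.0, "ch4_flux_kg": 5.0}
credits = net_offset_kg_co2e(**readings) / 1000.0
print(f"Estimated credits: {credits:.2f}")
```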
## Inspiration

Greenhouses require increased disease control and need to closely monitor their plants to ensure they're healthy. In particular, the project aims to capitalize on the recent interest in cannabis.

## What it Does

It's a sensor system composed of cameras and temperature and humidity sensors, layered with smart analytics, that allows users to tell when plants in their greenhouse are diseased.

## How We built it

We used the Telus IoT Dev Kit to build the sensor platform, along with Twilio to send emergency texts (pending installation of the IoT edge runtime as of 8 am today). We then used Azure to do transfer learning on VGGNet to identify diseased plants and flag them to the user. The model is deployed for use with IoT Edge. Moreover, there is a web app that can be used to show the sensor data and model results to the user.

## Challenges We Ran Into

Datasets for greenhouse plants are in fairly short supply, so we had to use an existing network to help with saliency detection. Moreover, the low-light conditions in our dataset were in direct contrast (pun intended) to the PlantVillage dataset used to train for diseased plants. As a result, we had to implement a few image preprocessing methods, including something that's been used for plant health detection in the past: Eulerian magnification.

## Accomplishments that We're Proud of

Training a PyTorch model at a hackathon and sending sensor data from the STM Nucleo board to Azure IoT Hub and Twilio SMS.

## What We Learned

When your model doesn't do what you want it to, hyperparameter tuning shouldn't always be the go-to option. There might be (and in this case, was) some intrinsic aspect of the model that needed to be looked over.

## What's next for Intelligent Agriculture Analytics with IoT Edge
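For readers curious what the transfer-learning step above looks like, here is a minimal PyTorch sketch of freezing VGG's convolutional features and retraining a new classifier head; the two-class healthy/diseased setup and the hyperparameters are assumptions, not the team's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumption: binary healthy vs. diseased

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False  # freeze the pretrained convolutional stack

# Replace the final 1000-way ImageNet layer with our own head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # images/labels would come from a PlantVillage-style DataLoader.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```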
## Inspiration

Of all the mammals on Earth, only 4% are wild; the remaining 96% are livestock and humans. For birds, only 30% are wild, the rest being chickens and other poultry (Bar-On et al., 2018). Food, agriculture, and land use directly account for 24% of greenhouse gas sources, more than transportation (14%) and industry (21%), and on par with electricity production (25%) (IPCC, 2014). Food accounts for up to 37% of global greenhouse emissions and 70% of water withdrawals when taking into account all phases of production and distribution (IPCC, 2019). A global transition towards more sustainable food will be among the most important strategies to reduce human impact on planetary resources. Many people want to do their part to reduce emissions, but they do not know where to start. Here we present the most accurate and up-to-date database on food carbon footprints to provide knowledge and tools that can turn ideas into action.

## What it does

Our app informs users about the carbon footprint of the food products they buy from stores. Using a revolutionary dataset from Nature Scientific Data, built from 3349 carbon footprint values extrapolated from 841 publications, we calculate the carbon footprint of specific foods based on the quantity consumed and the type of food (a minimal sketch of this lookup follows the writeup). The application takes user input to create a shopping list of grocery items, either through manual photo upload or by adding each item to a shopping list. The application automatically looks for similar replacement items in a person's shopping list that have a lower carbon footprint.

## How we built it

This project was built using the MERN stack, with a splash of computer vision using OpenCV and Microsoft Azure Machine Learning, and D3 for visualization. The frontend components were built using Material UI. A MongoDB database stores shopping lists and client data. D3, a JavaScript visualization library, was used to create the explorable graph of carbon footprints per food.

## Challenges we ran into

One of the biggest challenges we ran into was trying to upload an image to be stored at a URL using React. This step was crucial to the development of our project, since the use case of scanning receipts for grocery and food items relied heavily on image input and processing. We tried different ways to diagnose and tackle the problem, such as uploading the image to a free image-hosting service, reading the image into an array of bytes (metadata was lost), and asking a mentor for help. Additionally, we believe that coming up with an idea that we were all passionate about was the hardest part. Brainstorming did not come easily to us, since most of us did not know each other prior to the hackathon. Our entire team knew that we wanted to do something sustainability-related. We cycled through many ideas before settling on this one. We struggled with narrowing down our priorities since we had so many ideas to branch out upon. Some of the ideas that we wanted to implement (but couldn't get to) are listed in the last section of the README below.

## Accomplishments that we're proud of

We're proud of all we've done so far, from all the time spent brainstorming to coming up with a prototype for demonstration. Our team is particularly proud of how we were able to combine our steeply different skills to create an application. Our UI was designed by a team member who had never worked on UI in the past.
Deploying and running the CV algorithm was also one of the most time-consuming and messy tasks, but we are glad that our hard work could be showcased.

## What we learned

There's no doubt that each of us learned a lot from this project. Breaking it down individually:

Mei: I learned how to build and design the webpage using Material UI, along with using React to integrate GET and POST requests from the frontend to the backend.

## What's next for The Secret 37%

The first next step is to make our features work flawlessly. We had a difficult time integrating D3 with React. In addition, we want to make sure that the computer vision for converting receipts into carbon footprints works reliably. We would also like to add more NLP so that the algorithm automatically finds the closest match for any food item (e.g., if a user types in Special K, the algorithm matches cornflakes, the closest match in our database). As another next step, we would like users to be able to track their monthly emissions from food by compiling all receipts and shopping lists. Finally, we would like to turn this into an integrated phone app as well.
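As referenced above, the core footprint calculation is a per-item lookup scaled by quantity. A minimal sketch; the emission factors below are illustrative placeholders, not values from the actual dataset:

```python
# Illustrative kg CO2e per kg of food; the real app pulls factors from
# the Nature Scientific Data dataset cited in the writeup.
EMISSION_FACTORS = {"beef": 26.6, "cheese": 8.9, "rice": 3.9, "lentils": 1.0}

def footprint(item: str, quantity_kg: float) -> float:
    return EMISSION_FACTORS[item] * quantity_kg

def suggest_swap(item: str) -> str | None:
    # Mirror the app's automatic lower-footprint replacement feature.
    best = min(EMISSION_FACTORS, key=EMISSION_FACTORS.get)
    return best if EMISSION_FACTORS[best] < EMISSION_FACTORS[item] else None

shopping_list = [("beef", 0.5), ("rice", 1.0)]
total = sum(footprint(i, q) for i, q in shopping_list)
print(f"Total: {total:.1f} kg CO2e; swap beef -> {suggest_swap('beef')}")
```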
winning
## Inspiration

As electric vehicles (EVs) become increasingly popular, we saw the need for a tool that simplifies journey planning for EV drivers. Our goal was to create a solution that not only calculates optimal routes based on vehicle model and destination but also identifies nearby charging stations along the way.

## What it does

uOCars is a route planner designed specifically for EVs. It takes into account factors such as vehicle model, battery life, and destination to provide users with accurate estimates of their journey duration and the optimal times to charge their vehicles. Additionally, it locates nearby charging stations to ensure a seamless travel experience.

## How we built it

We built uOCars using a combination of Django for the backend logic and JavaScript for the frontend interface. We integrated the Google Places API to identify charging stations along the route and implemented algorithms to calculate battery life and charging requirements based on the user's input.

## Challenges we ran into

* Google Places API integration: difficulty integrating the Google Places API to retrieve data about charging stations efficiently.
* Client input retrieval (POST method): we spent significant time retrieving client input for the POST method, reflecting challenges in data submission and processing.
* Battery life prediction: we struggled to accurately predict battery life, reflecting challenges in modeling and forecasting battery usage.
* Optimal charging time determination: we faced difficulties determining the best charging times based on various factors, highlighting challenges in optimization and decision-making algorithms.

## Accomplishments that we're proud of

We are proud to have developed a functional and user-friendly tool that addresses a real need in the EV community. Our solution not only provides valuable information to users but also promotes sustainability by encouraging the use of electric vehicles.

## What we learned

Throughout the development process, we gained valuable insights into EV technology, route-planning algorithms, and API integration. We also improved our skills in frontend and backend development, as well as project management and interdepartmental teamwork.

## What's next for uOCars

In the future, we plan to further enhance uOCars by incorporating advanced features such as real-time traffic updates, EV charging station availability, and integration with smart navigation systems. We also aim to expand our coverage to include more regions and support additional EV models. Overall, our goal is to continue improving the user experience and making electric vehicle travel more accessible and convenient for everyone.
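A simplified version of the battery and charging estimate described above might look like the following; every parameter here is an assumption for illustration, not the team's actual model.

```python
def charge_plan(distance_km: float, battery_kwh: float, charge_pct: float,
                consumption_kwh_per_km: float = 0.18,  # assumed vehicle profile
                charger_kw: float = 50.0) -> dict:     # assumed DC fast charger
    available_kwh = battery_kwh * charge_pct
    needed_kwh = distance_km * consumption_kwh_per_km
    shortfall_kwh = max(0.0, needed_kwh - available_kwh)
    return {
        "range_km": available_kwh / consumption_kwh_per_km,
        "charge_needed": shortfall_kwh > 0,
        "charge_hours": shortfall_kwh / charger_kw,
    }

print(charge_plan(distance_km=400, battery_kwh=75, charge_pct=0.8))
```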
EVC, or Electric Vehicle Companion, is a groundbreaking application designed to revolutionize the driving experience for electric vehicle (EV) owners. With a user-friendly interface, EVC allows drivers to input their departure location and destination, providing them with vital real-time data to make their journey smoother and more efficient.

## What It Does

EVC calculates the duration of the charge needed to complete the journey and identifies charging points along the route. It also offers alternative routes to ensure that drivers can reach their destination with the least amount of charging time and hassle.

## How We Built It

The app was developed using a combination of the Google Maps API for route planning and a custom database of charging stations to offer the most accurate and up-to-date information. We incorporated user feedback to refine the UI/UX, making it as intuitive as possible.

## Challenges We Ran Into

Integrating real-time data from various charging station providers was challenging due to differing formats and standards. Additionally, optimizing the route calculation algorithm to account for charging time and station availability required extensive testing and refinement.

## Accomplishments That We're Proud Of

We're proud of creating an app that not only meets the basic requirements but goes beyond them, offering features like estimated energy consumption, cost calculation for trips, and notifications for low charge or when a charging station is nearby.

## What We Learned

Through this project, we learned the importance of user-centered design in creating applications that address real-world problems. We also gained insights into the complexities of the EV charging infrastructure and the significance of data accuracy and reliability.

## What's Next for EVC

Moving forward, EVC plans to integrate more personalized features, such as recommending the most energy-efficient routes based on driving habits and vehicle performance. We also aim to expand our database to include more charging stations worldwide and explore partnerships with EV manufacturers for seamless in-car app integration.
## Inspiration

The first step of our development process was conducting user interviews with university students within our social circles. When asked about recently developed pain points, 40% of respondents stated that grocery shopping has become increasingly stressful and difficult during the ongoing COVID-19 pandemic. The respondents also cited motivations including a loss of disposable time (due to an increased workload from online learning), tight spending budgets, and fear of exposure to COVID-19. While developing our product strategy, we realized that a significant pain point in grocery shopping is the process of price-checking between different stores. This process requires the user to visit each store (in person and/or online), check the inventory, and manually compare prices. Consolidated platforms that help with grocery list generation and payment do not exist in the market today, so we decided to explore this idea.

**What does G.e.o.r.g.e stand for? Grocery Examiner Organizer Registrator Generator (for) Everyone**

## What it does

The high-level workflow can be broken down into three major components:

1. Python (Flask) and Firebase backend
2. React frontend
3. Stripe API integration

Our backend Flask server is responsible for web scraping and generating semantic, usable JSON for each product item, which is passed through to our React frontend (see the sketch after this writeup). Our React frontend acts as the hub for tangible user-product interactions. Users can search for grocery products, add them to a grocery list, generate the cheapest possible list, compare prices between stores, and make a direct payment for their groceries through the Stripe API.

## How we built it

We started our product development process by brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment service application, we drew up designs and prototyped in Figma, then implemented the front-end designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data.

## Challenges we ran into

Once we had finished defining our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs for grocery store price data, so we decided to do our own web scraping. This led to complications with slower server responses, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which allowed us to flesh out our end-to-end workflow.

## Accomplishments that we're proud of

Some of the websites we had to scrape had lots of information to comb through, and we are proud of how we picked up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that includes even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to handle real money with our app.

## What we learned

We picked up skills such as web scraping to automate the process of parsing through large data sets.
Web scraping dynamically generated websites can also lead to slow server response times that are generally undesirable. It also became apparent that we should have set up virtual environments for our Flask applications so that team members would not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3 am will make you want to pull out your hair, but at least we now know that it can be done :')

## What's next for G.e.o.r.g.e.

Our next steps with G.e.o.r.g.e. would be to improve the overall user experience of the application by standardizing our UI components and UX workflows with e-commerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as to create more seamless payment solutions.
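As referenced in the writeup, the backend serves scraped price data as JSON to the React client. Here is a minimal Flask sketch of a cheapest-list endpoint; the store data, route name, and schema are all hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical scraped price data: {store: {item: price}}.
PRICES = {
    "StoreA": {"milk": 4.99, "bread": 2.49},
    "StoreB": {"milk": 4.49, "bread": 2.99},
}

@app.post("/cheapest-list")
def cheapest_list():
    items = request.get_json()["items"]
    result = []
    for item in items:
        offers = {s: p[item] for s, p in PRICES.items() if item in p}
        store = min(offers, key=offers.get)  # pick the cheapest store
        result.append({"item": item, "store": store, "price": offers[store]})
    return jsonify(result)
```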
losing
Introducing Melo-N, where your favourite tunes get a whole new vibe! Melo-N combines "melody" and "Novate" to bring you a fun way to switch up your music.

Here's the deal: you pick a song and a genre, and we do the rest. We keep the lyrics and melody intact while changing up the music style. It's like listening to your favourite songs in a whole new light!

How do we do it? We use cool tech tools like Spleeter to separate vocals from instruments, so we can tweak things just right. Then, with the help of the MusicGen API, we switch up the genre to give your song a fresh spin. Once everything's mixed, we deliver your custom version, ready for you to enjoy.

Melo-N is all about exploring new sounds and having fun with your music. Whether you want to rock out to a country beat or chill with a pop vibe, Melo-N lets you mix it up however you like. So get ready to rediscover your favourite tunes with Melo-N, where music meets innovation and every listen is an adventure!
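For the curious, the Spleeter step mentioned above is only a few lines in Python; the genre re-rendering through MusicGen and the final remix are not shown here.

```python
from spleeter.separator import Separator

# The 2-stem model splits a track into vocals + accompaniment; the
# accompaniment is what gets re-rendered in the new genre.
separator = Separator("spleeter:2stems")
separator.separate_to_file("song.mp3", "output/")
# Produces output/song/vocals.wav and output/song/accompaniment.wav
```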
## Inspiration

We are very interested in the idea of making computers understand human feelings, which came from the Mirum challenge. We applied this idea to call centers, where customer support can't see customers' faces during phone calls or messages. Analyzing the emotional tone of consumers can help customer support understand their needs and solve problems more efficiently. Businesses can immediately see the detailed emotional state of customers from voice or text messages.

## What it does

The text from customers is colored based on its tone: red stands for anger, white stands for joy.

## How I built it

We utilized the iOS chat application from the Watson Developer Cloud Swift SDK to build this chat bot, and the IBM Watson Tone Analyzer to examine emotional tones, language tones, and social tones.

## Challenges I ran into

At the beginning, we had trouble running the app on an iPhone. We spent a lot of time debugging and testing. We also spent a lot of time designing the graph of the analysis results.

## Accomplishments that I'm proud of

We are proud to show that our chat bot supports tone analysis and basic chatting.

## What I learned

We explored several IBM Watson APIs and learned a lot while troubleshooting and fixing bugs.

## What's next for **Chattitude**

Our future plan for Chattitude is to color the text sentence by sentence and make the interface more engaging. For the tone analysis results, we want to present a real-time animated histogram.
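The team used the Swift SDK, but for illustration here is the equivalent call with the (since retired) Watson Tone Analyzer Python SDK; the service URL region and the colour mapping are assumptions.

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21", authenticator=IAMAuthenticator("YOUR_API_KEY"))
tone_analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com")  # assumed region

result = tone_analyzer.tone(
    tone_input={"text": "I have been waiting an hour and nobody has helped me!"},
    content_type="application/json",
).get_result()

# Map the dominant document tone to a display colour, as in the app.
tones = result["document_tone"]["tones"]
dominant = max(tones, key=lambda t: t["score"])["tone_id"] if tones else "neutral"
color = {"anger": "red", "joy": "white"}.get(dominant, "gray")
```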
## Inspiration

Music has become a crucial part of people's lives, and they want customized playlists to fit their mood and surroundings. This is especially true for drivers, who use music to entertain themselves on their journey and to stay alert. Based on personal experience and feedback from our peers, we realized that many drivers are dissatisfied with the repetitive selection of songs on the radio and on regular Spotify playlists. That's why we were inspired to create something that could tackle this problem in a creative manner.

## What It Does

Music Map curates customized playlists based on factors such as time of day, weather, driving speed, and locale, creating a set of songs that fit the drive perfectly. The songs are selected from a variety of pre-existing Spotify playlists that match the user's tastes and are weighted based on the driving conditions to create a unique experience each time. This allows Music Map to introduce new music to the user while staying true to their own tastes.

## How we built it

HTML/CSS, Node.js, and the Esri, Spotify, and Google Maps APIs.

## Challenges we ran into

The Spotify API was challenging to work with, especially authentication. Overlaying our own UI over the map was also a challenge.

## Accomplishments that we're proud of

* Learning a lot and having something to show for it
* The clean and aesthetic UI

## What we learned

For the majority of the team, this was our first hackathon, and we learned how to work together well and distribute the workload under time pressure, playing to each of our strengths. We also learned a lot about the various APIs and how to fit different pieces of code together.

## What's next for Music Map

We will incorporate more factors into the curation of the playlists and gather more data on users' preferences.
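One plausible way to weight songs by driving conditions, sketched with the spotipy client in Python (the team built in Node.js; the playlist ID, weighting rule, and inputs are all illustrative assumptions):

```python
import random
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Assumes SPOTIPY_CLIENT_ID/SECRET/REDIRECT_URI are set in the environment.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-read-private"))

def condition_weight(track_energy: float, speed_kmh: float, hour: int) -> float:
    weight = 1.0
    if speed_kmh > 80 or hour >= 22:
        weight += track_energy  # favour high-energy songs at speed or late at night
    return weight

tracks = sp.playlist_items("PLAYLIST_ID")["items"]  # hypothetical seed playlist
features = sp.audio_features([t["track"]["id"] for t in tracks])
weights = [condition_weight(f["energy"], speed_kmh=95, hour=23) for f in features]
chosen = random.choices(tracks, weights=weights, k=10)
```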
winning
# Annotate.io

## Inspiration 💡

With school being remote and online lectures being prominent, it is sometimes hard to make clear and concise notes: your prof may have an accent you are not used to, your laptop's audio may not be the best, or the house may simply be too loud! What if there were an application that could transcribe your online lectures, summarize them, and point out key concepts? This would improve your study productivity and even promote active learning! Well, that's exactly what we wanted to build! Using Assembly AI and Twilio, we built a notes assistant that helps build concise and elegant study sheets based on your lectures. Our product boosts productivity, as we create interactive study sheets to increase active learning and recall!

## What it does 🤔

Annotate.io is an education and productivity platform that lets users create dynamic notes based on lecture content. Users can submit an .mp4 file or even a YouTube link and get interactive notes! We use Assembly AI to perform topic analysis, summarize content, and highlight key topics in the material (a sketch of this integration follows the writeup). Users can also email a PDF version of their notes to themselves and share their notes with others (thanks, Twilio!).

## How we built it 🖥️

When building Annotate.io, we chose three key design principles to ensure our product meets the design challenge of productivity: simplicity, elegance, and scalability. We wanted Annotate.io to be simple to design, upgrade, and debug, which led us to the lightweight Flask framework and the magic of Python for our backend infrastructure. To keep our platform scalable and efficient, we used Assembly AI for both topic and summary analysis, via its topic-detection and speech APIs respectively. Using Assembly AI as our backbone allowed our NLP analysis to be efficient and responsive. We then used various Python libraries to create our YouTube and file-conversion services, enabling us to integrate with the Assembly AI infrastructure. Twilio takes the output from Assembly AI and rapidly sends PDFs of their notes to our users' emails! To create an elegant and user-friendly experience, we leveraged React and various design libraries to present our users with a new, productivity-focused platform for dynamic and interactive study notes. React also worked seamlessly with our Flask backend and our third-party APIs. This integration also allowed for concurrent development streams for both our frontend and backend teams.

## Challenges we ran into 🔧

One of the first challenges we faced was integrating with Assembly AI. At first we weren't having much success interfacing with the Assembly AI API, but by going through the documentation and looking over some sample code they provided, we were able to leverage Assembly AI in our project. Another issue we didn't initially anticipate was getting the backend and frontend services to communicate: due to the cross-origin resource policy, we initially couldn't pass information between the two. We managed to implement CORS, which solved the issue. This year was the first time we decided to use Figma to mock up our UI; although tedious, it definitely helped the frontend team speed up their development process. The hackathon really challenged our normal development process. We had to make quick decisions and weigh the pros and cons of various choices. Sleep, or technically the lack of sleep, was another challenge we had to overcome.
Luckily, now that we are done, we can get some.

## What we learned 🤓

In this hack we definitely learned a lot about the development process and bringing out the best in each of our abilities. Figma was a design tool we used for the first time; it helped with our frontend development and is a skill we will take with us into our future careers. We also gained greater insight into integrating third-party APIs and HTTP requests. To pass our audio and video files we used FormData, which had some nuances we never knew about. We also learned the importance of .gitignore and managing keys correctly, and how a leaked key can be really annoying to remove :(((

## What's next for Annotate.io 🏃‍♂️

For the future we have many ideas to improve the accessibility and scalability of Annotate.io. A feature we weren't able to develop yet, but are planning to, is image recognition to handle detailed diagrams and drawings. Diagrams often paint a better picture and improve one's understanding, which is something we want to take advantage of in Annotate.io. This feature would be a game changer, as annotated diagrams based on the video would improve productivity even more!
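As referenced above, here is a minimal sketch of the Assembly AI integration using its REST API; the audio URL is a placeholder, and chapter summaries stand in for the study-sheet summarization step.

```python
import time
import requests

API = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": "YOUR_ASSEMBLYAI_KEY"}

# Submit a transcription job with topic detection and auto chapters.
job = requests.post(f"{API}/transcript", headers=HEADERS, json={
    "audio_url": "https://example.com/lecture.mp4",  # hypothetical upload URL
    "iab_categories": True,   # topic detection
    "auto_chapters": True,    # per-section summaries
}).json()

# Poll until the transcript is ready.
while True:
    result = requests.get(f"{API}/transcript/{job['id']}", headers=HEADERS).json()
    if result["status"] in ("completed", "error"):
        break
    time.sleep(3)

for chapter in result.get("chapters") or []:
    print(chapter["headline"], "-", chapter["summary"])
```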
## Inspiration

Our inspiration for this project was our experience as students. We believe students need a more digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time-consuming. StudyHedge is here to lift the scheduling burden off your shoulders!

## What it does

StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, the number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you.

## How we built it

We built the project using React (frontend), Flask (backend), Firebase (database), and Google Cloud Run.

## Challenges we ran into

Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this. After exploring various branches of mathematics, we settled on a simple formula: rank = weight / time^2.

## Accomplishments that we're proud of

We are incredibly proud that we have a functional backend and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group.

## What we learned

Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front end. She learned about input validation and designing features to meet functionality requirements. Eli coded the backend using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API.

## What's next for StudyHedge

We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project and eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features. We are also considering implementing AI suggestions.
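The ranking formula from the writeup is small enough to show directly; the choice of hours as the time unit is an assumption.

```python
from datetime import datetime

def rank(weight: float, due: datetime, now: datetime) -> float:
    # rank = weight / time^2, with time measured in hours until the due date.
    hours_left = max((due - now).total_seconds() / 3600, 1.0)
    return weight / hours_left ** 2

assignments = [
    ("essay", 0.30, datetime(2024, 3, 1, 23, 59)),
    ("quiz", 0.10, datetime(2024, 2, 27, 9, 0)),
]
now = datetime(2024, 2, 26, 12, 0)
for name, w, due in sorted(assignments, key=lambda a: -rank(a[1], a[2], now)):
    print(name, round(rank(w, due, now), 6))
```

Squaring the time term makes imminent deadlines dominate the schedule even when their weights are small, which matches the intent described above.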
## Inspiration

With the new normal and the digitalization of every sector, we came to realize that the internet offers mostly blogs on autism, and we felt that the autism community had been sidelined, with barely any resources or software to help parents and therapists continue their therapies virtually; they still had to depend on traditional means like crafts to develop puzzles that teach children life skills. Surfing the internet further, we realized the gravity of the situation: there was only one website with educational games teaching autistic kids skills like eye contact, and it had been discontinued because of technical glitches. The severity of the situation, alongside the fact that 1 in 54 children has autism, inspired us to use technology to create tools catering to the needs of children, parents, and others connected to autism. Spreading awareness about autism through our project has been the major driving force.

## What it does

Walk With Me is a working prototype aimed at incorporating technology into the traditional tools used to teach people with autism basic life skills. The project is color-coded blue and carries the puzzle logo, both symbolic of autism, incorporated with the aim of spreading awareness. The project has three distinct highlights aimed at covering people at different levels of the autism spectrum. Special educators traditionally make adaptive books by hand to teach life skills like brushing teeth; to reduce that cumbersome workload, we designed a prototype centered on people with autism and kept it simple, so that with one click of a mouse a child can easily navigate through the steps and learn new skills, making the work easier for both educators and children. The website itself is a compact platform providing knowledge of autism and spreading awareness through blogs, videos, and more, while linking the features together, including the game prototype and the bot. The Discord bot linked with the website is a tool for non-verbal autistic people (approximately 30-40% of people on the spectrum are non-verbal), who can use text input to interact with it. The bot provides positive reinforcement to the user, which helps uplift the user's mood and builds confidence and self-worth through phrases like "You can do it." and "You are the best." (a minimal sketch of such a bot follows the writeup). It also shares quotations from personalities like Rumi.

## How we built it

Our project is an amalgamation of the following:

* Discord bot: Python
* Front end: HTML/CSS, Bootstrap, JavaScript
* Back end: PHP
* Educational game prototype: Google Slides

## Challenges we ran into

1) We faced a lot of system and software problems while creating the frontend of the website, forcing us to redo the whole front end.
2) One of our teammates had to leave just after the start of the hack due to unforeseen circumstances, which scrambled our schedule, but we managed to finish our project.
3) It took us some time, and help from the mentors, to figure out a way to present the idea of the game as a prototype, but we made the best use of Google Slides in prototyping it.

## Accomplishments that we're proud of

We now know how to use HTML/CSS, sandboxes, Bootstrap, and template editing and shuffling at an intermediate level, and we learned how to create games and short animations using Google Slides.
We are also proud of how we came together as a team: we had never met each other before, yet we worked together in a collaborative and healthy environment.

## What we learned

The research phase of the hack was an eye-opening experience for the whole team, especially learning about autism, how an autistic person spends their daily routine, and the tools and techniques used to teach an autistic child life skills. We also realized that apart from numerous blogs on autism, there isn't much out there in terms of software or educational games catering to autistic people, which motivated us to take on this project. On the technical front, working on the project taught us how to manage our time while collaborating and making quick decisions, and we picked up new technical tricks, like extensions that let us code in real time.

## What's next for Walk With Me

After the hackathon, converting the prototype for the educational games into actual games using platforms like Unity is on our to-do list. Considering the lack of educational games built for autistic people, we plan to improve our prototype and turn it into educational games that teach people with autism basic life skills, helping them on their journey to independence. We also plan to improve the Discord bot, as well as text-to-speech functionality, keeping in mind the non-verbal children on the autism spectrum and helping them communicate through technology. We plan to strengthen the security of our website and the Discord bot so there is no breach or inappropriate behavior, ensuring the environment remains a safe, civil learning platform. Alongside this, we look forward to adding more functionality to the website and making it dynamic with good-quality content and a better tech stack, such as React. We also plan to improve the website's resources: sharing the journeys of people on the spectrum, books written by people with autism, and sessions and conferences related to autism. We hope to develop an app version of the website, incorporate a built-in chat feature for peer-to-peer or peer-to-admin interaction, and develop more advanced, secure admin records with the database.

We also plan to integrate and follow WCAG 2.0 (the Web Content Accessibility Guidelines), which include:

1) Content should not be prone to causing seizures, and content should be verified against this.
2) Content should be supplemented with pictures for better understanding.
3) Sentences should not be cluttered.
4) Fonts need to be large and legible.

Overall, we have a lot to look forward to in terms of our hack.
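As referenced in the writeup, the positive-reinforcement bot boils down to very little code with discord.py; the affirmation list and token here are placeholders.

```python
import random
import discord

AFFIRMATIONS = ["You can do it.", "You are the best.", "One step at a time."]

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return  # never reply to ourselves
    # Reply to any text input with a positive reinforcement.
    await message.channel.send(random.choice(AFFIRMATIONS))

client.run("YOUR_BOT_TOKEN")
```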
winning
## Inspiration

We recognized that parliament hearings can be lengthy and difficult to comprehend, yet they have a significant impact on our lives. We were inspired by the concept of a TL;DR (too long, didn't read) summary, commonly seen in news articles but not available for parliament proceedings. As the news often covers only mainstream headlines, we saw the need for an all-inclusive, easy-to-access summary of proceedings, which is what led us to create co:ngress.

## What It Does

Co:ngress is an AI-powered app that summarizes the proceedings of the House of Commons of Canada for the convenience of its users. The app utilizes the Cohere API for article summarization, which processes the scraped proceedings and condenses them into a shorter, more easily understandable version. Furthermore, co:ngress fetches an AI-generated image from the Wombo API that loosely relates to the hearing's topic, adding an entertainment factor to the user experience. For scalability, we designed the app to store summaries and images in a database to minimize scraping and processing for repeat requests.

## How We Built It

The co:ngress app is built around a four-step process. First, a web scraper scrapes the transcripts of the parliament hearings from the House of Commons website. Next, the scraped article is passed to the Cohere API for summarization. After that, the topic of the parliament proceeding is interpreted, and the Wombo API generates an image that relates to it. Finally, the image and summary are returned to the user for easy consumption. The app was built using various tools and technologies, including web scraping, API integration, and database management.

## Challenges We Ran Into

One of the biggest challenges we faced was fine-tuning the Cohere API hyperparameters to achieve optimal results for our app. This process took a lot of time and required a lot of trial and error. Additionally, we had to ensure that the web scraper could efficiently gather random samples from the parliament website without errors.

## Accomplishments That We're Proud of

We are proud of the modular approach we took in building our app and how we connected all the different pieces together to create a seamless user experience. Additionally, we successfully summarized parliamentary proceedings into a useful result that can be easily understood by the average Canadian.

## What We Learned

One of the key things we learned was the power and versatility of the Cohere API, whose impressive training data helped us create accurate summaries. We also learned the importance of a modular approach to app development, which can make the process more efficient and less prone to errors.

## What's Next for Co:ngress

The next steps for co:ngress would be to implement user feedback and continually improve the summarization algorithm to ensure the accuracy and relevance of the summaries. The team could also explore expanding the app to cover other areas of government proceedings, or even other countries. Finally, as mentioned, creating a scalable cloud infrastructure to support a large number of users would be an important next step for the app's growth and success.
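A minimal sketch of the summarization call, using the summarize endpoint Cohere's Python SDK offered at the time; the parameter values here are illustrative, not the team's tuned hyperparameters.

```python
import cohere

co = cohere.Client("YOUR_COHERE_KEY")

proceeding_text = "..."  # scraped House of Commons transcript

summary = co.summarize(
    text=proceeding_text,
    length="medium",
    format="paragraph",
    extractiveness="low",  # favour abstractive, plain-language output
).summary
print(summary)
```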
## Inspiration

The inspiration for Nova came from the overwhelming volume of emails and tasks that professionals face daily. We aimed to create a solution that simplifies task management and reduces cognitive load, allowing users to focus on what truly matters.

## What it does

Nova is an automated email assistant that intelligently processes incoming emails, identifies actionable items, and seamlessly adds them to your calendar. It also sends timely text reminders, ensuring you stay organized and on top of your commitments without the hassle of manual tracking.

## How we built it

We built Nova using natural language processing algorithms to analyze email content and extract relevant tasks. By integrating with calendar APIs and SMS services, we created a smooth workflow that automates task management and communication, making it easy for users to manage their schedules.

## Challenges we ran into

One of the main challenges was accurately interpreting the context of emails to distinguish between urgent tasks and general information. Additionally, ensuring seamless integration with various calendar platforms and messaging services required extensive testing and refinement.

## Accomplishments that we're proud of

We are proud of developing a fully functional prototype of Nova that effectively reduces users' daily load by automating task management. Initial user feedback has been overwhelmingly positive, highlighting the assistant's ability to streamline workflows and enhance productivity.

## What we learned

Throughout the development process, we learned the importance of user feedback in refining our algorithms and improving the overall user experience. We also gained insights into the complexities of integrating multiple services to create a cohesive solution.

## What's next for Nova

Moving forward, we plan to enhance Nova's capabilities by incorporating machine learning to improve task recognition and prioritization. Our goal is to expand its features and ultimately launch it as a comprehensive productivity tool that transforms how users manage their daily tasks.
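The writeup does not name its SMS provider, but the text-reminder leg of such a workflow is straightforward with, for example, Twilio's Python client; the numbers and message wording below are placeholders.

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def send_reminder(task: str, due: str, to_number: str) -> None:
    client.messages.create(
        body=f"Reminder from Nova: '{task}' is due {due}.",
        from_="+15550006789",  # hypothetical Twilio number
        to=to_number,
    )

send_reminder("Submit quarterly report", "Friday 5pm", "+15551234567")
```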
We used the MERN tech stack to create a website that allows clubs to advertise their events to fellow students.

## Challenges we ran into

CSS

## Accomplishments that we're proud of

Creating our first full-stack web application

## What we learned

How to use MERN fully, and that Red Bull only works for an hour max

## What's next for UBC Upcoming

We'll see :)
winning
## Inspiration

My project is inspired by real patient-safety problems like retained surgical items (RSI), where surgical tools are carelessly left inside a patient after surgery, causing infections or even organ damage.

## What it does

My website provides step-by-step guidelines specific to the surgery and notifications to remind doctors whether they have completed the basic steps. It shows the most basic steps, like counting all surgical instruments before starting the surgery, and asks the doctor to mark each one as completed when done. It digitally logs each tool used during the operation and tracks its usage status. To keep things hands-free, I incorporated voice assistance: doctors can set a timer based on the priority of a step and get a voice-assisted alert. Before closing the surgical site, the system notifies the team of any card marked as incomplete, acting as a safeguard against human error.

## How we built it

The app was built using HTML, CSS, and JavaScript.

## Challenges we ran into

* Coming up with an innovative solution that differed from existing ones.
* Problems integrating voice for the notifications.
* Continuous integration of an LLM with my website was challenging.

## Accomplishments that we're proud of

* Coming up with a solution that is helpful and addresses an existing problem.
* Creating a functional website and deploying it online.

## What we learned

Building a functional website using CSS, HTML, and JavaScript.

## What's next for Health Guardian

Integrating LLM(s) via an API so that all surgical procedures and their individual steps stay up to date.
## Inspiration

Reading about issues with patient safety was... not exactly inspiring, but eye-opening. Issues that were only a matter of human error (understaffing, forgetfulness, etc.), like bed sores, seemed like things that could easily be tracked to at least give patients a heightened quality of life. So we decided to make an app that tracks patient wellness and needs, not just concrete items but all the necessary follow-up tasks as well. We understand that schedulers for more concrete events like appointments already exist, but something that reminds providers to check on patients in three days to see if they have had side effects from a new prescription, or any other task, would be helpful.

## What it does

Med-O-Matic keeps track of patient needs and when they're needed, and sets those needs up in a calendar and matching to-do list for a team of healthcare providers in a hospital to take care of. Providers can claim tasks they will get to, and can mark them off as they go through their day. This essentially serves as a scheduled task list for providers.

## How we built it

To build the frontend, we used Vue.js. We have a database holding all the tasks in AWS DynamoDB.

## Challenges we ran into

Getting started was a bit difficult, and we weren't really sure which direction to take Med-O-Matic. There were a lot of uncertainties about what exactly would be best for our application, so we had to dig deeper by thinking about the current process at hospitals and clinics and finding areas for improvement. This led us to address a process issue in task assignment to reduce the number of errors associated with inattentiveness.

## Accomplishments that we're proud of

What makes our application different from others is that you can sequence tasks and use these sequences as a template. For example, a procedure like heart surgery always has required follow-up steps. You can create a heart-surgery template that sets all the required follow-up steps. After the template is created, we can reapply it however many times we want! (A minimal sketch of this idea follows the writeup.)

## What we learned

We learned how to deploy using Defang, and how to connect our frontend with DynamoDB. We also learned more about the domain of our project, patient safety.

## What's next for Med-O-Matic

More automation would be next. We've already got a bit for making sequences of tasks, but features that help do the tasks as well, like a send-a-text feature to make following up easier, would come next; in other words, we'd add features that help complete the tasks instead of simply reminding providers of what they need to do. We would also connect it to a medical scheduler API such as Epic's. This would let the task sequencing work seamlessly with a real workflow: a surgery can be scheduled in Epic, happen, and then trigger Med-O-Matic to create all the necessary follow-up tasks.
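As referenced above, here is a minimal boto3 sketch of applying a task-sequence template; the table name, item schema, and day offsets are assumptions.

```python
import uuid
from datetime import date, timedelta
import boto3

table = boto3.resource("dynamodb").Table("Tasks")  # assumed table name

# A reusable template: (task description, days after the procedure).
HEART_SURGERY = [
    ("Check incision site", 1),
    ("Medication review", 3),
    ("Follow-up exam", 14),
]

def apply_template(patient_id: str, template: list, start: date) -> None:
    for description, offset in template:
        table.put_item(Item={
            "taskId": str(uuid.uuid4()),
            "patientId": patient_id,
            "description": description,
            "dueDate": (start + timedelta(days=offset)).isoformat(),
            "claimedBy": None,  # unclaimed until a provider picks it up
            "done": False,
        })

apply_template("patient-42", HEART_SURGERY, date.today())
```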
## Inspiration

In today's fast-paced world, highly driven individuals often overwork themselves without regard for how it impacts their health, only experiencing the consequences *when it is too late*. **AtlasAI** aims to bring attention to these health issues at an early stage, so that our users are empowered to live their best lives in a way that does not negatively impact their health.

## What it does

We realized that there exists a gap between today's abundance of wearable health data and meaningful, individualized solutions which users can implement. For example, many smart watches today are saturated with metrics such as *sleep scores* and *heart rate variability*, many of which actually mean nothing to their users in practice. Therefore, **AtlasAI** aims to bridge this gap to finally **empower** our users to use this health data to enhance the quality of their lives.

Using our users' individual health data, **AtlasAI** is able to:

* suggest event rescheduling
* provide *targeted*, *actionable* feedback
* recommend Spotify playlists depending on user mood

## How we built it

Our frontend was built with `NextJS`, with styling from `Tailwind` and `MaterialUI`. Our backend was built with `Convex`, which integrates technologies from `TerraAPI`, `TogetherAI` and `SpotifyAPI`. We used a two-phase approach to fine-tune our model. First, we utilized TogetherAI's base models to generate test data (a list of rescheduled JSON event objects for the day). Then, we picked logically sound examples to fine-tune our model. (A sketch of this kind of generation call follows the writeup.)

## Challenges we ran into

In the beginning, our progress was extremely slow, as **AtlasAI** integrates so many new technologies. We only had prior experience with `NextJS`, `Tailwind` and `MaterialUI`, which essentially meant that we had to learn how to create our entire backend from scratch.

**AtlasAI** also went through many iterations throughout this weekend as we strove to provide the best recommendations for our users. This involved long hours spent fine-tuning our `TogetherAI` models and testing features until we were satisfied with our product.

## Accomplishments that we're proud of

We are extremely proud that we managed to integrate so many new technologies into **AtlasAI** over the course of three short days.

## What we learned

In the development realm, we successfully mastered the integration of several valuable third-party applications such as Convex and TogetherAI. This expertise significantly accelerated our ability to construct lightweight prototypes that accurately embody our vision. Furthermore, we honed our collaborative skills through engaging in sprint cycles and employing agile methodologies, which collectively enhanced our efficiency and expedited our workflow.

## What's next for AtlasAI

Research indicates that health data can reveal critical insights into health symptoms like depression and anxiety. Our goal is to delve deeper into leveraging this data to furnish enhanced health insights as proactive measures against potential health ailments. Additionally, we aim to refine lifestyle recommendations for the user's calendar to foster better recuperation.
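As referenced above, a sketch of asking a Together-hosted model to emit rescheduled events, via its OpenAI-compatible REST endpoint; the model name, prompt, and payload are illustrative, not the team's fine-tune.

```python
import json
import requests

resp = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_TOGETHER_KEY"},
    json={
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # placeholder model
        "messages": [
            {"role": "system",
             "content": "Reschedule the user's day; reply as a JSON list of events."},
            {"role": "user", "content": json.dumps({
                "sleep_score": 48,  # hypothetical wearable metric
                "events": [{"title": "Gym", "start": "07:00"},
                           {"title": "Standup", "start": "09:30"}],
            })},
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```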
losing
# yhack

JuxtaFeeling is a Flask web application that visualizes the varying emotions between two people having a conversation through interactive graphs and probability data. Using the Vokaturi, IBM Watson, and Indicoio APIs, we analyze both written text and audio clips to detect the emotions of two speakers in real time. Acceptable file formats are .txt and .wav.

Note: To differentiate between speakers in written form, please include two new lines between different speakers in the .txt file.

Here is a quick rundown of JuxtaFeeling in our slideshow: <https://docs.google.com/presentation/d/1O_7CY1buPsd4_-QvMMSnkMQa9cbhAgCDZ8kVNx8aKWs/edit?usp=sharing>
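The speaker convention in the note above lends itself to a tiny parser; the alternating Speaker A/B labeling is an assumption about the file layout.

```python
def split_speakers(transcript: str) -> list[tuple[str, str]]:
    # Blocks are separated by blank lines, alternating between the two speakers.
    blocks = [b.strip() for b in transcript.split("\n\n") if b.strip()]
    return [("Speaker A" if i % 2 == 0 else "Speaker B", b)
            for i, b in enumerate(blocks)]

with open("conversation.txt") as f:
    for speaker, text in split_speakers(f.read()):
        print(f"{speaker}: {text}")
```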
## Inspiration

COVID revolutionized the way we communicate and work by normalizing remoteness. **It also created a massive emotional drought**: we weren't made to empathize with each other through screens. As video conferencing turns into the new normal, marketers and managers continue to struggle with remote communication's lack of nuance and engagement, and those with sensory impairments likely have it even worse. Given our team's experience with AI/ML, we wanted to leverage data to bridge this gap. Beginning with the idea of using computer vision to help sensory-impaired users detect emotion, we generalized our use case to emotional analytics and real-time emotion identification for video conferencing in general.

## What it does

In MVP form, empath.ly is a video conferencing web application with integrated emotional analytics and real-time emotion identification. During a video call, we analyze the emotions of each user sixty times a second through their camera feed (a sketch of this per-frame loop follows the writeup). After the call, a dashboard is available which displays emotional data and data-derived insights such as the most common emotions throughout the call, changes in emotions over time, and notable contrasts or sudden changes in emotion.

**Update as of 7.15am: We've also implemented an accessibility feature which colors the screen based on the emotions detected, to aid people with learning disabilities or emotion-recognition difficulties.**

**Update as of 7.36am: We've implemented another accessibility feature which uses text-to-speech to report emotions to users with visual impairments.**

## How we built it

Our backend is powered by Tensorflow, Keras and OpenCV. We use an ML model to detect the emotions of each user, sixty times a second. Each detection is stored in an array, and these are collectively stored in an array of arrays to be displayed later on the analytics dashboard. On the frontend, we built the video conferencing app using React and the Agora SDK, and imported the ML model using Tensorflow.js.

## Challenges we ran into

Initially, we attempted to train our own facial recognition model on 10-13k datapoints from Kaggle, the maximum load our laptops could handle. However, the results weren't too accurate, and we ran into issues integrating this with the frontend later on. The model's still available in our repository, and with access to a PC, we're confident we would have been able to use it.

## Accomplishments that we're proud of

We overcame our ML roadblock and managed to produce a fully functioning, well-designed web app with a solid business case for B2B data analytics and B2C accessibility. And our two last-minute accessibility add-ons!

## What we learned

It's okay, and in fact in the spirit of hacking, to innovate and leverage pre-existing builds. After our initial ML model failed, we found and utilized a pretrained model which proved to be more accurate and effective. Inspiring users' trust and choosing a target market receptive to AI-based analytics is also important; that's why our go-to-market will focus on tech companies that rely on remote work and are staffed by younger employees.

## What's next for empath.ly

From short-term to long-term stretch goals:

* We want to add on AssemblyAI's NLP model to deliver better insights. For example, the dashboard could highlight times when a certain sentiment in speech triggered a visual emotional reaction from the audience.
* We want to port this to mobile and enable camera feed functionality, so those with visual impairments or other difficulties recognizing emotions can rely on our live detection for in-person interactions.
* We want to apply our project to the metaverse by allowing users' avatars to emote based on emotions detected from the user.
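As referenced in the writeup, the per-frame loop pairs a face detector with a pretrained emotion classifier. A minimal Python sketch; the model file and the 48x48 FER-style input shape are assumptions (the production app runs a similar model in Tensorflow.js).

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("emotion_model.h5")  # hypothetical pretrained model
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_emotion(frame: np.ndarray) -> str | None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

# Detections accumulate into the array-of-arrays shown on the dashboard.
call_history: list[list[str]] = [[]]  # one inner list per user
```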
## Inspiration

We were all interested in emotional analysis, and we felt that this was a really cool and effective tool for anyone to use. This tool can be used by anyone trying to make a better presentation, be that a student, public speaker, or actor.

## What it does

We have built two separate but related tools that work together to help people make the most compelling presentation possible. For someone who is trying to write a document of some sort, we have the Live Sentiment Analysis tool. This is a web app where someone can edit their work and home in on a targeted emotional impact. The Sentiment Analysis tool uses the Watson NLP API to get document-level and sentence/clause-level analysis of the emotional content of the text. We provide regular feedback and updates on the overall and more specific emotional content of a document, as well as how your edits are changing that emotional content. The second tool helps people master the audio portion of the presentation. Anyone who wants to use the tool can record an audio file and upload it. We use the Google Voice API to extract the text data from this recording. Then we send this text data to the Watson API to perform sentiment analysis on each clause in the presentation. We also analyze audio data from the mp3 file with the DeepAffects model, which recognizes the emotional content of speech without incorporating information about what words are being spoken. Then we compare the clause-level emotional tags from the text and the audio data to see whether the person is really able to match their voice to their words and captivate the audience.

## How we built it

## Challenges we ran into

We had some challenges integrating APIs and integrating the frontend and backend. Another big issue was making the product as effective as possible. For example, we worked on the scoring function so that it would provide good results. The function combines data from the text analysis and the audio analysis, and we had to determine a way of combining this data that reasonably represents the quality of someone's speech (a minimal sketch of one such rule follows the writeup). Another minor challenge was choosing an effective way to extract the clauses. For the real-time text analysis, we had to make the clauses long enough to carry meaningful emotional data, but small enough that the user could get frequent feedback.

## Accomplishments that we're proud of

We ran audio analysis using a deep neural network. The training data is really hard to find, and we built upon previous models like DeepAffects. Extending the classic neutral, positive, and negative tagging, we are proud that our application predicts nuanced emotions and scores from your voice using a complex algorithm and maps them to the actual sentiment from IBM Watson.

## What we learned

We came to understand that there is not much publicly available data for audio training, so it was important to build upon previously trained models and use RESTful APIs wherever possible. At the same time, we learned microservices, communication between client and server, and using jQuery and JavaScript (as back-end engineers).

## What's next for Hearo

We want to keep building the product that we started at CalHacks 6.0. Newer features could include live emotion tagging from the microphone, reducing latency, and improving the accuracy of the models used.
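As referenced above, one plausible scoring rule (an assumption, not the team's exact formula): measure how often the vocal emotion matches the text emotion on each clause.

```python
def delivery_score(text_tags: list[str], audio_tags: list[str]) -> float:
    # Fraction of clauses where voice and words express the same emotion.
    pairs = list(zip(text_tags, audio_tags))
    matches = sum(1 for t, a in pairs if t == a)
    return matches / len(pairs) if pairs else 0.0

text_tags = ["joy", "sadness", "joy", "anger"]   # e.g. from IBM Watson
audio_tags = ["joy", "neutral", "joy", "anger"]  # e.g. from the DeepAffects model
print(f"Voice-text alignment: {delivery_score(text_tags, audio_tags):.0%}")
```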
winning
[Play The Game](https://gotm.io/askstudio/pandemic-hero)

## Inspiration

Our inspiration comes from concern over **misinformation** surrounding **COVID-19 vaccines** in these challenging times. As students, not only do we love to learn, but we also yearn to share the gifts of our knowledge and creativity with the world. We recognize that fun and interactive ways to learn crucial information related to STEM and current events are rare, so we aim to give everyone this opportunity using the product we have developed.

## What it does

In the past 24 hours, we have developed a pixel-art RPG game. In this game, the user becomes a scientist who has experienced the tragedies of COVID-19 and is determined to find a solution. Become the **Hero of the Pandemic** by overcoming challenging puzzles that give you a general understanding of the Pfizer-BioNTech vaccine's development process, myths, and side effects. Immerse yourself in the original artwork and touching storyline. At the end, complete a short feedback survey and get an immediate analysis of your responses through our **machine learning model**, receiving additional learning resources tailored to your experience to further your knowledge and curiosity about COVID-19 (a minimal sketch of such a recommender follows the writeup). Team A.S.K. hopes that through this game, you become further educated by the knowledge you attain and inspired by your potential for growth when challenged.

## How I built it

We built this game primarily using the Godot game engine, a cross-platform open-source engine that provides the design tools and interfaces to create games. The engine mostly uses GDScript, a Python-like dynamically typed language designed explicitly for the Godot engine. We chose Godot to ease cross-platform support through the OpenGL API, and GDScript as a relatively programmer-friendly language.

We started off using **Figma** to plan out and identify a theme based on type and colour. Afterwards, we separated components into groupings that maintain similar characteristics, such as outlined labels and movable objects with no outlines. Finally, as we discussed new designs, we added them to our pre-made categories to create a consistent, user-experience-driven UI.

Our machine learning model is a content-based recommendation system built with Scikit-learn, which works with data that users provide implicitly through a brief feedback survey at the end of the game. Additionally, we made a server using the Flask framework to serve our model.

## Challenges I ran into

Our first significant challenge was navigating the plethora of game features possible with GDScript and continually referring to the documentation. Although Godot is heavily documented, as an open-source engine it has frequent bugs with rendering, layering, event handling, and more, which we creatively overcame.

A prevalent design challenge was learning and creating pixel art with the time constraint in mind. To accomplish this, we methodically used as many shortcuts and tools as possible to copy/paste or select repetitive sections.

Additionally, incorporating machine learning in our project was a challenge in itself. Sending requests, displaying JSON, and making the recommendations selectable were considerable challenges using Godot and GDScript.

Finally, the biggest challenge of game development for our team was balancing **UX-driven** considerations: finding the middle ground between a fun, challenging puzzle game and an educational experience that leaves some form of impact on the player.
Brainstorming and continuously modifying the story-line while implementing the animations using Godot required a lot of adaptability and creativity. ## Accomplishments that I'm proud of We are incredibly proud of our ability to bring our past experiences gaming into the development process and incorporating modifications of our favourite gaming memories. The development process was exhilarating and brought the team down the path of nostalgia which dramatically increased our motivation. We are also impressed by our teamwork and team chemistry, which allowed us to divide tasks efficiently and incorporate all the original artwork designs into the game with only a few hiccups. We accomplished so much more within the time constraint than we thought, such as training our machine learning model (although with limited data), getting a server running up and quickly, and designing an entirely original pixel art concept for the game. ## What I learned As a team, we learned the benefit of incorporating software development processes such as **Agile Software Development Cycle.** We solely focused on specific software development stages chronologically while returning and adapting to changes as they come along. The Agile Process allowed us to maximize our efficiency and organization while minimizing forgotten tasks or leftover bugs. Also, we learned to use entirely new software, languages, and skills such as Godot, GDScript, pixel art, and design and evaluation measurements for a serious game. Finally, by implementing a Machine Learning model to analyze and provide tailored suggestions to users, we learned the importance of a great dataset. Following **Scikit-learn** model selection graph or using any cross-validation techniques are ineffective without the data set as a foundation. The structure of data is equally important to manipulate the datasets based on task requirements to increase the model's score. ## What's next for Pandemic Hero We hope to continue developing **Pandemic Hero** to become an educational game that supports various age ranges and is worthy of distribution among school districts. Our goal is to teach as many people about the already-coming COVID-19 vaccine and inspire students everywhere to interpret STEM in a fun and intuitive manner. We aim to find support from **mentors** along the way, who can help us understand better game development and education practices that will propel the game into a deployment-ready product. ### Use the gotm.io link below to play the game on your browser or follow the instructions on Github to run the game using Godot
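The content-based recommender described above can be sketched in a few lines of scikit-learn; the resource list, route name, and survey payload here are illustrative assumptions, not the project's actual data:

```python
# Hedged sketch: TF-IDF over resource descriptions, cosine similarity
# against free-text survey feedback, served from a Flask endpoint.
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

RESOURCES = [  # hypothetical learning resources
    "How mRNA vaccines like Pfizer-BioNTech work",
    "Common myths about COVID-19 vaccination",
    "Understanding vaccine side effects and safety trials",
]

vectorizer = TfidfVectorizer(stop_words="english")
resource_matrix = vectorizer.fit_transform(RESOURCES)

app = Flask(__name__)

@app.route("/recommend", methods=["POST"])
def recommend():
    feedback = request.json["feedback"]       # free-text survey answers
    query = vectorizer.transform([feedback])  # embed in the same space
    scores = cosine_similarity(query, resource_matrix)[0]
    best = scores.argsort()[::-1][:2]         # top-2 matches
    return jsonify([RESOURCES[i] for i in best])
```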
## Inspiration
This project was inspired by the Professional Engineering course taken by all first-year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice given by St. Peter's Residence at Chedoke, a long-term residential care home located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a falling alarm to notify the nurses in the event of one of the residents having fallen.
## What it does
It notifies nurses if a resident falls or stumbles via a push notification to the nurses' phones directly, or ideally to a nurses' station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has suffered a fall.
## How we built it
We used a Particle Photon microcontroller alongside an MPU6050 gyro/accelerometer to collect information about the movement of a resident's foot and determine whether the movement mimics the patterns of a typical fall. Once a typical fall has been read by the accelerometer, we use Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident.
## Challenges we ran into
While developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs position the feet as if the resident is lying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax. To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in foot movement that are consistent with a fall. The two main patterns we focused on were (see the sketch after this writeup):
1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground. (Critical alert sent to emergency contact).
2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground). (Warning alert sent to emergency contact).
## Accomplishments that we're proud of
We are proud of developing an algorithm that consistently communicates the safety of a resident to an emergency contact. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component into the small area cut out of the sole.
## What we learned
We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process.
## What's next for VATS
Next steps would be to optimize our circuits by using equivalent components in a much smaller form. By doing this, we would decrease the footprint (pun intended) of our design within a client's shoe. Additionally, we would explore other areas of a shoe where we could house our system (such as the tongue).
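The firmware itself runs in C++ on the Particle Photon, but the two jerk patterns can be illustrated with a small Python sketch; the thresholds, sample rate, and the (acceleration, tilt) sample format are assumptions chosen for illustration:

```python
# Illustrative version of the two fall patterns described above.
import numpy as np

IMPACT_JERK = 30.0    # m/s^3, assumed threshold for a sudden impact
STRUGGLE_JERK = 8.0   # m/s^3, assumed threshold for sharp ongoing movement
FALL_TILT = 60.0      # degrees away from the shoe's normal flat orientation

def classify(samples, dt=0.05):
    """samples: list of (accel_magnitude_m_s2, tilt_deg) at dt-second intervals."""
    accel = np.array([s[0] for s in samples])
    tilt = np.array([s[1] for s in samples])
    jerk = np.abs(np.diff(accel)) / dt  # rate of change of acceleration

    on_side = tilt[-1] > FALL_TILT  # shoe ended up roughly perpendicular to the ground
    if on_side and jerk.max() > IMPACT_JERK:
        return "CRITICAL"  # pattern 1: sudden impact, then shoe on its side
    window = int(3 / dt)   # pattern 2: slow fall, then >= 3 s of sharp struggle
    if on_side and len(jerk) >= window and (jerk[-window:] > STRUGGLE_JERK).all():
        return "WARNING"
    return "OK"
```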
## Inspiration
We all had our own contributions to the creation of our idea; an educational game seemed to suit all of our interests best, allowing us to explore cybersecurity and ethical issues in the future of AI while marvelling at the innovations of machine learning. In addition, most of us could learn a new skill: game development in Unity. It all came together as we were able to think more creatively than we would have if we hadn't made a story-based, educational project, rather than hyper-focusing on the technical aspects. We were more inclined to strike a balance of creativity with technicality than to be purely technical.
## What it does
The game takes place in a sci-fi, data-filled world, representing the datasets and machine learning models that contribute to the creation of an AI model. The main character, a cute little piece of data, is tasked with "fixing" the world it is in by playing minigames that solve the issues with the data.
* The first minigame represents cleaning data: looking for "bad" data to destroy and "completing" the incomplete data. This is a timed minigame, with the goal of destroying 10 data and completing 10 data within a minute, chasing after the enemies to fix them.
* The second minigame represents data privacy and avoiding the use of sensitive information in training AI models. This is done in a ChatGPT-themed, Space Invaders-style minigame.
* The last minigame (incomplete) is meant to teach other considerations that AI developers must keep in mind with regards to ethics. The main character is taken into a dinosaur-runner-style game, where it must collect the hearts that represent the ethical concerns taken into consideration, symbolizing that these moral values are kept in mind when building the AI. This part of the final product is not fully finished, so it would need further development to fully serve its purpose.
After these three minigames are completed, the AI model is ready to be trained and developed, and the main character has saved this doomed world, allowing it to contribute to the innovation that the future of AI holds.
## How we built it
Each person on the team took on a task. We broke them up as such:
* Minigame teleportation system, main map, and gameflow
* Asset/graphic design, main menu
* Minigame 1
* Minigame 2
* Minigame 3 (for whoever finished their part first)
## Challenges we ran into
As most of us were not familiar with Unity, a lot was learned as we went along, and many times from mistakes. Some challenges we ran into were:
* Collaborating on a Unity project with more than 3 people is complicated, so we had to use a git repository for assets only. We each worked on a different minigame/aspect of the game and brought all of the scenes together on one computer at the end to complete the final product.
* Half of our team had little to no experience with game development in Unity, so much of their time was spent learning, and things moved a little slower as they got used to the structure and workflow, constantly debugging.
* We started the project confident we could make it aesthetically pleasing and fully complete, especially with a team of 4 people. But we only had 3 team members present on the second QHacks day due to unforeseen circumstances, making the development process lengthier and more hectic for us.
## Accomplishments that we're proud of
* We made something we like! We came to QHacks not knowing what to expect, and we ended it feeling accomplished. For our first hackathon, we feel pride in the game we made, especially considering how much of it we learned on the spot and have now added to our skillset. We learned a lot and produced some impressive logic within our minigames despite being new to Unity. The fact that we have working games after spending ages debugging, wondering if it would ever work, is a huge accomplishment.
* We think the graphics in our game are quite visually appealing: sleek and simple. This gives them their charm, and they make the game look put together even if the game is not fully finished.
## What we learned
* Setting priorities within tasks is imperative when on a time crunch. We knew we wanted good graphics to add appeal to the game while our minigames started off simple, but we also knew when to switch gears from asset design to coding.
* Not everything will be perfect. It is okay to leave things to fix later if they are not crucial to the functioning of a program, and it is also okay if there isn't time to get them done eventually. The point of a hackathon is to do what you can, to search your mind at its greatest depths and produce creative ideas, not to ship a full-stack, fully functioning application.
## What's next for AI Safety Squad
We had always planned to add multiple minigames to place greater emphasis on more aspects of the ethics and cybersecurity concerns of AI, so we will definitely expand the number of tasks the main character has to go through to make the AI a good, valid model. We would also like to add more complexity to the whole game, both to expand our own knowledge of Unity further and to increase the game's quality and refine it into a more finished product. We would do this on the graphics side by adding animations to make things more visually pleasing, and on the game-logic side by making more interesting levels that require more time and effort. Lastly, we would like to solidify the story of the game and present more information about the backend of AI to players to make the game even more educational. We would like to make it as useful as possible to those who may not be well versed in the future of AI in relation to cybersecurity and privacy.
## Inspiration
The question "is this recyclable?" is an all too common one. We noticed that both we and our peers have a hard time deciding which bin certain trash items belong in. Finding resources on your local waste regulations is currently very complicated, as a Google search only turns up long, unreadable databases. This problem had us asking how we could design a solution that concisely tells someone exactly how to dispose of a given waste item, ultimately encouraging people to dispose of their trash correctly. Bin Buddy aims to provide a user-friendly experience that removes the hassle of sorting trash items and is actively ***making a change*** in the amount of recyclable waste that reaches recycling facilities instead of landfills.
## What it does
A user can input any item whose recyclability they want to know. This input is then compared to the City of Toronto's waste recyclability database, using OpenAI to find the most similar entry. Relevant, concise information on how to properly dispose of the waste item is then displayed to the user. For items that cannot be traditionally recycled, such as electronic waste, the user is prompted to view our Depots Near Me feature, which provides nearby locations that accept these items. For more information about recycling guidelines and current statistics, users can view our Guidelines feature.
## How we built it
Our application is developed using React.js for our frontend and a Flask app for our backend. Input is gathered from the user by two different methods: text or speech. This input is then posted to the backend, where we use OpenAI GPT-3.5 to determine the most relevant information by comparing the user input to the options available in the City of Toronto's waste recyclability database (a rough sketch of this endpoint appears after this writeup). A concise summary of our findings is returned to the frontend, where the results are displayed clearly to the user. The Depots Near Me feature is powered by the Google Maps Places API, which uses a user-selected location to create a map display of nearby waste-depot locations.
## Challenges we ran into
**Finding a problem**: Finding a meaningful and relevant project idea was very important to us, as we wanted our project to have a large impact on the world. Aside from debugging code, this was the most time-consuming and difficult part of our project. Being able to communicate with each other and discuss different solutions was essential to the success of our project.
**Frontend & Backend**: Challenges we faced include the connection between our React frontend and our Flask backend. Navigating CORS errors was both a tedious and time-consuming task that delayed our progress.
## Accomplishments that we are proud of
As a team of new hackers, we approached this hackathon with the goal of producing a working final product. We specifically chose to tackle a solution that was outside our comfort zone, which challenged each team member. Finishing this project with a solution that accomplishes our initial goals is something each of us is very proud of.
## What we learned
**Figma**: A majority of our team members had never used Figma prior to this project. This provided a great learning opportunity to develop our UI/UX skills by prototyping what we wanted our solution to look like before transferring our visions to code.
**Full Stack Web Application**: Creating a full-stack web application is extremely involved and required us to do extensive research due to our inexperience. Navigating the many bugs we encountered forced us to properly understand the functionality of our program, leading to a thorough understanding of proper web-app practices.
## What's next for Bin Buddy
In the future, we plan to expand Bin Buddy outside of Toronto by incorporating recycling databases for cities all around the world.
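As a rough illustration of the backend flow (not the team's exact code), a Flask endpoint with CORS enabled that asks GPT-3.5 for the closest database entry might look like this; the route name, prompt wording, and sample item list are assumptions:

```python
# Hedged sketch of a CORS-enabled classification endpoint.
from flask import Flask, request, jsonify
from flask_cors import CORS
from openai import OpenAI

app = Flask(__name__)
CORS(app)  # avoids the cross-origin errors mentioned above
client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEMS = ["plastic bottle", "pizza box", "batteries", "coffee cup"]  # sample subset

@app.route("/classify", methods=["POST"])
def classify():
    query = request.json["item"]
    prompt = (
        f"Which entry in this list best matches '{query}'? "
        f"Answer with the entry only.\nList: {', '.join(ITEMS)}"
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"match": reply.choices[0].message.content.strip()})
```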
## Inspiration
According to Global News Canada, a staggering one-third of recyclable materials are mistakenly sent to landfills each year. This issue has been compounded by the COVID-19 pandemic, leading to a notable increase in litter on Toronto's streets, as highlighted in a report by the Toronto Star. It is evident that Toronto faces a significant waste disposal problem, posing environmental concerns and carrying substantial financial implications: the annual cost of citywide clean-up efforts amounts to millions of dollars. This web application follows the explosive growth of IoT by using the webcam to capture images and connect that data with other systems, recognizing each item with an AI model. Furthermore, the user can file a report about street littering and access a real-time map of reported street-littering locations. Consequently, the city council can analyze the data, monitor more productively, and assist in environmental protection.
## What it does
This web application uses webcam technology to capture images, which are then processed by the GPT-4-Vision-Preview model to recognize the corresponding items. Additionally, the model is provided with a predetermined list of classifications in a specific format, ensuring deterministic responses. This streamlined approach enhances accuracy and efficiency in item identification and classification. Moreover, users have the option to report instances of street littering and access a real-time map displaying the locations of reported cases. This data empowers the city council to analyze trends and enhance monitoring efforts, thereby boosting productivity and supporting environmental protection initiatives.
## How we built it
**Front End**: We initiated the development process by crafting an intuitive UI using Figma, a design tool that helped us conceptualize and refine the user interface. With our design in place, we translated it into code using React, a powerful framework known for its ability to create dynamic and interactive web applications. React enabled us to build a user-friendly interface that responds quickly to user interactions, enhancing the overall user experience.
**Back End**: In the backend infrastructure, we leveraged a combination of technologies to ensure robust functionality. MongoDB served as our primary database solution, offering scalability and flexibility for storing various types of data. We harnessed its document-oriented architecture to efficiently manage data related to user reports and disposal information. Additionally, we utilized Flask, a lightweight web framework in Python, for API routing and handling HTTP requests. Flask provided a streamlined approach to developing RESTful APIs, allowing us to define endpoints and manage communication between the frontend and backend components effectively. Furthermore, we integrated pre-trained models from OpenAI for image processing and classification, enhancing our system's ability to recognize and categorize items from user-uploaded images (see the sketch after this writeup). This integration enabled us to provide users with accurate disposal information based on real-time analysis of uploaded images. In the event of a user filing a report through the web application, pertinent details such as location data and waste type were collected and securely stored in the MongoDB database. We then leveraged the Google Maps API to visualize reported cases in real time, providing users with a comprehensive overview of waste management activities in their area directly from the frontend interface.
## Challenges we ran into
1. **Non-deterministic Responses from the ChatGPT Model**: The replies from the ChatGPT image recognition model were inconsistent and sometimes deviated from the specified format. We had to apply prompt engineering techniques to stabilize the responses.
2. **Conversion Issues from Figma to HTML/CSS**: Converting designs from Figma to HTML/CSS proved problematic, as they didn't translate seamlessly. Consequently, we had to extensively revise and essentially redesign the UI from scratch.
3. **Limitations of the ChatGPT Model's Token Budget**: The ChatGPT model had a restricted token limit, which was insufficient for our needs. We couldn't provide it with the full list of classifications due to the token cap, hindering our ability to leverage the model to its fullest potential.
4. **Mismanagement of API Keys**: Managing API keys posed a challenge, as we inadvertently exposed secret keys on GitHub during our initial commits. As a result, our first ChatGPT API keys were revoked, necessitating additional security measures and caution moving forward.
5. **Initial Plan with Zero-Shot Image Classification**: Initially, we intended to employ a zero-shot image classification model from Hugging Face. However, due to constraints in processing power, classifying each image locally wasn't feasible. Consequently, we decided to use ChatGPT instead, leveraging its cloud-based processing capabilities. This shift enabled us to overcome the processing limitations while still achieving our project goals efficiently.
## Accomplishments that we're proud of
1. **Successful Implementation of Webcam Image Recognition**: We take pride in successfully implementing a robust image recognition system utilizing a webcam, providing users with seamless and efficient functionality.
2. **Accurate Map Plotting from User Data**: Our achievement in developing an active and precise map-plotting feature, which seamlessly integrates user-generated data, underscores our commitment to providing valuable and relevant information in real time.
3. **Utilization of an Image Recognition Model for Garbage Prediction**: We're proud of successfully integrating an image recognition model into our system, enabling accurate prediction of the type of garbage from user-uploaded images. This innovative approach enhances the efficiency and effectiveness of waste management processes, contributing to a cleaner and more sustainable environment.
4. **Intuitive and User-Friendly Design**: We're proud to have created an intuitive and user-friendly design and UI, catering to users of all experience levels and ensuring a positive and engaging user experience.
## What's next for Trash Cam
Moving forward, our team is eager to expand our project into mobile application development, offering users even more accessible solutions to address waste management challenges. We are dedicated to enhancing the accuracy of our image recognition model, particularly in navigating "gray area" cases like determining the proper disposal method for contaminated pizza boxes. Additionally, we plan to integrate advanced machine learning techniques, such as the K-Nearest Neighbors (KNN) algorithm, to predict the type or location of potential littering cases. By incorporating these innovations, we aim to propel the urban technological revolution forward and accelerate the development of sustainable cities.
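To make the deterministic-response trick concrete, here is a hedged sketch of sending a base64-encoded webcam frame to `gpt-4-vision-preview` with the reply constrained to a fixed label list; the labels and prompt wording are illustrative, not the project's exact classification list:

```python
# Hedged sketch: constrained image classification via the vision model.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
LABELS = ["recycling", "organics", "garbage", "electronic waste"]  # assumed labels

def classify_frame(jpeg_bytes: bytes) -> str:
    image_b64 = base64.b64encode(jpeg_bytes).decode()
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Classify the main item in this photo. Reply with "
                         f"exactly one of: {', '.join(LABELS)}. No other text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().lower()
```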
## Inspiration
The inspiration for merchflow was the Google Form that PennApps sent out regarding shipping swag. We found the question regarding distribution on campuses particularly odd, but it made perfect sense after a bit more thought: after all, shipping a few large packages is cheaper than many small shipments. But then we started considering the logistics of such an arrangement, particularly how the event organizers would have to figure out these shipments manually. Thus the concept of merchflow was born.
## What it does
Merchflow is a web app that allows event organizers (like those of a hackathon) to easily determine the optimal shipping arrangement for swag (or, more generically, for any package) to event participants. Below is our design for merchflow. First, the event organizer provides merchflow with the contact info (email) of the event participants. Merchflow then sends out emails on behalf of the organizer with a link to a form and an event-specific code. The form asks for information such as the shipping address and whether the participant would be willing to distribute swag to other participants nearby. This information is sent back to merchflow's underlying database, Firestore, and updates the organizer's dashboard in real time. Once the organizer is ready to ship, merchflow computes the best shipping arrangement based on the participants' locations and willingness to distribute. This is done according to a shipping algorithm that we define to minimize the number of individual shipments required (which in turn lowers the overall shipping costs for the organizer); one plausible shape of such an algorithm is sketched after this writeup.
## How we built it
Given the scope of PennApps and the limited time we had, we decided to focus on designing the concept of merchflow and building out its front-end experience. While there is much work to be done on the backend, we believe what we have so far provides a good visualization of its potential. Merchflow is built using React.js and Firebase (and related services such as Firestore and Cloud Functions). We ran into many issues with Firebase and ultimately were not able to fully utilize it; however, we were able to successfully deploy the web app to the provided host. With React.js, we used Bootstrap, started off with Airframe React templates, and built our own dashboard, tabs, forms, tables, etc., custom to our design and expectations for merchflow. The dashboard and tabs are designed and built with responsiveness in mind, as well as an intention to pursue a minimalistic, clean style. For functionality where our backend isn't operational yet, we used faker.js to populate the app with data to simulate the real experience an event planner would have.
## Challenges I ran into
During the development of merchflow, we ran into many issues. The main one was that we were unable to get Firebase authentication working with our React app. We tried following several tutorials and documentation pages; however, it was just something we could not resolve in the time span of PennApps. Therefore, we focused our energy on polishing the front end and the design of the project so that we could relay our project concept well even without the backend being fully operational. Another issue we encountered was with Firebase deployment (while we weren't able to connect to any Firebase SDKs, we were still able to register the web app as a Firebase app and deploy to the provided hosted site). During deployment, we noticed that the color theme was not displaying properly compared to what we had locally. Since we specified the colors in node\_modules (a folder that we do not commit to Git), we thought that by moving the specific color-variable .scss file out of node\_modules and changing the import paths, we would be able to fix it. And it did fix it, but it took quite some time to realize this because the browser had cached the site prior to the change, so the fix didn't propagate over immediately.
## Accomplishments that I'm proud of
We are very proud of the level of polish in our design and React front end. As a concept, we fleshed out merchflow quite extensively and considered many different aspects and features that would be required of an actual service that event organizers would use. This includes dealing with authentication, data storage, and data security. Our diagram describes the infrastructure of merchflow quite well and clearly lays out the work ahead of us. Likewise, we spent hours reading through how the Airframe template was built before being able to customize it and add on top of it, and in the process we gained a lot of insight into how React projects should be structured and how each file and component connects with the others. Ultimately, we were able to turn what we dreamed of in our designs into something real that we can present to someone else.
## What I learned
As a team, we learned a lot about web development (which neither of us is particularly strong in), specifically regarding React.js and Firebase. For React.js, we didn't know the full extent of what modularizing components could bring in terms of scale and clarity. We interacted with and learned the workings of SCSS and JavaScript, including the faker.js package, on the fly as we built out merchflow's front end.
## What's next for merchflow
While we are super excited about our front end, unfortunately there are still a few gaps to close before merchflow becomes an operational tool for event organizers, primarily on the backend and Firebase side. We need to resolve the Firebase connection issues we were experiencing so we can actually get a backend working for merchflow. After we integrate Firebase into the React app, we can start connecting the fields and participant list to Firestore, which will maintain these documents keyed on the event organizer's user id (preventing unauthorized access and modification). Once that is complete, we can focus on the two main features of merchflow: sending out emails and calculating the best shipping arrangement. Both of these features would be implemented via a Cloud Function and would work with the underlying data stored in Firestore. Sending out emails could be achieved using a service such as Twilio SendGrid with the emails the organizer has provided. Computing the best arrangement would require a bit more work to figure out an algorithm; regardless of the algorithm, it would likely utilize the Google Maps API (or some other maps API) to calculate the distance between addresses (and thus determine viability for proxy distribution).
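As one plausible shape for the shipping algorithm (purely a sketch, not merchflow's implementation), a greedy pass could attach each participant to a willing distributor within some radius, using great-circle distance between geocoded addresses:

```python
# Sketch: greedy grouping of participants around willing distributors.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def plan_shipments(participants, radius_km=15):
    """participants: list of dicts with 'coords' (lat, lon) and a 'distributor' flag.
    Returns {hub index: [participant indices served]}; each group becomes one
    bulk shipment instead of many individual ones."""
    hubs = [i for i, p in enumerate(participants) if p["distributor"]]
    plan = {i: [i] for i in hubs}  # each hub receives its own swag in the bulk box
    for i, p in enumerate(participants):
        if i in plan:
            continue
        near = [h for h in hubs
                if haversine_km(p["coords"], participants[h]["coords"]) <= radius_km]
        if near:
            best = min(near, key=lambda h: haversine_km(p["coords"], participants[h]["coords"]))
            plan[best].append(i)
        else:
            plan[i] = [i]  # no nearby distributor: ship individually
    return plan
```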
## Inspiration
Feeling major self-doubt when you first start hitting the gym, or injuring yourself accidentally while working out, are not uncommon experiences. This inspired us to create Core, a platform that empowers our users to take control of their well-being by removing the financial barriers around fitness.
## What it does
Core analyses the movements performed by the user and provides live auditory feedback on their form, allowing them to stay fully present and engaged during their workout. Our users can also take advantage of the visual indications on the screen, where they can view a graph of the keypoints, which can be used to reduce the risk of potential injury.
## How we built it
Prior to development, a prototype was created in Figma, which was used as a reference point when the app was developed in ReactJS. To recognize the user's joints and perform the analysis, TensorFlow's MoveNet model was integrated into Core.
## Challenges we ran into
Initially, we planned for Core to be a mobile application built using React Native, but as we developed a better understanding of the structure, we saw more potential in a cross-platform website. Our team was relatively inexperienced with the technologies used, which meant learning had to be done in parallel with development.
## Accomplishments that we're proud of
This hackathon allowed us to develop code in ReactJS, and we hope that our learnings can be applied to our future endeavours. Most of us were also new to hackathons, and it was really rewarding to see how much we accomplished throughout the weekend.
## What we learned
We gained a better understanding of the technologies used and learned how to develop for the fast-paced nature of hackathons.
## What's next for Core
Currently, Core uses TensorFlow to track several keypoints and analyzes the information with mathematical models to determine the statistical probability of the correctness of the user's form. However, there's scope for improvement by implementing a machine learning model trained on big data to yield higher performance and accuracy. We'd also love to expand our collection of exercises to include a wider variety of workouts.
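Core runs the TensorFlow.js flavour of MoveNet in the browser, but the keypoint-extraction step can be sketched in Python with TensorFlow Hub; the preprocessing follows the 192x192 int32 input format from the public MoveNet Lightning example:

```python
# Sketch: pull 17 pose keypoints for one frame with MoveNet Lightning.
import tensorflow as tf
import tensorflow_hub as hub

movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")

def keypoints(frame_rgb):
    """frame_rgb: HxWx3 uint8 image -> 17 keypoints of (y, x, confidence)."""
    img = tf.image.resize_with_pad(tf.expand_dims(frame_rgb, 0), 192, 192)
    img = tf.cast(img, tf.int32)  # MoveNet Lightning expects int32 input
    outputs = movenet.signatures["serving_default"](img)
    return outputs["output_0"].numpy()[0, 0]  # shape (17, 3)
```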
# Links
Youtube: <https://youtu.be/VVfNrY3ot7Y> Vimeo: <https://vimeo.com/506690155>
# Soundtrack
Emotions and music meet to give a unique listening experience where the songs change to match your mood in real time.
## Inspiration
The last few months haven't been easy for any of us. We're isolated and getting stuck in the same routines. We wanted to build something that would add some excitement and fun back to life, and help people's mental health along the way. Music is something that universally brings people together and lifts us up, but it's imperfect. We listen to the same favourite songs, and it can be hard to find something that fits your mood. You can spend minutes just trying to find a song to listen to. What if we could simplify the process?
## What it does
Soundtrack changes the music to match people's mood in real time. It introduces them to new songs, automates the song selection process, and brings some excitement to people's lives, all in a fun and interactive way. Music has a powerful effect on our mood. We choose new songs to help steer the user towards being calm or happy, subtly helping their mental health in a relaxed and fun way that people will want to use. We capture video from the user's webcam, feed it into a model that can predict emotions, generate an appropriate target tag, and use that target tag with Spotify's API to find and play music that fits. If someone is happy, we play upbeat, "dance-y" music. If they're sad, we play soft instrumental music. If they're angry, we play heavy songs. If they're neutral, we don't change anything.
## How we did it
We used Python with the OpenCV and Keras libraries, as well as Spotify's API.
1. Authenticate with Spotify and connect to the user's account.
2. Read the webcam feed.
3. Analyze the webcam footage with OpenCV and a Keras model to recognize the current emotion.
4. If the emotion lasts long enough, send Spotify's search API an appropriate query and add the result to the user's queue.
5. Play the next song (with fade out/in).
6. Repeat 2-5.
For the web app component, we used Flask and tried to use Google Cloud Platform, with mixed success. The app can be run locally, but we're still working out some bugs with hosting it online.
## Challenges we ran into
We tried to host it as a web app and got it running locally with Flask, but had some problems connecting it to Google Cloud Platform. Making calls to the Spotify API pauses the video; reducing the calls to the API helped (faster fade out and in between songs). We tried to recognize a hand gesture to skip a song, but ran into some trouble combining that with other parts of our project and finding decent models.
## Accomplishments that we're proud of
* Making a fun app with new tools!
* Connecting different pieces in a unique way.
* We got to try out computer vision in a practical way.
## What we learned
How to use the OpenCV and Keras libraries, and how to use Spotify's API.
## What's next for Soundtrack
* Connecting it fully as a web app so that more people can use it
* Allowing for a wider range of emotions
* User customization
* Gesture support
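Steps 4 and 5 of the pipeline might look roughly like this with the `spotipy` client; the emotion-to-query table is an assumed mapping, and the real app adds fade-out/fade-in around the track change:

```python
# Hedged sketch: map the detected emotion to a search query, queue the top hit.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-modify-playback-state"))  # credentials come from env vars

EMOTION_TO_QUERY = {          # assumed mapping, tune to taste
    "happy": "upbeat dance",
    "sad": "soft instrumental",
    "angry": "heavy metal",
}

def queue_song_for(emotion: str):
    query = EMOTION_TO_QUERY.get(emotion)
    if query is None:          # e.g. "neutral": change nothing
        return
    hits = sp.search(q=query, type="track", limit=1)["tracks"]["items"]
    if hits:
        sp.add_to_queue(hits[0]["uri"])
        sp.next_track()        # skip ahead (the real app fades out/in)
```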
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have much to start with. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client to complete a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. After the client chooses a habit to work on, the app brings them to a dashboard where they can monitor their weekly progress on the task. Once the week is over, the app declares whether the client successfully beat the mission; if they did, they get rewarded with points, which they can exchange for RBC loyalty points!
## How we built it
We built the frontend using React + Tailwind, using routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication plus a cloud database to keep track of users. For our user and transaction data, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Besides API integration, working without any sleep was definitely the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon is the new friends we've found in each other :)
## What we learned
Each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
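Generating a weekly goal with the Cohere SDK could look something like this sketch; the prompt, model name, and habit summary format are illustrative assumptions, not Savvy Saver's actual setup:

```python
# Hedged sketch: turn a detected bad habit into a weekly goal with Cohere.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder credential

def weekly_goal(habit_summary: str) -> str:
    prompt = (
        "A client has this bad spending habit: "
        f"{habit_summary}\n"
        "Write one specific, measurable savings goal for next week."
    )
    response = co.generate(model="command", prompt=prompt, max_tokens=60)
    return response.generations[0].text.strip()

# Example call with a hypothetical habit:
# weekly_goal("spends $40/week on takeout coffee")
```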
## Inspiration
The inspiration behind Booble was to create a game with a unique user interface that also helps the user learn more about hardware and Arduino.
## What it does
The code runs an interactive game controlled by a wave of the hand using a Leap Motion sensor. It features friendly pixel animation and an Arduino Uno board for additional input.
## How we built it
Renee made the sprites, backgrounds, objects, and music for the game. Olga created the control interface and control design. Taylor coded the C++ for the front end of the game.
## Challenges we ran into
There was not enough time to finish the project, so we did not get all the input or story that we wanted, but we have a cute game!
## Accomplishments that we're proud of
We coded a C++ game in under 24 hours, and we learned pixel art and how to create 8-bit music.
## What's next for Booble: Adventures in Arduino
Getting all of the interfaces working and implementing levels.
## Inspiration
The inspiration for our game came from the arcade game Cyclone, where the goal is to press the button when the LED lands on a signaled part of the circle.
## What it does
The goal of our game is to press the button when the LED reaches a designated part of the circle (the very last LED). Upon successfully doing this, the game adds 1 to your score and increases the speed of the LED, continually making it harder and harder to succeed. The player's goal is to get as high a score as possible; the higher your score, the harder the game gets. Pressing the button on the wrong LED resets both the score and the speed value, effectively resetting the game.
## How we built it
The project was split into two parts: the physical building of the device and the writing of the code. In terms of building the physical device, at first we weren't too sure what we wanted to do, so we ended up with a mix of parts we could use. All of us were pretty new to using the Arduino and its respective parts, so things were initially pretty complicated before they started to fall into place. Through the use of many YouTube videos and some tinkering, we were able to get the physical device up and running. Much like our coding process, the building process was very dynamic. This is because at first we weren't completely sure which components we wanted to use, so we had multiple components running at once, which allowed for more freedom and possibilities. When we figured out which components we would be using, everything sort of fell into place. The coding process was quite messy at first. This was because none of us were completely familiar with the Arduino libraries, so it was a challenge to write proper code. However, with the help of online guides and open-source material, we were eventually able to piece together what we needed. Furthermore, our coding process was very dynamic: we switched out components constantly and wrote many lines of code that were never going to be used. While this may have been inefficient, we learned much throughout the process, and it kept our options open and ideas flowing.
## Challenges we ran into
The biggest challenge we ran into along the way was getting our physical device to function the way we wanted it to. The initial challenge came from understanding our device, specifically the Arduino logic board and all the connecting parts, which then moved to understanding the parts as well as getting them to function properly.
## Accomplishments that we're proud of
Our biggest accomplishment is getting the device to work overall and having a finished product. After running into many issues and challenges regarding the physical device and its functions, putting our project together was very satisfying and a big accomplishment for us. In terms of specific accomplishments, the most important parts of our project were getting our physical device to function and getting the initial codebase to work with our project. Getting the codebase to work in our favor was a big accomplishment, as we were mostly reliant on what we could find online, essentially going in blind during the coding process (none of us knew too much about coding for Arduino).
## What we learned
During the process of building our device, we learned a lot about the Arduino ecosystem, as well as coding for it. When building the physical device, a lot of learning went into it, as we didn't know much about using it or writing programs for it. We learned how important it is to have strong connections between our components, to link our parts directly with the Arduino board, and to have it run proper code.
## What's next for Cyclone
In terms of what's next for Cyclone, there are many possibilities. One potential change would be making it more complex and adding different modes. This would increase the challenge for the player and give the game more replay value, as there is more to do. Another potential change is to build it on a larger scale, with more LED lights and attachments such as different types of sensors. In addition, we would like to add an LCD or 4-digit display to show the player's current score and high score.
## Inspiration
Since we are all stuck at home, it seemed like a good time to bring out the old games we used to play as kids. We are bringing back the wooden labyrinth game, but with a modern twist.
## What it does
Similar to the classic wooden labyrinth game, you guide your marble (in this case, your bunny) from start to finish. On your journey, you move the joystick in different directions to avoid the holes and dead ends. So have fun watching your bunny hop from side to side when you tilt, and please don't kill it...
## How we built it
Our A-MAZE-ing labyrinth is created out of two Arduino Unos. The Arduinos communicate through Bluetooth transceivers; one acts as the sender while the other acts as the receiver. The sending end uses a joystick shield that controls the labyrinth with the analog sticks. An OLED screen is attached to the joystick for fun animations while the game is running. On the other end, the receiver side uses two servo motors and two QTI sensors. The motors maneuver the labyrinth, while the QTI sensors sense the marble. If it falls into the wrong hole, one sensor sends a signal over to play a sad/angry emoji. When the marble successfully makes it to the end, a different sensor tells the OLED to play the winning animation.
## Challenges we ran into
While creating this project, we ran into both hardware and software problems. On the software side, the two boards would not talk to each other through the Bluetooth modules. Information sent from the sender side didn't match on the receiving end, and this problem took a bit longer than anticipated to fix. On the hardware side, the main problem was getting the QTI sensors to detect the marble moving at a fast pace. We tackled this by creating a few tubes to guide the marble when it dropped into a hole.
## Accomplishments that we're proud of
We are proud that we were able to complete the model of our labyrinth. Besides that, we are both satisfied that we completed our first hackathon.
## What we learned
We learned that combining components can cause a lot of problems. When adding the OLED alongside the motors and detection, any delays added for the animations had to finish before anything else could run.
## What's next for our A-MAZE-ing Labyrinth
In the future, we want to redesign our model to make it more visually appealing for the user. Looking even further down the line, it would be a huge achievement to see our product sold in stores and online to beginners and coders of all ages.
## Inspiration
High blood pressure is one of the leading causes of early death.
## What it does
Provides the ability to measure blood pressure using any camera.
## How we built it
We asked the campus first-aid officers to take our blood pressure while we recorded video of the forehead and palm. We analyzed the key points to see minute changes in the RGB channels. The signal needed to be filtered and plotted for further analysis. The outputs of our filtered data were analyzed numerically (using math) and with machine learning. The heartbeat delay between the palm and the forehead is directly correlated with blood pressure.
## Challenges we ran into
- A small sample size
- Time
- "Noise" from lights and EMI
## Accomplishments that we're proud of
- 82% accuracy with our current model
## What we learned
- A LOT of numerical data-analysis techniques
## What's next for CmyMP (Mohammad, Humam, Alex)
We're still in school; however, once we graduate, we would like to pursue entrepreneurship. <https://drive.google.com/open?id=1shyujSsOZuRtN1SlrrAJFTB894EEcayZ>
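The filtering and delay estimation described above can be sketched with NumPy and SciPy; the frame rate and pass band are assumptions, and the green-channel traces are whatever per-frame means were extracted from the forehead and palm regions:

```python
# Illustrative sketch: band-pass the pulse signals, then cross-correlate
# to estimate the heartbeat delay between palm and forehead.
import numpy as np
from scipy.signal import butter, filtfilt, correlate

FPS = 30.0  # assumed camera frame rate

def bandpass(signal, low=0.7, high=3.0):
    """Keep 0.7-3 Hz (~42-180 bpm), dropping lighting drift and EMI noise."""
    b, a = butter(3, [low / (FPS / 2), high / (FPS / 2)], btype="band")
    return filtfilt(b, a, signal)

def pulse_delay_seconds(forehead_green, palm_green):
    """Lag between the two pulse waves (equal-length traces assumed);
    this delay is what correlates with blood pressure."""
    f = bandpass(np.asarray(forehead_green, float))
    p = bandpass(np.asarray(palm_green, float))
    xc = correlate(p - p.mean(), f - f.mean(), mode="full")
    lag = np.argmax(xc) - (len(f) - 1)
    return lag / FPS
```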
## Inspiration
While we were coming up with ideas on what to make, we looked around at each other while sitting in the room and realized that our postures weren't that great. We knew that it was pretty unhealthy for us to be seated like this for prolonged periods. This inspired us to create a program that could remind us when our posture is bad and needs to be adjusted.
## What it does
Our program uses computer vision to analyze your position in front of the camera. Sit Up! takes your position at a specific frame and measures different distances and angles between critical points such as your shoulders, nose, and ears. From there, the program feeds all these measurements into mathematical equations and compares the results to a database of thousands of positions to see if yours is good.
## How we built it
We built it using Flask, JavaScript, TensorFlow, and scikit-learn.
## Challenges we ran into
The biggest challenge we faced was how inefficient and slow our initial approach was. Our original plan was to use Django for an API that returns the necessary information, but it was slower than anything we'd seen before; that is when we came up with client-side rendering. Doing everything in Flask made this project 10x faster and much more efficient.
## Accomplishments that we're proud of
* Implementing client-side rendering for an ML model
* Getting out of our comfort zone by using Flask
* Having nearly perfect accuracy with our model
* Being able to pivot our tech stack and be so versatile
## What we learned
* We learned a lot about Flask
* We learned a lot about the basics of ANNs
* We learned more about how to implement computer vision for a use case
## What's next for Sit Up!
* Implement a phone app
* Calculate the accuracy of our model
* Enlarge our dataset
* Support higher frame rates
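One of the angle measurements described above, such as forward head tilt from the ear and shoulder keypoints, can be sketched in a few lines of NumPy; the 40-degree threshold is an assumption, not the trained model's actual cutoff:

```python
# Sketch: estimate forward head tilt from two pose keypoints.
import numpy as np

def neck_angle_deg(ear_xy, shoulder_xy):
    """Angle between the shoulder->ear vector and vertical (0 = upright)."""
    v = np.asarray(ear_xy, float) - np.asarray(shoulder_xy, float)
    vertical = np.array([0.0, -1.0])  # image y-axis points down
    cos = v @ vertical / (np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def is_slouching(ear_xy, shoulder_xy, threshold=40.0):
    return neck_angle_deg(ear_xy, shoulder_xy) > threshold
```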
## Inspiration
During the second wave of COVID in India, I witnessed an unthinkable amount of suffering unleashed on the people. During the peak, around 400,000 positive cases per day were reported, and according to various media reports, a significant number of positive cases went unreported. Images of piles of dead bodies being cremated almost everywhere around the country left me in shock. Since then, I have wanted to contribute to helping as much as I could, and I took this problem as an inspiration to make an effort. Silent hypoxia is when a patient does not feel shortness of breath, yet their oxygen level drops drastically; this is a very dangerous situation that has claimed many lives in this pandemic. Detecting silent hypoxia requires continuous monitoring of a patient's oxygen saturation. Unfortunately, general oximeters available on the market are manual and must be used at frequent intervals. This is a big problem: due to the extreme shortage of healthcare workers, particularly in India, giving patients individual attention to measure SpO2 every few minutes is impossible, which increases the chances of silent hypoxia going undetected. The solution is continuous monitoring of oxygen saturation. This feature, unfortunately, is not offered by common affordable oximeters, so, taking it as a challenge, I came up with a prototype solution.
As people reach an advanced age, they are likely to experience a decline in physical condition; one of the weaknesses experienced by the elderly is weakness in the legs, which makes them more susceptible to falls. A fall is an event that causes a conscious subject to end up on the ground unintentionally. Factors that cause falls include ill health (such as stroke), slippery or wet floors, and handholds that are not strong or not easily gripped. These days, falls have become a major health problem, particularly among the elderly. According to WHO statistics, 646,000 fatal falls are recorded each year, along with 37.3 million falls that are not fatal but require medical treatment. Existing solutions include computer vision to detect whether a person has fallen, but this approach is highly susceptible to lighting conditions and very restricted when it comes to covering a wide area; for example, a fall in the bathroom cannot be detected because there is usually no camera there.
## What it does
Oxy is a wearable that continuously monitors oxygen saturation and detects falls. It also solves another problem: that of network and communications. To explain, imagine a patient wearing a device that uses WiFi to connect to the internet and send data to DynamoDB. If the patient goes to the bathroom, for example, the WiFi connection might get attenuated by walls and physical obstructions. In another situation, in developing and undeveloped countries, WiFi is still a luxury and very uncommon. Due to these real-world conditions, depending on just WiFi and Bluetooth, as most smartwatches and fitness wearables do, is a bad idea and not reliable. For this reason, oxy, along with WiFi, also has a GSM module that connects to the internet via GPRS; the GPRS network is available almost everywhere on earth, vastly improving reliability.
## How I built it
The device continuously monitors data from the SpO2 sensor and inertial measurement unit and sends the data to DynamoDB through an API Gateway and Lambda function (see the sketch after this writeup). It can use either WiFi or GPRS to connect to the API; the only difference is that over GPRS it uses AT commands to connect through an intermediate gateway, because the module I had at hand does not support SSL. Once the device detects oxygen levels dropping below a certain point or a physical fall, the smartphone app sends a notification. So if a patient needs 24/7 monitoring of SpO2 levels, you don't have to take out an oximeter and measure manually every five minutes, which can be exhausting for patient and caretaker. Also, in India and similar countries, there was an extreme shortage of healthcare workers who could be physically present near patients all the time to measure oxygen levels; through the web app, which is hosted on a Graviton EC2 instance, they can add as many devices as they want to monitor remotely, and each patient's medical history is one click away for emergency purposes. This allows them to keep monitoring patients' SpO2 while they tend to other important tasks. The parameters of notification in the app are customizable: you can adjust the time intervals and threshold values that trigger notifications. The device can be powered through a battery or USB, with an ESP8266 microcontroller as the brain. It can use the inbuilt WiFi to connect to the internet, or do so through GPRS and a SIM800L module. It also features onboard battery charging and discharging, with overcharge and overcurrent protection. Measurement is taken through a MAX30100 SpO2 sensor (by Maxim Integrated). The cost of making the device prototype was around 9 USD; if mass-produced, the price could come down significantly.
## Challenges I ran into
The biggest challenge was getting data from the MAX30100 SpO2 sensor. Although there are libraries available for it, the bad schematic design of the breakout board made it impossible to get any data at first. I had to physically tinker with the tiny SMD resistors on the sensor to make sure its I2C lines work at a 3.3V logic level.
## Accomplishments that I'm proud of
For me, the proudest accomplishment is having a working prototype of not only the hardware but the software too.
## What I learned
The most important skill I learned is how to connect the microcontroller to AWS DynamoDB through Lambda and API Gateway, and also how not to burn your fingers while desoldering teeny-tiny SMD components, ouch! That hurt 😂.
## What's next for oxy
The hardware enclosure that houses the device must be made clamp-like or strap-on to turn it into a proper wearable. I wanted to do that now, but I lost time trying to implement the device and app.
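The Lambda function behind the API Gateway endpoint might look roughly like this sketch using boto3; the table name and payload fields are assumptions, not the project's exact schema:

```python
# Hedged sketch: write each SpO2/IMU reading from the device into DynamoDB.
import json
import time
import boto3

table = boto3.resource("dynamodb").Table("oxy_readings")  # assumed table name

def lambda_handler(event, context):
    body = json.loads(event["body"])  # sent by the ESP8266 over WiFi or GPRS
    table.put_item(Item={
        "device_id": body["device_id"],
        "timestamp": int(time.time()),
        "spo2": int(body["spo2"]),  # cast: DynamoDB rejects Python floats
        "fall_detected": bool(body.get("fall", False)),
    })
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```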
## Inspiration As the world grapples with challenges like climate change, resource depletion, and social inequality, it has become imperative for organizations to not only understand their environmental, social, and governance (ESG) impacts but also to benchmark and improve upon them. However, one of the most significant hurdles in this endeavor is the complexity and inaccessibility of sustainability data, which is often buried in lengthy official reports and varied formats, making it challenging for stakeholders to extract actionable insights. Recognizing the potential of AI to transform this landscape, we envision Oasis as a solution to democratize access to sustainability data, enabling more informed decision-making and fostering a culture of continuous improvement toward global sustainability goals. By conversing with AI agents, companies are able to collaborate in real-time to gain deeper insights and work towards solutions. ## What it does Oasis is a groundbreaking platform that leverages AI agents to streamline the parsing, indexing, and analysis of sustainability data from official government and corporate ESG reports. It provides an interface for companies to assess their records and converse with an AI agent that has access to their sustainability data. The agent helps them benchmark their practices against practices of similar companies and narrow down ways that they can improve through conversation. Companies can effortlessly benchmark their current sustainability practices, assess their current standings, and receive tailored suggestions for enhancing their sustainability efforts. Whether it's identifying areas for improvement, tracking progress over time, or comparing practices against industry standards, Oasis offers a comprehensive suite of features to empower organizations in their sustainability journey. ## How we built it Oasis uses a sophisticated blend of the following: 1. LLM (LLaMA 2) parsing to parse data from complex reports. We fine-tuned an instance of `meta-llama/Llama-2-7b-chat-hf` on the HuggingFace dataset [Government Report Summarization](https://huggingface.co/datasets/ccdv/govreport-summarization) using MonsterAPI. We use this model to parse data points from ESG PDF text, since these documents are in a non-standard format, into a JSON format. LLMs are incredibly powerful at extracting key information and summarization, which is why we see such a strong use case here. 2. Open-source text embedding model (SentenceTransformers) to index data including metrics and data points within a vector database. LLM-parsed data points contain key descriptors. We use an embedding model to index these descriptors in semantic space, allowing us to compare similar metrics across companies. Two key points may not have the same descriptions, but are semantically similar, which is why indexing with embeddings is beneficial. We use the SentenceTransformer model `msmarco-bert-base-dot-v5` for text embeddings. We also use the InterSystems IRIS Data Platform to store embedding vectors, on top of the LangChain framework. This is useful for finding similar metrics across different companies and also for RAG, as discussed next. 3. Retrieval augmented generation (RAG) to incorporate relevant metrics and data points into conversation To enable users to converse with the agent and inspect and make decisions based on real data, we use RAG integrated with our IRIS vector database, running on the LangChain framework. We have a frontend UI for interacting with our agent in real time. 4. 
## Challenges we ran into

One of the most challenging parts of the project was prompting the LLM and running numerous experiments until its output matched what we expected. Since LLMs are non-deterministic and we required outputs in a consistent JSON form (for parsed results), we had to prompt the LLM carefully and reinforce the constraints multiple times. This was a valuable lesson in leveraging LLMs in intricate ways for niche applications.

## Accomplishments that we're proud of

We are incredibly proud of developing a platform that not only addresses a critical global challenge but does so with a level of sophistication and accessibility that sets a new standard in the field. Successfully training AI models to navigate the complexities of ESG reports marks a significant technical achievement. The ability to turn dense reports into clear, actionable insights represents a leap forward in sustainability practice.

## What we learned

Throughout the process of building Oasis, we learned the importance of interdisciplinary collaboration in tackling complex problems. Combining AI and sustainability expertise was crucial to understanding both the technical and domain-specific challenges. We also gained insight into the practical applications of AI in real-world scenarios, particularly how NLP and machine learning can be leveraged to extract and analyze data from unstructured sources. The iterative process of testing and feedback was invaluable, teaching us that user experience is as important as the underlying technology in creating impactful solutions.

## What's next for Oasis

The journey for Oasis is just beginning. Our next steps involve expanding the corpus of sustainability reports to cover a broader range of industries and geographies, enhancing the platform's global applicability. We are also exploring the integration of predictive analytics to offer forward-looking insights, enabling users not just to assess their current practices but to anticipate future trends and challenges. Collaborating with sustainability experts and organizations will remain a priority, as their insights will help refine our models and ensure that Oasis continues to meet the evolving needs of its users. Ultimately, we aim to make Oasis a cornerstone in the global effort toward more sustainable practices, driving change through data-driven insights and recommendations.
### Simple QR Code Bill Payment

#### nwHacks 2020 Hackathon Project

#### Main repository for the rapidserve application

### Useful Links

* [Github](https://github.com/rossmojgani/rapidserve)
* [DevPost](https://devpost.com/software/rapidserve-g1skzh)

### Team Members

* Ross Mojgani
* Dryden Wiebe
* Victor Parangue
* Aric Wolstenholme

### Description

RapidServe is a mobile application which allows restaurants to charge their customers through a mobile application interface. Powered by a React Native frontend and a Python Flask API server backed by a MongoDB database, RapidServe links QR codes to tables so that a customer can scan the code at their table and pay for any item on it. Once all the items at the customer's table are paid for, the customer is free to go, and the waiter/waitress does not need to wait on each customer at the table to pay individually.

### Technical Details

* Frontend Mobile Application **(React Native)**
  + The frontend was implemented using React Native. There is a landing page where the user can register or log in, with a Facebook integration to link their Facebook account.
  + While creating an account, a waiter/waitress is prompted to enter their restaurant ID along with their username/password combination; a customer is prompted only for a username/password combination.
  + The next page scans a QR code corresponding to the table the waiter/waitress is serving or the customer is sitting at. The customer can see which items have been charged to their table and pay for whichever items they need to; the waiter/waitress can add items to the table they are serving.
  + The user can pay for their items, and the waiter/waitress can see when the table has been paid for and knows the customers are good to go.
* API Details **(Flask/Python API)**
  + The API for this application was implemented using the Flask framework with Python. The [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/API.md) served as the contract between the frontend and the backend, specifying which arguments were sent with each type of HTTP request. The API was hosted on a virtual machine in the cloud.
  + The API queried our MongoDB database based on which requests were being processed; the database was also hosted on a virtual machine in the cloud (more below).
* Database Details **(MongoDB)**
  + The database was MongoDB, queried from the Flask/Python server using PyMongo and `Flask_PyMongo`. We mainly used two collections, **users** and **orders**, which stored objects for each user (see the [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/API.md) for a user object example) and for each table's order (again, see the [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/backend/API.md) for a table object example). A minimal sketch of this Flask/PyMongo layer follows below.
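For flavor, here is a minimal sketch of what the Flask/PyMongo layer might look like for two central operations: fetching a table's order and marking an item paid. The route names, document fields, and positional-update query are our own illustrative assumptions; the actual contract is defined in the linked API documentation.

```python
# Minimal sketch of the RapidServe backend flow (hypothetical routes/fields).
from flask import Flask, jsonify, request
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config["MONGO_URI"] = "mongodb://localhost:27017/rapidserve"
mongo = PyMongo(app)

@app.route("/order/<table_id>", methods=["GET"])
def get_order(table_id):
    # Look up the order document for the table encoded in the QR code.
    # The _id projection keeps the ObjectId out of the JSON response.
    order = mongo.db.orders.find_one({"table_id": table_id}, {"_id": 0})
    return jsonify(order or {})

@app.route("/order/<table_id>/pay", methods=["POST"])
def pay_item(table_id):
    # Mark one line item on the table's bill as paid, using MongoDB's
    # positional operator to update the matching array element.
    item = request.get_json()["item"]
    mongo.db.orders.update_one(
        {"table_id": table_id, "items.name": item},
        {"$set": {"items.$.paid": True}},
    )
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run()
```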
## Inspiration

During these COVID times, people have pondered two major facets: "Finance" and "Environment". So we thought up a way for people today to "Finance the Environment". We wanted to motivate people to invest in promoting sustainable development.

## What it does

We have built a web application that allows new investors to explore stocks that promote sustainable development and help the environment. Users can save their preferences, as InvestGreen is connected to their Google accounts. Apart from exploring stocks, users can also write blogs and share their ideas and experiences of going green. We also have chat support, which we handle ourselves for now but plan to hand off to financial advisors. In short, we have created a platform to promote sustainable development.

## How we built it

We used React for the frontend, Google Auth (Firebase) for user authentication, the Alpha Vantage API for the companies' stock-market data, Ascend for the chat-support feature, and various plug-ins to convert the data into highly visual graphs for better design (the stock-data fetch is sketched below, after this writeup).

## Challenges we ran into

We come from a non-CS background, and some of us are new to React; most of us were Flutter developers. But we wanted to try React, so we built the whole website with it, also applying what we learned from the workshop conducted yesterday. It was difficult to find appropriate content for the website, and difficult to integrate the chat support and the dashboard used to reply to it.

## Accomplishments that we're proud of

We're very happy that we picked up a new tech stack like React so quickly. We learned how to build chat support into a webpage. We designed the web pages ourselves, which we are very proud of.

## What we learned

We learned the importance of teamwork. We split the work among ourselves, which made things faster. We really enjoyed building stuff overnight.

## What's next for InvestGreen

We will deploy our website in the days to come. We are thinking of introducing an option that lets people invest in the companies directly from the website itself, instead of only showing the data.
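As referenced above, here is a minimal sketch of the stock-data fetch, written in Python for brevity (the app itself calls the API from its React frontend). The endpoint and response keys follow Alpha Vantage's standard `TIME_SERIES_DAILY` interface; the API key is a placeholder.

```python
# Minimal sketch: pull daily closing prices for one ticker from Alpha Vantage.
import requests

API_KEY = "YOUR_ALPHA_VANTAGE_KEY"  # placeholder

def daily_closes(symbol: str) -> dict:
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={
            "function": "TIME_SERIES_DAILY",
            "symbol": symbol,
            "apikey": API_KEY,
        },
        timeout=10,
    )
    series = resp.json()["Time Series (Daily)"]
    # Keep just the closing price for each trading day.
    return {day: float(v["4. close"]) for day, v in series.items()}

print(list(daily_closes("IBM").items())[:3])
```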
**DEMO VIDEO**: <https://drive.google.com/drive/folders/1Eh3pVSfi8KzsKQ5bC8A-uBp98ogV6XRr>

We were unable to upload the video to YouTube or other platforms, so we have attached a Google Drive link here (Devpost would not let us put it in the video field).

## Inspiration

Most students don't encounter machine learning until college. However, this typical track of Calculus 2 > Machine Learning may be outdated. With the development of powerful machine learning tools like InterSystems IntegratedML, complex mathematics is no longer necessary for a conceptual understanding of machine learning. **We make the bold claim that kids – even middle schoolers – can and should be exposed to machine learning concepts.** With this goal, Whisker Workshops sets out to preserve precious machine learning insights while replacing formidable blocks of code and equations with cute hand-drawn graphics and games.

## What it does

Whisker Workshops consists of two main playground sections: one lets students interactively build a neural network that represents the nonlinear exclusive-or (XOR) gate, while the other lets users explore various dimensionality-reduction levels for principal component analysis (PCA). We designed the UI to be cute and hand-drew the graphics down to the buttons, aiming to foster a friendly and welcoming environment for beginners.

For the XOR gate playground, the truth table for the logic gate is given (no prior experience is assumed); the goal is then to create a network that reproduces XOR. The playground is very visual: the user can see the entire neural network in real time while updating neurons and edges. Edge weights can be updated by clicking on them, the thickness of an edge visualizes its weight, and the number of neurons is adjustable through a slider. The player can then run the network on different inputs and see how well it does: the task is essentially presented as a game!

The PCA playground (also known as "More Isn't Always Better") follows a similar visual philosophy: users can change the dimensionality of the input data (the Fashion MNIST dataset of clothing and accessory images) and visualize how well the model performs through an interactive chart. Our friendly cat mentor also guides us through the entire process and explains why we observe the results we do!

## How we built it

All of the graphics and artwork in the final product were drawn by our team and incorporated into the website frontend, built in React.JS and CSS.

The PCA playground was built by first conducting principal component analysis in Python on the Fashion MNIST dataset for a variety of output dimensionalities. We then used InterSystems IntegratedML to train a machine learning model on each of the resulting datasets. Through this process, we were pleasantly surprised by the versatility of IntegratedML's capabilities: not only was the model able to learn the two-input XOR task, a classic non-linearity test, but it also performed quite well at classifying Fashion MNIST pieces with no knowledge of what the input was. The resulting accuracy data was visualized with Chart.JS, a data visualization library in Javascript. The cat animations and dialogue were all created in CSS.

The XOR gate playground was designed primarily in React.JS, using a lot of different state logic and hooks to create the interactive neural network editor and the interface for running the model. Due to the highly custom nature of the project, we manually implemented forward propagation for the network (a minimal sketch follows below). All styling was done in CSS.
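Here is a minimal sketch of that hand-rolled forward pass, assuming a single hidden layer and a step activation (the app's exact activations and update flow differ). The weights shown are one classic setting that solves XOR.

```python
# Minimal sketch of the forward pass the XOR playground recomputes on
# every weight/neuron edit the player makes.
import numpy as np

def step(x):
    return (x > 0).astype(float)

def forward(x, W1, b1, W2, b2):
    hidden = step(W1 @ x + b1)      # hidden-layer activations
    return step(W2 @ hidden + b2)   # network output (0 or 1)

# One classic weight setting that solves XOR: OR and NAND feeding AND.
W1 = np.array([[1.0, 1.0],          # OR-like neuron
               [-1.0, -1.0]])       # NAND-like neuron
b1 = np.array([-0.5, 1.5])
W2 = np.array([[1.0, 1.0]])         # AND of the two hidden neurons
b2 = np.array([-1.5])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, forward(np.array(x, dtype=float), W1, b1, W2, b2))
# Expected outputs: 0, 1, 1, 0
```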
## Challenges we ran into

It was often difficult to create certain visual designs, effects, and features with CSS: for example, the slider that demonstrates the results of principal component analysis for different output dimensions. This was especially true when importing our custom graphics and artwork into the project. It improved our debugging skills significantly and also taught us to better use documentation to learn.

Another challenge we faced was navigating and running models on IntegratedML. We are very grateful to Thomas from InterSystems for his exceedingly helpful advice, which allowed us to successfully train and validate the models we needed.

## Accomplishments that we're proud of

We're proud of having built a functional application with real-world value during this hackathon. As a team, we believe that Whisker Workshops isn't just a toy project but a starting point for future AI-education initiatives aimed at a younger audience. We are also proud to have explored and combined skills from many different fields, including machine learning, web development, data analysis, and graphic design. Another accomplishment we feel proud of is incorporating the InterSystems IntegratedML framework into our project, which took a lot of effort and research.

## What we learned

We learned how to make a stylish, customized UI and utilize CSS to its fullest extent; along with incorporating custom graphic design, we learned to create a product that would appeal to our target audience: children without technical knowledge who are interested in exploring machine learning. Thinking about our target audience was a skill we improved a lot throughout TreeHacks as we continually aimed to look at our product from the users' perspective.

Using InterSystems IntegratedML also taught us how to effectively use external tools to help build our product. Along the way, we had to read a lot of documentation and learn the new paradigm presented by this specific system; this is an essential skill, as programmers constantly rely on different tools and technologies.

We were also puzzled over how to efficiently re-run the machine learning algorithm in real time after each user update without taking a toll on the user's computer or taking too long. We learned how to custom-build a machine learning pipeline without conventional tools such as Tensorflow or Pytorch, which would be too inefficient to run on each edit. We accomplished this by extracting only the features essential to the differentiation task with principal component analysis, and then coding up our own dense network (a minimal sketch of the dimensionality sweep follows at the end of this writeup).

## What's next for Whisker Workshop

We plan to expand this concept of abstractified ML to middle school and high school students who don't have access to machine learning education. We hope to promote Whisker Workshops to elementary and middle school teachers, who can encourage their students to explore machine learning in their free time. One extension is a playground for kids to create a multi-layer deep learning model that performs image classification: for each layer, they will have the option of a dense or convolutional layer, with sliders to tune hyperparameters. Observing how different settings impact accuracy in this gamified, interactive environment will let students develop intuition about deep learning concepts (e.g. network architecture, activations, etc.).
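As referenced above, here is a minimal sketch of the dimensionality sweep behind the PCA playground. `LogisticRegression` stands in for the InterSystems IntegratedML model we actually trained, and the 5,000-image subsample exists only to keep the sketch fast; both are illustrative assumptions.

```python
# Minimal sketch: how classification accuracy varies with the number of
# PCA output dimensions on Fashion-MNIST.
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = fetch_openml("Fashion-MNIST", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X[:5000], y[:5000], random_state=0)

for dims in (2, 10, 50, 100):
    pca = PCA(n_components=dims).fit(X_tr)
    clf = LogisticRegression(max_iter=200).fit(pca.transform(X_tr), y_tr)
    acc = clf.score(pca.transform(X_te), y_te)
    print(f"{dims:>3} dims -> accuracy {acc:.3f}")
```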
## Inspiration

Sagnik is a student-instructor for the UC Berkeley Intro to ML class. Some students complained that they had no intuitive idea what happened when they changed model parameters, and that they'd like to see how the model's output changed interactively with the parameters.

## What it does

Visualizes the decision boundary of various machine learning algorithms (SVM, decision trees, boosted trees, multilayer perceptrons, etc.). This helps **democratize access to machine learning education**. Many people getting started with machine learning on their own rely completely on free resources available online, and this visualizer is a great tool for them to build an in-depth understanding of decision boundaries, overfitting, generalization, etc. Since this is a free, open-source tool available online, instructors across the world can use it in their classrooms to teach, explain, and demo machine learning algorithms, and students can then go home and use it to further their understanding.

## How we built it

We started with the code of Dash's [SVM visualizer](https://github.com/plotly/dash-sample-apps/tree/master/apps/dash-svm) and heavily modified it to add support for other machine learning algorithms (a standalone sketch of the core boundary-plotting idea appears after this writeup).

## Challenges we ran into

* Refining the UI: positioning the logo correctly
* Python issues with libraries

## Accomplishments that we're proud of

* Sagnik: Finishing a project at a hackathon!!!
* Completing the parameters for other ML algorithms.

## What I learned

* Sagnik: Making a web app and deploying it to Heroku
* Colin: Changing frontend attributes using Python
* Komila: Finishing designs using Adobe Illustrator

## What's next for ML Algorithm Visualizer

Sagnik will use this as a teaching tool in class and keep adding to the web app as needed to maximize students' learning. The following are planned for the very near future:

* adding more options for the existing algorithms
* the ability to import custom data
* adding animated APIs
* the ability for students to interactively click on their graph data points
* descriptions for each ML algorithm so students can get exposed to new ones
* a separate frontend component so that visuals can become more customized
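As referenced above, here is a standalone sketch of the grid-evaluation idea at the heart of the visualizer, using scikit-learn and matplotlib rather than the Dash/Plotly stack the app is actually built on; the dataset, model, and hyperparameters are illustrative.

```python
# Minimal sketch: evaluate a classifier on a dense 2-D grid and shade
# the regions it predicts, which is exactly what a decision-boundary
# plot shows.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(noise=0.3, random_state=0)
model = SVC(kernel="rbf", C=1.0, gamma=2.0).fit(X, y)

# Predict on every point of a fine grid covering the data.
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200),
)
zz = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)                 # predicted regions
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors="k")  # training points
plt.title("SVM (RBF) decision boundary")
plt.show()
```

Swapping `SVC` for any other sklearn classifier (a decision tree, a gradient-boosted ensemble, an MLP) reuses the same grid-and-contour logic, which is what makes supporting many algorithms tractable.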
## About the Project

NazAR is an educational tool that automatically creates interactive visualizations of math word problems in AR, requiring nothing more than an iPhone.

## Behind the Name

*Nazar* means "vision" in Arabic, which symbolizes the driving goal behind our app: not only do we visualize math problems for students, but we also strive to represent a vision for a more inclusive, accessible, and tech-friendly future for education. And it ends with AR, hence *NazAR* :)

## Inspiration

The inspiration for this project came from each of our own unique experiences with interactive learning. As examples, we want to showcase the experiences of two team members, Mohamed and Rayan.

Mohamed Musa moved to the US when he was 12, coming from a village in Sudan where he grew up and received his primary education. He did not speak English and struggled until an experience with a teacher transformed his entire learning experience through experiential and interactive learning. From then on, applying those principles, Mohamed picked up English fluently within a few months and reached the top of his class in both science and mathematics.

Rayan Ansari had worked with many Syrian refugee students on a catch-up curriculum. One of his students, a 15-year-old named Jamal, had not received schooling since kindergarten and did not understand arithmetic or the abstractions used to represent it. Intuitively, the only means Rayan felt he could effectively teach Jamal and bridge the connection was through physical examples that Jamal could envision or interact with.

From the team members' diverse experiences, it was glaringly clear that an accessible and flexible interactive learning software would be invaluable in bringing this sort of transformative experience to any student's work. We were determined to develop a platform that could achieve this goal without pre-curated questions or the aid of a teacher, tutor, or parent, who would otherwise have to provide this time-intensive education experience.

## What it does

Upon opening the app, the student is presented with a camera view and can press the snapshot button on the screen to scan a homework problem. Our computer vision model then uses neural-network-based text detection to process the scanned question and passes the extracted text to our NLP model. The NLP text-processing model runs fully integrated into Swift as a Python script, and extracts from the question a set of characters to create in AR, along with objects and their quantities, representing the initial problem setup. For example, for the question "Sally has twelve apples and John has three. If Sally gives five of her apples to John, how many apples does John have now?", our model identifies that two characters should be drawn, Sally and John, and that the setup should show them with twelve and three apples, respectively.

The app then draws this setup using the Apple RealityKit development space, with the characters and objects described in the problem overlaid. The setup is interactive, and the user can move the objects around the screen, reassigning them between characters. When the position of the environment reflects the correct answer, the app verifies it, congratulates the student, and moves on to the next question. Additionally, the characters are dynamic and expressive, displaying idle movement and reactions rather than appearing frozen in the AR environment.
## How we built it

Our app relies on three main components, each of which we built from the ground up to best tackle the task at hand: a computer vision (CV) component that processes the camera feed into text; an NLP model that extracts and organizes information about the initial problem setup; and an augmented-reality (AR) component that creates an interactive, immersive environment for the student to solve the problem.

We implemented the computer vision component to perform image-to-text conversion using Apple's Vision framework model, a convolutional neural network trained on hundreds of thousands of data points. We customize the user experience with a snapshot button that lets the student position their device in front of a question and capture an image, which is then converted to a string and passed off to the NLP model.

Our NLP model, which we developed completely from scratch for this app, runs as a Python script and is integrated into Swift using a version of PythonKit we custom-modified for iOS. It works by first tokenizing and lemmatizing the text using spaCy, and then using numeric terms as pivot points for a prioritized search that relies on English grammatical rules to match each numeric term to a character, an object, and a verb (action). The model successfully matches objects to characters even when they aren't explicitly specified (e.g. for Sally in "Ralph has four melons, and Sally has six") and, by using the proximate preceding verb of each numeric term as the basis for an inclusion-exclusion criterion, also accounts for extraneous information such as statements about characters receiving or giving objects, which shouldn't be included in the initial setup. It likewise accounts for characters that do not possess any objects to begin with but should still be drawn, as they may receive objects as part of the solution. The model directly returns filenames to be executed by the AR code (a minimal sketch of this numeric-pivot idea appears below).

Our AR model functions from the moment a homework problem is read. Using Apple's RealityKit environment, the software determines the plane of the paper in which we anchor our interactive learning space. The NLP model passes along objects of interest, which correspond to particular USDZ assets in our library, as well as a vibrant background terrain. In our testing, we used multiple models for hand tracking and gesture classification, including a CoreML model, a custom SDK for gesture classification, a Tensorflow model, and our own gesture-processing class paired with Apple's hand pose detection library. For the purposes of TreeHacks, we figured it would be most reasonable to stick with touchscreen manipulation, especially for a demo that uses the iPhone itself without a separate worn accessory. We found this also provided better ease of use when interacting with the environment and was most accessible given hardware constraints (we did not have a HoloKit Apple accessory nor the upcoming Apple AR glasses).
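Here is a minimal sketch of the numeric-pivot idea, assuming spaCy's `en_core_web_sm` pipeline. Our production model layers prioritized grammatical-rule searches and verb-based inclusion/exclusion on top of this, so treat the function below as illustrative only.

```python
# Minimal sketch: use numeric tokens as pivots to recover rough
# (character, quantity, object) triples from a word problem.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def initial_setup(question: str):
    """Return rough (character, quantity, object) triples for the scene."""
    doc = nlp(question)
    triples, last_object = [], None
    for tok in doc:
        if not tok.like_num:
            continue
        head = tok.head
        if head.pos_ in ("NOUN", "PROPN"):   # "twelve apples": num modifies noun
            obj, verb = head.lemma_, head.head
        else:                                # elided object: "John has three"
            obj, verb = last_object, head
        subjects = [c for c in verb.children if c.dep_ == "nsubj"]
        character = subjects[0].text if subjects else None
        triples.append((character, tok.text, obj))
        last_object = obj                    # inherit for elided mentions
    return triples

print(initial_setup("Sally has twelve apples and John has three."))
# Rough expectation: [('Sally', 'twelve', 'apple'), ('John', 'three', 'apple')]
```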
## Challenges we ran into

We ran into several challenges while implementing our project, which was somewhat expected given the considerable number of components involved and the novelty of our implementation. One of the first was a lack of access to wearable hardware such as HoloKits or HoloLenses. This, together with our desire to make the app as accessible and scalable as possible without requiring users to purchase expensive equipment, led us to target the plain iPhone so we could reach as many people who need it as possible.

Another issue was hand gesture classification. Very little work has been done on this in Swift environments, and there was little to no documentation on hand tracking available to us. As a result, we wrote and experimented with several different models, including training our own deep learning model to identify gestures, though it took a toll on our laptops' resources. In the end we got it working, but we are not using it for our demo, as it currently experiences some lag. In the future, we aim to run our own gesture-tracking model in the cloud, trained on over 24,000 images, to provide lag-free hand tracking.

The final major issue we encountered was the lack of interoperability between Apple's iOS development environment and other systems: for example, running our NLP code, which requires input from the computer vision model and has to pass the extracted data on to the AR algorithm. We have been continually working to overcome this challenge, including by modifying the PythonKit package to bundle a Python interpreter alongside the other application assets so that Python scripts can run on the end machine. We also used text-file input and output to let our Python NLP script interact more easily with the Swift code.

## Accomplishments we're proud of

We built our computer vision and NLP models completely from the ground up during the hackathon, and also developed multiple hand-tracking models on our own, overcoming the lack of documentation for hand detection in Swift.

Additionally, we're proud of the novelty of our design. Existing systems that provide interactive problem visualization rely on custom QR codes embedded with the questions that load pre-written environments, or on a set of pre-curated models; and Photomath, the only major app that takes a real-time image-to-text approach, lacks support for word problems. In contrast, our app integrates directly with existing math problems and doesn't require any additional work on the part of students, teachers, or textbook writers in order to function.

Furthermore, by relying only on an iPhone and an optional HoloKit accessory for hand tracking that is not vital to the application (and which, at a retail price of $129, is far more scalable than VR sets that typically cost thousands of dollars), we maximize accessibility to our platform not only in the US but around the world, where it has the potential to complement instructional efforts in developing countries whose educational systems lack the resources to give students enough one-on-one support. We're eager to have NazAR make a global impact on students' comfort and experience with math in the coming years.

## What we learned

* We learned a lot from building the tracking models, which haven't really been done for iOS and for which there is practically no Swift documentation.
* We are truly operating on a new frontier, as little to no work has been done in the field we are looking at.
* We will have to manually build a lot of different architectures, as many technologies related to our project are not open source yet. We've already been making progress on this front, and plan to do far more in the coming weeks as we work toward a stable release of our app.
## What's next for NazAR

* Having the app animate the correct answer (e.g. Bob handing apples one at a time to Sally)
* Animating algorithmic approaches and code solutions for data structures and algorithms classes
* Being able to automatically produce additional practice problems similar to those provided by the user
* Using cosine similarity to automatically make terrains mirror the problem description (e.g. show an orchard if the question is about apple picking, or a savannah if giraffes are involved)
* And more!