hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,347 | https://devpost.com/software/xdr-and-xsoar-working-together | Inspiration - We noticed that clients would often ask us what response actions XDR can take when we provide XDR demos or have XDR conversations. We also noticed that other EDR tools did not provide similar functionality.
What it does - We have created a Playbook in XSOAR that identifies when a user has attempted to install malware, emails the end user, and then isolates the machine. It then sends an email to the IT Security team notifying them of this activity.
How we built it - We built a Playbook in XSOAR, created email accounts to mock the scenario, and also detonated malware on an Endpoint to simulate this.
Challenges we ran into - Having to modify the Playbook to get it to function properly.
Accomplishments that we're proud of - This is a demo scenario that other SEs can use, we can show the value of XDR and XSOAR together in just one example. This is an idea that's ready to be delivered to customers with little to no development time.
What we learned - How the integration between XDR and XSOAR really works and the intricacies involved in the integration.
What's next for - Continue capturing ideas and building this playbook out further.
https://github.com/demisto/content/pull/9203
Built With
api
xdr | XDR and XSOAR working together | Combining XDR + XSOAR together to create a compelling use case. | ['Sanket Shah', 'Chaithanya Sajja', 'Shane Markley'] | [] | ['api', 'xdr'] | 24 |
10,347 | https://devpost.com/software/user-id-magic | Inspiration
User-ID is one of the foundational technologies that allows Palo Alto Networks firewalls to protect networks. XSOAR was missing an integration for User-ID so we built it!
What it does
From XSOAR, one can retrieve or push User-ID mappings to Palo Alto Networks firewalls. There are a few use cases for this.
1 - While investigating an incident, it could be really important to pull the user mapping from the firewall to identify the user associated with the incident. Having this automated allows the data to be pulled as soon as possible after the event is identified.
2 - In some cases, the normal sources of user-to-IP mappings may not provide a complete set of data. Normally, most mappings come from Active Directory. How do you get mappings for machines that are not attached to AD, like BYOD devices, guest networks, IoT, or Linux systems? This integration with XSOAR provides a way.
3 - The administrator can build lists of MAC-to-user mappings and IP-to-user mappings. These can be used to push pre-determined user names for IPs or MAC addresses.
4 - A syslog listener integration can be configured to receive DHCP events and automatically create User-ID mappings based on the DHCP MAC address.
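The push side of these use cases boils down to sending the firewall a User-ID "uid-message" payload. Below is a minimal sketch of building that payload; the XML format follows the published PAN-OS User-ID XML API, but the helper name, host, and the mappings shown are illustrative assumptions, not the integration's actual code.

```python
# Sketch: build a PAN-OS User-ID login payload from {user: ip} mappings.
# The <uid-message> structure is the documented User-ID XML API format;
# everything else here (names, values) is illustrative.
import xml.etree.ElementTree as ET

def build_uid_message(mappings, timeout=60):
    """Build a <uid-message> login payload from {user: ip} mappings."""
    root = ET.Element("uid-message")
    ET.SubElement(root, "version").text = "1.0"
    ET.SubElement(root, "type").text = "update"
    login = ET.SubElement(ET.SubElement(root, "payload"), "login")
    for user, ip in mappings.items():
        ET.SubElement(login, "entry",
                      {"name": user, "ip": ip, "timeout": str(timeout)})
    return ET.tostring(root, encoding="unicode")

# The payload would then be POSTed to the firewall's XML API, e.g.
# https://<firewall>/api/?type=user-id&key=<API_KEY> with cmd=<uid-message...>
payload = build_uid_message({"corp\\alice": "10.1.1.23"})
```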
How we built it
We built a custom automation to extend the capabilities of the Panorama integration to handle User-ID functions. We also built custom classifiers and mappers to ingest syslog information and fully automate building User-ID mappings from this data. Simple playbooks were created to support automated mappings, pushing static mappings and a manual process for administrators to manage the databases associated with the MAC and IP mappings.
Challenges we ran into
Understanding mappings and classifiers to extract the proper fields from syslog.
Finding documentation on the APIs of various products.
Accomplishments that we're proud of
We have a fully functional solution that gives User-ID output and supports automatic mapping of DHCP and static IP addresses.
We built this in less than 48 hours.
What we learned
XSOAR product capabilities, for example mapping incoming data to fields.
What's next for User-ID Magic
Add support for User-ID groups
Currently this has been tested with Palo Alto syslog and ISC DHCP servers. Expand coverage to other types of DHCP servers.
Integration to push mappings to dynamic DNS
Built With
python
Try it out
github.com | User-ID Magic | What if I told you I could map new machines to users on your NGFW? | ['Todd Walker', 'Robert Lemm', 'Scott White', 'Rod Gonzalez'] | [] | ['python'] | 25 |
10,347 | https://devpost.com/software/automatic-xsoar-tenant-creation | Automation architecture
Inspiration
We were getting multiple XSOAR Tenant creation requests for which we had to spend dedicated time on each individual request. To simplify this, we have automated the end-to-end process of Tenant deployment.
What it does
We have created the XSOAR-Tenant-Creation integration with the "Automatic Tenant Creation" playbook. It works with the integration command !NTT-Tenant-Creation, which takes 3 mandatory arguments for creating an XSOAR tenant: Customer_Name and Customer_id (both used in the naming convention) and XSOAR_Host.
You can define the arguments accordingly in your playbook.
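The actual naming convention used by the integration is not published here; purely to illustrate the kind of helper involved, here is a hypothetical sketch that combines Customer_Name and Customer_id into a tenant name with no blanks or spaces (a constraint noted under "What we learned"):

```python
# Hypothetical tenant-name helper; the real convention may differ.
def build_tenant_name(customer_name: str, customer_id: str) -> str:
    # Tenant names must not contain blanks/spaces, so collapse them.
    slug = "-".join(customer_name.strip().lower().split())
    return f"{slug}-{customer_id}"

name = build_tenant_name("Acme Corp", "1042")
# name == "acme-corp-1042"
```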
How we built it
We gathered the requirements from various teams and created a standardized process for Tenant creation, including the Tenant naming convention. Then we created the integration and matched the ServiceNow inputs in the playbook to the required use case.
Challenges we ran into
Developing the integration was challenging, as we were not able to identify the correct parameter to use, but we kept trying different options and were finally able to achieve the result.
Accomplishments that we're proud of
We were able to create and deploy end-to-end automation of XSOAR Tenant deployment.
What we learned
We learned that the Tenant name should not contain blanks or spaces, and how to use the XSOAR API effectively.
What's next for Automatic XSOAR Tenant Creation
We are currently working on XSOAR Automatic Host deployment and are planning to add several reporting features in XSOAR that will get the number of Tenants under each host as well as provide the Tenant utilization.
Built With
api
json
python
Try it out
github.com | Automatic XSOAR Tenant Creation | "Automate the Automation" | ['Ashish B', 'Mohan Mittal'] | [] | ['api', 'json', 'python'] | 26 |
10,347 | https://devpost.com/software/intel-driven-vulnerability-management | Campaign Layout and Collection Playbook
Campaign Indicators Layout
XDR Incident with Vulnerability Management Enrichment
Internal Host indicator and dynamic scoring
Automated Vuln Scan, Reporting, and Prioritization
Inspiration
Vulnerability Management teams are swamped with vulnerabilities and have a hard time actually improving an organization's risk posture.
What it does
The Intel-Driven Vulnerability Management content pack enables organizations to connect vulnerability information to actual threats that have the intent and capability to do harm. Take the Spectre and Meltdown vulnerabilities, for example. They're rated high because theoretically they could lead to RCE, but in practice no one has the capability to exploit them: no proof of concept or in-the-wild attack has been demonstrated. Similarly, nation states that may have the capability to exploit such vulnerabilities may not have the intent to harm an organization's particular country or industry.
This content pack helps close the Window of Opportunity by prioritizing patches for vulnerabilities that are actively being exploited by motivated and capable threat actors. It also retroactively identifies incidents that involved that CVE so they can be prioritized appropriately, and lastly, it identifies any new exploit attempts as being part of a campaign that must be given a high priority.
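The prioritization idea above — severity alone doesn't drive patching; a CVE becomes urgent when a capable, motivated actor is actively exploiting it — can be illustrated with a toy scoring function. The field names, thresholds, and tiers below are invented for the example, not the pack's actual logic.

```python
# Toy illustration of intel-driven patch prioritization (invented logic).
def patch_priority(cvss: float, actively_exploited: bool,
                   actor_intent: bool, actor_capability: bool) -> str:
    if actively_exploited and actor_intent and actor_capability:
        return "critical"   # close the Window of Opportunity first
    if cvss >= 7.0 and (actor_intent or actor_capability):
        return "high"
    # e.g. Spectre/Meltdown as described above: theoretically severe,
    # but no demonstrated real-world exploitation
    return "routine"

priority = patch_priority(cvss=5.6, actively_exploited=True,
                          actor_intent=True, actor_capability=True)
```

Note that a medium-severity CVE under active exploitation outranks a high-severity one nobody is exploiting, which is the point of the content pack.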
How I built it
Cortex XSOAR's robust and flexible Case Management capabilities allowed us to build a Threat Intel Campaign case type and layout to organize threat intel. Indicator enrichment and the indicator database were extended to improve CVE and Internal Host handling. And we put everything together with heavy use of the Incident and Indicator linking mechanisms.
What's next for Intel-Driven Vulnerability Management
Other vulnerability scanning tools can be easily added to the skeleton provided. We focused primarily on NMAP because it's free and easy to deploy, but also created a playbook for Tenable. Other scanning tools like Qualys could easily be added.
Built With
xsoar | Intel-Driven Vulnerability Management | Extracts CVE's from threat intel reports, assess vulnerability to those threats through scanning, identifies exploit attempts, and prioritizes patches based on the risk they pose to the organization. | ['Shawn Murphy', 'Nicholas Ericksen'] | [] | ['xsoar'] | 27 |
10,347 | https://devpost.com/software/sales-user-onboarding | Inspiration
We were inspired by listening to the needs of SalesOps and hearing how they spent tons of time working on manual processes that were boring and repetitive.
What it does
Using ServiceNow, Okta, and Salesforce, our playbook streamlines the process with greater speed and accuracy. It checks ServiceNow for new sales onboarding requests, checks the time constraints, validates in Okta, takes a pre-existing user in Salesforce and copies it with the new user's info swapped in, posts it to Salesforce, and finally updates the ServiceNow ticket.
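The "copy a pre-existing user with the new user's info" step can be sketched as cloning a template record and applying overrides. The field names below are invented placeholders, not Salesforce's real schema, and this is not the playbook's actual code.

```python
# Sketch: clone an existing user record, dropping its Id and swapping
# in the new hire's details (field names are illustrative).
def clone_user_record(template: dict, **overrides) -> dict:
    record = dict(template)      # shallow copy; template stays untouched
    record.pop("Id", None)       # a new record must not reuse the old Id
    record.update(overrides)
    return record

template = {"Id": "005XX", "Email": "old@corp.example", "ProfileId": "00eXX"}
new_user = clone_user_record(template, Email="new.hire@corp.example",
                             FirstName="Ada")
```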
How I built it
We built it by first defining the requirements with SalesOps, figuring out how to mimic the key steps using Salesforce's API, applying those steps within Salesforce to XSOAR, and even modifying the Salesforce integration settings. Then we tested the commands using ServiceNow as a way to retrieve the data we needed to ingest, and used dummy data to ensure we were getting the desired outputs.
Challenges I ran into
We ran into the challenge that the Salesforce XSOAR integration lacked certain outputs for the Salesforce query. Also, various sections within the XSOAR documentation were lacking. Finally, the integration being written in JavaScript made it difficult to modify.
Accomplishments that I'm proud of
Proud of building custom automation scripts to process the JSON template used by Salesforce. Also proud of the speed at which we were able to develop. We are also proud of all the new knowledge we have of the Okta, Salesforce, and ServiceNow APIs, which our playbook utilized.
What I learned
We learned that XSOAR enables very easy overall task execution; however, these tasks often rely on custom integrations that may lack the necessary features. Luckily, we can easily build those features ourselves. Building this process and participating in this Hackathon helped our team in assessing other XSOAR use cases and developing our XSOAR pipeline.
What's next for Sales User Onboarding
We are planning on adding more tasks, such as assigning permission groups within Salesforce, as well as further exception handling. On the business end we want to formalize the data entry into ServiceNow. By automating this process, we are also creating opportunities to eliminate other manual business steps in this process.
Built With
okta
salesforce
servicenow | Sales User Onboarding | The Sales User Onboarding procedure is time consuming and prone to human errors. Using ServiceNow, Okta, and Salesforce, our playbook streamlines the process with greater speed and accuracy. | ['Kevin Ong'] | [] | ['okta', 'salesforce', 'servicenow'] | 28 |
10,347 | https://devpost.com/software/qa-chatbot-to-support-customers | demo picture
Inspiration
Nikesh has an initiative to launch more robots across our platform and across the company. This will not only improve efficiency but also reduce cost.
What it does
It conducts basic conversation with users to address their support requests. Given a user's input, the chatbot will respond with an appropriate reply and recommend relevant knowledge base articles for the user's query, based on historical support ticket content and support engineers' replies.
How I built it
I extracted the training corpus from the historical ticket database first. Then I developed a novel machine learning recommendation model on top of an open-sourced natural language processing framework released by Google in November 2018. Last, I developed a graphical user interface (GUI) to provide a better user experience.
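The ranking step — score knowledge-base articles against a ticket and return the top matches — can be illustrated in miniature. The real model is a large pretrained NLP model; this stand-in uses simple bag-of-words cosine similarity purely to show the idea, with made-up articles.

```python
# Toy stand-in for the recommender: rank KB articles against a ticket
# by bag-of-words cosine similarity and return the top 2.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top2_articles(ticket: str, articles: dict) -> list:
    q = Counter(ticket.lower().split())
    scored = {name: cosine(q, Counter(text.lower().split()))
              for name, text in articles.items()}
    return sorted(scored, key=scored.get, reverse=True)[:2]

kb = {"reset-password": "how to reset a forgotten password",
      "vpn-setup": "configure globalprotect vpn client",
      "license": "activate product license key"}
best = top2_articles("user forgot password and cannot reset it", kb)
```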
Challenges I ran into
I consider myself a data scientist. I am an expert in machine learning and data science, but I have not done much frontend coding and had no experience in GUI development before.
Also, the machine learning model is over 400 MB in size due to its 1.2 billion parameters, and the corpus is also large, which exceeds GitHub's submission size cap (100 MB). I had to host this non-code content on Google Drive for download; only PANW accounts can view it.
Accomplishments that I'm proud of
I think my chatbot is very useful and has the potential to be applied in a wide range of areas. It is ready to be used, easy to integrate into current business logic, and can be trained with different domain knowledge to adapt to different use cases. Recommending only the top 2 related links from over 130,000 knowledge articles and tech-docs is very challenging. However, the machine learning model developed yields very high accuracy in offline evaluation. For thousands of customer service tickets since 2018, about 1/3 of all tickets solved by support engineers have exactly the same links recommended by the chatbot. The Prisma Cloud Customer Success team has already expressed strong interest in deploying it to production.
What I learned
What's next for QA chatbot to support customers
Built With
machine-learning
natural-language-processing
python
tkinter
Try it out
drive.google.com | QA chatbot to support customers | solve the support tickets by chatbot | ['Jin Huang'] | [] | ['machine-learning', 'natural-language-processing', 'python', 'tkinter'] | 29 |
10,347 | https://devpost.com/software/from-the-frying-pan-into-the-fire | Inspiration
We were inspired by police/ambulance sirens - everyone sees the flashing lights, hears the sound, and knows immediately to move the heck out of the way. So how can we create something like that for SOC analysts? Something that signifies ACTION! as soon as someone sees it, regardless of what they are doing.
What it does
Our project combines the power of Philips Hue API with XSOAR - allowing XSOAR users to add in Hue Light effects to any of their playbooks for a little added flair during incident investigations. Users will be able to add different colored lights at different points during the playbook to signal triggering of an incident, closure of an incident, required action from users, etc.
How we built it
We created an integration in XSOAR for Philips Hue using their open source API, then a wrapper script to get/set the light ID inside of a playbook, and lastly a malware playbook to put it all together.
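Driving a light from a playbook task reduces to one REST call per effect. The sketch below builds that call; the URL path (PUT /api/&lt;user&gt;/lights/&lt;id&gt;/state) follows the published Philips Hue local API, while the bridge IP and username are placeholders and the request itself is left commented so the snippet stays self-contained. This is not the integration's actual code.

```python
# Sketch: build the Hue "set light state" request a playbook task would
# send (e.g. red lights when an incident triggers).
import json

def hue_state_request(bridge_ip, username, light_id, on=True, hue=0, bri=254):
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    body = json.dumps({"on": on, "hue": hue, "bri": bri})  # hue 0 = red
    return url, body

url, body = hue_state_request("192.168.1.10", "demo-user", 3)
# send with e.g.:
# urllib.request.urlopen(
#     urllib.request.Request(url, data=body.encode(), method="PUT"))
```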
Challenges we ran into
Only one team member had a set of Philips Hue lights, so most of the time we had to perform testing with his set of lights.
What we learned
Not to procrastinate ;)
What's next for HUEge Automation!
At some point, we would like to create an integration with some sort of speaker so we can play music at different parts of playbooks - like the Mission Impossible theme when the analyst begins investigating.
Built With
api
philips
philips-hue
python | HUEge Automation | How many lights does it take to wake up a SOC analyst? | ['Mitch Densley', 'Lauren Lee', 'Ashley Richardson', 'Mohit Mohta'] | [] | ['api', 'philips', 'philips-hue', 'python'] | 30 |
10,347 | https://devpost.com/software/kubernetes-container-stig-cis-compliance-and-remediation | Scanning a container for vulnerabilities
Inspiration
DevSecOps is on everyone's mind right now, and although there are existing XSOAR integrations for k8s distros like Google Kubernetes Engine, there does not seem to be a generic Kubernetes integration that can talk to clusters using only the k8s API. This capability is important for runtime container security and container image policy compliance, as it gives operators the ability to scan and audit their containers and clusters without installing new software or pods in the cluster.
What it does
This project provides runtime container security and policy for Kubernetes clusters using the K8s REST APIs only. Unlike other container security solutions, such as PAN's Prisma or scanners from Aqua Security and others, it does not require installing additional pods into the cluster; containers and clusters can be scanned for configuration weaknesses and security vulnerabilities remotely, much like the CLI tool kubectl works. The integration works with any Kubernetes distro, like Red Hat OpenShift or Azure Container Service, by inspecting object specifications and executing shell commands inside running containers.
Cluster pod, controller, service, and route configurations can be audited against benchmarks from STIG/CIS/NIST, and running containers inside pods can be scanned for vulnerable operating system packages as well as application security vulnerabilities affecting .NET, Node.js, Python, etc. applications.
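Auditing an object specification amounts to fetching its JSON from the API server (e.g. GET /api/v1/namespaces/&lt;ns&gt;/pods/&lt;name&gt;, a standard Kubernetes REST path) and checking fields against benchmark rules. The checks below are illustrative benchmark-style rules, not the actual STIG/CIS content used by the project.

```python
# Sketch: benchmark-style checks against a pod spec as returned by the
# Kubernetes REST API. Rules here are illustrative, not real STIG/CIS items.
def audit_pod(pod: dict) -> list:
    findings = []
    for c in pod.get("spec", {}).get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if not sc.get("runAsNonRoot"):
            findings.append(f"{c['name']}: may run as root")
        if c.get("image", "").endswith(":latest"):
            findings.append(f"{c['name']}: mutable ':latest' image tag")
    return findings

pod = {"spec": {"containers": [
    {"name": "web", "image": "nginx:latest",
     "securityContext": {"privileged": True}}]}}
issues = audit_pod(pod)
```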
Integrations
Kubernetes
Vulners.com
Playbooks
Container auditing using benchmarks like STIG RHEL 7.
Container vulnerability scanning using the Vulners.com and other vulnerability scanners.
Built With
kubernetes
vulners
xsoar
Try it out
github.com | Kubernetes container STIG/CIS compliance and remediation | Automate auditing containers for security benchmark compliance using the Kubernetes REST/Python APIs and the Vulners.com vulnerability scanner. | ['Allister Beharry'] | [] | ['kubernetes', 'vulners', 'xsoar'] | 31 |
10,347 | https://devpost.com/software/ztp-of-prisma-access | after commit
Inspiration
My first inspiration came after we lost a deal to Zscaler. One of the main concerns the customer had was that Zscaler had a one-button deployment tool/integration to fully deploy ZIA (Zscaler Internet Access).
My second inspiration was the fact that SEs need to configure Prisma Access manually for each POC. Not all SEs have access to a Prisma Access tenant to test and validate their deployment; leveraging Cortex XSOAR, we are able to fully automate the deployment of:
Service Setup
Remote Network
Service Connection
Mobile Users
Our third inspiration was to speed up the process for production deployment. This playbook is able to deploy Prisma Access automatically from the ground up. We can leverage this playbook for PS engagements, POCs, and customer deployments. During PS or production deployment, we usually need to configure a lot of Remote Networks or Service Connections. This playbook can bulk import the configurations for all your RN or SC IPsec tunnels with the right bandwidth and IPsec configuration, and onboard the configuration in the plugin.
When this playbook ends, you commit the configuration and you are able to use Prisma Access right away.
What it does
This playbook automates the entire process of deploying and configuring Prisma Access. This playbook is configuring the following.
Service Infrastructure options (Subnet, Template and Template Stack)
Mobile Users Onboarding (Global Protect and Gateway Configuration)
Mobile Users IP Pool
Mobile User DNS configuration
Authentication Profile
Mobile User IP Pool
Zone Creation for Mobile Users (Mobile-Trust , Mobile-Untrust)
Zone Creation for Remote Networks (RN-Trust , RN-Untrust)
Creation of the Device Group for all Prisma Access Devices(SC,RN,Mobile Users)
Assigning the right zone to the right option in the Prisma Access Plugin (Trust zone to Trusted Zone and Untrust zone to Untrusted Zone)
Pre-Rule for Mobile Users
Pre-Rule for Remote Networks
Log Forwarding Profile for Remote Networks
Log Forwarding Profile for Mobile Users
Creation of All Standard IPSEC, IKE, Crypto , IKE Gateway Template for Service Connections
Creation of All Standard IPSEC, IKE , Crypto, IKE Gateway Template for Remote Networks
Creation of All Standard configuration and Template for Service Connections
Creation of All Standard configuration and Template for Remote Networks
Bulk Import of Tunnel for Remote Networks leveraging CSV
Bulk Import of Tunnel for Service Connection leveraging CSV
Onboarding of Bulk Import of Tunnel for Service Connection
Onboarding of Bulk Import of Tunnel for Remote Networks
How I built it
We decided to build this playbook leveraging PowerShell and a new custom integration for Panorama. We analyzed the panhandler skillet to make sure we were covering everything and that we could do it at scale, for everything at the same time.
We required advanced API commands to be run that were not handled via the existing Panorama Integration so we modified it and added an advanced command that was able to take in XPath and Element variables so essentially we would be able to utilize any command from the API.
The PowerShell scripts passed in all the required commands and variables and also handled the ingestion of CSV files along with loops to bulk execute commands. Utilizing the new Integration command we ensured that the API key remained safe within the integration setup.
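The "advanced command" idea above — a generic config call taking arbitrary XPath and element values — can be sketched compactly. The example below is in Python rather than the project's PowerShell, and the XPath, element, and CSV columns are illustrative; only the query-string parameters (type=config, action=set, key, xpath, element) follow the documented PAN-OS XML API shape.

```python
# Sketch: a generic PAN-OS XML API "config set" request built from
# arbitrary xpath/element values, fed from a CSV row of tunnel settings.
import csv, io
from urllib.parse import urlencode

def panos_set_query(api_key: str, xpath: str, element: str) -> str:
    return urlencode({"type": "config", "action": "set", "key": api_key,
                      "xpath": xpath, "element": element})

# illustrative CSV of remote-network tunnels (columns are assumptions)
rows = csv.DictReader(io.StringIO("name,peer_ip\nbranch-01,203.0.113.7\n"))
queries = [panos_set_query(
    "API_KEY",
    "/config/devices/entry/network/tunnel/ipsec/entry[@name='%s']" % r["name"],
    "<tunnel-interface>tunnel.1</tunnel-interface>") for r in rows]
# each query would be appended to https://<panorama>/api/?
```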
Challenges I ran into
We had to analyze how XSOAR works with PowerShell, and we had to bring the new integration into XSOAR so we could leverage the integration instead of arguments
Accomplishments that I'm proud of
Being able to have a fully automated Prisma Access Deployment in PowerShell with bulk import was key
What I learned
I learned a lot about how our API and PAN-OS are built and how they work. Being able to configure something manually, see the API call in the debug console, and then automate it in PowerShell was nice.
What's next for ZTP of Prisma Access
Make this playbook available for everyone, so that PS, CS, SEs, and customers use it to speed up onboarding and grow our market share.
The password for the Video is : &Z6XyK$&
Built With
access
api
csv
custom
integration
panorama
powershell
prisma
Try it out
drive.google.com | ZTP of Prisma Access | Deploying Prisma access is sometimes painful it and can be hard to remember all the details and what should be done in the right order. Leveraging Cortex XSOAR we can fully deploy Prisma Access ! | ['Dilan Kapadia', 'XAVIER TREPANIER-TAUPIER'] | [] | ['access', 'api', 'csv', 'custom', 'integration', 'panorama', 'powershell', 'prisma'] | 32 |
10,347 | https://devpost.com/software/global-protect-to-infoblox-dns-update | Inspiration
A customer wanted to know if it was possible to update Infoblox when Global Protect clients connected or disconnected from their Palo Alto Networks NGFW
What it does
There are multiple playbooks depending on how the customer wishes to trigger them. One playbook is set up to be job-triggered, while another is built to take syslog information from the NGFW and act on it.
How I built it
It was built using playbooks, integrations, and some automations (written in Python)
Challenges I ran into
The initial request was pretty simple. They wanted Adds and Updates, but later admitted they would also like Deletes. Deletion was more complicated for the job-triggered playbook, since you must maintain a state table to know what needs to be deleted. So from there I created the syslog-triggered playbook. The biggest challenge here was to carve the syslog messages down to just logon and logoff events. They needed to include enough information for all Adds, Updates, and Deletes.
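Carving the syslog feed down to logon/logoff events is essentially a filter-and-extract step. The sketch below uses a simplified stand-in message layout and regex, not the exact PAN-OS syslog format, purely to illustrate the classification.

```python
# Sketch: keep only GlobalProtect logon/logoff events from a syslog feed.
# The message format and regex are simplified stand-ins.
import re

EVENT_RE = re.compile(
    r"GlobalProtect gateway user (?P<action>login|logout) .*"
    r"user=(?P<user>\S+).*ip=(?P<ip>\d+\.\d+\.\d+\.\d+)")

def parse_gp_event(line: str):
    m = EVENT_RE.search(line)
    if not m:
        return None  # drop everything that isn't a logon/logoff event
    return {"action": m["action"], "user": m["user"], "ip": m["ip"]}

evt = parse_gp_event(
    "Jan 1 10:00:00 fw1 GlobalProtect gateway user login ok "
    "user=alice ip=10.0.0.5")
```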
Accomplishments that I'm proud of
I was proud of the creation of the previous-users/current-users state table in the context data. Also being able to cleanly take in syslog data from the NGFW to trigger the events.
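Comparing the previous and current state tables is what yields the Adds, Updates, and Deletes to push to Infoblox. A minimal sketch of that diff (data shapes are illustrative, not the playbook's actual context structure):

```python
# Sketch: diff previous vs. current user->IP state tables into the
# Add/Update/Delete sets pushed to DNS.
def diff_mappings(previous: dict, current: dict):
    adds    = {u: ip for u, ip in current.items() if u not in previous}
    deletes = {u: ip for u, ip in previous.items() if u not in current}
    updates = {u: ip for u, ip in current.items()
               if u in previous and previous[u] != ip}
    return adds, updates, deletes

prev = {"alice": "10.0.0.5", "bob": "10.0.0.6"}
curr = {"alice": "10.0.0.9", "carol": "10.0.0.7"}
adds, updates, deletes = diff_mappings(prev, curr)
```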
What I learned
I was very surprised that the NGFW didn't allow this sort of integration or taking DHCP leases from external sources.
What's next for Global Protect To Infoblox DNS Update
As larger customers start needing this sort of capability, I can see the need for more scale, like utilizing the user state tables in the NGFW to calculate the actions on the fly and to call the Infoblox APIs with the fewest number of calls.
Built With
api
automations
infoblox
integrations
playbooks
python
Try it out
github.com | Global Protect To Infoblox DNS Update | Many organizations use DNS to manage security and operational activities. Due to COVID-19 more companies require this. So I've created a project to sync Infoblox DNS with Global Protect. | ['Scott Brumley'] | [] | ['api', 'automations', 'infoblox', 'integrations', 'playbooks', 'python'] | 33 |
10,347 | https://devpost.com/software/air-gap-hopper | Air Gap Hopper
Offline Panorama content update diagram
Air Gapped Intelligence/Reputation query
The Hulk :-)
Inspiration
The idea for "Air Gap" toolbox came actually from a need of our customers.
Many of them have some kind of "Air Gapped" or "Offline - Disconnected from the internet" Networks.
Those customers are from different disciplines. They could be military, government, industrial and even financial like banks and trading companies with highly secure offline networks.
The fact that those networks are disconnected makes these customers "suffer" over simple things like upgrading their equipment or content efficiently, making real-time intelligence queries, or even simple things like whois. And that's before considering the automation of moving files in and out (most of which is done manually).
We thought it does not need to be like this and we have the perfect technology to help them!
What it does
So far, we have built two use cases:
Intelligence/Reputation query from internal/air-gapped network to the internet by using unidirectional diodes (UDP on the way out and files on the way in).
Getting content update files for the Panorama management server (to update content like A/V on firewalls) from the internet to the air-gapped network.
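The outbound half of use case 1 — a query goes out over a unidirectional diode as a UDP datagram, and nothing can come back the same way — can be sketched in a few lines. Addresses, ports, and the message shape are illustrative; here both ends run on localhost purely so the example is self-contained.

```python
# Sketch: fire-and-forget reputation query over UDP (diode outbound leg).
# Over a real diode nothing returns; the answer comes back later as a file.
import json, socket

def send_query(sock, addr, indicator: str):
    sock.sendto(json.dumps({"query": indicator}).encode(), addr)

# stand-in "external side" listener; local only for the demo
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_query(tx, rx.getsockname(), "198.51.100.9")
received = json.loads(rx.recvfrom(4096)[0])
```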
How we built it
We used two XSOAR servers to accomplish this task. One internal and one external. Each use-case has two playbooks that run on each server and depend on each other in order to work.
We mostly utilise existing integrations and automations, but had to do additional development to accomplish all our goals. We added:
5 New commands for Panorama and SMB
1 Automation
1 Incident type
1 custom field
More info in the videos of the usecase.
https://youtu.be/Ss3K9jpwzic
https://youtu.be/fQxhtzYX3R4
Challenges we ran into
We had some issues with the Panorama API but eventually managed to solve them; now it runs smooth as butter :-)
Another issue was our plan to meet and work together for two straight days, which was shattered by the lockdown, so we had to move to Zoom, a less convenient option for a hackathon.
Accomplishments that we're proud of
We are really proud to have assembled a diverse team (2 SEs, 1 SA, and 1 Support Engineer; one female and three male participants) of people from different roles in the company, and to have created something that in our opinion will be very useful for our customers.
#disruption #collaboration #execution #integrity #inclusion #innovation
What we learned
We learned a lot about the lines of work of other roles and other technologies. Every one of us deepened their skill in their main technology and added skills in technologies mastered by peers.
What's next for Air Gap Hopper
We are not stopping here. We have many more "Air Gap" use-cases in our mind.
Just few examples:
Getting logs for troubleshooting out of secure networks after "masking" all sensitive data.
Getting more types of files in, after using CDR, sandboxing, and AV inspection.
Getting data from monitoring/analytics systems out to the main management system.
Automation in ICS networks (the industrial and IoT world).
"Helping" ICS networks with no SIEM get their logs out to a main SIEM/analytics system.
And many more :-)
PS - we are here not for the money , we want the Jackets! (money would be ok as well though...)
Built With
python
Try it out
github.com
youtu.be | Air Gap Hopper | This content pack is set to be a "toolbox" for organisations with "Air Gap" networks. There are currently two use-cases. Reputation query across air gap and offline Panorama content update. | ['Alex Pekarovsky', 'Valentin Zamy', 'Itzhak Zorenshtain', 'Meytal Mizrahi'] | [] | ['python'] | 34 |
10,347 | https://devpost.com/software/drew-bar-patrick | automated email confirming block
top level playbook
URL EDL
block file playbook
Inspiration
Working with multiple enterprises, we see a real lack of control in the SOC for many. We wanted to design something that empowers the SOC and gives it instant access to block malicious IOCs, and also to curate its own IOCs to be blocked with an automated job.
TIM is an extension of SOAR, and Automatic Response (AR) in turn is a natural extension of TIM.
What it does
Vendor and product agnostic, this content pack blocks IOCs with ease. Within the Threat Response layout, each IOC (file, IP, URL, account, domain) can also be unblocked: when an investigation is ongoing, the SOC can tidy up their work by unblocking the IOC.
How we built it
With a lot of help from Bar in Engineering and Drew, our beloved SA.
Challenges we ran into
Normal paid work gets in the way a bit :) We would have liked to test on more products, but we could only get our hands on XDR, NGFW, and a Checkpoint FW
Accomplishments that we're proud of
Working the last few weekends and as an international team!
What we learned
Working across different departments with different skill sets can enable innovation.
What's next for Int. League Of XSOARdinary Gentlemen
We really want to test out the Search and Destroy API in XDR when it comes out!
Built With
checkpoint
panw
python
xdr
Try it out
despin.lab.demisto.works | Int. League Of XSOARdinary Gentlemen: Threat Response | Use case demonstrating the natural extension of TIM: TIM + AR(Automatic response). SOC manager can take back control to block or unblock malicious indicators with the touch of a button in XSOAR | ['Patrick Bayle', 'Drew Masters', 'Bar Katzir'] | [] | ['checkpoint', 'panw', 'python', 'xdr'] | 35 |
10,347 | https://devpost.com/software/app-id-magic | Inspiration
Robert Lemm's brainchild
What it does
This project was created to increase visibility into a network beyond the Palo Alto Networks provided App-IDs, reduce the attack surface, and greatly reduce the amount of time it takes to research and create custom App-IDs, most of which would normally be a manual process. Additionally, it provides an easy way to write custom App-IDs using machine learning, based on the traffic patterns of unknown applications and on querying threat logs for ssl and web-browsing traffic traversing Palo Alto NGFWs. All of the initial research has been automated and is presented to XSOAR in the form of an incident. Once the playbook associated with the incident is completed, the new custom App-ID is pushed to the NGFW/Panorama. URL-based App-ID learning takes a threat log feed, displays the top-10 URLs with ssl or web-browsing as the application, and creates an incident with the relevant data to be selected and pushed to the NGFW or Panorama. The App-ID engine to discover payloads/URLs was created in a previous project. We used XSOAR to provide a mechanism to view the results from the discovery process and complete the process of creating custom App-IDs on the NGFW/Panorama.
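The URL-based learning feed boils down to aggregating logs whose application is still generic (ssl or web-browsing) and surfacing the top talkers as custom App-ID candidates. A toy version of that aggregation, with invented log field names:

```python
# Toy sketch of the URL-based App-ID learning feed: count URLs still
# classified as ssl/web-browsing and surface the top candidates.
from collections import Counter

def top_appid_candidates(logs, n=10):
    urls = Counter(entry["url"] for entry in logs
                   if entry.get("app") in ("ssl", "web-browsing"))
    return urls.most_common(n)

logs = [{"app": "ssl", "url": "updates.example.com"},
        {"app": "ssl", "url": "updates.example.com"},
        {"app": "dns", "url": "resolver.example.net"},
        {"app": "web-browsing", "url": "intranet.example.org"}]
candidates = top_appid_candidates(logs)
```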
How we built it
We used Python, the Elastic Stack, and unknown-traffic logging to capture, cluster (machine learning), and produce data to create an incident based on the clustering results. In XSOAR, we created playbooks, automations, classifiers, layouts, and custom fields to provide an easy workflow for completing the configuration process for the NGFW/Panorama.
Challenges we ran into
Docker debates
Lots of discussions around the workflow process.
Debates around how to exchange/move data to XSOAR.
Accomplishments that we're proud of
Everyone on the team contributed heavily to the process of getting results and formatting the data properly. The most difficult task was interpreting XSOAR's context data to produce the correct XML structure for Firewall/Panorama commits. We had to bring together a diverse team to learn from each other's expertise. Our team's expertise consisted of automation, NGFW, and security backgrounds. No team member had all of these, which truly made it a team effort in formulating a solution.
What we learned
A ton about XSOAR, but more importantly, how to automate workflows using incidents, playbooks, automations, classifiers and layouts.
What's next for App-ID Magic
Manual capture upload for analysis.
Add approval process to validate newly created Custom AppID.
Add reports for custom App-IDs (number of App-IDs discovered/created, etc.).
Data enrichment from incident and telemetry info.
Create packaging for the back-end process (Docker container) for ease of deployment.
Built With
api
python
rest
xml
Try it out
github.com | App-ID Magic | This will make Creating Custom App-ID's a breeze in any environment | ['Rod Gonzalez', 'Robert Lemm', 'Todd Walker', 'Scott White'] | [] | ['api', 'python', 'rest', 'xml'] | 36 |
10,347 | https://devpost.com/software/devsecops-automation-and-orchestration | Architecture
Playbook1
Playbook2
Playbook3
Playbook4
Playbook5
Playbook6
Inspiration
The DevSoarOps project is inspired by the tremendous success of SOAR technology in the SOC: from one angle, SOAR can easily tap into a DevOps software factory and cross the SOC boundary; from another angle, the SOC service catalog can expand to address a new territory of use cases:
DevSecOps Use Cases
Three factors, in my view, can enable organizations - with the help of SOAR technology - to shift security as far left as the planning stage of a continuous integration pipeline and put DevSecOps within reach. These factors are:
Software-Defined Everything, including security controls, provides a layer of abstraction that allows the functions of these controls to be called in a CI/CD pipeline.
The market's decision that directed security vendors to open their platforms for integration through standard API interfaces. Closed platform offerings are no longer in play, no matter how good those platforms are.
Automation and orchestration in the security space has already started with SOAR technology, which can easily tap into the DevOps ecosystem.
CI/CD orchestration tools such as Jenkins, CircleCI and others were primarily built by and for developers. SOAR is better positioned to cover this orchestration gap between DevOps and SecOps for the following reasons:
While CI/CD orchestration pipelines are arguably easy for developers to read and troubleshoot, SOAR provides the same orchestration workflow in two different formats that are readable by both developers and security analysts.
SOAR provides far more to a DevSecOps ecosystem than a CI/CD orchestrator does:
Collaboration between team members in the ecosystem.
Case management.
Central reporting and a long list of out-of-the-box integrations with security tools.
What it does
With a couple of additional integrations, playbooks, fields - and UX components at a later stage - XSOAR can be turned into a DevSoarOps orchestrator that taps into a DevSecOps ecosystem and solves for a spectrum of use cases at different stages of CI/CD pipelines.
From threat modeling in the Planning stage, to IaC security in Dev, static code analysis in Build, post-deployment scans in Deploy, and Monitoring/responding to incidents once the code is running in production.
DevSoarOps does that via a number of integrations with different software factory tools:
IDE's
Code Repo Providers
CI/CD Orchestrators
Code Compilers/Builders/Packagers
Workload Orchestration Platforms
Configuration Automation Tools
Threat Modelers
SAST/DAST/IAST Tools
Vulnerability Scanners
Networks Security Tools
Asset Management Tools
Log Management Tools
Ticketing Systems
And a number of DevSecOps automation playbooks; the following are some of the playbooks that I am working on:
Plan: New Sprint Threat Modeling Playbook
Dev: Discover and Eradicate APP & IaC Code Vuln’s and Policy Violations
Build: Identify vulnerabilities in the dependent components
Test: Tests and vulnerability management
Deploy: Policy-as-Code Enforcement
Monitor: Security Incident Investigation and Response
How I built it
I started planning for this project by mocking a software factory that has a number of tools in two CI & CD Pipelines:
I have coded two new integrations to implement the "Discover and Eradicate APP & IaC Code Vuln’s and Policy Violations" automation/orchestration use case (above):
A GitHub long-running integration to process Git webhook deliveries.
An LGTM (a SAST cloud service built on CodeQL) integration with the following commands:
lgtm-get-project-by-url to get the SAST project details
lgtm-get-project-config to get Extraction and Analysis config by LGTM Project ID
lgtm-get-analysis-status to get PR/Commit analysis results by analysis ID
lgtm-get-alerts-details to get Alerts Details by Analysis ID if any alerts triggered
lgtm-run-commit-analysis to run on-demand SAST analysis on one Commit
lgtm-run-project-query to run an on-demand SAST analysis on a GitHub repo
And then created two new incident types along with new custom fields:
New Git PR incident
New App Task incident
Multiple playbooks are created to achieve this sample "Discover and Eradicate APP & IaC Code Vuln’s and Policy Violations" use case:
A PR triage playbook that parses and records the PR details once the PR webhook delivery is received by the new GitHub long-running integration.
A GitHub App Task analysis playbook that parses the received GitHub App Task delivery, creates vulnerability indicators if any vulnerabilities are found, and then links the App Task incident and indicators to the PR incident.
The App Task analysis Playbook has two sub-playbooks:
An LGTM Task Analysis playbook, which parses the LGTM task deliveries and creates SAST vulnerability indicators, if any.
A Prisma Cloud Task Analysis playbook, which parses the Prisma task deliveries and creates IAST vulnerability indicators, if any.
Challenges I ran into
XSOAR's UX was built mainly for SOC analysts; ingesting DevOps data into the SOAR would require additional UX components that map a pipeline of tasks. I was able to work around that by using multiple incident types and playbooks that map incidents (pipeline tasks) together.
Accomplishments that I'm proud of
I demoed the "Discover and Eradicate APP & IaC Code Vuln’s and Policy Violations" playbook to a major bank with an advanced DevOps ecosystem, and the feedback was a rewarding validation of the effort put into this project.
What I learned
Evaluating security controls in a very competitive market is no longer about a nice UI nor an integrated platform from one vendor, but rather about the openness of these controls to integration with other controls and how much automation these controls offer.
SOAR Technology is well positioned to shift security to the very left of the software development life cycle.
What's next for DevSecOps Automation and Orchestration
More integrations and more playbooks :)
Built With
github
lgtm
prismacloud
python
xsoar
Try it out
github.com | DevSecOps Automation and Orchestration | This project aims to run security operations at the same speed of a modern DevOps Eco-System with an orchestrated layer of SOAR defined and abstracted security controls integrated in CI/CD Pipelines. | ['Ayman Mahmoud'] | [] | ['github', 'lgtm', 'prismacloud', 'python', 'xsoar'] | 37 |
10,347 | https://devpost.com/software/xsoar-inspector | Mr Audit Inspector
Playbook reference
XSOAR Architecture: Audit PB & Integration
Inspiration
More Developers, More Alterations, More Controls
What it does
The XSOAR Audit integration fetches and dumps the user audit logs. Then, the Audit Inspector playbook generates the XSOAR audit report for deleted/modified content, which is sent over email to XSOAR administrators or content owners.
How we built it
We have multiple teams and dedicated tenant/client owners for XSOAR content. There were ongoing challenges in tracking content changes, and similar requests were received from tenant owners/clients as well.
To address this, we created the XSOAR Audit integration, which fetches and ingests the user audit logs for all tenants. Then, the XSOAR Audit playbook runs as a job on top of the XSOAR Audit integration to generate the report for deleted content from XSOAR.
Challenges we ran into
Ingesting and parsing XSOAR Audit logs in SIEM.
Generating Audit reports via Playbook.
Accomplishments that we are proud of
We were able to resolve the client challenges and enabled them to meet the compliance requirements.
What we learned
We explored various XSOAR API use cases and learned the usage of DT in our playbook-related tasks.
What's next for XSOAR Inspector
We will be using it as a part of our Tenant Creation checklist, so that our clients and XSOAR Tenant owners can track the content level changes.
Built With
json
python
rest
xsoar
Try it out
github.com | XSOAR Inspector | Mr. Audit at work | ['Mohan Mittal', 'Ashish B'] | [] | ['json', 'python', 'rest', 'xsoar'] | 38 |
10,347 | https://devpost.com/software/content-pack-management | Inspiration
After upgrading from version 5 to version 6, I had a lot of pack updates to apply on a daily basis. This pack removes the need to manually update each pack one by one. It also allows me to remove all the packs I no longer use, again without having to go through them one by one.
What it does
Mass removes or updates Content Packs in XSOAR.
How I built it
Using some string, gaffer tape and sticky back plastic.
Challenges I ran into
Understanding API commands in XSOAR that are not actually documented, nor supported.
Accomplishments that I'm proud of
I wrote it all with my eyes closed! Well, I didn't really. But that would have been something if I did!
What I learned
The irony of managing content in the marketplace using a pack that is in the marketplace.
What's next for Content Pack Management
Just have to keep it updated if the API changes at all.
Built With
python | Content Pack Management | This small pack allows for mass updating and removal of content packs from the Marketplace. | ['Adam Burt - Demisto'] | [] | ['python'] | 39 |
10,347 | https://devpost.com/software/github-secrets-detection-in-xsoar | Integration in the UI
GitHub Secret Incidents
Incident Details
Playbook
Inspiration
Shift-left security is the practice of embedding security into software at the earliest stage of the development process by performing static code analysis, vulnerability scans and also secret detection to avoid data leaks. Most of these are set up by DevOps engineers and developers, with different tools, priorities and schedules, in developer Git pre-commit hooks or within the CI/CD pipeline.
Can we have an XSOAR automation pack to quickly configure a GitHub repository URL, scan new commits in a branch dynamically, and create incidents for the SOC department to investigate?
What it does
I have created an XSOAR automation pack for GitHub secrets detection. This is the list of features:
It can automatically detect the new commits in a branch of a repository.
It supports GitHub.com and GitHub Enterprise.
It can scan all the commit histories of an existing repository.
It supports customized file name and secret key patterns.
How I built it
First, I used demisto-sdk to scaffold an empty project. The integration is written in Python. To detect secrets in file content and in certain file names, I integrated truffleHog (https://github.com/dxa4481/truffleHog) and shhgit (https://github.com/eth0izzle/shhgit/blob/master/config.yaml), which has a nice list of patterns for file names and paths.
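truffleHog's core heuristic is entropy scanning: long base64-looking tokens with high Shannon entropy are far more likely to be random key material than natural text. A minimal, self-contained sketch of that idea (the threshold and names are illustrative, not this pack's actual code):

```python
import math
import re

# Long runs of base64-style characters are candidate secrets.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{20,}")

def shannon_entropy(token):
    """Bits of entropy per character of the token."""
    if not token:
        return 0.0
    probs = (token.count(c) / len(token) for c in set(token))
    return -sum(p * math.log2(p) for p in probs)

def find_secret_candidates(text, threshold=4.5):
    """Flag long base64-looking tokens whose entropy suggests random
    key material rather than words -- the heuristic truffleHog uses."""
    return [t for t in BASE64_RUN.findall(text) if shannon_entropy(t) > threshold]
```

Real scanners walk every blob in the git history and combine this with the file-name/path patterns that shhgit provides.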
Challenges I ran into
There are many secret-detection libraries with very different capabilities. For example, Yelp's detect-secrets can only scan the current source code, not the git history, and relies on an interactive workflow to reduce false positives. Picking one that's easy to use was a challenge.
TruffleHog is a nice library of only ~400 lines of code. One of its options, --since-commit, is broken, and I had to find a patch for that.
As of now, demisto-sdk has a bug in uploading the new layout container JSON.
Accomplishments that I'm proud of
I think this integration is unique and agile. It's easy to set up, and we already have incidents created for the SOC team. The team can also improve it by tuning the patterns.
What I learned
This is actually only the second integration I have worked on, so I am still fairly new to the XSOAR area. Doing this project is purely for the "fun" of it, and my team, Palo Alto Networks IT First Customer, is encouraging me to do so.
What's next for GitHub Secrets Detection in XSOAR
We should add some unit tests.
Besides doing secret detection, static code analysis can also be integrated.
Demo Video
That picture of my daughter Johanna and me was taken 6 years ago. I found it funny that we were working on our "laptops" together, and now she is already helping me with the demo!
Built With
demisto
demisto-sdk
git
github
gitpython
python
trufflehog
xsoar
Try it out
github.com | GitHub Secrets Detection in XSOAR | An XSOAR integration to search through the GitHub repositories for secrets in commit history. | ['Matthew Kwong'] | [] | ['demisto', 'demisto-sdk', 'git', 'github', 'gitpython', 'python', 'trufflehog', 'xsoar'] | 40 |
10,347 | https://devpost.com/software/failed-login-attempts | Main-playbook
Sub-playbook
Query result from SIEM (Splunk)
Email notification
Password Generator
Active Directory Integration - Reset Password, Disable account, Enable account
Built With
ad
email
passphrase-random-password
splunk
xsoar
Try it out
github.com | Workflow of Failed login attempts | Reduce the workload of daily operations since internal users always forget password and try to brute force their own account. | ['longlongtino Yiu'] | [] | ['ad', 'email', 'passphrase-random-password', 'splunk', 'xsoar'] | 41 |
10,347 | https://devpost.com/software/asg-playbook | Inspiration
Playbook
Built With
amazon-web-services | Playbook | Playbook | ['Tri Labs'] | [] | ['amazon-web-services'] | 42 |
10,347 | https://devpost.com/software/automated-content-updates-in-the-marketplace | Inspiration
Inspired by the marketplace and general usability.
What it does
Automatically updates all content packs that have updates available in the marketplace, saving the user the hassle of clicking through multiple update procedures.
How I built it
Built in Python 3 via REST API calls to the marketplace APIs.
Challenges I ran into
Figuring out the dependency management and sub-versioning of content packs.
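The marketplace endpoints themselves are undocumented, so no API calls are shown here, but the dependency side of the problem reduces to updating packs in an order where every pack's dependencies are updated before the pack itself, i.e. a topological sort. A hedged sketch of that ordering step (pack names and the data shape are made up for illustration):

```python
def update_order(packs, deps):
    """Return an update order in which every pack appears after all of
    its dependencies (a depth-first topological sort)."""
    order, seen = [], set()

    def visit(pack):
        if pack in seen:
            return
        seen.add(pack)
        for dep in deps.get(pack, []):  # update dependencies first
            visit(dep)
        order.append(pack)

    for pack in packs:
        visit(pack)
    return order
```

Issuing the install/update requests in this order avoids updating a pack before the packs it depends on.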
Accomplishments that I'm proud of
Solving the subversioning validation issue.
What I learned
Truly anything inside of XSOAR can be automated. If you can do it in the UI, you can automate it!
What's next for Automated content updates in the marketplace
Automated content updates with multi-threaded API posts for maximum performance.
Built With
automation
playbooks
python
restapi
xsoar
Try it out
github.com | Automated content updates in the marketplace | This is a workflow for updating all installed and expired content automatically via REST API w/dependency support for XSOAR's 6.0 marketplace. | ['Steven Yang'] | [] | ['automation', 'playbooks', 'python', 'restapi', 'xsoar'] | 43 |
10,347 | https://devpost.com/software/ntts-soc-analytics-soar-hackathon-entry | Integration
Playbooks
Automations
Analysis
Response
Full Demo Video
https://www.youtube.com/watch?v=XoNHNvx4I3Y
Inspiration
The Little Engine That Could.
What it does
Provides enhanced capabilities for dealing with LogRhythm Cases and Alarms. In addition to optionally fetching Cases from a LogRhythm instance, it provides the following commands:
lr-add-alarms-to-case
Adds the specified alarms to the specified case.
lr-add-case-note
Adds note evidence to a Case.
lr-create-case
Creates a Case in LogRhythm
lr-drilldown-on
Drills down on an alarm.
lr-update-case-status
Update the status of a case by sending the numerical status code.
lr-update-case-summary
Updates the case summary.
How we built it
One Playbook task at a time.
Challenges we ran into
We had use cases requiring Case management functionality. Since existing Integrations did not provide the needed commands, our first challenge was learning how to write one of our own. This in turn led to the need to write customized Playbooks to handle other LogRhythm-specific requirements. Combining like Alarms into a single Case was perhaps the most complex problem we faced.
Accomplishments that we're proud of
Developing our own Integration to handle Case management
Creating an entire end-to-end solution for working with LR Alarms and Cases
What we learned
Jacob: That Tony doesn't comment his code or follow all conventions.
Tony: That he should better document his code and follow all conventions.
Subhanga: It's not as easy as we make it look.
What's next for wRESTling with LogRhythm
The team will be happy to take a REST from LogRhythm.
Built With
magic
python
xsoar
Try it out
github.com | wRESTling with LogRhythm | An enhanced LogRhythm Integration with case management capabilities and sample playbooks. | ['Anthony Steckman', 'jacob mohrbutter', 'Subhanga Dixit'] | [] | ['magic', 'python', 'xsoar'] | 44 |
10,347 | https://devpost.com/software/the-hive-project-pack | Inspiration
No great inspiration here. Just to show that we can integrate with ANY platform.
What it does
Simply integrates with The Hive Project and supports mirroring of incidents.
How I built it
Using the XSOAR platform and Python.
Challenges I ran into
Understanding mirroring.
Accomplishments that I'm proud of
That it works.
What I learned
Mirroring process in XSOAR.
What's next for The Hive Project Pack
Not too sure yet.
Built With
python | The Hive Project Pack | Integration and layout for The Hive Project incidents and commands. | ['Adam Burt - Demisto'] | [] | ['python'] | 45 |
10,347 | https://devpost.com/software/wait-for-field | Inspiration
Playbooks allow us to wait for the results of a task, but not for the contents of a field. This pack aims to solve that.
What it does
The user can drop this sub-playbook into a playbook and define which field they are waiting on to be populated. Validation of population is defined by regex, so it can also wait for a specific value.
How I built it
Using one automation script that checks the field value and returns true or false depending on whether it matches the user-provided regex.
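As an illustration, the core of such a single-script check could look like the following (function and argument names are invented for the sketch, not taken from the pack):

```python
import re

def field_matches(field_value, pattern):
    """Return True when the incident field is populated and matches the
    user-supplied regex; an empty or missing field counts as unpopulated."""
    if not field_value:
        return False
    return re.search(pattern, str(field_value)) is not None
```

The playbook then polls this check in a loop until it returns true (or a timeout is reached) before letting the parent playbook continue.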
Challenges I ran into
None
Accomplishments that I'm proud of
It took about 30mins to make.
What I learned
Not everything useful has to take a long time to build.
What's next for Wait for field
Who knows. Maybe some pretty icons.
Built With
python | Field Polling | This pack is small, simple yet powerful. The pack contains 1 automation script and 1 playbook. The playbook allows the user to specify a field and wait for that field to be populated before continuing | ['Adam Burt - Demisto'] | [] | ['python'] | 46 |
10,347 | https://devpost.com/software/space-x | Inspiration
After seeing a submission from Harri, I thought it would be nice to follow up with another legendary company from Elon Musk - Space-X
What it does
The integration provides a lot of information about Space-X; primarily about the flight missions (both historical and future).
How I built it
The integration uses Python and was coded within the XSOAR platform. It leverages the r-spacex information set (as Space-X do not actually have an API to interact with). And no, you can't control any space shuttles or launches :o)
Challenges I ran into
None.
Accomplishments that I'm proud of
It works, to date, which is always nice.
What I learned
More about mirroring and the importance of deciding whether mirroring needs to be two-way or not. In this instance, it only needed to be inbound.
What's next for Space-X
Hopefully many successful missions as they take us closer to living on other planets.
Built With
python | Space-X | The Space-X integration pulls all information associated with the Space-X program. It can pull incidents (future flight missions) and it supports inbound mirroring too. | ['Adam Burt - Demisto'] | [] | ['python'] | 47 |
10,347 | https://devpost.com/software/xsoar-with-prisma-cloud | Inspiration There is a severe lack of integration between XSOAR and PCS.
What it does
Parses out the OOTB policies in PCS, and then emails the AWS owner of the S3 bucket.
How I built it
Python
Challenges I ran into
There is a lack of commands in the Prisma Cloud integration. I added new commands such as 'remediate.'
Accomplishments that I'm proud of
We can remediate right from XSOAR. We never have to log into PCS and handle the alert.
What I learned
What's next for XSOAR with Prisma Cloud
Built With
python | XSOAR with Prisma Cloud | What if XSOAR could pull in the Prisma Cloud alerts, fully qualify the URL, recommend CLI remediation, email the cloud account owner, and then finally remediate from XSOAR. | ['Erik witkop', 'Nicholas Ericksen'] | [] | ['python'] | 48 |
10,347 | https://devpost.com/software/ssl-verifier | Inspiration
I saw in the news, and many of my customers experienced, that MS Teams went down because Microsoft forgot to renew the certificate of one of their Teams infra servers. This sparked an idea to create an SSL verifier that gets the certificate of a site and its expiration date, so you can create an automated workflow around it to make sure the same does not happen to you.
What it does
Gets the certificate from a site and shows the issuer, the expiration date, and the time to expiration.
How I built it
Python automation script
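A minimal sketch of what such a script can do with only the Python standard library; the summarising step is split out of the network call so the expiry math can be tested offline (names are illustrative, not the pack's actual code):

```python
import socket
import ssl
from datetime import datetime

def parse_cert(cert, now=None):
    """Summarise the dict returned by ssl.SSLSocket.getpeercert():
    issuer organisation, expiry timestamp, and whole days left."""
    issuer = dict(rdn[0] for rdn in cert["issuer"]).get("organizationName", "unknown")
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    now = now or datetime.utcnow()
    return issuer, not_after, (not_after - now).days

def check_site(hostname, port=443, timeout=10):
    """Fetch the live certificate for a host and summarise it."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return parse_cert(tls.getpeercert())
```

An XSOAR job could run check_site daily over a list of hosts and open an incident whenever days_left drops below a chosen threshold.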
Challenges I ran into
-
Accomplishments that I'm proud of
A real customer pain point solved, with no additional products required.
What I learned
-
What's next for SSL Verifier
Possibly expanding it to monitor domains also, as I have seen a few e-commerce sites go down due to domain registration expiry.
Built With
python | SSL Verifier | Forgetting to monitor your SSL certificate statuses? Has your business seen downtime due to this? No more! Using XSOAR its simple to create workflow to monitor and make sure it does not happen anymore | ['Hruuttila Ruuttila'] | [] | ['python'] | 49 |
10,347 | https://devpost.com/software/tesla-smart-charger | Playbook
Inspiration
I own a Tesla Model S, and in my region the electric company offers spot prices for electricity. So by timing your charging right, you can save money each day when you charge your Tesla for the next day's adventures.
What it does
Get day ahead pricing from the electricity market provider (NordPool in my case)
Find the lowest price
Schedule a charge command to be sent to the car at the optimal time, to charge as cheaply as possible.
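The scheduling step above amounts to finding the cheapest contiguous block of hours in the day-ahead prices. An illustrative sketch (not the project's actual code):

```python
def cheapest_window(hourly_prices, hours_needed):
    """Given day-ahead spot prices (index = hour), return the start hour
    of the cheapest contiguous charging window and its total cost."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(hourly_prices) - hours_needed + 1):
        cost = sum(hourly_prices[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost
```

With 24 hourly NordPool prices and, say, a 3-hour charge, the returned start hour is when the charge command should fire.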
How I built it
An XSOAR automation to fetch electricity pricing, plus a new custom integration to talk to my Tesla over the Owners API.
Challenges I ran into
Day-ahead pricing is only published at 14:00; before then you get the same day's prices.
Tesla API is not officially documented.
Accomplishments that I'm proud of
Working scenario end to end for a bit unorthodox use-case :)
What I learned
More python.
What's next for Tesla Smart Charger
Enhance the Tesla integration commands to cover more API commands.
Calculate savings.
Built With
python | Tesla Smart Charger | Do you own a fleet of Teslas as a rental car company or as a cab company? Is your electric bill killing you every day when you need to charge your cars? Tesla Smart Charger to the rescue! | ['Hruuttila Ruuttila'] | [] | ['python'] | 50 |
10,347 | https://devpost.com/software/automationrisingprinceproject | xSoar Demo Playbook
Inspiration
Many teams may have existing Ansible playbooks they would like to use during an XSOAR investigation. The integration with AWX allows many of these preexisting workflows to be fired off to remediate various incidents.
What it does
This project can be used to call Ansible playbooks that live in AWX from an XSOAR playbook.
How I built it
An integration pack was created to package the integration script, and Vagrant was used to stand up the testing environment.
Challenges I ran into
Before this project I had never used XSOAR. This project forced me to learn a new platform to contribute to the hackathon!
What I learned
XSOAR is an incredibly scalable and extensible platform. You can integrate it with almost anything out there. The possibilities are endless!
What's next for IntegrationForAWX
Look into using the stored Ansible Facts to help classify and enhance the various fields in an XSOAR incident.
Built With
ansible
awx
python
vagrant
virtualbox
Try it out
github.com | IntegrationForAWX | xSoar playbook, automation and integration to integrate AWX | ['Daniel Prince'] | [] | ['ansible', 'awx', 'python', 'vagrant', 'virtualbox'] | 51 |
10,347 | https://devpost.com/software/pispinner | Inspiration
What it does
TBD
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for PISpinner
Built With
ai
python
rest | PISpinner | Identifies PI data on server logs | ['pavan mummareddy'] | [] | ['ai', 'python', 'rest'] | 52 |
10,349 | https://devpost.com/software/pitch-perfection-live | Main Menu
In-Game
In-Game
In-Game
We were inspired by other multiplayer rhythm games that we played together during the quarantine, such as Beat Saber and osu!. We were also inspired by karaoke and the idea of making an online game between two languages. We faced challenges with communication between the server and the client, as well as with getting the UI to look clean. We learned a lot about socket communication in Python and Java, as well as about using AI to detect pitches and about the FFT algorithm.
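The game's actual pitch tracking relies on CREPE and PyAudio, but the FFT idea mentioned above can be sketched as picking the strongest frequency bin of an audio frame (a simplification: real pitch detection must also handle harmonics and noise):

```python
import numpy as np

def dominant_pitch(samples, sample_rate):
    """Estimate the dominant frequency (Hz) of an audio frame by picking
    the strongest FFT bin -- a bare-bones form of pitch detection."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[int(np.argmax(spectrum))]
```

Scoring a sung note then reduces to comparing the detected frequency against the target pitch of the song at that moment.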
Here are the Discord Tags of our team members:
Ayaan - crimsonΩ#9323
Andrew - FencerDoesStuff#8634
Pavan - Poxter#0001
Brandon - RedPanda#4397
Lawson
Built With
ai
crepe
gson
java
json
neuralnetworks
opencv
org.json
pickle
pyaudio
pygame
python
sockets
wave
Try it out
drive.google.com
drive.google.com
github.com | Pitch Perfection Live | Pitch Perfection Live is a real-time multiplayer karaoke game, where players compete to sing the songs at the right pitch to get the most points. | ['Andrew Ogundimu', 'Lawson Wright', 'Brandon Pae', 'Ayaan de Silva', 'Pavan Kumar'] | ['1st Place - Apple Watch!'] | ['ai', 'crepe', 'gson', 'java', 'json', 'neuralnetworks', 'opencv', 'org.json', 'pickle', 'pyaudio', 'pygame', 'python', 'sockets', 'wave'] | 0 |
10,349 | https://devpost.com/software/movemnt | Landing page
Loading
Results
Note: Please feel free to try our web app out at:
https://82cc9ad70f8f.ngrok.io/
Update: This link is now disabled
View our youtube demo at:
https://www.youtube.com/watch?v=xnaw4taaLoI
Our discord usernames are: [redacted] and [redacted]
Inspiration
Recently, in light of the civil unrest, we’ve seen social media become a tool for empowerment. This empowerment, however, should not be limited geographically. We want all people to understand how trends form and continue to develop, and how where you are in the world affects the justice movements you see.
What it does
The user's experience is relatively simple: they only have to open the app and then input a trend they'd like to search. We take the trend and then analyze it. Perhaps the most important form of analysis we do is sentiment analysis. We trained an LSTM (long short-term memory) model, which is a type of recurrent neural network, to assign a sentiment score between 0 and 1. Alongside this, to illustrate visually where the tweets originate, we provide the locations where the tweeting is concentrated: from parsed-out location data, and from the user's profile location when that wasn't available.
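Before tweets can reach an LSTM they have to become fixed-length integer sequences for the embedding layer. A small pure-Python illustration of that preprocessing step (the project used Keras utilities for this; these names are made up):

```python
def build_vocab(texts, max_words=10000):
    """Assign an integer id to each word by frequency (1 = most common);
    0 is reserved for padding, and rare words beyond max_words are dropped."""
    counts = {}
    for text in texts:
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
    ranked = sorted(counts, key=lambda w: (-counts[w], w))
    return {w: i + 1 for i, w in enumerate(ranked[:max_words])}

def encode(text, vocab, maxlen=20):
    """Turn a tweet into the fixed-length id sequence an embedding + LSTM
    stack expects: unknown words are skipped, short tweets left-padded."""
    ids = [vocab[w] for w in text.lower().split() if w in vocab][:maxlen]
    return [0] * (maxlen - len(ids)) + ids
```

A Keras Embedding + LSTM + sigmoid stack then maps each such sequence to a sentiment score between 0 and 1.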
Of course, for parity, we offer a button to view a random tweet in the data set, as well as Google search links for entities named in the tweets.
How we built it
The majority of our project was coded in Python, with HTML/CSS/JS for displaying the information. We did the Keras/TensorFlow machine learning on Google Colab, then moved it over to a Jupyter notebook. That is, in turn, connected to a Jupyter notebook running a Flask server. We use heatmap.js for the interactive background on the loading page and utilize the Twitter API for the trending hashtags and to access the tweets. Named entity recognition comes from the spaCy API.
Challenges we ran into
Because our project has so many components, one of the largest challenges was the integration of the features. In particular, once one of our members had trained the RNN, we had to figure out how to integrate Keras and Flask. Because of various compatibility issues (including issues with downloading the model and then reuploading it to use), we ended up running the Flask server through the Jupyter notebook, which did mean we couldn't deploy it to a website, admittedly a stretch goal for us. We also found it quite challenging to decide between various services for what we were doing; settling on Mapbox for the map was a tradeoff we chose to take for its elegant and simple design.
Accomplishments that we're proud of
Because we are both visual learners, we wanted to make sure that we were able to understand the data visually. Therefore, instead of offering numbers, we gave visual representations. The heatmaps, in particular, were a great way to cover large areas of the map. The map also includes the entire world on it, to avoid limiting content and social trends to just the continental United States. This also meant making a fun interactive landing page that the user can draw on with their mouse, and displaying sentiment as a heatmap as opposed to just a collection of raw numbers.
Alongside this, we are glad we were able to offer some insight into what kind of information we gather. We are happy that we can give users the ability to randomly access a tweet from the set, and that we gave people the ability to search the terms that came up.
What we learned
We learned a lot about NLP (Natural Language Processing) through this. Initially, we wanted to use ML for the Named Entity Recognition, only to find a better dataset for sentiment analysis. Overall, we learned just how much information can be extracted from a text, and how much data there is in a tweet. From just one sentence, we were able to pull out the locations in the tweet, named entities, and the sentiment available in the tweet. On a more technical level, for machine learning, we learned about how an LSTM can better capture long-range dependencies and be very effective in NLP for sentiment analysis because of the multi-layer structure that lets them store information for longer.
What's next for Movemnt
First and foremost, we want to put this out for everyone to use, and that means deploying it to the web. While there is a tunnel that can be used to access the site, greater access means more people can use the product.
We also hope that the app is used for social justice initiatives, and we believe it has promise as a research tool to understand how trends may be confined geographically. When socially aware people use our app, they can see how far their trends and initiatives are moving.
Built With
flask
geopy
keras
nltk
python
tensorflow
Try it out
82cc9ad70f8f.ngrok.io | Movemnt | Trend Location and Analysis | ['Ivan Galakhov', 'Yash Parikh'] | ['2nd Place - Holy Stone Drone'] | ['flask', 'geopy', 'keras', 'nltk', 'python', 'tensorflow'] | 1 |
10,349 | https://devpost.com/software/predent | Home Page
Government Page
Upload Crash Data
Generated Heatmap
Map with Risk Reports
Drivers Page
Submit Witnessed Accident
Resident Report Map
Data Visualization Page (Please look at Website)
Intensity Heatmap
FAQs
Inspiration
Throughout the last few months, our team has received our permits and begun driving for the first time. We are now able to experience how dangerous the roads and infrastructure are. With our eyes finally opened to such an understated and risky problem, we set out to help solve it using technology.
Car crashes remain the leading cause of death for people under 30, meaning it is incredibly critical to understand and attack this problem. Specifically, we were incredibly shocked to find out that during COVID-19, several states have found an increase in fatal car crashes. We noted that most technological solutions combating this problem focus on driver education and safety, and while this is incredibly important, we focused our efforts towards a less addressed approach.
Thus, we took a different approach from the common hackathon project.
Instead of creating an application meant for general use, we developed an application specifically for state and city governments. We plan to implement our software as part of a nationwide government plan to promote smarter design of road infrastructure. Since governments often hire outside developers to build applications, we believe our website fills a normally unoccupied niche, and projects like this should be encouraged in the hackathon community. That said, we still included features that make the website useful to everyday drivers.
Thus, we developed PreDent, which analyzes road data through a machine learning algorithm to identify high-risk crash sites.
What it does
PreDent is a unique progressive web application that identifies the accident-prone areas of a city through machine learning. The core of our project is an ML model that inputs static features (speed limits, road signs, road curvature, traffic volume), weather (precipitation, temperature), human factors, and many other attributes to ultimately output a map of city roads with hotspots where collisions are likely. Note that our demo shows the process, but because our model is incredibly complex and large, the only way for us to deploy it is to get access to expensive, high-powered servers. Our model will work on any city’s dataset, but the data would have to be collected or provided to us.
First, government officials can upload a csv file of their collected traffic data, which many already have in private storage. This file is uploaded to Google Cloud, and we then input it into our model. Once we finish processing their data, we notify them via email. Our model then outputs: 1) coordinates of crash sites, 2) specific issues at each crash site, and 3) a heat map overview of the city. Additionally, using the model-generated coordinates, we create an interactive map using the Google Maps API.
With this information at hand, city designers can informatively improve their roads by determining where to fix roads, add additional signs, adjust speed limits, and more. This information is essential for promoting safer roads and infrastructure.
We also have a page for common drivers. Residents from partner cities can find a map with the hotspots of where crashes are likely. These heatmaps change on an hourly basis and by time of year to account for rush hours and temperature/weather. The common pedestrian or driver can also help improve the efficiency of our model by inputting data about crashes in their neighborhood by interactively placing pins on the map, which we aggregate with already collected data using Firebase.
Lastly, we have a Data Visualization page, where we show our process of analyzing data and determining which factors are important. We show our exploratory data analysis process and visualizations of key attributes. We used GeoPandas and Fiona to render these images. Instead of just uploading plots and graphs, we rendered our data into real dimensional visualizations and maps.
How we built it
After numerous hours of wireframing, conceptualizing key features, and outlining tasks, we divided the challenge amongst ourselves: Ishaan developed the UI/UX, Adithya connected the Firebase backend, Ayaan managed, trained, and developed the ML model and created heatmaps, and Viraaj developed our map system and integrated our heatmaps.
We coded the entire app in 4 languages: HTML, CSS, JavaScript, and Python (Python 3/IPython). Developing and optimizing our geospatial ML model was done through Jupyter Notebook and AI-Fabric from UIPath. We used JavaScript to create our maps and Google Cloud to store our data. We hosted our website through Netlify and GitHub.
After reading documentation, we developed our model and tested it on open-sourced data from Utah roads (from Medium) and produced the heatmaps. We also created a web scraper to collect data from state databases to create our training sets, scraping weather and road-infrastructure databases to add to our available data. We pinpointed thousands of crash sites as our positive samples and randomly sampled negatives from locations where crashes never occurred. We trained two models, a gradient boosting model and a neural network, and found that the gradient boosting model performed better. We documented all our progress in our Jupyter Notebook, which we recommend reading.
Challenges we ran into
The primary challenge that arose for us was training and deploying our model. It was incredibly difficult to find data; we were only able to find one publicly available dataset, from Utah. In addition, since we had never created a geospatial ML model, developing the model and creating maps with hotspots was our main challenge. We read lots of documentation to learn how frameworks like ArcGIS work. While we were not able to deploy our model, lacking an affordable yet high-computation web server, we made it functional regardless of the dataset, meaning as long as cities give us data, we can create heatmaps for them.
Accomplishments we are proud of
We are incredibly proud of how our team found a distinctive yet viable solution to revolutionize road development and driving. We are proud that we were able to develop one of our most advanced models so far, which was mostly possible through UIPath training. We are extremely proud of developing a solution that has never been previously considered or implemented in this setting and developing a working model.
What we learned
Our team found it incredibly fulfilling to use our Machine Learning knowledge in a way that could effectively assist governments in assessing roads and finding ways to make them safer, especially when there aren’t quick and effective ways to do so currently. Seeing how we could use our software engineering skills to impact people’s daily lives and safety was the highlight of our weekend.
From a software perspective, developing geospatial models was our main focus this weekend. We learned how to effectively build models and generate descriptive heatmaps. We learned how to use great frameworks for ML such as AI-Fabric from UIPath. We grew our web development skills and polished our database skills.
What is next for PreDent
We believe that our application would be best implemented on a local and state government level. These governments are in charge of designing efficient and safe roads, and we believe that with the information they acquire through our models, they can take steps to improve roads and reduce risks of crashes.
In terms of our application, we would love to deploy the model on the web for automatic integration. Given that our current situation prevents us from buying a web server capable of running the model, we look forward to acquiring a web server that can process high level computation, which would automate our service.
Our Name
PreDent has a few different meanings, which we’ve listed out below:
“Pre” means prior to an accident
“Dent” refers to denting a car during an accident
“Dent” is also short for a car accident, which we try to avoid
“PreDent” is very similar to “prevent”, which is the primary goal of our system
Built With
ai
css
esri
fiona
firebase
geopandas
geospatial
google-cloud
google-maps
html
javascript
jupyter-notebook
keras
machine-learning
pandas
python
sci-kit
tensorflow
ui
uipath
xgboost
Try it out
github.com
predent.tech | PreDent | Using ML to promote safer driving by predicting crash hotspots. | ['Adithya Peruvemba', 'Ishaan Bhandari', 'Ayaan Haque', 'Viraaj Reddi'] | ['1st Place (Sponsored by PEC)', 'Best Web Application', 'MacroTech Sponsored Prize', '3rd Place - Airpods', 'First Overall'] | ['ai', 'css', 'esri', 'fiona', 'firebase', 'geopandas', 'geospatial', 'google-cloud', 'google-maps', 'html', 'javascript', 'jupyter-notebook', 'keras', 'machine-learning', 'pandas', 'python', 'sci-kit', 'tensorflow', 'ui', 'uipath', 'xgboost'] | 2 |
10,349 | https://devpost.com/software/day-14-team-topato | The actual website:
https://day-14--i8sumpi.repl.co/instructions.html
This is our first hackathon. We are beginners.
Team members:
Kira Lewis (√-1 2^3 Σ π)
Jacqueline Shih (jacquelantern)
Emily Wang (asianpear)
Carmen Zhang (CarmenZ)
Built With
css
html
javascript
Try it out
github.com | Day 14 (Team Topato) | Quarantine boredom killer | ['Emily Wang'] | ['Best Beginner - Raspberry PI'] | ['css', 'html', 'javascript'] | 3 |
10,349 | https://devpost.com/software/worldlens | Inspiration
With the recent social issues occurring in our nation, we decided to create an informative app for kids and teens to educate themselves about these issues. We’ve created an easy-to-use app that allows the user to play stories on issues throughout the world! There are also additional resources for users who are interested and want to learn more. WorldLens is an easy way for people to educate themselves on issues occurring in the world!
What it does
WorldLens is an app that allows the user to play an interactive story for different social issues throughout the world. Users can also browse related information and resources within the app, allowing them to inform themselves on these issues.
How we built it
This app was created using Xcode, and the UI was built with SwiftUI. The data for all the resources was web scraped using code we wrote in Python and then added to Cloud Firestore, from which the app fetches it. To make the story, we designed the graphics ourselves. Additionally, the animations in our app come from the Lottie package.
Challenges we ran into
This was our first time working with SwiftUI, and one of our team members had no prior experience with coding, so it was difficult to code many aspects of our app, including the parts that heavily relied on SwiftUI, like the story. Additionally, we were unfamiliar with integrating Firebase with SwiftUI, so we spent a lot of time trying to figure it out.
Accomplishments that we're proud of
We are proud that we created a fully functional app which uses Cloud Firestore as well as a beautiful UI that also integrates animations. When we were researching these topics, we also learned about many issues that are occurring in the world right now. Additionally, we learned about many features of SwiftUI and were able to incorporate them in our app.
What we learned
Andy: This was my first hackathon and first time coding with SwiftUI. I learned how to make my first mobile app and I was able to get a bit more fluent with coding. I also learned how to webscrape and use Cloud Firestore.
Tracy: I learned how to use SwiftUI for the first time as well as web scraping and incorporating animations into the project. This was my first time making a project that heavily focused on UI, so I learned a lot about SwiftUI. I've also never officially learned Python, so I learned a lot from web scraping as well.
What's next for WorldLens
In the future, we plan to add more stories to WorldLens along with additional resources for the user to find and look at. We also want to add more issues occurring around the world and provide stories for them too.
We are beginners
Built With
firebase
firestore
lottie
python
swift
swiftui
xcode | WorldLens | A mobile app that opens your eyes to global social issues through interactive stories and resources | ['Tracy Wei', 'Andy Wei'] | [] | ['firebase', 'firestore', 'lottie', 'python', 'swift', 'swiftui', 'xcode'] | 4 |
10,349 | https://devpost.com/software/no-touch-disinfectant-wipes-dispenser-bukrio | The problem
Hey, I am Tanya Rustogi, and I got the idea for the wipes dispenser when I was thinking about how COVID-19 is affecting developing countries. My first thought was that to open a wipes container like Lysol's, you need to touch at least two surfaces, which can spread coronavirus. Additionally, having a container of wipes per person in an office or school is not realistic due to the shortage of disinfecting wipes. Then came the idea of an affordable, easy disinfecting wipes dispenser that can be used everywhere, from classrooms to day cares to shopping carts.
The solution
What this dispenser does is: when an object such as your hand comes within ten centimeters of the sensor, the motor starts moving. The motor is connected to a rod with rolled-up wipes on it, so its rotation unrolls the wipes and feeds them out of the container.
How to build
Each of the pins on the motor driver, except ground and VCC, is connected to a pin on the Arduino, as defined in the code. The trigger and echo pins on the sensor are also connected to the Arduino and defined in the code. The grounds and VCCs of both the motor and the sensor are connected to the ground and VCC of the Arduino, which is connected to power.
The sensor detects distance by measuring how long it takes a sound wave to come back. The code on the Arduino checks whether the sensor detects something within 10 centimeters of it; if so, it runs the stepper function, which causes the motor to run.
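The distance math the sensor relies on is simple: the echo time is a round trip, so distance is half the travel time multiplied by the speed of sound. The project's actual logic runs on the Arduino in C; this is just a Python sketch of the same calculation, with the 10 cm threshold from the description.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343   # cm per microsecond, at roughly 20 °C

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """The pulse travels out and back, so halve the round-trip time."""
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_dispense(echo_duration_us: float, threshold_cm: float = 10) -> bool:
    """Trigger the stepper when something is within the threshold."""
    return echo_to_distance_cm(echo_duration_us) <= threshold_cm

print(should_dispense(400), should_dispense(1000))   # ~6.9 cm and ~17.2 cm away
```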
The container is made from a Lysol container, hopefully making it cheaper for developing countries. The container has two holes: one for the wipes to come out from, and one for the motor. The motor is attached to the container with tape. The rod connects to the motor and is held on the other side through the hole already provided in the Lysol container, so when the motor rotates, the rod rotates as well.
What’s next
This is just a prototype, with more material, the final product would look cleaner with a box covering the circuits and the pcbs and circuits connected to the container.
What did I learn
I think the most important things I learned through this experience were time management, given the two-day constraint to build the whole thing, and perseverance, to keep trying no matter how many times the circuit and the code did not work as they were supposed to.
https://drive.google.com/file/d/1secnvbVoC_D83cX3iTIrF0KT7zyEYGL_/view?usp=sharing
Built With
arduino-uno
arduinoide
python
stepper-motor
ultrasonic-sensor
Try it out
github.com | No-Touch Disinfectant Wipes Dispenser | A prototype of a no-touch dispenser that is easy and affordable to make and could be used from cleaning tables to disinfecting carts. | [] | [] | ['arduino-uno', 'arduinoide', 'python', 'stepper-motor', 'ultrasonic-sensor'] | 5 |
10,349 | https://devpost.com/software/hotdog-stew | For HackMann this year we decided we wanted to design a fun and entertaining website since we’ve found ourselves pretty starved for entertainment throughout quarantine. We were inspired by The Bored Button, and wanted our website to provide a similar experience. Hotdog Stew is quick and easy game to play. You answer a series of 5 random controversial questions and receive a controversial score at the end. Such questions could include, “Is a hot dog a sandwich” and “Is Cereal a stew”? This website not only gives users the ability to plan the game on their own but also allows them to communicate with others in a comments section which appears at the end of each question.
We didn't remember much HTML when we entered this hackathon, so this was definitely a great refresher! We also learned about Flask, and we all participated in a very helpful workshop about it!
Hope you enjoy!
Also, this is all of our first hackathons, so we are beginners and our video is being shared on Google Drive.
Built With
flask
html
python | Hotdog Stew | A website for determining if hotdogs are sandwiches. | ['Jolie Nelsen'] | [] | ['flask', 'html', 'python'] | 6 |
10,349 | https://devpost.com/software/simplitize | Home Page
UI of the Summarizer Form
UI of the Question Answering Form
You can see the POST requests sent to our API here
Page shown after user submits
Our model got the right answer!!!
Inspiration
As students interested in data science and machine learning, we've found that a great way to stay up to date with this rapidly changing field is to read recently published academic papers. However, many of these papers are extremely long and tough to comprehend, and the abstract is often missing or uninformative. We also spent a lot of time going through papers that were similar to something we had read before without knowing it beforehand, due to the sheer length of each paper. Often we were looking for the answer to a specific question but didn't know where to look, as the answer was buried in 80 pages of technical terminology. To solve this, we developed Simplitize, a web app that helps you understand academic papers via NLP question answering and document summarization.
What it does
There are two features of Simplitize. First, the user can copy and paste a paper into our webpage, and we'll summarize it in under 10 sentences. Second, the user can copy and paste a paper into our software along with a question about the paper, and we'll provide them the answer to their question (or "None" if the paper does not contain the answer). I'll go more into how this works below.
How We built it
We built the frontend with HTML, CSS, and JavaScript and used the Mobirise builder to beautify it. Our backend is written in Python with the Flask framework. We also used PyTorch and BERT for question answering and NLTK for document summarization.
Document Summarization
There are two types of document summarization: extractive and abstractive. Extractive summarization selects the most important sentences from the document and keeps them intact, while abstractive summarization generates new sentences that paraphrase the big picture, which can introduce grammatical errors and make the result tough to control. We chose extractive summarization, as the purpose of our project was to make these papers easier to understand. We used natural language processing to give each sentence a rating for how essential it is to the big idea of the paper, ranked the sentences, and then presented the user with the most important sentences constructed into a holistic summary.
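A minimal, dependency-free sketch of this sentence-rating idea (the real app uses NLTK; the stopword list and scoring here are simplified assumptions):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "we"}

def summarize(text: str, k: int = 2) -> str:
    """Rate each sentence by the frequency of its words, keep the top k."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(s):   # a sentence matters as much as its words are frequent
        toks = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)  # keep original order

doc = ("Transformers dominate modern NLP. Transformers use attention. "
       "The weather was nice. Attention lets transformers model context.")
print(summarize(doc, k=2))
```

The off-topic weather sentence scores lowest and is dropped, while the sentences sharing the document's frequent terms survive.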
Question Answering
We wanted to go beyond the typical hackathon project of an article summarizer, so we integrated a relatively new deep learning architecture to provide question answering: a transformer. A transformer is a model used for seq2seq tasks, and we wanted to train one that could extract information from text. We found the Stanford Question Answering Dataset (SQuAD) and planned on training a transformer on it. However, training a model from scratch on SQuAD would take four days, and we only had 20 hours to go. To solve this, we applied transfer learning: we used a pre-trained transformer and performed hyperparameter tuning locally, which took ~4 hours. We then saved our model and integrated it with our Flask API to connect it to the frontend.
Challenges We ran into
There were two major challenges we ran into (in addition to a gazillion bugs):
As stated earlier, we ran into time constraints, preventing us from training a transformer on SQuAD from scratch. However, we solved this through transfer learning. We learned that transfer learning is used very commonly in the field of Natural Language Processing.
We had issues connecting our Flask API with Heroku, which is where I normally deploy the Flask APIs I write. Unfortunately, I didn't have time to debug this issue, so I ran the API on localhost and used ngrok tunneling to get an endpoint URL. When we put this into production, we plan on deploying it on either Heroku or PythonAnywhere, as those are more scalable.
Accomplishments that We're proud of
We have a fully functional web application that can be deployed, which can help many students get a better understanding of deep learning.
At the time of publishing this project, we believe ours is the only software that provides high-quality question answering specifically for academic papers; our project is novel and hasn't been done before!!
We learned about how we can apply transfer learning to natural language processing, something that neither of us had much experience with. This is applicable in the future, as most NLP algorithms are built via transfer learning.
What We learned
We learned how to apply transfer learning to Natural Language Processing, which neither of us had done before. We plan on continuing to use transfer learning when working on other projects.
We learned a lot about the applications of question answering, which is a relatively new field in NLP. We hope to apply this to other fields in the future.
What's next for Simplitize
We plan on hosting our API on PythonAnywhere, as it is clearly more scalable than running it on a local server. After that, we hope to deploy our website on simplitize.tech (in process of buying domain right now). We hope to get feedback on our project, and then reiterate.
Built With
bert
css3
flask
html5
javascript
nltk
python
pytorch
Try it out
github.com | Simplitize | Helping you understand complex academic papers via NLP Question Answering and Document Summarization | ['Kshitij Rao'] | [] | ['bert', 'css3', 'flask', 'html5', 'javascript', 'nltk', 'python', 'pytorch'] | 7 |
10,349 | https://devpost.com/software/scriptsense | ScriptSense
ScriptSense is a Visual Studio Code extension that uses AI for code completion.
Inspiration
Our team was inspired to create this project when we heard of OpenAI’s still relatively new GPT-2 unsupervised language model. It has been used by people for fun to predict/generate paragraphs, so we decided to take that and apply it to something useful: code completion! Popular code-completion solutions like IntelliSense are pretty good, but the problem is that they can usually only suggest the next word in the code. ScriptSense attempts to predict/generate the next few lines of code using OpenAI’s GPT-2 unsupervised language model.
What it does
ScriptSense completes the next few line(s) of your code. All you have to do is write some code or write a comment, then press “ctrl+space, enter” on Windows or “cmd+space, enter” on macOS. It works for most simple code, but will not work at all in code using external libraries, algorithms, classes, recursion, or any other more complex concepts. Given sufficient time and data for the model to train, ScriptSense should work for theoretically all programming languages.
How we built it
We first used TypeScript to get the text on the line that you typed in Visual Studio Code. Then we used Python to request the output data based on the input data from the server which was running the OpenAI GPT-2 1.5B unsupervised language model. Then we fed that data through a data pipe to the TypeScript & Javascript files in the Visual Studio Code extension, which we then would display as a suggestion in Visual Studio Code.
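GPT-2 itself is far too heavy to inline here, but the suggest-the-next-tokens loop the extension performs can be illustrated with a toy bigram model (purely a stand-in; the real extension queries the GPT-2 1.5B server over a data pipe):

```python
from collections import Counter, defaultdict

# A one-line "training corpus" of code tokens.
corpus = "for i in range ( 10 ) : print ( i )".split()

# Count which token follows which.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def complete(prompt: str, n_tokens: int = 4) -> str:
    """Greedily append the most likely next token, like a tiny completer."""
    out = prompt.split()
    for _ in range(n_tokens):
        options = nxt.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])   # greedy decoding
    return " ".join(out)

print(complete("for"))   # → "for i in range ("
```

GPT-2 replaces the bigram table with a deep language model, but the editor-side loop (send the current line, splice the predicted tokens into a suggestion) is the same shape.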
Challenges we ran into
We had some issues with conflicts with other Visual Studio Code extensions, as well as problems with random critical warnings and errors. Often, the extension would take too long to start up, and sometimes no code completion suggestions would pop up. Eventually, we managed to fix all these issues. In the end we still had a few problems, but it was good enough for a demo. One of our largest challenges was that one of our team member's wifi wasn't working. After messing around with the internal wiring of the house for a few hours, he finally managed to fix it, but by that time there was only one hour left in the hackathon, and there were still a few things that needed to be completed.
Accomplishments that we're proud of
We are proud that we got the OpenAI GPT-2 1.5B model working and the Visual Studio Code extension code completion suggestion working.
What we learned
Our team learned how to use OpenAI’s GPT-2 model as well as how to create a Visual Studio Code extension and feed data through data pipes between different programming languages.
What's next for ScriptSense
Our team would like to further improve everything in the extension overall, finish the project, and fix the remaining issues. Finally, we are interested in polishing the project and publishing ScriptSense as a Visual Studio Code extension on the public extension store.
Built With
javascript
openai-gpt2
python
typescript | ScriptSense | AI-Powered Code Completion | ['George Shao', 'Marcus Chan'] | [] | ['javascript', 'openai-gpt2', 'python', 'typescript'] | 8 |
10,349 | https://devpost.com/software/lo-fai | One of the preliminary designs we made for the YouTube livestream
Our Logo!
Picture of the YouTube Livestream interface
Picture of the OBS Stream Setup
Inspiration
Last year, one of our team members attended HackMann 2019, the first hackathon he had ever taken part in. Not really knowing what he was doing, he foolishly attempted a machine learning program in the language Processing, which didn’t go so smoothly (it ended up crashing during the live presentation). However, his group did have excellent taste in music, playing Patrick Open Sesame Lo-Fi for the entirety of the hackathon, garnering compliments from other hackers and mentors alike.
Now, for HackMann 2020, this competitor returns, with more machine learning knowledge and more lo-fi music. We believe music is the perfect medium for bridging gaps and making connections between people. Our group has bonded over a shared love of making and consuming music, and we want to share this with as many people as we can. We see our project as the perfect intersection between our shared interests in technology and music: we were able to explore music beyond physical instruments and create a unique and accessible experience. Lo-FAi is a YouTube livestream that acts as a constant stream of lo-fi music generated in real time by a machine learning model.
What it does
We built an LSTM neural network to generate lo-fi music. The music is played over a dynamic background we created and streamed onto YouTube, which can be accessed through our website. This gives viewers constant access to a stream of newly created music.
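One way such a generator produces a stream of notes is to repeatedly sample the model's predicted distribution over the next MIDI pitch. The sketch below stubs out the Keras model with random logits (the pitch palette, temperature value, and stub are all assumptions, shown only to illustrate the sampling loop):

```python
import numpy as np

PITCHES = [60, 62, 63, 65, 67]          # toy minor-ish palette of MIDI pitches
rng = np.random.default_rng(7)

def fake_model(seq):
    """Stand-in for model.predict: one logit per candidate pitch."""
    return rng.normal(size=len(PITCHES))

def sample_next(seq, temperature=0.8):
    """Softmax-with-temperature sampling; <1.0 makes output more conservative."""
    logits = fake_model(seq) / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(rng.choice(PITCHES, p=p))

melody = [60]                           # seed note
for _ in range(16):
    melody.append(sample_next(melody))
print(melody)
```

In the livestream, each sampled pitch is appended to the sequence and fed back in, so the model keeps producing new, never-repeated material in real time.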
How we built it
To build the neural network we used Keras with a TensorFlow backend in Google Colab. This enabled us to train the LSTM model using GPU acceleration and host our code online to work with each other. To host the livestream, we initially tried to use the pylivestream library on a Raspberry Pi but elected to instead use the Windows version of OBS Studio. For our landing site, we used repl.it as a host and programmed it using HTML, CSS, and JavaScript. For our livestream, we used PyGame to create the animated stream screen and OBS Studio to control audio and video input.
Challenges we ran into
There are few sources that supply lo-fi music in MIDI format for free. After searching and not finding anything, we downloaded lo-fi sample kits with MIDI files, and we also converted some WAV files to MIDI.
When figuring out how to livestream on a Raspberry Pi, we decided to try live streaming in python using the pylivestream module. It took two hours just to find the settings file, and then there were more problems with internet connection and detecting the camera. Once we got live streaming working with pylivestream on a different computer, YouTube flagged our first test as explicit because only a head was in the frame, and it thought that we weren’t wearing clothes. We had to set up a new livestream after that, and we also decided to switch to OBS for more reliable live streaming. We also ended up switching to a Windows computer because the Raspberry Pi could not install OBS.
Accomplishments that we're proud of
Finishing the project, mainly: the neural net alone took around 2 hours to train on a Google Colab GPU, not including the time spent collecting MIDI files and converting WAV files into MIDIs. Add onto that figuring out how to livestream on YouTube, trying (and failing) to host the livestream on a Raspberry Pi, and just a general need to not stay up for 48 hours straight; this project was a time crunch from beginning to end.
However, the part of this project that we feel is most impressive is 100% the livestream. Whenever channels constantly stream music, they typically do so from a predefined playlist or some source that’s so long it might as well be infinite. For our live stream, we are able to generate the music in real time, creating music which is guaranteed to be unique and new to the viewer.
What we learned
We learned so much through this project that it’s almost impossible to figure out where to start. Starting with the audio generation, we looked into all sorts of styles of audio generation. For the dynamic backgrounds, we learned how to use pygame. From researching the structure of lo-fi music to learning neural network structures and ultimately figuring out how to program the LSTM model in python, the music generation was a wild ride that taught us a lot about how computers can see music.
None of us had ever gone live on YouTube before. In fact, upon coming up with this idea slightly before the opening ceremony of the hackathon, one of us had to register with YouTube to be verified for live streaming, which requires a 24-hour waiting period. Almost everything in this project was new. We learned that the supposed Python livestreaming library really only works in the command line, and that Raspberry Pis are absolutely terrible at working with any sort of streaming, whether it be a static image or a webcam video. But in the end, we did learn a lot about OBS live streaming, how to go live on YouTube, how to manage live audio and video generation, and so much more.
What's next for Lo-FAi
There are a lot of places to go with Lo-FAi. For starters, we could layer on more instruments to give the music a more complete sound, as we only trained the melody due to time and processing constraints. This could be done by creating a network which learns how drum beats and lead parts fit into the general lo-fi piece, and generates those parts off of the current network output.
Another possible future endeavor is to figure out how to properly host the livestream on a raspberry pi or some other low power and cheap computer. This would make it more feasible to run the livestream for longer periods of time, as a desktop computer is not being used to control the input into the livestream.
Built With
colab
css
html
javascript
keras
midi.js
obs
procreate
pygame
python
tensorflow
Try it out
lofai.kgauld1.repl.co
github.com | Lo-FAi | Lo-FAi is a machine learning model program that generates music in real time and streams on YouTube 24/7. | ['Kevin Gauld', 'Ethan Horowitz', 'Ashley Fong'] | [] | ['colab', 'css', 'html', 'javascript', 'keras', 'midi.js', 'obs', 'procreate', 'pygame', 'python', 'tensorflow'] | 9 |
10,349 | https://devpost.com/software/hackmann-2020-submission-deploy-the-boy | This game is dedicated to Henry Bloom
Built With
rpg-maker
Try it out
drive.google.com | Hackmann 2020 Submission - Deploy the Boy | The Legend of Henry | ['Anthony White'] | [] | ['rpg-maker'] | 10 |
10,349 | https://devpost.com/software/kommunity-9zwgya | Login Page
Volunteer
Request Help
Google Cloud Dashboard
MongoDB Database
Inspiration
Due to COVID-19, a lot of individuals are struggling in their daily lives. Kommunity allows people to support those around them through a web app.
What it does
Kommunity allows individuals to volunteer to accept requests for help in completing a task. This could range from picking up groceries and carrying furniture to gardening; the potential is unlimited.
How I built it
I first set up a virtual machine on Google Cloud. After configuring some parameters, I also set up a MongoDB server on that virtual machine. It stores the tasks to complete as well as user information.
I then developed the front end on my own computer. There are three pages.
Login/Sign-Up
Volunteer - Accept a request for help completing a task.
Request - Post a request for someone to help you complete a task.
When a user loads the volunteer page, it accesses all of the open tasks from the virtual machine. When they submit a request, the task is added to the database.
The most challenging part for me was the backend. I next set up an Express.js server on the virtual machine which would allow a client to access the MongoDB database. After much trial and error, I successfully got the backend set up.
Challenges I ran into
Finding out how to use Google Cloud Virtual Machine
Setting up the MongoDB server on the Virtual Machine
Writing the Node.js server REST API so that it would handle requests properly.
Accessing the Node.js server to get information.
Accomplishments that I'm proud of
I am proud of the full integration of the client-side and the cloud virtual machine which has the backend. It worked out really nicely.
What I learned
I learned a lot about using cloud virtual machines and backend server development.
What's next for Kommunity
Improve the user interface: fit more devices, add profile pictures, make it more responsive, and look better in general.
Set up a secure login system.
Set up user accounts (right now, they are not stored in MongoDB).
Use React.js for the cards.
Suggest volunteer opportunities by location and date.
Built With
css3
express.js
gcp
html5
javascript
mongodb
node.js
Try it out
github.com
kommunity.vercel.app | Kommunity | Especially during these trying times, finding someone to help with simple daily chores is difficult. Kommunity is a platform for people to request someone to help with a task or volunteer to help. | ['Anirudh Kotamraju'] | [] | ['css3', 'express.js', 'gcp', 'html5', 'javascript', 'mongodb', 'node.js'] | 11 |
10,349 | https://devpost.com/software/music-therapy-2h7pir | We all love music, and we each believe that music is a powerful tool that can change feelings and improve mental well-being. We wanted to use music to break barriers of language and to create a widely accessible form of emotional support.
We learned about the importance of teamwork and group cooperation: if people didn't contribute enough, the whole project would fall apart. We also learned the importance of time management and how meeting deadlines is essential to completing the final product.
Although we did not face many challenges, we had an imbalance of work distribution, which caused frustration around deadlines.
Our first drafts were made in Python to build the basic functions. We then switched the platform to HTML because it included more features, such as formatting.
Music is an underrated and underappreciated tool. We hope that we can share the power of music through our music therapy website!
Thank you!
We are doing this in the Beginner Selection.
Our Team Name: Music Therapy
Our names: Anna Kim, Anyi Sharma, Emily Park, Grace Yoon, Helena Zhang
Our Discord usernames: skigirl321, moths or butterflies, emily_park, grace_yoon_BPN, helena_zhang
Built With
html
python
Try it out
drive.google.com | Music Therapy with Anna, Anyi, Emily, Grace, and Helena | Music Therapy to help improve well-being! (Beginner Selection) | ['Emily Park', 'hack hackhack', 'Grace Yoon', 'Helena Zhang', 'Anna Kim'] | [] | ['html', 'python'] | 12 |
10,349 | https://devpost.com/software/restyle-xzvnot | Restyle
Some websites on the internet look really bad. When designers design websites, they sometimes forget about handling scaling and similar edge cases. One thing that really annoyed me is that on Messenger, the big side bars have a min-width rule that makes scaling down the page a pain. So I made a Chrome extension to fix just that. For each domain, users can make a set of CSS presets. They can hide elements, change sizes, or turn light-mode pages into dark mode.
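The core idea, per-domain CSS presets, could be sketched like this (illustrative Python rather than the extension's actual JavaScript; the preset contents and domain names are invented for the example):

```python
# Illustrative sketch: CSS presets stored per domain, looked up by hostname.
from urllib.parse import urlparse

# Hypothetical presets a user might save (contents invented).
presets = {
    "messenger.com": ".sidebar { min-width: 0 !important; }",
    "example.com": "body { background: #111; color: #eee; }",  # dark mode
}

def css_for(url):
    """Return the stored CSS preset for a page's domain, if any."""
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[4:]  # normalize www. so presets apply to the whole domain
    return presets.get(host, "")

print(css_for("https://www.messenger.com/t/123"))  # .sidebar { min-width: 0 !important; }
```

In the real extension, the returned CSS would be injected into the page via a content script.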
Discord: ham#7955 (nicknamed Nico)
Built With
chrome
css
html
javascript
Try it out
github.com | Restyle | A chrome extension that allows a user to edit create css presets for webpages | ['Nicolaes Anderberg'] | [] | ['chrome', 'css', 'html', 'javascript'] | 13 |
10,349 | https://devpost.com/software/intersection | GIF
landing page with animated circles
Wireframing in Figma
Inspiration
Talking about your experiences with people who understand and relate to them can be therapeutic and beneficial to your mental health as it can alleviate your sense of isolation and encourage a feeling of belonging. However, it can be difficult to find people with shared identities both in the real world or on social media apps due to stigma or a lack of visibility. We wanted to create a platform for people to share experiences to gain a greater sense of commonality with others.
What it does
Intersection is a social network with the sole purpose of connecting people to identity-based forums in which they can speak freely in a safe environment, share their story, engage in discussions, come across different points of view, and meet new people with intersecting identities.
When you join Intersection, you'll have the opportunity to add tags that correspond to your identities and experiences. Once you add your tags, you'll be able to connect with others with similar experiences through a customized forum that displays posts that relate to your tags. If you see a post you resonate with, you can comment on the post to show your support or further the discussion. Intersection also has built-in comment suggestions that can help you start a conversation.
In addition to strengthening your connection with those of shared identities, Intersection also encourages you to learn more about the experiences of those who are different than you. The discover page shows popular posts based on the number of comments across all tags for you to explore. You can also chat in a general chat room with other Intersection users.
Intersection will also give you friend suggestions based on how many identities and experiences you have in common. Lastly, the profile page displays your username, tags, bio, and posts. It also features a list of hotlines and resources for those who may need them.
How we built it
After wireframing our website using Figma, we used HTML, CSS, and JavaScript to develop our website. Repl.it was used to collaboratively generate the website. We used mongoDB to store user information, including usernames, hashed passwords, stories, tags, and friends. Socket.io was used to create the chat.
Challenges we ran into
Although Intersection is a webapp that works best on a computer or laptop, our team worked on making it accessible on other devices, including mobile phones. This was a struggle that took a couple of long hours and involved rearranging different aspects of the webapp.
Another challenge we ran into involved resetting the modal for posting stories. After posting a story, all the previous entries would remain in the input boxes. In order to reset the boxes, we had to set the form values to the values before editing or set the values to empty strings.
Accomplishments that we're proud of
We put careful thought into the design and spent a lot of time designing the logo and wireframing the website. On the landing page, colorful animated circles in the background float up around, colliding and overlapping with each other at times, symbolizing the journey and the connections made by people who use Intersection.
We’re also proud of creating customized feeds based on users’ tags. We wanted to allow users to view stories from individuals who had shared experiences as well as introduce them to new stories to help broaden their perspectives and understandings of others.
What we learned
It was our first time using socket.io to create a real-time chat, so we had to learn how to send and retrieve information from the server so that everyone on the chat would see new messages.
We also generated much of our website dynamically, as the website needed to be updated based on the tags selected and stories posted. This was a new experience for us, as our previous websites were generated solely in HTML rather than the combination of HTML and JavaScript.
What's next for Intersection
We hope to expand the features of Intersection to include private messaging and the ability to filter and sort posts. We plan to create algorithms to notify users of resources if their posts indicate an alarming need for help and to remove comments and posts that do not match the positive and supportive community that Intersection stands for. We'd like to implement the ability to block a user and a feature to autofill the comment response box with the suggested comment when selected rather than the user typing in the suggested comment.
Built With
css
flask
html
javascript
mongodb
python
socket.io
Try it out
github.com
intersection--bach5000.repl.co | Intersection | Find out how you intersect with others | ['Bach Nguyen', 'Christina W', 'Michelle Bryson', 'Jendy Ren'] | [] | ['css', 'flask', 'html', 'javascript', 'mongodb', 'python', 'socket.io'] | 14 |
10,349 | https://devpost.com/software/tristate-quarantine-tracker | We are submitting as beginners: This is our first hackathon
Inspiration
We were inspired to create this website because it is important for people to realize if they have to quarantine after returning home to or visiting the Tristate area. Limiting the spread of COVID19 saves lives, reduces the strain on the health care system and helps the economy recover more quickly.
What it does
The user selects the state from which they are entering the Tristate area. The program then scrapes data from The COVID Tracking Project, calculates the necessary statistics, and compares them to the Tristate area's regulations. Finally, the program reports whether the user has to quarantine when (re-)entering the Tristate area.
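The decision logic might look like this minimal Python sketch. Note the thresholds here (10 new daily cases per 100,000 residents on a 7-day average, or 10% test positivity) are assumed from the 2020 tristate travel advisory, not taken from the project's code:

```python
# Minimal sketch of the quarantine decision, assuming the 2020 tristate
# advisory thresholds: quarantine when a state's 7-day average of new cases
# exceeds 10 per 100,000 residents OR its positivity rate exceeds 10%.

def must_quarantine(new_cases_7day_avg, population, positivity_rate):
    """Return True if a traveler from this state must quarantine."""
    cases_per_100k = new_cases_7day_avg / population * 100_000
    return cases_per_100k > 10 or positivity_rate > 0.10

# Example: a state with 2,500 daily cases, 8.8M people, 4% positivity
print(must_quarantine(2500, 8_800_000, 0.04))  # True (28.4 cases per 100k)
```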
How we built it
We first created the Python script to web-scrape the data. Then we used BeautifulSoup to parse the data efficiently and used SQLite to connect to our table containing the additional data needed. Then we built the logic for whether a quarantine is necessary using if statements. We used HTML for the front end. Finally, we used crontab to refresh our data and keep it up to date.
Challenges we ran into
Some challenges we faced were connecting the Python code with the HTML code, along with figuring out how to keep our data updated. We solved these problems by using crontab and writing the entire HTML output through the Python script.
Accomplishments that we're proud of
One accomplishment we are proud of is that we successfully connected CronTab to our Python program in order to regularly update our data and keep the users up to date.
What we learned
Throughout the process of creating our project we learned many things from working efficiently together as a team to exploring new aspects of programming languages. One of the main things we learned was to schedule a program to run using crontab.
What's next for Tristate Quarantine Tracker
We plan to implement the travel advice for states other than the tristate area, and for other countries, including Europe. We plan to expand globally and help governments worldwide to contain the spread of COVID-19.
Built With
html
online-cron-job
python
sqlite
Try it out
tri-state-quarantine-tracker.000webhostapp.com
drive.google.com | Tristate Quarantine Tracker | Traveling to the Tristate area during COVID from another state? Keep yourself and those around you safe by using our Tristate Quarantine Tracker. | ['Sophie W', 'Charlotte W', 'Caroline Willer-Burchardi'] | [] | ['html', 'online-cron-job', 'python', 'sqlite'] | 15 |
10,349 | https://devpost.com/software/rain-or-shine-the-weather-app-by-rain-li | Displaying temp. of Melbourne in Celsius, which is very cold. So the background is the blue gradient.
NY temp. in Celsius, warm gradient
NY temp. in Fahrenheit, warm gradient
I noticed a while ago that macOS and iPadOS lack an easy-to-use, simple weather app. Although iOS does come with a default weather application, I believe there are improvements to make. Also, most weather websites available now are slow, full of ads, and hard to navigate in general. After suffering through all the UI flaws, you are presented with only the most basic information. Therefore, I decided to build a weather app, or a webpage more specifically. I chose to use JS, HTML, and CSS to create Rain or Shine because it isn't constrained by the operating system of the device. I want Rain or Shine to have a broader reach than just iOS or Android users.
This is not your ordinary weather app: Rain or Shine is one of the fastest and most efficient ways to look up the weather of any city in the world. It is designed with speed in mind, without sacrificing the UI experience. This minimalistic weather app can provide accurate, real-time weather data for all. Some standout features are:
-Background changes based on temperature (cold gradient when <59F/15C, warm gradient otherwise)
-Option for Celsius or Fahrenheit mode
-Instant, accurate, and real time weather results
-Smooth transitions throughout
-Live dates
-Hover to highlight the button and search bar
-API taken from OpenWeatherMap, which is very accurate
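The temperature-based background rule from the feature list is simple enough to sketch. Here's an illustrative version in Python (the actual app is vanilla JavaScript, and the function names here are invented):

```python
# Illustrative sketch of the background-selection rule:
# cold gradient below 59°F (15°C), warm gradient otherwise.

def f_to_c(temp_f):
    """Convert Fahrenheit to Celsius (the app's C/F toggle)."""
    return (temp_f - 32) * 5 / 9

def background_gradient(temp_f):
    """Pick the gradient based on the current temperature."""
    return "cold-gradient" if temp_f < 59 else "warm-gradient"

print(f_to_c(59))               # 15.0
print(background_gradient(40))  # cold-gradient
print(background_gradient(75))  # warm-gradient
```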
By using vanilla JavaScript, I tried to keep things as simple and efficient as possible. My program gets real-time weather data from OpenWeatherMap.org using their API. In the video, I compared my program to the top result for "world weather" on Google. Rain or Shine got the weather, current temperature, and highest/lowest temperatures in 1.7 seconds, while the other website took more than 10 seconds to do the same thing. This shows the speed of Rain or Shine, without sacrificing information or the user experience.
Some challenges were styling the CSS to make the website look as cool and modern as possible while making the most prominent information pop. Also, I didn't have much experience working with APIs before, but I overcame that and successfully implemented the API.
I am proud of this entire website, because I styled this website from scratch, and tried things I have never done before. The result turned out well, and I am very happy with it. Doing this project helped me further my knowledge in creating webpages, using APIs, and the JS language in general.
Next steps for Rain or Shine could be developing this idea into an iOS and Android app, and maybe showing more information besides the weather, like real-time air quality.
Built With
api
css
html
javascript
openweathermap
Try it out
github.com
rainliofficial.github.io | Rain or Shine | The Weather App by Rain Li | ['Rain Li'] | [] | ['api', 'css', 'html', 'javascript', 'openweathermap'] | 16 |
10,349 | https://devpost.com/software/helpful-recipes | Productivity apps are interesting to make!
Made by E-ZAM (Abby#5977 & TheZhaozinator#9979)
Built With
api
html
javascript
Try it out
drive.google.com | Helpful Recipes by E-ZAM | Improve the cooking experience | ['Erin Zhao', 'MoonstoneBlueAbby Morse'] | [] | ['api', 'html', 'javascript'] | 17 |
10,349 | https://devpost.com/software/yemen-site | Yemen Crisis - Deployed @
Yemen.moonsdontburn.design
This is our first hackathon, so we hope to submit as beginners.
Project
Website created to spread awareness on the Yemen crisis.
Story
The conflict in Yemen is the largest humanitarian crisis in the world. Over 24 million people—80% of the population—are in need of humanitarian assistance, including more than 12 million children. One of the Arab world’s poorest countries, Yemen has been devastated by a civil war since 2015, becoming a living hell for its citizens. This is a national crisis—and it demands our attention.
In 2011, Abdrabbuh Mansour Hadi became the president of Yemen. He struggled at first, and the Houthi movement, made up of Yemen’s Zaidi Shia Muslim minority, took advantage of this to take control of the Saada province and its neighboring areas. Many Yemenis supported this because they didn’t like the transition of presidential power, which gave the Houthi rebels the power to take over the capital, Sana’a, in 2014. Soon, they tried to take over the entire country, forcing Hadi to flee abroad.
Many Sunni countries (ie. Saudi Arabia) began an air campaign to try to defeat the Houthis, leading to years of military stalemate and tragedy in Yemen since then.
7,700 civilian deaths have been verified by the United Nations, but many monitoring groups believe the death toll is far higher. In October 2019, the U.S.-based Armed Conflict Location and Event Data Project said it had recorded more than 100,000 fatalities. Every 10 minutes in Yemen, one child dies. About 70% of the population does not have access to drinking water.
After facing disease and hunger for six years, COVID-19 is pushing a devastated health infrastructure to the brink of collapse. As of May 30, 2020, only 2,678 people had been tested out of 28 million Yemenis. Out of those 2,678 people, there have been 400 COVID-19 cases and 87 deaths. With 500 ventilators, 700 intensive care units, and a population of 29 million people, these numbers are bound to rise.
Many people still aren’t aware of why and how it actually started and what the true impact of the war has been on Yemen’s civilians. Thus, we wanted to accumulate information into a single website that will concisely provide the necessary details to help whoever sees it become more aware of the dire situation in Yemen.
We created this website with Flutter alongside many packages. We implemented a color theme of black, white, and red for aesthetic appeal, making sure both the opening and closing sections of the project use those colors, hoping to create a more enjoyable experience for the viewer. We also thematically made all the pages align to time, ensuring that the chronology of the website eventually reached today so that the call to action is a lot more impactful and relevant to the viewer. We also implemented a tracker for live COVID-19 cases, tracked using the Dataflowkit COVID tracking API. Dealing with an API was difficult and required a lot of research, but we managed to get it to function. Further, we embellished the experience by putting many different animations within the project. This was also difficult, both creatively and logistically, but we think the final product is smooth and appealing. Overall, we wanted to provide a meaningful experience through design and storytelling.
We begin with a short overview of the Yemen crisis. In the background, as you can see, is what Yemen looked like prior to the civil war—it was a beautiful nation, with gorgeous buildings and evident patriotism. As we continue scrolling, we delve deeper into the pre-war era in Yemen, looking further at its architecture, nightlife, and culture before the conflict began. We did our best to create a sense of storytelling through images, quotes, and short, informational paragraphs.
Then, after the backstory of the war is shown, we see a juxtaposition of past and current Yemen. The photograph held in front of the camera shows an image of a building prior to the war, and in the background the viewer can see that same building as it is now—crumbled, destroyed, just like the nation in which it resides.
As we continue scrolling through the website, we encounter a brief timeline summarizing significant events in Yemen’s history from 2011 to now. This provides a smooth transition from pre-war Yemen to present-day Yemen. We see starving children and read quotes from both organizations focused on helping Yemenis and the Yemenis themselves.
Finally, at the end of the website, we reach the current state of Yemen. We learn the devastating effects COVID-19 has had on Yemen, and we realize how its impact will only increase further due to the lack of resources in the country to combat the virus. With a final message, the website calls for action with the display of two buttons: one to call the phone number for Congress, and one that directly links to a website where we can donate to help fund the distribution of needed resources to the Yemenis.
Overall, the process of creating this project was difficult, but fun. Time-consuming, but also enlightening, as we learned a lot about Yemen’s civil war while creating the website. With our thorough research and carefully-designed displays, we not only hope that our website is impressive to you, the judges, but we also truly hope that anyone who stumbles across our website will become more knowledgeable about the crisis in Yemen and hopefully take action to help support the suffering people living there.
Our discord usernames are:
kathiehuang#1619
vivekmad000#0690
Moon#3587
Built with...
Dart
Flutter SDK
Flutter Packages (Found in pubspec.yaml)
Video Demo
https://drive.google.com/file/d/1bz70re3Ie3Y1EAWw-4KtPc0uY_Nx3YaX/view?usp=sharing
or
https://youtu.be/3SFNXPYJKFQ
Built With
dart
html
kotlin
objective-c
swift
Try it out
github.com | The Yemen Crisis | The conflict in Yemen is the largest humanitarian crisis in the world. We created a website dedicated to spreading awareness about the civil war in Yemen, aiming for a more informed society. | ['Philip Vu', 'Kathie Huang', 'Madhavi Vivek'] | [] | ['dart', 'html', 'kotlin', 'objective-c', 'swift'] | 18 |
10,349 | https://devpost.com/software/pixelheart | The PixelHeart home page
The charity viewing page
The charity donation log page
Inspiration
We found a subreddit where users could paint one pixel on a collective image every hour. It was a social experiment to see how people would collaborate to make designs together. It ended up showing that people could work together to achieve goals really well.
What it does
PixelHeart puts a twist on that idea. It allows nonprofit organizations to set up fundraiser murals. Users can then donate a small amount of money to draw on the murals. This project combines charitable giving, fun for artists, and a technological challenge for developers.
How we built it
We made an Express web app using Node.js. Our custom authentication solution used the bcrypt hashing module from npm and stored JSON Web Tokens in browser cookies. We used a MariaDB instance as our database and queried it with Sequelize.js, which is a JavaScript wrapper for SQL. We used HTML canvases to implement drawing and the Stripe API to process payments. Our front end used the Fomantic-UI CSS framework and a templating engine called Nunjucks.
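The salted-hash-and-verify idea behind that authentication flow can be sketched like this. This is an illustrative Python stand-in using stdlib PBKDF2 rather than the app's actual npm bcrypt module, just to show the pattern:

```python
# Illustrative password hashing sketch (PBKDF2 standing in for bcrypt):
# store a random salt plus the derived digest, never the password itself.
import hashlib
import hmac
import os

def hash_password(password):
    """Hash a password with a fresh random salt; returns (salt, digest)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

After a successful verify, the server would issue a signed JSON Web Token and set it as a cookie, as the app does.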
Challenges we ran into
HTML canvases are very difficult to work with
Trying to store images files that constantly change requires some intelligent design choices
Accomplishments that we're proud of
We're proud that we were able to come up with an idea that's fun, technologically impressive, and helps support those in need.
What we learned
We had never worked with many of these technologies before. In fact, the only technologies on the list above that we had all used before were Express and Node.js itself. Everything else was new to at least one person, so everyone learned how to use some new technologies. We also got practice making design decisions quickly and trying to come up with a project idea.
What's next for PixelHeart
We'd like to bring this app to production to actually support charities. Before we do that, we need to rethink some of our design choices in terms of data storage. We also need to fix some of the rough-around-the-edges issues that often come with Hackathon projects like poor input validation and security vulnerabilities.
Team Members - Flight Crew
Dominic Rutkowski (dominicrutk#3030)
Vladimir Tivanski (pɐlʌ#8077)
Built With
bcrypt
canvas
express.js
jsonwebtokens
mariadb
node.js
sequelize.js
stripe | PixelHeart | A collaborative drawing app to support charities | ['Dominic Rutkowski', 'VladimirTivanski Tivanski'] | [] | ['bcrypt', 'canvas', 'express.js', 'jsonwebtokens', 'mariadb', 'node.js', 'sequelize.js', 'stripe'] | 19 |
10,349 | https://devpost.com/software/the-coronavirus-meter-by-aanya-gupta-and-tina-hansong | Beginner category:
Please see the link in the field below!
inspirations
staysafe
coronavirus
Code link:
https://drive.google.com/file/d/1BfBvDlwvD6clR2tdiHOfqKVAn8wmBVZw/view?usp=sharing
Built With
python
Try it out
drive.google.com | The Coronavirus Meter: By Aanya Gupta and Tina Hansong | An indispensable tool to save humanity! | ['Aanya Gupta'] | [] | ['python'] | 20 |
10,349 | https://devpost.com/software/helply-g7kory | Helply Logo
Item Identification and Processing System
User Donation Profile
Leaderboard for Gamification and Incentivization
User Landing Page with Google Maps Integration
Item Identification Subsystem
Distribution Subsystem with Google Maps Integration
Team
Team Name: Helply
Discord: rsrajan#8591
Inspiration
In 2019, the total generation of solid municipal waste was 267.8 million tons, or approximately 4.51 pounds per person, per day. Of this amount, a mere 23% was recycled. Additionally, in 2020, over 15% of the population, or 40 million Americans, is affected by poverty nationwide. In the 21st century, with growing wastage and increasing poverty, Helply simplifies the donation and recycling experience for everybody.
What it does
Helply allows users to pinpoint optimal donation and recycling centers, ship, and recycle their old household items, and receive reward points in return for helping their communities while reducing their waste production -- all within the comforts of home.
How I built it
The app’s skeleton was built around Ionic, an Angular.js framework built on top of the Apache Cordova platform. Through Ionic, Helply has been optimized for both iOS and Android devices.
The detection of the item being donated or recycled was built on Google's Cloud AutoML platform. The AutoML backend has been extensively trained to identify the object itself, the state, and the condition of the item in the picture instantly.
The distribution subsystem was built on top of the AutoML backend and uses the weights from AutoML and the user's geolocation to find optimal donation and recycling centers, as well as populating the shipping labels.
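As a rough illustration of that distribution step, here is a minimal Python sketch of picking the closest center by great-circle distance. The center names, coordinates, and data layout are all invented; the real subsystem also factors in the AutoML weights:

```python
# Illustrative sketch: choose the nearest donation/recycling center
# from the user's geolocation using the haversine formula.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearest_center(user, centers):
    """Return the center dict closest to the user's (lat, lon)."""
    return min(centers, key=lambda c: haversine_km(user[0], user[1], c["lat"], c["lon"]))

# Hypothetical centers for illustration only.
centers = [
    {"name": "Center A", "lat": 40.71, "lon": -74.01},
    {"name": "Center B", "lat": 40.75, "lon": -73.99},
]
print(nearest_center((40.76, -73.98), centers)["name"])  # Center B
```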
Challenges I ran into
Due to the vastness and subjectivity within the identification of the object and the condition, it was difficult to create a catch-all AutoML model that could identify every object being inputted. Another tedious aspect of the development process was getting the numerous APIs and libraries all working together.
Accomplishments that I'm proud of
I'm proud of having a diverse set of subsystems, APIs, and libraries all working in conjunction while maintaining a streamlined and clean front end and user experience.
What I learned
I learned how to: create an Ionic/Angular.js frontend, maintain a responsive and clean UI navigation, create an AutoML model, and integrate different languages, all while supporting various frontends/backends to make a cohesive application.
What's next for Helply
Look into future possibilities of adding features like using gamified reward points to purchase products from companies sponsoring Helply and promoting community outreach.
Built With
angular.js
apache
css3
google-cloud
html5
ionic
javascript
node.js
Try it out
github.com | Helply | An awesome way to donate and recycle | ['Rohit Rajan'] | ['Track Winner: Work and Productivity'] | ['angular.js', 'apache', 'css3', 'google-cloud', 'html5', 'ionic', 'javascript', 'node.js'] | 21 |
10,349 | https://devpost.com/software/orbital-simulator | Inspiration
This year, when our physics class went online, our teachers weren't able to use any of their practical materials to explain circular motion under gravity. With this tool, hopefully it won't be as hard.
What it does
Uses Newtonian Physics to calculate the paths of different massive bodies determined by gravitation.
How I built it
Using Java and OSP, a bare-bones visual library which basically allows you to draw shapes on a graph.
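The core Newtonian update can be sketched like this (a Python stand-in for the project's Java, using simple Euler integration; the real simulation also varies the time step for accuracy):

```python
# Sketch of one gravity step for two bodies: F = G*m1*m2 / r^2,
# then Euler integration of velocity and position.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(b1, b2, dt):
    """Apply mutual gravity to two bodies (dicts with m, x, y, vx, vy)."""
    dx, dy = b2["x"] - b1["x"], b2["y"] - b1["y"]
    r = (dx * dx + dy * dy) ** 0.5
    f = G * b1["m"] * b2["m"] / (r * r)
    fx, fy = f * dx / r, f * dy / r  # force on b1, pointing toward b2
    for body, sign in ((b1, 1), (b2, -1)):  # equal and opposite forces
        body["vx"] += sign * fx / body["m"] * dt
        body["vy"] += sign * fy / body["m"] * dt
        body["x"] += body["vx"] * dt
        body["y"] += body["vy"] * dt

earth = {"m": 5.97e24, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}
moon = {"m": 7.35e22, "x": 3.84e8, "y": 0.0, "vx": 0.0, "vy": 1022.0}
step(earth, moon, 60)
print(moon["x"] < 3.84e8)  # True: the moon accelerates toward the earth
```

A variable time step, as in the project, would shrink `dt` when forces or speeds get large so each step stays accurate.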
Challenges I ran into
UI Development.
Accomplishments that I'm proud of
The inelastic collisions. Also, the variable time intervals: slowing down the simulation when higher forces and higher speeds are involved so as to get more accurate calculations. It's also our first project-based hackathon, so I'm pretty thrilled that we finished at all!
What I learned
Not to write everything and then test.
What's next for Orbital Simulator
Elastic Collisions.
Coded by Alec#1101, Hobbes#9558, Tyler890#9160
Video by Viraj_Cz#7872
Built With
java
opensourcephysics
Try it out
github.com | Orbital Simulator | A mathematically accurate simulation of multi-body orbits, and inelastic collisions! | ['c21ac Alec'] | [] | ['java', 'opensourcephysics'] | 22 |
10,349 | https://devpost.com/software/prometheus-3dars5 | Inspiration
Prometheus has stolen fire from the gods to give to humankind; however, the mortal chosen to bear the torch needs to return to their people and keep the fire alive.
I love 2D platformers and atmospheric games, and I've always wanted to create my own! For HackMann, I wanted to try creating a game for the first time, so I combined my two favorite genres and created my own game!
The environment came from my love of Greek mythology. I love the story of Prometheus and fire, and I figured it would be super fun to dramatize the mad dash humans made away from the gods with fire for the first time.
What it does
Prometheus is a platformer where every move that you make takes away what little light and fire you have. If the fire dies, so do you. Don’t waste any jumps, find the tinder you need to keep going, and light campfires to refresh the flame.
You control the main character using WASD, and can attack using space. You can only see the world around you in the shrinking light of the fire, and every move you make decreases your life and view.
How I built it
Prometheus was built using the Godot engine, with hand-illustrated graphics, and royalty-free music and sound effects.
Using Godot was awesome because it runs on GDScript, a Python-like scripting language, and it has a really robust physics engine, that is still open enough that it allowed me to tinker around with it to get movement feeling how I wanted.
All of the graphics I made with a marker and paper, then scanned them with my phone, and cleaned them up using Photoshop to make looping animations and perfectly connecting tiles.
Accomplishments that I'm proud of
I'm really proud of the overall ambiance and look of the game. I love how the hand-drawn illustrations create the look and feel of a story being retold, and I love how the ever-decreasing ring of light creates tension for the player. I've died so many times playing Prometheus, and I really like the difficulty, because with so many campfires to respawn at, deaths are not setbacks, but surviving with only a fraction of your health bar left feels extremely powerful and satisfying.
What's next for Prometheus
In the future, I'd like to continue adding enemies, such as a Cyclops who throws rocks, or a Dryad who spews water at the player. I'd also like to add in puzzles that continue to use the fire, such as burnable logs and grass. I'd also love to add a boss fight with Zeus at the end, where the character needs to dodge lighting strikes and other attacks to get to the end.
Built With
gdscript
godot
photoshop
Try it out
gotm.io
drive.google.com | Prometheus | An atmospheric, hand illustrated 2D platformer | ['Nathan Dimmer'] | [] | ['gdscript', 'godot', 'photoshop'] | 23 |
10,349 | https://devpost.com/software/grocery-store-automator-ljezip | Inspiration-
Our idea for this project came from personal experiences. We know that during these times our parents have spent countless hours standing in lines, just to be denied access to the store. We checked regularly for time slots, but there were rarely any available. Our code is aimed at helping people know when they can get inside a store. This led us to create the Grocery Store Automator (GSA). First, the program opens Google Chrome and goes to the Walmart login. Next, it asks the user how many items they want, and what they want. We previously created an account, and it logs in with the username and password.
How we built it-
To build our code, we first made a plan. We decided on our idea and wrote our pseudocode. Then we started writing the code, building one part at a time to make sure it worked how we wanted it to, and testing it multiple times. Lastly, we discussed ideas we would want to add to make the code better. We also created an email account so our code could log in to Walmart. Once we finished, we double-checked our code and made sure everything was running smoothly.
Challenges we ran into
We also faced some challenges while writing our code. For example, we had to read the documentation a lot, which allowed us to learn more about how Selenium works. We also had trouble finding the file path for our code. These problems sometimes stopped our code from running smoothly, but we kept trying until we fixed them.
Accomplishments that we're proud of
We are proud of the code that we made as a whole, and how it can be used by the many people who wish to shop online. There were some struggles along the way that we had to work through in order to finish the project. We are proud of our 'try again' mentality and the fact that we kept troubleshooting in order to fix each problem. We are especially proud that this is the first hackathon we have entered, and we take much pride in the complexity of our work.
What we learned
We learned a lot while making this code. We discovered many new classes and functions that we could use; for example, one function we learned was "send_keys", which lets the computer "type" and was very useful in our code. This is also our first hackathon, so we learned a lot about how a hackathon works and the process behind it. We believe our code will help many people.
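As an illustration of the kind of flow involved, here is a minimal, hedged Selenium sketch. The search URL pattern and the element locator are guesses for demonstration only, not Walmart's actual markup, and the browser part requires a local Chrome plus chromedriver to run.

```python
from urllib.parse import quote_plus


def search_url(item):
    # Hypothetical helper: build a search URL for an item.
    # The URL pattern is an assumption, not Walmart's documented API.
    return "https://www.walmart.com/search?q=" + quote_plus(item)


def add_to_cart(item):
    # Sketch of the Selenium flow; needs a local Chrome + chromedriver.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get(search_url(item))
    # send_keys makes the browser "type" into an element.
    # The locator below is a placeholder, not Walmart's real search box.
    driver.find_element(By.NAME, "q").send_keys(item)
    driver.quit()
```

Keeping the pure `search_url` helper separate makes the URL-building logic testable without ever launching a browser.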
What's next for Grocery Store Automator
We have some ideas to expand the Grocery Store Automator. We thought about adding extra security features to help the user guard against porch pirates and others looking to steal packages that belong to the user. We could also expand the program so that it not only adds products to the cart, but also buys them directly.
Built With
chrome
python
selenium
Try it out
github.com | Grocery Store Automator | A quick way to get groceries while staying safe. In this pandemic, people are afraid of getting sick while grocery shopping. Our product will ensure that you stay healthy while buying groceries. | ['Arya Kunisetty', 'Sriya Neti', 'Sriram Natarajan', 'Saahith Veeramaneni'] | [] | ['chrome', 'python', 'selenium'] | 24 |
10,349 | https://devpost.com/software/expressions | A fun way for children with autism to learn to recognize emotions using ML
Built With
adobe
after-effects
android-studio
azure
camerakit-api
firebase
java | Expressions | A fun way for children with autism to learn to recognize emotions using ML | ['Rebecca Zhu'] | [] | ['adobe', 'after-effects', 'android-studio', 'azure', 'camerakit-api', 'firebase', 'java'] | 25 |
10,349 | https://devpost.com/software/novis | Diagnostic Information
Main Page
Inspiration
I've worked with various forms of image classification before, and a major difficulty was always tracking down a large enough dataset. Often, finding a dataset is completely impossible for a particularly niche task. This website allows you to train once on a widely available dataset and then reuse that model to perform diagnosis for a different task.
What it does
I used cell stain images as a case study, but it's pretty intuitive to see how this may be applied to other applications.
The website allows the user to submit any image of a cell stain, regardless of disease, and receive various forms of diagnostic information. This includes several measures of distance between the submitted image and the normal cell distribution, telling the user whether the proposed cell is likely a normal cell or not. It also includes an anomaly heatmap showing the exact unusual regions in the submitted image, allowing for smarter analysis of the image. Finally, it includes a similarity feature showing images that are semantically similar to the submitted image, along with their diagnoses: essentially an automatic visual "case study" search.
How I built it
The core technology used is a variational autoencoder (VAE). I go into more depth in the video, but it essentially allows me to model a distribution of images, parameterized by a latent variable that, ideally, models the semantic features in the image. The VAE is trained with respect to the evidence lower bound (ELBO), which is composed of the negative reconstruction loss (MSE is used here) and the negative KL divergence between the variational distribution over the latent variable and the prior distribution over the latent variable (standard normal is used here). Those three metrics are the distance measures used to judge whether a given image is in the original distribution or not.
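For a diagonal-Gaussian encoder and a standard-normal prior, both ELBO terms have simple closed forms. The numpy sketch below is illustrative only; the shapes and names are assumptions, not the project's training code:

```python
import numpy as np


def elbo_terms(x, x_recon, mu, log_var):
    # Per-example MSE reconstruction loss, plus the closed-form KL divergence
    # between N(mu, exp(log_var)) and the standard normal prior.
    recon = np.mean((x - x_recon) ** 2, axis=-1)
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)
    return recon, kl


x = np.array([[0.0, 1.0]])
recon, kl = elbo_terms(x, x, np.zeros((1, 2)), np.zeros((1, 2)))
# A perfect reconstruction with a standard-normal posterior makes both terms zero.
```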
Based on the idea from Baur et al., the difference between the original image and the reconstruction is used for automatic anomaly segmentation.
The image similarity is measured through the cosine distance between the latent variables.
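Both the anomaly map and the similarity measure reduce to a couple of lines; a hedged numpy sketch follows (array shapes and names are illustrative, not the site's actual code):

```python
import numpy as np


def anomaly_map(original, reconstruction):
    # Pixelwise residual: large values mark regions the VAE failed to
    # reconstruct, i.e. candidate anomalies.
    return np.abs(original - reconstruction)


def cosine_distance(z1, z2):
    # Distance between two latent vectors; smaller means more similar.
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    return 1.0 - cos
```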
The model was trained using Tensorflow and Keras (source is on github).
The website itself is hosted with Flask, with Redis handling the queuing of requests.
Challenges I ran into
Because of resource constraints, I had to convert the Keras model to a TFLite model before I could deploy it. But the standard normal sampling layer used for the latent variable isn't supported by TFLite in Python. I fixed this by replacing the sampling with the mean after training, though I'm worried that this may affect accuracy.
Accomplishments that I'm proud of
I've never implemented VAEs for an actual use case before, so this was a very valuable experience.
What I learned
I strengthened my skills in various areas of machine learning and web development.
What's next for Novis
I want to expand the model to other types of images. I also want to clean up parts of the design of the website.
Built With
flask
keras
python
redis
tensorflow
Try it out
novis.alexw.xyz
github.com | Novis | A machine learning powered platform to assist with diagnosis without the need of a specific dataset | [] | [] | ['flask', 'keras', 'python', 'redis', 'tensorflow'] | 26 |
10,349 | https://devpost.com/software/maskdetech | About
A web application that is geared towards detecting if users are wearing the proper protective coverings or not during the pandemic.
Built With
flask
python
Try it out
github.com | MaskDetech | A web application that is geared towards detecting if users are wearing the proper protective coverings or not during the pandemic. | ['Jeremy Nguyen', 'Nand Vinchhi'] | [] | ['flask', 'python'] | 27 |
10,349 | https://devpost.com/software/nasa-vs-aliens-driver-s-seat | rendering
rendering 2
camera dev
field layout of enemy AI
logo
Beginner category: This is our first hackathon.
Inspiration
We were inspired by classic arcade games like Galaga and Space Invaders. We like the style of the games, but we wanted to give it a 3D twist. Along with that, we wanted to present the classic game with a modern theme, NASA.
What it does
The game allows the player to step into the driver's seat, fly a NASA space shuttle, and shoot lasers at alien UFOs.
How I built it
We built the game using Unreal Engine 4 and the website using Brackets.
Challenges I ran into
We ran into challenges with the camera angles for the UE4 game.
Accomplishments that I'm proud of
We are proud of our usage of textures and shaders in the UE4 game to give the game a stylish and unique space feel.
What I learned
We learned how to integrate our CSS files with our HTML files and use videos and pictures with HTML. Further, we learned how to use enemy AI in UE4 and how to texture objects.
What's next for NASA vs Aliens: Driver's Seat
To continue developing this game, we would have more game levels with power-ups and more challenging AI.
Download
To download our website files, video demo, or the actual game, visit this link and download the respective files.
To view the website, click here.
Names
Our team is Jack Komaroff (jkom23), Eric Do (Ramenman), Larry Tao (FaKe ARt3mIs), and Ethan Fry (PandaTon). Our team name is Semicolon Haters.
Built With
brackets
c++
css
html5
photoshop
unreal-engine
Try it out
jkom23.github.io
drive.google.com
drive.google.com | NASA vs Aliens: Driver's Seat | A 3D version of Galaga/Space Invaders with a modern day NASA twist! | ['Jack Komaroff', 'larrytao05 Tao', 'Eric Do', 'Ethan Fry'] | [] | ['brackets', 'c++', 'css', 'html5', 'photoshop', 'unreal-engine'] | 28 |
10,349 | https://devpost.com/software/bankology | Helping the next generation handle their money
Inspiration
(beginner)
We were inspired to create this project because parents and schools are not teaching students what it means to use money responsibly. Instead of focusing on how to earn money ethically and use it in the right places, schools are focused on delivering education that grew outdated years ago. But no worries: now that Bankology is here, we can fix the growing lack of financial knowledge in the next generation. With our student-designed introductory courses on extremely important topics like Finance in the Real World, Investing and the Stock Market, Banking, and Entrepreneurship, we can give students the proper financial education they deserve.
What it does
Bankology is a student-created and student-oriented website that teaches children the importance of money and using it the right way. Acquiring the skills to tackle financial problems is becoming increasingly difficult for children, resulting in higher average debt for people under the age of 30. What's the problem? It's quite simple.
Parents and schools are not teaching students what it means to use money responsibly. Instead of focusing on something that we will undoubtedly use throughout our lives, schools are focusing on outdated curriculum that doesn't really teach finance. With our student-designed introductory courses on topics like Finance in the Real World, Investing and the Stock Market, and spending wisely, we can give students the proper financial education they deserve.
How I built it
We built the project with repl.it, using jsonbox for the database. We used Node.js and standard web technologies.
Challenges we ran into
Working together remotely was honestly more difficult than we anticipated but we worked through it.
Accomplishments that I'm proud of
We're all proud of finishing the project on time. With most of our group being first-timers, we spent the whole time working to understand why one piece of code wasn't working or why another behaved the way it did. Honestly, we are just proud of participating.
What I learned
We learned how to work with teammates and how to make a team out of complete strangers. We learned code too (see, we did something), such as some tricks with HTML.
SokkaNeverDies1: I learned the basics of html and css, photoshop, teamwork skills, how to communicate online.
Stephanie (randomperson2342): I learned node.js and became more familiar with jQuery. I also developed my teamwork and communication skills.
Hackermon: I learned a little bit of jquery while working on this project but I also learned a lot about stocks and finances. I learned to work with other people and make friends.
HarshKaria: This was my first time actually using HTML and working with it. It was awesome.
What's next for Bankology
After the hackathon, we plan to add an actual game-type simulator to Bankology, something along the lines of Investopedia and the Stock Market Game.
Built With
css3
html5
javascript
jquery
node.js
Try it out
bankology--pdaniely.repl.co
github.com
bankology.herokuapp.com | Bankology | Money for the Next Gen | ['Harsh Karia', 'Hackermon .', 'Stephanie Liu'] | [] | ['css3', 'html5', 'javascript', 'jquery', 'node.js'] | 29 |
10,349 | https://devpost.com/software/test-together-yjzsuf | Inspiration
After COVID-19 lockdown restrictions loosened in some US states, I realized that more people might need to know where testing sites were located.
What it does
Test Together is an Android app that informs the public about COVID-19 testing sites in their state.
How I built it
I used Figma to design the UI and exported it over to Android Studio XML files. Then I tried to figure out a COVID-19 Testing Center API and implemented it into my app.
Challenges I ran into
This was my first time focusing on UI and working with Figma, and also the first app I've made in Android in a few years, so it was a bit difficult to relearn the interface and the code.
Accomplishments that I'm proud of
I'm very proud that I was able to learn a new way to design applications and also for making a semi-functional android app. I also typically make web applications with APIs, so this was a fun new challenge.
What I learned
I learned how to work with Figma, how to implement an API in Android, and how to make JSON calls in Java.
What's next for Test Together
Implementing Google Firebase Authentication and allowing users to save certain test centers. Also working on creating a map for users to visualize the location in relation to themselves.
Built With
android-studio
api
figma
json
Try it out
github.com | Test Together | An android app seeking to inform the public about COVID-19 testing sites in their state | ['Fay Lin'] | [] | ['android-studio', 'api', 'figma', 'json'] | 30 |
10,349 | https://devpost.com/software/cognito-bnup5t | Note
We had the individual AI and keylogger working, but we were unfortunately unable to combine them, so there is no combined demo. To watch the individual demos for the AI and keylogger, scroll down to the demo videos.
Inspiration
Every year thousands of people are diagnosed with Parkinson's. Many times it's too late, and their relatives are forced to watch as they become shells of the humans they once were. The worst part is that Parkinson's is often difficult to detect and can be tricky to diagnose.
We wanted to create something that could help detect Parkinson's in a quick and efficient way, without being obtrusive or expensive.
Thus we present Cognito
What it does
Cognito is very simple
Cognito has a keylogger that tracks specific typing metrics. Cognito stores these metrics, which are then analyzed by an advanced AI algorithm that checks for signs of Parkinson's. If Cognito detects signs of the disease, it sends an email alerting the user about the issue.
Cognito is a simple background script that doesn't need an internet connection and is simple and hassle free.
Cognito has plenty of potential, as typing can be used to screen not only for Parkinson's but for many other diseases as well, such as Huntington's and Alzheimer's disease.
How we built it
We used the neuroQWERTY dataset and Keras to develop an algorithm that detects whether or not a user has Parkinson's.
We then used Python to develop a simple keylogger that tracks the metrics the algorithm needs to analyze.
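To give a flavor of the metrics such a keylogger can track (the exact features below are our guess at the kind used by the neuroQWERTY line of work, not the project's actual feature set): hold time is release minus press, and flight time is the gap between one key's release and the next key's press.

```python
def typing_metrics(events):
    # events: (key, press_ms, release_ms) tuples, sorted by press time.
    # Illustrative feature extraction, not the exact neuroQWERTY pipeline.
    holds = [release - press for _, press, release in events]
    flights = [
        events[i + 1][1] - events[i][2]  # next press minus current release
        for i in range(len(events) - 1)
    ]
    return holds, flights


events = [("h", 0, 80), ("i", 200, 310)]
holds, flights = typing_metrics(events)
# holds -> [80, 110] ms; flights -> [120] ms
```

Feature vectors built from these timings are what a classifier would then consume.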
Challenges we ran into
Developing the Algorithm was incredibly difficult. It took a lot of time and was difficult to debug. It was the group's first time working with numerical classification in AI.
Accomplishments that we're proud of
We are proud that we got the AI algorithm to work. It took a lot of time and effort, but it paid off!
We were also proud that we could detect such a deadly disease with such a simple metric... typing!
What we learned
We learned how AI works and how to use it and learned how to do key detection with python. For two thirds of the group it was their first time working with any sort of AI at all so this was a new and fun journey for all!
What's next for Cognito
We would like to develop our algorithm even further. Unfortunately, Cognito currently has 50% accuracy; however, with some newly discovered datasets and more time to spend on the algorithm, we feel confident that we could get the prediction accuracy to over 70%.
We then plan to release this so that we can start helping people all around the world!
Demo Videos:
Keylogger:
https://www.youtube.com/watch?v=iiWSOhdJlaA
AI:
https://www.youtube.com/watch?v=YuE35_pRSeE
Discords
CantTouchThis#8155
theaditya24#8701
anishfish#5103
Citation
Arroyo-Gallego, Teresa et al. “Detecting Motor Impairment in Early Parkinson's Disease via Natural Typing Interaction With Keyboards: Validation of the neuroQWERTY Approach in an Uncontrolled At-Home Setting.” Journal of medical Internet research vol. 20,3 e89. 26 Mar. 2018, doi:10.2196/jmir.9462
We used a few studies for inspiration and we used the neuroQWERTY dataset; the keylogger and the algorithm were our own, however.
Built With
keras
keystrokes
machine-learning
python
smtplib
tensorflow
Try it out
github.com | Cognito | Detect Parkinsons with the power of typing and AI | ['Anish Karthik', 'Gaurish Lakhanpal', 'Aditya Tiwari'] | ['2nd Place', 'MacroTech Sponsored Prize', '2nd Place'] | ['keras', 'keystrokes', 'machine-learning', 'python', 'smtplib', 'tensorflow'] | 31 |
10,349 | https://devpost.com/software/datadaygrind | HeartTrends Logo
Home page, displays the various analyses performed.
Depicts the age distribution among those affected by heart disease.
Proof for domain name, hearttrends.tech
Inspiration
Cardiovascular diseases, resulting in compromised blood vessels, blood clots, and weakened hearts, are the leading cause of death for men, women, and most racial/ethnic groups in the United States. One person dies from heart disease every 37 seconds. HeartTrends offers an eye-opening data analysis that delves into the many factors and variables behind those affected.
What it does
HeartTrends is a simple yet immersive web application that discovers interesting trends and distributions from a UC Irvine database. The website depicts 5 novel factors behind cardiovascular diseases -- age, chest pain type, maximum heart rate, resting blood pressure, and serum cholesterol. Initially greeted with the home page, the user can choose from a variety of selection cards that link to a full-page exploratory analysis of previously mentioned factors behind the disease. Each plot offers an interesting relationship between general heart health and variables that can be easily measured at any time. The dataset used is primary data from people hospitalized for cardiovascular disease. Thus, the user can compare their own heart rate and blood pressure to the distribution, serving as a predictive model for future heart attacks and resulting symptoms.
How I built it
I built HeartTrends using R and NextJS. To generate all of the plots and charts that are displayed on the website, I coded a command script that reads in a CSV file taken from the UC Irvine Dataset on Kaggle. Then I sorted the file into the corresponding variables and created 6 ggplots of the age, heart rate, chest pain, blood pressure, and serum cholesterol. The y-axes display the frequency and the x-axes are the numerical distributions. After I created a plot of each of the factors, I combined them into one image, shown in the "All Plots" card, by using the grid.arrange function. I used NextJS and used CSS styling to create an easily navigable UI. The 6 cards redirect the user to their chosen analysis, serving as an informative and effective web application.
Challenges I ran into
This was one of my first times analyzing data and creating meaningful plots in R Studio. It was challenging to build and arrange the plots, but after reading more R documentation, it was awesome to see such interesting trends come out of a complex CSV file.
Accomplishments that I'm proud of
I'm proud of providing an easy-to-use predictive model for such a prevalent and dangerous condition.
What I learned
I learned how to data-mine and plot interesting graphs. I also discovered the impact that a good CSS style can make on a website. Initially, my website was composed of radio buttons and default fonts. But after creating new text formats and designs, I was surprised to see how great the UI appeared.
What's next for HeartTrends
I plan for HeartTrends to be expanded to include more diseases, such as different cancers and viruses. With the widespread COVID-19 chaos, I want to provide an exploratory website that can show the distribution of the virus around the world.
Built With
javascript
nextjs
r
react
Try it out
github.com
hearttrends.tech | HeartTrends | Predictive models for cardiovascular diseases through an exploratory UI. | ['Danny Zhang'] | [] | ['javascript', 'nextjs', 'r', 'react'] | 32 |
10,349 | https://devpost.com/software/write-it | Words are displayed on the screen.
Users hold up their work to have it read
If users spell the word incorrectly, they lose one of their three attempts
Using Google Cloud's Handwriting Detection, we find what was written
Users can write the word displayed
There is a leaderboard to see how users did in comparison to others
HandRight - HackMann 2020 Project
Inspiration
We were inspired to create HandRight after witnessing firsthand the struggles of parents trying to teach their kids how to handwrite words at home. COVID-19 has exacerbated these difficulties, as teachers and educators are unable to meet with and teach young students due to quarantine restrictions. And with parents working full-time, students find it hard to stay motivated and are not able to practice their handwriting skills. Left alone with no one to guide them, they are sacrificing their learning. We wanted to help struggling students deal with this global education crisis.
What it does
HandRight is a fun and captivating game that uses computer vision to help students practice their handwriting without the presence of parents or other mentors. Furthermore, HandRight offers 3 core features:
Allows students to write with real writing implements such as pens and pencils
Allows students to get instant feedback (the kind they can't get from their teachers during COVID-19)
Gamifies the process of handwriting by assigning scores and points, and a leaderboard, motivating students to practice
How We built it
We used:
Flask for the backend and for handling the serving of the documents.
HTML, CSS, Javascript for building an aesthetically-pleasing frontend.
Python as the primary language for the functionalities and backend.
OpenCV for Computer Vision and reading the video.
EAST deep learning text detector to locate text in the video.
Google Cloud Vision Text Recognition for reading the words on the paper.
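As a rough sketch of how those pieces fit together: the scoring helper below is a hypothetical reconstruction of the three-attempt game logic, and the Vision call follows the standard google-cloud-vision client (it needs credentials and the library installed to actually run).

```python
def score_attempt(target, recognized, attempts_left):
    # Hypothetical game logic: compare recognized handwriting against the
    # target word; players start with three attempts.
    correct = recognized.strip().lower() == target.lower()
    return correct, attempts_left if correct else attempts_left - 1


def read_handwriting(image_bytes):
    # Sketch of the Google Cloud Vision OCR call; requires credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    response = client.document_text_detection(image=image)
    return response.full_text_annotation.text
```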
Challenges We ran into
We struggled to find or create a good handwriting detection library. We went through many options, including pytesseract and a TF1-based model, until we settled on Google Cloud, which worked incredibly well.
In addition, as shown in our issue, VS Code's built-in terminal on a Mac doesn't ask for camera permission, which causes the app to abort. Lastly, time was definitely one of our largest constraints. Due to the sheer amount of work that we had to do in this short time period, we had to organize ourselves really well and work non-stop.
Accomplishments that I'm proud of
We're proud that we finally found a working handwriting-identification library. In addition, we were able to resolve the abort error issue, so our app was functional for all our team members. Finally, we're proud that we got the application functional and ready for potential use! Making a complex game with ML vision, right/wrong checking, and a real-time leaderboard was very tough, but we persevered through it and are extremely proud of our final results.
What We learned
We learned about different OCR platforms, having experimented with many options. We also gained a lot more experience with Flask, learning how to serve video inside a Flask app and have it run alongside the app itself without causing problems. Not only did we learn a lot about programming, we also learned how to work together in a high-pressure environment. Since this was a virtual hackathon, we had a lot of difficulty at the start keeping track of each other and what we were supposed to do. But then we started assigning roles, tracking our project through Discord and Range.cc (tools that none of us had experience with), and regularly talking with each other on Discord. After that, our project flowed much more smoothly, and by the end we were able to complete it!
What's next for HandRight
Next, we hope to clean the game up a little bit, then push it out to the public. In addition, we hope to insert an audio feature to have a student learn spelling as well. Finally, we hope to make the website more kid-friendly, inserting characters and music to make the website compelling to children.
Team
Veer Gadodia
- Veer#7244
Shreya Chaudhary
- GenericPerson#6928
Mihir Kachroo
- Mihir#7285
Dhir Kachroo
-dhir2907#7695
Built With
css3
east-deep-learning
flask
google-cloud-vision
html5
javascript
opencv
python
sass
Try it out
github.com | HandRight | Teaching students how to handwrite using computer vision. | ['Veer Gadodia', 'Shreya C', 'Mihir Kachroo', 'Dhir Kachroo'] | ['MacroTech Sponsored Prize', 'Best use of Google Cloud'] | ['css3', 'east-deep-learning', 'flask', 'google-cloud-vision', 'html5', 'javascript', 'opencv', 'python', 'sass'] | 33 |
10,349 | https://devpost.com/software/lectureline | App (login page)
App (record a lecture)
App (condense into notes)
App (home page-store/organize notes)
Here is the link to our business plan, here are our slides, here is the code demo video, and here is our Framer prototype!
Inspiration
As fellow students, we have firsthand experience with the struggles of trying to keep up with fast-paced lectures. Students are trying to take detailed notes that they can use to study and review while also trying to pay attention to the lecture and understand the key concepts. It is difficult to get down all of the information with this kind of stress, and students often return from lectures with incomplete notes that they are not able to understand because they were not able to learn much during their time in the lecture. We realized that the best way to learn efficiently is to pay attention to what the lecturer is saying and observe the visuals during the lecture so that when you leave the lecture, you are able to further review and study on your own with some basic understanding of the concepts. However, this isn’t ideal because you leave the lecture without any notes. We tried taking audio recordings of the lecture so that we could refer to it later, but that was time-consuming as we would have to listen to the entire lecture again in order to review. This inspired us to create LectureLine, so that students are able to learn as efficiently as possible by paying attention during lectures and reviewing information with the LectureLine notes.
It’s not just us! Students across the globe struggle with note-taking and the lack of absorption in fast-paced lectures. A recent study demonstrated that 72% of students have difficulty in taking adequate notes and can’t record information fast enough. After conducting a survey of 96 individuals this weekend, we discovered that 83.3% had a heavy increase in self-learning due to COVID-19, 90.6% felt rushed in lectures, and an overwhelming 92.7% said that they would love to see an application that creates notes in real-time and adds resources.
What it does
LectureLine is a clean and efficient mobile application that revolutionizes the process of notetaking with real-time transcription and summarization with links and visuals. Features of LectureLine include fluid note-to-note linking, real-time transcription, compatibility across all devices, notes storage and organization, and offline capabilities.
The application utilizes the process of real-time transcription, but also contains the feature of summarizing the information into bullet points that capture key points and concepts, which none of its competitors include. This is vital in crafting efficient and easy-to-study notes that are more helpful for students in high-stress situations. The application goes far beyond the simple transcription and summary. Along with this recording and transcribing process, the application detects and categorizes key terms and concepts in order to generate images and visuals that may help the student. Along with visuals as a resource, the application will also take these key concepts and display helpful links and resources in order to allow the student to delve deeper and explore the concept further.
How we built it
Using Framer, we developed a virtual prototype that demonstrates the UI/UX aspect of LectureLine. This includes a clear process of how the mobile application works, and what the desired interface of LectureLine looks like. Along with simply demonstrating the interface of note-taking, our virtual prototype shows a clear demonstration of how an individual can create an account, organize and store their notes, and change their type of subscription. Small and desired features that we hope to implement in the future are also included to display the full workings of LectureLine.
Additionally, we created a code demo as a proof of concept for LectureLine. This program, written in Python, utilizes the user’s device’s microphone to listen for information and then transcribes it into written notes. The program then identifies the key concepts of what the speaker is saying through natural language processing and includes links and images into the notes. Then, the user can save their notes onto their own device.
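A toy stand-in for the key-concept step is plain frequency counting over non-stopwords, far simpler than the natural language processing in the actual demo; the stopword list here is an arbitrary illustrative sample:

```python
import re
from collections import Counter

# Tiny illustrative stopword list, not a real NLP resource.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "it", "that"}


def key_concepts(transcript, n=3):
    # Return the n most frequent non-stopword terms as candidate concepts.
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]


transcript = "The cell membrane protects the cell, and the cell stores energy."
# key_concepts(transcript, 1) -> ["cell"]
```

The extracted terms would then seed the image and link lookups described above.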
Challenges we ran into
Due to the time pressure of this project, we were not able to include all desired aspects of LectureLine in our prototype. Although our code demo demonstrates the basic workings of our product, we hope to fully implement the remaining features with more time. One of the biggest challenges we faced was incorporating punctuation into our transcribed notes: it is difficult to analyze verbal diction in order to infer punctuation and capitalization. To overcome this challenge, we utilized existing source code for punctuation and adapted it to our program.
Additionally, we were hoping to incorporate more features into our prototypes such as offline capabilities, note-to-note linking, and textbook recommendations. With more time, we hope to incorporate these features, as well as machine learning so that the app would, over time, be able to recognize a voice and adapt to the accent and speaking style in order to make more accurate notes. We also plan on using machine learning and natural language processing so that LectureLine can identify the subject of a lecture and take more precise and helpful notes based on the subject (for example: in science subjects, LectureLine would include more labeled diagrams).
Accomplishments that we're proud of
Although we had little experience in Python and natural language processing as a team, we were able to work together in order to understand these concepts. We are extremely proud of our working project and the new concepts we were able to learn. In order to incorporate these new concepts, we conducted a lot of research and went through a lot of trial and error.
Additionally, this was our first time using Framer to prototype our mobile application, and we feel accomplished with the professionalism and efficiency of our given model. Although we felt pressured under time, we are super excited to showcase our working prototype and code demo!
Another thing that we are proud of is our business plan and slides. We did our best to create professional and clean materials that showcase our company, product ideas, and strategies.
What we learned
During this process, we were able to learn a lot about emerging technologies such as natural language processing in Python and how they can be implemented into a situation as basic as note-taking. We also conducted research on efficient learning and note-taking strategies in order to maximize the potential of our product, so we were able to learn about how we can improve our own habits and hopefully help others as well!
We experimented and learned the use of UI/UX design as well as the importance of market analysis, which allowed us to better improve LectureLine in comparison to our competitors. Most importantly, we learned the importance of time management and efficiency, which allowed us to successfully complete this project!
What's next for LectureLine
LectureLine is a mobile application as of now, but we plan to expand to browser extensions and other technologies in order to increase compatibility. This provides the user with much more accessibility, as they can use any device to access its features. We hope to partner with educational institutions, such as schools and universities, in order to reach a larger portion of our main target consumers, students. The team will run various marketing strategies and promotions on numerous websites and university platforms in order to promote the use of this time-efficient and user-friendly product.
LectureLine also has the potential to benefit working professionals, and we plan to maximize that potential through an additional work industry version. This version of LectureLine would have features that are specific to taking notes for meetings and informational sessions. For example, LectureLine would take notes during a meeting and automatically send those notes to meeting participants in order to ensure that everyone is on the same page and that there is no confusion. Through further development, LectureLine would be able to create timelines and assign tasks to individuals based on meeting notes.
Further down the road, the LectureLine team hopes to provide new developments and features in order to increase the productivity and efficiency of our application. For further development of LectureLine, technologies such as natural language processing and machine learning will further be implemented to maximize the functions of LectureLine. This includes textbook recommendations, access to multiple languages, and browser compatibility. Additional features will be added as well, such as saving and condensing the lecture audio by increasing the speed and reducing the time when the lecturer is not speaking so that users may listen to it efficiently.
Built With
framer
natural-language-processing
python
pyttsx3
speech-recognition
Try it out
github.com | LectureLine | Revolutionize note-taking: every line counts. | ['Sasha Mittal'] | ['Best Beginner Hack'] | ['framer', 'natural-language-processing', 'python', 'pyttsx3', 'speech-recognition'] | 34 |
10,349 | https://devpost.com/software/ez-eats | EZ eats app UI
EZ eats logo
EZ eats website
GitHub, Framer, and Presentation Link down below!!
Inspiration
We were inspired to create EZ eats after seeing and researching how much food is wasted in the world. Especially in our current situation, restaurants are wasting more and more unused food and losing money at the same time. Wasted food accounts for about ⅓ of the world's food production, which goes to show how much could be done if we knew how to distribute it properly, since it is still perfectly good food that just hasn't been sold.
This not only contributes to growing food wastage numbers but also takes away from the economy which is already tremendously suffering. Restaurants are losing tremendous amounts of money as not as many customers are going to restaurants these days.
The major food wastage problem and the economy sailing into a colossal storm, due to the pandemic, inspired us to create EZ eats!
What it does
EZ eats connects users to restaurants in their area which have perfectly good unused food after hours that they would normally have to throw out. Users can order food from their favorite restaurants based on availability through our app and pick it up accordingly. The app will connect users with restaurants of their preferred cuisines and they can see which restaurants have leftover food daily, allowing them to purchase whatever they want while being at a cheaper price than the normal selling price. Since the food is being sold after hours, the price is reduced anywhere from 30-70 percent so the restaurants are still making money instead of losing it by throwing the food away, and users can buy good food at a cheaper price. This helps the problem of food waste and the economy at the same time.
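The after-hours pricing described above can be expressed as a small helper. This is an illustrative sketch: the function name and the rounding are assumptions, with only the 30-70 percent discount range taken from the description.

```python
def discounted_price(original: float, discount_pct: float) -> float:
    """Compute the after-hours price for a listing. EZ eats discounts
    range from 30 to 70 percent, as described above; anything outside
    that range is rejected."""
    if not 30 <= discount_pct <= 70:
        raise ValueError("discounts are between 30% and 70%")
    return round(original * (1 - discount_pct / 100), 2)
```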
How we built it
We made a website using HTML for EZ eats. We began by writing a simple code asking the user to input their contact information in order to receive updates about the steps that EZ eats will take moving forward. We also inserted logos and social media links where people can find out more about the app when it goes up and running. The website will update with new developments for EZ eats and will periodically send information to users who have signed up with their emails. In the future, this website will allow for more people to learn about EZ eats and its purpose; we are aiming to bring awareness to food wastage and how we can act upon it with the creation of this app.
We also created a functioning prototype on Framer which showcases the UI for the EZ eats app. We made all the screens for the app’s UI and then proceeded to link the buttons to their respective pages, allowing for a perfectly working prototype which can then be used as a model for the real app when created.
Challenges we ran into
Some challenges we ran into were agreeing on an idea as well as what we were going to do in order to execute it. We first started off by brainstorming ideas and then coming together in order to decide, which did take a considerable amount of back and forth. We then took some time to decide how we were going to turn the idea into a reality. These challenges overall helped us create a solid plan as to what we were going to do and how we were going to go about doing it, allowing for us to have a very smooth experience when actually working on a bulk of the project.
Other problems we faced were coding the website in HTML. We are both beginner-level HTML coders, so a majority of our time was focused on developing the website. We learned a lot along the way, and ultimately the long process was worth it.
Accomplishments that we're proud of
We are proud that we were able to create a website and a working prototype in the allotted time and that we were able to work together so seamlessly in order to create the best product we could. The website shows that the app is currently in progress and allows any interested people to sign up whenever they want to receive updates. This prototype allows us to see a true vision of what this app will be as well as its functionality.
What we learned
We learned how to turn an idea into reality in a short amount of time by making a functioning prototype and website. We also learned the nuances of creating a website, like writing the code and purchasing the domain so the idea could truly be ours, and we became much better at managing our time efficiently.
What's next for EZ eats
We hope that people truly see the vision of EZ eats and how it is aiming not only to help the environment but the economy as well. We plan to continue by doing some more in-depth market analysis and advertising it online and in-person in order to stir up interest. The more people who know about this problem and use this app to make a step forward, the more that we can curb the problem of food wastage and the effects this is taking on restaurants and the economy. This app is all about making a difference and this could be the first step to making it happen. Let’s modernize our world and eat for less!
Built With
framer
html
Try it out
github.com
framer.com
docs.google.com | EZ eats | Let's modernize our world and eat for less. | ['Mihika Bhatnagar', 'Annika Desai'] | [] | ['framer', 'html'] | 35 |
10,352 | https://devpost.com/software/mixpose-web | Tigergraph Scheme
Tigergraph Explorer
Inspiration
We are building a yoga platform because yoga has helped our families get out of depression. As a side effect, it has made us more flexible. Throughout COVID-19, people are required to social distance and loneliness has become a big problem. We want to empower instructors to be able to produce better quality content and allow people to do yoga at home, and if possible, with friends in aid of creating community and battling loneliness.
What it does
We are building a live-stream yoga class web application. What makes our app special and unlike other live-streaming apps is that we use A.I. pose tracking and stick figures to provide a feedback loop from teachers to users. This way students are able to see each other, and instructors can view all of the students. TigerGraph in the backend provides fast analytics tools so instructors can run their classes better.
How I built it
We used TigerGraph and GSQL for data analytics, exporting Firebase data directly into TigerGraph. We created 3 vertices and 5 different edges for the hackathon itself. The vertices are Lesson, User, and Instructor; the edges cover users being friends with each other, a user attending a class, a user giving feedback on a class, a teacher teaching a class, and users following teachers. We also wrote additional GSQL to support the analytics tools.
We used Agora’s Real-Time Engagement Video SDK, then ran TensorFlow A.I. pose detection on top; once we get the skeleton points, we can draw the stick figure through Augmented Reality. Since you can’t run inference on top of the HTML video element, we created a canvas onto which we redraw the live stream, then run the inference on top of the canvas itself to draw the detection. After the detection is done, we draw the stick figure as an AR overlay on top of the user’s live video feed in real time.
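Once the pose model returns skeleton points, drawing the stick figure reduces to connecting keypoint pairs whose confidence is high enough. The sketch below illustrates that mapping in Python; the keypoint names follow the common PoseNet convention, and the bone list and threshold are assumptions for illustration, not MixPose's actual browser code.

```python
# Pairs of keypoint names that form the "bones" of the stick figure
# (an illustrative subset of the PoseNet-style keypoints).
BONES = [
    ("leftShoulder", "rightShoulder"),
    ("leftShoulder", "leftElbow"), ("leftElbow", "leftWrist"),
    ("rightShoulder", "rightElbow"), ("rightElbow", "rightWrist"),
    ("leftHip", "rightHip"),
    ("leftShoulder", "leftHip"), ("rightShoulder", "rightHip"),
]

def stick_figure_segments(keypoints, min_score=0.5):
    """Turn pose-detection keypoints into line segments to draw on the
    overlay canvas. `keypoints` maps a part name to (x, y, score);
    a segment is only emitted when both endpoints clear `min_score`."""
    segments = []
    for a, b in BONES:
        if a in keypoints and b in keypoints:
            xa, ya, sa = keypoints[a]
            xb, yb, sb = keypoints[b]
            if sa >= min_score and sb >= min_score:
                segments.append(((xa, ya), (xb, yb)))
    return segments
```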
We are also giving choices for users to either join the public channels, their own private channel or create a channel for their friends to take the yoga class together. The instructors will be subscribed to all the channels. This way students can protect their privacy from other students while still allowing teacher to guide them.
Because we are using Agora SDK across all platforms, the Android user can actually now see the web users and vice versa, with instructors seeing everyone indistinguishably.
Challenges I ran into
Getting A.I. to run on top of live video feed from Agora’s Video SDK proved to be a little more difficult than we thought, but we were able to solve the problem by redrawing the video feed onto a canvas then doing the inference on top of the canvas itself.
GSQL was another challenge: yet another tool to learn. The detailed step-by-step experience is documented at
https://www.hackster.io/364351/how-to-use-tigergraph-for-analytics-e476fa
We are writing down our AI solution on
https://www.hackster.io/mixpose/running-ai-pose-detection-on-top-of-agora-video-sdk-d812ce
Another challenge is some users don’t really want to turn on their camera, so we created a private mode trying to accommodate their privacy concerns via Agora’s SDK.
Accomplishments that I’m proud of
We’ve launched web app on
https://mixpose.com
and we are now testing it with actual users. This is much scarier, because we want to ensure our users have the best experience using our application.
Another accomplishment we are very proud of is that we actually have the license to use the music in the demo video :)
What I learned
We used GSQL for the first time, and saw how powerful graph queries can be.
What’s next for MixPose Web
We are ready to take this idea forward and turn it into a startup. The three of us co-founders have quit our jobs to work on it full steam ahead.
Built With
agora
ai
ar
augmented-reality
firebase
tensorflow
tigergraph
Try it out
mixpose.com
github.com
www.hackster.io
www.hackster.io | MixPose Web App | MixPose is a live streaming platform for yoga classes. We use A.I. on the Edge to do pose detection for the users and to send feedback to the yoga instructors. | ['Peter Ma', 'Sarah Han', 'Ethan Fan'] | ['First 50 Qualified Submission', 'General Submission', 'Most Popular', 'First Place (1)'] | ['agora', 'ai', 'ar', 'augmented-reality', 'firebase', 'tensorflow', 'tigergraph'] | 0 |
10,352 | https://devpost.com/software/transtreaming | Architecture Diagram
home page
Meeting Room
Meeting Room with functionality
Inspiration
During this pandemic, our day-to-day communication has become even more digitalized as people self-isolate. The importance of remote meetings and communication has increased more than ever, and keeping that importance in mind, we should make efforts to keep improving this communication. With a diverse range of people speaking different languages and working together, their communication should be smooth. The native language is the best way to transmit your ideas, so an application that enables meeting attendees to speak in their native languages is essential. This need and its importance motivated us to work on this idea.
What it does
The application transcribes user audio in any language and then translates it to the desired destination language. The translated text is shown at the chatbox of both attendees.
For example, take two attendees A and B. If attendee A has chosen English as his/her language and attendee B has chosen German, then in attendee A's chatbox all of the conversation will be displayed in English, and in attendee B's chatbox all of the conversation will be displayed in German.
How We built it
First off, we integrated the Agora Web SDK for real-time video calling. Then we transcribe the user's audio on the client side of the application. After the audio is transcribed to text on the client side, the transcribed text is sent to the server over a socket, where it is translated and sent to the partner's client application on his/her socket channel. The translated text is then shown in the chatbox of the attendee who has chosen that specific language.
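The per-attendee routing on the server can be sketched as a small fan-out function. This is illustrative only: the function name is hypothetical, and `translate` stands in for whatever translation API the server calls.

```python
def route_transcript(text, attendees, translate):
    """Fan a transcribed utterance out to every attendee in their
    chosen language. `attendees` maps an attendee id to a language
    code, and `translate(text, target_lang)` is supplied by the
    caller (the real app called a translation service here)."""
    return {
        member: translate(text, lang)
        for member, lang in attendees.items()
    }
```

Each attendee's socket channel then receives only the entry keyed by their id, so both sides see the whole conversation in their own language.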
Challenges we ran into
1: Differentiating between the transcribed text of the two attendees, as each transcribed text has to be translated into a different language.
2: Deciding the distributions of the functionality on the client-side and the server-side.
Accomplishments that I'm proud of
1: Successful implementation of realtime translated subtitles.
What We learned
1: Explored the Agora Web SDK and how to integrate it with any modern web framework.
2: Explored the different NLP libraries.
3: Understood in a bit more detail how sockets work.
4: How audio transcribing is done.
5: How to work with sockets and APIs to make a real-time application.
What's next for Transtreaming
Really excited about the future of Transtreaming, as there's always so much room for improvement. This application can be used in many use cases, like helping diverse groups of people communicate in a meeting. For that, we need to integrate it with websites and applications that offer remote meeting services.
If we move one step forward, we can extend the idea to the computer vision side by detecting hand gestures and converting them to text for people who are unable to hear.
Presentation video of Transtreaming
https://youtu.be/jrSjLrxAKhc
Live Demo of Transtreaming
https://youtu.be/ZqaPyCA1OMY
GitHub Repos
1: Project Description Repo:
https://github.com/zilehuda/transtreaming
2- Backend Repo:
https://github.com/zilehuda/transtreaming-europa
3- Frontend Repo:
https://github.com/zilehuda/transtreaming-jupiter
Built With
agora
flask
github
google-translate
google-web-speech-api
heroku
react
Try it out
transtreaming-jupiter.herokuapp.com | Transtreaming | Border-less Communication | ['Zilehuda Tariq', 'Abdul Hannan Rai', 'Muhammad Taimour'] | ['Second Place (1)'] | ['agora', 'flask', 'github', 'google-translate', 'google-web-speech-api', 'heroku', 'react'] | 1 |
10,352 | https://devpost.com/software/rogue-bots-lite | Built using agora and unreal
First Screen
Game Play
Inspiration
Video games were once widely perceived as inherently anti-social. However, the World Health Organization, which has warned about the risks of too much gaming, recently launched #PlayApartTogether, partnering with major gaming studios to encourage people to stay home.
Around the globe, people's lives have been turned upside down by social distancing measures and even more stringent lockdowns put in place to slow the spread of the coronavirus. Some people are using isolation to explore new hobbies or finish up long-delayed household projects.
In the first couple of weeks of social distancing, I was pretty hungry for simple video chats, just getting together with friends, but over the past few weeks or so, I've really wanted some new ways to spend time with friends.
So I build this really fun game to play with friends.
What it does
It is a top-down shooter game where the player's only goal is to survive the dangerous bots trying to reduce their health. By entering the same channel name and encryption key, two players join a game. It has multiple power gadgets that help you during the game: the blue balls let you regain health, while the green balls let you shoot three bullets at a time. The bots keep respawning, and you have to dodge or kill them in order to survive.
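The power-up rules above can be sketched as simple player-state logic. The healing amount, damage value, and health cap below are illustrative assumptions; only the blue-ball/green-ball behavior comes from the description.

```python
MAX_HEALTH = 100  # assumed cap for illustration

class Player:
    def __init__(self):
        self.health = MAX_HEALTH
        self.bullets_per_shot = 1

    def pick_up(self, ball):
        # Blue balls restore health (capped); green balls enable
        # the triple-shot gadget described above.
        if ball == "blue":
            self.health = min(MAX_HEALTH, self.health + 25)
        elif ball == "green":
            self.bullets_per_shot = 3

    def take_hit(self, damage=10):
        self.health = max(0, self.health - damage)
        return self.health > 0  # still alive?
```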
How I built it
With the help of the Agora plugin, I was able to embed real-time video chat into my game, created using Unreal Engine 4.
Challenges I ran into
As the Agora SDK for Unreal is still in beta, figuring out how it works was a bit tricky.
Accomplishments that I'm proud of
I really enjoyed playing the game I made with my friend. Previously my games didn't have video calling, which made them feel boring to me; using Agora, I got the chance to integrate video calls into my games, making them more immersive and lively.
What I learned
Learned about the Agora SDK; I will use it in my future projects.
What's next for Rogue Bots Lite
I will be adding a feature where players are given the option to broadcast their gameplay directly to streaming platforms like Twitch.
Built With
agora
unreal-engine
Try it out
drive.google.com | Rogue Bots Lite | A super fun top-down shooter game built using Unreal Engine and Agora | ['Harsh Agarwal'] | ['Third Place (1)'] | ['agora', 'unreal-engine'] | 2 |
10,352 | https://devpost.com/software/lyricist | Splash Screen (App)
Login Screen (App)
Carousel and Player (App)
Start Livestream (App)
Generated Musical Notes (App)
Home Page 1 (Web)
Home Page 2 (Web)
Footer & FAQ (Web)
Join Livestream (Web)
Generated Notes (Web)
Postman
Lyricist
Lyricist helps get musical notes from online music classes automatically.
Inspiration
To make quarantine more productive, a mutual friend of our team, Ekaansh, decided to start posting videos on his Instagram of himself playing new tunes he composed on the guitar. His followers gave really positive responses, and many of them even wanted his musical sheet notes so they could learn and try to recreate the soothing music. Making the notes manually is a very tedious task, so when Ekaansh told us about this, our team decided to make a cross-platform (Web + iOS + Android) application to solve his problem.
What it does
Lyricist helps download musical notes from online music classes automatically with just the click of a button or even view the notes in real time.
How we built it
We use Agora to send the audio stream to be transcribed, generating audio notes, which we then use to request the exact musical notes from our backend. Our backend scrapes a well-reputed website using Selenium and replies to the frontend of the website and app with the required musical sheet.
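Somewhere between transcription and sheet generation, a pipeline like this has to turn detected pitch frequencies into note names. A common way is the standard A4 = 440 Hz mapping sketched below; this is illustrative and not necessarily the method Lyricist uses.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz: float) -> str:
    """Map a detected pitch frequency to the nearest note name with
    its octave, using A4 = 440 Hz as reference (MIDI note 69)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1
    return f"{name}{octave}"
```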
Challenges we ran into
Deploying the backend server that generates the music sheet is still a problem, so it works only on localhost for now. This is why, while testing the sheet generation, you'll have to run the code locally, but you can see the UI of the whole project, including the livestream using Agora and the note generation, on the hosted site.
Accomplishments that we're proud of
We came up with this idea and built it from scratch in less than two days due to exams in our college during the rest of the hackathon.
What we learned
Music! A lot of music! We had no clue learning how to play music could be this challenging!
Steps to run the Server
$ git clone https://github.com/Meherdeep/RTE-Hack
$ cd RTE-Hack
$ pip3 install -r requirements.txt
$ python3 -m uvicorn server:app --reload
Useful Links
Lyricist Website
Agora.io Website
Demo Video
Requirements
[x] Agora RTC SDK (or
CDN
)
[x] Agora App ID
[x] AWS Account
[x] Jupyter Notebook
[x] FAST API requirements (requirements.txt)
_____ _ _ __ __
|_ _| | | | \ \ / /
| | | |__ __ _ _ __ | | __ \ V /___ _ _
| | | '_ \ / _` | '_ \| |/ / \ // _ \| | | |
| | | | | | (_| | | | | < | | (_) | |_| |
\_/ |_| |_|\__,_|_| |_|_|\_\ \_/\___/ \__,_|
______
| ___|
| |_ ___ _ __
| _/ _ \| '__|
| || (_) | |
\_| \___/|_|
______ _ _ _ _
| ___ \ (_) | | | | | |
| |_/ / ___ _ _ __ __ _ | |_| | ___ _ __ ___| |
| ___ \/ _ \ | '_ \ / _` | | _ |/ _ \ '__/ _ \ |
| |_/ / __/ | | | | (_| | | | | | __/ | | __/_|
\____/ \___|_|_| |_|\__, | \_| |_/\___|_| \___(_)
__/ |
|___/
License
GPL-3.0 ©
Akshat Gupta, Mehereep Thakur and Sai Sandeep
if (youEnjoyed) {
starOurRepository();
}
Built With
agora
agora.io
css3
dcodejs
fast-api
flutter
html5
javascript
jquery
jupyter-notebook
natural-language-processing
rtc
selenium
webrtc
Try it out
github.com | Lyricist | Lyricist helps download musical notes from online music classes automatically with just the click of a button or even view the notes in real time. | ['Akshat Gupta', 'Meherdeep Thakur', 'Sai Sandeep Rayanuthala'] | ['Audience Favorite'] | ['agora', 'agora.io', 'css3', 'dcodejs', 'fast-api', 'flutter', 'html5', 'javascript', 'jquery', 'jupyter-notebook', 'natural-language-processing', 'rtc', 'selenium', 'webrtc'] | 3 |
10,352 | https://devpost.com/software/bio-pulse | Where did you get the idea from?
In 2017, my younger sister, who was two years old at the time, was diagnosed with stomach cancer in Zimbabwe. Due to the lack of medical facilities in Zimbabwe, she was referred to a hospital in India where her surgical operations were to take place. My family was facing financial challenges; added to the basket were Matifadzaishe's and mom's plane tickets, money for hotel services, and money for the surgical operations, which was really expensive for a low-income family like mine. However, the family sacrificed and sold most of the things we had so Matifadzaishe and mom could go to India for her surgical operation. The whole treatment process took them about a year in India. This was the most difficult and tough time that my family has experienced financially, socially, and psychologically. What if there had been a telemedicine facility in Zimbabwe?
How the system works
A specialized doctor can carry out surgery remotely without having to incur all of these costs. A lot of people are dying in Zimbabwe because they cannot afford to go abroad to see the specialists they are referred to. The proposed system will help local people, solving the problem of the skilled doctor being in one place and the patient in another.
Targeted market?
Our target users are people in the medical community, like doctors, hospitals, and medical practitioners.
What is the Business model?
We lease the robotic arm systems to hospitals and handle maintenance and software updates. They pay us an annual fee, and we provide support and maintenance.
The architecture of the system
The robotic arm system was developed using an Arduino board, a couple of servos, and cardboard boxes. The software that controls it is an Electron app using NodeJS; that is where we used Agora's live streaming SDK. We used PubNub for WebSockets to communicate with the serial ports. There is a PHP session manager which manages sessions between the people who control the robot and the robotic arms themselves. These sessions are created using PHP and MySQL. The front end is also PHP and MySQL, where users log in.
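On the serial side, the controlling app ultimately has to encode control input into something the Arduino sketch can parse. The wire format below (`J<joint>:<angle>`) is purely an assumed example for illustration; the clamp keeps hobby servos within their usual 0-180 degree range.

```python
def servo_command(joint: int, angle: float) -> bytes:
    """Encode a joint movement as one line an Arduino sketch could
    parse from the serial port, e.g. b"J2:90\\n". The format is an
    illustrative assumption, not the project's actual protocol."""
    clamped = max(0, min(180, round(angle)))  # hobby-servo range
    return f"J{joint}:{clamped}\n".encode("ascii")
```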
Built With
agora
arduino
javascript
mysql
node.js
php
pubnub
sdk
Try it out
github.com | Bio Pulse | A system aimed at ensuring that patients get the best surgeons no matter where they are. | ['Tapiwanashe Mugoniwa'] | [] | ['agora', 'arduino', 'javascript', 'mysql', 'node.js', 'php', 'pubnub', 'sdk'] | 4 |
10,352 | https://devpost.com/software/workbooks | Inspiration
When I saw the Agora SDK I knew I wanted to try and make something that let me combine both drawing and video chat. I ultimately decided on a solution that targeted Tutors and Students because of the pandemic and the challenges many students are currently facing today.
What it does
Workbooks lets tutors create a workbook full of math problems. Tutors can then help guide students in solving these problems all while chatting via video. Progress is saved and tutors can monitor progress at any time.
How I built it
I used Swift to build an iPhone/iPad application powered by Agora for video and Firebase for data/image storage. I used Drawsana to power the drawing and created a Firestore listener to track changes to the database. I utilized the Firebase reference ID to create a custom channel name in Agora. This way, when a user creates a new workbook, they also get a private video channel in Agora.
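Deriving the Agora channel name from the Firebase reference ID can be as simple as sanitizing and truncating the document id, since Agora channel names are limited to 64 bytes. The `wb-` prefix and the exact sanitization rule below are assumptions for illustration (and in Python rather than the app's Swift).

```python
import re

def channel_from_doc_id(doc_id: str) -> str:
    """Derive an Agora channel name from a Firestore document id.
    Strips characters outside a conservative safe set and truncates
    to Agora's 64-byte channel-name limit; the "wb-" prefix is a
    hypothetical namespace for workbook channels."""
    safe = re.sub(r"[^A-Za-z0-9_-]", "", doc_id)
    return ("wb-" + safe)[:64]
```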
Challenges I ran into
Realtime! I wasn't sure if I was going to be able to get the realtime drawing to work and how well it would work across iPhone and iPad. I had to make some adjustments to my views in order to get the drawing to work, but in the end I was able to accomplish what I set out to do.
Accomplishments that I'm proud of
Getting the realtime drawing and video to work was something I was really proud of. This was the first time I've done anything like this, so seeing it work was very exciting for me. Luckily Agora made the video chat very simple with its Swift SDK, so I was able to get that to work after a bit of troubleshooting.
What I learned
This was my first time using Agora, so I was able to learn a lot by going through the Agora SDK. I also discovered more products available through Agora, such as realtime chat!
What's next for Workbooks
I want to continue to improve upon workbooks by letting Tutors create custom equations and secure the workbooks so that Tutors have control over who their students are.
Built With
agora
firebase
ios
swift | Workbooks | Tutors and students can work on math problems together while chatting via video. Workbooks can help tutors track students progress. | ['AJ Rahim'] | [] | ['agora', 'firebase', 'ios', 'swift'] | 5 |
10,352 | https://devpost.com/software/studyroom-4eo7ng | Onboarding Screen
Login Page
Meeting Room
Meeting screen
Meeting in-session
Inspiration
In these COVID-19 times, most classes happen online, and even though there are apps like Zoom, Google Meet, etc., neither students nor teachers get the feel of a classroom, which can hamper the learning experience, as students might not feel motivated enough to attend an online class. The StudyRoom app provides an infrastructure for students to experience a totally virtual classroom, where they can take a seat on the virtual benches with their classmates and attend lectures, just like in a real classroom.
How we built it
We built it with Flutter, Firebase, and of course the Agora RTM and Live Interactive SDKs. The app was also deployed with CI/CD on Codemagic to run as iOS and web apps.
Challenges we ran into
Managing and modifying Agora RTM to fit our requirements, and even a few runtime errors while trying to run the Runner in Xcode for iOS.
Accomplishments that we're proud of
Our amazing teamwork, and constantly working together to get rid of any bugs.
What's next for StudyRoom
Using ML and AI for better integration and the capability to send immediate push notifications whenever the user has missed something. We would also like to incorporate features like setting deadlines for assignments, a scheduler for planning lecture timings, a tracker for recording scores assigned to each student, and a whiteboarding facility for teachers.
Try it out
github.com | StudyRoom | An easy seamless classroom experience designed for teachers and students to conduct online classes using Agora RTM SDK. | ['Jui T'] | [] | [] | 6 |
10,352 | https://devpost.com/software/mr-agor-world-s-first-rte-powered-virtual-interviewer | Welcome
Interview Question
Score Board
Roadmap
Architecture Structure
A long time back...
Mr. Agor used to conduct interviews in the traditional style, calling candidates to his office and interviewing them. This process was repetitive, boring, and cumbersome. Mr. Agor had to spend hours and hours going through the candidates' resumes, then shortlisting the candidates and interviewing them. Then came the coronavirus pandemic, and Mr. Agor could no longer conduct offline interviews. It was time for Mr. Agor to mend his ways...
Then Mr. Agor got introduced to Agora.io, a
superpower Real-Time Engagement technology platform
. Mr Agor realized that he could use Agora for building an A.I powered platform where any number of candidates can join and give their interviews to an A.I powered interviewer who
behaves like Mr. Agor and simulates the exact same experience
through a powerful audio-video interview.
And thus was born the virtual Mr. Agor. The virtual Mr. Agor invites applications from thousands of candidates and then interviews them. Each candidate is presented with a set of 8 different questions. While the candidate answers the questions, Mr. Agor uses his A.I. technologies of computer vision and natural language processing to grade the candidate on confidence and body language, and to ensure that the candidate doesn't cheat.
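Combining the per-question computer-vision and NLP signals into one grade could look like the sketch below. The equal weighting and the sub-score names are illustrative assumptions, not Mr. Agor's actual scoring formula.

```python
def interview_score(confidence: float, body_language: float,
                    integrity: float) -> float:
    """Combine per-question A.I. sub-scores (each in [0, 1]) into a
    single grade. Equal weighting is an assumption for illustration."""
    scores = (confidence, body_language, integrity)
    assert all(0.0 <= s <= 1.0 for s in scores)
    return round(sum(scores) / len(scores), 2)
```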
Now the actual Mr. Agor only has to interview the candidates who have already been interviewed and selected by the virtual Mr. Agor. Come the next pandemic, Mr. Agor has built a
scalable and powerful way
to easily find the best candidates for his company.
Mr. Agor used Agora's high-quality video and audio calls. His website was set up using Agora's Web SDK, which was easy to set up and even works in regions with low network reception. Mr. Agor is now reaching out to corporations to use his product and will charge them for every candidate they hire.
Mr. Agor is relaxing on the beach and interviewing the candidates for his company and his clients.
What are you doing?
Built With
agora
css
html5
javascript
jquery
machine-learning
rtc
tensorflow
Try it out
d11lkttv4lq7bj.cloudfront.net | Mr. Agor - World's first RTE powered virtual interviewer | Mr. Agor is an A.I powered audio-video remote interview RTE platform to conduct 1000s of remote interviews simultaneously using the power of A.I. | [] | [] | ['agora', 'css', 'html5', 'javascript', 'jquery', 'machine-learning', 'rtc', 'tensorflow'] | 7 |
10,352 | https://devpost.com/software/screen-code | Inspiration
test
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for |Test|||
Built With
javascript
Try it out
bitbucket.org | |Test||| | tes | [] | [] | ['javascript'] | 8 |
10,352 | https://devpost.com/software/unrealte | Inspiration
What it does
UnRealTE
is an
Unreal Engine
application that uses the
AgoraPlugin
for Real Time Engagement.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for UnRealTE
Try it out
bitbucket.org | UnRealTE | UnRealTE is an Unreal Engine application that uses the AgoraPlugin for Real Time Engagement. | ['Warp Smith'] | [] | [] | 9 |
10,352 | https://devpost.com/software/dragon-game | have fun
Inspiration
to develop a game
What it does
a dragon game
How I built it
with HTML5, CSS, and JavaScript
Challenges I ran into
Accomplishments that I'm proud of
the output of the project
What I learned
developing a game with javascript
What's next for dragon game
Built With
css
html5
javascript
Try it out
github.com | dragon game | fun and joy | ['Swetha Ramagiri'] | [] | ['css', 'html5', 'javascript'] | 10 |
10,352 | https://devpost.com/software/teapot | our software architecture
Inspiration
Following shelter-in-place orders during the current pandemic, we found our in-person social interactions to be severely limited. Although a lot of current tools offer some kind of virtual conferencing functionality, there is often a lot of friction in getting into a call with our closest friends, which degrades the experience. In addition, current telecommunication solutions are often overloaded with features that take away from the core audio communication functionality.
What it does
Meet teahouse, a frictionless virtual audio lounge. Here, lounges are persistent, and the simple interface is built around the audio call itself. You can seamlessly transport from lounge to lounge with convenient and simple audio controls. teahouse enables you to be present with others virtually while allowing you to multitask, recreating the experience of hanging with friends, family, and coworkers in person.
Consider this: you are in a virtual class (Zoom University, maybe) and have a question about something the professor just said. If you were in person, you could simply ask the person next to you, but now you have to resort to sluggishly typing out your question over chat.
With teahouse, you can quickly hop on to a lounge without leaving the virtual class and quickly get your question answered by your classmates.
How we built it
Application
We used the Flutter framework for the iOS and Android application. In addition, we used the Agora RTC Flutter SDK for the voice communications. For authentication, we used Firebase Authentication to allow Sign in with Google on the app. Also, all the requests to the REST API contain the user’s authentication token, to preserve both security and privacy. We also use Firebase Dynamic Links to allow users to click on a link to join a new room.
Backend
teahouse’s architecture is predominantly serverless. We used AWS Lambda and AWS DynamoDB as our backend with the REST API through the AWS API Gateway. The DB is used to store information about the user and their lounges.
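The serverless join flow described above could be sketched roughly as follows. This is a minimal illustration, not teahouse's actual code: the event shape follows AWS API Gateway proxy conventions, and the field names (`lounge_id`, `user_id`, `members`) plus the `FakeTable` stand-in are assumptions made for the example.

```python
import json

class FakeTable:
    """In-memory stand-in for a DynamoDB table, used only so the
    handler logic can run without AWS credentials."""
    def __init__(self):
        self.items = {}

    def get_item(self, key):
        return self.items.get(key)

    def put_item(self, item):
        self.items[item["lounge_id"]] = item

def make_join_handler(table):
    """Return a Lambda-style handler that records a user joining a lounge."""
    def handler(event, context=None):
        body = json.loads(event["body"])
        lounge_id, user_id = body["lounge_id"], body["user_id"]
        # Fetch the lounge record, creating it on first join.
        item = table.get_item(lounge_id) or {"lounge_id": lounge_id, "members": []}
        if user_id not in item["members"]:
            item["members"].append(user_id)
        table.put_item(item)
        return {"statusCode": 200, "body": json.dumps(item)}
    return handler
```

In a real deployment the `table` argument would be a boto3 DynamoDB table resource and the handler would be wired to API Gateway; injecting the table keeps the logic testable.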
Challenges we ran into
Keeping track of users on call
It was difficult to identify new users who joined a call they had not been in before. To resolve this, we created a new API to update the local call lists of all members of the call.
Lag from API on UI
We were noticing considerably high latency while retrieving data from the API. To solve this, we started caching some of the more static pieces of data and the performance improved considerably.
Accomplishments that we're proud of
We’re especially proud of the invite logic of teahouse. When you click a teahouse link, it loads either the app on your phone or the app store page if you don’t have the app. From our experience, link invites provide a frictionless way of joining rooms and allows for easy sharing. We were able to do this using Firebase Dynamic Links and local device logic.
What we learned
Integrating a Flutter App with a custom API deployed with AWS Lambda + API Gateway
Implementing voice functionality in an app using Agora SDK
Implementing room invite logic
What's next for teahouse
Adding spatial audio for directional audio support
Improving audio quality with additional signal processing
Built With
agora
amazon-web-services
firebase
flutter | teahouse | frictionless virtual audio lounges | ['Rishabh Jain', 'Rohan Dhesikan', 'Eric Cheng'] | [] | ['agora', 'amazon-web-services', 'firebase', 'flutter'] | 11 |
10,352 | https://devpost.com/software/eduar-kvfont | Login
Detail Screen
Topics
First Screen
Shapes
Inspiration
While coronavirus continues to spread across the globe, many countries have decided to close schools as part of a social distancing policy in order to slow transmission of the virus.
However, these school closures have affected the education of more than 1.5 billion children and youth worldwide due to the coronavirus (COVID-19) pandemic.
My little sister is one of them. Because of Covid19 she couldn't attend her classes, and while her school later started holding video-call classes, these were far less effective because of the missing practical experience, which is at the heart of teaching children.
Covid19 has been very tough for everyone, but it can have an even more severe effect if we let it continue hampering the education of kids.
So I thought, how can I help narrow the learning gap?
Being a CSC student, I decided to make this app, which brings the practical element back into teaching while in a video call.
What it does
Augmented reality can help make classes more interactive and allow learners to focus more on practice instead of just theory. As AR adds virtual objects to the real world, it lets students train skills using physical devices.
Our app does essentially that: it helps students learn about things like shapes, colours, etc. using AR, helping children get the basic practical experience they are missing right now in their video-call classes.
How I built it
I used Flutter with the Agora SDK for the video call and ARCore for the augmented reality.
I chose Flutter as it enables development of mobile apps with lightning-fast transitions, and it comes packed with design elements that fit right into the native feel of both Android and iOS.
Challenges I ran into
It was fun to carefully design the UI/UX of our app keeping children as our target customers: selecting icons and pictures that feel more appealing to children, making sure the interactive objects are big enough for small hands, and so on. Also, integrating AR into the video call was pretty challenging.
Accomplishments that I'm proud of
I am super proud to say that I showed this app to Mr Manish, the head of a school called Wisdom, and got valuable feedback to work further on the app (like adding name tags); he also showed interest in helping us integrate our app into their school.
What I learned
Honestly, it was a roller coaster ride, and I would like to thank the Agora team, as this project helped me push my limits. Working on an app catering to children can be pretty challenging, and along the way I learned the implementation of AR and the impact it can have on our education system.
What's next for EduAR
We want to launch it on the Play Store and tie up with a school to implement our product. We are awestruck by our product's future potential and would like to be one of the pioneers of bringing change to children's education, especially during these testing times.
Built With
agora
arcore
flutter
Try it out
github.com | EduAR | An app that uses the power of Augmented Reality to help kids get the practical experience while learning while in a video call. | ['Rishav Raj Jain'] | [] | ['agora', 'arcore', 'flutter'] | 12 |
10,352 | https://devpost.com/software/exercise-together | Live Video Streaming
Video Room
Youtube enabled
Live Data Syncing
Search Bar
Authentication
DynamoDB
Home
Inspiration
We know that physical activity and social interaction have immense benefits*. During lockdown, many people aren't able to go to the gym or see any of their friends in person. I wanted to create an app to help people get their endorphins up and see their gym buddies across the world.
*
https://www.cdc.gov/physicalactivity/basics/pa-health/index.htm
,
https://www.mercycare.org/bhs/services-programs/eap/resources/health-benefits-of-social-interaction/
What it does
Exercise Together is a web app that allows 3 people to share video while watching the same Youtube exercise class and log their exercise activity.
It works like this:
A user visits the website and either creates an account or logs in. Amazon Cognito is used for authentication.
Once authenticated, the user is directed to a dashboard depicting the amount of time spent exercising with Exercise Together.
The user clicks join room and enters a room name. Up to 3 of their friends enter the same name to join the same room.
The users enter a video chat room and can search for a Youtube exercise video together by utilizing the search bar. Once everything is ready, they click start exercise to begin!
When the video ends, the user returns to the dashboard and their time spent exercising is logged.
Exercise Together is helpful when you want to exercise with your friends and simulates an exercise class you could do at the gym like yoga or pilates. This way people can work out with their friends that are all over the world!
How I built it
I used React and Redux to build the front end of the project. For the backend, I used serverless functionality: Cognito, AWS Lambda, S3, DynamoDB, and AppSync. Cognito verifies the user so that I can log exercise data for every user separately. All data is stored in DynamoDB. When people enter a room, Agora.io livestreams everyone's video to each other, while React is used to display everyone's video. Every change to the search bar or click on a Youtube video is logged to DynamoDB and propagated to all the other clients in the same room through AppSync. As a result, everyone in the room sees the same view at the same time. When you finish the workout, the data is sent to DynamoDB with the email you logged in with as the key. On the dashboard, a get request is made back to DynamoDB so that you can see your exercise data for the whole week.
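The dashboard aggregation described above (summing a user's logged exercise time for the week) could look something like this. This is a sketch of the logic only: the record shape (`email`, `day`, `minutes`) is an assumed schema, not the app's actual DynamoDB layout, and fetching the rows from DynamoDB is left out.

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_totals(records, today):
    """Sum exercise minutes per user for the 7 days ending at `today`.

    `records` mimics rows fetched from the exercise-log table as dicts
    with 'email', 'day' (ISO date string), and 'minutes' keys.
    """
    start = today - timedelta(days=6)
    totals = defaultdict(int)
    for r in records:
        d = date.fromisoformat(r["day"])
        # Only count sessions inside the trailing 7-day window.
        if start <= d <= today:
            totals[r["email"]] += r["minutes"]
    return dict(totals)
```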
Challenges I ran into
I used a wide variety of services in order to develop the application that I wasn't experienced with previously like Agora.io, AWS Amplify, and AWS AppSync. Learning them was difficult and I went through a lot of troubleshooting with those services in the code. Moreover, syncing all these services together into one application was a large challenge, and I kept trying different pieces of code one at a time to try to get them to work together.
Accomplishments that I'm proud of
I was finally able to learn how to use WebSockets (AWS AppSync uses WebSockets), which I'm really excited to use for my future projects! WebSockets are especially crucial for online games, which I want to make.
What I learned
I learned how to use a multitude of services and link them together: for example, WebSockets, Agora.io, AWS Amplify, and AWS AppSync. All these services will be immensely useful for my future projects, so I believe I really benefited from creating this project.
What's next for Exercise Together
Some extensions I'd like to make include:
Adding Fitbit and Apple Health functionality so that users who use them can all see data logged onto the website.
Making a sidebar that people could use to see who is currently online from their friends list and join a room with them. In order to implement that, I would have to use AWS Neptune, which uses the same technology that Facebook uses for Facebook Friends.
Creating a phone app using React Native. I feel that more people would like to use a phone app rather than the website.
There are still
many bugs
, especially with the video streaming since I'm using a third party API and a free account for it. For example:
The video streaming only works in Chrome.
Entering the video room with more than one person is a buggy process; the way I get it to work is by duplicating the tab for each user entering and closing the previous tab.
The Cognito verification link redirects to localhost, but will confirm the account.
Built With
agora.io
amplify
appsync
cognito
cookie
dynamodb
graphql
javascript
lambda
materialize-css
node.js
react
redux
s3
serverless
ses
websocket
Try it out
exercisetogether.rampotham.com
github.com
www.youtube.com | Exercise Together | Exercise Together is a webapp that simulates your own group fitness class online with your friends | ['ram potham'] | ['The Wolfram Award'] | ['agora.io', 'amplify', 'appsync', 'cognito', 'cookie', 'dynamodb', 'graphql', 'javascript', 'lambda', 'materialize-css', 'node.js', 'react', 'redux', 's3', 'serverless', 'ses', 'websocket'] | 13 |
10,352 | https://devpost.com/software/coachally-interactive-virtual-classroom-video-calling-app | Assist feature
CoachAlly Home page
User's can easily seek guidance , report bugs
Seek guidance with in-app screenshot&doodle feature instantly
Video Call
AR Classroom
Broadcast Mode
Inspiration
During these pandemic days, our team, too, faced issues while learning through online portals. So our team took a
step forward in resolving the common issues and further improving on them.
What it does
CoachAlly application helps in creating interactive virtual classrooms using the latest technologies like
Augmented Reality
and creates room for the virtual classroom through
high-quality video calling
with a low-latency experience.
Augmented reality in education is surging in popularity in schools worldwide. Through AR, educators are able to improve learning outcomes through increased engagement and interactivity. AR enhances the learning of abilities like problem-solving, collaboration, and creation to better prepare students for the future. Teachers can include custom AR objects and pre-recorded lecture videos, which help students view course materials from the ease of their home.
Live sessions can be held virtually through the class meet option. We have designed a one-step meeting join, keeping young students in mind. The app asks only for the meet code and doesn't collect other credentials, thus improving the privacy of the end user.
We have also integrated an
ASSIST
feature which guides users step-by-step if they either need a walkthrough of a feature or encounter a bug. The main advantage of this feature is that users can make use of an in-app screenshot tool with an on-board doodle option to contact the admin/developer hassle-free.
How I built it
Came across the
Flutter
technology recently and since then was caught up with it. We are
amateurs
and this is our first big step upfront on solving the problem with it.
We have approached our problem with Flutter, which makes the app run natively on all platforms. The UI is made with the help of Google's Material UI. The video call runs seamlessly with Agora as the backend. The feedback and assist features are done with the help of Wiredash, which instantly relays the messages end users provide.
We would like to thank our sponsor echo-AR, which helped us integrate AR seamlessly with our app.
CoachAlly is a lightweight app available across various platforms:
Mobile - iOS, Android
Desktop - macOS, Windows, Linux
Web - all browsers
Challenges I ran into
We came across many challenges, as this is our first big approach using Flutter. We thank the mentors who took the time to help us. Students get insights on concepts and a better understanding with AR, and I am proud to be a part of contributing to the global community.
Accomplishments that I'm proud of
We are very proud that the big leap we dared to attempt came out as a bug-free working app in a short span of hours. We have learned many skills since the start of the hack, and we learned to face the challenge of short days to give the best outcome for our app.
What's next for CoachAlly -Interactive Virtual Classroom & Video Calling app
We aim to increase security, add feature-rich content, and make our app more accessible to all age groups. We plan to improve our app consistently for the best end-user satisfaction.
Built With
agora
ar
cupertino-ios
dart
echoar
flutter
materialui
Try it out
github.com | CoachAlly -Interactive AR Virtual Classroom & Video Call app | CoachAlly application helps in creating interactive virtual classrooms using the latest technologies like Augmented Reality and creates room for the virtual classroom through high-quality video calls. | ['Sudir Krishnaa RS'] | [] | ['agora', 'ar', 'cupertino-ios', 'dart', 'echoar', 'flutter', 'materialui'] | 14 |
10,352 | https://devpost.com/software/self-aware | Medi-Box 3D Designing
Medi-Book Web Application Interface
Medi-Box
After 3D Printing, Final Design
hardware
Medi-Box
Doctors List on Medi-Book
Medi-Box
hardware
this picture shows our software with hardware and mobile application
working with ecg module
working
Inspiration
I have seen many people who live in remote areas, and people who move from one city to another on job postings. These people don't know about the availability of hospitals, clinics, medical shops, and verified doctors near them. So, we have developed a platform where people can easily connect with verified doctors in their area by searching for doctors on our platform based on location.
People in remote areas, and even big-city people, don't know about the latest medical schemes provided by the government, so they can't make use of these very crucial schemes. Our project will make all patients aware of government medical schemes along with their eligibility criteria.
There are also many people who are handicapped and face difficulty going to hospitals for regular checkups of basic body parameters. Our project has an IoT-based box (a wellness device) that helps patients take normal body-parameter readings in their own home and share the readings with a doctor.
What it does
This project is for all those people who live in remote areas, valleys, and hills, and for those who often move from one city to another because of business meetings and other things. All these people do not know about the availability of doctors, hospitals, clinics, and medical shops near them. Even in the case of COVID-19, this software is the best way to search for doctors, hospitals, medical shops, and clinics nearby. On MEDI-BOOK, a patient can search for doctors based on the selected location and the doctors' specialization. A major advantage of this web application is that people can see government-provided medical schemes very easily; this feature is not available in any existing project. The software also has a chat system through which a patient can send their symptoms, previous medical reports, and readings from MEDI-BOX to a selected doctor in any country, and the doctor from their end can prescribe for the patient very easily. Patients can view their MEDI-BOX readings in this software and can book appointments with any doctor. One of the major features of MEDI-BOOK is live tracking of COVID-19 cases and news, and every time a new case occurs in the patient's area, he/she will get a notification automatically. Looking at the larger picture, this software is going to help the world a lot if we launch it.
With this, we have a wellness device, "THE MEDI-BOX", a small box that can be connected to an Android application and measures human body parameters including "BODY TEMPERATURE, PULSE RATE, ECG, HEART BEAT", as well as "LIVE READINGS OF POLLUTION, AREA TEMPERATURE and HUMIDITY" for the area in which the patient is currently staying, to check whether the current environment is suitable for the patient. This box is easy to carry. All the readings are automatically sent to the cloud, the MEDI-BOX mobile application, and the MEDI-BOOK software, and these details are shared with the doctor. We are now working to convert this box into a wearable band.
How I built it
It is built using basic programming and backend languages. I used the ThingSpeak cloud for Medi-Box data storage and MySQL for Medi-Book data storage.
Challenges I ran into
Sending real-time data to the cloud was a challenge, but I made it work.
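Pushing sensor readings to ThingSpeak boils down to hitting its channel-update endpoint with a write API key and numbered fields. Here is a minimal sketch of building that request URL; the key value and the field-to-sensor mapping are illustrative assumptions, and actually sending the request (from the ESP8266 or otherwise) is left out.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, readings):
    """Build a ThingSpeak channel-update URL for sensor readings.

    `readings` maps ThingSpeak field numbers to values, e.g.
    {1: body_temp, 2: pulse_rate}. The write API key comes from the
    channel's settings page.
    """
    params = {"api_key": api_key}
    # ThingSpeak expects parameters named field1..field8.
    for field, value in sorted(readings.items()):
        params[f"field{field}"] = value
    return f"{THINGSPEAK_UPDATE}?{urlencode(params)}"
```

A GET request to the resulting URL updates the channel; note ThingSpeak's free tier rate-limits updates to roughly one every 15 seconds.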
Accomplishments that I'm proud of
Patients now will be aware about medical schemes which they can use for their own welfare.
Patients can easily connect with verified doctors near their area.
Patients can have their wellness checking at their own home very easily.
What I learned
How to gather all data and use of web scraping also.
What's next for Self Aware
We will keep working on it; we are turning the Medi-Box into a wearable band and adding more functionality to it.
Built With
3d-designing
3dprinting
android-studio
arduino
bootstrap
css3
dht11
ecg-module
esp8266
firebase
google-maps
html5
java
javascript
jquery
lm35
mit-app-inventor
mq135
mysql
php
thingspeak
Try it out
drive.google.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com | Self Aware | The project is for the remote areas people and handicapped people who faced difficulty to go hospital/clinic for regular treatment. This project made simple for them to connect with doctors from home. | ['Rishabh Gupta', 'VIVEK CHHABRA', 'Amit Goyal', 'Rajneesh chaturvedi'] | [] | ['3d-designing', '3dprinting', 'android-studio', 'arduino', 'bootstrap', 'css3', 'dht11', 'ecg-module', 'esp8266', 'firebase', 'google-maps', 'html5', 'java', 'javascript', 'jquery', 'lm35', 'mit-app-inventor', 'mq135', 'mysql', 'php', 'thingspeak'] | 15 |
10,352 | https://devpost.com/software/kids-collection | Games
What's next for Kids Collection
Bold
italics
Built With
games | Kids Collection | Website that can kids use it. Or kind of games learning that can use. | ['Myma joy Gomez'] | [] | ['games'] | 16 |
10,352 | https://devpost.com/software/visual-cam | Visual Cam
How it was made
How it was made
Inspiration
One day, we were perusing YouTube looking for an idea for our school's science fair. That day, we came across a blind YouTuber named Tommy Edison, who had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy with daily navigation, so we created Visual Cam AI.
What it does
In essence, Visual Cam AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.
How I built it
We started out by gathering our own dataset of 200+ images of crosswalk lights because there was no existing library of such images. We then ran through many iterations of many different models, training each model on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed, and eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This made our prototype a wearable device.
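The relay step, mapping the detector's output (light state and position) to a haptic cue, could be sketched like this. This is an illustration only: the label names ("walk"/"stop"), the confidence threshold, and the detection tuple shape are assumptions, not the project's actual post-processing code, and driving the GPIO vibration motors is omitted.

```python
def motor_command(detections, frame_width):
    """Map crosswalk-light detections to a vibration-motor command.

    `detections` is a list of (label, confidence, x_center) tuples, as
    might come from an SSD model's post-processed output. 'left'/'right'
    steer the user toward the light; 'go'/'stop' signal its state.
    """
    # Keep only confident detections, then take the strongest one.
    best = max((d for d in detections if d[1] >= 0.5),
               key=lambda d: d[1], default=None)
    if best is None:
        return "none"
    label, _, x = best
    third = frame_width / 3
    # If the light is off-center, steer the user to face it first.
    if x < third:
        return "left"
    if x > 2 * third:
        return "right"
    return "go" if label == "walk" else "stop"
```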
Challenges I ran into
When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying stuff out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labeling our data set of 200+ images. Although, our most important challenge was not knowing what it's like to be visually impaired. To overcome this, we had to go out to people in the blind community and talk to them so that we could properly understand the problem and create a good solution.
Accomplishments that I'm proud of
-Making our first working model that could tell the difference between a stop and go
-Getting the haptic feedback implementation to work with the Raspberry Pi
-When we first tested the device and successfully crossed the street
-When we presented our work at TensorFlow World 2019
-All of these milestones made us very proud because we are progressing towards something that could really help people in the world.
What I learned
Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.
What's next for Visual Cam
We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.
Built With
3dprinting
colab
google-cloud
machine-learning
opencv
python
raspberry-pi
ssd
tensorflow
tinkercad | Visual Cam | Visual Cam AI is a device used to help visually challenged people to crossroads visual Cam uses object detection to detect both the position and state of a crosswalk light stop hand or walking person | ['Tom Mathew Jose'] | [] | ['3dprinting', 'colab', 'google-cloud', 'machine-learning', 'opencv', 'python', 'raspberry-pi', 'ssd', 'tensorflow', 'tinkercad'] | 17 |
10,354 | https://devpost.com/software/face-it-by-intellect-designs | 3D Model of Headband
3D Model of Face Shield
3D Model of "Face It"
3D Model of "Face It" seperated
Prototype Model - "Face It"
Prototype Model - "Face It" rolled up next to the model cylindrical packaging
Prototype Model - "Face It" rolled up and placed in model cylindrical packaging
Prototype Model - "Face It" rolled up and placed in model cylindrical packaging
Prototype Model - "Face It" Packaging
Abstract/Inspiration
During this epidemic, the need and demand for PPE has increased. Masks are now a requirement in most public places, both in the US and worldwide. Masks have been shown to slow the spread of the virus, but there are some obvious disadvantages. Masks cover the majority of one's face, which removes nonverbal facial cues as a form of communication, especially in schools and in the workplace. This is particularly hard for nonverbal communicators. In addition, people touch their faces while donning masks, which tends to decrease their efficacy. Glasses fog up, and most masks do not fit most faces. Face shields, however, allow the user to protect their face from particulates while still ensuring that their face is seen. This can be a different alternative to a cloth or homemade mask, especially when in close contact with others. But face shields are bulky and hard to carry around. "Face It" combines portability and flexibility so the average person is able to carry a face shield with them throughout the day. The headband is engineered for flexibility, sturdiness, and adjustability. Our "roll up" technology turns a bulky system into a lightweight cylinder the size of your average water bottle.
What it does?
"Face It" does what all face shields do, which is protect the user from microbes and particulates, especially when in close contact with others. What makes our face shield different:
Flexible design enables it to be rolled up into a cylinder the size of a water bottle
Adjustable headband for all head shapes and sizes
Can be sterilized and cleaned for continuous use
How I built it
The 3D models were designed in SolidWorks. The prototype model was built using a Monoprice Select Mini 2 3D printer using TPU for the Headband and PLA for the packaging. We used Powerpoint to create a presentation and narrate over the slides as well as Google Drive to share information, pictures, videos, and SolidWork files.
Challenges I ran into
The main challenge was designing the cushioning portion of "Face It". The 3D model would not render the piece correctly on the surface of the adjustable headband due to its slanted and curved surface; this is where the bulk of the time was used. Another challenge was that the 3D printer needed to be calibrated for a specific type of material to successfully print the prototype (no cracking or gaps).
Accomplishments that I'm proud of
We are proud of the prototype we were able to print especially under the time constraints. Unfortunately, we did not have the full two days to do this challenge due to personal circumstances, but we are excited to know that we finished and were able to successfully 3D print a prototype.
What I learned
We learned that designing a product has its ups and downs. There are certain issues you cannot plan for, and the time crunch exacerbated some problems. We learned that SolidWorks can be picky and may not let you build on slanted and angled surfaces; this is in reference to the cushion portion of the product. We learned that material changes in 3D printers can be problematic if the printer is not calibrated correctly. We learned that there comes a time when you have to stick with an idea and not continuously change it as you build, or you will never finish building it. We learned there is power in working to everyone's strengths on the team so that each person can work simultaneously to bring the product pieces together.
What's next for "Face It" by Intellect Designs
We believe this is a simple but useful product. As schools and businesses open up, this can be another layer of protection for the everyday citizen. The headband can be manufactured in different colors or even with designs, especially for children. The carrying case can be turned into a pocketbook/handbag, or a strap can be added to the packaging. The headband design itself gives the face shield its flexibility, so it can be patented and possibly licensed.
Built With
google-drive
monoprice-select-mini-v2
powerpoint
solidworks
youtube
Try it out
drive.google.com | "Face It" by Intellect Designs | Tired of those bulky face shields you have to carry around? “Face It” is an adjustable and "rollable" face shield. It can fit in most bags as it is only the size of your average water bottle. | ['Shena Marshall', 'Shaneil Da Silva', 'Ngozi Okonkwo'] | [] | ['google-drive', 'monoprice-select-mini-v2', 'powerpoint', 'solidworks', 'youtube'] | 0 |
10,354 | https://devpost.com/software/hygea | try it
Web app
Subscription Selection
Check out and discount reminder
UV germicidal chamber concept (SolidWorks)
Blueprints of the UV germicidal chamber
What is Hygea
Hygea is a subscription-based service, where users will sign up to receive boxes of PPE gear, each delivery containing a Biohazard appropriate disposal bag. The used PPE gear is placed in this bag and collected with each new delivery to be sanitized and appropriately recycled, in return for discounts for the user.
Our inspiration
We were inspired to tackle the subject of safely disposing of/recycling PPE based on our research, which indicated there are unique problems faced in this area. As a potential biohazard, PPE cannot be recycled normally, and with the pandemic PPE usage is way up. These both contribute to vast quantities of PPE gear making its way into the environment and polluting it. We wanted to create a sustainable solution to help bring gear where it's needed and help protect the environment at the same time.
How We built it.
We built our idea from the ground up, tackling the unsafe disposal of PPE. Using the link below, you can follow the flow of our ideas. Each team member came up with individual ideas, and then we combined several suggestions to arrive at the current idea. Using a flow chart and a visual board, we were able to collaborate on choosing which method worked best for our project.
https://miro.com/app/board/o9J_kuyug7c=/
Challenges We ran into.
Time zones were a challenge, as different members of the team lived in other parts of the world. Their knowledge was certainly helpful; however, the team had to account for the amount of time spent on the project per session and section to allow for input from every member.
Another challenge we worked through was different ideas/combining ideas. Due to the diversity of the group in knowledge and expertise, there were lots of different ideas that were really good. To prevent "group think", each member with an idea gave a presentation of their ideas to the group and even combined some ideas resulting in a stronger solution for this project.
Accomplishments that we are proud of.
As a team, we were able to come up with a solution for safely disposing of and recycling PPE based on our research, data, and teamwork. We are so proud of the accomplishment of working together as a team, respecting each other's expertise. No one felt shamed or attacked for their suggestions or ideas, as we kept open, deep lines of communication on each segment of the project.
In the initial research portion of this phase.
we personally learned quite a bit about the biohazards of used PPE, and the problems this presents to the effort to go green. Used PPE brings contamination with it, endangering the environment and posing a huge threat to future generations.
If all goes well.
we believe that Hygea is a valid business model and a great value-add to individuals and the global community at large, and we have hopes of continuing to grow it as a company beyond this hackathon.
Recycling and repurposing
An important aside: only 15% of day-to-day disposed PPE is contaminated.
The technologies for virus eradication on objects have already been invented, such as UVC lights and pherclorhide vapors. The idea is to take these technologies and implement them in fully automated processes down the line (this is where the UV germicidal chamber concept got its inspiration).
The biggest challenge for this service
Our biggest challenge is to compete against plastic products. Through eco-responsibility and fashion we hope to find our niche, thrive, and maybe someday fully dethrone plastic, which is going to take several fully automatic processes and green electricity... it can be done; it's just going to be slow.
Built With
figma
google-suite
miro
solidworks
Try it out
www.figma.com
drive.google.com | Hygea | A subscription based PPE delivery service with a focus on sustainability. Used PPE is recollected to be sanitized and correctly recycled, incentivized for the user with future discounts. | ['Krishu Agrawal', 'jorge Cornejo', 'Warren Yao', 'sam rey', 'Joey Sbarro'] | [] | ['figma', 'google-suite', 'miro', 'solidworks'] | 1 |
10,354 | https://devpost.com/software/clearfit-mask-68snx0 | The Clear Solution: ClearFit Masks
Inspiration
We, the ClearFit Mask team, were inspired to take on this project due to there initially being a lack of facemasks and a desire to make them more accessible. Upon further investigation, we saw that these standard masks do not provide users with comfort or the ability to read facial expressions.
What it does
To solve these problems, we decided to make clear masks custom fit to the users' faces.
How it was built
After many initial designs and extensive research on current masks on the market, we came up with a unique design which would be the most effective and impactful in tackling these issues, and built 3D models in SolidWorks. We also created designs for future iterations that would be great for team members to spend more time on in the future.
Challenges we ran into
It was a challenge to combine the two ideas to make this project possible, since we didn't have a subject matter expert on exporting a face scan to a 3D printable image. We also ran into the challenge of not being able to brainstorm in the same room. The benefit of typical hackathons is that the whole team is in the same room for a few days, able to brainstorm and work together; due to COVID-19 we were unable to do that, but video chatting and other methods of communicating helped to keep the team united throughout the project.
Accomplishments that we are proud of
Communication, teamwork, and dedication to making an impact. We were able to overcome the challenge of working on this project virtually. Our team was also able to highlight each team member's strengths and grow together.
What we learned
In doing this hackathon, we were able to come up with so many creative ideas for new face masks, and we are excited to pitch our ideas and expand upon our current design to make it more accessible and marketable.
What's next for ClearFit Mask
We are working on obtaining FDA approval and adding new features such as designing a small fan and adding a mountable face shield. | ClearFit Mask | Facemasks have become essential in saving lives. Those on the market do not provide good fit, are uncomfortable, illicit a wide range of emotions. ClearFit Mask team solves these issues for all ages. | ['Bhavik Vashi', 'Kevin Lu', 'Reema Dawar', 'Fatima Kazmi', 'Ria Dawar', 'Ommer Khaw', 'Roopa Jagadish'] | [] | [] | 2 |
10,354 | https://devpost.com/software/3d-health-ppe | SaniSafe
SaniSafe
Disinfect. Defend. Daily.
SaniSafe is an innovative, 3-in-1 disinfection device for PPE masks, gloves, face shields, and gowns, and more—you can use it at home, at work, and at play. SaniSafe provides a way to re-use PPE kits at least once or twice, helping in reducing the demand for such equipment.
Fighting COVID-19
Why UV?
UV light is known for being a highly effective disinfectant, especially UV-C with its short wavelength. With the current COVID-19 pandemic, the need for effective, reliable disinfecting solutions is more urgent than ever. Seeing the depletion in the worldwide stock of safety equipment due to an exponential increase in demand due to the pandemic has created the need for a method or a device that can make re-using this equipment possible. The idea for SaniSafe emerged while observing these needs.
Key Features
SaniSafe has a hollow cuboidal structure which can be ideally made from ABS or Polylactic Acid polymer, made from biological materials such as cornstarch or sugarcane. A number of UV lamps are attached in it to provide 360° UV-C exposure. SaniSafe also provides an option of disinfectant spray, which is not yet offered in any other competitor's concepts. The device also has a laser sensor-enabled liquid hand sanitizer dispenser attached to it.
Product Demo
link
Challenges
The main challenge we faced was in designing our project, as there was no other reference available. In the end, we created a system that would bring down expenditure on PPE and face masks by 60%.
Next Steps
We aim to make our device more autonomous and user friendly in the next prototype.
Built With
catia | SaniSafe | Disinfect. Defend. Daily. | A 3-in-1 disinfection device for PPE and more, enabling safe reuse | ['Rachael Deng', 'Anant Singh Gambhir', 'Maninder Singh', 'Rajat Sharma'] | [] | ['catia'] | 3 |
10,354 | https://devpost.com/software/sustainable-face-shield-ea3z6c | Inspiration
Apple Inc.'s face shield style inspired me because I thought it was simple, easily constructed, and suitable for fast batch production.
How I built it
We built this project using drawing software.
Challenges I ran into
Choosing materials was our challenge, because it is difficult to find the perfect one, and we had to consider the trade-offs.
What I learned
We learned the properties and prices of some plastics, like PET, PLA, and so on. We also learned how to make a 3D model. Last but not least, we learned more about 3D printing.
10,354 | https://devpost.com/software/mymask | Top of Home Page
Why we stand out
Find nearby makers (not yet completed)
Our team!
Explanation of how to get a mask frame
Our form (with the image upload)
STL File of mask frame
Required Information:
Team Captain - Pranav Teegavarapu
Team Members:
Pranav Teegavarapu
Benjamin Smith
Abstract of the project
MyMask is a web application where a user can upload a picture of their face and they are returned with a mask frame that is personalized for them. We chose to focus on mask frames as they uniquely solve the issues that people are facing with conventional facemasks: they make for a more snug fit, make the mask more durable, and further reduce the risk of exposure to COVID-19. In order for a mask frame to work, it must be a very good fit to one's face; otherwise, it'll do more harm than good (it'll make one extremely uncomfortable). In order to get extremely high accuracy, we applied state-of-the-art computer vision models to perform facial landmarking on one's face, allowing us to match them with a mask frame that fits.
The hackathon category
Day-to-day PPE: We modified the typical design of a facemask to maximize comfort and wearability (by making it a more snug fit), along with fixing current issues in supplying (and delivering) 3D printed mask frames to those who need them. Our product is over 3 times less expensive to produce when compared to typical 3D printed face masks.
Tools used to build the project
The frontend of our web app is built in HTML/CSS/JS, and we used Mobirise to give us a basic template for how we wanted it to look. Our backend (3D model selection and facial landmarking) was done in Python, using OpenCV and dlib for computer vision. We also used the Flask framework to create a REST API, which connected our frontend and backend.
Link to Video
Inspiration
Earlier this week, I saw a news article about the concept of 3D printed mask frames. I was really intrigued by this potential solution to the global PPE shortage, and I wanted to use this opportunity to explore their potential in "upgrading PPE", and I was able to do so!!
Challenges we ran into:
We initially wanted to try using Volumetric Regression Networks (VRNs) to reconstruct a 3D model of one's face, and to programmatically create a mask frame for that 3D model. We were able to build off of Microsoft Research's implementation of a VRN, and we ended up successfully creating an API which converted an image into a 3D model. However, we were unable to process this in our code, as the 3D model was actually a mesh of points that had no width (like a sheet of paper folded in the shape of a face; while it looked like a 3D model, we couldn't programmatically create a mask frame from it). We spent over a day dealing with this, and ended up having to switch to facial landmarking, due to the limits of the technologies we used.
Our API initially had a latency of close to a minute, due to the amount of time spent on processing the image. In the end, we were able to significantly cut this time by resizing the image and converting it to grayscale.
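The two optimizations the team names (resizing and grayscale conversion) help because they shrink the number of values the detector must touch. A stdlib-only sketch of the idea follows; real code would use OpenCV's `cv2.resize` and `cv2.cvtColor`.

```python
# Stdlib-only sketch of the latency fix described above: downscale the image
# and drop color before detection, so far fewer values reach the expensive
# landmarking step. Real code would use cv2.resize and cv2.cvtColor.

def preprocess(image, scale=2):
    """image: rows of (r, g, b) tuples. Returns a downscaled grayscale image."""
    small = [row[::scale] for row in image[::scale]]  # naive nearest-neighbor downscale
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in small]

# A 4x4 RGB "image" (48 numbers) becomes a 2x2 grayscale one (4 numbers).
img = [[(200, 100, 50)] * 4 for _ in range(4)]
print(preprocess(img))
```

Halving each dimension alone cuts the pixel count 4x, and dropping the color channels cuts the remaining values 3x, which is consistent with the order-of-magnitude latency reduction the team reports.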
What we're proud of
We persevered through not being able to implement a VRN, and I'm really proud that we didn't quit after this, and that we made it through!!
We were able to build a production ready web app during this weekend, and I'm really proud of what we made!!
We're really proud of how we were able to optimize our code (when creating our API) to minimize latency, and I find it awesome how we were able to minimize it to under 10 seconds!!
What's Next for MyMask
We plan on deploying our website as soon as possible, and we hope to get feedback on our project from the community (hopefully through this hackathon). From there, we hope to reach out to online communities of 3D printing enthusiasts to try to get them to try implementing our idea.
Built With
c#
css3
dlib
dotnet
fomantic-ui
html5
javascript
opencv
python | MyMask | Sustainable, Personalized Face Mask Frames built via Computer Vision | ['Benjamin Smith'] | [] | ['c#', 'css3', 'dlib', 'dotnet', 'fomantic-ui', 'html5', 'javascript', 'opencv', 'python'] | 5 |
10,354 | https://devpost.com/software/project-momo | MoMo Lab Design
Inspiration
COVID-19 illuminated glaring systemic problems and disrupted the social, physical, and mental health of our people and planet.
Seeing our friends, family, colleagues, and patients suffer due to consequences of COVID-19 engendered an urgent, irrepressible and growing need to plan for, mitigate, and adapt in order to protect our people and planet.
To do so forced us to examine the factors involved in this crisis and to consider what we, humanity, can do to lessen suffering.
We recognized that we must bridge silos across sectors. This is the only way we can transform the legacy of this Anthropocene age from tragic and destructive to collaborative, humanitarian, intelligent and forward-thinking.
Initially, this can start as a collaboration between experts in healthcare, science, technology, economics, design, environmental sustainability and engineering.
COVID-19 is merely a preview of what the future will be like if we choose to continue along our current trajectory unabated. However, it also shows us that we can make drastic and sudden changes when forced to. Let's not sit and wait passively for our next tragedy to strike. Let's utilize our collective intelligence across sectors and plan for and mitigate tragedy instead of allowing ourselves, our loved ones, and our children to continue to succumb to it. Let's build a better tomorrow...today. If not for humanity, then why?
What it does
It is a self-contained, deployable, sustainable, mini-factory that serves to build PPE and medical equipment anywhere in the world, on demand.
How we built it
We saw that it was possible for someone with no background in engineering, technology, design, or economics to learn to print effective PPE.
We extrapolated on that fact and made our design modular, sustainable, and deployable.
Challenges we ran into
Idealism versus pragmatism.
Initial differences regarding our goals.
Knowing what level of detail we can realistically go into over a limited time frame.
Perfect versus good enough with scalability in mind for the long-term.
Communication.
Role clarification.
Overlapping on parts of our submission and subsequent inefficiency and conflict.
Accomplishments that we're proud of
We merged idealism with pragmatism.
We overcame initial differences regarding our goals.
We clarified and refined our goals for this project and for future ones.
We improved our triage skills so that enough details were included to make a concept feasible while simultaneously permitting evolution of the initial concept.
We overcame individual differences so that an important idea could be realized.
We overcame initial naïvetés in regard to each of our professions.
We learned the importance of clarifying tasks to prevent inefficiency and conflict.
Teamwork and compromise make for feasibility.
End product is a real solution to an imminent, critical problem and has the potential to have a significant impact on our present and future.
What we learned
How to merge idealism with pragmatism.
All team members must be on the same page regarding our goals, and this requires compromise.
Clarification of roles and responsibilities are key and prevent inefficiency and miscommunication.
Enough details should be included to make a concept feasible while simultaneously permitting evolution of the initial concept.
To collaborate to build something for which there is a desperate and growing need.
How individuals across different sectors can work together and overcome initial misunderstandings and disagreements.
How to work effectively as a team while harnessing each of our unique skill sets and recognizing the value each member provides.
Perfect versus good enough with scalability in mind for the long-term.
Together, we can build a better future!
What's next for MoMo Lab
Cost estimates for leasing or buying.
Further refinement of concept and feasibility of design.
Additional considerations regarding certifications of technicians, standard operating procedures.
Capital to launch this venture.
Built With
powerpoint
solidworks
zoom | MoMo Lab | Deployable, modular mini-factories that will sustainably produce medical PPE wherever needed. MoMo Lab: Let's build a better tomorrow...today. | ['Kristina .', 'Lydia Bargielski', 'Yan Badiolle', 'Houston Wade'] | [] | ['powerpoint', 'solidworks', 'zoom'] | 6 |
10,354 | https://devpost.com/software/day-to-day-ppe-s16zgj | Basic layout of PPE
Inspiration
The COVID-19 pandemic is the main inspiration behind our project. The worldwide shortage of PPE led us to produce a face shield and think about this idea.
What it does
Provides safety
How I built it
With household material
Challenges I ran into
Needed to be prepared in short time
Accomplishments that I'm proud of
Comfortable and safe PPE is made
What I learned
Technical and social aspect of PPE
What's next for Day to day ppe
Designing, 3D printing and production | Day to day ppe | Day to day PPE | ['shikshya gautam'] | [] | [] | 7 |
10,354 | https://devpost.com/software/ppe-for-school | Inspiration
The coronavirus global pandemic is a dark illustration of a popular definition of economics: the study of human decision making under conditions of scarcity.
The first wave of scarcity-based worry focused on ventilators. Currently, a new wave of worry is on the availability of PPE (Personal Protective Equipment) needed both by frontline workers and the population at large.
While some projects (such as
https://getusppe.org/
) have addressed shortages in PPE among healthcare workers, there is less attention on shortages and lack of access to quality PPE for the rest of the population.
Of particular concern are school-aged children (K-12). Many states are trying to reopen schools, and when this happens it is inevitable that this will lead to some increase in transmission of the coronavirus. This is an issue we as a society need to address proactively, not reactively: in addition to preventing a rise in cases of Multisystem Inflammatory Syndrome in Children (MIS-C) associated with coronavirus infection, there is a great risk in children serving as unwitting, asymptomatic carriers of the infection to their teachers and school staff, friends, and parents and other family members.
Access to PPE in schools is absolutely critical to mitigating the spread of coronavirus through our school systems and suppressing future waves of future outbreaks. To address this problem and the urgent need of many schools that may be unable to purchase their own effective PPE and may serve the most vulnerable populations, we have created a prototype web-app to streamline the process of finding and fulfilling school PPE needs. It is a simple window between schools in need and donors with resources to fill that need.
As schools are instructed to reopen in the fall, many schools are unequipped to provide safe learning environments for their students and faculty. Many school districts across the US have already undergone rounds of budget cuts in recent years, forcing them to make hard financial decisions as they prepare themselves to welcome their student body back into their halls again this year. We believe that many schools cannot afford to purchase face masks a la carte, in bulk, from online retailers such as Amazon. They need our support, and we believe our solution can help address that need gap.
What it does
Our web application acts as a donation marketplace, where schools can register as users and list their demand for PPE. Providers can access the website as visitors and can be put into contact with schools to coordinate a drop off or mail in. The schools are listed in order of need but also can be filtered by student body age (K-5 vs. high school). Providers can range from retailers to manufacturers to business owners to medical centers, anyone who is understanding and sympathetic to the budget crisis schools will be facing in the upcoming fall.
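As a sketch of the marketplace logic just described (schools ordered by need, filterable by student-body age), the following is purely illustrative. It is not the team's React/Node code, and every field name and figure is invented.

```python
# Illustrative sketch (not the team's React/Node code) of the listing logic
# described above: schools ordered by need, optionally filtered by age group.
# All field names and figures are invented.

def list_schools(schools, age_group=None):
    rows = [s for s in schools if age_group is None or s["age_group"] == age_group]
    return sorted(rows, key=lambda s: s["masks_needed"], reverse=True)

schools = [
    {"name": "Lincoln Elementary", "age_group": "K-5",  "masks_needed": 500},
    {"name": "Westside High",      "age_group": "9-12", "masks_needed": 1200},
    {"name": "Oak Grove K-8",      "age_group": "K-5",  "masks_needed": 300},
]
print(list_schools(schools)[0]["name"])                 # school with the greatest need first
print([s["name"] for s in list_schools(schools, "K-5")])  # K-5 schools only, still by need
```

Sorting by need before filtering by age group keeps the "most urgent first" ordering intact in every view, which matches the description of the donor-facing listing.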
How we built it
We first went to the Figma drawing board and wireframed our app. Then we split into two groups: three in charge of the frontend and one in charge of user authentication in the backend. Our tech stack is React/Redux JS and Material UI in the frontend and node.js in the backend.
Challenges we ran into
Coordinating and delegating responsibilities was difficult at first as it took a second to get everyone on the same page. Once we finished the ideation stage, however, things went much more smoothly. We took advantage of Slack and Google Hangouts to coordinate.
Accomplishments that we're proud of
Being able to build a full-stack web application in two days is incredible, and I can speak for my team when I say we are so proud of ourselves for pulling it together while working remotely.
What we learned
We learned that the ideation/brainstorming stage is often the biggest time sink and it's important to delegate a good amount of time discussing amongst each other in the beginning over video call to quickly assign responsibilities.
What's next for PPE for Schools
We would love to expand it to include two user journeys (schools and providers) so that providers can track the impact they've had on schools in a quantifiable and rewarding way.
Business Model
Basic Questions
What does our app do?
It allows schools to sign up as users and publicly display their information and needs regarding PPE. It creates a national registry of schools seeking donations of PPE that can be used by potential donors/suppliers to identify schools most in need, contact them, and initiate a donation or transaction for lifesaving PPE by schools.
What are the app’s strengths?
It is easy for a new user/visitor to understand the purpose of the app upon reaching the home page
It is easy to use and scales based on user engagement.
It is not a paid (download) app so there is no barrier to entry for new users
What are the app’s weaknesses?
The MVP of the app is mostly a database but does not immediately demonstrate a clear value-add for donors besides making information available. If a donor found a school through our platform, then contacted them, and forgot about our platform, there would potentially be nothing to show that our platform enabled/facilitated/made possible the transaction/donation.
GoFundMe Business Model
GoFundMe is now funded largely by donations — users are presented with a voluntary option at the end of a transaction to send a few extra dollars to the site.
Verdict: Good.
Socially acceptable, debatable profitability given that GoFundMe gains a lot of donations due to already having a very positive reputation.
Amazon Transaction Fee Model
Charge a flat middleman fee for transactions over the platform, but this would require integrating Stripe or some other payment system probably…
Verdict: reasonable, but needs work.
if we can charge the donator (a large company) a reasonable flat fee, this would be reasonable.
But we need to have some sort of value add that makes this an attractive option for would-be donators. Ideally the app should make it very easy for the donator to: find a school in need (ideally in their region), connect with the school, make the donation, use this donation as a positive PR event.
Subscription model: charge users (schools) a monthly or yearly fee for signing up.
Verdict: unacceptable both business-wise and socially.
Schools are in need, which makes this business model morally unattractive. It gives the appearance of preying on the needy at the same time as asking others (donors) to be charitable. Hypocritical.
Schools are not likely to pay for an unproven service, and the current webapp is unlikely to demonstrate enough of a value add to convince schools the money is worth it (does the app host the transaction, or facilitate it through a third party platform like stripe, paypal? Does it promise donations to the school?).
Note on Subscription Model
this could work if we allowed donors to sign up as users. This is an important goal for the app in the longer term since having donors as users would allow us to notify them of opportunities to donate and keep them in the loop. Donors are really the cash cow that make this app relevant; if we don’t have donors, we could get a thousand schools signed up and never match them with anything.
Built With
javascript
node.js
react
redux
router
Try it out
github.com
github.com
docs.google.com | PPE for Schools | A web application that acts as a donation marketplace and connects providers with schools in urgent need of PPE for the upcoming school year to protect their students and faculty. | ['Albert C', 'You Song H', 'Stephanie Zou', 'Theo Carney'] | [] | ['javascript', 'node.js', 'react', 'redux', 'router'] | 8 |
10,354 | https://devpost.com/software/the-germ-eraser-0dcnrz | PLA Prototype
Render 1
Render 2
Render 3
Info graphic 1
Banner Ad
Render 4
UV Light Poster
Inspiration
We were faced with a problem: surfaces aren't safe, but you still need to touch them, so how do you do that safely? The options on the market now seem to get the job done, but there are some big flaws in their design that need to be addressed.
What it does
Our Team has developed a device we call the Germ Eraser. It is a portable UVC Light that the user shines on an unclean surface before touching. We designed it to be about the size of a standard handheld thermometer. The user would simply shine the light on a surface for about 8 seconds to disinfect it.
Under the hood, our build consists of an injection-molded housing made out of ABS or PETG. In our demonstration model, we printed it out of PLA. The electronics consist of a 40-watt UVC excimer lamp, a 3,300 14V LiPo battery, a mechanical time-delay circuit, and an LED cleaning-cycle indicator. We wanted to make sure that this device would work reliably and be cost-effective to manufacture.
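The 8-second cycle can be reasoned about with the standard UV dose relation, dose (mJ/cm²) = irradiance (mW/cm²) × time (s). The irradiance figure below is an assumption for illustration; the project does not state the lamp's irradiance at working distance.

```python
# Back-of-envelope UV-C dose check. Standard relation:
#   dose [mJ/cm^2] = irradiance at the surface [mW/cm^2] * exposure time [s]
# The 5 mW/cm^2 value is an assumed irradiance for illustration only; the
# project does not state the lamp's irradiance at working distance.

def uv_dose(irradiance_mw_cm2, seconds):
    return irradiance_mw_cm2 * seconds

print(uv_dose(5.0, 8))  # mJ/cm^2 delivered over one 8-second cycle
```

Running the dosage tests the team plans would pin down the real surface irradiance and let the timer be tuned from measurements rather than assumptions.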
How I built it
Since we are all industrial design students, we just followed the process we were taught:
Develop a Problem Statement
Brainstorming &Market Research
Ideation
Prototyping
Modeling
Testing
Final Product
We started out by defining our problem, and then we had 4 sketching sessions where we did some form exploration. At the conclusion of each session, we took the best elements of all the forms and used those as a framework for the next round of sketches. After 4 cycles we were able to hone in on a final form that we all liked.
At this point we split the work up, some of us did the 3D modeling while others wrote the script for the video and conducted our research. Slowly as the weekend progressed, we were able to polish up our work into a presentation that we are all proud of!
Challenges I ran into
One of the main challenges was that of size. We wanted the product to be as small as it could be but we still wanted it to be effective. The parameters that we gave ourselves dictated the size of some components and that forced the shape to change a little as the project progressed.
Another issue was that of the electronic timing circuit. None of us are particularly well acquainted with electronics so it was a bit of an uphill battle but we ultimately were able to overcome these little bumps in the road.
Accomplishments that I'm proud of
Even though we were not able to meet in person and share ideas that way, we were able to work and communicate as a team, and we were all surprised at how much we got done in a week. We are truly proud of our little product!
What I learned
We learned what 5 people and a weekend of hard work can do. Usually it takes us about 6-10 weeks during the semester to complete what we did in 2.5 days.
What's next for The Germ Eraser
Hopefully we can get some of the UVC Bulbs and run some tests on some surfaces and fine tune the dosages.
Thanks for the Consideration, We had a Blast!
The Studio Survivors:
Joe Oliveira
Daniel Haines
Kajal Ramrup
Lemmuel Escalona
Matthew Mateo
Built With
3dprinting
industrialdesign
keyshot9
premierpro
solidworks | The Germ Eraser | Erase the Problem! Don't Be Part of It | ['Joseph Oliveira', 'Lemmuel Escalona', 'Matthew Mateo', 'Kajal Ramrup', 'Daniel Haines', 'Daniel Haines'] | [] | ['3dprinting', 'industrialdesign', 'keyshot9', 'premierpro', 'solidworks'] | 9 |
10,354 | https://devpost.com/software/ready-set-wearables | Hand Sanitizer Container
Ready Set Wearables
Door Pull - Open
Door Pull - Closed
Pill Holder
Design Sketches - Natasha
Design Sketches - Liz
Design Sketches - Casey
Inspiration
We took inspiration from the “Everyday Carry Movement.” Everyday carry is about minimalism and functionality. You can’t fit a lot in your pockets, so the items you choose need to be designed to perfectly suit your needs. Whether you’re working out, forgetful, or wearing women’s clothing, sometimes stuffing your pockets just isn’t the best option - if you have pockets at all.
What it does
We are excited to present “Ready Set Wearables” a set of watchband accessories that are ready to help you carry today’s necessities. We will reduce anxiety, increase compliance with CDC health regulations, and save lives by slowing the spread of COVID-19.
Ready Set Wearables enables you to carry essential items like hand sanitizer, a door pull, or emergency medication - right on your wristwatch.
How I built it
How can we carry things when we don’t have a purse or pockets? How can we make that act as convenient as possible? As we sought to solve our problems, we sketched, hacked, made CAD models, 3D printed prototypes, and eventually arrived at a solution that balances wearability, functionality, and popular fashion aesthetics.
Challenges I ran into
Balancing ergonomics and form factor with size and materials that would fit on a watchband.
Accomplishments that I'm proud of
We came together with specific expertise in design, business, and engineering and collaborated seamlessly. We are also proud of the solutions we were able to deliver in such a short period of time and of Liz for making amazing renderings of our team's design. We are excited to continue with this project.
What I learned
We learned to streamline our design process and improved our ability to convey ideas and information as succinctly as possible.
What's next for Ready Set Wearables
We are interested in exploring the process of getting this patented.
Built With
3d-printing
metal
solidworks | Ready Set Wearables | Ready Set Wearables enables you to carry essential items like hand sanitizer, a door pull, or emergency medication - right on your wristwatch. | ['Natasha Dzurny', 'Liz Spencer', 'Casey Walker'] | [] | ['3d-printing', 'metal', 'solidworks'] | 10 |
10,354 | https://devpost.com/software/hands-free-door-handle-attachments-vjbsqd | Inspiration
Our inspiration for this project came after we stumbled upon an article that detailed the lack of hygiene products and the spread of the Covid-19 virus in South Africa. We were shocked at the lack of supplies that were available for the population to use, especially those that were common where we lived, such as hand sanitizers. The lack of essential items throughout Africa drove us to help the continent. Here, in California, our members, after visiting hospitals and orthodontics places, noticed that when people exited, entered, and used facilities such as bathrooms at the centers, they were forced to touch the same door handle to get in and out of the place. This meant that if the door is not sanitized properly the viruses and bacteria on one person’s hands were being transferred to other people, and if one person sneezed into their hands, this same virus would get transferred to others as well! The connection between the situations in Africa and California allowed us to see a dire need in materials that eliminated frequent contact with commonly shared surfaces, such as door handles.
After our initial brainstorming session, we realized that we could not simply just create another form of door handle. This would be too expensive to manufacture, and different door handles would be needed for various needs. Therefore, in order to address this issue, we created universal door handle attachments, which can simply be attached on top of most types of door handles, and utilize only zip ties! Specifically, these designs use one’s arm to operate doors, rather than hands, to prevent the spread of infectious diseases to other people via elimination of a common shared surface.
What it does
In order to address the issue of spreading infections through shared surfaces, we have created universal door handle attachments that can be placed on the top or side of most existing door handles. When a person needs to open a door, they use their arm to push down or to the side of the door handle, depending on the orientation of the door handle. When pulling their arm back, the raised lip on the attachment hooks onto the user’s arm, allowing the person to “grab” the door despite using no hands in the process. The attachment works for push-doors as well, as the attachment forms a platform over the handle itself, allowing one to turn the door handle by pushing down with the side of their arm and pushing the door open. The final product can be attached onto door handles using zip ties only!
How we built it
We designed this product using OnShape and rendered it using Fusion 360. The final physical form is fabricated using a 3D printer and PLA filament. When attaching to door knobs, a few zip ties and (optionally) scissors are needed. The cost for one Armdle is approximately $2.11.
Challenges we ran into
Creating the angled Armdle required much brainstorming and precision, and it was difficult to get the angle right. We examined multiple door handle designs as well as how high door handles were from the bottom of the door in order to find the angle that would be most comfortable for the user. Also, creating a door handle that was aesthetically pleasing was a challenge as well, as we wanted to give it a clean presentation. Finally, making sure that the door handle fit securely and was easy-to-use was difficult as well, as we had to think about the issue from a customer’s standpoint and how a customer would utilize the door handle when they approached it.
Accomplishments that we're proud of
We are proud of being able to create a fully functional design that works and can be directly implemented in businesses and centers in need. It is stable, has a low cost, and can be used on a majority of door handle designs. Also, we had never used a 3D printer before this project, so through it we were able to figure out how to use a 3D printer to print our own designs.
What we learned
From creating this design, we learned about the design process. The necessity of brainstorming was evident to us, as the thorough brainstorming session we completed together allowed us to come up with tangible and creative ideas. Also, we learned about the importance of continuously improving our design, as we went through multiple iterations of the product, with each one becoming better and better as the days passed by. Finally, we learned about trial and error, and that failure will occur no matter what; how we get past these failures and hurdles is what truly matters. During the creation of Armdle, we thoroughly discussed why we failed and how to prevent this in the upcoming days, and this tactic is one that we will implement in our future robotics meetings as well.
In addition to this, we learned about and experimented with more features in OnShape, many of which we had not worked with before. Also, we learned that we can render the design in different lighting and environments, allowing different aspects of the Armdle design to be shown more effectively.
What's next for Armdle
In the near future, our team hopes to spread Armdle to essential centers and businesses that need the door handle attachments in order to help control the Covid-19 pandemic all over the African continent. Specifically, we want to distribute these to local hospitals, as it could help decrease the risk of others receiving viruses on peoples’ hands, and popular locations where there are many visitors a day. In addition, we would like to improve the regular Armdle design to be compatible with spherical door knobs as well.
Built With
autodesk-fusion-360
onshape
Try it out
www.thingiverse.com | Armdle | An innovative, versatile door handle attachment that lessens the risk of COVID-19 transmission by hand through one of the most commonly shared surfaces in modern society: door handles. | ['Riya Bhatia', 'Abeer Bajpai', 'Wenhao Xu', 'Sonal Naik'] | [] | ['autodesk-fusion-360', 'onshape'] | 11 |
10,354 | https://devpost.com/software/shieldu-face-mask-8v2s7w | Rapid Prototype of custom ShieldU PPE face mask
Inspiration:
What it does:
We created SHIELDU, a new proprietary consumer face mask to help prevent the transmission of the coronavirus, for use in private and public spaces and high-traffic areas. Our custom PPE lightweight commercial-quality mask with removable eye shield:
Washable and reusable
Removable Eye-Shield with snap tape
Adjustable sizing with Velcro tape and elastic bands
Sustainable design with lightweight mesh fabric made from recycled Bionic yarn
How I built it: We created a rapid prototype with our own custom pattern based on our illustrated design. We used a sample of the lightweight mesh fabric, snaps, elastic, and velcro and created an in-house prototype.
Challenges I ran into: translating some of our 3D concepts into functional, viable commercial options.
Accomplishments that I'm proud of
Creating new options for PPE facemasks including Bionic recycled fabrics and 3D printed designs.
What I learned
What's next for SHIELDU FACE MASK
LOCAL DOMESTIC PRODUCTION MANUFACTURING OF SHIELDU
LAUNCH OF SHIELDU MASK ON OUR ONLINE SHOP FOR PUBLIC!!
Built With
3d
desinged
face
mask
prototype
render
shield | SHIELDU FACE MASK | Custom PPE Lightweight Commercial quality Masks with removable eye shield | ['Bejan Moers'] | [] | ['3d', 'desinged', 'face', 'mask', 'prototype', 'render', 'shield'] | 12 |
10,354 | https://devpost.com/software/moisture-absorbent-mask | Prototype 2
Prototype 1
JCRMRG 3D Hackathon
To: The Judges of the Competition
From: Sagar Peddanarappagari, Sree Adinarayana Dasari, and Paritosh Padmakumar; Students, Pennsylvania State University
Date: July 11th, 2020
This memo seeks to address the issue of moisture accumulation during the use of face masks or face coverings. The topic of face mask usage has been well-publicized in the American media, as have the issues concerning its usage.
Summary:
Our product aims to reduce breathing resistance due to moisture accumulation on the inner layer of the mask using two prototypes. The first prototype is based on a duck mask, and the second is an advanced version that uses an active filter neck chain.
Product Description:
Prototype 1- The prototype is based on a type of mask called a duck mask (Source 1). The mask is shaped like a duck’s beak, with a horizontal partition through the middle of the mask. Our prototype builds on this design by using two different surfaces on the top and bottom partitions of the mask. The bottom part is based on a synthetic fiber that is designed for the absorption of exhaled moisture. The top part of the mask is made from N95 mask material that will work akin to a normal N95 mask. This could significantly reduce the breathing resistance that is experienced by the mask wearer as the moisture is absorbed by the synthetic material placed on the bottom half of the mask.
Prototype 2- The second prototype is an advanced and far more complex version of the first. A face covering which does not allow any air inflow or outflow is connected to a neck ring that has a filter and fan that allows the user to inhale and exhale clear, safe, and filtered air. The ring will have an inlet and outlet to provide clean air and remove moisture and CO2 laden air. This can be beneficial for use in high-risk areas where there is an increased spread of COVID-19. Additionally, this wearable device can be used during strenuous exercise as normal masks can cause breathing issues during exercise, especially in individuals with underlying conditions (Source 4).
Business and Innovative Value:
The use of masks amongst the general American public has been a controversial topic in recent weeks. According to a recent YouGov survey cited in a BBC article, 73% of the American public uses some form of face covering while in public, compared to countries like Italy and Spain, where mask usage stands at 86% and 83% respectively (Source 2). Some of the objections raised by the opposition include moisture and CO2 accumulation. Our product aims to remove this aspect of face-covering usage.
The implications of this product in a business sense could be potentially disruptive. Individuals who are concerned with moisture accumulation in their face masks could be willing to pay for such products. The COVID-19 crisis will last in some form until the administration of a viable vaccine to achieve herd immunity (Source 3). Thus, masks will remain in demand for a significant period in which significant revenue can be generated.
Conclusion:
If this idea is implemented to the full extent of its potential, it can be a disruptive force in the face-covering market. It can increase the percentage of face mask users and help in the abatement in the spread of COVID-19 cases, which could further reduce the death toll.
Sources:
Source 1 - https://www.oxstreets.org.uk/n95-mask-duck.html
Source 2 - https://www.bbc.com/news/health-51205344
Source 3 - https://newsnetwork.mayoclinic.org/discussion/herd-immunity-and-covid-19-what-you-need-to-know/
Source 4 - https://blogs.bmj.com/bjsm/2020/06/12/should-people-wear-a-face-mask-during-exercise-what-should-clinicians-advise/
Built With
solidworks
Try it out
github.com
pennstateoffice365-my.sharepoint.com | Moisture Absorbent Mask | Active filtration to remove moisture from the mask | ['Sagar Sri', 'Sree Adinarayana Dasari', 'Paritosh Padmakumar'] | [] | ['solidworks'] | 13 |
10,354 | https://devpost.com/software/sustainable-healthcare-supply | Inspiration
My inspiration comes from the idea of a circular economy of materials. This is when most of the materials in a system stay in the system, and very few new materials have to be introduced.
What it does
I enabled a circular economy for face shields in the healthcare setting by substituting traditional materials with more recyclable ones. The proposed materials allow for the Face shields to be made out of completely recycled materials, and they are also fully recyclable. This is complemented by the recycling of nitrile gloves into recycling bins for the face shields in healthcare settings which participate in the proposed circular economy.
How I built it
I decided on using PET for the shield as recycled PET is widely available from plastic bottles (this also diverts plastic bottles from ending up in landfills). It also shares many qualities with available face shields.
I decided on HDPE for the crown of the face shield as it's the most recycled plastic, and it's very sanitary, safe, cheap, and easy to work with.
Recycling bins, along with various other useful furnishings, can be made with composite material from recycled nitrile gloves.
Challenges I ran into
It was hard for me to figure out what to do with the gloves as I originally wanted to substitute the material like I did with the face shields. Unfortunately medical gloves need to have very specific properties associated with the materials they come in (nitrile, latex, pvc, etc...) and there don't seem to be sustainable alternatives to the available materials with the right properties.
Accomplishments that I'm proud of
I'm glad that I could bring the products made from the nitrile gloves back into the equation as complementary products. The glove recycling process enables them to be made into various other things such as lawn chairs, frisbees, and shelving, so hospitals don't have to be the only ones using the resulting products.
What I learned
I learned that pointless disposal is ubiquitous. It seems prevalent throughout every industry, especially the medical one. I learned about all of the other things that hospitals throw out which don't have to go to waste but don't have adequate recycling support. Only 15% of hospital waste is considered hazardous, and the rest could be dealt with much more sustainably.
What's next for Sustainable Healthcare Supply
I looked into medical gowns, and face masks a little. These are both things that are already made of very recyclable material, and are yet more things that don't have adequate support for recycling.
Built With
adobe-dimension
autodesk-fusion-360 | Sustainable Healthcare Supply | My System will improve sustainability in the healthcare setting. Substituting materials in face shields allows for a circular economy. This is supported by recycling bins made from old nitrile gloves. | ['Daniel Haines'] | [] | ['adobe-dimension', 'autodesk-fusion-360'] | 14 |
10,354 | https://devpost.com/software/med-dimensions-reusable-modular-face-shield | Injection Molded - Side View
Injection Molded - Bottom View
Inspiration
With the onset of COVID-19, many healthcare facilities around the world lacked essential personal protective equipment (PPE). This lack of supplies is putting medical professionals at risk and decreases the quality of patient care. Most available PPE is single-use which heavily strains the supply chain, especially with overseas manufacturers having products caught up in customs. This has caused a surge of untested and ineffective alternatives flooding the market, putting users at risk with a false sense of security and allowing respiratory particulate to contact individuals' faces. Seeing this need, our team decided that we would put our design skills, and 3D printing capabilities to use. We work so there no longer needs to be desperate pleas for proper PPE and sufficient supply. We never want to receive another call as we did from a Washington State hospital: a doctor lost to COVID, and the other on a ventilator because there was no PPE, and nowhere to get it. With COVID spreading rapidly through the US, we saw an opportunity to help both clinicians and our citizens. Our inspiration is simple: Nobody should be afraid to go out in public, or receive healthcare, because they don’t have the right protective gear.
What it does
The shield is fully sealed with hydrophobic, closed-cell foam that prevents the ingress of bodily fluids and ultra-small airborne particles, while still remaining comfortable for long periods of use (and yes, we tested that extensively). It is durable, can easily be disinfected after use, is adjustable, and is comfortable for users with different head shapes. We focused on choosing materials that would allow the design to flex and conform to any head it would encounter, yet remain relatively inert so common disinfectants can kill viral or bacterial contaminants without compromising or reacting with the materials at hand. The end result is a shield, designed to ANSI Z87.1 D3 specifications, with an easily exchangeable clear visor and head strap, at a price per use equivalent to or lower than commercially available single-use face shields. To date, we have not found a similarly priced product that offers a reusable design at a price per use competitive with disposable shields. These designs are up to the standards of healthcare facilities, yet are equally affordable and available for public use.
How I built it
There were many iterations, many concepts, and lots of hours printing to get to where we are today. In the truest fashion of iterative design, we would brainstorm, design, print, tweak, and run the whole process again. We scrounged materials from any place that had them while we narrowed our search for just the right combination. We optimized this reusable face shield for multiple manufacturing methods. We started with additive manufacturing, with a focus on FDM, and took special care to optimize the wall thickness to perfectly match nozzle widths so there is no more time wasted on filling in small dead spaces. For injection molding, we created a design that uses minimal material and the least necessary draft angle to drive the price per part as low as possible, and ensure good friction fit for the shield. Organically through this process we came to use ribs to hold the shield in place, eliminating the need for costly and complicated assembly while leaving easy-to-disinfect surfaces where fluid cannot well up and allow viral and bacterial proliferation. The final icing on the cake was the decision to attach our head strap directly to the clear plastic shield, which allowed for easy flat packaging, further driving down the cost of getting the models in users' hands.
Challenges I ran into
There is an unforeseen challenge to innovating at a rapid pace with clinically relevant design in mind; how do you connect with experts who are willing to trial and provide feedback on devices, when they are busy enough fighting a pandemic? The short answer is you just keep making calls until you get in somewhere. Many an hour was spent on the phone collecting ideas, feedback, design constraints, and nice to have features for our devices. We wanted to focus on a design built to withstand a harsh clinical environment, satisfy stringent disinfection protocols, yet be cross-functional for clinical and daily use. With a limited budget, we needed a design to satisfy all users to keep our prospective tooling costs as low as possible. To that end, learning of the many different manufacturing technologies, their limits, and specific design constraints for each was mind-blowing. We can say with absolute certainty, we have learned more during this process than we ever expected to know about the product development cycle and in a very short time.
Accomplishments that I'm proud of
The team is particularly proud of how much we were able to learn and execute on this learning, in such a short time. We could not have done it without the help of this community, our mentors, and those who graciously fielded our cold calls to point us in the right direction. Our work partially reflects blood, sweat, and angry tears at SolidWorks, but to a larger extent the capacity of individuals to help each other in times of need. We can never claim to have mastered the product development/commercialization pipeline, but we have learned much more than we had ever thought possible through the support of the entire community. Our shield may not be a beacon of hope for the world at large, but it has made our group hopeful that we can help those in need through hard work and determination. Regardless of the outcome of our effort (beyond just the scope of the hackathon), we are proud to call ourselves a team and proud of the designs we have formulated.
What I learned
If we hadn’t fully appreciated it from our schooling, we now know: listening to the end-user and translating their concerns into a design is a very difficult process! We constantly found small hiccups or roadblocks, be it a limitation of the manufacturing technology or functionality limits, that we needed to design around. At the end of the day, we found how critical comfort and safety are to the end-user. A robust design that hurt after 3 hours was not an option. Conversely, something that could be worn for hours on end that was ultra-comfortable and lightweight might lack proper safety features. The symbiosis between these two factors would be critical for these designs to be adopted and effective. As we worked through this process these two factors bloomed into a tree of considerations we needed to make. Visual clarity is a must for long term adoption of the device, but thicker sheets of plastic, which tended to be less affected by optical aberrations from bending and manufacturing, are much more expensive to replace frequently. Never forget the issue of fogging and CO2 build up as well. It all came down to a simple constraint; when a shield is scratched it becomes hard to use and can greatly impact the quality of care.
There is much to be said about the design process and manufacturing technologies as well. So much so we will spare everyone the book we could write, but there is a short take away. Communication is key for the process and individuals involved to be successful. At every step of the process, information needs to flow between team members or you might find yourself with a design deviating from customer requirements, or a manufacturing firm quoting you thousands of dollars for a difficult to mold part. Silos will not enhance the speed of iteration and will complicate the path to success. We learned so much more about the process together, than we ever could have pieced together from individual efforts. Collaboration is key!
What's next for Med Dimensions - Reusable & Modular Face Shield
The group is actively perusing future commercialization efforts for these face shields. The group hopes to produce a small production run for validation of the design and further customer feedback. We built, the design to conform to ANSI Z87.1 D3 testing requirements and look to receive industry-recognized certification. We will continue with our scalability analysis and conduct a manufacturing readiness review, to ensure the face shield will have both utility and optimal design for manufacturing. With preliminary sets of assembly instructions and tooling requirements, we will continue to advance our manufacturing, assembly, and distribution networks to drive our BOM and material costs as low as possible, without compromising on the quality of the shield. We will continue to forward our efforts in offering the first reusable face shield at a cost per use equivalent or lower than a single-use disposable shield. | Med Dimensions - Reusable & Modular Face Shield | A modular, reusable, easy to disinfect face shield to prevent the ingress of bodily fluids and ultra-small airborne particles with easy to replace components when worn or scratched | ['Sean Bellefeuille', 'William Byron', 'Jacob Pincus'] | [] | [] | 15 |
10,354 | https://devpost.com/software/fairfield-u-ppe | One of the first models of our shields.
Face shield headbands in UV sanitation cabinets. We use these cabinets to sanitize our products before donating.
Stacks of headbands fresh from the printers, ready to be separated.
Example photo of doctor we donated shields to.
Fairfield University Team
Inspiration
Our team is composed of 3 students from Fairfield University in Connecticut: Lilliana Delmonico ‘20, bioengineering; Evan Fair ‘22, bioengineering; and Andrew Jobson ‘20, computer engineer. Being college students, our lives were drastically transformed when the coronavirus pandemic hit the US. Being either marooned on campus or at home, we found we had more time on our hands. While some spent this time taking up a new hobby or completing unfinished projects, we found a way to put our individual talents in design, 3-D printing, and engineering to good use. A group of five engineering and nursing students began a 3-D printed PPE initiative out of our university’s engineering lab. We decided to mass produce face shields for healthcare workers, nonprofits, and even our university with the intention of filling the high demand for PPE in our community. This took a lot more initiative and engineering than we ever imagined though, and served as a great design challenge for us during this time of quarantine!
Why Our Design?
Our face shields are meant to be used as an added layer of protection against COVID-19. It prevents users from subconsciously touching his or her face. Our model allows for the user to wear most medical grade masks, cloth face coverings, or glasses comfortably underneath. The design lends itself to good circulation of air and are virtually fog proof. Since they are completely made of plastic, they are easy to clean and thus can be reused. In addition, over 180 headbands can fit in a standard UV sanitation cabinet for easy and effective sanitation. They are also a general “one size fits all,” meaning they can fit most head types without being uncomfortable in any regard. These are very easy to 3-D print and have very minimal plastic waste while printing. One of the unique features is their ability to be personalized. Being from a university, we added our Fairfield University logo to the shields. But it is very simple to change this logo to reflect our donors, as seen in the Fairfield Prep models in the files link below. This puts a distinctive spin on our design.
Design Evolution
Our design is an enhanced version of the 3DVerkstan 3-D printed headband. Originally, the design was meant for an 8.5” x 11” sheet of clear plastic to be hole punched and fitted onto the headband. We found many issues with mass producing these shields. The forehead section was too small, causing the plastic sheet to rest against your nose, leaving very little room for wearing a mask underneath. Additionally, the distance of the shield to the user’s face caused the shield to fog up. Also, the headband was extremely weak at the corners, causing the headbands to break when someone tried to put it on. With these issues noticed, we chose to do some genuine hacking in order to distribute a quality product.
The first step we took was to print out one of the original 3DVerkstan headbands, scaled for the best fit. Then we took the headband and a ruler and manually measured each section to recreate the headband in SolidWorks. This was key to our design since it gave us the ability to modify the headband as we wished. The next step was to go through multiple iterations of design, each focusing on the issues stated before. After 15 iterations we now have our final product. Check out our final models below!
The design elements which have evolved into our headband have made it practical for manufacturing and for long-term usage. Extending the forehead section of the headband puts perfect distance between the shield and the user’s mouth and nose. It leaves enough room to comfortably wear a larger mask or glasses underneath. This extruded section at the forehead was hollowed out to reduce the amount of filament used. Later on we were able to put logos into this blank section, which makes the headbands unique to the places we sent them. The corners of the headband are now rounded, moving the weakest point further down the headband, which is significantly more flexible. This structural change has allowed us to reduce the number of “failed” prints and save precious time and materials. The overall thickness of the headband was increased to add comfort to the forehead section. We elongated the headband to place the pressure on the back of the skull where it is least sensitive. Since we mass produce these shields, we developed a method of stacking the headbands so we can print multiple at a time. This allows us to print up to 25 at a time on one printer. This takes about 42 hours to print on a MakerBot. But these printers run continuously. And with the aid of 13 printers in total (2 Taz models (about 16 in 30 hours), 3 Ender models (about 20 in 35 hours), and 6 MakerBots), we can produce about 300 headbands per week. But we completely automated ourselves out of the printing process, allowing us to work 24/7 with minimum effort. Life hack!
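As a back-of-the-envelope check on the per-batch rates quoted above, the sketch below (illustrative Python, assuming perfectly uninterrupted 24/7 printing with no failures or changeover) computes the idealized weekly ceiling those numbers imply; the team's real-world figure of about 300 headbands per week sits well below it, as actual production always does.

```python
# Idealized weekly headband capacity implied by the per-batch rates quoted
# above, assuming uninterrupted 24/7 printing. Real output (~300/week) is
# lower due to print failures, changeover, and downtime.

HOURS_PER_WEEK = 7 * 24

# (model, printer count, headbands per batch, hours per batch)
printers = [
    ("Taz",      2, 16, 30),
    ("Ender",    3, 20, 35),
    ("MakerBot", 6, 25, 42),
]

total = 0.0
for name, count, per_batch, hours in printers:
    weekly = count * per_batch * (HOURS_PER_WEEK / hours)
    print(f"{name}: {weekly:.0f} headbands/week")
    total += weekly

print(f"Idealized ceiling: {total:.0f} headbands/week")
```

The gap between this idealized ceiling and the reported output is a reminder that batch print time is only one part of real production throughput.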
We also had to hack our way into being able to use different face shields. We originally used the 8.5” x 11” sheet of clear plastic with three pegs which secured the shield to the headband. These were sturdy and usable for our headband model, but were sharp-edged and not automatically cut to scale. In addition, there was much manual labor that went into making these shields ready for distribution; we had to 3-hole punch these shields and the user had to round out the edges if they so chose. We started receiving face shields from InLine Plastics in Shelton, CT. These shields came with two hole punches on each side, and had a curved bottom. After many iterations we successfully adjusted the pegs on our headband to fit these shields. We even named these headbands “TheDominicSpecial” after our professor, whose wife generously donated these shields from InLine Plastics. Without the hacker spirit we wouldn’t have considered the modifications which made this project successful.
Product Successes (thus far!)
Our team has distributed 3313 shields to healthcare workers, nonprofits, and even our own university community within the 4 months we have been working on this project. We have distributed to 5 different states and have had articles written about our efforts in the press in CT, MA, and even TX. We received remarkable feedback from our users, and the shields have proven effective, especially in hospital settings where they are worn for long periods of time. Many of our users find our shields superior in that they are easy to assemble, comfortable, and practical.
What we learned
Throughout engineering school we learn about the engineering design process: what the steps are, how it is conducted, and what it takes to complete and market a product. Engaging in this project allowed us to see this engineering design process in action. We began with a problem, came up with a solution, designed (and redesigned) a product, manufactured this product, and distributed it to actual users with great success. This experience demonstrated service through engineering in action and allowed us to use our talents to help those in need.
The reason we entered this hackathon is because we are all hackers at heart. We modified something from someone else and made it completely epic. That is just what hackers do. We look at things that aren’t desirable and change them into something we are proud of. We took a decent design, "hacked" it, and ended up with a product which we can proudly donate to health care workers, nursing homes, and anyone else who may need PPE.
What's next for Our Team
With the COVID-19 case numbers steadily declining in CT, as well as the rise of injection molding, the need for PPE is very much decreasing. Yet, we have been able to use our 3D printing experience to enhance the 3D printing community on our university campus. We look forward to taking our experiences within this project, and at this hackathon, to continue taking our talents in engineering and doing good.
Built With
ender
makerbot
solidworks
taz6
tazpro | Innovative Face Shield for Day to Day Usage | We designed a simple yet effective 3-D printed headband for face shields. These are already being used in hospitals and nursing homes across the northeast as well as within our own university. | ['Lilliana Delmonico', 'Drew Jobson', 'Evan Fair'] | [] | ['ender', 'makerbot', 'solidworks', 'taz6', 'tazpro'] | 16 |