Dataset schema: hackathon_id (int64, 1.57k–23.4k), project_link (string, 30–96 chars), full_desc (string, 1–547k chars), title (string, 1–60 chars), brief_desc (string, 1–200 chars), team_members (string, 2–870 chars), prize (string, 2–792 chars), tags (string, 2–4.47k chars), __index_level_0__ (int64, 0–695)
10,435
https://devpost.com/software/scrollhole
[Screenshots: website] Inspiration The inspiration came from a desire for simplicity in user interaction to produce sound and visuals. We chose the interaction of scrolling, where the speed of scrolling controls certain parameters of both audio and visual. What it does SCROLLHOLE is a tunnel-like UI where scrolling forwards plays audio samples forward and progresses forward through a "score", and scrolling backwards does the opposite. Scroll long enough, and you'll encounter new "regions" of the audio and visual design. How I built it Gina designed the sounds and audio playback, as well as the general concept for the project, and Eric designed the visual animations and handled event sequencing. The visuals are almost entirely produced with p5.js, and the sounds are all generated with Tone.js. Challenges I ran into We ran into many challenges! Fortunately, none of them ultimately compromised the vision. Getting the radial background to show up and change, for example, required clearing the draw() method first so that old shapes were not still on the screen. Figuring out the bounds of Tone's GrainPlayer was also important so as not to overload the audio processing. Accomplishments that I'm proud of We think it's an innovative, fun, and somewhat addictive interface! We're proud as heck of that. What I learned Eric knew a little p5, but both of us came to love this JavaScript library. And neither of us knew Tone.js, which is super feature-rich! We also learned how difficult it is to generate novel things using machine learning. Incorporating our original idea of sound morphing, for example, would have required a lot more data acquisition and cleansing. What's next for SCROLLHOLE We'd like to hook up Magenta.js to generate drum accompaniments to SCROLLHOLE, and use Tone.js to make a sequencer from clicking on shapes, representing on/off MIDI notes in a sequence.
Built With jquery p5.js tone.js Try it out ginacollecchia.github.io github.com
SCROLLHOLE
SCROLLHOLE is a generative audio-visual web interface, where the interaction of scrolling produces what you see and hear.
['Gina Collecchia', 'Eric Heep']
[]
['jquery', 'p5.js', 'tone.js']
14
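The scroll-driven interaction SCROLLHOLE describes, and the GrainPlayer-bounds problem it mentions, come down to mapping a scroll delta to audio parameters while clamping them to a safe range. The sketch below is a hedged illustration of that mapping, not the project's actual code; the parameter names and ranges are assumptions chosen to resemble Tone.js GrainPlayer settings.

```python
# Hedged sketch: map per-frame scroll speed to clamped audio parameters,
# analogous to keeping a granular player within safe processing bounds.
# All names and numeric ranges here are illustrative assumptions.

def clamp(value, lo, hi):
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def scroll_to_params(scroll_delta, max_delta=120.0):
    """Map a scroll delta to playback direction, rate, and grain size.

    Positive deltas play forward, negative deltas in reverse; larger
    magnitudes speed playback up and shorten grains.
    """
    speed = clamp(scroll_delta / max_delta, -1.0, 1.0)
    playback_rate = clamp(abs(speed) * 2.0, 0.25, 2.0)      # keep rate sane
    grain_size = clamp(0.2 - 0.15 * abs(speed), 0.01, 0.2)  # seconds/grain
    return {"reverse": speed < 0,
            "playbackRate": playback_rate,
            "grainSize": grain_size}
```

Clamping every derived parameter, rather than trusting the raw scroll input, is what prevents extreme scroll bursts from overloading the audio graph.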
10,435
https://devpost.com/software/infinite-drums
[Screenshot: Infinite Drums, a drum sequencer using MusicVAE and WebGL] Inspiration I am mostly inspired by the works of Tero Parviainen and Monica Dinculescu. The idea for my project came from the architecture of MusicVAE and how new samples are requested from it, similar to the visual stack of sequences in Infinite Drums. What it does Infinite Drums samples drum sequences from the MusicVAE model using Magenta.js. To give the visual impression of infinity, a stack of drum sequences is shown in 3D. The user can interact with the project in two ways: Via the "Next" button: pressing it plays back a new drum sequence from MusicVAE and visualises the pattern. Via mouse drag & zoom: the user can rotate the 3D scene and zoom in and out. How I built it The drum sequences are generated by Magenta.js using the MusicVAE model with the "drums_2bar_lokl_small" checkpoint. For music playback I use Tone.js with custom audio samples, played back using a Tone.Players instance. To glue the sounds together I chain the output of the players into an equalizer, as well as a reverb. For the visualisation I use cables, a visual programming editor for WebGL. In cables I render the current sequence of drums as spheres. To visualise the infinity of the latent space I also render 19,200 more spheres in the z-direction. Challenges I ran into I ran into various challenges, which I have partly solved. Some issues need to be addressed after the hackathon. Solved: Communicating with a cables patch: cables was used for the visualisation. Using custom functions I communicate between my regular JavaScript code and the cables patch. Finding out which note pitches belong to which instrument. I plan on publishing a small library to make this task easier in the future.
Unsolved: Keeping audio and video in sync: I followed the Tone.js recommendations, but it seems like I did something wrong when drawing in sync with the audio. Using a bundler: currently cables is not compatible with ES6 or CommonJS module bundlers. To save bandwidth I have to find a better way to bundle all the code and minimise the assets. Loading indicators and handling: the app should indicate when assets are being loaded and when it is ready to be used. Sometimes drum samples seem to contain only one bar, so half of the sequence is empty. I should filter these out or repeat them to get rid of the silence. Animations between states: initially I planned on animating drum sequences from the infinite stack to the current sequence being played. There are various visual tweaks which would improve the overall project, but would require more time. Accomplishments that I'm proud of Playing music from a neural network. I have wanted to build something using Magenta.js since I first read about it. Now I finally did :) Finishing an entry. Visual appearance. What I learned How to get samples out of a MusicVAE model. How to map note pitches to instrument names (for example "Kick Drum"). How to display a lot of spheres in cables using WebGL instancing. What's next for Infinite Drums There are a lot of possible performance improvements. Better visual + audio sync. Mobile optimisation. Audio samples: currently I am using wav files. I could save bandwidth by using mp3/ogg (for Firefox). I would also like to check out sound fonts. Bundling: find a way to use a module bundler with cables. Try to find better samples. Add an option for MIDI out, so the project could be used together with other audio tools. Built With cables magenta magenta.js musicvae tone webaudio webgl Try it out github.com
Infinite Drums
Drum sequences from the latent space—as far as your eyes can see
['Tim Pulver']
[]
['cables', 'magenta', 'magenta.js', 'musicvae', 'tone', 'webaudio', 'webgl']
15
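The pitch-to-instrument mapping the author describes (and plans to publish a small library for) is essentially a lookup from MIDI drum pitches to names. The sketch below uses a small subset of the General MIDI percussion key map; the exact classes used by MusicVAE's drum checkpoints may differ, so treat this table as an assumption.

```python
# Hedged sketch: map MIDI drum pitches to human-readable instrument names,
# using a subset of the General MIDI percussion key map (channel 10).

GM_DRUMS = {
    35: "Acoustic Bass Drum", 36: "Bass Drum 1",
    38: "Acoustic Snare",     40: "Electric Snare",
    42: "Closed Hi-Hat",      46: "Open Hi-Hat",
    45: "Low Tom",            50: "High Tom",
    49: "Crash Cymbal 1",     51: "Ride Cymbal 1",
}

def drum_name(pitch):
    """Return an instrument name for a MIDI drum pitch, if known."""
    return GM_DRUMS.get(pitch, f"Unknown percussion ({pitch})")
```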
10,435
https://devpost.com/software/songtree
[Screenshot: SongTree visual interface] Inspiration Generative machine learning tools are one possible route to creating ambient and meditative compositions. This project was inspired by the idea of creating a link to a natural de-stressing environment from within the browser, and by our curiosity about one of the more peaceful natural sounds in existence: the calls of birds. Birdsong is both aesthetic and functional, fundamentally similar to and yet very distinct from human melodies. The work of music theorists has addressed the structure and phrasing of bird-calls and melodies. Physical modeling approaches have described the mechanisms and acoustic properties of birdsong and allowed for some imitative synthesis. More recently, machine learning approaches have been turned to the problem of bird species identification. But what might machine learning have to offer in terms of generating bird-like melodies? What it does To explore these questions, we attempted to build an interactive and meditative browser experience leveraging Magenta to generate variations on bird-calls. Unfortunately we were unable to complete a fully working app in the allotted time. However, the individual components and prototypes that we were able to complete show promise! The video demo shows the visual environment running in the browser, with samples of ambience and re-synthesized bird calls from our "Bird MIDI" dataset. How I built it The visuals were built with React.js using a geometric and algorithmic approach to generate minimalist and gentle interpretations of a natural scene, centering on a tree. The audio framework was built with Tone.js, incorporating sample playback to provide a peaceful and musical/natural soundscape, and FM synthesis to realize the bird-calls.
In order to coax birdlike songs from Magenta's MusicVAE models, we created our own database of "Bird MIDI", using a specially tuned FFT to find the dominant frequencies in clips of birdsong from Kaggle's British Birdsong Dataset and converting these to their nearest MIDI note numbers. However, we ran into difficulty attempting to train our own model using MusicVAE, MusicRNN, and MidiMe; these efforts are ongoing. Challenges I ran into Navigating the Magenta documentation; organizing our team effort; and accounting for different levels of familiarity with JS and Python. Accomplishments that I'm proud of The problem has been posed before: what is bird-song's relationship to human musical representation? However, practical possibilities such as searching the latent space of typical melodies for melodies similar to those called by a bird, or interpolating between bird and human melodic sequences, now appear tantalizingly close with the right combination of machine learning and creative transcription of the birdsong itself. What I learned Our whole team has become familiar with new aspects of web technology, machine learning, and Google's Magenta. What's next for SongTree SongTree needs to be finished and given a home to live on the internet! Even though we were not able to complete our project fully in the time frame, we have high hopes to still do so. Built With magenta python react tone.js Try it out github.com
SongTree
A meditative space with bird-like melodies produced by MusicVAE, offering an interactive and peaceful visual and aural landscape.
['Harrison Adams', 'Saksham Trehan', "Alexandra D'Yan", 'Quin Scacheri', 'Yichen Zhang']
[]
['magenta', 'python', 'react', 'tone.js']
16
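The "Bird MIDI" conversion step described above (FFT peak frequency to nearest MIDI note number) is a standard formula. Below is a minimal sketch of that conversion, assuming standard tuning (A4 = 440 Hz = MIDI note 69); the FFT peak-finding itself is out of scope here.

```python
import math

# Sketch of the frequency-to-MIDI step used to build a "Bird MIDI" dataset:
# snap a dominant frequency (e.g. an FFT peak) to the nearest MIDI note.

def freq_to_midi(freq_hz):
    """Convert a frequency in Hz to the nearest MIDI note number,
    assuming A4 = 440 Hz maps to MIDI note 69."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))
```

For example, a bird call peaking near middle C (about 261.63 Hz) maps to MIDI note 60.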
10,435
https://devpost.com/software/cv-theramin
[Image: a sample model generated by user movement] Inspiration When faced with the challenge of applying creative machine learning in the context of music, we immediately thought about how we respond to music through dancing. As music lovers and dance enthusiasts, we wanted to create an application that anyone can enjoy regardless of their musical expertise. We are ultimately trying to blur the line between music listeners and dancers by using ML. What it does While playing their music through the microphone input, the user busts a couple of moves in front of the camera. The music gets uploaded to the backend server, where it gets split into chunks, and the body movements are analyzed. Each piece of music is then paired with a certain body part. When the user starts dancing, snippets get rearranged or remixed based on the user's body movements and become available to them as an MP3 download. This is a novel application that allows for a more interactive user experience, since users have control over culturally powerful and symbolic songs that make anyone dance. With this tool, the user's dance itself gives them the ability to mix up the song and alter it however they would like. How we built it We developed this application based on the already proven concept (as seen in Google's Body Synth) that the body can be an instrument, but we've taken it further to give the user the power to cut up an existing piece of music as they dance. We used PoseNet and a custom-designed motion detection algorithm to analyse the user's dancing. The machine-learning model, built with TensorFlow, was used to analyze movement and pair it with a particular piece of music. The frontend was built with HTML, JS, and various SVGs developed using JS frameworks. The song that the user plays is uploaded to a backend Flask server using audiorecorder.js and HTTP GET and POST requests.
After it gets sent to the backend, the audio is broken into snippets using Python's pydub library along with ffmpeg. The user's movements, along with the song they danced to, get sent to the frontend, where the movements are analyzed with the Magenta library and a tune is developed. The altered audio is then available for download as an MP3 file. Challenges we ran into Magenta felt last-minute and challenging, but our difficulty using it easily is also valuable to this hackathon, which is designed to get us to test out this software. We are proud of producing a project that provides useful information to the art/tech world going forward; our hard work 'testing' Magenta has meaning! We faced various struggles in our project, including finding a way to merge our code together and sending the code to the backend. For some of us, it was our first time using a Flask server, and there was a relatively steep learning curve in using HTTP GET and POST requests. Additionally, combining the snippets of audio was a difficult challenge, since the pydub library didn't have many straightforward features to make this possible. Accomplishments that we're proud of Eyve: I worked on PoseNet and movement detection. This was interesting because all of my previous work in computer vision was done in Python using ResNet-50, so working in JS was an interesting challenge for this project. I learned a lot about coding in JS and working on the frontend of a webapp for this project. Meredith: One of the interesting things we are doing with the UI is that we are importing the graphics as SVG elements and then manipulating them as elements of the DOM via jQuery. This was a new technique for me, as previously I either used canvas API calls to generate graphics, or libraries like p5. This was a fantastic way to iterate quickly with designers and design elements that export to SVG.
Hyma: I worked on both the frontend and backend web interfaces, taking in audio through the mic, then sending it to the backend via HTTP GET/POST requests, then splitting the audio into separate chunks using Python. I also made it possible to construct tunes that can be played on the frontend based on analysis of body movements. Vaidehi: I was the ever-ready innovator with my beginner-creative-coder eyes focused on Slack so as to work with my team on getting 'creative' and storified with the code. I also prolifically churned out heaps of imagery and animations (even though I am a beginner with Adobe!) to construct a disruptive carnivalesque aesthetic which combines the flow of music with the fluid movements of a dancer, with of course bodily fluid! I am proud of the contrast between 'machine learning' and Rabelaisian references! What we learned Vaidehi: During this hackathon the biggest leap of faith was animating an SVG via PoseNet, which entailed a painstaking night of feeling like a beginner puppet maker in Illustrator, but the end result was just as zany and creatural as I could have dreamed of! Also making a mock-up of a website in Adobe XD, swiftly developing my skills towards being a professional web designer and inspiring more ideas for artistic websites to create in the future! Eyve: I learned a lot about PoseNet and JavaScript coding. My primary focus coming into this project was working on the motion detection on the backend, so when I also ended up working on the frontend it was an exciting adventure. There are several stylistic differences between working in JS and working in Python, so it was odd to go back to things like semicolons at line ends. PoseNet was an interesting model to work with because it's unlike anything I've previously worked with. I'm used to having a lot more knobs and dials to twiddle to get better results, but PoseNet is essentially prepackaged: you give it an image and it tells you where the poses are.
Working with this tool allowed me to focus more on the collection and processing of data. Hyma: I learned how to use VS Code, Flask, and HTTP GET/POST requests. It was a pretty steep learning curve for me, as I didn't have much experience with sending information through a server or working with audio files. I also learned a couple of JS frameworks and really enjoyed being a part of both the frontend and backend development. What's next for Bodily Remixes Given more time, we would want to develop the merging of the human body and the 'sounds of music': for example, what does a leg sound like, what does a hand sound like? We particularly wanted to focus on how the snippets of music get assigned to certain parts of your body and what role the user could take in selecting and assigning the snippets of music. We also hope to gamify the process of converting user movements to tunes in Magenta by encouraging the user to make more daring/unique motions that could potentially make the audio sound better. Built With flask javascript json magenta p5 python tensorflow Try it out github.com
Bodily remixes
A song plays, you dance, then what? The song just 'moved' your limbs and affected the arrangement of your body. But what if you could affect the arrangement of the song?
['Leela Yaddanapudi', 'Vaidehi Bhargava', 'Eyvonne Geordan', 'Meredith Finkelstein']
[]
['flask', 'javascript', 'json', 'magenta', 'p5', 'python', 'tensorflow']
17
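The backend splitting step described above (breaking a song into snippets with pydub and ffmpeg) reduces to computing time boundaries and slicing. The sketch below is a hedged, dependency-free illustration of the boundary computation only; in pydub you would then take `segment[start:end]` for each pair, since pydub's AudioSegment slices in milliseconds.

```python
# Hedged sketch: compute fixed-length chunk boundaries (in milliseconds)
# covering a whole song, the kind of slicing used when splitting audio
# into snippets. Chunk length is an illustrative assumption.

def chunk_bounds(duration_ms, chunk_ms):
    """Return (start, end) millisecond boundaries covering the song.
    The final chunk is shortened so it never runs past the end."""
    bounds = []
    for start in range(0, duration_ms, chunk_ms):
        bounds.append((start, min(start + chunk_ms, duration_ms)))
    return bounds
```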
10,435
https://devpost.com/software/bitrate-dance-song-detector
[Image: YMCA pose] Inspiration The fusion between art, technology, and this hackathon. We searched for something fun and viable to prototype in this short time period. What it does Detects your dance poses, and if it detects you are dancing the YMCA song, it starts playing it until it detects you are no longer dancing it. How we built it Team effort. We divided tasks and each member built one component in one week! Afterward, we integrated everything and it worked! :) Challenges we ran into Finding time to dedicate to this project and learning to use the libraries. Accomplishments that we're proud of The prototype we made :) What we learned To be aware of all the available material that might be useful for creating beautiful projects that integrate art and technology. What's next for BitRate Dance-Song Detector Improve the web page design. Improve the classifier's accuracy by adding more data and tuning the model, since it has been trained only on data from people in the group. To be more accurate, we would need to add more data from different people with different body shapes and sizes. In addition, people might do the poses for the same song differently, so the model will probably overfit when tested with other people. Add support for more songs: train other classifiers to identify which song is being danced. In the future it would be good to dynamically add support for new songs by training the model with new data. This feature needs several steps: a view for capturing poses for a certain song; the system training the model with the generated data; and, once the model is trained, integrating it with the server to classify poses against the new song. Built With docker javascript magenta posenet python sklearn tune.js Try it out github.com
BitRate Dance-Song Detector
BitRate Dance-Song Detector
['Anthony Figueroa', 'Mikaela Pisani', 'Manuela Viola']
[]
['docker', 'javascript', 'magenta', 'posenet', 'python', 'sklearn', 'tune.js']
18
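The team's sklearn classifier decides whether a pose belongs to a known dance. As a hedged stand-in for that model (not their actual code), the sketch below classifies a flattened pose-keypoint vector by its nearest class centroid; the labels, vectors, and two-dimensional toy data are all illustrative assumptions, whereas a real version would train on many labeled PoseNet keypoint frames.

```python
import math

# Hedged sketch: nearest-centroid classification of a pose vector,
# a toy stand-in for the team's sklearn dance-pose classifier.

def distance(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(pose, centroids):
    """Return the label whose centroid is closest to the pose vector.
    `centroids` maps label -> mean keypoint vector for that class."""
    return min(centroids, key=lambda label: distance(pose, centroids[label]))
```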
10,435
https://devpost.com/software/artone
Inspiration I love looking at paintings and was always quick to move into the emotions and feelings that a painting projects. While looking at a painting, I looked forward to having some music that blends into the atmosphere. Every color has its own emotion and feeling related to it. The music for a piece of art should depict the proper atmosphere of that piece. What it does This project takes an image or painting as input and detects the 5 major colors in that image, as well as the most widespread color. The names of the detected colors can then be used to infer an emotion. The detected emotion and feeling, according to the parameters set, will be used to find a song or piece of music related to the art. How I built it The whole project is mostly in Python. The input image is not resized, to reduce the chances of missing main colors during detection. I had some experience with image processing, but PIL was new to me as a library. Challenges I ran into Selecting proper cases for the derived parameters was challenging. Accomplishments that I'm proud of The color detection method is working perfectly and can detect the proper colors. What I learned It was great to be introduced to p5.js and to try a few more Python libraries. What's next for Artone The main focus is still to implement proper machine learning and algorithms to better understand more art forms. This is the very beginning and it will be developed in stages. Built With html numpy p5.js pil python scipy shazamapi Try it out github.com
Artone
The project brings out the unique emotions attached to a painting by collecting the feelings from colors used and plays a song or music helping the viewer blend with the artist's thoughts.
['Shiv Ratna']
[]
['html', 'numpy', 'p5.js', 'pil', 'python', 'scipy', 'shazamapi']
19
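The "5 major colors" detection Artone describes can be illustrated by quantizing pixels into coarse RGB buckets and counting them. The sketch below is a hedged, dependency-free version of that idea; a real implementation would read pixels with PIL (e.g. `Image.getdata()`), and the bucket size of 64 is an assumption, not the project's actual parameter.

```python
from collections import Counter

# Hedged sketch: find the dominant colors in an image by quantizing
# each (r, g, b) pixel into coarse buckets and counting occurrences.

def dominant_colors(pixels, n=5, bucket=64):
    """Return the n most common quantized (r, g, b) colors.
    `pixels` is an iterable of (r, g, b) tuples in 0-255."""
    quantized = [(r // bucket * bucket, g // bucket * bucket, b // bucket * bucket)
                 for r, g, b in pixels]
    return [color for color, _ in Counter(quantized).most_common(n)]
```

Skipping the resize step, as the write-up notes, keeps every pixel in the count so rare but important colors are not lost.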
10,435
https://devpost.com/software/cammachine
[Screenshots: home page (please give cam permission); detected body keypoints playing sounds based on the tiles; it also works with dogs :)] Inspiration Dance is something we do after we hear music. What if we do it in reverse order? We can create music from our dance! I was inspired by Cristobal Valenzuela's "Sidewalk Orchestra" demo I saw during the workshops, and there is also one from Tero Parviainen on his site. And here is my implementation :D What it does It detects your pose through the webcam, and plays the sound of each marked tile when the bar passes over it. How I built it It's a Flask project that serves the web page containing the p5js/ml5 code. Challenges I ran into I'm in the middle of a high workload, so sparing time for this was difficult. Accomplishments that I'm proud of Finally made it! This is my second p5js project. What I learned More p5js tricks! What's next for CamMachine More functions!! Add a synthesizer! The ability to save sounds and record beats! Built With flask heroku ml5 p5js python Try it out cammachine.herokuapp.com github.com
CamMachine
This experiment uses your webcam and ml5 pose detection to draw body keypoints. Sounds play as the bar advances through the tiles.
[]
[]
['flask', 'heroku', 'ml5', 'p5js', 'python']
20
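The bar-over-tiles behaviour CamMachine describes is step-sequencer logic: a bar sweeps across a grid, and when it is over a column, the sounds of that column's marked tiles fire. The sketch below is a hedged illustration of that timing; the grid size, loop length, and sound names are assumptions, not the project's actual values.

```python
# Hedged sketch: step-sequencer logic for a bar sweeping across tile columns.
# Grid size and loop duration are illustrative assumptions.

def active_column(elapsed_s, columns=8, loop_s=4.0):
    """Return the tile column the bar is currently over."""
    position = (elapsed_s % loop_s) / loop_s   # 0.0 .. 1.0 across the grid
    return int(position * columns)

def sounds_to_play(elapsed_s, marked):
    """Return the sounds of marked tiles in the bar's current column.
    `marked` maps (column, row) -> sound name for toggled-on tiles."""
    col = active_column(elapsed_s)
    return [sound for (c, r), sound in sorted(marked.items()) if c == col]
```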
10,435
https://devpost.com/software/holibeats
[Screenshots: the board to customise your music; Indian classical music; how can you listen?; who we are] Our aim is to provide tranquility through melody. Inspiration YouTube meditation music: I used to listen to it to release my tension, restlessness, eye strain, and headaches. So I tried to create a customized music application where the user can listen as she/he wishes. What it does It creates music from Indian instruments, Tibetan music, raagas, and nature sounds like waterfalls, insects, and birds. How I built it I first followed the tutorials and found Tone.js and Magenta.js appealing, so I used them to make this application. I used HTML and CSS too. Challenges I ran into The challenges were using Magenta ML models; I still have to explore a lot to get thorough with them. My teammate also gave up due to a tight schedule, so I had to do everything. But thanks to these challenges, I have learnt and improved a lot. Accomplishments that I'm proud of My first hackathon! And I built what I had imagined, and it turned out even better! What I learned As I'm very new to the tech field, I don't have a lot of expertise, and this was my first hackathon too. But in this hackathon I learnt many things apart from technologies: the will to make the project and continue despite all challenges. I saw many tutorials and videos related to Magenta which were very interesting, and learnt Tone.js, Magenta, and P5 too. What's next for Holibeats 1. I would properly implement Magenta models into it. 2. I would like to convert noise into music. 3. Make music from fingerprint patterns. Basically, I have lots of plans for Holibeats. I'm really excited, and I thank the organisers for this amazing opportunity. Built With css html javascript magentajs tonejs Try it out rashmitha520.github.io github.com
Holibeats
We are Holibeats and we intend to provide a calming and soothing music experience to listeners. We have included various carefully curated sounds to improve the user's mood through music.
[]
[]
['css', 'html', 'javascript', 'magentajs', 'tonejs']
21
10,435
https://devpost.com/software/eternal-circles
[Screenshots: welcome page; first page; drone; drag and drop the melodies; all discs playing; settings; about] Eternal Circles Like many other people, I have a frustrated dream… to compose something that I am proud of. Whether it is due to lack of talent, lack of discipline, or both, composing is quite a difficult process for me. Due in part to the above, I am attracted to the help that AI can provide the artist in the creative process, and in this project I wanted to learn a little more about the possibilities of magenta.js for creating melodies and ways of combining those melodies. Eternal Circles provides the artist with a palette of melodies that they can explore and play with: I like to think of this application as an instrument in which melodies can be combined, changed, generated, and mixed. You can also combine timbres, volumes, and textures. The application has 4 voices (discs) and 8 melodies (tiles) with which you can create an infinite number of combinations (works), in which both artist and listener have a dynamic role during the execution of the piece. For the creation of this application I used many of the concepts learned during the Bitrate meetings, specifically the teachings on the use of P5.js, Tone.js, and Magenta.js, libraries that I had never used. Tero Parviainen's blogs and code were also very helpful to me, as was the article Melody Mixer: Using TensorFlow.js to Mix Melodies in the Browser. On the other hand, I greatly admire the work of Arvo Pärt and Leo Brouwer, and in Eternal Circles I also sought to implement some of their musical characteristics. I am not a professional programmer, I have only taken some online courses; for this reason, many things were difficult for me in the implementation of my idea, and I am sure that several things could have been implemented in a better way. I especially had trouble handling the asynchronous concept in JavaScript and creating a nice user interface.
I am proud to have finished many of the functionalities that I had in my head, and I also like the result, but I know many things could still be improved. I learned many things creating this application; especially, I was shocked to see that even using just the client (the Chrome browser in this case) there are such powerful libraries for making music without the need for a server. Magenta is undoubtedly an exceptional tool for the creative process of artists, and there is no doubt that we will see many new uses for this tool soon. I would like to soon add to Eternal Circles the ability to change all the melodies: in this version, the user can only use the default melodies or those generated by Magenta. I would like her to also be able to use her own melodies and/or modify the previous ones. Feel free to test my app at https://juancopi81-eternal-circles.glitch.me/ (please use headphones; the app has been tested in Chrome on laptop and desktop). Built With css3 html5 javascript magenta.js p5 tensorflow.js tone.js Try it out github.com
Eternal Circles
This project aims to help artists discover new melodies and interesting ways of combining them in an easy, intuitive, and fun way. I am looking to generate a "melodies palette" using AI.
['Juan Piñeros']
[]
['css3', 'html5', 'javascript', 'magenta.js', 'p5', 'tensorflow.js', 'tone.js']
22
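The Melody Mixer article the author cites blends melodies by interpolating between them in MusicVAE's latent space. As a hedged, model-free stand-in for that idea (explicitly not the real latent-space math), the sketch below blends two equal-length pitch sequences by a weighted average, which is enough to illustrate the "palette of melodies" interaction.

```python
# Hedged sketch: naive melody blending by weighted pitch averaging.
# A toy stand-in for MusicVAE latent-space interpolation; all data
# here is illustrative.

def mix_melodies(a, b, weight):
    """Blend two equal-length MIDI pitch sequences.
    weight 0.0 returns melody a, 1.0 returns melody b."""
    return [round((1 - weight) * pa + weight * pb) for pa, pb in zip(a, b)]
```

Sliding `weight` from 0 to 1 traces a path between the two melodies, which is the interaction the discs-and-tiles interface exposes to the player.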
10,435
https://devpost.com/software/lofi-generator
Inspiration Inspired by the late-night study sessions to lofi hip hop that every student has done at some point; in particular, two albums by Potsu: Just Friends and Ivy League. What it does Generates 32-bar songs with continuously changing instrumentation. How I built it The chord progressions are built on variations of the common jazz progressions 1-6-2-5 and 2-5-1-6 in a 32-bar form (AABA or ABAC), using secondary/tritone/other chord substitutions to add some non-diatonic spice. The bassline is built on the root, fifth, and leading tone of each chord using seed patterns I came up with on my bass guitar. It is played by a sine wave synth with added saturation. The chord voicings are built using my implementation of the 3-note shell voicing technique. They are played on a piano/guitar sampler with added effects. The melody follows the form of the chord progression. It is built by encoding/decoding a VAE-sampled melody to the chord progression in 2-measure segments. The resulting melody is aligned to the scale most commonly associated with each chord. The drum patterns are generated by encoding/decoding a hip hop pattern I made. The instruments and patterns are scheduled by Tonejs events, which are managed by controllers for each part and a master controller which interacts with the React app. All the audio is routed to the master output and the audio spectrum visualizer, which uses Tone.FFT and React. Challenges I ran into I had a hard time figuring out how to incorporate non-diatonic notes into the music because I was not satisfied with how flat purely diatonic music sounds. I went through many ways to generate chord progressions, including graphs and simple Markov chains, over the course of about a week. However, I felt those methods didn't create a sense of direction and never resolved predictably. In the end, I settled on common jazz turnarounds that I knew would sound nice, and I added variation by using chord substitution.
Originally I planned to include genres outside of hip hop as well. However, implementing generation methods for different comping, bassline, and drum grooves was overwhelmingly complex, and researching took more time than programming. As it turns out, music sounds interesting when it breaks established patterns. After several days of minimal progress, I took a few days of break and decided to stick to the most iconic subgenre of lofi, which is lofi hip hop. I also had plans to do effect automation like what is possible in most DAWs, but couldn't come up with something that sounded pleasing to the ear. Mixing audio is something I have never been comfortable with when producing music, but I tried my best to balance out all the parts with filters and EQ. In particular, reducing the ringing on lower piano pitches was extremely problematic, but I solved it by reducing frequencies around 200-700 Hz. Lofi music has a very chill vibe that I wanted my art to convey. However, nothing I produced with my drawing tablet and p5js really meshed well. Instead, I decided to go for a simple purple gradient behind my visualizer. The purple gradient rectangle is supposed to represent the sky seen through a window in a typical "lofi aesthetic" bedroom. Ultimately, the product is something that I am satisfied with, enjoy listening to, and enjoy looking at. Accomplishments that I'm proud of This is the second hackathon I have ever participated in and the first for which I submitted my project. What I learned I learned that I should set reasonable expectations for myself and be proud of what I ultimately create. Furthermore, I reinforced my existing music theory knowledge and coding/design skills by finding patterns and generalizations in the way musical genres are produced or performed. It was a great learning experience applying machine learning to a field where optimal results are subjective, and it is a topic that I am continuing to learn about.
What's next for Lofi Generator I would definitely love to expand on this project in the future by implementing different genres of music and by adding artwork that enhances the “lofi aesthetic”. Built With magenta magenta/music react sharp11 sharp11js tonal tonaljs tone tonejs Try it out vin-huynh.github.io
Lofi Generator
Comfy lofi hip hop beats for listening!
['Vincent Huynh']
[]
['magenta', 'magenta/music', 'react', 'sharp11', 'sharp11js', 'tonal', 'tonaljs', 'tone', 'tonejs']
23
10,435
https://devpost.com/software/recurrent-sands
Recurrent Sands in action (1) Recurrent Sands in action (2) Recurrent Sands is intended as a flexible tool of sonic manipulation. It has two primary components: a granular synthesizer and a melody generator utilizing machine learning. In creating the granular synthesizer and its controls, care was taken to make the instrument user-friendly, but also non-prescriptive. The granular parts can be shaped into tonal relationships or regular meter, but they can also be left free. The melody generator creates a symbolic melody, and then plays two parts in parallel: a digital synthesizer, and the correlated pitch-shifted granular part. Either or both of these parts can be overdubbed into their own buffers, which can then be treated as a source for further granular synthesis manipulation. The instrument uses the Nexus-UI library for its user controls interface, and it uses P5.js to visualize the current buffer and its loop constraints. The digital synthesizers and their effects are created using Tone.js, with some direct recourse to the underlying Web Audio API. The melody generation is done using Magenta.js's MusicRNN implementation of their recurrent neural network model, as well as one of their pre-trained models. The pre-trained model is far more inclined toward both tonal melody and 4/4 meter than the rest of this instrument, and a longer-term goal would be to train a more flexible model. Using the transcribed MIDI (or note sequences) from a large set of recorded improvisations could be an interesting data set with which to train a new model, in an extension of work done by George Lewis and Pauline Oliveros. A significant challenge relates to pitch detection. I would like this instrument to be able to perform pitch detection on the sample buffers in real time. 
This would allow those pitches to be used as the seed for melody generation, creating a more cohesive instrument that could be capable of sustaining coherent harmonic movement over longer periods of time, or of staying within a single harmony with substantial internal sonic movement. I tried many tools to do this - the most promising was ML5.js's PitchDetection, as it runs very well in real time. However, it's limited to performing pitch detection on a live input source, and it has issues with the current build of Tone.js (I believe this is because of its use of the deprecated Web Audio method "createScriptProcessor", though I'm not positive). I was unsuccessful at debugging the interlibrary issues, but I would like to work on contributing to the ML5.js library to both fix this bug and make the class more versatile, allowing it to perform either real-time pitch detection or offline batch rendering of buffers. The pitchfinder repository showed a lot of promise, as it has this flexibility of use, but it was far too slow to use in this instrument, causing audio dropouts and slowing down the other real-time audio processing. The issues with processing speed in pitch detection relate to my other recurring challenge - providing visual feedback to the instrument's player without causing dropouts or glitches in the real-time audio. This was most apparent in the p5.js canvas drawing of the buffer and its loop constraints. Performing this at a typical frame rate caused audio issues. I slowed the frame rate and made the buffer drawing less responsive, as the audio playback is most important, but this tradeoff seems unavoidable given that JavaScript does not support multi-threading. Even with an asynchronous call, at some point the drawing will need to happen, so how can it avoid any potential issues with real-time processing?
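The offline batch pitch detection hoped for above could be prototyped with a plain autocorrelation estimator. This Python/numpy sketch is an illustration of the idea only - it is not the ML5.js or pitchfinder implementation, and the buffer length and frequency bounds are assumptions:

```python
import numpy as np

def detect_pitch(buffer, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono buffer via autocorrelation."""
    buffer = buffer - np.mean(buffer)
    corr = np.correlate(buffer, buffer, mode="full")[len(buffer) - 1:]
    lag_min = int(sample_rate / fmax)            # shortest period considered
    lag_max = int(sample_rate / fmin)            # longest period considered
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / lag

sr = 44100
t = np.arange(4096) / sr
sine = np.sin(2 * np.pi * 440.0 * t)             # a short A4 test buffer
print(round(detect_pitch(sine, sr), 1))          # close to 440 Hz (integer-lag quantization)
```

Run offline over each recorded buffer, this kind of estimator sidesteps the live-input limitation, at the cost of the O(N²) correlation - which is fine for batch processing but is exactly the sort of work that would cause dropouts if done on the audio thread.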
For a similar reason, right now these buffers are in .ogg format - I would prefer higher-resolution audio, but given that the instrument is using eight buffers and processing more than a dozen audio events per second in some cases - and that this is all happening in real time in the browser - using the lower-resolution audio seems like a reasonable tradeoff. I'm proud of the depth of the sonic creativity possible with this instrument, and I plan to use it as a personal tool for my own compositions. The recombination of sonic materials in evolving soundscapes is a useful toolset for film composing work, or for free play. I hope others are interested in exploring its possibilities as well. The instrument's openness was very much an intentional choice, allowing for the development of chaotic and atonal sound worlds, as well as the ability to return to consonance and simplicity through a pruning of its parts. I learned a great deal building the instrument. Most of my audio programming to this point has been done in SuperCollider and Max/MSP, so this was a chance to deepen my knowledge of the Web Audio API, as well as the fantastic libraries that are utilized in this project. I've never used CSS or HTML for anything substantial before, so the crafting of the interface was also a nice learning experience. It's still quite simple, and the questions surrounding user interaction will be ones that I continue to ponder and address. This was a valuable opportunity to build a browser-based instrument with the support of the Gray Area and Magenta communities, and to learn from the wonderful workshops that were presented as part of this series. Built With javascript magentajs nexus-ui p5js tensorflowjs tonejs Try it out www.gebrauchsmusik.com github.com
Recurrent Sands
Recurrent Sands is a browser-based musical instrument connecting two of the most exciting areas of development within the world of electronic music: granular synthesis and machine learning.
['Dylan Neely']
[]
['javascript', 'magentajs', 'nexus-ui', 'p5js', 'tensorflowjs', 'tonejs']
24
10,435
https://devpost.com/software/music-matrix-q4clj0
Inspiration The Musing team has actively been producing live-streamed live music events since 2018 together with many established live music producers and artists. These activities increased exponentially in early 2020 due to the sudden urgent need for live streaming within the live entertainment industry, leading to much learning and understanding of the strengths and weaknesses of the online media format for live concerts. During live-streamed shows, the lack of real-time social and musical interactions between performers and online audiences often hinders the events from delivering the feelings of "now", "here" and "together", thus devaluing the experience for artists and music lovers. At the same time, the technology - including state-of-the-art machine learning, MIR, and interactive online environments - is now mature enough to support innovative solutions to these problems. Moreover, it should be possible to utilize these technologies to actually add new value to the live music experience by connecting people in immersive musical environments online. This inspired us to develop the concept for a dedicated virtual music space, to enable a more synchronous experience of live music together with others online. The aim is to let people around the world co-create online live music experiences together through musical expressions in body movements and sounds, enabling a more interactive and engaging platform for live music online. What it does Music Matrix is a web app generating an interactive audiovisual backdrop for a live music performance in real time, based on musical features of the live music audio from the performing artist and the body movements of the online audiences.
These musical expressions are then embodied in the virtual environment by machine learning models accumulating the user's movement input, synthesizing and synchronizing it with the musical audio input, and feeding it back into the virtual environment as real-time audio-visual feedback on user interactions, forming a musical action-perception feedback loop within the virtual music environment. By enabling movement and dance interactions in real time, with responsive visual feedback synchronized to the live music, the Music Matrix app enables performing artists and audiences to co-create the audiovisual experience of the online live show. How I built it We developed an audio-visual interface in Unity, with 3D objects responding to musical features of the live audio signal, which the performing musician inputs through their device's microphone to generate the virtual environment. For the online audience users' input, we integrated PoseNet into Unity to retrieve the concert audience's body motion via the web camera and let this input influence the graphic components for music visualization on the user interface. The real-time music audio and the user's body movement input are synchronized by a real-time beat tracking algorithm. The rhythmically synchronized audio and movement data is retrieved from Unity and encoded into MIDI by the Magenta.js Onsets & Frames model. This MIDI data stream is fed to the Magenta.js GrooVAE model, which emits latent representations of the user's rhythmical input, of the rhythmical input accumulated across all the audience users, and of the music, and interpolates between these latent spaces to merge the audience's inputs with the live music. The output from the GrooVAE models is then triggered by the user's movement interactions via PoseNet and sent into the Unity environment to drive rhythmical music visualizations synchronized with the music via the beat tracker.
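The synchronization step - aligning audience movement events to the tracked beat before they are fed onward - can be illustrated with a minimal sketch. The beat grid here is a hypothetical 120 BPM example standing in for the output of a real-time beat tracker (such as madmom, which the project lists); the event timestamps are made up:

```python
def snap_to_beats(event_times, beat_times):
    """Snap each movement-event timestamp to the nearest tracked beat time."""
    snapped = []
    for t in event_times:
        nearest = min(beat_times, key=lambda b: abs(b - t))
        snapped.append(nearest)
    return snapped

beats = [0.0, 0.5, 1.0, 1.5, 2.0]       # hypothetical beat grid at 120 BPM
events = [0.07, 0.48, 1.31, 1.9]        # raw pose-event timestamps (seconds)
print(snap_to_beats(events, beats))     # [0.0, 0.5, 1.5, 2.0]
```

Quantizing pose events onto the beat grid before encoding them as MIDI is what lets loosely timed audience movement still land rhythmically on the music.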
Challenges I ran into Getting live streaming into Unity - we implemented live streaming via Wowza Streaming Cloud to the web to use as the artist user's input to the app, but as HLS streaming was not natively supported in Unity, we ended up using only audio input via the device microphone, leaving integration of audiovisual HLS live streaming input as a goal for the near future. Accomplishments that I'm proud of Working together in a creative and productive process with a great multi-talented team with a presence around the world on four different continents. What's next for Music Matrix We have many ideas and concepts that we envisioned or partly implemented and want to complete and include in the app in the near future. For example, we developed a script for real-time retrieval, from a live music audio signal, of a distribution over genre tags associated with the music, which we want to integrate with the app to let it influence the interface rendering according to the style of the live music, e.g. via visual filters utilizing image style transfer. Interactive live music visualizations, developed in Unity and enhanced by Magenta's ML technology, are really a limitless concept. We feel that we have just begun to scratch the surface of this potential during the hackathon, and will continue to develop the project further in the near future. In particular, we want to bring this project into our activities as live music producers, to let the results reach end-users directly within our network and test group of artists, musicians, and music lovers, and bring added value to real-life live music productions. During this autumn we will take part in producing several live music arrangements with professional artists, venues, and producers, where we want to continue to develop and use the interactive online backdrop.
The development of the project beyond this hackathon will moreover in part take the form of an open-source experimental platform for research in user-oriented MIR and machine learning within live music performance contexts. Built With c# essentia javascript madmom magenta p5.js python tensorflow unity wowza Try it out github.com docs.google.com
Music Matrix
Music Matrix is an interactive concert backdrop, that is a musical XR environment generated purely from the live audio together with the audience's movements as a co-creational live music experience.
['Nils Kakoseos Nyström', 'aradhana chaturvedi', 'Maria Enge', 'Juan Diego Lozano Martín', 'Felipe Ferreira', 'Mona Lisa Thakur', 'Rutvik Chauhan', 'Gustav Lindsten']
[]
['c#', 'essentia', 'javascript', 'madmom', 'magenta', 'p5.js', 'python', 'tensorflow', 'unity', 'wowza']
25
10,435
https://devpost.com/software/castme-v162wn
Splash Screen Customize Character Main Menu Presentation Screen View from the middle of the class Motion Capture Streaming demo Male Professor teaching View from the back Female Professor Teaching castme.life website homepage Try it out here: Intro Demo (2 min): https://youtu.be/Xm6KWg1YS3k Complete Demo: https://youtu.be/1h1ERaDKn6o Download pipeline here: https://www.castme.life/wp-content/uploads/2020/04/castme-life%20Win64%20v-2.1beta.zip Documentation to use this pipeline: https://www.castme.life/forums/topic/how-to-install-castme-life-win64-v-2-1beta/ Complete source code (1.44 GB): https://drive.google.com/open?id=1GdTw9iONLywzPCoZbgekFFpZBLjJ3I1p castme.life website: https://castme.life Inspiration Video lectures are available in abundance, but the mocap data of those lectures is far ahead in the form of precise data. High-quality data in large amounts is one of the requirements of the best predictive ML models, so we have used mocap data here. Despite the availability of such promising data, the problem of generating bone transforms from audio is extremely difficult, due in part to the technical challenge of mapping from a 1D signal to 3D transform (translation, rotation, scale) float values, but also because humans are extremely attuned to subtle details in expressing emotions; many previous attempts at simulating talking characters have produced results that look uncanny (e.g., the companies Neon and Soul Machines). In addition to generating realistic results, this project represents the first attempt to solve the audio-speech-to-character-bone-transform prediction problem by analyzing a large corpus of mocap data from a single person. As such, it opens the door to modeling other public figures, or any 3D character (through analyzing mocap data). Text-to-audio-to-bone-transform, aside from being interesting purely from a scientific standpoint, has a range of important practical applications.
The ability to generate high-quality textured 3D animated characters from audio could significantly reduce the bandwidth needed for video coding/transmission (which makes up a large percentage of current internet bandwidth). For hearing-impaired people, animation synthesis from bone transforms could enable lip-reading from over-the-phone audio. And digital humans are central to entertainment applications like movie special effects and games. What it does Cutting-edge technologies like ML and DL have solved many of society's problems with far better accuracy than an ideal human ever could. We are using this technology to enhance the learning process in the education system. The problem for every university student is that they have to pay a large amount of money to continue studying at a college, and they have to interact with lecturers and professors to keep getting better and better. We are solving the problem of money. Our solution is an e-text-to-human-AR-character sparse-point-mapping machine learning model that replaces professors and uses our AI bots to teach the same material in a far more interactive and intuitive way than could ever be done with professors. Students can even learn by themselves with the AR characters. How we built it This project explores the opportunities of AI and deep learning for character animation and control. Over the last 2 years, this project has become a modular and stable framework for data-driven character animation, including data processing, network training, and runtime control, developed in Unity3D / Unreal Engine 4 / TensorFlow / PyTorch. This project enables using neural networks for animating character locomotion, sparse facial point movements, and character-scene interactions with objects and the environment. Further advances on this project will continue to be added to this pipeline.
Challenges we ran into To build a studio-like environment, first of all, we had to collect a bunch of equipment, software, and their prerequisites. Some of them are listed below: Mocap suit - SmartSuite Pro from www.rokoko.com - single: $2,495 + extra textile: $395; GPU + CPU - $5,000; Office premises - $2,000; Data preprocessing; Prerequisite software licenses - Unity3D, Unreal Engine 4.24, Maya, MotionBuilder; Model building - AWS SageMaker and AWS Lambda inferencing; Database management system. From there, we started building. Accomplishments that we're proud of The idea of joining a virtual class, hosting a class, interacting with your colleagues in real time, talking with them, asking questions, visualizing an augmented view of any equipment, and creating a solution is in itself an accomplishment. Some of the great features we have added here are: asking questions of your avatar professors, discussing with your colleagues, learning on your own time with these avatar professors, and many more. Some detailed descriptions are given in the submitted files. What we learned This section can be entirely technical: all of the C++ and Blueprint parts of multiplayer game development. We have started developing some of the designs in MotionBuilder; previously we had all been using Maya and Blender. What's next for castme We are looking for tie-ups with many colleges and universities. Some examples are Galgotiah University, Abdul Kalam Technical University (AKTU), IIT Roorkee, and IIT Delhi. We will record an abundant amount of lecture motion capture data to better train our (question-answering motion capture) machine learning model. More info For more info on the project contact me here: gerialworld@gmail.com , +1626803601 Built With blueprint c++ php python pytorch tensorflow unreal-engine wordpress Try it out castme.life www.castme.life github.com www.castme.life
castme
We are revolutionizing the way the human learns. We use the Avatar Professors to teach you in a virtual class.Talk to your professors,ask questions, have a discussion with your colleagues in realtime.
['Md. Zeeshan', 'Rodrixx Studio']
[]
['blueprint', 'c++', 'php', 'python', 'pytorch', 'tensorflow', 'unreal-engine', 'wordpress']
26
10,435
https://devpost.com/software/drum-machine-co6ab1
DrummerBoy Inspiration We're a team of music creators. We've worked with youngteam and collaborated with a large community across Discord, and some of our pieces have been awarded prizes and listened to by KennyBeats. One of the biggest issues we've seen in the field is the inability of music creators to casually create a beat that pairs well with their chord progressions or melodies. We wanted a product that we'd want to use ourselves, and began researching existing products. The innovative design of the Teenage Engineering OP-1 gave us an elegant end result to target, and the NSynth Super gave us great ideas about the ways neural networks can be used to aid musicians. After understanding what these products offered, we began designing DrummerBoy: a drum machine which quickly and accurately produces complementary drum beats for a given melody. Dylan had built and sold an abundance of modular synthesizers, and so we started on the ambitious journey of employing both machine learning and our experience with circuit design. What it does DrummerBoy takes in an audio file, which may contain a chord progression played on a piano or a melody strummed on a guitar. Regardless of what instruments make up the input, we process this file and analyze how we can match a drum beat to it. How I built it We attacked the hardware and software sides in parallel. On the software side, we began by implementing the MusicVAE model, hoping to gain familiarity with Google's Magenta. After having a functional pipeline where we could not only sample from audio inputs but interpolate between two different inputs, we continued with DrumsRNN to evaluate how we could produce the drum beat to accompany the input melody. We looked at a multitude of factors such as BPM, the quantized version of the MIDI notes, etc.
Using these factors, we produced a drum pattern using DrumsRNN and a model trained with the help of Magenta's architecture to accompany the instrumentalist or artist as he/she/they play the beat. We used both Python and C to communicate with the Teensy and send signals corresponding to the MIDI file. For the hardware, we decided to use a Raspberry Pi to run the model, communicating over serial with a Teensy 3.2 to generate and output audio. The Teensy Audio Library was perfect for loading and playing samples based on a MIDI file. The Teensy was connected via USB to the Raspberry Pi, and a Python script sent bytes to the Teensy as note signals. WAV drum samples were converted to bytestreams using wav2sketch and flashed onto the Teensy. These drum samples were then played using the AudioPlayMemory object inside the Teensy Audio Library and output through the Teensy's built-in DAC. Finally, this signal was fed through a 2.5mm jack into a speaker. Challenges we ran into We began this project on May 21, just two months after we returned home from Berkeley due to the COVID-19 pandemic. Throughout this entire process, we ran into an abundance of errors, but we'll point out two substantial hurdles we overcame on the software and hardware sides. On the software side, we ran into errors trying to implement MusicVAE, notably generating our own checkpoints and automating the conversion of audio (WAV) files to MIDI files. On the hardware side, we struggled with navigating the Teensy Audio Library and communicating between different subsystems, in this case the Raspberry Pi. Accomplishments that we're proud of We're so proud that we were able to complete this drum machine! Our complete pipeline works, and it couldn't have been possible without immense effort from each of our team members. We learned how to combine our unique skillsets, ranging from music creation (shared) to circuit design, machine learning, and integration with the Raspberry Pi.
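The Raspberry Pi-to-Teensy note-signal scheme described above might look something like this sketch on the Pi side. The note-to-byte table and the one-byte-per-hit protocol are illustrative assumptions, not the project's actual code:

```python
# Hypothetical mapping from General-MIDI drum note numbers to the single-byte
# sample triggers the Teensy sketch would interpret (values are assumptions).
DRUM_MAP = {36: b"\x01", 38: b"\x02", 42: b"\x03"}  # kick, snare, closed hat

def encode_hits(midi_notes):
    """Turn a list of MIDI drum note numbers into the byte stream sent over serial."""
    return b"".join(DRUM_MAP.get(n, b"\x00") for n in midi_notes)

payload = encode_hits([36, 42, 38, 42])
print(payload)  # b'\x01\x03\x02\x03'
# In the real pipeline these bytes would be written with pyserial, e.g.:
#   serial.Serial("/dev/ttyACM0", 115200).write(payload)
```

On the Teensy side, each received byte would index into the flashed wav2sketch sample table and trigger the matching AudioPlayMemory voice.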
With each issue that we ran into, we made sure to address it properly by discussing potential solutions with the team and doing adequate research before implementing our ideas in code. What we learned This was our first experience working with Magenta, and we gained a lot of familiarity with it. In addition, two of our members had experience with neural networks but had never employed them in a musical sense. Previously, we'd just focused on computer vision and NLP applications, never thinking we could combine our passions for music creation and ML. Moving forward, we've gained so many fun ideas we want to pursue, particularly around DDSP. We've learned that there are so many interesting applications to work on with our skillsets, and our team is excited to tackle these projects in addition to continuing work on DrummerBoy. What's next for DrummerBoy Something we experimented with, but were ultimately unable to complete by this deadline, was the ability for the user to choose drum beats from different genres and experiment with them. We ran into issues implementing this idea with DrumsRNN, and this is the next feature we want to offer with DrummerBoy. In addition, we're hoping to increase functionality on the hardware side. We couldn't finish our idea of allowing users to change fine features of each MIDI sound, something we believe can add value should we provide the option of choosing from different genres. Built With machine-learning raspberry-pi Try it out github.com
DrummerBoy
Hardware drum machine that accurately produces complementary drum beats for a given melody.
['Dylan Reimer', 'Jason Dong', 'tejjogani Jogani']
[]
['machine-learning', 'raspberry-pi']
27
10,435
https://devpost.com/software/edit-musik-dengan-fl-studio-mobile
FL Studio Mobile app screen. Learning editing is very appealing to customers, especially in the music field; FL Studio Mobile is an app dedicated to arranging many different genres of music. Built With aplikasi flstudiomobile Try it out drive.google.com
Edit Musik dengan Fl Studio Mobile
Welcome to my new project; I will explain how to edit music with FL Studio Mobile on Android
['JORDANLIS OFFICIAL']
[]
['aplikasi', 'flstudiomobile']
28
10,435
https://devpost.com/software/musik-editing-witht-fl-studio-mobile
Inspiration What it does How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Musik editing witht fl Studio Mobile Good luck trying it! Built With indonesia
Musik editing witht fl Studio Mobile
Welcome to my new project
['Leoni Studio Official']
[]
['indonesia']
29
10,435
https://devpost.com/software/midiflux-triome
Inspiration - slider mixes. What it does - performs training of MIDI manipulation and code changes. How I built it - Glitch collaborative project and triome. Challenges I ran into - script basics. Accomplishments that I'm proud of - changes in values. What I learned - Java. What's next for midiflux@triome - deploy. Built With glitch midi.js Try it out pastoral-majestic-crab.glitch.me
midiflux@triome
manipulation of melodies using glitch code project and triome sliders
['karam thapar']
[]
['glitch', 'midi.js']
30
10,435
https://devpost.com/software/pi-ke4nfz
Logo Auto Cropping - Screenshot Landscape Colorization - Screenshot Deep Painterly Harmonization - Screenshot Inspiration Harmonies is an online photo editor that aims to simplify the process of editing photos. Now, you can use the same advanced tools from Photoshop by easily dragging and dropping into the canvas. We take advantage of the capabilities of computer vision to help our users edit photos in an appropriate way. What it does The Harmonies app helps designers and editors create a full, rich experience for users or customers. In addition to the regular editing tools (like crop, rotation, drawing, and shapes), we provide the user with three powerful computer vision techniques to cut, color, and add images. Image Colorization: Harmonies takes a grayscale (black and white) input image and produces a colored image that represents the semantic colors and tones of the input. Image Segmentation: Harmonies uses image segmentation to extract parts from the image and return a PNG photo. Deep Painterly Harmonization: Harmonies produces significantly better results than photo compositing or global stylization techniques, enabling creative painterly edits that would otherwise be difficult to achieve. How we built it Front-End: We used React for front-end development, giving us a single-page application (SPA) with a clean, modern design that is easily maintainable. Back-End: The technologies used in the backend are Node.js, Express, and MongoDB. We established our frontend-backend communication using JWT tokens. RESTful APIs: We used the Flask library to create a web API for both the segmentation and the coloring models. This API takes base64 images as input, runs preprocessing on these images, and then feeds them to the appropriate model depending on the request URL. This API was then deployed to Azure web services via a git repository and integrated with the front-end editor.
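The preprocessing step of the API - decoding a base64 image payload before it reaches a model - can be sketched with the standard library alone. The Flask routing around it is omitted, and the data-URL prefix handling is an assumption about the front-end's payload format:

```python
import base64

def decode_image_payload(data_url):
    """Strip an optional data-URL prefix and decode the base64 body to raw image bytes."""
    if "," in data_url:
        data_url = data_url.split(",", 1)[1]   # drop e.g. "data:image/png;base64,"
    return base64.b64decode(data_url)

payload = "data:image/png;base64," + base64.b64encode(b"\x89PNG").decode()
print(decode_image_payload(payload))  # b'\x89PNG'
```

In a Flask route the decoded bytes would then be loaded into an image tensor and dispatched to the segmentation or colorization model based on the request URL.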
Image Colorization: In this part we reimplemented Colorful Image Colorization using PyTorch for automatic image colorization. Image Segmentation: In this part we reimplemented Rethinking Atrous Convolution for Semantic Image Segmentation using PyTorch for auto-cropping a person from an image. We used the same concept of image segmentation, and instead of adding masks, we return a PNG photo. Deep Painterly Harmonization: In this part we reimplemented Deep Painterly Harmonization using PyTorch to add harmonies to the adjusted element. The CV technical details are fully described in our GitHub repository. Challenges we ran into The biggest challenge we faced was that the team worked together remotely, spread over different time zones. Moreover, it was difficult to: create a complete machine learning web application using React and Flask while dealing with different APIs and data types; develop the application workflow to be fully automated. One big challenge was not having an NVIDIA GPU on our devices; one way of solving this problem was using the cloud for testing and inference. Accomplishments that we are proud of We are proud to have participated in this competition, competing against people from all over the world. Furthermore, this hackathon helped us meet other incredibly talented people, work as a team, and take on challenges that put our problem-solving skills to the test. This is our first time as a team, and we have successfully created a full MVP during the hackathon. Moreover, our model has been deployed as a real-life project, and it can be used easily. What we learned How to integrate the backend API with a frontend, making it secure with JWTs. Deploying a machine learning model to the cloud. Getting into teamwork techniques and sharing ideas. Working completely remotely with a team for the first time.
What's next for Harmonies The future of Harmonies depends on two main pillars: the technical pillar and the commercialization pillar. For the technical part, we will retrain our model with a bigger dataset to get better results for image colorization and auto-cropping. We are also studying adding more computer vision features like image enhancement and converting images into paintings. We will optimize the code to reduce the run time. For the commercialization part, we are thinking about how to cover the cost of the cloud by adding some ads to the service, offering premium packages, or making use of the imaging data. In the future, we will develop a mobile version of our website to support more users. Built With amazon-web-services azure express.js flask google-cloud jwts mongodb node.js pytorch react reactstrap torchvision Try it out harmonies.studio github.com
Harmonies
Because your images need some magic
['Mohamed Amr', 'Mahmoud Yusof', 'Mohamed Abdullah', 'AbdelRahman Emam', 'Ahmed Samir']
[]
['amazon-web-services', 'azure', 'express.js', 'flask', 'google-cloud', 'jwts', 'mongodb', 'node.js', 'pytorch', 'react', 'reactstrap', 'torchvision']
31
10,435
https://devpost.com/software/cellphonia-covid-19-therapy-meditation
Inspiration Imagined as self-care audio for an individual, alone, while suffering the disease symptoms of COVID-19. What it does Against the background of a hospital, or perhaps alone in hospice care, an audio stream supports the patient's care. How we built it Audio is brokered on the foundation of the http://cellphonia.org/ ISP servers that have already delivered networked music content for 15 worldwide performances. In the past, Twilio https://www.twilio.com/ has supported Cellphonia in the acquisition of mobile phone contributions made by the public and will most likely be utilized again. Challenges we ran into TBD Accomplishments that we're proud of TBD What we learned To follow What's next for Cellphonia: COVID-19 Therapy Meditation We'll know at the end. Built With cellphonia cellphonia.org https://www.twilio.com/ server
Cellphonia: COVID-19 Therapy Meditation
Meditative audio support to a stricken patient and care team in the 15 days of 24/7 therapy.
['Steve Bull', 'Scot Gresham-Lancaster']
[]
['cellphonia', 'cellphonia.org', 'https://www.twilio.com/', 'server']
32
10,436
https://devpost.com/software/scan-sound
Inspiration In the US, 1 in 4 people will have a stroke in their lifetime. Working in healthcare, we have seen the devastating effects of strokes on our patients - lifelong, irreversible disability and death. The average time to critical, life-saving care in the event of a stroke is 3.8 hours, but treatment is most effective when started within 3 hours. Delayed treatment leads to long-term disability, decreased functionality, and worsened quality of life for the patient and their families. These effects are further felt financially, as costs of long-term care increase exponentially with permanent disability. This increased burden weighs on individuals and the US healthcare system as a whole. We all know individuals that received care too late and were left to cope with their new normal. These consequences affect everyone - families, friends, caretakers, even employers. We aim to shorten the window and help people recognize strokes faster. What it does Our solution is an app that helps users recognize symptoms of a stroke and shorten the time to critical care. It will help the user figure out if something is wrong, and give them next steps, streamlining the next level of care and getting to a hospital. How we built it With rapidly progressive technological advancements, more and more people are gaining access to smartphones with facial recognition. We see industry trends moving toward utilizing smartphone technology in healthcare through Apple’s Health app, Fitbits, smartwatches, etc. The proportion of elderly individuals using smartphones is also increasing, allowing us to utilize technology that folks already possess. Challenges we ran into In a survey of family and friends, most indicated that they would be hesitant to pay out of pocket for such an application. This posed a challenge of how we could finance such an endeavor. We realize there are many stakeholders to reducing morbidity due to stroke complications. 
Improved outcomes and long-term functionality save costs in providing nursing care and treating further complications such as pneumonias associated with dysphagia, falls due to weakness, etc. Additionally, while anyone can experience a stroke, there are well-described risk factors. We struggled with how to appropriately incorporate this knowledge into our application. We decided that we would create a "demographics" section to best understand our user and how to help them. What's next for Scan&Sound We hope to expand this technology to recognize other conditions with varying levels of urgency. For example, recognizing jaundice or scleral icterus may trigger an alert to make an appointment with a primary care physician. Built With dlib matlab opencv python
Scan&Sound
App for early recognition of stroke
['Leeore Levinstein', 'hadas braude', 'Shunit Polinsky', 'Sean Heilbronn-Doron', 'Ron L']
['Data Driven Healthcare Track']
['dlib', 'matlab', 'opencv', 'python']
0
10,436
https://devpost.com/software/invictus
GIF DCGAN image generation Sample augmented image Skin classifier Inspiration In the news, there has been a lot of talk about the dermatological symptoms associated with COVID-19 infection, even if the individual is otherwise asymptomatic. These symptoms may include rash, blisters, or itchy hives. While researching this topic, our team began to realize that skin conditions related to COVID-19 may look different between patients with fair skin and those with darker skin. A paper by Lester et al. demonstrated that in a systematic review of pictures in scientific articles describing skin manifestations associated with COVID-19, 93% (120 out of 130) were taken of patients with the three fairest skin tones (Types I-III). 6% (7 out of 130) showed patients with Type IV skin, and there was no representation of the darkest skin tones (Types V and VI). This can lead to cognitive biases that contribute to underdiagnosis of COVID-19 infection in patients with darker skin who are otherwise asymptomatic. This issue isn't just limited to COVID-19, though. It turns out that many other skin conditions present differently in patients with dark skin compared to those with fair skin. The following is an example of atopic dermatitis in an infant with dark skin compared to one with fair skin. If the right image is what medical students, nurses, and physicians see in textbooks and medical journals, you can imagine how easy it would be to misdiagnose the patient on the left! Furthermore, 47% of dermatologists report insufficient exposure to patients with darker skin during their training, which directly impacts the quality of patient care and contributes to poorer health outcomes in minorities. Our team aims to resolve this issue by creating a web app that increases the representation of dark skin in medical databases, journals, and textbooks.
What it does Our team's web app aids physicians and other healthcare providers in uploading pictures of their patients’ skin lesions, creating an open-source database of skin conditions in patients with darker skin tones. We then plan to use this database in downstream machine learning algorithms, including training a DCGAN to generate even more examples of skin conditions in darker skin tones. Our goal is to increase the visibility of skin conditions in all skin tones and remove cognitive biases that contribute to poorer health outcomes in minorities. How we built it Low-fidelity wireframe using Balsamiq, high-fidelity wireframe using Figma, Firebase Authentication, and an image generator using a TensorFlow DCGAN. Challenges we ran into Because there are no established datasets containing images of skin conditions in minorities or darker-skinned individuals, we had trouble figuring out a way to train the machine learning models. We are also having a lot of issues with the DCGAN. Accomplishments that we're proud of We have completed the UI design and built a working web app with login, sign-up, and image uploading, linked to the neural style transfer, which allows conversion of Caucasian skin patches to darker skin. Also, we have made a separate skin classifier that would aid skin condition diagnosis for doctors. What we learned We learned a lot about the current problems and challenges faced by people of colour, different skin conditions, and DCGANs for generating realistic fake images. What's next for PixGen We aim to eventually have our web app address more diseases in which the appearance of skin looks different between fair-skinned and dark-skinned individuals. These diseases may include, but are not limited to: Lyme Disease Eczema and Psoriasis Kawasaki disease Acne Spider bites Cancer Built With balsamiq css dcgan figma firebase google-cloud html javascript python react tensorflow tf-keras-api Try it out www.figma.com github.com
PixGen
Database and image generator to train doctors on skin conditions of people of colour
['Ava Chan', 'Yuheng Zhang', 'Umar Ali', 'Vaidehi Patel', 'Aishwarya Vijayakumar']
['Data Driven Healthcare Track']
['balsamiq', 'css', 'dcgan', 'figma', 'firebase', 'google-cloud', 'html', 'javascript', 'python', 'react', 'tensorflow', 'tf-keras-api']
1
10,436
https://devpost.com/software/icd-codex
Inspiration Thousands of Americans are misquoted on their health insurance yearly due to the ineffective monitoring of ICD (International Classification of Diseases) codes. However, it is difficult to automate or automatically flag mistakes, because there are so many such codes. Simultaneously, the field of Natural Language Processing has provided advances in “embedding,” which open the door to making classification problems with many outputs more tractable. We believe such advancements are crucial in any personalized medicine informatics workflow. What it does ICD codex creates a vector embedding for ICD codes. With just a few lines of code, practitioners can efficiently adapt their algorithms and take advantage of superior model architecture. How we built it We used the networkx library to build a graphical representation of the ICD coding structure, which was fed into a word2vec implementation. We also followed the scikit-learn API and used Twine to deploy onto PyPI. Our website was built using Sphinx. Challenges we ran into Our project was based on a simple idea that hinged on execution and polish. Despite the difficulty of implementing our neural network models and cleaning XML data (which is how the ICD hierarchy itself is documented), our workflow had to be seamless from the perspective of the user. It was a challenge to organize developers working on different aspects of analysis and design, especially remotely. Accomplishments that we're proud of We are proud to have brought our vision to reality. Today, anyone in the world can run pip install icdcodex and use our software to build healthcare informatics applications for personalized medicine in just a few lines of code. Furthermore, they can access well-designed documentation at https://icd-codex.readthedocs.io/en/latest/ , making the barrier to entry quite reasonable. What we learned Our team learned the importance of data representation in personalized medicine.
Thoughtfully designed algorithms and data structures for representing a patient’s health pave the way for more automation, fewer chances for error, and better allocation of hospital resources. Furthermore, this experience taught us the importance of high-level documentation to ensure clarity and understanding of our work, which we were able to do through a Sphinx website. What's next for ICD-Codex Going forward, ICD-Codex will serve as an easy-to-use and impactful API for personalized medicine workflows. Our next step is to partner with hospitals to double-check their medical coding and point out errors in a timely and more accurate manner. Built With google-bigquery networkx node2vec sphinx Try it out icd-codex.readthedocs.io pypi.org
ICD-Codex
Miscategorized ICD codes are costly for patients and hospitals alike. How do we improve diagnostic accuracy? With ICD-codex: an easy-to-use model to better computationally represent ICD codes.
['Natasha Nehra', 'Jeremy Fisher', 'Hamrish Saravanakumar', 'Alhusain Sakr', 'Tejas Patel']
['Data Driven Healthcare Track']
['google-bigquery', 'networkx', 'node2vec', 'sphinx']
2
10,436
https://devpost.com/software/just-in-time-h4j6p5
Inspiration Many elderly people often forget to take their medication. Not only are adherence and consistency difficult, but being able to read the label and input your own reminders on your calendar is a hassle. What it does Our app quickly and seamlessly scans a prescription label, reads aloud the important information, and automatically adds it to your profile for future voice reminders. How I built it We used Android Studio to develop the app, with Firebase and ML Kit providing the text recognition and Realtime Database functionality. Challenges I ran into The user interface was a challenge for us. Also, several people collaborating and editing the same codebase was difficult to manage. Accomplishments that I'm proud of We developed an idea that can help the elderly adhere to their medication. What I learned Communication and cooperation between teammates. Built With android android-dev android-studio databse firebase java machine-learning Try it out github.com drive.google.com drive.google.com
Just in Time
An app for the elderly that captures prescription labels to automatically set reminders to take medicine. Built with simplicity in mind.
['Maanasa Pillai', 'Abd-El-Aziz Zayed', 'Chloe He']
['Aging In Place Track']
['android', 'android-dev', 'android-studio', 'databse', 'firebase', 'java', 'machine-learning']
3
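The label-to-reminder step Just in Time describes (ML Kit OCR text parsed into medication details and a reminder schedule) runs on Android in Java; as a language-agnostic illustration, the parsing stage might look like the Python sketch below. The regex patterns and field names are assumptions for demonstration, not the app's actual implementation:

```python
import re

def parse_label(ocr_text):
    """Pull dosage and frequency out of OCR'd prescription-label text.

    Returns a dict suitable for scheduling reminders; fields the
    regexes cannot find are left as None rather than guessed.
    """
    dose = re.search(r"take\s+(\d+)\s+(tablet|capsule|pill)s?", ocr_text, re.I)
    freq = re.search(r"(\d+)\s+times?\s+(?:a|per)\s+day|daily", ocr_text, re.I)
    times_per_day = None
    if freq:
        # "daily" (second alternative) has no digit group; treat it as once a day.
        times_per_day = int(freq.group(1)) if freq.group(1) else 1
    return {
        "quantity": int(dose.group(1)) if dose else None,
        "times_per_day": times_per_day,
    }

label = "TAKE 1 TABLET 2 TIMES A DAY WITH FOOD"
parsed = parse_label(label)
```

A real app would follow this with validation by the user (as the description notes) before any reminder is set, since OCR output is noisy.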
10,436
https://devpost.com/software/sygn-alzheimer
Github Link: https://github.com/ssaradhi5/MedHacks2020 Website: neurotechuw.com Inspiration Today, in the US alone, there are over 5,800,000 American citizens over the age of 65 living with Alzheimer's. On an annual basis, Alzheimer's costs the US roughly $305 billion, and by 2050, estimates project this figure to increase to as much as $1.1 trillion per year. Most patients with Alzheimer's today are diagnosed at the mild dementia stage, only after they have already begun to experience significant memory and thinking issues. However, if all Americans alive today who will develop Alzheimer's were diagnosed earlier, when they have mild cognitive impairment, it would save the US $7.9 trillion. Although there is no cure for Alzheimer's, early diagnosis results in many benefits for the healthcare system, patients, and their families. In addition to the cost savings for patients and the government, early diagnosis enables patients to access treatment options earlier, allowing them a greater chance of benefiting from new treatments and the possibility of enrolling in clinical trials for new therapies. Additionally, with a diagnosis, patients can choose to adjust their lifestyle habits to slow cognitive decline and maximize the time they spend with their friends and family. Thus, we decided that there needed to be a solution for aging individuals (those most susceptible to AD & dementia) to enable them to have their cognitive health screened & monitored in an innovative fashion, so HCPs can use this information to inform diagnostic decisions. What it does Syne is a screening and data processing platform for cognitive impairment monitoring. Syne helps HCPs screen for changes in cognitive impairment as aging patients routinely get tested over time.
The Syne testing process is two-fold. Part A is an MMSE test created using a WordPress website along with a form, where HCPs can enter patient test results, which can then be related to an estimated level of cognitive impairment. Part B involves an EEG test, which has become more and more established within academia as a screening method for Alzheimer's. The EEG data is gathered using an OpenBCI and is then processed by a signal processing algorithm on Google Colab, and can then be compared to literature values within the form (which can be accessed from the website) to generate insights for HCPs. How we built it The creation of Syne involved both hardware and software components, using an OpenBCI device, Google Colab, and WordPress to create a more comprehensive screening and data analysis platform. Initially, we ordered a Cyton board headset from OpenBCI, which contains 16 electrodes for streaming EEG data across the scalp and a 3-axis accelerometer. Once we received the product, it required assembling the given computer chips, wires, and electrodes, and installing the GUI. Afterward, EEG data from the scalp was streamed, recorded, and then exported as a CSV file and a BDF file to our Google Colab platform. Our Python-based script then sorted the files received and converted them into a RAW file for processing using the MNE library. The time-series data was converted to the frequency domain and then used to compute the coherence of each electrode pair, the final average coherence, total spectral power, and average theta power using logical loops and conditional statements. Finally, the resulting arrays of data were IIR bandpass filtered using a Butterworth filter into each of the alpha, beta, theta, and delta sub-bands of interest and were also plotted to provide a graphical interpretation of the brain waves.
Of course, this entire procedure has a user interface in the form of a website that was developed using WordPress. The recorded cognitive assessment answers and computations will be stored in a database for each user over time. Submissions are then compared for the user over time for further medical analysis. Challenges we ran into Since our problem space was situated in the disease screening niche of neuroscience, our primary challenge was obtaining several sources of academic literature to support our proposed solution. After extensive searching and discussion with experienced specialists, our next challenge occurred with the implementation of the OpenBCI, a tool that none of us were familiar with. Finally, the last challenge we ran into was creating the Python script, which required a lot of troubleshooting. This is because the computations we ran were complex, and the signal processing aspect needed to be accurate to ensure that the data was being properly filtered. To ensure that our calculations were correct, we also used other websites and handwritten calculations of sample data to verify our code. Accomplishments that we're proud of We are very proud of integrating both hardware and software components into our final product. Additionally, we're excited about the success observed in computing our metrics, which required a lot of software debugging and research. Finally, we're proud that our product is lightweight, computationally fast, and visually appealing and intuitive for the user. What we learned We learned hands-on hardware skills in terms of assembling circuit boards, installing necessary drivers and software development kits, and working with electrodes and wires. Software skills learned included Python, which entailed new libraries like MNE and SciPy that we were initially unaware of. Additionally, we learned more about Alzheimer's disease, other studies, and the ideas behind our proposed metrics.
Finally, the very important skills of researching scientific literature, properly analyzing sources, reviewing procedures methodically, and following proper data collection protocols were also learned. What's next for Syne - Alzheimer's Detection Assistant Future features for Syne include privacy & security features, EHR integrations, EEG training modules for HCPs, and ML-driven data analysis, which would generate even more meaningful insights, especially as more and more data is gathered and processed. Our market strategy for Syne would be to first focus on partnerships with small-scale, AD-focused providers in the US, where our platform could be used to routinely monitor aging patients for cognitive impairment, which will improve patient outcomes and allow us to improve our data analysis methods. Once our model and techniques have been successfully developed to screen for cognitive impairment with extremely high reliability, we could scale across the US, being more readily adopted by traditional providers, before attempting to enter international markets, which abide by a myriad of rules and regulations. Built With mne numpy openbci python scipy
Syne - Alzheimer's Detection Assistant
Screening platform for cognitive impairment using EEG & MMSE testing to inform Alzheimer's diagnostic decisions
['Surya Pandiaraju', 'Marwan A Rahman', 'William Kim', 'Srikar Saradhi', 'Andrew Situ']
['Aging In Place Track']
['mne', 'numpy', 'openbci', 'python', 'scipy']
4
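The spectral metrics the Syne write-up mentions (total spectral power and average theta power computed from the frequency-domain EEG) can be illustrated with a naive DFT over a synthetic signal. This is a stdlib-only sketch of the concept, not the team's MNE/SciPy pipeline, which would use proper Butterworth filtering and windowed PSD estimation:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum DFT power over frequency bins in [f_lo, f_hi) Hz.

    Naive O(n^2) DFT, fine for a short demo; real pipelines would use
    an FFT plus Welch averaging for a stable power estimate.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2 / n
    return power

fs = 128                                  # sampling rate in Hz
t = [i / fs for i in range(fs * 2)]       # 2 seconds of samples
sig = [math.sin(2 * math.pi * 6 * x) for x in t]  # synthetic 6 Hz oscillation

theta = band_power(sig, fs, 4, 8)         # theta band: 4-8 Hz
alpha = band_power(sig, fs, 8, 13)        # alpha band: 8-13 Hz
```

A 6 Hz oscillation lands squarely in the theta band, so theta power dominates alpha power here; the same band-power metric, averaged over channels, is what a theta-power screening statistic would be built on.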
10,436
https://devpost.com/software/picpill
Sign-In CatalogActivity PicPill Inspiration Our ideas stemmed from our familial roots. Each member had grandparents who immigrated to the United States and had a difficult time learning the English language. They all encountered medical issues at some point during their time here and struggled to manage their medication for one reason or another. They did not know how to read the instructions on their pill bottles, nor did they understand the complicated medical terms used by doctors. As they aged, many of them ended up forgetting to take their medication and then compensating by taking more, or choosing to ignore the instructions simply because they couldn’t comprehend them. This contributed to the deterioration of their health, putting them back where they started: in the hospital. What It Does This app was created to scan a prescription and then parse the image to find strings containing the medication name, dosage, and instructions. Once the scan is complete, the user will be prompted to double-check the information to make sure it all matches. Following confirmation, ideally, the app allows the user to create a translation of the instructions into their native language. A pictogram then illustrates how to take their medication. How We Built It This native Android application was built in Android Studio with the help of Google Cloud and Firebase. We used the native Android feature of intents to retrieve images from the user’s Gallery. We then attempted to upload images to Google Cloud’s OCR in order to parse the medication name, dosage, and instructions and store them in Firestore. We then built a custom RecyclerView class to display data fetched from Firebase to the user. Challenges Most of our team comes from a biology background and lacked coding experience. It was challenging to learn new coding languages in such a short time frame.
In order to incorporate all the elements we wanted in our app, we were tasked with learning how to use GitHub, Android Studio, and Google Cloud all within 24 hours. So, it was difficult trying to understand many unfamiliar programming terms and implement them successfully. For the programming, the biggest difficulty was implementing the Firebase and Cloud functionalities. Accomplishments that we're proud of With only one experienced coder on our team, we are proud of the initiative others took to learn how to code and our willingness to go beyond our comfort zones. Given that we started as strangers, we are proud of how easily we collaborated and communicated with each other. When one of us ran into an obstacle, another teammate provided a quick rebound. In such a short time, our team was able to create a comfortable and productive environment. We were also able to successfully produce a GUI that included pictograms and create a functioning app with many prospective applications. During this time, we were able to discover a multitude of different app-building software and the many layers they comprise. We also learned about the vast collections of usable APIs that are available to the general public and many beginning developers. Through Android Studio and GitHub, we were exposed to new programming formats and coding using a more collaborative approach. For most of us, this was our first time implementing APIs and using Google Cloud. As a result, we gained a more fundamental picture of coding as a universal language that can be applied to telemedicine. What's Next for PicPill PicPill has immense potential to grow with time and the implementation of advanced smart features. As an app that focuses on making lives easier, it can be predicted that PicPill will diffuse beyond its intended audience and toward daily use by the general public.
With the addition of a QR code feature, stable translation method, website domain, and iOS compatibility, PicPill will be well on its way to becoming a reliable source for prescription translation. By using a simplified and streamlined system, users will become more comfortable and knowledgeable about their prescriptions. Built With android-studio firebase google-translate java Try it out github.com
PicPill
No one should have to worry about being able to understand the right way to take their prescription. PicPill will read the bottle with you, and provide language translations and simple illustrations.
['Anisha Beladia', 'Lucas Sarantos', 'Nina Brooks', 'Josephine Johannes', 'maya .']
['Aging In Place Track']
['android-studio', 'firebase', 'google-translate', 'java']
5
10,436
https://devpost.com/software/ihurt
Inspiration Telemedicine is thriving in the pandemic; however, there are still difficulties and miscommunication between physicians and patients, especially due to the lack of physical examinations. i-Hurt will facilitate the process of evaluating pain simply through a web app and will support the deaf, people with hearing loss, and communities with limited digital or English proficiency. What it does For the patient, it's simple: follow visual instructions, answer simple questions, and your doctor will receive a summary of your pain experience as if a brief in-person physical examination had been conducted. How we built it Wrote detailed doctor and patient user journeys with mockups and a web application Tech Stack Used Used HTML, CSS, and JavaScript with the React framework to build the homepage and login/sign-up pages, along with some user cases with photo images Accomplishments that we are proud of The web app and the video - it took us a lot of time! Also time zones. What we learned Effective teamwork skills, being trustworthy, being patient with teammates and technologies, and the issues physicians run into during telemedicine appointments. What's next for iHurt Plans to translate this universally into English and different sign languages from around the world Built With css domain.com google-cloud-apis html javascript
i-Hurt
Are you in pain? Is the pandemic still there? You need i-Hurt! A web app to evaluate your pain. Just follow visual instructions, answer easy questions and your doctor will get a summary of your pain.
['Faisal Al Munajjed', 'Dharmawan Santosa', 'Azeezah Muhammad', 'Sabina Sarinzhipova']
['Patient Care During a Pandemic Track']
['css', 'domain.com', 'google-cloud-apis', 'html', 'javascript']
6
10,436
https://devpost.com/software/umass-1
Inspiration In 2020, innovation is one of the few things that are saving lives and returning us to a new normal. We contemplated how we could contribute to these innovations and believe that we have come up with a reasonable and effective product. The challenges that the world is facing amid this pandemic have inspired us to help create solutions that can keep everybody safer during these times. What it does This device is a simple, cheap, and intuitive invention that will allow people to improve their personal health using very basic concepts. A moisture sensor on the device can be placed within the filters or pockets of masks of virtually all types, and will inform the user when the moisture in the mask has reached an unsafe level. The device uses Bluetooth connectivity to a personal smartphone to relay information such as weather, air quality, and location to adjust the moisture level needed to set the device off. Once the threshold is reached, the user is informed to change or replace their mask or filter, preventing accidental overuse of a now-unsafe mask. How we built it We used an Arduino and simple sensors that allow for communication between the sensors and the app involved in tracking moisture levels. Not to mention, we also used one of the most common objects in all of our households: a mask! Challenges We Faced Implementing the code for the app. We were able to get the app to work, the Arduino to work, and the lights to work individually, but when we put it all together we noticed that the light didn't change at times, or the humidity sensor was not communicating with the app. We ended up being able to make our code more efficient and troubleshoot each problem by working through it step by step. What's next for UMass 1 We are striving to complete this project fully and pitch the idea to hopefully make a difference in the world :) Built With arduino c++ expo.io mask react-native sensors typescript Try it out github.com
MASKerAID
A mask with an integrated sensor that detects moisture and particles in the filter to determine and alert the user of mask usability and area safety.
['Dasani Prideaux', 'Max McMullan', 'Bryce Parkman', 'Rachel Fainkichen']
['Patient Care During a Pandemic Track']
['arduino', 'c++', 'expo.io', 'mask', 'react-native', 'sensors', 'typescript']
7
10,436
https://devpost.com/software/telesafe
Inspiration Living through the COVID-19 pandemic, we've all gone through the minor annoyances associated with transitioning our everyday lives into an online format. However, there are certain experiences which translate extremely poorly into the virtual space. One of our team members volunteers at a Philadelphia clinic to help patients there gain access to social services, including therapy and support group sessions for recovering victims of alcoholism, drug abuse, and mental health disorders. Working with these patients in the past few months, he's heard firsthand how these services have severely deteriorated in quality after their attempts to transition online. Events such as Zoom-bombings have led to concerns over privacy, and video conference calls simply cannot offer the same level of intimacy as in-person sessions. This means that these individuals are cut off from one of the few safe spaces they had to speak about their experiences. Combined with the emotional and financial stress being felt worldwide due to the pandemic, it's no surprise that both mental illness-related suicides and mortality related to drug overdoses have significantly increased over the past few months. We created Telesafe as a way to bring back these safe spaces to these individuals, enabling them to reconnect with their network, which is a vital part of their treatment and recovery. With our domain telesafe.space, we want to provide a safe space for those in quarantine to continue getting the support they need and promote mental wellness. What it does Telesafe is a closed-network medium to connect with other members of an individual's support group. Members will be invited to join a given group by the group facilitator, and any identifying information remains hidden from all other members unless both the individual and the group facilitator wish to make that information available to each of the members.
Members will be able to chat, share updates, share photos, and video-call under the purview of the facilitator, and the facilitator can also place members into smaller groups to create a more intimate experience. Of particular interest to facilitators, especially clinicians, is the use of the Google Cloud Natural Language API to analyze each participant's general sentiment and mood over time, as determined from the language of their posts. The mood level of each participant is displayed to the facilitator in a simple, easy-to-understand manner in a dashboard that also allows the facilitator to keep track of and schedule appointments. How we built it Given the complexity of the features required for such a solution (e.g. video conferencing), our main goal given the time constraints of MedHacks was to provide a visualization of what Telesafe would look like and a functional implementation of the Google Cloud APIs. Website mock-ups were designed in Figma, a popular UI design tool, after considering the various features and information that our research suggested would benefit both members and facilitators. We were also able to create a working site that demonstrated the usage of sentiment analysis in the facilitator's dashboard. Stretch goals for this project include creating a functional site to better demonstrate the experience of Telesafe's users, including a full dashboard and profile with metrics for facilitators and clinicians to understand their patients at a glance. Regarding the implementation of the sentiment analysis, we are utilizing Flask to connect the Python script, which passes the relevant text data to the Google Cloud Natural Language API via an HTTP request and retrieves the asynchronous response (via a JS promise), with the JavaScript logic in our site to display the correct visual representation of each patient’s sentiment in the doctor dashboard.
This visual representation was implemented using a color scale with red, yellow, and green to convey patient mental welfare at a glance. Challenges we ran into Given that none of us had extensive experience working on web development, many of the technologies utilized in this project represented our first time using them. As a result, we inevitably faced several challenges getting our project into its current state. We had some trouble setting up API access to Google Cloud, as the tutorial we followed unfortunately was out of date, but we managed to pull through to get an end-to-end implementation of the sentiment analysis. Another issue that we faced is that, since we are utilizing Flask, we need to deploy our web app to a service like Google Cloud App Engine, AWS Elastic Beanstalk, or Heroku. We experienced some difficulties in deployment, and unfortunately we were not able to deploy the project in time. However, we believe that our demo will successfully demonstrate the plan for the project and address many concerns regarding its feasibility. Accomplishments that we're proud of While we were unable to fully flesh out a working implementation of the website given the time constraints of the hackathon, we believe that we succeeded in developing a solid plan and feature set for Telesafe that addresses many of the problems we identified with current solutions, as well as several of the concerns associated with a solution like ours. We were able to further cement this through detailed mock-ups of the user interface of Telesafe that reflect the careful design choices that resulted from our research into support groups and medical privacy. Finally, we are proud to have demonstrated a use case for Google Cloud Natural Language in our project by using sentiment analysis as a way for doctors to easily glean information about each of their patients in a simple manner.
What we learned From the technical perspective, we learned a great deal about web development as a whole, in addition to gaining experience with several technologies we had never worked with before, such as Google Cloud. But more importantly, we learned a great deal about the medical field, particularly regarding the privacy of patient information and support groups in general. As a platform for support groups, Telesafe must ensure that patient information remains secure and do its best to keep data (posts, video chats, etc.) within the site for the sake of patient privacy and trust. What's next for Telesafe Moving forward, we'd like to finalize all the various features of Telesafe, such as the secure video conferencing. Additionally, we'd like to test Telesafe with a focus group of patients and doctors to receive feedback on how we can improve the user experience. We'd also like further research to be done on using the data from the Google Cloud Natural Language API for diagnoses, as we believe this holds a lot of potential. Built With css figma google-cloud-healthcare-api google-cloud-natural-language-api html javascript python Try it out telesafe.space github.com
Telesafe
Telesafe is a secure and reliable platform for support group members to reconnect with their networks when in-person meetings aren't possible. These groups are vital for the health of their members.
['Nikhil Avadhani', 'Justin Mickus', 'Matthew Hallac', 'Allen Liu']
['Best Domain Registered with Domain.com', 'Patient Care During a Pandemic Track']
['css', 'figma', 'google-cloud-healthcare-api', 'google-cloud-natural-language-api', 'html', 'javascript', 'python']
8
10,436
https://devpost.com/software/senior
Inspiration The ageing boomer population is driving the need for change in our society, and it is our job as the leaders of the next generation to support them. Therefore, our team wanted to create a platform that allows one to age with minimal setbacks while ensuring and nurturing an individual’s resilience and resources. We believe that ageing with resilience starts with a healthy lifestyle. We wanted to tackle current health dilemmas among the seniors of our society, including dementia, mental health, the need for accessibility, and improvement of functionality. What it does Using the Google Maps API and Google Vision API, this app aims to improve the mental health of seniors by directing them to safe trails and the nearest senior centers near their location, and by allowing them to engage in therapeutic activities such as birdwatching. This app is designed to assist seniors during the COVID-19 pandemic by providing them with more information and details about the virus, such as the safety measures to take and its symptoms. Through the use of QR codes, this web app also provides a daily meal plan with recipes for healthy alternatives to everyday delicious food, along with video tutorials. Through a Nutritional Fitness API, users are also able to track their calories, as well as alternative meal options. Lastly, our app allows users to keep track of their medical history and medications, with resources such as a medicine cabinet, lab results, previous appointments, and alarms for their medication. How I built it My team and I used GitHub as the main platform in creating our WebApp. We used a combination of HTML, CSS, JSON, and JavaScript to create our final result. With these techniques, we were able to format the website with a login, register portal, COVID page, and multiple tabs to help users navigate their daily lives.
We included multiple APIs, such as the Google Maps, Google Directions, Vision, Fitness, Reminder, and Calendar APIs. We also used EchoAR to create the various 3D images accessed through QR codes on our site, and the UserWay program, which provided many of the accessibility functions used in our WebApp. Challenges I ran into Some challenges we ran into were the implementation of Google APIs into our app, such as the Google Maps or Vision API. We also ran into issues when creating a domain for our web app and had to troubleshoot. Other challenges included creating a creative and engaging layout for our web app. Accomplishments that I'm proud of We are very proud of the end result of our project. Our web app contains many benefits that everyone can use, for example, a healthy food plan, COVID updates, and personalized reminders and assistance. We are most proud of how we incorporated different programs, such as APIs and QR codes, into one web app to create a powerful and easily accessible resource for all. We were all able to work collectively on this project at the same time, and we believe that our web app is beneficial. What I learned Previously we had created web apps using other platforms such as Glitch. However, this time we learned to use GitHub to create a professional web app. We were also able to successfully implement APIs into our code. Overall, we were able to improve our programming skills and learn from others by attending workshops and making connections. What's next for Senior+ Senior+ is a web application that is always expanding and updating with new features. In the future, we can improve the formatting of the web app to make it more visually appealing. Another likely addition would be to add different languages, as Canada is a multicultural country and this addition will only extend our help to those who need it. We can also add more features such as step count, sleep hours, etc.
Built With accessibilitywidgetapi api calendarapi css echoar google-directions html html5 javascript jekyll json maps pictureanalysis reminderapi Try it out farwamubasher.github.io github.com
Senior+
Want to stay young forever? Senior+ is the way for aging in place.
['Farwa Mubasher', 'Marium Farooqi', 'Arwa Shamsaldin', 'Laiba Anwar', 'arham2k Sheikh']
['Best Use of EchoAR']
['accessibilitywidgetapi', 'api', 'calendarapi', 'css', 'echoar', 'google-directions', 'html', 'html5', 'javascript', 'jekyll', 'json', 'maps', 'pictureanalysis', 'reminderapi']
9
10,436
https://devpost.com/software/kwann
Inspiration As COVID-19 worsens the opioid epidemic in Canada, one of the key problems is that there still aren’t ways to get reliable real-time data on opioid overdoses. Existing tools only report monthly, quarterly or yearly, and the data that is collected in real time is stuck in physical formats that are inaccessible to decision makers. What it does Our system is an end-to-end overdose recording system, where paramedics responding to overdose scenarios record critical data which is aggregated and queried by healthcare providers, from physicians to community health organizations, to better shape their opioid response plans. How we built it We used Flutter for a platform-agnostic mobile app and Firebase to host a REST-like API (leveraging Cloud Functions and Firestore), with the Google Maps and Vision APIs. Challenges we encountered Learning new technologies! We got to spend some late nights working out bugs, but figured most of them out by this morning :-) Accomplishments that we're proud of Tying together a lot of new technologies, and coming together as a team to put out a fairly robust solution addressing a tangible problem space. What we learned Ask 'so what' and don't spend toooo long on the api ;) What's next for kwann Our team is pursuing a final-year design project at the University of Waterloo focused on the opioid crisis - we hope to use the learnings from this hackathon to pursue this work! Built With firebase flutter mapsapi plotly visionapi Try it out github.com
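The aggregation step described above, turning individual paramedic incident records into region-level counts that providers can query, might look like the following sketch. The record fields (`region`, `naloxone_given`) are our assumptions for illustration, not kwann's actual Firestore schema:

```python
from collections import Counter

def overdoses_by_region(records):
    """Count overdose incident records per region for provider dashboards.

    Each record is assumed to be a dict with a 'region' key; this field
    name is hypothetical, not taken from the project's schema.
    """
    return Counter(r["region"] for r in records)

# Three hypothetical incident records submitted by paramedics
records = [
    {"region": "Waterloo", "naloxone_given": True},
    {"region": "Kitchener", "naloxone_given": False},
    {"region": "Waterloo", "naloxone_given": True},
]
counts = overdoses_by_region(records)
assert counts["Waterloo"] == 2
assert counts["Kitchener"] == 1
```

In the real system this aggregation would presumably run server-side (e.g. in a Cloud Function over Firestore documents) so providers query pre-aggregated, near-real-time numbers rather than raw records.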
kwann
Enabling healthcare providers to improve opioid response plans through aggregated First Responder data.
['Neil Brubacher', 'Arie Field', 'Kevin Chan', 'Wasiq Mohammad', 'Nina Phan']
['Best Use of Google Cloud']
['firebase', 'flutter', 'mapsapi', 'plotly', 'visionapi']
10
10,436
https://devpost.com/software/panpal-9nbzmi
Inspiration During the COVID-19 pandemic, one of our team members enrolled in a mental health training course in order to learn how to be an advocate and help break the stigma against mental illness. Taking the course opened doors into understanding how many people during the pandemic, especially those who are older or have underlying mental and physical conditions, are stressed, anxious, lonely and scared about their health. This raised the question: how do people who tested positive for COVID-19 (or any other disease in a different pandemic) cope with their diagnosis while still adhering to and receiving quality treatment for other underlying conditions? In order to relieve these tensions, team BuddyUp created an app to help COVID-19 positive patients with underlying conditions navigate this time of uncertainty by giving them access to support groups and helping them adhere to their care. This in turn reduces their fear and increases their confidence and emotional morale that they can fight through their illnesses. What it does PanPal (short for PandemicPal and adapted from the word “penpal”) allows COVID-19 positive patients to adhere to their care by connecting with other infected patients with similar underlying conditions. These connections take the form of support groups, which are automatically determined by a machine learning algorithm deployed on Google Cloud AI Engine. Patients can message and have scheduled, facilitated calls with other members of their support group to discuss their mental and physical well-being and help motivate each other during this daunting time in their lives. A facilitator will also be present during these calls to help moderate severe stresses by providing mental health morale. These individuals can be therapists, psychologists, or nonprofit or private mental health advocates who partner with the hospital and are licensed to aid with these types of issues.
At the end of the call, patients receive a feedback survey with questions on their current health conditions, their emotional well-being, and whether or not the meeting was beneficial to them. Physicians will look over these responses to check on their patients and see if there need to be any tweaks to their treatment plans. In addition to community building through shared emotional experiences, patients will also have a tool that helps them adhere to their medication through a gamification-styled method. Every day, patients will check a box indicating whether or not they have been taking their prescriptions. For every day they persistently take their medications, they will receive raffle tickets that they can use to win a prize! You can take a look at our two relevant GitHub projects here: Unsupervised K-Means and Spectral Clustering ML Algorithm Functional Web Demo You can take a look at our underlying condition list and ranking system here . This is referred to as "Heuristics - Ranking Conditions" in the clustering algorithm's documentation. How we built it The clustering algorithm was built using the Synthea COVID-19 Specialized Dataset . We used standard data-encoding techniques with a special heuristic to rank conditions and their risks for COVID complications, so that when grouped, patients can have access to highly relevant information. We decided to incorporate other features like geo-location, age, gender, medical history, vitals, and other medical observations to build the features for the clustering algorithm. The implementation was assessed using both K-Means and Spectral Clustering from Python's scikit-learn. For the dataset and features used, Spectral Clustering gave the best groups in terms of relevance and similarity between the members. We deployed the model to Google Cloud to use the AI Engine to do the continuous groupings as new patients opt in to PanPal.
The front-end, including the gamification of patients taking prescriptions daily and the log of weekly group meetings, was built using ReactJS, HTML, and CSS. The group chat for patients was made using NodeJS, Express, Socket.io, HTML/CSS and JavaScript. Adobe XD was used to design and prototype the mobile app for patients and the website portal for physicians. Images used in the prototype are free through Adobe plug-ins (UI Faces, Icons 4 Design, and Stock) as well as from the Apple interface [1,2]. Challenges we ran into Data encoding and feature reduction were the main challenges with the implementation of the clustering algorithm. The dataset had a lot of categorical data, which created issues when encoding with one-hot encoders due to the vast number of unique categories. Collaboration between the healthcare experts and CS experts within the team helped us come up with the heuristic-based approach, which vastly reduced the number of features while increasing our accuracy significantly. Accomplishments that we're proud of Technically speaking, we are proud that we've been able to tackle so many technical challenges in such a limited amount of time. The person that created the ML clustering algorithm had very limited prior experience implementing ML and learned it within the course of half a day, which is something we're all very impressed by. We also managed to deploy the model to Google Cloud to make future predictions on new data. Team BuddyUp is proud that we were able to bring in a diverse set of individuals with different mindsets and skill sets, ranging from machine learning/data science, UX experience, front-end programming, and ethical dilemma analysis to biomedical research experience, in order to help a special niche of patients have better care from an emotional point of view. We all worked and communicated with each other with great ease and would always build on each other's innovative ideas. It was a blast to work in this team!
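The heuristic-based encoding described above, collapsing a wide one-hot vector of unique conditions into a small numeric risk rank, could be sketched like this. The condition ranks below are placeholders for illustration; the project's real ranking table is its "Heuristics - Ranking Conditions" document:

```python
# Hypothetical risk ranks for COVID complications; the real table is the
# project's "Heuristics - Ranking Conditions" document, not these values.
CONDITION_RISK = {
    "copd": 5,
    "diabetes": 4,
    "hypertension": 3,
    "asthma": 2,
}

def encode_patient(age, conditions):
    """Encode a patient as a small numeric feature vector.

    Instead of one-hot encoding every unique condition (which explodes
    the feature count), conditions collapse to their highest risk rank;
    unlisted conditions get a default low rank of 1.
    """
    risk = max((CONDITION_RISK.get(c, 1) for c in conditions), default=0)
    return [age, risk]

assert encode_patient(67, ["diabetes", "asthma"]) == [67, 4]
assert encode_patient(30, []) == [30, 0]
```

Vectors like these (extended with geo-location, gender, vitals, etc.) would then be fed to scikit-learn's `SpectralClustering` to form the support groups.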
What we learned We had a truly diverse team. In that regard, we all learned from each other bits and pieces about the principles of clinical science, bioethics and patient care, machine learning, UX design, and frontend programming. Not only was there a transfer of knowledge and skills between skilled developers and healthcare pre-professionals, but we were also truly able to understand the nuances of marrying tech and healthcare to create something fresh, from separating patient data to create clusters using algorithms, to learning about health care norms through Zoom workshops set up by mentors and panelists. What's next for PanPal Our mission is to extend this tool not only to the current pandemic but to the other ones that lie in the future. The confusion and anxiety that a complex clinical situation brings is intense, and support groups are hard to come by for rare combinations of diseases. This app gives the ability to arrange these support groups within a large hospital network almost effortlessly. The concept of using machine learning and the heuristic-based approach is hugely extensible. Several possible use cases are: Automatic creation of mental health support groups based around race/ethnicity Clustering transplant patients based on the closest possible organ match (as determined by many factors) Sorting patients that have a higher risk of developing complications from hospital-acquired infections into adequately equipped facilities Keeping all that in mind, the issue boils down to this: we all need a shoulder to lean on. Having support groups that are extremely customized, with people that suffer in the exact same way, allows for greater understanding, emotional connection, and reassurance. At the most basic level, this function helps people explore different remedies or solutions that help others like them in their treatment journey, and rely on those that have walked the same treatment path for emotional stability during an extremely uncertain time.
Providing this emotional support is the next step in ensuring the highest quality of care. Image Citations IRStobe/Adobe Stock (2020). Doctor talking about organ transplantation. Artificial human organ, human langs. Flask with artificial lungs. The latest bioengineering technologies, health and medicine concept. Retrieved from: https://stock.adobe.com/ca/291372245?as_content=api&as_campaign=qooqee&tduid=f9a0f65dd6ef6620413d9349d7b288ba&as_channel=affiliate&as_campclass=redirect&as_source=arvato Apple Incorporated. (2020). Calendar. [mobile iOS 13.5.1]. Built With css express.js html javascript node.js python react scikit-learn socket.io Try it out uahmed23.github.io github.com
PanPal
ML generated support groups tailored to the patient to help with adherence and emotional support
['Sanjeeth Rajaram', 'Umer Ahmed', 'Shardool Patel', 'Srimaye Samudrala', 'Christina Lukasko']
['Best Use of Google Cloud']
['css', 'express.js', 'html', 'javascript', 'node.js', 'python', 'react', 'scikit-learn', 'socket.io']
11
10,436
https://devpost.com/software/vision-checker
Landing How It Works Try It Out! Eye Chart Inspiration Given the current circumstances surrounding COVID-19, we were inspired to provide people with the quality of care that they had become accustomed to before the pandemic. What it does Our website has an eye chart embedded into it to test patients on their eyesight. Patients can take the exam as many times as they would like, and their scores will be calculated by interpreting audio with Google's Speech-to-Text API. This allows patients to keep track of their eye health at home, letting people be more involved in their own healthcare and facilitating greater communication between doctors and their patients. How we built it For the front-end of our website, we used HTML5, CSS, and JavaScript to create a simple, yet thorough website with instructions on how to use it to perform the eye exam. We used Python to interface with the Google Cloud API and storage system. Flask was used to connect our front-end with our back-end functions. In addition, we utilized Google's Speech-to-Text API to detect audio from the user and compare it with the actual letters on the eye chart provided on the website. A scoring function based on Snellen's All About Vision was used to provide an accurate score for users. Challenges we ran into A challenge we faced was our limited knowledge of Flask and Google Cloud, both of which we learned as we went along. Also, one aspect of our project involved alerting the user about when to move on to the next letter. This was quite challenging to implement because of the need to synchronize it with the audio recording. Accomplishments that we're proud of We are proud of both our back-end and front-end. We were glad to have learned a lot in the process! What we learned Through our project, we learned how to use Google Cloud and understand the various functionalities that are available to us. We learned how to use Flask and how to work towards creating middleware.
What's next for Vision Checker A future development we are considering is including randomized charts, rather than one standard eye chart, to ensure that patients do not simply memorize it. These charts may also potentially include charts in other languages to cater to those of different ethnic and cultural backgrounds. Lastly, we hope to implement user authentication so that patients may be able to track and monitor the change in their eyesight over time. Built With bootstrap css flask google-cloud google-web-speech-api html javascript python Try it out visionchecker.herokuapp.com github.com
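The comparison step described under "How we built it", matching the Speech-to-Text transcript against the letters actually on the chart, could be sketched as below. This is a simplified stand-in for the project's Snellen-based scoring function, with our own line structure and exact-match rule:

```python
def score_line(expected, spoken):
    """Fraction of one chart line's letters the patient read correctly.

    `expected` is the line's letters; `spoken` is the Speech-to-Text
    transcript, normalized to uppercase with spaces stripped, then
    compared position by position.
    """
    spoken = spoken.upper().replace(" ", "")
    hits = sum(1 for e, s in zip(expected, spoken) if e == s)
    return hits / len(expected)

# Snellen-style chart line "E F P T O Z" read back with one error
assert score_line("EFPTOZ", "e f p t o x") == 5 / 6
assert score_line("EFPTOZ", "EFPTOZ") == 1.0
```

A per-line fraction like this could then feed the overall eyesight score the site reports to the user.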
Vision Checker
During the pandemic, routine visits to medical professionals have become difficult, particularly in-person eye exams. Our website aims to solve this issue by carrying out virtual eye exams.
['Isha Sharma', 'Abhay Gopinathan', 'Stephen Yang']
['Google Cloud COVID-19 Hackathon Fund']
['bootstrap', 'css', 'flask', 'google-cloud', 'google-web-speech-api', 'html', 'javascript', 'python']
12
10,436
https://devpost.com/software/neutralize-tx8r34
Welcome Screen Login Manage Symptoms Symptoms, travel and exposures added Booking saved in User Hospital Booking Hospital Home Screen Inspiration I was mainly inspired by the COVID-19 pandemic and really wanted to do something about it, even a small contribution. I have a few friends living in the United States who were unaware they had been exposed to the virus and were still at home; one of them fell sick, and that's when they all tested positive, as the virus doesn't necessarily show symptoms. This app is dedicated to them. With Neutralize, you can find your nearest COVID testing center and book an appointment, and hospitals will send you time slots for attending. This way, you don't have to stand in queues and risk exposing yourself to the virus. You can also mention any symptoms, probable exposures, and even travel, so that hospitals can send you immediate assistance if required. What it does The app allows users to select various COVID centers on the map, using the Google Maps SDK, and users can also visit any desired COVID center's website to learn about them and book if needed. In the booking section, users can enter their name, age, and date for booking. The booking, on submission, will be verified by the hospital, and they will provide you a slot accordingly. If the hospital is out of slots on the day you booked, then you can delete your current booking and choose a new date or hospital as per your preference. Users can also add symptoms, probable exposure, and travel within 14 days if they wish, so that hospitals can view this and provide assistance immediately if needed. Users can also delete symptoms if they wish, or update new ones as well. How I built it The app is made with Firebase for the back-end and Flutter for the front-end. The Google Maps SDK has been used for the map feature and Google Sign-in for secure, easy, hassle-free login.
With Google Maps, I can change the map type to Satellite or Normal according to user preference, and set markers on the map indicating the locations of COVID centers. With Firebase, all data adding, deleting, and updating is done quickly, and Firebase and Google Sign-in together provide the current user's information, such as profile pic, email, and name. Google Sign-in alone is used for signing into the app, so the user doesn't have to first sign up and then sign in; it all happens in one single tap. Challenges I ran into A few app crashes. Using Firebase with Flutter sounds easy, but I had tons of errors, and this might have slowed me down a lot; I had to spend 6-8 hours figuring out and fixing Firebase issues, but I'm happy and proud that I fixed all the errors on my own and got to learn a lot about it. Accomplishments that I'm proud of Fixing errors with Firebase is one of the biggest accomplishments I'm proud of. Using the Google Maps SDK for the first time was sort of difficult, but in the end I came out with something pretty good. What I learned Time management and bug fixing. These two play a major role in my hack as they go side by side, and I am really happy to have learned more about Firebase reads and writes and Cloud Firestore. What's next for Neutralize Getting in touch with hospitals, so they can manage the database and send out emails to the booked user, confirming their booking along with time slots. Sign in with Apple will be added for iOS devices; even though Sign-in with Google also works for iOS users, with Sign in with Apple, users can use their Face ID or Touch ID to log in to the app, making it very seamless on the iOS platform. Siri and Google Assistant integration will be added so that logged-in users can book hospitals near them with their voice. Built With dart firebase firestore flutter google-maps Try it out github.com
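The booking flow described above, where a hospital verifies a request and assigns a slot, or the user rebooks if the day is full, could be modeled like this. The field names and in-memory slot table are our assumptions standing in for the app's Firestore documents, not its actual schema:

```python
def assign_slot(bookings, hospital_slots, name, date):
    """Try to confirm a booking; return the slot dict or None if full.

    `hospital_slots` maps a date string to the number of open slots;
    this in-memory model stands in for the app's Firestore documents.
    """
    if hospital_slots.get(date, 0) <= 0:
        return None  # user should delete the booking and pick a new date
    hospital_slots[date] -= 1
    slot = {"name": name, "date": date, "confirmed": True}
    bookings.append(slot)
    return slot

bookings = []
slots = {"2020-08-10": 1}
assert assign_slot(bookings, slots, "Asha", "2020-08-10") is not None
assert assign_slot(bookings, slots, "Ben", "2020-08-10") is None  # day is full
```

In the real app the decrement and write would happen in a Firestore transaction so two users cannot claim the last slot at once.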
Neutralize
Visit your nearest COVID center by booking an appointment, so you don't have to stand in queues; this supports social distancing and reduces the risk of exposure to the virus.
[]
['Google Cloud COVID-19 Hackathon Fund']
['dart', 'firebase', 'firestore', 'flutter', 'google-maps']
13
10,436
https://devpost.com/software/covidcast-bp4eof
Screenshot of CovidCast in action Prototype of our offline version Inspiration CovidCast is like a weather forecast, but for the coronavirus outlook in your area. Due to the rapid spread of COVID-19, social interactions are to be kept to a minimum. Even doctor-patient interactions have been impacted. This poses a significant challenge for patients, as they might not know if their condition requires in-person care. As a result, patients might not receive the care they need, resulting in a worsening of their condition. At present, leaving one's home poses a certain risk, but this risk is not so readily quantified. Our product tries to solve both these problems, while also being accessible to those with limited or no internet access. What it does Our project focuses on three core ideas: First, our “Covid Check” feature simulates a typical doctor-patient interaction in order to help patients assess their condition and advise them - if needed - to seek medical attention. We hope this will improve patient compliance as well as keep patients safe! Secondly, our “Covid Risk” feature allows the user to assess their COVID-19 exposure risk before they even reach their destination! Our risk analysis algorithm takes into account the occupancy at their destination (popular times), the air quality index, and the number of active cases in the region. These parameters were chosen as current research suggests they play a significant role in the spread and severity of the disease. Risk levels are categorized as either very low, low, medium, high, or very high. We hope to provide users with an accurate reflection of the risk they take during this global pandemic. Thirdly, as we continued our research, we found that many people around the world, especially in developing countries, do not have access to reliable internet. We decided to make an alternative device that could collect the same information from the internet and then relay it to more rural areas over radio waves.
By using a Raspberry Pi Zero W and a radio module, we could create this device for under $25 USD. It was imperative that our project be as inclusive as possible. Along with the Raspberry Pi device, our product is compatible with Google Homes and is ready for use. How we built it Created a COVID-19 checklist based on current standards Developed an algorithm for the risk analysis using COVID-19 case data, air quality index, and destination occupancy (popular times) Used JavaScript to code the inputs for the risk analysis and generate a risk assessment. Used DialogFlow to integrate our Covid Check and Covid Risk features into Google Assistant Integrated Raspberry Pi technology with Google Assistant features. Challenges we ran into Like any project, we ran into some challenges. The first was that all of us had little to no experience using JavaScript. We also discovered that the Google Maps API does not have a method to retrieve the popular times of a location. To help, we talked to mentors and did our research to come up with a solution. Accomplishments that we're proud of We are really proud of our project - all 3 components - and how we managed to do this within the 36 hours of the hackathon. What we learned We learned so much during this whole hackathon! It took us several hours to learn how to write the code and get the right APIs for the risk analysis. We all had very limited experience with JavaScript and working with DialogFlow, so it was really cool understanding and learning how they work. It has been really great to learn about research through published papers, coding, and technology that we have not used before! What's next for CovidCast We have many ideas on how to improve CovidCast. For our risk analysis algorithm, more parameters could be added (ex. socio-economic factors) and machine learning could be employed. Added language support would help us better serve those who do not speak English.
In addition, it is ready to be added to a Google Home, and we can work on developing the offline interface as well. Built With airvisualapi dialogflow firebase google-cloud google-places javascript raspberry-pi
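The risk categorization described under "What it does", combining destination occupancy (popular times), air quality index, and active cases into one of five levels, might look like the sketch below. The weights, normalization caps, and cutoffs are purely illustrative assumptions, not CovidCast's actual algorithm:

```python
RISK_LEVELS = ["very low", "low", "medium", "high", "very high"]

def covid_risk(occupancy_pct, aqi, active_cases):
    """Combine three normalized inputs into one of five risk levels.

    All weights and normalization caps are assumptions for this sketch;
    CovidCast's real algorithm may weight inputs differently.
    """
    score = (
        0.4 * min(occupancy_pct / 100, 1.0)   # destination popular-times occupancy
        + 0.3 * min(aqi / 300, 1.0)           # air quality index
        + 0.3 * min(active_cases / 1000, 1.0) # active cases in the region
    )
    return RISK_LEVELS[min(int(score * 5), 4)]

assert covid_risk(10, 20, 5) == "very low"
assert covid_risk(100, 300, 1000) == "very high"
```

The project describes its implementation in JavaScript behind a Dialogflow intent; the same weighted-sum-into-buckets shape would apply there.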
CovidCast
CovidCast helps patients determine their need for in-person medical care and risk associated with leaving their home.
['Mehreen Ali', 'Esha Tulsian', 'Bhavika Kagathi', 'Portia Rayner', 'Alexander Scott']
['Google Cloud COVID-19 Hackathon Fund']
['airvisualapi', 'dialogflow', 'firebase', 'google-cloud', 'google-places', 'javascript', 'raspberry-pi']
14
10,436
https://devpost.com/software/patientside
Inspiration Over the course of the COVID-19 pandemic, we've had to let go of many of the things we had been accustomed to, from movies to restaurants and much more. Unfortunately, an unintended consequence of the shift in focus to the pandemic has been letting go of a crucial aspect of people's medical lives: other ailments not necessarily linked to COVID-19. After hearing countless stories of missed cancer screenings, rises in cardiac arrests, and much more, we decided we wanted to develop a tool to encourage people to seek treatment and guidance for their ailments, regardless of their relation to the ongoing pandemic. What it does PatientSide is a screening service that is triggered via voice command from a platform such as Google Assistant or Siri. Upon initiating the service, it will engage in conversation with the user to triage them based on the severity of their symptoms. Starting with a basic COVID-19 screening, it then triages the reported symptoms and assigns a priority based on their inherent severity and potential diagnoses. Depending on the severity, the patient may be prompted to seek further treatment at a later time or immediately. Regardless of the outcome, a transcript of the conversation is then sent to the patient's primary care physician for their records, which the provider can view from a provider-facing app platform. This service helps promote the disclosure of non-pandemic illness symptoms. It also helps promote social distancing by staggering hospital and clinic visits to avoid crowding patient waiting rooms. How we built it We used Flutter for creating the interface that clinicians would access and Google Firebase/Firestore for setting up the backend, storing JSON-formatted user/patient historical interactions with Google Assistant. We also used the Google Conversations Action Console to set up the test conversation structure with Google Assistant.
Challenges we ran into None of our team members had prior experience with the Google Cloud platform, but thankfully the workshops this year along with mentorship were a terrific aid in our development experience. What's next for PatientSide We hope that PatientSide can partner with various healthcare systems to refine our decision algorithm for our screening tool and provide real-time occupancy and foot-traffic information to further dilute waiting periods and promote social distancing. We believe that PatientSide can be the next step forward in promoting holistic care during a global pandemic so that no patient is left behind. Built With firebase flutter google google-cloud objective-c swift
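The triage step described above, assigning a priority from the reported symptoms' inherent severity, could be sketched like this. The symptom-to-severity table and three-tier output are placeholders for illustration, not PatientSide's clinical decision algorithm, and certainly not medical guidance:

```python
# Illustrative severity weights; NOT clinical guidance and not the
# project's actual decision algorithm.
SYMPTOM_SEVERITY = {
    "chest pain": 3,
    "shortness of breath": 3,
    "fever": 2,
    "cough": 1,
    "fatigue": 1,
}

def triage(symptoms):
    """Return 'immediate', 'soon', or 'routine' based on the worst symptom."""
    worst = max((SYMPTOM_SEVERITY.get(s, 1) for s in symptoms), default=0)
    if worst >= 3:
        return "immediate"   # prompt the patient to seek treatment now
    if worst == 2:
        return "soon"        # prompt the patient to schedule a visit
    return "routine"         # the transcript still goes to the physician

assert triage(["cough", "chest pain"]) == "immediate"
assert triage(["fever"]) == "soon"
assert triage([]) == "routine"
```

In the voice flow, the symptom strings would come from the Assistant conversation, and whichever tier is returned, the full transcript is forwarded to the provider-facing app as the description states.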
PatientSide
Improving medical care for non-pandemic related illnesses through voice-assistant conversation software as a screening tool for simple triage
['Andrew Massoud', 'Tomer Brezner', 'Aryan Ahmad', 'abhi jhanji', 'Faraz Ali']
['Google Cloud COVID-19 Hackathon Fund']
['firebase', 'flutter', 'google', 'google-cloud', 'objective-c', 'swift']
15
10,436
https://devpost.com/software/elderlylife-9s75xi
Elderlife Inspiration The different needs and resources of the elderly and the young. The need for support and a sense of community during the pandemic. Treasure hunts. AR technology. What it does Elderlife is an app designed to help the elderly aging in place have a better lifestyle. It uses the concept of “ordering a person” to help out with a task that is relatively harder for older adults to do on their own. We target young volunteers, such as high school students, to perform simple tasks that do not require professional skills, for instance going grocery shopping for the older adults, taking them on walks, taking out the trash for them, etc. Elderlife provides entertainment for the elderly and allows them to design their own “treasure hunt” game. The elderly get to list the tasks they need help with, and choose the rewards the volunteers get in return for completing those tasks. While this makes the system more interesting for older adults, it also benefits the volunteers. Elderlife provides volunteer opportunities for younger people like high school students. It allows volunteers to have fun while volunteering, and they can also gain volunteering hours. It would be a source of entertainment for both the young people and the elderly community. When you enter the app you will see two options, one for the elderly and one for volunteers, and a calendar on the bottom. Verified accounts labeled elderly will go to the Elderly tab to create tasks for volunteers to complete for them. Under the Elderly tab there will be options to create a new task, a location address, and a virtual prize for the volunteer. You will be able to clear tasks when a task is completed. Under the Volunteer tab, there is an option that brings you to the map where you can view the prizes with their locations. Once you select the prize you would like to receive, the task required for the prize will appear.
Once the task is completed, the prize can be collected. This platform is especially important during the pandemic. It is a source of entertainment for both youths and the elderly. The younger population can give back to society while still enjoying a treasure hunt game. The treasure hunt can be a source of stress relief for the elderly because they can take part by deciding the treasure they give. Even with people social distancing, it will work for many of the tasks: grocery drop-off, lawn-mowing, and other tasks that do not require direct contact. This is another way to create more connections between the elderly and the community. Virtual gaming can help the elderly feel more connected to the community. This feature could also save them a lot of money, especially in poorer communities that may not have the best healthcare or insurance plans. This is a way to support each other in the community, now and in the future, during the pandemic and while combating isolation. Though these volunteers may not entirely take over the position of a caregiver, Elderlife can definitely take some tasks off the shoulders of the elderly. How I built it Elderlife is designed to be an Android app (Android Studio) with features including GPS and AR (EchoAR, Unify, GitHub). Challenges I ran into Team members had to balance the hackathon with other important commitments. Team members with little knowledge of computer science had to learn how to use EchoAR and how to load it into an app. Accomplishments that I'm proud of Everyone has his or her own skills that are complementary to one another. We worked really hard as a team and cooperated well with each other even though our work styles were very different. We made friends and learned a lot from this hackathon. Our app not only addresses the wellbeing of the elderly, but also helps form community bonds and advocates for a new lifestyle for the aging population.
What I learned The problems and opportunities in healthcare and the technologies available to make it better. What's next for ElderlyLife We will keep improving this app, add new features, and make it available to the elderly. Built With android-studio echoar github java unify
Elderlife
Happy Aging!
['Xin Chen', 'Tzu-yu Cheng', 'Shaoyu Zhu', 'Shivani Surti', 'Johnny Y']
["People's Choice"]
['android-studio', 'echoar', 'github', 'java', 'unify']
16
10,436
https://devpost.com/software/medsearch-xrw2v6
Inspiration At the end of the day, everything we do is for the benefit of patients and ensuring that all patients have the ability to access quality care at low cost. What it does Our platform provides a searchable database that patients can use to find and compare the costs of medical procedures across many different hospitals in the state of California. How we built it There are two components to our platform: 1.) A web application mock-up of MEDSEARCH. This was built with Python and can eventually be turned into a fully working website that can be accessed by patients from any electronic device. 2.) A dashboard, which is a searchable database built using Tableau. Challenges we ran into As chargemaster lists are too detailed, fail to provide an indication of actual amounts paid, and don't group price information at the procedure level, we decided to clean and analyze California OSHPD average hospital cost data for our platform mock-up. Although we began with some previously aggregated data from OSHPD hospitals, our team still had to add information from almost 150 individual OSHPD hospital forms and add address information from 300 to create a data set of about 8000 rows with real average cost information. We also had to recategorize and group medical procedures so they would be easier to navigate. Accomplishments that we're proud of The issue of price transparency in healthcare is well known and long-standing. We are thrilled to have been able to work together as a team to create a product that could prove useful to patients in navigating the healthcare system and obtaining quality care at reduced cost. Although our dashboard and site are still in development, our use of real average cost data means that, upon completion, consumers will be able to use what we created to have more informed conversations with medical professionals and potentially save significant costs on essential services.
What we learned We learned quite a bit about hospital pricing and some of the factors that contribute to the pricing of hospital procedures. What's next for MEDSEARCH 1) associate diagnoses with each of the listed procedures 2) track insurance coverage across different hospitals 3) price adjustment depending on insurance 4) establish a standard format for procedure names and data collection 5) expand to states beyond California and across the US Built With python tableau Try it out docs.google.com
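The cost-comparison workflow the team describes (cleaned average-cost rows searched and compared across hospitals) can be sketched with pandas. The column names and figures below are invented for illustration; they are not actual OSHPD data or the project's real schema.

```python
import pandas as pd

# Toy rows standing in for the cleaned data set
# (one row per hospital x procedure with an average cost).
rows = pd.DataFrame({
    "hospital": ["Hospital A", "Hospital A", "Hospital B", "Hospital B"],
    "procedure": ["MRI", "X-Ray", "MRI", "X-Ray"],
    "avg_cost": [1200.0, 150.0, 900.0, 200.0],
})

# Pivot so each procedure is a row and each hospital a column,
# which is the side-by-side price comparison a patient would search for.
comparison = rows.pivot(index="procedure", columns="hospital", values="avg_cost")
print(comparison.loc["MRI", "Hospital B"])  # 900.0
```

A real dashboard would feed a table like `comparison` into the search UI rather than printing it.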
MEDSEARCH
To empower elderly patients to access quality care at the lowest costs
['James Tourkistas', 'Nicolas Cutrona Cutrona', 'Gattuoch Kuon', 'Victor Ekuta', 'Irene Pak']
[]
['python', 'tableau']
17
10,436
https://devpost.com/software/intellidiagnosis
Inspiration Vertigo is an extremely common condition among the elderly these days, and few people know how to diagnose and properly treat it. What it does We created a program where, by having users answer a set of questions, we can accurately determine whether they have BPPV (Benign Paroxysmal Positional Vertigo). We also provide pictures and videos in our program so that users can properly follow the instructions and be treated without having to go to the hospital. How we built it We designed a two-phase mobile app that can easily be downloaded from any app store (whether on iOS or Android). The app first takes the user to a questionnaire we use to evaluate the disease state and assess whether there is a need to proceed to the repositioning practice section. If the patient is highly suspected to have BPPV, our questionnaire redirects them to the next section of our app: a text/video-guided repositioning instruction. This repositioning process takes less than 10 minutes and can significantly reduce symptoms for BPPV patients after being done only once. We know that not all vertigo is caused by BPPV, and you might be wondering whether a misdiagnosis could lead non-BPPV patients to do this exercise: the answer is that there is no medically proven harm to non-BPPV patients with vertigo. In other words, even if one's vertigo is not caused by BPPV, doing this repositioning practice will neither relieve nor worsen their symptoms. Why not give it a try, then? Challenges we ran into To provide a reliable assessment, we wanted to make sure that our questioning process is professional and scientifically sound. However, due to the lack of research interest in this field, we had a hard time finding papers that detail this process. We found and employed the only paper that lists the actual questions used in this process. We consulted Dr.
Wu, an otolaryngologist at Peking Union Medical College Hospital whose specialty is vertigo-related diseases, to further confirm the validity of this research. We greatly appreciate her support during this time. Accomplishments that we're proud of None of us had prior experience with app development, but we tried our best to learn a brand-new programming language in a short amount of time to accomplish our goal. Built With xamarin xamarin.forms Try it out github.com docs.google.com
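The questionnaire-then-redirect flow described above could be sketched as a simple rule-based screen. The questions and threshold below are invented for demonstration only; they are NOT the validated instrument from the paper the team used, and nothing here is medical advice.

```python
# Hypothetical screening questions (illustrative only).
QUESTIONS = [
    "Is your dizziness a spinning sensation?",
    "Is it triggered by rolling over or lying down?",
    "Does each episode last less than one minute?",
]

def bppv_suspected(answers):
    """answers: one boolean per question; True = 'yes'.

    Returns True when enough answers point toward positional vertigo,
    in which case the app would redirect the user to the
    repositioning-instruction section.
    """
    score = sum(answers)
    return score >= 2  # assumed cutoff, not a clinical threshold

print(bppv_suspected([True, True, False]))  # True
```

In the real app, a validated question set and cutoff from the literature would replace these placeholders.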
BPPV Diagnosis
Diagnose BPPV on the go - right on your phone.
['Junxiang Wang', 'Yuchen Yang', 'Zihan Wang']
[]
['xamarin', 'xamarin.forms']
18
10,436
https://devpost.com/software/healthx
Inspiration There are many people in rural areas who can't afford medical treatment. Due to the pandemic, many rural people in India, Brazil, and the USA are not getting proper medical treatment. Community workers are not able to handle real-time monitoring of the overcrowded rural population. There are many common diseases in rural areas such as chronic kidney disease, coronary artery disease, and cataracts. These common diseases are preventable, but due to the lack of monitoring, many people are dying. There is a need for a technology platform to help fill this gap. What it does HealthX is a platform that helps monitor rural people in the USA, Brazil, and India via an app/web platform, with the help of community workers in those countries. How we built it We prepared the web platform and the UI/UX design of the app. For the web platform, we used HTML and CSS for the frontend and Firebase for the backend. We used Google Cloud for locating nearby hospitals, tracking ambulances, and the AI chatbot built with Dialogflow. Additionally, we used Firebase for user authentication, a realtime database, and information storage. We used GitHub to host our website publicly and for file management. Challenges we ran into There were many challenges we ran into, but that's what programming's all about. One challenge was that we had to go through various idea changes in order to support real-time monitoring for rural people. Another challenge was using Firebase, since it was our first time with it. Accomplishments that I'm proud of We are proud of so many things. We made the best use of this project in the allotted time. We successfully created the UI/UX design of the app and the web part. Additionally, we combined all of our skills to create a website that uses multiple frameworks, and we are proud of it. We love the UI/UX and we love the backend; it was our first time using these frameworks as well.
Finally, we are proud of the amount of work we pulled off. We would never have thought we could accomplish this much in such a small amount of time. What's next for HealthX Launch the platform in the next 2-3 months for ASHA, CHW, and CHA community workers in India, the USA, and Brazil. Expand HealthX to other rural areas. Built With css dialogflow firebase google-cloud google-maps html javascript Try it out mohinishteja.github.io rhea372863.invisionapp.com
HealthX
Empowering the health of many rural citizens
['Srikar Kusumanchi', 'Katherine Chae', 'Mohinish Teja', 'Rhea Shastri']
[]
['css', 'dialogflow', 'firebase', 'google-cloud', 'google-maps', 'html', 'javascript']
19
10,436
https://devpost.com/software/me-mo-the-smart-note-taking-device
Overview of "Me-mo": The Smart Note-Taking Device [Screenshots: "Me-mo" logo design ft. Dojo the Panda; "Me-mo" prototype design; Dojo the Panda wearing a "Me-mo" device!] Inspiration I was inspired to create "Me-mo" as a device for my abuelos and my dad. They are all important people in my life and I wanted to design something that would be useful in their everyday routines. As my grandparents and my dad have aged, I have noticed an increase in their forgetfulness and in their efforts to remember important things by jotting them down. My dad writes memos on the backs of mail envelopes, my grandpa writes notes on small squares of paper when he's on the phone, and my grandma likes to write down her favorite novelas and songs in notebooks. Although this is a temporary solution, they all run into the same issues of forgetting what, where, and when they wrote down these notes and memos. "Me-mo" serves to bridge the gap and help my abuelos, my dad, and other aging adults minimize forgetfulness without compromising their routine of note-taking. In addition to creating this device with my family in mind, my university work-study job further encouraged me to develop "Me-mo" to address the aging in place with resilience and resources track. I am a student worker at a non-profit organization that specializes in providing support and resources to family caregivers who care for aging adults suffering from brain-impairing conditions like Alzheimer's and dementia. Through my job I have learned more about the daily struggles and obstacles that caregivers and care recipients encounter while managing life with these conditions. With my personal experiences and a driving interest in helping older adults, I created "Me-mo" to one day assist our loved ones so they may age comfortably and with dignity without compromising their lifestyles! What I learned
MedHacks 2020 was my first hackathon and my first introduction to designing for and addressing a real-world problem: aging in place with resilience and resources. I learned more about myself over the course of the weekend as I was encouraged to think outside the box and develop my idea from the ground up. From this experience, I learned about encountering and addressing challenges with my idea through research and problem-solving. One thing I do hope to improve at my next hackathon is working in a team! As a rookie, working independently was a HUGE challenge because I am a sophomore biomedical engineering undergraduate and I do not have any coding or technical experience creating a program or even a mobile phone app. Overall, this was a wonderful experience and I am thankful to have been a part of MedHacks 2020 this year! How I developed my project I approached my project by focusing on the development end of my idea. I worked independently throughout this hackathon, and with no coding or technical background I took a lot of freedom and liberty in developing my device. I relied on online research, learning, and reading about other available devices to shape "Me-mo" from my initial note-taking idea into what it is now. Built With google
Me-mo: "The Smart Note-Taking Device"
An "aging-friendly" device that helps you never forget the last note you wrote down!
['Joelle De Jesus']
[]
['google']
20
10,436
https://devpost.com/software/data-analysis-system-for-senior-s-health-dassh
Inspiration A population health consultant shared two projects she was heading at a local hospital. One used EHR records to identify patients who were at potential risk for stroke if they had atrial fibrillation, while the other used them to predict 12-month risk of emergency admission. The hospital and primary care networks were able to effectively target their resources to patients who needed them, even when patients did not present to healthcare establishments. However, for each of these projects the consultant had to spend a lot of time going back and forth with the data scientists/computer scientists to specify what she wanted and needed. So I thought it would be great if we could find out from doctors how they approach these issues and then implement a common framework that would enable all these analyses once it is plugged into an EHR database. What it does Generates exploratory plots from various parts of EHR records based on a population obtained by filtering on parameters like age, gender, race, conditions, etc. How I built it On my computer. Challenges I ran into Understanding the way different libraries and frameworks functioned. Accomplishments that I'm proud of None. What I learned How to use Flask and App Engine. What's next for Data Analysis System for Senior's Health (DASSH) Implement filtering on composite features like BMI and clinical risk scores like CHA2DS2-VASc. Implement selection of segments of the subpopulation from graph interactions. Implement the ability to pull up individual patient details. Implement additional exploratory plots for areas like socioeconomic determinants of health. Built With bootstrap css3 flask html5 jquery numpy pandas python Try it out medhack2020.ew.r.appspot.com
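The population-filtering step described above (narrowing EHR records by age, gender, condition, etc. before plotting) can be sketched with pandas. The column names and values below are illustrative assumptions, not the project's actual EHR schema.

```python
import pandas as pd

# Toy EHR-style table (invented rows for demonstration).
patients = pd.DataFrame({
    "age": [72, 45, 81, 68],
    "gender": ["F", "M", "F", "M"],
    "condition": ["AFib", "Asthma", "AFib", "CHF"],
})

def filter_population(df, min_age=None, gender=None, condition=None):
    """Return the sub-population matching every filter that was supplied."""
    mask = pd.Series(True, index=df.index)
    if min_age is not None:
        mask &= df["age"] >= min_age
    if gender is not None:
        mask &= df["gender"] == gender
    if condition is not None:
        mask &= df["condition"] == condition
    return df[mask]

seniors_afib = filter_population(patients, min_age=65, condition="AFib")
print(len(seniors_afib))  # 2
```

The resulting frame would then feed the exploratory plots (e.g. via pandas' plotting helpers) rather than being printed.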
Data Analysis System for Senior's Health (DASSH)
Leverage EHR records for targeted patient screening and intervention
['Catherine Nguyen', 'Emily Chen', 'Green-Lotus Hawks', 'David Chong', 'Hannah Nie']
[]
['bootstrap', 'css3', 'flask', 'html5', 'jquery', 'numpy', 'pandas', 'python']
21
10,436
https://devpost.com/software/guardian-angel-7e16nq
Inspiration As a team, we were inspired to tackle a global issue by mitigating the impacts of the aging population. It is estimated that by 2030, over 1 billion people will be aged 65 years or older (National Institute of Health). The older population is also more prone to developing chronic illness: at least 80% of Americans aged over 60 have at least one chronic illness, which is also the leading cause of death among people 65 and older (CDC, 2017). In addition, we are experiencing a global shortage of trained healthcare workers, and the imbalance between supply and demand will only get worse over time. It is projected that the global deficit of healthcare workers will worsen from 7.2 million in 2013 to 12.9 million by 2035 (WHO). By now we can all see the potentially disastrous outcome for the overall health of the elderly population in the near future. We need a tool that can address the growing healthcare demand from the aging population and the limited availability of healthcare workers. To alleviate this problem, we created Guardian Angel. What it does Guardian Angel is an AI-powered patient monitoring system that does not require patients to use any advanced devices like smartphones. All patients need to do is pick up a phone call and start conversing with Guardian Angel. How it works: Guardian Angel initiates a call to the patient at a scheduled time. It follows up with the patient on their overall health and asks about any new or changing symptoms. It may also ask for the most recent blood pressure and body weight readings. While the patient is answering, Guardian Angel constantly analyzes the patient's speech and picks up on any symptoms that require further investigation by healthcare professionals. If the patient mentions any alarming symptoms, Guardian Angel notifies the nurses, who take a look and triage whether the patient needs an appointment with a physician for further investigation.
Guardian Angel enables three things that traditional telemedicine could not: Any landline phone can provide service to the elderly without access to advanced technology. Enhanced automation enables extraction of medically relevant information provided by the patient. Conversion of speech to text facilitates electronic health record documentation. How I built it We used the following tools to build Guardian Angel: Flutter; Flutter packages: url_launcher, firebase_auth, provider, highlight_text, avatar_glow, and more; Google Cloud Speech-To-Text API; Google Cloud Firestore; HTML, CSS, JavaScript (WebFlow). Challenges I ran into We were first planning to use Twilio Programmable Voice to implement automated phone calling, until we found out that flutter_twilio_voice has limited support across OSes. That was when we came up with an alternative to Twilio: the Flutter url_launcher package. Another challenge we encountered was enabling voice recognition in Android emulators: while there was no problem with our code, the emulator was not detecting speech, so we poured a lot of effort into making it functional. Building an algorithm to detect words that may hint at illness was another challenge. Accomplishments that I'm proud of We believed that the most efficient way of showing a patient's health status was by presenting patient information and data extracted by Guardian Angel on a doctor's tablet. We had never built an application with Flutter, but we tried to learn as much as we could in a limited amount of time. In the end, we felt very proud that we successfully created a proof-of-concept solution to a serious problem that needs to be addressed. What I learned From this experience, we learned a lot about the challenges the current healthcare system encounters and the difficulties elderly patients face.
We also learned that with small changes to the healthcare system, it could bring much more valuable experience between elderly patients and doctors. What's next for Guardian Angel Refining machine learning to improve accuracy in important data extraction Support for multiple languages Introduction of “small talk” function to alleviate feelings of social isolation among elderly population Built With css firebase-auth flutter google-cloud-firestore google-cloud-speech-to-text html javascript url-launcher webflow Try it out github.com
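The symptom-detection step described above (scanning a speech-to-text transcript for words that hint at illness and escalating to a nurse) could be sketched as follows. The keyword list is an invented placeholder, not the team's actual detection algorithm, and a real system would need far more robust clinical phrase matching.

```python
# Illustrative symptom vocabulary (assumed, not the project's real list).
ALARMING_TERMS = {"chest pain", "shortness of breath", "dizzy", "swelling"}

def flag_transcript(transcript):
    """Return the sorted list of alarming terms found in a transcript.

    A non-empty result would trigger a nurse notification for triage.
    """
    text = transcript.lower()
    return sorted(term for term in ALARMING_TERMS if term in text)

hits = flag_transcript("I felt dizzy this morning and had some swelling.")
print(hits)  # ['dizzy', 'swelling']
```

In production, the transcript would come from the Speech-to-Text API and the flags would be written to Firestore for the nurse dashboard.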
Guardian Angel
#agingInPlace
['Yona Kim', 'Sang min Lee', 'James Lee', 'Jiyeon Park', 'Rae Kim']
[]
['css', 'firebase-auth', 'flutter', 'google-cloud-firestore', 'google-cloud-speech-to-text', 'html', 'javascript', 'url-launcher', 'webflow']
22
10,436
https://devpost.com/software/diagnosee
[Screenshots: homepage; dashboard; comparison between the original video and the GAN-generated video; logo; comparison of an original video frame (left) and a high-resolution frame generated by the website (right)] Inspiration My cousin was treated for an overbite with braces during quarantine. While this would be a small problem during non-COVID times, I saw first-hand the inefficiencies of telemedicine over Zoom. During the initial session at MedHacks, my teammate learned from experts that low video quality is a major barrier to high-quality care in telemedicine. Thus, we created a deep learning solution that allows physicians to access high-resolution videos for their telemedicine appointments while lessening the technological strain on patients. What it does The website uses generative adversarial networks (GANs) to create high-quality videos from lower-resolution patient videos. This is especially useful for physicians who need to perform a physical examination and would otherwise struggle without high-definition visual interaction (dermatologists, pediatricians, geriatricians). Patients can upload videos from their telemedicine examinations and select one of the many available physicians to comment on their condition. Our server processes uploaded videos and generates the higher-resolution version of each video automatically. Patients can then go to their dashboard, which provides a link to view the high-resolution video alongside the original, enter any notes, and submit the videos to the physician. On the other end, physicians can log in to the website to view all submitted videos from patients and provide feedback and diagnoses. How I built it We used TecoGAN, a GAN trained to generate super-resolution videos from low-resolution videos. We chose Flask to build our web app, as it was quick and functional. We created multiple SQL tables to store user data and login information, keeping security and privacy in mind.
Using our experience in web development, we packaged the application into a simple and intuitive interface. Challenges I ran into We did not have enough time to implement everything we envisioned, so we had to prioritize and focus on core functionality. It was not possible to train a full GAN in the short amount of time available, so we had to use a model pre-trained on non-medical data. We were mostly limited by computing power: TecoGAN uses substantial computational resources to run inference on live video, so we had to settle for short video clips rather than transforming the entire video conference. We ran into difficulties with the chat functionality as well, but we persevered and completed a minimum viable product. Accomplishments that I'm proud of We are proud that we persisted through the many challenges and completed a final product. We were able to fully integrate the GAN model and generate higher-resolution videos through our website, create convenient interfaces with chat and video-viewing tools, configure user and video databases to store data safely, and design an intuitive UI to facilitate client-user interactions. What I learned Although there were loads of technical challenges along the way, the most important skill we learned was collaborating virtually while keeping each other motivated through the night. We had an ambitious goal from the start, and working as a team to resolve the many technical and implementation issues we ran into was the biggest learning opportunity for all of us. What's next for DiagnoSEE We want to adapt DiagnoSEE to be compatible with the telemedicine platforms currently in use. Moreover, we hope to add more functionality, such as allowing physicians to annotate video frames to highlight areas of interest; implementing a feedback loop between patient videos and the GAN model to improve model performance; and adding video-editing tools to adjust lighting, obstructions, etc.
The current version is fully functional under relatively light workloads. If DiagnoSEE were implemented in production, we would use distributed computing and dynamic scaling to increase our rendering capabilities. We also plan on implementing flow control to manage the rendering queue, adding parameters such as priority to give precedence to more timely appointments. Built With flask heroku javascript numpy pandas python tensorflow threading Try it out medhacks1.herokuapp.com
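The priority rendering queue mentioned in the roadmap (more timely appointments rendered first) could be sketched with the standard library's heap. The priority labels and file names are illustrative assumptions, not the deployed design.

```python
import heapq
import itertools

class RenderQueue:
    """Priority queue of super-resolution jobs; lower number = render sooner."""

    PRIORITY = {"urgent": 0, "normal": 1, "low": 2}

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-breaker within a priority

    def submit(self, video_path, priority="normal"):
        heapq.heappush(
            self._heap,
            (self.PRIORITY[priority], next(self._counter), video_path),
        )

    def next_job(self):
        """Pop the video that should be rendered next."""
        return heapq.heappop(self._heap)[2]

q = RenderQueue()
q.submit("routine_checkup.mp4")
q.submit("same_day_appointment.mp4", priority="urgent")
print(q.next_job())  # same_day_appointment.mp4
```

A pool of GAN workers would pull from `next_job()` so same-day appointments jump ahead of routine clips without starving them (the counter keeps same-priority jobs in arrival order).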
DiagnoSEE
Enhance video resolution with deep learning for telemedicine.
['Vani Gupta', 'Mingye Wang', 'Jinay Jain', 'Alan Sun', 'Matthew Yu']
[]
['flask', 'heroku', 'javascript', 'numpy', 'pandas', 'python', 'tensorflow', 'threading']
23
10,436
https://devpost.com/software/covid-19-pre-screening-for-higher-risk-groups
Inspiration We aimed to provide suspected patients with a pre-screening tool to assess their lung health and whether they need immediate medical attention. This phone app can potentially free up hospital space, as well as medical workers' attention, for those whose lives are on the line. What it does Instructs the patient to inhale fully and exhale into the microphone. Assessment questions may ask for the patient's age, sex, race, family history, etc. Uses the idea of FEV1/FVC to generate a score for the patient's lung health. The score may be adjusted by the patient's answers in the second part. What's next for COVID-19 Pre-screening for Higher Risk Groups We hope to make this app compatible with devices such as spirometers, so that people can obtain a device and monitor their lung health at home at any time. We also hope to use machine learning in the future as more data on breathing patterns becomes available, so our app can move in the diagnostics direction. Built With javascript p5.js vue.js
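The FEV1/FVC-based score described above can be sketched in a few lines. The 0-100 scaling and the per-answer penalty are invented for illustration; they are not the app's actual scoring formula, and none of this is a clinical tool.

```python
def lung_score(fev1, fvc, risk_answers=0):
    """Toy lung-health score from spirometry-style values.

    fev1, fvc: volumes in litres (FEV1 = forced expiratory volume in
    1 second, FVC = forced vital capacity); risk_answers: count of
    high-risk questionnaire responses (assumed penalty of 2 points each).
    """
    ratio = fev1 / fvc  # ratios below ~0.70 are commonly cited as obstructive
    score = ratio * 100 - 2 * risk_answers
    return round(score, 1)

print(lung_score(3.0, 4.0))                  # 75.0
print(lung_score(3.0, 4.0, risk_answers=3))  # 69.0
```

The app would derive the FEV1/FVC-like inputs from the microphone exhalation rather than taking litres directly.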
COVID-19 Pre-screening for Higher Risk Groups
During the COVID-19 global pandemic, hospitals are getting overwhelmed, which presents risks for our senior citizens. This phone app gives patients a lung health pre-assessment.
['Ju He', 'Weiheng Qin', 'Yuxin Guo', 'Wenfei Yang']
[]
['javascript', 'p5.js', 'vue.js']
24
10,436
https://devpost.com/software/mobile-memories
Inspiration By the time you finish reading this sentence, there will be 2 more diagnosed cases of dementia in the world. By the time you finish reading this submission, there will be a total of 100 new cases. And by the year 2030, nearly 80 million people across the globe will be affected by this devastating neurological disorder. The scientific and medical community is racing against the clock to find a treatment for dementia before the disease reaches epidemic proportions. However, about 99.6% of clinical trials have failed. A potential reason is that by the time symptoms emerge, it's too late. Therefore, efforts are concentrated on diagnosing the disease in its early stages. Most tools used to detect dementia involve neuroimaging such as CT, PET, and MRI scans, which are very expensive and might dissuade individuals from checking whether they have early signs of the disease. Our goal at Mobile Memories is to introduce a new cost-effective method — natural language processing — to screen for dementia during the early stages and allow a greater window of opportunity for treatment. Additionally, we store the user's audio files within the app's Memory Bank so their memories are documented before they're affected by dementia. Our team chose the aging and resilience track to improve the way we diagnose dementia in a cost-effective manner and preserve what makes us who we are — our memories. What it does There are two main functions of the Mobile Memories app. The first is the Natural Language Processor, in which we have developed an algorithm to analyze speech patterns and check for the earliest signs of dementia, decades before other cognitive symptoms occur. The app collects speech samples by posing a prompt to the user. These prompts include "What's on your mind", "How was your day", and other customizable questions. Users can select how often they want to be prompted and over what time period (1 every day vs.
2 a month). Once the audio recording is obtained, the app applies a machine learning algorithm from Google Cloud to test for linguistic metrics such as pause duration and speech segment duration to name a few. If our algorithm detects a significant decline in speech pattern, the app will notify the user, the assigned caregiver/family member, and the user’s physician. From there, the physician can proceed to conduct standard mental exams to assess the extent of symptoms and provide the next steps forward. The second function is in the Memory Bank feature. After audio files are analyzed by Natural Language Processing, they are stored in the Memory Bank so that users and their loved ones are able to access past memories and relive the pleasant experiences. This feature of the app is meant to preserve the identity of the user so that individuals are not characterized by their disease, but by their good character and humanism beforehand. MemoryBank is empowering as it reminds everyone the authentic identity of the user, not the one forced upon by dementia. How we built it The project was divided into 2 main components: 1) Developing the machine learning model; 2) Building the Mobile Memories app. For the machine learning model, we used Google Cloud and the available libraries there to build a speech analyzer that tests for linguistic metrics such as pause duration and speech segment. We were able to access actual datasets from people living with dementia and used it to train our model, making it more accurate and authentic. In the app development side, we developed it as a React Native app, using Expo CLI to make it available for both Android and iOS. The Mobile Memories app contains a variety of features, including audio recording, audio storage (in the form of audio diaries), and data analytics of the speech patterns from the audio diary. For data analytics, we linked our model with the speech patterns of the user to monitor their cognitive ability over time. 
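One of the linguistic metrics named above, pause duration, can be sketched from timestamped speech segments like those a speech-to-text API can return. The segment timestamps below are invented, and this is only an illustration of the metric, not the team's trained model.

```python
def pause_durations(segments):
    """Compute pauses between consecutive speech segments.

    segments: list of (start_s, end_s) tuples ordered by time, e.g.
    word/phrase timings from a speech-to-text result.
    """
    return [round(nxt_start - prev_end, 2)
            for (_, prev_end), (nxt_start, _) in zip(segments, segments[1:])]

segments = [(0.0, 1.8), (2.9, 4.0), (6.5, 7.2)]
print(pause_durations(segments))  # [1.1, 2.5]
```

A screening model would track statistics of these pauses (and of segment lengths) across audio diaries to look for decline over time.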
Challenges we ran into Since we are creating an app to help identify early signs of dementia, we had to figure out the speech metrics by which we analyze those signs. It was a challenge to look through past research papers to find scientific backing for the speech metrics used, and to implement those findings in our code. We also found it challenging to get the dataset needed for developing the machine learning model. Since the model requires a large dataset, we decided to ask an organization for actual data. We also had to design a program to parse through the data and extract the parts we wanted. It was also a challenge to familiarize ourselves with React Native and understand how to use it to build the Mobile Memories app. Accomplishments that we're proud of We are very proud of being able to use actual datasets from people who live with dementia. We think it's important to build this project on a foundation that is credible and authentic, and being able to access real data and feed it into our machine learning model was an encouraging accomplishment for our team. What we learned On the technical side, we learned more about 1) app development; 2) how to base an ML model on scientific findings and data; and 3) how to parse through datasets. We also developed a greater sense of empathy for those living with dementia. Researching and developing the Mobile Memories app made us aware of how dementia affects our ability to comprehend and interact with life, and put into perspective the impact of this neurological disorder. This will help us better consider inclusivity when building apps or programs in the future, reminding us that technological innovations should always include everybody. What's next for Mobile Memories We understand the opportunities that Mobile Memories can bring and plan to improve the machine learning model with more data, possibly collaborating with researchers to better tailor it to current needs.
We also plan to integrate an official screening test into the app that, once administered, can be sent to a physician for further inspection and action. In addition, we hope to do our part in early intervention by integrating cognitive games for patients to play, in hopes of improving their cognitive abilities. Built With expo.io javascript python react react-native Try it out github.com
Mobile Memories
Preserving memories, one mind at a time
['Zachary LaJoie', 'Laetania Belai', 'Erica Lehotzky', 'Daniel Hariyanto', 'drewgupta Gupta']
[]
['expo.io', 'javascript', 'python', 'react', 'react-native']
25
10,436
https://devpost.com/software/teledoc-smart-assistant
Inspiration For the first time in decades, the modern world has experienced the havoc caused by the novel coronavirus COVID-19 for the past six months and counting. The world's big pharma, governments, and scientists are working incessantly to bring vaccines and medication to market to tackle the pandemic. The scientific community recommended lockdowns and social distancing. This pandemic forced most of us to go virtual: many patients opted, or were even forced, to access medical consultation through a video call. Video calls do not provide physiological metrics such as vital signs, including oxygen saturation and temperature. Because of the lack of these physiological wellbeing metrics, patients may receive a suboptimal clinical diagnosis, or even a misdiagnosis. There have been devices on the market for assessing vital signs for decades. However, these devices are not adopted by a large population due to technological barriers. Here, we have tested the feasibility of an IoT device that functions as a smart associate to the telehealth doctor, providing real-time vital signs recording. What it does TeleDoc Smart Associate not only provides real-time vital signs but is also capable of integrating sophisticated sensors such as force, EKG, EMG, and others. In addition, the device is capable of processing machine vision and artificial intelligence algorithms. These advanced capabilities provide an unparalleled advantage in performing advanced data analytics such as saccades, movement recognition, and speech analysis. The device can also provide thermal imagery for monitoring wound healing. We could implement gamification in the data collection to improve patient adherence to medications and rehabilitation. How I built it With a Python-powered machine vision camera that offers multiple I/O channels along with CAN, SPI, and I2C communication buses.
These communication buses can be used to control many devices at once. The device is standalone and all data resides on the unit. The patient can voluntarily choose to share the report, generated using HL7 standards, for easy integration with electronic health records. Challenges I ran into Short time and unavailability of resources due to lockdown, including sensors and electronic equipment. Accomplishments that I'm proud of Successfully showed the feasibility of teleDoc Smart Associate. Identified novel ways to find solutions of clinical importance. What I learned I learned that it is not easy to gain accurate vital information from a patient virtually. What's next for teleDoc Smart Associate Add advanced data analytics features along with medication adherence protocols. Because of the small form factor, we can turn this device into a social-distancing-enforcing wearable. Built With iot micropython openmv rtsp Try it out drive.google.com
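As a rough illustration of the real-time vital-signs idea, pulse rate can be estimated from any periodic intensity signal by counting peaks over a known window. The sketch below uses a synthetic signal and invented parameters rather than the device's actual OpenMV pipeline.

```python
# Sketch: estimating pulse rate from a sampled intensity signal by
# counting local maxima above a threshold. On the actual device the
# signal would come from the camera; here it is synthetic.
import math

def estimate_bpm(samples, sample_rate_hz, threshold=0.0):
    peaks = 0
    for i in range(1, len(samples) - 1):
        # A peak is a sample above threshold that exceeds both neighbors.
        if (samples[i] > threshold
                and samples[i] > samples[i - 1]
                and samples[i] >= samples[i + 1]):
            peaks += 1
    duration_s = len(samples) / sample_rate_hz
    return peaks * 60.0 / duration_s

# Synthetic 1.2 Hz pulse (72 bpm) sampled at 50 Hz for 10 seconds.
sig = [math.sin(2 * math.pi * 1.2 * t / 50.0) for t in range(500)]
bpm = estimate_bpm(sig, 50.0)
```

A real implementation would need band-pass filtering and motion rejection before peak counting; this only shows the counting step.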
teleDoc Smart Associate
Machine Vision AI Powered Portable device for real-time monitoring telehealth patient visits
['Mekedes Dejenie', 'Evelyn .', 'Anil Thota']
[]
['iot', 'micropython', 'openmv', 'rtsp']
26
10,436
https://devpost.com/software/nightingale-onwv3m
Inspiration Currently, the number of patients per doctor is increasing rapidly in North America. In addition, the number of patients each doctor has to take care of will increase even more under the current pandemic. The core of our project is to find a way for a limited number of doctors to take care of more patients. What it does One of the ways we came up with was to use telemedicine to help doctors take care of more patients. In addition, to reduce unnecessary time, we introduced the OPQRST method, which allows rapid diagnosis. From the patient's point of view the process is a little simpler, making it easier to express one's pain, and doctors can also easily understand the standardized expressions of a patient's pain, which saves more time because doctors do not have to interpret each patient's individual way of describing pain. How I built it We designed the user interface of Nightingale on the Webflow platform. We could format the questionnaires in HTML and CSS files through its visual editor. Simultaneously, we set up a micro web server with the Flask framework. Through the framework, it is also possible to validate and transact the input data from the front end: when a patient enters their health information into forms, the application saves and reorganizes the data to create their Electronic Health Record (EHR). Then, it sends a copy of the EHR to the server so that the medical team can access the information on a real-time basis. The medical dashboard can display multiple EHRs with one of two sorting options: by timestamp (default) or by a numeric indicator based on discomfort level.
To calculate the indicator value, we sum: the discomfort level (from 1 to 10); the frequency pattern of the pain (1 for occasional, 2 for constant or pulsing); whether the pain has spread to other body parts (0 for no, 1 for yes); and the patient's personal information, namely the age group (2 for ages 0-4 or 65+, 1 otherwise) and BMI (2 for over 35, 1 for under 18.5 or over 30, and 0 otherwise).
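The scoring rule above can be written directly as a small function; the field names here are illustrative, not the app's actual schema.

```python
# Minimal sketch of the priority indicator described above;
# record field names are invented for the example.

def bmi_points(bmi):
    if bmi > 35:
        return 2
    if bmi < 18.5 or bmi > 30:
        return 1
    return 0

def age_points(age):
    # Ages 0-4 and 65+ score 2, everyone else scores 1.
    return 2 if age <= 4 or age >= 65 else 1

def priority_indicator(record):
    score = record["discomfort"]                                  # 1..10
    score += 2 if record["pain_pattern"] in ("constant", "pulsing") else 1
    score += 1 if record["pain_spreads"] else 0
    score += age_points(record["age"])
    score += bmi_points(record["bmi"])
    return score

sample = {"discomfort": 7, "pain_pattern": "constant",
          "pain_spreads": True, "age": 70, "bmi": 32.5}
score = priority_indicator(sample)   # 7 + 2 + 1 + 2 + 1 = 13
```

The dashboard would then sort EHRs by this score in descending order to surface the most uncomfortable patients first.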
Nightingale
From the patient's point of view, it's simpler and easier to express one's pain, and doctors can also easily understand the standardized pain expressions, which saves more time.
['Crystal Park', 'Junghoon Cho', 'Rhina Kim', 'Lana Kang', 'Hyunmin Park']
[]
[]
27
10,436
https://devpost.com/software/dementia-tracker
Inspiration The rising incidence of dementia and Alzheimer's globally, as well as an aging population, served as inspiration for our project. What it does We developed a website that asks the patient to fill out information about themselves; from this information, a quiz-type game is created which tests the patient's memory. The patient's information, game, and results (score) are saved in a database. Our website is meant for patients to visit and play repeatedly over time, and patients' results would be assessed holistically for each individual. How I built it We created the website using AngularJS for front-end development and Node.js for back-end development, along with Bootstrap. Dummy data was used to test the functionality of the algorithm, but for real-world use, data would be obtained through the patient information form on our website. Challenges I ran into Challenges we faced along the way include determining appropriate questions to build a patient portfolio and, thus, a quiz game. Questions must be strictly factual and cannot be opinions that change over time (e.g., favorite food). Additionally, our team has little experience in back-end development, which created an obstacle for us. Accomplishments that I'm proud of As individuals, we gained new knowledge in other topic areas. For example, our computer science member learned more about human biology and patient interaction, whereas our biochemistry member learned more about web app development. Additionally, creating possible solutions to improve the accessibility of resources for the elderly in low-income areas was an exciting and interesting experience. What I learned As a whole, we learned more about dementia, the development of web apps, and the creation of algorithms. Expanding and building upon our preexisting knowledge was a very rewarding experience.
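The quiz-from-profile idea can be sketched in a few lines; the question templates and profile field names below are invented for illustration, not the site's actual form.

```python
# Sketch: generating a factual-recall quiz from a patient profile and
# scoring the answers. Templates and field names are hypothetical.

QUESTION_TEMPLATES = {
    "birth_city": "In which city were you born?",
    "mother_name": "What is your mother's first name?",
    "school": "Which school did you attend?",
}

def build_quiz(profile):
    # Only strictly factual fields become questions.
    return [(QUESTION_TEMPLATES[k], v) for k, v in profile.items()
            if k in QUESTION_TEMPLATES]

def score_quiz(quiz, answers):
    correct = sum(1 for (_, expected), given in zip(quiz, answers)
                  if given.strip().lower() == expected.strip().lower())
    return correct / len(quiz)

profile = {"birth_city": "Toronto", "mother_name": "Maria",
           "school": "Hillcrest"}
quiz = build_quiz(profile)
result = score_quiz(quiz, ["toronto", "Anna", "Hillcrest"])
```

Storing each session's score alongside a timestamp would let the site track the per-patient trend the project describes.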
What's next for Dementia Tracker Based on further research, we hope to implement more questions that would give a better assessment of the patient's risk. We also need to research which incorrectly answered questions are more indicative of dementia. Built With angular.js bootstrap c++ node.js Try it out github.com
Dementia Tracker
A website to assess potential dementia patients
['John Heo', 'Gabrielle Nicole Perez']
[]
['angular.js', 'bootstrap', 'c++', 'node.js']
28
10,436
https://devpost.com/software/agex
Screenshots: logon screen, appointment welcome screen, chatbot, appointment confirmation, companion scheduling, Context API, Maps API. Agex: Elderly care chatbot Inspiration Elderly care has always been an action point in the field of geriatrics. With 46 million people in the US in this age group, and with the need to stay at home, around 10% of this population experiences some form of depression. This creates a need for round-the-clock care for the elderly. At-home nurse assistance is costly, and not everyone can afford it. Smartphones are present with almost everyone, but there is no common point to meet all needs at any time of the day, and none of the existing apps are specific to the elderly. Since the elderly are not so tech-savvy, there is a huge need for a platform that is easy to navigate: an empathetic but vigilant virtual assistant that can simply act as company or provide the necessary help in times of need. What it Does Here we present AgeX, a web-app platform for all on-demand services for the elderly, from A to X. Through this tool the elderly can easily navigate features like nursing assistance, which calls the required nurse/physician and sends a confirmation email and notification pop-up. Other geographically specific services, such as on-demand food delivery, can also be enabled through this feature. The highlight of the tool is a chatbot that provides companionship to the elderly whenever needed through back-and-forth empathetic conversational statements and questions; it is also connected to the previous feature, assisting with tasks like contacting the physician/therapist or recommending nearby restaurants. There is also an SOS tool for when help is needed.
Lastly, there is a feature to call a companion whenever needed, whether just to have a fun conversation or when assistance with a certain task is required. A database of such companions/volunteers powers this feature. How we built it The application was built as a responsive web application, compatible to be deployed as a PWA using a container view for native applications on iOS and Android, using the following tech stack: back-end services in Python-Flask, deployed on Google Cloud Platform; Jinja as the templating engine; HTML + CSS on the front end; Lex for the chatbot flow; and the Google Maps, Geolocation, Places, Cloud SQL, and Geocoding services. The app starts with a splash screen and a login prompt. For now, we decided to regulate our users on the platform and make it invite-only, barring the registration page. Next, the user is greeted with 3 options: get on-demand service (nurse / medicine / care); chat with our empathetic chat bot; get a companion for social events / online. The flow from here on depends on the option that the user selects. If they select the first option, the application accesses the current location of the user (after asking for permission) and offers on-demand service, giving the option to schedule a service and returning a confirmation code along with a mail notification. After choosing the second option, the user is taken to our empathetic chat bot, where they can chat; understanding the context of the conversation, the chatbot gives them options to schedule events, find companions, and get on-demand service, and recognizes whether the user wants food or is not feeling well. The third option is our (in progress) chat with nearby like-minded elderly people; to use it, they need to request to connect first, and upon approval, they can connect. The same list is used in close conjunction with other features as well.
Throughout the application, we have given the user an SOS option and, using a clutter-free UI, an easy user experience. Challenges faced Cloud deployment of the SQL instance was really difficult, especially testing it on a local system through the cloud proxy script. Integrating the Google Places API and making it context-aware. Accomplishments A web app was created using various tools that would be really helpful not only during the pandemic but also in day-to-day life. What we learned We learned a lot during the course of this hackathon. A few highlights: progressive web app development; using Google Cloud Platform; building a chatbot; integrating Google's APIs into our web app. We also developed our collaborative skills along the way. What's next Some of the things that we want to try in the near future: generate continuous behavioral analysis reports from the patient's responses, which will provide important insights into the overall wellbeing of the user; separate profiles for the user and the concerned caretaker to allow improved tracking of responses and needs; voice-enabled navigation; and, more importantly, taking user feedback and working continuously to improve the experience. Important: Login Info Login with username: test@gmail.com and password: test Built With flask gcp google-geocoding google-geolocation jinja lex python Try it out github.com 34.122.10.25
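The on-demand scheduling step described above can be reduced to creating an appointment record with a confirmation code. This is a minimal sketch with an invented record format; the real app would also send the email notification and persist the record in Cloud SQL.

```python
# Hypothetical sketch of the on-demand scheduling step: build an
# appointment record and attach a short confirmation code.
import secrets
from datetime import datetime

def schedule_service(user, service, when):
    # token_hex(3) yields 6 hex characters, e.g. 'a3f09b'.
    code = secrets.token_hex(3).upper()
    return {
        "user": user,
        "service": service,
        "when": when.isoformat(),
        "confirmation": code,
    }

appt = schedule_service("test@gmail.com", "nurse",
                        datetime(2020, 8, 1, 10, 30))
```

The returned record is what would be shown in the confirmation screen and echoed in the notification email.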
AgeX
Elderly people need care and comfort to lead a healthy life. We try to improve the lives of these people by providing an application that is designed with their needs in mind.
['Shreyash Kumar', 'Apoorv Dayal']
[]
['flask', 'gcp', 'google-geocoding', 'google-geolocation', 'jinja', 'lex', 'python']
29
10,436
https://devpost.com/software/actilive-aynzfq
Inspiration Last year, one of our group mates, Canqi, went to an elderly home to give a piano concert and was shocked -- the elderly home was in poor condition; most of the seniors were in wheelchairs; and there were only a few caretakers for many seniors. After the concert, she saw many of them crying and telling her that it was the first time they had listened to a live concert since coming to the elderly home. She suddenly realized that our society needs to care much more about the elderly. And this was why we decided to create our unique platform, ActiLivE. What it does ActiLivE is a one-of-a-kind application where seniors create content for other seniors. Seniors can share instructional videos, pictures, and steps for activities they enjoy. Unlike other how-to apps, ActiLivE recognizes and caters to the wide range of physical and mental abilities of seniors. Seniors can filter through activities using difficulty levels and tags. For example, a senior suffering from mobility and memory problems can exclude the tags "mobility" and "memory" to find activities they can enjoy, stress-free. The ActiLivE community will try the activities and then leave a review, improving this user-generated content in a positive feedback loop. Finally, seniors who don't use apps can choose mailing subscriptions for printed activity cards to be delivered every week or month. The same customizability from the app will be available for mail subscribers as well. How I built it We used Figma to create this app. Challenges I ran into Coding an app in 24 hours is difficult. Accomplishments that I'm proud of We are very proud of our innovation. The features and the visuals of our app, ActiLivE, are appealing and will serve the aging population. One of the claims ActiLivE makes is providing a sense of purpose. Through online live events and fun seminars, seniors can participate in subjects they enjoy. Finding a sense of purpose has been a central theme in social psychology.
According to Carl Rogers' humanistic theory, the basic motive of all people is to reach self-actualization. More specifically, Maslow's hierarchy of needs shows that people satisfy needs in a specific order, from physiological to safety to belonging to self-esteem, in order to reach self-actualization. What I learned We learned how severe the issue of social isolation is among the aging population. What's next for ActiLivE Moving forward, we can make the following improvements to ActiLivE. We will use Google Cloud's Video Intelligence API to suggest tags and categories for uploaded videos and tutorials, which could make navigation on ActiLivE easier. Also, Google Cloud's machine learning functions can recommend similar videos after seniors finish watching a tutorial. We can start with local use of the application and then expand user demographics on a global scale. We can find partner charity organizations and schools to enrich the lives of seniors. We can also expand to food delivery and direct messaging. Food delivery might be especially helpful to the elderly, since many of them have limited mobility. Built With figma
ActiLivE
ActiLivE is a one-of-a-kind application where seniors create content for other seniors. Seniors can share instructional videos, pictures, and steps for activities they enjoy, such as baking a cake.
['Youkie Shiozawa', 'Shakson Isaac', 'Canqi Li', 'snailipop 78']
[]
['figma']
30
10,436
https://devpost.com/software/ssmile
Inspiration We believe it is the duty of the more fortunate to protect those most vulnerable. While researching the most prevalent diseases among the elderly, multiple members of the team mentioned that they personally know someone who has suffered from a stroke and have witnessed firsthand its debilitating effects on the survivors' lifestyles. After realizing the worldwide scope of the issue, our team knew that something had to be done. SSMILE revolves around the F.A.S.T. acronym in order to provide steadfast detection, enhance emergency responsiveness, and potentially save lives. SSMILE's Abilities Facial Muscle Weakness Detection: facial droop is one of the first signs of a stroke. Through facial recognition algorithms, SSMILE can scan the user's face for irregularities and signs of muscle weakness. Arm Motor Control Detection: weakness in the arms is another key warning of a stroke. Using the gyroscope and accelerometer, SSMILE can test for any drifting of the user's arms through a quick 10-second test! Speech Function Failure Detection: the last sign of stroke in the FAST acronym is irregular speech. After establishing a personalized baseline pattern of speech, SSMILE can compare new speech samples to the original baseline and quantify any outliers. Time To Contact Emergency: through push notifications, family members of the user can obtain recordings of speech, arm movement, and smiles to visually confirm if a stroke is present. If a stroke is confirmed, emergency services can be immediately contacted. How We Built it Check the video demo! Project Challenges As many of us were inexperienced coders, a large portion of the programming development was difficult. Developing the facial recognition system, the gyroscope/accelerometer test, and the speech reader was irrefutably challenging. Additionally, converting PC-implemented algorithms into a user-friendly Android app was tricky and certainly took a good amount of debugging and thinking.
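The 10-second arm test could, in principle, reduce to checking how far the measured arm angle drifts from its starting position. The sketch below is a guess at that logic with illustrative thresholds, not SSMILE's actual code.

```python
# Sketch of the 10-second arm drift test. The 15-degree tolerance and
# the 5-sample baseline window are illustrative assumptions.

def arm_drift_detected(pitch_degrees, tolerance_deg=15.0):
    """pitch_degrees: arm pitch-angle samples over the test window."""
    # Use the first few readings as the held-out baseline position.
    baseline = sum(pitch_degrees[:5]) / 5
    worst = max(abs(p - baseline) for p in pitch_degrees)
    return worst > tolerance_deg, worst

steady = [90.0 + 0.5 * (i % 3) for i in range(100)]   # small wobble only
drifting = [90.0 - 0.4 * i for i in range(100)]       # arm slowly falling

flag_steady, _ = arm_drift_detected(steady)
flag_drift, deviation = arm_drift_detected(drifting)
```

A production version would fuse gyroscope and accelerometer readings and filter out hand tremor before applying a threshold like this.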
Project Accomplishments The program can successfully detect facial droop, weakness in the arms, and speech function failure. SSMILE's Future Given the time constraint, we were unable to implement many of the ideas that we conjured up! However, we are excited to see what the future may hold for SSMILE. Here's a list of things we foresee in SSMILE's near future: train our model to differentiate facial drooping due to strokes, Bell's palsy, post-stroke symptoms, injury, etc.; develop a personalized baseline of facial features to account for possible post-stroke changes and facial injuries; develop a passive arm motor control sensor that runs once the mobile device is unlocked, to account for sudden strokes; develop an additional arm motor control test using augmented reality (i.e., searching for an object within the room); train our model with databases of stroke patients and their cognitive and behavioral functions; train the language model on an NLP corpus from the late 1900s to account for age-specific speech patterns and to enable language function failure detection without prompting users to read out sentences; and convert the prompted tests into a passive, automatic detection mechanism. Appreciation! A big thanks to the guidance and mentorship from Dr. Vishu Ravi and esteemed Software Engineer Terrel Ibanez! Built With kivy python Try it out github.com
SSMILE - Stroke Screening Made Instant, Live, and Easy
As the 2nd global leading cause of death, strokes have certainly made their mark. Survivors may suffer from paralysis or dysphasia. However, early detection and prevention can be as easy as a SSMILE.
['Danny Lee', 'Stella Li', 'Zoe Kim', 'Eugene Song', 'Jonathan Liu']
[]
['kivy', 'python']
31
10,436
https://devpost.com/software/tear-tool-for-evaluating-addressing-readmissions
Inspiration Most of our team members come from public health or nursing backgrounds. We know research has shown that social determinants of health greatly impact a person's health. We decided to use a data-driven method for correlating these determinants with hospital readmissions. What it does Our machine learning model is trained to flag patients most at risk of hospital readmission based on these social factors: income, education level, zip code, race, and ethnicity. How I built it We did a data analysis to select the most important social factors for flagging a patient at risk of readmission. Feature selection was carried out using the correlation matrix, and we trained our machine learning model on the selected features. Once the ML model is trained, it could be deployed in hospitals to identify patients at higher risk of readmission. Challenges I ran into Our team members spanned three countries and multiple time zones, so we rarely all worked on this project simultaneously. However, we divided up responsibilities and tasks based on our skills and were able to complete this project. Accomplishments that I'm proud of We are proud of our application of machine learning to a very relevant health care issue that has long been burdening our healthcare system! What I learned We all brought a unique set of skills to this team (computer science, direct patient care experience, public health knowledge). I think we all learned from each other's strengths and gained a greater appreciation for the skills our teammates brought to the table. What's next for TEAR: Tool for Evaluating & Addressing Readmissions We used only a couple of data points for training our machine learning model. In the future, this model could incorporate more social determinants of health, or any other variables of interest, to correlate with readmission risk.
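The correlation-based feature selection step can be illustrated in pure Python: rank candidate social factors by the absolute Pearson correlation with the readmission label. The data and feature names below are toy values invented for the sketch.

```python
# Sketch: ranking features by |Pearson r| against the readmission
# label, as in correlation-matrix feature selection. Toy data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

features = {
    "income":    [20, 35, 50, 65, 80],   # thousands, hypothetical
    "education": [12, 12, 16, 14, 18],   # years, hypothetical
}
readmitted = [1, 1, 0, 0, 0]

# Features with the strongest (absolute) correlation come first.
ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], readmitted)),
                reverse=True)
```

In practice the same ranking would come from a full correlation matrix over all candidate determinants, with a cutoff deciding which features feed the model.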
Built With bootstrap css html python r Try it out tear-analysis.github.io
TEAR: Tool for Evaluating & Addressing Readmissions
Using machine learning, we have devised a tool for identifying patients most at risk for readmission based on social determinants of health.
['Erica Nelson', 'Cindy Kuang', 'Aayush Kumar', 'Joey Zhou']
[]
['bootstrap', 'css', 'html', 'python', 'r']
32
10,436
https://devpost.com/software/wecare-56kheq
Screenshots: all residents; resident personal information; prescription drug record; registered prescription drug for resident Jake; notification on calendar. Inspiration The devastating impacts of the COVID-19 pandemic were felt greatest within the nursing home community, which accounted for half of all deaths related to COVID-19, and over 80% of these deaths in Canada. Importantly, this revealed a fatal flaw in the workings of the nursing home: an inability to contain an infection and protect the health of the most vulnerable population. Unsurprisingly, the conditions and treatment of nursing home residents have been unsatisfactory, and residents have reported neglect or witnessed acts of neglect. Medication errors have been experienced by as many as 70% of nursing home residents in the UK. To date, there has yet to be consensus on how to best improve the care offered to this population of seniors. Increasingly, the elderly population is relying on polypharmacy and multidisciplinary care to treat their chronic illnesses. In one study, residents were found to be taking an average of 8 medications and experienced an average of 1.9 medication errors. Heart disease – a frequent morbidity in the elderly – alone may necessitate a combination of three key drugs: aspirin, an ACE inhibitor, and a beta-blocker. Additionally, the use of multidisciplinary care increases the likelihood of error, whereby unnecessary prescriptions may occur. The clinical consequences of these errors may include adverse drug events, non-adherence, or negative drug interactions. Over time, these errors incur greater healthcare costs and consume material and human resources. Staff can become overwhelmed with the medical demands of their patients and subsequently commit preventable errors. However, the problem in nursing homes is at the systems level rather than the individual level.
Medical errors can be reduced by finding solutions upstream and implementing efficient systems that are easily comprehensible, have built-in redundancy for safety, and, most importantly, serve the needs of the user. In this project, we have sought to create an app that streamlines the medication delivery schedule for nursing home staff, to reduce error and improve patient outcomes. What it does Wecare is a mobile app specifically designed for nursing home staff. In the app, a staff member can create a personal profile for each resident they are responsible for and record that resident's medical history. For each resident, the staff will also record prescription drug details (medication name, medicine ATC, route, dosage, time to take, frequency), and an event will be created accordingly on the resident's calendar; the staff member gets a notification when it's time to give the medication. This feature helps the staff better organize each person's medication schedule and avoid possible medication errors. Accomplishments that we're proud of & What we learned We always knew that most nursing homes don't have a good living environment. But only during this hackathon did we actually come up with a solid and promising idea that we know will be helpful. We will extend this project into our graduation design project, and eventually we hope to introduce the final product to local nursing homes. What's next for Wecare We are aware that an elder might receive multiple prescriptions from different specialists, but some medicines are not supposed to be taken together. So we aim to detect medications that might interact badly with each other if they appear in one person's medication schedule. We also wish to add a section recording the elder's special living habits, such as A always wanting to have dinner at 4:30pm and B always needing to go to the washroom at 2am.
With all this information, nursing home staff will be able to provide a higher quality living environment for seniors. Built With expo.io javascript node.js npm react-native sqlite xcode Try it out github.com
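The calendar/notification feature boils down to expanding each prescription's daily times into events and flagging doses that fall inside a notification window. The sketch below uses an invented record format and window, not the app's actual schema.

```python
# Sketch: which doses are due within the next N minutes? Record
# format and the 15-minute window are illustrative assumptions.
from datetime import datetime, time, timedelta

prescriptions = [
    {"resident": "Jake", "drug": "Aspirin",  "times": [time(8, 0), time(20, 0)]},
    {"resident": "Jake", "drug": "Ramipril", "times": [time(8, 0)]},
]

def doses_due(prescriptions, now, window_min=15):
    due = []
    for p in prescriptions:
        for t in p["times"]:
            dose_at = datetime.combine(now.date(), t)
            # Upcoming within the window -> trigger a notification.
            if timedelta(0) <= dose_at - now <= timedelta(minutes=window_min):
                due.append((p["resident"], p["drug"],
                            t.isoformat(timespec="minutes")))
    return due

alerts = doses_due(prescriptions, datetime(2020, 8, 1, 7, 50))
```

The planned drug-interaction check would sit on top of the same per-resident medication list, flagging known bad pairs before they ever reach the calendar.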
Wecare
A mobile app that helps nursing home staff record and organize seniors' medical histories, medication schedules, and special living habits in order to better serve them.
['Tianzhu Fu', 'wzongxia', 'Lancer Guo', 'Shaan Bhambra', 'Hedi Zhao']
[]
['expo.io', 'javascript', 'node.js', 'npm', 'react-native', 'sqlite', 'xcode']
33
10,436
https://devpost.com/software/messaging-calendar-for-dementia-patients-y081ub
Inspiration After a talk with one team member's dad, who works closely with the elderly and dementia patients, we decided to make our small contribution and give back to the community that fosters our growth. What it does MCD is a scheduling app aimed at helping dementia patients by sending notifications to the user for pre-defined calendar events/tasks. How I built it We built the front end using Flutter and the back end using Firebase. Dart was our main programming language, and we also used Swift for a better user interface. Challenges I ran into This was the first time working with Flutter and the Dart programming language for all of us on the team, so it was a bit difficult to pick up all the intricacies of a new language in such a short time. What I learned We learned the basics of Flutter app development, as well as the importance and effectiveness of teamwork and delegation. What's next for Messaging Calendar for Dementia Patients We want to add location tracking, where a family member gets a notification in case the user does not have access to their phone. We plan on implementing this functionality using the Google Maps API. Built With dart firebase flutter swift trello
Messaging Calendar for Dementia Patients
Need someone to remind you to do a task? We got you! Do not have your phone with you? We still got you!
['Carlos Molina', 'Rushi Sharma', 'Nicolas Renteria', 'Julian Delgado']
[]
['dart', 'firebase', 'flutter', 'swift', 'trello']
34
10,436
https://devpost.com/software/priority-queue-app
This is our prototype; we will apply it to develop this application in the future with the following user design. Inspiration During the COVID-19 pandemic, many outpatients with ongoing medical treatments were indirectly affected by the biosecurity measures proposed by governments to stop the spread of the virus. In this context, non-urgent health matters such as ophthalmology, dentistry, and dermatology were considered low priority and, in some cases, appointments were cancelled or postponed in an effort to avoid face-to-face human interaction as much as possible. In other cases, appointments turned into phone calls or video examinations, making it hard for the practitioner to give an accurate diagnosis. We understand this can be frustrating based on our own experience; however, as a group of technology enthusiasts we believe in the power of machine learning and data-driven decisions to solve the challenges faced by any industry. For this hackathon, we have focused on the dermatological sector, and we want to make skin-related distanced diagnosis a positive experience for both practitioners and patients. What it does We have designed an Android app that embeds a CNN deep learning model to classify nine different skin lesions. The app makes use of the smartphone camera: a photo of the skin-related problem is taken and sent to the neural network to predict which type of skin lesion it belongs to. For each skin condition, the app gives a recommendation and tells the patient whether a face-to-face appointment with the practitioner is required. How we built it To build our deep learning model we used Python with the TensorFlow, TensorFlow Lite, and Keras libraries. The CNN consists of three convolutional layers that extract the main characteristics of the images, followed by two hidden layers (MLP) for image classification and, finally, an output layer with a softmax activation function that allows multiclass classification.
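The softmax output layer and the appointment recommendation can be illustrated with plain Python. The lesion labels and urgency mapping below are invented for the sketch, not the model's actual nine classes.

```python
# Illustrative post-processing of the network's nine-class output:
# softmax over the logits, then map the top class to a recommendation.
# Class names and the NEEDS_APPOINTMENT set are hypothetical.
import math

CLASSES = ["melanoma", "nevus", "dermatofibroma", "bcc", "akiec",
           "bkl", "vascular", "scc", "other"]
NEEDS_APPOINTMENT = {"melanoma", "bcc", "akiec", "scc"}

def softmax(logits):
    m = max(logits)                       # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits):
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[top], probs[top], CLASSES[top] in NEEDS_APPOINTMENT

logits = [4.1, 1.0, 0.2, 2.5, 0.1, 0.3, 0.0, 0.5, 0.2]
label, prob, see_doctor = classify(logits)
```

In the app, this mapping is what turns the TensorFlow Lite prediction into the "face-to-face appointment required" message shown to the patient.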
For our Android app we have used Java in Android Studio. The app integrates the AI model (developed in Python) and implements a prediction function that captures the bytes of the image taken by the camera and predicts a possible skin lesion. Challenges we ran into Improving prediction accuracy, as the dataset we worked with was relatively small (2241 images). Converting our Keras model into TensorFlow Lite (for Android purposes), due to our lack of experience in machine learning for smartphones. It was the first hackathon for three of our members, and one member is in a different time zone. Working with a multidisciplinary team (comp science, engineering, business and medicine). Accomplishments that we are proud of We all improved our teamworking and communication skills. Our medicine student member became familiar with artificial intelligence in the health sector. Our business student member learnt how to build Android apps and got UI design experience. What we learnt We learnt how to use TensorFlow Lite to implement AI models on smartphones. We learnt about nine different types of skin lesions and their treatments. What's next for Skin Alarm app We would like to implement a system able to schedule face-to-face appointments based on the priority of the disease. We plan to implement a scheduler that advises when, at what time and in what dose a medicine should be taken. We aim to expand Skin Alarm to other types of diseases (e.g. chest x-rays to detect lung-related problems or x-rays to detect bone-related fractures). We plan to implement our app prototype design. Video: https://youtu.be/NW_5fn6e5Ms (the audio is broken) Domain.com https://ai-recommends-paracetamol.tech/ Built With android-studio keras python tensorflow tensorflowlite Try it out github.com
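The three-convolutional-layer architecture and the Keras-to-TensorFlow-Lite conversion described above can be sketched roughly as follows. The input size, filter counts and dense-layer widths are our own assumptions (the description does not give them); only the overall structure (three conv layers, two hidden MLP layers, a nine-class softmax output) comes from the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_skin_lesion_cnn(input_shape=(128, 128, 3), n_classes=9):
    """Three convolutional blocks to extract image characteristics,
    two hidden (MLP) layers, and a softmax output for the nine
    skin-lesion classes. Layer sizes are illustrative assumptions."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def convert_to_tflite(model):
    """Convert the trained Keras model to a TensorFlow Lite flatbuffer
    that can be bundled inside the Android app."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    return converter.convert()
```

The `.tflite` bytes returned by `convert_to_tflite` would then be loaded by the TFLite interpreter on the Android side.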
Skin alarm
An Android app that allows the user to take a photo of a skin-related issue, predicts which skin lesion group it belongs to, and provides a recommended treatment for the matter.
['Wilson Campoverde Carrilo', 'Diego Muñoz', 'Jessica Campoverde', 'Marcel Mauricio Moran Calderon', 'Lenin Cruz']
[]
['android-studio', 'keras', 'python', 'tensorflow', 'tensorflowlite']
35
10,436
https://devpost.com/software/optirx
Inspiration Chronic illnesses are quite common in the US, with 6 in every 10 adults suffering from at least one of them. Additionally, 5 of the 10 leading causes of death in the US are chronic diseases. For at least 60% of these diseases, narrow therapeutic index (NTI) drugs are prescribed. The critical nature of these drugs lies in the “narrow therapeutic range”: a slight change in dosage can either make the drug toxic and fatal or make it entirely useless. Hence, such chronic care management requires continuous and vigilant care. Taking a different dose, changes in your personal life, missing a dose, or forgetting what dose was assigned in the first place can affect the drug blood concentration levels by a lot and keep you at high risk of the drug being either ineffective or toxic. What it does We propose a responsive app, OPTI-RX (on both PC and mobile devices), to track drug effectiveness over the course of time and continuously send updates and alerts to the required physician if any anomalies are found in the dosage effectiveness. We can create personal profiles and save them to be accessed in the future through a login username and password. Additionally, when we receive the prescription for the first time, the app can use the scanned photo to automatically fill in the details for the person, like name, age, body weight etc., as well as the drug profile, including name and dosage. Once such parameters are added, the app can confirm whether the dose is safe or toxic, add this data point to the time-series plot of drug effectiveness mentioned above, and find the nearest safe dose. How we built it For building the application we utilized Google Cloud’s Vision API, Flask and React. The personal profile creation in the front end (React) takes in the data from the user and provides it to the back-end (Flask, hosted on Google Compute Engine).
Then it provides the extracted information to the Google Cloud SQL platform and either confirms the login ID and username or creates a new profile and saves it to the database. The prescription recognition is done through Optical Character Recognition: the scanned photo is securely transferred from React to Flask to the Google Cloud Vision APIs. The drug and personal profile data extracted from the prescription is sent as an array of strings to the back-end model/algorithm. The algorithm for estimating drug concentration levels in the blood uses the CKD-EPI Creatinine equation, which accounts for the effect of personal factors like body weight, age, gender etc. on GFR (renal clearance). The equation parameters and the therapeutic range limits are taken from the literature. The estimated drug blood concentration level is checked against the therapeutic range, which classifies it as True if safe, else False (toxic). Additionally, an approximate dose is estimated for a safe drug blood concentration. Challenges we ran into One of the biggest challenges faced was developing a connection between Google Cloud SQL and the VM instance for personal profile creation. Other challenges included finding a pharmacokinetics-based equation model for incorporating the effect of personal factors on clearance rate, along with data for the parameters in those equations. Accomplishments that we're proud of We created an interactive web app, where the complete database was based on Google Cloud APIs and app-based tools like Flask and React. The model is also well constructed through extensive literature review and pharmacokinetic analysis. What we learned A number of new tools and concepts were learnt (many were mentioned above). With a project like this carried out in such a short time, good team spirit, time management and problem-solving skills were all very well grasped through this hackathon.
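As a hedged illustration of the dosing check described above: the sketch below implements the standard 2009 CKD-EPI creatinine equation (which uses serum creatinine, age and sex; the race coefficient is omitted here) and a simple therapeutic-range test. How eGFR maps onto an estimated blood concentration is not specified in the text, so only the range check itself is shown, and the range limits in the usage example are placeholders:

```python
import math

def ckd_epi_egfr(scr_mg_dl, age, female):
    """CKD-EPI (2009) creatinine equation: estimates GFR in
    mL/min/1.73 m^2 from serum creatinine (mg/dL), age and sex."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    return egfr

def dose_is_safe(concentration, therapeutic_range):
    """Classify the estimated blood concentration as True (inside the
    therapeutic range, i.e. safe) or False (sub-therapeutic or toxic)."""
    low, high = therapeutic_range
    return low <= concentration <= high
```

For example, a 50-year-old male with serum creatinine 0.9 mg/dL gets an eGFR of roughly 99, and with a placeholder range of (0.8, 2.0), a level of 2.4 would be flagged as unsafe.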
What's next for OptiRX The app we created is still in its initial stages; there are a number of developments that can build on it. At this stage the app, as well as the back-end algorithm, uses only body weight, height, gender and age as the personal factors. Other time-dependent lifestyle factors like nutrition intake, sleep times, sickness etc. can also affect the clearance rate and thus the drug concentration. We use a deterministic/theoretical model for drug concentration estimation; further, we could use a regression-based model trained on previous data of personal factors and advised therapeutic doses, which would predict the effective dose based on the current personal and drug parameters. Lastly, as a moonshot, we can try to detect a disease and its aggressiveness based on the present symptoms and personal characteristics, with the model trained on WebMD datasets. Built With flask google-cloud google-vision-api python react sql
OptiRX
Variability over time amongst individuals can affect blood concentration levels of narrow therapeutic drugs and requires close monitoring. OptiRx, an AI-based app, verifies drug dosage based on past data.
['Deep Soni', 'Naman Tiwari', 'Prabhjot Luthra']
[]
['flask', 'google-cloud', 'google-vision-api', 'python', 'react', 'sql']
36
10,436
https://devpost.com/software/ultraassist-an-ultrasound-assistant-for-pregnant-women
UltraAssist: An ultrasound assistant for pregnant women (35.224.138.39) Inspiration In the US around 10% of the female population, or 16.5 million females, are pregnant. A third of them require regular assistance, including regular checkups in the clinic, accounting for around 5.5 million women in a year or around 150 daily cases. But due to the pandemic, pregnant women are reluctant to make in-person visits to clinics; they are also more susceptible to infection during in-person visits. With everything happening virtually these days, people are trying to utilize technology as much as possible. In this scenario, portable ultrasound equipment currently available on the market can also be used for at-home checkups. But for individuals with no medical background, it’s hard to navigate the ultrasound, place the probe at exactly the desired locations for accurate measurements, and analyze the ultrasound images, especially when there isn’t a physician/nurse available for assistance. What it Does We propose a physician-guided virtual assistant for pregnant women to conduct ultrasound scans at home without assistance from medical personnel. The application will also show labels of organs to the user for better understanding of scans. There will be two login options, one for physicians and one for the user. Once the web app opens on both ends, the physician will point out the position for the probe to be placed; the patient will accordingly place the probe for efficient image capturing. How we built it The application was built as a responsive web app (usable both on mobile and desktop) using ReactJS as the frontend with Flask (Python) and Google Cloud APIs as the backend. The app starts with a splash page, where the user has the option to continue either as a patient or as a physician. The physician and the patient are shown the same image of the abdominal area to create a common point of reference.
The physician indicates which part of the abdomen needs to be scanned and it is shown to the patient. Once the patient has performed the scan of the region, they can upload and send it to the physician. At this point, the backend processes the ultrasound images using the SonoNet neural network model (on Google Compute Engine), classifies what can be observed in the ultrasound image, and sends the classification and image to the physician. The physician can then either end the analysis or indicate the next area that needs to be analysed. Challenges faced The main challenge was synchronizing ultrasound probe positions at the physician and user ends. Accomplishments A web app was created using various tools which would be really helpful for pregnant women, not only during the pandemic but also in normal situations. What we learned We learnt to implement technical tools like ReactJS, Flask and GCP. We got exposure to SonoNet. It was a great experience to work virtually and complete prototyping of the project in such a short period of time. What's next There are a lot of potential features that can be integrated into the current prototype. Artificial intelligence algorithms can be used to detect pregnancy-related complications. Deep learning based image segmentation can be implemented for accurate labelling of organs. A chatbot can be developed for probe positioning during the ultrasound scan. Built With flask google-compute-engine react Try it out 35.224.138.39
UltraAssist
An ultrasound assistant for pregnant women
['INDRANUJ GANGAN', 'Siddharth Kothiyal']
[]
['flask', 'google-compute-engine', 'react']
37
10,436
https://devpost.com/software/ai-spab-9qpyne
Inspiration: Why go to the hospital when you can treat yourself at home? What!? We know that not all diseases are treatable at home. But what if there’s a tool for letting you know ‘if’ you need to go to the hospital? The degree of confidence plays a vital role in providing insight into whether a symptom is an initial sign of chronic disease. Sometimes, by taking some precautionary measures, you can avoid the possibility of a disease in its initial stages. AI Spab is one such tool that provides predictive insights for both doctors and patients to track a certain disease based on symptoms. Dataset: http://people.dbmi.columbia.edu/~friedma/Projects/DiseaseSymptomKB/index.html What it does: It provides predictive analysis based on the symptoms provided by the patient. It gives doctors the ability to track patients' data and see what diseases are prevalent in a certain area. How we built it: We used Google Cloud Platform’s AutoML API to train the model and deployed it to run the predictions. Challenges we ran into: Since all the members are new to Google Cloud services, we had a challenging time building with the Cloud APIs, but ultimately, with the help of the mentors, we solved the issue. What’s next for AI Spab: Making general checkups completely virtualized, making it usable across different hospitals for easy transfer of records, and making it able to automatically schedule appointments for doctors based on a patient's health priority. Built with: Google Cloud Platform: AutoML, Firebase, Tensorflow. Frontend: VueJS Built With automl ml node.js vuejs Try it out calm-sea-70186.herokuapp.com github.com
AI Spab
Why go to the hospital when you can treat yourself at home? AI Spab is a tool for letting you know ‘if’ you need to go to the hospital.
['Harshit Ambalkar', 'Ushasree Boora', 'parthavi96 Kaka', 'Harsh Kathiriya', 'Jimil Patel']
[]
['automl', 'ml', 'node.js', 'vuejs']
38
10,436
https://devpost.com/software/find-the-fib
Inspiration Atrial fibrillation is the commonest cardiac arrhythmia, and one of the commonest cardiac problems. It affects around 4% of the elderly population above sixty, rising to 8% of those 80 years and older. It leads to a 5-fold increased risk of stroke. Approximately 70% of AF-related strokes can be avoided with adequate diagnosis and treatment. The difficulty with AF is making the diagnosis, as up to 40% of patients are asymptomatic, and even in those with symptoms, there is a low detection rate with single ECG readings. Holter ECGs are expensive and often our older population does not have access to this technology. We were inspired by the idea of creating a product which could increase our ability to detect diseases which have such a big impact on the lives of our elderly population. What it does We designed a device and a service that work together to facilitate continuous monitoring of the electrocardiogram of patients with similar accuracy and lower cost than the current technology. Moreover, our software is scalable to devices with small form factors, so there is potential for a wearable ECG classifier product to be developed. How we built it We started by designing our hardware device on paper, and migrated it to Autodesk. Due to time and financial constraints, we were unable to physically build our device. Our software is inspired by the DeepECG model ( https://github.com/ismorphism/DeepECG ) and was trained on the Physionet Challenge 2017 dataset ( https://archive.physionet.org/pn3/challenge/2017/ ). We used an AWS instance to train the model and then hosted a web UI on Google Cloud to facilitate easy user interaction. Challenges we ran into Our main challenges were related to the time constraints of prototyping within a 36-hour window. We also imposed some financial constraints and form factor constraints on our custom device.
As with a lot of machine learning projects, we faced memory and compute power shortages during the training phase, and had to improvise when we ran out of our Google Cloud CPU quota. Our backend and frontend were also notoriously hard to connect to each other due to a few network issues and related problems. Accomplishments that we're proud of Firstly, we overcame every challenge, bug and failed build thrown at us over this 36-hour period to create something entirely new. Second, each of us was not only able to use our expertise to benefit the team, but we were able to learn from each other and expand our interests and knowledge. Finally, we all demonstrated our dedication and commitment to the project by pulling an all-nighter to work! What we learned The most important thing we learned was how to work together, ideate, problem-solve and effectively communicate with each other. Each of us also expanded our own interests and knowledge through the workshops and each other's experience. What's next for Find the Fib We want to expand the diagnostic capability from exclusively atrial fibrillation to other cardiac anomalies such as other arrhythmias, heart lesions and even myocardial infarction. We also plan to scale our device down to a wearable form factor and incorporate wireless and Bluetooth capabilities. Finally, we recognize that this technology can be applied to various populations other than elderly patients, and we want to adapt our devices and services in any way necessary to facilitate easier diagnoses and treatment for everyone. Built With amazon-web-services flask google-cloud javascript keras node.js python react tensorflow Try it out github.com
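Before recordings reach a DeepECG-style classifier they are typically cut into fixed-length, normalized segments. The sketch below shows that kind of preprocessing under assumed parameters (10 s windows at the 300 Hz PhysioNet 2017 sampling rate, 50% overlap); the team's actual preprocessing values are not stated in the text:

```python
import numpy as np

def ecg_windows(signal, window=3000, stride=1500):
    """Slice a 1-D single-lead ECG recording into overlapping
    fixed-length windows (3000 samples = 10 s at 300 Hz), z-normalizing
    each window before it is fed to the classifier."""
    signal = np.asarray(signal, dtype=np.float64)
    out = []
    for start in range(0, len(signal) - window + 1, stride):
        w = signal[start:start + window]
        # Zero-mean, unit-variance per window; epsilon guards flat lines.
        out.append((w - w.mean()) / (w.std() + 1e-8))
    return np.stack(out) if out else np.empty((0, window))
```

Window-level predictions would then be aggregated (e.g. by majority vote) into a single AF/normal label for the whole recording.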
Find the Fib
This is electrocardiogram analysis software that facilitates continuous monitoring and atrial fibrillation detection through machine learning.
['Mihir Modak', 'Yuxiang Zi', 'Muhammad Raees Tayob', 'Pearle Shah']
[]
['amazon-web-services', 'flask', 'google-cloud', 'javascript', 'keras', 'node.js', 'python', 'react', 'tensorflow']
39
10,436
https://devpost.com/software/care-it-e4w2gz
Features and Advantages of CARE-IT Plan of action INSPIRATION? During the current COVID-19 pandemic, healthcare systems (including hospitals, pharmacists, researchers, and investors) have been focusing on processing and analyzing data to find a possible treatment and the right vaccine for the virus, while ignoring non-COVID sectors, which could result in more serious healthcare issues. WHAT IT DOES? A comprehensive health care setup for chronic patients. Built With google googlescholar slide Try it out docs.google.com
CARE-IT
Non-COVID health care services in times of COVID
['Sumanta Majumdar', 'Rimah Aldhafeeri', 'Sarada Mondal']
[]
['google', 'googlescholar', 'slide']
40
10,436
https://devpost.com/software/sneakpeakintoleak
EMG result Inspiration After the age of 3, the involuntary loss of urine is socially stigmatizing, whether it occurs in grade school, at the office, during a bridge luncheon, on the golf course, or in the nursing home. Children and adults go to great lengths to deny and hide urinary incontinence, which can pose physical and psychosocial impediments to the enjoyment of life. Incontinence is not a life-threatening ailment but is so often overlooked. We have seen our own grandparents suffer from this. This stigma should be addressed before it gets too late. With age, not only do the bladder walls weaken, but so does our ability to discuss it. So, let's embrace incontinence without the fear of judgement. Introduction Damaged nerves may send signals to the bladder at the wrong time, causing its muscles to squeeze without warning. The symptoms of overactive bladder include: urinary frequency—defined as urination eight or more times a day or two or more times at night; urinary urgency—the sudden, strong need to urinate immediately; urge incontinence—leakage of urine that follows a sudden, strong urge to urinate. EMG tests to predict incontinence Electromyography assessment of the pelvic floor muscles (PFM) consists of the use and interpretation of the surface EMG recording of a muscle. Electromyography (EMG) is a diagnostic procedure that evaluates the health condition of muscles and the nerve cells that control them. These nerve cells are known as motor neurons. They transmit electrical signals that cause muscles to contract and relax. An EMG translates these signals into graphs or numbers, helping doctors to make a diagnosis. Our workflow This assistive technology makes use of EMG data as an input to predict incontinence. The EMG value is dependent on the pressure exerted by pelvic parasympathetic nerves, lumbar sympathetic nerves and pudendal nerves.
Our algorithm predicts incontinence using age, sex, urethral closure pressure (cm H2O), leak-point pressure (cm H2O) and the MVC_peak value from the EMG in microvolts as input parameters. Intrinsic sphincteric deficiencies are identified by a Valsalva leak-point pressure ≤60 cm H2O measured in the sitting position and/or a urethral closure pressure ≤20 cm H2O in the sitting position. Peak EMG amplitude in the 5 s MVC window for a healthy person is found to be ~97.6. Peak EMG amplitude in the 5 s MVC window for an incontinent person is found to be ~39.1.[1] How we built it The data set was synthetically created by the team (because of the unavailability of datasets for this study). We used logistic regression, a support vector machine and a random forest to train on our data. We deployed the logistic regression model in a web application. The back end of the web application was built using Flask. The front end of this application was developed using HTML and CSS. The application is extremely user friendly. Target Audience This study includes non-diabetic elderly people (above the age of 50) suffering from incontinence due to old age. Challenges couldn’t stop us We are a team of aspiring bioengineers with first-hand knowledge of technical and biological aspects. However, we lacked proficiency in some specific fields. Despite the initial setback, we were firm on identifying a simple yet crucial problem statement that requires attention. Our passion for this topic allowed us to break through technical barriers. A specific data set with the parameters we required wasn’t available, and following advice from our mentors we proceeded to create a synthetic data set. Due to the obvious limitation in the number of records in the dataset, we couldn’t deploy more accurate algorithms like neural networks, whose results would hold substantial meaning. By employing SVM, random forest and logistic regression we were able to achieve an accuracy of 80% with the synthetic data.
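A minimal sketch of the training setup described above, using logistic regression on the five input parameters. The synthetic data generated here is our own stand-in (healthy MVC_peak centred on ~97.6 µV, incontinent on ~39.1 µV, per the values quoted above), not the team's actual dataset, and the distributions for the other features are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Stand-in synthetic cohort: columns are age, sex (0 = male, 1 = female),
# urethral closure pressure (cm H2O), leak-point pressure (cm H2O),
# and MVC_peak (uV). Healthy subjects centre on MVC_peak ~97.6 uV.
healthy = np.column_stack([
    rng.integers(50, 85, n),
    rng.integers(0, 2, n),
    rng.normal(60, 10, n),
    rng.normal(90, 15, n),
    rng.normal(97.6, 8, n),
])
# Incontinent subjects: MVC_peak ~39.1 uV, closure pressure <=20 and
# leak-point pressure <=60 suggest intrinsic sphincteric deficiency.
incontinent = np.column_stack([
    rng.integers(50, 85, n),
    rng.integers(0, 2, n),
    rng.normal(18, 5, n),
    rng.normal(50, 10, n),
    rng.normal(39.1, 8, n),
])
X = np.vstack([healthy, incontinent])
y = np.array([0] * n + [1] * n)  # 1 = incontinent

clf = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = clf.score(X, y)
```

With the strong separation in the EMG feature, the classifier separates the two synthetic groups almost perfectly; the 80% figure reported above reflects the team's own (noisier) synthetic data.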
Accomplishments that we're proud of Despite the unavailability of data, lack of access to IoT resources (and three mental breakdowns!!) we simulated and hypothesized a very doable hack – SneakPeakIntoLeak. What we learned We dived into the field of neurology in depth and were able to channel our knowledge of this field, coupled with technology, to come up with a hack for incontinence in the elderly. Most of all we learned to never give up and to chase what inspires you. What's next for SneakPeakIntoLeak Our current model will require assistance from staff in hospitals or nursing homes to help the elderly check their bladder health. In the future we aim to connect the sensor to our application so that real-time values can be updated automatically. Weekly reports of EMG tests of bladder pressure would help keep incontinence in check, reinforcing our belief in the track – “aging with resilience and resources”. References Scientific papers: [1] Koenig, I., Luginbuehl, H. and Radlinger, L., 2017. Reliability of pelvic floor muscle electromyography tested on healthy women and women with pelvic floor muscle dysfunction. Annals of Physical and Rehabilitation Medicine, 60(6), pp.382-386. [2] Yoshimura, N. and Chancellor, M.B., 2003. Neurophysiology of lower urinary tract function and dysfunction. Reviews in urology, 5(Suppl 8), p.S3. [3] Mellgren, A.F., Zetterström, J. and Nilsson, B.Y., 2006. Electromyography and pudendal nerve terminal motor latency. In Constipation (pp. 105-109). Springer, London. [4] Kiff, E.S. and Swash, M., 1984. Normal proximal and delayed distal conduction in the pudendal nerves of patients with idiopathic (neurogenic) faecal incontinence. Journal of Neurology, Neurosurgery & Psychiatry, 47(8), pp.820-823. [5] Chmielewska, D., Stania, M., Kucab–Klich, K., Błaszczak, E., Kwaśna, K., Smykla, A., Hudziak, D. and Dolibog, P., 2019.
Electromyographic characteristics of pelvic floor muscles in women with stress urinary incontinence following sEMG-assisted biofeedback training and Pilates exercises. Plos one, 14(12), p.e0225647. [6] Vogel, S.L., 2001. Urinary incontinence in the elderly. Ochsner Journal, 3(4), pp.214-218. Books: Vaginal surgery for incontinence and prolapse - Philippe E. Zimmern, Peggy A. Norton, François Haab and Christopher C.R. Chapple Other links: https://www.niddk.nih.gov/health-information/urologic-diseases/bladder-control-nerve-disease https://www.researchgate.net/figure/Cystometry-and-EMG-recordings-from-spinal-cord-intact-A-and-T8-T9-spinal-cord-injury_fig1_313469309 https://www.nia.nih.gov/health/urinary-incontinence-older-adults Built With css flask html python Try it out sneakpeekintoleak-api.herokuapp.com
SneakPeakIntoLeak
Breaking taboos, one stigma at a time
['Tooba Subhani', 'Rutwik Palaskar', 'Raiee Gulhane', 'Swati R']
[]
['css', 'flask', 'html', 'python']
41
10,436
https://devpost.com/software/ai-doc-j0pcum
Inspiration The current pandemic has affected a lot of people and will leave an enormous impact on our lives. In the current situation, many people could not visit hospitals due to the fear of COVID-19. This inspired us to create a chat-bot and a website that make an appointment for a virtual meeting with the doctor, so that patients can stay home and get connected with the doctor. What it does The AI-Doc chat-bot collects data from the user and books an appointment for a virtual meeting with the doctor by opening the AI-Doc website. If the doctor accepts the request then the patient can talk with the doctor. The website also contains a COVID Risk Calculator which predicts the patient's risk. How I built it We made a chat-bot using Gupshup and connected it to the website. The website is built with React and Framer Motion. The patient fills in the form on the website, and if a doctor is found in the prescribed department and location then the doctor will get a mail asking to "Accept" or "Reject". If there is no doctor in the prescribed department and location then you will get a message like "Could not find a doctor". Included locations in the project: Hyderabad, Vizag. Included departments in the project: ENT Specialist, Cardiology, Dermatology, Pediatrician, Ophthalmology, General Physician. COVID Risk Calculator: Our website also contains a COVID Risk Calculator which is built using JavaScript. We referred to research papers to build this calculator. Challenges I ran into We faced some problems while creating the chat-bot. Accomplishments that I'm proud of We are participating in a hackathon for the first time and learned a lot during the process. What I learned We learned how to deploy our own website on the web, how to use Gupshup to create a chat-bot, and also learned Node.js. We learned teamwork and team management. What's next for AI-Doc Built With framer-motion heroku node.js react sms-gupshup
AI-Doc
A better place to get connected to a doctor
['Vedha Krishna Yarasuri', 'Naveena Kota', 'vedha krishna']
[]
['framer-motion', 'heroku', 'node.js', 'react', 'sms-gupshup']
42
10,436
https://devpost.com/software/using-a-neural-network-to-detect-conditions-in-chest-x-rays
According to a study published in the BMJ Quality & Safety journal, as much as 28% of medical misdiagnoses turn out to be life-threatening or life-altering, and a significant portion of common misdiagnoses across healthcare providers worldwide involves x-rays. This program takes a first step towards solving this problem with a web application that houses a pre-trained deep learning neural network capable of taking x-ray scan images as input and outputting a result as to whether a patient is positive for a certain medical condition. Despite the tight time constraints of this project, we take pride in the fact that we were able to re-train and complete a working network, as well as the user interface that we used to house it in order to make it more accessible to the average user. More importantly, we see a brighter future in which further development of this project will allow for more efficient training of the neural network, and thus the growth of new software that has the potential to revolutionize healthcare by utilizing the power of data. Built With python Try it out bit.ly
Using a Neural Network to Detect Conditions in Chest X-Rays
This project uses a neural network to detect certain medical conditions in chest x-rays.
['Eknoor Singh', 'Suhayb Islam', 'Aryan Singh']
[]
['python']
43
10,436
https://devpost.com/software/synoptel-v1oqde
Inspiration Most countries in the world, including the United States, have rapidly shifted to telemedicine after the advent of the novel coronavirus. This includes the aged population, namely those with chronic conditions like heart disease. According to the National Council on Aging, about 78 percent of older adults have at least one chronic disease that requires regular check-ups. These patients cannot leave their homes because they fall under the vulnerable population required to remain in isolation. Old age brings struggles with using technology and the inability to understand and retain vital information from video calls. We want to address both of these challenges through the power of artificial intelligence. By providing the aged with an easy-to-use application that serves as a one-stop shop for summarised medical consultation records, we aim to create a more comfortable and secure telemedicine environment. What it does Synoptel is a web-based application that converts the doctor-patient conversation from live audio/video calls to text. It extracts elements such as the symptoms of the patient, the diagnosis by the clinician, the medication proposed, and other treatments advised. This information is presented to the patient in the form of tabulated notes/lists and is saved on the patient's end. Before reaching the patient, the notes are reviewed by the clinician, ensuring utmost accuracy. This ensures that the patient clearly understands everything discussed during the call and reduces the risk of patients forgetting the names of medications and/or failing to follow instructions given by the doctor. The doctor also has access to these notes so as to have a simplified version of patient records available. In the present era of humanitarian crisis (COVID-19), where access to healthcare services is getting difficult, this serves as a tool to bridge the gap.
How I built it After the video conference ends, we extract the audio from the video using a built-in library for audio/video processing. This audio is then converted into transcripts (as words and sentences, using the Amazon AWS Transcribe API). We then filter out relevant entities and major metadata using the AWS Comprehend Medical model, which helps in removing all the useless words (stopwords), and apply an extractive BERT model to convert the leftover sentences into word embeddings, and then into feature vectors. These vectors are then classified in metric space as symptoms, medications or treatment procedures. The final summary, after being approved by the doctor, is shown to the patient in the form of a web application. Challenges I ran into We had problems integrating AWS with the web app with minimal latency. The categories of the labels often overlapped, so we needed to decide individual categories for some borderline cases. Integrating the app with a live application is future scope and has nearly been completed. Accomplishments that I'm proud of Joining a team of brilliant minds and extreme talent across professional fields. Approaching a healthcare problem that could prevent a number of deaths. Successfully making our way through stats and tech highs. Having had the chance to be mentored by stalwarts from every sector. Working around 48 hours, we’ve explored software tools and mastered them overnight, running on (and sometimes crawling on) coffee & calls - all from our safe space. What I learned Precise case notes are crucial not only for the doctor but also for the patient. So many conditions worsen because the patient cannot fully grasp the doctor's instructions. We learned that this problem has become much more serious during this pandemic. Put together, my team members and I have explored and learned software tools that were previously alien to us. We learned how to work together over online communication media.
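The classification step described above, assigning each sentence's feature vector to symptoms, medications or treatment procedures in metric space, can be illustrated with a nearest-centroid sketch. The 3-D toy vectors stand in for the real BERT embeddings, so the centroids, dimensionality and labels' exact representation here are purely assumptions:

```python
import numpy as np

def nearest_centroid(vector, centroids):
    """Assign an embedding to whichever category centroid is closest
    in Euclidean distance (stand-in for the BERT feature vectors)."""
    best_label, best_dist = None, float("inf")
    for label, centroid in centroids.items():
        dist = np.linalg.norm(np.asarray(vector) - np.asarray(centroid))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy centroids; in the real pipeline these would live in the BERT
# embedding space and be estimated from labelled example sentences.
CENTROIDS = {
    "symptom":    [1.0, 0.0, 0.0],
    "medication": [0.0, 1.0, 0.0],
    "treatment":  [0.0, 0.0, 1.0],
}
```

For example, `nearest_centroid([0.9, 0.1, 0.0], CENTROIDS)` falls nearest the symptom centroid.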
What's next for Synoptel The reach of Synoptel is diverse. It is open to exploration and we believe we have the capability to develop a system for: Sending alerts to remind patients to take their medications based on call notes Alerting the hospital system/your clinician via video/text message alert Setting reminders of your daily therapeutic doses for the patient Reminders added to your near ones’ calendar too so you have a full support system. A full power Chatbot to ease patient interaction Built With amazon-web-services-(aws-transcribe angular.js aws-comprehend-apis) react.js
Synoptel
An application that provides and stores a quick synopsis of live telemedicine consultations for elderly patients
['Manav Darji', 'Rajat Sahay', 'Anushna Banerjee', 'Nisarg Shah', 'Natasha Raut']
[]
['amazon-web-services-(aws-transcribe', 'angular.js', 'aws-comprehend-apis)', 'react.js']
44
10,436
https://devpost.com/software/umass-2
Inspiration A few years ago, Paige was on vacation in Maine and happened to stumble upon a Moose tour! Eager to see one, Paige jumped on the tour bus at dusk and patiently waited. While she was waiting, there was a moose fact song playing, and that's where it all started. The children's song highlighted how moose antlers are the fastest growing bone in the animal kingdom. Coupled with curiosity and a little background in biomaterials, Paige's imagination started taking over. How could we take this impressive regenerative aspect of antler bone and apply it to the human body to help prevent, treat, and even improve injuries? What it does Our product, Alkass, combines the technology of stem cell therapy and a stent, implant, or bioglass to increase bone regeneration. We theorize that we can utilize antler cells to reinvigorate the human cell environment, fixing the balance between the buildup and breakdown of bone. Placing these cultured cells within the body along with a stent, implant, or bioglass may aid in natural healing, and the absorption of these cells may allow the bone to regrow back to its intended strength. How we built it We 3-D printed all the bones that we used for the demonstration. Most of our project is theoretically based and modeled with Play-Doh. Challenges we ran into We ran into some significant challenges. We got stuck figuring out how we can actually "prove" this science. At the end of the day, we decided to trust our intuition, data we found online, and the biomaterials knowledge we have and connected them all. It was difficult at times trying to talk about systems we may have only heard of once, but it pushed us to broaden our spectrum and seek out answers to these questions. Accomplishments that we're proud of There are so many things that we were able to do in such a short amount of time that we never dreamed were possible. It was one of the first times we talked about theoretical science outside of a classroom.
We used our education to design, create, and implement a product for the first time. What we learned We have learned that no matter how many roadblocks we may face, as long as we keep working together and continue to problem solve effectively, we can overcome it all and succeed. What's next for UMass 2 We have some serious plans for UMass 2 and our product Alkass. We are going to introduce our project to our professors to hopefully solidify the science behind it. Built With 3dprinting
UMass 2
Alkass is more than just a product, it’s a lifestyle. Our product mimics the astounding regenerative abilities that deer species use to grow their antlers each mating season.
['Paige Ruschke', 'Cyrus Karimy', 'Maximilian Schott', 'Bryan Gong']
[]
['3dprinting']
45
10,436
https://devpost.com/software/minimal-intrusion-nursing-min-program-yx2ws7
Community Health Service Center Get the MOST out of the LEAST Inspiration Based on the global health and aging report published by WHO, there were already 524 million people aged 65 or above worldwide in 2010, and the number is expected to reach 1.5 billion by 2050, representing 16 percent of the world’s population.1 A large part of them may experience one or more chronic illnesses, disabilities, and even difficulties in living independently. These patients require long-term home monitoring of various health parameters, such as blood pressure and glucose level.2 Unfortunately, many of these tedious monitoring processes are likely to be forgotten. They may also pose significant intrusions to the elders’ normal lives. More importantly, most elder people are not able to effectively interpret these data until they meet their care providers, and due to sensory degradation, they are often unlikely to be aware of subtle changes in health conditions, which is especially dangerous during events that may jeopardize their health and lives. Inspired by the challenges addressed above, we would like to design a series of non-invasive and multi-functional devices that can measure patients’ health-related parameters in real-time and transmit the data into databases that can be accessed by both patients and their community health centers through our website, which is called Community-based Health Centers (CHC). There will also be algorithms evaluating patients’ health conditions based on their medical profiles. We hope that this concept would greatly improve the quality of life for elder people, as well as ensure their safety and welfare. What it does Our project is mainly divided into two components, the non-interruptive and multi-functional hardware and the website (database and connection system).
For the hardware portion, we design a series of non-invasive equipment that can take people’s health-related parameters in their daily lives without them doing anything considerably different from their routine. These devices will be embedded in people’s living environments, such as living rooms and bedrooms. Here are some examples: Spoon style glucose detector: Blood sugar level monitoring is important for diabetics, but it can be difficult to remember doing the measurement regularly. Thus, we design a special spoon embedded with a detector. When the patient puts food in their mouth, the spoon can automatically sample the patient’s saliva and perform subsequent analysis to obtain the blood sugar level. Thus, whenever and wherever the patients are eating with our spoon, their blood glucose level can be monitored. Multi-functional recliner chair: Monitoring blood pressure level and pulse is important for a variety of aging-related diseases, such as hypertension. We think it optimal to implant these functions within a comfortable recliner chair, where patients can sit to watch television or read books while their health-related parameters are collected even without their conscious awareness. This would minimize the need for the patient to THINK ABOUT taking the measurements and allow their data to be collected as they live their normal life. Another major part of our project is building a website named Community/Intermediate Health service Center (CHC/IHC), which is connected to the database. Collected data can be transmitted to our database through the Internet. Both the patients’ health providers and community-based medical professionals can log in to their accounts to view and manage the real-time health data of the patients. Functions of the Senior (patient) interface include: Login Visualized change in key measures (e.g.
blood pressure, heart rate) Functions of the Caregiver interface: Login Table of patients' conditions Algorithm to alert caretakers to unusual patient conditions Patient record & contact details We will also develop algorithms that can learn to decide whether the data of a specific person looks normal based on his or her personal medical record and give a suggested diagnosis to doctors if abnormalities or even potential emergencies occur. The doctor will then be alerted by the website about the data fluctuation as well as the suggestions, and the doctor can decide whether the situation is serious enough to intervene (probably calling their patients to check on them). Additionally, if the patients have any minor medical-related questions regarding their health, they can simply open the chatting feature (both text message and voice form) on our website to directly communicate with a community health professional. We hope that these features would significantly alleviate the pressure on elder people’s caring system by utilizing the valuable nursing resources with maximum efficiency, as well as allowing the elder people to be notified of any abnormality in time to avoid any further complications. How I built it Hardware: Spoon style glucose detector: we will embed an integrated circuit in the front half of a spoon, which includes a button-cell battery as the energy source, a saliva collector, an analog/digital signal converter, and a Wi-Fi transmitter module. A rubber-like material would cover the circuit and seal the chip and battery from the outside environment. There’s a small valve on the cover, right in front of the saliva collector. The valve will open only when the user’s tongue touches it and provides enough pressure. Then saliva can flow through the collector, which has glucose oxidase embedded inside. Glucose in saliva will react chemically with the enzyme and change the current at different levels, depending on the glucose concentration.
The current, an analog signal, will then be transformed into a standard digital signal by the A/D converter and finally packed and transmitted over Wi-Fi by the transmitter module. Multi-functional recliner chair: The chair can measure the user’s pulse, blood pressure, and temperature. There is a non-contact infrared digital thermometer buried inside the armrest of the chair with only the detector facing out. If the user rests their arms on the armrest, the thermometer can scan them and obtain their body temperature. What's more, the blood pressure monitor that is fixed at the top left part of the chair can also be used to detect the pulse and blood pressure, integrate the data, and then send them to the data converter that is connected with Wi-Fi (together with the body temperature data), which can send the data to our database. Website & Database: To present the monitoring data, we designed a website with two user interfaces, one for patients and one for caregivers. Languages: CSS, HTML, JavaScript, MATLAB To build the automatic warning system, we deploy MATLAB algorithms. The model identifies problematic heart signals by performing a Fourier transformation on them. Challenges we ran into Initially, our team had a tough discussion on the devices we were attempting to design. We wanted to pack as many functionalities as possible into the comfy chair to minimize the equipment the patient needs to buy for measuring their health data. However, as we explored our ideas further, we found it impractical to pack all the measuring devices into the chair – it would incur an exorbitant cost. Hence, we conducted some research and reanalyzed the needs of most elder people, selecting a few crucial features to implant into our chair. In addition, our team initially knew little about website construction, which constitutes a major component of our project.
After watching several tutorials on basic web design, we were able to figure out the fundamentals of website construction and we managed to successfully develop a website that allows us to effectively demonstrate our ideas. Accomplishments that I'm proud of We’ve designed two unprecedented multifunctional devices that can be used as a chair or a spoon as well as a health monitoring system. They can measure users’ vital signs as well as blood pressure and blood sugar level without the user intentionally taking any measurements. We integrate the measurements into people's daily lives. We have completed the device design, UI design, the proof-of-concept MVP, and the development of a functional web app. What we learned We learnt a lot about equipment design, such as how to implant the sensors and signal transmitters with optimal efficiency and practicality. We also learned many areas of website and software development, including Web UX/UI design and mockups, video editing using iMovie, calling APIs, and CSS and HTML coding. Furthermore, we have a deeper understanding of how to convert conceptual ideas to real-life applications, as we took our design from an entirely intangible notion to equipment that older people can use to live a higher-quality life. What's next for Minimal Intrusion Nursing (MIN) program To promote our project further, we can improve our current MIN devices and develop new ones. We can also help more of the senior population by covering more communities, collaborating with local health centers or establishing new CHCs. Furthermore, we want to expand our database such that more chronic diseases can be diagnosed automatically, which may conserve the already limited medical resources. Additionally, we expect to extend the functionalities of our CHCs and the websites so that more services could be provided to people in need. Built With css html javascript matlab Try it out community-health-center.github.io
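As a rough illustration of the Fourier-based warning idea above, the sketch below (in Python rather than the team's MATLAB, with hypothetical alert thresholds) estimates a heart rate from the dominant spectral peak of a pulse signal and flags values outside a plausible range.

```python
import numpy as np

def dominant_heart_rate(pulse, fs):
    """Estimate beats per minute from the dominant FFT peak of a pulse signal."""
    spectrum = np.abs(np.fft.rfft(pulse - np.mean(pulse)))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)  # roughly 42-210 bpm
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

def is_abnormal(bpm, low=50.0, high=120.0):
    """Hypothetical alert thresholds; real ones would use the patient's record."""
    return bpm < low or bpm > high
```

A real deployment would replace the fixed thresholds with per-patient baselines learned from the medical profile, as described above.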
Minimal Intrusion Nursing (MIN) project
Our project is mainly divided into two components, the non-interruptive and multi-functional hardware and the website (database and connection system).
['Rongrong Liu', 'Jia Pan', 'sophia Xu', 'Yian Qian', 'Tianyue Zhu']
[]
['css', 'html', 'javascript', 'matlab']
46
10,436
https://devpost.com/software/skin-lesion-classification
Deep learning model architecture Landing page of web application Info page Results page with predictions Inspiration Due to the current global pandemic, it may be difficult and risky to go to the hospital to receive check-ups, and a lot of family doctor appointments are done virtually over call. Most skin lesions need to be monitored over time to properly identify which type they are and how to go about treatment. With our application, we’d like to bring a simple diagnosis at the cost of just a single click. This way, the user can have a general idea of what the condition could be and stay educated and informed. What it does Classifies images of skin lesions into 1 of 7 categories and outputs probabilities for each of the 7 classes: Actinic Keratoses Basal Cell Carcinoma Benign Keratosis Dermatofibroma Melanoma Melanocytic Nevi Vascular skin lesion How we built it Using the HAM10000 dataset we built a deep learning model using a simple CNN architecture in Python. The model was stored in a .h5 file that could be read in through Flask and used to output predictions based on an image the user uploads. We also have an info page that displays examples of each class of skin lesion and includes helpful links to informative websites. There is also a feature that allows the user to email their family physician with the image and their results. Challenges we ran into The original front end was built in React and was very Javascript heavy. Our Python program needed to take in the user's uploaded image in order to predict the final output and we could not figure out how to pass that information from React to a python script, then pass the predictions back to React. We decided to migrate our entire front end over to Flask. What's next? In the future, we would like to redesign the UI and create a mobile version of the application to make the app more user friendly. Users will be able to directly take a photo from their mobile phones to upload.
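The prediction step could look roughly like the sketch below. The `format_predictions` helper is illustrative (not the team's code), pairing the model's 7-way softmax output with its class labels; the model file name and Keras call in the comments are assumptions.

```python
import numpy as np

# The 7 HAM10000 lesion classes listed above.
CLASSES = [
    "Actinic Keratoses", "Basal Cell Carcinoma", "Benign Keratosis",
    "Dermatofibroma", "Melanoma", "Melanocytic Nevi", "Vascular skin lesion",
]

def format_predictions(probs):
    """Pair each class probability with its label, highest probability first."""
    ranked = sorted(zip(CLASSES, probs), key=lambda pair: pair[1], reverse=True)
    return [(name, round(float(p), 4)) for name, p in ranked]

# Inside the Flask route, something like (hypothetical names):
#   model = keras.models.load_model("model.h5")
#   probs = model.predict(batch_of_one_image)[0]
#   results = format_predictions(probs)
```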
Built With flask html5 javascript keras python Try it out github.com
Skin Lesion Classification
A web application that allows users to upload an image of their skin lesion and the model will output the probabilities of it falling under 7 possible categories
['Rebecca Ma', 'Padmaashini Sukumaran', 'Candace Ng', 'Yewon Kwak']
[]
['flask', 'html5', 'javascript', 'keras', 'python']
47
10,436
https://devpost.com/software/neopet-2pgzxt
The Novel Application to Parkinsons and Essential Tremor (NeoPET) provides an innovative, seamlessly-integrated way of tracking Parkinson’s and Essential Tremor symptoms and treatment responses through a smartphone. By utilizing machine learning to analyze and quantify symptom severity, NeoPET is able to provide critical, time-sensitive information to contextualize disease progression and treatment response. The portal offered by NeoPET allows physicians to be alerted to urgent events and evaluate treatment options at any time Inspiration In the U.S., each year, 60,000 Parkinson’s cases are added to the 1,000,000 currently diagnosed patients, while only 16,000 motor specialists or translational DBS clinicians, often consolidated around urban areas, are available to treat them. [1] These temporal, geographical, and physical limitations make for an extremely frustrating experience for many PD and ET patients alike, especially for first visits or if they feel a treatment is not working — or worse. Oftentimes, these specialists are booked months in advance for new or existing patients, making both prompt and accessible intervention extremely difficult. [2] Parkinson's Disease (PD) and Essential Tremor are dynamic in both how they present themselves in patients and how the research community quantifies severity and progression, meaning existing data that cannot cross platforms is often lost. Furthermore, treatment options such as L-dopa or deep brain stimulation (DBS) settings can have rapid, drastic effects that do not set in until hours, days, or weeks after implementation. We identified a need for an app that would provide physicians with the real-time data needed to modulate treatment plans and intervene on rapid progression without the need for an in-person visit. What it does The app passively and actively tracks various features that are used to evaluate the severity of Parkinson’s in clinic, through day-to-day movement and activities.
Primary complaints from PD and ET remote tracking methods are constant device interactivity and physical discomfort of a wearable, so we strive to combine a pleasant, almost completely background process for the user while not sacrificing accuracy and reliability. Phone sensors can flag trigger events with strong confidence to begin or terminate a certain datastream collection. The amalgamation and detailed breakdown of these results, posted at the periodicity of collection, can be viewed through a physician portal. This implementation enables physicians to continuously monitor their patients post treatment with little to no inconvenience for the patient and provides the clinician with more data to drive better clinical outcomes and more efficiently optimize treatment plans. How I built it Using Android Studio, we developed an app that implements the capabilities of smartphones to serve as a diagnostic tool for symptom severity. By passively tracking the user’s typing habits and gait through smartphone keyEvents, accelerometers, proximity sensors, quaternions, and gyroscopes, we obtained information which would be automatically uploaded onto the cloud. When our sensor pipeline, running as a background service, senses phone-in-pocket and several steps, recording of the IMU-6/DOF initiates, and terminates after a 5 second pause in motion or 2 minutes of streaming. This gait data is then processed in MATLAB, extracting 20 time-domain features and 9 frequency-domain features, all significant to the progression and state of PD and ET. This includes Stride Time, Stride Variance, omnidirectional Amplitude Variance, omnidirectional peak frequency, and omnidirectional median frequency. These are both raw classifiers researchers use and also features which can be imported into an SVM if given more grouped data. These were not able to be detailed in the video due to time constraints but we are happy to answer questions regarding their use and relevancy.
For a more comprehensive patient profile, we also combined two features that are all too often issued in isolation into one symbiotic stream. We offer a patient journal and mental state survey in the local app, recommended to be taken once a week or more often. However, PD adherence to tasks is low, especially surveys, as most people feel they are not meaningful, even if they are delivered standardized as instructed by UPDRS. To combat this and collect data efficiently, built into this app we have an objective bradykinesia and tremor+handedness analysis using keystroke events to extract 9 hold-time and 18 latency features. Supporting this leveraging function is a database with PD and naive keystroke data, which MATLAB then used to normalize the data, reduce the dimensionality using a Linear Discriminant Analysis technique with pseudolinear discrimination, and train several binary classifier support vector machines on data subsets using a standardized Gaussian kernel and radial basis function, which yields 98.08% accuracy for overall PD presence, 94.12% accuracy for handedness of bradykinesia and 96.37% accuracy for tremor presence. This is following a k-fold cross validation (k = 10) to ensure overfitting is not happening. These data streams are then stored and viewable in a classified and succinct way for the clinician to skim or deeply analyze themselves. Challenges I ran into The upload and download from data servers to move the data between the app, computer, and website proved difficult because MATLAB did not have native functionality for Google Firebase. Developing an optimal LDA and SVM was an issue, and we had to make do with the data out there, meaning not all gathered functions could have percentile features (percentile severity is a growingly preferred method of continuous progression over discrete) based on SVMs.
Had we more time, or more data specifically, we'd have attempted to use accelerometer data for hand tremors as well, but databases are not accessible without requesting them from the owner or author, which is not within the time scope of MedHacks. Accomplishments that I'm proud of We are proud of our resilience. Many times we did feel as though we’d gotten in over our heads, but we persevered past these doubts and misgivings and created a product that we are proud of and believe can make a difference in the lives of Parkinson's patients. We take pride in our ability to rise to the challenge and develop a functional app with multi-source data acquisition capabilities within 48 hours, having little prior experience. What I learned Android app development and how to utilize web APIs for data transfer. What's next for NeoPET In the future, we would like to implement more functionality in our app such as encrypted call data processing to assess dysarthria, but privacy concerns pushed this out of scope for now. In the future, if we gain traction, we are considering implementing a voice, emotion, and facial masking analyzer (other important parameters) that works in the background of virtual interactions with clinicians for telehealth appointments. We also found ANNs can be used on accelerometer data to classify the likely “location” (bed, pocket, bag, etc.) of the phone, which would provide an even more accurate trigger event system. Additional features would improve our classifier and provide more valuable feedback to the physician. Works Cited Statistics. (n.d.). Retrieved September 06, 2020, from https://www.parkinson.org/Understanding-Parkinsons/Statistics Chard, G. (2014, May 1). Telehealth to Digital Medicine: How 21st Century Technology Can Benefit Patients (Issue brief).
Retrieved September 6, 2020, from https://docs.house.gov/meetings/IF/IF14/20140501/102173/HHRG-113-IF14-Wstate-ChardG-20140501.pdf Built With android firebase html java javascript matlab Try it out drive.google.com
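The keystroke classifier described in "How I built it" (normalize, reduce dimensionality with LDA, then a binary SVM with a Gaussian/RBF kernel, validated with 10-fold cross-validation) could be sketched with scikit-learn in place of the team's MATLAB toolchain. The synthetic data below stands in for the 9 hold-time + 18 latency features; it only illustrates the pipeline shape, not the reported accuracies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for 27 keystroke features (9 hold-time + 18 latency).
X = np.vstack([
    rng.normal(0.0, 1.0, size=(60, 27)),   # naive typists
    rng.normal(1.5, 1.0, size=(60, 27)),   # simulated PD-like shift
])
y = np.array([0] * 60 + [1] * 60)

clf = make_pipeline(
    StandardScaler(),                      # normalize features
    LinearDiscriminantAnalysis(),          # dimensionality reduction
    SVC(kernel="rbf", gamma="scale"),      # Gaussian/RBF-kernel SVM
)
scores = cross_val_score(clf, X, y, cv=10)  # k-fold cross-validation, k = 10
```

With two classes, LDA projects down to a single discriminant dimension before the SVM, mirroring the pseudolinear-discrimination step described above.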
NeoPET
The Novel Application to Parkinsons and Essential Tremor (NeoPET) provides an innovative way of tracking Parkinson’s and Essential Tremor symptoms and treatment responses through a smartphone.
['Dominic Marticorena', 'Marcos Zachary', 'Zack Goldblum', 'Jeffrey Ai']
[]
['android', 'firebase', 'html', 'java', 'javascript', 'matlab']
48
10,436
https://devpost.com/software/servicedog
Home screen: Say hi to your cute service dog! Engage in fun and interactive activities! Take a selfie while wearing a mask as one of the daily challenges! Win prizes by completing tasks. You can earn a new toy for your dog, or unlock a new dog (or cat)! Meet new friends and see each other's treat scores! Unlock a dog park by completing daily activities! Here, you can play with your dog or dress them up. Submission for Track #3: Patient adherence and quality care during a global pandemic Inspiration The COVID-19 pandemic has undoubtedly changed how education is provided, changing the in-person classroom setting to a virtual, remote one. While this barrier will eventually disappear as the pandemic settles down, there will still be ongoing concerns about having young children adhere to essential health practices, such as wearing masks properly, washing hands thoroughly, and practicing social distancing. Our goal, through this app, is to positively reinforce these essential health practices to children (particularly those from ages 5 to 12) and provide a fun social platform for interaction with their peers. What it does Our app functions in a mobile game-like manner, with educational aspects, social aspects, and a self-logging system of garnering "treats" or points to journal progress and status. When you first open Service Dog, you are greeted with the home page, where 4 other tabs lie at the bottom of the screen. All of these contribute to positively reinforcing healthy practices. Home Starting with the Home page, you can view your default avatar, a dog, and your current number of “treats”, which is the point system used in Service Dog. These treats are gained over time, through completing activities and customizing your dog. You can click on the icons on the Home screen to redirect you to the My Park page, which is a virtual play area for your dog. There, you can stylize your avatar, do activities with your dog, and customize the park’s background.
In this page, you can use the treats you’ve accumulated to make your dog as happy and stylish as possible! My Friends! Next, you can navigate yourself to the first tab to the far left of the bottom of the screen, and there you have the “My Friends” page. This page allows you to view a leaderboard of your current friends’ profiles and statuses, where it shows how many treats they currently have from completing daily activities. This page helps you connect while promoting healthy practices with your friends. My Prizes! By completing your daily activities, you earn treats that go into your account. These treats allow you to unlock various prizes in the “Prizes” page, including different avatars to choose from and clothing to customize your avatar. Daily Activities! You can also appear on your friend’s Friend list showing the number of treats you have, when you accomplish healthy tasks shown on the “Daily Activity” page. On this page, there is a list of activities you can perform, where a check mark would appear next to the task name after you complete them. Examples of some activities include: washing your hands while singing, drinking 8 cups of water, and taking a walk 6 feet apart. Each activity promotes staying healthy and safe, including washing hands, staying hydrated, wearing a mask, and social distancing while being active. Learn More! The last page to your far bottom right is the “Learn More” page. Here, you can read infographics that answer questions related to healthy practices during the COVID-19 pandemic. For easy navigation and information, there is also a help button at the top right-hand corner of most pages that gives you a brief description of each page and its purpose. How we built it We created and designed the majority of our assets using Figma. To create our high fidelity prototype, we took the slides from Figma and made them interactive with InVision.
Challenges we ran into Initially we wanted to program our app using Android Studio, but due to time limitations and time zone differences, we realized that it was not feasible. Consequently, we opted to create a high-fidelity prototype that wasn’t bogged down by the technical nitty-gritty of coding. For the creation of our high-fidelity prototype, the sheer number of assets that we had to generate was a challenge for us. In addition, we were also limited by the functionalities of InVision (which occasionally caused some trouble for the 'back' button). Accomplishments that we're proud of We are really excited about our final product -- both for the overall concept and the design aesthetics and functionalities! For the former, an experienced mentor in the industry was really impressed by our app and would be likely to use it with his children if it existed. And for the latter, we have done user testing for our app and users of our target demographic expressed a strong liking towards the design of our app & would use it! What we learned We learnt how to do rapid prototyping and arrived at a high-fidelity prototype within a very short duration. We also learnt to use tools like Figma and InVision to help develop our prototype. What's next for Service Dog While our app is currently in its prototype phase through InVision, we have plans for potential future implementations and a direction that we’d like to see this app move towards. Already planned out in the app prototype is a camera function used for taking “mask selfies” to gather more treats. Within this photo upload function, we can add a mask-checking system through machine learning and computer vision techniques to correct the user if a mask is being worn wrong, say it isn’t covering the nose. Location services can also be used to send the user notifications once they reach home or enter a public area. Notifications can include “Did you just come back home? Wash your hands to stay safe!” or “Are you at a cafe?
Make sure to follow social distancing and mask guidelines!” Another potential application of location services is to use API technology to detect Bluetooth signals from nearby devices, warning the user of large social gatherings. Finally, we plan to add in parent and teacher accounts to allow connection with their kids. This will hopefully provide useful feedback for parents and teachers on the child’s progress. With more time and resources, we believe our app has the potential to grow into a useful tool for both children and adults. As our target audience will be between the ages 5 and 12, we will first attempt to code via Xcode and Swift to produce an iOS application, as younger audiences tend to skew towards iOS devices. Built With figma invision Try it out invis.io
Service Dog
Does your child struggle to keep up with the daily tasks of the pandemic? Our app, Service Dog, can give your kid a friendly canine companion to keep them accountable :)
['Aine Kenwood', 'Nicholas G. Kim', 'Nicasio Ng', 'Delphine Tan', 'Olivia Wang']
[]
['figma', 'invision']
49
10,436
https://devpost.com/software/mapaxcess
Screenshot of our app We had to do some extensive research for assessing risk levels. For the technical side, Expo had some limitations that we encountered as well. Built With expo.io express.js google-maps mongodb node.js react react-native typescript Try it out github.com
MapaXcess
Our idea was to create an application which would help users know about risk levels for certain establishments, so that they can shop/plan out their itinerary in a safer way.
['Yin Nan Huang', 'Haihan Chen']
[]
['expo.io', 'express.js', 'google-maps', 'mongodb', 'node.js', 'react', 'react-native', 'typescript']
50
10,436
https://devpost.com/software/eegenie
User Setup Emergency Contact Setup User Alert Emergency Contact Alert Seizure Diary Preictal Spectrogram Interictal Spectrogram Inspiration Epilepsy affects 50 million people worldwide, and epileptics often experience depression and anxiety due to the unexpected nature of seizure onset. Patients who live alone or those who do not recall seizures are especially vulnerable. 75% of seizures can be attributed to unknown causes and could benefit from personalized treatment and analysis. What it does EEG is used as the "gold standard" in clinical seizure diagnosis, and portable EEG shows great promise for collection of high quality data. Our software aims to not only integrate resources such as a seizure diary and emergency/EMS contact, but also to use machine learning to determine whether or not the user is heading into a seizure (preictal stage). We believe EEGenie can lower anxiety from unexpected seizures, help users understand their seizures/triggers, and lower the risk of injury by warning and informing emergency contacts by leveraging personal data. How we built it The workflow of the prediction classifier involves a Short-Time Fourier Transform to de-noise the data and prepare it for a convolutional neural network. The output is then passed through an SVM classifier to finally determine whether a seizure is likely or not. The goal is to have this classifier located on a server, such that the computations involved in predicting seizures are done on the cloud--so as not to burn through the user's battery. The mockup of the front end of the app we have is generated in Figma and would be implemented in React Native to make API calls to and from the server. The mockup includes the emergency alert system and the seizure diary. Challenges we ran into One of the most important challenges we faced was the machine learning of large datasets and lack of processing power.
False positives and negatives were also a concern that could be improved by analysis as more data becomes available. Meanwhile, they have been mitigated by cancel buttons, cancel notifications, and a 60s delay before emergency contacts are alerted. Data privacy and confidentiality must be considered, as the personal datasets generated may be useful for future research. Enforcing patient consent and anonymizing names with client/patient IDs can be used to address this. Finally, the difference in signal quality between consumer and medical EEGs was a concern. Studies such as 'Comparison of Medical and Consumer Wireless EEG Systems for Use in Clinical Trials' by Ratti et al. have demonstrated the Mindwave Fp1 to be comparable in power spectra to medical EEGs, but more considerations must be taken into account regarding the limitations of portable EEGs. Accomplishments that we're proud of Researching the portable EEG, finding promising results with it, and designing the prediction algorithm were among our biggest accomplishments. What we learned Interesting facts about epileptic patients, while also furthering our knowledge of machine learning. And more importantly, working with a team that helped each other every step of the way. What's next for EEGenie Test with consumer (portable) and medical EEGs Incorporate more parameters like biochemical changes in the data set. Add fall protection in the form of an inflatable to protect the head. Record all data to improve the prediction rate and make it personalized to the patient Built With datasets eeg figma flask python raspberry-pi react-native tensorflow
EEGenie
Monitoring, predicting, and supporting epilepsy patients with a portable EEG to create datasets for personalized treatment and alert emergency contacts
['Colton Bogucki', 'Faith Lum', 'Riya Gupta', 'Michael Liew']
[]
['datasets', 'eeg', 'figma', 'flask', 'python', 'raspberry-pi', 'react-native', 'tensorflow']
51
10,436
https://devpost.com/software/elderlycare
GIF Home Inspiration The current number of people aged 65 or older is estimated at 698 million. These individuals have multiple comorbidities that often need medical attention, and some may have mental health issues. However, with the current pandemic going on, the elderly are advised to stay home to protect themselves from the coronavirus. How do these individuals see their doctors and have care provided for them? How can they monitor their medication usage when they are isolated, without sacrificing their autonomy over their own health? How can loved ones monitor these patients? What it does A digital monitoring and care service for elderly people living at home with their family! Keep track of daily mental and physical health Set up Appointment & Virtual Visiting Medication Reminders & Recorder How we built it Python for the web, Balsamiq for the UI/UX design Challenges we ran into Communicating with team members in different time zones The gap in understanding what the elderly are facing Accomplishments that we're proud of Solving the problems through teamwork What we learned Taking care of the elderly is important and needs to be done by following medical guidance. What's next for ElderlyCare Do patient interviews and get a solid understanding of how our solution can really help them. Built With balsamiq python Try it out eldercareapp.netlify.app
elderCare
Providing optimal care for the elderly while at home!
['Osman Warsi', 'Valerie Lima', 'Zachary Flahaut', 'Aung Phyo Linn', 'tao zhang']
[]
['balsamiq', 'python']
52
10,436
https://devpost.com/software/covid-updates
Inspiration There is an important need for technological tools to manage the spread of COVID-19, as manual methods run into problems such as faulty memory, delayed communication to potential contacts, and increased strain on health institutions. Contact tracing tools have been proposed and implemented in other countries using digital technologies. However, similar solutions have failed to gain ground in America due to controversy over digital privacy and government surveillance. The problems that we currently face are numerous. Recent Survey data shows that more than 7/10 Americans are unwilling to download a contact tracing app -- with the majority of respondents citing privacy as the reason. Public Health workers face harassment and threats for conducting contact tracing in local communities. Location tracking is disallowed by the Google/Apple joint app -- but healthcare officials say that location tracking is crucial for successful contact tracing. Furthermore, most contact tracing apps depend on a high percentage of the population using it in order for them to be successful. We hope to present an application that can prioritize privacy concerns and still provide relevant and important information to users in real-time, while also being useful for healthcare officials. We also believe that our approach can provide a solution that doesn't depend on having a certain percentage of the population use it. We have selected patient adherence during a pandemic as our track because we believe that our application's primary goal is to influence the public's behavior and encourage them to make the right decisions to stop COVID. What it does Our application tracks and stores the important user locations for two weeks locally Gives regular updates about local coronavirus cases Sends notifications when necessary E.g. 
---> Possible exposure (A health care official determines a location and timeframe, this information is sent out to all users, and each user application checks to see if their stored locations match) ---> High-risk Area (The user is entering a region where elevated cases can be found) Why we built it this way Our application's design will allow it to be successful even without a high percentage of the population using it. Whereas Bluetooth-based approaches require other people to opt into the system, our application doesn't require other people to opt in to be effective. Furthermore, all location data never leaves your phone and there are no outgoing messages, which leaves no reason for anyone to be concerned about privacy. While cases are still high across America, there are still great disparities in case concentration between cities and communities that make national-level, state-level, and even county-level numbers almost irrelevant. This is why up-to-date data that is as geographically specific as possible is important for the public to know. By creating an application with automatic updates, we provide an easy way for people to quickly be informed when they do decide to leave the house. Finally, as cases are declining in many areas, sudden spikes and outbreaks are still seen, even in the form of clusters. Our application will allow health care officials to quickly respond to these outbreaks by notifying people as rapidly and accurately as possible. Clusters can arise from restaurants, events, clubs, etc., and it can be difficult to track possible contacts without location history.
How I built it Android Studio and Google Maps SDK Challenges I ran into Mobile programming was a first for our entire team Accomplishments that I'm proud of We've built a somewhat functional application What's next for Covid Updates --Creating a notification system --Using Firebase to send updates regarding clusters --Make the background location storing more efficient and fine-tuned Built With android-studio google-maps Try it out github.com
Covid Updates
Informed Decision Making based on Personalized Updates
['Angela Chen', 'Jasun Chen', 'Katrina Jolly']
[]
['android-studio', 'google-maps']
53
10,436
https://devpost.com/software/animo-81d57a
Animo GIF Inspiration Given our team’s medical backgrounds, many of us have witnessed first-hand the devastation that neurodegenerative diseases can cause. We understand the many challenges that these patients face - from long waiting lists to poorly personalised regimes. Given our long-standing interest in the research space of these diseases, combined with our technical backgrounds, we aspire to design a patient-centric technological solution to better connect clinicians with the well-being of their patients. What it does Clinicians measure Parkinson’s severity using a UPDRS score as a basis for drug prescription. Studies have shown machine learning models capable of predicting said UPDRS score from patient vocal recordings. It is estimated that 90% of Parkinson’s patients have some form of vocal issue in early disease stages. Our app collects patient voice recordings of the ‘a’ vowel phonation into the cloud system and predicts disease severity with machine learning. This will give clinicians a better understanding of disease progression and ultimately enable better drug management. Research seems to be very supportive of translating this to other neurodegenerative diseases, such as Alzheimer's. How I built it Our prototype machine learning model is developed from an open-source database containing 188 Parkinson’s patients and 64 healthy control patients. Using Python Scikit-learn, our decision tree classifier has achieved an ROC AUC of 0.80, with a specificity of 0.64 and sensitivity of 0.9. The 100 best features were selected using SelectKBest. This algorithm runs on feature-extracted audio waveforms, collected from our phone app. Challenges I ran into The database mentioned above does not have patient severity measurements. We hope to extend our prototype algorithm with a more representative database and advanced feature extraction and analysis, to improve its sensitivity and specificity.
We would also demonstrate this in a clinical trial, as required by FDA guidelines. Accomplishments that I'm proud of Speech analysis for Parkinson’s severity prediction is a new and rapidly growing field with great potential in improving patient drug management, and our solution at this time appears to be novel. The opportunity to equip clinicians with daily data on disease severity will clearly benefit the quality of life of Parkinson’s patients. What I learned Just how much potential this growing field holds, and how daily severity data could improve drug management and the lives of Parkinson’s patients. What's next for Animo We have been in touch with experts in the field of Parkinson's research, and the feedback so far has been very positive on both the problem and the solution we’ve identified. Going forward, we hope to partner with both public health bodies and external funding initiatives to bring this vision to life. To continue our research in the US we would hope to partner with leading organisations such as the Michael J Fox research group for Parkinson's Disease as well as applying for an NIH grant to pursue the mobileHealth space. We additionally would consider a joint venture with companies such as Novartis who manufacture levodopa. Built With google-cloud kaggle python scikit-learn
Animo
With seamless data collection, Animo allows clinicians to provide personalised regimes to patients suffering neurodegenerative diseases, transforming the treatment landscape and the lives of patients.
['Rohan Sanghera', 'Dana Z', 'YUDHISTIRA LUMADYO']
[]
['google-cloud', 'kaggle', 'python', 'scikit-learn']
54
10,436
https://devpost.com/software/medi-care
Inspiration Our project is based on positive results from the following research: http://ceur-ws.org/Vol-2142/paper10.pdf We also provide a better solution by just using smartphones What it does Helps track medicine intake using AR, image recognition, and 3D models How we built it Using Unity3D and various SDKs like Vuforia and echoAR Challenges we ran into Ran out of time to build our own REST API for drug information Accomplishments that we're proud of Integrating the frontend and backend What we learned A lot about how big a problem medication adherence is; we read various research papers too. Also learned a lot about the Unity GUI system and echoAR two-way communication What's next for Medi-cARe A better notification system and timer. Built With c# echoar tts unity vuforia Try it out github.com
Medi-cARe
AR based solution for patient adherence
['Vaibhav Suri', 'Ashit Mehta']
[]
['c#', 'echoar', 'tts', 'unity', 'vuforia']
55
10,436
https://devpost.com/software/petals-ejm1r7
petals landing page home page track progress! navigation doctors can add/customize tasks for patients! Inspiration The inspiration for Petals stemmed from the growing need for patient adherence assistance amid the pandemic, where uncertainty in routine health habits greatly impacts our most vulnerable populations. We wanted to create a healthcare specific communication application which congregates relevant and useful information while streamlining the communication channel between patients and their healthcare providers. What it does Our app promotes patient adherence and offers a unique opportunity for healthcare providers to provide quality care. It allows doctors and patients to communicate in a novel way, as the app allows healthcare providers to assign tasks (ex. Physiotherapy stretches) and patients to submit their proof of completion of each task. Additional functionalities are offered such as clinic locating, calendar scheduling and more. How we built it We used javascript and React-native to program our mobile app. Our design process began with Figma diagrams, which helped us envision how we wanted the app to look. We then constructed all of the screens on React-native and viewed our designs via Android Studio emulator. Finally, we implemented functional details such as navigation and button onPress actions in order to complete our app. We worked with Firebase to store all of our app’s user information. Challenges we ran into Our team had varying levels of experience with react-native and javascript, including members who didn’t have any prior experience at all. It was quite challenging to learn from scratch under such a short amount of time. Furthermore, there were some challenges with installing our software and getting the working environment set up. During app construction, there were challenges with some of the functionality implementation. 
Debugging was also very challenging at times when we weren’t sure why the app wasn’t fully behaving as intended. Accomplishments that we're proud of We are very proud of ourselves for building a functioning and practical app in such a short amount of time! Great job everybody! What we learned All of our team members learned new things about react-native, and those of us working with Firebase also learned new things in that regard. What's next for Petals We will make more cool apps together, and try to develop a cure for COVID-19 using a combination of C++ and C#. Built With android-studio figma firebase google-cloud javascript react-native xcode Try it out github.com
Petals
Keep up with your health and receive tailored feedback from your healthcare provider anytime, anywhere. Petals is a unique virtual pocket clinic, inspiring patient adherence and quality healthcare.
['Veronica Nguyen', 'Ethan Tan', 'Emily Lukas', 'Ivy Han']
[]
['android-studio', 'figma', 'firebase', 'google-cloud', 'javascript', 'react-native', 'xcode']
56
10,436
https://devpost.com/software/eyes-first-your-central-ocualr-telehealth-solution
Front page of our app with a clear and simple UI Inspiration We wanted to design an application that would help telehealth visits by providing more information to doctors. We found that at-home ocular tests are not readily available in a single package. What it does Home testing would be able to aid a physician in diagnosis by tracking health data over time and aid in telehealth visits by providing more information than is currently obtained with more general medical applications How we built it Built in Android Studio with Java and Kotlin. Audio instructions and images are embedded for a better user experience, especially for people with eye problems Challenges we ran into Loss of some team members, leaving only one person on the team with app development experience Accomplishments that we're proud of We ended up finishing our project with 2 team members after starting with 4. It increased the work that needed to be done, but we were able to deliver a product What we learned What's next for Eyes First: Your Central Ocular Telehealth Solution Building out the rest of the platform, and doing more customer discovery on our idea to further validate market need and product design Built With android-studio git github java kotlin
Eyes First: Your Central Ocular Telehealth Solution
A home diagnostic aid for ocular health to increase convenience and reduce cost by minimizing the need for an in-person visit
['Brandon Gaitan', 'peiyichiang Chiang']
[]
['android-studio', 'git', 'github', 'java', 'kotlin']
57
10,436
https://devpost.com/software/pillpoint
Medication Classification vs. Amount of Images used Data between training and testing sets for Deep Learning Testing Data Values Training Data Values Deep Learning Learning Progress Pill Sorter Pill Tray Housing Unit Our project, PillPoint, is an augmented drug identifier and sorter for automatic pill dispensers. Automatic pill dispensers are not a novel concept and already serve as a great resource for those who take many medications daily, but we feel there are some issues with them that could be tackled. The largest is that most of these dispensers require the pills to be very carefully loaded in, and for the groups for whom these dispensers are targeted, namely the elderly and people with conditions affecting memory and fine motor skills, loading in the medications may prove a difficult barrier. In addition, we feel that the design of most pill dispensers is too reminiscent of laboratory equipment and could take on a more subtle, kitchen-appliance-like aesthetic. In our device, every pill inserted will be analyzed to determine its brand. The patient can pour in an entire month’s worth of pills all at once, where they will be automatically identified and sorted based on pill dimensions and markings using image recognition. The patient would then input what times they want to take their medication(s) so that the PillPoint can dispense and alert them at those times. In training our model, we used the Deep Learning Toolbox and Image Processing Toolbox from the MathWorks Suite. We trained our model using a dataset of 1,322 images which consisted of twelve medications in their numerous pill/capsule varieties, and had the model use deep learning to analyze the characteristics of pills from the dataset with the goal of correctly identifying each drug. In addition, we provided a design for the PillPoint that uses a funnel to intake the pills and delivers each dose via a slide and tray delivery system.
One thing that definitely needs work is our model: in the limited time we had, we were able to train a model with 45% accuracy. For PillPoint to be a viable product, accuracy would need to be incredibly high to avoid any liability issues. We also struggled to assemble the program to schedule dose reminders. Despite the lower-than-ideal accuracy rate, the experience taught us a lot about using deep learning software, and we feel that we made some important steps forward in tackling this issue. Built With matlab solidworks Try it out drive.google.com github.com
PillPoint
An automated pill dispenser that would automatically sort and distribute pills at the desired time using AI
['Gelyn Balcita', 'Adam Harb', 'Brianna Orozco']
[]
['matlab', 'solidworks']
58
10,436
https://devpost.com/software/real-time-covid-19-risk-assessment
Inspiration As members of a wider society deeply affected by the COVID-19 shutdowns, we were inspired to create a platform which may give users peace of mind when venturing into a post-pandemic world, and which could at the same time be a useful tool to assess the real-time risk of virus transmission, enabling public officials to rapidly address fluid reopening and closing of public spaces. What it does Displays a heat map of buildings in the area of focus. It shows metrics on the risk of entering a building based on the positive case rate in that city and the number of people in the building at any time. Allows businesses to input relevant data, e.g. masks required, additional ventilation, maximum occupancy, and any other COVID-19 precautions, which will show up as a pop-up window when the user clicks on a building or location. How we built it We created a simple front-end (React) to display Google Maps with a heat map overlaid displaying the risk of certain areas in the city. We have a back-end (Node.js and Express) to store and calculate the risk factor of each building in the city. Challenges we ran into Practical implementation of the mathematical disease-spread model proved to be outside the scope of this MedHacks experience. We specifically had problems with probability density modelling and the average walking speeds of individuals in the businesses. Although the intended model (Gorscé et al., 2014) was established and feasible, integration into our real-time web application required extensive numerical method techniques. These challenges are to be addressed in future work. Accomplishments that we're proud of Adapting an existing mathematical model for our own usage and finishing a minimum viable product to demo! What we learned We learned how SIR and SEI models work in disease transmission and made the needed adjustments within the mathematical models to simulate disease spread.
We learned about the assumptions used to create these models and the governing principles behind the equations. What's next for Real Time COVID-19 Risk Assessment Deploy our first prototype, assess and gather feedback for improvement. We will also incorporate additional databases in our web app to validate our input values. Furthermore, we will implement a more sophisticated model by accounting for mask-wearing behavior with a scaling factor as well as respiratory protection factors to adjust our risk factors in the mathematical model. Built With google-maps javascript node.js pyosmium python react Try it out github.com
Real Time COVID-19 Risk Assessment
Develop interactive real time heat map for individuals/public health officials to assess their likelihood of COVID-19 infection in public spaces.
['Keer Zhang', 'alicia macdonald', 'Cyrus D', 'Taylor Zowtuk', 'Clayton Molter']
[]
['google-maps', 'javascript', 'node.js', 'pyosmium', 'python', 'react']
59
10,436
https://devpost.com/software/elderly-at-ease
Inbox of all messages Community, listing of matches Private message thread with match Landing Profile with assistant open Connect, listing of unseen users Successful match (both users selected checkmark) Track Aging in place track. Inspiration As close friends and former volunteers with and for the elderly, our entire team understands the personal struggles, both physical and emotional, that they endure as they age in place. When asked what they would like, their answers may vary, from a helping hand to someone to walk with to someone to just talk with, but, ultimately, it always boils down to having a caretaker and a community, consisting of physical and emotional resources. For the growing worldwide population of elderly concerned with aging in place, then, we set out to do our research, understand their key pain points, design software targeted to address them, and make it happen (feasible and viable, near production-ready software). What it does 'Elderly at Ease' is software designed with seniors first. It consists of the key capabilities to connect with others, interact with one's connections/matches, and access a virtual assistant for any and all help, from browsing the web to short jokes. In other words, it balances security and robustness--multifactor authentication and a thoroughly tested, production-ready piece of software--with user experience and autonomy--an easy to use and navigate interface without clutter and the ability to get one's own caregivers and create one's own community. How we built it Began with system design and modeling of the components as part of the object-oriented programming paradigm. Then coded in Python, SQL, JavaScript, HTML, and CSS with an SQLite database and the Django web framework. Also used REST APIs, both developed by us and external (Twilio API for multifactor authentication, Dialogflow API for assistant).
Challenges we ran into Focusing on production-readiness required lots of refinement and testing, which made it particularly hard given the time constraints. Specifically, we had to find a way to balance ease-of-use with robustness when implementing a security feature. We solved this challenge by designing and implementing a custom authorization flow using the Twilio API to provide optional SMS/Phone verification and creating a clear mark on users' profiles indicating their verification status. Additionally, we set up all the system components on Django with an object-oriented programming design for reusability and quick shipment into production; specifically, it could be shipped with Amazon Web Services and Elastic Beanstalk. Accomplishments that we're proud of We are most proud that we were able to translate our research and knowledge of end users' pain points (elderly aging in place) into software that successfully targets all of them. Specifically, our near production-ready software provides a secure and flexible platform for the elderly to access and acquire physical and emotional resources, such as caregivers and a community. What we learned We learned both the larger visionary skills needed to produce an overall system for production and the more detail-oriented skills needed to edit and configure 'black boxes' (source code, databases) for optimal use in a project. Working under time constraints accelerated our growth in the areas of system design, project management/collaboration, and object-oriented design and programming with a focus on software that is feasible, viable, and can be moved into production. Additionally, we configured any and all third-party libraries that were used at their source code and ran into many moments where manual configuration of the SQLite database linked to Django was necessary. What's next for Elderly at Ease We believe in our work.
Our next steps consist of a simple action plan to develop the necessary components to move toward production (e.g. cloud server database, server hosting, CDN) and acquire funding and support (e.g. Duke Innovation & Entrepreneurship). But it all begins with your help and support to jump start our solution into the real world. Built With css dialogflow-api django google-cloud html javascript python rest-api sql twilio Try it out github.com
Elderly at Ease
All-in-one software for seniors' emotional and physical needs in aging in place by bridging the digital divide.
['Joon Young Lee', 'Sarah Jung', 'Alicia Wu', 'Nina Nguyen']
[]
['css', 'dialogflow-api', 'django', 'google-cloud', 'html', 'javascript', 'python', 'rest-api', 'sql', 'twilio']
60
10,436
https://devpost.com/software/medconnect2020-cngh9t
This project was bootstrapped with Create React App. In the middle of the pandemic, we wanted to keep our friends and family safe! We used JavaScript and HTML, and learnt the ropes of React and Firebase! Built With css firebase html javascript react react-native Try it out github.com
MedConnect2020
MedConnect is a platform that pulls everybody together to combat COVID-19 and keeps your family and friends safe!!
['Rachana Murali Narayanan', 'Ishraaq Shams', 'Abhishek Agarwal']
[]
['css', 'firebase', 'html', 'javascript', 'react', 'react-native']
61
10,436
https://devpost.com/software/medhack-smartshoes
workflow workflow workflow Codes Landing page of website Product description 3D model of shoe Inspiration Falls are the second most common cause of accidental death in elderly people. The primary reasons for death due to falls are gait and balance disorders. What it does Smart shoes have different types of sensors that help to identify the degree of gait impairment, whether high, medium, or low. Once identified, the app will help with regular monitoring. The infrared sensor in the front of the shoe will help to detect if any obstacle is present in front of the foot. How I built it I created the 3D prototype of the smart shoes. Then the sensors, Arduino, and GSM chips will be embedded in it with an electric charging pin. The data generated when a person walks will be sent to the cloud server, and the machine learning model deployed there will help to perform gait analysis and monitoring services. Then for fall prevention, the infrared sensor will detect any obstacle in front of the foot. If an obstacle is detected, a small beeper will beep, which will save the person from colliding with the obstacle. If the person falls anyway, the gyroscope sensor will detect the change in orientation of the shoe and will alert the care-taker of that elderly patient or user. Challenges I ran into I am a computer science undergrad, so I faced difficulty in analyzing the parameters that would affect gait disease. Also, I am an individual participant, so it was difficult for me to create the actual prototype What I learned I have learned various things from this hackathon. Like how to complete a task within a short span of time. How to make 3D models and animations. Improved my web development and machine learning skills. Built With arduino blender css3 figma flask gsm html machine-learning pedometer python sensors Try it out github.com
SmartShoes-Team SAL
Smart shoes for gait analysis, prognosis, and fall prevention and detection in elderly people
['Shobhit Aryan']
[]
['arduino', 'blender', 'css3', 'figma', 'flask', 'gsm', 'html', 'machine-learning', 'pedometer', 'python', 'sensors']
62
10,436
https://devpost.com/software/easy-beat-xb3kv9
Inspiration This project was initially inspired by helping my grandma with her computer problems. I was helping her fix a relatively simple issue but she told me that, to her, “it seemed like rocket science”. When my parents wanted her to get a smartwatch to help monitor her health, she was equally confused by the technology. This got us thinking about the interaction between elderly people and technology and how it affects their daily lives. We ended up settling on trying to develop a tool that would simplify wearable health data (ECG in our case) for the elderly population, since the data can be extremely useful but unfortunately not very easy to understand. What it does Our tool takes patients' ECG readings from their wearable devices and uses a deep learning algorithm to detect if there is an anomaly in their heart readings. If the data is abnormal, our tool will send the important data to the patient's electronic medical records and provide them with a simple “normal” vs “abnormal” result. The user will also be shown possible “next steps”, such as the phone number for a local doctor (or family doctor) to possibly set up an appointment. Additional simple resources (such as videos) are then provided to the user so they can learn more about how to improve/maintain their cardio health and understand what their ECG signals mean. How we built it We started with a machine learning model that was improved upon (using TensorFlow), then created a simple web framework that would connect to our ECG algorithm. Challenges we ran into We had many challenges throughout the weekend, ranging from the time zone difference (since almost every member of our team was in a different zone) to sleep deprivation. We also found obtaining useful large data sets to be more difficult than expected, since some of the data sets we initially wanted had labelling that was not clear or was in a difficult format to understand.
Accomplishments that we are proud of We are very proud of coming up with an idea and being able to execute it in such a short period of time and with very little experience. Moreover, we strongly believe that our project addresses the difficulties faced by the elderly regarding ECG wearables. What we learned From this project, we learned a ton about collaborating as a group virtually and how to successfully work on a project when all the members are spread across the world. During the current COVID times this is an extremely useful skill and something we were happy to improve. We also learned a lot about developing learning algorithms and deploying a simple web app. Finally, since this was our first hackathon, we learned how to best approach a hackathon and work collaboratively. On top of that, we learned so much from the speakers and presenters. What's next for Easy Beat The next steps for Easy Beat would be making the connection between the website and the smartwatch, as well as enabling the submission of the patients’ ECG to their electronic medical record and making sure that all the patient data is protected. Built With bootstrap css fitbit flask font-awesome google-fonts html javascript keras python tensorflow withings Try it out github.com dry-forest-61795.herokuapp.com easybeat.org easybeat.tech
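The write-up doesn't share the model internals, but the core "normal vs abnormal" classification idea can be illustrated with a toy template-matching detector. A minimal sketch, assuming made-up data and thresholds (the team's actual approach used a TensorFlow deep learning model, not this heuristic):

```python
import numpy as np

def build_template(normal_beats):
    """Average a set of known-normal ECG beats into a reference template."""
    return np.mean(normal_beats, axis=0)

def anomaly_scores(beats, template):
    """Root-mean-square deviation of each beat from the template."""
    return np.sqrt(np.mean((beats - template) ** 2, axis=1))

def classify(beats, template, threshold=0.3):
    """Mirror the app's simple output: 'normal' vs 'abnormal' per beat."""
    return ["abnormal" if s > threshold else "normal"
            for s in anomaly_scores(beats, template)]

# Toy data: sine-shaped "beats"; one test beat gets an injected spike.
t = np.linspace(0, 1, 50)
normal_beats = np.array([np.sin(2 * np.pi * t) for _ in range(10)])
spiky_beat = np.sin(2 * np.pi * t)
spiky_beat[20:25] += 2.0  # simulated arrhythmic deflection

template = build_template(normal_beats)
labels = classify(np.array([normal_beats[0], spiky_beat]), template)
print(labels)  # ['normal', 'abnormal']
```

The "normal vs abnormal" output mirrors the simplified result the app shows to elderly users, hiding the raw signal behind a single verdict.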
Easy Beat
The elderly often struggle with cardiovascular disease as well as technology. This led us to build an age-friendly tool that helps them to manage and understand such diseases based on ECG records.
['Saswat Mishra', 'Rudra Prasad Dash', 'Mariana Ferreira Nunes', 'Louis Garber']
[]
['bootstrap', 'css', 'fitbit', 'flask', 'font-awesome', 'google-fonts', 'html', 'javascript', 'keras', 'python', 'tensorflow', 'withings']
63
10,436
https://devpost.com/software/aarogya-tech
Inspiration What it does It screens people for COVID-19 How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for aarogya.tech Try it out aarogyatech.amritshenava98.repl.co
aarogya.tech
A platform that allows people to be screened for COVID-19
['Amrth Ashok Shenava']
[]
[]
64
10,436
https://devpost.com/software/early-covid-19-detector
Elevator Pitch: One of the reasons COVID has spread so quickly across the world is the difficulty of diagnosis. We developed a quick, easy, and portable method of predicting COVID infection by testing early onset symptoms! Our project has the power to reduce the need for expensive and labor-intensive testing with a reusable machine. Thus, this device can help millions of people by warning of infection early, before possible spread, especially those working in healthcare! Just in case the computer does not work: https://www.youtube.com/watch?v=sv6G76cxlbY&t=4s BIG Ideas: Cheap Easy to use Only necessary to use once a day Can work off its own data Doesn't even need WiFi, let alone a power outlet Super portable Reasonably accurate It's reusable: most tests are one-time use, while our test, though far less accurate than nasal or antibody tests, can be used again and again. Reduce, reuse, recycle! Rubric Category 1: Did the team put thought into the user experience? Yes, we made it simple and easy to use. We designed it to be used once a day, before bed or after waking up. The sensors are pretty easy to use. Our temperature meter later on will be an infrared (IR) sensor, which would be even easier and contactless to use. A custom PCB will make packaging a lot nicer (no larger than a smartphone). How well designed is the interface? It's two buttons (yes and no), so it should be pretty simple and straightforward, with simple and easy-to-understand recommendations. Does it try to fulfill the user’s needs? Yes, we can predict COVID decently accurately. It's portable and won’t require WiFi if you don't want it to (though using an AI to learn someone’s normal sensor values would help accuracy). Rubric Category 2: How effective was the presentation? You tell us :) Does this team have what it takes to carry on the solution and implement the project? We designed this, put our hearts into it, and believe in it!
Rubric Category 3: Does the team provide a convincing rationale for why their solution may work and do they address significant technical issues relevant to their problem? Reasoning on how our device works: Article on predictive modeling based on symptoms: https://www.nature.com/articles/s41591-020-0916-2?fbclid=IwAR3F7tMT9V8Saa3ol-Wv4B7pQ88jlAM_tz351sJHb1iuDmxnPcuhWSNpJeI#further-reading What if we could make a cheap and portable device that could do all that! Why our solution works: We have lots of previous research showing these symptoms are correlated to COVID infection. We used a predicting equation based off a real one with high accuracy (pulled lots of data from the paper). Technical issues we aim to solve: Most testing requires lab work and trained personnel; ours only needs a 9V battery and not even any WiFi, so it can be used more easily in developing countries. Rubric Category 4: How significantly can your project better lives in communities across the world? COVID as of now: 26.6M infected, 875K dead. Our device can be used in developing countries over nasal and antibody tests that are expensive and technologically/resource intensive. Does the project address a valid issue? We have ways to test COVID, but our current methods are resource intensive and hard to implement. Was there a clear effort to understand the background of the problem? We spent hours researching and reading different papers on the subject matter. Rubric Category 5: How technically impressive was the hack? Our hardware isn’t too impressive, but the design is really something considering the circumstances. We built it all, coded it, and made a model in 1 day! Was the technical problem the team tackled difficult? We made it much cheaper and reusable, and built a decently accurate predictor; I’d say that’s pretty difficult to do. Did it use a particularly clever technique or did it use many different components?
Clever techniques: modeling fever after biological processes with limiting functions, and a logistic regression curve for the predictor equation. Many components: hardware menu, advanced predictor model, and CAD box design. Did the technology involved make you go "Wow"? For a bunch of college freshmen, in one weekend in the middle of a pandemic, I hope so! Mentor/Professional Advice? Dr. Mannisi - “The device would be useful with early symptoms” Yes, I know this is half a truth, but it’s a truth nonetheless! We expand on what he told us. We decided that our device would be very useful in detecting COVID symptoms in people with a smaller progression of the disease. Ms. Osnat Adam RN, MA in Nursing (Head ER nurse in Galilee Medical Center): This is a smart idea that can really help! She also noted that this device could be useful at home, and suggested including other measures of symptoms, such as gastrointestinal symptoms of COVID. Built With and-portable-method-of-predicting-covid-infection-by-testing-early-onset-symptoms!-our-project-has-the-power-to-reduce-the-need-for-expensive-and-labor-intensive-testing-with-a-reusable-machine.-thus arduino before-possible-spread easy excel this-device-can-help-millions-of-people-by-warning-of-infection-early
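The "predicting equation based off a real one" is a logistic regression over symptom indicators. A sketch of that shape, with entirely made-up coefficients (the real weights were fit on symptom-survey data in the cited Nature Medicine study and are not reproduced here):

```python
import math

# Illustrative, invented weights -- NOT the published model's coefficients.
WEIGHTS = {"loss_of_smell": 1.8, "cough": 0.3, "fatigue": 0.5, "fever": 0.6}
BIAS = -1.3

def infection_probability(symptoms):
    """Logistic regression over binary symptom indicators."""
    x = BIAS + sum(WEIGHTS[name] for name, present in symptoms.items() if present)
    return 1.0 / (1.0 + math.exp(-x))

all_symptoms = {name: True for name in WEIGHTS}
no_symptoms = {name: False for name in WEIGHTS}
p_all = infection_probability(all_symptoms)
p_none = infection_probability(no_symptoms)
print(f"all symptoms: {p_all:.2f}, none: {p_none:.2f}")
```

On the device, the yes/no buttons and sensor readings would fill in the symptom dictionary, and the probability would be compared against a cutoff before showing a recommendation.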
Early COVID-19 Detector
We developed an easy and portable device capable of predicting COVID infection by testing early symptoms! Our project has the potential to reduce the need for expensive and labor-intensive testing.
['Ethan Levy', 'Jaechan Lee', 'Eli Levenshus']
[]
['and-portable-method-of-predicting-covid-infection-by-testing-early-onset-symptoms!-our-project-has-the-power-to-reduce-the-need-for-expensive-and-labor-intensive-testing-with-a-reusable-machine.-thus', 'arduino', 'before-possible-spread', 'easy', 'excel', 'this-device-can-help-millions-of-people-by-warning-of-infection-early']
65
10,436
https://devpost.com/software/covid-19-stay-safe-data-map-bj1g2e
Home Page Access Map Demonstration Inspiration- Providing people with information that will help them stay safe regarding COVID-19 precautions being taken at different places. What it does- Users are able to enter a location and find out if their selected location is following COVID-19 precautions and guidelines. Users are also able to create an account and contribute data on different locations to increase the accuracy of the results. How I built it- Using HTML, JavaScript, CSS, and Figma. Challenges I ran into- Finding existing data sets and trying to code both the front end and back end in time, with only one team member equipped with strong coding skills. We also had some difficulty figuring out how to use and incorporate an API key in the limited time given, and one team member had a large time zone difference. Accomplishments that I'm proud of- Teamwork, communication, and the amount we were able to accomplish despite our shortcomings. What I learned- Using Figma, and a broader knowledge of coding. What's next for COVID-19 Stay Safe Data Map- The idea is to expand to include as many geographical regions as possible and different establishments like restaurants, cafes, salons, etc. Built With css html java javascript shell Try it out github.com
COVID-19 Stay Safe Data Map
Provide users with information regarding safety precautions being taken at different locations and current COVID-19 cases count in the areas nearby.
['Aastha Kasera', 'Angela Yang', 'Leeda Sea', 'Amy Nguyen']
[]
['css', 'html', 'java', 'javascript', 'shell']
66
10,436
https://devpost.com/software/meddit-nagyx1
Inspiration We were inspired by recent stories we have heard in the news about how online communities helped people recognize unforeseen COVID-19 symptoms. Meddit promotes the idea of collective knowledge and communal experience so that we may find medical stories that resonate with us and help us understand the niche ways health issues present in each of us. We were also inspired by the notion of personalized information (a precursor to personalized medicine). While WebMD and Google search provide some information, many may find medical literature too technical or the general lists of symptoms too vague. Perhaps people may even find it hard to describe their symptoms in a way that lets Google provide accurate and helpful results. A mass collection of experiences with various diagnoses may benefit not just patients but also healthcare providers. Only through observation do we learn how comorbidities and demographics play a hand in the way illnesses manifest. Perhaps with a platform like Meddit, doctors may learn the full scope of the patient experience. What it does Meddit is a web app that uses natural language processing to connect users to medical stories similar to their own. By allowing people to describe symptoms and experiences in their own words, we hope to provide personalized information. On the home page you are greeted with a “Tell us how you feel” prompt in which users enter their general symptoms and experiences. A list of related posts then pops up below, allowing people to read about experiences similar to theirs. We hope that some form of “verified diagnosis” can be attached to these posts, to prevent the spread of misinformation. How I built it Meddit currently comes in two parts: the web application, built using the MERN stack (MongoDB, Express, React, and Node.js), and the natural language processing script, built using Python.
On the web application, posts by patients are available and stored in the MongoDB database. The NLP component of our project was built using the spaCy NLP library. spaCy allowed us to tokenize paragraphs and categorize words based on parts of speech, context, and labels. Ideally, we would have constructed our own named entity recognition model, which would have allowed us to create categories such as symptoms, illnesses, and medications and would help us find similarities between posts. Alas, due to time constraints we decided to create a smaller-scale version of this idea and make our own “dictionary” of medical categories - which we called bins. To screen the posts for common ideas, we calculated similarity scores between a post and each bin. This feature uses the token similarity function from spaCy. In spaCy, identical words receive scores of 1, synonyms or lexically similar words are scored very near 1, and contextually similar words are scored between 0.5 and 1. We used these scores to categorize each post into the bins. When a query is asked in meddit, we figure out which bins it belongs in and then calculate similarity scores for the posts in the relevant bins. Challenges I ran into While we have two halves of the project, we ran out of time before we could stitch them together. Currently, one can post on the web application and store that information, but the Python script to return relevant posts is not linked up. On the NLP side of things, we currently mimic the web app’s server with a text file of posts. The script can calculate bin scores for each post and also, given a query, provide a list of similar posts. What's next for meddit We would like to refine our NLP algorithm by training our own medically-centered named entity recognition (NER) model. This will let us better recognize medically similar words (e.g. categorizing words into symptoms, illnesses, and medications). Built With css mern mongodb node.js python react spacy
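The post-to-bin routing described above can be sketched without spaCy's vectors by substituting keyword overlap for token similarity. A simplified stand-in (bin names and keywords are invented; the real system scores word pairs with spaCy embeddings, which also catch synonyms):

```python
# Invented bins standing in for the project's medical-category "dictionary".
BINS = {
    "respiratory": {"cough", "breath", "wheeze", "congestion"},
    "neurological": {"headache", "dizzy", "numbness", "migraine"},
}

def tokenize(text):
    """Crude tokenizer; spaCy's pipeline would do this properly."""
    return set(text.lower().replace(",", " ").split())

def bin_scores(post):
    """Jaccard overlap between a post's words and each bin's keywords."""
    words = tokenize(post)
    return {name: len(words & kws) / len(words | kws) for name, kws in BINS.items()}

def best_bin(post):
    """Route a post (or query) to its highest-scoring bin."""
    scores = bin_scores(post)
    return max(scores, key=scores.get)

query = "I have a dry cough and shortness of breath"
print(best_bin(query))  # respiratory
```

With vector similarity in place of set overlap, "shortness of air" would still land near the respiratory bin even without an exact keyword match, which is the advantage of the spaCy approach over this sketch.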
meddit
Meddit is a web application that uses natural language processing to match a user’s explanation of their symptoms with other users’ medical experiences. This promotes personalized healthcare.
['Katharine Lee']
[]
['css', 'mern', 'mongodb', 'node.js', 'python', 'react', 'spacy']
67
10,436
https://devpost.com/software/emotion-of-the-heart
Inspiration I grew up in unpleasant environments and learned to cope by listening to certain kinds of music. Over time I realized I could self-evaluate my mood based on the music I was listening to. I wanted to make that process algorithmic. I wanted to make something I would use long term and something that could help doctors have an easier time with non-compliant patients. What it does It measures HR data while a patient listens to music. The music and HR data are then paired to see how listening to certain genres of music can affect mood. Known calming music is later queued to help with an oncoming panic attack or anxiety, and/or to help patients with dementia or Alzheimer's become calmer. The music is picked based on previously learned data. How I built it I wanted to build it to be very portable, so I built it using a Raspberry Pi. I used an ANT+ sensor and a Garmin HRM chest strap. The data is received by the Raspberry Pi and fed into the model, which then outputs a song. Everything was programmed in Python. Challenges I ran into I ended up doing all of the work in 8 hours. I had to travel over the weekend and so I had very limited time to put this together. My SD card corrupted as I was leaving, and I didn't have any team members because I was traveling. People did ask politely, but I had to refuse because I knew I had no set schedule of where I would be or whether I would even be near WiFi. Accomplishments that I'm proud of I am proud of the working proof of concept I made. I believe it can make a real difference; as I had a troubled upbringing, I understand how important music can be in helping people cope with certain situations. This project is just the beginning. What I learned I learned a lot about what to do and what not to do when it comes to training data and sets. My major mistake was in regards to training my model, but I believe the foundation is there for me to build on later.
What's next for Emotion of the Heart I want to build out the model and expand it to exercise motivation. I want to add more sensors, such as body temperature and conductivity. The GitHub repo was used more as a notes/journal and is very messy. It does not indicate anything about my quality of code, only my thought process. Built With ant+ ffmpeg machine-learning python raspberry-pi raspbian youtube-dl Try it out github.com
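The pair-then-select loop described under "What it does" can be sketched with a tiny in-memory model. A hypothetical sketch (class, thresholds, and track names are all invented; the real system reads HR over ANT+ and runs its own learned model):

```python
from collections import defaultdict

class MoodPlayer:
    """Pair past HR samples with tracks, then queue a calming track under stress."""

    def __init__(self, resting_hr=65, stress_margin=20):
        self.resting_hr = resting_hr          # assumed baseline, bpm
        self.stress_margin = stress_margin    # bpm above baseline that counts as stress
        self.history = defaultdict(list)      # track name -> list of HR samples

    def log(self, track, hr):
        """Record a heart-rate sample observed while a track was playing."""
        self.history[track].append(hr)

    def calmest_track(self):
        """Track with the lowest average observed heart rate."""
        return min(self.history, key=lambda t: sum(self.history[t]) / len(self.history[t]))

    def next_track(self, current_hr):
        """Queue the learned calming track only when HR looks elevated."""
        if current_hr > self.resting_hr + self.stress_margin and self.history:
            return self.calmest_track()
        return None  # otherwise keep the current queue

player = MoodPlayer()
player.log("lofi_rain", 62)
player.log("lofi_rain", 60)
player.log("metal_mix", 95)
print(player.next_track(current_hr=92))  # elevated HR -> "lofi_rain"
```

The real project would feed this loop continuously from the chest strap instead of logging samples by hand, and its selection model would be trained rather than a simple average.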
Emotion of the Heart
learning the language of the heart through heart monitor data
['Karan Naik']
[]
['ant+', 'ffmpeg', 'machine-learning', 'python', 'raspberry-pi', 'raspbian', 'youtube-dl']
68
10,436
https://devpost.com/software/fire-hacker
Prototype of Fire Hacker, which attaches to the stove knob Inspiration In order to impact the lives of older adults, we do not need to introduce tech-heavy solutions. We can use interfaces they are comfortable with in novel ways to solve issues they might face. There are 172,100 reported cases of cooking fires per year (471 cases daily). The largest factor amongst these cases is unattended equipment. One in ten adults 65 years and older has Alzheimer's disease. Older adults also face an increased risk of falls (more than 1/3 of those 65 years and older fall each year). This motivated us to design the Fire Hacker. What it does and How we built it The Fire Hacker uses a sensor-based approach. A sensor that detects rotation is attached to the stove knob. Once it detects rotation, it switches on the 5:00 min timer in the controller. Once the timer runs out, it sets off a buzzer and a red LED flashes. On the sound of the buzzer, the person is alerted to the kitchen and resets the timer (hits the reset button to let the food cook longer until the next 5:00 min alert, or simply turns off the stove because the food is cooked!). If the person cannot reach the stove within 15 mins, a final buzzer goes off, followed by the stove shut-off (an automated switch adapter for an electric stove or an automated valve shut-off for a gas stove), and the alarm is turned off. The system then needs to be manually restarted. The Fire Hacker can be installed on any existing stove by a person who has no expertise in the matter (a few at-home installation steps). It is also economical and can be used with any stove type. What's next for Fire Hacker We plan on developing a beta version of the working system. This will be followed by conducting user studies and obtaining feedback to develop a market-ready model. After this we will work on implementing a sustainable assembly line to scale up production. To achieve these, we will actively seek out investments.
Once this is up and running, with Fire Hackers in circulation and use, we can look to apply the concept of this system to other rotating fixtures, like water taps. *Note from the video: the cost is <$50 and not >$50 as shown in the video (our bad!)* Built With adafruit arduino redboard solidworks
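The real device runs on an Arduino, but the 5-minute alert / 15-minute auto-shutoff logic described above can be simulated as a small state machine. A sketch in Python under a simplified reading of the timing rules (shutoff fires 15 minutes after the last reset):

```python
ALERT_AFTER = 5 * 60     # seconds until the buzzer, per the description
SHUTOFF_AFTER = 15 * 60  # seconds of inattention before forced shutoff

class FireHacker:
    def __init__(self):
        self.elapsed = 0
        self.stove_on = False

    def knob_turned(self):
        """Rotation sensor fires: stove is on, timer starts."""
        self.stove_on = True
        self.elapsed = 0

    def reset_pressed(self):
        """User acknowledges the alert and keeps cooking."""
        self.elapsed = 0

    def tick(self, seconds):
        """Advance simulated time; return the current alarm state."""
        if not self.stove_on:
            return "off"
        self.elapsed += seconds
        if self.elapsed >= SHUTOFF_AFTER:
            self.stove_on = False  # automated switch/valve cuts the stove
            return "auto-shutoff"
        if self.elapsed >= ALERT_AFTER:
            return "buzzer"
        return "cooking"

fh = FireHacker()
fh.knob_turned()
print(fh.tick(5 * 60))   # buzzer
fh.reset_pressed()
print(fh.tick(4 * 60))   # cooking
print(fh.tick(11 * 60))  # auto-shutoff
```

On the actual hardware the same states would be driven by `millis()`-style elapsed-time checks in the Arduino loop rather than a simulated `tick`.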
Fire Hacker
A senior-friendly product that warns you every 5 minutes while the stove is on by flashing a red light and beeping; if it goes unattended for 15 minutes, it will auto-shut off the stove.
['Tarana Kaovasia', 'Parimal Joshi', 'Parmi Thakker', 'Charan Pasupuleti']
[]
['adafruit', 'arduino', 'redboard', 'solidworks']
69
10,436
https://devpost.com/software/side-by-side
Inspiration During the COVID lockdowns, many people have felt lonely and disconnected. Although the CDC encourages us to get outside and walk to maintain health, older people especially may not feel safe walking alone. Social isolation and lack of activity are part of a vicious cycle that results in decreased cognitive and physical function and increased healthcare costs across the population. Something as simple as a phone call can make someone’s day--and provide motivation for physical activity. If you know your walking buddy is going to be waiting for you to join them, you’re that much less likely to skip out on your exercise. Pairing walking and talking with a friend helps re-create a sense of social connectedness that COVID lockdowns, along with other realities of life, take away. What it does Side By Side matches up trained volunteers with users looking for virtual walking partners: somebody who’ll be on the other side of the phone to chat with them, motivate them, and have their back in case of emergency. Hobbies, favorite sports teams, and other interests are all used to find potential partners, and the user has the final say in who they want to talk to. Users can schedule walks with volunteers, or simply find one who’s available right away. The SOS feature allows users to call emergency services, and the Check-in feature allows volunteers who are concerned that something’s gone wrong to see if their buddy is okay, and if necessary, use the app to reach an emergency contact. Another optional feature of the app allows users to embark on a virtual journey--you can track your progress along a famous historical trail like El Camino de Santiago, a fantastical trek like the path from Hobbiton to the Lonely Mountain, or just to a desired target distance. It maps where you are on your quest, and after your walk tells you about landmarks you reached on the way. 
How we built it We did a deep dive into creating the user flow and functions that we wanted to see in the app, relying on proto.io to draft the interface that we want to see. We based our initial design on feedback from one of our members who is in our target demographic, and included user-friendly elements such as readable font and clear iconography. Meanwhile, we did preliminary market research via a survey sent out to our networks. We asked questions such as whether people have felt socially isolated during the past few months and if they would volunteer to walk virtually with a stranger. A full 75% are willing to chat with a stranger during a walk, and even more said they would recommend this app to a friend. These survey responses created additional guidelines for the application by giving us an idea of what the end user might want. Some of these answers, such as question prompts and customizable details, went directly into our drafted design. Challenges we ran into None of our team members are experienced app developers, so we decided to focus on designing the user experience and nailing down the features and functions the app should have. A major challenge came up when considering user safety and liability. No one in the team had legal knowledge of this area, but we did research on comparable tech companies and how they handle their terms of service with users. This allowed us to have a clearer understanding of the expectations of the application and the consumers. Accomplishments that we're proud of Despite not having the technical skills to code the full program, we were able to work with proto.io to create a layout and wire framework of what the application would look like. Our time was efficiently spent looking at all aspects of the application rather than trying to debug issues with an unfamiliar coding language. 
The design of the application lays groundwork for future app developers to have an understanding of what is needed and what will align with end users. What we learned One of the most important things we learned is to have the end user in mind throughout the process of product development. From decisions such as the size of font to a name that would clearly communicate the purpose of the application, we made sure to have the consumer’s best interest in mind so they could enjoy the application. We also learned the importance of iteration, since one person’s idea can change the course of the original proposal. What's next for Side by Side As with all applications with just a design format, there are many future steps to follow. One of the most imperative is to reach out to Blue Label Labs, a company that can help ensure our application complies with HIPAA laws. Since our application includes an emergency call service in case your walking buddy is hurt, and users can include health information, we want to ensure we are abiding by the rules in place. We’d also like to meet with technology company lawyers to curate agreements for users as well as the Terms of Service. These will create guidelines for the future of the application, so it’s necessary to do this early in the process. We’ll continue our market research by reaching out to a larger audience to survey their attitudes towards this app, and ensure that we are aligning well with our target demographic. We’ll also reach out to experts and other stakeholders, such as senior centers and community health organizations. There will be training in place for volunteers. We need to design and develop this virtual training, which is intended to support volunteers so that they feel comfortable acting as the safety net for our users and know when action is necessary. To become a volunteer, they’ll need to pass a subject-matter test and a background check. 
Finally, we’ll also work on translating our team’s coding experience in other areas into app development skills, and meet with more seasoned devs who can help us bring this application to life.
Side by Side
A phone app that matches you virtually with a walking buddy for real conversation, connection, and safety.
['Elizabeth Ericksen', 'Jacob Roman', 'Katherine Shih', 'Taylor Pistone', 'Ke Xu']
[]
[]
70
10,436
https://devpost.com/software/alamal_-d0jurz
This project was inspired by the lack of adequate health care in conflict areas and developing regions to deal with any pandemic, whether the response is medical, psychological, or even simple consultations. That inspired me to create a consultation site offering not only symptom advice but also counseling for psychological pressures, such as feeling isolated and how to spend one's time, among many other things. The site would also host free health courses, available to everyone, covering everything new in dealing with this pandemic, along with reminders to the people joining us of the need to be patient, and recreational activities for them. In addition, we would establish simple care centers in such areas in cooperation with volunteer doctors and, most importantly, psychiatrists, because what everyone needs now is to get out of isolation. These centers would offer a simple meal and some uplifting music to instill optimism, which would encourage people to keep visiting. How will I ensure the help of volunteer doctors? Through what my site will provide me from Google AdSense profits: symbolic amounts that help them continue with us. I faced financial challenges; this is a project that needs funding, even symbolic amounts, that I am unable to provide on my own, unless someone approaches it before me. The idea emanates from the meaning of the project's name, Alamal ("hope"): we will face challenges and pandemics by instilling in our souls the belief that we can. Built With bulksms
Alamal_الامل
Let us be a bright torch in the face of dark pandemics with hope
['Mohammed Alshekh']
[]
['bulksms']
71
10,436
https://devpost.com/software/android-reservation-app
Here a user can change her password A user can book an appointment before going to the hospital. Patients can call the hospital to book an appointment Fewer people in the waiting room Reset password New slots or announcements with notifications using Firebase Live typing in the app Activation code not yet entered Registration form Home page after login Activation code entered Email activation code sent to email Inspiration In the midst of this COV19 pandemic, being in the queue at hospitals or clinics can sometimes be avoided by using this Android application, which allows patients to book an appointment ahead of time and come to meet the doctor at the appointed time, avoiding crowds and queues. The application can be used to manage patients' appointments, know the number of patients to attend to per day, track unattended patients, manage the space in the sitting room, and much more. This could save time for patients as well. The solution this software brings to hospitals and clinics is to reduce contact between patients and healthcare practitioners in the midst of COV19. I built an Android app to be used by doctors and patients during COV19 to avoid taking risks. My project is an Android app that can be useful during this pandemic. COV19 is a disease contracted through contact with an infected person, so this Android app is meant to help health practitioners and patients stay safe even when going to the hospital. The app back end is hosted on my website server, so any test with your email is safe. I did not rest at all. The patient has the option to register an account with the hospital or not. The hospital, because of COV19, may decide to attend to patients only on appointments booked beforehand by the patient. The patient does not need to go to the hospital to book the appointment; it can be done through the app. The time the doctor divides up to attend to patients is called slots in the app.
When the secretary or doctor posts a new slot, a notification is shown on users' phones so that they can book an appointment. The registration process is quite simple. The All Slots tab displays all the available slots; a slot can be updated or deleted at any time, even if booked by someone. That side is private and can only be accessed by the doctor or nurses responsible for appointments. To have an account with the hospital and benefit from some other features, it is important to register and have a hospital card. To register you need a valid email address, through which you will receive a code to activate your account. The app has a reset-password option with an OTP confirmation code. These are the credentials to access that side: Pass Name:Word Pass Code:1234 Pass Name:Bride Pass code:1111 The News tab can be used for announcements if there is a need for something. What I learned I learnt more about PHP and databases. What's next for Android reservation app There is a lot to add to this app in the future. I plan to give it a name: Doctor'app. Built With android-studio java json mysql php Try it out github.com github.com
Android reservation app
In the midst of this COV19 pandemic, being in the queue at clinics can be avoided by using this app, which allows patients to book time beforehand to avoid crowds and queues.
['nyonouglo koffi']
[]
['android-studio', 'java', 'json', 'mysql', 'php']
72
10,436
https://devpost.com/software/medata-intelligent-opioid-prescription-tracker
UI for New Patient Data Collection Test Demonstration Using Pseudo Dataset Inspiration The opioid crisis has been a major growing issue in US healthcare for the past few decades, and there has been no clear solution in sight. We decided to attempt our solution to the opioid crisis through this hackathon, which would be a great way to spread our passion for problem solving and teamwork. What it does MeData is a system that compares new opioid-prescribed patient data with a large database of existing or past patients' data. The comparison narrows down to a list of database subjects who share nearly all the same attributes as the new patient. Their dosages are then averaged to give the recommended dosage for the new patient. This tackles both the issue of inconsistent prescriptions across different healthcare providers and the issue of reselling or overdosing on extra medicine. The MeData portal is meant for use by doctors, so that they have a guideline to help them prescribe the right amount of opioids to their patients. How we built it We used Python to make the frontend GUI to collect new patient data, as well as to write the backend algorithms for searching and comparing between patients. Since we didn't have a real database of patients, we used MATLAB to mimic a database containing 113,153 patients, all with unique attributes, such as age, height, weight, sex, previous medical conditions, ongoing medical conditions, and other ongoing prescriptions. During our testing, the pseudo dataset proved to be an excellent source for comparison, and we believe it serves as a strong substitute for a real database. To increase efficiency, we split tasks for creating each section: Jeff handled the frontend, Fil handled the backend, and Peter synthesized the test database. Afterwards, we joined together to merge the three parts, which took most of our time. Challenges we ran into We were all rusty at Python to begin with, so writing the 363 lines of our final code wasn't easy.
There was a lot of learning on the fly for the tkinter and numpy libraries, and we ran into many logical errors even when there were no compiling errors. As expected, most of our time was spent debugging. Accomplishments that we're proud of As challenging as it was, the exhilaration of solving intermediate issues along the way proved to be a great motivation for moving forward. The moment we completed a successful test after merging the three parts was pure ecstasy. What we learned We learned that being familiar with a mainstream programming language is extremely useful to get a good head start on our projects. Time management is also a good skill to have, especially when planning out a 36-hour group project. Most importantly, we learned much more in depth about the opioid crisis and its negative impact on our citizens and economy, and that our efforts today will lead to breakthroughs tomorrow. What's next for MeData Intelligent Opioid Prescription Tracker MeData seems very promising in this elementary stage, since everything works well so far. We have a lot of room to make our algorithm more precise and efficient, as well as make a cleaner, more presentable UI. As a proof of concept, our 2020 Medhacks submission isn't bad, but our motivation to improve will definitely push MeData forward. Built With matlab numpy python tkinter Try it out github.com
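The compare-and-average step described above can be sketched in plain Python. The attribute names, the match threshold, and the sample records below are illustrative assumptions, not the project's actual 363-line implementation:

```python
# Hypothetical sketch of MeData's matching step: find database patients who
# share nearly all attribute values with the new patient, then average their
# recorded dosages. Field names and the threshold are assumptions.

def recommend_dosage(new_patient, database, min_matches=6):
    """Average the dosages of database patients sharing at least
    `min_matches` attribute values with the new patient."""
    attrs = [k for k in new_patient if k != "dosage"]
    similar = [
        p for p in database
        if sum(p.get(a) == new_patient[a] for a in attrs) >= min_matches
    ]
    if not similar:
        return None  # no sufficiently similar patients found
    return sum(p["dosage"] for p in similar) / len(similar)

db = [
    {"age": 45, "sex": "M", "weight": 80, "height": 178,
     "diabetes": False, "smoker": False, "dosage": 10.0},
    {"age": 45, "sex": "M", "weight": 82, "height": 178,
     "diabetes": False, "smoker": False, "dosage": 12.0},  # 5 of 6 match
    {"age": 70, "sex": "F", "weight": 55, "height": 160,
     "diabetes": True, "smoker": True, "dosage": 5.0},     # no match
]
new = {"age": 45, "sex": "M", "weight": 80, "height": 178,
       "diabetes": False, "smoker": False}

print(recommend_dosage(new, db, min_matches=5))  # → 11.0
```

A real implementation would replace exact-value matching on continuous fields like weight with tolerance ranges, but the averaging logic would be the same.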
MeData Intelligent Opioid Prescription Tracker
MeData is our take on tackling the opioid crisis. The system compares new patient data with an existing patient database to determine the best prescription dosage for the new patient.
['jeefthebeef Xing', 'Peter Weiss', 'Filip Aronshtein']
[]
['matlab', 'numpy', 'python', 'tkinter']
73
10,436
https://devpost.com/software/homequarantine-phowqv
Inspiration The outbreak of the deadly coronavirus disease 2019 (COVID-19) has created a global health crisis that has had a deep impact on the way we perceive our world and our everyday lives. The patterns of transmission and the alarming rate of contamination threaten the world. It is a human tendency to find peace in the company of others, but the safety measures put in place to stop the spread of the virus require social distancing. People suffering from diseases like cancer, diabetes, respiratory disorders, and others require special care and attention during this crucial time, and caring for someone with these disorders has become even more critical. The person may be at a higher risk of infection because cancer and other treatments often weaken their immune systems. Due to the increased risk of exposure to the virus by going out in public, most hospitals and clinics have changed their visitation policies. Some may allow one visitor per patient, and others may allow no visitors. Timely review of patient data with close to real-time feedback is a critical success factor in today's disease management. The development of a bidirectional automatic monitoring gateway would function as an interface between the patient and doctors. This solution will considerably reduce the need of patients to visit hospitals and thus lower the risk of infection. Also, self-monitoring of the patients will relieve the family from constant contact with the Corona patient. What it does As our hospitals and medical organizations are filled to their maximum capacities, the government should advocate home isolation and medical care. But due to the highly infectious nature of the virus, patients with diseases like cancer and diabetes, as well as suspected corona-carrying victims, need to be isolated completely. Thus, here is a one-stop solution to the above problem. The development of a bidirectional automatic monitoring gateway would function as an interface between the patient and doctors.
How it works HomeQuarantine-SelfIsolation focuses on the three H's, which are of prime importance: Home (self-isolation at the family level) The healthcare service provider (linking patient and medical support) Hospitals (24*7 virtual supervision) It is an effective and secure tool to wirelessly transfer data from different measurement devices to the health care service provider by using a mobile platform. The system consists of a mobile platform, which collects the information from the measuring devices, and a server platform, which receives the collected data and forwards it to hospitals, which helps in the regular monitoring of the patients. Challenges I ran into Bluetooth-enabled data transfer was quite a challenge, as it also raised some privacy concerns. But overall, it was effective in isolating the patient in the comfort of their own home. What's next for HomeQuarantine Timely review of patient data with close to real-time feedback is a critical success factor in today's disease management. This solution will considerably reduce the need for patients to visit hospitals and thus lower the risk of infection. Also, self-monitoring of the patients will relieve the family from constant contact with the Corona patient. We want to collaborate with our state government for transforming this idea into real cases for the hospitals. Built With bluetooth c c# database.com firebase python unity wireless wireless-applications-services Try it out docs.google.com drive.google.com
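The gateway flow described above — readings collected from home measurement devices, packaged, and forwarded to the provider's server — could look roughly like this minimal sketch. The device names, normal-range thresholds, and payload format are illustrative assumptions, not the project's actual protocol:

```python
# Hypothetical sketch of the mobile-gateway step: screen incoming vitals
# against normal ranges and flag out-of-range values in the payload that
# would be forwarded to the healthcare provider's server.

NORMAL_RANGES = {
    "temperature_c": (36.0, 37.5),
    "spo2_percent": (95, 100),
    "pulse_bpm": (60, 100),
}

def package_reading(patient_id, vitals):
    """Build the forwarded payload, flagging any vitals outside their
    normal range so the provider can review them immediately."""
    alerts = [
        name for name, value in vitals.items()
        if name in NORMAL_RANGES
        and not (NORMAL_RANGES[name][0] <= value <= NORMAL_RANGES[name][1])
    ]
    return {"patient_id": patient_id, "vitals": vitals, "alerts": alerts}

payload = package_reading("patient-42",
                          {"temperature_c": 38.2, "spo2_percent": 97,
                           "pulse_bpm": 88})
print(payload["alerts"])  # → ['temperature_c']
```

In the full system this payload would be transmitted over Bluetooth to the phone and then on to the server platform for hospital-side monitoring.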
HomeQuarantine-SelfIsolate
Reducing the need of patients and family members to visit the hospitals during COVID-19 pandemic time
['G B']
[]
['bluetooth', 'c', 'c#', 'database.com', 'firebase', 'python', 'unity', 'wireless', 'wireless-applications-services']
74
10,436
https://devpost.com/software/buddy-the-social-robot
Buddy (Under Construction) Inspiration Two themes which stood out to us from Dr. Szanton’s talk were the devastating impact that loneliness can have on older adults, and the potential to support those aging-in-place by empowering them to pursue their own functional goals. We decided to address both by creating Buddy: a social robot linked to a web application. Buddy can provide fun social interactions and check in on clients’ progress towards their functional goals. We also thought about how this platform could easily implement other kinds of “check-ins”, like screenings for memory loss and depression. What it does Buddy’s web app allows a caregiver or health care professional to configure the settings to be specific to what each client needs, including regular goals and screenings. This configuration then informs how our virtual robot interacts with the user. These interactions include jokes and small talk, regular check-ins regarding goals, and positive feedback regarding progress (or lack thereof). How we built it We divided our time into idea selection, prototyping, architecture design, proof of concepts, implementing the core functionality, and finally finishing touches. Even though we were working remotely, we planned on updating each other often regarding progress we had made or bugs we were stuck on. Challenges we ran into Working remotely and on such a short schedule meant we had to stick to software we could prototype quickly with. A few times we ran into a dead end with a library or tool we hadn’t used before and had to double back. Accomplishments that we’re proud of Producing a functional web app, back end, and blender animations for our character all in one day’s work! This project was a ton of effort but seeing it all come together at the end was very rewarding. What we learned Deciding to switch to a different library early on when encountering challenges isn’t always a waste, and could in fact save valuable time. 
Also, once a team has divvied up tasks it can be tempting to stick with them until completion, but sometimes having a second perspective on an issue you’ve spent a long time with can really help catch bugs and think of new ideas. What’s next for Buddy We plan on continuing this by working on a physical prototype, taking care to focus on small size (for ease of transport and accessibility), durability, and affordability. We would want our physical prototype to both store data locally and be wifi-enabled (for configuration via the web app). Built With amazon-web-services blender flask panda3d python
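The goal check-in behaviour described above — a caregiver configures per-client functional goals, and Buddy asks about progress and responds with positive feedback either way — might be sketched like this. The goal fields and message wording are assumptions, not Buddy's actual dialogue code:

```python
# Hypothetical sketch of Buddy's goal check-in: prompt the client about a
# configured functional goal, then respond supportively regardless of
# whether the goal was completed.

def check_in_message(goal):
    """Return Buddy's check-in prompt for one configured goal."""
    return f"Hi! How did '{goal['name']}' go today?"

def feedback_message(goal, completed):
    # Positive feedback either way, mirroring the supportive tone of the
    # project description (encouragement even on "lack of progress").
    if completed:
        return f"Wonderful! You're making great progress on {goal['name']}."
    return f"That's okay - we can try {goal['name']} again tomorrow."

walk = {"name": "a short daily walk"}
print(check_in_message(walk))
print(feedback_message(walk, completed=True))
```

In the real system these prompts would be spoken by the animated character and the configured goals would come from the caregiver-facing web app.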
Buddy
Combating loneliness while empowering adults aging-in-place to pursue their functional goals.
['Alexandru Barbur', 'Iulia Barbur']
[]
['amazon-web-services', 'blender', 'flask', 'panda3d', 'python']
75
10,436
https://devpost.com/software/bytebuddy
Societies often have a difficulty in understanding the needs of our world’s aging population. One hindrance to understanding aging is that people can rarely grasp it until they reach old age themselves. To be able to put ourselves in the shoes of the elderly was a challenge, as the tasks that we might find simple can be difficult for them. We often take for granted simple acts such as cooking a meal or remembering to close our front door. Throughout this Hackathon, we learned how we can make technology more accessible to the elderly through the use of calm and contrasting colors, layouts, and different sources to guide them in using the application, such as videos and text audios. Much thought has been put into how we can make the final product simple and accessible for the senior population. Built With bootstrap css/html electron figma javascript node.js typescript Try it out github.com
ByteBuddy
Introducing ByteBuddy, a personal step-by-step instructional website for the elder population and those living with dementia. ByteBuddy carries out steps safely and efficiently.
['Neelima Potharaj', 'Snipta Mallick', 'Chelsea Maramot', 'Warisha Rehman']
[]
['bootstrap', 'css/html', 'electron', 'figma', 'javascript', 'node.js', 'typescript']
76
10,436
https://devpost.com/software/earlgrey-eckalx
Dashboard with intuitive card management UI Customizable cards for post-visit patient clarity Inspiration Currently, a big problem for Aging in Place is the lack of transparency between providers and elderly aging-in-place patients and their caretakers. During a visit, a skilled operator may be able to successfully emphasize specific instructions to the patient, but this may not be the case every time; there may be instances where the patient is overwhelmed and forgets certain procedures and instructions. Clinics try to solve this by issuing handfuls of leaflets and paper documentation, and though these may have the required information, they can mean a lot of reading and research and won't foster the best patient experience. What it does Our solution is Caredeck - an application featuring "Carecards" for providers to fully customize the patient's post-visit experience. These cards provide digestible information and can link any necessary resources efficiently, including the next Telehealth visits, prescription instructions, and notes from previous visits. A typical use case may be a prescription medicine. In an optimal integrated situation, the provider-facing workflow would include entering data for the prescribed medicine for a specific patient and populating Carecards automatically. These Carecards will then show up on the patient's dashboard, where the patient can interact with them to find out more about the prescription. These cards will indicate a description of the medication (color, size, the color of the container), as well as the amount to be taken, and frequency. The provider could set up "safety nets" for certain prescriptions that would provide communicative measures for the patient in case an instance such as allergies to the medication or forgotten dosages would occur, as secondary measures may be necessary.
This application does not only pertain to prescription medicine: workflows could be set up to send mini push surveys to screen and collect mental health data, as questionnaires could be pulled from preexisting workflows such as PHQ-2 screenings. The customization of these cards on the patient end is extremely flexible, leaving a lasting advantage for those who struggle to keep up with their daily routines and medical care. How it's built Our team built the prototype in Figma, but if implemented, this project will be an app for iOS, which will be coded in Objective-C. This application is intended to be provided by a single healthcare provider, since it requires that patient data be stored and accessed. If a healthcare provider lacks the database infrastructure to house this data, HIPAA-compliant cloud services such as AWS or OCI can be used. This information can then be transferred securely using a secure network protocol like SSL. Both the doctor and the patient will have separate interfaces. The patient interface is what we demonstrate in this project. The doctor interface may be used to update and create items to check in on patients. Challenges we ran into The biggest challenge that we faced was coming up with a system and design that would make it simple and useful for elderly patients to use. We know that those who are older are not as familiar with technology, and may also have disabilities that prevent them from having the same experience as others. We put significant thought into making an interface that would be easy to understand and use, yet effective and robust. What we learned We learned a lot about healthcare and its complexities. As a UX designer and software engineer, we knew little to nothing about the problems that healthcare professionals face. So learning about them and then hearing about the ways that we can use our skills to contribute and help such an important field was very eye-opening. We are excited to learn more!
What's next for EarlGrey We would love for this project to be implemented! We would need support from a hospital or a healthcare provider, but this application would greatly help the patients and doctors alike, and we would love to be a part of this adventure. Built With figma notion Try it out www.figma.com
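Since Caredeck exists only as a Figma prototype, no data model has been defined yet; the sketch below is one hypothetical way a prescription Carecard could be structured if the app were built, with every field name an assumption made for illustration:

```python
# Hypothetical Carecard record for a prescription, covering the elements
# the description mentions: medication appearance, dose, frequency, and
# provider-configured "safety nets".
from dataclasses import dataclass, field

@dataclass
class Carecard:
    title: str
    description: str  # appearance, e.g. pill color/size and container color
    dose: str         # amount to be taken
    frequency: str    # how often to take it
    safety_nets: list = field(default_factory=list)  # fallback instructions

card = Carecard(
    title="Lisinopril",
    description="Small white tablet, orange container",
    dose="10 mg",
    frequency="Once daily, with breakfast",
    safety_nets=["Call provider if rash or swelling occurs"],
)
print(card.title, "-", card.frequency)
```

In the integrated workflow described above, records like this would be populated automatically from the provider's prescription entry and rendered as interactive cards on the patient dashboard.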
Caredeck
Caredeck is an application featuring Carecards, providing eRx info and telehealth, for providers to fully customize and minimize risk in the patient’s post-visit experience.
['Andrew Yong', 'Jonathan Chu']
[]
['figma', 'notion']
77
10,436
https://devpost.com/software/commonwealth-health
Home Page Auth0 Authentication Google Login Option After Logging In Profile Page Health-CommonWealth We've heard many stories, especially during this COVID-19 pandemic, about a friend or a loved one who has undergone a major surgery/procedure to get better only to realize they are severely in debt. This can be because someone overlooked charges or independent contractors in the hustle and stress of their pre-ops, creating misinformed patients, who don't realize the charges they’re liable for. We want to eliminate the shock involved with opening a medical bill. We want to give the power to the patient. This is probably one of the hardest times of their lives, and they should at least be able to avoid financial uncertainty. Our premise centers around one word: access. We want this to be the beginning of a world where a patient has access to their own medical information, doctor's notes, billing charges, medical history, as well as any future charges they are scheduled to incur. The biggest challenge is creating a system that will integrate with the hospital's current system so that staff and doctors don't have to perform any extra steps but also relay the data in a secure way to the hands of the patient. Our aspiration is to build different components that will integrate into the one app all hospitals and patients will ever need. Built With auth0 canva css domain.com git github html javascript json jupyter-notebook node.js python react vercel yarn Try it out github.com healthcommonwealth.tech
Health CommonWealth
Personalized patient portal focused on cost transparency between patients and hospitals to unite a fragmented healthcare system.
['Vedavit Shetty', 'David Minasyan', 'Keshav Kunver', 'keerthana yogananthan']
[]
['auth0', 'canva', 'css', 'domain.com', 'git', 'github', 'html', 'javascript', 'json', 'jupyter-notebook', 'node.js', 'python', 'react', 'vercel', 'yarn']
78
10,436
https://devpost.com/software/virtual-caretaker-p7skqg
Inspiration Our inspiration stemmed from the challenges that our elderly may face frequently. We learned more about this topic through conversations that we had with professionals in the field. They noted that it would be beneficial for our elderly to have an available companion to talk to, provide insights and reminders, as well as keep track of the user's health with daily check-ins. We hope that an interactive Alexa skill minimizes the need for interfacing directly with on-screen technology and provides a more natural and sustainable way to support our seniors, both mentally and physically. What it does Virtual Caretaker is an Alexa skill that can be easily incorporated into any Amazon Alexa device. The skill allows users to receive customized and personal check-in messages, reminders, and scheduling information. Not only is this useful for our elderly who might want this kind of assistance, but it also provides a way for health professionals to exchange information with the users. How we built it We built this skill with the Amazon Developer Console and Services and with Python. Challenges we ran into The biggest challenge we ran into was being able to collaborate in real-time using Amazon Developer Console. Since there is no way to share the internal model and code for the skill among our team members, we often found ourselves on long video chats and shared screens to debug and code together. Accomplishments that we're proud of We are really proud to have learned how to utilize Amazon's Alexa Skill tools. While we haven't mastered it, we were able to identify the structure and take advantage of it to a point that we have a working Alexa Skill that can carry a continuous conversation. Not only does our skill check in on the user, but it also follows up on the user's recorded history. What we learned We learned how to collaborate in an environment that wasn't really so conducive to collaboration. 
Between being completely remote for the hackathon, plus working on a tool in an unfamiliar service, we were able to put together a project that we are proud of and have many future plans for. What's next for Virtual Caretaker Our future development includes: Integrating with MyChart to report data directly to a primary care physician Fully developing Virtual Caretaker to maintain prolonged and meaningful conversations with the user Adding reminders for appointments, medications, checking in with loved ones, etc. Adding compatibility with other devices (health monitors, screens, printers, thermostats, emergency detectors etc.) for seamless remote data monitoring Detecting early onset diseases such as dementia through behavioral pattern recognition Built With amazon-alexa amazon-web-services python
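The check-in-with-follow-up behaviour described above could be sketched in plain Python as below (the real skill runs as an Alexa handler on AWS via the Amazon Developer Console). The question wording and the history format are assumptions made for illustration:

```python
# Hypothetical sketch of Virtual Caretaker's daily check-in: greet the
# user, and if a previous answer is on record, follow up on it.

def daily_check_in(name, history):
    """Build the check-in prompt, following up on the last recorded
    answer when one exists."""
    if history:
        last = history[-1]
        return (f"Good morning, {name}! Yesterday you said you felt "
                f"{last}. How are you feeling today?")
    return f"Good morning, {name}! How are you feeling today?"

print(daily_check_in("Rosa", []))
print(daily_check_in("Rosa", ["a little tired"]))
```

In the deployed skill, logic like this would live inside an intent handler, with the history persisted between sessions rather than passed in directly.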
Virtual Caretaker
An Alexa Skill tailored to fit the needs of, and reconnect, the elderly community.
['Priya Sapra', 'Tiburon Benavides', 'Manusri Viswanathan']
[]
['amazon-alexa', 'amazon-web-services', 'python']
79
10,436
https://devpost.com/software/smart-masks
Smart Mask Design Our problem statement: With the ever-growing number of Covid-19 cases over the past 8 months, and several lockdowns issued by governments internationally, the human race has to define the New Normal in the post-Covid era and start to re-open offices and schools. However, the underlying issue is that people don't feel safe yet letting their loved ones out of their homes. And this is completely justified, because of the weakly regulated 'prevention techniques' of wearing a mask at all times and maintaining 6 feet of distance from others in public places. Therefore, we decided to go on a journey to use technology to create the New Normal, in order for schools and offices to re-open as soon as possible. And we are doing this by releasing the concept of the 'SMART MASK'. We plan to be B2B and B2C providers, but more about that later. With our product, we are targeting two industries. The primary target industry is the Biotechnology industry, in the BioPharma market (with a Compound Annual Growth Rate of 7.2% through 2020 to 2026), and the secondary target industry is the Pharmaceutical industry, in the surgical mask market (with a CAGR of 8.38% through 2020 - 2024). We came to this conclusion as we plan to have a brand image of a technology company before having that of just any other mask company in the consumer's eyes, and this is what makes us different. The Smart Mask is a mask with IoT breathing sensors, which notifies you or your family if you are not wearing a mask in public areas. With the help of a team of biotechnologists and IoT specialists, we can design these sensors to record the location of your mask and report it to your mobile application. The IoT sensors will also record the breath of the person wearing the mask. So, if no one is breathing into the sensor, even if it is taken along to a public place, the person will be notified to wear it as well.
The mask asks for calendar and location access; with that, the user can be reminded to keep the mask nearby the night before a calendar event outside the house, and notified if the mask is forgotten at home while out of the house. The Smart Mask will come in a range of different designs. We plan to have designs with a variety of colours and also those that are cultural in nature. In the B2C model, users have control over making and joining groups with their families and loved ones, and can choose to share their location and their mask's location with them. So a parent can know if their child is wearing his/her mask in school or forgot it on the school bus. In the B2B model, businesses can't track their employees after work; however, they will be notified if two or more users have not worn masks or maintained social distancing for prolonged periods. With Smart Masks, reaching the 'New Normal' will become a reality more than ever. This would mean re-opening shopping malls, schools, offices, and public parks, if Smart Masks are used extensively. In order to be aware of our strengths and weaknesses, I would like to share our SWOT Analysis. Our strengths are that we have a strong vision for our product, and we are learning developers, enrolled in the best universities in Australia and India for AI. Also, being in Gen-Z, we both know the right ways to market this product to the youth and how to effectively use marketing growth channels to scale. We also have many opportunities that will benefit our business, like having the right timing to launch the Smart Masks (as we are in the midst of a pandemic). We have no direct competition yet, as there are no such products available on the market, but we also have a large customer base which is growing by hundreds of thousands on a daily basis (as Covid-19 unfortunately infects whole communities at a time). To add, we could also add machine learning and artificial intelligence features in the near future.
All in all, the growth potential of the industry and the scalability of our product are both promising. However, like every other business, we do have weaknesses: we have low financial funding, and do not have an experienced team of IoT and BioTech specialists. Further on, our biggest threat is the possibility of large monopoly conglomerates releasing similar products. But still, the market is big enough for a startup like us to survive the competition until we scale significantly, and being the first comer to the industry will surely reward us with loyal customers. We are targeting to first sell the product in Indonesia, due to its rising cases of Covid-19, and so are benchmarking the data against the smartwatch market in Indonesia. 2.5% of Indonesians have smartwatches, of whom 23% wear them for fitness purposes (which means that 1.495 million people are ready to spend about $100 on a tech-lifestyle device in Indonesia). If the Smart Mask manages to attract even a mere 2% of those 1.495 million people to buy our masks, and 10% of them to have it on a subscription model, then about 180,000 Indonesians would use our product. To conclude, if our product gets a steady increase in the influx of customers for a year after the first MVP (of a working product), as shown in the table, then we can hit 3.8 million dollars in revenue, with a 2.16 million dollar cost… leaving us with a profit in the first year of 1.71 million dollars. And the only investment we need is the right resources, to gather the right team. Built With figma html Try it out www.figma.com docs.google.com
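The notification rule described above — alert the user when they are in a public place but the mask reports no breathing, or when the mask was left behind — can be summarised in a short sketch. The function and field names here are illustrative assumptions; the actual product exists only as a Figma concept:

```python
# Hypothetical sketch of the Smart Mask alert rule combining location,
# mask proximity, and the breathing sensor reading.

def mask_alert(in_public, mask_nearby, breath_detected):
    """Return an alert string, or None if no reminder is needed."""
    if not in_public:
        return None  # no reminders at home
    if not mask_nearby:
        return "Your mask was left behind - go back for it!"
    if not breath_detected:
        return "You're in a public area - please put your mask on."
    return None  # mask is present and being worn

print(mask_alert(in_public=True, mask_nearby=True, breath_detected=False))
```

In the B2C model described above, the same alert could also be pushed to the user's family group; in the B2B model, repeated alerts for two or more users would notify the business.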
Smart Masks
As an attempt to minimise the effects of Covid, we present 'Smart Masks', which connect with your mobile phone and notify you about the correct way of wearing the mask, and when you forget your mask at home.
['Rahul Mawa']
[]
['figma', 'html']
80