Dataset schema (from the dataset viewer):
hackathon_id: int64 (1.57k to 23.4k)
project_link: string (length 30 to 96)
full_desc: string (length 1 to 547k)
title: string (length 1 to 60)
brief_desc: string (length 1 to 200)
team_members: string (length 2 to 870)
prize: string (length 2 to 792)
tags: string (length 2 to 4.47k)
__index_level_0__: int64 (0 to 695)
10,150
https://devpost.com/software/biology_quiz
Screenshots: Home, Settings, Quiz, Results, and Correct Answer screens. Inspiration Biology Quiz App was conceived with the UN SDG Quality Education goal in view. What it does It provides quality content for students aspiring to enter a medical college. The app contains 19 chapters ranging from basic biology to advanced topics such as genetics. It can be used both as a quiz and as a guide, since the correct answers to all questions are provided at the end of the quiz. How I built it I built it with the awesome Flutter framework provided by Google. The front end is beautifully crafted with Flutter, while Firebase Firestore is used to store and retrieve the quiz information. Challenges I ran into The biggest challenge was gathering quality, up-to-date biology content. I worked day and night to store the data in Firebase Firestore and make it available through the Biology Quiz App. Accomplishments that I'm proud of I am very happy that I picked up the Flutter framework in a very short span of time and developed a full-fledged app with it. Using Firestore also improves the performance of the app. Built With android-studio dart firebase firebase-firestore flutter ios-simulator xcode Try it out github.com
Biology Quiz 2020
Biology Quiz App for Medical College Entrance Exam
['Muhammad Abid']
['Surprises']
['android-studio', 'dart', 'firebase', 'firebase-firestore', 'flutter', 'ios-simulator', 'xcode']
1
10,150
https://devpost.com/software/covd-track-4n7qjc
Inspiration We want to help medical staff: it is very hard for any doctor to find all possible suspects after a single confirmed case of COVID-19, so we built this project to produce a list of all possible suspects plus infected areas. What it does The app has two parts. The first is a scanner: an app that scans the user's ID card and updates the user's location points, so if every store runs this scanner app, we will have enough data to find all the suspects. How we built it We first sketched the UI for all screens on paper, then drew the design in Adobe XD; later we transferred all the assets from XD to Android Studio and used them in the Flutter app. Challenges we ran into Connecting with Firebase; using the QR scanner library; loading user data through a smart card. Accomplishments that we're proud of We are proud that we completed this project: it is functional, and we can make entries at different points and then load all the points for a specific person to reconstruct their path. What we learned We learned to complete a task in a very small amount of time. What's next for COVID TRACK Adding a graphical Help module, and showing infected areas on a map to track and display all the suspects. Built With firebase flutter https://pub.dev/packages/google-map-location-picker https://pub.dev/packages/google-maps-flutter https://pub.dev/packages/google-maps-flutter-platform-interface https://pub.dev/packages/qr-flutter Try it out github.com drive.google.com
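The suspect-finding idea the team describes (load all the location points of a confirmed case, then find everyone whose path overlapped) can be sketched in a few lines. This is an illustrative Python sketch under assumed data structures (the real app stores check-ins in Firebase from the Flutter scanner); the function name, log layout, and the two-hour window are hypothetical.

```python
from datetime import datetime, timedelta

def find_suspects(logs, confirmed_person, window=timedelta(hours=2)):
    """Return people who visited the same place as the confirmed
    person within `window` of the confirmed person's visit.
    `logs` maps person -> list of (place, datetime) check-ins.
    Hypothetical structure; the real app stores points in Firebase."""
    suspects = set()
    for c_place, c_time in logs.get(confirmed_person, []):
        for person, visits in logs.items():
            if person == confirmed_person:
                continue
            for place, time in visits:
                # Same store, visit times close enough to overlap
                if place == c_place and abs(time - c_time) <= window:
                    suspects.add(person)
    return suspects
```

For example, if person A (confirmed) and person B both checked into the same store an hour apart, B is flagged as a suspect, while someone who visited a different store is not.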
COVID TRACK PATH
A solution for tracking the path a person has followed and finding all possible suspects if someone tests positive
['Roman Khan', 'Umer Waqas']
['Surprises']
['firebase', 'flutter', 'https://pub.dev/packages/google-map-location-picker', 'https://pub.dev/packages/google-maps-flutter', 'https://pub.dev/packages/google-maps-flutter-platform-interface', 'https://pub.dev/packages/qr-flutter']
2
10,150
https://devpost.com/software/tik-tak-toe-game
Screenshots: player setup page, game board, result page, player entry page, and start page. Inspiration Self-interest, and watching videos from GDG Peshawar. What it does Simple fun. How I built it With Flutter, using Dart. Challenges I ran into Applying OOP concepts. Accomplishments that I'm proud of Learned Dart and got to know Flutter. What I learned How to set up a development environment: Dart, Flutter, Android Studio. What's next for tik tak toe game Thinking about starting to practice on networking apps and online shopping apps. Built With dart flutter
tik tak toe game
simple tik tak toe mobile application
['Umair Ansari']
[]
['dart', 'flutter']
3
10,150
https://devpost.com/software/startups
Inspiration For new startups. What it does It connects startup companies to new ideas. How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Startups Built With flutter Try it out github.com
Startups
To build Platform for new ideas and startups
['The New base']
[]
['flutter']
4
10,150
https://devpost.com/software/shopsafe-21onk6
Screenshots: hardware mask detection, population/capacity regulation, home screen, Google Maps local stores, queue joined, enter-store prompt, thank-you screen. Inspiration Social distancing is the new norm, and maintaining optimal crowd levels in public places like shops and supermarkets has become a challenge. We present a high-tech solution to address these issues. Capacity regulation and standing in lines at supermarkets and grocery stores are prime examples of how COVID-19 has changed society. For many people, these lines are the only places where they interact with strangers or even come close to others, so they are among the few places where they have a real chance of becoming infected. This can be dangerous for the elderly and for people with underlying health issues. We aim to increase preventative measures at stores and decrease contact duration in order to maximize the safety of our users. What it does We have two parts to our product. The hardware component uses a machine-learning algorithm to detect whether a person entering the store is wearing a mask. The detection is paired with a servo motor that opens the door if a mask is detected and keeps it closed otherwise. The system also has an automated voice that tells users to come back with a mask on, or that they are good to enter the store. We used OpenCV to capture each frame of the video stream from a webcam and applied the ML algorithm to it, which sends the corresponding signal to the hardware via a serial port. The hardware component also regulates the population of the store, minimizing capacity while keeping a steady, fast flow of customers. Using the same webcam, we detect whether a person has crossed the boundary lines and update the population from the total numbers entering and exiting.
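The door-control behavior described above (open only on a positive mask detection, with the frame skipping and deliberate delays the team mentions in their challenges) can be sketched as pure decision logic, leaving out the OpenCV capture and serial I/O. This is an illustrative sketch, not the team's code; the class name and the `skip`/`hold_frames` parameters are hypothetical.

```python
class DoorController:
    """Decide the door state from per-frame mask detections.

    Processes only every `skip`-th frame and, after opening, holds
    the door open for `hold_frames` frames so the person has time
    to walk through. Pure logic only; the real system drives a
    servo via a serial port from these decisions.
    """

    def __init__(self, skip=5, hold_frames=30):
        self.skip = skip
        self.hold_frames = hold_frames
        self.frame = 0
        self.open_until = -1

    def update(self, mask_detected):
        self.frame += 1
        if self.frame <= self.open_until:
            return "open"      # deliberate delay: keep the door open
        if self.frame % self.skip != 0:
            return "closed"    # skipped frame: no new decision
        if mask_detected:
            self.open_until = self.frame + self.hold_frames
            return "open"
        return "closed"
```

Feeding it one boolean per frame yields a door signal that does not flap with every noisy frame-by-frame detection.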
We can then pass the population and the entrance/exit rates to a backend, which the front end uses to calculate the estimated wait time for each store. For the software component, we used Flutter, the Google Maps API, TensorFlow, Keras, OpenCV, and Python. Flutter was used to build the front end, which uses the Google Maps API for the Search Stores function. We created the ML algorithm using TensorFlow and Keras. With OpenCV, we captured a video stream from a webcam and processed every frame, passing each frame into a neural network trained on a dataset of face images, both with and without masks. This was a Haar-cascade style of classification: it detects a face, outlines the mouth region, and tries to detect a mask object. The algorithm runs through each frame, and if it detects a mask, it sends a signal via a serial port to the hardware. Our system uses a Python library called pySerial to communicate between the software and the hardware. How we built it Machine Learning: TensorFlow with a Keras front end, OpenCV, Dlib. Frontend: Flutter, Google Maps API, Python. Hardware: external webcam, Arduino Uno microcontroller, SG90 low-power servo, cardboard. Algorithm Face Mask Detection To train any machine-learning model, a good, reliable dataset is the first and foremost requirement. To generate a dataset for this problem, an image of a mask with a transparent background is superimposed on multiple faces, giving us many images of people wearing a face mask. (Note: do not use the same image for the with-mask and without-mask datasets; use different images.) Once the model is trained, OpenCV is used to get the video stream and analyze it frame by frame. In each frame of the video, the model is run to see whether the person is wearing a face mask. People Counter The code first identifies the people in the video stream and forms a bounding box around each of them. The bounding box is then used to find the centroid of the person.
This centroid determines the relative position of the person in the frame. An imaginary vertical line is drawn using OpenCV, and the positions of the centroid and the line are stored as pixel values. If the centroid crosses the imaginary division line, the person has either entered or exited the store. This way we can calculate how many people have entered and exited and keep a count of the population inside. Challenges we ran into Overall system integration. Optimal training and dataset augmentation. Lighting changes in the live camera stream affect accuracy significantly. Mask detection is very janky if the person is not close to the camera. People tracking needs to account for people randomly moving, standing around, and occluding views; a smarter approach is needed. We need to test (and train) with a more diverse set of people (due to social distancing, only two of us could test the live mask detection). Frame-by-frame detection is ineffective when attached to a physical process (like opening a door), so we skipped frames and introduced deliberate delays in the video-stream processing to give the user time to walk through the door at a safe distance from the next person. Accomplishments that we're proud of Our system works! We integrated different systems and components, with two separate machine-learning-based sub-modules. We identified issues with real-time video-stream analysis and figured out how to address them. We are also proud that the application works and is capable of queuing up your position in multiple stores. What we learned We learned how to integrate the Google Maps API into our code, which we had never done before; this let us use location services to determine the user's location and nearby stores. We also improved our ML skills by working with image detection.
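The centroid-versus-line crossing count described above reduces to a few comparisons on pixel x-coordinates. A minimal sketch, assuming we already have each person's centroid in consecutive frames (real tracking must also associate detections across frames); the class and method names are hypothetical.

```python
class PeopleCounter:
    """Count entries/exits from a centroid's x-position crossing a
    vertical division line. Coordinates are pixel values; `line_x`
    is the imaginary line drawn with OpenCV."""

    def __init__(self, line_x):
        self.line_x = line_x
        self.population = 0

    def update(self, prev_x, curr_x):
        # Left-to-right crossing counts as an entry,
        # right-to-left as an exit; no crossing changes nothing.
        if prev_x < self.line_x <= curr_x:
            self.population += 1
        elif curr_x < self.line_x <= prev_x:
            self.population -= 1
        return self.population
```

The running `population`, together with entry/exit rates, is what the backend would expose to the app for wait-time estimates.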
Integrating the overall system was a challenge, and we learned many things along the way. What's next for ShopSafe We hope to deploy our service in stores around the country, but we need local stores to agree to use it: the queuing system can only work when the store agrees to let users in exclusively through the queue. We also need funding to distribute our hardware system. We hope our solution can have a real impact on the world. Built With arduino dlib flutter keras opencv python sg90 tensorflow webcam Try it out github.com
ShopSafe
Convenient and Safe Shopping for Everyone
['James Han', 'Muntaser Syed', 'Prashant Bhandari']
['Top 3 Best Data Science/Machine Learning Hack', 'The Wolfram Award']
['arduino', 'dlib', 'flutter', 'keras', 'opencv', 'python', 'sg90', 'tensorflow', 'webcam']
5
10,150
https://devpost.com/software/todo-app-39x7pb
Screenshots: home screen, create new task. Inspiration Learning and getting new experiences. What it does A general memo- and task-creation app that helps people get more out of their smartphones. How we built it A lot of teamwork and help. Challenges we ran into A lot; Flutter's community is very unhealthy at the moment. Accomplishments that we're proud of Finally being able to develop Flutter code. What we learned New experiences and languages. What's next for ToDo App It is still a work in progress. Built With android-studio dart flutter
ToDo App
People will be able to save their daily tasks and memos in a compact and minimal environment.
['Yousaf Tariq', 'Shah Rukh', 'Moiz Hussain']
[]
['android-studio', 'dart', 'flutter']
6
10,150
https://devpost.com/software/covid-19-self-screening-test-at-home-without-internet-access
"STAY AT HOME, BE YOUR OWN DOCTOR" Inspiration I belong to a remote area of Pakistan, Chitral, where internet accessibility is a serious issue. Not everyone here can afford to go to the hospital every day to test whether they have been affected by the virus, and because of the shortage of medical staff, doctors cannot concentrate on patients who are seriously ill or at higher risk, since hospitals are crowded with people who visit only because they suspect they may be infected. Keeping all these problems in view, what I could do in such circumstances, as a computer systems engineering student, was to find a possible and easily accessible solution in the form of my app. What it does It provides a platform for everyone, especially people living in remote areas like our Chitral, to test themselves at home through this app. It tells them whether they are likely affected by the virus, whether they need medical treatment right now, and whether they need to visit a doctor. On the basis of the patient's symptoms, the app indicates whether they are going through a normal flu, pneumonia, or possibly corona. How I built it I am still working on the app; it is not completed yet, and I will finish it soon as I try to make it more user friendly. Challenges I ran into Internet accessibility and load shedding are the primary issues I faced, due to which I couldn't complete the project on time and am still working on it. Secondly, I learned about the deadline late because I did not have internet access.
Accomplishments that I'm proud of The world is going through difficult circumstances: people are suffering from this virus, and the death rate is increasing day by day. In such circumstances, if my app could save even a single life, or help someone who cannot afford to go to the hospital every day, that would be the biggest success or achievement for me; for me the real achievement is to play a role in reducing anyone's suffering. What I learned The best thing I learned, and am still learning while working on my app, is how to practically apply what we have learned in our educational institutes, how to play our role when our country needs our contribution, and how to work within limited time and with scarce resources. What's next for Covid-19 self screening test at home without Internet access After completing my project I would love to work more on my idea. I would add a chat between a patient and a medical specialist, each sitting at home, so the specialist could guide the patient according to their condition; some other ideas are also on the board to work on. Built With bootstrap3 c++ flutter oop phython
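The symptom-based screening the app describes (flagging flu, pneumonia, or possible COVID-19 from self-reported symptoms, with no internet needed) amounts to an offline rule table. A minimal illustrative sketch; every rule, threshold, and message here is a hypothetical placeholder, not medical guidance, and the real app would encode advice from health authorities.

```python
def triage(symptoms):
    """Very rough illustrative triage from self-reported symptoms.
    `symptoms` maps symptom name -> bool. All rules are hypothetical
    placeholders for whatever official guidance the app encodes."""
    covid_signs = ("fever", "dry_cough", "shortness_of_breath",
                   "loss_of_taste_or_smell")
    score = sum(symptoms.get(s, False) for s in covid_signs)
    if score >= 3 or symptoms.get("shortness_of_breath", False):
        return "see a doctor: possible COVID-19"
    if symptoms.get("fever", False) and symptoms.get("productive_cough", False):
        return "possible pneumonia: consult a doctor"
    if score >= 1 or symptoms.get("runny_nose", False):
        return "likely common flu: rest at home"
    return "no concerning symptoms"
```

Because the rules are plain data, the whole table ships inside the app and works without any internet connection, which is the point of the project.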
Covid-19 self screening test at home without Internet access
"STAY AT HOME, BE YOUR OWN DOCTOR." My app works without any internet connection; it will help people in remote areas test themselves at home and will guide them accordingly.
['Iram Hassan']
[]
['bootstrap3', 'c++', 'flutter', 'oop', 'phython']
7
10,150
https://devpost.com/software/foodies-xfc05b
Screenshots: drawer, home screen, item display. Inspiration From GDG Peshawar's bootcamp. What it does Food delivery app UI. How I built it Flutter with Dart, along with Firebase. Challenges I ran into Animations. Accomplishments that I'm proud of Firebase and smooth UI animations. What I learned Flutter and Dart. What's next for Foodies To make it a complete food delivery system. Built With flutter Try it out drive.google.com
Foodies
Order Food online - App UI developed with Flutter
['Usman Khan']
[]
['flutter']
8
10,150
https://devpost.com/software/food-help
Inspiration I was thinking of creating something else, but suddenly I realized it was the last date for submission, so I just created a listing of a few orphanages needing help; nothing special here. Built With dart flutter
Food Help
Get contact info of orphanages who need help during pandemic.
['Digdarshan Subedi']
[]
['dart', 'flutter']
9
10,150
https://devpost.com/software/butcher-finder
Inspiration Eid ul Azha project What it does It will find a butcher How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Butcher Finder Built With flutter
Butcher Finder
Find Perfect Butcher For Eid ul Azha
['asad wali', 'mansoor khan', 'Muhammad Yasir Durrani']
[]
['flutter']
10
10,150
https://devpost.com/software/corona-guard
Screenshots: Corona Guard home screen, resources page, contact log pages (parts 1-3), settings page, NFC scanner station (overview, front, and side views), NFC scanner interface, NFC prototype, backend, and backend database. Inspiration Since December 2019, the coronavirus pandemic, labeled COVID-19 by the World Health Organization, has spread to over 188 United Nations member countries, infecting over 4.9 million people and causing over 300,000 deaths around the globe. The coronavirus's high R0 value of 2.5, combined with its long incubation period of up to 14 days, means that the only way to control the spread of COVID-19 is social distancing. This has closed the majority of public spaces, wreaking havoc on the global economy. To reopen most of the global economy safely, robust testing and tracing infrastructure is needed to prevent a new spike in COVID-19 cases and deaths. Most countries around the world lack robust infrastructure for tracking the spread of COVID-19, letting it spread very quickly and cause unexpected spikes in cases all over the world. Contact tracing is a method of tracking personal interactions in order to warn a person preemptively, before they spread COVID-19 to others they will come into contact with in the future. By tracking personal interactions before COVID-19 actually spreads, many of the risks of being in public are reduced, and healthcare providers can take a proactive approach when treating suspected cases of COVID-19, potentially saving thousands of lives. All primary, secondary, and tertiary interactions are logged, which allows people to get notified even if they are at low risk of contracting COVID-19. Contact tracing also allows healthcare professionals to allocate limited resources like medications and vaccines to the people who need them most, by finding those at the highest risk of contracting COVID-19.
What it does Corona Guard is a secure contact tracing app that uses peer-to-peer Bluetooth communication to anonymously track the spread of COVID-19. It notifies users of daily interactions with other users of the application, gives updates on the number of direct, indirect, and distant interactions with people testing positive for COVID-19, and calculates the risk of having the virus. Corona Guard also features an NFC chip that users must check in with before entering public spaces, to ensure those spaces stay within healthy capacity levels. Owners of these public spaces also have the option of barring high-risk users from entering their property. The app has a resources page providing the most up-to-date and accurate news and recommendations during the pandemic, to prevent misinformation. All in all, Corona Guard aims to curb the spread of COVID-19 at its sources through its anonymous contact tracing and NFC system, so that communities can collectively tackle the pandemic once and for all. How we built it The main mobile application for Corona Guard was made using Google's Flutter SDK. Flutter is a mobile SDK, compatible with Android Studio and Xcode, that lets mobile applications run on both Android and iOS devices of any shape, size, and operating-system version. Flutter enables Corona Guard to run on any modern smartphone with proper scaling and a responsive UI. Using the Flutter SDK, our team developed a responsive UI that gives users real-time data about their risk of infection and notifies them if they have been in contact with someone who has tested positive for COVID-19. The app also provides links where users can get up-to-date information about COVID-19 spread and procedures in their respective areas.
The user interface of our app was designed to give users easy-to-access information while being as transparent as possible about how a user's data is stored and used. The backend of the app was built using Google Firebase. There are two main tables, one for users and one for the entire system. The user side has these fields: "uuids heard", a boolean "infected" value, and a calculated "risk" percentage. Every user uploads the UUIDs they heard, and these are stored in the "events" table, which also has three fields: "uuid", a "time" value, and the user who uploaded it. Together, these two tables send data to the Flutter frontend. Firebase gives our app the ability to adapt to a changing user base, staying fast and responsive whether it has one user or one billion. The NFC scanner we built uses the MIFARE NFC standard, which can transmit over 1 KB of data wirelessly in under a second. The scanner reads the data in blocks of 16 bytes each; there are 64 blocks, so the scanner reads 1024 bytes in total. The NFC tag in the phone stores whether a person has been exposed to or infected with COVID-19 in the first byte of the first block, as either a 0 or a 1: 0 for negative and 1 for positive. We store this data in the first block so that, even if the scanner gets a partial read, it can still display a positive or negative result. Lastly, if the rest of the bytes in the block are not clear (set to 0), the scanner reads the phone's NFC chip as invalid, since the person scanning is likely using an unsupported app. As no data is needed by the Arduino microcontroller, our NFC scanner can run without an internet connection, making it even easier to use for businesses of all sizes. The only maintenance a business would have to perform is a firmware update every few months as we continue to optimize the scanner to become faster and faster.
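The first-block layout described above (status in byte 0, zeroed padding as a validity check, partial reads still usable) can be sketched as a small parsing function. An illustrative sketch of the scanner's validation logic, not the team's firmware; the function name and return strings are hypothetical.

```python
BLOCK_SIZE = 16   # bytes per MIFARE block
NUM_BLOCKS = 64   # 64 blocks x 16 bytes = 1024 bytes total

def parse_status_block(block0):
    """Interpret the first 16-byte block of the tag.

    Byte 0 holds 0 (negative) or 1 (positive/exposed). Any other
    status value, or nonzero padding bytes, marks the tag as
    written by an unsupported app. Because the status sits in the
    very first byte, even a partial read still yields a result."""
    if not block0:
        return "no-read"
    if any(block0[1:]):
        return "invalid"  # padding not clear -> unsupported app
    return {0: "negative", 1: "positive"}.get(block0[0], "invalid")
```

Keeping the decision local to one block is also what lets the scanner run offline on the Arduino side: no byte of the result depends on a network lookup.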
Industrial Scanner Design As part of our NFC scanner, we also prototyped a sleek, industrial scanner enclosure for use in public spaces, to accompany the NFC scanner and user interface. The enclosure is made of aerospace-grade aluminum and features a stylish industrial design. The user interface includes an NFC reader/writer and a large, high-resolution 12" LED display. Challenges we ran into One challenge we ran into while building Corona Guard was making it as private as possible. We initially thought of using geolocation or GPS, but many people are hesitant to give their every location to a private company. We therefore decided to transmit anonymous UUIDs (unique user IDs) between users in order to track which phones had contact with each other. These UUIDs are not connected to any private data such as a name, so we decided this was a secure enough way for everyone to remain completely anonymous while still being able to accurately track contacts. Accomplishments that we're proud of The application is fully functional and is compatible with iOS 8 or newer and Android 4.1.x (Jelly Bean, API 16) or newer. What we learned We learned the fundamentals of app development and how to build a working application from the ground up. We learned how to use the Flutter SDK for the front-end UI/UX and Google Firebase for the backend. What's next for Corona Guard We are hopeful that this application can give communities a tool to collectively combat the spread of COVID-19 through its accessibility and ease of use. Through connections with health organizations, we can provide COVID-19 testing centers with information on whom to prioritize for testing and give health officials valuable information to help stop the spread of the pandemic. In the future, we hope to continue expanding our knowledge of algorithms and data-science techniques to make the backend of the app more efficient and scalable.
Built With arduino bluetooth dart firebase flutter nfc Try it out github.com
Corona Guard
Corona Guard is a smart contact tracing app that aims to slow the spread of COVID-19 and other infectious diseases by logging interactions between humans in a secure and private way.
['Vikas Ummadisetty', 'Derek Xu', 'Krishna Veeragandham', 'Subash Shibu']
[]
['arduino', 'bluetooth', 'dart', 'firebase', 'flutter', 'nfc']
11
10,159
https://devpost.com/software/asteroid-the-pytorch-based-source-separation-toolkit
Links: Asteroid on GitHub, Asteroid's docs, the Asteroid-models community on Zenodo, and Asteroid's landing page. Inspiration DeMask's inspiration We were having a quick call and one of us was on the train. We couldn't hear him well because he was wearing a mask, and we immediately thought of building a surgically-masked speech enhancement model with Asteroid! In the current COVID pandemic, we can build better mask-adapted speech technologies to help people keep their masks on and spread their words without spreading the virus. Asteroid's inspiration It all started during a speech-processing research project. We wanted to move fast and tried "ready-to-use" speech separation and speech enhancement models. We quickly realized that nothing really worked as expected, and we spent our time fixing other people's bugs instead of doing research. This struggle inspired Asteroid and motivated us to open-source our code with things that just work. While sharing research code is already a step in the right direction, sharing readable and reproducible code should be the standard. Asteroid aims to empower developers and researchers with tools that make this even easier. What it does About DeMask DeMask is a simple yet effective end-to-end model to enhance speech when wearing face masks. It restores the frequency content distorted by the face mask, making the muffled speech sound cleaner. The recipe to train the model is here, and the pretrained model here. About Asteroid Asteroid is an audio source separation toolkit built with PyTorch and PyTorch-Lightning. Inspired by the most successful neural source separation systems, it provides all the neural building blocks required to build such a system. To improve reproducibility, recipes on common audio source separation datasets are provided, including all the steps from data download/preparation through training to evaluation.
Asteroid exposes every level of granularity to the user, from simple layers to complete ready-to-use models. Our pretrained models are hosted in the asteroid-models community on Zenodo. Loading pretrained models is trivial, and sharing them is made easy with Asteroid's CLI. You can check out our landing page, our repo, our latest docs, and our model hub. To try Asteroid, install the latest release with pip install asteroid, or the current version with pip install git+https://github.com/mpariente/asteroid ! How we built it DeMask is trained on synthetic data generated from LibriSpeech's (Panayotov et al. 2015) clean speech, distorted by approximate surgical-mask finite impulse response (FIR) filters taken from Corey et al. 2020. The synthetic data is then augmented using room impulse responses (RIRs) from the FUSS dataset (Wisdom et al. 2020). A simple neural network estimates a time-frequency mask to correct the speech distortions. Thanks to Asteroid's filterbanks (formulated using torch.nn), we could use a time-domain loss with a time-frequency model, which yielded better results. Asteroid uses native PyTorch for layers and modules, and a thin wrapper around PyTorch-Lightning for training. Most objects (models, filterbanks, optimizers, activation functions, normalizations) are retrievable from string identifiers to improve efficiency on the command line. Recipes are written in bash to separate data preparation, training, and evaluation, as adopted in the Kaldi and ESPnet ASR toolkits. During training, PyTorch-Lightning's coolest features (mixed precision, distributed training, torch_xla support, profiling, and more!) stay at the fingertips of our users. We use Zenodo's REST API to upload and download pretrained models. Our favorite torch op? We love unfold and fold, and built cool end-to-end signal-processing tools with them!
Examples Load DeMask's pretrained model using Hub or Asteroid:

# Without installing Asteroid
from torch import hub
model = hub.load(
    "mpariente/asteroid",
    "demask",
    "popcornell/DeMask_Surgical_mask_speech_enhancement_v1",
)

# With asteroid installed from master
from asteroid import DeMask
model = DeMask.from_pretrained("popcornell/DeMask_Surgical_mask_speech_enhancement_v1")

Directly use it to enhance your muffled speech recordings with the asteroid-infer CLI:

asteroid-infer popcornell/DeMask_Surgical_mask_speech_enhancement_v1 --files muffled.wav

To find the name of the model, we browsed our Zenodo community and picked the DeMask pretrained model (snapshot here). Train a speech separation model in 20 lines:

from torch import optim
from pytorch_lightning import Trainer
from asteroid import ConvTasNet
from asteroid.losses import PITLossWrapper
from asteroid.data import LibriMix
from asteroid.engine import System

train_loader, val_loader = LibriMix.loaders_from_mini(task='sep_clean', batch_size=4)
model = ConvTasNet(n_src=2)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss = PITLossWrapper(
    lambda x, y: (x - y).pow(2).mean(-1),  # MSE
    pit_from="pw_pt",  # point in the pairwise matrix
)
system = System(model, optimizer, loss, train_loader, val_loader)
trainer = Trainer(fast_dev_run=True)
trainer.fit(system)

Challenges we ran into In our first approach, we wanted to use the MASC dataset from the ComParE challenge (classification with unpaired data; see here) and use style transfer to perform enhancement, but the amount of data was too small and the differences between mask and no-mask were too subtle. We suspect that surgical masks don't affect speech as much as self-made cloth masks do. When we got the impulse responses (IRs) from Corey et al. 2020, none of our first ideas worked, because the filters contained the IR of the microphone and the room, and the phase was noisy.
We then resorted to designing ad-hoc FIR filters that directly approximate the frequency response of the masks in Corey et al. 2020. The filters were dynamically generated at training time to augment the available data. Approximating the filters by hand saved us in the end! We would have loved to create a live demo of DeMask in the browser, but the model was not scriptable with TorchScript; we'll definitely work on it in the future. Accomplishments that we are proud of Our PITLossWrapper takes any loss function and turns it into an efficient permutation-invariant one! Check out our notebook about it. Using Zenodo's REST API to automate model sharing from the command line was quite challenging, and we believe it's a game changer for letting users share their pretrained models. Giving proper credit is underrated: we're proud to release pretrained models with automatically generated, appropriate license notices on them! We received the impulse responses from Corey et al. 2020 less than a week before the challenge's deadline and ran into technical issues generating the training data, but we didn't quit! We've adapted PyTorch's Sphinx template to create our beautiful docs. We've made our very own landing page and we love it. We've gathered more than 20 contributors from both academia and industry. We opened a leaderboard on PapersWithCode and we see new entries all the time. What we learned Individually, we've learned to work as a team, set our goals, separate tasks, and act fast. What's next For DeMask, integrating end-to-end denoising and dereverberation with demasking would make a good candidate for an open-source version of NVIDIA RTX Voice. For Asteroid, pretty much everything is next: a tighter integration with torchaudio and torch's ComplexTensor; TorchScript support; end-to-end separation to ASR with ESPnet; multi-channel extensions; a nice refactoring of the bash recipes into a Python CLI; a growing community of users and contributors; and the list goes on.
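The idea behind PITLossWrapper (evaluate a base loss under every assignment of estimated sources to references and keep the best one) can be shown without any tensors. A plain-Python conceptual sketch, assuming lists of samples in place of PyTorch tensors; Asteroid's real wrapper does this with efficient batched pairwise computations, which this sketch does not attempt.

```python
from itertools import permutations

def mse(a, b):
    """Mean squared error between two equal-length sample lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pit_loss(estimates, references):
    """Permutation-invariant loss: try every permutation of the
    estimated sources against the references and return the
    minimum mean loss. Conceptual sketch of what PITLossWrapper
    does with batched tensor ops."""
    n = len(estimates)
    return min(
        sum(mse(estimates[p[i]], references[i]) for i in range(n)) / n
        for p in permutations(range(n))
    )
```

If the separator outputs the sources in swapped order, the permutation search still finds the matching assignment, so the loss does not punish an arbitrary output ordering.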
Acknowledgement

We'd like to thank all of Asteroid's contributors!

• mhu-coder • sunits • JorisCos • etzinis • vitrioil • jensheit • Ariel12321 • tux-coder • saurabh-kataria • subhanjansaha • mdjuamart • hangtingchen • groadabike • dditter • bmorris3 • DizzyProtos

Built With pypi python pytorch pytorch-lightning torchaudio zenodo Try it out asteroid-team.github.io github.com mpariente.github.io zenodo.org
DeMask
Enhance speech when wearing face masks - built with Asteroid
['Pariente Manuel', 'Samuele Cornell', 'Michel Olvera', 'Fabian-Robert Stöter', 'Jonas Haag']
['First Place']
['pypi', 'python', 'pytorch', 'pytorch-lightning', 'torchaudio', 'zenodo']
0
10,159
https://devpost.com/software/a-qeysp1
Inspiration

In recent years, machine learning-based algorithms and software have spread rapidly through society. However, cases have been found where these algorithms unintentionally make discriminatory decisions (e.g., the keynote by K. Crawford at NeurIPS 2017). For example, allocation harms can occur when AI systems extend or withhold opportunities, resources, or information; some of the key applications are in hiring, school admissions, and lending. [1] Since PyTorch didn't yet have a library for fairness, we decided to create one.

What it does

FairTorch provides tools to mitigate inequities in classification and regression. Classification support is currently limited to the binary case. A unique feature of this tool is that you can add a fairness constraint to your model by adding just a few lines of code.

Challenges I ran into

In the beginning, we attempted to develop FairTorch based on fairlearn's reduction algorithm. However, that algorithm is implemented on top of scikit-learn and is not suitable for deep learning: it requires ensemble training of the model, which would be too computationally expensive. To solve this problem, we implemented constrained optimization without ensemble learning, adapting the existing fairlearn algorithm for deep learning.

How we built it

We employ a method called group fairness, which is formulated as a constraint on the predictor's behavior called a parity constraint, where X is the feature vector used for prediction, A is a single sensitive feature (such as age or race), and Y is the true label. A parity constraint is expressed in terms of an expected value over the distribution of (X, A, Y). To achieve this, we adopt constrained optimization and implement each parity constraint as a loss term. Demographic Parity and Equalized Odds are applied to the classification algorithm.
We consider a binary classification setting where the training examples consist of triples (X, A, Y), where X is a feature vector, A is a protected attribute, and Y ∈ {0, 1} is a label. A classifier that predicts Y from X is h: X → Y.

Demographic parity is shown below:

E[h(X) | A = a] = E[h(X)] for all a ∈ A    (1)

Next, equalized odds is shown below:

E[h(X) | A = a, Y = y] = E[h(X) | Y = y] for all a ∈ A, y ∈ Y    (2)

We consider learning a classifier h(X; θ) with PyTorch that satisfies these fairness conditions, where θ is the model parameter. To train the classifier as an inequality-constrained optimization problem, we convert (1) and (2) into inequality constraints:

M_μ(X, Y, A, h(X; θ)) ≤ c    (3)

Thus, learning the classifier h(X; θ) becomes:

min_θ error(X, Y) subject to M_μ(X, Y, A, h(X; θ)) ≤ c

To make this problem amenable to PyTorch's gradient-based parameter optimization, we turn the inequality constraint into a penalty term R:

R = B |ReLU(M_μ(X, Y, A, h(X; θ)) − c)|^2    (4)

Accomplishments that we're proud of

We confirmed by experiment that inequity is reduced by adding just 2 lines of code.

What we learned

We learned how to define criteria of fairness and the mathematical formulations needed to achieve them.

What's next for FairTorch

As the current optimization algorithm in FairTorch is not yet refined, we plan to implement a more efficient constrained optimization algorithm. Other criteria of fairness have also been proposed besides demographic parity and equalized odds; in the future, we intend to implement these as well.

References

A Reductions Approach to Fair Classification (Agarwal et al., 2018)
Fairlearn: A toolkit for assessing and improving fairness in AI (Bird et al., 2020)

Built With python pytorch Try it out github.com pypi.org
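To make the penalty term (4) concrete, here is a small NumPy sketch of a demographic-parity version: M is taken as the largest gap between a group's mean prediction and the overall mean, pushed through a ReLU hinge and squared. This is only an illustration of the formulation; B and c are arbitrary here, and this is not FairTorch's actual code.

```python
import numpy as np

def demographic_parity_penalty(scores, groups, c=0.05, B=10.0):
    """R = B * |ReLU(M - c)|^2, with M the demographic-parity gap
    max_a |E[h(X)|A=a] - E[h(X)]| (illustrative, not FairTorch's code)."""
    overall = scores.mean()
    M = max(abs(scores[groups == a].mean() - overall) for a in np.unique(groups))
    return B * max(M - c, 0.0) ** 2  # ReLU hinge, then squared

scores = np.array([0.9, 0.8, 0.7, 0.2, 0.3, 0.1])  # h(X) for six samples
groups = np.array([0, 0, 0, 1, 1, 1])              # sensitive attribute A
penalty = demographic_parity_penalty(scores, groups)  # gap M = 0.3 here
```

Adding such a term R to the task loss before backpropagation is what lets a gradient-based optimizer trade accuracy against the parity constraint.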
FairTorch
FairTorch provides tools to mitigate inequities in deep learning. A unique feature of this tool is that you can add a fairness constraint to your model by simply adding just a few lines of code.
['Akihiko Fukuchi', 'Masashi Sode', 'yoko yabe', 'Yasufumi Nakata']
['First Place']
['python', 'pytorch']
1
10,159
https://devpost.com/software/pytorchxai
Inspiration

The year 2020 brought changes all around the globe, imposing new standards when it comes to protecting everybody around you. In this time of need, one of the most harmed and overloaded institutions is the hospital, which is the first line of defence against the pandemic and a desperately needed place for some of us. People with chronic, autoimmune or degenerative diseases must stay in touch with their doctors to keep their physical health in check. As a multiple-sclerosis patient himself, our teammate Bogdan is bound to make regular visits to the hospital. The current global crisis complicated everybody's lives, and he is no exception. He has to travel to other cities for periodic checks or to get treatment. It is frustrating to sometimes get back home with more questions than answers. However, Bogdan is a software engineer, and he believes that AI could solve some of these issues.

What it does

Q&Aid solves this problem by: providing the user with answers to questions on clinical data, providing the hospital with a transcript of what the patient needs, reducing waiting time, and unloading the hospital triage. Q&Aid is a conversational agent that relies on a series of machine learning models to filter, label, and answer medical questions based on a provided image, as described below. The transcript can then be forwarded to the closest hospitals, and the patient will be contacted by one of them to make an appointment. Each nearby hospital has its own models trained on private data, fine-tuning a visual question answering (VQA) model and other models based on the data available (e.g. brain anomaly segmentation). We aggregate all of the tasks these hospitals can do into a single chat app, offering the user results and features from all nearby hospitals. When the chat ends, the transcript is forwarded to each hospital, with a doctor in charge of the final decision.
Q&Aid simplifies the hospital backend logic by standardizing it into a Health Intel Provider (HIP). A HIP is a collection of models trained on local data that receives text and visual input, then filters, labels and feeds the data to the right models, generating output for the aggregator at the end. Any hospital is identified as a HIP holding custom models and labelling based on its own knowledge.

How we built it

There are three sections of the app that are worth mentioning:

Q-Aid-App
Created using React-Native.
Authentication and database support by AWS Amplify.
Awesome chat created using GiftedChat.
Backed by the PyTorch core algorithms and models.

Q-Aid-Core
Server built with FastAPI.
DockerHub deployment as a Docker image.
A script that partially builds the following stack:
A DNS record for the application.
An SSL/TLS certificate.
An Application Load Balancer with that DNS record and certificate.
An ECR Container Registry to push our Docker image to.
An ECS Fargate Service to run our Q&Aid backend.
Inspired by this tutorial .

Q-Aid-Models

Visual Question Answering
Visual Question Answering is a challenging task for modern Machine Learning. It requires an AI system that understands both vision and language, such that it can answer text-based questions given the visual context (an image, CT scan, MRI scan, etc.). Our VQA engine is based on MedVQA, a state-of-the-art model trained on medical images and questions, using Meta-Learning and a Convolutional Autoencoder for representation extraction, as presented here .

Medical Brain Segmentation
Medical segmentation is the task of highlighting a region or a set of regions with a specific property. While this task is mostly solved in the general-purpose setup, it is quite hard in the medical scene because of the difficulty of the problem (humans have a higher error rate when highlighting brain abnormalities) and the lack of data.
Our model uses a UNet architecture, a residual network based on downsampling and upsampling that performs well at localizing different features, as presented on the PyTorch Hub , thanks to the work of Mateusz Buda.

Medical Labelling
Medical labelling is the task of deciding what kind of image the user is feeding into the app. So far, possible labels are brain, chest, breast, eyes, heart, elbow, forearm, hand, humerus, shoulder and wrist. Currently, our VQA model only supports brain and chest, but we are working on adding support for more labels. Our model uses a Densenet121 architecture from the torchvision module, an architecture proven suitable for medical imagery by projects like MONAI, which uses it extensively.

Medical Filtering
Medical filtering is the task of splitting images into two sets, medical and non-medical, as we want to filter out all non-medical data before it is fed into the other machine learning models. Our model also uses a Densenet121 architecture from the torchvision module.

Datasets
The datasets used in this project are augmented versions of:
VQA-RAD
Tiny ImageNet
Medical Decathlon
MedNIST - the dataset is kindly made available by Dr. Bradley J. Erickson M.D., PhD (Department of Radiology, Mayo Clinic) under the Creative Commons CC BY-SA 4.0 license .

Challenges we ran into
The hackathon has been quite a journey over the past few months, as the idea constantly evolved. At first, Tudor came up with the idea that PyTorch needed an Explainable Artificial Intelligence module. We decided we wanted support for model interpretability and Tensorboard integration, and we called it TorchXAI. We learned a lot about model interpretability and how to integrate features into Tensorboard as plugins via the PyTorch API, and Bogdan implemented a ton of algorithms. After a few weeks, Andrei showed us captum , which did all that we wanted to do, but better.
A bit demotivated, we were searching for a new idea when Bogdan came up with a medical use case: using PyTorch to enhance hospitals. After Bogdan motivated us to continue, we started shaping the new idea: finding use cases, figuring out the needs, what can be done and, most importantly, what we could do. As a multiple-sclerosis patient himself, Bogdan is bound to make regular visits to the hospital. The current global crisis complicated everybody's lives, and MS patients are no exception. Some of them have to travel to other cities for periodic checks or treatment, and it can be frustrating and time-wasting. But he believed that AI could solve some of the issues, and he motivated us to solve this together. After we found a good problem to work on, we started giving each other continuous feedback, working on different ideas. Tudor and Bogdan are contributors to the OpenMined community, a community that works to enable privacy-preserving machine learning. We expect that a future release will enable us to do Federated Learning at scale in the cloud. We decided that the hackathon would be the best place to start working on applying machine learning in healthcare, and that afterwards the right next step would be to enable privacy in healthcare so that hospitals could exchange more data. Afterwards, Bogdan came up with the notion of a Health Intel Provider - HIP. This would be an abstraction that we would use for any hospital, research lab, or just a medical data owner that wants to join our network to train algorithms on any task. It would become the computational backbone required to use any kind of machine learning or privacy tools in a hospital. At this point, we decided that we wanted a medical chat that can help interpret medical imaging and define medical terms for the user, as well as provide medical transcripts for the doctors to understand the needs of the patient. After that, the data search started.
After we saw how little data was publicly available in healthcare, we realized that we were on the right track and that our work could have a real impact. After finding the right datasets and use cases, Tudor and Andrei started to train the models. Andrei came up with the idea of using a Visual Question Answering model because it would fit well with our medical chat task. Down the road, we faced tons of bugs, from the incompatibility of react-native with TorchScript to adapting different data distributions to match each other, but the most important thing is that we've learned a lot and had fun doing it. At this point, we are looking forward to integrating more models (the next one being on retinopathy) and privacy tools to enable private location sharing and maybe even inference on private data by using PySyft.

Accomplishments that we're proud of

Team
Tackling a difficult problem that is important to us and being able to deliver a working proof-of-concept solution is a great source of pride for our team. This feat involves building and integrating several distinct moving parts ranging from machine learning pipelines to cloud infrastructure and mobile development. It requires a good understanding of all systems involved and, above all else, great communication, prioritization, and scheduling within the team. We view having successfully navigated all of the above in a relatively short period of time as a significant accomplishment.

Bogdan
I am really happy that we continuously found new sources of motivation to create a tool that might tackle some real issues, and that we learned a lot on the way. And I hope we made a small step in the right direction.

Tudor
This hackathon has been a source of valuable lessons and great achievements; the ones I am most proud of are: making progress towards solving a hard real-world problem, sharpening my communication and computer vision skills, and making my first technical project with my older brother.
<3

Andrei
The PyTorch Summer Hackathon was a way for me to explore ideas outside of my typical areas of interest. Deep Learning for medical applications has a lot of different issues compared to more traditional Deep Learning tasks. I enjoyed learning about the solutions addressing the lack of qualitative data and the network architectures. This field has the potential of having a great impact on AI for Social Good, and I'm glad that we were able to develop a working prototype showcasing a Deep-Learning-enabled medical assistant. Such tools could change the medical landscape, providing access to powerful diagnosis tools even in the most remote corners of the world.

Horia
As a student, solving clearly defined problems using known methods is the norm. As such, having the opportunity to identify a meaningful problem and explore novel solutions offers great challenges and satisfactions. I am tremendously proud to be part of an effort that democratizes access to quality diagnoses while preserving the essence of the doctor-patient relationship. AI will shape the future of medical technology, and nudging this transformation is a true personal accomplishment.

What we learned
As a team, we've learned how to express our ideas the right way and how to give constructive feedback. We've learned that we are here for the journey and we should make the most out of it, even if there are different opinions along the road. As individuals, each of us learned valuable social and technical lessons and even brand new skills from scratch. Here are a few of them:

Bogdan learned: XAI and model interpretability, how to make conversational agents, react-native, deploying on AWS.
Tudor learned: TorchScript, FastAPI, Torch Hub, MONAI.
Andrei learned: VQA, segmentation, captum.
Horia learned: PyTorch, computer vision, presentation skills.

What's next for Q&Aid
Q&Aid has four tracks for its future:
Integrating more medical machine learning models.
Adding more features to the app.
Integrating OpenMined technologies for privacy. Recruiting medical experts, doctors, and patients. Andrei and Tudor are looking into the first track, integrating retinopathy detection as well as generative models for augmentation. Bogdan is working on the second track, integrating better authentication services, searching for better distributed architectures and polishing the application UI based on feedback. Bogdan and Tudor are working on applying Federated Learning between HIPs using PySyft from OpenMined, a library that wraps PyTorch and other data science tools for private model sharing and training. Both of them are active contributors to OpenMined, bringing these privacy features closer to the healthcare scene. Horia is working on the fourth track, searching for motivated students at the medical faculty in Bucharest who could help us gather data to further train our VQA model and give feedback on its answers. Built With amazon-web-services fastapi javascript python pytorch react-native Try it out github.com appdistribution.firebase.dev
Q&Aid
Q&Aid is the healthcare assistant that democratizes access to high-quality diagnoses. It comforts patients, unburdens doctors and generates trust by building a lifelike doctor-patient relationship.
['Horia Ion', 'Bogdan Cebere', 'Andrei Manolache']
['First Place']
['amazon-web-services', 'fastapi', 'javascript', 'python', 'pytorch', 'react-native']
2
10,159
https://devpost.com/software/groundwav
Landing Page of Website Type Description and Cuisine of Dish Result of Dishes matching specification from storage Upload Image for Search Result of Image Search Application Launch Screen Analysis page which shows recipes Details page Rasoee We are a group of friends studying together, and we all happen to be foodies. So this is a project quite close to our heart, or maybe stomach. To be honest, we didn't think a lot about the topic beforehand: after we registered we met online for a brainstorming session, and of course everyone liked the idea related to food, so we went with it. All of us had varying levels of familiarity with Machine Learning and PyTorch as well as knowledge in other fields, so we divided the tasks among ourselves accordingly. One guy made the website, one guy made the app, and the others looked for datasets and models that we could train to make this work. We wanted to be ambitious enough to make our own dataset despite Food-101 being available online, so we went ahead with that and scraped to the best of our abilities to make a workable dataset. Cleaning the dataset was quite a gruelling task, followed by the question of which model to use. We tested a few small-scale models, as we wanted this to be deployable on mobile devices, and in the end got a good enough accuracy with one to go ahead. This was followed by the usual nitpicking over the looks of the website and the application, along with the addition of auxiliary features like provision of the recipe and list of ingredients. Finally, we made something that we are proud of, and we hope it saves many food lovers out there from racking their brains identifying dishes, leaving more of their time for eating. Built With django python pytorch Try it out rasoee.herokuapp.com
Rasoee
Recognition of Dish from Image along with Recipe Provision
['Arijit Gupta', 'Anish Mulay', 'Dev Churiwala', 'Smitesh Patil', 'Ameya L']
['Second Place']
['django', 'python', 'pytorch']
3
10,159
https://devpost.com/software/fluence-5g2s9m
Inspiration

Transformers are getting bigger and better, and chasing the SoTA baseline seems never-ending. However, this progress raises concerns on two cardinal aspects:

How much compute do these models require? The largest transformer (GPT-3) requires 10K GPUs for performing few-shot learning and is 10x bigger than the previous largest transformer. These models require extensive compute (training time) and huge amounts of data to perform well.

What are they learning, and how? These large pre-trained LMs are brittle when the distribution shifts at inference time. They are biased (for instance, generative models such as GPT-2 often associate nurses with women). They are treated as black-box models in some respects: we don't know what impact fine-tuning has or, broadly, what they look at and how they carry out their reasoning.

What it does

The main goal is to provide standardized modules for compute-efficient and robust algorithms. Compute-efficient methods such as ZSL, meta-learning, adaptive methods and importance sampling can reduce the amount of data our models need while providing performance competitive with standard fine-tuning. Another line of research has dived into how we can make these models robust: debiasing methods aim to give models better generalization capabilities, while interpretability methods such as probing classifiers help us study their internal dynamics. Fluence provides a standardized API (similar to HF Transformers) to integrate these methods with existing workflows. Almost all the modules take arguments similar to any transformers code and require minimal changes, reducing overhead on the user's end. More details can be found in the video.

How I built it

This library uses PyTorch for all its functionality. It is part of a research project which will be published in the next few months.
Being an active user of and contributor to PyTorch and Transformers, I realized that there is a gap to be filled with regard to compute efficiency and robust methods. I looked at many different implementations to understand the issues (different ways of loading data, models expecting different inputs, custom training loops, no standard way to report results) and wanted to address these issues in this domain in a way similar to Transformers. You can simply feed in any AutoModel or nn.Module model, wrap it inside Fluence-provided methods and let the rest be taken care of. The current functionality covers what I felt was the essential starting point.

Challenges I ran into

It took me a lot of time to make some methods work: HEX, due to its instability with matrix inversion; MAML for transformers, which now uses higher ; and integrating these methods with the HF PyTorch Trainer . I had to read many different papers to understand the problems and how they could be better implemented. Some of the methods were implemented in TF and had to be ported to PT (which required me to read the TF docs).

Accomplishments that I'm proud of

I am proud of implementing modules that didn't previously have a proper implementation. Going forward, this library will include some of the best practices in research. In the process, I submitted several PRs to the Transformers repo, for instance adding PyTorch native AMP to the Trainer . I think Fluence's direction will be determined by the community's response. It has always been one of my research goals to create an ML library that makes it easier for researchers to try out their ideas and prototype them with minimal overhead.

What I learned

I learned a ton about NLI research, since this is the task on which I tested these methods. I learned a lot about the Transformers library, such as its standard APIs for instantiating modules and its training workflow. I liked it, and this is one of the reasons why Fluence integrates with that workflow.
I learned about code coverage in general and added it to this library.

What's next for Fluence

A lot of things will come to Fluence in the coming months. The meta-learning pipeline needs to give users more flexibility. I hope to add improved pruning methods (inspired by LTH). There are currently a few sampling methods, and I hope to make the data-order aspect easy to manage. I also want to add sparse methods that possibly integrate well with autograd . Improvements to the documentation and the addition of examples will be an ongoing effort.

Built With python pytorch transformers Try it out github.com
Fluence
A Pytorch based Deep Learning library for Low resource NLP research and robustness
['Prajjwal Bhargava']
['Second Place']
['python', 'pytorch', 'transformers']
4
10,159
https://devpost.com/software/carefree-learn
Inspiration

I've been working on tabular datasets for the past few years, and managed to build a rough AutoML system that beat the auto-sklearn solution to some extent. After I met PyTorch, I was deeply attracted by its simplicity and power, but I failed to find a satisfying solution for tabular datasets that was 'carefree' enough. So I decided to take advantage of my knowledge and build one myself, and here comes carefree-learn , which aims to provide out-of-the-box tools to train neural networks on tabular datasets with PyTorch.

What it does

Here are the documents that cover most of the following statements. carefree-learn provides high-level APIs for PyTorch to simplify training on tabular datasets. It features:

A scikit-learn-like interface with much more 'carefree' usage. In fact, carefree-learn provides an end-to-end pipeline on tabular datasets, AUTOMATICALLY dealing with:
Detection of redundant feature columns which can be excluded (all SAME, all DIFFERENT, etc).
Detection of feature column types (whether a feature column is a string column / numerical column / categorical column).
Imputation of missing values.
Encoding of string columns and categorical columns (Embedding or One Hot Encoding).
Pre-processing of numerical columns (Normalize, Min Max, etc.).
And much more...

The ability to either fit / predict directly from numpy arrays, or fit / predict indirectly from files located on your machine.

Easy-to-use saving and loading. By default, everything will be wrapped into a zip file!

Distributed Training, which means hyper-parameter tuning can be very efficient in carefree-learn .

Support for much convenient functionality in deep learning, including:
Early stopping.
Model persistence.
Learning rate schedulers.
And more...
Some 'translated' machine learning algorithms, including:
Trainable (Neural) Naive Bayes.
Trainable (Neural) Decision Tree.

Some brand new techniques which may boost vanilla Neural Network (NN) performance on tabular datasets, including:
TreeDNN with Dynamic Soft Pruning , which makes NNs less sensitive to hyper-parameters.
Deep Distribution Regression (DDR) , which is capable of modeling the entire conditional distribution with one single NN model.

Highly customizable for developers. We have already wrapped (almost) every single functionality / process into a single module (a Python class), and they can be replaced or enhanced either directly in the source code or from local code with the help of some pre-defined registration functions provided by carefree-learn .

Full utilization of the WIP ecosystem cf* , such as:
carefree-toolkit : provides a lot of utility classes & functions which are 'stand-alone' and can be leveraged in your own projects.
carefree-data : a lightweight tool to read -> convert -> process ANY tabular dataset. It also utilizes cython to accelerate critical procedures.

To try carefree-learn , you can install it with pip install carefree-learn .

How I built it

I structured the carefree-learn backend in three modules: Model , Pipeline and Wrapper :

Model : In carefree-learn , a Model should implement the core algorithms. It assumes that the input data in the training process is already 'batched, processed, nice and clean', but not yet 'encoded'. Fortunately, carefree-learn pre-defines some useful methods which can encode categorical columns easily. It does not care about how to train a model; it only focuses on how to make predictions from the input, and how to calculate losses with them.
Pipeline : In carefree-learn , a Pipeline should implement the high-level parts, as listed below:
It assumes that the input data is already 'processed, nice and clean', but it should take care of getting the input data into batches, because in real applications batching is essential for performance.
It should take care of the training loop, which includes updating parameters with an optimizer, reporting metrics, checkpointing, early stopping, logging, etc.

Wrapper : In carefree-learn , a Wrapper should implement the preparation and API parts. It should not make any assumptions about the input data: it could already be 'nice and clean', but it could also be 'dirty and messy'. Therefore, it needs to transform the original data into 'nice and clean' data and then feed it to Pipeline . The data transformations include:
Imputation of missing values.
Transforming string columns into categorical columns.
Processing numerical columns.
Processing the label column (if needed).
It should also implement some algorithm-agnostic methods (e.g. predict , save , load , etc.).

It is worth mentioning that carefree-learn uses registrations to manage the code structure.

Challenges I ran into

Most of the challenges I ran into came from building a system. I needed to make sure that users can use it easily, and that developers can extend it without spending too much effort. This took me days to design & refactor. The second challenge was the data processing module ( carefree-data ). Since the target of carefree-learn is to fit (almost) any tabular dataset with high performance, I needed to implement a whole bunch of data processing methods in carefree-data , in an automatic manner. This again took me days to design & optimize. Another challenge was the multiprocessing part. Using CUDA with multiprocessing is not easy, especially when I needed to do some fine-grained logging within the multiprocessing processes. This aaagain took me days to experiment & resolve.
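The registrations mentioned above (pre-defined registration functions used to replace or enhance modules) can be pictured as a plain decorator-based registry, a common Python pattern; the names below are hypothetical and not carefree-learn's actual API.

```python
# Hypothetical sketch of a decorator-based registry, the general pattern
# behind "registrations" for swapping in custom components by name.
processor_registry = {}

def register_processor(name):
    def _register(cls):
        processor_registry[name] = cls
        return cls
    return _register

@register_processor("min_max")
class MinMaxProcessor:
    def process(self, column):
        # Scale a numerical column into [0, 1].
        lo, hi = min(column), max(column)
        return [(v - lo) / (hi - lo) for v in column]

# A pipeline can then look up components by their configured name.
processor = processor_registry["min_max"]()
scaled = processor.process([2.0, 4.0, 6.0])  # [0.0, 0.5, 1.0]
```

The appeal of this pattern is that user code only needs to register a class under a name; the framework's pipeline can instantiate it from configuration without any source changes.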
Accomplishments that I'm proud of

I've made training NNs on tabular datasets really easy now:

import cflearn

m = cflearn.make()

# fit np.ndarray
m.fit(x_np, y_np, x_cv_np, y_cv_np)
m.predict(x_test_np)

# fit python lists
m.fit(x_list, y_list, x_cv_list, y_cv_list)
m.predict(x_test_list)

# fit files
m.fit("x.txt", x_cv="x_cv.txt")
m.predict("x_test.txt")

Although the demand for working with tabular datasets is not that large, I'll be very happy if carefree-learn can help someone who needs it. I'm also proud that I've written some documents for carefree-learn .

What I learned
How to build an easy-to-use (Deep Learning?) system :)
How to write documents :D
How to make videos XD

What's next for carefree-learn
The next step is to run some benchmark tests and optimize carefree-learn 's performance. I'm pretty sure it can reach a satisfying level with some tuned default settings. And, as always, bug fixing XD

Built With numpy python pytorch Try it out github.com
carefree-learn
A minimal AutoML solution for tabular datasets based on PyTorch
['宇健 何']
['Second Place']
['numpy', 'python', 'pytorch']
5
10,159
https://devpost.com/software/rexana-the-robot
Hardware Dashboard UI Real-time Object Detection and 3D pose estimations from robot to Cloud AWS server to infer then results returned back to robot using sockets Base Electronics Rexana / Dashboard

Inspiration
1) The idea of having a Voice Assistant like Alexa or Siri that could extend into the physical world for tasks around the house.
2) Having an interactive robot for fun and immersive language education; she understands and can reply in Spanish.
3) The idea of having a personal robot that in the future has the ability to be a useful caregiver / physical assistant.

What it does
Rexana is a voice-activated personal assistant robot that currently does the following tasks: autonomous navigation around the house, using distance and location data (landmarks) from the 2D lidar, her cardinal/compass bearing, the wheel encoders that track each wheel's distance, and object detection. Rexana knows where she is around the house and can be directed around it with voice commands. Why? Paranoid you left the oven on after leaving the house? Rexana can be accessed on my phone via a browser to give a text or voice command to "go to the oven"; I can then view the oven via the webcam. Forgot to water the plants? "Rexana, water the pot plant in the lounge room". Feeling lazy? "Rexana, bring me the Pringles". Rexana has custom "hands" that can be switched for purpose-fit tasks: watering can, magnets, grippers. More demos here: https://rexanapaperai.wordpress.com/demos/

As Rexana navigates around the house she takes note of detected objects (using Detectron2 + some custom object detection). I divided her viewpoint into left, straight and right, so I can ask her about objects she can see. She stores data about each object (compass bearing, X,Y coordinates based on the wheel encoders, and distance based on the lidar), which allows her to recall objects or go to recently seen objects via voice or text command.
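Dividing the viewpoint into left, straight and right can be sketched by bucketing each detection's bounding-box centre along the image width; this is a simplified guess at the idea, not the code running on Rexana, and the frame width and thresholds are assumptions.

```python
def viewpoint_zone(bbox, image_width=640):
    """Bucket a detection into 'left' / 'straight' / 'right' from the
    horizontal centre of its (x1, y1, x2, y2) bounding box.
    Illustrative sketch only; the thirds-based thresholds are assumptions."""
    x1, _, x2, _ = bbox
    cx = (x1 + x2) / 2
    if cx < image_width / 3:
        return "left"
    if cx < 2 * image_width / 3:
        return "straight"
    return "right"

# A box spanning x=500..620 on a 640-px-wide frame sits on the right.
zone = viewpoint_zone((500, 100, 620, 300))  # "right"
```

Pairing each zone with the compass bearing and lidar distance recorded at detection time is enough to answer a later "what's on the left?" query.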
Using the above data I can also practice Spanish in a fun, immersive way with her by asking her what she can see ("¿Qué puedes ver aquí?") and hearing "A la izquierda hay libros" and "a la derecha la televisión". Programmatically training her hand movements is very tedious; using 3D human body pose estimation I can train her much faster and more intuitively to do tasks, and she can copy my actions: waving, gestures or picking up objects. She also has a retro-inspired dashboard for monitoring, training and manual control. How I built it Build Blog: https://rexanapaperai.wordpress.com/ Rexana is a physical robot made from scratch using 3D printed parts, several plastic plant pots, 8 servo motors, a camera, wheel encoders, 2 DC motors, a 2D LIDAR, a magnetometer and a Raspberry Pi onboard computer. I used PyTorch Detectron2, 3D human pose estimation and experimented with PyTorch Geometric. Data is captured and formatted on the onboard Raspberry Pi, then sent to an AWS server for real-time inference over web sockets, which returns the pose/detection/inference results. Challenges I ran into Her arm dimensions and joints are very different from a human's, so pose estimation is not very accurate (version 2 will be bigger and more closely resemble human joint positions and dimensions). Ran out of spray paint, so she has the "I'm going to take over the world" evil A.I. look; she will be getting a fresh paint job (white and pink) as well as stronger arms (one of the arms now has some stripped gears, making it jerky and unpredictable). Accomplishments that I'm proud of Working proof of concept! What I learned PyTorch is awesome; the tutorials and libraries are a huge time-saver, and using existing building blocks I was able to focus more energy on the unique parts of my project. I burnt out 3 servo motors trying to get the arms working well, so I learned a lot about servo motor torque and how to power them.
Autonomous, human-sized and genuinely useful robots are achievable. Although some of the functionality is basic or rough, I was able to complete a proof of concept, and the lessons learned and existing groundwork will make the next version significantly better. What's next for Rexana the Robot - PyTorch V2: bigger size, human-dimension arms for better pose estimation; create docs, improve code and open-source it. Self-annotation and improved automatic training by showing her objects and giving names and locations. More info and demos here: https://rexanapaperai.wordpress.com/ Built With python pytorch raspberry-pi Try it out bitbucket.org rexanapaperai.wordpress.com
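The left/straight/right object memory described above can be sketched in plain Python. This is an illustrative toy, not Rexana's actual code: the class name, field names and the 640-pixel frame width are all assumptions.

```python
class ObjectMemory:
    """Toy sketch of the object memory described: each detection is stored
    with the robot's compass bearing, X/Y position (from wheel encoders)
    and lidar distance, and binned into a left/straight/right viewpoint
    sector by its horizontal position in the camera frame."""

    def __init__(self, frame_width=640):
        self.frame_width = frame_width
        self.objects = []  # most recent detections last

    def sector(self, cx):
        # Split the frame into thirds: left / straight / right.
        third = self.frame_width / 3
        if cx < third:
            return "left"
        if cx < 2 * third:
            return "straight"
        return "right"

    def record(self, label, cx, bearing_deg, x, y, distance_m):
        self.objects.append({
            "label": label,
            "sector": self.sector(cx),
            "bearing": bearing_deg,
            "pos": (x, y),
            "distance": distance_m,
        })

    def seen(self, sector):
        # Recall labels last seen in a viewpoint sector, newest first.
        return [o["label"] for o in reversed(self.objects) if o["sector"] == sector]

mem = ObjectMemory()
mem.record("books", cx=100, bearing_deg=90, x=1.2, y=0.4, distance_m=2.0)
mem.record("television", cx=550, bearing_deg=92, x=1.5, y=-0.8, distance_m=3.1)
print(mem.seen("left"))   # ['books']
print(mem.seen("right"))  # ['television']
```

A recall query such as "what is on the left?" then reduces to `mem.seen("left")`, with the stored bearing and X/Y data available for "go to" commands.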
Rexana the Robot
More than a voice assistant, this project is the foundation for a physical household robot trained to autonomously complete basic tasks
['Dan O']
['Third Place']
['python', 'pytorch', 'raspberry-pi']
6
10,159
https://devpost.com/software/torchexpo
Python Library Docs - Landing Python Library Docs - Image Segmentation Python Library Docs - Image Classification Python Library Docs - Sentiment Analysis TorchExpo Website - Model Details Page TorchExpo Website - List All Models Page TorchExpo Website - List All Publishers Page TorchExpo Video Poster Android Application - Image Segmentation Output Android Application - List All Tasks Android Application - Image Segmentation Input Android Application - Sentiment Analysis Android Application - Image Classification Note: This project applies to both the Web/Mobile Applications and PyTorch Developer Tools categories. Please judge it accordingly ☺️ Inspiration The main motivation behind TorchExpo was to simplify research-to-production deployment for mobile devices. It is inspired by PyTorch Forum's Mobile Category Questions and TensorFlow Hub . What it does TorchExpo is not just a web and mobile application; it comes with a Python library as well. The Python library is available via pip, and it helps you convert a SoTA model to TorchScript and ONNX format in just one line Read the Docs pip install torchexpo The website is for users who don't want to convert and just want to explore all solutions. They can simply download already-converted models and start using them. https://torchexpo.now.sh The mobile application supports all the use cases where users can download and try all the SoTA models with just a few clicks and no expertise. APK Download Link It currently supports Vision (Image Classification, Image Segmentation) and NLP (Sentiment Analysis) models. How to convert a Vision model? from torchexpo.vision import image_segmentation model = image_segmentation.fcn_resnet50() model.extract_torchscript() model.extract_onnx() How to convert a NLP model? from torchexpo.nlp import sentiment_analysis model = sentiment_analysis.electra_imdb() model.extract_torchscript() model.extract_onnx() Note : Model variants are currently not supported on the mobile and web application (i.e.
when you see ResNet, it's only ResNet18 on mobile and web, as support for variants is on the way ) How I built it After my work on a proof of concept on PyTorch Android, I was exploring ways to try out more models; sadly I couldn't find any go-to solution. I came across the TF Hub website and realized the PyTorch ecosystem really misses a platform like this! I jumped on to carving out the website (backend with frontend) and the mobile application. I later carved a library from all my learnings and thought: instead of me hosting, how easy would it be if people could convert on their own? The Python library is built on top of TorchVision and HuggingFace's Transformers. It gives APIs and modules for easy extension ( Source Code ) The website is built using React and Javascript and hosted on Vercel ( Source Code ) The backend which serves the REST APIs is built using Node.js and MongoDB and hosted on Heroku ( Source Code ) The mobile application is built using Kotlin, Android Architecture Components, PyTorch Android and a lot of custom classes to support models ( Source Code ) Challenges I ran into One of the main challenges was to support SoTA models on Android, which required a lot of tensor-like operations to be written for Android. Another interesting challenge was to design the library to be minimal, intuitive and still easy to use. Working on NLP tasks in the mobile application along with HuggingFace gave me a lot of tough days. Apart from that, working on CI/CD for smooth deployments of both the mobile application and the website, on CI for the Python package, and on the documentation was really challenging. Accomplishments that I'm proud of Releasing alpha versions of the website, mobile app and package all by myself, on time. The challenges I overcame while building the mobile application's tough operations and the Python package.
But also proud that I will be supporting this project full-time after this hackathon to improve it, open it to the community for contributions and work with some core maintainers (I am looking for them!) What I learned I learned a lot about mobile deployment of machine learning models during this hackathon and how tricky it is on low-resource devices, which helped me carve this solution. How tricky it is to launch a product was my constant feedback to myself. How to make a minimal yet pleasing (I think so) presentation video, and that recording needs to be done late at night in silence :) I learned failures are part of this new remote hackathon format (this being my first one), where you find yourself working on different things every new week. One has to tame their wild ideas and, with proper care and training, turn them into magnificent beasts. What's next for TorchExpo Working on making the repository and the ecosystem around it stable (Looking for core developers to contribute to this project) Making SoTA model variants available on the website as well as mobile Support for the much-needed Caffe2 Mobile format (e.g. Advance Tutorial) for extraction, along with Quantization (e.g. Dynamic and Static) Begin with iOS development, followed by Google Play Store and iOS App Store releases Opening up to the community for more official/research models Built With android bash ci dropbox heroku jupyter kotlin node.js pypi python pytorch pytorch-android react torchvision transformers vercel Try it out torchexpo.now.sh torchexpo.rtfd.io github.com
TorchExpo
Collection of models and extensions for mobile deployment in PyTorch
['Omkar Prabhu']
['Third Place']
['android', 'bash', 'ci', 'dropbox', 'heroku', 'jupyter', 'kotlin', 'node.js', 'pypi', 'python', 'pytorch', 'pytorch-android', 'react', 'torchvision', 'transformers', 'vercel']
7
10,159
https://devpost.com/software/realrate-explainable-ai-for-company-ratings
Explaining wages for young American workers Causing: CAUSal INterpretation using Graphs Causing is a multivariate graphical analysis tool helping you interpret the causal effects of a given equation system. We want to explain AI decisions, ensuring transparency and fair treatment. Causing is explainable AI (XAI): we make black-box neural networks transparent. Input: You simply have to put in a dataset and provide an equation system in the form of a Python function. The endogenous variables on the left-hand side are assumed to be caused by the variables on the right-hand side of the equations. Thus, you provide the causal structure in the form of a directed acyclic graph (DAG). Output: As an output you will get a colored graph of quantified effects acting between the model variables. You are able to immediately interpret mediation chains for every individual observation - even for highly complex nonlinear systems. Further, the method enables model validation. The effects are estimated using a structural neural network. You can check whether your assumed model fits the data. Testing for significance of each individual effect guides you in how to modify and further develop the model. The method can be applied to highly latent models with many of the modeled endogenous variables being unobserved. The Causing approach is quite flexible. The most severe restriction certainly is that you need to specify the causal model / causal ordering. If you know the causal ordering but not the specific equations, you can let the Causing model estimate a linear relationship. Just plug in sensible starting values. Further, exogenous variables are assumed to be observed and deterministic. Endogenous variables instead may be manifest or latent and they might have correlated error terms. Error terms are not modeled explicitly; they are automatically dealt with in the regression / backpropagation estimation.
A Real World Example To dig a bit deeper, here we have a real-world example from the social sciences. We analyze how the wage earned by young American workers is determined by their educational attainment, family characteristics, and test scores. https://github.com/HolgerBartel/Causing/blob/master/education.md Scientific Abstract We propose simple linear algebra formulas for the causal analysis of equation systems. The effect of one variable on another is the total derivative. We extend them to endogenous system variables. These total effects are identical to the effects used in graph theory and its do-calculus. Further, we define mediation effects, decomposing the total effect of one variable on a final variable of interest over all its directly caused variables. This allows for an easy but in-depth causal and mediation analysis. To estimate the given theoretical model we define a structural neural network (SNN). The network's nodes are represented by the model variables and its edge weights are given by the direct effects. Identification could be given by zero restrictions on direct effects implied by the equation model provided. Otherwise, identification is automatically achieved via ridge regression / weight decay. We choose the regularization parameter minimizing the out-of-sample sum of squared errors, subject to at least yielding a well-conditioned positive-definite Hessian, evaluated at the estimated direct effects. Unlike classical deep neural networks, we follow a sparse and 'small data' approach. Estimation of structural direct effects is done using PyTorch and automatic differentiation, tailor-made for fast backpropagation. We make use of our closed-form effect formulas in order to compute mediation effects. The gradient and Hessian are also given in analytic form. How I built it Causing is free software written in Python 3 . It makes use of PyTorch for automatic computation of total derivatives and SymPy for partial algebraic derivatives.
Graphs are generated using Graphviz and PDF output is done by Reportlab . We use PyTorch to perform model estimation. Autograd is used for automatic differentiation of the expert model, giving the individual effects of key figures on the financial strength. I constructed a Structural Neural Network class (SNN) in order to represent my special model structure. Graphviz is used to plot easily interpretable dependency graphs. Use of PyTorch Causing uses PyTorch, Autograd, SymPy and Graphviz to explain causality and ensure fair treatment. PyTorch was used for three tasks: Using autograd to compute the effects, these being simply the total derivatives of the model. Defining our own NN class: a Structural Neural Network, restricting many weights to zero, enabling identification and interpretation of single neurons ("explainable AI"). Using optimization algorithms like Adam or RProp for estimation of real-world causal effects. Challenges I ran into Autograd cannot be used for cyclic models yet, so I restricted myself to directed acyclic graphs (DAGs). Masking via PyTorch, i.e. restricting certain coefficients to zero, was not flexible enough for my purposes. Accomplishments that I'm proud of I am proud of having made even quite complex models easily interpretable. This is the basis for fair treatment by AI. What I learned PyTorch was easy to start with, but I had to build my own customized neural network. I was happy to learn that PyTorch is tailor-made for those individual customizations. What's next Scalability for big data. Use masking of model weights to speed up the model Built With autograd graphviz python pytorch sympy Try it out github.com
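The core idea above, that the effect of one variable on another is the total derivative, has a compact form for linear DAGs: the total effect is the sum over all directed paths of the products of the direct (edge) effects. A minimal plain-Python sketch of that chain rule follows; Causing itself computes these via autograd and its closed-form algebra, and the function and variable names here are purely illustrative.

```python
def total_effect(direct, source, target):
    """Total effect of `source` on `target` in a linear DAG: the sum over
    all directed paths of the product of direct effects, i.e. the total
    derivative d(target)/d(source) obtained via the chain rule.
    `direct` maps (cause, effect) pairs to direct-effect coefficients."""
    if source == target:
        return 1.0
    # Sum over every edge entering `target`, recursing on its parents.
    return sum(coeff * total_effect(direct, source, parent)
               for (parent, child), coeff in direct.items()
               if child == target)

# Toy model: X -> M (2.0), M -> Y (3.0), plus a direct X -> Y effect (1.0).
direct = {("X", "M"): 2.0, ("M", "Y"): 3.0, ("X", "Y"): 1.0}
print(total_effect(direct, "X", "Y"))  # 1.0 + 2.0 * 3.0 = 7.0
```

The recursion terminates because the graph is acyclic, mirroring the DAG restriction mentioned in the challenges section.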
Causing: CAUSal INterpretation using Graphs
Causing is a tool for Explainable AI (XAI). We explain causality and ensure fair treatment. It is developed by RealRate, an AI rating agency.
['Dr. Holger Bartel']
['Third Place']
['autograd', 'graphviz', 'python', 'pytorch', 'sympy']
8
10,159
https://devpost.com/software/leggodutch
Logo Index Page 1 2 Receipt used 3 4 5 Inspiration Everyone has probably experienced the pain of splitting the bill after a great dinner with your friends at least once in your life. No one really wants to pick up the bill because it is always so hard to go after people to remind them to pay for their share of the food. On top of that, no one is really in the mood to do mathematical calculations and send Venmo requests after a full meal. LeggoDutch recognizes this problem and aims to assist users in allocating bills to the right person with our receipt recognition model. By splitting the receipt up into different line items, the user can assign them to different people. Once the food is assigned, LeggoDutch will calculate the meal costs and send reminder texts to the different parties to return the amount of money to the user. What it does & How we built it After the user submits a photo of the receipt, LeggoDutch's model will preprocess the image to de-skew it and soften the lighting for better text processing. It then uses text detection and recognition to pick up fragments from the image before converting them to text. The text detection module uses ClovaAI's CRAFT PyTorch engine in conjunction with Tesseract 4.0 to parse the image into text characters. Afterward, another module comes into play to analyze the text and break it down into food items and their corresponding prices before displaying them to the user. The front-end interface will handle the user input and organize the list of friends and mobile numbers. Following this, it will allow the user to assign food items to the correct person by clicking on the right name in the dropdown menus. LeggoDutch uses the Twilio API to disseminate text reminders to the other people. Because we are currently using the trial version of the Twilio API, we are only able to send text messages to registered numbers.
Challenges we ran into ClovaAI's CRAFT PyTorch was not enough to give us accurate predictions on the prices of the different items. It sometimes gets confused by the period sign between the dollar and cents figures. Hence, we had to put in more datasets to further train the model. The quality of the text recognition is also heavily dependent on the orientation of the image as well as the contrast in lighting between the receipt and its background. We struggled to capture the four corners of the receipt under such circumstances and can only transform the image to a limited extent. However, with the combined help from our different machine learning components (image processing, image segmentation, and text recognition), we are able to cover some of the flaws of our image processing technique. Because the Twilio API's trial version does not allow us to send text messages to numbers that are not registered on our account, we are unable to roll out our product widely until we secure funding for a proper Twilio subscription. Accomplishments that we're proud of We are happy to be able to improve on an existing trained model from ClovaAI by developing our own training process. Even though we were not able to implement a fail-proof image cropping and transformation technique, we managed to cover the flaws of the cropping technique with the other components' capabilities. We are also happy to have been able to deploy a machine learning model along with a Flask API online. It's the first time we have completed an end-to-end program. What we learned We learned how to design our dataset to better train our model. At the same time, we learned other machine learning techniques for image transformations and their relative advantages and disadvantages over each other. It is also the first time that we have attempted to develop and deploy a machine learning model. We learned a lot about containerization and how to work with environment variables.
What's next for LeggoDutch We plan to implement the Venmo API for direct requests and payments, as well as a user account system to connect friends and store their information. This would help to facilitate future transactions and improve the overall user experience. At the same time, we hope to train our PyTorch model to respond to receipts from different countries so that LeggoDutch can be used all over the world. Team Shi Jie Samuel Tan - stan1@haverford.edu Iryna Khovryak - ikhovryak@haverford.edu Minh Quan Phan - qmp23@drexel.edu Built With flask pytorch torchvision twilio Try it out 3.236.50.182 github.com
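The step of breaking recognized text into food items and prices might look roughly like the following plain-Python sketch. The regex and the skip-list of summary keywords are assumptions for illustration, not LeggoDutch's actual parser.

```python
import re

# Toy sketch of the post-OCR step described above: split recognized text
# lines into (item, price) pairs and drop summary lines such as totals.
LINE = re.compile(r"^(?P<item>.+?)\s+\$?(?P<price>\d+\.\d{2})\s*$")
SKIP = {"total", "subtotal", "tax", "tip"}

def parse_receipt(lines):
    items = []
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # OCR noise or lines without a price
        name = m.group("item").strip()
        if name.lower() in SKIP:
            continue  # summary line, not an assignable food item
        items.append((name, float(m.group("price"))))
    return items

ocr_lines = ["Burger 8.99", "Fries  $3.50", "SUBTOTAL 12.49", "thank you!"]
print(parse_receipt(ocr_lines))  # [('Burger', 8.99), ('Fries', 3.5)]
```

Each returned pair then becomes one line item the user can assign to a friend in the dropdown menus.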
LeggoDutch
All you need is an image of your receipt and LeggoDutch will help you break it down into items that can be charged to different people!
['Shi Jie Samuel Tan', 'Iryna Khovryak', 'Minh Quan Phan']
[]
['flask', 'pytorch', 'torchvision', 'twilio']
9
10,159
https://devpost.com/software/fairtorch
docs created with Sphinx Monitor during training Inspiration In light of the recent events which highlight the systemic biases deeply ingrained in our society, we were inspired to create a product that could combat this issue in machine learning models. Machine learning models that may have vast consequences and further feed a cycle of social bias are everywhere, from those used in healthcare systems, policing, hiring, and more. What it does Our API contains many metrics which may be used to evaluate the fairness of the predictions of a model during and after training. We include a monitor class which can be instantiated during training to plot fairness metrics in real-time. Our API provides an adversarial wrapper class which can help to reduce bias in the model itself (as opposed to evaluating the results of the model). How we built it Our metrics are standard fairness metrics which are made compatible with PyTorch Tensors and Numpy arrays. The monitor is built on continually updating a Matplotlib Pyplot figure with the performance of a PyTorch model. The adversary wrapper class augments a pre-trained PyTorch model with a feed-forward adversarial network. We created auto-built docs using Sphinx. Challenges we ran into Of course, collaborating virtually is a struggle with any team. Our team decided to use tools such as Asana, Slack, and Zoom to facilitate our collaboration. Becoming familiar with such platforms will be useful to us in future group projects and company settings. Accomplishments that we're proud of Learning to integrate Sphinx with our API was rewarding, as it is such a useful and beautiful documentation technique. It was also rewarding to see our tools integrated into an actual classifier which predicts gender from images of faces of various races. What we learned Two of our team members, Nadine and Joyce, did not have experience with machine learning before, so they were able to gain a basic understanding of the entire machine learning pipeline. 
Michelle had never used PyTorch before, so this was new for her. Max learned about Sphinx. What's next for FairTorch We would like to integrate more sophisticated adversarial wrapper classes, specifically for image classification tasks, as our current implementation is a basic model that is intended for general use. We would also like to implement more visualizations. Built With matplotlib numpy python pytorch sphinx Try it out github.com fairtorch.github.io
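As an illustration of the kind of fairness metric such an API exposes, here is a plain-Python sketch of demographic parity difference, the gap in positive-prediction rates between protected groups. The function name and signature are assumptions; FairTorch's own metrics operate on PyTorch Tensors and NumPy arrays.

```python
def demographic_parity_difference(preds, groups):
    """A standard fairness metric: the gap between protected groups in the
    rate of positive predictions. Plain-Python sketch for illustration.
    `preds` are 0/1 predictions, `groups` the protected attribute values."""
    rates = {}
    for p, g in zip(preds, groups):
        n_pos, n = rates.get(g, (0, 0))
        rates[g] = (n_pos + p, n + 1)
    positive_rates = [n_pos / n for n_pos, n in rates.values()]
    return max(positive_rates) - min(positive_rates)

preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A monitor like the one described can evaluate such a metric every few training steps and plot the resulting curve alongside the loss.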
FairTorch
A Python library for fairness in PyTorch models.
['Max Hirsch', 'Michelle Xu', 'Nadine Meister', 'Joyce Zhang']
[]
['matplotlib', 'numpy', 'python', 'pytorch', 'sphinx']
10
10,159
https://devpost.com/software/zebra-ai-9f8zh1
Zebra.AI Inspiration Recent abuse of AI for policing. Solution Zebra.AI aims to use stratified two-sample t-tests and autoencoders to generate unique testing data from race-segmented data points. We use these to run racial bias detection in classification models. Vision We aim to expand Zebra.AI to other types of models and to develop an educational interpretability dashboard to help ML engineers understand the flaws in their models. Try it out zerbraai.herokuapp.com github.com
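A two-sample t-test of model scores across group strata is the statistical core described above. Here is a minimal plain-Python sketch of Welch's t statistic, illustrative only: a real pipeline would also compute degrees of freedom and a p-value (e.g. via scipy.stats.ttest_ind with equal_var=False).

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic, the kind of test used to compare a
    model's scores across race-segmented groups. Pure-Python sketch."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # unbiased sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(var(sample_a) / na + var(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Identical score distributions give t = 0 (no detectable group difference).
print(welch_t([0.7, 0.8, 0.9], [0.7, 0.8, 0.9]))  # 0.0
# A large positive t would flag the first group as systematically favored.
print(round(welch_t([0.9, 0.8, 0.85], [0.5, 0.6, 0.55]), 2))
```

Running the test per stratum (the "stratified" part) controls for confounders such as age or income before attributing a score gap to race.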
Zebra.AI
A bias detection engine
['Anas Awadalla', 'Goral Pahuja', 'Ramya Bhaskara', 'Abraham Fraifeld']
[]
[]
11
10,159
https://devpost.com/software/covidash-jfrds9
Overview of Project CoviDash is an informative and responsive dashboard for information on COVID cases in Ontario. Using a heatmap and multiple graphs, data can be found for COVID cases by area, location, and date across the province. Also, using PyTorch, predicted case numbers can be found for the seven days past the last available date in the dataset used. The slider under the heatmap can be used to view case data from past, present, and future dates, and any data point can be hovered over to see the approximate cases in that given area. The time series chart displays the data over time for each location, giving an idea of what the case curve looks like and where it is predicted to be over the next week. Finally, the bar graph displays the number of cases in each location for the date selected on the heatmap, along with its average rate of change. Built With bootstrap flask heatmap.js heatmap.js-back-end:-matplotlib matplotlib numpy pandas plotly.js pytorch scikit-learn Try it out github.com
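As a simplified stand-in for the seven-day forecast (the dashboard itself trains a PyTorch model for this), the extrapolation step can be illustrated with an ordinary least-squares line fit over recent daily counts; all names and the toy data below are assumptions.

```python
def forecast_week(daily_cases):
    """Simplified stand-in for the dashboard's 7-day forecast: fit a line
    to the observed daily counts by ordinary least squares and extrapolate
    seven days past the last available date. (The actual project trains a
    PyTorch model; this sketch only illustrates the extrapolation step.)"""
    n = len(daily_cases)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(daily_cases) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_cases))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + d) for d in range(7)]

history = [10, 12, 14, 16, 18]  # perfectly linear toy series, +2 per day
print(forecast_week(history))   # [20.0, 22.0, 24.0, 26.0, 28.0, 30.0, 32.0]
```

The seven returned values are what the slider exposes for "future" dates past the end of the dataset.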
CoviDash
CoviDash is an informative dashboard for information on COVID cases in Ontario. Using a heatmap and multiple graphs, data can be found for COVID cases by major location and date across the province.
['Connor Czarnuch', 'Giacomo Loparco', 'Cameron Dufault', 'Bilal Jaffry', 'Jacob Gordon']
[]
['bootstrap', 'flask', 'heatmap.js', 'heatmap.js-back-end:-matplotlib', 'matplotlib', 'numpy', 'pandas', 'plotly.js', 'pytorch', 'scikit-learn']
12
10,159
https://devpost.com/software/torchologist
COVID Symptoms of COVID Inspiration COVID-19 is infecting and killing a lot of people, and the infection curve is not flattening even after half a year, so we need to prepare to fight it for a long time. Hospitals will not be able to focus on this single disease; instead, COVID-19 patients will be mixed with patients with all kinds of diseases. So it would be helpful to have a tool that distinguishes COVID-19 from other diseases. Given that one important step for COVID-19 diagnosis is an X-ray scan, we developed a model that distinguishes different diseases including COVID-19, pneumonia, breast cancer, and brain tumors (more can be integrated in the future). We believe this tool can help doctors diagnose COVID-19 and other diseases from X-ray scans in seconds. What it does We developed a model that distinguishes different diseases including COVID-19, pneumonia, breast cancer, and brain tumors from X-ray images with deep learning techniques. How we built it Using the AWS Deep Learning AMI service, we built the model using a concatenation of four datasets, each of which is multiclass. From this dataset, we created a train and validation set that was used to train and simultaneously validate the model. We loaded a pre-trained (VGG16) model with our custom parameters and hyperparameters to train the model. Next, the model was deployed using Amazon SageMaker. From SageMaker, we got our endpoint, which was used by our frontend. Finally, the web application was deployed using AWS Amplify. Challenges I ran into Model Training: Training our model on a concatenated dataset, each part with multiple classes, required lots and lots of experiments. It was challenging to decide on a specific architecture as well as hyperparameters for tuning the model to achieve a fair loss and accuracy. Data Source: It's not easy to find enough data for training. What makes it worse is that the hackathon requires us to use some data from AWS Data Exchange, in which most of the datasets are not free. The free datasets are usually too small, and some of them took a long time to get permission for. App Responsiveness: We also faced some challenges in making the app responsive to multiple screens. Accomplishments that I'm proud of Achievements we are proud of include running many experiments on a concatenated dataset instead of the popular single dataset. Based on our experiments and tuning, seeing that we were able to successfully train and improve the model accuracy was a great accomplishment. We are proud that we were able to develop and deploy the entire web application in the given time frame. As of now, there is no application that can do what our application can, so it gives us pleasure to share and demonstrate it with others. What I learned What we learned, and will always remember and remain happy about, was how to work with some AWS services - AWS SageMaker and Amplify, to be specific. For us, this could be regarded as our first deployed/production model. We were able to turn something from outside the classroom or general learning into a real-world solution. Also, as a team, we learned to work very well with each other considering the differences in cultural and technical background. It posed small challenges, but overall it was a smooth run. What's next for A.I.
Radiologist Well, our model is multi-class, so we can add more classes and make it a full-fledged AI Radiologist covering many other organs. With a better dataset, the accuracy will increase and our model will give better results. In the near future, we will be able to predict from any X-ray image for medical use, including all kinds of cancers and even fractures. The accuracy is not too high for now, but it's purely restricted by the training data size; as long as enough data is available, the model will be more accurate than any human doctor! Built With amazon-web-services pytorch sagemaker
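Concatenating several multiclass datasets, as described above, requires remapping each dataset's local class indices into one shared label space before training a single classifier head. A plain-Python sketch of that bookkeeping follows; the dataset and class names are illustrative, not the project's actual label set.

```python
def build_unified_labels(datasets):
    """Sketch of the bookkeeping needed when concatenating several
    multiclass datasets: offset each dataset's local class indices into
    one shared label space so a single classifier head (e.g. the final
    layer of a fine-tuned VGG16) can be trained over all of them.
    `datasets` maps a dataset name to its ordered list of class names."""
    unified = {}      # (dataset, local_index) -> global class index
    class_names = []  # global class index -> readable name
    for ds_name, classes in datasets.items():
        for local_idx, cls in enumerate(classes):
            unified[(ds_name, local_idx)] = len(class_names)
            class_names.append(f"{ds_name}/{cls}")
    return unified, class_names

datasets = {
    "chest_xray": ["normal", "covid19", "pneumonia"],
    "brain_mri": ["no_tumor", "tumor"],
}
mapping, names = build_unified_labels(datasets)
print(mapping[("brain_mri", 1)])  # 4
print(names[1])                   # chest_xray/covid19
```

With this map, every sample's original (dataset, label) pair is rewritten to a global index before the combined train/validation split is made, so the model's output layer has one unit per unified class.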
A.I. Radiologist
Radiology at your Home.
['Charles Yusuf', 'Nihal Nihalani', 'Matthew K', 'Chao Zhang']
[]
['amazon-web-services', 'pytorch', 'sagemaker']
13
10,159
https://devpost.com/software/pyvinci
Login page. The main domain will take the user to this page. Registration page. A new user needs to register, and then log in. The user can assign a project name after clicking "Create New Project". The user can log in to their account and see the list of projects, or create a new one. After uploading images in the project view, the user can click "Begin Modeling" to run the panoptic segmentation model on each image. The PyTorch model will generate the labels for each image and display them below each image. The user can click "Home" to create a new project. Full application concept with both use cases released. The user will be able to select the generated labels to control creating a new image. Inspiration Art is always present as we experience the world. We find it everywhere through life; nature, sports and sculptures. It is part of what defines the human race. As we are overwhelmed with information, sometimes we do not have time to appreciate the art in the experiences we have. We believe our experiences should be art. This is why Pyvinci was created: to make life's trips and experiences a beautiful canvas. What it does Pyvinci is a hashtag generator and an image art painter that allows the user to upload images for a given experience or trip, and it generates art according to the objects observed in the images. Besides creating beautiful art, Pyvinci provides the right hashtags for the user to use on their social media platforms. How we built it The main areas of the application - client, server, and models - were deployed in separate Docker containers to allow the app to have easier and faster deployment. The machine learning model of Pyvinci was deployed as a worker so it could be a more scalable solution. The worker picks up a new job for every new project that runs the model. We used a panoptic segmentation model to catch every possible background and foreground object in each image.
The main server API receives a new job from the client and adds it to the database for the workers to pick up and perform modeling on. Challenges we ran into The biggest challenge we encountered was developing the algorithm that creates the art. We wanted to use a layer-based sequential framework for scene generation with GANs as the second worker model. However, the strategy that we used did not leave us the time required to train the model to the level of accuracy that we wanted. Accomplishments that we're proud of We are very proud to have completed a deployed app that uses a machine learning model. What we learned How to work with PyTorch and many of its components. How to create panoptic segmentation models and many other types with Detectron2. Also, how to work with Docker containers to have a better structure for scaling the application. How to best deploy machine learning models with an application. What's next for PyVinci We are going to further train the GAN model to be able to generate art from the multiple objects we extract from the images, giving users the ability to control how the image is generated. We would also like to create a proper queue that the workers pick up jobs from, rather than picking them up from the database. Built With amazon-web-services aws-ec2 cv2 detectron2 docker go-fiber golang json miragejs nginx npm numpy os posgres psycopg2 python pytoch react requests sqlalchemy sys torch torchvision Try it out pyvinci.com github.com github.com github.com github.com
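The worker pattern described, where the server writes jobs to the database and model workers poll and claim them, can be sketched as follows. This is an in-memory toy under stated assumptions: the real project uses Postgres, and every name here is illustrative.

```python
import time

class FakeJobTable:
    """In-memory stand-in for the jobs table the server writes to.
    The real project uses Postgres; names here are illustrative."""
    def __init__(self):
        self.rows = []

    def insert(self, project_id):
        self.rows.append({"project": project_id, "status": "queued"})

    def claim_next(self):
        # Grab the oldest queued job and mark it running (a real database
        # would do this atomically, e.g. with SELECT ... FOR UPDATE).
        for row in self.rows:
            if row["status"] == "queued":
                row["status"] = "running"
                return row
        return None

def worker_loop(table, run_model, max_idle_polls=1, poll_delay=0.0):
    """Sketch of the model worker described above: poll the database for
    new jobs, run the segmentation model, and record completion."""
    idle = 0
    while idle <= max_idle_polls:
        job = table.claim_next()
        if job is None:
            idle += 1
            time.sleep(poll_delay)
            continue
        run_model(job["project"])  # would run panoptic segmentation here
        job["status"] = "done"

table = FakeJobTable()
table.insert("trip-to-rome")
worker_loop(table, run_model=lambda project: None)
print(table.rows[0]["status"])  # done
```

Status flags in the table are what make the design scalable: several workers can poll the same table, and the planned dedicated queue would replace `claim_next` with a broker pop.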
PyVinci
Generate new images based on your own experiences and create hashtags for your social media platform!
['Santiago Norena', 'Hector Mejia', 'Nicolas David', 'Alejandro Martinez', 'Sahivy Gonzalez']
[]
['amazon-web-services', 'aws-ec2', 'cv2', 'detectron2', 'docker', 'go-fiber', 'golang', 'json', 'miragejs', 'nginx', 'npm', 'numpy', 'os', 'posgres', 'psycopg2', 'python', 'pytoch', 'react', 'requests', 'sqlalchemy', 'sys', 'torch', 'torchvision']
14
10,159
https://devpost.com/software/merchseum
Inspiration Helping art and cultural institutions (museums, art galleries) and street artists recover from the Covid-19 impact that decreased their revenue. We think we can help them by collaborating to make cool stuff, and that's why Merchseum was created. What it does Creates uniquely designed merch with style transfer between an artist's artwork and a user's photo. How we built it We trained the model using PyTorch through a SageMaker notebook and Colab, converted the model to ONNX for OpenCV, and made it accessible with Flask. We also use the Printful API to make mockups and will soon enable print on demand. Challenges we ran into It was the first time for us learning deep learning (thank you PyTorch), so it took time for us to complete this project. Accomplishments that we're proud of We learned the importance of math and statistics in our lives (for deep learning). What we learned PyTorch, deep learning, patience (for training a good model). What's next for Merchseum connect to payments more print-on-demand merch (Pillow, Case, Bag, Tee, Many More) make an affiliate program so users and artists can make money together artists can train their artwork on the Merchseum platform Built With flask gcp messenger node.js onnx opencv printful pytorch sagemaker torchvision
Merchseum
a Pytorch-Messenger Powered App to Generate Unique Merchandise for Helping Art & Cultural Institution and Street Artist
['Dimas Nashiruddin Al Faruq', 'Syafirah Abdullah']
[]
['flask', 'gcp', 'messenger', 'node.js', 'onnx', 'opencv', 'printful', 'pytorch', 'sagemaker', 'torchvision']
15
10,159
https://devpost.com/software/pictex
Loading View Landing Page Upload With Camera or Image Gallery Let the CNN Model Process Download the .tex file from a Global CDN Inspiration As college students who often typeset assignments in LaTeX, we became interested in the possibility of creating an app that converts a handwritten document into a LaTeX file. We were really interested in learning more about computer vision and app development, so we decided to try implementing the idea and called it PicTeX.  What It Does We created PicTeX by using an RCNN model to preprocess images for regions of interest and then using a classifier to identify each symbol. We used PyTorch to train our classifier and S3 buckets to integrate the object detection model functionality into an iOS app.  Challenges We Ran Into One of the most difficult parts of the project was developing a model to identify different symbols. We started with a You Only Look Once (YOLO) model for identifying symbols in a page, which ended up being too inaccurate. After switching to an approach using an RCNN model, it took quite a while to figure out which threshold levels and learning rates would lead to the most accurate model.  Accomplishments We are really proud of the way that we were able to create a handwriting recognition model from scratch and integrate it into an app that completes the entire handwriting-to-LaTeX process. It was our first time using PyTorch, AWS, and SwiftUI, so we were able to learn a lot from figuring out how to use each environment and working to combine them into one project.  For the Future We will definitely continue working on PicTeX in the future, and our first goals will be to develop a more accurate symbol classifier, as well as add more features to the user interface. We are very excited about the project and hope that it will be able to aid many people in the future! Built With amazon-web-services cloudfront lambda opencv pytorch s3 swiftui Try it out github.com
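The classifier stage described above yields one label per detected symbol region; assembling those labels into a LaTeX line then reduces to sorting boxes into reading order. A minimal, illustrative sketch (the detection format and symbol tokens are assumptions, not PicTeX's actual data structures):

```python
# Each detection: (x, y, label) where (x, y) is the top-left corner of a
# symbol's bounding box and label is the classifier's predicted LaTeX token.
detections = [(120, 40, "+"), (30, 38, "x"), (210, 42, "y"), (75, 35, "^2")]

def to_latex(dets):
    # Sort symbols into reading order by horizontal position, then join.
    return "".join(label for _, _, label in sorted(dets, key=lambda d: d[0]))

print(to_latex(detections))  # x^2+y
```

A real pipeline would also group boxes into lines by their y-coordinates and handle fractions and subscripts; the sort key is where that layout logic would live.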
PicTex
Convert your math homework into LaTeX with just a picture!
['Len Huang', 'Zach Nowak', 'Erica Chiang']
[]
['amazon-web-services', 'cloudfront', 'lambda', 'opencv', 'pytorch', 's3', 'swiftui']
16
10,159
https://devpost.com/software/diagno
Inspiration This project was started with the ultimate goal of applying modern technology to improve healthcare conditions across the globe and help save human lives. Cardiovascular diseases claim 17.9 million lives annually, making them the most common cause of human death. More than 75% of these deaths take place in low and middle-income countries, where people have limited access to healthcare resources and trained cardiologists. 12-lead Electrocardiograms are among the major tools used by cardiologists for diagnosis of different heart conditions. The capturing of these signals usually happens through an ECG device. Ever since the first ECG device was invented, the process has been unchanged for decades, and accurate diagnosis heavily depends on well-trained cardiologists. With the advent of deep neural networks, frameworks like PyTorch, and large open-source datasets, there is a green light that this process can be automated, making healthcare solutions more affordable and accessible to everyone on earth. What it does At Diagno we have developed a deep learning algorithm that is able to identify 5 different cardiac conditions from 12-lead ECG signals with over 90% accuracy. Diagno's web app [ https://diagno-ui.herokuapp.com/ ] allows anyone to upload a 12-lead ECG recording as a JSON file and get the machine-generated prediction within a couple of seconds. How we built it The neural network model used is a 1D Convolutional Neural Network, largely inspired by ResNet. The model takes as input 12 one-dimensional signals corresponding to the 12 ECG leads, sampled at 400 Hz and 4096 samples long. At the final layer, the model outputs a probability for each cardiac condition.
class ECGNet(nn.Module):
    def __init__(self, input_channels=12, N_labels=2, kernel_size=17, n_blocks=4):
        super().__init__()
        self.padding = (kernel_size - 1) // 2
        self.conv1 = nn.Conv1d(input_channels, 64, kernel_size=kernel_size,
                               padding=self.padding)  # input_channels x 4096 -> 64 x 4096
        self.bn1 = nn.BatchNorm1d(64)   # 64 x 4096
        self.relu1 = nn.ReLU()          # 64 x 4096
        self.resblock1 = self.ResBlock(64, 4096, 128, 1024)
        self.resblock2 = self.ResBlock(128, 1024, 196, 256)
        self.resblock3 = self.ResBlock(196, 256, 256, 64)
        self.resblock4 = self.ResBlock(256, 64, 320, 16)
        self.flatten = nn.Flatten()
        self.dense_final = nn.Linear(320 * 16, N_labels)
        self.sigmoid_final = nn.Sigmoid()

    def forward(self, x_in):
        x = self.conv1(x_in)
        x = self.bn1(x)
        x = self.relu1(x)
        x, y = self.resblock1((x, x))
        x, y = self.resblock2((x, y))
        x, y = self.resblock3((x, y))
        x, _ = self.resblock4((x, y))
        x = self.flatten(x)
        x = self.dense_final(x)
        x = self.sigmoid_final(x)
        return x

    class ResBlock(nn.Module):
        def __init__(self, n_filters_in, n_samples_in, n_filters_out, n_samples_out,
                     dropout_rate=0.8, kernel_size=17):
            super(ECGNet.ResBlock, self).__init__()
            self.padding = (kernel_size - 1) // 2
            downsample = n_samples_in // n_samples_out
            self.conv1 = nn.Conv1d(n_filters_in, n_filters_out,
                                   kernel_size=kernel_size, padding=self.padding)
            self.bn1 = nn.BatchNorm1d(n_filters_out)
            self.relu1 = nn.ReLU()
            self.dropout1 = nn.Dropout(p=dropout_rate)
            self.conv2 = nn.Conv1d(n_filters_out, n_filters_out, kernel_size=kernel_size,
                                   stride=downsample, padding=self.padding)
            self.sk_max_pool = nn.MaxPool1d(downsample)
            self.sk_conv = nn.Conv1d(n_filters_in, n_filters_out, kernel_size=1)
            self.bn2 = nn.BatchNorm1d(n_filters_out)
            self.relu2 = nn.ReLU()
            self.dropout2 = nn.Dropout(p=dropout_rate)

        def forward(self, inputs):
            x, y = inputs
            y = self.sk_max_pool(y)  # skip connection (MaxPool -> 1d conv)
            y = self.sk_conv(y)
            x = self.conv1(x)
            x = self.bn1(x)
            x = self.relu1(x)
            x = self.dropout1(x)
            x = self.conv2(x)
            x = x + y
            y = x
            x = self.bn2(x)
            x = self.relu2(x)
            x = self.dropout2(x)
            return x, y

The deep neural network model was trained on a subset of the 2020 PhysioNet Computing in Cardiology Challenge Data. The trained model is able to predict 5 different cardiac conditions with over 90% accuracy. The model is deployed on AWS using TorchServe. What we learned We got hands-on experience using PyTorch for model building, training, and deployment. What's next for Diagno As the next step of Diagno, we are planning to build an embedded 12-lead ECG capturing hardware device with a Raspberry Pi and Texas Instruments ADS129X Analog Front End Board Built With amazon-web-services python pytorch Try it out github.com diagno-ui.herokuapp.com
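The final sigmoid layer outputs an independent probability per condition, so turning the network's output into diagnoses is a multi-label thresholding step. A minimal sketch of that step follows; the condition names and the 0.5 threshold are illustrative assumptions, not Diagno's actual label set or decision rule:

```python
# Illustrative condition names and threshold; Diagno's real configuration
# may differ.
CONDITIONS = ["AF", "RBBB", "LBBB", "PVC", "ST-elevation"]

def diagnose(probs, threshold=0.5):
    # Multi-label decision: each condition is flagged independently,
    # so one recording can carry several diagnoses at once.
    return [c for c, p in zip(CONDITIONS, probs) if p >= threshold]

print(diagnose([0.93, 0.12, 0.04, 0.61, 0.30]))  # ['AF', 'PVC']
```

This is why the model ends in per-class sigmoids rather than a softmax: the conditions are not mutually exclusive.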
Diagno
AI Based Remote Cardiology Solution
['Bashana Elikewela', 'Nushan Vitharana', 'Udith Haputhanthri', 'Shehan Munasinghe']
[]
['amazon-web-services', 'python', 'pytorch']
17
10,159
https://devpost.com/software/pyspooky
UI for sound extraction The YOLO Object Detection Page In-built file browser to select the video to be analysed UI for selecting unique frames of interest obtained from the YOLO detection and performing super resolution on them Inspiration Recently, one of our team members, Abhijit Ramesh, had his bicycle stolen. He had gone shopping, parked his bicycle outside the store, and by the time he returned his bicycle was missing. Gathering his wits, Abhijit looked around and saw a CCTV camera of the store which would have caught the footage of the theft. So he went in and requested the store owners to let him see the footage. The footage was there, but its resolution was not good enough for Abhijit to get any useful information. So Abhijit returned home, walking, thinking that CCTV feeds are basically not very useful. Being a computer science student working with image processing, computer vision and deep learning, an idea struck him. He called two of his other friends and laid down his plan so that CCTV feeds like those could actually serve some purpose. And that idea led to us building Psychic CCTV, a tool to make use of videos from CCTV feeds, as well as other kinds of videos captured by people at a crime scene. What it does Psychic CCTV will help you analyze a video using the following methods: You have a video (even a low resolution one), and you're sure you might find some object of interest in the video, but you're too busy to sit and watch the entire video. Even if you sit and watch the video, you might skip something by mistake, because after all, you're human. So we have provided an option to perform object detection on the entire video, in real-time, which stores all frames with objects of interest. Now, you did get an object of interest, but since you're not a computer, you still feel it could have a higher resolution. So use those saved frames and increase their resolution using our super-resolution technique.
It works almost in real-time, taking just 4-6 seconds to perform the operation on a frame. You already have a few images you want to analyze. Select them and run the super-resolution on your own custom images. And finally, you might have a video recording with sound in it too. Now there's a lot of interference in the background, a lot of sound sources. So select the video and Psychic CCTV will extract the audio and split it into sources so you can clearly hear the vocals as well as the background noise separately and gather useful information from it. How we built it Super Resolution In order to enhance the quality of images, we are using super resolution. Super resolution has been implemented by us from scratch completely in PyTorch. After researching a bit, we found two methods: Using SRResnet Using SRGAN So we decided to go ahead and implement both methods. Once we had a trained model for both, we ran them on a few photos and in the end came to the conclusion that SRGAN performs better than SRResnet. (Three example pairs: input image alongside its SRGAN output.) Object Detection Abhijit just needed the super-resolution technique to have a fix to his problem, but since we started out with the project, we decided to expand the functionality a bit. We added an option to detect objects in video feeds as well. For this, we have used YOLO object detection, again implemented from scratch in PyTorch. Separating Soundtracks In videos that might be recorded at crime scenes such as accidents, hit and run cases, snatch thefts on roads, etc., soundtracks play a very important role in addition to the video. Mostly when such a thing happens, someone or the other will end up recording a video on their phones. Now this video might be blurry, unstable, not of a high quality, and all this is handled by our above-mentioned steps, but at the same time the audio might also not be clear.
If audio could be split into vocals and other categories, it would be much easier to understand what happened. The sound of the car going away in a hit and run case in which the car itself isn't very visible would help in determining the model and make of the car. On the spot, people might exclaim and reveal some important details regarding a crime without the authorities being present. More such things can be caught on video. In order to improve the process of analyzing the audio, we extract the audio from a given video and split it into vocal and non-vocal parts using deep learning models. For more information and samples please look into our readme on GitHub: https://github.com/Fireboltz/Psychic-CCTV/blob/master/README.md . Overview of the entire app: Object Detection Screen Dialog to allow the user to choose the video to analyse Sound Extraction Frames with objects of interest displayed along with the option to perform super resolution Challenges we ran into Training models from scratch took us a lot more time than we expected. We wanted a completely offline solution, a desktop app that could be used anytime, on any system, so we decided to make a GUI in Python itself. This was the first time we used a package called PySimpleGUI for such a project, and making the UI in Python itself was one of the most challenging tasks. Accomplishments that we're proud of Creating a stand-alone offline application that can run on any platform. Creating a project that actually solves the real-world problem of analysing low-quality videos. This project could actually come in handy to relevant authorities when investigating crimes where video footage is available. What we learned Making GUIs in Python Working together as a team completely remotely and having all meetings and discussions online. Reading and implementing techniques for super-resolution mentioned in research papers. Handling audio analysis.
Going through very extensive documentation (of PySimpleGUI) and learning and using the features relevant to our project. What's next for Psychic CCTV Improvements in the GUI Integrate features such as extracting information from vehicle number plates, if possible, from a given video feed Adding facial extraction so that an officer who might use the application can easily see the face of a person/s of interest Publishing packages for the application for Windows, Linux and macOS and creating our first release Since this is an open-source project, and open source solely depends upon contributors, we would also like to spread the word about our project and welcome more contributors to join us, share their ideas, and help out with the development. Built With pysimplegui python pytorch qt Try it out github.com
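The object-detection step described above keeps only frames containing objects of interest. An illustrative sketch of that filtering, including skipping consecutive near-duplicate frames so the user reviews each scene once (the detection format and class names are assumptions, not Psychic CCTV's actual YOLO output):

```python
# Per-frame detections as {frame_index: set of detected class names};
# the structure and classes here are illustrative.
detections = {
    0: {"car"},
    1: {"car", "bicycle"},
    2: {"car", "bicycle"},   # same targets as the previous frame -> skip
    3: {"person"},
    4: {"bicycle"},
}

def frames_of_interest(dets, targets):
    kept, previous = [], None
    for idx in sorted(dets):
        hits = dets[idx] & targets
        if hits and hits != previous:   # new combination of target objects
            kept.append(idx)
        previous = hits or previous
    return kept

print(frames_of_interest(detections, {"bicycle", "person"}))  # [1, 3, 4]
```

The kept frame indices are the ones a user would then pass to the super-resolution stage.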
Psychic-CCTV
An easy to use system for extracting data(both audio and video) from a CCTV Camera feed which might not be of high quality to help police officers analyse footage of crimes scenes more easily.
['Abhijit Ramesh', 'Xerous Wazler', 'Yash Khare']
[]
['pysimplegui', 'python', 'pytorch', 'qt']
18
10,159
https://devpost.com/software/genrl-a-pytorch-reinforcement-learning-library
Inspiration Reinforcement Learning is a rapidly expanding subfield of AI. One of the current major challenges in the field is the difficulty of reproducing results and the lack of accessibility for newcomers. GenRL is our attempt to make it easier for people to understand and get started with RL, as well as to provide a standardised way for researchers to reproduce results. What it does GenRL has the following goals - Approachability and accessibility: We have included multiple tutorials and extensive documentation to lower the barrier for newcomers to the field to get working models. Extensibility: Be easy to extend the core functions of the library for implementing novel agents and new research. Modularity: By separating out all the underlying common features of each algorithm. Features - Unified Trainer and Logger class: code reusability and a high-level UI Ready-made algorithm implementations: ready-made implementations of popular RL algorithms Extensive benchmarking Environment implementations Heavy encapsulation, useful for new algorithms How we built it The project was built completely in an open-source manner on GitHub. What originally was a core group of people has expanded to include multiple open-source contributors. Challenges we ran into Creating base classes that are extensible to any algorithm What should be the core of the library? What abstraction layers should be created? Accomplishments that we're proud of Agent encapsulations Abstraction layers that are extensible to any new algorithm What we learned Open-source software development Working in a team PyTorch core Reinforcement Learning What's next for GenRL - A PyTorch Reinforcement Learning library Including other key areas of RL such as multi-agent and evolutionary methods Providing extensive support for distributed training of agents Expanding tutorials to cover a wide range of topics in RL Built With numpy python pytorch Try it out genrl.readthedocs.io github.com
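The "unified Trainer" idea above means the interaction loop is written once against a common agent/environment interface, and every algorithm reuses it. A minimal, illustrative sketch of that separation (the interface names are assumptions, not GenRL's actual API), using a trivial two-armed bandit:

```python
import random

random.seed(0)

class Bandit:
    """Toy environment: arm 1 pays off more often than arm 0."""
    def step(self, action):
        return 1.0 if random.random() < (0.2, 0.8)[action] else 0.0

class EpsilonGreedyAgent:
    def __init__(self, n_actions=2, epsilon=0.1):
        self.values = [0.0] * n_actions
        self.counts = [0] * n_actions
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, action, reward):
        # Incremental mean of observed rewards per arm.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def train(agent, env, steps):
    # The trainer owns the loop; any agent/env pair with this interface works.
    total = 0.0
    for _ in range(steps):
        a = agent.act()
        r = env.step(a)
        agent.update(a, r)
        total += r
    return total / steps

avg = train(EpsilonGreedyAgent(), Bandit(), 2000)
print(round(avg, 2))  # average reward; approaches ~0.8 as arm 1 is favoured
```

Swapping in a different agent class requires no change to `train`, which is the modularity the library's base classes aim for.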
GenRL
A Modular Reinforcement Learning library in PyTorch
['Sharad Chitlangia', 'Het Shah', 'Atharv Sonwane', 'Sampreet Arthi', 'Ajay Subramanian']
[]
['numpy', 'python', 'pytorch']
19
10,159
https://devpost.com/software/contact-tracing-ai
Inspiration COVID-19 is the most significant challenge the world has faced in 75 years. As of the time of writing of this article, 20M people have been infected, of which 730k have died as a result [ 1 ]. In addition, the lockdown measures designed to mitigate the spread of the disease are predicted to reduce worldwide economic growth by up to 6% [ 2 ], corresponding to $5.4 trillion in lost GDP in 2020 alone. This is in addition to the untold future consequences from higher order effects such as mass unemployment, social isolation, and postponing unrelated medical care, to name just a few. Ultimately the solution to COVID-19 will likely involve one or more medicinal therapeutics like vaccines. Unfortunately, however, it’s unclear when these might become widely available, or if a sufficient proportion of the population would volunteer for such untested treatments, or even whether such a thing is possible to create in a reasonable time frame, if at all. And while the efficacy of face masks is well established [ 3 ], so is the propensity for much of the world’s population to avoid wearing them [ 4 ]. Now and for the foreseeable future, the world’s most effective and reliable defence against the spread of COVID-19 is contact tracing [ 5 ]. This is a manual process that involves public health authorities interviewing known infected cases, determining to whom they may have been exposed during their infectious period, and contacting those exposed individuals so that they can isolate, thereby preventing further spread of the virus. While effective when done correctly, contact tracing is a highly labour-intensive process that relies on infected individuals being able to retrace their steps precisely over time spans of several weeks. Not only do infected cases need to recall with whom they interacted during this period, but they also need to know how to contact those individuals. This is often unfeasible, especially in dense urban environments [ 6 ]. 
Multiple solutions have been proposed to meet this challenge. Typically these require a majority of the population to install an "exposure notification" application onto their mobile phones [ 7 ], or to carry a dedicated piece of hardware such as a wristband. Unfortunately, public health authorities agree that these approaches are ineffective [ 8 ] due to the low specificity of the technologies upon which they are built, such as GPS, Bluetooth, and ultrasound. For example, these tools are unable to differentiate between unmasked individuals having a conversation less than a foot apart (high degree of exposure), and masked individuals dozens of feet apart and/or separated by a physical barrier such as a wall (zero exposure). In addition, they fail to leverage the empathy and persuasiveness of human personnel, which are critical for the success of contact tracing operations [ 9 ]. Some of the most successful contact tracing efforts have been in South Korea and Singapore. One of the key methods employed by public health authorities in these countries that others have so far largely ignored is the systematic review of security camera footage [ 10 , 11 ]. By watching videos recorded by standard CCTV cameras that are often ubiquitous in private and public spaces alike, contact tracers in these countries are able to pinpoint which individuals were exposed to known COVID-19 cases, and can do so without having to rely on fallible human memory. Why, then, has the rest of the world not followed suit? The answer may have to do with cultural differences, particularly with respect to privacy. While citizens of South Korea and Singapore may be used to the idea of being recorded, much of the rest of the world (especially in the West) find this idea highly unsettling -- despite the fact that CCTV cameras are already present in many Western cities to an equal or greater extent compared to their Asian counterparts [ 12 ].
In addition, the idea that a nation’s government can know the precise movements and activities of that nation’s citizens at any time is often seen as antithetical to free democratic societies. Addressing these concerns is the motivation behind Contact Tracing AI: a software system that lets organizations leverage their existing technology infrastructure (e.g. security cameras) to prevent COVID-19 outbreaks efficiently, accurately, and at population scale with the help of state-of-the-art computer vision. When a customer, employee, or visitor is confirmed to have COVID-19, organizations are often forced to shut down to prevent further spread, costing $millions in lost revenue and other expenses. With Contact Tracing AI, organizations can quickly and automatically determine a) who may have been exposed, and b) how to contact them. What it does After signing up at www.ContactTracingAI.com , organizations can either manually upload video files, or connect their Video Management Systems (e.g. Genetec Omnicast) for automated analysis. Contact information is determined via an opt-in QR code system (no app required), or via automated integrations with Point-of-Sales (e.g. Lightspeed Retail) and Access Control (e.g. Genetec Security Center) systems. The submission for this hackathon is a small part of the complete system. It contains drag-and-drop video upload, keypoint detection, and tracking. How we built it In the following, refer to: https://www.contacttracingai.com/static/img/about-1.png https://www.contacttracingai.com/static/img/about-2.png Consider an arbitrary floor plan (representative of a typical retail store, factory, or office building). Inside there may be one or more Cameras (C), Subjects (S), and external pieces of Equipment (E) like Access Control systems, Point-of-Sales systems, and/or QR codes. Cameras generate Videos (V), which are composed of Images (I). A ten second long video recorded at 30 frames per second will generate 300 images. 
Using Deep Neural Networks, we extract various Features (F) from these images, including: Person detection keypoints (e.g. mouth, hands, feet) Actions (e.g. coughing, speaking) Personal Protective Equipment (PPE) (e.g. masks) Features of the same person between Images are linked together to form Tracks (T). Because humans don't change in their appearance, position, or velocity very quickly between subsequent Images (i.e. during 1/30th of a second), Tracks are selected so as to minimize the change in these attributes across consecutive Images. Tracks corresponding to the same person in different Videos are linked together to form Subjects (S). Because humans don't significantly change in their appearance between Cameras, Subjects can be selected so as to minimize the change in appearance across Cameras. (Note that "appearance" here refers to shape, size, and clothing, not biometric measures like face or gait.) Exposure (X) between two subjects is a function of the Features. This is defined according to the standard public health definition: 15 minutes or more of conversation within two metres (six feet) apart, without wearing face masks [ 13 ]. Whenever Subjects interact with Equipment, they generate Events: Swipes for Access Control, Payments for Point-of-sales, and Scans for QR codes. This is how we are able to determine Identities, including Names and Contact information. In order to associate Events with Subjects, synchronization between the Cameras and the Equipment is required in both time (digital clocks) and space (location in field-of-view). This is accomplished during onboarding via a single click for each Camera/Equipment pair. Events are time-aligned with Tracks using the Viterbi algorithm. Infection Risk (R) for a particular Subject is a function of their Exposure to all other Subjects and their respective Identities (including infection status). Infection status is provided in one of two ways: Via the user (e.g. 
an employer) upon being informed either directly by the infected Subject (e.g. an employee), or via a public health organization Via the infected Subject directly via text message (only available if they previously opted in via a QR code). The system is built in Python with PyTorch and RQ. Keypoint detection is implemented with Detectron2, and re-identification is implemented with Torchreid. Challenges we ran into Determining optimal implementations in terms of ease of implementation, accuracy, and performance for keypoint detection, tracking, and subject re-identification Determining which components to open source Accomplishments that we're proud of It works! (We are continuously iterating and improving.) We are official Genetec technology partners, which means this technology will be integrated into their products What we learned Tracking is hard Re-identification is even harder PyTorch is awesome! What's next for Contact Tracing AI Deploying on GPUs for improved throughput Integrating human-in-the-loop feedback using Amazon Mechanical Turk Improving on detection, tracking, and re-identification Sources [1] https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6 [2] https://fas.org/sgp/crs/row/R46270.pdf [3] https://www.mayoclinic.org/diseases-conditions/coronavirus/in-depth/coronavirus-mask/art-20485449 [4] https://www.bbc.com/news/world-52015486 [5] https://www.mckinsey.com/~/media/McKinsey/Industries/Public%20and%20Social%20Sector/Our%20Insights/Contact%20tracing%20for%20COVID%2019%20New%20considerations%20for%20its%20practical%20application/Contact-tracing-for-covid-19-new-considerations-May-2020.pdf [6] http://currents.plos.org/outbreaks/index.html%3Fp=64648.html [7] https://www.cdc.gov/coronavirus/2019-ncov/php/contact-tracing/contact-tracing-plan/digital-contact-tracing-tools.html [8] https://apps.who.int/iris/rest/bitstreams/1279465/retrieve [9] 
https://www.businessinsider.com/contact-tracing-jobs-founder-describes-successful-applicant-2020-5 [10] https://www.bbc.com/news/world-asia-51866102 [11] https://globalnews.ca/news/6942244/south-korea-coronavirus-tracing-routes/ [12] https://www.securitymagazine.com/articles/90759-what-are-the-worlds-most-heavily-surveilled-cities [13] https://www.cdc.gov/coronavirus/2019-ncov/global-covid-19/operational-considerations-contact-tracing.html Built With detectron2 flask javascript python pytorch redis rq Try it out github.com www.contacttracingai.com visiontracing.herokuapp.com
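The exposure definition quoted in the write-up (15 minutes or more within two metres, without face masks) can be sketched as a simple check over synchronized track samples. The sample format below is an illustrative assumption rather than the system's actual data model, and cumulative (rather than contiguous) close contact is a simplification:

```python
# One sample per second for a pair of tracked subjects:
# (distance_metres, both_masked). Format is illustrative.
def exposed(samples, max_dist=2.0, min_seconds=15 * 60):
    # Count seconds of close, unmasked contact against the 15-minute rule.
    close = sum(1 for dist, masked in samples if dist <= max_dist and not masked)
    return close >= min_seconds

fifteen_min = [(1.2, False)] * (15 * 60)                    # sustained close contact
brief = [(1.2, False)] * 60 + [(5.0, False)] * (20 * 60)    # one close minute only
print(exposed(fifteen_min), exposed(brief))  # True False
```

In the real system the distance and mask inputs would come from the keypoint and PPE features extracted per frame, and the result would feed the per-subject infection-risk computation.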
Contact Tracing AI
Contact Tracing AI lets organizations leverage their existing technology infrastructure (e.g. security cameras) to prevent COVID-19 outbreaks efficiently, accurately, and at scale with computer vision
['Richard Abrich', 'Andrew Grebenisan', 'Nate Armstrong']
[]
['detectron2', 'flask', 'javascript', 'python', 'pytorch', 'redis', 'rq']
20
10,159
https://devpost.com/software/dashai
Landing page. Choosing the metrics. The model-builder. Training configuration. Configuring auto ML. Training the model. Visualizing attributions. Inspiration We are students of the FastAI course by Jeremy Howard, which we highly recommend. A tagline they've used is, "Making neural networks uncool again." What they mean by this isn't that artificial intelligence doesn't deserve the hype ascribed to it. Instead, they indicate that cool things tend to be accessible only to the elite, the super-rich, the one percent. FastAI intends to ensure that neural nets don't fall into the category of things that are "cool" by that definition. We found that super inspiring. However, even with the outstanding work FastAI does, it only makes it accessible to the coders on the planet, which make up only 26.4 million people out of 7.7 billion, about 0.3% of people. Now, of course, this is a worthy goal, and one deserving of applause, because the actual number of people who can use neural networks right now is even smaller. However, we would like to help FastAI along with their goal and do our part in increasing that number by allowing people to create state-of-the-art deep learning models without writing any code. Making a no-code deep learning application not only allows non-coders to access the wonders of deep learning, it also lets coders speed up prototyping, allowing them to bring their programs to production much quicker. Finally, explainability is an up-and-coming field that is needed by the multitudes and utilized by the few, and so we wanted to provide easy access to that as well. What it does DashAI provides a simple graphical user interface (GUI) that guides users through a step-by-step process of creating, training, and saving a model. We also implement optional steps for auto ML and visualizing attributions as a way of perceiving explainability. We go into more detail below. Step 1: Choosing the task.
We provide our users with the ability to choose their type of application early on in the process. DashAI uses this information in later stages, to suggest architectures that have achieved state-of-the-art results in that task. Users can choose one of four tasks: collaborative filtering, tabular, text, and vision. Step 2: Selecting the dataset. Users then provide the dataset they intend to use, and they have options to let DashAI know how to utilize the dataset best. DashAI then asks how the user wants to split the dataset (into training and validation sets), how to label it, and what transforms the user wants to apply on the dataset. Step 3: Selecting the model. Users then have to choose what architecture they want their model to have. DashAI provides architectures that have achieved state-of-the-art results in the task defined by the user, but the user may use any model built using PyTorch layers. Step 4: (Optional) Auto ML At this point, users may choose one of three options: to use DashAI default hyper-parameters; to input hyper-parameter values of their choosing; or to use DashAI's auto ML component, Verum, to select the best possible hyper-parameter values. In Verum, users may choose which hyper-parameters they would like tuned, the number of experiments they want to run, and whether they would like to have the resulting values automatically applied to the model. Step 5: (Optional) Training the model. DashAI then provides a simple training interface, where, if they have not chosen to utilize Verum's automatic applying feature, users may input the hyper-parameter values required for training. Users can also pick between generic training and 1-cycle training. Step 6: (Optional) Explainability Users can then choose to visualize the attributions in the explainability component of DashAI, DashInsights. They may choose from a multitude of attribution-calculation algorithms, depending on their task.
The visualizations can provide insight into why a model is predicting what it is predicting. Step 7: (Optional) Saving the model. Finally, if users are so inclined, they can save their models as .pth files. We provide instructions on how to use these files in the Wiki of our GitHub repo . How we built it We designed our application to leverage the broad use-cases of JSON files for API communication between our front- and back-ends. We were thus able to write our code safe in the knowledge that everything we need to know just needed to be in a JSON file. Further, this allowed us to split our team into two so each could work without having to wait for updates from the other. Also, in production, users could modify values from the JSON, and the Flask server could use these to generate a model, train it, save it, and everything else we do. We based our application on FastAI for everything that they provide out of the box. We wrote everything else we wanted to add using PyTorch and libraries written on top of it. For our hyper-parameter tuning component, Verum , we used the Ax library. To allow users to visualize the attributions of their models, we used the Captum library. Challenges we ran into Given the wide variety of features that we wanted to provide, we had to find and use open-source libraries that have done some fantastic work. However, this meant that we had trouble interfacing them. For example, we would need to use a wrapper around a layer to get one library to work, but that would break another. Another thing with all the libraries we used was that, despite them being well documented and well written, we still faced issues that seemed to have no solution we could find. Thus, we spent many an hour scouring through source code to figure one out. Another of the major challenges we faced arose because we wanted our users to see what was happening, and that meant we had to interface our terminal output to our JavaScript front-end, no easy task.
React is a front-end library that doesn't allow direct access to the file system, so in the end, we had to write a hack for that. ;) Accomplishments that we're proud of Everything! :D What we learned All of us are students, so despite our internships, we are still relatively new to writing an entire application from an idea to a product. We were thrilled that we got to leverage some amazing libraries from the open-source community to full effect. We were working with automated hyper-parameter tuning for the first time, and it was fascinating to learn the processes behind them from the people at Ax. Similarly, finding out how to get a model to tell us why it thinks what it thinks, from the good folks at Captum, was also captivating. Finally, during these troubled times, with every team member doing an internship (and two starting college semesters in the middle of the competition period), we learned how to collaborate remotely on a project we were all passionate about with everything else we were doing. We faced a few roadblocks, but it was all loads of fun! What's next for DashAI We have two features coming soon: a way for us to give users a template for them to deploy their models with no code, once again inspired by FastAI; and support on DashInsights for collaborative filtering and tabular models, which we currently don't provide. We are also looking forward to opening this project up to the open-source community and finding out all the creative and fascinating new things we can add to DashAI! Built With ax captum electron fastai flask node.js pytorch react restful-api Try it out github.com
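The JSON-based API contract described in "How we built it" can be sketched as follows. The field names below are illustrative assumptions for the sake of the example, not DashAI's actual schema:

```python
import json

# Hypothetical model-configuration payload the React front-end might send.
# Field names here are assumptions for illustration, not DashAI's schema.
request_json = json.dumps({
    "task": "vision",
    "architecture": "resnet34",
    "hyperparameters": {"learning_rate": 3e-3, "batch_size": 64, "epochs": 5},
    "training": {"one_cycle": True},
})

# The Flask side parses the payload and hands the values to the trainer.
config = json.loads(request_json)
lr = config["hyperparameters"]["learning_rate"]
use_one_cycle = config["training"]["one_cycle"]
print(config["task"], lr, use_one_cycle)  # vision 0.003 True
```

Because both sides only ever agree on this JSON shape, the front-end and back-end teams can develop independently, exactly as described above.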
DashAI
DashAI allows its users to create record-breaking, state-of-the-art models with just their datasets, no coding required. DashAI takes care of hyper-parameter tuning, training, and explainability.
['Kaushik Muralidharan', 'Joe Rishon Manoj', 'Amit jha', 'Manikya Bardhan']
[]
['ax', 'captum', 'electron', 'fastai', 'flask', 'node.js', 'pytorch', 'react', 'restful-api']
21
10,159
https://devpost.com/software/torch-deploy
API Docs Demo page Inspiration People spend a lot of time designing and fine-tuning their models, and at the end of the process, many want to move their work to production. However, serving your model as an API is often a hassle and requires a lot of boilerplate code. This library aims at streamlining that process and making it extremely easy (1 line!) to serve and deploy your PyTorch model as an API. What it does pytorch-deploy is a minimalist Python package that allows a user to serve a PyTorch model as an API in just one line of code! Just install the package with pip, import the package, and call deploy with your model as an argument. Example:

import torch
import torchvision.models as models
from torch_deploy import deploy

resnet18 = models.resnet18(pretrained=True)
resnet18.eval()
deploy(resnet18, pre=torch.tensor)

How we built it We used FastAPI and uvicorn to create a robust and scalable API endpoint to serve a PyTorch model. PyTorch was used to offer custom pre and post processing functions. In addition, the torch.nn.Module interface is leveraged for introspection, to offer the maximum amount of flexibility in the inference pipeline. Challenges we ran into It was hard to work with teammates remotely since we are all in different time zones around the world and have different day-to-day schedules. Without being physically together, it was difficult to stay motivated and help each other out on problems and on learning new APIs or libraries. Accomplishments that we're proud of Being able to iterate on our product and steadily make the package more comprehensive. We came up with a simple solution to a common problem and we were able to finish building the core functionalities that we envisioned for this tool.
Simple and intuitive to use Offers flexibility with custom pre and post processing functions Works with any PyTorch model A variety of sample code to showcase usage What we learned The process of serving a model as an API was something all team members were unfamiliar with at the start. Fiona also learned how to build a package, the general structure of packages, and how to upload a package to PyPI in order for packages to be installed with the pip command. Owen (Chang Heng) learned a lot about how to build an API using the FastAPI library. Hulbert learned how to use the FastAPI security and OAuth2 library as well as how to use JSON Web Tokens for better security. What's next for torch-deploy We are still working on an OAuth2 login system that requires correct user credentials to use torch-deploy with secure password encryption and temporary JWT tokens. In the future we want to expand upon our analytics features for model usage and make it more comprehensive. Built With fastapi python pytorch uvicorn Try it out github.com
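The pre → model → post flow that deploy() wires up behind its endpoint can be sketched framework-free. This is a simplified stand-in for illustration, not torch-deploy's actual internals:

```python
class InferencePipeline:
    """Minimal sketch of a pre -> model -> post inference chain.

    Mirrors the idea behind deploy(model, pre=..., post=...);
    an illustrative stand-in, not torch-deploy's real code.
    """

    def __init__(self, model, pre=None, post=None):
        self.model = model
        self.pre = pre or (lambda x: x)
        self.post = post or (lambda y: y)

    def __call__(self, raw_input):
        x = self.pre(raw_input)   # e.g. torch.tensor, normalization
        y = self.model(x)         # the model's forward pass
        return self.post(y)       # e.g. argmax, JSON-friendly conversion


# Usage with plain callables standing in for a real model:
pipeline = InferencePipeline(
    model=lambda x: [v * 2 for v in x],
    pre=lambda raw: [float(v) for v in raw],
    post=lambda y: max(y),
)
print(pipeline(["1", "3", "2"]))  # 6.0
```

In the real package the model is a torch.nn.Module and the wrapper sits behind a FastAPI route, but the data flow is the same three-step chain.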
torch-deploy
Easily serve your PyTorch model as an API endpoint in only 1 line of code.
['Chang Heng Mo', 'Fiona Xie', 'Hulbert Zeng']
[]
['fastapi', 'python', 'pytorch', 'uvicorn']
22
10,159
https://devpost.com/software/infinitybatch
dataflow use case 01: Keras-like training use case 02: Custom training workflow cover infinitybatch Infinitybatch is an open source solution for PyTorch that helps deep learning developers train with a bigger batch size than could be loaded into GPU RAM through a normal PyTorch train loop. The core idea comes from the fact that GPU time is expensive, and the usage of one's own GPU cluster or a cloud-based GPU service has to be optimized to be cost efficient. Furthermore, developers and researchers regularly have limited access to GPUs. While CPU-based training usually allows larger batches than a normal GPU could provide, it is much slower. Infinitybatch makes it possible to use the GPU during training with a bigger batch size thanks to a special unloading and uploading process that manages GPU RAM to avoid memory overrun. Inspiration Nowadays, computers at home or at companies usually contain a GPU. However, these GPUs have very poor computational resources compared to dedicated deep learning GPUs like the Tesla or Quadro series. Despite these limitations, a home or company GPU can be used for experimenting, testing new theories, trying new approaches, and learning. Using cloud-based GPU services can be expensive, especially when the whole concept of a model is developed on the cloud. A properly trained model is built on a lot of dead ends: a huge number of model architectures end up in the trash after the training phase. A developer or researcher has to try different batch sizes with different batch-creation technologies and tune other hyperparameters as well. If any of these steps can run on one's own GPU, it saves a lot of money and internet bandwidth compared with a cloud-based service. The simplest way to solve the batch size problem is to use a small batch size that fits into GPU memory in all circumstances. There are a lot of forum topics about possible solutions.
With the same batch size, the results of the existing solutions will be slightly different from those of a normal PyTorch train loop, when they should in fact be identical. The lack of a real solution prompted us to explore how painful the problem is, so we ran a survey and got more than a thousand responses: 33% of people quite often get a CUDA memory error during training, and more than 70% usually try different batch sizes. Now we have proof that the problem is real and pressing. What it does Infinitybatch is an open source solution that helps deep learning developers train with a bigger batch size than could be loaded into GPU RAM. It is released under the MIT license. The only limitation of Infinitybatch is that at least the model and one element must fit into GPU RAM at the same time. Infinitybatch fits perfectly into PyTorch and PyTorch's workflow. It does not override or exploit any PyTorch function or class. A train loop with infinitybatch slots smoothly into the logic of a common train loop: the user initializes a model, an optimizer, a criterion, and a dataloader, and training can just begin. This kind of workflow guarantees compatibility across different PyTorch versions in the future as well: as long as torch.stack and torch.clone stay the same and the core concept of the autograd function remains, this compatibility is not at risk. How we built it Our main goal was to make a pure Python solution which is robust enough to sustain some version changes of PyTorch and is easy to use without major issues. Therefore we had to figure out how to satisfy our needs at the Python level. There are two main concepts in infinitybatch that can be considered unique or special: one is the way we treat the output of a model's forward pass, and the other is the separation of forward-pass and backpropagation devices.
The output of a model should be treated carefully since it contains a computational graph, which must be preserved for the backpropagation process at least until the calculation of the loss. PyTorch doesn't let developers access autograd directly, on purpose, to ensure its proper workflow under any circumstances. This restriction makes it harder to find working solutions for problems like the one infinitybatch now solves. The ability to separate forward and backward devices is tightly coupled with the ability to preserve computational graphs. Graphs are essential for the loss to backpropagate; if the graph is broken for any reason, backpropagation will lead to bad results. This is the reason why common solutions for increasing batch size in GPU RAM don't work. Infinitybatch clones the output of the model with its computational graph, and this clone is moved away from the model's device to the device with more memory – usually CPU RAM – where the backpropagation will occur. Once the whole batch has been forwarded, the outputs are stacked and, together with the targets, the loss is calculated. The computation of the loss happens on the backward device; however, it affects the data on the forward device as well. All the rest of the training is just like any other PyTorch model training, and this means real freedom for a developer who tries out infinitybatch. Challenges we ran into We figured out that we had to get more familiar with the core concepts of PyTorch's autograd system and its memory usage in general, since our solution touches the core of the framework. We had a further obstacle in our development since we aimed to make a pure Python solution without going down to the C++ level. The reason is quite simple: we wanted a simple but robust solution that is easy to use and that hopefully survives some version changes in the near future.
Accomplishments that we're proud of We made a survey to measure how painful the problem we try to solve really is. We had interesting feedback and were lucky enough to get the opinion of more than a thousand developers from all over the world. We created the Tesla Index, inspired by our background research, and we plan to continue developing this index because we think it will break down some walls in the world of developers. The idea of the Tesla Index comes from The Economist's Big Mac Index. That index focuses on purchasing power parity and national markets and goods, but programmers live in a globalized world where every item can be purchased from the global market. The Tesla Index compares local salaries through the price of the most expensive Tesla card available at the moment. We also wrote a complete book about the background of the problem: our research, market analysis, and our solution as well. What we learned We learned a lot about the discourse around batch size and hyperparameter optimization in general. We also explored the deep fundamentals of the PyTorch autograd system, and we think we now have a better overview of its design. At least we hope so. What's next for infinitybatch We have a detailed future plan for this project. eval() Although evaluation with infinitybatch is already very easy, for users who like to have a “canonical way” for everything we plan to implement an .eval() function separately. The hardest task of this implementation is how it relates to dataloaders, because of the freedom of use that we can provide at this point. PyPI Since infinitybatch is a production-ready module, it could be uploaded to PyPI.org right now to be usable as a Python package globally. We plan to wait until the outcome of this hackathon is clear, and then we will upload the package together with ReadTheDocs documentation and plenty of examples and tests as well.
Event monitor To support do-it-yourself train loops even more, we plan to add an event monitor to infinitybatch. This feature could warn users if they leave out something important from the forward-backward process. Improvement of warnings A good and highly configurable warning system is needed for infinitybatch in the future to be able to serve experimenters and real experts at the same time. We plan to add warning plans, warning levels, and more safety functions. Improvement of the containers The containers of infinitybatch are very simple constructions at the moment. In the future we plan to add sophisticated storage strategies and container-related functions too. Counter of learnable parameters We plan to add a sophisticated counter function to summarize the characteristics of a model's learnable parameters. We plan to show a detailed overview across the whole model, including the number of elements and their impact on memory and speed as well. This feature is far beyond the level where pure Python could be enough. Memory usage monitor To get a better overview of what happens in memory, we have to dig much deeper; therefore we plan to make a detailed memory usage monitor. To develop this we have to leave the Python level, but we plan to build something that is usable at the pure Python level as well. Improving memory usage If we have a better overview of what happens in memory, why not improve the usage of the memory? At the very least, precise and up-to-date tracking of active and inactive memory blocks would be very welcome, and the lifecycle of temporary variables could also be examined. Improving UI At the moment our user interface is far from perfect, but at least we have a verbose property to control the appearance of very simple prints during training. In the future we plan to have more configurable training prints.
We plan to offer saving stats into a file, and we plan to create some functionality to connect with graphical interfaces. Change of model We plan to add an observer to the model, optimizer, and criterion properties of infinitybatch to be able to monitor important changes of these attributes without the need to implement a wrapper around the PyTorch method. This way the use of infinitybatch would be much more convenient. “In case needed” plans The improvements below are a question of need. For various reasons we don't consider them that important right now, but we are open to and curious about the real-world needs of our users. Callbacks Based on early experience, we plan to add the ability to place callbacks in the workflow of infinitybatch. We think the ability to call forward-backward stages separately should be enough to build very different train loops, but some real needs may emerge. Improvement of tools.CudaHistory The classes tools.CudaHistory and tools.CudaMemorySnapshot began their lives as experimental classes. The importance of managing snapshots, such as the memory state at a given moment, is obvious, so we plan to improve the history-based containers. Complexity We plan to add the ability to use multiple models, criterions, or optimizers together. This leads to much more complex epochs and functionality. Though it would be nice to have that level of functionality right away, it is not that easy. To devote all the needed energy to this improvement, we need a better view of the real-world use cases of our users. Built With python pytorch Try it out github.com
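The clone-and-offload idea at the heart of infinitybatch can be sketched in a few lines of plain PyTorch. This is a simplified illustration of the concept, not the library's actual implementation, and it runs entirely on CPU, so here the "offload" to the backward device is a no-op move:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)          # stand-in for a large model
criterion = nn.MSELoss()
x = torch.randn(8, 4)            # full batch
y = torch.randn(8, 1)

# Forward the batch in small chunks on the "forward device" and clone each
# output (clone preserves the autograd graph) onto the "backward device".
outputs = []
for xb in x.split(2):                       # micro-batches of size 2
    out = model(xb)                         # would run on the GPU
    outputs.append(out.clone().to("cpu"))   # offload, graph kept intact

# Stack the offloaded outputs and backpropagate through the whole batch.
loss = criterion(torch.cat(outputs), y.to("cpu"))
loss.backward()

print(model.weight.grad is not None)  # True
```

Because the clones keep their computational graphs, the gradients produced here match those of a normal full-batch forward/backward pass, which is exactly the property the text above claims existing accumulation tricks lack.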
infinitybatch
Infinitybatch is an open source solution for PyTorch that helps deep learning developers to train with bigger batch size than it could be loaded into GPU RAM through a normal PyTorch train loop.
['Richárd Ádám Vécsey Dr.', 'Axel Ország-Krisz Dr.']
[]
['python', 'pytorch']
23
10,159
https://devpost.com/software/klurdy-xr
Web based app 3d model of apparel on skinned body before GIF using AR filter on Snapchat Inspiration From our RnD, we realized that the fashion industry suffers from a number of challenges: an inconsistent supply of fabrics, and a poor online shopping experience for consumers where what you see is not what gets delivered, with no virtual fitting options. There is also the issue of products from high-end fashion brands being reproduced as counterfeits and sold online, with the consumer having no means to know what's real and what's fake. Case in point: Nike ceased selling its products on Amazon . We solve some of these problems by looking at 3D commerce for fashion, and use social media to boost the virality of products. What it does KlurdyXR is an AI-powered experience for rendering 3D graphics. We are piloting with the fashion industry by providing virtual fitting experiences on social media. The models track human poses from a camera feed in real-time and use this inference to know where to augment the apparel on top of the person. How I built it We trained 3 different models separately to perform specific tasks: single person detection in a frame, body part segmentation, and single human pose estimation in the wild. These models need to be exported in ONNX format. With the help of the Snap team, we integrated these models using SnapML features in Lens Studio, forming a machine learning pipeline that uses the device camera texture as the input. We modeled apparel sketches provided to us by a fashion designer into 3D assets, using a skinned body with an occlusion body. Fabrics were digitized using Photoshop, using 1024x1024 textures. After importing these assets into Lens Studio, we created a script for swapping materials and post-processing output from the ML pipeline to place 3D assets at the required targets on the screen. We opted to use the world space features of Lens Studio to infer the depth of a coordinate in the frame.
Challenges I ran into SnapML has a limit of 10MB on model sizes, so we had to compromise on network complexity to have mobile-first neural networks. 2D pose estimation is good but still misses the z-axis; we are working on 3D pose inference on mobile to make it better with videos. Accomplishments that I'm proud of Our team is the first on the planet to ever produce and launch apparel virtual fitting experiences on Snapchat. We are pioneers in running mobile XR fitting experiences in the fashion industry. What I learned PyTorch support for ONNX makes models available on web-based platforms. Optimization and fine-tuning of pre-trained PyTorch models. Cloud plays an important role in accelerating ML experiments. What's next for Klurdy XR Integrate models with our web app and create an experience on the mobile web. Integrate models with Unreal Engine 4 and Unity and commercialize as an AR plugin. Try it out beta.klurdy.com github.com www.snapchat.com
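Given the 10MB SnapML limit mentioned above, a quick parameter-size budget check is useful before export. The sketch below assumes float32 (4-byte) weights and uses a toy stand-in network; a real pose-estimation backbone would go in its place:

```python
import torch.nn as nn

def model_size_mb(model: nn.Module, bytes_per_param: int = 4) -> float:
    """Rough size estimate assuming float32 (4-byte) parameters."""
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * bytes_per_param / 1e6

# Stand-in network; a real mobile-first backbone goes here.
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
size = model_size_mb(net)
print(f"{size:.3f} MB, fits SnapML budget: {size < 10}")
```

This is only an estimate of weight storage; the actual ONNX file adds graph metadata, and quantization would shrink the figure further.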
Klurdy XR
Klurdy is a design studio with its own clothing line. We are working with snapchat in their ar creator residency program to make virtual fitting experiences using their camera for our clothes.
['B Washakes']
[]
[]
24
10,159
https://devpost.com/software/newsbert
NewsBERT If you want to stay up to date on technical discussions, you probably browse different sources of information like Reddit, Twitter, Medium, and various programming blogs. Inspiration In the last two years a lot of progress has been made in NLP because of transformer models. One remarkable feature of these pretrained language models is that they can be used for tasks like Zero-Shot Learning . Zero-shot learning for text mining is basically unsupervised classification where the classes are text themselves. What it does We tackle the problem of organizing information from different social media feeds in a single wall that can be sorted by topic. The app pulls articles from RSS feeds and lets the user filter the articles by topic classes. How we built it The app is built using Streamlit. We used pretrained models from the huggingface transformers and haystack libraries to extract topic scores. More precisely, we use Natural Language Inference models and construct pairs (text, "text is on {topic}") for given topics. The score gives the confidence that the text entails the sentence "text is on {topic}" for each topic. This is used as our topic match score. Our implementation uses deepset's haystack library to reduce zero-shot learning to a search problem: for each topic we find the top k documents that match the query "text is on {topic}". What's next for NewsBERT We need to research ways to get better topic scores, for example using approaches similar to those proposed in Pattern-Exploiting Training. We also want to check whether classes specified by topic names correspond to something that can be extracted using topic modeling. Built With huggingface-transformers python pytorch streamlit Try it out colab.research.google.com
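The pair-construction step described above can be sketched without the model itself; toy_scorer below is a hypothetical stand-in for the real NLI entailment score:

```python
def build_nli_pairs(text, topics):
    """Construct (premise, hypothesis) pairs for zero-shot topic scoring."""
    return [(text, f"text is on {topic}") for topic in topics]

def best_topic(text, topics, scorer):
    """Pick the topic whose hypothesis gets the highest entailment score."""
    pairs = build_nli_pairs(text, topics)
    scores = [scorer(premise, hypothesis) for premise, hypothesis in pairs]
    return topics[scores.index(max(scores))]

# Stand-in scorer: a real system would call an NLI model here and
# return the entailment probability for the (premise, hypothesis) pair.
def toy_scorer(premise, hypothesis):
    topic = hypothesis.split("text is on ")[-1]
    return sum(word == topic for word in premise.lower().split())

topics = ["python", "rust", "databases"]
print(best_topic("new python release improves python typing", topics, toy_scorer))
```

Swapping toy_scorer for an NLI model's entailment score yields exactly the zero-shot topic matching the text describes.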
NewsBERT
Using BERT & other transformer models for organizing RSS feed data
['Jakub Bartczuk', 'Grzegorz Klimek', 'Piotr Rudnicki']
[]
['huggingface-transformers', 'python', 'pytorch', 'streamlit']
25
10,159
https://devpost.com/software/vz-pytorch-a-smarter-way-to-visualize-pytorch-models
Inspiration Machine learning engineers rely heavily on visualizations. When we're learning about a new architecture in a paper or blog post, we often find ourselves immediately scanning for a model diagram to give us a sense of the network's structure and key features. When we're implementing our own models, we use automatically-generated diagrams like those in TensorBoard to diagnose bugs and identify mistakes. Visualizations are the best way to quickly and intuitively understand a neural network and bridge the gap between code and our own mental models. But current model diagramming solutions leave a lot to be desired. Existing automatic graph visualizers often fail to capture our intuitions. For instance, while every hand-designed diagram of an RNN shows it unrolled across time, layout engines like TensorBoard have no notion of time and end up producing jumbled and chaotic graphs in these common use cases. Even if we the engineers know how we want the diagram to look, we have no way to influence these automatic visualizers, and have to manually maintain a diagram with pen & paper, or PowerPoint. Neither of these solutions help bridge the gap between code and mental models, so we still find ourselves flitting between diagrams, plots, and command-line outputs, trying to remember which metrics correspond to which parts of the model. What it does VZ-PyTorch produces beautiful, intuitive neural network visualizations that unify structure, implementation, and metrics. With just a few lines of code, VZ-PyTorch can render a diagram of any PyTorch model. These diagrams can be fine-tuned by inserting unobtrusive visualization cues into your code, which tell the layout engine how to structure your diagram to match your intuition. You can also attach plots and text outputs to the diagram, embedding information about your layers and tensors within the diagram itself. 
How we built it VZ-PyTorch combines a Python library with a simple logging server and a visualization tool we developed called Vizstack. A typical usage of VZ-PyTorch looks like this: The user imports the vz-pytorch Python library into their code and specifies a PyTorch model they wish to track. vz-pytorch uses PyTorch hooks and monkeypatching to track execution of PyTorch functions and modules in a computation graph data structure. The computation graph is translated to a Vizstack directed acyclic graph layout, which is serialized and sent to a simple Node.js logging server. The logging server sends the serialized graph to any connected frontends, which render the graphs using Vizstack React components. Challenges we ran into The biggest problem we encountered was how to beautifully lay out neural networks. A key feature of neural network models is their deeply nested structure; if each node is a function call, there might be additional function calls within that, which in turn call other functions, and so on. Many graph layout libraries do not handle this well, either pretending the nested structure doesn't exist or crashing entirely. Ideal neural network diagrams have other properties which are often unsupported, like horizontal or vertical alignments of nodes, orthogonal edge routing, and different edge orientation directions at different levels of nesting. To work around the limitations of existing libraries, we had to implement our own graph layout library called Nodal that could handle these advanced use cases. Another substantial problem was how exactly to track PyTorch model execution. Module hooks which update the computation graph when executed are helpful, but fail to capture common operators like addition and transposition. We ultimately settled on an approach which combined hooks with dynamic overwrites of the functions in the torch library and on torch.Tensor , which update the computation graph when called. 
Accomplishments that we're proud of We are proud that our tool, without any special casing, is able to handle a diverse set of models, from basic feedforward networks to complex time-series RNNs and Transformers. We put a lot of work into understanding the styles and semantics of good machine learning diagrams and believe that our solution accurately captures those properties. We're also proud that the entire rendering pipeline, from the Python library to the React components to the graph layout engine, is built using our Vizstack and Nodal libraries, giving us full control and ensuring that we can tune our tool to consistently produce beautiful diagrams. The flexibility of our Vizstack tools also allowed us to implement key VZ-PyTorch features, like embedded plots, in less than an hour and with no special casing. What we learned A harsh lesson of this project was just how hard it is to get visualization right. Humans are incredibly good at synthesizing information from a diagram, but this means that even a slightly jumbled or sloppy diagram can really confuse the viewer. This is why the TensorBoard visualization falls short for virtually all the ML researchers we talked to. For our diagrams to be beautiful and useful, we spent a lot of time optimizing our graph layouts and tweaking parameters like colors, shapes, and sizes. The flipside of this lesson was experiencing how powerful a good visualization can be. When we got these parameters right, the structure of complex models like Transformers and ResNets quickly became clear, even to team members who weren't familiar with those models before. Our visualizations even helped us debug our example models; it wasn't until we saw the excess edges in the graph that we realized we were returning unnecessary vectors from our LSTM implementation. When a visualization is just right, everything clicks and becomes clear. 
What's next for VZ-Pytorch We plan to continuing improving our layout engine to make even cleaner diagrams at even larger model scales. Support for more advanced models will also require improvements to our Python library, such as enabling tracking of C function and CUDA kernel calls. We're also working on expanding our selection of out-of-the-box embedded plots, making it even easier to add useful information about tensors and layers in model diagrams. Built With python pytorch react typescript vizstack Try it out github.com
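The monkeypatching technique described in "How we built it" — overwriting functions so each call records a node in a computation graph — can be illustrated with a toy tracer. Plain Python functions stand in here for torch operations; this is not VZ-PyTorch's actual hook machinery:

```python
graph = []  # recorded (op_name, inputs, output) nodes

def traced(fn):
    """Wrap a function so every call appends a node to the graph."""
    def wrapper(*args):
        out = fn(*args)
        graph.append((fn.__name__, args, out))
        return out
    return wrapper

# In VZ-PyTorch the originals would be functions like torch.add;
# here plain Python functions stand in for them.
def add(a, b):
    return a + b

def mul(a, b):
    return a * b

add, mul = traced(add), traced(mul)

result = mul(add(1, 2), 4)   # builds a tiny two-node "computation graph"
print(result, [name for name, _, _ in graph])  # 12 ['add', 'mul']
```

The recorded node list is the raw material a layout engine like Nodal would then turn into a diagram.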
VZ-PyTorch
Visualize neural networks built with PyTorch.
['Nikhil Bhattasali', 'Ryan Holmdahl']
[]
['python', 'pytorch', 'react', 'typescript', 'vizstack']
26
10,159
https://devpost.com/software/pytorch_tiramisu
Speedup obtained using pytorch_tiramisu Tiramisu is a compiler for sparse and dense deep learning. It was created by an MIT CSAIL team. Thanks to the polyhedral model it is based on, it can apply various code optimizations, and it has proved its efficiency in optimizing deep learning operators, including RNNs and sparse neural networks. This project consists of integrating Tiramisu as a backend to PyTorch. This way, users of PyTorch will benefit from the acceleration obtained through Tiramisu's optimizations. Thanks to TorchScript, we could get the PyTorch IR and convert it to Tiramisu IR while applying the fusion operator, which turns out to be the most effective optimization. What's next for pytorch_tiramisu --> Implementation and optimization of more deep learning operators --> Complete support of Sparse Neural Networks Built With c++ pytorch tiramisu Try it out github.com
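Operator fusion, the optimization highlighted above, can be illustrated in plain Python: instead of materializing an intermediate buffer between two elementwise operators, the fused version applies both in a single pass. This is a conceptual sketch only; Tiramisu performs fusion on compiled loop nests, not Python lists:

```python
def unfused(xs):
    # Two passes: an intermediate list is materialized between the ops.
    scaled = [x * 2 for x in xs]          # op 1: scale
    return [max(s, 0) for s in scaled]    # op 2: ReLU

def fused(xs):
    # One pass: both ops applied per element, no intermediate buffer.
    return [max(x * 2, 0) for x in xs]

data = [-3, -1, 0, 2, 5]
print(unfused(data) == fused(data))  # True; same result, one fewer pass
```

On real tensors the fused loop avoids writing and re-reading the intermediate from memory, which is where the speedup comes from.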
pytorch_tiramisu
Tiramisu is a compiler for sparse and dense deep learning. We aim to integrate Tiramisu into PyTorch to benefit from the acceleration obtained in Tiramisu.
['Hadjer Benmeziane']
[]
['c++', 'pytorch', 'tiramisu']
27
10,159
https://devpost.com/software/pytorch-covid-fighter
Front Page Classification Component Comparison X-rays/CT-scans and generate reports Analysis X-rays/CT-scans and generate reports Inspiration As COVID-19 peaks, there is a shortage of health workers (especially in developing countries). So it means medical trainees and interns have to step up and understand this disease. This tool can help users in understanding and analysing chest X-rays and CT scans. It can also help users in differentiating between COVID and similar chest infections like viral pneumonia. Intended audience: medical professionals and data analysts. What it does This tool uses a pre-trained resnet18 model from torchvision to classify chest X-ray images as COVID/viral pneumonia or normal. As a medical trainee (or even a non-medical regular user) I can use this classifier to get an initial idea of a patient's image I am analysing. Then this tool can help me analyse the image and look for certain COVID-specific imaging features like ground glass opacities, white consolidations, and pavement patterns along lung peripheries. I can use image processing features (invert/zooming/contrast etc.) to better recognise such imaging features. This tool also gives me a comparison component. Using that I can compare a patient's image with other COVID/pneumonia or normal images. Such intuitive comparison can further tell me about features such as lymph nodes and lung cavities, which are present in similar chest infections but absent in COVID. This tool also provides report generation which a medical trainee can share. How I built it The classification part uses a pre-trained resnet18 model from torchvision. It is trained for 1000 ImageNet classes. I changed the last fully connected layer features from 1000 to 3 and did training on the COVID-19 radiology dataset ( https://www.kaggle.com/tawsifurrahman/covid19-radiography-database ). The accuracy reached is greater than 95% on the dataset.
For classification in the browser, I converted the PyTorch model to an ONNX model using the torch.onnx module. Then I used the onnxjs NPM package for classification. Report generation was done in the browser itself, taking care of data privacy. Image comparison uses new JavaScript APIs such as ResizeObserver. Image processing was achieved using CSS3 filters. Challenges I ran into Running the trained model in browser/Node.js environments was the biggest issue. Converting the PyTorch model to an ONNX model reduced the accuracy of the model. After a lot of searching, I failed to debug the exact issue. Initially I wanted to just run the model in the browser itself (for data privacy concerns), but the converted model was around 43MB, so it took the browser a lot of time to download. So I had to go the Node.js way. Accomplishments that I'm proud of Completing all basic functionalities of classifying, analysing and comparing. Learning a lot about ONNX and the interoperability (or lack thereof) of ML models. Learning a bit about COVID along the way and making a tool that can have a positive social impact. What I learned PyTorch, torchvision. Taking advantage of an already pre-trained model like resnet18. Standardising ML model formats using ONNX. On the front end, using Suspense with React for lazy loading components, and new JS/CSS APIs. What's next for Pytorch Covid Fighter Implement reading DICOM images. Have a classifier for CT-scans too. Try to incorporate more resources and learning material in the app, so that medical trainees/interns can level up and reduce the shortage of front-line health warriors. Ability to add more public datasets for the comparison component. Github: https://github.com/akhil-vij/pytorch-covid Model and Jupyter notebook: https://github.com/akhil-vij/pytorch-covid/tree/master/model Built With node.js onnx react torchvision Try it out pytorch-covid-fighter.herokuapp.com
Pytorch Covid Fighter
As Covid-19 peaks, there is a shortage of health care staff. Tool helps to automate classifying, studying and differentiating COVID-19 chest x-ray and CT-scans from other similar chest infections.
['Akhil Vij']
[]
['node.js', 'onnx', 'react', 'torchvision']
28
10,159
https://devpost.com/software/dbse-monitor
Object Detection Alertness Detection Emotions Detection He neutral Final Product Prototype installed Another angle DBSE Monitor: Drowsiness, Blind Spot and Emotions monitor. Drowsiness, emotions and attention monitor for driving. Also detects objects in the blind spot via CV and the NVIDIA Jetson Nano. Follow this link for direct instructions on how to run our demos for the three applications (also very cool individual video demos for the three): https://github.com/altaga/DBSE-monitor#laptop-test Remember that this is an embedded solution, so for the complete experience you'll have to build your own; you can find instructions to build it on our GitHub: https://github.com/altaga/DBSE-monitor Inspiration and Introduction We will be tackling the problem of drowsiness when performing tasks such as driving or handling heavy machinery, and the blind spot when driving, with some features on the side. But let's take this on from the beginning; we first have to state what the statistics show us: Road injury is the 8th leading cause of death worldwide: more than most cancers and on par with diabetes. This is a huge area of opportunity; let's face it, autonomy still has a long way to go. A big cause is distraction and tiredness, or what we call "drowsiness". The Centers for Disease Control and Prevention (CDC) says that 35% of American drivers sleep less than the recommended minimum of seven hours a day. It mainly affects attention when performing any task and, in the long term, it can affect health permanently. According to a report by the WHO (World Health Organization) (2), falling asleep while driving is one of the leading causes of traffic accidents.
Up to 24% of accidents are caused by falling asleep, and according to the DMV USA (Department of Motor Vehicles) (3) and NHTSA (National Highway Traffic Safety Administration) (4), 20% of accidents are related to drowsiness, being at the same level as accidents due to alcohol consumption, with sometimes even worse consequences. Also, the NHTSA mentions that being angry or in an altered state of mind can lead to more dangerous and aggressive driving (5), endangering the life of the driver due to these psychological disorders. Solution and What it does We created a system that is able to detect a person's "drowsiness level", with the aim of notifying the user about their state and whether they are fit to drive. At the same time it will measure the driver's attention, or capacity to garner attention, and whether they are falling asleep while driving. If it positively detects that state (that the driver is getting drowsy or distracted), a powerful alarm will sound with the objective of waking the driver. Additionally it will detect small vehicles and motorcycles in the automobile's blind spots. In turn, the system will have an accelerometer to generate a call to the emergency services if the car has an accident, so the emergency can be attended to quickly. Because an altered psychological state could and will generate dangerous driving, through PyTorch we will analyze the driver's facial features to determine their emotional state and play music that can generate a positive response. How we built it This is the connection diagram of the system: The brain of the project is the Jetson Nano; it takes care of running both of the PyTorch-powered computer vision applications, using a plethora of libraries in order to perform certain tasks.
The two webcams serve as the main sensors to carry out computer vision; PyTorch then performs the needed AI in order to identify faces and eyes for one application and objects for the other, and sends the proper information through MQTT in order to emit a sound or show an image on the display. As features we added geolocation and crash detection with SMS notifications, done through Twilio with an accelerometer. Notice how, depending on the task at hand, we perform different CV analysis and use different algorithms and libraries, with of course different responses or actions. The first step was naturally to create the three computer vision applications and run them on a laptop or any PC for that matter before going to an embedded computer, namely the Jetson Nano: Performing eye detection after a face is detected: And then testing object detection for the blind spot notifications on the OLED screen: After creating both of the applications it was time to make some hardware and connect everything: This is the mini-display for the object detection through the blind spot. The accelerometer for crash detection. Now here's how we perform the emotion detection: The emotion monitor uses the following libraries: OpenCV: Image processing. (OpenCV) Haarcascades implementation. (OpenCV) Face detection (Pytorch) Emotion detection VLC: Music player. The emotion detection algorithm is as follows: Detect that there is a person's face behind the wheel: Once we have detected the face, we cut it out of the image so that we can use it as input for our convolutional PyTorch network. The model is designed to detect the emotion of the face; this emotion will be saved in a variable to be used by our song player.
According to the detected emotion we will randomly play a song from one of our playlists: If the person is angry we will play a song that generates calm If the person is sad, a song for the person to be happy If the person is neutral or happy we will play some of their favorite songs Note: If the detected emotion has not changed, the playlist will continue without changing the song. The finished prototype. Because it is primarily an IoT enabled device, some of the features like the proximity indicator and the crash detector are not possible to test remotely without fabricating your own. Having said that, the PyTorch-made computer vision drowsiness and attention detector that tracks eyes and faces works on any device! Even the alarm. If you will be running it on a laptop, our GitHub provides instructions, as you need quite a few libraries. Here is the link; you just have to run the code and it works perfectly (follow the GitHub instructions): https://github.com/altaga/DBSE-monitor#laptop-test You can find step by step documentation of how to build your own fully enabled DBSE monitor on our GitHub: https://github.com/altaga/DBSE-monitor Challenges we ran into At first we wanted to run PyTorch and do the whole CV application on a Raspberry Pi 3, which is a much more available and easier platform to use. It probably was too much processing for the Raspi 3, as it wasn't able to run everything we demanded, so we upgraded to an embedded computer specialized for ML and CV applications, as it has an onboard GPU: the Nvidia Jetson Nano. With it we were able to run everything and more. Later we had a little problem of focus with certain cameras, so we had to experiment with several webcams that we had available to find one that didn't require focusing. The one we decided on is the one shown in the video. Despite its age and probably lack of resolution, it was the correct one for the job as it maintained focus on one plane instead of actively switching.
What we learned and What's next for DBSE Monitor. I would consider the product finished, as we only need a few additional touches on the industrial engineering side of things for it to be a commercial product. And also a bit on the electrical engineering side, perhaps, to use only the components we need. This is the culmination of a past project that we have completely polished to reach these heights. This one has the potential of becoming a commercially available option regarding smart cities, as the transition to autonomous or even smart vehicles will take a while in most cities. That middle ground between analog, primarily mechanical private transport and a more "smart" vehicle is a huge opportunity, as the transition will take several years and most people are not able to afford it. Thank you for reading. Built With arduino jetpack jetson-nano mqtt opencv python pytorch twilio yolo Try it out github.com
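The playlist-selection rule described above (change songs only when the detected emotion changes) could be sketched in a few lines. The class name, playlist contents, and song file names below are assumptions for illustration, not the project's actual code:

```python
import random

# Assumed playlists keyed by detected emotion (placeholder file names).
PLAYLISTS = {
    "angry":   ["calm_song_1.mp3", "calm_song_2.mp3"],    # songs that generate calm
    "sad":     ["happy_song_1.mp3", "happy_song_2.mp3"],  # songs to cheer up
    "neutral": ["favorite_1.mp3", "favorite_2.mp3"],      # driver's favorites
    "happy":   ["favorite_1.mp3", "favorite_2.mp3"],
}

class EmotionPlayer:
    def __init__(self):
        self.last_emotion = None
        self.current_song = None

    def update(self, emotion):
        # Only pick a new song when the detected emotion changes;
        # otherwise the current song keeps playing.
        if emotion != self.last_emotion:
            self.last_emotion = emotion
            self.current_song = random.choice(PLAYLISTS[emotion])
        return self.current_song
```

In the real system the returned file would be handed to the VLC player; here `update` is just called once per classified frame.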
DBSE-monitor
Drowsiness, blind spot, emotions and attention monitor for driving or handling heavy machinery. Also detects objects at the blind spot via Computer Vision powered by Pytorch and the Jetson Nano.
['Luis Eduardo Arevalo Oliver', 'Victor Alonso Altamirano Izquierdo', 'Alejandro Sánchez Gutiérrez']
[]
['arduino', 'jetpack', 'jetson-nano', 'mqtt', 'opencv', 'python', 'pytorch', 'twilio', 'yolo']
29
10,159
https://devpost.com/software/deep-virtual-try-on-cloths-powered-by-pytorch
Extreme right - final output, second from right - output with fake face Output with different pose Output Output with different pose Output Inspiration We started our journey when we tried to purchase some clothes in a nearby apparel shop during this COVID-19 pandemic. We realized that the people count was significantly low; after some research we found that people are reluctant to go to shops and try on clothes, due to the fear of getting COVID-19. Then we thought of developing an Android or mobile application which helps users try on clothes without wearing them physically. We searched YouTube and Google for similar implementations. We found exactly 2 implementations. Virtual Try-On mirror using the Microsoft Kinect sensor. Customers need to be present in front of the display, and clothes can be fitted and changed using hand gesture control. The problems with this system are that the fitting doesn't work perfectly, very high hardware cost, the customer has to wait for their turn, and it is difficult to implement. Virtual Try-On mobile application with simple drag and drop. The problem with this one is that the fitting becomes a disaster and also the UI is not user friendly. What it does In simple words, it's a RESTful API deployed as a Flask application. You can upload your full body image. After that, you can upload different upper-body clothes such as T-shirts, shirts, etc., and see how well they fit your body. How we built it We first collected the dataset for the Virtual Try-On ( http://47.100.21.47:9999/overview.php ). The images include different full-body images, their parsed images, and key_points.json (OpenPose). We trained our model by dividing the dataset into different categories and doing some data processing. We trained our network using PyTorch and saved the weights. We combined GMM (Geometric Matching Module), generator_parsing, generator_app_cpvton (clothing shape/texture preserving VTON) and generator_face. We implemented it on Google Colab as a Flask RESTful API.
Customers can upload their poses, and OpenPose estimates the pose and writes a JSON file, along with applying instance segmentation for the parse image and saving the output to separate folders. After that the customer can upload any upper-cloth image; the program first removes the background, saves the output to separate folders, and then we apply our PyTorch pre-trained model to get the result with detailed warped cloth on the customer image. In the output we get two images: one with a fake or generated face, and one with the original face. To understand the pipeline we put all the intermediate steps in the output image. It is now deployed on an AWS EC2 (p2.xlarge) GPU-powered VM instance. We also built an Android wireframe in Adobe XD; we are in the process of building the app using the RESTful API. Challenges we ran into 1. Combining GMM, generator_parsing, generator_app_cpvton (clothing shape/texture preserving VTON) and generator_face is a very hard task, since it needs a lot of optimization in the code. Deploying as a Flask application along with the OpenPose implementation needs a lot of effort, since we need to build OpenPose using Caffe and satisfy a lot of GPU driver requirements such as CUDA and cuDNN. Deploying on AWS EC2 (p2.xlarge) took more time than we expected due to protobuf conflicts in the OpenPose installation (CMakeLists.txt bash file changes and also library installation issues). Implementing the instance segmentation algorithm also took a lot of time, because we need to include specific cloth segmentation on the human body. Accomplishments that we're proud of Developing the world's first detailed Virtual Try-On for clothes powered by PyTorch. Implemented a web app for users to interact with. Built an Android wireframe and in the process of completing it.
What we learned We learned to integrate different PyTorch models into a single program. We understood the importance of PyTorch's dynamic computational graph in our project. What's next for Deep Virtual Try-On cloths powered by PyTorch An Android application where users can just take a photo of themselves and apply virtual clothing in real time. Training the model on lower-body clothes like pants, shorts, etc., and on images of men. Built With bash java python pytorch Try it out ec2-13-233-237-14.ap-south-1.compute.amazonaws.com
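A minimal sketch of how such a two-step Flask API might look. The route names and the stubbed pipeline stages are assumptions for illustration, not the project's actual code; the real stages (OpenPose, segmentation, GMM warping, CP-VTON generators) are reduced to placeholder functions here:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_pose_and_parse(image_bytes):
    # Stub for the OpenPose keypoint JSON + instance segmentation stage.
    return {"keypoints": [], "parse": "person_parse.png"}

def run_tryon(person, cloth_bytes):
    # Stub for background removal + GMM warp + CP-VTON generators,
    # which produce one result with the original face and one with
    # a generated ("fake") face.
    return {"result": "tryon_output.png", "fake_face": "tryon_fake_face.png"}

@app.route("/person", methods=["POST"])
def upload_person():
    # Step 1: customer uploads a full-body photo.
    person = run_pose_and_parse(request.data)
    return jsonify(person)

@app.route("/cloth", methods=["POST"])
def upload_cloth():
    # Step 2: customer uploads a cloth image and gets the try-on result.
    result = run_tryon(person=None, cloth_bytes=request.data)
    return jsonify(result)
```

With stubs like these, the API contract can be exercised with Flask's built-in test client before the heavy GPU pipeline is wired in.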
Deep Virtual Try On cloths powered by PyTorch
World's first API for deep virtual try-on of clothes, exclusively for pandemic recovery in the apparel industry. Powered by a powerful PyTorch deep learning model with detailed cloth warping
['Nandakishor M', 'anjali m']
[]
['bash', 'java', 'python', 'pytorch']
30
10,159
https://devpost.com/software/torchonvideo-frien3
torchOnVideo Logo torchOnVideo supports the following set of subtasks for video based deep learning Sample code for using the frame_interpolation module. Shows the simplicity and ease of use of the model. Inspiration There has been a great boom in the consumption of videos in recent years. As expected, the focus of the deep learning community towards video-based applications has also increased manifold. While brainstorming for ideas we realised that a unified library tackling the various subtasks of video-based deep learning could be a great utility for beginners and experts alike. What it does We provide an all-encompassing library that provides models, training and testing strategies, custom dataset classes, metrics, and many more utilities involving state-of-the-art papers on the various subtasks of video-based deep learning. How we built it The subtasks/subdomains of this domain were completely new for most of us, and hence the first task for us was a highly intensive literature survey in order to understand the importance, feasibility, and challenges in relation to our idea. Post this, weekly chat calls to explain and enhance our understanding of the papers, as well as code testing and review sessions, helped us to further our code in a collaborative manner. There was always a constant tussle between providing functionality and providing ease of use and flexibility. However, a lot of hard work and multiple code iterations helped us reach the current stage of the library. Challenges we ran into Understanding, debugging sources, and implementing new components of the library were highly tricky. The intricate nature of the state-of-the-art papers made us read through multiple iterations of each paper before we began actually building or extending code. Planning the actual structure of the code to enhance both the functionality and the ease of use was challenging.
We needed to ensure that the library is simple and lucid to use for a beginner as well as an expert. Accomplishments that we're proud of Created a simple and clean interface covering a good number of subtasks. It is highly modular and extendible. We hope this helps to make practice and research on deep learning on video easy and accessible to the community. Thoroughly understood the details and intricacies of the state-of-the-art papers and also raised more interest in this domain. Provided support for a good number of video-based datasets What we learned A unified library for deep learning on video is an urgent need Building for the state-of-the-art papers is challenging and hence needs more support and contribution for active development of this library. It is a highly resource-intensive domain and thus also needs intensive research on how to reduce its overheads. This will definitely spearhead the domain exponentially. There are still many more subtasks/subdomains to tackle and we are fired up to take on these challenges at the earliest. What's next for torchOnVideo Provide actual mini datasets for all essential datasets so that anyone can begin and understand their models with an even more hands-on approach - especially those with limited network and storage resources. Work on implementing even more extendible video dataset classes and loaders and figure out ways in which PyTorch's video io library can also be enhanced at the same time. Also, add the functionality of video-based samplers. Provide support for more subtasks and state-of-the-art papers to further the aim of this library. We intended to also add a video classification task, however we decided to leave it aside for the time being due to the already amazing MMAction library Build in-depth multi-GPU supporting components, which were understood over the implementation of the current papers Involve the community and seek contributions to build this into a really amazing and valuable library!
Built With python pytorch Try it out github.com
torchOnVideo
A PyTorch Library for Deep Learning on Videos
['Priyanshu Sinha', 'Shardul Parab', 'Akash Manna', 'Shambhavi Mishra']
[]
['python', 'pytorch']
31
10,159
https://devpost.com/software/torchtraining
GitHub with CI/CD pipeline Main documentation page Released on PyPI Colab introduction tutorial Part of Horovod's integration source (WIP) PyTorch docs (including shapes) Tensorboard integration (see introduction tutorial) Colab tutorial with GANs (WIP) Released on Dockerhub Part of comet.ml integration (WIP) PyTorch compatible custom losses (metrics and others as well) So you want to train neural nets with PyTorch? Here are your options: plain PyTorch - a lot of tedious work like writing metrics or for loops external frameworks - more automated in exchange for less freedom, less flexibility, lots of esoteric functions and stuff under the hood Enter torchtraining - we try to get what's best from both worlds while adding: explicitness, functional approach, easy extensions and freedom to structure your code! All of that using a single ** piping operator! Version Docs Tests Coverage Style PyPI Python PyTorch Docker LOC Tutorials See tutorials to get a grasp of what all the fuss is about: Introduction - quick tour around functionalities with CIFAR100 classification and tensorboard . GAN training - more advanced example and creating your own pipeline components. Installation See documentation for the full list of extras (e.g. installation with integrations like horovod ). To just start you can install via pip : pip install --user torchtraining Why torchtraining ? There are a lot of training libraries around for a lot of frameworks. Why would you choose this one? torchtraining fits you, not the other way around We think it's impossible to squeeze users' code into an overly strict API. We are not trying to fit everything into a single... .fit() method (or Trainer god class, see the 40! arguments in the PyTorch-Lightning trainer ). This approach has shown time and time again that it does not work for more complicated use cases, as one cannot foresee the endless possibilities of neural network training and data generation a user might require.
torchtraining gives you building blocks to calculate metrics, log results, and distribute training instead. Implement a single forward instead of 40 methods Implementing forward with a data argument is all you will ever need (okay, accumulators also need calculate , but that's it); we add a thin __call__ . Compare that to PyTorch-Lightning 's LightningModule (source code here ) training_step training_step_end training_epoch_end (repeat all the above for validation and test ) validation_end , test_end configure_sync_batchnorm configure_ddp init_ddp_connection configure_apex configure_optimizers optimizer_step optimizer_zero_grad tbptt_split_batch (?) prepare_data train_dataloader tng_dataloader test_dataloader val_dataloader This list could go on (and will probably grow even bigger as time passes). We believe in a functional approach and using only what you need (a lot of decoupled building blocks instead of gigantic god classes trying to do everything). Once again: we can't foresee the future and won't squash everything into a single class . Explicitness You are offered building blocks and it's up to you what you want to use. Still, you are explicit about everything going on in your code, for example: when, where and what to log to tensorboard when and how often to run optimization what neural network(s) go into what step what data you choose to accumulate and how often which component of your pipeline should log via loguru and how to log (e.g. to stdout and file or maybe over the web?) See the introduction tutorial to see how it's done Neural network != training We don't think your neural network source code should be polluted with training. We think it's better to have data preparation in a data.py module, optimizers in optimizers.py and so on. With torchtraining you don't have to crunch all functionalities into a single god class .
Nothing under the hood (almost) ~3000 lines of code (including comet-ml , neptune and horovod integration) and short functions/classes allow you to quickly dig into the source if you find something odd/not working. It leverages what exists instead of reinventing the wheel. PyTorch first We don't force you to jump into and out of numpy as most of the tasks can already be done in PyTorch . We are pytorch first. Unless we have to integrate a third party tool... In that case you don't pay for this feature if you don't use it! Easy integration with other tools If we don't provide an integration out of the box, you can request it via issues or make your own PR . Any code you want can almost always be integrated via the following steps: make a new module (say amazing.py ) create new classes inheriting from torchtraining.Operation implement forward for each operation which takes a single argument data which can be anything ( Tuple , List , torch.Tensor , str , whatever really) process this data in forward and return results you have your own operator compatible with ** ! Other tools integrate components by trying to squash them into their predefined APIs and/or trying to be smart and guess what the user does (which often fails). Here's how we do it: Example of integration of neptune image logging: import torchtraining as tt class Image(tt.Operation): def __init__( self, experiment, log_name: str, image_name: str = None, description: str = None, timestamp=None, ): super().__init__() self.experiment = experiment self.log_name = log_name self.image_name = image_name self.description = description self.timestamp = timestamp # Always forward some data so it can be reused def forward(self, data): self.experiment.log_image( self.log_name, data, self.image_name, self.description, self.timestamp ) return data Contributing This project is currently in its infancy and we would love to get some help from you!
You can find current ideas inside issues tagged by [DISCUSSION] (see here ). accelerators.py module for distributed training callbacks.py third party integrations (experiment handlers like comet-ml or neptune ) Also feel free to make your own feature requests and give us your thoughts in issues ! Remember: It's only the 0.0.1 version; the direction is there, but you can be sure to encounter a lot of bugs along the way at the moment Why ** as an operator? Indeed, operators like | , >> or > would be way more intuitive, but: Those are left associative and would require users to explicitly use parentheses around pipes > cannot be piped as easily Way more complicated code on our side to handle >> or | Currently ** seems like a reasonable trade-off; still, it may be subject to change in the future. Built With bash comet-ml github-workflow horovod loguru neptune python pytorch rich Try it out colab.research.google.com szymonmaszke.github.io pypi.org github.com hub.docker.com
torchtraining
All You need is `forward` and `**` operator for functional neural network training!
['Szymon Maszke']
[]
['bash', 'comet-ml', 'github-workflow', 'horovod', 'loguru', 'neptune', 'python', 'pytorch', 'rich']
32
10,159
https://devpost.com/software/billboard_next
Next week's top 10 as of Aug. 25 Inspiration After songs made famous on TikTok began to rise on Billboard’s Hot 100, we began to wonder if we could predict each week’s top 10 songs based on the previous chart-topping songs. What it does Billboard Next is a data-driven website that predicts Billboard’s top 10 chart-topping songs for the next week. Using the PyTorch machine learning library, Billboard Next generates the next hit tunes based on previous popular songs. Clicking the Spotify icon for each song opens Spotify in a new tab and begins playing the selected song. How we built it HTML , CSS , and JavaScript were used to create the front end design of Billboard Next. The design was based on Billboard’s modern website, and we wanted to mimic that aesthetic in Billboard Next. Flask was used to connect our backend model to our front end display, and our domain was hosted on Heroku . A neural network was created in PyTorch and was trained in Google Colab to predict next week’s top ten songs. This network was trained on the last 2 years of Billboard top 100 data, and takes the current and previous top 100 songs to predict next week's top 10. The billboard.py library was used to access the top 100 songs, and the Spotify Web API was used to grab Spotify links for each song. Challenges we ran into The first challenge we ran into while developing Billboard Next was hosting our site. In the past, we have hosted on repl.it , but their service lacks support for PyTorch, so we had to switch our host to Heroku, which none of us had used before. After 116 deploys, we had a working site hosted on Heroku rather than repl.it. Another challenge we faced was grabbing the Spotify URL for each song. While the billboard.py library used to give the Spotify link with every song, that feature has been deprecated for years.
We got around this by using the Client Credentials flow for the Spotify Web API , allowing us to register our app with the API for song lookup without requiring users to sign in. We verified the app in the main Flask code, and used the access key granted to look up the song links in the JavaScript code used in displaying the songs. Accomplishments that we’re proud of We’re very proud of our predictions and of linking our website to Spotify . Billboard’s website does not have the ability to play the ranked songs, and we wanted to be able to listen to the songs without an additional search. We’re also very proud of the clean, modern aesthetic that matches Billboard’s original website. What we learned This was the first time that any of us had used Flask for a website rather than Node.js , so we have all learned the basics of Flask and how to host a website using Python rather than JavaScript. This was also the first time we had hosted an app on Heroku, so we learned how to deploy an app on Heroku from a GitHub repository. Also, we learned how to use the Spotify Web API ; while some of us had tried to experiment with it before, we had never been able to implement it into a project successfully. Also, most of us did not have much experience using PyTorch, so this was a huge learning experience for all of us. What's next for Billboard Next Going forward, we hope to add an analysis of the audio files to detect what makes a song popular. While our current model relies on past Billboard data in order to predict the most popular songs for the next week, we hope to incorporate analysis of the actual song to see what aspects of the songs make them more popular. This would allow us to predict how popular newly released songs will be. Built With css3 flask heroku html5 javascript python pytorch Try it out github.com billboard-next.herokuapp.com
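A hedged sketch of the kind of chart-prediction network described above: the current and previous week's top-100 charts go in, and one score per current song comes out, with the 10 highest scores forming the predicted top 10. The layer sizes and the input encoding (one normalized feature per chart position) are assumptions, not the project's actual architecture:

```python
import torch
import torch.nn as nn

class ChartPredictor(nn.Module):
    def __init__(self, chart_size=100, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * chart_size, hidden),  # current + previous chart features
            nn.ReLU(),
            nn.Linear(hidden, chart_size),      # one score per current song
        )

    def forward(self, current, previous):
        # Concatenate both weeks' chart features into one input vector.
        x = torch.cat([current, previous], dim=-1)
        return self.net(x)

model = ChartPredictor()
current, previous = torch.rand(1, 100), torch.rand(1, 100)
scores = model(current, previous)
top10 = scores.topk(10, dim=-1).indices  # indices of the predicted top 10
```

Training such a model on two years of weekly charts then amounts to treating each week's actual top 10 as the target for the scores produced from the two preceding charts.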
Billboard Next
Predict the next top hits!
['Kevin Gauld', 'Ethan Horowitz', 'Jendy Ren', 'Christina W', 'Ryan Vaz']
[]
['css3', 'flask', 'heroku', 'html5', 'javascript', 'python', 'pytorch']
33
10,159
https://devpost.com/software/ojos
A reported event The events page The main profile page OJOS Introduction At some point in our lives, most of us will become carers to our elderly parents. Unfortunately, due to our busy lifestyles, we can not always be there to attend to them. To address this problem, many choose to install security cameras in their parents’ homes. This way they can instantly check in on their parents, even multiple times every day. Sadly, this solution is far from being perfect: There is practically zero chance to catch a critical event, such as falling or violence, at the moment it happens. In addition, in the cases of these events, response time is a critical factor, and a delayed response might cost lives. Fortunately, today’s technology allows us to create a solution. OJOS (“eyes” in Spanish) is a system that connects to cameras at parents’ homes, and provides carers with real-time alerts about dangerous events using computer vision. This solution provides 24/7 guard, significantly shortens the reaction time to dangerous events, and can potentially save lives. Moreover, by analyzing video over longer periods of time, OJOS can spot deteriorations in the elderly’s behavior. Today, this information is unobtainable without constant human observation. OJOS is a submission to the Web and Mobile Applications powered by PyTorch category of the hackathon. Watch the project video submission Screenshots The user profile page Here, the user can edit their data, and their cameras: The events page Here, the user can view all the events the system recognized Reporting events In case of wrong classification, the user can report an event. This way, we can retrain the models and achieve better accuracy as we progress. Tech Stack Our project relies on multiple open source software and external services. 
Web: Django, PostgreSQL. ML and image processing: PyTorch, MLflow, albumentations, OpenCV, the face_recognition lib, sklearn, numpy, pandas, matplotlib. Tools: youtube-dl, FFmpeg. External: Mailgun, Twilio (future integration), Android Virtual Device, AWS. Connecting to the IP cameras turned out to be a big challenge. After trying many different options, we solved it by running Android emulators on EC2 machines, capturing the video from the screen, and streaming it to the classification system. We used neu.ro to train our human position models: first we trained on UCF101, a public dataset of human actions, and then applied transfer learning on fall & position datasets: Fall Detection Dataset , UR Fall Detection Dataset , Multiple cameras fall dataset , Fall detection Dataset (#2) . Code All the code can be found under the organization @myojos on GitHub. A more technical analysis can be found here . Repository / Description: ojos-user-website: the user-facing web app; ojos-notifications-system: code responsible for sending immediate alerts and daily reports (containing all the events that happened throughout the day); aws_config: configuration and shell commands for the AWS instance; StickFigureMode: repository for the code that helps us preserve privacy by replacing humans with stick figures; fall-detection: repository with the main content: models, notebooks, etc. URL The system is up and running on myojos.tech Team We started working on this project one month before the hackathon. The core PyTorch-based part happened during the hackathon period. Name / Email: Jonathan Harel, harelj6@gmail.com; Khaled Fadel, khaledkee0@gmail.com. Built With android-emulator django ffmpeg mlflow opencv pandas python pytorch torchvision Try it out myojos.tech
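The real-time alerting described above ultimately comes down to thresholding per-frame model predictions; here is a minimal pure-Python sketch of one plausible decision rule (the function name, threshold, window size, and hit count are all illustrative assumptions, not OJOS's actual logic):

```python
def should_alert(fall_probs, threshold=0.8, window=5, min_hits=3):
    """Alert if at least min_hits of the last `window` per-frame fall
    probabilities exceed the threshold (smooths out single-frame noise)."""
    recent = fall_probs[-window:]
    return sum(p >= threshold for p in recent) >= min_hits

# A brief one-frame spike is ignored; a sustained fall signal triggers an alert.
print(should_alert([0.1, 0.9, 0.1, 0.1, 0.2]))   # → False
print(should_alert([0.2, 0.9, 0.95, 0.85, 0.9]))  # → True
```

Smoothing over a window like this trades a fraction of a second of latency for far fewer false alarms, which matters when every alert pages a carer.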
OJOS
OJOS is a system that connects to cameras at parents’ homes, and provides carers with real-time alerts about dangerous events using computer vision.
['Jonathan Harel', 'Khaled KEE Fadel']
[]
['android-emulator', 'django', 'ffmpeg', 'mlflow', 'opencv', 'pandas', 'python', 'pytorch', 'torchvision']
34
10,159
https://devpost.com/software/draft-zypejl
GIF GIF Sliver Maestro Inspiration A human learns how to draw with simple shapes and sketching. At first, we just try to copy an image by following it pixel by pixel, and we don’t need a demonstration or hard-coded drawing steps to achieve this. However, for robots this is not the case, and we would like to democratize art by enabling self-learning for robots. What it does Sliver Maestro is a simulated artistic robot, and its expertise is doodling! Sliver Maestro sketches from just one look at an image, experiencing how a human would draw it. How we built it We used DeepMind’s Deep Recurrent Attentive Writer (DRAW) model and Quick, Draw!'s simplified binary dataset to generate sequential images and extract stroke movements. The DRAW network is a recurrent autoencoder that uses attention mechanisms. The attention mechanism focuses on a small part of the input data at each time step and iteratively generates an image that is closer to the original. The network is trained with stochastic gradient descent, and the objective function is a variational upper bound on the log likelihood of the data. The advantage of DRAW over other image generation approaches is that the model generates the entire scene by reconstructing images step by step, where parts of a scene are created independently from others and approximate sketches are successively refined. The Quick, Draw! dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. Files are simplified 28x28 grayscale bitmaps in numpy format. Our experimental results of the DRAW model on the Quick, Draw! dataset can be accessed from here . We simulated drawings by using both PyGame and a robot simulation environment, the CoppeliaSim simulator with the Baxter robot model and the Remote API for Python. In post-processing, we first convert the outputs into binary images and then into SVG files. 
We use an SVG parser to convert the output into coordinates, and Sliver Maestro draws the generated images with the successive refinements provided by the model. This step-by-step refinement is shown in the animation in the image gallery; an illustration of DRAW's refinements, obtained by subtracting consecutive images, is also given there. Challenges we ran into Reconstructing images whose parts are created independently, in a way that the robotic arm can follow the lines and draw the image smoothly. Humans tend to draw with a combination of multiple strokes, even for a simple doodle. So, we had to translate images generated by multiple strokes into a trajectory that is achievable in only one stroke. Accomplishments that we're proud of We are proud of successfully applying our skills to different platforms and integrating them. It was good practice for us to integrate a computer vision project with a robotic simulation. We are also proud of Baxter for being so easygoing and patient in our simulation trials. What we learned The SVG format is very useful! What's next for Sliver Maestro We will experiment with DeepMind’s SPIRAL model and geometric approaches such as Riemannian manifolds. Built With pytorch v-rep
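The SVG-to-coordinates step mentioned above can be sketched in plain Python. This is a minimal illustration for simple absolute "M/L" paths only (the project uses a full SVG parser; the path grammar handled here is an assumption):

```python
import re

def svg_path_to_points(d):
    """Parse a simple SVG path string ("M x y L x y ...") into (x, y) tuples
    that a robot arm could traverse as one stroke."""
    tokens = re.findall(r"[ML]|-?\d+(?:\.\d+)?", d)
    points, i = [], 0
    while i < len(tokens):
        if tokens[i] in ("M", "L"):  # skip the command letter
            i += 1
        x, y = float(tokens[i]), float(tokens[i + 1])
        points.append((x, y))
        i += 2
    return points

print(svg_path_to_points("M 0 0 L 10 5 L 20 5"))
# → [(0.0, 0.0), (10.0, 5.0), (20.0, 5.0)]
```

A real SVG path also allows relative commands and curves (`l`, `C`, `Q`, ...), which is why a proper parser library is the right tool in practice.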
Sliver Maestro
Sliver Maestro is a simulated artistic robot and its expertise is doodling!
['Mine Melodi Caliskan', 'ozgurpolat Polat']
[]
['pytorch', 'v-rep']
35
10,159
https://devpost.com/software/nkiruka
Inspiration Sometimes we can't ask someone to take pictures for us, and selfies might not be the best option, so we wanted to create an app that allows users to take pictures freely without the need to ask someone else or worry about bad angles. What it does This is an app that recognizes keywords via a voice recognition model and takes a picture on the mobile device for users. How we built it We trained the deep learning model using PyTorch libraries and the built-in speech-commands dataset in torchaudio. The app development was programmed in JavaScript using Android libraries. Challenges we ran into During app development, we had problems with continuously sending data from the microphone to the model for voice recognition. The app crashed and showed an "unknown" error. After model training, we serialized the model to be used in the app. However, PyTorch Mobile only works with PyTorch version 3 and our model was developed in PyTorch version 2. Accomplishments that we're proud of This is a team of 3 members, and we were not able to meet in person to discuss and work together on this project due to the COVID situation. However, we managed to communicate effectively online and worked remotely together on this project. Meanwhile, all team members have jobs and other responsibilities outside of this project. What we learned During this project's development, we learned to be responsible for our tasks and to update the team on any upcoming changes. We learned some aspects of software project management that we had not learned before during this COVID situation. What's next for nkiruka Next, we hope to fix the PyTorch version issue so that the app can use the voice recognition model. After fully implementing the English model, we hope to include voice recognition models in other languages. Eventually, we hope to include gesture recognition for those who have speaking disabilities and for situations when voice recognition is not ideal. 
Built With android javascript python pytorch torchaudio
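The recognition step the app performs, picking a keyword from the model's output scores, reduces to a softmax plus a confidence threshold. A stdlib-only sketch (the label set and threshold are illustrative assumptions; the real app feeds microphone audio through the trained torchaudio model to get the logits):

```python
import math

LABELS = ["go", "stop", "cheese", "unknown"]  # illustrative keyword set

def recognize_keyword(logits, labels=LABELS, threshold=0.6):
    """Softmax the model's raw scores and return the keyword,
    or None if the model is not confident enough to trigger the shutter."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(labels)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else None

print(recognize_keyword([0.1, 0.2, 9.0, 0.3]))  # → cheese
print(recognize_keyword([1.0, 1.0, 1.0, 1.0]))  # → None
```

Returning None on low confidence is what keeps background chatter from firing the camera.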
nkiruka
nkiruka is a voice recognition camera application that allows users to take pictures by using keywords.
['Tristan Hilbert', 'Jayee Li', 'lyubogankov']
[]
['android', 'javascript', 'python', 'pytorch', 'torchaudio']
36
10,159
https://devpost.com/software/shakespearean-poem-generation-for-greeting-cards
Home Screen of the website Poem generated by char-RNN displayed on gift card Inspiration The idea behind the project was inspired by the works of William Shakespeare. A noteworthy poem, which, though not a sonnet, led us to explore this domain, is the "Quality of Mercy" speech from The Merchant of Venice. Once we were introduced to the sonnets he wrote, we were fascinated by their interesting rhyme scheme and structure. We wanted to replicate this using our knowledge of computer science. What it does The application consists of two parts: a char-RNN and a web interface. The char-RNN uses an LSTM architecture to model the probability of a character appearing given the previous N characters. This concept was used to generate poems of various lengths with sonnets as reference. The web interface provides the user a platform to interact with the RNN model to generate interesting poems, and displays them on a greeting card. The objective of the interface is to act as a prototype application for the RNN. How we built it The char-RNN was built using PyTorch in the Google Colab environment. Various model architectures and hyperparameters were tested, and the weights of the best outcomes were stored as .pth files. In the end the best model was chosen for the web application. The web application was developed using the ReactJS framework, which is considered one of the best for developing single-page applications. Heroku was used to host both the API and the web application. Challenges we ran into The challenges we ran into while making this project were: The neural network plateauing at a particular accuracy value. Deciding whether punctuation should be handled by the model or not. Training the model for a long time was difficult as hardware resources were limited. Choosing the optimal PyTorch version such that the application can run without errors while accounting for memory constraints. 
Reducing the response time between the API and the web app as much as possible. Accomplishments that we're proud of Implementing a char-RNN which can generate understandable poems on the spot for a given user input. Implementing a web application that can interact with the NN model and present the user with the result. Hosting an application on the internet for the first time. What we learned How to create an optimal RNN which can understand the given input. Avoiding vanishing gradient problems by trying out different architectures such as LSTM. Hosting a web application and establishing communication between the API and the web application. What's next for Shakespearean Poem Generation for Greeting Cards Reducing the response time between the API and the web application. Improving the web application to enable users to order the gift cards generated. Improving the char-RNN to make more meaningful poems by applying NLP concepts. Built With css flask github heroku html javascript python pytorch react Try it out anastasia-sonnet.herokuapp.com github.com
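At generation time, a char-RNN like the one described above turns its per-character scores into one sampled next character. A minimal pure-Python sketch of temperature sampling (the function name and temperature value are illustrative, not the project's actual code):

```python
import math
import random

def sample_char(logits, temperature=0.8, rng=random):
    """Sample the next character index from raw char-RNN scores.
    Lower temperature gives safer, more repetitive text;
    higher temperature gives more surprising (and error-prone) text."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):                # inverse-CDF sampling
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

vocab = ["e", "t", "a"]
print(vocab[sample_char([2.0, 1.0, 0.5])])
```

Sweeping the temperature is the usual knob for making a sonnet generator sound either faithful to its training text or more inventive.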
Shakespearean Poem Generation for Greeting Cards
Greeting/gift cards are something personal. However, most people just fill them with "Happy Birthday" etc. We have added flavor to them by creating an RNN poem generator with Shakespearean sonnets as reference.
['Aakash Ezhilan', 'Adityan Sunil Kumar']
[]
['css', 'flask', 'github', 'heroku', 'html', 'javascript', 'python', 'pytorch', 'react']
37
10,159
https://devpost.com/software/dermi
Dermi focuses on helping the community through early diagnosis of skin diseases, so that treatment can happen sooner, reducing deaths, and is less expensive (reducing the burden on systems such as Medicare). Inspiration Many people die or become disabled every year from skin diseases. If early detection can solve the issue, we need to make it more available. I aim to do that using deep learning. What it does Suggests a skin disease from an image. How I built it First I collected a dataset using a web scraper. Then I created a ResNet50 model using PyTorch. Afterwards, I created a Flask server for inference and used ngrok to make it publicly available. I created an Android app using Kotlin, with Room for the on-device database. Note: a long press on a diagnosis allows you to delete it. Accomplishments that I'm proud of I am proud of the application and the speed at which I built it. What I learned How to create an ngrok server on Colab itself. What's next for Dermi A larger dataset. Using several models to create the most accurate diagnosis. The current app uses 1 model; however, I previously created another model with another dataset, which I chose not to use for this hackathon due to the time restriction. Using other available datasets (this app can integrate with other models, making it extensible). An iOS application. Built With android fast.ai kotlin pytorch room Try it out github.com
Dermi (rash edition)
Simple but powerful skin rash classifier.
['Vijay Daita']
[]
['android', 'fast.ai', 'kotlin', 'pytorch', 'room']
38
10,159
https://devpost.com/software/pylaymon-a-layer-monitoring-package-for-neural-networks
GIF Feature Map Monitoring using PyLaymon. Source code on Github How to use PyLaymon Documentation PyPI Inspiration When we train a neural network it is not very intuitive to understand what the network is learning underneath, or how each layer represents the data it is trained on. One way to better comprehend the network is to visualise the feature maps. So, if we can visualise the layers/feature maps during the training process, we can get a clearer picture of what the network is learning and which layer parameters need to be fine-tuned. Furthermore, by peeking at the feature maps of various layers we can get a notion of whether a layer is biased toward some set of examples or not. Hence a package that can help envision the feature maps of the desired layers of a model can help us train our network more efficiently. What it does PyLaymon is a Python-based package used to visualize the feature maps of a given model or of a set of layers of a model. It consists of a feature map monitor that maintains a list of layer observer objects (layers whose feature maps need to be visualised). The monitor notifies these layer observers to update their display objects when the state of the activation maps corresponding to each of the layers changes. One can refer to the documentation page of the project to view more details about the package and how to use it in a project. How I built it The package has been built in Python, using the PyTorch framework to create hooks into the layers that are being monitored/visualised. It uses matplotlib to display the feature maps, although one can create custom display classes of their choice and hook them into the corresponding layer observer. The project maintains documentation that is hosted using Read the Docs. The documentation follows the Sphinx format with automatic API/class documentation. 
The package is versioned and maintained on PyPI. The project also follows coding best practices and uses the flake8 and black libraries as pre-commit hooks. Challenges I ran into The main challenge of the project was to define an architecture through which one can easily add or remove layers that need to be monitored or visualised. Also, the architecture should be abstracted to a level such that a developer can define custom displays, monitors, or observers. Finally, creating a display to visualise the feature maps using matplotlib was quite challenging, as one has to maintain a mapping of which display/plot holds data for which layer, and update those plots when new data arrives. Accomplishments that I'm proud of Designing, developing, and deploying an end-to-end solution to a problem in a mere 20 days is my proudest accomplishment. What I learned I learned a lot in the process, from exploring different design patterns to understanding PyTorch's internal code, and finally building, hosting, and documenting a PyPI package from scratch. What's next for PyLaymon The journey does not end here; it's just the beginning for PyLaymon. I have already started drafting the roadmap for the next version. For example, the updates to the plots in the current version happen synchronously, which affects the training time of the model. In the future release, these updates will be async, with more inbuilt displays and monitors to capture other aspects of the network, such as the total loss or the loss at each layer. Also, I have made the project open-source, so contributions are most welcome. Built With github matplotlib numpy python python-package-index pytorch read-the-docs Try it out pypi.org github.com laymon.readthedocs.io docs.google.com
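The monitor/observer architecture described above can be sketched, framework-free, in a few lines of Python. Class and method names here are illustrative, not PyLaymon's actual API; in the real package the `notify` step would be driven by a PyTorch forward hook registered on each monitored layer:

```python
class LayerObserver:
    """Holds the latest feature map for one monitored layer."""
    def __init__(self, name):
        self.name = name
        self.latest = None

    def update(self, feature_map):
        self.latest = feature_map  # a display class would redraw its plot here

class FeatureMapMonitor:
    """Notifies registered observers when a layer's activations change."""
    def __init__(self):
        self._observers = {}

    def attach(self, observer):
        self._observers[observer.name] = observer

    def detach(self, name):
        self._observers.pop(name, None)

    def notify(self, name, feature_map):
        if name in self._observers:
            self._observers[name].update(feature_map)

monitor = FeatureMapMonitor()
conv1 = LayerObserver("conv1")
monitor.attach(conv1)
monitor.notify("conv1", [[0.1, 0.9]])  # in practice, called from a forward hook
print(conv1.latest)  # → [[0.1, 0.9]]
```

Keeping the monitor ignorant of how observers render is what allows swapping the matplotlib display for a custom one without touching the hook plumbing.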
PyLaymon
A python based package for monitoring and visualising the layers of a neural network.
['Shubham Gupta']
[]
['github', 'matplotlib', 'numpy', 'python', 'python-package-index', 'pytorch', 'read-the-docs']
39
10,159
https://devpost.com/software/exercise-exe
Inspiration During this COVID period, social distancing must be followed, which made outdoor activities nearly impossible. But what if you could exercise at home and get rewarded for it? We know daily exercise is important because regular physical activity can improve our muscle strength and boost our endurance. Exercise delivers oxygen and nutrients to our tissues and helps our cardiovascular system work more efficiently. And when our heart and lung health improve, we have more energy to tackle daily chores. We wanted to promote yoga during lockdown. What it does It's as simple as child's play. Just record a video of yourself doing yoga (at most 5 minutes) and upload it to the web app; it will detect all the poses/activities you did and reward you accordingly. At the end of the month, you can see yourself on the leaderboard. 🥳🥳🥳🥳🥳 How we built it We built it using PyTorch and Flask. PyTorch was used to train the CNN model to detect different postures. Challenges we ran into 1) Finding an appropriate dataset. 2) Deploying the model as a web app. Accomplishments that we're proud of 1) We started learning deep learning only one month ago, and we are proud that we built a model from scratch with acceptable accuracy. We faced many problems in the deployment phase, but we will get past them really soon. 💪💪💪💪 What we learned It was a great experience working with PyTorch and we look forward to completing our project and deploying it as a service. What's next for Exercise.exe Deployment as a web app and an Android app. Built With colab flask python pytorch Try it out github.com
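The reward step, turning the CNN's per-frame pose predictions into points, could look like the sketch below. The pose names and point values are made-up placeholders, not the app's actual scoring table:

```python
from collections import Counter

POINTS = {"tree_pose": 10, "warrior_pose": 15, "cobra_pose": 12}  # assumed values

def score_session(frame_predictions):
    """Award points once per distinct pose detected across the video's frames."""
    detected = Counter(frame_predictions)
    return sum(POINTS.get(pose, 0) for pose in detected)

print(score_session(["tree_pose", "tree_pose", "cobra_pose"]))  # → 22
```

Scoring distinct poses rather than frames keeps users from farming points by holding one easy pose for five minutes.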
Exercise.exe
Creating a web interface which rewards you when you exercise... 😋😋😋
['Khushhal Reddy', 'Aryamaan Srivastava 18BIT0215']
[]
['colab', 'flask', 'python', 'pytorch']
40
10,159
https://devpost.com/software/class-summarization
Inspiration Students can learn the contents of a lecture without watching it from beginning to end, by reading a brief summary of the contents. What it does Transcribes voice to text and summarizes the text. How we built it The web page was created with Flask. Speech recognition uses PocketSphinx. To summarize paragraphs, bert-extractive-summarizer is used. Challenges we ran into Deploying the code. Accomplishments that we're proud of This was our first time participating in a hackathon, and we made fully working software. What we learned How to use PyTorch. What's next for Class Summarization A better speech recognition engine. Built With anaconda bert-extractive-summarizer chrome css flask html javascript jquery pocketsphinx python speechrecognition torch windows-10 Try it out github.com
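Extractive summarization of the kind described above keeps the most important sentences of the transcript verbatim. bert-extractive-summarizer ranks sentences with BERT embeddings; the same idea can be illustrated with a much simpler word-frequency scorer in plain Python (a conceptual sketch only, not the library's method):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by the summed corpus frequency of its words,
    then keep the top n sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"\w+", sentences[i].lower())),
    )
    keep = sorted(scored[:n_sentences])  # restore document order
    return " ".join(sentences[i] for i in keep)

print(extractive_summary("Cats sleep. Cats eat fish. Dogs bark."))  # → Cats eat fish.
```

The BERT-based version replaces the frequency score with embedding-space centrality, but the select-and-reorder skeleton is the same.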
CS (Class Summarization)
Transcribes a WAV file to text and makes a summary of the contents using a summarization method.
['Moon Cody', 'Dongho Kang']
[]
['anaconda', 'bert-extractive-summarizer', 'chrome', 'css', 'flask', 'html', 'javascript', 'jquery', 'pocketsphinx', 'python', 'speechrecognition', 'torch', 'windows-10']
41
10,159
https://devpost.com/software/pied-paper
Front page Data page Tech stack pipeline Inspiration https://www.axios.com/americans-fake-news-problem-terrorism-da565b6c-6ab3-42a1-ae08-3400d68ab99c.html It is to no one’s surprise that fake news is rampant across the internet. Thankfully, there has been increasing interest in thwarting this wave of misinformation with the help of machine learning. Kaggle competitions on fake news detection have displayed impressive accuracy on the test datasets. However, it is without a doubt very difficult to detect fake news at inference time and label it as such. Also, this raises the question of thresholds. If the model’s softmax yields a value that is somewhere in the middle, can we confidently say to people that a certain news article is fake? Furthermore, can we provide a label to people about certain news articles without really fact-checking the content? That’s why we shifted our approach to do something no one has ever done: gamify news itself. What it does Our website aggregates news from various media sites and uses a PyTorch-based neural net model to classify articles as fake or real. This model is trained on a fake/real news dataset obtained from Kaggle. The model’s prediction is shown to the user, and user input is also taken to measure users’ agreement with the model. Articles can be sorted by genre and date. I think this revolutionizes how we view news in two ways. Firstly, we are stepping away from personalizing data to give a more objective view of how an individual may approach new information. With Pied Paper you are not simply given true news. We are giving the user the power to decide whether an article is true or fake based on their interpretation, AND then giving them feedback. Even then we are not telling the users that they are wrong or right. We are simply displaying information about how other people think and what machines think. 
This way I think we can more naturally engage users to become more thoughtful in how they approach new information. Secondly, we are incentivizing users to read the news more carefully and thoroughly because we have essentially gamified news. At the end of every article, you are tested on your objectivity and compared with everyone else. Furthermore, I think the data obtained from users can be fed back into the ML dataset as some sort of bias to help build a better model that can detect fake news at inference time. How I built it Model We used TorchText for language preprocessing (padding news articles and such). Then, using AWS SageMaker, we hosted the .pth on an endpoint which can be accessed by AWS API Gateway through AWS Lambda. Backend We used Express.js along with Node to orchestrate feeding and fetching articles from the AWS SageMaker endpoint. We used Postgres to store the retrieved data in SQL format so that categorization and searching were made much quicker. Frontend We used React JS to design the user interface, Redux to handle state management, and Material UI for CSS theming. NewsAPI was used to fetch the news itself. Challenges I ran into Optimizing the model. Designing the front-end to give a feeling of choice to users. Using AWS SageMaker to host a PyTorch endpoint. Accomplishments that I'm proud of We have a pretty robust prototype running at piedpaper.net! Tell us how you feel about the website and interact with it as much as possible. What I learned We learned as a team how to integrate an existing PyTorch model into something that can be used at inference time to create a hands-on machine learning experience. We also learned that ON AVERAGE, Fox News seems to be the news outlet with most of its news labeled as 'fake'. What's next for Pied Paper Create a better data visualization and metric for the true/false values of the users (make it more interactive!). Improve our machine learning model for better accuracy, and perhaps use our user input as some kind of bias in the algorithm. Create a dedicated user experience by allowing users to log in and record their scores. Built With materialui newsapi pytorch react sagemaker torchtext Try it out piedpaper.net www.kaggle.com
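The preprocessing step mentioned above, padding variable-length articles so a batch can be stacked for the classifier (the job TorchText does in the project), boils down to the following stdlib-only sketch (token and pad values are illustrative):

```python
def pad_batch(token_lists, pad_token="<pad>", max_len=None):
    """Truncate or pad each tokenized article to a common length so the
    whole batch can be stacked into one tensor for the model."""
    max_len = max_len or max(len(t) for t in token_lists)
    return [t[:max_len] + [pad_token] * (max_len - len(t)) for t in token_lists]

batch = pad_batch([["fake", "news"], ["totally", "real", "story"]])
print(batch)  # → [['fake', 'news', '<pad>'], ['totally', 'real', 'story']]
```

In the real pipeline the pad token maps to an index the embedding layer knows to ignore, so padding changes tensor shapes without changing predictions.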
Pied Paper
Creating a better aggregate news channel with the power of text analysis.
['Minyoung Na', 'Paul Kang', 'Donghyun Park']
[]
['materialui', 'newsapi', 'pytorch', 'react', 'sagemaker', 'torchtext']
42
10,159
https://devpost.com/software/nn-gabler
Inspiration The TensorFlow Playground project. What it does Generates PyTorch scripts based on inputs. How I built it With PyTorch and Django. What's next for NN-Gabler A better graphical builder. Built With django pytorch Try it out github.com
NN-Gabler
An interface that helps people generate neural network scripts based on inputs.
['JinelleH Hou', 'Gerald.Shen', 'hy-chen CHEN']
[]
['django', 'pytorch']
43
10,159
https://devpost.com/software/stockroom
stockroom logo listing data, model and experiments Adding data to stockroom with cli Inspiration Version control for machine learning is not a solved problem, although there have been several attempts. The major challenge is that software 2.0 consists of different components such as code, model, data, and experiment parameters. All the existing attempts that we know of take a similar approach to version controlling ML, but come with different APIs or different features as their USP. We realized the need for a completely different, radical enough approach that could change the way the versioning story is written. What it does Versioning data/model/experiment is important. Let's look at some of the challenges: These entities can be huge in size; copying and deleting them on each git checkout is just not efficient. Time-traveling through the commit history is extremely hard, since that involves moving GBs/TBs/PBs of data between folders (something like what git does - moving files from .git to the root of the repo). You'd need to version your code, so it's ideal if your version control system, like git, can go hand in hand with the version controlling system for ML. Model parameters are tensors under the hood; storing them as tensors opens the possibility of easily analyzing them. It's very often the case that the whole dataset does not fit on an end system like a laptop; the ability to partially fetch the data is important. It is very likely that you run a lot of experiments and save model parameters multiple times; if somebody needs to fetch all of these experiments only to use the last successful, optimized one, it's a pity. The ability to clone part of a repository is important. Direct integration with common frameworks saves time, for experts and for beginners. 
Collaboration is key, but it's painful if every collaborator needs to have the complete repository with them to add more data, a new model, or some experiment parameters. Any modern ML versioning system that relies on traditional version controlling software has these underlying limitations. With stockroom (through hangar) we introduce a new approach. We keep the data in a common format in a backend and make it accessible only with our Python APIs. Your repository also keeps the metadata about your data (including the content hash), so that another person can clone only the metadata (a few MBs in size) and start collaborating without fetching the whole repository. You will also have the ability to fetch data by name instead of cloning the whole repository. PS: Stockroom is built on top of hangar. How we built it Stockroom, right now, is only able to take tensors, ints, floats, or strings as input. We have built the whole platform on top of hangar, which is optimized for tensor storage. We have made three user-interfacing classes (we call them shelves): Data, Model, Experiment. Each shelf knows what data it will get and chooses the right optimization strategy in the backend. We have made the Python API dictionary-like so that it doesn't introduce a learning curve. Data Dealing with data in the stockroom is done through the Data shelf.

```python
from stockroom import StockRoom, make_torch_dataset

stock = StockRoom()
image = stock.data['image']
label = stock.data['label']
image[0].shape  # something like (3, 224, 224)
label[0].shape  # almost always ()
dataset = make_torch_dataset([image, label])
```

But how would we add data into the stockroom? You can use the same APIs, but it's ideal to use the importers, since we do some more optimization to increase the data saving speed. We have built some of the PyTorch ecosystem datasets into stockroom (more on the way), and they are available through the stock CLI. 
This is how you would import cifar10 into stockroom from torchvision:

```
~$ stock import torchvision.cifar10
```

Model The model shelf is designed to take the parameters returned from a state_dict() call. StockRoom takes each layer and saves it as a tensor; metadata for these layers is saved separately, and we build the state dict back when it is accessed. Stockroom's storage is content-addressed, so if you are training only a few of the layers and the other layers are frozen, we are able to save only the changed parameters from your state_dict and keep a reference to the unchanged parameters.

```python
stock = StockRoom(enable_write=True)
mymodel = get_model()
# training
stock.model['resnet'] = mymodel.state_dict()

# loading from checkpoint
newmodel = get_model()
newmodel.load_state_dict(stock.model['resnet'])
```

Experiment This shelf is for storing your hyperparameters and other details you'd like to track as part of your training. We wanted to make it take activations from intermediate layers as well, if you wish to save them, but we couldn't do that for the hackathon.

```python
stock = StockRoom()
# training loop
with stock.enable_write():
    stock.experiment['lr'] = lr
```

Challenges we ran into The major hurdle for any storage system is to keep the data consistent and free from corruption. Thus hangar allows only one write-enabled object to be created. Working with this limitation while keeping a sane and easy UI (in Python) was an extremely difficult UX problem, because we want to hand readers to multiprocessed dataloaders while keeping them from writing into the storage from multiple processes. Dealing with different types of datasets was another big concern. Different datasets have different properties, and hence having a proper way of interacting with data without introducing complexities was tough. Accomplishments that we are proud of Stockroom itself is a proud accomplishment for all of us. 
It's efficient, it's easy, it fits into users' coding style, and the user doesn't need to learn another tool to start using stockroom. We are also planning a stockroom universe where we could make utilities, like finding data leaks or visualizing weights, that go well with stockroom's API. What we learned The way PyTorch users interact with data and models was invaluable knowledge that we gained by talking to different people. Building a tool like stockroom teaches you a ton: storage optimization, API design, etc. And it exposes a thousand different problems that you'd otherwise never see. For instance, what if you have 100 TB of data and you are using 1000 nodes for a distributed training setup? You'd be downloading this huge dataset to all the shards/nodes. But what if stockroom's partial cloning could help you here and download only what is required for each node, on the fly? It's possible with stockroom, and we would never have thought about a possibility like this if it weren't for stockroom. What's next for stockroom We are excited. We have a laid-out roadmap in our wiki with a few things that we want to build soon. But in general, we want to keep working on stockroom and make it a super lightweight, easy-to-learn, but highly efficient version controlling system that is tightly coupled to PyTorch and its ecosystem tools. Built With grpc hangar hdf5 numpy python pytorch Try it out github.com stockroom.page github.com
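The content-addressed storage idea described above, which lets stockroom skip re-saving frozen layers, can be sketched with stdlib hashing. This is a toy illustration only (hangar's actual backend is far more involved, and the class here is invented for the example):

```python
import hashlib
import pickle

class ContentStore:
    """Toy content-addressed store: identical payloads are kept only once."""
    def __init__(self):
        self._blobs = {}

    def put(self, obj):
        data = pickle.dumps(obj)
        digest = hashlib.sha256(data).hexdigest()  # the content IS the address
        self._blobs.setdefault(digest, data)       # no-op if already stored
        return digest

    def get(self, digest):
        return pickle.loads(self._blobs[digest])

store = ContentStore()
h1 = store.put([0.5, 0.25])  # "frozen layer" saved in one commit
h2 = store.put([0.5, 0.25])  # saved again in the next commit...
print(h1 == h2, len(store._blobs))  # → True 1  (...but stored only once)
```

Because commits hold digests rather than copies, checking out an old commit is a metadata operation, not a multi-gigabyte file move.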
stockroom
Version control for software 2.0 that's built around PyTorch
['Sherin Thomas', 'Jithin James', 'Rick Izzo']
[]
['grpc', 'hangar', 'hdf5', 'numpy', 'python', 'pytorch']
44
10,159
https://devpost.com/software/butterfly-park
Move the hand in front of the camera to control the virtual hand Butterfly Park scene Pipeline of the project Palm Detector with keypoints Inspiration The Butterfly Park game is inspired by Augmented Reality. The perception pipeline is inspired by the mediapipe framework, in which multiple ML models are pipelined together to get the best out of an ML application. What it does In this project a virtual hand is controlled by moving the real hand in front of the camera. Clicking the butterfly button creates a flower on the hand, which attracts the butterfly towards it. We can then move the virtual hand with real hand movement. How I built it I built the game in Unity3D. The palm and 7 keypoints are detected and mapped to the virtual hand in the game scene. The perception framework is built on the mediapipe pretrained model. The model is converted to the PyTorch framework by defining the architecture and copying the weights from the pretrained mediapipe model. This model is then converted to ONNX format to run in the Unity3D game engine's Barracuda inference pipeline. The PyTorch model and the postprocessing (NMS) are all converted to ONNX format for easier deployment in Unity3D. Mediapipe model -> Pytorch model -> ONNX model -> Barracuda Inference -> Unity Gameplay rendering. Challenges I ran into The ONNX models (palm detector and postprocessing) ran fine on the Python end, but Barracuda inference in the Unity3D engine did not support some of the operators in the ONNX model. I tried to fix the issue but it did not work out. Accomplishments that I'm proud of Through this project, I learned how an ML model is deployed end-to-end in a mobile environment. I was able to convert a model from tflite to PyTorch by studying the underlying architecture of the model. My goal is to have an Augmented Reality butterfly park. This current project is a good baseline for further development.
What I learned Converting a tflite model to PyTorch just by looking at the model architecture. Converting a PyTorch model to ONNX. Integrating PyTorch with the Unity3D game engine for mobile game deployment. What's next for Butterfly Park My goal is to have an Augmented Reality butterfly park, and this current project is a good baseline for further development. Currently I am using only a palm detector model; I will extend this to do landmark localization of the fingers in 3D and build an Augmented Reality Butterfly Park. Built With onnx pytorch unity Try it out github.com github.com
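The NMS postprocessing mentioned above can be illustrated with a minimal greedy sketch in plain Python (the box format, scores, and 0.5 IoU threshold are assumptions for illustration, not the mediapipe model's actual values):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of two boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: visit boxes by descending score,
    # drop any box that overlaps an already-kept box too much.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
kept = nms(boxes, scores=[0.9, 0.8, 0.7])  # the second box is suppressed
```

Exporting exactly this kind of loop to ONNX is what made the Barracuda operator-support problem bite: control flow and dynamic shapes are where ONNX runtimes most often diverge.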
Butterfly Park
Move your hands to feed the Butterfly in the Virtual Butterfly Park
['Sujay Babruwad']
[]
['onnx', 'pytorch', 'unity']
45
10,159
https://devpost.com/software/torchbearer
default adversary model default ANN architecture default classifier model Inspiration I was inspired to develop a tool to help train more ethical AI after seeing the results of some biased models, such as the depixelizer model which transformed a pixelated picture of Obama into a white-looking person. That inspired me to leverage my hobby for coding and open-sourcing my work to help develop tools to build less-biased AI. What it does The AdversarialDebiasing class from IBM's AI Fairness 360 open-source Python library has been refactored into a more user-friendly and flexible version using PyTorch. This class can now dynamically instantiate a classifier with user-defined artificial neural network architectures and automatically generate a corresponding adversary model which will help the classifier be trained with less bias. Users can also pass pre-instantiated models in place of the defaults. This enables AI and ML engineers to train any binary classification model within an adversarial debiasing framework, as well as providing a starting point for further customization of this process. A future version will enable multi-class predictions for both the classifier and adversary models. You can click here to see the source code and the new demo notebook in Google Colab How I built it I created predefined ANN, classifier, and adversary models that are defined outside the main AdversarialDebiasing class and, as mentioned above, can be modified or swapped by users. The default versions are based on the original hard-coded TensorFlow implementation. This implementation also involved a staircase-wise exponential decay scheduler for the learning rate, which I implemented by adjusting an existing version that had been published on GitHub Gist.
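The staircase-wise exponential decay mentioned above can be sketched in a few lines (the parameter names are my own; the actual Gist-based implementation may differ):

```python
import math

def staircase_exp_decay(initial_lr, global_step, decay_steps, decay_rate):
    # Learning rate drops in discrete steps rather than continuously:
    # lr = lr0 * rate ** floor(step / decay_steps)
    return initial_lr * decay_rate ** math.floor(global_step / decay_steps)

# With lr0 = 0.1 and the rate halving every 100 steps:
# steps 0..99 -> 0.1, steps 100..199 -> 0.05, steps 200..299 -> 0.025
```

The floor is what makes the schedule a "staircase": within each window of decay_steps updates the learning rate is constant, which tends to give more stable adversarial training than a smooth decay.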
By replicating the ANN up to but not including the final activation from the classifier within the adversary model, the classifier weights and biases can be embedded into the adversary at each global step of the fitting loop to force the adversary to try and optimize them according to its objective of discerning the protected attribute value. The gradients of these layers are then projected back to the classifier using the method described in the original white paper such that the classifier can both maximize its accuracy while also trying to minimize the adversary's accuracy. Using the original notebook which demonstrated the TensorFlow version and changing the parameters to match the new required inputs, I was able to confirm that the code is working as it should with the AI Fairness 360 development team. The refactored code can be reviewed by clicking here . Challenges I ran into Understanding both the gradient projection process defined in the white paper, as well as the original TensorFlow implementation of this process, was the main challenge I ran into. All other challenges could be solved by referring to community questions that had been previously answered online (I consider ptrblck as an unknowing contributor to this project thanks to all his answers on the PyTorch discussion board). After several discussions with my peers who were helping me throughout this project ( Ryan Khurana and Adam Resnick ), I was able to figure out how to implement the same process in PyTorch. An AI researcher from IBM who worked on the original TensorFlow implementation has confirmed that the code is working correctly. Accomplishments that I'm proud of I'm proud of having successfully refactored TensorFlow code into PyTorch code despite my limited experience with either library. 
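The gradient projection described above can be sketched on plain Python vectors — with alpha = 0, subtracting the projection of the classifier gradient onto the adversary gradient leaves an update orthogonal to the adversary's direction (function names here are illustrative, not the refactored class's API):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(u, v):
    # Projection of vector u onto vector v.
    scale = dot(u, v) / dot(v, v)
    return [scale * b for b in v]

def debiased_gradient(g_clf, g_adv, alpha=1.0):
    # g = g_clf - proj_{g_adv}(g_clf) - alpha * g_adv
    # Removing the component along g_adv stops the classifier update from
    # helping the adversary; the -alpha term actively works against it.
    p = project(g_clf, g_adv)
    return [c - pi - alpha * a for c, pi, a in zip(g_clf, p, g_adv)]

g = debiased_gradient([2.0, 1.0], [1.0, 0.0], alpha=0.0)
# With alpha = 0 the result has no component along the adversary gradient.
```

In the real implementation this happens per-parameter-tensor inside the training loop, with alpha typically decayed over global steps as in the white paper.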
I am also proud that the AI Fairness 360 development team is interested in integrating my solution into their library and excited by the potential value that AI developers like myself could obtain from leveraging my code. What I learned I learned that PyTorch is an incredibly powerful and flexible tool for AI and ML engineering and development in Python and that it can be effectively integrated with other Python libraries. I also learned a lot about the theories and conceptual frameworks related to the development of more ethical AI, as well as how to leverage PyTorch to make such efforts more seamless for users of libraries such as IBM's AI Fairness 360. What's next for Torchbearer I plan to make further improvements to the code to enable greater flexibility and after thorough testing, I will coordinate with the IBM researchers who maintain the AI Fairness 360 library to commit my implementation into their master repository. As I currently work as a data science consultant, I plan to leverage this project to provide proof-of-concept projects for our clients and help hone their best-practices with respect to the development of more ethical AI. My former manager from my HSBC internship has already expressed interest in this. It is my hope to spend the 30 minutes I would get with the PyTorch team if I win to go over the code and discuss ways I can help make it more flexible and accessible for developers around the world. Built With aif360 jupyterlab numpy pandas python pytorch scikit-learn Try it out github.com
Torchbearer
PyTorch refactoring and improvement of the adversarial debiasing inprocessing algorithm, which is currently implemented in TensorFlow 1.x as part of IBM's open-source AI Fairness 360 library.
['Yoseph Zuskin']
[]
['aif360', 'jupyterlab', 'numpy', 'pandas', 'python', 'pytorch', 'scikit-learn']
46
10,159
https://devpost.com/software/vingo-interactive-museum-excursions
Main page Museum page with achievement and export progress Recognizer interface Recognized image About APP 😷 The pandemic changed the world and we need new approaches to ordinary things. ⚡️ We reinvent museum excursions because people want to see world art culture, but we make it more interesting and safe 👮‍♂️. 🔥 Vingo is an application that will be your own guide in museums. Vingo gives you an emoji-puzzle, meaning a picture in the room that contains these emojis, and you need to find it. For example 💃💃💃💃💃 is Matisse's picture "Dance": we see five dancing persons in the picture and in the emojis. Also 🚣🌊 ⛅ is Ivan Aivazovsky, "The Ninth Wave". 👀 After you find the picture, you point your camera at it, it is recognized, and you see a short Instagram- 😉 This application makes museum excursions interesting without big groups of people. Challenges Our application has a realtime picture recognizer. The main challenge was to build a zero-shot picture recognizer that runs in realtime on mobile, because we have only one image per painting (we scraped all images from the Hermitage museum site). We also want to recognize paintings on-device from video, without tapping any buttons or sending data to a server. How it works To train the recognizer we train a mobilenet with PyTorch Lightning on the WikiArt dataset to predict the genre of a painting, and then tune the model on a metric learning task to produce good embeddings for paintings. To run the network on-device we use torch-jit for iOS. We grab a cropped image from the camera every 0.5 sec and send it to the network to get an embedding vector, then just find the nearest image by KNN from the database. To reject false-positive activations we use a strong distance threshold. Built With pytorch swift Try it out github.com
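The on-device matching step described above — nearest embedding by KNN plus a strict distance threshold to reject false positives — can be sketched like this (the embeddings and threshold are made-up toy values):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match_painting(query, database, threshold=0.6):
    # database: painting name -> embedding vector.
    # Return the nearest painting, or None to reject a false positive
    # when even the best match is too far away.
    name, dist = min(
        ((n, euclidean(query, e)) for n, e in database.items()),
        key=lambda t: t[1],
    )
    return name if dist < threshold else None

db = {"dance": [1.0, 0.0], "ninth_wave": [0.0, 1.0]}
hit = match_painting([0.9, 0.1], db)    # close to "dance" -> accepted
miss = match_painting([5.0, 5.0], db)   # far from everything -> rejected
```

Rejecting on distance is what keeps the recognizer from firing on random camera frames between paintings, at the cost of occasionally missing a painting seen at a bad angle.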
VinGo - interactive museum excursions
Vingo gives you an emoji-puzzle — a picture in the room that contains these emojis. Find it, point your camera at it, and the picture will be recognized.
['Alexander Mamaev', 'Andrey Zhevlakov']
[]
['pytorch', 'swift']
47
10,159
https://devpost.com/software/image-segmentation-japlo4
Inspiration To test my newly learnt PyTorch What it does The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. How I built it Semantic segmentation can be done using a model pre-trained on images labeled with a predefined list of categories. An example is the DeepLabV3 model, which is already implemented in PyTorch. Challenges I ran into Lots of bugs Accomplishments that I'm proud of Resolving the bugs I had. What I learned How best to work with PyTorch. What's next for Image segmentation Implementing it in an app. Built With pytorch Try it out github.com
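One common step after running a model like DeepLabV3 is turning the per-pixel class predictions (the argmax over the model's logits) into a viewable image. A minimal sketch with a made-up three-entry palette — torchvision's pretrained DeepLabV3 actually predicts 21 Pascal VOC-style classes:

```python
# Hypothetical class id -> RGB palette for illustration only.
PALETTE = {0: (0, 0, 0), 1: (128, 0, 0), 2: (0, 128, 0)}

def colorize(mask):
    # mask: 2-D list of predicted class ids, one per pixel.
    return [[PALETTE[c] for c in row] for row in mask]

mask = [[0, 1],
        [2, 1]]
img = colorize(mask)  # 2-D list of RGB tuples, ready to render
```

With real model output the mask would come from `output["out"].argmax(1)` on the DeepLabV3 result, but the colorization step is the same.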
Image segmentation
It functions as an image segmentation website
['Ugochukwu Nnachor']
[]
['pytorch']
48
10,159
https://devpost.com/software/torchsenti-extensions-for-sentiment-analysis
Inspiration In our day-to-day research, we usually face problems when datasets and pre-trained models are scattered over various places. It is a very time-consuming task for us to search for certain datasets that meet our needs, then look for pre-trained models that are already available online, and do benchmarking for several pre-trained models. Based on those problems, we were inspired to create a library that contains datasets from many different sources for fellow researchers to use in the future. What it does This library is a one-stop solution for researchers conducting research on the topic of Sentiment Analysis, with the features listed below: Dataset Available Sentiment Analysis IMDB Movie Reviews Pros and Cons Movie Review Trip Advisor City Search Data Yelp Review Features Text Cleansing, e.g. removing hyperlinks WordPiece Tokenization with tagging for aspect extraction Entity metrics for aspect detection How we built it We provide features for the researcher to download a specific dataset in raw or preprocessed format, and to load and split datasets. Challenges we ran into We faced many difficulties in preprocessing each of the datasets. What we learned What we have learned so far is that software development is hard and needs to consider design patterns. What's next for torchsenti We have several feature updates planned for the near future, like a wrapper for text cleansing, WordPiece Tokenization, etc. Built With python pytorch Try it out pypi.org github.com
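The hyperlink-removal cleansing step listed above could look something like this (the exact pattern torchsenti uses may differ; this regex is an assumption):

```python
import re

# Match http(s) URLs up to the next whitespace character.
URL_RE = re.compile(r"https?://\S+")

def remove_hyperlinks(text):
    # Strip URLs, then collapse the leftover whitespace.
    return re.sub(r"\s+", " ", URL_RE.sub("", text)).strip()

cleaned = remove_hyperlinks("great movie http://example.com/review really")
```

Doing cleansing before tokenization matters here: a WordPiece tokenizer would otherwise shred a URL into dozens of meaningless subword tokens that pollute the sentiment signal.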
torchsenti
Sentiment Analysis Library for Research with PyTorch
['Ruben Stefanus', 'Andreas Chandra', 'Andhika S Pratama']
[]
['python', 'pytorch']
49
10,159
https://devpost.com/software/binnovate
Actual Picture Blue print Inspiration- Problems that inspired me to make the project: • Garbage Pollution (in land, water etc.) • No Proper Segregation of Waste (everything in garbage mixes up in landfills) • Filling Up of Space on Land by Garbage (this is proved by the World Bank)- https://www.worldbank.org/en/news/feature/2013/10/30/global-waste-on-pace-to-triple is the link. • Economic Slowdown of the Country- GDP growth has slowed down by almost 5%, which is very low compared to other years. • High Cost of Fertilizers and Time Consumption by Composting What it does- • Safe and easy segregation of wastes • Quick composting • Compressing waste for space • Affordable cost • Reuse of wastewater How I built it- • The Dustbin consists of three parts- Metal Bin, Biodegradable Bin and Plastic Bin, which are placed in a slanting manner so waste can slide down. They were assembled. • It works on two sensors- a Metal Induction sensor and a Capacitive Proximity sensor (which help in segregation) [on the Metal Bin and Bio Bin respectively], two metal gear motors (which help open the doors) [in the Metal Bin and Bio Bin], a BO motor [only in the Plastic Bin] and three small containers [present in all bins]. The sensors were put in the right places. • The Metal Bin has a shredder which crushes plastic bottles and bags into bite-sized pieces so that they take less space. • The Bio Bin has a churner and a compressor. While the churner mixes (in better terms 'churns') the waste to make compost, the compressor compresses the waste into a compact size. • The Bio Bin has an LED strip (showcased as heaters) which would heat the waste so that the water inside dries up and compost forms. All of this was created, assembled and put to work. Challenges I ran into- Research, difficulty in assembling, and no availability of components in the market were the challenges I faced. Accomplishments that I'm proud of- I was able to complete this project and learnt new things in Arduino.
What I learned- New programs in Arduino and an introduction to Fusion 360 and laser cutting. What's next for BINNOVATE- Make it IoT-enabled so that you can check the details on your mobile, refine its working, and make it more affordable. Note: I want to add the fact that it was earlier named InnoBin and then changed to Binnovate. Built With arduino autodesk-fusion-360 laser-cutting mdf sensors Try it out youtu.be
BINNOVATE
A SMART DUSTBIN WHICH HAS ABILITY TO SEGREGATE WASTES ON ITS OWN AND DO PRIMITIVE STAGE RECYCLING WITHIN ITS CONTAINER AND FREE THE ENVIRONMENT FROM GARBAGE POLLUTION.
['Durlabh Biswas']
[]
['arduino', 'autodesk-fusion-360', 'laser-cutting', 'mdf', 'sensors']
50
10,159
https://devpost.com/software/fastai-xla-extension-library
project logo our documentation site a fastai jupyter notebook with the fastai_xla_extension library import Inspiration This was inspired by a post in the fastai forums where Jeremy (the creator of the fastai library) concurred with another user about the importance of TPU support for the fastai library. He also mentioned that "it shouldn't be a big job to add either AFAICT..." I felt intrigued by this statement and with much trepidation, I (Butch) decided to take on the challenge. I posted about this hackathon in another post on the fastai forums, suggesting the idea of forming a team to work on adding TPU support for the fastai library. My now team mate (David) liked the idea and now there's two of us working together since then. What it does The package allows the fastai library to run on TPUs using Pytorch-XLA How we built it We are building it using Jupyter notebooks using a system developed by Jeremy and Sylvain (creators of the fastai library) called nbdev . Moreover, nbdev automatically generates the python library package from the source Jupyter notebooks as well as the documentation. This also allows us to build out the documentation alongside the development of the system, and keeping the documentation in sync with the code as well as the tests is a lot easier when your documentation is also an executable Jupyter notebook. Since we need a TPU enabled environment to test out the library, we are also running and testing the libraries and Jupyter notebooks on Colab and Kaggle. Challenges we ran into The biggest challenge we are facing is the fact that the fastai library was built on the underlying assumption that it would be running either on a GPU or a CPU enabled environment. I don't think they considered the possibility of running it on a TPU -- which is understandable, since this 2nd version of the library (fastai version 2 aka fastai2) was developed prior to, or around the same time the Pytorch XLA support for TPUs was announced. 
Accomplishments that we're proud of Our goal is to make the usage of TPUs with the fastai library as seamless as possible. It should take only the most minimal of changes to existing fastai code to run it on TPUs. Another thing we are proud of is that for a small python package developed by 2 newbies in their spare time, it comes complete with documentation, samples and an installable python package via pip. And even the samples are jupyter notebooks that can be run in one click on Google Colab. If you want to check out our library, just click here to test it out. What we learned We learned a lot about the internals of the fastai library and gained some insights into the design decisions behind it. We also learned a little bit of the Pytorch XLA APIs and what it takes to run Pytorch on TPUs. What's next for the fastai xla extensions library We are currently focused on running fastai on a single TPU core. Once we get that running well, we'll start focusing on running fastai on multiple cores. Our eventual goal is to enable fastai to train large models, such as the HuggingFace Transformer models, to do transfer learning on TPUs. Built With fastai pytorch pytorch-xla Try it out colab.research.google.com github.com butchland.github.io
fastai xla extensions library
To enable the fastai library to run on TPUs using Pytorch-XLA
['Butch Landingin', 'tyoc213']
[]
['fastai', 'pytorch', 'pytorch-xla']
51
10,159
https://devpost.com/software/none-yet
Main screen Generate sprites here Game Over :( Game play Inspiration It's like the dinosaur game, but it lets you play as the character. What it does Based on a picture you take or upload it will generate the sprites for the game to use. It segments the image using Pytorch/Detectron and some filters are applied to it How I built it It's a flask server running Detectron Functions and P5js for the game engine. Challenges I ran into Found some coding bugs but I was able to fix them Accomplishments that I'm proud of This is my first project with p5js and detectron2, so it's really cool to learn them both at the same time and I am proud of watching my kids playing with them as characters. What I learned p5js, this is so cool! What's next for TorchBoard I will change some UI elements soon and I would like to generate NFTs to use with the Metaverse I would love to use reinforcement learning or genetic algorithms so TorchBoard can play itself. Built With detectron2 flask opencv p5js python pytorch Try it out github.com 3.139.101.139
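The sprite-generation idea — keep the pixels Detectron marks as the person, drop the rest — can be sketched without any ML, using nested lists in place of image arrays (the function name and RGBA convention are mine, not the project's):

```python
def cut_sprite(image, mask, background=(0, 0, 0, 0)):
    # image: 2-D list of (r, g, b) pixels; mask: 2-D list of 0/1 values
    # (as produced by an instance-segmentation model's binary mask).
    # Keep masked pixels as opaque RGBA, make everything else transparent.
    return [
        [pix + (255,) if m else background for pix, m in zip(img_row, m_row)]
        for img_row, m_row in zip(image, mask)
    ]

image = [[(10, 10, 10), (200, 50, 50)]]
mask = [[0, 1]]
sprite = cut_sprite(image, mask)
```

In the real pipeline this runs on numpy arrays server-side, and the transparent-background PNG is what the p5js game engine loads as the player character.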
TorchBoard v2
The dinosaur game but you are the dinosaur. You take a picture and it's segmented with Detectron to generate the character that you play with.
[]
[]
['detectron2', 'flask', 'opencv', 'p5js', 'python', 'pytorch']
52
10,159
https://devpost.com/software/smart-parking-and-billing-system
Inspiration found it interesting What it does modern day parking system How I built it using many different aspects Challenges I ran into in the cloud data Accomplishments that I'm proud of successfully done What I learned cloud computing What's next for smart parking and billing system make it to large scale Built With apis cloud hardware
smart parking and billing system
innovative parking system
['Hrutvika Muttepwar']
[]
['apis', 'cloud', 'hardware']
53
10,159
https://devpost.com/software/learned-spectral-compression
Visualization of the leaky floor function Size of compressed parameters in MB -- note that the initial quantization was 10 bits instead of 32, so the model is already 3x smaller Reconstruction loss for that case. There's essentially no change until iteration 50. Inspiration The resources required to train a state-of-the-art machine learning model have doubled every four months (on average). This trend translates to an unsustainable increase in carbon emissions, and model training is only the tip of the iceberg. Nvidia estimates that model inference accounts for at least 80% of the computation spent on deep learning worldwide. Training and inference costs are increasing because models are becoming larger as well. The GPT-2 XL configuration has around 1.5 billion parameters. Released only a year and a half later, the largest configuration of GPT-3 has 175 billion parameters. The required computational resources increased (roughly) linearly. Beyond the environmental impact of training these enormous models, they also introduce the inconvenience of being too large to effectively perform inference on commodity hardware, let alone fine-tune. As a result, control over computation shifts from the end-user to powerful servers in the cloud. While the idea of "migrating to the cloud" is nothing new, there are a number of privacy risks involved in doing so. It's obvious that we need to make smaller networks. But how? What it does Learned Spectral Compression ( lsc ) is a library that compresses machine learning models during training using a somewhat-novel approach that is heavily inspired by image compression. lsc does this in three steps. First, it converts the model weights of a pre-trained model into a spectral representation, which is still of the same dimensionality as the weights themselves.
Then, the user of the library merely needs to continue training the converted model on the original dataset, balancing the optimization criteria of the model (i.e. model accuracy) with a quantization loss function which tries to compress the model. Finally, lsc performs an entropy coding pass when the user saves the model to disk, further compressing the model. For the most part, the user of this library does not have to consider the details of how the technique works -- they just need to use the spectral model as they normally would. To convert a model to the spectral representation, just run

from lsc import spectral, quantization_loss

q_net = spectral(my_model)

spectral will convert my_model in-place. q_net is the quantization network. This network learns how to compress the model during the optimization process. The user does not need to do anything with it other than pass q_net.parameters() into their optimizer during the fine-tuning or training step. Admittedly the quantization network does add to the memory/space footprint of the model overall, but only by ~10 KB.

optim = torch.optim.Adam(
    list(my_model.parameters()) + list(q_net.parameters()),
    lr=1e-3
)

Next, during the optimization loop, the user needs to incorporate the quantization loss into their training process. The quantization loss represents the average number of bits per spectral weight in the model. By default, this value starts at 10 bits of precision for all weights (which is already a ~70% improvement over most float32 models). Depending on the use-case, the user might want to modify the output of this loss function (perhaps to limit compression beyond a certain point).

my_usual_model_loss = ...
q_loss = quantization_loss(my_model)
loss = my_usual_model_loss + q_loss
loss.backward()
optim.step()

After training, the user can run the final entropy coding stage and extract a state_dict using the compress_weights function.
Ideally we wouldn't need a custom compress_weights function (and the compression would just happen during state_dict() ) but I haven't figured that part out yet.

my_state_dict = compress_weights(my_model)

When using lsc , I have observed >95% savings in model size with no noticeable changes in accuracy (see the Resnet-152 notebook). lsc hypothetically should be able to compress the parameters of nearly any machine learning model, although in practice most of the savings would be noticed in linear and convolutional layers. lsc does not reduce the memory consumption during model training. Reducing peak training memory consumption might be possible using this approach, but it is definitely fairly difficult to do without heavy gradient checkpointing. The current implementation of lsc does not actually reduce memory during inference either, but that can be easily added (especially once I figure out an easier way to override state_dict ). How I built it lsc is (roughly) differentiable n-dimensional JPEG. JPEG might conjure up memories of terrible block artifacts and blurry text, but trust me, lossy tensor compression and gradient descent go together like fine wine and cheese. In a sense, this approach is similar to model pruning -- we want to learn which frequency bands of the model weights are actually important. Prior to training, we project the initial model weights into a frequency representation. We do so by first computing the mean and variance across each of the weight tensors and then redistributing the weights accordingly (similar to batch norm). As a result, most of the weights are distributed in the range expected by the DCT. Next, we split the data into chunks on each dimension, and then perform an n-dimensional DCT on the chunk dimensions. This is analogous to the block coding and DCT steps in JPEG, but so far, everything is differentiable.
Internally, the spectral function modifies all of the relevant layers of the initial model into HyperNetwork modules. HyperNetwork modules instantiate and run another module type to generate the parameters of the original network (at the moment just weight and bias ), and then run the initial module with the computed weights. SpectralCompressionWrapper is the "weight generator" used for this library; each weight of each module is paired with a SpectralCompressionWrapper . All SpectralCompressionWrapper s share the same underlying quantization network. The forward pass of the wrapped model involves the following steps: (1) compute a quantization tensor using the quantization network, which accepts a tensor of positions (i.e. locations in the weights matrix), computes a sin-cos positional embedding (like Transformers or NeRF), and then runs a small MLP to estimate the quantization tensor; (2) multiply the spectral weights by 2^{quantization tensor}, run the leaky floor function, and then divide the spectral weights by 2^{quantization tensor}; (3) run the inverse DCT to re-generate an amplitude representation; (4) reshape the weights into their original shape; (5) undo the gaussian re-distribution. Leaky floor is pretty simple; it basically lets us differentiate through the floor function by leaking gradients through.

def leaky_floor(x, m=0.01):
    floored = torch.floor(x)
    return floored + m * (x - floored)

There is a lot more documentation on each of these individual components in the source code :) Challenges I ran into I struggled quite a bit to extend PyTorch with the intended behavior. lsc is ideally supposed to be a magical opaque box that can wrap anything that extends nn.Module . To do so, it replaces every submodule of the network with hyper-networks that generate the model weights. So I had to find a way to automatically reparameterize arbitrary models without affecting the internal state that creates state_dict .
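The multiply / leaky-floor / divide step above amounts to snapping each spectral weight onto a grid whose spacing is set by the learned bit count. A plain-Python version (with m = 0.0 so the snap is exact) makes the round-trip concrete:

```python
import math

def leaky_floor(x, m=0.01):
    # Floor that leaks a fraction m of the residual through,
    # mirroring the differentiable torch version in the text.
    f = math.floor(x)
    return f + m * (x - f)

def quantize(w, bits, m=0.0):
    # Multiply up by 2**bits, floor, divide back down:
    # snaps w to a grid with spacing 2**-bits.
    scale = 2.0 ** bits
    return leaky_floor(w * scale, m) / scale

w = 0.123456
q8 = quantize(w, 8)  # snapped to the nearest-below multiple of 1/256
```

More learned bits means a finer grid and a smaller quantization error, which is exactly the trade-off the quantization loss pushes against during fine-tuning.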
The dimensionality of the output of this hypernetwork is the dimensionality of the model weights, which is quite large. That means that a lot of care has to be taken to ensure that gradients are not accumulated for intermediate steps. This is part of the reason why lsc computes the DCT in blocks -- it means that the maximum dimensionality of an activation or gradient of the quantization network scales with the maximum block size (which is usually < 256). I also experimented with gradient checkpointing, but the increase in compute time was not worth it. I mostly have been working on this project over the last week (when I discovered the summer hackathon). Mostly this weekend. While I do feel that PyTorch was certainly the right tool for the task, this entire project (perhaps fittingly) did feel like a giant hack. Accomplishments that I'm proud of I think the entropy coding stage is pretty unique. Implementing a somewhat-efficient n-dimensional Morton encoder that uses a variable dimension size was fun, especially given that it replaces the "zig-zag" ordering in JPEG. I couldn't get it to run remotely efficiently in pure Python PyTorch, so I just rewrote that part in numba . Also the compression ratio is pretty good. I think lsc can be compared to SOTA pruning (which admittedly are very hard to compare because of a lack of a consistent benchmark). What I learned I learned about Morton ordering, positional encoding, PyTorch checkpointing, PyTorch's nn.Module internals, and tensor decomposition. What's next for Learned Spectral Compression In no particular order, here are some critical TODOs: Figure out a cleaner interface to do compressed model save/load. I spent a while trying to override state_dict 's behavior (to store the compressed representation rather than the ), but I couldn't figure it out in time for the hackathon deadline. At the moment, lsc does not actually save any memory in inference mode. 
You could totally run the de-compression in real-time on a layer-by-layer basis, saving plenty of GPU memory. I already spent a decent amount of time optimizing most of the entropy coding to do so as well, I just didn't have enough time to implement it. More experiments! I mostly tested on ResNet and some small models. I am curious how it would do on GPT-2 but I definitely do not have the resources to fine-tune anything larger than gpt2-small or possibly gpt2-medium . I'm using an external library's differentiable DCT implementation, and it's pretty slow (~30% of the runtime). I think I could speed it up by switching between the non-fast Fourier transform and the fast variant on a dimension-by-dimension basis, and fusing the various operations together. Maybe this step could even be implemented in tensor comprehensions, or possibly a separate CUDA extension. Figure out a way to compress non-parameter tensors, like the running mean/variance batch norm. Use 128-bit integers (instead of 64-bit integers) in the Morton encoding so that we can have > 8D tensors with a max chunk size of 256. It could be interesting to try using low-rank tensor decomposition in tandem with this lossy spectral compression approach. Maybe Tucker decomposition? Switch from the discrete Fourier/cosine transform to the discrete wavelet transform (like JPEG 2000) See the github repo or this colab notebook for more details Built With pytorch Try it out github.com
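The n-dimensional Morton (Z-order) traversal that replaces JPEG's zig-zag ordering can be illustrated in 2-D by interleaving coordinate bits (the real implementation is n-dimensional and numba-accelerated; this is just the core idea):

```python
def morton2(x, y, bits=16):
    # Interleave the bits of (x, y): x occupies the even bit positions,
    # y the odd ones, producing the Z-order index of the cell.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# Sorting block coordinates by their Morton code walks nearby cells
# before distant ones, keeping similar DCT coefficients adjacent
# for the entropy coder.
order = sorted(((x, y) for x in range(2) for y in range(2)),
               key=lambda p: morton2(*p))
```

Keeping spectrally-similar coefficients next to each other is what makes the entropy coding pass effective, for the same reason zig-zag ordering helps JPEG's run-length coding.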
Learned Spectral Compression
Compress the parameters of any PyTorch model by a factor of 10 without reducing accuracy. Just call `spectral(your_model_here)` and fine-tune your model with an additional compression loss term.
['Srinivas Kaza']
[]
['pytorch']
54
10,159
https://devpost.com/software/pytuna
PyTuna! A screenshot of the PyTuna CLI wizard in action, guiding a user through preprocessing A screenshot of the PyTuna Jupyter Notebook wizard in action, guiding a user through preprocessing The architecture of PyTuna's underlying convolutional neural net A visualization of the use case for PyTuna. Inspiration In the past decade, we've made massive strides in developing accurate architectures and models to the point where most people can, in under a half-hour, use an out-of-the-box model on a dataset. Even so, out of the 24 million programmers in the world, just 300,000 are proficient in AI. This is because the effectiveness of these models is gated by how well the data is prepared ; the data preprocessing step is where many programmers, beginners and experts alike, find the most difficulty. As any programmer knows, "garbage in, garbage out" — in other words, a model is only as good as the data that comes in. However, there are few hard-and-fast rules or workflows to follow when preprocessing a dataset, especially with images. There are so many options to choose from — augmentation, normalization, object detection, Gaussian blur — and little consensus on how to use them. What it does With this in mind, we made PyTuna, a pytorch framework that acts as a wizard to guide data scientists through preprocessing any image dataset. Under the hood, PyTuna has a convolutional neural network trained on 50 world-class image datasets and the preprocessing strategies that the most frequently-cited research papers and the most successful Kaggle notebooks used for each of them. Given any dataset, PyTuna first samples a representative subset. Then, it uses a ConvNet to predict a set of preprocessing techniques that are well-suited for the dataset. Next, the PyTuna wizard walks through each of the techniques with the user, automating, educating, and informing at each step. PyTuna makes the case for each technique, but the user gets the final say. 
Finally, PyTuna returns a list of preprocessed images in PyTorch tensor format, ready to be fed into a model . It also optionally pickles a copy of the preprocessed dataset for future use. The PyTuna wizard serves to make preprocessing a painless experience , even for coders with little to no experience. With PyTuna, anyone can use PyTorch for computer vision. How we built it We gathered 50 of the most frequently used image datasets, then gathered the most cited research papers and highest-ranked Kaggle submissions for each. Then, we manually read through and recorded which preprocessing methods each one used. We ended up choosing the following set of 12 preprocessing steps to predict from: scaling, augmentation, normalization, zero-centering, removing background colors, object detection, Gaussian blur, perturbation, contrast, grayscaling, histogram equalization, label one-hot encoding. To generate our training data, we randomly sampled 10 images from a dataset, which were then combined into a single stack of images. We repeated this process 40 times within each dataset to gain a representative set of stacks for the entire dataset. Each label was represented as a tensor with 12 binary elements, each corresponding to whether a preprocessing step was used or not (e.g. [1, 0, 0, 0 ,1, 1, 0, 0, 0, 0, 0, 0]). We then created our own custom model consisting of the following layers: Input: (1 x 10 x 3 x 224 x 224) which represents (batch size, stack size, color channels, height, width) 3D Convolution (10 -> 20 channels) ReLU MaxPool 3D Convolution (20 -> 40 channels) ReLU MaxPool Flatten Fully Connected (121000 -> 4096) ReLU Fully Connected (4096 -> 1024) ReLU Fully Connected (1024 -> 12) Sigmoid We trained the model across 10 epochs, with 1520 iterations per epoch, taking a total of 12 CPU hours on 2 Intel Xeon 20 core processors. We then evaluated our model on 5 image stacks per dataset and took the mean of the 5 results as our final prediction. 
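The layer stack described above can be sketched in PyTorch roughly as follows. This is an illustrative reconstruction, not the team's exact code: pooling is restricted to height/width so the 3 color planes are not collapsed, the demo input uses a smaller spatial size, and a LazyLinear stands in for the large 121000 -> 4096 layer:

```python
import torch
import torch.nn as nn

class PreprocessingPredictor(nn.Module):
    """Predicts 12 independent preprocessing-step probabilities per image stack."""
    def __init__(self, num_steps=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(10, 20, kernel_size=3, padding=1),  # stack of 10 images as channels
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),  # pool H and W only, keep the 3 color planes
            nn.Conv3d(20, 40, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256),        # stand-in for the large fully connected layers
            nn.ReLU(),
            nn.Linear(256, num_steps),
            nn.Sigmoid(),              # one independent probability per preprocessing step
        )

    def forward(self, x):              # x: (batch, stack, color, height, width)
        return self.classifier(self.features(x))

model = PreprocessingPredictor()
probs = model(torch.randn(1, 10, 3, 64, 64))  # 12 values in (0, 1)
```

The sigmoid output with one unit per step is what lets a binary label vector like [1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0] be trained with a per-element binary cross-entropy loss.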
We obtained a final mean squared error of 0.12 per class. Challenges we ran into The two largest challenges that we ran into were collecting and hand labeling data to train our model on and selecting an appropriate model architecture. As there has never been anything similar to what we have done, our team of 5 had to find 50 impactful image-based datasets, convert each dataset into a format that our API can understand, and search for top Kaggle submissions and highly cited research papers that documented the preprocessing steps used for the dataset. We then had to decide on a list of preprocessing steps that our model could predict. We ended up creating a spreadsheet containing the 50 datasets as well as 12 columns representing each of the preprocessing steps we predict for. The next challenge came from deciding on an architecture that would be able to convey the information required to decide on what preprocessing steps to use to the model. We first experimented with creating a model of all fully connected layers whose input was a grid representing the means and standard deviations of a sample of images from each dataset. We found that this architecture had poor performance when predicting many of the preprocessing steps like object detection and image augmentation. We also realized that this model wouldn’t be able to make spatial associations within images as it was only looking at the means and standard deviations of how pixel values varied across the sample. We then moved on to using a transfer learning approach using a ResNet with layers prepended and appended to the network in order for the network to effectively analyze multidimensional data. However, this architecture ended up overfitting to the sample of images used from each dataset and therefore had poor validation accuracy when a different sample of images was used. 
Finally, we switched to a custom network architecture where we used 3D convolution layers in conjunction with several fully connected layers, which was able to learn the preprocessing steps without overfitting to any particular set of images. Accomplishments that we're proud of As rising second-years in university (each attending different universities), this is the most ambitious project any of us has ever done. We were able to learn a tremendous amount about preprocessing image datasets, designing a convolutional neural network, and training it successfully. In all, we are most proud of the chance to help all coders; we really just want to make it possible for anyone to do computer vision in PyTorch. What we learned We learned a ton in the process of designing and tuning our convolutional neural net. We also picked up a thing or two in the process of reading hundreds of notebooks and research papers to see how they preprocessed their datasets. Since we are all beginner hackers, it was a super exciting experience to set this huge goal at the start of the summer and come close to hitting all of our main goals. Even so, the real learning was the friends we made along the way <3 Acknowledgements We are tremendously grateful to NCSA at UIUC for their compute resources. This work utilizes resources supported by the National Science Foundation's Major Research Instrumentation program, grant #1725729, as well as the University of Illinois at Urbana-Champaign. What's next for PyTuna As with any convolutional neural net, we would love to continue tuning our hyperparameters and fiddling with the architecture to squeeze out as much performance as possible. We’re also looking into more sophisticated methods of representing large datasets, such as mathematical work on selecting the most representative subset of a large set. For our endgame, we’d like PyTuna to become a PyTorch module, available for any and all to use. Thank you so much for this amazing opportunity!
We hope you have a great week! Built With cuda cudnn numpy pillow python pytorch torchvision Try it out github.com
PyTuna
An intelligent preprocessing wizard
['Jonathan Ko', 'Kanav Kalucha', 'Andrew Chen', 'Vicki Xu', 'Pranshu Chaturvedi']
[]
['cuda', 'cudnn', 'numpy', 'pillow', 'python', 'pytorch', 'torchvision']
55
10,159
https://devpost.com/software/paint-93tej5
The Home page The beautiful canvas Variations of realistically synthesized image Images with an Artistic touch Image with a custom style infused The art Gallery The cart After Checkout Inspiration The main reason we developed this project is because painting is a very difficult skill and there are a lot of people who are passionate about painting but not so good at it. So we wanted to see if AI can be helpful in delivering the art of painting to a larger audience. This platform can be a very good start for those who wish to paint but are not too skilled at painting. More than all of that, it's always fun to allow a computer to finish our painting! What it does pAInt packs an MS Paint-like canvas with which the user paints a rough sketch. It then converts it into a realistic version with different styles. New styles can be obtained on refreshing the screen every time. It can add an artistic touch to the realistically synthesized image. The artist can also upload a design or another picture from which styles get transferred to the artist's original painting. There is also a complete gallery/cart experience that is built into the application. How we built it DL: Pix2Pix model was used for generating a realistic image. Cycle GAN was used for producing variations of the synthesized image, Bicycle GAN for the artistic touches and Neural Style Transfer for customized artistic touch. All models were built using PyTorch alone. Application: The application was built using TypeScript, which calls the python script that uses Pytorch with the models that we built. The frontend, built with VueJs, communicates with the REST endpoint exposed by the TypeScript app to complete the user experience. We have used AWS's EC2 for hosting the web application. Challenges we ran into Training GANs as a whole, including mode collapse and stability/convergence issues.
The lack of styling according to personalized input in the synthesizing model posed a problem which was overcome by the introduction of Cycle GAN. Devising the segmentation maps for different classes and mapping them between different datasets. Forking jspaint and customizing it to suit our needs was the most challenging part. Setting up the AWS server with all the dependencies; setting up the GPU. Linking the backend with the front end and all the communications was a bit challenging. Accomplishments that we're proud of Building a stand-alone end2end application which can help lots of creative minds (and idle minds too xD). The very thought that we have our application that is accessible over the internet is a very cherishing one. What we learned Strong theoretical and practical implications of GANs, Working with AWS, NodeJS, VueJS, Integrating multiple models together, building an API to suit the needs and the usage of pytorch in real-world ML problems... What's next for pAInt Enable synchronous painting access that enables two artists to complete an artwork on the canvas over the internet. Making checkout paid, which can support local artists. Delivering printed art to the customer... (We are being really ambitious here! xD) Built With amazon-ec2 css3 html5 javascript node.js python pytorch typescript vuejs Try it out 23.20.38.12 github.com
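As a flavor of the style-transfer component, the Gram-matrix style loss at the heart of Neural Style Transfer fits in a few lines. This is a generic illustration of the technique, not pAInt's actual code:

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    # features: (channels, H, W) activation map from one conv layer
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (c * h * w)   # channel-by-channel correlations capture "style"

def style_loss(generated, style):
    # penalize differences in feature correlations, not in pixel positions
    return F.mse_loss(gram_matrix(generated), gram_matrix(style))

x = torch.randn(8, 16, 16)
loss_same = style_loss(x, x)  # identical styles give zero loss
```

Optimizing the generated image to minimize this loss (plus a content loss on raw activations) is what transfers the uploaded design's style onto the artist's painting.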
pAInt
AI augmented painting and art shopping experience
['Keerthan Ramnath', 'Subrahmanyam Arunachalam', 'Hari Rajesh']
[]
['amazon-ec2', 'css3', 'html5', 'javascript', 'node.js', 'python', 'pytorch', 'typescript', 'vuejs']
56
10,159
https://devpost.com/software/easy-text-data-augmentation-in-pytorch
Inspiration Torchtext dataset classes provide a convenient, high-level interface to reading natural language data. However, they do not provide an interface for including data transformations at read time in the way that torchvision does. Further, they return data in a vectorized format, which precludes the application of data augmentation policies after reading. Current SOTA results across all domains include some form of augmentation for accuracy, so it's important to be able to do this with text data. What it does niacin is a python library with a collection of common text data augmentation functions, like backtranslation, word order swapping, and synonym replacement. Previously, it had not been usable with PyTorch dataloader classes, because torchtext dataset classes did not support transformations. niacin now includes torchtext-like dataset classes that can apply an arbitrarily large number of input transformations before vectorizing the data. Additionally, niacin now also includes an implementation of RandAugment , a tunable policy for applying augmentation functions that has produced results comparable with more involved policies, like AutoAugment. What's next for Easy text data augmentation in PyTorch Currently, the augmentation functions inside niacin are restricted to English, which limits their utility in most parts of the world. Future work will include adding support for a broader variety of languages. Built With nltk numpy python pytorch Try it out github.com niacin.readthedocs.io
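The RandAugment policy niacin implements boils down to: sample N augmentation functions from a pool and apply each at magnitude M. A toy version of the idea is below, with stand-in transforms — these are not niacin's actual function names or signatures:

```python
import random

# Illustrative text transforms; niacin's real ones (backtranslation,
# synonym replacement, ...) are more sophisticated.
def swap_adjacent_words(text, m):
    words = text.split()
    if len(words) > 1 and random.random() < m:
        i = random.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def drop_random_word(text, m):
    words = text.split()
    if len(words) > 1 and random.random() < m:
        del words[random.randrange(len(words))]
    return " ".join(words)

def rand_augment(text, transforms, n=2, m=0.3):
    # RandAugment: pick n transforms at random, apply each at magnitude m
    for fn in random.sample(transforms, k=min(n, len(transforms))):
        text = fn(text, m)
    return text

augmented = rand_augment("the quick brown fox", [swap_adjacent_words, drop_random_word])
```

Applying such a policy inside the dataset class, before vectorization, is exactly the hook that plain torchtext datasets were missing.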
Easy text data augmentation in PyTorch
Creates torchtext-like dataset classes, but ones that allow data augmentation policies like RandAugment to be applied to data on the fly at read time.
['Dillon Niederhut']
[]
['nltk', 'numpy', 'python', 'pytorch']
57
10,159
https://devpost.com/software/eyelang-py
Locate eye Move object on screen 1 Move object on screen 2 Options menu 1 Options menu 2 Evaluation sample Eye Gaze dataset sample Inspiration Our project was inspired by the reality that while ASL and braille may work as forms of communication for people with certain disabilities, others who are paralyzed or have difficulties with fine motor control may not be able to use those methods. However, people who are paralyzed may still have control of their eyes, which is why we built eyelang.py, an eye movement communication web app using machine learning for people with limited mobility. What it does The model locates the user’s eye from a webcam and outputs the eye position: left, right, up, down, or center. Our demos include using eyes to move an object on screen, which can be applied to moving a mouse or playing a video game. An options menu demo shows the user choosing “Yes” or “No” by looking left or right, respectively. A text typing demo maps each sequence of 3 eye positions to a character to replace typing with keys. How I built it We trained a CNN for 2 epochs using the PyTorch torchvision library on normalized images from the Eye Gaze dataset on Kaggle. We relabeled the images with 5 classes-left, right, up, down, and center-based on the dataset’s eye gaze vector. The feed-forward CNN architecture was adapted from the Stanford University paper ‘Convolutional Neural Networks for Eye Tracking Algorithm’ by Griffin and Ramirez, but we changed the input size and output classes and added an extra Linear layer to prevent loss of information because their output layer had 56 classes while we only had 5. To locate the eye from a webcam, we used the facemesh library, which locates key points on a user’s face. From there, the node server POSTs the eye image data to a Flask server that serves our model, and the Flask server normalizes the image and responds with the classification. 
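The text-typing demo's mapping can be sketched directly: with 5 eye positions, every ordered triple gives 5^3 = 125 possible sequences — more than enough to cover an alphabet. A minimal illustration (the character table here is hypothetical, not the app's actual mapping):

```python
from itertools import product

# The five classes the CNN predicts.
POSITIONS = ["left", "right", "up", "down", "center"]
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,!?"

# Assign each ordered triple of eye positions to one character.
SEQUENCE_TO_CHAR = {
    seq: ALPHABET[i]
    for i, seq in enumerate(product(POSITIONS, repeat=3))
    if i < len(ALPHABET)
}

def type_character(p1, p2, p3):
    """Return the character for a sequence of three classified eye positions."""
    return SEQUENCE_TO_CHAR.get((p1, p2, p3))
```

So, under this toy table, looking left three times types "a", and left-left-right types "b"; sequences beyond the alphabet are simply unmapped.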
Challenges I ran into All of us were relatively new to machine learning, so figuring out how to load data and use PyTorch involved a substantial learning curve. The accuracy of our model was 60%, possibly due to the Eye Gaze gaze vectors not being clearly left or right but often only slightly so, and the model had issues classifying up, likely due to the dataset having very few up images. When training, the loss didn’t stabilize as expected, possibly due to the learning rate being too high. Trying to pipe between our javascript web app and our python model was a challenge, including navigating CORS permissions when sending requests between servers. Accomplishments that I'm proud of We were literally holding our breath the first time we trained the NN and had a great moment when it finished training without snagging on any errors! Even though the final model was only 60% accurate on the test images, it was a big improvement from the essentially random guessing of our first model, and it was reasonably accurate for controlling an object on screen. In general, we are really proud of our project idea and are committed to continue developing the app after this hackathon to become an impactful product. What I learned We learned a ton about the PyTorch library, different types of neural networks and their applications, and many other machine learning concepts. We experimented with different inputs and CNN architectures, such as normalized vs. unnormalized data, 2 or 3 linear layers, more or fewer classes, and training on one or both eyes, which gave us a better grasp of hyperparameter optimization. What's next for eyelang.py We want to focus on improving the accuracy of our model by changing the CNN architecture, using an adaptive learning rate to prevent overshooting the minimum loss, filtering our data to only include clearly right or left gazes, and adding more up images to the data.
The increased accuracy would make typing text with eye movements more viable, at which point we could implement a text-to-speech feature for real-time conversation. We will improve our web app UI to more easily switch between demos and allow users to set custom mappings between eye positions and keys. We also want our NN to recognize more eye movements, such as blinking, which could stand in for clicking a mouse, raising eyebrows, etc. Ideally, we want to integrate eyelang.py’s functionality into native keybindings and mouse input so that users can operate any software with their eyes. Built With facemesh flask html/css javascript node.js python pytorch Try it out github.com colab.research.google.com
eyelang.py
Eye movement-based communication for people with limited mobility.
['Cindy Zou', 'Alexandra Jakucewicz']
[]
['facemesh', 'flask', 'html/css', 'javascript', 'node.js', 'python', 'pytorch']
58
10,159
https://devpost.com/software/math-notetaker
Landing Page What it does Starting to work in a new machine learning framework or package can be confusing. Instead of wading into the documentation from the beginning, it can be helpful to find a familiar use case. The documentation for sklearn, caret, numpy, and scipy was used to train a doc2vec model implemented in pytorch, giving each module of the documentation a dense embedding vector representation. Pairwise similarities between vectors were computed and ranked, giving the closest match to each section of the documentation. Below is a t-SNE low dimensional representation of the embedding vectors and a lookup table for the most similar documentation sections for the one selected. How I built it The doc2vec model was built in PyTorch using torchtext. I additionally worked with a model using the transformers library, using a pretrained BERT model to generate sentence embeddings. The web app is built using dash and hosted on pythonanywhere. Challenges I ran into The project isn't quite finished - time was the biggest constraint on completion. I entered this hackathon with some basic PyTorch skills and the effort I put in has made me much more comfortable with the framework. I look forward to continuing to play with it. What I learned I became a comfortable PyTorch user, learned to play with pre-trained transformers in the framework, and became better at turning papers into code. Built With dash python pytorch torchtext Try it out spencerbraun.pythonanywhere.com
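The ranking step described above — computing and sorting pairwise cosine similarities between section embeddings — can be sketched like this (an illustration, not the project's exact code):

```python
import numpy as np

def most_similar(embeddings, query_index, top_k=3):
    # Normalize rows, then rank every section by cosine similarity to the query.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e[query_index]
    sims[query_index] = -np.inf   # exclude the query section itself
    return np.argsort(sims)[::-1][:top_k]

# Toy embeddings: sections 0 and 1 are near-duplicates, section 2 differs.
sections = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
nearest = most_similar(sections, query_index=0, top_k=1)
```

With doc2vec (or BERT) vectors in place of the toy rows, this lookup is what maps a familiar sklearn section to its closest counterpart in another framework's docs.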
Doc Lookup
Similarity scores between ML framework documentation
['Spencer Braun']
[]
['dash', 'python', 'pytorch', 'torchtext']
59
10,159
https://devpost.com/software/sotorch
Results of different optimization methods for f(x) = |x| + 6.0 Results of different optimization methods for f(x) = |x| + 6.0 -- (part II) Inspiration The inspiration for sotorch appeared when I started implementing Visual Odometry in some experiments of my MSc course. I saw that I could save a lot of time if I used the automatic differentiation of PyTorch's autograd instead of implementing the Jacobians and Hessians by hand. Also, I saw that, for this type of problem (that requires second-order optimization), It was better to use some of the optimization algorithms implemented in SciPy instead of the optimizers that come with PyTorch - I tried different optimizers from both libraries and L-BFGS-B from SciPy worked better for me. Then I decided to combine those features of each library: autograd from PyTorch and optimizers from SciPy. What it does sotorch saves the user from the work of writing Jacobians and Hessians manually, as it uses PyTorch in order to get the analytic Jacobians and Hessians (in the majority of cases PyTorch provides the analytical gradients). The analytic gradients are preferred over their numeric counterpart because of computing cost and precision. The user is free to define any objective function, as long as it is composed of differentiable operations from PyTorch. How we built it We borrowed some ideas contained in a GitHub issue discussion at the PyTorch repository - it is properly referenced in sotorch's repository. We also use almost the same interface of scipy.optimize.minimize. So sotorch is kind of a wrapper of scipy.optimize.minimize equipped with automatic Jacobians and Hessians. Challenges we ran into We had to complete the first version of sotorch and make this submission with a lot of parallel work going on in our workplace. It was a challenge but we are proud of completing it. Accomplishments that we're proud of We were proud of seeing sotorch being imported like a "proper module" because it was a novelty for us. 
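The core idea — letting autograd supply the derivatives that scipy.optimize.minimize would otherwise approximate numerically — can be sketched as below. This is a minimal illustration; sotorch's real interface mirrors scipy.optimize.minimize more closely and also supplies Hessians:

```python
import numpy as np
import torch
from scipy.optimize import minimize

def minimize_with_autograd(f, x0, method="BFGS"):
    """Minimize a PyTorch-differentiable objective with a SciPy optimizer."""
    def fun(x):
        # SciPy passes a float64 ndarray; evaluate the objective as a scalar.
        return f(torch.from_numpy(x)).item()

    def jac(x):
        # Analytic gradient via autograd instead of finite differences.
        t = torch.from_numpy(x).requires_grad_(True)
        (grad,) = torch.autograd.grad(f(t), t)
        return grad.numpy()

    return minimize(fun, np.asarray(x0, dtype=np.float64), jac=jac, method=method)

res = minimize_with_autograd(lambda x: ((x - 3.0) ** 2).sum(), [0.0, 0.0])
```

Any objective composed of differentiable PyTorch operations works unchanged, which is exactly what saves the user from hand-writing Jacobians for problems like Visual Odometry.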
What we learned We had to do some new things, like adding a license file and creating an "installable wheel" for using sotorch as a module. Now we want to make it public in some package repository. What's next for sotorch We would like to extend sotorch to other optimizers, allow the usage of custom optimizers, add support for Jacobian-vector and Hessian-vector products, and add support for parallelization on GPU. Built With numpy python pytorch scipy Try it out github.com
sotorch
Auxiliary second-order optimization tools compatible with PyTorch.
['Ronnypetson Souza da Silva', 'MirelleB']
[]
['numpy', 'python', 'pytorch', 'scipy']
60
10,159
https://devpost.com/software/sentiment-analysis-thw35j
Demo of Sentiment Analysis in action Inspiration After learning about BERT, I wanted to try my hand at implementing a simple Sentiment Analysis script using PyTorch and HuggingFace's Transformers. When I saw this hackathon I decided to add other Transformers models such as ALBERT and DistilBERT to it as well as change the whole code from beginning to end to make it more readable. What it does We can put in an English phrase, and it'll predict whether it has a positive or negative sentiment. It'll also let us know its certainty of this prediction. This can be done directly on the command line or with the client & server implemented using vue.js & flask. How I built it It uses Stanford's Sentiment Treebank as the dataset, which has movie reviews and labels in the form of positive or negative. Thanks to PyTorch I was able to easily create a dataloader for this as well as implement training and evaluation loops. I also used HuggingFace's Transformers for the 3 different kinds of transformers I used. Challenges I ran into It was hard to figure out how to upload a model I myself trained, so that people could demo my implementation by just downloading my model without having to train theirs. Accomplishments that I'm proud of After sharing my repository on LinkedIn it quickly reached 100+ stars, as well as on the model's page ( https://huggingface.co/barissayil/bert-sentiment-analysis-sst ) I can see that it's already been downloaded 3000+ times! So it means some people are actually using it, which is amazing. What I learned Implementing AI models and sharing them with people is fun! What's next for Sentiment Analysis I'll add other transformers such as T5. I will also do some experiments with the multi-lingual version of BERT. I already tried it on my own and saw that by just training it in SST, which is in English, it already has a pretty good performance in French (but not in Turkish). Built With flask python pytoch transformers vue.js Try it out github.com
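The "certainty" the demo reports is just the softmax probability of the predicted class. A minimal sketch of that final step — the two-logit head below is illustrative; the real model is a fine-tuned transformer:

```python
import torch

def predict_sentiment(logits):
    """Turn a classifier's two logits into a label plus a certainty score."""
    probs = torch.softmax(logits, dim=-1)
    certainty, idx = probs.max(dim=-1)
    label = ["negative", "positive"][idx.item()]
    return label, certainty.item()

# Example logits as a fine-tuned sentiment head might emit them.
label, certainty = predict_sentiment(torch.tensor([0.3, 2.1]))
```

Because softmax outputs sum to 1, the certainty is always between 0.5 and 1.0 for a two-class head, which makes it easy to display alongside the prediction.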
Sentiment Analysis
Sentiment analysis neural network trained by fine-tuning BERT, ALBERT, or DistilBERT on the Stanford Sentiment Treebank.
['Baris Sayil']
[]
['flask', 'python', 'pytoch', 'transformers', 'vue.js']
61
10,159
https://devpost.com/software/disease-prediction-using-x-ray
Lung-Disease Prediction Changing the healthcare industry. View the demo » About The Project An intelligent platform to predict diseases from chest x-rays. This platform makes use of a machine learning algorithm capable of tracking and detecting diseases. Artificial Intelligence (AI) has emerged as one of the most disruptive forces behind digital transformation that is revolutionizing the way we live and work. This applies to the field of healthcare and medicine too, where AI is accelerating change and empowering physicians to achieve more. Resources used in the project National Institutes of Health (NIH) chest x-ray dataset. This dataset is publicly available and medically curated. Technique State-of-the-art DenseNet for image classification. DenseNet is an open-source deep learning algorithm with implementations available in Keras (using TensorFlow as a back-end). We also explored the PyTorch version of DenseNet. Class Activation Maps are used to understand model activation and visualize it. Motivation Some facts: Two-thirds of the world's population lacks access to trained radiologists, even when imaging equipment is readily available. The lack of image interpretation by experts may lead to delayed diagnosis and could potentially increase morbidity or mortality rates for treatable diseases like pneumonia. Approx. 2.5 million people die from lung diseases. Built With With a lot of love 💖, motivation to help others 💪🏼 and Python 🐍, using: Pytorch Google Colab (with its wonderful GPUs) A real-time Flask and Dash integration (along with Dash Bootstrap Components ) A real-time database, of course, from Firebase Vercel (hosting repository) Angular 10 Inspired by the CheXNet work done by Stanford University ML Group, we explore how we can build a deep learning model to predict diseases from chest x-ray images. Usage Data Exploration We use a labelled dataset that was released by the NIH. The dataset is described in this paper, and you can download it from here .
It includes over 30,805 unique patients and 112,120 frontal-view X-ray images with 14 different pathology labels (e.g. atelectasis, pneumonia, etc.) mined from radiology reports using NLP methods such as keyword search and semantic data integration. The NIH-released data also has 983 hand-labelled images covering 8 pathologies, which can be considered as strong labels. Model Training Deep neural networks are notoriously hard to train well, especially when the neural networks get deeper. We use the DenseNet-121 architecture with pre-trained weights from ImageNet as initialization parameters. This allows us to both pass the gradient more efficiently and train a deeper model. This architecture alleviates the vanishing-gradient problem and enables feature map reuse, which makes it possible to train very deep neural networks. We used the AUROC score to measure the performance for the diseases by selecting the model with the lowest validation loss.

Disease             AUC Score
Atelectasis         0.689804
Cardiomegaly        0.699429
Infiltration        0.655084
Mass                0.601279
Nodule              0.571633
Pneumonia           0.634000
Pneumothorax        0.677171
Effusion            0.769636
Consolidation       0.725847
Edema               0.817075
Emphysema           0.603675
Fibrosis            0.660121
Pleural_Thickening  0.650140
Hernia              0.647572

What's next for Disease Prediction using X-RAY Develop a phone application that can recognise the diseases Improve user interface for the angular web app Partner with doctors to build a real-world chest x-ray database. Test prototype with a Radiologist Challenges Early diagnosis and treatment of pneumonia and other lung diseases can be challenging, especially in geographical locations with limited access to trained radiologists. Database limitations There are several limitations of the dataset which may limit its clinical applicability or performance in a real-world setting.
First, radiologists often interpret chest x-rays acquired in two projections, frontal and lateral, which aids in both disease classification and localization. The NIH dataset we used in this blog only provides frontal projections (PA and AP). Second, clinical information is often necessary for a radiologist to render a specific diagnosis, or at least provide a reasonable differential diagnosis. Built With angular.js python pytorch Try it out py-torch-hackathon-git-upload.shreyaspapi.vercel.app
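Per-disease AUROC scores like those in the table above are computed column by column for a multi-label model. A toy sketch using sklearn's roc_auc_score — the labels and predictions below are made up, not the real model's outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# One row per patient, one column per pathology (here just 2 toy diseases).
y_true = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
# Predicted probabilities from the classifier's sigmoid outputs.
y_prob = np.array([[0.9, 0.2], [0.3, 0.8], [0.7, 0.6], [0.1, 0.4]])

# AUROC is computed independently for each disease column.
per_disease_auc = [
    roc_auc_score(y_true[:, d], y_prob[:, d]) for d in range(y_true.shape[1])
]
```

In this toy example every positive case scores higher than every negative case for both columns, so both AUROCs come out to 1.0; the real model's per-disease scores are the 0.57–0.82 values tabulated above.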
Disease Prediction using X-RAY
Disease prediction system that can help radiologists review chest X-rays more efficiently. The system is able to help detect pneumonia and 13 other thoracic diseases using chest x-ray images.
['Nikhil Ramakrishnan', 'Shreyas Papinwar']
[]
['angular.js', 'python', 'pytorch']
62
10,159
https://devpost.com/software/go-doctor
template Inspiration fix long-running problems What it does detecting the severity of symptoms How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Go doctor Built With bootstrap flask python python-package-index pytoch pytorch xd Try it out github.com xd.adobe.com
Go doctor
a diagnostic application and hospital treatment proposal according to your area
['Francisco Nathanaël CAPO-CHICHI']
[]
['bootstrap', 'flask', 'python', 'python-package-index', 'pytoch', 'pytorch', 'xd']
63
10,159
https://devpost.com/software/rld
Inspiration As a machine learning engineer and a PhD candidate in Reinforcement Learning I spend a lot of time evaluating the trained agent's behavior. While some of the information can be presented in TensorBoard in the form of Key Performance Indicators (KPIs), some insights can be gathered only by visualizing the agent's behavior and observing it with a human eye. Typically, this kind of visualization is done using the render() method of an OpenAI Gym-compatible environment, but this approach suffers from four major drawbacks: the trajectory is generated live, thus if you haven't specified a seed value, you won't be able to replay the same trajectory once again; by default, there is no control over the playback - you have to implement pausing on your own, but rewinding is completely out of the scope; the render() method has access to the whole internal state of the environment. This might potentially lead to dangerous situations, where the scene is rendered using data that is not included in the observation; the visualization of the observation only doesn't fully explain why the agent picked an action A, instead of an action B, at some time step T. What it does rld solves these drawbacks by using offline rollouts and providing a fully-controllable external viewer, which uses only the observation data to render the scene. rld also makes it possible to calculate and visualize observation attributions with respect to a picked or any given action. How I built it Captum and PyTorch are used to calculate attributions of the observation with respect to the given actions. Flask is used to serve static files and provide a simple API to query for the rollout and its trajectories. The front-end is written in React and one type of viewer ( WebGLViewer ) uses three.js for WebGL in the browser. Challenges I ran into Currently, Captum only accepts torch.Tensor s as an input and allows for a single target definition.
To use it in reinforcement learning field, I had to implement multiple wrappers, which encodes e.g. dict-like observation space into a single torch.Tensor or stacks multiple torch.Tensors to calculate attributations for the same observation, but for the multiple targets (e.g. with MultiDiscrete action space). Additionally, when testing rld on Atari environments, it turned out that serializing large arrays to JSON format is not the most efficient solution. :) Accomplishments that I'm proud of Completed the hackathon! What I learned React. What's next for rld Enhancing compatibility by adding new viewers and improving API to use custom viewers, and extending functionality by adding more debugging tools (e.g. trajectory hotspots, observation surgery tool). Built With bootstrap javascript python pytorch ray react rllib Try it out github.com
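The dict-observation encoding the wrappers above perform can be sketched roughly as follows (a minimal NumPy illustration of the idea, not rld's actual code; `flatten_obs` and `unflatten_obs` are hypothetical names):

```python
import numpy as np

def flatten_obs(obs: dict) -> np.ndarray:
    """Encode a dict-like observation into a single flat vector.
    Keys are sorted so the layout is deterministic across calls."""
    return np.concatenate([np.asarray(obs[k], dtype=np.float32).ravel()
                           for k in sorted(obs)])

def unflatten_obs(vec: np.ndarray, spec: dict) -> dict:
    """Invert flatten_obs, given the original shapes (the `spec`)."""
    out, i = {}, 0
    for k in sorted(spec):
        shape = spec[k]
        n = int(np.prod(shape))
        out[k] = vec[i:i + n].reshape(shape)
        i += n
    return out

obs = {"position": np.zeros((3,)), "camera": np.ones((2, 2))}
vec = flatten_obs(obs)  # shape (7,): 4 camera values + 3 position values
restored = unflatten_obs(vec, {"position": (3,), "camera": (2, 2)})
```

A real wrapper would additionally have to handle nested spaces and dtypes, but the round-trip above is the core trick that lets a dict observation pass through an attribution API that only accepts a single tensor.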
rld
A development tool for evaluation and interpretability of reinforcement learning agents.
['Tomasz Wrona']
[]
['bootstrap', 'javascript', 'python', 'pytorch', 'ray', 'react', 'rllib']
64
10,159
https://devpost.com/software/recyclemate-bot
RecycleBot Conversation Flow - A typo somewhere in the mix Testing out the Raspberry Pi & Arduino - The MCUs behind the smart bin. Testing the mobile feature Inspiration I recently completed a Tools for Data Analysis course, which was taught in Python, which I found interesting. Signing up for this hackathon made me excited to practice and learn more. This project draws inspiration from my experience during my stay in Finland. At times I found myself recycling bottles - i.e. 5 500ml plastic bottles are worth 1 Euro = 1 cheeseburger at McDonald's. Stores like Aldi have set up a big machine that allows users to recycle bottles for a cash incentive or a reduction in the cost of goods for the next purchase. In Finland, Zambia ( my home country ), and in other parts of the world that I have lived in, I have noticed that a huge number of plastic bottles or broken glass bottles are not disposed of properly on walking trails, at bus stops and almost anywhere. The problem is that the cost of the bottles is often passed on to the consumer by businesses, with oftentimes no incentive or way to recover the cost and save the environment. Machines that recycle, such as those at the Aldi store and many others, have huge costs that don't favor small businesses and startups, nor do these facilities operate 24/7. What it does RecycleMateBot is a smart bin that classifies bottles meant to be disposed of, rewards consumers for recycling and provides businesses with analytics, i.e. how many glass bottles are recycled in the city of Fremantle? A fitted camera and motion sensor take pictures after motion has been detected, and an LED signals to the user that it is now a good time to place the bottle in the bin. How I built it I started off by building a trigger and a fun way to interact with it. I set up the Raspberry Pi, camera and motion sensors. I wrote a script that takes a photo, delays, and takes another one. Another script I had written sent photos to an AWS S3 bucket.
I used the AWS Python SDK in part of the upload script. I later set up a chatbot on AWS Lex; this bot walks a user through the recycling mission. If the user responds "Yes" to the question "Do you want to recycle your bottle?", an AWS Lambda function is triggered and sends an MQTT message to a specific topic called "start/light". The Arduino 1500 acts as the client listening for an MQTT message; as soon as it receives one, its LED shines brightly and the user knows that it is now time to place the bottle in the bin. The model was pretrained, and AlexNet rather than DenseNet was used. Challenges I ran into I realized that the 8MP Raspberry Pi camera wasn't going to provide as much as it could - so it was time to turn to Facebook to collect pictures and send them to S3, which is a challenge I am still trying to figure out. I also found it challenging to get access to resources like a 3D printer for the device enclosure. Accomplishments that I'm proud of I am proud to have started the project and made a submission. I am proud to have built a notebook file that extracts key data for the problem I am trying to solve. What I learned I have learned a lot about image datasets and how the number of layers can determine the expected outcomes. I learned how to use Docker & CloudFormation to deploy stacks to AWS. YAY What's next for RecycleMate Bot Hopefully you can see a RecycleMateBot at a park near you soon Add cash / micro payments using the uRaiden network 3D print an enclosure post-COVID-19 pandemic. Built With amazon-web-services arduino c+ python pytorch raspberry-pi react Try it out github.com recyclemate.notebook.ap-southeast-2.sagemaker.aws recyclebot-71bba.web.app app.recyclematebot.com
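The "Yes" confirmation leading to an MQTT message on the "start/light" topic can be sketched like this (a simplified stand-alone illustration; the real project uses an AWS Lex fulfillment Lambda and an IoT/MQTT client, so `publish` here is a hypothetical stand-in for that client call):

```python
# Sketch of the fulfillment logic: when the user confirms they want to
# recycle, publish to "start/light" so the Arduino can light its LED.
def handle_recycle_intent(user_reply: str, publish) -> str:
    if user_reply.strip().lower() in ("yes", "y", "yeah"):
        publish(topic="start/light", payload="on")
        return "Great! Please place the bottle in the bin when the light turns on."
    return "No problem, maybe next time."

# A fake publisher records what would have been sent over MQTT.
sent = []
def fake_publish(topic, payload):
    sent.append((topic, payload))

reply = handle_recycle_intent("Yes", fake_publish)
# sent is now [("start/light", "on")]
```

In production the fake publisher would be replaced by the AWS IoT data-plane client (or any MQTT library), with the Arduino subscribed to the same topic.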
RecycleMate Bot
RecycleMate is a platform that makes it possible to recycle bottles in a multi-beneficial way for businesses by providing data using machine learning and rewarding individuals for recycling.
['Chilumba Machona']
[]
['amazon-web-services', 'arduino', 'c+', 'python', 'pytorch', 'raspberry-pi', 'react']
65
10,159
https://devpost.com/software/i-got-total-day
The start Be ready to record the video Start recording... Uploading... Processing... Downloading... Finished! The architecture of our application Inspiration Short videos are nowadays a popular way of entertaining and showing one's charisma on social media such as Instagram. For example, many apps for special face effects have emerged and drawn great public attention. In contrast, few apps are designed to apply effects to the whole body. This kind of app could bring about more social interaction, like TikTok achieves, and create much more fun between friends. However, people can spend lots of time editing their videos before releasing them to social media. This mainly reduces users' willingness to shoot a video. As a result, it is vital to speed up the process and even automatically replace some funny clips. Source from https://pythonawesome.com/implementation-of-face-landmark-detection-with-pytorch/ What it does For these reasons, we made an effort to develop a system for calculating motion similarity. The motion, represented as a series of keypoints, is first collected through PosNet, built on PyTorch. Based on this calculation, the most similar video clip is used to replace the original clip, and the edited video gets an amazing transition effect . How we built it The system can be simply divided into a frontend and a backend. We developed an iOS app as the user interface, which lets users shoot their own video. After that, the video is sent to the backend for further processing, and the result is eventually visualized in the frontend (which is the iOS app). The backend is responsible for two important tasks: pose prediction and automatic video editing. The weights of PosNet originate from this GitHub repo , which is built on PyTorch . All functions related to image processing are implemented in Python, including basic I/O. Our system is deployed through ngrok and can be accessed through the iOS app.
Challenges we ran into First of all, a proper length of video clip, as the basic unit for both data transmission and image processing, had to be tested and determined. In addition, the preparation of the video database is time-consuming, and only videos with few people and a clean background were kept. Finally, the biggest challenge occurs in the calculation of similarity, because of differences in the scale and angle of targets in different videos. An additional transform matrix has to be applied to align their keypoints. By the way, since we decided to complete our work in a true "hackathon" way, the work only began two days before the deadline. Although we came up with the idea a few weeks ago, we left ourselves an extremely tight schedule for programming😂🖖 Accomplishments that we're proud of We believe that the application can strengthen connections between people in a brand new way! The idea can be extended to existing social media apps or video-sharing services🇹🇼 What we learned More familiar with Pytorch 🔥 New Knowledge ✨ When surveying ways to deploy a PyTorch model, we found torchserve , which was released just 4 months ago. Although we did not adopt it in the end, this tool has great potential for practical use. Besides, through the implementation of PosNet, we learned more about how keypoints of the human body are predicted. Great TeamWork Leads To Success Although the schedule was quite tight, it helped that we split the jobs appropriately in advance. This let us successfully finish the application on time! What's next To better achieve our goal, the application should be compatible with popular social media platforms so the edited video can be shared with friends. Afterwards, we should expand our database for similarity calculation by collecting more videos. Furthermore, it might be fascinating to allow users to define and customize their own video database.
Since the size of the current database is quite small, a more efficient way of calculating similarity is required to handle a huge volume of data in the future. Person segmentation is also an interesting development direction, because the result can help apply facial effects to the whole human body. Built With node.js python pytorch swift
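The similarity calculation with scale and position alignment mentioned above can be sketched like this (a simplified NumPy illustration of normalizing keypoints to their centroid and overall scale before comparing; not the app's actual code, and a full solution would also handle rotation):

```python
import numpy as np

def normalize_pose(kps: np.ndarray) -> np.ndarray:
    """Center keypoints of shape (N, 2) on their centroid and scale to
    unit norm, so poses at different positions/sizes become comparable."""
    centered = kps - kps.mean(axis=0)
    norm = np.linalg.norm(centered)
    return centered / norm if norm > 0 else centered

def pose_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine-style similarity between two normalized poses (1.0 = identical)."""
    return float(np.sum(normalize_pose(a) * normalize_pose(b)))

pose = np.array([[0., 0.], [1., 0.], [0., 2.], [1., 2.]])
shifted_scaled = pose * 3.0 + 10.0   # same pose, different scale and position
pose_similarity(pose, shifted_scaled)  # ≈ 1.0
```

Searching the clip database then reduces to picking the clip whose keypoint sequence maximizes this per-frame similarity.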
I got a total day
A deep learning-based video-sharing social media app - TikTorch!
['Randy Liu', '韓維 107碩-', '子皓 林', 'jimcute219']
[]
['node.js', 'python', 'pytorch', 'swift']
66
10,159
https://devpost.com/software/voice-assistant-7e368k
Voice assistant using pytorch project name : Voice assistant powered by Pytorch Team members: 1)Harini I (Sri sairam engineering college,India) 2)Balakrishnan V (St.Joseph's College of engineering,India) Description : We have built a simple voice assistant application which basically handles customer queries. Basically, our concept is to make it a shop assistant, but we also added some cool features so that it works as a basic voice assistant. We used the Natural Language Toolkit, which helps us understand the text and gain valuable insights. NLP tools give us a better understanding of how language may work in specific situations. Moreover, people also use them for different business purposes. Such purposes might include data analytics, user interface optimization, and value proposition. But it was not always this way. Natural Language Toolkit features include: Text classification Part-of-speech tagging Entity extraction Tokenization Parsing Stemming Semantic reasoning We used a pretrained model. Processes involved: 1)Tokenization 2)Stemming 3)Creating a bag of words (array of numbers) 4)Creating intents Eg: tag:'greeting'->pattern:'hi how are you','Hey hi'->response:'Hey hi','hello'. 5)Creating training data 6)Loading the model. We used the speech recognition module to get input from the user and performed the above-mentioned processes. An appropriate response from the intent file is chosen by our model, and the text output is converted to speech. Features: 1)Sending email 2)Opening browsers 3)Playing songs 4)Getting feedback from customers when they leave the shop. We have also built the front end of our application, but there is a problem in our Flask server. We are currently working on fixing it.
We also trained YOLOv5 to classify gender so that the assistant can address customers appropriately, which would be really nice, but we couldn't link it with our project. We are beginners, so we struggled a lot with things like creating an API for our trained model and using it in another project. We hope we have done the best we can. We really learned a lot of new things and had a great time participating in the PyTorch Summer Hackathon 2020. Built With flask gmailapi neuralnet numpy python pytorch pyttx3 speechrecognition torch Try it out github.com
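The tokenization and bag-of-words steps listed above can be sketched in plain Python (a minimal illustration of the idea; in practice NLTK's word_tokenize and PorterStemmer would do the tokenizing and stemming, and the resulting vector would be fed to the intent classifier):

```python
def tokenize(sentence: str) -> list:
    """Split a sentence into lowercase word tokens
    (a crude stand-in for NLTK's word_tokenize)."""
    return [w.strip(".,!?").lower() for w in sentence.split()]

def bag_of_words(sentence: str, vocabulary: list) -> list:
    """Array of 0/1 flags: 1 if the vocabulary word occurs in the sentence."""
    tokens = set(tokenize(sentence))
    return [1 if word in tokens else 0 for word in vocabulary]

vocab = ["hi", "how", "are", "you", "bye"]
bag_of_words("Hi, how are you?", vocab)   # [1, 1, 1, 1, 0]
```

Each intent pattern in the intents file is encoded the same way during training, so matching a user utterance to the 'greeting' tag becomes a comparison of these fixed-length vectors.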
Voice assistant
We have done a simple voice assistant application which basically handles customer queries.It also do browser operations, sending e-mails,playing songs etc.
['Balakrishnan Vivekanandhan', 'Harini I']
[]
['flask', 'gmailapi', 'neuralnet', 'numpy', 'python', 'pytorch', 'pyttx3', 'speechrecognition', 'torch']
67
10,159
https://devpost.com/software/grow-n-track-u7962v
Webpage A look into the Locust Locator MAP (representing every locust spotting over a time period) LocustTrack Our Vision : Equipping every single farmer with accessible resources and aiding them in making the right choices. Q1. What is a locust attack/invasion/plague? When locusts start attacking crops and thereby destroy the entire agricultural economy, it is referred to as a locust plague/locust invasion. Plagues of locusts have devastated societies since the Pharaohs led ancient Egypt, and they still wreak havoc today. Over 60 countries are susceptible to swarms. Q2. Types of locusts - There are four types of locusts that create a plague – desert locust, migratory locust, Bombay locust, and tree locust. The desert locust is a notorious species. Found in Africa, the Middle East, and Asia, this species inhabits an area of about six million square miles, or 30 countries, during a quiet period, according to National Geographic. During a plague, when large swarms descend upon a region, however, these locusts can spread out across some 60 countries and cover a fifth of Earth's land surface. Q3. How and when do locusts become harmful? During dry spells, solitary locusts are forced together in the patchy areas of land with remaining vegetation. This sudden crowding changes the locusts' behaviour. Then, when rains return—producing moist soil and abundant green plants—locusts begin to reproduce rapidly and become even more crowded together. In these circumstances, they shift completely from their solitary lifestyle to a group lifestyle in what’s called the gregarious phase. Locusts can even change colour and body shape when they move into this phase. Their endurance increases and even their brains get larger. Locusts have huge appetites. One of these insects can eat its own weight in food in a single day. And they're devastating crops in East Africa, where millions of people are already considered food-insecure. Q4. What is a locust swarm?
Locust swarms are typically in motion and can cover vast distances—some species may travel 81 miles or more a day. Locust swarms devastate crops and cause major agricultural damage, which can lead to famine and starvation. A swarm of desert locusts containing around 40 million locusts can consume (or destroy) food that would suffice the hunger needs of 35,000 people, assuming that one person consumes around 2.3 kg of food every day. In 1954, a swarm flew from northwest Africa to Great Britain, while in 1988, another made the lengthy trek from West Africa to the Caribbean, a trip of more than 3,100 miles in just 10 days. Locusts occur in many parts of the world, but today locusts are most destructive in subsistence farming regions of Africa. Q5. Locust effect on Africa? The worst locust outbreak in generations has descended upon East Africa and the Horn of Africa. Without immediate action, 4.9 million people could face starvation this summer. This disaster comes at the worst possible time for countries like Somalia, already facing the double emergency of food shortage and COVID-19. Seven facts about the situation on the ground: 1. Desert locusts are extremely dangerous – These migratory insects inflict insurmountable damage in minutes. Even a tiny swarm consumes the same amount of food in one day as 35,000 people. Swarms have already destroyed hundreds of thousands of hectares of crops and pastureland in eight countries—Kenya, Uganda, South Sudan, Ethiopia, Somalia, Eritrea, Djibouti and Sudan—and threaten to spread wider. 2. Five million people are at risk of hunger and famine – As of March, the locust infestation in East Africa has already damaged more than 25,000 kilometers of cropland. Without swift intervention, populations will face mass starvation this summer. 3.
A new swarm is hatching – A fourth generation of locust eggs is now hatching, which experts predict will create a locust population 8,000 times larger than the current infestation. 4. Somalia will likely be hit hardest – The Somali government was first in the region to declare a nationwide emergency in response to the desert-locust crisis. Without humanitarian assistance, 3.5 million people are projected to face a food crisis between July and September. The region is already overwhelmed by cycles of widespread violence, drought, floods, chronic food shortages, and disease. 5. This is the worst outbreak in 70 years – Without expedited preventative measures, swarms will migrate from East Africa to West Africa. “This is the worst locust invasion we have seen in our generation,” says Sahal Farah of Docol, an IRC partner organization. “It destroyed pastures, contaminated water sources and [has] displaced many pastoral households. The worst of all is that we do not have the capacity to control it, and so far we have not received any external support.” 6. Women face increased risk – If harvests fail, the IRC estimates that 5,000 households, especially those led by women, will need urgent humanitarian assistance by August. As food prices skyrocket, women and girls will face an increase in violence and theft as their partners are forced to travel in search of food and work. Additionally, women will be forced to take on additional responsibilities in managing existing farms or small businesses, even as they tend to the needs of their families. 7. More funding is necessary to stop widespread famine – The IRC is calling for $1.98 million to alleviate the desert-locust emergency in Somalia in 2020. We are also appealing to the United Nations and affected countries to continue technical analysis of locust movements along with continued information sharing—before it is too late. Q6. Crop failure and hunger famine in Africa. In Africa, hunger is increasing at an alarming rate.
Economic woes, drought, and extreme weather are reversing years of progress so that 237 million sub-Saharan Africans are chronically undernourished, more than in any other region. In the whole of Africa, 257 million people are experiencing hunger, which is 20% of the population. Successive crop failures and poor harvests in Zambia, Zimbabwe, Mozambique, and Angola are taking a toll on agricultural production, and food prices are soaring. In the past three growing seasons, parts of Southern Africa experienced their lowest rainfall since 1981. As a result of these dire events, 41 million people in Southern Africa are food insecure and 9 million people in the region need immediate food assistance. That number is expected to rise to 12 million as farmers and pastoralists struggle to make ends meet during the October 2019 through March 2020 lean season. Close to five million people in East Africa could be at risk of famine and hunger as the ‘worst locust invasion in a generation’ continues to destroy crops, contaminate water sources and displace thousands of households, a new report has warned. The infestation, which first appeared in the region last June and has already passed through a number of generation cycles, is feeding on hundreds of thousands of hectares of crops across at least eight countries. HISTORY OF FOOD FAMINE – • 2011 to 2012 — The Horn of Africa hunger crisis was responsible for 285,000 deaths in East Africa. • 2015 to 2016 — A strong El Niño affected almost all of East and Southern Africa, causing food insecurity for more than 50 million people. • 2017 — 25 million people, including 15 million children, needed humanitarian assistance in East Africa. In September, inter-communal conflict in Ethiopia led to more than 800,000 people becoming internally displaced. • 2018 — Africa was home to more than half of the global total of acutely food-insecure people, estimated at 65 million people.
East Africa had the highest number at 28.6 million, followed by Southern Africa at 23.3 million, and West Africa at 11.2 million. • 2019 — Food security is deteriorating and expected to worsen in some countries between October 2019 and January 2020. Locust attacks across the world: By the end of 2019, there were swarms in Ethiopia, Eritrea, Somalia, Kenya, Saudi Arabia, Yemen, Egypt, Oman, Iran, India, and Pakistan. As of January 2020, the outbreak is affecting Ethiopia, Kenya, Eritrea, Djibouti, and Somalia. The infestation "presents an unprecedented threat to food security and livelihoods in the Horn of Africa," according to the United Nations Food and Agriculture Organization. Kenya has reported its worst locust outbreak in 70 years, while Ethiopia and Somalia haven’t seen one this bad in a quarter of a century. The swarms are now heading toward Uganda and fragile South Sudan, where almost half the country faces hunger as it emerges from civil war. Uganda has not had to deal with a locust infestation since the 1960s and is already on alert, so there is concern about the ability of experts on the ground to deal with it without external support. In a country like South Sudan, where already 47% of the population is food insecure, this crisis would have devastating consequences. Q7. How can a locust swarm/attack be prevented? Weather patterns and historical locust records help experts predict where swarms might form. Once identified, an area is sprayed with chemicals to kill locusts before they can gather. Historically, locust control has involved spraying organophosphate pesticides on the night resting places of the locusts. Intervention in the early stages of a locust outbreak is generally advised. This reduces the amount of pesticide to be applied, because the locusts are localized over a relatively smaller region.
As an outbreak continues to develop, first into an upsurge and then into a plague, more and more countries are affected and much larger areas need to be treated. Nevertheless, a preventive strategy may not always be effective. Access to infested areas may be limited due to insecurity; financial and human resources can’t be mobilized quickly enough to control an outbreak in time; or weather and environmental conditions are unusually favourable for locust development, so the national control capacity is overwhelmed. So, what can be done? HERE COMES THE USE OF LOCUST LOCATOR Locust swarm attacks can be prevented with early monitoring of the breeding grounds of the insects. Now, the United Nations is already doing this work. Through various ground, air and satellite surveillance techniques, image processing methods, data analysis and a diversified modus operandi, scientists, researchers and biologists are working day in and day out in order to build a model, or a method, so that these attacks can be prevented before they grow to wreak massive destruction and havoc. But the common man cannot comprehend the need or purpose behind all this. This is a situation where experts with years of experience and modern technological software, methods and tools at their disposal are still baffled by the unusually high outbreak of locusts this year. So what can we expect an ordinary, let alone an illiterate, person to do? How can they know how to save themselves from this raging menace? How can we ensure that they - the pillars of support of this entire urbanised culture and people - survive and continue to prosper? Here’s where our application is useful. By making an application in their local language and making it easy to use, we remove any challenges the locals might face while taking advantage of our app. Q8. But why did we do this?
Being fortunate enough to be able to use technology amidst the comfort of our living conditions, we were discussing the havoc that this year had bestowed upon humankind, starting with the Australian bushfires and continuing to COVID-19. And we yearned to do something in order to make the world a slightly better place than what it was. We knew that we couldn’t be frontline warriors against coronavirus alongside doctors and other personnel, since none of us has a medical background. But we believed that using our knowledge in the fields of data science, database management and app development, to name a few, we could at least try to do something to give back to society, and thus was born GROW N TRACK. So, while browsing for things we could do, we stumbled upon this idea and saw the wonderful initiative Microsoft and the African Literacy Project had taken to organise this Hack for Africa global event. Q9. What do we do? Essentially, we track locusts and send warning messages to registered users. From the satellite data available, we obtain the locusts' locations. We keep a record of the user's location, and when the locusts enter the vicinity of the user, we warn them via text and WhatsApp. For now we used WhatsApp, but if we can implement the project with funding and resources then we plan to use normal text messages. Q10. How is warning them useful? It helps them take the necessary protection to save themselves from such adversities. Also, it has a vital role to play in the formulation of future plans. We implemented machine learning in our tracker to predict the direction of movement a couple of days before it happens and try to predict the next possible mass breeding spots. We also plan to have a feature in which a user can mark a place where they spot locusts, and if we get the same marking from within a specified radius from several users, we alert the concerned authorities and mark the place on our map.
By analysing the data, we found that locusts infested only specific crops, and only during specific time periods of the year. By correlating that with the pH of the soil in those areas, we were successful in building an algorithm that helps farmers decide the best crop to be planted according to the pH of the soil, so that they can yield the maximum profit from their crops, all the while being protected from the problem of locusts ruining their hard work. As of April 2020, efforts to control the locusts are being hampered by ongoing restrictions on travel and shipping due to the COVID-19 pandemic, contributing to the global coronavirus food crisis. Hence, if we can implement Grow N Track, then surely we can take a huge leap in bringing the whole world back to normalcy, as nations can slowly return to their pre-disaster food production levels and hence resume trading activities in food and other products. Q11. Who are we? Visit the developers page to know more about us and contact us . We love to work on projects that help improve people's lives and leave a good impact on this world. Regards- Kartik Agarwal , Anush Krishnav.V , Indrashis Mitra , Nima Pourjafar Built With css flask html5 javascript php python sql Try it out grow-n-track.herokuapp.com
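The "warn users when locusts enter their vicinity" check described above can be sketched with a great-circle distance test (a simplified stand-alone illustration; the 50 km alert radius, coordinates and function names are illustrative, not the app's actual values):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_warn(user_loc, swarm_loc, radius_km=50.0):
    """True if a reported swarm is within the alert radius of a user."""
    return haversine_km(*user_loc, *swarm_loc) <= radius_km

should_warn((1.29, 36.82), (1.30, 36.85))  # True: the swarm is only a few km away
```

The tracker would run this check for every registered user against each new satellite or crowd-reported spotting, then dispatch the WhatsApp/text alert for the matches.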
LocustTrack
An efficient web app that provides farmers make better choices and save their work from locust attacks .
['Kartik Agarwal', 'Nima Pourjafar', 'anush krishna v']
['Philly Codefest 2020 Collaborative Award', 'Philly Codefest 2020 Best Hack for Social Good', 'Bring ESG funds to the spotlight (Vanguard)']
['css', 'flask', 'html5', 'javascript', 'php', 'python', 'sql']
68
10,159
https://devpost.com/software/vortex
Vortex development pipelines Inspiration Our startup focuses on computer vision products that enable insight automation from available CCTV or smartphone cameras. Throughout our journey, we have explored and encountered many deep learning-based computer vision algorithms across different frameworks and implementations. Thus, we started to face some common problems: Slow knowledge distribution across research members, due to the high learning curve of state-of-the-art models and utilities for model development ( such as data augmentation techniques, data loading techniques, model optimization, etc. ) Difficulty implementing incremental improvements ( for example, graph optimization ) when we utilize many frameworks Thus, we realized we needed a unified framework for model development. This framework must be modular so it can easily be integrated with other model development tools, e.g. : experiment logging, hyperparameter optimization; it also needs to provide some flexibility to reuse its components if users choose to develop their own training scripts. And finally, it needs to support production-grade model optimization for deployment scenarios. Our expectation for this framework is that the user can easily explore many utilities while cutting the unnecessary time spent learning the specific implementation details of deep learning model development, so they can iterate faster on experiments and let the Vortex developers take care of the hard implementation part We named this framework Visual Cortex ( Vortex ) What it does Vortex provides a high-level, complete development pipeline for a computer vision deep learning model : Training pipeline Validation pipeline Prediction pipeline IR Graph Export pipeline Hyperparameter Optimization pipeline which can be accessed by providing a single experiment file and a CLI command. However, users can also choose to use the Vortex public API if they wish to integrate Vortex into their own scripts.
How we built it We chose PyTorch as the base deep learning framework due to its popularity and the convenience of developing models in it. We carefully identified the atomic components of deep learning model development, such as the dataset, dataloader, models, logger, optimizer, training iterator, etc., and designed modular interactions between them which in the end form a fully operational pipeline. We explored utilities that support deep learning model development, such as albumentations for data augmentation, NVIDIA DALI for data loading and augmentation, and optuna for hyperparameter optimization, and integrated them seamlessly into Vortex so that users can utilize them just by modifying the experiment file Challenges we ran into We still think that our code design may not be perfect, and we will need many iterations to improve it over time. On the deep learning side, we struggled to replicate the SOTA results of several architectures when integrating them into Vortex ( for example, our object detection implementation currently still cannot produce the intended results). On the other side, implementing graph optimization to ONNX still poses a challenge, because not every PyTorch operator is natively supported by the ONNX format, and furthermore not every ONNX operator is supported by the runtime; in this case we chose onnxruntime as the runtime for the ONNX graph IR prediction pipeline.
Yet, we are still quite satisfied with the result Accomplishments that we're proud of We successfully proved that the Vortex pipelines are fully integrated and working well, at least for image classification models We successfully integrated many useful utilities, such as albumentations, optuna and NVIDIA DALI, greatly expanding the options for model developers with ease of use We are proud that we successfully provide seamless ONNX and TorchScript export for Vortex models And finally, we are quite proud of our model validation report, which not only evaluates the model's performance on the dataset, but also its resource usage; please check it out What we learned We learned many things, several of which are: How hyperparameter optimization can be very useful and easily integrated How important data standards are in CV development (e.g. bounding box data has many formats : xywh, xyxy, cxcywh, etc., and this is prone to faults) How to tinker and find ways around unsupported ONNX operators How PyTorch's design can greatly support implementation when refactoring a model's structure (in Vortex we separate several model dependencies into components, such as the normalizer, post-processing and loss function) How we iterated to find a better modular design for our framework What's next for Vortex Support for other tasks, such as segmentation Implement a mixed-precision training mechanism Distributed training More publicly accepted models More runtime support based on ONNX and TorchScript IR Built With albumentations comet-ml onnx optuna python pytorch torchscript Try it out github.com
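The "single experiment file drives the whole pipeline" design described above is commonly implemented with a registry pattern; here is a tiny generic sketch of that idea (not Vortex's actual API, and the component names `sgd`/`adam` and the `build` helper are hypothetical):

```python
# Registry mapping config strings (as they appear in an experiment file)
# to component classes, so pipelines can be assembled from plain config.
REGISTRY = {}

def register(name):
    """Decorator that maps a config string to a component class."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("sgd")
class SGDOptimizer:
    def __init__(self, lr=0.01):
        self.lr = lr

@register("adam")
class AdamOptimizer:
    def __init__(self, lr=0.001):
        self.lr = lr

def build(config: dict):
    """Instantiate a component from an experiment-file entry
    like {"module": "adam", "args": {"lr": 0.0005}}."""
    cls = REGISTRY[config["module"]]
    return cls(**config.get("args", {}))

opt = build({"module": "adam", "args": {"lr": 0.0005}})
# opt.lr == 0.0005
```

With every atomic component (dataset, dataloader, model, logger, optimizer, ...) registered this way, swapping utilities really does reduce to editing the experiment file, which is the ergonomics the description aims for.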
Visual Cortex (Vortex)
A computer vision development framework which incorporates end-to-end development pipeline, from model training experimentation to graph intermediate representation export
['Tri Wahyu Utomo', 'Alvin Prayuda', 'Mufti Irawan Fikri', 'Fahri Ali Rahman', 'Faris Rahman']
[]
['albumentations', 'comet-ml', 'onnx', 'optuna', 'python', 'pytorch', 'torchscript']
69
10,159
https://devpost.com/software/quickasl-g2qi1d
Inspiration I am new to the USA as a student, and I observed some students in a few of my classes who were always accompanied by American Sign Language (ASL) interpreters, who did a very good job of conveying the class material to these students. I wish I could interact with them more without both of us constantly needing interpreters. The challenge gets even worse during the COVID-19 pandemic: many interpreters may be confined to their homes, and as classes move online, it can become difficult for these students to get their classes interpreted in real time. This project is a very beginner attempt by me at ASL-to-Text conversion. Note that I'm not trying to cut down on interpreters' jobs - just trying to help people who can't afford one! What it does The laptop webcam detects ASL hand gestures and converts them to letters. How I built it I used python , pytorch , numpy and opencv to build the image capture, detection and classification features of this web app. The web app is built using flask . Challenges I ran into I had absolutely no knowledge of pytorch ; this hackathon was my first experience with it. I have worked with tensorflow before and have a fair understanding of deep learning. I also came across this challenge very close to the deadline, and could not find time to work on this project until 2 days before the deadline (due to my new semester, interviews and other deadlines). But I was determined to at least participate in this hackathon, even if I didn't make it to the finish line (or the winning parade :) Accomplishments that I'm proud of I built this project in just 2 days, which also included time for my coursework, interview preparation, and PG&E power outages. This is the shortest time I have taken to build a working project of this level (albeit not with the best UI and lacking many features). The run felt like actually attending a 48-hour in-person hackathon.
What I learned PyTorch and OpenCV are my biggest takeaways after this project - I finally have more to explore in that area. I also learned that I could pull something off in such a short time - the reason why I named it QuickASL . Hopefully this project would also look good on my resume! :) What's next for QuickASL I wish to work further on: user authentication enabled for the web app proper ASL-to-Text conversion with intents, and vice versa possible ASL-to-Speech conversion, and vice versa export the project to a mobile app, so as to increase the user base export the project as an extension for browsers (for videos) and meeting apps like zoom, chime etc. I believe all these features in a single app would make this app a suitable companion for a large number of people. Built With conda flask numpy opencv python pytorch Try it out github.com
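The detect-and-classify loop can be sketched as below. The small CNN, the 64x64 grayscale preprocessing, and the A-Z label set are illustrative assumptions; QuickASL's actual trained model lives in the linked GitHub repo.

```python
# Hedged sketch of the core loop: classify a webcam frame as one of 26
# ASL letters. The untrained CNN and the preprocessing are stand-ins
# for QuickASL's real model, which is not reproduced here.
import string
import torch
import torch.nn as nn

LETTERS = list(string.ascii_uppercase)

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 26),
).eval()

def predict_letter(frame: torch.Tensor) -> str:
    """frame: (64, 64) grayscale tensor in [0, 1], e.g. from OpenCV."""
    with torch.no_grad():
        logits = net(frame.view(1, 1, 64, 64))
    return LETTERS[int(logits.argmax())]

print(predict_letter(torch.rand(64, 64)))
```

In the real app, OpenCV grabs frames from the webcam and this prediction is surfaced through the Flask front end.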
QuickASL
What do you do when you are in the presence of people who only speak ASL and you don't know any? And there is no interpreter around? This real-time ASL-to-Text conversion web app comes to the rescue!
['Shamli Singh']
[]
['conda', 'flask', 'numpy', 'opencv', 'python', 'pytorch']
70
10,159
https://devpost.com/software/the-gatekeeper
The gatekeeper running, detecting a face with mask The gatekeeper running, detecting a face without mask Inspiration Currently, every country is being challenged by COVID19. This means that everywhere in the world people should be wearing face masks for each other's safety. Wearing a face mask is still relatively new for a lot of people around the globe. Also, unfortunately, not everyone abides by the law, the rules and the best practices on how to wear a face mask. This is where the gatekeeper comes in. What it does The gatekeeper will detect faces and at the same time detect whether they are wearing a face mask. How we built it First we took a dataset that contains both masked and unmasked faces and loaded it into a dataframe. With our dataframe ready, we created and trained our model using pytorch. Our model contains multiple layers, and with weights we can prioritise certain layers. In the last step we used openCV to load our trained model, which can detect whether a mask is being worn or not. We combine this with a face detector model so we can run it on a live feed, freeze the frames and run mask detection on those frames. We then visualise this in a flask app (as seen in our demo video). Challenges we ran into There is a learning curve. We had issues with light: it was too dark in the room when recording, so the model was detecting that a face mask was being worn when it was not. It took us a while to figure out why this was an issue. Accomplishments that we're proud of That we have a working prototype that's actually running pretty well. It looks quite professional and could be used in real scenarios at airports, stations, automatic doors, etc. What we learned Basically everything we did was pretty much new for us, so we learned quite a lot. We also have a better understanding of machine learning now. What's next for The gatekeeper Currently our model is only trained to detect whether a mask is being worn or not.
We can extend this to also detect whether the mask is being worn correctly. We could also use IoT and integrate it with gates, doors, etc. to let people in only if they are correctly wearing a face mask. Potentially we could use this to count how many people are inside a room/building with(out) a mask. Built With flask opencv python pytorch Try it out github.com
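The two-stage pipeline described above (detect faces, then classify each crop) can be sketched as follows. The mask classifier here is an untrained placeholder, and the face-detection step is represented by precomputed boxes, since the project uses a separate OpenCV face-detection model for that part.

```python
# Hedged sketch of the pipeline: a face detector proposes boxes, then a
# binary classifier labels each crop mask / no-mask. Placeholder model.
import torch
import torch.nn as nn

mask_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),   # logits: [no_mask, mask]
).eval()

def classify_faces(frame, boxes):
    """frame: (3, H, W) tensor; boxes: list of (x1, y1, x2, y2) from a detector."""
    labels = []
    for x1, y1, x2, y2 in boxes:
        crop = frame[:, y1:y2, x1:x2].unsqueeze(0)
        crop = nn.functional.interpolate(crop, size=(64, 64))
        with torch.no_grad():
            pred = mask_net(crop).argmax(dim=1).item()
        labels.append("mask" if pred == 1 else "no_mask")
    return labels

frame = torch.rand(3, 240, 320)
print(classify_faces(frame, [(10, 10, 90, 90)]))
```

Freezing a live-feed frame, as the project does, just means running this function on one captured frame at a time before overlaying the labels in the Flask app.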
The gatekeeper
Detect if people are wearing a mask during COVID times. Good and cheap solution for airports and public transport
['Nabil Makhout', 'Osamah Bukraa', 'Ilias Bukraa']
[]
['flask', 'opencv', 'python', 'pytorch']
71
10,159
https://devpost.com/software/pneumonia-detection-using-deep-learning
An example image of detected Pneumonia sites by our proposed algorithm. Image showing the pneumonia detection API running on localhost. Inspiration Pneumonia is one of the major causes of death all over the world. It is even more dangerous for children and the elderly. And given the current situation with COVID-19, we need a good system that can detect lung diseases accurately. So, this project is a first step toward solving one of the major issues in the healthcare industry, and it can be scaled enormously in the future. What it does This project consists of a deep learning model which takes x-ray images of lungs as input. It then predicts whether the lungs are affected by pneumonia or not. If it finds traces of pneumonia, it draws bounding boxes around those sites. How I built it The model I built is a Faster RCNN model with a ResNet50 backbone. The Faster RCNN model has been pre-trained on the COCO dataset; I fine-tuned it on lung images. The dataset is taken from the RSNA Pneumonia Detection Challenge, which was held in 2018 on the Kaggle platform. It contained enough images to start with a good baseline model. Challenges I ran into Training time and computation power were the biggest hurdles I ran into. The Faster RCNN model resizes images to 800x800 pixels by default; I changed that resize to 1024x1024, which increased the training time a lot. I trained the model in a Kaggle kernel for 21000 iterations with a batch size of 8 (30 epochs). Training any longer was not possible due to the kernel running-time limit of 9 hours. Accomplishments that I'm proud of I got an Average Precision of 0.251 on my private validation set. The best leaderboard score on Kaggle was 0.25475, so I think that is a big achievement for me. I know that the model can be improved much further with more training and more data, but it is a good starting point as of now.
What I learned Building deep learning models for medical imaging prediction is a very important task and will become more so in the future. But approaching such solutions with deep learning is difficult, as medical images are very different from other real-life image data. Data and training time are the two most important factors when building deep learning solutions for medical imaging; more data and more training always help. Working alone in a hackathon is always difficult. One should team up with like-minded people to get things done in a much better way. I could not find anyone who was very interested in this project, but I thought this was a serious problem that needed to be solved, so I moved ahead with it. What's next for Pneumonia Detection using Deep Learning I want to improve my model even further with more data and training. As of now that is not possible because I lack really good compute power; winning this hackathon would surely help me do so. Right now, this model can be run with a Flask API on localhost, but I want to deploy it as a website and maybe even team up with some medical professionals so that they can provide their insights. Built With css flask html nvidia python pytorch Try it out github.com
Pneumonia Detection using Deep Learning
Detecting pneumonia from chest radiographs using deep learning with the PyTorch framework.
['Sovit Rath']
[]
['css', 'flask', 'html', 'nvidia', 'python', 'pytorch']
72
10,159
https://devpost.com/software/eyetorch-fhbxyt
Deep Learning has indeed become an important part of our lives; it has gained considerable attention as a field of study and is being explored with a good amount of enthusiasm. Its incredible power to detect complex patterns led us to believe that it can be used for a task as difficult as Diabetic Retinopathy diagnosis . The idea of such a model being used by doctors to gain certainty in their diagnoses inspired us to get started with it approximately 2.5 weeks back! eyeTorch is a website which takes as input a zoomed-in image of the patient's eye and detects the level of severity of the disease, with 0 being perfectly fine and 4 being a critical condition . We decided to use the AlexNet architecture, after referring to a number of research papers, to classify the images after pre-processing them. This was followed by deploying the model, which we did using flask. The templates for the web pages were made purely out of HTML and CSS . The model was thus successfully run on our local machine through the website we made. There were quite a lot of challenges that we faced, ranging from something as small as pre-processing a large number of input images to linking our model with the flask server; from deciding on the correct network architecture to hosting the website on the many hosting services available. I'm very glad to mention that we were able to resolve most of these challenges. Both of us have been practicing our skills in the world of AI for quite some time now, and naturally we had never touched website building, but this project helped us not only understand how flask works but also make a full-fledged website of our own! Hence, this is an accomplishment that we are really proud of.
We gained a good amount of knowledge about Convolutional Neural Networks, and while looking for the correct architecture, we also stumbled upon a few open-source solutions to this problem, such as fastai models, which not only motivated us to understand their approach but also to look for improvements in the existing solutions. This also enhanced our debugging skills, which are a must for every programmer. Last but not least, we plan to extend this solution to other eye defects as well, such as Cataract, Glaucoma, and Refractive Errors. Built With css flask gpu html numpy pandas pillow python pytorch torchvision Try it out github.com
eyeTorch
Vision care with AI
['Esha Pahwa', 'ACHLESHWAR LUTHRA']
[]
['css', 'flask', 'gpu', 'html', 'numpy', 'pandas', 'pillow', 'python', 'pytorch', 'torchvision']
73
10,159
https://devpost.com/software/ai-based-virtual-assistant
Gist About the Project: In today's world of automation, everyone wants an automated tool that makes their day-to-day life easier. Day-to-day work involves opening applications on the system, playing and controlling music, solving mathematical expressions, searching the web, sending mail, setting reminders, getting daily news updates, weather details, etc. Everyone gets bored and tired doing the same things manually; people need automation in their lives. We built an automated AI-based virtual assistant to perform all of the above-mentioned jobs and many more. It reduces your workload and does your work with just a single voice command. No other virtual assistant is available for both Linux and Windows. This virtual assistant can listen to your commands and speak the response to your statement; you can turn these facilities on or off as you choose. Instructions are stored in an SQLite database so that any action performed can be stored for further use. The conversation is not hardcoded, as it might be in other chatbots. Instead, we train our model on a manually built conversation dataset. This virtual assistant has features like: ► Sending text messages to a contact number, ► Controlling music, ► Searching Google or YouTube, ► Playing games, ► Solving mathematical expressions, ► Sending email to contacts, ► Finding your location and a path, ► Changing the wallpaper, ► Turning speaking and listening on/off, ► Weather conditions, ► Getting news, ► Usual conversation, etc. Technologies Used: TFLearn: a deep learning library featuring a higher-level API for TensorFlow. We used the TFLearn library to build and train our conversation model, just like a chatbot. Such chatbots derive from a form of artificial intelligence called Natural Language Processing (NLP). Two particularly important NLP concepts used for programming a chatbot are stemming and tokenization. The Lancaster stemmer from the NLTK package is used to collapse distinct word forms.
We connected our virtual assistant to an SQLite database so that commands and instructions can be executed and logged very easily. We used different python libraries, such as speech_recognition, Wikipedia, wolframalpha, file_search, etc., to achieve the automation. Conclusion: We have built an Artificial Intelligence based Virtual Assistant for both Linux and Windows with the integration of an SQLite database, NLP and TFLearn. Future Scope: A Graphical User Interface (GUI) and a few more features will be added in the upcoming version of this Virtual Assistant. Built With natural-language-processing python speechapi sqlite tensorflow tflearn wikipedia Try it out github.com
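The stemming and bag-of-words step that feeds the intent classifier can be sketched in plain Python. The toy suffix-stripping stemmer stands in for NLTK's LancasterStemmer, and the two intents are made up for this demo; the real model is then trained on these fixed-length vectors with TFLearn.

```python
# Hedged sketch: turn sentences into fixed-length bag-of-words vectors
# over a stemmed vocabulary, the representation fed to the intent model.
intents = {
    "greet": ["hello there", "hi how are you"],
    "music": ["play some music", "playing a song"],
}

def stem(word):
    # toy stand-in for NLTK's LancasterStemmer
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

vocab = sorted({stem(w) for patterns in intents.values()
                for p in patterns for w in p.split()})

def bag_of_words(sentence):
    stems = {stem(w) for w in sentence.lower().split()}
    return [1 if v in stems else 0 for v in vocab]

# "playing" and "play" hit the same vocabulary slot after stemming
print(bag_of_words("playing some music"))
```

Stemming is what lets "play", "plays" and "playing" all activate the same input feature, which keeps the vocabulary, and hence the network input, small.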
AI based Virtual Assistant for both Windows and Linux.
I built an automated AI-based virtual assistant that can open apps, play and control music, solve mathematical expressions, search the web, send mail, set reminders, fetch weather details, etc.
['Shivam Gupta']
[]
['natural-language-processing', 'python', 'speechapi', 'sqlite', 'tensorflow', 'tflearn', 'wikipedia']
74
10,159
https://devpost.com/software/besmile-app-with-pyemotion
BeSmile - Welcome Page BeSmile - Home Page BeSmile - Report Page What is PyEmotion? PyEmotion is a python package for Facial Emotion Recognition built on top of PyTorch , and it supports various kinds of application development: web, desktop and mobile. Using the PyEmotion package, we can detect a person's facial emotions: Neutral Angry Fear Happy Sad Surprise How does PyEmotion work? Using PyTorch, I trained a custom model to detect a person's facial emotion. The package provides functions around this pretrained model that help detect the person's emotion. I added my custom trained model to the python package, so it takes care of emotion recognition for web, desktop and mobile apps. What is BeSmile? BeSmile is a web app built mainly with PyEmotion. It uses AI programs to detect a person's mood while they do their work. Why do we need BeSmile? → BeSmile not only detects the person's mood, it also helps increase your happy-face count during stressful working hours. Yes, BeSmile sends you a funny joke during your work hours, and it truly helps increase your happy-face count. So, BeSmile's overall idea is to improve the happy-face count. BeSmile can also detect children's moods and give insights to an online teacher. During the quarantine days most people are using video conferencing calls, so we can track a particular person's emotions during a call or an online class, etc. How does BeSmile work? Using python flask, I created a simple web app and implemented PyEmotion's functions. The application is based on 3 main pages; let's discuss them individually. Welcome Page - This is the starting point of the application; just enter your email to go to the next page. The enter-email option appears only the first time. Home Page - Here you can see the webcam view and your emotion counts. Below the cam view, a Report button is available.
Report Popup - Click the report button to view your emotion counts. In the report popup you can see the percentage of each emotion. Every 10 minutes the application shows a funny joke; you can also set the joke interval time. Cool, right? What inspired this application? I strongly believe the following sentence: "Every invention has personal needs." The idea came during the COVID quarantine days. Working through the quarantine was the worst part of my life: every day I started with a happy face but ended with an angry face, and I needed to reduce the angry faces. That is the main reason for building this application. What are the advantages of my concept? Capability: There is no GPU requirement; you can run this application on any kind of system and on various operating systems. No External API: We don't need any external API for this application. Data Security: Every user's data stays safe; there is no chance of a data breach. Offline: This application also works in offline mode :) Deployment: We don't need to deploy anything extra, because I first created a python package called PyEmotion, and it takes care of model file availability and other things. What are the disadvantages of my concept? I would like to hear them from your side, and I will try to turn them into advantages :) What I learned A lot of things: how to use PyTorch with python, and how to use PyTorch to train a custom model and detect a person's facial emotions. What is next? I will spend some time developing this as a desktop app and share it with my colleagues and friends. It can detect children's moods and give insights to an online teacher; during quarantine days most people are using video conferencing calls, so we can track a particular person's emotions during a call or an online class, etc. I also need to add an option so that every user can post their overall report to their social media.
I will also add an idle option and handle the detection of multiple persons' emotions. What PyTorch tools were used? torch torchvision facenet_pytorch Built With python pytorch Try it out github.com github.com pypi.org
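The per-frame step behind the emotion counts can be sketched as below. The placeholder CNN and the 48x48 crop size are assumptions for illustration; PyEmotion's real model and its facenet_pytorch face-detection preprocessing are not reproduced here.

```python
# Hedged sketch: a placeholder CNN maps a face crop to one of the six
# emotions PyEmotion reports, and a tally feeds the report popup.
import torch
import torch.nn as nn

EMOTIONS = ["Neutral", "Angry", "Fear", "Happy", "Sad", "Surprise"]

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, len(EMOTIONS)),
).eval()

counts = dict.fromkeys(EMOTIONS, 0)

def record_emotion(face_crop):
    """face_crop: (3, 48, 48) tensor; updates the per-emotion tally."""
    with torch.no_grad():
        label = EMOTIONS[int(net(face_crop.unsqueeze(0)).argmax())]
    counts[label] += 1
    return label

record_emotion(torch.rand(3, 48, 48))
```

The report page's percentages are then just each count divided by the total number of frames recorded.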
BeSmile with PyEmotion
Using AI programs to detect a person's mood while they do their work, with the main goal of improving their happy-face count.
['Karthick Nagarajan']
[]
['python', 'pytoch']
75
10,159
https://devpost.com/software/codemix-nlp
Our primary features Our website is also a Progressive Web App, check it out! The model we are using to understand code-switched data Our Python package also has documentation. Check out https://lingualytics.github.io/py-lingualytics/ We tried models other than transformers, like this one, but transformers outperformed them. Inspiration Spanish and Hindi are the 2nd and 4th most spoken languages in the world. However, most people in these countries communicate in a mix of languages, like English mixed with their native language. Most current NLP models fail to understand such mixing, so we have developed models and tools to solve this problem. This mixing of languages is formally called code-switching . What it does Lingualytics, with the help of Pytorch, provides tools for both developers and businesses to process, analyze, and develop models for code-switched data. For developers and data scientists, we have developed a python package, and we have also deployed a web app to test out our models. Python package The developer tool is a Python package. Py-Lingualytics helps you to Download code-switched datasets. Available datasets are CS-En-Es-Corpus Vilares, D. et al. 2015 SAIL-2017 Dipankar Das., et al. 2017 Sub-Word LSTM Joshi, Aditya, et al. 2016 Preprocess data Removal of code-switched stopwords . Digit, punctuation and excessive whitespace removal Train any state-of-the-art model for classification, using Pytorch You can use any model and tokenizer on Huggingface and train it on a dataset Represent text with the help of n-grams If you're confused at any point, you can refer to the library's documentation . To try Py-Lingualytics, you can install it with pip install lingualytics . You can also get started with the getting-started notebook The library is available on PyPi . You can also find the source code on Github. The Web App You can input text into the Progressive Web App , and it will show the sentiment of the input.
It currently supports English, Hindi, and a mix of both as well. To install the web app, open the website On the phone, you'll automatically see an option on the bottom to install the app. On Desktop, click on the + on the right side of the address bar to add it. Pretrained models We also have uploaded pretrained models that can work with English-Hindi and English-Spanish data. Check them out here . How we built it The Model We first had to figure out the right preprocessing techniques to interpret a code-switched text. We did some extensive research to find the right stopwords, punctuations, and tokenizers to get the right word embeddings. The next step was to decide a model that could work with multiple languages simultaneously, and we finally went ahead with Multilingual Bert (Devlin, Jacob, et al. 2018) which is compatible with 104 languages. We then fine-tuned this model on English-Hindi and English-Spanish codemixed datasets. There is another approach of using Sub-Word LSTM (Joshi, Aditya, et al. 2018), but BERT outperformed that approach. The Web App We used React for the front end and Axios to build the API. Our CSS framework was bootstrap and we used Github Pages to deploy the website. We used the latest practices like using hooks in react and functional components and the ECMAScript 6 syntax. The source code of the website is available on Github. The Python Library We generalized each of the steps we used to make the model and developed a python library for downloading, preprocessing, training, and representation of code switched data. Design of the library The design of using Pytorch for training was majorly inspired by Fastai as they have a friendly API built on top of PyTorch. For the rest of the operations, the design was inspired by Texthero as we found it to be the easiest to use along with excellent documentation. Documentation We used Sphinx to convert the docstrings of each function into full documentation. 
We also have a getting-started notebook. Pretrained Models We used our own library to train models on English-Hindi and English-Spanish datasets and uploaded them to Huggingface for anyone to use. Challenges we ran into Even though Spanish and Hindi are the 2nd and 4th most spoken languages in the world, very little research exists on how people communicate by mixing these languages, so finding relevant literature for code-switched NLP was not an easy task. PyTorch was helpful here, as it is easy to customize the training procedure to our needs. We also couldn't find quality datasets or preprocessing techniques for English-Spanish or English-Hindi data. The existing NLP libraries didn't support code-switched data out of the box, and we ended up rewriting a lot of functions in those libraries. Accomplishments that we're proud of We're proud of Helping to democratize NLP by developing the first library to work with code-switched data. Developing the first web app where users can do sentiment analysis on code-switched data. What we learned We didn't have much knowledge about NLP but had a firm grip on Machine Learning and Deep Learning. We understood that NLP is an application of ML and developed an understanding of the following techniques Preprocessing: Stopwords, Tokenization, Stemming, Lemmatization Transformers and why they are currently the best models to work with text We also learned how to publish a Python package to PyPi and how to document it using Sphinx. What's next for Lingualytics Add Spanish and Spanglish compatibility to our web app. The next step is to collect quality code-switched data. We realized the publicly available datasets were not enough and richer datasets have to be collected.
We also plan to add support for additional NLP tasks like: Language Identification (LID) POS Tagging (POS) Named Entity Recognition (NER) Sentiment Analysis (SA) Question Answering (QA) Natural Language Inference (NLI) Built With bootstrap gh-pages github hugggingface nltk photoshop python python-package-index pytorch react react-native sass sphinx texthero torch transformers Try it out lingualytics.tech pypi.org github.com github.com twitter.com www.instagram.com www.linkedin.com lingualytics.github.io
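The preprocessing described above (code-switched stopword, punctuation, and digit removal) can be sketched in plain Python. The tiny Hinglish stopword set is illustrative; the library ships curated code-switched stopword lists and exposes this through its own API, which is not reproduced here.

```python
# Hedged sketch of code-switched text cleaning: lowercase, strip
# punctuation and digits, then drop mixed-language stopwords.
import re

STOPWORDS = {"hai", "ka", "ki", "the", "is", "a", "yeh"}  # toy subset

def clean(text):
    text = re.sub(r"[^\w\s]", "", text.lower())   # drop punctuation
    text = re.sub(r"\d+", "", text)               # drop digits
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens)

print(clean("Yeh movie bahut achhi hai, 10/10!"))  # → "movie bahut achhi"
```

The cleaned text is then tokenized with a Huggingface tokenizer and fed to a model such as multilingual BERT for sentiment classification.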
Lingualytics
Democratizing NLP by understanding a mix of languages
['Rohan Rajpal', 'Royal Tomar', 'Srijan Jain']
[]
['bootstrap', 'gh-pages', 'github', 'hugggingface', 'nltk', 'photoshop', 'python', 'python-package-index', 'pytorch', 'react', 'react-native', 'sass', 'sphinx', 'texthero', 'torch', 'transformers']
76
10,159
https://devpost.com/software/nngeometry
Inspiration When exploring recent deep learning research papers, I found it striking that Fisher Information Matrices and Neural Tangent Kernels are used in many projects across several subdomains of deep learning, yet each of these projects has its own implementation, which is often buggy and limited to its own use. Instead of reinventing the wheel every time, I think there is a need for a library that does exactly this: make it easy for researchers and practitioners to implement algorithms that use these matrices, and give them access to recent advances in approximations thereof. What it does NNGeometry allows you to quickly define and evaluate most linear algebra operations involving Fisher Information Matrices and Neural Tangent Kernels. As a motivation, let us consider a continual learning technique called Elastic Weight Consolidation (EWC). In EWC, we need to compute the simple formula dw^T F dw. The problem is that this simple formula turns out to be very difficult to implement in practice, for the following reasons: F is a very large matrix; in fact it is d x d, where d is the number of parameters of a neural network, up to 10^8 in recent architectures. When writing maths I can simply write dw, but in real life this vector is a bunch of scalar parameters split across several layers. Similarly, F must be computed over all scalar parameters, and when computing dw^T F dw we need to make sure that parameters are correctly mapped between dw and F. In short, it is not as simple as writing torch.dot(torch.mv(F, dw), dw). How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for NNGeometry Built With pytorch Try it out github.com
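The EWC quantity above can be made concrete with a small autograd sketch. Using the empirical Fisher F ≈ (1/n) sum_i g_i g_i^T built from per-sample gradients g_i, the quadratic form reduces to dw^T F dw = (1/n) sum_i (g_i . dw)^2, so the d x d matrix is never materialized. This illustrates the computation itself, not NNGeometry's actual API.

```python
# Hedged sketch: dw^T F dw with an empirical Fisher on a tiny model,
# handling dw as a list of per-layer tensors, exactly the bookkeeping
# issue described above.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
x, y = torch.randn(8, 4), torch.randn(8, 1)

params = list(model.parameters())
dw = [torch.randn_like(p) for p in params]       # an arbitrary step

total = 0.0
for i in range(len(x)):
    model.zero_grad()
    loss_fn(model(x[i:i+1]), y[i:i+1]).backward()
    # map dw onto F implicitly: dot each per-layer grad with its dw block
    g_dot_dw = sum((p.grad * d).sum() for p, d in zip(params, dw))
    total += g_dot_dw.item() ** 2
quad = total / len(x)
print(quad >= 0)  # True: the empirical Fisher is positive semi-definite
```

A library like NNGeometry hides exactly this kind of parameter-to-matrix mapping behind a vector abstraction.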
NNGeometry
Fast and easy Fisher Information Matrices and Neural Tangent Kernels
['Thomas George']
[]
['pytorch']
77
10,159
https://devpost.com/software/betazero
As a team, we've always loved playing adversarial games like poker and chess that have complex but interesting optimal strategies. We were naturally inspired by Google's AlphaZero AI, not only because it beat the world's top chess-playing algorithms like Stockfish, but also because it played such far-sighted, positional moves that had strategic consequences long after the move was played. AlphaZero displayed a true strategic 'understanding' of the game that is reteaching us as humans how to think about this ancient game. And we wanted to build a chess AI like it. A chess AI requires two main parts: a valuator, which tells you how good a certain board state is, and a searcher, which searches for the best board state using that valuator. The first iteration of TauZero used a simple search algorithm (minimax) that brute-forced the search. For the valuator we hard-coded a rule-based system to value certain board states (e.g. a knight on your side contributes +3 to the total, while a pawn on your opponent's side contributes -1). We quickly discovered two things: (1) we needed a more accurate valuator, because the agent wasn't taking into account 'positional advantages' in chess (i.e. are my pieces on better squares than yours?) or positional threats (i.e. am I about to move a piece to a square that will make your game significantly harder to play in the future?). In order to correct the issues caused by a purely heuristic approach, we needed a model capable of encapsulating a complex understanding of how pieces interact on the board. To do this, we used a convolutional neural net implemented by extending nn.Module and using PyTorch's object-oriented API. A CNN is an optimal choice for our model given that convolutional filters can capture spatial relationships and thus take into account the interaction between pieces located at differing points on the board.
By training the model using PyTorch's implementation of autograd to perform supervised learning on Stockfish board evaluations, our engine began playing more conventional openings and setups from chess theory and from human games in general. Overall, an improvement. (2) We needed to speed up our game-tree search, as the current method allowed for a very limited search depth. We looked to decrease the number of possibilities our agent had to consider (alpha-beta pruning), along with some other "position probing" techniques where we'd only fully explore possibilities we had previously deemed promising, saving computational time. And finally, with that experience, we implemented a drastically different algorithm: the same algorithm as in the AlphaZero paper, with a few modifications. This was probably the most difficult part of the project, as we read through difficult academic literature and coded up complex data structures to store what we needed to run the algorithm. But in the end we recreated AlphaZero, except that we pre-trained the network before letting it perform self-play, so that it would converge faster. To carry out move selection, we used PyTorch for the entirety of the training framework and the move selection algorithm. The architecture is essentially a deep residual convolutional neural net trained using backpropagation on the cross-entropy between predicted prior probabilities and posterior probabilities improved using Monte-Carlo Tree Search. To implement this algorithm, we relied heavily on Torch's utilities that allow us to calculate and retain gradients through unconventional sampling routines, which would not have been easy with a less flexible machine learning framework. Furthermore, Torch's easy GPU acceleration, which allowed us to quickly transfer tensors between the CPU and GPU, helped us vectorize our code and make the most out of limited computational resources.
Our next steps for this project will be to rewrite everything in C++ using PyTorch's ATen library, where we can take advantage of multithreading and feasibly run multiple workers on a threaded game tree data structure so as to search the state space rapidly and enable faster training. Overall it was so interesting being able to create algorithms in a space we understood from a non-RL perspective so well, and to leverage that knowledge to build something tangible. Built With flask python pytorch Try it out playtauzero.herokuapp.com
TauZero
A novel reinforcement learning algorithm that learns to play chess using self play.
['Collin Wang', 'Thomas Chen']
[]
['flask', 'python', 'pytorch']
78
10,159
https://devpost.com/software/sketch-architect
Inspiration Completely disrupt the modeling industry, specifically the architecture industry. We envisioned using simple sketch interfaces to provide real-time feedback for the initial conceptual design phases. What it does This application is able to recognize sketches and convert them into 3D solids and geometry How I built it With PyTorch and fast.ai Challenges I ran into Using multiple large GPUs to train the model with a new custom dataset that we developed Accomplishments that I'm proud of Experimenting with PyTorch3D in order to achieve this goal Built With fast.ai pytorch pytorch3d Try it out github.com
SketchGAN
This project make your sketches jumping in 3D, today's modelling software require a lot of experience even in initial Conceptual design phases. Using fast.ai and pytorch3D we simplify this process.
['Alberto Tono']
[]
['fast.ai', 'pytorch', 'pytorch3d']
79
10,159
https://devpost.com/software/gdfgddfd
simplification filtering segmentation classification predefined_pde manually_defined_pde Inspiration I work in the field of PDEs on graphs. PDEs on graphs are known to have many applications . This summer I started exploring "deep learning on graphs" and I came upon torch_geometric . The learning process on graphs, i.e. the forward convolution, is defined by something known as the Message Passing equation . While going through the documentation of torch_geometric and analyzing the Message Passing equation, I realized that one can also implement PDEs on graphs using the Message Passing class of torch_geometric and hence also benefit from GPU acceleration. What it does Following the hackathon announcement, and based on the recent epiphany of the connection between Message Passing and PDEs, I decided to create a tool torch_pdegraph which facilitates solving PDEs on graphs, and to demonstrate some of their many potential applications in jupyter-notebooks. How I built it At the backend, all the magic is done by torch_geometric 's Message Passing interface. I have created two subpackages: pdes , which contains some predefined PDEs on graphs, and operators , which contains some operators on graphs and can be used to run a custom PDE as shown in the notebooks . Although one can run PDEs on any graph/network (as long as it has edges and scalar weights), in the notebooks I only show their application on simple knn-graphs of images and pointclouds. Challenges I ran into To show some applications of PDEs on images and pointclouds in the jupyter-notebooks I had to create some simple knn-graphs. Although torch_cluster comes with a knn graph creation method, I found it to be slow when the node features are high-dimensional. This was overcome using Facebook's faiss library, which allows very fast knn similarity search on the GPU. 
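As a concrete illustration of the Message Passing / PDE connection, here is one explicit-Euler step of the graph heat equation du/dt = -Lu written in message-passing form (a pure-Python sketch for clarity; torch_pdegraph expresses the same per-edge message and per-node aggregation through torch_geometric's MessagePassing so it runs on the GPU):

```python
# One explicit-Euler step of the graph heat equation in message-passing
# form: each node i aggregates the message w_ij * (u_j - u_i) from its
# neighbours, then updates u_i by a small time step tau.

def heat_step(u, edges, weights, tau=0.1):
    """u_i += tau * sum_j w_ij * (u_j - u_i) over directed edges (i, j)."""
    msg = [0.0] * len(u)
    for (i, j), w in zip(edges, weights):
        msg[i] += w * (u[j] - u[i])   # message aggregated at node i
    return [ui + tau * mi for ui, mi in zip(u, msg)]

# Tiny 3-node path graph 0 -- 1 -- 2, edges listed in both directions.
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
weights = [1.0, 1.0, 1.0, 1.0]
u = heat_step([1.0, 0.0, 0.0], edges, weights)  # heat diffuses from node 0
```

Iterating this step diffuses node values along weighted edges, which is the basic building block behind applications like filtering and simplification.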
Accomplishments that I'm proud of I think the biggest accomplishment, and the one I am happiest about, is the skill I have developed: implementing PDEs via Message Passing. Certainly this skill will accelerate and expand my PhD research. What I learned I learned quite a few valuable things: By implementing those notebooks I learned the extent of the connection between PDEs on graphs and Message Passing on graphs. The inner workings of torch_geometric . Using faiss for similarity search. What's next for torch_pdegraph I hope that this submission will raise interest in PDEs on graphs and their applications. Right now I have shown only a few of the many applications of PDEs on graphs; in the future I plan to add more pde/notebooks showing applications like inpainting, colorization, and active contours. I also plan to create a subpackage graphs which would allow various graph creation methods on images and pointclouds. I will also aim to do as much as possible within Torch's ecosystem. Few applications:- Simplification Filtering Segmentation Classification What it looks like in code: (1) (2) Built With faiss open3d python-scientific-stack torch torch-geometric tqdm Try it out github.com
torch_pdegraph
Solving PDEs (partial differential equations) on graphs
['amitoz azad']
[]
['faiss', 'open3d', 'python-scientific-stack', 'torch', 'torch-geometric', 'tqdm']
80
10,159
https://devpost.com/software/mooc-anonymous-yv2q0s
Snapshot of web app Banner Banner 2 GIF Inspiration Great traditional boot camps are becoming costly, presenting a barrier to entry for many students and global learners who resort to Massive Open Online Courses (MOOCs). MOOCs are great equalizers for learners because of their general affordability and convenience. However, research and surveys conducted on online learning across the various MOOCs have shown very low completion rates when students do not feel supported. Boot camps, on the other hand, have really high completion rates. As a result, we decided to find the intersection of the benefits of online courses and boot camps to create ROBO-BootCamp: an AI platform that uses a quiz to design personalized boot camp curriculums with online courses. With the current state of learning during the pandemic, the demand for more courses has generated an enormous amount of content that is hard to navigate. Our platform allows remote and virtual learning to be more intuitive and fitted to each and every candidate who has access to the internet. What it does With the advent of remote learning, many institutions and online experts are generating content by the second. The high surge of courses on various MOOC platforms "makes it a burden to choose from so many options". In a sea of MOOCs, our algorithm helps you customize the most tailored-fit curriculum for your learning objectives. As a user, all you need is a desire to learn and a specific subject or topic you want to master. We take care of the planning and make sure you get courses that will be compatible with your learning style, knowledge base, and schedule. You can take our "test" and let our AI design your personalized boot camp curriculum based on the information you provide. Our platform also utilizes a chatbot to process your queries and suggest the most compatible course(s) that fit your profile. How we built it Technical Architecture: Carefully curated our own dataset of Coursera courses, modified with unique columns. 
Multiclass classification model built with PyTorch and scikit-learn in Google Colab. The application was built on Google Cloud Compute Engine. Front end built in Python Flask. Application hosted on Google Cloud App Engine. JavaScript takes the user's test input as numbers and returns it as a list to a Python script to be predicted on. Features a fully built-out Facebook Messenger chatbot. Technologies Used: Python Jupyter Notebooks Google Colabs Pytorch Python-Flask Google Cloud Facebook Messenger HTML CSS JavaScript Pandas Numpy Scikit-Learn Rest Service Challenges we ran into We built a recommendation system first before realizing that it didn't really help with our use case, and we switched to a multiclass classification model. Creating a dataset with our own unique columns was very difficult. We had to spend a lot of time making sure all the rows of data were accurate. Figuring out how to send data from JavaScript to Python Flask was difficult. Deploying the web application to the cloud was challenging. The Facebook Messenger chatbot lags a lot in response, and trying to debug why that happens was challenging as well. Working as a virtual team was challenging; trying to figure out schedules and Zoom meetings was hard. Accomplishments that we're proud of Successfully deployed our web application online even with our challenges. Successfully integrated the Facebook Messenger chatbot. Got our first 50 users to test out our application and give us feedback. Got the opportunity to learn a lot, upgrade our skill set, and expand our portfolio of machine learning projects. We had an idea and we were able to implement that idea. What we learned We learned a lot about building unique datasets. We had to add unique columns to our Coursera dataset in order to make our multiclass PyTorch classification model work. Torch.save and torch.load in PyTorch are built on top of Pickle. 
For some reason, when trying to load a saved .pth file, it works locally, but when deployed in the cloud it looks for particular classes in Gunicorn even when you specify the class inside the Python script, and that breaks the application. Deploying machine learning algorithms in the cloud is harder than running machine learning code on local machines. In order to get data from JavaScript to Python Flask, you need to send a POST request to a URL and return the values inside the Python script. The PyTorch official website has great documentation and tutorials for a lot of machine learning projects. PyTorch is much easier and more Pythonic to pick up than other deep learning frameworks like TensorFlow. It is much easier to integrate Facebook Messenger chatbots into web applications than we had anticipated. NEVER start a machine learning project with the model in mind. Always start with the problem you're trying to solve, a dataset for that problem, a way to solve it without machine learning, and finally start thinking about a model. What's next for ROBO-BootCamp Optimize our web app for mobile devices. Enlarge the dataset to include more courses from more platforms including Udacity, Udemy, Khan Academy, YouTube Education, edX, and a lot more. Improve our multiclass classification model. Add more engaging questions to our test to create better curriculums for the users. Add features to create schedules for users based on their availability and course syllabus, and integrate with their calendars (for the beta version). Improve the user experience overall by creating better dashboards for users to track their curriculum. Implement a recommendation engine in PyTorch when we get more users for the platform. Add a feature to connect users based on curriculums and interests to learn together through Facebook Messenger groups, creating online learning "townhouses". Get 100 users and testers to try out our application and get feedback from them. 
Start an ed-tech startup for ROBO-BootCamp Built With css facebook-messenger flask google-cloud google-colab html javascript jupyter-notebooks numpy pandas python pytorch rest-service scikit-learn Try it out robobootcamp.com
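At prediction time, the multiclass model described above reduces to a softmax over one logit per curriculum class followed by an argmax; a minimal pure-Python sketch (the class names are invented for illustration, not the app's real labels):

```python
# Multiclass prediction in miniature: the quiz answers become a feature
# vector, the model emits one logit per curriculum class, and softmax +
# argmax picks the curriculum. Class names below are hypothetical.
import math

CLASSES = ["beginner-track", "intermediate-track", "advanced-track"]

def softmax(logits):
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits):
    probs = softmax(logits)
    return CLASSES[probs.index(max(probs))]

print(predict([0.2, 2.1, -0.5]))  # -> intermediate-track
```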
ROBO-BootCamp
An AI BootCamp solution.
['Samuel Osei Afriyie', 'Emmanuel Acheampong', 'Abdul-Latif Gbadamoshie', 'uchennaaduaka1 Aduaka']
[]
['css', 'facebook-messenger', 'flask', 'google-cloud', 'google-colab', 'html', 'javascript', 'jupyter-notebooks', 'numpy', 'pandas', 'python', 'pytorch', 'rest-service', 'scikit-learn']
81
10,159
https://devpost.com/software/daily-brief
The Home Page. Inspiration This app was inspired by the recent surge in news stories. The plan was to create an app that presents news in a digestible format and automates the process. What it does The app retrieves popular news stories and uses a machine learning model to create a summarized version of each article. The app then displays it for the users. How I built it The application is divided into three parts: the model, the data retrieval, and the web page where the news stories are served. The model I used to summarize the news articles is the T5 model from Google. I used Hugging Face's library to utilize the 't5-small' version of the model. The model was then fine-tuned using PyTorch Lightning on the CNN/Daily Mail dataset. The Google Colab notebook used is https://colab.research.google.com/drive/1qSyCcNG8q2ZQ7g2c4hY4UGoqtcF8_HgE?usp=sharing . The model was then used for inference and hosted on AWS's ECS. The ECS container retrieved articles from https://newsapi.org/ and utilized the fine-tuned model to create the summarized articles. The summaries were then stored in a DynamoDB database. Lastly, the web page where the summaries are displayed is a React Native progressive web application. The application is hosted on Firebase and utilizes AWS API Gateway to provide a RESTful API to retrieve the latest summaries from the DynamoDB database. Challenges I ran into One of the main challenges I ran into was the processing power needed to fine-tune the T5 model. The base model of T5 has 220M parameters, which was too large for the model to be trained efficiently on Google Colab. As a result I focused on using the smaller version with only 60M parameters. Another challenge that I ran into was deploying the inference model on ECS, as this was the first time I worked with Amazon Web Services. Accomplishments that I'm proud of Learning about a new model and fine-tuning it from scratch. Utilizing AWS to create a seamless end product. 
What's next for Daily Brief Currently, the summaries work well enough; however, there are often cases with poor grammar or punctuation. This could be solved by using a larger version of the T5 model. Built With amazon-dynamodb amazon-ecs amazon-web-services huggingface pytorch react Try it out news-app-a8a95.web.app colab.research.google.com
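Since T5 is a text-to-text model driven by task prefixes, fine-tuning on CNN/Daily Mail boils down to prepending a "summarize: " prefix to each article before tokenization; a sketch (the field names are illustrative, and token-level truncation to the model's input limit happens later, in the tokenizer):

```python
# Format one CNN/Daily Mail example for T5 fine-tuning: the task prefix
# tells the text-to-text model which job to perform; the highlights
# serve as the target summary.
def make_example(article, highlights, prefix="summarize: "):
    return {"source": prefix + article.strip(), "target": highlights.strip()}

ex = make_example("  Some news article text. ", "A short summary.")
```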
Daily Brief
A web app which provides users with a 2-3 line summary each of the top news of the day. Will utilize Pytorch to create a model to create the news article summaries.
['Yash Mundra']
[]
['amazon-dynamodb', 'amazon-ecs', 'amazon-web-services', 'huggingface', 'pytorch', 'react']
82
10,159
https://devpost.com/software/gaia-s-league
Take a picture screen The garbage you collected and your score. The leaderboard of Gaia's League heroes Inspiration As we like to spend time running in parks, we noticed an increase in the amount of garbage in parks, forests, etc... Unfortunately, no one likes cleaning up a place because it is dirty and time-consuming, but especially because it is not fun at all! We tried to create an application that encourages children and their parents to clean the planet by making it fun! What it does First you need to find a dirty place, in a park for example. You start the application and take a first picture. You then clean the place as fast as you can and, when you have finished, you take another picture of the cleaned place. Our machine learning algorithm compares the two pictures, counts the number of pieces of garbage in each picture, and rewards you with points according to the number of pieces of garbage you have collected in a limited time. Our algorithm also compares the two pictures and checks that the place is the same one (in order to avoid cheating). How we built it We used React Native for the application part. The pictures are hosted with Firebase / Firestore and they are sent to a REST API written in Python / Django. The neural network is built with PyTorch. To be more precise, we used an EfficientDet architecture (the state of the art in object detection) with transfer learning, pretrained on ImageNet. Challenges we ran into The design of the application and also the UI. As our main target is children, we had to think about designing something that is fun and easy to use for children! The neural network training and tweaking was also time-consuming; however, we found a good dataset for garbage detection. Accomplishments that we're proud of We succeeded in creating the application quite fast and releasing it on the Google Play Store without major bugs. We are also quite proud of the neural network construction and the fact that it works correctly! 
What we learned To be more confident with React Native and PyTorch. Also, it was the first time that we combined a neural network with a mobile application, so we learned how to publish the API and tried to optimize it to return results to the application as fast as possible. What's next for Gaia's League Optimization of the algorithm (it currently takes ~10 seconds to run detection on both images and upload); we can probably speed this up by using better image compression and a better hosting machine. Share it with many users and get feedback to continue working on it :) We are also thinking about using it for garbage detection at sea, for example. Built With django python pytorch react react-native Try it out play.google.com github.com
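The scoring step described above (count garbage in the "before" and "after" photos, reward what disappeared) can be sketched like this; the point values and time bonus are invented for illustration, not the app's real tuning:

```python
# Hypothetical scoring rule: the detector returns a garbage count for
# each photo, and the reward is based on how many items disappeared
# between the "before" and "after" shots, plus a speed bonus.

def score(count_before, count_after, seconds_taken, points_per_item=10):
    """Return (items collected, total points) for one cleaning session."""
    collected = max(0, count_before - count_after)
    base = collected * points_per_item
    bonus = max(0, 300 - seconds_taken) // 10 if collected else 0  # faster = more
    return collected, base + bonus
```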
Gaia's League
Gaia's League aim at cleaning the planet. Take a picture of a dirty place, clean it, then take another picture of the result. Our ML algorithm will count the garbage collected and reward with point.
['Neroksi N.', 'Marin Bouthemy']
[]
['django', 'python', 'pytorch', 'react', 'react-native']
83
10,159
https://devpost.com/software/deepfakedetection
Inspiration A huge share of the human brain's resources is devoted to the visual cortex. With the advent of deepfakes, this visual information can be misleading and manipulated. These fakes can mislead the public or even be used to shame a person. I want to help solve this problem for the betterment of everyone. What it does How I built it PyTorch, Python Challenges I ran into Accomplishments that I'm proud of What I learned What's next for deepfakedetection Deepfake audio detection and more resource-efficient development Built With python pytorch Try it out github.com
deepfakedetection
We are already late in taking precautions for covid but not so late for deepfakes. I want to help protect the world from these fakes.
['Nithin Varghese']
[]
['python', 'pytorch']
84
10,159
https://devpost.com/software/insight-c6hxug
Making NLP accessible and also demonstrating what value can be derived from it. This is NLP as a Service. Natural Language Processing is being used more and more in the enterprise to extract information from unstructured data. With the advent of language models and transfer learning in the domain of NLP, this use has only accelerated. However, designing, training, and deploying an NLP-based solution by leveraging these language models, especially Transformer-architecture-based solutions, still requires very niche and extremely specialized knowledge. My solution is designed to work around this challenge. Using my application and the back-end API service, the user can get inference from fine-tuned models which work at a high level of accuracy. The solution is designed in a modular way such that the API service and the GUI can be expanded for further downstream NLP tasks, by adding language models and related files for the specific NLP task. The solution is built from two different aspects: Transformers and Deep Learning for NLP: fine-tuning language models using PyTorch for specific NLP tasks. An API service and User Interface: to interact with these NLP models to get inference for the various tasks. Built With fastapi python pytorch streamlit Try it out github.com
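The modular design described above (new downstream tasks plug in without touching the service core) can be sketched with a simple task registry; the task name and dummy handler below are illustrative, not Project Insight's actual API:

```python
# Each downstream NLP task registers its own handler, so the API
# service can route inference requests without knowing task internals.
# "sentiment" and the dummy handler are hypothetical examples.

TASK_REGISTRY = {}

def register_task(name):
    def wrap(fn):
        TASK_REGISTRY[name] = fn
        return fn
    return wrap

@register_task("sentiment")
def sentiment(text):
    # Placeholder for a fine-tuned transformer; returns a dummy label.
    return {"task": "sentiment", "input": text, "label": "positive"}

def infer(task, text):
    """Dispatch an inference request to the registered task handler."""
    if task not in TASK_REGISTRY:
        raise ValueError(f"unknown task: {task}")
    return TASK_REGISTRY[task](text)
```

Adding a new task is then a matter of registering one more handler backed by its own fine-tuned model.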
Insight
NLP as a Service. Project Insight is designed for users to get insights into user the textual data based on Pre-Trained and Fine-Tuned NLP models..
['Abhishek Kumar Mishra']
[]
['fastapi', 'python', 'pytorch', 'streamlit']
85
10,159
https://devpost.com/software/real-time-sudoku-solver-computer-vision-deep-learning
demo_pic2 demo_pic1 Inspiration I made a Sudoku solver using PyGame some time ago and have wanted to do something like this ever since. Then I saw that this hackathon was on, so I decided to implement the neural network using PyTorch too. What it does As the name suggests, it solves Sudoku puzzles by looking at them, along the lines of augmented reality. Using the camera, it searches for a 9*9 Sudoku puzzle in the frame, extracts it, solves it and overlays the solution on the puzzle itself. All you gotta do is show the incomplete puzzle to the camera! How I built it For the digit recognition part, I used PyTorch to build a Convolutional Neural Network and then trained it on the MNIST dataset of handwritten digits. For the image processing part, I used OpenCV intensively. And to solve the sudoku, at first I was applying normal recursion and backtracking, but it was too slow, so I used Peter Norvig's optimized version of backtracking and a Medium article by Naoki Shibuya, which reduced the time taken to a great extent! Challenges I ran into Detecting the empty blocks in the Sudoku puzzle was one major challenge. Also, superimposing the solution on the puzzle itself was a bit challenging, but there were OpenCV functions which made it doable. What I learned PyTorch and OpenCV mainly. What's next for Real Time Sudoku Solver - Computer Vision & Deep Learning Probably bundle this into a mobile app and publish it to the app stores. Built With numpy opencv python pytorch torchvision Try it out github.com
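For reference, the "normal recursion and backtracking" approach the project started with (before switching to Norvig's constraint-propagation version) fits in a few lines; 0 marks the empty cells found by the digit recognizer:

```python
# Plain backtracking Sudoku solver: try each digit in the first empty
# cell, recurse, and undo on failure. Grid is a 9x9 list of lists.

def valid(g, r, c, v):
    """Check row, column, and 3x3 box constraints for placing v at (r, c)."""
    if v in g[r] or any(g[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(g[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(g):
    """Fill zeros in place; return True once the grid is complete."""
    for r in range(9):
        for c in range(9):
            if g[r][c] == 0:
                for v in range(1, 10):
                    if valid(g, r, c, v):
                        g[r][c] = v
                        if solve(g):
                            return True
                        g[r][c] = 0
                return False
    return True
```

Plain backtracking like this is fine for most puzzles but degrades badly on hard grids, which is why constraint propagation pays off in a real-time setting.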
Real Time Sudoku Solver - CV/DL
Application which uses the camera and searches for a 9*9 Sudoku puzzle in the frame, then extracts it (by recognizing the digits and the white spaces) and then superimposes the solution on it.
['Raghav Virmani']
[]
['numpy', 'opencv', 'python', 'pytorch', 'torchvision']
86
10,159
https://devpost.com/software/codesnippetsearch-net
Web application index page Web application search page Web extension Inspiration Code search (for me at least) is one of the most important tools during coding. I noticed that searching and navigating through code heavily outweighs the actual coding. When I'm already familiar with a codebase, I find that exact search is what I need (for example regular expressions and case insensitive search). That is because I'm already familiar with the naming scheme, and I can reasonably predict how to formulate a search query. An example of this is the search tool from Visual Studio Code. It's my daily driver because it's fast, accurate, and reliable. But when I'm not familiar with the codebase, or even a new folder within an existing large project, it makes searching more difficult because it's more trial and error. A tool like CodeSnippetSearch would allow me to easily explore unfamiliar code focusing on the semantics without getting bogged down in the syntax. This is especially useful when onboarding a new developer onto a project because it can be a significant boost to their productivity. Outside of a work environment, we encounter unfamiliar code in the form of GitHub repositories. Semantic search tools would provide a faster way for users to find answers to their issues directly in the code. Consequently, it would lessen the burden on maintainers to provide these answers. Quickly locating the source of their problems would hopefully also encourage users to contribute to the repository. What it does CodeSnippetSearch, a web application, and a web extension that allows you to search GitHub repositories using natural language queries and code itself. How I built it The main data source for CodeSnippetSearch is Github's CodeSearchNet project. It contains approximately 6 million functions from 6 programming languages (Go, Python, Php, Java, Ruby, and Javascript). CodeSearchNet also provides various baseline implementations of neural code search in Tensorflow. 
My implementation was inspired by their "Neural Bag of Words" baseline implementation. Before the hackathon, I had written CodeSnippetSearch in Keras and it was only able to search through the CodeSearchNet dataset. Due to difficulties when developing and deploying the models, I decided to switch to PyTorch when I wanted to add support for searching GitHub repositories. CodeSnippetSearch works by using joint embeddings of code and queries to implement a neural search system. The training objective is to map code and corresponding queries onto vectors that are close to each other. With this, we can embed a natural language query and then use nearest neighbor search to return a set of matching code snippets. During training, we use function docstrings as substitutes for natural language queries. To learn the embeddings I combine a set of sequence encoders (weighted bag of words in this case) to encode the inputs. The loss function can be intuitively explained as maximizing the inner product between the corresponding code and query pairs while minimizing the inner product between non-corresponding pairs. To train a repository model I simply take the model that was trained on the CodeSearchNet data, extract the embedding weights, and fine-tune them on a repository-specific dataset that was extracted separately. I built the neural model in PyTorch and I'm using AnnoyIndex for nearest neighbor search. The web application backend is written in Django and the frontend of the web application and web extension is written in Vue. Challenges I ran into Providing fast search results Using GitHub's tree-sitter to parse and extract functions from GitHub repositories How to use base models to fine-tune the repository models Accomplishments that I'm proud of During the project I discovered a bug in GitHub's CodeSearchNet. The problem was in the configuration of the nearest neighbor search. Fixing the bug improved the final evaluation metrics by almost a factor of two. 
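The training objective described above (maximize inner products of corresponding code/query pairs, minimize the rest) is commonly implemented as softmax cross-entropy over in-batch similarities; a pure-Python sketch under that assumption, not CodeSnippetSearch's exact code:

```python
# In-batch contrastive loss: for each query embedding, its own code
# snippet is the positive and every other snippet in the batch is a
# negative; apply softmax cross-entropy over the inner products.
import math

def batch_loss(code_vecs, query_vecs):
    """Mean cross-entropy where row i's target is column i."""
    n, loss = len(code_vecs), 0.0
    for i, q in enumerate(query_vecs):
        sims = [sum(a * b for a, b in zip(q, c)) for c in code_vecs]
        m = max(sims)                                    # log-sum-exp trick
        log_z = m + math.log(sum(math.exp(s - m) for s in sims))
        loss += log_z - sims[i]                          # -log softmax(sims)[i]
    return loss / n
```

Training drives this loss down, which pulls each docstring-derived query vector toward its own function's vector and away from the rest of the batch.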
What I learned Handling large machine learning tasks from preprocessing the data to deploying trained PyTorch models into production. What's next for CodeSnippetSearch.net Adding support for more programming languages Adding more repositories to the web application Improving the search results Built With django python pytorch vue Try it out codesnippetsearch.net addons.mozilla.org github.com
CodeSnippetSearch.net
Search GitHub repositories using natural language queries and code itself.
['Rok Novosel']
[]
['django', 'python', 'pytorch', 'vue']
87
10,159
https://devpost.com/software/covid-19-detection-using-deep-learning-ai
Thereafter images taken by astronauts will be analyzed by a CNN model to show us the impact created on the ground by objects - lunar craters. Inspiration One of the things that should be done in this scenario is widespread testing, so that the true situation can be understood and the right decisions are taken. The disadvantages of manual testing are that it is costly and that there are too few testing kits, among other issues, so using deep learning is better. This matters because the disease is highly contagious. COVID-19 analysis using deep learning works on lung X-rays of patients; the basic idea is to classify each X-ray as COVID or normal. How I built it I used Anaconda Navigator to launch Jupyter Notebook and imported some libraries, e.g. Matplotlib, Keras... Challenges I ran into It was hard to install OpenCV in the Anaconda framework. Accomplishments that I'm proud of The system is working. What I learned Perfecting my Python skills and machine learning. What's next for Covid 19 Detection using deep learning (AI) Adding more data to the dataset. Built With ai kera machine-learning python pytorch tensorflow
covid 19 with Ai
i think using Xray images to tell if somebody is positive or negative is comfortable
['limo patrick']
[]
['ai', 'kera', 'machine-learning', 'python', 'pytorch', 'tensorflow']
88