md updated in app.py
app.py
CHANGED
@@ -1,6 +1,5 @@
 import streamlit as st
 
-
 st.set_page_config(
     page_title="Hello",
     page_icon="👋",
@@ -20,176 +19,177 @@ st.markdown(
 
 ## Table of Content
 
-| Index | Title |
-|-------|-------|
-| 0 | intro to python - creating pi |
-| 0 | intro to python |
-| 01 | numpy, pandas, matplotlib |
-| 02 | ann and cnn |
-| 02 | gradient descent in neural networks |
-| 02 | run a neural network models on tpu |
-| 03 | run an installed neuralnet |
-| 04a | more in cnn (famous cnn) |
-| 04a | more in cnn |
-| 04a | popular cnn walkthrough with training and evaluating on test set |
-| 04b | 3d cnn using captcha ocr |
-| 04b | vit classifier on mnist |
-| 04c | chestxray classification |
-| 04d | class activation map |
-| 05 | fine tuning neural network |
-| 06a | autoencoder |
-| 06b | image denoising |
-| 07a | variational autoencoder |
-| 07b | neural network regressor + bayesian last layer |
-| 08 | inference of autoencoder |
-| 09a | image segmentation |
-| 09b | image segmentation unet |
-| 09c | image segmentation unet dense style |
-| 09d | image segmentation unet attention style |
-| 10 | dcgan on masked mnist |
-| 10 | masked image model |
-| 10 | reconstruct mnist fashion image from ae to vapaad |
-| 10 | reconstruct mnist image from ae to vapaad |
-| 10 | vapad test v1 |
-| 10 | vapad test v2 |
-| 10a | dcgan |
-| 10b | dcgan on masked mnist |
-| 11a | huggingface on names |
-| 11b | transformers |
-| 11c | lstm on IMDB |
-| 11c | simple RNN on sine function |
-| 11d | text encoder using transformers |
-| 11e | attention layer sample |
-| 11f | convolutional lstm next frame prediction |
-| 11g | convolutional lstm next frame prediction |
-| 11h | next frame prediction convolutional lstm |
-| 11i | next frame prediction convolutional lstm + attention |
-| 11j | next frame prediction vapaad |
-| 11k | next frame ecoli prediction instruct-vapaad class (updated) with stop gradient |
-| 11k | next frame prediction instruct-vapaad class (updated) with stop gradient |
-| 11k | next frame prediction instruct-vapaad class with stop gradient |
-| 11k | next frame prediction instruct-vapaad with stop gradient |
-| 11k | next frame prediction instruct-vapaad |
-| 13 | bert on IMDB |
-| 14 | music generation |
-| 15 | functional api and siamise network |
-| 16a | use lstm to forecast stock price |
-| 16b | use neuralprophet to forecast stock price |
-| 16c | use finviz to get basic stock data |
-| 16d | dynamic time warping |
-| 17 | introduction to modeling gcl |
-| 18a | image classification with vit |
-| 18b | transformer |
-| 18c | transformers can do anything |
-| 18d | attention |
-| 18e | transformers and multi-head attention |
-| 19a | text generation with GPT |
-| 19b | quick usage of chatGPT |
-| 19c | build quick chatbot using clinical trails data |
-| 19c | fine tune chatgpt clinical trials data - part 1 |
-| 19c | fine tune chatgpt clinical trials data - part 2 |
-| 19c | fine tune chatgpt olympics data - part 1 |
-| 19d | distances between two sentences |
-| 20b | generate ai photo by leapai |
-| 21 | neural machine learning translation |
-| 21a | image classification with vision transformer |
-| 21b | image segmentation |
-| 21b | image_classification_with_vision_transformer_brain_tumor |
-| 21b | object detection using vision transformer |
-| 21b | shiftvit on cifar10 |
-| 21c | face recognition |
-| 21d | neural style transfer |
-| 21e | 3d image classification |
-| 21f | object detection inference from huggingface |
-| 21f | object detection inference |
-| 22a | monte carlo policy gradient |
-| 22b | dql carpole |
-| 22c | dqn carpole keras |
-| 23a | actor-critic intro using toy data |
-| 23a | actor-critic intro |
-| 23b | actor-critic with ppo |
-| 24a | basic langchain tutorial |
-| 24a | fine tune falcon on qlora |
-| 24a | fine tune llm bert using hugginface transformer |
-| 24a | semantic_similarity_with_bert |
-| 24b | character level text generation using lstm |
-| 24b | custom agent with plugin retrieval using langchain |
-| 24b | fast bert embedding |
-| 24b | internet search by key words |
-| 24b | palm api getting started |
-| 24b | pandasAI demo |
-| 24b | scrape any PDF for QA pairs |
-| 24b | scrape internet with public URL |
-| 24b | self refinement prompt engineering |
-| 24b | semantic similarity with keras nlp |
-| 24b | serpapi openai |
-| 24c | fine tune customized qa model |
-| 24d | fine tune llm tf-f5 |
-| 24d | langchain integrations of vector stores |
-| 24d | performance evaluation of finetuned model, chatgpt, langchain, and rag |
-| 24e | working with langchain agents |
-| 24f | api call to aws lambda with llama2 deployed |
-| 24f | fine tune bert using mrpc dataset and push to huggingface hub |
-| 24f | fine tune Llama 2 using ysa data in colab |
-| 24f | fine tune llama2 in colab |
-| 24f | fine tune llama2 using guanaco in colab |
-| 24f | fine tune llama3 with orpo |
-| 24f | fine tune Mistral_7B_v0_1 using dataset openassistant guanaco |
-| 24f | hqq 1bit |
-| 24f | inference endpoint interaction from huggingface |
-| 24f | inference from llama-2-7b-miniguanaco |
-| 24f | jax gemma on colab tpu |
-| 24f | llm classifier tutorials |
-| 24f | load and save models from transformers package locally |
-| 24f | load sciq formatted dataset from huggingface into chroma |
-| 24f | load ysa formatted dataset from huggingface into chroma |
-| 24f | ludwig efficient fine tune Llama2 7b |
-| 24f | process any custom data from pdf to create qa pairs for rag system and push to huggingface |
-| 24f | process custom data from pdf and push to huggingface to prep for fine tune task of llama 2 using lora |
-| 24f | prompt tuning using peft |
-| 24f | started with llama 65b |
-| 24f | what to do when rag system hallucinates |
-| 24g | check performance boost from QA context pipeline |
-| 24h | text generation gpt |
-| 24i | google gemini rest api |
-| 26 | aws textract api call via post method |
-| 27a | image captioning vit-gpt2 on coco2014 data |
-| 27b | image captioning cnn+transformer using flickr8 (from fine-tune to HF) |
-| 27b | image captioning cnn+transformer using flickr8 data save and load locally |
-| 27c | keras integration with huggingface tutorial |
-| 27d | stock chart captioning (from data cleanup to push to HF) |
-| 27d | stock chart image classification using vit part 1+2 |
-| 27d | stock chart image classifier using vit |
-| 27e | keras greedy image captioning (inference) |
-| 27e | keras greedy image captioning (training) |
-| 28a | quantized influence versus cosine similarity |
-| 28b | quantized influence versus cosine similarity |
-| 28c | quantized influence versus cosine similarity |
-| 29a | dna generation to protein folding |
-| 30a | v-jepa (ish) on mnist data |
-| 30a | vapad test v1 |
-| 30a | vapad test v2 |
-| 30e | moving stock returns instruct-vapaad class (success) |
-| 30e | redo rag from scratch using openai embed and qim |
-| 31a | redo rag from scratch using openai embed and qim |
-| 31b | redo rag from scratch using openai embed + qim + llama3 |
-| 31c | redo rag with auto question generation |
-| 32a | text-to-video initial attempt |
-| _ | audio processing in python |
-| _ | blockchain tutorial (long) |
-| _ | blockchain tutorial |
-| _ | dataframe querying using pandasAI |
-| _ | extract nii files |
-| _ | fake patient bloodtest generator |
-| _ | Image Processing in Python_Final |
-| _ | kmeans_from_scratch |
-| _ | Manifold learning |
-| _ | openai new api |
-| _ | pca |
-| _ | rocauc |
-| _ | simulate grading rubrics with and without max function |
-| _ | simulation of solar eclipse |
-| _ | Unrar, Unzip, Untar Rar, Zip, Tar in GDrive |
+| Index | Title | Description |
+|-------|-------------------------------------------------------------------------------------------|---------------------------------------------------------------|
+| 0 | intro to python - creating pi | Learning Python by simulating pi |
+| 0 | intro to python | Learning Python basics |
+| 01 | numpy, pandas, matplotlib | Introduction to essential Python libraries for data science |
+| 02 | ann and cnn | Exploring artificial neural networks and convolutional neural networks |
+| 02 | gradient descent in neural networks | Understanding gradient descent optimization in neural networks |
+| 02 | run a neural network models on tpu | Running neural network models on Tensor Processing Units (TPU) |
+| 03 | run an installed neuralnet | Executing a pre-installed neural network model |
+| 04a | more in cnn (famous cnn) | Deep dive into famous convolutional neural network architectures |
+| 04a | more in cnn | Further exploration of convolutional neural networks |
+| 04a | popular cnn walkthrough with training and evaluating on test set | Step-by-step guide to training and evaluating CNNs on a test dataset |
+| 04b | 3d cnn using captcha ocr | Using 3D CNNs for optical character recognition in CAPTCHAs |
+| 04b | vit classifier on mnist | Implementing a Vision Transformer (ViT) classifier on MNIST dataset |
+| 04c | chestxray classification | Classifying chest X-ray images using neural networks |
+| 04d | class activation map | Visualizing regions affecting neural network decisions with class activation maps |
+| 05 | fine tuning neural network | Techniques for fine-tuning pre-trained neural networks |
+| 06a | autoencoder | Exploring autoencoders for unsupervised learning |
+| 06b | image denoising | Using neural networks to remove noise from images |
+| 07a | variational autoencoder | Learning about variational autoencoders and their applications |
+| 07b | neural network regressor + bayesian last layer | Building a neural network regressor with a Bayesian approach |
+| 08 | inference of autoencoder | Performing inference with autoencoders |
+| 09a | image segmentation | Techniques for segmenting images using neural networks |
+| 09b | image segmentation unet | Implementing U-Net architecture for image segmentation |
+| 09c | image segmentation unet dense style | Advanced U-Net with dense layers for image segmentation |
+| 09d | image segmentation unet attention style | U-Net with attention mechanisms for improved segmentation |
+| 10 | dcgan on masked mnist | Using DCGANs on MNIST dataset with masked inputs |
+| 10 | masked image model | Exploring models for processing images with masked areas |
+| 10 | reconstruct mnist fashion image from ae to vapaad | Reconstructing fashion images from autoencoders to VAPAAD models |
+| 10 | reconstruct mnist image from ae to vapaad | Image reconstruction from autoencoders to VAPAAD models |
+| 10 | vapad test v1 | Initial tests on VAPAAD model performance |
+| 10 | vapad test v2 | Further testing on VAPAAD model enhancements |
+| 10a | dcgan | Exploring Deep Convolutional Generative Adversarial Networks |
+| 10b | dcgan on masked mnist | Applying DCGANs to MNIST with masked inputs |
+| 11a | huggingface on names | Utilizing Hugging Face libraries for name-based tasks |
+| 11b | transformers | Comprehensive guide to using transformer models |
+| 11c | lstm on IMDB | Applying LSTM networks for sentiment analysis on IMDB reviews |
+| 11c | simple RNN on sine function | Exploring simple recurrent neural networks with sine functions |
+| 11d | text encoder using transformers | Building a text encoder with transformer architecture |
+| 11e | attention layer sample | Examples and applications of attention layers |
+| 11f | convolutional lstm next frame prediction | Using convolutional LSTMs for predicting video frames |
+| 11g | convolutional lstm next frame prediction | Further exploration of convolutional LSTMs for frame prediction |
+| 11h | next frame prediction convolutional lstm | Advanced techniques in LSTM-based video frame prediction |
+| 11i | next frame prediction convolutional lstm + attention | Integrating attention with LSTMs for enhanced frame prediction |
+| 11j | next frame prediction vapaad | Predicting video frames using VAPAAD models |
+| 11k | next frame ecoli prediction instruct-vapaad class (updated) with stop gradient | Updated E. coli frame prediction with VAPAAD and stop gradients |
+| 11k | next frame prediction instruct-vapaad class (updated) with stop gradient | Improved frame prediction with updated VAPAAD and stop gradients |
+| 11k | next frame prediction instruct-vapaad class with stop gradient | Frame prediction using VAPAAD with gradient stopping |
+| 11k | next frame prediction instruct-vapaad with stop gradient | Enhancing VAPAAD models with stop gradient techniques |
+| 11k | next frame prediction instruct-vapaad | Introduction to frame prediction with VAPAAD models |
+| 13 | bert on IMDB | Applying BERT for sentiment analysis on IMDB reviews |
+| 14 | music generation | Exploring neural networks for generating music |
+| 15 | functional api and siamise network | Utilizing Keras Functional API for Siamese networks |
+| 16a | use lstm to forecast stock price | Forecasting stock prices with LSTM networks |
+| 16b | use neuralprophet to forecast stock price | Stock price prediction using the NeuralProphet model |
+| 16c | use finviz to get basic stock data | Retrieving stock data using the Finviz platform |
+| 16d | dynamic time warping | Exploring dynamic time warping for time series analysis |
+| 17 | introduction to modeling gcl | Basics of modeling with Generative Causal Language (GCL) |
+| 18a | image classification with vit | Using Vision Transformers for image classification |
+| 18b | transformer | Deep dive into the workings of transformer models |
+| 18c | transformers can do anything | Exploring the versatility of transformer models |
+| 18d | attention | Understanding the mechanisms and applications of attention |
+| 18e | transformers and multi-head attention | Advanced topics on transformers and multi-head attention |
+| 19a | text generation with GPT | Generating text using GPT models |
+| 19b | quick usage of chatGPT | Guide to quickly deploying chatGPT for conversational AI |
+| 19c | build quick chatbot using clinical trails data | Creating a chatbot with clinical trials data for rapid response |
+| 19c | fine tune chatgpt clinical trials data - part 1 | Part 1 of fine-tuning chatGPT with clinical trials data |
+| 19c | fine tune chatgpt clinical trials data - part 2 | Part 2 of fine-tuning chatGPT with clinical trials data |
+| 19c | fine tune chatgpt olympics data - part 1 | Part 1 of fine-tuning chatGPT with data from the Olympics |
+| 19d | distances between two sentences | Computing semantic distances between sentences |
+| 20b | generate ai photo by leapai | Generating photos using LeapAI technology |
+| 21 | neural machine learning translation | Exploring neural machine translation systems |
+| 21a | image classification with vision transformer | Classifying images using Vision Transformers |
+| 21b | image segmentation | Techniques for segmenting images with neural networks |
+| 21b | image_classification_with_vision_transformer_brain_tumor | Classifying brain tumor images with Vision Transformers |
+| 21b | object detection using vision transformer | Object detection using Vision Transformers |
+| 21b | shiftvit on cifar10 | Applying ShiftViT architecture to CIFAR-10 dataset |
+| 21c | face recognition | Implementing facial recognition systems |
+| 21d | neural style transfer | Exploring neural style transfer techniques |
+| 21e | 3d image classification | Classifying 3D images using neural networks |
+| 21f | object detection inference from huggingface | Performing object detection inference using Hugging Face models |
+| 21f | object detection inference | Techniques for conducting object detection inference |
+| 22a | monte carlo policy gradient | Implementing Monte Carlo policy gradients for reinforcement learning |
+| 22b | dql carpole | Applying deep Q-learning to the CartPole problem |
+| 22c | dqn carpole keras | Implementing a deep Q-network for CartPole with Keras |
+| 23a | actor-critic intro using toy data | Introduction to actor-critic methods with toy data |
+| 23a | actor-critic intro | Basics of actor-critic reinforcement learning methods |
+| 23b | actor-critic with ppo | Implementing actor-critic with Proximal Policy Optimization |
+| 24a | basic langchain tutorial | Introductory tutorial on using LangChain |
+| 24a | fine tune falcon on qlora | Fine-tuning Falcon models on Qlora dataset |
+| 24a | fine tune llm bert using hugginface transformer | Fine-tuning BERT models using Hugging Face transformers |
+| 24a | semantic_similarity_with_bert | Exploring semantic similarity using BERT models |
+| 24b | character level text generation using lstm | Generating text at the character level with LSTM networks |
+| 24b | custom agent with plugin retrieval using langchain | Creating custom agents with plugin retrieval in LangChain |
+| 24b | fast bert embedding | Generating quick embeddings using BERT |
+| 24b | internet search by key words | Conducting internet searches based on key words |
+| 24b | palm api getting started | Getting started with PALM API |
+| 24b | pandasAI demo | Demonstrating capabilities of pandasAI library |
+| 24b | scrape any PDF for QA pairs | Extracting QA pairs from PDF documents |
+| 24b | scrape internet with public URL | Scraping the internet using public URLs |
+| 24b | self refinement prompt engineering | Developing refined prompts for better AI responses |
+| 24b | semantic similarity with keras nlp | Exploring semantic similarity using Keras NLP tools |
+| 24b | serpapi openai | Utilizing SerpAPI with OpenAI services |
+| 24c | fine tune customized qa model | Fine-tuning a customized QA model |
+| 24d | fine tune llm tf-f5 | Fine-tuning LLM TF-F5 for specialized tasks |
+| 24d | langchain integrations of vector stores | Integrating LangChain with vector storage solutions |
+| 24d | performance evaluation of finetuned model, chatgpt, langchain, and rag | Evaluating performance of various finetuned models and systems |
+| 24e | working with langchain agents | Guide to using LangChain agents |
+| 24f | api call to aws lambda with llama2 deployed | Making API calls to AWS Lambda with Llama2 deployed |
+| 24f | fine tune bert using mrpc dataset and push to huggingface hub | Fine-tuning BERT on MRPC dataset and publishing to Hugging Face |
+| 24f | fine tune Llama 2 using ysa data in colab | Fine-tuning Llama 2 with YSA data on Colab |
+| 24f | fine tune llama2 in colab | Fine-tuning Llama2 on Google Colab |
+| 24f | fine tune llama2 using guanaco in colab | Fine-tuning Llama2 using Guanaco dataset on Colab |
+| 24f | fine tune llama3 with orpo | Fine-tuning Llama3 with ORPO dataset |
+| 24f | fine tune Mistral_7B_v0_1 using dataset openassistant guanaco | Fine-tuning Mistral_7B_v0_1 with OpenAssistant Guanaco dataset |
+| 24f | hqq 1bit | Exploring 1bit quantization for model compression |
+| 24f | inference endpoint interaction from huggingface | Managing inference endpoints from Hugging Face |
+| 24f | inference from llama-2-7b-miniguanaco | Inference with the Llama-2-7B-MiniGuanaco model |
+| 24f | jax gemma on colab tpu | Utilizing JAX Gemma on Google Colab TPUs |
+| 24f | llm classifier tutorials | Tutorials on using large language models for classification |
+| 24f | load and save models from transformers package locally | Techniques for loading and saving Transformer models locally |
+| 24f | load sciq formatted dataset from huggingface into chroma | Loading SciQ formatted datasets from Hugging Face into Chroma |
+| 24f | load ysa formatted dataset from huggingface into chroma | Loading YSA formatted datasets from Hugging Face into Chroma |
+| 24f | ludwig efficient fine tune Llama2 7b | Efficiently fine-tuning Llama2 7B using Ludwig |
+| 24f | process any custom data from pdf to create qa pairs for rag system and push to huggingface | Processing custom PDF data to create QA pairs for RAG system |
+| 24f | process custom data from pdf and push to huggingface to prep for fine tune task of llama 2 using lora | Preparing custom PDF data for Llama 2 fine-tuning using Lora |
+| 24f | prompt tuning using peft | Using prompt engineering and tuning for fine-tuning models |
+| 24f | started with llama 65b | Getting started with the Llama 65B model |
+| 24f | what to do when rag system hallucinates | Handling hallucinations in RAG systems |
+| 24g | check performance boost from QA context pipeline | Evaluating performance improvements from QA context pipelines |
+| 24h | text generation gpt | Exploring text generation capabilities of GPT models |
+| 24i | google gemini rest api | Using Google Gemini REST API |
+| 26 | aws textract api call via post method | Making POST method API calls to AWS Textract |
+| 27a | image captioning vit-gpt2 on coco2014 data | Captioning images with VIT-GPT2 on COCO2014 dataset |
+| 27b | image captioning cnn+transformer using flickr8 (from fine-tune to HF) | Image captioning using CNN and transformers on Flickr8 dataset |
+| 27b | image captioning cnn+transformer using flickr8 data save and load locally | Saving and loading CNN+transformer models for image captioning |
+| 27c | keras integration with huggingface tutorial | Integrating Keras with Hugging Face libraries |
+| 27d | stock chart captioning (from data cleanup to push to HF) | Developing stock chart captioning models from start to finish |
+| 27d | stock chart image classification using vit part 1+2 | Classifying stock charts using VIT in two parts |
+| 27d | stock chart image classifier using vit | Classifying stock charts using Vision Transformers |
+| 27e | keras greedy image captioning (inference) | Performing inference with Keras models for image captioning |
+| 27e | keras greedy image captioning (training) | Training Keras models for greedy image captioning |
+| 28a | quantized influence versus cosine similarity | Comparing quantized influence and cosine similarity measures |
+| 28b | quantized influence versus cosine similarity | Deep dive into quantized influence metrics versus cosine similarity |
+| 28c | quantized influence versus cosine similarity | Analyzing the impact of quantized influence in machine learning models |
+| 29a | dna generation to protein folding | From generating DNA sequences to modeling protein folding |
+| 30a | v-jepa (ish) on mnist data | Applying V-JEPA models on MNIST dataset |
+| 30a | vapad test v1 | Initial tests and evaluation of VAPAAD models |
+| 30a | vapad test v2 | Further evaluations and improvements of VAPAAD models |
+| 30e | moving stock returns instruct-vapaad class (success) | Successful implementation of moving stock returns with VAPAAD |
+| 30e | redo rag from scratch using openai embed and qim | Rebuilding RAG systems using OpenAI Embeddings and QIM |
+| 31a | redo rag from scratch using openai embed and qim | Reconstructing RAG systems from the ground up with new technologies |
+| 31b | redo rag from scratch using openai embed + qim + llama3 | Advanced rebuilding of RAG using Llama3, OpenAI Embed, and QIM |
+| 31c | redo rag with auto question generation | Enhancing RAG systems with automatic question generation |
+| 32a | text-to-video initial attempt | Initial trials in converting text descriptions to video content |
+| _ | audio processing in python | Techniques for processing audio data in Python |
+| _ | blockchain tutorial (long) | Comprehensive guide to blockchain technology |
+| _ | blockchain tutorial | Introduction to blockchain concepts and applications |
+| _ | dataframe querying using pandasAI | Using pandasAI for advanced dataframe querying |
+| _ | extract nii files | Techniques for extracting data from NII file formats |
+| _ | fake patient bloodtest generator | Generating synthetic patient blood test data for simulations |
+| _ | Image Processing in Python_Final | Comprehensive guide to image processing in Python |
+| _ | kmeans_from_scratch | Implementing K-means clustering algorithm from scratch |
+| _ | Manifold learning | Exploring manifold learning techniques for dimensionality reduction |
+| _ | openai new api | Guide to using the latest OpenAI API features |
+| _ | pca | Principal component analysis for data simplification |
+| _ | rocauc | Understanding ROC-AUC curves and their applications |
+| _ | simulate grading rubrics with and without max function | Simulating grading systems with variations in calculation |
+| _ | simulation of solar eclipse | Modeling solar eclipse events |
+| _ | Unrar, Unzip, Untar Rar, Zip, Tar in GDrive | Techniques for managing compressed files in Google Drive |
+
 
 """
 )
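A table this size is easier to keep consistent if it is built from data instead of hand-aligned markdown. The sketch below is illustrative only (the `render_toc` helper and its sample rows are not part of the Space's code): it produces a three-column markdown table in the same shape as the one added in this commit, and the resulting string could be passed to `st.markdown` in `app.py`.

```python
def render_toc(rows):
    """Render (index, title, description) tuples as a 3-column markdown table."""
    header = "| Index | Title | Description |"
    divider = "|-------|-------|-------------|"
    body = [f"| {index} | {title} | {description} |" for index, title, description in rows]
    return "\n".join([header, divider, *body])

# Two sample rows taken from the table above.
toc = render_toc([
    ("0", "intro to python", "Learning Python basics"),
    ("01", "numpy, pandas, matplotlib", "Introduction to essential Python libraries for data science"),
])
print(toc)
# In the Space this string would be embedded in the st.markdown(...) call.
```

Generating the table this way also makes adding a column (as this commit does) a one-line change to the row format rather than ~170 manual edits.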