Shipmaster1 committed on
Commit c05011f · verified · 1 Parent(s): 1227663

Update README.md

Files changed (1)
  1. README.md +75 -76
README.md CHANGED
@@ -7 +7 @@
-sdk_version: 3.50.2
+sdk_version: 5.23.3

The full updated README.md follows:
---
title: RAG Implementation Notebook
emoji: 🔍
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.23.3
app_file: app.py
pinned: false
---

# RAG Implementation Notebook

This Space contains a Jupyter notebook demonstrating a Retrieval Augmented Generation (RAG) implementation using OpenAI's API and Hugging Face models.

## Features
- PDF document processing
- Text chunking and embedding
- Vector database implementation
- RAG pipeline with context-aware responses

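The features above chain together into a single retrieve-then-generate loop: documents are split into chunks, each chunk is embedded, and the chunks most similar to the query are passed to the LLM as context. As a rough sketch only (the function names, chunk sizes, and similarity logic below are illustrative assumptions, not code taken from the notebook):

```python
# Illustrative sketch of the retrieval half of a RAG pipeline:
# character-based chunking plus cosine-similarity search over
# pre-computed embedding vectors. Pure stdlib; names are hypothetical.
from math import sqrt

def chunk_text(text, size=500, overlap=50):
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def top_k(query_vec, doc_vecs, k=3):
    """Indices of the k document vectors most similar to the query."""
    order = sorted(range(len(doc_vecs)),
                   key=lambda i: cosine(query_vec, doc_vecs[i]),
                   reverse=True)
    return order[:k]
```

In the actual pipeline the vectors would come from an embedding model (e.g. OpenAI's embeddings endpoint), and the top-k chunks would be interpolated into the chat prompt before generation; only the retrieval mechanics are shown here.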
## How to Use
1. Clone this repository
2. Install the requirements: `pip install -r requirements.txt`
3. Open the notebook: `jupyter notebook Pythonic_RAG_Assignment.ipynb`

## Requirements
See `requirements.txt` for the complete list of dependencies.

# 🧑‍💻 What is [AI Engineering](https://maven.com/aimakerspace/ai-eng-bootcamp)?

AI Engineering refers to the industry-relevant skills that data science and engineering teams need to successfully **build, deploy, operate, and improve Large Language Model (LLM) applications in production environments**.

In practice, this requires understanding both prototyping and production deployments.

During the *prototyping* phase, Prompt Engineering, Retrieval Augmented Generation (RAG), Agents, and Fine-Tuning are all essential tools to understand and leverage. Prototyping includes:
1. Building RAG Applications
2. Building with Agent and Multi-Agent Frameworks
3. Fine-Tuning LLMs & Embedding Models
4. Deploying LLM Prototype Applications to Users

When *productionizing* LLM application prototypes, many important aspects must be addressed to ensure helpful, harmless, honest, reliable, and scalable solutions for your customers or stakeholders. Productionizing includes:
1. Evaluating RAG and Agent Applications
2. Improving Search and Retrieval Pipelines for Production
3. Monitoring Production KPIs for LLM Applications
4. Setting up Inference Servers for LLMs and Embedding Models
5. Building LLM Applications with Scalable, Production-Grade Components

This bootcamp builds on our two previous courses, [LLM Engineering](https://maven.com/aimakerspace/llm-engineering) and [LLM Operations](https://maven.com/aimakerspace/llmops) 👇

- Large Language Model Engineering (LLM Engineering) refers to the emerging best practices and tools for pretraining, post-training, and optimizing LLMs prior to production deployment. Pre- and post-training techniques include unsupervised pretraining, supervised fine-tuning, alignment, model merging, distillation, quantization, and others.

- Large Language Model Ops (LLM Ops, or LLMOps, as used by [WandB](https://docs.wandb.ai/guides/prompts) and [a16z](https://a16z.com/emerging-architectures-for-llm-applications/)) refers to the emerging best practices, tooling, and improvement processes used to manage production LLM applications throughout the AI product lifecycle. LLM Ops is a subset of Machine Learning Operations (MLOps) that focuses on the LLM-specific infrastructure and ops capabilities required to build, deploy, monitor, and scale complex LLM applications in production environments. _This term is used much less in industry these days._

# 🏆 **Grading and Certification**

To become **AI-Makerspace Certified**, which will open you up to additional opportunities for full- and part-time work within our community and network, you must:

1. Complete all project assignments.
2. Complete a project and present during Demo Day.
3. Receive at least an 85% total grade in the course.

If you do not complete all assignments, present during Demo Day, or maintain a high-quality standard of work, you may still be eligible for a *certificate of completion*, provided you miss no more than 2 live sessions.

# 📚 About

This GitHub repository is your gateway to mastering the art of AI Engineering. ***All assignments for the course will be released here for your building, shipping, and sharing adventures!***

# 🙏 Contributions

We believe in the power of collaboration. Contributions, ideas, and feedback are highly encouraged! Let's build the ultimate resource for AI Engineering together.

Please reach out with any questions or suggestions.

Happy coding! 🚀🚀🚀