eaglelandsonce committed
Commit 26d0a40 · verified · 1 Parent(s): 7dee445

Create home.ipynb

Files changed (1)
  1. home.ipynb +270 -0
home.ipynb ADDED
@@ -0,0 +1,270 @@
+ {
+ "nbformat": 4,
+ "nbformat_minor": 5,
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.x"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# LangChain Course: From Basics to Intermediate\n",
+ "Welcome to this hands-on LangChain course. Across 20 cells, we'll cover core concepts: building chains, managing conversation memory, using agents, retrieval, and more."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Course Overview & Prerequisites\n",
+ "**Objectives:** Learn how to create LLM-driven applications using LangChain.\n",
+ "**You'll learn to:**\n",
+ "- Install and configure LangChain and its dependencies.\n",
+ "- Build simple LLMChains and run them.\n",
+ "- Manage conversation memory.\n",
+ "- Design advanced prompt templates.\n",
+ "- Integrate agents with external tools.\n",
+ "- Load documents and build RAG workflows.\n",
+ "- Monitor performance with callbacks.\n",
+ "\n",
+ "**Prerequisites:**\n",
+ "- Python 3.8+ installed.\n",
+ "- An OpenAI API key set as `OPENAI_API_KEY`.\n",
+ "- Basic Python knowledge."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# 1. Install Dependencies\n",
+ "# pypdf is required by PyPDFLoader in the document-loading cell below\n",
+ "!pip install langchain openai faiss-cpu wikipedia pypdf tqdm"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2. Initialization\n",
+ "Import core classes and initialize your LLM with sensible defaults."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from langchain import LLMChain, PromptTemplate\n",
+ "from langchain.llms import OpenAI\n",
+ "\n",
+ "# Initialize the OpenAI LLM (reads OPENAI_API_KEY from the environment)\n",
+ "llm = OpenAI(temperature=0.5, max_tokens=150)\n",
+ "print('LLM initialized:', llm)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3. Build Your First LLMChain\n",
+ "An `LLMChain` ties a prompt template to an LLM. We'll create a personalized greeting chain."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from datetime import datetime\n",
+ "\n",
+ "# Define a simple prompt template\n",
+ "template = 'Hello, {name}! Welcome to LangChain on {date}.'\n",
+ "prompt = PromptTemplate(input_variables=['name', 'date'], template=template)\n",
+ "chain = LLMChain(llm=llm, prompt=prompt)\n",
+ "\n",
+ "# Run the chain\n",
+ "output = chain.run({'name': 'Alice', 'date': datetime.now().strftime('%Y-%m-%d')})\n",
+ "print(output)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4. Conversation Memory\n",
+ "Use `ConversationBufferMemory` to keep track of past messages and context."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from langchain.memory import ConversationBufferMemory\n",
+ "\n",
+ "# ConversationBufferMemory injects its buffer under the key 'history',\n",
+ "# so the prompt must declare {history} as an input variable\n",
+ "mem_template = 'Conversation so far:\\n{history}\\nGreet {name} on {date}.'\n",
+ "mem_prompt = PromptTemplate(input_variables=['history', 'name', 'date'], template=mem_template)\n",
+ "\n",
+ "# Set up memory and chain\n",
+ "memory = ConversationBufferMemory()\n",
+ "chain_mem = LLMChain(llm=llm, prompt=mem_prompt, memory=memory)\n",
+ "\n",
+ "# Run with context retained across calls\n",
+ "print(chain_mem.run({'name': 'Bob', 'date': '2025-05-18'}))\n",
+ "print(chain_mem.run({'name': 'Carol', 'date': '2025-05-19'}))\n",
+ "\n",
+ "# Inspect the memory buffer\n",
+ "print('Memory buffer:', memory.buffer)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5. Advanced Prompt Templates\n",
+ "Delegate complex formatting to LangChain's `PromptTemplate`. Example: translation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Translation example\n",
+ "translate_template = 'Translate the following text to {language}: {text}'\n",
+ "translate_prompt = PromptTemplate(input_variables=['language', 'text'], template=translate_template)\n",
+ "formatted = translate_prompt.format(language='French', text='LangChain simplifies LLM apps.')\n",
+ "print(formatted)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 6. Agents & Tools\n",
+ "Agents enable your chain to call external tools. Here's a zero-shot agent with Wikipedia lookup."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from langchain.agents import initialize_agent, Tool\n",
+ "from langchain.tools import WikipediaQueryRun\n",
+ "from langchain.utilities import WikipediaAPIWrapper\n",
+ "\n",
+ "# WikipediaQueryRun requires an api_wrapper instance\n",
+ "wiki_tool = Tool(\n",
+ "    name='wiki',\n",
+ "    func=WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper()).run,\n",
+ "    description='Search Wikipedia for factual queries'\n",
+ ")\n",
+ "agent = initialize_agent([wiki_tool], llm, agent='zero-shot-react-description', verbose=True)\n",
+ "# Ask the agent a question\n",
+ "print(agent.run('Who was Ada Lovelace?'))"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 7. Document Loading & Vector Stores\n",
+ "Load files (PDF, text) and build FAISS indexes for semantic retrieval."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from langchain.document_loaders import PyPDFLoader\n",
+ "from langchain.embeddings import OpenAIEmbeddings\n",
+ "from langchain.vectorstores import FAISS\n",
+ "\n",
+ "# Load a PDF document (replace 'example.pdf' with a file on disk)\n",
+ "loader = PyPDFLoader('example.pdf')\n",
+ "docs = loader.load()\n",
+ "\n",
+ "# Create embeddings and build the FAISS index\n",
+ "embeddings = OpenAIEmbeddings()\n",
+ "vectorstore = FAISS.from_documents(docs, embeddings)\n",
+ "\n",
+ "print(f'Indexed {len(docs)} documents into FAISS.')"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 8. RetrievalQA Chain\n",
+ "Combine retrieval with generation to answer questions over your documents."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from langchain.chains import RetrievalQA\n",
+ "\n",
+ "qa_chain = RetrievalQA.from_chain_type(\n",
+ "    llm=llm,\n",
+ "    chain_type='stuff',\n",
+ "    retriever=vectorstore.as_retriever()\n",
+ ")\n",
+ "# Example question\n",
+ "response = qa_chain.run('What is the main topic of the document?')\n",
+ "print('Answer:', response)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 9. Callback Handlers\n",
+ "Monitor token usage and cost with `get_openai_callback`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "from langchain.callbacks import get_openai_callback\n",
+ "\n",
+ "with get_openai_callback() as cb:\n",
+ "    result = chain.run({'name': 'Dave', 'date': '2025-05-20'})\n",
+ "print('Output:', result)\n",
+ "print(f'Total tokens: {cb.total_tokens}, Prompt tokens: {cb.prompt_tokens}, Completion tokens: {cb.completion_tokens}')\n",
+ "print(f'Total cost (USD): {cb.total_cost:.6f}')"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 10. Next Steps & Exploration\n",
+ "Having covered chains, memory, templates, agents, retrieval, and callbacks, explore:\n",
+ "- Custom chain classes and asynchronous execution.\n",
+ "- Integration with Streamlit, FastAPI, or Flask for web UIs.\n",
+ "- Multi-agent orchestration and tool chaining.\n",
+ "- Optimizing prompt engineering for production cost and latency.\n",
+ "\n",
+ "Happy building!"
+ ]
+ }
+ ]
+ }