Antoine KRAJNC committed
Commit 4ac4c09 · 1 Parent(s): 6099a4b

add readme requirements

Files changed (4):

1. Dockerfile +34 -0
2. README.md +23 -0
3. app.py +49 -0
4. requirements.txt +7 -0
Dockerfile ADDED

```dockerfile
FROM langchain/langchain:latest

# Update packages and install nano and curl
RUN apt-get update -y
RUN apt-get install nano curl -y

# THIS IS SPECIFIC TO HUGGING FACE
# We create a new user named "user" with an ID of 1000
RUN useradd -m -u 1000 user
# We switch from "root" (the default user when building an image) to "user"
USER user
# We set two environment variables
# so that we can give "user" ownership of all files in there afterwards.
# We also add /home/user/.local/bin to the $PATH environment variable.
# PATH sets the paths Linux searches for installed binaries;
# we update it so binaries installed as "user" can be found.
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# We set the working directory to $HOME/app (i.e. /home/user/app)
WORKDIR $HOME/app

# Copy all local files to /home/user/app with "user" as the owner of these files.
# Always use --chown=user on Hugging Face to avoid permission errors.
COPY --chown=user . $HOME/app

# Install all dependencies
RUN pip install -r requirements.txt
# Make sure all packages are up to date
RUN pip install --upgrade -r requirements.txt

# Run FastAPI
CMD fastapi run app.py --port 7860
```
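To build and run this image locally, something like the following should work (the `langchain-base` tag here is just an illustrative local name, not part of the commit):

```shell
# Build the image from the directory containing the Dockerfile
# ("langchain-base" is an arbitrary local tag)
docker build -t langchain-base .

# Run it, mapping the container's port 7860 to the host and
# passing the Mistral API key the app expects
docker run -p 7860:7860 -e MISTRAL_API_KEY=your_key_here langchain-base
```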
README.md CHANGED

````markdown
short_description: Very simple langchain api to translate anything
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Langchain basics

This is a basic application that runs an LLM application using Langchain and Langserve.

## Requirements

You need:

* A Mistral account to get a Mistral API key
* Docker installed

## Run the application

To run the application, simply:

```bash
docker run -p 7860:7860 -e MISTRAL_API_KEY=REPLACE_WITH_YOUR_MISTRAL_API_KEY jedha/langchain-base
```

Then open a web browser and go to:

* http://localhost:7860/chain/playground

This endpoint serves a web page where you can test the `/chain` endpoint.
````
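Beyond the playground, LangServe also exposes a JSON API for the chain. Assuming the container from the README is running, a translation request can be sketched with `curl` against the standard `/chain/invoke` endpoint (the `input` payload matches the prompt's `language` and `text` variables):

```shell
curl -X POST http://localhost:7860/chain/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"language": "French", "text": "Hello, how are you?"}}'
```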
app.py ADDED

```python
#!/usr/bin/env python
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_mistralai import ChatMistralAI
from langserve import add_routes

# 1. Create prompt template
# Here we create a simple prompt with two inputs:
# first a "system" prompt that corresponds to the instructions for the model,
# second a "user" prompt that corresponds to what a user types when interacting with the model.
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ('system', system_template),
    ('user', '{text}')
])

# 2. Create model
# Here we choose a model from Mistral.
# Generally you should try to use chat models even if the purpose of the app is not to chat.
model = ChatMistralAI(model="mistral-large-latest")

# 3. Create parser
# This simply outputs the result of the LLM as a plain string.
parser = StrOutputParser()

# 4. Create chain
# Here we create a workflow that:
# first  -> formats the prompt,
# second -> applies the model to the formatted prompt,
# third  -> outputs the result as a string.
chain = prompt_template | model | parser

# 5. App definition
# Here we instantiate a FastAPI application
app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple API server using LangChain's Runnable interfaces",
)

# 6. Add chain route
# Finally, this LangServe wrapper creates an endpoint at /chain,
# with a playground you can try at /chain/playground once the server is up.
add_routes(
    app,
    chain,
    path="/chain",
)
```
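The `prompt_template | model | parser` line works because each piece is a LangChain Runnable: each stage implements `invoke()`, and `|` chains stages so one stage's output feeds the next. A toy stdlib-only sketch of that pipe pattern (these are not LangChain's real classes, just an illustration of the mechanism):

```python
# Toy sketch of the "prompt | model | parser" pipe pattern used in app.py.
# NOT LangChain's actual classes; LCEL Runnables follow the same idea:
# every stage has invoke(), and "|" composes stages into a pipeline.

class Runnable:
    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other):
        # "a | b" builds a pipeline that feeds a's output into b
        return Pipeline(self, other)

class Pipeline(Runnable):
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        return self.second.invoke(self.first.invoke(value))

class Prompt(Runnable):
    def __init__(self, template):
        self.template = template

    def invoke(self, inputs):
        # Fill the template variables, like ChatPromptTemplate does
        return self.template.format(**inputs)

class FakeModel(Runnable):
    def invoke(self, prompt):
        # A real ChatMistralAI call would happen here
        return {"content": f"LLM({prompt})"}

class Parser(Runnable):
    def invoke(self, message):
        # Extract the plain string, like StrOutputParser does
        return message["content"]

chain = Prompt("Translate into {language}: {text}") | FakeModel() | Parser()
result = chain.invoke({"language": "French", "text": "hello"})
```

Here `chain.invoke(...)` returns `"LLM(Translate into French: hello)"`: the dict flows through the prompt, the fake model, and the parser in order, exactly the data flow the comments in app.py describe.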
requirements.txt ADDED

```text
langchain
langchain-community
langchain-mistralai
langchain-openai
langserve[all]
langgraph
fastapi[standard]
```