repo_id | file_path | content | __index_level_0__ |
|---|---|---|---|
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/requirements.txt | promptflow[azure]
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
url:
type: string
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
ev... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/data.jsonl | {"question": "What is Prompt flow?"}
{"question": "What is ChatGPT?"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/llm_result.py | from promptflow import tool
@tool
def llm_result(question: str) -> str:
# You can use an LLM node to replace this tool.
return (
"Prompt flow is a suite of development tools designed to streamline "
"the end-to-end development cycle of LLM-based AI applications."
)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/README.md | # Conditional flow for if-else scenario
This example demonstrates a conditional flow for an if-else scenario.
By following this example, you will learn how to create a conditional flow using the `activate config`.
## Flow description
This flow checks whether an input query passes a content safety check. If it's denied, we'll re... | 0 |
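The `activate config` this README refers to gates a node on another node's output. A minimal sketch in this flow's own YAML conventions (node and file names follow this folder; treat the exact fields as illustrative of the promptflow flow schema):

```yaml
- name: llm_result
  type: python
  source:
    type: code
    path: llm_result.py
  inputs:
    question: ${inputs.question}
  activate:
    when: ${content_safety_check.output}
    is: true
```

When `content_safety_check` outputs false, this node is bypassed and downstream consumers see an empty value for its output.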
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/default_result.py | from promptflow import tool
@tool
def default_result(question: str) -> str:
return f"I'm not familiar with your query: {question}."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/generate_result.py | from promptflow import tool
@tool
def generate_result(llm_result="", default_result="") -> str:
if llm_result:
return llm_result
else:
return default_result
| 0 |
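Because exactly one of the two branches runs (the bypassed branch contributes an empty value), the merge step in `generate_result` reduces to picking the non-empty input. A standalone sketch, outside the flow runtime:

```python
def generate_result(llm_result: str = "", default_result: str = "") -> str:
    # The bypassed branch yields an empty string, so whichever branch
    # actually ran wins.
    return llm_result if llm_result else default_result

# Safe-content path: the LLM branch produced an answer.
assert generate_result(llm_result="Prompt flow is a suite of development tools.") == (
    "Prompt flow is a suite of development tools."
)
# Denied path: only the default branch ran.
assert generate_result(default_result="I'm not familiar with your query.") == (
    "I'm not familiar with your query."
)
```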
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
question:
type: string
default: What is Prompt flow?
outputs:
answer:
type: string
reference: ${generate_result.output}
nodes:
- name: content_safety_check
type: python
source:
type: code
path: conte... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/content_safety_check.py | from promptflow import tool
import random
@tool
def content_safety_check(text: str) -> bool:
# You can use a content safety node to replace this tool.
return random.choice([True, False])
| 0 |
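The stub above returns a random boolean, which makes runs non-reproducible. A deterministic stand-in keeps the flow testable; the blocklist terms here are made up for illustration, and a real flow would call a content-safety service instead:

```python
# Hypothetical blocklist, for illustration only.
BLOCKLIST = {"violence", "hate"}

def content_safety_check(text: str) -> bool:
    # Deterministic stand-in for the random stub: pass unless the text
    # contains a blocked term. A real flow would call a content-safety node.
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

assert content_safety_check("What is Prompt flow?") is True
assert content_safety_check("I hate everything") is False
```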
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/data.jsonl | {"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/README.md | # Basic flow with built-in LLM tool
A basic standard flow that calls Azure OpenAI with the built-in `llm` tool.
Tools used in this flow:
- `prompt` tool
- built-in `llm` tool
Connections used in this flow:
- `azure_open_ai` connection
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r r... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/requirements.txt | promptflow
promptflow-tools
python-dotenv | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
text:
type: string
default: Python Hello World!
outputs:
output:
type: string
reference: ${llm.output}
nodes:
- name: hello_prompt
type: prompt
inputs:
text: ${inputs.text}
source:
type: code
p... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/hello.jinja2 | system:
You are an assistant that can write code. The response should only contain code.
user:
Write a simple {{text}} program that displays the greeting message when executed. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/data.jsonl | {"url": "https://www.youtube.com/watch?v=kYqRtjDBci8", "answer": "Channel", "evidence": "Both"}
{"url": "https://arxiv.org/abs/2307.04767", "answer": "Academic", "evidence": "Both"}
{"url": "https://play.google.com/store/apps/details?id=com.twitter.android", "answer": "App", "evidence": "Both"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/convert_to_dict.py | import json
from promptflow import tool
@tool
def convert_to_dict(input_str: str):
try:
return json.loads(input_str)
except Exception as e:
print("The input is not valid, error: {}".format(e))
return {"category": "None", "evidence": "None"}
| 0 |
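A quick standalone check of the fallback behavior in `convert_to_dict` (same logic as the tool above, run outside the flow runtime):

```python
import json

def convert_to_dict(input_str: str) -> dict:
    # Parse the LLM's JSON output; fall back to a sentinel dict when
    # the model returns something that is not valid JSON.
    try:
        return json.loads(input_str)
    except Exception as e:
        print(f"The input is not valid, error: {e}")
        return {"category": "None", "evidence": "None"}

assert convert_to_dict('{"category": "App", "evidence": "Both"}')["category"] == "App"
assert convert_to_dict("not json at all")["category"] == "None"
```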
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/README.md | # Web Classification
This flow demonstrates multi-class classification with an LLM. Given a URL, it classifies the URL into one web category using just a few shots plus simple summarization and classification prompts.
## Tools used in this flow
- LLM Tool
- Python Tool
## What you will learn
In this flow, you wil... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
variant: ${summarize_text_content.variant_1}
column_mapping:
url: ${data.url}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/fetch_text_content_from_url.py | import bs4
import requests
from promptflow import tool
@tool
def fetch_text_content_from_url(url: str):
# Send a request to the URL
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.3... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/classify_with_llm.jinja2 | system:
Your task is to classify a given URL into one of the following categories:
Movie, App, Academic, Channel, Profile, PDF, or None, based on the text content information.
The classification will be based on the url, the webpage text content summary, or both.
user:
The selection range of the value of "category" must... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/summarize_text_content__variant_1.jinja2 | system:
Please summarize some keywords of this paragraph and give some details for each keyword.
Do not add any information that is not in the text.
user:
Text: {{text}}
Summary: | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/prepare_examples.py | from promptflow import tool
@tool
def prepare_examples():
return [
{
"url": "https://play.google.com/store/apps/details?id=com.spotify.music",
"text_content": "Spotify is a free music and podcast streaming app with millions of songs, albums, and "
"original podcasts. It... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/run_evaluation.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../../evaluation/eval-classification-accuracy
data: data.jsonl
run: web_classification_variant_1_20230724_173442_973403 # replace with your run name
column_mapping:
groundtruth: ${data.answer}
prediction: ${run.outputs.category} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/requirements.txt | promptflow[azure]
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
url:
type: string
default: https://play.google.com/store/apps/details?id=com.twitter.android
outputs:
category:
type: string
reference: ${convert_to_dict.... | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/summarize_text_content.jinja2 | system:
Please summarize the following text in one paragraph of at most 100 words.
Do not add any information that is not in the text.
user:
Text: {{text}}
Summary: | 0 |
promptflow_repo/promptflow/examples/flows/standard/web-classification | promptflow_repo/promptflow/examples/flows/standard/web-classification/.promptflow/flow.tools.json | {
"package": {},
"code": {
"fetch_text_content_from_url.py": {
"type": "python",
"inputs": {
"url": {
"type": [
"string"
]
}
},
"source": "fetch_text_content_from_url.py",
"function": "fetch_text_content_from_url"
},
"summariz... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/README.md | # Basic Chat
This example shows how to create a basic chat flow. It demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message.
Tools used in this flow:
- `llm` tool
## Prerequisites
Install promptflow sdk and other dependencies in this fold... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/chat.jinja2 | system:
You are a helpful assistant.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | 0 |
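The `chat_history` items consumed by this template have the shape `{"inputs": {"question": ...}, "outputs": {"answer": ...}}`. A minimal Python sketch of the message list the template effectively renders (the function name is illustrative, not part of the repo):

```python
def build_messages(chat_history: list, question: str) -> list:
    # Mirrors the Jinja template above: a system prompt, then alternating
    # user/assistant turns from history, then the new question.
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for item in chat_history:
        messages.append({"role": "user", "content": item["inputs"]["question"]})
        messages.append({"role": "assistant", "content": item["outputs"]["answer"]})
    messages.append({"role": "user", "content": question})
    return messages

history = [{"inputs": {"question": "Hi"}, "outputs": {"answer": "Hello!"}}]
messages = build_messages(history, "What is ChatGPT?")
assert len(messages) == 4
assert messages[-1] == {"role": "user", "content": "What is ChatGPT?"}
```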
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
is_chat_input: true
default: What is ChatGPT?
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
nodes:
- i... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/data.jsonl | {"chat_history":[{"inputs":{"question":"What is ChatGPT?"},"outputs":{"answer":"ChatGPT is a chatbot product developed by OpenAI. It is powered by the Generative Pre-trained Transformer (GPT) series of language models, with GPT-4 being the latest version. ChatGPT uses natural language processing to generate responses t... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/search_result_from_url.py | import random
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import bs4
import requests
from promptflow import tool
session = requests.Session()
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def get_page_s... | 0 |
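The `decode_str` helper above repairs scraped text in which UTF-8 bytes were flattened into literal `\xNN` escape sequences. A standalone round-trip, assuming input of that form:

```python
def decode_str(string: str) -> str:
    # 1) unicode-escape turns literal "\xc3\xa9" into the code points U+00C3 U+00A9;
    # 2) latin1 re-encodes those code points back to the raw bytes 0xC3 0xA9;
    # 3) utf-8 decodes the byte pair as the intended character.
    return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")

# "Caf\xc3\xa9" (escaped UTF-8 bytes) decodes back to "Café".
assert decode_str("Caf\\xc3\\xa9") == "Café"
```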
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/README.md | # Chat With Wikipedia
This flow demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message.
Tools used in this flow:
- `llm` tool
- custom `python` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/get_wiki_url.py | import re
import bs4
import requests
from promptflow import tool
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def remove_nested_parentheses(string):
pattern = r"\([^()]+\)"
while re.search(pattern, string):
string = re.sub(pattern, ""... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/augmented_chat.jinja2 | system:
You are a chatbot having a conversation with a human.
Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answe... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/requirements.txt | promptflow
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/process_search_result.py | from promptflow import tool
@tool
def process_search_result(search_result):
def format(doc: dict):
return f"Content: {doc['Content']}\nSource: {doc['Source']}"
try:
context = []
for url, content in search_result:
context.append({"Content": content, "Source": url})
... | 0 |
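The `format` helper above renders each search hit as a context block; the blocks are then typically joined for the prompt. A standalone sketch with hypothetical sample data (the URL and content are illustrative):

```python
def format_doc(doc: dict) -> str:
    # One context block per search hit, as in the tool above.
    return f"Content: {doc['Content']}\nSource: {doc['Source']}"

# search_result is a sequence of (url, content) pairs.
search_result = [
    ("https://en.wikipedia.org/wiki/Example", "Example content from the page."),
]
context = [{"Content": content, "Source": url} for url, content in search_result]
joined = "\n\n".join(format_doc(d) for d in context)
assert "Source: https://en.wikipedia.org/wiki/Example" in joined
```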
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
default: What is ChatGPT?
is_chat_input: true
outputs:
answer:
type: string
reference: ${augmented_chat.output}
is_chat_output: true
... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/extract_query_from_question.jinja2 | system:
You are an AI assistant reading the transcript of a conversation between an AI and a human. Given an input question and the conversation history, infer the user's real intent.
The conversation history is provided in case it is needed for context (e.g. "What is this?" where "this" is defined in the previous conversation).
Return t... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/data.jsonl | {"question": "Compute $\\dbinom{16}{5}$.", "answer": "4368", "raw_answer": "$\\dbinom{16}{5}=\\dfrac{16\\times 15\\times 14\\times 13\\times 12}{5\\times 4\\times 3\\times 2\\times 1}=\\boxed{4368}.$"}
{"question": "Determine the number of ways to arrange the letters of the word PROOF.", "answer": "60", "raw_answer": "... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/README.md | # Test your prompt variants for chat with math
This is a prompt tuning case with 3 prompt variants for math question answering.
By utilizing this flow, in conjunction with the `evaluation/eval-chat-math` flow, you can quickly grasp the advantages of prompt tuning and experimentation with prompt flow. Here we provide a... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please return the final numerical answer only, without any accompanying reasoning or explanation.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat_variant_2.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer and any accompanying reasoning or explanation separately, in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marble... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat_variant_1.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer and any accompanying reasoning or explanation separately, in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marble... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
is_chat_history: true
default: []
question:
type: string
is_chat_input: true
default: '1+1=?'
outputs:
answer:
type... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/extract_result.py | from promptflow import tool
import json
import re
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_too... | 0 |
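The `extract_result.py` tool is truncated here. A hypothetical sketch of how such a tool could pull the final numerical answer out of the JSON-formatted variant output (the function name, JSON key, and regex fallback are assumptions, not the repo's code):

```python
import json
import re

def extract_answer(raw: str) -> str:
    # Try strict JSON first (the variants are asked to return JSON),
    # then fall back to the last number found in the raw text.
    try:
        return str(json.loads(raw)["answer"])
    except Exception:
        numbers = re.findall(r"-?\d+(?:\.\d+)?", raw)
        return numbers[-1] if numbers else ""

assert extract_answer('{"answer": 4368, "reasoning": "..."}') == "4368"
assert extract_answer("The answer is 60.") == "60"
```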
promptflow_repo/promptflow/examples/flows/chat/chat-math-variant | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/.promptflow/flow.tools.json | {
"package": {},
"code": {
"chat.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat.jinja2"
},
"cha... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/README.md | # Chat With Image
This flow demonstrates how to create a chatbot that can take images and text as input.
Tools used in this flow:
- `OpenAI GPT-4V` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, yo... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/chat.jinja2 | # system:
You are a helpful assistant.
{% for item in chat_history %}
# user:
{{item.inputs.question}}
# assistant:
{{item.outputs.answer}}
{% endfor %}
# user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
is_chat_history: true
question:
type: list
default:
- data:image/png;url: https://images.idgesg.net/images/article/2019/11/ed... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf_tool.py | from promptflow import tool
from chat_with_pdf.main import chat_with_pdf
@tool
def chat_with_pdf_tool(question: str, pdf_url: str, history: list, ready: str):
history = convert_chat_history_to_chatml_messages(history)
stream, context = chat_with_pdf(question, pdf_url, history)
answer = ""
for str in... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf-azure.ipynb | %pip install -r requirements.txtfrom azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/README.md | # Chat with PDF
This is a simple flow that allows you to ask questions about the content of a PDF file and get answers.
You can run the flow with a URL to a PDF file and a question as arguments.
Once launched, it will download the PDF and build an index of the content.
Then when you ask a question, it will look up th... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/batch_run.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: chat_with_pdf_default_20230820_162219_559000
flow: .
data: ./data/bert-paper-qna.jsonl
#run: <Uncomment to select a run input>
column_mapping:
chat_history: ${data.chat_history}
pdf_url: ${data.pdf_url}
question: ${data.questio... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/build_index_tool.py | from promptflow import tool
from chat_with_pdf.build_index import create_faiss_index
@tool
def build_index_tool(pdf_path: str) -> str:
return create_faiss_index(pdf_path)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/__init__.py | import sys
import os
sys.path.append(
os.path.join(os.path.dirname(os.path.abspath(__file__)), "chat_with_pdf")
)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/setup_env.py | import os
from typing import Union
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
from chat_with_pdf.utils.lock import acquire_lock
BASE_DIR = os.path.dirname(os.path.abspath(__file__)) + "/chat_with_pdf/"
@tool
def setup_env(connection: Union[AzureOpenAIConn... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/eval_run.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: eval_groundedness_default_20230820_200152_009000
flow: ../../evaluation/eval-groundedness
run: chat_with_pdf_default_20230820_162219_559000
column_mapping:
question: ${run.inputs.question}
answer: ${run.outputs.answer}
context:... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/openai.yaml | # All values should be strings, e.g. use "123" instead of 123 and "True" instead of True.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<open-ai-api-key>"
organization: ""
# Note:
# The connection information will... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/requirements.txt | PyPDF2
faiss-cpu
openai
jinja2
python-dotenv
tiktoken
promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.single-node | inputs:
chat_history:
type: list
default:
- inputs:
question: what is BERT?
outputs:
answer: BERT (Bidirectional Encoder Representations from Transformers) is a
language representation model that pre-trains deep bidirectional
representations from unlabeled text by... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/rewrite_question_tool.py | from promptflow import tool
from chat_with_pdf.rewrite_question import rewrite_question
@tool
def rewrite_question_tool(question: str, history: list, env_ready_signal: str):
return rewrite_question(question, history)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/download_tool.py | from promptflow import tool
from chat_with_pdf.download import download
@tool
def download_tool(url: str, env_ready_signal: str) -> str:
return download(url)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what is BERT?
config:
type: object... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/find_context_tool.py | from promptflow import tool
from chat_with_pdf.find_context import find_context
@tool
def find_context_tool(question: str, index_path: str):
prompt, context = find_context(question, index_path)
return {"prompt": prompt, "context": [c.text for c in context]}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb | %pip install -r requirements.txtimport promptflow
pf = promptflow.PFClient()
# List all the available connections
for c in pf.connections.list():
print(c.name + " (" + c.type + ")")# create needed connection
from promptflow.entities import AzureOpenAIConnection, OpenAIConnection
try:
conn_name = "open_ai_con... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/qna_tool.py | from promptflow import tool
from chat_with_pdf.qna import qna
@tool
def qna_tool(prompt: str, history: list):
stream = qna(prompt, convert_chat_history_to_chatml_messages(history))
    answer = ""
    for chunk in stream:
        answer += chunk
return {"answer": answer}
def convert_chat_history_... | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.multi-node | inputs:
chat_history:
type: list
default: []
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what NLP tasks does it perform well?
outputs:
answer:
type: string
is_chat_output: true
reference: ${qna_to... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/download.py | import requests
import os
import re
from utils.lock import acquire_lock
from utils.logging import log
from constants import PDF_DIR
# Download a pdf file from a url and return the path to the file
def download(url: str) -> str:
path = os.path.join(PDF_DIR, normalize_filename(url) + ".pdf")
lock_path = path +... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/README.md | # Chat with PDF
This is a simple Python application that allows you to ask questions about the content of a PDF file and get answers.
It's a console application that you start with a URL to a PDF file as an argument. Once launched, it will download the PDF and build an index of the content. Then when you ask a question... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/find_context.py | import faiss
from jinja2 import Environment, FileSystemLoader
import os
from utils.index import FAISSIndex
from utils.oai import OAIEmbedding, render_with_token_limit
from utils.logging import log
def find_context(question: str, index_path: str):
index = FAISSIndex(index=faiss.IndexFlatL2(1536), embedding=OAIEmb... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/.env.example | # Azure OpenAI, uncomment below section if you want to use Azure OpenAI
# Note: EMBEDDING_MODEL_DEPLOYMENT_NAME and CHAT_MODEL_DEPLOYMENT_NAME are deployment names for Azure OpenAI
OPENAI_API_TYPE=azure
OPENAI_API_BASE=<your_AOAI_endpoint>
OPENAI_API_KEY=<your_AOAI_key>
OPENAI_API_VERSION=2023-05-15
EMBEDDING_MODEL_DEP... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/build_index.py | import PyPDF2
import faiss
import os
from pathlib import Path
from utils.oai import OAIEmbedding
from utils.index import FAISSIndex
from utils.logging import log
from utils.lock import acquire_lock
from constants import INDEX_DIR
def create_faiss_index(pdf_path: str) -> str:
chunk_size = int(os.environ.get("CHU... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna_prompt.md | You're a smart assistant that can answer questions based on the provided context and the previous conversation history between you and the human.
Use the context to answer the question at the end; note that the context is ordered by importance, e.g. context #1 is more important than #2.
Try as much as you can to answer based on the ... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/__init__.py | import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question_prompt.md | You are able to reason over the previous conversation and the most recent question to come up with a rewrite of the question that is concise yet contains enough information for someone without knowledge of the previous conversation to understand it.
A few examples:
# Example 1
## Previous conversation
user: Who is Bill C... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna.py | import os
from utils.oai import OAIChat
def qna(prompt: str, history: list):
max_completion_tokens = int(os.environ.get("MAX_COMPLETION_TOKENS"))
chat = OAIChat()
stream = chat.stream(
messages=history + [{"role": "user", "content": prompt}],
max_tokens=max_completion_tokens,
)
... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/test.ipynb | from main import chat_with_pdf, print_stream_and_return_full_answer
from dotenv import load_dotenv
load_dotenv()
bert_paper_url = "https://arxiv.org/pdf/1810.04805.pdf"
questions = [
"what is BERT?",
"what NLP tasks does it perform well?",
"is BERT suitable for NER?",
"is it better than GPT",
"whe... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/main.py | import argparse
from dotenv import load_dotenv
import os
from qna import qna
from find_context import find_context
from rewrite_question import rewrite_question
from build_index import create_faiss_index
from download import download
from utils.lock import acquire_lock
from constants import PDF_DIR, INDEX_DIR
def ch... | 0 |
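Judging from the imports, `chat_with_pdf` chains download, index build, question rewrite, context retrieval, and QnA. A hypothetical sketch of that orchestration with stub stages (the real functions take more parameters and do real work):

```python
def download(url: str) -> str:
    return f"pdf:{url}"                       # stub: returns a local PDF path

def create_faiss_index(pdf_path: str) -> str:
    return f"index:{pdf_path}"                # stub: returns an index path

def rewrite_question(question: str, history: list) -> str:
    return question                           # stub: makes the question standalone

def find_context(question: str, index_path: str) -> list:
    return [f"context for {question!r} from {index_path}"]

def qna(prompt: str, history: list) -> str:
    return f"answer({prompt})"                # stub: LLM call

def chat_with_pdf(question: str, pdf_url: str, history: list) -> str:
    pdf_path = download(pdf_url)
    index_path = create_faiss_index(pdf_path)
    standalone = rewrite_question(question, history)
    context = find_context(standalone, index_path)
    prompt = f"{context} Q: {standalone}"
    return qna(prompt, history)

result = chat_with_pdf("what is BERT?", "https://arxiv.org/pdf/1810.04805.pdf", [])
```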
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/constants.py | import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
PDF_DIR = os.path.join(BASE_DIR, ".pdfs")
INDEX_DIR = os.path.join(BASE_DIR, ".index/.pdfs/")
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question.py | from jinja2 import Environment, FileSystemLoader
import os
from utils.logging import log
from utils.oai import OAIChat, render_with_token_limit
def rewrite_question(question: str, history: list):
template = Environment(
loader=FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
).get_template... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/oai.py | from typing import List
import openai
from openai.version import VERSION as OPENAI_VERSION
import os
import tiktoken
from jinja2 import Template
from .retry import (
retry_and_handle_exceptions,
retry_and_handle_exceptions_for_generator,
)
from .logging import log
def extract_delay_from_rate_limit_error_msg(... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/__init__.py | __path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/lock.py | import contextlib
import os
import sys
if sys.platform.startswith("win"):
import msvcrt
else:
import fcntl
@contextlib.contextmanager
def acquire_lock(filename):
if not sys.platform.startswith("win"):
with open(filename, "a+") as f:
fcntl.flock(f, fcntl.LOCK_EX)
yield f
... | 0 |
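`acquire_lock` guards the shared PDF/index cache against concurrent runs: `fcntl.flock` on POSIX, `msvcrt.locking` on Windows. A standalone sketch of the POSIX branch (the Windows branch is elided here, and the explicit unlock is an addition; closing the file also releases the lock):

```python
import contextlib
import os
import sys
import tempfile

@contextlib.contextmanager
def acquire_lock(filename):
    """Exclusive advisory file lock (POSIX branch only)."""
    if sys.platform.startswith("win"):
        yield None  # placeholder: the original handles Windows via msvcrt.locking
        return
    import fcntl
    with open(filename, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            yield f
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # also released when f closes

lock_path = os.path.join(tempfile.gettempdir(), "chat_with_pdf.lock")
with acquire_lock(lock_path):
    pass  # downloads and index builds would run inside the lock
```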
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/logging.py | import os
def log(message: str):
verbose = os.environ.get("VERBOSE", "false")
if verbose.lower() == "true":
print(message, flush=True)
| 0 |
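`log` is a VERBOSE-gated print: flipping one environment variable turns diagnostic output on and off without a logging framework. A quick stdlib-only check of the gate:

```python
import io
import os
from contextlib import redirect_stdout

def log(message: str):
    # Same gate as utils/logging.py: print only when VERBOSE=true.
    if os.environ.get("VERBOSE", "false").lower() == "true":
        print(message, flush=True)

os.environ["VERBOSE"] = "false"
buf_quiet = io.StringIO()
with redirect_stdout(buf_quiet):
    log("hidden")

os.environ["VERBOSE"] = "true"
buf_loud = io.StringIO()
with redirect_stdout(buf_loud):
    log("shown")
```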
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/retry.py | from typing import Tuple, Union, Optional, Type
import functools
import time
import random
def retry_and_handle_exceptions(
exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]],
max_retries: int = 3,
initial_delay: float = 1,
exponential_base: float = 2,
jitter: bool = False,
... | 0 |
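The signature above describes retry with exponential backoff and optional jitter. A compact sketch of the same idea (parameter names copied from the signature; re-raising after the last attempt is an assumption about the original's behavior):

```python
import functools
import random
import time

def retry_and_handle_exceptions(exception_to_check, max_retries=3,
                                initial_delay=1.0, exponential_base=2.0,
                                jitter=False):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = initial_delay
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except exception_to_check:
                    if attempt == max_retries - 1:
                        raise  # out of retries: surface the last error
                    # Full backoff delay, optionally spread out by jitter.
                    time.sleep(delay * (1 + random.random()) if jitter else delay)
                    delay *= exponential_base
        return wrapper
    return decorator

calls = {"n": 0}

@retry_and_handle_exceptions(ValueError, max_retries=3, initial_delay=0.001)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient")
    return "ok"
```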
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/index.py | import os
from typing import Iterable, List, Optional
from dataclasses import dataclass
from faiss import Index
import faiss
import pickle
import numpy as np
from .oai import OAIEmbedding as Embedding
@dataclass
class SearchResultEntity:
text: str = None
vector: List[float] = None
score: float = None
... | 0 |
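`utils/index.py` wraps a FAISS index plus OpenAI embeddings behind `SearchResultEntity`. The retrieval step itself is nearest-neighbor search over vectors; a dependency-free cosine-similarity sketch (FAISS and real embeddings are replaced by toy vectors here):

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class SearchResultEntity:
    text: str = None
    vector: List[float] = None
    score: float = None

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query: List[float], entries: List[SearchResultEntity], top_k: int = 1):
    # Score every entry against the query and keep the top_k best matches.
    scored = [SearchResultEntity(e.text, e.vector, cosine(query, e.vector))
              for e in entries]
    return sorted(scored, key=lambda e: e.score, reverse=True)[:top_k]

corpus = [
    SearchResultEntity("BERT intro", [1.0, 0.0]),
    SearchResultEntity("GPT intro", [0.0, 1.0]),
    SearchResultEntity("BERT ablations", [0.9, 0.1]),
]
best = search([1.0, 0.0], corpus, top_k=1)[0]
```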
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/.promptflow/flow.tools.json | {
"package": {},
"code": {
"setup_env.py": {
"type": "python",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection",
"OpenAIConnection"
]
},
"config": {
"type": [
"object"
]
}
}... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/base_test.py | import unittest
import os
import time
import traceback
class BaseTest(unittest.TestCase):
def setUp(self):
root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../")
self.flow_path = os.path.join(root, "chat-with-pdf")
self.data_path = os.path.join(
self.flow_pat... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/chat_with_pdf_test.py | import os
import unittest
import promptflow
from base_test import BaseTest
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDF(BaseTest):
def setUp(self):
super().setUp()
self.pf = promptflow.PFClient()
def tearDown(self) -> None:
return super().tearDown()
... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/azure_chat_with_pdf_test.py | import unittest
import promptflow.azure as azure
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
from base_test import BaseTest
import os
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDFAzure(BaseTest):
def setUp(self):
super().setUp()
... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the name of the new language representation model introduced in the document?", "answer": "BERT", "context": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations fr... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna-1-line.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the name of the new language representation model introduced in the document?", "answer": "BERT", "context": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations fr... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna-3-line.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main difference between BERT and previous language representation models?", "answer": "BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context... | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/invalid-data-missing-column.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf"}
| 0 |
promptflow_repo/promptflow/examples/flows/evaluation | promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/data.jsonl | {"groundtruth": "App","prediction": "App"}
{"groundtruth": "Channel","prediction": "Channel"}
{"groundtruth": "Academic","prediction": "Academic"}
| 0 |
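The eval-classification-accuracy flow scores rows like these by exact match between `groundtruth` and `prediction`. The aggregation can be sketched in a few lines (the real flow splits this across separate grade and aggregate nodes):

```python
import json

# The three data.jsonl rows shown above.
lines = [
    '{"groundtruth": "App","prediction": "App"}',
    '{"groundtruth": "Channel","prediction": "Channel"}',
    '{"groundtruth": "Academic","prediction": "Academic"}',
]
rows = [json.loads(line) for line in lines]
grades = ["Correct" if r["groundtruth"] == r["prediction"] else "Incorrect"
          for r in rows]
accuracy = grades.count("Correct") / len(grades)
```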