0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/__init__.py | """.. title:: Graph Vector Store
Graph Vector Store
==================
Sometimes embedding models don't capture all the important relationships between
documents.
Graph Vector Stores are an extension to both vector stores and retrievers that allow
documents to be explicitly connected to each other.
Graph vector store retrievers use both vector similarity and links to find documents
related to an unstructured query.
Graphs allow linking between documents.
Each document identifies tags that link to and from it.
For example, a paragraph of text may be linked to URLs based on the anchor tags in
its content and linked from the URL(s) it is published at.
`Link extractors <langchain_community.graph_vectorstores.extractors.link_extractor.LinkExtractor>`
can be used to extract links from documents.
Example::
graph_vector_store = CassandraGraphVectorStore()
link_extractor = HtmlLinkExtractor()
links = link_extractor.extract_one(HtmlInput(document.page_content, "http://mysite"))
add_links(document, links)
graph_vector_store.add_document(document)
.. seealso::
- :class:`How to use a graph vector store as a retriever <langchain_community.graph_vectorstores.base.GraphVectorStoreRetriever>`
- :class:`How to create links between documents <langchain_community.graph_vectorstores.links.Link>`
- :class:`How to link Documents on hyperlinks in HTML <langchain_community.graph_vectorstores.extractors.html_link_extractor.HtmlLinkExtractor>`
- :class:`How to link Documents on common keywords (using KeyBERT) <langchain_community.graph_vectorstores.extractors.keybert_link_extractor.KeybertLinkExtractor>`
- :class:`How to link Documents on common named entities (using GliNER) <langchain_community.graph_vectorstores.extractors.gliner_link_extractor.GLiNERLinkExtractor>`
Get started
-----------
We load the State of the Union text and split it into chunked documents::
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
raw_documents = TextLoader("state_of_the_union.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
Links can be added to documents manually, but it's easier to use a
:class:`~langchain_community.graph_vectorstores.extractors.link_extractor.LinkExtractor`.
Several common link extractors are available and you can build your own.
For this guide, we'll use the
:class:`~langchain_community.graph_vectorstores.extractors.keybert_link_extractor.KeybertLinkExtractor`
which uses the KeyBERT model to tag documents with keywords and uses these keywords to
create links between documents::
from langchain_community.graph_vectorstores.extractors import KeybertLinkExtractor
from langchain_community.graph_vectorstores.links import add_links
extractor = KeybertLinkExtractor()
for doc in documents:
add_links(doc, extractor.extract_one(doc))
Create the graph vector store and add documents
-----------------------------------------------
We'll use an Apache Cassandra or Astra DB database as an example.
We create a
:class:`~langchain_community.graph_vectorstores.cassandra.CassandraGraphVectorStore`
from the documents and an :class:`~langchain_openai.embeddings.base.OpenAIEmbeddings`
model::
import cassio
from langchain_community.graph_vectorstores import CassandraGraphVectorStore
from langchain_openai import OpenAIEmbeddings
# Initialize cassio and the Cassandra session from the environment variables
cassio.init(auto=True)
store = CassandraGraphVectorStore.from_documents(
embedding=OpenAIEmbeddings(),
documents=documents,
)
Similarity search
-----------------
If we don't traverse the graph, a graph vector store behaves like a regular vector
store.
So all methods available in a vector store are also available in a graph vector store.
The :meth:`~langchain_community.graph_vectorstores.base.GraphVectorStore.similarity_search`
method returns documents similar to a query without considering
the links between documents::
docs = store.similarity_search(
"What did the president say about Ketanji Brown Jackson?"
)
Traversal search
----------------
The :meth:`~langchain_community.graph_vectorstores.base.GraphVectorStore.traversal_search`
method returns documents similar to a query considering the links
between documents. It first does a similarity search and then traverses the graph to
find linked documents::
docs = list(
store.traversal_search("What did the president say about Ketanji Brown Jackson?")
)
Async methods
-------------
The graph vector store has async versions of the methods prefixed with ``a``::
docs = [
doc
async for doc in store.atraversal_search(
"What did the president say about Ketanji Brown Jackson?"
)
]
Graph vector store retriever
----------------------------
The graph vector store can be converted to a retriever.
It is similar to the vector store retriever, but it also supports traversal search
types such as ``traversal`` and ``mmr_traversal``::
retriever = store.as_retriever(search_type="mmr_traversal")
docs = retriever.invoke("What did the president say about Ketanji Brown Jackson?")
""" # noqa: E501
from langchain_community.graph_vectorstores.base import (
GraphVectorStore,
GraphVectorStoreRetriever,
Node,
)
from langchain_community.graph_vectorstores.cassandra import CassandraGraphVectorStore
from langchain_community.graph_vectorstores.links import (
Link,
)
from langchain_community.graph_vectorstores.mmr_helper import MmrHelper
__all__ = [
"GraphVectorStore",
"GraphVectorStoreRetriever",
"Node",
"Link",
"CassandraGraphVectorStore",
"MmrHelper",
]
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/keybert_link_extractor.py | from typing import Any, Dict, Iterable, Optional, Set, Union
from langchain_core._api import beta
from langchain_core.documents import Document
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.links import Link
KeybertInput = Union[str, Document]
@beta()
class KeybertLinkExtractor(LinkExtractor[KeybertInput]):
def __init__(
self,
*,
kind: str = "kw",
embedding_model: str = "all-MiniLM-L6-v2",
extract_keywords_kwargs: Optional[Dict[str, Any]] = None,
):
"""Extract keywords using `KeyBERT <https://maartengr.github.io/KeyBERT/>`_.
KeyBERT is a minimal and easy-to-use keyword extraction technique that
leverages BERT embeddings to create keywords and keyphrases that are most
similar to a document.
The KeybertLinkExtractor uses KeyBERT to create links between documents that
have keywords in common.
Example::
extractor = KeybertLinkExtractor()
results = extractor.extract_one("lorem ipsum...")
.. seealso::
- :mod:`How to use a graph vector store <langchain_community.graph_vectorstores>`
- :class:`How to create links between documents <langchain_community.graph_vectorstores.links.Link>`
How to link Documents on common keywords using KeyBERT
======================================================
Preliminaries
-------------
Install the keybert package:
.. code-block:: bash
pip install -q langchain_community keybert
Usage
-----
We load the ``state_of_the_union.txt`` file, chunk it, then for each chunk we
extract keyword links and add them to the chunk.
Using extract_one()
^^^^^^^^^^^^^^^^^^^
We can use :meth:`extract_one` on a document to get the links and add the links
to the document metadata with
:meth:`~langchain_community.graph_vectorstores.links.add_links`::
from langchain_community.document_loaders import TextLoader
from langchain_community.graph_vectorstores import CassandraGraphVectorStore
from langchain_community.graph_vectorstores.extractors import KeybertLinkExtractor
from langchain_community.graph_vectorstores.links import add_links
from langchain_text_splitters import CharacterTextSplitter
loader = TextLoader("state_of_the_union.txt")
raw_documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
keyword_extractor = KeybertLinkExtractor()
for document in documents:
links = keyword_extractor.extract_one(document)
add_links(document, links)
print(documents[0].metadata)
.. code-block:: output
{'source': 'state_of_the_union.txt', 'links': [Link(kind='kw', direction='bidir', tag='ukraine'), Link(kind='kw', direction='bidir', tag='ukrainian'), Link(kind='kw', direction='bidir', tag='putin'), Link(kind='kw', direction='bidir', tag='vladimir'), Link(kind='kw', direction='bidir', tag='russia')]}
Using LinkExtractorTransformer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using the :class:`~langchain_community.graph_vectorstores.extractors.link_extractor_transformer.LinkExtractorTransformer`,
we can simplify the link extraction::
from langchain_community.document_loaders import TextLoader
from langchain_community.graph_vectorstores.extractors import (
KeybertLinkExtractor,
LinkExtractorTransformer,
)
from langchain_text_splitters import CharacterTextSplitter
loader = TextLoader("state_of_the_union.txt")
raw_documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
transformer = LinkExtractorTransformer([KeybertLinkExtractor()])
documents = transformer.transform_documents(documents)
print(documents[0].metadata)
.. code-block:: output
{'source': 'state_of_the_union.txt', 'links': [Link(kind='kw', direction='bidir', tag='ukraine'), Link(kind='kw', direction='bidir', tag='ukrainian'), Link(kind='kw', direction='bidir', tag='putin'), Link(kind='kw', direction='bidir', tag='vladimir'), Link(kind='kw', direction='bidir', tag='russia')]}
The documents with keyword links can then be added to a :class:`~langchain_community.graph_vectorstores.base.GraphVectorStore`::
from langchain_community.graph_vectorstores import CassandraGraphVectorStore
store = CassandraGraphVectorStore.from_documents(documents=documents, embedding=...)
Args:
kind: Kind of links to produce with this extractor.
embedding_model: Name of the embedding model to use with KeyBERT.
extract_keywords_kwargs: Keyword arguments to pass to KeyBERT's
``extract_keywords`` method.
""" # noqa: E501
try:
import keybert
self._kw_model = keybert.KeyBERT(model=embedding_model)
except ImportError:
raise ImportError(
"keybert is required for KeybertLinkExtractor. "
"Please install it with `pip install keybert`."
) from None
self._kind = kind
self._extract_keywords_kwargs = extract_keywords_kwargs or {}
def extract_one(self, input: KeybertInput) -> Set[Link]: # noqa: A002
keywords = self._kw_model.extract_keywords(
input if isinstance(input, str) else input.page_content,
**self._extract_keywords_kwargs,
)
return {Link.bidir(kind=self._kind, tag=kw[0]) for kw in keywords}
def extract_many(
self,
inputs: Iterable[KeybertInput],
) -> Iterable[Set[Link]]:
inputs = list(inputs)
if len(inputs) == 1:
# Even though we pass a list, KeyBERT flattens it when it contains a
# single item, so it's easier to call the single-item special case
# directly.
yield self.extract_one(inputs[0])
elif len(inputs) > 1:
strs = [i if isinstance(i, str) else i.page_content for i in inputs]
extracted = self._kw_model.extract_keywords(
strs, **self._extract_keywords_kwargs
)
for keywords in extracted:
yield {Link.bidir(kind=self._kind, tag=kw[0]) for kw in keywords}
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/link_extractor.py | from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Generic, Iterable, Set, TypeVar
from langchain_core._api import beta
from langchain_community.graph_vectorstores import Link
InputT = TypeVar("InputT")
METADATA_LINKS_KEY = "links"
@beta()
class LinkExtractor(ABC, Generic[InputT]):
"""Interface for extracting links (incoming, outgoing, bidirectional)."""
@abstractmethod
def extract_one(self, input: InputT) -> Set[Link]:
"""Add edges from each `input` to the corresponding documents.
Args:
input: The input content to extract edges from.
Returns:
Set of links extracted from the input.
"""
def extract_many(self, inputs: Iterable[InputT]) -> Iterable[Set[Link]]:
"""Add edges from each `input` to the corresponding documents.
Args:
inputs: The input content to extract edges from.
Returns:
Iterable over the set of links extracted from the input.
"""
return map(self.extract_one, inputs)
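# Illustrative sketch (not part of the library): a minimal custom extractor.
# Subclasses only need to implement `extract_one`; `extract_many` falls back
# to mapping it over the inputs. The regex, kind, and class name below are
# assumptions chosen for the example.
#
#     import re
#
#     class HashtagLinkExtractor(LinkExtractor[str]):
#         """Link documents that share a hashtag (e.g. "#python")."""
#
#         def extract_one(self, input: str) -> Set[Link]:
#             return {
#                 Link.bidir(kind="hashtag", tag=tag)
#                 for tag in re.findall(r"#(\w+)", input)
#             }
#
#     extractor = HashtagLinkExtractor()
#     links = extractor.extract_one("Loving #python and #langchain today")
#     # -> links with tags "python" and "langchain"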
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/link_extractor_transformer.py | from typing import Any, Sequence
from langchain_core._api import beta
from langchain_core.documents import Document
from langchain_core.documents.transformers import BaseDocumentTransformer
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.links import copy_with_links
@beta()
class LinkExtractorTransformer(BaseDocumentTransformer):
"""DocumentTransformer for applying one or more LinkExtractors.
Example:
.. code-block:: python
extract_links = LinkExtractorTransformer([
HtmlLinkExtractor().as_document_extractor(),
])
extract_links.transform_documents(docs)
"""
def __init__(self, link_extractors: Sequence[LinkExtractor[Document]]):
"""Create a DocumentTransformer which adds extracted links to each document."""
self.link_extractors = link_extractors
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
# Implement `transform_documents` directly, so that LinkExtractors which operate
# better in batch (`extract_many`) get a chance to do so.
# Run each extractor over all documents.
links_per_extractor = [e.extract_many(documents) for e in self.link_extractors]
# Transpose the list of lists to pair each document with the tuple of links.
links_per_document = zip(*links_per_extractor)
return [
copy_with_links(document, *links)
for document, links in zip(documents, links_per_document)
]
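# Illustrative sketch: combining several extractors in one transformer pass.
# Each extractor runs in batch via `extract_many` before the links from all
# of them are merged onto each document. The extractor choices here are
# assumptions for the example.
#
#     from langchain_community.graph_vectorstores.extractors import (
#         HtmlLinkExtractor,
#         KeybertLinkExtractor,
#     )
#
#     transformer = LinkExtractorTransformer([
#         HtmlLinkExtractor().as_document_extractor(),
#         KeybertLinkExtractor(),
#     ])
#     docs = transformer.transform_documents(docs)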
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/gliner_link_extractor.py | from typing import Any, Dict, Iterable, List, Optional, Set, Union
from langchain_core._api import beta
from langchain_core.documents import Document
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.links import Link
# TypeAlias is not available in Python 3.9, so we can't use it or the newer `type`.
GLiNERInput = Union[str, Document]
@beta()
class GLiNERLinkExtractor(LinkExtractor[GLiNERInput]):
"""Link documents with common named entities using `GLiNER`_.
`GLiNER`_ is a Named Entity Recognition (NER) model capable of identifying any
entity type using a bidirectional transformer encoder (BERT-like).
The ``GLiNERLinkExtractor`` uses GLiNER to create links between documents that
have named entities in common.
Example::
extractor = GLiNERLinkExtractor(
labels=["Person", "Award", "Date", "Competitions", "Teams"]
)
results = extractor.extract_one("some long text...")
.. _GLiNER: https://github.com/urchade/GLiNER
.. seealso::
- :mod:`How to use a graph vector store <langchain_community.graph_vectorstores>`
- :class:`How to create links between documents <langchain_community.graph_vectorstores.links.Link>`
How to link Documents on common named entities
==============================================
Preliminaries
-------------
Install the ``gliner`` package:
.. code-block:: bash
pip install -q langchain_community gliner
Usage
-----
We load the ``state_of_the_union.txt`` file, chunk it, then for each chunk we
extract named entity links and add them to the chunk.
Using extract_one()
^^^^^^^^^^^^^^^^^^^
We can use :meth:`extract_one` on a document to get the links and add the links
to the document metadata with
:meth:`~langchain_community.graph_vectorstores.links.add_links`::
from langchain_community.document_loaders import TextLoader
from langchain_community.graph_vectorstores import CassandraGraphVectorStore
from langchain_community.graph_vectorstores.extractors import GLiNERLinkExtractor
from langchain_community.graph_vectorstores.links import add_links
from langchain_text_splitters import CharacterTextSplitter
loader = TextLoader("state_of_the_union.txt")
raw_documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
ner_extractor = GLiNERLinkExtractor(["Person", "Topic"])
for document in documents:
links = ner_extractor.extract_one(document)
add_links(document, links)
print(documents[0].metadata)
.. code-block:: output
{'source': 'state_of_the_union.txt', 'links': [Link(kind='entity:Person', direction='bidir', tag='President Zelenskyy'), Link(kind='entity:Person', direction='bidir', tag='Vladimir Putin')]}
Using LinkExtractorTransformer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using the :class:`~langchain_community.graph_vectorstores.extractors.link_extractor_transformer.LinkExtractorTransformer`,
we can simplify the link extraction::
from langchain_community.document_loaders import TextLoader
from langchain_community.graph_vectorstores.extractors import (
GLiNERLinkExtractor,
LinkExtractorTransformer,
)
from langchain_text_splitters import CharacterTextSplitter
loader = TextLoader("state_of_the_union.txt")
raw_documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
ner_extractor = GLiNERLinkExtractor(["Person", "Topic"])
transformer = LinkExtractorTransformer([ner_extractor])
documents = transformer.transform_documents(documents)
print(documents[0].metadata)
.. code-block:: output
{'source': 'state_of_the_union.txt', 'links': [Link(kind='entity:Person', direction='bidir', tag='President Zelenskyy'), Link(kind='entity:Person', direction='bidir', tag='Vladimir Putin')]}
The documents with named entity links can then be added to a :class:`~langchain_community.graph_vectorstores.base.GraphVectorStore`::
from langchain_community.graph_vectorstores import CassandraGraphVectorStore
store = CassandraGraphVectorStore.from_documents(documents=documents, embedding=...)
Args:
labels: List of kinds of entities to extract.
kind: Kind of links to produce with this extractor.
model: GLiNER model to use.
extract_kwargs: Keyword arguments to pass to GLiNER.
""" # noqa: E501
def __init__(
self,
labels: List[str],
*,
kind: str = "entity",
model: str = "urchade/gliner_mediumv2.1",
extract_kwargs: Optional[Dict[str, Any]] = None,
):
try:
from gliner import GLiNER
self._model = GLiNER.from_pretrained(model)
except ImportError:
raise ImportError(
"gliner is required for GLiNERLinkExtractor. "
"Please install it with `pip install gliner`."
) from None
self._labels = labels
self._kind = kind
self._extract_kwargs = extract_kwargs or {}
def extract_one(self, input: GLiNERInput) -> Set[Link]: # noqa: A002
return next(iter(self.extract_many([input])))
def extract_many(
self,
inputs: Iterable[GLiNERInput],
) -> Iterable[Set[Link]]:
strs = [i if isinstance(i, str) else i.page_content for i in inputs]
for entities in self._model.batch_predict_entities(
strs, self._labels, **self._extract_kwargs
):
yield {
Link.bidir(kind=f"{self._kind}:{e['label']}", tag=e["text"])
for e in entities
}
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/link_extractor_adapter.py | from typing import Callable, Iterable, Set, TypeVar
from langchain_core._api import beta
from langchain_community.graph_vectorstores import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
InputT = TypeVar("InputT")
UnderlyingInputT = TypeVar("UnderlyingInputT")
@beta()
class LinkExtractorAdapter(LinkExtractor[InputT]):
def __init__(
self,
underlying: LinkExtractor[UnderlyingInputT],
transform: Callable[[InputT], UnderlyingInputT],
) -> None:
self._underlying = underlying
self._transform = transform
def extract_one(self, input: InputT) -> Set[Link]: # noqa: A002
return self._underlying.extract_one(self._transform(input))
def extract_many(self, inputs: Iterable[InputT]) -> Iterable[Set[Link]]:
underlying_inputs = map(self._transform, inputs)
return self._underlying.extract_many(underlying_inputs)
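# Illustrative sketch: adapting an extractor that expects one input type so it
# can consume another. Here a HierarchyLinkExtractor (which expects a list of
# path segments) is fed Documents whose metadata carries a "/"-separated path;
# the "path" metadata key is an assumption for this example.
#
#     from langchain_core.documents import Document
#     from langchain_community.graph_vectorstores.extractors import (
#         HierarchyLinkExtractor,
#     )
#
#     doc_extractor = LinkExtractorAdapter(
#         underlying=HierarchyLinkExtractor(),
#         transform=lambda doc: doc.metadata["path"].split("/"),
#     )
#     links = doc_extractor.extract_one(
#         Document(page_content="...", metadata={"path": "Root/H1/a"})
#     )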
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/hierarchy_link_extractor.py | from typing import Callable, List, Set
from langchain_core._api import beta
from langchain_core.documents import Document
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor_adapter import (
LinkExtractorAdapter,
)
from langchain_community.graph_vectorstores.links import Link
# TypeAlias is not available in Python 3.9, so we can't use it or the newer `type`.
HierarchyInput = List[str]
_PARENT: str = "p:"
_CHILD: str = "c:"
_SIBLING: str = "s:"
@beta()
class HierarchyLinkExtractor(LinkExtractor[HierarchyInput]):
def __init__(
self,
*,
kind: str = "hierarchy",
parent_links: bool = True,
child_links: bool = False,
sibling_links: bool = False,
):
"""Extract links from a document hierarchy.
Example:
.. code-block:: python
# Given three paths (in this case, within the "Root" document):
h1 = ["Root", "H1"]
h1a = ["Root", "H1", "a"]
h1b = ["Root", "H1", "b"]
# Parent links `h1a` and `h1b` to `h1`.
# Child links `h1` to `h1a` and `h1b`.
# Sibling links `h1a` and `h1b` together (both directions).
Example use with documents:
.. code-block:: python
transformer = LinkExtractorTransformer([
HierarchyLinkExtractor().as_document_extractor(
# Assumes the "path" to each document is in the metadata.
# Could split strings, etc.
lambda doc: doc.metadata.get("path", [])
)
])
linked = transformer.transform_documents(docs)
Args:
kind: Kind of links to produce with this extractor.
parent_links: Link from a section to its parent.
child_links: Link from a section to its children.
sibling_links: Link from a section to other sections with the same parent.
"""
self._kind = kind
self._parent_links = parent_links
self._child_links = child_links
self._sibling_links = sibling_links
def as_document_extractor(
self, hierarchy: Callable[[Document], HierarchyInput]
) -> LinkExtractor[Document]:
"""Create a LinkExtractor from `Document`.
Args:
hierarchy: Function that returns the path for the given document.
Returns:
A `LinkExtractor[Document]` suitable for application to `Documents` directly
or with `LinkExtractorTransformer`.
"""
return LinkExtractorAdapter(underlying=self, transform=hierarchy)
def extract_one(
self,
input: HierarchyInput,
) -> Set[Link]:
this_path = "/".join(input)
parent_path = None
links = set()
if self._parent_links:
# This is linked from everything with this parent path.
links.add(Link.incoming(kind=self._kind, tag=_PARENT + this_path))
if self._child_links:
# This is linked to every child with this as its "parent" path.
links.add(Link.outgoing(kind=self._kind, tag=_CHILD + this_path))
if len(input) >= 1:
parent_path = "/".join(input[0:-1])
if self._parent_links and len(input) > 1:
# This is linked to the nodes with the given parent path.
links.add(Link.outgoing(kind=self._kind, tag=_PARENT + parent_path))
if self._child_links and len(input) > 1:
# This is linked from every node with the given parent path.
links.add(Link.incoming(kind=self._kind, tag=_CHILD + parent_path))
if self._sibling_links:
# This is a sibling of everything with the same parent.
links.add(Link.bidir(kind=self._kind, tag=_SIBLING + parent_path))
return links
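# Illustrative sketch of the links produced for a small hierarchy. With
# parent, child, and sibling links all enabled, the path ["Root", "H1", "a"]
# produces:
#
#     extractor = HierarchyLinkExtractor(
#         parent_links=True, child_links=True, sibling_links=True
#     )
#     links = extractor.extract_one(["Root", "H1", "a"])
#     # incoming "p:Root/H1/a"  -- linked from its children
#     # outgoing "c:Root/H1/a"  -- links to its children
#     # outgoing "p:Root/H1"    -- links to its parent
#     # incoming "c:Root/H1"    -- linked from its parent
#     # bidir    "s:Root/H1"    -- links to siblings under the same parent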
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/__init__.py | from langchain_community.graph_vectorstores.extractors.gliner_link_extractor import (
GLiNERInput,
GLiNERLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.hierarchy_link_extractor import (
HierarchyInput,
HierarchyLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.html_link_extractor import (
HtmlInput,
HtmlLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.keybert_link_extractor import (
KeybertInput,
KeybertLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor_adapter import (
LinkExtractorAdapter,
)
from langchain_community.graph_vectorstores.extractors.link_extractor_transformer import ( # noqa: E501
LinkExtractorTransformer,
)
__all__ = [
"GLiNERInput",
"GLiNERLinkExtractor",
"HierarchyInput",
"HierarchyLinkExtractor",
"HtmlInput",
"HtmlLinkExtractor",
"KeybertInput",
"KeybertLinkExtractor",
"LinkExtractor",
"LinkExtractor",
"LinkExtractorAdapter",
"LinkExtractorAdapter",
"LinkExtractorTransformer",
]
|
0 | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores | lc_public_repos/langchain/libs/community/langchain_community/graph_vectorstores/extractors/html_link_extractor.py | from __future__ import annotations
from dataclasses import dataclass
from typing import TYPE_CHECKING, List, Optional, Set, Union
from urllib.parse import urldefrag, urljoin, urlparse
from langchain_core._api import beta
from langchain_core.documents import Document
from langchain_community.graph_vectorstores import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor_adapter import (
LinkExtractorAdapter,
)
if TYPE_CHECKING:
from bs4 import BeautifulSoup
from bs4.element import Tag
def _parse_url(link: Tag, page_url: str, drop_fragments: bool = True) -> Optional[str]:
href = link.get("href")
if href is None:
return None
url = urlparse(href)
if url.scheme not in ["http", "https", ""]:
return None
# Join the HREF with the page_url to convert relative paths to absolute.
url = str(urljoin(page_url, href))
# Fragments would be useful if we chunked a page based on section.
# Then, each chunk would have a different URL based on the fragment.
# Since we aren't doing that yet, they just "break" links. So, drop
# the fragment.
if drop_fragments:
return urldefrag(url).url
return url
def _parse_hrefs(
soup: BeautifulSoup, url: str, drop_fragments: bool = True
) -> Set[str]:
soup_links: List[Tag] = soup.find_all("a")
links: Set[str] = set()
for link in soup_links:
parse_url = _parse_url(link, page_url=url, drop_fragments=drop_fragments)
# Remove self links and entries for any 'a' tag that failed to parse
# (didn't have an href, had an unsupported scheme, etc.)
if parse_url and parse_url != url:
links.add(parse_url)
return links
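# Illustrative sketch of the helpers above (assumes beautifulsoup4 is
# installed): relative hrefs are resolved against the page URL, fragments are
# dropped by default, and self-links are excluded.
#
#     from bs4 import BeautifulSoup
#
#     html = '<a href="/docs#intro">docs</a> <a href="https://example.com/">me</a>'
#     soup = BeautifulSoup(html, "html.parser")
#     _parse_hrefs(soup, "https://example.com/")
#     # -> {"https://example.com/docs"}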
@dataclass
class HtmlInput:
content: Union[str, BeautifulSoup]
base_url: str
@beta()
class HtmlLinkExtractor(LinkExtractor[HtmlInput]):
def __init__(self, *, kind: str = "hyperlink", drop_fragments: bool = True):
"""Extract hyperlinks from HTML content.
Expects the input to be an HTML string or a `BeautifulSoup` object.
Example::
extractor = HtmlLinkExtractor()
results = extractor.extract_one(HtmlInput(html, url))
.. seealso::
- :mod:`How to use a graph vector store <langchain_community.graph_vectorstores>`
- :class:`How to create links between documents <langchain_community.graph_vectorstores.links.Link>`
How to link Documents on hyperlinks in HTML
===========================================
Preliminaries
-------------
Install the ``beautifulsoup4`` package:
.. code-block:: bash
pip install -q langchain_community beautifulsoup4
Usage
-----
For this example, we'll scrape two HTML pages, one of which contains a hyperlink
to the other, using an ``AsyncHtmlLoader``.
Then we use the ``HtmlLinkExtractor`` to create the links in the documents.
Using extract_one()
^^^^^^^^^^^^^^^^^^^
We can use :meth:`extract_one` on a document to get the links and add the links
to the document metadata with
:meth:`~langchain_community.graph_vectorstores.links.add_links`::
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.graph_vectorstores.extractors import (
HtmlInput,
HtmlLinkExtractor,
)
from langchain_community.graph_vectorstores.links import add_links
from langchain_core.documents import Document
loader = AsyncHtmlLoader(
[
"https://python.langchain.com/docs/integrations/providers/astradb/",
"https://docs.datastax.com/en/astra/home/astra.html",
]
)
documents = loader.load()
html_extractor = HtmlLinkExtractor()
for doc in documents:
links = html_extractor.extract_one(HtmlInput(doc.page_content, doc.metadata["source"]))
add_links(doc, links)
documents[0].metadata["links"][:5]
.. code-block:: output
[Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/spreedly/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/nvidia/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/ray_serve/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/bageldb/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/introduction/')]
Using as_document_extractor()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you use a document loader that returns raw HTML and sets the source
key in the document metadata, such as ``AsyncHtmlLoader``,
you can simplify things by using :meth:`as_document_extractor`, which takes
a ``Document`` directly as input::
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.graph_vectorstores.extractors import HtmlLinkExtractor
from langchain_community.graph_vectorstores.links import add_links
loader = AsyncHtmlLoader(
[
"https://python.langchain.com/docs/integrations/providers/astradb/",
"https://docs.datastax.com/en/astra/home/astra.html",
]
)
documents = loader.load()
html_extractor = HtmlLinkExtractor().as_document_extractor()
for document in documents:
links = html_extractor.extract_one(document)
add_links(document, links)
documents[0].metadata["links"][:5]
.. code-block:: output
[Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/spreedly/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/nvidia/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/ray_serve/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/bageldb/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/introduction/')]
Using LinkExtractorTransformer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using the :class:`~langchain_community.graph_vectorstores.extractors.link_extractor_transformer.LinkExtractorTransformer`,
we can simplify the link extraction::
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.graph_vectorstores.extractors import (
HtmlLinkExtractor,
LinkExtractorTransformer,
)
from langchain_community.graph_vectorstores.links import add_links
loader = AsyncHtmlLoader(
[
"https://python.langchain.com/docs/integrations/providers/astradb/",
"https://docs.datastax.com/en/astra/home/astra.html",
]
)
documents = loader.load()
transformer = LinkExtractorTransformer([HtmlLinkExtractor().as_document_extractor()])
documents = transformer.transform_documents(documents)
documents[0].metadata["links"][:5]
.. code-block:: output
[Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/spreedly/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/nvidia/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/ray_serve/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/integrations/providers/bageldb/'),
Link(kind='hyperlink', direction='out', tag='https://python.langchain.com/docs/introduction/')]
We can check that there is a link from the first document to the second::
for doc_to in documents:
for link_to in doc_to.metadata["links"]:
if link_to.direction == "in":
for doc_from in documents:
for link_from in doc_from.metadata["links"]:
if (
link_to.direction == "in"
and link_from.direction == "out"
and link_to.tag == link_from.tag
):
print(
f"Found link from {doc_from.metadata['source']} to {doc_to.metadata['source']}."
)
.. code-block:: output
Found link from https://python.langchain.com/docs/integrations/providers/astradb/ to https://docs.datastax.com/en/astra/home/astra.html.
The documents with URL links can then be added to a :class:`~langchain_community.graph_vectorstores.base.GraphVectorStore`::
from langchain_community.graph_vectorstores import CassandraGraphVectorStore
store = CassandraGraphVectorStore.from_documents(documents=documents, embedding=...)
Args:
kind: The kind of edge to extract. Defaults to ``hyperlink``.
drop_fragments: Whether fragments in URLs and links should be
dropped. Defaults to ``True``.
""" # noqa: E501
try:
import bs4 # noqa:F401
except ImportError as e:
raise ImportError(
"BeautifulSoup4 is required for HtmlLinkExtractor. "
"Please install it with `pip install beautifulsoup4`."
) from e
self._kind = kind
self.drop_fragments = drop_fragments
def as_document_extractor(
self, url_metadata_key: str = "source"
) -> LinkExtractor[Document]:
"""Return a LinkExtractor that applies to documents.
Note:
Since the HtmlLinkExtractor parses HTML, if you use it together with other
similar link extractors it may be more efficient to call them
directly on the parsed BeautifulSoup object.
Args:
url_metadata_key: The name of the field in the document metadata that holds
the URL of the document.
"""
return LinkExtractorAdapter(
underlying=self,
transform=lambda doc: HtmlInput(
doc.page_content, doc.metadata[url_metadata_key]
),
)
def extract_one(
self,
input: HtmlInput, # noqa: A002
) -> Set[Link]:
content = input.content
if isinstance(content, str):
from bs4 import BeautifulSoup
content = BeautifulSoup(content, "html.parser")
base_url = input.base_url
if self.drop_fragments:
base_url = urldefrag(base_url).url
hrefs = _parse_hrefs(content, base_url, self.drop_fragments)
links = {Link.outgoing(kind=self._kind, tag=url) for url in hrefs}
links.add(Link.incoming(kind=self._kind, tag=base_url))
return links
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/embeddings_redundant_filter.py | """Transform documents"""
from typing import Any, Callable, List, Sequence
import numpy as np
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.embeddings import Embeddings
from pydantic import BaseModel, ConfigDict, Field
from langchain_community.utils.math import cosine_similarity
class _DocumentWithState(Document):
"""Wrapper for a document that includes arbitrary state."""
state: dict = Field(default_factory=dict)
"""State associated with the document."""
@classmethod
def is_lc_serializable(cls) -> bool:
return False
def to_document(self) -> Document:
"""Convert the DocumentWithState to a Document."""
return Document(page_content=self.page_content, metadata=self.metadata)
@classmethod
def from_document(cls, doc: Document) -> "_DocumentWithState":
"""Create a DocumentWithState from a Document."""
if isinstance(doc, cls):
return doc
return cls(page_content=doc.page_content, metadata=doc.metadata)
def get_stateful_documents(
documents: Sequence[Document],
) -> Sequence[_DocumentWithState]:
"""Convert a list of documents to a list of documents with state.
Args:
documents: The documents to convert.
Returns:
A list of documents with state.
"""
return [_DocumentWithState.from_document(doc) for doc in documents]
def _filter_similar_embeddings(
embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float
) -> List[int]:
"""Filter redundant documents based on the similarity of their embeddings."""
similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1)
redundant = np.where(similarity > threshold)
redundant_stacked = np.column_stack(redundant)
redundant_sorted = np.argsort(similarity[redundant])[::-1]
included_idxs = set(range(len(embedded_documents)))
for first_idx, second_idx in redundant_stacked[redundant_sorted]:
if first_idx in included_idxs and second_idx in included_idxs:
# For each still-included pair, drop `second_idx` (the earlier document of the pair).
included_idxs.remove(second_idx)
return list(sorted(included_idxs))
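# Illustrative sketch (toy vectors chosen for the example): with cosine
# similarity and a 0.95 threshold, the two identical vectors form a redundant
# pair and the earlier one (index 0) is dropped, while the orthogonal vector
# survives.
#
#     embedded = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
#     _filter_similar_embeddings(embedded, cosine_similarity, 0.95)
#     # -> [1, 2]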
def _get_embeddings_from_stateful_docs(
embeddings: Embeddings, documents: Sequence[_DocumentWithState]
) -> List[List[float]]:
if len(documents) and "embedded_doc" in documents[0].state:
embedded_documents = [doc.state["embedded_doc"] for doc in documents]
else:
embedded_documents = embeddings.embed_documents(
[d.page_content for d in documents]
)
for doc, embedding in zip(documents, embedded_documents):
doc.state["embedded_doc"] = embedding
return embedded_documents
async def _aget_embeddings_from_stateful_docs(
embeddings: Embeddings, documents: Sequence[_DocumentWithState]
) -> List[List[float]]:
if len(documents) and "embedded_doc" in documents[0].state:
embedded_documents = [doc.state["embedded_doc"] for doc in documents]
else:
embedded_documents = await embeddings.aembed_documents(
[d.page_content for d in documents]
)
for doc, embedding in zip(documents, embedded_documents):
doc.state["embedded_doc"] = embedding
return embedded_documents
def _filter_cluster_embeddings(
embedded_documents: List[List[float]],
num_clusters: int,
num_closest: int,
random_state: int,
remove_duplicates: bool,
) -> List[int]:
"""Filter documents based on proximity of their embeddings to clusters."""
try:
from sklearn.cluster import KMeans
except ImportError:
raise ImportError(
"sklearn package not found, please install it with "
"`pip install scikit-learn`"
)
kmeans = KMeans(n_clusters=num_clusters, random_state=random_state).fit(
embedded_documents
)
closest_indices = []
# Loop through the number of clusters you have
for i in range(num_clusters):
# Get the list of distances from that particular cluster center
distances = np.linalg.norm(
embedded_documents - kmeans.cluster_centers_[i], axis=1
)
# Find the indices of the `num_closest` unique closest vectors
# (using argsort to sort distances in ascending order).
if remove_duplicates:
# Only add not duplicated vectors.
closest_indices_sorted = [
x
for x in np.argsort(distances)[:num_closest]
if x not in closest_indices
]
else:
# Skip duplicates and add the next closest vector.
closest_indices_sorted = [
x for x in np.argsort(distances) if x not in closest_indices
][:num_closest]
# Append that position closest indices list
closest_indices.extend(closest_indices_sorted)
return closest_indices
class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):
"""Filter that drops redundant documents by comparing their embeddings."""
embeddings: Embeddings
"""Embeddings to use for embedding document contents."""
similarity_fn: Callable = cosine_similarity
"""Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity."""
similarity_threshold: float = 0.95
"""Threshold for determining when two documents are similar enough
to be considered redundant."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Filter down documents."""
stateful_documents = get_stateful_documents(documents)
embedded_documents = _get_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
included_idxs = _filter_similar_embeddings(
embedded_documents, self.similarity_fn, self.similarity_threshold
)
return [stateful_documents[i] for i in sorted(included_idxs)]
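# Illustrative usage sketch (OpenAIEmbeddings is an assumption; any
# Embeddings implementation works):
#
#     from langchain_openai import OpenAIEmbeddings
#
#     redundant_filter = EmbeddingsRedundantFilter(
#         embeddings=OpenAIEmbeddings(), similarity_threshold=0.95
#     )
#     unique_docs = redundant_filter.transform_documents(docs)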
class EmbeddingsClusteringFilter(BaseDocumentTransformer, BaseModel):
"""Perform K-means clustering on document vectors.
Returns an arbitrary number of documents closest to center."""
embeddings: Embeddings
"""Embeddings to use for embedding document contents."""
num_clusters: int = 5
"""Number of clusters. Groups of documents with similar meaning."""
num_closest: int = 1
"""The number of closest vectors to return for each cluster center."""
random_state: int = 42
"""Controls the random number generator used to initialize the cluster centroids.
If you set the random_state parameter to None, the KMeans algorithm will use a
random number generator that is seeded with the current time. This means
that the results of the KMeans algorithm will be different each time you
run it."""
sorted: bool = False
"""By default results are re-ordered "grouping" them by cluster, if sorted is true
result will be ordered by the original position from the retriever"""
remove_duplicates: bool = False
""" By default duplicated results are skipped and replaced by the next closest
vector in the cluster. If remove_duplicates is true no replacement will be done:
This could dramatically reduce results when there is a lot of overlap between
clusters.
"""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Filter down documents."""
stateful_documents = get_stateful_documents(documents)
embedded_documents = _get_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
included_idxs = _filter_cluster_embeddings(
embedded_documents,
self.num_clusters,
self.num_closest,
self.random_state,
self.remove_duplicates,
)
results = sorted(included_idxs) if self.sorted else included_idxs
return [stateful_documents[i] for i in results]
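# Illustrative usage sketch (the embeddings model and parameter values are
# assumptions for the example): cluster the documents five ways and keep the
# single closest document per cluster.
#
#     from langchain_openai import OpenAIEmbeddings
#
#     clustering_filter = EmbeddingsClusteringFilter(
#         embeddings=OpenAIEmbeddings(), num_clusters=5, num_closest=1
#     )
#     representative_docs = clustering_filter.transform_documents(docs)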
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/openai_functions.py | """Document transformers that use OpenAI Functions models"""
from typing import Any, Dict, Optional, Sequence, Type, Union
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel
class OpenAIMetadataTagger(BaseDocumentTransformer, BaseModel):
"""Extract metadata tags from document contents using OpenAI functions.
Example:
.. code-block:: python
from langchain_community.chat_models import ChatOpenAI
from langchain_community.document_transformers import OpenAIMetadataTagger
from langchain_core.documents import Document
schema = {
"properties": {
"movie_title": { "type": "string" },
"critic": { "type": "string" },
"tone": {
"type": "string",
"enum": ["positive", "negative"]
},
"rating": {
"type": "integer",
"description": "The number of stars the critic rated the movie"
}
},
"required": ["movie_title", "critic", "tone"]
}
# Must be an OpenAI model that supports functions
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
tagging_chain = create_tagging_chain(schema, llm)
document_transformer = OpenAIMetadataTagger(tagging_chain=tagging_chain)
original_documents = [
Document(page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars."),
Document(page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}),
]
enhanced_documents = document_transformer.transform_documents(original_documents)
""" # noqa: E501
tagging_chain: Any
"""The chain used to extract metadata from each document."""
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Automatically extract and populate metadata
for each document according to the provided schema."""
new_documents = []
for document in documents:
extracted_metadata: Dict = self.tagging_chain.run(document.page_content) # type: ignore[assignment]
new_document = Document(
page_content=document.page_content,
metadata={**extracted_metadata, **document.metadata},
)
new_documents.append(new_document)
return new_documents
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
raise NotImplementedError
def create_metadata_tagger(
metadata_schema: Union[Dict[str, Any], Type[BaseModel]],
llm: BaseLanguageModel,
prompt: Optional[ChatPromptTemplate] = None,
*,
tagging_chain_kwargs: Optional[Dict] = None,
) -> OpenAIMetadataTagger:
"""Create a DocumentTransformer that uses an OpenAI function chain to automatically
tag documents with metadata based on their content and an input schema.
Args:
metadata_schema: Either a dictionary or pydantic.BaseModel class. If a dictionary
is passed in, it's assumed to already be a valid JsonSchema.
For best results, pydantic.BaseModels should have docstrings describing what
the schema represents and descriptions for the parameters.
llm: Language model to use, assumed to support the OpenAI function-calling API
(for example, "gpt-3.5-turbo-0613").
prompt: BasePromptTemplate to pass to the model.
Returns:
An OpenAIMetadataTagger instance that will extract metadata from each document.
Example:
.. code-block:: python
from langchain_community.chat_models import ChatOpenAI
from langchain_community.document_transformers import create_metadata_tagger
from langchain_core.documents import Document
schema = {
"properties": {
"movie_title": { "type": "string" },
"critic": { "type": "string" },
"tone": {
"type": "string",
"enum": ["positive", "negative"]
},
"rating": {
"type": "integer",
"description": "The number of stars the critic rated the movie"
}
},
"required": ["movie_title", "critic", "tone"]
}
# Must be an OpenAI model that supports functions
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
document_transformer = create_metadata_tagger(schema, llm)
original_documents = [
Document(page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars."),
Document(page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}),
]
enhanced_documents = document_transformer.transform_documents(original_documents)
""" # noqa: E501
from langchain.chains.openai_functions import create_tagging_chain
metadata_schema = (
metadata_schema
if isinstance(metadata_schema, dict)
else metadata_schema.schema()
)
_tagging_chain_kwargs = tagging_chain_kwargs or {}
tagging_chain = create_tagging_chain(
metadata_schema, llm, prompt=prompt, **_tagging_chain_kwargs
)
return OpenAIMetadataTagger(tagging_chain=tagging_chain)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/google_translate.py | from typing import Any, Optional, Sequence
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_community.utilities.vertexai import get_client_info
@deprecated(
since="0.0.32",
removal="1.0",
alternative_import="langchain_google_community.DocAIParser",
)
class GoogleTranslateTransformer(BaseDocumentTransformer):
"""Translate text documents using Google Cloud Translation."""
def __init__(
self,
project_id: str,
*,
location: str = "global",
model_id: Optional[str] = None,
glossary_id: Optional[str] = None,
api_endpoint: Optional[str] = None,
) -> None:
"""
Arguments:
project_id: Google Cloud Project ID.
location: (Optional) Translate model location.
model_id: (Optional) Translate model ID to use.
glossary_id: (Optional) Translate glossary ID to use.
api_endpoint: (Optional) Regional endpoint to use.
"""
try:
from google.api_core.client_options import ClientOptions
from google.cloud import translate # type: ignore[attr-defined]
except ImportError as exc:
raise ImportError(
"Install Google Cloud Translate to use this parser."
"(pip install google-cloud-translate)"
) from exc
self.project_id = project_id
self.location = location
self.model_id = model_id
self.glossary_id = glossary_id
self._client = translate.TranslationServiceClient(
client_info=get_client_info("translate"),
client_options=(
ClientOptions(api_endpoint=api_endpoint) if api_endpoint else None
),
)
self._parent_path = self._client.common_location_path(project_id, location)
# For some reason, there's no `model_path()` method for the client.
self._model_path = (
f"{self._parent_path}/models/{model_id}" if model_id else None
)
self._glossary_path = (
self._client.glossary_path(project_id, location, glossary_id)
if glossary_id
else None
)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Translate text documents using Google Translate.
Arguments:
source_language_code: ISO 639 language code of the input document.
target_language_code: ISO 639 language code of the output document.
For supported languages, refer to:
https://cloud.google.com/translate/docs/languages
mime_type: (Optional) Media Type of input text.
Options: `text/plain`, `text/html`
"""
try:
from google.cloud import translate # type: ignore[attr-defined]
except ImportError as exc:
raise ImportError(
"Install Google Cloud Translate to use this parser."
"(pip install google-cloud-translate)"
) from exc
response = self._client.translate_text(
request=translate.TranslateTextRequest(
contents=[doc.page_content for doc in documents],
parent=self._parent_path,
model=self._model_path,
glossary_config=translate.TranslateTextGlossaryConfig(
glossary=self._glossary_path
),
source_language_code=kwargs.get("source_language_code", None),
target_language_code=kwargs.get("target_language_code"),
mime_type=kwargs.get("mime_type", "text/plain"),
)
)
# If using a glossary, the translations will be in `glossary_translations`.
translations = response.glossary_translations or response.translations
return [
Document(
page_content=translation.translated_text,
metadata={
**doc.metadata,
"model": translation.model,
"detected_language_code": translation.detected_language_code,
},
)
for doc, translation in zip(documents, translations)
]
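# Illustrative usage sketch (the project ID and language codes are
# assumptions for the example); language codes are passed as kwargs to
# `transform_documents`:
#
#     translator = GoogleTranslateTransformer(project_id="my-gcp-project")
#     translated = translator.transform_documents(
#         documents,
#         source_language_code="en",
#         target_language_code="fr",
#     )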
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/doctran_text_qa.py | from typing import Any, Optional, Sequence
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.utils import get_from_env
class DoctranQATransformer(BaseDocumentTransformer):
"""Extract QA from text documents using doctran.
Arguments:
openai_api_key: OpenAI API key. Can also be specified via environment variable
``OPENAI_API_KEY``.
Example:
.. code-block:: python
from langchain_community.document_transformers import DoctranQATransformer
# Pass in openai_api_key or set env var OPENAI_API_KEY
qa_transformer = DoctranQATransformer()
transformed_documents = qa_transformer.transform_documents(documents)
"""
def __init__(
self,
openai_api_key: Optional[str] = None,
openai_api_model: Optional[str] = None,
) -> None:
self.openai_api_key = openai_api_key or get_from_env(
"openai_api_key", "OPENAI_API_KEY"
)
self.openai_api_model = openai_api_model or get_from_env(
"openai_api_model", "OPENAI_API_MODEL"
)
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
raise NotImplementedError
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Extracts QA from text documents using doctran."""
try:
from doctran import Doctran
doctran = Doctran(
openai_api_key=self.openai_api_key, openai_model=self.openai_api_model
)
except ImportError:
raise ImportError(
"Install doctran to use this parser. (pip install doctran)"
)
for d in documents:
doctran_doc = doctran.parse(content=d.page_content).interrogate().execute()
questions_and_answers = doctran_doc.extracted_properties.get(
"questions_and_answers"
)
d.metadata["questions_and_answers"] = questions_and_answers
return documents
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/beautiful_soup_transformer.py | from typing import Any, Iterator, List, Sequence, Tuple, Union, cast
from langchain_core.documents import BaseDocumentTransformer, Document
class BeautifulSoupTransformer(BaseDocumentTransformer):
"""Transform HTML content by extracting specific tags and removing unwanted ones.
Example:
.. code-block:: python
from langchain_community.document_transformers import BeautifulSoupTransformer
bs4_transformer = BeautifulSoupTransformer()
docs_transformed = bs4_transformer.transform_documents(docs)
""" # noqa: E501
def __init__(self) -> None:
"""
Initialize the transformer.
This checks if the BeautifulSoup4 package is installed.
If not, it raises an ImportError.
"""
try:
import bs4 # noqa:F401
except ImportError:
raise ImportError(
"BeautifulSoup4 is required for BeautifulSoupTransformer. "
"Please install it with `pip install beautifulsoup4`."
)
def transform_documents(
self,
documents: Sequence[Document],
unwanted_tags: Union[List[str], Tuple[str, ...]] = ("script", "style"),
tags_to_extract: Union[List[str], Tuple[str, ...]] = ("p", "li", "div", "a"),
remove_lines: bool = True,
*,
unwanted_classnames: Union[Tuple[str, ...], List[str]] = (),
remove_comments: bool = False,
**kwargs: Any,
) -> Sequence[Document]:
"""
Transform a list of Document objects by cleaning their HTML content.
Args:
documents: A sequence of Document objects containing HTML content.
unwanted_tags: A list of tags to be removed from the HTML.
tags_to_extract: A list of tags whose content will be extracted.
remove_lines: If set to True, unnecessary lines will be removed.
unwanted_classnames: A list of class names to be removed from the HTML
remove_comments: If set to True, comments will be removed.
Returns:
A sequence of Document objects with transformed content.
"""
for doc in documents:
cleaned_content = doc.page_content
cleaned_content = self.remove_unwanted_classnames(
cleaned_content, unwanted_classnames
)
cleaned_content = self.remove_unwanted_tags(cleaned_content, unwanted_tags)
cleaned_content = self.extract_tags(
cleaned_content, tags_to_extract, remove_comments=remove_comments
)
if remove_lines:
cleaned_content = self.remove_unnecessary_lines(cleaned_content)
doc.page_content = cleaned_content
return documents
@staticmethod
def remove_unwanted_classnames(
html_content: str, unwanted_classnames: Union[List[str], Tuple[str, ...]]
) -> str:
"""
Remove unwanted classname from a given HTML content.
Args:
html_content: The original HTML content string.
unwanted_classnames: A list of classnames to be removed from the HTML.
Returns:
A cleaned HTML string with unwanted classnames removed.
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_content, "html.parser")
for classname in unwanted_classnames:
for element in soup.find_all(class_=classname):
element.decompose()
return str(soup)
@staticmethod
def remove_unwanted_tags(
html_content: str, unwanted_tags: Union[List[str], Tuple[str, ...]]
) -> str:
"""
Remove unwanted tags from a given HTML content.
Args:
html_content: The original HTML content string.
unwanted_tags: A list of tags to be removed from the HTML.
Returns:
A cleaned HTML string with unwanted tags removed.
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_content, "html.parser")
for tag in unwanted_tags:
for element in soup.find_all(tag):
element.decompose()
return str(soup)
@staticmethod
def extract_tags(
html_content: str,
tags: Union[List[str], Tuple[str, ...]],
*,
remove_comments: bool = False,
) -> str:
"""
Extract specific tags from a given HTML content.
Args:
html_content: The original HTML content string.
tags: A list of tags to be extracted from the HTML.
remove_comments: If set to True, the comments will be removed.
Returns:
A string combining the content of the extracted tags.
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_content, "html.parser")
text_parts: List[str] = []
for element in soup.find_all():
if element.name in tags:
# Extract all navigable strings recursively from this element.
text_parts += get_navigable_strings(
element, remove_comments=remove_comments
)
# To avoid duplicate text, remove all descendants from the soup.
element.decompose()
return " ".join(text_parts)
@staticmethod
def remove_unnecessary_lines(content: str) -> str:
"""
Clean up the content by removing unnecessary lines.
Args:
content: A string, which may contain unnecessary lines or spaces.
Returns:
A cleaned string with unnecessary lines removed.
"""
lines = content.split("\n")
stripped_lines = [line.strip() for line in lines]
non_empty_lines = [line for line in stripped_lines if line]
cleaned_content = " ".join(non_empty_lines)
return cleaned_content
async def atransform_documents(
self,
documents: Sequence[Document],
**kwargs: Any,
) -> Sequence[Document]:
raise NotImplementedError
def get_navigable_strings(
element: Any, *, remove_comments: bool = False
) -> Iterator[str]:
"""Get all navigable strings from a BeautifulSoup element.
Args:
element: A BeautifulSoup element.
remove_comments: If set to True, the comments will be removed.
Returns:
A generator of strings.
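    A small illustration (the HTML snippet is made up for this example):

    .. code-block:: python

        from bs4 import BeautifulSoup

        soup = BeautifulSoup('<a href="https://example.com">docs</a>', "html.parser")
        list(get_navigable_strings(soup.a))
        # -> ['docs (https://example.com)']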
"""
from bs4 import Comment, NavigableString, Tag
for child in cast(Tag, element).children:
if isinstance(child, Comment) and remove_comments:
continue
if isinstance(child, Tag):
yield from get_navigable_strings(child, remove_comments=remove_comments)
elif isinstance(child, NavigableString):
if (element.name == "a") and (href := element.get("href")):
yield f"{child.strip()} ({href})"
else:
yield child.strip()
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/doctran_text_extract.py | from typing import Any, List, Optional, Sequence
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.utils import get_from_env
class DoctranPropertyExtractor(BaseDocumentTransformer):
"""Extract properties from text documents using doctran.
Arguments:
properties: A list of the properties to extract.
openai_api_key: OpenAI API key. Can also be specified via environment variable
``OPENAI_API_KEY``.
Example:
.. code-block:: python
from langchain_community.document_transformers import DoctranPropertyExtractor
properties = [
{
"name": "category",
"description": "What type of email this is.",
"type": "string",
"enum": ["update", "action_item", "customer_feedback", "announcement", "other"],
"required": True,
},
{
"name": "mentions",
"description": "A list of all people mentioned in this email.",
"type": "array",
"items": {
"name": "full_name",
"description": "The full name of the person mentioned.",
"type": "string",
},
"required": True,
},
{
"name": "eli5",
"description": "Explain this email to me like I'm 5 years old.",
"type": "string",
"required": True,
},
]
# Pass in openai_api_key or set env var OPENAI_API_KEY
property_extractor = DoctranPropertyExtractor(properties)
transformed_documents = await property_extractor.atransform_documents(documents)
""" # noqa: E501
def __init__(
self,
properties: List[dict],
openai_api_key: Optional[str] = None,
openai_api_model: Optional[str] = None,
) -> None:
self.properties = properties
self.openai_api_key = openai_api_key or get_from_env(
"openai_api_key", "OPENAI_API_KEY"
)
self.openai_api_model = openai_api_model or get_from_env(
"openai_api_model", "OPENAI_API_MODEL"
)
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Extracts properties from text documents using doctran."""
try:
from doctran import Doctran, ExtractProperty
doctran = Doctran(
openai_api_key=self.openai_api_key, openai_model=self.openai_api_model
)
except ImportError:
raise ImportError(
"Install doctran to use this parser. (pip install doctran)"
)
properties = [ExtractProperty(**property) for property in self.properties]
for d in documents:
doctran_doc = (
doctran.parse(content=d.page_content)
.extract(properties=properties)
.execute()
)
d.metadata["extracted_properties"] = doctran_doc.extracted_properties
return documents
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Extracts properties from text documents using doctran."""
try:
from doctran import Doctran, ExtractProperty
doctran = Doctran(
openai_api_key=self.openai_api_key, openai_model=self.openai_api_model
)
except ImportError:
raise ImportError(
"Install doctran to use this parser. (pip install doctran)"
)
properties = [ExtractProperty(**property) for property in self.properties]
for d in documents:
doctran_doc = (
doctran.parse(content=d.page_content)
.extract(properties=properties)
.execute()
)
d.metadata["extracted_properties"] = doctran_doc.extracted_properties
return documents
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/long_context_reorder.py | """Reorder documents"""
from typing import Any, List, Sequence
from langchain_core.documents import BaseDocumentTransformer, Document
from pydantic import BaseModel, ConfigDict
def _litm_reordering(documents: List[Document]) -> List[Document]:
"""Lost in the middle reorder: the less relevant documents will be at the
middle of the list and more relevant elements at beginning / end.
See: https://arxiv.org/abs/2307.03172"""
documents.reverse()
reordered_result = []
for i, value in enumerate(documents):
if i % 2 == 1:
reordered_result.append(value)
else:
reordered_result.insert(0, value)
return reordered_result
class LongContextReorder(BaseDocumentTransformer, BaseModel):
"""Reorder long context.
Lost in the middle:
Performance degrades when models must access relevant information
in the middle of long contexts.
See: https://arxiv.org/abs/2307.03172"""
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Reorders documents."""
return _litm_reordering(list(documents))
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
return _litm_reordering(list(documents))
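    # A quick sketch of the effect (the documents here are stand-ins, assumed
    # to arrive sorted by descending relevance):
    #
    #     docs = [Document(page_content=str(i)) for i in range(1, 6)]
    #     LongContextReorder().transform_documents(docs)
    #     # -> order "1", "3", "5", "4", "2": the most relevant documents
    #     #    end up at the extremes, the least relevant in the middle.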
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/html2text.py | from typing import Any, Sequence
from langchain_core.documents import BaseDocumentTransformer, Document
class Html2TextTransformer(BaseDocumentTransformer):
"""Replace occurrences of a particular search pattern with a replacement string
Arguments:
ignore_links: Whether links should be ignored; defaults to True.
ignore_images: Whether images should be ignored; defaults to True.
Example:
.. code-block:: python
from langchain_community.document_transformers import Html2TextTransformer
html2text = Html2TextTransformer()
docs_transform = html2text.transform_documents(docs)
"""
def __init__(self, ignore_links: bool = True, ignore_images: bool = True) -> None:
self.ignore_links = ignore_links
self.ignore_images = ignore_images
def transform_documents(
self,
documents: Sequence[Document],
**kwargs: Any,
) -> Sequence[Document]:
try:
import html2text
except ImportError:
raise ImportError(
"""html2text package not found, please
install it with `pip install html2text`"""
)
# Create a html2text.HTML2Text object and override some properties
h = html2text.HTML2Text()
h.ignore_links = self.ignore_links
h.ignore_images = self.ignore_images
new_documents = []
for d in documents:
new_document = Document(
page_content=h.handle(d.page_content), metadata={**d.metadata}
)
new_documents.append(new_document)
return new_documents
async def atransform_documents(
self,
documents: Sequence[Document],
**kwargs: Any,
) -> Sequence[Document]:
raise NotImplementedError
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/markdownify.py | import re
from typing import Any, List, Optional, Sequence, Union
from langchain_core.documents import BaseDocumentTransformer, Document
class MarkdownifyTransformer(BaseDocumentTransformer):
"""Converts HTML documents to Markdown format with customizable options for handling
links, images, other tags and heading styles using the markdownify library.
Arguments:
strip: A list of tags to strip. This option can't be used with the convert option.
convert: A list of tags to convert. This option can't be used with the strip option.
autolinks: A boolean indicating whether the "automatic link" style should be used when an ``a`` tag's contents match its href. Defaults to True.
heading_style: Defines how headings should be converted. Accepted values are ATX, ATX_CLOSED, SETEXT, and UNDERLINED (which is an alias for SETEXT). Defaults to ATX.
kwargs: Additional options to pass to markdownify.
Example:
.. code-block:: python
from langchain_community.document_transformers import MarkdownifyTransformer
markdownify = MarkdownifyTransformer()
docs_transform = markdownify.transform_documents(docs)
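    The conversion can be customized (a sketch; the parameter values shown
    are illustrative):

    .. code-block:: python

        markdownify = MarkdownifyTransformer(strip=["a"], heading_style="SETEXT")
        docs_transform = markdownify.transform_documents(docs)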
More configuration options can be found at the markdownify GitHub page:
https://github.com/matthewwithanm/python-markdownify
""" # noqa: E501
def __init__(
self,
strip: Optional[Union[str, List[str]]] = None,
convert: Optional[Union[str, List[str]]] = None,
autolinks: bool = True,
heading_style: str = "ATX",
**kwargs: Any,
) -> None:
self.strip = [strip] if isinstance(strip, str) else strip
self.convert = [convert] if isinstance(convert, str) else convert
self.autolinks = autolinks
self.heading_style = heading_style
self.additional_options = kwargs
def transform_documents(
self,
documents: Sequence[Document],
**kwargs: Any,
) -> Sequence[Document]:
try:
from markdownify import markdownify
except ImportError:
raise ImportError(
"""markdownify package not found, please
install it with `pip install markdownify`"""
)
converted_documents = []
for doc in documents:
markdown_content = (
markdownify(
html=doc.page_content,
strip=self.strip,
convert=self.convert,
autolinks=self.autolinks,
heading_style=self.heading_style,
**self.additional_options,
)
.replace("\xa0", " ")
.strip()
)
cleaned_markdown = re.sub(r"\n\s*\n", "\n\n", markdown_content)
converted_documents.append(
Document(cleaned_markdown, metadata=doc.metadata)
)
return converted_documents
async def atransform_documents(
self,
documents: Sequence[Document],
**kwargs: Any,
) -> Sequence[Document]:
raise NotImplementedError
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/__init__.py | """**Document Transformers** are classes to transform Documents.
**Document Transformers** are usually used to transform many Documents in a single run.
**Class hierarchy:**
.. code-block::
BaseDocumentTransformer --> <name> # Examples: DoctranQATransformer, DoctranTextTranslator
**Main helpers:**
.. code-block::
Document
""" # noqa: E501
import importlib
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from langchain_community.document_transformers.beautiful_soup_transformer import (
BeautifulSoupTransformer,
)
from langchain_community.document_transformers.doctran_text_extract import (
DoctranPropertyExtractor,
)
from langchain_community.document_transformers.doctran_text_qa import (
DoctranQATransformer,
)
from langchain_community.document_transformers.doctran_text_translate import (
DoctranTextTranslator,
)
from langchain_community.document_transformers.embeddings_redundant_filter import (
EmbeddingsClusteringFilter,
EmbeddingsRedundantFilter,
get_stateful_documents,
)
from langchain_community.document_transformers.google_translate import (
GoogleTranslateTransformer,
)
from langchain_community.document_transformers.html2text import (
Html2TextTransformer,
)
from langchain_community.document_transformers.long_context_reorder import (
LongContextReorder,
)
from langchain_community.document_transformers.markdownify import (
MarkdownifyTransformer,
)
from langchain_community.document_transformers.nuclia_text_transform import (
NucliaTextTransformer,
)
from langchain_community.document_transformers.openai_functions import (
OpenAIMetadataTagger,
)
__all__ = [
"BeautifulSoupTransformer",
"DoctranPropertyExtractor",
"DoctranQATransformer",
"DoctranTextTranslator",
"EmbeddingsClusteringFilter",
"EmbeddingsRedundantFilter",
"GoogleTranslateTransformer",
"Html2TextTransformer",
"LongContextReorder",
"MarkdownifyTransformer",
"NucliaTextTransformer",
"OpenAIMetadataTagger",
"get_stateful_documents",
]
_module_lookup = {
"BeautifulSoupTransformer": "langchain_community.document_transformers.beautiful_soup_transformer", # noqa: E501
"DoctranPropertyExtractor": "langchain_community.document_transformers.doctran_text_extract", # noqa: E501
"DoctranQATransformer": "langchain_community.document_transformers.doctran_text_qa",
"DoctranTextTranslator": "langchain_community.document_transformers.doctran_text_translate", # noqa: E501
"EmbeddingsClusteringFilter": "langchain_community.document_transformers.embeddings_redundant_filter", # noqa: E501
"EmbeddingsRedundantFilter": "langchain_community.document_transformers.embeddings_redundant_filter", # noqa: E501
"GoogleTranslateTransformer": "langchain_community.document_transformers.google_translate", # noqa: E501
"Html2TextTransformer": "langchain_community.document_transformers.html2text",
"LongContextReorder": "langchain_community.document_transformers.long_context_reorder", # noqa: E501
"MarkdownifyTransformer": "langchain_community.document_transformers.markdownify",
"NucliaTextTransformer": "langchain_community.document_transformers.nuclia_text_transform", # noqa: E501
"OpenAIMetadataTagger": "langchain_community.document_transformers.openai_functions", # noqa: E501
"get_stateful_documents": "langchain_community.document_transformers.embeddings_redundant_filter", # noqa: E501
}
def __getattr__(name: str) -> Any:
if name in _module_lookup:
module = importlib.import_module(_module_lookup[name])
return getattr(module, name)
raise AttributeError(f"module {__name__} has no attribute {name}")
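# For example, `from langchain_community.document_transformers import
# Html2TextTransformer` resolves through this hook, importing the html2text
# transformer module only on first access.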
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/nuclia_text_transform.py | import asyncio
import json
import uuid
from typing import Any, Sequence
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_community.tools.nuclia.tool import NucliaUnderstandingAPI
class NucliaTextTransformer(BaseDocumentTransformer):
"""Nuclia Text Transformer.
The Nuclia Understanding API splits into paragraphs and sentences,
identifies entities, provides a summary of the text and generates
embeddings for all sentences.
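    Example:
        A minimal usage sketch (assumes the NucliaUnderstandingAPI tool is
        configured with valid Nuclia credentials; ``enable_ml=True`` is an
        illustrative choice).

        .. code-block:: python

            from langchain_community.tools.nuclia.tool import NucliaUnderstandingAPI

            nua = NucliaUnderstandingAPI(enable_ml=True)
            transformer = NucliaTextTransformer(nua)
            transformed = await transformer.atransform_documents(documents)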
"""
def __init__(self, nua: NucliaUnderstandingAPI):
self.nua = nua
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
raise NotImplementedError
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
tasks = [
self.nua.arun(
{
"action": "push",
"id": str(uuid.uuid4()),
"text": doc.page_content,
"path": None,
}
)
for doc in documents
]
results = await asyncio.gather(*tasks)
for doc, result in zip(documents, results):
obj = json.loads(result)
metadata = {
"file": obj["file_extracted_data"][0],
"metadata": obj["field_metadata"][0],
}
doc.metadata["nuclia"] = metadata
return documents
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/doctran_text_translate.py | from __future__ import annotations
import asyncio
from typing import Any, Optional, Sequence
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.runnables.config import run_in_executor
from langchain_core.utils import get_from_env
class DoctranTextTranslator(BaseDocumentTransformer):
"""Translate text documents using doctran.
Arguments:
openai_api_key: OpenAI API key. Can also be specified via environment variable
``OPENAI_API_KEY``.
language: The language to translate *to*.
Example:
.. code-block:: python
from langchain_community.document_transformers import DoctranTextTranslator
# Pass in openai_api_key or set env var OPENAI_API_KEY
qa_translator = DoctranTextTranslator(language="spanish")
translated_document = await qa_translator.atransform_documents(documents)
"""
def __init__(
self,
openai_api_key: Optional[str] = None,
language: str = "english",
openai_api_model: Optional[str] = None,
) -> None:
self.openai_api_key = openai_api_key or get_from_env(
"openai_api_key", "OPENAI_API_KEY"
)
self.openai_api_model = openai_api_model or get_from_env(
"openai_api_model", "OPENAI_API_MODEL"
)
self.language = language
async def _aparse_document(
self, doctran: Any, index: int, doc: Document
) -> tuple[int, Any]:
parsed_doc = await run_in_executor(
None, doctran.parse, content=doc.page_content, metadata=doc.metadata
)
return index, parsed_doc
async def _atranslate_document(
self, index: int, doc: Any, language: str
) -> tuple[int, Any]:
translated_doc = await run_in_executor(
None, lambda: doc.translate(language=language).execute()
)
return index, translated_doc
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Translates text documents using doctran."""
try:
from doctran import Doctran
doctran = Doctran(
openai_api_key=self.openai_api_key, openai_model=self.openai_api_model
)
except ImportError:
raise ImportError(
"Install doctran to use this parser. (pip install doctran)"
)
parse_tasks = [
self._aparse_document(doctran, i, doc) for i, doc in enumerate(documents)
]
parsed_results = await asyncio.gather(*parse_tasks)
parsed_results.sort(key=lambda x: x[0])
doctran_docs = [doc for _, doc in parsed_results]
translate_tasks = [
self._atranslate_document(i, doc, self.language)
for i, doc in enumerate(doctran_docs)
]
translated_results = await asyncio.gather(*translate_tasks)
translated_results.sort(key=lambda x: x[0])
translated_docs = [doc for _, doc in translated_results]
return [
Document(page_content=doc.transformed_content, metadata=doc.metadata)
for doc in translated_docs
]
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Translates text documents using doctran."""
try:
from doctran import Doctran
doctran = Doctran(
openai_api_key=self.openai_api_key, openai_model=self.openai_api_model
)
except ImportError:
raise ImportError(
"Install doctran to use this parser. (pip install doctran)"
)
doctran_docs = [
doctran.parse(content=doc.page_content, metadata=doc.metadata)
for doc in documents
]
for i, doc in enumerate(doctran_docs):
doctran_docs[i] = doc.translate(language=self.language).execute()
return [
Document(page_content=doc.transformed_content, metadata=doc.metadata)
for doc in doctran_docs
]
|
0 | lc_public_repos/langchain/libs/community/langchain_community/document_transformers | lc_public_repos/langchain/libs/community/langchain_community/document_transformers/xsl/html_chunks_with_headers.xslt | <?xml version="1.0" encoding="UTF-8" ?>
<!-- HTML PRE-CHUNK:
This performs a best-effort preliminary "chunking" of text in an HTML file,
matching each chunk with a "headers" metadata value based on header tags in proximity.
It recursively visits every element (the default-mode template).
For every element with a tag name of interest (only):
    1. serializes a div (with metadata marking the element's xpath).
    2. calculates all text content for the given element, including descendant elements which are *not* themselves tags of interest.
    3. if any such text content was found, serializes a headers span (span.headers) followed by the text chunk (p.chunk).
To calculate the "headers" of an element:
    1. recursively gets the *nearest* prior siblings for headings of *each* level.
    2. recursively repeats step 1 for each ancestor (regardless of tag).
N.B. this recursion is only performed for elements which are
both (1) tags of interest and (2) have their own text content.
-->
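<!-- A usage sketch (hypothetical; assumes lxml is installed and this
     stylesheet file sits next to the script):

         from lxml import etree

         xslt = etree.XSLT(etree.parse("html_chunks_with_headers.xslt"))
         chunked = xslt(etree.parse("page.html", etree.HTMLParser()))
         print(str(chunked))
-->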
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns="http://www.w3.org/1999/xhtml">
<xsl:param name="tags">div|p|blockquote|ol|ul</xsl:param>
<xsl:template match="/">
<html>
<head>
<style>
div {
border: solid;
margin-top: .5em;
padding-left: .5em;
}
h1, h2, h3, h4, h5, h6 {
margin: 0;
}
.xpath {
color: blue;
}
.chunk {
margin: .5em 1em;
}
</style>
</head>
<body>
<!-- create "filtered tree" with only tags of interest -->
<xsl:apply-templates select="*" />
</body>
</html>
</xsl:template>
<xsl:template match="*">
<xsl:choose>
<!-- tags of interest get serialized into the filtered tree (and recurse down child elements) -->
<xsl:when test="contains(
concat('|', $tags, '|'),
concat('|', local-name(), '|'))">
<xsl:variable name="xpath">
<xsl:apply-templates mode="xpath" select="." />
</xsl:variable>
<xsl:variable name="txt">
<!-- recurse down child text-nodes and elements -->
<xsl:apply-templates mode="text" />
</xsl:variable>
<xsl:variable name="txt-norm" select="normalize-space($txt)" />
<div title="{$xpath}">
<small class="xpath">
<xsl:value-of select="$xpath" />
</small>
<xsl:if test="$txt-norm">
<xsl:variable name="headers">
<xsl:apply-templates mode="headingsWithAncestors" select="." />
</xsl:variable>
<xsl:if test="normalize-space($headers)">
<span class="headers">
<xsl:copy-of select="$headers" />
</span>
</xsl:if>
<p class="chunk">
<xsl:value-of select="$txt-norm" />
</p>
</xsl:if>
<xsl:apply-templates select="*" />
</div>
</xsl:when>
<!-- all other tags get "skipped" and recurse down child elements -->
<xsl:otherwise>
<xsl:apply-templates select="*" />
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- text mode:
prints text nodes;
for elements, recurses down child nodes (text and elements) *except* certain exceptions:
tags of interest (handled in their own list-mode match),
non-content text (e.g. script|style)
-->
<!-- ignore non-content text -->
<xsl:template mode="text" match="
script|style" />
<!-- for all other elements *except tags of interest*, recurse on child-nodes (text and elements) -->
<xsl:template mode="text" match="*">
<xsl:choose>
<!-- ignore tags of interest -->
<xsl:when test="contains(
concat('|', $tags, '|'),
concat('|', local-name(), '|'))" />
<xsl:otherwise>
<xsl:apply-templates mode="text" />
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- xpath mode:
return an xpath which matches this element uniquely
-->
<xsl:template mode="xpath" match="*">
<!-- recurse up parents -->
<xsl:apply-templates mode="xpath" select="parent::*" />
<xsl:value-of select="name()" />
<xsl:text>[</xsl:text>
<xsl:value-of select="1+count(preceding-sibling::*)" />
<xsl:text>]/</xsl:text>
</xsl:template>
<!-- headingsWithAncestors mode:
recurses up parents (ALL ancestors)
-->
<xsl:template mode="headingsWithAncestors" match="*">
<!-- recurse -->
<xsl:apply-templates mode="headingsWithAncestors" select="parent::*" />
<xsl:apply-templates mode="headingsWithPriorSiblings" select=".">
<xsl:with-param name="maxHead" select="6" />
</xsl:apply-templates>
</xsl:template>
<!-- headingsWithPriorSiblings mode:
recurses up preceding-siblings
-->
<xsl:template mode="headingsWithPriorSiblings" match="*">
<xsl:param name="maxHead" />
<xsl:variable name="headLevel" select="number(substring(local-name(), 2))" />
<xsl:choose>
<xsl:when test="'h' = substring(local-name(), 1, 1) and $maxHead >= $headLevel">
<!-- recurse up to prior sibling; max level one less than current -->
<xsl:apply-templates mode="headingsWithPriorSiblings" select="preceding-sibling::*[1]">
<xsl:with-param name="maxHead" select="$headLevel - 1" />
</xsl:apply-templates>
<xsl:apply-templates mode="heading" select="." />
</xsl:when>
<!-- special case for 'header' tag, serialize child-headers -->
<xsl:when test="self::header">
<xsl:apply-templates mode="heading" select="h1|h2|h3|h4|h5|h6" />
<!--
we choose not to recurse further up prior-siblings in this case,
but n.b. the 'headingsWithAncestors' template above will still continue recursion.
-->
</xsl:when>
<xsl:otherwise>
<!-- recurse up to prior sibling; no other work on this element -->
<xsl:apply-templates mode="headingsWithPriorSiblings" select="preceding-sibling::*[1]">
<xsl:with-param name="maxHead" select="$maxHead" />
</xsl:apply-templates>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template mode="heading" match="h1|h2|h3|h4|h5|h6">
<xsl:copy>
<xsl:value-of select="normalize-space(.)" />
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/openai_functions.py | # these stubs are just for backwards compatibility
from langchain_core.utils.function_calling import (
FunctionDescription,
ToolDescription,
convert_pydantic_to_openai_function,
convert_pydantic_to_openai_tool,
)
__all__ = [
"FunctionDescription",
"ToolDescription",
"convert_pydantic_to_openai_function",
"convert_pydantic_to_openai_tool",
]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/math.py | """Math utils."""
import logging
from typing import List, Optional, Tuple, Union
import numpy as np
logger = logging.getLogger(__name__)
Matrix = Union[List[List[float]], List[np.ndarray], np.ndarray]
def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray:
"""Row-wise cosine similarity between two equal-width matrices."""
if len(X) == 0 or len(Y) == 0:
return np.array([])
X = np.array(X)
Y = np.array(Y)
if X.shape[1] != Y.shape[1]:
raise ValueError(
f"Number of columns in X and Y must be the same. X has shape {X.shape} "
f"and Y has shape {Y.shape}."
)
try:
import simsimd as simd
X = np.array(X, dtype=np.float32)
Y = np.array(Y, dtype=np.float32)
Z = 1 - np.array(simd.cdist(X, Y, metric="cosine"))
return Z
except ImportError:
logger.debug(
"Unable to import simsimd, defaulting to NumPy implementation. If you want "
"to use simsimd please install with `pip install simsimd`."
)
X_norm = np.linalg.norm(X, axis=1)
Y_norm = np.linalg.norm(Y, axis=1)
# Ignore divide by zero errors run time warnings as those are handled below.
with np.errstate(divide="ignore", invalid="ignore"):
similarity = np.dot(X, Y.T) / np.outer(X_norm, Y_norm)
similarity[np.isnan(similarity) | np.isinf(similarity)] = 0.0
return similarity
def cosine_similarity_top_k(
X: Matrix,
Y: Matrix,
top_k: Optional[int] = 5,
score_threshold: Optional[float] = None,
) -> Tuple[List[Tuple[int, int]], List[float]]:
"""Row-wise cosine similarity with optional top-k and score threshold filtering.
Args:
X: Matrix.
Y: Matrix, same width as X.
top_k: Max number of results to return.
score_threshold: Minimum cosine similarity of results.
Returns:
Tuple of two lists. First contains two-tuples of indices (X_idx, Y_idx),
second contains corresponding cosine similarities.
"""
if len(X) == 0 or len(Y) == 0:
return [], []
score_array = cosine_similarity(X, Y)
score_threshold = score_threshold if score_threshold is not None else -1.0
score_array[score_array < score_threshold] = 0
top_k = min(top_k or len(score_array), np.count_nonzero(score_array))
top_k_idxs = np.argpartition(score_array, -top_k, axis=None)[-top_k:]
top_k_idxs = top_k_idxs[np.argsort(score_array.ravel()[top_k_idxs])][::-1]
ret_idxs = np.unravel_index(top_k_idxs, score_array.shape)
scores = score_array.ravel()[top_k_idxs].tolist()
return list(zip(*ret_idxs)), scores # type: ignore
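# A toy illustration of `cosine_similarity_top_k` (the values shown are what
# the math works out to, not captured program output):
#
#     X = [[1.0, 0.0], [0.0, 1.0]]
#     Y = [[1.0, 0.0], [1.0, 1.0]]
#     idxs, scores = cosine_similarity_top_k(X, Y, top_k=2)
#     # idxs   -> [(0, 0), (0, 1)] (the second entry may tie with (1, 1))
#     # scores -> [1.0, ~0.707]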
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/openai.py | from __future__ import annotations
from importlib.metadata import version
from packaging.version import parse
def is_openai_v1() -> bool:
"""Return whether OpenAI API is v1 or more."""
_version = parse(version("openai"))
return _version.major >= 1
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/ernie_functions.py | from typing import Literal, Optional, Type, TypedDict
from langchain_core.utils.json_schema import dereference_refs
from pydantic import BaseModel
class FunctionDescription(TypedDict):
"""Representation of a callable function to the Ernie API."""
name: str
"""The name of the function."""
description: str
"""A description of the function."""
parameters: dict
"""The parameters of the function."""
class ToolDescription(TypedDict):
"""Representation of a callable function to the Ernie API."""
type: Literal["function"]
function: FunctionDescription
def convert_pydantic_to_ernie_function(
model: Type[BaseModel],
*,
name: Optional[str] = None,
description: Optional[str] = None,
) -> FunctionDescription:
"""Convert a Pydantic model to a function description for the Ernie API."""
schema = dereference_refs(model.schema())
schema.pop("definitions", None)
return {
"name": name or schema["title"],
"description": description or schema["description"],
"parameters": schema,
}
def convert_pydantic_to_ernie_tool(
model: Type[BaseModel],
*,
name: Optional[str] = None,
description: Optional[str] = None,
) -> ToolDescription:
"""Convert a Pydantic model to a function description for the Ernie API."""
function = convert_pydantic_to_ernie_function(
model, name=name, description=description
)
return {"type": "function", "function": function}
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/user_agent.py | import logging
import os
log = logging.getLogger(__name__)
def get_user_agent() -> str:
"""Get user agent from environment variable."""
env_user_agent = os.environ.get("USER_AGENT")
if not env_user_agent:
log.warning(
"USER_AGENT environment variable not set, "
"consider setting it to identify your requests."
)
return "DefaultLangchainUserAgent"
return env_user_agent
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/__init__.py | """
**Utility functions** for LangChain.
"""
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/utils/google.py | """Utilities to use Google provided components."""
from importlib import metadata
from typing import Any, Optional
def get_client_info(module: Optional[str] = None) -> Any:
r"""Return a custom user agent header.
Args:
module (Optional[str]):
Optional. The module for a custom user agent header.
Returns:
google.api_core.gapic_v1.client_info.ClientInfo
"""
from google.api_core.gapic_v1.client_info import ClientInfo
langchain_version = metadata.version("langchain")
client_library_version = (
f"{langchain_version}-{module}" if module else langchain_version
)
return ClientInfo(
client_library_version=client_library_version,
user_agent=f"langchain/{client_library_version}",
)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/google_speech_to_text.py | from __future__ import annotations
from typing import TYPE_CHECKING, List, Optional
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.vertexai import get_client_info
if TYPE_CHECKING:
from google.cloud.speech_v2 import RecognitionConfig
from google.protobuf.field_mask_pb2 import FieldMask
@deprecated(
since="0.0.32",
removal="1.0",
alternative_import="langchain_google_community.SpeechToTextLoader",
)
class GoogleSpeechToTextLoader(BaseLoader):
"""
Loader for Google Cloud Speech-to-Text audio transcripts.
It uses the Google Cloud Speech-to-Text API to transcribe audio files
and loads the transcribed text into one or more Documents,
depending on the specified format.
To use, you should have the ``google-cloud-speech`` python package installed.
Audio files can be specified via a Google Cloud Storage uri or a local file path.
For a detailed explanation of Google Cloud Speech-to-Text, refer to the product
documentation.
https://cloud.google.com/speech-to-text
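    Example:
        A minimal sketch (the project id and bucket path are placeholders).

        .. code-block:: python

            from langchain_community.document_loaders import GoogleSpeechToTextLoader

            loader = GoogleSpeechToTextLoader(
                project_id="my-project",
                file_path="gs://my-bucket/audio.wav",
            )
            docs = loader.load()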
"""
def __init__(
self,
project_id: str,
file_path: str,
location: str = "us-central1",
recognizer_id: str = "_",
config: Optional[RecognitionConfig] = None,
config_mask: Optional[FieldMask] = None,
):
"""
Initializes the GoogleSpeechToTextLoader.
Args:
project_id: Google Cloud Project ID.
file_path: A Google Cloud Storage URI or a local file path.
location: Speech-to-Text recognizer location.
recognizer_id: Speech-to-Text recognizer id.
config: Recognition options and features.
For more information:
https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v2.types.RecognitionConfig
config_mask: The list of fields in config that override the values in the
``default_recognition_config`` of the recognizer during this
recognition request.
For more information:
https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v2.types.RecognizeRequest
"""
try:
from google.api_core.client_options import ClientOptions
from google.cloud.speech_v2 import (
AutoDetectDecodingConfig,
RecognitionConfig,
RecognitionFeatures,
SpeechClient,
)
except ImportError as exc:
raise ImportError(
"Could not import google-cloud-speech python package. "
"Please install it with `pip install google-cloud-speech`."
) from exc
self.project_id = project_id
self.file_path = file_path
self.location = location
self.recognizer_id = recognizer_id
# Config must be set in speech recognition request.
self.config = config or RecognitionConfig(
auto_decoding_config=AutoDetectDecodingConfig(),
language_codes=["en-US"],
model="chirp",
features=RecognitionFeatures(
# Automatic punctuation could be useful for language applications
enable_automatic_punctuation=True,
),
)
self.config_mask = config_mask
self._client = SpeechClient(
client_info=get_client_info(module="speech-to-text"),
client_options=(
ClientOptions(api_endpoint=f"{location}-speech.googleapis.com")
if location != "global"
else None
),
)
self._recognizer_path = self._client.recognizer_path(
project_id, location, recognizer_id
)
def load(self) -> List[Document]:
"""Transcribes the audio file and loads the transcript into documents.
It uses the Google Cloud Speech-to-Text API to transcribe the audio file
and blocks until the transcription is finished.
"""
try:
from google.cloud.speech_v2 import RecognizeRequest
except ImportError as exc:
raise ImportError(
"Could not import google-cloud-speech python package. "
"Please install it with `pip install google-cloud-speech`."
) from exc
request = RecognizeRequest(
recognizer=self._recognizer_path,
config=self.config,
config_mask=self.config_mask,
)
if "gs://" in self.file_path:
request.uri = self.file_path
else:
with open(self.file_path, "rb") as f:
request.content = f.read()
response = self._client.recognize(request=request)
return [
Document(
page_content=result.alternatives[0].transcript,
metadata={
"language_code": result.language_code,
"result_end_offset": result.result_end_offset,
},
)
for result in response.results
]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/image_captions.py | from io import BytesIO
from pathlib import Path
from typing import Any, List, Tuple, Union
import requests
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class ImageCaptionLoader(BaseLoader):
"""Load image captions.
By default, the loader utilizes the pre-trained
Salesforce BLIP image captioning model.
https://huggingface.co/Salesforce/blip-image-captioning-base
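    Example:
        A short sketch (the image path is a placeholder).

        .. code-block:: python

            loader = ImageCaptionLoader(images="path/to/photo.jpg")
            docs = loader.load()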
"""
def __init__(
self,
images: Union[str, Path, bytes, List[Union[str, bytes, Path]]],
blip_processor: str = "Salesforce/blip-image-captioning-base",
blip_model: str = "Salesforce/blip-image-captioning-base",
):
"""Initialize with a list of image data (bytes) or file paths
Args:
images: Either a single image or a list of images. Accepts
image data (bytes) or file paths to images.
blip_processor: The name of the pre-trained BLIP processor.
blip_model: The name of the pre-trained BLIP model.
"""
if isinstance(images, (str, Path, bytes)):
self.images = [images]
else:
self.images = images
self.blip_processor = blip_processor
self.blip_model = blip_model
def load(self) -> List[Document]:
"""Load from a list of image data or file paths"""
try:
from transformers import BlipForConditionalGeneration, BlipProcessor
except ImportError:
raise ImportError(
"`transformers` package not found, please install with "
"`pip install transformers`."
)
processor = BlipProcessor.from_pretrained(self.blip_processor)
model = BlipForConditionalGeneration.from_pretrained(self.blip_model)
results = []
for image in self.images:
caption, metadata = self._get_captions_and_metadata(
model=model, processor=processor, image=image
)
doc = Document(page_content=caption, metadata=metadata)
results.append(doc)
return results
def _get_captions_and_metadata(
self, model: Any, processor: Any, image: Union[str, Path, bytes]
) -> Tuple[str, dict]:
"""Helper function for getting the captions and metadata of an image."""
try:
from PIL import Image
except ImportError:
raise ImportError(
"`PIL` package not found, please install with `pip install pillow`"
)
image_source = image # Save the original source for later reference
try:
if isinstance(image, bytes):
image = Image.open(BytesIO(image)).convert("RGB") # type: ignore[assignment]
elif isinstance(image, str) and (
image.startswith("http://") or image.startswith("https://")
):
image = Image.open(requests.get(image, stream=True).raw).convert("RGB") # type: ignore[assignment, arg-type]
else:
image = Image.open(image).convert("RGB") # type: ignore[assignment]
except Exception:
if isinstance(image_source, bytes):
msg = "Could not get image data from bytes"
else:
msg = f"Could not get image data for {image_source}"
raise ValueError(msg)
inputs = processor(image, "an image of", return_tensors="pt")
output = model.generate(**inputs)
caption: str = processor.decode(output[0])
if isinstance(image_source, bytes):
metadata: dict = {"image_source": "Image bytes provided"}
else:
metadata = {"image_path": str(image_source)}
return caption, metadata
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/readthedocs.py | from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING, Any, Iterator, List, Optional, Sequence, Tuple, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
from bs4 import NavigableString
from bs4.element import Comment, Tag
class ReadTheDocsLoader(BaseLoader):
"""Load `ReadTheDocs` documentation directory."""
def __init__(
self,
path: Union[str, Path],
encoding: Optional[str] = None,
errors: Optional[str] = None,
custom_html_tag: Optional[Tuple[str, dict]] = None,
patterns: Sequence[str] = ("*.htm", "*.html"),
exclude_links_ratio: float = 1.0,
**kwargs: Optional[Any],
):
"""
Initialize ReadTheDocsLoader
The loader loops over all files under `path` and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
`<main id="main-content>`, <`div role="main>`, and `<article role="main">`. You
can also define your own html tags by passing custom_html_tag, e.g.
`("div", "class=main")`. The loader iterates html tags with the order of
custom html tags (if exists) and default html tags. If any of the tags is not
empty, the loop will break and retrieve the content out of that tag.
Args:
path: The location of pulled readthedocs folder.
encoding: The encoding with which to open the documents.
errors: Specify how encoding and decoding errors are to be handled; this
cannot be used in binary mode.
custom_html_tag: Optional custom html tag to retrieve the content from
files.
patterns: The file patterns to load, passed to `glob.rglob`.
exclude_links_ratio: The ratio of links:content to exclude pages from.
This is to reduce the frequency at which index pages make their
way into retrieved results. Recommended: 0.5
kwargs: named arguments passed to `bs4.BeautifulSoup`.
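        Example:
            A sketch, assuming ``rtdocs/`` is a locally pulled Read the Docs
            folder and the site wraps its content in ``<div class="main">``.

            .. code-block:: python

                loader = ReadTheDocsLoader(
                    "rtdocs/", custom_html_tag=("div", {"class": "main"})
                )
                docs = loader.load()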
"""
try:
from bs4 import BeautifulSoup
except ImportError:
raise ImportError(
"Could not import python packages. "
"Please install it with `pip install beautifulsoup4`. "
)
try:
_ = BeautifulSoup(
"<html><body>Parser builder library test.</body></html>",
"html.parser",
**kwargs,
)
except Exception as e:
raise ValueError("Parsing kwargs do not appear valid") from e
self.file_path = Path(path)
self.encoding = encoding
self.errors = errors
self.custom_html_tag = custom_html_tag
self.patterns = patterns
self.bs_kwargs = kwargs
self.exclude_links_ratio = exclude_links_ratio
def lazy_load(self) -> Iterator[Document]:
"""A lazy loader for Documents."""
for file_pattern in self.patterns:
for p in self.file_path.rglob(file_pattern):
if p.is_dir():
continue
with open(p, encoding=self.encoding, errors=self.errors) as f:
text = self._clean_data(f.read())
yield Document(page_content=text, metadata={"source": str(p)})
def _clean_data(self, data: str) -> str:
from bs4 import BeautifulSoup
soup = BeautifulSoup(data, "html.parser", **self.bs_kwargs)
# default tags
html_tags = [
("div", {"role": "main"}),
("main", {"id": "main-content"}),
]
if self.custom_html_tag is not None:
html_tags.append(self.custom_html_tag)
element = None
# reversed order. check the custom one first
for tag, attrs in html_tags[::-1]:
element = soup.find(tag, attrs)
# if found, break
if element is not None:
break
if element is not None and _get_link_ratio(element) <= self.exclude_links_ratio:
text = _get_clean_text(element)
else:
text = ""
# trim empty lines
return "\n".join([t for t in text.split("\n") if t])
def _get_clean_text(element: Tag) -> str:
"""Returns cleaned text with newlines preserved and irrelevant elements removed."""
elements_to_skip = [
"script",
"noscript",
"canvas",
"meta",
"svg",
"map",
"area",
"audio",
"source",
"track",
"video",
"embed",
"object",
"param",
"picture",
"iframe",
"frame",
"frameset",
"noframes",
"applet",
"form",
"button",
"select",
"base",
"style",
"img",
]
newline_elements = [
"p",
"div",
"ul",
"ol",
"li",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6",
"pre",
"table",
"tr",
]
text = _process_element(element, elements_to_skip, newline_elements)
return text.strip()
def _get_link_ratio(section: Tag) -> float:
links = section.find_all("a")
total_text = "".join(str(s) for s in section.stripped_strings)
if len(total_text) == 0:
return 0
link_text = "".join(
str(string.string.strip())
for link in links
for string in link.strings
if string
)
return len(link_text) / len(total_text)
def _process_element(
element: Union[Tag, NavigableString, Comment],
elements_to_skip: List[str],
newline_elements: List[str],
) -> str:
"""
Traverse through HTML tree recursively to preserve newline and skip
unwanted (code/binary) elements
"""
from bs4 import NavigableString
from bs4.element import Comment, Tag
tag_name = getattr(element, "name", None)
if isinstance(element, Comment) or tag_name in elements_to_skip:
return ""
elif isinstance(element, NavigableString):
return element
elif tag_name == "br":
return "\n"
elif tag_name in newline_elements:
return (
"".join(
_process_element(child, elements_to_skip, newline_elements)
for child in element.children
if isinstance(child, (Tag, NavigableString, Comment))
)
+ "\n"
)
else:
return "".join(
_process_element(child, elements_to_skip, newline_elements)
for child in element.children
if isinstance(child, (Tag, NavigableString, Comment))
)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/async_html.py | import asyncio
import logging
import warnings
from concurrent.futures import Future, ThreadPoolExecutor
from typing import (
Any,
AsyncIterator,
Dict,
Iterator,
List,
Optional,
Tuple,
Union,
cast,
)
import aiohttp
import requests
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utils.user_agent import get_user_agent
logger = logging.getLogger(__name__)
default_header_template = {
"User-Agent": get_user_agent(),
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*"
";q=0.8",
"Accept-Language": "en-US,en;q=0.5",
"Referer": "https://www.google.com/",
"DNT": "1",
"Connection": "keep-alive",
"Upgrade-Insecure-Requests": "1",
}
def _build_metadata(soup: Any, url: str) -> dict:
"""Build metadata from BeautifulSoup output."""
metadata = {"source": url}
if title := soup.find("title"):
metadata["title"] = title.get_text()
if description := soup.find("meta", attrs={"name": "description"}):
metadata["description"] = description.get("content", "No description found.")
if html := soup.find("html"):
metadata["language"] = html.get("lang", "No language found.")
return metadata
class AsyncHtmlLoader(BaseLoader):
"""Load `HTML` asynchronously."""
def __init__(
self,
web_path: Union[str, List[str]],
header_template: Optional[dict] = None,
verify_ssl: Optional[bool] = True,
proxies: Optional[dict] = None,
autoset_encoding: bool = True,
encoding: Optional[str] = None,
default_parser: str = "html.parser",
requests_per_second: int = 2,
requests_kwargs: Optional[Dict[str, Any]] = None,
raise_for_status: bool = False,
ignore_load_errors: bool = False,
*,
preserve_order: bool = True,
trust_env: bool = False,
):
"""Initialize with a webpage path."""
# TODO: Deprecate web_path in favor of web_paths, and remove this
# left like this because there are a number of loaders that expect single
# urls
if isinstance(web_path, str):
self.web_paths = [web_path]
elif isinstance(web_path, List):
self.web_paths = web_path
headers = header_template or default_header_template
if not headers.get("User-Agent"):
try:
from fake_useragent import UserAgent
headers["User-Agent"] = UserAgent().random
except ImportError:
logger.info(
"fake_useragent not found, using default user agent."
"To get a realistic header for requests, "
"`pip install fake_useragent`."
)
self.session = requests.Session()
self.session.headers = dict(headers)
self.session.verify = verify_ssl
if proxies:
self.session.proxies.update(proxies)
self.requests_per_second = requests_per_second
self.default_parser = default_parser
self.requests_kwargs = requests_kwargs or {}
self.raise_for_status = raise_for_status
self.autoset_encoding = autoset_encoding
self.encoding = encoding
self.ignore_load_errors = ignore_load_errors
self.preserve_order = preserve_order
self.trust_env = trust_env
def _fetch_valid_connection_docs(self, url: str) -> Any:
if self.ignore_load_errors:
try:
return self.session.get(url, **self.requests_kwargs)
except Exception as e:
warnings.warn(str(e))
return None
return self.session.get(url, **self.requests_kwargs)
@staticmethod
def _check_parser(parser: str) -> None:
"""Check that parser is valid for bs4."""
valid_parsers = ["html.parser", "lxml", "xml", "lxml-xml", "html5lib"]
if parser not in valid_parsers:
raise ValueError(
"`parser` must be one of " + ", ".join(valid_parsers) + "."
)
async def _fetch(
self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5
) -> str:
async with aiohttp.ClientSession(trust_env=self.trust_env) as session:
for i in range(retries):
try:
kwargs: Dict = dict(
headers=self.session.headers,
cookies=self.session.cookies.get_dict(),
**self.requests_kwargs,
)
if not self.session.verify:
kwargs["ssl"] = False
async with session.get(
url,
**kwargs,
) as response:
try:
text = await response.text()
except UnicodeDecodeError:
logger.error(f"Failed to decode content from {url}")
text = ""
return text
except (aiohttp.ClientConnectionError, TimeoutError) as e:
if i == retries - 1 and self.ignore_load_errors:
logger.warning(f"Error fetching {url} after {retries} retries.")
return ""
elif i == retries - 1:
raise
else:
logger.warning(
f"Error fetching {url} with attempt "
f"{i + 1}/{retries}: {e}. Retrying..."
)
await asyncio.sleep(cooldown * backoff**i)
raise ValueError("retry count exceeded")
async def _fetch_with_rate_limit(
self, url: str, semaphore: asyncio.Semaphore
) -> Tuple[str, str]:
async with semaphore:
return url, await self._fetch(url)
async def _lazy_fetch_all(
self, urls: List[str], preserve_order: bool
) -> AsyncIterator[Tuple[str, str]]:
semaphore = asyncio.Semaphore(self.requests_per_second)
tasks = [
asyncio.create_task(self._fetch_with_rate_limit(url, semaphore))
for url in urls
]
try:
from tqdm.asyncio import tqdm_asyncio
if preserve_order:
for task in tqdm_asyncio(
tasks, desc="Fetching pages", ascii=True, mininterval=1
):
yield await task
else:
for task in tqdm_asyncio.as_completed(
tasks, desc="Fetching pages", ascii=True, mininterval=1
):
yield await task
except ImportError:
warnings.warn("For better logging of progress, `pip install tqdm`")
if preserve_order:
for result in await asyncio.gather(*tasks):
yield result
else:
for task in asyncio.as_completed(tasks):
yield await task
async def fetch_all(self, urls: List[str]) -> List[str]:
"""Fetch all urls concurrently with rate limiting."""
return [doc async for _, doc in self._lazy_fetch_all(urls, True)]
def _to_document(self, url: str, text: str) -> Document:
from bs4 import BeautifulSoup
if url.endswith(".xml"):
parser = "xml"
else:
parser = self.default_parser
self._check_parser(parser)
soup = BeautifulSoup(text, parser)
metadata = _build_metadata(soup, url)
return Document(page_content=text, metadata=metadata)
def lazy_load(self) -> Iterator[Document]:
"""Lazy load text from the url(s) in web_path."""
results: List[str]
try:
# Raises RuntimeError if there is no current event loop.
asyncio.get_running_loop()
# If there is a current event loop, we need to run the async code
# in a separate loop, in a separate thread.
with ThreadPoolExecutor(max_workers=1) as executor:
future: Future[List[str]] = executor.submit(
asyncio.run, # type: ignore[arg-type]
self.fetch_all(self.web_paths), # type: ignore[arg-type]
)
results = future.result()
except RuntimeError:
results = asyncio.run(self.fetch_all(self.web_paths))
for i, text in enumerate(cast(List[str], results)):
yield self._to_document(self.web_paths[i], text)
async def alazy_load(self) -> AsyncIterator[Document]:
"""Lazy load text from the url(s) in web_path."""
async for url, text in self._lazy_fetch_all(
self.web_paths, self.preserve_order
):
yield self._to_document(url, text)
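    # Typical use (the URL is a placeholder):
    #
    #     loader = AsyncHtmlLoader(["https://example.com"])
    #     docs = loader.load()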
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/gcs_directory.py | import logging
from typing import Callable, List, Optional
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.gcs_file import GCSFileLoader
from langchain_community.utilities.vertexai import get_client_info
logger = logging.getLogger(__name__)
@deprecated(
since="0.0.32",
removal="1.0",
alternative_import="langchain_google_community.GCSDirectoryLoader",
)
class GCSDirectoryLoader(BaseLoader):
"""Load from GCS directory."""
def __init__(
self,
project_name: str,
bucket: str,
prefix: str = "",
loader_func: Optional[Callable[[str], BaseLoader]] = None,
continue_on_failure: bool = False,
):
"""Initialize with bucket and key name.
Args:
project_name: The ID of the project for the GCS bucket.
bucket: The name of the GCS bucket.
prefix: The prefix of the GCS bucket.
loader_func: A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
continue_on_failure: To use try-except block for each file within the GCS
directory. If set to `True`, then failure to process a file will not
cause an error.
"""
self.project_name = project_name
self.bucket = bucket
self.prefix = prefix
self._loader_func = loader_func
self.continue_on_failure = continue_on_failure
def load(self) -> List[Document]:
"""Load documents."""
try:
from google.cloud import storage
except ImportError:
raise ImportError(
"Could not import google-cloud-storage python package. "
"Please install it with `pip install google-cloud-storage`."
)
client = storage.Client(
project=self.project_name,
client_info=get_client_info(module="google-cloud-storage"),
)
docs = []
for blob in client.list_blobs(self.bucket, prefix=self.prefix):
# we shall just skip directories since GCSFileLoader creates
# intermediate directories on the fly
if blob.name.endswith("/"):
continue
# Use the try-except block here
try:
loader = GCSFileLoader(
self.project_name,
self.bucket,
blob.name,
loader_func=self._loader_func,
)
docs.extend(loader.load())
except Exception as e:
if self.continue_on_failure:
logger.warning(f"Problem processing blob {blob.name}, message: {e}")
continue
else:
raise e
return docs
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/hn.py | from typing import Any, List
from langchain_core.documents import Document
from langchain_community.document_loaders.web_base import WebBaseLoader
class HNLoader(WebBaseLoader):
"""Load `Hacker News` data.
It loads data from either main page results or the comments page."""
def load(self) -> List[Document]:
"""Get important HN webpage information.
HN webpage components are:
- title
- content
- source url,
- time of post
- author of the post
- number of comments
- rank of the post
"""
soup_info = self.scrape()
if "item" in self.web_path:
return self.load_comments(soup_info)
else:
return self.load_results(soup_info)
def load_comments(self, soup_info: Any) -> List[Document]:
"""Load comments from a HN post."""
comments = soup_info.select("tr[class='athing comtr']")
title = soup_info.select_one("tr[id='pagespace']").get("title")
return [
Document(
page_content=comment.text.strip(),
metadata={"source": self.web_path, "title": title},
)
for comment in comments
]
def load_results(self, soup: Any) -> List[Document]:
"""Load items from an HN page."""
items = soup.select("tr[class='athing']")
documents = []
for lineItem in items:
ranking = lineItem.select_one("span[class='rank']").text
link = lineItem.find("span", {"class": "titleline"}).find("a").get("href")
title = lineItem.find("span", {"class": "titleline"}).text.strip()
metadata = {
"source": self.web_path,
"title": title,
"link": link,
"ranking": ranking,
}
            documents.append(
                # `link` and `ranking` are already carried in metadata;
                # Document does not accept them as extra keyword arguments.
                Document(page_content=title, metadata=metadata)
            )
return documents
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/iugu.py | import json
import urllib.request
from typing import List, Optional
from langchain_core.documents import Document
from langchain_core.utils import get_from_env, stringify_dict
from langchain_community.document_loaders.base import BaseLoader
IUGU_ENDPOINTS = {
"invoices": "https://api.iugu.com/v1/invoices",
"customers": "https://api.iugu.com/v1/customers",
"charges": "https://api.iugu.com/v1/charges",
"subscriptions": "https://api.iugu.com/v1/subscriptions",
"plans": "https://api.iugu.com/v1/plans",
}
class IuguLoader(BaseLoader):
"""Load from `IUGU`."""
def __init__(self, resource: str, api_token: Optional[str] = None) -> None:
"""Initialize the IUGU resource.
Args:
resource: The name of the resource to fetch.
api_token: The IUGU API token to use.
"""
self.resource = resource
api_token = api_token or get_from_env("api_token", "IUGU_API_TOKEN")
self.headers = {"Authorization": f"Bearer {api_token}"}
def _make_request(self, url: str) -> List[Document]:
request = urllib.request.Request(url, headers=self.headers)
with urllib.request.urlopen(request) as response:
json_data = json.loads(response.read().decode())
text = stringify_dict(json_data)
metadata = {"source": url}
return [Document(page_content=text, metadata=metadata)]
def _get_resource(self) -> List[Document]:
endpoint = IUGU_ENDPOINTS.get(self.resource)
if endpoint is None:
return []
return self._make_request(endpoint)
def load(self) -> List[Document]:
return self._get_resource()
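# Example usage (the API token is read from the IUGU_API_TOKEN env var if not
# passed explicitly):
#
#     loader = IuguLoader("invoices")
#     docs = loader.load()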
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/scrapfly.py | """Scrapfly Web Reader."""
import logging
from typing import Iterator, List, Literal, Optional
from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document
from langchain_core.utils import get_from_env
logger = logging.getLogger(__name__)
class ScrapflyLoader(BaseLoader):
"""Turn a url to llm accessible markdown with `Scrapfly.io`.
For further details, visit: https://scrapfly.io/docs/sdk/python
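
    Example:
        A minimal sketch; the URL is a placeholder and the API key is read
        from the SCRAPFLY_API_KEY environment variable when not passed.

        .. code-block:: python

            from langchain_community.document_loaders import ScrapflyLoader

            loader = ScrapflyLoader(["https://example.com"])
            docs = loader.load()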
"""
def __init__(
self,
urls: List[str],
*,
api_key: Optional[str] = None,
scrape_format: Literal["markdown", "text"] = "markdown",
scrape_config: Optional[dict] = None,
continue_on_failure: bool = True,
) -> None:
"""Initialize client.
Args:
urls: List of urls to scrape.
api_key: The Scrapfly API key. If not specified must have env var
SCRAPFLY_API_KEY set.
            scrape_format: Scrape result format, one of "markdown" or "text".
scrape_config: Dictionary of ScrapFly scrape config object.
continue_on_failure: Whether to continue if scraping a url fails.
"""
try:
from scrapfly import ScrapflyClient
except ImportError:
raise ImportError(
"`scrapfly` package not found, please run `pip install scrapfly-sdk`"
)
if not urls:
raise ValueError("URLs must be provided.")
api_key = api_key or get_from_env("api_key", "SCRAPFLY_API_KEY")
self.scrapfly = ScrapflyClient(key=api_key)
self.urls = urls
self.scrape_format = scrape_format
self.scrape_config = scrape_config
self.continue_on_failure = continue_on_failure
def lazy_load(self) -> Iterator[Document]:
from scrapfly import ScrapeConfig
scrape_config = self.scrape_config if self.scrape_config is not None else {}
for url in self.urls:
try:
response = self.scrapfly.scrape(
ScrapeConfig(url, format=self.scrape_format, **scrape_config)
)
yield Document(
page_content=response.scrape_result["content"],
metadata={"url": url},
)
except Exception as e:
if self.continue_on_failure:
logger.error(f"Error fetching data from {url}, exception: {e}")
else:
raise e
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/powerpoint.py | import os
from typing import List
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
class UnstructuredPowerPointLoader(UnstructuredFileLoader):
"""Load `Microsoft PowerPoint` files using `Unstructured`.
Works with both .ppt and .pptx files.
You can run the loader in one of two modes: "single" and "elements".
If you use "single" mode, the document will be returned as a single
langchain Document object. If you use "elements" mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
--------
from langchain_community.document_loaders import UnstructuredPowerPointLoader
loader = UnstructuredPowerPointLoader(
"example.pptx", mode="elements", strategy="fast",
)
docs = loader.load()
References
----------
https://unstructured-io.github.io/unstructured/bricks.html#partition-pptx
"""
def _get_elements(self) -> List:
from unstructured.__version__ import __version__ as __unstructured_version__
from unstructured.file_utils.filetype import FileType, detect_filetype
unstructured_version = tuple(
[int(x) for x in __unstructured_version__.split(".")]
)
# NOTE(MthwRobinson) - magic will raise an import error if the libmagic
# system dependency isn't installed. If it's not installed, we'll just
# check the file extension
try:
import magic # noqa: F401
is_ppt = detect_filetype(self.file_path) == FileType.PPT # type: ignore[arg-type]
except ImportError:
_, extension = os.path.splitext(str(self.file_path))
is_ppt = extension == ".ppt"
if is_ppt and unstructured_version < (0, 4, 11):
raise ValueError(
f"You are on unstructured version {__unstructured_version__}. "
"Partitioning .ppt files is only supported in unstructured>=0.4.11. "
"Please upgrade the unstructured package and try again."
)
if is_ppt:
from unstructured.partition.ppt import partition_ppt
return partition_ppt(filename=self.file_path, **self.unstructured_kwargs) # type: ignore[arg-type]
else:
from unstructured.partition.pptx import partition_pptx
return partition_pptx(filename=self.file_path, **self.unstructured_kwargs) # type: ignore[arg-type]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/assemblyai.py | from __future__ import annotations
from enum import Enum
from pathlib import Path
from typing import TYPE_CHECKING, Iterator, Optional, Union
import requests
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
import assemblyai
class TranscriptFormat(Enum):
"""Transcript format to use for the document loader."""
TEXT = "text"
"""One document with the transcription text"""
SENTENCES = "sentences"
"""Multiple documents, splits the transcription by each sentence"""
PARAGRAPHS = "paragraphs"
"""Multiple documents, splits the transcription by each paragraph"""
SUBTITLES_SRT = "subtitles_srt"
"""One document with the transcript exported in SRT subtitles format"""
SUBTITLES_VTT = "subtitles_vtt"
"""One document with the transcript exported in VTT subtitles format"""
class AssemblyAIAudioTranscriptLoader(BaseLoader):
"""Load AssemblyAI audio transcripts.
It uses the AssemblyAI API to transcribe audio files
and loads the transcribed text into one or more Documents,
depending on the specified format.
To use, you should have the ``assemblyai`` python package installed, and the
environment variable ``ASSEMBLYAI_API_KEY`` set with your API key.
Alternatively, the API key can also be passed as an argument.
    Audio files can be specified via a URL or a local file path.
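
    Example:
        A minimal sketch; the audio URL is a placeholder and the API key is
        read from the ASSEMBLYAI_API_KEY environment variable.

        .. code-block:: python

            from langchain_community.document_loaders import (
                AssemblyAIAudioTranscriptLoader,
            )

            loader = AssemblyAIAudioTranscriptLoader("https://example.org/audio.mp3")
            docs = loader.load()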
"""
def __init__(
self,
file_path: Union[str, Path],
*,
transcript_format: TranscriptFormat = TranscriptFormat.TEXT,
config: Optional[assemblyai.TranscriptionConfig] = None,
api_key: Optional[str] = None,
):
"""
Initializes the AssemblyAI AudioTranscriptLoader.
Args:
file_path: An URL or a local file path.
transcript_format: Transcript format to use.
See class ``TranscriptFormat`` for more info.
config: Transcription options and features. If ``None`` is given,
the Transcriber's default configuration will be used.
api_key: AssemblyAI API key.
"""
try:
import assemblyai
except ImportError:
raise ImportError(
"Could not import assemblyai python package. "
"Please install it with `pip install assemblyai`."
)
if api_key is not None:
assemblyai.settings.api_key = api_key
self.file_path = str(file_path)
self.transcript_format = transcript_format
self.transcriber = assemblyai.Transcriber(config=config)
def lazy_load(self) -> Iterator[Document]:
"""Transcribes the audio file and loads the transcript into documents.
It uses the AssemblyAI API to transcribe the audio file and blocks until
the transcription is finished.
"""
transcript = self.transcriber.transcribe(self.file_path)
# This will raise a ValueError if no API key is set.
if transcript.error:
raise ValueError(f"Could not transcribe file: {transcript.error}")
if self.transcript_format == TranscriptFormat.TEXT:
yield Document(
page_content=transcript.text, metadata=transcript.json_response
)
elif self.transcript_format == TranscriptFormat.SENTENCES:
sentences = transcript.get_sentences()
for s in sentences:
yield Document(page_content=s.text, metadata=s.dict(exclude={"text"}))
elif self.transcript_format == TranscriptFormat.PARAGRAPHS:
paragraphs = transcript.get_paragraphs()
for p in paragraphs:
yield Document(page_content=p.text, metadata=p.dict(exclude={"text"}))
elif self.transcript_format == TranscriptFormat.SUBTITLES_SRT:
yield Document(page_content=transcript.export_subtitles_srt())
elif self.transcript_format == TranscriptFormat.SUBTITLES_VTT:
yield Document(page_content=transcript.export_subtitles_vtt())
else:
raise ValueError("Unknown transcript format.")
class AssemblyAIAudioLoaderById(BaseLoader):
"""
Load AssemblyAI audio transcripts.
It uses the AssemblyAI API to get an existing transcription
and loads the transcribed text into one or more Documents,
depending on the specified format.
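
    Example:
        A minimal sketch; the transcript ID and API key are placeholders.

        .. code-block:: python

            loader = AssemblyAIAudioLoaderById(
                transcript_id="TRANSCRIPT_ID",
                api_key="YOUR_API_KEY",
                transcript_format=TranscriptFormat.TEXT,
            )
            docs = loader.load()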
"""
def __init__(
self, transcript_id: str, api_key: str, transcript_format: TranscriptFormat
):
"""
Initializes the AssemblyAI AssemblyAIAudioLoaderById.
Args:
transcript_id: Id of an existing transcription.
transcript_format: Transcript format to use.
See class ``TranscriptFormat`` for more info.
api_key: AssemblyAI API key.
"""
self.api_key = api_key
self.transcript_id = transcript_id
self.transcript_format = transcript_format
def lazy_load(self) -> Iterator[Document]:
"""Load data into Document objects."""
HEADERS = {"authorization": self.api_key}
if self.transcript_format == TranscriptFormat.TEXT:
try:
transcript_response = requests.get(
f"https://api.assemblyai.com/v2/transcript/{self.transcript_id}",
headers=HEADERS,
)
transcript_response.raise_for_status()
except Exception as e:
print(f"An error occurred: {e}") # noqa: T201
raise
transcript = transcript_response.json()["text"]
yield Document(page_content=transcript, metadata=transcript_response.json())
elif self.transcript_format == TranscriptFormat.PARAGRAPHS:
try:
paragraphs_response = requests.get(
f"https://api.assemblyai.com/v2/transcript/{self.transcript_id}/paragraphs",
headers=HEADERS,
)
paragraphs_response.raise_for_status()
except Exception as e:
print(f"An error occurred: {e}") # noqa: T201
raise
paragraphs = paragraphs_response.json()["paragraphs"]
for p in paragraphs:
yield Document(page_content=p["text"], metadata=p)
elif self.transcript_format == TranscriptFormat.SENTENCES:
try:
sentences_response = requests.get(
f"https://api.assemblyai.com/v2/transcript/{self.transcript_id}/sentences",
headers=HEADERS,
)
sentences_response.raise_for_status()
except Exception as e:
print(f"An error occurred: {e}") # noqa: T201
raise
sentences = sentences_response.json()["sentences"]
for s in sentences:
yield Document(page_content=s["text"], metadata=s)
elif self.transcript_format == TranscriptFormat.SUBTITLES_SRT:
try:
srt_response = requests.get(
f"https://api.assemblyai.com/v2/transcript/{self.transcript_id}/srt",
headers=HEADERS,
)
srt_response.raise_for_status()
except Exception as e:
print(f"An error occurred: {e}") # noqa: T201
raise
srt = srt_response.text
yield Document(page_content=srt)
elif self.transcript_format == TranscriptFormat.SUBTITLES_VTT:
try:
vtt_response = requests.get(
f"https://api.assemblyai.com/v2/transcript/{self.transcript_id}/vtt",
headers=HEADERS,
)
vtt_response.raise_for_status()
except Exception as e:
print(f"An error occurred: {e}") # noqa: T201
raise
vtt = vtt_response.text
yield Document(page_content=vtt)
else:
raise ValueError("Unknown transcript format.")
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/stripe.py | import json
import urllib.request
from typing import List, Optional
from langchain_core.documents import Document
from langchain_core.utils import get_from_env, stringify_dict
from langchain_community.document_loaders.base import BaseLoader
STRIPE_ENDPOINTS = {
"balance_transactions": "https://api.stripe.com/v1/balance_transactions",
"charges": "https://api.stripe.com/v1/charges",
"customers": "https://api.stripe.com/v1/customers",
"events": "https://api.stripe.com/v1/events",
"refunds": "https://api.stripe.com/v1/refunds",
"disputes": "https://api.stripe.com/v1/disputes",
}
class StripeLoader(BaseLoader):
"""Load from `Stripe` API."""
def __init__(self, resource: str, access_token: Optional[str] = None) -> None:
"""Initialize with a resource and an access token.
Args:
resource: The resource.
access_token: The access token.
"""
self.resource = resource
access_token = access_token or get_from_env(
"access_token", "STRIPE_ACCESS_TOKEN"
)
self.headers = {"Authorization": f"Bearer {access_token}"}
def _make_request(self, url: str) -> List[Document]:
request = urllib.request.Request(url, headers=self.headers)
with urllib.request.urlopen(request) as response:
json_data = json.loads(response.read().decode())
text = stringify_dict(json_data)
metadata = {"source": url}
return [Document(page_content=text, metadata=metadata)]
def _get_resource(self) -> List[Document]:
endpoint = STRIPE_ENDPOINTS.get(self.resource)
if endpoint is None:
return []
return self._make_request(endpoint)
def load(self) -> List[Document]:
return self._get_resource()
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/xml.py | """Loads XML files using the Unstructured library."""
from pathlib import Path
from typing import Any, List, Union
from langchain_community.document_loaders.unstructured import (
UnstructuredFileLoader,
validate_unstructured_version,
)
class UnstructuredXMLLoader(UnstructuredFileLoader):
"""Load `XML` file using `Unstructured`.
You can run the loader in one of two modes: "single" and "elements".
If you use "single" mode, the document will be returned as a single
langchain Document object. If you use "elements" mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
--------
from langchain_community.document_loaders import UnstructuredXMLLoader
loader = UnstructuredXMLLoader(
"example.xml", mode="elements", strategy="fast",
)
docs = loader.load()
References
----------
https://unstructured-io.github.io/unstructured/bricks.html#partition-xml
"""
def __init__(
self,
file_path: Union[str, Path],
mode: str = "single",
**unstructured_kwargs: Any,
):
file_path = str(file_path)
validate_unstructured_version(min_unstructured_version="0.6.7")
super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)
def _get_elements(self) -> List:
from unstructured.partition.xml import partition_xml
return partition_xml(filename=self.file_path, **self.unstructured_kwargs) # type: ignore[arg-type]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/geodataframe.py | from typing import Any, Iterator
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class GeoDataFrameLoader(BaseLoader):
"""Load `geopandas` Dataframe."""
def __init__(self, data_frame: Any, page_content_column: str = "geometry"):
"""Initialize with geopandas Dataframe.
Args:
data_frame: geopandas DataFrame object.
page_content_column: Name of the column containing the page content.
Defaults to "geometry".
"""
try:
import geopandas as gpd
except ImportError:
raise ImportError(
"geopandas package not found, please install it with "
"`pip install geopandas`"
)
if not isinstance(data_frame, gpd.GeoDataFrame):
raise ValueError(
f"Expected data_frame to be a gpd.GeoDataFrame, got {type(data_frame)}"
)
if page_content_column not in data_frame.columns:
raise ValueError(
f"Expected data_frame to have a column named {page_content_column}"
)
if not isinstance(data_frame[page_content_column], gpd.GeoSeries):
raise ValueError(
f"Expected data_frame[{page_content_column}] to be a GeoSeries"
)
self.data_frame = data_frame
self.page_content_column = page_content_column
def lazy_load(self) -> Iterator[Document]:
"""Lazy load records from dataframe."""
# assumes all geometries in GeoSeries are same CRS and Geom Type
crs_str = self.data_frame.crs.to_string() if self.data_frame.crs else None
geometry_type = self.data_frame.geometry.geom_type.iloc[0]
for _, row in self.data_frame.iterrows():
geom = row[self.page_content_column]
xmin, ymin, xmax, ymax = geom.bounds
metadata = row.to_dict()
metadata["crs"] = crs_str
metadata["geometry_type"] = geometry_type
metadata["xmin"] = xmin
metadata["ymin"] = ymin
metadata["xmax"] = xmax
metadata["ymax"] = ymax
metadata.pop(self.page_content_column)
# using WKT instead of str() to help GIS system interoperability
yield Document(page_content=geom.wkt, metadata=metadata)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/news.py | """Loader that uses unstructured to load HTML files."""
import logging
from typing import Any, Iterator, List
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
class NewsURLLoader(BaseLoader):
"""Load news articles from URLs using `Unstructured`.
Args:
urls: URLs to load. Each is loaded into its own document.
text_mode: If True, extract text from URL and use that for page content.
Otherwise, extract raw HTML.
nlp: If True, perform NLP on the extracted contents, like providing a summary
and extracting keywords.
continue_on_failure: If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar: If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, ``pip install tqdm``.
**newspaper_kwargs: Any additional named arguments to pass to
newspaper.Article().
Example:
.. code-block:: python
from langchain_community.document_loaders import NewsURLLoader
loader = NewsURLLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
Newspaper reference:
https://newspaper.readthedocs.io/en/latest/
"""
def __init__(
self,
urls: List[str],
text_mode: bool = True,
nlp: bool = False,
continue_on_failure: bool = True,
show_progress_bar: bool = False,
**newspaper_kwargs: Any,
) -> None:
"""Initialize with file path."""
try:
import newspaper
self.__version = newspaper.__version__
except ImportError:
raise ImportError(
"newspaper package not found, please install it with "
"`pip install newspaper3k`"
)
self.urls = urls
self.text_mode = text_mode
self.nlp = nlp
self.continue_on_failure = continue_on_failure
self.newspaper_kwargs = newspaper_kwargs
self.show_progress_bar = show_progress_bar
def load(self) -> List[Document]:
        docs_iter = self.lazy_load()
if self.show_progress_bar:
try:
from tqdm import tqdm
except ImportError as e:
raise ImportError(
"Package tqdm must be installed if show_progress_bar=True. "
"Please install with 'pip install tqdm' or set "
"show_progress_bar=False."
) from e
            docs_iter = tqdm(docs_iter)
        return list(docs_iter)
def lazy_load(self) -> Iterator[Document]:
try:
from newspaper import Article
except ImportError as e:
raise ImportError(
"Cannot import newspaper, please install with `pip install newspaper3k`"
) from e
for url in self.urls:
try:
article = Article(url, **self.newspaper_kwargs)
article.download()
article.parse()
if self.nlp:
article.nlp()
except Exception as e:
if self.continue_on_failure:
logger.error(f"Error fetching or processing {url}, exception: {e}")
continue
else:
raise e
metadata = {
"title": getattr(article, "title", ""),
"link": getattr(article, "url", getattr(article, "canonical_link", "")),
"authors": getattr(article, "authors", []),
"language": getattr(article, "meta_lang", ""),
"description": getattr(article, "meta_description", ""),
"publish_date": getattr(article, "publish_date", ""),
}
if self.text_mode:
content = article.text
else:
content = article.html
if self.nlp:
metadata["keywords"] = getattr(article, "keywords", [])
metadata["summary"] = getattr(article, "summary", "")
yield Document(page_content=content, metadata=metadata)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/polars_dataframe.py | from typing import Any, Iterator
from langchain_core.documents import Document
from langchain_community.document_loaders.dataframe import BaseDataFrameLoader
class PolarsDataFrameLoader(BaseDataFrameLoader):
"""Load `Polars` DataFrame."""
def __init__(self, data_frame: Any, *, page_content_column: str = "text"):
"""Initialize with dataframe object.
Args:
data_frame: Polars DataFrame object.
page_content_column: Name of the column containing the page content.
Defaults to "text".
"""
        try:
            import polars as pl
        except ImportError as e:
            raise ImportError(
                "Unable to import polars, please install with `pip install polars`."
            ) from e
if not isinstance(data_frame, pl.DataFrame):
raise ValueError(
f"Expected data_frame to be a pl.DataFrame, got {type(data_frame)}"
)
super().__init__(data_frame, page_content_column=page_content_column)
def lazy_load(self) -> Iterator[Document]:
"""Lazy load records from dataframe."""
for row in self.data_frame.iter_rows(named=True):
text = row[self.page_content_column]
row.pop(self.page_content_column)
yield Document(page_content=text, metadata=row)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/oracleadb_loader.py | from typing import Any, Dict, List, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class OracleAutonomousDatabaseLoader(BaseLoader):
"""
    Load documents from an Oracle Autonomous Database (ADB).

    A connection can be made with either a connection string or a TNS name.
    For an mTLS connection, wallet_location and wallet_password are also
    required.

    Each document represents one row of the query result. All columns are
    written into the ``page_content`` of the document, and the columns listed
    in the ``metadata`` constructor argument are additionally written into the
    document ``metadata``. By default, ``metadata`` is None.
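
    Example:
        A minimal sketch; all connection values are placeholders.

        .. code-block:: python

            from langchain_community.document_loaders import (
                OracleAutonomousDatabaseLoader,
            )

            loader = OracleAutonomousDatabaseLoader(
                query="SELECT * FROM my_table",
                user="USERNAME",
                password="PASSWORD",
                connection_string="host:1522/service_name",
            )
            docs = loader.load()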
"""
def __init__(
self,
query: str,
user: str,
password: str,
*,
schema: Optional[str] = None,
tns_name: Optional[str] = None,
config_dir: Optional[str] = None,
wallet_location: Optional[str] = None,
wallet_password: Optional[str] = None,
connection_string: Optional[str] = None,
metadata: Optional[List[str]] = None,
):
"""
init method
:param query: sql query to execute
:param user: username
:param password: user password
:param schema: schema to run in database
        :param tns_name: tns name in tnsnames.ora
        :param config_dir: directory of config files (tnsnames.ora, wallet)
:param wallet_location: location of wallet
:param wallet_password: password of wallet
:param connection_string: connection string to connect to adb instance
:param metadata: metadata used in document
"""
# Mandatory required arguments.
self.query = query
self.user = user
self.password = password
# Schema
self.schema = schema
# TNS connection Method
self.tns_name = tns_name
self.config_dir = config_dir
# Wallet configuration is required for mTLS connection
self.wallet_location = wallet_location
self.wallet_password = wallet_password
# Connection String connection method
self.connection_string = connection_string
# metadata column
self.metadata = metadata
# dsn
self.dsn: Optional[str]
self._set_dsn()
    def _set_dsn(self) -> None:
        if self.connection_string:
            self.dsn = self.connection_string
        elif self.tns_name:
            self.dsn = self.tns_name
        else:
            # neither connection method was provided; set dsn explicitly to
            # None so later attribute access does not fail
            self.dsn = None
def _run_query(self) -> List[Dict[str, Any]]:
try:
import oracledb
except ImportError as e:
raise ImportError(
"Could not import oracledb, "
"please install with 'pip install oracledb'"
) from e
connect_param = {"user": self.user, "password": self.password, "dsn": self.dsn}
if self.dsn == self.tns_name:
connect_param["config_dir"] = self.config_dir
if self.wallet_location and self.wallet_password:
connect_param["wallet_location"] = self.wallet_location
connect_param["wallet_password"] = self.wallet_password
try:
connection = oracledb.connect(**connect_param)
cursor = connection.cursor()
if self.schema:
cursor.execute(f"alter session set current_schema={self.schema}")
cursor.execute(self.query)
columns = [col[0] for col in cursor.description]
data = cursor.fetchall()
data = [
{
i: (j if not isinstance(j, oracledb.LOB) else j.read())
for i, j in zip(columns, row)
}
for row in data
]
except oracledb.DatabaseError as e:
print("Got error while connecting: " + str(e)) # noqa: T201
data = []
finally:
cursor.close()
connection.close()
return data
def load(self) -> List[Document]:
data = self._run_query()
documents = []
metadata_columns = self.metadata if self.metadata else []
for row in data:
metadata = {
key: value for key, value in row.items() if key in metadata_columns
}
doc = Document(page_content=str(row), metadata=metadata)
documents.append(doc)
return documents
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/helpers.py | """Document loader helpers."""
import concurrent.futures
from pathlib import Path
from typing import List, NamedTuple, Optional, Union, cast
class FileEncoding(NamedTuple):
"""File encoding as the NamedTuple."""
encoding: Optional[str]
"""The encoding of the file."""
confidence: float
"""The confidence of the encoding."""
language: Optional[str]
"""The language of the file."""
def detect_file_encodings(
file_path: Union[str, Path], timeout: int = 5
) -> List[FileEncoding]:
"""Try to detect the file encoding.
Returns a list of `FileEncoding` tuples with the detected encodings ordered
by confidence.
Args:
file_path: The path to the file to detect the encoding for.
timeout: The timeout in seconds for the encoding detection.
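
    Example:
        A minimal sketch; "example.txt" is a placeholder path.

        .. code-block:: python

            from langchain_community.document_loaders.helpers import (
                detect_file_encodings,
            )

            encodings = detect_file_encodings("example.txt")
            best_guess = encodings[0].encoding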
"""
import chardet
file_path = str(file_path)
def read_and_detect(file_path: str) -> List[dict]:
with open(file_path, "rb") as f:
rawdata = f.read()
return cast(List[dict], chardet.detect_all(rawdata))
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(read_and_detect, file_path)
try:
encodings = future.result(timeout=timeout)
except concurrent.futures.TimeoutError:
raise TimeoutError(
f"Timeout reached while detecting encoding for {file_path}"
)
if all(encoding["encoding"] is None for encoding in encodings):
raise RuntimeError(f"Could not detect encoding for {file_path}")
return [FileEncoding(**enc) for enc in encodings if enc["encoding"] is not None]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/open_city_data.py | from typing import Iterator
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class OpenCityDataLoader(BaseLoader):
"""Load from `Open City`."""
def __init__(self, city_id: str, dataset_id: str, limit: int):
"""Initialize with dataset_id.
Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6
e.g., city_id = data.sfgov.org
e.g., dataset_id = vw6y-z8j6
Args:
city_id: The Open City city identifier.
dataset_id: The Open City dataset identifier.
limit: The maximum number of documents to load.
"""
self.city_id = city_id
self.dataset_id = dataset_id
self.limit = limit
def lazy_load(self) -> Iterator[Document]:
"""Lazy load records."""
        try:
            from sodapy import Socrata
        except ImportError as e:
            raise ImportError(
                "Unable to import sodapy, please install with `pip install sodapy`."
            ) from e
client = Socrata(self.city_id, None)
results = client.get(self.dataset_id, limit=self.limit)
for record in results:
yield Document(
page_content=str(record),
metadata={
"source": self.city_id + "_" + self.dataset_id,
},
)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/discord.py | from __future__ import annotations
from typing import TYPE_CHECKING, List
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
import pandas as pd
class DiscordChatLoader(BaseLoader):
"""Load `Discord` chat logs."""
def __init__(self, chat_log: pd.DataFrame, user_id_col: str = "ID"):
"""Initialize with a Pandas DataFrame containing chat logs.
Args:
chat_log: Pandas DataFrame containing chat logs.
user_id_col: Name of the column containing the user ID. Defaults to "ID".
"""
        # pandas is only imported under TYPE_CHECKING, so import it at runtime.
        try:
            import pandas as pd
        except ImportError as e:
            raise ImportError("Unable to import pandas, please install it.") from e
        if not isinstance(chat_log, pd.DataFrame):
            raise ValueError(
                f"Expected chat_log to be a pd.DataFrame, got {type(chat_log)}"
            )
self.chat_log = chat_log
self.user_id_col = user_id_col
def load(self) -> List[Document]:
"""Load all chat messages."""
result = []
for _, row in self.chat_log.iterrows():
user_id = row[self.user_id_col]
metadata = row.to_dict()
metadata.pop(self.user_id_col)
result.append(Document(page_content=user_id, metadata=metadata))
return result
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/apify_dataset.py | from typing import Any, Callable, Dict, List
from langchain_core.documents import Document
from pydantic import BaseModel, model_validator
from langchain_community.document_loaders.base import BaseLoader
class ApifyDatasetLoader(BaseLoader, BaseModel):
"""Load datasets from `Apify` web scraping, crawling, and data extraction platform.
For details, see https://docs.apify.com/platform/integrations/langchain
Example:
.. code-block:: python
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_core.documents import Document
loader = ApifyDatasetLoader(
dataset_id="YOUR-DATASET-ID",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
documents = loader.load()
""" # noqa: E501
apify_client: Any
"""An instance of the ApifyClient class from the apify-client Python package."""
dataset_id: str
"""The ID of the dataset on the Apify platform."""
dataset_mapping_function: Callable[[Dict], Document]
"""A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class."""
def __init__(
self, dataset_id: str, dataset_mapping_function: Callable[[Dict], Document]
):
"""Initialize the loader with an Apify dataset ID and a mapping function.
Args:
dataset_id (str): The ID of the dataset on the Apify platform.
dataset_mapping_function (Callable): A function that takes a single
dictionary (an Apify dataset item) and converts it to an instance
of the Document class.
"""
super().__init__(
dataset_id=dataset_id, dataset_mapping_function=dataset_mapping_function
)
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: Dict) -> Any:
"""Validate environment.
Args:
values: The values to validate.
"""
try:
from apify_client import ApifyClient
client = ApifyClient()
if httpx_client := getattr(client.http_client, "httpx_client"):
httpx_client.headers["user-agent"] += "; Origin/langchain"
values["apify_client"] = client
except ImportError:
raise ImportError(
"Could not import apify-client Python package. "
"Please install it with `pip install apify-client`."
)
return values
def load(self) -> List[Document]:
"""Load documents."""
dataset_items = (
self.apify_client.dataset(self.dataset_id).list_items(clean=True).items
)
return list(map(self.dataset_mapping_function, dataset_items))
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/roam.py | from pathlib import Path
from typing import List, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class RoamLoader(BaseLoader):
"""Load `Roam` files from a directory."""
def __init__(self, path: Union[str, Path]):
"""Initialize with a path."""
self.file_path = path
def load(self) -> List[Document]:
"""Load documents."""
ps = list(Path(self.file_path).glob("**/*.md"))
docs = []
for p in ps:
with open(p) as f:
text = f.read()
metadata = {"source": str(p)}
docs.append(Document(page_content=text, metadata=metadata))
return docs
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/athena.py | from __future__ import annotations
import io
import json
import time
from typing import Any, Dict, Iterator, List, Optional, Tuple
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class AthenaLoader(BaseLoader):
"""Load documents from `AWS Athena`.
Each document represents one row of the result.
- By default, all columns are written into the `page_content` of the document
and none into the `metadata` of the document.
- If `metadata_columns` are provided then these columns are written
into the `metadata` of the document while the rest of the columns
are written into the `page_content` of the document.
To authenticate, the AWS client uses this method to automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
    Make sure the credentials / roles used have the required policies to
    access the Amazon Athena and S3 services.
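
    Example:
        A minimal sketch; the query, database, and S3 output URI are
        placeholders.

        .. code-block:: python

            from langchain_community.document_loaders import AthenaLoader

            loader = AthenaLoader(
                query="SELECT * FROM my_table LIMIT 10;",
                database="my_database",
                s3_output_uri="s3://my-bucket/athena-results/",
            )
            docs = loader.load()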
"""
def __init__(
self,
query: str,
database: str,
s3_output_uri: str,
profile_name: Optional[str] = None,
metadata_columns: Optional[List[str]] = None,
):
"""Initialize Athena document loader.
Args:
query: The query to run in Athena.
database: Athena database.
s3_output_uri: Athena output path.
profile_name: Optional. AWS credential profile, if profiles are being used.
metadata_columns: Optional. Columns written to Document `metadata`.
"""
self.query = query
self.database = database
self.s3_output_uri = s3_output_uri
self.metadata_columns = metadata_columns if metadata_columns is not None else []
try:
import boto3
except ImportError:
raise ImportError(
"Could not import boto3 python package. "
"Please install it with `pip install boto3`."
)
try:
session = (
boto3.Session(profile_name=profile_name)
if profile_name is not None
else boto3.Session()
)
except Exception as e:
raise ValueError(
"Could not load credentials to authenticate with AWS client. "
"Please check that credentials in the specified "
"profile name are valid."
) from e
self.athena_client = session.client("athena")
self.s3_client = session.client("s3")
def _execute_query(self) -> List[Dict[str, Any]]:
response = self.athena_client.start_query_execution(
QueryString=self.query,
QueryExecutionContext={"Database": self.database},
ResultConfiguration={"OutputLocation": self.s3_output_uri},
)
query_execution_id = response["QueryExecutionId"]
while True:
response = self.athena_client.get_query_execution(
QueryExecutionId=query_execution_id
)
state = response["QueryExecution"]["Status"]["State"]
if state == "SUCCEEDED":
break
elif state == "FAILED":
resp_status = response["QueryExecution"]["Status"]
state_change_reason = resp_status["StateChangeReason"]
err = f"Query Failed: {state_change_reason}"
raise Exception(err)
elif state == "CANCELLED":
raise Exception("Query was cancelled by the user.")
time.sleep(1)
result_set = self._get_result_set(query_execution_id)
return json.loads(result_set.to_json(orient="records"))
def _remove_suffix(self, input_string: str, suffix: str) -> str:
if suffix and input_string.endswith(suffix):
return input_string[: -len(suffix)]
return input_string
    def _remove_prefix(self, input_string: str, prefix: str) -> str:
        if prefix and input_string.startswith(prefix):
            return input_string[len(prefix) :]
        return input_string
def _get_result_set(self, query_execution_id: str) -> Any:
try:
import pandas as pd
except ImportError:
raise ImportError(
"Could not import pandas python package. "
"Please install it with `pip install pandas`."
)
output_uri = self.s3_output_uri
tokens = self._remove_prefix(
self._remove_suffix(output_uri, "/"), "s3://"
).split("/")
bucket = tokens[0]
key = "/".join(tokens[1:] + [query_execution_id]) + ".csv"
obj = self.s3_client.get_object(Bucket=bucket, Key=key)
df = pd.read_csv(io.BytesIO(obj["Body"].read()), encoding="utf8")
return df
def _get_columns(
self, query_result: List[Dict[str, Any]]
) -> Tuple[List[str], List[str]]:
content_columns = []
metadata_columns = []
all_columns = list(query_result[0].keys())
for key in all_columns:
if key in self.metadata_columns:
metadata_columns.append(key)
else:
content_columns.append(key)
return content_columns, metadata_columns
def lazy_load(self) -> Iterator[Document]:
query_result = self._execute_query()
content_columns, metadata_columns = self._get_columns(query_result)
for row in query_result:
page_content = "\n".join(
f"{k}: {v}" for k, v in row.items() if k in content_columns
)
metadata = {
k: v for k, v in row.items() if k in metadata_columns and v is not None
}
doc = Document(page_content=page_content, metadata=metadata)
yield doc
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/trello.py | from __future__ import annotations
from typing import TYPE_CHECKING, Any, Iterator, Literal, Optional, Tuple
from langchain_core.documents import Document
from langchain_core.utils import get_from_env
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
from trello import Board, Card, TrelloClient
class TrelloLoader(BaseLoader):
"""Load cards from a `Trello` board."""
def __init__(
self,
client: TrelloClient,
board_name: str,
*,
include_card_name: bool = True,
include_comments: bool = True,
include_checklist: bool = True,
card_filter: Literal["closed", "open", "all"] = "all",
extra_metadata: Tuple[str, ...] = ("due_date", "labels", "list", "closed"),
):
"""Initialize Trello loader.
Args:
client: Trello API client.
board_name: The name of the Trello board.
include_card_name: Whether to include the name of the card in the document.
include_comments: Whether to include the comments on the card in the
document.
include_checklist: Whether to include the checklist on the card in the
document.
card_filter: Filter on card status. Valid values are "closed", "open",
"all".
extra_metadata: List of additional metadata fields to include as document
                metadata. Valid values are "due_date", "labels", "list", "closed".
"""
self.client = client
self.board_name = board_name
self.include_card_name = include_card_name
self.include_comments = include_comments
self.include_checklist = include_checklist
self.extra_metadata = extra_metadata
self.card_filter = card_filter
@classmethod
def from_credentials(
cls,
board_name: str,
*,
api_key: Optional[str] = None,
token: Optional[str] = None,
**kwargs: Any,
) -> TrelloLoader:
"""Convenience constructor that builds TrelloClient init param for you.
Args:
board_name: The name of the Trello board.
api_key: Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token: Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name: Whether to include the name of the card in the document.
include_comments: Whether to include the comments on the card in the
document.
include_checklist: Whether to include the checklist on the card in the
document.
card_filter: Filter on card status. Valid values are "closed", "open",
"all".
extra_metadata: List of additional metadata fields to include as document
                metadata. Valid values are "due_date", "labels", "list", "closed".
"""
try:
from trello import TrelloClient # type: ignore
except ImportError as ex:
raise ImportError(
"Could not import trello python package. "
"Please install it with `pip install py-trello`."
) from ex
api_key = api_key or get_from_env("api_key", "TRELLO_API_KEY")
token = token or get_from_env("token", "TRELLO_TOKEN")
client = TrelloClient(api_key=api_key, token=token)
return cls(client, board_name, **kwargs)
def lazy_load(self) -> Iterator[Document]:
"""Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns:
A list of documents, one for each card in the board.
"""
try:
from bs4 import BeautifulSoup # noqa: F401
except ImportError as ex:
raise ImportError(
"`beautifulsoup4` package not found, please run"
" `pip install beautifulsoup4`"
) from ex
board = self._get_board()
# Create a dictionary with the list IDs as keys and the list names as values
list_dict = {list_item.id: list_item.name for list_item in board.list_lists()}
# Get Cards on the board
cards = board.get_cards(card_filter=self.card_filter)
for card in cards:
yield self._card_to_doc(card, list_dict)
def _get_board(self) -> Board:
# Find the first board with a matching name
board = next(
(b for b in self.client.list_boards() if b.name == self.board_name), None
)
if not board:
raise ValueError(f"Board `{self.board_name}` not found.")
return board
def _card_to_doc(self, card: Card, list_dict: dict) -> Document:
from bs4 import BeautifulSoup # type: ignore
text_content = ""
if self.include_card_name:
text_content = card.name + "\n"
if card.description.strip():
text_content += BeautifulSoup(card.description, "lxml").get_text()
if self.include_checklist:
# Get all the checklist items on the card
for checklist in card.checklists:
if checklist.items:
items = [
f"{item['name']}:{item['state']}" for item in checklist.items
]
text_content += f"\n{checklist.name}\n" + "\n".join(items)
if self.include_comments:
# Get all the comments on the card
comments = [
BeautifulSoup(comment["data"]["text"], "lxml").get_text()
for comment in card.comments
]
text_content += "Comments:" + "\n".join(comments)
# Default metadata fields
metadata = {
"title": card.name,
"id": card.id,
"url": card.url,
}
# Extra metadata fields. Card object is not subscriptable.
if "labels" in self.extra_metadata:
metadata["labels"] = [label.name for label in card.labels]
if "list" in self.extra_metadata:
if card.list_id in list_dict:
metadata["list"] = list_dict[card.list_id]
if "closed" in self.extra_metadata:
metadata["closed"] = card.closed
if "due_date" in self.extra_metadata:
metadata["due_date"] = card.due_date
return Document(page_content=text_content, metadata=metadata)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/blockchain.py | import os
import re
import time
from enum import Enum
from typing import List, Optional
import requests
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class BlockchainType(Enum):
"""Enumerator of the supported blockchains."""
ETH_MAINNET = "eth-mainnet"
ETH_GOERLI = "eth-goerli"
ETH_SEPOLIA = "eth-sepolia"
ETH_HOLESKY = "eth-holesky"
POLYGON_MAINNET = "polygon-mainnet"
POLYGON_MUMBAI = "polygon-mumbai"
POLYGON_AMOY = "polygon-amoy"
ARB_MAINNET = "arb-mainnet"
ARB_SEPOLIA = "arb-sepolia"
OP_MAINNET = "opt-mainnet"
OP_SEPOLIA = "opt-sepolia"
BASE_MAINNET = "base-mainnet"
BASE_SEPOLIA = "base-sepolia"
BLAST_MAINNET = "blast-mainnet"
BLAST_SEPOLIA = "blast-sepolia"
ZKSYNC_MAINNET = "zksync-mainnet"
ZKSYNC_SEPOLIA = "zksync-sepolia"
ZORA_MAINNET = "zora-mainnet"
ZORA_SEPOLIA = "zora-sepolia"
class BlockchainDocumentLoader(BaseLoader):
"""Load elements from a blockchain smart contract.
See supported blockchains here: https://python.langchain.com/v0.2/api_reference/community/document_loaders/langchain_community.document_loaders.blockchain.BlockchainType.html
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
- Support additional Alchemy APIs (e.g. getTransactions, etc.)
- Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
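
    Example:
        A minimal sketch; the contract address is a placeholder and the
        default "docs-demo" API key is a shared demo key.

        .. code-block:: python

            from langchain_community.document_loaders.blockchain import (
                BlockchainDocumentLoader,
                BlockchainType,
            )

            loader = BlockchainDocumentLoader(
                contract_address="0x1234567890abcdef1234567890abcdef12345678",
                blockchainType=BlockchainType.ETH_MAINNET,
            )
            docs = loader.load()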
""" # noqa: E501
def __init__(
self,
contract_address: str,
blockchainType: BlockchainType = BlockchainType.ETH_MAINNET,
api_key: str = "docs-demo",
startToken: str = "",
get_all_tokens: bool = False,
max_execution_time: Optional[int] = None,
):
"""
Args:
contract_address: The address of the smart contract.
blockchainType: The blockchain type.
api_key: The Alchemy API key.
startToken: The start token for pagination.
get_all_tokens: Whether to get all tokens on the contract.
max_execution_time: The maximum execution time (sec).
"""
self.contract_address = contract_address
self.blockchainType = blockchainType.value
self.api_key = os.environ.get("ALCHEMY_API_KEY") or api_key
self.startToken = startToken
self.get_all_tokens = get_all_tokens
self.max_execution_time = max_execution_time
if not self.api_key:
raise ValueError("Alchemy API key not provided.")
if not re.match(r"^0x[a-fA-F0-9]{40}$", self.contract_address):
raise ValueError(f"Invalid contract address {self.contract_address}")
def load(self) -> List[Document]:
result = []
current_start_token = self.startToken
start_time = time.time()
while True:
url = (
f"https://{self.blockchainType}.g.alchemy.com/nft/v2/"
f"{self.api_key}/getNFTsForCollection?withMetadata="
f"True&contractAddress={self.contract_address}"
f"&startToken={current_start_token}"
)
response = requests.get(url)
if response.status_code != 200:
raise ValueError(
f"Request failed with status code {response.status_code}"
)
items = response.json()["nfts"]
if not items:
break
for item in items:
content = str(item)
tokenId = item["id"]["tokenId"]
metadata = {
"source": self.contract_address,
"blockchain": self.blockchainType,
"tokenId": tokenId,
}
result.append(Document(page_content=content, metadata=metadata))
# exit after the first API call if get_all_tokens is False
if not self.get_all_tokens:
break
# get the start token for the next API call from the last item in array
current_start_token = self._get_next_tokenId(result[-1].metadata["tokenId"])
if (
self.max_execution_time is not None
and (time.time() - start_time) > self.max_execution_time
):
raise RuntimeError("Execution time exceeded the allowed time limit.")
if not result:
raise ValueError(
f"No NFTs found for contract address {self.contract_address}"
)
return result
# add one to the tokenId, ensuring the correct tokenId format is used
def _get_next_tokenId(self, tokenId: str) -> str:
value_type = self._detect_value_type(tokenId)
if value_type == "hex_0x":
value_int = int(tokenId, 16)
elif value_type == "hex_0xbf":
value_int = int(tokenId[2:], 16)
else:
value_int = int(tokenId)
result = value_int + 1
if value_type == "hex_0x":
return "0x" + format(result, "0" + str(len(tokenId) - 2) + "x")
elif value_type == "hex_0xbf":
return "0xbf" + format(result, "0" + str(len(tokenId) - 4) + "x")
else:
return str(result)
# A smart contract can use different formats for the tokenId
@staticmethod
def _detect_value_type(tokenId: str) -> str:
        if isinstance(tokenId, int):
            return "int"
        # check the longer "0xbf" prefix before "0x", which would also match it
        elif tokenId.startswith("0xbf"):
            return "hex_0xbf"
        elif tokenId.startswith("0x"):
            return "hex_0x"
        else:
            return "int"
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/gcs_file.py | import os
import tempfile
from typing import Callable, List, Optional
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
from langchain_community.utilities.vertexai import get_client_info
@deprecated(
since="0.0.32",
removal="1.0",
alternative_import="langchain_google_community.GCSFileLoader",
)
class GCSFileLoader(BaseLoader):
"""Load from GCS file."""
def __init__(
self,
project_name: str,
bucket: str,
blob: str,
loader_func: Optional[Callable[[str], BaseLoader]] = None,
):
"""Initialize with bucket and key name.
Args:
project_name: The name of the project to load
bucket: The name of the GCS bucket.
blob: The name of the GCS blob to load.
loader_func: A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples:
To use an alternative PDF loader:
        >> from langchain_community.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(..., loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(...,
>> loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
"""
self.bucket = bucket
self.blob = blob
self.project_name = project_name
def default_loader_func(file_path: str) -> BaseLoader:
return UnstructuredFileLoader(file_path)
self._loader_func = loader_func if loader_func else default_loader_func
def load(self) -> List[Document]:
"""Load documents."""
try:
from google.cloud import storage
except ImportError:
raise ImportError(
"Could not import google-cloud-storage python package. "
"Please install it with `pip install google-cloud-storage`."
)
# initialize a client
storage_client = storage.Client(
self.project_name, client_info=get_client_info("google-cloud-storage")
)
# Create a bucket object for our bucket
bucket = storage_client.get_bucket(self.bucket)
# Create a blob object from the filepath
blob = bucket.blob(self.blob)
# retrieve custom metadata associated with the blob
metadata = bucket.get_blob(self.blob).metadata
with tempfile.TemporaryDirectory() as temp_dir:
file_path = f"{temp_dir}/{self.blob}"
os.makedirs(os.path.dirname(file_path), exist_ok=True)
# Download the file to a destination
blob.download_to_filename(file_path)
loader = self._loader_func(file_path)
docs = loader.load()
for doc in docs:
if "source" in doc.metadata:
doc.metadata["source"] = f"gs://{self.bucket}/{self.blob}"
if metadata:
doc.metadata.update(metadata)
return docs
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/joplin.py | import json
import urllib
from datetime import datetime
from typing import Iterator, List, Optional
from langchain_core.documents import Document
from langchain_core.utils import get_from_env
from langchain_community.document_loaders.base import BaseLoader
LINK_NOTE_TEMPLATE = "joplin://x-callback-url/openNote?id={id}"
class JoplinLoader(BaseLoader):
"""Load notes from `Joplin`.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for "Web Clipper" in the app settings).
To get the access token, you need to go to the Web Clipper options and
under "Advanced Options" you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
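
    Example:
        A minimal sketch; it assumes Joplin is running locally with the Web
        Clipper service enabled and JOPLIN_ACCESS_TOKEN set.

        .. code-block:: python

            from langchain_community.document_loaders import JoplinLoader

            loader = JoplinLoader()
            docs = loader.load()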
"""
def __init__(
self,
access_token: Optional[str] = None,
port: int = 41184,
host: str = "localhost",
) -> None:
"""
Args:
access_token: The access token to use.
port: The port where the Web Clipper service is running. Default is 41184.
host: The host where the Web Clipper service is running.
Default is localhost.
"""
access_token = access_token or get_from_env(
"access_token", "JOPLIN_ACCESS_TOKEN"
)
base_url = f"http://{host}:{port}"
self._get_note_url = (
f"{base_url}/notes?token={access_token}"
f"&fields=id,parent_id,title,body,created_time,updated_time&page={{page}}"
)
self._get_folder_url = (
f"{base_url}/folders/{{id}}?token={access_token}&fields=title"
)
self._get_tag_url = (
f"{base_url}/notes/{{id}}/tags?token={access_token}&fields=title"
)
def _get_notes(self) -> Iterator[Document]:
has_more = True
page = 1
while has_more:
req_note = urllib.request.Request(self._get_note_url.format(page=page))
with urllib.request.urlopen(req_note) as response:
json_data = json.loads(response.read().decode())
for note in json_data["items"]:
metadata = {
"source": LINK_NOTE_TEMPLATE.format(id=note["id"]),
"folder": self._get_folder(note["parent_id"]),
"tags": self._get_tags(note["id"]),
"title": note["title"],
"created_time": self._convert_date(note["created_time"]),
"updated_time": self._convert_date(note["updated_time"]),
}
yield Document(page_content=note["body"], metadata=metadata)
has_more = json_data["has_more"]
page += 1
def _get_folder(self, folder_id: str) -> str:
req_folder = urllib.request.Request(self._get_folder_url.format(id=folder_id))
with urllib.request.urlopen(req_folder) as response:
json_data = json.loads(response.read().decode())
return json_data["title"]
def _get_tags(self, note_id: str) -> List[str]:
req_tag = urllib.request.Request(self._get_tag_url.format(id=note_id))
with urllib.request.urlopen(req_tag) as response:
json_data = json.loads(response.read().decode())
return [tag["title"] for tag in json_data["items"]]
def _convert_date(self, date: int) -> str:
return datetime.fromtimestamp(date / 1000).strftime("%Y-%m-%d %H:%M:%S")
def lazy_load(self) -> Iterator[Document]:
yield from self._get_notes()
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/url_playwright.py | """Loader that uses Playwright to load a page, then uses unstructured to parse html."""
import logging
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, AsyncIterator, Dict, Iterator, List, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
from playwright.async_api import Browser as AsyncBrowser
from playwright.async_api import Page as AsyncPage
from playwright.async_api import Response as AsyncResponse
from playwright.sync_api import Browser, Page, Response
logger = logging.getLogger(__name__)
class PlaywrightEvaluator(ABC):
"""Abstract base class for all evaluators.
Each evaluator should take a page, a browser instance, and a response
object, process the page as necessary, and return the resulting text.
"""
@abstractmethod
def evaluate(self, page: "Page", browser: "Browser", response: "Response") -> str:
"""Synchronously process the page and return the resulting text.
Args:
page: The page to process.
browser: The browser instance.
response: The response from page.goto().
Returns:
text: The text content of the page.
"""
pass
@abstractmethod
async def evaluate_async(
self, page: "AsyncPage", browser: "AsyncBrowser", response: "AsyncResponse"
) -> str:
"""Asynchronously process the page and return the resulting text.
Args:
page: The page to process.
browser: The browser instance.
response: The response from page.goto().
Returns:
text: The text content of the page.
"""
pass
class UnstructuredHtmlEvaluator(PlaywrightEvaluator):
"""Evaluate the page HTML content using the `unstructured` library."""
def __init__(self, remove_selectors: Optional[List[str]] = None):
"""Initialize UnstructuredHtmlEvaluator."""
try:
import unstructured # noqa:F401
except ImportError:
raise ImportError(
"unstructured package not found, please install it with "
"`pip install unstructured`"
)
self.remove_selectors = remove_selectors
def evaluate(self, page: "Page", browser: "Browser", response: "Response") -> str:
"""Synchronously process the HTML content of the page."""
from unstructured.partition.html import partition_html
for selector in self.remove_selectors or []:
elements = page.locator(selector).all()
for element in elements:
if element.is_visible():
element.evaluate("element => element.remove()")
page_source = page.content()
elements = partition_html(text=page_source)
return "\n\n".join([str(el) for el in elements])
async def evaluate_async(
self, page: "AsyncPage", browser: "AsyncBrowser", response: "AsyncResponse"
) -> str:
"""Asynchronously process the HTML content of the page."""
from unstructured.partition.html import partition_html
for selector in self.remove_selectors or []:
elements = await page.locator(selector).all()
for element in elements:
if await element.is_visible():
await element.evaluate("element => element.remove()")
page_source = await page.content()
elements = partition_html(text=page_source)
return "\n\n".join([str(el) for el in elements])
class PlaywrightURLLoader(BaseLoader):
"""Load `HTML` pages with `Playwright` and parse with `Unstructured`.
This is useful for loading pages that require javascript to render.
Attributes:
urls (List[str]): List of URLs to load.
continue_on_failure (bool): If True, continue loading other URLs on failure.
headless (bool): If True, the browser will run in headless mode.
proxy (Optional[Dict[str, str]]): If set, the browser will access URLs
through the specified proxy.
Example:
.. code-block:: python
from langchain_community.document_loaders import PlaywrightURLLoader
urls = ["https://api.ipify.org/?format=json",]
proxy={
"server": "https://xx.xx.xx:15818", # https://<host>:<port>
"username": "username",
"password": "password"
}
loader = PlaywrightURLLoader(urls, proxy=proxy)
data = loader.load()
"""
def __init__(
self,
urls: List[str],
continue_on_failure: bool = True,
headless: bool = True,
remove_selectors: Optional[List[str]] = None,
evaluator: Optional[PlaywrightEvaluator] = None,
proxy: Optional[Dict[str, str]] = None,
):
"""Load a list of URLs using Playwright."""
try:
import playwright # noqa:F401
except ImportError:
raise ImportError(
"playwright package not found, please install it with "
"`pip install playwright`"
)
self.urls = urls
self.continue_on_failure = continue_on_failure
self.headless = headless
self.proxy = proxy
if remove_selectors and evaluator:
raise ValueError(
"`remove_selectors` and `evaluator` cannot be both not None"
)
# Use the provided evaluator, if any, otherwise, use the default.
self.evaluator = evaluator or UnstructuredHtmlEvaluator(remove_selectors)
def lazy_load(self) -> Iterator[Document]:
"""Load the specified URLs using Playwright and create Document instances.
Returns:
An iterator over Document instances with loaded content.
"""
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
browser = p.chromium.launch(headless=self.headless, proxy=self.proxy)
for url in self.urls:
try:
page = browser.new_page()
response = page.goto(url)
if response is None:
raise ValueError(f"page.goto() returned None for url {url}")
text = self.evaluator.evaluate(page, browser, response)
metadata = {"source": url}
yield Document(page_content=text, metadata=metadata)
except Exception as e:
if self.continue_on_failure:
logger.error(
f"Error fetching or processing {url}, exception: {e}"
)
else:
raise e
browser.close()
async def aload(self) -> List[Document]:
"""Load the specified URLs with Playwright and create Documents asynchronously.
Use this function when in a jupyter notebook environment.
Returns:
A list of Document instances with loaded content.
"""
return [doc async for doc in self.alazy_load()]
async def alazy_load(self) -> AsyncIterator[Document]:
"""Load the specified URLs with Playwright and create Documents asynchronously.
Use this function when in a jupyter notebook environment.
Returns:
An async iterator over Document instances with loaded content.
"""
from playwright.async_api import async_playwright
async with async_playwright() as p:
browser = await p.chromium.launch(headless=self.headless, proxy=self.proxy)
for url in self.urls:
try:
page = await browser.new_page()
response = await page.goto(url)
if response is None:
raise ValueError(f"page.goto() returned None for url {url}")
text = await self.evaluator.evaluate_async(page, browser, response)
metadata = {"source": url}
yield Document(page_content=text, metadata=metadata)
except Exception as e:
if self.continue_on_failure:
logger.error(
f"Error fetching or processing {url}, exception: {e}"
)
else:
raise e
await browser.close()
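# Illustrative usage sketch: the URL and CSS selectors below are placeholder
# values, and `load()` is the standard BaseLoader entry point that consumes
# `lazy_load()`.
#
#   loader = PlaywrightURLLoader(
#       urls=["https://example.com"],
#       remove_selectors=["header", "footer"],
#   )
#   docs = loader.load()
#   print(docs[0].metadata["source"])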
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/wikipedia.py | from typing import Iterator, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.wikipedia import WikipediaAPIWrapper
class WikipediaLoader(BaseLoader):
"""Load from `Wikipedia`.
The hard limit on the length of the query is 300 for now.
Each wiki page represents one Document.
"""
def __init__(
self,
query: str,
lang: str = "en",
load_max_docs: Optional[int] = 25,
load_all_available_meta: Optional[bool] = False,
doc_content_chars_max: Optional[int] = 4000,
):
"""
Initializes a new instance of the WikipediaLoader class.
Args:
query (str): The query string to search on Wikipedia.
lang (str, optional): The language code for the Wikipedia language edition.
Defaults to "en".
load_max_docs (int, optional): The maximum number of documents to load.
Defaults to 25.
load_all_available_meta (bool, optional): Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional): The maximum number of characters
for the document content. Defaults to 4000.
"""
self.query = query
self.lang = lang
self.load_max_docs = load_max_docs
self.load_all_available_meta = load_all_available_meta
self.doc_content_chars_max = doc_content_chars_max
def lazy_load(self) -> Iterator[Document]:
"""
Lazily loads the query results from Wikipedia.
Returns:
An iterator over Document objects representing the loaded
Wikipedia pages.
"""
client = WikipediaAPIWrapper( # type: ignore[call-arg]
lang=self.lang,
top_k_results=self.load_max_docs, # type: ignore[arg-type]
load_all_available_meta=self.load_all_available_meta, # type: ignore[arg-type]
doc_content_chars_max=self.doc_content_chars_max, # type: ignore[arg-type]
)
yield from client.load(self.query)
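# Illustrative usage sketch: the query string and limit below are placeholder
# values.
#
#   loader = WikipediaLoader(query="Alan Turing", load_max_docs=2)
#   for doc in loader.lazy_load():
#       print(doc.metadata)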
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/ifixit.py | from typing import List, Optional
import requests
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.web_base import WebBaseLoader
IFIXIT_BASE_URL = "https://www.ifixit.com/api/2.0"
class IFixitLoader(BaseLoader):
"""Load `iFixit` repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A's
and wikis from devices on iFixit using their open APIs and web scraping.
"""
def __init__(self, web_path: str):
"""Initialize with a web path."""
if not web_path.startswith("https://www.ifixit.com"):
raise ValueError("web path must start with 'https://www.ifixit.com'")
path = web_path.replace("https://www.ifixit.com", "")
allowed_paths = ["/Device", "/Guide", "/Answers", "/Teardown"]
""" TODO: Add /Wiki """
if not any(path.startswith(allowed_path) for allowed_path in allowed_paths):
raise ValueError(
"web path must start with /Device, /Guide, /Teardown or /Answers"
)
pieces = [x for x in path.split("/") if x]
"""Teardowns are just guides by a different name"""
self.page_type = pieces[0] if pieces[0] != "Teardown" else "Guide"
if self.page_type == "Guide" or self.page_type == "Answers":
self.id = pieces[2]
else:
self.id = pieces[1]
self.web_path = web_path
def load(self) -> List[Document]:
if self.page_type == "Device":
return self.load_device()
elif self.page_type == "Guide" or self.page_type == "Teardown":
return self.load_guide()
elif self.page_type == "Answers":
return self.load_questions_and_answers()
else:
raise ValueError("Unknown page type: " + self.page_type)
@staticmethod
def load_suggestions(query: str = "", doc_type: str = "all") -> List[Document]:
"""Load suggestions.
Args:
query: A query string
doc_type: The type of document to search for. Can be one of "all",
"device", "guide", "teardown", "answer", "wiki".
Returns:
A list of Documents built from the matching suggestions.
"""
res = requests.get(
IFIXIT_BASE_URL + "/suggest/" + query + "?doctypes=" + doc_type
)
if res.status_code != 200:
raise ValueError(
'Could not load suggestions for "' + query + '"\n' + res.json()
)
data = res.json()
results = data["results"]
output = []
for result in results:
try:
loader = IFixitLoader(result["url"])
if loader.page_type == "Device":
output += loader.load_device(include_guides=False)
else:
output += loader.load()
except ValueError:
continue
return output
def load_questions_and_answers(
self, url_override: Optional[str] = None
) -> List[Document]:
"""Load a list of questions and answers.
Args:
url_override: A URL to override the default URL.
Returns: List[Document]
"""
loader = WebBaseLoader(self.web_path if url_override is None else url_override)
soup = loader.scrape()
output = []
title = soup.find("h1", "post-title").text
output.append("# " + title)
output.append(soup.select_one(".post-content .post-text").text.strip())
answersHeader = soup.find("div", "post-answers-header")
if answersHeader:
output.append("\n## " + answersHeader.text.strip())
for answer in soup.select(".js-answers-list .post.post-answer"):
if answer.has_attr("itemprop") and "acceptedAnswer" in answer["itemprop"]:
output.append("\n### Accepted Answer")
elif "post-helpful" in answer["class"]:
output.append("\n### Most Helpful Answer")
else:
output.append("\n### Other Answer")
output += [
a.text.strip() for a in answer.select(".post-content .post-text")
]
output.append("\n")
text = "\n".join(output).strip()
metadata = {"source": self.web_path, "title": title}
return [Document(page_content=text, metadata=metadata)]
def load_device(
self, url_override: Optional[str] = None, include_guides: bool = True
) -> List[Document]:
"""Loads a device
Args:
url_override: A URL to override the default URL.
include_guides: Whether to include guides linked to from the device.
Defaults to True.
Returns:
A list of Documents for the device and, if requested, its linked guides.
"""
documents = []
if url_override is None:
url = IFIXIT_BASE_URL + "/wikis/CATEGORY/" + self.id
else:
url = url_override
res = requests.get(url)
data = res.json()
text = "\n".join(
[
data[key]
for key in ["title", "description", "contents_raw"]
if key in data
]
).strip()
metadata = {"source": self.web_path, "title": data["title"]}
documents.append(Document(page_content=text, metadata=metadata))
if include_guides:
"""Load and return documents for each guide linked to from the device"""
guide_urls = [guide["url"] for guide in data["guides"]]
for guide_url in guide_urls:
documents.append(IFixitLoader(guide_url).load()[0])
return documents
def load_guide(self, url_override: Optional[str] = None) -> List[Document]:
"""Load a guide
Args:
url_override: A URL to override the default URL.
Returns: List[Document]
"""
if url_override is None:
url = IFIXIT_BASE_URL + "/guides/" + self.id
else:
url = url_override
res = requests.get(url)
if res.status_code != 200:
raise ValueError(
"Could not load guide: " + self.web_path + "\n" + res.json()
)
data = res.json()
doc_parts = ["# " + data["title"], data["introduction_raw"]]
doc_parts.append("\n\n###Tools Required:")
if len(data["tools"]) == 0:
doc_parts.append("\n - None")
else:
for tool in data["tools"]:
doc_parts.append("\n - " + tool["text"])
doc_parts.append("\n\n###Parts Required:")
if len(data["parts"]) == 0:
doc_parts.append("\n - None")
else:
for part in data["parts"]:
doc_parts.append("\n - " + part["text"])
for row in data["steps"]:
doc_parts.append(
"\n\n## "
+ (
row["title"]
if row["title"] != ""
else "Step {}".format(row["orderby"])
)
)
for line in row["lines"]:
doc_parts.append(line["text_raw"])
doc_parts.append(data["conclusion_raw"])
text = "\n".join(doc_parts)
metadata = {"source": self.web_path, "title": data["title"]}
return [Document(page_content=text, metadata=metadata)]
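# Illustrative usage sketch: the guide URL and search query below are
# placeholder values.
#
#   loader = IFixitLoader("https://www.ifixit.com/Guide/Example-Guide/1234")
#   docs = loader.load()
#
#   # Keyword search across guides, devices, teardowns and answers:
#   suggestions = IFixitLoader.load_suggestions("battery", doc_type="guide")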
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/s3_file.py | from __future__ import annotations
import os
import tempfile
from typing import TYPE_CHECKING, Any, Callable, List, Optional, Union
from langchain_community.document_loaders.unstructured import UnstructuredBaseLoader
if TYPE_CHECKING:
import botocore
class S3FileLoader(UnstructuredBaseLoader):
"""Load from `Amazon AWS S3` file."""
def __init__(
self,
bucket: str,
key: str,
*,
region_name: Optional[str] = None,
api_version: Optional[str] = None,
use_ssl: Optional[bool] = True,
verify: Union[str, bool, None] = None,
endpoint_url: Optional[str] = None,
aws_access_key_id: Optional[str] = None,
aws_secret_access_key: Optional[str] = None,
aws_session_token: Optional[str] = None,
boto_config: Optional[botocore.client.Config] = None,
mode: str = "single",
post_processors: Optional[List[Callable]] = None,
**unstructured_kwargs: Any,
):
"""Initialize with bucket and key name.
:param bucket: The name of the S3 bucket.
:param key: The key of the S3 object.
:param region_name: The name of the region associated with the client.
A client is associated with a single region.
:param api_version: The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
:param use_ssl: Whether or not to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
:param verify: Whether or not to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
* False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
* path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
:param endpoint_url: The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the "http/https" scheme) to
override this behavior. If this value is provided, then
``use_ssl`` is ignored.
:param aws_access_key_id: The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
:param aws_secret_access_key: The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
:param aws_session_token: The session token to use when creating
the client. Same semantics as aws_access_key_id above.
:type boto_config: botocore.client.Config
:param boto_config: Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling ``merge()`` on the
default config with the config provided to this call.
:param mode: Mode in which to read the file. Valid options are: single,
paged and elements.
:param post_processors: Post processing functions to be applied to
extracted elements.
:param **unstructured_kwargs: Arbitrary additional kwargs to pass in when
calling `partition`
"""
super().__init__(mode, post_processors, **unstructured_kwargs)
self.bucket = bucket
self.key = key
self.region_name = region_name
self.api_version = api_version
self.use_ssl = use_ssl
self.verify = verify
self.endpoint_url = endpoint_url
self.aws_access_key_id = aws_access_key_id
self.aws_secret_access_key = aws_secret_access_key
self.aws_session_token = aws_session_token
self.boto_config = boto_config
def _get_elements(self) -> List:
"""Get elements."""
from unstructured.partition.auto import partition
try:
import boto3
except ImportError:
raise ImportError(
"Could not import `boto3` python package. "
"Please install it with `pip install boto3`."
)
s3 = boto3.client(
"s3",
region_name=self.region_name,
api_version=self.api_version,
use_ssl=self.use_ssl,
verify=self.verify,
endpoint_url=self.endpoint_url,
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
config=self.boto_config,
)
with tempfile.TemporaryDirectory() as temp_dir:
file_path = f"{temp_dir}/{self.key}"
os.makedirs(os.path.dirname(file_path), exist_ok=True)
s3.download_file(self.bucket, self.key, file_path)
return partition(filename=file_path, **self.unstructured_kwargs)
def _get_metadata(self) -> dict:
return {"source": f"s3://{self.bucket}/{self.key}"}
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/googledrive.py | # Prerequisites:
# 1. Create a Google Cloud project
# 2. Enable the Google Drive API:
# https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com
# 3. Authorize credentials for desktop app:
# https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application # noqa: E501
# 4. For service accounts visit
# https://cloud.google.com/iam/docs/service-accounts-create
from pathlib import Path
from typing import Any, Dict, List, Optional, Sequence, Union
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import Document
from pydantic import BaseModel, model_validator, validator
from langchain_community.document_loaders.base import BaseLoader
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
@deprecated(
since="0.0.32",
removal="1.0",
alternative_import="langchain_google_community.GoogleDriveLoader",
)
class GoogleDriveLoader(BaseLoader, BaseModel):
"""Load Google Docs from `Google Drive`."""
service_account_key: Path = Path.home() / ".credentials" / "keys.json"
"""Path to the service account key file."""
credentials_path: Path = Path.home() / ".credentials" / "credentials.json"
"""Path to the credentials file."""
token_path: Path = Path.home() / ".credentials" / "token.json"
"""Path to the token file."""
folder_id: Optional[str] = None
"""The folder id to load from."""
document_ids: Optional[List[str]] = None
"""The document ids to load from."""
file_ids: Optional[List[str]] = None
"""The file ids to load from."""
recursive: bool = False
"""Whether to load recursively. Only applies when folder_id is given."""
file_types: Optional[Sequence[str]] = None
"""The file types to load. Only applies when folder_id is given."""
load_trashed_files: bool = False
"""Whether to load trashed files. Only applies when folder_id is given."""
# NOTE(MthwRobinson) - changing the file_loader_cls to type here currently
# results in pydantic validation errors
file_loader_cls: Any = None
"""The file loader class to use."""
file_loader_kwargs: Dict["str", Any] = {}
"""The file loader kwargs to use."""
@model_validator(mode="before")
@classmethod
def validate_inputs(cls, values: Dict[str, Any]) -> Any:
"""Validate that either folder_id or document_ids is set, but not both."""
if values.get("folder_id") and (
values.get("document_ids") or values.get("file_ids")
):
raise ValueError(
"Cannot specify both folder_id and document_ids nor "
"folder_id and file_ids"
)
if (
not values.get("folder_id")
and not values.get("document_ids")
and not values.get("file_ids")
):
raise ValueError("Must specify either folder_id, document_ids, or file_ids")
file_types = values.get("file_types")
if file_types:
if values.get("document_ids") or values.get("file_ids"):
raise ValueError(
"file_types can only be given when folder_id is given,"
" (not when document_ids or file_ids are given)."
)
type_mapping = {
"document": "application/vnd.google-apps.document",
"sheet": "application/vnd.google-apps.spreadsheet",
"pdf": "application/pdf",
}
allowed_types = list(type_mapping.keys()) + list(type_mapping.values())
short_names = ", ".join([f"'{x}'" for x in type_mapping.keys()])
full_names = ", ".join([f"'{x}'" for x in type_mapping.values()])
for file_type in file_types:
if file_type not in allowed_types:
raise ValueError(
f"Given file type {file_type} is not supported. "
f"Supported values are: {short_names}; and "
f"their full-form names: {full_names}"
)
# replace short-form file types by full-form file types
def full_form(x: str) -> str:
return type_mapping[x] if x in type_mapping else x
values["file_types"] = [full_form(file_type) for file_type in file_types]
return values
@validator("credentials_path")
def validate_credentials_path(cls, v: Any, **kwargs: Any) -> Any:
"""Validate that credentials_path exists."""
if not v.exists():
raise ValueError(f"credentials_path {v} does not exist")
return v
def _load_credentials(self) -> Any:
"""Load credentials.
The order of loading credentials:
1. Service account key if file exists
2. Token path (for OAuth Client) if file exists
3. Credentials path (for OAuth Client) if file exists
4. Default credentials. If no credentials are found, a DefaultCredentialsError is raised.
"""
# Adapted from https://developers.google.com/drive/api/v3/quickstart/python
try:
from google.auth import default
from google.auth.transport.requests import Request
from google.oauth2 import service_account
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
except ImportError:
raise ImportError(
"You must run "
"`pip install --upgrade "
"google-api-python-client google-auth-httplib2 "
"google-auth-oauthlib` "
"to use the Google Drive loader."
)
creds = None
# From service account
if self.service_account_key.exists():
return service_account.Credentials.from_service_account_file(
str(self.service_account_key), scopes=SCOPES
)
# From Oauth Client
if self.token_path.exists():
creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
elif self.credentials_path.exists():
flow = InstalledAppFlow.from_client_secrets_file(
str(self.credentials_path), SCOPES
)
creds = flow.run_local_server(port=0)
if creds:
with open(self.token_path, "w") as token:
token.write(creds.to_json())
# From Application Default Credentials
if not creds:
creds, _ = default(scopes=SCOPES)
return creds
def _load_sheet_from_id(self, id: str) -> List[Document]:
"""Load a sheet and all tabs from an ID."""
from googleapiclient.discovery import build
creds = self._load_credentials()
sheets_service = build("sheets", "v4", credentials=creds)
spreadsheet = sheets_service.spreadsheets().get(spreadsheetId=id).execute()
sheets = spreadsheet.get("sheets", [])
documents = []
for sheet in sheets:
sheet_name = sheet["properties"]["title"]
result = (
sheets_service.spreadsheets()
.values()
.get(spreadsheetId=id, range=sheet_name)
.execute()
)
values = result.get("values", [])
if not values:
continue # empty sheet
header = values[0]
for i, row in enumerate(values[1:], start=1):
metadata = {
"source": (
f"https://docs.google.com/spreadsheets/d/{id}/"
f"edit?gid={sheet['properties']['sheetId']}"
),
"title": f"{spreadsheet['properties']['title']} - {sheet_name}",
"row": i,
}
content = []
for j, v in enumerate(row):
title = header[j].strip() if len(header) > j else ""
content.append(f"{title}: {v.strip()}")
page_content = "\n".join(content)
documents.append(Document(page_content=page_content, metadata=metadata))
return documents
def _load_document_from_id(self, id: str) -> Document:
"""Load a document from an ID."""
from io import BytesIO
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from googleapiclient.http import MediaIoBaseDownload
creds = self._load_credentials()
service = build("drive", "v3", credentials=creds)
file = (
service.files()
.get(fileId=id, supportsAllDrives=True, fields="modifiedTime,name")
.execute()
)
request = service.files().export_media(fileId=id, mimeType="text/plain")
fh = BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
try:
while done is False:
status, done = downloader.next_chunk()
except HttpError as e:
if e.resp.status == 404:
print("File not found: {}".format(id)) # noqa: T201
else:
print("An error occurred: {}".format(e)) # noqa: T201
text = fh.getvalue().decode("utf-8")
metadata = {
"source": f"https://docs.google.com/document/d/{id}/edit",
"title": f"{file.get('name')}",
"when": f"{file.get('modifiedTime')}",
}
return Document(page_content=text, metadata=metadata)
def _load_documents_from_folder(
self, folder_id: str, *, file_types: Optional[Sequence[str]] = None
) -> List[Document]:
"""Load documents from a folder."""
from googleapiclient.discovery import build
creds = self._load_credentials()
service = build("drive", "v3", credentials=creds)
files = self._fetch_files_recursive(service, folder_id)
# If file types filter is provided, we'll filter by the file type.
if file_types:
_files = [f for f in files if f["mimeType"] in file_types] # type: ignore
else:
_files = files
returns = []
for file in _files:
if file["trashed"] and not self.load_trashed_files:
continue
elif file["mimeType"] == "application/vnd.google-apps.document":
returns.append(self._load_document_from_id(file["id"])) # type: ignore
elif file["mimeType"] == "application/vnd.google-apps.spreadsheet":
returns.extend(self._load_sheet_from_id(file["id"])) # type: ignore
elif (
file["mimeType"] == "application/pdf"
or self.file_loader_cls is not None
):
returns.extend(self._load_file_from_id(file["id"])) # type: ignore
else:
pass
return returns
def _fetch_files_recursive(
self, service: Any, folder_id: str
) -> List[Dict[str, Union[str, List[str]]]]:
"""Fetch all files and subfolders recursively."""
results = (
service.files()
.list(
q=f"'{folder_id}' in parents",
pageSize=1000,
includeItemsFromAllDrives=True,
supportsAllDrives=True,
fields="nextPageToken, files(id, name, mimeType, parents, trashed)",
)
.execute()
)
files = results.get("files", [])
returns = []
for file in files:
if file["mimeType"] == "application/vnd.google-apps.folder":
if self.recursive:
returns.extend(self._fetch_files_recursive(service, file["id"]))
else:
returns.append(file)
return returns
def _load_documents_from_ids(self) -> List[Document]:
"""Load documents from a list of IDs."""
if not self.document_ids:
raise ValueError("document_ids must be set")
return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]
def _load_file_from_id(self, id: str) -> List[Document]:
"""Load a file from an ID."""
from io import BytesIO
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload
creds = self._load_credentials()
service = build("drive", "v3", credentials=creds)
file = service.files().get(fileId=id, supportsAllDrives=True).execute()
request = service.files().get_media(fileId=id)
fh = BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
status, done = downloader.next_chunk()
if self.file_loader_cls is not None:
fh.seek(0)
loader = self.file_loader_cls(file=fh, **self.file_loader_kwargs)
docs = loader.load()
for doc in docs:
doc.metadata["source"] = f"https://drive.google.com/file/d/{id}/view"
if "title" not in doc.metadata:
doc.metadata["title"] = f"{file.get('name')}"
return docs
else:
from PyPDF2 import PdfReader
content = fh.getvalue()
pdf_reader = PdfReader(BytesIO(content))
return [
Document(
page_content=page.extract_text(),
metadata={
"source": f"https://drive.google.com/file/d/{id}/view",
"title": f"{file.get('name')}",
"page": i,
},
)
for i, page in enumerate(pdf_reader.pages)
]
def _load_file_from_ids(self) -> List[Document]:
"""Load files from a list of IDs."""
if not self.file_ids:
raise ValueError("file_ids must be set")
docs = []
for file_id in self.file_ids:
docs.extend(self._load_file_from_id(file_id))
return docs
def load(self) -> List[Document]:
"""Load documents."""
if self.folder_id:
return self._load_documents_from_folder(
self.folder_id, file_types=self.file_types
)
elif self.document_ids:
return self._load_documents_from_ids()
else:
return self._load_file_from_ids()
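# Illustrative usage sketch: the folder id below is a placeholder; credentials
# are read from the default paths documented on the class attributes above.
#
#   loader = GoogleDriveLoader(folder_id="your-folder-id", recursive=False)
#   docs = loader.load()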
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/lakefs.py | import os
import tempfile
import urllib.parse
from typing import Any, List, Optional
from urllib.parse import urljoin
import requests
from langchain_core.documents import Document
from requests.auth import HTTPBasicAuth
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.unstructured import UnstructuredBaseLoader
class LakeFSClient:
"""Client for lakeFS."""
def __init__(
self,
lakefs_access_key: str,
lakefs_secret_key: str,
lakefs_endpoint: str,
):
self.__endpoint = "/".join([lakefs_endpoint, "api", "v1/"])
self.__auth = HTTPBasicAuth(lakefs_access_key, lakefs_secret_key)
try:
health_check = requests.get(
urljoin(self.__endpoint, "healthcheck"), auth=self.__auth
)
health_check.raise_for_status()
except Exception:
raise ValueError(
"lakeFS server isn't accessible. Make sure lakeFS is running."
)
def ls_objects(
self, repo: str, ref: str, path: str, presign: Optional[bool]
) -> List:
qp = {"prefix": path, "presign": presign}
eqp = urllib.parse.urlencode(qp)
objects_ls_endpoint = urljoin(
self.__endpoint, f"repositories/{repo}/refs/{ref}/objects/ls?{eqp}"
)
olsr = requests.get(objects_ls_endpoint, auth=self.__auth)
olsr.raise_for_status()
olsr_json = olsr.json()
return list(
map(
lambda res: (res["path"], res["physical_address"]), olsr_json["results"]
)
)
def is_presign_supported(self) -> bool:
config_endpoint = self.__endpoint + "config"
response = requests.get(config_endpoint, auth=self.__auth)
response.raise_for_status()
config = response.json()
return config["storage_config"]["pre_sign_support"]
class LakeFSLoader(BaseLoader):
"""Load from `lakeFS`."""
repo: str
ref: str
path: str
def __init__(
self,
lakefs_access_key: str,
lakefs_secret_key: str,
lakefs_endpoint: str,
repo: Optional[str] = None,
ref: Optional[str] = "main",
path: Optional[str] = "",
):
"""
:param lakefs_access_key: [required] lakeFS server's access key
:param lakefs_secret_key: [required] lakeFS server's secret key
:param lakefs_endpoint: [required] lakeFS server's endpoint address,
ex: https://example.my-lakefs.com
:param repo: [optional, default = ''] target repository
:param ref: [optional, default = 'main'] target ref (branch name,
tag, or commit ID)
:param path: [optional, default = ''] target path
"""
self.__lakefs_client = LakeFSClient(
lakefs_access_key, lakefs_secret_key, lakefs_endpoint
)
self.repo = "" if repo is None or repo == "" else str(repo)
self.ref = "main" if ref is None or ref == "" else str(ref)
self.path = "" if path is None else str(path)
def set_path(self, path: str) -> None:
self.path = path
def set_ref(self, ref: str) -> None:
self.ref = ref
def set_repo(self, repo: str) -> None:
self.repo = repo
def load(self) -> List[Document]:
self.__validate_instance()
presigned = self.__lakefs_client.is_presign_supported()
docs: List[Document] = []
objs = self.__lakefs_client.ls_objects(
repo=self.repo, ref=self.ref, path=self.path, presign=presigned
)
for obj in objs:
lakefs_unstructured_loader = UnstructuredLakeFSLoader(
obj[1], self.repo, self.ref, obj[0], presigned
)
docs.extend(lakefs_unstructured_loader.load())
return docs
def __validate_instance(self) -> None:
if self.repo is None or self.repo == "":
raise ValueError(
"no repository was provided. use `set_repo` to specify a repository"
)
if self.ref is None or self.ref == "":
raise ValueError("no ref was provided. use `set_ref` to specify a ref")
if self.path is None:
raise ValueError("no path was provided. use `set_path` to specify a path")
class UnstructuredLakeFSLoader(UnstructuredBaseLoader):
"""Load from `lakeFS` as unstructured data."""
def __init__(
self,
url: str,
repo: str,
ref: str = "main",
path: str = "",
presign: bool = True,
**unstructured_kwargs: Any,
):
"""Initialize UnstructuredLakeFSLoader.
Args:
:param lakefs_access_key:
:param lakefs_secret_key:
:param lakefs_endpoint:
:param repo:
:param ref:
"""
super().__init__(**unstructured_kwargs)
self.url = url
self.repo = repo
self.ref = ref
self.path = path
self.presign = presign
def _get_metadata(self) -> dict:
return {"repo": self.repo, "ref": self.ref, "path": self.path}
def _get_elements(self) -> List:
from unstructured.partition.auto import partition
local_prefix = "local://"
if self.presign:
with tempfile.TemporaryDirectory() as temp_dir:
file_path = f"{temp_dir}/{self.path.split('/')[-1]}"
os.makedirs(os.path.dirname(file_path), exist_ok=True)
response = requests.get(self.url)
response.raise_for_status()
with open(file_path, mode="wb") as file:
file.write(response.content)
return partition(filename=file_path)
elif not self.url.startswith(local_prefix):
raise ValueError(
"Non pre-signed URLs are supported only with 'local' blockstore"
)
else:
local_path = self.url[len(local_prefix) :]
return partition(filename=local_path)
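# Illustrative usage sketch: the endpoint, credentials, repository, ref, and
# path below are placeholder values.
#
#   loader = LakeFSLoader(
#       lakefs_access_key="AKIA...",
#       lakefs_secret_key="...",
#       lakefs_endpoint="https://example.my-lakefs.com",
#       repo="my-repo",
#       ref="main",
#       path="docs/",
#   )
#   docs = loader.load()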
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/azure_blob_storage_container.py | from typing import List
from langchain_core.documents import Document
from langchain_community.document_loaders.azure_blob_storage_file import (
AzureBlobStorageFileLoader,
)
from langchain_community.document_loaders.base import BaseLoader
class AzureBlobStorageContainerLoader(BaseLoader):
"""Load from `Azure Blob Storage` container."""
def __init__(self, conn_str: str, container: str, prefix: str = ""):
"""Initialize with connection string, container and blob prefix."""
self.conn_str = conn_str
"""Connection string for Azure Blob Storage."""
self.container = container
"""Container name."""
self.prefix = prefix
"""Prefix for blob names."""
def load(self) -> List[Document]:
"""Load documents."""
try:
from azure.storage.blob import ContainerClient
except ImportError as exc:
raise ImportError(
"Could not import azure storage blob python package. "
"Please install it with `pip install azure-storage-blob`."
) from exc
container = ContainerClient.from_connection_string(
conn_str=self.conn_str, container_name=self.container
)
docs = []
blob_list = container.list_blobs(name_starts_with=self.prefix)
for blob in blob_list:
loader = AzureBlobStorageFileLoader(
self.conn_str,
self.container,
blob.name,
)
docs.extend(loader.load())
return docs
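# Illustrative usage sketch: the connection string, container name, and prefix
# below are placeholder values.
#
#   loader = AzureBlobStorageContainerLoader(
#       conn_str="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...",
#       container="my-container",
#       prefix="reports/",
#   )
#   docs = loader.load()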
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/directory.py | import concurrent
import logging
import random
from pathlib import Path
from typing import Any, Callable, Iterator, List, Optional, Sequence, Tuple, Type, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders.html_bs import BSHTMLLoader
from langchain_community.document_loaders.text import TextLoader
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
FILE_LOADER_TYPE = Union[
Type[UnstructuredFileLoader], Type[TextLoader], Type[BSHTMLLoader], Type[CSVLoader]
]
logger = logging.getLogger(__name__)
def _is_visible(p: Path) -> bool:
parts = p.parts
for _p in parts:
if _p.startswith("."):
return False
return True
class DirectoryLoader(BaseLoader):
"""Load from a directory."""
def __init__(
self,
path: str,
glob: Union[List[str], Tuple[str], str] = "**/[!.]*",
silent_errors: bool = False,
load_hidden: bool = False,
loader_cls: FILE_LOADER_TYPE = UnstructuredFileLoader,
loader_kwargs: Union[dict, None] = None,
recursive: bool = False,
show_progress: bool = False,
use_multithreading: bool = False,
max_concurrency: int = 4,
*,
exclude: Union[Sequence[str], str] = (),
sample_size: int = 0,
randomize_sample: bool = False,
sample_seed: Union[int, None] = None,
):
"""Initialize with a path to directory and how to glob over it.
Args:
path: Path to directory.
glob: A glob pattern or list of glob patterns to use to find files.
Defaults to "**/[!.]*" (all files except hidden).
exclude: A pattern or list of patterns to exclude from results.
Use glob syntax.
silent_errors: Whether to silently ignore errors. Defaults to False.
load_hidden: Whether to load hidden files. Defaults to False.
loader_cls: Loader class to use for loading files.
Defaults to UnstructuredFileLoader.
loader_kwargs: Keyword arguments to pass to loader_cls. Defaults to None.
recursive: Whether to recursively search for files. Defaults to False.
show_progress: Whether to show a progress bar. Defaults to False.
use_multithreading: Whether to use multithreading. Defaults to False.
max_concurrency: The maximum number of threads to use. Defaults to 4.
sample_size: The maximum number of files you would like to load from the
directory.
randomize_sample: Shuffle the files to get a random sample.
sample_seed: set the seed of the random shuffle for reproducibility.
Examples:
.. code-block:: python
from langchain_community.document_loaders import DirectoryLoader
# Load all non-hidden files in a directory.
loader = DirectoryLoader("/path/to/directory")
# Load all text files in a directory without recursion.
loader = DirectoryLoader("/path/to/directory", glob="*.txt")
# Recursively load all text files in a directory.
loader = DirectoryLoader(
"/path/to/directory", glob="*.txt", recursive=True
)
# Load all files in a directory, except for py files.
loader = DirectoryLoader("/path/to/directory", exclude="*.py")
# Load all files in a directory, except for py or pyc files.
loader = DirectoryLoader(
"/path/to/directory", exclude=["*.py", "*.pyc"]
)
"""
if loader_kwargs is None:
loader_kwargs = {}
if isinstance(exclude, str):
exclude = (exclude,)
self.path = path
self.glob = glob
self.exclude = exclude
self.load_hidden = load_hidden
self.loader_cls = loader_cls
self.loader_kwargs = loader_kwargs
self.silent_errors = silent_errors
self.recursive = recursive
self.show_progress = show_progress
self.use_multithreading = use_multithreading
self.max_concurrency = max_concurrency
self.sample_size = sample_size
self.randomize_sample = randomize_sample
self.sample_seed = sample_seed
def load(self) -> List[Document]:
"""Load documents."""
return list(self.lazy_load())
def lazy_load(self) -> Iterator[Document]:
"""Load documents lazily."""
p = Path(self.path)
if not p.exists():
raise FileNotFoundError(f"Directory not found: '{self.path}'")
if not p.is_dir():
raise ValueError(f"Expected directory, got file: '{self.path}'")
# glob multiple patterns if a list is provided, e.g., multiple file extensions
if isinstance(self.glob, (list, tuple)):
paths = []
for pattern in self.glob:
paths.extend(
list(p.rglob(pattern) if self.recursive else p.glob(pattern))
)
elif isinstance(self.glob, str):
paths = list(p.rglob(self.glob) if self.recursive else p.glob(self.glob))
else:
raise TypeError(
f"Expected glob to be str or sequence of str, but got {type(self.glob)}"
)
items = [
path
for path in paths
if not (self.exclude and any(path.match(glob) for glob in self.exclude))
and path.is_file()
]
if self.sample_size > 0:
if self.randomize_sample:
randomizer = random.Random(
self.sample_seed if self.sample_seed else None
)
randomizer.shuffle(items)
items = items[: min(len(items), self.sample_size)]
pbar = None
if self.show_progress:
try:
from tqdm import tqdm
pbar = tqdm(total=len(items))
except ImportError as e:
logger.warning(
"To log the progress of DirectoryLoader you need to install tqdm, "
"`pip install tqdm`"
)
if self.silent_errors:
logger.warning(e)
else:
raise ImportError(
"To log the progress of DirectoryLoader "
"you need to install tqdm, "
"`pip install tqdm`"
)
if self.use_multithreading:
futures = []
with concurrent.futures.ThreadPoolExecutor(
max_workers=self.max_concurrency
) as executor:
for i in items:
futures.append(
executor.submit(
self._lazy_load_file_to_non_generator(self._lazy_load_file),
i,
p,
pbar,
)
)
for future in concurrent.futures.as_completed(futures):
for item in future.result():
yield item
else:
for i in items:
yield from self._lazy_load_file(i, p, pbar)
if pbar:
pbar.close()
def _lazy_load_file_to_non_generator(self, func: Callable) -> Callable:
def non_generator(item: Path, path: Path, pbar: Optional[Any]) -> List:
return [x for x in func(item, path, pbar)]
return non_generator
def _lazy_load_file(
self, item: Path, path: Path, pbar: Optional[Any]
) -> Iterator[Document]:
"""Load a file.
Args:
item: File path.
path: Directory path.
pbar: Progress bar. Defaults to None.
"""
if item.is_file():
if _is_visible(item.relative_to(path)) or self.load_hidden:
try:
logger.debug(f"Processing file: {str(item)}")
loader = self.loader_cls(str(item), **self.loader_kwargs)
try:
for subdoc in loader.lazy_load():
yield subdoc
except NotImplementedError:
for subdoc in loader.load():
yield subdoc
except Exception as e:
if self.silent_errors:
logger.warning(f"Error loading file {str(item)}: {e}")
else:
logger.error(f"Error loading file {str(item)}")
raise e
finally:
if pbar:
pbar.update(1)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/baiducloud_bos_file.py | import logging
import os
import tempfile
from typing import Any, Iterator
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
logger = logging.getLogger(__name__)
class BaiduBOSFileLoader(BaseLoader):
"""Load from `Baidu Cloud BOS` file."""
def __init__(self, conf: Any, bucket: str, key: str):
"""Initialize with BOS config, bucket and key name.
:param conf(BceClientConfiguration): BOS config.
:param bucket(str): BOS bucket.
:param key(str): BOS file key.
"""
self.conf = conf
self.bucket = bucket
self.key = key
def lazy_load(self) -> Iterator[Document]:
"""Load documents."""
try:
from baidubce.services.bos.bos_client import BosClient
except ImportError:
raise ImportError(
"Please using `pip install bce-python-sdk`"
+ " before import bos related package."
)
# Initialize BOS Client
client = BosClient(self.conf)
with tempfile.TemporaryDirectory() as temp_dir:
file_path = f"{temp_dir}/{self.bucket}/{self.key}"
os.makedirs(os.path.dirname(file_path), exist_ok=True)
# Download the file to a destination
logger.debug(f"get object key {self.key} to file {file_path}")
client.get_object_to_file(self.bucket, self.key, file_path)
try:
loader = UnstructuredFileLoader(file_path)
documents = loader.load()
return iter(documents)
except Exception as ex:
logger.error(f"load document error = {ex}")
return iter([Document(page_content="")])
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/cassandra.py | from __future__ import annotations
from typing import (
TYPE_CHECKING,
Any,
AsyncIterator,
Callable,
Iterator,
Optional,
Sequence,
Union,
)
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.cassandra import aexecute_cql
_NOT_SET = object()
if TYPE_CHECKING:
from cassandra.cluster import Session
from cassandra.pool import Host
from cassandra.query import Statement
class CassandraLoader(BaseLoader):
def __init__(
self,
table: Optional[str] = None,
session: Optional[Session] = None,
keyspace: Optional[str] = None,
query: Union[str, Statement, None] = None,
page_content_mapper: Callable[[Any], str] = str,
metadata_mapper: Callable[[Any], dict] = lambda _: {},
*,
query_parameters: Union[dict, Sequence, None] = None,
query_timeout: Optional[float] = _NOT_SET, # type: ignore[assignment]
query_trace: bool = False,
query_custom_payload: Optional[dict] = None,
query_execution_profile: Any = _NOT_SET,
query_paging_state: Any = None,
query_host: Optional[Host] = None,
query_execute_as: Optional[str] = None,
) -> None:
"""
Document Loader for Apache Cassandra.
Args:
table: The table to load the data from.
(do not use together with the query parameter)
session: The cassandra driver session.
If not provided, the cassio resolved session will be used.
keyspace: The keyspace of the table.
If not provided, the cassio resolved keyspace will be used.
query: The query used to load the data.
(do not use together with the table parameter)
page_content_mapper: a function to convert a row to string page content.
Defaults to the str representation of the row.
metadata_mapper: a function to convert a row to document metadata.
query_parameters: The query parameters used when calling session.execute .
query_timeout: The query timeout used when calling session.execute .
query_trace: Whether to use tracing when calling session.execute .
query_custom_payload: The query custom_payload used when calling
session.execute .
query_execution_profile: The query execution_profile used when calling
session.execute .
query_host: The query host used when calling session.execute .
query_execute_as: The query execute_as used when calling session.execute .
"""
if query and table:
raise ValueError("Cannot specify both query and table.")
if not query and not table:
raise ValueError("Must specify query or table.")
if not session or (table and not keyspace):
try:
from cassio.config import check_resolve_keyspace, check_resolve_session
except (ImportError, ModuleNotFoundError):
raise ImportError(
"Could not import a recent cassio package."
"Please install it with `pip install --upgrade cassio`."
)
if table:
_keyspace = keyspace or check_resolve_keyspace(keyspace)
self.query = f"SELECT * FROM {_keyspace}.{table};"
self.metadata = {"table": table, "keyspace": _keyspace}
else:
self.query = query # type: ignore[assignment]
self.metadata = {}
self.session = session or check_resolve_session(session)
self.page_content_mapper = page_content_mapper
self.metadata_mapper = metadata_mapper
self.query_kwargs = {
"parameters": query_parameters,
"trace": query_trace,
"custom_payload": query_custom_payload,
"paging_state": query_paging_state,
"host": query_host,
"execute_as": query_execute_as,
}
if query_timeout is not _NOT_SET:
self.query_kwargs["timeout"] = query_timeout
if query_execution_profile is not _NOT_SET:
self.query_kwargs["execution_profile"] = query_execution_profile
def lazy_load(self) -> Iterator[Document]:
for row in self.session.execute(self.query, **self.query_kwargs):
metadata = self.metadata.copy()
metadata.update(self.metadata_mapper(row))
yield Document(
page_content=self.page_content_mapper(row), metadata=metadata
)
async def alazy_load(self) -> AsyncIterator[Document]:
for row in await aexecute_cql(self.session, self.query, **self.query_kwargs):
metadata = self.metadata.copy()
metadata.update(self.metadata_mapper(row))
yield Document(
page_content=self.page_content_mapper(row), metadata=metadata
)
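# Illustrative usage sketch: the keyspace and table names below are placeholder
# values, and cassio is assumed to have been initialised already so the session
# and keyspace can be resolved.
#
#   loader = CassandraLoader(table="movie_reviews", keyspace="demo")
#   for doc in loader.lazy_load():
#       print(doc.metadata["table"], doc.page_content[:80])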
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/obs_file.py | # coding:utf-8
import os
import tempfile
from typing import Any, List, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
class OBSFileLoader(BaseLoader):
"""Load from the `Huawei OBS file`."""
def __init__(
self,
bucket: str,
key: str,
client: Any = None,
endpoint: str = "",
config: Optional[dict] = None,
) -> None:
"""Initialize the OBSFileLoader with the specified settings.
Args:
bucket (str): The name of the OBS bucket to be used.
key (str): The name of the object in the OBS bucket.
client (ObsClient, optional): An instance of the ObsClient to connect to OBS.
endpoint (str, optional): The endpoint URL of your OBS bucket. This parameter is mandatory if `client` is not provided.
config (dict, optional): The parameters for connecting to OBS, provided as a dictionary. This parameter is ignored if `client` is provided. The dictionary could have the following keys:
- "ak" (str, optional): Your OBS access key (required if `get_token_from_ecs` is False and bucket policy is not public read).
- "sk" (str, optional): Your OBS secret key (required if `get_token_from_ecs` is False and bucket policy is not public read).
- "token" (str, optional): Your security token (required if using temporary credentials).
- "get_token_from_ecs" (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, `ak`, `sk`, and `token` will be ignored.
Raises:
ImportError: If the `esdk-obs-python` package is not installed.
TypeError: If the provided `client` is not an instance of ObsClient.
ValueError: If `client` is not provided, but `endpoint` is missing.
Note:
Before using this class, make sure you have registered with OBS and have the necessary credentials. The `ak`, `sk`, and `endpoint` values are mandatory unless `get_token_from_ecs` is True or the bucket policy is public read. `token` is required when using temporary credentials.
Example:
To create a new OBSFileLoader with a new client:
```
config = {
"ak": "your-access-key",
"sk": "your-secret-key"
}
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", config=config)
```
To create a new OBSFileLoader with an existing client:
```
from obs import ObsClient
# Assuming you have an existing ObsClient object 'obs_client'
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client)
```
To create a new OBSFileLoader without an existing client:
```
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint="your-endpoint-url")
```
""" # noqa: E501
try:
from obs import ObsClient
except ImportError:
raise ImportError(
"Could not import esdk-obs-python python package. "
"Please install it with `pip install esdk-obs-python`."
)
if not client:
if not endpoint:
raise ValueError("Either OBSClient or endpoint must be provided.")
if not config:
config = dict()
if config.get("get_token_from_ecs"):
client = ObsClient(server=endpoint, security_provider_policy="ECS")
else:
client = ObsClient(
access_key_id=config.get("ak"),
secret_access_key=config.get("sk"),
security_token=config.get("token"),
server=endpoint,
)
if not isinstance(client, ObsClient):
raise TypeError("Client must be ObsClient type")
self.client = client
self.bucket = bucket
self.key = key
def load(self) -> List[Document]:
"""Load documents."""
with tempfile.TemporaryDirectory() as temp_dir:
file_path = f"{temp_dir}/{self.bucket}/{self.key}"
os.makedirs(os.path.dirname(file_path), exist_ok=True)
# Download the file to a destination
self.client.downloadFile(
bucketName=self.bucket, objectKey=self.key, downloadFile=file_path
)
loader = UnstructuredFileLoader(file_path)
return loader.load()
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/spreedly.py | import json
import urllib.request
from typing import List
from langchain_core.documents import Document
from langchain_core.utils import stringify_dict
from langchain_community.document_loaders.base import BaseLoader
SPREEDLY_ENDPOINTS = {
"gateways_options": "https://core.spreedly.com/v1/gateways_options.json",
"gateways": "https://core.spreedly.com/v1/gateways.json",
"receivers_options": "https://core.spreedly.com/v1/receivers_options.json",
"receivers": "https://core.spreedly.com/v1/receivers.json",
"payment_methods": "https://core.spreedly.com/v1/payment_methods.json",
"certificates": "https://core.spreedly.com/v1/certificates.json",
"transactions": "https://core.spreedly.com/v1/transactions.json",
"environments": "https://core.spreedly.com/v1/environments.json",
}
class SpreedlyLoader(BaseLoader):
"""Load from `Spreedly` API."""
def __init__(self, access_token: str, resource: str) -> None:
"""Initialize with an access token and a resource.
Args:
access_token: The access token.
resource: The resource.
"""
self.access_token = access_token
self.resource = resource
self.headers = {
"Authorization": f"Bearer {self.access_token}",
"Accept": "application/json",
}
def _make_request(self, url: str) -> List[Document]:
request = urllib.request.Request(url, headers=self.headers)
with urllib.request.urlopen(request) as response:
json_data = json.loads(response.read().decode())
text = stringify_dict(json_data)
metadata = {"source": url}
return [Document(page_content=text, metadata=metadata)]
def _get_resource(self) -> List[Document]:
endpoint = SPREEDLY_ENDPOINTS.get(self.resource)
if endpoint is None:
return []
return self._make_request(endpoint)
def load(self) -> List[Document]:
return self._get_resource()
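# Illustrative usage sketch: the access token below is a placeholder value;
# "gateways" is one of the keys defined in SPREEDLY_ENDPOINTS above.
#
#   loader = SpreedlyLoader(access_token="your-access-token", resource="gateways")
#   docs = loader.load()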
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/twitter.py | from __future__ import annotations
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
import tweepy
from tweepy import OAuth2BearerHandler, OAuthHandler
def _dependable_tweepy_import() -> tweepy:
try:
import tweepy
except ImportError:
raise ImportError(
"tweepy package not found, please install it with `pip install tweepy`"
)
return tweepy
class TwitterTweetLoader(BaseLoader):
"""Load `Twitter` tweets.
Read tweets of the user's Twitter handle.
First you need to go to
`https://developer.twitter.com/en/docs/twitter-api
/getting-started/getting-access-to-the-twitter-api`
to get your token, and create a v2 version of the app.
"""
def __init__(
self,
auth_handler: Union[OAuthHandler, OAuth2BearerHandler],
twitter_users: Sequence[str],
number_tweets: Optional[int] = 100,
):
self.auth = auth_handler
self.twitter_users = twitter_users
self.number_tweets = number_tweets
def load(self) -> List[Document]:
"""Load tweets."""
tweepy = _dependable_tweepy_import()
api = tweepy.API(self.auth, parser=tweepy.parsers.JSONParser())
results: List[Document] = []
for username in self.twitter_users:
tweets = api.user_timeline(screen_name=username, count=self.number_tweets)
user = api.get_user(screen_name=username)
docs = self._format_tweets(tweets, user)
results.extend(docs)
return results
def _format_tweets(
self, tweets: List[Dict[str, Any]], user_info: dict
) -> Iterable[Document]:
"""Format tweets into a string."""
for tweet in tweets:
metadata = {
"created_at": tweet["created_at"],
"user_info": user_info,
}
yield Document(
page_content=tweet["text"],
metadata=metadata,
)
@classmethod
def from_bearer_token(
cls,
oauth2_bearer_token: str,
twitter_users: Sequence[str],
number_tweets: Optional[int] = 100,
) -> TwitterTweetLoader:
"""Create a TwitterTweetLoader from OAuth2 bearer token."""
tweepy = _dependable_tweepy_import()
auth = tweepy.OAuth2BearerHandler(oauth2_bearer_token)
return cls(
auth_handler=auth,
twitter_users=twitter_users,
number_tweets=number_tweets,
)
@classmethod
def from_secrets(
cls,
access_token: str,
access_token_secret: str,
consumer_key: str,
consumer_secret: str,
twitter_users: Sequence[str],
number_tweets: Optional[int] = 100,
) -> TwitterTweetLoader:
"""Create a TwitterTweetLoader from access tokens and secrets."""
tweepy = _dependable_tweepy_import()
auth = tweepy.OAuthHandler(
access_token=access_token,
access_token_secret=access_token_secret,
consumer_key=consumer_key,
consumer_secret=consumer_secret,
)
return cls(
auth_handler=auth,
twitter_users=twitter_users,
number_tweets=number_tweets,
)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/nuclia.py | import json
import uuid
from typing import List
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.tools.nuclia.tool import NucliaUnderstandingAPI
class NucliaLoader(BaseLoader):
"""Load from any file type using `Nuclia Understanding API`."""
def __init__(self, path: str, nuclia_tool: NucliaUnderstandingAPI):
self.nua = nuclia_tool
self.id = str(uuid.uuid4())
self.nua.run({"action": "push", "id": self.id, "path": path, "text": None})
def load(self) -> List[Document]:
"""Load documents."""
data = self.nua.run(
{"action": "pull", "id": self.id, "path": None, "text": None}
)
if not data:
return []
obj = json.loads(data)
text = obj["extracted_text"][0]["body"]["text"]
print(text) # noqa: T201
metadata = {
"file": obj["file_extracted_data"][0],
"metadata": obj["field_metadata"][0],
}
return [Document(page_content=text, metadata=metadata)]
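# Illustrative usage sketch: `nua` is assumed to be an already-configured
# NucliaUnderstandingAPI instance, and the file path is a placeholder value.
#
#   loader = NucliaLoader("./report.pdf", nua)
#   docs = loader.load()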
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/base.py | from langchain_core.document_loaders import BaseBlobParser, BaseLoader
__all__ = [
"BaseBlobParser",
"BaseLoader",
]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/email.py | import os
from pathlib import Path
from typing import Any, Iterator, List, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.unstructured import (
UnstructuredFileLoader,
satisfies_min_unstructured_version,
)
class UnstructuredEmailLoader(UnstructuredFileLoader):
"""Load email files using `Unstructured`.
Works with both
.eml and .msg files. You can process attachments in addition to the
e-mail message itself by passing process_attachments=True into the
constructor for the loader. By default, attachments will be processed
with the unstructured partition function. If you already know the document
types of the attachments, you can specify another partitioning function
with the attachment partitioner kwarg.
Example
-------
from langchain_community.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")
loader.load()
Example
-------
from langchain_community.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader(
"example_data/fake-email-attachment.eml",
mode="elements",
process_attachments=True,
)
loader.load()
"""
def __init__(
self,
file_path: Union[str, Path],
mode: str = "single",
**unstructured_kwargs: Any,
):
process_attachments = unstructured_kwargs.get("process_attachments")
attachment_partitioner = unstructured_kwargs.get("attachment_partitioner")
if process_attachments and attachment_partitioner is None:
from unstructured.partition.auto import partition
unstructured_kwargs["attachment_partitioner"] = partition
super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)
def _get_elements(self) -> List:
from unstructured.file_utils.filetype import FileType, detect_filetype
filetype = detect_filetype(self.file_path) # type: ignore[arg-type]
if filetype == FileType.EML:
from unstructured.partition.email import partition_email
return partition_email(filename=self.file_path, **self.unstructured_kwargs) # type: ignore[arg-type]
elif satisfies_min_unstructured_version("0.5.8") and filetype == FileType.MSG:
from unstructured.partition.msg import partition_msg
return partition_msg(filename=self.file_path, **self.unstructured_kwargs) # type: ignore[arg-type]
else:
raise ValueError(
f"Filetype {filetype} is not supported in UnstructuredEmailLoader."
)
class OutlookMessageLoader(BaseLoader):
"""
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
"""
def __init__(self, file_path: Union[str, Path]):
"""Initialize with a file path.
Args:
file_path: The path to the Outlook Message file.
"""
self.file_path = str(file_path)
if not os.path.isfile(self.file_path):
raise ValueError(f"File path {self.file_path} is not a valid file")
try:
import extract_msg # noqa:F401
except ImportError:
raise ImportError(
"extract_msg is not installed. Please install it with "
"`pip install extract_msg`"
)
def lazy_load(self) -> Iterator[Document]:
import extract_msg
msg = extract_msg.Message(self.file_path)
yield Document(
page_content=msg.body,
metadata={
"source": self.file_path,
"subject": msg.subject,
"sender": msg.sender,
"date": msg.date,
},
)
msg.close()
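# Example usage (a minimal sketch; "example_data/message.msg" is a
# hypothetical path to an Outlook Message file):
#
#   loader = OutlookMessageLoader("example_data/message.msg")
#   docs = loader.load()
#   print(docs[0].metadata["subject"])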
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/hugging_face_model.py | from typing import Iterator, List, Optional
import requests
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class HuggingFaceModelLoader(BaseLoader):
"""
Load model information from `Hugging Face Hub`, including README content.
This loader interfaces with the Hugging Face Models API to fetch and load
model metadata and README files.
The API allows you to search and filter models based on specific criteria
such as model tags, authors, and more.
API URL: https://huggingface.co/api/models
DOC URL: https://huggingface.co/docs/hub/en/api
Examples:
.. code-block:: python
from langchain_community.document_loaders import HuggingFaceModelLoader
# Initialize the loader with search criteria
loader = HuggingFaceModelLoader(search="bert", limit=10)
# Load models
documents = loader.load()
# Iterate through the fetched documents
for doc in documents:
print(doc.page_content) # README content of the model
print(doc.metadata) # Metadata of the model
"""
BASE_URL: str = "https://huggingface.co/api/models"
README_BASE_URL: str = "https://huggingface.co/{model_id}/raw/main/README.md"
def __init__(
self,
*,
search: Optional[str] = None,
author: Optional[str] = None,
filter: Optional[str] = None,
sort: Optional[str] = None,
direction: Optional[str] = None,
limit: Optional[int] = 3,
full: Optional[bool] = None,
config: Optional[bool] = None,
):
"""Initialize the HuggingFaceModelLoader.
Args:
search: Filter based on substrings for repos and their usernames.
author: Filter models by an author or organization.
filter: Filter based on tags.
sort: Property to use when sorting.
direction: Direction in which to sort.
limit: Limit the number of models fetched.
full: Whether to fetch most model data.
config: Whether to also fetch the repo config.
"""
self.params = {
"search": search,
"author": author,
"filter": filter,
"sort": sort,
"direction": direction,
"limit": limit,
"full": full,
"config": config,
}
def fetch_models(self) -> List[dict]:
"""Fetch model information from Hugging Face Hub."""
response = requests.get(
self.BASE_URL,
params={k: v for k, v in self.params.items() if v is not None},
)
response.raise_for_status()
return response.json()
def fetch_readme_content(self, model_id: str) -> str:
"""Fetch the README content for a given model."""
readme_url = self.README_BASE_URL.format(model_id=model_id)
try:
response = requests.get(readme_url)
response.raise_for_status()
return response.text
except requests.RequestException:
return "README not available for this model."
def lazy_load(self) -> Iterator[Document]:
"""Load model information lazily, including README content."""
models = self.fetch_models()
for model in models:
model_id = model.get("modelId", "")
readme_content = self.fetch_readme_content(model_id)
yield Document(
page_content=readme_content,
metadata=model,
)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/acreom.py | import re
from pathlib import Path
from typing import Iterator, Pattern, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class AcreomLoader(BaseLoader):
"""Load `acreom` vault from a directory."""
FRONT_MATTER_REGEX: Pattern = re.compile(
r"^---\n(.*?)\n---\n", re.MULTILINE | re.DOTALL
)
"""Regex to match front matter metadata in markdown files."""
def __init__(
self,
path: Union[str, Path],
encoding: str = "UTF-8",
collect_metadata: bool = True,
):
"""Initialize the loader."""
self.file_path = path
"""Path to the directory containing the markdown files."""
self.encoding = encoding
"""Encoding to use when reading the files."""
self.collect_metadata = collect_metadata
"""Whether to collect metadata from the front matter."""
def _parse_front_matter(self, content: str) -> dict:
"""Parse front matter metadata from the content and return it as a dict."""
if not self.collect_metadata:
return {}
match = self.FRONT_MATTER_REGEX.search(content)
front_matter = {}
if match:
lines = match.group(1).split("\n")
for line in lines:
if ":" in line:
key, value = line.split(":", 1)
front_matter[key.strip()] = value.strip()
else:
# Skip lines without a colon
continue
return front_matter
def _remove_front_matter(self, content: str) -> str:
"""Remove front matter metadata from the given content."""
if not self.collect_metadata:
return content
return self.FRONT_MATTER_REGEX.sub("", content)
def _process_acreom_content(self, content: str) -> str:
# remove acreom specific elements from content that
# do not contribute to the context of current document
content = re.sub(r"\s*-\s\[\s\]\s.*|\s*\[\s\]\s.*", "", content) # rm tasks
content = re.sub(r"#", "", content) # rm hashtags
content = re.sub(r"\[\[.*?\]\]", "", content) # rm doclinks
return content
def lazy_load(self) -> Iterator[Document]:
ps = list(Path(self.file_path).glob("**/*.md"))
for p in ps:
with open(p, encoding=self.encoding) as f:
text = f.read()
front_matter = self._parse_front_matter(text)
text = self._remove_front_matter(text)
text = self._process_acreom_content(text)
metadata = {
"source": str(p.name),
"path": str(p),
**front_matter,
}
yield Document(page_content=text, metadata=metadata)
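# Example usage (a minimal sketch; "path/to/vault" is a hypothetical acreom
# vault directory containing markdown files):
#
#   loader = AcreomLoader("path/to/vault", collect_metadata=True)
#   for doc in loader.lazy_load():
#       print(doc.metadata["source"], doc.metadata["path"])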
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/larksuite.py | import json
import urllib.request
from typing import Any, Iterator
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class LarkSuiteDocLoader(BaseLoader):
"""Load from `LarkSuite` (`FeiShu`)."""
def __init__(self, domain: str, access_token: str, document_id: str):
"""Initialize with domain, access_token (tenant / user), and document_id.
Args:
domain: The domain to load the LarkSuite.
access_token: The access_token to use.
document_id: The document_id to load.
"""
self.domain = domain
self.access_token = access_token
self.document_id = document_id
def _get_larksuite_api_json_data(self, api_url: str) -> Any:
"""Get LarkSuite (FeiShu) API response json data."""
headers = {"Authorization": f"Bearer {self.access_token}"}
request = urllib.request.Request(api_url, headers=headers)
with urllib.request.urlopen(request) as response:
json_data = json.loads(response.read().decode())
return json_data
def lazy_load(self) -> Iterator[Document]:
"""Lazy load LarkSuite (FeiShu) document."""
api_url_prefix = f"{self.domain}/open-apis/docx/v1/documents"
metadata_json = self._get_larksuite_api_json_data(
f"{api_url_prefix}/{self.document_id}"
)
raw_content_json = self._get_larksuite_api_json_data(
f"{api_url_prefix}/{self.document_id}/raw_content"
)
text = raw_content_json["data"]["content"]
metadata = {
"document_id": self.document_id,
"revision_id": metadata_json["data"]["document"]["revision_id"],
"title": metadata_json["data"]["document"]["title"],
}
yield Document(page_content=text, metadata=metadata)
class LarkSuiteWikiLoader(LarkSuiteDocLoader):
"""Load from `LarkSuite` (`FeiShu`) wiki."""
def __init__(self, domain: str, access_token: str, wiki_id: str):
"""Initialize with domain, access_token (tenant / user), and wiki_id.
Args:
domain: The domain to load the LarkSuite.
access_token: The access_token to use.
wiki_id: The wiki_id to load.
"""
self.domain = domain
self.access_token = access_token
self.wiki_id = wiki_id
self.document_id = ""
def lazy_load(self) -> Iterator[Document]:
"""Lazy load LarkSuite (FeiShu) wiki document."""
# convert Feishu wiki id to document id
if not self.document_id:
wiki_url_prefix = f"{self.domain}/open-apis/wiki/v2/spaces/get_node"
wiki_node_info_json = self._get_larksuite_api_json_data(
f"{wiki_url_prefix}?token={self.wiki_id}"
)
self.document_id = wiki_node_info_json["data"]["node"]["obj_token"]
yield from super().lazy_load()
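# Example usage (a minimal sketch; the domain, token, and ids are placeholders):
#
#   doc_loader = LarkSuiteDocLoader(
#       domain="https://open.larksuite.com",
#       access_token="TENANT_ACCESS_TOKEN",
#       document_id="DOCUMENT_ID",
#   )
#   wiki_loader = LarkSuiteWikiLoader(
#       domain="https://open.larksuite.com",
#       access_token="TENANT_ACCESS_TOKEN",
#       wiki_id="WIKI_ID",
#   )
#   docs = doc_loader.load() + wiki_loader.load()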
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/generic.py | from __future__ import annotations
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
Iterator,
List,
Literal,
Optional,
Sequence,
Union,
)
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseBlobParser, BaseLoader
from langchain_community.document_loaders.blob_loaders import (
BlobLoader,
FileSystemBlobLoader,
)
from langchain_community.document_loaders.parsers.registry import get_parser
if TYPE_CHECKING:
from langchain_text_splitters import TextSplitter
_PathLike = Union[str, Path]
DEFAULT = Literal["default"]
class GenericLoader(BaseLoader):
"""Generic Document Loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples:
Parse a specific PDF file:
.. code-block:: python
from langchain_community.document_loaders import GenericLoader
from langchain_community.document_loaders.parsers.pdf import PyPDFParser
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem(
"my_lovely_pdf.pdf",
parser=PyPDFParser()
)
.. code-block:: python
from langchain_community.document_loaders import GenericLoader
from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader
loader = GenericLoader.from_filesystem(
path="path/to/directory",
glob="**/[!.]*",
suffixes=[".pdf"],
show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
.. code-block:: python
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")
Example instantiations to change which parser is used:
.. code-block:: python
from langchain_community.document_loaders.parsers.pdf import PyPDFParser
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem(
"/path/to/dir",
glob="**/*.pdf",
parser=PyPDFParser()
)
""" # noqa: E501
def __init__(
self,
blob_loader: BlobLoader, # type: ignore[valid-type]
blob_parser: BaseBlobParser,
) -> None:
"""A generic document loader.
Args:
blob_loader: A blob loader which knows how to yield blobs
blob_parser: A blob parser which knows how to parse blobs into documents
"""
self.blob_loader = blob_loader
self.blob_parser = blob_parser
def lazy_load(
self,
) -> Iterator[Document]:
"""Load documents lazily. Use this when working at a large scale."""
for blob in self.blob_loader.yield_blobs(): # type: ignore[attr-defined]
yield from self.blob_parser.lazy_parse(blob)
def load_and_split(
self, text_splitter: Optional[TextSplitter] = None
) -> List[Document]:
"""Load all documents and split them into sentences."""
raise NotImplementedError(
"Loading and splitting is not yet implemented for generic loaders. "
"When they will be implemented they will be added via the initializer. "
"This method should not be used going forward."
)
@classmethod
def from_filesystem(
cls,
path: _PathLike,
*,
glob: str = "**/[!.]*",
exclude: Sequence[str] = (),
suffixes: Optional[Sequence[str]] = None,
show_progress: bool = False,
parser: Union[DEFAULT, BaseBlobParser] = "default",
parser_kwargs: Optional[dict] = None,
) -> GenericLoader:
"""Create a generic document loader using a filesystem blob loader.
Args:
path: The path to the directory to load documents from OR the path to a
single file to load. If this is a file, glob, exclude, suffixes
will be ignored.
glob: The glob pattern to use to find documents.
suffixes: The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
exclude: A list of patterns to exclude from the loader.
show_progress: Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser: A blob parser which knows how to parse blobs into documents,
will instantiate a default parser if not provided.
The default can be overridden by either passing a parser or
setting the class attribute `blob_parser` (the latter
should be used with inheritance).
parser_kwargs: Keyword arguments to pass to the parser.
Returns:
A generic document loader.
"""
blob_loader = FileSystemBlobLoader( # type: ignore[attr-defined, misc]
path,
glob=glob,
exclude=exclude,
suffixes=suffixes,
show_progress=show_progress,
)
if isinstance(parser, str):
if parser == "default":
try:
# If there is an implementation of get_parser on the class, use it.
blob_parser = cls.get_parser(**(parser_kwargs or {}))
except NotImplementedError:
# if not then use the global registry.
blob_parser = get_parser(parser)
else:
blob_parser = get_parser(parser)
else:
blob_parser = parser
return cls(blob_loader, blob_parser)
@staticmethod
def get_parser(**kwargs: Any) -> BaseBlobParser:
"""Override this method to associate a default parser with the class."""
raise NotImplementedError()
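# Example subclass (a minimal sketch showing how overriding `get_parser`
# associates a default parser with a loader class; `MyPDFLoader` is a
# hypothetical name):
#
#   from langchain_community.document_loaders.parsers.pdf import PyPDFParser
#
#   class MyPDFLoader(GenericLoader):
#       @staticmethod
#       def get_parser(**kwargs: Any) -> BaseBlobParser:
#           return PyPDFParser(**kwargs)
#
#   loader = MyPDFLoader.from_filesystem("/path/to/dir", glob="**/*.pdf")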
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/bibtex.py | import logging
import re
from pathlib import Path
from typing import Any, Iterator, List, Mapping, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.bibtex import BibtexparserWrapper
logger = logging.getLogger(__name__)
class BibtexLoader(BaseLoader):
"""Load a `bibtex` file.
Each document represents one entry from the bibtex file.
If a PDF file is present in the `file` bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the `abstract` field is used instead.
"""
def __init__(
self,
file_path: str,
*,
parser: Optional[BibtexparserWrapper] = None,
max_docs: Optional[int] = None,
max_content_chars: Optional[int] = 4_000,
load_extra_metadata: bool = False,
file_pattern: str = r"[^:]+\.pdf",
):
"""Initialize the BibtexLoader.
Args:
file_path: Path to the bibtex file.
parser: The parser to use. If None, a default parser is used.
max_docs: Max number of associated documents to load. Use -1 for
no limit.
max_content_chars: Maximum number of characters to load from the PDF.
load_extra_metadata: Whether to load extra metadata from the PDF.
file_pattern: Regex pattern to match the file name in the bibtex.
"""
self.file_path = file_path
self.parser = parser or BibtexparserWrapper()
self.max_docs = max_docs
self.max_content_chars = max_content_chars
self.load_extra_metadata = load_extra_metadata
self.file_regex = re.compile(file_pattern)
def _load_entry(self, entry: Mapping[str, Any]) -> Optional[Document]:
import fitz
parent_dir = Path(self.file_path).parent
# regex is useful for Zotero flavor bibtex files
file_names = self.file_regex.findall(entry.get("file", ""))
if not file_names:
return None
texts: List[str] = []
for file_name in file_names:
try:
with fitz.open(parent_dir / file_name) as f:
texts.extend(page.get_text() for page in f)
except FileNotFoundError as e:
logger.debug(e)
content = "\n".join(texts) or entry.get("abstract", "")
if self.max_content_chars:
content = content[: self.max_content_chars]
metadata = self.parser.get_metadata(entry, load_extra=self.load_extra_metadata)
return Document(
page_content=content,
metadata=metadata,
)
def lazy_load(self) -> Iterator[Document]:
"""Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns:
a list of documents with the document.page_content in text format
"""
try:
import fitz # noqa: F401
except ImportError:
raise ImportError(
"PyMuPDF package not found, please install it with "
"`pip install pymupdf`"
)
entries = self.parser.load_bibtex_entries(self.file_path)
if self.max_docs and self.max_docs > 0:
entries = entries[: self.max_docs]
for entry in entries:
doc = self._load_entry(entry)
if doc:
yield doc
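# Example usage (a minimal sketch; "references.bib" is a hypothetical bibtex
# file whose `file` fields point at PDFs in the same directory):
#
#   loader = BibtexLoader("references.bib", max_docs=2)
#   for doc in loader.lazy_load():
#       print(doc.metadata)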
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/whatsapp_chat.py | import re
from pathlib import Path
from typing import Iterator
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
def concatenate_rows(date: str, sender: str, text: str) -> str:
"""Combine message information in a readable format ready to be used."""
return f"{sender} on {date}: {text}\n\n"
class WhatsAppChatLoader(BaseLoader):
"""Load `WhatsApp` messages text file."""
def __init__(self, path: str):
"""Initialize with path."""
self.file_path = path
def lazy_load(self) -> Iterator[Document]:
p = Path(self.file_path)
text_content = ""
with open(p, encoding="utf8") as f:
lines = f.readlines()
message_line_regex = r"""
\[?
(
\d{1,4}
[\/.]
\d{1,2}
[\/.]
\d{1,4}
,\s
\d{1,2}
:\d{2}
(?:
:\d{2}
)?
(?:[\s_](?:AM|PM))?
)
\]?
[\s-]*
([~\w\s]+)
[:]+
\s
(.+)
"""
ignore_lines = ["This message was deleted", "<Media omitted>"]
for line in lines:
result = re.match(
message_line_regex, line.strip(), flags=re.VERBOSE | re.IGNORECASE
)
if result:
date, sender, text = result.groups()
if text not in ignore_lines:
text_content += concatenate_rows(date, sender, text)
metadata = {"source": str(p)}
yield Document(page_content=text_content, metadata=metadata)
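# Example usage (a minimal sketch; "whatsapp_chat.txt" is a hypothetical
# export produced by WhatsApp's "Export chat" feature):
#
#   loader = WhatsAppChatLoader("whatsapp_chat.txt")
#   docs = loader.load()
#   print(docs[0].page_content[:200])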
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/git.py | import os
from typing import Callable, Iterator, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class GitLoader(BaseLoader):
"""Load `Git` repository files.
The repository can be local, on disk at `repo_path`, or remote, at
`clone_url`, in which case it will be cloned to `repo_path`.
Currently, only text files are supported.
Each document represents one file in the repository. The `path` points to
the local Git repository, and the `branch` specifies the branch to load
files from. By default, it loads from the `main` branch.
"""
def __init__(
self,
repo_path: str,
clone_url: Optional[str] = None,
branch: Optional[str] = "main",
file_filter: Optional[Callable[[str], bool]] = None,
):
"""
Args:
repo_path: The path to the Git repository.
clone_url: Optional. The URL to clone the repository from.
branch: Optional. The branch to load files from. Defaults to `main`.
file_filter: Optional. A function that takes a file path and returns
a boolean indicating whether to load the file. Defaults to None.
"""
self.repo_path = repo_path
self.clone_url = clone_url
self.branch = branch
self.file_filter = file_filter
def lazy_load(self) -> Iterator[Document]:
try:
from git import Blob, Repo
except ImportError as ex:
raise ImportError(
"Could not import git python package. "
"Please install it with `pip install GitPython`."
) from ex
if not os.path.exists(self.repo_path) and self.clone_url is None:
raise ValueError(f"Path {self.repo_path} does not exist")
elif self.clone_url:
# If the repo_path already contains a git repository, verify that it's the
# same repository as the one we're trying to clone.
if os.path.isdir(os.path.join(self.repo_path, ".git")):
repo = Repo(self.repo_path)
# If the existing repository is not the same as the one we're trying to
# clone, raise an error.
if repo.remotes.origin.url != self.clone_url:
raise ValueError(
"A different repository is already cloned at this path."
)
else:
repo = Repo.clone_from(self.clone_url, self.repo_path)
repo.git.checkout(self.branch)
else:
repo = Repo(self.repo_path)
repo.git.checkout(self.branch)
for item in repo.tree().traverse():
if not isinstance(item, Blob):
continue
file_path = os.path.join(self.repo_path, item.path)
ignored_files = repo.ignored([file_path]) # type: ignore[arg-type]
if len(ignored_files):
continue
# uses filter to skip files
if self.file_filter and not self.file_filter(file_path):
continue
rel_file_path = os.path.relpath(file_path, self.repo_path)
try:
with open(file_path, "rb") as f:
content = f.read()
file_type = os.path.splitext(item.name)[1]
# loads only text files
try:
text_content = content.decode("utf-8")
except UnicodeDecodeError:
continue
metadata = {
"source": rel_file_path,
"file_path": rel_file_path,
"file_name": item.name,
"file_type": file_type,
}
yield Document(page_content=text_content, metadata=metadata)
except Exception as e:
print(f"Error reading file {file_path}: {e}") # noqa: T201
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/airbyte.py | from typing import Any, Callable, Iterator, Mapping, Optional
from langchain_core.documents import Document
from langchain_core.utils.utils import guard_import
from langchain_community.document_loaders.base import BaseLoader
RecordHandler = Callable[[Any, Optional[str]], Document]
class AirbyteCDKLoader(BaseLoader):
"""Load with an `Airbyte` source connector implemented using the `CDK`."""
def __init__(
self,
config: Mapping[str, Any],
source_class: Any,
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
source_class: The source connector class.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
from airbyte_cdk.models.airbyte_protocol import AirbyteRecordMessage
from airbyte_cdk.sources.embedded.base_integration import (
BaseEmbeddedIntegration,
)
from airbyte_cdk.sources.embedded.runner import CDKRunner
class CDKIntegration(BaseEmbeddedIntegration):
"""A wrapper around the CDK integration."""
def _handle_record(
self, record: AirbyteRecordMessage, id: Optional[str]
) -> Document:
if record_handler:
return record_handler(record, id)
return Document(page_content="", metadata=record.data)
self._integration = CDKIntegration(
config=config,
runner=CDKRunner(source=source_class(), name=source_class.__name__),
)
self._stream_name = stream_name
self._state = state
def lazy_load(self) -> Iterator[Document]:
return self._integration._load_data(
stream_name=self._stream_name, state=self._state
)
@property
def last_state(self) -> Any:
return self._integration.last_state
class AirbyteHubspotLoader(AirbyteCDKLoader):
"""Load from `Hubspot` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_hubspot", pip_name="airbyte-source-hubspot"
).SourceHubspot
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteStripeLoader(AirbyteCDKLoader):
"""Load from `Stripe` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_stripe", pip_name="airbyte-source-stripe"
).SourceStripe
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteTypeformLoader(AirbyteCDKLoader):
"""Load from `Typeform` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_typeform", pip_name="airbyte-source-typeform"
).SourceTypeform
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteZendeskSupportLoader(AirbyteCDKLoader):
"""Load from `Zendesk Support` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_zendesk_support", pip_name="airbyte-source-zendesk-support"
).SourceZendeskSupport
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteShopifyLoader(AirbyteCDKLoader):
"""Load from `Shopify` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_shopify", pip_name="airbyte-source-shopify"
).SourceShopify
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteSalesforceLoader(AirbyteCDKLoader):
"""Load from `Salesforce` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_salesforce", pip_name="airbyte-source-salesforce"
).SourceSalesforce
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteGongLoader(AirbyteCDKLoader):
"""Load from `Gong` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_gong", pip_name="airbyte-source-gong"
).SourceGong
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
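# Example usage (a minimal sketch for the Stripe connector; the config keys
# follow the Airbyte Stripe source spec and the values are placeholders):
#
#   config = {
#       "client_secret": "STRIPE_API_KEY",
#       "account_id": "STRIPE_ACCOUNT_ID",
#       "start_date": "2023-01-01T00:00:00Z",
#   }
#   loader = AirbyteStripeLoader(config=config, stream_name="invoices")
#   docs = loader.load()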
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/duckdb_loader.py | from typing import Dict, List, Optional, cast
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class DuckDBLoader(BaseLoader):
"""Load from `DuckDB`.
Each document represents one row of the result. The `page_content_columns`
are written into the `page_content` of the document. The `metadata_columns`
are written into the `metadata` of the document. By default, all columns
are written into the `page_content` and none into the `metadata`.
"""
def __init__(
self,
query: str,
database: str = ":memory:",
read_only: bool = False,
config: Optional[Dict[str, str]] = None,
page_content_columns: Optional[List[str]] = None,
metadata_columns: Optional[List[str]] = None,
):
"""
Args:
query: The query to execute.
database: The database to connect to. Defaults to ":memory:".
read_only: Whether to open the database in read-only mode.
Defaults to False.
config: A dictionary of configuration options to pass to the database.
Optional.
page_content_columns: The columns to write into the `page_content`
of the document. Optional.
metadata_columns: The columns to write into the `metadata` of the document.
Optional.
"""
self.query = query
self.database = database
self.read_only = read_only
self.config = config or {}
self.page_content_columns = page_content_columns
self.metadata_columns = metadata_columns
def load(self) -> List[Document]:
try:
import duckdb
except ImportError:
raise ImportError(
"Could not import duckdb python package. "
"Please install it with `pip install duckdb`."
)
docs = []
with duckdb.connect(
database=self.database, read_only=self.read_only, config=self.config
) as con:
query_result = con.execute(self.query)
results = query_result.fetchall()
description = cast(list, query_result.description)
field_names = [c[0] for c in description]
if self.page_content_columns is None:
page_content_columns = field_names
else:
page_content_columns = self.page_content_columns
if self.metadata_columns is None:
metadata_columns = []
else:
metadata_columns = self.metadata_columns
for result in results:
page_content = "\n".join(
f"{column}: {result[field_names.index(column)]}"
for column in page_content_columns
)
metadata = {
column: result[field_names.index(column)]
for column in metadata_columns
}
doc = Document(page_content=page_content, metadata=metadata)
docs.append(doc)
return docs
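# Example usage (a minimal sketch; "example.csv" and its columns are
# hypothetical):
#
#   loader = DuckDBLoader(
#       query="SELECT text, source FROM read_csv_auto('example.csv')",
#       page_content_columns=["text"],
#       metadata_columns=["source"],
#   )
#   docs = loader.load()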
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/pyspark_dataframe.py | import itertools
import logging
import sys
from typing import TYPE_CHECKING, Any, Iterator, List, Optional, Tuple
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
logger = logging.getLogger(__file__)
if TYPE_CHECKING:
from pyspark.sql import SparkSession
class PySparkDataFrameLoader(BaseLoader):
"""Load `PySpark` DataFrames."""
def __init__(
self,
spark_session: Optional["SparkSession"] = None,
df: Optional[Any] = None,
page_content_column: str = "text",
fraction_of_memory: float = 0.1,
):
"""Initialize with a Spark DataFrame object.
Args:
spark_session: The SparkSession object.
df: The Spark DataFrame object.
page_content_column: The name of the column containing the page content.
Defaults to "text".
fraction_of_memory: The fraction of memory to use. Defaults to 0.1.
"""
try:
from pyspark.sql import DataFrame, SparkSession
except ImportError:
raise ImportError(
"pyspark is not installed. "
"Please install it with `pip install pyspark`"
)
self.spark = (
spark_session if spark_session else SparkSession.builder.getOrCreate()
)
if not isinstance(df, DataFrame):
raise ValueError(
f"Expected data_frame to be a PySpark DataFrame, got {type(df)}"
)
self.df = df
self.page_content_column = page_content_column
self.fraction_of_memory = fraction_of_memory
self.num_rows, self.max_num_rows = self.get_num_rows()
self.rdd_df = self.df.rdd.map(list)
self.column_names = self.df.columns
def get_num_rows(self) -> Tuple[int, int]:
"""Gets the number of "feasible" rows for the DataFrame"""
try:
import psutil
except ImportError as e:
raise ImportError(
"psutil not installed. Please install it with `pip install psutil`."
) from e
row = self.df.limit(1).collect()[0]
estimated_row_size = sys.getsizeof(row)
mem_info = psutil.virtual_memory()
available_memory = mem_info.available
max_num_rows = int(
(available_memory / estimated_row_size) * self.fraction_of_memory
)
return min(max_num_rows, self.df.count()), max_num_rows
def lazy_load(self) -> Iterator[Document]:
"""A lazy loader for document content."""
for row in self.rdd_df.toLocalIterator():
metadata = {self.column_names[i]: row[i] for i in range(len(row))}
text = metadata[self.page_content_column]
metadata.pop(self.page_content_column)
yield Document(page_content=text, metadata=metadata)
def load(self) -> List[Document]:
"""Load from the dataframe."""
if self.df.count() > self.max_num_rows:
logger.warning(
f"The number of DataFrame rows is {self.df.count()}, "
f"but we will only include the amount "
f"of rows that can reasonably fit in memory: {self.num_rows}."
)
lazy_load_iterator = self.lazy_load()
return list(itertools.islice(lazy_load_iterator, self.num_rows))
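# Example usage (a minimal sketch; builds a tiny in-memory DataFrame):
#
#   from pyspark.sql import SparkSession
#
#   spark = SparkSession.builder.getOrCreate()
#   df = spark.createDataFrame(
#       [("hello world", "a.txt"), ("foo bar", "b.txt")], ["text", "source"]
#   )
#   loader = PySparkDataFrameLoader(spark, df, page_content_column="text")
#   docs = loader.load()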
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/tidb.py | from typing import Any, Dict, Iterator, List, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class TiDBLoader(BaseLoader):
"""Load documents from TiDB."""
def __init__(
self,
connection_string: str,
query: str,
page_content_columns: Optional[List[str]] = None,
metadata_columns: Optional[List[str]] = None,
engine_args: Optional[Dict[str, Any]] = None,
) -> None:
"""Initialize TiDB document loader.
Args:
connection_string (str): The connection string for the TiDB database,
format: "mysql+pymysql://root@127.0.0.1:4000/test".
query: The query to run in TiDB.
page_content_columns: Optional. Columns written to Document `page_content`,
default(None) to all columns.
metadata_columns: Optional. Columns written to Document `metadata`,
default(None) to no columns.
engine_args: Optional. Additional arguments to pass to sqlalchemy engine.
"""
self.connection_string = connection_string
self.query = query
self.page_content_columns = page_content_columns
self.metadata_columns = metadata_columns if metadata_columns is not None else []
self.engine_args = engine_args
def lazy_load(self) -> Iterator[Document]:
"""Lazy load TiDB data into document objects."""
from sqlalchemy import create_engine
from sqlalchemy.engine import Engine
from sqlalchemy.sql import text
# use sqlalchemy to create db connection
engine: Engine = create_engine(
self.connection_string, **(self.engine_args or {})
)
# execute query
with engine.connect() as conn:
result = conn.execute(text(self.query))
# convert result to Document objects
column_names = list(result.keys())
for row in result:
# convert row to dict{column:value}
row_data = {
column_names[index]: value for index, value in enumerate(row)
}
page_content = "\n".join(
f"{k}: {v}"
for k, v in row_data.items()
if self.page_content_columns is None
or k in self.page_content_columns
)
metadata = {col: row_data[col] for col in self.metadata_columns}
yield Document(page_content=page_content, metadata=metadata)
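# Example usage (a minimal sketch; the table and column names are
# hypothetical, and the connection string format follows the docstring above):
#
#   loader = TiDBLoader(
#       connection_string="mysql+pymysql://root@127.0.0.1:4000/test",
#       query="SELECT id, body FROM articles",
#       page_content_columns=["body"],
#       metadata_columns=["id"],
#   )
#   docs = list(loader.lazy_load())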
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/arxiv.py | from typing import Any, Iterator, List, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.arxiv import ArxivAPIWrapper
class ArxivLoader(BaseLoader):
"""Load a query result from `Arxiv`.
The loader converts the original PDF format into text.
Setup:
Install ``arxiv`` and ``PyMuPDF`` packages.
``PyMuPDF`` transforms PDF files downloaded from the arxiv.org site
into text.
.. code-block:: bash
pip install -U arxiv pymupdf
Instantiate:
.. code-block:: python
from langchain_community.document_loaders import ArxivLoader
loader = ArxivLoader(
query="reasoning",
# load_max_docs=2,
# load_all_available_meta=False
)
Load:
.. code-block:: python
docs = loader.load()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
Understanding the Reasoning Ability of Language Models
From the Perspective of Reasoning Paths Aggre
{
'Published': '2024-02-29',
'Title': 'Understanding the Reasoning Ability of Language Models From the
Perspective of Reasoning Paths Aggregation',
'Authors': 'Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan,
Wenhu Chen, William Yang Wang',
'Summary': 'Pre-trained language models (LMs) are able to perform complex reasoning
without explicit fine-tuning...'
}
Lazy load:
.. code-block:: python
docs = []
docs_lazy = loader.lazy_load()
# async variant:
# docs_lazy = await loader.alazy_load()
for doc in docs_lazy:
docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
Understanding the Reasoning Ability of Language Models
From the Perspective of Reasoning Paths Aggre
{
'Published': '2024-02-29',
'Title': 'Understanding the Reasoning Ability of Language Models From the
Perspective of Reasoning Paths Aggregation',
'Authors': 'Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan,
Wenhu Chen, William Yang Wang',
'Summary': 'Pre-trained language models (LMs) are able to perform complex reasoning
without explicit fine-tuning...'
}
Async load:
.. code-block:: python
docs = await loader.aload()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
Understanding the Reasoning Ability of Language Models
From the Perspective of Reasoning Paths Aggre
{
'Published': '2024-02-29',
'Title': 'Understanding the Reasoning Ability of Language Models From the
Perspective of Reasoning Paths Aggregation',
'Authors': 'Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan,
Wenhu Chen, William Yang Wang',
'Summary': 'Pre-trained language models (LMs) are able to perform complex reasoning
without explicit fine-tuning...'
}
Use summaries of articles as docs:
.. code-block:: python
from langchain_community.document_loaders import ArxivLoader
loader = ArxivLoader(
query="reasoning"
)
docs = loader.get_summaries_as_docs()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
Pre-trained language models (LMs) are able to perform complex reasoning
without explicit fine-tuning
{
'Entry ID': 'http://arxiv.org/abs/2402.03268v2',
'Published': datetime.date(2024, 2, 29),
'Title': 'Understanding the Reasoning Ability of Language Models From the
Perspective of Reasoning Paths Aggregation',
'Authors': 'Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan,
Wenhu Chen, William Yang Wang'
}
""" # noqa: E501
def __init__(
self, query: str, doc_content_chars_max: Optional[int] = None, **kwargs: Any
):
"""Initialize with search query to find documents in the Arxiv.
Supports all arguments of `ArxivAPIWrapper`.
Args:
query: free text query used to find documents on Arxiv
doc_content_chars_max: cut-off limit for the length of a document's content
""" # noqa: E501
self.query = query
self.client = ArxivAPIWrapper(
doc_content_chars_max=doc_content_chars_max, **kwargs
)
def lazy_load(self) -> Iterator[Document]:
"""Lazy load Arvix documents"""
yield from self.client.lazy_load(self.query)
def get_summaries_as_docs(self) -> List[Document]:
"""Uses papers summaries as documents rather than source Arvix papers"""
return self.client.get_summaries_as_docs(self.query)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/github.py | import base64
from abc import ABC
from datetime import datetime
from typing import Any, Callable, Dict, Iterator, List, Literal, Optional, Union
import requests
from langchain_core.documents import Document
from langchain_core.utils import get_from_dict_or_env
from pydantic import BaseModel, field_validator, model_validator
from langchain_community.document_loaders.base import BaseLoader
class BaseGitHubLoader(BaseLoader, BaseModel, ABC):
"""Load `GitHub` repository Issues."""
repo: str
"""Name of repository"""
access_token: str
"""Personal access token - see https://github.com/settings/tokens?type=beta"""
github_api_url: str = "https://api.github.com"
"""URL of GitHub API"""
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: Dict) -> Any:
"""Validate that access token exists in environment."""
values["access_token"] = get_from_dict_or_env(
values, "access_token", "GITHUB_PERSONAL_ACCESS_TOKEN"
)
return values
@property
def headers(self) -> Dict[str, str]:
return {
"Accept": "application/vnd.github+json",
"Authorization": f"Bearer {self.access_token}",
}
class GitHubIssuesLoader(BaseGitHubLoader):
"""Load issues of a GitHub repository."""
include_prs: bool = True
"""If True include Pull Requests in results, otherwise ignore them."""
milestone: Union[int, Literal["*", "none"], None] = None
"""If integer is passed, it should be a milestone's number field.
If the string '*' is passed, issues with any milestone are accepted.
If the string 'none' is passed, issues without milestones are returned.
"""
state: Optional[Literal["open", "closed", "all"]] = None
"""Filter on issue state. Can be one of: 'open', 'closed', 'all'."""
assignee: Optional[str] = None
"""Filter on assigned user. Pass 'none' for no user and '*' for any user."""
creator: Optional[str] = None
"""Filter on the user that created the issue."""
mentioned: Optional[str] = None
"""Filter on a user that's mentioned in the issue."""
labels: Optional[List[str]] = None
"""Label names to filter one. Example: bug,ui,@high."""
sort: Optional[Literal["created", "updated", "comments"]] = None
"""What to sort results by. Can be one of: 'created', 'updated', 'comments'.
Default is 'created'."""
direction: Optional[Literal["asc", "desc"]] = None
"""The direction to sort the results by. Can be one of: 'asc', 'desc'."""
since: Optional[str] = None
"""Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ."""
page: Optional[int] = None
"""The page number for paginated results.
Defaults to 1 in the GitHub API."""
per_page: Optional[int] = None
"""Number of items per page.
Defaults to 30 in the GitHub API."""
@field_validator("since")
@classmethod
def validate_since(cls, v: Optional[str]) -> Optional[str]:
if v:
try:
datetime.strptime(v, "%Y-%m-%dT%H:%M:%SZ")
except ValueError:
raise ValueError(
"Invalid value for 'since'. Expected a date string in "
f"YYYY-MM-DDTHH:MM:SSZ format. Received: {v}"
)
return v
def lazy_load(self) -> Iterator[Document]:
"""
Get issues of a GitHub repository.
Returns:
A list of Documents with attributes:
- page_content
- metadata
- url
- title
- creator
- created_at
- last_update_time
- closed_time
- number of comments
- state
- labels
- assignee
- assignees
- milestone
- locked
- number
- is_pull_request
"""
url: Optional[str] = self.url
while url:
response = requests.get(url, headers=self.headers)
response.raise_for_status()
issues = response.json()
for issue in issues:
doc = self.parse_issue(issue)
if not self.include_prs and doc.metadata["is_pull_request"]:
continue
yield doc
if (
response.links
and response.links.get("next")
and (not self.page and not self.per_page)
):
url = response.links["next"]["url"]
else:
url = None
def parse_issue(self, issue: dict) -> Document:
"""Create Document objects from a list of GitHub issues."""
metadata = {
"url": issue["html_url"],
"title": issue["title"],
"creator": issue["user"]["login"],
"created_at": issue["created_at"],
"comments": issue["comments"],
"state": issue["state"],
"labels": [label["name"] for label in issue["labels"]],
"assignee": issue["assignee"]["login"] if issue["assignee"] else None,
"milestone": issue["milestone"]["title"] if issue["milestone"] else None,
"locked": issue["locked"],
"number": issue["number"],
"is_pull_request": "pull_request" in issue,
}
content = issue["body"] if issue["body"] is not None else ""
return Document(page_content=content, metadata=metadata)
@property
def query_params(self) -> str:
"""Create query parameters for GitHub API."""
labels = ",".join(self.labels) if self.labels else self.labels
query_params_dict = {
"milestone": self.milestone,
"state": self.state,
"assignee": self.assignee,
"creator": self.creator,
"mentioned": self.mentioned,
"labels": labels,
"sort": self.sort,
"direction": self.direction,
"since": self.since,
"page": self.page,
"per_page": self.per_page,
}
query_params_list = [
f"{k}={v}" for k, v in query_params_dict.items() if v is not None
]
query_params = "&".join(query_params_list)
return query_params
@property
def url(self) -> str:
"""Create URL for GitHub API."""
return f"{self.github_api_url}/repos/{self.repo}/issues?{self.query_params}"
class GithubFileLoader(BaseGitHubLoader, ABC):
"""Load GitHub File"""
branch: str = "main"
file_filter: Optional[Callable[[str], bool]]
def get_file_paths(self) -> List[Dict]:
base_url = (
f"{self.github_api_url}/repos/{self.repo}/git/trees/"
f"{self.branch}?recursive=1"
)
response = requests.get(base_url, headers=self.headers)
response.raise_for_status()
all_files = response.json()["tree"]
""" one element in all_files
{
'path': '.github',
'mode': '040000',
'type': 'tree',
'sha': '5dc46e6b38b22707894ced126270b15e2f22f64e',
'url': 'https://api.github.com/repos/langchain-ai/langchain/git/blobs/5dc46e6b38b22707894ced126270b15e2f22f64e'
}
"""
return [
f
for f in all_files
if not (self.file_filter and not self.file_filter(f["path"]))
]
def get_file_content_by_path(self, path: str) -> str:
queryparams = f"?ref={self.branch}" if self.branch else ""
base_url = (
f"{self.github_api_url}/repos/{self.repo}/contents/{path}{queryparams}"
)
response = requests.get(base_url, headers=self.headers)
response.raise_for_status()
if isinstance(response.json(), dict):
content_encoded = response.json()["content"]
return base64.b64decode(content_encoded).decode("utf-8")
return ""
def lazy_load(self) -> Iterator[Document]:
files = self.get_file_paths()
for file in files:
content = self.get_file_content_by_path(file["path"])
if content == "":
continue
metadata = {
"path": file["path"],
"sha": file["sha"],
"source": f"{self.github_api_url}/{self.repo}/{file['type']}/"
f"{self.branch}/{file['path']}",
}
yield Document(page_content=content, metadata=metadata)
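# Example usage (a minimal sketch; assumes GITHUB_PERSONAL_ACCESS_TOKEN is set
# in the environment, from which the access token is read by the validator above):
#
#   issues_loader = GitHubIssuesLoader(
#       repo="langchain-ai/langchain", include_prs=False, state="open"
#   )
#   file_loader = GithubFileLoader(
#       repo="langchain-ai/langchain",
#       branch="master",
#       file_filter=lambda path: path.endswith(".md"),
#   )
#   docs = issues_loader.load() + file_loader.load()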
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/slack_directory.py | import json
import zipfile
from pathlib import Path
from typing import Dict, Iterator, List, Optional, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class SlackDirectoryLoader(BaseLoader):
"""Load from a `Slack` directory dump."""
def __init__(self, zip_path: Union[str, Path], workspace_url: Optional[str] = None):
"""Initialize the SlackDirectoryLoader.
Args:
zip_path (str): The path to the Slack directory dump zip file.
workspace_url (Optional[str]): The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
"""
self.zip_path = Path(zip_path)
self.workspace_url = workspace_url
self.channel_id_map = self._get_channel_id_map(self.zip_path)
@staticmethod
def _get_channel_id_map(zip_path: Path) -> Dict[str, str]:
"""Get a dictionary mapping channel names to their respective IDs."""
with zipfile.ZipFile(zip_path, "r") as zip_file:
try:
with zip_file.open("channels.json", "r") as f:
channels = json.load(f)
return {channel["name"]: channel["id"] for channel in channels}
except KeyError:
return {}
def lazy_load(self) -> Iterator[Document]:
"""Load and return documents from the Slack directory dump."""
with zipfile.ZipFile(self.zip_path, "r") as zip_file:
for channel_path in zip_file.namelist():
channel_name = Path(channel_path).parent.name
if not channel_name:
continue
if channel_path.endswith(".json"):
messages = self._read_json(zip_file, channel_path)
for message in messages:
yield self._convert_message_to_document(message, channel_name)
def _read_json(self, zip_file: zipfile.ZipFile, file_path: str) -> List[dict]:
"""Read JSON data from a zip subfile."""
with zip_file.open(file_path, "r") as f:
data = json.load(f)
return data
def _convert_message_to_document(
self, message: dict, channel_name: str
) -> Document:
"""
Convert a message to a Document object.
Args:
message (dict): A message in the form of a dictionary.
channel_name (str): The name of the channel the message belongs to.
Returns:
Document: A Document object representing the message.
"""
text = message.get("text", "")
metadata = self._get_message_metadata(message, channel_name)
return Document(
page_content=text,
metadata=metadata,
)
def _get_message_metadata(self, message: dict, channel_name: str) -> dict:
"""Create and return metadata for a given message and channel."""
timestamp = message.get("ts", "")
user = message.get("user", "")
source = self._get_message_source(channel_name, user, timestamp)
return {
"source": source,
"channel": channel_name,
"timestamp": timestamp,
"user": user,
}
def _get_message_source(self, channel_name: str, user: str, timestamp: str) -> str:
"""
Get the message source as a string.
Args:
channel_name (str): The name of the channel the message belongs to.
user (str): The user ID who sent the message.
timestamp (str): The timestamp of the message.
Returns:
str: The message source.
"""
if self.workspace_url:
channel_id = self.channel_id_map.get(channel_name, "")
return (
f"{self.workspace_url}/archives/{channel_id}"
+ f"/p{timestamp.replace('.', '')}"
)
else:
return f"{channel_name} - {user} - {timestamp}"
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/srt.py | from pathlib import Path
from typing import List, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class SRTLoader(BaseLoader):
"""Load `.srt` (subtitle) files."""
def __init__(self, file_path: Union[str, Path]):
"""Initialize with a file path."""
try:
import pysrt # noqa:F401
except ImportError:
raise ImportError(
"package `pysrt` not found, please install it with `pip install pysrt`"
)
self.file_path = str(file_path)
def load(self) -> List[Document]:
"""Load using pysrt file."""
import pysrt
parsed_info = pysrt.open(self.file_path)
text = " ".join([t.text for t in parsed_info])
metadata = {"source": self.file_path}
return [Document(page_content=text, metadata=metadata)]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/onedrive.py | from typing import Any
from pydantic import Field
from langchain_community.document_loaders import SharePointLoader
class OneDriveLoader(SharePointLoader):
"""
Load documents from Microsoft OneDrive.
Uses `SharePointLoader` under the hood.
"""
drive_id: str = Field(...)
"""The ID of the OneDrive drive to load data from."""
def __init__(self, **kwargs: Any) -> None:
kwargs["document_library_id"] = kwargs["drive_id"]
super().__init__(**kwargs)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/dropbox.py | # Prerequisites:
# 1. Create a Dropbox app.
# 2. Give the app these scope permissions: `files.metadata.read`
# and `files.content.read`.
# 3. Generate access token: https://www.dropbox.com/developers/apps/create.
# 4. `pip install dropbox` (requires `pip install unstructured[pdf]` for PDF filetype).
import os
import tempfile
from pathlib import Path
from typing import Any, Dict, List, Optional
from langchain_core.documents import Document
from pydantic import BaseModel, model_validator
from langchain_community.document_loaders.base import BaseLoader
class DropboxLoader(BaseLoader, BaseModel):
"""Load files from `Dropbox`.
In addition to common files such as text and PDF files, it also supports
*Dropbox Paper* files.
"""
dropbox_access_token: str
"""Dropbox access token."""
dropbox_folder_path: Optional[str] = None
"""The folder path to load from."""
dropbox_file_paths: Optional[List[str]] = None
"""The file paths to load from."""
recursive: bool = False
"""Flag to indicate whether to load files recursively from subfolders."""
@model_validator(mode="before")
@classmethod
def validate_inputs(cls, values: Dict[str, Any]) -> Any:
"""Validate that either folder_path or file_paths is set, but not both."""
if (
values.get("dropbox_folder_path") is not None
and values.get("dropbox_file_paths") is not None
):
raise ValueError("Cannot specify both folder_path and file_paths")
if values.get("dropbox_folder_path") is None and not values.get(
"dropbox_file_paths"
):
raise ValueError("Must specify either folder_path or file_paths")
return values
def _create_dropbox_client(self) -> Any:
"""Create a Dropbox client."""
try:
from dropbox import Dropbox, exceptions
except ImportError:
raise ImportError("You must run " "`pip install dropbox")
try:
dbx = Dropbox(self.dropbox_access_token)
dbx.users_get_current_account()
except exceptions.AuthError as ex:
raise ValueError(
"Invalid Dropbox access token. Please verify your token and try again."
) from ex
return dbx
def _load_documents_from_folder(self, folder_path: str) -> List[Document]:
"""Load documents from a Dropbox folder."""
dbx = self._create_dropbox_client()
try:
from dropbox import exceptions
from dropbox.files import FileMetadata
except ImportError:
raise ImportError("You must run " "`pip install dropbox")
try:
results = dbx.files_list_folder(folder_path, recursive=self.recursive)
except exceptions.ApiError as ex:
raise ValueError(
f"Could not list files in the folder: {folder_path}. "
"Please verify the folder path and try again."
) from ex
files = [entry for entry in results.entries if isinstance(entry, FileMetadata)]
documents = [
doc
for doc in (self._load_file_from_path(file.path_display) for file in files)
if doc is not None
]
return documents
def _load_file_from_path(self, file_path: str) -> Optional[Document]:
"""Load a file from a Dropbox path."""
dbx = self._create_dropbox_client()
try:
from dropbox import exceptions
except ImportError:
raise ImportError("You must run " "`pip install dropbox")
try:
file_metadata = dbx.files_get_metadata(file_path)
if file_metadata.is_downloadable:
_, response = dbx.files_download(file_path)
# Some types such as Paper, need to be exported.
elif file_metadata.export_info:
_, response = dbx.files_export(file_path, "markdown")
except exceptions.ApiError as ex:
raise ValueError(
f"Could not load file: {file_path}. Please verify the file path"
"and try again."
) from ex
try:
text = response.content.decode("utf-8")
except UnicodeDecodeError:
file_extension = os.path.splitext(file_path)[1].lower()
if file_extension == ".pdf":
print(f"File {file_path} type detected as .pdf") # noqa: T201
from langchain_community.document_loaders import UnstructuredPDFLoader
# Download it to a temporary file.
temp_dir = tempfile.TemporaryDirectory()
temp_pdf = Path(temp_dir.name) / "tmp.pdf"
with open(temp_pdf, mode="wb") as f:
f.write(response.content)
try:
loader = UnstructuredPDFLoader(str(temp_pdf))
docs = loader.load()
if docs:
return docs[0]
except Exception as pdf_ex:
print(f"Error while trying to parse PDF {file_path}: {pdf_ex}") # noqa: T201
return None
else:
print( # noqa: T201
f"File {file_path} could not be decoded as pdf or text. Skipping."
)
return None
metadata = {
"source": f"dropbox://{file_path}",
"title": os.path.basename(file_path),
}
return Document(page_content=text, metadata=metadata)
def _load_documents_from_paths(self) -> List[Document]:
"""Load documents from a list of Dropbox file paths."""
if not self.dropbox_file_paths:
raise ValueError("file_paths must be set")
return [
doc
for doc in (
self._load_file_from_path(file_path)
for file_path in self.dropbox_file_paths
)
if doc is not None
]
def load(self) -> List[Document]:
"""Load documents."""
if self.dropbox_folder_path is not None:
return self._load_documents_from_folder(self.dropbox_folder_path)
else:
return self._load_documents_from_paths()
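# Minimal usage sketch. The access token is a placeholder; an empty
# dropbox_folder_path refers to the root folder of the account.
if __name__ == "__main__":
    loader = DropboxLoader(
        dropbox_access_token="sl.placeholder-token",
        dropbox_folder_path="",
        recursive=False,
    )
    for doc in loader.load():
        print(doc.metadata["source"])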
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/sql_database.py | from typing import Any, Callable, Dict, Iterator, List, Optional, Sequence, Union
from sqlalchemy.engine import RowMapping
from sqlalchemy.sql.expression import Select
from langchain_community.docstore.document import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.sql_database import SQLDatabase
class SQLDatabaseLoader(BaseLoader):
"""
Load documents by querying database tables supported by SQLAlchemy.
For talking to the database, the document loader uses the `SQLDatabase`
utility from the LangChain integration toolkit.
Each document represents one row of the result.
"""
def __init__(
self,
query: Union[str, Select],
db: SQLDatabase,
*,
parameters: Optional[Dict[str, Any]] = None,
page_content_mapper: Optional[Callable[..., str]] = None,
metadata_mapper: Optional[Callable[..., Dict[str, Any]]] = None,
source_columns: Optional[Sequence[str]] = None,
include_rownum_into_metadata: bool = False,
include_query_into_metadata: bool = False,
):
"""
Args:
query: The query to execute.
db: A LangChain `SQLDatabase`, wrapping an SQLAlchemy engine.
parameters: Optional. Parameters to pass to the query.
page_content_mapper: Optional. Function to convert a row into a string
to use as the `page_content` of the document. By default, the loader
serializes the whole row into a string, including all columns.
metadata_mapper: Optional. Function to convert a row into a dictionary
to use as the `metadata` of the document. By default, no columns are
selected into the metadata dictionary.
source_columns: Optional. The names of the columns to use as the `source`
within the metadata dictionary.
include_rownum_into_metadata: Optional. Whether to include the row number
into the metadata dictionary. Default: False.
include_query_into_metadata: Optional. Whether to include the query
expression into the metadata dictionary. Default: False.
"""
self.query = query
self.db: SQLDatabase = db
self.parameters = parameters or {}
self.page_content_mapper = (
page_content_mapper or self.page_content_default_mapper
)
self.metadata_mapper = metadata_mapper or self.metadata_default_mapper
self.source_columns = source_columns
self.include_rownum_into_metadata = include_rownum_into_metadata
self.include_query_into_metadata = include_query_into_metadata
def lazy_load(self) -> Iterator[Document]:
try:
import sqlalchemy as sa
except ImportError:
raise ImportError(
"Could not import sqlalchemy python package. "
"Please install it with `pip install sqlalchemy`."
)
# Querying in `cursor` fetch mode will return an SQLAlchemy `Result` instance.
result: sa.Result[Any]
# Invoke the database query.
if isinstance(self.query, sa.SelectBase):
result = self.db._execute( # type: ignore[assignment]
self.query, fetch="cursor", parameters=self.parameters
)
query_sql = str(self.query.compile(bind=self.db._engine))
elif isinstance(self.query, str):
result = self.db._execute( # type: ignore[assignment]
sa.text(self.query), fetch="cursor", parameters=self.parameters
)
query_sql = self.query
else:
raise TypeError(f"Unable to process query of unknown type: {self.query}")
# Iterate database result rows and generate list of documents.
for i, row in enumerate(result.mappings()):
page_content = self.page_content_mapper(row)
metadata = self.metadata_mapper(row)
if self.include_rownum_into_metadata:
metadata["row"] = i
if self.include_query_into_metadata:
metadata["query"] = query_sql
source_values = []
for column, value in row.items():
if self.source_columns and column in self.source_columns:
source_values.append(value)
if source_values:
metadata["source"] = ",".join(source_values)
yield Document(page_content=page_content, metadata=metadata)
@staticmethod
def page_content_default_mapper(
row: RowMapping, column_names: Optional[List[str]] = None
) -> str:
"""
A reasonable default function to convert a record into a "page content" string.
"""
if column_names is None:
column_names = list(row.keys())
return "\n".join(
f"{column}: {value}"
for column, value in row.items()
if column in column_names
)
@staticmethod
def metadata_default_mapper(
row: RowMapping, column_names: Optional[List[str]] = None
) -> Dict[str, Any]:
"""
A reasonable default function to convert a record into a "metadata" dictionary.
"""
if column_names is None:
return {}
metadata: Dict[str, Any] = {}
for column, value in row.items():
if column in column_names:
metadata[column] = value
return metadata
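# Minimal usage sketch against an in-memory SQLite database; the table and rows
# are invented purely for illustration.
if __name__ == "__main__":
    import sqlalchemy as sa

    engine = sa.create_engine("sqlite://")
    with engine.begin() as connection:
        connection.execute(sa.text("CREATE TABLE books (title TEXT, author TEXT)"))
        connection.execute(
            sa.text("INSERT INTO books VALUES ('Dune', 'Frank Herbert')")
        )
    loader = SQLDatabaseLoader(
        query="SELECT * FROM books",
        db=SQLDatabase(engine),
        source_columns=["title"],
        include_rownum_into_metadata=True,
    )
    for doc in loader.lazy_load():
        print(doc.page_content, doc.metadata)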
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/mastodon.py | from __future__ import annotations
import os
from typing import (
TYPE_CHECKING,
Any,
Dict,
Iterable,
Iterator,
List,
Optional,
Sequence,
)
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
if TYPE_CHECKING:
import mastodon
def _dependable_mastodon_import() -> mastodon:
try:
import mastodon
except ImportError:
raise ImportError(
"Mastodon.py package not found, "
"please install it with `pip install Mastodon.py`"
)
return mastodon
class MastodonTootsLoader(BaseLoader):
"""Load the `Mastodon` 'toots'."""
def __init__(
self,
mastodon_accounts: Sequence[str],
number_toots: Optional[int] = 100,
exclude_replies: bool = False,
access_token: Optional[str] = None,
api_base_url: str = "https://mastodon.social",
):
"""Instantiate Mastodon toots loader.
Args:
mastodon_accounts: The list of Mastodon accounts to query.
number_toots: How many toots to pull for each account. Defaults to 100.
exclude_replies: Whether to exclude reply toots from the load.
Defaults to False.
access_token: An access token if toots are loaded as a Mastodon app. Can
                also be specified via the environment variable "MASTODON_ACCESS_TOKEN".
api_base_url: A Mastodon API base URL to talk to, if not using the default.
Defaults to "https://mastodon.social".
"""
mastodon = _dependable_mastodon_import()
access_token = access_token or os.environ.get("MASTODON_ACCESS_TOKEN")
self.api = mastodon.Mastodon(
access_token=access_token, api_base_url=api_base_url
)
self.mastodon_accounts = mastodon_accounts
self.number_toots = number_toots
self.exclude_replies = exclude_replies
def lazy_load(self) -> Iterator[Document]:
"""Load toots into documents."""
for account in self.mastodon_accounts:
user = self.api.account_lookup(account)
toots = self.api.account_statuses(
user.id,
only_media=False,
pinned=False,
exclude_replies=self.exclude_replies,
exclude_reblogs=True,
limit=self.number_toots,
)
yield from self._format_toots(toots, user)
def _format_toots(
self, toots: List[Dict[str, Any]], user_info: dict
) -> Iterable[Document]:
"""Format toots into documents.
Adding user info, and selected toot fields into the metadata.
"""
for toot in toots:
metadata = {
"created_at": toot["created_at"],
"user_info": user_info,
"is_reply": toot["in_reply_to_id"] is not None,
}
yield Document(
page_content=toot["content"],
metadata=metadata,
)
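# Minimal usage sketch. Public accounts on mastodon.social can usually be read
# without an access token; the handle below is only an example.
if __name__ == "__main__":
    loader = MastodonTootsLoader(
        mastodon_accounts=["@Gargron@mastodon.social"],
        number_toots=5,
    )
    for doc in loader.lazy_load():
        print(doc.metadata["created_at"])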
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/kinetica_loader.py | from __future__ import annotations
from typing import Any, Dict, Iterator, List, Optional, Tuple
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class KineticaLoader(BaseLoader):
"""Load from `Kinetica` API.
Each document represents one row of the result. The `page_content_columns`
are written into the `page_content` of the document. The `metadata_columns`
are written into the `metadata` of the document. By default, all columns
are written into the `page_content` and none into the `metadata`.
"""
def __init__(
self,
query: str,
host: str,
username: str,
password: str,
parameters: Optional[Dict[str, Any]] = None,
page_content_columns: Optional[List[str]] = None,
metadata_columns: Optional[List[str]] = None,
):
"""Initialize Kinetica document loader.
Args:
query: The query to run in Kinetica.
parameters: Optional. Parameters to pass to the query.
page_content_columns: Optional. Columns written to Document `page_content`.
metadata_columns: Optional. Columns written to Document `metadata`.
"""
self.query = query
self.host = host
self.username = username
self.password = password
self.parameters = parameters
self.page_content_columns = page_content_columns
self.metadata_columns = metadata_columns if metadata_columns is not None else []
def _execute_query(self) -> List[Dict[str, Any]]:
try:
from gpudb import GPUdb, GPUdbSqlIterator
except ImportError:
raise ImportError(
"Could not import Kinetica python API. "
"Please install it with `pip install gpudb==7.2.0.9`."
)
try:
options = GPUdb.Options()
options.username = self.username
options.password = self.password
conn = GPUdb(host=self.host, options=options)
with GPUdbSqlIterator(conn, self.query) as records:
column_names = records.type_map.keys()
query_result = [dict(zip(column_names, record)) for record in records]
except Exception as e:
print(f"An error occurred: {e}") # noqa: T201
query_result = []
return query_result
    def _get_columns(
        self, query_result: List[Dict[str, Any]]
    ) -> Tuple[List[str], List[str]]:
        page_content_columns = (
            self.page_content_columns if self.page_content_columns else []
        )
        metadata_columns = self.metadata_columns if self.metadata_columns else []
        # Default to all columns for the page content when none were specified.
        if not page_content_columns and query_result:
            page_content_columns = list(query_result[0].keys())
        return page_content_columns, metadata_columns
def lazy_load(self) -> Iterator[Document]:
query_result = self._execute_query()
if isinstance(query_result, Exception):
print(f"An error occurred during the query: {query_result}") # noqa: T201
return [] # type: ignore[return-value]
page_content_columns, metadata_columns = self._get_columns(query_result)
if "*" in page_content_columns:
page_content_columns = list(query_result[0].keys())
for row in query_result:
page_content = "\n".join(
f"{k}: {v}" for k, v in row.items() if k in page_content_columns
)
metadata = {k: v for k, v in row.items() if k in metadata_columns}
doc = Document(page_content=page_content, metadata=metadata)
yield doc
def load(self) -> List[Document]:
"""Load data into document objects."""
return list(self.lazy_load())
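# Minimal usage sketch; host, credentials, and the queried table are
# placeholders for a locally running Kinetica instance.
if __name__ == "__main__":
    loader = KineticaLoader(
        query="SELECT * FROM demo.movies LIMIT 10",  # hypothetical table
        host="http://localhost:9191",
        username="admin",
        password="password",
        metadata_columns=["title"],
    )
    for doc in loader.lazy_load():
        print(doc.metadata)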
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/rss.py | import logging
from typing import Any, Iterator, List, Optional, Sequence
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.news import NewsURLLoader
logger = logging.getLogger(__name__)
class RSSFeedLoader(BaseLoader):
"""Load news articles from `RSS` feeds using `Unstructured`.
Args:
        urls: URLs for RSS feeds to load. Each article in the feed is loaded into its own document.
        opml: OPML file to load feed URLs from. Only one of urls or opml should be provided. The value
can be a URL string, or OPML markup contents as byte or string.
continue_on_failure: If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar: If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, ``pip install tqdm``.
**newsloader_kwargs: Any additional named arguments to pass to
NewsURLLoader.
Example:
.. code-block:: python
from langchain_community.document_loaders import RSSFeedLoader
loader = RSSFeedLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
The loader uses feedparser to parse RSS feeds. The feedparser library is not installed by default so you should
install it if using this loader:
https://pythonhosted.org/feedparser/
If you use OPML, you should also install listparser:
https://pythonhosted.org/listparser/
Finally, newspaper is used to process each article:
https://newspaper.readthedocs.io/en/latest/
""" # noqa: E501
def __init__(
self,
urls: Optional[Sequence[str]] = None,
opml: Optional[str] = None,
continue_on_failure: bool = True,
show_progress_bar: bool = False,
**newsloader_kwargs: Any,
) -> None:
"""Initialize with urls or OPML."""
if (urls is None) == (
opml is None
): # This is True if both are None or neither is None
raise ValueError(
"Provide either the urls or the opml argument, but not both."
)
self.urls = urls
self.opml = opml
self.continue_on_failure = continue_on_failure
self.show_progress_bar = show_progress_bar
self.newsloader_kwargs = newsloader_kwargs
def load(self) -> List[Document]:
iter = self.lazy_load()
if self.show_progress_bar:
try:
from tqdm import tqdm
except ImportError as e:
raise ImportError(
"Package tqdm must be installed if show_progress_bar=True. "
"Please install with 'pip install tqdm' or set "
"show_progress_bar=False."
) from e
iter = tqdm(iter)
return list(iter)
@property
def _get_urls(self) -> Sequence[str]:
if self.urls:
return self.urls
try:
import listparser
except ImportError as e:
raise ImportError(
"Package listparser must be installed if the opml arg is used. "
"Please install with 'pip install listparser' or use the "
"urls arg instead."
) from e
rss = listparser.parse(self.opml)
return [feed.url for feed in rss.feeds]
def lazy_load(self) -> Iterator[Document]:
try:
import feedparser
except ImportError:
raise ImportError(
"feedparser package not found, please install it with "
"`pip install feedparser`"
)
for url in self._get_urls:
try:
feed = feedparser.parse(url)
if getattr(feed, "bozo", False):
raise ValueError(
f"Error fetching {url}, exception: {feed.bozo_exception}"
)
except Exception as e:
if self.continue_on_failure:
logger.error(f"Error fetching {url}, exception: {e}")
continue
else:
raise e
try:
for entry in feed.entries:
loader = NewsURLLoader(
urls=[entry.link],
**self.newsloader_kwargs,
)
article = loader.load()[0]
article.metadata["feed"] = url
yield article
except Exception as e:
if self.continue_on_failure:
logger.error(f"Error processing entry {entry.link}, exception: {e}")
continue
else:
raise e
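# Complementary sketch showing the OPML code path instead of explicit URLs.
# The OPML markup and feed URL below are made up; `listparser` must be
# installed for this variant.
if __name__ == "__main__":
    opml = """<?xml version="1.0" encoding="UTF-8"?>
    <opml version="2.0">
      <body>
        <outline type="rss" xmlUrl="https://news.example.com/rss.xml"/>
      </body>
    </opml>"""
    loader = RSSFeedLoader(opml=opml)
    docs = loader.load()
    print(len(docs))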
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/base_o365.py | """Base class for all loaders that uses O365 Package"""
from __future__ import annotations
import logging
import mimetypes
import os
import tempfile
from abc import abstractmethod
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union
from pydantic import (
BaseModel,
Field,
FilePath,
PrivateAttr,
SecretStr,
)
from pydantic_settings import BaseSettings, SettingsConfigDict
from langchain_community.document_loaders.base import BaseBlobParser, BaseLoader
from langchain_community.document_loaders.blob_loaders.file_system import (
FileSystemBlobLoader,
)
from langchain_community.document_loaders.blob_loaders.schema import Blob
from langchain_community.document_loaders.parsers.generic import MimeTypeBasedParser
from langchain_community.document_loaders.parsers.registry import get_parser
if TYPE_CHECKING:
from O365 import Account
from O365.drive import Drive, Folder
logger = logging.getLogger(__name__)
CHUNK_SIZE = 1024 * 1024 * 5
class _O365Settings(BaseSettings):
client_id: str = Field(..., alias="O365_CLIENT_ID")
client_secret: SecretStr = Field(..., alias="O365_CLIENT_SECRET")
model_config = SettingsConfigDict(
case_sensitive=False, env_file=".env", env_prefix="", extra="ignore"
)
class _O365TokenStorage(BaseSettings):
token_path: FilePath = Path.home() / ".credentials" / "o365_token.txt"
def fetch_mime_types(file_types: Sequence[str]) -> Dict[str, str]:
"""Fetch the mime types for the specified file types."""
mime_types_mapping = {}
for ext in file_types:
mime_type, _ = mimetypes.guess_type(f"file.{ext}")
if mime_type:
mime_types_mapping[ext] = mime_type
else:
raise ValueError(f"Unknown mimetype of extention {ext}")
return mime_types_mapping
def fetch_extensions(mime_types: Sequence[str]) -> Dict[str, str]:
"""Fetch the mime types for the specified file types."""
mime_types_mapping = {}
for mime_type in mime_types:
ext = mimetypes.guess_extension(mime_type)
if ext:
mime_types_mapping[ext[1:]] = mime_type # ignore leading `.`
else:
raise ValueError(f"Unknown mimetype {mime_type}")
return mime_types_mapping
class O365BaseLoader(BaseLoader, BaseModel):
"""Base class for all loaders that uses O365 Package"""
settings: _O365Settings = Field(default_factory=_O365Settings) # type: ignore[arg-type]
"""Settings for the Office365 API client."""
auth_with_token: bool = False
"""Whether to authenticate with a token or not. Defaults to False."""
chunk_size: Union[int, str] = CHUNK_SIZE
"""Number of bytes to retrieve from each api call to the server. int or 'auto'."""
recursive: bool = False
"""Should the loader recursively load subfolders?"""
handlers: Optional[Dict[str, Any]] = {}
"""
Provide custom handlers for MimeTypeBasedParser.
Pass a dictionary mapping either file extensions (like "doc", "pdf", etc.)
or MIME types (like "application/pdf", "text/plain", etc.) to parsers.
Note that you must use either file extensions or MIME types exclusively and
cannot mix them.
Do not include the leading dot for file extensions.
Example using file extensions:
```python
handlers = {
"doc": MsWordParser(),
"pdf": PDFMinerParser(),
"txt": TextParser()
}
```
Example using MIME types:
```python
handlers = {
"application/msword": MsWordParser(),
"application/pdf": PDFMinerParser(),
"text/plain": TextParser()
}
```
"""
_blob_parser: BaseBlobParser = PrivateAttr()
_file_types: Sequence[str] = PrivateAttr()
_mime_types: Dict[str, str] = PrivateAttr()
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
if self.handlers:
handler_keys = list(self.handlers.keys())
try:
# assume handlers.keys() are file extensions
self._mime_types = fetch_mime_types(handler_keys)
self._file_types = list(set(handler_keys))
mime_handlers = {
self._mime_types[extension]: handler
for extension, handler in self.handlers.items()
}
except ValueError:
try:
# assume handlers.keys() are mime types
self._mime_types = fetch_extensions(handler_keys)
self._file_types = list(set(self._mime_types.keys()))
mime_handlers = self.handlers
except ValueError:
raise ValueError(
"`handlers` keys must be either file extensions or mimetypes.\n"
f"{handler_keys} could not be interpreted as either.\n"
"File extensions and mimetypes cannot mix. "
"Use either one or the other"
)
self._blob_parser = MimeTypeBasedParser(
handlers=mime_handlers, fallback_parser=None
)
else:
self._blob_parser = get_parser("default")
if not isinstance(self._blob_parser, MimeTypeBasedParser):
raise TypeError(
'get_parser("default) was supposed to return MimeTypeBasedParser.'
f"It returned {type(self._blob_parser)}"
)
self._mime_types = fetch_extensions(list(self._blob_parser.handlers.keys()))
@property
def _fetch_mime_types(self) -> Dict[str, str]:
"""Return a dict of supported file types to corresponding mime types."""
return self._mime_types
@property
@abstractmethod
def _scopes(self) -> List[str]:
"""Return required scopes."""
def _load_from_folder(self, folder: Folder) -> Iterable[Blob]:
"""Lazily load all files from a specified folder of the configured MIME type.
Args:
folder: The Folder instance from which the files are to be loaded. This
Folder instance should represent a directory in a file system where the
files are stored.
Yields:
An iterator that yields Blob instances, which are binary representations of
the files loaded from the folder.
"""
file_mime_types = self._fetch_mime_types
items = folder.get_items()
metadata_dict: Dict[str, Dict[str, Any]] = {}
with tempfile.TemporaryDirectory() as temp_dir:
os.makedirs(os.path.dirname(temp_dir), exist_ok=True)
for file in items:
if file.is_file:
if file.mime_type in list(file_mime_types.values()):
file.download(to_path=temp_dir, chunk_size=self.chunk_size)
metadata_dict[file.name] = {
"source": file.web_url,
"mime_type": file.mime_type,
"created": str(file.created),
"modified": str(file.modified),
"created_by": str(file.created_by),
"modified_by": str(file.modified_by),
"description": file.description,
"id": str(file.object_id),
}
loader = FileSystemBlobLoader(path=temp_dir)
for blob in loader.yield_blobs():
if not isinstance(blob.path, PurePath):
raise NotImplementedError("Expected blob path to be a PurePath")
if blob.path:
file_metadata_ = metadata_dict.get(str(blob.path.name), {})
blob.metadata.update(file_metadata_)
yield blob
if self.recursive:
for subfolder in folder.get_child_folders():
yield from self._load_from_folder(subfolder)
def _load_from_object_ids(
self, drive: Drive, object_ids: List[str]
) -> Iterable[Blob]:
"""Lazily load files specified by their object_ids from a drive.
Load files into the system as binary large objects (Blobs) and return Iterable.
Args:
drive: The Drive instance from which the files are to be loaded. This Drive
instance should represent a cloud storage service or similar storage
system where the files are stored.
object_ids: A list of object_id strings. Each object_id represents a unique
identifier for a file in the drive.
Yields:
An iterator that yields Blob instances, which are binary representations of
the files loaded from the drive using the specified object_ids.
"""
file_mime_types = self._fetch_mime_types
metadata_dict: Dict[str, Dict[str, Any]] = {}
with tempfile.TemporaryDirectory() as temp_dir:
for object_id in object_ids:
file = drive.get_item(object_id)
if not file:
logging.warning(
"There isn't a file with"
f"object_id {object_id} in drive {drive}."
)
continue
if file.is_file:
if file.mime_type in list(file_mime_types.values()):
file.download(to_path=temp_dir, chunk_size=self.chunk_size)
metadata_dict[file.name] = {
"source": file.web_url,
"mime_type": file.mime_type,
"created": file.created,
"modified": file.modified,
"created_by": str(file.created_by),
"modified_by": str(file.modified_by),
"description": file.description,
"id": str(file.object_id),
}
loader = FileSystemBlobLoader(path=temp_dir)
for blob in loader.yield_blobs():
if not isinstance(blob.path, PurePath):
raise NotImplementedError("Expected blob path to be a PurePath")
if blob.path:
file_metadata_ = metadata_dict.get(str(blob.path.name), {})
blob.metadata.update(file_metadata_)
yield blob
def _auth(self) -> Account:
"""Authenticates the OneDrive API client
Returns:
The authenticated Account object.
"""
try:
from O365 import Account, FileSystemTokenBackend
except ImportError:
raise ImportError(
"O365 package not found, please install it with `pip install o365`"
)
if self.auth_with_token:
token_storage = _O365TokenStorage()
token_path = token_storage.token_path
token_backend = FileSystemTokenBackend(
token_path=token_path.parent, token_filename=token_path.name
)
account = Account(
credentials=(
self.settings.client_id,
self.settings.client_secret.get_secret_value(),
),
scopes=self._scopes,
token_backend=token_backend,
**{"raise_http_errors": False},
)
else:
token_backend = FileSystemTokenBackend(
token_path=Path.home() / ".credentials"
)
account = Account(
credentials=(
self.settings.client_id,
self.settings.client_secret.get_secret_value(),
),
scopes=self._scopes,
token_backend=token_backend,
**{"raise_http_errors": False},
)
# make the auth
account.authenticate()
return account
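# Small self-contained sketch of the two extension/MIME-type helpers above;
# no O365 credentials are needed for this part.
if __name__ == "__main__":
    print(fetch_mime_types(["pdf", "txt"]))
    # -> {'pdf': 'application/pdf', 'txt': 'text/plain'}
    print(fetch_extensions(["application/pdf", "text/plain"]))
    # -> {'pdf': 'application/pdf', 'txt': 'text/plain'}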
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/dataframe.py | from typing import Any, Iterator, Literal
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class BaseDataFrameLoader(BaseLoader):
def __init__(self, data_frame: Any, *, page_content_column: str = "text"):
"""Initialize with dataframe object.
Args:
data_frame: DataFrame object.
page_content_column: Name of the column containing the page content.
Defaults to "text".
"""
self.data_frame = data_frame
self.page_content_column = page_content_column
def lazy_load(self) -> Iterator[Document]:
"""Lazy load records from dataframe."""
for _, row in self.data_frame.iterrows():
text = row[self.page_content_column]
metadata = row.to_dict()
metadata.pop(self.page_content_column)
yield Document(page_content=text, metadata=metadata)
class DataFrameLoader(BaseDataFrameLoader):
"""Load `Pandas` DataFrame."""
def __init__(
self,
data_frame: Any,
page_content_column: str = "text",
engine: Literal["pandas", "modin"] = "pandas",
):
"""Initialize with dataframe object.
Args:
data_frame: Pandas DataFrame object.
page_content_column: Name of the column containing the page content.
Defaults to "text".
"""
try:
if engine == "pandas":
import pandas as pd
elif engine == "modin":
import modin.pandas as pd
else:
raise ValueError(
f"Unsupported engine {engine}. Must be one of 'pandas', or 'modin'."
)
except ImportError as e:
raise ImportError(
"Unable to import pandas, please install with `pip install pandas`."
) from e
if not isinstance(data_frame, pd.DataFrame):
raise ValueError(
f"Expected data_frame to be a pd.DataFrame, got {type(data_frame)}"
)
super().__init__(data_frame, page_content_column=page_content_column)
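# Minimal usage sketch with a small in-memory pandas DataFrame; the remaining
# columns end up in each Document's metadata.
if __name__ == "__main__":
    import pandas as pd

    df = pd.DataFrame({"text": ["first row", "second row"], "category": ["a", "b"]})
    loader = DataFrameLoader(df, page_content_column="text")
    for doc in loader.lazy_load():
        print(doc.page_content, doc.metadata)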
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/sitemap.py | import itertools
import re
from typing import (
Any,
Callable,
Dict,
Generator,
Iterable,
Iterator,
List,
Optional,
Tuple,
)
from urllib.parse import urlparse
from langchain_core.documents import Document
from langchain_community.document_loaders.web_base import WebBaseLoader
def _default_parsing_function(content: Any) -> str:
return str(content.get_text())
def _default_meta_function(meta: dict, _content: Any) -> dict:
return {"source": meta["loc"], **meta}
def _batch_block(iterable: Iterable, size: int) -> Generator[List[dict], None, None]:
it = iter(iterable)
while item := list(itertools.islice(it, size)):
yield item
def _extract_scheme_and_domain(url: str) -> Tuple[str, str]:
"""Extract the scheme + domain from a given URL.
Args:
url (str): The input URL.
Returns:
return a 2-tuple of scheme and domain
"""
parsed_uri = urlparse(url)
return parsed_uri.scheme, parsed_uri.netloc
class SitemapLoader(WebBaseLoader):
"""Load a sitemap and its URLs.
**Security Note**: This loader can be used to load all URLs specified in a sitemap.
If a malicious actor gets access to the sitemap, they could force
the server to load URLs from other domains by modifying the sitemap.
This could lead to server-side request forgery (SSRF) attacks; e.g.,
with the attacker forcing the server to load URLs from internal
service endpoints that are not publicly accessible. While the attacker
may not immediately gain access to this data, this data could leak
into downstream systems (e.g., data loader is used to load data for indexing).
This loader is a crawler and web crawlers should generally NOT be deployed
with network access to any internal servers.
Control access to who can submit crawling requests and what network access
the crawler has.
By default, the loader will only load URLs from the same domain as the sitemap
if the site map is not a local file. This can be disabled by setting
restrict_to_same_domain to False (not recommended).
If the site map is a local file, no such risk mitigation is applied by default.
Use the filter URLs argument to limit which URLs can be loaded.
See https://python.langchain.com/docs/security
"""
def __init__(
self,
web_path: str,
filter_urls: Optional[List[str]] = None,
parsing_function: Optional[Callable] = None,
blocksize: Optional[int] = None,
blocknum: int = 0,
meta_function: Optional[Callable] = None,
is_local: bool = False,
continue_on_failure: bool = False,
restrict_to_same_domain: bool = True,
max_depth: int = 10,
**kwargs: Any,
):
"""Initialize with webpage path and optional filter URLs.
Args:
web_path: url of the sitemap. can also be a local path
filter_urls: a list of regexes. If specified, only
URLS that match one of the filter URLs will be loaded.
*WARNING* The filter URLs are interpreted as regular expressions.
Remember to escape special characters if you do not want them to be
interpreted as regular expression syntax. For example, `.` appears
frequently in URLs and should be escaped if you want to match a literal
`.` rather than any character.
restrict_to_same_domain takes precedence over filter_urls when
restrict_to_same_domain is True and the sitemap is not a local file.
parsing_function: Function to parse bs4.Soup output
blocksize: number of sitemap locations per block
blocknum: the number of the block that should be loaded - zero indexed.
Default: 0
meta_function: Function to parse bs4.Soup output for metadata
remember when setting this method to also copy metadata["loc"]
to metadata["source"] if you are using this field
is_local: whether the sitemap is a local file. Default: False
continue_on_failure: whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
restrict_to_same_domain: whether to restrict loading to URLs to the same
domain as the sitemap. Attention: This is only applied if the sitemap
is not a local file!
max_depth: maximum depth to follow sitemap links. Default: 10
"""
if blocksize is not None and blocksize < 1:
raise ValueError("Sitemap blocksize should be at least 1")
if blocknum < 0:
raise ValueError("Sitemap blocknum can not be lower then 0")
try:
import lxml # noqa:F401
except ImportError:
raise ImportError(
"lxml package not found, please install it with `pip install lxml`"
)
super().__init__(web_paths=[web_path], **kwargs)
# Define a list of URL patterns (interpreted as regular expressions) that
# will be allowed to be loaded.
# restrict_to_same_domain takes precedence over filter_urls when
# restrict_to_same_domain is True and the sitemap is not a local file.
self.allow_url_patterns = filter_urls
self.restrict_to_same_domain = restrict_to_same_domain
self.parsing_function = parsing_function or _default_parsing_function
self.meta_function = meta_function or _default_meta_function
self.blocksize = blocksize
self.blocknum = blocknum
self.is_local = is_local
self.continue_on_failure = continue_on_failure
self.max_depth = max_depth
def parse_sitemap(self, soup: Any, *, depth: int = 0) -> List[dict]:
"""Parse sitemap xml and load into a list of dicts.
Args:
soup: BeautifulSoup object.
depth: current depth of the sitemap. Default: 0
Returns:
List of dicts.
"""
if depth >= self.max_depth:
return []
els: List[Dict] = []
for url in soup.find_all("url"):
loc = url.find("loc")
if not loc:
continue
# Strip leading and trailing whitespace and newlines
loc_text = loc.text.strip()
if self.restrict_to_same_domain and not self.is_local:
if _extract_scheme_and_domain(loc_text) != _extract_scheme_and_domain(
self.web_path
):
continue
if self.allow_url_patterns and not any(
re.match(regexp_pattern, loc_text)
for regexp_pattern in self.allow_url_patterns
):
continue
els.append(
{
tag: prop.text
for tag in ["loc", "lastmod", "changefreq", "priority"]
if (prop := url.find(tag))
}
)
for sitemap in soup.find_all("sitemap"):
loc = sitemap.find("loc")
if not loc:
continue
soup_child = self.scrape_all([loc.text], "xml")[0]
els.extend(self.parse_sitemap(soup_child, depth=depth + 1))
return els
def lazy_load(self) -> Iterator[Document]:
"""Load sitemap."""
if self.is_local:
try:
import bs4
except ImportError:
raise ImportError(
"beautifulsoup4 package not found, please install it"
" with `pip install beautifulsoup4`"
)
            with open(self.web_path) as fp:
                soup = bs4.BeautifulSoup(fp, "xml")
else:
soup = self._scrape(self.web_path, parser="xml")
els = self.parse_sitemap(soup)
if self.blocksize is not None:
elblocks = list(_batch_block(els, self.blocksize))
blockcount = len(elblocks)
if blockcount - 1 < self.blocknum:
raise ValueError(
"Selected sitemap does not contain enough blocks for given blocknum"
)
else:
els = elblocks[self.blocknum]
results = self.scrape_all([el["loc"].strip() for el in els if "loc" in el])
for i, result in enumerate(results):
yield Document(
page_content=self.parsing_function(result),
metadata=self.meta_function(els[i], result),
)
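# Minimal usage sketch; the sitemap URL and filter pattern are examples only,
# and `lxml` must be installed.
if __name__ == "__main__":
    loader = SitemapLoader(
        "https://www.example.com/sitemap.xml",
        filter_urls=[r"https://www\.example\.com/blog/.*"],
        continue_on_failure=True,
    )
    docs = loader.load()
    print(len(docs))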
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/python.py | import tokenize
from pathlib import Path
from typing import Union
from langchain_community.document_loaders.text import TextLoader
class PythonLoader(TextLoader):
"""Load `Python` files, respecting any non-default encoding if specified."""
def __init__(self, file_path: Union[str, Path]):
"""Initialize with a file path.
Args:
file_path: The path to the file to load.
"""
with open(file_path, "rb") as f:
encoding, _ = tokenize.detect_encoding(f.readline)
super().__init__(file_path=file_path, encoding=encoding)
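# Minimal usage sketch: load this very module, respecting its declared encoding.
if __name__ == "__main__":
    loader = PythonLoader(__file__)
    docs = loader.load()
    print(docs[0].metadata, len(docs[0].page_content))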
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/couchbase.py | import logging
from typing import Iterator, List, Optional
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
class CouchbaseLoader(BaseLoader):
"""Load documents from `Couchbase`.
Each document represents one row of the result. The `page_content_fields` are
written into the `page_content`of the document. The `metadata_fields` are written
into the `metadata` of the document. By default, all columns are written into
the `page_content` and none into the `metadata`.
"""
def __init__(
self,
connection_string: str,
db_username: str,
db_password: str,
query: str,
*,
page_content_fields: Optional[List[str]] = None,
metadata_fields: Optional[List[str]] = None,
) -> None:
"""Initialize Couchbase document loader.
Args:
connection_string (str): The connection string to the Couchbase cluster.
db_username (str): The username to connect to the Couchbase cluster.
db_password (str): The password to connect to the Couchbase cluster.
query (str): The SQL++ query to execute.
page_content_fields (Optional[List[str]]): The columns to write into the
`page_content` field of the document. By default, all columns are
written.
metadata_fields (Optional[List[str]]): The columns to write into the
`metadata` field of the document. By default, no columns are written.
"""
try:
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
except ImportError as e:
raise ImportError(
"Could not import couchbase package."
"Please install couchbase SDK with `pip install couchbase`."
) from e
if not connection_string:
raise ValueError("connection_string must be provided.")
if not db_username:
raise ValueError("db_username must be provided.")
if not db_password:
raise ValueError("db_password must be provided.")
auth = PasswordAuthenticator(
db_username,
db_password,
)
self.cluster: Cluster = Cluster(connection_string, ClusterOptions(auth))
self.query = query
self.page_content_fields = page_content_fields
self.metadata_fields = metadata_fields
def lazy_load(self) -> Iterator[Document]:
"""Load Couchbase data into Document objects lazily."""
from datetime import timedelta
# Ensure connection to Couchbase cluster
self.cluster.wait_until_ready(timedelta(seconds=5))
# Run SQL++ Query
result = self.cluster.query(self.query)
for row in result:
metadata_fields = self.metadata_fields
page_content_fields = self.page_content_fields
if not page_content_fields:
page_content_fields = list(row.keys())
if not metadata_fields:
metadata_fields = []
metadata = {field: row[field] for field in metadata_fields}
document = "\n".join(
f"{k}: {v}" for k, v in row.items() if k in page_content_fields
)
yield (Document(page_content=document, metadata=metadata))
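# Minimal usage sketch; the connection string, credentials, and SQL++ query are
# placeholders for a local Couchbase cluster with the travel-sample bucket.
if __name__ == "__main__":
    loader = CouchbaseLoader(
        connection_string="couchbase://localhost",
        db_username="Administrator",
        db_password="password",
        query="SELECT h.name, h.city FROM `travel-sample`.inventory.hotel h LIMIT 5",
        metadata_fields=["name"],
    )
    for doc in loader.lazy_load():
        print(doc.metadata)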
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/tsv.py | from pathlib import Path
from typing import Any, List, Union
from langchain_community.document_loaders.unstructured import (
UnstructuredFileLoader,
validate_unstructured_version,
)
class UnstructuredTSVLoader(UnstructuredFileLoader):
"""Load `TSV` files using `Unstructured`.
    Like other Unstructured loaders, UnstructuredTSVLoader can be used in both
    "single" and "elements" mode. In "elements" mode, the TSV file is loaded
    as a single Unstructured Table element, and an HTML representation of the
    table is available under the "text_as_html" key in the document metadata.
Examples
--------
from langchain_community.document_loaders.tsv import UnstructuredTSVLoader
loader = UnstructuredTSVLoader("stanley-cups.tsv", mode="elements")
docs = loader.load()
"""
def __init__(
self,
file_path: Union[str, Path],
mode: str = "single",
**unstructured_kwargs: Any,
):
validate_unstructured_version(min_unstructured_version="0.7.6")
super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)
def _get_elements(self) -> List:
from unstructured.partition.tsv import partition_tsv
return partition_tsv(filename=self.file_path, **self.unstructured_kwargs) # type: ignore[arg-type]
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/recursive_url_loader.py | from __future__ import annotations
import asyncio
import inspect
import logging
import re
from typing import (
Callable,
Iterator,
List,
Optional,
Sequence,
Set,
Union,
cast,
)
import aiohttp
import requests
from langchain_core.documents import Document
from langchain_core.utils.html import extract_sub_links
from langchain_community.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
def _metadata_extractor(
raw_html: str, url: str, response: Union[requests.Response, aiohttp.ClientResponse]
) -> dict:
"""Extract metadata from raw html using BeautifulSoup."""
content_type = getattr(response, "headers").get("Content-Type", "")
metadata = {"source": url, "content_type": content_type}
try:
from bs4 import BeautifulSoup
except ImportError:
logger.warning(
"The bs4 package is required for default metadata extraction. "
"Please install it with `pip install -U beautifulsoup4`."
)
return metadata
soup = BeautifulSoup(raw_html, "html.parser")
if title := soup.find("title"):
metadata["title"] = title.get_text()
if description := soup.find("meta", attrs={"name": "description"}):
metadata["description"] = description.get("content", None)
if html := soup.find("html"):
metadata["language"] = html.get("lang", None)
return metadata
class RecursiveUrlLoader(BaseLoader):
"""Recursively load all child links from a root URL.
**Security Note**: This loader is a crawler that will start crawling
at a given URL and then expand to crawl child links recursively.
Web crawlers should generally NOT be deployed with network access
to any internal servers.
Control access to who can submit crawling requests and what network access
the crawler has.
While crawling, the crawler may encounter malicious URLs that would lead to a
server-side request forgery (SSRF) attack.
To mitigate risks, the crawler by default will only load URLs from the same
domain as the start URL (controlled via prevent_outside named argument).
This will mitigate the risk of SSRF attacks, but will not eliminate it.
For example, if crawling a host which hosts several sites:
https://some_host/alice_site/
https://some_host/bob_site/
A malicious URL on Alice's site could cause the crawler to make a malicious
GET request to an endpoint on Bob's site. Both sites are hosted on the
same host, so such a request would not be prevented by default.
See https://python.langchain.com/docs/security/
Setup:
This class has no required additional dependencies. You can optionally install
``beautifulsoup4`` for richer default metadata extraction:
.. code-block:: bash
pip install -U beautifulsoup4
Instantiate:
.. code-block:: python
from langchain_community.document_loaders import RecursiveUrlLoader
loader = RecursiveUrlLoader(
"https://docs.python.org/3.9/",
# max_depth=2,
# use_async=False,
# extractor=None,
# metadata_extractor=None,
# exclude_dirs=(),
# timeout=10,
# check_response_status=True,
# continue_on_failure=True,
# prevent_outside=True,
# base_url=None,
# ...
)
Lazy load:
.. code-block:: python
docs = []
docs_lazy = loader.lazy_load()
# async variant:
# docs_lazy = await loader.alazy_load()
for doc in docs_lazy:
docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" /><
{'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html', 'title': '3.9.19 Documentation', 'language': None}
Async load:
.. code-block:: python
docs = await loader.aload()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" /><
{'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html', 'title': '3.9.19 Documentation', 'language': None}
Content parsing / extraction:
By default the loader sets the raw HTML from each link as the Document page
content. To parse this HTML into a more human/LLM-friendly format you can pass
in a custom ``extractor`` method:
.. code-block:: python
# This example uses `beautifulsoup4` and `lxml`
import re
from bs4 import BeautifulSoup
def bs4_extractor(html: str) -> str:
soup = BeautifulSoup(html, "lxml")
return re.sub(r"\n\n+", "\n\n", soup.text).strip()
loader = RecursiveUrlLoader(
"https://docs.python.org/3.9/",
extractor=bs4_extractor,
)
print(loader.load()[0].page_content[:200])
.. code-block:: python
3.9.19 Documentation
Download
Download these documents
Docs by version
Python 3.13 (in development)
Python 3.12 (stable)
Python 3.11 (security-fixes)
Python 3.10 (security-fixes)
Python 3.9 (securit
Metadata extraction:
Similarly to content extraction, you can specify a metadata extraction function
to customize how Document metadata is extracted from the HTTP response.
.. code-block:: python
import aiohttp
import requests
from typing import Union
def simple_metadata_extractor(
raw_html: str, url: str, response: Union[requests.Response, aiohttp.ClientResponse]
) -> dict:
content_type = getattr(response, "headers").get("Content-Type", "")
return {"source": url, "content_type": content_type}
loader = RecursiveUrlLoader(
"https://docs.python.org/3.9/",
metadata_extractor=simple_metadata_extractor,
)
loader.load()[0].metadata
.. code-block:: python
{'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html'}
Filtering URLs:
You may not always want to pull every URL from a website. There are four parameters
that allow us to control what URLs we pull recursively. First, we can set the
``prevent_outside`` parameter to prevent URLs outside of the ``base_url`` from
being pulled. Note that the ``base_url`` does not need to be the same as the URL we
pass in, as shown below. We can also use ``link_regex`` and ``exclude_dirs`` to be
more specific with the URLs that we select. In this example, we only pull websites
from the python docs, which contain the string "index" somewhere and are not
located in the FAQ section of the website.
.. code-block:: python
loader = RecursiveUrlLoader(
"https://docs.python.org/3.9/",
prevent_outside=True,
base_url="https://docs.python.org",
link_regex=r'<a\\s+(?:[^>]*?\\s+)?href="([^"]*(?=index)[^"]*)"',
exclude_dirs=['https://docs.python.org/3.9/faq']
)
docs = loader.load()
.. code-block:: python
['https://docs.python.org/3.9/',
'https://docs.python.org/3.9/py-modindex.html',
'https://docs.python.org/3.9/genindex.html',
'https://docs.python.org/3.9/tutorial/index.html',
'https://docs.python.org/3.9/using/index.html',
'https://docs.python.org/3.9/extending/index.html',
'https://docs.python.org/3.9/installing/index.html',
'https://docs.python.org/3.9/library/index.html',
'https://docs.python.org/3.9/c-api/index.html',
'https://docs.python.org/3.9/howto/index.html',
'https://docs.python.org/3.9/distributing/index.html',
'https://docs.python.org/3.9/reference/index.html',
'https://docs.python.org/3.9/whatsnew/index.html']
""" # noqa: E501
def __init__(
self,
url: str,
max_depth: Optional[int] = 2,
use_async: Optional[bool] = None,
extractor: Optional[Callable[[str], str]] = None,
metadata_extractor: Optional[_MetadataExtractorType] = None,
exclude_dirs: Optional[Sequence[str]] = (),
timeout: Optional[int] = 10,
prevent_outside: bool = True,
link_regex: Union[str, re.Pattern, None] = None,
headers: Optional[dict] = None,
check_response_status: bool = False,
continue_on_failure: bool = True,
*,
base_url: Optional[str] = None,
autoset_encoding: bool = True,
encoding: Optional[str] = None,
proxies: Optional[dict] = None,
) -> None:
"""Initialize with URL to crawl and any subdirectories to exclude.
Args:
url: The URL to crawl.
max_depth: The max depth of the recursive loading.
use_async: Whether to use asynchronous loading.
If True, lazy_load function will not be lazy, but it will still work in the
expected way, just not lazy.
extractor: A function to extract document contents from raw HTML.
When extract function returns an empty string, the document is
ignored. Default returns the raw HTML.
metadata_extractor: A function to extract metadata from args: raw HTML, the
source url, and the requests.Response/aiohttp.ClientResponse object
(args in that order).
Default extractor will attempt to use BeautifulSoup4 to extract the
title, description and language of the page.
                .. code-block:: python
import requests
import aiohttp
def simple_metadata_extractor(
raw_html: str, url: str, response: Union[requests.Response, aiohttp.ClientResponse]
) -> dict:
content_type = getattr(response, "headers").get("Content-Type", "")
return {"source": url, "content_type": content_type}
exclude_dirs: A list of subdirectories to exclude.
timeout: The timeout for the requests, in the unit of seconds. If None then
connection will not timeout.
prevent_outside: If True, prevent loading from urls which are not children
of the root url.
link_regex: Regex for extracting sub-links from the raw html of a web page.
headers: Default request headers to use for all requests.
check_response_status: If True, check HTTP response status and skip
URLs with error responses (400-599).
continue_on_failure: If True, continue if getting or parsing a link raises
an exception. Otherwise, raise the exception.
base_url: The base url to check for outside links against.
autoset_encoding: Whether to automatically set the encoding of the response.
If True, the encoding of the response will be set to the apparent
encoding, unless the `encoding` argument has already been explicitly set.
encoding: The encoding of the response. If manually set, the encoding will be
set to given value, regardless of the `autoset_encoding` argument.
proxies: A dictionary mapping protocol names to the proxy URLs to be used for requests.
This allows the crawler to route its requests through specified proxy servers.
If None, no proxies will be used and requests will go directly to the target URL.
Example usage:
                .. code-block:: python
proxies = {
"http": "http://10.10.1.10:3128",
"https": "https://10.10.1.10:1080",
}
""" # noqa: E501
self.url = url
self.max_depth = max_depth if max_depth is not None else 2
self.use_async = use_async if use_async is not None else False
self.extractor = extractor if extractor is not None else lambda x: x
metadata_extractor = (
metadata_extractor
if metadata_extractor is not None
else _metadata_extractor
)
self.autoset_encoding = autoset_encoding
self.encoding = encoding
self.metadata_extractor = _wrap_metadata_extractor(metadata_extractor)
self.exclude_dirs = exclude_dirs if exclude_dirs is not None else ()
if any(url.startswith(exclude_dir) for exclude_dir in self.exclude_dirs):
raise ValueError(
f"Base url is included in exclude_dirs. Received base_url: {url} and "
f"exclude_dirs: {self.exclude_dirs}"
)
self.timeout = timeout
self.prevent_outside = prevent_outside if prevent_outside is not None else True
self.link_regex = link_regex
self.headers = headers
self.check_response_status = check_response_status
self.continue_on_failure = continue_on_failure
self.base_url = base_url if base_url is not None else url
self.proxies = proxies
def _get_child_links_recursive(
self, url: str, visited: Set[str], *, depth: int = 0
) -> Iterator[Document]:
"""Recursively get all child links starting with the path of the input URL.
Args:
url: The URL to crawl.
visited: A set of visited URLs.
depth: Current depth of recursion. Stop when depth >= max_depth.
"""
if depth >= self.max_depth:
return
# Get all links that can be accessed from the current URL
visited.add(url)
try:
response = requests.get(
url, timeout=self.timeout, headers=self.headers, proxies=self.proxies
)
if self.encoding is not None:
response.encoding = self.encoding
elif self.autoset_encoding:
response.encoding = response.apparent_encoding
if self.check_response_status and 400 <= response.status_code <= 599:
raise ValueError(f"Received HTTP status {response.status_code}")
except Exception as e:
if self.continue_on_failure:
logger.warning(
f"Unable to load from {url}. Received error {e} of type "
f"{e.__class__.__name__}"
)
return
else:
raise e
content = self.extractor(response.text)
if content:
yield Document(
page_content=content,
metadata=self.metadata_extractor(response.text, url, response),
)
# Store the visited links and recursively visit the children
sub_links = extract_sub_links(
response.text,
url,
base_url=self.base_url,
pattern=self.link_regex,
prevent_outside=self.prevent_outside,
exclude_prefixes=self.exclude_dirs,
continue_on_failure=self.continue_on_failure,
)
for link in sub_links:
# Check all unvisited links
if link not in visited:
yield from self._get_child_links_recursive(
link, visited, depth=depth + 1
)
async def _async_get_child_links_recursive(
self,
url: str,
visited: Set[str],
*,
session: Optional[aiohttp.ClientSession] = None,
depth: int = 0,
) -> List[Document]:
"""Recursively get all child links starting with the path of the input URL.
Args:
url: The URL to crawl.
visited: A set of visited URLs.
depth: To reach the current url, how many pages have been visited.
"""
if not self.use_async:
raise ValueError(
"Async functions forbidden when not initialized with `use_async`"
)
if depth >= self.max_depth:
return []
# Disable SSL verification because websites may have invalid SSL certificates,
# but won't cause any security issues for us.
close_session = session is None
session = (
session
if session is not None
else aiohttp.ClientSession(
connector=aiohttp.TCPConnector(ssl=False),
timeout=aiohttp.ClientTimeout(total=self.timeout),
headers=self.headers,
)
)
visited.add(url)
try:
async with session.get(url) as response:
text = await response.text()
if self.check_response_status and 400 <= response.status <= 599:
raise ValueError(f"Received HTTP status {response.status}")
except (aiohttp.client_exceptions.InvalidURL, Exception) as e:
if close_session:
await session.close()
if self.continue_on_failure:
logger.warning(
f"Unable to load {url}. Received error {e} of type "
f"{e.__class__.__name__}"
)
return []
else:
raise e
results = []
content = self.extractor(text)
if content:
results.append(
Document(
page_content=content,
metadata=self.metadata_extractor(text, url, response),
)
)
if depth < self.max_depth - 1:
sub_links = extract_sub_links(
text,
url,
base_url=self.base_url,
pattern=self.link_regex,
prevent_outside=self.prevent_outside,
exclude_prefixes=self.exclude_dirs,
continue_on_failure=self.continue_on_failure,
)
# Recursively call the function to get the children of the children
sub_tasks = []
to_visit = set(sub_links).difference(visited)
for link in to_visit:
sub_tasks.append(
self._async_get_child_links_recursive(
link, visited, session=session, depth=depth + 1
)
)
next_results = await asyncio.gather(*sub_tasks)
for sub_result in next_results:
if isinstance(sub_result, Exception) or sub_result is None:
# We don't want to stop the whole process, so just ignore it
# Not standard html format or invalid url or 404 may cause this.
continue
# locking not fully working, temporary hack to ensure deduplication
results += [r for r in sub_result if r not in results]
if close_session:
await session.close()
return results
def lazy_load(self) -> Iterator[Document]:
"""Lazy load web pages.
When use_async is True, this function will not be lazy,
but it will still work in the expected way, just not lazy."""
visited: Set[str] = set()
if self.use_async:
results = asyncio.run(
self._async_get_child_links_recursive(self.url, visited)
)
return iter(results or [])
else:
return self._get_child_links_recursive(self.url, visited)
_MetadataExtractorType1 = Callable[[str, str], dict]
_MetadataExtractorType2 = Callable[
[str, str, Union[requests.Response, aiohttp.ClientResponse]], dict
]
_MetadataExtractorType = Union[_MetadataExtractorType1, _MetadataExtractorType2]
def _wrap_metadata_extractor(
metadata_extractor: _MetadataExtractorType,
) -> _MetadataExtractorType2:
if len(inspect.signature(metadata_extractor).parameters) == 3:
return cast(_MetadataExtractorType2, metadata_extractor)
else:
def _metadata_extractor_wrapper(
raw_html: str,
url: str,
response: Union[requests.Response, aiohttp.ClientResponse],
) -> dict:
return cast(_MetadataExtractorType1, metadata_extractor)(raw_html, url)
return _metadata_extractor_wrapper
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/document_loaders/conllu.py | import csv
from pathlib import Path
from typing import List, Union
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
class CoNLLULoader(BaseLoader):
"""Load `CoNLL-U` files."""
def __init__(self, file_path: Union[str, Path]):
"""Initialize with a file path."""
self.file_path = file_path
def load(self) -> List[Document]:
"""Load from a file path."""
with open(self.file_path, encoding="utf8") as f:
tsv = list(csv.reader(f, delimiter="\t"))
# If len(line) > 1, the line is not a comment
lines = [line for line in tsv if len(line) > 1]
text = ""
for i, line in enumerate(lines):
# Do not add a space after a punctuation mark or at the end of the sentence
if line[9] == "SpaceAfter=No" or i == len(lines) - 1:
text += line[1]
else:
text += line[1] + " "
metadata = {"source": str(self.file_path)}
return [Document(page_content=text, metadata=metadata)]
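# Minimal usage sketch (assumes a local CoNLL-U file, e.g. a sentence exported
# from one of the Universal Dependencies treebanks):
if __name__ == "__main__":
    loader = CoNLLULoader("example.conllu")
    docs = loader.load()
    print(docs[0].page_content)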
|