langchain.memory.summary.ConversationSummaryMemory¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_messages(llm: BaseLanguageModel, chat_memory: BaseChatMessageHistory, *, summarize_step: int = 2, **kwargs: Any) → ConversationSummaryMemory[source]¶
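A minimal sketch of from_messages, assuming an OpenAI key is configured; any BaseLanguageModel and BaseChatMessageHistory work in place of ChatOpenAI and ChatMessageHistory:
from langchain.chat_models import ChatOpenAI
from langchain.memory import ChatMessageHistory, ConversationSummaryMemory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")

# Summarize the pre-existing history, two messages per summarization step.
memory = ConversationSummaryMemory.from_messages(
    llm=ChatOpenAI(temperature=0),
    chat_memory=history,
    summarize_step=2,
)
print(memory.buffer)  # the running summary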
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ConversationSummaryMemory¶
How to use multiple memory classes in the same chain
https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html
langchain.memory.chat_message_histories.redis.RedisChatMessageHistory¶
class langchain.memory.chat_message_histories.redis.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]¶
Chat message history stored in a Redis database.
Attributes
key
Construct the record key to use
messages
Retrieve the messages from Redis
Methods
__init__(session_id[, url, key_prefix, ttl])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in Redis
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from Redis
__init__(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in Redis
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from Redis
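A usage sketch, assuming a Redis server reachable at the default redis://localhost:6379/0 and the redis client package installed; the session id and TTL are illustrative:
from langchain.memory.chat_message_histories import RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="user-42", ttl=600)
history.add_user_message("hi!")
history.add_ai_message("hello!")
print(history.messages)  # read back from Redis
history.clear()          # drop this session's record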
Examples using RedisChatMessageHistory¶
Redis Chat Message History
Adding Message Memory backed by a database to an Agent
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.redis.RedisChatMessageHistory.html
langchain.memory.chat_message_histories.sql.create_message_model¶
langchain.memory.chat_message_histories.sql.create_message_model(table_name, DynamicBase)[source]¶
Create a message model for a given table name.
Parameters
table_name – The name of the table to use.
DynamicBase – The base class to use for the model.
Returns
The model class.
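A sketch of how the returned model class pairs with a SQLAlchemy declarative base; the table name is illustrative:
from sqlalchemy.orm import declarative_base
from langchain.memory.chat_message_histories.sql import create_message_model

Base = declarative_base()
Message = create_message_model("my_message_store", Base)
print(Message.__tablename__)  # my_message_store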
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.create_message_model.html
langchain.memory.chat_message_histories.streamlit.StreamlitChatMessageHistory¶
class langchain.memory.chat_message_histories.streamlit.StreamlitChatMessageHistory(key: str = 'langchain_messages')[source]¶
Chat message history that stores messages in Streamlit session state.
Parameters
key – The key to use in Streamlit session state for storing messages.
Attributes
messages
Retrieve the current list of messages
Methods
__init__([key])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a message to the session memory
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory
__init__(key: str = 'langchain_messages')[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a message to the session memory
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory
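A sketch for use inside a Streamlit app (run with streamlit run app.py); st.chat_input assumes a recent Streamlit release:
import streamlit as st
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory

history = StreamlitChatMessageHistory(key="langchain_messages")
if prompt := st.chat_input("Say something"):
    history.add_user_message(prompt)
for msg in history.messages:  # persists across reruns via session state
    st.write(f"{msg.type}: {msg.content}")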
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.streamlit.StreamlitChatMessageHistory.html
langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory¶
class langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory(connection_string: str, session_id: str, database_name: str = 'chat_history', collection_name: str = 'message_store')[source]¶
Chat message history that stores history in MongoDB.
Parameters
connection_string – connection string to connect to MongoDB
session_id – arbitrary key that is used to store the messages
of a single chat session.
database_name – name of the database to use
collection_name – name of the collection to use
Attributes
messages
Retrieve the messages from MongoDB
Methods
__init__(connection_string, session_id[, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in MongoDB
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from MongoDB
__init__(connection_string: str, session_id: str, database_name: str = 'chat_history', collection_name: str = 'message_store')[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in MongoDB
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from MongoDB
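A usage sketch; the connection string is a placeholder and a reachable MongoDB plus the pymongo package are assumed:
from langchain.memory.chat_message_histories import MongoDBChatMessageHistory

history = MongoDBChatMessageHistory(
    connection_string="mongodb://user:password@localhost:27017",
    session_id="user-42",
)
history.add_user_message("hi!")
history.add_ai_message("hello!")
print(history.messages)  # read back from MongoDB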
Examples using MongoDBChatMessageHistory¶
Mongodb Chat Message History
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory.html
langchain.memory.motorhead_memory.MotorheadMemory¶
class langchain.memory.motorhead_memory.MotorheadMemory[source]¶
Bases: BaseChatMemory
Chat message memory backed by Motorhead service.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_key: Optional[str] = None¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param client_id: Optional[str] = None¶
param context: Optional[str] = None¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
param session_id: str [Required]¶
param url: str = 'https://api.getmetal.io/v1/motorhead'¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
delete_session() → None[source]¶
Delete a session
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
async init() → None[source]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(values: Dict[str, Any]) → Dict[str, Any][source]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
Examples using MotorheadMemory¶
Motörhead Memory
Motörhead Memory (Managed)
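A usage sketch; the URL points at a hypothetical local Motorhead server, and the async init() must be awaited before use:
import asyncio
from langchain.memory.motorhead_memory import MotorheadMemory

async def main() -> None:
    memory = MotorheadMemory(session_id="user-42", url="http://localhost:8080")
    await memory.init()  # hydrate context and messages from the server
    memory.save_context({"input": "hi!"}, {"output": "hello!"})
    print(memory.load_memory_variables({}))

asyncio.run(main())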
https://api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html
langchain.memory.simple.SimpleMemory¶
class langchain.memory.simple.SimpleMemory[source]¶
Bases: BaseMemory
Simple memory for storing context or other information that shouldn’t
ever change between prompts.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memories: Dict[str, Any] = {}¶
clear() → None[source]¶
Nothing to clear, got a memory like a vault.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Nothing should be saved or changed, my memory is set in stone.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
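A usage sketch; the keys and values are illustrative:
from langchain.memory.simple import SimpleMemory

memory = SimpleMemory(memories={"team": "Platform", "region": "eu-west-1"})
print(memory.memory_variables)           # ['team', 'region']
print(memory.load_memory_variables({}))  # the same dict back, every time
memory.save_context({}, {})              # a no-op by design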
https://api.python.langchain.com/en/latest/memory/langchain.memory.simple.SimpleMemory.html
langchain.memory.entity.BaseEntityStore¶
class langchain.memory.entity.BaseEntityStore[source]¶
Bases: BaseModel, ABC
Abstract base class for Entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract clear() → None[source]¶
Delete all entities from store.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
abstract delete(key: str) → None[source]¶
Delete entity value from store.
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
abstract exists(key: str) → bool[source]¶
Check if entity exists in store.
classmethod from_orm(obj: Any) → Model¶
abstract get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
abstract set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
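A sketch of the smallest possible concrete subclass, backed by a plain dict; langchain ships comparable built-in stores, so this only illustrates the abstract contract:
from typing import Dict, Optional
from langchain.memory.entity import BaseEntityStore

class DictEntityStore(BaseEntityStore):
    store: Dict[str, str] = {}

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        return self.store.get(key, default)

    def set(self, key: str, value: Optional[str]) -> None:
        if value is None:
            self.delete(key)
        else:
            self.store[key] = value

    def delete(self, key: str) -> None:
        self.store.pop(key, None)

    def exists(self, key: str) -> bool:
        return key in self.store

    def clear(self) -> None:
        self.store.clear()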
https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.BaseEntityStore.html
langchain.memory.chat_memory.BaseChatMemory¶
class langchain.memory.chat_memory.BaseChatMemory[source]¶
Bases: BaseMemory, ABC
Abstract base class for chat memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param chat_memory: langchain.schema.memory.BaseChatMessageHistory [Optional]¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
abstract load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
abstract property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
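A sketch of a concrete subclass filling in the two abstract members; the class and variable names are illustrative, and save_context and clear are inherited:
from typing import Any, Dict, List
from langchain.memory.chat_memory import BaseChatMemory

class LastMessageMemory(BaseChatMemory):
    """Expose only the most recent message to the chain."""

    @property
    def memory_variables(self) -> List[str]:
        return ["last_message"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        msgs = self.chat_memory.messages
        return {"last_message": msgs[-1].content if msgs else ""}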
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_memory.BaseChatMemory.html
langchain.memory.kg.ConversationKGMemory¶
class langchain.memory.kg.ConversationKGMemory[source]¶
Bases: BaseChatMemory
Knowledge graph conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param entity_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 2¶
Number of previous utterances to include in the context.
param kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]¶
param knowledge_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
param summary_message_cls: Type[langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_current_entities(input_string: str) → List[str][source]¶
get_knowledge_triplets(input_string: str) → List[KnowledgeTriple][source]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ConversationKGMemory¶
Conversation Knowledge Graph Memory
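A usage sketch in the spirit of the linked notebook, assuming an OpenAI key is configured:
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context(
    {"input": "Sam's favorite food is pizza"},
    {"output": "Good to know!"},
)
print(memory.load_memory_variables({"input": "What does Sam like?"}))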
https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html
langchain.memory.chat_message_histories.file.FileChatMessageHistory¶
class langchain.memory.chat_message_histories.file.FileChatMessageHistory(file_path: str)[source]¶
Chat message history that stores history in a local file.
Parameters
file_path – path of the local file to store the messages.
Attributes
messages
Retrieve the messages from the local file
Methods
__init__(file_path)
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in the local file
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from the local file
__init__(file_path: str)[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in the local file
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from the local file
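A usage sketch; the file path is illustrative:
from langchain.memory.chat_message_histories import FileChatMessageHistory

history = FileChatMessageHistory(file_path="chat_history.json")
history.add_user_message("hi!")
history.add_ai_message("hello!")
print(history.messages)  # re-read from chat_history.json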
Examples using FileChatMessageHistory¶
AutoGPT
https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.file.FileChatMessageHistory.html
langchain.document_loaders.image_captions.ImageCaptionLoader¶
class langchain.document_loaders.image_captions.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Loads the captions of an image
Initialize with a list of image paths
Parameters
path_images – A list of image paths.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
Methods
__init__(path_images[, blip_processor, ...])
Initialize with a list of image paths
lazy_load()
A lazy loader for Documents.
load()
Load from a list of image files
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Initialize with a list of image paths
Parameters
path_images – A list of image paths.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a list of image files
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
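A usage sketch; captioning pulls the BLIP checkpoints via the transformers package on first use, and the image paths are placeholders:
from langchain.document_loaders import ImageCaptionLoader

loader = ImageCaptionLoader(path_images=["photo1.jpg", "photo2.jpg"])
docs = loader.load()  # one Document per image, caption as page_content
print(docs[0].page_content)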
Examples using ImageCaptionLoader¶
Image captions
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html
langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Loader that uses the Unstructured API to load files.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredAPIFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredAPIFileIOLoader(
        f, mode="elements", strategy="fast", api_key="MY_API_KEY",
    )
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
__init__(file[, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html
langchain.document_loaders.facebook_chat.FacebookChatLoader¶
class langchain.document_loaders.facebook_chat.FacebookChatLoader(path: str)[source]¶
Loads Facebook messages from a JSON directory dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
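A usage sketch; the path is a placeholder for one message_*.json file from a Facebook data export:
from langchain.document_loaders import FacebookChatLoader

loader = FacebookChatLoader(path="messages/inbox/chat/message_1.json")
docs = loader.load()
print(docs[0].page_content[:200])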
Examples using FacebookChatLoader¶
Facebook Chat
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.FacebookChatLoader.html
langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Loads HTML to markdown using 2markdown.
Initialize with url and api key.
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, api_key: str)[source]¶
Initialize with url and api key.
lazy_load() → Iterator[Document][source]¶
Lazily load the file.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
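A usage sketch; the URL and API key are placeholders:
from langchain.document_loaders import ToMarkdownLoader

loader = ToMarkdownLoader(url="https://python.langchain.com", api_key="MY_2MARKDOWN_KEY")
docs = loader.load()
print(docs[0].page_content[:200])  # the page rendered as markdown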
Examples using ToMarkdownLoader¶
2Markdown
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html
langchain.document_loaders.gcs_file.GCSFileLoader¶
class langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load Documents from a GCS file.
Initialize with bucket and key name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…,
>>     loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
Methods
__init__(project_name, bucket, blob[, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Initialize with bucket and key name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…,
>>     loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
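A basic construction sketch to complement the loader_func examples above; it assumes the google-cloud-storage package and credentials with access to the bucket, and all names are placeholders:
from langchain.document_loaders import GCSFileLoader

loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="docs/report.pdf")
docs = loader.load()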
Examples using GCSFileLoader¶
Google Cloud Storage
Google Cloud Storage File
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html
langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
Methods
__init__(query[, lang, load_max_docs, ...])
Initializes a new instance of the WikipediaLoader class.
lazy_load()
A lazy loader for Documents.
load()
Loads the query result from Wikipedia into a list of Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
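A usage sketch; it assumes the wikipedia package is installed:
from langchain.document_loaders import WikipediaLoader

docs = WikipediaLoader(query="Alan Turing", load_max_docs=2).load()
print(len(docs), docs[0].metadata)  # per-page metadata such as title and source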
Examples using WikipediaLoader¶
Wikipedia
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html
langchain.document_loaders.bigquery.BigQueryLoader¶
class langchain.document_loaders.bigquery.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]¶
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize BigQuery document loader.
Parameters
query – The query to run in BigQuery.
project – Optional. The project to run the query in.
page_content_columns – Optional. The columns to write into the page_content
of the document.
metadata_columns – Optional. The columns to write into the metadata of the
document.
credentials – google.auth.credentials.Credentials, optional
Credentials for accessing Google APIs. Use this parameter to override
default credentials, such as to use Compute Engine
(google.auth.compute_engine.Credentials) or Service Account
(google.oauth2.service_account.Credentials) credentials directly.
Methods
__init__(query[, project, ...])
Initialize BigQuery document loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]¶
Initialize BigQuery document loader.
Parameters
query – The query to run in BigQuery.
project – Optional. The project to run the query in.
page_content_columns – Optional. The columns to write into the page_content
of the document.
metadata_columns – Optional. The columns to write into the metadata of the
document.
credentials – google.auth.credentials.Credentials, optional
Credentials for accessing Google APIs. Use this parameter to override
default credentials, such as to use Compute Engine
(google.auth.compute_engine.Credentials) or Service Account
(google.oauth2.service_account.Credentials) credentials directly.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
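A usage sketch; it assumes the google-cloud-bigquery package and project credentials, and the query and column names are placeholders:
from langchain.document_loaders import BigQueryLoader

QUERY = "SELECT title, body FROM `my-project.my_dataset.articles` LIMIT 10"
loader = BigQueryLoader(QUERY, page_content_columns=["body"], metadata_columns=["title"])
docs = loader.load()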
Examples using BigQueryLoader¶
Google BigQuery
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bigquery.BigQueryLoader.html
langchain.document_loaders.blockchain.BlockchainDocumentLoader¶
class langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
Methods
__init__(contract_address[, blockchainType, ...])
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html
|
0df0e65f94a2-1
|
param contract_address
The address of the smart contract.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
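A hedged usage sketch of the constructor above; the contract address is illustrative, and a real Alchemy API key is needed beyond the demo key:
from langchain.document_loaders.blockchain import (
    BlockchainDocumentLoader,
    BlockchainType,
)

loader = BlockchainDocumentLoader(
    contract_address="0x1a92f7381b9f03921564a437210bb9396471050c",  # illustrative NFT contract
    blockchainType=BlockchainType.ETH_MAINNET,
    api_key="docs-demo",  # demo key; replace with your Alchemy API key
)
docs = loader.load()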
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BlockchainDocumentLoader¶
Blockchain
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html
|
fbdfc93ff0ef-0
|
langchain.document_loaders.bilibili.BiliBiliLoader¶
class langchain.document_loaders.bilibili.BiliBiliLoader(video_urls: List[str])[source]¶
Loads bilibili transcripts.
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
Methods
__init__(video_urls)
Initialize with bilibili url.
lazy_load()
A lazy loader for Documents.
load()
Load Documents from bilibili url.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(video_urls: List[str])[source]¶
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
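A minimal sketch; the video URL is illustrative, and the bilibili-api package is assumed to be installed:
from langchain.document_loaders.bilibili import BiliBiliLoader

loader = BiliBiliLoader(video_urls=["https://www.bilibili.com/video/BV1xt411o7Xu/"])
docs = loader.load()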
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Documents from bilibili url.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BiliBiliLoader¶
BiliBili
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bilibili.BiliBiliLoader.html
|
6d0c57f806b0-0
|
langchain.document_loaders.airbyte.AirbyteGongLoader¶
class langchain.document_loaders.airbyte.AirbyteGongLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
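A hedged sketch of how this loader is typically constructed; the config keys and stream name follow the Airbyte Gong source spec and are illustrative, not verified here:
from langchain.document_loaders.airbyte import AirbyteGongLoader

config = {
    # illustrative fields; consult the Airbyte Gong source documentation
    "access_key": "<access key>",
    "access_key_secret": "<access key secret>",
    "start_date": "2023-01-01T00:00:00Z",
}
loader = AirbyteGongLoader(config=config, stream_name="calls")
docs = loader.load()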
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteGongLoader.html
|
b3b710409d8d-0
|
langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob[source]¶
Bases: BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
helps to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param data: Optional[Union[bytes, str]] = None¶
param encoding: str = 'utf-8'¶
param mimetype: Optional[str] = None¶
param path: Optional[Union[str, pathlib.PurePath]] = None¶
as_bytes() → bytes[source]¶
Read data as bytes.
as_bytes_io() → Generator[Union[BytesIO, BufferedReader], None, None][source]¶
Read data as a byte stream.
as_string() → str[source]¶
Read data as a string.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
|
b3b710409d8d-1
|
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) → Blob[source]¶
Initialize the blob from in-memory data.
Parameters
data – the in-memory data associated with the blob
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
path – if provided, will be set as the source from which the data came
Returns
Blob instance
classmethod from_orm(obj: Any) → Model¶
classmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) → Blob[source]¶
Load the blob from a path like object.
Parameters
path – path like object to file to be read
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
|
b3b710409d8d-2
|
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
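A short sketch of the two constructors above; ./example.txt is a hypothetical local file:
from langchain.document_loaders.blob_loaders.schema import Blob

blob = Blob.from_data("hello world", mime_type="text/plain")
assert blob.as_string() == "hello world"

file_blob = Blob.from_path("./example.txt")  # hypothetical path
data = file_blob.as_bytes()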
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
|
b3b710409d8d-3
|
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
Examples using Blob¶
Embaas
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
|
8393120070da-0
|
langchain.document_loaders.concurrent.ConcurrentLoader¶
class langchain.document_loaders.concurrent.ConcurrentLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4)[source]¶
A generic document loader that loads and parses documents concurrently.
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser[, num_workers])
A generic document loader.
from_filesystem(path, *[, glob, suffixes, ...])
Create a concurrent generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily with concurrent parsing.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into chunks.
__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4) → None[source]¶
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default', num_workers: int = 4) → ConcurrentLoader[source]¶
Create a concurrent generic document loader using a
filesystem blob loader.
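A minimal sketch of from_filesystem, assuming a local directory of text files:
from langchain.document_loaders.concurrent import ConcurrentLoader

loader = ConcurrentLoader.from_filesystem(
    "./my_docs",  # hypothetical directory
    suffixes=[".txt"],
    num_workers=4,
)
docs = loader.load()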
lazy_load() → Iterator[Document][source]¶
Load documents lazily with concurrent parsing.
load() → List[Document]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.concurrent.ConcurrentLoader.html
|
8393120070da-1
|
Load all documents and split them into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.concurrent.ConcurrentLoader.html
|
de107b303d82-0
|
langchain.document_loaders.github.BaseGitHubLoader¶
class langchain.document_loaders.github.BaseGitHubLoader[source]¶
Bases: BaseLoader, BaseModel, ABC
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param repo: str [Required]¶
Name of repository
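Because this class is abstract, it cannot be instantiated directly; a minimal sketch of a hypothetical concrete subclass might look like:
from typing import List

from langchain.docstore.document import Document
from langchain.document_loaders.github import BaseGitHubLoader

class MyGitHubLoader(BaseGitHubLoader):
    def load(self) -> List[Document]:
        # self.headers carries the Authorization header built from access_token
        return [Document(page_content="placeholder", metadata={"repo": self.repo})]

loader = MyGitHubLoader(repo="hwchase17/langchain", access_token="<personal access token>")
docs = loader.load()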
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html
|
de107b303d82-1
|
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html
|
de107b303d82-2
|
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property headers: Dict[str, str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html
|
51ea3f1af148-0
|
langchain.document_loaders.acreom.AcreomLoader¶
class langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Loader that loads acreom vault from a directory.
Attributes
FRONT_MATTER_REGEX
Regex to match front matter metadata in markdown files.
file_path
Path to the directory containing the markdown files.
encoding
Encoding to use when reading the files.
collect_metadata
Whether to collect metadata from the front matter.
Methods
__init__(path[, encoding, collect_metadata])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
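A minimal sketch; the vault path is hypothetical:
from langchain.document_loaders.acreom import AcreomLoader

loader = AcreomLoader("./acreom_vault", collect_metadata=True)
docs = loader.load()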
Examples using AcreomLoader¶
acreom
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html
|
7bb080ea5ff4-0
|
langchain.document_loaders.airbyte.AirbyteHubspotLoader¶
class langchain.document_loaders.airbyte.AirbyteHubspotLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteHubspotLoader.html
|
f14c7542ae07-0
|
langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str)[source]¶
Loader that uses PyMuPDF to load PDF files.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load(**kwargs)
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(**kwargs: Optional[Any]) → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
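A minimal sketch; the file path is hypothetical, and the pymupdf package must be installed:
from langchain.document_loaders.pdf import PyMuPDFLoader

loader = PyMuPDFLoader("./example.pdf")
docs = loader.load()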
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html
|
d417089da208-0
|
langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader¶
class langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader(file_path: str)[source]¶
Loader that uses PDFMiner to load PDF files as HTML content.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html
|
e8210d6f6890-0
|
langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader¶
class langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]¶
Loading Documents from Azure Blob Storage.
Initialize with connection string, container and blob name.
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
blob
Blob name.
Methods
__init__(conn_str, container, blob_name)
Initialize with connection string, container and blob name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conn_str: str, container: str, blob_name: str)[source]¶
Initialize with connection string, container and blob name.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
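A hedged sketch; the connection string, container, and blob name are placeholders:
from langchain.document_loaders.azure_blob_storage_file import (
    AzureBlobStorageFileLoader,
)

loader = AzureBlobStorageFileLoader(
    conn_str="<connection string>",
    container="my-container",
    blob_name="report.pdf",
)
docs = loader.load()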
Examples using AzureBlobStorageFileLoader¶
Azure Blob Storage
Azure Blob Storage File
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader.html
|
9067bf14b7ea-0
|
langchain.document_loaders.word_document.Docx2txtLoader¶
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)[source]¶
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it will
download it to a temporary file, use that, and then clean up the temporary file
after completion.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load given path as single page.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load given path as single page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using Docx2txtLoader¶
Microsoft Word
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.Docx2txtLoader.html
|
5f8f338124c8-0
|
langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Loader that uses urllib to load .txt web files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
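A minimal sketch; the loader expects a URL to a plain-text file, and this Project Gutenberg URL is illustrative:
from langchain.document_loaders.gutenberg import GutenbergLoader

loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
docs = loader.load()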
Examples using GutenbergLoader¶
Gutenberg
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html
|
9182342ec256-0
|
langchain.document_loaders.rocksetdb.RocksetLoader¶
class langchain.document_loaders.rocksetdb.RocksetLoader(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typing.Any]]], str] = <function default_joiner>)[source]¶
Wrapper around Rockset DB.
To use, you should have the rockset python package installed.
Example
# This code will load 3 records from the "langchain_demo"
# collection as Documents, with the `text` column used as
# the content
from langchain.document_loaders import RocksetLoader
from rockset import RocksetClient, Regions, models
loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(
        query="select * from langchain_demo limit 3"
    ),
    ["text"]
)
Initialize with Rockset client.
Parameters
client – Rockset client object.
query – Rockset query object.
content_keys – The collection columns to be written into the page_content
of the Documents.
metadata_keys – The collection columns to be written into the metadata of
the Documents. By default, this is all the keys in the document.
content_columns_joiner – Method that joins content_keys and their values into a
string. It takes in a List[Tuple[str, Any]],
representing a list of tuples of (column name, column value).
By default, this is a method that joins each column value with a new
line. This method is only relevant if there are multiple content_keys.
Methods
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html
|
9182342ec256-1
|
__init__(client, query, content_keys[, ...])
Initialize with Rockset client.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typing.Any]]], str] = <function default_joiner>)[source]¶
Initialize with Rockset client.
Parameters
client – Rockset client object.
query – Rockset query object.
content_keys – The collection columns to be written into the page_content
of the Documents.
metadata_keys – The collection columns to be written into the metadata of
the Documents. By default, this is all the keys in the document.
content_columns_joiner – Method that joins content_keys and their values into a
string. It takes in a List[Tuple[str, Any]],
representing a list of tuples of (column name, column value).
By default, this is a method that joins each column value with a new
line. This method is only relevant if there are multiple content_keys.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html
|
9182342ec256-2
|
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RocksetLoader¶
Rockset
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html
|
6dc757a539a4-0
|
langchain.document_loaders.url_playwright.PlaywrightURLLoader¶
class langchain.document_loaders.url_playwright.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]¶
Loader that uses Playwright to load a page and unstructured to load the html.
This is useful for loading pages that require javascript to render.
urls¶
List of URLs to load.
Type
List[str]
continue_on_failure¶
If True, continue loading other URLs on failure.
Type
bool
headless¶
If True, the browser will run in headless mode.
Type
bool
Load a list of URLs using Playwright and unstructured.
Methods
__init__(urls[, continue_on_failure, ...])
Load a list of URLs using Playwright and unstructured.
aload()
Load the specified URLs with Playwright and create Documents asynchronously.
lazy_load()
A lazy loader for Documents.
load()
Load the specified URLs using Playwright and create Document instances.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]¶
Load a list of URLs using Playwright and unstructured.
async aload() → List[Document][source]¶
Load the specified URLs with Playwright and create Documents asynchronously.
Use this function when in a jupyter notebook environment.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html
|
6dc757a539a4-1
|
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
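A minimal sketch; requires the playwright and unstructured packages, plus browsers installed via `playwright install`, and the URL and selectors are illustrative:
from langchain.document_loaders.url_playwright import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://python.langchain.com/"],
    remove_selectors=["header", "footer"],
)
docs = loader.load()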
Examples using PlaywrightURLLoader¶
URL
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html
|
950b6d10707d-0
|
langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raises an error if the unstructured version does not exceed the
specified minimum.
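A short sketch; the minimum version shown is arbitrary:
from langchain.document_loaders.unstructured import validate_unstructured_version

# Raises an error if the installed unstructured package is older than 0.7.4.
validate_unstructured_version("0.7.4")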
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html
|
806b3dc8f294-0
|
langchain.document_loaders.embaas.EmbaasLoader¶
class langchain.document_loaders.embaas.EmbaasLoader[source]¶
Bases: BaseEmbaasLoader, BaseLoader
Embaas’s document loader.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
    file_path="example.pdf",
    params={
        "should_embed": True,
        "model": "e5-large-v2",
        "chunk_size": 256,
        "chunk_splitter": "CharacterTextSplitter"
    }
)
documents = loader.load()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the embaas document extraction API.
param blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None¶
The blob loader to use. If not provided, a default one will be created.
param embaas_api_key: Optional[str] = None¶
The API key for the embaas document extraction API.
param file_path: str [Required]¶
The path to the file to load.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
|
806b3dc8f294-1
|
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the embaas document extraction API.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
|
806b3dc8f294-2
|
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Load the documents from the file path lazily.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
|
806b3dc8f294-3
|
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbaasLoader¶
Embaas
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
|
d86f03903f21-0
|
langchain.document_loaders.airtable.AirtableLoader¶
class langchain.document_loaders.airtable.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]¶
Loader for Airtable tables.
Initialize with API token and the IDs for table and base.
Attributes
api_token
Airtable API token.
table_id
Airtable table ID.
base_id
Airtable base ID.
Methods
__init__(api_token, table_id, base_id)
Initialize with API token and the IDs for table and base.
lazy_load()
Lazy load Documents from table.
load()
Load Documents from table.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, table_id: str, base_id: str)[source]¶
Initialize with API token and the IDs for table and base.
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from table.
load() → List[Document][source]¶
Load Documents from table.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
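A minimal sketch; the token and ids are placeholders:
from langchain.document_loaders.airtable import AirtableLoader

loader = AirtableLoader(
    api_token="<api token>",
    table_id="<table id>",
    base_id="<base id>",
)
docs = loader.load()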
Examples using AirtableLoader¶
Airtable
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airtable.AirtableLoader.html
|
07128aab0c33-0
|
langchain.document_loaders.helpers.FileEncoding¶
class langchain.document_loaders.helpers.FileEncoding(encoding: Optional[str], confidence: float, language: Optional[str])[source]¶
A file encoding as a NamedTuple.
Create new instance of FileEncoding(encoding, confidence, language)
Attributes
confidence
The confidence of the encoding.
encoding
The encoding of the file.
language
The language of the file.
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
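A short sketch of constructing and reading the tuple; the values are illustrative:
from langchain.document_loaders.helpers import FileEncoding

enc = FileEncoding(encoding="utf-8", confidence=0.99, language="English")
print(enc.encoding, enc.confidence, enc.language)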
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.FileEncoding.html
|
4be99b91fcea-0
|
langchain.document_loaders.conllu.CoNLLULoader¶
class langchain.document_loaders.conllu.CoNLLULoader(file_path: str)[source]¶
Load CoNLL-U files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CoNLLULoader¶
CoNLL-U
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.conllu.CoNLLULoader.html
|
1e4e02c7563c-0
|
langchain.document_loaders.notiondb.NotionDBLoader¶
class langchain.document_loaders.notiondb.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]¶
Notion DB Loader.
Reads content from pages within a Notion Database.
Parameters
integration_token – Notion integration token.
database_id – Notion database id.
request_timeout_sec – Timeout for Notion requests in seconds.
Defaults to 10.
Initialize with parameters.
Methods
__init__(integration_token, database_id[, ...])
Initialize with parameters.
lazy_load()
A lazy loader for Documents.
load()
Load documents from the Notion database.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_page(page_summary)
Read a page.
__init__(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10) → None[source]¶
Initialize with parameters.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents from the Notion database.
Returns
List of documents.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_page(page_summary: Dict[str, Any]) → Document[source]¶
Read a page.
Parameters
page_summary – Page summary from Notion API.
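A minimal sketch; the token and database id are placeholders:
from langchain.document_loaders.notiondb import NotionDBLoader

loader = NotionDBLoader(
    integration_token="<integration token>",
    database_id="<database id>",
    request_timeout_sec=30,
)
docs = loader.load()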
Examples using NotionDBLoader¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html
|
1e4e02c7563c-1
|
Notion DB
Notion DB 2/2
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html
|
f19d64ce0500-0
|
langchain.document_loaders.duckdb_loader.DuckDBLoader¶
class langchain.document_loaders.duckdb_loader.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query – The query to execute.
database – The database to connect to. Defaults to “:memory:”.
read_only – Whether to open the database in read-only mode.
Defaults to False.
config – A dictionary of configuration options to pass to the database.
Optional.
page_content_columns – The columns to write into the page_content
of the document. Optional.
metadata_columns – The columns to write into the metadata of the document.
Optional.
Methods
__init__(query[, database, read_only, ...])
param query
The query to execute.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Parameters
query – The query to execute.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html
|
f19d64ce0500-1
|
database – The database to connect to. Defaults to “:memory:”.
read_only – Whether to open the database in read-only mode.
Defaults to False.
config – A dictionary of configuration options to pass to the database.
Optional.
page_content_columns – The columns to write into the page_content
of the document. Optional.
metadata_columns – The columns to write into the metadata of the document.
Optional.
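A self-contained sketch against the default in-memory database; the query and column names are illustrative, and the duckdb package must be installed:
from langchain.document_loaders.duckdb_loader import DuckDBLoader

loader = DuckDBLoader(
    query="SELECT 1 AS id, 'hello' AS text",
    page_content_columns=["text"],
    metadata_columns=["id"],
)
docs = loader.load()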
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DuckDBLoader¶
DuckDB
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html
|
d63b1d4e0188-0
|
langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
Methods
__init__([access_token, port, host])
param access_token
The access token to use.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost') → None[source]¶
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
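A minimal sketch, assuming Joplin is running locally with the Web Clipper service enabled; the token is a placeholder:
from langchain.document_loaders.joplin import JoplinLoader

loader = JoplinLoader(access_token="<access token>", port=41184, host="localhost")
docs = loader.load()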
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html
|
d63b1d4e0188-1
|
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using JoplinLoader¶
Joplin
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html
|
ce728d390b8d-0
|
langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader¶
class langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Load Documents from the Hugging Face Hub.
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
Methods
__init__(path[, page_content_column, name, ...])
Initialize the HuggingFaceDatasetLoader.
lazy_load()
Load documents lazily.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html
|
ce728d390b8d-1
|
__init__(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
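A minimal sketch using a public dataset; "imdb" is illustrative, and the datasets package must be installed:
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader

loader = HuggingFaceDatasetLoader("imdb", page_content_column="text")
docs = loader.load()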
lazy_load() → Iterator[Document][source]¶
Load documents lazily.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using HuggingFaceDatasetLoader¶
HuggingFace dataset
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html
|
24245a71b83b-0
|
langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Loads ReadTheDocs documentation directory dump.
Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", "class=main"). The loader iterates over html tags in the order of
custom html tags (if provided) and default html tags. If any of the tags is
not empty, the loop breaks and the content is retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
Methods
__init__(path[, encoding, errors, ...])
Initialize ReadTheDocsLoader
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Initialize ReadTheDocsLoader
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html
|
24245a71b83b-1
|
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", "class=main"). The loader iterates over html tags in the order of
custom html tags (if provided) and default html tags. If any of the tags is
not empty, the loop breaks and the content is retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
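A minimal sketch, assuming ./rtdocs contains a local dump of a Read the Docs site (e.g. produced with wget):
from langchain.document_loaders.readthedocs import ReadTheDocsLoader

loader = ReadTheDocsLoader("./rtdocs")
docs = loader.load()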
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ReadTheDocsLoader¶
ReadTheDocs Documentation
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html
|
04fb57c9a612-0
|
langchain.document_loaders.tsv.UnstructuredTSVLoader¶
class langchain.document_loaders.tsv.UnstructuredTSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load TSV files. Like other
Unstructured loaders, UnstructuredTSVLoader can be used in both
"single" and "elements" mode. If you use the loader in "elements"
mode, the TSV file will be a single Unstructured Table element, and an
HTML representation of the table will be available in the "text_as_html"
key in the document metadata.
Examples
from langchain.document_loaders.tsv import UnstructuredTSVLoader
loader = UnstructuredTSVLoader("stanley-cups.tsv", mode="elements")
docs = loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredTSVLoader¶
TSV
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tsv.UnstructuredTSVLoader.html
|
4a63a2b52cc7-0
|
langchain.document_loaders.s3_file.S3FileLoader¶
class langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str)[source]¶
Loading logic for loading documents from an AWS S3 file.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
Methods
__init__(bucket, key)
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, key: str)[source]¶
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
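A minimal sketch, assuming AWS credentials are already configured in the environment (the bucket and key below are placeholders):
from langchain.document_loaders.s3_file import S3FileLoader
# Both values are placeholders; the loader fetches the object and parses it.
loader = S3FileLoader(bucket="my-bucket", key="docs/report.pdf")
docs = loader.load()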
Examples using S3FileLoader¶
AWS S3 Directory
AWS S3 File
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html
langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load text files.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
Initialize with file path.
Methods
__init__(file_path[, encoding, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
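A minimal sketch ("notes.txt" is a placeholder path):
from langchain.document_loaders.text import TextLoader
# If decoding with the given encoding fails, autodetect_encoding=True
# lets the loader try to detect the file's encoding instead.
loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()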
Examples using TextLoader¶
Cohere Reranker
Chat Over Documents with Vectara
Vectorstore Agent
LanceDB
Weaviate
Activeloop’s Deep Lake
Vectara
Redis
PGVector
Rockset
Zilliz
SingleStoreDB
Annoy
Typesense
Atlas
Tair
Chroma
Alibaba Cloud OpenSearch
StarRocks
Clarifai
scikit-learn
DocArrayHnswSearch
MyScale
ClickHouse Vector Search
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
Azure Cognitive Search
Cassandra
Milvus
ElasticSearch
Marqo
DocArrayInMemorySearch
pg_embedding
FAISS
AnalyticDB
Hologres
MongoDB Atlas
Meilisearch
Question Answering Benchmarking: State of the Union Address
QA Generation
Question Answering Benchmarking: Paul Graham Essay
Data Augmented Question Answering
Agent VectorDB Question Answering Benchmarking
Question answering over a group chat messages using Activeloop’s DeepLake
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
Graph QA
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
Combine agents and vector stores
Loading from LangChainHub
Retrieval QA using OpenAI functions
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html
langchain.document_loaders.browserless.BrowserlessLoader¶
class langchain.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Loads the content of webpages using Browserless’ /content endpoint.
Initialize with API token and the URLs to scrape
Attributes
api_token
Browserless API token.
urls
List of URLs to scrape.
Methods
__init__(api_token, urls[, text_content])
Initialize with API token and the URLs to scrape
lazy_load()
Lazy load Documents from URLs.
load()
Load Documents from URLs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Initialize with API token and the URLs to scrape
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from URLs.
load() → List[Document][source]¶
Load Documents from URLs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
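A minimal sketch; the API token is a placeholder:
from langchain.document_loaders.browserless import BrowserlessLoader
loader = BrowserlessLoader(
    api_token="YOUR_BROWSERLESS_API_TOKEN",  # placeholder
    urls=["https://example.com"],
    text_content=True,  # assumed to return page text rather than raw HTML
)
docs = loader.load()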
Examples using BrowserlessLoader¶
Browserless
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.browserless.BrowserlessLoader.html
langchain.document_loaders.notion.NotionDirectoryLoader¶
class langchain.document_loaders.notion.NotionDirectoryLoader(path: str)[source]¶
Loads Notion directory dump.
Initialize with a file path.
Methods
__init__(path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
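A minimal sketch; "Notion_DB/" is a placeholder for an unzipped Notion export directory:
from langchain.document_loaders.notion import NotionDirectoryLoader
loader = NotionDirectoryLoader("Notion_DB/")
docs = loader.load()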
Examples using NotionDirectoryLoader¶
Notion DB
Notion DB 1/2
Context aware text splitting and QA / Chat
Perform context-aware text splitting
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notion.NotionDirectoryLoader.html
langchain.document_loaders.pdf.PyPDFLoader¶
class langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None)[source]¶
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadata.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path[, password])
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, password: Optional[Union[str, bytes]] = None) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
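A minimal sketch ("example.pdf" is a placeholder; the exact metadata key for the page number is an assumption):
from langchain.document_loaders.pdf import PyPDFLoader
loader = PyPDFLoader("example.pdf")
pages = loader.load()  # one Document per page
# The page number is stored in the metadata (assumed key: "page").
page_number = pages[0].metadata.get("page")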
Examples using PyPDFLoader¶
Document Comparison
MergeDocLoader
Question answering over a group chat messages using Activeloop’s DeepLake
QA using Activeloop’s DeepLake
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFLoader.html
langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]¶
Payload for the Embaas document extraction API.
Attributes
bytes
The base64 encoded bytes of the document to extract text from.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
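Since the payload behaves like a plain dict, a sketch of filling the documented bytes field from a local file ("contract.pdf" is a placeholder):
import base64
# The "bytes" field holds the base64-encoded document contents.
with open("contract.pdf", "rb") as f:
    payload = {"bytes": base64.b64encode(f.read()).decode("utf-8")}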
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html
langchain.document_loaders.xml.UnstructuredXMLLoader¶
class langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load XML files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredXMLLoader
loader = UnstructuredXMLLoader("example.xml", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-xml
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredXMLLoader¶
XML
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html
langchain.document_loaders.onedrive.OneDriveLoader¶
class langchain.document_loaders.onedrive.OneDriveLoader[source]¶
Bases: BaseLoader, BaseModel
Loads data from OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param auth_with_token: bool = False¶
Whether to authenticate with a token or not. Defaults to False.
param drive_id: str [Required]¶
The ID of the OneDrive drive to load data from.
param folder_path: Optional[str] = None¶
The path to the folder to load data from.
param object_ids: Optional[List[str]] = None¶
The IDs of the objects to load data from.
param settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]¶
The settings for the OneDrive API client.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads all supported document files from the specified OneDrive drive
and returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
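A minimal sketch, assuming the O365 client credentials expected by the settings object are available in the environment (the drive ID and folder path are placeholders):
from langchain.document_loaders.onedrive import OneDriveLoader
loader = OneDriveLoader(
    drive_id="YOUR_DRIVE_ID",  # placeholder
    folder_path="Documents",   # optional; restricts loading to one folder
    auth_with_token=True,
)
docs = loader.load()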
Examples using OneDriveLoader¶
Microsoft OneDrive
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html
langchain.document_loaders.telegram.text_to_docs¶
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document][source]¶
Converts a string or list of strings to a list of Documents with metadata.
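A short sketch of the helper in isolation:
from langchain.document_loaders.telegram import text_to_docs
docs = text_to_docs(["first message", "second message"])
# Returns a list of Document objects built from the input strings.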
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.text_to_docs.html
langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
class langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database.
schema – Snowflake schema.
parameters – Optional. Parameters to pass to the query.
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
Methods
__init__(query, user, password, account, ...)
Initialize Snowflake document loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
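A minimal sketch; every connection value below is a placeholder:
from langchain.document_loaders.snowflake_loader import SnowflakeLoader
loader = SnowflakeLoader(
    query="SELECT title, body FROM articles LIMIT 10",  # placeholder query
    user="USER",
    password="PASSWORD",
    account="ACCOUNT",
    warehouse="WAREHOUSE",
    role="ROLE",
    database="DATABASE",
    schema="SCHEMA",
    page_content_columns=["body"],  # written to page_content
    metadata_columns=["title"],     # written to metadata
)
docs = loader.load()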
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html