| Unnamed: 0 (int64, 0–4.66k) | page content (string, lengths 23–2k) | description (string, lengths 8–925) | output (string, lengths 38–2.93k) |
|---|---|---|---|
200 | the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build.For an example of this in the wild, see here.This assumes that the H... | Read the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator. | Read the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator. ->: the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.This noteboo... |
201 | Images | 🦜️🔗 Langchain | This covers how to load images such as JPG or PNG into a document format that we can use downstream. | This covers how to load images such as JPG or PNG into a document format that we can use downstream. ->: Images | 🦜️🔗 Langchain |
202 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | This covers how to load images such as JPG or PNG into a document format that we can use downstream. | This covers how to load images such as JPG or PNG into a document format that we can use downstream. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomA... |
203 | to load images such as JPG or PNG into a document format that we can use downstream.Using Unstructured#!pip install pdfminerfrom langchain.document_loaders.image import UnstructuredImageLoaderloader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")data = loader.load()data[0] Document(page_content="LayoutP... | This covers how to load images such as JPG or PNG into a document format that we can use downstream. | This covers how to load images such as JPG or PNG into a document format that we can use downstream. ->: to load images such as JPG or PNG into a document format that we can use downstream.Using Unstructured#!pip install pdfminerfrom langchain.document_loaders.image import UnstructuredImageLoaderloader = Unstructure... |
204 | image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)Retain ElementsUnder the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="eleme... | This covers how to load images such as JPG or PNG into a document format that we can use downstream. | This covers how to load images such as JPG or PNG into a document format that we can use downstream. ->: image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)Retain ElementsUnder the hood, Unstructured creates different "elements" for different chunks of text... |
205 | Async Chromium | 🦜️🔗 Langchain | Chromium is one of the browsers supported by Playwright, a library used to control browser automation. | Chromium is one of the browsers supported by Playwright, a library used to control browser automation. ->: Async Chromium | 🦜️🔗 Langchain |
206 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Chromium is one of the browsers supported by Playwright, a library used to control browser automation. | Chromium is one of the browsers supported by Playwright, a library used to control browser automation. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreo... |
207 | the browsers supported by Playwright, a library used to control browser automation. By running p.chromium.launch(headless=True), we are launching a headless instance of Chromium. Headless mode means that the browser is running without a graphical user interface.AsyncChromiumLoader loads the page, and then we use Html2Te... | Chromium is one of the browsers supported by Playwright, a library used to control browser automation. | Chromium is one of the browsers supported by Playwright, a library used to control browser automation. ->: the browsers supported by Playwright, a library used to control browser automation. By running p.chromium.launch(headless=True), we are launching a headless instance of Chromium. Headless mode means that the brows... |
208 | Sitemap | 🦜️🔗 Langchain | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. ->: Sitemap | 🦜️🔗 Langchain |
209 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMi... |
210 | SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document.The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a good citizen, or you control the... | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. ->: SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document.The scraping is d... |
211 | match one of the patterns will be loaded.loader = SitemapLoader( web_path="https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://api.python.langchain.com/en/latest"],)documents = loader.load() Fetching pages: 100%|##########| 1/1 [00:00<00:00, 16.39it/s]documents[0] Document(page_content='\n\... | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. ->: match one of the patterns will be loaded.loader = SitemapLoader( web_path="https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://... |
212 | SitemapThe sitemap loader can also be used to load local files.sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)docs = sitemap_loader.load() Fetching pages: 100%|##########| 3/3 [00:00<00:00, 12.46it/s]PreviousRSTNextSlackFiltering sitemap URLsAdd custom scraping rulesLocal Sitema... | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. ->: SitemapThe sitemap loader can also be used to load local files.sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)do... |
213 | Obsidian | 🦜️🔗 Langchain | Obsidian is a powerful and extensible knowledge base | Obsidian is a powerful and extensible knowledge base ->: Obsidian | 🦜️🔗 Langchain |
214 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Obsidian is a powerful and extensible knowledge base | Obsidian is a powerful and extensible knowledge base ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSO... |
215 | that works on top of your local folder of plain text files.This notebook covers how to load documents from an Obsidian database.Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.Obsidian files also sometimes contain metadata which is a YAML block at the ... | Obsidian is a powerful and extensible knowledge base | Obsidian is a powerful and extensible knowledge base ->: that works on top of your local folder of plain text files.This notebook covers how to load documents from an Obsidian database.Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.Obsidian files also... |
216 | Alibaba Cloud MaxCompute | 🦜️🔗 Langchain | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... |
217 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... |
218 | Cloud MaxComputeAlibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, r... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... |
219 | AS meta_info UNION ALL SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info UNION ALL SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info) mydata;"""endpoint = "<ENDPOINT>"project = "<PROJECT>"ACCESS_ID = "<ACCESS ID>"SECRET_ACCESS_KEY = "<SECRET ACCESS KEY>"loader = MaxComputeLoade... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... | Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production... |
220 | Airbyte Zendesk Support | 🦜️🔗 Langchain | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: Airbyte Zendesk Support | 🦜️🔗 Langchain |
221 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvide... |
222 | Zendesk SupportAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as docume... | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: Zendesk SupportAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It h... |
223 | The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json.The general shape looks like this:{ "subdomain": "<your zendesk subdomain>", "start_date": "<date from whi... | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airb... |
224 | are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets... | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. | Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the l... |
225 | YouTube transcripts | 🦜️🔗 Langchain | YouTube is an online video sharing and social media platform created by Google. | YouTube is an online video sharing and social media platform created by Google. ->: YouTube transcripts | 🦜️🔗 Langchain |
226 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | YouTube is an online video sharing and social media platform created by Google. | YouTube is an online video sharing and social media platform created by Google. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte Gon... |
227 | transcriptsYouTube is an online video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-transcript-apiloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watc... | YouTube is an online video sharing and social media platform created by Google. | YouTube is an online video sharing and social media platform created by Google. ->: transcriptsYouTube is an online video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-tra... |
228 | Note depending on your set up, the service_account_path needs to be set up. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a ... | YouTube is an online video sharing and social media platform created by Google. | YouTube is an online video sharing and social media platform created by Google. ->: Note depending on your set up, the service_account_path needs to be set up. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle... |
229 | Huawei OBS Directory | 🦜️🔗 Langchain | The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. | The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. ->: Huawei OBS Directory | 🦜️🔗 Langchain |
230 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. | The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument load... |
231 | DirectoryThe following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.# Install the required package# pip install esdk-obs-pythonfrom langchain.document_loaders import OBSDirectoryLoaderendpoint = "your-endpoint"# Configure your access credentials\nconfig = { "ak": "y... | The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. | The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. ->: DirectoryThe following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.# Install the required package# pip install esdk-obs-pythonfrom langchain.document_loa... |
232 | Figma | 🦜️🔗 Langchain | Figma is a collaborative web application for interface design. | Figma is a collaborative web application for interface design. ->: Figma | 🦜️🔗 Langchain |
233 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Figma is a collaborative web application for interface design. | Figma is a collaborative web application for interface design. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotA... |
234 | for interface design.This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.import osfrom langchain.document_loaders.figma import FigmaFileLoaderfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.... | Figma is a collaborative web application for interface design. | Figma is a collaborative web application for interface design. ->: for interface design.This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.import osfrom langchain.document_loaders.figma import FigmaFileLoaderfrom ... |
235 | {context}""" human_prompt_template = "Code the {text}. Ensure it's mobile responsive" system_message_prompt = SystemMessagePromptTemplate.from_template( system_prompt_template ) human_message_prompt = HumanMessagePromptTemplate.from_template( human_prompt_template ) # delete the gpt-4 mo... | Figma is a collaborative web application for interface design. | Figma is a collaborative web application for interface design. ->: {context}""" human_prompt_template = "Code the {text}. Ensure it's mobile responsive" system_message_prompt = SystemMessagePromptTemplate.from_template( system_prompt_template ) human_message_prompt = HumanMessagePromptTemplate.from_t... |
236 | none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n ... | Figma is a collaborative web application for interface design. | Figma is a collaborative web application for interface design. ->: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="hea... |
237 | AssemblyAI Audio Transcripts | 🦜️🔗 Langchain | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: AssemblyAI Audio Transcripts | 🦜️🔗 Langchain |
238 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponent... |
239 | this pageAssemblyAI Audio TranscriptsThe AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.To use it, you should have the assemblyai python package installed, and the | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: this pageAssemblyAI Audio TranscriptsThe AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.... |
240 | environment variable ASSEMBLYAI_API_KEY set with your API key. Alternatively, the API key can also be passed as an argument.More info about AssemblyAI:WebsiteGet a Free API keyAssemblyAI API DocsInstallationFirst, you need to install the assemblyai python package.You can find more info about it inside the assemblyai... | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: environment variable ASSEMBLYAI_API_KEY set with your API key. Alternatively, the API key can also be passed as an argument.More info about AssemblyAI:WebsiteGet a Free API key... |
241 | = loader.load()Transcription ConfigYou can also specify the config argument to use different audio intelligence models.Visit the AssemblyAI API Documentation to get an overview of all available models!import assemblyai as aaiconfig = aai.TranscriptionConfig(speaker_labels=True, auto_c... | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. | The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: = loader.load()Transcription ConfigYou can also specify the config argument to use different audio intelligence models.Visit the AssemblyAI API Documentation to get an overv... |
242 | Diffbot | 🦜️🔗 Langchain | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: Diffbot | 🦜️🔗 Langchain |
243 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacre... |
244 | tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: tools, Diffbot doesn't require any rules to read the content on a page. |
245 | It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type. |
246 | The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.This covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a document format that we can use downstream.urls = [ "https://python.langchain.com/en/latest/index.html",]Th... | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.This covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a do... |
247 | provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM... | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text ... |
248 | Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentat... | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. | Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for ass... |
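The Diffbot extraction flow above (URL in, structured JSON out, documents downstream) can be sketched as a pure mapping step. The response shape used here ("objects" entries carrying "text" and "pageUrl") is an assumption for illustration, not the verbatim Diffbot schema, and the helper name is hypothetical.

```python
def diffbot_response_to_documents(response: dict) -> list[dict]:
    """Map each extracted object in a Diffbot-style response to a
    {page_content, metadata} dict; assumed response shape, see above."""
    docs = []
    for obj in response.get("objects", []):
        docs.append({
            "page_content": obj.get("text", ""),
            "metadata": {"source": obj.get("pageUrl", "")},
        })
    return docs

sample = {"objects": [{"text": "LangChain docs",
                       "pageUrl": "https://python.langchain.com"}]}
docs = diffbot_response_to_documents(sample)
```

The point of the mapping is that everything after extraction (splitting, embedding, retrieval) only needs the `page_content`/`metadata` pair, regardless of which extraction service produced it.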
249 | Notebook | 🦜️🔗 Langchain | This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. | This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. ->: Notebook | 🦜️🔗 Langchain |
250 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. | This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAir... |
251 | notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader("example_data/notebook.ipynb")NotebookLoader.load() loads the .ipynb notebook file into a Document object.Parameters:include_outputs (bool): whether t... | This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. | This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. ->: notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader("example_data/notebook.ipynb")NotebookLoader.lo... |
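A .ipynb file is plain JSON, so the loading described above can be sketched with the standard library. This is a simplified stand-in for what a notebook loader does, not NotebookLoader's actual implementation; `include_outputs` mirrors the parameter named in the row above.

```python
import json

def load_notebook(nb_json: str, include_outputs: bool = False) -> str:
    """Concatenate a notebook's cell sources (and optionally their text
    outputs) into a single string, simplified sketch."""
    nb = json.loads(nb_json)
    parts = []
    for cell in nb.get("cells", []):
        parts.append("".join(cell.get("source", [])))
        if include_outputs:
            for out in cell.get("outputs", []):
                parts.append("".join(out.get("text", [])))
    return "\n".join(parts)

# A one-cell notebook, serialized the way Jupyter stores it (lists of lines).
nb = json.dumps({"cells": [{"cell_type": "code",
                            "source": ["print('hi')"],
                            "outputs": [{"text": ["hi\n"]}]}]})
```

Including outputs makes the resulting Document larger but lets a downstream model see what the code actually produced, which is often the useful part of a notebook.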
252 | Modern Treasury | 🦜️🔗 Langchain | Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. | Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. ->: Modern Treasury | 🦜️🔗 Langchain |
253 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. | Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat m... |
254 | simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real-timeAutomate payment operations for scaleThis notebook covers how to load data from the Modern Treasury REST API into a format that ... | Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. | Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. ->: simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real... |
255 | Rockset | 🦜️🔗 Langchain | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... |
256 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... |
257 | analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency appli... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... |
258 | the query and access all resulting Documents at once, run:loader.load()Here is an example response of loader.load():[ Document( page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit ... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... |
259 | field is "This is the second sentence.", the page_content of the resulting Document would be:This is the first sentence.This is the second sentence.You can define your own function to join content columns by setting the content_columns_joiner argument in the RocksetLoader constructor. content_columns_joiner is a method ... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... | Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving... |
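The joining behavior described above can be sketched in plain Python. This is an illustrative stand-in, not Rockset's actual API: the joiner is assumed here to receive (column name, value) pairs and return the page_content string, with the default joining values by newline as the row above describes.

```python
# Sketch of a content_columns_joiner-style function (assumed signature:
# a list of (column_name, value) pairs in, a page_content string out).
def default_joiner(columns: list[tuple[str, object]]) -> str:
    # Default behavior per the text above: join column values with newlines.
    return "\n".join(str(value) for _, value in columns)

def labeled_joiner(columns: list[tuple[str, object]]) -> str:
    # A custom joiner might prefix each value with its column name.
    return "\n".join(f"{name}: {value}" for name, value in columns)

page_content = default_joiner([
    ("sentence1", "This is the first sentence."),
    ("sentence2", "This is the second sentence."),
])
```

Swapping in `labeled_joiner` would yield `"sentence1: ..."`-style content instead, which can help downstream models attribute text to fields.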
260 | PubMed | 🦜️🔗 Langchain | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: PubMed |... |
261 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: Skip to ... |
262 | Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.from langchain.document_loaders import PubMedLoad... | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: Biotechn... |
263 | one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational ... | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: one or m... |
264 | Recursive URL | 🦜️🔗 Langchain | We may want to load all URLs under a root directory. | We may want to load all URLs under a root directory. ->: Recursive URL | 🦜️🔗 Langchain |
265 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | We may want to load all URLs under a root directory. | We may want to load all URLs under a root directory. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAir... |
266 | load all URLs under a root directory.For example, let's look at the Python 3.9 Document.This has many interesting child pages that we may want to read in bulk.Of course, the WebBaseLoader can load a list of pages. But the challenge is traversing the tree of child pages and actually assembling that list!We do t... | We may want to load all URLs under a root directory. | We may want to load all URLs under a root directory. ->: load all URLs under a root directory.For example, let's look at the Python 3.9 Document.This has many interesting child pages that we may want to read in bulk.Of course, the WebBaseLoader can load a list of pages. But the challenge is traversing ... |
267 | 'title': 'The Python Standard Library — Python 3.9.17 documentation', 'language': None}However, since it's hard to perform a perfect filter, you may still see some irrelevant entries in the results. You can filter the returned documents yourself, if needed. Most of the time, the returned resu... | We may want to load all URLs under a root directory. | We may want to load all URLs under a root directory. ->: 'title': 'The Python Standard Library — Python 3.9.17 documentation', 'language': None}However, since it's hard to perform a perfect filter, you may still see some irrelevant entries in the results. You can filter the returned documents... |
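The traversal idea above (follow only links that stay under the root URL) can be sketched offline with the standard library. This collects the candidate child URLs from one page's HTML; fetching and recursion are omitted, and the page content here is a made-up snippet, not the real Python docs.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ChildLinkCollector(HTMLParser):
    """Collect <a href> targets that resolve to URLs under a given root."""

    def __init__(self, root: str):
        super().__init__()
        self.root = root
        self.links: set[str] = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # Resolve relative links against the root, keep only children.
                absolute = urljoin(self.root, href)
                if absolute.startswith(self.root):
                    self.links.add(absolute)

html = '<a href="library/json.html">json</a> <a href="https://example.com/">off-site</a>'
collector = ChildLinkCollector("https://docs.python.org/3.9/")
collector.feed(html)
```

A recursive loader would fetch each collected link, repeat the collection, and stop at a depth limit or when no new URLs appear; the off-site link above is correctly discarded by the prefix check.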
268 | Source Code | 🦜️🔗 Langchain | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: Source Code ... |
269 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: Skip to main... |
270 | covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.This approach can potentially i... | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: covers how t... |
271 | class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") --8<-- def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() --8<-- # Code for: class MyClass: ... | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: class MyClas... |
272 | RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)result = js_splitter.split_documents(docs)len(result) 7print("\n\n--8<--\n\n".join([document.page_content for document in result])) class MyClass { constructor(name) { this.name = name; --8<-- ... | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: RecursiveCha... |
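The chunking the splitter above configures (`chunk_size`, `chunk_overlap`) can be sketched as a fixed-offset character splitter. This is a simplification: the real language-aware splitters prefer to break on syntactic separators (function and class boundaries) rather than at arbitrary offsets.

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Cut text into chunks of at most chunk_size characters, where each
    chunk repeats the last chunk_overlap characters of its predecessor."""
    step = chunk_size - chunk_overlap  # advance by this many chars per chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghij", chunk_size=4, chunk_overlap=2)
```

With `chunk_overlap=0`, as in the JS example above, the chunks tile the text without repetition; a nonzero overlap trades some duplication for context continuity across chunk boundaries.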
273 | Geopandas | 🦜️🔗 Langchain | Geopandas is an open-source project to make working with geospatial data in Python easier. | Geopandas is an open-source project to make working with geospatial data in Python easier. ->: Geopandas | 🦜️🔗 Langchain |
274 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Geopandas is an open-source project to make working with geospatial data in Python easier. | Geopandas is an open-source project to make working with geospatial data in Python easier. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDK... |
275 | project to make working with geospatial data in Python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.LLM applications (chat, QA) that uti... | Geopandas is an open-source project to make working with geospatial data in Python easier. | Geopandas is an open-source project to make working with geospatial data in Python easier. ->: project to make working with geospatial data in Python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further de... |
276 | are placed in metadata.But, we can specify the page_content_column.from langchain.document_loaders import GeoDataFrameLoaderloader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")docs = loader.load()docs[0] Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '413... | Geopandas is an open-source project to make working with geospatial data in Python easier. | Geopandas is an open-source project to make working with geospatial data in Python easier. ->: are placed in metadata.But, we can specify the page_content_column.from langchain.document_loaders import GeoDataFrameLoaderloader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")docs = loader.load()docs[0... |
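The `page_content_column` idea above can be sketched with plain dict rows instead of a GeoDataFrame: the chosen column becomes `page_content` and every other column lands in `metadata`. The row values below are illustrative, not real data.

```python
def rows_to_documents(rows: list[dict], page_content_column: str) -> list[dict]:
    """Turn tabular rows into document dicts: one column supplies the text,
    the remaining columns become per-document metadata."""
    docs = []
    for row in rows:
        content = str(row[page_content_column])
        metadata = {k: v for k, v in row.items() if k != page_content_column}
        docs.append({"page_content": content, "metadata": metadata})
    return docs

rows = [{"geometry": "POINT (-122.42 37.71)", "pdid": "4133422003074"}]
geo_docs = rows_to_documents(rows, page_content_column="geometry")
```

This mirrors the loader output shown above: the WKT geometry string is the searchable text, and identifiers such as `pdid` travel alongside it as metadata.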
277 | AWS S3 File | 🦜️🔗 Langchain | Amazon Simple Storage Service (Amazon S3) is an object storage service. | Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: AWS S3 File | 🦜️🔗 Langchain |
278 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Amazon Simple Storage Service (Amazon S3) is an object storage service. | Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte... |
279 | Simple Storage Service (Amazon S3) is an object storage service.AWS S3 BucketsThis covers how to load document objects from an AWS S3 File object.from langchain.document_loaders import S3FileLoader#!pip install boto3loader = S3FileLoader("testing-hwc", "fake.docx")loader.load() [Document(page_content='Lorem ipsum do... | Amazon Simple Storage Service (Amazon S3) is an object storage service. | Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: Simple Storage Service (Amazon S3) is an object storage service.AWS S3 BucketsThis covers how to load document objects from an AWS S3 File object.from langchain.document_loaders import S3FileLoader#!pip install boto3loader = S3FileLoader("testi... |
280 | named arguments when creating the S3FileLoader. This is useful, for instance, when AWS credentials can't be set as environment variables. See the list of parameters that can be configured.loader = S3FileLoader("testing-hwc", "fake.docx", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")loader.load()PreviousAWS... | Amazon Simple Storage Service (Amazon S3) is an object storage service. | Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: named arguments when creating the S3FileLoader. This is useful, for instance, when AWS credentials can't be set as environment variables. See the list of parameters that can be configured.loader = S3FileLoader("testing-hwc", "fake.docx", aws... |
281 | College Confidential | 🦜️🔗 Langchain | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: College Confidential | 🦜️🔗 Langchain |
282 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAir... |
283 | Confidential gives information on 3,800+ colleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloader = CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: Confidential gives information on 3,800+ colleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloade... |
284 | not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a g... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas abo... |
285 | market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choosing a college. Some colleges may have high tuition, but... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another ... |
286 | Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like with... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take... |
287 | by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n ... |
288 | 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n 96%\n \nFul... | College Confidential gives information on 3,800+ colleges and universities. | College Confidential gives information on 3,800+ colleges and universities. ->: 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUnderg... |
289 | Iugu | 🦜️🔗 Langchain | Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. | Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. ->: Iugu | 🦜️🔗 Langchain |
290 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. | Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTR... |
291 | software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.im... | Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. | Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. ->: software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for ... |
292 | Copy Paste | 🦜️🔗 Langchain | This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. | This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. ->: Copy Paste | 🦜️🔗 Langchain |
293 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. | This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS...
294 | covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.from langchain.docstore.document import Documenttext = "..... put the text you copy pasted here......"doc = Document(page_co... | This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. | This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. ->: covers how to load a document object from something you just want to copy and paste. In this case, you do... |
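The row above describes constructing a langchain `Document` directly from pasted text instead of going through a `DocumentLoader` (the real import is `from langchain.docstore.document import Document`). As a dependency-free sketch of the same idea: a `Document` is essentially a `page_content` string plus a `metadata` dict. The stand-in class below is a hypothetical mirror of that shape, not langchain's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in mirroring the shape of langchain's Document
# (in real code: `from langchain.docstore.document import Document`).
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

# Construct the document directly from copy-pasted text, no loader needed.
text = "..... put the text you copy pasted here......"
doc = Document(page_content=text, metadata={"source": "clipboard"})
print(doc.page_content)
```

The `metadata` keys (here `"source"`) are free-form; downstream components typically read them when attributing retrieved chunks.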
295 | CoNLL-U | 🦜️🔗 Langchain | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: ->: CoNLL-U | 🦜️🔗 Langchain |
296 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityC...
297 | of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below... | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: ->: of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, norm...
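The CoNLL-U rows above describe word lines as 10 fields separated by single tab characters. A minimal sketch of splitting one such line into its standard named columns (the sample sentence line is illustrative, not taken from the notebook; the column names are CoNLL-U's standard ones):

```python
# The 10 standard CoNLL-U word-line columns, in order.
FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
          "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_word_line(line: str) -> dict:
    """Split one CoNLL-U word line into its 10 tab-separated named fields."""
    values = line.rstrip("\n").split("\t")
    if len(values) != len(FIELDS):
        raise ValueError(f"expected 10 fields, got {len(values)}")
    return dict(zip(FIELDS, values))

# Illustrative word line: token "They", a nominative pronoun headed by token 2.
token = parse_word_line("1\tThey\tthey\tPRON\tPRP\tCase=Nom\t2\tnsubj\t_\t_")
print(token["FORM"], token["UPOS"])
```

Comment lines (starting with `#`) and the blank lines that separate sentences would be filtered out before this per-line parse.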
298 | Slack | 🦜️🔗 Langchain | Slack is an instant messaging program. | Slack is an instant messaging program. ->: Slack | 🦜️🔗 Langchain |
299 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte T... | Slack is an instant messaging program. | Slack is an instant messaging program. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte Sales...