type
stringclasses
5 values
name
stringlengths
1
55
qualified_name
stringlengths
5
130
docstring
stringlengths
15
3.11k
filepath
stringclasses
90 values
is_public
bool
2 classes
is_private
bool
2 classes
line_start
int64
0
1.44k
line_end
int64
0
1.51k
annotation
stringclasses
2 values
returns
stringclasses
82 values
value
stringclasses
66 values
parameters
listlengths
0
10
bases
listlengths
0
2
parent_class
stringclasses
193 values
api_element_summary
stringlengths
199
3.43k
function
extract
fenic.api.functions.semantic.extract
Extracts structured information from unstructured text using a provided schema. This function applies an instruction-driven extraction process to text columns, returning structured data based on the fields and descriptions provided. Useful for pulling out key entities, facts, or labels from documents. Args: column: Column containing text to extract from. schema: An ExtractSchema containing fields of type ExtractSchemaField that define the output structure and field descriptions, or a Pydantic model that defines the output structure with descriptions for each field. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). max_output_tokens: Optional parameter to constrain the model to generate at most this many tokens. If None, fenic will calculate the expected max tokens, based on the model's context length and other operator-specific parameters. Returns: Column: A new column with structured values (a struct) based on the provided schema. 
Example: Extracting product metadata from a description using an explicit ExtractSchema ```python schema = ExtractSchema([ ExtractSchemaField( name="brand", data_type=DataType.STRING, description="The brand or manufacturer mentioned in the product description" ), ExtractSchemaField( name="capacity_gb", data_type=DataType.INTEGER, description="The storage capacity of the product in gigabytes, if mentioned" ), ExtractSchemaField( name="connectivity", data_type=DataType.STRING, description="The type of connectivity or ports described (e.g., USB-C, Thunderbolt)" ) ]) df.select(semantic.extract("product_description", schema)) ``` Example: Extracting user intent from a support message using a Pydantic model ```python class UserRequest(BaseModel): request_type: str = Field(..., description="The type of request (e.g., refund, technical issue, setup help)") target_product: str = Field(..., description="The name or type of product the user is referring to") preferred_resolution: str = Field(..., description="The action the user is expecting (e.g., replacement, callback)") df.select(semantic.extract("support_message", UserRequest)) ``` Raises: ValueError: If any input expression is invalid, or if the schema is empty or invalid, or if the schema contains fields with no descriptions.
null
true
false
90
168
null
Column
null
[ "column", "schema", "max_output_tokens", "temperature", "model_alias" ]
null
null
Type: function Member Name: extract Qualified Name: fenic.api.functions.semantic.extract Docstring: Extracts structured information from unstructured text using a provided schema. This function applies an instruction-driven extraction process to text columns, returning structured data based on the fields and descriptions provided. Useful for pulling out key entities, facts, or labels from documents. Args: column: Column containing text to extract from. schema: An ExtractSchema containing fields of type ExtractSchemaField that define the output structure and field descriptions, or a Pydantic model that defines the output structure with descriptions for each field. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). max_output_tokens: Optional parameter to constrain the model to generate at most this many tokens. If None, fenic will calculate the expected max tokens, based on the model's context length and other operator-specific parameters. Returns: Column: A new column with structured values (a struct) based on the provided schema. 
Example: Extracting product metadata from a description using an explicit ExtractSchema ```python schema = ExtractSchema([ ExtractSchemaField( name="brand", data_type=DataType.STRING, description="The brand or manufacturer mentioned in the product description" ), ExtractSchemaField( name="capacity_gb", data_type=DataType.INTEGER, description="The storage capacity of the product in gigabytes, if mentioned" ), ExtractSchemaField( name="connectivity", data_type=DataType.STRING, description="The type of connectivity or ports described (e.g., USB-C, Thunderbolt)" ) ]) df.select(semantic.extract("product_description", schema)) ``` Example: Extracting user intent from a support message using a Pydantic model ```python class UserRequest(BaseModel): request_type: str = Field(..., description="The type of request (e.g., refund, technical issue, setup help)") target_product: str = Field(..., description="The name or type of product the user is referring to") preferred_resolution: str = Field(..., description="The action the user is expecting (e.g., replacement, callback)") df.select(semantic.extract("support_message", UserRequest)) ``` Raises: ValueError: If any input expression is invalid, or if the schema is empty or invalid, or if the schema contains fields with no descriptions. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "schema", "max_output_tokens", "temperature", "model_alias"] Returns: Column Parent Class: none
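The extraction contract above (one struct per row, with fields named and described by the schema) can be sketched with a toy rule-based stand-in. Everything below (`ProductInfo`, `toy_extract`, the regexes) is illustrative only and not part of fenic's API; the real operator delegates the extraction to a language model.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for the LLM-backed extractor: the result shape
# (a struct keyed by schema field names, missing values as None) mirrors
# what semantic.extract produces per row.
@dataclass
class ProductInfo:
    brand: Optional[str]
    capacity_gb: Optional[int]

def toy_extract(text: str) -> ProductInfo:
    # Naive regexes for illustration; a language model handles free-form text.
    brand = re.search(r"by (\w+)", text)
    cap = re.search(r"(\d+)\s*GB", text, re.IGNORECASE)
    return ProductInfo(
        brand=brand.group(1) if brand else None,
        capacity_gb=int(cap.group(1)) if cap else None,
    )

print(toy_extract("A 512 GB USB-C drive by Contoso"))
```

Fields the model cannot find come back as null, matching the "if mentioned" phrasing in the schema descriptions.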
function
predicate
fenic.api.functions.semantic.predicate
Applies a natural language predicate to one or more string columns, returning a boolean result. This is useful for filtering rows based on user-defined criteria expressed in natural language. Args: instruction: A string containing the semantic.predicate prompt. The instruction must include placeholders in curly braces that reference one or more column names. These placeholders will be replaced with actual column values during prompt construction at query execution. examples: Optional collection of examples to guide the semantic predicate operation. Each example should demonstrate the expected boolean output for different inputs. The examples should be created using PredicateExampleCollection.create_example(), providing instruction variables and their expected boolean answers. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). Returns: Column: A column expression that returns a boolean value after applying the natural language predicate. Raises: ValueError: If the instruction is not a string. 
Example: Identifying product descriptions that mention wireless capability ```python semantic.predicate("Does the product description: {product_description} mention that the item is wireless?") ``` Example: Filtering support tickets that describe a billing issue ```python semantic.predicate("Does this support message: {ticket_text} describe a billing issue?") ``` Example: Filtering support tickets that describe a billing issue with examples ```python examples = PredicateExampleCollection() examples.create_example(PredicateExample( input={"ticket_text": "I was charged twice for my subscription and need help."}, output=True)) examples.create_example(PredicateExample( input={"ticket_text": "How do I reset my password?"}, output=False)) semantic.predicate("Does this support ticket describe a billing issue? {ticket_text}", examples) ```
null
true
false
171
229
null
Column
null
[ "instruction", "examples", "model_alias", "temperature" ]
null
null
Type: function Member Name: predicate Qualified Name: fenic.api.functions.semantic.predicate Docstring: Applies a natural language predicate to one or more string columns, returning a boolean result. This is useful for filtering rows based on user-defined criteria expressed in natural language. Args: instruction: A string containing the semantic.predicate prompt. The instruction must include placeholders in curly braces that reference one or more column names. These placeholders will be replaced with actual column values during prompt construction at query execution. examples: Optional collection of examples to guide the semantic predicate operation. Each example should demonstrate the expected boolean output for different inputs. The examples should be created using PredicateExampleCollection.create_example(), providing instruction variables and their expected boolean answers. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). Returns: Column: A column expression that returns a boolean value after applying the natural language predicate. Raises: ValueError: If the instruction is not a string. 
Example: Identifying product descriptions that mention wireless capability ```python semantic.predicate("Does the product description: {product_description} mention that the item is wireless?") ``` Example: Filtering support tickets that describe a billing issue ```python semantic.predicate("Does this support message: {ticket_text} describe a billing issue?") ``` Example: Filtering support tickets that describe a billing issue with examples ```python examples = PredicateExampleCollection() examples.create_example(PredicateExample( input={"ticket_text": "I was charged twice for my subscription and need help."}, output=True)) examples.create_example(PredicateExample( input={"ticket_text": "How do I reset my password?"}, output=False)) semantic.predicate("Does this support ticket describe a billing issue? {ticket_text}", examples) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["instruction", "examples", "model_alias", "temperature"] Returns: Column Parent Class: none
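The curly-brace placeholders described above are filled from column values when the prompt is built. A minimal sketch of that templating step (the real operator then sends the rendered prompt to a language model and parses a boolean back; the field name `ticket_text` is just the one from the docstring's example):

```python
# Instruction with a curly-brace placeholder referencing a column name.
instruction = "Does this support message: {ticket_text} describe a billing issue?"

# One row's column values, substituted during prompt construction.
row = {"ticket_text": "I was charged twice for my subscription."}
prompt = instruction.format(**row)
print(prompt)
```

Because substitution uses the column names literally, every placeholder in the instruction must match a real column, which is why the instruction is validated up front.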
function
reduce
fenic.api.functions.semantic.reduce
Aggregate function: reduces a set of strings across columns into a single string using a natural language instruction. Args: instruction: A string containing the semantic.reduce prompt. The instruction can include placeholders in curly braces that reference column names. These placeholders will be replaced with actual column values during prompt construction at query execution. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). max_output_tokens: Optional parameter to constrain the model to generate at most this many tokens. If None, fenic will calculate the expected max tokens, based on the model's context length and other operator-specific parameters. Returns: Column: A column expression representing the semantic reduction operation. Raises: ValueError: If the instruction is not a string. Example: Summarizing documents using their titles and bodies ```python semantic.reduce("Summarize these documents using each document's title: {title} and body: {body}.") ```
null
true
false
232
269
null
Column
null
[ "instruction", "model_alias", "temperature", "max_output_tokens" ]
null
null
Type: function Member Name: reduce Qualified Name: fenic.api.functions.semantic.reduce Docstring: Aggregate function: reduces a set of strings across columns into a single string using a natural language instruction. Args: instruction: A string containing the semantic.reduce prompt. The instruction can include placeholders in curly braces that reference column names. These placeholders will be replaced with actual column values during prompt construction at query execution. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). max_output_tokens: Optional parameter to constrain the model to generate at most this many tokens. If None, fenic will calculate the expected max tokens, based on the model's context length and other operator-specific parameters. Returns: Column: A column expression representing the semantic reduction operation. Raises: ValueError: If the instruction is not a string. Example: Summarizing documents using their titles and bodies ```python semantic.reduce("Summarize these documents using each document's title: {title} and body: {body}.") ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["instruction", "model_alias", "temperature", "max_output_tokens"] Returns: Column Parent Class: none
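Unlike the row-wise operators, `semantic.reduce` is an aggregate: one model call covers a whole group of rows. How the per-row values might be rendered into a single prompt can be sketched as follows (the layout and joining strategy here are illustrative assumptions, not fenic's actual prompt format):

```python
# Instruction from the docstring's example, with per-row placeholders.
instruction = "Summarize these documents using each document's title: {title} and body: {body}."

# A group of rows being reduced into one summary.
rows = [
    {"title": "Q1 report", "body": "Revenue grew 10%."},
    {"title": "Q2 report", "body": "Revenue grew 12%."},
]

# Render every row into the prompt; the model would then emit one string.
rendered = "\n".join(instruction.format(**row) for row in rows)
print(rendered)
```

This is also why `max_output_tokens` matters here: the aggregate prompt grows with the group size, so fenic budgets output tokens against the model's context length.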
function
classify
fenic.api.functions.semantic.classify
Classifies a string column into one of the provided labels. This is useful for tagging incoming documents with predefined categories. Args: column: Column or column name containing text to classify. labels: List of category strings or an Enum defining the categories to classify the text into. examples: Optional collection of example classifications to guide the model. Examples should be created using ClassifyExampleCollection.create_example(), with instruction variables mapped to their expected classifications. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). Returns: Column: Expression containing the classification results. Raises: ValueError: If column is invalid or labels is not a list of strings. Example: Categorizing incoming support requests ```python # Categorize incoming support requests semantic.classify("message", ["Account Access", "Billing Issue", "Technical Problem"]) ``` Example: Categorizing incoming support requests with examples ```python examples = ClassifyExampleCollection() examples.create_example(ClassifyExample( input="I can't reset my password or access my account.", output="Account Access")) examples.create_example(ClassifyExample( input="You charged me twice for the same month.", output="Billing Issue")) semantic.classify("message", ["Account Access", "Billing Issue", "Technical Problem"], examples) ```
null
true
false
272
333
null
Column
null
[ "column", "labels", "examples", "model_alias", "temperature" ]
null
null
Type: function Member Name: classify Qualified Name: fenic.api.functions.semantic.classify Docstring: Classifies a string column into one of the provided labels. This is useful for tagging incoming documents with predefined categories. Args: column: Column or column name containing text to classify. labels: List of category strings or an Enum defining the categories to classify the text into. examples: Optional collection of example classifications to guide the model. Examples should be created using ClassifyExampleCollection.create_example(), with instruction variables mapped to their expected classifications. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). Returns: Column: Expression containing the classification results. Raises: ValueError: If column is invalid or labels is not a list of strings. Example: Categorizing incoming support requests ```python # Categorize incoming support requests semantic.classify("message", ["Account Access", "Billing Issue", "Technical Problem"]) ``` Example: Categorizing incoming support requests with examples ```python examples = ClassifyExampleCollection() examples.create_example(ClassifyExample( input="I can't reset my password or access my account.", output="Account Access")) examples.create_example(ClassifyExample( input="You charged me twice for the same month.", output="Billing Issue")) semantic.classify("message", ["Account Access", "Billing Issue", "Technical Problem"], examples) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "labels", "examples", "model_alias", "temperature"] Returns: Column Parent Class: none
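`labels` may be a plain list of strings or an Enum. A sketch of normalizing either form into the candidate label set, and of the output contract (the result is always one of those labels) — the `Category` enum and `label_set` helper are hypothetical, not fenic names:

```python
from enum import Enum

class Category(Enum):
    ACCOUNT = "Account Access"
    BILLING = "Billing Issue"
    TECH = "Technical Problem"

def label_set(labels):
    # Accept either an Enum class or an iterable of strings.
    if isinstance(labels, type) and issubclass(labels, Enum):
        return {member.value for member in labels}
    return set(labels)

candidates = label_set(Category)

# A classification result is only valid if it lies in the candidate set.
answer = "Billing Issue"  # stand-in for a model response
assert answer in candidates
```

Constraining the output to a closed label set is what distinguishes `classify` from open-ended operators like `extract`.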
function
analyze_sentiment
fenic.api.functions.semantic.analyze_sentiment
Analyzes the sentiment of a string column. Returns one of 'positive', 'negative', or 'neutral'. Args: column: Column or column name containing text for sentiment analysis. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). Returns: Column: Expression containing sentiment results ('positive', 'negative', or 'neutral'). Raises: ValueError: If column is invalid or cannot be resolved. Example: Analyzing the sentiment of a user comment ```python semantic.analyze_sentiment(col('user_comment')) ```
null
true
false
336
366
null
Column
null
[ "column", "model_alias", "temperature" ]
null
null
Type: function Member Name: analyze_sentiment Qualified Name: fenic.api.functions.semantic.analyze_sentiment Docstring: Analyzes the sentiment of a string column. Returns one of 'positive', 'negative', or 'neutral'. Args: column: Column or column name containing text for sentiment analysis. model_alias: Optional alias for the language model to use for this operation. If None, will use the language model configured as the default. temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0). Returns: Column: Expression containing sentiment results ('positive', 'negative', or 'neutral'). Raises: ValueError: If column is invalid or cannot be resolved. Example: Analyzing the sentiment of a user comment ```python semantic.analyze_sentiment(col('user_comment')) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "model_alias", "temperature"] Returns: Column Parent Class: none
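The contract above is that every row maps to exactly one of three fixed strings. A toy lexicon-based stand-in for the model call makes that closed output space concrete (the word lists are invented for illustration; the real operator uses a language model):

```python
def toy_sentiment(text: str) -> str:
    # Tiny illustrative lexicons, nothing like a real sentiment model.
    positive = {"great", "love", "excellent"}
    negative = {"bad", "hate", "terrible"}
    words = set(text.lower().split())
    if words & positive:
        return "positive"
    if words & negative:
        return "negative"
    return "neutral"

print(toy_sentiment("I love this keyboard"))
```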
function
embed
fenic.api.functions.semantic.embed
Generate embeddings for the specified string column. Args: column: Column or column name containing the values to generate embeddings for. model_alias: Optional alias for the embedding model to use for this operation. If None, will use the embedding model configured as the default. Returns: A Column expression that represents the embeddings for each value in the input column Raises: TypeError: If the input column is not a string column. Example: Generate embeddings for a text column ```python df.select(semantic.embed(col("text_column")).alias("text_embeddings")) ```
null
true
false
369
395
null
Column
null
[ "column", "model_alias" ]
null
null
Type: function Member Name: embed Qualified Name: fenic.api.functions.semantic.embed Docstring: Generate embeddings for the specified string column. Args: column: Column or column name containing the values to generate embeddings for. model_alias: Optional alias for the embedding model to use for this operation. If None, will use the embedding model configured as the default. Returns: A Column expression that represents the embeddings for each value in the input column Raises: TypeError: If the input column is not a string column. Example: Generate embeddings for a text column ```python df.select(semantic.embed(col("text_column")).alias("text_embeddings")) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "model_alias"] Returns: Column Parent Class: none
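What `embed` produces per row is a fixed-length float vector. A toy hashing "embedder" shows that shape without any semantic content — it is purely an illustration of the output type, not how an embedding model works:

```python
import hashlib

def toy_embed(text: str, dim: int = 8) -> list:
    # Bucket each token into one of `dim` slots by hash; every input
    # string yields a vector of the same fixed length.
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

print(len(toy_embed("hello embedding world")))
```

The fixed dimensionality is what makes the downstream `embedding.normalize` and `embedding.compute_similarity` operations well-defined.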
module
embedding
fenic.api.functions.embedding
Embedding functions.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/functions/embedding.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: embedding Qualified Name: fenic.api.functions.embedding Docstring: Embedding functions. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
function
normalize
fenic.api.functions.embedding.normalize
Normalize embedding vectors to unit length. Args: column: Column containing embedding vectors. Returns: Column: A column of normalized embedding vectors with the same embedding type. Notes: - Normalizes each embedding vector to have unit length (L2 norm = 1) - Preserves the original embedding model in the type - Null values are preserved as null - Zero vectors become NaN after normalization Example: Normalize embeddings for dot product similarity ```python # Normalize embeddings for dot product similarity comparisons df.select( embedding.normalize(col("embeddings")).alias("unit_embeddings") ) ``` Example: Compare normalized embeddings using dot product ```python # Compare normalized embeddings using dot product (equivalent to cosine similarity) normalized_df = df.select(embedding.normalize(col("embeddings")).alias("norm_emb")) query = [0.6, 0.8] # Already normalized normalized_df.select( embedding.compute_similarity(col("norm_emb"), query, metric="dot").alias("dot_product_sim") ) ```
null
true
false
17
51
null
Column
null
[ "column" ]
null
null
Type: function Member Name: normalize Qualified Name: fenic.api.functions.embedding.normalize Docstring: Normalize embedding vectors to unit length. Args: column: Column containing embedding vectors. Returns: Column: A column of normalized embedding vectors with the same embedding type. Notes: - Normalizes each embedding vector to have unit length (L2 norm = 1) - Preserves the original embedding model in the type - Null values are preserved as null - Zero vectors become NaN after normalization Example: Normalize embeddings for dot product similarity ```python # Normalize embeddings for dot product similarity comparisons df.select( embedding.normalize(col("embeddings")).alias("unit_embeddings") ) ``` Example: Compare normalized embeddings using dot product ```python # Compare normalized embeddings using dot product (equivalent to cosine similarity) normalized_df = df.select(embedding.normalize(col("embeddings")).alias("norm_emb")) query = [0.6, 0.8] # Already normalized normalized_df.select( embedding.compute_similarity(col("norm_emb"), query, metric="dot").alias("dot_product_sim") ) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
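The three behaviors listed in the Notes (unit L2 length, nulls preserved, zero vectors becoming NaN) can be sketched directly in plain Python — this is the underlying math, not fenic's implementation:

```python
import math

def l2_normalize(vec):
    if vec is None:
        return None  # null values are preserved as null
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0.0:
        return [float("nan")] * len(vec)  # zero vectors become NaN
    return [x / norm for x in vec]  # unit length: L2 norm = 1

print(l2_normalize([3.0, 4.0]))  # a 3-4-5 triangle normalizes to [0.6, 0.8]
```

After normalization, a dot product between two vectors equals their cosine similarity, which is why the docstring pairs `normalize` with the `dot` metric.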
function
compute_similarity
fenic.api.functions.embedding.compute_similarity
Compute similarity between embedding vectors using specified metric. Args: column: Column containing embedding vectors. other: Either: - Another column containing embedding vectors for pairwise similarity - A query vector (list of floats or numpy array) for similarity with each embedding metric: The similarity metric to use. Options: - `cosine`: Cosine similarity (range: -1 to 1, higher is more similar) - `dot`: Dot product similarity (raw inner product) - `l2`: L2 (Euclidean) distance (lower is more similar) Returns: Column: A column of float values representing similarity scores. Raises: ValidationError: If query vector contains NaN values or has invalid dimensions. Notes: - Cosine similarity normalizes vectors internally, so pre-normalization is not required - Dot product does not normalize, useful when vectors are already normalized - L2 distance measures the straight-line distance between vectors - When using two columns, dimensions must match between embeddings Example: Compute dot product with a query vector ```python # Compute dot product with a query vector query = [0.1, 0.2, 0.3] df.select( embedding.compute_similarity(col("embeddings"), query).alias("similarity") ) ``` Example: Compute cosine similarity with a query vector ```python query = [0.6, 0.8] # Already normalized df.select( embedding.compute_similarity(col("embeddings"), query, metric="cosine").alias("cosine_sim") ) ``` Example: Compute pairwise L2 distances between columns ```python # Compute L2 distance between two columns of embeddings df.select( embedding.compute_similarity(col("embeddings1"), col("embeddings2"), metric="l2").alias("distance") ) ``` Example: Using numpy array as query vector ```python # Use numpy array as query vector import numpy as np query = np.array([0.1, 0.2, 0.3]) df.select(embedding.compute_similarity("embeddings", query)) ```
null
true
false
54
142
null
Column
null
[ "column", "other", "metric" ]
null
null
Type: function Member Name: compute_similarity Qualified Name: fenic.api.functions.embedding.compute_similarity Docstring: Compute similarity between embedding vectors using specified metric. Args: column: Column containing embedding vectors. other: Either: - Another column containing embedding vectors for pairwise similarity - A query vector (list of floats or numpy array) for similarity with each embedding metric: The similarity metric to use. Options: - `cosine`: Cosine similarity (range: -1 to 1, higher is more similar) - `dot`: Dot product similarity (raw inner product) - `l2`: L2 (Euclidean) distance (lower is more similar) Returns: Column: A column of float values representing similarity scores. Raises: ValidationError: If query vector contains NaN values or has invalid dimensions. Notes: - Cosine similarity normalizes vectors internally, so pre-normalization is not required - Dot product does not normalize, useful when vectors are already normalized - L2 distance measures the straight-line distance between vectors - When using two columns, dimensions must match between embeddings Example: Compute dot product with a query vector ```python # Compute dot product with a query vector query = [0.1, 0.2, 0.3] df.select( embedding.compute_similarity(col("embeddings"), query).alias("similarity") ) ``` Example: Compute cosine similarity with a query vector ```python query = [0.6, 0.8] # Already normalized df.select( embedding.compute_similarity(col("embeddings"), query, metric="cosine").alias("cosine_sim") ) ``` Example: Compute pairwise L2 distances between columns ```python # Compute L2 distance between two columns of embeddings df.select( embedding.compute_similarity(col("embeddings1"), col("embeddings2"), metric="l2").alias("distance") ) ``` Example: Using numpy array as query vector ```python # Use numpy array as query vector import numpy as np query = np.array([0.1, 0.2, 0.3]) df.select(embedding.compute_similarity("embeddings", query)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "other", "metric"] Returns: Column Parent Class: none
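The three metrics named above are standard vector operations; a plain-Python sketch makes their relationships concrete (cosine divides the dot product by both norms, so it equals the raw dot product on unit vectors):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Normalizes internally, so inputs need not be pre-normalized.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def l2(a, b):
    # Straight-line (Euclidean) distance; lower is more similar.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a, b = [3.0, 4.0], [6.0, 8.0]
print(cosine(a, b))  # parallel vectors: cosine similarity 1.0
print(l2(a, b))      # distance 5.0
```

This is why the Notes recommend `dot` only for already-normalized vectors: skipping the norm division is cheaper, and on unit vectors it gives the same ranking as cosine.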
module
core
fenic.api.functions.core
Core functions for Fenic DataFrames.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/functions/core.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: core Qualified Name: fenic.api.functions.core Docstring: Core functions for Fenic DataFrames. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
function
col
fenic.api.functions.core.col
Creates a Column expression referencing a column in the DataFrame. Args: col_name: Name of the column to reference Returns: A Column expression for the specified column Raises: TypeError: If col_name is not a string
null
true
false
16
29
null
Column
null
[ "col_name" ]
null
null
Type: function Member Name: col Qualified Name: fenic.api.functions.core.col Docstring: Creates a Column expression referencing a column in the DataFrame. Args: col_name: Name of the column to reference Returns: A Column expression for the specified column Raises: TypeError: If col_name is not a string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["col_name"] Returns: Column Parent Class: none
function
lit
fenic.api.functions.core.lit
Creates a Column expression representing a literal value. Args: value: The literal value to create a column for Returns: A Column expression representing the literal value Raises: ValueError: If the type of the value cannot be inferred
null
true
false
32
49
null
Column
null
[ "value" ]
null
null
Type: function Member Name: lit Qualified Name: fenic.api.functions.core.lit Docstring: Creates a Column expression representing a literal value. Args: value: The literal value to create a column for Returns: A Column expression representing the literal value Raises: ValueError: If the type of the value cannot be inferred Value: none Annotation: none is Public? : true is Private? : false Parameters: ["value"] Returns: Column Parent Class: none
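`col` and `lit` are the two basic expression constructors: one references an existing column, the other wraps a literal whose type must be inferable. The two documented error contracts can be sketched with a toy expression type (`ToyColumn`, `toy_col`, `toy_lit`, and the `inferable` tuple are all hypothetical, not fenic's classes):

```python
class ToyColumn:
    def __init__(self, kind, payload):
        self.kind, self.payload = kind, payload

def toy_col(col_name):
    if not isinstance(col_name, str):
        raise TypeError("col_name must be a string")  # documented TypeError
    return ToyColumn("ref", col_name)

def toy_lit(value):
    inferable = (bool, int, float, str)  # illustrative set, not fenic's real one
    if not isinstance(value, inferable):
        raise ValueError(f"cannot infer literal type for {type(value).__name__}")
    return ToyColumn("lit", value)

print(toy_col("user_id").payload)
print(toy_lit(3.14).kind)
```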
module
markdown
fenic.api.functions.markdown
Markdown functions.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/functions/markdown.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: markdown Qualified Name: fenic.api.functions.markdown Docstring: Markdown functions. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
function
to_json
fenic.api.functions.markdown.to_json
Converts a column of Markdown-formatted strings into a hierarchical JSON representation. Args: column (ColumnOrName): Input column containing Markdown strings. Returns: Column: A column of JSON-formatted strings representing the structured document tree. Notes: - This function parses Markdown into a structured JSON format optimized for document chunking, semantic analysis, and `jq` queries. - The output conforms to a custom schema that organizes content into nested sections based on heading levels. This makes it more expressive than flat ASTs like `mdast`. - The full JSON schema is available at: TODO: link from docs. Supported Markdown Features: - Headings with nested hierarchy (e.g., h2 → h3 → h4) - Paragraphs with inline formatting (bold, italics, links, code, etc.) - Lists (ordered, unordered, task lists) - Tables with header alignment and inline content - Code blocks with language info - Blockquotes, horizontal rules, and inline/flow HTML Example: Convert markdown to JSON ```python df.select(markdown.to_json(col("markdown_text"))) ``` Example: Extract all level-2 headings with jq ```python # Combine with jq to extract all level-2 headings df.select(json.jq(markdown.to_json(col("md")), ".. | select(.type == 'heading' and .level == 2)")) ```
null
true
false
16
54
null
Column
null
[ "column" ]
null
null
Type: function Member Name: to_json Qualified Name: fenic.api.functions.markdown.to_json Docstring: Converts a column of Markdown-formatted strings into a hierarchical JSON representation. Args: column (ColumnOrName): Input column containing Markdown strings. Returns: Column: A column of JSON-formatted strings representing the structured document tree. Notes: - This function parses Markdown into a structured JSON format optimized for document chunking, semantic analysis, and `jq` queries. - The output conforms to a custom schema that organizes content into nested sections based on heading levels. This makes it more expressive than flat ASTs like `mdast`. - The full JSON schema is available at: TODO: link from docs. Supported Markdown Features: - Headings with nested hierarchy (e.g., h2 → h3 → h4) - Paragraphs with inline formatting (bold, italics, links, code, etc.) - Lists (ordered, unordered, task lists) - Tables with header alignment and inline content - Code blocks with language info - Blockquotes, horizontal rules, and inline/flow HTML Example: Convert markdown to JSON ```python df.select(markdown.to_json(col("markdown_text"))) ``` Example: Extract all level-2 headings with jq ```python # Combine with jq to extract all level-2 headings df.select(json.jq(markdown.to_json(col("md")), ".. | select(.type == 'heading' and .level == 2)")) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
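The jq query in the docstring's example walks the nested section tree recursively. The same traversal can be written in Python; the document dict below is a hypothetical approximation of the schema (nodes with `type`, `level`, `text`, `children` — the real fenic schema may differ), built only to show the kind of structure the jq query selects from:

```python
import json

doc = json.loads("""
{"type": "document", "children": [
  {"type": "heading", "level": 1, "text": "Guide", "children": [
    {"type": "heading", "level": 2, "text": "Install", "children": []},
    {"type": "heading", "level": 2, "text": "Usage", "children": []}
  ]}
]}
""")

def headings(node, level):
    # Recursive descent, like jq's `..` combined with select().
    if node.get("type") == "heading" and node.get("level") == level:
        yield node["text"]
    for child in node.get("children", []):
        yield from headings(child, level)

print(list(headings(doc, 2)))
```

The nesting by heading level is what makes section-aware chunking possible: each subtree is a self-contained section.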
function
get_code_blocks
fenic.api.functions.markdown.get_code_blocks
Extracts all code blocks from a column of Markdown-formatted strings. Args: column (ColumnOrName): Input column containing Markdown strings. language_filter (Optional[str]): Optional language filter to extract only code blocks with a specific language. By default, all code blocks are extracted. Returns: Column: A column of code blocks. The output column type is: ArrayType(StructType([ StructField("language", StringType), StructField("code", StringType), ])) Notes: - Code blocks are parsed from fenced Markdown blocks (e.g., triple backticks ```). - Language identifiers are optional and may be null if not provided in the original Markdown. - Indented code blocks without fences are not currently supported. - This function is useful for extracting embedded logic, configuration, or examples from documentation or notebooks. Example: Extract all code blocks ```python df.select(markdown.get_code_blocks(col("markdown_text"))) ``` Example: Explode code blocks into individual rows ```python # Explode the list of code blocks into individual rows df = df.explode(df.with_column("blocks", markdown.get_code_blocks(col("md")))) df = df.select(col("blocks")["language"], col("blocks")["code"]) ```
null
true
false
56
92
null
Column
null
[ "column", "language_filter" ]
null
null
Type: function Member Name: get_code_blocks Qualified Name: fenic.api.functions.markdown.get_code_blocks Docstring: Extracts all code blocks from a column of Markdown-formatted strings. Args: column (ColumnOrName): Input column containing Markdown strings. language_filter (Optional[str]): Optional language filter to extract only code blocks with a specific language. By default, all code blocks are extracted. Returns: Column: A column of code blocks. The output column type is: ArrayType(StructType([ StructField("language", StringType), StructField("code", StringType), ])) Notes: - Code blocks are parsed from fenced Markdown blocks (e.g., triple backticks ```). - Language identifiers are optional and may be null if not provided in the original Markdown. - Indented code blocks without fences are not currently supported. - This function is useful for extracting embedded logic, configuration, or examples from documentation or notebooks. Example: Extract all code blocks ```python df.select(markdown.get_code_blocks(col("markdown_text"))) ``` Example: Explode code blocks into individual rows ```python # Explode the list of code blocks into individual rows df = df.explode(df.with_column("blocks", markdown.get_code_blocks(col("md")))) df = df.select(col("blocks")["language"], col("blocks")["code"]) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "language_filter"] Returns: Column Parent Class: none
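A rough plain-Python equivalent of this extraction can be written with a regex over fenced blocks. This is a sketch with a hypothetical helper name, not fenic's engine-side implementation, and like the real function it ignores indented (unfenced) code blocks:

```python
import re
from typing import Optional

# Match ```lang\n...``` fences; the language identifier is optional.
FENCE_RE = re.compile(r"```(\w+)?\n(.*?)```", re.DOTALL)

def get_code_blocks(md: str, language_filter: Optional[str] = None) -> list:
    blocks = [{"language": m.group(1), "code": m.group(2)}
              for m in FENCE_RE.finditer(md)]
    if language_filter is not None:
        blocks = [b for b in blocks if b["language"] == language_filter]
    return blocks

md = "text\n```python\nprint(1)\n```\nmore\n```\nplain\n```\n"
blocks = get_code_blocks(md)
```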
function
generate_toc
fenic.api.functions.markdown.generate_toc
Generates a table of contents from markdown headings. Args: column (ColumnOrName): Input column containing Markdown strings. max_level (Optional[int]): Maximum heading level to include in the TOC (1-6). Defaults to 6 (all levels). Returns: Column: A column of Markdown-formatted table of contents strings. Notes: - The TOC is generated using markdown heading syntax (# ## ### etc.) - Each heading in the source document becomes a line in the TOC - The heading level is preserved in the output - This creates a valid markdown document that can be rendered or processed further Example: Generate a complete TOC ```python df.select(markdown.generate_toc(col("documentation"))) ``` Example: Generate a simplified TOC with only top 2 levels ```python df.select(markdown.generate_toc(col("documentation"), max_level=2)) ``` Example: Add TOC as a new column ```python df = df.with_column("toc", markdown.generate_toc(col("content"), max_level=3)) ```
null
true
false
95
132
null
Column
null
[ "column", "max_level" ]
null
null
Type: function Member Name: generate_toc Qualified Name: fenic.api.functions.markdown.generate_toc Docstring: Generates a table of contents from markdown headings. Args: column (ColumnOrName): Input column containing Markdown strings. max_level (Optional[int]): Maximum heading level to include in the TOC (1-6). Defaults to 6 (all levels). Returns: Column: A column of Markdown-formatted table of contents strings. Notes: - The TOC is generated using markdown heading syntax (# ## ### etc.) - Each heading in the source document becomes a line in the TOC - The heading level is preserved in the output - This creates a valid markdown document that can be rendered or processed further Example: Generate a complete TOC ```python df.select(markdown.generate_toc(col("documentation"))) ``` Example: Generate a simplified TOC with only top 2 levels ```python df.select(markdown.generate_toc(col("documentation"), max_level=2)) ``` Example: Add TOC as a new column ```python df = df.with_column("toc", markdown.generate_toc(col("content"), max_level=3)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "max_level"] Returns: Column Parent Class: none
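The described behavior (keep heading lines up to `max_level`, preserving their level) can be sketched in plain Python. `generate_toc` here is a hypothetical stand-in for illustration, not fenic's implementation:

```python
def generate_toc(md: str, max_level: int = 6) -> str:
    # Collect heading lines (# through ######) up to max_level,
    # emitting them unchanged so the result is valid markdown.
    lines = []
    for line in md.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("#"):
            level = len(stripped) - len(stripped.lstrip("#"))
            if 1 <= level <= max_level:
                lines.append(stripped)
    return "\n".join(lines)

doc = "# Intro\ntext\n## Setup\nmore\n### Detail\n"
```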
function
extract_header_chunks
fenic.api.functions.markdown.extract_header_chunks
Splits markdown documents into logical chunks based on heading hierarchy. Args: column (ColumnOrName): Input column containing Markdown strings. header_level (int): Heading level to split on (1-6). Creates a new chunk at every heading of this level, including all nested content and subsections. Returns: Column: A column of arrays containing chunk objects with the following structure: ```python ArrayType(StructType([ StructField("heading", StringType), # Heading text (clean, no markdown) StructField("level", IntegerType), # Heading level (1-6) StructField("content", StringType), # All content under this heading (clean text) StructField("parent_heading", StringType), # Parent heading text (or null) StructField("full_path", StringType), # Full breadcrumb path ])) ``` Notes: - **Context-preserving**: Each chunk contains all content and subsections under the heading - **Hierarchical awareness**: Includes parent heading context for better LLM understanding - **Clean text output**: Strips markdown formatting for direct LLM consumption Chunking Behavior: With `header_level=2`, this markdown: ```markdown # Introduction Overview text ## Getting Started Setup instructions ### Prerequisites Python 3.8+ required ## API Reference Function documentation ``` Produces 2 chunks: 1. `Getting Started` chunk (includes `Prerequisites` subsection) 2. `API Reference` chunk Example: Split articles into top-level sections ```python df.select(markdown.extract_header_chunks(col("articles"), header_level=1)) ``` Example: Split documentation into feature sections ```python df.select(markdown.extract_header_chunks(col("docs"), header_level=2)) ``` Example: Create fine-grained chunks for detailed analysis ```python df.select(markdown.extract_header_chunks(col("content"), header_level=3)) ``` Example: Explode chunks into individual rows for processing ```python chunks_df = df.select( markdown.extract_header_chunks(col("markdown"), header_level=2).alias("chunks") ).explode("chunks") chunks_df.select( col("chunks").heading, col("chunks").content, col("chunks").full_path ) ```
null
true
false
135
212
null
Column
null
[ "column", "header_level" ]
null
null
Type: function Member Name: extract_header_chunks Qualified Name: fenic.api.functions.markdown.extract_header_chunks Docstring: Splits markdown documents into logical chunks based on heading hierarchy. Args: column (ColumnOrName): Input column containing Markdown strings. header_level (int): Heading level to split on (1-6). Creates a new chunk at every heading of this level, including all nested content and subsections. Returns: Column: A column of arrays containing chunk objects with the following structure: ```python ArrayType(StructType([ StructField("heading", StringType), # Heading text (clean, no markdown) StructField("level", IntegerType), # Heading level (1-6) StructField("content", StringType), # All content under this heading (clean text) StructField("parent_heading", StringType), # Parent heading text (or null) StructField("full_path", StringType), # Full breadcrumb path ])) ``` Notes: - **Context-preserving**: Each chunk contains all content and subsections under the heading - **Hierarchical awareness**: Includes parent heading context for better LLM understanding - **Clean text output**: Strips markdown formatting for direct LLM consumption Chunking Behavior: With `header_level=2`, this markdown: ```markdown # Introduction Overview text ## Getting Started Setup instructions ### Prerequisites Python 3.8+ required ## API Reference Function documentation ``` Produces 2 chunks: 1. `Getting Started` chunk (includes `Prerequisites` subsection) 2. `API Reference` chunk Example: Split articles into top-level sections ```python df.select(markdown.extract_header_chunks(col("articles"), header_level=1)) ``` Example: Split documentation into feature sections ```python df.select(markdown.extract_header_chunks(col("docs"), header_level=2)) ``` Example: Create fine-grained chunks for detailed analysis ```python df.select(markdown.extract_header_chunks(col("content"), header_level=3)) ``` Example: Explode chunks into individual rows for processing ```python chunks_df = df.select( markdown.extract_header_chunks(col("markdown"), header_level=2).alias("chunks") ).explode("chunks") chunks_df.select( col("chunks").heading, col("chunks").content, col("chunks").full_path ) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "header_level"] Returns: Column Parent Class: none
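The chunking behavior documented above can be sketched in plain Python: start a new chunk at each heading of `header_level`, keep everything (including deeper subsections) until the next heading at that level or shallower, and track the parent heading for the breadcrumb path. This is an illustrative stand-in, not fenic's implementation, and it only approximates the "clean text" stripping:

```python
def extract_header_chunks(md: str, header_level: int) -> list:
    chunks, current, parents = [], None, {}
    for line in md.splitlines():
        if line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            text = line.lstrip("# ").strip()
            if level < header_level:
                parents[level] = text   # remember context for full_path
                current = None          # a shallower heading closes the chunk
            elif level == header_level:
                parent = parents.get(header_level - 1)
                current = {"heading": text, "level": level, "content": "",
                           "parent_heading": parent,
                           "full_path": f"{parent} > {text}" if parent else text}
                chunks.append(current)
            elif current is not None:   # deeper subsection stays in the chunk
                current["content"] += text + "\n"
        elif current is not None and line.strip():
            current["content"] += line.strip() + "\n"
    return chunks

md = ("# Introduction\nOverview text\n## Getting Started\nSetup instructions\n"
      "### Prerequisites\nPython 3.8+ required\n## API Reference\nFunction documentation")
chunks = extract_header_chunks(md, 2)
```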
module
text
fenic.api.functions.text
Text manipulation functions for Fenic DataFrames.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/functions/text.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: text Qualified Name: fenic.api.functions.text Docstring: Text manipulation functions for Fenic DataFrames. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
function
extract
fenic.api.functions.text.extract
Extracts fields from text using a template pattern. Args: column: Input text column to extract from template: Template string with fields marked as ``${field_name:format}`` Returns: Column: A struct column containing the extracted fields Example: Basic field extraction ```python # Extract name and age from a text column df.select(text.extract(col("text"), "Name: ${name:csv}, Age: ${age:none}")) ``` Example: Multiple field extraction with different formats ```python # Extract multiple fields with different formats df.select(text.extract(col("text"), "Product: ${product:csv}, Price: ${price:none}, Tags: ${tags:json}")) ``` Example: Extract and filter based on extracted fields ```python # Extract and filter based on extracted fields df = df.select( col("text"), text.extract(col("text"), "Name: ${name:csv}, Age: ${age:none}").alias("extracted") ) df = df.filter(col("extracted")["age"] == "30") ```
null
true
false
34
69
null
Column
null
[ "column", "template" ]
null
null
Type: function Member Name: extract Qualified Name: fenic.api.functions.text.extract Docstring: Extracts fields from text using a template pattern. Args: template: Template string with fields marked as ``${field_name:format}`` column: Input text column to extract from Returns: Column: A struct column containing the extracted fields Example: Basic field extraction ```python # Extract name and age from a text column df.select(text.extract(col("text"), "Name: ${name:csv}, Age: ${age:none}")) ``` Example: Multiple field extraction with different formats ```python # Extract multiple fields with different formats df.select(text.extract(col("text"), "Product: ${product:csv}, Price: ${price:none}, Tags: ${tags:json}")) ``` Example: Extract and filter based on extracted fields ```python # Extract and filter based on extracted fields df = df.select( col("text"), text.extract(col("text"), "Name: ${name:csv}, Age: ${age:none}").alias("extracted") ) df = df.filter(col("extracted")["age"] == "30") ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "template"] Returns: Column Parent Class: none
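The template mechanism can be approximated in plain Python by compiling the `${field:format}` markers into named regex groups. This sketch ignores the format hint entirely (in fenic the `csv`/`json`/`none` formats also change how the captured value is parsed) and uses hypothetical helper names:

```python
import re

def template_to_regex(template: str):
    # Escape the literal parts, then turn each \$\{name:fmt\} marker
    # into a lazy named capture group.
    pattern = re.escape(template)
    pattern = re.sub(r"\\\$\\\{(\w+):\w+\\\}", r"(?P<\1>.+?)", pattern)
    return re.compile(pattern + r"$")

def extract(text: str, template: str):
    m = template_to_regex(template).match(text)
    return m.groupdict() if m else None

result = extract("Name: Alice, Age: 30", "Name: ${name:csv}, Age: ${age:none}")
```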
function
recursive_character_chunk
fenic.api.functions.text.recursive_character_chunk
Chunks a string column into chunks of a specified size (in characters) with an optional overlap. The chunking is performed recursively, attempting to preserve the underlying structure of the text by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context. By default, these characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but this can be customized. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in characters chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters. Returns: Column: A column containing the chunks as an array of strings Example: Default character chunking ```python # Create chunks of at most 100 characters with 20% overlap df.select( text.recursive_character_chunk(col("text"), 100, 20).alias("chunks") ) ``` Example: Custom character chunking ```python # Create chunks with custom split characters df.select( text.recursive_character_chunk( col("text"), 100, 20, ['\n\n', '\n', '.', ' ', ''] ).alias("chunks") ) ```
null
true
false
71
130
null
Column
null
[ "column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters" ]
null
null
Type: function Member Name: recursive_character_chunk Qualified Name: fenic.api.functions.text.recursive_character_chunk Docstring: Chunks a string column into chunks of a specified size (in characters) with an optional overlap. The chunking is performed recursively, attempting to preserve the underlying structure of the text by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context. By default, these characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but this can be customized. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in characters chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters. Returns: Column: A column containing the chunks as an array of strings Example: Default character chunking ```python # Create chunks of at most 100 characters with 20% overlap df.select( text.recursive_character_chunk(col("text"), 100, 20).alias("chunks") ) ``` Example: Custom character chunking ```python # Create chunks with custom split characters df.select( text.recursive_character_chunk( col("text"), 100, 20, ['\n\n', '\n', '.', ' ', ''] ).alias("chunks") ) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters"] Returns: Column Parent Class: none
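The recursive strategy described above (split on the coarsest separator first, recurse into oversized pieces with finer separators, greedily pack pieces into chunks) can be sketched in plain Python. This simplified version assumes `chunk_size >= 1` and omits overlap handling entirely; it illustrates the splitting order, not fenic's implementation:

```python
SEPARATORS = ["\n\n", "\n", ".", ";", ":", " ", "-", ""]

def recursive_chunk(text, chunk_size, separators=SEPARATORS):
    if len(text) <= chunk_size:
        return [text] if text else []
    sep, rest = separators[0], separators[1:]
    if sep == "":
        pieces = list(text)  # finest granularity: individual characters
    else:
        parts = text.split(sep)
        # Keep the separator attached so joining the chunks restores the text.
        pieces = [p + sep for p in parts[:-1]] + [parts[-1]]
        pieces = [p for p in pieces if p]
    chunks, buf = [], ""
    for piece in pieces:
        if len(piece) > chunk_size:       # still too big: try finer separators
            if buf:
                chunks.append(buf)
                buf = ""
            chunks.extend(recursive_chunk(piece, chunk_size, rest))
        elif len(buf) + len(piece) <= chunk_size:
            buf += piece                  # greedily pack pieces into a chunk
        else:
            chunks.append(buf)
            buf = piece
    if buf:
        chunks.append(buf)
    return chunks

chunks = recursive_chunk("aaa bbb. ccc ddd.", 8)
```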
function
recursive_word_chunk
fenic.api.functions.text.recursive_word_chunk
Chunks a string column into chunks of a specified size (in words) with an optional overlap. The chunking is performed recursively, attempting to preserve the underlying structure of the text by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context. By default, these characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but this can be customized. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in words chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters. Returns: Column: A column containing the chunks as an array of strings Example: Default word chunking ```python # Create chunks of at most 100 words with 20% overlap df.select( text.recursive_word_chunk(col("text"), 100, 20).alias("chunks") ) ``` Example: Custom word chunking ```python # Create chunks with custom split characters df.select( text.recursive_word_chunk( col("text"), 100, 20, ['\n\n', '\n', '.', ' ', ''] ).alias("chunks") ) ```
null
true
false
133
192
null
Column
null
[ "column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters" ]
null
null
Type: function Member Name: recursive_word_chunk Qualified Name: fenic.api.functions.text.recursive_word_chunk Docstring: Chunks a string column into chunks of a specified size (in words) with an optional overlap. The chunking is performed recursively, attempting to preserve the underlying structure of the text by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context. By default, these characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but this can be customized. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in words chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters. Returns: Column: A column containing the chunks as an array of strings Example: Default word chunking ```python # Create chunks of at most 100 words with 20% overlap df.select( text.recursive_word_chunk(col("text"), 100, 20).alias("chunks") ) ``` Example: Custom word chunking ```python # Create chunks with custom split characters df.select( text.recursive_word_chunk( col("text"), 100, 20, ['\n\n', '\n', '.', ' ', ''] ).alias("chunks") ) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters"] Returns: Column Parent Class: none
function
recursive_token_chunk
fenic.api.functions.text.recursive_token_chunk
Chunks a string column into chunks of a specified size (in tokens) with an optional overlap. The chunking is performed recursively, attempting to preserve the underlying structure of the text by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context. By default, these characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but this can be customized. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in tokens chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters. Returns: Column: A column containing the chunks as an array of strings Example: Default token chunking ```python # Create chunks of at most 100 tokens with 20% overlap df.select( text.recursive_token_chunk(col("text"), 100, 20).alias("chunks") ) ``` Example: Custom token chunking ```python # Create chunks with custom split characters df.select( text.recursive_token_chunk( col("text"), 100, 20, ['\n\n', '\n', '.', ' ', ''] ).alias("chunks") ) ```
null
true
false
195
254
null
Column
null
[ "column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters" ]
null
null
Type: function Member Name: recursive_token_chunk Qualified Name: fenic.api.functions.text.recursive_token_chunk Docstring: Chunks a string column into chunks of a specified size (in tokens) with an optional overlap. The chunking is performed recursively, attempting to preserve the underlying structure of the text by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context. By default, these characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but this can be customized. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in tokens chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters. Returns: Column: A column containing the chunks as an array of strings Example: Default token chunking ```python # Create chunks of at most 100 tokens with 20% overlap df.select( text.recursive_token_chunk(col("text"), 100, 20).alias("chunks") ) ``` Example: Custom token chunking ```python # Create chunks with custom split characters df.select( text.recursive_token_chunk( col("text"), 100, 20, ['\n\n', '\n', '.', ' ', ''] ).alias("chunks") ) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters"] Returns: Column Parent Class: none
function
character_chunk
fenic.api.functions.text.character_chunk
Chunks a string column into chunks of a specified size (in characters) with an optional overlap. The chunking is done by applying a simple sliding window across the text to create chunks of equal size. This approach does not attempt to preserve the underlying structure of the text. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in characters chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0) Returns: Column: A column containing the chunks as an array of strings Example: Create character chunks ```python # Create chunks of 100 characters with 20% overlap df.select(text.character_chunk(col("text"), 100, 20)) ```
null
true
false
257
289
null
Column
null
[ "column", "chunk_size", "chunk_overlap_percentage" ]
null
null
Type: function Member Name: character_chunk Qualified Name: fenic.api.functions.text.character_chunk Docstring: Chunks a string column into chunks of a specified size (in characters) with an optional overlap. The chunking is done by applying a simple sliding window across the text to create chunks of equal size. This approach does not attempt to preserve the underlying structure of the text. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in characters chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0) Returns: Column: A column containing the chunks as an array of strings Example: Create character chunks ```python # Create chunks of 100 characters with 20% overlap df.select(text.character_chunk(col("text"), 100, 20)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "chunk_size", "chunk_overlap_percentage"] Returns: Column Parent Class: none
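The sliding-window arithmetic described above fits in a few lines of plain Python (a hypothetical helper for illustration, with integer overlap, not the engine-side implementation):

```python
def character_chunk(text, chunk_size, chunk_overlap_percentage=0):
    # The window advances by chunk_size minus the overlap, so consecutive
    # chunks share chunk_size * overlap% characters.
    overlap = chunk_size * chunk_overlap_percentage // 100
    step = max(chunk_size - overlap, 1)
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

The word and token variants follow the same arithmetic over lists of words or token ids instead of characters.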
function
word_chunk
fenic.api.functions.text.word_chunk
Chunks a string column into chunks of a specified size (in words) with an optional overlap. The chunking is done by applying a simple sliding window across the text to create chunks of equal size. This approach does not attempt to preserve the underlying structure of the text. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in words chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0) Returns: Column: A column containing the chunks as an array of strings Example: Create word chunks ```python # Create chunks of 100 words with 20% overlap df.select(text.word_chunk(col("text"), 100, 20)) ```
null
true
false
292
324
null
Column
null
[ "column", "chunk_size", "chunk_overlap_percentage" ]
null
null
Type: function Member Name: word_chunk Qualified Name: fenic.api.functions.text.word_chunk Docstring: Chunks a string column into chunks of a specified size (in words) with an optional overlap. The chunking is done by applying a simple sliding window across the text to create chunks of equal size. This approach does not attempt to preserve the underlying structure of the text. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in words chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0) Returns: Column: A column containing the chunks as an array of strings Example: Create word chunks ```python # Create chunks of 100 words with 20% overlap df.select(text.word_chunk(col("text"), 100, 20)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "chunk_size", "chunk_overlap_percentage"] Returns: Column Parent Class: none
function
token_chunk
fenic.api.functions.text.token_chunk
Chunks a string column into chunks of a specified size (in tokens) with an optional overlap. The chunking is done by applying a simple sliding window across the text to create chunks of equal size. This approach does not attempt to preserve the underlying structure of the text. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in tokens chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0) Returns: Column: A column containing the chunks as an array of strings Example: Create token chunks ```python # Create chunks of 100 tokens with 20% overlap df.select(text.token_chunk(col("text"), 100, 20)) ```
null
true
false
327
359
null
Column
null
[ "column", "chunk_size", "chunk_overlap_percentage" ]
null
null
Type: function Member Name: token_chunk Qualified Name: fenic.api.functions.text.token_chunk Docstring: Chunks a string column into chunks of a specified size (in tokens) with an optional overlap. The chunking is done by applying a simple sliding window across the text to create chunks of equal size. This approach does not attempt to preserve the underlying structure of the text. Args: column: The input string column or column name to chunk chunk_size: The size of each chunk in tokens chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0) Returns: Column: A column containing the chunks as an array of strings Example: Create token chunks ```python # Create chunks of 100 tokens with 20% overlap df.select(text.token_chunk(col("text"), 100, 20)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "chunk_size", "chunk_overlap_percentage"] Returns: Column Parent Class: none
function
count_tokens
fenic.api.functions.text.count_tokens
Returns the number of tokens in a string using OpenAI's cl100k_base encoding (tiktoken). Args: column: The input string column. Returns: Column: A column with the token counts for each input string. Example: Count tokens in text ```python # Count tokens in a text column df.select(text.count_tokens(col("text"))) ```
null
true
false
362
382
null
Column
null
[ "column" ]
null
null
Type: function Member Name: count_tokens Qualified Name: fenic.api.functions.text.count_tokens Docstring: Returns the number of tokens in a string using OpenAI's cl100k_base encoding (tiktoken). Args: column: The input string column. Returns: Column: A column with the token counts for each input string. Example: Count tokens in text ```python # Count tokens in a text column df.select(text.count_tokens(col("text"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
concat
fenic.api.functions.text.concat
Concatenates multiple columns or strings into a single string. Args: *cols: Columns or strings to concatenate Returns: Column: A column containing the concatenated strings Example: Concatenate columns ```python # Concatenate two columns with a space in between df.select(text.concat(col("col1"), lit(" "), col("col2"))) ```
null
true
false
385
414
null
Column
null
[ "cols" ]
null
null
Type: function Member Name: concat Qualified Name: fenic.api.functions.text.concat Docstring: Concatenates multiple columns or strings into a single string. Args: *cols: Columns or strings to concatenate Returns: Column: A column containing the concatenated strings Example: Concatenate columns ```python # Concatenate two columns with a space in between df.select(text.concat(col("col1"), lit(" "), col("col2"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["cols"] Returns: Column Parent Class: none
function
parse_transcript
fenic.api.functions.text.parse_transcript
Parses a transcript from text to a structured format with unified schema. Converts transcript text in various formats (srt, generic) to a standardized structure with fields: index, speaker, start_time, end_time, duration, content, format. All timestamps are returned as floating-point seconds from the start. Args: column: The input string column or column name containing transcript text format: The format of the transcript ("srt" or "generic") Returns: Column: A column containing an array of structured transcript entries with unified schema: - index: Optional[int] - Entry index (1-based) - speaker: Optional[str] - Speaker name (for generic format) - start_time: float - Start time in seconds - end_time: Optional[float] - End time in seconds - duration: Optional[float] - Duration in seconds - content: str - Transcript content/text - format: str - Original format ("srt" or "generic") Examples: >>> # Parse SRT format transcript >>> df.select(text.parse_transcript(col("transcript"), "srt")) >>> # Parse generic conversation transcript >>> df.select(text.parse_transcript(col("transcript"), "generic"))
null
true
false
418
449
null
Column
null
[ "column", "format" ]
null
null
Type: function Member Name: parse_transcript Qualified Name: fenic.api.functions.text.parse_transcript Docstring: Parses a transcript from text to a structured format with unified schema. Converts transcript text in various formats (srt, generic) to a standardized structure with fields: index, speaker, start_time, end_time, duration, content, format. All timestamps are returned as floating-point seconds from the start. Args: column: The input string column or column name containing transcript text format: The format of the transcript ("srt" or "generic") Returns: Column: A column containing an array of structured transcript entries with unified schema: - index: Optional[int] - Entry index (1-based) - speaker: Optional[str] - Speaker name (for generic format) - start_time: float - Start time in seconds - end_time: Optional[float] - End time in seconds - duration: Optional[float] - Duration in seconds - content: str - Transcript content/text - format: str - Original format ("srt" or "generic") Examples: >>> # Parse SRT format transcript >>> df.select(text.parse_transcript(col("transcript"), "srt")) >>> # Parse generic conversation transcript >>> df.select(text.parse_transcript(col("transcript"), "generic")) Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "format"] Returns: Column Parent Class: none
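The SRT side of the unified schema can be sketched in plain Python: split the file into blank-line-separated blocks, convert `HH:MM:SS,mmm` timestamps to float seconds, and emit dicts with the documented fields. This is an illustrative sketch with hypothetical helper names, not fenic's parser (it skips the generic format and error handling entirely):

```python
import re

TIME_RE = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

def srt_time_to_seconds(ts: str) -> float:
    h, m, s, ms = (int(g) for g in TIME_RE.match(ts).groups())
    return h * 3600 + m * 60 + s + ms / 1000.0

def parse_srt(text: str) -> list:
    entries = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (srt_time_to_seconds(t) for t in lines[1].split(" --> "))
        entries.append({"index": int(lines[0]), "speaker": None,
                        "start_time": start, "end_time": end,
                        "duration": round(end - start, 3),
                        "content": "\n".join(lines[2:]), "format": "srt"})
    return entries

srt = "1\n00:00:01,000 --> 00:00:02,500\nHello\n\n2\n00:00:03,000 --> 00:00:04,000\nWorld"
entries = parse_srt(srt)
```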
function
concat_ws
fenic.api.functions.text.concat_ws
Concatenates multiple columns or strings into a single string with a separator. Args: separator: The separator to use *cols: Columns or strings to concatenate Returns: Column: A column containing the concatenated strings Example: Concatenate with comma separator ```python # Concatenate columns with comma separator df.select(text.concat_ws(",", col("col1"), col("col2"))) ```
null
true
false
452
484
null
Column
null
[ "separator", "cols" ]
null
null
Type: function Member Name: concat_ws Qualified Name: fenic.api.functions.text.concat_ws Docstring: Concatenates multiple columns or strings into a single string with a separator. Args: separator: The separator to use *cols: Columns or strings to concatenate Returns: Column: A column containing the concatenated strings Example: Concatenate with comma separator ```python # Concatenate columns with comma separator df.select(text.concat_ws(",", col("col1"), col("col2"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["separator", "cols"] Returns: Column Parent Class: none
function
array_join
fenic.api.functions.text.array_join
Joins an array of strings into a single string with a delimiter. Args: column: The column to join delimiter: The delimiter to use Returns: Column: A column containing the joined strings Example: Join array with comma ```python # Join array elements with comma df.select(text.array_join(col("array_column"), ",")) ```
null
true
false
487
509
null
Column
null
[ "column", "delimiter" ]
null
null
Type: function Member Name: array_join Qualified Name: fenic.api.functions.text.array_join Docstring: Joins an array of strings into a single string with a delimiter. Args: column: The column to join delimiter: The delimiter to use Returns: Column: A column containing the joined strings Example: Join array with comma ```python # Join array elements with comma df.select(text.array_join(col("array_column"), ",")) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "delimiter"] Returns: Column Parent Class: none
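For the literal-delimiter case, joining an array of strings maps directly onto Python's `str.join`; a minimal sketch of the documented behavior:

```python
def array_join(values, delimiter: str) -> str:
    """Join a list of strings into one string with a delimiter."""
    return delimiter.join(values)

print(array_join(["a", "b", "c"], ","))  # 'a,b,c'
```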
function
replace
fenic.api.functions.text.replace
Replace all occurrences of a pattern with a new string, treating pattern as a literal string. This method creates a new string column with all occurrences of the specified pattern replaced with a new string. The pattern is treated as a literal string, not a regular expression. If either search or replace is a column expression, the operation is performed dynamically using the values from those columns. Args: src: The input string column or column name to perform replacements on search: The pattern to search for (can be a string or column expression) replace: The string to replace with (can be a string or column expression) Returns: Column: A column containing the strings with replacements applied Example: Replace with literal string ```python # Replace all occurrences of "foo" in the "name" column with "bar" df.select(text.replace(col("name"), "foo", "bar")) ``` Example: Replace using column values ```python # Replace all occurrences of the value in the "search" column with the value in the "replace" column, for each row in the "text" column df.select(text.replace(col("text"), col("search"), col("replace"))) ```
null
true
false
512
551
null
Column
null
[ "src", "search", "replace" ]
null
null
Type: function Member Name: replace Qualified Name: fenic.api.functions.text.replace Docstring: Replace all occurrences of a pattern with a new string, treating pattern as a literal string. This method creates a new string column with all occurrences of the specified pattern replaced with a new string. The pattern is treated as a literal string, not a regular expression. If either search or replace is a column expression, the operation is performed dynamically using the values from those columns. Args: src: The input string column or column name to perform replacements on search: The pattern to search for (can be a string or column expression) replace: The string to replace with (can be a string or column expression) Returns: Column: A column containing the strings with replacements applied Example: Replace with literal string ```python # Replace all occurrences of "foo" in the "name" column with "bar" df.select(text.replace(col("name"), "foo", "bar")) ``` Example: Replace using column values ```python # Replace all occurrences of the value in the "search" column with the value in the "replace" column, for each row in the "text" column df.select(text.replace(col("text"), col("search"), col("replace"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["src", "search", "replace"] Returns: Column Parent Class: none
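The key property here is that the pattern is a literal string, not a regular expression — the same distinction Python draws between `str.replace` and `re.sub` (the latter corresponding to `regexp_replace` below). A hedged sketch of the literal case:

```python
text = "price is 3.14, always 3.14"

# Literal replacement: '.' is just a dot, not a regex wildcard
result = text.replace("3.14", "PI")
print(result)  # 'price is PI, always PI'
```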
function
regexp_replace
fenic.api.functions.text.regexp_replace
Replace all occurrences of a pattern with a new string, treating pattern as a regular expression. This method creates a new string column with all occurrences of the specified pattern replaced with a new string. The pattern is treated as a regular expression. If either pattern or replacement is a column expression, the operation is performed dynamically using the values from those columns. Args: src: The input string column or column name to perform replacements on pattern: The regular expression pattern to search for (can be a string or column expression) replacement: The string to replace with (can be a string or column expression) Returns: Column: A column containing the strings with replacements applied Example: Replace digits with dashes ```python # Replace all digits with dashes df.select(text.regexp_replace(col("text"), r"\d+", "--")) ``` Example: Dynamic replacement using column values ```python # Replace using patterns from columns df.select(text.regexp_replace(col("text"), col("pattern"), col("replacement"))) ``` Example: Complex pattern replacement ```python # Replace email addresses with [REDACTED] df.select(text.regexp_replace(col("text"), r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", "[REDACTED]")) ```
null
true
false
554
605
null
Column
null
[ "src", "pattern", "replacement" ]
null
null
Type: function Member Name: regexp_replace Qualified Name: fenic.api.functions.text.regexp_replace Docstring: Replace all occurrences of a pattern with a new string, treating pattern as a regular expression. This method creates a new string column with all occurrences of the specified pattern replaced with a new string. The pattern is treated as a regular expression. If either pattern or replacement is a column expression, the operation is performed dynamically using the values from those columns. Args: src: The input string column or column name to perform replacements on pattern: The regular expression pattern to search for (can be a string or column expression) replacement: The string to replace with (can be a string or column expression) Returns: Column: A column containing the strings with replacements applied Example: Replace digits with dashes ```python # Replace all digits with dashes df.select(text.regexp_replace(col("text"), r"\d+", "--")) ``` Example: Dynamic replacement using column values ```python # Replace using patterns from columns df.select(text.regexp_replace(col("text"), col("pattern"), col("replacement"))) ``` Example: Complex pattern replacement ```python # Replace email addresses with [REDACTED] df.select(text.regexp_replace(col("text"), r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", "[REDACTED]")) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["src", "pattern", "replacement"] Returns: Column Parent Class: none
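The email-redaction example in the docstring maps directly onto Python's `re.sub`, which the literal (non-column) form of `regexp_replace` behaves like; a hedged pure-Python sketch of the same replacement:

```python
import re

EMAIL_PATTERN = r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}"

def redact_emails(text: str) -> str:
    """Replace every email address in text with [REDACTED]."""
    return re.sub(EMAIL_PATTERN, "[REDACTED]", text)

print(redact_emails("contact alice@example.com or bob@test.org"))
# 'contact [REDACTED] or [REDACTED]'
```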
function
split
fenic.api.functions.text.split
Split a string column into an array using a regular expression pattern. This method creates an array column by splitting each value in the input string column at matches of the specified regular expression pattern. Args: src: The input string column or column name to split pattern: The regular expression pattern to split on limit: Maximum number of splits to perform (Default: -1 for unlimited). If > 0, returns at most limit+1 elements, with remainder in last element. Returns: Column: A column containing arrays of substrings Example: Split on whitespace ```python # Split on whitespace df.select(text.split(col("text"), r"\s+")) ``` Example: Split with limit ```python # Split on whitespace, max 2 splits df.select(text.split(col("text"), r"\s+", limit=2)) ```
null
true
false
608
638
null
Column
null
[ "src", "pattern", "limit" ]
null
null
Type: function Member Name: split Qualified Name: fenic.api.functions.text.split Docstring: Split a string column into an array using a regular expression pattern. This method creates an array column by splitting each value in the input string column at matches of the specified regular expression pattern. Args: src: The input string column or column name to split pattern: The regular expression pattern to split on limit: Maximum number of splits to perform (Default: -1 for unlimited). If > 0, returns at most limit+1 elements, with remainder in last element. Returns: Column: A column containing arrays of substrings Example: Split on whitespace ```python # Split on whitespace df.select(text.split(col("text"), r"\s+")) ``` Example: Split with limit ```python # Split on whitespace, max 2 splits df.select(text.split(col("text"), r"\s+", limit=2)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["src", "pattern", "limit"] Returns: Column Parent Class: none
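The `limit` semantics (at most `limit` splits, so at most `limit + 1` elements with the remainder kept in the last one) match Python's `re.split` with `maxsplit`; a hedged sketch of the documented behavior:

```python
import re

def split_limited(text, pattern, limit=-1):
    """Split on a regex pattern; limit > 0 caps the number of splits,
    mirroring the documented semantics (at most limit + 1 elements)."""
    maxsplit = limit if limit > 0 else 0  # re.split: 0 means unlimited
    return re.split(pattern, text, maxsplit=maxsplit)

print(split_limited("one two three four", r"\s+"))           # 4 elements
print(split_limited("one two three four", r"\s+", limit=2))  # ['one', 'two', 'three four']
```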
function
split_part
fenic.api.functions.text.split_part
Split a string and return a specific part using 1-based indexing. Splits each string by a delimiter and returns the specified part. If the delimiter is a column expression, the split operation is performed dynamically using the delimiter values from that column. Behavior: - If any input is null, returns null - If part_number is out of range of split parts, returns empty string - If part_number is 0, throws an error - If part_number is negative, counts from the end of the split parts - If the delimiter is an empty string, the string is not split Args: src: The input string column or column name to split delimiter: The delimiter to split on (can be a string or column expression) part_number: Which part to return (1-based, can be an integer or column expression) Returns: Column: A column containing the specified part from each split string Example: Get second part of comma-separated values ```python # Get second part of comma-separated values df.select(text.split_part(col("text"), ",", 2)) ``` Example: Get last part using negative index ```python # Get last part using negative index df.select(text.split_part(col("text"), ",", -1)) ``` Example: Use dynamic delimiter from column ```python # Use dynamic delimiter from column df.select(text.split_part(col("text"), col("delimiter"), 1)) ```
null
true
false
641
696
null
Column
null
[ "src", "delimiter", "part_number" ]
null
null
Type: function Member Name: split_part Qualified Name: fenic.api.functions.text.split_part Docstring: Split a string and return a specific part using 1-based indexing. Splits each string by a delimiter and returns the specified part. If the delimiter is a column expression, the split operation is performed dynamically using the delimiter values from that column. Behavior: - If any input is null, returns null - If part_number is out of range of split parts, returns empty string - If part_number is 0, throws an error - If part_number is negative, counts from the end of the split parts - If the delimiter is an empty string, the string is not split Args: src: The input string column or column name to split delimiter: The delimiter to split on (can be a string or column expression) part_number: Which part to return (1-based, can be an integer or column expression) Returns: Column: A column containing the specified part from each split string Example: Get second part of comma-separated values ```python # Get second part of comma-separated values df.select(text.split_part(col("text"), ",", 2)) ``` Example: Get last part using negative index ```python # Get last part using negative index df.select(text.split_part(col("text"), ",", -1)) ``` Example: Use dynamic delimiter from column ```python # Use dynamic delimiter from column df.select(text.split_part(col("text"), col("delimiter"), 1)) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["src", "delimiter", "part_number"] Returns: Column Parent Class: none
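The behavior rules above are precise enough to state as a pure-Python reference; a hedged sketch illustrating the documented semantics (not fenic's implementation):

```python
def split_part(src, delimiter, part_number):
    """Reference for the documented split_part semantics (1-based indexing)."""
    if src is None or delimiter is None or part_number is None:
        return None                      # any null input -> null
    if part_number == 0:
        raise ValueError("part_number must not be 0")
    if delimiter == "":
        parts = [src]                    # empty delimiter: string is not split
    else:
        parts = src.split(delimiter)
    # Negative part numbers count from the end of the split parts
    index = part_number - 1 if part_number > 0 else len(parts) + part_number
    if 0 <= index < len(parts):
        return parts[index]
    return ""                            # out of range: empty string

print(split_part("a,b,c", ",", 2))   # 'b'
print(split_part("a,b,c", ",", -1))  # 'c'
print(split_part("a,b,c", ",", 9))   # ''
```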
function
upper
fenic.api.functions.text.upper
Convert all characters in a string column to uppercase. Args: column: The input string column to convert to uppercase Returns: Column: A column containing the uppercase strings Example: Convert text to uppercase ```python # Convert all text in the name column to uppercase df.select(text.upper(col("name"))) ```
null
true
false
699
717
null
Column
null
[ "column" ]
null
null
Type: function Member Name: upper Qualified Name: fenic.api.functions.text.upper Docstring: Convert all characters in a string column to uppercase. Args: column: The input string column to convert to uppercase Returns: Column: A column containing the uppercase strings Example: Convert text to uppercase ```python # Convert all text in the name column to uppercase df.select(text.upper(col("name"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
lower
fenic.api.functions.text.lower
Convert all characters in a string column to lowercase. Args: column: The input string column to convert to lowercase Returns: Column: A column containing the lowercase strings Example: Convert text to lowercase ```python # Convert all text in the name column to lowercase df.select(text.lower(col("name"))) ```
null
true
false
720
738
null
Column
null
[ "column" ]
null
null
Type: function Member Name: lower Qualified Name: fenic.api.functions.text.lower Docstring: Convert all characters in a string column to lowercase. Args: column: The input string column to convert to lowercase Returns: Column: A column containing the lowercase strings Example: Convert text to lowercase ```python # Convert all text in the name column to lowercase df.select(text.lower(col("name"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
title_case
fenic.api.functions.text.title_case
Convert the first character of each word in a string column to uppercase. Args: column: The input string column to convert to title case Returns: Column: A column containing the title case strings Example: Convert text to title case ```python # Convert text in the name column to title case df.select(text.title_case(col("name"))) ```
null
true
false
741
759
null
Column
null
[ "column" ]
null
null
Type: function Member Name: title_case Qualified Name: fenic.api.functions.text.title_case Docstring: Convert the first character of each word in a string column to uppercase. Args: column: The input string column to convert to title case Returns: Column: A column containing the title case strings Example: Convert text to title case ```python # Convert text in the name column to title case df.select(text.title_case(col("name"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
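The description ("first character of each word to uppercase") is close to, but not identical to, Python's `str.title`, which also lowercases the rest of each word and treats apostrophes as word boundaries. A hedged sketch of the word-by-word reading, assuming whitespace-delimited words (fenic's exact word-boundary rules are not stated in this record):

```python
def title_case(text: str) -> str:
    """Uppercase the first character of each whitespace-delimited word,
    leaving the remaining characters unchanged."""
    return " ".join(word[:1].upper() + word[1:] for word in text.split(" "))

print(title_case("hello world"))  # 'Hello World'
print(title_case("it's fine"))    # "It's Fine"  (str.title would give "It'S Fine")
```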
function
trim
fenic.api.functions.text.trim
Remove whitespace from both sides of strings in a column. This function removes all whitespace characters (spaces, tabs, newlines) from both the beginning and end of each string in the column. Args: column: The input string column or column name to trim Returns: Column: A column containing the trimmed strings Example: Remove whitespace from both sides ```python # Remove whitespace from both sides of text df.select(text.trim(col("text"))) ```
null
true
false
762
783
null
Column
null
[ "column" ]
null
null
Type: function Member Name: trim Qualified Name: fenic.api.functions.text.trim Docstring: Remove whitespace from both sides of strings in a column. This function removes all whitespace characters (spaces, tabs, newlines) from both the beginning and end of each string in the column. Args: column: The input string column or column name to trim Returns: Column: A column containing the trimmed strings Example: Remove whitespace from both sides ```python # Remove whitespace from both sides of text df.select(text.trim(col("text"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
btrim
fenic.api.functions.text.btrim
Remove specified characters from both sides of strings in a column. This function removes all occurrences of the specified characters from both the beginning and end of each string in the column. If trim is a column expression, the characters to remove are determined dynamically from the values in that column. Args: col: The input string column or column name to trim trim: The characters to remove from both sides (Default: whitespace) Can be a string or column expression. Returns: Column: A column containing the trimmed strings Example: Remove brackets from both sides ```python # Remove brackets from both sides of text df.select(text.btrim(col("text"), "[]")) ``` Example: Remove characters specified in a column ```python # Remove characters specified in a column df.select(text.btrim(col("text"), col("chars"))) ```
null
true
false
786
819
null
Column
null
[ "col", "trim" ]
null
null
Type: function Member Name: btrim Qualified Name: fenic.api.functions.text.btrim Docstring: Remove specified characters from both sides of strings in a column. This function removes all occurrences of the specified characters from both the beginning and end of each string in the column. If trim is a column expression, the characters to remove are determined dynamically from the values in that column. Args: col: The input string column or column name to trim trim: The characters to remove from both sides (Default: whitespace) Can be a string or column expression. Returns: Column: A column containing the trimmed strings Example: Remove brackets from both sides ```python # Remove brackets from both sides of text df.select(text.btrim(col("text"), "[]")) ``` Example: Remove characters specified in a column ```python # Remove characters specified in a column df.select(text.btrim(col("text"), col("chars"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["col", "trim"] Returns: Column Parent Class: none
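For the literal (non-column) form, the described behavior matches Python's `str.strip` with a character set; a hedged sketch, assuming the documented whitespace default:

```python
def btrim(text: str, trim: str = None) -> str:
    """Strip all occurrences of the characters in `trim` from both ends;
    None (the default) strips whitespace, mirroring the documented default."""
    return text.strip(trim)

print(btrim("[hello]", "[]"))  # 'hello'
print(btrim("  hello  "))      # 'hello'
```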
function
ltrim
fenic.api.functions.text.ltrim
Remove whitespace from the start of strings in a column. This function removes all whitespace characters (spaces, tabs, newlines) from the beginning of each string in the column. Args: col: The input string column or column name to trim Returns: Column: A column containing the left-trimmed strings Example: Remove leading whitespace ```python # Remove whitespace from the start of text df.select(text.ltrim(col("text"))) ```
null
true
false
822
843
null
Column
null
[ "col" ]
null
null
Type: function Member Name: ltrim Qualified Name: fenic.api.functions.text.ltrim Docstring: Remove whitespace from the start of strings in a column. This function removes all whitespace characters (spaces, tabs, newlines) from the beginning of each string in the column. Args: col: The input string column or column name to trim Returns: Column: A column containing the left-trimmed strings Example: Remove leading whitespace ```python # Remove whitespace from the start of text df.select(text.ltrim(col("text"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["col"] Returns: Column Parent Class: none
function
rtrim
fenic.api.functions.text.rtrim
Remove whitespace from the end of strings in a column. This function removes all whitespace characters (spaces, tabs, newlines) from the end of each string in the column. Args: col: The input string column or column name to trim Returns: Column: A column containing the right-trimmed strings Example: Remove trailing whitespace ```python # Remove whitespace from the end of text df.select(text.rtrim(col("text"))) ```
null
true
false
846
867
null
Column
null
[ "col" ]
null
null
Type: function Member Name: rtrim Qualified Name: fenic.api.functions.text.rtrim Docstring: Remove whitespace from the end of strings in a column. This function removes all whitespace characters (spaces, tabs, newlines) from the end of each string in the column. Args: col: The input string column or column name to trim Returns: Column: A column containing the right-trimmed strings Example: Remove trailing whitespace ```python # Remove whitespace from the end of text df.select(text.rtrim(col("text"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["col"] Returns: Column Parent Class: none
function
length
fenic.api.functions.text.length
Calculate the character length of each string in the column. Args: column: The input string column to calculate lengths for Returns: Column: A column containing the length of each string in characters Example: Get string lengths ```python # Get the length of each string in the name column df.select(text.length(col("name"))) ```
null
true
false
870
888
null
Column
null
[ "column" ]
null
null
Type: function Member Name: length Qualified Name: fenic.api.functions.text.length Docstring: Calculate the character length of each string in the column. Args: column: The input string column to calculate lengths for Returns: Column: A column containing the length of each string in characters Example: Get string lengths ```python # Get the length of each string in the name column df.select(text.length(col("name"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
byte_length
fenic.api.functions.text.byte_length
Calculate the byte length of each string in the column. Args: column: The input string column to calculate byte lengths for Returns: Column: A column containing the byte length of each string Example: Get byte lengths ```python # Get the byte length of each string in the name column df.select(text.byte_length(col("name"))) ```
null
true
false
891
909
null
Column
null
[ "column" ]
null
null
Type: function Member Name: byte_length Qualified Name: fenic.api.functions.text.byte_length Docstring: Calculate the byte length of each string in the column. Args: column: The input string column to calculate byte lengths for Returns: Column: A column containing the byte length of each string Example: Get byte lengths ```python # Get the byte length of each string in the name column df.select(text.byte_length(col("name"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
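`length` counts characters while `byte_length` counts encoded bytes, so for non-ASCII text the two differ. A hedged pure-Python sketch, assuming UTF-8 as the byte encoding (the record above does not state which encoding is used):

```python
def char_length(s: str) -> int:
    return len(s)                   # characters (code points)

def utf8_byte_length(s: str) -> int:
    return len(s.encode("utf-8"))   # bytes under UTF-8

# 'é' is one character but two bytes in UTF-8
print(char_length("héllo"), utf8_byte_length("héllo"))  # 5 6
```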
module
builtin
fenic.api.functions.builtin
Built-in functions for Fenic DataFrames.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/functions/builtin.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: builtin Qualified Name: fenic.api.functions.builtin Docstring: Built-in functions for Fenic DataFrames. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
function
sum
fenic.api.functions.builtin.sum
Aggregate function: returns the sum of all values in the specified column. Args: column: Column or column name to compute the sum of Returns: A Column expression representing the sum aggregation Raises: TypeError: If column is not a Column or string
null
true
false
30
45
null
Column
null
[ "column" ]
null
null
Type: function Member Name: sum Qualified Name: fenic.api.functions.builtin.sum Docstring: Aggregate function: returns the sum of all values in the specified column. Args: column: Column or column name to compute the sum of Returns: A Column expression representing the sum aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
avg
fenic.api.functions.builtin.avg
Aggregate function: returns the average (mean) of all values in the specified column. Args: column: Column or column name to compute the average of Returns: A Column expression representing the average aggregation Raises: TypeError: If column is not a Column or string
null
true
false
48
63
null
Column
null
[ "column" ]
null
null
Type: function Member Name: avg Qualified Name: fenic.api.functions.builtin.avg Docstring: Aggregate function: returns the average (mean) of all values in the specified column. Args: column: Column or column name to compute the average of Returns: A Column expression representing the average aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
mean
fenic.api.functions.builtin.mean
Aggregate function: returns the mean (average) of all values in the specified column. Alias for avg(). Args: column: Column or column name to compute the mean of Returns: A Column expression representing the mean aggregation Raises: TypeError: If column is not a Column or string
null
true
false
66
83
null
Column
null
[ "column" ]
null
null
Type: function Member Name: mean Qualified Name: fenic.api.functions.builtin.mean Docstring: Aggregate function: returns the mean (average) of all values in the specified column. Alias for avg(). Args: column: Column or column name to compute the mean of Returns: A Column expression representing the mean aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
min
fenic.api.functions.builtin.min
Aggregate function: returns the minimum value in the specified column. Args: column: Column or column name to compute the minimum of Returns: A Column expression representing the minimum aggregation Raises: TypeError: If column is not a Column or string
null
true
false
86
101
null
Column
null
[ "column" ]
null
null
Type: function Member Name: min Qualified Name: fenic.api.functions.builtin.min Docstring: Aggregate function: returns the minimum value in the specified column. Args: column: Column or column name to compute the minimum of Returns: A Column expression representing the minimum aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
max
fenic.api.functions.builtin.max
Aggregate function: returns the maximum value in the specified column. Args: column: Column or column name to compute the maximum of Returns: A Column expression representing the maximum aggregation Raises: TypeError: If column is not a Column or string
null
true
false
104
119
null
Column
null
[ "column" ]
null
null
Type: function Member Name: max Qualified Name: fenic.api.functions.builtin.max Docstring: Aggregate function: returns the maximum value in the specified column. Args: column: Column or column name to compute the maximum of Returns: A Column expression representing the maximum aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
count
fenic.api.functions.builtin.count
Aggregate function: returns the count of non-null values in the specified column. Args: column: Column or column name to count values in Returns: A Column expression representing the count aggregation Raises: TypeError: If column is not a Column or string
null
true
false
122
139
null
Column
null
[ "column" ]
null
null
Type: function Member Name: count Qualified Name: fenic.api.functions.builtin.count Docstring: Aggregate function: returns the count of non-null values in the specified column. Args: column: Column or column name to count values in Returns: A Column expression representing the count aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
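The detail most easily missed in `count` is that it counts non-null values only; a hedged pure-Python sketch of that semantic:

```python
def count_non_null(values):
    """Count values that are not null (None), mirroring the documented
    count() semantics."""
    return sum(1 for v in values if v is not None)

print(count_non_null([10, None, 30, None, 50]))  # 3
```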
function
collect_list
fenic.api.functions.builtin.collect_list
Aggregate function: collects all values from the specified column into a list. Args: column: Column or column name to collect values from Returns: A Column expression representing the list aggregation Raises: TypeError: If column is not a Column or string
null
true
false
142
157
null
Column
null
[ "column" ]
null
null
Type: function Member Name: collect_list Qualified Name: fenic.api.functions.builtin.collect_list Docstring: Aggregate function: collects all values from the specified column into a list. Args: column: Column or column name to collect values from Returns: A Column expression representing the list aggregation Raises: TypeError: If column is not a Column or string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
array_agg
fenic.api.functions.builtin.array_agg
Alias for collect_list().
null
true
false
160
163
null
Column
null
[ "column" ]
null
null
Type: function Member Name: array_agg Qualified Name: fenic.api.functions.builtin.array_agg Docstring: Alias for collect_list(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
struct
fenic.api.functions.builtin.struct
Creates a new struct column from multiple input columns. Args: *args: Columns or column names to combine into a struct. Can be: - Individual arguments - Lists of columns/column names - Tuples of columns/column names Returns: A Column expression representing a struct containing the input columns Raises: TypeError: If any argument is not a Column, string, or collection of Columns/strings
null
true
false
166
195
null
Column
null
[ "args" ]
null
null
Type: function Member Name: struct Qualified Name: fenic.api.functions.builtin.struct Docstring: Creates a new struct column from multiple input columns. Args: *args: Columns or column names to combine into a struct. Can be: - Individual arguments - Lists of columns/column names - Tuples of columns/column names Returns: A Column expression representing a struct containing the input columns Raises: TypeError: If any argument is not a Column, string, or collection of Columns/strings Value: none Annotation: none is Public? : true is Private? : false Parameters: ["args"] Returns: Column Parent Class: none
function
array
fenic.api.functions.builtin.array
Creates a new array column from multiple input columns. Args: *args: Columns or column names to combine into an array. Can be: - Individual arguments - Lists of columns/column names - Tuples of columns/column names Returns: A Column expression representing an array containing values from the input columns Raises: TypeError: If any argument is not a Column, string, or collection of Columns/strings
null
true
false
198
227
null
Column
null
[ "args" ]
null
null
Type: function Member Name: array Qualified Name: fenic.api.functions.builtin.array Docstring: Creates a new array column from multiple input columns. Args: *args: Columns or column names to combine into an array. Can be: - Individual arguments - Lists of columns/column names - Tuples of columns/column names Returns: A Column expression representing an array containing values from the input columns Raises: TypeError: If any argument is not a Column, string, or collection of Columns/strings Value: none Annotation: none is Public? : true is Private? : false Parameters: ["args"] Returns: Column Parent Class: none
function
udf
fenic.api.functions.builtin.udf
A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows. When applied, UDFs will: - Access `StructType` columns as Python dictionaries (`dict[str, Any]`). - Access `ArrayType` columns as Python lists (`list[Any]`). - Access primitive types (e.g., `int`, `float`, `str`) as their respective Python types. Args: f: Python function to convert to UDF return_type: Expected return type of the UDF. Required parameter. Example: UDF with primitive types ```python # UDF with primitive types @udf(return_type=IntegerType) def add_one(x: int): return x + 1 # Or add_one = udf(lambda x: x + 1, return_type=IntegerType) ``` Example: UDF with nested types ```python # UDF with nested types @udf(return_type=StructType([StructField("value1", IntegerType), StructField("value2", IntegerType)])) def example_udf(x: dict[str, int], y: list[int]): return { "value1": x["value1"] + x["value2"] + y[0], "value2": x["value1"] + x["value2"] + y[1], } ```
null
true
false
230
277
null
null
null
[ "f", "return_type" ]
null
null
Type: function Member Name: udf Qualified Name: fenic.api.functions.builtin.udf Docstring: A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows. When applied, UDFs will: - Access `StructType` columns as Python dictionaries (`dict[str, Any]`). - Access `ArrayType` columns as Python lists (`list[Any]`). - Access primitive types (e.g., `int`, `float`, `str`) as their respective Python types. Args: f: Python function to convert to UDF return_type: Expected return type of the UDF. Required parameter. Example: UDF with primitive types ```python # UDF with primitive types @udf(return_type=IntegerType) def add_one(x: int): return x + 1 # Or add_one = udf(lambda x: x + 1, return_type=IntegerType) ``` Example: UDF with nested types ```python # UDF with nested types @udf(return_type=StructType([StructField("value1", IntegerType), StructField("value2", IntegerType)])) def example_udf(x: dict[str, int], y: list[int]): return { "value1": x["value1"] + x["value2"] + y[0], "value2": x["value1"] + x["value2"] + y[1], } ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["f", "return_type"] Returns: none Parent Class: none
function
asc
fenic.api.functions.builtin.asc
Creates a Column expression representing an ascending sort order. Args: column: The column to apply the ascending ordering to. Returns: A Column expression representing the column and the ascending sort order. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by().
null
true
false
280
294
null
Column
null
[ "column" ]
null
null
Type: function Member Name: asc Qualified Name: fenic.api.functions.builtin.asc Docstring: Creates a Column expression representing an ascending sort order. Args: column: The column to apply the ascending ordering to. Returns: A Column expression representing the column and the ascending sort order. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
asc_nulls_first
fenic.api.functions.builtin.asc_nulls_first
Creates a Column expression representing an ascending sort order with nulls first. Args: column: The column to apply the ascending ordering to. Returns: A Column expression representing the column and the ascending sort order with nulls first. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by().
null
true
false
297
311
null
Column
null
[ "column" ]
null
null
Type: function Member Name: asc_nulls_first Qualified Name: fenic.api.functions.builtin.asc_nulls_first Docstring: Creates a Column expression representing an ascending sort order with nulls first. Args: column: The column to apply the ascending ordering to. Returns: A Column expression representing the column and the ascending sort order with nulls first. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
asc_nulls_last
fenic.api.functions.builtin.asc_nulls_last
Creates a Column expression representing an ascending sort order with nulls last. Args: column: The column to apply the ascending ordering to. Returns: A Column expression representing the column and the ascending sort order with nulls last. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by().
null
true
false
314
328
null
Column
null
[ "column" ]
null
null
Type: function Member Name: asc_nulls_last Qualified Name: fenic.api.functions.builtin.asc_nulls_last Docstring: Creates a Column expression representing an ascending sort order with nulls last. Args: column: The column to apply the ascending ordering to. Returns: A Column expression representing the column and the ascending sort order with nulls last. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
desc
fenic.api.functions.builtin.desc
Creates a Column expression representing a descending sort order. Args: column: The column to apply the descending ordering to. Returns: A Column expression representing the column and the descending sort order. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by().
null
true
false
331
345
null
Column
null
[ "column" ]
null
null
Type: function Member Name: desc Qualified Name: fenic.api.functions.builtin.desc Docstring: Creates a Column expression representing a descending sort order. Args: column: The column to apply the descending ordering to. Returns: A Column expression representing the column and the descending sort order. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
desc_nulls_first
fenic.api.functions.builtin.desc_nulls_first
Creates a Column expression representing a descending sort order with nulls first. Args: column: The column to apply the descending ordering to. Returns: A Column expression representing the column and the descending sort order with nulls first. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by().
null
true
false
348
362
null
Column
null
[ "column" ]
null
null
Type: function Member Name: desc_nulls_first Qualified Name: fenic.api.functions.builtin.desc_nulls_first Docstring: Creates a Column expression representing a descending sort order with nulls first. Args: column: The column to apply the descending ordering to. Returns: A Column expression representing the column and the descending sort order with nulls first. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
desc_nulls_last
fenic.api.functions.builtin.desc_nulls_last
Creates a Column expression representing a descending sort order with nulls last. Args: column: The column to apply the descending ordering to. Returns: A Column expression representing the column and the descending sort order with nulls last. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by().
null
true
false
365
379
null
Column
null
[ "column" ]
null
null
Type: function Member Name: desc_nulls_last Qualified Name: fenic.api.functions.builtin.desc_nulls_last Docstring: Creates a Column expression representing a descending sort order with nulls last. Args: column: The column to apply the descending ordering to. Returns: A Column expression representing the column and the descending sort order with nulls last. Raises: ValueError: If the type of the column cannot be inferred. Error: If this expression is passed to a dataframe operation besides sort() and order_by(). Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
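The four nulls-aware orderings in the records above (asc_nulls_first, asc_nulls_last, desc_nulls_first, desc_nulls_last) share one idea: null values are segregated to one end of the sort. A plain-Python sketch of the sort keys, not fenic's implementation, illustrates the semantics:

```python
def asc_nulls_last_key(value):
    # Ascending order with nulls last: the leading boolean sorts every
    # None after every non-null value; the placeholder 0 is never
    # compared against a real value because the booleans differ.
    return (value is None, 0 if value is None else value)

def asc_nulls_first_key(value):
    # Ascending order with nulls first: None values sort before the rest.
    return (value is not None, 0 if value is None else value)

print(sorted([3, None, 1], key=asc_nulls_last_key))   # [1, 3, None]
print(sorted([3, None, 1], key=asc_nulls_first_key))  # [None, 1, 3]
```

Reversing either key (e.g. `sorted(..., key=asc_nulls_last_key, reverse=True)`) yields the corresponding descending order with the null placement flipped.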
function
array_size
fenic.api.functions.builtin.array_size
Returns the number of elements in an array column. This function computes the length of arrays stored in the specified column. Returns None for null arrays. Args: column: Column or column name containing arrays whose length to compute. Returns: A Column expression representing the array length. Raises: TypeError: If the column does not contain array data. Example: Get array sizes ```python # Get the size of arrays in 'tags' column df.select(array_size("tags")) # Use with column reference df.select(array_size(col("tags"))) ```
null
true
false
382
409
null
Column
null
[ "column" ]
null
null
Type: function Member Name: array_size Qualified Name: fenic.api.functions.builtin.array_size Docstring: Returns the number of elements in an array column. This function computes the length of arrays stored in the specified column. Returns None for null arrays. Args: column: Column or column name containing arrays whose length to compute. Returns: A Column expression representing the array length. Raises: TypeError: If the column does not contain array data. Example: Get array sizes ```python # Get the size of arrays in 'tags' column df.select(array_size("tags")) # Use with column reference df.select(array_size(col("tags"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
array_contains
fenic.api.functions.builtin.array_contains
Checks if array column contains a specific value. This function returns True if the array in the specified column contains the given value, and False otherwise. Returns False if the array is None. Args: column: Column or column name containing the arrays to check. value: Value to search for in the arrays. Can be: - A literal value (string, number, boolean) - A Column expression Returns: A boolean Column expression (True if value is found, False otherwise). Raises: TypeError: If value type is incompatible with the array element type. TypeError: If the column does not contain array data. Example: Check for values in arrays ```python # Check if 'python' exists in arrays in the 'tags' column df.select(array_contains("tags", "python")) # Check using a value from another column df.select(array_contains("tags", col("search_term"))) ```
null
true
false
412
453
null
Column
null
[ "column", "value" ]
null
null
Type: function Member Name: array_contains Qualified Name: fenic.api.functions.builtin.array_contains Docstring: Checks if array column contains a specific value. This function returns True if the array in the specified column contains the given value, and False otherwise. Returns False if the array is None. Args: column: Column or column name containing the arrays to check. value: Value to search for in the arrays. Can be: - A literal value (string, number, boolean) - A Column expression Returns: A boolean Column expression (True if value is found, False otherwise). Raises: TypeError: If value type is incompatible with the array element type. TypeError: If the column does not contain array data. Example: Check for values in arrays ```python # Check if 'python' exists in arrays in the 'tags' column df.select(array_contains("tags", "python")) # Check using a value from another column df.select(array_contains("tags", col("search_term"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "value"] Returns: Column Parent Class: none
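The null-handling rules stated in the array_size and array_contains records above (a null array yields None for its size but False for containment) can be sketched per row in plain Python; this is an illustration of the documented semantics, not fenic's implementation:

```python
def array_size(arr):
    # Mirrors the documented null semantics: a null array yields None.
    return None if arr is None else len(arr)

def array_contains(arr, value):
    # A null array yields False rather than None.
    return False if arr is None else value in arr

print(array_size(["python", "sql"]))        # 2
print(array_size(None))                      # None
print(array_contains(["python", "sql"], "python"))  # True
print(array_contains(None, "python"))        # False
```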
function
when
fenic.api.functions.builtin.when
Evaluates a condition and returns a value if true. This function is used to create conditional expressions. If Column.otherwise() is not invoked, None is returned for unmatched conditions. Args: condition: A boolean Column expression to evaluate. value: A Column expression to return if the condition is true. Returns: A Column expression that evaluates the condition and returns the specified value when true, and None otherwise. Raises: TypeError: If the condition is not a boolean Column expression. Example: Basic conditional expression ```python # Basic usage df.select(when(col("age") > 18, lit("adult"))) # With otherwise df.select(when(col("age") > 18, lit("adult")).otherwise(lit("minor"))) ```
null
true
false
456
486
null
Column
null
[ "condition", "value" ]
null
null
Type: function Member Name: when Qualified Name: fenic.api.functions.builtin.when Docstring: Evaluates a condition and returns a value if true. This function is used to create conditional expressions. If Column.otherwise() is not invoked, None is returned for unmatched conditions. Args: condition: A boolean Column expression to evaluate. value: A Column expression to return if the condition is true. Returns: A Column expression that evaluates the condition and returns the specified value when true, and None otherwise. Raises: TypeError: If the condition is not a boolean Column expression. Example: Basic conditional expression ```python # Basic usage df.select(when(col("age") > 18, lit("adult"))) # With otherwise df.select(when(col("age") > 18, lit("adult")).otherwise(lit("minor"))) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["condition", "value"] Returns: Column Parent Class: none
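The per-row behavior of when(...).otherwise(...) described in the record above reduces to a conditional with a None fallback; a minimal plain-Python sketch:

```python
def when_value(condition, value, otherwise=None):
    # Per-row semantics of when(cond, value).otherwise(default):
    # return value when the condition is true, otherwise the fallback
    # (None when .otherwise() is not invoked).
    return value if condition else otherwise

print(when_value(25 > 18, "adult"))            # adult
print(when_value(15 > 18, "adult"))            # None
print(when_value(15 > 18, "adult", "minor"))   # minor
```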
function
coalesce
fenic.api.functions.builtin.coalesce
Returns the first non-null value from the given columns for each row. This function mimics the behavior of SQL's COALESCE function. It evaluates the input columns in order and returns the first non-null value encountered. If all values are null, returns null. Args: *cols: Column expressions or column names to evaluate. Can be: - Individual arguments - Lists of columns/column names - Tuples of columns/column names Returns: A Column expression containing the first non-null value from the input columns. Raises: ValueError: If no columns are provided. Example: Basic coalesce usage ```python # Basic usage df.select(coalesce("col1", "col2", "col3")) # With nested collections df.select(coalesce(["col1", "col2"], "col3")) ```
null
true
false
489
531
null
Column
null
[ "cols" ]
null
null
Type: function Member Name: coalesce Qualified Name: fenic.api.functions.builtin.coalesce Docstring: Returns the first non-null value from the given columns for each row. This function mimics the behavior of SQL's COALESCE function. It evaluates the input columns in order and returns the first non-null value encountered. If all values are null, returns null. Args: *cols: Column expressions or column names to evaluate. Can be: - Individual arguments - Lists of columns/column names - Tuples of columns/column names Returns: A Column expression containing the first non-null value from the input columns. Raises: ValueError: If no columns are provided. Example: Basic coalesce usage ```python # Basic usage df.select(coalesce("col1", "col2", "col3")) # With nested collections df.select(coalesce(["col1", "col2"], "col3")) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["cols"] Returns: Column Parent Class: none
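The SQL COALESCE behavior the record above describes, first non-null value in order, null only when every input is null, can be sketched per row in plain Python. Note that this differs from chained `or`: a falsy but non-null value such as 0 or "" is still returned.

```python
def coalesce(*values):
    # Return the first non-None value, or None if all values are None.
    for v in values:
        if v is not None:
            return v
    return None

print(coalesce(None, None, 3))  # 3
print(coalesce(0, 1))           # 0 (0 is non-null, unlike with `or`)
print(coalesce(None, None))     # None
```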
module
json
fenic.api.functions.json
JSON functions.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/functions/json.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: json Qualified Name: fenic.api.functions.json Docstring: JSON functions. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
function
jq
fenic.api.functions.json.jq
Applies a JQ query to a column containing JSON-formatted strings. Args: column (ColumnOrName): Input column of type `JsonType`. query (str): A [JQ](https://jqlang.org/) expression used to extract or transform values. Returns: Column: A column containing the result of applying the JQ query to each row's JSON input. Notes: - The input column *must* be of type `JsonType`. Use `cast(JsonType)` if needed to ensure correct typing. - This function supports extracting nested fields, transforming arrays/objects, and other standard JQ operations. Example: Extract nested field ```python # Extract the "user.name" field from a JSON column df.select(json.jq(col("json_col"), ".user.name")) ``` Example: Cast to JsonType before querying ```python df.select(json.jq(col("raw_json").cast(JsonType), ".event.type")) ``` Example: Work with arrays ```python # Work with arrays using JQ functions df.select(json.jq(col("json_array"), "map(.id)")) ```
null
true
false
12
46
null
Column
null
[ "column", "query" ]
null
null
Type: function Member Name: jq Qualified Name: fenic.api.functions.json.jq Docstring: Applies a JQ query to a column containing JSON-formatted strings. Args: column (ColumnOrName): Input column of type `JsonType`. query (str): A [JQ](https://jqlang.org/) expression used to extract or transform values. Returns: Column: A column containing the result of applying the JQ query to each row's JSON input. Notes: - The input column *must* be of type `JsonType`. Use `cast(JsonType)` if needed to ensure correct typing. - This function supports extracting nested fields, transforming arrays/objects, and other standard JQ operations. Example: Extract nested field ```python # Extract the "user.name" field from a JSON column df.select(json.jq(col("json_col"), ".user.name")) ``` Example: Cast to JsonType before querying ```python df.select(json.jq(col("raw_json").cast(JsonType), ".event.type")) ``` Example: Work with arrays ```python # Work with arrays using JQ functions df.select(json.jq(col("json_array"), "map(.id)")) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "query"] Returns: Column Parent Class: none
function
get_type
fenic.api.functions.json.get_type
Get the JSON type of each value. Args: column (ColumnOrName): Input column of type `JsonType`. Returns: Column: A column of strings indicating the JSON type ("string", "number", "boolean", "array", "object", "null"). Example: Get JSON types ```python df.select(json.get_type(col("json_data"))) ``` Example: Filter by type ```python # Filter by type df.filter(json.get_type(col("data")) == "array") ```
null
true
false
49
73
null
Column
null
[ "column" ]
null
null
Type: function Member Name: get_type Qualified Name: fenic.api.functions.json.get_type Docstring: Get the JSON type of each value. Args: column (ColumnOrName): Input column of type `JsonType`. Returns: Column: A column of strings indicating the JSON type ("string", "number", "boolean", "array", "object", "null"). Example: Get JSON types ```python df.select(json.get_type(col("json_data"))) ``` Example: Filter by type ```python # Filter by type df.filter(json.get_type(col("data")) == "array") ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column"] Returns: Column Parent Class: none
function
contains
fenic.api.functions.json.contains
Check if a JSON value contains the specified value using recursive deep search. Args: column (ColumnOrName): Input column of type `JsonType`. value (str): Valid JSON string to search for. Returns: Column: A column of booleans indicating whether the JSON contains the value. Matching Rules: - **Objects**: Uses partial matching - `{"role": "admin"}` matches `{"role": "admin", "level": 5}` - **Arrays**: Uses exact matching - `[1, 2]` only matches exactly `[1, 2]`, not `[1, 2, 3]` - **Primitives**: Uses exact matching - `42` matches `42` but not `"42"` - **Search is recursive**: Searches at all nesting levels throughout the JSON structure - **Type-aware**: Distinguishes between `42` (number) and `"42"` (string) Example: Find objects with partial structure match ```python # Find objects with partial structure match (at any nesting level) df.select(json.contains(col("json_data"), '{"name": "Alice"}')) # Matches: {"name": "Alice", "age": 30} and {"user": {"name": "Alice"}} ``` Example: Find exact array match ```python # Find exact array match (at any nesting level) df.select(json.contains(col("json_data"), '["read", "write"]')) # Matches: {"permissions": ["read", "write"]} but not ["read", "write", "admin"] ``` Example: Find exact primitive values ```python # Find exact primitive values (at any nesting level) df.select(json.contains(col("json_data"), '"admin"')) # Matches: {"role": "admin"} and ["admin", "user"] but not {"role": "administrator"} ``` Example: Type distinction matters ```python # Type distinction matters df.select(json.contains(col("json_data"), '42')) # number 42 df.select(json.contains(col("json_data"), '"42"')) # string "42" ``` Raises: ValidationError: If `value` is not valid JSON.
null
true
false
76
127
null
Column
null
[ "column", "value" ]
null
null
Type: function Member Name: contains Qualified Name: fenic.api.functions.json.contains Docstring: Check if a JSON value contains the specified value using recursive deep search. Args: column (ColumnOrName): Input column of type `JsonType`. value (str): Valid JSON string to search for. Returns: Column: A column of booleans indicating whether the JSON contains the value. Matching Rules: - **Objects**: Uses partial matching - `{"role": "admin"}` matches `{"role": "admin", "level": 5}` - **Arrays**: Uses exact matching - `[1, 2]` only matches exactly `[1, 2]`, not `[1, 2, 3]` - **Primitives**: Uses exact matching - `42` matches `42` but not `"42"` - **Search is recursive**: Searches at all nesting levels throughout the JSON structure - **Type-aware**: Distinguishes between `42` (number) and `"42"` (string) Example: Find objects with partial structure match ```python # Find objects with partial structure match (at any nesting level) df.select(json.contains(col("json_data"), '{"name": "Alice"}')) # Matches: {"name": "Alice", "age": 30} and {"user": {"name": "Alice"}} ``` Example: Find exact array match ```python # Find exact array match (at any nesting level) df.select(json.contains(col("json_data"), '["read", "write"]')) # Matches: {"permissions": ["read", "write"]} but not ["read", "write", "admin"] ``` Example: Find exact primitive values ```python # Find exact primitive values (at any nesting level) df.select(json.contains(col("json_data"), '"admin"')) # Matches: {"role": "admin"} and ["admin", "user"] but not {"role": "administrator"} ``` Example: Type distinction matters ```python # Type distinction matters df.select(json.contains(col("json_data"), '42')) # number 42 df.select(json.contains(col("json_data"), '"42"')) # string "42" ``` Raises: ValidationError: If `value` is not valid JSON. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["column", "value"] Returns: Column Parent Class: none
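The matching rules in the json.contains record above (partial match for objects, exact match for arrays and primitives, recursive search at every nesting level) are precise enough to sketch in plain Python. This is an illustrative re-implementation of the documented rules, not fenic's code:

```python
import json

def json_contains(doc: str, needle: str) -> bool:
    """Recursive deep search over a JSON document.

    Objects match partially (every needle key/value must appear in the
    node); arrays and primitives match exactly; the search descends
    into every nested object value and array element.
    """
    needle_val = json.loads(needle)

    def matches(node):
        if isinstance(needle_val, dict) and isinstance(node, dict):
            # Partial object match: node may carry extra keys.
            return all(k in node and node[k] == v for k, v in needle_val.items())
        # Arrays and primitives: exact, type-aware equality.
        return node == needle_val

    def walk(node):
        if matches(node):
            return True
        if isinstance(node, dict):
            return any(walk(v) for v in node.values())
        if isinstance(node, list):
            return any(walk(item) for item in node)
        return False

    return walk(json.loads(doc))

print(json_contains('{"user": {"name": "Alice"}}', '{"name": "Alice"}'))  # True
print(json_contains('["read", "write", "admin"]', '["read", "write"]'))   # False
```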
module
session
fenic.api.session
Session module for managing query execution context and state.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/session/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: session Qualified Name: fenic.api.session Docstring: Session module for managing query execution context and state. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
__all__
fenic.api.session.__all__
null
null
false
false
15
25
null
null
['Session', 'SessionConfig', 'SemanticConfig', 'OpenAIModelConfig', 'AnthropicModelConfig', 'GoogleGLAModelConfig', 'ModelConfig', 'CloudConfig', 'CloudExecutorSize']
null
null
null
Type: attribute Member Name: __all__ Qualified Name: fenic.api.session.__all__ Docstring: none Value: ['Session', 'SessionConfig', 'SemanticConfig', 'OpenAIModelConfig', 'AnthropicModelConfig', 'GoogleGLAModelConfig', 'ModelConfig', 'CloudConfig', 'CloudExecutorSize'] Annotation: none is Public? : false is Private? : false Parameters: none Returns: none Parent Class: none
module
config
fenic.api.session.config
Session configuration classes for Fenic.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/session/config.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: config Qualified Name: fenic.api.session.config Docstring: Session configuration classes for Fenic. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
GoogleGLAModelConfig
fenic.api.session.config.GoogleGLAModelConfig
Configuration for Google Generative Language (GLA) models. This class defines the configuration settings for models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GEMINI_API_KEY environment variable.
null
true
false
32
41
null
null
null
null
[ "BaseModel" ]
null
Type: class Member Name: GoogleGLAModelConfig Qualified Name: fenic.api.session.config.GoogleGLAModelConfig Docstring: Configuration for Google Generative Language (GLA) models. This class defines the configuration settings for models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GEMINI_API_KEY environment variable. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
OpenAIModelConfig
fenic.api.session.config.OpenAIModelConfig
Configuration for OpenAI models. This class defines the configuration settings for OpenAI language and embedding models, including model selection and rate limiting parameters. Attributes: model_name: The name of the OpenAI model to use. rpm: Requests per minute limit; must be greater than 0. tpm: Tokens per minute limit; must be greater than 0. Examples: Configuring an OpenAI Language model with rate limits: ```python config = OpenAIModelConfig(model_name="gpt-4.1-nano", rpm=100, tpm=100) ``` Configuring an OpenAI Embedding model with rate limits: ```python config = OpenAIModelConfig(model_name="text-embedding-3-small", rpm=100, tpm=100) ```
null
true
false
43
70
null
null
null
null
[ "BaseModel" ]
null
Type: class Member Name: OpenAIModelConfig Qualified Name: fenic.api.session.config.OpenAIModelConfig Docstring: Configuration for OpenAI models. This class defines the configuration settings for OpenAI language and embedding models, including model selection and rate limiting parameters. Attributes: model_name: The name of the OpenAI model to use. rpm: Requests per minute limit; must be greater than 0. tpm: Tokens per minute limit; must be greater than 0. Examples: Configuring an OpenAI Language model with rate limits: ```python config = OpenAIModelConfig(model_name="gpt-4.1-nano", rpm=100, tpm=100) ``` Configuring an OpenAI Embedding model with rate limits: ```python config = OpenAIModelConfig(model_name="text-embedding-3-small", rpm=100, tpm=100) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
AnthropicModelConfig
fenic.api.session.config.AnthropicModelConfig
Configuration for Anthropic models. This class defines the configuration settings for Anthropic language models, including model selection and separate rate limiting parameters for input and output tokens. Attributes: model_name: The name of the Anthropic model to use. rpm: Requests per minute limit; must be greater than 0. input_tpm: Input tokens per minute limit; must be greater than 0. output_tpm: Output tokens per minute limit; must be greater than 0. Examples: Configuring an Anthropic model with separate input/output rate limits: ```python config = AnthropicModelConfig( model_name="claude-3-5-haiku-latest", rpm=100, input_tpm=100, output_tpm=100 ) ```
null
true
false
73
100
null
null
null
null
[ "BaseModel" ]
null
Type: class Member Name: AnthropicModelConfig Qualified Name: fenic.api.session.config.AnthropicModelConfig Docstring: Configuration for Anthropic models. This class defines the configuration settings for Anthropic language models, including model selection and separate rate limiting parameters for input and output tokens. Attributes: model_name: The name of the Anthropic model to use. rpm: Requests per minute limit; must be greater than 0. input_tpm: Input tokens per minute limit; must be greater than 0. output_tpm: Output tokens per minute limit; must be greater than 0. Examples: Configuring an Anthropic model with separate input/output rate limits: ```python config = AnthropicModelConfig( model_name="claude-3-5-haiku-latest", rpm=100, input_tpm=100, output_tpm=100 ) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
ModelConfig
fenic.api.session.config.ModelConfig
null
null
true
false
103
103
null
null
Union[OpenAIModelConfig, AnthropicModelConfig, GoogleGLAModelConfig]
null
null
null
Type: attribute Member Name: ModelConfig Qualified Name: fenic.api.session.config.ModelConfig Docstring: none Value: Union[OpenAIModelConfig, AnthropicModelConfig, GoogleGLAModelConfig] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
SemanticConfig
fenic.api.session.config.SemanticConfig
Configuration for semantic language and embedding models. This class defines the configuration for both language models and optional embedding models used in semantic operations. It ensures that all configured models are valid and supported by their respective providers. Attributes: language_models: Mapping of model aliases to language model configurations. default_language_model: The alias of the default language model to use for semantic operations. Not required if only one language model is configured. embedding_models: Optional mapping of model aliases to embedding model configurations. default_embedding_model: The alias of the default embedding model to use for semantic operations. Note: The embedding model is optional and only required for operations that need semantic search or embedding capabilities.
null
true
false
106
214
null
null
null
null
[ "BaseModel" ]
null
Type: class Member Name: SemanticConfig Qualified Name: fenic.api.session.config.SemanticConfig Docstring: Configuration for semantic language and embedding models. This class defines the configuration for both language models and optional embedding models used in semantic operations. It ensures that all configured models are valid and supported by their respective providers. Attributes: language_models: Mapping of model aliases to language model configurations. default_language_model: The alias of the default language model to use for semantic operations. Not required if only one language model is configured. embedding_models: Optional mapping of model aliases to embedding model configurations. default_embedding_model: The alias of the default embedding model to use for semantic operations. Note: The embedding model is optional and only required for operations that need semantic search or embedding capabilities. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
model_post_init
fenic.api.session.config.SemanticConfig.model_post_init
Post initialization hook to set defaults. This hook runs after the model is initialized and validated. It sets the default language and embedding models if they are not set and there is only one model available.
null
true
false
129
141
null
None
null
[ "self", "__context" ]
null
SemanticConfig
Type: method Member Name: model_post_init Qualified Name: fenic.api.session.config.SemanticConfig.model_post_init Docstring: Post initialization hook to set defaults. This hook runs after the model is initialized and validated. It sets the default language and embedding models if they are not set and there is only one model available. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "__context"] Returns: None Parent Class: SemanticConfig
method
validate_models
fenic.api.session.config.SemanticConfig.validate_models
Validates that the selected models are supported by the system. This validator checks that both the language model and embedding model (if provided) are valid and supported by their respective providers. Returns: The validated SemanticConfig instance. Raises: ConfigurationError: If any of the models are not supported.
null
true
false
143
214
null
SemanticConfig
null
[ "self" ]
null
SemanticConfig
Type: method Member Name: validate_models Qualified Name: fenic.api.session.config.SemanticConfig.validate_models Docstring: Validates that the selected models are supported by the system. This validator checks that both the language model and embedding model (if provided) are valid and supported by their respective providers. Returns: The validated SemanticConfig instance. Raises: ConfigurationError: If any of the models are not supported. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: SemanticConfig Parent Class: SemanticConfig
class
CloudExecutorSize
fenic.api.session.config.CloudExecutorSize
Enum defining available cloud executor sizes. This enum represents the different size options available for cloud-based execution environments. Attributes: SMALL: Small instance size. MEDIUM: Medium instance size. LARGE: Large instance size. XLARGE: Extra large instance size.
null
true
false
217
232
null
null
null
null
[ "str", "Enum" ]
null
Type: class Member Name: CloudExecutorSize Qualified Name: fenic.api.session.config.CloudExecutorSize Docstring: Enum defining available cloud executor sizes. This enum represents the different size options available for cloud-based execution environments. Attributes: SMALL: Small instance size. MEDIUM: Medium instance size. LARGE: Large instance size. XLARGE: Extra large instance size. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
CloudConfig
fenic.api.session.config.CloudConfig
Configuration for cloud-based execution. This class defines settings for running operations in a cloud environment, allowing for scalable and distributed processing of language model operations. Attributes: size: Size of the cloud executor instance. If None, the default size will be used.
null
true
false
235
245
null
null
null
null
[ "BaseModel" ]
null
Type: class Member Name: CloudConfig Qualified Name: fenic.api.session.config.CloudConfig Docstring: Configuration for cloud-based execution. This class defines settings for running operations in a cloud environment, allowing for scalable and distributed processing of language model operations. Attributes: size: Size of the cloud executor instance. If None, the default size will be used. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
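Selecting an executor size ties `CloudConfig` to the `CloudExecutorSize` enum above. A minimal sketch, assuming `size` is accepted as a keyword argument:

```python
# Sketch: choosing a cloud executor size. CloudConfig's only documented
# attribute is `size`; leaving it unset falls back to the default size.
from fenic.api.session.config import CloudConfig, CloudExecutorSize

cloud = CloudConfig(size=CloudExecutorSize.SMALL)
```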
class
SessionConfig
fenic.api.session.config.SessionConfig
Configuration for a user session. This class defines the complete configuration for a user session, including application settings, model configurations, and optional cloud settings. It serves as the central configuration object for all language model operations. Attributes: app_name: Name of the application using this session. Defaults to "default_app". db_path: Optional path to a local database file for persistent storage. semantic: Configuration for semantic models (required). cloud: Optional configuration for cloud execution. Note: The semantic configuration is required as it defines the language models that will be used for processing. The cloud configuration is optional and only needed for distributed processing.
null
true
false
248
325
null
null
null
null
[ "BaseModel" ]
null
Type: class Member Name: SessionConfig Qualified Name: fenic.api.session.config.SessionConfig Docstring: Configuration for a user session. This class defines the complete configuration for a user session, including application settings, model configurations, and optional cloud settings. It serves as the central configuration object for all language model operations. Attributes: app_name: Name of the application using this session. Defaults to "default_app". db_path: Optional path to a local database file for persistent storage. semantic: Configuration for semantic models (required). cloud: Optional configuration for cloud execution. Note: The semantic configuration is required as it defines the language models that will be used for processing. The cloud configuration is optional and only needed for distributed processing. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
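Putting the pieces together, a complete session configuration might look like the sketch below. This assumes the documented attributes (`app_name`, `db_path`, `semantic`, `cloud`) are accepted as keyword arguments, and `OpenAIModelConfig(model_name=...)` is a hypothetical constructor call not confirmed by this reference.

```python
# Sketch: a full SessionConfig. `semantic` is required; `db_path` and
# `cloud` are optional, per the attribute descriptions above.
from fenic.api.session.config import (
    CloudConfig,
    CloudExecutorSize,
    OpenAIModelConfig,
    SemanticConfig,
    SessionConfig,
)

config = SessionConfig(
    app_name="my_app",
    db_path="./my_app.db",  # optional: local persistent storage
    semantic=SemanticConfig(
        language_models={"gpt": OpenAIModelConfig(model_name="gpt-4o-mini")},
    ),
    cloud=CloudConfig(size=CloudExecutorSize.MEDIUM),  # optional
)
```

The resulting `config` is what `Session.get_or_create(config)` consumes to build a session.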
method
_to_resolved_config
fenic.api.session.config.SessionConfig._to_resolved_config
null
null
false
true
271
325
null
ResolvedSessionConfig
null
[ "self" ]
null
SessionConfig
Type: method Member Name: _to_resolved_config Qualified Name: fenic.api.session.config.SessionConfig._to_resolved_config Docstring: none Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self"] Returns: ResolvedSessionConfig Parent Class: SessionConfig
module
session
fenic.api.session.session
Main session class for interacting with the DataFrame API.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/api/session/session.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: session Qualified Name: fenic.api.session.session Docstring: Main session class for interacting with the DataFrame API. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
Session
fenic.api.session.session.Session
The entry point to programming with the DataFrame API. Similar to PySpark's SparkSession. Example: Create a session with default configuration ```python session = Session.get_or_create(SessionConfig(app_name="my_app")) ``` Example: Create a session with cloud configuration ```python config = SessionConfig( app_name="my_app", cloud=CloudConfig(size=CloudExecutorSize.SMALL) ) session = Session.get_or_create(config) ```
null
true
false
30
314
null
null
null
null
[]
null
Type: class Member Name: Session Qualified Name: fenic.api.session.session.Session Docstring: The entry point to programming with the DataFrame API. Similar to PySpark's SparkSession. Example: Create a session with default configuration ```python session = Session.get_or_create(SessionConfig(app_name="my_app")) ``` Example: Create a session with cloud configuration ```python config = SessionConfig( app_name="my_app", cloud=CloudConfig(size=CloudExecutorSize.SMALL) ) session = Session.get_or_create(config) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__new__
fenic.api.session.session.Session.__new__
Create a new Session instance.
null
true
false
53
59
null
null
null
[ "cls" ]
null
Session
Type: method Member Name: __new__ Qualified Name: fenic.api.session.session.Session.__new__ Docstring: Create a new Session instance. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["cls"] Returns: none Parent Class: Session
method
get_or_create
fenic.api.session.session.Session.get_or_create
Gets an existing Session or creates a new one with the configured settings. Returns: A Session instance configured with the provided settings
null
true
false
61
89
null
Session
null
[ "cls", "config" ]
null
Session
Type: method Member Name: get_or_create Qualified Name: fenic.api.session.session.Session.get_or_create Docstring: Gets an existing Session or creates a new one with the configured settings. Returns: A Session instance configured with the provided settings Value: none Annotation: none is Public? : true is Private? : false Parameters: ["cls", "config"] Returns: Session Parent Class: Session
method
_create_local_session
fenic.api.session.session.Session._create_local_session
Get or create a local session.
null
false
true
91
101
null
Session
null
[ "cls", "session_state" ]
null
Session
Type: method Member Name: _create_local_session Qualified Name: fenic.api.session.session.Session._create_local_session Docstring: Get or create a local session. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["cls", "session_state"] Returns: Session Parent Class: Session
method
_create_cloud_session
fenic.api.session.session.Session._create_cloud_session
Create a cloud session.
null
false
true
103
113
null
Session
null
[ "cls", "session_state" ]
null
Session
Type: method Member Name: _create_cloud_session Qualified Name: fenic.api.session.session.Session._create_cloud_session Docstring: Create a cloud session. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["cls", "session_state"] Returns: Session Parent Class: Session
method
create_dataframe
fenic.api.session.session.Session.create_dataframe
Create a DataFrame from a variety of Python-native data formats. Args: data: Input data. Must be one of: - Polars DataFrame - Pandas DataFrame - dict of column_name -> list of values - list of dicts (each dict representing a row) - pyarrow Table Returns: A new DataFrame instance Raises: ValueError: If the input format is unsupported or inconsistent with provided column names. Example: Create from Polars DataFrame ```python import polars as pl df = pl.DataFrame({"col1": [1, 2], "col2": ["a", "b"]}) session.create_dataframe(df) ``` Example: Create from Pandas DataFrame ```python import pandas as pd df = pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]}) session.create_dataframe(df) ``` Example: Create from dictionary ```python session.create_dataframe({"col1": [1, 2], "col2": ["a", "b"]}) ``` Example: Create from list of dictionaries ```python session.create_dataframe([ {"col1": 1, "col2": "a"}, {"col1": 2, "col2": "b"} ]) ``` Example: Create from pyarrow Table ```python import pyarrow as pa table = pa.Table.from_pydict({"col1": [1, 2], "col2": ["a", "b"]}) session.create_dataframe(table) ```
null
true
false
132
219
null
DataFrame
null
[ "self", "data" ]
null
Session
Type: method Member Name: create_dataframe Qualified Name: fenic.api.session.session.Session.create_dataframe Docstring: Create a DataFrame from a variety of Python-native data formats. Args: data: Input data. Must be one of: - Polars DataFrame - Pandas DataFrame - dict of column_name -> list of values - list of dicts (each dict representing a row) - pyarrow Table Returns: A new DataFrame instance Raises: ValueError: If the input format is unsupported or inconsistent with provided column names. Example: Create from Polars DataFrame ```python import polars as pl df = pl.DataFrame({"col1": [1, 2], "col2": ["a", "b"]}) session.create_dataframe(df) ``` Example: Create from Pandas DataFrame ```python import pandas as pd df = pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]}) session.create_dataframe(df) ``` Example: Create from dictionary ```python session.create_dataframe({"col1": [1, 2], "col2": ["a", "b"]}) ``` Example: Create from list of dictionaries ```python session.create_dataframe([ {"col1": 1, "col2": "a"}, {"col1": 2, "col2": "b"} ]) ``` Example: Create from pyarrow Table ```python import pyarrow as pa table = pa.Table.from_pydict({"col1": [1, 2], "col2": ["a", "b"]}) session.create_dataframe(table) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "data"] Returns: DataFrame Parent Class: Session
method
table
fenic.api.session.session.Session.table
Returns the specified table as a DataFrame. Args: table_name: Name of the table Returns: Table as a DataFrame Raises: ValueError: If the table does not exist Example: Load an existing table ```python df = session.table("my_table") ```
null
true
false
221
242
null
DataFrame
null
[ "self", "table_name" ]
null
Session
Type: method Member Name: table Qualified Name: fenic.api.session.session.Session.table Docstring: Returns the specified table as a DataFrame. Args: table_name: Name of the table Returns: Table as a DataFrame Raises: ValueError: If the table does not exist Example: Load an existing table ```python df = session.table("my_table") ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "table_name"] Returns: DataFrame Parent Class: Session
method
sql
fenic.api.session.session.Session.sql
Execute a read-only SQL query against one or more DataFrames using named placeholders. This allows you to execute ad hoc SQL queries using familiar syntax when it's more convenient than the DataFrame API. Placeholders in the SQL string (e.g. `{df}`) should correspond to keyword arguments (e.g. `df=my_dataframe`). For supported SQL syntax and functions, refer to the DuckDB SQL documentation: https://duckdb.org/docs/sql/introduction. Args: query: A SQL query string with placeholders like `{df}` **tables: Keyword arguments mapping placeholder names to DataFrames Returns: A lazy DataFrame representing the result of the SQL query Raises: ValidationError: If a placeholder is used in the query but not passed as a keyword argument Example: Simple join between two DataFrames ```python df1 = session.create_dataframe({"id": [1, 2]}) df2 = session.create_dataframe({"id": [2, 3]}) result = session.sql( "SELECT * FROM {df1} JOIN {df2} USING (id)", df1=df1, df2=df2 ) ``` Example: Complex query with multiple DataFrames ```python users = session.create_dataframe({"user_id": [1, 2], "name": ["Alice", "Bob"]}) orders = session.create_dataframe({"order_id": [1, 2], "user_id": [1, 2]}) products = session.create_dataframe({"product_id": [1, 2], "name": ["Widget", "Gadget"]}) result = session.sql(""" SELECT u.name, p.name as product FROM {users} u JOIN {orders} o ON u.user_id = o.user_id JOIN {products} p ON o.product_id = p.product_id """, users=users, orders=orders, products=products) ```
null
true
false
244
310
null
DataFrame
null
[ "self", "query", "tables" ]
null
Session
Type: method Member Name: sql Qualified Name: fenic.api.session.session.Session.sql Docstring: Execute a read-only SQL query against one or more DataFrames using named placeholders. This allows you to execute ad hoc SQL queries using familiar syntax when it's more convenient than the DataFrame API. Placeholders in the SQL string (e.g. `{df}`) should correspond to keyword arguments (e.g. `df=my_dataframe`). For supported SQL syntax and functions, refer to the DuckDB SQL documentation: https://duckdb.org/docs/sql/introduction. Args: query: A SQL query string with placeholders like `{df}` **tables: Keyword arguments mapping placeholder names to DataFrames Returns: A lazy DataFrame representing the result of the SQL query Raises: ValidationError: If a placeholder is used in the query but not passed as a keyword argument Example: Simple join between two DataFrames ```python df1 = session.create_dataframe({"id": [1, 2]}) df2 = session.create_dataframe({"id": [2, 3]}) result = session.sql( "SELECT * FROM {df1} JOIN {df2} USING (id)", df1=df1, df2=df2 ) ``` Example: Complex query with multiple DataFrames ```python users = session.create_dataframe({"user_id": [1, 2], "name": ["Alice", "Bob"]}) orders = session.create_dataframe({"order_id": [1, 2], "user_id": [1, 2]}) products = session.create_dataframe({"product_id": [1, 2], "name": ["Widget", "Gadget"]}) result = session.sql(""" SELECT u.name, p.name as product FROM {users} u JOIN {orders} o ON u.user_id = o.user_id JOIN {products} p ON o.product_id = p.product_id """, users=users, orders=orders, products=products) ``` Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "query", "tables"] Returns: DataFrame Parent Class: Session
method
stop
fenic.api.session.session.Session.stop
Stops the session and closes all connections.
null
true
false
312
314
null
null
null
[ "self" ]
null
Session
Type: method Member Name: stop Qualified Name: fenic.api.session.session.Session.stop Docstring: Stops the session and closes all connections. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: Session