Dataset schema (columns for each API element record):
- type: string, 5 classes
- name: string, length 1-55
- qualified_name: string, length 5-130
- docstring: string, length 15-3.11k
- filepath: string, 90 classes
- is_public: bool, 2 classes
- is_private: bool, 2 classes
- line_start: int64, 0-1.44k
- line_end: int64, 0-1.51k
- annotation: string, 2 classes
- returns: string, 82 classes
- value: string, 66 classes
- parameters: list, length 0-10
- bases: list, length 0-2
- parent_class: string, 193 classes
- api_element_summary: string, length 199-3.43k
Type: method
Name: check_and_consume_rate_limit
Qualified Name: fenic._inference.model_client.RateLimitStrategy.check_and_consume_rate_limit
Parent Class: RateLimitStrategy
Parameters: ["self", "token_estimate"]
Returns: bool
Public: true
Lines: 148-162
Docstring: Checks if there is enough capacity in both token and request rate limit buckets. If there is sufficient capacity, this method will consume the required tokens and request quota. This is an abstract method that must be implemented by subclasses. Args: token_estimate: A TokenEstimate object containing the estimated input, output, and total tokens for the request. Returns: bool: True if there was enough capacity and it was consumed, False otherwise.
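The check-and-consume contract described above can be sketched with a minimal token bucket. This is an illustrative sketch, not fenic's implementation; the `TokenBucket` class and its method names are hypothetical.

```python
import time

class TokenBucket:
    """Hypothetical token bucket: capacity refills continuously at a per-second rate."""

    def __init__(self, rate_per_minute: int):
        self.capacity = rate_per_minute
        self.tokens = float(rate_per_minute)
        self.rate = rate_per_minute / 60.0   # tokens regained per second
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def check_and_consume(self, amount: float) -> bool:
        """Consume `amount` only if it is fully available; never consume partially."""
        self._refill()
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False
```

The key property matching the docstring: the method either consumes the full amount and returns True, or consumes nothing and returns False.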
Type: method
Name: context_tokens_per_minute
Qualified Name: fenic._inference.model_client.RateLimitStrategy.context_tokens_per_minute
Parent Class: RateLimitStrategy
Parameters: ["self"]
Returns: int
Public: true
Lines: 164-174
Docstring: Returns the total token rate limit per minute for this strategy. This is an abstract method that must be implemented by subclasses to specify their token rate limiting behavior. Returns: int: The total number of tokens allowed per minute.
Type: class
Name: UnifiedTokenRateLimitStrategy
Qualified Name: fenic._inference.model_client.UnifiedTokenRateLimitStrategy
Bases: ["RateLimitStrategy"]
Public: true
Lines: 177-239
Docstring: Rate limiting strategy that uses a single token bucket for both input and output tokens. This strategy enforces both a request rate limit (RPM) and a unified token rate limit (TPM) where input and output tokens share the same quota. Attributes: tpm: Total tokens per minute limit. Must be greater than 0. unified_tokens_bucket: Token bucket for tracking and limiting total token usage.

Type: method
Name: __init__
Qualified Name: fenic._inference.model_client.UnifiedTokenRateLimitStrategy.__init__
Parent Class: UnifiedTokenRateLimitStrategy
Parameters: ["self", "rpm", "tpm"]
Public: true
Lines: 187-190
Docstring: none

Type: method
Name: backoff
Qualified Name: fenic._inference.model_client.UnifiedTokenRateLimitStrategy.backoff
Parent Class: UnifiedTokenRateLimitStrategy
Parameters: ["self", "curr_time"]
Returns: int
Public: true
Lines: 192-196
Docstring: Applies backoff to the request/token rate limit bucket.

Type: method
Name: check_and_consume_rate_limit
Qualified Name: fenic._inference.model_client.UnifiedTokenRateLimitStrategy.check_and_consume_rate_limit
Parent Class: UnifiedTokenRateLimitStrategy
Parameters: ["self", "token_estimate"]
Returns: bool
Public: true
Lines: 198-223
Docstring: Checks and consumes rate limits for both requests and total tokens. This implementation uses a single token bucket for both input and output tokens, enforcing the total token limit across all token types. Args: token_estimate: A TokenEstimate object containing the estimated input, output, and total tokens for the request. Returns: bool: True if there was enough capacity and it was consumed, False otherwise.

Type: method
Name: context_tokens_per_minute
Qualified Name: fenic._inference.model_client.UnifiedTokenRateLimitStrategy.context_tokens_per_minute
Parent Class: UnifiedTokenRateLimitStrategy
Parameters: ["self"]
Returns: int
Public: true
Lines: 225-231
Docstring: Returns the total token rate limit per minute. Returns: int: The total number of tokens allowed per minute (tpm).

Type: method
Name: __str__
Qualified Name: fenic._inference.model_client.UnifiedTokenRateLimitStrategy.__str__
Parent Class: UnifiedTokenRateLimitStrategy
Parameters: ["self"]
Public: true
Lines: 233-239
Docstring: Returns a string representation of the rate limit strategy. Returns: str: A string showing the RPM and TPM limits.
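A unified strategy pairs a request bucket with a single shared token bucket, checking both before consuming either. The sketch below illustrates that shape under assumed names (`Bucket`, `UnifiedStrategy`, `TokenEstimate`); it is not fenic's implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class TokenEstimate:
    """Hypothetical estimate of a request's token usage."""
    input_tokens: int
    output_tokens: int

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens

class Bucket:
    """Continuously refilling quota, capped at its per-minute capacity."""
    def __init__(self, per_minute: int):
        self.capacity = per_minute
        self.tokens = float(per_minute)
        self.rate = per_minute / 60.0
        self.last = time.monotonic()

    def has(self, amount: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        return self.tokens >= amount

    def take(self, amount: float) -> None:
        self.tokens -= amount

class UnifiedStrategy:
    """RPM bucket plus one TPM bucket shared by input and output tokens."""
    def __init__(self, rpm: int, tpm: int):
        self.request_bucket = Bucket(rpm)
        self.unified_tokens_bucket = Bucket(tpm)

    def check_and_consume_rate_limit(self, estimate: TokenEstimate) -> bool:
        # Check both buckets before consuming either, so a shortfall in one
        # never leaves the other partially drained.
        if (self.request_bucket.has(1)
                and self.unified_tokens_bucket.has(estimate.total_tokens)):
            self.request_bucket.take(1)
            self.unified_tokens_bucket.take(estimate.total_tokens)
            return True
        return False

    def context_tokens_per_minute(self) -> int:
        return self.unified_tokens_bucket.capacity
```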
Type: class
Name: SeparatedTokenRateLimitStrategy
Qualified Name: fenic._inference.model_client.SeparatedTokenRateLimitStrategy
Bases: ["RateLimitStrategy"]
Public: true
Lines: 242-312
Docstring: Rate limiting strategy that uses separate token buckets for input and output tokens. This strategy enforces both a request rate limit (RPM) and separate token rate limits for input (input_tpm) and output (output_tpm) tokens. Attributes: input_tpm: Input tokens per minute limit. Must be greater than 0. output_tpm: Output tokens per minute limit. Must be greater than 0. input_tokens_bucket: Token bucket for tracking and limiting input token usage. output_tokens_bucket: Token bucket for tracking and limiting output token usage.

Type: method
Name: __init__
Qualified Name: fenic._inference.model_client.SeparatedTokenRateLimitStrategy.__init__
Parent Class: SeparatedTokenRateLimitStrategy
Parameters: ["self", "rpm", "input_tpm", "output_tpm"]
Public: true
Lines: 254-259
Docstring: none

Type: method
Name: backoff
Qualified Name: fenic._inference.model_client.SeparatedTokenRateLimitStrategy.backoff
Parent Class: SeparatedTokenRateLimitStrategy
Parameters: ["self", "curr_time"]
Returns: int
Public: true
Lines: 261-266
Docstring: Applies backoff to the request/token rate limit buckets.

Type: method
Name: check_and_consume_rate_limit
Qualified Name: fenic._inference.model_client.SeparatedTokenRateLimitStrategy.check_and_consume_rate_limit
Parent Class: SeparatedTokenRateLimitStrategy
Parameters: ["self", "token_estimate"]
Returns: bool
Public: true
Lines: 268-296
Docstring: Checks and consumes rate limits for requests, input tokens, and output tokens. This implementation uses separate token buckets for input and output tokens, enforcing separate limits for each token type. Args: token_estimate: A TokenEstimate object containing the estimated input, output, and total tokens for the request. Returns: bool: True if there was enough capacity and it was consumed, False otherwise.

Type: method
Name: context_tokens_per_minute
Qualified Name: fenic._inference.model_client.SeparatedTokenRateLimitStrategy.context_tokens_per_minute
Parent Class: SeparatedTokenRateLimitStrategy
Parameters: ["self"]
Returns: int
Public: true
Lines: 298-304
Docstring: Returns the total token rate limit per minute. Returns: int: The sum of input and output tokens allowed per minute.

Type: method
Name: __str__
Qualified Name: fenic._inference.model_client.SeparatedTokenRateLimitStrategy.__str__
Parent Class: SeparatedTokenRateLimitStrategy
Parameters: ["self"]
Public: true
Lines: 306-312
Docstring: Returns a string representation of the rate limit strategy. Returns: str: A string showing the RPM, input TPM, and output TPM limits.
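The separated variant differs only in holding distinct input and output quotas, with all three checks passing before anything is consumed. A compressed sketch, with hypothetical names and continuous refill omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class TokenEstimate:
    """Hypothetical estimate of a request's token usage."""
    input_tokens: int
    output_tokens: int

class Quota:
    """Fixed quota; continuous refill is omitted to keep the sketch short."""
    def __init__(self, limit: int):
        self.limit = limit
        self.remaining = float(limit)

class SeparatedStrategy:
    """Separate quotas for input and output tokens, plus a request quota."""
    def __init__(self, rpm: int, input_tpm: int, output_tpm: int):
        self.requests = Quota(rpm)
        self.input_tokens_bucket = Quota(input_tpm)
        self.output_tokens_bucket = Quota(output_tpm)

    def check_and_consume_rate_limit(self, est: TokenEstimate) -> bool:
        # All three checks must pass before anything is consumed,
        # so a failure leaves every quota untouched.
        if (self.requests.remaining >= 1
                and self.input_tokens_bucket.remaining >= est.input_tokens
                and self.output_tokens_bucket.remaining >= est.output_tokens):
            self.requests.remaining -= 1
            self.input_tokens_bucket.remaining -= est.input_tokens
            self.output_tokens_bucket.remaining -= est.output_tokens
            return True
        return False

    def context_tokens_per_minute(self) -> int:
        # Per the docstring above: the sum of the input and output limits.
        return self.input_tokens_bucket.limit + self.output_tokens_bucket.limit
```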
Type: class
Name: ModelClient
Qualified Name: fenic._inference.model_client.ModelClient
Bases: ["Generic[RequestT, ResponseT]", "ABC"]
Public: true
Lines: 315-863
Docstring: Base client for interacting with language models. This abstract base class provides a robust framework for interacting with language models, handling rate limiting, request queuing, retries, and deduplication. It manages concurrent requests efficiently using an asynchronous event loop and implements token-based rate limiting. Type Parameters: RequestT: The type of request objects this client handles. ResponseT: The type of response objects this client returns. Attributes: model (str): The name or identifier of the model. model_provider (ModelProvider): The provider of the model (e.g., OPENAI, ANTHROPIC). rate_limit_strategy (RateLimitStrategy): Strategy for rate limiting requests. token_counter (TiktokenTokenCounter): Counter for estimating token usage.
Type: method
Name: __init__
Qualified Name: fenic._inference.model_client.ModelClient.__init__
Parent Class: ModelClient
Parameters: ["self", "model", "model_provider", "rate_limit_strategy", "token_counter", "queue_size", "initial_backoff_seconds", "backoff_factor", "max_backoffs"]
Public: true
Lines: 333-386
Docstring: Initialize the ModelClient with configuration for model interaction. Args: model: The name or identifier of the model. model_provider: The model provider (OPENAI, ANTHROPIC). rate_limit_strategy: Strategy for rate limiting requests. token_counter: Implementation for predicting input token counts. queue_size: Maximum size of the request queue (default: 100). initial_backoff_seconds: Initial delay for exponential backoff (default: 1). backoff_factor: Factor by which backoff time increases (default: 2). max_backoffs: Maximum number of retry attempts (default: 10).
Type: method
Name: make_single_request
Qualified Name: fenic._inference.model_client.ModelClient.make_single_request
Parent Class: ModelClient
Parameters: ["self", "request"]
Returns: Union[None, ResponseT, TransientException, FatalException]
Public: true
Lines: 388-405
Docstring: Make a single API call to the language model. This method must be implemented by subclasses to handle the actual API communication with the language model provider. Args: request: The request data to send to the model. Returns: Union[None, ResponseT, TransientException, FatalException]: The API response, None if the request was empty, or an exception wrapper indicating either a transient error (can be retried) or a fatal error (should not be retried).

Type: method
Name: estimate_tokens_for_request
Qualified Name: fenic._inference.model_client.ModelClient.estimate_tokens_for_request
Parent Class: ModelClient
Parameters: ["self", "request"]
Returns: TokenEstimate
Public: true
Lines: 407-420
Docstring: Estimate the token usage for a given request. This method must be implemented by subclasses to accurately predict token usage for both input and output tokens. Args: request: The request to estimate tokens for. Returns: TokenEstimate: Object containing estimated input and output tokens.

Type: method
Name: count_tokens
Qualified Name: fenic._inference.model_client.ModelClient.count_tokens
Parent Class: ModelClient
Parameters: ["self", "messages"]
Returns: int
Public: true
Lines: 422-431
Docstring: Count the number of tokens in a tokenizable object. Args: messages: The tokenizable object to count tokens for. Returns: int: The number of tokens in the object.

Type: method
Name: get_request_key
Qualified Name: fenic._inference.model_client.ModelClient.get_request_key
Parent Class: ModelClient
Parameters: ["self", "request"]
Returns: Any
Public: true
Lines: 433-446
Docstring: Generate a unique key for request deduplication. This method must be implemented by subclasses to provide a hashable key that uniquely identifies a request for deduplication purposes. Args: request: The request to generate a key for. Returns: Any: A hashable value that uniquely identifies this request.
Type: method
Name: get_metrics
Qualified Name: fenic._inference.model_client.ModelClient.get_metrics
Parent Class: ModelClient
Parameters: ["self"]
Returns: LMMetrics
Public: true
Lines: 448-455
Docstring: Get the current metrics for this model client. Returns: LMMetrics: The current metrics for this client.

Type: method
Name: reset_metrics
Qualified Name: fenic._inference.model_client.ModelClient.reset_metrics
Parent Class: ModelClient
Parameters: ["self"]
Public: true
Lines: 457-460
Docstring: Reset all metrics for this model client to their initial values.

Type: method
Name: shutdown
Qualified Name: fenic._inference.model_client.ModelClient.shutdown
Parent Class: ModelClient
Parameters: ["self"]
Public: true
Lines: 465-502
Docstring: Shut down the model client and clean up resources. This method: 1. Cancels all pending and in-flight requests. 2. Unregisters the client from the ModelClientManager. 3. Cleans up all associated resources. 4. Ensures all threads are properly notified of the shutdown.
Type: method
Name: make_batch_requests
Qualified Name: fenic._inference.model_client.ModelClient.make_batch_requests
Parent Class: ModelClient
Parameters: ["self", "requests", "operation_name"]
Returns: List[ResponseT]
Public: true
Lines: 504-600
Docstring: Submit and process a batch of requests asynchronously. This method handles the submission and processing of multiple requests in parallel, with automatic deduplication and rate limiting. It provides progress tracking and handles empty requests appropriately. Args: requests: List of requests to process. None entries are handled as empty responses. operation_name: Name for logging purposes to identify the operation. Returns: List[ResponseT]: List of responses in the same order as the input requests.
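Batch deduplication of this kind is typically built on shared futures: duplicate requests resolve to the same pending future, so each unique request executes once while results come back in input order. A synchronous sketch under assumed names (`make_batch`, `get_request_key`, `execute`); the real client runs the unique requests concurrently on an event loop.

```python
from concurrent.futures import Future

def make_batch(requests, get_request_key, execute):
    """Resolve a batch so duplicates share one future; None means empty response."""
    unique: dict = {}   # request key -> Future for the first occurrence
    futures = []
    for req in requests:
        if req is None:                       # empty request -> empty response
            f = Future()
            f.set_result(None)
            futures.append(f)
            continue
        key = get_request_key(req)
        if key in unique:                     # duplicate: reuse the pending future
            futures.append(unique[key])
            continue
        f = Future()
        unique[key] = f
        futures.append(f)
    # Execute each unique request exactly once (sequential here for illustration).
    for req in requests:
        if req is None:
            continue
        future = unique[get_request_key(req)]
        if not future.done():
            future.set_result(execute(req))
    # Responses come back in the same order as the input requests.
    return [f.result() for f in futures]
```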
Type: method
Name: _get_or_create_request_future
Qualified Name: fenic._inference.model_client.ModelClient._get_or_create_request_future
Parent Class: ModelClient
Parameters: ["self", "unique_futures", "request"]
Returns: tuple[Future, TokenEstimate | None]
Public: false
Lines: 605-650
Docstring: Retrieves an existing future for a duplicate request or creates a new one. Args: unique_futures: A dictionary mapping request keys to their futures. request: The current request being processed. Returns: A tuple of the future for the request and the estimated number of tokens (0 for duplicates).
Type: method
Name: _maybe_raise_thread_exception
Qualified Name: fenic._inference.model_client.ModelClient._maybe_raise_thread_exception
Parent Class: ModelClient
Parameters: ["self"]
Public: false
Lines: 652-657
Docstring: Surface exceptions from the event loop to the calling thread immediately.

Type: method
Name: _calculate_backoff_time
Qualified Name: fenic._inference.model_client.ModelClient._calculate_backoff_time
Parent Class: ModelClient
Parameters: ["self", "backoff_iteration"]
Returns: float
Public: false
Lines: 659-671
Docstring: Calculates the backoff duration using exponential backoff with a maximum limit. Args: backoff_iteration: The current backoff iteration. Returns: The backoff time in seconds.
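Capped exponential backoff of the kind described above is one line of arithmetic. A sketch assuming the defaults from the `__init__` docstring (initial 1s, factor 2); the 60-second cap and the free-function signature are assumptions for illustration.

```python
def calculate_backoff_time(initial_backoff_seconds: float,
                           backoff_factor: float,
                           backoff_iteration: int,
                           max_backoff_seconds: float = 60.0) -> float:
    """Exponential backoff: initial * factor**iteration, capped at a maximum."""
    return min(initial_backoff_seconds * (backoff_factor ** backoff_iteration),
               max_backoff_seconds)
```

With initial 1s and factor 2, iterations 0, 1, 2, 3 wait 1, 2, 4, 8 seconds until the cap is reached. Production implementations often add random jitter so many clients do not retry in lockstep.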
Type: method
Name: _check_and_consume_rate_limit
Qualified Name: fenic._inference.model_client.ModelClient._check_and_consume_rate_limit
Parent Class: ModelClient
Parameters: ["self", "token_amount"]
Returns: bool
Public: false
Lines: 673-683
Docstring: Checks if there is enough capacity in both the token and request rate limit buckets, and consumes the capacity if so. Args: token_amount: A TokenEstimate object containing the estimated input, output, and total tokens. Returns: True if there was enough capacity and it was consumed, False otherwise.

Type: method
Name: _enqueue_request
Qualified Name: fenic._inference.model_client.ModelClient._enqueue_request
Parent Class: ModelClient
Parameters: ["self", "queue_item"]
Public: false
Lines: 685-691
Docstring: Enqueue a request to be processed. Args: queue_item: The queue item to enqueue.

Type: method
Name: _process_queue
Qualified Name: fenic._inference.model_client.ModelClient._process_queue
Parent Class: ModelClient
Parameters: ["self"]
Public: false
Lines: 696-725
Docstring: Continuously processes requests from the request and retry queues. This method runs on the shared asyncio event loop.
method
_process_single_request
fenic._inference.model_client.ModelClient._process_single_request
Process a single request from the queues. Args: queue_item: The queue item to process.
null
false
true
727
751
null
null
null
[ "self", "queue_item" ]
null
ModelClient
Type: method Member Name: _process_single_request Qualified Name: fenic._inference.model_client.ModelClient._process_single_request Docstring: Process a single request from the queues. Args: queue_item: The queue item to process. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "queue_item"] Returns: none Parent Class: ModelClient
method
_handle_response
fenic._inference.model_client.ModelClient._handle_response
Handle the response from a request, including retrying if necessary. Args: queue_item: The queue item associated with the request. maybe_response: The response or exception from the request.
null
false
true
753
783
null
null
null
[ "self", "queue_item", "maybe_response" ]
null
ModelClient
Type: method Member Name: _handle_response Qualified Name: fenic._inference.model_client.ModelClient._handle_response Docstring: Handle the response from a request, including retrying if necessary. Args: queue_item: The queue item associated with the request. maybe_response: The response or exception from the request. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "queue_item", "maybe_response"] Returns: none Parent Class: ModelClient
method
_maybe_backoff
fenic._inference.model_client.ModelClient._maybe_backoff
Manages the backoff period after encountering a transient exception.
null
false
true
785
801
null
null
null
[ "self" ]
null
ModelClient
Type: method Member Name: _maybe_backoff Qualified Name: fenic._inference.model_client.ModelClient._maybe_backoff Docstring: Manages the backoff period after encountering a transient exception. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self"] Returns: none Parent Class: ModelClient
method
_get_queued_requests
fenic._inference.model_client.ModelClient._get_queued_requests
Asynchronously retrieves items from the retry queue or the request queue, prioritizing the retry queue. Returns None if a shutdown is signaled. Returns: A list of queue items, or None if a shutdown is signaled.
null
false
true
804
833
null
List[QueueItem[RequestT]]
null
[ "self" ]
null
ModelClient
Type: method Member Name: _get_queued_requests Qualified Name: fenic._inference.model_client.ModelClient._get_queued_requests Docstring: Asynchronously retrieves items from the retry queue or the request queue, prioritizing the retry queue. Returns None if a shutdown is signaled. Returns: A list of queue items, or None if a shutdown is signaled. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self"] Returns: List[QueueItem[RequestT]] Parent Class: ModelClient
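`_get_queued_requests` is documented as draining the retry queue before the request queue. A minimal synchronous sketch of that prioritization (the real method is async and also handles shutdown signaling; `get_next_batch` and `max_items` are hypothetical names):

```python
from collections import deque

def get_next_batch(retry_q: deque, request_q: deque, max_items: int = 4) -> list:
    """Collect up to max_items items, always preferring retried requests
    over fresh ones so failures are not starved behind new work."""
    batch = []
    while len(batch) < max_items and (retry_q or request_q):
        source = retry_q if retry_q else request_q
        batch.append(source.popleft())
    return batch

retry_q = deque(["retry-1"])
request_q = deque(["a", "b", "c", "d"])
batch = get_next_batch(retry_q, request_q)
print(batch)  # ['retry-1', 'a', 'b', 'c']
```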
method
_track_inflight_task
fenic._inference.model_client.ModelClient._track_inflight_task
Adds a task to the set of inflight requests and removes it upon completion. Args: task: The task to track.
null
false
true
835
842
null
null
null
[ "self", "task" ]
null
ModelClient
Type: method Member Name: _track_inflight_task Qualified Name: fenic._inference.model_client.ModelClient._track_inflight_task Docstring: Adds a task to the set of inflight requests and removes it upon completion. Args: task: The task to track. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "task"] Returns: none Parent Class: ModelClient
method
_register_thread_exception
fenic._inference.model_client.ModelClient._register_thread_exception
Registers an exception that occurred on the event loop to be raised in the originating thread. Args: queue_item: The queue item associated with the exception. exception: The exception that occurred.
null
false
true
844
857
null
null
null
[ "self", "queue_item", "exception" ]
null
ModelClient
Type: method Member Name: _register_thread_exception Qualified Name: fenic._inference.model_client.ModelClient._register_thread_exception Docstring: Registers an exception that occurred on the event loop to be raised in the originating thread. Args: queue_item: The queue item associated with the exception. exception: The exception that occurred. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "queue_item", "exception"] Returns: none Parent Class: ModelClient
method
_cancel_in_flight_requests
fenic._inference.model_client.ModelClient._cancel_in_flight_requests
Cancels all inflight tasks and gathers their results.
null
false
true
859
863
null
null
null
[ "self" ]
null
ModelClient
Type: method Member Name: _cancel_in_flight_requests Qualified Name: fenic._inference.model_client.ModelClient._cancel_in_flight_requests Docstring: Cancels all inflight tasks and gathers their results. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self"] Returns: none Parent Class: ModelClient
class
ModelClientManager
fenic._inference.model_client.ModelClientManager
Manages a shared asyncio event loop for multiple ModelClient instances. Ensures that all clients run on the same loop for efficient resource management.
null
true
false
866
936
null
null
null
null
[]
null
Type: class Member Name: ModelClientManager Qualified Name: fenic._inference.model_client.ModelClientManager Docstring: Manages a shared asyncio event loop for multiple ModelClient instances. Ensures that all clients run on the same loop for efficient resource management. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__new__
fenic._inference.model_client.ModelClientManager.__new__
null
null
true
false
874
879
null
null
null
[ "cls" ]
null
ModelClientManager
Type: method Member Name: __new__ Qualified Name: fenic._inference.model_client.ModelClientManager.__new__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["cls"] Returns: none Parent Class: ModelClientManager
method
_initialize
fenic._inference.model_client.ModelClientManager._initialize
Initializes the ModelClientManager, creating the event loop if it doesn't exist.
null
false
true
881
886
null
null
null
[ "self" ]
null
ModelClientManager
Type: method Member Name: _initialize Qualified Name: fenic._inference.model_client.ModelClientManager._initialize Docstring: Initializes the ModelClientManager, creating the event loop if it doesn't exist. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self"] Returns: none Parent Class: ModelClientManager
method
register_client
fenic._inference.model_client.ModelClientManager.register_client
Registers a ModelClient with the manager and starts its processing task on the shared event loop. Args: client: The client to register.
null
true
false
888
897
null
null
null
[ "self", "client" ]
null
ModelClientManager
Type: method Member Name: register_client Qualified Name: fenic._inference.model_client.ModelClientManager.register_client Docstring: Registers a ModelClient with the manager and starts its processing task on the shared event loop. Args: client: The client to register. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "client"] Returns: none Parent Class: ModelClientManager
method
unregister_client
fenic._inference.model_client.ModelClientManager.unregister_client
Unregisters a ModelClient from the manager and shuts down the event loop if no clients remain. Args: client: The client to unregister.
null
true
false
899
927
null
null
null
[ "self", "client" ]
null
ModelClientManager
Type: method Member Name: unregister_client Qualified Name: fenic._inference.model_client.ModelClientManager.unregister_client Docstring: Unregisters a ModelClient from the manager and shuts down the event loop if no clients remain. Args: client: The client to unregister. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "client"] Returns: none Parent Class: ModelClientManager
method
_maybe_create_event_loop
fenic._inference.model_client.ModelClientManager._maybe_create_event_loop
Creates and starts a dedicated event loop in a background thread if one doesn't exist.
null
false
true
929
936
null
null
null
[ "self" ]
null
ModelClientManager
Type: method Member Name: _maybe_create_event_loop Qualified Name: fenic._inference.model_client.ModelClientManager._maybe_create_event_loop Docstring: Creates and starts a dedicated event loop in a background thread if one doesn't exist. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self"] Returns: none Parent Class: ModelClientManager
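The `ModelClientManager` records above describe a singleton that lazily starts one event loop in a background thread and runs every client's processing task on it. The core loop-in-a-thread pattern, sketched with stdlib primitives (the helper name `start_background_loop` and the demo coroutine are illustrative, not fenic API):

```python
import asyncio
import threading

def start_background_loop() -> asyncio.AbstractEventLoop:
    """Create an event loop and run it forever on a daemon thread, so
    synchronous callers can submit coroutines to the shared loop."""
    loop = asyncio.new_event_loop()
    thread = threading.Thread(target=loop.run_forever, daemon=True)
    thread.start()
    return loop

loop = start_background_loop()

async def double(x: int) -> int:
    return x * 2

# Submit work from the calling thread and block on the result.
future = asyncio.run_coroutine_threadsafe(double(21), loop)
result = future.result(timeout=5)
print(result)  # 42
loop.call_soon_threadsafe(loop.stop)
```

Sharing one loop across clients, as the manager's docstring says, avoids one thread-plus-loop per model and lets rate limiting and batching interleave on a single scheduler.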
function
_cancel_event_loop_tasks
fenic._inference.model_client._cancel_event_loop_tasks
Cancels all pending tasks in the given asyncio event loop, except the current task. Args: loop: The event loop to cancel tasks for.
null
false
true
939
949
null
null
null
[ "loop" ]
null
null
Type: function Member Name: _cancel_event_loop_tasks Qualified Name: fenic._inference.model_client._cancel_event_loop_tasks Docstring: Cancels all pending tasks in the given asyncio event loop, except the current task. Args: loop: The event loop to cancel tasks for. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["loop"] Returns: none Parent Class: none
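`_cancel_event_loop_tasks` cancels every pending task on a loop except the current one. A self-contained sketch of that idiom (the coroutine names here are illustrative; fenic's function takes the loop as an argument):

```python
import asyncio

async def cancel_pending_tasks() -> int:
    """Cancel every task on the running loop except the task executing
    this coroutine, then gather so cancellations complete cleanly."""
    current = asyncio.current_task()
    tasks = [t for t in asyncio.all_tasks() if t is not current]
    for t in tasks:
        t.cancel()
    # return_exceptions=True swallows the CancelledErrors raised by gather.
    await asyncio.gather(*tasks, return_exceptions=True)
    return len(tasks)

async def main() -> int:
    asyncio.ensure_future(asyncio.sleep(60))
    asyncio.ensure_future(asyncio.sleep(60))
    return await cancel_pending_tasks()

cancelled = asyncio.run(main())
print(cancelled)  # 2
```

Excluding the current task is essential: cancelling it would abort the shutdown routine itself mid-flight.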
module
types
fenic._inference.types
null
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/_inference/types.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: types Qualified Name: fenic._inference.types Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
FewShotExample
fenic._inference.types.FewShotExample
null
null
true
false
5
8
null
null
null
null
[]
null
Type: class Member Name: FewShotExample Qualified Name: fenic._inference.types.FewShotExample Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.types.FewShotExample.__init__
null
null
true
false
0
0
null
None
null
[ "self", "user", "assistant" ]
null
FewShotExample
Type: method Member Name: __init__ Qualified Name: fenic._inference.types.FewShotExample.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "user", "assistant"] Returns: None Parent Class: FewShotExample
class
LMRequestMessages
fenic._inference.types.LMRequestMessages
null
null
true
false
10
24
null
null
null
null
[]
null
Type: class Member Name: LMRequestMessages Qualified Name: fenic._inference.types.LMRequestMessages Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
to_message_list
fenic._inference.types.LMRequestMessages.to_message_list
null
null
true
false
16
24
null
List[Dict[str, str]]
null
[ "self" ]
null
LMRequestMessages
Type: method Member Name: to_message_list Qualified Name: fenic._inference.types.LMRequestMessages.to_message_list Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: List[Dict[str, str]] Parent Class: LMRequestMessages
method
__init__
fenic._inference.types.LMRequestMessages.__init__
null
null
true
false
0
0
null
None
null
[ "self", "system", "examples", "user" ]
null
LMRequestMessages
Type: method Member Name: __init__ Qualified Name: fenic._inference.types.LMRequestMessages.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "system", "examples", "user"] Returns: None Parent Class: LMRequestMessages
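`LMRequestMessages(system, examples, user)` with `to_message_list() -> List[Dict[str, str]]` suggests flattening a prompt into provider-style chat messages. A sketch under that reading, with stand-in dataclasses mirroring the fields recorded above (the exact ordering fenic emits is an assumption; the system-then-examples-then-query shape is the conventional one):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FewShotExample:
    user: str
    assistant: str

@dataclass
class LMRequestMessages:
    system: str
    examples: List[FewShotExample]
    user: str

    def to_message_list(self) -> List[Dict[str, str]]:
        """Flatten into chat messages: system prompt first, each few-shot
        example as a user/assistant pair, then the actual query."""
        messages = [{"role": "system", "content": self.system}]
        for ex in self.examples:
            messages.append({"role": "user", "content": ex.user})
            messages.append({"role": "assistant", "content": ex.assistant})
        messages.append({"role": "user", "content": self.user})
        return messages

msgs = LMRequestMessages(
    system="You are terse.",
    examples=[FewShotExample(user="2+2?", assistant="4")],
    user="3+3?",
).to_message_list()
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```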
module
model_catalog
fenic._inference.model_catalog
null
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/_inference/model_catalog.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: model_catalog Qualified Name: fenic._inference.model_catalog Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
ModelProvider
fenic._inference.model_catalog.ModelProvider
Enum representing different model providers supported by the system.
null
true
false
5
9
null
null
null
null
[ "Enum" ]
null
Type: class Member Name: ModelProvider Qualified Name: fenic._inference.model_catalog.ModelProvider Docstring: Enum representing different model providers supported by the system. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
TieredTokenCost
fenic._inference.model_catalog.TieredTokenCost
null
null
true
false
11
23
null
null
null
null
[]
null
Type: class Member Name: TieredTokenCost Qualified Name: fenic._inference.model_catalog.TieredTokenCost Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.model_catalog.TieredTokenCost.__init__
null
null
true
false
13
23
null
null
null
[ "self", "input_token_cost", "cached_input_token_read_cost", "output_token_cost", "cached_input_token_write_cost" ]
null
TieredTokenCost
Type: method Member Name: __init__ Qualified Name: fenic._inference.model_catalog.TieredTokenCost.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "input_token_cost", "cached_input_token_read_cost", "output_token_cost", "cached_input_token_write_cost"] Returns: none Parent Class: TieredTokenCost
class
CompletionModelParameters
fenic._inference.model_catalog.CompletionModelParameters
Parameters for completion models including costs and context window size. Attributes: input_token_cost: Cost per input token in USD cached_input_token_read_cost: Cost per cached input token read in USD cached_input_token_write_cost: Cost per cached input token write in USD output_token_cost: Cost per output token in USD context_window_length: Maximum number of tokens in the context window max_output_tokens: Maximum number of tokens the model can generate in a single request.
null
true
false
25
57
null
null
null
null
[]
null
Type: class Member Name: CompletionModelParameters Qualified Name: fenic._inference.model_catalog.CompletionModelParameters Docstring: Parameters for completion models including costs and context window size. Attributes: input_token_cost: Cost per input token in USD cached_input_token_read_cost: Cost per cached input token read in USD cached_input_token_write_cost: Cost per cached input token write in USD output_token_cost: Cost per output token in USD context_window_length: Maximum number of tokens in the context window max_output_tokens: Maximum number of tokens the model can generate in a single request. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.model_catalog.CompletionModelParameters.__init__
null
null
true
false
36
57
null
null
null
[ "self", "input_token_cost", "output_token_cost", "context_window_length", "max_output_tokens", "max_temperature", "cached_input_token_write_cost", "cached_input_token_read_cost", "tiered_token_costs", "requires_reasoning_effort" ]
null
CompletionModelParameters
Type: method Member Name: __init__ Qualified Name: fenic._inference.model_catalog.CompletionModelParameters.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "input_token_cost", "output_token_cost", "context_window_length", "max_output_tokens", "max_temperature", "cached_input_token_write_cost", "cached_input_token_read_cost", "tiered_token_costs", "requires_reasoning_effort"] Returns: none Parent Class: CompletionModelParameters
class
EmbeddingModelParameters
fenic._inference.model_catalog.EmbeddingModelParameters
Parameters for embedding models including costs and output dimensions. Attributes: input_token_cost: Cost per input token in USD output_dimensions: Number of dimensions in the embedding output
null
true
false
60
69
null
null
null
null
[]
null
Type: class Member Name: EmbeddingModelParameters Qualified Name: fenic._inference.model_catalog.EmbeddingModelParameters Docstring: Parameters for embedding models including costs and output dimensions. Attributes: input_token_cost: Cost per input token in USD output_dimensions: Number of dimensions in the embedding output Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.model_catalog.EmbeddingModelParameters.__init__
null
null
true
false
67
69
null
null
null
[ "self", "input_token_cost", "output_dimensions" ]
null
EmbeddingModelParameters
Type: method Member Name: __init__ Qualified Name: fenic._inference.model_catalog.EmbeddingModelParameters.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "input_token_cost", "output_dimensions"] Returns: none Parent Class: EmbeddingModelParameters
attribute
CompletionModelCollection
fenic._inference.model_catalog.CompletionModelCollection
null
null
true
false
71
71
TypeAlias
null
Dict[str, CompletionModelParameters]
null
null
null
Type: attribute Member Name: CompletionModelCollection Qualified Name: fenic._inference.model_catalog.CompletionModelCollection Docstring: none Value: Dict[str, CompletionModelParameters] Annotation: TypeAlias is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
EmbeddingModelCollection
fenic._inference.model_catalog.EmbeddingModelCollection
null
null
true
false
72
72
TypeAlias
null
Dict[str, EmbeddingModelParameters]
null
null
null
Type: attribute Member Name: EmbeddingModelCollection Qualified Name: fenic._inference.model_catalog.EmbeddingModelCollection Docstring: none Value: Dict[str, EmbeddingModelParameters] Annotation: TypeAlias is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
OPENAI_AVAILABLE_LANGUAGE_MODELS
fenic._inference.model_catalog.OPENAI_AVAILABLE_LANGUAGE_MODELS
null
null
true
false
73
91
null
null
Literal['gpt-4.1', 'gpt-4.1-mini', 'gpt-4.1-nano', 'gpt-4.1-2025-04-14', 'gpt-4.1-mini-2025-04-14', 'gpt-4.1-nano-2025-04-14', 'gpt-4o', 'gpt-4o-2024-11-20', 'gpt-4o-2024-08-06', 'gpt-4o-2024-05-13', 'gpt-4o-mini', 'gpt-4o-mini-2024-07-18', 'gpt-4-turbo', 'gpt-4-turbo-2024-04-09', 'gpt-4', 'gpt-4-0314', 'gpt-4-0613']
null
null
null
Type: attribute Member Name: OPENAI_AVAILABLE_LANGUAGE_MODELS Qualified Name: fenic._inference.model_catalog.OPENAI_AVAILABLE_LANGUAGE_MODELS Docstring: none Value: Literal['gpt-4.1', 'gpt-4.1-mini', 'gpt-4.1-nano', 'gpt-4.1-2025-04-14', 'gpt-4.1-mini-2025-04-14', 'gpt-4.1-nano-2025-04-14', 'gpt-4o', 'gpt-4o-2024-11-20', 'gpt-4o-2024-08-06', 'gpt-4o-2024-05-13', 'gpt-4o-mini', 'gpt-4o-mini-2024-07-18', 'gpt-4-turbo', 'gpt-4-turbo-2024-04-09', 'gpt-4', 'gpt-4-0314', 'gpt-4-0613'] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
OPENAI_AVAILABLE_EMBEDDING_MODELS
fenic._inference.model_catalog.OPENAI_AVAILABLE_EMBEDDING_MODELS
null
null
true
false
93
96
null
null
Literal['text-embedding-3-small', 'text-embedding-3-large']
null
null
null
Type: attribute Member Name: OPENAI_AVAILABLE_EMBEDDING_MODELS Qualified Name: fenic._inference.model_catalog.OPENAI_AVAILABLE_EMBEDDING_MODELS Docstring: none Value: Literal['text-embedding-3-small', 'text-embedding-3-large'] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
ANTHROPIC_AVAILABLE_LANGUAGE_MODELS
fenic._inference.model_catalog.ANTHROPIC_AVAILABLE_LANGUAGE_MODELS
null
null
true
false
98
115
null
null
Literal['claude-3-7-sonnet-latest', 'claude-3-7-sonnet-20250219', 'claude-3-5-haiku-latest', 'claude-3-5-haiku-20241022', 'claude-sonnet-4-20250514', 'claude-sonnet-4-0', 'claude-4-sonnet-20250514', 'claude-3-5-sonnet-latest', 'claude-3-5-sonnet-20241022', 'claude-3-5-sonnet-20240620', 'claude-opus-4-0', 'claude-opus-4-20250514', 'claude-4-opus-20250514', 'claude-3-opus-latest', 'claude-3-opus-20240229', 'claude-3-haiku-20240307']
null
null
null
Type: attribute Member Name: ANTHROPIC_AVAILABLE_LANGUAGE_MODELS Qualified Name: fenic._inference.model_catalog.ANTHROPIC_AVAILABLE_LANGUAGE_MODELS Docstring: none Value: Literal['claude-3-7-sonnet-latest', 'claude-3-7-sonnet-20250219', 'claude-3-5-haiku-latest', 'claude-3-5-haiku-20241022', 'claude-sonnet-4-20250514', 'claude-sonnet-4-0', 'claude-4-sonnet-20250514', 'claude-3-5-sonnet-latest', 'claude-3-5-sonnet-20241022', 'claude-3-5-sonnet-20240620', 'claude-opus-4-0', 'claude-opus-4-20250514', 'claude-4-opus-20250514', 'claude-3-opus-latest', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
GOOGLE_GLA_AVAILABLE_MODELS
fenic._inference.model_catalog.GOOGLE_GLA_AVAILABLE_MODELS
null
null
true
false
117
125
null
null
Literal['gemini-2.5-pro-preview-06-05', 'gemini-2.5-flash-preview-05-20', 'gemini-2.0-flash-lite', 'gemini-2.0-flash-lite-001', 'gemini-2.0-flash', 'gemini-2.0-flash-001', 'gemini-2.0-flash-exp']
null
null
null
Type: attribute Member Name: GOOGLE_GLA_AVAILABLE_MODELS Qualified Name: fenic._inference.model_catalog.GOOGLE_GLA_AVAILABLE_MODELS Docstring: none Value: Literal['gemini-2.5-pro-preview-06-05', 'gemini-2.5-flash-preview-05-20', 'gemini-2.0-flash-lite', 'gemini-2.0-flash-lite-001', 'gemini-2.0-flash', 'gemini-2.0-flash-001', 'gemini-2.0-flash-exp'] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
AVAILABLE_LANGUAGE_MODELS
fenic._inference.model_catalog.AVAILABLE_LANGUAGE_MODELS
null
null
true
false
127
127
null
null
Union[OPENAI_AVAILABLE_LANGUAGE_MODELS, ANTHROPIC_AVAILABLE_LANGUAGE_MODELS, GOOGLE_GLA_AVAILABLE_MODELS]
null
null
null
Type: attribute Member Name: AVAILABLE_LANGUAGE_MODELS Qualified Name: fenic._inference.model_catalog.AVAILABLE_LANGUAGE_MODELS Docstring: none Value: Union[OPENAI_AVAILABLE_LANGUAGE_MODELS, ANTHROPIC_AVAILABLE_LANGUAGE_MODELS, GOOGLE_GLA_AVAILABLE_MODELS] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
AVAILABLE_EMBEDDING_MODELS
fenic._inference.model_catalog.AVAILABLE_EMBEDDING_MODELS
null
null
true
false
128
128
null
null
Union[OPENAI_AVAILABLE_EMBEDDING_MODELS]
null
null
null
Type: attribute Member Name: AVAILABLE_EMBEDDING_MODELS Qualified Name: fenic._inference.model_catalog.AVAILABLE_EMBEDDING_MODELS Docstring: none Value: Union[OPENAI_AVAILABLE_EMBEDDING_MODELS] Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
ModelCatalog
fenic._inference.model_catalog.ModelCatalog
Catalog of supported models and their parameters for different providers. This class maintains a registry of all supported models across different providers, including their costs, context windows, and other parameters. It provides methods to query model information and calculate costs for different operations.
null
true
false
130
569
null
null
null
null
[]
null
Type: class Member Name: ModelCatalog Qualified Name: fenic._inference.model_catalog.ModelCatalog Docstring: Catalog of supported models and their parameters for different providers. This class maintains a registry of all supported models across different providers, including their costs, context windows, and other parameters. It provides methods to query model information and calculate costs for different operations. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.model_catalog.ModelCatalog.__init__
null
null
true
false
137
368
null
null
null
[ "self" ]
null
ModelCatalog
Type: method Member Name: __init__ Qualified Name: fenic._inference.model_catalog.ModelCatalog.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: ModelCatalog
method
get_completion_model_parameters
fenic._inference.model_catalog.ModelCatalog.get_completion_model_parameters
Gets the parameters for a specific completion model. Args: model_provider: The provider of the model model_name: The name of the model Returns: Model parameters if found, None otherwise
null
true
false
371
382
null
CompletionModelParameters | None
null
[ "self", "model_provider", "model_name" ]
null
ModelCatalog
Type: method Member Name: get_completion_model_parameters Qualified Name: fenic._inference.model_catalog.ModelCatalog.get_completion_model_parameters Docstring: Gets the parameters for a specific completion model. Args: model_provider: The provider of the model model_name: The name of the model Returns: Model parameters if found, None otherwise Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "model_provider", "model_name"] Returns: CompletionModelParameters | None Parent Class: ModelCatalog
method
get_embedding_model_parameters
fenic._inference.model_catalog.ModelCatalog.get_embedding_model_parameters
Gets the parameters for a specific embedding model. Args: model_provider: The provider of the model model_name: The name of the model Returns: Model parameters if found, None otherwise
null
true
false
384
394
null
EmbeddingModelParameters | None
null
[ "self", "model_provider", "model_name" ]
null
ModelCatalog
Type: method Member Name: get_embedding_model_parameters Qualified Name: fenic._inference.model_catalog.ModelCatalog.get_embedding_model_parameters Docstring: Gets the parameters for a specific embedding model. Args: model_provider: The provider of the model model_name: The name of the model Returns: Model parameters if found, None otherwise Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "model_provider", "model_name"] Returns: EmbeddingModelParameters | None Parent Class: ModelCatalog
method
generate_unsupported_completion_model_error_message
fenic._inference.model_catalog.ModelCatalog.generate_unsupported_completion_model_error_message
Generates an error message for unsupported completion models. Args: model_provider: The provider of the unsupported model model_name: The name of the unsupported model Returns: Error message string
null
true
false
396
407
null
str
null
[ "self", "model_provider", "model_name" ]
null
ModelCatalog
Type: method Member Name: generate_unsupported_completion_model_error_message Qualified Name: fenic._inference.model_catalog.ModelCatalog.generate_unsupported_completion_model_error_message Docstring: Generates an error message for unsupported completion models. Args: model_provider: The provider of the unsupported model model_name: The name of the unsupported model Returns: Error message string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "model_provider", "model_name"] Returns: str Parent Class: ModelCatalog
method
generate_unsupported_embedding_model_error_message
fenic._inference.model_catalog.ModelCatalog.generate_unsupported_embedding_model_error_message
Generates an error message for unsupported embedding models. Args: model_provider: The provider of the unsupported model model_name: The name of the unsupported model Returns: Error message string
null
true
false
409
419
null
str
null
[ "self", "model_provider", "model_name" ]
null
ModelCatalog
Type: method Member Name: generate_unsupported_embedding_model_error_message Qualified Name: fenic._inference.model_catalog.ModelCatalog.generate_unsupported_embedding_model_error_message Docstring: Generates an error message for unsupported embedding models. Args: model_provider: The provider of the unsupported model model_name: The name of the unsupported model Returns: Error message string Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "model_provider", "model_name"] Returns: str Parent Class: ModelCatalog
method
get_supported_completions_models_as_string
fenic._inference.model_catalog.ModelCatalog.get_supported_completions_models_as_string
Returns a comma-separated string of all supported completion models. Returns: Comma-separated string of model names in format 'provider:model'
null
true
false
421
431
null
str
null
[ "self" ]
null
ModelCatalog
Type: method Member Name: get_supported_completions_models_as_string Qualified Name: fenic._inference.model_catalog.ModelCatalog.get_supported_completions_models_as_string Docstring: Returns a comma-separated string of all supported completion models. Returns: Comma-separated string of model names in format 'provider:model' Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: str Parent Class: ModelCatalog
method
get_supported_embeddings_models_as_string
fenic._inference.model_catalog.ModelCatalog.get_supported_embeddings_models_as_string
Returns a comma-separated string of all supported embedding models. Returns: Comma-separated string of model names
null
true
false
433
443
null
str
null
[ "self" ]
null
ModelCatalog
Type: method Member Name: get_supported_embeddings_models_as_string Qualified Name: fenic._inference.model_catalog.ModelCatalog.get_supported_embeddings_models_as_string Docstring: Returns a comma-separated string of all supported embedding models. Returns: Comma-separated string of model names Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: str Parent Class: ModelCatalog
method
calculate_completion_model_cost
fenic._inference.model_catalog.ModelCatalog.calculate_completion_model_cost
Calculates the total cost for a completion model operation. Args: model_provider: The provider of the model model_name: The name of the model uncached_input_tokens: Number of uncached input tokens cached_input_tokens_read: Number of cached input tokens read output_tokens: Number of output tokens cached_input_tokens_written: Number of cached input tokens written Returns: Total cost in USD Raises: ValueError: If the model is not supported
null
true
false
445
489
null
float
null
[ "self", "model_provider", "model_name", "uncached_input_tokens", "cached_input_tokens_read", "output_tokens", "cached_input_tokens_written" ]
null
ModelCatalog
Type: method Member Name: calculate_completion_model_cost Qualified Name: fenic._inference.model_catalog.ModelCatalog.calculate_completion_model_cost Docstring: Calculates the total cost for a completion model operation. Args: model_provider: The provider of the model model_name: The name of the model uncached_input_tokens: Number of uncached input tokens cached_input_tokens_read: Number of cached input tokens read output_tokens: Number of output tokens cached_input_tokens_written: Number of cached input tokens written Returns: Total cost in USD Raises: ValueError: If the model is not supported Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "model_provider", "model_name", "uncached_input_tokens", "cached_input_tokens_read", "output_tokens", "cached_input_tokens_written"] Returns: float Parent Class: ModelCatalog
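The docstring for `calculate_completion_model_cost` enumerates four token buckets, and `CompletionModelParameters` records a per-token USD rate for each. The natural computation is a sum of bucket-times-rate products; a standalone sketch under that assumption (parameter names follow the catalog's attributes, but the rates below are illustrative, not real provider pricing):

```python
def completion_cost(
    uncached_input_tokens: int,
    cached_input_tokens_read: int,
    output_tokens: int,
    cached_input_tokens_written: int,
    input_token_cost: float,
    cached_input_token_read_cost: float,
    output_token_cost: float,
    cached_input_token_write_cost: float,
) -> float:
    """Total USD cost: each token bucket multiplied by its per-token rate."""
    return (
        uncached_input_tokens * input_token_cost
        + cached_input_tokens_read * cached_input_token_read_cost
        + output_tokens * output_token_cost
        + cached_input_tokens_written * cached_input_token_write_cost
    )

# 1000 uncached input, 500 cached reads, 200 output, 0 cache writes,
# at example rates of $2e-6 / $5e-7 / $8e-6 / $2.5e-6 per token.
cost = completion_cost(1000, 500, 200, 0, 2e-6, 5e-7, 8e-6, 2.5e-6)
print(round(cost, 6))  # 0.00385
```

The embedding-model variant documented below is the degenerate case with a single bucket: `tokens * input_token_cost`.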
method
calculate_embedding_model_cost
fenic._inference.model_catalog.ModelCatalog.calculate_embedding_model_cost
Calculates the total cost for an embedding model operation. Args: model_provider: The provider of the model model_name: The name of the model tokens: Number of tokens to embed Returns: Total cost in USD Raises: ValueError: If the model is not supported
null
true
false
491
512
null
float
null
[ "self", "model_provider", "model_name", "tokens" ]
null
ModelCatalog
Type: method Member Name: calculate_embedding_model_cost Qualified Name: fenic._inference.model_catalog.ModelCatalog.calculate_embedding_model_cost Docstring: Calculates the total cost for an embedding model operation. Args: model_provider: The provider of the model model_name: The name of the model tokens: Number of tokens to embed Returns: Total cost in USD Raises: ValueError: If the model is not supported Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "model_provider", "model_name", "tokens"] Returns: float Parent Class: ModelCatalog
method
_create_complete_model_collection
fenic._inference.model_catalog.ModelCatalog._create_complete_model_collection
Creates a complete model collection including snapshots. Args: base_models: Dictionary of base model parameters snapshots: Dictionary mapping (provider, base_model) to list of snapshot names provider: The provider to create the collection for Returns: Complete model collection including snapshots
null
false
true
515
536
null
Dict[str, Union[CompletionModelParameters, EmbeddingModelParameters]]
null
[ "self", "base_models", "snapshots", "provider" ]
null
ModelCatalog
Type: method Member Name: _create_complete_model_collection Qualified Name: fenic._inference.model_catalog.ModelCatalog._create_complete_model_collection Docstring: Creates a complete model collection including snapshots. Args: base_models: Dictionary of base model parameters snapshots: Dictionary mapping (provider, base_model) to list of snapshot names provider: The provider to create the collection for Returns: Complete model collection including snapshots Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "base_models", "snapshots", "provider"] Returns: Dict[str, Union[CompletionModelParameters, EmbeddingModelParameters]] Parent Class: ModelCatalog
method
_get_supported_completions_models_by_provider
fenic._inference.model_catalog.ModelCatalog._get_supported_completions_models_by_provider
Returns the collection of completion models for a specific provider. Args: model_provider: The provider to get models for Returns: Collection of completion models for the specified provider, including snapshots
null
false
true
538
547
null
CompletionModelCollection
null
[ "self", "model_provider" ]
null
ModelCatalog
Type: method Member Name: _get_supported_completions_models_by_provider Qualified Name: fenic._inference.model_catalog.ModelCatalog._get_supported_completions_models_by_provider Docstring: Returns the collection of completion models for a specific provider. Args: model_provider: The provider to get models for Returns: Collection of completion models for the specified provider, including snapshots Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "model_provider"] Returns: CompletionModelCollection Parent Class: ModelCatalog
method
_get_supported_embeddings_models_by_provider
fenic._inference.model_catalog.ModelCatalog._get_supported_embeddings_models_by_provider
Returns the collection of embedding models for a specific provider. Args: model_provider: The provider to get models for Returns: Collection of embedding models for the specified provider, including snapshots
null
false
true
549
558
null
EmbeddingModelCollection
null
[ "self", "model_provider" ]
null
ModelCatalog
Type: method Member Name: _get_supported_embeddings_models_by_provider Qualified Name: fenic._inference.model_catalog.ModelCatalog._get_supported_embeddings_models_by_provider Docstring: Returns the collection of embedding models for a specific provider. Args: model_provider: The provider to get models for Returns: Collection of embedding models for the specified provider, including snapshots Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "model_provider"] Returns: EmbeddingModelCollection Parent Class: ModelCatalog
method
_get_supported_completions_models_by_provider_as_string
fenic._inference.model_catalog.ModelCatalog._get_supported_completions_models_by_provider_as_string
Returns a comma-separated string of supported completion model names for a provider. Args: model_provider: The provider to get model names for Returns: Comma-separated string of model names
null
false
true
560
569
null
str
null
[ "self", "model_provider" ]
null
ModelCatalog
Type: method Member Name: _get_supported_completions_models_by_provider_as_string Qualified Name: fenic._inference.model_catalog.ModelCatalog._get_supported_completions_models_by_provider_as_string Docstring: Returns a comma-separated string of supported completion model names for a provider. Args: model_provider: The provider to get model names for Returns: Comma-separated string of model names Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "model_provider"] Returns: str Parent Class: ModelCatalog
attribute
model_catalog
fenic._inference.model_catalog.model_catalog
null
null
true
false
572
572
null
null
ModelCatalog()
null
null
null
Type: attribute Member Name: model_catalog Qualified Name: fenic._inference.model_catalog.model_catalog Docstring: none Value: ModelCatalog() Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
google
fenic._inference.google
null
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/_inference/google/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: google Qualified Name: fenic._inference.google Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
gemini_oai_chat_completions_client
fenic._inference.google.gemini_oai_chat_completions_client
Client for making requests to Google's Gemini model using OpenAI compatibility layer or Vertex AI.
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/_inference/google/gemini_oai_chat_completions_client.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: gemini_oai_chat_completions_client Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client Docstring: Client for making requests to Google's Gemini model using OpenAI compatibility layer or Vertex AI. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
GeminiOAIChatCompletionsClient
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient
Client for making requests to Google's Gemini model using OpenAI compatibility layer or Vertex AI.
null
true
false
32
128
null
null
null
null
[ "ModelClient[FenicCompletionsRequest, FenicCompletionsResponse]" ]
null
Type: class Member Name: GeminiOAIChatCompletionsClient Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient Docstring: Client for making requests to Google's Gemini model using OpenAI compatibility layer or Vertex AI. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.__init__
Initialize the Gemini client with OpenAI compatibility layer or Vertex AI. Args: rate_limit_strategy: Strategy for handling rate limits model: The Gemini model to use queue_size: Size of the request queue max_backoffs: Maximum number of backoff attempts reasoning_effort: Reasoning effort level for thinking models (Gemini 2.5 only)
null
true
false
35
80
null
null
null
[ "self", "rate_limit_strategy", "model", "queue_size", "max_backoffs", "reasoning_effort" ]
null
GeminiOAIChatCompletionsClient
Type: method Member Name: __init__ Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.__init__ Docstring: Initialize the Gemini client with OpenAI compatibility layer or Vertex AI. Args: rate_limit_strategy: Strategy for handling rate limits model: The Gemini model to use queue_size: Size of the request queue max_backoffs: Maximum number of backoff attempts reasoning_effort: Reasoning effort level for thinking models (Gemini 2.5 only) Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "rate_limit_strategy", "model", "queue_size", "max_backoffs", "reasoning_effort"] Returns: none Parent Class: GeminiOAIChatCompletionsClient
method
make_single_request
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.make_single_request
Make a single request to the Gemini API. Args: request: The request to make Returns: The response from the API or an exception
null
true
false
83
94
null
Union[None, FenicCompletionsResponse, TransientException, FatalException]
null
[ "self", "request" ]
null
GeminiOAIChatCompletionsClient
Type: method Member Name: make_single_request Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.make_single_request Docstring: Make a single request to the Gemini API. Args: request: The request to make Returns: The response from the API or an exception Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: Union[None, FenicCompletionsResponse, TransientException, FatalException] Parent Class: GeminiOAIChatCompletionsClient
method
estimate_tokens_for_request
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.estimate_tokens_for_request
Estimate the number of tokens for a request. Args: request: The request to estimate tokens for Returns: TokenEstimate with input and output token counts
null
true
false
96
105
null
TokenEstimate
null
[ "self", "request" ]
null
GeminiOAIChatCompletionsClient
Type: method Member Name: estimate_tokens_for_request Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.estimate_tokens_for_request Docstring: Estimate the number of tokens for a request. Args: request: The request to estimate tokens for Returns: TokenEstimate with input and output token counts Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: TokenEstimate Parent Class: GeminiOAIChatCompletionsClient
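Both clients' `estimate_tokens_for_request` records return a `TokenEstimate` "with input and output token counts". A sketch of such a container, assuming field names from the docstring wording (fenic's actual dataclass may differ):

```python
# Sketch of a TokenEstimate-style container: estimated input tokens plus
# expected output tokens, with the total derived. Field names are assumed
# from the docstrings, not copied from fenic's real class.
from dataclasses import dataclass

@dataclass
class TokenEstimate:
    input_tokens: int
    output_tokens: int

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens
```

Keeping input and output counts separate matters downstream: rate limiters and cost calculators typically price the two directions differently.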
method
get_request_key
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.get_request_key
Generate a unique key for request deduplication. Args: request: The request to generate a key for Returns: A unique key for the request
null
true
false
107
116
null
str
null
[ "self", "request" ]
null
GeminiOAIChatCompletionsClient
Type: method Member Name: get_request_key Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.get_request_key Docstring: Generate a unique key for request deduplication. Args: request: The request to generate a key for Returns: A unique key for the request Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: str Parent Class: GeminiOAIChatCompletionsClient
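The `get_request_key` docstring describes generating a unique key per request for deduplication. A common way to do this is to hash a canonical serialization of the request contents; the sketch below assumes a simple message-list shape (not fenic's real `FenicCompletionsRequest`):

```python
# Sketch of a request-deduplication key in the spirit of get_request_key:
# hash a canonical JSON serialization of the request so identical requests
# map to the same key. The request shape here is an assumption.
import hashlib
import json

def get_request_key(messages: list, max_completion_tokens: int = 512) -> str:
    payload = json.dumps(
        {"messages": messages, "max_completion_tokens": max_completion_tokens},
        sort_keys=True,
    )
    # Truncated digest: short enough to log, long enough to avoid collisions
    # in practice for a single batch.
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]
```

`sort_keys=True` makes the serialization order-independent for dict fields, so two logically identical requests always produce the same key.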
method
reset_metrics
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.reset_metrics
Reset all metrics to their initial values.
null
true
false
118
120
null
null
null
[ "self" ]
null
GeminiOAIChatCompletionsClient
Type: method Member Name: reset_metrics Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.reset_metrics Docstring: Reset all metrics to their initial values. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: GeminiOAIChatCompletionsClient
method
get_metrics
fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.get_metrics
Get the current metrics. Returns: The current metrics
null
true
false
122
128
null
LMMetrics
null
[ "self" ]
null
GeminiOAIChatCompletionsClient
Type: method Member Name: get_metrics Qualified Name: fenic._inference.google.gemini_oai_chat_completions_client.GeminiOAIChatCompletionsClient.get_metrics Docstring: Get the current metrics. Returns: The current metrics Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: LMMetrics Parent Class: GeminiOAIChatCompletionsClient
module
anthropic
fenic._inference.anthropic
null
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/_inference/anthropic/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: anthropic Qualified Name: fenic._inference.anthropic Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
anthropic_batch_chat_completions_client
fenic._inference.anthropic.anthropic_batch_chat_completions_client
null
/private/var/folders/w2/dyfkx_354cqghs4b74vb_x380000gn/T/fenic-clone-0.0.0-y6d85svd/fenic/src/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: anthropic_batch_chat_completions_client Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
EPHEMERAL_CACHE_CONTROL
fenic._inference.anthropic.anthropic_batch_chat_completions_client.EPHEMERAL_CACHE_CONTROL
null
null
true
false
40
40
null
null
CacheControlEphemeralParam(type='ephemeral')
null
null
null
Type: attribute Member Name: EPHEMERAL_CACHE_CONTROL Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.EPHEMERAL_CACHE_CONTROL Docstring: none Value: CacheControlEphemeralParam(type='ephemeral') Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
AnthropicBatchCompletionsClient
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient
null
null
true
false
42
198
null
null
null
null
[ "ModelClient[FenicCompletionsRequest, FenicCompletionsResponse]" ]
null
Type: class Member Name: AnthropicBatchCompletionsClient Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.__init__
null
null
true
false
45
67
null
null
null
[ "self", "rate_limit_strategy", "queue_size", "model", "max_backoffs" ]
null
AnthropicBatchCompletionsClient
Type: method Member Name: __init__ Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "rate_limit_strategy", "queue_size", "model", "max_backoffs"] Returns: none Parent Class: AnthropicBatchCompletionsClient
method
validate_request
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.validate_request
Validate the request before making it to the Anthropic API.
null
true
false
69
73
null
Optional[FatalException]
null
[ "self", "request" ]
null
AnthropicBatchCompletionsClient
Type: method Member Name: validate_request Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.validate_request Docstring: Validate the request before making it to the Anthropic API. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: Optional[FatalException] Parent Class: AnthropicBatchCompletionsClient
method
make_single_request
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.make_single_request
null
null
true
false
75
134
null
Union[None, FenicCompletionsResponse, TransientException, FatalException]
null
[ "self", "request" ]
null
AnthropicBatchCompletionsClient
Type: method Member Name: make_single_request Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.make_single_request Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: Union[None, FenicCompletionsResponse, TransientException, FatalException] Parent Class: AnthropicBatchCompletionsClient
method
estimate_response_format_tokens
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_response_format_tokens
null
null
true
false
138
151
null
int
null
[ "self", "response_format" ]
null
AnthropicBatchCompletionsClient
Type: method Member Name: estimate_response_format_tokens Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_response_format_tokens Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "response_format"] Returns: int Parent Class: AnthropicBatchCompletionsClient
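`estimate_response_format_tokens` returns an `int` estimate for a structured-output schema. One plausible approach, sketched under stated assumptions, is to serialize the schema and apply a chars-per-token heuristic (the ~4 chars/token ratio is a rough English-text rule of thumb, not Anthropic's real tokenizer):

```python
# Rough token estimate for a JSON response-format schema, in the spirit of
# estimate_response_format_tokens. The 4-chars-per-token ratio is a common
# heuristic and an assumption here, not the client's actual method.
import json

def estimate_response_format_tokens(response_format: dict) -> int:
    serialized = json.dumps(response_format, sort_keys=True)
    return max(1, len(serialized) // 4)
```

An overestimate is usually safer than an underestimate in this setting: the result feeds `estimate_tokens_for_request`, and the rate limiter refunds unused capacity less painfully than it absorbs an overage.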
method
estimate_tokens_for_request
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_tokens_for_request
null
null
true
false
153
159
null
TokenEstimate
null
[ "self", "request" ]
null
AnthropicBatchCompletionsClient
Type: method Member Name: estimate_tokens_for_request Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_tokens_for_request Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: TokenEstimate Parent Class: AnthropicBatchCompletionsClient
method
count_tokens
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.count_tokens
null
null
true
false
163
164
null
int
null
[ "self", "messages" ]
null
AnthropicBatchCompletionsClient
Type: method Member Name: count_tokens Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.count_tokens Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "messages"] Returns: int Parent Class: AnthropicBatchCompletionsClient