| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def get_output_by_name(self, name: str) -> Optional[InferOutput]:
"""Find an output Tensor in the InferResponse that has the given name
Args:
name : str
name of the output Tensor object
Returns:
InferOutput
The InferOutput with the specifi... | Find an output Tensor in the InferResponse that has the given name
Args:
name : str
name of the output Tensor object
Returns:
InferOutput
The InferOutput with the specified name, or None if no
output with this name exists
| get_output_by_name | python | kserve/kserve | python/kserve/kserve/protocol/infer_type.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/infer_type.py | Apache-2.0 |
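The entry above documents kserve's `get_output_by_name` lookup. A minimal self-contained sketch of the described behavior, using stand-in `InferOutput`/`InferResponse` classes rather than kserve's real ones (only the fields the lookup needs are modeled):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InferOutput:
    # Stand-in for kserve's InferOutput; only name and data are modeled here.
    name: str
    data: list

class InferResponse:
    # Stand-in response holding a list of output tensors.
    def __init__(self, outputs: List[InferOutput]):
        self.outputs = outputs

    def get_output_by_name(self, name: str) -> Optional[InferOutput]:
        # Linear scan over the outputs; None when no tensor matches.
        for output in self.outputs:
            if output.name == name:
                return output
        return None

resp = InferResponse([InferOutput("scores", [0.9, 0.1]), InferOutput("labels", [1, 0])])
assert resp.get_output_by_name("labels").data == [1, 0]
assert resp.get_output_by_name("missing") is None
```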
def to_grpc_parameters(
parameters: Union[
Dict[str, Union[str, bool, int]], MessageMap[str, InferParameter]
],
) -> Dict[str, InferParameter]:
"""
Converts REST parameters to GRPC InferParameter objects
:param parameters: parameters to be converted.
:return: converted parameters as Dic... |
Converts REST parameters to GRPC InferParameter objects
:param parameters: parameters to be converted.
:return: converted parameters as Dict[str, InferParameter]
:raises InvalidInput: if the parameter type is not supported.
| to_grpc_parameters | python | kserve/kserve | python/kserve/kserve/protocol/infer_type.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/infer_type.py | Apache-2.0 |
def to_http_parameters(
parameters: Union[dict, MessageMap[str, InferParameter]],
) -> Dict[str, Union[str, bool, int]]:
"""
Converts GRPC InferParameter parameters to REST parameters
:param parameters: parameters to be converted.
:return: converted parameters as Dict[str, Union[str, bool, int]]
... |
Converts GRPC InferParameter parameters to REST parameters
:param parameters: parameters to be converted.
:return: converted parameters as Dict[str, Union[str, bool, int]]
| to_http_parameters | python | kserve/kserve | python/kserve/kserve/protocol/infer_type.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/infer_type.py | Apache-2.0 |
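The two entries above describe converting inference parameters between REST dicts and gRPC `InferParameter` messages. A self-contained round-trip sketch under the assumption that `InferParameter` carries one oneof-style field per value type (`bool_param`, `int64_param`, `string_param`); the class below is a stand-in, not the real protobuf message:

```python
from typing import Dict, Union

class InferParameter:
    # Hypothetical stand-in for the gRPC message: one field set per value type.
    def __init__(self, **kwargs):
        self.bool_param = kwargs.get("bool_param")
        self.int64_param = kwargs.get("int64_param")
        self.string_param = kwargs.get("string_param")

def to_grpc_parameters(parameters: Dict[str, Union[str, bool, int]]) -> Dict[str, InferParameter]:
    grpc_params: Dict[str, InferParameter] = {}
    for key, value in parameters.items():
        # bool must be checked before int: isinstance(True, int) is True in Python.
        if isinstance(value, bool):
            grpc_params[key] = InferParameter(bool_param=value)
        elif isinstance(value, int):
            grpc_params[key] = InferParameter(int64_param=value)
        elif isinstance(value, str):
            grpc_params[key] = InferParameter(string_param=value)
        else:
            raise ValueError(f"Unsupported parameter type: {type(value)}")
    return grpc_params

def to_http_parameters(parameters: Dict[str, InferParameter]) -> Dict[str, Union[str, bool, int]]:
    http_params: Dict[str, Union[str, bool, int]] = {}
    for key, p in parameters.items():
        for field in ("bool_param", "int64_param", "string_param"):
            value = getattr(p, field)
            if value is not None:
                http_params[key] = value
    return http_params

rest = {"binary_data": True, "priority": 2, "trace_id": "abc"}
assert to_http_parameters(to_grpc_parameters(rest)) == rest
```

The bool-before-int ordering is the one subtle point: without it, `True` would be stored as an int64 parameter.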
def _contains_fp16_datatype(infer_response: InferResponse) -> bool:
"""
Checks whether any InferResponse output uses the FP16 datatype.
:param infer_response: An InferResponse object containing model inference results.
:return: A boolean indicating whether any output in the InferResponse uses the FP16... |
Checks whether any InferResponse output uses the FP16 datatype.
:param infer_response: An InferResponse object containing model inference results.
:return: A boolean indicating whether any output in the InferResponse uses the FP16 datatype.
| _contains_fp16_datatype | python | kserve/kserve | python/kserve/kserve/protocol/infer_type.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/infer_type.py | Apache-2.0 |
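The check above reduces to scanning the response outputs for a `"FP16"` datatype string (the Open Inference Protocol encodes datatypes as strings such as `"FP32"` or `"INT64"`). A hedged sketch with a stand-in output type:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InferOutput:
    # Stand-in for the response output; datatype is a protocol type string.
    name: str
    datatype: str

def contains_fp16_datatype(outputs: List[InferOutput]) -> bool:
    # any() short-circuits on the first FP16 output found.
    return any(output.datatype == "FP16" for output in outputs)

assert contains_fp16_datatype([InferOutput("a", "FP32"), InferOutput("b", "FP16")])
assert not contains_fp16_datatype([InferOutput("a", "INT64")])
```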
async def index(self, filter_ready: Optional[bool] = False) -> List[Dict[str, str]]:
"""Returns information about every model available in a model repository.
Args:
filter_ready: When set to True, only models that are ready are returned
Returns:
List[Dict[str, ... | Returns information about every model available in a model repository.
Args:
filter_ready: When set to True, only models that are ready are returned
Returns:
List[Dict[str, str]]: list with metadata for models as below:
{
name: mode... | index | python | kserve/kserve | python/kserve/kserve/protocol/model_repository_extension.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/model_repository_extension.py | Apache-2.0 |
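The `index` entry above returns per-model metadata, optionally filtered to ready models. A self-contained sketch of that contract; the registry and the `"state"` field names are assumptions for illustration, not kserve's actual internals:

```python
import asyncio
from typing import Dict, List

class ModelRepositoryIndex:
    # Stand-in registry: model name -> ready flag.
    def __init__(self, models: Dict[str, bool]):
        self._models = models

    async def index(self, filter_ready: bool = False) -> List[Dict[str, str]]:
        # One metadata dict per model; when filter_ready is True,
        # models that are not ready are dropped.
        return [
            {"name": name, "state": "Ready" if ready else "NotReady"}
            for name, ready in self._models.items()
            if ready or not filter_ready
        ]

repo = ModelRepositoryIndex({"flowers": True, "mnist": False})
assert len(asyncio.run(repo.index())) == 2
assert asyncio.run(repo.index(filter_ready=True)) == [{"name": "flowers", "state": "Ready"}]
```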
async def load(self, model_name: str) -> None:
"""Loads the specified model.
Args:
model_name (str): name of the model to load.
Returns: None
Raises:
ModelNotReady: Exception if model loading fails.
"""
try:
# For backward compatibil... | Loads the specified model.
Args:
model_name (str): name of the model to load.
Returns: None
Raises:
ModelNotReady: Exception if model loading fails.
| load | python | kserve/kserve | python/kserve/kserve/protocol/model_repository_extension.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/model_repository_extension.py | Apache-2.0 |
async def unload(self, model_name: str) -> None:
"""Unload the specified model.
Args:
model_name (str): Name of the model to unload.
Returns: None
Raises:
ModelNotFound: Exception if the requested model is not found.
"""
try:
self._m... | Unload the specified model.
Args:
model_name (str): Name of the model to unload.
Returns: None
Raises:
ModelNotFound: Exception if the requested model is not found.
| unload | python | kserve/kserve | python/kserve/kserve/protocol/model_repository_extension.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/model_repository_extension.py | Apache-2.0 |
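The `load`/`unload` entries above describe a repository extension where loading registers a model and unloading an unknown model raises `ModelNotFound`. A minimal stand-in sketch of that contract (the dict-backed store is an assumption for illustration):

```python
import asyncio

class ModelNotFound(Exception):
    pass

class ModelRepositoryExtension:
    # Stand-in: load registers a model, unload removes it or raises.
    def __init__(self):
        self._models = {}

    async def load(self, model_name: str) -> None:
        self._models[model_name] = {"ready": True}

    async def unload(self, model_name: str) -> None:
        if model_name not in self._models:
            raise ModelNotFound(model_name)
        del self._models[model_name]

async def demo():
    ext = ModelRepositoryExtension()
    await ext.load("flowers")
    await ext.unload("flowers")
    try:
        await ext.unload("flowers")  # second unload: model is gone
    except ModelNotFound:
        return "raised"

assert asyncio.run(demo()) == "raised"
```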
def ServerLive(self, request, context):
"""The ServerLive API indicates if the inference server is able to receive
and respond to metadata and inference requests.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise Not... | The ServerLive API indicates if the inference server is able to receive
and respond to metadata and inference requests.
| ServerLive | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
def ServerReady(self, request, context):
"""The ServerReady API indicates if the server is ready for inferencing.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!') | The ServerReady API indicates if the server is ready for inferencing.
| ServerReady | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
def ModelReady(self, request, context):
"""The ModelReady API indicates if a specific model is ready for inferencing.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!') | The ModelReady API indicates if a specific model is ready for inferencing.
| ModelReady | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
def ServerMetadata(self, request, context):
"""The ServerMetadata API provides information about the server. Errors are
indicated by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure.
"""
context.set_code(grpc.StatusC... | The ServerMetadata API provides information about the server. Errors are
indicated by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure.
| ServerMetadata | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
def ModelMetadata(self, request, context):
"""The per-model metadata API provides information about a model. Errors are
indicated by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure.
"""
context.set_code(grpc.StatusC... | The per-model metadata API provides information about a model. Errors are
indicated by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure.
| ModelMetadata | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
def ModelInfer(self, request, context):
"""The ModelInfer API performs inference using the specified model. Errors are
indicated by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure.
"""
context.set_code(grpc.StatusCod... | The ModelInfer API performs inference using the specified model. Errors are
indicated by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure.
| ModelInfer | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
def RepositoryModelLoad(self, request, context):
"""Load or reload a model from a repository.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!') | Load or reload a model from a repository.
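The stubs above come from the generated gRPC servicer base class: every RPC marks the call `UNIMPLEMENTED` and raises. A concrete server subclasses it and overrides only the RPCs it supports. The sketch below shows that pattern without importing `grpc` (the base class and return value are stand-ins):

```python
class GRPCInferenceServiceServicer:
    # Stand-in for the generated base: every RPC is unimplemented.
    def ServerLive(self, request, context):
        raise NotImplementedError("Method not implemented!")

class InferenceServicer(GRPCInferenceServiceServicer):
    # A real implementation would return a ServerLiveResponse protobuf;
    # a dict stands in here.
    def ServerLive(self, request, context):
        return {"live": True}

assert InferenceServicer().ServerLive(None, None) == {"live": True}
```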
| RepositoryModelLoad | python | kserve/kserve | python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py | Apache-2.0 |
async def start(self):
"""Starts the server without configuring the event loop."""
self.create_application()
logger.info("Starting uvicorn with %s workers", self.config.workers)
await self._server.serve() | Starts the server without configuring the event loop. | start | python | kserve/kserve | python/kserve/kserve/protocol/rest/server.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/server.py | Apache-2.0 |
async def models(self) -> Dict[str, List[str]]:
"""Get a list of models in the model registry.
Returns:
Dict[str, List[str]]: List of model names.
"""
return {"models": list(self.dataplane.model_registry.get_models().keys())} | Get a list of models in the model registry.
Returns:
Dict[str, List[str]]: List of model names.
| models | python | kserve/kserve | python/kserve/kserve/protocol/rest/v1_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v1_endpoints.py | Apache-2.0 |
async def model_ready(self, model_name: str) -> Dict[str, Union[str, bool]]:
"""Check if a given model is ready.
Args:
model_name (str): Model name.
Returns:
Dict[str, Union[str, bool]]: Name of the model and whether it's ready.
"""
model_ready = await s... | Check if a given model is ready.
Args:
model_name (str): Model name.
Returns:
Dict[str, Union[str, bool]]: Name of the model and whether it's ready.
| model_ready | python | kserve/kserve | python/kserve/kserve/protocol/rest/v1_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v1_endpoints.py | Apache-2.0 |
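The `model_ready` entry above documents a `{name, ready}` payload. A stand-in sketch of that response shape (the readiness map is an assumption replacing the real dataplane call):

```python
import asyncio
from typing import Dict, Union

class V1Endpoints:
    # Stand-in dataplane: maps model name to readiness.
    def __init__(self, ready_models: Dict[str, bool]):
        self._ready = ready_models

    async def model_ready(self, model_name: str) -> Dict[str, Union[str, bool]]:
        # Mirrors the documented payload: model name plus a ready flag.
        return {"name": model_name, "ready": self._ready.get(model_name, False)}

endpoints = V1Endpoints({"flowers": True})
assert asyncio.run(endpoints.model_ready("flowers")) == {"name": "flowers", "ready": True}
assert asyncio.run(endpoints.model_ready("mnist"))["ready"] is False
```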
async def predict(self, model_name: str, request: Request) -> Union[Response, Dict]:
"""Predict request handler.
It sends the request to the dataplane where the model will process the request body.
Args:
model_name (str): Model name.
request (Request): Raw request objec... | Predict request handler.
It sends the request to the dataplane where the model will process the request body.
Args:
model_name (str): Model name.
request (Request): Raw request object.
Returns:
Dict|Response: Model inference response.
| predict | python | kserve/kserve | python/kserve/kserve/protocol/rest/v1_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v1_endpoints.py | Apache-2.0 |
async def explain(self, model_name: str, request: Request) -> Union[Response, Dict]:
"""Explain handler.
Args:
model_name (str): Model name.
request (Request): Raw request object.
Returns:
Dict: Explainer output.
"""
# Disable predictor healt... | Explain handler.
Args:
model_name (str): Model name.
request (Request): Raw request object.
Returns:
Dict: Explainer output.
| explain | python | kserve/kserve | python/kserve/kserve/protocol/rest/v1_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v1_endpoints.py | Apache-2.0 |
def register_v1_endpoints(
app: FastAPI,
dataplane: DataPlane,
model_repository_extension: Optional[ModelRepositoryExtension],
):
"""Register V1 endpoints.
Args:
app (FastAPI): FastAPI app.
dataplane (DataPlane): DataPlane object.
model_repository_extension (Optional[ModelRe... | Register V1 endpoints.
Args:
app (FastAPI): FastAPI app.
dataplane (DataPlane): DataPlane object.
model_repository_extension (Optional[ModelRepositoryExtension]): Model repository extension.
| register_v1_endpoints | python | kserve/kserve | python/kserve/kserve/protocol/rest/v1_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v1_endpoints.py | Apache-2.0 |
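The registration entry above wires one endpoints object's bound methods to V1 route paths on the app. FastAPI is not imported in this sketch; a plain dict stands in for the router to show the shape of the pattern (paths follow the V1 convention, but the mapping itself is illustrative):

```python
class V1Endpoints:
    # Stand-in endpoints object with two async handlers.
    async def models(self):
        return {"models": []}

    async def model_ready(self, model_name):
        return {"name": model_name, "ready": True}

def register_v1_endpoints(routes: dict, endpoints: V1Endpoints) -> None:
    # With FastAPI this would be app.add_api_route(...); a dict stands in.
    routes["/v1/models"] = endpoints.models
    routes["/v1/models/{model_name}"] = endpoints.model_ready

routes = {}
register_v1_endpoints(routes, V1Endpoints())
assert set(routes) == {"/v1/models", "/v1/models/{model_name}"}
```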
async def metadata(self) -> ServerMetadataResponse:
"""Server metadata endpoint.
Returns:
ServerMetadataResponse: Server metadata JSON object.
"""
return ServerMetadataResponse.model_validate(self.dataplane.metadata()) | Server metadata endpoint.
Returns:
ServerMetadataResponse: Server metadata JSON object.
| metadata | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def live(self) -> ServerLiveResponse:
"""Server live endpoint.
Returns:
ServerLiveResponse: Server live message.
"""
response = await self.dataplane.live()
is_live = response["status"] == "alive"
if not is_live:
raise ServerNotLive()
... | Server live endpoint.
Returns:
ServerLiveResponse: Server live message.
| live | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def ready(self) -> ServerReadyResponse:
"""Server ready endpoint.
Returns:
ServerReadyResponse: Server ready message.
"""
is_ready = await self.dataplane.ready()
if not is_ready:
raise ServerNotReady()
return ServerReadyResponse(ready=is_rea... | Server ready endpoint.
Returns:
ServerReadyResponse: Server ready message.
| ready | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def models(self) -> ListModelsResponse:
"""Get a list of models in the model registry.
Returns:
ListModelsResponse: List of models object.
"""
models = list(self.dataplane.model_registry.get_models().keys())
return ListModelsResponse.model_validate({"models": m... | Get a list of models in the model registry.
Returns:
ListModelsResponse: List of models object.
| models | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def model_metadata(
self, model_name: str, model_version: Optional[str] = None
) -> ModelMetadataResponse:
"""Model metadata handler. It provides information about a model.
Args:
model_name (str): Model name.
model_version (Optional[str]): Model version (option... | Model metadata handler. It provides information about a model.
Args:
model_name (str): Model name.
model_version (Optional[str]): Model version (optional).
Returns:
ModelMetadataResponse: Model metadata object.
| model_metadata | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def model_ready(
self, model_name: str, model_version: Optional[str] = None
) -> ModelReadyResponse:
"""Check if a given model is ready.
Args:
model_name (str): Model name.
model_version (str): Model version.
Returns:
ModelReadyResponse: Mo... | Check if a given model is ready.
Args:
model_name (str): Model name.
model_version (str): Model version.
Returns:
ModelReadyResponse: Model ready object
| model_ready | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def infer(
self,
raw_request: Request,
raw_response: Response,
model_name: str,
request_body: Union[InferenceRequest, bytes],
model_version: Optional[str] = None,
) -> Union[InferenceResponse, Response]:
"""Infer handler.
Args:
raw_r... | Infer handler.
Args:
raw_request (Request): fastapi request object,
raw_response (Response): fastapi response object,
model_name (str): Model name.
request_body (InferenceRequest): Inference request body.
model_version (Optional[str]): Model version (... | infer | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def load(self, model_name: str) -> Dict:
"""Model load handler.
Args:
model_name (str): Model name.
Returns:
Dict: {"name": model_name, "load": True}
"""
await self.model_repository_extension.load(model_name)
return {"name": model_name, "lo... | Model load handler.
Args:
model_name (str): Model name.
Returns:
Dict: {"name": model_name, "load": True}
| load | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def unload(self, model_name: str) -> Dict:
"""Model unload handler.
Args:
model_name (str): Model name.
Returns:
Dict: {"name": model_name, "unload": True}
"""
await self.model_repository_extension.unload(model_name)
return {"name": model_n... | Model unload handler.
Args:
model_name (str): Model name.
Returns:
Dict: {"name": model_name, "unload": True}
| unload | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
def register_v2_endpoints(
app: FastAPI,
dataplane: DataPlane,
model_repository_extension: Optional[ModelRepositoryExtension],
):
"""Register V2 endpoints.
Args:
app (FastAPI): FastAPI app.
dataplane (DataPlane): DataPlane object.
model_repository_extension (Optional[ModelRe... | Register V2 endpoints.
Args:
app (FastAPI): FastAPI app.
dataplane (DataPlane): DataPlane object.
model_repository_extension (Optional[ModelRepositoryExtension]): Model repository extension.
| register_v2_endpoints | python | kserve/kserve | python/kserve/kserve/protocol/rest/v2_endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/v2_endpoints.py | Apache-2.0 |
async def wait_for_termination(self, grace_period: Optional[int] = None):
"""Wait for the process to terminate. When a timeout occurs,
it cancels the task and raises TimeoutError."""
async def _wait_for_process():
while self._process.exitcode is None:
await asyncio.s... | Wait for the process to terminate. When a timeout occurs,
it cancels the task and raises TimeoutError. | wait_for_termination | python | kserve/kserve | python/kserve/kserve/protocol/rest/multiprocess/server.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/multiprocess/server.py | Apache-2.0 |
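The `wait_for_termination` entry above polls the child process and relies on a timeout to cancel the wait. The standard-library way to get exactly that behavior is `asyncio.wait_for`, which cancels the inner task and raises `TimeoutError` when the grace period elapses. A self-contained sketch with a callable standing in for the process's exit-code check:

```python
import asyncio

async def wait_for_termination(poll, grace_period=None):
    # Poll until the process-like object reports an exit code;
    # asyncio.wait_for cancels the polling task and raises TimeoutError
    # once the grace period elapses.
    async def _wait():
        while poll() is None:
            await asyncio.sleep(0.01)
    await asyncio.wait_for(_wait(), timeout=grace_period)

async def demo():
    try:
        # poll always returns None, so the grace period must expire.
        await wait_for_termination(lambda: None, grace_period=0.05)
    except asyncio.TimeoutError:
        return "timeout"

assert asyncio.run(demo()) == "timeout"
```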
async def terminate_all(self) -> None:
"""Propagate signal to all child processes and wait for termination."""
for p in self._processes:
p.terminate()
async def force_terminate(process) -> None:
try:
await process.wait_for_termination(
... | Propagate signal to all child processes and wait for termination. | terminate_all | python | kserve/kserve | python/kserve/kserve/protocol/rest/multiprocess/server.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/multiprocess/server.py | Apache-2.0 |
def get_open_ai_models(repository: ModelRepository) -> dict[str, Model]:
"""Retrieve all models in the repository that implement the OpenAI interface"""
from .openai_model import OpenAIModel
return {
name: model
for name, model in repository.get_models().items()
if isinstance(model,... | Retrieve all models in the repository that implement the OpenAI interface | get_open_ai_models | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/config.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/config.py | Apache-2.0 |
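The filter above keeps only repository models that implement the OpenAI interface, which reduces to an `isinstance` check in a dict comprehension. A stand-in sketch (the `Model`/`OpenAIModel` classes here are minimal placeholders, not kserve's real classes):

```python
class Model:
    # Stand-in base model class.
    def __init__(self, name):
        self.name = name

class OpenAIModel(Model):
    # Stand-in marker class for models implementing the OpenAI interface.
    pass

def get_open_ai_models(models: dict) -> dict:
    # isinstance keeps OpenAIModel instances (and subclasses), drops the rest.
    return {name: m for name, m in models.items() if isinstance(m, OpenAIModel)}

registry = {"gpt": OpenAIModel("gpt"), "sklearn": Model("sklearn")}
assert list(get_open_ai_models(registry)) == ["gpt"]
```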
async def create_completion(
self,
model_name: str,
request: CompletionRequest,
raw_request: Request,
headers: Headers,
response: Response,
) -> Union[AsyncGenerator[str, None], Completion, ErrorResponse]:
"""Generate the text with the provided text prompt.
... | Generate the text with the provided text prompt.
Args:
model_name (str): Model name.
request (CompletionRequest): Params to create a completion.
raw_request (Request): fastapi request object.
headers: (Headers): Request headers.
response: (Response): ... | create_completion | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/dataplane.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/dataplane.py | Apache-2.0 |
async def create_chat_completion(
self,
model_name: str,
request: ChatCompletionRequest,
raw_request: Request,
headers: Headers,
response: Response,
) -> Union[AsyncGenerator[str, None], ChatCompletion, ErrorResponse]:
"""Generate the text with the provided te... | Generate the text with the provided text prompt.
Args:
model_name (str): Model name.
request (CreateChatCompletionRequest): Params to create a chat completion.
headers: (Optional[Dict[str, str]]): Request headers.
Returns:
response: A non-streaming or st... | create_chat_completion | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/dataplane.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/dataplane.py | Apache-2.0 |
async def create_embedding(
self,
model_name: str,
request: EmbeddingRequest,
raw_request: Request,
headers: Headers,
response: Response,
) -> Union[AsyncGenerator[str, None], Embedding, ErrorResponse]:
"""Create embeddings for the provided input.
... | Create embeddings for the provided input.
Args:
model_name (str): Model name.
request (EmbeddingRequest): Params to create an embedding.
raw_request (Request): fastapi request object.
headers: (Headers): Request headers.
response: (Response): Fa... | create_embedding | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/dataplane.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/dataplane.py | Apache-2.0 |
async def create_rerank(
self,
model_name: str,
request: RerankRequest,
raw_request: Request,
headers: Headers,
response: Response,
) -> Union[AsyncGenerator[str, None], Rerank, ErrorResponse]:
"""Rerank documents against the provided query.
Args:
... | Rerank documents against the provided query.
Args:
model_name (str): Model name.
request (RerankRequest): Params to create rerank response.
raw_request (Request): fastapi request object.
headers: (Headers): Request headers.
response: (Response): Fa... | create_rerank | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/dataplane.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/dataplane.py | Apache-2.0 |
async def models(self) -> List[OpenAIModel]:
"""Retrieve a list of models
Returns:
response: A list of OpenAIModel instances
"""
return [
model
for model in self.model_registry.get_models().values()
if isinstance(model, OpenAIModel)
... | Retrieve a list of models
Returns:
response: A list of OpenAIModel instances
| models | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/dataplane.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/dataplane.py | Apache-2.0 |
async def create_completion(
self,
request_body: CompletionRequest,
raw_request: Request,
response: Response,
) -> Response:
"""Create completion handler.
Args:
request_body (CompletionCreateParams): Completion params body.
raw_request (Reques... | Create completion handler.
Args:
request_body (CompletionCreateParams): Completion params body.
raw_request (Request): fastapi request object,
response (Response): fastapi response object
Returns:
InferenceResponse: Inference response object.
| create_completion | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/endpoints.py | Apache-2.0 |
async def create_chat_completion(
self,
request_body: ChatCompletionRequest,
raw_request: Request,
response: Response,
) -> Response:
"""Create chat completion handler.
Args:
request_body (ChatCompletionRequestAdapter): Chat completion params body.
... | Create chat completion handler.
Args:
request_body (ChatCompletionRequestAdapter): Chat completion params body.
raw_request (Request): fastapi request object,
response (Response): fastapi response object
Returns:
InferenceResponse: Inference response obj... | create_chat_completion | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/endpoints.py | Apache-2.0 |
async def create_embedding(
self,
request_body: EmbeddingRequest,
raw_request: Request,
response: Response,
) -> Response:
"""Create embedding handler.
Args:
request_body (EmbeddingRequestAdapter): Embedding params body.
raw_request (Request): ... | Create embedding handler.
Args:
request_body (EmbeddingRequestAdapter): Embedding params body.
raw_request (Request): fastapi request object,
model_name (str): Model name.
Returns:
InferenceResponse: Inference response object.
| create_embedding | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/endpoints.py | Apache-2.0 |
async def create_rerank(
self,
raw_request: Request,
request_body: RerankRequest,
response: Response,
) -> Response:
"""Create rerank handler.
Args:
raw_request (Request): fastapi request object,
model_name (str): Model name.
reques... | Create rerank handler.
Args:
raw_request (Request): fastapi request object,
model_name (str): Model name.
request_body (RerankRequestAdapter): Rerank params body.
Returns:
InferenceResponse: Inference response object.
| create_rerank | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/endpoints.py | Apache-2.0 |
async def models(
self,
) -> ModelList:
"""List models handler.
Returns:
ModelList: Model response object.
"""
models = await self.dataplane.models()
return ModelList(
... | List models handler.
Returns:
ModelList: Model response object.
| models | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/endpoints.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/endpoints.py | Apache-2.0 |
def apply_chat_template(
self,
request: ChatCompletionRequest,
) -> ChatPrompt:
"""
Given a list of chat completion messages, convert them to a prompt.
"""
pass |
Given a list of chat completion messages, convert them to a prompt.
| apply_chat_template | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/openai_chat_adapter_model.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/openai_chat_adapter_model.py | Apache-2.0 |
def postprocess_completion(
self,
completion: Completion,
request: CompletionRequest,
raw_request: Optional[Request] = None,
):
"""Postprocess a completion. Only called when response is not being streamed (i.e. stream=false)"""
pass | Postprocess a completion. Only called when response is not being streamed (i.e. stream=false) | postprocess_completion | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | Apache-2.0 |
def postprocess_completion_chunk(
self,
completion: Completion,
request: CompletionRequest,
raw_request: Optional[Request] = None,
):
"""Postprocess a completion chunk. Only called when response is being streamed (i.e. stream=true)
This method will be called once for ... | Postprocess a completion chunk. Only called when response is being streamed (i.e. stream=true)
This method will be called once for each chunk that is streamed back to the user.
| postprocess_completion_chunk | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | Apache-2.0 |
def postprocess_chat_completion(
self,
chat_completion: ChatCompletion,
request: ChatCompletionRequest,
raw_request: Optional[Request] = None,
):
"""Postprocess a chat completion. Only called when response is not being streamed (i.e. stream=false)"""
pass | Postprocess a chat completion. Only called when response is not being streamed (i.e. stream=false) | postprocess_chat_completion | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | Apache-2.0 |
def postprocess_chat_completion_chunk(
self,
chat_completion_chunk: ChatCompletionChunk,
request: ChatCompletionRequest,
raw_request: Optional[Request] = None,
):
"""Postprocess a chat completion chunk. Only called when response is being streamed (i.e. stream=true)
Th... | Postprocess a chat completion chunk. Only called when response is being streamed (i.e. stream=true)
This method will be called once for each chunk that is streamed back to the user.
| postprocess_chat_completion_chunk | python | kserve/kserve | python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/protocol/rest/openai/openai_proxy_model.py | Apache-2.0 |
def is_structured_cloudevent(body: Dict) -> bool:
"""Returns True if the JSON request body resembles a structured CloudEvent"""
return (
"time" in body
and "type" in body
and "source" in body
and "id" in body
and "specversion" in body
and "data" in body
) | Returns True if the JSON request body resembles a structured CloudEvent | is_structured_cloudevent | python | kserve/kserve | python/kserve/kserve/utils/utils.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/utils/utils.py | Apache-2.0 |
def strtobool(val: str) -> bool:
"""Convert a string representation of truth to True or False.
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
'val' is anything else.
Adapted from deprecated `distutils`
htt... | Convert a string representation of truth to True or False.
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
'val' is anything else.
Adapted from deprecated `distutils`
https://github.com/python/cpython/blob/3.11... | strtobool | python | kserve/kserve | python/kserve/kserve/utils/utils.py | https://github.com/kserve/kserve/blob/master/python/kserve/kserve/utils/utils.py | Apache-2.0 |
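The `strtobool` entry above adapts the deprecated `distutils.util.strtobool`. A sketch with the same truth table, returning a real `bool` instead of distutils' `0`/`1`:

```python
def strtobool(val: str) -> bool:
    # Same truth table as the deprecated distutils.util.strtobool,
    # but returning a bool rather than 0/1.
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return True
    if val in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {val!r}")

assert strtobool("Yes") is True
assert strtobool("off") is False
```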
def test_inferenceservice_client_creat():
"""Unit test for kserve create api"""
with patch(
"kserve.api.kserve_client.KServeClient.create", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.create(isvc, namespace="kubeflo... | Unit test for kserve create api | test_inferenceservice_client_creat | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
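The client tests in these records all follow one shape: patch the `KServeClient` method to return a canned result, call it, and assert the mocked result round-trips. A self-contained sketch of that pattern, using a stand-in client class (names below are hypothetical) rather than kserve itself:

```python
from unittest.mock import patch


class DummyClient:
    """Stand-in for KServeClient; only the method shape matters here."""

    def create(self, isvc, namespace=None):
        raise RuntimeError("would hit a real cluster")


mocked_unit_result = {"metadata": {"name": "flower-sample"}}
client = DummyClient()

# Patch the method on the class, just as the records patch
# "kserve.api.kserve_client.KServeClient.create" by dotted path.
with patch.object(DummyClient, "create", return_value=mocked_unit_result):
    assert mocked_unit_result == client.create({}, namespace="kubeflow")
```

Patching on the class means every instance sees the mock for the duration of the `with` block, which is why the tests never touch a live cluster.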
def test_inferenceservice_client_get():
"""Unit test for kserve get api"""
with patch(
"kserve.api.kserve_client.KServeClient.get", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.get(
"flower-sample", namespace="kubeflow"
) | Unit test for kserve get api | test_inferenceservice_client_get | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_watch():
"""Unit test for kserve get api"""
with patch(
"kserve.api.kserve_client.KServeClient.get", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.get(
"flower-sample", namespace="kubeflow", watch=True, timeout_seco... | Unit test for kserve get api | test_inferenceservice_client_watch | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_patch():
"""Unit test for kserve patch api"""
with patch(
"kserve.api.kserve_client.KServeClient.patch", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.patch(
"flower-sample... | Unit test for kserve patch api | test_inferenceservice_client_patch | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_rollout_canary():
"""Unit test for kserve promote api"""
with patch(
"kserve.api.kserve_client.KServeClient.rollout_canary",
return_value=mocked_unit_result,
):
assert mocked_unit_result == kserve_client.rollout_canary(
"flower-sample", na... | Unit test for kserve promote api | test_inferenceservice_client_rollout_canary | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_replace():
"""Unit test for kserve replace api"""
with patch(
"kserve.api.kserve_client.KServeClient.replace", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.replace(
"flowe... | Unit test for kserve replace api | test_inferenceservice_client_replace | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_delete():
"""Unit test for kserve delete api"""
with patch(
"kserve.api.kserve_client.KServeClient.delete", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.delete(
"flower-sample", namespace="kubeflow"
) | Unit test for kserve delete api | test_inferenceservice_client_delete | python | kserve/kserve | python/kserve/test/skip_test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/skip_test_inference_service_client.py | Apache-2.0 |
async def test_grpc_raw_inputs(mock_to_headers, server):
"""
    If we receive raw inputs, then the response should also be in raw output format.
"""
fp32_data = np.array([6.8, 2.8, 4.8, 1.4, 6.0, 3.4, 4.5, 1.6], dtype=np.float32)
int32_data = np.array([6, 2, 4, 1, 6, 3, 4, 1], dtype=np.int32)
str_d... |
If we receive raw inputs, then the response should also be in raw output format.
| test_grpc_raw_inputs | python | kserve/kserve | python/kserve/test/test_grpc_server.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_grpc_server.py | Apache-2.0 |
async def test_grpc_fp16_output(mock_to_headers, server):
"""
    If the output contains the FP16 datatype, the outputs should be returned as raw outputs.
"""
fp32_data = [6.8, 2.8, 4.8, 1.4, 6.0, 3.4, 4.5, 1.6]
request = grpc_predict_v2_pb2.ModelInferRequest(
model_name="FP16OutputModel",
... |
If the output contains the FP16 datatype, the outputs should be returned as raw outputs.
| test_grpc_fp16_output | python | kserve/kserve | python/kserve/test/test_grpc_server.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_grpc_server.py | Apache-2.0 |
async def test_grpc_raw_inputs_with_missing_input_data(mock_to_headers, server):
"""
    Server should raise InvalidInput if raw_input_contents is missing some input data.
"""
raw_input_contents = [
np.array([6.8, 2.8, 4.8, 1.4, 6.0, 3.4, 4.5, 1.6], dtype=np.float32).tobytes(),
np.array([6, 2, ... |
Server should raise InvalidInput if raw_input_contents is missing some input data.
| test_grpc_raw_inputs_with_missing_input_data | python | kserve/kserve | python/kserve/test/test_grpc_server.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_grpc_server.py | Apache-2.0 |
async def test_grpc_raw_inputs_with_contents_specified(mock_to_headers, server):
"""
    Server should raise InvalidInput if both contents and raw_input_contents are specified.
"""
raw_input_contents = [
np.array([6.8, 2.8, 4.8, 1.4, 6.0, 3.4, 4.5, 1.6], dtype=np.float32).tobytes(),
np.array([6,... |
Server should raise InvalidInput if both contents and raw_input_contents are specified.
| test_grpc_raw_inputs_with_contents_specified | python | kserve/kserve | python/kserve/test/test_grpc_server.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_grpc_server.py | Apache-2.0 |
def test_inferenceservice_client_create():
"""Unit test for kserve create api"""
with patch(
"kserve.api.kserve_client.KServeClient.create", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.create(isvc, namespace="kubefl... | Unit test for kserve create api | test_inferenceservice_client_create | python | kserve/kserve | python/kserve/test/test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_get():
"""Unit test for kserve get api"""
with patch(
"kserve.api.kserve_client.KServeClient.get", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.get(
"flower-sample", namespace="kubeflow"
) | Unit test for kserve get api | test_inferenceservice_client_get | python | kserve/kserve | python/kserve/test/test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_watch():
"""Unit test for kserve get api"""
with patch(
"kserve.api.kserve_client.KServeClient.get", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.get(
"flower-sample", namespace="kubeflow", watch=True, timeout_seco... | Unit test for kserve get api | test_inferenceservice_client_watch | python | kserve/kserve | python/kserve/test/test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_patch():
"""Unit test for kserve patch api"""
with patch(
"kserve.api.kserve_client.KServeClient.patch", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.patch(
"flower-sample... | Unit test for kserve patch api | test_inferenceservice_client_patch | python | kserve/kserve | python/kserve/test/test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_replace():
"""Unit test for kserve replace api"""
with patch(
"kserve.api.kserve_client.KServeClient.replace", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.replace(
"flowe... | Unit test for kserve replace api | test_inferenceservice_client_replace | python | kserve/kserve | python/kserve/test/test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_delete():
"""Unit test for kserve delete api"""
with patch(
"kserve.api.kserve_client.KServeClient.delete", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.delete(
"flower-sample", namespace="kubeflow"
) | Unit test for kserve delete api | test_inferenceservice_client_delete | python | kserve/kserve | python/kserve/test/test_inference_service_client.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_inference_service_client.py | Apache-2.0 |
def test_inferenceservice_client_create():
"""Unit test for kserve create api"""
with patch(
"kserve.api.kserve_client.KServeClient.create", return_value=mocked_unit_result
):
isvc = generate_inferenceservice()
assert mocked_unit_result == kserve_client.create(isvc, namespace="kubefl... | Unit test for kserve create api | test_inferenceservice_client_create | python | kserve/kserve | python/kserve/test/test_kubeconfig_dict.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_kubeconfig_dict.py | Apache-2.0 |
def test_inferenceservice_client_get():
"""Unit test for kserve get api"""
with patch(
"kserve.api.kserve_client.KServeClient.get", return_value=mocked_unit_result
):
assert mocked_unit_result == kserve_client.get(
"flower-sample", namespace="kubeflow"
) | Unit test for kserve get api | test_inferenceservice_client_get | python | kserve/kserve | python/kserve/test/test_kubeconfig_dict.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_kubeconfig_dict.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1BuiltInAdapter
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_built_in_adapter.V1alpha1BuiltInAdapte... | Test V1alpha1BuiltInAdapter
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_built_in_adapter.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_built_in_adapter.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ClusterServingRuntime
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_cluster_serving_runtime.V1alpha... | Test V1alpha1ClusterServingRuntime
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_cluster_serving_runtime.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_cluster_serving_runtime.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ClusterServingRuntimeList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_cluster_serving_runtime_lis... | Test V1alpha1ClusterServingRuntimeList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_cluster_serving_runtime_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_cluster_serving_runtime_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ClusterStorageContainer
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_cluster_storage_container.V1a... | Test V1alpha1ClusterStorageContainer
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_cluster_storage_container.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_cluster_storage_container.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ClusterStorageContainerList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_cluster_storage_container... | Test V1alpha1ClusterStorageContainerList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_cluster_storage_container_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_cluster_storage_container_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1Container
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_container.V1alpha1Container() # noqa: E501... | Test V1alpha1Container
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_container.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_container.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceGraph
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_graph.V1alpha1InferenceGraph... | Test V1alpha1InferenceGraph
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_graph.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_graph.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceGraphList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_graph_list.V1alpha1Infer... | Test V1alpha1InferenceGraphList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_graph_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_graph_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceGraphSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_graph_spec.V1alpha1Infer... | Test V1alpha1InferenceGraphSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_graph_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_graph_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceGraphStatus
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_graph_status.V1alpha1I... | Test V1alpha1InferenceGraphStatus
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_graph_status.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_graph_status.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceRouter
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_router.V1alpha1InferenceRou... | Test V1alpha1InferenceRouter
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_router.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_router.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceStep
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_step.V1alpha1InferenceStep() ... | Test V1alpha1InferenceStep
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_step.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_step.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1InferenceTarget
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_inference_target.V1alpha1InferenceTar... | Test V1alpha1InferenceTarget
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_inference_target.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_inference_target.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelCache
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_cache.V1alpha1LocalModelC... | Test V1alpha1LocalModelCache
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_cache.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_cache.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelCacheList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_cache_list.V1alpha1Lo... | Test V1alpha1LocalModelCacheList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_cache_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_cache_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelCacheSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_cache_spec.V1alpha1Lo... | Test V1alpha1LocalModelCacheSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_cache_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_cache_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelNode
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_node.V1alpha1LocalModelNod... | Test V1alpha1LocalModelNode
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_node.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_node.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelNodeGroup
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_node_group.V1alpha1Lo... | Test V1alpha1LocalModelNodeGroup
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_node_group.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_node_group.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelNodeGroupList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_node_group_list.V... | Test V1alpha1LocalModelNodeGroupList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_node_group_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_node_group_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelNodeGroupSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_node_group_spec.V... | Test V1alpha1LocalModelNodeGroupSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_node_group_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_node_group_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelNodeList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_node_list.V1alpha1Loca... | Test V1alpha1LocalModelNodeList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_node_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_node_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1LocalModelNodeSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_local_model_node_spec.V1alpha1Loca... | Test V1alpha1LocalModelNodeSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_local_model_node_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_local_model_node_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ModelSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_model_spec.V1alpha1ModelSpec() # noqa: E50... | Test V1alpha1ModelSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_model_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_model_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ServingRuntime
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_serving_runtime.V1alpha1ServingRuntime... | Test V1alpha1ServingRuntime
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_serving_runtime.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_serving_runtime.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ServingRuntimeList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_serving_runtime_list.V1alpha1Servi... | Test V1alpha1ServingRuntimeList
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_serving_runtime_list.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_serving_runtime_list.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ServingRuntimePodSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_serving_runtime_pod_spec.V1alph... | Test V1alpha1ServingRuntimePodSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_serving_runtime_pod_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_serving_runtime_pod_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1ServingRuntimeSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_serving_runtime_spec.V1alpha1Servi... | Test V1alpha1ServingRuntimeSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_serving_runtime_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_serving_runtime_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1StorageContainerSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_storage_container_spec.V1alpha1S... | Test V1alpha1StorageContainerSpec
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_storage_container_spec.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_storage_container_spec.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1StorageHelper
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_storage_helper.V1alpha1StorageHelper() ... | Test V1alpha1StorageHelper
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_storage_helper.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_storage_helper.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1SupportedModelFormat
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_supported_model_format.V1alpha1S... | Test V1alpha1SupportedModelFormat
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_supported_model_format.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_supported_model_format.py | Apache-2.0 |
def make_instance(self, include_optional):
"""Test V1alpha1SupportedUriFormat
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included"""
# model = kserve.models.v1alpha1_supported_uri_format.V1alpha1Suppo... | Test V1alpha1SupportedUriFormat
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included | make_instance | python | kserve/kserve | python/kserve/test/test_v1alpha1_supported_uri_format.py | https://github.com/kserve/kserve/blob/master/python/kserve/test/test_v1alpha1_supported_uri_format.py | Apache-2.0 |
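Every `make_instance` record above shares the same generated-test contract: build the model with only required params when `include_optional` is False, and with required plus optional params when it is True. A minimal sketch of that contract with a hypothetical model class (not a real kserve model):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExampleSpec:
    """Hypothetical model: one required field, one optional field."""

    name: str                       # required
    replicas: Optional[int] = None  # optional


def make_instance(include_optional: bool) -> ExampleSpec:
    # Mirrors the generated tests: optional params are only populated
    # when include_optional is True.
    if include_optional:
        return ExampleSpec(name="flower-sample", replicas=2)
    return ExampleSpec(name="flower-sample")


assert make_instance(False).replicas is None
assert make_instance(True).replicas == 2
```

Instantiating both variants is the whole point of these generated tests: it verifies the model constructors accept each parameter set without raising.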