cloned_public_repos/zenml/CLA.md
# Fiduciary License Agreement 2.0
based on the
## Individual Contributor Exclusive License Agreement (including the Traditional Patent License OPTION)

Thank you for your interest in contributing to ZenML by ZenML GmbH ("We" or "Us").

The purpose of this contributor agreement ("Agreement") is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the instructions at https://zenml.io/cla/.

### 0. Preamble

Software is deeply embedded in all aspects of our lives and it is important that it empower, rather than restrict us. Free Software gives everybody the rights to use, understand, adapt and share software. These rights help support other fundamental freedoms like freedom of speech, press and privacy.

Development of Free Software can follow many patterns. In some cases whole development is handled by a sole programmer or a small group of people. But usually, the creation and maintenance of software is a complex process that requires the contribution of many individuals. This also affects who owns the rights to the software. In the latter case, rights in software are owned jointly by a great number of individuals.

To tackle this issue some projects require a full copyright assignment to be signed by all contributors. The problem with such assignments is that they often lack checks and balances that would protect the contributors from potential abuse of power from the new copyright holder.

FSFE's Fiduciary License Agreement (FLA) was created by the Free Software Foundation Europe e.V. with just that in mind – to concentrate all deciding power within one entity and prevent fragmentation of rights on one hand, while on the other preventing that single entity from abusing its power. The main aim is to ensure that the software covered under the FLA will forever remain Free Software.

This process only serves for the transfer of economic rights. So-called moral rights (e.g. authors right to be identified as author) remain with the original author(s) and are inalienable.

**How to use this FLA**

If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at support@zenml.io

### 1. Definitions

"You" means the individual Copyright owner who Submits a Contribution to Us.

"Contribution" means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

"Copyright" means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

"Material" means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

"Submit" means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."

"Documentation" means any non-software portion of a Contribution.

### 2. License grant

#### 2.1 Copyright license to Us

Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to: publish the Contribution, modify the Contribution, prepare derivative works based upon or containing the Contribution and/or to combine the Contribution with other Materials, reproduce the Contribution in original or modified form, distribute, to make the Contribution available to the public, display and publicly perform the Contribution in original or modified form.

#### 2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

#### 2.3 Copyright license back to You

Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to: publish the Contribution, modify the Contribution, prepare derivative works based upon or containing the Contribution and/or to combine the Contribution with other Materials, reproduce the Contribution in original or modified form, distribute, to make the Contribution available to the public, display and publicly perform the Contribution in original or modified form. This license back is limited to the Contribution and does not provide any rights to the Material.

### 3. Patents

#### 3.1 Patent license

Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

#### 3.2 Revocation of patent license

You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a "Defensive Purpose" if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

### 4. License obligations by Us

We agree to (sub)license the Contribution or any Materials containing, based on or derived from your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software License and which are approved by the Open Source Initiative as Open Source licenses.

More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only under the terms of the following license(s): Apache-2.0 (including any right to adopt any future version of a license if permitted).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

### 5. Disclaimer

THE CONTRIBUTION IS PROVIDED "AS IS". MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

### 6. Consequential damage waiver

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

### 7. Approximation of disclaimer and damage waiver

IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

### 8. Term

#### 8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

#### 8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

#### 8.3 In the event of a termination of this Agreement Sections 5., 6., 7., 8., and 9. shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

### 9. Miscellaneous

#### 9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of Germany excluding its private international law provisions.

#### 9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

#### 9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

#### 9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

#### 9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.

**You**

Date: _______________________________
Name: _______________________________
Title: ______________________________
Address: ____________________________

**Us**

Date: _______________________________
Name: _______________________________
Title: _______________________________
Address: _______________________________
cloned_public_repos/zenml/pyproject.toml
[tool.poetry]
name = "zenml"
version = "0.80.1"
packages = [{ include = "zenml", from = "src" }]
description = "ZenML: Write production-ready ML code."
authors = ["ZenML GmbH <info@zenml.io>"]
readme = "README.md"
homepage = "https://zenml.io"
documentation = "https://docs.zenml.io"
repository = "https://github.com/zenml-io/zenml"
license = "Apache-2.0"
keywords = ["machine learning", "production", "pipeline", "mlops", "devops"]
classifiers = [
    "Development Status :: 4 - Beta",
    "Intended Audience :: Developers",
    "Intended Audience :: Science/Research",
    "Intended Audience :: System Administrators",
    "License :: OSI Approved :: Apache Software License",
    "Programming Language :: Python :: 3 :: Only",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Topic :: System :: Distributed Computing",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Typing :: Typed",
]
exclude = [
    "tests.*",
    "*.tests",
    "docs",
    "tests",
    "legacy",
    "*.tests.*",
    "examples",
]
include = ["src/zenml", "*.txt", "*.sh", "*.md"]

[tool.poetry.scripts]
zenml = "zenml.cli.cli:cli"

[tool.poetry.dependencies]
alembic = { version = "~1.8.1" }
bcrypt = { version = "4.0.1" }
click = "^8.0.1,<8.1.8"
cloudpickle = ">=2.0.0,<3"
distro = "^1.6.0"
docker = "~7.1.0"
gitpython = "^3.1.18"
packaging = ">=24.1"
passlib = { extras = ["bcrypt"], version = "~1.7.4" }
psutil = ">=5.0.0"
pydantic = "~2.8"
pydantic-settings = "*"
pymysql = { version = "~1.1.0,>=1.1.1" }
python = ">=3.9,<3.13"
python-dateutil = "^2.8.1"
pyyaml = ">=6.0.1"
rich = { extras = ["jupyter"], version = ">=12.0.0" }
setuptools = "*"
sqlalchemy = "^2.0.0"
sqlalchemy_utils = "*"
sqlmodel = "0.0.18"
importlib_metadata = { version = "<=7.0.0", python = "<3.10" }

# Optional dependencies for the ZenServer
fastapi = { version = ">=0.100, <=0.115.8", optional = true }
uvicorn = { extras = ["standard"], version = ">=0.17.5", optional = true }
python-multipart = { version = "~0.0.9", optional = true }
pyjwt = { extras = ["crypto"], version = "2.7.*", optional = true }
orjson = { version = "~3.10.0", optional = true }
Jinja2 = { version = "*", optional = true }
ipinfo = { version = ">=4.4.3", optional = true }
secure = { version = "~0.3.0", optional = true }
tldextract = { version = "~5.1.0", optional = true }
itsdangerous = { version = "~2.2.0", optional = true }

# Optional dependencies for project templates
copier = { version = ">=8.1.0", optional = true }
pyyaml-include = { version = "<2.0", optional = true }
jinja2-time = { version = "^0.2.0", optional = true }

# Optional dependencies for the AWS secrets store
boto3 = { version = ">=1.16.0", optional = true }

# Optional dependencies for the GCP secrets store
google-cloud-secret-manager = { version = ">=2.12.5", optional = true }

# Optional dependencies for the Azure Key Vault secrets store
requests = { version = "^2.27.11", optional = true }
azure-identity = { version = ">=1.4.0", optional = true }
azure-keyvault-secrets = { version = ">=4.0.0", optional = true }

# Optional dependencies for the HashiCorp Vault secrets store
hvac = { version = ">=0.11.2", optional = true }

# Optional dependencies for the AWS connector
aws-profile-manager = { version = ">=0.5.0", optional = true }

# Optional dependencies for the Kubernetes connector
kubernetes = { version = ">=18.20.0", optional = true }

# Optional dependencies for the GCP connector
google-cloud-container = { version = ">=2.21.0", optional = true }
google-cloud-storage = { version = ">=2.9.0", optional = true }
google-cloud-artifact-registry = { version = ">=1.11.3", optional = true }

# Optional dependencies for the Azure connector
azure-mgmt-containerservice = { version = ">=20.0.0", optional = true }
azure-mgmt-containerregistry = { version = ">=10.0.0", optional = true }
azure-mgmt-storage = { version = ">=20.0.0", optional = true }
azure-storage-blob = { version = ">=12.0.0", optional = true }
azure-mgmt-resource = { version = ">=21.0.0", optional = true }

# Optional dependencies for the S3 artifact store
s3fs = { version = ">=2022.11.0", optional = true }

# Optional dependencies for the Sagemaker orchestrator
sagemaker = { version = ">=2.199.0", optional = true }

# Optional dependencies for the GCS artifact store
gcsfs = { version = ">=2022.11.0", optional = true }

# Optional dependencies for the Vertex orchestrator
kfp = { version = ">=2.6.0", optional = true }
google-cloud-aiplatform = { version = ">=1.34.0", optional = true }

# Optional dependencies for the Azure artifact store
adlfs = { version = ">=2021.10.0", optional = true }

# Optional dependencies for the AzureML orchestrator
azure-ai-ml = { version = "1.23.1", optional = true }

# Optional development dependencies
bandit = { version = "^1.7.5", optional = true }
coverage = { extras = ["toml"], version = "^5.5", optional = true }
mypy = { version = "1.7.1", optional = true }
pyment = { version = "^0.3.3", optional = true }
tox = { version = "^3.24.3", optional = true }
hypothesis = { version = "^6.43.1", optional = true }
typing-extensions = { version = ">=3.7.4", optional = true }
darglint = { version = "^1.8.1", optional = true }
ruff = { version = ">=0.1.7", optional = true }
yamlfix = { version = "^1.16.0", optional = true }
maison = { version = "<2.0", optional = true }

# pytest
pytest = { version = "^7.4.0", optional = true }
pytest-randomly = { version = "^3.10.1", optional = true }
pytest-mock = { version = "^3.6.1", optional = true }
pytest-clarity = { version = "^1.0.1", optional = true }
pytest-instafail = { version = ">=0.5.0", optional = true }
pytest-rerunfailures = { version = ">=13.0", optional = true }
pytest-split = { version = "^0.8.1", optional = true }

# mkdocs including plugins
mkdocs = { version = "^1.6.1,<2.0.0", optional = true }
mkdocs-material = { version = ">=9.6.5,<10.0.0", optional = true }
mkdocs-awesome-pages-plugin = { version = ">=2.10.1,<3.0.0", optional = true }
mkdocstrings = { extras = ["python"], version = "^0.28.1,<1.0.0", optional = true }
mkdocs-autorefs = { version = ">=1.4.0,<2.0.0", optional = true }
mike = { version = ">=1.1.2,<2.0.0", optional = true }

# mypy type stubs
types-certifi = { version = "^2021.10.8.0", optional = true }
types-croniter = { version = "^1.0.2", optional = true }
types-futures = { version = "^3.3.1", optional = true }
types-Markdown = { version = "^3.3.6", optional = true }
types-paramiko = { version = ">=3.4.0", optional = true }
types-Pillow = { version = "^9.2.1", optional = true }
types-protobuf = { version = "^3.18.0", optional = true }
types-PyMySQL = { version = "^1.0.4", optional = true }
types-python-dateutil = { version = "^2.8.2", optional = true }
types-python-slugify = { version = "^5.0.2", optional = true }
types-PyYAML = { version = "^6.0.0", optional = true }
types-redis = { version = "^4.1.19", optional = true }
types-requests = { version = "^2.27.11", optional = true }
types-setuptools = { version = "^57.4.2", optional = true }
types-six = { version = "^1.16.2", optional = true }
types-termcolor = { version = "^1.1.2", optional = true }
types-psutil = { version = "^5.8.13", optional = true }
types-passlib = { version = "^1.7.7", optional = true }

[tool.poetry.extras]
server = [
    "fastapi", "uvicorn", "python-multipart", "pyjwt", "fastapi-utils",
    "orjson", "Jinja2", "ipinfo", "secure", "tldextract", "itsdangerous",
]
templates = ["copier", "jinja2-time", "ruff", "pyyaml-include"]
terraform = ["python-terraform"]
secrets-aws = ["boto3"]
secrets-gcp = ["google-cloud-secret-manager"]
secrets-azure = ["azure-identity", "azure-keyvault-secrets"]
secrets-hashicorp = ["hvac"]
s3fs = ["s3fs"]
gcsfs = ["gcsfs"]
adlfs = ["adlfs"]
connectors-kubernetes = ["kubernetes"]
connectors-aws = ["boto3", "kubernetes", "aws-profile-manager"]
connectors-gcp = [
    "google-cloud-container", "google-cloud-storage",
    "google-cloud-artifact-registry", "kubernetes",
]
connectors-azure = [
    "azure-identity", "azure-mgmt-containerservice",
    "azure-mgmt-containerregistry", "azure-mgmt-storage",
    "azure-storage-blob", "azure-mgmt-resource", "kubernetes", "requests",
]
sagemaker = ["sagemaker"]
vertex = ["google-cloud-aiplatform", "kfp"]
azureml = ["azure-ai-ml"]
dev = [
    "bandit", "ruff", "yamlfix", "coverage", "pytest", "mypy", "pre-commit",
    "pyment", "tox", "hypothesis", "typing-extensions", "darglint",
    "pytest-randomly", "pytest-mock", "pytest-clarity", "pytest-instafail",
    "pytest-rerunfailures", "pytest-split", "mkdocs", "mkdocs-material",
    "mkdocs-awesome-pages-plugin", "mkdocstrings", "mkdocstrings-python",
    "mkdocs-autorefs", "mike", "maison", "types-certifi", "types-croniter",
    "types-futures", "types-Markdown", "types-paramiko", "types-Pillow",
    "types-protobuf", "types-PyMySQL", "types-python-dateutil",
    "types-python-slugify", "types-PyYAML", "types-redis", "types-requests",
    "types-setuptools", "types-six", "types-termcolor", "types-psutil",
    "types-passlib",
]

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

[tool.poetry-version-plugin]
source = "init"

[tool.pytest.ini_options]
filterwarnings = ["ignore::DeprecationWarning"]
log_cli = true
log_cli_level = "INFO"
testpaths = "tests"
xfail_strict = true
norecursedirs = [
    "tests/integration/examples/*", # ignore example folders
]

[tool.coverage.run]
parallel = true
source = ["src/zenml"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    'if __name__ == "__main__":',
    "if TYPE_CHECKING:",
]

[tool.ruff]
line-length = 79
# Exclude a variety of commonly ignored directories.
exclude = [
    ".bzr", ".direnv", ".eggs", ".git", ".hg", ".mypy_cache", ".nox",
    ".pants.d", ".ruff_cache", ".svn", ".tox", ".venv", "__pypackages__",
    "_build", "buck-out", ".test_durations", "build", "dist", "node_modules",
    "venv", "__init__.py", "src/zenml/cli/version.py",
    # LitGPT files from the LLM Finetuning example
    "examples/llm_finetuning/evaluate",
    "examples/llm_finetuning/finetune",
    "examples/llm_finetuning/generate",
    "examples/llm_finetuning/lit_gpt",
    "examples/llm_finetuning/scripts",
]
src = ["src", "test"]
# use Python 3.9 as the minimum version for autofixing
target-version = "py39"

[tool.ruff.format]
exclude = [
    "*.git", "*.hg", ".mypy_cache", ".tox", ".venv", "_build", "buck-out",
    "build",
]

[tool.ruff.lint]
# Disable autofix for unused imports (`F401`).
unfixable = ["F401"]
per-file-ignores = { }
ignore = [
    "E501", "F401", "F403", "D301", "D401", "D403", "D407", "D213", "D203",
    "S101", "S104", "S105", "S106", "S107",
]
select = ["D", "E", "F", "I", "I001", "Q"]

[tool.ruff.lint.flake8-import-conventions.aliases]
altair = "alt"
"matplotlib.pyplot" = "plt"
numpy = "np"
pandas = "pd"
seaborn = "sns"

[tool.ruff.lint.mccabe]
max-complexity = 18

[tool.ruff.lint.pydocstyle]
# Use Google-style docstrings.
convention = "google"

[tool.mypy]
plugins = ["pydantic.mypy"]
strict = true
namespace_packages = true
show_error_codes = true

# import all google, transformers and datasets files as `Any`
[[tool.mypy.overrides]]
module = [
    "google.*",
    "transformers.*", # https://github.com/huggingface/transformers/issues/13390
    "datasets.*",
    "langchain_community.*",
    "IPython.core.*",
]
follow_imports = "skip"

[[tool.mypy.overrides]]
module = [
    "airflow.*", "tensorflow.*", "apache_beam.*", "pandas.*", "distro.*",
    "analytics.*", "absl.*", "gcsfs.*", "s3fs.*", "adlfs.*", "fsspec.*",
    "torch.*", "pytorch_lightning.*", "sklearn.*", "numpy.*",
    "facets_overview.*", "IPython.core.*", "IPython.display.*", "plotly.*",
    "dash.*", "dash_bootstrap_components.*", "dash_cytoscape",
    "dash.dependencies", "docker.*", "flask.*", "kfp.*", "kubernetes.*",
    "urllib3.*", "kfp_server_api.*", "sagemaker.*", "azureml.*", "google.*",
    "google_cloud_pipeline_components.*", "neuralprophet.*", "lightgbm.*",
    "scipy.*", "deepchecks.*", "boto3.*", "botocore.*", "jupyter_dash.*",
    "slack_sdk.*", "azure-keyvault-keys.*", "azure-mgmt-resource.*",
    "azure.mgmt.resource.*", "model_archiver.*", "kfp_tekton.*", "mlflow.*",
    "python_terraform.*", "bentoml.*", "multipart.*", "jose.*",
    "sqlalchemy_utils.*", "sky.*", "copier.*", "datasets.*", "pyngrok.*",
    "cloudpickle.*", "matplotlib.*", "IPython.*", "huggingface_hub.*",
    "distutils.*", "accelerate.*", "label_studio_sdk.*", "argilla.*",
    "lightning_sdk.*", "peewee.*", "prodigy.*", "prodigy.components.*",
    "prodigy.components.db.*", "transformers.*", "vllm.*", "numba.*",
    "uvloop.*",
]
ignore_missing_imports = true
cloned_public_repos/zenml/release-cloudbuild-preparation.yaml
steps:
  # Login to Dockerhub
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker login --username=$$USERNAME --password=$$PASSWORD
    id: docker-login
    entrypoint: bash
    secretEnv:
      - USERNAME
      - PASSWORD

  # Build base image
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build . \
          --platform linux/amd64 \
          -f docker/zenml-dev.Dockerfile \
          -t $$USERNAME/prepare-release:base-${_ZENML_NEW_VERSION}
    id: build-base
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Push base image
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push $$USERNAME/prepare-release:base-${_ZENML_NEW_VERSION}
    id: push-base
    waitFor:
      - docker-login
      - build-base
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Build server image
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build . \
          --platform linux/amd64 \
          -f docker/zenml-server-dev.Dockerfile \
          -t $$USERNAME/prepare-release:server-${_ZENML_NEW_VERSION}
    id: build-server
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Push server images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push $$USERNAME/prepare-release:server-${_ZENML_NEW_VERSION}
    id: push-server
    waitFor:
      - docker-login
      - build-server
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Build Quickstart GCP image
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build . \
          --platform linux/amd64 \
          --build-arg BASE_IMAGE=$$USERNAME/prepare-release:base-${_ZENML_NEW_VERSION} \
          --build-arg CLOUD_PROVIDER=gcp \
          --build-arg ZENML_BRANCH=${_ZENML_BRANCH} \
          -f docker/zenml-quickstart-dev.Dockerfile \
          -t $$USERNAME/prepare-release:quickstart-gcp-${_ZENML_NEW_VERSION}
    id: build-quickstart-gcp
    waitFor:
      - push-base
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Build Quickstart AWS image
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build . \
          --platform linux/amd64 \
          --build-arg BASE_IMAGE=$$USERNAME/prepare-release:base-${_ZENML_NEW_VERSION} \
          --build-arg CLOUD_PROVIDER=aws \
          --build-arg ZENML_BRANCH=${_ZENML_BRANCH} \
          -f docker/zenml-quickstart-dev.Dockerfile \
          -t $$USERNAME/prepare-release:quickstart-aws-${_ZENML_NEW_VERSION}
    id: build-quickstart-aws
    waitFor:
      - push-base
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Build Quickstart Azure image
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build . \
          --platform linux/amd64 \
          --build-arg BASE_IMAGE=$$USERNAME/prepare-release:base-${_ZENML_NEW_VERSION} \
          --build-arg CLOUD_PROVIDER=azure \
          --build-arg ZENML_BRANCH=${_ZENML_BRANCH} \
          -f docker/zenml-quickstart-dev.Dockerfile \
          -t $$USERNAME/prepare-release:quickstart-azure-${_ZENML_NEW_VERSION}
    id: build-quickstart-azure
    waitFor:
      - push-base
    entrypoint: bash
    secretEnv:
      - USERNAME

  # Push Quickstart images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker push $$USERNAME/prepare-release:quickstart-aws-${_ZENML_NEW_VERSION}
        docker push $$USERNAME/prepare-release:quickstart-azure-${_ZENML_NEW_VERSION}
        docker push $$USERNAME/prepare-release:quickstart-gcp-${_ZENML_NEW_VERSION}
    id: push-quickstart
    waitFor:
      - docker-login
      - build-quickstart-gcp
      - build-quickstart-aws
      - build-quickstart-azure
    entrypoint: bash
    secretEnv:
      - USERNAME

timeout: 3600s
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/docker-password/versions/1
      env: PASSWORD
    - versionName: projects/$PROJECT_ID/secrets/docker-username/versions/1
      env: USERNAME
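The `waitFor` fields in the Cloud Build config above form a dependency DAG: `['-']` means "start immediately", and a list of step ids means "wait for those steps". A minimal sketch of how that graph resolves into parallel execution waves (step ids taken from the config; the grouping logic is illustrative, not Cloud Build's actual scheduler):

```python
# waitFor dependencies from the Cloud Build config above.
# An empty list corresponds to waitFor: ['-'] (start immediately).
wait_for = {
    "docker-login": [],
    "build-base": [],
    "push-base": ["docker-login", "build-base"],
    "build-server": [],
    "push-server": ["docker-login", "build-server"],
    "build-quickstart-gcp": ["push-base"],
    "build-quickstart-aws": ["push-base"],
    "build-quickstart-azure": ["push-base"],
    "push-quickstart": [
        "docker-login",
        "build-quickstart-gcp",
        "build-quickstart-aws",
        "build-quickstart-azure",
    ],
}

def execution_waves(deps: dict) -> list:
    """Group steps into waves whose members can run in parallel."""
    done, waves = set(), []
    while len(done) < len(deps):
        # A step is ready once all of its dependencies have completed.
        wave = sorted(
            step for step, needs in deps.items()
            if step not in done and all(n in done for n in needs)
        )
        if not wave:
            raise ValueError("cycle in waitFor graph")
        waves.append(wave)
        done.update(wave)
    return waves

for i, wave in enumerate(execution_waves(wait_for)):
    print(f"wave {i}: {wave}")
```

This makes the build plan visible at a glance: the base and server images build in parallel, the three quickstart images all gate on `push-base`, and the final push waits for everything.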
cloned_public_repos/zenml/ROADMAP.md
# Roadmap

The roadmap captures the features we intend to build for ZenML. Please note, however, that we have limited resources and therefore cannot guarantee that the roadmap will be followed precisely as described on this page. Rest assured we are working to follow it diligently - please keep us in check! The roadmap is public and can be found [here](https://zenml.io/roadmap).
cloned_public_repos/zenml/.typos.toml
[files]
extend-exclude = [
    "*.json",
    "*.js",
    "*.ipynb",
    "src/zenml/zen_stores/migrations/versions/",
    "tests/unit/materializers/test_built_in_materializer.py",
    "tests/integration/functional/cli/test_pipeline.py",
    "src/zenml/zen_server/dashboard/",
    "examples/llm_finetuning/lit_gpt/",
]

[default.extend-identifiers]
HashiCorp = "HashiCorp"
NDArray = "NDArray"
K_Scatch = "K_Scatch"
MCAGA1UECgwZQW1hem9uIFdlYiBTZXJ2aWNlcywgSW5jLjETMBEGA1UECwwKQW1h = "MCAGA1UECgwZQW1hem9uIFdlYiBTZXJ2aWNlcywgSW5jLjETMBEGA1UECwwKQW1h"
VQQGEwJVUzEQMA4GA1UEBwwHU2VhdHRsZTETMBEGA1UECAwKV2FzaGluZ3RvbjEi = "VQQGEwJVUzEQMA4GA1UEBwwHU2VhdHRsZTETMBEGA1UECAwKV2FzaGluZ3RvbjEi"
MDEyOk9yZ2FuaXphdGlvbjg4Njc2OTU1 = "MDEyOk9yZ2FuaXphdGlvbjg4Njc2OTU1"

[default.extend-words]
# Don't correct the surname "Teh"
aks = "aks"
hashi = "hashi"
womens = "womens"
prepend = "prepend"
prepended = "prepended"
goes = "goes"
bare = "bare"
prepending = "prepending"
prev = "prev"
creat = "creat"
ret = "ret"
daa = "daa"
arange = "arange"
cachable = "cachable"
OT = "OT"
cll = "cll"

[default]
locale = "en-us"
cloned_public_repos/zenml/README.md
<div align="center">
  <img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=0fcbab94-8fbe-4a38-93e8-c2348450a42e" />
  <h1 align="center">Beyond The Demo: Production-Grade AI Systems</h1>
  <h3 align="center">ZenML brings battle-tested MLOps practices to your AI applications, handling evaluation, monitoring, and deployment at scale</h3>
</div>

<!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->

<div align="center">
  <!-- PROJECT LOGO -->
  <br />
  <a href="https://zenml.io">
    <img alt="ZenML Logo" src="docs/book/.gitbook/assets/header.png">
  </a>
  <br />

[![PyPi][pypi-shield]][pypi-url]
[![PyPi][pypiversion-shield]][pypi-url]
[![PyPi][downloads-shield]][downloads-url]
[![Contributors][contributors-shield]][contributors-url]
[![License][license-shield]][license-url]
<!-- [![Build][build-shield]][build-url] -->
<!-- [![CodeCov][codecov-shield]][codecov-url] -->

</div>

<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[pypi-shield]: https://img.shields.io/pypi/pyversions/zenml?color=281158
[pypi-url]: https://pypi.org/project/zenml/
[pypiversion-shield]: https://img.shields.io/pypi/v/zenml?color=361776
[downloads-shield]: https://img.shields.io/pypi/dm/zenml?color=431D93
[downloads-url]: https://pypi.org/project/zenml/
[codecov-shield]: https://img.shields.io/codecov/c/gh/zenml-io/zenml?color=7A3EF4
[codecov-url]: https://codecov.io/gh/zenml-io/zenml
[contributors-shield]: https://img.shields.io/github/contributors/zenml-io/zenml?color=7A3EF4
[contributors-url]: https://github.com/zenml-io/zenml/graphs/contributors
[license-shield]: https://img.shields.io/github/license/zenml-io/zenml?color=9565F6
[license-url]: https://github.com/zenml-io/zenml/blob/main/LICENSE
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
[linkedin-url]: https://www.linkedin.com/company/zenml/
[twitter-shield]: https://img.shields.io/twitter/follow/zenml_io?style=for-the-badge
[twitter-url]: https://twitter.com/zenml_io
[slack-shield]: https://img.shields.io/badge/-Slack-black.svg?style=for-the-badge&logo=slack&colorB=555
[slack-url]: https://zenml.io/slack-invite
[build-shield]: https://img.shields.io/github/workflow/status/zenml-io/zenml/Build,%20Lint,%20Unit%20&%20Integration%20Test/develop?logo=github&style=for-the-badge
[build-url]: https://github.com/zenml-io/zenml/actions/workflows/ci.yml

---

Need help with documentation? Visit our [docs site](https://docs.zenml.io) for comprehensive guides and tutorials, or browse the [SDK reference](https://sdkdocs.zenml.io/) to find specific functions and classes.

## ⭐️ Show Your Support

If you find ZenML helpful or interesting, please consider giving us a star on GitHub. Your support helps promote the project and lets others know that it's worth checking out.

Thank you for your support! 🌟

[![Star this project](https://img.shields.io/github/stars/zenml-io/zenml?style=social)](https://github.com/zenml-io/zenml/stargazers)

## 🤸 Quickstart

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/quickstart/quickstart.ipynb)

[Install ZenML](https://docs.zenml.io/getting-started/installation) via [PyPI](https://pypi.org/project/zenml/).
Python 3.9 - 3.12 is required:

```bash
pip install "zenml[server]" notebook
```

Take a tour with the guided quickstart by running:

```bash
zenml go
```

## 🪄 From Prototype to Production: AI Made Simple

### Create AI pipelines with minimal code changes

ZenML is an open-source framework that handles MLOps and LLMOps for engineers scaling AI beyond prototypes. Automate evaluation loops, track performance, and deploy updates across 100s of pipelines—all while your RAG apps run like clockwork.

```python
from zenml import pipeline, step

# Note: extract_web_content, chunk_documents, embed_documents, and
# create_index are helper functions defined elsewhere.

@step
def load_rag_documents() -> dict:
    # Load and chunk documents for RAG pipeline
    documents = extract_web_content(url="https://www.zenml.io/")
    return {"chunks": chunk_documents(documents)}

@step
def generate_embeddings(data: dict) -> dict:
    # Generate embeddings for RAG pipeline
    embeddings = embed_documents(data['chunks'])
    return {"embeddings": embeddings}

@step
def index_generator(
    embeddings: dict,
) -> str:
    # Generate index for RAG pipeline
    index = create_index(embeddings)
    return index.id

@pipeline
def rag_pipeline() -> str:
    documents = load_rag_documents()
    embeddings = generate_embeddings(documents)
    index = index_generator(embeddings)
    return index
```

![Running a ZenML pipeline](/docs/book/.gitbook/assets/readme_simple_pipeline.gif)

### Easily provision an MLOps stack or reuse your existing infrastructure

The framework is a gentle entry point for practitioners to build complex ML pipelines with little knowledge of the underlying infrastructure required. ZenML pipelines can be run on AWS, GCP, Azure, Airflow, Kubeflow, and even on Kubernetes without having to change any code or know the underlying internals.

ZenML also provides features to help you get started quickly in a remote setting.
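The RAG pipeline above calls helper functions such as `chunk_documents` that are not part of ZenML itself. As a rough illustration of what such a helper might look like (the function name, signature, and character-window chunking strategy are assumptions for this sketch; production RAG systems usually chunk by tokens):

```python
def chunk_documents(documents: list[str], chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split each document into fixed-size character windows with overlap.

    Illustrative stand-in for the helper used in the README pipeline;
    not part of the ZenML API.
    """
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for doc in documents:
        # max(len(doc), 1) ensures the loop body runs once even for empty docs,
        # where the empty chunk is then filtered out.
        for start in range(0, max(len(doc), 1), step):
            chunk = doc[start:start + chunk_size]
            if chunk:
                chunks.append(chunk)
    return chunks
```

For a 500-character document with the defaults (window 200, overlap 50), the window advances 150 characters per step, producing four overlapping chunks.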
If you want to deploy a remote stack from scratch on your selected cloud provider, you can use the 1-click deployment feature either through the dashboard:

![Running a ZenML pipeline](/docs/book/.gitbook/assets/one-click-deployment.gif)

Or, through our CLI command:

```bash
zenml stack deploy --provider aws
```

Alternatively, if the necessary pieces of infrastructure are already deployed, you can register a cloud stack seamlessly through the stack wizard:

```bash
zenml stack register <STACK_NAME> --provider aws
```

Read more about [ZenML stacks](https://docs.zenml.io/user-guide/production-guide/understand-stacks).

### Run workloads easily on your production infrastructure

Once you have your MLOps stack configured, you can easily run workloads on it:

```bash
zenml stack set <STACK_NAME>
python run.py
```

```python
from zenml.config import ResourceSettings, DockerSettings

@step(
    settings={
        "resources": ResourceSettings(memory="16GB", gpu_count=1, cpu_count=8),
        "docker": DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime")
    }
)
def training(...):
    ...
```

![Workloads with ZenML](/docs/book/.gitbook/assets/readme_compute.gif)

### Track models, pipelines, and artifacts

Create a complete lineage of who produced which data and models, where, and when. You'll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.
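The lineage idea, knowing who produced which model, when, and from which data and code, can be pictured as a simple record kept per pipeline run. The following is a plain-Python illustration of the concept only, not ZenML's actual data model (ZenML stores much richer metadata server-side):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class RunRecord:
    """Illustrative lineage record for one pipeline run (not a ZenML class)."""
    pipeline: str
    author: str
    code_version: str   # e.g. a git commit SHA
    data_version: str   # e.g. an input artifact version ID
    started_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def who_produced(records: list[RunRecord], pipeline: str) -> str:
    """Answer "who produced the latest run of this pipeline?" from the records."""
    runs = [r for r in records if r.pipeline == pipeline]
    return max(runs, key=lambda r: r.started_at).author
```

Auditability then reduces to querying these records: given any model, follow its run record back to the author, the commit, and the data version that produced it.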
```python
from zenml import Model

@step(model=Model(name="rag_llm", tags=["staging"]))
def deploy_rag(index_id: str) -> str:
    deployment_id = deploy_to_endpoint(index_id)
    return deployment_id
```

![Exploring ZenML Models](/docs/book/.gitbook/assets/readme_mcp.gif)

## 🚀 Key LLMOps Capabilities

### Continual RAG Improvement

**Build production-ready retrieval systems**

<div align="center">
  <img src="/docs/book/.gitbook/assets/rag_zenml_home.png" width="800" alt="RAG Pipeline">
</div>

ZenML tracks document ingestion, embedding versions, and query patterns. Implement feedback loops and:

- Fix your RAG logic based on production logs
- Automatically re-ingest updated documents
- A/B test different embedding models
- Monitor retrieval quality metrics

### Reproducible Model Fine-Tuning

**Confidence in model updates**

<div align="center">
  <img src="/docs/book/.gitbook/assets/finetune_zenml_home.png" width="800" alt="Finetuning Pipeline">
</div>

Maintain full lineage of SLM/LLM training runs:

- Version training data and hyperparameters
- Track performance across iterations
- Automatically promote validated models
- Roll back to previous versions if needed

### Purpose-built for machine learning, with integrations to your favorite tools

While ZenML brings a lot of value out of the box, it also integrates into your existing tooling and infrastructure without you having to be locked in.

```python
import mlflow
import pandas as pd
from bentoml._internal.bento import bento

# alert_slack is an alerter hook defined elsewhere.
@step(on_failure=alert_slack, experiment_tracker="mlflow")
def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento:
    mlflow.autolog()
    ...
    return bento
```

![Exploring ZenML Integrations](/docs/book/.gitbook/assets/readme_integrations.gif)

## 🔄 Your LLM Framework Isn't Enough for Production

While tools like LangChain and LlamaIndex help you **build** LLM workflows, ZenML helps you **productionize** them by adding:

✅ **Artifact Tracking** - Every vector store index, fine-tuned model, and evaluation result versioned automatically

✅ **Pipeline History** - See exactly what code/data produced each version of your RAG system

✅ **Stage Promotion** - Move validated pipelines from staging → production with one click

## 🖼️ Learning

The best way to learn about ZenML is the [docs](https://docs.zenml.io/). We recommend beginning with the [Starter Guide](https://docs.zenml.io/user-guide/starter-guide) to get up and running quickly.

If you are a visual learner, this 11-minute video tutorial is also a great start:

[![Introductory Youtube Video](docs/book/.gitbook/assets/readme_youtube_thumbnail.png)](https://www.youtube.com/watch?v=wEVwIkDvUPs)

And finally, here are some other examples and use cases for inspiration:

1. [E2E Batch Inference](examples/e2e/): Feature engineering, training, and inference pipelines for tabular machine learning.
2. [Basic NLP with BERT](examples/e2e_nlp/): Feature engineering, training, and inference focused on NLP.
3. [LLM RAG Pipeline with Langchain and OpenAI](https://github.com/zenml-io/zenml-projects/tree/main/zenml-support-agent): Using Langchain to create a simple RAG pipeline.
4. [Huggingface Model to Sagemaker Endpoint](https://github.com/zenml-io/zenml-projects/tree/main/huggingface-sagemaker): Automated MLOps on Amazon Sagemaker and HuggingFace.
5.
[LLMOps](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide): A complete guide to LLMOps with ZenML.

## 📚 Learn from Books

<div align="center">
  <a href="https://www.amazon.com/LLM-Engineers-Handbook-engineering-production/dp/1836200072">
    <img src="docs/book/.gitbook/assets/llm_engineering_handbook_cover.jpg" alt="LLM Engineer's Handbook Cover" width="200"/>
  </a>&nbsp;&nbsp;&nbsp;&nbsp;
  <a href="https://www.amazon.com/-/en/Andrew-McMahon/dp/1837631964">
    <img src="docs/book/.gitbook/assets/ml_engineering_with_python.jpg" alt="Machine Learning Engineering with Python Cover" width="200"/>
  </a>
  <br/><br/>
</div>

ZenML is featured in these comprehensive guides to modern MLOps and LLM engineering. Learn how to build production-ready machine learning systems with real-world examples and best practices.

## 🔋 Deploy ZenML

For full functionality, ZenML should be deployed on the cloud to enable collaborative features as the central MLOps interface for teams. Read more about various deployment options [here](https://docs.zenml.io/getting-started/deploying-zenml).

Or, sign up for [ZenML Pro to get a fully managed server on a free trial](https://cloud.zenml.io/?utm_source=readme&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link).

## Use ZenML with VS Code

ZenML has a [VS Code extension](https://marketplace.visualstudio.com/items?itemName=ZenML.zenml-vscode) that allows you to inspect your stacks and pipeline runs directly from your editor. The extension also allows you to switch your stacks without needing to type any CLI commands.

<details>
  <summary>🖥️ VS Code Extension in Action!</summary>
  <div align="center">
    <img width="60%" src="/docs/book/.gitbook/assets/zenml-extension-shortened.gif" alt="ZenML Extension">
  </div>
</details>

## 🗺 Roadmap

ZenML is being built in public.
The [roadmap](https://zenml.io/roadmap) is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.

ZenML is managed by a [core team](https://zenml.io/company) of developers that are responsible for making key decisions and incorporating feedback from the community. The team oversees feedback via various channels, and you can directly influence the roadmap as follows:

- Vote on your most wanted feature on our [Discussion board](https://zenml.io/discussion).
- Start a thread in our [Slack channel](https://zenml.io/slack).
- [Create an issue](https://github.com/zenml-io/zenml/issues/new/choose) on our GitHub repo.

## 🙌 Contributing and Community

We would love to develop ZenML together with our community! The best way to get started is to select any issue from the [`good-first-issue` label](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22) and open up a Pull Request! If you would like to contribute, please review our [Contributing Guide](CONTRIBUTING.md) for all relevant details.

## 🆘 Getting Help

The first port of call should be [our Slack group](https://zenml.io/slack-invite/). Ask your questions about bugs or specific use cases, and someone from the [core team](https://zenml.io/company) will respond. Or, if you prefer, [open an issue](https://github.com/zenml-io/zenml/issues/new/choose) on our GitHub repo.

## 📚 LLM-focused Learning Resources

1. [LLM Complete Guide - Full RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) - Document ingestion, embedding management, and query serving
2. [LLM Fine-Tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/zencoder) - From data prep to deployed model
3.
[LLM Agents Example](https://github.com/zenml-io/zenml-projects/tree/main/zenml-support-agent) - Track conversation quality and tool usage

## 🤖 AI-Friendly Documentation with llms.txt

ZenML implements the llms.txt standard to make our documentation more accessible to AI assistants and LLMs. Our implementation includes:

- Base documentation at [zenml.io/llms.txt](https://zenml.io/llms.txt) with core user guides
- Specialized files for different documentation aspects:
  - [Component guides](https://zenml.io/component-guide.txt) for integration details
  - [How-to guides](https://zenml.io/how-to-guides.txt) for practical implementations
  - [Complete documentation corpus](https://zenml.io/llms-full.txt) for comprehensive access

This structured approach helps AI tools better understand and utilize ZenML's documentation, enabling more accurate code suggestions and improved documentation search.

## 📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the [LICENSE](LICENSE) file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.

<div>
  <p align="left">
    <div align="left">
      Join our <a href="https://zenml.io/slack" target="_blank">
      <img width="18" src="https://cdn3.iconfinder.com/data/icons/logos-and-brands-adobe/512/306_Slack-512.png" alt="Slack"/>
      <b>Slack Community</b>
      </a> and be part of the ZenML family.
</div> <br /> <a href="https://zenml.io/features">Features</a> Β· <a href="https://zenml.io/roadmap">Roadmap</a> Β· <a href="https://github.com/zenml-io/zenml/issues">Report Bug</a> Β· <a href="https://zenml.io/pro">Sign up for ZenML Pro</a> Β· <a href="https://www.zenml.io/blog">Read Blog</a> Β· <a href="https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22">Contribute to Open Source</a> Β· <a href="https://github.com/zenml-io/zenml-projects">Projects Showcase</a> <br /> <br /> πŸŽ‰ Version 0.80.1 is out. Check out the release notes <a href="https://github.com/zenml-io/zenml/releases">here</a>. <br /> πŸ–₯️ Download our VS Code Extension <a href="https://marketplace.visualstudio.com/items?itemName=ZenML.zenml-vscode">here</a>. <br /> </p> </div>
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [support@zenml.io](mailto:support@zenml.io). All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media.
Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at [https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0].

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/inclusion
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
<!-- markdown-link-check-disable -->

# 0.80.1

The `0.80.1` release focuses on bug fixes and performance improvements following the major `0.80.0` update. This release addresses several critical issues, particularly improving the CLI functionality when used with the REST API through a deployed ZenML instance. Additionally, this version introduces [a restructured documentation architecture](https://docs.zenml.io) for improved user experience.

## Improvements

- Import integrations lazily for better performance
- Added ability to store a default project for users

## Fixes

- Fixed CLI combined with RestZenStore and filters with multiple entries
- Fixed stack validation for in-cluster Kubernetes orchestrator
- Fixed stack and component URL when connected to a cloud workspace
- Fixed code repository host fallback
- Fixed version validation
- Various other minor bugfixes

## Documentation

- Restructured entire documentation for better organization
- Fixed broken links in API documentation
- Refined logging to debug level for service connectors
- Removed redundant log messages

## What's Changed

* Adding `0.80.0` to the migration tests by @bcdurak in https://github.com/zenml-io/zenml/pull/3442
* Adding the disabled flavor test back by @bcdurak in https://github.com/zenml-io/zenml/pull/3431
* Stop CLI profiler running so much by @strickvl in https://github.com/zenml-io/zenml/pull/3449
* Add missing fallback host for code repositories by @schustmi in https://github.com/zenml-io/zenml/pull/3434
* Fix stack and component URL when connected to a cloud workspace by @schustmi in https://github.com/zenml-io/zenml/pull/3451
* Fix stack validation for incluster Kubernetes orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/3450
* Bump `click` dependency by @strickvl in https://github.com/zenml-io/zenml/pull/3445
* Fix 0.80.0 database migration by @stefannica in https://github.com/zenml-io/zenml/pull/3453
* Pin the ZenML Terraform provider version by @stefannica in
https://github.com/zenml-io/zenml/pull/3443
* Import integrations lazily by @stefannica in https://github.com/zenml-io/zenml/pull/3419
* Remove Segment analytics script and scarf image load. by @htahir1 in https://github.com/zenml-io/zenml/pull/3455
* Restructure entire docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3447
* Refactor logging to debug level for service connectors by @htahir1 in https://github.com/zenml-io/zenml/pull/3456
* Removing redundant log messages by @bcdurak in https://github.com/zenml-io/zenml/pull/3459
* Fix broken link in API documentation table by @htahir1 in https://github.com/zenml-io/zenml/pull/3462
* Add the ability to store a default project for a user by @schustmi in https://github.com/zenml-io/zenml/pull/3457
* Fixing the CLI combined with RestZenStore and filters with multiple entries by @bcdurak in https://github.com/zenml-io/zenml/pull/3464
* Don't ask for active project when listing projects by @stefannica in https://github.com/zenml-io/zenml/pull/3466
* Use the build python version to collect stack requirements for run templates by @stefannica in https://github.com/zenml-io/zenml/pull/3465
* Fix version validation by @bcdurak in https://github.com/zenml-io/zenml/pull/3467

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.80.0...0.80.1

# 0.80.0

The 0.80.0 release is one of our biggest updates in a while! This version introduces a major refactoring of workspaces into projects, enhances tagging capabilities, and improves GitLab repository support. This release also features significant performance optimizations for Docker builds and CLI operations.

## Features

- For our Pro users: Refactored workspaces into projects with improved RBAC API resource format (We will release separate docs on this soon.)
- [Enhanced tagging system](https://docs.zenml.io/how-to/data-artifact-management/handle-data-artifacts/tagging) with resource type filtering and exclusive tag behavior
- [Added persistent resource support for the Vertex orchestrator](https://docs.zenml.io/stacks/orchestrators/vertex#using-persistent-resources-for-faster-development)
- Store build duration information for better tracking
- [Allow passing step artifacts to specify upstream steps](https://docs.zenml.io/how-to/pipeline-development/build-pipelines/control-execution-order-of-steps)
- Support for environment variables in KubernetesPodSettings

## Improvements

- Updated devel dockerfiles to make rebuilds faster
- Improved CLI response time through optimized imports
- Allow registering public GitLab repositories without token
- [Enable Weave integration](https://docs.zenml.io/stacks/experiment-trackers/wandb#using-weights-and-biases-weave) in Wandb settings
- Allow the service account project ID to be overridden in [the GCP service connector](https://docs.zenml.io/how-to/infrastructure-deployment/auth-management/gcp-service-connector)
- [Pass API token as Kubernetes secret](https://docs.zenml.io/stacks/orchestrators/kubernetes#additional-configuration), allowing the Kubernetes orchestrator to run workloads without exposing any sensitive API tokens in the environment

## Fixes

- Fixed GitLab URL parsing and matching
- Corrected CLI command to describe flavors
- Fixed taggable filter model and filter models with multiple inputs
- Fixed project statistics endpoint and ZenML Pro project URLs
- Fixed the ACR support in the Azure service connector
- Resolved SkyPilot Orchestrator cluster name handling
- Fixed deprecation messages for GitHub code repository
- Don't retry REST API calls if runtime errors occur

## Documentation

- Renamed API Docs to SDK Docs for clarity
- Fixed SDK docs rendering with proper directory structure and links
- Removed deprecated caveat from Kubernetes docs
- Various documentation fixes and
clarifications

## What's Changed

* Update devel dockerfiles to make rebuilds faster by @stefannica in https://github.com/zenml-io/zenml/pull/3385
* Deepchecks fix for the CI by @bcdurak in https://github.com/zenml-io/zenml/pull/3389
* Fixing the CI by @bcdurak in https://github.com/zenml-io/zenml/pull/3391
* Fixing the zenml login hint for separated names by @bcdurak in https://github.com/zenml-io/zenml/pull/3388
* bugfix: correctly parse and match Gitlab URLs by @dragosmc in https://github.com/zenml-io/zenml/pull/3392
* bugfix: pass iterator to gitlab by @dragosmc in https://github.com/zenml-io/zenml/pull/3393
* Allow registering public gitlab repositories without token by @schustmi in https://github.com/zenml-io/zenml/pull/3394
* Fix CLI command to describe flavors by @schustmi in https://github.com/zenml-io/zenml/pull/3390
* Store build duration by @schustmi in https://github.com/zenml-io/zenml/pull/3386
* Removed deprecated caveat from kubernetes docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/3395
* fix doc confusion by @VicSev in https://github.com/zenml-io/zenml/pull/3397
* Improved tagging by @bcdurak in https://github.com/zenml-io/zenml/pull/3360
* Refactor workspaces into projects by @stefannica in https://github.com/zenml-io/zenml/pull/3364
* Fix taggable filter model by @schustmi in https://github.com/zenml-io/zenml/pull/3403
* Testing the CLI with the profiler by @bcdurak in https://github.com/zenml-io/zenml/pull/3400
* Don't retry REST API calls if runtime errors occur by @stefannica in https://github.com/zenml-io/zenml/pull/3408
* Allow the service account project ID to be overridden in the GCP service connector by @stefannica in https://github.com/zenml-io/zenml/pull/3398
* Allow passing step artifacts to specify upstream steps by @schustmi in https://github.com/zenml-io/zenml/pull/3401
* Add Reo Javascript snippet to main.html by @htahir1 in https://github.com/zenml-io/zenml/pull/3409
* Rename workspace to project by @stefannica in
https://github.com/zenml-io/zenml/pull/3407
* Enable Weave integration in Wandb settings by @htahir1 in https://github.com/zenml-io/zenml/pull/3359
* Add persistent resource support for the vertex orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/3396
* Minor fix for the docs by @avishniakov in https://github.com/zenml-io/zenml/pull/3411
* Listing tags filtered by resource type by @bcdurak in https://github.com/zenml-io/zenml/pull/3406
* Adding removing tags with various update models by @bcdurak in https://github.com/zenml-io/zenml/pull/3404
* Exclusive tag behavior by @bcdurak in https://github.com/zenml-io/zenml/pull/3405
* Rename tenant to workspace and implement new RBAC API resource format by @stefannica in https://github.com/zenml-io/zenml/pull/3414
* API Docs -> SDK Docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3415
* Removed step by @AlexejPenner in https://github.com/zenml-io/zenml/pull/3416
* Fix unbound variable access by @schustmi in https://github.com/zenml-io/zenml/pull/3412
* Allow setting environment variables through `KubernetesPodSettings` by @schustmi in https://github.com/zenml-io/zenml/pull/3413
* Fix SDK docs rendering with proper directory structure and links by @strickvl in https://github.com/zenml-io/zenml/pull/3374
* Fix deprecation message for github code repository by @schustmi in https://github.com/zenml-io/zenml/pull/3418
* Fix project statistics endpoint by @schustmi in https://github.com/zenml-io/zenml/pull/3420
* Track project creation in onboarding state by @schustmi in https://github.com/zenml-io/zenml/pull/3423
* Fix the ACR support in the Azure service connector by @stefannica in https://github.com/zenml-io/zenml/pull/3424
* Limiting the `mlflow` dependency by @bcdurak in https://github.com/zenml-io/zenml/pull/3422
* Pass API token as kubernetes secret by @schustmi in https://github.com/zenml-io/zenml/pull/3421
* Improve the CLI response time through imports by @bcdurak in
https://github.com/zenml-io/zenml/pull/3399
* Fetch model hydrated during deletion process by @schustmi in https://github.com/zenml-io/zenml/pull/3427
* Fix ZenML Pro project URLs for pipeline runs and model versions by @stefannica in https://github.com/zenml-io/zenml/pull/3426
* Add missing functions and classes to root init exports by @schustmi in https://github.com/zenml-io/zenml/pull/3428
* Fix doc links and comment test out by @htahir1 in https://github.com/zenml-io/zenml/pull/3430
* fix: SkypilotBaseOrchestrator handle given cluster_name and correct reuse by @BjoernBiltzinger in https://github.com/zenml-io/zenml/pull/3417
* Upgrading the `skypilot` dependency by @bcdurak in https://github.com/zenml-io/zenml/pull/3429
* Fixing filter models with multiple inputs by @bcdurak in https://github.com/zenml-io/zenml/pull/3410
* Add project usage tracking by @schustmi in https://github.com/zenml-io/zenml/pull/3435

## New Contributors

* @dragosmc made their first contribution in https://github.com/zenml-io/zenml/pull/3392
* @VicSev made their first contribution in https://github.com/zenml-io/zenml/pull/3397
* @BjoernBiltzinger made their first contribution in https://github.com/zenml-io/zenml/pull/3417

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.75.0...0.80.0

# 0.75.0

The `0.75.0` release introduces dashboard enhancements for stack component management along with improvements to documentation and service connector capabilities. Users can now create and update stack components directly from the dashboard.
## Features

- Create and update stack components directly from the dashboard
- Custom authentication method support during auto-configuration of service connectors
- Enhanced model artifact retrieval by creation date instead of version name
- Additional SageMaker environment settings

## Improvements

- Expanded fastapi dependency range for better compatibility
- Improved pipeline source root documentation and logging
- Better sorting functionality when using custom fetching

## Fixes

- Fixed registration of components with custom flavors
- Fixed sorting logic when using custom fetching criteria
- Prevented inner fsspec logs from being flushed to the artifact store

## Documentation

- Added LLM messaging and video resources to documentation
- Improved formatting for model deployers documentation
- Fixed GCP service connector docs
- Added SDK documentation links
- Enhanced README with LLM messaging

## What's Changed

* Adding `0.74.0` to the migration tests by @bcdurak in https://github.com/zenml-io/zenml/pull/3351
* Fixing the release preparation workflow by @bcdurak in https://github.com/zenml-io/zenml/pull/3348
* Expand `fastapi` dependency range by @strickvl in https://github.com/zenml-io/zenml/pull/3340
* Document the programmatic API access options by @stefannica in https://github.com/zenml-io/zenml/pull/3352
* Fix some docs links by @schustmi in https://github.com/zenml-io/zenml/pull/3353
* [docs] Rename llms.txt file, add header and docs by @wjayesh in https://github.com/zenml-io/zenml/pull/3346
* Add `llms.txt` YouTube video to docs by @strickvl in https://github.com/zenml-io/zenml/pull/3354
* Fix model deployers docs formatting by @strickvl in https://github.com/zenml-io/zenml/pull/3356
* Get the latest artifact of a model by creation date instead of version name by @pierre-godard in https://github.com/zenml-io/zenml/pull/3343
* Improve source root docs/logs when running a pipeline by @schustmi in https://github.com/zenml-io/zenml/pull/3357
* Fix registration
of components with custom flavors by @schustmi in https://github.com/zenml-io/zenml/pull/3363
* Fix GCP service connector docs by @stefannica in https://github.com/zenml-io/zenml/pull/3365
* Allow auth method to be customized during auto-configuration of service connectors by @stefannica in https://github.com/zenml-io/zenml/pull/3367
* Add some sdkdocs links by @htahir1 in https://github.com/zenml-io/zenml/pull/3358
* doc: fix link by @tanguyantoine in https://github.com/zenml-io/zenml/pull/3369
* Fix sorting when using custom fetching by @schustmi in https://github.com/zenml-io/zenml/pull/3366
* Add sagemaker env settings by @stefannica in https://github.com/zenml-io/zenml/pull/3368
* Update README with LLM messaging and llms.txt by @wjayesh in https://github.com/zenml-io/zenml/pull/3362
* CI Linting fix by @bcdurak in https://github.com/zenml-io/zenml/pull/3377
* Don't flush inner fsspec logs to the artifact store by @stefannica in https://github.com/zenml-io/zenml/pull/3373
* Bugfix for Sagemaker env variables by @bcdurak in https://github.com/zenml-io/zenml/pull/3380
* Correct isintance check for sagemaker env variables by @bcdurak in https://github.com/zenml-io/zenml/pull/3382

## New Contributors

* @pierre-godard made their first contribution in https://github.com/zenml-io/zenml/pull/3343
* @tanguyantoine made their first contribution in https://github.com/zenml-io/zenml/pull/3369

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.74.0...0.75.0

# 0.74.0

The `0.74.0` release introduces several major features including [SageMaker pipeline scheduling capabilities](https://docs.zenml.io/stack-components/orchestrators/sagemaker#scheduling-pipelines), [Azure Container Registry (ACR) implicit authentication support](https://docs.zenml.io/stack-components/container-registries/azure#authentication-methods), and [Vertex AI persistent resource handling for step
operators](https://docs.zenml.io/stack-components/step-operators/vertex#using-persistent-resources-for-faster-development). Additionally, this release includes comprehensive improvements to timezone handling and significant enhancements to database performance. ## Features - API Tokens support in the dashboard for time-boxed API authentication - [SageMaker pipeline scheduling capabilities](https://docs.zenml.io/stack-components/orchestrators/sagemaker#scheduling-pipelines) - [Azure Container Registry (ACR) and Storage Account implicit authentication](https://docs.zenml.io/stack-components/container-registries/azure#authentication-methods) - [Vertex AI persistent resource support](https://docs.zenml.io/stack-components/step-operators/vertex#using-persistent-resources-for-faster-development) for step operators - Support for [custom log formats](https://docs.zenml.io/how-to/control-logging/set-logging-format) - Run metadata and tag indices for improved performance - [Core concepts video added to documentation](https://docs.zenml.io/getting-started/core-concepts) ## Improvements - Comprehensive timezone consistency improvements across the platform - Enhanced database query performance for pipelines, run templates, models, and artifacts - Better handling of configured parameters during pipeline preparation - Support for passing run configurations as dictionaries when triggering pipelines - Enhanced sorting capabilities for columns with empty values in the dashboard - Improved queries for pipelines, run templates, models, and artifacts - Better filtering functionality for run metadata - More efficient artifact filtering - Various Helm chart improvements and reorganization - Updated materializer support for newer PyTorch versions - Improved code repository management and downloading - Better handling of `SecretStr` values in store configurations ## Fixes - Kubernetes service connector issues resolved - Fixed sorting for columns with potentially empty values - Corrected 
timestamp utilization for better timezone consistency - Resolved issues with vLLM pipeline config file usage - Fixed code download functionality for custom flavor components - Addressed various documentation and broken links - Corrected MySQL database connection warnings - Fixed issues with Vertex AI experiment tracker documentation ## What's Changed * Fix some docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3302 * Replace deprecated `datetime.utcnow()` with `datetime.now(timezone.utc)` by @aiakide in https://github.com/zenml-io/zenml/pull/3265 * Adding the missing VertexAI experiment tracker docs by @bcdurak in https://github.com/zenml-io/zenml/pull/3308 * Create Sagemaker pipeline schedules if specified by @htahir1 in https://github.com/zenml-io/zenml/pull/3271 * Formatting by @schustmi in https://github.com/zenml-io/zenml/pull/3307 * Remove trailing slashes from zenml login URLs by @stefannica in https://github.com/zenml-io/zenml/pull/3312 * Fix Kubernetes service connector by @stefannica in https://github.com/zenml-io/zenml/pull/3313 * Add notes on missing features for on-prem ZenML Pro deployments by @stefannica in https://github.com/zenml-io/zenml/pull/3301 * Fix wrong warning log when directly connecting to MySQL DB by @schustmi in https://github.com/zenml-io/zenml/pull/3311 * Fix typo by @schustmi in https://github.com/zenml-io/zenml/pull/3316 * Minor fix for Sagemaker by @bcdurak in https://github.com/zenml-io/zenml/pull/3318 * Rework timestamp utilization for timezone consistency by @stefannica in https://github.com/zenml-io/zenml/pull/3314 * Add broken links checker by @htahir1 in https://github.com/zenml-io/zenml/pull/3305 * Schedule timezone fixes by @schustmi in https://github.com/zenml-io/zenml/pull/3315 * Misc code repository improvements by @schustmi in https://github.com/zenml-io/zenml/pull/3306 * Add core concepts video by @htahir1 in https://github.com/zenml-io/zenml/pull/3324 * Fix code download for custom flavor components by 
@schustmi in https://github.com/zenml-io/zenml/pull/3323 * Allow passing run configuration as dict when triggering pipelines by @schustmi in https://github.com/zenml-io/zenml/pull/3326 * Fix sorting by columns with potentially empty values by @schustmi in https://github.com/zenml-io/zenml/pull/3325 * Allow custom log formats by @schustmi in https://github.com/zenml-io/zenml/pull/3288 * Add vertex persistent resource to settings for step operator by @htahir1 in https://github.com/zenml-io/zenml/pull/3304 * Fix use of config file in vLLM pipelines by @wjayesh in https://github.com/zenml-io/zenml/pull/3322 * Fixing the CI with the new `huggingface-hub` version by @bcdurak in https://github.com/zenml-io/zenml/pull/3329 * Handling string values as SecretStrs in store configurations by @bcdurak in https://github.com/zenml-io/zenml/pull/3319 * More code repository improvements by @schustmi in https://github.com/zenml-io/zenml/pull/3327 * Fix materializer for new pytorch version by @schustmi in https://github.com/zenml-io/zenml/pull/3331 * Add some nicer docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3328 * Add run metadata and tag indices by @schustmi in https://github.com/zenml-io/zenml/pull/3310 * Fix markdown link checker for external PRs by @schustmi in https://github.com/zenml-io/zenml/pull/3333 * feat: implement implicit authentication for ACR and Storage Account by @lukas-reining in https://github.com/zenml-io/zenml/pull/3274 * Add support for symlinks in GH download by @schustmi in https://github.com/zenml-io/zenml/pull/3332 * ZenML Helm chart improvements by @stefannica in https://github.com/zenml-io/zenml/pull/3320 * Move helm chart out of the source tree by @stefannica in https://github.com/zenml-io/zenml/pull/3338 * Add option to skip stack validation by @schustmi in https://github.com/zenml-io/zenml/pull/3337 * Improve queries for pipelines, run templates, models and artifacts by @schustmi in https://github.com/zenml-io/zenml/pull/3335 * Improve 
configured parameter detection when preparing pipeline by @schustmi in https://github.com/zenml-io/zenml/pull/3339 * Minor fix for the Artifact filter model by @bcdurak in https://github.com/zenml-io/zenml/pull/3334 * Allow (un)installing integrations with system-wide uv installations by @schustmi in https://github.com/zenml-io/zenml/pull/3342 * Fix filtering by run metadata by @schustmi in https://github.com/zenml-io/zenml/pull/3344 ## New Contributors * @lukas-reining made their first contribution in https://github.com/zenml-io/zenml/pull/3274 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.73.0...0.74.0 # 0.73.0 The `0.73.0` release contains various changes and improvements, but most importantly it introduces the support to deploy and enroll un-managed ZenML Pro tenants in the ZenML Pro control plane (Helm deployment options, secure enrollment, CSRF tokens) and other features necessary for self-hosted, multi-domain ZenML Pro installations. ## Other Features - Vertex AI experiment tracker integration - Experiment comparison tooling - Support for new Airflow KubernetesPodOperator import paths - Updated Slack alerter implementation - Independent memory resource configuration for migration pods in Helm charts ## Improvements - Added environment variable to allow non-ASCII characters in JSON dumps - Removed gluon from MLflow log suppression list - Enhanced resource reporting with automatic conversion - Documentation updates for Kubeflow Pipelines and LLMs - Various bugfixes for the ZenML dashboard ## What's Changed * Fix some docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3302 * Replace deprecated `datetime.utcnow()` with `datetime.now(timezone.utc)` by @aiakide in https://github.com/zenml-io/zenml/pull/3265 * Adding the missing VertexAI experiment tracker docs by @bcdurak in https://github.com/zenml-io/zenml/pull/3308 * Create Sagemaker pipeline schedules if specified by @htahir1 in https://github.com/zenml-io/zenml/pull/3271 * Formatting 
by @schustmi in https://github.com/zenml-io/zenml/pull/3307 * Remove trailing slashes from zenml login URLs by @stefannica in https://github.com/zenml-io/zenml/pull/3312 * Fix Kubernetes service connector by @stefannica in https://github.com/zenml-io/zenml/pull/3313 * Add notes on missing features for on-prem ZenML Pro deployments by @stefannica in https://github.com/zenml-io/zenml/pull/3301 * Fix wrong warning log when directly connecting to MySQL DB by @schustmi in https://github.com/zenml-io/zenml/pull/3311 * Fix typo by @schustmi in https://github.com/zenml-io/zenml/pull/3316 * Minor fix for Sagemaker by @bcdurak in https://github.com/zenml-io/zenml/pull/3318 * Rework timestamp utilization for timezone consistency by @stefannica in https://github.com/zenml-io/zenml/pull/3314 * Add broken links checker by @htahir1 in https://github.com/zenml-io/zenml/pull/3305 * Schedule timezone fixes by @schustmi in https://github.com/zenml-io/zenml/pull/3315 * Misc code repository improvements by @schustmi in https://github.com/zenml-io/zenml/pull/3306 * Add core concepts video by @htahir1 in https://github.com/zenml-io/zenml/pull/3324 * Fix code download for custom flavor components by @schustmi in https://github.com/zenml-io/zenml/pull/3323 * Allow passing run configuration as dict when triggering pipelines by @schustmi in https://github.com/zenml-io/zenml/pull/3326 * Fix sorting by columns with potentially empty values by @schustmi in https://github.com/zenml-io/zenml/pull/3325 * Allow custom log formats by @schustmi in https://github.com/zenml-io/zenml/pull/3288 * Add vertex persistent resource to settings for step operator by @htahir1 in https://github.com/zenml-io/zenml/pull/3304 * Fix use of config file in vLLM pipelines by @wjayesh in https://github.com/zenml-io/zenml/pull/3322 * Fixing the CI with the new `huggingface-hub` version by @bcdurak in https://github.com/zenml-io/zenml/pull/3329 * Handling string values as SecretStrs in store configurations by @bcdurak in 
https://github.com/zenml-io/zenml/pull/3319 * More code repository improvements by @schustmi in https://github.com/zenml-io/zenml/pull/3327 * Fix materializer for new pytorch version by @schustmi in https://github.com/zenml-io/zenml/pull/3331 * On-prem Pro tenants: secure enrollment, CSRF tokens and cross-domain authorization flow by @stefannica in https://github.com/zenml-io/zenml/pull/3264 * Fix the misc release actions by @schustmi in https://github.com/zenml-io/zenml/pull/3286 * Add 0.72.0 to the migration tests by @schustmi in https://github.com/zenml-io/zenml/pull/3285 * Fix links to Kubeflow Pipelines docs in `kubeflow.md` by @matemijolovic in https://github.com/zenml-io/zenml/pull/3289 * Add experiment comparison tool docs by @strickvl in https://github.com/zenml-io/zenml/pull/3287 * Fix broken links by @strickvl in https://github.com/zenml-io/zenml/pull/3291 * Add support for new Airflow KubernetesPodOperator import by @schustmi in https://github.com/zenml-io/zenml/pull/3295 * Updated Slack Alerter by @bcdurak in https://github.com/zenml-io/zenml/pull/3282 * Allow non ASCII in JSON dump with env var by @Frank995 in https://github.com/zenml-io/zenml/pull/3257 * Remove gluon from mlflow log suppression list by @htahir1 in https://github.com/zenml-io/zenml/pull/3298 * Convert reportable resources if necessary by @schustmi in https://github.com/zenml-io/zenml/pull/3296 * Vertex AI Experiment Tracker Integration by @nkhusainov in https://github.com/zenml-io/zenml/pull/3260 * Document on-prem ZenML Pro deployments by @stefannica in https://github.com/zenml-io/zenml/pull/3294 * generate llms.txt for our docs by @wjayesh in https://github.com/zenml-io/zenml/pull/3273 * [helm] Independent setting of memory resources for migration pods by @wjayesh in https://github.com/zenml-io/zenml/pull/3281 ## New Contributors * @matemijolovic made their first contribution in https://github.com/zenml-io/zenml/pull/3289 * @Frank995 made their first contribution in 
https://github.com/zenml-io/zenml/pull/3257
* @nkhusainov made their first contribution in https://github.com/zenml-io/zenml/pull/3260

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.72.0...0.73.0

# 0.72.0

The `0.72.0` release contains various bug fixes, performance improvements and improvements to our documentation.

## What's Changed

* Fix typo in readme by @schustmi in https://github.com/zenml-io/zenml/pull/3247
* adding 0.71.0 to migration tests by @bcdurak in https://github.com/zenml-io/zenml/pull/3250
* Fix workload token expiration for cached steps/runs by @schustmi in https://github.com/zenml-io/zenml/pull/3243
* Implement wandb settings conversion for latest release by @schustmi in https://github.com/zenml-io/zenml/pull/3246
* Add CPU usage note to Modal docs by @strickvl in https://github.com/zenml-io/zenml/pull/3253
* Re-authenticate requests that failed authentication by @stefannica in https://github.com/zenml-io/zenml/pull/3256
* Add new toc by @htahir1 in https://github.com/zenml-io/zenml/pull/3255
* Add step run unique constraint by @schustmi in https://github.com/zenml-io/zenml/pull/3236
* Fix build reuse after stack updates by @schustmi in https://github.com/zenml-io/zenml/pull/3251
* Fix fetching run template using the client by @schustmi in https://github.com/zenml-io/zenml/pull/3258
* Improved deprecation messages for artifact configs and run metadata by @bcdurak in https://github.com/zenml-io/zenml/pull/3261
* Filtering and sorting by @bcdurak in https://github.com/zenml-io/zenml/pull/3230
* Fix hyperparam tuning docs by @stefannica in https://github.com/zenml-io/zenml/pull/3259
* Include user of latest run in pipeline response by @schustmi in https://github.com/zenml-io/zenml/pull/3262
* Create model versions server-side to avoid race conditions by @schustmi in https://github.com/zenml-io/zenml/pull/3254
* Fix request model validation by @schustmi in https://github.com/zenml-io/zenml/pull/3245
* Improve docs to encourage using secrets by @AlexejPenner in https://github.com/zenml-io/zenml/pull/3272
* Include service connector requirements in custom flavor registration by @schustmi in https://github.com/zenml-io/zenml/pull/3267
* Fix the onboarding state to account for zenml login by @stefannica in https://github.com/zenml-io/zenml/pull/3270
* Improve the efficiency of some SQL queries by @schustmi in https://github.com/zenml-io/zenml/pull/3263
* Fix broken link by @strickvl in https://github.com/zenml-io/zenml/pull/3276
* Bump NLP template by @schustmi in https://github.com/zenml-io/zenml/pull/3275
* Fixed and improved sorting by @bcdurak in https://github.com/zenml-io/zenml/pull/3266
* Add matplotlib visualization to ZenML dashboard by @htahir1 in https://github.com/zenml-io/zenml/pull/3278
* Fix azure integration by @schustmi in https://github.com/zenml-io/zenml/pull/3279

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.71.0...0.72.0

# 0.71.0

ZenML version 0.71.0 delivers a new Modal step operator integration as its core feature, enabling efficient cloud execution for ML pipelines with granular hardware configuration options. The release strengthens enterprise capabilities through improved token management and dashboard features, while expanding artifact handling with dynamic naming and enhanced visualization support. Additionally, it includes various infrastructure improvements and bug fixes that enhance the platform's stability and usability, particularly around Docker connectivity, Kubernetes management, and service connector operations.

## New Feature: Modal Step Operator Integration

ZenML now [integrates with Modal](https://modal.com/), bringing lightning-fast cloud execution capabilities to your ML pipelines. This new [step operator](https://docs.zenml.io/stack-components/step-operators/modal) allows you to execute individual pipeline steps on Modal's specialized compute instances, offering notable speed advantages, particularly for Docker image building and hardware provisioning. With simple configuration options, you can precisely specify hardware requirements like GPU type, CPU count, and memory for each step, making it ideal for resource-intensive ML workloads.

## Other Highlights

- **Workload API Token Management:** Refactored token management for improved security with a generic API token dispenser.
- **Dashboard Enhancements:**
  - Introduced service account management capabilities.
  - Added API key creation and integration features.
- **Dynamic Artifact Naming:** Introduced capability to dynamically name artifacts.
- **Visualization Enhancements:** Made dictionaries and lists visualizable, added JSON visualization type.

## Additional Features and Improvements

- Improved error messages for Docker daemon connectivity
- Enhanced SageMaker URL handling
- Simplified model version artifact linkage
- Added testing for pipeline templates
- Improved Kubernetes pod and label length management
- Allowed skipping type annotations for step inputs
- Enabled using feature service instances instead of just names

## Bug Fixes

- Fixed issues with getting out of an inaccessible active stack
- Fixed race conditions in the service connector type registry
- Resolved migration test complications
- Corrected documentation links
- Fixed artifact store and artifact URI handling
- Addressed various scalability and compatibility issues

## Documentation Updates

- Added documentation redirects
- Updated PyTorch documentation links
- Improved service connector documentation

## What's Changed

* Refactored workload API token management for better security and implemented generic API token dispenser by @stefannica in https://github.com/zenml-io/zenml/pull/3154
* Add 0.70.0 to the migration tests by @avishniakov in https://github.com/zenml-io/zenml/pull/3190
* Adjustments to the PR template by @bcdurak in https://github.com/zenml-io/zenml/pull/3194
* [docs] Fix links in the how-to section of docs by @wjayesh in https://github.com/zenml-io/zenml/pull/3196
* Fixing sagemaker urls to take the settings into consideration by @bcdurak in https://github.com/zenml-io/zenml/pull/3195
* Add cached run into testing of migrations by @avishniakov in https://github.com/zenml-io/zenml/pull/3199
* Fix service connector type registry race conditions by @stefannica in https://github.com/zenml-io/zenml/pull/3202
* Refactor container resource configuration in Vertex Orchestrator test by @avishniakov in https://github.com/zenml-io/zenml/pull/3203
* [docs] Add missing redirects by @wjayesh in https://github.com/zenml-io/zenml/pull/3200
* Add links to `uv` new PyTorch documentation by @strickvl in https://github.com/zenml-io/zenml/pull/3204
* Fix broken docs link by @strickvl in https://github.com/zenml-io/zenml/pull/3208
* Bugfix for getting out of an inaccessible active stack with no permissions by @bcdurak in https://github.com/zenml-io/zenml/pull/3198
* Simplify model version artifact linkage by @schustmi in https://github.com/zenml-io/zenml/pull/3175
* Reenable macos testing by @avishniakov in https://github.com/zenml-io/zenml/pull/3205
* Various fixes and improvements by @stefannica in https://github.com/zenml-io/zenml/pull/3211
* Pass config path during zenml pipeline build by @schustmi in https://github.com/zenml-io/zenml/pull/3212
* Add test for running templates by @schustmi in https://github.com/zenml-io/zenml/pull/3192
* Fix service connector docstring by @schustmi in https://github.com/zenml-io/zenml/pull/3216
* Improve error message when docker daemon is not reachable by @schustmi in https://github.com/zenml-io/zenml/pull/3214
* Don't run migration for empty updates by @schustmi in https://github.com/zenml-io/zenml/pull/3210
* Remove `--check` from format script and fix naming by @safoinme in https://github.com/zenml-io/zenml/pull/3218
* More scalability improvements by @schustmi in https://github.com/zenml-io/zenml/pull/3206
* Use correct keyword for artifact store open by @schustmi in https://github.com/zenml-io/zenml/pull/3220
* Fix passing of some sagemaker settings by @schustmi in https://github.com/zenml-io/zenml/pull/3221
* Add hint when trying to connect with api key by @schustmi in https://github.com/zenml-io/zenml/pull/3222
* Allow passing None values as parameter for optional complex types by @schustmi in https://github.com/zenml-io/zenml/pull/3215
* Limit kubernetes pod and label length by @schustmi in https://github.com/zenml-io/zenml/pull/3217
* Updating the quickstart example to use the new `log_metadata` by @bcdurak in https://github.com/zenml-io/zenml/pull/3188
* Allow skipping type annotations for step inputs by @schustmi in https://github.com/zenml-io/zenml/pull/3223
* Modal Step Operator by @strickvl in https://github.com/zenml-io/zenml/pull/2948
* Add dynamic artifacts naming, documentation and tests by @avishniakov in https://github.com/zenml-io/zenml/pull/3201
* Run template CLI command and bugfix by @schustmi in https://github.com/zenml-io/zenml/pull/3225
* Make dicts/lists visualizable and add JSON as viz type by @wjayesh in https://github.com/zenml-io/zenml/pull/2882
* Instances of the `FeatureService`s are now used instead of only the names of the FeatureServices. by @aiakide in https://github.com/zenml-io/zenml/pull/3209
* Quickstart fixes by @schustmi in https://github.com/zenml-io/zenml/pull/3227
* Add missing docs by @schustmi in https://github.com/zenml-io/zenml/pull/3226
* Misc cleanup by @schustmi in https://github.com/zenml-io/zenml/pull/3229
* Fix input resolution for steps with dynamic artifact names by @schustmi in https://github.com/zenml-io/zenml/pull/3228
* Follow-up on the `run_metadata` changes by @bcdurak in https://github.com/zenml-io/zenml/pull/3193
* Fixed broken links by @htahir1 in https://github.com/zenml-io/zenml/pull/3232
* Fixed wandb login problem in Quickstart by @htahir1 in https://github.com/zenml-io/zenml/pull/3233
* Misc bugfixes by @schustmi in https://github.com/zenml-io/zenml/pull/3234
* Add additional way to fetch docker repo digest by @schustmi in https://github.com/zenml-io/zenml/pull/3231
* AWS Image Builder implementation by @stefannica in https://github.com/zenml-io/zenml/pull/2904
* Disable client-side caching for some orchestrators by @schustmi in https://github.com/zenml-io/zenml/pull/3235
* Fix artifact uris for artifacts with name placeholders by @schustmi in https://github.com/zenml-io/zenml/pull/3237
* Materializer test fix on Windows by @bcdurak in https://github.com/zenml-io/zenml/pull/3238
* Fix GET step run endpoint to return unhydrated response if requested by @schustmi in https://github.com/zenml-io/zenml/pull/3240
* Pipeline run API token fixes and improvements by @stefannica in https://github.com/zenml-io/zenml/pull/3242

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.70.0...0.71.0

# 0.70.0

The **ZenML 0.70.0** release includes a significant number of database schema changes and migrations, which means upgrading to this version will require extra caution. As always, please make sure to make a copy of your production database before upgrading.
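For a self-hosted deployment backed by MySQL, the backup recommended above might look like the following sketch. The host, user, and database name are placeholders, not values from this changelog; adapt them to your own deployment before running anything.

```shell
# Dump the ZenML metadata database before upgrading (placeholder credentials).
mysqldump -h mysql.example.com -u zenml -p zenml > "zenml-backup-$(date +%F).sql"

# Only after verifying the dump, upgrade and confirm the new version.
pip install --upgrade "zenml==0.70.0"
zenml version
```

Keep the dump somewhere outside the database host itself, so a failed migration can be rolled back by restoring the file.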
## Key Changes

* **Artifact Versioning Improvements**: The handling of artifact versions has been improved, including API improvements such as the ability to batch artifact version requests for faster execution, and richer types for step input/output artifacts, including multiple versions of the same artifact (e.g. model checkpoints), to improve the UX in the ZenML UI and when working directly with the API.
* **Scalability Enhancements**: Various scalability improvements have been made, such as reducing unnecessary server requests and incrementing artifact versions server-side. These enhancements are expected to provide significant speed and scale improvements for ZenML users.
* **Metadata management**: All the metadata-creating functions are now gathered under one method called `log_metadata`. It is possible to call this method with different inputs to log run metadata for artifact versions, model versions, steps, and runs.
* **The `oneof` filtering**: This allows you to filter entities using a new operator called `oneof`. You can use this with IDs (UUID type) or tags (or other string-typed attributes) like this: `PipelineRunFilter(tag='oneof:["cats", "dogs"]')`.
* **Documentation Improvements**: The ZenML documentation has been restructured and expanded, including the addition of new sections on [finetuning](https://docs.zenml.io/user-guide/llmops-guide/finetuning-llms) and [LLM/ML engineering](https://docs.zenml.io/user-guide/llmops-guide/evaluation) resources.
* **Bug Fixes**: This release includes several bug fixes, including for issues with in-process main module source loading, and more.

## Caution: Make sure to back up your data before upgrading!

While this release brings many valuable improvements, the database schema changes and migrations pose a potential risk to users. It is strongly recommended that users:

* **Test the upgrade on a non-production environment**: Before upgrading a production system, test the upgrade process in a non-production environment to identify and address any issues.
* **Back up your data**: Ensure that you have a reliable backup of your ZenML data before attempting the upgrade.

## What's Changed

* Optimizing the CI workflows by @bcdurak in https://github.com/zenml-io/zenml/pull/3145
* Adding 0.68.0 to the migration tests by @bcdurak in https://github.com/zenml-io/zenml/pull/3144
* Move step durations to body by @schustmi in https://github.com/zenml-io/zenml/pull/3046
* Docs on ZenML setup by @strickvl in https://github.com/zenml-io/zenml/pull/3100
* Remove wrongly set Model.was_created_in_this_run attribute by @schustmi in https://github.com/zenml-io/zenml/pull/3129
* Allow specifying run tags in pipeline configuration by @schustmi in https://github.com/zenml-io/zenml/pull/3130
* Fix materializer type compatibility check during loading by @schustmi in https://github.com/zenml-io/zenml/pull/3105
* [docs] Add icons to headers in docs by @wjayesh in https://github.com/zenml-io/zenml/pull/3149
* fix icons and remove redundant file by @wjayesh in https://github.com/zenml-io/zenml/pull/3150
* Merge 0.68.1 release into develop by @schustmi in https://github.com/zenml-io/zenml/pull/3153
* Allow filtering pipeline runs by stack component by @schustmi in https://github.com/zenml-io/zenml/pull/3142
* Allow artifact response as step input by @schustmi in https://github.com/zenml-io/zenml/pull/3134
* Filter component by user name by @schustmi in https://github.com/zenml-io/zenml/pull/3126
* [docs] Restructure how-to section to make it more readable by @wjayesh in https://github.com/zenml-io/zenml/pull/3147
* ZenML Pro web login implementation by @stefannica in https://github.com/zenml-io/zenml/pull/3141
* Scalability improvements: Reduce misc/hydration server requests by @schustmi in https://github.com/zenml-io/zenml/pull/3093
* Fix in-process main module source loading by @schustmi in https://github.com/zenml-io/zenml/pull/3119
* Catch assertion in GH library by @schustmi in https://github.com/zenml-io/zenml/pull/3160
* Enable cache precomputation for run templates by @schustmi in https://github.com/zenml-io/zenml/pull/3156
* Add LLM and ML engineering books to README by @htahir1 in https://github.com/zenml-io/zenml/pull/3159
* Add helper method to quickly create run template from pipeline by @schustmi in https://github.com/zenml-io/zenml/pull/3155
* Add CLI command to export stack requirements by @schustmi in https://github.com/zenml-io/zenml/pull/3158
* Scalability improvements: Increment artifact version server side by @schustmi in https://github.com/zenml-io/zenml/pull/3095
* Update OpenAI integration by @safoinme in https://github.com/zenml-io/zenml/pull/3163
* Remove deprecated torch version constraint by @safoinme in https://github.com/zenml-io/zenml/pull/3166
* vLLM model deployer by @dudeperf3ct in https://github.com/zenml-io/zenml/pull/3032
* Don't initialize client during flavor sync by @schustmi in https://github.com/zenml-io/zenml/pull/3168
* Cleanup materializer temporary directories after step execution by @schustmi in https://github.com/zenml-io/zenml/pull/3162
* Fix langchain in API docs by @avishniakov in https://github.com/zenml-io/zenml/pull/3171
* Finetuning guide by @strickvl in https://github.com/zenml-io/zenml/pull/3157
* Fix mypy issue vllm evidently by @safoinme in https://github.com/zenml-io/zenml/pull/3169
* Add artifact version batch request by @schustmi in https://github.com/zenml-io/zenml/pull/3164
* Add missing section links by @strickvl in https://github.com/zenml-io/zenml/pull/3172
* Fix uvloop mypy by @avishniakov in https://github.com/zenml-io/zenml/pull/3174
* Multiple output versions for a step outputs by @avishniakov in https://github.com/zenml-io/zenml/pull/3072
* Simplify Metadata handling by @AlexejPenner in https://github.com/zenml-io/zenml/pull/3096
* assign value to component_name in preset stack registration by @hirekk in https://github.com/zenml-io/zenml/pull/3178
* Updating the template versions with `zenml login` by @bcdurak in https://github.com/zenml-io/zenml/pull/3177
* Better input artifacts typing by @avishniakov in https://github.com/zenml-io/zenml/pull/3099
* Refactor environment setup and caching by @safoinme in https://github.com/zenml-io/zenml/pull/3077
* Fix spelling errors by @safoinme in https://github.com/zenml-io/zenml/pull/3181
* Prevent some race conditions by @schustmi in https://github.com/zenml-io/zenml/pull/3167
* Update stack deployments with latest features by @stefannica in https://github.com/zenml-io/zenml/pull/3183
* Terraform best practices by @htahir1 in https://github.com/zenml-io/zenml/pull/3131
* Fix sagemaker pipeline URLs by @stefannica in https://github.com/zenml-io/zenml/pull/3176
* Fix lightning orchestrator for multi-step pipelines by @wjayesh in https://github.com/zenml-io/zenml/pull/3170
* Port bugfixes from #2497 by @avishniakov in https://github.com/zenml-io/zenml/pull/3179
* Removing the `enable_cache` from the config files by @bcdurak in https://github.com/zenml-io/zenml/pull/3184
* Don't pass tags to step config by @schustmi in https://github.com/zenml-io/zenml/pull/3186
* New `log_metadata` function, new `oneof` filtering, additional `run_metadata` filtering by @bcdurak in https://github.com/zenml-io/zenml/pull/3182

## New Contributors

* @hirekk made their first contribution in https://github.com/zenml-io/zenml/pull/3178

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.68.1...0.70.0

# 0.68.1

Fixes an issue with some partially cached pipelines running on remote orchestrators.
## What's Changed

* Remove unavailable upstream steps during cache precomputation by @schustmi in https://github.com/zenml-io/zenml/pull/3146

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.68.0...0.68.1

# 0.68.0

## Highlights

- **Stack Components on the Dashboard:** We're bringing back stack components. With this release you will get access to the list of your stack components on the ZenML dashboard. More functionality is going to follow in the next releases.
- **Client-Side Caching:** Implemented client-side computation for cached steps, significantly reducing time and costs associated with remote orchestrator spin-up.
- **Streamlined Onboarding Process:** Unified the starter and production setup into a single sequential flow, providing a more intuitive user experience.
- **BentoML Integration:** Updated to version 1.3.5 with enhanced containerization support.
- **Artifact Management:** Introduced the `register_artifact` function, enabling direct linking of existing data in the artifact store, particularly useful for tools like PyTorch-Lightning that manage their own checkpoints.
- **Enhanced Error Handling:** Added Error Boundary to visualization components for improved reliability and user experience.
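The `register_artifact` highlight above can be sketched as follows. This is a minimal illustration of the workflow, not the official example: the part that simulates a tool writing its own checkpoint files runs anywhere, while the ZenML call itself is shown commented because it requires a configured client, and its parameter names are an assumption to be checked against the API reference.

```python
import os
import tempfile

# Simulate an external tool (e.g. PyTorch-Lightning) writing its own
# checkpoint files directly to a folder, outside of ZenML's materializers.
checkpoint_dir = os.path.join(tempfile.mkdtemp(), "checkpoints")
os.makedirs(checkpoint_dir)
with open(os.path.join(checkpoint_dir, "epoch_0.ckpt"), "w") as f:
    f.write("fake checkpoint weights")

# With a ZenML server configured, the pre-existing folder could then be
# linked as an artifact version without re-serializing it, roughly:
#
#   from zenml import register_artifact
#   register_artifact(folder_or_file_uri=checkpoint_dir, name="model-checkpoints")
#
# (the parameter names above are an assumption; see the ZenML API reference)
print(sorted(os.listdir(checkpoint_dir)))
```

The point of the feature is the inversion of responsibility: instead of ZenML serializing a Python object for you, you point it at data that already lives in the artifact store and it only records the link.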
## Additional Features and Improvements

- Added multiple access points for deleting pipeline runs
- Improved pipeline detail view functionality
- Improved service account handling for Kaniko image builder

## Breaking Changes and Deprecations

- Discontinued Python 3.8 support
- Removed legacy pipeline and step interface
- Removed legacy post execution workflow
- Removed legacy dashboard option
- Removed `zenml stack up/down` CLI commands
- Removed `zenml deploy` and `zenml <stack-component> deploy`
- Removed `StepEnvironment` class
- Removed `ArtifactConfig` class for model version specification
- Removed `ExternalArtifact` class
- Deprecated `Client.list_runs` in favor of `Client.list_pipeline_runs`
- Deprecated `ArtifactVersionResponse.read` in favor of `ArtifactVersionResponse.load`

## Documentation Updates

Added new guides for the following topics:

- Kubernetes per-pod configuration
- Factory generation of artifact names
- Common stacks best practices
- Azure 1-click dashboard deployment
- ZenML server upgrade best practices
- Custom Dataset classes and Materializers
- Comprehensive ZenML Pro documentation
- Image building optimization during pipeline runs
- Enhanced BentoML integration documentation

## What's Changed

* Release 0.67.0 migration testing by @bcdurak in https://github.com/zenml-io/zenml/pull/3050
* Prevent too large requests by @avishniakov in https://github.com/zenml-io/zenml/pull/3048
* Fix Neptune linting after 1.12.0 release by @avishniakov in https://github.com/zenml-io/zenml/pull/3055
* Fix Lightning Orchestrator (remove -y from pip install) by @wjayesh in https://github.com/zenml-io/zenml/pull/3058
* Fix artifact pruning endpoint path by @schustmi in https://github.com/zenml-io/zenml/pull/3052
* Update python versioning in docs by @avishniakov in https://github.com/zenml-io/zenml/pull/3059
* Fix infinite loop while fetching artifact store in logs storage class by @avishniakov in https://github.com/zenml-io/zenml/pull/3061
* Make sync a setting for sagemaker/azureml orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/3062
* Remove some deprecated features by @schustmi in https://github.com/zenml-io/zenml/pull/2926
* Fix MySQL warning when filtering pipelines by latest run by @schustmi in https://github.com/zenml-io/zenml/pull/3051
* Remove more deprecated stuff by @schustmi in https://github.com/zenml-io/zenml/pull/3063
* Remove log versions from versioned buckets in S3 by @avishniakov in https://github.com/zenml-io/zenml/pull/3060
* add docs on k8s per pod settings by @wjayesh in https://github.com/zenml-io/zenml/pull/3066
* Remove Python 3.8 support by @strickvl in https://github.com/zenml-io/zenml/pull/3034
* `register_artifact` function by @avishniakov in https://github.com/zenml-io/zenml/pull/3053
* Fix bad link in docs by @avishniakov in https://github.com/zenml-io/zenml/pull/3069
* Fix model linkage for the lazy loading scenarios by @avishniakov in https://github.com/zenml-io/zenml/pull/3054
* Updating template versions after the Python 3.8 changes by @bcdurak in https://github.com/zenml-io/zenml/pull/3070
* Add UUID materializer by @htahir1 in https://github.com/zenml-io/zenml/pull/3073
* Fix pipeline and model URLs for ZenML Pro on-prem deployments by @stefannica in https://github.com/zenml-io/zenml/pull/3083
* Update bentoml integration to 1.3.5 and add containerization by @wjayesh in https://github.com/zenml-io/zenml/pull/3045
* Fix mlflow linting by @schustmi in https://github.com/zenml-io/zenml/pull/3085
* Add docs for factory generation of artifact names by @strickvl in https://github.com/zenml-io/zenml/pull/3084
* Remove unnecessary metadata fields in UUID materializer test by @htahir1 in https://github.com/zenml-io/zenml/pull/3088
* Client-side computation of cached steps by @schustmi in https://github.com/zenml-io/zenml/pull/3068
* Fix Kaniko image builder service account passing by @schustmi in https://github.com/zenml-io/zenml/pull/3081
* Bugfix in GitLab Code Repository integration by @4gt-104 in https://github.com/zenml-io/zenml/pull/3076
* Add docs on common stacks best practices by @strickvl in https://github.com/zenml-io/zenml/pull/3092
* [docs] Update stacks page and add azure 1-click from dashboard docs by @wjayesh in https://github.com/zenml-io/zenml/pull/3082
* Local development how-to section by @strickvl in https://github.com/zenml-io/zenml/pull/3090
* [docs] best practices for upgrading zenml server by @wjayesh in https://github.com/zenml-io/zenml/pull/3087
* Fix S3 ArtifactStore auth issue by @avishniakov in https://github.com/zenml-io/zenml/pull/3086
* Reduce migration testing runtime by @avishniakov in https://github.com/zenml-io/zenml/pull/3078
* [docs] Dedicated docs on how to skip building an image on pipeline run by @wjayesh in https://github.com/zenml-io/zenml/pull/3079
* Fix filtering by tag for pipeline runs by @schustmi in https://github.com/zenml-io/zenml/pull/3097
* Remove deprecated features: `zenml deploy` and `zenml <stack-component> deploy` by @stefannica in https://github.com/zenml-io/zenml/pull/3089
* Do not tag model via `Model` class on creation by @avishniakov in https://github.com/zenml-io/zenml/pull/3098
* Sagemaker add pipeline tags by @htahir1 in https://github.com/zenml-io/zenml/pull/3080
* [docs] Add custom Dataset classes and Materializers in ZenML by @htahir1 in https://github.com/zenml-io/zenml/pull/3091
* Delete Scarf related scripts and workflow files by @htahir1 in https://github.com/zenml-io/zenml/pull/3103
* Add more detailed docs for ZenML Pro by @wjayesh in https://github.com/zenml-io/zenml/pull/3065
* Add missing code hash filter in client method by @schustmi in https://github.com/zenml-io/zenml/pull/3094
* Remove lineage graph and legacy dashboard support by @schustmi in https://github.com/zenml-io/zenml/pull/3064
* Add unittest to cover gitlab CR regex. by @4gt-104 in https://github.com/zenml-io/zenml/pull/3102
* Automating the release process using Github workflows by @bcdurak in https://github.com/zenml-io/zenml/pull/3101
* Bugfix for release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3107
* Bugfix for new version in the release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3108
* using the right parent image name by @bcdurak in https://github.com/zenml-io/zenml/pull/3109
* Making the new release automation scripts executable by @bcdurak in https://github.com/zenml-io/zenml/pull/3110
* Fixing the env variables for the release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3111
* Adding the right Github configuration before using the `gh` CLI to fetch the version by @bcdurak in https://github.com/zenml-io/zenml/pull/3112
* Fixing the outputs of the first step in the release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3113
* Handling github auth and release notes for release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3114
* Fixing the cloudbuild call for release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3116
* Fixing the update tenant call in the script by @bcdurak in https://github.com/zenml-io/zenml/pull/3118
* Release automation with the new redeploy logic by @bcdurak in https://github.com/zenml-io/zenml/pull/3120
* Fixing the automation triggers for other branches by @bcdurak in https://github.com/zenml-io/zenml/pull/3125
* Update link for `llm-complete-guide` repository.- Updated link to poi… by @htahir1 in https://github.com/zenml-io/zenml/pull/3128
* Fixing the migration testing for the release branches by @bcdurak in https://github.com/zenml-io/zenml/pull/3127
* Update pipeline deletion docs by @strickvl in https://github.com/zenml-io/zenml/pull/3123
* Disabling the cache for the quickstart tests by @bcdurak in https://github.com/zenml-io/zenml/pull/3133
* Update Argilla integration for v2.x SDK by @sdiazlor in https://github.com/zenml-io/zenml/pull/2915
* Using pip instead of `gh` CLI in the migration tests by @bcdurak in https://github.com/zenml-io/zenml/pull/3136
* Adapting tags to work with older versions of Sagemaker by @bcdurak in https://github.com/zenml-io/zenml/pull/3135
* Manual trigger for the `release_finalize` workflow by @bcdurak in https://github.com/zenml-io/zenml/pull/3137
* Fixing the prepare trigger for the release automation by @bcdurak in https://github.com/zenml-io/zenml/pull/3138

## New Contributors

* @4gt-104 made their first contribution in https://github.com/zenml-io/zenml/pull/3076
* @sdiazlor made their first contribution in https://github.com/zenml-io/zenml/pull/2915

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.67.0...0.68.0

# 0.67.0

## Highlights

- **Improved Sagemaker Orchestrator:** Now supports warm pools for AWS Sagemaker, enhancing performance and reducing startup times for TrainingJobs.
- **New DAG Visualizer:** Shipped major enhancements to the DAG Visualizer for Pipeline Runs:
  - Preview of the actual DAG before pipeline completion
  - Visual adjustments for improved clarity
  - Real-time updates during pipeline execution
- **Environment Variable References in Configurations:** Introduced the ability to reference environment variables in both code and configuration files using the syntax `${ENV_VARIABLE_NAME}`, increasing flexibility in setups.
- **Enhanced UX for Major Cloud Providers:** Displaying direct pipeline/log URLs when working with major cloud platforms.
- **Skypilot with Kubernetes Support:** Added compatibility for running the Skypilot orchestrator on Kubernetes clusters.
- **Updated Deepchecks Integration:** The Deepchecks integration has been refreshed with the latest features and improvements.

## Features and Improvements

- **AWS Integration:**
  - Added permissions to workflow to enable assuming AWS role.
  - Fixed expired credentials error when using the docker service connector.
- **Error Handling:** Improved error messages for stack components of uninstalled integrations.
- **API Key Management:** Added an option to write API keys to a file instead of using the CLI.
- **Pipeline Execution:**
  - Implemented fixes for executing steps as single step pipelines.
  - Added filter option for templatable runs.
  - Added additional filtering options for pipeline runs.
- **MLflow Integration:** Linked registered models in MLflow with the corresponding MLflow run.
- **Analytics:** Added missing analytics event to improve user insights.

## Documentation Updates

- Updated documentation for various integrations, including:
  - Lightning AI orchestrator
  - Kubeflow
  - Comet experiment tracker
  - Neptune
  - Hugging Face deployer
  - Weights & Biases (wandb)
- Added documentation for run templates.
- Fixed incorrect method name in Pigeon docs.
- Various small documentation fixes and improvements.

## Bug Fixes

- Fixed YAML formatting issues.
- Resolved RBAC issues for subpages in response models.
- Fixed step output annotation in Discord test.
- Addressed MLFlow integration requirements duplication.
- Fixed Lightning orchestrator functionality.
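The `${ENV_VARIABLE_NAME}` placeholder syntax introduced in this release substitutes environment variables into configuration values. The sketch below only models the substitution semantics; the `resolve_env_placeholders` helper is hypothetical and not ZenML's internal code:

```python
import os
import re

def resolve_env_placeholders(value: str) -> str:
    """Replace ${ENV_VARIABLE_NAME} placeholders with values from the environment."""
    return re.sub(
        r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda match: os.environ[match.group(1)],
        value,
    )

os.environ["WANDB_API_KEY"] = "secret-token"
print(resolve_env_placeholders("api_key: ${WANDB_API_KEY}"))
# api_key: secret-token
```

Because the placeholder is resolved at runtime, the same pipeline configuration can be reused across environments without hard-coding secrets or host-specific values.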
## What's Changed

* Error message for stack components of uninstalled integrations by @bcdurak in https://github.com/zenml-io/zenml/pull/2996
* Enable migration testing for version 0.66.0 by @schustmi in https://github.com/zenml-io/zenml/pull/2998
* Add permissions to workflow to enable assuming AWS role by @schustmi in https://github.com/zenml-io/zenml/pull/2999
* Add option to write api key to file instead of CLI by @schustmi in https://github.com/zenml-io/zenml/pull/3001
* Fix yaml formatting by @schustmi in https://github.com/zenml-io/zenml/pull/3004
* Update ZenML Pro links for consistency.- Update ZenML Pro links for c… by @htahir1 in https://github.com/zenml-io/zenml/pull/3007
* Fix incorrect method name in Pigeon docs by @strickvl in https://github.com/zenml-io/zenml/pull/3008
* Fixes for executing steps as single step pipelines by @schustmi in https://github.com/zenml-io/zenml/pull/3006
* Add filter option for templatable runs by @schustmi in https://github.com/zenml-io/zenml/pull/3000
* Add missing analytics event by @schustmi in https://github.com/zenml-io/zenml/pull/3009
* Fix expired credentials error when using the docker service connector by @schustmi in https://github.com/zenml-io/zenml/pull/3002
* Fix Lightning docs by @strickvl in https://github.com/zenml-io/zenml/pull/3013
* Remove image builder warning by @htahir1 in https://github.com/zenml-io/zenml/pull/3014
* Fixed kubeflow docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/3018
* Update Comet experiment tracker docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3019
* Small docs fixes by @strickvl in https://github.com/zenml-io/zenml/pull/3022
* Feature/cleanup unused file by @AlexejPenner in https://github.com/zenml-io/zenml/pull/3023
* MLFlow integration requirements duplicate fix by @bcdurak in https://github.com/zenml-io/zenml/pull/3011
* Fix Neptune docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3026
* Fix huggingface deployer docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3024
* Fix step output annotation in Discord test by @wjayesh in https://github.com/zenml-io/zenml/pull/3029
* Fix RBAC for subpages in response models by @schustmi in https://github.com/zenml-io/zenml/pull/3031
* Allow env variable placeholders in configurations by @schustmi in https://github.com/zenml-io/zenml/pull/3003
* Leverage warm pools for AWS Sagemaker by @avishniakov in https://github.com/zenml-io/zenml/pull/3027
* Updated wandb docs by @htahir1 in https://github.com/zenml-io/zenml/pull/3030
* Add hyperlint by @htahir1 in https://github.com/zenml-io/zenml/pull/3035
* Bump NLP template by @avishniakov in https://github.com/zenml-io/zenml/pull/3036
* Add additional filtering options by @schustmi in https://github.com/zenml-io/zenml/pull/2951
* Bump starter template version by @schustmi in https://github.com/zenml-io/zenml/pull/3038
* Docs for run templates by @bcdurak in https://github.com/zenml-io/zenml/pull/3028
* Update Lightning AI orchestrator documentation by @strickvl in https://github.com/zenml-io/zenml/pull/3016
* Add default value for PipelineRun.is_templatable by @schustmi in https://github.com/zenml-io/zenml/pull/3040
* Use a generic OAuth2 client credentials flow to login to the Cloud API by @stefannica in https://github.com/zenml-io/zenml/pull/3041
* fix lightning orchestrator by @safoinme in https://github.com/zenml-io/zenml/pull/3010
* Linking registered models in MLflow with the corresponding MLflow run by @aiakide in https://github.com/zenml-io/zenml/pull/3020
* Bugfixing mlflow registry linting issue by @bcdurak in https://github.com/zenml-io/zenml/pull/3043
* Enhancing the orchestrator UX for major cloud providers by @bcdurak in https://github.com/zenml-io/zenml/pull/3005
* Skypilot with Kubernetes by @safoinme in https://github.com/zenml-io/zenml/pull/3033
* Update deepchecks integration by @wjayesh in https://github.com/zenml-io/zenml/pull/2987

## New Contributors

* @aiakide made their first contribution in https://github.com/zenml-io/zenml/pull/3020

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.66.0...0.67.0

# 0.66.0

## New Features and Improvements

### Python 3.12 support

This release adds support for Python 3.12, which means you can now develop your ZenML pipelines with the latest Python features.

### Easier way to specify component settings

Before this release, settings for stack components had to be specified with both the component type and the flavor. We simplified this, and it is now possible to specify settings using just the component type:

```python
# Before
@pipeline(settings={"orchestrator.sagemaker": SagemakerOrchestratorSettings(...)})
def my_pipeline():
    ...

# Now
@pipeline(settings={"orchestrator": SagemakerOrchestratorSettings(...)})
def my_pipeline():
    ...
```

## Breaking changes

* In order to slim down the ZenML library, we removed the `numpy` and `pandas` libraries as dependencies of ZenML. If your code uses these libraries, you have to make sure they're installed in your local environment as well as the Docker images that get built to run your pipelines (use `DockerSettings.requirements` or `DockerSettings.required_integrations`).
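With `numpy` and `pandas` no longer shipped as ZenML dependencies, pipelines that use them must declare them for their Docker builds. A minimal sketch of how that could look in a pipeline YAML configuration file, assuming the usual `settings.docker` layout (adapt the package and integration names to your own setup):

```yaml
settings:
  docker:
    requirements:
      - numpy
      - pandas
    # alternatively, pull in everything an integration needs:
    required_integrations:
      - sklearn
```

The same values can also be passed in code via `DockerSettings(requirements=[...])` on the `@pipeline` or `@step` decorator.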
## What's Changed

* Add 0.65.0 to migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2963
* Hotfix for release flow by @avishniakov in https://github.com/zenml-io/zenml/pull/2961
* Fix the one-click AWS and GCP stack deployments by @stefannica in https://github.com/zenml-io/zenml/pull/2964
* Fix wandb mypy error by @strickvl in https://github.com/zenml-io/zenml/pull/2967
* Fix accelerate docs for 0.65.0+ by @avishniakov in https://github.com/zenml-io/zenml/pull/2968
* Dynamic model version names docs by @avishniakov in https://github.com/zenml-io/zenml/pull/2970
* Logging nits by @avishniakov in https://github.com/zenml-io/zenml/pull/2972
* Fix excess Azure logging by @strickvl in https://github.com/zenml-io/zenml/pull/2965
* Fix typo in docs by @strickvl in https://github.com/zenml-io/zenml/pull/2976
* Pass code path to template run by @schustmi in https://github.com/zenml-io/zenml/pull/2973
* Prevent extra attributes in component configs by @schustmi in https://github.com/zenml-io/zenml/pull/2978
* Dependency cleanup and Python 3.12 support by @bcdurak in https://github.com/zenml-io/zenml/pull/2953
* Few nits in docs based on integrations review by @avishniakov in https://github.com/zenml-io/zenml/pull/2983
* Update slack alerter docs by @stefannica in https://github.com/zenml-io/zenml/pull/2981
* Update Kubeflow orchestrator docs by @stefannica in https://github.com/zenml-io/zenml/pull/2985
* Build docker images for python 3.12 by @schustmi in https://github.com/zenml-io/zenml/pull/2988
* Allow shortcut keys for component settings by @schustmi in https://github.com/zenml-io/zenml/pull/2957
* Remove references to workspaces from docs by @strickvl in https://github.com/zenml-io/zenml/pull/2991
* Added some adjustments for colab by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2966
* Reverting the installation of `mlstacks` after its new release by @bcdurak in https://github.com/zenml-io/zenml/pull/2980
* Small dependency and docs updates by @strickvl in https://github.com/zenml-io/zenml/pull/2982

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.65.0...0.66.0

# 0.65.0

## New Features and Improvements

### New Quickstart Experience

The new quickstart demonstrates how ZenML streamlines the transition of machine learning workflows from local environments to cloud-scale operations.

### Run Single Step as a ZenML Pipeline

If you want to run just an individual step on your stack, you can simply call the step as you would with a normal Python function. ZenML will internally create a pipeline with just your step and run it on the active stack.

### Other improvements and fixes

* Updated AzureML Step Operator to work with SDKv2 and use Service Connectors
* Added timestamps to log messages
* Fixed issue with loading artifacts from an artifact store outside of the current active artifact store
* Support for templated names for model versions (`{date}` and `{time}` are currently supported placeholders)
* `run_with_accelerate` step wrapper can be used as a Python decorator on top of ZenML steps

## Breaking changes

* Workspace scoped POST endpoint `full-stack` was removed and merged with the `stacks` POST endpoint

## What's Changed

* Remove broken JIRA sync workflow by @strickvl in https://github.com/zenml-io/zenml/pull/2924
* Fix Hugging Face Spaces permissions by @strickvl in https://github.com/zenml-io/zenml/pull/2925
* Fixes for `run_with_accelerate` by @avishniakov in https://github.com/zenml-io/zenml/pull/2935
* Bump azure skypilot to a stable 0.6.1 by @avishniakov in https://github.com/zenml-io/zenml/pull/2933
* Add Timestamps to Logs and Update Dashboard URL Message by @htahir1 in https://github.com/zenml-io/zenml/pull/2934
* Adding 0.64.0 to migration tests by @bcdurak in https://github.com/zenml-io/zenml/pull/2923
* Removed docker build docs + fixed CLI command for zenml pipeline build list by @htahir1 in https://github.com/zenml-io/zenml/pull/2938
* Throw an error when running integration installs when uv == False but pip is not installed by @mennoliefstingh in https://github.com/zenml-io/zenml/pull/2930
* Update AzureML step operator to SDK v2 and add service connector support by @stefannica in https://github.com/zenml-io/zenml/pull/2927
* Improving the AzureML orchestrator docs by @bcdurak in https://github.com/zenml-io/zenml/pull/2940
* Update mlflow docs by @htahir1 in https://github.com/zenml-io/zenml/pull/2941
* Tell users where they can import `DockerSettings` from by @strickvl in https://github.com/zenml-io/zenml/pull/2947
* Fail early when specifying invalid materializers by @schustmi in https://github.com/zenml-io/zenml/pull/2950
* Add GitHub Codespaces and VS Code Remote Container support by @htahir1 in https://github.com/zenml-io/zenml/pull/2949
* Automatically detect whether code download is necessary by @schustmi in https://github.com/zenml-io/zenml/pull/2946
* Enable running a single step on the active stack by @schustmi in https://github.com/zenml-io/zenml/pull/2942
* Dynamic (templated) names for model versions by @avishniakov in https://github.com/zenml-io/zenml/pull/2909
* Adding an orchestrator URL to the AzureML orchestrator by @bcdurak in https://github.com/zenml-io/zenml/pull/2952
* Update python version of latest docker image by @schustmi in https://github.com/zenml-io/zenml/pull/2954
* Make `run_with_accelerate` a pythonic decorator by @avishniakov in https://github.com/zenml-io/zenml/pull/2943
* Bugfix for artifacts coming from a different artifact store by @bcdurak in https://github.com/zenml-io/zenml/pull/2928
* Stack Request cleanup and improvements by @bcdurak in https://github.com/zenml-io/zenml/pull/2906
* Silence pydantic protected namespace warnings by @schustmi in https://github.com/zenml-io/zenml/pull/2955
* Update key for finished onboarding survey by @schustmi in https://github.com/zenml-io/zenml/pull/2956
* Extend notebook source replacement code to other objects apart from ZenML steps by @schustmi in https://github.com/zenml-io/zenml/pull/2919
* Fix stack register CLI command by @schustmi in https://github.com/zenml-io/zenml/pull/2958
* Lightening studio orchestrator by @safoinme in https://github.com/zenml-io/zenml/pull/2931
* Introduce new quickstart with a focus on Stack switching by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2937
* Bugfix for the required prompts for the AzureML wizard by @bcdurak in https://github.com/zenml-io/zenml/pull/2959

## New Contributors

* @mennoliefstingh made their first contribution in https://github.com/zenml-io/zenml/pull/2930

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.64.0...0.65.0

# 0.64.0

## New Features and Improvements

### Notebook Integration

ZenML now supports running steps defined in notebook cells with remote orchestrators and step operators. This feature enhances the development workflow by allowing a seamless transition from experimentation to production.

- **Details**: [Running remote pipelines from notebooks](https://docs.zenml.io/v/docs/how-to/run-remote-pipelines-from-notebooks)

### Reduced Docker Builds with Code Uploads

We've introduced an option to upload code to the artifact store, enabling Docker build reuse. This feature can significantly speed up iteration, especially when working with remote stacks.

- **Default**: Enabled
- **Configuration**: To disable, set `DockerSettings.allow_download_from_artifact_store=False` for steps or pipelines
- **Benefits**:
  - Faster development cycles
  - No need to register a code repository to reuse builds
  - Builds only occur when requirements or DockerSettings change
- **Documentation**: [Which files are built into the image](https://docs.zenml.io/how-to/customize-docker-builds/which-files-are-built-into-the-image)

### AzureML Orchestrator Support

ZenML now supports [AzureML](https://azure.microsoft.com/en-gb/free/machine-learning) as an orchestrator, expanding our list of supported cloud platforms.
- **Full Azure Guide**: [Setting up an Azure stack](https://docs.zenml.io/how-to/popular-integrations/azure-guide)
- **Documentation**: [AzureML orchestrator](https://docs.zenml.io/stack-components/orchestrators/azureml)

### Terraform Modules

We've released new Terraform modules on the Hashicorp registry for provisioning complete MLOps stacks across major cloud providers.

- **Features**:
  - Automate infrastructure setup for ZenML stack deployment
  - Handle registration of configurations to ZenML server
- **More Information**: [MLOps Terraform ZenML blog post](https://www.zenml.io/blog/mlops-terraform-zenml)

These updates aim to streamline the MLOps workflow, making it easier to develop, deploy, and manage machine learning pipelines with ZenML.

## What's Changed

* Add 0.63.0 to migration testing by @bcdurak in https://github.com/zenml-io/zenml/pull/2893
* Document terraform stack deployment modules by @stefannica in https://github.com/zenml-io/zenml/pull/2898
* README update by @htahir1 in https://github.com/zenml-io/zenml/pull/2901
* Enable `Databricks` Unity Catalog for MLflow by @safoinme in https://github.com/zenml-io/zenml/pull/2900
* Make urls pop out from the sea of purple/cyan in the logs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2894
* Add terraform as a supported stack deployment provider by @stefannica in https://github.com/zenml-io/zenml/pull/2902
* Fix `Model` imports in docs by @strickvl in https://github.com/zenml-io/zenml/pull/2907
* Remove hub references by @schustmi in https://github.com/zenml-io/zenml/pull/2905
* Bump NLP template by @avishniakov in https://github.com/zenml-io/zenml/pull/2912
* Updated step operator docs by @htahir1 in https://github.com/zenml-io/zenml/pull/2908
* Added lightning studio check by @htahir1 in https://github.com/zenml-io/zenml/pull/2910
* Upload code to artifact store by @schustmi in https://github.com/zenml-io/zenml/pull/2895
* AzureML orchestrator by @bcdurak in https://github.com/zenml-io/zenml/pull/2873
* Run steps defined in notebooks with remote orchestrators by @schustmi in https://github.com/zenml-io/zenml/pull/2899
* Fix broken / unparsable md docs file by @strickvl in https://github.com/zenml-io/zenml/pull/2916
* Bump mlflow to 2.15.0 by @christianversloot in https://github.com/zenml-io/zenml/pull/2896
* Remove extra button by @schustmi in https://github.com/zenml-io/zenml/pull/2918
* Added last timestamp to zenserver by @htahir1 in https://github.com/zenml-io/zenml/pull/2913
* A pipeline can't finish successfully in this case by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2903
* Fix the GCP Workload Identity Federation support in the GCP Service Connector by @stefannica in https://github.com/zenml-io/zenml/pull/2914
* Embeddings finetuning guide for LLMOps guide by @strickvl in https://github.com/zenml-io/zenml/pull/2917

## πŸ₯³ Community Contributions πŸ₯³

We'd like to give a special thanks to @christianversloot, who contributed to this release by bumping the `mlflow` version to 2.15.0.

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.63.0...0.64.0

# 0.63.0

Moving forward from the last two releases, we have further improved the 1-click deployment tool and the stack wizard by adding support for Azure. Moreover, we implemented a new step operator that allows you to run individual steps of your pipeline in Kubernetes pods. Lastly, we have simplified our pipeline models by removing their versions.
## What's Changed

* Enable cloud build service in GCP stack deployment by @stefannica in https://github.com/zenml-io/zenml/pull/2864
* Adding a `logo_url` and the of the `integration` to component responses by @bcdurak in https://github.com/zenml-io/zenml/pull/2866
* Use REST in Model tests by @avishniakov in https://github.com/zenml-io/zenml/pull/2834
* Add Azure stack wizard by @avishniakov in https://github.com/zenml-io/zenml/pull/2841
* Migration testing for 0.62.0 by @schustmi in https://github.com/zenml-io/zenml/pull/2860
* Fix RBAC in combination with lazy loaders by @schustmi in https://github.com/zenml-io/zenml/pull/2869
* Misc cleanup after release by @schustmi in https://github.com/zenml-io/zenml/pull/2861
* Disable notebook error for Kubernetes orchestrator by @strickvl in https://github.com/zenml-io/zenml/pull/2870
* Added ability to add labels to k8s pod by @htahir1 in https://github.com/zenml-io/zenml/pull/2872
* Fix zenml pro links by @schustmi in https://github.com/zenml-io/zenml/pull/2875
* Fix mlstacks docs typo by @begoechavarren in https://github.com/zenml-io/zenml/pull/2878
* Fix requests vulnerability by @stefannica in https://github.com/zenml-io/zenml/pull/2843
* Fixed some minor docs things i noticed by @htahir1 in https://github.com/zenml-io/zenml/pull/2881
* Serialize source as Any to keep subclass attributes by @schustmi in https://github.com/zenml-io/zenml/pull/2880
* Fix node selectors for Vertex orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/2876
* Kubernetes step operator by @schustmi in https://github.com/zenml-io/zenml/pull/2883
* Automatically populate GCP/azure path when using wizard from the frontend by @schustmi in https://github.com/zenml-io/zenml/pull/2886
* Remove pipeline versioning and add run templates by @schustmi in https://github.com/zenml-io/zenml/pull/2830
* Implement the Azure 1-click stack deployment by @stefannica in https://github.com/zenml-io/zenml/pull/2887
* Better error message sagemaker, better documentation server env vars by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2885
* Azure Stack Wizard docs by @bcdurak in https://github.com/zenml-io/zenml/pull/2890
* Docs update mlflow deploy function call by @safoinme in https://github.com/zenml-io/zenml/pull/2863
* Fix databricks resource setting by @safoinme in https://github.com/zenml-io/zenml/pull/2889

## New Contributors

* @begoechavarren made their first contribution in https://github.com/zenml-io/zenml/pull/2878

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.62.0...0.63.0

# 0.62.0

Building on top of the last release, this release adds a new and easy way to deploy a GCP ZenML stack from the dashboard and the CLI. Give it a try by going to the `Stacks` section in the dashboard or running the `zenml stack deploy` command! For more information on this new feature, please do check out [the video and blog](https://www.zenml.io/blog/easy-mlops-pipelines) from our previous release.

We also [updated our Hugging Face integration](https://github.com/zenml-io/zenml/pull/2851) to support the automatic display of an embedded `datasets` preview pane in the ZenML Dashboard whenever you return a `Dataset` from a step. This was recently released by the Hugging Face datasets team and it allows you to easily visualize and inspect your data from the comfort of the dashboard.
## What's Changed

* Fix release action docker limit by @schustmi in https://github.com/zenml-io/zenml/pull/2837
* Upgrade ruff and yamlfix to latest versions before running formatting by @christianversloot in https://github.com/zenml-io/zenml/pull/2577
* Fixed edge-case where step run is stored incompletely by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2827
* Docs for stack registration + deployment wizards by @htahir1 in https://github.com/zenml-io/zenml/pull/2814
* Make upgrade checks in formatting script optional by @avishniakov in https://github.com/zenml-io/zenml/pull/2839
* Enable migration testing for version 0.61.0 by @schustmi in https://github.com/zenml-io/zenml/pull/2836
* One-click GCP stack deployments by @stefannica in https://github.com/zenml-io/zenml/pull/2833
* Only login to docker for PRs with secret access by @schustmi in https://github.com/zenml-io/zenml/pull/2842
* Add GCP Stack creation Wizard (CLI) by @avishniakov in https://github.com/zenml-io/zenml/pull/2826
* Update onboarding by @schustmi in https://github.com/zenml-io/zenml/pull/2794
* Merged log files in Step Ops steps might be not available on main process, due to merge in the step op by @avishniakov in https://github.com/zenml-io/zenml/pull/2795
* Fix some broken links, copy paste commands, and made secrets more visible by @htahir1 in https://github.com/zenml-io/zenml/pull/2848
* Update stack deployment docs and other small fixes by @stefannica in https://github.com/zenml-io/zenml/pull/2846
* Improved the `StepInterfaceError` message for missing inputs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2849
* add image pull secrets to k8s pod settings by @wjayesh in https://github.com/zenml-io/zenml/pull/2847
* Include apt installation of libgomp1 for docker images with lightgbm by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2813
* Patch filter mflow by stage by @whoknowsB in https://github.com/zenml-io/zenml/pull/2798
* Bump mlflow to version 2.14.2 by @christianversloot in https://github.com/zenml-io/zenml/pull/2825
* Fix Accelerate string arguments passing by @avishniakov in https://github.com/zenml-io/zenml/pull/2845
* Fix CI by @schustmi in https://github.com/zenml-io/zenml/pull/2850
* Added some visualizations for the HF dataset by @htahir1 in https://github.com/zenml-io/zenml/pull/2851
* Fix skypilot versioning for the lambda integration by @wjayesh in https://github.com/zenml-io/zenml/pull/2853
* Improve custom visualization docs by @htahir1 in https://github.com/zenml-io/zenml/pull/2855
* Fix list typo by @htahir1 in https://github.com/zenml-io/zenml/pull/2856
* Endpoint to get existing and prospective resources for service connector by @avishniakov in https://github.com/zenml-io/zenml/pull/2854
* Databricks integrations by @safoinme in https://github.com/zenml-io/zenml/pull/2823

## New Contributors

* @whoknowsB made their first contribution in https://github.com/zenml-io/zenml/pull/2798

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.61.0...0.62.0

# 0.61.0

This release comes with a new and easy way to deploy an AWS ZenML stack from the dashboard and the CLI. Give it a try by going to the `Stacks` section in the dashboard or running the `zenml stack deploy` command! We hope this makes it super easy for existing and new users to set up the infrastructure required to run ZenML pipelines on the cloud in one click.

Note: Only a simple AWS stack using Skypilot is supported for now; GCP and Azure support is coming!

Additionally, this release includes improvements to our documentation and bugfixes for some integrations.
## What's Changed

* Add latest zenml version to migration testing scripts by @htahir1 in https://github.com/zenml-io/zenml/pull/2811
* Add service connector support for Google Artifact Registry by @stefannica in https://github.com/zenml-io/zenml/pull/2771
* Update order in which requirements are installed by @schustmi in https://github.com/zenml-io/zenml/pull/2341
* Add installation instructions for Macs running on Apple Silicon by @strickvl in https://github.com/zenml-io/zenml/pull/2774
* Added docs for trigger interface by @htahir1 in https://github.com/zenml-io/zenml/pull/2806
* Update triggers docs with information on previously-run pipelines by @strickvl in https://github.com/zenml-io/zenml/pull/2820
* Bump kfp version in GCP integration for pydantic 2.0 by @wjayesh in https://github.com/zenml-io/zenml/pull/2824
* Use shared cloud connection to reduce M2M token usage by @schustmi in https://github.com/zenml-io/zenml/pull/2817
* Fail pipeline run if error happens during deployment by @schustmi in https://github.com/zenml-io/zenml/pull/2818
* Login to dockerhub to solve rate limiting by @schustmi in https://github.com/zenml-io/zenml/pull/2828
* Stack wizard CLI + Endpoints by @avishniakov in https://github.com/zenml-io/zenml/pull/2808
* In-browser assisted full cloud stack deployments by @stefannica in https://github.com/zenml-io/zenml/pull/2816
* Fix Kubeflow v2 integration by @wjayesh in https://github.com/zenml-io/zenml/pull/2829
* Fix skypilot jobs failing on VMs (sky bumped to 0.6.0) by @wjayesh in https://github.com/zenml-io/zenml/pull/2815
* Fix unicode decode errors in k8s pod logs read operation by @wjayesh in https://github.com/zenml-io/zenml/pull/2807
* Small improvements and bug fixes by @schustmi in https://github.com/zenml-io/zenml/pull/2821
* TF tests + various integration (un)install improvements by @avishniakov in https://github.com/zenml-io/zenml/pull/2791
* Fixed bug in the MacOS version check by @strickvl in https://github.com/zenml-io/zenml/pull/2819
* Remove prefix for analytics labels by @schustmi in https://github.com/zenml-io/zenml/pull/2831

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.60.0...0.61.0

# 0.60.0

ZenML now uses Pydantic v2. 🥳

This upgrade comes with a set of critical updates. While your user experience mostly remains unaffected, you might see unexpected behavior due to the changes in our dependencies. Moreover, since Pydantic v2 provides a slightly stricter validation process, you might end up bumping into some validation errors which were not caught before, but it is all for the better 🙂

If you run into any other errors, please let us know either on [GitHub](https://github.com/zenml-io/zenml) or on our [Slack](https://zenml.io/slack-invite).

## Changes in some of the critical dependencies

- SQLModel is one of the core dependencies of ZenML, and prior to this upgrade, we were utilizing version `0.0.8`. However, this version is relatively outdated and incompatible with Pydantic v2. Within the scope of this upgrade, we upgraded it to `0.0.18`.
- Due to the change in the SQLModel version, we also had to upgrade our SQLAlchemy dependency from v1 to v2. While this does not affect the way that you are using ZenML, if you are using SQLAlchemy in your environment, you might have to migrate your code as well. For a detailed list of changes, feel free to check [their migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html).

## Changes in `pydantic`

Pydantic v2 brings a lot of new and exciting changes to the table. The core logic now uses Rust, and it is much faster and more efficient in terms of performance. On top of it, the main concepts like model design, configuration, validation, or serialization now include a lot of new cool features.
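To see what the stricter validation can look like in practice, here is a small, self-contained sketch (the `TrainConfig` model below is hypothetical and only for illustration, not part of ZenML):

```python
from pydantic import BaseModel, ValidationError


class TrainConfig(BaseModel):
    # Hypothetical model, used only to illustrate Pydantic v2 behavior.
    epochs: int
    learning_rate: float


# Lax coercion still works in v2: numeric strings are accepted.
cfg = TrainConfig(epochs="3", learning_rate="0.01")
assert cfg.epochs == 3 and cfg.learning_rate == 0.01

# But v2 rejects lossy conversions that v1 silently truncated,
# e.g. a fractional float passed to an int field.
try:
    TrainConfig(epochs=3.5, learning_rate=0.01)
    raised = False
except ValidationError:
    raised = True
print("strict int check raised:", raised)
```

If code that ran cleanly on Pydantic v1 starts raising `ValidationError` after the upgrade, this stricter coercion is a likely cause.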
If you are using `pydantic` in your workflow and are interested in the new changes, you can check [the brilliant migration guide](https://docs.pydantic.dev/2.7/migration/) provided by the `pydantic` team to see the full list of changes.

## Changes in our integrations

Much like ZenML, `pydantic` is an important dependency in many other Python packages. That's why conducting this upgrade helped us unlock a new version for several ZenML integration dependencies. Additionally, in some instances, we had to adapt the functionality of the integration to keep it compatible with `pydantic`. So, if you are using any of these integrations, please go through the changes.

### Airflow

As mentioned above, upgrading our `pydantic` dependency meant we had to upgrade our `sqlmodel` dependency. Upgrading our `sqlmodel` dependency meant we had to upgrade our `sqlalchemy` dependency as well. Unfortunately, `apache-airflow` is still using `sqlalchemy` v1 and is incompatible with Pydantic v2. As a solution, we have removed the dependencies of the `airflow` integration. Now, you can use ZenML to create your Airflow pipelines and use a separate environment to run them with Airflow. You can check the updated docs [right here](https://docs.zenml.io/stack-components/orchestrators/airflow).

### AWS

Some of our integrations now require `protobuf` 4. Since our previous `sagemaker` version (`2.117.0`) did not support `protobuf` 4, we could not pair it with these new integrations. Thankfully, `sagemaker` started supporting `protobuf` 4 with version `2.172.0`, and relaxing its dependency solved the compatibility issue.

### Evidently

The old version of our `evidently` integration was not compatible with Pydantic v2. They started supporting it from version `0.4.16`. As their latest version is `0.4.22`, the new dependency of the integration is limited between these two versions.

### Feast

Our previous implementation of the `feast` integration was not compatible with Pydantic v2 due to the extra `redis` dependency we were using. This extra dependency is now removed and the `feast` integration is working as intended.

### GCP

The previous version of the Kubeflow dependency (`kfp==1.8.22`) in our GCP integration required Pydantic v1 to be installed. While we were upgrading our Pydantic dependency, we saw this as an opportunity and used this chance to upgrade the `kfp` dependency to v2 (which has no dependencies on the Pydantic library). This is why you may see some functional changes in the Vertex step operator and orchestrator. If you would like to go through the changes in the `kfp` library, you can find [the migration guide here](https://www.kubeflow.org/docs/components/pipelines/v2/migration/).

### Great Expectations

Great Expectations started supporting Pydantic v2 from version `0.17.15`, and they are closing in on their `1.0` release. Since this release might include a lot of big changes, we adjusted the dependency in our integration to `great-expectations>=0.17.15,<1.0`. We will try to keep it updated in the future once they release the `1.0` version.

### Kubeflow

Similar to the GCP integration, the previous version of the Kubeflow dependency (`kfp==1.8.22`) in our `kubeflow` integration required Pydantic v1 to be installed. While we were upgrading our Pydantic dependency, we saw this as an opportunity and used this chance to upgrade the `kfp` dependency to v2 (which has no dependencies on the Pydantic library). If you would like to go through the changes in the `kfp` library, you can find [the migration guide here](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). (We are also considering adding an alternative version of this integration so our users can keep using `kfp` v1 in their environment. Stay tuned for any updates.)

### MLflow

`mlflow` is compatible with both Pydantic v1 and v2. However, due to a known issue, if you install `zenml` first and then do `zenml integration install mlflow -y`, it downgrades `pydantic` to v1. This is why we manually added the same duplicated `pydantic` requirement in the integration definition as well. Keep in mind that the `mlflow` library is still using some features of `pydantic` v1 which are deprecated. So, if the integration is installed in your environment, you might run into some deprecation warnings.

### Label Studio

While we were working on updating our `pydantic` dependency, the `label-studio-sdk` released its 1.0 version. In this new version, `pydantic` v2 is also supported. The implementation and documentation of our Label Studio integration have been updated accordingly.

### Skypilot

With the switch to `pydantic` v2, the implementation of our `skypilot` integration mostly remained untouched. However, due to an incompatibility between the new version of `pydantic` and the `azure-cli`, the `skypilot[azure]` flavor cannot be installed at the same time, thus our `skypilot_azure` integration is currently deactivated. We are working on fixing this issue, and if you are using this integration in your workflows, we recommend staying on the previous version of ZenML until we can solve this issue.

### Tensorflow

The new version of `pydantic` creates a drift between the `tensorflow` and `typing_extensions` packages, and relaxing the dependencies here resolves the issue. At the same time, the upgrade to `kfp` v2 (in integrations like `kubeflow`, `tekton`, or `gcp`) bumps our `protobuf` dependency from `3.X` to `4.X`. To stay compatible with this requirement, the installed version of `tensorflow` needs to be `>=2.12.0`. While this change solves the dependency issues in most settings, we have bumped into some errors while using `tensorflow` 2.12.0 on Python 3.8 on Ubuntu. If you would like to use this integration, please consider using a higher Python version.

### Tekton

Similar to the `gcp` and `kubeflow` integrations, the old version of our `tekton` integration was not compatible with Pydantic v2 due to its `kfp` dependency. With the switch from `kfp` v1 to v2, we have adapted our implementation to use the new version of the `kfp` library and updated our documentation accordingly.

## Additional Changes

* We have also released a new version of `mlstacks` with Pydantic v2 support. If you are using it in your development environment, you have to upgrade your `mlstacks` package as well.
* Added `zenml.integrations.huggingface.steps.run_with_accelerate` to enable running any step using [`accelerate`](https://huggingface.co/docs/accelerate/en/index). This function is supported by a utility that wraps any step function into a CLI script (which is required by most distributed training tools).
* Fixed a memory leak that was observed while using the ZenML dashboard to view pipeline logs or artifact visualizations logged through an S3 Artifact Store linked to an AWS Service Connector.
* Previously, we had an option called `build_options` that allowed users to pass arguments to the docker build command. However, these options were only applied when building the parent image. On macOS with ARM architecture, one needs to specify `platform=linux/amd64` to the build command to leverage local caching of Docker image layers. We have added a way to specify these build options for the "main" ZenML build as well, not just the parent image build. Additionally, users can now specify a `.dockerignore` file for the parent image build, which was previously not possible.
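To make the `build_options` idea above concrete, here is a minimal sketch of how a mapping of build options could translate into `docker build` flags. The helper below is hypothetical and only illustrates the concept; it is not part of the ZenML API:

```python
from typing import Dict, List, Optional


def docker_build_command(tag: str,
                         build_options: Optional[Dict[str, str]] = None) -> List[str]:
    """Assemble a `docker build` invocation from a tag and an options mapping.

    Each `key: value` pair becomes a `--key value` flag, so
    {"platform": "linux/amd64"} yields `--platform linux/amd64`.
    """
    cmd = ["docker", "build", "-t", tag]
    for name, value in (build_options or {}).items():
        cmd += [f"--{name}", str(value)]
    cmd.append(".")  # build context: current directory
    return cmd


# On an ARM Mac, forcing the target platform enables layer caching:
print(docker_build_command("zenml-main:latest", {"platform": "linux/amd64"}))
```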
## What's Changed

* Extend migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2768
* Add retry docs by @htahir1 in https://github.com/zenml-io/zenml/pull/2770
* Fix nightly Docker build by @strickvl in https://github.com/zenml-io/zenml/pull/2769
* Start CTA and Cloud -> Pro renaming by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2773
* Add star CTA to `README` by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2777
* Use build python version if available by @schustmi in https://github.com/zenml-io/zenml/pull/2775
* Introduced Legacy env var in docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2783
* Fixing the nlp template for the upcoming pydantic upgrade by @bcdurak in https://github.com/zenml-io/zenml/pull/2778
* Full renaming away from cloud to pro by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2782
* Adjust docs url for flavors by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2772
* Fixed broken unit test on develop and fixed duplicate / by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2785
* Added timeout by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2786
* Bump NLP template by @avishniakov in https://github.com/zenml-io/zenml/pull/2787
* Raise error if Dockerfile does not exist by @schustmi in https://github.com/zenml-io/zenml/pull/2776
* Pin `numpy<2.0.0` by @avishniakov in https://github.com/zenml-io/zenml/pull/2789
* Fix partial logs loss in step operators with immutable FS in the backend by @avishniakov in https://github.com/zenml-io/zenml/pull/2788
* Upgrading to `pydantic` v2 by @bcdurak in https://github.com/zenml-io/zenml/pull/2543
* New CI/CD docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2784
* Improvements for running pipelines from the dashboard by @schustmi in https://github.com/zenml-io/zenml/pull/2781
* Accelerate runner helper method by @avishniakov in https://github.com/zenml-io/zenml/pull/2746
* Add `--ignore-errors` flag for `zenml artifact prune` by @strickvl in https://github.com/zenml-io/zenml/pull/2780
* Enable running a pipeline through the client by @schustmi in https://github.com/zenml-io/zenml/pull/2736
* Accelerated template LLMs by @avishniakov in https://github.com/zenml-io/zenml/pull/2797
* Separate actions from triggers by @schustmi in https://github.com/zenml-io/zenml/pull/2700
* Fix hook type definition and improve code completion for pipeline decorator by @schustmi in https://github.com/zenml-io/zenml/pull/2793
* Allow specifying build options for main image build by @schustmi in https://github.com/zenml-io/zenml/pull/2749
* Small improvements for yaml config files by @schustmi in https://github.com/zenml-io/zenml/pull/2796
* Docs for the `pydantic` migration guide by @bcdurak in https://github.com/zenml-io/zenml/pull/2801
* Bump mlflow to v2.14.1 by @christianversloot in https://github.com/zenml-io/zenml/pull/2779
* Bugfix fixing the installation script to use the right mlstacks branch by @bcdurak in https://github.com/zenml-io/zenml/pull/2803
* Fix S3 artifact store memory leak and other improvements by @stefannica in https://github.com/zenml-io/zenml/pull/2802

## 🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot who contributed to this release by bumping the `mlflow` version to 2.14.1.

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.58.2...0.60.0

# 0.58.2

The 0.58.2 minor release is packed with a set of improvements to ZenML logging and the ZenML Server.

With this release ZenML logging will:

- Offer pagination of the logs during fetching via REST API
- Store the full logs history on GCS Artifact Stores
- Be performant running logging-heavy tasks, like TQDM logging or logging of training in any Deep Learning framework (also TQDM-backed)

## What's Changed

* Update test-migrations.sh with latest versions by @safoinme in https://github.com/zenml-io/zenml/pull/2757
* Fix overriding expiration date for api tokens by @schustmi in https://github.com/zenml-io/zenml/pull/2753
* Step logs pagination by @schustmi in https://github.com/zenml-io/zenml/pull/2731
* Fix broken links (round 2) by @strickvl in https://github.com/zenml-io/zenml/pull/2760
* Remove default system flag in docker UV by @avishniakov in https://github.com/zenml-io/zenml/pull/2764
* Another batch of small fixes and expansions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2762
* Server scalability improvements by @stefannica in https://github.com/zenml-io/zenml/pull/2752
* Add option to start parallel kubernetes steps with delay by @schustmi in https://github.com/zenml-io/zenml/pull/2758
* Move `thread_limiter` to app startup event by @avishniakov in https://github.com/zenml-io/zenml/pull/2765
* Logging performance improvements and GCP logging fix by @avishniakov in https://github.com/zenml-io/zenml/pull/2755

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.58.1...0.58.2

# 0.58.1

The 0.58.1 release brings a set of minor enhancements and bugfixes to the ZenML framework, such as the ability to delete all versions of a pipeline using the Client/CLI, providing greater flexibility and control over pipeline management. Users can now specify Python package installer arguments. Furthermore, a fix has been implemented for the Sentencepiece tokenizer materializer.

We are also excited to announce the introduction of breadcrumbs to our dashboard to improve your navigation experience. This new feature allows you to easily visualize the path of your Pipelines, Models, and Artifacts, providing clear orientation, quick return to any section with a single click, and effortless navigation.

We'd like to give a special thanks to @eltociear for their first contribution.

## Docs re-work

We reworked the structure of our documentation pages to make it easier to find answers to your practical questions. Please do let us know if you have any feedback on the structure or the new style of the 'How To' section!

## What's Changed

* Add 0.58.0 to migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2730
* Print step names in color, again by @avishniakov in https://github.com/zenml-io/zenml/pull/2728
* Workflow to create JIRA tickets when Github Issues are created by @strickvl in https://github.com/zenml-io/zenml/pull/2724
* Allow specifying python package installer args by @schustmi in https://github.com/zenml-io/zenml/pull/2727
* Send workflow dispatch event to Cloud Plugins repo on release by @wjayesh in https://github.com/zenml-io/zenml/pull/2633
* Fix Nightly Release by @safoinme in https://github.com/zenml-io/zenml/pull/2711
* Fix `zenml go` images visibility in notebook by @strickvl in https://github.com/zenml-io/zenml/pull/2742
* Handle error when using `zenml info` with missing dependencies by @strickvl in https://github.com/zenml-io/zenml/pull/2725
* Add Discord Alerter into TOC by @strickvl in https://github.com/zenml-io/zenml/pull/2735
* Allow deleting all versions of a pipeline using the Client/CLI by @schustmi in https://github.com/zenml-io/zenml/pull/2745
* Misc fixes by @schustmi in https://github.com/zenml-io/zenml/pull/2732
* Move full SQLite DB migration test to slow CI by @strickvl in https://github.com/zenml-io/zenml/pull/2743
* Add system flag as default for uv by @schustmi in https://github.com/zenml-io/zenml/pull/2748
* Add how-to section & restructure/update documentation by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2705
* Fix typo in help text by @eltociear in https://github.com/zenml-io/zenml/pull/2750
* Add support for function types in source utils by @schustmi in https://github.com/zenml-io/zenml/pull/2738
* Fix Sentencepiece tokenizer materializer by @safoinme in https://github.com/zenml-io/zenml/pull/2751

## New Contributors

* @eltociear made their first contribution in https://github.com/zenml-io/zenml/pull/2750

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.58.0...0.58.1

# 0.58.0

## New Annotators

This release brings in three new integrations for our annotator stack component: [Prodigy](https://prodi.gy/), [Argilla](https://github.com/argilla-io/argilla) and [Pigeon](https://github.com/agermanidis/pigeon).

* Pigeon works within Jupyter notebooks and supports a limited feature set but is great for experimentation and demos.
* Argilla works both locally-deployed and when the annotation instance lives in the cloud (i.e. in the Hugging Face Spaces deployment which they recommend).
* Prodigy is a powerful closed-source annotation tool that allows for efficient data labeling. With this integration, users can now connect ZenML with Prodigy and leverage its annotation capabilities in their ML pipelines.

## Retry configuration for steps

This release also includes a new `retry` configuration for steps. The following parameters can be set:

- _**max_retries**_: The maximum number of times the step should be retried in case of failure.
- _**delay**_: The initial delay in seconds before the first retry attempt.
- _**backoff**_: The factor by which the delay should be multiplied after each retry attempt.
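Conceptually, these three parameters describe an exponential backoff schedule. A minimal, stdlib-only sketch of that semantics (illustrative only, not ZenML's internal implementation):

```python
import time


def run_with_retries(step_fn, max_retries: int = 3, delay: float = 10.0,
                     backoff: float = 2.0):
    """Retry `step_fn` up to `max_retries` times after its first failure,
    waiting `delay * backoff**attempt` seconds between attempts
    (10s, 20s, 40s for the defaults)."""
    for attempt in range(max_retries + 1):
        try:
            return step_fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted, surface the failure
            time.sleep(delay * backoff ** attempt)
```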
To use this in your code:

```python
from zenml import step
from zenml.config.retry_config import StepRetryConfig


@step(retry=StepRetryConfig(max_retries=3, delay=10, backoff=2))
def step_3() -> None:
    # Step implementation
    raise Exception("This is a test exception")
```

or using a `config.yaml`:

```yaml
steps:
  my_step:
    retry:
      max_retries: 3
      delay: 10
      backoff: 2
```

In addition, this release includes a number of bug fixes and documentation updates, such as a new LLM finetuning template powered by PEFT and BitsAndBytes and instructions for the new annotators.

## Breaking changes

* The interface for the base class of the annotator stack component has been updated to account for the fact that not all annotators will launch with a specific URL, so there is no longer a `url` argument passed in.

## 🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot who contributed to this release by bumping the `mlflow` version to 2.12.2.

## What's Changed

* Add more failure logs for code repositories and build reuse by @schustmi in https://github.com/zenml-io/zenml/pull/2697
* Prodigy annotator by @strickvl in https://github.com/zenml-io/zenml/pull/2655
* Bump mlflow support to version 2.12.2 by @christianversloot in https://github.com/zenml-io/zenml/pull/2693
* Add 0.57.1 to migration test scripts by @safoinme in https://github.com/zenml-io/zenml/pull/2702
* Pigeon annotator by @strickvl in https://github.com/zenml-io/zenml/pull/2641
* Allow credentials expiry to be configured for service connectors by @stefannica in https://github.com/zenml-io/zenml/pull/2704
* Argilla annotator by @strickvl in https://github.com/zenml-io/zenml/pull/2687
* Add `MySQL` and `mariadb` migration tests to Slow CI by @safoinme in https://github.com/zenml-io/zenml/pull/2686
* Misc small fixes by @schustmi in https://github.com/zenml-io/zenml/pull/2712
* Allow resetting server and user metadata by @schustmi in https://github.com/zenml-io/zenml/pull/2666
* Fix Docker failures in the CI by @avishniakov in https://github.com/zenml-io/zenml/pull/2716
* Add note about helm dependencies by @strickvl in https://github.com/zenml-io/zenml/pull/2709
* Add retry config for failing steps by @safoinme in https://github.com/zenml-io/zenml/pull/2627
* Update pyparsing version by @strickvl in https://github.com/zenml-io/zenml/pull/2710
* New ruff issue by @avishniakov in https://github.com/zenml-io/zenml/pull/2718
* PEFT LLM Template by @avishniakov in https://github.com/zenml-io/zenml/pull/2719
* Add `model_version_id` as part of the Model config by @avishniakov in https://github.com/zenml-io/zenml/pull/2703
* Add more runners to fast CI by @safoinme in https://github.com/zenml-io/zenml/pull/2706
* Fail faster on notebook installation and only clone / download the branch we need for `zenml go` by @strickvl in https://github.com/zenml-io/zenml/pull/2721
* Make a clear separation between server and dashboard API in the server configuration by @stefannica in https://github.com/zenml-io/zenml/pull/2722
* Update pymysql to fix CVE-2024-36039 by @stefannica in https://github.com/zenml-io/zenml/pull/2714
* Allow specifying privileged mode for Kubernetes orchestrator containers by @schustmi in https://github.com/zenml-io/zenml/pull/2717
* Don't use pod resources/affinity for kubernetes orchestrator pod by @schustmi in https://github.com/zenml-io/zenml/pull/2707
* Extra test for artifact listing by @avishniakov in https://github.com/zenml-io/zenml/pull/2715
* Pipeline run not tracked in cached artifact version by @avishniakov in https://github.com/zenml-io/zenml/pull/2713

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.57.1...0.58.0

# 0.57.1

This is a minor release that brings a variety of enhancements for the new dashboard release, a new update to the LLMOps guide (covering the use of rerankers in RAG pipelines) and [an updated README](README.md). It also introduces some new improvements to the service connectors.
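As a toy illustration of the reranking idea covered in the guide: a reranker re-scores the documents a retriever returned and reorders them before they reach the LLM. Everything below is hypothetical (a real pipeline would use a trained reranker model, not word overlap):

```python
from typing import List


def rerank(query: str, documents: List[str], top_k: int = 2) -> List[str]:
    """Re-order retrieved documents by a (toy) relevance score."""
    query_terms = set(query.lower().split())

    def score(doc: str) -> int:
        # Hypothetical scorer: count of query terms appearing in the doc.
        return len(query_terms & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:top_k]


docs = [
    "ZenML stacks bundle infrastructure components.",
    "Rerankers reorder retrieved documents by relevance.",
    "Bananas are yellow.",
]
print(rerank("how do rerankers order documents", docs, top_k=1))
```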
We'd like to give a special thanks to @ruvilonix for their first contribution. ## What's Changed * Add new versions to migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2663 * Resource settings import fix by @htahir1 in https://github.com/zenml-io/zenml/pull/2664 * Fix env variable for legacy dashboard by @schustmi in https://github.com/zenml-io/zenml/pull/2668 * Fix broken links in code examples by @strickvl in https://github.com/zenml-io/zenml/pull/2672 * Improve error message when trying to unpack a step artifact by @schustmi in https://github.com/zenml-io/zenml/pull/2674 * Prevent special whitespaces in the names of entities by @avishniakov in https://github.com/zenml-io/zenml/pull/2665 * Ensure extra flags aren't passed into `uv` integration install command by @strickvl in https://github.com/zenml-io/zenml/pull/2670 * `enable_cache` option shouldn't be set to `False` for one of the steps by @ruvilonix in https://github.com/zenml-io/zenml/pull/2574 * Add new dashboard links to create/deactivate CLI commands by @avishniakov in https://github.com/zenml-io/zenml/pull/2678 * Add reranking section to LLMOps guide by @strickvl in https://github.com/zenml-io/zenml/pull/2679 * Updated Readme by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2675 * Added Thumbnail by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2684 * [k8s orchestrator] Fix credentials refresh and don't use service connector for incluster auth by @wjayesh in https://github.com/zenml-io/zenml/pull/2671 * Prepare Release 0.57.1 by @safoinme in https://github.com/zenml-io/zenml/pull/2683 * Include email in event by @schustmi in https://github.com/zenml-io/zenml/pull/2692 * Set newsletter flag from email opted in by @schustmi in https://github.com/zenml-io/zenml/pull/2694 * Only report usage once pipeline run starts by @schustmi in https://github.com/zenml-io/zenml/pull/2680 * Reduced thumbnail size by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2689 
* Fix intermittent timeout issues with service connector sessions by @stefannica in https://github.com/zenml-io/zenml/pull/2690 * Include unique constraints in the database backup by @stefannica in https://github.com/zenml-io/zenml/pull/2695 * [k8s orch] Add option to specify separate service account for step pods by @wjayesh in https://github.com/zenml-io/zenml/pull/2688 * Update GCP registry docs by @safoinme in https://github.com/zenml-io/zenml/pull/2676 * Use service connector for boto session if possible by @schustmi in https://github.com/zenml-io/zenml/pull/2682 * Send missing user enriched events by @schustmi in https://github.com/zenml-io/zenml/pull/2696 ## New Contributors * @ruvilonix made their first contribution in https://github.com/zenml-io/zenml/pull/2574 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.57.0...0.57.1 # 0.57.0 We're excited to announce that we're open-sourcing our new and improved dashboard. This unifies the experience for OSS and cloud users, though OSS users will initially see some dashboard features unavailable in this launch release. We're open-sourcing our dashboard for a few reasons: - to ensure that the dashboard experience is consistent across all users, for both the open-source and cloud versions - to make it easier for us to maintain and develop the dashboard, as we can share components between the two versions - to allow OSS contributions (and self-hosting and modifications) to the new dashboard - to open up possibilities for future features, particularly for our OSS users New users of the ZenML in the dashboard will have a better experience thanks to a much-improved onboarding sequence: <div align="center"> <img width="80%" src="docs/book/.gitbook/assets/new_dashboard_rn_2.png" alt="Dashboard 2"/> </div> The dashboard will guide you through connecting to your server, setting up a stack, connecting to service connectors as well as running a pipeline. 
We’ve also improved the β€˜Settings’ section of the dashboard and this is the new home for configuration of your repositories, secrets, and connectors, along with some other options. <div align="center"> <img width="80%" src="docs/book/.gitbook/assets/new_dashboard_rn_3.png" alt="Dashboard 3"/> </div> ## What It Means for You If you're already a **cloud user**, not much will change for you. You're already using the new dashboard for pipelines, models and artifacts. Your experience won’t change and for the moment you’ll continue using the old dashboard for certain components (notably for stacks and components). If you're an **open-source user**, the new dashboard is now available to you as part of our latest release (0.57.0). You'll notice a completely refreshed design and a new DAG visualizer. <div align="center"> <img width="80%" src="docs/book/.gitbook/assets/new_dashboard_rn_4.png" alt="Dashboard 4"/> </div> Unfortunately, some dashboard features are not yet ready so you'll see instructions on how to access them via the CLI. We hope to have these features returned into the product soon. (If you have a strong opinion as to which you'd like to see first, please let us know!) Specifically, secrets, stacks, and service connectors are not yet implemented in the new dashboard. ### How to use the legacy dashboard The old dashboard is still available to you. To run with the legacy dashboard pass the `--legacy` flag when spinning it up: ```bash zenml up --legacy ``` Note that you can’t use both the new and old dashboard at the same time. If you’re self-hosting ZenML instead of using ZenML Pro, you can specify which dashboard you want to use by setting the `ZENML_SERVER_USE_LEGACY_DASHBOARD` environment variable pre-deployment. Specifying a boolean value for this variable will determine which dashboard gets served for your deployment. 
(There’s no dynamic switching between dashboards allowed, so if you wish to change which dashboard is used for a deployed server, you’ll need to redeploy the server after updating the environment variable.) If you’re using [SaaS ZenML Pro](https://cloud.zenml.io/), your experience won’t change with this release and your use of the dashboard remains the same. ## What's Changed * Add Comet to Experiment Trackers in TOC by @strickvl in https://github.com/zenml-io/zenml/pull/2637 * Fix Comet docs formatting by @strickvl in https://github.com/zenml-io/zenml/pull/2639 * ZenML Server activation and user on-boarding by @stefannica in https://github.com/zenml-io/zenml/pull/2630 * Slimmer and more secure Docker container images by @stefannica in https://github.com/zenml-io/zenml/pull/2617 * Add dashboard v2 source context by @schustmi in https://github.com/zenml-io/zenml/pull/2642 * Support New Dashboard release by @avishniakov in https://github.com/zenml-io/zenml/pull/2635 * Fix CI by @strickvl in https://github.com/zenml-io/zenml/pull/2645 * Misc/prepare release 0.57.0rc1 by @avishniakov in https://github.com/zenml-io/zenml/pull/2646 * Add rate limiting to user password reset operations by @stefannica in https://github.com/zenml-io/zenml/pull/2643 * Set zenml server name to default if not customized by @stefannica in https://github.com/zenml-io/zenml/pull/2647 * Docker release fix by @avishniakov in https://github.com/zenml-io/zenml/pull/2649 * Fix dashboard urls by @schustmi in https://github.com/zenml-io/zenml/pull/2648 * Enable analytics during db initialization if specified by @schustmi in https://github.com/zenml-io/zenml/pull/2652 * Better checks for user account updates to avoid Mass Assignment attacks by @stefannica in https://github.com/zenml-io/zenml/pull/2622 * Prepare 0.57.0-rc2 by @avishniakov in https://github.com/zenml-io/zenml/pull/2651 * Fix frontend analytics calls by @schustmi in https://github.com/zenml-io/zenml/pull/2653 * Label studio settings and 
optional port by @htahir1 in https://github.com/zenml-io/zenml/pull/2628
* Introduce default value fro enable_analytics by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2654
* Fix helm chart notes syntax by @wjayesh in https://github.com/zenml-io/zenml/pull/2656
* Add server env variable to fix activation by @schustmi in https://github.com/zenml-io/zenml/pull/2657
* Respect analytic ENV in local servers by @avishniakov in https://github.com/zenml-io/zenml/pull/2658
* Small fixes in helm docs by @schustmi in https://github.com/zenml-io/zenml/pull/2659

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.56.4...0.57.0

# 0.56.4

This release brings a variety of bug fixes and enhancements, including a new Comet Experiment Tracker integration, additional support for the `uv` package installer for `zenml integration ...` commands (which significantly improves the speed of integration installations and dependency management), and a new evaluation section in the LLMOps guide. It also includes a number of documentation updates and smaller fixes, such as a fix for the linkage of cached artifacts produced via `save_artifact` inside steps to the Model Control Plane.
## πŸ₯³ Community Contributions πŸ₯³

We'd like to give a special thanks to @christianversloot, who contributed to this release by bumping the `mlflow` version to 2.12.1.

## What's Changed

* Fix mariadb test script by @avishniakov in https://github.com/zenml-io/zenml/pull/2599
* Disable CSP headers for the openAPI docs pages and fix API docs building by @stefannica in https://github.com/zenml-io/zenml/pull/2598
* Add short motivating example for RAG pipeline by @strickvl in https://github.com/zenml-io/zenml/pull/2596
* Fix DB backup and restore and add database upgrade testing improvements by @stefannica in https://github.com/zenml-io/zenml/pull/2607
* Fix for #2556 by @avishniakov in https://github.com/zenml-io/zenml/pull/2603
* Fix AWS service connector resource ID regexp by @stefannica in https://github.com/zenml-io/zenml/pull/2611
* Add dry run for docs CI by @avishniakov in https://github.com/zenml-io/zenml/pull/2612
* Completing and refining the CLI documentation by @bcdurak in https://github.com/zenml-io/zenml/pull/2605
* Allow DB backup failures if the database version is 0.56.3 or earlier by @stefannica in https://github.com/zenml-io/zenml/pull/2613
* Mixpanel grouping improvements by @schustmi in https://github.com/zenml-io/zenml/pull/2610
* Add support for `uv` package installer for `zenml integration ...` commands by @strickvl in https://github.com/zenml-io/zenml/pull/2609
* Add evaluation section to LLMOps guide by @strickvl in https://github.com/zenml-io/zenml/pull/2614
* Fix GCP commands in docs for `project_id` by @strickvl in https://github.com/zenml-io/zenml/pull/2616
* Minor fix for GitGuardian warnings.
by @bcdurak in https://github.com/zenml-io/zenml/pull/2621
* Bump mlflow to version 2.12.1 by @christianversloot in https://github.com/zenml-io/zenml/pull/2618
* Updated security email by @htahir1 in https://github.com/zenml-io/zenml/pull/2625
* Add Comet Experiment Tracker integration by @strickvl in https://github.com/zenml-io/zenml/pull/2620
* Fix cached artifacts produced via `save_artifact` inside steps linkage to MCP by @avishniakov in https://github.com/zenml-io/zenml/pull/2619
* Update MCP instructions by @avishniakov in https://github.com/zenml-io/zenml/pull/2632
* Replace parse_obj by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2623
* Fix imports in for `Model` in documentation by @strickvl in https://github.com/zenml-io/zenml/pull/2631
* Return up-to-date `PipelineRunResponse` from pipeline run by @avishniakov in https://github.com/zenml-io/zenml/pull/2624

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.56.3...0.56.4

# 0.56.3

This release comes with a number of bug fixes and enhancements.

With this release you can benefit from the new Lambda Labs GPU orchestrator integration in your pipelines. [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) is a cloud provider that offers GPU instances for machine learning workloads.

This release also implements a few important security improvements to the ZenML server, mostly around Content Security Policies, and users are now required to provide their previous password during the password change process.

The documentation was also significantly improved with [the new AWS Cloud guide](https://docs.zenml.io/user-guide/cloud-guide/aws-guide) and [the LLMOps guide](https://docs.zenml.io/user-guide/llmops-guide) covering various aspects of the LLM lifecycle.

## πŸ₯³ Community Contributions πŸ₯³

We'd like to give a special thanks to @christianversloot, who contributed to this release by adding support for `Schedule.start_time` to the HyperAI orchestrator.
## What's Changed * Really run migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2562 * Interact with feature gate by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2492 * Allow for logs to be unformatted / without colors by @strickvl in https://github.com/zenml-io/zenml/pull/2544 * Add VS Code extension to README / docs by @strickvl in https://github.com/zenml-io/zenml/pull/2568 * Allow loading of artifacts without needing to activate the artifact store (again) by @avishniakov in https://github.com/zenml-io/zenml/pull/2545 * Minor fix by @htahir1 in https://github.com/zenml-io/zenml/pull/2578 * [DOCS] Fix code block in Vertex docs by @wjayesh in https://github.com/zenml-io/zenml/pull/2580 * Added an AWS cloud guide by @htahir1 in https://github.com/zenml-io/zenml/pull/2570 * Update AWS cloud guide by @strickvl in https://github.com/zenml-io/zenml/pull/2581 * More docs fixes by @htahir1 in https://github.com/zenml-io/zenml/pull/2585 * Bugfix for the `pyyaml_include` version for `copier` by @bcdurak in https://github.com/zenml-io/zenml/pull/2586 * Update fastapi and orjson to fix python-multipart and orjson vulnerabilities by @stefannica in https://github.com/zenml-io/zenml/pull/2582 * Add security headers to the ZenML server by @stefannica in https://github.com/zenml-io/zenml/pull/2583 * Fix and update AWS cloud guide by @strickvl in https://github.com/zenml-io/zenml/pull/2591 * Add `start_time` support to HyperAI orchestrator scheduled pipelines by @christianversloot in https://github.com/zenml-io/zenml/pull/2572 * Make `secure` an optional import by @stefannica in https://github.com/zenml-io/zenml/pull/2592 * RAG guide for docs by @strickvl in https://github.com/zenml-io/zenml/pull/2525 * Update test-migrations scripts with new versions `0.56.2` by @safoinme in https://github.com/zenml-io/zenml/pull/2565 * Check old password during password change and add missing CLI commands by @stefannica in 
https://github.com/zenml-io/zenml/pull/2587
* Add a note about the `f` prefix being needed for template strings by @strickvl in https://github.com/zenml-io/zenml/pull/2593
* Skypilot: Lambda Edition by @safoinme in https://github.com/zenml-io/zenml/pull/2526
* Use the correct validity for EKS API tokens and handle long-running Kubernetes pipelines by @stefannica in https://github.com/zenml-io/zenml/pull/2589
* Catch missing jupyter installation for `zenml go` by @strickvl in https://github.com/zenml-io/zenml/pull/2571
* Allow resources required for the fastapi OpenAPI docs in the CSP header by @stefannica in https://github.com/zenml-io/zenml/pull/2595

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.56.2...0.56.3

# 0.56.2

This release replaces 0.56.0 and 0.56.1 and fixes the major migration bugs that were in those yanked releases. Please upgrade directly to 0.56.2, skipping 0.56.0 and 0.56.1, to prevent unexpected migration issues.

Note that 0.56.0 and 0.56.1 were removed from PyPI due to an issue with the alembic versions + migration which could affect the database state. This release fixes that issue.

This release introduces a wide array of new features, enhancements, and bug fixes, with a strong emphasis on elevating the user experience and streamlining machine learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component! This release also comes with a breaking change to the services architecture.

## Breaking Change

A significant change in this release is the migration of the `Service` (ZenML's technical term for deployment) registration and deployment from local or remote environments to the ZenML server. This change will be reflected in an upcoming tab in the dashboard which will allow users to explore and see the deployed models in the dashboard with their live status and metadata.
This architectural shift also simplifies the model deployer abstraction and streamlines the model deployment process for users by moving from limited built-in steps to a more documented and flexible approach.

Important note: If you have models that you previously deployed with ZenML, you might want to redeploy them so that they are stored in the ZenML server and tracked by ZenML, ensuring they appear in the dashboard.

Additionally, the `find_model_server` method now retrieves models (services) from the ZenML server instead of local or remote deployment environments. As a result, any usage of `find_model_server` will only return newly deployed models stored in the server.

It is also no longer recommended to call service functions like `service.start()`. Instead, use `model_deployer.start_model_server(service_id)`, which will allow ZenML to update the changed status of the service in the server.

### Starting a service

**Old syntax:**

```python
from zenml import step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)
```

**New syntax:**

```python
from zenml import step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)
```

### Enabling continuous deployment

The parameter previously used in the `deploy_model` method to replace an existing service (matched only on pipeline name and step name, without taking other parameters or configurations into account) has been superseded. We now have a new parameter, `continuous_deployment_mode`, that allows you to
enable continuous deployment for the service. This will ensure that the service is updated with the latest version if it's on the same pipeline and step and the service is not already running. Otherwise, any new deployment with different configurations will create a new service.

```python
from typing import Optional

from zenml import step, get_step_context
from zenml.client import Client
from zenml.constants import DEFAULT_SERVICE_START_STOP_TIMEOUT
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT,
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service
```

## Major Features and Enhancements:

* A new `Huggingface Model Deployer` has been introduced, allowing you to seamlessly deploy your Huggingface models using ZenML. (Thank you so much @dudeperf3ct for the contribution!)
* Faster Integration and Dependency Management: ZenML now leverages the `uv` library, significantly improving the speed of integration installations and dependency management, resulting in a more streamlined and efficient workflow.
* Enhanced Logging and Status Tracking: Logging has been improved, providing better visibility into the state of your ZenML services.
* Improved Artifact Store Isolation: ZenML now prevents unsafe operations that access data outside the scope of the artifact store, ensuring better isolation and security.
* Added an admin user notion for user accounts, with certain operations performed via the REST interface now restricted to admin users only.
* Rate limiting for the login API, to prevent abuse and protect the server from potential security threats.
* The LLM template is now supported in ZenML, allowing you to use it for your pipelines.

## πŸ₯³ Community Contributions πŸ₯³

We'd like to give a special thanks to @dudeperf3ct, who contributed to this release by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f for their contribution to this release by adding a new attribute to the `Kaniko` image builder. Additionally, we'd like to thank @christianversloot for his contributions to this release.

## What's Changed

* Upgrading SQLModel to the latest version by @bcdurak in https://github.com/zenml-io/zenml/pull/2452
* Remove KServe integration by @safoinme in https://github.com/zenml-io/zenml/pull/2495
* Upgrade migration testing with 0.55.5 by @avishniakov in https://github.com/zenml-io/zenml/pull/2501
* Relax azure, gcfs and s3 dependencies by @strickvl in https://github.com/zenml-io/zenml/pull/2498
* Use HTTP forwarded headers to detect the real origin of client devices by @stefannica in https://github.com/zenml-io/zenml/pull/2499
* Update README.md for quickstart colab link by @strickvl in https://github.com/zenml-io/zenml/pull/2505
* Add sequential migration tests for MariaDB and MySQL by @strickvl in https://github.com/zenml-io/zenml/pull/2502
* Huggingface Model Deployer by @dudeperf3ct in https://github.com/zenml-io/zenml/pull/2376
* Use `uv` to speed up pip installs & the CI in general by @strickvl in https://github.com/zenml-io/zenml/pull/2442
* Handle corrupted or empty global configuration file by @stefannica in
https://github.com/zenml-io/zenml/pull/2508 * Add admin users notion by @avishniakov in https://github.com/zenml-io/zenml/pull/2494 * Remove dashboard from gitignore by @safoinme in https://github.com/zenml-io/zenml/pull/2517 * Colima / Homebrew fix by @strickvl in https://github.com/zenml-io/zenml/pull/2512 * [HELM] Remove extra environment variable assignment by @wjayesh in https://github.com/zenml-io/zenml/pull/2518 * Allow installing packages using UV by @schustmi in https://github.com/zenml-io/zenml/pull/2510 * Additional fields for track events by @bcdurak in https://github.com/zenml-io/zenml/pull/2507 * Check if environment key is set before deleting in HyperAI orchestrator by @christianversloot in https://github.com/zenml-io/zenml/pull/2511 * Fix the pagination in the database backup by @stefannica in https://github.com/zenml-io/zenml/pull/2522 * Bump mlflow to version 2.11.1 by @christianversloot in https://github.com/zenml-io/zenml/pull/2524 * Add docs for uv installation by @schustmi in https://github.com/zenml-io/zenml/pull/2527 * Fix bug in HyperAI orchestrator depends_on parallelism by @christianversloot in https://github.com/zenml-io/zenml/pull/2523 * Upgrade pip in docker images by @schustmi in https://github.com/zenml-io/zenml/pull/2528 * Fix node selector and other fields for DB job in helm chart by @stefannica in https://github.com/zenml-io/zenml/pull/2531 * Revert "Upgrading SQLModel to the latest version" by @bcdurak in https://github.com/zenml-io/zenml/pull/2515 * Add `pod_running_timeout` attribute to `Kaniko` image builder by @moesio-f in https://github.com/zenml-io/zenml/pull/2509 * Add test to install dashboard script by @strickvl in https://github.com/zenml-io/zenml/pull/2521 * Sort pipeline namespaces by last run by @schustmi in https://github.com/zenml-io/zenml/pull/2514 * Add support for LLM template by @schustmi in https://github.com/zenml-io/zenml/pull/2519 * Rate limiting for login API by @avishniakov in 
https://github.com/zenml-io/zenml/pull/2484 * Try/catch for Docker client by @christianversloot in https://github.com/zenml-io/zenml/pull/2513 * Fix config file in starter guide by @schustmi in https://github.com/zenml-io/zenml/pull/2534 * Log URL for pipelines and model versions when running a pipeline by @wjayesh in https://github.com/zenml-io/zenml/pull/2506 * Add security exclude by @schustmi in https://github.com/zenml-io/zenml/pull/2541 * Update error message around notebook use by @strickvl in https://github.com/zenml-io/zenml/pull/2536 * Cap `fsspec` for Huggingface integration by @avishniakov in https://github.com/zenml-io/zenml/pull/2542 * Fix integration materializers' URLs in docs by @strickvl in https://github.com/zenml-io/zenml/pull/2538 * Bug fix HyperAI orchestrator: Offload scheduled pipeline execution to bash script by @christianversloot in https://github.com/zenml-io/zenml/pull/2535 * Update `pip check` command to use `uv` by @strickvl in https://github.com/zenml-io/zenml/pull/2520 * Implemented bitbucket webhook event source by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2481 * Add ZenMLServiceType and update service registration by @safoinme in https://github.com/zenml-io/zenml/pull/2471 * Prepare release 0.56.0 by @safoinme in https://github.com/zenml-io/zenml/pull/2546 * Fix formatting and release workflow by @strickvl in https://github.com/zenml-io/zenml/pull/2549 * Fix release workflow by @strickvl in https://github.com/zenml-io/zenml/pull/2550 * Fix pipelines and model links for the cloud dashboard by @wjayesh in https://github.com/zenml-io/zenml/pull/2554 * Make starlette non-must for client by @avishniakov in https://github.com/zenml-io/zenml/pull/2553 * Bump MLFlow to version 2.11.2 by @christianversloot in https://github.com/zenml-io/zenml/pull/2552 * Prepare release 0.56.1 by @avishniakov in https://github.com/zenml-io/zenml/pull/2555 * Updated neptune documentation by @SiddhantSadangi in 
https://github.com/zenml-io/zenml/pull/2548
* 0.56.0 and 0.56.1 in testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2557
* Only install uv once by @schustmi in https://github.com/zenml-io/zenml/pull/2558
* Bump MLFlow to version 2.11.3 by @christianversloot in https://github.com/zenml-io/zenml/pull/2559
* Update docs with warning about pickle materializer insecurity by @avishniakov in https://github.com/zenml-io/zenml/pull/2561
* Add service table migration by @safoinme in https://github.com/zenml-io/zenml/pull/2563

## New Contributors

* @dudeperf3ct made their first contribution in https://github.com/zenml-io/zenml/pull/2376
* @moesio-f made their first contribution in https://github.com/zenml-io/zenml/pull/2509
* @SiddhantSadangi made their first contribution in https://github.com/zenml-io/zenml/pull/2548

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.55.5...0.56.2

# 0.55.5

This patch contains a number of bug fixes and security improvements.

We improved the isolation of artifact stores so that artifacts cannot be stored or accessed outside of the configured artifact store scope. Such unsafe operations are no longer allowed. This may have an impact on existing codebases if you have used unsafe file operations in the past.

To illustrate such a side effect, consider a remote S3 artifact store configured for the path `s3://some_bucket/some_sub_folder`. If your code calls `artifact_store.open("s3://some_bucket/some_other_folder/dummy.txt", "w")`, this operation is now considered unsafe because it accesses data outside the scope of the artifact store. If you really need this to achieve your goals, consider switching to `s3fs` or similar libraries for such cases.

Also with this release, the server global configuration is no longer stored on the server file system, to prevent exposure of sensitive information.
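The scoping rule described above boils down to a prefix check on paths. A self-contained sketch of the idea (an illustration only, not ZenML's actual implementation):

```python
def is_within_store(path: str, store_root: str) -> bool:
    """Return True if `path` lies inside the artifact store root."""
    root = store_root.rstrip("/")
    # Exact root match, or a path strictly under the root directory.
    # Appending "/" avoids false positives like "some_sub_folder_2".
    return path == root or path.startswith(root + "/")

# Inside the configured scope: allowed.
print(is_within_store(
    "s3://some_bucket/some_sub_folder/data.txt",
    "s3://some_bucket/some_sub_folder",
))  # True

# Outside the configured scope: now rejected.
print(is_within_store(
    "s3://some_bucket/some_other_folder/dummy.txt",
    "s3://some_bucket/some_sub_folder",
))  # False
```
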
User entities are now uniquely constrained to prevent the creation of duplicate users under certain race conditions. ## What's Changed * Change runnerset name to ubuntu-runners by @safoinme in https://github.com/zenml-io/zenml/pull/2489 * Allow latest `ruff` versions by @strickvl in https://github.com/zenml-io/zenml/pull/2487 * Uniquely constrained users table by @avishniakov in https://github.com/zenml-io/zenml/pull/2483 * Add option to add base URL for zenml server (with support for cloud) by @wjayesh in https://github.com/zenml-io/zenml/pull/2464 * Improve Artifact Store isolation by @avishniakov in https://github.com/zenml-io/zenml/pull/2490 * Don't write the global config to file on server by @stefannica in https://github.com/zenml-io/zenml/pull/2491 * Add versions for DB migration testing by @strickvl in https://github.com/zenml-io/zenml/pull/2486 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.55.4...0.55.5 # 0.55.4 This release brings a host of enhancements and fixes across the board, including significant improvements to our services logging and status, the integration of model saving to the registry via CLI methods, and more robust handling of parallel pipelines and database entities. We've also made strides in optimizing MLflow interactions, enhancing our documentation, and ensuring our CI processes are more robust. Additionally, we've tackled several bug fixes and performance improvements, making our platform even more reliable and user-friendly. We'd like to give a special thanks to @christianversloot and @francoisserra for their contributions. 
## What's Changed * Bump mlflow to 2.10.2 by @christianversloot in https://github.com/zenml-io/zenml/pull/2444 * Improve services logging and status by @safoinme in https://github.com/zenml-io/zenml/pull/2436 * Add `save models to registry` setting of a model to CLI methods by @avishniakov in https://github.com/zenml-io/zenml/pull/2447 * Parallel pipelines can create entities in DB by @avishniakov in https://github.com/zenml-io/zenml/pull/2446 * Fix MlFlow TF autlogging excessive warnings by @avishniakov in https://github.com/zenml-io/zenml/pull/2449 * Fix and improve integration deps checker by @stefannica in https://github.com/zenml-io/zenml/pull/2455 * Add migration test version + use self-hosted runners for release by @strickvl in https://github.com/zenml-io/zenml/pull/2450 * Enable running pipeline via REST by @schustmi in https://github.com/zenml-io/zenml/pull/2389 * Faster mlflow `list_model_versions` by @avishniakov in https://github.com/zenml-io/zenml/pull/2460 * Avoid exposure of tracking uri to metadata by @avishniakov in https://github.com/zenml-io/zenml/pull/2458 * Some important docs updates by @htahir1 in https://github.com/zenml-io/zenml/pull/2463 * Fix CI by @strickvl in https://github.com/zenml-io/zenml/pull/2467 * Fix local Airflow install + docs instructions by @strickvl in https://github.com/zenml-io/zenml/pull/2459 * Update `.coderabbit.yaml` by @strickvl in https://github.com/zenml-io/zenml/pull/2470 * Prevent templates update from formatting the whole codebase by @avishniakov in https://github.com/zenml-io/zenml/pull/2469 * Telemetry guarding for CI & editable installs by @strickvl in https://github.com/zenml-io/zenml/pull/2468 * Add Vertex Step Operator network parameter by @francoisserra in https://github.com/zenml-io/zenml/pull/2398 * Allow integration export to overwrite a pre-existing file by @strickvl in https://github.com/zenml-io/zenml/pull/2466 * Fix `log_model_metadata` with explicit name and version by @avishniakov in 
https://github.com/zenml-io/zenml/pull/2465 * Triggers, actions, event sources - base abstractions and github and pipeline run implementations by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2312 * Mount zenml config path as empty dir by @stefannica in https://github.com/zenml-io/zenml/pull/2472 * Fix broken docs links by @strickvl in https://github.com/zenml-io/zenml/pull/2473 * Use `uv pip compile` for environment setup in CI by @strickvl in https://github.com/zenml-io/zenml/pull/2474 * MLflow fix for tests on Mac Python 3.9 and 3.10 by @strickvl in https://github.com/zenml-io/zenml/pull/2462 * Improve custom data types docs by @avishniakov in https://github.com/zenml-io/zenml/pull/2476 * Reflect env variables on global configuration by @safoinme in https://github.com/zenml-io/zenml/pull/2371 * Fix zenml deploy secret stores by @safoinme in https://github.com/zenml-io/zenml/pull/2454 * Don't fail when workload manager source fails to load by @schustmi in https://github.com/zenml-io/zenml/pull/2478 * Add analytics events for cloud onboarding by @schustmi in https://github.com/zenml-io/zenml/pull/2456 * Race condition on creating new users allows duplicate usernames by @avishniakov in https://github.com/zenml-io/zenml/pull/2479 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.55.3...0.55.4 # 0.55.3 This patch comes with a variety of bug fixes and documentation updates. With this release you can now download files directly from artifact versions that you get back from the client without the need to materialize them. 
If you would like to bypass materialization entirely and just download the data or files associated with a particular artifact version, you can use the `download_files` method: ```python from zenml.client import Client client = Client() artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset") artifact.download_files("path/to/save.zip") ``` ## What's Changed * Backport: Add HyperAI to TOC (#2406) by @strickvl in https://github.com/zenml-io/zenml/pull/2407 * Fix conditional statements in GitHub workflows by @strickvl in https://github.com/zenml-io/zenml/pull/2404 * Ensure proper spacing in error messages by @christianversloot in https://github.com/zenml-io/zenml/pull/2399 * Fix hyperai markdown table by @strickvl in https://github.com/zenml-io/zenml/pull/2426 * Upgrade Vertex integration `google-cloud-aiplatform` minimum required version to 1.34.0 by @francoisserra in https://github.com/zenml-io/zenml/pull/2428 * Close code block left open in the docs by @jlopezpena in https://github.com/zenml-io/zenml/pull/2432 * Simplify HF example and notify when cache is down by @safoinme in https://github.com/zenml-io/zenml/pull/2300 * Adding the latest version id and name to the artifact response by @bcdurak in https://github.com/zenml-io/zenml/pull/2430 * Adding the ID of the producer pipeline run to artifact versions by @bcdurak in https://github.com/zenml-io/zenml/pull/2431 * Add vulnerability notice to README by @strickvl in https://github.com/zenml-io/zenml/pull/2437 * REVERTED: Allow more recent `adlfs` and `s3fs` versions by @strickvl in https://github.com/zenml-io/zenml/pull/2402 * Add new property for filtering service account events by @strickvl in https://github.com/zenml-io/zenml/pull/2405 * Add `download_files` method for `ArtifactVersion` by @strickvl in https://github.com/zenml-io/zenml/pull/2434 * Fixing `update_model`s and revert #2402 by @bcdurak in https://github.com/zenml-io/zenml/pull/2440 **Full Changelog**: 
https://github.com/zenml-io/zenml/compare/0.55.2...0.55.3 # 0.55.2 This patch comes with a variety of new features, bug-fixes, and documentation updates. Some of the most important changes include: - The ability to add tags to outputs through the step context - Allowing the secret stores to utilize the implicit authentication method of AWS/GCP/Azure Service Connectors - [Lazy loading client methods](https://docs.zenml.io/v/docs/user-guide/advanced-guide/data-management/late-materialization) in a pipeline context - Updates on the Vertex orchestrator to switch to the native VertexAI scheduler - The new [HyperAI](https://hyperai.ai) integration featuring a new orchestrator and service connector - Bumping the mlflow version to 2.10.0 We'd like to give a special thanks to @christianversloot and @francoisserra for their contributions. ## What's Changed * `0.55.1` in migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2368 * Credential-less AWS/GCP/Azure Secrets Store support by @stefannica in https://github.com/zenml-io/zenml/pull/2365 * Small docs updates by @strickvl in https://github.com/zenml-io/zenml/pull/2359 * generic `Client()` getters lazy loading by @avishniakov in https://github.com/zenml-io/zenml/pull/2323 * Added slack settings OSSK-382 by @htahir1 in https://github.com/zenml-io/zenml/pull/2378 * Label triggered slow ci by @avishniakov in https://github.com/zenml-io/zenml/pull/2379 * Remove unused `is-slow-ci` input from fast and slow integration testing by @strickvl in https://github.com/zenml-io/zenml/pull/2382 * Add deprecation warning for `ExternalArtifact` non-value features by @avishniakov in https://github.com/zenml-io/zenml/pull/2375 * Add telemetry pipeline run ends by @htahir1 in https://github.com/zenml-io/zenml/pull/2377 * Updating the `update_model` decorator by @bcdurak in https://github.com/zenml-io/zenml/pull/2136 * Mocked API docs building by @avishniakov in https://github.com/zenml-io/zenml/pull/2360 * Add outputs 
tags function by @avishniakov in https://github.com/zenml-io/zenml/pull/2383
* Bump mlflow to v2.10.0 by @christianversloot in https://github.com/zenml-io/zenml/pull/2374
* Fix sharing of model versions by @schustmi in https://github.com/zenml-io/zenml/pull/2380
* Fix GCP service connector login to overwrite existing valid credentials by @stefannica in https://github.com/zenml-io/zenml/pull/2392
* Update `has_custom_name` for legacy artifacts by @avishniakov in https://github.com/zenml-io/zenml/pull/2384
* Use native VertexAI scheduler capability instead of old GCP official workaround by @francoisserra in https://github.com/zenml-io/zenml/pull/2310
* HyperAI integration: orchestrator and service connector by @christianversloot in https://github.com/zenml-io/zenml/pull/2372

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.55.1...0.55.2

# 0.55.1

**If you are actively using the Model Control Plane features, we suggest that you directly upgrade to 0.55.1, bypassing 0.55.0.**

This is a patch release bringing backwards compatibility for the breaking changes introduced in **0.55.0**, so that appropriate migration actions can be performed at your desired pace. Please refer to [the 0.55.0 release notes](https://github.com/zenml-io/zenml/releases/tag/0.55.0) for specific information on breaking changes and how to update your code to align with the new way of doing things.

We have also updated our documentation to serve you better and introduced `PipelineNamespace` models in our API.

This release also adds database recovery in case an upgrade fails to migrate the database to a newer version of ZenML.
## What's Changed * Update skypilot docs by @safoinme in https://github.com/zenml-io/zenml/pull/2344 * Fast CI / Slow CI by @strickvl in https://github.com/zenml-io/zenml/pull/2268 * Add repeating tests and instafail error logging to testing in CI by @strickvl in https://github.com/zenml-io/zenml/pull/2334 * Added more info about metadata by @htahir1 in https://github.com/zenml-io/zenml/pull/2346 * Use GitHub as trusted publisher for PyPI publication by @strickvl in https://github.com/zenml-io/zenml/pull/2343 * Fix code in docs/questions about MCP by @wjayesh in https://github.com/zenml-io/zenml/pull/2340 * Update release notes for 0.55.0 by @avishniakov in https://github.com/zenml-io/zenml/pull/2351 * Fixed metadata docs by @htahir1 in https://github.com/zenml-io/zenml/pull/2350 * Add generate test duration file cron by @safoinme in https://github.com/zenml-io/zenml/pull/2347 * CI comments for slow CI and more conditional membership checking by @strickvl in https://github.com/zenml-io/zenml/pull/2356 * Backward compatible `ModelVersion` by @avishniakov in https://github.com/zenml-io/zenml/pull/2357 * Add model version created to analytics by @avishniakov in https://github.com/zenml-io/zenml/pull/2352 * Make CI run on the appropriate branch by @strickvl in https://github.com/zenml-io/zenml/pull/2358 * Add MVP pipeline namespace support by @schustmi in https://github.com/zenml-io/zenml/pull/2353 * Apply docker run args to skypilot orchestrator VM by @schustmi in https://github.com/zenml-io/zenml/pull/2342 * πŸ“ Minor docs improvements (basic step example) by @plattenschieber in https://github.com/zenml-io/zenml/pull/2348 * Add DB backup and recovery during DB schema migrations by @wjayesh in https://github.com/zenml-io/zenml/pull/2158 * Fix CI issues by @strickvl in https://github.com/zenml-io/zenml/pull/2363 ## New Contributors * @plattenschieber made their first contribution in https://github.com/zenml-io/zenml/pull/2348 **Full Changelog**: 
https://github.com/zenml-io/zenml/compare/0.55.0...0.55.1

# 0.55.0

This release comes with a range of new features, bug fixes and documentation updates. The most notable changes are the ability to lazily load artifacts, models, and their metadata inside pipeline code via the pipeline context object, and the ability to link artifacts to model versions implicitly via the `save_artifact` function.

Additionally, we've updated the documentation to include a new starter guide on how to manage artifacts, and a new production guide that walks you through how to configure your pipelines to run in production.

## Breaking Change

The `ModelVersion` concept was renamed to `Model` going forward, which affects code bases using the Model Control Plane feature. **This change is not backward compatible in 0.55.0, but backward compatibility is restored in 0.55.1**.

### Pipeline decorator `@pipeline(model_version=ModelVersion(...))` -> `@pipeline(model=Model(...))`

**Old syntax:**

```python
from zenml import pipeline, ModelVersion

@pipeline(model_version=ModelVersion(name="model_name",version="v42"))
def p():
    ...
```

**New syntax:**

```python
from zenml import pipeline, Model

@pipeline(model=Model(name="model_name",version="v42"))
def p():
    ...
```

### Step decorator `@step(model_version=ModelVersion(...))` -> `@step(model=Model(...))`

**Old syntax:**

```python
from zenml import step, ModelVersion

@step(model_version=ModelVersion(name="model_name",version="v42"))
def s():
    ...
```

**New syntax:**

```python
from zenml import step, Model

@step(model=Model(name="model_name",version="v42"))
def s():
    ...
```

### Acquiring model configuration from pipeline/step context

**Old syntax:**

```python
from zenml import pipeline, step, ModelVersion, get_step_context, get_pipeline_context

@pipeline(model_version=ModelVersion(name="model_name",version="v42"))
def p():
    model_version = get_pipeline_context().model_version
    ...
@step(model_version=ModelVersion(name="model_name",version="v42")) def s(): model_version = get_step_context().model_version ... ``` **New syntax:** ```python from zenml import pipeline, step, Model, get_step_context, get_pipeline_context @pipeline(model=Model(name="model_name",version="v42")) def p(): model = get_pipeline_context().model ... @step(model=Model(name="model_name",version="v42")) def s(): model = get_step_context().model ... ``` ### Usage of model configuration inside pipeline YAML config file **Old syntax:** ```yaml model_version: name: model_name version: v42 ... ``` **New syntax:** ```yaml model: name: model_name version: v42 ... ``` ### `ModelVersion.metadata` -> `Model.run_metadata` **Old syntax:** ```python from zenml import ModelVersion def s(): model_version = ModelVersion(name="model_name",version="production") some_metadata = model_version.metadata["some_metadata"].value ... ``` **New syntax:** ```python from zenml import Model def s(): model = Model(name="model_name",version="production") some_metadata = model.run_metadata["some_metadata"].value ... 
``` ## What's Changed * Remove --name from service account creation in docs by @christianversloot in https://github.com/zenml-io/zenml/pull/2295 * Secrets store hot backup and restore by @stefannica in https://github.com/zenml-io/zenml/pull/2277 * Updating the README of the e2e template by @bcdurak in https://github.com/zenml-io/zenml/pull/2299 * Add missing docstring for Skypilot setting by @schustmi in https://github.com/zenml-io/zenml/pull/2305 * Update Manage artifacts starter guide docs by @JonathanLoscalzo in https://github.com/zenml-io/zenml/pull/2301 * Add some tiny details and moved around a page by @htahir1 in https://github.com/zenml-io/zenml/pull/2297 * Model links lazy evaluation in pipeline code by @avishniakov in https://github.com/zenml-io/zenml/pull/2205 * Link artifact to MCP entity via function call or implicitly in `save_artifact` by @avishniakov in https://github.com/zenml-io/zenml/pull/2298 * Extend MCP/ACP listing capabilities by @avishniakov in https://github.com/zenml-io/zenml/pull/2285 * Add latest `zenml` version to migration testing scripts by @strickvl in https://github.com/zenml-io/zenml/pull/2294 * Remove Python 3.7 check for Langchain Integration by @strickvl in https://github.com/zenml-io/zenml/pull/2308 * Allow spellcheck to run for docs changes by @strickvl in https://github.com/zenml-io/zenml/pull/2307 * Add helper message for `zenml up --blocking` login by @strickvl in https://github.com/zenml-io/zenml/pull/2290 * Fix secret migration from external store in helm deployment by @stefannica in https://github.com/zenml-io/zenml/pull/2315 * Small docs fixes by @htahir1 in https://github.com/zenml-io/zenml/pull/2314 * Rename model version to a model by @avishniakov in https://github.com/zenml-io/zenml/pull/2267 * Updating the docs after the Skypilot tests by @bcdurak in https://github.com/zenml-io/zenml/pull/2311 * Remove unused Segment / Mixpanel generation workflow and script by @strickvl in 
https://github.com/zenml-io/zenml/pull/2319
* Add `log_step_metadata` utility function by @strickvl in https://github.com/zenml-io/zenml/pull/2322
* Add conditional checks to prevent scheduled actions running inside forked repositories by @strickvl in https://github.com/zenml-io/zenml/pull/2317
* RBAC resource sharing by @schustmi in https://github.com/zenml-io/zenml/pull/2320
* Fix typo in migration downgrade by @avishniakov in https://github.com/zenml-io/zenml/pull/2337
* Separate `skypilot` flavors into different folders by @safoinme in https://github.com/zenml-io/zenml/pull/2332
* Add warning for GCP integration when using Python >=3.11 by @strickvl in https://github.com/zenml-io/zenml/pull/2333

## New Contributors
* @JonathanLoscalzo made their first contribution in https://github.com/zenml-io/zenml/pull/2301

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.54.1...0.55.0

# 0.54.1

Release 0.54.1 includes a mix of updates, new additions, and bug fixes. The most notable changes are the new production guide, support for multi-step VMs in the Skypilot orchestrator (you can configure a step to run on a specific VM or run the entire pipeline on a single VM), and some improvements to the Model Control Plane.
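Per-step VM configuration with the Skypilot orchestrator is driven through settings. The fragment below is a hypothetical run-configuration sketch, not an exact reproduction of the feature: the `orchestrator.vm_aws` settings key, the step name, and the resource fields shown are assumptions based on the Skypilot AWS flavor and may differ in your version.

```yaml
# Hypothetical pipeline run configuration: pin one step to its own VM
# while the rest of the pipeline shares the defaults.
settings:
  orchestrator.vm_aws:  # assumed settings key for the Skypilot AWS flavor
    instance_type: t3.medium
    region: eu-central-1
steps:
  trainer:  # hypothetical step name
    settings:
      orchestrator.vm_aws:
        instance_type: g4dn.xlarge  # give only the training step a GPU VM
        use_spot: true
```

Check the Skypilot orchestrator documentation for the exact settings class and field names supported by your ZenML version.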
## What's Changed * Bump aquasecurity/trivy-action from 0.16.0 to 0.16.1 by @dependabot in https://github.com/zenml-io/zenml/pull/2244 * Bump crate-ci/typos from 1.16.26 to 1.17.0 by @dependabot in https://github.com/zenml-io/zenml/pull/2245 * Add YAML formatting standardization to formatting & linting scripts by @strickvl in https://github.com/zenml-io/zenml/pull/2224 * Remove text annotation by @strickvl in https://github.com/zenml-io/zenml/pull/2246 * Add MariaDB migration testing by @strickvl in https://github.com/zenml-io/zenml/pull/2170 * Delete artifact links from model version via Client, ModelVersion and API by @avishniakov in https://github.com/zenml-io/zenml/pull/2191 * Default/Non-Default step params produce conflict with yaml ones as defaults are set in code by @avishniakov in https://github.com/zenml-io/zenml/pull/2247 * Prune of unused artifacts links via client by @avishniakov in https://github.com/zenml-io/zenml/pull/2192 * Rename nlp example by @safoinme in https://github.com/zenml-io/zenml/pull/2221 * Support refreshing service connector credentials in the Vertex step operator to support long-running jobs by @stefannica in https://github.com/zenml-io/zenml/pull/2198 * Refactor secrets stores to store all secret metadata in the DB by @stefannica in https://github.com/zenml-io/zenml/pull/2193 * Add `latest_version_id` to the `ModelResponse` by @avishniakov in https://github.com/zenml-io/zenml/pull/2266 * Remove `link_artifact` from docs for MCP by @strickvl in https://github.com/zenml-io/zenml/pull/2272 * Improve action by adding advice to KeyError when configured steps are not present in pipeline by @christianversloot in https://github.com/zenml-io/zenml/pull/2265 * Allow multi step configuration for skypilot by @safoinme in https://github.com/zenml-io/zenml/pull/2166 * Reworking the examples by @bcdurak in https://github.com/zenml-io/zenml/pull/2259 * A docs update for incorrect import in docs/book/user-guide/starter-guide/track-ml-models.md by 
@yo-harsh in https://github.com/zenml-io/zenml/pull/2279 * Allow `sklearn` versions > 1.3 by @Vishal-Padia in https://github.com/zenml-io/zenml/pull/2271 * Free `sklearn` dependency to allow all versions by @strickvl in https://github.com/zenml-io/zenml/pull/2281 * Misc CI bugfixes by @strickvl in https://github.com/zenml-io/zenml/pull/2260 * Fix `yamlfix` script to use `--no-yamlfix` flag by @strickvl in https://github.com/zenml-io/zenml/pull/2280 * Fix dependabot settings autoformatting by `yamlfix` by @strickvl in https://github.com/zenml-io/zenml/pull/2282 * Add advice for next step to error on AuthorizationException by @christianversloot in https://github.com/zenml-io/zenml/pull/2264 * Allow skypilot to configure step or run full pipeline in one VM by @safoinme in https://github.com/zenml-io/zenml/pull/2276 * A docs update with production guide + restructured advanced guide by @htahir1 in https://github.com/zenml-io/zenml/pull/2232 ## New Contributors * @yo-harsh made their first contribution in https://github.com/zenml-io/zenml/pull/2279 * @Vishal-Padia made their first contribution in https://github.com/zenml-io/zenml/pull/2271 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.54.0...0.54.1 # 0.54.0 This release brings a range of new features, bug fixes and documentation updates. The Model Control Plane has received a number of small bugfixes and improvements, notably the ability to change model and model version names. We've also added a whole new starter guide that walks you through how to get started with ZenML, from creating your first pipeline to fetching objects once your pipelines have run and much more. Be sure to [check it out](https://docs.zenml.io/user-guide/starter-guide) if you're new to ZenML! 
Speaking of documentation improvements, the Model Control Plane now has [its own dedicated documentation section](https://docs.zenml.io/user-guide/advanced-guide/data-management/model-management) introducing the concepts and features of the Model Control Plane.

As always, this release comes with a number of bug fixes, docs additions and smaller improvements to our internal processes.

## Breaking Change

This release introduces breaking changes in the areas of the REST API concerning secrets and tags. As a consequence, a ZenML Client running the previous ZenML version is no longer compatible with a ZenML Server running the new version and vice-versa. To address this, simply ensure that all your ZenML clients use the same version as the server(s) they connect to.

## 🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot for two PRs he contributed to this release. One of them [fixes a bug](https://github.com/zenml-io/zenml/pull/2195) that prevented ZenML from running on Windows and the other one [adds a new materializer for the Polars library](https://github.com/zenml-io/zenml/pull/2229). Also many thanks to @sean-hickey-wf for his contribution of [an improvement to the Slack Alerter stack component](https://github.com/zenml-io/zenml/pull/2153) which allows you to define custom blocks for the Slack message.
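Custom Slack blocks follow Slack's Block Kit JSON format. Below is a minimal sketch of assembling such a payload as plain dictionaries; the helper name `build_alert_blocks` is hypothetical, and wiring the result into the alerter via its new `blocks` field is only indicated in a comment, since the ZenML API itself is not reproduced here.

```python
def build_alert_blocks(pipeline_name: str, status: str) -> list:
    """Assemble a Slack Block Kit payload for a pipeline status alert."""
    return [
        {
            # A header block with the pipeline name in plain text
            "type": "header",
            "text": {"type": "plain_text", "text": f"Pipeline: {pipeline_name}"},
        },
        {
            # A section block with the run status in Slack markdown
            "type": "section",
            "text": {"type": "mrkdwn", "text": f"*Status:* {status}"},
        },
    ]

# The resulting list would then be handed to the Slack Alerter, e.g.
# SlackAlerterParameters(blocks=build_alert_blocks("training", "succeeded"))
blocks = build_alert_blocks("training", "succeeded")
```

See Slack's Block Kit reference for the full set of supported block types.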
## What's Changed * Completing the hydration story with the remaining models by @bcdurak in https://github.com/zenml-io/zenml/pull/2151 * Remove secrets manager flavors from DB by @stefannica in https://github.com/zenml-io/zenml/pull/2182 * Prepare 0.53.1 release by @stefannica in https://github.com/zenml-io/zenml/pull/2183 * Update package name for nightly build by @strickvl in https://github.com/zenml-io/zenml/pull/2172 * Remove space saver action + upgrade other actions by @strickvl in https://github.com/zenml-io/zenml/pull/2174 * mutable names in Model and MV by @avishniakov in https://github.com/zenml-io/zenml/pull/2185 * Fix image building for nightly container builds by @strickvl in https://github.com/zenml-io/zenml/pull/2189 * Test that artifacts not get linked to model version not from context by @avishniakov in https://github.com/zenml-io/zenml/pull/2188 * Warn if Model(Version) config fluctuates from DB state by @avishniakov in https://github.com/zenml-io/zenml/pull/2144 * Add blocks field to SlackAlerterParameters for custom slack blocks by @sean-hickey-wf in https://github.com/zenml-io/zenml/pull/2153 * Model control plane technical documentation by @strickvl in https://github.com/zenml-io/zenml/pull/2111 * Alembic branching issue fix by @avishniakov in https://github.com/zenml-io/zenml/pull/2197 * Bump github/codeql-action from 2 to 3 by @dependabot in https://github.com/zenml-io/zenml/pull/2201 * Bump google-github-actions/get-gke-credentials from 0 to 2 by @dependabot in https://github.com/zenml-io/zenml/pull/2202 * Bump google-github-actions/auth from 1 to 2 by @dependabot in https://github.com/zenml-io/zenml/pull/2203 * Bump aws-actions/amazon-ecr-login from 1 to 2 by @dependabot in https://github.com/zenml-io/zenml/pull/2200 * Bump crate-ci/typos from 1.16.25 to 1.16.26 by @dependabot in https://github.com/zenml-io/zenml/pull/2207 * Fix unreliable test behavior when using hypothesis by @strickvl in https://github.com/zenml-io/zenml/pull/2208 * 
Added more pod spec properties for k8s orchestrator by @htahir1 in https://github.com/zenml-io/zenml/pull/2097 * Fix API docs environment setup by @strickvl in https://github.com/zenml-io/zenml/pull/2190 * Use placeholder runs to show pipeline runs in the dashboard without delay by @schustmi in https://github.com/zenml-io/zenml/pull/2048 * Update README and CONTRIBUTING.md docs with links to good first issues for contribution by @strickvl in https://github.com/zenml-io/zenml/pull/2220 * Bump supported `mlstacks` version to 0.8.0 by @strickvl in https://github.com/zenml-io/zenml/pull/2196 * Misc cleanup by @schustmi in https://github.com/zenml-io/zenml/pull/2126 * Refactor pipeline run updates by @schustmi in https://github.com/zenml-io/zenml/pull/2117 * Rename log_model_version_metadata to log_model_metadata by @htahir1 in https://github.com/zenml-io/zenml/pull/2215 * Update starter and create new production guide by @htahir1 in https://github.com/zenml-io/zenml/pull/2143 * Fix typo by @strickvl in https://github.com/zenml-io/zenml/pull/2223 * Consolidate Custom Filter Logic by @fa9r in https://github.com/zenml-io/zenml/pull/2116 * Force forward slashes when saving artifacts by @christianversloot in https://github.com/zenml-io/zenml/pull/2195 * Temporarily disable two MLflow tests for MacOS with Python 3.9 and 3.10 by @strickvl in https://github.com/zenml-io/zenml/pull/2186 * Disable template updates for forked repositories by @strickvl in https://github.com/zenml-io/zenml/pull/2222 * Remove Label Studio text annotation example by @strickvl in https://github.com/zenml-io/zenml/pull/2225 * Add scarf checker script and CI workflow by @strickvl in https://github.com/zenml-io/zenml/pull/2227 * Add `mlstacks` installation instructions to docs by @strickvl in https://github.com/zenml-io/zenml/pull/2228 * Adding the `hydrate` flag to the client methods by @bcdurak in https://github.com/zenml-io/zenml/pull/2120 * Fixing the remaining docs pages for `run_metadata` by 
@bcdurak in https://github.com/zenml-io/zenml/pull/2230
* Fix CI check to disallow template testing on forked repositories by @strickvl in https://github.com/zenml-io/zenml/pull/2231
* Fix fork check syntax by @strickvl in https://github.com/zenml-io/zenml/pull/2237
* Add missing annotations section to zenml service account by @wjayesh in https://github.com/zenml-io/zenml/pull/2234
* Allow filtering artifacts with/without custom names by @schustmi in https://github.com/zenml-io/zenml/pull/2226
* Adjust migration settings based on database engine by @strickvl in https://github.com/zenml-io/zenml/pull/2236
* Added one more chapter to starter guide by @htahir1 in https://github.com/zenml-io/zenml/pull/2238
* Add Polars materializer by @christianversloot in https://github.com/zenml-io/zenml/pull/2229

## New Contributors
* @sean-hickey-wf made their first contribution in https://github.com/zenml-io/zenml/pull/2153
* @dependabot 🤖 made their first contribution in https://github.com/zenml-io/zenml/pull/2201

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.53.1...0.54.0

# 0.53.1

This minor release contains a hotfix for a bug introduced in 0.53.0 where the Secrets Manager flavors were not properly removed from the database.

## What's Changed
* Remove secrets manager flavors from DB by @stefannica in https://github.com/zenml-io/zenml/pull/2182

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.53.0...0.53.1

# 0.53.0

This release is packed with a deeply reworked quickstart example and starter template, the removal of the Secrets Manager stack component, an improved experience with Cloud Secret Stores, support for tags and metadata directly in Model Versions, some breaking changes to the Model Control Plane and a few bugfixes.
## Breaking changes

### Secret Manager stack components sunset

Upon upgrading, all Secrets Manager stack components will be removed from the Stacks that still contain them and from the database. This also implies that access to any remaining secrets managed through Secrets Manager stack components will be lost. If you still have secrets configured and managed through Secrets Manager stack components, please consider migrating all your existing secrets to the centralized secrets store *before upgrading* by means of the `zenml secrets-manager secret migrate` CLI command. Also see the `zenml secret --help` command for more information.

### Renaming "endpoints" to "deployments" in Model Control Plane

This is just a renaming to provide better alignment with industry standards. It does, however, affect the following:
- `ArtifactConfig(..., is_endpoint_artifact=True)` is now `ArtifactConfig(..., is_deployment_artifact=True)`
- The CLI command `zenml model endpoint_artifacts ...` is now `zenml model deployment_artifacts ...`
- `Client().list_model_version_artifact_links(..., only_endpoint_artifacts=True)` is now `Client().list_model_version_artifact_links(..., only_deployment_artifacts=True)`
- `ModelVersion(...).get_endpoint_artifact(...)` is now `ModelVersion(...).get_deployment_artifact(...)`

## Major bugfixes
* Fix various bugs by @stefannica in https://github.com/zenml-io/zenml/pull/2147
* Adding a link from pipeline runs to code repositories by @bcdurak in https://github.com/zenml-io/zenml/pull/2146
* Fix Client doesn't recover from remote connection resets by @avishniakov in https://github.com/zenml-io/zenml/pull/2129
* Bugfix: `run_metadata` value returns string instead of other types by @avishniakov in https://github.com/zenml-io/zenml/pull/2149
* `KubernetesSparkStepOperator` imports fails by @avishniakov in https://github.com/zenml-io/zenml/pull/2159
* Fix `get_pipeline_context().model_version.get_artifact(...)` flow by @avishniakov in https://github.com/zenml-io/zenml/pull/2162

##
What's Changed * Model Versions are taggable by @avishniakov in https://github.com/zenml-io/zenml/pull/2102 * Adding a condition to the PR template by @bcdurak in https://github.com/zenml-io/zenml/pull/2140 * trying local caching for custom runners by @safoinme in https://github.com/zenml-io/zenml/pull/2148 * make template tests runs on ubuntu latest instead of custom runners by @safoinme in https://github.com/zenml-io/zenml/pull/2150 * Fix various bugs by @stefannica in https://github.com/zenml-io/zenml/pull/2147 * Fix `importlib` calling to `importlib.metadata` by @safoinme in https://github.com/zenml-io/zenml/pull/2160 * Debugging `zenml clean` by @bcdurak in https://github.com/zenml-io/zenml/pull/2119 * Add metadata to model versions by @avishniakov in https://github.com/zenml-io/zenml/pull/2109 * Adding a link from pipeline runs to code repositories by @bcdurak in https://github.com/zenml-io/zenml/pull/2146 * Moving tags to the body for artifacts and artifact versions by @bcdurak in https://github.com/zenml-io/zenml/pull/2138 * Fix MLFlow test by @avishniakov in https://github.com/zenml-io/zenml/pull/2161 * Fix Client doesn't recover from remote connection resets by @avishniakov in https://github.com/zenml-io/zenml/pull/2129 * Bugfix: `run_metadata` value returns string instead of other types by @avishniakov in https://github.com/zenml-io/zenml/pull/2149 * `KubernetesSparkStepOperator` imports fails by @avishniakov in https://github.com/zenml-io/zenml/pull/2159 * Endpoint artifacts rename to deployment artifacts by @avishniakov in https://github.com/zenml-io/zenml/pull/2134 * Fix `get_pipeline_context().model_version.get_artifact(...)` flow by @avishniakov in https://github.com/zenml-io/zenml/pull/2162 * Add CodeRabbit config to repo base by @strickvl in https://github.com/zenml-io/zenml/pull/2165 * Feature: use service connectors to authenticate secrets stores. 
by @stefannica in https://github.com/zenml-io/zenml/pull/2154
* Add dependabot updates for Github Actions on CI by @strickvl in https://github.com/zenml-io/zenml/pull/2087
* Run DB migration testing using MySQL alongside SQLite by @strickvl in https://github.com/zenml-io/zenml/pull/2113
* Remove `precommit` by @strickvl in https://github.com/zenml-io/zenml/pull/2164
* Remove support for secrets managers by @stefannica in https://github.com/zenml-io/zenml/pull/2163
* Add MariaDB test harnesses by @christianversloot in https://github.com/zenml-io/zenml/pull/2155
* Feature/update quickstart from template by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2157
* Bump MLFlow to 2.9.2 by @christianversloot in https://github.com/zenml-io/zenml/pull/2156

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.52.0...0.53.0

# 0.52.0

This release adds the ability to pass pipeline parameters as YAML configuration and fixes a couple of minor issues affecting the W&B integration and the way expiring credentials are refreshed when service connectors are used.

## Breaking Change

Pipeline YAML configurations are now validated to ensure that configured parameters match what is available in the code. This means that if a pipeline is configured with a parameter whose value differs from the one provided in code, the pipeline will fail to run. This is a breaking change, but a useful one: it helps you catch errors early on.
This is an example of a pipeline configuration that will fail to run: ```yaml parameters: some_param: 24 steps: my_step: parameters: input_2: 42 ``` ```python # run.py @step def my_step(input_1: int, input_2: int) -> None: pass @pipeline def my_pipeline(some_param: int): # here an error will be raised since `input_2` is # `42` in config, but `43` was provided in the code my_step(input_1=42, input_2=43) if __name__=="__main__": # here an error will be raised since `some_param` is # `24` in config, but `23` was provided in the code my_pipeline(23) ``` ## What's Changed * Passing pipeline parameters as yaml config by @avishniakov in https://github.com/zenml-io/zenml/pull/2058 * Side-effect free tests by @avishniakov in https://github.com/zenml-io/zenml/pull/2065 * Fix various bugs by @stefannica in https://github.com/zenml-io/zenml/pull/2124 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.51.0...0.52.0 # 0.51.0 This release comes with a breaking change to the model version model, a new use-case example for NLP, and a range of bug fixes and enhancements to the artifact management and pipeline run management features. ## Breaking Change * Artifact Version Table + Artifact Tagging by @fa9r in https://github.com/zenml-io/zenml/pull/2081 * Converting model models to use the new hydration paradigm by @bcdurak in https://github.com/zenml-io/zenml/pull/2101 ## New Example * NLP Template Example is a new example that demonstrates how to use ZenML for NLP tasks. 
by @safoinme in https://github.com/zenml-io/zenml/pull/2070 ## What's Changed * Updated to one quickstart again by @htahir1 in https://github.com/zenml-io/zenml/pull/2092 * Fix Nightly Build workflow files by @strickvl in https://github.com/zenml-io/zenml/pull/2090 * Make PyPi release depend on DB migration tests passing by @strickvl in https://github.com/zenml-io/zenml/pull/2088 * Bump `mlstacks` version in ZenML extra by @strickvl in https://github.com/zenml-io/zenml/pull/2091 * Fix SQL schema imports by @stefannica in https://github.com/zenml-io/zenml/pull/2098 * Fix migration for unowned stacks/components by @schustmi in https://github.com/zenml-io/zenml/pull/2099 * Polymorthic `run_metadata` by @avishniakov in https://github.com/zenml-io/zenml/pull/2064 * Update ruff formatter (for bugfixes) by @strickvl in https://github.com/zenml-io/zenml/pull/2106 * Lock in airflow version as higher versions will fail by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2108 * Swap contents for HTMLString and MarkdownString in docs by @christianversloot in https://github.com/zenml-io/zenml/pull/2110 * Fix secrets list with cloud secrets stores and RBAC by @stefannica in https://github.com/zenml-io/zenml/pull/2107 * More track events by @htahir1 in https://github.com/zenml-io/zenml/pull/2112 * Fix pipeline run cascade deletion by @fa9r in https://github.com/zenml-io/zenml/pull/2104 * Take integrations tests out of unit tests folder by @safoinme in https://github.com/zenml-io/zenml/pull/2100 * Allow extra values when dehydrating response models by @schustmi in https://github.com/zenml-io/zenml/pull/2114 * Request optimizations by @schustmi in https://github.com/zenml-io/zenml/pull/2103 * Pagination in model versions by @avishniakov in https://github.com/zenml-io/zenml/pull/2115 * Add `StepContext.inputs` property by @fa9r in https://github.com/zenml-io/zenml/pull/2105 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.50.0...0.51.0 # 0.50.0 In this release, 
we introduce key updates aimed at improving user experience and security. The `ModelConfig` object has been renamed to `ModelVersion` for a more intuitive interface. Additionally, the release features enhancements such as optimized model hydration for better performance, alongside a range of bug fixes and contributions from both new and returning community members.

## Breaking Change

- We have renamed the `ModelConfig` object to `ModelVersion`, along with other related changes to the Model Control Plane. The goal is a simplified user-facing experience: once a `ModelVersion` is configured in `@pipeline` or `@step`, it travels into all other user-facing places (step context, client, etc.), by @avishniakov in [#2044](https://github.com/zenml-io/zenml/pull/2044)
- We introduce RBAC for server endpoints, ensuring users have appropriate permissions for actions on resources. Additionally, this improves data handling by dehydrating response models to redact inaccessible information, while service accounts retain full permissions due to current database constraints.
by @schustmi in [#1999](https://github.com/zenml-io/zenml/pull/1999) ## Enhancements - Optimizing model hydration by @bcdurak in [#1971](https://github.com/zenml-io/zenml/pull/1971) - Improve alembic migration safety by @fa9r in [#2073](https://github.com/zenml-io/zenml/pull/2073) - Model Link Filtering by Artifact / Run Name by @fa9r in [#2074](https://github.com/zenml-io/zenml/pull/2074) ## Bug Fixes - Fix tag<>resource ID generator to fix the issue of manipulating migrated tags properly [#2056](https://github.com/zenml-io/zenml/pull/2056) - Fixes for `k3d` deployments via `mlstacks` using the ZenML CLI wrapper [#2059](https://github.com/zenml-io/zenml/pull/2059) - Fix some filter options for pipeline runs by @schustmi [#2078](https://github.com/zenml-io/zenml/pull/2078) - Fix Label Studio image annotation example by @strickvl [#2010](https://github.com/zenml-io/zenml/pull/2010) - Alembic migration fix for databases with scheduled pipelines with 2+ runs by @bcdurak [#2072](https://github.com/zenml-io/zenml/pull/2072) - Model version endpoint fixes by @schustmi in [#2060](https://github.com/zenml-io/zenml/pull/2060) ## ZenML Helm Chart Changes - Make helm chart more robust to accidental secret deletions by @stefannica in [#2053](https://github.com/zenml-io/zenml/pull/2053) - Separate helm hook resources from regular resources by @stefannica in [#2055](https://github.com/zenml-io/zenml/pull/2055) ## Other Changes * Connectors docs small fixes by @strickvl in https://github.com/zenml-io/zenml/pull/2050 * Feature/configurable service account for seldon predictor service by @Johnyz21 in https://github.com/zenml-io/zenml/pull/1725 * Adding NLP Template Example by @safoinme in https://github.com/zenml-io/zenml/pull/2051 * Fix CI by @fa9r in https://github.com/zenml-io/zenml/pull/2069 * Depaginate step runs to allow running pipelines with arbitrary step count by @schustmi in https://github.com/zenml-io/zenml/pull/2068 * Remove user name from orchestrator run name by 
@schustmi in https://github.com/zenml-io/zenml/pull/2067
* Artifacts Tab by @fa9r in https://github.com/zenml-io/zenml/pull/1943
* Add warnings/updates to Huggingface Spaces deployment docs by @strickvl in https://github.com/zenml-io/zenml/pull/2052
* Nightly builds by @strickvl in https://github.com/zenml-io/zenml/pull/2031
* Allow for custom disk size and type when using VertexAI Step Operator by @strickvl in https://github.com/zenml-io/zenml/pull/2054
* Set nightly builds to run at half-past the hour by @strickvl in https://github.com/zenml-io/zenml/pull/2077
* Set DCP template tag by @avishniakov in https://github.com/zenml-io/zenml/pull/2076
* Add missing dehydration in get_service_connector endpoint by @schustmi in https://github.com/zenml-io/zenml/pull/2080
* Replace `black` with `ruff format` / bump `mypy` by @strickvl in https://github.com/zenml-io/zenml/pull/2082
* ModelVersion in pipeline context to pass in steps by @avishniakov in https://github.com/zenml-io/zenml/pull/2079
* Pin `bcrypt` by @strickvl in https://github.com/zenml-io/zenml/pull/2083

## New Contributors
* @Johnyz21 made their first contribution in https://github.com/zenml-io/zenml/pull/1725

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.47.0...0.50.0

# 0.47.0

This release fixes a bug introduced in 0.46.1 where the default user was made inaccessible and inadvertently duplicated. It rescues the original user and renames the duplicate.
## What's Changed
* Create tags table by @avishniakov in https://github.com/zenml-io/zenml/pull/2036
* Bring dashboard back to the release by @avishniakov in https://github.com/zenml-io/zenml/pull/2046
* Fix duplicate default user by @stefannica in https://github.com/zenml-io/zenml/pull/2045

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.46.1...0.47.0

# 0.46.1

The 0.46.1 release introduces support for Service Accounts and API Keys that can be used to authenticate with the ZenML server from environments that do not support the web login flow, such as CI/CD environments.

Also included in this release are some documentation updates and bug fixes, notably moving the database migration logic deployed with the Helm chart out of the init containers and into a Kubernetes Job, which makes it possible to scale out ZenML server deployments without the risk of running into database migration conflicts.

## What's Changed
* Small improvements to Hub docs page by @strickvl in https://github.com/zenml-io/zenml/pull/2015
* Pin OpenAI integration to `<1.0.0` by @strickvl in https://github.com/zenml-io/zenml/pull/2027
* Make error message nicer for when two artifacts that share a prefix are found by @strickvl in https://github.com/zenml-io/zenml/pull/2023
* Move db-migration to `job` instead of `init-container` to allow replicas by @safoinme in https://github.com/zenml-io/zenml/pull/2021
* Fix stuck/broken CI by @strickvl in https://github.com/zenml-io/zenml/pull/2032
* Increase `step.source_code` Cut-Off Limit by @fa9r in https://github.com/zenml-io/zenml/pull/2025
* Improve artifact linkage logging in MCP by @avishniakov in https://github.com/zenml-io/zenml/pull/2016
* Upgrade feast so apidocs don't fail no mo by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2028
* Remove NumPy Visualizations for 2D Arrays by @fa9r in https://github.com/zenml-io/zenml/pull/2033
* Fix user activation bug by @stefannica in
https://github.com/zenml-io/zenml/pull/2037 * Remove `create_new_model_version` arg of `ModelConfig` by @avishniakov in https://github.com/zenml-io/zenml/pull/2030 * Extend the wait period in between PyPi package publication and Docker image building for releases by @strickvl in https://github.com/zenml-io/zenml/pull/2029 * Make `zenml up` prefill username when launching dashboard by @strickvl in https://github.com/zenml-io/zenml/pull/2024 * Add warning when artifact store cannot be loaded by @strickvl in https://github.com/zenml-io/zenml/pull/2011 * Add extra config to `Kaniko` docs by @safoinme in https://github.com/zenml-io/zenml/pull/2019 * ZenML API Keys and Service Accounts by @stefannica in https://github.com/zenml-io/zenml/pull/1840 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.46.0..0.46.1 # 0.46.0 This release brings some upgrades, documentation updates and bug fixes. Notably, our `langchain` integration now supports more modern versions and has been upgraded to a new version at the lower edge of supported packages on account of a security vulnerability. Other fixes related to the Model Control Plane which was updated to support the deletion of model versions via the CLI, for example. ## Breaking Change We removed the `llama_index` integration in this release. This related to unsolvable dependency clashes that relate to `sqlmodel` and our database. We expect these clashes to be resolved in the future and then we will add our integration back in. If you were using the `llama_index` materializer that was part of the integration, you will have to use a custom materializer in the meanwhile. We apologize for the inconvenience. 
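For anyone writing that interim custom materializer, the core contract is a `save`/`load` pair that round-trips an object through a directory in the artifact store. The sketch below illustrates that round-trip pattern with only the standard library; the class name, method signatures, and directory layout are illustrative assumptions, not ZenML's actual `BaseMaterializer` API:

```python
import json
import os
import tempfile


class JSONMaterializer:
    """Illustrative stand-in for a custom materializer: persists an
    object under a URI (here, a local directory) and restores it."""

    def __init__(self, uri: str):
        self.uri = uri  # directory the artifact lives in

    def save(self, data: dict) -> None:
        # Serialize the object into the artifact directory.
        os.makedirs(self.uri, exist_ok=True)
        with open(os.path.join(self.uri, "data.json"), "w") as f:
            json.dump(data, f)

    def load(self) -> dict:
        # Restore the object from the artifact directory.
        with open(os.path.join(self.uri, "data.json")) as f:
            return json.load(f)


# Round-trip check with a temporary "artifact store" directory.
with tempfile.TemporaryDirectory() as artifact_uri:
    mat = JSONMaterializer(artifact_uri)
    mat.save({"docs": ["a", "b"]})
    print(mat.load())  # -> {'docs': ['a', 'b']}
```

A real materializer would additionally register the Python types it handles so that ZenML can dispatch to it automatically; consult the custom-materializer section of the ZenML docs for the exact base class.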
## What's Changed
* MCP-driven E2E template by @avishniakov in https://github.com/zenml-io/zenml/pull/2004
* Model scoped endpoints by @avishniakov in https://github.com/zenml-io/zenml/pull/2003
* Delete model version in cli by @avishniakov in https://github.com/zenml-io/zenml/pull/2006
* Add latest version to model list response by @avishniakov in https://github.com/zenml-io/zenml/pull/2007
* Fix `gcs bucket` docs error message by @safoinme in https://github.com/zenml-io/zenml/pull/2018
* Fix `Skypilot` docs configuration by @safoinme in https://github.com/zenml-io/zenml/pull/2017
* Bump `langchain`, disable `llama_index`, and fix Vector Store materializer by @strickvl in https://github.com/zenml-io/zenml/pull/2013
* Fix Build Options of `GCPImageBuilder` by @fa9r in https://github.com/zenml-io/zenml/pull/1992
* Fix the stack component describe CLI output by @stefannica in https://github.com/zenml-io/zenml/pull/2001

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.45.6...0.46.0

# 0.45.6

This release brings an array of enhancements and refinements. Notable improvements include allowing service connectors to be disconnected from stack components, adding connector support to the SageMaker step operator, turning synchronous mode on by default for all orchestrators, and enabling server-side component config validation.

## What's Changed
* Updating `README.md` and update images by @znegrin in https://github.com/zenml-io/zenml/pull/1986
* Always set the active workspace to be the default workspace server side by @stefannica in https://github.com/zenml-io/zenml/pull/1989
* Update outdated CLI docs by @strickvl in https://github.com/zenml-io/zenml/pull/1990
* Turn synchronous mode on by default for all orchestrators by @stefannica in https://github.com/zenml-io/zenml/pull/1991
* Use docker credentials in the skypilot orchestrator by @stefannica in https://github.com/zenml-io/zenml/pull/1983
* Add missing space to `@step` warning message by @strickvl in https://github.com/zenml-io/zenml/pull/1994
* Fix sagemaker orchestrator and step operator env vars and other minor bugs by @stefannica in https://github.com/zenml-io/zenml/pull/1993
* fix: `BasePyTorchMaterliazer` -> `Materializer` by @cameronraysmith in https://github.com/zenml-io/zenml/pull/1969
* allow calling old base pytorch materilizzer by @safoinme in https://github.com/zenml-io/zenml/pull/1997
* Add connector support to sagemaker step operator. by @stefannica in https://github.com/zenml-io/zenml/pull/1996
* Server-Side Component Config Validation by @fa9r in https://github.com/zenml-io/zenml/pull/1988
* Allow disconnecting service-connector from stack component by @safoinme in https://github.com/zenml-io/zenml/pull/1864

## New Contributors
* @znegrin made their first contribution in https://github.com/zenml-io/zenml/pull/1986

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.45.5...0.45.6

# 0.45.5

This minor release contains bug fixes and documentation improvements. Notably, our `sqlmodel` dependency has been pinned to 0.0.8, which fixes installation errors following the release of 0.0.9.

## What's Changed
* Add a 'how do I...' section into docs by @strickvl in https://github.com/zenml-io/zenml/pull/1953
* Bump `mypy`, `ruff` and `black` by @strickvl in https://github.com/zenml-io/zenml/pull/1963
* Fix double slashes in weblogin by @schustmi in https://github.com/zenml-io/zenml/pull/1972
* SQLModel docs backport fixes by @strickvl in https://github.com/zenml-io/zenml/pull/1975
* Updated quickstart command in cloud quickstart by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1977
* Make sure vertex job id is only lower case letter, number or dash by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1978
* Fix DB initialization when using external authentication by @schustmi in https://github.com/zenml-io/zenml/pull/1965
* Pin SQLModel dependency to `0.0.8` by @strickvl in https://github.com/zenml-io/zenml/pull/1973

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.45.4...0.45.5

# 0.45.4

This minor update fixes a database migration bug that you could potentially encounter while upgrading your ZenML version; it relates to use of the `ExternalArtifact` object. If you are upgrading from a version older than 0.45.x, this is the recommended release.

**PROBLEMS?**: If you upgraded to ZenML v0.45.2 or v0.45.3 and are experiencing issues with your database, please consider upgrading to v0.45.4 instead.

## What's Changed
* Increase reuse of `ModelConfig` by @avishniakov in https://github.com/zenml-io/zenml/pull/1954
* resolve alembic branches by @avishniakov in https://github.com/zenml-io/zenml/pull/1964
* Fix corrupted migration for old dbs by @avishniakov in https://github.com/zenml-io/zenml/pull/1966

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.45.3...0.45.4

# 0.45.3

This minor update fixes a database migration bug that you could potentially encounter while upgrading your ZenML version; it relates to use of the `ExternalArtifact` object.
**PROBLEMS?**: If you upgraded to ZenML v0.45.2 and are experiencing issues with your database, please either [reach out to us on Slack directly](https://zenml.io/slack-invite/) or feel free to [use this migration script](https://gist.github.com/strickvl/2178d93c8693f068768a82587fd4db75) that will manually fix the issue.

This release also includes a bugfix from @cameronraysmith relating to the resolution of our Helm chart OCI location. Thank you!

## What's Changed
* fix: match chart name in docs to publish workflow by @cameronraysmith in https://github.com/zenml-io/zenml/pull/1942
* Evaluate YAML based config early + OSS-2511 by @avishniakov in https://github.com/zenml-io/zenml/pull/1876
* Fixing nullable parameter to avoid extra migrations by @bcdurak in https://github.com/zenml-io/zenml/pull/1955
* Pin Helm version to avoid 400 Bad Request error by @wjayesh in https://github.com/zenml-io/zenml/pull/1958
* `external_input_artifact` backward compatibility with alembic by @avishniakov in https://github.com/zenml-io/zenml/pull/1957

## New Contributors
* @cameronraysmith made their first contribution in https://github.com/zenml-io/zenml/pull/1942

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.45.2...0.45.3

# 0.45.2

This release replaces 0.45.0 and 0.45.1, and fixes the major migration bugs that were in that yanked release. Please upgrade directly to 0.45.2 and skip 0.45.0 to avoid unexpected migration issues.

Note that 0.45.0 and 0.45.1 were removed from PyPI due to an issue with the alembic versions + migration which could affect the database state. This release fixes that issue. If you have already upgraded to 0.45.0, please [let us know in Slack](https://zenml.io/slack-invite/) and we'll be happy to assist in rollback and recovery.

This release introduces a major upgrade to ZenML, featuring a new authentication mechanism, performance improvements, the introduction of the model control plane, and internal enhancements.

## New Authentication Mechanism (#4303)

Our improved authentication mechanism offers a more secure way of connecting to the ZenML server. It initiates a device flow that prompts you to log in via the browser dashboard:

```
zenml connect --url <YOUR_SERVER_URL>
```

This eliminates the need for explicit credential input. The previous method (`zenml connect --url <URL> --username <USERNAME> --password <PASSWORD>`) remains operational but is less recommended due to security concerns.

**Critical**: This change disrupts existing pipeline schedules. After upgrading, manually cancel and reschedule pipelines using the updated version of ZenML. For more information, read about the device flow in our [documentation](https://docs.zenml.io/user-guide/starter-guide/switch-to-production).

## Performance enhancements (#3207)

Internal API adjustments have reduced the footprint of ZenML API objects by up to 35%. This will particularly benefit users with large step and pipeline configurations. Further reductions will be implemented in our next release.

## Model Control Plane debut (#5648)

ZenML now includes a preliminary version of the model control plane, a feature for registering models and their metadata on a single ZenML dashboard view. Future releases will provide more details. To test this early version, follow this [example](https://github.com/zenml-io/zenml-plugins/tree/main/model_control_plane).

## Breaking Changes

- Environment variables `ZENML_AUTH_TYPE` and `ZENML_JWT_SECRET_KEY` have been renamed to `ZENML_SERVER_AUTH_SCHEME` and `ZENML_SERVER_JWT_SECRET_KEY`, respectively.
- All ZenML server-issued JWT tokens now include an issuer and an audience. After the server update, current scheduled pipelines become invalidated. Reset your schedules and reconnect all clients to the server to obtain new tokens.
- `UnmaterializedArtifact` has been relocated to `zenml.artifacts`. Change your import statement from `from zenml.materializers import UnmaterializedArtifact` to `from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact`.

## Deprecations

- `zenml.steps.external_artifact.ExternalArtifact` has moved to `zenml.artifacts.external_artifact.ExternalArtifact`.

## And the rest:
* Discord alerter integration by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1818. Huge shoutout to you priyadutt - we're sending some swag your way!
* Update Neptune dependency: `neptune-client` > `neptune` by @fa9r in https://github.com/zenml-io/zenml/pull/1837
* Disable codeql on pushes to `develop` by @strickvl in https://github.com/zenml-io/zenml/pull/1842
* Template not updating due to git diff misuse by @avishniakov in https://github.com/zenml-io/zenml/pull/1844
* Bump feast version to fix api docs generation by @fa9r in https://github.com/zenml-io/zenml/pull/1845
* CI Fixes / Improvements by @fa9r in https://github.com/zenml-io/zenml/pull/1848
* Fix MLflow registry methods with empty metadata by @fa9r in https://github.com/zenml-io/zenml/pull/1843
* Use configured template REF in CI by @avishniakov in https://github.com/zenml-io/zenml/pull/1851
* Fix template REF in CI by @avishniakov in https://github.com/zenml-io/zenml/pull/1852
* Fix AWS service connector installation requirements by @stefannica in https://github.com/zenml-io/zenml/pull/1850
* [Docs] Improvements to custom flavor and custom orchestrator pages by @htahir1 in https://github.com/zenml-io/zenml/pull/1747
* Optimizing the performance through database changes by @bcdurak in https://github.com/zenml-io/zenml/pull/1835
* Add `README` for `examples` folder by @strickvl in https://github.com/zenml-io/zenml/pull/1860
* Free up disk space in CI by @strickvl in https://github.com/zenml-io/zenml/pull/1863
* Make Terraform Optional Again by @fa9r in https://github.com/zenml-io/zenml/pull/1855
* Model watchtower becomes Model control plane by @strickvl in https://github.com/zenml-io/zenml/pull/1868
* Update documentation by @VishalKumar-S in https://github.com/zenml-io/zenml/pull/1872
* Fix CI by freeing up space on runner by @strickvl in https://github.com/zenml-io/zenml/pull/1866
* Allow for `user` param to be specified (successfully) in `DockerSettings` by @strickvl in https://github.com/zenml-io/zenml/pull/1857
* Add `get_pipeline_context` by @avishniakov in https://github.com/zenml-io/zenml/pull/1870
* [Helm] Use GCP creds directly instead of a file. by @wjayesh in https://github.com/zenml-io/zenml/pull/1874
* External authenticator support, authorized devices and web login by @stefannica in https://github.com/zenml-io/zenml/pull/1814
* Connect to Service-connector at component registration by @safoinme in https://github.com/zenml-io/zenml/pull/1858
* Fixing the `upgrade` migration script after the database changes by @bcdurak in https://github.com/zenml-io/zenml/pull/1877
* [Model Control Plane] v0.1 mega-branch by @avishniakov in https://github.com/zenml-io/zenml/pull/1816
* Update to templates by @htahir1 in https://github.com/zenml-io/zenml/pull/1878
* Docs for orgs, rbac and sso by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1875
* Convert network_config dict to NetworkConfig object in SageMaker orchestrator by @christianversloot in https://github.com/zenml-io/zenml/pull/1873
* Add missing Docker build options for GCP image builder by @strickvl in https://github.com/zenml-io/zenml/pull/1856
* Solve alembic branching issue by @avishniakov in https://github.com/zenml-io/zenml/pull/1879
* Fix typo for 0.45 release by @strickvl in https://github.com/zenml-io/zenml/pull/1881
* Only import ipinfo when necessary by @schustmi in https://github.com/zenml-io/zenml/pull/1888
* [Model Control Plane] Suppress excessive logging in model control plane by @avishniakov in https://github.com/zenml-io/zenml/pull/1885
* Add warning generation scripts for Gitbook docs by @strickvl in
https://github.com/zenml-io/zenml/pull/1929
* Fix calling `click` decorator in model CLI command by @safoinme in https://github.com/zenml-io/zenml/pull/1932
* Lightweight template CI by @avishniakov in https://github.com/zenml-io/zenml/pull/1930
* Update `Skypilot` orchestrator setting docs section by @safoinme in https://github.com/zenml-io/zenml/pull/1931

## New Contributors
* @VishalKumar-S made their first contribution in https://github.com/zenml-io/zenml/pull/1872

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.44.3...0.45.0

# 0.44.3

## New Orchestrator: SkyPilot (#1765)

This release introduces a new orchestrator called SkyPilot. SkyPilot is a VM orchestrator that can be used to run ZenML pipelines on a VM of choice in one of the three supported cloud providers. It is a great choice for users who want to run ZenML pipelines on a GPU instance, but don't want to use Kubernetes or serverless orchestrators like SageMaker.

## Fixes and Improvements

This release fixes several bugs and improves the user experience of the CLI and the documentation. The most notable changes are:

* The new `connect` command that allows connecting all stack components within a stack to a service connector with a single command.
* An interactive flow for the `zenml stack deploy` command that allows users to configure their stack in a guided manner.
* Documentation on how to debug the SageMaker orchestrator, how to get started with a quick cloud stack on GCP, and on the use of service connectors with enabled MFA.

## What's Changed
* Add support for empty API token in Kubernetes service connector. by @stefannica in https://github.com/zenml-io/zenml/pull/1808
* Use the container registry credentials to build images with the local image builder by @stefannica in https://github.com/zenml-io/zenml/pull/1804
* Fix CI by @fa9r in https://github.com/zenml-io/zenml/pull/1809
* Add documentation on how to debug the SageMaker orchestrator by @fa9r in https://github.com/zenml-io/zenml/pull/1810
* Bump `rich` and `uvicorn` by @jlopezpena in https://github.com/zenml-io/zenml/pull/1750
* SageMaker: Enable configuring authentication credentials explicitly by @fa9r in https://github.com/zenml-io/zenml/pull/1805
* Fix: ZenML DB migrations don't run if zenml is installed in path with spaces by @stefannica in https://github.com/zenml-io/zenml/pull/1815
* Fix mlflow 'run_name' variable overwriting by @iraadit in https://github.com/zenml-io/zenml/pull/1821
* Add `SECURITY.md` file for vulnerability disclosures. by @strickvl in https://github.com/zenml-io/zenml/pull/1824
* Add MFA limitation to service-connectors docs by @safoinme in https://github.com/zenml-io/zenml/pull/1827
* Improve `zenml stack describe` to show `mlstacks` outputs by @strickvl in https://github.com/zenml-io/zenml/pull/1826
* Documentation to get started with a quick cloud stack on GCP by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1807
* Fix missing text in git repo docs by @strickvl in https://github.com/zenml-io/zenml/pull/1831
* Handle irregular plural of `code_repository` for error message by @strickvl in https://github.com/zenml-io/zenml/pull/1832
* Connect stack to a service account by @safoinme in https://github.com/zenml-io/zenml/pull/1828
* SkyPilot Integration with VM Orchestrators by @htahir1 in https://github.com/zenml-io/zenml/pull/1765
* Add interactive CLI flow for `zenml stack deploy` by @strickvl in https://github.com/zenml-io/zenml/pull/1829
* Add `README` file for helm chart by @strickvl in https://github.com/zenml-io/zenml/pull/1830
* Fix slack environment variable in `generative_chat` example README by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1836

## New Contributors
* @iraadit made their first contribution in https://github.com/zenml-io/zenml/pull/1821

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.44.2...0.44.3

# 0.44.2

This release contains updates for some of the most popular integrations, as well as several bug fixes and documentation improvements.

## Minor Default Behavior Changes

* The default page size for `zenml list` commands was reduced to 20 (from 50) to speed up the runtime of such commands.
* Simultaneous connection to local and remote ZenML servers is no longer possible, since this caused several unexpected behaviors in the past.

## Integration Updates

- The `mlflow` integration now supports the newest MLflow version `2.6.0`.
- The `evidently` integration now supports the latest Evidently version `0.4.4`.
- The SageMaker orchestrator of the `aws` integration now supports authentication via service connectors.
## What's Changed
* Add `bandit` to CI for security linting by @strickvl in https://github.com/zenml-io/zenml/pull/1775
* Add `mlstacks` compatibility check to CI by @strickvl in https://github.com/zenml-io/zenml/pull/1767
* extend `StepContext` visibility to materializers by @avishniakov in https://github.com/zenml-io/zenml/pull/1769
* Revert GH changes to fix colima bug in macos gh by @safoinme in https://github.com/zenml-io/zenml/pull/1779
* Reduce CI runner count by @strickvl in https://github.com/zenml-io/zenml/pull/1777
* Add E2E template as example by @avishniakov in https://github.com/zenml-io/zenml/pull/1766
* Fix CI step names by @avishniakov in https://github.com/zenml-io/zenml/pull/1784
* Add vulnerability scanner by @strickvl in https://github.com/zenml-io/zenml/pull/1776
* Stop CI from running on push to `develop` by @strickvl in https://github.com/zenml-io/zenml/pull/1788
* Skip update templates outside PR by @avishniakov in https://github.com/zenml-io/zenml/pull/1786
* Fix azure service connector docs by @stefannica in https://github.com/zenml-io/zenml/pull/1778
* fix: use k8s V1CronJob instead of V1beta1CronJob (#1781) by @francoisserra in https://github.com/zenml-io/zenml/pull/1787
* Page limit adjustment by @bcdurak in https://github.com/zenml-io/zenml/pull/1791
* Prevent simultaneous connection to local and remote servers by @fa9r in https://github.com/zenml-io/zenml/pull/1792
* Update `MLflow` version to allow support for 2.6.0 by @safoinme in https://github.com/zenml-io/zenml/pull/1782
* Improve `ConnectionError` error message by @fa9r in https://github.com/zenml-io/zenml/pull/1783
* Stop old MLflow services when deploying new ones by @fa9r in https://github.com/zenml-io/zenml/pull/1793
* Prevent adding private components into shared stacks by @fa9r in https://github.com/zenml-io/zenml/pull/1794
* Publish server helm chart as part of CI by @wjayesh in https://github.com/zenml-io/zenml/pull/1740
* Docs on the use of ZenML-specific environment variables by @strickvl in https://github.com/zenml-io/zenml/pull/1796
* Add support for newer Evidently versions by @fa9r in https://github.com/zenml-io/zenml/pull/1780
* Link E2E example to docs by @avishniakov in https://github.com/zenml-io/zenml/pull/1790
* Copy step instance before applying configuration by @schustmi in https://github.com/zenml-io/zenml/pull/1798
* Fix AWS container registry image pushing with service connectors by @fa9r in https://github.com/zenml-io/zenml/pull/1797
* Make Sagemaker orchestrator work with connectors by @fa9r in https://github.com/zenml-io/zenml/pull/1799
* Add rebase Pre-requisite to PRs template by @safoinme in https://github.com/zenml-io/zenml/pull/1801

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.44.1...0.44.2

# 0.44.1

This release brings various improvements over the previous version, mainly focusing on the usage of the newly refactored `mlstacks` package, ZenML's `logging` module, and changes in our analytics.

**Note:** *0.44.0 was removed from PyPI due to an issue with the alembic versions which could affect the database state. The version history branched: 0.42.1 -> [0.43.0, e1d66d91a099] -> 0.44.0. This release fixes the issue.<br> The primary issue arises when deploying version 0.44.0 using a MySQL backend. Although the alembic migration executes all tasks up to 0.44.0, the alembic version represented in the database remains at 0.43.0. This issue persists irrespective of the measures taken, including trying various versions after 0.43.0.<br> This imbalance leads to failure when running a second replica migration because the database's state is at 0.44.0 while the alembic version remains at 0.43.0. Similarly, attempts to run a second replica or restart the pod fail as alembic tries to migrate from 0.43.0 to 0.44.0, which is not possible because these changes already exist in the database.<br> Please note: If you encounter this problem, we recommend that you roll back to previous versions and then upgrade to 0.43.0. If you still experience difficulties, please join our Slack community at https://zenml.io/slack. We're ready to help you work through this issue.*

## What's Changed
* Remove e2e example and point to templates by @avishniakov in https://github.com/zenml-io/zenml/pull/1752
* Add cloud architecture docs by @htahir1 in https://github.com/zenml-io/zenml/pull/1751
* Update docs/docstrings following `mlstacks` repo name change by @strickvl in https://github.com/zenml-io/zenml/pull/1754
* Update Cloud deployment scenarios by @stefannica in https://github.com/zenml-io/zenml/pull/1757
* Fixing the logging message regarding caching by @bcdurak in https://github.com/zenml-io/zenml/pull/1748
* Improvements to the step logs storage functionality by @bcdurak in https://github.com/zenml-io/zenml/pull/1733
* Fix `qemu`/`colima` Github Actions bug by @safoinme in https://github.com/zenml-io/zenml/pull/1760
* Bump `ruff` and `mypy` by @strickvl in https://github.com/zenml-io/zenml/pull/1762
* Add Template Testing in Core by @avishniakov in https://github.com/zenml-io/zenml/pull/1745
* Removing analytics v1 and optimizing v2 by @bcdurak in https://github.com/zenml-io/zenml/pull/1753
* Update publish script to take a token by @strickvl in https://github.com/zenml-io/zenml/pull/1758
* Update variable name for release publication token by @strickvl in https://github.com/zenml-io/zenml/pull/1764
* Lock `MYSQL` Database during DB migrations by @safoinme in https://github.com/zenml-io/zenml/pull/1763
* `mlstacks` integration (and deprecation of old deployment logic) by @strickvl in https://github.com/zenml-io/zenml/pull/1721
* Upgrade typing extensions within api docs
build workflow by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1741
* Fix branching alembic history by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1772
* Remove pinned `zenml` version specified in TOC for SDK docs by @strickvl in https://github.com/zenml-io/zenml/pull/1770
* Modified the track metadata for the opt-in event by @bcdurak in https://github.com/zenml-io/zenml/pull/1774
* Check alembic branch divergence in CI by @strickvl in https://github.com/zenml-io/zenml/pull/1773
* Remove the DB lock by @safoinme in https://github.com/zenml-io/zenml/pull/1771

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.43.0...0.44.1

# 0.44.0

This release brings various improvements over the previous version, mainly focusing on the usage of the newly refactored `mlstacks` package, ZenML's `logging` module, and changes in our analytics.

## What's Changed
* Remove e2e example and point to templates by @avishniakov in https://github.com/zenml-io/zenml/pull/1752
* Add cloud architecture docs by @htahir1 in https://github.com/zenml-io/zenml/pull/1751
* Update docs/docstrings following `mlstacks` repo name change by @strickvl in https://github.com/zenml-io/zenml/pull/1754
* Update Cloud deployment scenarios by @stefannica in https://github.com/zenml-io/zenml/pull/1757
* Fixing the logging message regarding caching by @bcdurak in https://github.com/zenml-io/zenml/pull/1748
* Improvements to the step logs storage functionality by @bcdurak in https://github.com/zenml-io/zenml/pull/1733
* Fix `qemu`/`colima` Github Actions bug by @safoinme in https://github.com/zenml-io/zenml/pull/1760
* Bump `ruff` and `mypy` by @strickvl in https://github.com/zenml-io/zenml/pull/1762
* Add Template Testing in Core by @avishniakov in https://github.com/zenml-io/zenml/pull/1745
* Removing analytics v1 and optimizing v2 by @bcdurak in https://github.com/zenml-io/zenml/pull/1753
* Update publish script to take a token by @strickvl in https://github.com/zenml-io/zenml/pull/1758
* Update variable name for release publication token by @strickvl in https://github.com/zenml-io/zenml/pull/1764
* Lock `MYSQL` Database during DB migrations by @safoinme in https://github.com/zenml-io/zenml/pull/1763
* `mlstacks` integration (and deprecation of old deployment logic) by @strickvl in https://github.com/zenml-io/zenml/pull/1721

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.43.0...0.44.0

# 0.43.0

This release brings limited support for Python 3.11, improves the quickstart experience with a fully reworked flow, enhances the user experience while dealing with ZenML docs, offers new extended templates for projects, and fixes a GCP connector creation issue.

## Limited support for Python 3.11

This release adds limited support for Python 3.11. The following integrations are currently not supported with Python 3.11:
- gcp
- kubeflow
- tekton

This is because:
- GCP packages that support Python 3.11 are not compatible with KFP 1
- Upgrade to KFP 2 is blocked by the fact that Tekton doesn't have any release compatible with KFP 2 yet (https://github.com/zenml-io/zenml/pull/1697)

## Breaking Changes

A minor breaking change in CLI for `zenml init`:
- previously supported flag: `--starter`
- new flag: `--template-with-defaults`
- behavior remains the same - the flag is responsible for using the template's default settings

## What's Changed
* Disable implicit auth methods for service connectors by default by @stefannica in https://github.com/zenml-io/zenml/pull/1704
* New quickstart by @strickvl in https://github.com/zenml-io/zenml/pull/1692
* Set `MLflow` configuration as environment variables before deployment subprocess by @safoinme in https://github.com/zenml-io/zenml/pull/1705
* Fix Migration Guide Links by @fa9r in https://github.com/zenml-io/zenml/pull/1706
* Improve Input Validation Error Message by @fa9r in https://github.com/zenml-io/zenml/pull/1712
* Update link in cloudpickle_materializer.py by @duarteocarmo in https://github.com/zenml-io/zenml/pull/1713
* catch exceptions in `list_model_versions` by @avishniakov in https://github.com/zenml-io/zenml/pull/1703
* Rename `transition_model_stage` to `transition_model_version_stage` by @avishniakov in https://github.com/zenml-io/zenml/pull/1707
* pandas input to `predict` by @avishniakov in https://github.com/zenml-io/zenml/pull/1715
* Small fixes to global config docs page by @schustmi in https://github.com/zenml-io/zenml/pull/1714
* Allow specifying extra hosts for LocalDockerOrchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/1709
* Flexible use of `ignore_cols` in `evidently_report_step` by @avishniakov in https://github.com/zenml-io/zenml/pull/1711
* Add external artifacts and direct links to run DAG by @fa9r in https://github.com/zenml-io/zenml/pull/1718
* E2E flow example for templates by @avishniakov in https://github.com/zenml-io/zenml/pull/1710
* Fix bug in service connector, Closes #1720 by @soubenz in https://github.com/zenml-io/zenml/pull/1726
* Document the namespace and service account k8s orchestrator settings by @stefannica in https://github.com/zenml-io/zenml/pull/1722
* Refactoring done and reduced some functions complexity and work-time by @thanseefpp in https://github.com/zenml-io/zenml/pull/1719
* Update custom orchestrator guide by @schustmi in https://github.com/zenml-io/zenml/pull/1728
* Improve error message when passing non-json serializable parameter by @schustmi in https://github.com/zenml-io/zenml/pull/1729
* Bump `ruff` to 0.0.282 by @strickvl in https://github.com/zenml-io/zenml/pull/1730
* Docs and README update for ZenML Cloud by @bcdurak in https://github.com/zenml-io/zenml/pull/1723
* bump `MLflow` to 2.5.0 by @safoinme in https://github.com/zenml-io/zenml/pull/1708
* Move Examples to Tests by @fa9r in https://github.com/zenml-io/zenml/pull/1673
* Add Error Handling for Empty Pipelines by @fa9r in https://github.com/zenml-io/zenml/pull/1734
* Revert "Add
Error Handling for Empty Pipelines" by @fa9r in https://github.com/zenml-io/zenml/pull/1735
* Changing the links to the public roadmap by @bcdurak in https://github.com/zenml-io/zenml/pull/1737
* Add Error Handling for Empty Pipelines by @fa9r in https://github.com/zenml-io/zenml/pull/1736
* Revisit `init --template` CLI for new templates by @avishniakov in https://github.com/zenml-io/zenml/pull/1731
* Add Python 3.11 Support by @fa9r in https://github.com/zenml-io/zenml/pull/1702
* fix error on scheduled pipelines with KubernetesOrchestrator by @francoisserra in https://github.com/zenml-io/zenml/pull/1738
* Bugfix for identify calls with empty email strings by @bcdurak in https://github.com/zenml-io/zenml/pull/1739

## New Contributors
* @duarteocarmo made their first contribution in https://github.com/zenml-io/zenml/pull/1713
* @thanseefpp made their first contribution in https://github.com/zenml-io/zenml/pull/1719

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.42.0...0.43.0

# 0.42.1

This is a minor release that fixes a couple of minor issues and improves the quickstart example.

## Breaking Changes

### Disable Implicit Auth Methods for Service Connectors by Default

The implicit authentication methods supported by cloud Service Connectors may constitute a security risk, because they can give users access to the same cloud resources and services that the ZenML Server itself is allowed to access. For this reason, the default behavior of ZenML Service Connectors has been changed to disable implicit authentication methods by default. If you try to configure any of the AWS, GCP or Azure Service Connectors using the implicit authentication method, you will now receive an error message.

To enable implicit authentication methods, you have to set the `ZENML_ENABLE_IMPLICIT_AUTH_METHODS` environment variable or the ZenML helm chart `enableImplicitAuthMethods` configuration option to `true`.
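For a non-Helm deployment, the opt-in described above is just an environment variable on the server process; a minimal sketch (assuming you set it in the shell that launches the ZenML server):

```shell
# Re-enable implicit auth methods for service connectors (disabled by default
# as of 0.42.1). This must be set in the environment of the ZenML server
# process before it starts.
export ZENML_ENABLE_IMPLICIT_AUTH_METHODS=true
echo "$ZENML_ENABLE_IMPLICIT_AUTH_METHODS"
```

For Helm deployments, the equivalent is setting the chart's `enableImplicitAuthMethods` value to `true` in your values override.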
## What's Changed

* Disable implicit auth methods for service connectors by default by @stefannica in https://github.com/zenml-io/zenml/pull/1704
* New quickstart by @strickvl in https://github.com/zenml-io/zenml/pull/1692
* Set `MLflow` configuration as environment variables before deployment subprocess by @safoinme in https://github.com/zenml-io/zenml/pull/1705
* Fix Migration Guide Links by @fa9r in https://github.com/zenml-io/zenml/pull/1706

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.42.0...0.42.1

# 0.42.0

This release brings major user experience improvements to how ZenML logs are managed and displayed, removes Python 3.7 support, and fixes the Python 3.10 PyYAML issues caused by the Cython 3.0 release.

## Improved Logging UX

The log messages written by ZenML when running pipelines or executing ZenML CLI commands are now more concise and easier to digest, and the log message colors were adjusted to be more intuitive. Additionally, all log messages, including custom prints to stdout, now show up as step logs in the dashboard.

## Breaking Changes

### Python 3.7 Support Dropped

Python 3.7 reached its end of life on June 27th, 2023. Since then, several MLOps tools have stopped supporting Python 3.7. To prevent dependency issues with our integrations and other open-source packages, ZenML will also no longer support Python 3.7 starting from this release.

### Dependency and Integration Version Updates

ZenML now requires PyYAML 6 since older versions are broken under Python 3.10.
Subsequently, the following integrations now require a higher package version:

- Kubeflow now requires `kfp==1.8.22`
- Tekton now requires `kfp-tekton==1.7.1`
- Evidently now requires `evidently==0.2.7` or `evidently==0.2.8`

## What's Changed

* Add missing quote in docs by @schustmi in https://github.com/zenml-io/zenml/pull/1674
* Update Local Docker orchestrator docs by @strickvl in https://github.com/zenml-io/zenml/pull/1676
* Relax `fastapi` dependency version by @fa9r in https://github.com/zenml-io/zenml/pull/1675
* Improve flavor registration error message by @schustmi in https://github.com/zenml-io/zenml/pull/1671
* Simplified Page Iteration by @fa9r in https://github.com/zenml-io/zenml/pull/1679
* Document how to deploy ZenML with custom Docker image by @fa9r in https://github.com/zenml-io/zenml/pull/1672
* Document the ZenML Client and Models by @fa9r in https://github.com/zenml-io/zenml/pull/1678
* Add Label Studio text classification integration and example by @adamwawrzynski in https://github.com/zenml-io/zenml/pull/1658
* Improve yaml config docs page by @schustmi in https://github.com/zenml-io/zenml/pull/1680
* Catch correct exception when trying to access step context by @schustmi in https://github.com/zenml-io/zenml/pull/1681
* Add option to only export requirements for installed integrations by @schustmi in https://github.com/zenml-io/zenml/pull/1682
* Fix copy-paste error (Seldon / KServe docstring) by @strickvl in https://github.com/zenml-io/zenml/pull/1687
* Add avishniakov to `teams.yaml` by @avishniakov in https://github.com/zenml-io/zenml/pull/1688
* [NEW PR] Set contains_code to 1 instead of True by @kobiche in https://github.com/zenml-io/zenml/pull/1685
* Misc slack fixes by @schustmi in https://github.com/zenml-io/zenml/pull/1686
* Docs: Migration Guide by @fa9r in https://github.com/zenml-io/zenml/pull/1691
* fix: :card_file_box: Extend pipeline spec storage length by @francoisserra in https://github.com/zenml-io/zenml/pull/1694
* Make the workspace statistics endpoint more performant by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1689
* Deprecate examples CLI by @avishniakov in https://github.com/zenml-io/zenml/pull/1693
* Add cloud server deployment type by @schustmi in https://github.com/zenml-io/zenml/pull/1699
* Fix Python 3.10 PyYAML Installation Issues by @fa9r in https://github.com/zenml-io/zenml/pull/1695
* Remove Python 3.7 Support by @fa9r in https://github.com/zenml-io/zenml/pull/1652
* Improved logs for pipeline execution and CLI usage by @bcdurak in https://github.com/zenml-io/zenml/pull/1664
* Docs: Restructure Advanced Guide by @fa9r in https://github.com/zenml-io/zenml/pull/1698

## New Contributors

* @adamwawrzynski made their first contribution in https://github.com/zenml-io/zenml/pull/1658
* @avishniakov made their first contribution in https://github.com/zenml-io/zenml/pull/1688
* @kobiche made their first contribution in https://github.com/zenml-io/zenml/pull/1685

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.41.0...0.42.0

# 0.41.0

ZenML release 0.41.0 comes with a second round of updates to the pipeline and step interface, with major changes in how step outputs are defined, how information about previous runs can be fetched programmatically, and how information about the current run can be obtained. See [this docs page](https://docs.zenml.io/user-guide/migration-guide/migration-zero-forty) for an overview of all pipeline interface changes introduced since release 0.40.0 and for more information on how to migrate your existing ZenML pipelines to the latest syntax.

## Fetching Runs Programmatically (#1635)

The entire syntax of fetching previous runs programmatically was majorly redesigned.
While the overall user flow is still almost identical, the new approach does not contain pipeline-versioning-related inconsistencies, has a more intuitive syntax, and is also easier for users to learn since the new syntax uses the ZenML Client and response models natively instead of requiring the `zenml.post_execution` util functions and corresponding `...View` wrapper classes.

## Accessing Current Run Information (#1648)

How to fetch information about the current pipeline run from within the run has been majorly redesigned:

- Instead of being an argument of the step function, the `StepContext` is now a singleton that can be accessed via the new `zenml.get_step_context()` function.
- The `StepContext` is now decoupled from the `StepEnvironment`, and the `StepEnvironment` is deprecated.
- The `StepContext` now contains the full `PipelineRunResponseModel` and `StepRunResponseModel`, so all information about the run is accessible, not just the name / id / params.

## Defining Step Outputs (#1653)

Instead of using the `zenml.steps.Output` class to annotate steps with multiple outputs, ZenML can now handle `Tuple` annotations natively, and output names can now be assigned to any step output using `typing_extensions.Annotated`.
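The naming mechanism can be illustrated with plain typing tools. The sketch below uses a made-up function and output names and only standard-library `typing` (on Python 3.9+, `Annotated` is available there directly); ZenML's own machinery reads the same `Annotated` metadata to name each tuple position:

```python
from typing import Annotated, Tuple, get_type_hints

# Hypothetical step-like function: the string inside each Annotated[...]
# is the name attached to that output position.
def train_and_score(data: list) -> Tuple[
    Annotated[float, "mean_value"],
    Annotated[int, "n_samples"],
]:
    return sum(data) / len(data), len(data)

# The output names can be recovered from the return annotation's metadata:
hints = get_type_hints(train_and_score, include_extras=True)
output_names = [t.__metadata__[0] for t in hints["return"].__args__]
print(output_names)  # → ['mean_value', 'n_samples']
```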
## What's Changed

* Remove remaining BaseParameters references by @schustmi in https://github.com/zenml-io/zenml/pull/1625
* Fix the s3 integration dependencies by @stefannica in https://github.com/zenml-io/zenml/pull/1641
* Don't run whylogs example on windows by @stefannica in https://github.com/zenml-io/zenml/pull/1644
* Adding the missing pages to our docs by @bcdurak in https://github.com/zenml-io/zenml/pull/1640
* Connectors startup guide and stack component references by @stefannica in https://github.com/zenml-io/zenml/pull/1632
* Fixing the listing functionality of several objects in our CLI by @bcdurak in https://github.com/zenml-io/zenml/pull/1616
* Revamp Post Execution by @fa9r in https://github.com/zenml-io/zenml/pull/1635
* Fix run configuration parameter merging by @schustmi in https://github.com/zenml-io/zenml/pull/1638
* Simplify email opt-in telemetry by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1637
* Fix Step Logs on Windows by @fa9r in https://github.com/zenml-io/zenml/pull/1645
* Improve config section of containerization docs page by @schustmi in https://github.com/zenml-io/zenml/pull/1649
* Validating slack alerter by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1609
* Added some error handling in gcp cloud function scheduling by @htahir1 in https://github.com/zenml-io/zenml/pull/1634
* CI: Disable Python 3.7 Mac Runners by @fa9r in https://github.com/zenml-io/zenml/pull/1650
* Redesign `StepContext` by @fa9r in https://github.com/zenml-io/zenml/pull/1648
* Fix output of dashboard url on pipeline run by @strickvl in https://github.com/zenml-io/zenml/pull/1629
* fix: use k8s orchestrator service account in step pod's manifest by @francoisserra in https://github.com/zenml-io/zenml/pull/1654
* Fix Image Builder Warning Message by @fa9r in https://github.com/zenml-io/zenml/pull/1659
* New step output annotations by @schustmi in https://github.com/zenml-io/zenml/pull/1653
* Add Python 3.10 to listed versions supported via PyPi by @strickvl in https://github.com/zenml-io/zenml/pull/1662
* Add DatabricksShell on list of notebooks allowed to show dashboard by @lucasbissaro in https://github.com/zenml-io/zenml/pull/1643
* Fixing broken links in our examples folder by @bcdurak in https://github.com/zenml-io/zenml/pull/1661
* Feature/frw 2013 docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1639
* Update Pipeline Migration Page by @fa9r in https://github.com/zenml-io/zenml/pull/1667
* Fix/set env variables before installing packages by @lopezco in https://github.com/zenml-io/zenml/pull/1665
* Fix the `zenml deploy` story by @wjayesh in https://github.com/zenml-io/zenml/pull/1651
* Always keep link to API docs pointed at the version of the release branch by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1636
* Fix BentoML deployer by @safoinme in https://github.com/zenml-io/zenml/pull/1647
* Corrected all mentions in docs from API docs to SDK docs. by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1669
* Update outdated docs by @schustmi in https://github.com/zenml-io/zenml/pull/1668

## New Contributors

* @lucasbissaro made their first contribution in https://github.com/zenml-io/zenml/pull/1643
* @lopezco made their first contribution in https://github.com/zenml-io/zenml/pull/1665

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.40.3...0.41.0

# 0.40.3

This is a minor ZenML release that introduces a couple of new features:

* the [Azure Service Connector](https://docs.zenml.io/stacks-and-components/auth-management/azure-service-connector) is now available in addition to the AWS and GCP ones. It can be used to connect ZenML and Stack Components to Azure cloud infrastructure resources like Azure Blob Storage, Azure Container Registry and Azure Kubernetes Service.
* Service Connectors can now also be managed through the ZenML Dashboard
* adds `zenml secret export` CLI command to export secrets from the ZenML Secret Store to a local file
* adds the ability to create/update ZenML secrets from JSON/YAML files or command line arguments (courtesy of @bhatt-priyadutt)

In addition to that, this release also contains a couple of bug fixes and improvements, including:

* better documentation and fixes for the ZenML [Vertex AI Orchestrator](https://docs.zenml.io/stack-components/orchestrators/vertex) and [Vertex AI Step Operator](https://docs.zenml.io/stack-components/step-operators/vertex)
* adjust Seldon and BentoML Steps and Examples to new pipeline interface

## What's Changed

* Add option to list all resources when verifying service connector config by @stefannica in https://github.com/zenml-io/zenml/pull/1573
* Fix sandbox time limit by @schustmi in https://github.com/zenml-io/zenml/pull/1602
* Secrets input structure change method by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1547
* Implement Azure service connector by @stefannica in https://github.com/zenml-io/zenml/pull/1589
* Adding the ability to tag the source of an event for the analytics by @bcdurak in https://github.com/zenml-io/zenml/pull/1599
* Move all the logic into the script to make it as easy as possible to … by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1605
* Only set mysql session variables when necessary by @schustmi in https://github.com/zenml-io/zenml/pull/1568
* Bug in creating upstream_steps by @sidsaurb in https://github.com/zenml-io/zenml/pull/1601
* Added logs endpoint to display on the dashboard by @htahir1 in https://github.com/zenml-io/zenml/pull/1526
* Fix CI by @fa9r in https://github.com/zenml-io/zenml/pull/1612
* Fix Azure Integration Imports and Improve Flavor Registration Error Handling by @fa9r in https://github.com/zenml-io/zenml/pull/1615
* Deprecation Cleanup by @fa9r in https://github.com/zenml-io/zenml/pull/1608
* Cleanup Local Logging Temp Files by @fa9r in https://github.com/zenml-io/zenml/pull/1621
* Add cloud orchestrator warning message by @strickvl in https://github.com/zenml-io/zenml/pull/1418
* Update custom code run in sandbox docs by @safoinme in https://github.com/zenml-io/zenml/pull/1610
* Remove the GH Actions review reminder bot by @strickvl in https://github.com/zenml-io/zenml/pull/1624
* Automatically optimize image sizes on PR creation by @strickvl in https://github.com/zenml-io/zenml/pull/1626
* Deprecation Warning Improvements by @fa9r in https://github.com/zenml-io/zenml/pull/1620
* Fix ZenML Installation when FastAPI is not Installed by @fa9r in https://github.com/zenml-io/zenml/pull/1627
* Fix unnecessary / extra deprecation warnings by @strickvl in https://github.com/zenml-io/zenml/pull/1630
* Add `zenml secret export` CLI command by @fa9r in https://github.com/zenml-io/zenml/pull/1607
* Missing pipeline features docs by @schustmi in https://github.com/zenml-io/zenml/pull/1619
* Fix for valid secret name by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1617
* Fix and document Vertex AI orchestrator and step operator by @stefannica in https://github.com/zenml-io/zenml/pull/1606
* Deprecate KServe Integration by @fa9r in https://github.com/zenml-io/zenml/pull/1631
* Adjust Seldon Steps and Examples to New Pipeline Interface by @fa9r in https://github.com/zenml-io/zenml/pull/1560
* Adjust BentoML Steps and Example to New Pipeline Interface by @fa9r in https://github.com/zenml-io/zenml/pull/1614
* Moved kubernetes imports to inner function to avoid module not found error by @htahir1 in https://github.com/zenml-io/zenml/pull/1622

## New Contributors

* @sidsaurb made their first contribution in https://github.com/zenml-io/zenml/pull/1601

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.40.2...0.40.3

# 0.40.2

Documentation and example updates.
## What's Changed

* Update Example for sandbox by @safoinme in https://github.com/zenml-io/zenml/pull/1576
* Document `zenml show` by @fa9r in https://github.com/zenml-io/zenml/pull/1570
* Clean up for the new docs by @bcdurak in https://github.com/zenml-io/zenml/pull/1575
* Add orchestrator outputs for sandbox examples by @strickvl in https://github.com/zenml-io/zenml/pull/1579
* Docs: Added some adjustments to the code repository page. by @bcdurak in https://github.com/zenml-io/zenml/pull/1582
* Sandbox documentation (and other docs updates) by @strickvl in https://github.com/zenml-io/zenml/pull/1574
* Minor README update regarding the sandbox. by @bcdurak in https://github.com/zenml-io/zenml/pull/1586
* Fix failing `mlflow_tracking` example test by @strickvl in https://github.com/zenml-io/zenml/pull/1581
* Bump `ruff` and `mypy` by @strickvl in https://github.com/zenml-io/zenml/pull/1590
* Remove `config.yaml` references in example docs by @strickvl in https://github.com/zenml-io/zenml/pull/1585
* update mlflow tracking example and reduce number of epochs by @safoinme in https://github.com/zenml-io/zenml/pull/1598
* Improve error message when requirements file does not exist by @schustmi in https://github.com/zenml-io/zenml/pull/1596
* Fix build reuse for integrations with apt packages by @schustmi in https://github.com/zenml-io/zenml/pull/1594
* make the `Github` repo token optional by @safoinme in https://github.com/zenml-io/zenml/pull/1593

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.40.1...0.40.2

# 0.40.1

Small bug and docs fixes following the 0.40.0 release.
## What's Changed

* Convert dict to tuple in ArtifactConfiguration validator by @schustmi in https://github.com/zenml-io/zenml/pull/1571
* Docs cleanup by @schustmi in https://github.com/zenml-io/zenml/pull/1569
* Fix `boto3<=1.24.59` by @safoinme in https://github.com/zenml-io/zenml/pull/1572

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.40.0...0.40.1

# 0.40.0

ZenML release 0.40.0 introduces two big updates: a fresh and more flexible pipeline interface and a new way to connect and authenticate with external services in ZenML Connectors. See below for full details on these two major new sets of functionality.

The release also contains many bug fixes and quality-of-life improvements. Specifically, we reworked our documentation from the ground up with particular attention to the structure to help you find what you need. Our [Label Studio integration example](https://github.com/zenml-io/zenml/tree/main/examples/label_studio_annotation) is now working again and allows you to use more recent versions of the `label-studio` package that backs it.

## A Fresh Pipeline Interface

This release introduces a completely reworked interface for developing your ZenML steps and pipelines:

* Increased flexibility when defining steps: Steps can now have `Optional`, `Union`, and `Any` type annotations for their inputs and outputs. Additionally, default values are allowed for step inputs.

```python
@step
def trainer(
    data: pd.DataFrame,
    start_model: Union[svm.SVC, svm.SVR],
    coef0: Optional[int] = None,
) -> Any:
    pass
```

* You can now easily run a step outside of a pipeline, making it easier to test and debug your code:

```python
trainer(data=pd.DataFrame(...), start_model=svm.SVC(...))
```

* External artifacts can be used to pass values to steps that are not produced by an upstream step.
This provides more flexibility when working with external data or models:

```python
from zenml.steps.external_artifact import ExternalArtifact

@pipeline
def my_pipeline(lr: float):
    data = process_data()
    trainer(data=data, start_model=ExternalArtifact(svm.SVC(...)))
```

* You can now call steps multiple times inside a pipeline, allowing you to create more complex workflows and reuse steps with different parameters:

```python
@pipeline
def my_pipeline(step_count: int) -> None:
    data = load_data_step()
    after = []
    for i in range(step_count):
        train_step(data, learning_rate=i * 0.0001, id=f"train_step_{i}")
        after.append(f"train_step_{i}")
    model = select_model_step(..., after=after)
```

* Pipelines can now define inputs and outputs, providing a clearer interface for working with data and dependencies between pipelines:

```python
@pipeline(enable_cache=False)
def subpipeline(pipeline_param: int):
    out = step_1(k=None)
    step_2(a=3, b=pipeline_param)
    return 17
```

* You can now call pipelines within other pipelines. This currently does not execute the inner pipeline but instead adds its steps to the parent pipeline, allowing you to create modular and reusable workflows:

```python
@pipeline(enable_cache=False)
def my_pipeline(a: int = 1):
    p1_output = subpipeline(pipeline_param=22)
    step_2(a=a, b=p1_output)
```

To get started, simply import the new `@step` and `@pipeline` decorators and check out our new [starter guide](https://docs.zenml.io/user-guide/starter-guide) for more information.

```python
from zenml import step, pipeline

@step
def my_step(...):
    ...

@pipeline
def my_pipeline(...):
    ...
```

The old pipeline and step interface still works using the imports from previous ZenML releases but is deprecated and will be removed in the future.

## 'Connectors' for authentication

In this update, we're pleased to present a new feature to ZenML: Service Connectors.
The intention behind these connectors is to offer a reliable and more user-friendly method for integrating ZenML with external resources and services. We aim to simplify processes such as validating, storing, and generating security-sensitive data, along with the authentication and authorization of access to external services. We believe ZenML Service Connectors will be a useful tool to alleviate some of the common challenges in managing pipelines across various Stack Components.

Regardless of your background in infrastructure management - whether you're a beginner looking for quick cloud stack integration, or an experienced engineer focused on maintaining robust infrastructure security practices - our Service Connectors are designed to assist your work while maintaining high security standards.

Here are just a few ways you could use ZenML Service Connectors:

- Easy utilization of cloud resources: With ZenML's Service Connectors, you can use resources from AWS, GCP, and Azure without the need for extensive knowledge of cloud infrastructure or environment configuration. All you'll need is a ZenML Service Connector and a few Python libraries.
- Assisted setup with security in mind: Our Service Connectors come with features for configuration validation and verification, the generation of temporary, low-privilege credentials, and pre-authenticated and pre-configured clients for Python libraries.
- Easy local configuration transfer: ZenML's Service Connectors aim to resolve the reproducibility issue in ML pipelines. They do this by automatically transferring authentication configurations and credentials from your local machine, storing them securely, and allowing for effortless sharing across different environments.

[Visit our documentation pages](https://docs.zenml.io/stacks-and-components/auth-management) to learn more about ZenML Connectors and how you can use them in a way that supports your ML workflows.
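The core pattern behind a connector - validate credentials once, then hand out pre-configured clients - can be sketched as follows. The `BlobStoreConnector` class below is invented for illustration and is not the ZenML API:

```python
from dataclasses import dataclass

# Invented class for illustration only -- real ZenML Service Connectors
# talk to actual cloud providers and return real SDK clients.
@dataclass
class BlobStoreConnector:
    token: str

    def verify(self) -> bool:
        # A real connector would check the credentials against the provider.
        return bool(self.token)

    def get_client(self) -> dict:
        # A real connector returns a pre-authenticated SDK client object.
        if not self.verify():
            raise ValueError("invalid or missing credentials")
        return {"authorized": True, "token": self.token}

client = BlobStoreConnector(token="abc123").get_client()
```

The payoff of this design is that stack components never handle raw credentials themselves; they just ask the connector for a ready-to-use client.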
## What's Changed

* Cleanup remaining references of `zenml.artifacts` by @fa9r in https://github.com/zenml-io/zenml/pull/1534
* Upgrading the `black` version by @bcdurak in https://github.com/zenml-io/zenml/pull/1535
* Remove dev breakpoints by @strickvl in https://github.com/zenml-io/zenml/pull/1540
* Removing old option command from contribution doc by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1544
* Improve CLI help text for `zenml integration install -i ...` by @strickvl in https://github.com/zenml-io/zenml/pull/1545
* Fix RestZenStore error handling for list responses by @fa9r in https://github.com/zenml-io/zenml/pull/1539
* Simplify Dashboard UX via `zenml.show()` by @fa9r in https://github.com/zenml-io/zenml/pull/1511
* Removed hardcoded variable by @bhatt-priyadutt in https://github.com/zenml-io/zenml/pull/1543
* Revert Quickstart Changes by @fa9r in https://github.com/zenml-io/zenml/pull/1546
* Deprecate some long overdue functions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1541
* ZenML Connectors by @stefannica in https://github.com/zenml-io/zenml/pull/1514
* Fix automatic dashboard opening after `zenml up` by @fa9r in https://github.com/zenml-io/zenml/pull/1551
* Update Neptune README by @strickvl in https://github.com/zenml-io/zenml/pull/1554
* Update example READMEs following deployment PR by @strickvl in https://github.com/zenml-io/zenml/pull/1555
* Fix and update Label Studio example by @strickvl in https://github.com/zenml-io/zenml/pull/1542
* Fix linter errors by @stefannica in https://github.com/zenml-io/zenml/pull/1557
* Add Vertex as orchestrator and step operator to deploy CLI by @wjayesh in https://github.com/zenml-io/zenml/pull/1559
* Fix dashboard secret references by @stefannica in https://github.com/zenml-io/zenml/pull/1561
* New pipeline and step interface by @schustmi in https://github.com/zenml-io/zenml/pull/1466
* Major Documentation Rehaul by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1562
* Easy CSV Visualization by @fa9r in https://github.com/zenml-io/zenml/pull/1556

## New Contributors

* @bhatt-priyadutt made their first contribution in https://github.com/zenml-io/zenml/pull/1544

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.39.1...0.40.0

# 0.39.1

Minor hotfix release for running ZenML in Google Colab environments.

## What's Changed

* Fix Source Resolving in Colab by @fa9r in https://github.com/zenml-io/zenml/pull/1530

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.39.0...0.39.1

# 0.39.0

ZenML release 0.39.0 introduces several big new features:

- The `zenml stack recipe` CLI commands now support fine-grained handling of individual stack components.
- Artifacts are now automatically visualized in the dashboard.
- Materializers received an overhaul: a new `cloudpickle` default materializer was added that works for arbitrary objects, and a `pycaret` materializer that can handle various modeling frameworks in a unified format.

The release also contains many bug fixes and quality-of-life improvements, such as new settings options for the SageMaker and Kubernetes orchestrators.

## Individual Stack Component Deployment

In this release, we've enhanced the ZenML stack recipe CLI to support conditional deployment, destruction, and configuration of individual stack components. Users can now quickly deploy and destroy components with options for each flavor, and pass a config file for custom variables. The new `output` CLI command allows users to retrieve outputs from their recipes. Overall, this update streamlines deploying and managing stack components by providing a more efficient and user-friendly experience.

## Artifact Visualization

Artifact visualizations are now automatically extracted by ZenML and embedded in the ZenML dashboard. Visualizations can now be defined by overriding the `save_visualizations` method of the materializer that handles an artifact.
These visualizations are then automatically shown in the dashboard and can also be displayed in Jupyter notebooks using the new `visualize` post-execution method.

## Default Cloudpickle Materializer

ZenML now uses `cloudpickle` under the hood to save/load artifacts that other materializers cannot handle. This makes it even easier to get started with ZenML since you no longer need to define custom materializers if you just want to experiment with some new data types.

## What's Changed

* Docs/zenml hub documentation by @bcdurak in https://github.com/zenml-io/zenml/pull/1490
* Sort integration list before display by @strickvl in https://github.com/zenml-io/zenml/pull/1494
* Update docs to surface CLI filtering syntax by @strickvl in https://github.com/zenml-io/zenml/pull/1496
* ZenML Hub Tests & CLI Improvements by @fa9r in https://github.com/zenml-io/zenml/pull/1495
* Delete Legacy Docs by @fa9r in https://github.com/zenml-io/zenml/pull/1497
* Improve the REST API error handling by @stefannica in https://github.com/zenml-io/zenml/pull/1451
* Fix circular import of PipelineRunConfiguration by @schustmi in https://github.com/zenml-io/zenml/pull/1501
* Delete Deprecated Artifacts and Materializer Code by @fa9r in https://github.com/zenml-io/zenml/pull/1498
* Allow filtering runs by code repo id by @schustmi in https://github.com/zenml-io/zenml/pull/1499
* Add example to docs for passing stack component specific settings by @christianversloot in https://github.com/zenml-io/zenml/pull/1506
* Increase step run field lengths by @schustmi in https://github.com/zenml-io/zenml/pull/1503
* Fix Sagemaker orchestrator pipeline name bug by @strickvl in https://github.com/zenml-io/zenml/pull/1508
* Generate unique SageMaker training job name based on pipeline and ste… by @christianversloot in https://github.com/zenml-io/zenml/pull/1505
* [CI Fix] Pin Llama Index Version by @fa9r in https://github.com/zenml-io/zenml/pull/1516
* Basic PyCaret integration and materializer by @christianversloot in https://github.com/zenml-io/zenml/pull/1512
* Specify line endings for different operating systems by @strickvl in https://github.com/zenml-io/zenml/pull/1513
* Extend SageMakerOrchestratorSettings with processor_args enabling step level configuration by @christianversloot in https://github.com/zenml-io/zenml/pull/1509
* Fix post execution `get_pipeline()` and `pipeline.get_runs()` by @fa9r in https://github.com/zenml-io/zenml/pull/1510
* Default `cloudpickle` Materializer & Materializer Inheritance by @fa9r in https://github.com/zenml-io/zenml/pull/1507
* Artifact Visualization by @fa9r in https://github.com/zenml-io/zenml/pull/1472
* Add Kubernetes Orchestrator Settings by @fa9r in https://github.com/zenml-io/zenml/pull/1518
* Bump `ruff` to 0.0.265 by @strickvl in https://github.com/zenml-io/zenml/pull/1520
* feat: Set cloud function service account to the one defined in Vertex… by @francoisserra in https://github.com/zenml-io/zenml/pull/1519
* Fix Kubernetes Orchestrator Config Loading by @fa9r in https://github.com/zenml-io/zenml/pull/1523
* Resolve path during module resolving by @schustmi in https://github.com/zenml-io/zenml/pull/1521
* Fix `SO_REUSEPORT` issue by @fa9r in https://github.com/zenml-io/zenml/pull/1524
* Add individual stack component deployment through recipes by @wjayesh in https://github.com/zenml-io/zenml/pull/1328
* Raise 501 for Unauthenticated Artifact Stores by @fa9r in https://github.com/zenml-io/zenml/pull/1522
* Fix Duplicate Step Error by @fa9r in https://github.com/zenml-io/zenml/pull/1527
* Fix pulling of stack recipes on `zenml init` by @wjayesh in https://github.com/zenml-io/zenml/pull/1528
* Store dockerfile and requirements for builds by @schustmi in https://github.com/zenml-io/zenml/pull/1525

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.38.0...0.39.0

# 0.38.0

The 0.38.0 ZenML release is a major milestone for the ZenML project.
It marks the introduction of the ZenML Hub, a central platform that enables our users to search, share and discover community-contributed code, such as stack component flavors, materializers, and pipeline steps. The ZenML Hub allows our users to extend their ZenML experience by leveraging the community's diverse range of implementations and MLOps best practices.

If you're interested in learning more about our motivation for implementing the ZenML Hub and our plans for its future, we invite you to read [our new blog post](https://blog.zenml.io/zenml-hub-launch). The blog post provides a comprehensive overview of the ZenML Hub's goals and objectives, as well as the features that we plan to introduce in the future.

Aside from this major new feature, the release also includes a number of small improvements and bug fixes.

## What's Changed

* Fix broken ENV variable by @strickvl in https://github.com/zenml-io/zenml/pull/1458
* fix screenshot size in code repo by @safoinme in https://github.com/zenml-io/zenml/pull/1467
* Fix CI (Deepchecks integration tests) by @fa9r in https://github.com/zenml-io/zenml/pull/1470
* chore: update teams.yml by @Cahllagerfeld in https://github.com/zenml-io/zenml/pull/1459
* Fix `BuiltInContainerMaterializer` for subtypes and non-built-in types by @fa9r in https://github.com/zenml-io/zenml/pull/1464
* Kubernetes Orchestrator Improvements by @fa9r in https://github.com/zenml-io/zenml/pull/1460
* Fix flaky CLI tests by @schustmi in https://github.com/zenml-io/zenml/pull/1465
* Fix circular import during type checking by @schustmi in https://github.com/zenml-io/zenml/pull/1463
* Allow secret values replacement in REST API PUT by @stefannica in https://github.com/zenml-io/zenml/pull/1471
* Fix two steps race condition by @safoinme in https://github.com/zenml-io/zenml/pull/1473
* Downgrading ZenML Version in global config by @safoinme in https://github.com/zenml-io/zenml/pull/1474
* Revert "Downgrading ZenML Version in global config" by @safoinme in https://github.com/zenml-io/zenml/pull/1476
* Add metadata to stack components by @wjayesh in https://github.com/zenml-io/zenml/pull/1416
* remove modules from the list output for stack recipes by @wjayesh in https://github.com/zenml-io/zenml/pull/1480
* Pin `openai` integration to `>0.27.0` by @strickvl in https://github.com/zenml-io/zenml/pull/1461
* Apply formatting fixes to `/scripts` by @strickvl in https://github.com/zenml-io/zenml/pull/1462
* Move import outside of type checking by @schustmi in https://github.com/zenml-io/zenml/pull/1482
* Delete extra word from `bentoml` docs by @strickvl in https://github.com/zenml-io/zenml/pull/1484
* Remove top-level config from recommended repo structure by @schustmi in https://github.com/zenml-io/zenml/pull/1485
* Bump `mypy` and `ruff` by @strickvl in https://github.com/zenml-io/zenml/pull/1481
* ZenML Version Downgrade - Silence Warning by @safoinme in https://github.com/zenml-io/zenml/pull/1477
* Update ZenServer recipes to include secret stores by @wjayesh in https://github.com/zenml-io/zenml/pull/1483
* Fix alembic order by @schustmi in https://github.com/zenml-io/zenml/pull/1487
* Fix source resolving for classes in notebooks by @schustmi in https://github.com/zenml-io/zenml/pull/1486
* fix: use pool_pre_ping to discard invalid SQL connections when borrow… by @francoisserra in https://github.com/zenml-io/zenml/pull/1489

## New Contributors

* @Cahllagerfeld made their first contribution in https://github.com/zenml-io/zenml/pull/1459
* @francoisserra made their first contribution in https://github.com/zenml-io/zenml/pull/1489

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.37.0...0.38.0

# 0.37.0

In this ZenML release, we are pleased to introduce a compelling new feature: [ZenML Code Repositories](https://docs.zenml.io/starter-guide/production-fundamentals/code-repositories).
This innovative addition formalizes the principles of code versioning and tracking while consolidating their pivotal role in executing pipelines and caching pipeline steps. With Code Repositories, ZenML maintains an accurate record of the code version used in your pipeline runs. Furthermore, executing a pipeline tracked by a registered code repository can significantly accelerate the Docker image building process for containerized stack components.

As with everything in ZenML, we designed the Code Repository concept as a highly extensible abstraction. The update defines the basic Code Repository interface and includes two implementations integrating ZenML with two popular code repository flavors: GitHub and GitLab.

## Other Enhancements

We've updated the `pytorch-lightning` integration to support the `2.0` version. We also updated the `mlflow` integration to support the `2.2.2` version. **IMPORTANT**: it is not recommended to continue using MLflow versions older than `2.2.1` as a model registry with ZenML, as [they are vulnerable to a security issue](https://github.com/advisories/GHSA-xg73-94fp-g449).

Last but not least, two stellar additions from our community members:

* `zenml stack delete` now supports a `--recursive` flag to delete all components in a stack. Many thanks to @KenmogneThimotee for the contribution!
* the ZenML Sagemaker step operator has been expanded to support S3 input data and additional input arguments. Many thanks to @christianversloot for the contribution!

## Breaking Changes

The ZenML GitHub Orchestrator and GitHub Secrets Manager have been removed in this release. Given that their concerns overlapped with the new ZenML GitHub Code Repository and they didn't provide sufficient value on their own, we decided to discontinue them. If you were using these components, you can continue to use GitHub Actions to run your pipelines, in combination with the ZenML GitHub Code Repository.
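The Docker image reuse enabled by code repositories can be pictured with a small conceptual sketch. This is not ZenML's implementation — just a minimal illustration, with invented names, of why pinning a pipeline to a tracked code version lets previously built images be cached and reused:

```python
# Conceptual sketch (NOT ZenML's actual code): once a run is pinned to a
# commit SHA from a registered code repository, the (pipeline, commit)
# pair can key a build cache, so unchanged code skips the Docker build.
import hashlib


class BuildCache:
    """Maps (pipeline, code version) pairs to previously built images."""

    def __init__(self) -> None:
        self._images: dict[str, str] = {}

    def _key(self, pipeline: str, commit_sha: str) -> str:
        return hashlib.sha256(f"{pipeline}@{commit_sha}".encode()).hexdigest()

    def get_or_build(self, pipeline: str, commit_sha: str) -> tuple[str, bool]:
        """Return (image_tag, was_cached)."""
        key = self._key(pipeline, commit_sha)
        if key in self._images:
            return self._images[key], True  # reuse the existing image
        image_tag = f"{pipeline}:{commit_sha[:8]}"  # stand-in for a real build
        self._images[key] = image_tag
        return image_tag, False


cache = BuildCache()
tag1, cached1 = cache.get_or_build("training", "a1b2c3d4e5f6")
tag2, cached2 = cache.get_or_build("training", "a1b2c3d4e5f6")
```

The second call hits the cache because the code version is identical, which is the effect a clean, tracked repository state has on containerized pipeline runs.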
## What's Changed * Test integration for seldon example by @safoinme in https://github.com/zenml-io/zenml/pull/1285 * Update `pytorch-lightning` to support `2.0` by @safoinme in https://github.com/zenml-io/zenml/pull/1425 * Code repository by @schustmi in https://github.com/zenml-io/zenml/pull/1344 * Bump `ruff` to 0.259 by @strickvl in https://github.com/zenml-io/zenml/pull/1439 * Change `pipeline_run_id` to `run_name` by @safoinme in https://github.com/zenml-io/zenml/pull/1390 * Update `mypy>=1.1.1` and fix new errors by @safoinme in https://github.com/zenml-io/zenml/pull/1432 * Add `--upgrade` option to ZenML integration install by @safoinme in https://github.com/zenml-io/zenml/pull/1435 * Bump `MLflow` to 2.2.2 by @safoinme in https://github.com/zenml-io/zenml/pull/1441 * HuggingFace Spaces server deployment option by @strickvl in https://github.com/zenml-io/zenml/pull/1427 * Bugfix for server import by @bcdurak in https://github.com/zenml-io/zenml/pull/1442 * Fix HF Spaces URL by @strickvl in https://github.com/zenml-io/zenml/pull/1444 * Remove all `zenml.cli` imports outside of `zenml.cli` by @fa9r in https://github.com/zenml-io/zenml/pull/1447 * Add recursive deletion of components for `zenml stack delete` by @KenmogneThimotee in https://github.com/zenml-io/zenml/pull/1437 * Temporarily disable primary key requirement for newer mysql versions by @schustmi in https://github.com/zenml-io/zenml/pull/1450 * Add step name suffix for sagemaker job name by @schustmi in https://github.com/zenml-io/zenml/pull/1452 * Code repo docs by @schustmi in https://github.com/zenml-io/zenml/pull/1448 * Allow resource settings for airflow kubernetes pod operators by @schustmi in https://github.com/zenml-io/zenml/pull/1378 * SageMaker step operator: expand input arguments and add support for S3 input data by @christianversloot in https://github.com/zenml-io/zenml/pull/1381 * Add Screenshots to Code Repo Token by @safoinme in https://github.com/zenml-io/zenml/pull/1454 ## New 
Contributors

* @KenmogneThimotee made their first contribution in https://github.com/zenml-io/zenml/pull/1437
* @christianversloot made their first contribution in https://github.com/zenml-io/zenml/pull/1381

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.36.1...0.37.0

# 0.36.1

This minor release contains some small fixes and improvements.

- We fixed a bug with the way hooks were being parsed, which was causing pipelines to fail.
- We brought various parts of the documentation up to date with features that had previously been added, notably the new image building functionality.
- We added a failure hook that connects to OpenAI's ChatGPT API so that, when a pipeline fails, you can receive a message that includes suggestions on how to fix the failing step.
- We added new integrations with `langchain` and `llama_hub` so you can build on top of those libraries as part of a more robust MLOps workflow.
- We made the first of some bigger changes to our analytics system to make it more robust and secure. This release begins that migration. Users should expect no changes in behavior, and all telemetry-related preferences will be preserved.
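The failure-hook idea mentioned above can be pictured with a generic sketch. This is not ZenML's hook API — just a stdlib illustration of the pattern, with invented names, where a wrapper invokes a user-supplied hook when a step raises so the hook can, for example, send the error to a chat API for debugging suggestions:

```python
# Generic illustration of a failure hook (NOT ZenML's API; all names
# here are invented for the example).
from functools import wraps
from typing import Callable


def with_failure_hook(hook: Callable[[str, Exception], None]):
    """Decorator that calls `hook(step_name, exception)` when a step fails."""
    def decorator(step_fn):
        @wraps(step_fn)
        def wrapper(*args, **kwargs):
            try:
                return step_fn(*args, **kwargs)
            except Exception as exc:
                hook(step_fn.__name__, exc)  # notify, then re-raise
                raise
        return wrapper
    return decorator


alerts: list[str] = []


def alert_hook(step_name: str, exc: Exception) -> None:
    # In the real feature this could send the traceback to an alerter
    # or a chat API instead of appending to a list.
    alerts.append(f"step '{step_name}' failed: {exc}")


@with_failure_hook(alert_hook)
def train_model() -> None:
    raise ValueError("exploding gradients")


try:
    train_model()
except ValueError:
    pass  # the hook has already recorded the failure
```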
## What's Changed * Fix hook parser by @strickvl in https://github.com/zenml-io/zenml/pull/1428 * Fix some pipeline bugs by @schustmi in https://github.com/zenml-io/zenml/pull/1426 * Add image builders to Examples by @safoinme in https://github.com/zenml-io/zenml/pull/1434 * ZenML Failure Hook for OpenAI ChatGPT fixes by @strickvl in https://github.com/zenml-io/zenml/pull/1430 * Integrations with `langchain` and `llama_hub` by @fa9r in https://github.com/zenml-io/zenml/pull/1404 * Add basic tests for the server and recipes CLI by @wjayesh in https://github.com/zenml-io/zenml/pull/1306 * Add to our alembic migration guide by @strickvl in https://github.com/zenml-io/zenml/pull/1423 * Analytics 2.0 by @bcdurak in https://github.com/zenml-io/zenml/pull/1411 * Improve Slack Alerter by adding message blocks by @soubenz in https://github.com/zenml-io/zenml/pull/1402 * Add HF deployment type by @strickvl in https://github.com/zenml-io/zenml/pull/1438 ## New Contributors * @soubenz made their first contribution in https://github.com/zenml-io/zenml/pull/1402 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.36.0...0.36.1 # 0.36.0 Our latest release adds hooks to ZenML pipelines to handle custom logic that occurs on pipeline failure or success. This is a powerful feature that allows you to easily receive custom alerts, for example, when a pipeline fails or succeeds. (Check out our video showcasing the feature [here](https://www.youtube.com/watch?v=KUW2G3EsqF8).) The release is also packed with bug fixes and documentation updates. Some smaller improvements include an increase of the `step_configurations` column size in the database to accommodate really large configurations and the ability to click through to orchestrator logs for the Sagemaker orchestrator directly from the ZenML dashboard. ## Breaking Changes Secrets are now handled internally by ZenML. 
This changes some behaviors that you may have become used to with the (now-deprecated) Secrets Manager stack component. The default behavior for the KServe and Seldon Core Model Deployer if explicit credentials are not configured through the secret stack component attribute has changed. Now, the model deployer will attempt to reuse credentials configured for the Artifact Store in the same stack and may, in some cases, fail if it cannot use them. In most cases, if credentials are not configured for the active Artifact Store, the model deployer will assume some form of implicit in-cloud authentication is configured for the Kubernetes cluster where KServe / Seldon Core is installed and default to using that. ## What's Changed * Add CLI utils tests by @strickvl in https://github.com/zenml-io/zenml/pull/1383 * Don't use docker client when building images remotely by @schustmi in https://github.com/zenml-io/zenml/pull/1394 * Fix zenml-quickstart-model typo by @safoinme in https://github.com/zenml-io/zenml/pull/1397 * Ignore starting quotes from Artifact store path by @safoinme in https://github.com/zenml-io/zenml/pull/1388 * CI speed improvements by @stefannica in https://github.com/zenml-io/zenml/pull/1384 * Fix stack recipe link by @strickvl in https://github.com/zenml-io/zenml/pull/1393 * Switch FastAPI response class to orjson so `NaN` values don't break the server by @fa9r in https://github.com/zenml-io/zenml/pull/1395 * Numpy materializer metadata for arrays with strings by @safoinme in https://github.com/zenml-io/zenml/pull/1392 * Fix last remaining runs index by @stefannica in https://github.com/zenml-io/zenml/pull/1399 * Add failure (and success hooks) by @htahir1 in https://github.com/zenml-io/zenml/pull/1361 * Replace `pyspelling` with `typos` by @strickvl in https://github.com/zenml-io/zenml/pull/1400 * Fix the download nltk param for report step by @wjayesh in https://github.com/zenml-io/zenml/pull/1409 * Add `build_timeout` attribute to 
`GCPImageBuilderConfig` by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/1408
* Bump `ruff` to v0.255 by @strickvl in https://github.com/zenml-io/zenml/pull/1403
* Update title of deployment docs page by @strickvl in https://github.com/zenml-io/zenml/pull/1412
* Changed to debug log by @htahir1 in https://github.com/zenml-io/zenml/pull/1406
* Fix incorrect `--sort_by` help text by @strickvl in https://github.com/zenml-io/zenml/pull/1413
* Document CLI filtering query language by @strickvl in https://github.com/zenml-io/zenml/pull/1414
* Fix GitHub pip download cache key by @stefannica in https://github.com/zenml-io/zenml/pull/1405
* Add orchestrator logs link for Sagemaker by @strickvl in https://github.com/zenml-io/zenml/pull/1375
* Phase out secrets managers from other stack components. by @stefannica in https://github.com/zenml-io/zenml/pull/1401
* Add MLflow UI message to quickstart example and fix autolog spillage by @stefannica in https://github.com/zenml-io/zenml/pull/1421
* Add tests for the model registry by @safoinme in https://github.com/zenml-io/zenml/pull/1415
* Remove Aspell installation by @strickvl in https://github.com/zenml-io/zenml/pull/1419
* Increase `step_configurations` column size to 2^24 by @strickvl in https://github.com/zenml-io/zenml/pull/1422
* Add help text for `enable_service` option in recipe sub-command by @safoinme in https://github.com/zenml-io/zenml/pull/1424

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.35.1...0.36.0

# 0.35.1

**Note:** *This release replaces the previous 0.35.0 release that was yanked from PyPI due to a bug. If you already installed 0.35.0 and are experiencing issues, we recommend you downgrade to 0.34.0 before installing and upgrading to 0.35.1.*

This release is packed with big features as well as documentation updates and some bug fixes.

The 0.35.1 release puts models front and center in ZenML with the addition of the **Model Registry** abstraction and Stack Component.
You can now register, version and manage models as first class citizens in ZenML. This is a major milestone for ZenML and we are excited to see what you build with it!

The introduction of Model Registries greatly simplifies the journey that the model takes from training to deployment and extends the ZenML ecosystem to include model registry tools and libraries. The first Model Registry integration included in this release is MLflow, with many more to come in the future.

This release also continues the deprecation of Secrets Managers and the introduction of Secret Stores. You now have the option of configuring the ZenML server to use AWS, GCP, Azure or Hashicorp Vault directly as a centralized secrets store back-end. This is meant to replace all Secrets Manager flavors, which were previously used to store secrets using the same cloud services.

Please be reminded that all Secrets Managers are now deprecated and will be removed in the near future. We recommend that you migrate all your secrets from the Secrets Manager stack components to the centralized secrets store by means of the included `zenml secrets-manager secret migrate` CLI command.

Last but not least, this release includes an updated Evidently integration that is compatible with the latest and greatest features from Evidently: reports and test suites. Check out the updated example to get a feel for the new features.

## Breaking Changes

This release introduces a few breaking changes. Please update your code to reflect the changes below:

* the order of pipelines and runs in the post-execution results has been reversed. This means that the most recent pipeline and pipeline run can be accessed using the first index of the respective lists instead of the last index. This change was made to make the post-execution results more intuitive and to allow returning multi-page results in the future.
This is a code snippet outlining the changes that you need to make in your post-execution code:

```python
from zenml.post_execution import get_pipelines, get_unlisted_runs

pipelines = get_pipelines()

# instead of calling this to get the pipeline last created
latest_pipeline = pipelines[-1]
# you now have to call this
latest_pipeline = pipelines[0]

# and instead of calling this to get the latest run of a pipeline
latest_pipeline_run = latest_pipeline.get_runs()[-1]
# or
latest_pipeline_run = latest_pipeline.runs[-1]
# you now have to call this
latest_pipeline_run = latest_pipeline.get_runs()[0]
# or
latest_pipeline_run = latest_pipeline.runs[0]

# the same applies to the unlisted runs; instead of
last_unlisted_run = get_unlisted_runs()[-1]
# you now have to call this
last_unlisted_run = get_unlisted_runs()[0]
```

* if you were using the `StepEnvironment` to fetch the name of the active step in your step implementation, this name no longer reflects the name of the step function. Instead, it now reflects the name of the step used in the pipeline DAG, similar to what you would see in the ZenML dashboard when visualizing the pipeline. This is also implicitly reflected in the output of `zenml model-deployer model` CLI commands.
## What's Changed * Upgrade dev dependencies by @strickvl in https://github.com/zenml-io/zenml/pull/1334 * Add warning when attempting server connection without user permissions by @strickvl in https://github.com/zenml-io/zenml/pull/1314 * Keep CLI help text for `zenml pipeline` to a single line by @strickvl in https://github.com/zenml-io/zenml/pull/1338 * Rename page attributes by @schustmi in https://github.com/zenml-io/zenml/pull/1266 * Add missing docs for pipeline build by @schustmi in https://github.com/zenml-io/zenml/pull/1341 * Sagemaker orchestrator docstring and example update by @strickvl in https://github.com/zenml-io/zenml/pull/1350 * Fix `secret create` docs error for secret store by @strickvl in https://github.com/zenml-io/zenml/pull/1355 * Update README for test environment provisioning by @strickvl in https://github.com/zenml-io/zenml/pull/1336 * Disable name prefix matching when updating/deleting entities by @schustmi in https://github.com/zenml-io/zenml/pull/1345 * Add Kubeflow Pipeline UI Port to deprecated config by @safoinme in https://github.com/zenml-io/zenml/pull/1358 * Small clarifications for slack alerter by @htahir1 in https://github.com/zenml-io/zenml/pull/1365 * update Neptune integration for v1.0 compatibility by @AleksanderWWW in https://github.com/zenml-io/zenml/pull/1335 * Integrations conditional requirements by @safoinme in https://github.com/zenml-io/zenml/pull/1255 * Fix fetching versioned pipelines in post execution by @schustmi in https://github.com/zenml-io/zenml/pull/1363 * Load artifact store before loading artifact to register filesystem by @schustmi in https://github.com/zenml-io/zenml/pull/1367 * Remove poetry from CI by @schustmi in https://github.com/zenml-io/zenml/pull/1346 * Fix Sagemaker example readme by @strickvl in https://github.com/zenml-io/zenml/pull/1370 * Update evidently to include reports and tests by @wjayesh in https://github.com/zenml-io/zenml/pull/1283 * Fix neptune linting error on `develop` (and 
bump ruff) by @strickvl in https://github.com/zenml-io/zenml/pull/1372 * Add pydantic materializer by @htahir1 in https://github.com/zenml-io/zenml/pull/1371 * Registering GIFs added by @htahir1 in https://github.com/zenml-io/zenml/pull/1368 * Refresh CLI cheat sheet by @strickvl in https://github.com/zenml-io/zenml/pull/1347 * Add dependency resolution docs by @strickvl in https://github.com/zenml-io/zenml/pull/1337 * [BUGFIX] Fix error while using an existing SQL server with GCP ZenServer by @wjayesh in https://github.com/zenml-io/zenml/pull/1353 * Update step name assignment with the parameter name by @strickvl in https://github.com/zenml-io/zenml/pull/1310 * Copy huggingface data directory to local before loading in materializers by @TimovNiedek in https://github.com/zenml-io/zenml/pull/1351 * Update huggingface token classification example by @strickvl in https://github.com/zenml-io/zenml/pull/1369 * Use the most specialized materializer based on MRO by @schustmi in https://github.com/zenml-io/zenml/pull/1376 * Update Kserve to support 0.10.0 by @safoinme in https://github.com/zenml-io/zenml/pull/1373 * Add more examples to integration tests by @schustmi in https://github.com/zenml-io/zenml/pull/1245 * Fix order of runs and order of pipelines in post-execution by @stefannica in https://github.com/zenml-io/zenml/pull/1380 * Add Cloud Secrets Store back-ends by @stefannica in https://github.com/zenml-io/zenml/pull/1348 * Model Registry Stack Component + MLFlow integration by @safoinme in https://github.com/zenml-io/zenml/pull/1309 * Fix broken docs URLs and add SDK docs url by @strickvl in https://github.com/zenml-io/zenml/pull/1349 * Fix label studio `dataset delete` command by @strickvl in https://github.com/zenml-io/zenml/pull/1377 * Add missing links to Quickstart by @strickvl in https://github.com/zenml-io/zenml/pull/1379 * Fix PyPI readme logo display by @strickvl in https://github.com/zenml-io/zenml/pull/1382 * Fixed broken migration for flavors by 
@AlexejPenner in https://github.com/zenml-io/zenml/pull/1386
* Add debug mode flag for `zenml info` by @strickvl in https://github.com/zenml-io/zenml/pull/1374
* Update issue creation for bugs by @strickvl in https://github.com/zenml-io/zenml/pull/1387
* Integration sdk docs generated correctly now by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1389

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.34.0...0.35.0

# 0.35.0 (YANKED)

This release is packed with big features as well as documentation updates and some bug fixes.

The 0.35.0 release puts models front and center in ZenML with the addition of the **Model Registry** abstraction and Stack Component. You can now register, version and manage models as first class citizens in ZenML. This is a major milestone for ZenML and we are excited to see what you build with it!

The introduction of Model Registries greatly simplifies the journey that the model takes from training to deployment and extends the ZenML ecosystem to include model registry tools and libraries. The first Model Registry integration included in this release is MLflow, with many more to come in the future.

This release also continues the deprecation of Secrets Managers and the introduction of Secret Stores. You now have the option of configuring the ZenML server to use AWS, GCP, Azure or Hashicorp Vault directly as a centralized secrets store back-end. This is meant to replace all Secrets Manager flavors, which were previously used to store secrets using the same cloud services.

Please be reminded that all Secrets Managers are now deprecated and will be removed in the near future. We recommend that you migrate all your secrets from the Secrets Manager stack components to the centralized secrets store by means of the included `zenml secrets-manager secret migrate` CLI command.
Last but not least, this release includes an updated Evidently integration that is compatible with the latest and greatest features from Evidently: reports and test suites. Check out the updated example to get a feel for the new features.

## Breaking Changes

This release introduces a few breaking changes. Please update your code to reflect the changes below:

* the order of pipelines and runs in the post-execution results has been reversed. This means that the most recent pipeline and pipeline run can be accessed using the first index of the respective lists instead of the last index. This change was made to make the post-execution results more intuitive and to allow returning multi-page results in the future.

This is a code snippet outlining the changes that you need to make in your post-execution code:

```python
from zenml.post_execution import get_pipelines, get_unlisted_runs

pipelines = get_pipelines()

# instead of calling this to get the pipeline last created
latest_pipeline = pipelines[-1]
# you now have to call this
latest_pipeline = pipelines[0]

# and instead of calling this to get the latest run of a pipeline
latest_pipeline_run = latest_pipeline.get_runs()[-1]
# or
latest_pipeline_run = latest_pipeline.runs[-1]
# you now have to call this
latest_pipeline_run = latest_pipeline.get_runs()[0]
# or
latest_pipeline_run = latest_pipeline.runs[0]

# the same applies to the unlisted runs; instead of
last_unlisted_run = get_unlisted_runs()[-1]
# you now have to call this
last_unlisted_run = get_unlisted_runs()[0]
```

* if you were using the `StepEnvironment` to fetch the name of the active step in your step implementation, this name no longer reflects the name of the step function. Instead, it now reflects the name of the step used in the pipeline DAG, similar to what you would see in the ZenML dashboard when visualizing the pipeline. This is also implicitly reflected in the output of `zenml model-deployer model` CLI commands.
## What's Changed * Upgrade dev dependencies by @strickvl in https://github.com/zenml-io/zenml/pull/1334 * Add warning when attempting server connection without user permissions by @strickvl in https://github.com/zenml-io/zenml/pull/1314 * Keep CLI help text for `zenml pipeline` to a single line by @strickvl in https://github.com/zenml-io/zenml/pull/1338 * Rename page attributes by @schustmi in https://github.com/zenml-io/zenml/pull/1266 * Add missing docs for pipeline build by @schustmi in https://github.com/zenml-io/zenml/pull/1341 * Sagemaker orchestrator docstring and example update by @strickvl in https://github.com/zenml-io/zenml/pull/1350 * Fix `secret create` docs error for secret store by @strickvl in https://github.com/zenml-io/zenml/pull/1355 * Update README for test environment provisioning by @strickvl in https://github.com/zenml-io/zenml/pull/1336 * Disable name prefix matching when updating/deleting entities by @schustmi in https://github.com/zenml-io/zenml/pull/1345 * Add Kubeflow Pipeline UI Port to deprecated config by @safoinme in https://github.com/zenml-io/zenml/pull/1358 * Small clarifications for slack alerter by @htahir1 in https://github.com/zenml-io/zenml/pull/1365 * update Neptune integration for v1.0 compatibility by @AleksanderWWW in https://github.com/zenml-io/zenml/pull/1335 * Integrations conditional requirements by @safoinme in https://github.com/zenml-io/zenml/pull/1255 * Fix fetching versioned pipelines in post execution by @schustmi in https://github.com/zenml-io/zenml/pull/1363 * Load artifact store before loading artifact to register filesystem by @schustmi in https://github.com/zenml-io/zenml/pull/1367 * Remove poetry from CI by @schustmi in https://github.com/zenml-io/zenml/pull/1346 * Fix Sagemaker example readme by @strickvl in https://github.com/zenml-io/zenml/pull/1370 * Update evidently to include reports and tests by @wjayesh in https://github.com/zenml-io/zenml/pull/1283 * Fix neptune linting error on `develop` (and 
bump ruff) by @strickvl in https://github.com/zenml-io/zenml/pull/1372 * Add pydantic materializer by @htahir1 in https://github.com/zenml-io/zenml/pull/1371 * Registering GIFs added by @htahir1 in https://github.com/zenml-io/zenml/pull/1368 * Refresh CLI cheat sheet by @strickvl in https://github.com/zenml-io/zenml/pull/1347 * Add dependency resolution docs by @strickvl in https://github.com/zenml-io/zenml/pull/1337 * [BUGFIX] Fix error while using an existing SQL server with GCP ZenServer by @wjayesh in https://github.com/zenml-io/zenml/pull/1353 * Update step name assignment with the parameter name by @strickvl in https://github.com/zenml-io/zenml/pull/1310 * Copy huggingface data directory to local before loading in materializers by @TimovNiedek in https://github.com/zenml-io/zenml/pull/1351 * Update huggingface token classification example by @strickvl in https://github.com/zenml-io/zenml/pull/1369 * Use the most specialized materializer based on MRO by @schustmi in https://github.com/zenml-io/zenml/pull/1376 * Update Kserve to support 0.10.0 by @safoinme in https://github.com/zenml-io/zenml/pull/1373 * Add more examples to integration tests by @schustmi in https://github.com/zenml-io/zenml/pull/1245 * Fix order of runs and order of pipelines in post-execution by @stefannica in https://github.com/zenml-io/zenml/pull/1380 * Add Cloud Secrets Store back-ends by @stefannica in https://github.com/zenml-io/zenml/pull/1348 * Model Registry Stack Component + MLFlow integration by @safoinme in https://github.com/zenml-io/zenml/pull/1309 * Fix broken docs URLs and add SDK docs url by @strickvl in https://github.com/zenml-io/zenml/pull/1349 * Fix label studio `dataset delete` command by @strickvl in https://github.com/zenml-io/zenml/pull/1377 * Add missing links to Quickstart by @strickvl in https://github.com/zenml-io/zenml/pull/1379 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.34.0...0.35.0 # 0.34.0 This release comes with major upgrades to the 
python library as well as the dashboard:

- You can now store your secrets in a centralized way instead of having them tied to a secrets manager stack component. The secrets manager component is deprecated but will still work while we continue migrating all secrets manager flavors to be available as a backend to store centralized secrets. Check out [the docs](https://docs.zenml.io/starter-guide/production-fundamentals/secrets-management) for more information.
- Pipelines are now versioned: ZenML detects changes to your steps and the structure of your pipelines and automatically creates new pipeline versions for you.
- You can now build the required Docker images for your pipeline without actually running it with the `zenml pipeline build` command. This build can later be used to run the pipeline using the `zenml pipeline run` command or by passing it to `pipeline.run()` in Python.
- Metadata for runs and artifacts is now displayed in the dashboard: When viewing a pipeline run in the dashboard, click on a step or artifact to get useful metadata like the endpoint where your model is deployed or statistics about your training data.
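The automatic pipeline versioning described above can be pictured with a small sketch. This is not ZenML's actual mechanism — just an illustration, under invented names, of one way to detect code changes by hashing step sources and only bumping the version when the definition differs:

```python
# Conceptual sketch of automatic pipeline versioning (NOT ZenML's
# implementation; all names here are invented for the example).
import hashlib


class PipelineVersioner:
    def __init__(self) -> None:
        # pipeline name -> list of content hashes, one per version
        self._versions: dict[str, list[str]] = {}

    def register(self, name: str, step_sources: list[str]) -> int:
        """Return the version for this exact pipeline definition,
        creating a new version only if the code changed."""
        digest = hashlib.sha256("\n".join(step_sources).encode()).hexdigest()
        versions = self._versions.setdefault(name, [])
        if digest in versions:
            return versions.index(digest) + 1  # unchanged: reuse version
        versions.append(digest)  # changed (or new): create next version
        return len(versions)


versioner = PipelineVersioner()
v1 = versioner.register("training", ["def load(): ...", "def train(): ..."])
v1_again = versioner.register("training", ["def load(): ...", "def train(): ..."])
v2 = versioner.register("training", ["def load(): ...", "def train(lr): ..."])
```

Re-registering an unchanged pipeline keeps the same version, while editing a step produces a new one — which is the behavior you see in the dashboard.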
## What's Changed * Move inbuilt Flavors into the Database by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1187 * Bump `ruff` version to 241 by @strickvl in https://github.com/zenml-io/zenml/pull/1289 * Add docs for run name templates by @schustmi in https://github.com/zenml-io/zenml/pull/1290 * Remove excess help text for `zenml connect` command by @strickvl in https://github.com/zenml-io/zenml/pull/1291 * Increase default service timeout to 60 by @safoinme in https://github.com/zenml-io/zenml/pull/1294 * increase timeout on quickstart example by @safoinme in https://github.com/zenml-io/zenml/pull/1296 * Add warning about MacOS not being supported by @strickvl in https://github.com/zenml-io/zenml/pull/1303 * Always include .zen in docker builds by @schustmi in https://github.com/zenml-io/zenml/pull/1292 * Add warning and docs update for `label_studio` installation issue by @strickvl in https://github.com/zenml-io/zenml/pull/1299 * Loosen version requirements for Great Expectations integration by @strickvl in https://github.com/zenml-io/zenml/pull/1302 * Change zenml init --template to optionally prompt and track email by @stefannica in https://github.com/zenml-io/zenml/pull/1298 * Update docs for Neptune experiment tracker integration by @strickvl in https://github.com/zenml-io/zenml/pull/1307 * Fix the destroy function on the stack recipe CLI by @wjayesh in https://github.com/zenml-io/zenml/pull/1301 * Add missing flavor migrations, make workspace ID optional by @schustmi in https://github.com/zenml-io/zenml/pull/1315 * Bump ruff 246 by @strickvl in https://github.com/zenml-io/zenml/pull/1316 * Remove tag from image name in gcp image builder by @schustmi in https://github.com/zenml-io/zenml/pull/1317 * Fix docs typo by @strickvl in https://github.com/zenml-io/zenml/pull/1318 * Fix step parameter merging by @schustmi in https://github.com/zenml-io/zenml/pull/1320 * Increase timeout for mlflow deployment example by @strickvl in 
https://github.com/zenml-io/zenml/pull/1308 * Workspace/projects fix for dashboard URL output when running pipeline by @strickvl in https://github.com/zenml-io/zenml/pull/1322 * Component Metadata Tracking Docs by @fa9r in https://github.com/zenml-io/zenml/pull/1319 * Add user environment `zenml info` command to CLI for debugging by @strickvl in https://github.com/zenml-io/zenml/pull/1312 * Added caching to quickstart by @htahir1 in https://github.com/zenml-io/zenml/pull/1321 * Renovation of the zenstore tests by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1275 * Fixes GCP docs typo by @luckri13 in https://github.com/zenml-io/zenml/pull/1327 * Remove deprecated CLI options by @strickvl in https://github.com/zenml-io/zenml/pull/1325 * GCP Image Builder network by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/1323 * improved flavor docs by @htahir1 in https://github.com/zenml-io/zenml/pull/1324 * Commands to register, build and run pipelines from the CLI by @schustmi in https://github.com/zenml-io/zenml/pull/1293 * Validate kserve model name by @strickvl in https://github.com/zenml-io/zenml/pull/1304 * Fix post-execution run sorting by @schustmi in https://github.com/zenml-io/zenml/pull/1332 * Secrets store with SQL back-end by @stefannica in https://github.com/zenml-io/zenml/pull/1313 ## New Contributors * @luckri13 made their first contribution in https://github.com/zenml-io/zenml/pull/1327 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.33.0...0.34.0 # 0.33.0 This release introduces several big new features: - Docker images can now be built in GCP using the new [Google Cloud Image Builder](https://docs.zenml.io/component-gallery/image-builders/gcloud-build) integration. Special shoutout to @gabrielmbmb for this amazing contribution! - Getting started with ZenML has been made even easier. 
You can now use one of the new [ZenML Project Templates](https://github.com/zenml-io/zenml-project-templates) to initialize your ZenML repository with a basic project structure, including a functional pipeline and basic scaffolding for materializers, parameters, and other classes you might want to extend.
- Orchestrating runs on local Kubernetes has been made easier: The KubeFlow, Kubernetes, and Tekton orchestrators have been redesigned to be compatible with the [K3D modular stack recipe](https://github.com/zenml-io/mlops-stacks/tree/main/k3d-modular) that lets you spin up a local K3D Kubernetes cluster with a single line of code!
- The MLflow integration has been updated and can now be used with the new MLflow 2.x!
- You can now specify parameters and resources for your Seldon model deployers thanks to @d-lowl!

Furthermore, the internal `project` concept has been renamed to `workspace` to avoid confusion with the [zenml-projects](https://github.com/zenml-io/zenml-projects) repository. This should only be relevant to you if you have custom applications that interact with the REST API of the ZenML server directly, since all models sent from/to the server need to contain a `workspace` instead of a `project` now.
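For custom applications affected by the rename, the required change amounts to swapping the `project` key for `workspace` in request bodies sent to the server. A minimal illustrative helper (an assumption for the example, not part of ZenML) might look like this:

```python
# Illustrative migration helper for custom REST clients affected by the
# project -> workspace rename. Not shipped with ZenML; invented names.
from typing import Any


def migrate_payload(payload: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of the request body with 'project' renamed."""
    migrated = dict(payload)
    if "project" in migrated:
        migrated["workspace"] = migrated.pop("project")
    return migrated


old_body = {"name": "my_stack", "project": "default"}
new_body = migrate_payload(old_body)
```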
## What's Changed * Renaming Project to Workspace by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1254 * Integration tests for post execution functions by @fa9r in https://github.com/zenml-io/zenml/pull/1264 * Introduce `post_execution.BaseView` by @fa9r in https://github.com/zenml-io/zenml/pull/1238 * Make `/cloud` point to enterprise page by @strickvl in https://github.com/zenml-io/zenml/pull/1268 * update mlflow to version greater than 2.0 by @safoinme in https://github.com/zenml-io/zenml/pull/1249 * Store run start time by @schustmi in https://github.com/zenml-io/zenml/pull/1271 * Relax pydantic dependency by @jlopezpena in https://github.com/zenml-io/zenml/pull/1262 * Fix failing filter on stacks by component id by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1276 * Track server version by @schustmi in https://github.com/zenml-io/zenml/pull/1265 * Bump ruff, drop `autoflake`, add `darglint` back by @strickvl in https://github.com/zenml-io/zenml/pull/1279 * Fixed startswith and endswith by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1278 * Fix workspace scoping on `list_workspace_... 
endpoints` again by @fa9r in https://github.com/zenml-io/zenml/pull/1284 * Custom Metadata Tracking by @fa9r in https://github.com/zenml-io/zenml/pull/1151 * Bug: local ZenML server ignores ip-address CLI argument by @stefannica in https://github.com/zenml-io/zenml/pull/1282 * Configure the zenml-server docker image and helm chart to run as non-privileged user by @stefannica in https://github.com/zenml-io/zenml/pull/1273 * GCP Image Builder by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/1270 * Disentangle K3D code from ZenML by @safoinme in https://github.com/zenml-io/zenml/pull/1185 * Rework params / artifact docs by @strickvl in https://github.com/zenml-io/zenml/pull/1277 * Always add active user to analytics by @stefannica in https://github.com/zenml-io/zenml/pull/1286 * Fix step and pipeline run metadata in LineageGraph by @fa9r in https://github.com/zenml-io/zenml/pull/1288 * add validator to endpoint url to replace hostname with k3d or docker … by @safoinme in https://github.com/zenml-io/zenml/pull/1189 * Add option to use project templates to initialize a repository by @stefannica in https://github.com/zenml-io/zenml/pull/1287 * Add example for Hyperparameter Tuning with ZenML by @nitay93 in https://github.com/zenml-io/zenml/pull/1206 * Add seldon deployment predictor parameters and resource requirements by @d-lowl in https://github.com/zenml-io/zenml/pull/1280 ## New Contributors * @jlopezpena made their first contribution in https://github.com/zenml-io/zenml/pull/1262 * @nitay93 made their first contribution in https://github.com/zenml-io/zenml/pull/1206 * @d-lowl made their first contribution in https://github.com/zenml-io/zenml/pull/1280 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.32.1...0.33.0 # 0.32.1 This release resolves several minor bugs and inconveniences introduced during the filtering and pagination overhaul in the last release. Additionally, the release includes new integration tests to improve future stability. 
## What's Changed * Update and improve docker and helm deployment docs by @stefannica in https://github.com/zenml-io/zenml/pull/1246 * Fixed broken link returned form pipeline runs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1257 * Fix project scoping on `list_project_...` endpoints by @fa9r in https://github.com/zenml-io/zenml/pull/1256 * Orchestrator tests by @schustmi in https://github.com/zenml-io/zenml/pull/1258 * Add integration tests for lineage graph creation by @fa9r in https://github.com/zenml-io/zenml/pull/1253 * Always instantiate a zen_store before startup. by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1261 * Fix post execution run fetching by @schustmi in https://github.com/zenml-io/zenml/pull/1263 * Implemented the option to choose between ascending and descending on list calls by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1260 * Fix logger warning message by @strickvl in https://github.com/zenml-io/zenml/pull/1267 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.32.0...0.32.1 # 0.32.0 Release 0.32.0 introduces two big new features: * A new stack component, the "image builder", with a corresponding new Kaniko integration. * Logic for filtering and pagination of list requests. ## Image Builder Abstraction and Kaniko Integration ZenML stacks can now contain an image builder as additional optional stack component. The image builder defines how the Docker images are built that are required by many of the other stack components such as Airflow or Kubeflow. Previously, all image building was handled implicitly by ZenML using local Docker, which has now been refactored into the "local" image builder flavor. As an alternative, you can now install the new "kaniko" integration to build your images in Kubernetes using Kaniko. ## Filtering and Pagination All list commands in ZenML are now capable of advanced filtering such as `zenml stack list --created="gt:22-12-04 17:00:00" --name contains:def`. 
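The `<operator>:<value>` filter strings used above can be illustrated with a small parser sketch. This is purely illustrative and not ZenML's actual implementation; the operator list here is an assumption (only `gt` and `contains` appear in these release notes):

```python
# Illustrative sketch only -- not ZenML's implementation. Splits a CLI filter
# string such as "gt:22-12-04 17:00:00" or "contains:def" into an (operator,
# value) pair. The set of recognized operators is an assumption.
KNOWN_OPERATORS = ("gt", "gte", "lt", "lte", "contains", "equals")

def parse_filter(raw: str) -> tuple:
    """Return (operator, value); a bare value defaults to an equality check."""
    op, sep, value = raw.partition(":")
    if sep and op in KNOWN_OPERATORS:
        return op, value
    return "equals", raw
```

For example, `parse_filter("gt:22-12-04 17:00:00")` yields `("gt", "22-12-04 17:00:00")`, while a plain value like `parse_filter("my-stack")` is treated as an equality filter.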
Additionally, list commands now return pages of results, which significantly improves performance for ZenML power users who have already created many runs or other entities.

## What's Changed

* UserResponseModel contains roles, block recursion properly on more Models, reduce amount of Runs on a PipelineResponseModel by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1180
* Bump ruff version by @strickvl in https://github.com/zenml-io/zenml/pull/1232
* Zenfile becomes project by @strickvl in https://github.com/zenml-io/zenml/pull/1235
* Fix class resolution in notebooks under Python>=3.10 by @fa9r in https://github.com/zenml-io/zenml/pull/1234
* Fix Sagemaker README images & pipeline addition by @strickvl in https://github.com/zenml-io/zenml/pull/1239
* Step/Pipeline configuration tests by @schustmi in https://github.com/zenml-io/zenml/pull/1233
* Removed gRPC from diagrams by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1242
* Fix MLflow tracking example bug for Macs by @strickvl in https://github.com/zenml-io/zenml/pull/1237
* Fix copy function to copyfile in registered filesystem by @safoinme in https://github.com/zenml-io/zenml/pull/1243
* Image builder abstraction by @schustmi in https://github.com/zenml-io/zenml/pull/1198
* Add support for modular recipes to the recipe CLI by @wjayesh in https://github.com/zenml-io/zenml/pull/1247
* Add docs on upgrading and troubleshooting zenml server by @wjayesh in https://github.com/zenml-io/zenml/pull/1244
* Improve Seldon and Kserve Docs by @wjayesh in https://github.com/zenml-io/zenml/pull/1236
* Add Pagination to all List commands by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1113

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.31.1...0.32.0

# 0.31.1

This release includes several bug fixes and new additions under the hood, such as testing for various internal utility functions. This should help keep ZenML more stable over time.

Additionally, we added the ability to customize default materializers for custom artifact stores, and the ability to track system info and the Python version of pipeline runs (both where pipelines are initially executed as well as wherever they eventually run). We added better support for pipeline scheduling (particularly from within the CLI) and tracking of the source code of steps. The release also includes information about whether the pipeline is running on a stack created by the active user, and the ability to specify Kubernetes container resource requests and limits. Finally, we addressed issues with caching such that caching is enabled for steps that have an explicit `enable_cache=True` specified (even when pipelines have it turned off).

## What's Changed

* Test for `enum_utils` by @strickvl in https://github.com/zenml-io/zenml/pull/1209
* Add missing space in Azure docs by @strickvl in https://github.com/zenml-io/zenml/pull/1218
* Test for `dashboard_utils` by @strickvl in https://github.com/zenml-io/zenml/pull/1202
* Cloud version gets love by @htahir1 in https://github.com/zenml-io/zenml/pull/1219
* ZenFiles to ZenML Projects by @htahir1 in https://github.com/zenml-io/zenml/pull/1220
* Track System Info and Python Version of Pipeline Runs by @fa9r in https://github.com/zenml-io/zenml/pull/1215
* Tests for `pydantic_utils` by @strickvl in https://github.com/zenml-io/zenml/pull/1207
* Customizing Default Materializers for Custom Artifact Stores by @safoinme in https://github.com/zenml-io/zenml/pull/1224
* Test `typed_model` utilities by @strickvl in https://github.com/zenml-io/zenml/pull/1208
* Enable Airflow<2.4 by @schustmi in https://github.com/zenml-io/zenml/pull/1222
* Fix `alembic_start` migration if tables exist by @fa9r in https://github.com/zenml-io/zenml/pull/1214
* Tests for `network_utils` by @strickvl in https://github.com/zenml-io/zenml/pull/1201
* Tests for `io_utils` and removal of duplicate code by @strickvl in https://github.com/zenml-io/zenml/pull/1199
* Use `ruff` to replace our linting suite by @strickvl in https://github.com/zenml-io/zenml/pull/1211
* Test `materializer` utilities by @safoinme in https://github.com/zenml-io/zenml/pull/1221
* Add information whether pipeline is running on a stack created by the active user by @schustmi in https://github.com/zenml-io/zenml/pull/1229
* Test `daemon` util functions by @strickvl in https://github.com/zenml-io/zenml/pull/1210
* Test `filesync_model` utils by @strickvl in https://github.com/zenml-io/zenml/pull/1230
* Track Source Code of Steps by @fa9r in https://github.com/zenml-io/zenml/pull/1216
* Track Pipeline Run Schedules by @fa9r in https://github.com/zenml-io/zenml/pull/1227
* Tests for analytics by @bcdurak in https://github.com/zenml-io/zenml/pull/1228
* Allow specifying Kubernetes container resource requests and limits by @schustmi in https://github.com/zenml-io/zenml/pull/1223
* Enable cache for all steps that have explicit `enable_cache=True` by @fa9r in https://github.com/zenml-io/zenml/pull/1217
* Make shared stacks visible again by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1225

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.31.0...0.31.1

# 0.31.0

The highlights of this release are:

* our Materializers have been redesigned to be more flexible and easier to use
* we have added a new integration test framework
* the SageMaker orchestrator has been added to our list of supported orchestrators
* pipeline runs and artifacts can now be deleted from the ZenML database via the CLI or the Client API
* some integrations have been updated to a more recent version: Kubeflow, Seldon Core and Tekton

This release also includes a few bug fixes and other minor improvements to existing features.

## What's Changed

* Fix installation instructions in readme and docs by @schustmi in https://github.com/zenml-io/zenml/pull/1167
* Fix broken TOC for scheduling docs by @strickvl in https://github.com/zenml-io/zenml/pull/1169
* Ensure model string fields have a max length by @strickvl in https://github.com/zenml-io/zenml/pull/1136
* Integration test framework by @stefannica in https://github.com/zenml-io/zenml/pull/1099
* Check if all ZenML server dependencies are installed for local zenml deployment using `zenml up` by @dnth in https://github.com/zenml-io/zenml/pull/1144
* Persist the server ID in the database by @stefannica in https://github.com/zenml-io/zenml/pull/1173
* Tiny docs improvements by @strickvl in https://github.com/zenml-io/zenml/pull/1179
* Changing some interactions with analytics fields by @bcdurak in https://github.com/zenml-io/zenml/pull/1174
* Fix `PyTorchDataLoaderMaterializer` for older torch versions by @fa9r in https://github.com/zenml-io/zenml/pull/1178
* Redesign Materializers by @fa9r in https://github.com/zenml-io/zenml/pull/1154
* Fixing the error messages when fetching entities by @bcdurak in https://github.com/zenml-io/zenml/pull/1171
* Moved the active_user property onto the client, implemented get_myself as zenstore method by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1161
* Bugfix/bump evidently version by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1183
* Alembic migration to update size of flavor config schema by @fa9r in https://github.com/zenml-io/zenml/pull/1181
* Deleting pipeline runs and artifacts by @fa9r in https://github.com/zenml-io/zenml/pull/1164
* Signer email checked before setting in google cloud scheduler by @htahir1 in https://github.com/zenml-io/zenml/pull/1184
* Fix zenml helm chart to not leak analytics events by @stefannica in https://github.com/zenml-io/zenml/pull/1190
* Tests for `dict_utils` by @strickvl in https://github.com/zenml-io/zenml/pull/1196
* Adding exception tracking to `zeml init` by @bcdurak in https://github.com/zenml-io/zenml/pull/1192
* Prevent crashes during Airflow server forking on MacOS by @schustmi in https://github.com/zenml-io/zenml/pull/1186
* add alpha as server deployment type by @wjayesh in https://github.com/zenml-io/zenml/pull/1197
* Bugfix for custom flavor registration by @bcdurak in https://github.com/zenml-io/zenml/pull/1195
* Tests for `uuid_utils` by @strickvl in https://github.com/zenml-io/zenml/pull/1200
* Sagemaker orchestrator integration by @strickvl in https://github.com/zenml-io/zenml/pull/1177
* Fix Pandas Materializer Index by @safoinme in https://github.com/zenml-io/zenml/pull/1193
* Add support for deploying custom stack recipes using the ZenML CLI by @wjayesh in https://github.com/zenml-io/zenml/pull/1188
* Add cloud CI environments by @stefannica in https://github.com/zenml-io/zenml/pull/1176
* Fix project scoping for artifact list through ZenServer by @fa9r in https://github.com/zenml-io/zenml/pull/1203

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.30.0...0.31.0

# 0.30.0

In this release, ZenML finally adds Mac M1 support, Python 3.10 support, and much greater flexibility and configurability under the hood by deprecating some large dependencies like `ml-pipelines-sdk`.

## Scheduling

Based on some community feedback around scheduling, this release comes with improved docs concerning scheduling in general. Additionally, the Vertex AI orchestrator now also supports scheduling.

## Slimmer Dependencies

By removing dependencies on some of the packages that ZenML was built on, this version of ZenML is slimmer, faster and more configurable than ever. This also finally makes ZenML run natively on Macs with M1 processors without the need for Rosetta, and enables ZenML to run on Python 3.10.

## Breaking Changes

* The removal of `ml-pipelines-sdk` and `tfx` leads to some larger changes in the database that tracks your pipeline runs and artifacts. **Note**: An automatic migration handles this upgrade for you. However, please note that downgrading back to 0.23.0 is not supported.
* The CLI commands to export and import pipeline runs have been deprecated, namely `zenml pipeline runs export` and `zenml pipeline runs import`. These commands were meant for migrating from `zenml<0.20.0` to `0.20.0<=zenml<0.30.0`.
* The `azure-ml` integration dependency on `azureml-core` has been upgraded from `1.42` to `1.48`.

## What's Changed

* Remove stack extra from installation, enable re-running the quickstart by @schustmi in https://github.com/zenml-io/zenml/pull/1133
* Secrets manager support to experiment trackers docs by @safoinme in https://github.com/zenml-io/zenml/pull/1137
* Updating the README files of our examples by @bcdurak in https://github.com/zenml-io/zenml/pull/1128
* Prevent running with local ZenStore and remote code execution by @schustmi in https://github.com/zenml-io/zenml/pull/1134
* Remove `ml-pipelines-sdk` dependency by @schustmi in https://github.com/zenml-io/zenml/pull/1103
* Fix Huggingface dataset materializer by @safoinme in https://github.com/zenml-io/zenml/pull/1142
* Disallow alembic downgrades for 0.30.0 release by @fa9r in https://github.com/zenml-io/zenml/pull/1140
* Fix Client flavor-related methods by @schustmi in https://github.com/zenml-io/zenml/pull/1153
* Replace User Password with Token in docker images by @safoinme in https://github.com/zenml-io/zenml/pull/1147
* Remove zenml pipeline runs export / import CLI commands by @fa9r in https://github.com/zenml-io/zenml/pull/1150
* Context manager to track events by @bcdurak in https://github.com/zenml-io/zenml/pull/1149
* Made explicit `is not None` calls to allow for empty pwd again by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1159
* Add Neptune exp tracker into flavors table by @dnth in https://github.com/zenml-io/zenml/pull/1156
* Fix step operators by @schustmi in https://github.com/zenml-io/zenml/pull/1155
* Display correct name when updating a stack component by @schustmi in https://github.com/zenml-io/zenml/pull/1160
* Update mysql database creation by @schustmi in https://github.com/zenml-io/zenml/pull/1152
* Adding component conditions to experiment tracker examples and adding to the environmental variable docs by @bcdurak in https://github.com/zenml-io/zenml/pull/1162
* Increase dependency range for protobuf by @schustmi in https://github.com/zenml-io/zenml/pull/1163
* Scheduling documentation by @strickvl in https://github.com/zenml-io/zenml/pull/1158
* Adding scheduling for Vertex Pipelines by @htahir1 in https://github.com/zenml-io/zenml/pull/1148
* Fix alembic migration for sqlite<3.25 by @fa9r in https://github.com/zenml-io/zenml/pull/1165
* Fix pandas Series materializer by @jordandelbar in https://github.com/zenml-io/zenml/pull/1146

## New Contributors

* @jordandelbar made their first contribution in https://github.com/zenml-io/zenml/pull/1146

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.23.0...0.30.0

# 0.23.0

This release comes with a brand-new Neptune integration to track your ML experiments, as well as lots of performance improvements!

## Neptune integration

The new [Neptune integration](https://github.com/zenml-io/zenml/tree/main/examples/neptune_tracking) includes a Neptune experiment tracker component that allows you to track your machine learning experiments using Neptune.

## Performance Optimization

The 0.20.0 release introduced our new server but brought with it a few performance and scalability issues. Since then, we've made many improvements to it, and this release is the final and biggest boost in performance. We reduced the number of server calls needed for almost all CLI commands and greatly improved the speed of the dashboard as well.

## PyArrow dependency removal

We've removed PyArrow as a dependency of the `zenml` Python package. As a consequence, our NumPy and Pandas materializers no longer read and write their artifacts using PyArrow but use native formats instead. If you still want to use PyArrow to serialize your NumPy arrays and Pandas dataframes, you'll need to install it manually: `pip install pyarrow`

In future releases we'll get rid of other unnecessary dependencies to further slim down the `zenml` package.

## Breaking Changes

The following changes introduced with this release may require some manual intervention to update your current installations:

- If your code calls some methods of our `Client` class, it might need to be updated to the new model classes introduced by the performance optimization changes explained above.
- The CLI command to remove an attribute from a stack component no longer takes dashes in front of the attribute names: `zenml stack-component remove-attribute <COMPONENT_NAME> <ATTRIBUTE_NAME>`
- If you're using a custom stack component and have overridden the `cleanup_step_run` method, you'll need to update the method signature to include a `step_failed` parameter.
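For the last point, here is a minimal sketch of what such a signature update might look like. The class name and method body are hypothetical; only the `cleanup_step_run` method name and the new `step_failed` parameter come from the release notes above:

```python
# Hypothetical custom component -- only the method name and the new
# `step_failed` parameter are taken from the release notes; everything
# else is illustrative.
class MyCustomStepOperator:
    def cleanup_step_run(self, info: str, step_failed: bool) -> None:
        # New in 0.23.0: `step_failed` reports whether the step being
        # cleaned up after ended in failure.
        status = "failed" if step_failed else "successful"
        print(f"Cleaning up after {status} step: {info}")
```

Custom components overriding this method without the extra parameter would need to add it to stay compatible.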
## What's Changed * Docs regarding roles and permissions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1081 * Add global config dir to `zenml status` by @schustmi in https://github.com/zenml-io/zenml/pull/1084 * Remove source pins and ignore source pins during step spec comparisons by @schustmi in https://github.com/zenml-io/zenml/pull/1083 * Docs/links for roles permissions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1091 * Bugfix/eng 1485 fix api docs build by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1089 * fix bento builder step parameters to match bentoml by @safoinme in https://github.com/zenml-io/zenml/pull/1096 * Add bentoctl to BentoML docs and example by @safoinme in https://github.com/zenml-io/zenml/pull/1094 * Fix BaseParameters sample code in docs by @jcarlosgarcia in https://github.com/zenml-io/zenml/pull/1098 * zenml <stack-component> logs defaults to active stack without name_or_id by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1101 * Fixed evidently docs by @htahir1 in https://github.com/zenml-io/zenml/pull/1111 * Update sagemaker default instance type by @schustmi in https://github.com/zenml-io/zenml/pull/1112 * The ultimate optimization for performance by @bcdurak in https://github.com/zenml-io/zenml/pull/1077 * Update stack exporting and importing by @schustmi in https://github.com/zenml-io/zenml/pull/1114 * Fix readme by @schustmi in https://github.com/zenml-io/zenml/pull/1116 * Remove Pyarrow dependency by @safoinme in https://github.com/zenml-io/zenml/pull/1109 * Bugfix for listing the runs filtered by a name by @bcdurak in https://github.com/zenml-io/zenml/pull/1118 * Neptune.ai integration by @AleksanderWWW in https://github.com/zenml-io/zenml/pull/1082 * Add YouTube video explaining Stack Components Settings vs Config by @dnth in https://github.com/zenml-io/zenml/pull/1120 * Add failed Status to component when step fails by @safoinme in https://github.com/zenml-io/zenml/pull/1115 * Add 
architecture diagrams to docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1119 * Remove local orchestrator restriction from step operator docs by @schustmi in https://github.com/zenml-io/zenml/pull/1122 * Validate Stack Before Provision by @safoinme in https://github.com/zenml-io/zenml/pull/1110 * Bugfix/fix endpoints for dashboard development by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1125 * Skip kubeflow UI daemon provisioning if a hostname is configured by @schustmi in https://github.com/zenml-io/zenml/pull/1126 * Update Neptune Example by @safoinme in https://github.com/zenml-io/zenml/pull/1124 * Add debugging guide to docs by @dnth in https://github.com/zenml-io/zenml/pull/1097 * Fix stack component attribute removal CLI command by @schustmi in https://github.com/zenml-io/zenml/pull/1127 * Improving error messages when fetching entities by @bcdurak in https://github.com/zenml-io/zenml/pull/1117 * Introduce username and password to kubeflow for more native multi-tenant support by @htahir1 in https://github.com/zenml-io/zenml/pull/1123 * Add support for Label Studio OCR config generation by @shivalikasingh95 in https://github.com/zenml-io/zenml/pull/1062 * Misc doc updates by @schustmi in https://github.com/zenml-io/zenml/pull/1131 * Fix Neptune run cleanup by @safoinme in https://github.com/zenml-io/zenml/pull/1130 ## New Contributors * @jcarlosgarcia made their first contribution in https://github.com/zenml-io/zenml/pull/1098 * @AleksanderWWW made their first contribution in https://github.com/zenml-io/zenml/pull/1082 * @shivalikasingh95 made their first contribution in https://github.com/zenml-io/zenml/pull/1062 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.22.0...0.23.0 # 0.22.0 The 0.22.0 release comes with a new BentoML integration as well as a reworked Airflow orchestrator. Additionally, it greatly improves the server performance as well as other small fixes and updates to our docs! 
## BentoML integration The new [BentoML integration](https://github.com/zenml-io/zenml/tree/main/examples/bentoml_deployment) includes a BentoML model deployer component that allows you to deploy your models from any of the major machine learning frameworks on your local machine. ## Airflow orchestrator v2 The previous Airflow orchestrator was limited to running locally and had many additional unpleasant constraints that made it hard to work with. This release includes a completely rewritten, new version of the Airflow orchestrator that now relies on Docker images to run your pipelines and works both locally and with remote Airflow deployments. ## Notable bugfixes - Further improvements to the synchronization that transfers pipeline run information from the MLMD database to the ZenML Server. - The ZenML Label Studio integration can now be used with non-local (i.e. deployed) instances. For more information see [the Label Studiodocs](https://docs.zenml.io/component-gallery/annotators/label-studio). - The Spark example is fixed and now works again end-to-end. ## Breaking Changes The following changes introduces with this release may require some manual intervention to update your current installations: * the Airflow orchestrator now requires a newer version of Airflow (run `zenml integration install airflow` to upgrade) and Docker installed to work. ## What's Changed * Fix bug when running non-local annotator instance. 
by @sheikhomar in https://github.com/zenml-io/zenml/pull/1045 * Introduce Permissions, Link Permissions to Roles, Restrict Access to endpoints based on Permission by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1007 * Fix copy-pasted log message for annotator by @strickvl in https://github.com/zenml-io/zenml/pull/1049 * Add warning message for client server version mismatch by @schustmi in https://github.com/zenml-io/zenml/pull/1047 * Fix path to ingress values in ZenServer recipes by @wjayesh in https://github.com/zenml-io/zenml/pull/1053 * Prevent deletion/update of default entities by @stefannica in https://github.com/zenml-io/zenml/pull/1046 * Fix Publish API docs workflow by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1054 * Fix multiple alembic heads warning by @fa9r in https://github.com/zenml-io/zenml/pull/1051 * Fix Null Step Configuration/Parameters Error by @fa9r in https://github.com/zenml-io/zenml/pull/1050 * Fix role permission migration by @schustmi in https://github.com/zenml-io/zenml/pull/1056 * Made role assignment/revokation possible through zen_server by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1059 * Bugfix/make role assignment work with enum by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1063 * Manually set scoped for each endpoint by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1064 * Add run args to local docker orchestrator settings by @schustmi in https://github.com/zenml-io/zenml/pull/1060 * Docker ZenML deployment improvements and docs by @stefannica in https://github.com/zenml-io/zenml/pull/1061 * Bugfix Mlflow service cleanup configuration by @safoinme in https://github.com/zenml-io/zenml/pull/1067 * Rename DB Tables and Fix Foreign Keys by @fa9r in https://github.com/zenml-io/zenml/pull/1058 * Paginate secrets in `AWSSecretsManager` by @chiragjn in https://github.com/zenml-io/zenml/pull/1057 * Add explicit dashboard docs by @strickvl in 
https://github.com/zenml-io/zenml/pull/1052 * Added GA and Gitlab to envs by @htahir1 in https://github.com/zenml-io/zenml/pull/1068 * Add Inference Server Predictor to KServe and Seldon Docs by @safoinme in https://github.com/zenml-io/zenml/pull/1048 * Rename project table to workspace by @fa9r in https://github.com/zenml-io/zenml/pull/1073 * Airflow orchestrator v2 by @schustmi in https://github.com/zenml-io/zenml/pull/1042 * Add get_or_create_run() ZenStore method by @fa9r in https://github.com/zenml-io/zenml/pull/1070 * Fix the flaky fileio tests by @schustmi in https://github.com/zenml-io/zenml/pull/1072 * BentoML Deployer Integration by @safoinme in https://github.com/zenml-io/zenml/pull/1044 * Sync Speedup by @fa9r in https://github.com/zenml-io/zenml/pull/1055 * Fixed broken links in docs and examples. by @dnth in https://github.com/zenml-io/zenml/pull/1076 * Make additional stack component config options available as a setting by @schustmi in https://github.com/zenml-io/zenml/pull/1069 * Rename `step_run_artifact` table to `step_run_input_artifact` by @fa9r in https://github.com/zenml-io/zenml/pull/1075 * Update Spark Example to ZenML post 0.20.0 by @safoinme in https://github.com/zenml-io/zenml/pull/1071 * Always set caching to false for all Kubeflow based orchestrators by @schustmi in https://github.com/zenml-io/zenml/pull/1079 * Feature/eng 1402 consolidate stack sharing by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1036 ## New Contributors * @sheikhomar made their first contribution in https://github.com/zenml-io/zenml/pull/1045 * @chiragjn made their first contribution in https://github.com/zenml-io/zenml/pull/1057 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.21.1...0.22.0 # 0.21.1 This is an ad-hoc release to fix some bugs introduced the 0.21.0 release that made the local ZenML dashboard unusable. 
## What's Changed * Include latest (not oldest) three runs in HydratedPipelineModel by @schustmi in https://github.com/zenml-io/zenml/pull/1039 * Update docs to use `pip install [server]` by @strickvl in https://github.com/zenml-io/zenml/pull/1037 * Docs fix for Deepchecks by @strickvl in https://github.com/zenml-io/zenml/pull/1040 * Fix the pipeline run sync on sqlite and the --blocking zenml server deployment by @stefannica in https://github.com/zenml-io/zenml/pull/1041 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.21.0...0.21.1 # 0.21.0 This release primarily fixes a number of bugs that were introduced as part of the 0.20.0 ZenServer release. These significantly improve the stability when using ZenML with the ZenML Server. Notable fixes include: - Improved the synchronization that transfers pipeline run information from the MLMD database to the ZenML Server. This helps fix a number of issues with missing steps in the post-execution workflow, model deployment steps and other issues. - The Label Studio example is fixed and now works again end-to-end. - The ZenML Label Studio integration can now be used with non-local (i.e. deployed) instances. For more information see [the Label Studiodocs](https://docs.zenml.io/component-gallery/annotators/label-studio). New features and other improvements: - ZenML now uses [alembic](https://alembic.sqlalchemy.org/en/latest/) for automated database migrations. The migrations happen automatically after every ZenML update. - New `zenml pipeline runs export / import / migrate` CLI commands are now available to export, import and migrate pipeline runs from older, pre-0.20.0 versions of ZenML. The ZenML server now also automatically picks up older pipeline runs that have been logged in the metadata store by ZenML prior to 0.20.0. - An MLMD gRPC service can now be deployed with the ZenML Helm chart to act as a proxy between clients, orchestrators and the MySQL database. 
This significantly reduces the time it takes to run pipelines locally. - You can now specify affinity and tolerations and node selectors to all Kubernetes based orchestrators with the new Kubernetes Pod settings feature. ## Breaking Changes The following changes introduces with this release may require some manual intervention to update your current installations: * the zenml server helm chart `values.yaml` file has been restructured to make it easier to configure and to clearly distinguish between the zenml server component and the newly introduced gRPC service component. Please update your `values.yaml` copies accordingly. * the Azure integration dependency versions have been updated. Please run `zenml integration install azure` to update your current installation, if you're using Azure. ## What's Changed * Implement automatic alembic migration by @AlexejPenner in https://github.com/zenml-io/zenml/pull/990 * Fix GCP Artifact Store listdir empty path by @safoinme in https://github.com/zenml-io/zenml/pull/998 * Add flavors mini-video to docs by @strickvl in https://github.com/zenml-io/zenml/pull/999 * Remove the Client() warning when used inside a step by @stefannica in https://github.com/zenml-io/zenml/pull/1000 * Fix broken links caused by updated by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1002 * Fix `FileNotFoundError` with remote path in HuggingFace Dataset materializer by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/995 * Add `zenml pipeline runs export / import / migrate` CLI commands by @fa9r in https://github.com/zenml-io/zenml/pull/977 * Log message when activating a stack as part of registration by @schustmi in https://github.com/zenml-io/zenml/pull/1005 * Minor fixes in Migration to 0.20.0 documentation by @alvarobartt in https://github.com/zenml-io/zenml/pull/1009 * Doc updates by @htahir1 in https://github.com/zenml-io/zenml/pull/1006 * Fixing broken links in docs by @dnth in https://github.com/zenml-io/zenml/pull/1018 * Label 
Studio example fix by @strickvl in https://github.com/zenml-io/zenml/pull/1021 * Docs for using CUDA-enabled docker images by @strickvl in https://github.com/zenml-io/zenml/pull/1010 * Add social media heading on docs page by @dnth in https://github.com/zenml-io/zenml/pull/1020 * Add executing custom command for getting requirements by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/1012 * Delay user instruction in dockerfile generation by @schustmi in https://github.com/zenml-io/zenml/pull/1004 * Update link checker configs for faster, more accurate checks by @dnth in https://github.com/zenml-io/zenml/pull/1022 * Add `pip install zenml[server]` to relevant examples by @dnth in https://github.com/zenml-io/zenml/pull/1027 * Add Tolerations and NodeAffinity to Kubernetes executor by @wefner in https://github.com/zenml-io/zenml/pull/994 * Support pydantic subclasses in BaseParameter attributes by @schustmi in https://github.com/zenml-io/zenml/pull/1023 * Unify run names across orchestrators by @schustmi in https://github.com/zenml-io/zenml/pull/1025 * Add gRPC metadata service to the ZenML helm chart by @stefannica in https://github.com/zenml-io/zenml/pull/1026 * Make the MLMD pipeline run information transfer synchronous by @stefannica in https://github.com/zenml-io/zenml/pull/1032 * Add console spinner back by @strickvl in https://github.com/zenml-io/zenml/pull/1034 * Fix Azure CLI auth problem by @wjayesh in https://github.com/zenml-io/zenml/pull/1035 * Allow non-local Label Studio instances for annotation by @strickvl in https://github.com/zenml-io/zenml/pull/1033 * Before deleting the global zen_server files, spin it down by @AlexejPenner in https://github.com/zenml-io/zenml/pull/1029 * Adding zenserver integration to stack recipe CLI by @wjayesh in https://github.com/zenml-io/zenml/pull/1017 * Add support for Azure ZenServer by @wjayesh in https://github.com/zenml-io/zenml/pull/1024 * Kubernetes Pod settings by @schustmi in 
https://github.com/zenml-io/zenml/pull/1008

## New Contributors

* @alvarobartt made their first contribution in https://github.com/zenml-io/zenml/pull/1009
* @wefner made their first contribution in https://github.com/zenml-io/zenml/pull/994

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.20.5...0.21.0

# 0.20.5

ZenML 0.20.5 fixes another series of minor bugs, significantly improves the performance of the CLI, and adds an option to specify APT packages in Docker images.

## What's Changed

* Fix accessing local zen store and artifact store in containers by @stefannica in https://github.com/zenml-io/zenml/pull/976
* K3d local registry pod spec updated by @wjayesh in https://github.com/zenml-io/zenml/pull/972
* Update readme page by @dnth in https://github.com/zenml-io/zenml/pull/985
* Remove beam dependency by @schustmi in https://github.com/zenml-io/zenml/pull/986
* Fix error message when registering secret without secrets manager by @schustmi in https://github.com/zenml-io/zenml/pull/981
* Update cheat sheet up to `zenml==0.20.4` by @dnth in https://github.com/zenml-io/zenml/pull/987
* Example fixes (part 2) by @strickvl in https://github.com/zenml-io/zenml/pull/971
* Allow duplicate step classes inside a pipeline by @schustmi in https://github.com/zenml-io/zenml/pull/989
* Include deployment in azureml docker build by @schustmi in https://github.com/zenml-io/zenml/pull/984
* Automatically open browser upon `zenml up` command by @dnth in https://github.com/zenml-io/zenml/pull/978
* Add a `just_mine` flag for `zenml stack list` by @strickvl in https://github.com/zenml-io/zenml/pull/979
* Add option to specify apt packages by @schustmi in https://github.com/zenml-io/zenml/pull/982
* Replace old flavor references, fix the windows local ZenML server and other fixes by @stefannica in https://github.com/zenml-io/zenml/pull/988
* Improve docker and k8s detection by @schustmi in https://github.com/zenml-io/zenml/pull/991
* Update GH actions example by
@schustmi in https://github.com/zenml-io/zenml/pull/993 * Update `MissingStepParameterError` exception message by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/996 * Separated code docs into `core` and `integration` docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/983 * Add docs/mkdocstrings_helper.py to format script sources by @fa9r in https://github.com/zenml-io/zenml/pull/997 * Further CLI optimization by @bcdurak in https://github.com/zenml-io/zenml/pull/992 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.20.4...0.20.5 # 0.20.4 This release fixes another series of minor bugs that were introduced in 0.20.0. ## What's Changed * Detect failed executions by @schustmi in https://github.com/zenml-io/zenml/pull/964 * Only build docker images for custom deployments by @schustmi in https://github.com/zenml-io/zenml/pull/960 * M1 Mac Installation Tutorial by @fa9r in https://github.com/zenml-io/zenml/pull/966 * Update ZenBytes links in docs by @fa9r in https://github.com/zenml-io/zenml/pull/968 * Fix the API docs builder by @stefannica in https://github.com/zenml-io/zenml/pull/967 * Fix `gpu_limit` condition in `VertexOrchestrator` by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/963 * Add simple node affinity configurations by @schustmi in https://github.com/zenml-io/zenml/pull/973 * First iteration of the CLI optimization by @bcdurak in https://github.com/zenml-io/zenml/pull/962 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.20.3...0.20.4 # 0.20.3 This release fixes another series of minor bugs that were introduced in 0.20.0. ## What's Changed * Fixed GitHub/Colab JSON formatting error on quickstart. 
by @fa9r in https://github.com/zenml-io/zenml/pull/947
* Update YAML config template by @htahir1 in https://github.com/zenml-io/zenml/pull/952
* correct code from merge and fix import by @wjayesh in https://github.com/zenml-io/zenml/pull/950
* Check for active component using id instead of name by @schustmi in https://github.com/zenml-io/zenml/pull/956
* Tekton fix by @htahir1 in https://github.com/zenml-io/zenml/pull/955
* Improve zenml up/down UX and other fixes by @stefannica in https://github.com/zenml-io/zenml/pull/957
* Update kubeflow docs for multi-tenant deployments by @htahir1 in https://github.com/zenml-io/zenml/pull/958
* Update kubeflow.md by @abohmeed in https://github.com/zenml-io/zenml/pull/959
* Add additional stack validation for step operators by @schustmi in https://github.com/zenml-io/zenml/pull/954
* Fix pipeline run dashboard URL for unlisted runs by @fa9r in https://github.com/zenml-io/zenml/pull/951
* Support subclasses of registered types in recursive materialization by @fa9r in https://github.com/zenml-io/zenml/pull/953

## New Contributors

* @abohmeed made their first contribution in https://github.com/zenml-io/zenml/pull/959

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.20.2...0.20.3

# 0.20.2

After a successful release of the new ZenML server and dashboard paradigm, we set about ironing out some bugs that slipped through.

## What's Changed

* Capitalize all docs page titles. by @fa9r in https://github.com/zenml-io/zenml/pull/937
* Increase field sizes for docstrings and step parameters. by @fa9r in https://github.com/zenml-io/zenml/pull/940
* Fixing the bug in the registration of custom flavors by @bcdurak in https://github.com/zenml-io/zenml/pull/938
* Implemented `docstring` Attribute of StepModel by @fa9r in https://github.com/zenml-io/zenml/pull/936
* Fix shared stack emoji by @strickvl in https://github.com/zenml-io/zenml/pull/941
* Fix shared stacks not being allowed to be set as active.
by @fa9r in https://github.com/zenml-io/zenml/pull/943
* Typo fix by @strickvl in https://github.com/zenml-io/zenml/pull/944
* Update Kubernetes Orchestrator Example by @fa9r in https://github.com/zenml-io/zenml/pull/942
* Add code and instructions to run quickstart on Colab. by @fa9r in https://github.com/zenml-io/zenml/pull/939
* Fixing the interaction in getting stacks/components by @bcdurak in https://github.com/zenml-io/zenml/pull/945
* Fix Kubeflow run name by @safoinme in https://github.com/zenml-io/zenml/pull/946
* `VertexOrchestrator` apply node selector constraint if `gpu_limit > 0` by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/935

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.20.1...0.20.2

# 0.20.0 / 0.20.1

The ZenML 0.20.0 release brings a number of big changes to its architecture and a lot of cool new features, some of which are not backwards compatible with previous versions. These changes are only covered briefly in the release notes. For a detailed view on what happened and how you can get the most out of the 0.20.0 release, please head over to [our "ZenML 0.20.0: Our Biggest Release Yet" blog post](https://blog.zenml.io/zenml-revamped).

## Warning: Breaking Changes

Updating to ZenML 0.20.0 needs to be followed by a migration of your existing ZenML Stacks and you may also need to make changes to your current ZenML pipeline code. Please read [the migration guide](https://docs.zenml.io/guidelines/migration-zero-twenty) carefully and follow the instructions to ensure a smooth transition. The guide walks you through these changes and offers instructions on how to migrate your existing ZenML stacks and pipelines to the new version with minimal effort and disruption to your existing workloads.
If you have updated to ZenML 0.20.0 by mistake or are experiencing issues with the new version, you can always go back to the previous version by using `pip install zenml==0.13.2` instead of `pip install zenml` when installing ZenML manually or in your scripts. ## Overview of Changes * [ZenML takes over the Metadata Store](https://docs.zenml.io/guidelines/migration-zero-twenty#zenml-takes-over-the-metadata-store-role) role. All information about your ZenML Stacks, pipelines, and artifacts is now tracked by ZenML itself directly. If you are currently using remote Metadata Stores (e.g. deployed in cloud) in your stacks, you will probably need to replace them with [ZenML cloud deployments](https://docs.zenml.io/guidelines/migration-zero-twenty/getting-started/deploying-zenml/deploying-zenml.md). * the [new ZenML Dashboard](https://docs.zenml.io/guidelines/migration-zero-twenty#the-zenml-dashboard-is-now-available) is now available with all ZenML deployments. * [ZenML Profiles have been removed](https://docs.zenml.io/guidelines/migration-zero-twenty#removal-of-profiles-and-the-local-yaml-database) in favor of ZenML Projects. You need to [manually migrate your existing ZenML Profiles](https://docs.zenml.io/guidelines/migration-zero-twenty#how-to-migrate-your-profiles) after the update. * the [configuration of Stack Components is now decoupled from their implementation](https://docs.zenml.io/guidelines/migration-zero-twenty#decoupling-stack-component-configuration-from-implementation). If you extended ZenML with custom stack component implementations, you may need to update the way they are registered in ZenML. * the updated ZenML server provides a new and improved collaborative experience. When connected to a ZenML server, you can now [share your ZenML Stacks and Stack Components](https://docs.zenml.io/guidelines/migration-zero-twenty#shared-zenml-stacks-and-stack-components) with other users. 
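The decoupling of stack component configuration from implementation mentioned above follows a common pattern: configuration becomes plain, serializable data that can be stored and validated without loading the implementation code. Here is a minimal, purely illustrative sketch of that pattern — the class and field names below are assumptions for illustration, not ZenML's actual API:

```python
from dataclasses import dataclass


# Hypothetical names for illustration only -- not ZenML's actual classes.
@dataclass(frozen=True)
class OrchestratorConfig:
    """Pure data: can be stored, shared, and validated on a server
    without importing the (potentially heavy) implementation below."""

    kubernetes_context: str
    synchronous: bool = True


class KubernetesOrchestrator:
    """The implementation lives separately and only receives the config."""

    def __init__(self, config: OrchestratorConfig) -> None:
        self.config = config

    def describe(self) -> str:
        return f"orchestrator on context {self.config.kubernetes_context!r}"


config = OrchestratorConfig(kubernetes_context="minikube")
orchestrator = KubernetesOrchestrator(config)
print(orchestrator.describe())
```

Keeping the two apart is what allows a server to register and share component configurations with users who don't have the integration's dependencies installed locally.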
If you were previously using the ZenML Profiles or the ZenML server to share your ZenML Stacks, you should switch to the new ZenML server and Dashboard and update your existing workflows to reflect the new features. ## What's Changed * Fix error in checking Great Expectations results when exit_on_error=True by @TimovNiedek in https://github.com/zenml-io/zenml/pull/889 * feat(user-dockerfile): Add user argument to DockerConfiguration by @cjidboon94 in https://github.com/zenml-io/zenml/pull/892 * Minor doc updates for backporting by @htahir1 in https://github.com/zenml-io/zenml/pull/894 * Removed feature request and replaced with hellonext board by @htahir1 in https://github.com/zenml-io/zenml/pull/897 * Unit tests for (some) integrations by @strickvl in https://github.com/zenml-io/zenml/pull/880 * Fixed integration installation command by @edshee in https://github.com/zenml-io/zenml/pull/900 * Pipeline configuration and intermediate representation by @schustmi in https://github.com/zenml-io/zenml/pull/898 * [Bugfix] Fix bug in auto-import of stack after recipe deploy by @wjayesh in https://github.com/zenml-io/zenml/pull/901 * Update TOC on CONTRIBUTING.md by @strickvl in https://github.com/zenml-io/zenml/pull/907 * ZenServer by @fa9r in https://github.com/zenml-io/zenml/pull/879 * Update `kserve` README by @strickvl in https://github.com/zenml-io/zenml/pull/912 * Confirmation prompts were not working by @htahir1 in https://github.com/zenml-io/zenml/pull/917 * Stacks can be registered in `Click<8.0.0` now by @AlexejPenner in https://github.com/zenml-io/zenml/pull/920 * Made Pipeline and Stack optional on the HydratedPipelineRunModel by @AlexejPenner in https://github.com/zenml-io/zenml/pull/919 * Renamed all references from ZenServer to ZenML Server in logs and comments by @htahir1 in https://github.com/zenml-io/zenml/pull/915 * Prettify pipeline runs list CLI output. 
by @fa9r in https://github.com/zenml-io/zenml/pull/921 * Warn when registering non-local component with local ZenServer by @strickvl in https://github.com/zenml-io/zenml/pull/904 * Fix duplicate results in pipeline run lists and unlisted flag. by @fa9r in https://github.com/zenml-io/zenml/pull/922 * Fix error log by @htahir1 in https://github.com/zenml-io/zenml/pull/916 * Update cli docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/913 * Fix Pipeline Run Status by @fa9r in https://github.com/zenml-io/zenml/pull/923 * Change the CLI emoji for whether a stack is shared or not. by @fa9r in https://github.com/zenml-io/zenml/pull/926 * Fix running pipelines from different locations. by @fa9r in https://github.com/zenml-io/zenml/pull/925 * Fix zenml stack-component describe CLI command. by @fa9r in https://github.com/zenml-io/zenml/pull/929 * Update custom deployment to use ArtifactModel by @safoinme in https://github.com/zenml-io/zenml/pull/928 * Fix the CI unit test and integration test failures by @stefannica in https://github.com/zenml-io/zenml/pull/924 * Add gcp zenserver recipe by @wjayesh in https://github.com/zenml-io/zenml/pull/930 * Extend Post Execution Class Properties by @fa9r in https://github.com/zenml-io/zenml/pull/931 * Fixes for examples by @strickvl in https://github.com/zenml-io/zenml/pull/918 * Update cheat sheet by @dnth in https://github.com/zenml-io/zenml/pull/932 * Fix the docstring attribute of pipeline models. 
by @fa9r in https://github.com/zenml-io/zenml/pull/933 * New docs post ZenML Server by @htahir1 in https://github.com/zenml-io/zenml/pull/927 ## New Contributors * @TimovNiedek made their first contribution in https://github.com/zenml-io/zenml/pull/889 * @cjidboon94 made their first contribution in https://github.com/zenml-io/zenml/pull/892 * @edshee made their first contribution in https://github.com/zenml-io/zenml/pull/900 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.13.2...0.20.0 # 0.13.2 ZenML 0.13.2 comes with a new local Docker orchestrator and many other improvements and fixes: * You can now run your pipelines locally in isolated Docker containers per step * @gabrielmbmb updated our MLFlow experiment tracker to work with Databricks deployments πŸŽ‰ * Documentation updates for cloud deployments and multi-tenancy Kubeflow support ## What's Changed * Update GitHub Actions by @fa9r in https://github.com/zenml-io/zenml/pull/864 * Raise zenml exception when cyclic graph is detected by @schustmi in https://github.com/zenml-io/zenml/pull/866 * Add source to segment identify call by @htahir1 in https://github.com/zenml-io/zenml/pull/868 * Use default local paths/URIs for the local artifact and metadata stores by @stefannica in https://github.com/zenml-io/zenml/pull/873 * Implement local docker orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/862 * Update cheat sheet with latest CLI commands from 0.13.0 by @dnth in https://github.com/zenml-io/zenml/pull/867 * Add a note about importing proper DockerConfiguration module by @jsuchome in https://github.com/zenml-io/zenml/pull/877 * Bugfix/misc by @schustmi in https://github.com/zenml-io/zenml/pull/878 * Fixed bug in tfx by @htahir1 in https://github.com/zenml-io/zenml/pull/883 * Mlflow Databricks connection by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/882 * Refactor cloud guide to stack deployment guide by @wjayesh in https://github.com/zenml-io/zenml/pull/861 * Add 
cookie consent by @strickvl in https://github.com/zenml-io/zenml/pull/871
* Stack recipe CLI improvements by @wjayesh in https://github.com/zenml-io/zenml/pull/872
* Kubeflow workaround added by @htahir1 in https://github.com/zenml-io/zenml/pull/886

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.13.1...0.13.2

# 0.13.1

ZenML 0.13.1 is here and it comes with several quality-of-life improvements:

* You can now specify the exact order in which your pipeline steps should be executed, e.g., via `step_b.after(step_a)`
* TensorBoard was moved to a separate integration so you can use it with PyTorch and other modeling frameworks
* You can now configure the Evidently integration to ignore specific columns in your datasets.

This release also contains a lot of documentation on how to deploy custom code (like preprocessing and postprocessing code) with our KServe and Seldon integrations.

## What's Changed

* Fix flag info on recipes in docs by @wjayesh in https://github.com/zenml-io/zenml/pull/854
* Fix some materializer issues by @schustmi in https://github.com/zenml-io/zenml/pull/852
* Add ignore columns for evidently drift detection by @SangamSwadiK in https://github.com/zenml-io/zenml/pull/851
* TensorBoard Integration by @fa9r in https://github.com/zenml-io/zenml/pull/850
* Add option to specify task dependencies by @schustmi in https://github.com/zenml-io/zenml/pull/858
* Custom code readme and docs by @safoinme in https://github.com/zenml-io/zenml/pull/853

## New Contributors

* @SangamSwadiK made their first contribution in https://github.com/zenml-io/zenml/pull/851

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.13.0...0.13.1

# 0.13.0

ZenML version 0.13.0 is chock-full of exciting features. [Custom Code Deployment](https://github.com/zenml-io/zenml/tree/main/examples/custom_code_deployment) is the continuation of the Model Deployment story that we have been working on over the last few releases.
Now it is possible to deploy custom code along with your models using Kserve or Seldon.

With [Spark](https://github.com/zenml-io/zenml/tree/main/examples/spark_distributed_programming) this release also brings distributed processing into the ZenML toolkit.

Spinning up and configuring infrastructure is a difficult part of the MLOps journey and can easily become a barrier to entry. Using our [mlops-stacks](https://github.com/zenml-io/mlops-stacks) repository, it is now possible to spin up perfectly configured infrastructure with the corresponding ZenML stack using the ZenML CLI.

As always, we've also included various bug fixes and lots of improvements to the documentation and our examples.

## Breaking Changes

This release introduces a breaking change to the CLI by adjusting the access to the stack component specific resources for `secret-managers` and `model-deployers` to be more explicitly linked to the component. Here is how:

```bash
# `zenml secret register ...` becomes
zenml secrets-manager secret register ...

# `zenml served_models list` becomes
zenml model-deployer models list
```

## What's Changed

* Link checker by @dnth in https://github.com/zenml-io/zenml/pull/818
* Update Readme with latest info from docs page by @dnth in https://github.com/zenml-io/zenml/pull/810
* Typo on Readme by @dnth in https://github.com/zenml-io/zenml/pull/821
* Update kserve installation to 0.9 on kserve deployment example by @safoinme in https://github.com/zenml-io/zenml/pull/823
* Allow setting caching via the `config.yaml` by @strickvl in https://github.com/zenml-io/zenml/pull/827
* Handle file-io with context manager by @aliabbasjaffri in https://github.com/zenml-io/zenml/pull/825
* Add automated link check github actions by @dnth in https://github.com/zenml-io/zenml/pull/828
* Fix the SQL zenstore to work with MySQL by @stefannica in https://github.com/zenml-io/zenml/pull/829
* Improve label studio error messages if secrets are missing or of wrong schema by @schustmi in https://github.com/zenml-io/zenml/pull/832
* Add secret scoping to the Azure Key Vault by @stefannica in https://github.com/zenml-io/zenml/pull/830
* Unify CLI concepts (removing `secret`, `feature` and `served-models`) by @strickvl in https://github.com/zenml-io/zenml/pull/833
* Put link checker as part of CI by @dnth in https://github.com/zenml-io/zenml/pull/838
* Add missing requirement for step operators by @schustmi in https://github.com/zenml-io/zenml/pull/834
* Fix broken links from link checker results by @dnth in https://github.com/zenml-io/zenml/pull/835
* Fix served models logs formatting error by @safoinme in https://github.com/zenml-io/zenml/pull/836
* New Docker build configuration by @schustmi in https://github.com/zenml-io/zenml/pull/811
* Secrets references on stack component attributes by @schustmi in https://github.com/zenml-io/zenml/pull/817
* Misc bugfixes by @schustmi in https://github.com/zenml-io/zenml/pull/842
* Pillow Image materializer by @strickvl in
https://github.com/zenml-io/zenml/pull/820
* Add Tekton orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/844
* Put Slack call to action at the top of README page. by @dnth in https://github.com/zenml-io/zenml/pull/846
* Change Quickstart to Use Tabular Data by @fa9r in https://github.com/zenml-io/zenml/pull/843
* Add sleep before docker builds in release GH action by @schustmi in https://github.com/zenml-io/zenml/pull/849
* Implement Recursive Built-In Container Materializer by @fa9r in https://github.com/zenml-io/zenml/pull/812
* Custom deployment with KServe and Seldon Core by @safoinme in https://github.com/zenml-io/zenml/pull/841
* Spark Integration by @bcdurak in https://github.com/zenml-io/zenml/pull/837
* Add zenml stack recipe CLI commands by @wjayesh in https://github.com/zenml-io/zenml/pull/807

## New Contributors

* @aliabbasjaffri made their first contribution in https://github.com/zenml-io/zenml/pull/825

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.12.0...0.13.0

# 0.12.0

The 0.12.0 release comes with the third implementation of the ZenML Model Deployer abstraction: the [KServe](https://github.com/zenml-io/zenml/tree/main/examples/kserve_deployment) integration allows you to deploy any PyTorch, TensorFlow or SKLearn model from within your ZenML pipelines!

We also added functionality to specify hardware resources on a step level to control the amount of memory, CPUs and GPUs that each ZenML step has access to. This is currently limited to the Kubeflow and Vertex orchestrators but will be expanded in upcoming releases.

Additionally, we've added support for scoped secrets in our AWS, GCP and Vault Secrets Managers. These updated Secrets Managers allow you to configure a scope which determines if secrets are shared with other ZenML Secrets Managers using the same backend.

As always, we've also included various bug fixes and lots of improvements to the documentation and our examples.
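The per-step hardware resources described above can be pictured with a small stand-in spec that an orchestrator might translate into Kubernetes resource limits. Everything below — the `StepResources` name, its fields, and the translation helper — is an illustrative assumption, not ZenML's actual API:

```python
from dataclasses import dataclass
from typing import Dict, Optional


# Hypothetical stand-in for a per-step resource spec, for illustration only.
@dataclass(frozen=True)
class StepResources:
    cpu_count: Optional[float] = None
    gpu_count: Optional[int] = None
    memory: Optional[str] = None  # e.g. "8GiB"

    def to_k8s_limits(self) -> Dict[str, str]:
        """Translate the spec into Kubernetes-style resource limits."""
        limits: Dict[str, str] = {}
        if self.cpu_count is not None:
            limits["cpu"] = str(self.cpu_count)
        if self.memory is not None:
            limits["memory"] = self.memory
        if self.gpu_count:
            # GPUs are requested via an extended resource name in Kubernetes.
            limits["nvidia.com/gpu"] = str(self.gpu_count)
        return limits


trainer_resources = StepResources(cpu_count=4, gpu_count=1, memory="8GiB")
print(trainer_resources.to_k8s_limits())
```

The point of a declarative spec like this is that the same step-level request can be mapped onto whatever backend (Kubeflow, Vertex, plain Kubernetes) the active orchestrator targets.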
## What's Changed * Fix Links on the examples by @safoinme in https://github.com/zenml-io/zenml/pull/782 * Fix broken links in source code by @schustmi in https://github.com/zenml-io/zenml/pull/784 * Invalidating artifact/metadata store if there is a change in one of them by @bcdurak in https://github.com/zenml-io/zenml/pull/719 * Fixed broken link in README by @htahir1 in https://github.com/zenml-io/zenml/pull/785 * Embed Cheat Sheet in a separate docs page by @fa9r in https://github.com/zenml-io/zenml/pull/790 * Add data validation documentation by @stefannica in https://github.com/zenml-io/zenml/pull/789 * Add local path for mlflow experiment tracker by @schustmi in https://github.com/zenml-io/zenml/pull/786 * Improve Docker build logs. by @fa9r in https://github.com/zenml-io/zenml/pull/793 * Allow standard library types in steps by @stefannica in https://github.com/zenml-io/zenml/pull/799 * Added small description by @AlexejPenner in https://github.com/zenml-io/zenml/pull/801 * Replace the restriction to use Repository inside step with a warning by @stefannica in https://github.com/zenml-io/zenml/pull/792 * Adjust quickstart to data validators by @fa9r in https://github.com/zenml-io/zenml/pull/797 * Add utility function to deprecate pydantic attributes by @schustmi in https://github.com/zenml-io/zenml/pull/778 * Fix the mismatch KFP version between Kubeflow and GCP integration by @safoinme in https://github.com/zenml-io/zenml/pull/796 * Made mlflow more verbose by @htahir1 in https://github.com/zenml-io/zenml/pull/802 * Fix links by @dnth in https://github.com/zenml-io/zenml/pull/798 * KServe model deployer integration by @stefannica in https://github.com/zenml-io/zenml/pull/655 * retrieve pipeline requirement within running step by @safoinme in https://github.com/zenml-io/zenml/pull/805 * Fix `--decouple_stores` error message by @strickvl in https://github.com/zenml-io/zenml/pull/814 * Support subscripted generic step output types by @fa9r in 
https://github.com/zenml-io/zenml/pull/806 * Allow empty kubeconfig when using local kubeflow orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/809 * fix the secret register command in kserve docs page by @safoinme in https://github.com/zenml-io/zenml/pull/815 * Annotation example (+ stack component update) by @strickvl in https://github.com/zenml-io/zenml/pull/813 * Per-step resource configuration by @schustmi in https://github.com/zenml-io/zenml/pull/794 * Scoped secrets by @stefannica in https://github.com/zenml-io/zenml/pull/803 * Adjust examples and docs to new pipeline and step fetching syntax by @fa9r in https://github.com/zenml-io/zenml/pull/795 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.11.0...0.12.0 # 0.11.0 Our 0.11.0 release contains our new annotation workflow and stack component. We've been blogging about this for a few weeks, and even started maintaining our own repository of open-source annotation tools. With ZenML 0.11.0 you can bring data labeling into your MLOps pipelines and workflows as a first-class citizen. We've started our first iteration of this functionality by integrating with [Label Studio](https://labelstud.io/), a leader in the open-source annotation tool space. This release also includes a ton of updates to our documentation. (Seriously, go check them out! We added tens of thousands of words since the last release.) We continued the work on our data validation story from the previous release: [Deepchecks](https://deepchecks.com/) is the newest data validator we support, and we updated our Evidently and Whylogs integrations to include all the latest and greatest from those tools. Beyond this, as usual we included a number of smaller bugfixes and documentation changes to cumulatively improve experience of using ZenML as a user. For a detailed look at what's changed, give [our full release notes](https://github.com/zenml-io/zenml/releases/tag/0.11.0) a glance. 
## Breaking Changes

The 0.11.0 release remodels the Evidently and whylogs integrations as Data Validator stack components, in an effort to converge all data profiling and validation libraries around the same abstraction. As a consequence, you now need to configure and add a Data Validator stack component to your stack if you wish to use Evidently or whylogs in your pipelines:

* for Evidently:

    ```shell
    zenml data-validator register evidently -f evidently
    zenml stack update -dv evidently
    ```

* for whylogs:

    ```shell
    zenml data-validator register whylogs -f whylogs
    zenml stack update -dv whylogs
    ```

In this release, we have also upgraded the Evidently and whylogs libraries to their latest and greatest versions (whylogs 1.0.6 and evidently 0.1.52). These versions introduce non-backwards compatible changes that are also reflected in the ZenML integrations:

* Evidently profiles are now materialized using their original `evidently.model_profile.Profile` data type and the builtin `EvidentlyProfileStep` step now also returns a `Profile` instance instead of the previous dictionary representation. This may impact your existing pipelines as you may have to update your steps to take in `Profile` artifact instances instead of dictionaries.
* the whylogs `whylogs.DatasetProfile` data type was replaced by `whylogs.core.DatasetProfileView` in the builtin whylogs materializer and steps. This may impact your existing pipelines as you may have to update your steps to return and take in `whylogs.core.DatasetProfileView` artifact instances instead of `whylogs.DatasetProfile` objects.
* the whylogs library has gone through a major transformation that completely removed the session concept. As a result, the `enable_whylogs` step decorator was replaced by an `enable_whylabs` step decorator. You only need to use the step decorator if you wish to log your profiles to the Whylabs platform.
Please refer to the examples provided for Evidently and whylogs to learn more about how to use the new integration versions:

* [Evidently](https://github.com/zenml-io/zenml/tree/main/examples/evidently_drift_detection)
* [whylogs/Whylabs](https://github.com/zenml-io/zenml/tree/main/examples/whylogs_data_profiling)

## What's Changed

* Changed PR template to reflect integrations flow by @htahir1 in https://github.com/zenml-io/zenml/pull/732
* Fix broken Feast integration by @strickvl in https://github.com/zenml-io/zenml/pull/737
* Describe args run.py application actually supports by @jsuchome in https://github.com/zenml-io/zenml/pull/740
* Update kubernetes_orchestration example by @fa9r in https://github.com/zenml-io/zenml/pull/743
* Fix some example links by @schustmi in https://github.com/zenml-io/zenml/pull/744
* Fix broken links for docs and examples by @safoinme in https://github.com/zenml-io/zenml/pull/747
* Update CONTRIBUTING.md by @strickvl in https://github.com/zenml-io/zenml/pull/748
* Fix references to types when registering secrets managers by @strickvl in https://github.com/zenml-io/zenml/pull/738
* Make examples conform to best practices guidance by @AlexejPenner in https://github.com/zenml-io/zenml/pull/734
* API Docs with Cookies and Milk by @AlexejPenner in https://github.com/zenml-io/zenml/pull/758
* Use correct region when trying to fetch ECR repositories by @schustmi in https://github.com/zenml-io/zenml/pull/761
* Encode azure secrets manager secret names by @schustmi in https://github.com/zenml-io/zenml/pull/760
* Add nested mlflow option to enable_mlflow decorator by @Val3nt-ML in https://github.com/zenml-io/zenml/pull/742
* Combine all MLMD contexts by @schustmi in https://github.com/zenml-io/zenml/pull/759
* Prevent extra attributes when initializing StackComponents by @schustmi in https://github.com/zenml-io/zenml/pull/763
* New Docker images by @schustmi in https://github.com/zenml-io/zenml/pull/757
* Fix facets magic display in Google
Colab by @fa9r in https://github.com/zenml-io/zenml/pull/765 * Allow fetching secrets from within a step by @schustmi in https://github.com/zenml-io/zenml/pull/766 * Add notebook to great expectation example by @stefannica in https://github.com/zenml-io/zenml/pull/768 * Module resolving and path fixes by @schustmi in https://github.com/zenml-io/zenml/pull/735 * Fix step operator entrypoint by @schustmi in https://github.com/zenml-io/zenml/pull/771 * Docs Revamp by @fa9r in https://github.com/zenml-io/zenml/pull/769 * Allow fetching pipeline/step by name, class or instance by @AlexejPenner in https://github.com/zenml-io/zenml/pull/733 * Data Validator abstraction and Deepchecks integration by @htahir1 in https://github.com/zenml-io/zenml/pull/553 * rolling back seldon deployment example by @safoinme in https://github.com/zenml-io/zenml/pull/774 * Added changes from 1062 and 1061 into the updated docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/775 * Refresh Examples on `zenml examples pull` by @fa9r in https://github.com/zenml-io/zenml/pull/776 * Annotation stack component and Label Studio integration by @strickvl in https://github.com/zenml-io/zenml/pull/764 * Add optional machine specs to vertex orchestrator by @felixthebeard in https://github.com/zenml-io/zenml/pull/762 ## New Contributors * @jsuchome made their first contribution in https://github.com/zenml-io/zenml/pull/740 * @Val3nt-ML made their first contribution in https://github.com/zenml-io/zenml/pull/742 * @felixthebeard made their first contribution in https://github.com/zenml-io/zenml/pull/762 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.10.0...0.11.0 # 0.10.0 The 0.10.0 release continues our streak of extending ZenML with support for new orchestrators, this time by adding [the Kubernetes Native Orchestrator](https://github.com/zenml-io/zenml/tree/main/examples/kubernetes_orchestration). 
This orchestrator is a lightweight alternative to other distributed orchestrators like Airflow or Kubeflow that gives our users the ability to run pipelines in any Kubernetes cluster without having to install and manage additional tools or components.

This release features another integration that we are really excited about: the popular data profiling and validation library [Great Expectations](https://greatexpectations.io/) is our first Data Validator, a new category of stack components that we are in the process of standardizing and that will make data quality a central feature of ZenML. [The ZenML Great Expectations integration](https://github.com/zenml-io/zenml/tree/main/examples/great_expectations_data_validation) eliminates the complexity associated with configuring the store backends for Great Expectations by reusing our Artifact Store concept for that purpose, and it gives ZenML users immediate access to Great Expectations in both local and cloud settings.

Last but not least, the release also includes a new secrets manager implementation, courtesy of our contributor @karimhabush, that integrates ZenML with the [HashiCorp Vault Server](https://www.vaultproject.io), as well as a few other bug fixes and improvements.

## What's Changed

* Fix broken link by @strickvl in https://github.com/zenml-io/zenml/pull/707
* Add stack component copy command by @schustmi in https://github.com/zenml-io/zenml/pull/705
* Remove `force` flag from secrets managers' implementation by @strickvl in https://github.com/zenml-io/zenml/pull/708
* Fixed wrong example README by @AlexejPenner in https://github.com/zenml-io/zenml/pull/712
* Fix dead links in integrations docs by @fa9r in https://github.com/zenml-io/zenml/pull/710
* Fixing link to guide by @chethanuk-plutoflume in https://github.com/zenml-io/zenml/pull/716
* Adding azure-keyvault-secrets to azure integration dependencies by @safoinme in https://github.com/zenml-io/zenml/pull/717
* Fix MLflow repeated deployment error by @fa9r in https://github.com/zenml-io/zenml/pull/715
* Replace alerter standard steps by Slack-specific steps to fix config issue by @fa9r in https://github.com/zenml-io/zenml/pull/714
* Fix broken links on README by @dnth in https://github.com/zenml-io/zenml/pull/722
* Invalidate cache by @strickvl in https://github.com/zenml-io/zenml/pull/724
* Skip Cleaning Trace on tests by @safoinme in https://github.com/zenml-io/zenml/pull/725
* Kubernetes orchestrator by @fa9r in https://github.com/zenml-io/zenml/pull/688
* Vault Secrets Manager integration - KV Secrets Engine by @karimhabush in https://github.com/zenml-io/zenml/pull/689
* Add missing help text for CLI commands by @safoinme in https://github.com/zenml-io/zenml/pull/723
* Misc bugfixes by @schustmi in https://github.com/zenml-io/zenml/pull/713
* Great Expectations integration for data validation by @strickvl in https://github.com/zenml-io/zenml/pull/555
* Fix GCP artifact store by @schustmi in https://github.com/zenml-io/zenml/pull/730

## New Contributors

* @chethanuk-plutoflume made their first contribution in https://github.com/zenml-io/zenml/pull/716
* @dnth made their first contribution in https://github.com/zenml-io/zenml/pull/722
* @karimhabush made their first contribution in https://github.com/zenml-io/zenml/pull/689

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.9.0...0.10.0

# 0.9.0

It's been a couple of weeks, so it's time for a new release! 0.9.0 brings two whole new orchestrators, one of which was contributed by a community member just one day after we unveiled new documentation for orchestrator extensibility!
The release also includes a new secrets manager, a Slack integration and a bunch of other smaller changes across the codebase. (Our new orchestrators are exciting enough that they'll get their own blog posts to showcase their strengths in due course.)

Beyond this, as usual we included a number of smaller bugfixes and documentation changes that cumulatively improve the experience of using ZenML.

## What's Changed

* Pass secret to release linting workflow by @schustmi in https://github.com/zenml-io/zenml/pull/642
* Fix typo in example by @anencore94 in https://github.com/zenml-io/zenml/pull/644
* Added `SecretExistsError` in `register_secret()` method by @hectorLop in https://github.com/zenml-io/zenml/pull/648
* Fix broken GCP Secrets example CLI command by @strickvl in https://github.com/zenml-io/zenml/pull/649
* Upgrade to `ml-pipelines-sdk` v1.8.0 by @strickvl in https://github.com/zenml-io/zenml/pull/651
* Fix example list CLI command name by @schustmi in https://github.com/zenml-io/zenml/pull/647
* Fix README by @strickvl in https://github.com/zenml-io/zenml/pull/657
* Fix broken links in docs by @safoinme in https://github.com/zenml-io/zenml/pull/652
* Add `VertexOrchestrator` implementation by @gabrielmbmb in https://github.com/zenml-io/zenml/pull/640
* Fix index page links and Heading links by @safoinme in https://github.com/zenml-io/zenml/pull/661
* Add docstring checks to `pre-commit` script by @strickvl in https://github.com/zenml-io/zenml/pull/481
* Pin MLflow to <1.26.0 to prevent issues when matplotlib is not installed by @fa9r in https://github.com/zenml-io/zenml/pull/666
* Making `utils` more consistent by @strickvl in https://github.com/zenml-io/zenml/pull/658
* Fix linting failures on `develop` by @strickvl in https://github.com/zenml-io/zenml/pull/669
* Add docstrings for `config` module by @strickvl in https://github.com/zenml-io/zenml/pull/668
* Miscellaneous bugfixes by @schustmi in https://github.com/zenml-io/zenml/pull/660
* Make ZenServer dependencies optional by @schustmi in https://github.com/zenml-io/zenml/pull/665
* Implement Azure Secrets Manager integration by @strickvl in https://github.com/zenml-io/zenml/pull/654
* Replace `codespell` with `pyspelling` by @strickvl in https://github.com/zenml-io/zenml/pull/663
* Add Community Event to README by @htahir1 in https://github.com/zenml-io/zenml/pull/674
* Fix failing integration tests by @strickvl in https://github.com/zenml-io/zenml/pull/677
* Add `io` and `model_deployers` docstring checks by @strickvl in https://github.com/zenml-io/zenml/pull/675
* Update `zenml stack down` to use --force flag by @schustmi in https://github.com/zenml-io/zenml/pull/673
* Fix class resolving on windows by @schustmi in https://github.com/zenml-io/zenml/pull/678
* Added `pipelines` docstring checks by @strickvl in https://github.com/zenml-io/zenml/pull/676
* Docstring checks for `cli` module by @strickvl in https://github.com/zenml-io/zenml/pull/680
* Docstring fixes for `entrypoints` and `experiment_trackers` modules by @strickvl in https://github.com/zenml-io/zenml/pull/672
* Clearer Contributing.md by @htahir1 in https://github.com/zenml-io/zenml/pull/681
* How to access secrets within step added to docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/653
* FIX: Log a warning instead of raising an `AssertionError` by @ketangangal in https://github.com/zenml-io/zenml/pull/628
* Reviewer Reminder by @htahir1 in https://github.com/zenml-io/zenml/pull/683
* Fix some docs phrasings and headers by @strickvl in https://github.com/zenml-io/zenml/pull/670
* Implement `SlackAlerter.ask()` by @fa9r in https://github.com/zenml-io/zenml/pull/662
* Extending Alerters Docs by @fa9r in https://github.com/zenml-io/zenml/pull/690
* Sane defaults for MySQL by @htahir1 in https://github.com/zenml-io/zenml/pull/691
* pd.Series materializer by @Reed-Schimmel in https://github.com/zenml-io/zenml/pull/684
* Add docstrings for `materializers` and `metadata_stores` by @strickvl in https://github.com/zenml-io/zenml/pull/694
* Docstrings for the `integrations` module(s) by @strickvl in https://github.com/zenml-io/zenml/pull/692
* Add remaining docstrings by @strickvl in https://github.com/zenml-io/zenml/pull/696
* Allow enabling mlflow/wandb/whylogs with the class-based api by @schustmi in https://github.com/zenml-io/zenml/pull/697
* GitHub Actions orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/685
* Created MySQL docs, Vertex AI docs, and step.entrypoint() by @AlexejPenner in https://github.com/zenml-io/zenml/pull/698
* Update ignored words by @strickvl in https://github.com/zenml-io/zenml/pull/701
* Stack Component registering made easier by @AlexejPenner in https://github.com/zenml-io/zenml/pull/695
* Cleaning up the docs after the revamp by @bcdurak in https://github.com/zenml-io/zenml/pull/699
* Add model deployer to CLI docs by @safoinme in https://github.com/zenml-io/zenml/pull/702
* Merge Cloud Integrations and create a Vertex AI Example by @AlexejPenner in https://github.com/zenml-io/zenml/pull/693
* GitHub actions orchestrator example by @schustmi in https://github.com/zenml-io/zenml/pull/703

## New Contributors

* @anencore94 made their first contribution in https://github.com/zenml-io/zenml/pull/644
* @hectorLop made their first contribution in https://github.com/zenml-io/zenml/pull/648
* @gabrielmbmb made their first contribution in https://github.com/zenml-io/zenml/pull/640
* @ketangangal made their first contribution in https://github.com/zenml-io/zenml/pull/628
* @Reed-Schimmel made their first contribution in https://github.com/zenml-io/zenml/pull/684

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.8.1...0.9.0

# 0.8.1

ZenML 0.8.1 is here and it comes with support for Python 3.9 πŸŽ‰. It also includes major updates to our documentation, fixes some broken links in our examples and improves the `zenml go` command, which helps you get started with ZenML.

## What's Changed

* Hotfix/fix failing release by @AlexejPenner in https://github.com/zenml-io/zenml/pull/611
* Remove autocomplete + alerter from documentation by @strickvl in https://github.com/zenml-io/zenml/pull/612
* Support Python 3.9 by @htahir1 in https://github.com/zenml-io/zenml/pull/605
* Revert README by @htahir1 in https://github.com/zenml-io/zenml/pull/624
* Don't build cuda image on release by @schustmi in https://github.com/zenml-io/zenml/pull/623
* Update quickstart for `zenml go` by @fa9r in https://github.com/zenml-io/zenml/pull/625
* Improve kubeflow manual setup logs by @schustmi in https://github.com/zenml-io/zenml/pull/622
* Added missing space to error message by @AlexejPenner in https://github.com/zenml-io/zenml/pull/614
* Added --set flag to register stack command by @AlexejPenner in https://github.com/zenml-io/zenml/pull/613
* Fixes for multiple examples by @schustmi in https://github.com/zenml-io/zenml/pull/626
* Bring back the `served_model` format to the keras materializer by @stefannica in https://github.com/zenml-io/zenml/pull/629
* Fix broken example links by @schustmi in https://github.com/zenml-io/zenml/pull/630
* FAQ edits by @strickvl in https://github.com/zenml-io/zenml/pull/634
* Fix version parsing by @schustmi in https://github.com/zenml-io/zenml/pull/633
* Completed Best Practices Page by @AlexejPenner in https://github.com/zenml-io/zenml/pull/635
* Comments on Issues should no longer trigger gh actions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/636
* Revise `CONTRIBUTING.md` by @strickvl in https://github.com/zenml-io/zenml/pull/615
* Alerter Component for Slack Integration by @fa9r in https://github.com/zenml-io/zenml/pull/586
* Update `zenml go` to open quickstart/notebooks by @fa9r in https://github.com/zenml-io/zenml/pull/631
* Update examples by @schustmi in https://github.com/zenml-io/zenml/pull/638
* More detailed instructions on creating an integration by @AlexejPenner in https://github.com/zenml-io/zenml/pull/639
* Added publish api docs to release workflow by @AlexejPenner in https://github.com/zenml-io/zenml/pull/641
* Added *.md to ignore paths by @AlexejPenner in https://github.com/zenml-io/zenml/pull/637
* Update README and Docs with new messaging and fix broken links by @htahir1 in https://github.com/zenml-io/zenml/pull/632

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.8.0...0.8.1

# 0.8.0

## πŸ§˜β€β™€οΈ Extensibility is our middle name

* The ability to register custom stack component flavors (and renaming `type` to `flavor`): Registering custom stack component flavors by @bcdurak in https://github.com/zenml-io/zenml/pull/541
* The ability to easily extend orchestrators
* Documentation for stacks, stack components and flavors by @bcdurak in https://github.com/zenml-io/zenml/pull/607
* Allow configuration of s3fs by @schustmi in https://github.com/zenml-io/zenml/pull/532
* Ability to use SSL to connect to MySQL clients (this allows connecting to cloud-based MySQL deployments)
* New MySQL metadata stores by @bcdurak in https://github.com/zenml-io/zenml/pull/580!
* Docs and messaging change
* Make Orchestrators more extensible and simplify the interface by @AlexejPenner in https://github.com/zenml-io/zenml/pull/581
* S3 Compatible Artifact Store and materializers file handling by @safoinme in https://github.com/zenml-io/zenml/pull/598

## Manage your stacks

* Update stack and stack components via the CLI by @strickvl in https://github.com/zenml-io/zenml/pull/497
* Add `stack delete` confirmation prompt by @strickvl in https://github.com/zenml-io/zenml/pull/548
* Add `zenml stack export` and `zenml stack import` commands by @fa9r in https://github.com/zenml-io/zenml/pull/560

## Collaboration

* User management by @schustmi in https://github.com/zenml-io/zenml/pull/500

## CLI improvements

* CLI speed improvement by @bcdurak in https://github.com/zenml-io/zenml/pull/567
* Ensure `rich` CLI displays full text and wraps table text by @strickvl in https://github.com/zenml-io/zenml/pull/577
* Add CLI command to remove stack component attribute by @strickvl in https://github.com/zenml-io/zenml/pull/590
* Beautify CLI by grouping commands list into tags by @safoinme in https://github.com/zenml-io/zenml/pull/546

## New integrations

* Add PyTorch example by @htahir1 in https://github.com/zenml-io/zenml/pull/559
* Added GCP as secret manager by @AlexejPenner in https://github.com/zenml-io/zenml/pull/556

## Documentation / ZenBytes etc.

* ZenBytes update (and ZenML Projects)
* Beautification of Examples by @AlexejPenner in https://github.com/zenml-io/zenml/pull/491
* Document global configuration and repository by @stefannica in https://github.com/zenml-io/zenml/pull/579
* ZenML Collaboration docs by @stefannica in https://github.com/zenml-io/zenml/pull/597

## βž• Other Updates, Additions and Fixes

* Experiment tracker stack components by @htahir1 in https://github.com/zenml-io/zenml/pull/530
* Secret Manager improvements and Seldon Core secret passing by @stefannica in https://github.com/zenml-io/zenml/pull/529
* Pipeline run tracking by @schustmi in https://github.com/zenml-io/zenml/pull/601
* Stream model deployer logs through CLI by @stefannica in https://github.com/zenml-io/zenml/pull/557
* Fix various usability bugs by @stefannica in https://github.com/zenml-io/zenml/pull/561
* Replace `-f` and `--force` with `-y` and `--yes` by @strickvl in https://github.com/zenml-io/zenml/pull/566
* Make it easier to submit issues by @htahir1 in https://github.com/zenml-io/zenml/pull/571
* Sync the repository and local store with the disk configuration files and other fixes by @stefannica in https://github.com/zenml-io/zenml/pull/588
* Add ability to give in-line pip requirements for pipeline by @strickvl in https://github.com/zenml-io/zenml/pull/583
* Fix evidently visualizer on Colab by @fa9r in https://github.com/zenml-io/zenml/pull/592

## πŸ™Œ Community Contributions

* @Ankur3107 made their first contribution in https://github.com/zenml-io/zenml/pull/467
* @MateusGheorghe made their first contribution in https://github.com/zenml-io/zenml/pull/523
* Added support for scipy sparse matrices by @avramdj in https://github.com/zenml-io/zenml/pull/534

# 0.7.3

## πŸ“Š Experiment Tracking Components

[PR #530](https://github.com/zenml-io/zenml/pull/530) adds a new stack component to ZenML's ever-growing list: `experiment_trackers` lets users configure their experiment tracking tools with ZenML. Examples of experiment tracking tools are [Weights&Biases](https://wandb.ai), [mlflow](https://mlflow.org) and [Neptune](https://neptune.ai), amongst others.

Existing users might be confused, as ZenML has had MLflow and wandb support for a while now without such a component. However, this component gives users more control over the configuration of MLflow and wandb with the new `MLFlowExperimentTracker` and `WandbExperimentTracker` components. This allows these tools to work in more scenarios than the previously limited local use cases.
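These release notes mention decorators such as `enable_mlflow` and `enable_wandb` for switching tracking on around a step. As a rough illustration of that decorator pattern in plain Python (this is not ZenML's actual API; `enable_tracking`, `train` and everything below are invented for the sketch):

```python
import functools


def enable_tracking(step_fn):
    """Hypothetical sketch: wrap a step so every call is recorded in a run log."""
    runs = []

    @functools.wraps(step_fn)
    def wrapper(*args, **kwargs):
        # Run the step as usual, then record its name and result.
        result = step_fn(*args, **kwargs)
        runs.append({"step": step_fn.__name__, "result": result})
        return result

    wrapper.runs = runs  # expose the log for inspection
    return wrapper


@enable_tracking
def train(learning_rate: float) -> float:
    # Stand-in for real training; returns a fake accuracy.
    return 0.9 if learning_rate < 0.1 else 0.5


train(0.01)
print(train.runs)  # prints [{'step': 'train', 'result': 0.9}]
```

The real experiment tracker components wire actual MLflow/wandb runs into this wrapping step instead of a plain list.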
## πŸ”Ž XGBoost and LightGBM support

[XGBoost](https://xgboost.readthedocs.io/en/stable/) and [LightGBM](https://lightgbm.readthedocs.io/) are two of the most widely used boosting algorithm libraries out there. This release adds materializers for each library's native objects. Check out [both examples here](https://github.com/zenml-io/zenml/tree/main/examples) and PRs [#544](https://github.com/zenml-io/zenml/pull/544) and [#538](https://github.com/zenml-io/zenml/pull/538) for more details.

## πŸ“‚ Parameterized S3FS support to enable non-AWS S3 storage (minio, ceph)

A big complaint about the [S3 Artifact Store](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/s3/artifact_stores/s3_artifact_store.py) integration was that it was hard to parameterize it to support non-AWS S3 storage like [minio](https://min.io/) and [ceph](https://docs.ceph.com/en/latest/radosgw/s3/). The latest release makes this super simple! When you want to register an S3ArtifactStore from the CLI, you can now pass in `client_kwargs`, `config_kwargs` or `s3_additional_kwargs` as a JSON string. For example:

```shell
zenml artifact-store register my_s3_store --type=s3 --path=s3://my_bucket \
    --client_kwargs='{"endpoint_url": "http://my-s3-endpoint"}'
```

See PR [#532](https://github.com/zenml-io/zenml/pull/532) for more details.

## 🧱 New CLI commands to update stack components

We added functionality to allow users to update stacks that already exist.
This shows the basic workflow:

```shell
zenml orchestrator register local_orchestrator2 -t local
zenml stack update default -o local_orchestrator2
zenml stack describe default
zenml container-registry register local_registry --type=default --uri=localhost:5000
zenml container-registry update local --uri='somethingelse.com'
zenml container-registry rename local local2
zenml container-registry describe local2
zenml stack rename default new_default
zenml stack update new_default -c local2
zenml stack describe new_default
zenml stack remove-component -c
```

More details are in the [CLI docs](https://apidocs.zenml.io/0.7.3/cli/). Users can add new stack components to a pre-existing stack, or they can modify already-present stack components. They can also rename their stack and individual stack components.

## πŸ›‘οΈ Seldon Core authentication through ZenML secrets

The Seldon Core Model Deployer stack component was updated in this release to allow the configuration of ZenML secrets with credentials that authenticate Seldon to access the Artifact Store. The Seldon Core integration provides three different secret schemas for the three flavors of Artifact Store: AWS, GCP, and Azure, but custom secrets can be used as well. For more information on how to use this feature please refer to our [Seldon Core deployment example](https://github.com/zenml-io/zenml/tree/main/examples/seldon_deployment).

Lastly, we had numerous other changes, such as ensuring that the PyTorch materializer works across all artifact stores and that the Kubeflow Metadata Store can be easily queried locally.
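A side note on the JSON-string flags used above (for example `--client_kwargs` when registering an S3 artifact store): rather than hand-escaping quotes, the flag value can be generated with Python's standard library. A small sketch; the endpoint URL is just a placeholder:

```python
import json
import shlex

# Build the value for --client_kwargs programmatically instead of
# hand-writing the escaped JSON string. The endpoint is a placeholder.
client_kwargs = {"endpoint_url": "http://my-s3-endpoint"}
flag = "--client_kwargs=" + shlex.quote(json.dumps(client_kwargs))
print(flag)
```

`shlex.quote` wraps the JSON in single quotes so it survives the shell unchanged; the same trick applies to `config_kwargs` and `s3_additional_kwargs`.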
## Detailed Changelog

* Fix caching & `mypy` errors by @strickvl in https://github.com/zenml-io/zenml/pull/524
* Switch unit test from local_daemon to multiprocessing by @jwwwb in https://github.com/zenml-io/zenml/pull/508
* Change Pytorch materializer to support remote storage by @safoinme in https://github.com/zenml-io/zenml/pull/525
* Remove TODO from Feature Store `init` docstring by @strickvl in https://github.com/zenml-io/zenml/pull/527
* Fixed typo predicter -> predictor by @MateusGheorghe in https://github.com/zenml-io/zenml/pull/523
* Fix mypy errors by @strickvl in https://github.com/zenml-io/zenml/pull/528
* Replaced old local_* logic by @htahir1 in https://github.com/zenml-io/zenml/pull/531
* capitalize aws username in ECR docs by @wjayesh in https://github.com/zenml-io/zenml/pull/533
* Build docker base images quicker after release by @schustmi in https://github.com/zenml-io/zenml/pull/537
* Allow configuration of s3fs by @schustmi in https://github.com/zenml-io/zenml/pull/532
* Update contributing and fix ci badge to main by @htahir1 in https://github.com/zenml-io/zenml/pull/536
* Added XGboost integration by @htahir1 in https://github.com/zenml-io/zenml/pull/538
* Added fa9r to .github/teams.yml by @fa9r in https://github.com/zenml-io/zenml/pull/539
* Secret Manager improvements and Seldon Core secret passing by @stefannica in https://github.com/zenml-io/zenml/pull/529
* User management by @schustmi in https://github.com/zenml-io/zenml/pull/500
* Update stack and stack components via the CLI by @strickvl in https://github.com/zenml-io/zenml/pull/497
* Added lightgbm integration by @htahir1 in https://github.com/zenml-io/zenml/pull/544
* Fix the Kubeflow metadata store and other stack management improvements by @stefannica in https://github.com/zenml-io/zenml/pull/542
* Experiment tracker stack components by @htahir1 in https://github.com/zenml-io/zenml/pull/530

## New Contributors

* @MateusGheorghe made their first contribution in https://github.com/zenml-io/zenml/pull/523
* @fa9r made their first contribution in https://github.com/zenml-io/zenml/pull/539

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.7.2...0.7.3

**Blog Post**: https://blog.zenml.io/zero-seven-two-three-release/

# 0.7.2

0.7.2 is a minor release which quickly patches some bugs found in the last release to do with Seldon and MLflow deployment.

This release also features initial versions of two amazing new integrations: [HuggingFace](https://huggingface.co/) and [Weights&Biases](https://wandb.ai/site)!

- HuggingFace models can now be passed through ZenML pipelines!
- You can now track your pipeline runs with Weights&Biases with the new `enable_wandb` decorator!

Continuous model deployment with MLflow has been improved with ZenML 0.7.2.
A new MLflow Model Deployer stack component is now available and needs to be part of your stack to be able to deploy models:

```bash
zenml integration install mlflow
zenml model-deployer register mlflow --type=mlflow
zenml stack register local_with_mlflow -m default -a default -o default -d mlflow
zenml stack set local_with_mlflow
```

The MLflow Model Deployer is yet another addition to the list of Model Deployers available in ZenML. You can read more on deploying models to production with MLflow in our [Continuous Training and Deployment documentation section](https://docs.zenml.io/advanced-guide/practical/deploying-models) and our [MLflow deployment example](https://github.com/zenml-io/zenml/tree/main/examples/mlflow_deployment).

## What's Changed

* Fix the seldon deployment example by @htahir1 in https://github.com/zenml-io/zenml/pull/511
* Create base deployer and refactor MLflow deployer implementation by @wjayesh in https://github.com/zenml-io/zenml/pull/489
* Add nlp example by @Ankur3107 in https://github.com/zenml-io/zenml/pull/467
* Fix typos by @strickvl in https://github.com/zenml-io/zenml/pull/515
* Bugfix/hypothesis given does not work with fixture by @jwwwb in https://github.com/zenml-io/zenml/pull/513
* Bug: fix long Kubernetes labels in Seldon deployments by @stefannica in https://github.com/zenml-io/zenml/pull/514
* Change prediction_uri to prediction_url in MLflow deployer by @stefannica in https://github.com/zenml-io/zenml/pull/516
* Simplify HuggingFace Integration by @AlexejPenner in https://github.com/zenml-io/zenml/pull/517
* Weights & Biases Basic Integration by @htahir1 in https://github.com/zenml-io/zenml/pull/518

## New Contributors

* @Ankur3107 made their first contribution in https://github.com/zenml-io/zenml/pull/467

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.7.1...0.7.2

# 0.7.1

The release introduces the [Seldon Core](https://github.com/SeldonIO/seldon-core) ZenML integration, featuring the *Seldon Core Model Deployer* and a *Seldon Core standard model deployer step*. The [*Model Deployer*](https://docs.zenml.io/component-gallery/model-deployers/model-deployers) is a new type of stack component that enables you to develop continuous model deployment pipelines that train models and continuously deploy them to an external model serving tool, service or platform. You can read more on deploying models to production with Seldon Core in our [Continuous Training and Deployment documentation section](https://docs.zenml.io/component-gallery/model-deployers/model-deployers) and our [Seldon Core deployment example](https://github.com/zenml-io/zenml/tree/main/examples/seldon_deployment).

This release also introduces an integration with [Feast](https://feast.dev) as ZenML's first feature store integration. Feature stores allow data teams to serve data via an offline store and an online low-latency store where data is kept in sync between the two. They also offer a centralized registry where features (and feature schemas) are stored for use within a team or wider organization. ZenML now supports connecting to a Redis-backed Feast feature store as a stack component integration. Check out the [full example](https://github.com/zenml-io/zenml/tree/release/0.7.1/examples/feature_store) to see it in action!

0.7.1 also brings an addition to ZenML's training library integrations with [NeuralProphet](https://neuralprophet.com/html/index.html). Check out the new [example](https://github.com/zenml-io/zenml/tree/main/examples) for more details, and the [docs](https://docs.zenml.io) for further detail on all new features!
## What's Changed

* Add linting of examples to `pre-commit` by @strickvl in https://github.com/zenml-io/zenml/pull/490
* Remove dev-specific entries in `.gitignore` by @strickvl in https://github.com/zenml-io/zenml/pull/488
* Produce periodic mocked data for Segment/Mixpanel by @AlexejPenner in https://github.com/zenml-io/zenml/pull/487
* Abstractions for artifact stores by @bcdurak in https://github.com/zenml-io/zenml/pull/474
* enable and disable cache from runtime config by @AlexejPenner in https://github.com/zenml-io/zenml/pull/492
* Basic Seldon Core Deployment Service by @stefannica in https://github.com/zenml-io/zenml/pull/495
* Parallelize our test suite and make errors more readable by @alex-zenml in https://github.com/zenml-io/zenml/pull/378
* Provision local zenml service by @jwwwb in https://github.com/zenml-io/zenml/pull/496
* bugfix/optional-secrets-manager by @safoinme in https://github.com/zenml-io/zenml/pull/493
* Quick fix for copying folders by @bcdurak in https://github.com/zenml-io/zenml/pull/501
* Pin exact ml-pipelines-sdk version by @schustmi in https://github.com/zenml-io/zenml/pull/506
* Seldon Core model deployer stack component and standard step by @stefannica in https://github.com/zenml-io/zenml/pull/499
* Fix datetime test / bug by @strickvl in https://github.com/zenml-io/zenml/pull/507
* Added NeuralProphet integration by @htahir1 in https://github.com/zenml-io/zenml/pull/504
* Feature Store (Feast with Redis) by @strickvl in https://github.com/zenml-io/zenml/pull/498

# 0.7.0

With ZenML 0.7.0, a lot has been revamped under the hood regarding how things are stored. Importantly, this means that ZenML now has system-wide profiles that let you register stacks to share across several of your projects! If you still want to manage your stacks for each project folder individually, profiles let you do that as well.
Most projects of any complexity will require passwords or tokens to access data and infrastructure, and for this purpose ZenML 0.7.0 introduces [the Secrets Manager](https://docs.zenml.io/component-gallery/secrets-managers/secrets-managers) stack component to seamlessly pass these values around to your steps. Our AWS integration also allows you to use AWS Secrets Manager as a backend to handle all your secret persistence needs.

Finally, in addition to the new AzureML and Sagemaker Step Operators that version 0.6.3 brought, this release also adds the ability to [run individual steps on GCP's Vertex AI](https://docs.zenml.io/component-gallery/step-operators/gcloud-vertexai).

Beyond this, some smaller bugfixes and documentation changes combine to make ZenML 0.7.0 a more pleasant user experience.

## What's Changed

* Added quick mention of how to use dockerignore by @AlexejPenner in https://github.com/zenml-io/zenml/pull/468
* Made rich traceback optional with ENV variable by @htahir1 in https://github.com/zenml-io/zenml/pull/472
* Separate stack persistence from repo implementation by @jwwwb in https://github.com/zenml-io/zenml/pull/462
* Adding safoine username to github team by @safoinme in https://github.com/zenml-io/zenml/pull/475
* Fix `zenml stack describe` bug by @strickvl in https://github.com/zenml-io/zenml/pull/476
* ZenProfiles and centralized ZenML repositories by @stefannica in https://github.com/zenml-io/zenml/pull/471
* Add `examples` folder to linting script by @strickvl in https://github.com/zenml-io/zenml/pull/482
* Vertex AI integration and numerous other changes by @htahir1 in https://github.com/zenml-io/zenml/pull/477
* Fix profile handing in the Azure ML step operator by @stefannica in https://github.com/zenml-io/zenml/pull/483
* Copy the entire stack configuration into containers by @stefannica in https://github.com/zenml-io/zenml/pull/480
* Improve some things with the Profiles CLI output by @stefannica in https://github.com/zenml-io/zenml/pull/484
* Secrets manager stack component and interface by @AlexejPenner in https://github.com/zenml-io/zenml/pull/470
* Update schedule.py (#485) by @avramdj in https://github.com/zenml-io/zenml/pull/485

## New Contributors

* @avramdj made their first contribution in https://github.com/zenml-io/zenml/pull/485

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.6.3...0.7.0rc

# 0.6.3

With ZenML 0.6.3, you can now run your ZenML steps on Sagemaker and AzureML! It's normal to have certain steps that require specific hardware on which to run model training, for example, and this latest release gives you the power to switch out hardware for individual steps to support this.

We added a new Tensorboard visualization that you can make use of when using our Kubeflow Pipelines integration. We handle the background processes needed to spin up this interactive web interface so that you can use it to visualize your model's performance over time.

Behind the scenes we gave our integration testing suite a massive upgrade, fixed a number of smaller bugs and made documentation updates. For a detailed look at what's changed, give [our full release notes](https://github.com/zenml-io/zenml/releases/tag/0.6.3) a glance.
## What's Changed

* Fix typo by @wjayesh in https://github.com/zenml-io/zenml/pull/432
* Remove tabulate dependency (replaced by rich) by @jwwwb in https://github.com/zenml-io/zenml/pull/436
* Fix potential issue with local integration tests by @schustmi in https://github.com/zenml-io/zenml/pull/428
* Remove support for python 3.6 by @schustmi in https://github.com/zenml-io/zenml/pull/437
* Create clean test repos in separate folders by @michael-zenml in https://github.com/zenml-io/zenml/pull/430
* Copy explicit materializers before modifying, log correct class by @schustmi in https://github.com/zenml-io/zenml/pull/434
* Fix typo in mysql password parameter by @pafpixel in https://github.com/zenml-io/zenml/pull/438
* Pytest-fixture for separate virtual environments for each integration test by @AlexejPenner in https://github.com/zenml-io/zenml/pull/405
* Bugfix/fix failing tests due to comments step by @AlexejPenner in https://github.com/zenml-io/zenml/pull/444
* Added --use-virtualenvs option to allow choosing envs to run by @AlexejPenner in https://github.com/zenml-io/zenml/pull/445
* Log whether a step was cached by @strickvl in https://github.com/zenml-io/zenml/pull/435
* Added basic integration tests for remaining examples by @strickvl in https://github.com/zenml-io/zenml/pull/439
* Improve error message when provisioning local kubeflow resources with a non-local container registry by @schustmi in https://github.com/zenml-io/zenml/pull/442
* Enable generic step inputs and outputs by @schustmi in https://github.com/zenml-io/zenml/pull/440
* Removed old reference to a step that no longer exists by @AlexejPenner in https://github.com/zenml-io/zenml/pull/452
* Correctly use custom kubernetes context if specified by @schustmi in https://github.com/zenml-io/zenml/pull/451
* Fix CLI stack component describe/list commands by @schustmi in https://github.com/zenml-io/zenml/pull/450
* Ignore type of any tfx proto file by @schustmi in https://github.com/zenml-io/zenml/pull/453
* Another boyscout pr on the gh actions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/455
* Upgrade TFX to 1.6.1 by @jwwwb in https://github.com/zenml-io/zenml/pull/441
* Added ZenML Projects to README by @htahir1 in https://github.com/zenml-io/zenml/pull/457
* Upgrade `rich` from 11.0 to 12.0 by @strickvl in https://github.com/zenml-io/zenml/pull/458
* Add Kubeflow tensorboard viz and fix tensorflow file IO for cloud back-ends by @stefannica in https://github.com/zenml-io/zenml/pull/447
* Implementing the `explain` subcommand by @bcdurak in https://github.com/zenml-io/zenml/pull/460
* Implement AzureML and Sagemaker step operators by @schustmi in https://github.com/zenml-io/zenml/pull/456

## New Contributors

* @pafpixel made their first contribution in https://github.com/zenml-io/zenml/pull/438

# 0.6.2

ZenML 0.6.2 brings you the ability to serve models using MLflow deployments as well as an updated CLI interface! For a real continuous deployment cycle, we know that ZenML pipelines should be able to handle everything, from pre-processing to training to serving to monitoring and then potentially re-training and re-serving. The interfaces we created in this release are the foundation on which all of this will build.

We also improved how you interact with ZenML through the CLI.
Everything now looks much smarter and more readable with the popular `rich` library integrated into our dependencies. Smaller changes that you'll notice include updates to our cloud integrations and bug fixes for Windows users. For a detailed look at what's changed, see below.

## What's Changed

* Updated notebook for quickstart by @htahir1 in https://github.com/zenml-io/zenml/pull/398
* Update tensorflow base image by @schustmi in https://github.com/zenml-io/zenml/pull/396
* Add cloud specific deployment guide + refactoring by @wjayesh in https://github.com/zenml-io/zenml/pull/400
* add cloud sub page to toc.md by @wjayesh in https://github.com/zenml-io/zenml/pull/401
* fix tab indent by @wjayesh in https://github.com/zenml-io/zenml/pull/402
* Bugfix for workflows failing due to modules not being found by @bcdurak in https://github.com/zenml-io/zenml/pull/390
* Improve github workflows by @schustmi in https://github.com/zenml-io/zenml/pull/406
* Add plausible script to docs.zenml.io pages by @alex-zenml in https://github.com/zenml-io/zenml/pull/414
* Add orchestrator and ECR docs by @wjayesh in https://github.com/zenml-io/zenml/pull/413
* Richify the CLI by @alex-zenml in https://github.com/zenml-io/zenml/pull/392
* Allow specification of required integrations for a pipeline by @schustmi in https://github.com/zenml-io/zenml/pull/408
* Update quickstart in docs to conform to examples by @htahir1 in https://github.com/zenml-io/zenml/pull/410
* Updated PR template with some more details by @htahir1 in https://github.com/zenml-io/zenml/pull/411
* Bugfix on the CLI to work without a git installation by @bcdurak in https://github.com/zenml-io/zenml/pull/412
* Added Ayush's Handle by @ayush714 in https://github.com/zenml-io/zenml/pull/417
* Adding an info message on Windows if there is no application associated to .sh files by @bcdurak in https://github.com/zenml-io/zenml/pull/419
* Catch `matplotlib` crash when running IPython in terminal by @strickvl in https://github.com/zenml-io/zenml/pull/416
* Automatically activate integrations when unable to find stack component by @schustmi in https://github.com/zenml-io/zenml/pull/420
* Fix some code inspections by @halvgaard in https://github.com/zenml-io/zenml/pull/422
* Prepare integration tests on kubeflow by @schustmi in https://github.com/zenml-io/zenml/pull/423
* Add concepts back into glossary by @strickvl in https://github.com/zenml-io/zenml/pull/425
* Make guide easier to follow by @wjayesh in https://github.com/zenml-io/zenml/pull/427
* Fix httplib to 0.19 and pyparsing to 2.4 by @jwwwb in https://github.com/zenml-io/zenml/pull/426
* Wrap context serialization in try blocks by @jwwwb in https://github.com/zenml-io/zenml/pull/397
* Track stack configuration when registering and running a pipeline by @schustmi in https://github.com/zenml-io/zenml/pull/429
* MLflow deployment integration by @stefannica in https://github.com/zenml-io/zenml/pull/415

# 0.6.1

ZenML 0.6.1 is out and it's all about the cloud ☁️! We have improved AWS integration and a brand-new [Azure](https://github.com/zenml-io/zenml/tree/0.6.1/src/zenml/integrations/azure) integration! Run your pipelines on AWS and Azure now and let us know how it went on our [Slack](https://zenml.io/slack-invite).

Smaller changes that you'll notice include much-awaited updates and fixes, including the first iterations of scheduling pipelines and tracking more reproducibility-relevant data in the metadata store.

For a detailed look at what's changed, see below.
## What's Changed

* Add MVP for scheduling by @htahir1 in https://github.com/zenml-io/zenml/pull/354
* Add S3 artifact store and filesystem by @schustmi in https://github.com/zenml-io/zenml/pull/359
* Update 0.6.0 release notes by @alex-zenml in https://github.com/zenml-io/zenml/pull/362
* Fix cuda-dev base container image by @stefannica in https://github.com/zenml-io/zenml/pull/361
* Mark ZenML as typed package by @schustmi in https://github.com/zenml-io/zenml/pull/360
* Improve error message if ZenML repo is missing inside kubeflow container entrypoint by @schustmi in https://github.com/zenml-io/zenml/pull/363
* Spell whylogs and WhyLabs correctly in our docs by @stefannica in https://github.com/zenml-io/zenml/pull/369
* Feature/add readme for mkdocs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/372
* Cleaning up the assets pushed by gitbook automatically by @bcdurak in https://github.com/zenml-io/zenml/pull/371
* Turn codecov off for patch updates by @htahir1 in https://github.com/zenml-io/zenml/pull/376
* Minor changes and fixes by @schustmi in https://github.com/zenml-io/zenml/pull/365
* Only include python files when building local docs by @schustmi in https://github.com/zenml-io/zenml/pull/377
* Prevent access to repo during step execution by @schustmi in https://github.com/zenml-io/zenml/pull/370
* Removed duplicated Section within docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/379
* Fixing the materializer registry to spot sub-classes of defined types by @bcdurak in https://github.com/zenml-io/zenml/pull/368
* Computing hash of step and materializer works in notebooks by @htahir1 in https://github.com/zenml-io/zenml/pull/375
* Sort requirements to improve docker build caching by @schustmi in https://github.com/zenml-io/zenml/pull/383
* Make sure the s3 artifact store is registered when the integration is activated by @schustmi in https://github.com/zenml-io/zenml/pull/382
* Make MLflow integration work with kubeflow and scheduled pipelines by @stefannica in https://github.com/zenml-io/zenml/pull/374
* Reset _has_been_called to False ahead of pipeline.connect by @AlexejPenner in https://github.com/zenml-io/zenml/pull/385
* Fix local airflow example by @schustmi in https://github.com/zenml-io/zenml/pull/366
* Improve and extend base materializer error messages by @schustmi in https://github.com/zenml-io/zenml/pull/380
* Windows CI issue by @schustmi in https://github.com/zenml-io/zenml/pull/389
* Add the ability to attach custom properties to the Metadata Store by @bcdurak in https://github.com/zenml-io/zenml/pull/355
* Handle case when return values do not match output by @AlexejPenner in https://github.com/zenml-io/zenml/pull/386
* Quickstart code in docs fixed by @AlexejPenner in https://github.com/zenml-io/zenml/pull/387
* Fix mlflow tracking example by @stefannica in https://github.com/zenml-io/zenml/pull/393
* Implement azure artifact store and fileio plugin by @schustmi in https://github.com/zenml-io/zenml/pull/388
* Create todo issues with separate issue type by @schustmi in https://github.com/zenml-io/zenml/pull/394
* Log that steps are cached while running pipeline by @alex-zenml in https://github.com/zenml-io/zenml/pull/381
* Schedule added to context for all orchestrators by @AlexejPenner in https://github.com/zenml-io/zenml/pull/391

# 0.6.0

ZenML 0.6.0 is out now. We've made some big changes under the hood, but our biggest public-facing addition is our new integration to support all your data logging needs: [`whylogs`](https://github.com/whylabs/whylogs). Our core architecture was [thoroughly reworked](https://github.com/zenml-io/zenml/pull/305) and is now in a much better place to support our ongoing development needs.

Smaller changes that you'll notice include extensive documentation additions, updates and fixes. For a detailed look at what's changed, see below.
## 📊 Whylogs logging

[Whylogs](https://github.com/whylabs/whylogs) is an open source library that analyzes your data and creates statistical summaries called whylogs profiles. Whylogs profiles can be visualized locally or uploaded to the WhyLabs platform where more comprehensive analysis can be carried out.

ZenML integrates seamlessly with Whylogs and [WhyLabs](https://whylabs.ai/). This example shows how easy it is to enhance steps in an existing ML pipeline with Whylogs profiling features. Changes to the user code are minimal while ZenML takes care of all aspects related to Whylogs session initialization, profile serialization, versioning and persistence, and even uploading generated profiles to [WhyLabs](https://whylabs.ai/).

![Example of the visualizations you can make from Whylogs profiles](https://blog.zenml.io/assets/posts/release_0_6_0/whylogs-visualizer.png)

With our `WhylogsVisualizer`, as described in [the associated example notes](https://github.com/zenml-io/zenml/tree/0.6.0/examples/whylogs), you can visualize Whylogs profiles generated as part of a pipeline.

## ⛩ New Core Architecture

We implemented [some fundamental changes](https://github.com/zenml-io/zenml/pull/305) to the core architecture to solve some of the issues we previously had and provide a more extensible design to support quicker implementations of different stack components and integrations. The main change was to refactor the `Repository`, `Stack` and `StackComponent` architectures. These changes had a pretty wide impact, so they involved changes in many files throughout the codebase, especially in the CLI which makes calls to all these pieces.

We've already seen how it helps us move faster in building integrations and we hope it helps make contributions as pain-free as possible!

## 🗒 Documentation and Example Updates

As the codebase and functionality of ZenML grows, we always want to make sure our documentation is clear, up-to-date and easy to use.
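The profile-as-you-go pattern behind the whylogs integration described above can be illustrated with a self-contained sketch. This is plain Python only, not the real whylogs or ZenML API: a step is wrapped so that a small statistical summary of its output is captured alongside the returned data.

```python
# Illustrative sketch of profiling a step's output as a side effect
# (NOT the real whylogs/ZenML API). PROFILES stands in for a profile
# store or a WhyLabs upload target.
import statistics

PROFILES = []

def with_profile(step_fn):
    def wrapper(*args, **kwargs):
        data = step_fn(*args, **kwargs)
        # Record a tiny "profile" of the artifact next to the data.
        PROFILES.append({
            "step": step_fn.__name__,
            "count": len(data),
            "mean": statistics.mean(data),
            "stdev": statistics.pstdev(data),
        })
        return data
    return wrapper

@with_profile
def importer():
    return [1.0, 2.0, 3.0, 4.0]

values = importer()
```

Note how the step body itself is untouched; the profiling concern lives entirely in the wrapper, which mirrors the "minimal changes to user code" claim above.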
We made a number of changes in this release that will improve your experience in this regard:

- added a number of new explainers on key ZenML concepts and how to use them in your code, notably on [how to create a custom materializer](https://docs.zenml.io/v/0.6.0/guides/index/custom-materializer) and [how to fetch historic pipeline runs](https://docs.zenml.io/v/0.6.0/guides/index/historic-runs) using the `StepContext`
- fixed a number of typos and broken links
- [added versioning](https://github.com/zenml-io/zenml/pull/336) to our API documentation so you can choose to view the reference appropriate to the version that you're using. We now use `mkdocs` for this so you'll notice a slight visual refresh as well.
- added new examples highlighting specific use cases and integrations:
  - how to create a custom materializer ([example](https://github.com/zenml-io/zenml/tree/0.6.0/examples/custom_materializer))
  - how to fetch historical pipeline runs ([example](https://github.com/zenml-io/zenml/tree/0.6.0/examples/fetch_historical_runs))
  - how to use standard interfaces for common ML patterns ([example](https://github.com/zenml-io/zenml/tree/0.6.0/examples/standard_interfaces))
  - `whylogs` logging ([example](https://github.com/zenml-io/zenml/tree/0.6.0/examples/whylogs))

## ➕ Other updates, additions and fixes

As with most releases, we made a number of small but significant fixes and additions. The most important of these is that you can now [access the metadata store](https://github.com/zenml-io/zenml/pull/338) via the step context. This enables a number of new possible workflows and pipeline patterns and we're really excited to have this in the release.

We [added in](https://github.com/zenml-io/zenml/pull/315) a markdown parser for the `zenml example info …` command, so now when you want to use our CLI to learn more about specific examples you will see beautifully parsed text and not markdown markup.
We improved a few of our error messages, too, like for when the return type of a step function [doesn't match the expected type](https://github.com/zenml-io/zenml/pull/322), or if [a step is called twice](https://github.com/zenml-io/zenml/pull/353). We hope this makes ZenML just that little bit easier to use.

# 0.5.7

ZenML 0.5.7 is here :100: and it brings not one, but :fire:TWO:fire: brand new integrations :rocket:! ZenML now supports [MLflow](https://www.mlflow.org/docs/latest/tracking.html) for tracking pipelines as experiments and [Evidently](https://github.com/evidentlyai/evidently) for detecting drift in your ML pipelines in production!

## New Features

* Introducing the [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html) integration, a first step towards our complete MLflow integration as described in the [#115 poll](https://github.com/zenml-io/zenml/discussions/115). Full example found [here](https://github.com/zenml-io/zenml/tree/0.5.7/examples/mlflow).
* Introducing the [Evidently](https://github.com/evidentlyai/evidently) integration. Use the standard [Evidently drift detection step](https://github.com/zenml-io/zenml/blob/0.5.7/src/zenml/integrations/evidently/steps/evidently_profile.py) to calculate drift automatically in your pipeline. Full example found [here](https://github.com/zenml-io/zenml/tree/0.5.7/examples/drift_detection).
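Drift detection of the kind Evidently performs can be sketched minimally. This is an illustrative plain-Python example, not the Evidently API (whose reports are far richer): compare a statistic of the current data against the same statistic on a reference dataset and flag large shifts.

```python
# Minimal conceptual drift check (illustrative only; not Evidently's
# actual profile/report machinery): flag a feature as drifted when its
# mean shifts by more than `threshold` reference standard deviations.
import statistics

def detect_drift(reference, current, threshold=2.0):
    ref_mean = statistics.mean(reference)
    ref_std = statistics.pstdev(reference) or 1e-9  # avoid divide-by-zero
    shift = abs(statistics.mean(current) - ref_mean) / ref_std
    return {"shift": shift, "drift_detected": shift > threshold}

# Current data close to the reference distribution: no drift.
stable = detect_drift([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])
# Current data far from the reference distribution: drift.
drifted = detect_drift([1.0, 2.0, 3.0], [8.0, 9.0, 10.0])
```

In a pipeline, a step like this would sit between serving and retraining, turning "the data changed" into a concrete signal the rest of the pipeline can act on.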
## Bugfixes

* Prevent KFP install timeouts during `stack up` by @stefannica in https://github.com/zenml-io/zenml/pull/299
* Prevent naming parameters same name as inputs/outputs to prevent kwargs-errors by @bcdurak in https://github.com/zenml-io/zenml/pull/300

## What's Changed

* Force pull overwrites local examples without user confirmation by @AlexejPenner in https://github.com/zenml-io/zenml/pull/278
* Updated README with latest features by @htahir1 in https://github.com/zenml-io/zenml/pull/280
* Integration test the examples within ci pipeline by @AlexejPenner in https://github.com/zenml-io/zenml/pull/282
* Add exception for missing system requirements by @kamalesh0406 in https://github.com/zenml-io/zenml/pull/281
* Examples are automatically pulled if not present before any example command is run by @AlexejPenner in https://github.com/zenml-io/zenml/pull/279
* Add pipeline error for passing the same step object twice by @kamalesh0406 in https://github.com/zenml-io/zenml/pull/283
* Create pytest fixture to use a temporary zenml repo in tests by @htahir1 in https://github.com/zenml-io/zenml/pull/287
* Additional example run implementations for standard interfaces, functional and class based api by @AlexejPenner in https://github.com/zenml-io/zenml/pull/286
* Make pull_request.yaml actually use os.runner instead of ubuntu by @htahir1 in https://github.com/zenml-io/zenml/pull/288
* In pytest return to previous workdir before tearing down tmp_dir fixture by @AlexejPenner in https://github.com/zenml-io/zenml/pull/289
* Don't raise an exception during integration installation if system requirement is not installed by @schustmi in https://github.com/zenml-io/zenml/pull/291
* Update starting page for the API docs by @alex-zenml in https://github.com/zenml-io/zenml/pull/294
* Add `stack up` failure prompts by @alex-zenml in https://github.com/zenml-io/zenml/pull/290
* Spelling fixes by @alex-zenml in https://github.com/zenml-io/zenml/pull/295
* Remove instructions to git init from docs by @bcdurak in https://github.com/zenml-io/zenml/pull/293
* Fix the `stack up` and `orchestrator up` failure prompts by @stefannica in https://github.com/zenml-io/zenml/pull/297
* Prevent KFP install timeouts during `stack up` by @stefannica in https://github.com/zenml-io/zenml/pull/299
* Add stefannica to list of internal github users by @stefannica in https://github.com/zenml-io/zenml/pull/303
* Improve KFP UI daemon error messages by @schustmi in https://github.com/zenml-io/zenml/pull/292
* Replaced old diagrams with new ones in the docs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/306
* Fix broken links & text formatting in docs by @alex-zenml in https://github.com/zenml-io/zenml/pull/302
* Run KFP container as local user/group if local by @stefannica in https://github.com/zenml-io/zenml/pull/304
* Add james to github team by @jwwwb in https://github.com/zenml-io/zenml/pull/308
* Implement integration of mlflow tracking by @AlexejPenner in https://github.com/zenml-io/zenml/pull/301
* Bugfix integration tests on windows by @jwwwb in https://github.com/zenml-io/zenml/pull/296
* Prevent naming parameters same name as inputs/outputs to prevent kwargs-errors by @bcdurak in https://github.com/zenml-io/zenml/pull/300
* Add tests for `fileio` by @alex-zenml in https://github.com/zenml-io/zenml/pull/298
* Evidently integration (standard steps and example) by @alex-zenml in https://github.com/zenml-io/zenml/pull/307
* Implemented evidently integration by @stefannica in https://github.com/zenml-io/zenml/pull/310
* Make mlflow example faster by @AlexejPenner in https://github.com/zenml-io/zenml/pull/312

## New Contributors

* @kamalesh0406 made their first contribution in https://github.com/zenml-io/zenml/pull/281
* @stefannica made their first contribution in https://github.com/zenml-io/zenml/pull/297
* @jwwwb made their first contribution in https://github.com/zenml-io/zenml/pull/308

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.5.6...0.5.7

# 0.5.6

This release fixes some known bugs from previous releases and especially 0.5.5. Upgrading to 0.5.6 is therefore a **breaking change**. You must do the following in order to proceed with this version:

```
cd zenml_enabled_repo
rm -rf .zen/
```

And then start again with ZenML init:

```
pip install --upgrade zenml
zenml init
```

## New Features

* Added `zenml example run [EXAMPLE_RUN_NAME]` feature: The ability to run an example with one command. In order to run this, do `zenml example pull` first and see all examples available by running `zenml example list`.
* Added ability to specify a `.dockerignore` file before running pipelines on Kubeflow.
* Kubeflow Orchestrator is now leaner and faster.
* Added the `describe` command group to the CLI for groups `stack`, `orchestrator`, `artifact-store`, and `metadata-store`. E.g. `zenml stack describe`

## Bug fixes and minor improvements

* Adding `StepContext` to a branch now invalidates caching by default. Disable this behavior explicitly with `enable_cache=True`.
* Docs updated to reflect minor changes in CLI commands.
* CLI `list` commands now mention the active component. Try `zenml stack list` to check this out.
* `zenml version` now has cooler art.
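The caching behavior noted above (reuse a step's output only while its definition and parameters are unchanged) can be sketched in plain Python. This is illustrative only, not ZenML's implementation: results are cached under a key derived from a hash of the step's compiled code and its parameters, so editing the step or changing a parameter produces a cache miss.

```python
# Illustrative sketch of step-output caching (NOT ZenML's actual
# implementation): a result is reused only if the step's code and
# parameters hash to a key that has been seen before.
import hashlib
import json

CACHE = {}

def cached_step(fn, **params):
    # Key on the function's compiled bytecode + its parameters, so a
    # code change or a new parameter value invalidates the cache.
    key_material = fn.__name__ + fn.__code__.co_code.hex() + \
        json.dumps(params, sort_keys=True)
    key = hashlib.sha256(key_material.encode()).hexdigest()
    if key in CACHE:
        return CACHE[key], True    # cache hit
    result = fn(**params)
    CACHE[key] = result
    return result, False           # freshly computed

def train(lr):
    return {"model": "trained", "lr": lr}

first, hit1 = cached_step(train, lr=0.1)
second, hit2 = cached_step(train, lr=0.1)    # same code + params
third, hit3 = cached_step(train, lr=0.01)    # changed params
```

This also hints at why a `StepContext` invalidates caching by default: a step that reads external context can change its output without its code or parameters changing, so a key like the one above can no longer be trusted.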
## What's Changed

* Delete blog reference from release notes by @alex-zenml in https://github.com/zenml-io/zenml/pull/228
* Docs updates by @alex-zenml in https://github.com/zenml-io/zenml/pull/229
* Update kubeflow guide by @schustmi in https://github.com/zenml-io/zenml/pull/230
* Updated quickstart to reflect newest zenml version by @alexej-zenml in https://github.com/zenml-io/zenml/pull/231
* Add KFP GCP example readme by @schustmi in https://github.com/zenml-io/zenml/pull/233
* Baris/update docs with class api by @bcdurak in https://github.com/zenml-io/zenml/pull/232
* fixing a small typo [ci skip] by @bcdurak in https://github.com/zenml-io/zenml/pull/236
* Hamza/docs last min updates by @htahir1 in https://github.com/zenml-io/zenml/pull/234
* fix broken links by @alex-zenml in https://github.com/zenml-io/zenml/pull/237
* added one more page for standardized artifacts [ci skip] by @bcdurak in https://github.com/zenml-io/zenml/pull/238
* Unified use of cli_utils.print_table for all table format cli printouts by @AlexejPenner in https://github.com/zenml-io/zenml/pull/240
* Remove unused tfx kubeflow code by @schustmi in https://github.com/zenml-io/zenml/pull/239
* Relaxed typing requirements for cli_utils.print_table by @AlexejPenner in https://github.com/zenml-io/zenml/pull/241
* Pass input artifact types to kubeflow container entrypoint by @schustmi in https://github.com/zenml-io/zenml/pull/242
* Catch duplicate run name error and throw custom exception by @schustmi in https://github.com/zenml-io/zenml/pull/243
* Improved logs by @htahir1 in https://github.com/zenml-io/zenml/pull/244
* CLI active component highlighting by @alex-zenml in https://github.com/zenml-io/zenml/pull/245
* Baris/eng 244 clean up by @bcdurak in https://github.com/zenml-io/zenml/pull/246
* CLI describe command by @alex-zenml in https://github.com/zenml-io/zenml/pull/248
* Alexej/eng 35 run examples from cli by @AlexejPenner in https://github.com/zenml-io/zenml/pull/253
* CLI argument and option flag consistency improvements by @alex-zenml in https://github.com/zenml-io/zenml/pull/250
* Invalidate caching when a step requires a step context by @schustmi in https://github.com/zenml-io/zenml/pull/252
* Implement better error messages for custom step output artifact types by @schustmi in https://github.com/zenml-io/zenml/pull/254
* Small improvements by @schustmi in https://github.com/zenml-io/zenml/pull/251
* Kubeflow dockerignore by @schustmi in https://github.com/zenml-io/zenml/pull/249
* Rename container registry folder to be consistent with the other stack components by @schustmi in https://github.com/zenml-io/zenml/pull/257
* Update todo script by @schustmi in https://github.com/zenml-io/zenml/pull/256
* Update docs following CLI change by @alex-zenml in https://github.com/zenml-io/zenml/pull/255
* Bump mypy version by @schustmi in https://github.com/zenml-io/zenml/pull/258
* Kubeflow Windows daemon alternative by @schustmi in https://github.com/zenml-io/zenml/pull/259
* Run pre commit in local environment by @schustmi in https://github.com/zenml-io/zenml/pull/260
* Hamza/eng 269 move beam out by @htahir1 in https://github.com/zenml-io/zenml/pull/262
* Update docs by @alex-zenml in https://github.com/zenml-io/zenml/pull/261
* Hamza/update readme with contribitions by @htahir1 in https://github.com/zenml-io/zenml/pull/271
* Hamza/eng 256 backoff analytics by @htahir1 in https://github.com/zenml-io/zenml/pull/270
* Add spellcheck by @alex-zenml in https://github.com/zenml-io/zenml/pull/264
* Using the pipeline run name to explicitly access when explaining the … by @AlexejPenner in https://github.com/zenml-io/zenml/pull/263
* Import user main module in kubeflow entrypoint to make sure all components are registered by @schustmi in https://github.com/zenml-io/zenml/pull/273
* Fix cli version command by @schustmi in https://github.com/zenml-io/zenml/pull/272
* User is informed of version mismatch and example pull defaults to cod… by @AlexejPenner in https://github.com/zenml-io/zenml/pull/274
* Hamza/eng 274 telemetry by @htahir1 in https://github.com/zenml-io/zenml/pull/275
* Update docs with right commands and events by @htahir1 in https://github.com/zenml-io/zenml/pull/276
* Fixed type annotation for some python versions by @AlexejPenner in https://github.com/zenml-io/zenml/pull/277

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.5.5...0.5.6

# 0.5.5

ZenML 0.5.5 is jam-packed with new features to take your ML pipelines to the next level. Our three biggest new features: Kubeflow Pipelines, CLI support for our integrations and Standard Interfaces. That's right, Standard Interfaces are back!

## What's Changed

* Implement base component tests by @schustmi in https://github.com/zenml-io/zenml/pull/211
* Add chapter names by @alex-zenml in https://github.com/zenml-io/zenml/pull/212
* Fix docstring error by @alex-zenml in https://github.com/zenml-io/zenml/pull/213
* Hamza/add caching example by @htahir1 in https://github.com/zenml-io/zenml/pull/214
* Update readme by @alex-zenml in https://github.com/zenml-io/zenml/pull/216
* Hamza/add small utils by @htahir1 in https://github.com/zenml-io/zenml/pull/219
* Update docs by @alex-zenml in https://github.com/zenml-io/zenml/pull/220
* Docs fixes by @alex-zenml in https://github.com/zenml-io/zenml/pull/222
* Baris/eng 182 standard interfaces by @bcdurak in https://github.com/zenml-io/zenml/pull/209
* Fix naming error by @alex-zenml in https://github.com/zenml-io/zenml/pull/221
* Remove framework design by @alex-zenml in https://github.com/zenml-io/zenml/pull/224
* Alexej/eng 234 zenml integration install by @alexej-zenml in https://github.com/zenml-io/zenml/pull/223
* Fix deployment section order by @alex-zenml in https://github.com/zenml-io/zenml/pull/225
* the readme of the example by @bcdurak in https://github.com/zenml-io/zenml/pull/227
* Kubeflow integration by @schustmi in https://github.com/zenml-io/zenml/pull/226

## New Contributors

* @alexej-zenml made their first contribution in https://github.com/zenml-io/zenml/pull/223

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.5.4...0.5.5

# 0.5.4

0.5.4 adds a [lineage tracking](https://github.com/zenml-io/zenml/tree/main/examples/lineage) integration to visualize lineage of pipeline runs! It also includes numerous bug fixes and optimizations.

## What's Changed

* Fix typos by @alex-zenml in https://github.com/zenml-io/zenml/pull/192
* Fix Apache Beam bug by @alex-zenml in https://github.com/zenml-io/zenml/pull/194
* Fix apache beam logging bug by @alex-zenml in https://github.com/zenml-io/zenml/pull/195
* Add step context by @schustmi in https://github.com/zenml-io/zenml/pull/196
* Init docstrings by @alex-zenml in https://github.com/zenml-io/zenml/pull/197
* Hamza/small fixes by @htahir1 in https://github.com/zenml-io/zenml/pull/199
* Fix writing to metadata store with airflow orchestrator by @schustmi in https://github.com/zenml-io/zenml/pull/198
* Use pipeline parameter name as step name in post execution by @schustmi in https://github.com/zenml-io/zenml/pull/200
* Add error message when step name is not in metadata store by @schustmi in https://github.com/zenml-io/zenml/pull/201
* Add option to set repo location using an environment variable by @schustmi in https://github.com/zenml-io/zenml/pull/202
* Run cloudbuild after pypi publish by @schustmi in https://github.com/zenml-io/zenml/pull/203
* Refactor component generation by @schustmi in https://github.com/zenml-io/zenml/pull/204
* Removed unnecessary panel dependency by @htahir1 in https://github.com/zenml-io/zenml/pull/206
* Updated README to successively install requirements by @AlexejPenner in https://github.com/zenml-io/zenml/pull/205
* Store active stack in local config by @schustmi in https://github.com/zenml-io/zenml/pull/208
* Hamza/eng 125 lineage tracking vis by @htahir1 in https://github.com/zenml-io/zenml/pull/207

## New Contributors

* @AlexejPenner made their first contribution in https://github.com/zenml-io/zenml/pull/205

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.5.3...0.5.4

# 0.5.3

Version 0.5.3 adds [statistics visualizations](https://github.com/zenml-io/zenml/blob/main/examples/visualizers/statistics/README.md), greatly improved speed for CLI commands as well as lots of small improvements to the pipeline and step interface.

## What's Changed

* Make tests run in a random order by @alex-zenml in https://github.com/zenml-io/zenml/pull/160
* Connect steps using *args by @schustmi in https://github.com/zenml-io/zenml/pull/162
* Move location of repobeats image by @alex-zenml in https://github.com/zenml-io/zenml/pull/163
* Hamza/add sam by @htahir1 in https://github.com/zenml-io/zenml/pull/165
* Pipeline initialization with *args by @schustmi in https://github.com/zenml-io/zenml/pull/164
* Improve detection of third party modules during class resolving by @schustmi in https://github.com/zenml-io/zenml/pull/167
* Merge path_utils into fileio & refactor what was left by @alex-zenml in https://github.com/zenml-io/zenml/pull/168
* Update docker files by @schustmi in https://github.com/zenml-io/zenml/pull/169
* Hamza/deploy api reference by @htahir1 in https://github.com/zenml-io/zenml/pull/171
* API Reference by @schustmi in https://github.com/zenml-io/zenml/pull/172
* Add color back into our github actions by @alex-zenml in https://github.com/zenml-io/zenml/pull/176
* Refactor tests not raising by @alex-zenml in https://github.com/zenml-io/zenml/pull/177
* Improve step and pipeline interface by @schustmi in https://github.com/zenml-io/zenml/pull/175
* Alex/eng 27 windows bug again by @htahir1 in https://github.com/zenml-io/zenml/pull/178
* Automated todo tracking by @schustmi in https://github.com/zenml-io/zenml/pull/173
* Fix mypy issues related to windows by @schustmi in https://github.com/zenml-io/zenml/pull/179
* Include Github URL to TODO comment in issue by @schustmi in https://github.com/zenml-io/zenml/pull/181
* Create Visualizers logic by @htahir1 in https://github.com/zenml-io/zenml/pull/182
* Add README for visualizers examples by @alex-zenml in https://github.com/zenml-io/zenml/pull/184
* Allow None as default value for BaseStep configs by @schustmi in https://github.com/zenml-io/zenml/pull/185
* Baris/eng 37 standard import check by @bcdurak in https://github.com/zenml-io/zenml/pull/183
* Replace duplicated code by call to source_utils.resolve_class by @schustmi in https://github.com/zenml-io/zenml/pull/186
* Remove unused base enum cases by @schustmi in https://github.com/zenml-io/zenml/pull/187
* Testing mocks for CLI `examples` command by @alex-zenml in https://github.com/zenml-io/zenml/pull/180
* Set the correct module for steps created using our decorator by @schustmi in https://github.com/zenml-io/zenml/pull/188
* Fix some cli commands by @schustmi in https://github.com/zenml-io/zenml/pull/189
* Tag jira issues for which the todo was deleted by @schustmi in https://github.com/zenml-io/zenml/pull/190
* Remove deadlinks by @alex-zenml in https://github.com/zenml-io/zenml/pull/191

**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.5.2...0.5.3

# 0.5.2

0.5.2 brings an improved post-execution workflow and lots of minor changes and upgrades for the developer experience when creating pipelines. It also improves the Airflow orchestrator logic to accommodate more real-world scenarios.
## What's Changed * Fix autocomplete for step and pipeline decorated functions by @schustmi in https://github.com/zenml-io/zenml/pull/144 * Add reference docs for CLI example functionality by @alex-zenml in https://github.com/zenml-io/zenml/pull/145 * Fix mypy integration by @schustmi in https://github.com/zenml-io/zenml/pull/147 * Improve Post-Execution Workflow by @schustmi in https://github.com/zenml-io/zenml/pull/146 * Fix CLI examples bug by @alex-zenml in https://github.com/zenml-io/zenml/pull/148 * Update quickstart example notebook by @alex-zenml in https://github.com/zenml-io/zenml/pull/150 * Add documentation images by @alex-zenml in https://github.com/zenml-io/zenml/pull/151 * Add prettierignore to gitignore by @alex-zenml in https://github.com/zenml-io/zenml/pull/154 * Airflow orchestrator improvements by @schustmi in https://github.com/zenml-io/zenml/pull/153 * Google colab added by @htahir1 in https://github.com/zenml-io/zenml/pull/155 * Tests for `core` and `cli` modules by @alex-zenml in https://github.com/zenml-io/zenml/pull/149 * Add Paperspace environment check by @alex-zenml in https://github.com/zenml-io/zenml/pull/156 * Step caching by @schustmi in https://github.com/zenml-io/zenml/pull/157 * Add documentation for pipeline step parameter and run name configuration by @schustmi in https://github.com/zenml-io/zenml/pull/158 * Automatically disable caching if the step function code has changed by @schustmi in https://github.com/zenml-io/zenml/pull/159 **Full Changelog**: https://github.com/zenml-io/zenml/compare/0.5.1...0.5.2 # 0.5.1 0.5.1 builds on top of Slack of the 0.5.0 release with quick bug updates. ## Overview * Pipeline can now be run via a YAML file. #132 * CLI now let's you pull directly from GitHub examples folder. :fire: Amazing @alex-zenml with #141! * ZenML now has full [mypy](http://mypy-lang.org/) compliance. :tada: Thanks @schustmi for #140! * Numerous bugs and performance improvements. 
#136, @bcdurak great job with #142
* Added new docs with a low level API guide. #143

[Our roadmap](https://zenml.hellonext.co/roadmap) goes into further detail on the timeline. Vote on the [next features now](https://github.com/zenml-io/zenml/discussions).

We encourage every user (old or new) to start afresh with this release. Please go over our latest [docs](https://docs.zenml.io) and [examples](examples) to get the hang of the new system.

# 0.5.0

This long-awaited ZenML release marks a seminal moment in the project's history. We present to you a complete revamp of the internals of ZenML, with a fresh new design and API. While these changes are significant, and have been months in the making, the original vision of ZenML has not wavered. We hope that the ZenML community finds the new design choices easier to grasp and use, and we welcome feedback on the [issues board](https://github.com/zenml-io/zenml/issues).

## Warning

0.5.0 is a complete API change from the previous versions of ZenML, and is a *breaking* upgrade. Fundamental concepts have been changed, and therefore backwards compatibility is not maintained. Please use only this version with fresh projects.

With such significant changes, we expect this release to also be breaking. Please report any bugs in the issue board, and they should be addressed in upcoming releases.

## Overview

* Introducing a new functional API for creating pipelines and steps. This is now the default mechanism for building ZenML pipelines. [read more](https://docs.zenml.io/starter-guide/pipelines/pipelines)
* Steps now use Materializers to handle artifact serialization/deserialization between steps. This is a powerful change, and will be expanded upon in the future.
[read more](https://docs.zenml.io/pipelines/materializers)
* Introducing the new `Stack` paradigm: Easily transition from one MLOps stack to the next with a few CLI commands [read more](https://docs.zenml.io/starter-guide/stacks/stacks)
* Introducing a new `Artifact`, `Typing`, and `Annotation` system, with `pydantic` (and `dataclasses`) support [read more](https://docs.zenml.io/getting-started/core-concepts)
* Deprecating the `pipelines_dir`: Now individual pipelines will be stored in their metadata stores, making the metadata store a single source of truth. [read more](https://docs.zenml.io/getting-started/core-concepts)
* Deprecating the YAML config file: ZenML no longer natively compiles to an intermediate YAML-based representation. Instead, it compiles and deploys directly into the selected orchestrator's representation. While we do plan to support running pipelines directly through YAML in the future, it will no longer be the default route through which pipelines are run. [read more about orchestrators here](https://docs.zenml.io/component-gallery/orchestrators/orchestrators)

## Technical Improvements

* A completely new system design, please refer to the [docs](https://docs.zenml.io/getting-started/core-concepts).
* Better type hints and docstrings.
* Auto-completion support.
* Numerous performance improvements and bug fixes, including a smaller dependency footprint.

## What to expect in the next weeks and the new ZenML

Currently, this release is bare bones. We are missing some basic features which used to be part of ZenML 0.3.8 (the previous release):

* Standard interfaces for `TrainingPipeline`.
* Individual step interfaces like `PreprocessorStep`, `TrainerStep`, `DeployerStep` etc. need to be rewritten from within the new paradigm. They should be included in the non-RC version of this release.
* A proper production setup with an orchestrator like Airflow.
* A post-execution workflow to analyze and inspect pipeline runs.
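The materializer concept above can be pictured with a generic sketch. This is not ZenML's actual materializer API; it is a minimal illustration of the underlying idea that each artifact type gets a handler that knows how to write a step's output to the artifact store and read it back for the next step:

```python
import json
import os
import tempfile


class JSONMaterializer:
    """Illustrative materializer: persists a step output between steps as JSON."""

    def __init__(self, artifact_dir: str):
        self.path = os.path.join(artifact_dir, "artifact.json")

    def handle_return(self, obj) -> None:
        # Called after a step runs: serialize the output into the artifact store.
        with open(self.path, "w") as f:
            json.dump(obj, f)

    def handle_input(self):
        # Called before the next step runs: deserialize the artifact.
        with open(self.path) as f:
            return json.load(f)


# Usage sketch: one step returns a dict, the next step reads it back.
with tempfile.TemporaryDirectory() as artifact_dir:
    mat = JSONMaterializer(artifact_dir)
    mat.handle_return({"accuracy": 0.93})
    restored = mat.handle_input()
```

The design point is that steps never touch file paths directly; the orchestrator picks the right materializer for the artifact type.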
* The concept of `Backends` will evolve into a simple mechanism of transitioning individual steps into different runners.
* Support for `KubernetesOrchestrator`, `KubeflowOrchestrator`, `GCPOrchestrator` and `AWSOrchestrator` is also planned.
* Dependency management including Docker support is planned.

[Our roadmap](https://zenml.hellonext.co/roadmap) goes into further detail on the timeline.

We encourage every user (old or new) to start afresh with this release. Please go over our latest [docs](https://docs.zenml.io) and [examples](examples) to get the hang of the new system.

Onwards and upwards to 1.0.0!

# 0.5.0rc2

This long-awaited ZenML release marks a seminal moment in the project's history. We present to you a complete revamp of the internals of ZenML, with a fresh new design and API. While these changes are significant, and have been months in the making, the original vision of ZenML has not wavered. We hope that the ZenML community finds the new design choices easier to grasp and use, and we welcome feedback on the [issues board](https://github.com/zenml-io/zenml/issues).

## Warning

0.5.0rc0 is a complete API change from the previous versions of ZenML, and is a *breaking* upgrade. Fundamental concepts have been changed, and therefore backwards compatibility is not maintained. Please use only this version with fresh projects.

With such significant changes, we expect this release to also be breaking. Please report any bugs in the issue board, and they should be addressed in upcoming releases.

## Overview

* Introducing a new functional API for creating pipelines and steps. This is now the default mechanism for building ZenML pipelines.
[read more](https://docs.zenml.io/starter-guide/pipelines/pipelines)
* Introducing the new `Stack` paradigm: Easily transition from one MLOps stack to the next with a few CLI commands [read more](https://docs.zenml.io/starter-guide/stacks/stacks)
* Introducing a new `Artifact`, `Typing`, and `Annotation` system, with `pydantic` (and `dataclasses`) support [read more](https://docs.zenml.io/getting-started/core-concepts)
* Deprecating the `pipelines_dir`: Now individual pipelines will be stored in their metadata stores, making the metadata store a single source of truth. [read more](https://docs.zenml.io/starter-guide/stacks/stacks)
* Deprecating the YAML config file: ZenML no longer natively compiles to an intermediate YAML-based representation. Instead, it compiles and deploys directly into the selected orchestrator's representation. While we do plan to support running pipelines directly through YAML in the future, it will no longer be the default route through which pipelines are run. [read more about orchestrators here](https://docs.zenml.io/core/stacks)

## Technical Improvements

* A completely new system design, please refer to the [docs](https://docs.zenml.io/component-gallery/orchestrators/orchestrators).
* Better type hints and docstrings.
* Auto-completion support.
* Numerous performance improvements and bug fixes, including a smaller dependency footprint.

## What to expect in the next weeks and the new ZenML

Currently, this release is bare bones. We are missing some basic features which used to be part of ZenML 0.3.8 (the previous release):

* Standard interfaces for `TrainingPipeline`.
* Individual step interfaces like `PreprocessorStep`, `TrainerStep`, `DeployerStep` etc. need to be rewritten from within the new paradigm. They should be included in the non-RC version of this release.
* A proper production setup with an orchestrator like Airflow.
* A post-execution workflow to analyze and inspect pipeline runs.
* The concept of `Backends` will evolve into a simple mechanism of transitioning individual steps into different runners.
* Support for `KubernetesOrchestrator`, `KubeflowOrchestrator`, `GCPOrchestrator` and `AWSOrchestrator` is also planned.
* Dependency management including Docker support is planned.

[Our roadmap](https://zenml.hellonext.co/roadmap) goes into further detail on the timeline.

We encourage every user (old or new) to start afresh with this release. Please go over our latest [docs](https://docs.zenml.io) and [examples](examples) to get the hang of the new system.

Onwards and upwards to 1.0.0!

# 0.3.7.1

This release fixes some known bugs from previous releases and especially 0.3.7. Same procedure as always: please delete existing pipelines, metadata, and artifact stores.

```
cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/
```

And then another ZenML init:

```
pip install --upgrade zenml
cd zenml_enabled_repo
zenml init
```

## New Features

* Introduced the new `zenml example` CLI sub-group: easily pull examples via zenml to check them out.

```bash
zenml example pull # pulls all examples in `zenml_examples` directory
zenml example pull EXAMPLE_NAME # pulls specific example
zenml example info EXAMPLE_NAME # gives quick info regarding example
```

Thanks Michael Xu for the suggestion!

* Updated examples with the new `zenml examples` paradigm for examples.

## Bug Fixes + Refactor

* ZenML now works on Windows -> Thank you @Franky007Bond for the heads up.
* Fixed numerous bugs in the examples directory. Also updated READMEs.
* Fixed remote orchestration logic -> Now remote orchestration works.
* Changed datasource `to_config` to include reference to backend, metadata, and artifact store.

# 0.3.7

0.3.7 is a much-needed, long-awaited, big refactor of the Datasources paradigm of ZenML. There are also bug fixes, improvements, and more!
For those upgrading from an older version of ZenML, we ask that you please delete your old `pipelines` dir and `.zenml` folders and start afresh with a `zenml init`. If only working locally, this is as simple as:

```
cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/
```

And then another ZenML init:

```
pip install --upgrade zenml
cd zenml_enabled_repo
zenml init
```

## New Features

* The inner workings of the `BaseDatasource` have been modified along with the concrete implementations. Now, there is no relation between a `DataStep` and a `Datasource`: a `Datasource` holds all the logic to version and track itself via the new `commit` paradigm.
* Introduced a new interface for datasources, the `process` method, which is responsible for ingesting data and writing to TFRecords to be consumed by later steps.
* Datasource versions (snapshots) can be accessed directly via the `commits` paradigm: every commit is a new version of data.
* Added `JSONDatasource` and `TFRecordsDatasource`.

## Bug Fixes + Refactor

A big thanks to our new contributor @aak7912 for the help in this release with issue #71 and PR #75.

* Added an example for [regression](https://github.com/zenml-io/zenml/tree/main/examples/regression).
* `compare_training_runs()` now takes an optional `datasource` parameter to filter by datasource.
* `Trainer` interface refined to focus on `run_fn` rather than other helper functions.
* New docs released with a streamlined vision and coherent storyline: https://docs.zenml.io
* Got rid of an unnecessary Torch dependency in the base ZenML version.

# 0.3.6

0.3.6 is a more inwards-facing release as part of a bigger effort to create a more flexible ZenML. As a first step, ZenML now supports arbitrary splits for all components natively, freeing us from the `train/eval` split paradigm.
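The commit paradigm described above (every commit is a new version of data) can be pictured as content-addressed versioning. This is a conceptual sketch only, not ZenML's real `BaseDatasource`; the class and method names are illustrative:

```python
import hashlib


class VersionedDatasource:
    """Conceptual sketch of a commit-style datasource."""

    def __init__(self):
        self._commits = {}  # commit id -> data snapshot

    def commit(self, snapshot: bytes) -> str:
        # Version the data by content: identical snapshots map to the
        # same commit id, so re-committing unchanged data is a no-op.
        commit_id = hashlib.sha256(snapshot).hexdigest()[:12]
        self._commits[commit_id] = snapshot
        return commit_id

    @property
    def commits(self):
        # Every commit id is an accessible version of the data.
        return list(self._commits)

    def get(self, commit_id: str) -> bytes:
        return self._commits[commit_id]
```

The design point is that the datasource versions and tracks itself, instead of a separate step being responsible for data versioning.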
Here is an overview of changes:

## New Features

* The inner workings of the `BaseTrainerStep`, `BaseEvaluatorStep` and the `BasePreprocessorStep` have been modified along with their respective components to work with the new split_mapping. Now, users can define arbitrary splits (not just train/eval). E.g. doing a `train/eval/test` split is possible.
* Within the instance of a `TrainerStep`, the user has access to `input_patterns` and `output_patterns` which provide the required uris with respect to their splits for the input and output (test_results) examples.
* The built-in trainers are modified to work with the new changes.

## Bug Fixes + Refactor

A big thanks to our new super supporter @zyfzjsc988 for most of the feedback that led to bug fixes and enhancements for this release:

* #63: Now one can specify on which ports ZenML opens its add-on applications.
* #64: Now there is a way to list integrations with the following code:

```
from zenml.utils.requirements_utils import list_integrations

list_integrations()
```

* Fixed #61: `view_anomalies()` breaking in the quickstart.
* Analytics is now `opt-in` by default, to get rid of the unnecessary prompt at `zenml init`. Users can still freely `opt-out` by using the CLI:

```
zenml config analytics opt-out
```

Again, the telemetry data is fully anonymized and just used to improve the product. Read more [here](https://docs.zenml.io/misc/usage-analytics)

# 0.3.5

## New Features

* Added a new interface into the trainer step called [`test_fn`]() which is utilized to produce model predictions and save them as test results
* Implemented a new evaluator step called [`AgnosticEvaluator`]() which is designed to work regardless of the model type as long as you run the `test_fn` in your trainer step
* The first two changes allow torch trainer steps to be followed by an agnostic evaluator step, see the example [here]().
* Proposed a new naming scheme, which is now integrated into the built-in steps, in order to make it easier to handle feature/label names
* Implemented a new adapted version of 2 TFX components, namely the [`Trainer`]() and the [`Evaluator`](), to allow the aforementioned changes to take place
* Modified the [`TorchFeedForwardTrainer`]() to showcase how to use TensorBoard in conjunction with PyTorch

## Bug Fixes + Refactor

* Refactored how ZenML treats relative imports for custom steps. Now:

```python
```

* Updated the [Scikit Example](https://github.com/zenml-io/zenml/tree/main/examples/scikit), [PyTorch Lightning Example](https://github.com/zenml-io/zenml/tree/main/examples/pytorch_lightning), and [GAN Example](https://github.com/zenml-io/zenml/tree/main/examples/gan) accordingly. Now they should work according to their READMEs.

Big shout out to @SarahKing92 in issue #34 for raising the above issues!

# 0.3.4

This release is a big design change and refactor. It involves a significant change in the Configuration file structure, meaning this is a **breaking upgrade**.

For those upgrading from an older version of ZenML, we ask that you please delete your old `pipelines` dir and `.zenml` folders and start afresh with a `zenml init`. If only working locally, this is as simple as:

```
cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/
```

And then another ZenML init:

```
pip install --upgrade zenml
cd zenml_enabled_repo
zenml init
```

## New Features

* Introduced another higher-level pipeline: the [NLPPipeline](https://github.com/zenml-io/zenml/blob/main/zenml/pipelines/nlp_pipeline.py). This is a generic NLP pipeline for a text-datasource based training task.
Full example of how to use the NLPPipeline can be found [here](https://github.com/zenml-io/zenml/tree/main/examples/nlp)
* Introduced a [BaseTokenizerStep](https://github.com/zenml-io/zenml/blob/main/zenml/steps/tokenizer/base_tokenizer.py) as a simple mechanism to define how to train and encode using any generic tokenizer (again for NLP-based tasks).

## Bug Fixes + Refactor

* Significant change to imports: Now imports are way simpler and user-friendly. E.g. instead of:

```python
from zenml.core.pipelines.training_pipeline import TrainingPipeline
```

A user can simply do:

```python
from zenml.pipelines import TrainingPipeline
```

The caveat is of course that this might involve a re-write of older ZenML code imports.

Note: Future releases are also expected to be breaking. Until announced, please expect that upgrading ZenML versions may cause older-ZenML generated pipelines to behave unexpectedly.
cloned_public_repos/zenml/release-cloudbuild.yaml
steps:
  # build client base image - python 3.9
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.9 \
          --target client \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml:$TAG_NAME-py3.9

        # use latest tags only for official releases
        if [[ $TAG_NAME =~ ^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$ ]]; then
          docker tag $$USERNAME/zenml:$TAG_NAME-py3.9 $$USERNAME/zenml:py3.9
        fi
    id: build-base-3.9
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build client base image - python 3.10
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.10 \
          --target client \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml:$TAG_NAME-py3.10

        # use latest tags only for official releases
        if [[ $TAG_NAME =~ ^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$ ]]; then
          docker tag $$USERNAME/zenml:$TAG_NAME-py3.10 $$USERNAME/zenml:py3.10
        fi
    id: build-base-3.10
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build client base image - python 3.11
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.11 \
          --target client \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml:$TAG_NAME-py3.11 \
          -t $$USERNAME/zenml:$TAG_NAME

        # use latest tags only for official releases
        if [[ $TAG_NAME =~ ^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$ ]]; then
          docker tag $$USERNAME/zenml:$TAG_NAME-py3.11 $$USERNAME/zenml:py3.11
          docker tag $$USERNAME/zenml:$TAG_NAME-py3.11 $$USERNAME/zenml:latest
        fi
    id: build-base-3.11
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build client base image - python 3.12
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.12 \
          --target client \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml:$TAG_NAME-py3.12

        # use latest tags only for official releases
        if [[ $TAG_NAME =~ ^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$ ]]; then
          docker tag $$USERNAME/zenml:$TAG_NAME-py3.12 $$USERNAME/zenml:py3.12
        fi
    id: build-base-3.12
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build server image - python 3.11 only
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.11 \
          --target server \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml-server:$TAG_NAME

        # use latest tags only for official releases
        if [[ $TAG_NAME =~ ^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$ ]]; then
          docker tag $$USERNAME/zenml-server:$TAG_NAME $$USERNAME/zenml-server:latest
        fi
    id: build-server
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME

  # login to Dockerhub
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker login --username=$$USERNAME --password=$$PASSWORD
    id: docker-login
    entrypoint: bash
    secretEnv:
      - USERNAME
      - PASSWORD

  # push base images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push --all-tags $$USERNAME/zenml
    id: push-base
    waitFor:
      - docker-login
      - build-base-3.9
      - build-base-3.10
      - build-base-3.11
      - build-base-3.12
    entrypoint: bash
    secretEnv:
      - USERNAME

  # push server images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push --all-tags $$USERNAME/zenml-server
    id: push-server
    waitFor:
      - build-server
      - docker-login
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build client quickstart gcp image - python 3.11
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.11 \
          --build-arg CLOUD_PROVIDER=gcp \
          -f docker/zenml-quickstart.Dockerfile . \
          -t $$USERNAME/zenml-public-pipelines:quickstart-$TAG_NAME-py3.11-gcp
    id: build-quickstart-3.11-gcp
    waitFor: [ 'push-base' ]
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build client quickstart aws image - python 3.11
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.11 \
          --build-arg CLOUD_PROVIDER=aws \
          -f docker/zenml-quickstart.Dockerfile . \
          -t $$USERNAME/zenml-public-pipelines:quickstart-$TAG_NAME-py3.11-aws
    id: build-quickstart-3.11-aws
    waitFor: [ 'push-base' ]
    entrypoint: bash
    secretEnv:
      - USERNAME

  # build client quickstart azure image - python 3.11
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.11 \
          --build-arg CLOUD_PROVIDER=azure \
          -f docker/zenml-quickstart.Dockerfile . \
          -t $$USERNAME/zenml-public-pipelines:quickstart-$TAG_NAME-py3.11-azure
    id: build-quickstart-3.11-azure
    waitFor: [ 'push-base' ]
    entrypoint: bash
    secretEnv:
      - USERNAME

  # push quickstart images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push --all-tags $$USERNAME/zenml-public-pipelines
    id: push-quickstart
    waitFor:
      - build-quickstart-3.11-gcp
      - build-quickstart-3.11-aws
      - build-quickstart-3.11-azure
      - docker-login
    entrypoint: bash
    secretEnv:
      - USERNAME

timeout: 3600s

availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/docker-password/versions/1
      env: PASSWORD
    - versionName: projects/$PROJECT_ID/secrets/docker-username/versions/1
      env: USERNAME
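The build steps above gate the `latest`/`pyX.Y` tags on a bash regex that accepts only plain `X.Y.Z` tags (so pre-releases like `0.5.0rc2` never become `latest`). The same pattern can be checked in Python; the helper name here is illustrative:

```python
import re

# Same pattern as the `if [[ $TAG_NAME =~ ... ]]` guard in the build steps:
# three dot-separated numbers with no leading zeros and no suffix.
RELEASE_TAG = re.compile(r"^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$")


def is_official_release(tag: str) -> bool:
    """True for tags like 0.5.3 or 10.0.1; False for 0.5.0rc2 or 1.02.3."""
    return RELEASE_TAG.match(tag) is not None
```

Note that the anchored groups reject leading zeros (`1.02.3`) as well as any pre-release suffix, which is exactly what keeps release candidates off the `latest` tags.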
cloned_public_repos/zenml/.coderabbit.yaml
language: "en"
early_access: false
reviews:
  high_level_summary: true
  poem: true
  review_status: true
  collapse_walkthrough: true
  path_filters:
    - "!**/.xml"
    - "!**/.json"
  path_instructions:
    - path: "src/zenml/**/*.py"
      instructions: "Review the Python code for conformity with Python best practices."
    - path: "docs/**/*.md"
      instructions: "Review the documentation for readability and clarity."
    - path: "tests/**/*.py"
      instructions: |
        "Assess the unit test code employing the PyTest testing framework. Confirm that:
        - The tests adhere to PyTest's established best practices.
        - Test descriptions are sufficiently detailed to clarify the purpose of each test."
  auto_review:
    enabled: false
    ignore_title_keywords:
      - "WIP"
      - "DO NOT MERGE"
    drafts: false
    base_branches:
      - "develop"
chat:
  auto_reply: true
cloned_public_repos/zenml/CONTRIBUTING.md
# πŸ§‘β€πŸ’» Contributing to ZenML A big welcome and thank you for considering contributing to ZenML! It’s people like you that make it a reality for users in our community. Reading and following these guidelines will help us make the contribution process easy and effective for everyone involved. It also communicates that you agree to respect the developers' time management and develop these open-source projects. In return, we will reciprocate that respect by reading your issue, assessing changes, and helping you finalize your pull requests. ## ⚑️ Quicklinks - [πŸ§‘β€πŸ’» Contributing to ZenML](#-contributing-to-zenml) - [⚑️ Quicklinks](#-quicklinks) - [πŸ§‘β€βš–οΈ Code of Conduct](#-code-of-conduct) - [πŸ›« Getting Started](#-getting-started) - [⁉️ Issues](#-issues) - [🏷 Pull Requests: When to make one](#-pull-requests-when-to-make-one) - [πŸ’― Pull Requests: Workflow to Contribute](#-pull-requests-workflow-to-contribute) - [🧱 Pull Requests: Rebase on develop](#-pull-requests-rebase-your-branch-on-develop) - [🧐 Linting, formatting, and tests](#-linting-formatting-and-tests) - [🚨 Reporting a Vulnerability](#-reporting-a-vulnerability) - [Coding Conventions](#coding-conventions) - [πŸ‘· Creating a new Integration](#-creating-a-new-integration) - [πŸ†˜ Getting Help](#-getting-help) ## πŸ§‘β€βš–οΈ Code of Conduct We take our open-source community seriously and hold ourselves and other contributors to high standards of communication. By participating and contributing to this project, you agree to uphold our [Code of Conduct](https://github.com/zenml-io/zenml/blob/master/CODE-OF-CONDUCT.md) . ## πŸ›« Getting Started Contributions are made to this repo via Issues and Pull Requests (PRs). A few general guidelines that cover both: - To report security vulnerabilities, please get in touch at [support@zenml.io](mailto:support@zenml.io), monitored by our security team. - Search for existing Issues and PRs before creating your own. 
- We work hard to make sure issues are handled on time, but it could take a while to investigate the root cause depending on the impact. A friendly ping in the comment thread to the submitter or a contributor can help draw attention if your issue is blocking.

### Good First Issues for New Contributors

The best way to start is to check the [`good-first-issue`](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22) label on the issue board. The core team creates these issues as necessary smaller tasks that you can work on to get deeper into ZenML internals. These should generally require relatively simple changes, probably affecting just one or two files, which we think makes them ideal for people new to ZenML.

The next step after that would be to look at the [`good-second-issue`](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+second+issue%22) label on the issue board. These are a bit more complex and might involve more files, but should still be well-defined and achievable for people relatively new to ZenML.

### ⁉️ Issues

Issues should be used to report problems with the library, request a new feature, or discuss potential changes before a PR is created. When you create a new Issue, a template will be loaded that will guide you through collecting and providing the information we need to investigate.

If you find an Issue that addresses your problem, please add your own reproduction information to the existing issue rather than creating a new one. Adding a [reaction](https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) can also help by indicating to our maintainers that a particular issue is affecting more than just the reporter.

### 🏷 Pull Requests: When to make one

Pull Requests (PRs) to ZenML are always welcome and can be a quick way to get your fix or improvement slated for the next release.
In general, PRs should:

- Only fix/add the functionality in question **OR** address widespread whitespace/style issues, not both.
- Add unit or integration tests for fixed or changed functionality (if a test suite already exists).
- Address a single concern in as few changed lines as possible.
- Include documentation in the repo or in your Pull Request.
- Be accompanied by a filled-out Pull Request template (loaded automatically when a PR is created).

For changes that address core functionality or would require breaking changes (e.g. a major release), it's best to open an Issue to discuss your proposal first. This is not required but can save time creating and reviewing changes.

### πŸ’― Pull Requests: Workflow to Contribute

<p class="callout warning">Please note that development in ZenML happens off of the <b>develop</b> branch, <b>not main</b>, which is the default branch on GitHub. Therefore, please pay particular attention to step 5 and step 9 below.</p>

In general, we follow the ["fork-and-pull" Git workflow](https://github.com/susam/gitpr):

1. Review and sign the [Contributor License Agreement](https://cla-assistant.io/zenml-io/zenml) (CLA).
2. Fork the repository to your own GitHub account.
3. Clone the project to your machine.
4. Check out the **develop** branch: `git checkout develop`.
5. Create a branch (again, off of the develop branch) locally with a succinct but descriptive name.
6. Commit changes to the branch.
7. Follow the `Linting, formatting, and tests` guide to make sure your code adheres to the ZenML coding style (see below).
8. Push changes to your fork.
9. Open a PR in our repository (to the `develop` branch, **NOT** `main`) and follow the PR template so that we can efficiently review the changes.

### 🧱 Pull Requests: Rebase Your Branch on Develop

1. When making pull requests to ZenML, you should always make your changes on a branch that is based on `develop`.
   You can create a new branch based on `develop` by running the following command:

   ```
   git checkout -b <new-branch-name> develop
   ```

2. Fetch the latest changes from the remote `develop` branch:

   ```
   git fetch origin develop
   ```

3. Switch to your branch:

   ```
   git checkout <your-branch-name>
   ```

4. Rebase your branch on `develop`:

   ```
   git rebase origin/develop
   ```

   This will apply your branch's changes on top of the latest changes in `develop`, one commit at a time.

5. Resolve any conflicts that may arise during the rebase. Git will notify you if there are any conflicts that need to be resolved. Use a text editor to manually resolve the conflicts in the affected files.

6. After resolving the conflicts, stage the changes:

   ```
   git add .
   ```

7. Continue the rebase; if further conflicts come up, go back to step 5:

   ```
   git rebase --continue
   ```

8. Push the rebased branch to your remote repository:

   ```
   git push origin --force <your-branch-name>
   ```

9. Open a pull request targeting the `develop` branch. The changes from your rebased branch will now be based on the latest `develop` branch.

### 🧐 Linting, formatting, and tests

To install ZenML from your local checked-out files including all core dev-dependencies, run:

```
pip install -e ".[server,dev]"
```

Optionally, you might want to run the following commands to ensure you have all integrations for `mypy` checks:

```
zenml integration install -y -i feast
pip install click~=8.0.3
mypy --install-types
```

Warning: This might take a while for both (~15 minutes each, depending on your machine); however, if you have time, please run it as it will make the next commands error-free.

Note that the `zenml integration install` command might also fail on account of dependency conflicts, so you can just install the specific integration you're working on and manually run the mypy command for the files you've been working on.
You can now run the following scripts to automatically format your code and to check whether the code formatting, linting, docstrings, and spelling are in order:

```
bash scripts/format.sh
bash scripts/run-ci-checks.sh
```

If you're on Windows, you might have to run the formatting script as `bash scripts/format.sh --no-yamlfix` and run the yamlfix command separately as `yamlfix .github -v`.

Tests can be run as follows:

```
bash scripts/test-coverage-xml.sh
```

Please note that it is good practice to run the above commands before submitting any Pull Request: the CI GitHub Action will run them anyway, so you might as well catch the errors locally!

### 🚨 Reporting a Vulnerability

Please refer to [our security / reporting instructions](./SECURITY.md) for details on reporting vulnerabilities.

## Coding Conventions

The code within the repository is structured in the following way - the most relevant places for contributors are highlighted with a `<-` arrow:

```
β”œβ”€β”€ .github           -- Definition of the GH action workflows
β”œβ”€β”€ docker            -- Dockerfiles used to build ZenML docker images
β”œβ”€β”€ docs              <- The ZenML docs, CLI docs and API docs live here
β”‚   β”œβ”€β”€ book          <- In case you make user facing changes, update docs here
β”‚   └── mkdocs        -- Some configurations for the API/CLI docs
β”œβ”€β”€ examples          <- When adding an integration, add an example here
β”œβ”€β”€ scripts           -- Scripts used by Github Actions or for local linting/testing
β”œβ”€β”€ src/zenml         <- The heart of ZenML
β”‚   β”œβ”€β”€ <stack_component> <- Each stack component has its own directory
β”‚   β”œβ”€β”€ cli           <- Change and improve the CLI here
β”‚   β”œβ”€β”€ config        -- The ZenML config methods live here
β”‚   β”œβ”€β”€ integrations  <- Add new integrations here
β”‚   β”œβ”€β”€ io            -- File operation implementations
β”‚   β”œβ”€β”€ materializers <- Materializers responsible for reading/writing artifacts
β”‚   β”œβ”€β”€ pipelines     <- The base pipeline and its decorator
β”‚   β”œβ”€β”€ services      -- Code responsible for managing services
β”‚   β”œβ”€β”€ stack         <- Stack, Stack Components and the flavor registry
β”‚   β”œβ”€β”€ steps         <- Steps and their decorators are defined here
β”‚   β”œβ”€β”€ utils         <- Collection of useful utils
β”‚   β”œβ”€β”€ zen_server    -- Code for running the Zen Server
β”‚   └── zen_stores    -- Code for storing stacks in multiple settings
└── test              <- Don't forget to write unit tests for your code
```

## πŸ‘· Creating a new Integration

In case you want to create an entirely new integration that you would like to see supported by ZenML, there are a few steps that you should follow:

1. Create the actual integration. Check out the [Integrations README](src/zenml/integrations/README.md) for detailed step-by-step instructions.
2. Create an example of how to use the integration. Check out the [Examples README](examples/README.md) to find out what to do.
3. All integrations deserve to be documented. Make sure to pay a visit to the [Component Guide](https://docs.zenml.io/stack-components/component-guide) in the docs and add your implementations.

## πŸ†˜ Getting Help

Join us in the [ZenML Slack Community](https://zenml.io/slack-invite/) to interact directly with the core team and community at large. This is a good place to ideate, discuss concepts, or ask for help.
cloned_public_repos/zenml/zen-dev
#!/usr/bin/env python
# Copyright (c) ZenML GmbH 2023. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#       https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
"""CLI callable through `./zen-dev ....`.

This CLI will serve as a general interface for all convenience
functions during development.
"""
import sys

from scripts.verify_flavor_url_valid import cli

if __name__ == "__main__":
    sys.exit(cli())
cloned_public_repos/zenml/alembic.ini
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = src/zenml/zen_stores/migrations

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:alembic/versions

# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os  # Use os.pathsep. Default configuration used for new projects.

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
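A note on the doubled `%%` in the `file_template` option above: ini files read through Python's `configparser` treat `%` as an interpolation character, so `%%` is needed to leave a literal `%` behind for the printf-style tokens that alembic fills in later. A small stdlib sketch of that two-pass behavior (the token values below are made up for illustration):

```python
import configparser

# The doubled %% escapes configparser's own '%' interpolation,
# leaving single-% printf-style tokens for a later rendering pass.
cfg = configparser.ConfigParser()
cfg.read_string(
    "[alembic]\n"
    "file_template = %%(year)d_%%(month).2d_%%(day).2d_"
    "%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s\n"
)

# First pass (configparser): %% collapses to %.
template = cfg["alembic"]["file_template"]
print(template)
# %(year)d_%(month).2d_%(day).2d_%(hour).2d%(minute).2d-%(rev)s_%(slug)s

# Second pass: filling the tokens the way a migration filename is rendered
# (illustrative values, not real alembic revision data).
filename = template % {
    "year": 2023, "month": 5, "day": 7, "hour": 14, "minute": 30,
    "rev": "ab12cd", "slug": "add_table",
}
print(filename)  # 2023_05_07_1430-ab12cd_add_table
```

This is why uncommenting `file_template` in the ini file must keep the `%%` escapes intact.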
cloned_public_repos/zenml/.dockerignore
*
!/README.md
!/pyproject.toml
!/src/**
!/scripts
!/tests
!/.trivyignore
!/trivy-secret.yaml
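This `.dockerignore` uses an allowlist style: exclude everything with `*`, then re-include specific paths with `!` exceptions, where the last matching pattern wins. The sketch below loosely simulates that evaluation order with `fnmatch` (it is not Docker's exact matcher, which applies Go's `filepath.Match` per path segment; the patterns are a simplified subset of the file above):

```python
from fnmatch import fnmatch

# (pattern, is_exclusion) pairs in file order; False entries model '!' lines.
RULES = [
    ("*", True),              # exclude everything...
    ("README.md", False),     # ...then re-include specific paths
    ("pyproject.toml", False),
    ("src/*", False),
    ("scripts", False),
    ("tests", False),
]


def is_ignored(path: str) -> bool:
    """Last matching rule wins, mirroring .dockerignore evaluation order."""
    ignored = False
    for pattern, excludes in RULES:
        if fnmatch(path, pattern):
            ignored = excludes
    return ignored


print(is_ignored("setup.py"))             # True: only '*' matches
print(is_ignored("README.md"))            # False: re-included later
print(is_ignored("src/zenml/client.py"))  # False: re-included via src/*
```

The practical effect is a minimal Docker build context: only the listed files and directories are sent to the daemon.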
cloned_public_repos/zenml/SECURITY.md
# 🚨 Reporting a Vulnerability

If you think you have found a vulnerability, and even if you are not sure about it, please report it right away by sending an email to: [security@zenml.io](mailto:security@zenml.io?subject=Security%20Vulnerability%20Found). Please try to be as explicit as possible, describing all the steps and example code to reproduce the security issue.

We will review it thoroughly and get back to you.

Please refrain from publicly discussing a potential security vulnerability as this could potentially put our users at risk! It's better to discuss privately and give us a chance to find a solution first, to limit the potential impact as much as possible.
cloned_public_repos/zenml/LICENSE
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing
permissions and limitations under the License.
cloned_public_repos/zenml/release-cloudbuild-nightly.yaml
steps:
  # build client base image - python 3.10
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.10 \
          --build-arg ZENML_NIGHTLY=true \
          --target client \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml:$TAG_NAME-py3.10-nightly

        # no need to check for official release regex, this is for nightly builds
        docker tag $$USERNAME/zenml:$TAG_NAME-py3.10-nightly $$USERNAME/zenml:py3.10-nightly
    id: build-base-3.10-nightly
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME
  # build server image - python 3.11 only
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - |
        docker build \
          --build-arg ZENML_VERSION=$TAG_NAME \
          --build-arg PYTHON_VERSION=3.11 \
          --build-arg ZENML_NIGHTLY=true \
          --target server \
          -f docker/base.Dockerfile . \
          -t $$USERNAME/zenml-server:$TAG_NAME-nightly
    id: build-server-nightly
    waitFor: ['-']
    entrypoint: bash
    secretEnv:
      - USERNAME
  # login to Dockerhub
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker login --username=$$USERNAME --password=$$PASSWORD
    id: docker-login
    entrypoint: bash
    secretEnv:
      - USERNAME
      - PASSWORD
  # push base images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push --all-tags $$USERNAME/zenml
    id: push-base
    waitFor:
      - docker-login
      - build-base-3.10-nightly
    entrypoint: bash
    secretEnv:
      - USERNAME
  # push server images
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - docker push --all-tags $$USERNAME/zenml-server
    id: push-server
    waitFor:
      - docker-login
      - build-server-nightly
    entrypoint: bash
    secretEnv:
      - USERNAME
timeout: 3600s
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/docker-password/versions/1
      env: PASSWORD
    - versionName: projects/$PROJECT_ID/secrets/docker-username/versions/1
      env: USERNAME
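A note on the `$$` sigils in this Cloud Build config: Cloud Build substitutes variables like `$TAG_NAME` and `$PROJECT_ID` itself before a step runs, while `$$` escapes the dollar sign so that a literal `$USERNAME` survives to bash, which then expands it from the step's `secretEnv`. Python's stdlib `string.Template` happens to use the same `$$` escape rule, which makes it a convenient way to illustrate the first pass (purely illustrative; the tag value below is made up, and this is not how Cloud Build is implemented):

```python
from string import Template

step = Template("docker tag $$USERNAME/zenml:$TAG_NAME-py3.10-nightly")

# Cloud Build's substitution pass: $TAG_NAME is filled in, $$ collapses to $.
after_substitution = step.substitute(TAG_NAME="0.55.0")
print(after_substitution)
# docker tag $USERNAME/zenml:0.55.0-py3.10-nightly
# bash then expands the remaining $USERNAME from the step's secretEnv.
```

This two-stage expansion is why the Docker Hub username and password are written `$$USERNAME`/`$$PASSWORD` throughout, but the build tag is written `$TAG_NAME`.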
cloned_public_repos/zenml/.test_durations
{ "tests/integration/examples/test_deepchecks.py::test_example": 37.76593694500002, "tests/integration/examples/test_evidently.py::test_example": 44.578816048000135, "tests/integration/examples/test_facets.py::test_example": 17.93815276400005, "tests/integration/examples/test_great_expectations.py::test_example": 36.67548225600001, "tests/integration/examples/test_huggingface.py::test_sequence_classification": 82.02338253799985, "tests/integration/examples/test_lightgbm.py::test_example": 18.183318441000097, "tests/integration/examples/test_mlflow_deployment.py::test_example": 1.913912411999945, "tests/integration/examples/test_mlflow_registry.py::test_example": 0.22638852199997928, "tests/integration/examples/test_mlflow_tracking.py::test_example": 33.82489725000005, "tests/integration/examples/test_neural_prophet.py::test_example": 73.7382085820002, "tests/integration/examples/test_pytorch.py::test_example": 26.653799964999962, "tests/integration/examples/test_scipy.py::test_example": 18.994561089999706, "tests/integration/examples/test_seldon.py::test_example": 0.01343574400016223, "tests/integration/examples/test_sklearn.py::test_example": 18.073298817000023, "tests/integration/examples/test_slack.py::test_example": 0.047125661000109176, "tests/integration/examples/test_tensorflow.py::test_example": 0.000539110000090659, "tests/integration/examples/test_whylogs.py::test_example": 22.653405770000063, "tests/integration/examples/test_xgboost.py::test_example": 18.751801107999995, "tests/integration/functional/artifacts/test_artifact_config.py::test_artifacts_linked_from_cache_steps": 5.722613175999868, "tests/integration/functional/artifacts/test_artifact_config.py::test_artifacts_linked_from_cache_steps_same_id": 5.5699576369997885, "tests/integration/functional/artifacts/test_artifact_config.py::test_link_minimalistic": 3.485321890999785, "tests/integration/functional/artifacts/test_artifact_config.py::test_link_multiple_named_outputs": 2.9625088689999757, 
"tests/integration/functional/artifacts/test_artifact_config.py::test_link_multiple_named_outputs_with_mixed_linkage": 4.224968328000614, "tests/integration/functional/artifacts/test_artifact_config.py::test_link_multiple_named_outputs_with_self_context_and_caching": 5.240725193000344, "tests/integration/functional/artifacts/test_artifact_config.py::test_link_multiple_named_outputs_without_links": 3.0135965639997266, "tests/integration/functional/artifacts/test_utils.py::test_log_artifact_metadata_existing": 1.1619692589999886, "tests/integration/functional/artifacts/test_utils.py::test_log_artifact_metadata_multi_output": 2.392889625999942, "tests/integration/functional/artifacts/test_utils.py::test_log_artifact_metadata_raises_error_if_output_name_unclear": 2.2631211459997758, "tests/integration/functional/artifacts/test_utils.py::test_log_artifact_metadata_single_output": 2.3480959419998726, "tests/integration/functional/artifacts/test_utils.py::test_save_load_artifact_in_run": 8.640999665000209, "tests/integration/functional/artifacts/test_utils.py::test_save_load_artifact_outside_run": 1.1636797720000231, "tests/integration/functional/cli/test_artifact.py::test_artifact_list": 2.502585106999959, "tests/integration/functional/cli/test_artifact.py::test_artifact_prune": 2.7513724519999414, "tests/integration/functional/cli/test_artifact.py::test_artifact_update": 2.544811963000029, "tests/integration/functional/cli/test_artifact.py::test_artifact_version_list": 2.514056116999882, "tests/integration/functional/cli/test_artifact.py::test_artifact_version_update": 3.554075184000112, "tests/integration/functional/cli/test_base.py::test_clean_user_config": 0.9525632080001287, "tests/integration/functional/cli/test_base.py::test_init_creates_from_templates[e2e_batch]": 1.5916799869999068, "tests/integration/functional/cli/test_base.py::test_init_creates_from_templates[nlp]": 1.911210925999967, 
"tests/integration/functional/cli/test_base.py::test_init_creates_from_templates[starter]": 1.643989041000168, "tests/integration/functional/cli/test_base.py::test_init_creates_zen_folder": 0.019772161999981108, "tests/integration/functional/cli/test_cli.py::test_ZenMLCLI_formatter": 0.0013243239998246281, "tests/integration/functional/cli/test_cli.py::test_cli": 0.02635108700019373, "tests/integration/functional/cli/test_cli.py::test_cli_command_defines_a_cli_group": 0.0013053229999968607, "tests/integration/functional/cli/test_cli.py::test_cli_does_not_set_custom_source_root_if_inside_repository": 0.18090874100016663, "tests/integration/functional/cli/test_cli.py::test_cli_sets_custom_source_root_if_outside_of_repository": 0.9302926819998447, "tests/integration/functional/cli/test_config.py::test_analytics_opt_in_amends_global_config": 0.014308160999917163, "tests/integration/functional/cli/test_config.py::test_analytics_opt_out_amends_global_config": 0.013978155999893715, "tests/integration/functional/cli/test_config.py::test_set_logging_verbosity_stops_when_not_real_level[abc]": 0.002338442000109353, "tests/integration/functional/cli/test_config.py::test_set_logging_verbosity_stops_when_not_real_level[my_cat_is_called_aria]": 0.0022420420002617902, "tests/integration/functional/cli/test_config.py::test_set_logging_verbosity_stops_when_not_real_level[pipeline123]": 0.003105256000026202, "tests/integration/functional/cli/test_formatter.py::test_measure_table": 0.002004536000185908, "tests/integration/functional/cli/test_formatter.py::test_write_zen_dl": 0.0015430280000146013, "tests/integration/functional/cli/test_hub.py::test_hub_list": 1.0860935200000768, "tests/integration/functional/cli/test_integration.py::test_integration_get_requirements_all": 0.015991992999715876, "tests/integration/functional/cli/test_integration.py::test_integration_get_requirements_inexistent_integration[123]": 0.002594347000012931, 
"tests/integration/functional/cli/test_integration.py::test_integration_get_requirements_inexistent_integration[Anti-Tensorflow]": 0.0021564399999078887, "tests/integration/functional/cli/test_integration.py::test_integration_get_requirements_inexistent_integration[zenflow]": 0.0021408400000382244, "tests/integration/functional/cli/test_integration.py::test_integration_get_requirements_specific_integration": 0.0038329710000652994, "tests/integration/functional/cli/test_integration.py::test_integration_install_all": 0.004980690000138566, "tests/integration/functional/cli/test_integration.py::test_integration_install_inexistent_integration[123]": 0.00375236800005041, "tests/integration/functional/cli/test_integration.py::test_integration_install_inexistent_integration[Anti-Tensorflow]": 0.004052474999980404, "tests/integration/functional/cli/test_integration.py::test_integration_install_inexistent_integration[zenflow]": 0.0046310860000176035, "tests/integration/functional/cli/test_integration.py::test_integration_install_multiple_integrations": 0.0042998789999728615, "tests/integration/functional/cli/test_integration.py::test_integration_install_specific_integration[airflow]": 0.0052596970001559384, "tests/integration/functional/cli/test_integration.py::test_integration_install_specific_integration[sklearn]": 0.004452580999895872, "tests/integration/functional/cli/test_integration.py::test_integration_install_specific_integration[tensorflow]": 0.0043452789998355, "tests/integration/functional/cli/test_integration.py::test_integration_list": 1.1199886009999318, "tests/integration/functional/cli/test_integration.py::test_integration_requirements_exporting": 0.004271278000032908, "tests/integration/functional/cli/test_integration.py::test_integration_uninstall_all": 0.01065949499979979, "tests/integration/functional/cli/test_integration.py::test_integration_uninstall_inexistent_integration[123]": 0.0037088680001033936, 
"tests/integration/functional/cli/test_integration.py::test_integration_uninstall_inexistent_integration[Anti-Tensorflow]": 0.003562567000017225, "tests/integration/functional/cli/test_integration.py::test_integration_uninstall_inexistent_integration[zenflow]": 0.0036093660000915406, "tests/integration/functional/cli/test_integration.py::test_integration_uninstall_specific_integration[airflow]": 0.011568811999950412, "tests/integration/functional/cli/test_integration.py::test_integration_uninstall_specific_integration[sklearn]": 0.008025246999977753, "tests/integration/functional/cli/test_integration.py::test_integration_uninstall_specific_integration[tensorflow]": 0.007863443000360348, "tests/integration/functional/cli/test_model.py::test_model_create_full_names": 3.2758175740000297, "tests/integration/functional/cli/test_model.py::test_model_create_only_required": 3.174627945999646, "tests/integration/functional/cli/test_model.py::test_model_create_short_names": 4.055899303999922, "tests/integration/functional/cli/test_model.py::test_model_create_without_required_fails": 3.9470225580000715, "tests/integration/functional/cli/test_model.py::test_model_delete_found": 3.2105255939998187, "tests/integration/functional/cli/test_model.py::test_model_delete_not_found": 4.513233660999958, "tests/integration/functional/cli/test_model.py::test_model_list": 3.2393953599998895, "tests/integration/functional/cli/test_model.py::test_model_update": 3.262766443000146, "tests/integration/functional/cli/test_model.py::test_model_version_delete_found": 3.2907305999999608, "tests/integration/functional/cli/test_model.py::test_model_version_delete_not_found": 3.157172151000168, "tests/integration/functional/cli/test_model.py::test_model_version_links_list[data_artifacts]": 3.193657131000009, "tests/integration/functional/cli/test_model.py::test_model_version_links_list[deployment_artifacts]": 3.1936714569999367, 
"tests/integration/functional/cli/test_model.py::test_model_version_links_list[model_artifacts]": 3.305575039999667, "tests/integration/functional/cli/test_model.py::test_model_version_links_list[runs]": 3.2299188760002835, "tests/integration/functional/cli/test_model.py::test_model_version_list": 3.2711113319999185, "tests/integration/functional/cli/test_model.py::test_model_version_list_fails_on_bad_model": 3.247708474999854, "tests/integration/functional/cli/test_model.py::test_model_version_update": 4.008187265999823, "tests/integration/functional/cli/test_model_registry.py::test_get_model": 0.0076479370002289215, "tests/integration/functional/cli/test_model_registry.py::test_get_model_version": 0.014453755999966234, "tests/integration/functional/cli/test_model_registry.py::test_list_model_versions": 0.0086355530002038, "tests/integration/functional/cli/test_model_registry.py::test_list_models": 0.006482714999947348, "tests/integration/functional/cli/test_model_registry.py::test_update_model": 0.005061190000105853, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_delete": 0.9584230880000177, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_doesnt_write_output_file_if_no_build_needed": 4.007570977999876, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_list": 0.9750501810003698, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_with_config_file": 3.9542361389999314, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_with_different_stack": 3.236543438999888, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_with_nonexistent_name_fails": 2.920209014999955, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_without_repo": 3.165552203999823, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_build_writes_output_file": 3.181407643000057, 
"tests/integration/functional/cli/test_pipeline.py::test_pipeline_delete": 2.568359958000201, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_list": 2.523850872999901, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_registration_with_repo": 2.053968299999724, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_registration_without_repo": 2.020588890999761, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_delete": 2.572276681999938, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_list": 2.612890040000366, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_with_config_file": 3.3496300499998597, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_with_custom_build_file": 3.3882004840002082, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_with_custom_build_id": 4.195035306999898, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_with_different_stack": 3.3970386670000607, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_with_invalid_build_id_fails": 3.194453009999961, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_with_nonexistent_name_fails": 2.050805601000093, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_run_without_repo": 3.274575709999908, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_schedule_delete": 3.3812733639999806, "tests/integration/functional/cli/test_pipeline.py::test_pipeline_schedule_list": 2.5042106290002266, "tests/integration/functional/cli/test_secret.py::test_create_fails_with_bad_scope": 0.010127882000233512, "tests/integration/functional/cli/test_secret.py::test_create_secret": 0.04605052700003398, "tests/integration/functional/cli/test_secret.py::test_create_secret_with_scope": 0.049897896000175024, "tests/integration/functional/cli/test_secret.py::test_create_secret_with_values": 
0.06124520000003031, "tests/integration/functional/cli/test_secret.py::test_delete_secret_works": 0.0514310230000774, "tests/integration/functional/cli/test_secret.py::test_export_import_secret": 0.09734294800023235, "tests/integration/functional/cli/test_secret.py::test_get_secret_with_prefix_works": 0.05343265899978178, "tests/integration/functional/cli/test_secret.py::test_get_secret_with_scope_works": 0.05969927199976155, "tests/integration/functional/cli/test_secret.py::test_get_secret_works": 0.05229664000012235, "tests/integration/functional/cli/test_secret.py::test_list_secret_works": 0.0493179859997781, "tests/integration/functional/cli/test_secret.py::test_rename_secret_works": 0.09772115400005532, "tests/integration/functional/cli/test_secret.py::test_update_secret_works": 0.21188300499989055, "tests/integration/functional/cli/test_server.py::test_server_cli_up_down": 34.51465690500004, "tests/integration/functional/cli/test_stack.py::test_delete_stack_default_stack_fails": 1.0022619789997407, "tests/integration/functional/cli/test_stack.py::test_delete_stack_recursively_with_flag_succeeds": 1.0854003739998461, "tests/integration/functional/cli/test_stack.py::test_delete_stack_with_flag_succeeds": 1.689133282000057, "tests/integration/functional/cli/test_stack.py::test_describe_stack_bad_input_fails[abc_def]": 0.015018565000218587, "tests/integration/functional/cli/test_stack.py::test_describe_stack_bad_input_fails[my_other_cat_is_called_blupus]": 0.01111583600004451, "tests/integration/functional/cli/test_stack.py::test_describe_stack_bad_input_fails[stack123]": 0.011008803000095213, "tests/integration/functional/cli/test_stack.py::test_describe_stack_contains_local_stack": 0.007313133999787169, "tests/integration/functional/cli/test_stack.py::test_remove_component_core_component_fails": 1.019084373999931, "tests/integration/functional/cli/test_stack.py::test_remove_component_from_nonexistent_stack_fails": 0.9320257159999983, 
"tests/integration/functional/cli/test_stack.py::test_remove_component_non_core_component_succeeds": 1.09418248399993, "tests/integration/functional/cli/test_stack.py::test_rename_stack_active_stack_succeeds": 1.0091419500001848, "tests/integration/functional/cli/test_stack.py::test_rename_stack_default_stack_fails": 0.9422308570001405, "tests/integration/functional/cli/test_stack.py::test_rename_stack_new_name_with_existing_name_fails": 0.9435772970000471, "tests/integration/functional/cli/test_stack.py::test_rename_stack_non_active_stack_succeeds": 1.0033496810001452, "tests/integration/functional/cli/test_stack.py::test_rename_stack_nonexistent_stack_fails": 0.9293013359997531, "tests/integration/functional/cli/test_stack.py::test_stack_export": 0.9511159140001837, "tests/integration/functional/cli/test_stack.py::test_stack_export_delete_import": 1.186461449000035, "tests/integration/functional/cli/test_stack.py::test_stack_export_import_reuses_components": 1.1255334790000688, "tests/integration/functional/cli/test_stack.py::test_update_stack_active_stack_succeeds": 1.0390111579997665, "tests/integration/functional/cli/test_stack.py::test_update_stack_adding_component_succeeds": 1.0748913489999268, "tests/integration/functional/cli/test_stack.py::test_update_stack_adding_to_default_stack_fails": 1.0354215590000422, "tests/integration/functional/cli/test_stack.py::test_update_stack_nonexistent_stack_fails": 0.9742871210000885, "tests/integration/functional/cli/test_stack.py::test_update_stack_update_on_default_fails": 0.9711079939997944, "tests/integration/functional/cli/test_stack.py::test_updating_non_active_stack_succeeds": 1.0287943369999084, "tests/integration/functional/cli/test_stack_components.py::test_delete_default_component_fails": 0.9482397119995767, "tests/integration/functional/cli/test_stack_components.py::test_remove_attribute_component_non_existent_attributes_fail": 0.950428915999737, 
"tests/integration/functional/cli/test_stack_components.py::test_remove_attribute_component_nonexistent_component_fails": 0.9482611110001926, "tests/integration/functional/cli/test_stack_components.py::test_remove_attribute_component_required_attribute_fails": 0.9980963200000588, "tests/integration/functional/cli/test_stack_components.py::test_remove_attribute_component_succeeds": 1.023023091999903, "tests/integration/functional/cli/test_stack_components.py::test_remove_labels": 1.0030655119996936, "tests/integration/functional/cli/test_stack_components.py::test_rename_stack_component_nonexistent_component_fails": 0.9497674709998591, "tests/integration/functional/cli/test_stack_components.py::test_rename_stack_component_to_preexisting_name_fails": 0.9751402839999628, "tests/integration/functional/cli/test_stack_components.py::test_renaming_core_component_succeeds": 1.0100411390001227, "tests/integration/functional/cli/test_stack_components.py::test_renaming_default_component_fails": 0.9919488019997971, "tests/integration/functional/cli/test_stack_components.py::test_renaming_non_core_component_succeeds": 1.0057766370000536, "tests/integration/functional/cli/test_stack_components.py::test_set_labels_on_register": 0.9889630479997322, "tests/integration/functional/cli/test_stack_components.py::test_set_labels_on_update": 0.9944682570001078, "tests/integration/functional/cli/test_stack_components.py::test_update_stack_component_for_nonexistent_component_fails": 0.9538115810003092, "tests/integration/functional/cli/test_stack_components.py::test_update_stack_component_succeeds": 1.0134355030002098, "tests/integration/functional/cli/test_stack_components.py::test_update_stack_component_with_name_or_uuid_fails": 0.989550958000109, "tests/integration/functional/cli/test_stack_components.py::test_update_stack_component_with_non_configured_property_fails": 0.9676224419995378, "tests/integration/functional/cli/test_tag.py::test_tag_create_full_names": 0.022605210999927294, 
"tests/integration/functional/cli/test_tag.py::test_tag_create_only_required": 0.02324612100005652, "tests/integration/functional/cli/test_tag.py::test_tag_create_short_names": 0.02300841699980083, "tests/integration/functional/cli/test_tag.py::test_tag_create_without_required_fails": 0.00489808800011815, "tests/integration/functional/cli/test_tag.py::test_tag_delete_found": 0.01925194900036331, "tests/integration/functional/cli/test_tag.py::test_tag_delete_not_found": 0.0062028119998558395, "tests/integration/functional/cli/test_tag.py::test_tag_list": 0.07635788699985824, "tests/integration/functional/cli/test_tag.py::test_tag_update": 0.06470977399999356, "tests/integration/functional/cli/test_user_management.py::test_create_user_that_exists_fails": 0.2612506680000024, "tests/integration/functional/cli/test_user_management.py::test_create_user_with_password_succeeds": 0.26029265300007864, "tests/integration/functional/cli/test_user_management.py::test_delete_default_user_fails": 0.0073763349998898775, "tests/integration/functional/cli/test_user_management.py::test_delete_user_succeeds": 0.29623030800007655, "tests/integration/functional/cli/test_user_management.py::test_update_default_user_metadata_succeeds": 0.012948036000125285, "tests/integration/functional/cli/test_user_management.py::test_update_default_user_name_fails": 0.007198732000006203, "tests/integration/functional/cli/test_user_management.py::test_update_user_with_new_email_succeeds": 0.27146515599997656, "tests/integration/functional/cli/test_user_management.py::test_update_user_with_new_full_name_succeeds": 0.27034843500018724, "tests/integration/functional/cli/test_user_management.py::test_update_user_with_new_name_succeeds": 0.2717049610000686, "tests/integration/functional/cli/test_utils.py::test_converting_structured_str_to_dict": 0.0023799440000402683, "tests/integration/functional/cli/test_utils.py::test_error_raises_exception": 0.0014610279998805709, 
"tests/integration/functional/cli/test_utils.py::test_file_expansion_works": 0.0025212469998905362, "tests/integration/functional/cli/test_utils.py::test_get_package_information_works": 0.0021317409998573567, "tests/integration/functional/cli/test_utils.py::test_parsing_name_and_arguments": 0.001305322999883174, "tests/integration/functional/cli/test_utils.py::test_parsing_unknown_component_attributes": 0.0012623229999917385, "tests/integration/functional/cli/test_utils.py::test_validate_keys": 0.002623548999963532, "tests/integration/functional/cli/test_version.py::test_version_outputs_running_version_number": 0.007147431999783294, "tests/integration/functional/model/test_model_version.py::TestModel::test_create_model_version_makes_proper_tagging": 1.0956598780003333, "tests/integration/functional/model/test_model_version.py::TestModel::test_deletion_of_links[False]": 4.895070915999895, "tests/integration/functional/model/test_model_version.py::TestModel::test_deletion_of_links[True]": 5.915976372000159, "tests/integration/functional/model/test_model_version.py::TestModel::test_init_stage_logic": 0.9170593989997542, "tests/integration/functional/model/test_model_version.py::TestModel::test_link_artifact_via_function": 9.944256607999705, "tests/integration/functional/model/test_model_version.py::TestModel::test_link_artifact_via_save_artifact": 8.279441914999552, "tests/integration/functional/model/test_model_version.py::TestModel::test_metadata_logging": 1.1543337029997929, "tests/integration/functional/model/test_model_version.py::TestModel::test_metadata_logging_functional": 1.1701669359995321, "tests/integration/functional/model/test_model_version.py::TestModel::test_metadata_logging_in_steps": 2.5810546660004547, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_config_differs_from_db_warns": 1.0024100709997583, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_create_model_and_version": 
1.0488818189996891, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_created_with_warning": 0.966104615000404, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_exists": 0.9648359610000625, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_fetch_model_and_version_by_number": 1.0063748050001777, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_fetch_model_and_version_by_number_not_found": 0.9634731400001328, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_fetch_model_and_version_by_stage": 1.0142643759995735, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_fetch_model_and_version_by_stage_not_found": 0.9800801679998585, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_fetch_model_and_version_latest": 1.0146396730006018, "tests/integration/functional/model/test_model_version.py::TestModel::test_model_version_config_differs_from_db_warns": 1.0874289989997123, "tests/integration/functional/model/test_model_version.py::TestModel::test_recovery_flow": 0.9953197450004154, "tests/integration/functional/model/test_model_version.py::TestModel::test_tags_properly_created": 0.9933063110001967, "tests/integration/functional/model/test_model_version.py::TestModel::test_tags_properly_updated": 1.3527128369996717, "tests/integration/functional/model/test_model_version.py::TestModel::test_that_artifacts_are_not_linked_to_models_outside_of_the_context": 4.322964287000104, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_create_model_version_makes_proper_tagging": 0.8974301159996685, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_deletion_of_links[False]": 5.806348885000261, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_deletion_of_links[True]": 
5.022944135999751, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_init_stage_logic": 0.7383732329999475, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_metadata_logging": 0.8898821490001865, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_metadata_logging_functional": 1.028141828999651, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_metadata_logging_in_steps": 1.8727879669995673, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_config_differs_from_db_warns": 0.8953153600004953, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_create_model_and_version": 0.9073972389996925, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_created_with_warning": 0.759970577999411, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_exists": 0.8844886970000516, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_fetch_model_and_version_by_number": 0.9301667969994014, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_fetch_model_and_version_by_number_not_found": 0.7746619610006746, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_fetch_model_and_version_by_stage": 0.827261591000024, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_fetch_model_and_version_by_stage_not_found": 0.8236998370002766, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_fetch_model_and_version_latest": 0.7821355020005285, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_model_version_config_differs_from_db_warns": 1.0592227449997154, 
"tests/integration/functional/model/test_model_version.py::TestModelVersion::test_recovery_flow": 0.8021877130004214, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_tags_properly_created": 0.7675582029996804, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_tags_properly_updated": 1.376793110998733, "tests/integration/functional/model/test_model_version.py::TestModelVersion::test_that_artifacts_are_not_linked_to_models_outside_of_the_context": 4.3979607259998375, "tests/integration/functional/models/test_artifact.py::test_artifact_step_run_linkage": 4.3628059939999275, "tests/integration/functional/models/test_artifact.py::test_artifact_tagging": 3.023466597999686, "tests/integration/functional/models/test_artifact.py::test_artifact_versioning": 8.24613314300018, "tests/integration/functional/models/test_artifact.py::test_artifact_versioning_duplication": 6.1456308049998825, "tests/integration/functional/models/test_artifact.py::test_custom_artifact_name": 2.3556658070001504, "tests/integration/functional/models/test_artifact.py::test_default_artifact_name": 2.9994298179999532, "tests/integration/functional/models/test_artifact.py::test_disabling_artifact_metadata": 10.763572024999803, "tests/integration/functional/models/test_artifact.py::test_disabling_artifact_visualization": 10.605466356000079, "tests/integration/functional/models/test_artifact.py::test_load_artifact_visualization": 2.304982773999882, "tests/integration/functional/models/test_artifact.py::test_multi_output_artifact_names": 2.4001710340000955, "tests/integration/functional/models/test_pipeline.py::test_pipeline_run_linkage": 10.514613569000176, "tests/integration/functional/models/test_pipeline_run.py::test_pipeline_run_artifacts": 7.13547150699992, "tests/integration/functional/models/test_pipeline_run.py::test_pipeline_run_has_client_and_orchestrator_environment": 2.5719043299998248, 
"tests/integration/functional/models/test_pipeline_run.py::test_scheduled_pipeline_run_has_schedule_id": 2.5044267030000356, "tests/integration/functional/models/test_step_run.py::test_disabling_step_logs": 11.011093259000063, "tests/integration/functional/models/test_step_run.py::test_step_run_has_docstring": 2.5077602920002846, "tests/integration/functional/models/test_step_run.py::test_step_run_has_source_code": 3.370865112000047, "tests/integration/functional/models/test_step_run.py::test_step_run_linkage": 3.691438830999914, "tests/integration/functional/models/test_step_run.py::test_step_run_parent_steps_linkage": 3.3080677039999955, "tests/integration/functional/models/test_step_run.py::test_step_run_with_too_long_docstring_is_truncated": 2.8218574259999514, "tests/integration/functional/models/test_step_run.py::test_step_run_with_too_long_source_code_is_truncated": 2.8452983039999253, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_fails_with_pipeline_parameters_on_conflict_with_pipeline_parameters": 0.13278002700008074, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_fails_with_pipeline_parameters_on_conflict_with_step_parameters": 1.2314241050000874, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_not_overridden_for_extra": 3.062536284000089, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_not_overridden_for_model": 2.4963125199999467, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_not_overridden_for_model_version": 2.2279090570000335, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_not_warns_on_new_value": 2.2399504260001777, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_works_with_pipeline_parameters": 
1.3944359439999516, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_config_from_file_works_with_pipeline_parameters_on_conflict_with_default_parameters": 1.4017153980000785, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_with_model_from_yaml": 3.9317949999999655, "tests/integration/functional/pipelines/test_pipeline_config.py::test_pipeline_with_model_version_from_yaml": 3.3078972839985, "tests/integration/functional/pipelines/test_pipeline_context.py::test_pipeline_context": 1.2018538619995525, "tests/integration/functional/pipelines/test_pipeline_context.py::test_pipeline_context_available_as_config_yaml": 0.004250577999755478, "tests/integration/functional/pipelines/test_pipeline_context.py::test_pipeline_context_can_load_model_artifacts_and_metadata_in_lazy_mode": 3.0506267490000027, "tests/integration/functional/pipelines/test_pipeline_context.py::test_that_argument_as_get_artifact_of_model_in_pipeline_context_fails_if_not_found": 4.398020258000088, "tests/integration/functional/pipelines/test_pipeline_context.py::test_that_argument_as_get_artifact_of_model_version_in_pipeline_context_fails_if_not_found": 2.890680204000091, "tests/integration/functional/pipelines/test_pipeline_context.py::test_that_argument_can_be_a_get_artifact_of_model_in_pipeline_context": 4.061323698000251, "tests/integration/functional/pipelines/test_pipeline_context.py::test_that_argument_can_be_a_get_artifact_of_model_version_in_pipeline_context": 4.176267209999423, "tests/integration/functional/steps/test_external_artifact.py::test_external_artifact_by_id": 4.43389475999993, "tests/integration/functional/steps/test_external_artifact.py::test_external_artifact_by_name_and_version": 7.06729097199991, "tests/integration/functional/steps/test_external_artifact.py::test_external_artifact_by_name_only": 4.746002260000068, "tests/integration/functional/steps/test_external_artifact.py::test_external_artifact_by_value": 
2.322968876999994, "tests/integration/functional/steps/test_model_version.py::test_create_new_version_only_in_pipeline": 5.041434881999976, "tests/integration/functional/steps/test_model_version.py::test_create_new_version_only_in_step": 4.295798766000189, "tests/integration/functional/steps/test_model_version.py::test_create_new_versions_both_pipeline_and_step": 4.88058153799966, "tests/integration/functional/steps/test_model_version.py::test_model_passed_to_step_context_and_switches": 3.135234529000172, "tests/integration/functional/steps/test_model_version.py::test_model_passed_to_step_context_via_pipeline": 2.455294302999846, "tests/integration/functional/steps/test_model_version.py::test_model_passed_to_step_context_via_step": 2.4911644349999733, "tests/integration/functional/steps/test_model_version.py::test_model_passed_to_step_context_via_step_and_pipeline": 2.4559699980000005, "tests/integration/functional/steps/test_model_version.py::test_model_version_passed_to_step_context_and_switches": 2.3592089600024337, "tests/integration/functional/steps/test_model_version.py::test_model_version_passed_to_step_context_via_pipeline": 1.8863527530011197, "tests/integration/functional/steps/test_model_version.py::test_model_version_passed_to_step_context_via_step": 2.0089393099997324, "tests/integration/functional/steps/test_model_version.py::test_model_version_passed_to_step_context_via_step_and_pipeline": 1.8476394680010344, "tests/integration/functional/steps/test_model_version.py::test_multiple_definitions_create_new_version_warns[Configuration in pipeline only - not warns.]": 2.6884162349997496, "tests/integration/functional/steps/test_model_version.py::test_multiple_definitions_create_new_version_warns[Configuration in step only - not warns.]": 3.509240274000149, "tests/integration/functional/steps/test_model_version.py::test_multiple_definitions_create_new_version_warns[Pipeline and one of the steps ask to create new versions - warning to keep it in one 
place.]": 2.9222934539998278, "tests/integration/functional/steps/test_model_version.py::test_multiple_definitions_create_new_version_warns[Pipeline with one step, which overrides model - warns that pipeline conf is useless.]": 2.543846257000041, "tests/integration/functional/steps/test_model_version.py::test_multiple_definitions_create_new_version_warns[Pipeline with one step, which overrides model_version - warns that pipeline conf is useless.]": 2.0887765379993652, "tests/integration/functional/steps/test_model_version.py::test_multiple_definitions_create_new_version_warns[Two steps ask to create new versions - warning to keep it in one place.]": 2.9407093379995786, "tests/integration/functional/steps/test_model_version.py::test_pipeline_context_pass_artifact_from_model_and_link_run": 5.736179741000342, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_mixed_context[Multiple steps pipeline (declarative+functional)]": 5.98215966700036, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_mixed_context[Multiple steps pipeline (declarative+functional+step+pipeline)]": 9.583252412000093, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_mixed_context[Single step pipeline (declarative+functional)]": 5.177251830999921, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_mixed_context[Single step pipeline (declarative+functional+step+pipeline)]": 6.291753861999496, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_pipeline_context[Multiple steps pipeline]": 4.973972935000347, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_pipeline_context[Single step pipeline]": 5.0872974740000245, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_step_context[Multiple 
steps pipeline]": 5.0181257659996845, "tests/integration/functional/steps/test_model_version.py::test_pipeline_run_link_attached_from_step_context[Single step pipeline]": 4.232454770000004, "tests/integration/functional/steps/test_model_version.py::test_recovery_of_steps[custom_running_name]": 7.935608918000526, "tests/integration/functional/steps/test_model_version.py::test_recovery_of_steps[default_running_name]": 6.799407892999625, "tests/integration/functional/steps/test_model_version.py::test_that_artifact_is_removed_on_deletion": 2.821937491999961, "tests/integration/functional/steps/test_model_version.py::test_that_consumption_also_registers_run_in_model": 6.865526518000024, "tests/integration/functional/steps/test_model_version.py::test_that_consumption_also_registers_run_in_model_version": 5.916077644000325, "tests/integration/functional/steps/test_model_version.py::test_that_if_some_steps_request_new_version_but_cached_new_version_is_still_created": 4.906100304000574, "tests/integration/functional/steps/test_model_version.py::test_that_pipeline_run_is_removed_on_deletion_of_pipeline": 2.7639731179997398, "tests/integration/functional/steps/test_model_version.py::test_that_pipeline_run_is_removed_on_deletion_of_pipeline_run": 3.737926896999852, "tests/integration/functional/steps/test_step_context.py::test_input_artifacts_property": 1.3951349440001195, "tests/integration/functional/steps/test_step_context.py::test_materializer_can_access_step_context": 1.389032133000228, "tests/integration/functional/steps/test_step_context.py::test_step_can_access_step_context": 1.2705255629998646, "tests/integration/functional/steps/test_utils.py::test_log_step_metadata_using_latest_run": 2.4885686559998703, "tests/integration/functional/steps/test_utils.py::test_log_step_metadata_using_specific_params": 2.5214940140001545, "tests/integration/functional/steps/test_utils.py::test_log_step_metadata_within_step": 3.2401540440000645, 
"tests/integration/functional/test_client.py::TestArtifact::test_prune_data_and_version": 1.0312374050001836, "tests/integration/functional/test_client.py::TestArtifact::test_prune_full": 1.059768227999939, "tests/integration/functional/test_client.py::TestArtifact::test_prune_only_artifact_version": 1.0481208149999475, "tests/integration/functional/test_client.py::TestModel::test_create_model_duplicate_fail": 0.9860255500000221, "tests/integration/functional/test_client.py::TestModel::test_create_model_pass": 1.0056150440000238, "tests/integration/functional/test_client.py::TestModel::test_delete_model_found": 1.0023462840003958, "tests/integration/functional/test_client.py::TestModel::test_delete_model_not_found": 0.9329158240000197, "tests/integration/functional/test_client.py::TestModel::test_get_model_found": 0.97603436899999, "tests/integration/functional/test_client.py::TestModel::test_get_model_not_found": 0.9295481219996873, "tests/integration/functional/test_client.py::TestModel::test_latest_version_retrieval": 1.0119087690000015, "tests/integration/functional/test_client.py::TestModel::test_list_by_tags": 1.0388558459997057, "tests/integration/functional/test_client.py::TestModel::test_name_is_mutable": 0.960228620999942, "tests/integration/functional/test_client.py::TestModel::test_update_model": 1.0568262720003077, "tests/integration/functional/test_client.py::TestModelVersion::test_create_model_version_duplicate_fails": 1.022245281000096, "tests/integration/functional/test_client.py::TestModelVersion::test_create_model_version_pass": 1.113280266999709, "tests/integration/functional/test_client.py::TestModelVersion::test_delete_model_version_found": 0.9925337799998033, "tests/integration/functional/test_client.py::TestModelVersion::test_delete_model_version_not_found": 0.9767925810001543, "tests/integration/functional/test_client.py::TestModelVersion::test_get_by_latest": 1.1209780930000761, 
"tests/integration/functional/test_client.py::TestModelVersion::test_get_by_stage": 2.0348678870000185, "tests/integration/functional/test_client.py::TestModelVersion::test_get_model_version_by_id_found": 1.0102850890000354, "tests/integration/functional/test_client.py::TestModelVersion::test_get_model_version_by_index_found": 1.0038095749998774, "tests/integration/functional/test_client.py::TestModelVersion::test_get_model_version_by_name_found": 0.9984719750000295, "tests/integration/functional/test_client.py::TestModelVersion::test_get_model_version_by_stage_found": 1.0414944530002685, "tests/integration/functional/test_client.py::TestModelVersion::test_get_model_version_by_stage_not_found": 1.0194185540003673, "tests/integration/functional/test_client.py::TestModelVersion::test_get_model_version_not_found": 0.9994861669999864, "tests/integration/functional/test_client.py::TestModelVersion::test_list_model_version": 2.2563411120002, "tests/integration/functional/test_client.py::TestModelVersion::test_name_and_description_is_mutable": 1.0307814650000182, "tests/integration/functional/test_client.py::TestModelVersion::test_stage_not_found": 1.0046130900000207, "tests/integration/functional/test_client.py::TestModelVersion::test_update_model_version": 1.2454892390001078, "tests/integration/functional/test_client.py::test_activating_a_stack_updates_the_config_file": 1.0299360849999175, "tests/integration/functional/test_client.py::test_activating_nonexisting_stack_fails": 0.9252242649995424, "tests/integration/functional/test_client.py::test_basic_crud_for_entity[code_repository]": 1.019507988999976, "tests/integration/functional/test_client.py::test_basic_crud_for_entity[flavor]": 0.9802645439999651, "tests/integration/functional/test_client.py::test_basic_crud_for_entity[stack]": 1.0749406120000913, "tests/integration/functional/test_client.py::test_basic_crud_for_entity[stack_component]": 1.062393572000019, 
"tests/integration/functional/test_client.py::test_basic_crud_for_entity[user]": 1.0916302069999801, "tests/integration/functional/test_client.py::test_basic_crud_for_entity[workspace]": 1.091735318000019, "tests/integration/functional/test_client.py::test_create_run_metadata_for_artifact": 2.4831037290000495, "tests/integration/functional/test_client.py::test_create_run_metadata_for_pipeline_run": 2.466284693999796, "tests/integration/functional/test_client.py::test_create_run_metadata_for_pipeline_run_and_component": 2.514252596000233, "tests/integration/functional/test_client.py::test_create_run_metadata_for_step_run": 2.4883987200000774, "tests/integration/functional/test_client.py::test_create_run_metadata_for_step_run_and_component": 2.543550716000027, "tests/integration/functional/test_client.py::test_create_secret_default_scope": 0.0290293330001532, "tests/integration/functional/test_client.py::test_create_secret_existing_name_different_scope": 0.05312767400027951, "tests/integration/functional/test_client.py::test_create_secret_existing_name_scope": 0.03535044800014475, "tests/integration/functional/test_client.py::test_create_secret_existing_name_user_scope": 0.03479183700005706, "tests/integration/functional/test_client.py::test_create_secret_user_scope": 0.030525360000183355, "tests/integration/functional/test_client.py::test_creating_repository_instance_during_step_execution": 0.0024875449996670795, "tests/integration/functional/test_client.py::test_deleting_builds": 0.9697282770000584, "tests/integration/functional/test_client.py::test_deleting_deployments": 0.9842411429999629, "tests/integration/functional/test_client.py::test_deregistering_a_non_active_stack": 1.0232783609999387, "tests/integration/functional/test_client.py::test_deregistering_a_stack_component_in_stack_fails": 1.9370258139999805, "tests/integration/functional/test_client.py::test_deregistering_a_stack_component_that_is_part_of_a_registered_stack": 0.9681426480001392, 
"tests/integration/functional/test_client.py::test_deregistering_the_active_stack": 0.9538613829997757, "tests/integration/functional/test_client.py::test_finding_repository_directory_with_explicit_path": 0.9814666930001295, "tests/integration/functional/test_client.py::test_freshly_initialized_repo_attributes": 0.020984880000014527, "tests/integration/functional/test_client.py::test_get_run": 2.5070294609997745, "tests/integration/functional/test_client.py::test_get_run_fails_for_non_existent_run": 0.9735993510000753, "tests/integration/functional/test_client.py::test_get_unlisted_runs": 3.784757890000037, "tests/integration/functional/test_client.py::test_getting_a_nonexisting_stack_component": 0.9260725799999818, "tests/integration/functional/test_client.py::test_getting_a_pipeline": 1.012382962999709, "tests/integration/functional/test_client.py::test_getting_a_stack_component": 0.9446366199999829, "tests/integration/functional/test_client.py::test_getting_builds": 0.9489563990000534, "tests/integration/functional/test_client.py::test_getting_deployments": 0.9634818419999647, "tests/integration/functional/test_client.py::test_initializing_repo_creates_directory_and_uses_default_stack": 0.9613556259998859, "tests/integration/functional/test_client.py::test_initializing_repo_twice_fails": 0.011425310000049649, "tests/integration/functional/test_client.py::test_listing_builds": 0.9464990539997871, "tests/integration/functional/test_client.py::test_listing_deployments": 0.9938199249997979, "tests/integration/functional/test_client.py::test_listing_pipelines": 1.9813161319998471, "tests/integration/functional/test_client.py::test_register_a_stack_with_unregistered_component_fails": 0.9457218319998901, "tests/integration/functional/test_client.py::test_registering_a_new_stack_component_succeeds": 0.9650285859997894, "tests/integration/functional/test_client.py::test_registering_a_stack": 1.0131612770001084, 
"tests/integration/functional/test_client.py::test_registering_a_stack_component_with_existing_name": 0.955384315999936, "tests/integration/functional/test_client.py::test_registering_a_stack_with_existing_name": 1.0754401170001984, "tests/integration/functional/test_client.py::test_renaming_stack_with_update_method_succeeds": 1.0627626879997933, "tests/integration/functional/test_client.py::test_repository_detection": 0.011109303000012005, "tests/integration/functional/test_client.py::test_updating_a_stack_with_new_component_succeeds": 1.0770695470000646, "tests/integration/functional/test_lineage_graph.py::test_add_direct_edges": 2.552997497000206, "tests/integration/functional/test_lineage_graph.py::test_add_external_artifacts": 3.7319646040000407, "tests/integration/functional/test_lineage_graph.py::test_generate_run_nodes_and_edges": 2.67205659199999, "tests/integration/functional/test_lineage_graph.py::test_manual_save_load_artifact": 3.65780175399982, "tests/integration/functional/test_zen_server_api.py::test_list_stacks_endpoint": 0.0014071260000037, "tests/integration/functional/test_zen_server_api.py::test_list_users_endpoint": 0.000930116999825259, "tests/integration/functional/test_zen_server_api.py::test_server_requires_auth": 0.0010635189998993155, "tests/integration/functional/zen_server/test_zen_server.py::test_server_up_down": 24.17791183399993, "tests/integration/functional/zen_stores/test_secrets_store.py::test_delete_user_with_secrets": 0.4410866210000677, "tests/integration/functional/zen_stores/test_secrets_store.py::test_get_secret_returns_values": 0.024542940999936036, "tests/integration/functional/zen_stores/test_secrets_store.py::test_list_secret_excludes_values": 0.025359755000181394, "tests/integration/functional/zen_stores/test_secrets_store.py::test_list_secrets_filter": 0.15560289499990176, "tests/integration/functional/zen_stores/test_secrets_store.py::test_list_secrets_pagination_and_sorting": 2.171697098000095, 
"tests/integration/functional/zen_stores/test_secrets_store.py::test_reusing_user_secret_name_succeeds": 0.0014190250001320237, "tests/integration/functional/zen_stores/test_secrets_store.py::test_secret_empty_values": 0.04138014400018619, "tests/integration/functional/zen_stores/test_secrets_store.py::test_secret_is_deleted_with_workspace": 0.18305828699999438, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_scope_fails_if_name_already_in_scope": 0.07530125299967949, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_scope_succeeds": 0.11576448000005257, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_add_new_values": 0.04131614199991418, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_existing_values": 0.041712549000067156, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_name": 0.06636079100007919, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_name_fails_if_exists_in_workspace": 0.0654178749998664, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_name_sets_updated_date": 1.0501437580001038, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_remove_nonexisting_values": 0.04005712000002859, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_remove_values": 0.04309977300022183, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_secret_values_sets_updated_date": 1.048612931999969, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_user_secret_name_succeeds_if_exists_in_workspace": 0.08814708399995652, "tests/integration/functional/zen_stores/test_secrets_store.py::test_update_workspace_secret_name_succeeds_if_exists_for_a_user": 0.08451011800002561, 
"tests/integration/functional/zen_stores/test_secrets_store.py::test_user_secret_is_not_visible_to_other_users": 0.0021921390000443353, "tests/integration/functional/zen_stores/test_secrets_store.py::test_user_secret_is_not_visible_to_other_workspaces": 0.0013866249998955027, "tests/integration/functional/zen_stores/test_secrets_store.py::test_workspace_secret_is_not_visible_to_other_workspaces": 0.0013716249995923135, "tests/integration/functional/zen_stores/test_secrets_store.py::test_workspace_secret_is_visible_to_other_users": 0.001460527000062939, "tests/integration/functional/zen_stores/test_zen_store.py::TestModel::test_latest_version_properly_fetched": 2.1645975859996724, "tests/integration/functional/zen_stores/test_zen_store.py::TestModel::test_list_by_tag": 2.0608196600001065, "tests/integration/functional/zen_stores/test_zen_store.py::TestModel::test_update_name": 1.0582784469997932, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_create_duplicated": 0.08315280700026051, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_create_no_model": 0.04677074700020967, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_create_pass": 0.07523346399966613, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_delete_found": 0.09296638500040899, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_delete_not_found": 0.04768576300011773, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_get_found": 0.098840090000067, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_get_found_by_number": 0.11490568299973347, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_get_not_found": 0.046364641000081974, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_get_not_found_by_number": 
0.09942640200006281, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_in_stage_not_found": 0.08798229500030175, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_increments_version_number": 1.139764557999797, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_latest_found": 1.1369452999999794, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_list_by_tags": 0.2704386010000235, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_list_empty": 0.057521043000178906, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_list_not_empty": 0.1287427340000704, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_model_bad_stage": 0.0015075270000579621, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_model_ok_stage": 0.0014155260000734415, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_update_forced": 0.18710080599998946, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_update_name_and_description": 1.1056055389999528, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_update_not_forced": 0.16816594700026144, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_update_not_found": 0.04697575099976348, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_update_public_interface": 0.11767553300023792, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersion::test_update_public_interface_bad_stage": 0.07529956500025037, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_create_duplicated_by_id": 0.15954477099990072, 
"tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_create_pass": 0.15323265600000013, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_create_single_version_of_same_output_name_from_different_steps": 0.22683368000002702, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_create_versioned": 0.27752209200002653, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_delete_all": 0.2224620029999187, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_delete_found": 0.17522775199995522, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_delete_not_found": 0.1059896070000832, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_list_empty": 0.0954445179997947, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionArtifactLinks::test_link_list_populated": 0.4914512420002666, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionPipelineRunLinks::test_link_create_duplicated": 0.15618571100003464, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionPipelineRunLinks::test_link_create_pass": 0.15618930900018313, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionPipelineRunLinks::test_link_delete_found": 0.16970815300032882, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionPipelineRunLinks::test_link_delete_not_found": 0.133608102999915, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionPipelineRunLinks::test_link_list_empty": 0.09046102799993605, "tests/integration/functional/zen_stores/test_zen_store.py::TestModelVersionPipelineRunLinks::test_link_list_populated": 0.3963087560002805, 
"tests/integration/functional/zen_stores/test_zen_store.py::TestRunMetadata::test_metadata_full_cycle_with_cascade_deletion[artifact_version]": 0.06894387900001675, "tests/integration/functional/zen_stores/test_zen_store.py::TestRunMetadata::test_metadata_full_cycle_with_cascade_deletion[model_version]": 0.08209392399976423, "tests/integration/functional/zen_stores/test_zen_store.py::TestRunMetadata::test_metadata_full_cycle_with_cascade_deletion[pipeline_run]": 0.09431555000014669, "tests/integration/functional/zen_stores/test_zen_store.py::TestRunMetadata::test_metadata_full_cycle_with_cascade_deletion[step_run]": 0.10005235700009507, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_create_bad_input": 0.9302727770000274, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_create_duplicate": 0.9441045840001152, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_create_pass": 0.949003427000207, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_get_tag_found": 0.9420142460000989, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_get_tag_not_found": 0.9248821389999193, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_list_tags": 0.9728294010001264, "tests/integration/functional/zen_stores/test_zen_store.py::TestTag::test_update_tag": 0.9777437909997388, "tests/integration/functional/zen_stores/test_zen_store.py::TestTagResource::test_cascade_deletion[delete_model]": 1.0587197889999516, "tests/integration/functional/zen_stores/test_zen_store.py::TestTagResource::test_cascade_deletion[delete_tag]": 1.0798434729999826, "tests/integration/functional/zen_stores/test_zen_store.py::TestTagResource::test_create_tag_resource_fails_on_duplicate": 0.944962809000117, "tests/integration/functional/zen_stores/test_zen_store.py::TestTagResource::test_create_tag_resource_pass": 0.945184892999805, 
"tests/integration/functional/zen_stores/test_zen_store.py::TestTagResource::test_delete_tag_resource_mismatch": 0.9450065899995934, "tests/integration/functional/zen_stores/test_zen_store.py::TestTagResource::test_delete_tag_resource_pass": 0.9594463899998118, "tests/integration/functional/zen_stores/test_zen_store.py::test_active_user": 0.0033643619999566, "tests/integration/functional/zen_stores/test_zen_store.py::test_artifacts_are_not_deleted_with_run": 2.6001630109999496, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[artifact]": 0.07239142000003085, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[artifact_version]": 0.13475899200011554, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[build]": 0.05426430700003948, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[code_repository]": 0.057253301000400825, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[deployment]": 0.05222526900001867, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[flavor]": 0.05552612400015278, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[model]": 0.14146092300006785, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[pipeline]": 0.06253243899982408, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[secret]": 0.06177613999989262, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[service_connector]": 0.08405283200022495, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[stack_component]": 0.06552361699982612, "tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[user]": 0.07588278399998671, 
"tests/integration/functional/zen_stores/test_zen_store.py::test_basic_crud_for_entity[workspace]": 0.0987493299999187, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_list": 0.17459133500028656, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_name_reuse_for_different_user_fails": 0.0013206239998453384, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_name_reuse_for_same_user_fails": 0.017179729999952542, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_name_update_fails_if_exists": 0.03245759200012799, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_secret_share_lifespan": 0.04378314199993838, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_type_register": 0.022342714000160413, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_auth_method": 0.06022865999989335, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_config": 0.3401430179999352, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_expiration": 0.12017322399992736, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_expires_at": 0.06313336499988509, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_labels": 0.1298885739997786, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_name": 0.06489558299995224, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_resource_id": 0.11610533399993983, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_resource_types": 0.060031107000213524, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_update_type": 0.061117275000242444, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_validation": 0.18247296699996696, 
"tests/integration/functional/zen_stores/test_zen_store.py::test_connector_with_labels": 0.044674725000049875, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_with_no_config_no_secrets": 0.01839553599984356, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_with_no_secrets": 0.01897224700019251, "tests/integration/functional/zen_stores/test_zen_store.py::test_connector_with_secrets": 0.04602468499979295, "tests/integration/functional/zen_stores/test_zen_store.py::test_count_runs": 7.472864680999919, "tests/integration/functional/zen_stores/test_zen_store.py::test_count_stack_components": 0.03181298699973922, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_api_key": 0.3076863349999712, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_api_key_used_name_fails": 0.30471495500023593, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[artifact]": 0.014365064999765309, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[artifact_version]": 0.03605446899996423, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[build]": 0.001657329999943613, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[code_repository]": 0.014777286000025924, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[deployment]": 0.0016828319999149244, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[flavor]": 0.02479017699988617, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[model]": 0.042214378999915425, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[pipeline]": 0.01960687800010419, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[secret]": 
0.023225045999879512, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[service_connector]": 0.037949431000015466, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[stack_component]": 0.020774899000343794, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[user]": 0.04252258599990455, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_entity_twice_fails[workspace]": 0.06997144600018146, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_service_account": 0.04872169199984455, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_service_account_used_name_fails": 0.3703392330000952, "tests/integration/functional/zen_stores/test_zen_store.py::test_create_user_no_password": 0.0014480270001513418, "tests/integration/functional/zen_stores/test_zen_store.py::test_creating_user_with_existing_name_fails": 0.6176798799997414, "tests/integration/functional/zen_stores/test_zen_store.py::test_crud_on_stack_succeeds": 0.08184151999989808, "tests/integration/functional/zen_stores/test_zen_store.py::test_deactivate_api_key": 0.3174637579998034, "tests/integration/functional/zen_stores/test_zen_store.py::test_deactivate_service_account": 0.0567985929999395, "tests/integration/functional/zen_stores/test_zen_store.py::test_delete_api_key": 0.5982778569998572, "tests/integration/functional/zen_stores/test_zen_store.py::test_delete_default_stack_component_fails": 0.015391384000167818, "tests/integration/functional/zen_stores/test_zen_store.py::test_delete_service_account": 0.08934764799982986, "tests/integration/functional/zen_stores/test_zen_store.py::test_delete_service_account_with_resources_fails": 0.4784843290003664, "tests/integration/functional/zen_stores/test_zen_store.py::test_delete_user_with_resources_fails": 1.9845175689999905, 
"tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_a_stack_recursively_succeeds": 0.09333134799999243, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_a_stack_recursively_with_some_stack_components_present_in_another_stack_succeeds": 0.12563691600007587, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_a_stack_succeeds": 0.05484660000001895, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_default_stack_fails": 0.009892783999930543, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_default_user_fails": 0.0033244620001369185, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_default_workspace_fails": 0.0031936589998622367, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[artifact]": 0.0031427590001840144, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[artifact_version]": 0.0031872609999936685, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[build]": 0.003188660999740023, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[code_repository]": 0.003132456999992428, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[deployment]": 0.0034623639999153966, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[flavor]": 0.003142758999729267, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[model]": 0.0035121639998578758, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[pipeline]": 0.003530158000103256, 
"tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[secret]": 0.0052259970002523914, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[service_connector]": 0.0033875630001602985, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[stack_component]": 0.00336196199987171, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[user]": 0.003355762000182949, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_entity_raises_error[workspace]": 0.0033139639999717474, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_nonexistent_stack_fails": 0.003082757000129277, "tests/integration/functional/zen_stores/test_zen_store.py::test_deleting_run_deletes_steps": 1.5438667609998902, "tests/integration/functional/zen_stores/test_zen_store.py::test_filter_runs_by_code_repo": 1.5904920360001142, "tests/integration/functional/zen_stores/test_zen_store.py::test_filter_stack_succeeds": 0.05400263799992899, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[artifact]": 0.003806767000014588, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[artifact_version]": 0.0035218640000493906, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[build]": 0.003108457000053022, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[code_repository]": 0.0031541580001430702, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[deployment]": 0.003275562000226273, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[flavor]": 0.003207962000033149, 
"tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[model]": 0.0031487559999732184, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[pipeline]": 0.003314960999887262, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[secret]": 0.0032635600000503473, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[service_connector]": 0.0033094630000505276, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[stack_component]": 0.0033998630001406127, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[user]": 0.003577167000003101, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_nonexistent_entity_fails[workspace]": 0.0032778590000361874, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_run_step_inputs_succeeds": 7.196971118000192, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_run_step_outputs_succeeds": 6.4296487980000165, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_service_account": 0.34117091999996774, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_stack_fails_with_nonexistent_stack_id": 0.0028242550001778, "tests/integration/functional/zen_stores/test_zen_store.py::test_get_user": 0.3393045599998459, "tests/integration/functional/zen_stores/test_zen_store.py::test_list_api_keys": 0.600452174999873, "tests/integration/functional/zen_stores/test_zen_store.py::test_list_runs_is_ordered": 8.480156917000159, "tests/integration/functional/zen_stores/test_zen_store.py::test_list_service_accounts": 0.40533062500003325, "tests/integration/functional/zen_stores/test_zen_store.py::test_list_unused_artifacts": 1.6663138500000514, "tests/integration/functional/zen_stores/test_zen_store.py::test_login_api_key": 
0.0012629250002191839, "tests/integration/functional/zen_stores/test_zen_store.py::test_login_deleted_api_key": 0.0014201270000739896, "tests/integration/functional/zen_stores/test_zen_store.py::test_login_inactive_api_key": 0.0014478289997441607, "tests/integration/functional/zen_stores/test_zen_store.py::test_login_inactive_service_account": 0.0012773239998296049, "tests/integration/functional/zen_stores/test_zen_store.py::test_login_rotate_api_key": 0.001355425000156174, "tests/integration/functional/zen_stores/test_zen_store.py::test_login_rotate_api_key_retain_period": 0.0013444260002870578, "tests/integration/functional/zen_stores/test_zen_store.py::test_logs_are_recorded_properly": 2.650148912000077, "tests/integration/functional/zen_stores/test_zen_store.py::test_logs_are_recorded_properly_when_disabled": 4.826711830000022, "tests/integration/functional/zen_stores/test_zen_store.py::test_only_one_default_workspace_present": 0.004528488999994806, "tests/integration/functional/zen_stores/test_zen_store.py::test_reactivate_user": 0.0017451350001920218, "tests/integration/functional/zen_stores/test_zen_store.py::test_register_stack_fails_when_stack_exists": 0.05238527200003773, "tests/integration/functional/zen_stores/test_zen_store.py::test_rotate_api_key": 0.5650462270000389, "tests/integration/functional/zen_stores/test_zen_store.py::test_stacks_are_accessible_by_other_users": 0.0012839250000524771, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_api_key_description": 0.3124052089997349, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_api_key_used_name_fails": 0.578257135000058, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_default_stack_component_fails": 0.01920675399992433, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_key_name": 0.334807777999913, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_service_account_description": 
0.048624897000081546, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_service_account_name": 0.06499529999996412, "tests/integration/functional/zen_stores/test_zen_store.py::test_update_service_account_used_name_fails": 0.3844740949998595, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_default_stack_fails": 0.01251443099977223, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_default_user_fails": 0.005139293999945949, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_default_workspace_fails": 0.004888189999974202, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexistent_stack_fails": 0.00483249099988825, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[artifact]": 0.003096756000104506, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[artifact_version]": 0.003248362000022098, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[build]": 0.0014813280001817475, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[code_repository]": 0.0030162589998781186, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[deployment]": 0.0016381300001739874, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[flavor]": 0.001456426000004285, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[model]": 0.003335356999969008, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[pipeline]": 0.0014242269999158452, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[secret]": 
0.0014587260002372204, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[service_connector]": 0.004486382999857597, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[stack_component]": 0.0032243590001144184, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[user]": 0.003494262999993225, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_nonexisting_entity_raises_error[workspace]": 0.003289260000201466, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_the_pipeline_run_status[cached-completed]": 1.5810644270000012, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_the_pipeline_run_status[completed-completed]": 1.5819580449999648, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_the_pipeline_run_status[failed-failed]": 2.3481800609997663, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_the_pipeline_run_status[running-running]": 1.5879226520000884, "tests/integration/functional/zen_stores/test_zen_store.py::test_updating_user_with_existing_name_fails": 0.6279363259998263, "tests/integration/integrations/airflow/orchestrators/test_airflow_orchestrator.py::test_airflow_orchestrator_attributes": 0.002579547000095772, "tests/integration/integrations/airflow/orchestrators/test_airflow_orchestrator.py::test_resource_appliciation": 0.001973234999923079, "tests/integration/integrations/airflow/orchestrators/test_dag_generator.py::test_class_importing_by_path": 0.0027796510000825947, "tests/integration/integrations/airflow/orchestrators/test_dag_generator.py::test_dag_generator_constants": 0.0014811260002716153, "tests/integration/integrations/aws/orchestrators/test_sagemaker_orchestrator.py::test_sagemaker_orchestrator_flavor_attributes": 0.002895250999927157, 
"tests/integration/integrations/azure/artifact_stores/test_azure_artifact_store.py::test_azure_artifact_store_attributes": 0.0019500350001635525, "tests/integration/integrations/azure/artifact_stores/test_azure_artifact_store.py::test_must_be_azure_path": 0.0033852619999379385, "tests/integration/integrations/deepchecks/data_validators/test_deepchecks_data_validator.py::test_deepchecks_data_validator_attributes": 0.002593146999970486, "tests/integration/integrations/deepchecks/materializers/test_deepchecks_dataset_materializer.py::test_deepchecks_dataset_materializer": 0.9477586000000429, "tests/integration/integrations/deepchecks/materializers/test_deepchecks_result_materializer.py::test_deepchecks_dataset_materializer_with_check_result": 0.9550325399998201, "tests/integration/integrations/deepchecks/materializers/test_deepchecks_result_materializer.py::test_deepchecks_dataset_materializer_with_suite_result": 1.7131118419999893, "tests/integration/integrations/deepchecks/test_validation_checks.py::test_validation_check_fails_when_checking_name": 0.0035658670001339487, "tests/integration/integrations/evidently/data_validators/test_evidently_data_validator.py::test_evidently_data_validator_attributes": 0.0025486449997060845, "tests/integration/integrations/facets/materializers/test_facets_materializer.py::test_facets_materializer": 0.9470509699999639, "tests/integration/integrations/gcp/artifact_stores/test_gcp_artifact_store.py::test_must_be_gcs_path": 0.0005730089999360644, "tests/integration/integrations/gcp/image_builders/test_gcp_image_builder.py::test_stack_validation": 0.0006773129996417993, "tests/integration/integrations/gcp/orchestrators/test_vertex_orchestrator.py::test_vertex_orchestrator_configure_container_resources[resource_settings0-orchestrator_resource_settings0-expected_resources0]": 0.0007285130000127538, 
"tests/integration/integrations/gcp/orchestrators/test_vertex_orchestrator.py::test_vertex_orchestrator_configure_container_resources[resource_settings1-orchestrator_resource_settings1-expected_resources1]": 0.0006984120000197436, "tests/integration/integrations/gcp/orchestrators/test_vertex_orchestrator.py::test_vertex_orchestrator_configure_container_resources[resource_settings2-orchestrator_resource_settings2-expected_resources2]": 0.0006751119999535149, "tests/integration/integrations/gcp/orchestrators/test_vertex_orchestrator.py::test_vertex_orchestrator_configure_container_resources[resource_settings3-orchestrator_resource_settings3-expected_resources3]": 0.000651711000045907, "tests/integration/integrations/gcp/orchestrators/test_vertex_orchestrator.py::test_vertex_orchestrator_stack_validation": 0.0006524109999190841, "tests/integration/integrations/great_expectations/materializers/test_ge_materializer.py::test_great_expectations_materializer": 0.9280860650001159, "tests/integration/integrations/huggingface/materializers/test_huggingface_datasets_materializer.py::test_huggingface_datasets_materializer": 1.193083361000049, "tests/integration/integrations/huggingface/materializers/test_huggingface_pt_model_materializer.py::test_huggingface_pretrained_model_materializer": 4.73393071099963, "tests/integration/integrations/huggingface/materializers/test_huggingface_tf_model_materializer.py::test_huggingface_tf_pretrained_model_materializer": 7.806099057999745, "tests/integration/integrations/huggingface/materializers/test_huggingface_tokenizer_materializer.py::test_huggingface_tokenizer_materializer": 2.1347184400001424, "tests/integration/integrations/kaniko/image_builders/test_kaniko_image_builder.py::test_stack_validation": 0.005499401999941256, "tests/integration/integrations/kubeflow/orchestrators/test_kubeflow_orchestrator.py::test_kubeflow_orchestrator_local_stack": 0.0005407110002124682, 
"tests/integration/integrations/kubeflow/orchestrators/test_kubeflow_orchestrator.py::test_kubeflow_orchestrator_remote_stack": 0.0005819110001539229, "tests/integration/integrations/kubeflow/test_utils.py::test_apply_pod_settings": 0.0005952109999043387, "tests/integration/integrations/kubernetes/orchestrators/test_kubernetes_orchestrator.py::test_kubernetes_orchestrator_local_stack": 0.011105703999874095, "tests/integration/integrations/kubernetes/orchestrators/test_kubernetes_orchestrator.py::test_kubernetes_orchestrator_remote_stack": 0.01057949500000177, "tests/integration/integrations/kubernetes/orchestrators/test_kubernetes_orchestrator.py::test_kubernetes_orchestrator_uses_service_account_from_settings": 0.006717721999848436, "tests/integration/integrations/kubernetes/orchestrators/test_manifest_utils.py::test_build_cron_job_manifest_pod_settings": 0.018659739999748126, "tests/integration/integrations/kubernetes/orchestrators/test_manifest_utils.py::test_build_pod_manifest_metadata": 0.008324652000283095, "tests/integration/integrations/kubernetes/orchestrators/test_manifest_utils.py::test_build_pod_manifest_pod_settings": 0.010034382999947411, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_deserializing_invalid_model": 0.0012377220000416855, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_get_model_class": 0.0012272209999082406, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_model_serialization_and_deserialization[model0]": 0.010010383000235379, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_model_serialization_and_deserialization[model1]": 0.00317675999963285, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_model_serialization_and_deserialization[model2]": 0.006286914999691362, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_model_serialization_and_deserialization[model3]": 
0.0031093569996301085, "tests/integration/integrations/kubernetes/test_serialization_utils.py::test_serializing_invalid_model": 0.0012390220001634589, "tests/integration/integrations/label_studio/label_config_generators/test_label_config_generators.py::test_config_generator_raises_with_empty_list": 0.0014363249999860273, "tests/integration/integrations/label_studio/label_config_generators/test_label_config_generators.py::test_image_classification_label_config_generator": 0.92938971100034, "tests/integration/integrations/label_studio/label_config_generators/test_label_config_generators.py::test_object_detection_label_config_generator": 0.783709306999981, "tests/integration/integrations/label_studio/label_config_generators/test_label_config_generators.py::test_ocr_label_config_generator": 0.8885944819996894, "tests/integration/integrations/label_studio/label_config_generators/test_label_config_generators.py::test_text_classification_label_config_generator": 0.8539296619997003, "tests/integration/integrations/label_studio/test_label_studio_utils.py::test_getting_file_extension": 0.0015306279997275851, "tests/integration/integrations/label_studio/test_label_studio_utils.py::test_is_azure_url": 0.0016571300000123301, "tests/integration/integrations/label_studio/test_label_studio_utils.py::test_is_gcs_url": 0.0013881260001653573, "tests/integration/integrations/label_studio/test_label_studio_utils.py::test_is_s3_url": 0.0026499499999772524, "tests/integration/integrations/langchain/materializers/test_langchain_document_materializer.py::test_langchain_document_materializer": 2.1163754140000037, "tests/integration/integrations/langchain/materializers/test_openai_embedding_materializer_materializer.py::test_langchain_openai_embedding_materializer": 0.9738833669998712, "tests/integration/integrations/langchain/materializers/test_vector_store_materializer.py::test_langchain_vectorstore_materializer": 0.9236638089998905, 
"tests/integration/integrations/lightgbm/materializers/test_lightgbm_booster_materializer.py::test_lightgbm_booster_materializer": 0.012131815999964601, "tests/integration/integrations/lightgbm/materializers/test_lightgbm_dataset_materializer.py::test_lightgbm_dataset_materializer": 0.009684978999985105, "tests/integration/integrations/mlflow/experiment_trackers/test_mlflow_experiment_tracker.py::test_mlflow_experiment_tracker_attributes": 0.002743348999956652, "tests/integration/integrations/mlflow/experiment_trackers/test_mlflow_experiment_tracker.py::test_mlflow_experiment_tracker_authentication": 0.0021143369999663264, "tests/integration/integrations/mlflow/experiment_trackers/test_mlflow_experiment_tracker.py::test_mlflow_experiment_tracker_set_config": 0.0023222419998774058, "tests/integration/integrations/mlflow/experiment_trackers/test_mlflow_experiment_tracker.py::test_mlflow_experiment_tracker_stack_validation": 0.0026841470000817935, "tests/integration/integrations/neptune/experiment_tracker/test_neptune_experiment_tracker.py::test_neptune_experiment_tracker_attributes": 0.0024196439999286667, "tests/integration/integrations/neptune/experiment_tracker/test_neptune_experiment_tracker.py::test_neptune_experiment_tracker_does_not_need_explicit_api_token_or_project": 0.0014428260001295712, "tests/integration/integrations/neptune/experiment_tracker/test_neptune_experiment_tracker.py::test_neptune_experiment_tracker_stack_validation": 0.0019893360001788096, "tests/integration/integrations/neural_prophet/materializers/test_neural_prophet_materializer.py::test_neural_prophet_booster_materializer": 1.432609930000126, "tests/integration/integrations/pillow/materializers/test_pillow_image_materializer.py::test_materializer_works_for_pillow_image_objects": 0.9307538260000001, "tests/integration/integrations/polars/materializers/test_polars_materializer.py::test_polars_materializer": 0.0568004369999926, 
"tests/integration/integrations/pytorch/materializers/test_pytorch_dataloader_materializer.py::test_pytorch_dataloader_materializer": 0.9630062589999397, "tests/integration/integrations/pytorch/materializers/test_pytorch_module_materializer.py::test_pytorch_module_materializer": 0.927009802999919, "tests/integration/integrations/s3/artifact_stores/test_s3_artifact_store.py::test_must_be_s3_path": 0.00290795299997626, "tests/integration/integrations/s3/artifact_stores/test_s3_artifact_store.py::test_s3_artifact_store_attributes": 0.001706131999981153, "tests/integration/integrations/scipy/materializers/test_sparse_materializer.py::test_scipy_sparse_matrix_materializer": 0.9426515569998628, "tests/integration/integrations/sklearn/materializers/test_sklearn_materializer.py::test_sklearn_materializer": 0.9317518970000265, "tests/integration/integrations/skypilot/orchestrators/test_skypilot_orchestrator.py::test_skypilot_orchestrator_local_stack[aws]": 0.000597211000012976, "tests/integration/integrations/skypilot/orchestrators/test_skypilot_orchestrator.py::test_skypilot_orchestrator_local_stack[azure]": 0.0007007130000147299, "tests/integration/integrations/skypilot/orchestrators/test_skypilot_orchestrator.py::test_skypilot_orchestrator_local_stack[gcp]": 0.0006057110001620458, "tests/integration/integrations/tensorflow/materializers/test_keras_materializer.py::test_tensorflow_keras_materializer": 1.7316540750000513, "tests/integration/integrations/tensorflow/materializers/test_tf_dataset_materializer.py::test_tensorflow_tf_dataset_materializer": 0.9556844540002203, "tests/integration/integrations/whylogs/materializers/test_whylogs_materializer.py::test_whylogs_materializer": 3.4373613779999914, "tests/integration/integrations/xgboost/materializers/test_xgboost_dmatrix_materializer.py::test_xgboost_dmatrix_materializer": 0.9379987049999272, "tests/unit/_hub/test_client.py::test_default_url": 0.0032108609999568216, "tests/unit/_hub/test_client.py::test_get_plugin": 
1.7070961019999231, "tests/unit/_hub/test_client.py::test_list_plugins": 0.4211572960000467, "tests/unit/_hub/test_utils.py::test_parse_invalid_plugin_name[]": 0.0013775239999631594, "tests/unit/_hub/test_utils.py::test_parse_invalid_plugin_name[invalid/plugin/name]": 0.0013802219998524379, "tests/unit/_hub/test_utils.py::test_parse_invalid_plugin_name[invalid:plugin:name]": 0.0013945210001793384, "tests/unit/_hub/test_utils.py::test_parse_plugin_name[author/plugin_name-author-plugin_name-latest]": 0.0022775389999196705, "tests/unit/_hub/test_utils.py::test_parse_plugin_name[author/plugin_name:version-author-plugin_name-version]": 0.0018353300000626405, "tests/unit/_hub/test_utils.py::test_parse_plugin_name[plugin_name-None-plugin_name-latest]": 0.001854031999982908, "tests/unit/_hub/test_utils.py::test_parse_plugin_name[plugin_name:version-None-plugin_name-version]": 0.0018171300000631163, "tests/unit/_hub/test_utils.py::test_plugin_display_name[author/plugin_name:latest-author-plugin_name-None]": 0.0017440279999618724, "tests/unit/_hub/test_utils.py::test_plugin_display_name[author/plugin_name:version-author-plugin_name-version]": 0.002353337999920768, "tests/unit/_hub/test_utils.py::test_plugin_display_name[plugin_name:latest-None-plugin_name-None]": 0.0017435299999988274, "tests/unit/_hub/test_utils.py::test_plugin_display_name[plugin_name:version-None-plugin_name-version]": 0.001722829000073034, "tests/unit/artifact_stores/test_base_artifact_store.py::TestBaseArtifactStoreConfig::test_invalid_path[http://my-bucket/my-folder/my-file.txt]": 0.0015989300000001094, "tests/unit/artifact_stores/test_base_artifact_store.py::TestBaseArtifactStoreConfig::test_invalid_path[s3://my-bucket/my-folder/my-file.txt]": 0.0016644309999946927, "tests/unit/artifact_stores/test_base_artifact_store.py::TestBaseArtifactStoreConfig::test_valid_path[\"aria://my-bucket/my-folder/my-file.txt\"]": 0.0014808280000124796, 
"tests/unit/artifact_stores/test_base_artifact_store.py::TestBaseArtifactStoreConfig::test_valid_path['aria://my-bucket/my-folder/my-file.txt']": 0.002560048000077586, "tests/unit/artifact_stores/test_base_artifact_store.py::TestBaseArtifactStoreConfig::test_valid_path[`aria://my-bucket/my-folder/my-file.txt`]": 0.001569228999983352, "tests/unit/artifact_stores/test_base_artifact_store.py::TestBaseArtifactStoreConfig::test_valid_path[aria://my-bucket/my-folder/my-file.txt]": 0.0016597310001316146, "tests/unit/artifact_stores/test_local_artifact_store.py::test_local_artifact_store_attributes": 0.0023626400000011927, "tests/unit/artifact_stores/test_local_artifact_store.py::test_local_artifact_store_only_supports_local_paths": 0.0017943309999282064, "tests/unit/artifacts/test_utils.py::test__get_new_artifact_version": 0.004288072000008469, "tests/unit/artifacts/test_utils.py::test__load_artifact": 0.00832573799993952, "tests/unit/artifacts/test_utils.py::test_load_artifact_from_response": 0.008162735000041721, "tests/unit/artifacts/test_utils.py::test_load_model_from_metadata": 0.007569926999849486, "tests/unit/artifacts/test_utils.py::test_save_model_metadata": 0.004238172000100349, "tests/unit/config/test_base_settings.py::test_base_settings_default_configuration_level": 0.1473720349999894, "tests/unit/config/test_base_settings.py::test_base_settings_inherit_from_secret_reference_mixin": 0.0013392250000379136, "tests/unit/config/test_compiler.py::test_compiling_pipeline_with_invalid_run_configuration": 0.0020614389999309424, "tests/unit/config/test_compiler.py::test_compiling_pipeline_with_invalid_run_name_fails": 0.003094959000009112, "tests/unit/config/test_compiler.py::test_compiling_pipeline_with_missing_experiment_tracker": 0.009250373000099898, "tests/unit/config/test_compiler.py::test_compiling_pipeline_with_missing_step_operator": 0.009199474000183727, "tests/unit/config/test_compiler.py::test_compiling_pipeline_without_steps_fails": 0.005283598999881178, 
"tests/unit/config/test_compiler.py::test_default_run_name": 0.0012666239999816753, "tests/unit/config/test_compiler.py::test_extra_merging": 0.01296354299995528, "tests/unit/config/test_compiler.py::test_failure_hook_merging": 0.0381556150000506, "tests/unit/config/test_compiler.py::test_general_settings_merging": 0.014519172000063918, "tests/unit/config/test_compiler.py::test_pipeline_and_steps_dont_get_modified_during_compilation": 0.012862242000210244, "tests/unit/config/test_compiler.py::test_spec_compilation": 0.03722239699993679, "tests/unit/config/test_compiler.py::test_stack_component_settings_for_missing_component_are_ignored": 0.01357085399990865, "tests/unit/config/test_compiler.py::test_stack_component_settings_merging": 0.016981518000079632, "tests/unit/config/test_compiler.py::test_step_sorting": 0.017425927999966007, "tests/unit/config/test_compiler.py::test_success_hook_merging": 0.03881502800004455, "tests/unit/config/test_docker_settings.py::test_build_skipping": 0.0021924360000866727, "tests/unit/config/test_global_config.py::test_global_config_file_creation": 0.9392365749998817, "tests/unit/config/test_global_config.py::test_global_config_returns_value_from_environment_variable": 0.9232988779999687, "tests/unit/config/test_resource_settings.py::test_resource_config_empty": 0.0014650360000132423, "tests/unit/config/test_resource_settings.py::test_resource_config_memory_conversion": 0.0022105560000227342, "tests/unit/config/test_resource_settings.py::test_resource_config_value_validation": 0.001751343000023553, "tests/unit/config/test_resource_settings.py::test_unit_byte_value_defined_for_all_values": 0.0014143339999463933, "tests/unit/config/test_secret_reference_mixin.py::test_secret_reference_mixin_returns_correct_required_secrets": 0.0019122350000770894, "tests/unit/config/test_secret_reference_mixin.py::test_secret_reference_mixin_serialization_does_not_resolve_secrets": 0.001352027000052658, 
"tests/unit/config/test_secret_reference_mixin.py::test_secret_reference_resolving": 1.013431270999945, "tests/unit/config/test_secret_reference_mixin.py::test_secret_references_are_not_allowed_for_clear_text_fields": 0.0018370339998909913, "tests/unit/config/test_secret_reference_mixin.py::test_secret_references_are_not_allowed_for_fields_with_validators": 0.00192333600011807, "tests/unit/config/test_settings_resolver.py::test_resolving_fails_if_no_stack_component_settings_exist_for_the_given_key": 0.0016601370000444149, "tests/unit/config/test_settings_resolver.py::test_resolving_fails_if_the_settings_cant_be_converted": 0.001885542999957579, "tests/unit/config/test_settings_resolver.py::test_resolving_general_settings": 0.00278536299992993, "tests/unit/config/test_settings_resolver.py::test_resolving_stack_component_settings": 0.0023978549999128518, "tests/unit/config/test_settings_resolver.py::test_settings_resolver_fails_when_using_invalid_settings_key": 0.0031632719999379333, "tests/unit/config/test_step_configurations.py::test_step_spec_inputs_equality": 0.001413925999941057, "tests/unit/config/test_step_configurations.py::test_step_spec_pipeline_parameter_name_equality": 0.001373527000055219, "tests/unit/config/test_step_configurations.py::test_step_spec_source_equality": 0.0018446339998945405, "tests/unit/config/test_step_configurations.py::test_step_spec_upstream_steps_equality": 0.001488227999971059, "tests/unit/container_registries/test_base_container_registry.py::test_base_container_registry_local_property": 0.0018450349999739046, "tests/unit/container_registries/test_base_container_registry.py::test_base_container_registry_prevents_push_if_uri_does_not_match": 0.004080176999991636, "tests/unit/container_registries/test_base_container_registry.py::test_base_container_registry_requires_authentication_if_secret_provided": 0.0014158250000946282, 
"tests/unit/container_registries/test_default_container_registry.py::test_default_container_registry_attributes": 0.002135141000053409, "tests/unit/entrypoints/test_base_entrypoint_configuration.py::test_calling_entrypoint_configuration_with_invalid_deployment_id": 0.0017442399998799374, "tests/unit/entrypoints/test_base_entrypoint_configuration.py::test_loading_the_deployment": 0.9636324800000011, "tests/unit/image_builders/test_base_image_builder.py::test_upload_build_context": 2.600672806000034, "tests/unit/image_builders/test_build_context.py::test_adding_extra_directory": 0.0023452450001286707, "tests/unit/image_builders/test_build_context.py::test_adding_extra_files": 0.0027623529999800667, "tests/unit/image_builders/test_build_context.py::test_build_context_includes_and_excludes": 0.003938673999982711, "tests/unit/image_builders/test_local_image_builder.py::test_local_image_builder_flavor_attributes": 0.0018645349999815153, "tests/unit/io/test_fileio.py::test_convert_to_str_converts_to_string": 0.002110037000079501, "tests/unit/io/test_fileio.py::test_copy_moves_file_to_new_location": 0.0024483409999902506, "tests/unit/io/test_fileio.py::test_copy_raises_error_when_file_exists": 0.0026426450001508783, "tests/unit/io/test_fileio.py::test_file_exists_function": 0.0023297399999364643, "tests/unit/io/test_fileio.py::test_file_exists_when_file_doesnt_exist": 0.002631844000006822, "tests/unit/io/test_fileio.py::test_glob_function": 0.0023702409999941665, "tests/unit/io/test_fileio.py::test_isdir_when_false": 0.002144836000070427, "tests/unit/io/test_fileio.py::test_isdir_when_true": 0.001999534000105996, "tests/unit/io/test_fileio.py::test_listdir_returns_a_list_of_file_names": 0.0023254389999465275, "tests/unit/io/test_fileio.py::test_listdir_returns_empty_list_when_dir_doesnt_exist": 0.26893385599998965, "tests/unit/io/test_fileio.py::test_listdir_returns_one_result_for_one_file": 0.002363940000009279, "tests/unit/io/test_fileio.py::test_make_dirs": 
0.0022770379999883517, "tests/unit/io/test_fileio.py::test_make_dirs_when_recursive": 0.0022928379999029858, "tests/unit/io/test_fileio.py::test_mkdir_function": 0.002466141000013522, "tests/unit/io/test_fileio.py::test_mkdir_function_when_parent_doesnt_exist": 0.0021696370000654497, "tests/unit/io/test_fileio.py::test_open_returns_error_when_file_nonexistent": 0.23606230000007145, "tests/unit/io/test_fileio.py::test_remove_function": 0.0024570410000706033, "tests/unit/io/test_fileio.py::test_rename_function": 0.002250538999987839, "tests/unit/io/test_fileio.py::test_rename_function_raises_error_if_file_already_exists": 0.0023164389999692503, "tests/unit/io/test_fileio.py::test_rm_dir_function": 0.0022712390000378946, "tests/unit/io/test_fileio.py::test_rm_dir_function_works_recursively": 0.0023920410001210257, "tests/unit/io/test_fileio.py::test_size_returns_int_for_dir": 0.0024877409999817246, "tests/unit/io/test_fileio.py::test_size_returns_int_for_file": 0.002405941000006351, "tests/unit/io/test_fileio.py::test_size_returns_zero_for_empty_dir": 0.0021620369999482136, "tests/unit/io/test_fileio.py::test_size_returns_zero_for_non_existent_file": 0.0023524399999814705, "tests/unit/io/test_fileio.py::test_stat_raises_error_when_file_doesnt_exist": 0.002213938000068083, "tests/unit/io/test_fileio.py::test_stat_returns_a_stat_result_object": 0.003419957999994949, "tests/unit/io/test_fileio.py::test_walk_function_returns_a_generator_object": 0.0022085379999907673, "tests/unit/io/test_fileio.py::test_walk_returns_an_iterator": 0.003219355000055657, "tests/unit/materializers/test_base_materializer.py::test_materializer_raises_an_exception_if_associated_artifact_type_wrong": 0.001470532999860552, "tests/unit/materializers/test_base_materializer.py::test_materializer_raises_an_exception_if_associated_types_are_no_classes": 0.0020443470001509922, "tests/unit/materializers/test_base_materializer.py::test_validate_type_compatibility": 0.0013462310000704747, 
"tests/unit/materializers/test_built_in_materializer.py::test_basic_type_materialization": 0.003995465999878434, "tests/unit/materializers/test_built_in_materializer.py::test_bytes_materialization": 0.0016465279999238192, "tests/unit/materializers/test_built_in_materializer.py::test_container_materializer_for_custom_types": 0.012237205000019458, "tests/unit/materializers/test_built_in_materializer.py::test_dict_of_bytes_materialization": 0.011256487999958154, "tests/unit/materializers/test_built_in_materializer.py::test_empty_dict_list_tuple_materialization": 0.0032119530000045415, "tests/unit/materializers/test_built_in_materializer.py::test_list_of_bytes_materialization": 0.0071240190000025905, "tests/unit/materializers/test_built_in_materializer.py::test_mixture_of_all_builtin_types": 0.015216854000072999, "tests/unit/materializers/test_built_in_materializer.py::test_none_values": 0.0028283470001042588, "tests/unit/materializers/test_built_in_materializer.py::test_set_materialization": 0.008573842000032528, "tests/unit/materializers/test_built_in_materializer.py::test_simple_dict_list_tuple_materialization": 0.003509158999918327, "tests/unit/materializers/test_built_in_materializer.py::test_tuple_of_bytes_materialization": 0.007055716999843753, "tests/unit/materializers/test_cloudpickle_materializer.py::test_cloudpickle_materializer": 0.9145500439999523, "tests/unit/materializers/test_cloudpickle_materializer.py::test_cloudpickle_materializer_can_load_pickle": 0.9248497380000344, "tests/unit/materializers/test_cloudpickle_materializer.py::test_cloudpickle_materializer_is_not_registered": 0.9173430959999678, "tests/unit/materializers/test_cloudpickle_materializer.py::test_cloudpickle_materializer_python_version_check": 0.9331880929998988, "tests/unit/materializers/test_materializer_registry.py::test_materializer_with_conflicting_parameter_and_explicit_materializer": 0.007002840000041033, 
"tests/unit/materializers/test_materializer_registry.py::test_materializer_with_parameter_with_more_than_one_baseclass": 0.003357766000135598, "tests/unit/materializers/test_materializer_registry.py::test_materializer_with_parameter_with_more_than_one_conflicting_baseclass": 0.008330563999948026, "tests/unit/materializers/test_materializer_registry.py::test_materializer_with_subclassing_parameter": 0.004349086000047464, "tests/unit/materializers/test_numpy_materializer.py::test_numpy_materializer": 0.15234895600019627, "tests/unit/materializers/test_pandas_materializer.py::test_pandas_materializer": 0.04455810800004656, "tests/unit/materializers/test_pandas_materializer.py::test_pandas_materializer_with_index": 0.04855610800007071, "tests/unit/materializers/test_pydantic_materializer.py::test_pydantic_materializer": 0.006927128999905108, "tests/unit/materializers/test_structured_string_materializer.py::test_structured_string_materializer_for_csv_strings": 0.9126777349998747, "tests/unit/materializers/test_structured_string_materializer.py::test_structured_string_materializer_for_html_strings": 0.9147066759999234, "tests/unit/materializers/test_structured_string_materializer.py::test_structured_string_materializer_for_markdown_strings": 0.9261610020000717, "tests/unit/model/test_model_version_init.py::test_init_warns[Pick model by integer version number]": 0.00248264599997583, "tests/unit/model/test_model_version_init.py::test_init_warns[Pick model by text stage]": 0.0028998550000096657, "tests/unit/model/test_model_version_init.py::test_init_warns[Pick model by text version number]": 0.0024670459999924788, "tests/unit/model_registries/test_base_model_registry.py::TestModelRegistryModelMetadata::test_custom_attributes": 0.0012565260000201306, "tests/unit/model_registries/test_base_model_registry.py::TestModelRegistryModelMetadata::test_dict": 0.0023812470000166286, 
"tests/unit/model_registries/test_base_model_registry.py::TestModelRegistryModelMetadata::test_exclude_unset_none": 0.0014938300000721938, "tests/unit/models/test_artifact_models.py::test_artifact_request_model_fails_with_long_name": 0.001316622000103962, "tests/unit/models/test_artifact_models.py::test_artifact_request_model_works_with_long_materializer": 0.0014852249998966727, "tests/unit/models/test_artifact_models.py::test_artifact_version_request_model_works_with_long_data_type": 0.0020877349999182115, "tests/unit/models/test_component_models.py::test_component_base_model_fails_with_long_flavor": 0.0019925360001025183, "tests/unit/models/test_filter_models.py::test_datetime_filter_model": 0.0018077439999615308, "tests/unit/models/test_filter_models.py::test_datetime_filter_model_fails_for_wrong_formats[1]": 0.0019821500001171444, "tests/unit/models/test_filter_models.py::test_datetime_filter_model_fails_for_wrong_formats[2022/12/12 12-12-12]": 0.004455212000038955, "tests/unit/models/test_filter_models.py::test_datetime_filter_model_fails_for_wrong_formats[notadate]": 0.0019994500000848348, "tests/unit/models/test_filter_models.py::test_filter_model_page_not_int_gte1_fails[-4]": 0.00181604600004448, "tests/unit/models/test_filter_models.py::test_filter_model_page_not_int_gte1_fails[0.21]": 0.0014748360000567118, "tests/unit/models/test_filter_models.py::test_filter_model_page_not_int_gte1_fails[0]": 0.0014564370000016424, "tests/unit/models/test_filter_models.py::test_filter_model_page_not_int_gte1_fails[catfood]": 0.0015323389999366555, "tests/unit/models/test_filter_models.py::test_filter_model_size_not_int_gte1_fails[-4]": 0.001482436000060261, "tests/unit/models/test_filter_models.py::test_filter_model_size_not_int_gte1_fails[0.21]": 0.0014501359999030683, "tests/unit/models/test_filter_models.py::test_filter_model_size_not_int_gte1_fails[0]": 0.001473637999993116, 
"tests/unit/models/test_filter_models.py::test_filter_model_size_not_int_gte1_fails[catfood]": 0.0014975369999774557, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_existing_field_with_order_succeeds[correct_sortable_column0]": 0.0014896379999527198, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_existing_field_with_order_succeeds[correct_sortable_column1]": 0.0014808349999384518, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_existing_field_with_order_succeeds[correct_sortable_column2]": 0.0015241370000467214, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_existing_field_wrong_order_succeeds[correct_sortable_column0]": 0.0016227400000161651, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_existing_field_wrong_order_succeeds[correct_sortable_column1]": 0.0016537409999273223, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_existing_field_wrong_order_succeeds[correct_sortable_column2]": 0.0016571410000096876, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_for_existing_field_succeeds[created]": 0.0014534349999166807, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_for_existing_field_succeeds[id]": 0.001462335999917741, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_for_existing_field_succeeds[updated]": 0.0014626370000314637, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_for_non_filter_fields_fails[catastic_column]": 0.0014654360001031819, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_for_non_filter_fields_fails[page]": 0.001495037000040611, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_for_non_filter_fields_fails[zenml]": 0.0015233389999593783, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_non_str_input_fails[1]": 0.0015149370000244744, 
"tests/unit/models/test_filter_models.py::test_filter_model_sort_by_non_str_input_fails[incorrect_sortable_column1]": 0.0014771360001759604, "tests/unit/models/test_filter_models.py::test_filter_model_sort_by_non_str_input_fails[int]": 0.0014173350000419305, "tests/unit/models/test_filter_models.py::test_int_filter_model": 0.0017104410001138604, "tests/unit/models/test_filter_models.py::test_string_filter_model": 0.002325158000076044, "tests/unit/models/test_filter_models.py::test_uuid_filter_model": 0.0022393550000288087, "tests/unit/models/test_filter_models.py::test_uuid_filter_model_fails_for_invalid_uuids_on_equality": 0.001313132000063888, "tests/unit/models/test_filter_models.py::test_uuid_filter_model_succeeds_for_invalid_uuid_on_non_equality": 0.0028030689999241076, "tests/unit/models/test_flavor_models.py::test_flavor_request_model_fails_with_long_config_schema": 0.001437631999920086, "tests/unit/models/test_flavor_models.py::test_flavor_request_model_fails_with_long_integration": 0.0012711280000985425, "tests/unit/models/test_flavor_models.py::test_flavor_request_model_fails_with_long_name": 0.0018333420001681588, "tests/unit/models/test_flavor_models.py::test_flavor_request_model_fails_with_long_source": 0.0013139299999238574, "tests/unit/models/test_model_models.py::test_getters[Latest version]": 0.0038964640000358486, "tests/unit/models/test_model_models.py::test_getters[No collision]": 0.006194004000008135, "tests/unit/models/test_model_models.py::test_getters[Not found]": 0.004480173000047216, "tests/unit/models/test_model_models.py::test_getters[Specific version]": 0.006402506999847901, "tests/unit/models/test_pipeline_deployment_models.py::test_pipeline_deployment_base_model_fails_with_long_name": 0.004673592000017379, "tests/unit/models/test_pipeline_models.py::test_pipeline_request_model_fails_with_long_docstring": 0.0019426329999987502, "tests/unit/models/test_pipeline_models.py::test_pipeline_request_model_fails_with_long_name": 
0.0014581240001234619, "tests/unit/models/test_step_run_models.py::test_step_run_request_model_fails_with_long_docstring": 0.0019511449999072283, "tests/unit/models/test_user_models.py::test_user_request_model_fails_with_long_activation_token": 0.0015878270000939665, "tests/unit/models/test_user_models.py::test_user_request_model_fails_with_long_password": 0.0019698320000998137, "tests/unit/orchestrators/local/test_local_orchestrator.py::test_local_orchestrator_flavor_attributes": 0.0022463379999635436, "tests/unit/orchestrators/local_docker/test_local_docker_orchestrator.py::test_local_docker_orchestrator_flavor_attributes": 0.002168140999970092, "tests/unit/orchestrators/test_base_orchestrator.py::test_resource_required[None-settings1-False]": 0.001936233000037646, "tests/unit/orchestrators/test_base_orchestrator.py::test_resource_required[None-settings2-False]": 0.001881429999912143, "tests/unit/orchestrators/test_base_orchestrator.py::test_resource_required[None-settings3-True]": 0.00201773400010552, "tests/unit/orchestrators/test_base_orchestrator.py::test_resource_required[step_operator-settings0-False]": 0.00249304199985545, "tests/unit/orchestrators/test_cache_utils.py::test_fetching_cached_step_run_queries_cache_candidates": 0.004354801000090447, "tests/unit/orchestrators/test_cache_utils.py::test_fetching_cached_step_run_uses_latest_candidate": 1.0296188920000304, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_artifact_store_id": 0.016771584999787592, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_artifact_store_path": 0.018168016999993597, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_caching_parameters": 0.01698798900008569, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_input_artifacts": 0.016633382000122765, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_output_artifacts": 
0.016624481000121705, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_step_parameters": 0.01659808100009741, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_step_source": 0.016974188000062895, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_considers_workspace_id": 0.017095291000032375, "tests/unit/orchestrators/test_cache_utils.py::test_generate_cache_key_is_deterministic": 0.01928554100004476, "tests/unit/orchestrators/test_containerized_orchestrator.py::test_builds_with_custom_docker_settings_for_all_steps": 0.003353265999976429, "tests/unit/orchestrators/test_containerized_orchestrator.py::test_builds_with_custom_docker_settings_for_some_steps": 0.002833055999985845, "tests/unit/orchestrators/test_containerized_orchestrator.py::test_builds_with_no_docker_settings": 0.0028896570000824795, "tests/unit/orchestrators/test_containerized_orchestrator.py::test_getting_image_from_deployment": 0.002527849999864884, "tests/unit/orchestrators/test_dag_runner.py::test_dag_runner_cyclic": 0.0016426260000343973, "tests/unit/orchestrators/test_dag_runner.py::test_dag_runner_empty": 0.0012595199999623219, "tests/unit/orchestrators/test_dag_runner.py::test_dag_runner_linear": 0.003741463000096701, "tests/unit/orchestrators/test_dag_runner.py::test_dag_runner_multi_path": 0.002542042999948535, "tests/unit/orchestrators/test_dag_runner.py::test_dag_runner_single": 0.002125436000028458, "tests/unit/orchestrators/test_dag_runner.py::test_reverse_dag": 0.001226320000000669, "tests/unit/orchestrators/test_input_utils.py::test_input_resolution": 0.004065969999942354, "tests/unit/orchestrators/test_input_utils.py::test_input_resolution_fetches_all_run_steps": 0.005411192000110532, "tests/unit/orchestrators/test_input_utils.py::test_input_resolution_with_missing_artifact": 0.003601062000143429, "tests/unit/orchestrators/test_input_utils.py::test_input_resolution_with_missing_step_run": 
0.002686645000039789, "tests/unit/orchestrators/test_output_utils.py::test_output_artifact_preparation": 0.003695467999818902, "tests/unit/orchestrators/test_publish_utils.py::test_pipeline_run_status_computation[step_statuses0-2-failed]": 0.0016374279999809005, "tests/unit/orchestrators/test_publish_utils.py::test_pipeline_run_status_computation[step_statuses1-2-running]": 0.0017610300000114876, "tests/unit/orchestrators/test_publish_utils.py::test_pipeline_run_status_computation[step_statuses2-2-running]": 0.001623127999891949, "tests/unit/orchestrators/test_publish_utils.py::test_pipeline_run_status_computation[step_statuses3-2-completed]": 0.0016058269998211472, "tests/unit/orchestrators/test_publish_utils.py::test_publish_pipeline_run_metadata": 0.004707279000172093, "tests/unit/orchestrators/test_publish_utils.py::test_publish_step_run_metadata": 0.0034275599999773476, "tests/unit/orchestrators/test_publish_utils.py::test_publishing_a_failed_pipeline_run": 0.0026711449999083925, "tests/unit/orchestrators/test_publish_utils.py::test_publishing_a_failed_step_run": 0.0029241480000337106, "tests/unit/orchestrators/test_publish_utils.py::test_publishing_a_successful_step_run": 0.003262055000050168, "tests/unit/orchestrators/test_step_launcher.py::test_step_operator_validation": 0.00267946100007066, "tests/unit/orchestrators/test_step_runner.py::test_loading_unmaterialized_input_artifact": 0.004248679000056654, "tests/unit/orchestrators/test_step_runner.py::test_running_a_failing_step": 0.01114220599993132, "tests/unit/orchestrators/test_step_runner.py::test_running_a_successful_step": 0.011550612000064575, "tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_DAG": 0.0021967369999629227, "tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_empty": 0.00123381999992489, "tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_error_if_cycle": 0.0016820280001184074, 
"tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_ignore_duplicate_child_node": 0.001422623000053136, "tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_ignore_duplicate_parent_node": 0.001495924999858289, "tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_ignore_unknown_child_node": 0.0019485319999148487, "tests/unit/orchestrators/test_topsort.py::test_topsorted_layers_ignore_unknown_parent_node": 0.0014177230000314012, "tests/unit/orchestrators/test_utils.py::test_is_setting_enabled": 0.0018377350000946535, "tests/unit/pipelines/test_base_pipeline.py::test_building_a_pipeline_registers_it": 2.0827393140000368, "tests/unit/pipelines/test_base_pipeline.py::test_calling_a_pipeline_twice_raises_no_exception": 2.875044088999971, "tests/unit/pipelines/test_base_pipeline.py::test_compiling_a_pipeline_merges_build": 4.846769932000029, "tests/unit/pipelines/test_base_pipeline.py::test_compiling_a_pipeline_merges_schedule": 1.0856492860000344, "tests/unit/pipelines/test_base_pipeline.py::test_configure_pipeline_with_invalid_settings_key": 0.001851633999990554, "tests/unit/pipelines/test_base_pipeline.py::test_failure_during_initialization_deletes_placeholder_run": 2.1965958319999572, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_args": 0.0036865690000240647, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_args_and_kwargs": 0.0056098049999491195, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_kwargs": 0.0040285690000700924, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_missing_arg_step_brackets": 0.0021344390002013824, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_missing_key": 0.0032718339999746604, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_missing_kwarg_step_brackets": 0.0022142399999438567, 
"tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_repeated_args": 0.003195858000140106, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_repeated_args_and_kwargs": 0.0029052530001081323, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_repeated_kwargs": 0.003128511999989314, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_too_many_args": 0.004346980000036638, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_too_many_args_and_kwargs": 0.004204477000030238, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_unexpected_key": 0.004190877000041837, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_wrong_arg_type": 0.0025864470001124573, "tests/unit/pipelines/test_base_pipeline.py::test_initialize_pipeline_with_wrong_kwarg_type": 0.0028520299999854615, "tests/unit/pipelines/test_base_pipeline.py::test_latest_version_fetching": 0.005582602999993469, "tests/unit/pipelines/test_base_pipeline.py::test_loading_legacy_pipeline_from_model": 1.9898409629998923, "tests/unit/pipelines/test_base_pipeline.py::test_loading_pipeline_from_old_spec_fails": 0.002297842000075434, "tests/unit/pipelines/test_base_pipeline.py::test_pipeline_configuration": 0.0024151440001105584, "tests/unit/pipelines/test_base_pipeline.py::test_pipeline_decorator_configuration_gets_applied_during_initialization": 0.0022608240001318336, "tests/unit/pipelines/test_base_pipeline.py::test_pipeline_does_not_need_to_call_all_steps": 1.1738324699999794, "tests/unit/pipelines/test_base_pipeline.py::test_pipeline_run_fails_when_required_step_operator_is_missing": 1.0618770370001585, "tests/unit/pipelines/test_base_pipeline.py::test_registering_new_pipeline_version": 2.6634508870000673, "tests/unit/pipelines/test_base_pipeline.py::test_rerunning_deloyment_does_not_fail": 2.328952759999993, 
"tests/unit/pipelines/test_base_pipeline.py::test_reusing_pipeline_version": 1.0781393190000017, "tests/unit/pipelines/test_base_pipeline.py::test_run_configuration_from_code_and_file": 2.232791661999954, "tests/unit/pipelines/test_base_pipeline.py::test_run_configuration_from_file": 2.261903539000059, "tests/unit/pipelines/test_base_pipeline.py::test_run_configuration_in_code": 2.7556423349999477, "tests/unit/pipelines/test_base_pipeline.py::test_running_pipeline_creates_and_uses_placeholder_run": 2.24190580100003, "tests/unit/pipelines/test_base_pipeline.py::test_running_scheduled_pipeline_does_not_create_placeholder_run": 2.2367171859998507, "tests/unit/pipelines/test_base_pipeline.py::test_setting_enable_cache_at_run_level_overrides_all_decorator_values": 2.7571960200000376, "tests/unit/pipelines/test_base_pipeline.py::test_setting_step_parameter_with_config_object": 0.0034191629999895667, "tests/unit/pipelines/test_base_pipeline.py::test_step_can_receive_the_same_input_artifact_multiple_times": 1.354415208999967, "tests/unit/pipelines/test_base_pipeline.py::test_unique_identifier_considers_spec": 0.018638550999980907, "tests/unit/pipelines/test_base_pipeline.py::test_unique_identifier_considers_step_source_code": 0.008591758000193295, "tests/unit/pipelines/test_build_utils.py::test_build_checksum_computation": 0.0038482650002151786, "tests/unit/pipelines/test_build_utils.py::test_build_is_skipped_when_not_required": 0.003366056999993816, "tests/unit/pipelines/test_build_utils.py::test_build_uses_correct_settings": 1.6575305889999754, "tests/unit/pipelines/test_build_utils.py::test_building_with_different_keys_and_identical_settings": 0.021474163999869234, "tests/unit/pipelines/test_build_utils.py::test_building_with_identical_keys_and_different_settings": 0.00413876999994045, "tests/unit/pipelines/test_build_utils.py::test_building_with_identical_keys_and_settings": 0.013183822999963013, 
"tests/unit/pipelines/test_build_utils.py::test_custom_build_verification": 0.007184821999999258, "tests/unit/pipelines/test_build_utils.py::test_finding_existing_build": 0.006568511999944349, "tests/unit/pipelines/test_build_utils.py::test_local_repo_verification": 0.007029318000036255, "tests/unit/pipelines/test_build_utils.py::test_stack_with_container_registry_creates_non_local_build": 0.0131212220001089, "tests/unit/pipelines/test_schedule.py::test_schedule_requires_cron_or_interval": 0.0026453460000084306, "tests/unit/stack/test_stack.py::test_deployment_server_validation": 0.012466986999925211, "tests/unit/stack/test_stack.py::test_docker_builds_collection": 0.00616804100002355, "tests/unit/stack/test_stack.py::test_get_pipeline_run_metadata": 0.0035471820001475862, "tests/unit/stack/test_stack.py::test_get_pipeline_run_metadata_never_raises_errors": 0.0034983819999752086, "tests/unit/stack/test_stack.py::test_get_step_run_metadata": 0.004027292000159832, "tests/unit/stack/test_stack.py::test_get_step_run_metadata_never_raises_errors": 0.0052566210000577485, "tests/unit/stack/test_stack.py::test_initializing_a_stack_from_components": 0.0018439430000398715, "tests/unit/stack/test_stack.py::test_initializing_a_stack_with_missing_components": 0.001261929999941458, "tests/unit/stack/test_stack.py::test_initializing_a_stack_with_wrong_components": 0.0014649320000899024, "tests/unit/stack/test_stack.py::test_requires_remote_server": 0.005678429999989021, "tests/unit/stack/test_stack.py::test_stack_deployment": 0.01689868700009356, "tests/unit/stack/test_stack.py::test_stack_deprovisioning_does_not_fail_if_not_implemented_in_any_component": 0.00604724000004353, "tests/unit/stack/test_stack.py::test_stack_deprovisioning_fails_if_any_component_raises_an_error": 0.00583173400002579, "tests/unit/stack/test_stack.py::test_stack_forwards_deprovisioning_to_all_provisioned_components": 0.006810556999994333, 
"tests/unit/stack/test_stack.py::test_stack_forwards_provisioning_to_all_unprovisioned_components": 0.008050485000012486, "tests/unit/stack/test_stack.py::test_stack_forwards_resuming_to_all_suspended_components": 0.007336267999903612, "tests/unit/stack/test_stack.py::test_stack_forwards_suspending_to_all_running_components": 0.007147066000015911, "tests/unit/stack/test_stack.py::test_stack_prepare_pipeline_deployment": 0.007585473999938586, "tests/unit/stack/test_stack.py::test_stack_provisioning_fails_if_any_component_raises_an_error": 0.008427793000123529, "tests/unit/stack/test_stack.py::test_stack_provisioning_fails_if_stack_component_validation_fails": 0.004741107999961969, "tests/unit/stack/test_stack.py::test_stack_provisioning_status": 0.004471002000059343, "tests/unit/stack/test_stack.py::test_stack_requirements": 0.004920613000081175, "tests/unit/stack/test_stack.py::test_stack_returns_all_its_components": 0.0017663410001205193, "tests/unit/stack/test_stack.py::test_stack_running_status": 0.00466300699997646, "tests/unit/stack/test_stack.py::test_stack_suspending_does_not_fail_if_not_implemented_in_any_component": 0.00527792100001534, "tests/unit/stack/test_stack.py::test_stack_validation_fails_if_a_components_validator_fails": 0.004659206000042104, "tests/unit/stack/test_stack.py::test_stack_validation_succeeds_if_no_component_validator_fails": 0.004627905999996074, "tests/unit/stack/test_stack_component.py::test_stack_component_default_method_implementations": 0.0022620410001081837, "tests/unit/stack/test_stack_component.py::test_stack_component_dict_only_contains_public_attributes": 0.002142639999988205, "tests/unit/stack/test_stack_component.py::test_stack_component_prevents_extra_attributes": 0.00185923400010779, "tests/unit/stack/test_stack_component.py::test_stack_component_prevents_secret_references_for_some_attributes": 0.976249652999968, "tests/unit/stack/test_stack_component.py::test_stack_component_public_attributes_are_immutable": 
0.0018782349999355574, "tests/unit/stack/test_stack_component.py::test_stack_component_secret_reference_resolving": 1.1395136529999945, "tests/unit/stack/test_stack_component.py::test_stack_component_serialization_does_not_resolve_secrets": 0.9606755799999291, "tests/unit/stack/test_stack_validator.py::test_validator_with_custom_stack_validation_function": 0.0026453479999872798, "tests/unit/stack/test_stack_validator.py::test_validator_with_required_components": 0.0016174300000102448, "tests/unit/steps/test_base_step.py::test_base_parameter_subclasses_as_attribute": 0.002480442999967636, "tests/unit/steps/test_base_step.py::test_call_step_with_args": 0.004559083999993163, "tests/unit/steps/test_base_step.py::test_call_step_with_args_and_kwargs": 0.004345080999996753, "tests/unit/steps/test_base_step.py::test_call_step_with_default_materializer_registered": 0.003635965000057695, "tests/unit/steps/test_base_step.py::test_call_step_with_kwargs": 0.004408878999925037, "tests/unit/steps/test_base_step.py::test_call_step_with_missing_key": 0.0041127760000563285, "tests/unit/steps/test_base_step.py::test_call_step_with_too_many_args": 0.004927290000068751, "tests/unit/steps/test_base_step.py::test_call_step_with_too_many_args_and_kwargs": 0.004248973999892769, "tests/unit/steps/test_base_step.py::test_call_step_with_unexpected_key": 0.004141574000072978, "tests/unit/steps/test_base_step.py::test_call_step_with_wrong_arg_type": 0.0045647809999991296, "tests/unit/steps/test_base_step.py::test_call_step_with_wrong_kwarg_type": 0.0042659770000454955, "tests/unit/steps/test_base_step.py::test_calling_a_step_works": 0.00630285800002639, "tests/unit/steps/test_base_step.py::test_configure_pipeline_with_hooks": 2.8311635249998517, "tests/unit/steps/test_base_step.py::test_configure_step_with_failure_hook": 6.400794660000088, "tests/unit/steps/test_base_step.py::test_configure_step_with_invalid_materializer_key_or_source": 0.006178908999913801, 
"tests/unit/steps/test_base_step.py::test_configure_step_with_invalid_parameters": 0.00901376199999504, "tests/unit/steps/test_base_step.py::test_configure_step_with_invalid_settings_key": 0.002579148000108944, "tests/unit/steps/test_base_step.py::test_configure_step_with_success_hook": 5.291967996000039, "tests/unit/steps/test_base_step.py::test_define_step_with_keyword_only_arguments": 0.002707447999910073, "tests/unit/steps/test_base_step.py::test_define_step_with_multiple_contexts": 0.001750943999923038, "tests/unit/steps/test_base_step.py::test_define_step_with_multiple_parameter_classes": 0.0017264319999412692, "tests/unit/steps/test_base_step.py::test_define_step_with_shared_input_and_output_name": 0.0020130350000044928, "tests/unit/steps/test_base_step.py::test_define_step_with_variable_args": 0.0016192299999602255, "tests/unit/steps/test_base_step.py::test_define_step_with_variable_kwargs": 0.0015244269998220261, "tests/unit/steps/test_base_step.py::test_define_step_without_input_annotation": 0.001658929999848624, "tests/unit/steps/test_base_step.py::test_define_step_without_return_annotation": 0.0023025409998354007, "tests/unit/steps/test_base_step.py::test_disable_caching_for_step": 0.002187541000012061, "tests/unit/steps/test_base_step.py::test_enable_caching_for_step": 0.0023999430000003485, "tests/unit/steps/test_base_step.py::test_enable_caching_for_step_with_context": 0.0026322459997345504, "tests/unit/steps/test_base_step.py::test_enabling_a_custom_step_operator_for_a_step": 0.002890553999918666, "tests/unit/steps/test_base_step.py::test_initialize_step_with_params": 0.008885160999966502, "tests/unit/steps/test_base_step.py::test_initialize_step_with_unexpected_config": 0.0021533379998572855, "tests/unit/steps/test_base_step.py::test_returning_an_object_of_the_wrong_type_raises_an_error[wrong_int_output_step_1]": 1.2275318599999991, 
"tests/unit/steps/test_base_step.py::test_returning_an_object_of_the_wrong_type_raises_an_error[wrong_int_output_step_2]": 1.2313470709999592, "tests/unit/steps/test_base_step.py::test_returning_an_object_of_the_wrong_type_raises_an_error[wrong_int_output_step_3]": 1.7119299700000283, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_1]": 1.2259751320000305, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_2]": 1.218020163999995, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_3]": 1.2279808360000288, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_4]": 1.2854675599999155, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_5]": 1.6980654139999842, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_6]": 1.2386240849999695, "tests/unit/steps/test_base_step.py::test_returning_wrong_amount_of_objects_raises_an_error[wrong_num_outputs_step_7]": 1.2381108130000484, "tests/unit/steps/test_base_step.py::test_step_can_have_generic_input_types": 1.4255097310000338, "tests/unit/steps/test_base_step.py::test_step_can_have_raw_artifacts": 1.2395239020000872, "tests/unit/steps/test_base_step.py::test_step_can_have_subscripted_generic_input_types": 1.9145905779998884, "tests/unit/steps/test_base_step.py::test_step_can_output_generic_types[dict_output_step]": 1.2497991050000792, "tests/unit/steps/test_base_step.py::test_step_can_output_generic_types[list_output_step]": 1.2408136669999976, "tests/unit/steps/test_base_step.py::test_step_can_output_subscripted_generic_types[dict_of_str_output_step]": 16.257809218000148, 
"tests/unit/steps/test_base_step.py::test_step_can_output_subscripted_generic_types[list_of_str_output_step]": 1.2508466040001167, "tests/unit/steps/test_base_step.py::test_step_config_allows_none_as_default_value": 0.002980352999884417, "tests/unit/steps/test_base_step.py::test_step_configuration": 0.003613362999999481, "tests/unit/steps/test_base_step.py::test_step_decorator_configuration_gets_applied_during_initialization": 0.0025417450000304598, "tests/unit/steps/test_base_step.py::test_step_decorator_creates_class_in_same_module_as_decorated_function": 0.0016953310000644706, "tests/unit/steps/test_base_step.py::test_step_fails_if_config_parameter_value_is_missing": 0.003279057999975521, "tests/unit/steps/test_base_step.py::test_step_has_no_enable_cache_by_default": 0.0022059390000777057, "tests/unit/steps/test_base_step.py::test_step_resets_global_execution_status_even_if_the_step_crashes": 1.6888718399998197, "tests/unit/steps/test_base_step.py::test_step_sets_global_execution_status_on_environment": 1.187212475000024, "tests/unit/steps/test_base_step.py::test_step_uses_config_class_default_values_if_no_config_is_passed": 0.003988171999822043, "tests/unit/steps/test_base_step.py::test_step_with_context_has_caching_disabled_by_default": 0.002661048999925697, "tests/unit/steps/test_base_step.py::test_string_outputs_do_not_get_split": 1.2382569590000685, "tests/unit/steps/test_base_step.py::test_upstream_step_computation": 0.007782635000012306, "tests/unit/steps/test_base_step_new.py::test_input_validation_inside_pipeline": 2.597557920999975, "tests/unit/steps/test_base_step_new.py::test_input_validation_outside_of_pipeline": 0.007524438999894301, "tests/unit/steps/test_base_step_new.py::test_passing_invalid_parameters": 0.004399480999950356, "tests/unit/steps/test_base_step_new.py::test_passing_valid_parameters": 1.234966983999925, "tests/unit/steps/test_base_step_new.py::test_step_allows_dict_list_annotations": 1.2861679280000544, 
"tests/unit/steps/test_base_step_new.py::test_step_parameter_from_file_and_code_fails_on_conflict": 0.02718060199993033, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[bad_all_none]": 0.002326351999954568, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[bad_id_and_name]": 0.0031056709999575105, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[bad_id_and_value]": 0.00184913999987657, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[bad_value_and_name]": 0.0018749409999827549, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[good_by_id]": 0.002106449000052635, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[good_by_name]": 0.0018330420000438608, "tests/unit/steps/test_external_artifact.py::test_external_artifact_init[good_by_value]": 0.0017958419999786202, "tests/unit/steps/test_external_artifact.py::test_get_artifact_by_id": 0.003797587000008207, "tests/unit/steps/test_external_artifact.py::test_get_artifact_by_pipeline_and_artifact_other_artifact_store": 0.0045234019999043085, "tests/unit/steps/test_external_artifact.py::test_get_artifact_by_value_before_upload_raises": 0.004648806999966837, "tests/unit/steps/test_external_artifact.py::test_upload_by_value": 0.004388101000131428, "tests/unit/steps/test_step_context.py::test_get_step_context": 0.0029990510000743598, "tests/unit/steps/test_step_context.py::test_get_step_context_output_for_non_existent_output_key": 0.0032089539998878536, "tests/unit/steps/test_step_context.py::test_get_step_context_output_for_non_existing_output_key": 0.0031055529999548526, "tests/unit/steps/test_step_context.py::test_get_step_context_output_for_step_with_multiple_outputs": 0.00391386599994803, "tests/unit/steps/test_step_context.py::test_get_step_context_output_for_step_with_no_outputs": 0.0029548490000479433, 
"tests/unit/steps/test_step_context.py::test_get_step_context_output_for_step_with_one_output": 0.003003050999950574, "tests/unit/steps/test_step_context.py::test_initialize_step_context_with_matching_keys": 0.0027126440000984076, "tests/unit/steps/test_step_context.py::test_initialize_step_context_with_mismatched_keys": 0.003029549000075349, "tests/unit/steps/test_step_context.py::test_step_context_is_singleton": 0.0028919489999452708, "tests/unit/steps/test_step_context.py::test_step_context_returns_instance_of_custom_materializer_class": 0.0029404489998796635, "tests/unit/steps/test_utils.py::test_invalid_step_output_annotations[func_with_ambiguous_output_name-ValueError]": 0.001630930000033004, "tests/unit/steps/test_utils.py::test_invalid_step_output_annotations[func_with_duplicate_output_name-RuntimeError]": 0.0026100469999619236, "tests/unit/steps/test_utils.py::test_invalid_step_output_annotations[func_with_ellipsis_annotation-RuntimeError]": 0.002216040000121211, "tests/unit/steps/test_utils.py::test_invalid_step_output_annotations[func_with_multiple_annotations-ValueError]": 0.0016467299999476381, "tests/unit/steps/test_utils.py::test_invalid_step_output_annotations[func_with_multiple_artifact_configs-ValueError]": 0.0016170290000445675, "tests/unit/steps/test_utils.py::test_invalid_step_output_annotations[func_with_non_string_annotation-ValueError]": 0.0016387320000603722, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_annotated_tuple_output-expected_output7]": 0.0017185330000302201, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_multiple_annotated_outputs-expected_output9]": 0.002490347000048132, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_multiple_annotated_outputs_and_artifact_config-expected_output10]": 0.002551046000007773, 
"tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_multiple_annotated_outputs_and_deployment_artifact_config-expected_output12]": 0.0037057689999073773, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_multiple_annotated_outputs_and_model_artifact_config-expected_output11]": 0.1495098580001013, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_multiple_outputs-expected_output8]": 0.0023503419999997277, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_no_output_annotation_and_no_return-expected_output0]": 0.0022109400000545065, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_no_output_annotation_and_return-expected_output1]": 0.002315641999871332, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_single_annotated_output-expected_output3]": 0.0018755359998294807, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_single_artifact_config_output-expected_output4]": 0.001721130999953857, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_single_output-expected_output2]": 0.00167913099994621, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_single_output_with_both_name_and_artifact_config-expected_output5]": 0.001747631999933219, "tests/unit/steps/test_utils.py::test_step_output_annotation_parsing[func_with_tuple_output-expected_output6]": 0.002348943999891162, "tests/unit/steps/test_utils.py::test_type_annotation_resolving": 0.0012946229999215575, "tests/unit/test_constants.py::test_handle_int_env_var": 0.0018878360000371686, "tests/unit/test_environment.py::test_environment_component_activation": 0.0016163310000365527, "tests/unit/test_environment.py::test_environment_is_singleton": 0.0014245270001538302, "tests/unit/test_environment.py::test_environment_platform_info_correctness": 0.0012325220001230264, 
"tests/unit/test_environment.py::test_get_run_environment_dict": 0.0013323240000318037, "tests/unit/test_environment.py::test_ipython_terminal_detection_when_not_installed": 0.0016662309998309865, "tests/unit/test_environment.py::test_step_is_running": 0.0019992369999499715, "tests/unit/test_flavor.py::test_docs_url": 0.002208137000138777, "tests/unit/test_flavor.py::test_sdk_docs_url": 0.0013261220000231333, "tests/unit/test_general.py::test_debug_mode_enabled_for_tests": 0.002384438000035516, "tests/unit/utils/test_analytics_utils.py::test_analytics_event": 2.248736941000061, "tests/unit/utils/test_code_repository_utils.py::test_finding_active_code_repo": 0.010576194999998734, "tests/unit/utils/test_code_repository_utils.py::test_setting_a_custom_active_code_repo": 0.004654287000107615, "tests/unit/utils/test_dashboard_utils.py::test_get_run_url_works_with_mocked_server": 0.0036793689999967683, "tests/unit/utils/test_dashboard_utils.py::test_get_run_url_works_with_mocked_server_with_runs": 0.003250361000027624, "tests/unit/utils/test_dashboard_utils.py::test_get_run_url_works_without_server": 0.2700917629998685, "tests/unit/utils/test_deprecation_utils.py::test_pydantic_model_attribute_deprecation": 0.004499774000009893, "tests/unit/utils/test_dict_utils.py::test_recursive_update_fails_when_original_is_not_a_dict": 0.0012203289998069522, "tests/unit/utils/test_dict_utils.py::test_recursive_update_fails_when_update_is_not_a_dict": 0.0011868269999695258, "tests/unit/utils/test_dict_utils.py::test_recursive_update_overrides_if_types_dont_match": 0.00120232799997666, "tests/unit/utils/test_dict_utils.py::test_recursive_update_works": 0.0017341400000532303, "tests/unit/utils/test_dict_utils.py::test_recursive_update_works_three_levels_down": 0.0011711269999068463, "tests/unit/utils/test_dict_utils.py::test_recursive_update_works_when_original_is_empty": 0.001185226999950828, "tests/unit/utils/test_dict_utils.py::test_recursive_update_works_when_update_is_empty": 
0.0011841269998740245, "tests/unit/utils/test_dict_utils.py::test_remove_none_values_method_fails_when_recursive_flag_not_used": 0.0011598270000376942, "tests/unit/utils/test_dict_utils.py::test_remove_none_values_method_works": 0.0011636259999932008, "tests/unit/utils/test_dict_utils.py::test_remove_none_values_method_works_when_recursive_flag_used": 0.0014823340001157703, "tests/unit/utils/test_enum_utils.py::test_enum_utils": 0.0021374390000801213, "tests/unit/utils/test_env_utils.py::test_split_reconstruct_large_env_vars": 0.0013246209999806524, "tests/unit/utils/test_env_utils.py::test_split_too_large_env_var_fails": 0.0019147320000456602, "tests/unit/utils/test_filesync_model.py::test_concurrent_updates": 0.0055702050000263625, "tests/unit/utils/test_filesync_model.py::test_file_sync_model_works": 0.004312679999998181, "tests/unit/utils/test_filesync_model.py::test_invalid_config_file": 0.0027799520000826305, "tests/unit/utils/test_filesync_model.py::test_missing_config_file": 0.0023188430000118387, "tests/unit/utils/test_integration_utils.py::test_parse_requirement": 0.0022741430000223772, "tests/unit/utils/test_io_utils.py::test_copy_dir_copies_dir_from_source_to_destination": 0.002401745999918603, "tests/unit/utils/test_io_utils.py::test_copy_dir_overwriting_works": 0.002713249999942491, "tests/unit/utils/test_io_utils.py::test_copy_dir_throws_error_if_overwriting": 0.002717752000080509, "tests/unit/utils/test_io_utils.py::test_copy_dir_works": 0.0024327450000782846, "tests/unit/utils/test_io_utils.py::test_create_dir_if_not_exists": 0.0029493549999415336, "tests/unit/utils/test_io_utils.py::test_create_dir_recursive_if_not_exists": 0.0020961390000593383, "tests/unit/utils/test_io_utils.py::test_create_file_if_not_exists": 0.0022053409999216456, "tests/unit/utils/test_io_utils.py::test_create_file_if_not_exists_does_not_overwrite": 0.002269342000090546, "tests/unit/utils/test_io_utils.py::test_find_files_returns_generator_object_when_file_present": 
0.002524647999962326, "tests/unit/utils/test_io_utils.py::test_find_files_when_file_absent": 0.0025451469999779874, "tests/unit/utils/test_io_utils.py::test_find_files_when_file_present": 0.002471845999934885, "tests/unit/utils/test_io_utils.py::test_get_global_config_directory_works": 0.0012641240000448306, "tests/unit/utils/test_io_utils.py::test_get_global_config_directory_works_with_env_var": 0.0013010249999751977, "tests/unit/utils/test_io_utils.py::test_get_grandparent_gets_the_grandparent_directory": 0.0021462400000018533, "tests/unit/utils/test_io_utils.py::test_get_parent_gets_the_parent_directory": 0.002042738999989524, "tests/unit/utils/test_io_utils.py::test_is_remote_when_using_non_remote_prefix": 0.28504924399987885, "tests/unit/utils/test_io_utils.py::test_is_remote_when_using_remote_prefix[abfs://]": 0.0014210269999921366, "tests/unit/utils/test_io_utils.py::test_is_remote_when_using_remote_prefix[az://]": 0.0013989259999789283, "tests/unit/utils/test_io_utils.py::test_is_remote_when_using_remote_prefix[gs://]": 0.0018900349999739774, "tests/unit/utils/test_io_utils.py::test_is_remote_when_using_remote_prefix[hdfs://]": 0.0013698249998697065, "tests/unit/utils/test_io_utils.py::test_is_remote_when_using_remote_prefix[s3://]": 0.001445426999907795, "tests/unit/utils/test_io_utils.py::test_is_root_when_false": 0.0019772360000160916, "tests/unit/utils/test_io_utils.py::test_is_root_when_true": 0.0012230220000901681, "tests/unit/utils/test_io_utils.py::test_move_moves_a_directory_from_source_to_destination": 0.002251442999977371, "tests/unit/utils/test_io_utils.py::test_move_moves_a_file_from_source_to_destination": 0.0022469410000667267, "tests/unit/utils/test_io_utils.py::test_read_file_contents_as_string_raises_error_when_file_not_exists": 0.002433845999917139, "tests/unit/utils/test_io_utils.py::test_read_file_contents_as_string_works": 0.0020210380000662553, "tests/unit/utils/test_io_utils.py::test_resolve_relative_path": 0.0019732369999019284, 
"tests/unit/utils/test_io_utils.py::test_write_file_contents_as_string_fails_with_non_string_types": 0.001999538000063694, "tests/unit/utils/test_io_utils.py::test_write_file_contents_as_string_works": 0.0021266399999149144, "tests/unit/utils/test_networking_utils.py::test_find_available_port_works": 0.0015159350000431004, "tests/unit/utils/test_networking_utils.py::test_port_available_works": 0.002061952000076417, "tests/unit/utils/test_networking_utils.py::test_port_is_open_on_local_host_works": 0.0015869370001837524, "tests/unit/utils/test_networking_utils.py::test_replace_internal_hostname_works": 4.472096247999957, "tests/unit/utils/test_networking_utils.py::test_replace_localhost_returns_url_when_running_outside_container": 0.0012534290000303372, "tests/unit/utils/test_networking_utils.py::test_scan_for_available_port_works": 0.0015977360000078988, "tests/unit/utils/test_pipeline_docker_image_builder.py::test_build_skipping": 0.001508629000113615, "tests/unit/utils/test_pipeline_docker_image_builder.py::test_check_user_is_set": 0.001903536000099848, "tests/unit/utils/test_pipeline_docker_image_builder.py::test_requirements_file_generation": 0.006735126000080527, "tests/unit/utils/test_pydantic_utils.py::test_template_generator_works": 0.004859711000108291, "tests/unit/utils/test_pydantic_utils.py::test_update_model_works": 0.0019891449999249744, "tests/unit/utils/test_pydantic_utils.py::test_update_model_works_for_exclude_unset": 0.0018025410000745978, "tests/unit/utils/test_pydantic_utils.py::test_update_model_works_recursively": 0.002104348999864669, "tests/unit/utils/test_pydantic_utils.py::test_update_model_works_with_dict": 0.0017817399999557892, "tests/unit/utils/test_pydantic_utils.py::test_update_model_works_with_none_exclusion": 0.0025407569999060797, "tests/unit/utils/test_pydantic_utils.py::test_yaml_serialization_mixin": 0.0045194040000069435, "tests/unit/utils/test_secret_utils.py::test_is_secret_reference": 0.5846072949999552, 
"tests/unit/utils/test_secret_utils.py::test_secret_field": 0.0015603370000007999, "tests/unit/utils/test_secret_utils.py::test_secret_field_detection": 0.002032846999782123, "tests/unit/utils/test_secret_utils.py::test_secret_reference_parsing": 0.7519364300001143, "tests/unit/utils/test_singleton.py::test_singleton_class_init_gets_called_once": 0.0025342469999714012, "tests/unit/utils/test_singleton.py::test_singleton_classes_only_create_one_instance": 0.001239123000118525, "tests/unit/utils/test_singleton.py::test_singleton_instance_clearing": 0.0012470229999053117, "tests/unit/utils/test_singleton.py::test_singleton_instance_exist": 0.001831232999961685, "tests/unit/utils/test_singleton.py::test_singleton_metaclass_can_be_used_for_multiple_classes": 0.0014418270000078337, "tests/unit/utils/test_source_code_utils.py::test_get_hashed_source": 0.014614169000083166, "tests/unit/utils/test_source_code_utils.py::test_get_source": 0.011784815999931197, "tests/unit/utils/test_source_utils.py::test_basic_source_loading": 0.004630977000033454, "tests/unit/utils/test_source_utils.py::test_basic_source_resolving": 0.4175518759999477, "tests/unit/utils/test_source_utils.py::test_module_type_detection": 0.005673897000065153, "tests/unit/utils/test_source_utils.py::test_package_utility_functions": 0.8014568820001386, "tests/unit/utils/test_source_utils.py::test_prepend_python_path": 0.0013806240000349135, "tests/unit/utils/test_source_utils.py::test_setting_a_custom_source_root": 0.0034454590000905227, "tests/unit/utils/test_source_utils.py::test_source_resolving_fails_for_non_toplevel_classes_and_functions": 0.002628345000061927, "tests/unit/utils/test_source_utils.py::test_user_source_loading_prepends_source_root": 0.008591945000034684, "tests/unit/utils/test_source_utils.py::test_validating_source_classes": 0.0032087549999459952, "tests/unit/utils/test_string_utils.py::test_get_human_readable_filesize_formats_correctly": 0.0013718259998540816, 
"tests/unit/utils/test_string_utils.py::test_get_human_readable_time_formats_correctly": 0.001367928000036045, "tests/unit/utils/test_string_utils.py::test_random_str_is_random": 1.0267680719999817, "tests/unit/utils/test_uuid_utils.py::test_generate_uuid_from_string_works": 0.0013627220000671514, "tests/unit/utils/test_uuid_utils.py::test_is_valid_uuid_works": 0.0012238209999395622, "tests/unit/utils/test_uuid_utils.py::test_parse_name_or_uuid_works": 0.0017927310000231955, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-12-12]": 0.008236787999976514, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-12-17]": 0.008296790000031251, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-12-3]": 0.0037354849999928774, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-12-8]": 0.006972859999905268, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-17-12]": 0.008308991000035348, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-17-17]": 0.008261890000085259, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-17-3]": 0.0038481889999957275, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-17-8]": 0.006971559999897181, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-3-12]": 0.004046593000111898, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-3-17]": 0.00399479199995767, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-3-3]": 0.0024977580001177557, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-3-8]": 
0.003473280000093837, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-8-12]": 0.006612350999944283, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-8-17]": 0.0066295520000494434, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-8-3]": 0.003345877000015207, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-10-8-8]": 0.005583826999895791, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-12-12]": 0.004591906000086965, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-12-17]": 0.0044155009999258255, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-12-3]": 0.003743786000086402, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-12-8]": 0.004583404999834784, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-17-12]": 0.004494604000115032, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-17-17]": 0.0046028050001041265, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-17-3]": 0.0037397859999828142, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-17-8]": 0.0045717050002167525, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-3-12]": 0.0027538609999737673, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-3-17]": 0.0029399679998505235, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-3-3]": 0.0026376609999942957, 
"tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-3-8]": 0.002789865000067948, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-8-12]": 0.00391908999995394, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-8-17]": 0.0038698890000432584, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-8-3]": 0.003259474999936174, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[10-5-8-8]": 0.004568504999951983, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-12-12]": 0.006473148000168294, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-12-17]": 0.005483225999910246, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-12-3]": 0.003065870000000359, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-12-8]": 0.004718507999882604, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-17-12]": 0.005727532000037172, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-17-17]": 0.005587028000036298, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-17-3]": 0.002976568999997653, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-17-8]": 0.004853611000157798, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-3-12]": 0.004051291999985551, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-3-17]": 0.004107693999912954, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-3-3]": 
0.0027273619998595677, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-3-8]": 0.003456878000065444, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-8-12]": 0.0055106259998183305, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-8-17]": 0.005658528000026308, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-8-3]": 0.0029608669999561243, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-10-8-8]": 0.004841610000084984, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-12-12]": 0.0033924779999097154, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-12-17]": 0.0034051769999905446, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-12-3]": 0.003109871999981806, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-12-8]": 0.003400377999923876, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-17-12]": 0.003430278999985603, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-17-17]": 0.0034366789999467073, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-17-3]": 0.0030919700000140438, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-17-8]": 0.0034111780000785075, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-3-12]": 0.0028066649999800575, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-3-17]": 0.002878466999959528, 
"tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-3-3]": 0.002622459000122035, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-3-8]": 0.002817664999952285, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-8-12]": 0.003404079000006277, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-8-17]": 0.0034194790000583453, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-8-3]": 0.0029711679999309126, "tests/unit/utils/test_visualization_utils.py::test_format_large_csv_visualization_as_html[5-5-8-8]": 0.003538979999802905, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-0]": 0.0016206379999630371, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-10]": 0.0016464379999661105, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-1]": 0.0016326369999433155, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-2]": 0.001625337999939802, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-3]": 0.0017477399999279442, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-4]": 0.0016498380000484758, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-5]": 0.0016146380000918725, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-6]": 0.0016121370000519164, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-7]": 0.0016299370000751878, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-8]": 0.0016309369999589762, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[0-9]": 0.0016359370000600393, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-0]": 0.0016685379999898942, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-10]": 0.0026051600000300823, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-1]": 0.001751939999962815, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-2]": 0.0018166419999943173, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-3]": 0.0019262439999465641, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-4]": 0.00201324600004682, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-5]": 0.0021140479999530726, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-6]": 0.002220651000016005, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-7]": 0.002296553000064705, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-8]": 0.002434757000060017, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[1-9]": 0.0024918560000060097, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-0]": 0.0016676390000611718, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-10]": 0.0073280669998894155, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-1]": 0.002214349999917431, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-2]": 0.002782964999937576, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-3]": 0.004036892999920383, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-4]": 0.0038599890000341475, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-5]": 0.004605104999996001, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-6]": 0.005906035999942105, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-7]": 0.005663029999936953, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-8]": 0.006109441000035076, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[10-9]": 0.006836955999915517, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-0]": 0.0016278379999903336, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-10]": 0.003220773999942139, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-1]": 0.0017885419999856822, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-2]": 0.0019707460000972787, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-3]": 0.002082646999951976, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-4]": 0.002233350999972572, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-5]": 0.002355153999928916, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-6]": 0.0025516580000157774, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-7]": 0.0026400589999866497, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-8]": 0.0027979630000345423, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[2-9]": 0.0030929710000009436, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-0]": 0.0017919399999755115, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-10]": 0.003583982999998625, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-1]": 0.0018584429999464191, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-2]": 0.002031646000091314, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-3]": 0.002237850999904367, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-4]": 0.0024468569999953615, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-5]": 0.0027115629999343582, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-6]": 0.0027915640000628628, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-7]": 0.003085369999894283, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-8]": 0.0032369750000498243, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[3-9]": 0.003535782000085419, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-0]": 0.0016413380001267797, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-10]": 0.004303597999864905, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-1]": 0.0018848439999601396, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-2]": 0.0021544480000557087, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-3]": 0.00236335300007795, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-4]": 0.002638660999991771, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-5]": 0.0028571659998988252, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-6]": 0.00310847100013234, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-7]": 0.0033891780000203653, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-8]": 0.00368798500005596, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[4-9]": 0.003883287999883578, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-0]": 0.001608936999900834, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-10]": 0.004658805999952165, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-1]": 0.00194754499989358, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-2]": 0.002226849999942715, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-3]": 0.00255305899997893, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-4]": 0.0028634649997911765, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-5]": 0.0031834720000460948, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-6]": 0.0034827800000130082, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-7]": 0.0037081850000504346, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-8]": 0.004158294000035312, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[5-9]": 0.0044253019999587195, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-0]": 0.0016149369998856855, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-10]": 0.005380823999985296, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-1]": 0.001985044999855745, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-2]": 0.003235873999983596, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-3]": 0.0026590609999175285, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-4]": 0.0030644710001297426, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-5]": 0.003398677000063799, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-6]": 0.009111908999898333, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-7]": 0.004090593999876546, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-8]": 0.004536603000019568, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[6-9]": 0.004899612999906822, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-0]": 0.0016515379999191282, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-10]": 0.005794231999971089, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-1]": 0.002083247999962623, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-2]": 0.0024504570001226966, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-3]": 0.002897966000091401, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-4]": 0.003282076000004963, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-5]": 0.0036444830000164075, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-6]": 0.004057591999981014, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-7]": 0.004593705999923259, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-8]": 0.004985414000088895, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[7-9]": 0.005341422000014973, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-0]": 0.0016443379998918317, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-10]": 0.006264044000090507, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-1]": 0.0021086489999788682, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-2]": 0.0025716590000683937, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-3]": 0.0031239720000257876, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-4]": 0.0034464800000932883, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-5]": 0.00400959299997794, 
"tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-6]": 0.004351099000018621, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-7]": 0.00493361300016204, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-8]": 0.005317821999938133, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[8-9]": 0.005900136000036582, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-0]": 0.0016294380000090314, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-10]": 0.007540572000038992, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-1]": 0.0023006509999277114, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-2]": 0.0035666809999383986, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-3]": 0.003321077000009609, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-4]": 0.0036655840000321405, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-5]": 0.0042056949999960125, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-6]": 0.004843811000000642, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-7]": 0.005306421999989652, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-8]": 0.00588353399996322, "tests/unit/utils/test_visualization_utils.py::test_format_small_csv_visualization_as_html[9-9]": 0.00630414500005827, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[AuthorizationException]": 0.0018362339999384858, 
"tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[DoesNotExistException]": 0.001845733999971344, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[DuplicateRunNameError]": 0.001963036999995893, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[EntityExistsError]": 0.0018322329999591602, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[IllegalOperationError]": 0.001968836000060037, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[KeyError]": 0.001901636000070539, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[NotImplementedError]": 0.002010636999898452, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[RuntimeError]": 0.001894536000008884, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[SecretExistsError]": 0.11500492299990128, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[StackComponentExistsError]": 0.0018241340001168282, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[StackExistsError]": 0.0019228349999593775, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[ValidationError]": 0.0018197319999444517, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[ValueError]": 0.0018613349999441198, "tests/unit/zen_server/test_exceptions.py::test_http_exception_inheritance[ZenKeyError]": 0.0018251340000006167, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[AuthorizationException]": 0.0019854360000408633, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[DoesNotExistException]": 0.0019254360000786619, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[DuplicateRunNameError]": 0.0019554370001060306, 
"tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[EntityExistsError]": 0.001897232999795051, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[IllegalOperationError]": 0.0019070350001584302, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[KeyError]": 0.0019613370000115538, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[NotImplementedError]": 0.0017856319999509651, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[RuntimeError]": 0.0018077340000672848, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[SecretExistsError]": 0.001855333000094106, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[StackComponentExistsError]": 0.0019280360000948349, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[StackExistsError]": 0.0017756329999656373, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[ValidationError0]": 0.002197941000076753, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[ValidationError1]": 0.0019275360000392538, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[ValueError]": 0.0019891370001232644, "tests/unit/zen_server/test_exceptions.py::test_http_exception_reconstruction[ZenKeyError]": 0.001928435000081663, "tests/unit/zen_server/test_exceptions.py::test_reconstruct_unknown_exception_as_runtime_error": 0.001687231999881078, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[400-ValueError]": 0.002821951000100853, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[401-AuthorizationException]": 0.0028086519999988013, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[403-IllegalOperationError]": 0.004015274999915164, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[404-KeyError]": 
0.002766350000001694, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[409-EntityExistsError]": 0.002819851000026574, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[422-ValueError]": 0.0027985520000584074, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[500-RuntimeError]": 0.003087957999923674, "tests/unit/zen_server/test_exceptions.py::test_unpack_unknown_error[501-NotImplementedError]": 0.003477364000104899, "tests/unit/zen_server/test_jwt.py::test_encode_decode_works": 0.001691929000116943, "tests/unit/zen_server/test_jwt.py::test_token_expiration": 11.016646934999926, "tests/unit/zen_server/test_jwt.py::test_token_no_audience": 0.0016434289999551766, "tests/unit/zen_server/test_jwt.py::test_token_no_issuer": 0.0023279380001213212, "tests/unit/zen_server/test_jwt.py::test_token_wrong_audience": 0.0018366299999570401, "tests/unit/zen_server/test_jwt.py::test_token_wrong_issuer": 0.0016272249998792176, "tests/unit/zen_server/test_jwt.py::test_token_wrong_signature": 0.0018100299998877745 }
0
cloned_public_repos
cloned_public_repos/zenml/.darglint
[darglint]
docstring_style=google
0
cloned_public_repos
cloned_public_repos/zenml/zen-test
#!/usr/bin/env python
import sys

from tests.harness.cli.cli import cli

if __name__ == "__main__":
    sys.exit(cli())
0
cloned_public_repos
cloned_public_repos/zenml/.trivyignore
# Instructions: add rules here to skip reporting vulnerabilities that are not
# relevant to the runtime environment. The rules are based on the CVE ID.
# The CVE IDs can be found in the Trivy scan output.
#
# For every vulnerability that you want to skip, please include the following
# information:
# - Package: the name of the package reported by Trivy
# - Vulnerability: description of the vulnerability as reported by Trivy
# - Reference: a URL that provides more information about the vulnerability
# - Reason: why the vulnerability is not relevant to the runtime environment
#
# NOTE: vulnerabilities that haven't been fixed yet should not be ignored here
# because they are filtered out globally by Trivy. This file is only for
# vulnerabilities with available fixes.
#
# For more information, see https://aquasecurity.github.io/trivy/v0.50/docs/configuration/filtering/#trivyignore

# Package: pip
# Vulnerability: Mercurial configuration injectable in repo revision
# when installing via pip
# Reference: https://avd.aquasec.com/nvd/cve-2023-5752
# Reason: pip is not used in the runtime environment
CVE-2023-5752

# Package: pypa-setuptools
# Vulnerability: Regular Expression Denial of Service (ReDoS) in
# package_index.py.
# Reference: https://avd.aquasec.com/nvd/cve-2022-40897
# Reason: setuptools is not used in the runtime environment
CVE-2022-40897
0
cloned_public_repos
cloned_public_repos/zenml/trivy-secret.yaml
allow-rules:
  # Instructions: add rules here to skip false positive secrets detected by
  # trivy. For more information, see https://aquasecurity.github.io/trivy/latest/docs/scanner/secret/#configuration
  #
  # Example:
  # - id: my-rule
  #   description: skip my secret in my metadata
  #   path: .*/my-package-1\.2\.3\.dist-info/METADATA

  # Disable false positive secrets detected in the PyJWT package metadata
  # (see https://github.com/aquasecurity/trivy/discussions/5772).
  - id: jwt-token
    description: skip JWT secret in PyJWT package metadata
    path: .*/PyJWT-2\..\..\.dist-info/METADATA

  # Disable false positive secrets detected in the aws_profile_manager
  # package metadata
  - id: aws-profile-manager-access-key
    description: skip AWS access key in aws_profile_manager package metadata
    path: .*/aws_profile_manager-0\.7\.3\.dist-info/METADATA
cloned_public_repos/zenml/infra/README.md
# Assisted ZenML Stack Deployment

These are a set of scripts that can be used to provision infrastructure for
**ZenML stacks directly in your browser** in AWS and GCP with minimal user
input. The scripts are used by the ZenML CLI and dashboard stack deployment
feature to not only provision the infrastructure but also to configure the
ZenML stack, components and service connectors with the necessary credentials.

## AWS

A CloudFormation template is used to provision the infrastructure in AWS. The
template is parameterized and the user is prompted to provide the necessary
values during the CLI / dashboard deployment process. The values are embedded
in a CloudFormation template creation URL that the user can follow to deploy
the stack.

Files:

* [aws/aws-ecr-s3-sagemaker.yaml](aws/aws-ecr-s3-sagemaker.yaml): CloudFormation
template for provisioning ECR and S3 resources along with an IAM user, an IAM
role and an AWS secret key. The template also uses a Lambda function to
register the ZenML stack with the ZenML server.

The CloudFormation template is uploaded to AWS S3 using a GitHub action during
the release process at the following location:
https://zenml-cf-templates.s3.eu-central-1.amazonaws.com/aws-ecr-s3-sagemaker.yaml

## GCP

A Deployment Manager template is used to provision the infrastructure in GCP.
The template is parameterized and the user is prompted to provide the necessary
values during the CLI / dashboard deployment process. Given that there is no
way to trigger a Deployment Manager template creation directly using a URL, a
GCP Cloud Shell session is opened instead and the user is provided with a set
of configuration values that they have to manually copy and paste into the
deployment script.

Files:

* [gcp/gcp-gar-gcs-vertex.jinja](gcp/gcp-gar-gcs-vertex.jinja): Deployment
Manager template for provisioning GCS and GAR resources along with a GCP
service account and credentials. The template also uses a Cloud Function
instance to register the ZenML stack with the ZenML server.
* [gcp/gcp-gar-gcs-vertex-deploy.sh](gcp/gcp-gar-gcs-vertex-deploy.sh):
Deployment script that the user must run in the Cloud Shell to deploy the
stack. In addition to deploying the Deployment Manager template, the script
also takes care of enabling the necessary GCP APIs and configuring the
necessary permissions for the various service accounts involved.
* [gcp/gcp-gar-gcs-vertex.md](gcp/gcp-gar-gcs-vertex.md): A markdown file that
provides the user with instructions on how to deploy the stack using the
deployment script. This is powered by the tutorial walkthrough feature in the
Google Cloud Shell.
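The CloudFormation "quick-create" URL mentioned above can be assembled programmatically. The sketch below uses the documented `param_<ParameterName>` convention for pre-filling template parameters; the parameter value shown is only an illustrative example:

```python
from urllib.parse import urlencode

# Sketch of assembling a CloudFormation quick-create URL. The template URL is
# the published ZenML template; the stack name and parameter value below are
# illustrative placeholders chosen for this example.
template_url = (
    "https://zenml-cf-templates.s3.eu-central-1.amazonaws.com/"
    "aws-ecr-s3-sagemaker.yaml"
)
query = urlencode(
    {
        "stackName": "zenml-stack",
        "templateURL": template_url,
        "param_ResourceName": "zenml-stack-01",
    }
)
url = (
    "https://console.aws.amazon.com/cloudformation/home"
    f"?region=eu-central-1#/stacks/create/review?{query}"
)
print(url)
```

Following such a URL opens the CloudFormation console with the template and parameter values already filled in, which is exactly how the CLI / dashboard flow hands off to the browser.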
cloned_public_repos/zenml/infra/aws/aws-ecr-s3-sagemaker.yaml
# Access at: https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/create/review?stackName=zenml-stack&templateURL=https://zenml-cf-templates.s3.eu-central-1.amazonaws.com/aws-ecr-s3-sagemaker.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: |
  This CloudFormation template creates all the resources necessary for a basic
  AWS ZenML Stack: a private S3 bucket, a private ECR registry, an IAM role
  with all the necessary permissions to access these resources from ZenML and
  an IAM user with an access key. The template registers a full ZenML stack
  linked to the provisioned resources in the ZenML Server.
Parameters:
  ResourceName:
    Type: String
    Description: |
      Unique string value to use to name all resources (e.g. zenml-stack-01).
      Can include lowercase alphanumeric characters and hyphens and must be
      between 8-32 characters in length.
    MinLength: 8
    MaxLength: 32
    AllowedPattern: "[a-z0-9-]+"
    ConstraintDescription: |
      Must be 8-32 characters in length containing only lowercase alphanumeric
      characters and hyphens.
  ZenMLServerURL:
    Type: String
    Description: |
      URL to the ZenML Server where the stack will be registered. If not
      provided, the stack will not be registered.
    AllowedPattern: "^(https?://.*)?$"
    ConstraintDescription: Must be a valid URL starting with http:// or https://
    Default: ""
  ZenMLServerAPIToken:
    Type: String
    Description: |
      API token to use to authenticate with the ZenML Server. If not provided,
      the stack will not be registered.
    Default: ""
  TagName:
    Type: String
    Description: "The name of a tag to apply to all resources"
    Default: "project"
  TagValue:
    Type: String
    Description: "The value of the tag to apply to all resources"
    Default: "zenml"
  CodeBuild:
    Type: String
    AllowedValues:
      - true
      - false
    Description: |
      Whether to provision a CodeBuild project as the image builder for the
      stack. Only supported for ZenML Server versions above 0.70.0.
    Default: false
Conditions:
  RegisterZenMLStack: !And
    - !Not [ !Equals [ !Ref ZenMLServerURL, "" ] ]
    - !Not [ !Equals [ !Ref ZenMLServerAPIToken, "" ] ]
  RegisterCodeBuild: !Equals [ !Ref CodeBuild, true ]
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${ResourceName}-${AWS::AccountId}'
      AccessControl: Private
      Tags:
        - Key: !Ref TagName
          Value: !Ref TagValue
  ECRRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Sub '${ResourceName}'
      Tags:
        - Key: !Ref TagName
          Value: !Ref TagValue
  CodeBuildProject:
    Condition: RegisterCodeBuild
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub '${ResourceName}'
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: NO_ARTIFACTS
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: bentolor/docker-dind-awscli
        PrivilegedMode: false
      Source:
        Type: S3
        Location: !Sub '${S3Bucket}/codebuild'
      TimeoutInMinutes: 20
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
          GroupName: !Sub '/aws/codebuild/${ResourceName}'
  IAMUser:
    Type: AWS::IAM::User
    Properties:
      UserName: !Sub '${ResourceName}'
      Policies:
        - PolicyName: AssumeRole
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: sts:AssumeRole
                Resource: "*"
      Tags:
        - Key: !Ref TagName
          Value: !Ref TagValue
  IAMUserAccessKey:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref IAMUser
  StackAccessRole:
    Type: AWS::IAM::Role
    DependsOn: IAMUser
    Properties:
      RoleName: !Sub '${ResourceName}'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Sub '${IAMUser.Arn}'
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:ListBucket'
                  - 's3:GetObject'
                  - 's3:PutObject'
                  - 's3:DeleteObject'
                  - 's3:GetBucketVersioning'
                  - 's3:ListBucketVersions'
                  - 's3:DeleteObjectVersion'
                Resource:
                  - !Sub '${S3Bucket.Arn}'
                  - !Sub '${S3Bucket.Arn}/*'
        - PolicyName: ECRPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'ecr:DescribeRegistry'
                  - 'ecr:BatchGetImage'
                  - 'ecr:DescribeImages'
                  - 'ecr:BatchCheckLayerAvailability'
                  - 'ecr:GetDownloadUrlForLayer'
                  - 'ecr:InitiateLayerUpload'
                  - 'ecr:UploadLayerPart'
                  - 'ecr:CompleteLayerUpload'
                  - 'ecr:PutImage'
                Resource: !Sub '${ECRRepository.Arn}'
              - Effect: Allow
                Action:
                  - 'ecr:GetAuthorizationToken'
                Resource: '*'
              # NOTE: this is still required for ZenML to work with ECR
              - Effect: Allow
                Action:
                  - 'ecr:DescribeRepositories'
                  - 'ecr:ListRepositories'
                Resource: !Sub 'arn:aws:ecr:${AWS::Region}:${AWS::AccountId}:repository/*'
        - PolicyName: SageMakerPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              # Allow this role to create, start and monitor SageMaker
              # pipelines
              - Effect: Allow
                Action:
                  - 'sagemaker:CreatePipeline'
                  - 'sagemaker:StartPipelineExecution'
                  - 'sagemaker:DescribePipeline'
                  - 'sagemaker:DescribePipelineExecution'
                Resource: '*'
              # Allow this role to create, start and monitor SageMaker
              # training jobs (required for the step operator)
              - Effect: Allow
                Action:
                  - 'sagemaker:CreateTrainingJob'
                  - 'sagemaker:DescribeTrainingJob'
                  - 'logs:Describe*'
                  - 'logs:GetLogEvents'
                Resource: '*'
              # Allow this role to pass the SageMaker execution role to the
              # pipeline
              - Effect: Allow
                Action: iam:PassRole
                Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/${ResourceName}-sagemaker'
        - !If
          - RegisterCodeBuild
          - PolicyName: CodeBuildPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                # Allow this role to start and monitor CodeBuild project
                # builds
                - Effect: Allow
                  Action:
                    - 'codebuild:StartBuild'
                    - 'codebuild:BatchGetBuilds'
                  Resource: !Sub 'arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/${ResourceName}'
          - !Ref 'AWS::NoValue'
  SageMakerRuntimeRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${ResourceName}-sagemaker'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: sagemaker.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: SageMakerRuntimePolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:GetObject'
                  - 's3:PutObject'
                  - 's3:DeleteObject'
                  - 's3:AbortMultipartUpload'
                Resource:
                  - !Sub '${S3Bucket.Arn}'
                  - !Sub '${S3Bucket.Arn}/*'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonSageMakerFullAccess'
  CodeBuildRole:
    Type: AWS::IAM::Role
    Condition: RegisterCodeBuild
    Properties:
      RoleName: !Sub '${ResourceName}-codebuild'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: CodeBuildPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                Resource:
                  - !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/${ResourceName}'
                  - !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/${ResourceName}:*'
              - Effect: Allow
                Action:
                  - 's3:GetObject'
                  - 's3:GetObjectVersion'
                Resource:
                  - !Sub '${S3Bucket.Arn}/*'
              - Effect: Allow
                Action:
                  - 'ecr:BatchGetImage'
                  - 'ecr:DescribeImages'
                  - 'ecr:BatchCheckLayerAvailability'
                  - 'ecr:GetDownloadUrlForLayer'
                  - 'ecr:InitiateLayerUpload'
                  - 'ecr:UploadLayerPart'
                  - 'ecr:CompleteLayerUpload'
                  - 'ecr:PutImage'
                Resource: !Sub '${ECRRepository.Arn}'
              - Effect: Allow
                Action:
                  - 'ecr:GetAuthorizationToken'
                Resource: '*'
  InvokeZenMLAPIFunction:
    Type: AWS::Serverless::Function
    Condition: RegisterZenMLStack
    Properties:
      FunctionName: !Sub '${ResourceName}'
      Handler: index.handler
      Runtime: python3.8
      MemorySize: 512
      Timeout: 60
      Environment:
        Variables:
          ZENML_SERVER_URL: !Ref ZenMLServerURL
          ZENML_SERVER_API_TOKEN: !Ref ZenMLServerAPIToken
      InlineCode: |
        import os
        import json
        import urllib.request
        import urllib.error

        def send_response(event, context, response_status, response_data, physical_resource_id=None):
            response_url = event['ResponseURL']
            response_body = {
                'Status': response_status,
                'Reason': f'{response_data}\nSee the details in CloudWatch Log Stream: ' + context.log_stream_name,
                'PhysicalResourceId': physical_resource_id or context.log_stream_name,
                'StackId': event['StackId'],
                'RequestId': event['RequestId'],
                'LogicalResourceId': event['LogicalResourceId'],
                'Data': response_data
            }
            json_response_body = json.dumps(response_body)
            headers = {
                'content-type': '',
                'content-length': str(len(json_response_body))
            }
            req = urllib.request.Request(response_url, data=json_response_body.encode(), headers=headers, method='PUT')
            with urllib.request.urlopen(req) as response:
                print(f"Status code: {response.getcode()}")
                print(f"Response: {response.read().decode('utf-8')}")

        def handler(event, context):
            try:
                if event['RequestType'] == 'Delete':
                    send_response(event, context, 'SUCCESS', {'Message': 'Resource deletion successful'})
                    return
                if event['RequestType'] == 'Update':
                    send_response(event, context, 'SUCCESS', {'Message': 'Resource updated successfully'})
                    return
                url = os.environ['ZENML_SERVER_URL'].rstrip('/') + '/api/v1/stacks'
                api_token = os.environ['ZENML_SERVER_API_TOKEN']
                payload = event['ResourceProperties']['Payload']
                headers = {
                    'Authorization': f'Bearer {api_token}',
                    'Content-Type': 'application/json'
                }
                data = payload.encode('utf-8')
                req = urllib.request.Request(url, data=data, headers=headers, method='POST')
                try:
                    with urllib.request.urlopen(req) as response:
                        status_code = response.getcode()
                        response_body = response.read().decode('utf-8')
                except urllib.error.HTTPError as e:
                    status_code = e.code
                    response_body = e.read().decode('utf-8')
                print(status_code)
                print(response_body)
                if status_code == 200:
                    send_response(
                        event, context, 'SUCCESS',
                        {'Message': 'Stack successfully registered with ZenML'}
                    )
                else:
                    send_response(
                        event, context, 'FAILED',
                        {'Message': f'Failed to register the ZenML stack. The ZenML Server replied with HTTP status code {status_code}: {response_body}'}
                    )
            except Exception as e:
                print(f"Error: {str(e)}")
                send_response(event, context, 'FAILED', {'Message': str(e)})
  InvokeZenMLAPICustomResource:
    Type: Custom::Resource
    Condition: RegisterZenMLStack
    Properties:
      ServiceToken: !GetAtt InvokeZenMLAPIFunction.Arn
      ServiceTimeout: 300
      Payload: !Join
        - ''
        - - !Sub |
            {
              "name": "${AWS::StackName}",
              "description": "Deployed by AWS CloudFormation stack ${AWS::StackName} in the ${AWS::AccountId} account and ${AWS::Region} region.",
              "labels": {
                "zenml:provider": "aws",
                "zenml:deployment": "cloud-formation"
              },
              "service_connectors": [
                {
                  "type": "aws",
                  "auth_method": "iam-role",
                  "configuration": {
                    "aws_access_key_id": "${IAMUserAccessKey}",
                    "aws_secret_access_key": "${IAMUserAccessKey.SecretAccessKey}",
                    "role_arn": "${StackAccessRole.Arn}",
                    "region": "${AWS::Region}"
                  }
                }
              ],
              "components": {
                "artifact_store": [{
                  "flavor": "s3",
                  "service_connector_index": 0,
                  "configuration": {
                    "path": "s3://${S3Bucket}"
                  }
                }],
                "container_registry":[{
                  "flavor": "aws",
                  "service_connector_index": 0,
                  "configuration": {
                    "uri": "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com",
                    "default_repository": "${ECRRepository}"
                  }
                }],
                "orchestrator": [{
                  "flavor": "sagemaker",
                  "service_connector_index": 0,
                  "configuration": {
                    "execution_role": "${SageMakerRuntimeRole.Arn}",
                    "output_data_s3_uri": "s3://${S3Bucket}/sagemaker"
                  }
                }],
                "step_operator": [{
                  "flavor": "sagemaker",
                  "service_connector_index": 0,
                  "configuration": {
                    "role": "${SageMakerRuntimeRole.Arn}",
                    "bucket": "${S3Bucket}"
                  }
                }],
          - !If
            - RegisterCodeBuild
            - !Sub |
                "image_builder": [{
                  "flavor": "aws",
                  "service_connector_index": 0,
                  "configuration": {
                    "code_build_project": "${CodeBuildProject}"
                  }
                }]
            - |
                "image_builder": [{
                  "flavor": "local"
                }]
          - |
              }
            }
Outputs:
  AWSRegion:
    Description: "AWS Region"
    Value: !Ref AWS::Region
  AWSAccessKeyID:
    Description: "AWS Access Key ID"
    Value: !Ref IAMUserAccessKey
  AWSSecretAccessKey:
    Description: "AWS Secret Access Key"
    Value: !GetAtt IAMUserAccessKey.SecretAccessKey
  IAMRoleARN:
    Description: "IAM Role ARN"
    Value: !GetAtt StackAccessRole.Arn
  SageMakerIAMRoleARN:
    Description: "SageMaker execution IAM Role ARN"
    Value: !GetAtt SageMakerRuntimeRole.Arn
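Because the stack-registration `Payload` above is assembled through string concatenation (`!Join` plus a conditional `!If` part), it is easy to break the JSON with a stray comma or brace. A quick local sanity check is to substitute dummy values for the `${...}` references and run the result through `json.loads`; everything below is a placeholder, not a real credential:

```python
import json

# Placeholder-filled version of the Payload structure above, used only to
# confirm that the concatenated pieces form valid JSON.
payload = """
{
  "name": "zenml-stack",
  "labels": {
    "zenml:provider": "aws",
    "zenml:deployment": "cloud-formation"
  },
  "service_connectors": [
    {
      "type": "aws",
      "auth_method": "iam-role",
      "configuration": {
        "aws_access_key_id": "AKIAEXAMPLE",
        "aws_secret_access_key": "dummy",
        "role_arn": "arn:aws:iam::123456789012:role/zenml-stack-01",
        "region": "eu-central-1"
      }
    }
  ],
  "components": {
    "artifact_store": [{
      "flavor": "s3",
      "service_connector_index": 0,
      "configuration": {"path": "s3://zenml-stack-01-123456789012"}
    }],
    "image_builder": [{"flavor": "local"}]
  }
}
"""
stack = json.loads(payload)
print(sorted(stack["components"]))  # ['artifact_store', 'image_builder']
```

Running a check like this after editing the template is much faster than waiting for the custom resource to fail during a stack deployment.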
cloned_public_repos/zenml/infra/gcp/gcp-gar-gcs-vertex.jinja
{%- macro random_int(len) -%}
{%- for _ in range(len) -%}
{{ range(10) | random }}
{%- endfor -%}
{%- endmacro -%}

{% set deployment = env['deployment'] %}
{% set project = env['project'] %}
{% set project_number = env['project_number'] %}
{% set resourceNameSuffix = random_int(6) %}
{% set region = properties['region'] | default('europe-west3') %}
{% set zenmlServerURL = properties['zenmlServerURL'] %}
{% set zenmlServerAPIToken = properties['zenmlServerAPIToken'] %}

{%- macro zenml_stack_json(service_account_json) -%}
{
  "name": "{{ deployment }}",
  "description": "Deployed by GCP Deployment Manager deployment {{ deployment }} in the {{ project }} project and {{ region }} region.",
  "labels": {
    "zenml:provider": "gcp",
    "zenml:deployment": "deployment-manager"
  },
  "service_connectors": [
    {
      "type": "gcp",
      "auth_method": "service-account",
      "configuration": {
        "service_account_json": "{{ service_account_json }}"
      }
    }
  ],
  "components": {
    "artifact_store": [{
      "flavor": "gcp",
      "service_connector_index": 0,
      "configuration": {
        "path": "gs://zenml-{{ project_number }}-{{ resourceNameSuffix }}"
      }
    }],
    "container_registry":[{
      "flavor": "gcp",
      "service_connector_index": 0,
      "configuration": {
        "uri": "{{ region }}-docker.pkg.dev/{{ project }}/zenml-{{ resourceNameSuffix }}"
      }
    }],
    "orchestrator": [{
      "flavor": "vertex",
      "service_connector_index": 0,
      "configuration": {
        "location": "{{ region }}",
        "workload_service_account": "zenml-{{ resourceNameSuffix }}@{{ project }}.iam.gserviceaccount.com"
      }
    }],
    "step_operator": [{
      "flavor": "vertex",
      "service_connector_index": 0,
      "configuration": {
        "region": "{{ region }}",
        "service_account": "zenml-{{ resourceNameSuffix }}@{{ project }}.iam.gserviceaccount.com"
      }
    }],
    "image_builder": [{
      "flavor": "gcp",
      "service_connector_index": 0
    }]
  }
}
{%- endmacro -%}

resources:

- name: zenml-{{ project_number }}-{{ resourceNameSuffix }}
  type: storage.v1.bucket
  properties:
    name: "zenml-{{ project_number }}-{{ resourceNameSuffix }}"
    location: {{ region }}

- name: zenml-service-account
  type: iam.v1.serviceAccount
  properties:
    accountId: "zenml-{{ resourceNameSuffix }}"
    displayName: ZenML Service Account

- name: zenml-gcs-iam-role-binding
  type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
  metadata:
    dependsOn:
      - zenml-service-account
  properties:
    resource: {{ project }}
    role: roles/storage.objectUser
    member: serviceAccount:$(ref.zenml-service-account.email)

- name: zenml-gar-iam-role-binding
  type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
  metadata:
    dependsOn:
      - zenml-service-account
  properties:
    resource: {{ project }}
    role: roles/artifactregistry.createOnPushWriter
    member: serviceAccount:$(ref.zenml-service-account.email)

- name: zenml-vertex-user-iam-role-binding
  type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
  metadata:
    dependsOn:
      - zenml-service-account
  properties:
    resource: {{ project }}
    role: roles/aiplatform.user
    member: serviceAccount:$(ref.zenml-service-account.email)

- name: zenml-vertex-agent-iam-role-binding
  type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
  metadata:
    dependsOn:
      - zenml-service-account
  properties:
    resource: {{ project }}
    role: roles/aiplatform.serviceAgent
    member: serviceAccount:$(ref.zenml-service-account.email)

- name: zenml-cloud-build-iam-role-binding
  type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
  metadata:
    dependsOn:
      - zenml-service-account
  properties:
    resource: {{ project }}
    role: roles/cloudbuild.builds.editor
    member: serviceAccount:$(ref.zenml-service-account.email)

- name: zenml-service-account-key
  type: iam.v1.serviceAccounts.key
  metadata:
    dependsOn:
      - zenml-service-account
  properties:
    parent: $(ref.zenml-service-account.name)

- name: zenml-artifact-registry
  type: gcp-types/artifactregistry-v1beta1:projects.locations.repositories
  properties:
    location: {{ region }}
    repositoryId: zenml-{{ resourceNameSuffix }}
    format: DOCKER

outputs:
- name: GCSBucket
  value: zenml-{{ project_number }}-{{ resourceNameSuffix }}
- name: ServiceAccountEmail
  value: $(ref.zenml-service-account.email)
- name: ServiceAccountKey
  value: $(ref.zenml-service-account-key.privateKeyData)
- name: ArtifactRegistry
  value: $(ref.zenml-artifact-registry.name)
- name: ZenMLStack
  value: |
    {{ zenml_stack_json("$(ref.zenml-service-account-key.privateKeyData)") | indent(4) }}
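The `random_int` macro at the top of the template builds a random numeric suffix by concatenating `len` single random digits, which keeps resource names unique across deployments. An equivalent sketch in plain Python:

```python
import random

# Equivalent of the Jinja random_int macro above: concatenate `length`
# independently drawn decimal digits into one string.
def random_int(length: int) -> str:
    return "".join(str(random.randrange(10)) for _ in range(length))


suffix = random_int(6)
print(suffix)  # e.g. "830174" (a fresh 6-digit string each run)
```

One consequence of generating the suffix at render time is that every re-render of the template produces a different suffix, which is why the deployment must be deleted and recreated rather than updated in place.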
cloned_public_repos/zenml/infra/gcp/gcp-gar-gcs-vertex-deploy.sh
#!/bin/bash
set -e
set -u

# Configure your deployment
#
# The following variables must be configured before running the deployment
# script:
#
# * ZENML_STACK_NAME - The name of the ZenML stack to deploy. This name will
#   be used to identify the stack in the ZenML server and the GCP Deployment
#   Manager.
# * ZENML_STACK_REGION - The GCP region to deploy the resources to. Pick one
#   from the list of available regions documented at:
#   https://cloud.google.com/about/locations
# * ZENML_SERVER_URL: The URL where your ZenML server is running
# * ZENML_SERVER_API_TOKEN: The API token used to authenticate with your ZenML
#   server. This would have been provided to you in the ZenML CLI or the ZenML
#   dashboard when you triggered this tutorial.

### BEGIN CONFIGURATION ###
ZENML_STACK_NAME=zenml-gcp-stack
ZENML_STACK_REGION=europe-west3
ZENML_SERVER_URL=
ZENML_SERVER_API_TOKEN=
### END CONFIGURATION ###

if [ -z "$ZENML_SERVER_URL" ] || [ -z "$ZENML_SERVER_API_TOKEN" ]; then
    echo "ERROR: The ZENML_SERVER_URL and ZENML_SERVER_API_TOKEN variables must be set."
    echo "Please set these variables in the script before running it."
    exit 1
fi

if [ -z "$ZENML_STACK_NAME" ]; then
    echo "ERROR: The ZENML_STACK_NAME variable must be set."
    echo "Please set this variable in the script before running it."
    exit 1
fi

if [ -z "$ZENML_STACK_REGION" ]; then
    echo "ERROR: The ZENML_STACK_REGION variable must be set."
    echo "Please set this variable in the script before running it."
    exit 1
fi

# Extract the project ID and project number from the gcloud configuration
PROJECT_ID=$(gcloud config get-value project)
if [ -z "$PROJECT_ID" ]; then
    echo "ERROR: No project is set in the gcloud configuration. Please set a"
    echo "project using 'gcloud config set project PROJECT_ID' before running this script."
    exit 1
fi
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")

# Enable the necessary services
#
# The following services must be enabled in your GCP project:
#
# * Deployment Manager - used to provision infrastructure resources.
# * IAM API - used to manage permissions.
# * Artifact Registry API - ZenML uses the Artifact Registry GCP service to
#   store and manage Docker images.
# * Cloud Storage API - ZenML uses the Cloud Storage GCP service to store and
#   manage data.
# * Vertex AI API - ZenML uses the Vertex AI GCP service to manage machine
#   learning resources.
# * Cloud Build - ZenML uses the Cloud Build GCP service to build and push
#   Docker images.
echo
echo "##################################################"
echo "Enabling necessary GCP services..."
echo "##################################################"
echo

# Array of services to enable
services=(
    "deploymentmanager.googleapis.com"
    "iam.googleapis.com"
    "artifactregistry.googleapis.com"
    "storage-api.googleapis.com"
    "ml.googleapis.com"
    "aiplatform.googleapis.com"
    "cloudresourcemanager.googleapis.com"
    "cloudbuild.googleapis.com"
)

# Enable the services
gcloud services enable "${services[@]}"

# Grant Deployment Manager the necessary permissions
#
# The GCP Deployment Manager uses the Google APIs Service Agent
# (https://cloud.google.com/compute/docs/access/service-accounts#google_apis_service_agent)
# to create deployment resources. You must first grant this service the
# appropriate permissions to assign roles to the GCP service account that will
# be created for the ZenML stack.
echo
echo "##################################################"
echo "Granting Deployment Manager the necessary permissions..."
echo "##################################################"
echo
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
    --role="roles/resourcemanager.projectIamAdmin" \
    --condition=None

# Create the Deployment Manager deployment
#
# The following command deploys the stack with Deployment Manager
echo
echo "##################################################"
echo "Creating the Deployment Manager deployment..."
echo "##################################################"
echo
echo "Please wait for the deployment to complete. This should not take more than 1-2 minutes."
echo "You may also monitor the deployment as it's being created at: https://console.cloud.google.com/dm/deployments/details/$ZENML_STACK_NAME?project=$PROJECT_ID"
echo
set +e
gcloud deployment-manager deployments create \
    --template gcp-gar-gcs-vertex.jinja $ZENML_STACK_NAME \
    --properties region:"$ZENML_STACK_REGION",zenmlServerURL:"$ZENML_SERVER_URL",zenmlServerAPIToken:"$ZENML_SERVER_API_TOKEN"
# Fetch the exit code of the deployment
DEPLOYMENT_EXIT_CODE=$?
set -e

if [ $DEPLOYMENT_EXIT_CODE -ne 0 ]; then
    echo "ERROR: The deployment failed. Please check the logs for more information."
    echo
    echo "This usually happens for one of the following reasons:"
    echo
    echo "1. Temporary issues related to enabling services or granting permissions in newly created projects. This is"
    echo "usually solved by deleting and retrying the deployment."
    echo "2. One of the GCP services required for the stack (GCS or GCP Artifact Registry) is not available in the"
    echo "region you selected. In this case, please configure a different region and try again."
    echo
    echo "To retry the deployment after you addressed the possible cause, you can run the following commands:"
    echo
    echo "gcloud deployment-manager deployments delete $ZENML_STACK_NAME"
    echo "./gcp-gar-gcs-vertex-deploy.sh"
    echo
    exit 1
fi

echo
echo "##################################################"
echo "Extracting the Deployment Manager deployment output..."
echo "##################################################"
echo
manifest=$(gcloud deployment-manager manifests list --deployment $ZENML_STACK_NAME --format="value(name)")
zenml_stack_json=$(
    gcloud deployment-manager manifests describe $manifest --deployment $ZENML_STACK_NAME --format="value(layout)" \
    | python -c 'import sys, yaml; print(yaml.safe_load(sys.stdin)["resources"][0]["outputs"][-1]["finalValue"])'
)

echo
echo "##################################################"
echo "Registering the ZenML stack with the ZenML server..."
echo "##################################################"
echo
set +e
# Register the ZenML stack with the ZenML server
curl -X POST "$ZENML_SERVER_URL/api/v1/stacks" \
    -H "Authorization: Bearer $ZENML_SERVER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$zenml_stack_json"
# Fetch the exit code of the registration
REGISTRATION_EXIT_CODE=$?
set -e

if [ $REGISTRATION_EXIT_CODE -ne 0 ]; then
    echo "ERROR: The ZenML stack registration failed. Please check the logs for more information."
    echo
    echo "This could happen if the ZenML server URL or API token is incorrect. Please check the configuration and try again:"
    echo
    echo "gcloud deployment-manager deployments delete $ZENML_STACK_NAME"
    echo "./gcp-gar-gcs-vertex-deploy.sh"
    echo
    exit 1
fi

# Print the deployment URL
echo "##################################################"
echo
echo
echo "Congratulations! The ZenML Deployment Manager stack has been deployed. You can access it at the following URL:"
echo
echo "https://console.cloud.google.com/dm/deployments/details/$ZENML_STACK_NAME?project=$PROJECT_ID"
echo
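The final `curl` call in the script above can equally be issued from Python. Building the request object up front makes it easy to inspect the method, URL and headers before anything is sent; the server URL and token here are placeholders:

```python
import json
import urllib.request

# Placeholder values standing in for the ZENML_SERVER_URL and
# ZENML_SERVER_API_TOKEN configuration of the deploy script above.
server_url = "https://zenml.example.com"
api_token = "dummy-token"
zenml_stack_json = json.dumps({"name": "zenml-gcp-stack"})

# Construct (but do not send) the same POST request the script issues with
# curl; calling urllib.request.urlopen(req) would perform the registration.
req = urllib.request.Request(
    f"{server_url}/api/v1/stacks",
    data=zenml_stack_json.encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

Separating request construction from sending is also a convenient way to unit-test the registration step without any network access.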
cloned_public_repos/zenml/infra/gcp/gcp-gar-gcs-vertex.md
# Welcome to the ZenML GCP Stack Deployment Tutorial

This tutorial will assist you in deploying a full ZenML GCP stack in your GCP
project using Deployment Manager. After the Deployment Manager deployment is
complete, you can return to the CLI or dashboard to view details about the
associated ZenML stack that is automatically registered with ZenML.

## Let's get started!

**Estimated time to complete**: <walkthrough-tutorial-duration duration=5></walkthrough-tutorial-duration>

**Prerequisites**:

* A GCP Cloud Billing account
* ZenML Server URL and API token

Click **Start** to begin.

## Make sure you are authenticated with GCP

<walkthrough-project-setup billing=true>Select the GCP Project in which to deploy the stack.</walkthrough-project-setup>

`Note:` In order to be able to install the ZenML stack successfully, you need
to have billing enabled for your project.

Then run the following command to authorize and configure the selected project
in your Cloud Shell terminal:

```sh
gcloud config set project <walkthrough-project-name/>
```

πŸ’‘ Click the 'Copy to Cloud Shell' button to copy and run the command in your
terminal

If your Cloud Shell is running in untrusted mode, you may also need to run the
following command to authenticate with GCP:

```sh
gcloud auth login
```

## Configure your deployment

Now, let's configure the ZenML stack deployment. You were provided with a set
of configuration values when you triggered this flow through the ZenML
dashboard or CLI that looks like the following:

```
### BEGIN CONFIGURATION ###
...
...
### END CONFIGURATION ###
```

To configure your deployment, you simply need to copy these values and paste
them <walkthrough-editor-select-regex filePath="gcp-gar-gcs-vertex-deploy.sh" regex="### BEGIN CONFIGURATION(\n|.)*?END CONFIGURATION ###">into the stack deployment script</walkthrough-editor-select-regex>.

⚠️ Please make sure the `ZENML_SERVER_API_TOKEN` value is not broken into
multiple lines, as this will lead to errors!

## Deploy the ZenML stack

Run the deployment script to deploy the stack with Deployment Manager:

```sh
./gcp-gar-gcs-vertex-deploy.sh
```

πŸ’‘ Click the 'Copy to Cloud Shell' button to copy and run the command in your
terminal

## Congratulations

<walkthrough-conclusion-trophy></walkthrough-conclusion-trophy>

If the previous command completed successfully, then you're all set! The ZenML
stack resources have been provisioned in your GCP project.

You can view and manage the resources created in the GCP Deployment Manager
Console by following the URL provided in the output.

The ZenML stack has also been automatically registered with your ZenML server
and you may now close the Cloud Shell session and switch back to the ZenML
dashboard or the ZenML CLI to continue your workflow.

**Don't forget to clean up**: When you're done using the ZenML GCP stack, be
sure to delete the GCP Deployment Manager deployment to avoid unnecessary
charges. You will need to delete the data in your GCS bucket manually before
the deployment can be deleted.
cloned_public_repos/zenml/examples/README.md
# ZenML Examples

Welcome to the `examples` folder of ZenML! This directory contains a collection
of examples that demonstrate the use of ZenML in various settings. Whether
you're a beginner looking to explore ZenML's capabilities or an experienced
user seeking inspiration, these examples cover a range of scenarios to help
you get started quickly.

## Structure

Each project in this folder is organized in a standardized structure to
showcase ZenML's best practices. Moreover, each example listed below is a
materialized form of one of our project templates. If you like any of the
examples here, and you want to set up something similar, just grab the
template and kickstart your own example. It's that simple!

Here's a brief overview of the featured examples:

| Name | Description | Template |
|------|-------------|----------|
| **[Quickstart](quickstart)** | Our quickstart example showcasing the basic functionality and a workflow to get you started with the ZenML framework. | [starter](https://github.com/zenml-io/template-starter) |
| **[End-to-End Training with Batch Predictions](e2e)** | A comprehensive supervised ML project to train scikit-learn classification models and make predictions on tabular datasets. | [e2e-batch](https://github.com/zenml-io/template-e2e-batch) |
| **[NLP Training, Promotion and Deployment](e2e_nlp)** | An NLP pipeline that walks through tokenization, training, HP tuning, evaluation and deployment. | [nlp](https://github.com/zenml-io/template-nlp) |

## More Projects & Practical Examples

If you're eager to discover more projects leveraging ZenML, you can check out
our [zenml-projects](https://github.com/zenml-io/zenml-projects) repository.
It hosts a wide variety of projects that showcase the practical application of
ZenML pipelines in different real-world examples.

Furthermore, if you are interested in targeted examples featuring a particular
integration, you can check out the
[component guide](https://docs.zenml.io/stack-components) in our docs or the
[integration tests](https://github.com/zenml-io/zenml/tree/main/tests/integration/examples)
in our repository.

## Support & Feedback

If you have questions or need assistance with any of the examples, feel free
to [reach out to us on Slack](https://zenml.io/slack-invite/)!

Happy experimenting with ZenML!
cloned_public_repos/zenml/examples/quickstart/requirements_gcp.txt
zenml[server]==0.80.1
notebook
pyarrow
datasets
transformers
transformers[torch]
torch
sentencepiece
protobuf
kfp>=2.6.0
gcsfs
google-cloud-secret-manager
google-cloud-container>=2.21.0
google-cloud-artifact-registry>=1.11.3
google-cloud-storage>=2.9.0
google-cloud-aiplatform>=1.34.0
google-cloud-build>=3.11.0
kubernetes
cloned_public_repos/zenml/examples/quickstart/requirements.txt
zenml[server]==0.80.1
notebook
pyarrow
datasets
transformers
transformers[torch]
torch
sentencepiece
protobuf
cloned_public_repos/zenml/examples/quickstart/.dockerignore
.venv*
.requirements*
.ipynb_checkpoints
.assets
mlruns
results
cloned_public_repos/zenml/examples/quickstart/requirements_azure.txt
zenml[server]==0.80.1
notebook
pyarrow
datasets
transformers
transformers[torch]
torch
sentencepiece
protobuf
adlfs>=2021.10.0
azure-keyvault-keys
azure-keyvault-secrets
azure-identity
azureml-core==1.56.0
azure-mgmt-containerservice>=20.0.0
azure-storage-blob==12.17.0
azure-ai-ml==1.18.0
kubernetes
cloned_public_repos/zenml/examples/quickstart/quickstart.ipynb
# Choose a cloud provider - at the end of this notebook you will run a pipeline on this cloud provider
CLOUD_PROVIDER = None  # Set this to "GCP", "AWS" or "AZURE" as needed


def in_google_colab() -> bool:
    """Checks whether this notebook is run in google colab."""
    try:
        import google.colab  # noqa

        return True
    except ModuleNotFoundError:
        return False


if in_google_colab():
    # Pull required modules from this example
    !git clone -b main https://github.com/zenml-io/zenml
    !cp -r zenml/examples/quickstart/* .
    !rm -rf zenml

# Common imports and setup
if CLOUD_PROVIDER.lower() == "gcp":
    !pip install -r requirements_gcp.txt
elif CLOUD_PROVIDER.lower() == "aws":
    !pip install -r requirements_aws.txt
elif CLOUD_PROVIDER.lower() == "azure":
    !pip install -r requirements_azure.txt
else:
    # In this case the second half of the notebook won't work for you
    !pip install -r requirements.txt

# Restart Kernel to ensure all libraries are properly loaded
import IPython

IPython.Application.instance().kernel.do_shutdown(restart=True)

zenml_server_url = (
    None  # INSERT URL TO SERVER HERE in the form "https://URL_TO_SERVER"
)
assert zenml_server_url

!zenml login $zenml_server_url

# Disable wandb
import os

os.environ["WANDB_DISABLED"] = "true"

# Initialize ZenML and define the root for imports and docker builds
!zenml init
!zenml stack set default

import requests
from datasets import Dataset
from typing_extensions import Annotated
from zenml import step

PROMPT = ""  # In case you want to also use a prompt you can set it here


def read_data_from_url(url):
    """Reads data from url.

    Assumes the individual data points are linebreak separated
    and input, targets are separated by a `|` pipe.
""" inputs = [] targets = [] response = requests.get(url) response.raise_for_status() # Raise an exception for bad responses for line in response.text.splitlines(): old, modern = line.strip().split("|") inputs.append(f"{PROMPT}{old}") targets.append(modern) return {"input": inputs, "target": targets} @step def load_data( data_url: str, ) -> Annotated[Dataset, "full_dataset"]: """Load and prepare the dataset.""" # Fetch and process the data data = read_data_from_url(data_url) # Convert to Dataset return Dataset.from_dict(data)data_source = "https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt" dataset = load_data(data_url=data_source) print(f"Input: {dataset['input'][1]} - Target: {dataset['target'][1]}")import materializers from steps import ( evaluate_model, load_data, split_dataset, test_model, tokenize_data, train_model, ) from steps.model_trainer import T5_Model from zenml import Model, pipeline from zenml.client import Client assert materializers # Initialize the ZenML client to fetch objects from the ZenML Server client = Client() Client().activate_stack( "default" ) # We will start by using the default stack which is local model_name = "YeOldeEnglishTranslator" model = Model( name="YeOldeEnglishTranslator", description="Model to translate from old to modern english", tags=["quickstart", "llm", "t5"], ) @pipeline(model=model) def english_translation_pipeline( data_url: str, model_type: T5_Model, per_device_train_batch_size: int, gradient_accumulation_steps: int, dataloader_num_workers: int, num_train_epochs: int = 5, ): """Define a pipeline that connects the steps.""" full_dataset = load_data(data_url) tokenized_dataset, tokenizer = tokenize_data( dataset=full_dataset, model_type=model_type ) tokenized_train_dataset, tokenized_eval_dataset, tokenized_test_dataset = ( split_dataset( tokenized_dataset, train_size=0.7, test_size=0.1, eval_size=0.2, subset_size=0.1, # We use a subset of the dataset to speed things up 
            random_state=42,
        )
    )
    model = train_model(
        tokenized_dataset=tokenized_train_dataset,
        model_type=model_type,
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=per_device_train_batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        dataloader_num_workers=dataloader_num_workers,
    )
    evaluate_model(model=model, tokenized_dataset=tokenized_eval_dataset)
    test_model(
        model=model,
        tokenized_test_dataset=tokenized_test_dataset,
        tokenizer=tokenizer,
    )

# Run the pipeline and configure some parameters at runtime
pipeline_run = english_translation_pipeline(
    data_url="https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt",
    model_type="t5-small",
    num_train_epochs=1,  # to make this demo fast, we start at 1 epoch
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    dataloader_num_workers=4,
)

import torch

# load the model object
model = client.get_model_version(model_name).get_model_artifact("model").load()
tokenizer = (
    client.get_model_version(model_name).get_artifact("tokenizer").load()
)

test_text = "I do desire we may be better strangers"  # Insert your own test sentence here

input_ids = tokenizer(
    test_text,
    return_tensors="pt",
    max_length=128,
    truncation=True,
    padding="max_length",
).input_ids

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_length=128,
        num_return_sequences=1,
        no_repeat_ngram_size=2,
        top_k=50,
        top_p=0.95,
        temperature=0.7,
    )

decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded_output)

from zenml.environment import Environment

# Set the cloud provider here
CLOUD_PROVIDER = None  # Set this to "GCP", "AWS" or "AZURE"
assert CLOUD_PROVIDER

# Set the name of the stack that you created within zenml
stack_name = None  # Set this
assert stack_name

# Set your stack, follow instruction above
from zenml import pipeline
from zenml.client import Client
from zenml.config import DockerSettings

settings = {}

# Common imports and setup
if CLOUD_PROVIDER.lower() == "gcp":
    parent_image = (
        "zenmldocker/zenml-public-pipelines:quickstart-0.80.1-py3.11-gcp"
    )
    skip_build = True
elif CLOUD_PROVIDER.lower() == "aws":
    from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
        SagemakerOrchestratorSettings,
    )

    parent_image = "339712793861.dkr.ecr.eu-central-1.amazonaws.com/zenml-public-pipelines:quickstart-0.80.1-py3.11-aws"
    skip_build = True  # if you switch this to False, you need to remove the parent image
    settings["orchestrator.sagemaker"] = SagemakerOrchestratorSettings(
        instance_type="ml.m5.4xlarge"
    )
elif CLOUD_PROVIDER.lower() == "azure":
    parent_image = (
        "zenmldocker/zenml-public-pipelines:quickstart-0.80.1-py3.11-azure"
    )
    skip_build = True

Client().activate_stack(stack_name)

data_source = "https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt"

# We've prebuilt a docker image for this quickstart to speed things up,
# feel free to delete the DockerSettings to build from scratch
settings["docker"] = DockerSettings(
    parent_image=parent_image, skip_build=skip_build
)

# In the case that we are within a colab environment we want to remove
# these folders
if Environment.in_google_colab():
    !rm -rf results
    !rm -rf sample_data

from pipelines import (
    english_translation_pipeline,
)
from zenml import Model

model_name = "YeOldeEnglishTranslator"
model = Model(
    name="YeOldeEnglishTranslator",
)

pipeline_run = english_translation_pipeline.with_options(
    settings=settings, model=model
)(
    data_url="https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt",
    model_type="t5-small",
    num_train_epochs=2,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    dataloader_num_workers=0,  # Some cloud environments don't support multiple of these
)

from zenml.config import ResourceSettings

if CLOUD_PROVIDER == "GCP":
    from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
        VertexOrchestratorSettings,
    )

    # find out about your options here:
    # https://docs.zenml.io/stack-components/orchestrators/vertex#additional-configuration
    english_translation_pipeline.with_options(
        settings={
            "orchestrator.vertex": VertexOrchestratorSettings(
                node_selector_constraint=(
                    "cloud.google.com/gke-accelerator",
                    "NVIDIA_TESLA_P4",
                )
            ),
            "resources": ResourceSettings(memory="32GB", gpu_count=1),
        }
    )

if CLOUD_PROVIDER == "AWS":
    from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
        SagemakerOrchestratorSettings,
    )

    # find out your options here:
    # https://docs.zenml.io/stack-components/orchestrators/sagemaker#configuration-at-pipeline-or-step-level
    english_translation_pipeline.with_options(
        settings={
            "orchestrator.sagemaker": SagemakerOrchestratorSettings(
                instance_type="ml.p2.xlarge"
            )
        }
    )

if CLOUD_PROVIDER == "AZURE":
    from zenml.integrations.azure.flavors import AzureMLOrchestratorSettings

    # find out your options here:
    # https://docs.zenml.io/stack-components/orchestrators/azureml#settings
    # The quickest way is probably to configure a compute-instance in azure ml.
    # This instance should contain a gpu. Then specify the name of the compute
    # instance here.
    compute_name = None  # Insert the name of your compute instance here

    english_translation_pipeline.with_options(
        settings={
            "orchestrator.azureml": AzureMLOrchestratorSettings(
                mode="compute-instance", compute_name=compute_name
            )
        }
    )
cloned_public_repos/zenml/examples/quickstart/requirements_aws.txt
zenml[server]==0.80.1
notebook
pyarrow
datasets
transformers
transformers[torch]
torch
sentencepiece
protobuf
sagemaker>=2.117.0
s3
s3fs
aws-profile-manager
kubernetes
cloned_public_repos/zenml/examples/quickstart/README.md
# ZenML Quickstart: Bridging Local Development and Cloud Deployment

This repository demonstrates how ZenML streamlines the transition of machine learning workflows from local environments to cloud-scale operations.

Key advantages:

* Deploy to major cloud providers with minimal code changes
* Connect directly to your existing infrastructure
* Bridge the gap between ML and Ops teams
* Gain deep insights into pipeline metadata via the ZenML Dashboard

Unlike traditional MLOps tools, ZenML offers unparalleled flexibility and control. It integrates seamlessly with your infrastructure, allowing both ML and Ops teams to collaborate effectively without compromising on their specific requirements.

The notebook guides you through adapting local code for cloud deployment, showcasing ZenML's ability to enhance workflow efficiency while maintaining reproducibility and auditability in production.

Ready to unify your ML development and operations? Let's begin. The diagram below describes what we'll show you in this example.

<img src=".assets/Overview.png" width="80%" alt="Pipelines Overview">

1) We have done some of the experimenting for you already and created a simple finetuning pipeline for a text-to-text task.
2) We will run this pipeline on your machine and verify that everything works as expected.
3) Now we'll connect ZenML to your infrastructure and configure everything.
4) Finally, we are ready to run our code remotely.

## 🏃 Run on Colab

You can use Google Colab to see ZenML in action, no signup / installation required!
<a href="https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/quickstart/quickstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## :computer: Run Locally

To run locally, install ZenML and pull this quickstart (if you haven't already done so):

```shell
# Install ZenML
pip install "zenml[server]"

# clone the ZenML repository
git clone https://github.com/zenml-io/zenml.git
cd zenml/examples/quickstart
```

Now we're ready to start. You have two options for running the quickstart locally:

#### Option 1 - Interactively explore the quickstart using Jupyter Notebook:

```bash
pip install notebook
jupyter notebook
# open notebooks/quickstart.ipynb
```

#### Option 2 - Execute the whole training pipeline from a Python script:

To run this quickstart you need to connect to a ZenML Server. You can deploy it [yourself on your own infrastructure](https://docs.zenml.io/getting-started/deploying-zenml) or try it out for free, no credit card required, in our [ZenML Pro managed service](https://zenml.io/pro).

In the following commands we install our requirements, initialize our ZenML environment and connect to the deployed ZenML Server.

```bash
# Install required zenml integrations
pip install -r requirements.txt

# Initialize ZenML
zenml init

# add your ZenML Server URL here or leave empty to use ZenML Pro
ZENML_SERVER_URL=

# Connect to your ZenML Server
zenml login $ZENML_SERVER_URL

# We'll start on the default stack
zenml stack set default
```

As described above, we have done the first step already and created an experimental pipeline. Feel free to check out the individual steps in the [`steps`](steps) directory. The pipeline that connects these steps can be found in the [`pipelines`](pipelines) directory. And here is how to run it.
When you run the pipeline with the following command you will be using the configuration [here](configs/training_default.yaml):

```bash
# Run the pipeline locally
python run.py --model_type=t5-small
```

<img src=".assets/DAG.png" width="50%" alt="Dashboard view">

Above you can see the dashboard view of the pipeline we just ran in the ZenML Dashboard. You can find the URL for this within the logs produced by the command above. As you can see, the pipeline has run successfully. It also printed out some examples - however, it seems the model is not yet able to solve the task well. What we did so far was validate that the pipeline and its individual steps work well together.

### 🌵 Running Remotely

Our last section confirmed that the pipeline works. Let's now run the pipeline in the environment of your choice. To try this next section, you will need access to a cloud environment (GCP, Azure, AWS). ZenML wraps around all the major cloud providers and orchestration tools and lets you easily deploy your code onto them.

To do this, head over to the `Stack` section of your ZenML Dashboard. Here you'll be able to either connect to an existing environment or deploy a new one. Choose one of the options presented to you there and come back when you have a stack ready to go. Then proceed to the appropriate section below. **Do not** run all three. Also be sure that you are running with a remote ZenML server (see Step 1 above).

<img src=".assets/StackCreate.png" width="50%" alt="Stack creation in the ZenML Dashboard">

#### AWS

For AWS you will need to install some AWS requirements in your local environment. You will also need an AWS stack registered in ZenML.

```bash
zenml integration install aws s3 -y
zenml stack set <INSERT_YOUR_STACK_NAME_HERE>
python run.py --model_type=t5-small
```

You can edit `configs/training_aws.yaml` to adjust the settings for running your pipeline in AWS.
#### GCP

For GCP you will need to install some GCP requirements in your local environment. You will also need a GCP stack registered in ZenML.

```bash
zenml integration install gcp
zenml stack set <INSERT_YOUR_STACK_NAME_HERE>
python run.py --model_type=t5-small
```

You can edit `configs/training_gcp.yaml` to adjust the settings.

#### Azure

```bash
zenml integration install azure
zenml stack set <INSERT_YOUR_STACK_NAME_HERE>
python run.py --model_type=t5-small
```

You can edit `configs/training_azure.yaml` to adjust the settings.

No matter which of these you choose, you should end up with a running pipeline on the backend of your choice.

<img src=".assets/CloudDAGs.png" width="100%" alt="Pipeline running on Cloud orchestrator.">

## Further exploration

This was just the tip of the iceberg of what ZenML can do; check out the [**docs**](https://docs.zenml.io/) to learn more about the capabilities of ZenML. For example, you might want to:

* Learn more about ZenML by following our [guides](https://docs.zenml.io/user-guide) or, more generally, our [docs](https://docs.zenml.io/)
* Explore our [projects repository](https://github.com/zenml-io/zenml-projects) to find interesting use cases that leverage ZenML

## What next?

* If you have questions or feedback... join our [**Slack Community**](https://zenml.io/slack) and become part of the ZenML family!
* If you want to quickly get started with ZenML, check out [ZenML Pro](https://zenml.io/pro).
cloned_public_repos/zenml/examples/quickstart/run.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Optional

import click
from pipelines import (
    english_translation_pipeline,
)

from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)


@click.command(
    help="""
ZenML Starter project.

Run the ZenML starter project with basic options.

Examples:

  \b
  # Run the training pipeline
    python run.py
"""
)
@click.option(
    "--no-cache",
    is_flag=True,
    default=False,
    help="Disable caching for the pipeline run.",
)
@click.option(
    "--model_type",
    type=click.Choice(["t5-small", "t5-large"], case_sensitive=False),
    default="t5-small",
    help="Choose the model size: t5-small or t5-large.",
)
@click.option(
    "--config_path",
    help="Choose the configuration file.",
)
def main(
    model_type: str,
    config_path: Optional[str],
    no_cache: bool = False,
):
    """Main entry point for the pipeline execution.

    This entrypoint is where everything comes together:

    * configuring pipeline with the required parameters (some of which may
      come from command line arguments, but most of which comes from the
      YAML config files)
    * launching the pipeline

    Args:
        model_type: Type of model to use
        config_path: Configuration file to use
        no_cache: If `True` cache will be disabled.
""" client = Client() run_args_train = {} orchf = client.active_stack.orchestrator.flavor sof = None if client.active_stack.step_operator: sof = client.active_stack.step_operator.flavor pipeline_args = {} if no_cache: pipeline_args["enable_cache"] = False if not config_path: # Default configuration config_path = "configs/training_default.yaml" # if orchf == "sagemaker" or sof == "sagemaker": config_path = "configs/training_aws.yaml" elif orchf == "vertex" or sof == "vertex": config_path = "configs/training_gcp.yaml" elif orchf == "azureml" or sof == "azureml": config_path = "configs/training_azure.yaml" print(f"Using {config_path} to configure the pipeline run.") else: print( f"You specified {config_path}. Please be aware of the contents of this " f"file as some settings might be very specific to a certain orchestration " f"environment. Also you might need to set `skip_build` to False in case " f"of missing requirements in the execution environment." ) pipeline_args["config_path"] = config_path english_translation_pipeline.with_options(**pipeline_args)( model_type=model_type, **run_args_train ) if __name__ == "__main__": main()
cloned_public_repos/zenml/examples/quickstart/LICENSE
Apache Software License 2.0

Copyright (c) ZenML GmbH 2024. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
cloned_public_repos/zenml/examples/quickstart/materializers/huggingface_datasets_materializer.py
# Copyright (c) ZenML GmbH 2024. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#       https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
"""Implementation of the Huggingface datasets materializer."""

import os
from collections import defaultdict
from tempfile import TemporaryDirectory, mkdtemp
from typing import (
    TYPE_CHECKING,
    Any,
    ClassVar,
    Dict,
    Optional,
    Tuple,
    Type,
    Union,
)

from datasets import Dataset, load_from_disk
from datasets.dataset_dict import DatasetDict

from zenml.enums import ArtifactType, VisualizationType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer
from zenml.materializers.pandas_materializer import PandasMaterializer
from zenml.utils import io_utils

if TYPE_CHECKING:
    from zenml.metadata.metadata_types import MetadataType

DEFAULT_DATASET_DIR = "hf_datasets"


def extract_repo_name(checksum_str: str) -> Optional[str]:
    """Extracts the repo name from the checksum string.

    An example of a checksum_str is:
    "hf://datasets/nyu-mll/glue@bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c/mrpc/train-00000-of-00001.parquet"
    and the expected output is "nyu-mll/glue".

    Args:
        checksum_str: The checksum_str to extract the repo name from.

    Returns:
        str: The extracted repo name.
""" dataset = None try: parts = checksum_str.split("/") if len(parts) >= 4: # Case: nyu-mll/glue dataset = f"{parts[3]}/{parts[4].split('@')[0]}" except Exception: # pylint: disable=broad-except pass return dataset class HFDatasetMaterializer(BaseMaterializer): """Materializer to read data to and from huggingface datasets.""" ASSOCIATED_TYPES: ClassVar[Tuple[Type[Any], ...]] = (Dataset, DatasetDict) ASSOCIATED_ARTIFACT_TYPE: ClassVar[ArtifactType] = ( ArtifactType.DATA_ANALYSIS ) def load( self, data_type: Union[Type[Dataset], Type[DatasetDict]] ) -> Union[Dataset, DatasetDict]: """Reads Dataset. Args: data_type: The type of the dataset to read. Returns: The dataset read from the specified dir. """ temp_dir = mkdtemp() io_utils.copy_dir( os.path.join(self.uri, DEFAULT_DATASET_DIR), temp_dir, ) return load_from_disk(temp_dir) def save(self, ds: Union[Dataset, DatasetDict]) -> None: """Writes a Dataset to the specified dir. Args: ds: The Dataset to write. """ temp_dir = TemporaryDirectory() path = os.path.join(temp_dir.name, DEFAULT_DATASET_DIR) try: ds.save_to_disk(path) io_utils.copy_dir( path, os.path.join(self.uri, DEFAULT_DATASET_DIR), ) finally: fileio.rmtree(temp_dir.name) def extract_metadata( self, ds: Union[Dataset, DatasetDict] ) -> Dict[str, "MetadataType"]: """Extract metadata from the given `Dataset` object. Args: ds: The `Dataset` object to extract metadata from. Returns: The extracted metadata as a dictionary. Raises: ValueError: If the given object is not a `Dataset` or `DatasetDict`. 
""" pandas_materializer = PandasMaterializer(self.uri) if isinstance(ds, Dataset): return pandas_materializer.extract_metadata(ds.to_pandas()) elif isinstance(ds, DatasetDict): metadata: Dict[str, Dict[str, "MetadataType"]] = defaultdict(dict) for dataset_name, dataset in ds.items(): dataset_metadata = pandas_materializer.extract_metadata( dataset.to_pandas() ) for key, value in dataset_metadata.items(): metadata[key][dataset_name] = value return dict(metadata) raise ValueError(f"Unsupported type {type(ds)}") def save_visualizations( self, ds: Union[Dataset, DatasetDict] ) -> Dict[str, VisualizationType]: """Save visualizations for the dataset. Args: ds: The Dataset or DatasetDict to visualize. Returns: A dictionary mapping visualization paths to their types. Raises: ValueError: If the given object is not a `Dataset` or `DatasetDict`. """ visualizations = {} if isinstance(ds, Dataset): datasets = {"default": ds} elif isinstance(ds, DatasetDict): datasets = ds else: raise ValueError(f"Unsupported type {type(ds)}") for name, dataset in datasets.items(): # Generate a unique identifier for the dataset if dataset.info.download_checksums: dataset_id = extract_repo_name( [x for x in dataset.info.download_checksums.keys()][0] ) if dataset_id: # Create the iframe HTML html = f""" <iframe src="https://huggingface.co/datasets/{dataset_id}/embed/viewer" frameborder="0" width="100%" height="560px" ></iframe> """ # Save the HTML to a file visualization_path = os.path.join( self.uri, f"{name}_viewer.html" ) with fileio.open(visualization_path, "w") as f: f.write(html) visualizations[visualization_path] = VisualizationType.HTML return visualizations
cloned_public_repos/zenml/examples/quickstart/materializers/__init__.py
# Copyright (c) ZenML GmbH 2024. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#       https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
from .huggingface_t5_materializer import HFT5Materializer
from .huggingface_datasets_materializer import HFDatasetMaterializer

__all__ = [
    "HFT5Materializer",
    "HFDatasetMaterializer",
]
cloned_public_repos/zenml/examples/quickstart/materializers/huggingface_t5_materializer.py
# Copyright (c) ZenML GmbH 2024. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#       https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
"""Implementation of the Huggingface t5 materializer."""

import os
import tempfile
from typing import Any, ClassVar, Type, Union

from transformers import (
    T5ForConditionalGeneration,
    T5Tokenizer,
    T5TokenizerFast,
)

from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer

DEFAULT_MODEL_DIR = "hf_t5_model"


class HFT5Materializer(BaseMaterializer):
    """Base class for huggingface t5 models."""

    SKIP_REGISTRATION: ClassVar[bool] = False
    ASSOCIATED_TYPES = (
        T5ForConditionalGeneration,
        T5Tokenizer,
        T5TokenizerFast,
    )

    def load(
        self, data_type: Type[Any]
    ) -> Union[T5ForConditionalGeneration, T5Tokenizer, T5TokenizerFast]:
        """Reads a T5ForConditionalGeneration model or T5Tokenizer from a serialized zip file.

        Args:
            data_type: A T5ForConditionalGeneration or T5Tokenizer type.

        Returns:
            A T5ForConditionalGeneration or T5Tokenizer object.
        Raises:
            ValueError: Unsupported data type used
        """
        filepath = self.uri

        with tempfile.TemporaryDirectory(prefix="zenml-temp-") as temp_dir:
            # Copy files from artifact store to temporary directory
            for file in fileio.listdir(filepath):
                src = os.path.join(filepath, file)
                dst = os.path.join(temp_dir, file)
                if fileio.isdir(src):
                    fileio.makedirs(dst)
                    for subfile in fileio.listdir(src):
                        subsrc = os.path.join(src, subfile)
                        subdst = os.path.join(dst, subfile)
                        fileio.copy(subsrc, subdst)
                else:
                    fileio.copy(src, dst)

            # Load the model or tokenizer from the temporary directory
            if data_type in [
                T5ForConditionalGeneration,
                T5Tokenizer,
                T5TokenizerFast,
            ]:
                return data_type.from_pretrained(temp_dir)
            else:
                raise ValueError(f"Unsupported data type: {data_type}")

    def save(
        self,
        obj: Union[T5ForConditionalGeneration, T5Tokenizer, T5TokenizerFast],
    ) -> None:
        """Creates a serialization for a T5ForConditionalGeneration model or T5Tokenizer.

        Args:
            obj: A T5ForConditionalGeneration model or T5Tokenizer.
        """
        # Create a temporary directory
        with tempfile.TemporaryDirectory(prefix="zenml-temp-") as temp_dir:
            # Save the model or tokenizer
            obj.save_pretrained(temp_dir)

            # Copy the directory to the artifact store
            filepath = self.uri
            fileio.makedirs(filepath)
            for file in os.listdir(temp_dir):
                src = os.path.join(temp_dir, file)
                dst = os.path.join(filepath, file)
                if os.path.isdir(src):
                    fileio.makedirs(dst)
                    for subfile in os.listdir(src):
                        subsrc = os.path.join(src, subfile)
                        subdst = os.path.join(dst, subfile)
                        fileio.copy(subsrc, subdst)
                else:
                    fileio.copy(src, dst)
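Under the hood, `save()` and `load()` are a one-level directory round-trip through the artifact store: `save_pretrained` writes to a temp dir, the files (and one level of subdirectories) are copied to `self.uri`, and loading copies them back before `from_pretrained`. A stdlib-only sketch of that copy pattern, with plain `os`/`open` calls standing in for `zenml.io.fileio` (a simplification; names here are illustrative):

```python
import os
import tempfile


def copy_one_level(src_root: str, dst_root: str) -> None:
    """Copy a directory tree one nesting level deep, mirroring the
    loops in HFT5Materializer.save/load."""
    os.makedirs(dst_root, exist_ok=True)
    for name in os.listdir(src_root):
        src, dst = os.path.join(src_root, name), os.path.join(dst_root, name)
        if os.path.isdir(src):
            os.makedirs(dst, exist_ok=True)
            for sub in os.listdir(src):
                with open(os.path.join(src, sub), "rb") as fi:
                    with open(os.path.join(dst, sub), "wb") as fo:
                        fo.write(fi.read())
        else:
            with open(src, "rb") as fi:
                with open(dst, "wb") as fo:
                    fo.write(fi.read())


with tempfile.TemporaryDirectory() as workdir:
    # Fake a saved model directory: a top-level file plus one subdirectory
    model_dir = os.path.join(workdir, "model")
    os.makedirs(os.path.join(model_dir, "weights"))
    with open(os.path.join(model_dir, "config.json"), "w") as f:
        f.write("{}")
    with open(os.path.join(model_dir, "weights", "w.bin"), "wb") as f:
        f.write(b"\x00\x01")

    store = os.path.join(workdir, "artifact_store")
    copy_one_level(model_dir, store)  # roughly what save() does
    restored = os.path.join(workdir, "restored")
    copy_one_level(store, restored)  # roughly what load() does

    restored_names = sorted(os.listdir(restored))

print(restored_names)  # ['config.json', 'weights']
```

Note the pattern only descends one level; deeper nesting inside a subdirectory would not survive the round-trip, which is fine for the flat layout `save_pretrained` produces.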
cloned_public_repos/zenml/examples/quickstart/utils/preprocess.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Union

import pandas as pd


class NADropper:
    """Support class to drop NA values in sklearn Pipeline."""

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X: Union[pd.DataFrame, pd.Series]):
        return X.dropna()


class ColumnsDropper:
    """Support class to drop specific columns in sklearn Pipeline."""

    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X: Union[pd.DataFrame, pd.Series]):
        return X.drop(columns=self.columns)


class DataFrameCaster:
    """Support class to cast type back to pd.DataFrame in sklearn Pipeline."""

    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return pd.DataFrame(X, columns=self.columns)
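These helpers follow the sklearn fit/transform protocol, so an sklearn `Pipeline` can chain them. A self-contained sketch of how such a chain behaves, with the three classes inlined (minimal copies of the definitions above) and made-up sample data:

```python
import pandas as pd


# Minimal copies of the preprocess helpers above, inlined for a
# standalone demo.
class NADropper:
    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return X.dropna()


class ColumnsDropper:
    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return X.drop(columns=self.columns)


class DataFrameCaster:
    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return pd.DataFrame(X, columns=self.columns)


df = pd.DataFrame({"a": [1.0, None, 3.0], "b": [4, 5, 6], "id": [7, 8, 9]})

# Chain the transforms the way an sklearn Pipeline would call them:
step1 = NADropper().fit(df).transform(df)  # drops the row with the NA
step2 = ColumnsDropper(["id"]).fit(step1).transform(step1)
result = DataFrameCaster(step2.columns).fit(step2).transform(step2.to_numpy())

print(result.shape)  # (2, 2)
```

`DataFrameCaster` is the piece that restores column labels after an upstream transformer (e.g. a scaler) has reduced the data to a bare numpy array.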
cloned_public_repos/zenml/examples/quickstart/utils/__init__.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/__init__.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .data_loader import (
    load_data,
)
from .data_tokenizer import (
    tokenize_data,
)
from .data_splitter import (
    split_dataset,
)
from .model_trainer import (
    train_model,
)
from .model_evaluator import (
    evaluate_model,
)
from .model_tester import (
    test_model,
)
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/data_loader.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Annotated

import requests
from datasets import Dataset

from zenml import step
from zenml.logger import get_logger

logger = get_logger(__name__)

PROMPT = ""  # In case you want to also use a prompt you can set it here


@step
def load_data(
    data_url: str,
) -> Annotated[Dataset, "full_dataset"]:
    """Load and prepare the dataset."""

    def read_data_from_url(url):
        inputs = []
        targets = []

        response = requests.get(url, timeout=10)
        response.raise_for_status()  # Raise an exception for bad responses

        for line in response.text.splitlines():
            old, modern = line.strip().split("|")
            inputs.append(f"{PROMPT}{old}")
            targets.append(modern)

        return {"input": inputs, "target": targets}

    # Fetch and process the data
    data = read_data_from_url(data_url)

    # Convert to Dataset
    return Dataset.from_dict(data)
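The loader's parsing logic is independent of the HTTP fetch, so it can be sketched against an in-memory string. The sample sentences below are invented for illustration:

```python
PROMPT = ""  # same empty prompt as in the step


def parse_pairs(text: str) -> dict:
    """Split pipe-delimited 'old|modern' lines into inputs and targets,
    mirroring the loop inside read_data_from_url."""
    inputs, targets = [], []
    for line in text.splitlines():
        old, modern = line.strip().split("|")
        inputs.append(f"{PROMPT}{old}")
        targets.append(modern)
    return {"input": inputs, "target": targets}


sample = "Thou art|You are\nWhence comest thou|Where do you come from"
data = parse_pairs(sample)
```

A dict of column lists in this shape is exactly what `Dataset.from_dict` consumes.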
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/model_evaluator.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import torch
from datasets import Dataset
from transformers import (
    T5ForConditionalGeneration,
)

from zenml import log_metadata, step
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def evaluate_model(
    model: T5ForConditionalGeneration, tokenized_dataset: Dataset
) -> None:
    """Evaluate the model on the training dataset."""
    model.eval()
    total_loss = 0
    num_batches = 0

    for i in range(0, len(tokenized_dataset), 8):  # batch size of 8
        batch = tokenized_dataset[i : i + 8]
        inputs = {
            "input_ids": torch.tensor(batch["input_ids"]),
            "attention_mask": torch.tensor(batch["attention_mask"]),
            "labels": torch.tensor(batch["labels"]),
        }
        with torch.no_grad():
            outputs = model(**inputs)
        total_loss += outputs.loss.item()
        num_batches += 1

    avg_loss = total_loss / num_batches
    print(f"Average loss on the dataset: {avg_loss}")

    log_metadata(metadata={"Average Loss": avg_loss}, infer_model=True)
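The evaluator's bookkeeping can be sketched without torch: batches are taken in strides of 8 and per-batch losses are averaged. The loss values below are made up for illustration:

```python
def batch_starts(n_examples: int, batch_size: int = 8) -> list:
    """Start index of each batch, mirroring the step's range() loop."""
    return list(range(0, n_examples, batch_size))


def average_loss(batch_losses: list) -> float:
    """Average per-batch losses the way the step accumulates them."""
    total = 0.0
    for loss in batch_losses:
        total += loss
    return total / len(batch_losses)


starts = batch_starts(20)  # 20 examples -> batches of sizes 8, 8, and 4
avg = average_loss([2.0, 4.0, 3.0])  # hypothetical per-batch losses
```

Note that the last batch may be smaller than 8; the step averages over batches, not over examples, so the final partial batch is weighted the same as a full one.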
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/model_tester.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
from datasets import Dataset
from transformers import (
    T5ForConditionalGeneration,
    T5TokenizerFast,
)

from zenml import log_metadata, step
from zenml.logger import get_logger

from .data_loader import PROMPT

logger = get_logger(__name__)


@step
def test_model(
    model: T5ForConditionalGeneration,
    tokenized_test_dataset: Dataset,
    tokenizer: T5TokenizerFast,
) -> None:
    """Test the model on some generated Old English-style sentences."""
    model.eval()  # Set the model to evaluation mode

    test_collection = {}

    for index in range(len(tokenized_test_dataset)):
        input_ids = tokenized_test_dataset[index]["input_ids"]

        # Convert input_ids to a tensor and add a batch dimension
        input_ids_tensor = torch.tensor(input_ids).unsqueeze(0)

        with torch.no_grad():
            outputs = model.generate(
                input_ids_tensor,
                max_length=128,
                num_return_sequences=1,
                no_repeat_ngram_size=2,
                top_k=50,
                top_p=0.95,
                temperature=0.7,
            )

        decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)

        # Decode the full input_ids sequence to get the original sentence
        # (decoding only input_ids[0] would decode a single token id)
        original_sentence = tokenizer.decode(input_ids, skip_special_tokens=True)

        # removeprefix drops the exact leading prompt; str.strip would treat
        # PROMPT as a set of characters to trim from both ends
        sentence_without_prompt = original_sentence.removeprefix(PROMPT)
        test_collection[f"Prompt {index}"] = {
            sentence_without_prompt: decoded_output
        }

    log_metadata(
        metadata={"Example Prompts": test_collection},
        infer_model=True,
    )
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/data_tokenizer.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Annotated, Tuple

from datasets import Dataset
from transformers import T5Tokenizer

from steps.model_trainer import T5_Model
from zenml import step
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def tokenize_data(
    dataset: Dataset, model_type: T5_Model
) -> Tuple[
    Annotated[Dataset, "tokenized_dataset"],
    Annotated[T5Tokenizer, "tokenizer"],
]:
    """Tokenize the dataset."""
    tokenizer = T5Tokenizer.from_pretrained(model_type)

    def tokenize_function(examples):
        model_inputs = tokenizer(
            examples["input"],
            max_length=128,
            truncation=True,
            padding="max_length",
        )
        labels = tokenizer(
            examples["target"],
            max_length=128,
            truncation=True,
            padding="max_length",
        )
        model_inputs["labels"] = labels["input_ids"]
        return model_inputs

    return dataset.map(tokenize_function, batched=True), tokenizer
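The `max_length=128, truncation=True, padding="max_length"` combination guarantees fixed-length sequences. Its effect on a single token-id list can be sketched in plain Python (the token ids and pad id are invented for illustration; the real pad id comes from the tokenizer):

```python
def pad_or_truncate(ids: list, max_length: int, pad_id: int = 0) -> list:
    """Emulate truncation=True + padding='max_length' for one sequence."""
    ids = ids[:max_length]                       # truncate if too long
    return ids + [pad_id] * (max_length - len(ids))  # pad if too short

short = pad_or_truncate([1, 2, 3], 5)        # padded up to length 5
long = pad_or_truncate(list(range(10)), 4)   # truncated down to length 4
```

Fixed-length sequences are what lets the evaluator stack batches into rectangular tensors with a plain `torch.tensor(...)` call.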
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/model_trainer.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
from datasets import Dataset
from transformers import (
    T5ForConditionalGeneration,
    Trainer,
    TrainingArguments,
)
from typing_extensions import Annotated

from zenml import ArtifactConfig, step
from zenml.logger import get_logger
from zenml.utils.enum_utils import StrEnum

logger = get_logger(__name__)


class T5_Model(StrEnum):
    """The T5 model sizes supported by this example."""

    SMALL = "t5-small"
    LARGE = "t5-large"


@step(enable_cache=False)
def train_model(
    tokenized_dataset: Dataset,
    model_type: T5_Model,
    num_train_epochs: int,
    per_device_train_batch_size: int,
    gradient_accumulation_steps: int,
    dataloader_num_workers: int,
) -> Annotated[
    T5ForConditionalGeneration, "model", ArtifactConfig(is_model_artifact=True)
]:
    """Train the model and return it as a model artifact."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = T5ForConditionalGeneration.from_pretrained(model_type)
    model = model.to(device)

    training_args = TrainingArguments(
        output_dir="./results",
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=per_device_train_batch_size,  # Reduced batch size for larger model
        gradient_accumulation_steps=gradient_accumulation_steps,  # Increased gradient accumulation
        logging_dir="./logs",
        logging_steps=10,
        save_steps=500,
        fp16=False,  # Mixed precision training disabled
        learning_rate=3e-5,
        max_grad_norm=0.5,  # Gradient clipping
        dataloader_num_workers=dataloader_num_workers,  # Adjust based on your system
        save_total_limit=2,  # Keep only the two most recent checkpoints
    )

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_dataset,
    )

    trainer.train()

    return trainer.model
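The interplay of `per_device_train_batch_size` and `gradient_accumulation_steps` is what the training-args comments hint at: gradients are accumulated over several small batches before each optimizer step, so the effective batch size is their product. A quick sanity check using the values from this example's configs:

```python
def effective_batch_size(
    per_device_batch: int, accumulation_steps: int, num_devices: int = 1
) -> int:
    """Examples contributing to each optimizer step under gradient accumulation."""
    return per_device_batch * accumulation_steps * num_devices


# values from training_gcp.yaml / training_azure.yaml in this example
cloud = effective_batch_size(16, 4)
# values from training_aws.yaml
aws = effective_batch_size(4, 1)
```

This is why the configs can shrink `per_device_train_batch_size` for memory-constrained hardware while raising `gradient_accumulation_steps` to keep the optimizer's effective batch size comparable.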
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/steps/data_splitter.py
import math
import random
from typing import Annotated, Tuple

from datasets import Dataset

from zenml import step


@step
def split_dataset(
    dataset: Dataset,
    train_size: float = 0.7,
    test_size: float = 0.1,
    eval_size: float = 0.2,
    subset_size: float = 1.0,
    random_state: int = 42,
) -> Tuple[
    Annotated[Dataset, "train_dataset"],
    Annotated[Dataset, "eval_dataset"],
    Annotated[Dataset, "test_dataset"],
]:
    """Split a dataset into train, evaluation, and test sets.

    Args:
        dataset (Dataset): The input dataset to split.
        train_size (float): Fraction of the dataset to use for training. Default is 0.7.
        test_size (float): Fraction of the dataset to use for testing. Default is 0.1.
        eval_size (float): Fraction of the dataset to use for evaluation. Default is 0.2.
        subset_size (float): Fraction of the dataset to use. Default is 1.0 (use full dataset).
        random_state (int): Random state for reproducibility. Default is 42.

    Returns:
        tuple: (train_dataset, eval_dataset, test_dataset)
    """
    # Validate split proportions
    if not math.isclose(train_size + eval_size + test_size, 1.0, rel_tol=1e-5):
        raise ValueError("Split proportions must sum to 1.0")

    # Validate the subset size
    if subset_size > 1.0 or subset_size < 0.0:
        print(
            f"subset_size should be in the range [0.0, 1.0], {subset_size} was supplied. "
            f"Defaulting subset_size to 1.0"
        )
        subset_size = 1.0

    # Set random seed for reproducibility
    random.seed(random_state)

    # Get the total number of samples in the dataset
    total_samples = len(dataset)

    # Calculate the number of samples for the subset
    subset_samples = int(total_samples * subset_size)

    # Randomly select indices for the subset
    all_indices = list(range(total_samples))
    subset_indices = random.sample(all_indices, subset_samples)

    # Calculate split sizes
    train_samples = int(subset_samples * train_size)
    eval_samples = int(subset_samples * eval_size)

    # Shuffle the subset indices
    random.shuffle(subset_indices)

    # Split the indices
    train_indices = subset_indices[:train_samples]
    eval_indices = subset_indices[train_samples : train_samples + eval_samples]
    test_indices = subset_indices[train_samples + eval_samples :]

    return (
        dataset.select(train_indices),
        dataset.select(eval_indices),
        dataset.select(test_indices),
    )
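The index bookkeeping in `split_dataset` can be exercised on plain lists to confirm the three splits are disjoint and exhaustive. This sketch mirrors the step's logic but is not the step itself:

```python
import math
import random


def split_indices(
    n: int,
    train_size: float = 0.7,
    eval_size: float = 0.2,
    test_size: float = 0.1,
    seed: int = 42,
):
    """Shuffle indices once, then slice into train/eval/test, like the step."""
    if not math.isclose(train_size + eval_size + test_size, 1.0, rel_tol=1e-5):
        raise ValueError("Split proportions must sum to 1.0")
    rng = random.Random(seed)  # isolated RNG instead of the global one
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(n * train_size)
    n_eval = int(n * eval_size)
    return idx[:n_train], idx[n_train : n_train + n_eval], idx[n_train + n_eval :]


train, evl, test = split_indices(100)
```

Because `test_indices` is the slice remainder, rounding in `int(...)` never loses an example; everything not assigned to train or eval lands in test.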
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/configs/training_aws.yaml
# Environment configuration
settings:
  docker:
    parent_image: "715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-public-pipelines:quickstart-0.80.1-py3.11-aws"
    skip_build: True  # If you switch this to False remove the parent_image
    requirements: requirements.txt
    environment:
      WANDB_DISABLED: "true"
  orchestrator.sagemaker:
    instance_type: ml.m5.4xlarge

# Model Control Plane configuration
model:
  name: YeOldeEnglishTranslator
  description: Model to translate from old to modern english
  tags: ["quickstart", "llm"]

# Configure the pipeline
parameters:
  data_url: 'https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt'
  # model_type: "t5-small"  # Choose between t5-small and t5-large
  num_train_epochs: 2
  per_device_train_batch_size: 4
  gradient_accumulation_steps: 1
  dataloader_num_workers: 0

# Per step configuration
steps:
  split_dataset:
    parameters:
      subset_size: 0.5  # only use 50% of all available data
      train_size: 0.7
      test_size: 0.1
      eval_size: 0.2
      random_state: 42
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/configs/training_azure.yaml
# Environment configuration
settings:
  docker:
    parent_image: "zenmldocker/zenml-public-pipelines:quickstart-0.80.1-py3.11-azure"
    skip_build: True
    requirements: requirements.txt
    environment:
      WANDB_DISABLED: "true"
  # Uncomment the following lines to specify the accelerator for your azureml orchestrator
  # orchestrator.azureml:
  #   mode: "compute-instance"
  #   compute_name: compute_name  # Insert the name of your preconfigured compute instance

# Model Control Plane configuration
model:
  name: YeOldeEnglishTranslator
  description: Model to translate from old to modern english
  tags: ["quickstart", "llm", "t5"]

# Configure the pipeline
parameters:
  data_url: 'https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt'
  # model_type: "t5-small"  # Choose between t5-small and t5-large
  num_train_epochs: 2
  per_device_train_batch_size: 16
  gradient_accumulation_steps: 4
  dataloader_num_workers: 0

# Per step configuration
steps:
  split_dataset:
    parameters:
      subset_size: 0.5  # only use 50% of all available data
      train_size: 0.7
      test_size: 0.1
      eval_size: 0.2
      random_state: 42
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/configs/training_gcp.yaml
# Environment configuration
settings:
  docker:
    parent_image: "zenmldocker/zenml-public-pipelines:quickstart-0.80.1-py3.11-gcp"
    skip_build: True
    requirements: requirements.txt
    environment:
      WANDB_DISABLED: "true"
  # Uncomment the following two lines to specify the accelerator for your vertex orchestrator
  # orchestrator.vertex:
  #   node_selector_constraint: ["cloud.google.com/gke-accelerator", "NVIDIA_TESLA_P4"]

# Model Control Plane configuration
model:
  name: YeOldeEnglishTranslator
  description: Model to translate from old to modern english
  tags: ["quickstart", "llm", "t5"]

# Configure the pipeline
parameters:
  data_url: 'https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt'
  # model_type: "t5-small"  # Choose between t5-small and t5-large
  num_train_epochs: 2
  per_device_train_batch_size: 16
  gradient_accumulation_steps: 4
  dataloader_num_workers: 0

# Per step configuration
steps:
  split_dataset:
    parameters:
      subset_size: 0.5  # only use 50% of all available data
      train_size: 0.7
      test_size: 0.1
      eval_size: 0.2
      random_state: 42
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/configs/training_default.yaml
# Environment configuration
settings:
  docker:
    requirements: requirements.txt
    environment:
      WANDB_DISABLED: "true"

# Model Control Plane configuration
model:
  name: YeOldeEnglishTranslator
  description: Model to translate from old to modern english
  tags: ["quickstart", "llm", "t5"]

# Configure the pipeline
parameters:
  data_url: 'https://storage.googleapis.com/zenml-public-bucket/quickstart-files/translations.txt'
  # model_type: "t5-small"  # Choose between t5-small and t5-large
  num_train_epochs: 1
  per_device_train_batch_size: 1
  gradient_accumulation_steps: 4
  dataloader_num_workers: 4

# Per step configuration
steps:
  split_dataset:
    parameters:
      subset_size: 0.1  # only use 10% of all available data
      train_size: 0.7
      test_size: 0.1
      eval_size: 0.2
      random_state: 42
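All four configs share the same `split_dataset` parameters. A small, hypothetical validation helper (`validate_splits` is not part of the example) shows the invariants the values must satisfy; they match the checks `split_dataset` performs at run time:

```python
import math

# values mirroring training_default.yaml's split_dataset parameters
config = {
    "subset_size": 0.1,
    "train_size": 0.7,
    "test_size": 0.1,
    "eval_size": 0.2,
}


def validate_splits(cfg: dict) -> bool:
    """Check the invariants split_dataset enforces on its parameters."""
    total = cfg["train_size"] + cfg["test_size"] + cfg["eval_size"]
    if not math.isclose(total, 1.0, rel_tol=1e-5):
        raise ValueError("Split proportions must sum to 1.0")
    if not 0.0 <= cfg["subset_size"] <= 1.0:
        raise ValueError("subset_size must be in [0.0, 1.0]")
    return True
```

Validating a config like this before submitting a cloud run is cheap insurance against a pipeline that only fails once the orchestrator has already provisioned resources.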
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/pipelines/__init__.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .training import english_translation_pipeline
cloned_public_repos/zenml/examples/quickstart
cloned_public_repos/zenml/examples/quickstart/pipelines/training.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2024. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import materializers
from steps import (
    evaluate_model,
    load_data,
    split_dataset,
    test_model,
    tokenize_data,
    train_model,
)
from steps.model_trainer import T5_Model

from zenml import pipeline
from zenml.logger import get_logger

logger = get_logger(__name__)

assert materializers  # Ensure materializers are loaded


@pipeline
def english_translation_pipeline(
    data_url: str,
    model_type: T5_Model,
    per_device_train_batch_size: int,
    gradient_accumulation_steps: int,
    dataloader_num_workers: int,
    num_train_epochs: int = 5,
):
    """Define a pipeline that connects the steps."""
    full_dataset = load_data(data_url)
    tokenized_dataset, tokenizer = tokenize_data(
        dataset=full_dataset, model_type=model_type
    )
    (
        tokenized_train_dataset,
        tokenized_eval_dataset,
        tokenized_test_dataset,
    ) = split_dataset(tokenized_dataset)
    model = train_model(
        tokenized_dataset=tokenized_train_dataset,
        model_type=model_type,
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=per_device_train_batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        dataloader_num_workers=dataloader_num_workers,
    )
    evaluate_model(model=model, tokenized_dataset=tokenized_eval_dataset)
    test_model(
        model=model,
        tokenized_test_dataset=tokenized_test_dataset,
        tokenizer=tokenizer,
    )
cloned_public_repos/zenml/examples
cloned_public_repos/zenml/examples/mlops_starter/.dockerignore
.venv*
.requirements*
cloned_public_repos/zenml/examples
cloned_public_repos/zenml/examples/mlops_starter/README.md
# :running: MLOps 101 with ZenML

Build your first MLOps pipelines with ZenML.

## :earth_americas: Overview

This repository is a minimalistic MLOps project intended as a starting point to learn how to put ML workflows in production. It features:

- A feature engineering pipeline that loads data and prepares it for training.
- A training pipeline that loads the preprocessed dataset and trains a model.
- A batch inference pipeline that runs predictions on the trained model with new data.

This is a representation of how it will all come together:

<img src=".assets/pipeline_overview.png" width="70%" alt="Pipelines Overview">

Along the way we will also show you how to:

- Structure your code into MLOps pipelines.
- Automatically version, track, and cache data, models, and other artifacts.
- Transition your ML models from development to production.

## 🏃 Run on Colab

You can use Google Colab to see ZenML in action, no signup / installation required!

<a href="https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/mlops_starter/quickstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## :computer: Run Locally

To run locally, install ZenML and pull this quickstart:

```shell
# Install ZenML
pip install "zenml[server]"

# clone the ZenML repository
git clone https://github.com/zenml-io/zenml.git
cd zenml/examples/mlops_starter
```

Now we're ready to start. You have two options for running the quickstart locally:

#### Option 1 - Interactively explore the quickstart using Jupyter Notebook:

```bash
pip install notebook
jupyter notebook
# open quickstart.ipynb
```

#### Option 2 - Execute the whole ML pipeline from a Python script:

```bash
# Install required zenml integrations
zenml integration install sklearn pandas -y

# Initialize ZenML
zenml init

# Start the ZenServer to enable dashboard access
zenml login --local

# Run the feature engineering pipeline
python run.py --feature-pipeline

# Run the training pipeline
python run.py --training-pipeline

# Run the training pipeline with versioned artifacts
python run.py --training-pipeline --train-dataset-version-name=1 --test-dataset-version-name=1

# Run the inference pipeline
python run.py --inference-pipeline
```

## 🌵 Learning MLOps with ZenML

This project is also a great source of learning about some fundamental MLOps concepts. In sum, there are four exemplary steps happening that can be mapped onto many other projects:

<details>
<summary>🥇 Step 1: Load your data and execute feature engineering</summary>

We'll start off by importing our data. In this project, we'll be working with
[the Breast Cancer](https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic) dataset,
which is publicly available on the UCI Machine Learning Repository. The task is a classification
problem: predicting whether a patient is diagnosed with breast cancer or not.

When you're getting started with a machine learning problem you'll want to do
something similar to this: import your data and get it in the right shape for
your training. Here are the typical steps within a feature engineering pipeline.
The steps are defined in the [steps](steps/) directory, while the [pipelines](pipelines/)
directory has the pipeline code to connect them together.

<img src=".assets/feature_engineering_pipeline.png" width="50%" alt="Feature engineering pipeline" />

To execute the feature engineering pipeline, run:

```python
python run.py --feature-pipeline
```

After the pipeline has run, it will produce logs like:

```shell
The latest feature engineering pipeline produced the following artifacts:

1. Train Dataset - Name: dataset_trn, Version Name: 1
2. Test Dataset: Name: dataset_tst, Version Name: 1
```

We will use these versions in the next pipeline.

</details>

<details>
<summary>⌚ Step 2: Training pipeline</summary>

Now that our data is prepared, it makes sense to train some models to get a sense
of how difficult the task is. The Breast Cancer dataset is complex enough that we
are unlikely to train a perfect model, but we can get a sense of what a reasonable
baseline looks like.

We'll start with two simple models, an SGD Classifier and a Random Forest
Classifier, both batteries-included from `sklearn`. We'll train them on the
same data and then compare their performance.

<img src=".assets/training_pipeline.png" width="50%" alt="Training pipeline">

Run it using the dataset version names from the first step:

```python
# You can also omit `--train-dataset-version-name` and `--test-dataset-version-name` to use
# the latest versions
python run.py --training-pipeline --train-dataset-version-name 1 --test-dataset-version-name 1
```

To track these models, ZenML offers a *Model Control Plane*, which is a central register
of all your ML models. Each run of the training pipeline will produce a ZenML Model Version.

```shell
zenml model list
```

This will show you a new `breast_cancer_classifier` model with two versions, `sgd` and `rf`,
created. You can find out how this was configured in the [YAML pipeline configuration files](configs/).

If you are a [ZenML Pro](https://zenml.io/pro) user, you can see all of this visualized in the dashboard:

<img src=".assets/cloud_mcp_screenshot.png" width="70%" alt="Model Control Plane">

There is a lot more you can do with ZenML models, including the ability to
track metrics by adding metadata to them, or having them persist in a model
registry. However, these topics can be explored more in the [ZenML docs](https://docs.zenml.io).

</details>

<details>
<summary>💯 Step 3: Promoting the best model to production</summary>

For now, we will use the ZenML model control plane to promote our best
model to `production`. You can do this by simply setting the `stage` of
your chosen model version to the `production` tag.

```shell
zenml model version update breast_cancer_classifier rf --stage production
```

While we've demonstrated a manual promotion process for clarity, a more in-depth
look at the [promoter code](steps/model_promoter.py) reveals that the training
pipeline is designed to automate this step. It evaluates the latest model against
established production metrics and, if the new model outperforms the existing one
based on test set results, it will automatically promote the model to production.
Here is an overview of the process:

<img src=".assets/cloud_mcp.png" width="60%" alt="Model Control Plane">

Again, if you are a [ZenML Pro](https://zenml.io/pro) user, you would be able to see all of this in the cloud dashboard.

</details>

<details>
<summary>🫅 Step 4: Consuming the model in production</summary>

Once the model is promoted, we can now consume the right model version in our
batch inference pipeline directly. Let's see how that works.

The batch inference pipeline simply takes the model marked as `production` and runs
inference on it with `live data`. The critical step here is the `inference_predict` step,
where we load the model in memory and generate predictions. Apart from loading the model,
we must also load the preprocessing pipeline that we ran in feature engineering,
so that we can apply the exact transformations from training time at inference time.

Let's bring it all together: ZenML automatically links all artifacts to the
`production` model version as well, including the predictions that were returned
in the pipeline. This completes the MLOps loop of training to inference:

<img src=".assets/inference_pipeline.png" width="45%" alt="Inference pipeline">

You can also see all predictions ever created as a complete history in the dashboard
(again, only for [ZenML Pro](https://zenml.io/pro) users):

<img src=".assets/cloud_mcp_predictions.png" width="70%" alt="Model Control Plane">

</details>

## :bulb: Learn More

You're a legit MLOps engineer now! You trained two models, evaluated them against
a test set, registered the best one with the ZenML model control plane,
and served some predictions. You also learned how to iterate on your models and
data by using some of the ZenML utility abstractions. You saw how to view your
artifacts and stacks via the client as well as the ZenML Dashboard.

If you want to learn more about ZenML as a tool, then the
[:page_facing_up: **ZenML Docs**](https://docs.zenml.io/) are the perfect place
to get started. In particular, the [Production Guide](https://docs.zenml.io/user-guide/production-guide/)
goes into more detail as to how to transition these same pipelines into production on the cloud.

The best way to get a production ZenML instance up and running with all batteries
included is [ZenML Pro](https://zenml.io/pro). Check it out!

Also, make sure to join our <a href="https://zenml.io/slack" target="_blank">
    <img width="15" src="https://cdn3.iconfinder.com/data/icons/logos-and-brands-adobe/512/306_Slack-512.png" alt="Slack"/>
    <b>Slack Community</b>
</a> to become part of the ZenML family!
cloned_public_repos/zenml/examples
cloned_public_repos/zenml/examples/mlops_starter/.copier-answers.yml
# Changes here will be overwritten by Copier
_commit: 2024.11.28
_src_path: gh:zenml-io/template-starter
email: info@zenml.io
full_name: ZenML GmbH
open_source_license: apache
project_name: ZenML Starter
version: 0.1.0
cloned_public_repos/zenml/examples
cloned_public_repos/zenml/examples/mlops_starter/LICENSE
Apache Software License 2.0

Copyright (c) ZenML GmbH 2025. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
cloned_public_repos/zenml/examples
cloned_public_repos/zenml/examples/mlops_starter/quickstart.ipynb
from zenml.environment import Environment

if Environment.in_google_colab():
    # Install Cloudflare Tunnel binary
    !wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && dpkg -i cloudflared-linux-amd64.deb

    # Pull required modules from this example
    !git clone -b main https://github.com/zenml-io/zenml
    !cp -r zenml/examples/quickstart/* .
    !rm -rf zenml

zenml_server_url = "PLEASE_UPDATE_ME"  # in the form "https://URL_TO_SERVER"

!zenml login $zenml_server_url

# Initialize ZenML and set the default stack
!zenml init
!zenml stack set default

# Do the imports at the top
from typing import List, Optional
from uuid import UUID

import pandas as pd
from sklearn.datasets import load_breast_cancer
from steps import (
    data_loader,
    data_preprocessor,
    data_splitter,
    inference_preprocessor,
    model_evaluator,
)
from typing_extensions import Annotated

from zenml import Model, get_step_context, pipeline, step
from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)

# Initialize the ZenML client to fetch objects from the ZenML Server
client = Client()

@step
def data_loader_simplified(
    random_state: int, is_inference: bool = False, target: str = "target"
) -> Annotated[pd.DataFrame, "dataset"]:  # We name the dataset
    """Dataset reader step."""
    dataset = load_breast_cancer(as_frame=True)
    inference_size = int(len(dataset.target) * 0.05)
    dataset: pd.DataFrame = dataset.frame
    inference_subset = dataset.sample(
        inference_size, random_state=random_state
    )
    if is_inference:
        dataset = inference_subset
        dataset.drop(columns=target, inplace=True)
    else:
        dataset.drop(inference_subset.index, inplace=True)
    dataset.reset_index(drop=True, inplace=True)
    logger.info(f"Dataset with {len(dataset)} records loaded!")
    return dataset

df = data_loader_simplified(random_state=42)
df.head()

@pipeline
def feature_engineering(
    test_size: float = 0.3,
    drop_na: Optional[bool] = None,
    normalize: Optional[bool] = None,
    drop_columns: Optional[List[str]] = None,
    target: Optional[str] = "target",
    random_state: int = 17,
):
    """Feature engineering pipeline."""
    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.
    raw_data = data_loader(random_state=random_state, target=target)
    dataset_trn, dataset_tst = data_splitter(
        dataset=raw_data,
        test_size=test_size,
    )
    dataset_trn, dataset_tst, _ = data_preprocessor(
        dataset_trn=dataset_trn,
        dataset_tst=dataset_tst,
        drop_na=drop_na,
        normalize=normalize,
        drop_columns=drop_columns,
        target=target,
        random_state=random_state,
    )

feature_engineering()

feature_engineering(test_size=0.25)

feature_engineering(test_size=0.25, random_state=104)

from zenml.environment import Environment
from zenml.zen_stores.rest_zen_store import RestZenStore

if not isinstance(client.zen_store, RestZenStore):
    # Only spin up a local Dashboard in case you aren't already connected to a remote server
    if Environment.in_google_colab():
        # run ZenML through a cloudflare tunnel to get a public endpoint
        !zenml login --local --port 8237 & cloudflared tunnel --url http://localhost:8237
    else:
        !zenml login --local

client = Client()
run = client.get_pipeline("feature_engineering").last_run
print(run.name)

run.steps["data_preprocessor"].outputs

# Read one of the datasets. This is the one with a 0.25 test split
run.steps["data_preprocessor"].outputs["dataset_trn"].load()

# Get artifact version from our run
dataset_trn_artifact_version_via_run = run.steps["data_preprocessor"].outputs[
    "dataset_trn"
]

# Get latest version from client directly
dataset_trn_artifact_version = client.get_artifact_version("dataset_trn")

# This should be true if our run is the latest run and no artifact has been produced
# in the intervening time
dataset_trn_artifact_version_via_run.id == dataset_trn_artifact_version.id

# Fetch the rest of the artifacts
dataset_tst_artifact_version = client.get_artifact_version("dataset_tst")
preprocessing_pipeline_artifact_version = client.get_artifact_version(
    "preprocess_pipeline"
)

# Load an artifact to verify you can fetch it
dataset_trn_artifact_version.load()

import pandas as pd
from sklearn.base import ClassifierMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from typing_extensions import Annotated

from zenml import ArtifactConfig, step
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def model_trainer(
    dataset_trn: pd.DataFrame,
    model_type: str = "sgd",
) -> Annotated[
    ClassifierMixin,
    ArtifactConfig(name="sklearn_classifier", is_model_artifact=True),
]:
    """Configure and train a model on the training dataset."""
    target = "target"
    if model_type == "sgd":
        model = SGDClassifier()
    elif model_type == "rf":
        model = RandomForestClassifier()
    else:
        raise ValueError(f"Unknown model type {model_type}")

    logger.info(f"Training model {model}...")
    model.fit(
        dataset_trn.drop(columns=[target]),
        dataset_trn[target],
    )
    return model

@pipeline
def training(
    train_dataset_id: Optional[UUID] = None,
    test_dataset_id: Optional[UUID] = None,
    model_type: str = "sgd",
    min_train_accuracy: float = 0.0,
    min_test_accuracy: float = 0.0,
):
    """Model training pipeline."""
    if train_dataset_id is None or test_dataset_id is None:
        # If we don't pass the IDs, this will run the feature engineering pipeline
        dataset_trn, dataset_tst = feature_engineering()
    else:
        # Load the datasets from an older pipeline
        dataset_trn = client.get_artifact_version(
            name_id_or_prefix=train_dataset_id
        )
        dataset_tst = client.get_artifact_version(
            name_id_or_prefix=test_dataset_id
        )

    trained_model = model_trainer(
        dataset_trn=dataset_trn,
        model_type=model_type,
    )

    model_evaluator(
        model=trained_model,
        dataset_trn=dataset_trn,
        dataset_tst=dataset_tst,
        min_train_accuracy=min_train_accuracy,
        min_test_accuracy=min_test_accuracy,
    )

# Use a random forest model with the chosen datasets.
# We need to pass the IDs of the datasets into the function
training(
    model_type="rf",
    train_dataset_id=dataset_trn_artifact_version.id,
    test_dataset_id=dataset_tst_artifact_version.id,
)
rf_run = client.get_pipeline("training").last_run

# Use an SGD classifier
sgd_run = training(
    model_type="sgd",
    train_dataset_id=dataset_trn_artifact_version.id,
    test_dataset_id=dataset_tst_artifact_version.id,
)
sgd_run = client.get_pipeline("training").last_run

# The evaluator returns a float value with the accuracy
rf_run.steps["model_evaluator"].output.load() > sgd_run.steps[
    "model_evaluator"
].output.load()

pipeline_settings = {}

# Let's add some metadata to the model to make it identifiable
pipeline_settings["model"] = Model(
    name="breast_cancer_classifier",
    license="Apache 2.0",
    description="A breast cancer classifier",
    tags=["breast_cancer", "classifier"],
)

# Let's train the SGD model and set the version name to "sgd"
pipeline_settings["model"].version = "sgd"

# the `with_options` method allows us to pass in pipeline settings
# and returns a configured pipeline
training_configured = training.with_options(**pipeline_settings)

# We can now run this as usual
training_configured(
    model_type="sgd",
    train_dataset_id=dataset_trn_artifact_version.id,
    test_dataset_id=dataset_tst_artifact_version.id,
)

# Let's train the RF model and set the version name to "rf"
pipeline_settings["model"].version = "rf"

# the `with_options` method allows us to pass in pipeline settings
# and returns a configured pipeline
training_configured = training.with_options(**pipeline_settings)

# Let's run it again to make sure we have two versions
training_configured(
    model_type="rf",
    train_dataset_id=dataset_trn_artifact_version.id,
    test_dataset_id=dataset_tst_artifact_version.id,
)

zenml_model = client.get_model("breast_cancer_classifier")
print(zenml_model)

print(f"Model {zenml_model.name} has {len(zenml_model.versions)} versions")

zenml_model.versions[0].version, zenml_model.versions[1].version

# Let's load the RF version
rf_zenml_model_version = client.get_model_version(
    "breast_cancer_classifier", "rf"
)

# We can now load our classifier directly as well
random_forest_classifier = rf_zenml_model_version.get_artifact(
    "sklearn_classifier"
).load()
random_forest_classifier

# Set our best classifier to production
rf_zenml_model_version.set_stage("production", force=True)

@step
def inference_predict(
    dataset_inf: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    """Predictions step"""
    # Get the model
    model = get_step_context().model

    # run prediction from memory
    predictor = model.load_artifact("sklearn_classifier")
    predictions = predictor.predict(dataset_inf)

    predictions = pd.Series(predictions, name="predicted")
    return predictions

@pipeline
def inference(preprocess_pipeline_id: UUID):
    """Model batch inference pipeline"""
    # random_state = client.get_artifact_version(name_id_or_prefix=preprocess_pipeline_id).metadata["random_state"]
    # target = client.get_artifact_version(name_id_or_prefix=preprocess_pipeline_id).run_metadata['target']
    random_state = 42
    target = "target"

    df_inference = data_loader(random_state=random_state, is_inference=True)
    df_inference = inference_preprocessor(
        dataset_inf=df_inference,
        # We use the preprocess pipeline from the feature engineering pipeline
        preprocess_pipeline=client.get_artifact_version(
            name_id_or_prefix=preprocess_pipeline_id
        ),
        target=target,
    )
    inference_predict(
        dataset_inf=df_inference,
    )

pipeline_settings = {"enable_cache": False}

# Let's add some metadata to the model to make it identifiable
pipeline_settings["model"] = Model(
    name="breast_cancer_classifier",
    version="production",  # We can pass in the stage name here!
    license="Apache 2.0",
    description="A breast cancer classifier",
    tags=["breast_cancer", "classifier"],
)

# the `with_options` method allows us to pass in pipeline settings
# and returns a configured pipeline
inference_configured = inference.with_options(**pipeline_settings)

# We need to pass in the ID of the preprocessing done in the feature engineering pipeline
# in order to avoid training-serving skew
inference_configured(
    preprocess_pipeline_id=preprocessing_pipeline_artifact_version.id
)

# Fetch production model
production_model_version = client.get_model_version(
    "breast_cancer_classifier", "production"
)

# Get the predictions artifact
production_model_version.get_artifact("predictions").load()
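The inference pipeline above deliberately reuses the *fitted* `preprocess_pipeline` artifact from feature engineering instead of refitting a preprocessor on the serving data. The reason can be sketched without any ZenML machinery; the `MinMaxScaler1D` class below is purely illustrative and not part of the ZenML or sklearn APIs:

```python
# Why reuse the fitted preprocessor: refitting at serving time silently
# changes the feature mapping (training-serving skew).

class MinMaxScaler1D:
    """Tiny stand-in for a fitted preprocessing step."""

    def fit(self, values):
        self.lo, self.hi = min(values), max(values)
        return self

    def transform(self, values):
        span = self.hi - self.lo or 1.0
        return [(v - self.lo) / span for v in values]

train = [10.0, 20.0, 30.0]
serving = [15.0, 25.0]

# Correct: scale serving data with the scaler fitted on training data.
scaler = MinMaxScaler1D().fit(train)
good = scaler.transform(serving)

# Skew: refitting on the serving batch produces different features
# for the same raw inputs.
skewed = MinMaxScaler1D().fit(serving).transform(serving)

print(good)    # [0.25, 0.75]
print(skewed)  # [0.0, 1.0]
```

Passing `preprocess_pipeline_id` into the pipeline plays the role of `scaler` here: the same fitted transformation is applied at training and serving time.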
cloned_public_repos/zenml/examples/mlops_starter/run.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
from typing import Optional

import click
import yaml
from pipelines import (
    feature_engineering,
    inference,
    training,
)

from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)


@click.command(
    help="""
ZenML Starter project.

Run the ZenML starter project with basic options.

Examples:

  \b
  # Run the feature engineering pipeline
    python run.py --feature-pipeline

  \b
  # Run the training pipeline
    python run.py --training-pipeline

  \b
  # Run the training pipeline with versioned artifacts
    python run.py --training-pipeline --train-dataset-version-name=1 --test-dataset-version-name=1

  \b
  # Run the inference pipeline
    python run.py --inference-pipeline
"""
)
@click.option(
    "--train-dataset-name",
    default="dataset_trn",
    type=click.STRING,
    help="The name of the train dataset produced by feature engineering.",
)
@click.option(
    "--train-dataset-version-name",
    default=None,
    type=click.STRING,
    help="Version of the train dataset produced by feature engineering. "
    "If not specified, a new version will be created.",
)
@click.option(
    "--test-dataset-name",
    default="dataset_tst",
    type=click.STRING,
    help="The name of the test dataset produced by feature engineering.",
)
@click.option(
    "--test-dataset-version-name",
    default=None,
    type=click.STRING,
    help="Version of the test dataset produced by feature engineering. "
    "If not specified, a new version will be created.",
)
@click.option(
    "--feature-pipeline",
    is_flag=True,
    default=False,
    help="Whether to run the pipeline that creates the dataset.",
)
@click.option(
    "--training-pipeline",
    is_flag=True,
    default=False,
    help="Whether to run the pipeline that trains the model.",
)
@click.option(
    "--inference-pipeline",
    is_flag=True,
    default=False,
    help="Whether to run the pipeline that performs inference.",
)
@click.option(
    "--no-cache",
    is_flag=True,
    default=False,
    help="Disable caching for the pipeline run.",
)
def main(
    train_dataset_name: str = "dataset_trn",
    train_dataset_version_name: Optional[str] = None,
    test_dataset_name: str = "dataset_tst",
    test_dataset_version_name: Optional[str] = None,
    feature_pipeline: bool = False,
    training_pipeline: bool = False,
    inference_pipeline: bool = False,
    no_cache: bool = False,
):
    """Main entry point for the pipeline execution.

    This entrypoint is where everything comes together:

    * configuring pipeline with the required parameters
      (some of which may come from command line arguments,
      but most of which comes from the YAML config files)
    * launching the pipeline

    Args:
        train_dataset_name: The name of the train dataset produced by feature
            engineering.
        train_dataset_version_name: Version of the train dataset produced by
            feature engineering. If not specified, a new version will be
            created.
        test_dataset_name: The name of the test dataset produced by feature
            engineering.
        test_dataset_version_name: Version of the test dataset produced by
            feature engineering. If not specified, a new version will be
            created.
        feature_pipeline: Whether to run the pipeline that creates the dataset.
        training_pipeline: Whether to run the pipeline that trains the model.
        inference_pipeline: Whether to run the pipeline that performs
            inference.
        no_cache: If `True` cache will be disabled.
    """
    client = Client()

    config_folder = os.path.join(
        os.path.dirname(os.path.realpath(__file__)),
        "configs",
    )

    # Execute Feature Engineering Pipeline
    if feature_pipeline:
        pipeline_args = {}
        if no_cache:
            pipeline_args["enable_cache"] = False
        pipeline_args["config_path"] = os.path.join(
            config_folder, "feature_engineering.yaml"
        )
        run_args_feature = {}
        feature_engineering.with_options(**pipeline_args)(**run_args_feature)
        logger.info("Feature Engineering pipeline finished successfully!\n")

        train_dataset_artifact = client.get_artifact_version(
            train_dataset_name
        )
        test_dataset_artifact = client.get_artifact_version(test_dataset_name)
        logger.info(
            "The latest feature engineering pipeline produced the following "
            f"artifacts: \n\n1. Train Dataset - Name: {train_dataset_name}, "
            f"Version Name: {train_dataset_artifact.version} \n2. Test Dataset: "
            f"Name: {test_dataset_name}, Version Name: {test_dataset_artifact.version}"
        )

    # Execute Training Pipeline
    if training_pipeline:
        run_args_train = {}

        # If train_dataset_version_name is specified, use versioned artifacts
        if train_dataset_version_name or test_dataset_version_name:
            # However, both train and test dataset versions must be specified
            assert (
                train_dataset_version_name is not None
                and test_dataset_version_name is not None
            )
            train_dataset_artifact_version = client.get_artifact_version(
                train_dataset_name, train_dataset_version_name
            )
            # If train dataset is specified, test dataset must be specified
            test_dataset_artifact_version = client.get_artifact_version(
                test_dataset_name, test_dataset_version_name
            )
            # Use versioned artifacts
            run_args_train["train_dataset_id"] = (
                train_dataset_artifact_version.id
            )
            run_args_train["test_dataset_id"] = (
                test_dataset_artifact_version.id
            )

        # Run the SGD pipeline
        pipeline_args = {}
        if no_cache:
            pipeline_args["enable_cache"] = False
        pipeline_args["config_path"] = os.path.join(
            config_folder, "training_sgd.yaml"
        )
        training.with_options(**pipeline_args)(**run_args_train)
        logger.info("Training pipeline with SGD finished successfully!\n\n")

        # Run the RF pipeline
        pipeline_args = {}
        if no_cache:
            pipeline_args["enable_cache"] = False
        pipeline_args["config_path"] = os.path.join(
            config_folder, "training_rf.yaml"
        )
        training.with_options(**pipeline_args)(**run_args_train)
        logger.info("Training pipeline with RF finished successfully!\n\n")

    if inference_pipeline:
        run_args_inference = {}
        pipeline_args = {"enable_cache": False}
        pipeline_args["config_path"] = os.path.join(
            config_folder, "inference.yaml"
        )

        # Configure the pipeline
        inference_configured = inference.with_options(**pipeline_args)

        # Fetch the production model
        with open(pipeline_args["config_path"], "r") as f:
            config = yaml.load(f, Loader=yaml.SafeLoader)
        zenml_model = client.get_model_version(
            config["model"]["name"], config["model"]["version"]
        )
        preprocess_pipeline_artifact = zenml_model.get_artifact(
            "preprocess_pipeline"
        )

        # Use the metadata of feature engineering pipeline artifact
        # to get the random state and target column
        random_state = preprocess_pipeline_artifact.run_metadata[
            "random_state"
        ]
        target = preprocess_pipeline_artifact.run_metadata["target"]
        run_args_inference["random_state"] = random_state
        run_args_inference["target"] = target

        # Run the pipeline
        inference_configured(**run_args_inference)
        logger.info("Inference pipeline finished successfully!")


if __name__ == "__main__":
    main()
cloned_public_repos/zenml/examples/mlops_starter/requirements.txt
zenml[server]>=0.50.0
notebook
scikit-learn
pyarrow
pandas
cloned_public_repos/zenml/examples/mlops_starter/configs/training_sgd.yaml
# environment configuration
settings:
  docker:
    required_integrations:
      - sklearn
      - pandas
    requirements:
      - pyarrow

# configuration of the Model Control Plane
model:
  name: breast_cancer_classifier
  version: sgd
  license: Apache 2.0
  description: A breast cancer classifier
  tags: ["breast_cancer", "classifier"]

# Configure the pipeline
parameters:
  model_type: "sgd"  # Choose between rf/sgd
cloned_public_repos/zenml/examples/mlops_starter/configs/inference.yaml
# environment configuration
settings:
  docker:
    required_integrations:
      - sklearn
      - pandas
    requirements:
      - pyarrow

# configuration of the Model Control Plane
model:
  name: "breast_cancer_classifier"
  version: "production"
  license: Apache 2.0
  description: A breast cancer classifier
  tags: ["breast_cancer", "classifier"]
cloned_public_repos/zenml/examples/mlops_starter/configs/training_rf.yaml
# environment configuration
settings:
  docker:
    required_integrations:
      - sklearn
      - pandas
    requirements:
      - pyarrow

# configuration of the Model Control Plane
model:
  name: breast_cancer_classifier
  version: rf
  license: Apache 2.0
  description: A breast cancer classifier
  tags: ["breast_cancer", "classifier"]

# Configure the pipeline
parameters:
  model_type: "rf"  # Choose between rf/sgd
cloned_public_repos/zenml/examples/mlops_starter/configs/feature_engineering.yaml
# environment configuration
settings:
  docker:
    required_integrations:
      - sklearn
      - pandas
    requirements:
      - pyarrow

# pipeline configuration
test_size: 0.35
cloned_public_repos/zenml/examples/mlops_starter/pipelines/__init__.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .feature_engineering import feature_engineering
from .inference import inference
from .training import training
cloned_public_repos/zenml/examples/mlops_starter/pipelines/inference.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from steps import (
    data_loader,
    inference_predict,
    inference_preprocessor,
)

from zenml import get_pipeline_context, pipeline
from zenml.logger import get_logger

logger = get_logger(__name__)


@pipeline
def inference(random_state: int, target: str):
    """
    Model inference pipeline.

    This is a pipeline that loads the inference data, processes it with the
    same preprocessing pipeline used in training, and runs inference with the
    trained model.

    Args:
        random_state: Random state for reproducibility.
        target: Name of target column in dataset.
    """
    # Get the production model artifact
    model = get_pipeline_context().model.get_artifact("sklearn_classifier")

    # Get the preprocess pipeline artifact associated with this version
    preprocess_pipeline = get_pipeline_context().model.get_artifact(
        "preprocess_pipeline"
    )

    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.
    df_inference = data_loader(random_state=random_state, is_inference=True)
    df_inference = inference_preprocessor(
        dataset_inf=df_inference,
        preprocess_pipeline=preprocess_pipeline,
        target=target,
    )
    inference_predict(
        model=model,
        dataset_inf=df_inference,
    )
cloned_public_repos/zenml/examples/mlops_starter/pipelines/feature_engineering.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import List, Optional

from steps import (
    data_loader,
    data_preprocessor,
    data_splitter,
)

from zenml import pipeline
from zenml.logger import get_logger

logger = get_logger(__name__)


@pipeline
def feature_engineering(
    test_size: float = 0.2,
    drop_na: Optional[bool] = None,
    normalize: Optional[bool] = None,
    drop_columns: Optional[List[str]] = None,
    target: Optional[str] = "target",
    random_state: int = 17,
):
    """
    Feature engineering pipeline.

    This is a pipeline that loads the data, processes it and splits
    it into train and test sets.

    Args:
        test_size: Size of holdout set for training 0.0..1.0
        drop_na: If `True` NA values will be removed from dataset
        normalize: If `True` dataset will be normalized with MinMaxScaler
        drop_columns: List of columns to drop from dataset
        target: Name of target column in dataset
        random_state: Random state to configure the data loader

    Returns:
        The processed datasets (dataset_trn, dataset_tst).
    """
    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.
    raw_data = data_loader(random_state=random_state, target=target)
    dataset_trn, dataset_tst = data_splitter(
        dataset=raw_data,
        test_size=test_size,
    )
    dataset_trn, dataset_tst, _ = data_preprocessor(
        dataset_trn=dataset_trn,
        dataset_tst=dataset_tst,
        drop_na=drop_na,
        normalize=normalize,
        drop_columns=drop_columns,
        target=target,
        random_state=random_state,
    )
    return dataset_trn, dataset_tst
cloned_public_repos/zenml/examples/mlops_starter/pipelines/training.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Optional
from uuid import UUID

from steps import model_evaluator, model_promoter, model_trainer

from pipelines import (
    feature_engineering,
)

from zenml import pipeline
from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)


@pipeline
def training(
    train_dataset_id: Optional[UUID] = None,
    test_dataset_id: Optional[UUID] = None,
    target: Optional[str] = "target",
    model_type: Optional[str] = "sgd",
):
    """
    Model training pipeline.

    This is a pipeline that loads the data from a preprocessing pipeline,
    trains a model on it and evaluates the model. If it is the first model
    to be trained, it will be promoted to production. If not, it will be
    promoted only if it has a higher accuracy than the current production
    model version.

    Args:
        train_dataset_id: ID of the train dataset produced by feature engineering.
        test_dataset_id: ID of the test dataset produced by feature engineering.
        target: Name of target column in dataset.
        model_type: The type of model to train.
    """
    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.

    # Execute Feature Engineering Pipeline
    if train_dataset_id is None or test_dataset_id is None:
        dataset_trn, dataset_tst = feature_engineering()
    else:
        client = Client()
        dataset_trn = client.get_artifact_version(
            name_id_or_prefix=train_dataset_id
        )
        dataset_tst = client.get_artifact_version(
            name_id_or_prefix=test_dataset_id
        )

    model = model_trainer(
        dataset_trn=dataset_trn, target=target, model_type=model_type
    )

    acc = model_evaluator(
        model=model,
        dataset_trn=dataset_trn,
        dataset_tst=dataset_tst,
        target=target,
    )

    model_promoter(accuracy=acc)
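The `model_promoter` step is imported from `steps` but its source file is not part of this excerpt. Based on the pipeline's docstring (first model is always promoted; later models only when they beat the current production accuracy), its core decision might be sketched like this — `should_promote` is a hypothetical helper, not the actual step implementation:

```python
from typing import Optional


def should_promote(
    new_accuracy: float, production_accuracy: Optional[float]
) -> bool:
    """Assumed promotion rule: promote the first model unconditionally,
    later models only when they beat the current production accuracy."""
    if production_accuracy is None:
        # No production model version exists yet
        return True
    return new_accuracy > production_accuracy


# First model ever trained: promoted unconditionally.
print(should_promote(0.61, None))    # True
# Later models must beat the current production accuracy.
print(should_promote(0.95, 0.90))    # True
print(should_promote(0.85, 0.90))    # False
```

In the real step the production accuracy would come from the Model Control Plane (e.g. the `production`-stage model version's logged `test_accuracy` metadata), and promotion would call something like `set_stage("production", force=True)` as the notebook does manually.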
cloned_public_repos/zenml/examples/mlops_starter/steps/data_preprocessor.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import List, Optional, Tuple

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from typing_extensions import Annotated
from utils.preprocess import ColumnsDropper, DataFrameCaster, NADropper

from zenml import log_metadata, step


@step
def data_preprocessor(
    random_state: int,
    dataset_trn: pd.DataFrame,
    dataset_tst: pd.DataFrame,
    drop_na: Optional[bool] = None,
    normalize: Optional[bool] = None,
    drop_columns: Optional[List[str]] = None,
    target: Optional[str] = "target",
) -> Tuple[
    Annotated[pd.DataFrame, "dataset_trn"],
    Annotated[pd.DataFrame, "dataset_tst"],
    Annotated[Pipeline, "preprocess_pipeline"],
]:
    """Data preprocessor step.

    This is an example of a data processor step that prepares the data so that
    it is suitable for model training. It takes in a dataset as an input step
    artifact and performs any necessary preprocessing steps like cleaning,
    feature engineering, feature selection, etc. It then returns the processed
    dataset as a step output artifact.

    This step is parameterized, which allows you to configure the step
    independently of the step code, before running it in a pipeline.
    In this example, the step can be configured to drop NA values, drop some
    columns and normalize numerical columns.

    See the documentation for more information:

        https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters

    Args:
        random_state: Random state for sampling.
        dataset_trn: The train dataset.
        dataset_tst: The test dataset.
        drop_na: If `True` all NA rows will be dropped.
        normalize: If `True` all numeric fields will be normalized.
        drop_columns: List of column names to drop.
        target: Name of target column in dataset.

    Returns:
        The processed datasets (dataset_trn, dataset_tst) and fitted
        `Pipeline` object.
    """
    # We use the sklearn pipeline to chain together multiple preprocessing steps
    preprocess_pipeline = Pipeline([("passthrough", "passthrough")])
    if drop_na:
        preprocess_pipeline.steps.append(("drop_na", NADropper()))
    if drop_columns:
        # Drop columns
        preprocess_pipeline.steps.append(
            ("drop_columns", ColumnsDropper(drop_columns))
        )
    if normalize:
        # Normalize the data
        preprocess_pipeline.steps.append(("normalize", MinMaxScaler()))
    preprocess_pipeline.steps.append(
        ("cast", DataFrameCaster(dataset_trn.columns))
    )
    dataset_trn = preprocess_pipeline.fit_transform(dataset_trn)
    dataset_tst = preprocess_pipeline.transform(dataset_tst)

    # Log metadata so we can load it in the inference pipeline
    log_metadata(
        metadata={"random_state": random_state, "target": target},
        artifact_name="preprocess_pipeline",
        infer_artifact=True,
    )
    return dataset_trn, dataset_tst, preprocess_pipeline
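The step above imports `ColumnsDropper`, `DataFrameCaster` and `NADropper` from `utils/preprocess.py`, which is not included in this excerpt. Their behavior can be guessed from how they are used in the sklearn `Pipeline`; the sketch below is a pure-Python analogue of the drop-NA and drop-columns logic, written against list-of-dict "rows" instead of DataFrames so it stands alone. It is an assumption about the helpers' semantics, not their actual implementation:

```python
# Hypothetical stand-ins for the utils.preprocess transformers, following the
# sklearn fit/transform convention but operating on plain list-of-dict rows.

class NADropper:
    def fit(self, rows, y=None):
        return self

    def transform(self, rows):
        # Drop any row containing a missing value
        return [r for r in rows if all(v is not None for v in r.values())]


class ColumnsDropper:
    def __init__(self, columns):
        self.columns = set(columns)

    def fit(self, rows, y=None):
        return self

    def transform(self, rows):
        # Remove the named columns from every row
        return [
            {k: v for k, v in r.items() if k not in self.columns}
            for r in rows
        ]


rows = [
    {"a": 1, "b": 2, "junk": 0},
    {"a": None, "b": 3, "junk": 0},
]
cleaned = ColumnsDropper(["junk"]).transform(NADropper().transform(rows))
print(cleaned)  # [{'a': 1, 'b': 2}]
```

The real helpers presumably do the same on pandas DataFrames (`dropna()`, `drop(columns=...)`), with `DataFrameCaster` restoring the column index after `MinMaxScaler` turns the frame into a bare numpy array.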
cloned_public_repos/zenml/examples/mlops_starter/steps/data_splitter.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Tuple

import pandas as pd
from sklearn.model_selection import train_test_split
from typing_extensions import Annotated

from zenml import step


@step
def data_splitter(
    dataset: pd.DataFrame, test_size: float = 0.2
) -> Tuple[
    Annotated[pd.DataFrame, "raw_dataset_trn"],
    Annotated[pd.DataFrame, "raw_dataset_tst"],
]:
    """Dataset splitter step.

    This is an example of a dataset splitter step that splits the data
    into train and test set before passing it to ML model.

    This step is parameterized, which allows you to configure the step
    independently of the step code, before running it in a pipeline.
    In this example, the step can be configured to use different test
    set sizes. See the documentation for more information:

        https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters

    Args:
        dataset: Dataset read from source.
        test_size: 0.0..1.0 defining portion of test set.

    Returns:
        The split dataset: dataset_trn, dataset_tst.
    """
    dataset_trn, dataset_tst = train_test_split(
        dataset,
        test_size=test_size,
        random_state=42,
        shuffle=True,
    )
    dataset_trn = pd.DataFrame(dataset_trn, columns=dataset.columns)
    dataset_tst = pd.DataFrame(dataset_tst, columns=dataset.columns)
    return dataset_trn, dataset_tst
cloned_public_repos/zenml/examples/mlops_starter/steps/model_evaluator.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Optional

import pandas as pd
from sklearn.base import ClassifierMixin

from zenml import log_metadata, step
from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def model_evaluator(
    model: ClassifierMixin,
    dataset_trn: pd.DataFrame,
    dataset_tst: pd.DataFrame,
    min_train_accuracy: float = 0.0,
    min_test_accuracy: float = 0.0,
    target: Optional[str] = "target",
) -> float:
    """Evaluate a trained model.

    This is an example of a model evaluation step that takes in a model
    artifact previously trained by another step in your pipeline, and a
    training and validation data set pair which it uses to evaluate the
    model's performance. The model metrics are then returned as step output
    artifacts (in this case, the model accuracy on the train and test set).

    The suggested step implementation also outputs some warnings if the model
    performance does not meet some minimum criteria. This is just an example
    of how you can use steps to monitor your model performance and alert you
    if something goes wrong. As an alternative, you can raise an exception in
    the step to force the pipeline run to fail early and all subsequent steps
    to be skipped.

    This step is parameterized to configure the step independently of the
    step code, before running it in a pipeline. In this example, the step can
    be configured to use different values for the acceptable model
    performance thresholds and to control whether the pipeline run should
    fail if the model performance does not meet the minimum criteria. See the
    documentation for more information:

        https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters

    Args:
        model: The pre-trained model artifact.
        dataset_trn: The train dataset.
        dataset_tst: The test dataset.
        min_train_accuracy: Minimal acceptable training accuracy value.
        min_test_accuracy: Minimal acceptable testing accuracy value.
        target: Name of target column in dataset.

    Returns:
        The model accuracy on the test set.
    """
    # Calculate the model accuracy on the train and test set
    trn_acc = model.score(
        dataset_trn.drop(columns=[target]),
        dataset_trn[target],
    )
    tst_acc = model.score(
        dataset_tst.drop(columns=[target]),
        dataset_tst[target],
    )
    logger.info(f"Train accuracy={trn_acc * 100:.2f}%")
    logger.info(f"Test accuracy={tst_acc * 100:.2f}%")

    messages = []
    if trn_acc < min_train_accuracy:
        messages.append(
            f"Train accuracy {trn_acc * 100:.2f}% is below {min_train_accuracy * 100:.2f}%!"
        )
    if tst_acc < min_test_accuracy:
        messages.append(
            f"Test accuracy {tst_acc * 100:.2f}% is below {min_test_accuracy * 100:.2f}%!"
        )
    # Surface every collected threshold violation as a warning
    for message in messages:
        logger.warning(message)

    client = Client()
    latest_classifier = client.get_artifact_version("sklearn_classifier")

    log_metadata(
        metadata={
            "train_accuracy": float(trn_acc),
            "test_accuracy": float(tst_acc),
        },
        artifact_version_id=latest_classifier.id,
    )

    return float(tst_acc)
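The threshold check performed by the evaluator can be isolated into a small helper, shown here as a hedged sketch independent of ZenML and sklearn (the function name is illustrative, not part of the example project):

```python
def accuracy_quality_gate(
    trn_acc: float,
    tst_acc: float,
    min_train_accuracy: float = 0.0,
    min_test_accuracy: float = 0.0,
) -> list:
    """Collect human-readable warnings for any accuracy threshold violations."""
    messages = []
    if trn_acc < min_train_accuracy:
        messages.append(
            f"Train accuracy {trn_acc * 100:.2f}% is below "
            f"{min_train_accuracy * 100:.2f}%!"
        )
    if tst_acc < min_test_accuracy:
        messages.append(
            f"Test accuracy {tst_acc * 100:.2f}% is below "
            f"{min_test_accuracy * 100:.2f}%!"
        )
    return messages
```

With the default thresholds of 0.0 the gate never fires, which mirrors the step's defaults; a caller can then decide whether to log the messages or raise to fail the run early.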
cloned_public_repos/zenml/examples/mlops_starter/steps/model_trainer.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Optional

import pandas as pd
from sklearn.base import ClassifierMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from typing_extensions import Annotated

from zenml import ArtifactConfig, step
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def model_trainer(
    dataset_trn: pd.DataFrame,
    model_type: str = "sgd",
    target: Optional[str] = "target",
) -> Annotated[
    ClassifierMixin,
    ArtifactConfig(name="sklearn_classifier", is_model_artifact=True),
]:
    """Configure and train a model on the training dataset.

    This is an example of a model training step that takes in a dataset
    artifact previously loaded and pre-processed by other steps in your
    pipeline, then configures and trains a model on it. The model is then
    returned as a step output artifact.

    Args:
        dataset_trn: The preprocessed train dataset.
        model_type: The type of model to train.
        target: The name of the target column in the dataset.

    Returns:
        The trained model artifact.

    Raises:
        ValueError: If the model type is not supported.
    """
    # Initialize the model with the hyperparameters indicated in the step
    # parameters and train it on the training set.
    if model_type == "sgd":
        model = SGDClassifier()
    elif model_type == "rf":
        model = RandomForestClassifier()
    else:
        raise ValueError(f"Unknown model type {model_type}")

    logger.info(f"Training model {model}...")

    model.fit(
        dataset_trn.drop(columns=[target]),
        dataset_trn[target],
    )
    return model
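The if/elif dispatch in the trainer can also be written as a lookup table, which makes adding a model type a one-line change. The sketch below substitutes hypothetical placeholder classes for the sklearn estimators so it stands alone:

```python
class SGDClassifierStub:
    """Placeholder standing in for sklearn.linear_model.SGDClassifier."""


class RandomForestStub:
    """Placeholder standing in for sklearn.ensemble.RandomForestClassifier."""


# Map of supported model-type identifiers to their constructors.
MODEL_FACTORIES = {
    "sgd": SGDClassifierStub,
    "rf": RandomForestStub,
}


def build_model(model_type: str):
    """Instantiate a model, mirroring the step's if/elif branch."""
    try:
        return MODEL_FACTORIES[model_type]()
    except KeyError:
        # Same error contract as the step: unsupported types raise ValueError.
        raise ValueError(f"Unknown model type {model_type}") from None
```

This keeps the error contract of the original step while centralizing the supported types in one dict.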
cloned_public_repos/zenml/examples/mlops_starter/steps/model_promoter.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from zenml import get_step_context, step
from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def model_promoter(accuracy: float, stage: str = "production") -> bool:
    """Model promoter step.

    This is an example of a step that conditionally promotes a model. It takes
    in the accuracy of the model and the stage to promote the model to. If the
    accuracy is below 80%, the model is not promoted. If it is above 80%, the
    model is promoted to the stage indicated in the parameters. If there is
    already a model in the indicated stage, the model with the higher accuracy
    is promoted.

    Args:
        accuracy: Accuracy of the model.
        stage: Which stage to promote the model to.

    Returns:
        Whether the model was promoted or not.
    """
    is_promoted = False

    if accuracy < 0.8:
        logger.info(
            f"Model accuracy {accuracy * 100:.2f}% is below 80%! Not promoting model."
        )
    else:
        logger.info(f"Model promoted to {stage}!")
        is_promoted = True

        # Get the model in the current context
        current_model = get_step_context().model

        # Get the model that is in the production stage
        client = Client()
        try:
            stage_model = client.get_model_version(current_model.name, stage)
            # We compare their metrics
            prod_accuracy = stage_model.get_artifact(
                "sklearn_classifier"
            ).run_metadata["test_accuracy"]
            if float(accuracy) > float(prod_accuracy):
                # If current model has better metrics, we promote it
                is_promoted = True
                current_model.set_stage(stage, force=True)
        except KeyError:
            # If no such model exists, current one is promoted
            is_promoted = True
            current_model.set_stage(stage, force=True)

    return is_promoted
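The promotion rule (an 80% floor, then beat the currently staged model) can be captured in a plain function. This is a hedged sketch of the decision logic only, with the ZenML client lookup replaced by an optional `prod_accuracy` argument:

```python
from typing import Optional


def should_promote(
    accuracy: float,
    prod_accuracy: Optional[float] = None,
    floor: float = 0.8,
) -> bool:
    """Decide whether a new model version should be promoted.

    A candidate below the accuracy floor is never promoted. Otherwise it is
    promoted when no model holds the stage yet (prod_accuracy is None), or
    when it strictly beats the incumbent's accuracy.
    """
    if accuracy < floor:
        return False
    if prod_accuracy is None:
        return True
    return accuracy > prod_accuracy
```

Separating the decision from the side effects (fetching the staged version, calling `set_stage`) makes the rule easy to unit test.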
cloned_public_repos/zenml/examples/mlops_starter/steps/inference_preprocessor.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2023. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import pandas as pd
from sklearn.pipeline import Pipeline
from typing_extensions import Annotated

from zenml import step


@step
def inference_preprocessor(
    dataset_inf: pd.DataFrame,
    preprocess_pipeline: Pipeline,
    target: str,
) -> Annotated[pd.DataFrame, "inference_dataset"]:
    """Data preprocessor step.

    This is an example of a data processor step that prepares the data so
    that it is suitable for model inference. It takes in a dataset as an
    input step artifact and performs any necessary preprocessing steps based
    on a pretrained preprocessing pipeline.

    Args:
        dataset_inf: The inference dataset.
        preprocess_pipeline: Pretrained `Pipeline` to process dataset.
        target: Name of target columns in dataset.

    Returns:
        The processed dataframe: dataset_inf.
    """
    # artificially adding `target` column to avoid Pipeline issues
    dataset_inf[target] = pd.Series([1] * dataset_inf.shape[0])
    dataset_inf = preprocess_pipeline.transform(dataset_inf)
    dataset_inf.drop(columns=[target], inplace=True)
    return dataset_inf
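The add-dummy-target/transform/drop trick used by this step can be demonstrated with pandas alone. Here the fitted sklearn pipeline is stood in for by a simple column-checking function (an assumption for illustration; the column names are made up):

```python
import pandas as pd


def transform_like_fitted_pipeline(df: pd.DataFrame) -> pd.DataFrame:
    """Stand-in for a fitted sklearn Pipeline that expects every training
    column, including the target, to be present at transform time."""
    expected = {"feature_a", "feature_b", "target"}
    missing = expected - set(df.columns)
    if missing:
        raise KeyError(f"missing columns: {sorted(missing)}")
    return df.copy()


inference_df = pd.DataFrame({"feature_a": [1.0, 2.0], "feature_b": [3.0, 4.0]})

# Temporarily add a dummy target so the pipeline's column checks pass ...
inference_df["target"] = pd.Series([1] * inference_df.shape[0])
processed = transform_like_fitted_pipeline(inference_df)
# ... then drop it again before prediction.
processed.drop(columns=["target"], inplace=True)
```

Without the dummy column, a pipeline fitted on training data (which included the target) would raise on the missing column at inference time.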
cloned_public_repos/zenml/examples/mlops_starter/steps/data_loader.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import pandas as pd
from sklearn.datasets import load_breast_cancer
from typing_extensions import Annotated

from zenml import step
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def data_loader(
    random_state: int, is_inference: bool = False, target: str = "target"
) -> Annotated[pd.DataFrame, "dataset"]:
    """Dataset reader step.

    This is an example of a dataset reader step that loads the Breast Cancer
    dataset.

    This step is parameterized, which allows you to configure the step
    independently of the step code, before running it in a pipeline. In this
    example, the step can be configured to return either the training split
    or the inference subset, and to drop the target column or not. See the
    documentation for more information:

        https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters

    Args:
        random_state: Random state for sampling.
        is_inference: If `True`, the inference subset will be returned and
            the target column will be removed from the dataset.
        target: Name of target column in dataset.

    Returns:
        The dataset artifact as Pandas DataFrame.
    """
    dataset = load_breast_cancer(as_frame=True)
    inference_size = int(len(dataset.target) * 0.05)
    dataset: pd.DataFrame = dataset.frame
    inference_subset = dataset.sample(
        inference_size, random_state=random_state
    )
    if is_inference:
        dataset = inference_subset
        dataset.drop(columns=target, inplace=True)
    else:
        dataset.drop(inference_subset.index, inplace=True)
    dataset.reset_index(drop=True, inplace=True)
    logger.info(f"Dataset with {len(dataset)} records loaded!")
    return dataset
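The 5% holdout drawn by the loader is reproducible because the same `random_state` yields the same sample on both the training and inference runs. A hedged, sklearn-free sketch of that split (function name and toy data are illustrative):

```python
import pandas as pd


def split_for_inference(
    df: pd.DataFrame, random_state: int, frac: float = 0.05
):
    """Return (training_df, inference_df) using a deterministic sample."""
    inference_df = df.sample(int(len(df) * frac), random_state=random_state)
    # Everything not sampled becomes the training split.
    training_df = df.drop(inference_df.index).reset_index(drop=True)
    return training_df, inference_df.reset_index(drop=True)


data = pd.DataFrame({"x": range(100), "target": [0, 1] * 50})
train_a, infer_a = split_for_inference(data, random_state=42)
train_b, infer_b = split_for_inference(data, random_state=42)
```

Calling the function twice with the same seed produces identical splits, which is what lets two separate pipeline runs agree on which rows are held out.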
cloned_public_repos/zenml/examples/mlops_starter/steps/__init__.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .data_loader import (
    data_loader,
)
from .data_preprocessor import (
    data_preprocessor,
)
from .data_splitter import (
    data_splitter,
)
from .inference_predict import (
    inference_predict,
)
from .inference_preprocessor import (
    inference_preprocessor,
)
from .model_evaluator import (
    model_evaluator,
)
from .model_promoter import (
    model_promoter,
)
from .model_trainer import (
    model_trainer,
)
cloned_public_repos/zenml/examples/mlops_starter/steps/inference_predict.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2023. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Any

import pandas as pd
from typing_extensions import Annotated

from zenml import step
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def inference_predict(
    model: Any,
    dataset_inf: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    """Predictions step.

    This is an example of a predictions step that takes the data and model in
    and returns predicted values.

    This step is parameterized, which allows you to configure the step
    independently of the step code, before running it in a pipeline. In this
    example, the step can be configured to use different input data. See the
    documentation for more information:

        https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters

    Args:
        model: Trained model.
        dataset_inf: The inference dataset.

    Returns:
        The predictions as pandas series.
    """
    # run prediction from memory
    predictions = model.predict(dataset_inf)
    predictions = pd.Series(predictions, name="predicted")
    return predictions
cloned_public_repos/zenml/examples/mlops_starter/utils/preprocess.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Union

import pandas as pd


class NADropper:
    """Support class to drop NA values in sklearn Pipeline."""

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X: Union[pd.DataFrame, pd.Series]):
        return X.dropna()


class ColumnsDropper:
    """Support class to drop specific columns in sklearn Pipeline."""

    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X: Union[pd.DataFrame, pd.Series]):
        return X.drop(columns=self.columns)


class DataFrameCaster:
    """Support class to cast type back to pd.DataFrame in sklearn Pipeline."""

    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return pd.DataFrame(X, columns=self.columns)
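These helpers follow sklearn's fit/transform protocol, so they compose like pipeline stages. A minimal pandas-only sketch of chaining them (the classes are copied inline so the example stands alone; no sklearn `Pipeline` is required since the transformers are duck-typed):

```python
import pandas as pd


class NADropper:
    """Drop rows containing NA values (mirrors the helper above)."""

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return X.dropna()


class ColumnsDropper:
    """Drop the given columns (mirrors the helper above)."""

    def __init__(self, columns):
        self.columns = columns

    def fit(self, *args, **kwargs):
        return self

    def transform(self, X):
        return X.drop(columns=self.columns)


df = pd.DataFrame({"a": [1.0, None, 3.0], "b": [4, 5, 6], "c": [7, 8, 9]})

# Apply each stage in order, just as sklearn's Pipeline would.
stages = [NADropper(), ColumnsDropper(["c"])]
out = df
for stage in stages:
    out = stage.fit(out).transform(out)
```

Dropping into `sklearn.pipeline.Pipeline([("na", NADropper()), ...])` works the same way, since Pipeline only requires `fit` and `transform`.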
cloned_public_repos/zenml/examples/mlops_starter/utils/__init__.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
cloned_public_repos/zenml/examples/e2e_nlp/.copier-answers.yml
# Changes here will be overwritten by Copier
_commit: 2025.01.08
_src_path: gh:zenml-io/template-nlp
accelerator: cpu
cloud_of_choice: aws
dataset: airline_reviews
deploy_locally: true
deploy_to_huggingface: true
deploy_to_skypilot: true
email: info@zenml.io
full_name: ZenML GmbH
metric_compare_promotion: true
model: distilbert-base-uncased
notify_on_failures: true
notify_on_successes: false
open_source_license: apache
product_name: nlp_use_case
project_name: ZenML NLP project
sample_rate: false
target_environment: staging
version: 0.0.1
zenml_server_url: ''
cloned_public_repos/zenml/examples/e2e_nlp/requirements.txt
torchvision
gradio
zenml[server]>=0.56.3
datasets>=2.12.0,<3.0.0
scikit-learn<1.6.0
cloned_public_repos/zenml/examples/e2e_nlp/Makefile
stack_name ?= nlp_template_stack

setup:
	pip install -r requirements.txt
	zenml integration install pytorch mlflow aws s3 slack huggingface -y
	zenml init

install-local-stack:
	@echo "Specify stack name [$(stack_name)]: " && read input && [ -n "$$input" ] && stack_name="$$input" || stack_name="$(stack_name)" && \
	zenml experiment-tracker register -f mlflow mlflow_local_$${stack_name} && \
	zenml model-registry register -f mlflow mlflow_local_$${stack_name} && \
	zenml stack register -a default -o default -r mlflow_local_$${stack_name} \
		-e mlflow_local_$${stack_name} $${stack_name} && \
	zenml stack set $${stack_name} && \
	zenml login --local
cloned_public_repos/zenml/examples/e2e_nlp/run.py
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
from datetime import datetime as dt

import click
from pipelines import (
    nlp_use_case_deploy_pipeline,
    nlp_use_case_promote_pipeline,
    nlp_use_case_training_pipeline,
)

from zenml.enums import ModelStages
from zenml.logger import get_logger
from zenml.model.model import Model

logger = get_logger(__name__)


@click.command(
    help="""
ZenML NLP project CLI v0.0.1.

Run the ZenML NLP project model training pipeline with various options.

Examples:

  \b
  # Run the pipeline with default options
  python run.py

  \b
  # Run the pipeline without cache
  python run.py --no-cache

  \b
  # Run the pipeline with custom training hyperparameters
  python run.py --num-epochs 3 --train-batch-size 8 --eval-batch-size 8

  \b
  # Run the pipeline with a Quality Gate for accuracy set at 90% for the train
  # set and 85% for the test set. If either accuracy is lower, the pipeline
  # will fail.
  python run.py --min-train-accuracy 0.9 --min-test-accuracy 0.85 --fail-on-accuracy-quality-gates
"""
)
@click.option(
    "--no-cache",
    is_flag=True,
    default=False,
    help="Disable caching for the pipeline run.",
)
@click.option(
    "--num-epochs",
    default=1,
    type=click.INT,
    help="Number of epochs to train the model for.",
)
@click.option(
    "--train-batch-size",
    default=8,
    type=click.INT,
    help="Batch size for training the model.",
)
@click.option(
    "--eval-batch-size",
    default=8,
    type=click.INT,
    help="Batch size for evaluating the model.",
)
@click.option(
    "--learning-rate",
    default=2e-5,
    type=click.FLOAT,
    help="Learning rate for training the model.",
)
@click.option(
    "--weight-decay",
    default=0.01,
    type=click.FLOAT,
    help="Weight decay for training the model.",
)
@click.option(
    "--training-pipeline",
    is_flag=True,
    default=True,
    help="Whether to run the pipeline that trains the model.",
)
@click.option(
    "--promoting-pipeline",
    is_flag=True,
    default=True,
    help="Whether to run the pipeline that promotes the model to staging.",
)
@click.option(
    "--deploying-pipeline",
    is_flag=True,
    default=False,
    help="Whether to run the pipeline that deploys the model to the selected deployment platform.",
)
@click.option(
    "--deployment-app-title",
    default="Sentiment Analyzer",
    type=click.STRING,
    help="Title of the Gradio interface.",
)
@click.option(
    "--deployment-app-description",
    default="Sentiment Analyzer",
    type=click.STRING,
    help="Description of the Gradio interface.",
)
@click.option(
    "--deployment-app-interpretation",
    default="default",
    type=click.STRING,
    help="Interpretation mode for the Gradio interface.",
)
@click.option(
    "--deployment-app-example",
    default="",
    type=click.STRING,
    help="Comma-separated list of examples to show in the Gradio interface.",
)
@click.option(
    "--zenml-model-name",
    default="sentiment_analysis",
    type=click.STRING,
    help="Name of the ZenML Model.",
)
def main(
    no_cache: bool = False,
    num_epochs: int = 1,
    train_batch_size: int = 8,
    eval_batch_size: int = 8,
    learning_rate: float = 2e-5,
    weight_decay: float = 0.01,
    training_pipeline: bool = True,
    promoting_pipeline: bool = True,
    deploying_pipeline: bool = False,
    deployment_app_title: str = "Sentiment Analyzer",
    deployment_app_description: str = "Sentiment Analyzer",
    deployment_app_interpretation: str = "default",
    deployment_app_example: str = "",
    zenml_model_name: str = "sentiment_analysis",
):
    """Main entry point for the pipeline execution.

    This entrypoint is where everything comes together:

    * configuring pipeline with the required parameters
      (some of which may come from command line arguments)
    * launching the pipeline

    Args:
        no_cache: If `True` cache will be disabled.
    """
    # Run a pipeline with the required parameters. This executes
    # all steps in the pipeline in the correct order using the orchestrator
    # stack component that is configured in your active ZenML stack.
    pipeline_args = {
        "config_path": os.path.join(
            os.path.dirname(os.path.realpath(__file__)),
            "config.yaml",
        )
    }
    if no_cache:
        pipeline_args["enable_cache"] = False

    if training_pipeline:
        # Execute Training Pipeline
        run_args_train = {
            "num_epochs": num_epochs,
            "train_batch_size": train_batch_size,
            "eval_batch_size": eval_batch_size,
            "learning_rate": learning_rate,
            "weight_decay": weight_decay,
        }

        model = Model(
            name=zenml_model_name,
            license="apache",
            description="Show case Model Control Plane.",
            delete_new_version_on_failure=True,
            tags=["sentiment_analysis", "huggingface"],
        )

        pipeline_args["model"] = model

        pipeline_args["run_name"] = (
            f"nlp_use_case_run_{dt.now().strftime('%Y_%m_%d_%H_%M_%S')}"
        )
        nlp_use_case_training_pipeline.with_options(**pipeline_args)(
            **run_args_train
        )
        logger.info("Training pipeline finished successfully!")

    # Execute Promoting Pipeline
    if promoting_pipeline:
        run_args_promoting = {}
        model = Model(name=zenml_model_name, version=ModelStages.LATEST)
        pipeline_args["model"] = model
        pipeline_args["run_name"] = (
            f"nlp_use_case_promoting_pipeline_run_{dt.now().strftime('%Y_%m_%d_%H_%M_%S')}"
        )
        nlp_use_case_promote_pipeline.with_options(**pipeline_args)(
            **run_args_promoting
        )
        logger.info("Promoting pipeline finished successfully!")

    if deploying_pipeline:
        pipeline_args["enable_cache"] = False
        # Deploying pipeline has new ZenML model config
        model = Model(
            name=zenml_model_name,
            version=ModelStages("staging"),
        )
        pipeline_args["model"] = model
        run_args_deploying = {
            "title": deployment_app_title,
            "description": deployment_app_description,
            "interpretation": deployment_app_interpretation,
            "example": deployment_app_example,
        }
        pipeline_args["run_name"] = (
            f"nlp_use_case_deploy_pipeline_run_{dt.now().strftime('%Y_%m_%d_%H_%M_%S')}"
        )
        nlp_use_case_deploy_pipeline.with_options(**pipeline_args)(
            **run_args_deploying
        )
        logger.info("Deploying pipeline finished successfully!")


if __name__ == "__main__":
    main()
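The entrypoint builds one `pipeline_args` dict and reuses it across the three pipelines, swapping only the model and run name per pipeline. That assembly can be sketched without ZenML or click (function and key names here are illustrative):

```python
from datetime import datetime


def build_pipeline_args(
    config_path: str, no_cache: bool, prefix: str, now: datetime
) -> dict:
    """Assemble per-run pipeline options the way the entrypoint does."""
    args = {"config_path": config_path}
    if no_cache:
        # Caching is opt-out: the key is only set when disabling it.
        args["enable_cache"] = False
    # Timestamped run names keep repeated runs distinguishable in the UI.
    args["run_name"] = f"{prefix}_run_{now.strftime('%Y_%m_%d_%H_%M_%S')}"
    return args
```

Passing the timestamp in (rather than calling `datetime.now()` inside) keeps the helper deterministic and testable.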
cloned_public_repos/zenml/examples/e2e_nlp/LICENSE
Apache Software License 2.0

Copyright (c) ZenML GmbH 2025. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
cloned_public_repos/zenml/examples/e2e_nlp/README.md
# ZenML NLP project

This is a comprehensive supervised ML project built with the ZenML framework
and its integrations. The project is a comprehensive starting point for anyone
looking to build and deploy NLP models using the ZenML framework by
streamlining the process of training, promoting, and deploying NLP models with
a focus on reproducibility, scalability, and ease of use.

The project was generated from the [NLP ZenML project template](https://github.com/zenml-io/template-nlp)
with the following properties:
- Project name: ZenML NLP project
- Technical Name: nlp_use_case
- Version: `0.0.1`
- Licensed with apache to ZenML GmbH <info@zenml.io>
- Deployment environment: `staging`

Settings of your project are:
- Accelerator: `cpu`
- Trained model promotion to `staging` based on accuracy metric vs currently deployed model
- Local deployment enabled
- Deployment to HuggingFace Hub enabled
- Deployment to SkyPilot enabled
- Dataset: `airline_reviews`
- Model: `distilbert-base-uncased`
- Notifications about failures enabled

## 👋 Introduction

Welcome to your newly generated "ZenML NLP project" project! This is
a great way to get hands-on with ZenML using a production-like template.
The project contains a collection of standard and custom ZenML steps,
pipelines and other artifacts and useful resources that can serve as a
solid starting point for your smooth journey with ZenML.

What to do first? You can start by giving the project a quick run. The
project is ready to be used and can run as-is without any further code
changes! You can try it right away by installing ZenML, the needed
ZenML integrations and then calling the CLI included in the project.

We also recommend that you start the ZenML UI locally to get a better sense
of what is going on under the hood:

```bash
# Set up a Python virtual environment, if you haven't already
python3 -m venv .venv
source .venv/bin/activate

# Install requirements & integrations
make setup

# Optionally, provision default local stack
make install-local-stack

# Start the ZenML UI locally (recommended, but optional)
zenml login --local

# Run the pipeline included in the project
python run.py
```

When the pipelines are done running, you can check out the results in the
ZenML UI by following the link printed in the terminal (or you can go straight
to the [ZenML UI pipelines run page](http://127.0.0.1:8237/workspaces/default/all-runs?page=1)).

Next, you should:

* look at the CLI help to see what you can do with the project:
```bash
python run.py --help
```
* go back and [try out different parameters](https://github.com/zenml-io/template-nlp#-template-parameters)
for your generated project. For example, you could disable hyperparameters
tuning and use your favorite model architecture or promote every trained
model, if you haven't already!
* take a look at [the project structure](#📜-project-structure) and the code
itself. The code is heavily commented and should be easy to follow.
* read the [ZenML documentation](https://docs.zenml.io) to learn more about
various ZenML concepts referenced in the code and to get a better sense of
what you can do with ZenML.
* start building your own ZenML project by modifying this code

## 📦 What's in the box?

The ZenML NLP project demonstrates how the most important steps of the ML
Production Lifecycle can be implemented in a reusable way while remaining
agnostic to the underlying infrastructure, for a Natural Language Processing
(NLP) task.

This template uses one of these datasets:
* [IMDB Movie Reviews](https://huggingface.co/datasets/imdb)
* [Financial News](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment)
* [Airlines Reviews](https://huggingface.co/datasets/Shayanvsf/US_Airline_Sentiment)

and one of these models:
* [DistilBERT](https://huggingface.co/distilbert-base-uncased)
* [RoBERTa](https://huggingface.co/roberta-base)
* [BERT](https://huggingface.co/bert-base-uncased)

It consists of three pipelines with the following high-level setup:
<p align="center">
  <img height=500 src=".assets/00_pipelines_composition.png">
</p>

All pipelines leverage the Model Control Plane to bring all parts together:
the training pipeline creates and promotes a new Model Control Plane version
with a trained model object in it, the deployment pipeline uses the inference
Model Control Plane version (the one promoted during training) to create a
deployment service, and the inference pipeline uses the deployment service
from the inference Model Control Plane version and stores back a new set of
predictions as a versioned data artifact for future use. This makes these
pipelines closely connected while ensuring that only quality-assured Model
Control Plane versions are used to produce predictions delivered to
stakeholders.

* [CT] Training
  * Load the training dataset from HuggingFace Datasets
  * Load Tokenizer from HuggingFace Models based on the model name
  * Tokenize the training dataset and store the tokenizer as an artifact
  * Train and evaluate a model object using the training dataset and store it as an artifact
  * Register the model object as a new inference Model Control Plane version
* [CD] Promotion
  * Evaluate the latest Model Control Plane version using the evaluation metric
  * Compare the evaluation metric of the latest Model Control Plane version with the evaluation metric of the currently promoted Model Control Plane version
  * If the evaluation metric of the latest Model Control Plane version is better than the evaluation metric of the currently promoted Model Control Plane version, promote the latest Model Control Plane version to the specified stage
  * If the evaluation metric of the latest Model Control Plane version is worse than the evaluation metric of the currently promoted Model Control Plane version, do not promote the latest Model Control Plane version
* [CD] Deployment
  * Load the inference Model Control Plane version
  * Save the Model locally (for that this pipeline needs to be run on the local machine)
  * Deploy the Model to the specified environment
    * If the specified environment is HuggingFace Hub, upload the Model to the HuggingFace Hub
    * If the specified environment is SkyPilot, deploy the Model to SkyPilot
    * If the specified environment is local, do not deploy the Model

In [the repository documentation](https://github.com/zenml-io/template-nlp#-how-this-template-is-implemented),
you can find more details about every step of this template.

The project code is meant to be used as a template for your projects. For
this reason, you will find several places in the code specifically marked
to indicate where you can add your code:

```python
### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
...
### YOUR CODE ENDS HERE ###
```

## 📜 Project Structure

The project loosely follows [the recommended ZenML project structure](https://docs.zenml.io/how-to/setting-up-a-project-repository/best-practices):

```
.
├── gradio                   # Gradio app for inference
│   ├── __init__.py          # Gradio app initialization
│   ├── app.py               # Gradio app entrypoint
│   ├── Dockerfile           # Gradio app Dockerfile
│   ├── requirements.txt     # Gradio app Python dependencies
│   └── serve.yaml           # Gradio app SkyPilot deployment configuration
├── pipelines                # `zenml.pipeline` implementations
│   ├── __init__.py
│   ├── deployment.py        # deployment pipeline
│   ├── promotion.py         # promotion pipeline
│   └── training.py          # training pipeline
├── steps                    # `zenml.steps` implementations
│   ├── __init__.py
│   ├── alerts               # `zenml.steps.alerts` implementations
│   │   ├── __init__.py
│   │   └── notify_on.py     # notify step
│   ├── dataset_loader       # `zenml.steps.dataset_loader` implementations
│   │   ├── __init__.py
│   │   └── data_loader.py   # data loader step
│   ├── deploying            # `zenml.steps.deploying` implementations
│   │   ├── __init__.py
│   │   ├── save_model.py    # save model step
│   │   ├── deploy_locally.py         # deploy locally step
│   │   ├── deploy_to_huggingface.py  # deploy to HuggingFace Hub step
│   │   └── deploy_to_skypilot.py     # deploy to SkyPilot step
│   ├── promotion            # `zenml.steps.promotion` implementations
│   │   ├── __init__.py
│   │   ├── promote_latest.py                   # promote latest step
│   │   ├── promote_metric_compare_promoter.py  # metric compare promoter step
│   │   └── promote_get_metrics.py              # get metric step
│   ├── register             # `zenml.steps.register` implementations
│   │   ├── __init__.py
│   │   └── model_log_register.py  # model log register step
│   ├── tokenization         # `zenml.steps.tokenization` implementations
│   │   ├── __init__.py
│   │   └── tokenization.py  # tokenization step
│   ├── tokenizer_loader     # `zenml.steps.tokenizer_loader` implementations
│   │   ├── __init__.py
│   │   └── tokenizer_loader.py  # tokenizer loader step
│   └── training             # `zenml.steps.training` implementations
│       ├── __init__.py
│       └── trainer.py       # train step
├── utils                    # `zenml.utils` implementations
│   └── misc.py              # miscellaneous utilities
├── README.md                # this file
├── requirements.txt         # extra Python dependencies
├── config.yaml              # ZenML configuration file
└── run.py                   # CLI tool to run pipelines on ZenML Stack
```
cloned_public_repos/zenml/examples/e2e_nlp/config.yaml
```yaml
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

settings:
  docker:
    python_package_installer: uv
    install_stack_requirements: False
    required_integrations:
      - aws
      - skypilot_aws
      - s3
      - huggingface
      - pytorch
      - mlflow
      - discord
    requirements:
      - zenml[server]

extra:
  mlflow_model_name: sentiment_analysis
  target_env: staging
  notify_on_success: False
  notify_on_failure: True
```
cloned_public_repos/zenml/examples/e2e_nlp/.dockerignore
```
.venv*
.requirements*
```
cloned_public_repos/zenml/examples/e2e_nlp/pipelines/deploying.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import List, Optional

from steps import (
    deploy_locally,
    deploy_to_huggingface,
    deploy_to_skypilot,
    notify_on_failure,
    notify_on_success,
    save_model_to_deploy,
)

from zenml import pipeline
from zenml.client import Client
from zenml.logger import get_logger

logger = get_logger(__name__)

# Get the orchestrator from the active stack
orchestrator = Client().active_stack.orchestrator

# Check that the orchestrator flavor is supported
if orchestrator.flavor not in ["local", "vm_aws", "vm_gcp", "vm_azure"]:
    raise RuntimeError(
        "Your active stack needs to contain a local orchestrator or a VM "
        "orchestrator to run this pipeline. However, we recommend using "
        "the local orchestrator for this pipeline."
    )


@pipeline(
    on_failure=notify_on_failure,
)
def nlp_use_case_deploy_pipeline(
    labels: Optional[List[str]] = ["Negative", "Positive"],
    title: Optional[str] = None,
    description: Optional[str] = None,
    model_name_or_path: Optional[str] = "model",
    tokenizer_name_or_path: Optional[str] = "tokenizer",
    interpretation: Optional[str] = None,
    example: Optional[str] = None,
    repo_name: Optional[str] = "nlp_use_case",
):
    """Model deployment pipeline.

    This pipeline deploys the latest model in the MLflow registry that
    matches the given stage to one of the supported deployment targets.

    Args:
        labels: List of labels for the model.
        title: Title for the model.
        description: Description for the model.
        model_name_or_path: Name or path of the model.
        tokenizer_name_or_path: Name or path of the tokenizer.
        interpretation: Interpretation for the model.
        example: Example for the model.
        repo_name: Name of the repository to deploy to HuggingFace Hub.
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.
    ########## Save Model locally ##########
    save_model_to_deploy()

    ########## Deploy Locally ##########
    deploy_locally(
        labels=labels,
        title=title,
        description=description,
        interpretation=interpretation,
        example=example,
        model_name_or_path=model_name_or_path,
        tokenizer_name_or_path=tokenizer_name_or_path,
        after=["save_model_to_deploy"],
    )

    ########## Deploy to HuggingFace ##########
    deploy_to_huggingface(
        repo_name=repo_name,
        after=["save_model_to_deploy"],
    )

    ########## Deploy to Skypilot ##########
    deploy_to_skypilot(
        after=["save_model_to_deploy"],
    )

    last_step_name = "deploy_to_skypilot"

    notify_on_success(after=[last_step_name])
    ### YOUR CODE ENDS HERE ###
```
cloned_public_repos/zenml/examples/e2e_nlp/pipelines/__init__.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .training import nlp_use_case_training_pipeline
from .promoting import nlp_use_case_promote_pipeline
from .deploying import nlp_use_case_deploy_pipeline
```
cloned_public_repos/zenml/examples/e2e_nlp/pipelines/training.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Optional

from steps import (
    data_loader,
    model_trainer,
    notify_on_failure,
    notify_on_success,
    register_model,
    tokenization_step,
    tokenizer_loader,
)

from zenml import pipeline
from zenml.logger import get_logger

logger = get_logger(__name__)


@pipeline(on_failure=notify_on_failure)
def nlp_use_case_training_pipeline(
    lower_case: Optional[bool] = True,
    padding: Optional[str] = "max_length",
    max_seq_length: Optional[int] = 128,
    text_column: Optional[str] = "text",
    label_column: Optional[str] = "label",
    train_batch_size: Optional[int] = 8,
    eval_batch_size: Optional[int] = 8,
    num_epochs: Optional[int] = 5,
    learning_rate: Optional[float] = 2e-5,
    weight_decay: Optional[float] = 0.01,
):
    """Model training pipeline.

    This pipeline loads the dataset and tokenizer, tokenizes the dataset,
    trains a model, and registers the model to the model registry.

    Args:
        lower_case: Whether to convert all text to lower case.
        padding: Padding strategy.
        max_seq_length: Maximum sequence length.
        text_column: Name of the text column.
        label_column: Name of the label column.
        train_batch_size: Training batch size.
        eval_batch_size: Evaluation batch size.
        num_epochs: Number of epochs.
        learning_rate: Learning rate.
        weight_decay: Weight decay.
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.
    ########## Load Dataset stage ##########
    dataset = data_loader()

    ########## Tokenization stage ##########
    tokenizer = tokenizer_loader(lower_case=lower_case)
    tokenized_data = tokenization_step(
        dataset=dataset,
        tokenizer=tokenizer,
        padding=padding,
        max_seq_length=max_seq_length,
        text_column=text_column,
        label_column=label_column,
    )

    ########## Training stage ##########
    model, tokenizer = model_trainer(
        tokenized_dataset=tokenized_data,
        tokenizer=tokenizer,
        train_batch_size=train_batch_size,
        eval_batch_size=eval_batch_size,
        num_epochs=num_epochs,
        learning_rate=learning_rate,
        weight_decay=weight_decay,
    )

    ########## Log and Register stage ##########
    register_model(
        model=model,
        tokenizer=tokenizer,
        mlflow_model_name="sentiment_analysis",
    )

    notify_on_success(after=["register_model"])
    ### YOUR CODE ENDS HERE ###
```
cloned_public_repos/zenml/examples/e2e_nlp/pipelines/promoting.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from steps import (
    notify_on_failure,
    notify_on_success,
    promote_get_metrics,
    promote_metric_compare_promoter,
)

from zenml import pipeline
from zenml.logger import get_logger

logger = get_logger(__name__)


@pipeline(
    on_failure=notify_on_failure,
)
def nlp_use_case_promote_pipeline():
    """Model promotion pipeline.

    This pipeline promotes the best model to the chosen stage
    (e.g. production or staging), based on a metric comparison between
    the latest and the currently promoted model version, or simply
    promotes the latest model version.
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
    # Link all the steps together by calling them and passing the output
    # of one step as the input of the next step.
    ########## Promotion stage ##########
    latest_metrics, current_metrics = promote_get_metrics()

    promote_metric_compare_promoter(
        latest_metrics=latest_metrics,
        current_metrics=current_metrics,
    )
    last_step_name = "promote_metric_compare_promoter"

    notify_on_success(after=[last_step_name])
    ### YOUR CODE ENDS HERE ###
```
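The comparison performed by `promote_metric_compare_promoter` boils down to a simple decision rule. A minimal stand-alone sketch of that logic follows; the function name and the `"accuracy"` metric key are assumptions for illustration - the real step lives in `steps/promotion/` and is not reproduced here:

```python
def should_promote(latest_metrics: dict, current_metrics: dict,
                   metric: str = "accuracy") -> bool:
    # Promote the latest version when it matches or beats the currently
    # promoted one on the chosen metric, or when nothing is promoted yet.
    if not current_metrics:
        return True
    return latest_metrics.get(metric, 0.0) >= current_metrics.get(metric, 0.0)

if __name__ == "__main__":
    # Latest model beats the promoted one, so it should be promoted.
    print(should_promote({"accuracy": 0.91}, {"accuracy": 0.88}))
```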
cloned_public_repos/zenml/examples/e2e_nlp/gradio/Dockerfile
```dockerfile
# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
# you will also find guides on how best to write your Dockerfile

FROM python:3.9

WORKDIR /code

COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user

# Switch to the "user" user
USER user

# Set home to the user's home directory
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user's home directory
WORKDIR $HOME/app

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

# app.py exposes a click CLI that does not define --server.* options, so
# configure Gradio's bind address and port via its environment variables
# instead of CLI flags.
ENV GRADIO_SERVER_NAME="0.0.0.0" \
    GRADIO_SERVER_PORT=7860

CMD ["python", "app.py"]
```
cloned_public_repos/zenml/examples/e2e_nlp/gradio/requirements.txt
```
nltk
torch
torchvision
torchaudio
gradio
datasets==2.12.0
numpy==1.22.4
pandas==1.5.3
session_info==1.0.0
scikit-learn==1.5.0
transformers==4.28.1
IPython==8.10.0
```
cloned_public_repos/zenml/examples/e2e_nlp/gradio/app.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2023. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os.path import dirname
from typing import Optional

import click
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer

import gradio as gr


@click.command()
@click.option(
    "--tokenizer_name_or_path",
    default="tokenizer",
    help="Name or the path of the tokenizer.",
)
@click.option(
    "--model_name_or_path",
    default="model",
    help="Name or the path of the model.",
)
@click.option(
    "--labels",
    default="Negative,Positive",
    help="Comma-separated list of labels.",
)
@click.option(
    "--title",
    default="ZenML NLP Use-Case",
    help="Title of the Gradio interface.",
)
@click.option(
    "--description",
    default="Text Classification - Sentiment Analysis - ZenML - Gradio",
    help="Description of the Gradio interface.",
)
@click.option(
    "--interpretation",
    default="default",
    help="Interpretation mode for the Gradio interface.",
)
@click.option(
    "--examples",
    default="This is an awesome journey, I love it!",
    help="An example to show in the Gradio interface.",
)
def sentiment_analysis(
    tokenizer_name_or_path: Optional[str],
    model_name_or_path: Optional[str],
    labels: Optional[str],
    title: Optional[str],
    description: Optional[str],
    interpretation: Optional[str],
    examples: Optional[str],
):
    """Launches a Gradio interface for sentiment analysis.

    This function launches a Gradio interface for text classification.
    It loads a model and a tokenizer from the provided paths and uses
    them to predict the sentiment of the input text.

    Args:
        tokenizer_name_or_path (str): Name or the path of the tokenizer.
        model_name_or_path (str): Name or the path of the model.
        labels (str): Comma-separated list of labels.
        title (str): Title of the Gradio interface.
        description (str): Description of the Gradio interface.
        interpretation (str): Interpretation mode for the Gradio interface.
        examples (str): An example to show in the Gradio interface.
    """
    labels = labels.split(",")
    examples = [examples]

    def preprocess(text: str) -> str:
        """Preprocesses the text.

        Args:
            text (str): Input text.

        Returns:
            str: Preprocessed text.
        """
        new_text = []
        for t in text.split(" "):
            t = "@user" if t.startswith("@") and len(t) > 1 else t
            t = "http" if t.startswith("http") else t
            new_text.append(t)
        return " ".join(new_text)

    def softmax(x):
        e_x = np.exp(x - np.max(x))
        return e_x / e_x.sum(axis=0)

    def analyze_text(text):
        model_path = f"{dirname(__file__)}/{model_name_or_path}/"
        print(f"Loading model from {model_path}")
        tokenizer_path = f"{dirname(__file__)}/{tokenizer_name_or_path}/"
        print(f"Loading tokenizer from {tokenizer_path}")
        tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
        model = AutoModelForSequenceClassification.from_pretrained(model_path)

        text = preprocess(text)
        encoded_input = tokenizer(text, return_tensors="pt")
        output = model(**encoded_input)
        scores_ = output[0][0].detach().numpy()
        scores_ = softmax(scores_)

        scores = {
            label: float(score) for (label, score) in zip(labels, scores_)
        }
        return scores

    demo = gr.Interface(
        fn=analyze_text,
        inputs=[
            gr.TextArea("Write your text or tweet here", label="Analyze Text")
        ],
        outputs=["label"],
        title=title,
        description=description,
        interpretation=interpretation,
        examples=examples,
    )

    demo.launch(share=True, debug=True)


if __name__ == "__main__":
    sentiment_analysis()
```
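The numeric core of `analyze_text` is easy to check in isolation. The following standalone sketch mirrors the `preprocess` and `softmax` helpers above, using pure-Python `math` instead of NumPy so it runs without any dependencies (the replacement is behavior-equivalent for 1-D inputs):

```python
import math

def preprocess(text: str) -> str:
    # Replace user mentions and URLs with placeholder tokens,
    # exactly as the Gradio app does before tokenization.
    new_text = []
    for t in text.split(" "):
        t = "@user" if t.startswith("@") and len(t) > 1 else t
        t = "http" if t.startswith("http") else t
        new_text.append(t)
    return " ".join(new_text)

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

if __name__ == "__main__":
    print(preprocess("@alice check https://example.com now"))
    # prints "@user check http now"
    print(softmax([1.0, 3.0]))
```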
cloned_public_repos/zenml/examples/e2e_nlp/gradio/serve.yaml
```yaml
# Task name (optional), used for display purposes.
name: ZenML NLP project

resources:
  cloud: aws  # The cloud to use (optional).
  # The region to use (optional). Auto-failover will be disabled
  # if this is specified.
  region: us-east-1
  # The instance type to use (optional).
  instance_type: t3.large

# Working directory (optional), synced to ~/sky_workdir on the remote cluster
# each time launch or exec is run with the yaml file.
#
# Commands in "setup" and "run" will be executed under it.
#
# If a .gitignore file (or a .git/info/exclude file) exists in the working
# directory, files and directories listed in it will be excluded from syncing.
workdir: ./gradio

setup: |
  echo "Begin setup."
  pip install -r requirements.txt
  echo "Setup complete."

run: |
  echo 'Starting gradio app...'
  python app.py
```
cloned_public_repos/zenml/examples/e2e_nlp/gradio/__init__.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```
cloned_public_repos/zenml/examples/e2e_nlp/steps/__init__.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .alerts import notify_on_failure, notify_on_success
from .dataset_loader import (
    data_loader,
)
from .promotion import (
    promote_get_metrics,
    promote_metric_compare_promoter,
)
from .register import register_model
from .tokenizer_loader import (
    tokenizer_loader,
)
from .tokenzation import (
    tokenization_step,
)
from .training import model_trainer
from .deploying import (
    save_model_to_deploy,
    deploy_locally,
    deploy_to_huggingface,
    deploy_to_skypilot,
)
```
cloned_public_repos/zenml/examples/e2e_nlp/steps/tokenzation/tokenization.py
```python
# Apache Software License 2.0
#
# Copyright (c) ZenML GmbH 2025. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from datasets import DatasetDict
from transformers import PreTrainedTokenizerBase
from typing_extensions import Annotated

from utils.misc import find_max_length

from zenml import step
from zenml.logger import get_logger

logger = get_logger(__name__)


@step
def tokenization_step(
    tokenizer: PreTrainedTokenizerBase,
    dataset: DatasetDict,
    padding: str = "max_length",
    max_seq_length: int = 512,
    text_column: str = "text",
    label_column: str = "label",
) -> Annotated[DatasetDict, "tokenized_data"]:
    """Tokenization step.

    This step tokenizes the dataset using the tokenizer and returns the
    tokenized dataset in a Huggingface DatasetDict format.

    Args:
        tokenizer: The tokenizer to use for tokenization.
        dataset: The dataset to be tokenized.
        padding: Padding strategy.
        max_seq_length: Maximum sequence length.
        text_column: Name of the text column.
        label_column: Name of the label column.

    Returns:
        The tokenized dataset.
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
    train_max_length = find_max_length(dataset["train"][text_column])

    # Depending on the dataset, find the maximum length of text in the
    # validation or test dataset
    val_or_test_max_length = find_max_length(
        dataset["validation"][text_column]
    )

    # Determine the maximum length for tokenization
    max_length = (
        train_max_length
        if train_max_length >= val_or_test_max_length
        else val_or_test_max_length
    )
    logger.info(f"Max length for the given dataset is: {max_length}")

    def preprocess_function(examples):
        # Tokenize the examples with padding, truncation, and a specified
        # maximum length
        result = tokenizer(
            examples[text_column],
            padding=padding,
            truncation=True,
            max_length=max_length or max_seq_length,
        )
        # Add labels to the tokenized examples
        result["label"] = examples[label_column]
        return result

    # Apply the preprocessing function to the dataset
    tokenized_datasets = dataset.map(
        preprocess_function,
        batched=True,
    )
    logger.info(tokenized_datasets)

    # Remove the original text column and rename the label column
    tokenized_datasets = tokenized_datasets.remove_columns([text_column])
    tokenized_datasets = tokenized_datasets.rename_column(
        label_column, "labels"
    )

    # Set the format of the tokenized dataset
    tokenized_datasets.set_format("torch")
    ### YOUR CODE ENDS HERE ###
    return tokenized_datasets
```