# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_build_and_upload_wheels.yml

name: Reusable Build Wheels

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      MODE:
        description: "The build mode, either `pypi` or `pr`"
        required: true
        type: string
      PLATFORM:
        required: true
        type: string
      WHEEL_ARTIFACT_NAME:
        required: false
        type: string
        default: ""
      RELEASE_COMMIT:
        required: false
        type: string
        default: ""

  workflow_dispatch:
    inputs:
      PLATFORM:
        type: choice
        options:
          - linux-arm64
          - linux-x64
          - windows-x64
          - macos-arm64
          - macos-x64
        description: "Platform to build for"
        required: true
      MODE:
        type: choice
        required: false
        options:
          - pypi
          - pr
          - extra
        description: "The build mode (`pypi` includes the web viewer, `pr` does not)"
      CONCURRENCY:
        required: false
        type: string
        default: "adhoc"
        description: "Concurrency group to use"
      WHEEL_ARTIFACT_NAME:
        required: false
        type: string
        default: ""
        description: "If set, will be saved under that name in the workflow artifacts"
      RELEASE_COMMIT:
        required: false
        type: string
        default: ""
        description: "Release commit"

concurrency:
  group: ${{ inputs.CONCURRENCY }}-build-wheels
  cancel-in-progress: true

env:
  PYTHON_VERSION: "3.9"

  # web_sys_unstable_apis is required to enable the web_sys clipboard API which egui_web uses
  # https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.Clipboard.html
  # https://rustwasm.github.io/docs/wasm-bindgen/web-sys/unstable-apis.html

  # TODO(jleibs) --deny warnings causes installation of wasm-bindgen to fail on mac
  # RUSTFLAGS: --cfg=web_sys_unstable_apis --deny warnings
  RUSTFLAGS: --cfg=web_sys_unstable_apis

  RUSTDOCFLAGS: --deny warnings

  # Disable the GHA backend (Github's 10GB storage) since we use our own GCS backend.
  # See: https://github.com/marketplace/actions/sccache-action
  SCCACHE_GHA_ENABLED: "false"

  # Wrap every `rustc` invocation in `sccache`.
  RUSTC_WRAPPER: "sccache"

  # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all
  # these incremental artifacts when running on CI.
  CARGO_INCREMENTAL: "0"

defaults:
  run:
    shell: bash

permissions:
  contents: "read"
  id-token: "write"

jobs:
  # ---------------------------------------------------------------------------

  set-config:
    name: Set Config (${{ inputs.PLATFORM }})
    runs-on: ubuntu-latest
    outputs:
      RUNNER: ${{ steps.set-config.outputs.runner }}
      TARGET: ${{ steps.set-config.outputs.target }}
      CONTAINER: ${{ steps.set-config.outputs.container }}
      COMPAT: ${{ steps.set-config.outputs.compat }}
    steps:
      - name: Set runner and target based on platform
        id: set-config
        run: |
          case "${{ inputs.PLATFORM }}" in
            linux-arm64)
              runner="buildjet-8vcpu-ubuntu-2204-arm"
              target="aarch64-unknown-linux-gnu"
              container="'rerunio/ci_docker:0.15.0'" # Required to be manylinux compatible
              compat="manylinux_2_31"
              ;;
            linux-x64)
              runner="ubuntu-latest-16-cores"
              target="x86_64-unknown-linux-gnu"
              compat="manylinux_2_31"
              container="'rerunio/ci_docker:0.15.0'" # Required to be manylinux compatible
              ;;
            windows-x64)
              runner="windows-latest-8-cores"
              target="x86_64-pc-windows-msvc"
              container="null"
              compat="manylinux_2_31"
              ;;
            macos-arm64)
              runner="macos-latest-large" # See https://github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions/
              target="aarch64-apple-darwin"
              container="null"
              compat="manylinux_2_31"
              ;;
            macos-x64)
              runner="macos-latest-large" # See https://github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions/
              target="x86_64-apple-darwin"
              container="null"
              compat="manylinux_2_31"
              ;;
            *) echo "Invalid platform" && exit 1 ;;
          esac
          echo "runner=$runner" >> "$GITHUB_OUTPUT"
          echo "target=$target" >> "$GITHUB_OUTPUT"
          echo "container=$container" >> "$GITHUB_OUTPUT"
          echo "compat=$compat" >> "$GITHUB_OUTPUT"

  # ---------------------------------------------------------------------------

  build-wheels:
    name: Build Wheels (${{ needs.set-config.outputs.RUNNER }})

    needs: [set-config]

    runs-on: ${{ needs.set-config.outputs.RUNNER }}
    container:
      image: ${{ fromJson(needs.set-config.outputs.CONTAINER) }}
      credentials:
        username: ${{ secrets.DOCKER_HUB_USER }}
        password: ${{ secrets.DOCKER_HUB_TOKEN }}

    steps:
      - name: Show context
        run: |
          echo "GITHUB_CONTEXT": $GITHUB_CONTEXT
          echo "JOB_CONTEXT": $JOB_CONTEXT
          echo "INPUTS_CONTEXT": $INPUTS_CONTEXT
          echo "ENV_CONTEXT": $ENV_CONTEXT
        env:
          ENV_CONTEXT: ${{ toJson(env) }}
          GITHUB_CONTEXT: ${{ toJson(github) }}
          JOB_CONTEXT: ${{ toJson(job) }}
          INPUTS_CONTEXT: ${{ toJson(inputs) }}

      - uses: actions/checkout@v4
        with:
          ref: ${{ inputs.RELEASE_COMMIT || ((github.event_name == 'pull_request' && github.event.pull_request.head.ref) || '') }}

      - name: Set up Rust and Authenticate to GCS
        uses: ./.github/actions/setup-rust
        with:
          cache_key: "build-${{ inputs.PLATFORM }}"
          # Cache will be produced by `reusable_checks/rs-lints`
          save_cache: false
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}
          targets: ${{ needs.set-config.outputs.TARGET }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - name: Get sha
        id: get-sha
        run: |
          full_commit="${{ inputs.RELEASE_COMMIT || ((github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha) }}"
          echo "sha=$(echo $full_commit | cut -c1-7)" >> "$GITHUB_OUTPUT"

      - name: "Download rerun-cli"
        run: |
          pixi run fetch-artifact \
            --commit-sha ${{ steps.get-sha.outputs.sha }} \
            --artifact rerun-cli \
            --platform ${{ inputs.PLATFORM }} \
            --dest rerun_py/rerun_sdk/rerun_cli

      - name: Build
        run: |
          pixi run python scripts/ci/build_and_upload_wheels.py \
            --mode ${{ inputs.MODE }} \
            --target ${{ needs.set-config.outputs.TARGET }} \
            --dir commit/${{ steps.get-sha.outputs.sha }}/wheels \
            --compat ${{ needs.set-config.outputs.COMPAT }} \
            --upload-gcs

      - name: Save wheel artifact
        if: ${{ inputs.WHEEL_ARTIFACT_NAME != '' }}
        uses: actions/upload-artifact@v4
        with:
          name: ${{ inputs.WHEEL_ARTIFACT_NAME }}
          path: dist/${{ needs.set-config.outputs.TARGET }}

      # ---------------------------------------------------------------------------
      # rerun_notebook support

      - name: "Build rerun_notebook"
        # only build the notebook if we are building for pypi and running linux-x64
        if: ${{ (inputs.MODE == 'pypi' || inputs.MODE == 'extra') && inputs.PLATFORM == 'linux-x64' }}
        run: |
          rm -rf dist
          pixi run js-build-base
          pixi run python scripts/ci/build_and_upload_rerun_notebook.py \
            --dir commit/${{ steps.get-sha.outputs.sha }}/wheels

      - name: Save rerun_notebook wheel artifact
        if: ${{ (inputs.MODE == 'pypi' || inputs.MODE == 'extra') && inputs.PLATFORM == 'linux-x64' }}
        uses: actions/upload-artifact@v4
        with:
          name: rerun_notebook_wheel
          path: dist
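The `get-sha` step above derives a 7-character short commit hash with `cut -c1-7` and hands it between jobs via `$GITHUB_OUTPUT`. A minimal sketch of that derivation, using a placeholder hash rather than a real commit:

```shell
# Mirror the workflow's `get-sha` step outside of CI.
# The 40-character hash below is a placeholder, not a real commit.
full_commit="0123456789abcdef0123456789abcdef01234567"

# Keep only the first 7 characters, like `cut -c1-7` in the workflow.
sha="$(echo "$full_commit" | cut -c1-7)"

echo "sha=$sha"  # prints: sha=0123456
```

In the workflow the same `sha=...` line is appended to `$GITHUB_OUTPUT`, which is how the short hash becomes a step output consumed by later steps.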
# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_build_js.yml

name: Reusable Build rerun_js

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string

concurrency:
  group: ${{ inputs.CONCURRENCY }}-build-js
  cancel-in-progress: true

env:
  # web_sys_unstable_apis is required to enable the web_sys clipboard API which egui_web uses
  # https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.Clipboard.html
  # https://rustwasm.github.io/docs/wasm-bindgen/web-sys/unstable-apis.html
  RUSTFLAGS: --cfg=web_sys_unstable_apis --deny warnings

  RUSTDOCFLAGS: --deny warnings

  # Disable the GHA backend (Github's 10GB storage) since we use our own GCS backend.
  # See: https://github.com/marketplace/actions/sccache-action
  SCCACHE_GHA_ENABLED: "false"

  # Wrap every `rustc` invocation in `sccache`.
  RUSTC_WRAPPER: "sccache"

  # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all
  # these incremental artifacts when running on CI.
  CARGO_INCREMENTAL: "0"

defaults:
  run:
    shell: bash

permissions:
  contents: "read"
  id-token: "write"

jobs:
  build:
    name: Build rerun_js
    runs-on: ubuntu-latest-16-cores
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}

      - uses: actions/setup-node@v4
        with:
          node-version: "20.x"

      - name: Install Yarn
        run: npm install -g yarn

      - name: Set up Rust
        uses: ./.github/actions/setup-rust
        with:
          cache_key: "build-web"
          # Cache will be produced by `reusable_checks/rs-check-wasm`
          save_cache: false
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - name: Install yarn dependencies
        run: pixi run yarn --cwd rerun_js install

      - name: Build rerun_js package
        run: pixi run yarn --cwd rerun_js workspaces run build
# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_build_web.yml

name: Reusable Build web viewer

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      RELEASE_VERSION:
        required: false
        type: string
        default: "prerelease"
      CHANNEL: # `nightly` or `main`
        required: true
        type: string

concurrency:
  group: ${{ inputs.CONCURRENCY }}-build-web
  cancel-in-progress: true

env:
  # web_sys_unstable_apis is required to enable the web_sys clipboard API which egui_web uses
  # https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.Clipboard.html
  # https://rustwasm.github.io/docs/wasm-bindgen/web-sys/unstable-apis.html
  RUSTFLAGS: --cfg=web_sys_unstable_apis --deny warnings

  RUSTDOCFLAGS: --deny warnings

  # Disable the GHA backend (Github's 10GB storage) since we use our own GCS backend.
  # See: https://github.com/marketplace/actions/sccache-action
  SCCACHE_GHA_ENABLED: "false"

  # Wrap every `rustc` invocation in `sccache`.
  RUSTC_WRAPPER: "sccache"

  # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all
  # these incremental artifacts when running on CI.
  CARGO_INCREMENTAL: "0"

defaults:
  run:
    shell: bash

permissions:
  contents: "write"
  id-token: "write"
  pull-requests: "write"

jobs:
  rs-build-web-viewer:
    name: Build web viewer
    runs-on: ubuntu-latest-16-cores
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}

      - name: Get sha
        id: get-sha
        run: |
          full_commit="${{ (github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha }}"
          echo "sha=$(echo $full_commit | cut -c1-7)" >> "$GITHUB_OUTPUT"

      - name: Status comment
        if: github.event_name == 'pull_request'
        # https://github.com/mshick/add-pr-comment
        uses: mshick/add-pr-comment@v2.8.2
        with:
          message-id: "web-viewer-build-status"
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          message: |
            Web viewer is being built.

            | Result | Commit | Link | Manifest |
            | ------ | ------- | ----- | -------- |
            | ⏳ | ${{ steps.get-sha.outputs.sha }} | https://rerun.io/viewer/pr/${{ github.event.pull_request.number }} | [`+nightly`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json) [`+main`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/main/examples_manifest.json) |

            <sup>Note: This comment is updated whenever you push a commit.</sup>

      - name: Set up Rust
        uses: ./.github/actions/setup-rust
        with:
          cache_key: "build-web"
          # Cache will be produced by `reusable_checks/rs-check-wasm`
          save_cache: false
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - name: Build web-viewer (release)
        run: |
          if [ ${{ inputs.CHANNEL }} = "nightly" ]; then
            export DEFAULT_EXAMPLES_MANIFEST_URL="https://app.rerun.io/version/nightly/examples_manifest.json"
          fi
          pixi run rerun-build-web-release

      # We build a single manifest pointing to the `commit`.
      # All the `pr`, `main`, release tag, etc. variants will always just point to the resolved commit.
      - name: Build examples manifest
        run: |
          full_commit="${{ (github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha }}"
          sha="$(echo $full_commit | cut -c1-7)"

          pixi run build-examples manifest \
            --base-url "https://app.rerun.io/commit/$sha" \
            --channel "${{ inputs.CHANNEL }}" \
            web_viewer/examples_manifest.json

      - name: Upload web viewer
        uses: actions/upload-artifact@v4
        with:
          name: web_viewer
          path: web_viewer

      - name: Status comment
        if: failure() && github.event_name == 'pull_request'
        # https://github.com/mshick/add-pr-comment
        uses: mshick/add-pr-comment@v2.8.2
        with:
          message-id: "web-viewer-build-status"
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          message: |
            Web viewer failed to build.

            | Result | Commit | Link | Manifest |
            | ------ | ------- | ----- | -------- |
            | ❌ | ${{ steps.get-sha.outputs.sha }} | https://rerun.io/viewer/pr/${{ github.event.pull_request.number }} | [`+nightly`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json) [`+main`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/main/examples_manifest.json) |

            <sup>Note: This comment is updated whenever you push a commit.</sup>
# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_bundle_and_upload_rerun_cpp.yml

name: Reusable C++ bundling and upload

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      PLATFORM_FILTER:
        required: false
        type: string
      RELEASE_COMMIT:
        required: false
        type: string
        default: ""

concurrency:
  group: ${{ inputs.CONCURRENCY }}-bundle-and-upload-rerun-cpp
  cancel-in-progress: true

defaults:
  run:
    shell: bash

permissions:
  contents: "read"
  id-token: "write"

jobs:
  bundle-and-upload-rerun_cpp:
    name: Bundle and upload rerun_cpp_sdk.zip
    runs-on: ubuntu-latest
    container:
      image: rerunio/ci_docker:0.15.0 # Need container for arrow dependency.
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ inputs.RELEASE_COMMIT || ((github.event_name == 'pull_request' && github.event.pull_request.head.ref) || '') }}

      - id: "auth"
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - name: "Set up Cloud SDK"
        uses: "google-github-actions/setup-gcloud@v2"
        with:
          version: ">= 363.0.0"

      - name: Install python gcs library
        run: |
          python3 -m pip install google-cloud-storage

      - name: Get sha
        id: get-sha
        run: |
          full_commit="${{ inputs.RELEASE_COMMIT || ((github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha) }}"
          echo "sha=$(echo $full_commit | cut -c1-7)" >> "$GITHUB_OUTPUT"

      - name: "Bundle and upload rerun_cpp_sdk.zip"
        run: python3 ./scripts/ci/bundle_and_upload_rerun_cpp.py --git-hash ${{ steps.get-sha.outputs.sha }} --platform-filter=${{ inputs.PLATFORM_FILTER }}
name: "General checks: Lints, Tests, Docs"\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n CHANNEL: # `nightly`/`main`/`pr`\n required: true\n type: string\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-checks\n cancel-in-progress: true\n\nenv:\n PYTHON_VERSION: "3.9"\n # web_sys_unstable_apis is required to enable the web_sys clipboard API which egui_web uses\n # https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.Clipboard.html\n # https://rustwasm.github.io/docs/wasm-bindgen/web-sys/unstable-apis.html\n RUSTFLAGS: --cfg=web_sys_unstable_apis --deny warnings\n\n RUSTDOCFLAGS: --deny warnings\n\n # Disable the GHA backend (Github's 10GB storage) since we use our own GCS backend.\n # See: https://github.com/marketplace/actions/sccache-action\n SCCACHE_GHA_ENABLED: "false"\n\n # Wrap every `rustc` invocation in `sccache`.\n RUSTC_WRAPPER: "sccache"\n\n # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all\n # these incremental artifacts when running on CI.\n CARGO_INCREMENTAL: "0"\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n no-codegen-changes:\n name: Check if running codegen would produce any changes\n runs-on: ubuntu-latest-16-cores\n steps:\n # Note: We explicitly don't override `ref` here. We need to see if changes would be made\n # in a context where we have merged with main. 
Otherwise we might miss changes such as one\n # PR introduces a new type and another PR changes the codegen.\n - uses: actions/checkout@v4\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n with:\n cache_key: "build-linux"\n save_cache: true\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Codegen check\n run: pixi run codegen --force --check\n\n - name: Codegen out-of-sync (protos)\n run: pixi run codegen-protos-check\n\n # ---------------------------------------------------------------------------\n\n # NOTE: We don't want spurious failures caused by issues being closed, so this does not run on CI,\n # at least for the time being.\n # - name: Check for zombie TODOs\n # shell: bash\n # run: |\n # pixi run ./scripts/zombie_todos.py --token ${{ secrets.GITHUB_TOKEN }}\n\n rerun-lints:\n name: Rerun lints\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: "3.11"\n\n - name: Rerun lints\n run: pixi run lint-rerun\n\n toml-format-check:\n name: Toml format check\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Toml format check\n run: pixi run toml-fmt-check\n\n check-too-large-files:\n name: Check for too large files\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n 
with:\n pixi-version: v0.41.4\n\n - name: Check for too large files\n run: pixi run check-large-files\n\n check-example-thumbnails:\n name: Check Python example thumbnails\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Check Python example thumbnails\n run: pixi run ./scripts/ci/thumbnails.py check\n\n check-example-manifest-coverage:\n name: Check example manifest coverage\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: "3.11"\n\n - name: Check example manifest coverage\n run: pixi run ./scripts/check_example_manifest_coverage.py\n\n - name: Check the migration guide redirect\n run: pixi run python scripts/ci/check_migration_guide_redirect.py\n\n lint-md:\n name: Lint markdown\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: "3.11"\n\n - name: Run linter\n run: |\n # Single quoted because pixi does its own glob expansion\n pixi run mdlint --glob 'docs/content/**/*.md'\n pixi run mdlint --glob 'examples/python/*/README.md'\n pixi run mdlint --glob 'examples/cpp/*/README.md'\n pixi run mdlint --glob 'examples/rust/*/README.md'\n\n # ---------------------------------------------------------------------------\n\n spell-check:\n name: Spell Check\n runs-on: ubuntu-latest\n steps:\n - name: Checkout Actions Repository\n uses: 
actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - name: Check spelling of entire workspace\n uses: crate-ci/typos@v1.18.0\n\n # ---------------------------------------------------------------------------\n\n misc-formatting:\n name: Misc formatting\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: prettier --check\n run: pixi run misc-fmt-check\n\n # ---------------------------------------------------------------------------\n\n markdown-paths-filter:\n runs-on: ubuntu-latest\n outputs:\n md_changes: ${{ steps.filter.outputs.md_changes }}\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n - uses: dorny/paths-filter@v3\n id: filter\n with:\n filters: |\n md_changes:\n - '**/*.md'\n\n link-checker:\n name: Check links\n needs: markdown-paths-filter\n if: inputs.CHANNEL == 'nightly' || needs.markdown-paths-filter.outputs.md_changes == 'true'\n runs-on: ubuntu-latest\n # do not fail entire workflow (e.g. 
nightly) if this is the only failing check\n continue-on-error: true\n steps:\n - uses: actions/checkout@v4\n\n - name: Restore lychee cache\n id: restore-cache\n uses: actions/cache/restore@v4\n with:\n path: .lycheecache\n key: cache-lychee-${{ github.sha }}\n restore-keys: cache-lychee-\n\n # Check https://github.com/lycheeverse/lychee on how to run locally.\n - name: Link Checker\n id: lychee\n uses: lycheeverse/lychee-action@v1.10.0\n with:\n fail: true\n lycheeVersion: "0.15.1"\n # When given a directory, lychee checks only markdown, html and text files, everything else we have to glob in manually.\n # Pass --verbose, so that all considered links are printed, making it easier to debug.\n args: |\n --verbose --cache --max-cache-age 1d . --base . "**/*.md" "**/*.rs" "**/*.toml" "**/*.hpp" "**/*.cpp" "**/CMakeLists.txt" "**/*.py" "**/*.yml"\n\n - name: Warn because of broken links\n if: ${{ steps.lychee.outputs.exit_code != '0' }}\n run: echo "::warning title="Link checker"::Link checker detected broken links!"\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_checks.yml | reusable_checks.yml | YAML | 8,602 | 0.95 | 0.032727 | 0.116071 | vue-tools | 866 | 2025-01-16T17:33:32.165434 | MIT | false | c8e8b6b8f4886217ae40065fb30a7343 |
name: "C++ Tests on all platforms & compilers"\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n CHANNEL:\n required: false\n type: string # enum: 'nightly', 'main', or 'pr'\n\n workflow_dispatch:\n inputs:\n CONCURRENCY:\n required: false\n type: string\n default: "adhoc"\n CHANNEL:\n required: false\n type: string # enum: 'nightly', 'main', or 'pr'\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-checks_cpp\n cancel-in-progress: true\n\nenv:\n # See: https://github.com/marketplace/actions/sccache-action\n SCCACHE_GHA_ENABLED: "false"\n\n RUSTC_WRAPPER: "sccache"\n\n # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all\n # these incremental artifacts when running on CI.\n CARGO_INCREMENTAL: "0"\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n matrix_prep:\n runs-on: ubuntu-latest\n outputs:\n MATRIX: ${{ steps.set-matrix.outputs.matrix }}\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n - name: Load C++ test matrix\n id: set-matrix\n run: echo "matrix=$(jq -c . 
< ./.github/workflows/cpp_matrix_full.json)" >> $GITHUB_OUTPUT\n\n cpp-tests:\n name: C++ build & test - ${{ matrix.name }}\n needs: matrix_prep\n strategy:\n matrix: ${{ fromJson(needs.matrix_prep.outputs.MATRIX) }}\n runs-on: ${{ matrix.runs_on }}\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n environments: cpp\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n with:\n cache_key: ${{ matrix.cache_key }}\n # Cache will be produced by `reusable_checks/rs-lints`\n save_cache: false\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n # Workaround for ASAN issues on Github images https://github.com/actions/runner-images/issues/9491\n - name: Fix kernel mmap rnd bits\n if: runner.os == 'Linux'\n # Asan in llvm 14 provided in ubuntu 22.04 is incompatible with\n # high-entropy ASLR in much newer kernels that GitHub runners are\n # using leading to random crashes: https://reviews.llvm.org/D148280\n run: sudo sysctl vm.mmap_rnd_bits=28\n\n - name: pixi run -e cpp cpp-clean\n run: pixi run -e cpp cpp-clean\n\n - name: pixi run -e cpp cpp-build-all\n run: ${{ matrix.extra_env_vars }} RERUN_WERROR=ON pixi run -e cpp cpp-build-all\n\n - name: pixi run -e cpp cpp-test\n run: ${{ matrix.extra_env_vars }} RERUN_WERROR=ON pixi run -e cpp cpp-test\n\n - name: pixi run -e cpp cpp-build-all-shared-libs\n if: ${{ inputs.CHANNEL == 'nightly' }}\n run: ${{ matrix.extra_env_vars }} RERUN_WERROR=ON pixi run -e cpp cpp-build-all-shared-libs\n\n - name: additional_commands\n run: ${{ matrix.additional_commands }}\n\n cpp-formatting:\n name: C++ formatting check\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' 
}}\n\n - name: Run clang format on all relevant files\n uses: jidicula/clang-format-action@v4.11.0\n with:\n clang-format-version: "16"\n # Only check c/cpp/h/hpp (default checks also .proto and others)\n include-regex: ^.*\.(c|cpp|h|hpp)$\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_checks_cpp.yml | reusable_checks_cpp.yml | YAML | 3,798 | 0.95 | 0.025 | 0.09 | python-kit | 336 | 2024-02-04T01:56:52.139971 | GPL-3.0 | false | 4d3b57eec4bdb657fe91f0a2794358bb |
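The `matrix_prep` job above compacts `cpp_matrix_full.json` with `jq -c` because every `key=value` pair written to `$GITHUB_OUTPUT` must fit on a single line. A sketch of the same compaction, using `python3` instead of `jq` and an inline placeholder matrix instead of the real file:

```shell
# A pretty-printed matrix like this spans several lines, which would break
# the one-line-per-entry format of $GITHUB_OUTPUT. The contents here are a
# placeholder, not the real cpp_matrix_full.json.
matrix='{
  "include": [
    { "name": "Linux x64", "runs_on": "ubuntu-latest" }
  ]
}'

# Collapse it onto one line, equivalent to `jq -c .`:
compact="$(printf '%s' "$matrix" \
  | python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), separators=(",", ":")))')"

echo "matrix=$compact"
# prints: matrix={"include":[{"name":"Linux x64","runs_on":"ubuntu-latest"}]}
```

The downstream `cpp-tests` job then re-inflates the string with `fromJson(...)` to drive its build matrix.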
name: "Protobuf Checks: lints, BW compatibility, formatting, etc"\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-checks_protobuf\n cancel-in-progress: true\n\nenv:\n # Make sure that git will not try and perform any kind of LFS filtering, otherwise\n # this will completely break `buf` which invokes `git` under the hood.\n GIT_LFS_SKIP_SMUDGE: 1\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n pb-check:\n name: "Protobuf Checks: lints, BW compatibility, formatting, etc"\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Fetch latest main (so we can grab the current schema snapshot)\n run: time git fetch origin main # yes, we need full --depth for `buf` to work\n\n # NOTE(cmc): I'm keeping all the snapshot machinery around if it turns out we need something more robust\n # than a pure git solution in the future. For now, convenience wins.\n #\n # - name: Schema snapshot out-of-sync\n # run: pixi run pb-snapshot-check\n # # continue-on-error: true\n #\n - name: Breaking changes\n run: pixi run pb-breaking\n if: success() || failure() # trigger this step even if the previous one failed\n\n - name: Lints\n run: pixi run pb-lint\n if: success() || failure() # trigger this step even if the previous one failed\n\n - name: Formatting\n run: pixi run pb-fmt-check\n if: success() || failure() # trigger this step even if the previous one failed\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_checks_protobuf.yml | reusable_checks_protobuf.yml | YAML | 1,825 | 0.95 | 0.15 | 0.183673 | python-kit | 462 | 2023-12-11T14:34:08.796795 | GPL-3.0 | false | 9f032deea82a8a36e0674bd4d1bb879d |
name: "Python Checks: Lints & Docs"\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-checks_python\n cancel-in-progress: true\n\nenv:\n PYTHON_VERSION: "3.9"\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n # ---------------------------------------------------------------------------\n\n py-lints:\n name: Python lints (ruff, mypy, …)\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Python format check\n run: pixi run py-fmt-check\n\n - name: Lint Python\n run: pixi run py-lint\n\n # ---------------------------------------------------------------------------\n\n py-test-docs:\n name: Test Python Docs\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n environments: py-docs\n\n - name: Build via mkdocs\n shell: bash\n run: |\n pixi run -e py-docs mkdocs build --strict -f rerun_py/mkdocs.yml\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_checks_python.yml | reusable_checks_python.yml | YAML | 1,463 | 0.95 | 0 | 0.04 | awesome-app | 711 | 2024-08-31T05:04:51.893657 | MIT | false | ee344bcfe6d3e732f4036c44b8ef74a2 |
name: "Rust Checks: Lints, Tests, Docs"\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n CHANNEL:\n required: false\n type: string # enum: 'nightly', 'main', or 'pr'\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-checks_rust\n cancel-in-progress: true\n\nenv:\n # web_sys_unstable_apis is required to enable the web_sys clipboard API which egui_web uses\n # https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.Clipboard.html\n # https://rustwasm.github.io/docs/wasm-bindgen/web-sys/unstable-apis.html\n RUSTFLAGS: --cfg=web_sys_unstable_apis --deny warnings\n\n RUSTDOCFLAGS: --deny warnings\n\n # Disable the GHA backend (Github's 10GB storage) since we use our own GCS backend.\n # See: https://github.com/marketplace/actions/sccache-action\n SCCACHE_GHA_ENABLED: "false"\n\n # Wrap every `rustc` invocation in `sccache`.\n RUSTC_WRAPPER: "sccache"\n\n # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all\n # these incremental artifacts when running on CI.\n CARGO_INCREMENTAL: "0"\n\n # Improve diagnostics for crashes.\n RUST_BACKTRACE: full\n\n # Sourced from https://vulkan.lunarg.com/sdk/home#linux\n VULKAN_SDK_VERSION: "1.3.290.0"\n\n # Via: https://nexte.st/docs/installation/pre-built-binaries/#using-nextest-in-github-actions\n # ANSI color codes should be supported by default on GitHub Actions.\n CARGO_TERM_COLOR: always\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n # ---------------------------------------------------------------------------\n\n rs-lints:\n name: Rust lints (fmt, check, clippy, tests, doc)\n # TODO(andreas): setup-vulkan doesn't work on 24.4 right now due to missing .so\n runs-on: ubuntu-22.04-large\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}\n lfs: true\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n 
with:\n cache_key: "build-linux"\n save_cache: true\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n # Install the Vulkan SDK, so we can use the software rasterizer.\n # TODO(andreas): It would be nice if `setup_software_rasterizer.py` could do that for us as well (note though that this action here is very fast when cached!)\n - name: Install Vulkan SDK\n uses: rerun-io/install-vulkan-sdk-action@v1.1.0\n with:\n vulkan_version: ${{ env.VULKAN_SDK_VERSION }}\n install_runtime: true\n cache: true\n stripdown: true\n\n - name: Setup software rasterizer\n run: pixi run python ./scripts/ci/setup_software_rasterizer.py\n\n - name: Rust checks (PR subset)\n if: ${{ inputs.CHANNEL == 'pr' }}\n run: pixi run rs-check --only base_checks sdk_variations cargo_deny wasm docs tests\n\n - name: Rust most checks & tests\n if: ${{ inputs.CHANNEL == 'main' }}\n run: pixi run rs-check --skip individual_crates docs_slow\n\n - name: Rust all checks & tests\n if: ${{ inputs.CHANNEL == 'nightly' }}\n run: pixi run rs-check\n\n - name: .rrd backwards compatibility\n # We don't yet guarantee backwards compatibility, but we at least check it\n # so that we _know_ if/when we break it.\n # See tests/assets/rrd/README.md for more\n run: pixi run check-backwards-compatibility\n\n - name: Upload test results\n uses: actions/upload-artifact@v4\n if: always()\n with:\n name: test-results-ubuntu\n path: "**/tests/snapshots"\n\n # Run some basics tests on Mac and Windows\n mac-windows-tests:\n name: Test on ${{ matrix.name }}\n strategy:\n matrix:\n include:\n - os: "macos-latest"\n name: "macos"\n - os: "windows-latest-8-cores"\n name: "windows"\n\n # Note: we can't use `matrix.os` here because its evaluated before the matrix stuff.\n if: ${{ inputs.CHANNEL == 'main' || inputs.CHANNEL == 'nightly' }}\n runs-on: ${{ matrix.os }}\n 
steps:\n - uses: actions/checkout@v4\n with:\n lfs: true\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n with:\n cache_key: "build-${{ matrix.name }}"\n save_cache: true\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n # Building with `--all-features` requires extra build tools like `nasm`.\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n # Install the Vulkan SDK, so we can use the software rasterizer.\n # TODO(andreas): It would be nice if `setup_software_rasterizer.py` could do that for us as well (note though that this action here is very fast when cached!)\n - name: Install Vulkan SDK\n if: ${{ matrix.name != 'macos' }}\n uses: rerun-io/install-vulkan-sdk-action@v1.1.0\n with:\n vulkan_version: ${{ env.VULKAN_SDK_VERSION }}\n install_runtime: true\n cache: true\n stripdown: true\n\n - name: Setup software rasterizer\n run: pixi run python ./scripts/ci/setup_software_rasterizer.py\n\n - name: Rust tests\n if: ${{ inputs.CHANNEL == 'main' }}\n run: pixi run rs-check --only tests\n\n - name: Rust all checks & tests\n if: ${{ inputs.CHANNEL == 'nightly' }}\n run: pixi run rs-check\n\n - name: Upload test results\n uses: actions/upload-artifact@v4\n if: always()\n with:\n name: test-results-${{ matrix.name }}\n path: "**/tests/snapshots"\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_checks_rust.yml | reusable_checks_rust.yml | YAML | 5,942 | 0.95 | 0.090909 | 0.166667 | node-utils | 155 | 2025-05-07T23:13:38.247876 | Apache-2.0 | false | 4f68287ad680a3cd51083b87f7d83dec |
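The three mutually exclusive `if: inputs.CHANNEL == …` steps in `rs-lints` select how much of the check suite runs per channel: `pr` runs a fast subset, `main` skips the slowest checks, and `nightly` runs everything. The same dispatch, sketched as a small Python lookup (command strings copied from the workflow; the helper name is made up):

```python
# Hypothetical mirror of the CHANNEL-gated steps in the rs-lints job.
def rs_check_command(channel: str) -> str:
    commands = {
        "pr": "pixi run rs-check --only base_checks sdk_variations cargo_deny wasm docs tests",
        "main": "pixi run rs-check --skip individual_crates docs_slow",
        "nightly": "pixi run rs-check",
    }
    # A KeyError here corresponds to no step matching in the workflow.
    return commands[channel]

print(rs_check_command("nightly"))  # → pixi run rs-check
```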
name: "Reusable Deploy Docs"\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n PY_DOCS_VERSION_NAME:\n required: true\n type: string\n CPP_DOCS_VERSION_NAME:\n required: true\n type: string\n RS_DOCS_VERSION_NAME:\n required: true\n type: string\n RELEASE_VERSION:\n required: false\n type: string\n RELEASE_COMMIT:\n required: false\n type: string\n UPDATE_LATEST:\n required: false\n type: boolean\n default: false\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-deploy-docs\n cancel-in-progress: true\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "write"\n id-token: "write"\n\nenv:\n PYTHON_VERSION: "3.9"\n\n # web_sys_unstable_apis is required to enable the web_sys clipboard API which egui_web uses\n # https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.Clipboard.html\n # https://rustwasm.github.io/docs/wasm-bindgen/web-sys/unstable-apis.html\n RUSTFLAGS: --cfg=web_sys_unstable_apis --deny warnings\n\n RUSTDOCFLAGS: --deny warnings\n\n # Disable the GHA backend (Github's 10GB storage) since we use our own GCS backend.\n # See: https://github.com/marketplace/actions/sccache-action\n SCCACHE_GHA_ENABLED: "false"\n\n # Wrap every `rustc` invocation in `sccache`.\n RUSTC_WRAPPER: "sccache"\n\n # Not only `sccache` cannot cache incremental builds, it's counter-productive to generate all\n # these incremental artifacts when running on CI.\n CARGO_INCREMENTAL: "0"\n\njobs:\n # ---------------------------------------------------------------------------\n\n py-deploy-docs:\n name: Python\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n fetch-depth: 0 # Don't do a shallow clone\n ref: ${{ inputs.RELEASE_COMMIT || (github.event_name == 'pull_request' && github.event.pull_request.head.ref || '') }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n environments: py-docs\n\n - name: Set up git author\n run: |\n 
remote_repo="https://${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git"\n git config --global user.name "${GITHUB_ACTOR}"\n git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n # Mike will incrementally update the existing gh-pages branch\n # We then check it out, and reset it to a new orphaned branch, which we force-push to origin\n # to make sure we don't accumulate unnecessary history in gh-pages branch\n - name: Deploy via mike # https://github.com/jimporter/mike\n if: ${{ inputs.UPDATE_LATEST }}\n run: |\n git fetch\n pixi run -e py-docs mike deploy -F rerun_py/mkdocs.yml --rebase -b gh-pages --prefix docs/python -u ${{inputs.PY_DOCS_VERSION_NAME}} stable\n git checkout gh-pages\n git checkout --orphan gh-pages-orphan\n git commit -m "Update docs for ${GITHUB_SHA}"\n git push origin gh-pages-orphan:gh-pages -f\n\n # Mike will incrementally update the existing gh-pages branch\n # We then check it out, and reset it to a new orphaned branch, which we force-push to origin\n # to make sure we don't accumulate unnecessary history in gh-pages branch\n - name: Deploy tag via mike # https://github.com/jimporter/mike\n if: ${{ ! 
inputs.UPDATE_LATEST }}\n run: |\n git fetch\n pixi run -e py-docs mike deploy -F rerun_py/mkdocs.yml --rebase -b gh-pages --prefix docs/python ${{inputs.PY_DOCS_VERSION_NAME}}\n git checkout gh-pages\n git checkout --orphan gh-pages-orphan\n git commit -m "Update docs for ${GITHUB_SHA}"\n git push origin gh-pages-orphan:gh-pages -f\n\n # ---------------------------------------------------------------------------\n\n rs-deploy-docs:\n name: Rust\n needs: [py-deploy-docs]\n runs-on: ubuntu-latest-16-cores\n steps:\n - name: Show context\n run: |\n echo "GITHUB_CONTEXT": $GITHUB_CONTEXT\n echo "JOB_CONTEXT": $JOB_CONTEXT\n echo "INPUTS_CONTEXT": $INPUTS_CONTEXT\n echo "ENV_CONTEXT": $ENV_CONTEXT\n env:\n ENV_CONTEXT: ${{ toJson(env) }}\n GITHUB_CONTEXT: ${{ toJson(github) }}\n JOB_CONTEXT: ${{ toJson(job) }}\n INPUTS_CONTEXT: ${{ toJson(inputs) }}\n\n - uses: actions/checkout@v4\n with:\n fetch-depth: 0 # Don't do a shallow clone since we need to push gh-pages\n ref: ${{ inputs.RELEASE_COMMIT || (github.event_name == 'pull_request' && github.event.pull_request.head.ref || '') }}\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n with:\n cache_key: "build-linux"\n # Cache will be produced by `reusable_checks/rs-lints`\n save_cache: false\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Delete existing /target/doc\n run: rm -rf ./target/doc\n\n - name: cargo doc --document-private-items\n run: pixi run cargo doc --document-private-items --no-deps --all-features --workspace --exclude rerun-cli\n\n - name: Set up git author\n run: |\n remote_repo="https://${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git"\n git config --global user.name "${GITHUB_ACTOR}"\n git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n - 
name: Set up ghp-import\n run: pip install ghp-import\n\n - name: Patch in a redirect page\n run: echo "<meta http-equiv=\"refresh\" content=\"0; url=${REDIRECT_CRATE}\">" > target/doc/index.html\n env:\n REDIRECT_CRATE: rerun\n\n # See: https://github.com/c-w/ghp-import\n - name: Deploy the docs\n run: |\n git fetch\n python3 -m ghp_import -n -p -x docs/rust/${{ inputs.RS_DOCS_VERSION_NAME }} target/doc/ -m "Update the rust docs"\n\n cpp-deploy-docs:\n name: Cpp\n needs: [rs-deploy-docs]\n runs-on: ubuntu-latest\n steps:\n - name: Show context\n run: |\n echo "GITHUB_CONTEXT": $GITHUB_CONTEXT\n echo "JOB_CONTEXT": $JOB_CONTEXT\n echo "INPUTS_CONTEXT": $INPUTS_CONTEXT\n echo "ENV_CONTEXT": $ENV_CONTEXT\n env:\n ENV_CONTEXT: ${{ toJson(env) }}\n GITHUB_CONTEXT: ${{ toJson(github) }}\n JOB_CONTEXT: ${{ toJson(job) }}\n INPUTS_CONTEXT: ${{ toJson(inputs) }}\n\n - uses: actions/checkout@v4\n with:\n fetch-depth: 0 # Don't do a shallow clone since we need to push gh-pages\n ref: ${{ inputs.RELEASE_COMMIT || (github.event_name == 'pull_request' && github.event.pull_request.head.ref || '') }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Doxygen C++ docs\n run: pixi run -e cpp cpp-docs\n\n - name: Set up git author\n run: |\n remote_repo="https://${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git"\n git config --global user.name "${GITHUB_ACTOR}"\n git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n # TODO(andreas): Do we need this?\n # - name: Patch in a redirect page\n # shell: bash\n # run: echo "<meta http-equiv=\"refresh\" content=\"0; url=${REDIRECT_CRATE}\">" > target/doc/index.html\n # env:\n # REDIRECT_CRATE: rerun\n\n # See: https://github.com/c-w/ghp-import\n - name: Deploy the docs (versioned)\n if: ${{ inputs.RELEASE_VERSION }}\n run: |\n git fetch\n pixi run -e cpp python -m ghp_import -n -p -x docs/cpp/${{ inputs.RELEASE_VERSION }} 
rerun_cpp/docs/html/ -m "Update the C++ docs (versioned)"\n\n - name: Deploy the docs (named)\n run: |\n git fetch\n pixi run -e cpp python -m ghp_import -n -p -x docs/cpp/${{ inputs.CPP_DOCS_VERSION_NAME }} rerun_cpp/docs/html/ -m "Update the C++ docs"\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_deploy_docs.yml | reusable_deploy_docs.yml | YAML | 8,342 | 0.95 | 0.021552 | 0.126904 | awesome-app | 488 | 2024-06-02T04:35:53.502943 | Apache-2.0 | false | c1fb0cd2dae3eb10e6cb44752c0c69a9 |
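The orphan-branch dance commented in the `py-deploy-docs` job (let mike update `gh-pages`, then reset the branch onto a fresh orphan commit and force-push) keeps the docs branch at a single commit instead of accumulating history. A self-contained sketch in a throwaway repository (all paths, names, and messages are illustrative; assumes `git` ≥ 2.28 on PATH):

```shell
# Demonstrate squashing a growing gh-pages history into one orphan commit,
# as the deploy-docs jobs above do after each `mike deploy`.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q -b gh-pages
git config user.name demo
git config user.email demo@example.com
for i in 1 2 3; do
  echo "docs $i" > index.html
  git add index.html
  git commit -q -m "Update docs $i"
done
# gh-pages now carries 3 commits; replace them with a single orphan commit
# (the workflow then force-pushes this to origin).
git checkout -q --orphan gh-pages-orphan
git commit -q -m "Update docs"
git branch -f gh-pages gh-pages-orphan
git checkout -q gh-pages
count="$(git rev-list --count HEAD)"
echo "$count"  # → 1
```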
name: Reusable Deploy Landing Preview\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n PR_NUMBER:\n required: true\n type: string\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-deploy-landing-preview\n cancel-in-progress: true\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "write"\n id-token: "write"\n pull-requests: "write"\n\njobs:\n deploy:\n name: Deploy\n\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ (github.event_name == 'pull_request' && github.event.pull_request.head.ref) || '' }}\n\n - name: Get sha\n id: get-sha\n run: |\n full_commit="${{ (github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha }}"\n echo "sha=$full_commit" >> "$GITHUB_OUTPUT"\n\n - name: Deploy rerun.io preview\n id: vercel-initial-deploy\n uses: ./.github/actions/vercel\n with:\n command: "deploy"\n vercel_token: ${{ secrets.VERCEL_TOKEN }}\n vercel_team_name: ${{ vars.VERCEL_TEAM_NAME }}\n vercel_project_name: ${{ vars.VERCEL_PROJECT_NAME }}\n release_commit: ${{ steps.get-sha.outputs.sha }}\n target: "preview"\n\n - name: Create pending comment\n # https://github.com/mshick/add-pr-comment\n uses: mshick/add-pr-comment@v2.8.2\n if: success()\n with:\n message-id: "vercel-preview"\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n message: |\n Latest documentation preview deployment is pending:\n ${{ steps.vercel-initial-deploy.outputs.vercel_preview_inspector_url }}\n\n | Result | Commit | Link |\n | ------ | ------- | ----- |\n | ⏳ | ${{ steps.get-sha.outputs.sha }} | unavailable |\n\n - name: Wait for deployment\n id: vercel\n uses: ./.github/actions/vercel\n if: success()\n with:\n command: "wait-for-deployment"\n vercel_token: ${{ secrets.VERCEL_TOKEN }}\n vercel_team_name: ${{ vars.VERCEL_TEAM_NAME }}\n vercel_project_name: ${{ vars.VERCEL_PROJECT_NAME }}\n vercel_deployment_id: ${{ steps.vercel-initial-deploy.outputs.vercel_preview_deployment_id }}\n\n - name: 
Create PR comment\n # https://github.com/mshick/add-pr-comment\n uses: mshick/add-pr-comment@v2.8.2\n if: success() && steps.vercel.outputs.vercel_preview_result == 'success'\n with:\n message-id: "vercel-preview"\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n message: |\n Latest documentation preview deployed successfully.\n\n | Result | Commit | Link |\n | ------ | ------- | ----- |\n | ✅ | ${{ steps.get-sha.outputs.sha }} | https://${{ steps.vercel.outputs.vercel_preview_url }}/docs |\n\n <sup>Note: This comment is updated whenever you push a commit.</sup>\n\n - name: Create PR comment\n # https://github.com/mshick/add-pr-comment\n uses: mshick/add-pr-comment@v2.8.2\n if: success() && steps.vercel.outputs.vercel_preview_result == 'failure'\n with:\n message-id: "vercel-preview"\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n message: |\n Latest documentation preview failed to deploy:\n ${{ steps.vercel.outputs.vercel_preview_inspector_url }}\n\n | Result | Commit | Link |\n | ------ | ------- | ----- |\n | ❌ | ${{ steps.get-sha.outputs.sha }} | unavailable |\n\n <sup>Note: This comment is updated whenever you push a commit.</sup>\n\n - name: Create PR comment\n # https://github.com/mshick/add-pr-comment\n uses: mshick/add-pr-comment@v2.8.2\n if: failure()\n with:\n message-id: "vercel-preview"\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n message: |\n Latest documentation preview failed to deploy, check the CI for more details:\n ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}?pr=${{ github.event.pull_request.number }}\n\n | Result | Commit | Link |\n | ------ | ------- | ----- |\n | ❌ | ${{ steps.get-sha.outputs.sha }} | unavailable |\n\n <sup>Note: This comment is updated whenever you push a commit.</sup>\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_deploy_landing_preview.yml | reusable_deploy_landing_preview.yml | YAML | 4,412 | 0.95 | 0.0625 | 0.037383 | awesome-app | 826 | 2024-05-11T05:03:34.487540 | 
BSD-3-Clause | false | 9007e96aa173925f2bc7e1f1cf13491b |
name: Reusable Pip Index\n\non:\n workflow_call:\n inputs:\n CONCURRENCY:\n required: true\n type: string\n COMMIT:\n required: false\n type: string\n default: ""\n\nconcurrency:\n group: ${{ inputs.CONCURRENCY }}-pip-index\n cancel-in-progress: true\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n pr-summary:\n name: Create a Pip Index file\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n with:\n ref: ${{ inputs.COMMIT || (github.event_name == 'pull_request' && github.event.pull_request.head.ref || '') }}\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: 3.11\n\n - id: "auth"\n uses: google-github-actions/auth@v2\n with:\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - name: "Set up Cloud SDK"\n uses: "google-github-actions/setup-gcloud@v2"\n with:\n version: ">= 363.0.0"\n\n - name: Install deps\n run: pip install google-cloud-storage Jinja2\n\n - name: Render pip index and upload to gcloud\n run: |\n full_commit="${{ inputs.COMMIT || (github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha) }}"\n commit=$(echo $full_commit | cut -c1-7)\n\n python scripts/ci/generate_prerelease_pip_index.py \\n --title "Commit: $commit" \\n --dir "commit/$commit/wheels" \\n --upload \\n --check\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_pip_index.yml | reusable_pip_index.yml | YAML | 1,674 | 0.85 | 0 | 0 | react-lib | 153 | 2024-04-28T19:30:18.453034 | GPL-3.0 | false | 7610a478d69ecf63a1411120392406c2 |
name: Build and publish JS\n\non:\n workflow_call:\n inputs:\n concurrency:\n type: string\n required: true\n release-commit:\n description: "Commit to release"\n type: string\n required: true\n\nconcurrency:\n group: ${{ inputs.concurrency }}-publish-js\n cancel-in-progress: true\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "write"\n id-token: "write"\n\njobs:\n get-commit-sha:\n name: Get Commit Sha\n runs-on: ubuntu-latest\n outputs:\n short-sha: ${{ steps.get-short-sha.outputs.short-sha }}\n full-sha: ${{ steps.get-full-sha.outputs.full-sha }}\n steps:\n - name: "Set short-sha"\n id: get-short-sha\n run: echo "short-sha=$(echo ${{ inputs.release-commit }} | cut -c1-7)" >> $GITHUB_OUTPUT\n\n - name: "Set full-sha"\n id: get-full-sha\n run: echo "full-sha=${{ inputs.release-commit }}" >> $GITHUB_OUTPUT\n\n build:\n runs-on: ubuntu-latest-16-cores\n needs: [get-commit-sha]\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ inputs.release-commit }}\n\n - uses: actions/setup-node@v4\n with:\n node-version: "20.x"\n registry-url: "https://registry.npmjs.org"\n\n - name: Install Yarn\n run: npm install -g yarn\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n with:\n cache_key: "build-web"\n save_cache: false\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n\n - name: Publish packages\n env:\n NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}\n run: |\n pixi run node rerun_js/scripts/publish.mjs\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_publish_js.yml | reusable_publish_js.yml | YAML | 1,837 | 0.95 | 0 | 0 | awesome-app | 595 | 2023-11-09T09:26:42.661219 | MIT | false | cb4a7487bda5dcb3c24f013089a66f48 |
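Several of these workflows derive a 7-character short sha with `cut -c1-7` and key per-commit artifact and GCS paths on it. The equivalent in Python (the sample hash is made up):

```python
# Mirror of `echo $full_commit | cut -c1-7`: the workflows use this prefix
# for per-commit uploads, e.g. commit/<short-sha>/wheels.
def short_sha(full_sha: str, length: int = 7) -> str:
    return full_sha[:length]

print(short_sha("c4b552f5230310260bfe592424c309bc"))  # → c4b552f
```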
name: Build and Publish C/C++ SDKs\n\non:\n workflow_call:\n inputs:\n concurrency:\n type: string\n required: true\n release-version:\n description: "Release Version Number (Must match Cargo.toml)"\n type: string\n required: true\n release-commit:\n description: "Which commit to build+publish"\n type: string\n required: true\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n linux-arm64:\n name: "Linux-Arm64"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_c.yml\n with:\n CONCURRENCY: publish-rerun-c-linux-arm64-${{ github.ref_name }}\n PLATFORM: linux-arm64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n linux-x64:\n name: "Linux-x64"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_c.yml\n with:\n CONCURRENCY: publish-rerun-c-linux-x64-${{ github.ref_name }}\n PLATFORM: linux-x64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n macos-x64:\n name: "Mac-Intel"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_c.yml\n with:\n CONCURRENCY: publish-rerun-c-macos-x64-${{ github.ref_name }}\n PLATFORM: macos-x64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n macos-arm64:\n name: "Mac-Arm"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_c.yml\n with:\n CONCURRENCY: publish-rerun-c-macos-arm64-${{ github.ref_name }}\n PLATFORM: macos-arm64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n windows-x64:\n name: "Windows-x64"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_c.yml\n with:\n CONCURRENCY: publish-rerun-c-windows-${{ github.ref_name }}\n PLATFORM: windows-x64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n bundle-and-upload-rerun_cpp:\n name: "Bundle and upload rerun_cpp_sdk.zip"\n needs: [linux-arm64, linux-x64, macos-x64, macos-arm64, windows-x64]\n uses: ./.github/workflows/reusable_bundle_and_upload_rerun_cpp.yml\n with:\n CONCURRENCY: bundle-rerun-c-${{ github.ref_name }}\n 
RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_publish_rerun_c.yml | reusable_publish_rerun_c.yml | YAML | 2,226 | 0.85 | 0 | 0 | react-lib | 359 | 2024-04-24T14:12:31.727818 | Apache-2.0 | false | c4b552f5230310260bfe592424c309bc |
name: Build and publish rerun-cli\n\non:\n workflow_call:\n inputs:\n concurrency:\n type: string\n required: true\n release-version:\n description: "Release Version Number (Must match Cargo.toml)"\n type: string\n required: true\n release-commit:\n description: "Which commit to build+publish"\n type: string\n required: true\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n linux-arm64:\n name: "Linux-arm64"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_cli.yml\n with:\n CONCURRENCY: publish-rerun-cli-linux-arm64-${{ github.ref_name }}\n PLATFORM: linux-arm64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n linux-x64:\n name: "Linux-x64"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_cli.yml\n with:\n CONCURRENCY: publish-rerun-cli-linux-x64-${{ github.ref_name }}\n PLATFORM: linux-x64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n macos-x64:\n name: "Mac-Intel"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_cli.yml\n with:\n CONCURRENCY: publish-rerun-cli-macos-x64-${{ github.ref_name }}\n PLATFORM: macos-x64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n macos-arm64:\n name: "Mac-Arm"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_cli.yml\n with:\n CONCURRENCY: publish-rerun-cli-macos-arm64-${{ github.ref_name }}\n PLATFORM: macos-arm64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n\n windows-x64:\n name: "Windows-x64"\n uses: ./.github/workflows/reusable_build_and_upload_rerun_cli.yml\n with:\n CONCURRENCY: publish-rerun-cli-windows-x64-${{ github.ref_name }}\n PLATFORM: windows-x64\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n secrets: inherit\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_publish_rerun_cli.yml | reusable_publish_rerun_cli.yml | YAML | 1,883 | 0.85 | 0 | 0 | node-utils | 167 | 2023-11-29T10:05:49.168004 | GPL-3.0 | false | 939f7eb218f648751b8ca4951932d531 |
name: Build and publish web\n\non:\n workflow_call:\n inputs:\n concurrency:\n type: string\n required: true\n release-version:\n description: "Release Version Number (Must match Cargo.toml)"\n type: string\n required: true\n release-commit:\n description: "Commit to release"\n type: string\n required: true\n wheel-artifact-name:\n description: "Name of the wheel to use when running examples"\n type: string\n required: true\n update-latest:\n description: "Whether to update the latest version of the app"\n type: boolean\n required: true\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "write"\n id-token: "write"\n\njobs:\n get-commit-sha:\n name: Get Commit Sha\n runs-on: ubuntu-latest\n outputs:\n short-sha: ${{ steps.get-short-sha.outputs.short-sha }}\n full-sha: ${{ steps.get-full-sha.outputs.full-sha }}\n steps:\n - name: "Set short-sha"\n id: get-short-sha\n run: echo "short-sha=$(echo ${{ inputs.release-commit }} | cut -c1-7)" >> $GITHUB_OUTPUT\n\n - name: "Set full-sha"\n id: get-full-sha\n run: echo "full-sha=${{ inputs.release-commit }}" >> $GITHUB_OUTPUT\n\n build-web:\n runs-on: ubuntu-latest-16-cores\n needs: [get-commit-sha]\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ inputs.release-commit }}\n lfs: true\n\n - id: "auth"\n uses: google-github-actions/auth@v2\n with:\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - name: "Set up Cloud SDK"\n uses: "google-github-actions/setup-gcloud@v2"\n with:\n version: ">= 363.0.0"\n\n - name: Set up Rust\n uses: ./.github/actions/setup-rust\n with:\n cache_key: "build-web"\n save_cache: false\n workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}\n service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}\n\n - uses: prefix-dev/setup-pixi@v0.8.1\n with:\n pixi-version: v0.41.4\n environments: wheel-test\n\n # built by `reusable_build_and_publish_wheels`\n - name: Download Wheel\n 
uses: actions/download-artifact@v4\n with:\n name: ${{ inputs.wheel-artifact-name }}\n path: wheel\n\n - name: Install built wheel\n run: |\n pixi run python scripts/ci/pixi_install_wheel.py --feature python-pypi --package rerun-sdk --dir wheel\n\n - name: Print wheel version\n run: |\n pixi list -e wheel-test | grep rerun_sdk\n pixi run -e wheel-test python -m rerun --version\n pixi run -e wheel-test which rerun\n\n - name: Build web-viewer (release)\n run: |\n pixi run -e wheel-test rerun-build-web-release\n\n - name: Build examples\n run: |\n pixi run -e wheel-test build-examples rrd \\n --channel "release" \\n web_viewer/examples\n\n - name: Build & run snippets\n run: |\n pixi run -e wheel-test build-examples snippets \\n web_viewer/examples/snippets\n\n - name: Build examples manifest\n run: |\n pixi run -e wheel-test build-examples manifest \\n --base-url "https://app.rerun.io/version/${{inputs.release-version}}" \\n --channel "release" \\n web_viewer/examples_manifest.json\n\n - name: Upload app.rerun.io (versioned)\n uses: google-github-actions/upload-cloud-storage@v2\n with:\n path: "web_viewer"\n destination: "rerun-web-viewer/version/${{ inputs.release-version }}"\n parent: false\n process_gcloudignore: false\n\n - name: Upload app.rerun.io (commit)\n uses: google-github-actions/upload-cloud-storage@v2\n with:\n path: "web_viewer"\n destination: "rerun-web-viewer/commit/${{ needs.get-commit-sha.outputs.short-sha }}"\n parent: false\n process_gcloudignore: false\n\n - name: Publish app.rerun.io\n if: inputs.update-latest\n run: |\n gsutil -m cp -r 'gs://rerun-web-viewer/version/${{ inputs.release-version }}/*' gs://rerun-web-viewer/version/latest\n | dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_publish_web.yml | reusable_publish_web.yml | YAML | 4,318 | 0.95 | 0.007092 | 0.008264 | python-kit | 319 | 2025-05-30T01:48:07.033388 | BSD-3-Clause | false | 750e616de03f16fafadab9a5a89876e9 |
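The `build-web` job above uploads the same `web_viewer` directory to up to three GCS destinations: a versioned path, a per-commit path, and, when `update-latest` is set, `version/latest`. A small sketch of that fan-out (the function name is hypothetical; the bucket layout comes from the workflow):

```python
# Compute the GCS destinations the publish-web job writes web_viewer/ to.
def web_viewer_destinations(version: str, short_sha: str, update_latest: bool) -> list:
    dests = [
        f"rerun-web-viewer/version/{version}",   # served at app.rerun.io/version/<v>
        f"rerun-web-viewer/commit/{short_sha}",  # served at app.rerun.io/commit/<sha>
    ]
    if update_latest:
        # The workflow fills this via `gsutil -m cp` from the versioned path.
        dests.append("rerun-web-viewer/version/latest")
    return dests

print(web_viewer_destinations("0.20.0", "c4b552f", True))
```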
name: Build and publish wheels\n\n# To run this manually:\n# 1. Build each platform using `scripts/ci/build_and_upload_wheels.py`\n# 2. (optional) Generate index using `scripts/ci/generate_prerelease_pip_index.py`\n# 3. Publish to PyPI using `scripts/ci/publish_wheels.py`\n\non:\n workflow_call:\n inputs:\n concurrency:\n type: string\n required: true\n release-version:\n description: "Release Version Number (Must match Cargo.toml)"\n type: string\n required: true\n release-commit:\n description: "Which commit to build+publish"\n type: string\n required: true\n\ndefaults:\n run:\n shell: bash\n\npermissions:\n contents: "read"\n id-token: "write"\n\njobs:\n get-commit-sha:\n name: Get Commit Sha\n runs-on: ubuntu-latest\n outputs:\n short-sha: ${{ steps.get-short-sha.outputs.short-sha }}\n full-sha: ${{ steps.get-full-sha.outputs.full-sha }}\n steps:\n - name: "Set short-sha"\n id: get-short-sha\n run: echo "short-sha=$(echo ${{ inputs.release-commit }} | cut -c1-7)" >> $GITHUB_OUTPUT\n\n - name: "Set full-sha"\n id: get-full-sha\n run: echo "full-sha=${{ inputs.release-commit }}" >> $GITHUB_OUTPUT\n\n ## Build\n\n # Note: this also builds `rerun_notebook`\n build-linux-x64:\n name: "Linux-x64: Build Wheels"\n needs: [get-commit-sha]\n uses: ./.github/workflows/reusable_build_and_upload_wheels.yml\n with:\n CONCURRENCY: wheels-build-linux-x64-${{ inputs.concurrency }}\n PLATFORM: linux-x64\n WHEEL_ARTIFACT_NAME: linux-x64-wheel\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n MODE: "pypi"\n secrets: inherit\n\n build-linux-arm64:\n name: "Linux-arm64: Build Wheels"\n needs: [get-commit-sha]\n uses: ./.github/workflows/reusable_build_and_upload_wheels.yml\n with:\n CONCURRENCY: wheels-build-linux-arm64-${{ inputs.concurrency }}\n PLATFORM: linux-arm64\n WHEEL_ARTIFACT_NAME: linux-arm64-wheel\n RELEASE_COMMIT: ${{ inputs.release-commit }}\n MODE: "pypi"\n secrets: inherit\n\n build-windows-x64:\n name: "Windows-x64: Build Wheels"\n needs: [get-commit-sha]\n uses: 
./.github/workflows/reusable_build_and_upload_wheels.yml
    with:
      CONCURRENCY: wheels-build-windows-x64-${{ inputs.concurrency }}
      PLATFORM: windows-x64
      WHEEL_ARTIFACT_NAME: windows-x64-wheel
      RELEASE_COMMIT: ${{ inputs.release-commit }}
      MODE: "pypi"
    secrets: inherit

  build-macos-arm64:
    name: "Macos-arm64: Build Wheels"
    needs: [get-commit-sha]
    uses: ./.github/workflows/reusable_build_and_upload_wheels.yml
    with:
      CONCURRENCY: wheels-build-macos-arm64-${{ inputs.concurrency }}
      PLATFORM: macos-arm64
      WHEEL_ARTIFACT_NAME: macos-arm64-wheel
      RELEASE_COMMIT: ${{ inputs.release-commit }}
      MODE: "pypi"
    secrets: inherit

  build-macos-x64:
    name: "Macos-x64: Build Wheels"
    needs: [get-commit-sha]
    uses: ./.github/workflows/reusable_build_and_upload_wheels.yml
    with:
      CONCURRENCY: wheels-build-macos-x64-${{ inputs.concurrency }}
      PLATFORM: macos-x64
      WHEEL_ARTIFACT_NAME: "macos-x64-wheel"
      RELEASE_COMMIT: ${{ inputs.release-commit }}
      MODE: "pypi"
    secrets: inherit

  ## Test

  test-windows-x64:
    name: "Windows-x64: Test Wheels"
    needs: [build-windows-x64]
    uses: ./.github/workflows/reusable_test_wheels.yml
    with:
      CONCURRENCY: wheels-test-windows-x64-${{ inputs.concurrency }}
      PLATFORM: windows-x64
      WHEEL_ARTIFACT_NAME: windows-x64-wheel
      RELEASE_COMMIT: ${{ inputs.release-commit }}
    secrets: inherit

  test-linux:
    name: "Linux-x64: Test Wheels"
    needs: [build-linux-x64]
    uses: ./.github/workflows/reusable_test_wheels.yml
    with:
      CONCURRENCY: wheels-test-linux-x64-${{ inputs.concurrency }}
      PLATFORM: linux-x64
      WHEEL_ARTIFACT_NAME: linux-x64-wheel
      RELEASE_COMMIT: ${{ inputs.release-commit }}
    secrets: inherit

  generate-wheel-index:
    name: "Generate Pip Index"
    needs:
      [
        build-linux-x64,
        build-linux-arm64,
        build-windows-x64,
        build-macos-arm64,
        build-macos-x64,
      ]
    uses: ./.github/workflows/reusable_pip_index.yml
    with:
      CONCURRENCY: generate-wheel-index-${{ inputs.concurrency }}
      COMMIT: ${{ inputs.release-commit }}
    secrets: inherit

  publish-wheels:
    name: "Publish Wheels"
    needs: [get-commit-sha, generate-wheel-index]
    runs-on: ubuntu-latest-16-cores
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Don't do a shallow clone since we need it for finding the full commit hash
          ref: ${{ inputs.release-commit }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - id: "auth"
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - name: "Set up Cloud SDK"
        uses: "google-github-actions/setup-gcloud@v2"
        with:
          version: ">= 363.0.0"

      - name: Publish to PyPI
        run: |
          pixi run python scripts/ci/publish_wheels.py \
            --version ${{ inputs.release-version }} \
            --dir "commit/${{ needs.get-commit-sha.outputs.short-sha }}/wheels" \
            --repository "${{ vars.PYPI_REPOSITORY }}" \
            --token "${{ secrets.MATURIN_PYPI_TOKEN }}"

# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_publish_wheels.yml
name: Release crates

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      RELEASE_COMMIT:
        required: false
        type: string

concurrency:
  group: ${{ inputs.CONCURRENCY }}-release-crates
  cancel-in-progress: true

defaults:
  run:
    shell: bash

permissions:
  contents: "read"
  id-token: "write"

jobs:
  publish-crates:
    name: "Publish Crates"
    runs-on: ubuntu-latest-16-cores
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ inputs.RELEASE_COMMIT || (github.event_name == 'pull_request' && github.event.pull_request.head.ref || '') }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - name: Build web-viewer (release)
        run: pixi run rerun-build-web-release

      - name: Publish
        run: pixi run python scripts/ci/crates.py publish --token ${{ secrets.CRATES_IO_TOKEN }}

# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_release_crates.yml
# TODO(#9304): make the notebook export work
name: Reusable Build and Upload Notebook

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      WHEEL_ARTIFACT_NAME:
        required: false
        type: string
        default: ""

concurrency:
  group: ${{ inputs.CONCURRENCY }}-run-notebook
  cancel-in-progress: true

defaults:
  run:
    shell: bash

permissions:
  contents: "read"
  id-token: "write"

jobs:
  run-notebook:
    name: Run notebook
    runs-on: ubuntu-latest-16-cores # Note that as of writing we need the additional storage (the 14 GB of the ubuntu-latest runner is not enough).
    container:
      image: rerunio/ci_docker:0.15.0 # Required to run the wheel, or we get "No matching distribution found for attrs>=23.1.0" during `pip install rerun-sdk`
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4
          environments: wheel-test

      - name: Download Wheel
        uses: actions/download-artifact@v4
        with:
          name: ${{ inputs.WHEEL_ARTIFACT_NAME }}
          path: wheel

      - name: Download Notebook Wheel
        uses: actions/download-artifact@v4
        with:
          name: rerun_notebook_wheel
          path: wheel

      - name: Get version
        id: get-version
        run: |
          pixi run -e wheel-test 'echo "wheel_version=$(python scripts/ci/crates.py get-version)"' >> "$GITHUB_OUTPUT"

      - name: Install built wheel
        run: |
          pixi run python scripts/ci/pixi_install_wheel.py --feature python-pypi --package rerun-sdk --dir wheel
          pixi run python scripts/ci/pixi_install_wheel.py --feature python-pypi --package rerun-notebook --dir wheel --platform-independent

      - name: Install Deps
        run: pixi run -e wheel-test pip install -r examples/python/notebook/requirements.txt

      - name: Create notebook
        run: pixi run -e wheel-test jupyter nbconvert --to=html --ExecutePreprocessor.enabled=True examples/python/notebook/cube.ipynb --output /tmp/cube.html

      - id: "auth"
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - name: Get sha
        id: get-sha
        run: |
          full_commit="${{ (github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha }}"
          echo "sha=$(echo $full_commit | cut -c1-7)" >> "$GITHUB_OUTPUT"

      - name: "Upload Notebook"
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "/tmp/cube.html"
          destination: "rerun-builds/commit/${{ steps.get-sha.outputs.sha }}/notebooks"
          parent: false
          process_gcloudignore: false

# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_run_notebook.yml
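Several of these workflows derive a 7-character short SHA from the full commit hash with `echo $full_commit | cut -c1-7`. A minimal Python equivalent for local tooling (`short_sha` is a hypothetical helper name, not something from the repo's scripts):

```python
def short_sha(full_commit: str, length: int = 7) -> str:
    """Mirror the workflows' `cut -c1-7` step: keep the first 7 characters."""
    return full_commit[:length]


print(short_sha("4c3e1a9f0b2d7e6c5a8f1d2b3c4e5f6a7b8c9d0e"))  # -> 4c3e1a9
```

Like `cut -c1-7`, slicing never fails on short input; it just returns the whole string.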
name: Sync assets with release

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      RELEASE_VERSION:
        required: true
        type: string
        default: ""
      WAIT_TIME_SECS:
        required: false
        type: number
        default: 0

concurrency:
  group: ${{ inputs.CONCURRENCY }}-sync-assets
  cancel-in-progress: true

defaults:
  run:
    shell: bash

permissions:
  contents: "write"
  id-token: "write"

jobs:
  sync-assets:
    name: Upload assets from build.rerun.io
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - id: "auth"
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - name: "Set up Cloud SDK"
        uses: "google-github-actions/setup-gcloud@v2"
        with:
          version: ">= 363.0.0"

      - name: Sync release assets & build.rerun.io
        run: |
          pixi run python ./scripts/ci/sync_release_assets.py \
            --github-release ${{ inputs.RELEASE_VERSION }} \
            --github-token ${{ secrets.GITHUB_TOKEN }} \
            --wait ${{ inputs.WAIT_TIME_SECS }} \
            --remove --update

# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_sync_release_assets.yml
name: "Track Size"

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      PR_NUMBER:
        required: false
        type: number
      WITH_EXAMPLES:
        required: true
        type: boolean

defaults:
  run:
    shell: bash

permissions:
  contents: write
  id-token: write
  deployments: write
  pull-requests: write

jobs:
  track-sizes:
    name: "Track Sizes"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # we need full history
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}

      - name: Get context
        id: context
        run: |
          echo "short_sha=$(echo ${{ github.sha }} | cut -c1-7)" >> "$GITHUB_OUTPUT"

      - id: "auth"
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - name: "Set up Cloud SDK"
        uses: "google-github-actions/setup-gcloud@v2"
        with:
          version: ">= 363.0.0"

      - name: Download web_viewer
        uses: actions/download-artifact@v4
        with:
          name: web_viewer
          path: web_viewer

      - name: Download examples
        if: ${{ inputs.WITH_EXAMPLES }}
        uses: actions/download-artifact@v4
        with:
          name: example_data
          path: example_data

      - name: Download base results
        run: |
          # Get base commit:
          # 1. From the index file
          # 2. From the latest commit in the base branch of the PR
          # 3. From the latest commit in the current branch
          index_path="gs://rerun-builds/sizes/index"
          if [ "$(gsutil -q stat $index_path ; echo $?)" = 0 ]; then
            gsutil cp $index_path "/tmp/base_index"
            base_commit=$(cat /tmp/base_index)
          else
            if [ -n "${{ inputs.PR_NUMBER }}" ]; then
              base_commit=$(echo ${{ github.event.pull_request.base.sha }} | cut -c1-7)
            else
              base_commit=${{ steps.context.outputs.short_sha }}
            fi
          fi
          echo "base commit: $base_commit"

          # Download data for base commit, or default to empty file
          data_path="gs://rerun-builds/sizes/commit/$base_commit/data.json"
          if [ "$(gsutil -q stat $data_path ; echo $?)" = 0 ]; then
            gsutil cp $data_path "/tmp/prev.json"
          else
            echo "[]" > "/tmp/prev.json"
          fi

      - name: Measure sizes
        id: measure
        run: |
          entries=()

          entries+=("Wasm:web_viewer/re_viewer_bg.wasm:MiB")
          entries+=("JS:web_viewer/re_viewer.js:kiB")

          if [ ${{ inputs.WITH_EXAMPLES }} = "true" ]; then
            for file in example_data/*.rrd; do
              name=$(basename "$file")
              entries+=("$name:$file:MiB")
            done
          fi

          python3 scripts/ci/count_bytes.py "${entries[@]}" > /tmp/sizes.json

          python3 scripts/ci/count_dependencies.py -p re_sdk --no-default-features > /tmp/deps1.json
          python3 scripts/ci/count_dependencies.py -p re_viewer --all-features > /tmp/deps2.json
          python3 scripts/ci/count_dependencies.py -p rerun --all-features > /tmp/deps3.json

          # Merge the results, putting dependencies first (on top):
          jq -s '.[0] + .[1] + .[2] + .[3]' /tmp/deps1.json /tmp/deps2.json /tmp/deps3.json /tmp/sizes.json > /tmp/data.json

          comparison=$(
            python3 scripts/ci/compare.py \
              --threshold=2% \
              --before-header=${{ (inputs.PR_NUMBER && github.event.pull_request.base.ref) || 'Before' }} \
              --after-header=${{ github.ref_name }} \
              "/tmp/prev.json" "/tmp/data.json"
          )
          {
            echo 'comparison<<EOF'
            echo "$comparison"
            echo EOF
          } >> "$GITHUB_OUTPUT"

          if [ -n "$comparison" ]; then
            is_comparison_set=true
          else
            is_comparison_set=false
          fi
          echo "is_comparison_set=$is_comparison_set" >> "$GITHUB_OUTPUT"

          echo "${entries[@]}"
          echo "previous: $(cat /tmp/prev.json)"
          echo "current: $(cat /tmp/data.json)"
          echo "$comparison"
          echo "is comparison set: $is_comparison_set"

      - name: Upload data to GCS (commit)
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: /tmp/data.json
          destination: "rerun-builds/sizes/commit/${{ steps.context.outputs.short_sha }}"
          process_gcloudignore: false

      - name: Create index file
        if: github.ref == 'refs/heads/main'
        run: |
          echo "${{ steps.context.outputs.short_sha }}" > "/tmp/index"

      - name: Upload index
        if: github.ref == 'refs/heads/main'
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: /tmp/index
          destination: "rerun-builds/sizes"
          process_gcloudignore: false

      - name: Create PR comment
        if: inputs.PR_NUMBER != '' && steps.measure.outputs.is_comparison_set == 'true'
        # https://github.com/mshick/add-pr-comment
        uses: mshick/add-pr-comment@v2.8.2
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          message: |
            # Size changes

            ${{ steps.measure.outputs.comparison }}

      - name: Alert on regression
        if: github.ref == 'refs/heads/main'
        # https://github.com/benchmark-action/github-action-benchmark
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Sizes
          tool: customSmallerIsBetter
          output-file-path: /tmp/data.json
          github-token: ${{ secrets.GITHUB_TOKEN }}

          # Show alert with commit on detecting possible size regression
          comment-on-alert: true
          alert-threshold: "110%"
          fail-on-alert: false
          comment-always: false # Generates too much GitHub notification spam

          # Don't push to gh-pages
          save-data-file: false
          auto-push: false

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.41.4

      - name: Render benchmark result
        if: github.ref == 'refs/heads/main'
        run: |
          pixi run python scripts/ci/render_bench.py sizes \
            --after $(date -d"180 days ago" +%Y-%m-%d) \
            --output "gs://rerun-builds/graphs"

# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_track_size.yml
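The `Measure sizes` step hands two JSON size lists to `scripts/ci/compare.py` with `--threshold=2%` and only posts a comment when something changed. A hedged sketch of that threshold comparison (the entry schema `{"name", "value"}` and the function `compare_sizes` are assumptions for illustration, not the real script's interface):

```python
def compare_sizes(before, after, threshold_pct=2.0):
    """Return (name, old, new, pct_change) rows whose relative change
    meets or exceeds the threshold. `before`/`after` are lists of
    {"name": str, "value": float} dicts -- an assumed schema."""
    old = {entry["name"]: entry["value"] for entry in before}
    rows = []
    for entry in after:
        name, new_value = entry["name"], entry["value"]
        if name not in old:
            continue  # newly-added entries have no baseline to compare against
        old_value = old[name]
        pct = (new_value - old_value) / old_value * 100.0 if old_value else 0.0
        if abs(pct) >= threshold_pct:
            rows.append((name, old_value, new_value, pct))
    return rows


prev = [{"name": "Wasm", "value": 10.0}, {"name": "JS", "value": 200.0}]
curr = [{"name": "Wasm", "value": 10.5}, {"name": "JS", "value": 201.0}]
print(compare_sizes(prev, curr))  # only Wasm moved by >= 2%
```

An empty result would correspond to the workflow's `is_comparison_set=false` branch: no rows, no PR comment.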
name: Reusable Upload Web

on:
  workflow_call:
    inputs:
      CONCURRENCY:
        required: true
        type: string
      ADHOC_NAME:
        type: string
        required: false
        default: ""
      MARK_TAGGED_VERSION:
        required: false
        type: boolean
        default: false
      RELEASE_VERSION:
        required: false
        type: string
        default: "prerelease"
      PR_NUMBER:
        required: false
        type: string
        default: ""
      NIGHTLY:
        required: false
        type: boolean
        default: false

concurrency:
  group: ${{ inputs.CONCURRENCY }}-upload-web
  cancel-in-progress: true

defaults:
  run:
    shell: bash

permissions:
  contents: "write"
  id-token: "write"
  pull-requests: "write"

jobs:
  upload-web:
    name: Upload web build to google cloud (wasm32 + wasm-bindgen)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.ref || '' }}

      - name: Download Web Viewer
        uses: actions/download-artifact@v4
        with:
          name: web_viewer
          path: web_viewer

      # Upload the wasm, html etc to a Google cloud bucket:
      - id: "auth"
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.GOOGLE_WORKLOAD_IDENTITY_PROVIDER }}
          service_account: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}

      - name: Get sha
        id: get-sha
        run: |
          full_commit="${{ (github.event_name == 'pull_request' && github.event.pull_request.head.sha) || github.sha }}"
          echo "sha=$(echo $full_commit | cut -c1-7)" >> "$GITHUB_OUTPUT"

      - name: "Upload web-viewer (commit)"
        if: ${{ !inputs.NIGHTLY }}
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/commit/${{ steps.get-sha.outputs.sha }}"
          parent: false
          process_gcloudignore: false

      - name: "Upload web-viewer (tagged)"
        if: inputs.MARK_TAGGED_VERSION
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/version/${{ inputs.RELEASE_VERSION }}"
          parent: false
          process_gcloudignore: false

      - name: "Upload web-viewer (adhoc)"
        if: ${{ inputs.ADHOC_NAME != '' }}
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/adhoc/${{ inputs.ADHOC_NAME }}"
          parent: false
          process_gcloudignore: false

      - name: "Upload web-viewer (prerelease)"
        if: github.ref == 'refs/heads/main'
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/prerelease"
          parent: false
          process_gcloudignore: false
          headers: |-
            cache-control: no-cache, max-age=0

      - name: "Upload web-viewer (main)"
        if: github.ref == 'refs/heads/main'
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/version/main"
          parent: false
          process_gcloudignore: false
          headers: |-
            cache-control: no-cache, max-age=0

      - name: "Upload web-viewer (pr)"
        if: ${{ inputs.PR_NUMBER != '' }}
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/pr/${{ inputs.PR_NUMBER }}"
          parent: false
          process_gcloudignore: false
          headers: |-
            cache-control: no-cache, max-age=0

      - name: "Upload web-viewer (nightly)"
        if: ${{ inputs.NIGHTLY }}
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: "web_viewer"
          destination: "rerun-web-viewer/version/nightly"
          parent: false
          process_gcloudignore: false
          headers: |-
            cache-control: no-cache, max-age=0

      - name: Status comment
        if: success() && github.event_name == 'pull_request'
        # https://github.com/mshick/add-pr-comment
        uses: mshick/add-pr-comment@v2.8.2
        with:
          message-id: "web-viewer-build-status"
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          message: |
            Web viewer built successfully. If applicable, you should also test it:

            - [ ] I have tested the web viewer

            | Result | Commit | Link | Manifest |
            | ------ | ------ | ---- | -------- |
            | ✅ | ${{ steps.get-sha.outputs.sha }} | https://rerun.io/viewer/pr/${{ github.event.pull_request.number }} | [`+nightly`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json) [`+main`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/main/examples_manifest.json) |

            <sup>Note: This comment is updated whenever you push a commit.</sup>

      - name: Status comment
        if: failure() && github.event_name == 'pull_request'
        # https://github.com/mshick/add-pr-comment
        uses: mshick/add-pr-comment@v2.8.2
        with:
          message-id: "web-viewer-build-status"
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          message: |
            Web viewer failed to build.

            | Result | Commit | Link | Manifest |
            | ------ | ------ | ---- | -------- |
            | ❌ | ${{ steps.get-sha.outputs.sha }} | https://rerun.io/viewer/pr/${{ github.event.pull_request.number }} | [`+nightly`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json) [`+main`](https://rerun.io/viewer/pr/${{ github.event.pull_request.number }}?manifest_url=https://app.rerun.io/version/main/examples_manifest.json) |

            <sup>Note: This comment is updated whenever you push a commit.</sup>

# path: dataset_sample\yaml\rerun-io_rerun\.github\workflows\reusable_upload_web.yml
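The conditional upload steps in this workflow fan one `web_viewer` directory out to several GCS prefixes depending on the inputs. That routing can be sketched as a pure function (the function itself and its parameter names are hypothetical; the destination strings are taken from the workflow):

```python
def upload_destinations(sha, nightly=False, tagged_version=None,
                        adhoc_name="", pr_number="", on_main=False):
    """Mirror the workflow's `if:` conditions: return the GCS prefixes
    a web_viewer build would be copied to for the given inputs."""
    dests = []
    if not nightly:  # "Upload web-viewer (commit)" is skipped for nightly builds
        dests.append(f"rerun-web-viewer/commit/{sha}")
    if tagged_version:  # MARK_TAGGED_VERSION + RELEASE_VERSION
        dests.append(f"rerun-web-viewer/version/{tagged_version}")
    if adhoc_name:
        dests.append(f"rerun-web-viewer/adhoc/{adhoc_name}")
    if on_main:  # github.ref == 'refs/heads/main'
        dests.append("rerun-web-viewer/prerelease")
        dests.append("rerun-web-viewer/version/main")
    if pr_number:
        dests.append(f"rerun-web-viewer/pr/{pr_number}")
    if nightly:
        dests.append("rerun-web-viewer/version/nightly")
    return dests


print(upload_destinations("abc1234", pr_number="1234"))
```

Note the conditions are not mutually exclusive: a push to `main` uploads to the commit, prerelease, and `version/main` prefixes in one run.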
# mkdocs.yml
# Top-level config for mkdocs
# See: https://www.mkdocs.org/user-guide/configuration/
site_name: Rerun Python APIs
site_url: https://ref.rerun.io/docs/python/
repo_url: https://github.com/rerun-io/rerun/

# Use the material theme
# Override some options for nav: https://squidfunk.github.io/mkdocs-material/setup/setting-up-navigation/
theme:
  name: "material"
  features:
    - navigation.indexes
    - navigation.instant
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.tracking

plugins:
  - search # https://squidfunk.github.io/mkdocs-material/setup/setting-up-site-search/
  - mkdocstrings: # https://mkdocstrings.github.io/usage/#global-options
      custom_templates: docs/templates # Override the function template.
      handlers:
        python:
          paths: ["rerun_sdk", "."] # Lookup python modules relative to this path
          import: # Cross-references for python and numpy
            - https://arrow.apache.org/docs/objects.inv
            - https://docs.python.org/3/objects.inv
            - https://numpy.org/doc/stable/objects.inv
            - https://ipython.readthedocs.io/en/stable/objects.inv
          options: # https://mkdocstrings.github.io/python/usage/#globallocal-options
            docstring_section_style: spacy # list spacy table
            docstring_style: numpy
            heading_level: 3
            filters: [
                "!__attrs_clear__", # For internal use
                "!^_[^_]", # Hide things starting with a single underscore
                "!as_component_batches", # Inherited from AsComponents
                "!indicator", # Inherited from Archetype
                "!num_instances", # Inherited from AsComponents
              ]
            inherited_members: true
            members_order: source # The order of class members
            merge_init_into_class: false # Not compatible with `inherited_members`
            show_if_no_docstring: false # We intentionally hide archetype fields
            show_source: no
            load_external_modules: true
            preload_modules:
              - rerun_bindings
            annotations_path: brief
            signature_crossrefs: true

  - gen-files: # https://oprypin.github.io/mkdocs-gen-files
      scripts:
        - docs/gen_common_index.py
  - literate-nav: # https://oprypin.github.io/mkdocs-literate-nav
      nav_file: SUMMARY.txt
  - redirects: # https://github.com/mkdocs/mkdocs-redirects
      redirect_maps:
        "index.md": "common/index.md"

# https://www.mkdocs.org/user-guide/configuration/#markdown_extensions
# https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/
markdown_extensions:
  - admonition # https://squidfunk.github.io/mkdocs-material/reference/admonitions/
  - pymdownx.highlight # https://mkdocstrings.github.io/theming/#syntax-highlighting
  - pymdownx.superfences
  - toc:
      toc_depth: 4

# Some extra styling
extra_css:
  - css/mkdocstrings.css

# https://squidfunk.github.io/mkdocs-material/setup/setting-up-versioning/
extra:
  version:
    provider: mike
    default: latest

# path: dataset_sample\yaml\rerun-io_rerun\rerun_py\mkdocs.yml
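The config above sets `docstring_style: numpy`, so mkdocstrings parses numpy-style sections (`Parameters`, `Returns`, ...) out of the Python API's docstrings. A minimal, hypothetical function showing the docstring shape that this setting expects:

```python
def clamp(value: float, low: float, high: float) -> float:
    """Clamp `value` into the inclusive range [low, high].

    Parameters
    ----------
    value : float
        The value to clamp.
    low : float
        Lower bound of the range.
    high : float
        Upper bound of the range.

    Returns
    -------
    float
        `value` limited to [low, high].
    """
    return max(low, min(high, value))


print(clamp(5.0, 0.0, 1.0))  # -> 1.0
```

With `docstring_section_style: spacy`, the `Parameters` section above is rendered as a spaCy-style table on the generated page.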
# This is the configuration for golangci-lint for the restic project.
#
# A sample config with all settings is here:
# https://github.com/golangci/golangci-lint/blob/master/.golangci.example.yml

linters:
  # only enable the linters listed below
  disable-all: true
  enable:
    # make sure all errors returned by functions are handled
    - errcheck

    # show how code can be simplified
    - gosimple

    # make sure code is formatted
    - gofmt

    # examine code and report suspicious constructs, such as Printf calls whose
    # arguments do not align with the format string
    - govet

    # make sure names and comments are used according to the conventions
    - revive

    # detect when assignments to existing variables are not used
    - ineffassign

    # run static analysis and find errors
    - staticcheck

    # find unused variables, functions, structs, types, etc.
    - unused

    # parse and typecheck code
    - typecheck

    # ensure that http response bodies are closed
    - bodyclose

    - importas

issues:
  # don't use the default exclude rules, this hides (among others) ignored
  # errors from Close() calls
  exclude-use-default: false

  # list of things to not warn about
  exclude:
    # revive: do not warn about missing comments for exported stuff
    - exported (function|method|var|type|const) .* should have comment or be unexported
    # revive: ignore constants in all caps
    - don't use ALL_CAPS in Go names; use CamelCase
    # revive: lots of packages don't have such a comment
    - "package-comments: should have a package comment"
    # staticcheck: there's no easy way to replace these packages
    - "SA1019: \"golang.org/x/crypto/poly1305\" is deprecated"
    - "SA1019: \"golang.org/x/crypto/openpgp\" is deprecated"
    - "redefines-builtin-id:"

  exclude-rules:
    # revive: ignore unused parameters in tests
    - path: (_test\.go|testing\.go|backend/.*/tests\.go)
      text: "unused-parameter:"

linters-settings:
  importas:
    alias:
      - pkg: github.com/restic/restic/internal/test
        alias: rtest

# path: dataset_sample\yaml\restic_restic\.golangci.yml
trigger:
  branches:
    include:
    - master
    - release
    - refs/tags/*
pr:
- master

pool:
  vmImage: 'windows-2019'

variables:
  BuildConfiguration: Release
  Projects: 'src/NSwag.sln'

steps:
- task: CmdLine@2
  displayName: 'Allow long file path'
  inputs:
    script: 'git config --system core.longpaths true'
- checkout: self
# Install required SDKs and tools
- task: UseDotNet@2
  displayName: 'Install .NET 6 SDK'
  inputs:
    packageType: 'sdk'
    version: '6.0.0'
    includePreviewVersions: true
    performMultiLevelLookup: true
    useGlobalJson: true

- task: UseDotNet@2
  displayName: 'Install .NET 7 SDK'
  inputs:
    packageType: 'sdk'
    version: '7.0.x'
    includePreviewVersions: true
    performMultiLevelLookup: true
    useGlobalJson: true

- task: UseDotNet@2
  displayName: 'Install .NET 8 SDK'
  inputs:
    packageType: 'sdk'
    version: '8.0.100'
    includePreviewVersions: true
    performMultiLevelLookup: true
    useGlobalJson: true

- task: CmdLine@2
  displayName: 'Install DNT'
  inputs:
    script: 'npm i -g dotnettools'

- task: CmdLine@2
  displayName: 'Install WiX Toolset'
  inputs:
    script: 'choco install wixtoolset'

- task: CmdLine@2
  displayName: 'Patch project version (preview)'
  condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/release'))
  inputs:
    script: 'dnt bump-versions preview "$(Build.BuildNumber)"'
    failOnStderr: true

- task: DotNetCoreCLI@2
  displayName: 'Restore packages'
  inputs:
    command: 'restore'
    projects: '$(Projects)'
    includeNuGetOrg: true

# Build and test
- task: MSBuild@1
  displayName: 'Build solution'
  inputs:
    solution: '$(Projects)'
    msbuildArchitecture: 'x86'
    configuration: '$(BuildConfiguration)'

- task: VSTest@2
  displayName: 'Run tests'
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\*Integration*.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'
    configuration: '$(BuildConfiguration)'

# Publish artifacts
- task: CopyFiles@2
  displayName: 'Copy packages'
#  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  inputs:
    Contents: '**/*.nupkg'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
    flattenFolders: true

- task: CopyFiles@2
  displayName: 'Copy MSI'
#  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  inputs:
    Contents: '**/*.msi'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
    flattenFolders: true

- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts'
#  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'

# path: dataset_sample\yaml\RicoSuter_NSwag\azure-pipelines.yml
# codecov config
# Reference: https://docs.codecov.com/docs/codecovyml-reference
# Tip: you may run the following command to validate before committing any changes:
# curl --data-binary @codecov.yml https://codecov.io/validate
coverage:
  status:
    patch: off # disable patch status
    project:
      default: false # disable the default status that measures the entire project
      rust:
        only_pulls: true # no status will be posted for commits not on a pull request
        paths:
          - "src/"
        target: auto # compared with the coverage from the base commit
        threshold: 0.1% # allow the coverage to drop by 0.1% while still posting a success status
codecov:
  allow_coverage_offsets: true
ignore:
  - "src/risedevtool"
  - "src/bench/s3_bench"
  # Remove this after #4754
  - "src/storage/src/hummock/store"
comment: false

# path: dataset_sample\yaml\risingwavelabs_risingwave\codecov.yml
# The schema for RiseDev configuration files is defined under `src/risedevtool/schemas`.
#
# You can add the following section to `.vscode/settings.json` to get hover support in VS Code:
#
# ```
# "yaml.schemas": {
#   "src/risedevtool/schemas/risedev.json": "risedev.yml",
#   "src/risedevtool/schemas/risedev-profiles.user.json": "risedev-profiles.user.yml"
# }
# ```

profile:
  #################################################
  ### Configuration profiles used by developers ###
  #################################################

  # The default configuration will start 1 compute node, 1 meta node and 1 frontend.
  default:
    # # Specify a configuration file to override the default settings
    # config-path: src/config/example.toml
    # # Specify custom environment variables
    # env:
    #   RUST_LOG: "info,risingwave_storage::hummock=off"
    #   ENABLE_PRETTY_LOG: "true"
    steps:
      # If you want to use the local s3 storage, enable the following line
      # - use: minio

      # If you want to use aws-s3, configure AK and SK in env var and enable the following lines:
      # - use: aws-s3
      #   bucket: test-bucket

      # By default, the meta-backend is sqlite.
      # To enable postgres backend, uncomment the following lines and set the meta-backend to postgres in 'meta-node'
      # - use: postgres
      #   port: 8432
      #   user: postgres
      #   database: metadata

      # If you want to enable metrics or tracing, uncomment the following lines.
      # - use: prometheus # metrics
      # - use: tempo # tracing
      # - use: grafana # visualization

      - use: meta-node
        # meta-backend: postgres
      - use: compute-node
      - use: frontend

      # If you want to enable compactor, uncomment the following line, and enable either minio or aws-s3 as well.
      # - use: compactor

      # If you want to create source from Kafka, uncomment the following lines
      # - use: kafka
      #   persist-data: true

      # To enable Confluent schema registry, uncomment the following line
      # - use: schema-registry

  default-v6:
    steps:
      - use: meta-node
        address: "[::1]"
        listen-address: "[::]"
      - use: compute-node
        address: "[::1]"
        listen-address: "[::]"
      - use: frontend
        address: "[::1]"
        listen-address: "[::]"

  # The minimum config to use with risectl.
  for-ctl:
    steps:
      - use: minio
      - use: meta-node
      - use: compute-node
      - use: frontend
      - use: compactor

  # `dev-compute-node` has the same settings as default except the compute node will be started by the user.
  dev-compute-node:
    steps:
      - use: meta-node
      - use: compute-node
        user-managed: true
      - use: frontend

  dev-frontend:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
        user-managed: true

  dev-meta:
    steps:
      - use: meta-node
        user-managed: true
      - use: compute-node
      - use: frontend

  # You can use this in combination with the virtual compactor
  # provided in https://github.com/risingwavelabs/risingwave-extensions
  dev-compactor:
    steps:
      - use: minio
      - use: meta-node
      - use: compute-node
      - use: frontend
      - use: compactor
        user-managed: true

  full:
    steps:
      - use: minio
      - use: postgres
        port: 8432
        user: postgres
        password: postgres
        database: metadata
      - use: meta-node
        meta-backend: postgres
      - use: compute-node
      - use: frontend
      - use: compactor
      - use: prometheus
      - use: grafana
      - use: kafka
        persist-data: true

  standalone-full-peripherals:
    steps:
      - use: minio
      - use: postgres
        port: 8432
        user: postgres
        database: metadata
      - use: meta-node
        user-managed: true
        meta-backend: postgres
      - use: compute-node
        user-managed: true
      - use: frontend
        user-managed: true
      - use: compactor
        user-managed: true
      - use: prometheus
      - use: grafana
      - use: kafka
        persist-data: true

  standalone-minio-sqlite:
    steps:
      - use: minio
      - use: sqlite
      - use: meta-node
        user-managed: true
        meta-backend: sqlite
      - use: compute-node
        user-managed: true
      - use: frontend
        user-managed: true
      - use: compactor
        user-managed: true

  standalone-minio-sqlite-compactor:
    steps:
      - use: minio
      - use: sqlite
      - use: meta-node
        user-managed: true
        meta-backend: sqlite
      - use: compute-node
        user-managed: true
      - use: frontend
        user-managed: true
      - use: compactor

  hdfs:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      # If you want to use hdfs as storage backend, configure hdfs namenode:
      - use: opendal
        engine: hdfs
        namenode: "127.0.0.1:9000"
      - use: compactor
      # - use: prometheus
      # - use: grafana

  fs:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      - use: opendal
        engine: fs
      - use: compactor
      # - use: prometheus
      # - use: grafana

  webhdfs:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      # If you want to use webhdfs as storage backend, configure hdfs namenode:
      - use: opendal
        engine: webhdfs
        namenode: "127.0.0.1:9870"
      - use: compactor
      # - use: prometheus
      # - use: grafana

  gcs:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      # If you want to use google cloud storage as storage backend, configure bucket name:
      - use: opendal
        engine: gcs
        bucket: bucket-name
      - use: compactor
      # - use: prometheus
      # - use: grafana

  obs:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      # If you want to use obs as storage backend, configure bucket name:
      - use: opendal
        engine: obs
        bucket: bucket-name
      - use: compactor
      # - use: prometheus
      # - use: grafana

  oss:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      # If you want to use oss as storage backend, configure bucket name:
      - use: opendal
        engine: oss
        bucket: bucket-name
      - use: compactor
      # - use: prometheus
      # - use: grafana

  azblob:
    steps:
      - use: meta-node
      - use: compute-node
      - use: frontend
      # If you want to use azblob as storage backend, configure bucket(container) name:
      - use: opendal
        engine: azblob
        bucket: test-bucket
      - use: compactor
      # - use: prometheus
      # - use: grafana

  full-benchmark:
    steps:
      - use: minio
      - use: postgres
      - use: meta-node
        meta-backend: postgres
      - use: compute-node
      - use: frontend
      - use: compactor
      - use: prometheus
        remote-write: true
        remote-write-region: "ap-southeast-1"
        remote-write-url: "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-f3841dad-6a5c-420f-8f62-8f66487f512a/api/v1/remote_write"
      - use: grafana
      - use: kafka
        persist-data: true

  kafka:
    steps:
      - use: kafka

  meta-1cn-1fe-sqlite:
    steps:
      - use: minio
      - use: sqlite
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: sqlite
      - use: compactor
      - use: compute-node
      - use: frontend

  # Start 4 CNs with resource groups rg1, rg2, and default
  multiple-resource-groups:
    steps:
      - use: minio
      - use: meta-node
      - use: compactor
      - use: compute-node
        port: 5687
        exporter-port: 1222
        enable-tiered-cache: true
        resource-group: "rg1"
      - use: compute-node
        port: 5688
        exporter-port: 1223
        resource-group: "rg2"
        enable-tiered-cache: true
      - use: compute-node
        port: 5689
        exporter-port: 1224
        enable-tiered-cache: true
      - use: frontend

  ci-time-travel:
    config-path: src/config/ci-time-travel.toml
    steps:
      - use: minio
      - use: sqlite
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: sqlite
      - use: compactor
      - use: compute-node
      - use: frontend

  ci-iceberg-test:
    steps:
      - use: minio
      - use: mysql
        port: 3306
        address: mysql
        user: root
        password: 123456
        user-managed: true
      - use: postgres
        port: 5432
        address: db
        database: metadata
        user: postgres
        password: postgres
        user-managed: true
        application: metastore
      - use: meta-node
        meta-backend: postgres
      - use: compute-node
      - use: frontend
      - use: compactor

  meta-1cn-1fe-sqlite-with-recovery:
    config-path: src/config/ci-recovery.toml
    steps:
      - use: minio
      - use: sqlite
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: sqlite
      - use: compactor
      - use: compute-node
      - use: frontend

  meta-1cn-1fe-pg-backend:
    steps:
      - use: minio
      - use: postgres
        port: 8432
        user: postgres
        database: metadata
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: postgres
      - use: compactor
      - use: compute-node
      - use: frontend

  meta-1cn-1fe-pg-backend-with-recovery:
    config-path: src/config/ci-recovery.toml
    steps:
      - use: minio
      - use: postgres
        port: 8432
        user: postgres
        database: metadata
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: postgres
      - use: compactor
      - use: compute-node
      - use: frontend

  meta-1cn-1fe-mysql-backend:
    steps:
      - use: minio
      - use: mysql
        port: 4306
        user: root
        database: metadata
        application: metastore
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: mysql
      - use: compactor
      - use: compute-node
      - use: frontend

  meta-1cn-1fe-mysql-backend-with-recovery:
    config-path: src/config/ci-recovery.toml
    steps:
      - use: minio
      - use: mysql
        port: 4306
        user: root
        database: metadata
        application: metastore
      - use: meta-node
        port: 5690
        dashboard-port: 5691
        exporter-port: 1250
        meta-backend: mysql
      - use: compactor
      - use: compute-node
      - use: frontend

  java-binding-demo:
    steps:
      - use: minio
        address: "127.0.0.1"
        port: 9301
        root-user: hummockadmin
        root-password: hummockadmin
        hummock-bucket: hummock001
      - use: meta-node
        address: "127.0.0.1"
        port: 5690
      - use: compute-node
      - use: frontend
      - use: compactor

  ci-gen-cpu-flamegraph:
    steps:
      # NOTE(kwannoel): We do not use aws-s3 here, to avoid
      # contention over the s3 bucket when multiple benchmarks run at once.
      - use: minio
      - use: sqlite
      - use: meta-node
        meta-backend: sqlite
      - use: compute-node
        parallelism: 8
      - use:
frontend\n - use: compactor\n # - use: prometheus\n # - use: grafana\n # Do not use kafka here, we will spawn it separately,\n # so we don't have to re-generate data each time.\n # RW will still be ale to talk to it.\n # - use: kafka\n # port: 9092\n # persist-data: true\n\n ######################################\n ### Configurations used in Compose ###\n ######################################\n\n compose:\n steps:\n - use: minio\n id: minio-0\n address: ${id}\n listen-address: "0.0.0.0"\n console-address: "0.0.0.0"\n\n - use: meta-node\n # Id must starts with `meta-node`, therefore to be picked up by other\n # components.\n id: meta-node-0\n\n # Advertise address can be `id`, so as to use docker's DNS. If running\n # in host network mode, we should use IP directly in this field.\n address: ${id}\n\n listen-address: "0.0.0.0"\n\n - use: compute-node\n id: compute-node-0\n listen-address: "0.0.0.0"\n address: ${id}\n\n - use: frontend\n id: frontend-node-0\n listen-address: "0.0.0.0"\n address: ${id}\n\n - use: compactor\n id: compactor-0\n listen-address: "0.0.0.0"\n address: ${id}\n\n - use: redpanda\n\n - use: prometheus\n id: prometheus-0\n listen-address: "0.0.0.0"\n address: ${id}\n\n - use: grafana\n listen-address: "0.0.0.0"\n address: ${id}\n id: grafana-0\n\n - use: tempo\n listen-address: "0.0.0.0"\n address: ${id}\n id: tempo-0\n\n # special config for deployment, see related PR for more information\n compose-3node-deploy:\n steps:\n # - use: minio\n # id: minio-0\n # address: ${dns-host:rw-source-0}\n # listen-address: "0.0.0.0"\n # console-address: "0.0.0.0"\n\n - use: aws-s3\n bucket: ${terraform:s3-bucket}\n\n - use: meta-node\n # Id must starts with `meta-node`, therefore to be picked up by other\n # components.\n id: meta-node-0\n\n # Advertise address can be `id`, so as to use docker's DNS. 
If running\n # in host network mode, we should use IP directly in this field.\n address: ${dns-host:rw-meta-0}\n listen-address: "0.0.0.0"\n\n - use: compute-node\n id: compute-node-0\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-compute-0}\n async-stack-trace: verbose\n enable-tiered-cache: true\n\n - use: compute-node\n id: compute-node-1\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-compute-1}\n async-stack-trace: verbose\n enable-tiered-cache: true\n\n - use: compute-node\n id: compute-node-2\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-compute-2}\n async-stack-trace: verbose\n enable-tiered-cache: true\n\n - use: frontend\n id: frontend-node-0\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-meta-0}\n\n - use: compactor\n id: compactor-0\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-source-0}\n compaction-worker-threads-number: 15\n\n - use: redpanda\n address: ${dns-host:rw-source-0}\n\n - use: prometheus\n id: prometheus-0\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-meta-0}\n\n - use: grafana\n listen-address: "0.0.0.0"\n address: ${dns-host:rw-meta-0}\n id: grafana-0\n\n #################################\n ### Configurations used on CI ###\n #################################\n\n ci-1cn-1fe:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-1cn-1fe-jdbc-to-native:\n config-path: src/config/ci-jdbc-to-native.toml\n steps:\n - use: minio\n - use: sqlite\n - use: meta-node\n meta-backend: sqlite\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-3cn-1fe:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n 
port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-backfill-3cn-1fe:\n config-path: src/config/ci-longer-streaming-upload-timeout.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-backfill-3cn-1fe-with-monitoring:\n config-path: src/config/ci-longer-streaming-upload-timeout.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n - use: prometheus\n - use: grafana\n\n ci-backfill-3cn-1fe-with-minio-rate-limit:\n config-path: src/config/ci-longer-streaming-upload-timeout.toml\n steps:\n - use: minio\n # Set the rate limit for MinIO to N requests per second\n api-requests-max: 1000\n # Set the deadline for API requests to N seconds\n api-requests-deadline: 20s\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-backfill-3cn-1fe-with-monitoring-and-minio-rate-limit:\n config-path: src/config/ci-longer-streaming-upload-timeout.toml\n steps:\n - use: minio\n api-requests-max: 30\n api-requests-deadline: 2s\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: 
true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n - use: prometheus\n - use: grafana\n\n ci-3cn-3fe:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n port: 4565\n exporter-port: 2222\n health-check-port: 6786\n - use: frontend\n port: 4566\n exporter-port: 2223\n health-check-port: 6787\n - use: frontend\n port: 4567\n exporter-port: 2224\n health-check-port: 6788\n - use: compactor\n\n ci-3cn-3fe-opendal-fs-backend:\n config-path: src/config/ci.toml\n steps:\n - use: meta-node\n meta-backend: env\n - use: opendal\n engine: fs\n bucket: "/tmp/rw_ci"\n - use: compute-node\n port: 5687\n exporter-port: 1222\n - use: compute-node\n port: 5688\n exporter-port: 1223\n - use: compute-node\n port: 5689\n exporter-port: 1224\n - use: frontend\n port: 4565\n exporter-port: 2222\n health-check-port: 6786\n - use: frontend\n port: 4566\n exporter-port: 2223\n health-check-port: 6787\n - use: frontend\n port: 4567\n exporter-port: 2224\n health-check-port: 6788\n - use: compactor\n\n ci-3streaming-2serving-3fe:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n role: streaming\n parallelism: 4\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n role: streaming\n parallelism: 4\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n role: streaming\n parallelism: 4\n - use: compute-node\n port: 5685\n 
exporter-port: 1225\n enable-tiered-cache: true\n role: serving\n parallelism: 4\n - use: compute-node\n port: 5686\n exporter-port: 1226\n enable-tiered-cache: true\n role: serving\n parallelism: 8\n - use: frontend\n port: 4565\n exporter-port: 2222\n health-check-port: 6786\n - use: frontend\n port: 4566\n exporter-port: 2223\n health-check-port: 6787\n - use: frontend\n port: 4567\n exporter-port: 2224\n health-check-port: 6788\n - use: compactor\n\n ci-kafka:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n - use: kafka\n user-managed: true\n address: message_queue\n port: 29092\n - use: schema-registry\n user-managed: true\n address: schemaregistry\n port: 8082\n\n local-inline-source-test:\n config-path: src/config/ci-recovery.toml\n steps:\n - use: minio\n - use: sqlite\n - use: meta-node\n meta-backend: sqlite\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n - use: pubsub\n persist-data: true\n - use: kafka\n persist-data: true\n - use: schema-registry\n - use: mysql\n - use: postgres\n\n ci-inline-source-test:\n config-path: src/config/ci-recovery.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n - use: pubsub\n persist-data: true\n - use: kafka\n user-managed: true\n address: message_queue\n port: 29092\n - use: schema-registry\n user-managed: true\n address: schemaregistry\n port: 8082\n - use: mysql\n port: 3306\n address: mysql\n user: root\n password: 123456\n user-managed: true\n - use: postgres\n port: 5432\n address: db\n user: postgres\n password: postgres\n user-managed: true\n\n ci-redis:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: 
compactor\n - use: redis\n\n ci-compaction-test:\n config-path: src/config/ci-compaction-test.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n total-memory-bytes: 17179869184\n - use: frontend\n - use: compactor\n\n ci-1cn-1fe-with-recovery:\n config-path: src/config/ci-recovery.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-3cn-1fe-with-recovery:\n config-path: src/config/ci-recovery.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n port: 5687\n exporter-port: 1222\n enable-tiered-cache: true\n - use: compute-node\n port: 5688\n exporter-port: 1223\n enable-tiered-cache: true\n - use: compute-node\n port: 5689\n exporter-port: 1224\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n ci-1cn-1fe-user-kafka-with-recovery:\n config-path: src/config/ci-recovery.toml\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n - use: kafka\n user-managed: true\n address: message_queue\n port: 29092\n\n ci-meta-backup-test-sql:\n config-path: src/config/ci-meta-backup-test.toml\n steps:\n - use: sqlite\n - use: minio\n - use: meta-node\n meta-backend: sqlite\n - use: compute-node\n - use: frontend\n - use: compactor\n\n ci-meta-backup-test-restore-sql:\n config-path: src/config/ci-meta-backup-test.toml\n steps:\n - use: sqlite\n - use: minio\n\n ci-sink-test:\n config-path: src/config/ci.toml\n steps:\n - use: minio\n - use: meta-node\n - use: compute-node\n enable-tiered-cache: true\n - use: frontend\n - use: compactor\n\n hummock-trace:\n config-path: src/config/hummock-trace.toml\n steps:\n - use: minio\n - use: meta-node\n - use: compute-node\n - use: frontend\n - use: compactor\n\n ci-backfill:\n config-path: 
"src/config/ci-backfill.toml"\n steps:\n - use: minio\n - use: meta-node\n meta-backend: env\n - use: compute-node\n - use: frontend\n - use: compactor\n\n full-with-batch-query-limit:\n config-path: src/config/full-with-batch-query-limit.toml\n steps:\n - use: minio\n - use: sqlite\n - use: meta-node\n meta-backend: sqlite\n - use: compute-node\n - use: frontend\n - use: compactor\n - use: prometheus\n - use: grafana\n\ncompose:\n risingwave: "ghcr.io/risingwavelabs/risingwave:latest"\n prometheus: "prom/prometheus:latest"\n minio: "quay.io/minio/minio:latest"\n redpanda: "redpandadata/redpanda:latest"\n grafana: "grafana/grafana-oss:latest"\n tempo: "grafana/tempo:latest"\n\n# The `use` field specified in the above `risedev` section will refer to the templates below.\ntemplate:\n minio:\n # Advertise address of MinIO s3 endpoint\n address: "127.0.0.1"\n\n # Advertise port of MinIO s3 endpoint\n port: 9301\n\n # Listen address of MinIO endpoint\n listen-address: ${address}\n\n # Console address of MinIO s3 endpoint\n console-address: "127.0.0.1"\n\n # Console port of MinIO s3 endpoint\n console-port: 9400\n\n # Root username (can be used to login to MinIO console)\n root-user: hummockadmin\n\n # Root password (can be used to login to MinIO console)\n root-password: hummockadmin\n\n # Bucket name to store hummock information\n hummock-bucket: hummock001\n\n # Id of this instance\n id: minio\n\n # Prometheus nodes used by this MinIO\n provide-prometheus: "prometheus*"\n\n # Max concurrent api requests.\n # see: https://github.com/minio/minio/blob/master/docs/throttle/README.md.\n # '0' means this env var will use the default of minio.\n api-requests-max: 0\n\n # Deadline for api requests.\n # Empty string means this env var will use the default of minio.\n api-requests-deadline: ""\n\n sqlite:\n # Id of this instance\n id: sqlite\n\n # File name of the sqlite database\n file: metadata.db\n\n compute-node:\n # Compute-node advertise address\n address: "127.0.0.1"\n\n 
# Listen address\n listen-address: ${address}\n\n # Compute-node listen port\n port: 5688\n\n # Prometheus exporter listen port\n exporter-port: 1222\n\n # Id of this instance\n id: compute-node-${port}\n\n # Whether to enable async stack trace for this compute node, `off`, `on`, or `verbose`.\n # Considering the performance, `verbose` mode only effect under `release` profile with `debug_assertions` off.\n async-stack-trace: verbose\n\n # If `enable-tiered-cache` is true, hummock will use data directory as file cache.\n enable-tiered-cache: false\n\n # Minio instances used by this compute node\n provide-minio: "minio*"\n\n # OpenDAL storage backend used by this compute node\n provide-opendal: "opendal*"\n\n # AWS s3 bucket used by this compute node\n provide-aws-s3: "aws-s3*"\n\n # Meta-nodes used by this compute node\n provide-meta-node: "meta-node*"\n\n # Tempo used by this compute node\n provide-tempo: "tempo*"\n\n # If `user-managed` is true, this service will be started by user with the above config\n user-managed: false\n\n # Total available memory for the compute node in bytes\n total-memory-bytes: 8589934592\n\n # Parallelism of tasks per compute node\n parallelism: 4\n\n role: both\n\n # Resource group for scheduling, default value is "default"\n resource-group: "default"\n\n meta-node:\n # Meta-node advertise address\n address: "127.0.0.1"\n\n # Meta-node listen port\n port: 5690\n\n # Listen address\n listen-address: ${address}\n\n # Dashboard listen port\n dashboard-port: 5691\n\n # Prometheus exporter listen port\n exporter-port: 1250\n\n # Id of this instance\n id: meta-node-${port}\n\n # If `user-managed` is true, this service will be started by user with the above config\n user-managed: false\n\n # meta backend type, requires extra config for provided backend\n meta-backend: "memory"\n\n # Sqlite backend config\n provide-sqlite-backend: "sqlite*"\n\n # Postgres backend config\n provide-postgres-backend: "postgres*"\n\n # Mysql backend config\n 
provide-mysql-backend: "mysql*"\n\n # Prometheus nodes used by dashboard service\n provide-prometheus: "prometheus*"\n\n # Sanity check: should use shared storage if there're multiple compute nodes\n provide-compute-node: "compute-node*"\n\n # Sanity check: should start at lease one compactor if using shared object store\n provide-compactor: "compactor*"\n\n # Minio instances used by the cluster\n provide-minio: "minio*"\n\n # OpenDAL storage backend used by the cluster\n provide-opendal: "opendal*"\n\n # AWS s3 bucket used by the cluster\n provide-aws-s3: "aws-s3*"\n\n # Tempo used by this meta node\n provide-tempo: "tempo*"\n\n prometheus:\n # Advertise address of Prometheus\n address: "127.0.0.1"\n\n # Listen port of Prometheus\n port: 9500\n\n # Listen address\n listen-address: ${address}\n\n # Id of this instance\n id: prometheus\n\n # If `remote_write` is true, this Prometheus instance will push metrics to remote instance\n remote-write: false\n\n # AWS region of remote write\n remote-write-region: ""\n\n # Remote write url of this instance\n remote-write-url: ""\n\n # Compute-nodes used by this Prometheus instance\n provide-compute-node: "compute-node*"\n\n # Meta-nodes used by this Prometheus instance\n provide-meta-node: "meta-node*"\n\n # Minio instances used by this Prometheus instance\n provide-minio: "minio*"\n\n # Compactors used by this Prometheus instance\n provide-compactor: "compactor*"\n\n # Redpanda used by this Prometheus instance\n provide-redpanda: "redpanda*"\n\n # Frontend used by this Prometheus instance\n provide-frontend: "frontend*"\n\n # How frequently Prometheus scrape targets (collect metrics)\n scrape-interval: 15s\n\n frontend:\n # Advertise address of frontend\n address: "127.0.0.1"\n\n # Listen port of frontend\n port: 4566\n\n # Listen address\n listen-address: ${address}\n\n # Prometheus exporter listen port\n exporter-port: 2222\n\n # Health check listen port\n health-check-port: 6786\n\n # Id of this instance\n id: 
frontend-${port}\n\n # Meta-nodes used by this frontend instance\n provide-meta-node: "meta-node*"\n\n # Tempo used by this frontend instance\n provide-tempo: "tempo*"\n\n # If `user-managed` is true, this service will be started by user with the above config\n user-managed: false\n\n compactor:\n # Compactor advertise address\n address: "127.0.0.1"\n\n # Compactor listen port\n port: 6660\n\n # Listen address\n listen-address: ${address}\n\n # Prometheus exporter listen port\n exporter-port: 1260\n\n # Id of this instance\n id: compactor-${port}\n\n # Minio instances used by this compactor\n provide-minio: "minio*"\n\n # Meta-nodes used by this compactor\n provide-meta-node: "meta-node*"\n\n # Tempo used by this compator\n provide-tempo: "tempo*"\n\n # If `user-managed` is true, this service will be started by user with the above config\n user-managed: false\n\n grafana:\n # Listen address of Grafana\n listen-address: ${address}\n\n # Advertise address of Grafana\n address: "127.0.0.1"\n\n # Listen port of Grafana\n port: 3001\n\n # Id of this instance\n id: grafana\n\n # Prometheus used by this Grafana instance\n provide-prometheus: "prometheus*"\n\n # Tempo used by this Grafana instance\n provide-tempo: "tempo*"\n\n tempo:\n # Id of this instance\n id: tempo\n\n # Listen address of HTTP server and OTLP gRPC collector\n listen-address: "127.0.0.1"\n\n # Advertise address of Tempo\n address: "127.0.0.1"\n\n # HTTP server listen port\n port: 3200\n\n # gRPC listen port of the OTLP collector\n otlp-port: 4317\n\n max-bytes-per-trace: 5000000\n\n opendal:\n id: opendal\n\n engine: hdfs\n\n namenode: 127.0.0.1:9000\n\n bucket: risingwave-test\n\n # aws-s3 is a placeholder service to provide configurations\n aws-s3:\n # Id to be picked-up by services\n id: aws-s3\n\n # The bucket to be used for AWS S3\n bucket: test-bucket\n\n # access key, secret key and region should be set in aws config (either by env var or .aws/config)\n\n # Apache Kafka service backed by 
docker.\n kafka:\n # Id to be picked-up by services\n id: kafka-${port}\n\n # Advertise address of Kafka\n address: "127.0.0.1"\n\n # Listen port of Kafka\n port: 29092\n\n # Listen port of KRaft controller\n controller-port: 29093\n # Listen port for other services in docker (schema-registry)\n docker-port: 29094\n\n # The docker image. Can be overridden to use a different version.\n image: "confluentinc/cp-kafka:7.6.1"\n\n # If set to true, data will be persisted at data/{id}.\n persist-data: true\n\n # Kafka node id. If there are multiple instances of Kafka, we will need to set.\n node-id: 0\n\n user-managed: false\n\n schema-registry:\n # Id to be picked-up by services\n id: schema-registry-${port}\n\n # Advertise address\n address: "127.0.0.1"\n\n # Listen port of Schema Registry\n port: 8081\n\n # The docker image. Can be overridden to use a different version.\n image: "confluentinc/cp-schema-registry:7.6.1"\n\n user-managed: false\n\n provide-kafka: "kafka*"\n\n # Google pubsub emulator service\n pubsub:\n id: pubsub-${port}\n\n address: "127.0.0.1"\n\n port: 5980\n\n persist-data: true\n\n # Only supported in RiseDev compose\n redpanda:\n # Id to be picked-up by services\n id: redpanda\n\n # Port used inside docker-compose cluster (e.g. create MV)\n internal-port: 29092\n\n # Port used on host (e.g. 
import data, connecting using kafkacat)\n outside-port: 9092\n\n # Connect address\n address: ${id}\n\n # Number of CPUs to use\n cpus: 8\n\n # Memory limit for Redpanda\n memory: 16G\n\n # redis service\n redis:\n # Id to be picked-up by services\n id: redis\n\n # listen port of redis\n port: 6379\n\n # address of redis\n address: "127.0.0.1"\n\n # MySQL service backed by docker.\n mysql:\n # Id to be picked-up by services\n id: mysql-${port}\n\n # address of mysql\n address: "127.0.0.1"\n\n # listen port of mysql\n port: 8306\n\n # Note:\n # - This will be used to initialize the MySQL instance.\n # * If the user is "root", the password will be used as the root password.\n # * Otherwise, a regular user will be created with the given password. The root password will be empty.\n # Note that this only applies to fresh instances, i.e., the data directory is empty.\n # - These configs will be passed as-is to risedev-env default user for MySQL operations.\n # - This is not used in RISEDEV_MYSQL_WITH_OPTIONS_COMMON.\n user: root\n password: ""\n database: "risedev"\n\n # Which application to use. Can be overridden for metastore purpose.\n application: "connector"\n\n # The docker image. Can be overridden to use a different version.\n image: "mysql:8.0"\n\n # If set to true, data will be persisted at data/{id}.\n persist-data: true\n\n # If `user-managed` is true, user is responsible for starting the service\n # to serve at the above address and port in any way they see fit.\n user-managed: false\n\n # PostgreSQL service backed by docker.\n postgres:\n # Id to be picked-up by services\n id: postgres-${port}\n\n # address of pg\n address: "127.0.0.1"\n\n # listen port of pg\n port: 8432\n\n # Note:\n # - This will be used to initialize the PostgreSQL instance if it's fresh.\n # - These configs will be passed as-is to risedev-env default user for PostgreSQL operations.\n user: postgres\n password: ""\n database: "postgres"\n\n # Which application to use. 
Can be overridden for connector purpose.\n application: "metastore"\n\n # The docker image. Can be overridden to use a different version.\n image: "postgres:17-alpine"\n\n # If set to true, data will be persisted at data/{id}.\n persist-data: true\n\n # If `user-managed` is true, user is responsible for starting the service\n # to serve at the above address and port in any way they see fit.\n user-managed: false\n\n # Sql Server service backed by docker.\n sqlserver:\n # Note: Sql Server is now only for connector purpose.\n # Id to be picked-up by services\n id: sqlserver-${port}\n\n # address of mssql\n address: "127.0.0.1"\n\n # listen port of mssql\n port: 1433\n\n # Note:\n # - This will be used to initialize the Sql Server instance if it's fresh.\n # - In user-managed mode, these configs are not validated by risedev.\n # They are passed as-is to risedev-env default user for Sql Server operations.\n user: SA\n password: "YourPassword123"\n database: "master"\n\n # The docker image. Can be overridden to use a different version.\n image: "mcr.microsoft.com/mssql/server:2022-latest"\n\n # If set to true, data will be persisted at data/{id}.\n persist-data: true\n\n # If `user-managed` is true, user is responsible for starting the service\n # to serve at the above address and port in any way they see fit.\n user-managed: false\n | dataset_sample\yaml\risingwavelabs_risingwave\risedev.yml | risedev.yml | YAML | 38,979 | 0.95 | 0.017386 | 0.201974 | vue-tools | 694 | 2024-01-23T10:50:19.629344 | Apache-2.0 | false | 10f85c30a3ca5dab005c5682cf88fff2 |
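As the comment in risedev.yml above says, each `- use: <name>` step in a profile refers to the matching entry under `template:`: the step starts from the template's defaults, per-step keys override them, and `${...}` placeholders (such as `id: compute-node-${port}`) are expanded against the merged values. A hedged sketch of what the first compute node of the `ci-3cn-1fe` profile would resolve to under that merge order (illustrative, not actual risedev output):

```yaml
# Profile step as written:
#   - use: compute-node
#     port: 5687
#     exporter-port: 1222
#     enable-tiered-cache: true
# Merged with the `compute-node` template:
- use: compute-node
  address: "127.0.0.1"          # template default
  listen-address: "127.0.0.1"   # ${address} expanded
  port: 5687                    # overridden by the profile step (template default: 5688)
  exporter-port: 1222           # overridden by the profile step
  enable-tiered-cache: true     # overridden by the profile step (template default: false)
  id: compute-node-5687         # ${port} expanded after the override
```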
version: 2
updates:
- package-ecosystem: github-actions
  directory: "/"
  schedule:
    interval: "weekly"
- package-ecosystem: cargo
  directory: /
  schedule:
    interval: "daily"
  open-pull-requests-limit: 10
  # Disable auto rebase to reduce cost. Use `@dependabot rebase` manually instead.
  rebase-strategy: "disabled"
  ignore:
    # Ignore patch updates to reduce spam. Manually run `cargo update` regularly instead.
    - dependency-name: "*"
      update-types: ["version-update:semver-patch"]
    # Ignore arrow crates. It does major releases frequently: https://github.com/apache/arrow-rs/issues/5368
    # We depend on arrow directly, and also through many other crates that depend on arrow, including deltalake, arrow-udf, ...
    # It will always need human intervention, and we'd better be the last one to update arrow.
    - dependency-name: "arrow*"
      update-types: ["version-update:semver-minor", "version-update:semver-major"]
    - dependency-name: "parquet"
      update-types: ["version-update:semver-minor", "version-update:semver-major"]
    # bump sqllogictest manually together with sqllogictest-bin in the CI docker image
    - dependency-name: "sqllogictest"
      update-types: ["version-update:semver-minor", "version-update:semver-major"]
  # Create groups of dependencies to be updated together in one pull request
  groups:
    aws:
      patterns:
        - "aws*"
    tonic:
      patterns:
        - "tonic*"
        - "prost*"
    opentelemetry:
      patterns:
        - "opentelemetry"
        - "opentelemetry*"
        - "tracing-opentelemetry"
        - "fastrace-opentelemetry"
    mysql:
      patterns:
        - "mysql_common"
        - "mysql_async"
    google-cloud:
      patterns:
        - "google-cloud*"
    rand:
      patterns:
        - "rand"
        - "rand_chacha"
    strum:
      patterns:
        - "strum"
        - "strum_macros"
    serde:
      patterns:
        - "serde"
        - "serde_derive"

# Don't update these directories
- package-ecosystem: cargo
  directory: /integration_tests/feature-store
  schedule:
    interval: "daily"
  ignore:
    - dependency-name: "*"

- package-ecosystem: maven
  directory: /java
  schedule:
    interval: "weekly"
  open-pull-requests-limit: 5
  # Disable auto rebase to reduce cost. Use `@dependabot rebase` manually instead.
  rebase-strategy: "disabled"
  ignore:
    # Do not bump Debezium because we have hacked its source code, e.g. #18760
    - dependency-name: "io.debezium:*"
      update-types: ["version-update:semver-minor", "version-update:semver-major"]
    # Don't upgrade protobuf to 4.x now. See https://github.com/grpc/grpc-java/issues/11015
    - dependency-name: "com.google.protobuf:*"
      update-types: ["version-update:semver-major"]
    # Let's do major version updates manually
    - dependency-name: "*"
      update-types: ["version-update:semver-major"]
  groups:
    # Group all dependencies together because Java libraries are quite stable
    all:
      patterns:
        - "*"

# Don't touch risingwave-sink-deltalake-test. It's too complicated and it's only for testing
- package-ecosystem: maven
  directory: /java/connector-node/risingwave-sink-deltalake-test/
  schedule:
    interval: "weekly"
  ignore:
    - dependency-name: "*"

# file: dataset_sample\yaml\risingwavelabs_risingwave\.github\dependabot.yml
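The `ignore` entries in the dependabot.yml above combine a dependency-name glob with a list of `update-types` to suppress. A rough sketch of that filtering logic in Python; `classify_update` and `is_ignored` are hypothetical helpers for illustration, not Dependabot's real implementation:

```python
from fnmatch import fnmatch

def classify_update(old: str, new: str) -> str:
    """Classify a dotted version bump as semver major/minor/patch."""
    o, n = old.split("."), new.split(".")
    if o[0] != n[0]:
        return "version-update:semver-major"
    if o[1] != n[1]:
        return "version-update:semver-minor"
    return "version-update:semver-patch"

# Rules mirroring two entries from the cargo ecosystem section above.
IGNORE_RULES = [
    ("*", ["version-update:semver-patch"]),
    ("arrow*", ["version-update:semver-minor", "version-update:semver-major"]),
]

def is_ignored(name: str, old: str, new: str) -> bool:
    """True if any rule's name glob matches and the bump kind is suppressed."""
    kind = classify_update(old, new)
    return any(fnmatch(name, pat) and kind in kinds for pat, kinds in IGNORE_RULES)

print(is_ignored("tokio", "1.38.0", "1.38.1"))        # True: patch bumps are ignored
print(is_ignored("tokio", "1.38.0", "1.39.0"))        # False: minor bumps open a PR
print(is_ignored("arrow-array", "52.0.0", "53.0.0"))  # True: arrow majors are ignored
```

Under these rules a patch bump to any crate is silently skipped, while minor and major bumps still open PRs except for the pinned `arrow*` family.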
comment:
  header: Hi, there.
  footer: "\n---\n\n> This is an automated comment created by the [peaceiris/actions-label-commenter].\nResponding to the bot or mentioning it won't have any effect.\n\n[peaceiris/actions-label-commenter]: https://github.com/peaceiris/actions-label-commenter"

labels:
  - name: 'user-facing-changes'
    labeled:
      pr:
        body: |
          :memo: **Telemetry Reminder**:
          If you're implementing this feature, please consider adding telemetry metrics to track its usage. This helps us understand how the feature is being used and improve it further.
          You can find the `report_event` function for telemetry reporting in the following files. Feel free to ask questions if you need any guidance!
          * `src/frontend/src/telemetry.rs`
          * `src/meta/src/telemetry.rs`
          * `src/stream/src/telemetry.rs`
          * `src/storage/compactor/src/telemetry.rs`
          Or call `report_event_common` (`src/common/telemetry_event/src/lib.rs`) if you find it hard to implement.
          :sparkles: Thank you for your contribution to RisingWave! :sparkles:

# file: dataset_sample\yaml\risingwavelabs_risingwave\.github\label-commenter-config.yml
# .github/labeler.yml
version: 1
appendOnly: true

# Match title
labels:
- label: "type/feature"
  title: "^feat.*"
- label: "type/fix"
  title: "^fix.*"
- label: "type/refactor"
  title: "^refactor.*"
- label: "type/style"
  title: "^style.*"
- label: "type/chore"
  title: "^chore.*"
- label: "type/perf"
  title: "^perf.*"
- label: "type/build"
  title: "^build.*"
- label: "type/revert"
  title: "^revert.*"
- label: "component/ci"
  title: "^ci.*"
- label: "component/test"
  title: "^test.*"
- label: "component/doc"
  title: "^doc.*"
- label: "type/deprecate"
  title: "^deprecate.*"
- label: "cherry-pick"
  title: "^cherry-pick.*"
- label: "cherry-pick"
  title: "^cherry pick.*"
- label: "cherry-pick"
  title: "^cherrypick.*"

# Match body
- label: "breaking-change"
  body: '- \[x\] My PR contains breaking changes'

# Match file changes
- label: "ci/run-e2e-single-node-tests"
  files:
    - "src\\/meta\\/.*.rs"

- label: "ci/run-backwards-compat-tests"
  files:
    - "src\\/meta\\/model\\/migration\\/.*.rs"

- label: "ci/run-e2e-test-other-backends"
  files:
    - "src\\/meta\\/.*.rs"

- label: "ci/run-e2e-iceberg-tests"
  files:
    - ".*iceberg.*"
- label: "ci/run-e2e-iceberg-tests"
  title: ".*iceberg.*"

# S3 source tests
- label: "ci/run-s3-source-tests"
  files:
    - "src\\/connector\\/src\\/source\\/filesystem.*"
    - ".*fs_fetch.*"
    - ".*fs_list.*"
- label: "ci/main-cron/run-selected"
  files:
    - "src\\/connector\\/src\\/source\\/filesystem.*"
    - ".*fs_fetch.*"
    - ".*fs_list.*"

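The labeler config above maps PR titles to labels via anchored regexes. As an illustrative sketch (not part of the repository), the matching a `srvaroa/labeler`-style action performs can be modeled like this, using a few of the rules from the config:

```python
import re

# Hypothetical subset of the title rules from the labeler config above.
TITLE_RULES = [
    ("type/feature", r"^feat.*"),
    ("type/fix", r"^fix.*"),
    ("type/refactor", r"^refactor.*"),
    ("cherry-pick", r"^cherry-pick.*"),
    ("cherry-pick", r"^cherry pick.*"),
]

def labels_for_title(title: str) -> list[str]:
    """Return the labels whose title pattern matches the PR title."""
    matched = []
    for label, pattern in TITLE_RULES:
        # The `.*` suffix in each rule makes prefix- and full-match equivalent here.
        if re.fullmatch(pattern, title) and label not in matched:
            matched.append(label)
    return matched

print(labels_for_title("feat: support new sink"))  # → ['type/feature']
```

The duplicated `cherry-pick` entries in the real config exist because each rule carries exactly one `title` pattern, so spelling variants need separate rules.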
# .github/ISSUE_TEMPLATE/bug_report.yml
name: Bug report
description: Create a report to help us improve
labels: ["type/bug"]
body:
  - type: textarea
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is.
  - type: textarea
    attributes:
      label: Error message/log
      description: The error message you see.
      render: text
  - type: textarea
    attributes:
      label: To Reproduce
      description: Steps to reproduce the behavior, including the SQLs you run and/or the operations you have done to trigger the bug.
      placeholder: |
        First create the tables/sources and materialized views with

        ```sql
        CREATE TABLE ...
        CREATE MATERIALIZED VIEW ...
        ```

        Then the bug is triggered after ...
  - type: textarea
    attributes:
      label: Expected behavior
      description: A clear and concise description of what you expected to happen.
      placeholder: |
        I expected to see this happen: *explanation*

        Instead, this happened: *explanation*
  - type: textarea
    attributes:
      label: How did you deploy RisingWave?
      description: Do you run RisingWave via Docker / Homebrew / RiseDev / RisingWave Cloud / ...?
      placeholder: |
        via Docker Compose. My `docker-compose.yml` is: ...
  - type: textarea
    attributes:
      label: The version of RisingWave
      description: The output of `select version()` and/or the docker image tag and ID.
      placeholder: |
        ```
        dev=> select version();
                                           version
        -----------------------------------------------------------------------------
         PostgreSQL 8.3-RisingWave-0.19.0 (01659936e12307e28e13287dcc3ca899b7f701e3)


        docker image ls
        REPOSITORY                  TAG      IMAGE ID       CREATED      SIZE
        risingwavelabs/risingwave   latest   c0fb5556d7cb   6 days ago   1.99GB
        ```
  - type: textarea
    attributes:
      label: Additional context
      description: Add any other context about the problem here, e.g., the full log files.

# .github/ISSUE_TEMPLATE/config.yml
contact_links:
  - name: Questions and General Discussion
    url: https://github.com/risingwavelabs/risingwave/discussions
    about: Have questions? Welcome to open a discussion.
  - name: Community Chat
    url: https://risingwave.com/slack
    about: Join the RisingWave Slack community and chat with us.

# .github/ISSUE_TEMPLATE/design-rfc.yml
name: Design RFC
description: Propose a design
labels: ["type/feature"]
body:
  - type: markdown
    attributes:
      value: |
        This doc will first explain why we need ..... Then we will go through the technical design of this feature.

  - type: textarea
    attributes:
      label: Background
      description: Please articulate the pain points and use cases where this RFC can improve
  - type: textarea
    attributes:
      label: Design
      description: The technical design in detail. You can write some demos or concrete APIs that can help others better understand how the design will work.
  - type: textarea
    attributes:
      label: Future Optimizations
      description: Please remember that a good design always goes with an MVP first and iterates for optimizations step by step.
  - type: textarea
    attributes:
      label: Discussions
      description: Please list the open issues and the tough decisions that you want to discuss with meeting participants
  - type: textarea
    attributes:
      label: Q&A
      description: Here's where the doc readers can leave the questions and suggestions
      placeholder: |
        * Why do you need ...
        * What will happen if ...

# .github/ISSUE_TEMPLATE/feature_request.yml
name: Feature request
description: Suggest an idea for this project
labels: ["type/feature"]
body:
  - type: textarea
    attributes:
      label: Is your feature request related to a problem? Please describe.
      description: A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
  - type: textarea
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of what you want to happen.
  - type: textarea
    attributes:
      label: Describe alternatives you've considered
      description: A clear and concise description of any alternative solutions or features you've considered.
  - type: textarea
    attributes:
      label: Additional context
      description: Add any other context or screenshots about the feature request here.

# .github/workflows/assign-issue-milestone.yml
name: Assign milestone

on:
  issues:
    types: [opened]

jobs:
  assign-milestone:
    runs-on: ubuntu-latest
    steps:
      - name: Add issue to the latest milestone
        uses: fuyufjh/automatically-set-milestone-to-issue@v0.2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          version-prefix: 'release-'
          version-separator: '.'
          overwrite: false

# .github/workflows/audit.yml
name: Security audit
on:
  schedule:
    - cron: '0 0 * * *'
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rustsec/audit-check@v2.0.0
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

# .github/workflows/auto-create-doc-issue-by-issue.yml
name: Issue Documentation Checker

on:
  issues:
    types:
      - closed
      - labeled

jobs:
  create-issue:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Check if issue is done and labeled 'user-facing-changes'
        uses: dacbd/create-issue-action@main
        if: ${{ github.event.action == 'closed' && contains(github.event.issue.labels.*.name, 'user-facing-changes') }}
        with:
          token: ${{ secrets.ACCESS_TOKEN }}
          owner: risingwavelabs
          repo: risingwave-docs
          title: |
            Document: ${{ github.event.issue.title }}
          body: |
            ## Context
            Source Issue URL: ${{ github.event.issue.html_url }}
            Created At: ${{ github.event.issue.created_at }}
            Created By: ${{ github.event.issue.user.login }}
            Closed At: ${{ github.event.issue.closed_at }}

# .github/workflows/auto-create-doc-issue-by-pr.yml
name: PR Documentation Checker

on:
  pull_request:
    types:
      - closed
      - labeled

jobs:
  check_pr_description:
    runs-on: ubuntu-latest

    steps:
      - name: Check if PR is merged
        id: check_merged
        run: echo "merged=$(echo ${{ github.event.pull_request.merged }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT

      - name: Retrieve PR information
        uses: 8BitJonny/gh-get-current-pr@3.0.0
        id: PR
        with:
          sha: ${{ github.event.pull_request.head.sha }}

      - name: Check if documentation update is needed
        id: check_documentation_update
        if: steps.PR.outputs.pr_found == 'true'
        run: |
          if [[ $PR_BODY == *"- [x] My PR needs documentation updates."* ]]; then
            echo "documentation_update=true" >> $GITHUB_OUTPUT
          elif [[ $PR_LABEL == *"user-facing-changes"* ]]; then
            echo "documentation_update=true" >> $GITHUB_OUTPUT
          elif [[ $PR_LABEL == *"breaking-change"* ]]; then
            echo "documentation_update=true" >> $GITHUB_OUTPUT
          else
            echo "documentation_update=false" >> $GITHUB_OUTPUT
          fi
        env:
          PR_BODY: ${{ steps.PR.outputs.pr_body }}
          PR_LABEL: ${{ steps.PR.outputs.pr_labels }}

      - name: Create issue in other repository
        if: steps.check_merged.outputs.merged == 'true' && steps.check_documentation_update.outputs.documentation_update == 'true'
        run: |
          ISSUE_CONTENT="This issue tracks the documentation update needed for the merged PR #$PR_ID.

          Source PR URL: $PR_URL
          Source PR Merged At: $PR_MERGED_AT

          If it is a major improvement that deserves a new page or a new section in the documentation, please check if we should label it as an experimental feature."

          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" \
            -d "{\"title\": \"Document: $PR_TITLE\",\"body\": \"$ISSUE_CONTENT\"}" \
            "https://api.github.com/repos/risingwavelabs/risingwave-docs/issues"
        env:
          PR_ID: ${{ steps.PR.outputs.number }}
          PR_URL: ${{ steps.PR.outputs.pr_url }}
          PR_TITLE: ${{ steps.PR.outputs.pr_title }}
          PR_BODY: ${{ steps.PR.outputs.pr_body }}
          PR_CREATED_AT: ${{ steps.PR.outputs.pr_created_at }}
          PR_MERGED_AT: ${{ steps.PR.outputs.pr_merged_at }}
          PR_CLOSED_AT: ${{ steps.PR.outputs.pr_closed_at }}
          PR_LABEL: ${{ steps.PR.outputs.pr_labels }}

      - name: print_output_variables
        run: |
          echo "Merged: ${{ steps.check_merged.outputs.merged }}"
          echo "PR ID: ${{ steps.PR.outputs.number }}"
          echo "PR URL: ${{ steps.PR.outputs.pr_url }}"
          echo "PR Title: ${{ steps.PR.outputs.pr_title }}"
          echo "PR Created At: ${{ steps.PR.outputs.pr_created_at }}"
          echo "PR Merged At: ${{ steps.PR.outputs.pr_merged_at }}"
          echo "PR Closed At: ${{ steps.PR.outputs.pr_closed_at }}"
          echo "PR Labels: ${{ steps.PR.outputs.pr_labels }}"
          echo "Documentation Update: ${{ steps.check_documentation_update.outputs.documentation_update }}"

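The workflow above decides a PR needs a docs follow-up when its body contains the checked checkbox or when it carries the `user-facing-changes` or `breaking-change` label. As a minimal sketch (not part of the workflow; the function name and the list-based label handling are assumptions — the shell version does substring matching on a label string), the predicate can be written as:

```python
# Hypothetical model of the `check_documentation_update` step above.
DOC_CHECKBOX = "- [x] My PR needs documentation updates."
DOC_LABELS = {"user-facing-changes", "breaking-change"}

def needs_doc_update(pr_body: str, pr_labels: list[str]) -> bool:
    """True if the PR body has the checked docs checkbox or carries a docs-triggering label."""
    if DOC_CHECKBOX in pr_body:
        return True
    return any(label in DOC_LABELS for label in pr_labels)

print(needs_doc_update("nothing here", ["breaking-change"]))  # → True
```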
# .github/workflows/auto-update-helm-and-operator-version-by-release.yml
name: Update Helm Charts and Risingwave Operator on New Release

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      version:
        description: 'release version'
        required: true

env:
  NEW_APP_VERSION: ${{ github.event.inputs.version || github.event.release.tag_name }}

jobs:
  update-helm-charts:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Helm Charts Repository
        uses: actions/checkout@v4
        with:
          repository: 'risingwavelabs/helm-charts'
          token: ${{ secrets.PR_TOKEN }}
          path: 'helm-charts'

      - name: Update values.yaml
        run: |
          sed -i "s/^  tag:.*/  tag: \"${{ env.NEW_APP_VERSION }}\"/" helm-charts/charts/risingwave/values.yaml

      - name: Update Chart.yaml
        run: |
          cd helm-charts/charts/risingwave
          CURRENT_VERSION=$(grep 'version:' Chart.yaml | awk '{print $2}' | head -n 1)
          NEW_VERSION=$(echo $CURRENT_VERSION | awk -F. -v OFS='.' '{$NF++; print}')
          sed -i "/type: application/,/version:/!b; /version:/s/version: .*/version: $NEW_VERSION/" Chart.yaml
          sed -i "s/^appVersion: .*/appVersion: \"${{ env.NEW_APP_VERSION }}\"/" Chart.yaml
          echo "NEW_CHART_VERSION=$NEW_VERSION" >> $GITHUB_ENV

      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v7
        with:
          token: ${{ secrets.PR_TOKEN }}
          commit-message: 'chore: bump risingwave to ${{ env.NEW_APP_VERSION }}, release chart ${{ env.NEW_CHART_VERSION }}'
          title: 'chore: bump risingwave to ${{ env.NEW_APP_VERSION }}, release chart ${{ env.NEW_CHART_VERSION }}'
          body: 'This is an automated pull request to update the chart versions'
          branch: 'auto-update-${{ env.NEW_APP_VERSION }}'
          path: 'helm-charts'
          reviewers: arkbriar
          delete-branch: true
          signoff: true

  update-risingwave-operator:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Risingwave Operator Repository
        uses: actions/checkout@v4
        with:
          repository: 'risingwavelabs/risingwave-operator'
          token: ${{ secrets.PR_TOKEN }}
          path: 'risingwave-operator'

      - name: Update risingwave-operator image tags
        run: |
          cd risingwave-operator
          PREV_VERSION=$(grep -roh "risingwavelabs/risingwave:v[0-9\.]*" * | head -n 1 | cut -d':' -f2)
          grep -rl "risingwavelabs/risingwave:$PREV_VERSION" . | xargs sed -i "s|risingwavelabs/risingwave:$PREV_VERSION|risingwavelabs/risingwave:${{ env.NEW_APP_VERSION }}|g"

      - name: Create Pull Request for risingwave-operator
        uses: peter-evans/create-pull-request@v7
        with:
          token: ${{ secrets.PR_TOKEN }}
          commit-message: 'chore: bump risingwave image tags to ${{ env.NEW_APP_VERSION }}'
          title: 'chore: bump risingwave image tags to ${{ env.NEW_APP_VERSION }}'
          body: 'This is an automated pull request to update the risingwave image tags'
          branch: 'auto-update-${{ env.NEW_APP_VERSION }}'
          path: 'risingwave-operator'
          reviewers: arkbriar
          delete-branch: true
          signoff: true

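The chart-version bump in the workflow above uses `awk -F. -v OFS='.' '{$NF++; print}'` to increment the last dot-separated component of the version string. A minimal Python sketch of that one-liner (illustrative only; the function name is an assumption):

```python
def bump_last_component(version: str) -> str:
    """Increment the final dot-separated component, like `awk -F. '{$NF++; print}'`."""
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts)

print(bump_last_component("0.1.27"))  # → 0.1.28
```

Note that, like the awk version, this carries no semver rollover logic: `1.9` becomes `1.10`, not `2.0`, which is the desired chart-patch behavior here.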
# .github/workflows/cherry-pick-since-release-branch.yml
name: PR for release branches
on:
  pull_request:
    branches:
      - main
    types: ["closed", "labeled"]
  workflow_dispatch:
    inputs:
      pr_number:
        description: "PR number to cherry-pick"
        required: true
        type: number
      base_version:
        description: "Base version to cherry-pick since"
        default: "2.1"
        required: true
        type: string

env:
  GH_TOKEN: ${{ github.token }}

jobs:
  get-target-release-branches:
    if: |
      (github.event_name == 'pull_request' &&
      github.event.pull_request.merged &&
      ((github.event.action == 'labeled' && startsWith(github.event.label.name, 'need-cherry-pick-since')) ||
      (github.event.action == 'closed' && contains(toJson(github.event.pull_request.labels), 'need-cherry-pick-since')))) ||
      github.event_name == 'workflow_dispatch'
    runs-on: ubuntu-latest
    outputs:
      branches: ${{ steps.filter-release-branches.outputs.branches }}
      pr_number: ${{ steps.filter-release-branches.outputs.pr_number }}
      pr_sha: ${{ steps.filter-release-branches.outputs.pr_sha }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Ensures all branches are fetched

      - name: Get all release branches including label version and higher
        id: filter-release-branches
        run: |
          if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
            # For manual workflow dispatch
            base_version="${{ github.event.inputs.base_version }}"
            pr_number="${{ github.event.inputs.pr_number }}"
            echo "Using manually provided base version: $base_version for PR #$pr_number"
            # Get the PR merge commit SHA
            pr_sha=$(gh pr view $pr_number --repo ${{ github.repository }} --json mergeCommit --jq .mergeCommit.oid)
            echo "PR merge commit SHA: $pr_sha"
            echo "pr_sha=$pr_sha" >> "$GITHUB_OUTPUT"
          else
            # For automatic trigger from PR events
            if [[ "${{ github.event.action }}" == 'labeled' ]]; then
              label="${{ github.event.label.name }}"
            else
              labels='${{ toJson(github.event.pull_request.labels) }}'
              label=$(echo "$labels" | jq -r '.[] | select(.name | contains("need-cherry-pick-since")).name' | sort -V | head -n 1)
            fi
            base_version=$(echo "$label" | sed 's/need-cherry-pick-since-release-//')
            pr_number="${{ github.event.number }}"
          fi

          # Output the PR number for use in downstream jobs
          echo "PR number: $pr_number"
          echo "pr_number=$pr_number" >> "$GITHUB_OUTPUT"

          echo "Base version from label: $base_version"

          branches=$(git branch -r | grep "origin/release-" | sed 's|origin/release-||' | sort -V)

          echo "Branches: $branches"

          target_branches=()

          while IFS= read -r version; do
            version=$(echo "$version" | xargs)

            if [[ ! "$version" =~ ^[0-9]+(\.[0-9]+)*$ ]]; then
              echo "Skipping non-numeric branch: release-$version"
              continue
            fi

            if [[ -n "$version" ]] && [[ "$version" == "$(printf "%s\n%s" "$base_version" "$version" | sort -V | tail -n1)" ]]; then
              target_branches+=("release-$version")
            fi
          done <<< "$branches"

          if [ ${#target_branches[@]} -eq 0 ]; then
            echo "No matching release branches found."
            echo "branches=[]" >> "$GITHUB_OUTPUT"
          else
            echo "Matching release branches found:"
            for branch in "${target_branches[@]}"; do
              echo "$branch"
            done
            echo "branches=$(printf '%s\n' "${target_branches[@]}" | jq -R . | jq -s -c .)" >> "$GITHUB_OUTPUT"
          fi

  release_pull_request:
    needs: get-target-release-branches
    if: needs.get-target-release-branches.outputs.branches != '[]'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        branch: ${{ fromJson(needs.get-target-release-branches.outputs.branches) }}
    steps:
      - name: checkout
        uses: actions/checkout@v4

      - name: Create PR to branch
        uses: risingwavelabs/github-action-cherry-pick@master
        with:
          # For automatic trigger from PR events, `pr_sha` is unset,
          # and it will use the triggering SHA (GITHUB_SHA) instead.
          commit_sha: ${{ needs.get-target-release-branches.outputs.pr_sha || '' }}
          pr_branch: ${{ matrix.branch }}
          pr_labels: "cherry-pick"
          pr_body: ${{ format('Cherry picking \#{0} onto branch {1}', needs.get-target-release-branches.outputs.pr_number, matrix.branch) }}
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

permissions:
  issues: write
  pull-requests: write
  contents: write
  actions: write

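The shell loop in the workflow above keeps every numeric `release-X.Y` branch whose version sorts at or above the base version, using `sort -V` to compare. A minimal Python sketch of the same selection (illustrative only; the function name is an assumption, and list-of-ints comparison stands in for `sort -V` semantics on dotted numeric versions):

```python
import re

def target_release_branches(branches: list[str], base_version: str) -> list[str]:
    """Keep release-X.Y... branches whose version is >= base_version (version-sort order)."""
    def key(v: str) -> list[int]:
        return [int(p) for p in v.split(".")]

    targets = []
    for name in branches:
        m = re.fullmatch(r"release-(\d+(?:\.\d+)*)", name)
        if not m:
            continue  # skip non-numeric branches, as the workflow does
        if key(m.group(1)) >= key(base_version):
            targets.append(name)
    return sorted(targets, key=lambda b: key(b.split("release-", 1)[1]))

print(target_release_branches(
    ["release-1.9", "release-2.0", "release-2.1", "release-foo"], "2.0"))
# → ['release-2.0', 'release-2.1']
```

The comparison-by-components is why `release-1.9` loses to base `2.0` even though `"1.9" > "2.0"` would be false only numerically, not lexically — the same reason the workflow relies on `sort -V` rather than plain string order.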
# .github/workflows/cherry-pick-to-release-branch.yml
name: Deprecation Notice for old cherry-pick label
on:
  pull_request:
    branches:
      - main
    types: ["labeled"]

jobs:
  deprecation_notice:
    if: startsWith(github.event.label.name, 'need-cherry-pick-release-')
    runs-on: ubuntu-latest
    steps:
      - name: Add deprecation notice comment
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: '> [!WARNING]\n> The `need-cherry-pick-release-xx` label is deprecated. Please use `need-cherry-pick-since-release-xx` instead for future PRs.'
            })

permissions:
  issues: write
  pull-requests: write

# .github/workflows/connector-node-integration.yml
name: Connector Node Integration Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  merge_group:
    types: [ checks_requested ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        java: [ '11', '17' ]
    name: Java ${{ matrix.java }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            java:
              - 'java/**'
            proto:
              - 'proto/**'
      - name: Set up JDK ${{ matrix.java }}
        if: steps.filter.outputs.java == 'true' || steps.filter.outputs.proto == 'true'
        uses: actions/setup-java@v4
        with:
          java-version: ${{ matrix.java }}
          distribution: 'adopt'
          cache: 'maven'
      - name: run integration tests
        if: steps.filter.outputs.java == 'true' || steps.filter.outputs.proto == 'true'
        run: |
          set -ex

          RISINGWAVE_ROOT=${PWD}

          echo "--- build connector node"
          cd ${RISINGWAVE_ROOT}/java
          # run unit test
          # WARN: `testOnNext_writeValidation` is skipped because it relies on Rust code to decode message,
          # while we don't build Rust code (`-Dno-build-rust`) here to save time
          mvn --batch-mode --update-snapshots clean package -Dno-build-rust \
            '-Dtest=!com.risingwave.connector.sink.SinkStreamObserverTest#testOnNext_writeValidation' \
            -Dsurefire.failIfNoSpecifiedTests=false

# .github/workflows/dashboard.yml
name: Dashboard
on:
  push:
    branches: [main]
    paths: [dashboard/**, proto/**]
  pull_request:
    branches: [main]
    paths: [dashboard/**, proto/**]
  workflow_dispatch:
jobs:
  dashboard-ui-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - uses: arduino/setup-protoc@v3
        with:
          version: "25.x"
          repo-token: ${{ secrets.GITHUB_TOKEN }}
      - name: build
        working-directory: ./dashboard
        run: |
          echo "::group::npm install"
          npm install
          echo "::endgroup::"
          npm run lint
          npm run build
      - name: Deploy
        uses: s0/git-publish-subdir-action@develop
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        env:
          REPO: self
          BRANCH: dashboard-artifact
          FOLDER: dashboard/out
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SQUASH_HISTORY: true

# .github/workflows/doc.yml
name: Deploy Developer Docs

on:
  push:
    branches:
      - main
  workflow_dispatch:

env:
  SCCACHE_GHA_ENABLED: "true"
  RUSTC_WRAPPER: "sccache"
  RUSTDOCFLAGS: "--markdown-css rust.css --markdown-no-toc --index-page /home/runner/work/risingwave/risingwave/docs/rustdoc/index.md -Zunstable-options"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Maximize build space
        uses: easimon/maximize-build-space@master
        with:
          remove-dotnet: 'true'
          remove-android: 'true'
          remove-haskell: 'true'
          remove-codeql: 'true'
          remove-docker-images: 'true'
          root-reserve-mb: 10240
          temp-reserve-mb: 10240
      - uses: actions/checkout@v4
      - name: Setup Rust toolchain
        run: rustup show
      - name: Install dependencies for compiling RisingWave
        run: sudo apt-get update && sudo apt-get install -y make build-essential cmake protobuf-compiler curl openssl libssl-dev libsasl2-dev libcurl4-openssl-dev pkg-config postgresql-client tmux lld
      - name: Run sccache-cache
        uses: mozilla-actions/sccache-action@v0.0.9
      - name: build rustdocs
        run: |
          cargo doc --workspace --no-deps --document-private-items
          cp docs/rustdoc/rust.css target/doc/rust.css

          mkdir -p artifact/rustdoc
          cp -R target/doc/* artifact/rustdoc
      - name: Show available storage
        run: df -h
      - name: Install tools for building docs
        uses: taiki-e/install-action@v2
        with:
          tool: mdbook,mdbook-toc,mdbook-linkcheck
      - name: build developer doc
        run: |
          cd docs/dev
          mdbook build
          cp -R book/html/* ../../artifact
      - name: Upload artifacts
        uses: actions/upload-pages-artifact@v3
        with:
          path: artifact
      - name: Show available storage
        run: df -h
  deploy:
    needs: build
    permissions:
      pages: write # to deploy to Pages
      id-token: write # to verify the deployment originates from an appropriate source
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

# .github/workflows/label-triggered.yml
name: Label Triggered Comment

on:
  issues:
    types: [labeled, unlabeled]
  pull_request:
    types: [labeled, unlabeled]

permissions:
  contents: read
  issues: write
  pull-requests: write

jobs:
  comment:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v4
      - name: Label Commenter
        uses: peaceiris/actions-label-commenter@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          config_file: .github/label-commenter-config.yml

# .github/workflows/labeler.yml
name: Label PRs

on:
  pull_request:
    types: [opened, edited, synchronize]

jobs:
  pr-labeler:
    runs-on: ubuntu-latest
    name: pr-labeler
    steps:
      - uses: srvaroa/labeler@master
        env:
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

# .github/workflows/license_check.yml
name: License checker

on:
  push:
    branches:
      - main
      - "forks/*"
  pull_request:
    branches:
      - main
      - "v*.*.*-rc"
  merge_group:
    types: [checks_requested]
jobs:
  license-header-check:
    runs-on: ubuntu-latest
    name: license-header-check
    steps:
      - uses: actions/checkout@v4
      - name: Check License Header
        uses: apache/skywalking-eyes@v0.7.0

# .github/workflows/nightly-rust.yml
name: Build with Latest Nightly Rust

# Helpful to know when it does not compile.

on:
  schedule:
    - cron: "0 0 * * *"
  push:
    branches:
      - xxchan/latest-nightly-rust
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Maximize build space
        uses: easimon/maximize-build-space@master
        with:
          remove-dotnet: 'true'
          remove-android: 'true'
          remove-haskell: 'true'
          remove-codeql: 'true'
          remove-docker-images: 'true'
          root-reserve-mb: 10240
          temp-reserve-mb: 10240
      - uses: actions/checkout@v4
        if: ${{ github.event_name == 'schedule' }}
        with:
          # For daily scheduled run, we use a fixed branch, so that we can apply patches to fix compile errors earlier.
          # We can also ensure the regression is due to new rust instead of new RisingWave code.
          ref: xxchan/latest-nightly-rust
      - uses: actions/checkout@v4
        if: ${{ !(github.event_name == 'schedule') }}
      - name: Setup Rust toolchain
        run: |
          rustup override set nightly
          rustup update nightly
      - name: Install dependencies
        run: sudo apt-get update && sudo apt-get install -y make build-essential cmake protobuf-compiler curl openssl libssl-dev libsasl2-dev libcurl4-openssl-dev pkg-config postgresql-client tmux lld
      - name: cargo check
        run: |
          export CARGO_INCREMENTAL=0
          export CARGO_PROFILE_DEV_DEBUG=false
          cargo check
      - name: Show available storage
        run: df -h

# .github/workflows/package_version_check.yml
name: Package Version Checker

on:
  pull_request:
    branches:
      - 'main'

jobs:
  compare-package-version-with-latest-release-version:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: List branches
        run: |
          git fetch --all
          release_branches=$(git branch -r | grep -E 'origin/release-[0-9]+\.[0-9]+' | sed 's/origin\///')
          echo "Release branches:"
          echo "$release_branches"
          echo "$release_branches" > release_branches.txt

      - name: Pick latest release branch
        run: |
          release_branches=$(cat release_branches.txt)
          latest_branch=$(echo "$release_branches" | sort -t. -k1,1 -k2,2 -Vr | head -n 1)
          echo "Latest release branch: $latest_branch"
          latest_version=$(echo "$latest_branch" | sed -E 's/release-([0-9]+\.[0-9]+)/\1/' | sed 's/^[ \t]*//')
          echo "Latest release version: $latest_version"
          echo "$latest_version" > latest_release_version.txt

      - name: Read Cargo.toml version
        run: |
          cargo_version=$(grep -oP '(?<=^version = ")[0-9]+\.[0-9]+' Cargo.toml)
          echo "Cargo.toml version: $cargo_version"
          echo "$cargo_version" > cargo_version.txt

      - name: Compare versions
        run: |
          latest_version=$(cat latest_release_version.txt)
          cargo_version=$(cat cargo_version.txt)

          latest_major=$(echo $latest_version | cut -d. -f1)
          latest_minor=$(echo $latest_version | cut -d. -f2)

          cargo_major=$(echo $cargo_version | cut -d. -f1)
          cargo_minor=$(echo $cargo_version | cut -d. -f2)

          if [ "$cargo_major" -lt "$latest_major" ] || { [ "$cargo_major" -eq "$latest_major" ] && [ "$cargo_minor" -le "$latest_minor" ]; }; then
            echo "Error: Cargo.toml package version $cargo_version is not larger than $latest_version"
            exit 1
          else
            echo "Cargo.toml version $cargo_version is larger than $latest_version"
          fi

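The comparison step above fails the check whenever the Cargo.toml `major.minor` is not strictly greater than the latest release-branch version (equality also fails). A minimal Python sketch of that predicate (illustrative only; the function name is an assumption, and it mirrors the shell condition exactly):

```python
def version_check_passes(cargo_version: str, latest_version: str) -> bool:
    """True iff cargo major.minor is strictly greater than the latest release version."""
    cargo_major, cargo_minor = (int(p) for p in cargo_version.split("."))
    latest_major, latest_minor = (int(p) for p in latest_version.split("."))
    # Mirrors the shell: error (False) if cargo < latest, or equal major with minor <= latest.
    if cargo_major < latest_major or (
        cargo_major == latest_major and cargo_minor <= latest_minor
    ):
        return False
    return True

print(version_check_passes("2.2", "2.1"))  # → True
```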
# .github/workflows/pr-title-checker.yml
name: PR Title Checker

on:
  pull_request:
    types: [opened, edited, labeled]

jobs:
  check:
    runs-on: ubuntu-latest
    name: pr-title-checker
    steps:
      - uses: thehanimo/pr-title-checker@v1.4.3
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          configuration_path: ".github/pr-title-checker-config.json"

# .github/workflows/protobuf-breaking.yml
name: Protobuf Breaking Check

on:
  pull_request:
    branches: [main]
    paths: [proto/**]

jobs:
  buf-breaking-check:
    runs-on: ubuntu-latest
    name: Check breaking changes in Protobuf files
    steps:
      - uses: actions/checkout@v4
      - uses: bufbuild/buf-setup-action@v1
        with:
          github_token: ${{ github.token }}
      # Run breaking change detection against the `main` branch
      - uses: bufbuild/buf-breaking-action@v1
        with:
          input: 'proto'
          against: 'https://github.com/risingwavelabs/risingwave.git#branch=main,subdir=proto'

# risingwavelabs/risingwave: .github/workflows/run-main-cron.yml
name: Label ci/main-cron/run-all for cherry-pick PRs

permissions:
  contents: read
  pull-requests: write

on:
  pull_request:
    branches:
      - 'release-*'
    types: [opened]

jobs:
  label-run-main-cron:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: Label PR
        run: |
          pr_number="${{ github.event.pull_request.number }}"
          echo "PR number: $pr_number"
          echo "Labeling PR #$pr_number with ci/main-cron/run-all"
          gh pr edit "$pr_number" --add-label "ci/main-cron/run-all"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# risingwavelabs/risingwave: .github/workflows/stale.yml
name: Mark stale issues and pull requests

on:
  schedule:
    - cron: '30 1 * * *'
  workflow_dispatch:
    inputs:
      # https://github.com/marketplace/actions/close-stale-issues#operations-per-run
      operationsPerRun:
        description: 'Max number of operations per run'
        required: true
        default: 30

jobs:
  stale:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write

    steps:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: |
            This issue has been open for 60 days with no activity.

            If you think it is still relevant today, and needs to be done *in the near future*, you can comment to update the status, or just manually remove the `no-issue-activity` label.

            You can also confidently close this issue as not planned to keep our backlog clean.
            Don't worry if you think the issue is still valuable to continue in the future.
            It's searchable and can be reopened when it's time. 😄
          stale-pr-message: |
            This PR has been open for 60 days with no activity.

            If it's blocked by code review, feel free to ping a reviewer or ask someone else to review it.

            If you think it is still relevant today, and have time to work on it *in the near future*, you can comment to update the status, or just manually remove the `no-pr-activity` label.

            You can also confidently close this PR to keep our backlog clean. (If no further action is taken, the PR will be automatically closed after 7 days. Sorry! 🙏)
            Don't worry if you think the PR is still valuable to continue in the future.
            It's searchable and can be reopened when it's time. 😄
          close-pr-message: |
            Closing this PR as no further action has been taken in the 7 days since it was marked as stale. Sorry! 🙏

            You can reopen it when you have time to continue working on it.
          stale-issue-label: 'no-issue-activity'
          stale-pr-label: 'no-pr-activity'
          days-before-close: -1
          days-before-pr-close: 7
          operations-per-run: ${{ github.event.inputs.operationsPerRun || 30 }}
          enable-statistics: true
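The timing rules above combine three knobs: items go stale after 60 days of inactivity, stale PRs close 7 days later (`days-before-pr-close: 7`), and stale issues are never auto-closed (`days-before-close: -1`). A simplified Python model of that policy (the real action tracks when the stale label was applied, not just total idle time, so this is an approximation):

```python
from datetime import date, timedelta

# Illustrative model of the stale policy configured above.
DAYS_BEFORE_STALE = 60      # actions/stale default used here
DAYS_BEFORE_PR_CLOSE = 7    # stale PRs close after 7 more days

def action_for(last_activity: date, today: date, is_pr: bool) -> str:
    idle = (today - last_activity).days
    if idle < DAYS_BEFORE_STALE:
        return "none"
    # Issues are marked stale but never auto-closed (days-before-close: -1).
    if is_pr and idle >= DAYS_BEFORE_STALE + DAYS_BEFORE_PR_CLOSE:
        return "close"
    return "mark-stale"

today = date(2024, 6, 1)
print(action_for(today - timedelta(days=10), today, is_pr=True))    # none
print(action_for(today - timedelta(days=61), today, is_pr=True))    # mark-stale
print(action_for(today - timedelta(days=70), today, is_pr=True))    # close
print(action_for(today - timedelta(days=400), today, is_pr=False))  # mark-stale
```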
# risingwavelabs/risingwave: .github/workflows/typo.yml
name: Typo checker
on: [pull_request]

jobs:
  run:
    name: Spell Check with Typos
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Actions Repository
        uses: actions/checkout@v4

      - name: Check spelling of the entire repository
        uses: crate-ci/typos@v1.31.1
# risingwavelabs/risingwave: ci/docker-compose.yml
services:
  # TODO: Rename this to `postgres`
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
    ports:
      - 5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    command:
      ["postgres", "-c", "wal_level=logical", "-c", "max_replication_slots=50", "-c", "max_wal_senders=20"]

  mysql:
    image: mysql:8.0
    command: --character-set-server=utf8 --collation-server=utf8_general_ci
    ports:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_USER=mysqluser
      - MYSQL_PASSWORD=mysqlpw
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -u root -p123456"]
      interval: 5s
      timeout: 5s
      retries: 5

  # TODO: reuse the same mysql instance for connector test and meta store
  # after https://github.com/risingwavelabs/risingwave/issues/19783 is addressed
  mysql-meta:
    image: mysql:8.0
    command: --character-set-server=utf8 --collation-server=utf8_general_ci
    ports:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_USER=mysqluser
      - MYSQL_PASSWORD=mysqlpw
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -u root -p123456"]
      interval: 5s
      timeout: 5s
      retries: 5

  message_queue:
    image: "redpandadata/redpanda:latest"
    command:
      - redpanda
      - start
      - "--smp"
      - "1"
      - "--reserve-memory"
      - 0M
      - "--memory"
      - 4G
      - "--overprovisioned"
      - "--node-id"
      - "0"
      - "--check=false"
      - "--kafka-addr"
      - "PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092"
      - "--advertise-kafka-addr"
      - "PLAINTEXT://message_queue:29092,OUTSIDE://localhost:9092"
    expose:
      - "29092"
      - "9092"
      - "9644"
    ports:
      - "29092:29092"
      - "9092:9092"
      - "9644:9644"
      # Don't use Redpanda's schema registry, use the separated service instead
      # - "8081:8081"
    environment: {}
    container_name: message_queue
    healthcheck:
      test: curl -f localhost:9644/v1/status/ready
      interval: 1s
      timeout: 5s
      retries: 5

  source-test-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    depends_on:
      - mysql
      - mysql-meta
      - sqlserver-server
      - db
      - message_queue
      - schemaregistry
      - mongodb
      - mongodb-setup
      - mongo_data_generator
      - nats-server
    volumes:
      - ..:/risingwave
    stop_grace_period: 30s

  sink-test-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    depends_on:
      - mysql
      - mysql-meta
      - db
      - message_queue
      - schemaregistry
      - elasticsearch
      - clickhouse-server
      - redis-server
      - pulsar-server
      - mqtt-server
      - cassandra-server
      - doris-server
      - starrocks-fe-server
      - starrocks-be-server
      - mongodb
      - mongodb-setup
      - sqlserver-server
    volumes:
      - ..:/risingwave
    stop_grace_period: 30s

  iceberg-test-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    depends_on:
      - mysql
      - db
      - message_queue
      - schemaregistry
    volumes:
      - ..:/risingwave
    stop_grace_period: 30s

  rw-build-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    volumes:
      - ..:/risingwave
    stop_grace_period: 30s

  # Standard environment for CI, including MySQL and Postgres for metadata.
  ci-standard-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    depends_on:
      - mysql-meta
      - db
    volumes:
      - ..:/risingwave
    stop_grace_period: 30s

  iceberg-engine-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    depends_on:
      - db
    volumes:
      - ..:/risingwave
    stop_grace_period: 30s

  ci-flamegraph-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    # NOTE(kwannoel): This is used in order to permit
    # syscalls for `nperf` (perf_event_open),
    # so it can do CPU profiling.
    # These options should NOT be used for other services.
    privileged: true
    userns_mode: host
    volumes:
      - ..:/risingwave

  regress-test-env:
    image: public.ecr.aws/w1p7b4n3/rw-build-env:v20250418
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ..:/risingwave

  release-env-x86:
    # Build binaries on an earlier Linux distribution (therefore with an earlier GLIBC version).
    # `manylinux_2_28` is based on AlmaLinux 8 with GLIBC 2.28.
    #
    # GLIBC versions on some systems:
    # - Amazon Linux 2023 (AL2023): 2.34
    # - Ubuntu 20.04: 2.31
    #
    # Systems that we don't provide support for:
    # - Ubuntu 18.04: 2.27 (Already EOL 2023-05-31)
    # - Amazon Linux 2: 2.26 (Originally EOL 2023-06-30, superseded by AL2023)
    image: quay.io/pypa/manylinux_2_28_x86_64:2025.03.23-1
    working_dir: /mnt
    volumes:
      - ..:/mnt

  release-env-arm:
    image: quay.io/pypa/manylinux_2_28_aarch64:2025.03.23-1
    working_dir: /mnt
    volumes:
      - ..:/mnt

  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    environment:
      - xpack.security.enabled=true
      - discovery.type=single-node
      - ELASTIC_PASSWORD=risingwave
    ports:
      - 9200:9200

  clickhouse-server:
    image: clickhouse/clickhouse-server:23.3.8.21-alpine
    container_name: clickhouse-server-1
    hostname: clickhouse-server-1
    ports:
      - "8123:8123"
      - "9000:9000"
      - "9004:9004"
    expose:
      - 9009

  redis-server:
    container_name: redis-server
    image: "redis:latest"
    expose:
      - 6379
    ports:
      - 6378:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50

  cassandra-server:
    container_name: cassandra-server
    image: cassandra:4.0
    ports:
      - 9042:9042
    environment:
      - CASSANDRA_CLUSTER_NAME=cloudinfra

  doris-server:
    container_name: doris-server
    image: apache/doris:doris-all-in-one-2.1.0
    ports:
      - 8030:8030
      - 8040:8040
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9030"]
      interval: 5s
      timeout: 5s
      retries: 30

  sqlserver-server:
    container_name: sqlserver-server
    image: mcr.microsoft.com/mssql/server:2022-latest
    hostname: sqlserver-server
    ports:
      - 1433:1433
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "SomeTestOnly@SA"
      MSSQL_AGENT_ENABLED: "true"

  starrocks-fe-server:
    container_name: starrocks-fe-server
    image: starrocks/fe-ubuntu:3.1.7
    hostname: starrocks-fe-server
    command: /opt/starrocks/fe/bin/start_fe.sh
    ports:
      - 28030:8030
      - 29020:9020
      - 29030:9030
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9030"]
      interval: 5s
      timeout: 5s
      retries: 30

  starrocks-be-server:
    image: starrocks/be-ubuntu:3.1.7
    command:
      - /bin/bash
      - -c
      - |
        sleep 15s; mysql --connect-timeout 2 -h starrocks-fe-server -P9030 -uroot -e "alter system add backend \"starrocks-be-server:9050\";"
        /opt/starrocks/be/bin/start_be.sh
    ports:
      - 28040:8040
      - 29050:9050
    hostname: starrocks-be-server
    container_name: starrocks-be-server
    depends_on:
      - starrocks-fe-server

  # Temporary workaround for json schema registry test since redpanda only supports
  # protobuf/avro schema registry. Should be removed after the support.
  # Related tracking issue:
  # https://github.com/redpanda-data/redpanda/issues/1878
  schemaregistry:
    container_name: schemaregistry
    hostname: schemaregistry
    image: confluentinc/cp-schema-registry:latest
    depends_on:
      - message_queue
    ports:
      - "8082:8082"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry
      SCHEMA_REGISTRY_LISTENERS: http://schemaregistry:8082
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: message_queue:29092

  pulsar-server:
    container_name: pulsar-server
    image: apachepulsar/pulsar:latest
    command: bin/pulsar standalone
    ports:
      - "6650:6650"
      - "6651:8080"
    expose:
      - "8080"
      - "6650"
    healthcheck:
      test: ["CMD-SHELL", "bin/pulsar-admin brokers healthcheck"]
      interval: 5s
      timeout: 5s
      retries: 5

  mongodb:
    image: mongo:4.4
    ports:
      - "27017"
    command: --replSet rs0 --oplogSize 128
    restart: always
    healthcheck:
      test: "echo 'db.runCommand({ping: 1})' | mongo"
      interval: 5s
      timeout: 10s
      retries: 3

  mongodb-setup:
    image: mongo:4.4
    container_name: mongodb-setup
    depends_on:
      - mongodb
    entrypoint:
      [
        "bash",
        "-c",
        "sleep 10 && mongo --host mongodb:27017 /config-replica.js && sleep 10",
      ]
    restart: "no"
    volumes:
      - ./mongodb/config-replica.js:/config-replica.js

  mongo_data_generator:
    build:
      context: .
      dockerfile: ./mongodb/Dockerfile.generator
    container_name: mongo_data_generator
    depends_on:
      - mongodb
    environment:
      MONGO_HOST: mongodb
      MONGO_PORT: 27017
      MONGO_DB_NAME: random_data

  mqtt-server:
    image: eclipse-mosquitto
    command:
      - sh
      - -c
      - echo "running command"; printf 'allow_anonymous true\nlistener 1883 0.0.0.0' > /mosquitto/config/mosquitto.conf; echo "starting service..."; cat /mosquitto/config/mosquitto.conf; /docker-entrypoint.sh; /usr/sbin/mosquitto -c /mosquitto/config/mosquitto.conf
    ports:
      - 1883:1883
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "(mosquitto_sub -h localhost -p 1883 -t 'topic' -E -i probe 2>&1 | grep Error) && exit 1 || exit 0",
        ]
      interval: 10s
      timeout: 10s
      retries: 6

  nats-server:
    image: nats:latest
    command: ["-js"]
    ports:
      - "4222:4222"
      - "8222:8222"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8222/healthz"]
      interval: 10s
      timeout: 5s
      retries: 3
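The health checks in this compose file all follow the same interval/timeout/retries shape, e.g. `pg_isready` every 5s with up to 5 retries. A sketch of those semantics as a polling loop in Python (`probe` is a hypothetical callable standing in for the check command; real Compose also times out each individual probe):

```python
import time

# Poll a probe up to `retries` times, waiting `interval` seconds between
# attempts; report healthy as soon as one probe succeeds.
def wait_healthy(probe, interval: float = 5.0, retries: int = 5) -> bool:
    for _ in range(retries):
        if probe():
            return True
        time.sleep(interval)
    return False

# Demo with a probe that succeeds on the third attempt (interval=0 so
# the demo runs instantly).
attempts = []
def flaky_probe():
    attempts.append(1)
    return len(attempts) >= 3

print(wait_healthy(flaky_probe, interval=0))  # True
print(len(attempts))                          # 3
```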
# risingwavelabs/risingwave: ci/workflows/docker-arm-fast.yml
auto-retry: &auto-retry
  automatic:
    # Agent terminated because the AWS EC2 spot instance was killed by AWS.
    - signal_reason: agent_stop
      limit: 3
    - exit_status: -1
      signal_reason: none
      limit: 3

steps:
  - label: "docker-build-push: aarch64"
    if: build.env("SKIP_TARGET_AARCH64") != "true"
    command: "CARGO_PROFILE=patch-production ci/scripts/docker.sh"
    key: "build-aarch64"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GHCR_USERNAME: ghcr-username
            GHCR_TOKEN: ghcr-token
            DOCKER_TOKEN: docker-token
            GITHUB_TOKEN: github-token
    agents:
      queue: "linux-arm64"
    retry: *auto-retry

  - label: "multi-arch-image-create-push"
    command: "SKIP_TARGET_AMD64=true ci/scripts/multi-arch-docker.sh"
    depends_on:
      - "build-aarch64"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GHCR_USERNAME: ghcr-username
            GHCR_TOKEN: ghcr-token
            DOCKER_TOKEN: docker-token
    retry: *auto-retry
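The `&auto-retry` anchor and its `*auto-retry` aliases are plain YAML: after parsing, every step's `retry` key points at the same mapping defined once at the top. A small Python model of what the parser produces (hand-built dicts, not an actual YAML parse):

```python
# What `auto-retry: &auto-retry` defines once...
auto_retry = {
    "automatic": [
        {"signal_reason": "agent_stop", "limit": 3},
        {"exit_status": -1, "signal_reason": "none", "limit": 3},
    ]
}

# ...and what each `retry: *auto-retry` alias attaches to a step.
steps = [
    {"label": "docker-build-push: aarch64", "retry": auto_retry},
    {"label": "multi-arch-image-create-push", "retry": auto_retry},
]

# Aliases reference the same node, not a copy:
print(steps[0]["retry"] is steps[1]["retry"])  # True
```

This is why editing the anchor once updates the retry policy for every step that aliases it.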
# risingwavelabs/risingwave: ci/workflows/docker.yml
auto-retry: &auto-retry
  automatic:
    # Agent terminated because the AWS EC2 spot instance was killed by AWS.
    - signal_reason: agent_stop
      limit: 3
    - exit_status: -1
      signal_reason: none
      limit: 3

steps:
  - label: "docker-build-push: amd64"
    if: build.env("SKIP_TARGET_AMD64") != "true"
    command: "ci/scripts/docker.sh"
    key: "build-amd64"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GHCR_USERNAME: ghcr-username
            GHCR_TOKEN: ghcr-token
            DOCKER_TOKEN: docker-token
            GITHUB_TOKEN: github-token
    retry: *auto-retry

  - label: "docker-build-push: aarch64"
    if: build.env("SKIP_TARGET_AARCH64") != "true"
    command: "ci/scripts/docker.sh"
    key: "build-aarch64"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GHCR_USERNAME: ghcr-username
            GHCR_TOKEN: ghcr-token
            DOCKER_TOKEN: docker-token
            GITHUB_TOKEN: github-token
    agents:
      queue: "linux-arm64"
    retry: *auto-retry

  - label: "multi-arch-image-create-push"
    command: "ci/scripts/multi-arch-docker.sh"
    depends_on:
      - "build-amd64"
      - "build-aarch64"
    key: "multi-arch-image-create-push"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GHCR_USERNAME: ghcr-username
            GHCR_TOKEN: ghcr-token
            DOCKER_TOKEN: docker-token
    retry: *auto-retry

  - label: "pre build binary: amd64"
    if: build.env("SKIP_TARGET_AMD64") != "true"
    command: "ci/scripts/release.sh"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: release-env-x86
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          propagate-environment: true
          environment:
            - BINARY_NAME
            - GITHUB_TOKEN
    retry: *auto-retry

  - label: "pre build binary: aarch64"
    if: build.env("SKIP_TARGET_AARCH64") != "true"
    command: "ci/scripts/release.sh"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: release-env-arm
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          propagate-environment: true
          environment:
            - BINARY_NAME
            - GITHUB_TOKEN
    agents:
      queue: "linux-arm64"
    retry: *auto-retry

  - label: "docker scout"
    if: build.env("ENABLE_DOCKER_SCOUT") == "true"
    key: docker-scout
    command: "ci/scripts/docker-scout.sh"
    depends_on:
      - "multi-arch-image-create-push"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GHCR_USERNAME: ghcr-username
            GHCR_TOKEN: ghcr-token
            DOCKER_TOKEN: docker-token
    retry: *auto-retry

  - label: "generate notification step"
    if: build.env("ENABLE_DOCKER_SCOUT") == "true"
    depends_on:
      - "docker-scout"
    command: ci/scripts/docker-scout-notify.sh
# risingwavelabs/risingwave: ci/workflows/gen-flamegraph-cron.yml
steps:
  # Builds cpu flamegraph env
  - label: "cpu-flamegraph-env-build"
    key: "cpu-flamegraph-env-build"
    command: "ci/scripts/flamegraph-env-build.sh"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: rw-build-env
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          environment:
            - GITHUB_TOKEN
    timeout_in_minutes: 20

  - label: "Generate CPU flamegraph"
    command: "NEXMARK_QUERIES=all ci/scripts/gen-flamegraph.sh cpu"
    depends_on: "cpu-flamegraph-env-build"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: ci-flamegraph-env
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          environment:
            - GITHUB_TOKEN
    # TODO(kwannoel): Here are the areas that can be further optimized:
    # - Nexmark event generation: ~3min for 100mil records.
    # - Generate Flamegraph: ~15min (see https://github.com/koute/not-perf/issues/30 on optimizing)
    # - Building RW artifacts: ~8min
    timeout_in_minutes: 720
# risingwavelabs/risingwave: ci/workflows/gen-flamegraph.yml
steps:
  # Builds cpu flamegraph env
  - label: "cpu-flamegraph-env-build"
    key: "cpu-flamegraph-env-build"
    command: "ci/scripts/flamegraph-env-build.sh"
    if: build.env("CPU_FLAMEGRAPH") == "true" || build.env("HEAP_FLAMEGRAPH") == "true"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: rw-build-env
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          environment:
            - GITHUB_TOKEN
    timeout_in_minutes: 20

  # Generates cpu flamegraph if `CPU_FLAMEGRAPH=true` is set in the env.
  - label: "Generate CPU flamegraph"
    command: |
      NEXMARK_QUERIES="$NEXMARK_QUERIES" ci/scripts/gen-flamegraph.sh cpu
    depends_on: "cpu-flamegraph-env-build"
    if: build.env("CPU_FLAMEGRAPH") == "true"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: ci-flamegraph-env
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          environment:
            - GITHUB_TOKEN
    # TODO(kwannoel): Here are the areas that can be further optimized:
    # - Nexmark event generation: ~3min for 100mil records.
    # - Generate Flamegraph: ~15min (see https://github.com/koute/not-perf/issues/30 on optimizing)
    # - Building RW artifacts: ~8min
    timeout_in_minutes: 540

  # Generates heap flamegraph if `HEAP_FLAMEGRAPH=true` is set in the env.
  - label: "Generate Heap flamegraph"
    command: |
      NEXMARK_QUERIES="$NEXMARK_QUERIES" ci/scripts/gen-flamegraph.sh heap
    depends_on: "cpu-flamegraph-env-build"
    if: build.env("HEAP_FLAMEGRAPH") == "true"
    plugins:
      - seek-oss/aws-sm#v2.3.2:
          env:
            GITHUB_TOKEN: github-token
      - docker-compose#v5.5.0:
          run: ci-flamegraph-env
          config: ci/docker-compose.yml
          mount-buildkite-agent: true
          environment:
            - GITHUB_TOKEN
    timeout_in_minutes: 360
# risingwavelabs/risingwave: ci/workflows/main-cron-bisect.yml
auto-retry: &auto-retry
  automatic:
    # Agent terminated because the AWS EC2 spot instance was killed by AWS.
    - signal_reason: agent_stop
      limit: 3
    - exit_status: -1
      signal_reason: none
      limit: 3

steps:
  - label: "find regressed step"
    key: "find-regressed-step"
    command: "GOOD_COMMIT=$GOOD_COMMIT BAD_COMMIT=$BAD_COMMIT BISECT_BRANCH=$BISECT_BRANCH CI_STEPS=$CI_STEPS ci/scripts/find-regression.py start"
    if: build.env("CI_STEPS") != null
    retry: *auto-retry
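The `find-regression.py start` step drives a bisect between `GOOD_COMMIT` and `BAD_COMMIT`, re-running the selected `CI_STEPS` at each candidate. The core search can be sketched as a binary search for the first failing commit (`is_bad` is a hypothetical stand-in for launching a CI build on a commit; the real script also manages branches and Buildkite builds):

```python
# Given commits ordered from GOOD_COMMIT (first, known good) to
# BAD_COMMIT (last, known bad), find the first bad commit with
# O(log n) test runs instead of testing every commit.
def find_first_bad(commits, is_bad):
    lo, hi = 0, len(commits) - 1  # invariant: commits[lo] good, commits[hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid
        else:
            lo = mid
    return commits[hi]

commits = list(range(10))  # pretend commit ids 0..9; regression lands at 6
print(find_first_bad(commits, lambda c: c >= 6))  # 6
```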
anchors:\n - auto-retry: &auto-retry\n automatic:\n # Agent terminated because the AWS EC2 spot instance killed by AWS.\n - signal_reason: agent_stop\n limit: 3\n - exit_status: -1\n signal_reason: none\n limit: 3\n - plugins:\n # we need to override args, so didn't include image here in the anchor\n - docker-compose: &docker-compose\n run: rw-build-env\n config: ci/docker-compose.yml\n mount-buildkite-agent: true\n propagate-environment: true\n - docker-compose-standard: &docker-compose-standard\n <<: *docker-compose\n run: ci-standard-env\n\n - sql-backend: &sql-backend\n matrix:\n setup:\n backend: [""]\n endpoint: [""]\n adjustments:\n - with:\n backend: ""\n endpoint: ""\n skip: true # hack\n - with:\n backend: "sqlite"\n # sqlite3 /tmp/rwmeta.db\n endpoint: "sqlite:///tmp/rwmeta.db?mode=rwc"\n - with:\n backend: "postgres"\n # PGPASSWORD=postgres psql -h db -p 5432 -U postgres -d rwmeta\n endpoint: "postgres://postgres:postgres@db:5432/rwmeta"\n - with:\n backend: "mysql"\n # mysql -h mysql-meta -P 3306 -u root -p123456 -D rwmeta\n endpoint: "mysql://root:123456@mysql-meta:3306/rwmeta"\n env:\n RISEDEV_SQL_ENDPOINT: "{{matrix.endpoint}}"\n\nsteps:\n - label: "build"\n command: "ci/scripts/build.sh -p ci-release"\n key: "build"\n if: |\n build.env("CI_STEPS") !~ /(^|,)disable-build(,|$$)/\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 20\n retry: *auto-retry\n\n - label: "build other components"\n command: "ci/scripts/build-other.sh"\n key: "build-other"\n if: |\n build.env("CI_STEPS") !~ /(^|,)disable-build(,|$$)/\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - GITHUB_TOKEN\n timeout_in_minutes: 12\n retry: *auto-retry\n\n - label: "build simulation test"\n command: "ci/scripts/build-simulation.sh"\n key: "build-simulation"\n if: |\n build.env("CI_STEPS") !~ /(^|,)disable-build(,|$$)/\n plugins:\n - docker-compose#v5.5.0: 
*docker-compose\n timeout_in_minutes: 20\n retry: *auto-retry\n\n - label: "docslt"\n command: "ci/scripts/docslt.sh"\n key: "docslt"\n if: |\n build.env("CI_STEPS") !~ /(^|,)disable-build(,|$$)/\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - group: "end-to-end test (release)"\n steps:\n - label: "end-to-end test ({{matrix.backend}} backend)"\n key: "e2e-test-release"\n <<: *sql-backend\n command: "ci/scripts/e2e-test-serial.sh -p ci-release -m ci-3streaming-2serving-3fe"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-test"\n || build.env("CI_STEPS") =~ /(^|,)e2e-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose-standard\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 30\n retry: *auto-retry\n\n - label: "slow end-to-end test ({{matrix.backend}} backend)"\n key: "slow-e2e-test-release"\n <<: *sql-backend\n command: "ci/scripts/slow-e2e-test.sh -p ci-release -m ci-3streaming-2serving-3fe"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-slow-e2e-tests"\n || build.env("CI_STEPS") =~ /(^|,)slow-e2e-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0: *docker-compose-standard\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 9\n retry: *auto-retry\n\n - label: "end-to-end test (parallel, {{matrix.backend}} backend)"\n key: "e2e-test-release-parallel"\n <<: *sql-backend\n command: "ci/scripts/e2e-test-parallel.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-parallel-tests?(,|$$)/\n depends_on:\n - "build"\n - 
"build-other"\n - "docslt"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n BUILDKITE_ANALYTICS_TOKEN: buildkite-build-analytics-sqllogictest-token\n - docker-compose#v5.5.0: *docker-compose-standard\n - test-collector#v1.0.0:\n files: "*-junit.xml"\n format: "junit"\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 18\n retry: *auto-retry\n\n - group: "end-to-end connector test (release)"\n steps:\n - label: "end-to-end source test ({{matrix.backend}} backend)"\n key: "e2e-test-release-source"\n <<: *sql-backend\n command: "ci/scripts/e2e-source-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-source-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-source-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose-standard\n run: source-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 20\n retry: *auto-retry\n\n - label: "end-to-end sink test ({{matrix.backend}} backend)"\n key: "e2e-test-release-sink"\n <<: *sql-backend\n command: "ci/scripts/e2e-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose-standard\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 35\n retry: *auto-retry\n\n - label: "fuzz test"\n key: "fuzz-test"\n command: "ci/scripts/cron-fuzz-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-sqlsmith-fuzzing-tests"\n || build.env("CI_STEPS") =~ 
/(^|,)sqlsmith-fuzzing-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-simulation"\n plugins:\n - ./ci/plugins/swapfile\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 20\n retry: *auto-retry\n\n - label: "meta backup test (release)"\n key: "e2e-meta-backup-test-release"\n command: "ci/scripts/run-meta-backup-test.sh -p ci-release -m ci-3streaming-2serving-3fe"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-meta-backup-test"\n || build.env("CI_STEPS") =~ /(^|,)e2e-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n - "docslt"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 45\n retry: *auto-retry\n\n # The timeout should be strictly more than timeout in `pull-request.yml`.\n # This ensures our `main-cron` workflow will be stable.\n - label: "unit test"\n key: "unit-test"\n command: "ci/scripts/run-unit-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-unit-test"\n || build.env("CI_STEPS") =~ /(^|,)unit-tests?(,|$$)/\n plugins:\n - ./ci/plugins/swapfile\n - seek-oss/aws-sm#v2.3.2:\n env:\n CODECOV_TOKEN: my-codecov-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - CODECOV_TOKEN\n timeout_in_minutes: 22\n retry: *auto-retry\n\n - label: "unit test (madsim)"\n key: "unit-test-deterministic"\n command: "MADSIM_TEST_NUM=100 timeout 50m ci/scripts/deterministic-unit-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-unit-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)unit-tests?-deterministic-simulation(,|$$)/\n plugins:\n - docker-compose#v5.5.0: 
*docker-compose\n timeout_in_minutes: 50\n retry: *auto-retry\n\n - label: "integration test (madsim) - scale"\n key: "integration-test-deterministic-scale"\n command: "TEST_NUM=60 ci/scripts/deterministic-it-test.sh scale::"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-integration-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)integration-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 40\n parallelism: 4\n retry: *auto-retry\n\n - label: "integration test (madsim) - recovery"\n key: "integration-test-deterministic-recovery"\n command: "TEST_NUM=60 ci/scripts/deterministic-it-test.sh recovery::"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-integration-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)integration-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 70\n retry: *auto-retry\n\n - label: "integration test (madsim) - backfill"\n key: "integration-test-deterministic-backfill"\n command: "TEST_NUM=30 ci/scripts/deterministic-it-test.sh backfill_tests::"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-integration-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)integration-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 70\n retry: *auto-retry\n\n - label: "integration test 
(madsim) - storage"\n key: "integration-test-deterministic-storage"\n command: "TEST_NUM=30 ci/scripts/deterministic-it-test.sh storage::"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-integration-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)integration-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 70\n retry: *auto-retry\n\n - label: "integration test (madsim) - sink"\n key: "integration-test-deterministic-sink"\n command: "TEST_NUM=30 ci/scripts/deterministic-it-test.sh sink::"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-integration-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)integration-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 70\n retry: *auto-retry\n\n - label: "end-to-end test (madsim)"\n key: "e2e-test-deterministic"\n command: "TEST_NUM=32 ci/scripts/deterministic-e2e-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)e2e-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - GITHUB_TOKEN\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 60\n parallelism: 4\n retry: *auto-retry\n\n - label: "end-to-end test (madsim, random vnode count)"\n key: 
"e2e-test-deterministic-random-vnode-count"\n command: "TEST_NUM=32 RW_SIM_RANDOM_VNODE_COUNT=true ci/scripts/deterministic-e2e-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)e2e-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - GITHUB_TOKEN\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 60\n parallelism: 4\n retry: *auto-retry\n\n - label: "recovery test (madsim)"\n key: "recovery-test-deterministic"\n command: "TEST_NUM=12 KILL_RATE=1.0 BACKGROUND_DDL_RATE=0.0 ci/scripts/deterministic-recovery-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-recovery-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)recovery-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n # Only upload zipped files, otherwise the logs are too large.\n - ./ci/plugins/upload-failure-logs-zipped\n timeout_in_minutes: 60\n parallelism: 4\n retry: *auto-retry\n\n # DDL statements will randomly run with background_ddl.\n - label: "background_ddl, arrangement_backfill recovery test (madsim)"\n key: "background-ddl-arrangement-backfill-recovery-test-deterministic"\n command: "TEST_NUM=12 KILL_RATE=1.0 BACKGROUND_DDL_RATE=0.8 USE_ARRANGEMENT_BACKFILL=true timeout 90m ci/scripts/deterministic-recovery-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-recovery-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ 
/(^|,)recovery-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n # Only upload zipped files, otherwise the logs are too large.\n - ./ci/plugins/upload-failure-logs-zipped\n timeout_in_minutes: 60\n parallelism: 4\n retry: *auto-retry\n\n - label: "end-to-end iceberg test (release)"\n key: "e2e-iceberg-test"\n command: "ci/scripts/e2e-iceberg-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-iceberg-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-iceberg-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: iceberg-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 15\n parallelism: 2\n retry: *auto-retry\n\n - label: "e2e java-binding test (release)"\n key: "e2e-java-binding-tests"\n command: "ci/scripts/java-binding-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-java-binding-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-java-binding-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n # Extra 2 minutes to account for docker-compose latency.\n # See: https://github.com/risingwavelabs/risingwave/issues/9423#issuecomment-1521222169\n timeout_in_minutes: 15\n retry: *auto-retry\n\n - label: "S3 source check on AWS (json parser)"\n key: "s3-v2-source-check-aws-json-parser"\n command: "ci/scripts/s3-source-test.sh -p ci-release -s file_source.py -t json"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-s3-source-tests"\n || 
build.env("CI_STEPS") =~ /(^|,)s3-source-tests?(,|$$)/\n depends_on: build\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n S3_SOURCE_TEST_CONF: ci_s3_source_test_aws\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - S3_SOURCE_TEST_CONF\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 25\n retry: *auto-retry\n\n - label: "S3 sink on parquet and json file"\n key: "s3-sink-parquet-and-json-encode"\n command: "ci/scripts/s3-source-test.sh -p ci-release -s file_sink.py"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-s3-source-tests"\n || build.env("CI_STEPS") =~ /(^|,)s3-source-tests?(,|$$)/\n depends_on: build\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n S3_SOURCE_TEST_CONF: ci_s3_source_test_aws\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - S3_SOURCE_TEST_CONF\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 25\n retry: *auto-retry\n\n - label: "S3 source check on AWS (csv parser)"\n key: "s3-v2-source-check-aws-csv-parser"\n command: "ci/scripts/s3-source-test.sh -p ci-release -s file_source.py -t csv_without_header"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-s3-source-tests"\n || build.env("CI_STEPS") =~ /(^|,)s3-source-tests?(,|$$)/\n depends_on: build\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n S3_SOURCE_TEST_CONF: ci_s3_source_test_aws\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - S3_SOURCE_TEST_CONF\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 25\n retry: *auto-retry\n\n - label: "pulsar source check"\n key: "pulsar-source-tests"\n command: "ci/scripts/pulsar-source-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes 
"ci/run-pulsar-source-tests"\n || build.env("CI_STEPS") =~ /(^|,)pulsar-source-tests?(,|$$)/\n depends_on:\n - build\n - build-other\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n ASTRA_STREAMING_TEST_TOKEN: astra_streaming_test_token\n STREAMNATIVE_CLOUD_TEST_CONF: streamnative_cloud_test_conf\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - ASTRA_STREAMING_TEST_TOKEN\n - STREAMNATIVE_CLOUD_TEST_CONF\n timeout_in_minutes: 20\n retry: *auto-retry\n\n - label: "micro benchmark"\n key: "run-micro-benchmarks"\n command: "ci/scripts/run-micro-benchmarks.sh"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-micro-benchmarks"\n || build.env("CI_STEPS") =~ /(^|,)micro-benchmarks?(,|$$)/\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "upload micro-benchmark"\n key: "upload-micro-benchmarks"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-micro-benchmarks"\n || build.env("CI_STEPS") =~ /(^|,)micro-benchmarks?(,|$$)/\n command:\n - "BUILDKITE_BUILD_NUMBER=$BUILDKITE_BUILD_NUMBER ci/scripts/upload-micro-bench-results.sh"\n depends_on: "run-micro-benchmarks"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n BUILDKITE_TOKEN: buildkite_token\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - BUILDKITE_TOKEN\n - GITHUB_TOKEN\n timeout_in_minutes: 5\n\n # Backwards compatibility tests\n - label: "Backwards compatibility tests version_offset={{matrix.version_offset}}"\n key: "backwards-compat-tests"\n command: "VERSION_OFFSET={{matrix.version_offset}} RW_COMMIT=$BUILDKITE_COMMIT ci/scripts/backwards-compat-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || 
build.pull_request.labels includes "ci/run-backwards-compat-tests"\n || build.env("CI_STEPS") =~ /(^|,)backwards?-compat-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: source-test-env\n environment:\n - BUILDKITE_BRANCH\n - ./ci/plugins/upload-failure-logs\n matrix:\n setup:\n # Test the 4 latest versions against the latest main.\n # e.g.\n # 1: 2.0.0\n # 2: 1.1.1\n # 3: 1.0.1\n # 4: 1.0.0\n # Versions are ordered by the full version number, rather than by minor / major version.\n # We may change this to track only major versions in the future.\n version_offset:\n - "1"\n - "2"\n - "3"\n - "4"\n timeout_in_minutes: 30\n retry: *auto-retry\n\n # Sqlsmith differential testing\n - label: "Sqlsmith Differential Testing"\n key: "sqlsmith-differential-tests"\n command: "ci/scripts/sqlsmith-differential-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-sqlsmith-differential-tests"\n || build.env("CI_STEPS") =~ /(^|,)sqlsmith-differential-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 40\n\n - label: "Backfill tests"\n key: "backfill-tests"\n command: "BUILDKITE=${BUILDKITE:-} ci/scripts/backfill-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-backfill-tests"\n || build.env("CI_STEPS") =~ /(^|,)backfill-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: source-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 30\n retry: *auto-retry\n\n - label: "e2e standalone binary test"\n key: "e2e-standalone-binary-tests"\n command: "ci/scripts/e2e-test-serial.sh -p ci-release -m standalone"\n if: |\n !(build.pull_request.labels includes 
"ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-standalone-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-standalone-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 35\n retry: *auto-retry\n\n - label: "e2e single-node binary test"\n key: "e2e-single-node-binary-tests"\n command: "ci/scripts/e2e-test-serial.sh -p ci-release -m single-node"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-single-node-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-single-node-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 25\n retry: *auto-retry\n\n - label: "end-to-end test for opendal storage backend (parallel)"\n key: "e2e-test-opendal-parallel"\n command: "ci/scripts/e2e-test-parallel-for-opendal.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-tests-for-opendal"\n || build.env("CI_STEPS") =~ /(^|,)e2e-parallel-tests?-for-opendal(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 18\n retry: *auto-retry\n\n - label: "end-to-end deltalake sink test"\n key: "e2e-deltalake-sink-rust-tests"\n command: "ci/scripts/e2e-deltalake-sink-rust-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-deltalake-sink-rust-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-deltalake-sink-rust-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - 
docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end redis sink test"\n key: "e2e-redis-sink-tests"\n command: "ci/scripts/e2e-redis-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-redis-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-redis-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end doris sink test"\n key: "e2e-doris-sink-tests"\n command: "ci/scripts/e2e-doris-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-doris-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-doris-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end starrocks sink test"\n key: "e2e-starrocks-sink-tests"\n command: "ci/scripts/e2e-starrocks-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-starrocks-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-starrocks-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end cassandra sink test"\n key: 
"e2e-cassandra-sink-tests"\n command: "ci/scripts/e2e-cassandra-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-cassandra-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-cassandra-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end clickhouse sink test"\n key: "e2e-clickhouse-sink-tests"\n command: "ci/scripts/e2e-clickhouse-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-clickhouse-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-clickhouse-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end time travel test"\n key: "e2e-time-travel-tests"\n command: "ci/scripts/e2e-time-travel-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-time-travel-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-time-travel-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n - "docslt"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end sqlserver sink test"\n key: "e2e-sqlserver-sink-tests"\n command: "ci/scripts/e2e-sqlserver-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || 
build.pull_request.labels includes "ci/run-e2e-sqlserver-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-sqlserver-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end pulsar sink test"\n key: "e2e-pulsar-sink-tests"\n command: "ci/scripts/e2e-pulsar-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-pulsar-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-pulsar-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end mqtt sink test"\n key: "e2e-mqtt-sink-tests"\n command: "ci/scripts/e2e-mqtt-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-mqtt-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-mqtt-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end mongodb sink test"\n key: "e2e-mongodb-sink-tests"\n command: "ci/scripts/e2e-mongodb-sink-test.sh -p ci-release"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-mongodb-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-mongodb-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: 
*docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "connector node integration test Java {{matrix.java_version}}"\n key: "connector-node-integration-test"\n command: "ci/scripts/connector-node-integration-test.sh -p ci-release -v {{matrix.java_version}}"\n if: |\n !(build.pull_request.labels includes "ci/main-cron/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-connector-node-integration-tests"\n || build.env("CI_STEPS") =~ /(^|,)connector-node-integration-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n matrix:\n setup:\n java_version:\n - "11"\n - "17"\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "release amd64 (dry-run)"\n command: "SKIP_RELEASE=1 ci/scripts/release.sh"\n if: |\n build.pull_request.labels includes "ci/run-release-dry-run" || build.env("CI_STEPS") =~ /(^|,)release-dry-run(,|$$)/\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: release-env-x86\n environment:\n - BINARY_NAME\n - GITHUB_TOKEN\n - BUILDKITE_TAG\n - BUILDKITE_SOURCE\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "release amd64"\n command: "ci/scripts/release.sh"\n if: build.tag != null\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: release-env-x86\n environment:\n - BINARY_NAME\n - GITHUB_TOKEN\n - BUILDKITE_TAG\n - BUILDKITE_SOURCE\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "release aarch64 (dry-run)"\n command: "SKIP_RELEASE=1 ci/scripts/release.sh"\n if: |\n build.pull_request.labels includes "ci/run-release-dry-run" || build.env("CI_STEPS") =~ /(^|,)release-dry-run(,|$$)/\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - 
docker-compose#v5.5.0:\n <<: *docker-compose\n run: release-env-arm\n environment:\n - BINARY_NAME\n - GITHUB_TOKEN\n - BUILDKITE_TAG\n - BUILDKITE_SOURCE\n agents:\n queue: "linux-arm64"\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "release aarch64"\n command: "ci/scripts/release.sh"\n if: build.tag != null\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: release-env-arm\n environment:\n - BINARY_NAME\n - GITHUB_TOKEN\n - BUILDKITE_TAG\n - BUILDKITE_SOURCE\n agents:\n queue: "linux-arm64"\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "release docker image: amd64"\n command: "ci/scripts/docker.sh"\n key: "build-amd64"\n if: build.tag != null\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GHCR_USERNAME: ghcr-username\n GHCR_TOKEN: ghcr-token\n DOCKER_TOKEN: docker-token\n GITHUB_TOKEN: github-token\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "docker-build-push: aarch64"\n command: "ci/scripts/docker.sh"\n key: "build-aarch64"\n if: build.tag != null\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GHCR_USERNAME: ghcr-username\n GHCR_TOKEN: ghcr-token\n DOCKER_TOKEN: docker-token\n GITHUB_TOKEN: github-token\n timeout_in_minutes: 60\n agents:\n queue: "linux-arm64"\n retry: *auto-retry\n\n - label: "multi arch image create push"\n command: "ci/scripts/multi-arch-docker.sh"\n if: build.tag != null\n depends_on:\n - "build-amd64"\n - "build-aarch64"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GHCR_USERNAME: ghcr-username\n GHCR_TOKEN: ghcr-token\n DOCKER_TOKEN: docker-token\n timeout_in_minutes: 10\n retry: *auto-retry\n\n # Notification test.\n - key: "test-notify"\n if: build.pull_request.labels includes "ci/main-cron/test-notify" || build.env("CI_STEPS") =~ /(^|,)test_notify(,|$$)/\n command: |\n bash -c 'echo test && exit -1'\n\n # Notification test.\n - key: "test-notify-2"\n if: build.pull_request.labels includes 
"ci/main-cron/test-notify" || build.env("CI_STEPS") =~ /(^|,)test_notify(,|$$)/\n command: |\n bash -c 'echo test && exit -1'\n\n # Notification test.\n - key: "test-notify-timeout"\n if: build.pull_request.labels includes "ci/main-cron/test-notify" || build.env("CI_STEPS") =~ /(^|,)test_notify(,|$$)/\n command: |\n bash -c 'echo test && sleep 300'\n timeout_in_minutes: 1\n\n - wait: true\n continue_on_failure: true\n allow_dependency_failure: true\n\n # Notifies on test failure for certain tests.\n # You may update `notify.py` to add tests and people to notify.\n # This should be the LAST part of the main-cron file.\n - label: "trigger failed test notification"\n if: build.pull_request.labels includes "ci/main-cron/test-notify" || build.branch == "main"\n command: "ci/scripts/notify.py"\n

| dataset_sample\yaml\risingwavelabs_risingwave\ci\workflows\main-cron.yml | main-cron.yml | YAML | 38,305 | 0.8 | 0.06262 | 0.028659 | react-lib | 179 | 2023-07-20T12:28:58.511162 | BSD-3-Clause | false | 1ceafc738d7bf6163a93a431e8261b54 |
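The `if` conditions throughout the pipeline above gate steps on `build.env("CI_STEPS")` with regexes of the form `/(^|,)step-name(,|$$)/` (in Buildkite YAML, `$$` is an escaped `$`): a step runs only when its name appears as a whole token in the comma-separated `CI_STEPS` list, never as a substring of a longer name. A minimal Python sketch of that matching logic, with hypothetical step names for illustration:

```python
import re

def ci_step_selected(ci_steps: str, step_pattern: str) -> bool:
    """Mimic the pipeline's `build.env("CI_STEPS") =~ /(^|,)name(,|$)/` check.

    `ci_steps` is a comma-separated list of step names. Anchoring on `^` or `,`
    before the name and `,` or `$` after it matches whole tokens only.
    """
    return re.search(rf"(^|,){step_pattern}(,|$)", ci_steps) is not None

# "e2e-tests?" selects both the singular and plural token forms.
assert ci_step_selected("unit-test,e2e-tests", "e2e-tests?")
assert ci_step_selected("e2e-test", "e2e-tests?")
# A longer step name merely containing the pattern is NOT selected.
assert not ci_step_selected("slow-e2e-tests", "e2e-tests?")
```

This also shows why the optional-suffix patterns such as `e2e-tests?` and `backwards?-compat-tests?` are used: one condition accepts either spelling of the step name without loosening the token boundaries.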
anchors:\n - auto-retry: &auto-retry\n automatic:\n # Agent terminated because the AWS EC2 spot instance was reclaimed by AWS.\n - signal_reason: agent_stop\n limit: 3\n - exit_status: -1\n signal_reason: none\n limit: 3\n - plugins:\n - cargo-cache: &cargo-cache\n nienbo/cache#v2.4.20:\n id: cargo\n key: "v1-cache-{{ id }}-{{ runner.os }}-{{ checksum 'Cargo.lock' }}"\n restore-keys:\n - "v1-cache-{{ id }}-{{ runner.os }}-"\n - "v1-cache-{{ id }}-"\n backend: s3\n s3:\n bucket: rw-ci-cache-bucket\n args: "--no-progress"\n paths:\n - ".cargo/registry/index"\n - ".cargo/registry/cache"\n - ".cargo/git"\n # we need to override args, so the image is not included in this anchor\n - docker-compose: &docker-compose\n run: rw-build-env\n config: ci/docker-compose.yml\n mount-buildkite-agent: true\n propagate-environment: true\n\nother-sql-backend: &other-sql-backend\n matrix:\n setup:\n label: [""]\n endpoint: [""]\n adjustments:\n - with:\n label: ""\n endpoint: ""\n skip: true # hack\n - with:\n label: "postgres"\n # PGPASSWORD=postgres psql -h db -p 5432 -U postgres -d rwmeta\n endpoint: "postgres://postgres:postgres@db:5432/rwmeta"\n - with:\n label: "mysql"\n # mysql -h mysql-meta -P 3306 -u root -p123456 -D rwmeta\n endpoint: "mysql://root:123456@mysql-meta:3306/rwmeta"\n env:\n RISEDEV_SQL_ENDPOINT: "{{matrix.endpoint}}"\n\n\nsteps:\n - label: "check ci image rebuild"\n plugins:\n - monorepo-diff#v1.2.0:\n diff: "git diff --name-only origin/main"\n watch:\n - path: "ci/build-ci-image.sh"\n config:\n command: "ci/build-ci-image.sh"\n label: "ci build images"\n - wait\n\n - label: "build"\n command: "ci/scripts/build.sh -p ci-dev"\n key: "build"\n plugins:\n - *cargo-cache\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 15\n retry: *auto-retry\n\n - label: "build other components"\n command: "ci/scripts/build-other.sh"\n key: "build-other"\n plugins:\n - *cargo-cache\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n 
<<: *docker-compose\n environment:\n - GITHUB_TOKEN\n timeout_in_minutes: 14\n retry: *auto-retry\n\n - label: "build (deterministic simulation)"\n command: "ci/scripts/build-simulation.sh"\n key: "build-simulation"\n plugins:\n - *cargo-cache\n - docker-compose#v5.5.0: *docker-compose\n retry: *auto-retry\n\n - label: "docslt"\n command: "ci/scripts/docslt.sh"\n key: "docslt"\n plugins:\n - *cargo-cache\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end test"\n command: "ci/scripts/e2e-test-serial.sh -p ci-dev -m ci-3streaming-2serving-3fe"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-test"\n || build.env("CI_STEPS") =~ /(^|,)e2e-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 35\n parallelism: 4\n retry: *auto-retry\n\n - label: "slow end-to-end test"\n key: "slow-e2e-test"\n command: "ci/scripts/slow-e2e-test.sh -p ci-dev -m ci-3streaming-2serving-3fe"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-slow-e2e-tests"\n || build.env("CI_STEPS") =~ /(^|,)slow-e2e-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 9\n retry: *auto-retry\n\n - label: "meta backup test"\n key: "e2e-meta-backup-test"\n command: "ci/scripts/run-meta-backup-test.sh -p ci-dev -m ci-3streaming-2serving-3fe"\n if: |\n build.pull_request.labels includes "ci/run-e2e-meta-backup-test"\n depends_on:\n - "build"\n - "build-other"\n - "docslt"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 45\n retry: *auto-retry\n\n - label: "end-to-end test 
(parallel)"\n command: "ci/scripts/e2e-test-parallel.sh -p ci-dev"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-parallel-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n - "docslt"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 38\n parallelism: 4\n retry: *auto-retry\n\n - label: "end-to-end test for opendal (parallel)"\n if: build.pull_request.labels includes "ci/run-opendal-tests" || build.env("CI_STEPS") =~ /(^|,)opendal-tests?(,|$$)/\n command: "ci/scripts/e2e-test-parallel-for-opendal.sh -p ci-dev"\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 14\n retry: *auto-retry\n\n - label: "end-to-end source test"\n command: "ci/scripts/e2e-source-test.sh -p ci-dev"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-source-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-source-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: source-test-env\n upload-container-logs: always\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 20\n parallelism: 2\n retry: *auto-retry\n\n - label: "end-to-end sink test"\n command: "ci/scripts/e2e-sink-test.sh -p ci-dev"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-sink-tests"\n || build.env("CI_STEPS") =~ /(^|,)e2e-sink-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 20\n parallelism: 
2\n cancel_on_build_failing: true\n retry: *auto-retry\n\n - label: "connector node integration test Java {{matrix.java_version}}"\n if: build.pull_request.labels includes "ci/run-connector-node-integration-tests" || build.env("CI_STEPS") =~ /(^|,)java-connector-node-integration-tests?(,|$$)/\n command: "ci/scripts/connector-node-integration-test.sh -p ci-dev -v {{matrix.java_version}}"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n matrix:\n setup:\n java_version:\n - "11"\n - "17"\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end iceberg test"\n if: build.pull_request.labels includes "ci/run-e2e-iceberg-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-iceberg-tests?(,|$$)/\n command: "ci/scripts/e2e-iceberg-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: iceberg-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 22\n parallelism: 2\n retry: *auto-retry\n\n - label: "end-to-end pulsar sink test"\n if: build.pull_request.labels includes "ci/run-e2e-pulsar-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-pulsar-sink-tests?(,|$$)/\n command: "ci/scripts/e2e-pulsar-sink-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end mqtt sink test"\n if: build.pull_request.labels includes "ci/run-e2e-mqtt-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-mqtt-sink-tests?(,|$$)/\n command: "ci/scripts/e2e-mqtt-sink-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end clickhouse sink 
test"\n if: build.pull_request.labels includes "ci/run-e2e-clickhouse-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-clickhouse-sink-tests?(,|$$)/\n command: "ci/scripts/e2e-clickhouse-sink-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end time travel test"\n key: "e2e-time-travel-tests"\n command: "ci/scripts/e2e-time-travel-test.sh -p ci-dev"\n if: build.pull_request.labels includes "ci/run-e2e-time-travel-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-time-travel-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-other"\n - "docslt"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 15\n retry: *auto-retry\n\n - label: "end-to-end sqlserver sink test"\n if: build.pull_request.labels includes "ci/run-e2e-sqlserver-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-sqlserver-sink-tests?(,|$$)/\n command: "ci/scripts/e2e-sqlserver-sink-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end deltalake sink test"\n if: build.pull_request.labels includes "ci/run-e2e-deltalake-sink-rust-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-deltalake-sink-rust-tests?(,|$$)/\n command: "ci/scripts/e2e-deltalake-sink-rust-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: sink-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "end-to-end redis sink test"\n if: build.pull_request.labels includes "ci/run-e2e-redis-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-redis-sink-tests?(,|$$)/\n command: 
"ci/scripts/e2e-redis-sink-test.sh -p ci-dev"\n    depends_on:\n      - "build"\n      - "build-other"\n    plugins:\n      - docker-compose#v5.5.0:\n          <<: *docker-compose\n          run: sink-test-env\n      - ./ci/plugins/upload-failure-logs\n    timeout_in_minutes: 10\n    retry: *auto-retry\n\n  - label: "end-to-end doris sink test"\n    if: build.pull_request.labels includes "ci/run-e2e-doris-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-doris-sink-tests?(,|$$)/\n    command: "ci/scripts/e2e-doris-sink-test.sh -p ci-dev"\n    depends_on:\n      - "build"\n      - "build-other"\n    plugins:\n      - docker-compose#v5.5.0:\n          <<: *docker-compose\n          run: sink-test-env\n      - ./ci/plugins/upload-failure-logs\n    timeout_in_minutes: 10\n    retry: *auto-retry\n\n  - label: "end-to-end starrocks sink test"\n    if: build.pull_request.labels includes "ci/run-e2e-starrocks-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-starrocks-sink-tests?(,|$$)/\n    command: "ci/scripts/e2e-starrocks-sink-test.sh -p ci-dev"\n    depends_on:\n      - "build"\n      - "build-other"\n    plugins:\n      - docker-compose#v5.5.0:\n          <<: *docker-compose\n          run: sink-test-env\n      - ./ci/plugins/upload-failure-logs\n    timeout_in_minutes: 10\n    retry: *auto-retry\n\n  - label: "end-to-end cassandra sink test"\n    if: build.pull_request.labels includes "ci/run-e2e-cassandra-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-cassandra-sink-tests?(,|$$)/\n    command: "ci/scripts/e2e-cassandra-sink-test.sh -p ci-dev"\n    depends_on:\n      - "build"\n      - "build-other"\n    plugins:\n      - docker-compose#v5.5.0:\n          <<: *docker-compose\n          run: sink-test-env\n      - ./ci/plugins/upload-failure-logs\n    timeout_in_minutes: 10\n    retry: *auto-retry\n\n  - label: "end-to-end mongodb sink test"\n    if: build.pull_request.labels includes "ci/run-e2e-mongodb-sink-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-mongodb-sink-tests?(,|$$)/\n    command: "ci/scripts/e2e-mongodb-sink-test.sh -p ci-dev"\n    depends_on:\n      - "build"\n      - "build-other"\n    plugins:\n      - docker-compose#v5.5.0:\n          <<: *docker-compose\n          run: sink-test-env\n      - 
./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "e2e java-binding test"\n if: build.pull_request.labels includes "ci/run-java-binding-tests" || build.env("CI_STEPS") =~ /(^|,)java-binding-tests?(,|$$)/\n command: "ci/scripts/java-binding-test.sh -p ci-dev"\n depends_on:\n - "build"\n - "build-other"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 10\n retry: *auto-retry\n\n - label: "regress test"\n command: "ci/scripts/regress-test.sh -p ci-dev"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-regress-test"\n || build.env("CI_STEPS") =~ /(^|,)regress-tests?(,|$$)/\n depends_on: "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: regress-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 5\n retry: *auto-retry\n\n # The timeout should be strictly less than timeout in `main-cron.yml`.\n # It should be as conservative as possible.\n # This ensures our `main-cron` workflow will be stable.\n - label: "unit test"\n command: "ci/scripts/run-unit-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-unit-test"\n || build.env("CI_STEPS") =~ /(^|,)unit-tests?(,|$$)/\n plugins:\n - *cargo-cache\n - ./ci/plugins/swapfile\n - seek-oss/aws-sm#v2.3.2:\n env:\n CODECOV_TOKEN: my-codecov-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - CODECOV_TOKEN\n timeout_in_minutes: 22\n retry: *auto-retry\n\n - label: "check"\n command: "ci/scripts/check.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-check"\n || build.env("CI_STEPS") =~ /(^|,)check(,|$$)/\n plugins:\n - *cargo-cache\n - docker-compose#v5.5.0: 
*docker-compose\n timeout_in_minutes: 25\n retry: *auto-retry\n\n - label: "check dylint"\n command: "ci/scripts/check-dylint.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-check"\n || build.env("CI_STEPS") =~ /(^|,)check(,|$$)/\n plugins:\n - *cargo-cache\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 25\n retry: *auto-retry\n\n - label: "unit test (deterministic simulation)"\n command: "ci/scripts/deterministic-unit-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-unit-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)unit-tests?-deterministic-simulation(,|$$)/\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 12\n cancel_on_build_failing: true\n retry: *auto-retry\n\n - label: "integration test (deterministic simulation)"\n command: "TEST_NUM=5 ci/scripts/deterministic-it-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-integration-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)integration-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 22\n retry: *auto-retry\n\n - label: "end-to-end test (deterministic simulation)"\n command: "TEST_NUM=4 ci/scripts/deterministic-e2e-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-e2e-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)e2e-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n 
GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - GITHUB_TOKEN\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 35\n parallelism: 4\n cancel_on_build_failing: true\n retry: *auto-retry\n\n - label: "recovery test (deterministic simulation)"\n command: "TEST_NUM=4 KILL_RATE=1.0 BACKGROUND_DDL_RATE=0.0 ci/scripts/deterministic-recovery-test.sh"\n if: |\n !(build.pull_request.labels includes "ci/pr/run-selected") && build.env("CI_STEPS") == null\n || build.pull_request.labels includes "ci/run-recovery-test-deterministic-simulation"\n || build.env("CI_STEPS") =~ /(^|,)recovery-tests?-deterministic-simulation(,|$$)/\n depends_on: "build-simulation"\n plugins:\n # - seek-oss/aws-sm#v2.3.2:\n # env:\n # BUILDKITE_ANALYTICS_TOKEN: buildkite-build-analytics-deterministic-token\n - docker-compose#v5.5.0: *docker-compose\n # Only upload zipped files, otherwise the logs is too much.\n - ./ci/plugins/upload-failure-logs-zipped\n # - test-collector#v1.0.0:\n # files: "*-junit.xml"\n # format: "junit"\n timeout_in_minutes: 40\n parallelism: 4\n cancel_on_build_failing: true\n retry: *auto-retry\n\n # The following jobs are triggered only when PR has corresponding labels.\n\n # Generates cpu flamegraph env\n - label: "flamegraph-env-build"\n key: "flamegraph-env-build"\n command: "ci/scripts/flamegraph-env-build.sh"\n if: |\n build.pull_request.labels includes "cpu_flamegraph"\n || build.pull_request.labels includes "ci/run-cpu-flamegraph"\n || build.pull_request.labels includes "heap_flamegraph"\n || build.pull_request.labels includes "ci/run-heap-flamegraph"\n || build.env("CI_STEPS") =~ /(^|,)(cpu-flamegraph|heap-flamegraph)(,|$$)/\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - GITHUB_TOKEN\n timeout_in_minutes: 20\n\n # Generates cpu flamegraph if label `cpu_flamegraph` is added to PR.\n - label: "Generate CPU flamegraph"\n 
command: "PULL_REQUEST=$BUILDKITE_PULL_REQUEST ci/scripts/gen-flamegraph.sh cpu"\n depends_on: "flamegraph-env-build"\n if: build.pull_request.labels includes "cpu_flamegraph" || build.pull_request.labels includes "ci/run-cpu-flamegraph" || build.env("CI_STEPS") =~ /(^|,)cpu-flamegraph(,|$$)/\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: ci-flamegraph-env\n environment:\n - GITHUB_TOKEN\n # TODO(kwannoel): Here are the areas that can be further optimized:\n # - Nexmark event generation: ~3min for 100mil records.\n # - Generate Flamegraph: ~15min (see https://github.com/koute/not-perf/issues/30 on optimizing)\n # - Building RW artifacts: ~8min\n timeout_in_minutes: 540\n\n # Generates heap flamegraph if label `heap_flamegraph` is added to PR.\n - label: "Generate Heap flamegraph"\n command: "PULL_REQUEST=$BUILDKITE_PULL_REQUEST ci/scripts/gen-flamegraph.sh heap"\n depends_on: "flamegraph-env-build"\n\n if: build.pull_request.labels includes "heap_flamegraph" || build.pull_request.labels includes "ci/run-heap-flamegraph" || build.env("CI_STEPS") =~ /(^|,)heap-flamegraph(,|$$)/\n\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: ci-flamegraph-env\n environment:\n - GITHUB_TOKEN\n # TODO(kwannoel): Here are the areas that can be further optimized:\n # - Nexmark event generation: ~3min for 100mil records.\n # - Generate Flamegraph: ~15min (see https://github.com/koute/not-perf/issues/30 on optimizing)\n # - Building RW artifacts: ~8min\n timeout_in_minutes: 60 # ~3-4 queries can run\n\n # Backwards compatibility tests\n - label: "Backwards compatibility tests"\n command: "VERSION_OFFSET={{matrix.version_offset}} RW_COMMIT=$BUILDKITE_COMMIT ci/scripts/backwards-compat-test.sh -p ci-dev"\n if: |\n build.pull_request.labels includes "breaking-change" ||\n build.pull_request.labels includes 
"ci/run-backwards-compat-tests" ||\n build.env("CI_STEPS") =~ /(^|,)backwards?-compat-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: source-test-env\n environment:\n - BUILDKITE_BRANCH\n - ./ci/plugins/upload-failure-logs\n matrix:\n setup:\n # Test the 4 latest versions against the latest main.\n # e.g.\n # 1: 2.0.0\n # 2: 1.1.1\n # 3: 1.0.1\n # 4: 1.0.0\n # It is ordered by the full version number, rather than minor / major version.\n # We can change to just be on major version in the future.\n version_offset:\n - "1"\n - "2"\n - "3"\n - "4"\n timeout_in_minutes: 25\n\n # Sqlsmith differential testing\n - label: "Sqlsmith Differential Testing"\n command: "ci/scripts/sqlsmith-differential-test.sh -p ci-dev"\n if: build.pull_request.labels includes "ci/run-sqlsmith-differential-tests" || build.env("CI_STEPS") =~ /(^|,)sqlsmith-differential-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: ci-flamegraph-env\n timeout_in_minutes: 40\n\n - label: "Backfill tests"\n command: "BUILDKITE=${BUILDKITE:-} ci/scripts/backfill-test.sh -p ci-dev"\n if: build.pull_request.labels includes "ci/run-backfill-tests" || build.env("CI_STEPS") =~ /(^|,)backfill-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: source-test-env\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 30\n\n - label: "e2e standalone binary test"\n command: "ci/scripts/e2e-test-serial.sh -p ci-dev -m standalone"\n if: build.pull_request.labels includes "ci/run-e2e-standalone-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-standalone-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 40\n retry: *auto-retry\n\n - label: "e2e single-node binary test"\n command: "ci/scripts/e2e-test-serial.sh -p ci-dev -m single-node"\n if: 
build.pull_request.labels includes "ci/run-e2e-single-node-tests" || build.env("CI_STEPS") =~ /(^|,)e2e-single-node-tests?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 30\n retry: *auto-retry\n\n - label: "end-to-end test ({{matrix.label}} backend)"\n <<: *other-sql-backend\n command: "ci/scripts/e2e-test-serial.sh -p ci-dev -m ci-3streaming-2serving-3fe"\n if: build.pull_request.labels includes "ci/run-e2e-test-other-backends" || build.env("CI_STEPS") =~ /(^|,)e2e-test-other-backends?(,|$$)/\n depends_on:\n - "build"\n plugins:\n - docker-compose#v5.5.0:\n <<: *docker-compose\n run: ci-standard-env\n propagate-environment: true\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 37\n retry: *auto-retry\n\n # FIXME(kwannoel): Let the github PR labeller label it, if sqlsmith source files has changes.\n - label: "fuzz test"\n command: "ci/scripts/pr-fuzz-test.sh -p ci-dev"\n if: build.pull_request.labels includes "ci/run-sqlsmith-fuzzing-tests" || build.env("CI_STEPS") =~ /(^|,)sqlsmith-fuzzing-tests?(,|$$)/\n depends_on:\n - "build"\n - "build-simulation"\n plugins:\n - ./ci/plugins/swapfile\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 15\n retry: *auto-retry\n\n - label: "deterministic fuzz test"\n command: "ci/scripts/run-deterministic-fuzz-test.sh -p ci-dev"\n if: build.pull_request.labels includes "ci/run-deterministic-sqlsmith-fuzzing-tests" || build.env("CI_STEPS") =~ /(^|,)deterministic-sqlsmith-fuzzing-tests?(,|$$)/\n depends_on:\n - "build-simulation"\n plugins:\n - ./ci/plugins/swapfile\n - docker-compose#v5.5.0: *docker-compose\n - ./ci/plugins/upload-failure-logs\n timeout_in_minutes: 15\n retry: *auto-retry\n\n - label: "enable ci/pr/run-selected only in draft PRs"\n if: build.pull_request.labels includes "ci/pr/run-selected" && !build.pull_request.draft\n commands:\n - echo 
"ci/pr/run-selected is only usable for draft Pull Requests"\n - exit 1\n\n - label: "micro benchmark"\n command: "ci/scripts/run-micro-benchmarks.sh"\n key: "run-micro-benchmarks"\n if: build.pull_request.labels includes "ci/run-micro-benchmarks" || build.env("CI_STEPS") =~ /(^|,)micro-benchmarks?(,|$$)/\n plugins:\n - docker-compose#v5.5.0: *docker-compose\n timeout_in_minutes: 60\n retry: *auto-retry\n\n - label: "upload micro-benchmark"\n if: build.pull_request.labels includes "ci/run-upload-micro-benchmark" || build.env("CI_STEPS") =~ /(^|,)upload-micro-benchmarks?(,|$$)/\n command:\n - "BUILDKITE_BUILD_NUMBER=$BUILDKITE_BUILD_NUMBER ci/scripts/upload-micro-bench-results.sh"\n depends_on: "run-micro-benchmarks"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n BUILDKITE_TOKEN: buildkite_token\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n <<: *docker-compose\n environment:\n - BUILDKITE_TOKEN\n - GITHUB_TOKEN\n timeout_in_minutes: 5\n | dataset_sample\yaml\risingwavelabs_risingwave\ci\workflows\pull-request.yml | pull-request.yml | YAML | 27,352 | 0.8 | 0.067017 | 0.05226 | node-utils | 127 | 2024-11-23T23:50:22.921490 | Apache-2.0 | false | 608b27b537ccdcb6204096ccb4c793ae |
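Throughout the pull-request pipeline above, optional steps are gated on either a PR label or an entry in the comma-separated `CI_STEPS` environment variable, matched with patterns of the form `/(^|,)step-name(,|$$)/` (in Buildkite YAML, `$$` escapes a literal `$`). A minimal Python sketch of that matching rule, with illustrative names that are not part of the pipeline itself:

```python
import re

def ci_step_selected(ci_steps: str, pattern: str) -> bool:
    """Return True if any entry of the comma-separated CI_STEPS
    value matches the given step pattern."""
    # `$$` in the Buildkite YAML is an escape for a literal `$`,
    # so the effective regex anchors on start-of-string or a comma.
    return re.search(pattern, ci_steps) is not None

# Pattern mirroring e.g. /(^|,)e2e-clickhouse-sink-tests?(,|$$)/
clickhouse = r"(^|,)e2e-clickhouse-sink-tests?(,|$)"

print(ci_step_selected("build,e2e-clickhouse-sink-test", clickhouse))  # True
print(ci_step_selected("e2e-clickhouse-sink-tests", clickhouse))       # True
print(ci_step_selected("some-other-step", clickhouse))                 # False
```

The `(^|,)` and `(,|$)` anchors make the pattern match whole entries of the list, so a superstring such as `my-e2e-clickhouse-sink-test` does not accidentally trigger the step; this is also why a stray space inside the group (as in `(^|,) e2e-…`) prevents the pattern from ever matching.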
# Generate Sqlsmith weekly snapshots.\nsteps:\n - label: "build"\n command: "ci/scripts/build.sh -p ci-dev"\n key: "build"\n plugins:\n - docker-compose#v5.5.0:\n run: rw-build-env\n config: ci/docker-compose.yml\n mount-buildkite-agent: true\n timeout_in_minutes: 15\n\n - label: "build (deterministic simulation)"\n command: "ci/scripts/build-simulation.sh"\n key: "build-simulation"\n plugins:\n - docker-compose#v5.5.0:\n run: rw-build-env\n config: ci/docker-compose.yml\n mount-buildkite-agent: true\n timeout_in_minutes: 15\n\n - label: "Generate sqlsmith snapshots"\n command: "ci/scripts/gen-sqlsmith-snapshots.sh"\n depends_on:\n - "build"\n - "build-simulation"\n plugins:\n - seek-oss/aws-sm#v2.3.2:\n env:\n GITHUB_TOKEN: github-token\n - docker-compose#v5.5.0:\n run: rw-build-env\n config: ci/docker-compose.yml\n mount-buildkite-agent: true\n environment:\n - GITHUB_TOKEN\n timeout_in_minutes: 60\n | dataset_sample\yaml\risingwavelabs_risingwave\ci\workflows\sqlsmith-snapshots.yml | sqlsmith-snapshots.yml | YAML | 1,067 | 0.8 | 0 | 0.027778 | python-kit | 753 | 2025-01-30T22:05:14.539447 | GPL-3.0 | false | 7d5281b2be7cfaec360ec4ed590e4304 |
---\nx-image: &image\n image: ${RW_IMAGE:-risingwavelabs/risingwave:v2.3.0}\nservices:\n risingwave-standalone:\n <<: *image\n command: "standalone --meta-opts=\" \\n --listen-addr 0.0.0.0:5690 \\n --advertise-addr 0.0.0.0:5690 \\n --dashboard-host 0.0.0.0:5691 \\n --prometheus-host 0.0.0.0:1250 \\n --prometheus-endpoint http://prometheus-0:9500 \\n --backend sql \\n --sql-endpoint postgres://postgres:@postgres-0:5432/metadata \\n --state-store hummock+azblob://<container_name> \\n --data-directory hummock_001 \\n --config-path /risingwave.toml\" \\n --compute-opts=\" \\n --config-path /risingwave.toml \\n --listen-addr 0.0.0.0:5688 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --advertise-addr 0.0.0.0:5688 \\n --async-stack-trace verbose \\n --parallelism 8 \\n --total-memory-bytes 21474836480 \\n --role both \\n --meta-address http://0.0.0.0:5690 \\n --memory-manager-target-bytes 22333829939 \" \\n --frontend-opts=\" \\n --config-path /risingwave.toml \\n --listen-addr 0.0.0.0:4566 \\n --advertise-addr 0.0.0.0:4566 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --health-check-listener-addr 0.0.0.0:6786 \\n --meta-addr http://0.0.0.0:5690 \\n --frontend-total-memory-bytes=4294967296\" \\n --compactor-opts=\" \\n --listen-addr 0.0.0.0:6660 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --advertise-addr 0.0.0.0:6660 \\n --meta-address http://0.0.0.0:5690 \\n --compactor-total-memory-bytes=4294967296\""\n expose:\n - "6660"\n - "4566"\n - "5688"\n - "5690"\n - "1250"\n - "5691"\n ports:\n - "4566:4566"\n - "5690:5690"\n - "5691:5691"\n - "1250:1250"\n depends_on:\n - postgres-0\n env_file: multiple_object_storage.env\n volumes:\n - "./risingwave.toml:/risingwave.toml"\n environment:\n RUST_BACKTRACE: "1"\n # If ENABLE_TELEMETRY is not set, telemetry will start by default\n ENABLE_TELEMETRY: ${ENABLE_TELEMETRY:-true}\n RW_TELEMETRY_TYPE: ${RW_TELEMETRY_TYPE:-"docker-compose"}\n container_name: risingwave-standalone\n healthcheck:\n test:\n - CMD-SHELL\n - bash -c 
'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/6660; exit $$?;'\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'\n - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'\n interval: 1s\n timeout: 5s\n restart: always\n deploy:\n resources:\n limits:\n memory: 28G\n reservations:\n memory: 28G\n postgres-0:\n extends:\n file: docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: docker-compose.yml\n service: grafana-0\n prometheus-0:\n extends:\n file: docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: docker-compose.yml\n service: message_queue\nvolumes:\n postgres-0:\n external: false\n grafana-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\n | dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-azblob.yml | docker-compose-with-azblob.yml | YAML | 3,667 | 0.8 | 0 | 0.009434 | awesome-app | 647 | 2023-11-11T09:35:34.575176 | GPL-3.0 | false | 4647301e3933f732e66464b645d6c78b |
---\nx-image: &image\n image: ${RW_IMAGE:-risingwavelabs/risingwave:v2.3.0}\nservices:\n risingwave-standalone:\n <<: *image\n command: "standalone --meta-opts=\" \\n --listen-addr 0.0.0.0:5690 \\n --advertise-addr 0.0.0.0:5690 \\n --dashboard-host 0.0.0.0:5691 \\n --prometheus-host 0.0.0.0:1250 \\n --prometheus-endpoint http://prometheus-0:9500 \\n --backend sql \\n --sql-endpoint postgres://postgres:@postgres-0:5432/metadata \\n --state-store hummock+gcs://<bucket-name> \\n --data-directory hummock_001 \\n --config-path /risingwave.toml\" \\n --compute-opts=\" \\n --config-path /risingwave.toml \\n --listen-addr 0.0.0.0:5688 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --advertise-addr 0.0.0.0:5688 \\n --async-stack-trace verbose \\n --parallelism 8 \\n --total-memory-bytes 21474836480 \\n --role both \\n --meta-address http://0.0.0.0:5690 \\n --memory-manager-target-bytes 22333829939 \" \\n --frontend-opts=\" \\n --config-path /risingwave.toml \\n --listen-addr 0.0.0.0:4566 \\n --advertise-addr 0.0.0.0:4566 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --health-check-listener-addr 0.0.0.0:6786 \\n --meta-addr http://0.0.0.0:5690 \\n --frontend-total-memory-bytes=4294967296\" \\n --compactor-opts=\" \\n --listen-addr 0.0.0.0:6660 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --advertise-addr 0.0.0.0:6660 \\n --meta-address http://0.0.0.0:5690 \\n --compactor-total-memory-bytes=4294967296\""\n expose:\n - "6660"\n - "4566"\n - "5688"\n - "5690"\n - "1250"\n - "5691"\n ports:\n - "4566:4566"\n - "5690:5690"\n - "5691:5691"\n - "1250:1250"\n depends_on:\n - postgres-0\n env_file: multiple_object_storage.env\n volumes:\n - "./risingwave.toml:/risingwave.toml"\n environment:\n RUST_BACKTRACE: "1"\n # If ENABLE_TELEMETRY is not set, telemetry will start by default\n ENABLE_TELEMETRY: ${ENABLE_TELEMETRY:-true}\n RW_TELEMETRY_TYPE: ${RW_TELEMETRY_TYPE:-"docker-compose"}\n container_name: risingwave-standalone\n healthcheck:\n test:\n - CMD-SHELL\n - bash -c 
'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/6660; exit $$?;'\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'\n - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'\n interval: 1s\n timeout: 5s\n restart: always\n deploy:\n resources:\n limits:\n memory: 28G\n reservations:\n memory: 28G\n postgres-0:\n extends:\n file: docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: docker-compose.yml\n service: grafana-0\n prometheus-0:\n extends:\n file: docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: docker-compose.yml\n service: message_queue\nvolumes:\n postgres-0:\n external: false\n grafana-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\n | dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-gcs.yml | docker-compose-with-gcs.yml | YAML | 3,661 | 0.8 | 0 | 0.009434 | vue-tools | 970 | 2024-03-09T04:21:54.241119 | BSD-3-Clause | false | 8071b615028787774a1f8ee8f47eb871 |
---\nservices:\n compactor-0:\n image: ghcr.io/risingwavelabs/risingwave:RisingWave_1.6.1_HDFS_2.7-x86_64\n command:\n - compactor-node\n - "--listen-addr"\n - "0.0.0.0:6660"\n - "--advertise-addr"\n - "compactor-0:6660"\n - "--prometheus-listener-addr"\n - "0.0.0.0:1260"\n - "--meta-address"\n - "http://meta-node-0:5690"\n - "--config-path"\n - /risingwave.toml\n expose:\n - "6660"\n - "1260"\n ports: []\n depends_on:\n - meta-node-0\n volumes:\n - "./risingwave.toml:/risingwave.toml"\n - "<HADOOP_HOME>:/opt/hadoop/"\n environment:\n - HADOOP_HOME=/opt/hadoop/\n container_name: compactor-0\n healthcheck:\n test:\n - CMD-SHELL\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/6660; exit $$?;'\n interval: 1s\n timeout: 5s\n retries: 5\n restart: always\n deploy:\n resources:\n limits:\n memory: 2G\n reservations:\n memory: 1G\n compute-node-0:\n image: "ghcr.io/risingwavelabs/risingwave:RisingWave_1.6.1_HDFS_2.7-x86_64"\n command:\n - compute-node\n - "--listen-addr"\n - "0.0.0.0:5688"\n - "--advertise-addr"\n - "compute-node-0:5688"\n - "--prometheus-listener-addr"\n - "0.0.0.0:1222"\n - "--meta-address"\n - "http://meta-node-0:5690"\n - "--config-path"\n - /risingwave.toml\n expose:\n - "5688"\n - "1222"\n ports: []\n depends_on:\n - meta-node-0\n volumes:\n - "./risingwave.toml:/risingwave.toml"\n - "<HADOOP_HOME>:/opt/hadoop/"\n environment:\n - HADOOP_HOME=/opt/hadoop/\n container_name: compute-node-0\n healthcheck:\n test:\n - CMD-SHELL\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'\n interval: 1s\n timeout: 5s\n retries: 5\n restart: always\n deploy:\n resources:\n limits:\n memory: 26G\n reservations:\n memory: 26G\n frontend-node-0:\n image: "ghcr.io/risingwavelabs/risingwave:RisingWave_1.6.1_HDFS_2.7-x86_64"\n command:\n - frontend-node\n - "--listen-addr"\n - "0.0.0.0:4566"\n - "--meta-addr"\n - "http://meta-node-0:5690"\n - "--advertise-addr"\n - "frontend-node-0:4566"\n - "--config-path"\n - 
/risingwave.toml\n      - "--prometheus-listener-addr"\n      - "0.0.0.0:2222"\n    expose:\n      - "4566"\n    ports:\n      - "4566:4566"\n    depends_on:\n      - meta-node-0\n    volumes:\n      - "./risingwave.toml:/risingwave.toml"\n    environment:\n      RUST_BACKTRACE: "1"\n    container_name: frontend-node-0\n    healthcheck:\n      test:\n        - CMD-SHELL\n        - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'\n      interval: 1s\n      timeout: 5s\n      retries: 5\n    restart: always\n    deploy:\n      resources:\n        limits:\n          memory: 2G\n        reservations:\n          memory: 1G\n  grafana-0:\n    image: "grafana/grafana-oss:latest"\n    command: []\n    expose:\n      - "3001"\n    ports:\n      - "3001:3001"\n    depends_on: []\n    volumes:\n      - "grafana-0:/var/lib/grafana"\n      - "./grafana.ini:/etc/grafana/grafana.ini"\n      - "./grafana-risedev-datasource.yml:/etc/grafana/provisioning/datasources/grafana-risedev-datasource.yml"\n      - "./grafana-risedev-dashboard.yml:/etc/grafana/provisioning/dashboards/grafana-risedev-dashboard.yml"\n      - "./dashboards:/dashboards"\n    environment: {}\n    container_name: grafana-0\n    healthcheck:\n      test:\n        - CMD-SHELL\n        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/3001; exit $$?;'\n      interval: 1s\n      timeout: 5s\n      retries: 5\n    restart: always\n  meta-node-0:\n    image: "ghcr.io/risingwavelabs/risingwave:RisingWave_1.6.1_HDFS_2.7-x86_64"\n    command:\n      - meta-node\n      - "--listen-addr"\n      - "0.0.0.0:5690"\n      - "--advertise-addr"\n      - "meta-node-0:5690"\n      - "--dashboard-host"\n      - "0.0.0.0:5691"\n      - "--prometheus-host"\n      - "0.0.0.0:1250"\n      - "--prometheus-endpoint"\n      - "http://prometheus-0:9500"\n      - "--backend"\n      - sql\n      - "--sql-endpoint"\n      - "postgres://postgres:@postgres-0:5432/metadata"\n      - "--state-store"\n      - "hummock+hdfs://<cluster_name>"\n      - "--data-directory"\n      - "hummock_001"\n      - "--config-path"\n      - /risingwave.toml\n    expose:\n      - "5690"\n      - "1250"\n      - "5691"\n    ports:\n      - "5690:5690"\n      - "5691:5691"\n    depends_on:\n      - "postgres-0"\n    volumes:\n      - "./risingwave.toml:/risingwave.toml"\n      - "<HADOOP_HOME>:/opt/hadoop"\n    environment:\n      - HADOOP_HOME=/opt/hadoop/\n      
- RW_TELEMETRY_TYPE=${RW_TELEMETRY_TYPE:-"docker-compose"}\n container_name: meta-node-0\n healthcheck:\n test:\n - CMD-SHELL\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'\n interval: 1s\n timeout: 5s\n retries: 5\n restart: always\n deploy:\n resources:\n limits:\n memory: 2G\n reservations:\n memory: 1G\n postgres-0:\n image: "postgres:15-alpine"\n environment:\n - POSTGRES_HOST_AUTH_METHOD=trust\n - POSTGRES_USER=postgres\n - POSTGRES_DB=metadata\n - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C\n expose:\n - "5432"\n ports:\n - "8432:5432"\n volumes:\n - "postgres-0:/var/lib/postgresql/data"\n prometheus-0:\n image: "prom/prometheus:latest"\n command:\n - "--config.file=/etc/prometheus/prometheus.yml"\n - "--storage.tsdb.path=/prometheus"\n - "--web.console.libraries=/usr/share/prometheus/console_libraries"\n - "--web.console.templates=/usr/share/prometheus/consoles"\n - "--web.listen-address=0.0.0.0:9500"\n - "--storage.tsdb.retention.time=30d"\n expose:\n - "9500"\n ports:\n - "9500:9500"\n depends_on: []\n volumes:\n - "prometheus-0:/prometheus"\n - "./prometheus.yaml:/etc/prometheus/prometheus.yml"\n environment: {}\n container_name: prometheus-0\n healthcheck:\n test:\n - CMD-SHELL\n - sh -c 'printf "GET /-/healthy HTTP/1.0\n\n" | nc localhost 9500; exit $$?;'\n interval: 1s\n timeout: 5s\n retries: 5\n restart: always\n message_queue:\n image: "redpandadata/redpanda:latest"\n command:\n - redpanda\n - start\n - "--smp"\n - "1"\n - "--reserve-memory"\n - 0M\n - "--memory"\n - 4G\n - "--overprovisioned"\n - "--node-id"\n - "0"\n - "--check=false"\n - "--kafka-addr"\n - "PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092"\n - "--advertise-kafka-addr"\n - "PLAINTEXT://message_queue:29092,OUTSIDE://localhost:9092"\n expose:\n - "29092"\n - "9092"\n - "9644"\n ports:\n - "29092:29092"\n - "9092:9092"\n - "9644:9644"\n - "8081:8081"\n depends_on: []\n volumes:\n - "message_queue:/var/lib/redpanda/data"\n 
environment: {}\n container_name: message_queue\n healthcheck:\n test: curl -f localhost:9644/v1/status/ready\n interval: 1s\n timeout: 5s\n retries: 5\n restart: always\nvolumes:\n postgres-0:\n external: false\n grafana-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\n | dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-hdfs.yml | docker-compose-with-hdfs.yml | YAML | 7,283 | 0.8 | 0 | 0 | react-lib | 387 | 2024-11-09T15:45:38.624700 | GPL-3.0 | false | 1ccfcc7c863c8fa2389dfb2add407524 |
---\nx-image: &image\n image: ${RW_IMAGE:-risingwavelabs/risingwave:v2.3.0}\nservices:\n risingwave-standalone:\n <<: *image\n command: "standalone --meta-opts=\" \\n --listen-addr 0.0.0.0:5690 \\n --advertise-addr 0.0.0.0:5690 \\n --dashboard-host 0.0.0.0:5691 \\n --prometheus-host 0.0.0.0:1250 \\n --prometheus-endpoint http://prometheus-0:9500 \\n --backend sql \\n --sql-endpoint postgres://postgres:@postgres-0:5432/metadata \\n --state-store hummock+fs://<local-path> \\n --data-directory hummock_001 \\n --config-path /risingwave.toml\" \\n --compute-opts=\" \\n --config-path /risingwave.toml \\n --listen-addr 0.0.0.0:5688 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --advertise-addr 0.0.0.0:5688 \\n --async-stack-trace verbose \\n --parallelism 8 \\n --total-memory-bytes 21474836480 \\n --role both \\n --meta-address http://0.0.0.0:5690 \\n --memory-manager-target-bytes 22333829939 \" \\n --frontend-opts=\" \\n --config-path /risingwave.toml \\n --listen-addr 0.0.0.0:4566 \\n --advertise-addr 0.0.0.0:4566 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --health-check-listener-addr 0.0.0.0:6786 \\n --meta-addr http://0.0.0.0:5690 \\n --frontend-total-memory-bytes=4294967296\" \\n --compactor-opts=\" \\n --listen-addr 0.0.0.0:6660 \\n --prometheus-listener-addr 0.0.0.0:1250 \\n --advertise-addr 0.0.0.0:6660 \\n --meta-address http://0.0.0.0:5690 \\n --compactor-total-memory-bytes=4294967296\""\n expose:\n - "6660"\n - "4566"\n - "5688"\n - "5690"\n - "1250"\n - "5691"\n ports:\n - "4566:4566"\n - "5690:5690"\n - "5691:5691"\n - "1250:1250"\n depends_on:\n - postgres-0\n volumes:\n - "./risingwave.toml:/risingwave.toml"\n environment:\n RUST_BACKTRACE: "1"\n # If ENABLE_TELEMETRY is not set, telemetry will start by default\n ENABLE_TELEMETRY: ${ENABLE_TELEMETRY:-true}\n RW_TELEMETRY_TYPE: ${RW_TELEMETRY_TYPE:-"docker-compose"}\n container_name: risingwave-standalone\n healthcheck:\n test:\n - CMD-SHELL\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > 
/dev/tcp/127.0.0.1/6660; exit $$?;'\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'\n - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'\n - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'\n interval: 1s\n timeout: 5s\n restart: always\n deploy:\n resources:\n limits:\n memory: <config-the-allocated-memory>\n reservations:\n memory: <config-the-allocated-memory>\n postgres-0:\n extends:\n file: docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: docker-compose.yml\n service: grafana-0\n prometheus-0:\n extends:\n file: docker-compose.yml\n service: prometheus-0\nvolumes:\n postgres-0:\n external: false\n grafana-0:\n external: false\n prometheus-0:\n external: false\n | dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-local-fs.yml | docker-compose-with-local-fs.yml | YAML | 3,542 | 0.8 | 0 | 0.010101 | vue-tools | 270 | 2025-03-10T15:52:18.215432 | BSD-3-Clause | false | 735bea2b2e866f4db88dda360e9f933e |
# dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-obs.yml
---
x-image: &image
  image: ${RW_IMAGE:-risingwavelabs/risingwave:v2.3.0}
services:
  risingwave-standalone:
    <<: *image
    command: "standalone --meta-opts=\" \
      --listen-addr 0.0.0.0:5690 \
      --advertise-addr 0.0.0.0:5690 \
      --dashboard-host 0.0.0.0:5691 \
      --prometheus-host 0.0.0.0:1250 \
      --prometheus-endpoint http://prometheus-0:9500 \
      --backend sql \
      --sql-endpoint postgres://postgres:@postgres-0:5432/metadata \
      --state-store hummock+obs://<bucket-name> \
      --data-directory hummock_001 \
      --config-path /risingwave.toml\" \
      --compute-opts=\" \
      --config-path /risingwave.toml \
      --listen-addr 0.0.0.0:5688 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --advertise-addr 0.0.0.0:5688 \
      --async-stack-trace verbose \
      --parallelism 8 \
      --total-memory-bytes 21474836480 \
      --role both \
      --meta-address http://0.0.0.0:5690 \
      --memory-manager-target-bytes 22333829939 \" \
      --frontend-opts=\" \
      --config-path /risingwave.toml \
      --listen-addr 0.0.0.0:4566 \
      --advertise-addr 0.0.0.0:4566 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --health-check-listener-addr 0.0.0.0:6786 \
      --meta-addr http://0.0.0.0:5690 \
      --frontend-total-memory-bytes=4294967296\" \
      --compactor-opts=\" \
      --listen-addr 0.0.0.0:6660 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --advertise-addr 0.0.0.0:6660 \
      --meta-address http://0.0.0.0:5690 \
      --compactor-total-memory-bytes=4294967296\""
    expose:
      - "6660"
      - "4566"
      - "5688"
      - "5690"
      - "1250"
      - "5691"
    ports:
      - "4566:4566"
      - "5690:5690"
      - "5691:5691"
      - "1250:1250"
    depends_on:
      - postgres-0
    env_file: multiple_object_storage.env
    volumes:
      - "./risingwave.toml:/risingwave.toml"
    environment:
      RUST_BACKTRACE: "1"
      # If ENABLE_TELEMETRY is not set, telemetry will start by default
      ENABLE_TELEMETRY: ${ENABLE_TELEMETRY:-true}
      RW_TELEMETRY_TYPE: ${RW_TELEMETRY_TYPE:-"docker-compose"}
    container_name: risingwave-standalone
    healthcheck:
      test:
        - CMD-SHELL
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/6660; exit $$?;'
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'
        - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'
      interval: 1s
      timeout: 5s
    restart: always
    deploy:
      resources:
        limits:
          memory: 28G
        reservations:
          memory: 28G
  postgres-0:
    extends:
      file: docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: docker-compose.yml
      service: grafana-0
  prometheus-0:
    extends:
      file: docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: docker-compose.yml
      service: message_queue
volumes:
  postgres-0:
    external: false
  grafana-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
# dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-oss.yml
---
x-image: &image
  image: ${RW_IMAGE:-risingwavelabs/risingwave:v2.3.0}
services:
  risingwave-standalone:
    <<: *image
    command: "standalone --meta-opts=\" \
      --listen-addr 0.0.0.0:5690 \
      --advertise-addr 0.0.0.0:5690 \
      --dashboard-host 0.0.0.0:5691 \
      --prometheus-host 0.0.0.0:1250 \
      --prometheus-endpoint http://prometheus-0:9500 \
      --backend sql \
      --sql-endpoint postgres://postgres:@postgres-0:5432/metadata \
      --state-store hummock+oss://<bucket-name> \
      --data-directory hummock_001 \
      --config-path /risingwave.toml\" \
      --compute-opts=\" \
      --config-path /risingwave.toml \
      --listen-addr 0.0.0.0:5688 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --advertise-addr 0.0.0.0:5688 \
      --async-stack-trace verbose \
      --parallelism 8 \
      --total-memory-bytes 21474836480 \
      --role both \
      --meta-address http://0.0.0.0:5690 \
      --memory-manager-target-bytes 22333829939 \" \
      --frontend-opts=\" \
      --config-path /risingwave.toml \
      --listen-addr 0.0.0.0:4566 \
      --advertise-addr 0.0.0.0:4566 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --health-check-listener-addr 0.0.0.0:6786 \
      --meta-addr http://0.0.0.0:5690 \
      --frontend-total-memory-bytes=4294967296\" \
      --compactor-opts=\" \
      --listen-addr 0.0.0.0:6660 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --advertise-addr 0.0.0.0:6660 \
      --meta-address http://0.0.0.0:5690 \
      --compactor-total-memory-bytes=4294967296\""
    expose:
      - "6660"
      - "4566"
      - "5688"
      - "5690"
      - "1250"
      - "5691"
    ports:
      - "4566:4566"
      - "5690:5690"
      - "5691:5691"
      - "1250:1250"
    depends_on:
      - postgres-0
    env_file: multiple_object_storage.env
    volumes:
      - "./risingwave.toml:/risingwave.toml"
    environment:
      RUST_BACKTRACE: "1"
      # If ENABLE_TELEMETRY is not set, telemetry will start by default
      ENABLE_TELEMETRY: ${ENABLE_TELEMETRY:-true}
      RW_TELEMETRY_TYPE: ${RW_TELEMETRY_TYPE:-"docker-compose"}
    container_name: risingwave-standalone
    healthcheck:
      test:
        - CMD-SHELL
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/6660; exit $$?;'
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'
        - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'
      interval: 1s
      timeout: 5s
    restart: always
    deploy:
      resources:
        limits:
          memory: 28G
        reservations:
          memory: 28G
  postgres-0:
    extends:
      file: docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: docker-compose.yml
      service: grafana-0
  prometheus-0:
    extends:
      file: docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: docker-compose.yml
      service: message_queue
volumes:
  postgres-0:
    external: false
  grafana-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
# dataset_sample\yaml\risingwavelabs_risingwave\docker\docker-compose-with-s3.yml
---
x-image: &image
  image: ${RW_IMAGE:-risingwavelabs/risingwave:v2.3.0}
services:
  risingwave-standalone:
    <<: *image
    command: "standalone --meta-opts=\" \
      --listen-addr 0.0.0.0:5690 \
      --advertise-addr 0.0.0.0:5690 \
      --dashboard-host 0.0.0.0:5691 \
      --prometheus-host 0.0.0.0:1250 \
      --prometheus-endpoint http://prometheus-0:9500 \
      --backend sql \
      --sql-endpoint postgres://postgres:@postgres-0:5432/metadata \
      --state-store hummock+s3://<bucket-name> \
      --data-directory hummock_001 \
      --config-path /risingwave.toml\" \
      --compute-opts=\" \
      --config-path /risingwave.toml \
      --listen-addr 0.0.0.0:5688 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --advertise-addr 0.0.0.0:5688 \
      --async-stack-trace verbose \
      --parallelism 8 \
      --total-memory-bytes 21474836480 \
      --role both \
      --meta-address http://0.0.0.0:5690 \
      --memory-manager-target-bytes 22333829939 \" \
      --frontend-opts=\" \
      --config-path /risingwave.toml \
      --listen-addr 0.0.0.0:4566 \
      --advertise-addr 0.0.0.0:4566 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --health-check-listener-addr 0.0.0.0:6786 \
      --meta-addr http://0.0.0.0:5690 \
      --frontend-total-memory-bytes=4294967296\" \
      --compactor-opts=\" \
      --listen-addr 0.0.0.0:6660 \
      --prometheus-listener-addr 0.0.0.0:1250 \
      --advertise-addr 0.0.0.0:6660 \
      --meta-address http://0.0.0.0:5690 \
      --compactor-total-memory-bytes=4294967296\""
    expose:
      - "6660"
      - "4566"
      - "5688"
      - "5690"
      - "1250"
      - "5691"
    ports:
      - "4566:4566"
      - "5690:5690"
      - "5691:5691"
      - "1250:1250"
    depends_on:
      - postgres-0
    env_file: aws.env
    volumes:
      - "./risingwave.toml:/risingwave.toml"
    environment:
      RUST_BACKTRACE: "1"
      # If ENABLE_TELEMETRY is not set, telemetry will start by default
      ENABLE_TELEMETRY: ${ENABLE_TELEMETRY:-true}
      RW_TELEMETRY_TYPE: ${RW_TELEMETRY_TYPE:-"docker-compose"}
    container_name: risingwave-standalone
    healthcheck:
      test:
        - CMD-SHELL
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/6660; exit $$?;'
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5688; exit $$?;'
        - bash -c '> /dev/tcp/127.0.0.1/4566; exit $$?;'
        - bash -c 'printf \"GET / HTTP/1.1\n\n\" > /dev/tcp/127.0.0.1/5690; exit $$?;'
      interval: 1s
      timeout: 5s
    restart: always
    deploy:
      resources:
        limits:
          memory: 28G
        reservations:
          memory: 28G
  postgres-0:
    extends:
      file: docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: docker-compose.yml
      service: grafana-0
  prometheus-0:
    extends:
      file: docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: docker-compose.yml
      service: message_queue
volumes:
  postgres-0:
    external: false
  grafana-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
# dataset_sample\yaml\risingwavelabs_risingwave\docker\grafana-risedev-dashboard.yml
# --- THIS FILE IS AUTO GENERATED BY RISEDEV ---
apiVersion: 1

providers:
  - name: 'risingwave-grafana'
    orgId: 1
    folder: ''
    folderUid: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 1
    allowUiUpdates: true
    options:
      path: /dashboards
      foldersFromFilesStructure: false
# dataset_sample\yaml\risingwavelabs_risingwave\docker\grafana-risedev-datasource.yml
# --- THIS FILE IS AUTO GENERATED BY RISEDEV ---
apiVersion: 1
deleteDatasources:
  - name: risedev-prometheus
datasources:
  - name: risedev-prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-0:9500
    withCredentials: false
    isDefault: true
    tlsAuth: false
    tlsAuthWithCACert: false
    version: 1
    editable: true
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\ad-click\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue
  datagen:
    build: ../datagen
    depends_on: [message_queue]
    command:
      - /bin/sh
      - -c
      - /datagen --mode ad-click --qps 2 kafka --brokers message_queue:29092
    restart: always
    container_name: datagen
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\ad-ctr\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue
  datagen:
    build: ../datagen
    depends_on: [message_queue]
    command:
      - /bin/sh
      - -c
      - /datagen --mode ad-ctr --qps 2 kafka --brokers message_queue:29092
    restart: always
    container_name: datagen
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\big-query-sink\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
    volumes:
      - "../../gcp-rwctest.json:/gcp-rwctest.json"
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  gcloud-cli:
    image: gcr.io/google.com/cloudsdktool/google-cloud-cli:alpine
    command: tail -f /dev/null
    volumes:
      - "../../gcp-rwctest.json:/gcp-rwctest.json"

volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\cassandra-and-scylladb-sink\docker-compose.yml
---
services:
  cassandra:
    image: cassandra:4.0
    ports:
      - 9042:9042
    environment:
      - CASSANDRA_CLUSTER_NAME=cloudinfra
    volumes:
      - "./prepare_cassandra_and_scylladb.sql:/prepare_cassandra_and_scylladb.sql"
  scylladb:
    image: scylladb/scylla:5.1
    # port 9042 is used by cassandra
    ports:
      - 9041:9042
    environment:
      - CASSANDRA_CLUSTER_NAME=cloudinfra
    volumes:
      - "./prepare_cassandra_and_scylladb.sql:/prepare_cassandra_and_scylladb.sql"
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\cdn-metrics\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue
  datagen:
    build: ../datagen
    depends_on: [message_queue]
    command:
      - /bin/sh
      - -c
      - /datagen --heavytail --mode cdn-metrics --qps 1000 kafka --brokers message_queue:29092
    restart: always
    container_name: datagen
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\citus-cdc\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  citus-master:
    container_name: citus-master
    image: "citusdata/citus:10.2.5"
    ports: ["6666:5432"]
    labels: ["com.citusdata.role=Master"]
    environment: &CITUS_ENV
      POSTGRES_USER: "myuser"
      POSTGRES_PASSWORD: "123456"
      PGUSER: "myuser"
      PGPASSWORD: "123456"
      POSTGRES_HOST_AUTH_METHOD: "trust"
      POSTGRES_DB: "mydb"
      CITUS_HOST: "citus-master"
  citus-worker-1:
    container_name: citus-worker-1
    ports: ["6667:5432"]
    image: "citusdata/citus:10.2.5"
    labels: ["com.citusdata.role=Worker"]
    depends_on: [citus-manager]
    environment: *CITUS_ENV
    command: ["/worker-wait-for-manager.sh", "-c", "wal_level=logical"]
    volumes:
      - healthcheck-volume:/healthcheck
      - ./worker-wait-for-manager.sh:/worker-wait-for-manager.sh
  citus-worker-2:
    container_name: citus-worker-2
    ports: ["6668:5432"]
    image: "citusdata/citus:10.2.5"
    labels: ["com.citusdata.role=Worker"]
    depends_on: [citus-manager]
    environment: *CITUS_ENV
    command: ["/worker-wait-for-manager.sh", "-c", "wal_level=logical"]
    volumes:
      - healthcheck-volume:/healthcheck
      - ./worker-wait-for-manager.sh:/worker-wait-for-manager.sh
  citus-manager:
    container_name: citus_manager
    image: "citusdata/membership-manager:0.3.0"
    volumes:
      - "${DOCKER_SOCK:-/var/run/docker.sock}:/var/run/docker.sock"
      - healthcheck-volume:/healthcheck
    depends_on: [citus-master]
    environment: *CITUS_ENV
  citus-prepare:
    container_name: citus_prepare
    image: "citusdata/citus:10.2.5"
    depends_on:
      - citus-master
      - citus-manager
      - citus-worker-1
      - citus-worker-2
    command: "/citus_prepare.sh"
    volumes:
      - "./citus_prepare.sql:/citus_prepare.sql"
      - "./citus_prepare.sh:/citus_prepare.sh"
    restart: on-failure
  datagen_tpch:
    container_name: datagen_tpch
    image: ghcr.io/risingwavelabs/go-tpc:v0.1
    depends_on:
      - citus-master
      - citus-manager
      - citus-worker-1
      - citus-worker-2
    command: tpch prepare --sf 1 --threads 4 -d postgres -U myuser -p '123456' -H citus-master -D mydb -P 5432 --conn-params sslmode=disable
    restart: on-failure
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  healthcheck-volume:
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\clickhouse-sink\docker-compose.yml
---
services:
  clickhouse-server:
    image: clickhouse/clickhouse-server:23.3.8.21-alpine
    container_name: clickhouse-server-1
    hostname: clickhouse-server-1
    ports:
      - "8123:8123"
      - "9000:9000"
      - "9004:9004"
    expose:
      - 9009
    volumes:
      - ./clickhouse_prepare.sql:/clickhouse_prepare.sql
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\clickstream\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue
  datagen:
    build: ../datagen
    depends_on: [message_queue]
    command:
      - /bin/sh
      - -c
      - /datagen --mode clickstream --qps 2 kafka --brokers message_queue:29092
    restart: always
    container_name: datagen
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\client-library\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0

  go-lang:
    image: golang:bullseye
    command: tail -f /dev/null
    volumes:
      - ./go:/go-client
  python:
    image: python:3.9.18-slim-bullseye
    command: tail -f /dev/null
    volumes:
      - ./python:/python-client
  java:
    image: eclipse-temurin:11.0.21_9-jdk-jammy
    command: tail -f /dev/null
    volumes:
      - ./java:/java-client
  nodejs:
    image: node:21.6.0-bullseye-slim
    command: tail -f /dev/null
    volumes:
      - ./nodejs:/nodejs-client
  php:
    image: php-library
    build: ./php
    command: tail -f /dev/null
    volumes:
      - ./php:/php-client
  ruby:
    image: ruby-library
    build: ./ruby
    command: tail -f /dev/null
    volumes:
      - ./ruby:/ruby-client
  spring-boot:
    image: maven:3.9.6-sapmachine-17
    command: tail -f /dev/null
    volumes:
      - ./spring-boot:/spring-boot-client

volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\cockroach-sink\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  cockroachdb:
    image: cockroachdb/cockroach:v23.1.11
    command: start-single-node --insecure
    ports:
      - "26257:26257" # CockroachDB default port
      - "8080:8080" # CockroachDB Web UI port
    restart: always
    container_name: cockroachdb
  postgres:
    image: postgres:latest
    command: tail -f /dev/null
    volumes:
      - "./cockroach_prepare.sql:/cockroach_prepare.sql"
    restart: on-failure
volumes:
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false
  message_queue:
    external: false
name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\debezium-mysql\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue
  mysql:
    image: mysql:8.0
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_USER=mysqluser
      - MYSQL_PASSWORD=mysqlpw
      - MYSQL_DATABASE=mydb
    volumes:
      - ./mysql/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
      - ./mysql/mysql_bootstrap.sql:/docker-entrypoint-initdb.d/mysql_bootstrap.sql
      - ./mysql_prepare.sql:/mysql_prepare.sql
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "mysqladmin ping -h 127.0.0.1 -u root -p123456"
        ]
      interval: 5s
      timeout: 5s
      retries: 5
    container_name: mysql
  debezium:
    image: debezium/connect:1.9
    build: .
    environment:
      BOOTSTRAP_SERVERS: message_queue:29092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: connect_configs
      OFFSET_STORAGE_TOPIC: connect_offsets
      KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://message_queue:8081
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://message_queue:8081
    volumes:
      - ./mysql:/data
    ports:
      - 8083:8083
    healthcheck:
      test: curl -f localhost:8083
      interval: 1s
      start_period: 120s
    depends_on:
      message_queue: { condition: service_healthy }
      mysql: { condition: service_healthy }
    container_name: debezium

volumes:
  message_queue:
    external: false
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false

name: risingwave-compose
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\debezium-postgres\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue

  postgres:
    image: debezium/postgres:16-alpine
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: postgrespw
      POSTGRES_USER: postgresuser
      POSTGRES_DB: mydb
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "pg_isready -h 127.0.0.1 -U postgresuser -d mydb"
        ]
      interval: 5s
      timeout: 5s
      retries: 5
    container_name: postgres
    volumes:
      - ./postgres/postgres_bootstrap.sql:/docker-entrypoint-initdb.d/postgres_bootstrap.sql
      - "./postgres_prepare.sql:/postgres_prepare.sql"

  debezium:
    image: debezium/connect:1.9
    build: .
    environment:
      BOOTSTRAP_SERVERS: message_queue:29092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: connect_configs
      OFFSET_STORAGE_TOPIC: connect_offsets
      KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://message_queue:8081
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://message_queue:8081
    ports:
      - 8083:8083
    healthcheck:
      test: curl -f localhost:8083
      interval: 1s
      start_period: 120s
    depends_on:
      message_queue: { condition: service_healthy }
      postgres: { condition: service_healthy }
    container_name: debezium

  # Check out the connectors via 127.0.0.1:8000
  # kafka-connect-ui:
  #   image: landoop/kafka-connect-ui
  #   platform: linux/amd64
  #   ports:
  #     - 8000:8000
  #   environment:
  #     CONNECT_URL: http://debezium:8083
  #   container_name: kafka-connect-ui
  #   depends_on:
  #     message_queue: { condition: service_healthy }

volumes:
  message_queue:
    external: false
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false

name: risingwave-compose
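The `debezium` service above only starts a Kafka Connect worker on port 8083; a connector still has to be registered by POSTing a JSON body to its REST API. As a hedged sketch (the connector name, table list, and helper function are hypothetical; the property keys follow Debezium's documented PostgreSQL connector options, and the credentials match the `postgres` service in this compose file), the payload could be built like this:

```python
import json

def pg_connector_payload(name: str, table_include_list: str) -> str:
    """Build the JSON body for POST http://localhost:8083/connectors."""
    return json.dumps({
        "name": name,  # hypothetical connector name
        "config": {
            # Property keys per Debezium's PostgreSQL connector documentation.
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            # Credentials taken from the postgres service in this compose file.
            "database.hostname": "postgres",
            "database.port": "5432",
            "database.user": "postgresuser",
            "database.password": "postgrespw",
            "database.dbname": "mydb",
            "table.include.list": table_include_list,  # hypothetical table list
        },
    })

payload = pg_connector_payload("pg-mydb-connector", "public.orders")
```

The resulting string would be sent with `Content-Type: application/json`, e.g. via `curl -X POST -H 'Content-Type: application/json' -d @payload.json localhost:8083/connectors`.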
# dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\debezium-sqlserver\docker-compose.yml
---
services:
  risingwave-standalone:
    extends:
      file: ../../docker/docker-compose.yml
      service: risingwave-standalone
  postgres-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: postgres-0
  grafana-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: grafana-0
  minio-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: minio-0
  prometheus-0:
    extends:
      file: ../../docker/docker-compose.yml
      service: prometheus-0
  message_queue:
    extends:
      file: ../../docker/docker-compose.yml
      service: message_queue

  sqlserver:
    image: mcr.microsoft.com/mssql/server:2017-latest
    platform: linux/amd64
    environment:
      SA_PASSWORD: "YourPassword123"
      ACCEPT_EULA: "Y"
    ports:
      - 1433:1433
      - 1434:1434
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P YourPassword123 -d master -Q 'SELECT 1'"
        ]
      interval: 5s
      timeout: 5s
      retries: 5
    container_name: sqlserver
    volumes:
      - ./sqlserver_prepare.sql:/sqlserver_prepare.sql

  debezium:
    image: debezium/connect:1.9
    build: .
    environment:
      BOOTSTRAP_SERVERS: message_queue:29092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: connect_configs
      OFFSET_STORAGE_TOPIC: connect_offsets
      KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://message_queue:8081
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://message_queue:8081
    ports:
      - 8083:8083
    healthcheck:
      test: curl -f localhost:8083
      interval: 1s
      start_period: 120s
    depends_on:
      message_queue: { condition: service_healthy }
      sqlserver: { condition: service_healthy }
    container_name: debezium

volumes:
  message_queue:
    external: false
  risingwave-standalone:
    external: false
  postgres-0:
    external: false
  grafana-0:
    external: false
  minio-0:
    external: false
  prometheus-0:
    external: false

name: risingwave-compose
---\nservices:\n spark:\n image: apache/spark:3.3.1\n command: tail -f /dev/null\n depends_on:\n - minio-0\n volumes:\n - "./spark-script:/spark-script"\n container_name: spark\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\nvolumes:\n compute-node-0:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\deltalake-sink\docker-compose.yml | docker-compose.yml | YAML | 903 | 0.7 | 0 | 0 | react-lib | 878 | 2024-02-11T05:17:26.431318 | Apache-2.0 | true | dddd2444a6ea43619cb6ec5313c3270b |
---\nservices:\n fe:\n platform: linux/amd64\n image: apache/doris:2.0.0_alpha-fe-x86_64\n hostname: fe\n environment:\n - FE_SERVERS=fe1:172.21.0.2:9010\n - FE_ID=1\n ports:\n - "8030:8030"\n - "9030:9030"\n networks:\n mynetwork:\n ipv4_address: 172.21.0.2\n be:\n platform: linux/amd64\n image: apache/doris:2.0.0_alpha-be-x86_64\n hostname: be\n environment:\n - FE_SERVERS=fe1:172.21.0.2:9010\n - BE_ADDR=172.21.0.3:9050\n depends_on:\n - fe\n ports:\n - "9050:9050"\n networks:\n mynetwork:\n ipv4_address: 172.21.0.3\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n networks:\n mynetwork:\n ipv4_address: 172.21.0.4\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n networks:\n mynetwork:\n ipv4_address: 172.21.0.5\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n networks:\n mynetwork:\n ipv4_address: 172.21.0.6\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n networks:\n mynetwork:\n ipv4_address: 172.21.0.7\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n networks:\n mynetwork:\n ipv4_address: 172.21.0.8\n mysql:\n image: mysql:latest\n ports:\n - "3306:3306"\n volumes:\n - "./doris_prepare.sql:/doris_prepare.sql"\n command: tail -f /dev/null\n restart: on-failure\n networks:\n mynetwork:\n ipv4_address: 172.21.0.9\n postgres:\n image: postgres:latest\n command: tail -f /dev/null\n volumes:\n - "./update_delete.sql:/update_delete.sql"\n restart: on-failure\n networks:\n mynetwork:\n ipv4_address: 172.21.0.11\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\nnetworks:\n mynetwork:\n ipam:\n config:\n - subnet: 172.21.80.0/16\n default:\n | 
dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\doris-sink\docker-compose.yml | docker-compose.yml | YAML | 2,227 | 0.7 | 0 | 0 | node-utils | 41 | 2023-10-03T13:49:36.496621 | GPL-3.0 | true | 3a6920c30ac46c23385f02d49ba1d71f |
---\nversion: "3"\nservices:\n dynamodb:\n image: amazon/dynamodb-local\n ports:\n - "8000:8000"\n command: "-jar DynamoDBLocal.jar -sharedDb -inMemory -port 8000"\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: ../../docker/docker-compose.yml\n service: message_queue\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\dynamodb\docker-compose.yml | docker-compose.yml | YAML | 1,028 | 0.7 | 0 | 0 | node-utils | 398 | 2024-12-23T22:47:28.272690 | BSD-3-Clause | true | d49b017023ec88ce7528452a220b6d02 |
---\nservices:\n elasticsearch7:\n image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0\n environment:\n - xpack.security.enabled=true\n - discovery.type=single-node\n - ELASTIC_PASSWORD=risingwave\n deploy:\n resources:\n limits:\n memory: 1G\n ports:\n - 9200:9200\n elasticsearch8:\n image: docker.elastic.co/elasticsearch/elasticsearch:8.10.0\n environment:\n - xpack.security.enabled=true\n - discovery.type=single-node\n - ELASTIC_PASSWORD=risingwave\n deploy:\n resources:\n limits:\n memory: 1G\n ports:\n - 9201:9200\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\elasticsearch-sink\docker-compose.yml | docker-compose.yml | YAML | 1,369 | 0.7 | 0 | 0 | awesome-app | 202 | 2024-10-09T01:33:49.839307 | GPL-3.0 | true | 276ec3ea74e5338719c0924383cacb36 |
---\nservices:\n kafka:\n image: confluentinc/cp-kafka:7.1.0\n platform: linux/amd64\n hostname: kafka\n container_name: kafka\n ports:\n - "29092:29092"\n - "9092:9092"\n environment:\n KAFKA_BROKER_ID: 1\n KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181\n KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT\n KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092\n KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1\n KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0\n KAFKA_TOOLS_LOG4J_LOGLEVEL: ERROR\n depends_on:\n [ zookeeper ]\n healthcheck:\n test: [ "CMD-SHELL", "kafka-topics --bootstrap-server kafka:9092 --list" ]\n interval: 5s\n timeout: 10s\n retries: 5\n\n init-kafka:\n image: confluentinc/cp-kafka:7.1.0\n depends_on:\n - kafka\n entrypoint: [ '/bin/sh', '-c' ]\n command: |\n "\n # blocks until kafka is reachable\n kafka-topics --bootstrap-server kafka:9092 --list\n echo -e 'Creating kafka topics'\n kafka-topics --bootstrap-server kafka:9092 --create --if-not-exists --topic taxi --replication-factor 1 --partitions 1\n echo -e 'Creating kafka topics'\n kafka-topics --bootstrap-server kafka:9092 --create --if-not-exists --topic mfa --replication-factor 1 --partitions 1\n\n echo -e 'Successfully created the following topics:'\n kafka-topics --bootstrap-server kafka:9092 --list\n "\n\n zookeeper:\n image: confluentinc/cp-zookeeper:7.1.0\n platform: linux/amd64\n hostname: zookeeper\n container_name: zookeeper\n ports:\n - "2181:2181"\n environment:\n ZOOKEEPER_CLIENT_PORT: 2181\n ZOOKEEPER_TICK_TIME: 2000\n compactor-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: compactor-0\n compute-node-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: compute-node-0\n volumes:\n - "./server/udf.py:/udf.py"\n - "./mfa-start.sql:/mfa-start.sql"\n - "./mfa-mock.sql:/mfa-mock.sql"\n feature-store:\n build:\n context: .\n target: feature-store-server\n depends_on:\n 
[kafka,meta-node-0,frontend-node-0]\n volumes:\n - ".log:/opt/feature-store/.log"\n postgres-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: grafana-0\n meta-node-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: meta-node-0\n ports:\n - "8815:8815"\n depends_on:\n [kafka]\n minio-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose-distributed.yml\n service: prometheus-0\nvolumes:\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\nname: risingwave-compose\n\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\feature-store\docker-compose.yml | docker-compose.yml | YAML | 3,036 | 0.8 | 0.018349 | 0.009524 | node-utils | 385 | 2024-11-29T13:06:46.970309 | Apache-2.0 | true | 2c79832101b87a13a80c9872b2680d5b |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\http-sink\docker-compose.yml | docker-compose.yml | YAML | 767 | 0.7 | 0 | 0 | python-kit | 268 | 2024-09-24T23:15:38.715213 | GPL-3.0 | true | cd5791ca357fe840bfef79b4b97964d6 |
---\nx-airflow-common:\n &airflow-common\n image: apache/airflow:2.6.2-python3.10\n build:\n context: .\n target: airflow\n environment:\n &airflow-common-env\n AIRFLOW__CORE__EXECUTOR: CeleryExecutor\n AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow\n AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow\n AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow\n AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0\n AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'\n AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: 'true'\n volumes:\n - ./airflow_dags:/opt/airflow/dags\n - ./iceberg-compaction-sql:/opt/airflow/iceberg-compaction-sql\n depends_on:\n &airflow-common-depends-on\n redis:\n condition: service_healthy\n postgres:\n condition: service_healthy\nx-spark-common:\n &spark-air\n build:\n context: .\n target: spark\n\nservices:\n spark:\n <<: *spark-air\n environment:\n - SPARK_MODE=master\n ports:\n - '7077:7077'\n volumes:\n - './spark-script:/spark-script'\n spark-worker:\n <<: *spark-air\n environment:\n - SPARK_MODE=worker\n - SPARK_MASTER_URL=spark://spark:7077\n - SPARK_WORKER_MEMORY=1G\n - SPARK_WORKER_CORES=1\n presto:\n build: ./presto-with-iceberg\n container_name: presto\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n mysql:\n image: mysql:8.0\n ports:\n - "3306:3306"\n environment:\n - MYSQL_ROOT_PASSWORD=123456\n - MYSQL_USER=mysqluser\n - MYSQL_PASSWORD=mysqlpw\n - MYSQL_DATABASE=mydb\n healthcheck:\n test: [ "CMD-SHELL", "mysqladmin ping -h 127.0.0.1 
-u root -p123456" ]\n interval: 5s\n timeout: 5s\n retries: 5\n container_name: mysql\n prepare_mysql:\n image: mysql:8.0\n depends_on:\n - mysql\n command:\n - /bin/sh\n - -c\n - "mysql -p123456 -h mysql mydb < mysql_prepare.sql"\n volumes:\n - "./mysql_prepare.sql:/mysql_prepare.sql"\n container_name: prepare_mysql\n restart: on-failure\n datagen:\n build: ../datagen\n depends_on: [mysql]\n command:\n - /bin/sh\n - -c\n - /datagen --mode clickstream --qps 1 mysql --user mysqluser --password mysqlpw --host mysql --port 3306 --db mydb\n container_name: datagen\n restart: on-failure\n postgres:\n image: postgres:13\n environment:\n POSTGRES_USER: airflow\n POSTGRES_PASSWORD: airflow\n POSTGRES_DB: airflow\n volumes:\n - ./db:/var/lib/postgresql/data\n healthcheck:\n test: ["CMD", "pg_isready", "-U", "airflow"]\n interval: 5s\n retries: 5\n redis:\n image: 'redis:latest'\n expose:\n - 6379\n healthcheck:\n test: ["CMD", "redis-cli", "ping"]\n interval: 5s\n timeout: 30s\n retries: 50\n airflow-webserver:\n <<: *airflow-common\n command: webserver\n ports:\n - 8080:8080\n depends_on:\n <<: *airflow-common-depends-on\n airflow-init:\n condition: service_completed_successfully\n airflow-scheduler:\n <<: *airflow-common\n command: scheduler\n depends_on:\n <<: *airflow-common-depends-on\n airflow-init:\n condition: service_completed_successfully\n airflow-worker:\n <<: *airflow-common\n command: celery worker\n depends_on:\n <<: *airflow-common-depends-on\n airflow-init:\n condition: service_completed_successfully\n airflow-init:\n <<: *airflow-common\n command: version\n environment:\n _AIRFLOW_DB_UPGRADE: 'true'\n _AIRFLOW_WWW_USER_CREATE: 'true'\n _AIRFLOW_WWW_USER_NAME: 'airflow'\n _AIRFLOW_WWW_USER_PASSWORD: 'airflow'\n\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\nname: risingwave-compose\n | 
dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\iceberg-sink\docker-compose.yml | docker-compose.yml | YAML | 4,349 | 0.8 | 0 | 0 | react-lib | 189 | 2024-02-22T16:01:57.475343 | GPL-3.0 | true | b7669d5a5ac02db8dadbfd5e2062183e |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: ../../docker/docker-compose.yml\n service: message_queue\n datagen:\n build: ../datagen\n depends_on: [message_queue]\n command:\n - /bin/sh\n - -c\n - /datagen --mode compatible-data --qps 2 --total_event 3 kafka --brokers message_queue:29092\n restart: always\n container_name: datagen\n\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\kafka-cdc\docker-compose.yml | docker-compose.yml | YAML | 1,124 | 0.7 | 0 | 0 | python-kit | 53 | 2024-08-12T14:12:27.683708 | BSD-3-Clause | true | a7f87fde497159bad61ac51201cfdb93 |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: ../../docker/docker-compose.yml\n service: message_queue\n\n postgres:\n image: postgres\n environment:\n - POSTGRES_USER=myuser\n - POSTGRES_PASSWORD=123456\n - POSTGRES_DB=mydb\n ports:\n - 5432:5432\n healthcheck:\n test: [ "CMD-SHELL", "pg_isready --username=myuser --dbname=mydb" ]\n interval: 5s\n timeout: 5s\n retries: 5\n command: [ "postgres", "-c", "wal_level=logical" ]\n restart: always\n container_name: postgres\n\n prepare_postgres:\n image: postgres\n depends_on:\n - postgres\n command:\n - /bin/sh\n - -c\n - "psql postgresql://myuser:123456@postgres:5432/mydb < postgres_prepare.sql"\n volumes:\n - "./postgres_prepare.sql:/postgres_prepare.sql"\n container_name: prepare_postgres\n restart: on-failure\n\n mysql:\n image: mysql\n ports:\n - 3306:3306\n environment:\n - MYSQL_ROOT_PASSWORD=mysql\n - MYSQL_USER=myuser\n - MYSQL_PASSWORD=123456\n - MYSQL_DATABASE=mydb\n healthcheck:\n test: [ "CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -u root -p123456" ]\n interval: 5s\n timeout: 5s\n retries: 5\n container_name: mysql\n\n flink-jobmanager:\n image: flink\n build: ./flink\n ports:\n - "8082:8081" # 8081 is used by message_queue\n command: jobmanager\n environment:\n - |\n FLINK_PROPERTIES=\n jobmanager.rpc.address: flink-jobmanager\n container_name: flink-jobmanager\n\n flink-taskmanager:\n image: flink\n build: ./flink\n depends_on:\n - flink-jobmanager\n command: taskmanager\n scale: 1\n environment:\n - |\n FLINK_PROPERTIES=\n jobmanager.rpc.address: 
flink-jobmanager\n taskmanager.numberOfTaskSlots: 2\n container_name: flink-taskmanager\n\n flink-sql-client:\n image: flink\n build: ./flink\n command: bin/sql-client.sh\n depends_on:\n - flink-jobmanager\n environment:\n - |\n FLINK_PROPERTIES=\n jobmanager.rpc.address: flink-jobmanager\n rest.address: flink-jobmanager\n container_name: flink-sql-client\n\n connect:\n image: debezium/connect:2.4.0.Final\n build:\n context: debezium-jdbc\n args:\n DEBEZIUM_VERSION: 2.4.0.Final\n ports:\n - 8083:8083\n - 5005:5005\n links:\n - message_queue\n environment:\n - BOOTSTRAP_SERVERS=message_queue:29092\n - GROUP_ID=1\n - CONFIG_STORAGE_TOPIC=my_connect_configs\n - OFFSET_STORAGE_TOPIC=my_connect_offsets\n - STATUS_STORAGE_TOPIC=my_source_connect_statuses\n container_name: connect\n\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\kafka-cdc-sink\docker-compose.yml | docker-compose.yml | YAML | 3,357 | 0.8 | 0 | 0 | react-lib | 343 | 2023-11-04T05:26:16.097430 | BSD-3-Clause | true | 8635a17898cc842df61d1c5a02b6ed27 |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n localstack:\n container_name: localstack\n image: localstack/localstack:3.0\n networks:\n default:\n aliases:\n - ad-click.localstack\n # ports:\n # - "127.0.0.1:14566:4566" # LocalStack Gateway\n # - "127.0.0.1:14510-14559:4510-4559" # external services port range\n datagen:\n build: ../datagen\n depends_on: [localstack]\n command:\n - /bin/sh\n - -c\n - |\n export AWS_ACCESS_KEY_ID="test"\n export AWS_SECRET_ACCESS_KEY="test"\n export AWS_DEFAULT_REGION="us-east-1"\n aws --endpoint-url=http://localstack:4566 kinesis create-stream --stream-name ad-impression\n aws --endpoint-url=http://localstack:4566 s3api create-bucket --bucket ad-click\n /datagen --mode ad-ctr --topic ad_impression --qps 100 kinesis --region us-east-1 --name ad-impression --endpoint http://localstack:4566 &\n /datagen --mode ad-ctr --topic ad_click --qps 100 s3 --region us-east-1 --bucket ad-click --endpoint http://localstack:4566\n restart: always\n container_name: datagen\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\kinesis-s3-source\docker-compose.yml | docker-compose.yml | YAML | 1,802 | 0.8 | 0 | 0.05 | awesome-app | 587 | 2023-09-28T14:14:23.195170 | MIT | true | 0e715a125c6d055024a0d80b1fc9c079 |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: ../../docker/docker-compose.yml\n service: message_queue\n datagen:\n build: ../datagen\n depends_on: [message_queue]\n command:\n - /bin/sh\n - -c\n - /datagen --mode livestream --qps 2 kafka --brokers message_queue:29092\n restart: always\n container_name: datagen\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\livestream\docker-compose.yml | docker-compose.yml | YAML | 1,102 | 0.7 | 0 | 0 | python-kit | 669 | 2024-09-02T08:28:42.361226 | MIT | true | 483d90ff0e33bfcb61a50072a1dc3b90 |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n\n # === MindsDB ===\n mindsdb:\n image: mindsdb/mindsdb:23.9.3.0\n command: bash -c "python3 -m mindsdb --config=/root/mindsdb_config.json --api=http,postgres"\n ports:\n - 47334:47334 # http\n - 55432:55432 # postgres\n volumes:\n - "./mdb_config.json:/root/mindsdb_config.json"\n container_name: mindsdb\n\n prepare_data:\n image: postgres\n depends_on:\n - risingwave-standalone\n - mindsdb\n command:\n - /bin/sh\n - -c\n - /prepare_data.sh\n volumes:\n - "./prepare_risingwave.sql:/prepare_risingwave.sql"\n - "./prepare_mindsdb.sql:/prepare_mindsdb.sql"\n - "./prepare_data.sh:/prepare_data.sh"\n container_name: prepare_data\n restart: on-failure\n\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\mindsdb\docker-compose.yml | docker-compose.yml | YAML | 1,451 | 0.8 | 0 | 0.016949 | vue-tools | 42 | 2024-10-03T02:26:12.831107 | MIT | true | 735e124c9c1ccdc3b20657c7e5581784 |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n mqtt-server:\n image: eclipse-mosquitto\n command:\n - sh\n - -c\n - echo "running command"; printf 'allow_anonymous true\nlistener 1883 0.0.0.0' > /mosquitto/config/mosquitto.conf; echo "starting service..."; cat /mosquitto/config/mosquitto.conf;/docker-entrypoint.sh;/usr/sbin/mosquitto -c /mosquitto/config/mosquitto.conf\n ports:\n - 1883:1883\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n message_queue:\n extends:\n file: ../../docker/docker-compose.yml\n service: message_queue\nvolumes:\n compute-node-0:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\n message_queue:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\mqtt\docker-compose.yml | docker-compose.yml | YAML | 1,229 | 0.7 | 0 | 0 | node-utils | 103 | 2025-01-27T18:16:21.745242 | BSD-3-Clause | true | 4457c0867763f403637f6f153578bafb |
---\nservices:\n risingwave-standalone:\n extends:\n file: ../../docker/docker-compose.yml\n service: risingwave-standalone\n postgres-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: postgres-0\n grafana-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: grafana-0\n minio-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: minio-0\n prometheus-0:\n extends:\n file: ../../docker/docker-compose.yml\n service: prometheus-0\n mysql:\n image: mysql:8.0\n ports:\n - "8306:3306"\n environment:\n - MYSQL_ROOT_PASSWORD=123456\n - MYSQL_USER=mysqluser\n - MYSQL_PASSWORD=mysqlpw\n - MYSQL_DATABASE=mydb\n healthcheck:\n test: [ "CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -u root -p123456" ]\n interval: 5s\n timeout: 5s\n retries: 5\n container_name: mysql\n mysql_prepare:\n image: mysql:8.0\n depends_on:\n mysql:\n condition: service_healthy\n volumes:\n - "./compatibility-mysql.sql:/compatibility-mysql.sql"\n command:\n - /bin/sh\n - -c\n - "mysql -h mysql -u root -P 3306 mydb --password=123456 -A < /compatibility-mysql.sql"\n rw_prepare:\n image: postgres\n depends_on:\n mysql_prepare:\n condition: service_completed_successfully\n risingwave-standalone:\n condition: service_healthy\n volumes:\n - "./compatibility-rw.sql:/compatibility-rw.sql"\n command:\n - /bin/sh\n - -c\n - "psql postgresql://root:@risingwave-standalone:4566/dev < /compatibility-rw.sql"\n datagen_tpch:\n image: ghcr.io/risingwavelabs/go-tpc:v0.1\n depends_on:\n - mysql\n command: tpch prepare --sf 1 --threads 4 -H mysql -U root -p '123456' -D mydb -P 3306\n container_name: datagen_tpch\n restart: on-failure\nvolumes:\n risingwave-standalone:\n external: false\n postgres-0:\n external: false\n grafana-0:\n external: false\n minio-0:\n external: false\n prometheus-0:\n external: false\nname: risingwave-compose\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\mysql-cdc\docker-compose.yml | docker-compose.yml | YAML | 2,036 | 0.8 | 0 | 0 | vue-tools 
| 838 | 2025-03-14T13:48:05.410116 | GPL-3.0 | true | 1176fe2fc000c61555bcbf32b9ff7e4f |
pk_types:\n - boolean\n - bigint\n - date\ndatatypes:\n - name: boolean\n aliases:\n - bool\n zero: false\n minimum: false\n maximum: true\n rw_type: boolean\n - name: bit\n zero: 0\n minimum: 0\n maximum: 1\n rw_type: boolean\n - name: tinyint\n zero: 0\n minimum: -128\n maximum: 127\n rw_type: smallint\n - name: smallint\n zero: 0\n minimum: -32767\n maximum: 32767\n rw_type: smallint\n - name: mediumint\n zero: 0\n minimum: -8388608\n maximum: 8388607\n rw_type: integer\n - name: integer\n aliases:\n - int\n zero: 0\n minimum: -2147483647\n maximum: 2147483647\n - name: bigint\n zero: 0\n minimum: -9223372036854775807\n maximum: 9223372036854775807\n - name: decimal\n aliases:\n - numeric\n zero: 0\n minimum: -9.9999999999999999999999999999999\n maximum: -9.9999999999999999999999999999999\n - name: float\n zero: 0\n minimum: -9999.999999\n maximum: 9999.999999\n rw_type: real\n - name: double\n zero: 0\n minimum: -9999.99999999999999\n maximum: 9999.99999999999999\n rw_type: double\n - name: char\n length: 255\n zero: "''"\n minimum: "''"\n maximum_gen_py: "\"'{}'\".format('z'*255)"\n rw_type: varchar\n - name: varchar\n length: 10000\n zero: "''"\n minimum: "''"\n maximum_gen_py: "\"'{}'\".format('z'*10000)"\n rw_type: varchar\n - name: binary\n length: 255\n zero: "''"\n minimum: "''"\n maximum: "''"\n maximum_gen_py: "\"'{}'\".format('z'*255)"\n rw_type: bytea\n - name: varbinary\n length: 10000\n zero: "''"\n minimum: "''"\n maximum_gen_py: "\"'{}'\".format('z'*10000)"\n rw_type: bytea\n - name: date\n zero: "'1001-01-01'"\n minimum: "'1001-01-01'"\n maximum: "'9999-12-31'"\n - name: time\n zero: "'00:00:00'"\n minimum: "'-838:59:59.000000'"\n maximum: "'838:59:59.000000'"\n - name: datetime\n zero: "'1000-01-01 00:00:00.000000'"\n minimum: "'1000-01-01 00:00:00.000000'"\n maximum: "'9999-12-31 23:59:59.499999'"\n rw_type: timestamp\n - name: timestamp\n zero: "'1970-01-01 00:00:01'"\n minimum: "'1970-01-01 00:00:01'"\n maximum: "'2038-01-19 03:14:07'"\n 
rw_type: timestamptz\n | dataset_sample\yaml\risingwavelabs_risingwave\integration_tests\mysql-cdc\mysql-datatypes.yml | mysql-datatypes.yml | YAML | 2,178 | 0.7 | 0 | 0 | python-kit | 874 | 2024-09-24T00:03:05.567739 | Apache-2.0 | true | 04b623342e8c444d1a3277e9817144e7 |