# Dataset sample

Column schema:

| column | type | range / values |
|---|---|---|
| content | string | length 1 to 103k |
| path | string | length 8 to 216 |
| filename | string | length 2 to 179 |
| language | string | 15 classes |
| size_bytes | int64 | 2 to 189k |
| quality_score | float64 | 0.5 to 0.95 |
| complexity | float64 | 0 to 1 |
| documentation_ratio | float64 | 0 to 1 |
| repository | string | 5 classes |
| stars | int64 | 0 to 1k |
| created_date | date string | 2023-07-10 19:21:08 to 2025-07-09 19:11:45 |
| license | string | 4 classes |
| is_test | bool | 2 classes |
| file_hash | string | length 32 |
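Rows matching this schema can be filtered on the numeric columns. A minimal sketch, assuming each row is available as a plain dict keyed by the column names above (`select_files` and the two example rows are made up for illustration):

```python
# Sketch: select rows of the sample by quality threshold and language.
# Rows are assumed to be dicts with the schema's column names;
# the example rows below are invented for illustration only.

def select_files(rows, min_quality=0.9, language=None):
    """Return rows at or above a quality_score threshold, optionally by language."""
    return [
        r for r in rows
        if r["quality_score"] >= min_quality
        and (language is None or r["language"] == language)
    ]

rows = [
    {"filename": "config.yml", "language": "YAML", "quality_score": 0.95},
    {"filename": "FUNDING.yml", "language": "YAML", "quality_score": 0.8},
]
print([r["filename"] for r in select_files(rows, language="YAML")])  # ['config.yml']
```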
## `dataset_sample\yaml\mui-org_material-ui\.circleci\config.yml`

YAML · 31,904 bytes · quality_score 0.95 · complexity 0.022817 · documentation_ratio 0.088115 · repository python-kit · 800 stars · created 2024-10-17T21:19:03 · GPL-3.0 · is_test false · file_hash aac8a5e1fae13de23d1ad2eb14b58d68

```yaml
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@5.3.1
  aws-s3: circleci/aws-s3@4.1.1

parameters:
  browserstack-force:
    description: Whether to force browserstack usage. We have limited resources on browserstack so the pipeline might decide to skip browserstack if this parameter isn't set to true.
    type: boolean
    default: false
  react-version:
    description: The version of react to be used
    type: string
    default: stable
  workflow:
    description: The name of the workflow to run
    type: string
    default: pipeline
  e2e-base-url:
    description: The base url for running end-to-end test
    type: string
    default: ''

default-job: &default-job
  parameters:
    react-version:
      description: The version of react to be used
      type: string
      default: << pipeline.parameters.react-version >>
    test-gate:
      description: A particular type of tests that should be run
      type: string
      default: undefined
    e2e-base-url:
      description: The base url for running end-to-end test
      type: string
      default: << pipeline.parameters.e2e-base-url >>
  environment:
    # expose it globally otherwise we have to thread it from each job to the install command
    BROWSERSTACK_FORCE: << pipeline.parameters.browserstack-force >>
    REACT_VERSION: << parameters.react-version >>
    TEST_GATE: << parameters.test-gate >>
    AWS_REGION_ARTIFACTS: eu-central-1
    COREPACK_ENABLE_DOWNLOAD_PROMPT: '0'
    DANGER_DISABLE_TRANSPILATION: 'true'
  working_directory: /tmp/material-ui
  docker:
    - image: cimg/node:20.17

default-context: &default-context
  context:
    - org-global

# CircleCI has disabled the cache across forks for security reasons.
# Following their official statement, it was a quick solution, they
# are working on providing this feature back with appropriate security measures.
# https://discuss.circleci.com/t/saving-cache-stopped-working-warning-skipping-this-step-disabled-in-configuration/24423/21
#
# restore_repo: &restore_repo
#   restore_cache:
#     key: v1-repo-{{ .Branch }}-{{ .Revision }}

commands:
  setup_corepack:
    parameters:
      browsers:
        type: boolean
        default: false
        description: 'Set to true if you intend to any browser (for example with playwright).'
    steps:
      - run:
          name: Set npm registry public signing keys
          command: |
            echo "export COREPACK_INTEGRITY_KEYS='$(curl https://registry.npmjs.org/-/npm/v1/keys | jq -c '{npm: .keys}')'" >> $BASH_ENV
      - when:
          condition: << parameters.browsers >>
          steps:
            - run:
                name: Install pnpm package manager
                command: corepack enable
      - when:
          condition:
            not: << parameters.browsers >>
          steps:
            - run:
                name: Install pnpm package manager
                # See https://stackoverflow.com/a/73411601
                command: corepack enable --install-directory ~/bin
      - run:
          name: View install environment
          command: |
            node --version
            pnpm --version
  install_js:
    parameters:
      browsers:
        type: boolean
        default: false
        description: 'Set to true if you intend to any browser (for example with playwright).'
    steps:
      - setup_corepack:
          browsers: << parameters.browsers >>
      - run:
          name: Resolve React version
          command: |
            pnpm use-react-version
            # log a patch for maintainers who want to check out this change
            git --no-pager diff HEAD
      - run:
          name: Install js dependencies
          command: pnpm install

jobs:
  checkout:
    <<: *default-job
    steps:
      - checkout
      - install_js
      - when:
          # Install can be "dirty" when running with non-default versions of React
          condition:
            equal: [<< parameters.react-version >>, stable]
          steps:
            - run:
                name: Should not have any git not staged
                command: git add -A && git diff --exit-code --staged
            - run:
                name: '`pnpm dedupe` was run?'
                command: |
                  # #target-branch-reference
                  if [[ $(git diff --name-status master | grep -E 'pnpm-workspace\.yaml|pnpm-lock.yaml|package\.json') == "" ]];
                  then
                      echo "No changes to dependencies detected. Skipping..."
                  else
                      pnpm dedupe --check
                  fi
  test_unit:
    <<: *default-job
    steps:
      - checkout
      - install_js
      - run:
          name: Tests fake browser
          command: pnpm test:coverage:ci
      - run:
          name: Check coverage generated
          command: |
            if ! [[ -s coverage/lcov.info ]]
            then
              exit 1
            fi
      - run:
          name: internal-scripts
          command: |
            # latest commit
            LATEST_COMMIT=$(git rev-parse HEAD)

            # latest commit where internal-scripts was changed
            FOLDER_COMMIT=$(git log -1 --format=format:%H --full-diff packages-internal/scripts)

            if [ $FOLDER_COMMIT = $LATEST_COMMIT ]; then
              echo "changes, let's run the tests"
              pnpm --filter @mui/internal-scripts test
            else
              echo "no changes"
            fi
      - run:
          name: Coverage
          command: |
            curl -Os https://uploader.codecov.io/latest/linux/codecov
            chmod +x codecov
            ./codecov -t ${CODECOV_TOKEN} -Z -F "$REACT_VERSION-jsdom"
  test_lint:
    <<: *default-job
    steps:
      - checkout
      - install_js
      - run:
          name: Eslint
          command: pnpm eslint:ci
      - run:
          name: Stylelint
          command: pnpm stylelint
      - run:
          name: Lint JSON
          command: pnpm jsonlint
      - run:
          name: Lint Markdown
          command: pnpm markdownlint
      - run:
          # See https://circleci.com/developer/orbs/orb/circleci/vale as reference
          name: Install Vale
          command: |
            #!/bin/bash
            VALE_STR_CLI_VERSION=3.3.0

            # set smart sudo
            if [[ $EUID -eq 0 ]]; then export SUDO=""; else export SUDO="sudo"; fi

            mkdir /tmp/vale-extract
            cd /tmp/vale-extract
            GZIPPED_OUTPUT="vale.tar.gz"
            BINARY_URL=https://github.com/errata-ai/vale/releases/download/v${VALE_STR_CLI_VERSION}/vale_${VALE_STR_CLI_VERSION}_Linux_64-bit.tar.gz
            curl -sSL "$BINARY_URL" -o "${GZIPPED_OUTPUT}"

            if [ ! -s "${GZIPPED_OUTPUT}" ]; then
              echo "Downloaded file is empty"
              rm "${GZIPPED_OUTPUT}"
              exit 1
            fi

            tar -xzf "${GZIPPED_OUTPUT}"
            $SUDO mv vale /usr/local/bin
            rm "${GZIPPED_OUTPUT}"

            # validate installation
            if [[ -z "$(command -v vale)" ]]; then
              echo "vale installation failed"
              exit 1
            else
              echo "vale installation successful"
              vale --version
              exit 0
            fi
      - run:
          name: Lint writing style
          command: |
            vale sync
            pnpm valelint
  test_static:
    <<: *default-job
    steps:
      - checkout
      - install_js
      - run:
          name: '`pnpm prettier` changes committed?'
          command: pnpm prettier --check
      - run:
          name: Generate PropTypes
          command: pnpm proptypes
      - run:
          name: '`pnpm proptypes` changes committed?'
          command: git add -A && git diff --exit-code --staged
      - run:
          name: Generate the documentation
          command: pnpm docs:api
      - run:
          name: '`pnpm docs:api` changes committed?'
          command: git add -A && git diff --exit-code --staged
      - run:
          name: Update the navigation translations
          command: pnpm docs:i18n
      - run:
          name: '`pnpm docs:i18n` changes committed?'
          command: git add -A && git diff --exit-code --staged
      - run:
          name: '`pnpm extract-error-codes` changes committed?'
          command: |
            pnpm extract-error-codes
            git add -A && git diff --exit-code --staged
      - run:
          name: '`pnpm docs:link-check` changes committed?'
          command: |
            pnpm docs:link-check
            git add -A && git diff --exit-code --staged
  test_types:
    <<: *default-job
    resource_class: 'medium+'
    steps:
      - checkout
      - install_js
      - run:
          name: Transpile TypeScript demos
          command: pnpm docs:typescript:formatted
      - run:
          name: '`pnpm docs:typescript:formatted` changes committed?'
          command: git add -A && git diff --exit-code --staged
      - run:
          name: Tests TypeScript definitions
          command: pnpm typescript:ci
          environment:
            NODE_OPTIONS: --max-old-space-size=3072
      - run:
          name: Test module augmentation
          command: |
            pnpm --filter @mui/material typescript:module-augmentation
            pnpm --filter @mui/joy typescript:module-augmentation
            pnpm --filter @mui/system typescript:module-augmentation
      - run:
          name: Diff declaration files
          command: |
            git add -f packages/mui-material/build || echo '/material declarations do not exist'
            git add -f packages/mui-lab/build || echo '/lab declarations do not exist'
            git add -f packages/mui-utils/build || echo '/utils declarations do not exist'
            pnpm -r build:stable && pnpm -r build:types
            git --no-pager diff
      - run:
          name: Any defect declaration files?
          command: node scripts/testBuiltTypes.mjs
      - save_cache:
          name: Save generated declaration files
          key: typescript-declaration-files-{{ .Branch }}-{{ .Revision }}
          paths:
            # packages with generated declaration files
            - packages/mui-material/build
            - packages/mui-lab/build
            - packages/mui-utils/build
  test_types_next:
    <<: *default-job
    resource_class: 'medium+'
    steps:
      - checkout
      - install_js
      - run:
          name: Resolve typescript version
          command: |
            pnpm update -r typescript@next
            # log a patch for maintainers who want to check out this change
            git --no-pager diff HEAD
      - run:
          name: Tests TypeScript definitions
          command: |
            # ignore build failures
            # it's expected that typescript@next fails since the lines of the errors
            # change frequently. This build is monitored regardless of its status
            set +e
            pnpm typescript:ci
            exit 0
      - restore_cache:
          name: Restore generated declaration files
          keys:
            # We assume that the target branch is `next` and that declaration files are persisted in commit order.
            # "If there are multiple matches, the most recently generated cache will be used."
            - typescript-declaration-files-next
      - run:
          name: Diff declaration files
          command: |
            git add -f packages/mui-material/build || echo '/core declarations do not exist'
            git add -f packages/mui-lab/build || echo '/lab declarations do not exist'
            git add -f packages/mui-utils/build || echo '/utils declarations do not exist'
            pnpm -r build:types
            git --no-pager diff
      - run:
          name: Log defect declaration files
          command: |
            # ignore build failures
            # Fixing these takes some effort that isn't viable to merge in a single PR.
            # We'll simply monitor them for now.
            set +e
            node scripts/testBuiltTypes.mjs
            exit 0
  test_browser:
    <<: *default-job
    resource_class: 'medium+'
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    steps:
      - checkout
      - install_js:
          browsers: true
      - run:
          name: Tests real browsers
          command: pnpm test:karma
      - run:
          name: Check coverage generated
          command: |
            if ! [[ -s coverage/lcov.info ]]
            then
              exit 1
            fi
      - run:
          name: Coverage
          command: |
            curl -Os https://uploader.codecov.io/latest/linux/codecov
            chmod +x codecov
            ./codecov -t ${CODECOV_TOKEN} -Z -F "$REACT_VERSION-browser"
      - store_artifacts:
          # hardcoded in karma-webpack
          path: /tmp/_karma_webpack_
          destination: artifact-file
  test_e2e:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    steps:
      - checkout
      - install_js:
          browsers: true
      - run:
          name: pnpm test:e2e
          command: pnpm test:e2e
  test_e2e_website:
    # NOTE: This workflow runs after successful docs deploy. See /test/e2e-website/README.md#ci
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    steps:
      - checkout
      - install_js:
          browsers: true
      - run:
          name: pnpm test:e2e-website
          command: pnpm test:e2e-website
          environment:
            PLAYWRIGHT_TEST_BASE_URL: << parameters.e2e-base-url >>
  test_profile:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    steps:
      - checkout
      - install_js:
          browsers: true
      - run:
          name: Tests real browsers
          # Run a couple of times for a better sample.
          # TODO: hack something together where we can compile once and run multiple times e.g. by abusing watchmode.
          command: |
            # Running on chrome only since actual times are innaccurate anyway
            # The other reason is that browserstack allows little concurrency so it's likely that we're starving other runs.
            pnpm test:karma:profile --browsers chrome,chromeHeadless
            pnpm test:karma:profile --browsers chrome,chromeHeadless
            pnpm test:karma:profile --browsers chrome,chromeHeadless
            pnpm test:karma:profile --browsers chrome,chromeHeadless
            pnpm test:karma:profile --browsers chrome,chromeHeadless
      # Persist reports for inspection in https://mui-dashboard.netlify.app/
      - store_artifacts:
          # see karma.conf.profile.js reactProfilerReporter.outputDir
          path: tmp/react-profiler-report/karma
          destination: react-profiler-report/karma
  test_regressions:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    steps:
      - checkout
      - install_js:
          browsers: true
      - run:
          name: Run visual regression tests
          command: xvfb-run pnpm test:regressions
      - run:
          name: Build packages for fixtures
          command: pnpm release:build
      - run:
          name: Analyze exported typescript
          command: pnpm test:attw
      - run:
          name: test exported typescript
          command: pnpm --filter @mui-internal/test-module-resolution typescript:all
      - run:
          name: Run visual regression tests using Pigment CSS
          command: xvfb-run pnpm test:regressions-pigment-css
      - run:
          name: Upload screenshots to Argos CI
          command: pnpm test:argos
  test_bundling_prepare:
    <<: *default-job
    steps:
      - checkout
      - install_js
      - run:
          name: Build packages for fixtures
          command: pnpm lerna run --scope "@mui/*" build
      - run:
          name: Pack packages
          command: pnpm release:pack
      - persist_to_workspace:
          root: packed
          paths:
            - '*'
  test_bundling_node_cjs:
    <<: *default-job
    working_directory: /tmp/material-ui/test/bundling/fixtures/node-cjs/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_node_esm:
    <<: *default-job
    working_directory: /tmp/material-ui/test/bundling/fixtures/node-esm/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          # TODO: Known failure
          command: pnpm start
  test_bundling_next_webpack4:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    working_directory: /tmp/material-ui/test/bundling/fixtures/next-webpack4/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_next_webpack5:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    working_directory: /tmp/material-ui/test/bundling/fixtures/next-webpack5/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_create_react_app:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    working_directory: /tmp/material-ui/test/bundling/fixtures/create-react-app/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_snowpack:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    working_directory: /tmp/material-ui/test/bundling/fixtures/snowpack/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_vite:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    working_directory: /tmp/material-ui/test/bundling/fixtures/vite/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_esbuild:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    working_directory: /tmp/material-ui/test/bundling/fixtures/esbuild/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundling_gatsby:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    environment:
      GATSBY_CPU_COUNT: '3'
    working_directory: /tmp/material-ui/test/bundling/fixtures/gatsby/
    steps:
      - checkout:
          path: /tmp/material-ui
      - attach_workspace:
          at: /tmp/material-ui/packed
      - setup_corepack:
          browsers: true
      - run:
          name: Install dependencies
          command: pnpm install --ignore-workspace
      - run:
          name: Test fixture
          command: pnpm start
  test_bundle_size_monitor:
    <<: *default-job
    steps:
      - checkout
      - install_js
      - run:
          name: prepare danger on PRs
          command: pnpm danger ci
          environment:
            DANGER_COMMAND: prepareBundleSizeReport
      - setup_corepack
      - run:
          name: build @mui packages
          command: pnpm lerna run --ignore @mui/icons-material --concurrency 6 --scope "@mui/*" build
      - run:
          name: create @mui/material canary distributable
          command: |
            cd packages/mui-material/build
            npm version 0.0.0-canary.${CIRCLE_SHA1} --no-git-tag-version
            npm pack
            mv mui-material-0.0.0-canary.${CIRCLE_SHA1}.tgz ../../../mui-material.tgz
      - when:
          # don't run on PRs
          condition:
            not:
              matches:
                # "^pull/\d+" is not valid YAML
                # "^pull/\\d+" matches neither 'pull/1' nor 'main'
                # Note that we want to include 'pull/1', 'pull/1/head' and ''pull/1/merge'
                pattern: '^pull/.+$'
                value: << pipeline.git.branch >>
          steps:
            - aws-cli/setup:
                aws_access_key_id: $AWS_ACCESS_KEY_ID_ARTIFACTS
                aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_ARTIFACTS
                region: ${AWS_REGION_ARTIFACTS}
            # Upload distributables to S3
            - aws-s3/copy:
                from: mui-material.tgz
                to: s3://mui-org-ci/artifacts/$CIRCLE_BRANCH/$CIRCLE_SHA1/
      - store_artifacts:
          path: mui-material.tgz
          destination: mui-material.tgz
      - run:
          name: create a size snapshot
          command: pnpm size:snapshot
      - store_artifacts:
          name: persist size snapshot as pipeline artifact
          path: size-snapshot.json
          destination: size-snapshot.json
      - when:
          # don't run on PRs
          condition:
            not:
              matches:
                # "^pull/\d+" is not valid YAML
                # "^pull/\\d+" matches neither 'pull/1' nor 'main'
                # Note that we want to include 'pull/1', 'pull/1/head' and ''pull/1/merge'
                pattern: '^pull/.+$'
                value: << pipeline.git.branch >>
          steps:
            - aws-cli/setup:
                aws_access_key_id: $AWS_ACCESS_KEY_ID_ARTIFACTS
                aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_ARTIFACTS
                region: ${AWS_REGION_ARTIFACTS}
            # persist size snapshot on S3
            - aws-s3/copy:
                arguments: --content-type application/json
                from: size-snapshot.json
                to: s3://mui-org-ci/artifacts/$CIRCLE_BRANCH/$CIRCLE_SHA1/
            # symlink size-snapshot to latest
            - aws-s3/copy:
                arguments: --content-type application/json
                from: size-snapshot.json
                to: s3://mui-org-ci/artifacts/$CIRCLE_BRANCH/latest/
      - run:
          name: Run danger on PRs
          command: pnpm danger ci --fail-on-errors
          environment:
            DANGER_COMMAND: reportBundleSize
  test_benchmark:
    <<: *default-job
    docker:
      - image: mcr.microsoft.com/playwright:v1.51.1-noble
    steps:
      - checkout
      - install_js:
          browsers: true
      - run: pnpm benchmark:browser
      - store_artifacts:
          name: Publish benchmark results as a pipeline artifact.
          path: tmp/benchmarks
          destination: benchmarks

workflows:
  version: 2
  pipeline:
    when:
      equal: [pipeline, << pipeline.parameters.workflow >>]
    jobs:
      - checkout:
          <<: *default-context
      - test_unit:
          <<: *default-context
          requires:
            - checkout
      - test_lint:
          <<: *default-context
          requires:
            - checkout
      - test_static:
          <<: *default-context
          requires:
            - checkout
      - test_types:
          <<: *default-context
          requires:
            - checkout
      - test_browser:
          <<: *default-context
          requires:
            - checkout
      - test_regressions:
          <<: *default-context
          requires:
            - checkout
      - test_e2e:
          <<: *default-context
          requires:
            - checkout
      - test_bundle_size_monitor:
          <<: *default-context
          requires:
            - checkout
  e2e-website:
    when:
      equal: [e2e-website, << pipeline.parameters.workflow >>]
    jobs:
      - checkout:
          <<: *default-context
      - test_e2e_website:
          <<: *default-context
          requires:
            - checkout
  bundling:
    when:
      equal: [bundling, << pipeline.parameters.workflow >>]
    jobs:
      - test_bundling_prepare:
          <<: *default-context
      - test_bundling_node_cjs:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_node_esm:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_create_react_app:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_snowpack:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_vite:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_esbuild:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_gatsby:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_next_webpack4:
          <<: *default-context
          requires:
            - test_bundling_prepare
      - test_bundling_next_webpack5:
          <<: *default-context
          requires:
            - test_bundling_prepare
  profile:
    when:
      equal: [profile, << pipeline.parameters.workflow >>]
    jobs:
      - test_profile:
          <<: *default-context
  # This workflow can be triggered manually on the PR
  react-17:
    when:
      equal: [react-17, << pipeline.parameters.workflow >>]
    jobs:
      - test_unit:
          <<: *default-context
          react-version: ^17.0.0
          name: test_unit-react@17
      - test_browser:
          <<: *default-context
          react-version: ^17.0.0
          name: test_browser-react@17
      - test_regressions:
          <<: *default-context
          react-version: ^17.0.0
          name: test_regressions-react@17
      - test_e2e:
          <<: *default-context
          react-version: ^17.0.0
          name: test_e2e-react@17
  # This workflow is identical to react-17, but scheduled
  # TODO: The v17 tests have deteriorated to the point of no return. Fix for v18 once we
  # deprecate v17, and reenable this workflow.
  # react-17-cron:
  #   triggers:
  #     - schedule:
  #         cron: '0 0 * * *'
  #         filters:
  #           branches:
  #             only:
  #               - master
  #               - next
  #   jobs:
  #     - test_unit:
  #         <<: *default-context
  #         react-version: ^17.0.0
  #         name: test_unit-react@17
  #     - test_browser:
  #         <<: *default-context
  #         react-version: ^17.0.0
  #         name: test_browser-react@17
  #     - test_regressions:
  #         <<: *default-context
  #         react-version: ^17.0.0
  #         name: test_regressions-react@17
  #     - test_e2e:
  #         <<: *default-context
  #         react-version: ^17.0.0
  #         name: test_e2e-react@17
  # This workflow can be triggered manually on the PR
  react-18:
    when:
      equal: [react-18, << pipeline.parameters.workflow >>]
    jobs:
      - test_unit:
          <<: *default-context
          react-version: ^18.0.0
          name: test_unit-react@18
      - test_browser:
          <<: *default-context
          react-version: ^18.0.0
          name: test_browser-react@18
      - test_regressions:
          <<: *default-context
          react-version: ^18.0.0
          name: test_regressions-react@18
      - test_e2e:
          <<: *default-context
          react-version: ^18.0.0
          name: test_e2e-react@18
  # This workflow is identical to react-18, but scheduled
  react-18-cron:
    triggers:
      - schedule:
          cron: '0 0 * * *'
          filters:
            branches:
              only:
                # #target-branch-reference
                - master
                - v5.x
                - v6.x
    jobs:
      - test_unit:
          <<: *default-context
          react-version: ^18.0.0
          name: test_unit-react@18
      - test_browser:
          <<: *default-context
          react-version: ^18.0.0
          name: test_browser-react@18
      - test_regressions:
          <<: *default-context
          react-version: ^18.0.0
          name: test_regressions-react@18
      - test_e2e:
          <<: *default-context
          react-version: ^18.0.0
          name: test_e2e-react@18
  # This workflow can be triggered manually on the PR
  react-next:
    when:
      equal: [react-next, << pipeline.parameters.workflow >>]
    jobs:
      - test_unit:
          <<: *default-context
          react-version: next
          name: test_unit-react@next
      - test_browser:
          <<: *default-context
          react-version: next
          name: test_browser-react@next
      - test_regressions:
          <<: *default-context
          react-version: next
          name: test_regressions-react@next
      - test_e2e:
          <<: *default-context
          react-version: next
          name: test_e2e-react@next
  # This workflow is identical to react-next, but scheduled
  react-next-cron:
    triggers:
      - schedule:
          cron: '0 0 * * *'
          filters:
            branches:
              only:
                # #target-branch-reference
                - master
                - v6.x
    jobs:
      - test_unit:
          <<: *default-context
          react-version: next
          name: test_unit-react@next
      - test_browser:
          <<: *default-context
          react-version: next
          name: test_browser-react@next
      - test_regressions:
          <<: *default-context
          react-version: next
          name: test_regressions-react@next
      - test_e2e:
          <<: *default-context
          react-version: next
          name: test_e2e-react@next
  typescript-next:
    triggers:
      - schedule:
          cron: '0 0 * * *'
          filters:
            branches:
              only:
                # #target-branch-reference
                - master
                - v6.x
    jobs:
      - test_types_next:
          <<: *default-context
  benchmark:
    when:
      equal: [benchmark, << pipeline.parameters.workflow >>]
    jobs:
      - test_benchmark:
          <<: *default-context
```
## `dataset_sample\yaml\mui-org_material-ui\.github\FUNDING.yml`

YAML · 337 bytes · quality_score 0.8 · complexity 0 · documentation_ratio 0.142857 · repository node-utils · 862 stars · created 2023-12-26T12:48:47 · BSD-3-Clause · is_test false · file_hash 99945fef03eb7f70f567021676e5bbeb

```yaml
# These are supported funding model platforms

github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: mui-org
ko_fi: # Replace with a single Ko-fi username
tidelift: npm/@mui/material
custom: # Replace with a single custom sponsorship URL
```
## `dataset_sample\yaml\mui-org_material-ui\.github\ISSUE_TEMPLATE\1.bug.yml`

YAML · 2,819 bytes · quality_score 0.95 · complexity 0.060606 · documentation_ratio 0.031746 · repository awesome-app · 592 stars · created 2025-05-28T09:36:05 · BSD-3-Clause · is_test false · file_hash cd99de4e44e5ba1b74ffdcad96e0e6cf

````yaml
name: Bug report 🐛
description: Create a bug report for Material UI, MUI System, or Joy UI.
labels: ['status: waiting for maintainer']
body:
  - type: markdown
    attributes:
      value: Thanks for contributing by creating an issue! ❤️ Please provide a searchable summary of the issue in the title above ⬆️.
  - type: input
    attributes:
      label: Search keywords
      description: |
        Your issue may have already been reported! First search for duplicates among the [existing issues](https://github.com/mui/material-ui/issues?q=is%3Aopen+is%3Aclosed).
        If your issue isn't a duplicate, great! Please list the keywords you used so people in the future can find this one more easily:
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Latest version
      description: We roll bug fixes, performance enhancements, and other improvements into new releases.
      options:
        - label: I have tested the latest version
          required: true
  - type: textarea
    attributes:
      label: Steps to reproduce
      description: |
        **⚠️ Issues that we can't reproduce can't be fixed.**

        Please provide a link to a live example and an unambiguous set of steps to reproduce this bug. See our [documentation](https://mui.com/material-ui/getting-started/support/#bug-reproductions) on how to build a reproduction case.
        You can use this StackBlitz template as a starting point: [material-ui-vite-ts](https://stackblitz.com/github/mui/material-ui/tree/master/examples/material-ui-vite-ts)
      value: |
        Steps:
        1. Open this link to live example: (required)
        2.
        3.
  - type: textarea
    attributes:
      label: Current behavior
      description: Describe what happens instead of the expected behavior.
  - type: textarea
    attributes:
      label: Expected behavior
      description: Describe what should happen.
  - type: textarea
    attributes:
      label: Context
      description: What are you trying to accomplish? Providing context helps us come up with a solution that is more useful in the real world.
  - type: textarea
    attributes:
      label: Your environment
      description: Run `npx @mui/envinfo` and post the results. If you encounter issues with TypeScript please include the used tsconfig.
      value: |
        <details>
          <summary><code>npx @mui/envinfo</code></summary>

        ```
          Don't forget to mention which browser you used.
          Output from `npx @mui/envinfo` goes here.
        ```
        </details>
  - type: markdown
    attributes:
      value: |
        ## :heart: Love Material UI?

        Consider donating $10 to sustain our open-source work: [https://opencollective.com/mui-org](https://opencollective.com/mui-org).
````
## `dataset_sample\yaml\mui-org_material-ui\.github\ISSUE_TEMPLATE\2.feature.yml`

YAML · 1,747 bytes · quality_score 0.95 · complexity 0.1 · documentation_ratio 0.025641 · repository python-kit · 288 stars · created 2024-06-12T23:28:39 · Apache-2.0 · is_test false · file_hash c440cd48514042295469542c6cb58158

```yaml
name: Feature request 💡
description: Suggest a new idea for Material UI, MUI System, or Joy UI.
labels: ['status: waiting for maintainer']
body:
  - type: markdown
    attributes:
      value: Thanks for contributing by creating an issue! ❤️ Please provide a searchable summary of the issue in the title above ⬆️.
  - type: input
    attributes:
      label: Search keywords
      description: |
        Your issue may have already been reported! First search for duplicates among the [existing issues](https://github.com/mui/material-ui/issues?q=is%3Aopen+is%3Aclosed).
        If your issue isn't a duplicate, great! Please list the keywords you used so people in the future can find this one more easily:
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Latest version
      description: We roll bug fixes, performance enhancements, and other improvements into new releases.
      options:
        - label: I have tested the latest version
          required: true
  - type: textarea
    attributes:
      label: Summary
      description: Describe how it should work.
  - type: textarea
    attributes:
      label: Examples
      description: Provide a link to the Material Design specification, other implementations, or screenshots of the expected behavior.
  - type: textarea
    attributes:
      label: Motivation
      description: What are you trying to accomplish? Providing context helps us come up with a solution that is more useful in the real world.
  - type: markdown
    attributes:
      value: |
        ## :heart: Love Material UI?

        Consider donating $10 to sustain our open-source work: [https://opencollective.com/mui-org](https://opencollective.com/mui-org).
```
## `dataset_sample\yaml\mui-org_material-ui\.github\ISSUE_TEMPLATE\3.rfc.yml`

YAML · 1,513 bytes · quality_score 0.85 · complexity 0.162162 · documentation_ratio 0 · repository node-utils · 781 stars · created 2024-03-27T00:05:48 · MIT · is_test false · file_hash ff78575bc4e9393f6e9113d5902ba8fd

```yaml
name: RFC 💬
description: Request for comments for your proposal.
title: '[RFC] '
labels: ['status: waiting for maintainer', 'RFC']
body:
  - type: markdown
    attributes:
      value: |
        Please provide a searchable summary of the RFC in the title above. ⬆️

        Thanks for contributing by creating an RFC! ❤️
  - type: textarea
    attributes:
      label: What's the problem?
      description: Write a short paragraph or bulleted list to briefly explain what you're trying to do, what outcomes you're aiming for.
  - type: textarea
    attributes:
      label: What are the requirements?
      description: Provide a list of requirements that should be met by the accepted proposal.
  - type: textarea
    attributes:
      label: What are our options?
      description: What are the alternative options to achieve the desired outcome?
  - type: textarea
    attributes:
      label: Proposed solution
      description: |
        This is the core of the RFC. Please clearly explain the reasoning behind your proposed solution, including why it would be preferred over possible alternatives.

        Consider:
        - using diagrams to help illustrate your ideas
        - including code examples if you're proposing an interface or system contract
        - linking to relevant project briefs or wireframes
  - type: textarea
    attributes:
      label: Resources and benchmarks
      description: Attach any issues, PRs, links, documents, etc… that might be relevant to the RFC.
```
# dataset_sample/yaml/mui-org_material-ui/.github/ISSUE_TEMPLATE/4.docs-feedback.yml
name: Docs feedback
description: Improve documentation about Material UI, MUI System, or Joy UI.
labels: ['status: waiting for maintainer', 'support: docs-feedback']
title: '[docs] '
body:
  - type: markdown
    attributes:
      value: Thanks for contributing by creating an issue! ❤️ Please provide a searchable summary of the issue in the title above ⬆️.
  - type: input
    attributes:
      label: Search keywords
      description: |
        Your issue may have already been reported! First search for duplicates among the [existing issues](https://github.com/mui/material-ui/issues?q=is%3Aopen+is%3Aclosed).
        If your issue isn't a duplicate, great! Please list the keywords you used so people in the future can find this one more easily:
    validations:
      required: true
  - type: input
    id: page-url
    attributes:
      label: Related page
      description: Which page of the documentation is this about?
      placeholder: https://mui.com/material-ui/
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Kind of issue
      description: What kind of problem are you facing?
      options:
        - Unclear explanations
        - Missing information
        - Broken demo
        - Other
    validations:
      required: true
  - type: textarea
    attributes:
      label: Issue description
      description: |
        Let us know what went wrong when you were using this documentation and what we could do to improve it.
  - type: textarea
    attributes:
      label: Context
      description: What are you trying to accomplish? Providing context helps us come up with a solution that is more useful in the real world.
  - type: markdown
    attributes:
      value: |
        ## :heart: Love Material UI?

        Consider donating $10 to sustain our open-source work: [https://opencollective.com/mui-org](https://opencollective.com/mui-org).
# dataset_sample/yaml/mui-org_material-ui/.github/ISSUE_TEMPLATE/5.priority-support.yml
name: 'Priority Support: SLA ⏰'
description: I'm an MUI X Premium user and we have purchased the Priority Support add-on. I can't find a solution to my problem with Material UI, MUI System, or Joy UI.
title: '[question] '
labels: ['status: waiting for maintainer', 'support: unknown']
body:
  - type: markdown
    attributes:
      value: |
        Please provide a searchable summary of the issue in the title above ⬆️.
  - type: input
    attributes:
      label: Search keywords
      description: |
        Your issue may have already been reported! First search for duplicates among the [existing issues](https://github.com/mui/material-ui/issues?q=is%3Aopen+is%3Aclosed).
        If your issue isn't a duplicate, great! Please list the keywords you used so people in the future can find this one more easily:
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Latest version
      description: We roll bug fixes, performance enhancements, and other improvements into new releases.
      options:
        - label: I have tested the latest version
          required: true
  - type: textarea
    attributes:
      label: The problem in depth
  - type: textarea
    attributes:
      label: Your environment
      description: Run `npx @mui/envinfo` and post the results. If you encounter issues with TypeScript please include the used tsconfig.
      value: |
        <details>
          <summary>`npx @mui/envinfo`</summary>

        ```
          Don't forget to mention which browser you used.
          Output from `npx @mui/envinfo` goes here.
        ```
        </details>
# dataset_sample/yaml/mui-org_material-ui/.github/ISSUE_TEMPLATE/config.yml
contact_links:
  - name: Support
    url: https://mui.com/material-ui/getting-started/support/
    about: I need support with Material UI, MUI System, or Joy UI.
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/check-if-pr-has-label.yml
name: Check if PR has label

on:
  pull_request:
    types: [opened, reopened, labeled, unlabeled]

permissions: {}

jobs:
  test-label-applied:
    # Tests that label is added on the PR
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: mnajdova/github-action-required-labels@ca0df9249827e43aa4b4a0d25d9fe3e9b19b0705 # v2.1.0
        with:
          mode: minimum
          count: 1
          labels: ''
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/ci-check.yml
# This workflow is a workaround for ci.yml to bypass the github checks
#
# Ref: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/troubleshooting-required-status-checks#handling-skipped-but-required-checks
name: CI Check

on:
  push:
    branches-ignore:
      - 'renovate/**'
  pull_request:
    paths:
      - 'docs/**'
      - 'examples/**'

permissions: {}

jobs:
  test-dev:
    if: ${{ github.actor != 'l10nbot' }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [macos-latest, windows-latest, ubuntu-latest]
    steps:
      - run: 'echo "No build required"'
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/ci.yml
name: CI

on:
  push:
    branches-ignore:
      # Renovate branches are always Pull Requests.
      # We don't need to run CI twice (push+pull_request)
      - 'renovate/**'
  pull_request:
    paths-ignore:
      # should sync with ci-check.yml as a workaround to bypass github checks
      - 'examples/**'

permissions: {}

jobs:
  continuous-releases:
    runs-on: ubuntu-latest
    steps:
      - run: echo "${{ github.actor }}"
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
      - name: Set up pnpm
        uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
      - name: Use Node.js 20.x
        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
        with:
          node-version: 20
          cache: 'pnpm' # https://github.com/actions/setup-node/blob/main/docs/advanced-usage.md#caching-packages-dependencies
      - run: npm install -g npm@latest
      - run: pnpm install:codesandbox
      - run: pnpm build:codesandbox
      - run: pnpm pkg-pr-new-release

  # Tests dev-only scripts across all supported dev environments
  test-dev:
    # l10nbot does not affect dev scripts.
    if: ${{ github.actor != 'l10nbot' }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [macos-latest, windows-latest, ubuntu-latest]
    steps:
      - run: echo "${{ github.actor }}"
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          # fetch all tags which are required for `pnpm release:changelog`
          fetch-depth: 0
      - name: Set up pnpm
        uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
      - name: Use Node.js 20.x
        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
        with:
          node-version: 20
          cache: 'pnpm' # https://github.com/actions/setup-node/blob/main/docs/advanced-usage.md#caching-packages-dependencies
      - run: pnpm install
      - run: pnpm build:ci
        env:
          NODE_OPTIONS: --max_old_space_size=6144
      - run: pnpm release:changelog
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - run: pnpm validate-declarations
      - name: pnpm release:tag
        run: |
          git remote -v
          pnpm release:tag --dryRun
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/closed-issue-message.yml
name: Add closing message to issue

on:
  issues:
    types:
      - closed

permissions: {}

jobs:
  add-comment:
    name: Add closing message
    if: github.event.issue.state_reason == 'completed'
    uses: mui/mui-public/.github/workflows/issues_add-closing-message.yml@master
    permissions:
      contents: read
      issues: write
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/codeql.yml
name: CodeQL

on:
  schedule:
    - cron: '0 2 * * *'

permissions: {}

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15
        with:
          languages: typescript
          config-file: ./.github/codeql/codeql-config.yml
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.

        # For details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
        # queries: security-extended,security-and-quality
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/create-cherry-pick-pr.yml
name: Create cherry-pick PR
on:
  pull_request_target:
    branches:
      - 'next'
      - 'v*.x'
      - 'master'
    types: ['closed']

permissions: {}

jobs:
  create_pr:
    name: Create cherry-pick PR
    uses: mui/mui-public/.github/workflows/prs_create-cherry-pick-pr.yml@master
    permissions:
      contents: write
      pull-requests: write
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/ensure-triage-label.yml
name: Ensure triage label is present

on:
  label:
    types:
      - deleted
  issues:
    types:
      - opened

permissions: {}

jobs:
  label_issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
        with:
          script: |
            const { data: labels } = await github.rest.issues.listLabelsOnIssue({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
            });

            if (labels.length <= 0) {
              await github.rest.issues.addLabels({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                labels: ['status: waiting for maintainer']
              })
            }
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/issue-cleanup.yml
name: Cleanup issue comment

on:
  issues:
    types:
      - opened

permissions: {}

jobs:
  issue_cleanup:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
        with:
          script: |
            const issue = await github.rest.issues.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            })

            const lines = issue.data.body.split('\n')

            const _ = extractInputSection(lines, 'Latest version')
            const searchKeywords = extractInputSection(lines, 'Search keywords')
            const orderID = extractInputSection(lines, 'Order ID or Support key')

            lines.push('')
            lines.push('**Search keywords**: ' + searchKeywords)
            if (orderID !== '' && orderID !== '_No response_') {
              lines.push('**Order ID**: ' + orderID)
            }

            const body = lines.join('\n')

            await github.rest.issues.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body,
            })

            function extractInputSection(lines, title) {
              const index = lines.findIndex(line => line.startsWith('###') && line.includes(title))
              if (index === -1) {
                return ''
              }
              return lines.splice(index, 4)[2].trim()
            }
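The inline script above splices each issue-form answer out of the body by finding its `### <label>` heading and taking the line two below it (heading, blank line, answer, blank line). As a rough illustration only, not part of the workflow, the same logic in Python with a hypothetical sample body:

```python
def extract_input_section(lines, title):
    """Mirror of the workflow's extractInputSection: locate the '### <title>'
    heading, remove the 4-line block, and return the answer line (index 2)."""
    for index, line in enumerate(lines):
        if line.startswith('###') and title in line:
            block = lines[index:index + 4]  # heading, blank, answer, blank
            del lines[index:index + 4]
            return block[2].strip()
    return ''

# Hypothetical issue-form body, already split on newlines.
body = [
    '### Search keywords',
    '',
    'autocomplete focus',
    '',
    '### Summary',
    '',
    'Focus is lost on blur.',
]
keywords = extract_input_section(body, 'Search keywords')
# keywords == 'autocomplete focus'; the whole block is spliced out of `body`
```

Note the side effect matches the JavaScript `Array.prototype.splice` call: the extracted block is removed from the list in place, so later appends land on a cleaned-up body.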
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/maintenance.yml
name: Maintenance

on:
  # So that PRs touching the same files as the push are updated
  push:
    branches:
      # #target-branch-reference
      - master
      - v6.x
  # So that the `dirtyLabel` is removed if conflicts are resolved
  # Could put too much strain on rate limit
  # If we hit the rate limit too often remove this event
  pull_request_target:
    branches:
      # #target-branch-reference
      - master
      - v6.x
    types: [synchronize]

permissions: {}

jobs:
  main:
    # l10nbot creates a lot of commits at once which starves CI.
    # We rely on other pushes to mark these branches as outdated.
    if: ${{ github.actor != 'l10nbot' }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - run: echo "${{ github.actor }}"
      - name: check if prs are dirty
        uses: eps1lon/actions-label-merge-conflict@1df065ebe6e3310545d4f4c4e862e43bdca146f0 # v3.0.3
        with:
          dirtyLabel: 'PR: out-of-date'
          removeOnDirtyLabel: 'PR: ready to ship'
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          retryAfter: 130
          retryMax: 10
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/mark-duplicate.yml
name: Mark duplicate

on:
  issue_comment:
    types: [created]

permissions: {}

jobs:
  mark-duplicate:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - name: mark-duplicate
        uses: actions-cool/issues-helper@a610082f8ac0cf03e357eb8dd0d5e2ba075e017e # v3.6.0
        with:
          actions: 'mark-duplicate'
          token: ${{ secrets.GITHUB_TOKEN }}
          duplicate-labels: 'duplicate'
          remove-labels: 'status: incomplete,status: waiting for maintainer'
          close-issue: true
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/no-response.yml
name: No response

# `issues`.`closed`, `issue_comment`.`created`, and `scheduled` event types are required for this Action
# to work properly.
on:
  issues:
    types: [closed]
  issue_comment:
    types: [created]
  schedule:
    # These runs in our repos are spread evenly throughout the day to avoid hitting rate limits.
    # If you change this schedule, consider changing the remaining repositories as well.
    # Runs at 12 am, 12 pm
    - cron: '0 0,12 * * *'

permissions: {}

jobs:
  noResponse:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - uses: MBilalShafi/no-response-add-label@8336c12292902f27b931154c34ba4670cb9899a2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          # Number of days of inactivity before an Issue is closed for lack of response
          daysUntilClose: 7
          # Label requiring a response
          responseRequiredLabel: 'status: waiting for author'
          # Label to add back when required label is removed
          optionalFollowupLabel: 'status: waiting for maintainer'
          # Comment to post when closing an Issue for lack of response. Set to `false` to disable
          closeComment: >
            Since the issue is missing key information and has been inactive for 7 days, it has been automatically closed.
            If you wish to see the issue reopened, please provide the missing information.
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/priority-support-validation-prompt.yml
name: Priority Support Validation Prompt

on:
  issues:
    types:
      - labeled

permissions: {}

jobs:
  comment:
    name: Create or update comment
    runs-on: ubuntu-latest
    permissions:
      issues: write

    steps:
      - name: Find Comment
        uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3
        id: findComment
        with:
          issue-number: ${{ github.event.issue.number }}
          comment-author: 'github-actions[bot]'
          body-includes: You have created a priority support request

      - name: Create comment
        if: ${{ steps.findComment.outputs.comment-id == '' && contains(github.event.label.name, 'unknown') }}
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
        with:
          issue-number: ${{ github.event.issue.number }}
          body: |
            You have created a support request under the ["Priority Support"](https://mui.com/legal/technical-support-sla/#priority-support) terms, which is a paid add-on to MUI X Premium ⏰. Please validate your support key using the link below:

            https://tools-public.mui.com/prod/pages/validateSupport?repo=mui-x&issueId=${{ github.event.issue.number }}

            Do not share your support key in this issue!

            Priority Support is only provided to verified customers. Once you have verified your support key, we will remove the `support: unknown` label and add the `support: priority` label to this issue. Only then will the time for the SLA start counting.

      - name: Update comment
        if: ${{ steps.findComment.outputs.comment-id != '' && contains(github.event.label.name, 'priority') }}
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
        with:
          comment-id: ${{ steps.findComment.outputs.comment-id }}
          body: |
            Thank you for verifying your support key, your SLA starts now.
          edit-mode: replace
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/publish-canaries.yml
name: Publish canary packages to npm

on:
  workflow_dispatch:

permissions: {}

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
      - name: Set up pnpm
        uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
      - name: Use Node.js 20.x
        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
        with:
          node-version: 20
          cache: 'pnpm' # https://github.com/actions/setup-node/blob/main/docs/advanced-usage.md#caching-packages-dependencies
      - run: pnpm install
      - run: pnpm canary:release --ignore @mui/icons-material --yes --skip-last-commit-comparison
        env:
          NPM_TOKEN: ${{secrets.NPM_TOKEN}}
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/scorecards.yml
name: Scorecards supply-chain security

on:
  # Only the default branch is supported.
  branch_protection_rule:
  schedule:
    - cron: '0 2 * * *'

permissions: {}

jobs:
  analysis:
    name: Scorecards analysis
    runs-on: ubuntu-latest
    permissions:
      # Needed to upload the results to code-scanning dashboard.
      security-events: write
      # Used to receive a badge.
      id-token: write
      # Needed for private repositories.
      contents: read
      actions: read
    steps:
      - name: Checkout code
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          persist-credentials: false
      - name: Run analysis
        uses: ossf/scorecard-action@f49aabe0b5af0936a0987cfb85d86b75731b0186 # v2.4.1
        with:
          results_file: results.sarif
          results_format: sarif
          # (Optional) Read-only PAT token. Uncomment the `repo_token` line below if:
          # - you want to enable the Branch-Protection check on a *public* repository, or
          # - you are installing Scorecards on a *private* repository
          # To create the PAT, follow the steps in https://github.com/ossf/scorecard-action#authentication-with-pat.
          repo_token: ${{ secrets.SCORECARD_READ_TOKEN }}
          # Publish the results for public repositories to enable scorecard badges. For more details, see
          # https://github.com/ossf/scorecard-action#publishing-results.
          publish_results: true
      # Upload the results to GitHub's code scanning dashboard.
      - name: Upload to code-scanning
        uses: github/codeql-action/upload-sarif@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15
        with:
          sarif_file: results.sarif
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/support-stackoverflow.yml
# Configuration for support-requests - https://github.com/dessant/support-requests
name: Support Stack Overflow

on:
  issues:
    types: [labeled, unlabeled, reopened]

permissions: {}

jobs:
  mark-support:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - uses: dessant/support-requests@47d5ea12f6c9e4a081637de9626b7319b415a3bf # v4.0.0
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Label used to mark issues as support requests
          support-label: 'support: Stack Overflow'
          # Comment to post on issues marked as support requests. Add a link
          # to a support page, or set to `false` to disable
          issue-comment: |
            👋 Thanks for using this project!

            We use GitHub issues exclusively as a bug and feature request tracker; however, this issue appears to be a support request.

            For support with Material UI please check out https://mui.com/material-ui/getting-started/support/. Thanks!

            If you have a question on Stack Overflow, you are welcome to link to it here; it might help others.
            If your issue is subsequently confirmed as a bug, and the report follows the issue template, it can be reopened.
          close-issue: true
          issue-close-reason: 'not planned'
          lock-issue: false
# dataset_sample/yaml/mui-org_material-ui/.github/workflows/vale-action.yml
name: Vale action

on: [pull_request]

permissions: {}

jobs:
  vale:
    name: runner / vale
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - uses: errata-ai/vale-action@d89dee975228ae261d22c15adcd03578634d429c # v2.1.1
        continue-on-error: true # GitHub Action flag needed until https://github.com/errata-ai/vale-action/issues/89 is fixed
        with:
          # Errors should be more visible
          fail_on_error: true
          # The other reporters don't really work: https://github.com/reviewdog/reviewdog#reporters
          reporter: github-pr-check
          # Required, set by GitHub actions automatically:
          # https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
          token: ${{secrets.GITHUB_TOKEN}}
# dataset_sample/yaml/mui-org_material-ui/docs/mui-vale/styles/MUI/CorrectReferenceAllCases.yml
# Enforce a single way to write specific terms or phrases.
extends: substitution
message: Use '%s' instead of '%s'
level: error
ignorecase: true # There is only one correct way to spell these, so we want to match inputs regardless of case.
# swap maps tokens in the form bad: good
# for more information: https://vale.sh/docs/topics/styles/#substitution
swap:
  ' api': API
  typescript: TypeScript
  ' ts': TypeScript
  javascript: JavaScript
  ' js': JavaScript
  ' css ': CSS
  ' html ': HTML
  NPM: npm # https://css-tricks.com/start-sentence-npm/
  Github: GitHub
  StackOverflow: Stack Overflow
  Stack Overflow: Stack Overflow
  CSS modules: CSS Modules
  Tailwind CSS: Tailwind CSS
  Heat map: Heatmap
  Tree map: Treemap
  Sparkline Chart: Sparkline
  Gauge Chart: Gauge
  Treemap Chart: Treemap
  sub-component: subcomponent
  sub-components: subcomponents
  use-case: use case
  usecase: use case
  Material 3: Material Design 3
  VSCode: VS Code
  VS Code: VS Code
  'Codesandbox ': CodeSandbox
  code sandbox: CodeSandbox
  Stackblitz: StackBlitz
  Webpack: webpack # https://twitter.com/wSokra/status/855800490713649152
  app router: App Router # Next.js
  pages router: Pages Router # Next.js
  page router: Pages Router # Next.js
  ES modules: ES modules
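For illustration only, and not part of the Vale style itself, here is a minimal Python sketch of how an `ignorecase: true` substitution rule can flag "bad" spellings while leaving the preferred casing untouched. The `swap` map and sample sentences are hypothetical, and this toy ignores Vale's token-boundary handling:

```python
import re

# Hypothetical subset of the swap map above.
swap = {'typescript': 'TypeScript', 'Github': 'GitHub', 'usecase': 'use case'}

def lint(text):
    """Return (found, preferred) pairs for every case-insensitive match that
    is not already written in the preferred form."""
    findings = []
    for bad, good in swap.items():
        for match in re.finditer(re.escape(bad), text, flags=re.IGNORECASE):
            if match.group(0) != good:  # already-correct spellings pass
                findings.append((match.group(0), good))
    return findings

print(lint('We use typescript on github.'))
# [('typescript', 'TypeScript'), ('github', 'GitHub')]
```

The `match.group(0) != good` check is what makes the rule self-exempting: "TypeScript" matches the case-insensitive pattern but is never reported.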
# dataset_sample/yaml/mui-org_material-ui/docs/mui-vale/styles/MUI/CorrectRererenceCased.yml
# Write things correctly, please: no wrong references.
extends: substitution
message: Use '%s' instead of '%s'
level: error
ignorecase: false
# swap maps tokens in the form bad: good
# for more information: https://vale.sh/docs/topics/styles/#substitution
swap:
  eg: e.g.
  eg\.: e.g.
  e\.g: e.g.
  ie: i.e.
  ie\.: i.e.
  i\.e: i.e.
# dataset_sample/yaml/mui-org_material-ui/docs/mui-vale/styles/MUI/GoogleLatin.yml
extends: substitution
message: Use '%s' instead of '%s'
link: https://developers.google.com/style/abbreviations
ignorecase: false
level: error
nonword: true
action:
  name: replace
swap:
  '\b(?:eg|e\.g\.)(?=[\s,;])': for example
  '\b(?:ie|i\.e\.)(?=[\s,;])': that is
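Because `nonword: true` lets these patterns contain punctuation, the regex does the real work: the lookahead `(?=[\s,;])` keeps the trailing space, comma, or semicolon out of the replaced span. A quick Python demonstration of the same pattern (the sample sentences are made up):

```python
import re

# The Latin-abbreviation pattern from the swap map above.
latin = re.compile(r'\b(?:eg|e\.g\.)(?=[\s,;])')

# The punctuation after the abbreviation survives because the lookahead
# matches it without consuming it.
print(latin.sub('for example', 'Use a token, eg a PAT.'))
# Use a token, for example a PAT.
print(latin.sub('for example', 'Some tools, e.g., linters.'))
# Some tools, for example, linters.
```

Note the alternation order does not matter here: at "e.g.," the literal `eg` branch fails on the dot, so the escaped `e\.g\.` branch matches instead.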
# dataset_sample/yaml/mui-org_material-ui/docs/mui-vale/styles/MUI/MuiBrandName.yml
# Without a non-breaking space, brand names can be split in the middle
# with the start and end on two different lines.
# For example, Apple does this meticulously with their brand name: https://www.apple.com/macbook-air/.
# Also read https://www.chrisdpeters.com/blog/using-non-breaking-spaces-to-help-with-branding/ for why.
extends: substitution
message: Use a non-breaking space (Option+Space on Mac, Alt+0160 on Windows, or AltGr+Space on Linux, instead of a space) for the brand name ('%s' instead of '%s')
level: error
ignorecase: true
# swap maps tokens in the form bad: good
# for more information: https://vale.sh/docs/topics/styles/#substitution
# (the replacement values below contain U+00A0 non-breaking spaces)
swap:
  Material UI: Material UI
  MUI X: MUI X
  Base UI: Base UI
  MUI Base: MUI Base
  MUI System: MUI System
  MUI Store: MUI Store
  MUI Core: MUI Core
  MUI Toolpad: Toolpad
  MUI Toolpad: Toolpad
  MUI Connect: MUI Connect
  Pigment CSS: Pigment CSS
# dataset_sample/yaml/mui-org_material-ui/docs/mui-vale/styles/MUI/NoBritish.yml
extends: substitution
message: Use the US spelling '%s' instead of the British '%s'
link: https://www.notion.so/mui-org/Writing-style-guide-2a957a4168a54d47b14bae026d06a24b?pvs=4#25755bff02764565b954631236ab183d
level: error
ignorecase: true
swap:
  aeon: eon
  aeroplane: airplane
  ageing: aging
  aluminium: aluminum
  anaemia: anemia
  anaesthesia: anesthesia
  analyse: analyze
  annexe: annex
  apologise: apologize
  authorisation: authorization
  authorise: authorize
  authorised: authorized
  authorising: authorizing
  behaviour: behavior
  bellow: below
  busses: buses
  calibre: caliber
  categorise: categorize
  categorised: categorized
  categorises: categorizes
  categorising: categorizing
  centre: center
  cheque: check
  civilisation: civilization
  civilise: civilize
  colour: color
  cosy: cozy
  cypher: cipher
  defence: defense
  dependant: dependent
  distil: distill
  draught: draft
  encyclopaedia: encyclopedia
  enquiry: inquiry
  enrol: enroll
  enrolment: enrollment
  enthral: enthrall
  favourite: favorite
  fibre: fiber
  fillet: filet
  flavour: flavor
  fulfil: fulfill
  furore: furor
  gaol: jail
  grey: gray
  honour: honor
  humour: humor
  initialled: initialed
  initialling: initialing
  instil: instill
  jewellery: jewelry
  labelled: labeled
  labelling: labeling
  labour: labor
  libellous: libelous
  licence: license
  likeable: likable
  liveable: livable
  lustre: luster
  manoeuvre: maneuver
  marvellous: marvelous
  meagre: meager
  metre: meter
  modelling: modeling
  moustache: mustache
  neighbour: neighbor
  normalise: normalize
  offence: offense
  optimise: optimize
  optimised: optimized
  optimising: optimizing
  organise: organize
  orientated: oriented
  paralyse: paralyze
  plough: plow
  pretence: pretense
  programme: program
  pyjamas: pajamas
  rateable: ratable
  realise: realize
  recognise: recognize
  reconnoitre: reconnoiter
  rumour: rumor
  sabre: saber
  saleable: salable
  saltpetre: saltpeter
  sceptic: skeptic
  sepulchre: sepulcher
  signalling: signaling
  sizeable: sizable
  skilful: skillful
  smoulder: smolder
  sombre: somber
  speciality: specialty
  spectre: specter
  splendour: splendor
  standardise: standardize
  standardised: standardized
  sulphur: sulfur
  theatre: theater
  travelled: traveled
  traveller: traveler
  travelling: traveling
  unshakeable: unshakable
  wilful: willful
  yoghurt: yogurt
# dataset_sample/yaml/mui-org_material-ui/docs/mui-vale/styles/MUI/NoCompanyName.yml
extends: existence
message: We avoid referencing the company name '%s'. Instead you can reference a product or the team.
level: warning
ignorecase: false
tokens:
  - 'MUI \w+'
# the first five exceptions contain U+00A0 non-breaking spaces
exceptions:
  - 'MUI X'
  - 'MUI System'
  - 'MUI Store'
  - 'MUI Core'
  - 'MUI Connect'
  # valid use of a regular space
  - 'MUI organization'
  - 'MUI ecosystem'
  - 'MUI products'
  - 'MUI team'
# .github/actions/build-ios-e2e-tests/action.yml
name: 'Build iOS end to end tests action'
description: 'Prepares and builds end to end tests on iOS device'
inputs:
  ios_device_pin_code:
    description: 'iOS Device Pin Code'
    required: true
  test_device_identifier_uuid:
    description: 'Test Device Identifier UUID'
    required: true
  has_time_account_number:
    description: 'Has Time Account Number'
    required: true
  no_time_account_number:
    description: 'No Time Account Number'
    required: true
  test_device_udid:
    description: 'Test Device UDID'
    required: true
  partner_api_token:
    description: 'Partner API Token'
    required: true
  test_name:
    description: 'Test case/suite name. Will run all tests in the test plan if not provided.'
    required: false
  outputs_path:
    description: 'Path to store outputs. This should be unique for each job run in order to avoid concurrency issues.'
    required: true

runs:
  using: 'composite'
  steps:
    - name: Configure Xcode project
      run: |
        for file in *.xcconfig.template ; do cp $file ${file//.template/} ; done
        sed -i "" "/^HAS_TIME_ACCOUNT_NUMBER/d" UITests.xcconfig
        sed -i "" "/^NO_TIME_ACCOUNT_NUMBER/d" UITests.xcconfig
        sed -i "" \
          "/IOS_DEVICE_PIN_CODE =/ s/= .*/= $IOS_DEVICE_PIN_CODE/" \
          UITests.xcconfig
        sed -i "" \
          "/TEST_DEVICE_IDENTIFIER_UUID =/ s/= .*/= $TEST_DEVICE_IDENTIFIER_UUID/" \
          UITests.xcconfig
        sed -i "" \
          "s#^// PARTNER_API_TOKEN =#PARTNER_API_TOKEN =#" \
          UITests.xcconfig
        sed -i "" \
          "/PARTNER_API_TOKEN =/ s#= .*#= $PARTNER_API_TOKEN#" \
          UITests.xcconfig
        sed -i "" \
          "/ATTACH_APP_LOGS_ON_FAILURE =/ s#= .*#= 1#" \
          UITests.xcconfig
        sed -i "" \
          "/TEST_DEVICE_IS_IPAD =/ s#= .*#= 0#" \
          UITests.xcconfig
        sed -i "" \
          "/UNINSTALL_APP_IN_TEST_SUITE_TEAR_DOWN =/ s#= .*#= 0#" \
          UITests.xcconfig
      shell: bash
      working-directory: ios/Configurations
      env:
        IOS_DEVICE_PIN_CODE: ${{ inputs.ios_device_pin_code }}
        TEST_DEVICE_IDENTIFIER_UUID: ${{ inputs.test_device_identifier_uuid }}
        HAS_TIME_ACCOUNT_NUMBER: ${{ inputs.has_time_account_number }}
        NO_TIME_ACCOUNT_NUMBER: ${{ inputs.no_time_account_number }}
        PARTNER_API_TOKEN: ${{ inputs.partner_api_token }}

    - name: Build app and tests for testing
      run: |
        if [ -n "$TEST_NAME" ]; then
          TEST_NAME_ARGUMENT=" -only-testing $TEST_NAME"
        else
          TEST_NAME_ARGUMENT=""
        fi
        set -o pipefail && env NSUnbufferedIO=YES xcodebuild \
          -project MullvadVPN.xcodeproj \
          -scheme MullvadVPNUITests \
          -testPlan MullvadVPNUITestsAll $TEST_NAME_ARGUMENT \
          -destination "platform=iOS,id=$TEST_DEVICE_UDID" \
          -derivedDataPath derived-data \
          clean build-for-testing 2>&1
      shell: bash
      working-directory: ios/
      env:
        TEST_DEVICE_UDID: ${{ inputs.test_device_udid }}
        TEST_NAME: ${{ inputs.test_name }}
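The xcconfig steps above rely on a sed pattern of the form `/KEY =/ s/= .*/= value/`: match the line containing the key, then replace everything after `=` with the new value, leaving other keys untouched. A minimal sketch of that pattern follows; the action targets macOS and therefore uses BSD sed's `-i ""`, whereas this sketch uses GNU sed's bare `-i`. The key name and value here are made up for illustration.

```shell
# Create a throwaway config file with two keys.
cfg=$(mktemp)
printf 'IOS_DEVICE_PIN_CODE = 0000\nOTHER_KEY = 1\n' > "$cfg"

PIN=1234  # hypothetical value, supplied via env in the real action
# On any line containing "IOS_DEVICE_PIN_CODE =", replace everything
# after "=" with the new value; other lines pass through unchanged.
sed -i "/IOS_DEVICE_PIN_CODE =/ s/= .*/= $PIN/" "$cfg"

out=$(cat "$cfg")
rm -f "$cfg"
echo "$out"
```

Because the address `/IOS_DEVICE_PIN_CODE =/` restricts the substitution, `OTHER_KEY` keeps its original value.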
# .github/actions/check-file-size/action.yml
name: "Check file size"
description: "Fails if a file exceeds a given size limit"
inputs:
  artifact:
    description: "Path to the file"
    required: true
  max_size:
    description: "Maximum allowed size in bytes"
    required: true
runs:
  using: "composite"
  steps:
    - name: Check file size
      shell: bash
      run: |
        if [ -f "${{ inputs.artifact }}" ]; then
          if [ "$(uname)" = "Darwin" ]; then
            SIZE=$(stat -f %z "${{ inputs.artifact }}")
          else
            SIZE=$(stat -c %s "${{ inputs.artifact }}")
          fi
          echo "File size: $SIZE bytes"
          echo "Size limit: ${{ inputs.max_size }} bytes"

          if [ "$SIZE" -gt "${{ inputs.max_size }}" ]; then
            echo "Error: Binary size exceeds limit."
            exit 1
          fi
        else
          echo "Error: File not found!"
          exit 1
        fi
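The action above branches on `uname` because the two common `stat` implementations disagree: BSD stat (macOS) prints a file's byte size with `-f %z`, while GNU stat (Linux) uses `-c %s`. A self-contained sketch of the same check, with a made-up file and limit:

```shell
# Write a 5-byte file to test against.
file=$(mktemp)
printf 'hello' > "$file"

# Portable size lookup: BSD stat vs GNU stat.
if [ "$(uname)" = "Darwin" ]; then
  size=$(stat -f %z "$file")
else
  size=$(stat -c %s "$file")
fi

max_size=4   # hypothetical limit, smaller than the file on purpose
if [ "$size" -gt "$max_size" ]; then
  result="too large"
else
  result="ok"
fi
rm -f "$file"
echo "$result"   # 5 > 4, so this prints "too large"
```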
# .github/actions/run-ios-e2e-tests/action.yml
name: 'Run iOS end to end tests action'
description: 'Runs end to end tests on iOS device'
inputs:
  test_name:
    description: 'Test case/suite name. Will run all tests in the test plan if not provided.'
    required: false
  test_device_udid:
    description: 'Test Device UDID'
    required: true
  outputs_path:
    description: >
      Path to where outputs are stored - both build outputs and outputs from running tests.
      This should be unique for each job run in order to avoid concurrency issues.
    required: true

runs:
  using: 'composite'
  steps:
    # Set up a unique output directory
    - name: Set up outputs directory
      run: |
        # Forcing the filesystem buffers to be flushed to ensure the
        # directory tree is updated
        sync
        if [ -n "$TEST_NAME" ]; then
          # Strip slashes to avoid creating subdirectories
          test_name_sanitized=$(printf "$TEST_NAME" | sed 's/\//_/g')
          echo "Setting output directory tests-output-test-name-sanitized"
          echo "$test_name_sanitized"
          test_output_directory="${{ env.OUTPUTS_PATH }}/tests-output-$test_name_sanitized"
        else
          echo "Setting output directory output"
          test_output_directory="${{ env.OUTPUTS_PATH }}/tests-output"
        fi

        echo "TEST_OUTPUT_DIRECTORY=$test_output_directory" >> $GITHUB_ENV
        echo "TEST_NAME_SANITIZED=$test_name_sanitized" >> $GITHUB_ENV
      shell: bash
      env:
        TEST_NAME: ${{ inputs.test_name }}
        OUTPUTS_PATH: ${{ inputs.outputs_path }}

    - name: Uninstall app
      run: ios-deploy --id $TEST_DEVICE_UDID --uninstall_only --bundle_id net.mullvad.MullvadVPN
      shell: bash
      env:
        TEST_DEVICE_UDID: ${{ inputs.test_device_udid }}

    - name: Run end-to-end-tests
      run: |
        # Forcing the filesystem buffers to be flushed to ensure the
        # directory tree is updated
        sync
        if [ -n "$TEST_NAME" ]; then
          TEST_NAME_ARGUMENT=" -only-testing $TEST_NAME"
        else
          TEST_NAME_ARGUMENT=""
        fi
        set -o pipefail && env NSUnbufferedIO=YES xcodebuild \
          -project MullvadVPN.xcodeproj \
          -scheme MullvadVPNUITests \
          -testPlan MullvadVPNUITestsAll $TEST_NAME_ARGUMENT \
          -resultBundlePath ${{ env.TEST_OUTPUT_DIRECTORY }}/xcode-test-report \
          -derivedDataPath derived-data \
          -destination "platform=iOS,id=$TEST_DEVICE_UDID" \
          test-without-building 2>&1 | xcbeautify --report junit \
          --report-path ${{ env.TEST_OUTPUT_DIRECTORY }}/junit-test-report
      shell: bash
      working-directory: ${{ inputs.outputs_path }}/mullvadvpn-app/ios
      env:
        TEST_NAME: ${{ inputs.test_name }}
        TEST_DEVICE_UDID: ${{ inputs.test_device_udid }}

    - name: Store test report artifact
      if: always()
      uses: actions/upload-artifact@v4
      with:
        name: ${{ env.TEST_NAME_SANITIZED }}-test-results
        path: |
          ${{ env.TEST_OUTPUT_DIRECTORY }}/junit-test-report/junit.xml
          ${{ env.TEST_OUTPUT_DIRECTORY }}/xcode-test-report.xcresult
      env:
        TEST_NAME: ${{ inputs.test_name }}
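The "Set up outputs directory" step above sanitizes the test identifier because a name like `Suite/testCase` would otherwise create a nested subdirectory when used as a path component. A sketch of that transformation, with a hypothetical test id; note it uses `printf '%s'` rather than passing the variable as the printf format string, a slightly hardened variant of the step's `printf "$TEST_NAME"`:

```shell
TEST_NAME="MullvadVPNUITests/testLogin"   # hypothetical test identifier
# Replace every "/" with "_" so the name is safe as a single
# directory-name component.
test_name_sanitized=$(printf '%s' "$TEST_NAME" | sed 's/\//_/g')
echo "tests-output-$test_name_sanitized"
# → tests-output-MullvadVPNUITests_testLogin
```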
# .github/ISSUE_TEMPLATE/2-bug-android.yml
---
name: 🐛📱 Android app bug report
description: This form is to report bugs in the Android Mullvad VPN app.
labels: ["bug", "android"]
body:
  - type: markdown
    attributes:
      value: >
        Thank you for wanting to help us improve the Mullvad VPN app by reporting issues.

  - type: checkboxes
    id: it-is-a-bug
    attributes:
      label: Is it a bug?
      description: >
        If you ran into a problem with the app and don't know for sure it is an actual bug,
        please contact support instead of filing a bug report. Go to
        `Settings (cogwheel) -> Support -> Report a problem`.
        That way the support team gets redacted logs from your app and can help you out better.
        You can also just email them at support@mullvadvpn.net.
      options:
        - label: I know this is an issue with the app, and contacting Mullvad support is not relevant.
          required: true

  - type: checkboxes
    id: checked-other-issues
    attributes:
      label: I have checked if others have reported this already
      description: >
        Before you submit a bug report, please look through the
        [existing issues](https://github.com/mullvad/mullvadvpn-app/issues?q=is%3Aissue)
        to see if it has already been reported by others.
        If so, please comment in those threads instead of creating new ones.
      options:
        - label: I have checked the issue tracker to see if others have reported similar issues.
          required: true

  - type: textarea
    id: current-behavior
    attributes:
      label: Current Behavior
      description: What is the current behavior you experience that you think is not correct?
    validations:
      required: true

  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected Behavior
      description: What is the behavior that you expect to happen instead?
    validations:
      required: true

  - type: textarea
    id: reproduction
    attributes:
      label: Steps to Reproduce
      description: >
        Please provide clear and detailed steps on how to reproduce the issue you are reporting.
        If it is very hard to reproduce the issue, then there is no guarantee we can locate the bug and fix it.
      value: |
        1. ...
        2. ...
    validations:
      required: true

  - type: textarea
    id: logs
    attributes:
      label: Failure Logs
      description: >
        If relevant, please include logs from the app from the time around when the bug manifested itself.

        Go to settings (cogwheel) -> Support -> Report a problem -> View app logs to see the logs and
        copy them to here.
      render: shell

  - type: input
    id: os-version
    attributes:
      label: Android version
      description: >
        On what version(s) of Android have you experienced this bug?
        If you have experienced it on multiple versions you can write more than one version here.

        Please also include on what Android ROM you have seen this.
        For example if you run stock Android from your phone vendor, Graphene, LineageOS or similar.

  - type: input
    id: device-model
    attributes:
      label: Device model
      description: >
        On what device have you seen this bug? For example "Samsung S22" or "Pixel 7".
        If you have experienced it on multiple models, you can write more than one here.

  - type: input
    id: app-version
    attributes:
      label: Mullvad VPN app version
      description: >
        On what version(s) of the app have you experienced this bug?
        If you have experienced it on multiple versions you can write more than one version here.

        If you know that this has worked fine before, please include that.
        For example: "Broke in 2023.8. Worked fine on 2023.7".

  - type: textarea
    id: additional
    attributes:
      label: Additional Information
      description: Is there any additional information that you can provide?

  - type: markdown
    id: disclaimer
    attributes:
      value: |
        If we are not able to reproduce the issue, we will likely prioritize fixing other issues we can reproduce.
        Please do your best to fill out all of the sections above.
# .github/ISSUE_TEMPLATE/3-bug-ios.yml
---
name: 🐛📱 iOS app bug report
description: This form is to report bugs in the iOS (iPhone + iPad) Mullvad VPN app.
labels: ["bug", "ios"]
body:
  - type: markdown
    attributes:
      value: >
        Thank you for wanting to help us improve the Mullvad VPN app by reporting issues.

  - type: checkboxes
    id: it-is-a-bug
    attributes:
      label: Is it a bug?
      description: >
        If you ran into a problem with the app and don't know for sure it is an actual bug,
        please contact support instead of filing a bug report. Go to
        `Settings (cogwheel) -> Report a problem`.
        That way the support team gets redacted logs from your app and can help you out better.
        You can also just email them at support@mullvadvpn.net.
      options:
        - label: I know this is an issue with the app, and contacting Mullvad support is not relevant.
          required: true

  - type: checkboxes
    id: checked-other-issues
    attributes:
      label: I have checked if others have reported this already
      description: >
        Before you submit a bug report, please look through the
        [existing issues](https://github.com/mullvad/mullvadvpn-app/issues?q=is%3Aissue)
        to see if it has already been reported by others.
        If so, please comment in those threads instead of creating new ones.
      options:
        - label: I have checked the issue tracker to see if others have reported similar issues.
          required: true

  - type: textarea
    id: current-behavior
    attributes:
      label: Current Behavior
      description: What is the current behavior you experience that you think is not correct?
    validations:
      required: true

  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected Behavior
      description: What is the behavior that you expect to happen instead?
    validations:
      required: true

  - type: textarea
    id: reproduction
    attributes:
      label: Steps to Reproduce
      description: >
        Please provide clear and detailed steps on how to reproduce the issue you are reporting.
        If it is very hard to reproduce the issue, then there is no guarantee we can locate the bug and fix it.
      value: |
        1. ...
        2. ...
    validations:
      required: true

  - type: textarea
    id: logs
    attributes:
      label: Failure Logs
      description: >
        If relevant, please include logs from the app from the time around when the bug manifested itself.

        Go to settings (cogwheel) -> Report a problem -> View app logs to see the logs and
        copy them to here.
      render: shell

  - type: input
    id: os-version
    attributes:
      label: iOS version
      description: >
        On what version(s) of iOS have you experienced this bug?
        If you have experienced it on multiple versions you can write more than one version here.

  - type: input
    id: app-version
    attributes:
      label: Mullvad VPN app version
      description: >
        On what version(s) of the app have you experienced this bug?
        If you have experienced it on multiple versions you can write more than one version here.

        If you know that this has worked fine before, please include that.
        For example: "Broke in 2023.8. Worked fine on 2023.7".

  - type: textarea
    id: additional
    attributes:
      label: Additional Information
      description: Is there any additional information that you can provide?

  - type: markdown
    id: disclaimer
    attributes:
      value: |
        If we are not able to reproduce the issue, we will likely prioritize fixing other issues we can reproduce.
        Please do your best to fill out all of the sections above.
# .github/ISSUE_TEMPLATE/config.yml
---
blank_issues_enabled: false
contact_links:
  - name: Questions and issues not directly related to our VPN application
    about: >
      If your question is not related to our VPN application specifically,
      please contact our support team at support@mullvadvpn.net or visit our help center
      at https://mullvad.net/help instead of filing an issue.
      This includes questions/issues about our service, infrastructure, DNS and more.
    url: https://mullvad.net/help
# .github/workflows/android-app.yml
---
name: Android - Build and test
on:
  pull_request:
    paths:
      - '**'
      - '!.github/workflows/**'
      - '.github/workflows/android-app.yml'
      - '!.github/CODEOWNERS'
      - '!audits/**'
      - '!ci/**'
      - '!dist-assets/**'
      - '!docs/**'
      - '!graphics/**'
      - '!desktop/**'
      - '!ios/**'
      - '!test/**'
      - '!scripts/**'
      - '!windows/**'
      - '!**/**.md'
      - '!**/osv-scanner.toml'
  schedule:
    # At 00:00 UTC every day.
    # Notifications for scheduled workflows are sent to the user who last modified the cron
    # syntax in the workflow file. If you update this you must have notifications for
    # Github Actions enabled, so these don't go unnoticed.
    # https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs
    - cron: '0 0 * * *'
  workflow_dispatch:
    inputs:
      override_container_image:
        description: Override container image
        type: string
        required: false
      run_firebase_tests:
        description: Run firebase tests
        type: boolean
        required: false
      mockapi_test_repeat:
        description: Mockapi test repeat (self hosted)
        default: '1'
        required: true
        type: string
      e2e_test_repeat:
        description: e2e test repeat (self hosted)
        default: '0'
        required: true
        type: string
      e2e_tests_infra_flavor:
        description: >
          Infra environment to run e2e tests on (prod/stagemole).
          If set to 'stagemole' test-related artefacts will be uploaded.
        default: 'stagemole'
        required: true
        type: string
  # Build if main is updated to ensure up-to-date caches are available
  push:
    branches: [main]

permissions: {}

env:
  DEFAULT_E2E_REPEAT: 0
  SCHEDULE_E2E_REPEAT: 10

jobs:
  prepare:
    name: Prepare
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Use custom container image if specified
        if: ${{ github.event.inputs.override_container_image != '' }}
        run: echo "inner_container_image=${{ github.event.inputs.override_container_image }}"
          >> $GITHUB_ENV

      - name: Use default container image and resolve digest
        if: ${{ github.event.inputs.override_container_image == '' }}
        run: |
          echo "inner_container_image=$(cat ./building/android-container-image.txt)" >> $GITHUB_ENV

      # Preparing variables this way instead of using `env.*` due to:
      # https://github.com/orgs/community/discussions/26388
      - name: Prepare environment variables
        run: |
          echo "INNER_E2E_TEST_INFRA_FLAVOR=${{ github.event.inputs.e2e_tests_infra_flavor || 'stagemole' }}" \
            >> $GITHUB_ENV
          echo "INNER_E2E_TEST_REPEAT=${{ github.event.inputs.e2e_test_repeat ||
            (github.event_name == 'schedule' && env.SCHEDULE_E2E_REPEAT) ||
            env.DEFAULT_E2E_REPEAT }}" \
            >> $GITHUB_ENV
    outputs:
      container_image: ${{ env.inner_container_image }}
      E2E_TEST_INFRA_FLAVOR: ${{ env.INNER_E2E_TEST_INFRA_FLAVOR }}
      E2E_TEST_REPEAT: ${{ env.INNER_E2E_TEST_REPEAT }}

  build-native:
    name: Build native # Used by wait for jobs.
    needs: prepare
    runs-on: ubuntu-latest
    container:
      image: "${{ needs.prepare.outputs.container_image }}"
    strategy:
      matrix:
        include:
          - abi: "x86_64"
            task-variant: "X86_64"
          - abi: "x86"
            task-variant: "X86"
          - abi: "arm64-v8a"
            task-variant: "Arm64"
          - abi: "armeabi-v7a"
            task-variant: "Arm"
    steps:
      # Fix for HOME path overridden by GH runners when building in containers, see:
      # https://github.com/actions/runner/issues/863
      - name: Fix HOME path
        run: echo "HOME=/root" >> $GITHUB_ENV

      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Checkout wireguard-go-rs recursively
        run: |
          git config --global --add safe.directory '*'
          git submodule update --init wireguard-go-rs/libwg/wireguard-go

      - name: Calculate native lib cache hash
        id: native-lib-cache-hash
        shell: bash
        run: |
          git config --global --add safe.directory $(pwd)
          non_android_hash="$(git grep --cached -l '' -- ':!android/' \
            | xargs -d '\n' sha1sum \
            | sha1sum \
            | awk '{print $1}')"
          echo "native_lib_hash=$non_android_hash" >> $GITHUB_OUTPUT

      - name: Cache native libraries
        uses: actions/cache@v4
        id: cache-native-libs
        env:
          cache_hash: ${{ steps.native-lib-cache-hash.outputs.native_lib_hash }}
        with:
          path: ./android/app/build/rustJniLibs/android
          key: android-native-libs-${{ runner.os }}-${{ matrix.abi }}-${{ env.cache_hash }}

      - name: Build native libraries
        if: steps.cache-native-libs.outputs.cache-hit != 'true'
        uses: burrunan/gradle-cache-action@v1
        with:
          job-id: jdk17
          arguments: cargoBuild${{ matrix.task-variant }}
          gradle-version: wrapper
          build-root-directory: android
          execution-only-caches: false
          # Disable if logs are hard to follow.
          concurrent: true
          read-only: ${{ github.ref != 'refs/heads/main' }}

      - name: Upload native libs
        uses: actions/upload-artifact@v4
        with:
          name: native-libs-${{ matrix.abi }}
          path: android/app/build/rustJniLibs/android
          if-no-files-found: error
          retention-days: 7

  run-lint-and-tests:
    name: Run lint and test tasks
    needs: [prepare]
    runs-on: ubuntu-latest
    container:
      image: ${{ needs.prepare.outputs.container_image }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - gradle-task: |
              testDebugUnitTest -x :test:arch:testDebugUnitTest
              :app:testOssProdDebugUnitTest
              :service:testOssProdDebugUnitTest
              :lib:billing:testDebugUnitTest
              :lib:daemon-grpc:testDebugUnitTest
              :lib:shared:testDebugUnitTest
          - gradle-task: :test:arch:test --rerun-tasks
          - gradle-task: detekt
          - gradle-task: lint
    steps:
      # Fix for HOME path overridden by GH runners when building in containers, see:
      # https://github.com/actions/runner/issues/863
      - name: Fix HOME path
        run: echo "HOME=/root" >> $GITHUB_ENV

      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Run gradle task
        uses: burrunan/gradle-cache-action@v1
        with:
          job-id: jdk17
          arguments: ${{ matrix.gradle-task }}
          gradle-version: wrapper
          build-root-directory: android
          execution-only-caches: false
          # Disable if logs are hard to follow.
          concurrent: true
          read-only: ${{ github.ref != 'refs/heads/main' }}

  build-app:
    name: Build app
    needs: [prepare]
    runs-on: ubuntu-latest
    container:
      image: ${{ needs.prepare.outputs.container_image }}
    steps:
      # Fix for HOME path overridden by GH runners when building in containers, see:
      # https://github.com/actions/runner/issues/863
      - name: Fix HOME path
        run: echo "HOME=/root" >> $GITHUB_ENV

      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Prepare dummy debug keystore
        run: |
          echo "${{ secrets.ANDROID_DUMMY_DEBUG_KEYSTORE }}" | \
            base64 -d > /root/.android/debug.keystore

      - name: Compile app
        uses: burrunan/gradle-cache-action@v1
        with:
          job-id: jdk17
          arguments: |
            compileOssProdDebugKotlin
            -x cargoBuild
          gradle-version: wrapper
          build-root-directory: android
          execution-only-caches: false
          # Disable if logs are hard to follow.
          concurrent: true
          read-only: ${{ github.ref != 'refs/heads/main' }}

      - name: Wait for other jobs (native, relay list)
        uses: kachick/wait-other-jobs@v3.6.0
        with:
          wait-seconds-before-first-polling: '0'
          wait-list: |
            [
              {
                "workflowFile": "android-app.yml",
                "jobMatchMode": "prefix",
                "jobName": "Build native"
              }
            ]

      - uses: actions/download-artifact@v4
        with:
          pattern: native-libs-*
          path: android/app/build/rustJniLibs/android
          merge-multiple: true

      - name: Build app
        uses: burrunan/gradle-cache-action@v1
        with:
          job-id: jdk17
          arguments: |
            assembleOssProdDebug
            -x cargoBuild
          gradle-version: wrapper
          build-root-directory: android
          execution-only-caches: true
          # Disable if logs are hard to follow.
          concurrent: true
          read-only: ${{ github.ref != 'refs/heads/main' }}

      - name: Build stagemole app
        uses: burrunan/gradle-cache-action@v1
        if: >
          (needs.prepare.outputs.E2E_TEST_REPEAT != '0' &&
          needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR == 'stagemole') ||
          github.event.inputs.run_firebase_tests == 'true'
        with:
          job-id: jdk17
          arguments: |
            assemblePlayStagemoleDebug
            -x cargoBuild
          gradle-version: wrapper
          build-root-directory: android
          execution-only-caches: true
          # Disable if logs are hard to follow.
          concurrent: true
          read-only: ${{ github.ref != 'refs/heads/main' }}

      - name: Upload apks
        uses: actions/upload-artifact@v4
        with:
          name: apks
          path: android/app/build/outputs/apk
          if-no-files-found: error
          retention-days: 7

  build-instrumented-tests:
    name: Build instrumented test packages
    needs: [prepare]
    runs-on: ubuntu-latest
    container:
      image: ${{ needs.prepare.outputs.container_image }}
    strategy:
      matrix:
        include:
          - test-type: app
            assemble-command: assembleOssProdAndroidTest
            artifact-path: android/app/build/outputs/apk
          - test-type: mockapi
            assemble-command: :test:mockapi:assemble
            artifact-path: android/test/mockapi/build/outputs/apk
          - test-type: e2e
            assemble-command: :test:e2e:assemble
            artifact-path: android/test/e2e/build/outputs/apk
    steps:
      # Fix for HOME path overridden by GH runners when building in containers, see:
      # https://github.com/actions/runner/issues/863
      - name: Fix HOME path
        run: echo "HOME=/root" >> $GITHUB_ENV

      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Prepare dummy debug keystore
        run: |
          echo "${{ secrets.ANDROID_DUMMY_DEBUG_KEYSTORE }}" | \
            base64 -d > /root/.android/debug.keystore

      - name: Assemble instrumented test apk
        uses: burrunan/gradle-cache-action@v1
        with:
          job-id: jdk17
          arguments: |
            ${{ matrix.assemble-command }}
            -x cargoBuild
            -x mergeOssProdDebugJniLibFolders
            -x mergePlayStagemoleDebugJniLibFolders
          gradle-version: wrapper
          build-root-directory: android
          execution-only-caches: false
          # Disable if logs are hard to follow.
          concurrent: true
          read-only: ${{ github.ref != 'refs/heads/main' }}

      - name: Upload apks
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.test-type }}-instrumentation-apks
          path: ${{ matrix.artifact-path }}
          if-no-files-found: error
          retention-days: 7

  instrumented-tests:
    name: Run instrumented tests
    runs-on: [self-hosted, android-device]
    needs: [build-app, build-instrumented-tests]
    strategy:
      fail-fast: false
      matrix:
        include:
          - test-type: app
            path: android/app/build/outputs/apk
            test-repeat: 1
          - test-type: mockapi
            path: android/test/mockapi/build/outputs/apk
            test-repeat: ${{ github.event_name == 'schedule' && 100 || github.event.inputs.mockapi_test_repeat || 1 }}
    steps:
      - name: Prepare report dir
        if: ${{ matrix.test-repeat != 0 }}
        id: prepare-report-dir
        env:
          INNER_REPORT_DIR: /tmp/${{ matrix.test-type }}-${{ github.run_id }}-${{ github.run_attempt }}
        run: |
          mkdir -p $INNER_REPORT_DIR
          echo "report_dir=$INNER_REPORT_DIR" >> $GITHUB_OUTPUT

      - name: Checkout repository
        if: ${{ matrix.test-repeat != 0 }}
        uses: actions/checkout@v4

      - uses: actions/download-artifact@v4
        if: ${{ matrix.test-repeat != 0 }}
        with:
          name: apks
          path: android/app/build/outputs/apk

      - uses: actions/download-artifact@v4
        if: ${{ matrix.test-repeat != 0 }}
        with:
          name: ${{ matrix.test-type }}-instrumentation-apks
          path: ${{ matrix.path }}

      - name: Calculate timeout
        id: calculate-timeout
        run: echo "timeout=$(( ${{ matrix.test-repeat }} * 10 ))" >> $GITHUB_OUTPUT
        shell: bash

      - name: Run instrumented test script
        if: ${{ matrix.test-repeat != 0 }}
        timeout-minutes: ${{ fromJSON(steps.calculate-timeout.outputs.timeout) }}
        shell: bash -ieo pipefail {0}
        env:
          AUTO_FETCH_TEST_HELPER_APKS: true
          TEST_TYPE: ${{ matrix.test-type }}
          BILLING_FLAVOR: oss
          INFRA_FLAVOR: prod
          REPORT_DIR: ${{ steps.prepare-report-dir.outputs.report_dir }}
        run: ./android/scripts/run-instrumented-tests-repeat.sh ${{ matrix.test-repeat }}

      - name: Upload instrumentation report (${{ matrix.test-type }})
        uses: actions/upload-artifact@v4
        if: always() && matrix.test-repeat != 0
        with:
          name: ${{ matrix.test-type }}-instrumentation-report
          path: ${{ steps.prepare-report-dir.outputs.report_dir }}
          if-no-files-found: ignore
          retention-days: 7

  instrumented-e2e-tests:
    name: Run instrumented e2e tests
    runs-on: [self-hosted, android-device]
    needs: [prepare, build-app, build-instrumented-tests]
    if: needs.prepare.outputs.E2E_TEST_REPEAT != '0'
    steps:
      - name: Resolve unique runner test account secret name
        if: needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR == 'prod'
        run: |
          echo "RUNNER_SECRET_NAME=ANDROID_PROD_TEST_ACCOUNT_$(echo $RUNNER_NAME | tr '[:lower:]-' '[:upper:]_')" \
            >> $GITHUB_ENV

      - name: Resolve runner test account
        if: needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR == 'prod'
        run: echo "RESOLVED_TEST_ACCOUNT=${{ secrets[env.RUNNER_SECRET_NAME] }}" >> $GITHUB_ENV

      - name: Prepare report dir
        id: prepare-report-dir
        env:
          INNER_REPORT_DIR: /tmp/${{ github.run_id }}-${{ github.run_attempt }}
        run: |
          mkdir -p $INNER_REPORT_DIR
          echo "report_dir=$INNER_REPORT_DIR" >> $GITHUB_OUTPUT

      - name: Checkout repository
        uses: actions/checkout@v4

      - uses: actions/download-artifact@v4
        with:
          name: apks
          path: android/app/build/outputs/apk

      - uses: actions/download-artifact@v4
        with:
          name: e2e-instrumentation-apks
          path: android/test/e2e/build/outputs/apk

      - name: Calculate timeout
        id: calculate-timeout
        run: echo "timeout=$(( ${{ needs.prepare.outputs.E2E_TEST_REPEAT }} * 15 ))" >> $GITHUB_OUTPUT
        shell: bash

      - name: Run instrumented test script
        timeout-minutes: ${{ fromJSON(steps.calculate-timeout.outputs.timeout) }}
        shell: bash -ieo pipefail {0}
        env:
          AUTO_FETCH_TEST_HELPER_APKS: true
          TEST_TYPE: e2e
          BILLING_FLAVOR: ${{ needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR == 'prod' && 'oss' || 'play' }}
          INFRA_FLAVOR: "${{ needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR }}"
          PARTNER_AUTH: |-
            ${{ needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR == 'stagemole' && secrets.STAGEMOLE_PARTNER_AUTH || '' }}
          VALID_TEST_ACCOUNT_NUMBER: ${{ env.RESOLVED_TEST_ACCOUNT }}
          INVALID_TEST_ACCOUNT_NUMBER: '0000000000000000'
          ENABLE_HIGHLY_RATE_LIMITED_TESTS: ${{ github.event_name == 'schedule' && 'true' || 'false' }}
          ENABLE_ACCESS_TO_LOCAL_API_TESTS: true
          REPORT_DIR: ${{ steps.prepare-report-dir.outputs.report_dir }}
        run: ./android/scripts/run-instrumented-tests-repeat.sh ${{ needs.prepare.outputs.E2E_TEST_REPEAT }}

      - name: Upload e2e instrumentation report
        uses: actions/upload-artifact@v4
        if: >
          always() && needs.prepare.outputs.E2E_TEST_INFRA_FLAVOR == 'stagemole'
        with:
          name: e2e-instrumentation-report
          path: ${{ steps.prepare-report-dir.outputs.report_dir }}

  firebase-tests:
    name: Run firebase tests
    if: github.event.inputs.run_firebase_tests == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 30
    needs: [build-app, build-instrumented-tests]
    env:
      FIREBASE_ENVIRONMENT_VARIABLES: "\
        clearPackageData=true,\
        runnerBuilder=de.mannodermaus.junit5.AndroidJUnit5Builder,\
        invalid_test_account_number=0000000000000000,\
        ENABLE_HIGHLY_RATE_LIMITED_TESTS=${{ github.event_name == 'schedule' && 'true' || 'false' }},\
        partner_auth=${{ secrets.STAGEMOLE_PARTNER_AUTH }},\
        ENABLE_ACCESS_TO_LOCAL_API_TESTS=false"
    strategy:
      fail-fast: false
      matrix:
        include:
          - test-type: mockapi
            arg-spec-file: mockapi-oss.yml
            path: android/test/mockapi/build/outputs/apk
          - test-type: e2e
            arg-spec-file: e2e-play-stagemole.yml
            path: android/test/e2e/build/outputs/apk
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - uses: actions/download-artifact@v4
        with:
          name: apks
          path: android/app/build/outputs/apk

      - uses: actions/download-artifact@v4
        with:
          name: ${{ matrix.test-type }}-instrumentation-apks
          path: ${{ matrix.path }}

      - name: Run tests on Firebase Test Lab
        uses: asadmansr/Firebase-Test-Lab-Action@v1.0
        env:
          SERVICE_ACCOUNT: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
        with:
          arg-spec: |
            android/test/firebase/${{ matrix.arg-spec-file }}:default
            --environment-variables ${{ env.FIREBASE_ENVIRONMENT_VARIABLES }}
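The "Calculate native lib cache hash" step above builds a "hash of hashes": hash every relevant file, then hash the combined list, so changing any single input changes the cache key. A self-contained sketch of the same idea, using plain files in a temp directory instead of `git grep --cached -l` (which lists tracked files); note the real step also ignores everything under `android/`:

```shell
dir=$(mktemp -d)
printf 'a' > "$dir/one"
printf 'b' > "$dir/two"

# sha1sum each file, then sha1sum the combined listing; sorting keeps
# the key stable regardless of filesystem enumeration order.
key1=$(find "$dir" -type f | sort | xargs sha1sum | sha1sum | awk '{print $1}')

printf 'c' >> "$dir/two"   # change one input file
key2=$(find "$dir" -type f | sort | xargs sha1sum | sha1sum | awk '{print $1}')

if [ "$key1" != "$key2" ]; then
  echo "key changed"
fi
rm -rf "$dir"
```

In CI the file paths are stable between runs, so the key only moves when file contents (or the file set) change.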
# .github/workflows/android-audit.yml
---
name: Android - Audit dependencies
on:
  pull_request:
    paths:
      - .github/workflows/android-audit.yml
      - android/gradle/verification-metadata.xml
      - android/gradle/verification-metadata.keys.xml
      - android/gradle/verification-keyring.keys
      - android/scripts/lockfile
      # libs.versions.toml and *.kts are necessary to ensure that the verification-metadata.xml is up-to-date
      # with our dependency usage due to the dependency verification not working as expected when keys are
      # specified for dependencies (DROID-1425).
      - android/gradle/libs.versions.toml
      - android/**/*.kts
  schedule:
    # At 06:20 UTC every day.
    # Notifications for scheduled workflows are sent to the user who last modified the cron
    # syntax in the workflow file. If you update this you must have notifications for
    # Github Actions enabled, so these don't go unnoticed.
    # https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs
    - cron: '20 6 * * *'
  workflow_dispatch:
    inputs:
      override_container_image:
        description: Override container image
        type: string
        required: false

permissions: {}

jobs:
  prepare:
    name: Prepare
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Use custom container image if specified
        if: ${{ github.event.inputs.override_container_image != '' }}
        run: echo "inner_container_image=${{ github.event.inputs.override_container_image }}"
          >> $GITHUB_ENV

      - name: Use default container image and resolve digest
        if: ${{ github.event.inputs.override_container_image == '' }}
        run: echo "inner_container_image=$(cat ./building/android-container-image.txt)" >> $GITHUB_ENV

    outputs:
      container_image: ${{ env.inner_container_image }}

  ensure-clean-lockfile:
    needs: prepare
    name: Ensure clean lockfile
    runs-on: ubuntu-latest
    container:
      image: ${{ needs.prepare.outputs.container_image }}
    steps:
      # Fix for HOME path overridden by GH runners when building in containers, see:
      # https://github.com/actions/runner/issues/863
      - name: Fix HOME path
        run: echo "HOME=/root" >> $GITHUB_ENV

      - uses: actions/checkout@v4

      # Needed to run git diff later
      - name: Fix git dir
        run: git config --global --add safe.directory $(pwd)

      - name: Re-generate lockfile
        run: android/scripts/lockfile -u

      - name: Ensure no changes
        run: git diff --exit-code

  verify-lockfile-keys:
    needs: prepare
    name: Verify lockfile keys
    runs-on: ubuntu-latest
    container:
      image: ${{ needs.prepare.outputs.container_image }}
    steps:
      # Fix for HOME path overridden by GH runners when building in containers, see:
      # https://github.com/actions/runner/issues/863
      - name: Fix HOME path
        run: echo "HOME=/root" >> $GITHUB_ENV

      - uses: actions/checkout@v4

      - name: Verify lockfile keys metadata
        run: android/scripts/lockfile -v
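The "Ensure no changes" step above is a common CI gate: regenerate a derived file, then rely on `git diff --exit-code` exiting non-zero (and so failing the job) if the working tree no longer matches what was committed. A sketch with a throwaway repository and a made-up lockfile:

```shell
# Build a minimal repo with one committed file.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "ci@example.com"   # hypothetical identity
git config user.name "ci"
echo "v1" > lockfile
git add lockfile
git commit -qm "init"

# Simulate a regeneration that drifts from the committed version.
echo "v2" > lockfile

# --exit-code makes git diff return 1 when there are changes,
# which is exactly what fails the CI step.
if git diff --exit-code >/dev/null; then
  result="clean"
else
  result="dirty"
fi
echo "$result"   # prints "dirty": the regenerated file differs
```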
---\nname: Android - Check kotlin formatting\non:\n pull_request:\n paths:\n - .github/workflows/android-kotlin-format-check.yml\n - android/gradle/libs.versions.toml\n - android/**/*.kt\n - android/**/*.kts\n workflow_dispatch:\n inputs:\n override_container_image:\n description: Override container image\n type: string\n required: false\n\npermissions: {}\n\njobs:\n prepare:\n name: Prepare\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Use custom container image if specified\n if: ${{ github.event.inputs.override_container_image != '' }}\n run: echo "inner_container_image=${{ github.event.inputs.override_container_image }}"\n >> $GITHUB_ENV\n\n - name: Use default container image and resolve digest\n if: ${{ github.event.inputs.override_container_image == '' }}\n run: echo "inner_container_image=$(cat ./building/android-container-image.txt)" >> $GITHUB_ENV\n\n outputs:\n container_image: ${{ env.inner_container_image }}\n\n check-formatting:\n needs: prepare\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare.outputs.container_image }}\n steps:\n # Fix for HOME path overridden by GH runners when building in containers, see:\n # https://github.com/actions/runner/issues/863\n - name: Fix HOME path\n run: echo "HOME=/root" >> $GITHUB_ENV\n\n - uses: actions/checkout@v4\n\n - name: Run ktfmt check\n run: android/gradlew -p android ktfmtCheck :buildSrc:ktfmtCheck\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\android-kotlin-format-check.yml | android-kotlin-format-check.yml | YAML | 1,589 | 0.95 | 0.075472 | 0.044444 | python-kit | 823 | 2024-04-03T07:14:32.712649 | GPL-3.0 | false | cfa084f1e35b478269504053b0390d68 |
---\nname: Android - Verify F-Droid and reproducible builds\non:\n schedule:\n # At 06:20 UTC every Monday.\n # Notifications for scheduled workflows are sent to the user who last modified the cron\n # syntax in the workflow file. If you update this, you must have notifications for\n # GitHub Actions enabled, so these don't go unnoticed.\n # https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs\n - cron: '20 6 * * 1'\n workflow_dispatch:\n inputs:\n commit_hash:\n type: string\n required: false\n\npermissions: {}\n\njobs:\n set-up-env:\n name: Setup commit hash\n runs-on: ubuntu-latest\n steps:\n - id: hash\n name: Set commit hash or default to github.sha\n run: |\n # If the input has a value, it is filled by that value; otherwise, use github.sha\n if [ -n "${{ inputs.commit_hash }}" ]; then\n echo "commit_hash=${{ inputs.commit_hash }}" >> "$GITHUB_OUTPUT"\n else\n echo "commit_hash=${{ github.sha }}" >> "$GITHUB_OUTPUT"\n fi\n outputs:\n COMMIT_HASH: ${{ steps.hash.outputs.commit_hash }}\n\n build-fdroid-app:\n name: Build fdroid container\n runs-on: ubuntu-latest\n needs: set-up-env\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n with:\n ref: ${{ needs.set-up-env.outputs.COMMIT_HASH }}\n\n - name: Fetch submodules and tags\n run: |\n git submodule update --init wireguard-go-rs/libwg/wireguard-go\n git fetch --no-tags origin 'refs/tags/android/*:refs/tags/android/*'\n\n - name: Build app\n run: ./building/containerized-build.sh android --fdroid\n\n - name: Upload apks\n uses: actions/upload-artifact@v4\n with:\n name: container-app\n path: android/app/build/outputs/apk/ossProd/fdroid/app-oss-prod-fdroid-unsigned.apk\n if-no-files-found: error\n retention-days: 7\n\n build-fdroid-app-server:\n name: Build fdroid with fdroid server\n runs-on: ubuntu-latest\n needs: set-up-env\n steps:\n - name: Install fdroidserver\n run: |\n sudo apt-get -y update\n sudo apt-get -y install fdroidserver\n\n - 
name: Install gradle\n run: |\n sudo apt-get -y remove gradle\n mkdir /opt/gradle\n curl -sfLo /opt/gradle/gradle-8.13-bin.zip https\://services.gradle.org/distributions/gradle-8.13-bin.zip\n unzip -d /opt/gradle /opt/gradle/gradle-8.13-bin.zip\n\n # These are equivalent to the sudo section of the metadata file\n - name: Install dependencies\n run: sudo apt-get install -y build-essential protobuf-compiler libprotobuf-dev\n\n - name: Download metadata file\n uses: actions/checkout@v4\n with:\n path: app-repo\n\n - name: Init fdroid\n run: fdroid init\n\n - name: Prepare metadata\n run: |\n mkdir metadata\n cp app-repo/android/fdroid-build/metadata/net.mullvad.mullvadvpn.yml metadata/net.mullvad.mullvadvpn.yml\n sed -i 's/commit-hash/${{ needs.set-up-env.outputs.COMMIT_HASH }}/' metadata/net.mullvad.mullvadvpn.yml\n\n - name: Build app\n run: |\n export PATH=$PATH:/opt/gradle/gradle-8.13/bin\n fdroid build net.mullvad.mullvadvpn:1\n\n - name: Upload apks\n uses: actions/upload-artifact@v4\n with:\n name: fdroidserver-app\n path: |\n build/net\.mullvad\.mullvadvpn/android/app/build/outputs/apk/ossProd/fdroid/app-oss-prod-fdroid-unsigned.apk\n if-no-files-found: error\n retention-days: 7\n\n compare-builds:\n name: Check builds\n runs-on: ubuntu-latest\n needs: [build-fdroid-app, build-fdroid-app-server]\n steps:\n - name: Download container apk\n uses: actions/download-artifact@v4\n with:\n name: container-app\n path: container\n\n - name: Download server apk\n uses: actions/download-artifact@v4\n with:\n name: fdroidserver-app\n path: fdroidserver\n\n - name: Print checksums\n run: |\n echo "Container build checksum"\n md5sum container/app-oss-prod-fdroid-unsigned.apk\n echo "Fdroidserver build checksum"\n md5sum fdroidserver/app-oss-prod-fdroid-unsigned.apk\n\n - name: Compare files\n run: diff container/app-oss-prod-fdroid-unsigned.apk fdroidserver/app-oss-prod-fdroid-unsigned.apk\n | 
dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\android-reproducible-builds.yml | android-reproducible-builds.yml | YAML | 4,484 | 0.95 | 0.044118 | 0.059322 | vue-tools | 986 | 2023-07-19T01:54:25.200372 | Apache-2.0 | false | 45471ba8a5f70637ad361d4d6b206709 |
---\nname: Android - Static analysis\non:\n workflow_dispatch:\n pull_request:\n paths:\n - .github/workflows/android-static-analysis.yml\n - android/**\n schedule:\n # At 06:20 UTC every day.\n # Notifications for scheduled workflows are sent to the user who last modified the cron\n # syntax in the workflow file. If you update this, you must have notifications for\n # GitHub Actions enabled, so these don't go unnoticed.\n # https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs\n - cron: '20 6 * * *'\n\npermissions: {}\n\njobs:\n mobsfscan:\n name: Code scanning using mobsfscan\n runs-on: ubuntu-22.04\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Scan code\n uses: MobSF/mobsfscan@main\n with:\n args: '--type android --config android/config/config.mobsf --exit-warning android'\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\android-static-analysis.yml | android-static-analysis.yml | YAML | 928 | 0.8 | 0.1 | 0.185185 | awesome-app | 901 | 2024-03-21T11:55:30.816899 | BSD-3-Clause | false | 871e13edc65239551ae6216a43453465 |
name: "Android - Validate gradle wrapper"\n\non:\n workflow_dispatch:\n push:\n paths:\n - .github/workflows/android-validate-gradle-wrapper.yml\n - '**/gradle-wrapper.jar'\n\npermissions: {}\n\njobs:\n validate-gradle-wrapper:\n name: Validate gradle wrapper\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - uses: gradle/actions/wrapper-validation@16bf8bc8fe830fa669c3c9f914d3eb147c629707 #v4.0.1\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\android-validate-gradle-wrapper.yml | android-validate-gradle-wrapper.yml | YAML | 435 | 0.8 | 0 | 0 | node-utils | 31 | 2024-10-21T17:09:02.354755 | BSD-3-Clause | false | 05de573fbb506477e8fea79705177bb6 |
---\nname: Android - Check XML formatting\non:\n pull_request:\n paths:\n - .github/workflows/android-xml-format-check.yml\n - android/**/*.xml\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n prepare:\n name: Prepare\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Resolve container image\n run: |\n echo "inner_container_image=$(cat ./building/android-container-image.txt)" >> $GITHUB_ENV\n outputs:\n container_image: ${{ env.inner_container_image }}\n\n check-formatting:\n name: Lint XML using tidy\n needs: prepare\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare.outputs.container_image }}\n steps:\n # Fix for HOME path overridden by GH runners when building in containers, see:\n # https://github.com/actions/runner/issues/863\n - name: Fix HOME path\n run: echo "HOME=/root" >> $GITHUB_ENV\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Run tidy\n shell: bash\n run: |-\n git config --global --add safe.directory $(pwd)\n android/scripts/tidy.sh formatAndCheckDiff\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\android-xml-format-check.yml | android-xml-format-check.yml | YAML | 1,185 | 0.8 | 0.02381 | 0.051282 | vue-tools | 836 | 2024-09-27T02:22:12.249784 | GPL-3.0 | false | ac7f6100d15742d6f3a6851986ee1d3f |
---\n# We check for vendorability not because we at Mullvad usually vendor dependencies\n# ourselves, but because it can help some third-party packagers of this project.\n# It is also a sanity check on our dependency tree: vendoring will fail if a single\n# dependency has multiple sources: https://github.com/mullvad/mullvadvpn-app/issues/4848\nname: Rust - Vendor dependencies\non:\n pull_request:\n paths:\n - .github/workflows/cargo-vendor.yml\n - Cargo.lock\n - '**/Cargo.toml'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n cargo-vendor:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n with:\n submodules: true\n\n - name: Vendor Rust dependencies\n run: cargo vendor\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\cargo-vendor.yml | cargo-vendor.yml | YAML | 775 | 0.8 | 0.074074 | 0.166667 | vue-tools | 132 | 2024-04-20T22:51:05.097345 | BSD-3-Clause | false | 20d743f738afe9455c521f5f591042bb |
---\nname: Check changelog format\non:\n pull_request:\n paths:\n - .github/workflows/check-changelog.yml\n - 'CHANGELOG.md'\n - 'ios/CHANGELOG.md'\n - 'android/CHANGELOG.md'\n\npermissions: {}\n\nenv:\n LINE_LIMIT: 100\njobs:\n check-changelog:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n changelog: [CHANGELOG.md, ios/CHANGELOG.md, android/CHANGELOG.md]\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: No lines must exceed ${{ env.LINE_LIMIT }} characters\n run: |\n awk 'length($0) > '$LINE_LIMIT' { print NR ": Line exceeds '$LINE_LIMIT' chars: " $0; found=1 } \\n END { if(found) exit 1 }' ${{ matrix.changelog }}\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\check-changelog.yml | check-changelog.yml | YAML | 718 | 0.7 | 0.037037 | 0 | python-kit | 616 | 2024-05-07T17:41:49.231835 | GPL-3.0 | false | 645d234c330c09c564ffc0ed9c9a0a51 |
---\nname: Rust - Run Clippy to check lints\non:\n pull_request:\n paths:\n - .github/workflows/clippy.yml\n - clippy.toml\n - '**/*.rs'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n prepare-android:\n name: Prepare Android container\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Use custom container image if specified\n if: ${{ github.event.inputs.override_container_image != '' }}\n run: echo "inner_container_image_android=${{ github.event.inputs.override_container_image }}"\n >> $GITHUB_ENV\n\n - name: Use default container image and resolve digest\n if: ${{ github.event.inputs.override_container_image == '' }}\n run: echo "inner_container_image_android=$(cat ./building/android-container-image.txt)" >> $GITHUB_ENV\n\n outputs:\n container_image_android: ${{ env.inner_container_image_android }}\n\n clippy-check-desktop:\n name: Clippy linting, desktop\n strategy:\n matrix:\n os: [ubuntu-latest, windows-latest, macos-latest]\n runs-on: ${{ matrix.os }}\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Checkout submodules\n run: |\n git submodule update --init --depth=1 dist-assets/binaries\n git submodule update --init wireguard-go-rs/libwg/wireguard-go\n\n - name: Install build dependencies\n if: matrix.os == 'ubuntu-latest'\n run: |\n sudo apt-get update\n sudo apt-get install libdbus-1-dev\n\n - name: Install msbuild\n if: matrix.os == 'windows-latest'\n uses: microsoft/setup-msbuild@v1.0.2\n with:\n vs-version: 16\n\n - name: Install latest zig\n if: matrix.os == 'windows-latest'\n uses: mlugg/setup-zig@v1\n\n - name: Install Go\n uses: actions/setup-go@v5\n with:\n go-version: 1.21.3\n\n - name: Clippy check\n shell: bash\n env:\n RUSTFLAGS: --deny warnings\n run: |\n source env.sh\n time cargo clippy --workspace --locked --all-targets --no-default-features\n time 
cargo clippy --workspace --locked --all-targets --all-features\n\n clippy-check-android:\n name: Clippy linting, Android\n needs: prepare-android\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare-android.outputs.container_image_android }}\n\n steps:\n # Fix for HOME path overridden by GH runners when building in containers, see:\n # https://github.com/actions/runner/issues/863\n - name: Fix HOME path\n run: echo "HOME=/root" >> $GITHUB_ENV\n\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout wireguard-go submodule\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init wireguard-go-rs/libwg/wireguard-go\n\n - name: Clippy check\n env:\n RUSTFLAGS: --deny warnings\n run: |\n cargo clippy --locked --all-targets --target x86_64-linux-android --package mullvad-jni --no-default-features\n cargo clippy --locked --all-targets --target x86_64-linux-android --package mullvad-jni --all-features\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\clippy.yml | clippy.yml | YAML | 3,367 | 0.8 | 0.06422 | 0.021978 | python-kit | 477 | 2023-07-20T15:02:25.663020 | Apache-2.0 | false | 833b6d66d2b1f8645f9e8b2ce1839a6f |
---\nname: Daemon+CLI - Build and test\non:\n pull_request:\n paths:\n - '**'\n - '!**/**.md'\n - '!.github/workflows/**'\n - '.github/workflows/daemon.yml'\n - '!.github/CODEOWNERS'\n - '!android/**'\n - '!audits/**'\n - '!build.sh'\n - '!ci/**'\n - 'ci/check-rust.sh'\n - '!clippy.toml'\n - '!deny.toml'\n - '!docs/**'\n - '!graphics/**'\n - '!desktop/**'\n - '!ios/**'\n - '!scripts/**'\n - '!.*ignore'\n - '!prepare-release.sh'\n - '!rustfmt.toml'\n - '!.yamllint'\n - '!**/osv-scanner.toml'\n\n workflow_dispatch:\n inputs:\n override_container_image:\n description: Override container image\n type: string\n required: false\n\npermissions: {}\n\njobs:\n prepare-linux:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Use custom container image if specified\n if: ${{ github.event.inputs.override_container_image != '' }}\n run: echo "inner_container_image=${{ github.event.inputs.override_container_image }}"\n >> $GITHUB_ENV\n\n - name: Use default container image and resolve digest\n if: ${{ github.event.inputs.override_container_image == '' }}\n run: echo "inner_container_image=$(cat ./building/linux-container-image.txt)" >> $GITHUB_ENV\n\n outputs:\n container_image: ${{ env.inner_container_image }}\n\n build-linux:\n needs: prepare-linux\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare-linux.outputs.container_image }}\n\n strategy:\n matrix:\n rust: [stable, beta, nightly]\n continue-on-error: true\n steps:\n # Fix for HOME path overridden by GH runners when building in containers, see:\n # https://github.com/actions/runner/issues/863\n - name: Fix HOME path\n run: echo "HOME=/root" >> $GITHUB_ENV\n\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --depth=1 dist-assets/binaries\n git submodule update --init wireguard-go-rs/libwg/wireguard-go\n\n # The container image 
already has rustup and the pinned version of Rust\n - name: Install Rust toolchain\n # When running this job for "stable" test against our pinned rust version\n # instead of the stable channel.\n # TODO: Improve this so both "stable" and the pinned version are tested if\n # they differ.\n if: ${{ matrix.rust != 'stable' }}\n run: rustup override set ${{ matrix.rust }}\n\n - name: Build and test crates\n run: ./ci/check-rust.sh\n\n build-macos:\n runs-on: macos-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout wireguard-go submodule\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init wireguard-go-rs/libwg/wireguard-go\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Install Go\n uses: actions/setup-go@v3\n with:\n go-version: 1.21.3\n\n - name: Build and test crates\n run: ./ci/check-rust.sh\n\n build-windows:\n strategy:\n matrix:\n config:\n - os: windows-latest\n arch: x64\n - os: [self-hosted, ARM64, Windows]\n arch: arm64\n runs-on: ${{ matrix.config.os }}\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git submodule update --init --depth=1\n git submodule update --init wireguard-go-rs/libwg/wireguard-go\n\n - name: Install Protoc\n # NOTE: ARM runner already has protoc\n if: ${{ matrix.config.arch != 'arm64' }}\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Calculate Windows libraries cache hash\n id: windows-modules-hash\n shell: bash\n run: |\n hash="$(git grep --recurse-submodules --cached -l '' -- './windows/' \\n | grep -v '\.exe$\|\.md$' \\n | xargs sha1sum \\n | sha1sum \\n | cut -d" " -f1)"\n echo "hash=$hash" >> "$GITHUB_OUTPUT"\n\n - name: Cache Windows libraries\n uses: actions/cache@v4\n id: cache-windows-modules\n with:\n path: |\n ./windows/*/bin/${{ matrix.config.arch }}-*/*.dll\n 
./windows/*/bin/${{ matrix.config.arch }}-*/*.lib\n !./windows/*/bin/${{ matrix.config.arch }}-*/libcommon.lib\n !./windows/*/bin/${{ matrix.config.arch }}-*/libshared.lib\n !./windows/*/bin/${{ matrix.config.arch }}-*/libwfp.lib\n key: windows-modules-${{ steps.windows-modules-hash.outputs.hash }}\n\n # The x64 toolchain is needed to build talpid-openvpn-plugin\n # TODO: Remove once fixed\n - name: Install Rust x64 target\n if: ${{ matrix.config.arch == 'arm64' }}\n run: rustup target add x86_64-pc-windows-msvc\n\n - name: Install Rust\n run: rustup target add i686-pc-windows-msvc\n\n - name: Install msbuild\n uses: microsoft/setup-msbuild@v1.0.2\n with:\n vs-version: 16\n\n - name: Install latest zig\n # NOTE: This action doesn't support ARM64 for the time being (2025-01-27)\n if: ${{ matrix.config.arch == 'x64' }}\n uses: mlugg/setup-zig@v1\n\n - name: Install Go\n uses: actions/setup-go@v5\n with:\n go-version: 1.21.3\n\n - name: Build Windows modules\n if: steps.cache-windows-modules.outputs.cache-hit != 'true'\n shell: bash\n run: ./build-windows-modules.sh\n\n - name: Build and test crates\n shell: bash\n env:\n # On Windows, the checkout is on the D drive, which is very small.\n # Moving the target directory to the C drive ensures that the runner\n # doesn't run out of space on the D drive.\n CARGO_TARGET_DIR: "C:/cargo-target"\n run: ./ci/check-rust.sh\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\daemon.yml | daemon.yml | YAML | 6,219 | 0.95 | 0.059406 | 0.080925 | python-kit | 411 | 2023-12-10T16:03:32.272741 | MIT | false | ce9aaceb3acd7dc8e50f3bda77008e59 |
# Workflow for triggering `test-manager` on select platforms.\n#\n# This is a rather complex workflow. The complexity mainly stems from these sources:\n# * figuring out which platforms to test on which runners (prepare-matrices)\n# * figuring out if the app and e2e-tests should be built on the runner (build-{linux,windows,macos})\n# or if we should download the artifacts from https://releases.mullvad.net/desktop/\n# * compiling the output from the different runners and executed platforms.\n---\nname: Desktop - End-to-end tests\non:\n schedule:\n - cron: '0 0 * * *'\n workflow_dispatch:\n inputs:\n oses:\n description: "Space-delimited list of targets to run tests on, e.g. `debian12 ubuntu2004`. \\n Available images:\n\n `debian11 debian12 ubuntu2004 ubuntu2204 ubuntu2404 ubuntu2410 ubuntu2504 \\n fedora39 fedora40 fedora41 fedora42 windows10 windows11 \\n macos12 macos13 macos14 macos15`.\n\n Default images:\n\n `debian12 ubuntu2004 ubuntu2204 ubuntu2404 ubuntu2410 ubuntu2504 \\n fedora39 fedora40 fedora41 fedora42 windows10 windows11 \\n macos13 macos14 macos15`."\n default: ''\n required: false\n type: string\n tests:\n description: "Tests to run (defaults to all if empty)"\n default: ''\n required: false\n type: string\n\npermissions: {}\n\njobs:\n prepare-matrices:\n name: Prepare virtual machines\n runs-on: ubuntu-latest\n steps:\n - name: Generate matrix for Linux builds\n shell: bash\n run: |\n # A list of VMs to run the tests on. These refer to the names defined\n # in $XDG_CONFIG_DIR/mullvad-test/config.json on the runner.\n all='["debian11","debian12","ubuntu2004","ubuntu2204","ubuntu2404","ubuntu2410","ubuntu2504","fedora39","fedora40","fedora41","fedora42"]'\n default='["debian12","ubuntu2004","ubuntu2204","ubuntu2404","ubuntu2410","ubuntu2504","fedora40","fedora41","fedora42"]'\n oses="${{ github.event.inputs.oses }}"\n echo "OSES: $oses"\n if [[ -z "$oses" || "$oses" == "null" ]]; then\n selected="$default"\n else\n oses=$(printf '%s\n' $oses | jq . 
-R | jq . -s)\n selected=$(jq -cn --argjson oses "$oses" --argjson all "$all" '$all - ($all - $oses)')\n fi\n echo "Selected targets: $selected"\n echo "linux_matrix=$selected" >> $GITHUB_ENV\n - name: Generate matrix for Windows builds\n shell: bash\n run: |\n all='["windows10","windows11"]'\n default='["windows10","windows11"]'\n oses="${{ github.event.inputs.oses }}"\n if [[ -z "$oses" || "$oses" == "null" ]]; then\n selected="$default"\n else\n oses=$(printf '%s\n' $oses | jq . -R | jq . -s)\n selected=$(jq -cn --argjson oses "$oses" --argjson all "$all" '$all - ($all - $oses)')\n fi\n echo "Selected targets: $selected"\n echo "windows_matrix=$selected" >> $GITHUB_ENV\n - name: Generate matrix for macOS builds\n shell: bash\n run: |\n all='["macos12","macos13","macos14","macos15"]'\n default='["macos13","macos14","macos15"]'\n oses="${{ github.event.inputs.oses }}"\n if [[ -z "$oses" || "$oses" == "null" ]]; then\n selected="$default"\n else\n oses=$(printf '%s\n' $oses | jq . -R | jq . 
-s)\n selected=$(jq -cn --argjson oses "$oses" --argjson all "$all" '$all - ($all - $oses)')\n fi\n echo "Selected targets: $selected"\n echo "macos_matrix=$selected" >> $GITHUB_ENV\n outputs:\n linux_matrix: ${{ env.linux_matrix }}\n windows_matrix: ${{ env.windows_matrix }}\n macos_matrix: ${{ env.macos_matrix }}\n\n prepare-linux:\n name: Prepare Linux build container\n needs: prepare-matrices\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Use custom container image if specified\n if: ${{ github.event.inputs.override_container_image != '' }}\n run: echo "inner_container_image=${{ github.event.inputs.override_container_image }}"\n >> $GITHUB_ENV\n - name: Use default container image and resolve digest\n if: ${{ github.event.inputs.override_container_image == '' }}\n run: |\n echo "inner_container_image=$(cat ./building/linux-container-image.txt)" >> $GITHUB_ENV\n outputs:\n container_image: ${{ env.inner_container_image }}\n\n build-linux-app:\n name: Build Linux App\n needs: prepare-linux\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare-linux.outputs.container_image }}\n if: |\n !cancelled() &&\n needs.prepare-matrices.outputs.linux_matrix != '[]' &&\n needs.prepare-matrices.outputs.linux_matrix != ''\n continue-on-error: true\n steps:\n # Fix for HOME path overridden by GH runners when building in containers, see:\n # https://github.com/actions/runner/issues/863\n - name: Fix HOME path\n run: echo "HOME=/root" >> $GITHUB_ENV\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --depth=1\n - uses: actions/cache@v4\n id: cache-app-cargo-artifacts\n with:\n path: |\n ~/.cargo/bin/\n ~/.cargo/registry/index/\n ~/.cargo/registry/cache/\n ~/.cargo/git/db/\n target/\n key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}\n\n - name: Build app\n run: |\n export 
CARGO_TARGET_DIR=target/\n ./build.sh --optimize\n - name: Build test executable\n run: ./desktop/packages/mullvad-vpn/scripts/build-test-executable.sh\n - name: Upload app\n uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: linux-build\n path: |\n ./dist/*.rpm\n ./dist/*.deb\n ./dist/app-e2e-*\n\n get-app-version:\n name: Get app version\n needs: prepare-linux\n runs-on: ubuntu-latest\n outputs:\n mullvad-version: ${{ steps.cargo-run.outputs.mullvad-version }}\n container:\n image: ${{ needs.prepare-linux.outputs.container_image }}\n if: |\n !cancelled()\n steps:\n # Fix for HOME path overridden by GH runners when building in containers, see:\n # https://github.com/actions/runner/issues/863\n - name: Fix HOME path\n run: echo "HOME=/root" >> $GITHUB_ENV\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Run mullvad-version\n id: cargo-run\n run: |\n # `mullvad-version` uses tags to compute the dev-suffix, so we must fetch them\n git config --global --add safe.directory '*'\n git fetch --tags --prune-tags --force > /dev/null\n version=$(cargo run --package mullvad-version -q)\n echo "Mullvad VPN app version: '$version'"\n echo "mullvad-version=$version" >> "$GITHUB_OUTPUT"\n shell: bash\n\n # This step should always be run because the `test-manager` binary is used to compile the\n # result matrix at the end! 
If that functionality is ever split out from the `test-manager`,\n # this step may be conditionally run.\n build-test-manager-linux:\n name: Build Test Manager\n needs: prepare-linux\n # Note: libssl-dev is installed on the test server, so build test-manager there for the sake of simplicity\n runs-on: [self-hosted, desktop-test, Linux] # app-test-linux\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - uses: actions-rust-lang/setup-rust-toolchain@v1\n - name: Build test-manager\n run: ./test/scripts/container-run.sh cargo build --package test-manager --release\n - uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: linux-test-manager-build\n path: |\n ./test/target/release/test-manager\n if-no-files-found: error\n - name: Clean up Cargo artifacts\n run: |\n cargo clean\n\n build-test-runner-binaries-linux:\n name: Build Test Runner Binaries\n needs: prepare-linux\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare-linux.outputs.container_image }}\n if: |\n !cancelled() &&\n needs.prepare-matrices.outputs.linux_matrix != '[]' &&\n needs.prepare-matrices.outputs.linux_matrix != ''\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Build binaries\n run: |\n # Move test runner binaries to a known location. 
This is needed in the coming `upload-artifact` step.\n mkdir bin\n test/scripts/build/test-runner.sh linux\n mv -t ./bin/ \\n "$CARGO_TARGET_DIR/x86_64-unknown-linux-gnu/release/test-runner" \\n "$CARGO_TARGET_DIR/x86_64-unknown-linux-gnu/release/connection-checker"\n - uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: linux-test-runner-binaries\n path: bin/*\n if-no-files-found: error\n\n e2e-test-linux:\n name: Linux end-to-end tests\n # yamllint disable-line rule:line-length\n needs:\n [\n prepare-matrices,\n build-linux-app,\n get-app-version,\n build-test-manager-linux,\n build-test-runner-binaries-linux,\n ]\n if: |\n !cancelled() &&\n needs.get-app-version.result == 'success' &&\n needs.prepare-matrices.outputs.linux_matrix != '[]' &&\n needs.prepare-matrices.outputs.linux_matrix != ''\n runs-on: [self-hosted, desktop-test, Linux] # app-test-linux\n timeout-minutes: 240\n strategy:\n fail-fast: false\n matrix:\n os: ${{ fromJSON(needs.prepare-matrices.outputs.linux_matrix) }}\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Create binaries directory & add to PATH\n shell: bash -ieo pipefail {0}\n run: |\n # Put all binaries in a known folder: test-runner, connection-checker, mullvad-version\n mkdir "${{ github.workspace }}/bin"\n echo "${{ github.workspace }}/bin/" >> "$GITHUB_PATH"\n - name: Download Test Manager\n uses: actions/download-artifact@v4\n if: ${{ needs.build-test-manager-linux.result == 'success' }}\n with:\n name: linux-test-manager-build\n path: ${{ github.workspace }}/bin\n - name: Download Test Runner binaries\n uses: actions/download-artifact@v4\n if: ${{ needs.build-test-runner-binaries-linux.result == 'success' }}\n with:\n name: linux-test-runner-binaries\n path: ${{ github.workspace }}/bin\n - name: chmod binaries\n run: |\n chmod +x ${{ github.workspace }}/bin/*\n shell: bash\n - name: Download App\n uses: actions/download-artifact@v4\n if: ${{ needs.build-linux-app.result == 'success' 
}}\n with:\n name: linux-build\n path: ~/.cache/mullvad-test/packages\n - name: Run end-to-end tests\n shell: bash -ieo pipefail {0}\n env:\n CURRENT_VERSION: ${{needs.get-app-version.outputs.mullvad-version}}\n run: |\n # A directory with all the binaries is required to run test-manager.\n # The test script which runs in CI expects this folder to be available as the `TEST_DIST_DIR` variable.\n export TEST_DIST_DIR="${{ github.workspace }}/bin/"\n export TEST_FILTERS="${{ github.event.inputs.tests }}"\n ls -la "$TEST_DIST_DIR"\n ./test/scripts/run/ci.sh ${{ matrix.os }}\n - name: Upload test report\n uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: ${{ matrix.os }}_report\n path: ./test/.ci-logs/${{ matrix.os }}_report\n\n build-windows:\n name: Build Windows\n needs: prepare-matrices\n if: |\n needs.prepare-matrices.outputs.windows_matrix != '[]' &&\n !startsWith(github.ref, 'refs/tags/') && github.ref != 'refs/heads/main'\n runs-on: windows-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --depth=1\n - name: Install Protoc\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n - uses: actions/setup-node@v4\n with:\n node-version-file: desktop/package.json\n cache: 'npm'\n cache-dependency-path: desktop/package-lock.json\n - name: Install Rust\n run: rustup target add i686-pc-windows-msvc\n - name: Install latest zig\n uses: mlugg/setup-zig@v1\n with:\n version: 0.14.0-dev.3036+7ac110ac2\n - name: Install msbuild\n uses: microsoft/setup-msbuild@v1.0.2\n with:\n vs-version: 16\n - name: Build app\n shell: bash\n run: |\n ./build.sh\n # FIXME: Strip architecture-specific suffix. The remaining steps assume that the Windows installer has no\n # arch-suffix. This should probably be addressed when we add a Windows arm runner. 
Or maybe it will just keep\n # on working ¯\_(ツ)_/¯\n pushd dist\n original_file=$(find *.exe)\n new_file=$(echo $original_file | perl -pe "s/^(MullvadVPN-.*?)(_x64|_arm64)?(\.exe)$/\1\3/p")\n mv "$original_file" "$new_file"\n popd\n - name: Build test executable\n shell: bash\n run: ./desktop/packages/mullvad-vpn/scripts/build-test-executable.sh\n - uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: windows-build\n path: .\dist\*.exe\n\n e2e-test-windows:\n needs:\n [\n prepare-matrices,\n build-windows,\n get-app-version,\n build-test-manager-linux,\n ]\n if: |\n !cancelled() &&\n needs.get-app-version.result == 'success' &&\n needs.prepare-matrices.outputs.windows_matrix != '[]' &&\n needs.prepare-matrices.outputs.windows_matrix != ''\n name: Windows end-to-end tests\n runs-on: [self-hosted, desktop-test, Linux] # app-test-linux\n timeout-minutes: 240\n strategy:\n fail-fast: false\n matrix:\n os: ${{ fromJSON(needs.prepare-matrices.outputs.windows_matrix) }}\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Create binaries directory & add to PATH\n shell: bash -ieo pipefail {0}\n run: |\n mkdir "${{ github.workspace }}/bin"\n echo "${{ github.workspace }}/bin/" >> "$GITHUB_PATH"\n - name: Download Test Manager\n uses: actions/download-artifact@v4\n if: ${{ needs.build-test-manager-linux.result == 'success' }}\n with:\n name: linux-test-manager-build\n path: ${{ github.workspace }}/bin\n - name: chmod binaries\n run: |\n chmod +x ${{ github.workspace }}/bin/*\n - name: Download App\n uses: actions/download-artifact@v4\n if: ${{ needs.build-windows.result == 'success' }}\n with:\n name: windows-build\n path: ~/.cache/mullvad-test/packages\n - name: Run end-to-end tests\n shell: bash -ieo pipefail {0}\n env:\n CURRENT_VERSION: ${{needs.get-app-version.outputs.mullvad-version}}\n run: |\n # A directory with all the binaries is required to run test-manager.\n # The test script which runs in CI expects this folder to 
be available as the `TEST_DIST_DIR` variable.\n export TEST_DIST_DIR="${{ github.workspace }}/bin/"\n export TEST_FILTERS="${{ github.event.inputs.tests }}"\n ./test/scripts/run/ci.sh ${{ matrix.os }}\n - name: Upload test report\n uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: ${{ matrix.os }}_report\n path: ./test/.ci-logs/${{ matrix.os }}_report\n\n build-macos:\n name: Build macOS\n needs: prepare-matrices\n if: |\n needs.prepare-matrices.outputs.macos_matrix != '[]' &&\n !startsWith(github.ref, 'refs/tags/') && github.ref != 'refs/heads/main'\n runs-on: [self-hosted, desktop-test, macOS] # app-test-macos-arm\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --depth=1\n - name: Install Go\n uses: actions/setup-go@v3\n with:\n go-version: 1.21.3\n - name: Install Protoc\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n - uses: actions/setup-node@v4\n with:\n node-version-file: desktop/package.json\n cache: 'npm'\n cache-dependency-path: desktop/package-lock.json\n - name: Build app\n run: ./build.sh\n - name: Build test executable\n run: ./desktop/packages/mullvad-vpn/scripts/build-test-executable.sh\n - uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: macos-build\n path: |\n ./dist/*.pkg\n ./dist/app-e2e-*\n\n e2e-test-macos:\n needs: [prepare-matrices, build-macos, get-app-version]\n if: |\n !cancelled() &&\n needs.get-app-version.result == 'success' &&\n needs.prepare-matrices.outputs.macos_matrix != '[]' &&\n needs.prepare-matrices.outputs.macos_matrix != ''\n name: macOS end-to-end tests\n runs-on: [self-hosted, desktop-test, macOS] # app-test-macos-arm\n timeout-minutes: 240\n strategy:\n fail-fast: false\n matrix:\n os: ${{ fromJSON(needs.prepare-matrices.outputs.macos_matrix) }}\n steps:\n - name: Download App\n uses: 
actions/download-artifact@v4\n if: ${{ needs.build-macos.result == 'success' }}\n with:\n name: macos-build\n path: ~/Library/Caches/mullvad-test/packages\n - name: Checkout repository\n uses: actions/checkout@v4\n - name: Run end-to-end tests\n shell: bash -ieo pipefail {0}\n env:\n CURRENT_VERSION: ${{needs.get-app-version.outputs.mullvad-version}}\n run: |\n export TEST_FILTERS="${{ github.event.inputs.tests }}"\n ./test/scripts/run/ci.sh ${{ matrix.os }}\n - name: Upload test report\n uses: actions/upload-artifact@v4\n if: '!cancelled()'\n with:\n name: ${{ matrix.os }}_report\n path: ./test/.ci-logs/${{ matrix.os }}_report\n\n compile-test-matrix:\n name: Result matrix\n needs: [e2e-test-linux, e2e-test-windows, e2e-test-macos]\n if: '!cancelled()'\n runs-on: ubuntu-latest\n container:\n image: ${{ needs.prepare-linux.outputs.container_image }}\n steps:\n - name: Download test report\n uses: actions/download-artifact@v4\n with:\n pattern: '*_report'\n merge-multiple: true\n - name: Create binaries directory\n shell: bash -ieo pipefail {0}\n run: |\n mkdir "${{ github.workspace }}/bin"\n - name: Download report compiler\n uses: actions/download-artifact@v4\n with:\n name: linux-test-manager-build\n path: ${{ github.workspace }}/bin\n - name: chmod binaries\n run: |\n chmod +x ${{ github.workspace }}/bin/*\n shell: bash\n - name: Generate test result matrix\n shell: bash -ieo pipefail {0}\n run: |\n ${{ github.workspace }}/bin/test-manager \\n format-test-reports ${{ github.workspace }}/*_report \\n | tee summary.html >> $GITHUB_STEP_SUMMARY\n - uses: actions/upload-artifact@v4\n with:\n name: summary.html\n path: summary.html\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\desktop-e2e.yml | desktop-e2e.yml | YAML | 20,000 | 0.95 | 0.076636 | 0.053743 | node-utils | 605 | 2024-09-16T20:35:32.233561 | BSD-3-Clause | false | 3d4007c47e145b94e50b5b11385dc821 |
---\nname: Installer downloader - Size test\non:\n pull_request:\n paths:\n - '**'\n - '!**/**.md'\n - '!.github/workflows/**'\n - '.github/workflows/downloader.yml'\n - '!.github/CODEOWNERS'\n - '!android/**'\n - '!audits/**'\n - '!build.sh'\n - '!ci/**'\n - '!clippy.toml'\n - '!deny.toml'\n - '!rustfmt.toml'\n - '!.yamllint'\n - '!docs/**'\n - '!graphics/**'\n - '!desktop/**'\n - '!ios/**'\n - '!scripts/**'\n - '!.*ignore'\n - '!prepare-release.sh'\n - '!**/osv-scanner.toml'\n\npermissions: {}\n\njobs:\n build-windows:\n strategy:\n matrix:\n config:\n - os: windows-latest\n arch: x64\n runs-on: ${{ matrix.config.os }}\n env:\n # If the file is larger than this, a regression has probably been introduced.\n # You should think twice before increasing this limit.\n MAX_BINARY_SIZE: 2621440\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Build\n shell: bash\n env:\n # On Windows, the checkout is on the D drive, which is very small.\n # Moving the target directory to the C drive ensures that the runner\n # doesn't run out of space on the D drive.\n CARGO_TARGET_DIR: "C:/cargo-target"\n run: ./installer-downloader/build.sh\n\n - name: Check file size\n uses: ./.github/actions/check-file-size\n with:\n artifact: "./dist/Install Mullvad VPN.exe"\n max_size: ${{ env.MAX_BINARY_SIZE }}\n\n build-macos:\n runs-on: macos-latest\n env:\n # If the file is larger than this, a regression has probably been introduced.\n # You should think twice before increasing this limit.\n MAX_BINARY_SIZE: 3145728\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Install Rust\n run: rustup target add x86_64-apple-darwin\n\n - name: Build\n run: ./installer-downloader/build.sh\n\n - name: Check file size\n uses: ./.github/actions/check-file-size\n with:\n artifact: "./dist/Install Mullvad VPN.dmg"\n max_size: ${{ env.MAX_BINARY_SIZE }}\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\downloader.yml | downloader.yml | 
YAML | 2,196 | 0.8 | 0 | 0.09589 | awesome-app | 665 | 2024-08-01T19:14:26.763162 | GPL-3.0 | false | 74b87114be1c53b5e03cbfdc51775ac1 |
---\nname: Desktop frontend\non:\n pull_request:\n paths:\n - .github/workflows/frontend.yml\n - desktop/**\n - mullvad-management-interface/proto/**\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-frontend:\n strategy:\n matrix:\n os: [ubuntu-latest, windows-latest]\n\n runs-on: ${{ matrix.os }}\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout wireguard-go submodule\n run: git submodule update --init --depth=1 wireguard-go-rs\n\n - name: Setup node\n uses: actions/setup-node@v4\n with:\n node-version-file: desktop/package.json\n cache: 'npm'\n cache-dependency-path: desktop/package-lock.json\n\n - name: Install dependencies\n working-directory: desktop\n shell: bash\n run: npm ci\n\n - name: Check formatting\n if: matrix.os == 'ubuntu-latest'\n working-directory: desktop\n shell: bash\n run: npm run lint\n\n - name: Build\n working-directory: desktop\n shell: bash\n run: npm run build -w mullvad-vpn\n\n - name: Build test\n working-directory: desktop\n shell: bash\n run: npm run build:test -w mullvad-vpn\n\n - name: Run headless test Linux\n if: runner.os == 'Linux'\n working-directory: desktop\n run: xvfb-run -a npm test\n\n - name: Run headless test Windows\n if: runner.os != 'Linux'\n working-directory: desktop\n shell: bash\n run: npm test\n\n - name: Run Playwright tests on Linux\n if: runner.os == 'Linux'\n working-directory: desktop\n # The sandbox is disabled as a workaround for lacking userns permissions, which are required\n # since Ubuntu 24.04.\n run: NO_SANDBOX=1 npm run e2e:no-build -w mullvad-vpn\n\n - name: Run Playwright tests on Windows\n if: runner.os != 'Linux'\n working-directory: desktop\n shell: bash\n run: npm run e2e:no-build -w mullvad-vpn\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\frontend.yml | frontend.yml | YAML | 2,021 | 0.95 | 0.077922 | 0.03125 | vue-tools | 586 | 2024-07-16T06:37:30.747673 | Apache-2.0 | false | e6dc33aa7643c70f0fc8ae199e2983d9 |
---\nname: Git - Check commit message style\non:\n push:\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-commit-message-style:\n name: Check commit message style\n runs-on: ubuntu-latest\n steps:\n # Make sure there are no whitespaces other than space, tab and newline in a commit message.\n - name: Check for unicode whitespaces\n uses: gsactions/commit-message-checker@v2\n with:\n # Pattern matches strings not containing weird unicode whitespace/separator characters\n # \P{Z} = All non-whitespace characters (the u-flag is needed to enable \P{Z})\n # [ \t\n] = Allowed whitespace characters\n pattern: '^(\P{Z}|[ \t\n])+$'\n flags: 'u'\n error: 'Detected unicode whitespace character in commit message.'\n checkAllCommitMessages: 'true' # optional: this checks all commits associated with a pull request\n accessToken: ${{ secrets.GITHUB_TOKEN }} # only required if checkAllCommitMessages is true\n\n # Git commit messages should follow our guidelines. This action enforces that.\n # Guidelines: https://github.com/mullvad/coding-guidelines/blob/main/README.md#git\n - name: Check against guidelines\n uses: mristin/opinionated-commit-message@f3b9cec249cabffbae7cd564542fd302cc576827 #v3.1.1\n with:\n # Commit messages are allowed to be subject only, no body\n allow-one-liners: 'true'\n # This action defaults to 50 char subjects, but 72 is fine.\n max-subject-line-length: '72'\n # The action's wordlist is a bit short. Add more accepted verbs\n additional-verbs: 'tidy, wrap, obfuscate, bias, prohibit, forbid, revert, slim'\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\git-commit-message-style.yml | git-commit-message-style.yml | YAML | 1,699 | 0.95 | 0.054054 | 0.264706 | python-kit | 854 | 2023-11-20T17:35:36.333991 | Apache-2.0 | false | d58b57c3c44d2e07926f5fa6de26da58 |
---\nname: iOS - Build and test Rust FFI (mullvad-ios and mullvad-api)\non:\n pull_request:\n paths:\n - .github/workflows/ios-rust-ffi.yml\n - clippy.toml\n - '**/*.rs'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n build-ios:\n runs-on: macos-latest\n strategy:\n matrix:\n target: [aarch64-apple-ios, aarch64-apple-ios-sim]\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --recursive ios/wireguard-apple\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Install Rust\n run: rustup target add ${{ matrix.target }}\n\n - name: Build and test crates\n shell: bash\n env:\n RUSTFLAGS: --deny warnings\n # NOTE: Tests actually target macOS here. This is because we do not have an iOS runner\n # handy.\n run: |\n source env.sh\n time cargo build --locked --verbose --lib -p mullvad-ios -p mullvad-api --target ${{ matrix.target }}\n time cargo test --locked --verbose --lib -p mullvad-ios -p mullvad-api\n\n clippy-check-ios:\n runs-on: macos-latest\n strategy:\n matrix:\n target: [aarch64-apple-ios, aarch64-apple-ios-sim]\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v3\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Install Rust\n run: rustup target add ${{ matrix.target }}\n\n - name: Clippy check\n shell: bash\n env:\n RUSTFLAGS: --deny warnings\n run: |\n source env.sh\n time cargo clippy --locked --all-targets --no-default-features -p mullvad-ios -p mullvad-api \\n --target ${{ matrix.target }}\n time cargo clippy --locked --all-targets --all-features -p mullvad-ios -p mullvad-api \\n --target ${{ matrix.target }}\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\ios-rust-ffi.yml | ios-rust-ffi.yml | YAML | 2,125 | 0.8 | 0 | 0.031746 | react-lib | 652 | 
2024-05-11T15:55:21.119303 | MIT | false | 0092ef743de0360e1833c969e2490739 |
---\nname: iOS create screenshots\non:\n push:\n tags:\n - ios/*\n pull_request:\n paths:\n - ios/Gemfile\n - ios/Gemfile.lock\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n test:\n name: Take screenshots\n runs-on: macos-15-xlarge\n env:\n SOURCE_PACKAGES_PATH: .spm\n TEST_ACCOUNT: ${{ secrets.IOS_TEST_ACCOUNT_NUMBER }}\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --recursive ios/wireguard-apple\n\n - name: Setup go-lang\n uses: actions/setup-go@v3\n with:\n go-version: 1.21.13\n\n - name: Configure Xcode\n uses: maxim-lobanov/setup-xcode@v1\n with:\n xcode-version: '16.1'\n - name: Configure Rust\n run: rustup target add aarch64-apple-ios-sim x86_64-apple-ios\n\n - name: Configure Xcode project\n run: |\n for file in *.xcconfig.template ; do cp $file ${file//.template/} ; done\n sed -i "" \\n "/HAS_TIME_ACCOUNT_NUMBER =/ s#= .*#= 1234123412341234#" \\n UITests.xcconfig\n working-directory: ios/Configurations\n\n - name: Bundle\n run: bundle install\n working-directory: ios\n\n - name: Install protobuf\n run: |\n brew update\n brew install protobuf\n\n - name: Create screenshots\n run: bundle exec fastlane snapshot --cloned_source_packages_path "$SOURCE_PACKAGES_PATH"\n working-directory: ios\n\n - name: Upload screenshot artifacts\n uses: actions/upload-artifact@v4\n with:\n name: ios-screenshots\n path: ios/Screenshots\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\ios-screenshots-creation.yml | ios-screenshots-creation.yml | YAML | 1,742 | 0.8 | 0.014706 | 0 | python-kit | 742 | 2024-10-30T06:38:19.734567 | MIT | false | 6dbc1ed253df330144990a1eb38382b9 |
---\nname: iOS Validate build schemas\non:\n pull_request:\n types:\n - closed\n branches:\n - main\n paths:\n - .github/workflows/ios.yml\n - .github/workflows/ios-validate-build-schemas.yml\n - ios/.swiftformat\n - ios/**/*.swift\n - ios/**/*.xctestplan\n - Cargo.toml\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n test:\n if: github.event.pull_request.merged == true\n name: Validate build schemas\n runs-on: macos-15-xlarge\n env:\n SOURCE_PACKAGES_PATH: .spm\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --recursive ios/wireguard-apple\n\n - name: Configure cache\n uses: actions/cache@v3\n with:\n path: ios/${{ env.SOURCE_PACKAGES_PATH }}\n key: ${{ runner.os }}-spm-${{ hashFiles('ios/**/Package.resolved') }}\n restore-keys: |\n ${{ runner.os }}-spm-\n\n - name: Setup go-lang\n uses: actions/setup-go@v3\n with:\n go-version: 1.21.13\n\n - name: Configure Xcode\n uses: maxim-lobanov/setup-xcode@v1\n with:\n xcode-version: '16.1'\n - name: Configure Rust\n run: rustup target add aarch64-apple-ios-sim x86_64-apple-ios\n\n - name: Configure Xcode project\n run: |\n cp Base.xcconfig.template Base.xcconfig\n cp App.xcconfig.template App.xcconfig\n cp PacketTunnel.xcconfig.template PacketTunnel.xcconfig\n cp Screenshots.xcconfig.template Screenshots.xcconfig\n cp Api.xcconfig.template Api.xcconfig\n cp UITests.xcconfig.template UITests.xcconfig\n working-directory: ios/Configurations\n\n - name: Install xcbeautify\n run: |\n brew update\n brew install xcbeautify\n\n - name: Install protobuf\n run: |\n brew update\n brew install protobuf\n\n - name: Run build validation for Staging and MockRelease configurations as well as the MullvadVPNUITests target\n run: |\n set -o pipefail && env NSUnbufferedIO=YES xcodebuild \\n -project MullvadVPN.xcodeproj \\n -scheme MullvadVPN \\n -configuration MockRelease \\n -destination "platform=iOS 
Simulator,name=iPhone 16" \\n -clonedSourcePackagesDirPath "$SOURCE_PACKAGES_PATH" \\n -disableAutomaticPackageResolution \\n build\n set -o pipefail && env NSUnbufferedIO=YES xcodebuild \\n -project MullvadVPN.xcodeproj \\n -scheme MullvadVPN \\n -configuration Staging \\n -destination "platform=iOS Simulator,name=iPhone 16" \\n -clonedSourcePackagesDirPath "$SOURCE_PACKAGES_PATH" \\n -disableAutomaticPackageResolution \\n build\n set -o pipefail && env NSUnbufferedIO=YES xcodebuild \\n -project MullvadVPN.xcodeproj \\n -scheme MullvadVPNUITests \\n -configuration Debug \\n -destination "platform=iOS Simulator,name=iPhone 16" \\n -clonedSourcePackagesDirPath "$SOURCE_PACKAGES_PATH" \\n -disableAutomaticPackageResolution \\n build\n working-directory: ios/\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\ios-validate-build-schemas.yml | ios-validate-build-schemas.yml | YAML | 3,314 | 0.8 | 0.019608 | 0 | python-kit | 935 | 2025-03-20T13:09:57.396907 | BSD-3-Clause | false | f6326f56eab7ce55197b9e3d171e49fc |
---\nname: iOS app\non:\n pull_request:\n paths:\n - .github/workflows/ios.yml\n - ios/build-rust-library.sh\n - ios/.swiftformat\n - ios/wireguard-apple\n - ios/**/*.swift\n - ios/**/*.xctestplan\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-formatting:\n name: Check formatting\n runs-on: macos-15\n steps:\n - name: Install SwiftFormat\n run: |\n brew update\n brew upgrade swiftformat\n\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Check formatting\n run: |\n swiftformat --version\n swiftformat --lint .\n working-directory: ios\n\n swiftlint:\n name: Run swiftlint\n runs-on: macos-15\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Run swiftlint\n run: |\n brew install swiftlint\n swiftlint --version\n swiftlint --reporter github-actions-logging\n working-directory: ios\n\n test:\n name: Unit tests\n runs-on: macos-15-xlarge\n env:\n SOURCE_PACKAGES_PATH: .spm\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --recursive ios/wireguard-apple\n\n\n - name: Configure cache\n uses: actions/cache@v3\n with:\n path: ios/${{ env.SOURCE_PACKAGES_PATH }}\n key: ${{ runner.os }}-spm-${{ hashFiles('ios/**/Package.resolved') }}\n restore-keys: |\n ${{ runner.os }}-spm-\n\n - name: Setup go-lang\n uses: actions/setup-go@v3\n with:\n go-version: 1.21.13\n\n - name: Install xcbeautify\n run: |\n brew update\n brew install xcbeautify\n\n - name: Install protobuf\n run: |\n brew update\n brew install protobuf\n\n - name: Configure Xcode\n uses: maxim-lobanov/setup-xcode@v1\n with:\n xcode-version: '16.1'\n - name: Configure Rust\n # Since the https://github.com/actions/runner-images/releases/tag/macos-13-arm64%2F20240721.1 release\n # Brew does not install tools at the correct location anymore\n # This update broke the rust build script which was assuming the cargo binary was located 
in ~/.cargo/bin/cargo\n # The workaround is to fix brew paths by running brew bundle dump, and then brew bundle\n # WARNING: This has to be the last brew "upgrade" command that is run,\n # otherwise the brew path will be broken again.\n run: |\n brew bundle dump\n brew bundle\n rustup target add aarch64-apple-ios-sim\n\n - name: Configure Xcode project\n run: |\n cp Base.xcconfig.template Base.xcconfig\n cp App.xcconfig.template App.xcconfig\n cp PacketTunnel.xcconfig.template PacketTunnel.xcconfig\n cp Screenshots.xcconfig.template Screenshots.xcconfig\n cp Api.xcconfig.template Api.xcconfig\n working-directory: ios/Configurations\n\n - name: Run unit tests\n run: |\n set -o pipefail && env NSUnbufferedIO=YES xcodebuild \\n -project MullvadVPN.xcodeproj \\n -scheme MullvadVPN \\n -testPlan MullvadVPNCI \\n -destination "platform=iOS Simulator,name=iPhone 16" \\n -clonedSourcePackagesDirPath "$SOURCE_PACKAGES_PATH" \\n -disableAutomaticPackageResolution \\n -resultBundlePath xcode-test-report \\n test 2>&1 | xcbeautify\n working-directory: ios/\n\n - name: Archive test report\n if: always()\n run: zip -r test-report.zip ios/xcode-test-report.xcresult\n\n - name: Store test report artifact\n if: always()\n uses: actions/upload-artifact@v4\n with:\n name: test-report\n path: test-report.zip\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\ios.yml | ios.yml | YAML | 3,926 | 0.8 | 0.014925 | 0.051724 | python-kit | 277 | 2025-02-18T19:02:12.065394 | BSD-3-Clause | false | 311b5bd528129f7267ac8cf8814c07f5 |
---\nname: OSV-Scanner PR Scan\n\non:\n pull_request:\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n scan-pr:\n permissions:\n # Require writing security events to upload SARIF file to security tab\n security-events: write\n # Only need to read contents\n contents: read\n actions: read\n\n # yamllint disable rule:line-length\n uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml@ab8175fc65a74d8c0308f623b1c617a39bdc34fe" # v1.9.2 + submodule patch\n with:\n checkout-submodules: true\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\osv-scanner-pr.yml | osv-scanner-pr.yml | YAML | 542 | 0.8 | 0 | 0.166667 | node-utils | 174 | 2024-11-23T17:41:03.064974 | Apache-2.0 | false | ab9f79f8fe02cce4450fece797bd85f9 |
---\nname: OSV-Scanner Scheduled Scan\n\non:\n schedule:\n - cron: "30 7 * * MON-FRI"\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n scan-scheduled:\n permissions:\n # Require writing security events to upload SARIF file to security tab\n security-events: write\n # Only need to read contents\n contents: read\n actions: read\n\n # yamllint disable rule:line-length\n uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@ab8175fc65a74d8c0308f623b1c617a39bdc34fe" # v1.9.2 + submodule patch\n with:\n checkout-submodules: false\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\osv-scanner-scheduled.yml | osv-scanner-scheduled.yml | YAML | 581 | 0.8 | 0 | 0.157895 | react-lib | 483 | 2024-02-26T16:38:41.706152 | Apache-2.0 | false | c42bc08b8dc283cce2c40dd83f7210c1 |
---\nname: Check formatting of proto files\non:\n pull_request:\n paths:\n - '**/*.proto'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-formatting:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - name: Run clang-format for proto files\n uses: jidicula/clang-format-action@v4.11.0\n with:\n clang-format-version: 15\n include-regex: ^.*\.proto$\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\proto-format-check.yml | proto-format-check.yml | YAML | 415 | 0.8 | 0.05 | 0 | node-utils | 249 | 2024-05-30T07:56:24.241878 | Apache-2.0 | false | 7ae71f4f3634f232bd773997a0bb4267 |
---\nname: Rust - Supply chain\non:\n pull_request:\n paths:\n - .github/workflows/rust-supply-chain.yml\n - deny.toml\n - '**/Cargo.toml'\n - Cargo.lock\n - '**/*.rs'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-supply-chain:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout wireguard-go submodule\n run: git submodule update --init --depth=1 wireguard-go-rs\n\n - name: Run cargo deny\n uses: EmbarkStudios/cargo-deny-action@v2\n with:\n log-level: warn\n rust-version: stable\n command: check all\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\rust-supply-chain.yml | rust-supply-chain.yml | YAML | 654 | 0.8 | 0 | 0 | awesome-app | 941 | 2023-09-19T06:21:48.457309 | GPL-3.0 | false | 88269bf707b45e85d3a29c4f0bbe8c5e |
---\nname: Rust - Check formatting\non:\n pull_request:\n paths:\n - .github/workflows/rustfmt.yml\n - rustfmt.toml\n - '**/*.rs'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-formatting:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout wireguard-go submodule\n run: git submodule update --init --depth=1 wireguard-go-rs\n\n - name: Check formatting\n run: |-\n rustfmt --version\n cargo fmt -- --check\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\rustfmt.yml | rustfmt.yml | YAML | 534 | 0.8 | 0 | 0 | vue-tools | 692 | 2024-03-10T15:21:22.442269 | Apache-2.0 | false | 374162b1672a982f742695084e39f833 |
---\nname: Shellcheck - Lint shell scripts\non:\n pull_request:\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n shellcheck:\n name: Shellcheck\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - name: Run ShellCheck\n uses: ludeeus/action-shellcheck@2.0.0\n with:\n ignore_paths: >-\n ./android/gradlew\n env:\n SHELLCHECK_OPTS: --external-sources\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\shellcheck.yml | shellcheck.yml | YAML | 419 | 0.7 | 0 | 0 | awesome-app | 874 | 2024-08-20T22:22:41.802035 | MIT | false | 0f0a02d6bc3f5d1b8fbd97fb2545ac1c |
---\nname: Translations converter tool CI\non:\n pull_request:\n paths:\n - .github/workflows/translations-converter.yml\n - android/translations-converter/**\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-translations:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Build and test translations converter tool\n working-directory: android/translations-converter\n run: cargo test\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\translations-converter.yml | translations-converter.yml | YAML | 480 | 0.8 | 0 | 0 | python-kit | 218 | 2024-04-14T13:11:50.907390 | BSD-3-Clause | false | e0512f25f66c081e4bae34487be7c64b |
---\nname: Translation check\non:\n pull_request:\n paths:\n - .github/workflows/translations.yml\n - android/translations-converter/**\n - android/lib/resource/src/**/plurals.xml\n - android/lib/resource/src/**/strings.xml\n - desktop/packages/mullvad-vpn/**\n - '!**/osv-scanner.toml'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-translations:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Setup node\n uses: actions/setup-node@v4\n with:\n node-version-file: desktop/package.json\n cache: 'npm'\n cache-dependency-path: desktop/package-lock.json\n\n - name: Install JS dependencies\n working-directory: desktop\n shell: bash\n run: npm ci\n\n - name: Verify translations\n shell: bash\n run: scripts/localization verify\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\translations.yml | translations.yml | YAML | 902 | 0.8 | 0 | 0 | vue-tools | 821 | 2023-07-19T19:29:47.206044 | Apache-2.0 | false | a3fb4616d46650b79c58499375f1bc9b |
---\nname: Bidirectional Unicode scan\non: [pull_request, workflow_dispatch]\n\npermissions: {}\n\njobs:\n build-linux:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: git submodule update --init --depth=1 dist-assets/binaries wireguard-go-rs\n\n - name: Scan for code points\n run: ./ci/check-trojan-source.sh .\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\unicode-check.yml | unicode-check.yml | YAML | 422 | 0.7 | 0.055556 | 0 | python-kit | 871 | 2024-03-27T17:35:10.560401 | MIT | false | d73b68793dc6413616de4f258e450f28 |
---\nname: Unicop - find evil unicode\non:\n pull_request:\n paths:\n - .github/workflows/unicop.yml\n - '**/*.rs'\n - '**/*.swift'\n - '**/*.go'\n - '**/*.py'\n # Javascript/Typescript\n - '**/*.ts'\n - '**/*.js'\n # Kotlin\n - '**/*.kt'\n - '**/*.kts'\n # C/C++\n - '**/*.cpp'\n - '**/*.h'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n unicop:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - name: Checkout submodules\n run: |\n git config --global --add safe.directory '*'\n git submodule update --init --depth=1 dist-assets/binaries\n git submodule update --init --depth=1 windows\n git submodule update --init --depth=1 wireguard-go-rs/libwg/wireguard-go\n\n - name: Install Rust toolchain\n run: rustup override set stable\n\n - name: Install unicop\n run: cargo install --locked unicop\n\n - name: Check for unwanted unicode\n run: unicop --verbose .\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\unicop.yml | unicop.yml | YAML | 1,013 | 0.8 | 0.022727 | 0.078947 | node-utils | 308 | 2024-04-01T15:22:37.504728 | MIT | false | 119a0b19db6ac4c1990bc531ac30397c |
---\nname: Verify lockfile signatures\non:\n pull_request:\n paths:\n - .github/workflows/verify-locked-down-signatures.yml\n - .github/workflows/android-audit.yml\n - .github/workflows/unicop.yml\n - .github/CODEOWNERS\n - Cargo.toml\n - test/Cargo.toml\n - Cargo.lock\n - test/Cargo.lock\n - deny.toml\n - test/deny.toml\n - rust-toolchain.toml\n - desktop/package-lock.json\n - wireguard-go-rs/libwg/go.sum\n - ci/keys/**\n - ci/verify-locked-down-signatures.sh\n - ios/MullvadVPN.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved\n - android/gradlew\n - android/gradlew.bat\n - android/gradle/verification-metadata.xml\n - android/gradle/verification-metadata.keys.xml\n - android/gradle/verification-keyring.keys\n - android/gradle/wrapper/gradle-wrapper.jar\n - android/gradle/wrapper/gradle-wrapper.properties\n - android/scripts/lockfile\n - building/build-and-publish-container-image.sh\n - building/mullvad-app-container-signing.asc\n - building/linux-container-image.txt\n - building/android-container-image.txt\n - building/sigstore/**\n - mullvad-update/trusted-metadata-signing-pubkeys\n\npermissions: {}\n\njobs:\n verify-signatures:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n with:\n ref: ${{ github.event.pull_request.head.sha }}\n - name: Verify signatures\n run: |-\n base_ref=${{ github.event.pull_request.base.sha }}\n head_ref=${{ github.event.pull_request.head.sha }}\n git fetch --no-recurse-submodules --shallow-exclude=main origin main $base_ref $head_ref\n git fetch --deepen=1\n ci/verify-locked-down-signatures.sh --import-gpg-keys --whitelist origin/main\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\verify-locked-down-signatures.yml | verify-locked-down-signatures.yml | YAML | 1,814 | 0.95 | 0 | 0 | node-utils | 369 | 2024-06-28T14:48:55.631963 | BSD-3-Clause | false | 9649b7b09699558e20b5b6073c172e60 |
---\nname: YAML linting\non:\n pull_request:\n paths:\n - .github/workflows/yamllint.yml\n - .yamllint\n - '**/**.yml'\n - '**/**.yaml'\n workflow_dispatch:\n\npermissions: {}\n\njobs:\n check-formatting:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - run: sudo apt-get install yamllint\n - run: yamllint .\n | dataset_sample\yaml\mullvad_mullvadvpn-app\.github\workflows\yamllint.yml | yamllint.yml | YAML | 356 | 0.8 | 0 | 0 | python-kit | 911 | 2024-02-03T12:14:25.500949 | MIT | false | cc44ca0586b824041c0e1fb34b4bc92b |
build:\n maxIssues: 0\n excludeCorrectable: false\n weights:\n # complexity: 2\n # LongParameterList: 1\n # style: 1\n # comments: 1\n\nconfig:\n validation: true\n warningsAsErrors: false\n checkExhaustiveness: false\n # when writing own rules with new properties, exclude the property path e.g.: 'my_rule_set,.*>.*>[my_property]'\n excludes: ''\n\nprocessors:\n active: true\n exclude:\n - 'DetektProgressListener'\n # - 'KtFileCountProcessor'\n # - 'PackageCountProcessor'\n # - 'ClassCountProcessor'\n # - 'FunctionCountProcessor'\n # - 'PropertyCountProcessor'\n # - 'ProjectComplexityProcessor'\n # - 'ProjectCognitiveComplexityProcessor'\n # - 'ProjectLLOCProcessor'\n # - 'ProjectCLOCProcessor'\n # - 'ProjectLOCProcessor'\n # - 'ProjectSLOCProcessor'\n # - 'LicenseHeaderLoaderExtension'\n\nconsole-reports:\n active: true\n exclude:\n - 'ProjectStatisticsReport'\n - 'ComplexityReport'\n - 'NotificationReport'\n - 'FindingsReport'\n - 'FileBasedFindingsReport'\n # - 'LiteFindingsReport'\n\noutput-reports:\n active: true\n exclude:\n - 'TxtOutputReport'\n - 'XmlOutputReport'\n - 'MdOutputReport'\n - 'SarifOutputReport'\n - 'sarif'\n\ncomments:\n active: true\n AbsentOrWrongFileLicense:\n active: false\n licenseTemplateFile: 'license.template'\n licenseTemplateIsRegex: false\n CommentOverPrivateFunction:\n active: false\n CommentOverPrivateProperty:\n active: false\n DeprecatedBlockTag:\n active: false\n EndOfSentenceFormat:\n active: false\n endOfSentenceFormat: '([.?!][ \t\n\r\f<])|([.?!:]$)'\n KDocReferencesNonPublicProperty:\n active: false\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n OutdatedDocumentation:\n active: false\n matchTypeParameters: true\n matchDeclarationsOrder: true\n allowParamOnConstructorProperties: false\n UndocumentedPublicClass:\n active: false\n excludes:\n - '**/test/**'\n - 
'**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n searchInNestedClass: true\n searchInInnerClass: true\n searchInInnerObject: true\n searchInInnerInterface: true\n searchInProtectedClass: false\n UndocumentedPublicFunction:\n active: false\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n searchProtectedFunction: false\n UndocumentedPublicProperty:\n active: false\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n searchProtectedProperty: false\n\ncomplexity:\n active: true\n CognitiveComplexMethod:\n active: false\n threshold: 15\n ComplexCondition:\n active: true\n threshold: 4\n ComplexInterface:\n active: false\n threshold: 10\n includeStaticDeclarations: false\n includePrivateDeclarations: false\n ignoreOverloaded: false\n CyclomaticComplexMethod:\n active: true\n threshold: 15\n ignoreSingleWhenExpression: false\n ignoreSimpleWhenEntries: false\n ignoreNestingFunctions: false\n nestingFunctions:\n - 'also'\n - 'apply'\n - 'forEach'\n - 'isNotNull'\n - 'ifNull'\n - 'let'\n - 'run'\n - 'use'\n - 'with'\n LabeledExpression:\n active: false\n ignoredLabels: []\n LargeClass:\n active: true\n threshold: 600\n LongMethod:\n active: true\n threshold: 80\n LongParameterList:\n active: true\n functionThreshold: 15\n constructorThreshold: 10\n ignoreDefaultParameters: true\n ignoreDataClasses: true\n ignoreAnnotatedParameter: []\n MethodOverloading:\n active: false\n threshold: 6\n NamedArguments:\n active: false\n threshold: 3\n ignoreArgumentsMatchingNames: false\n NestedBlockDepth:\n active: true\n threshold: 4\n NestedScopeFunctions:\n active: false\n 
threshold: 1\n functions:\n - 'kotlin.apply'\n - 'kotlin.run'\n - 'kotlin.with'\n - 'kotlin.let'\n - 'kotlin.also'\n ReplaceSafeCallChainWithRun:\n active: false\n StringLiteralDuplication:\n active: false\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n threshold: 3\n ignoreAnnotation: true\n excludeStringsWithLessThan5Characters: true\n ignoreStringsRegex: '$^'\n TooManyFunctions:\n # Configuration maybe should change when this has been merged: https://github.com/detekt/detekt/issues/6516\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n thresholdInFiles: 30\n thresholdInClasses: 14\n thresholdInInterfaces: 14\n thresholdInObjects: 14\n thresholdInEnums: 14\n ignoreDeprecated: false\n ignorePrivate: false\n ignoreOverridden: false\n\ncoroutines:\n active: true\n GlobalCoroutineUsage:\n active: false\n InjectDispatcher:\n active: true\n dispatcherNames:\n - 'IO'\n - 'Default'\n - 'Unconfined'\n RedundantSuspendModifier:\n active: true\n SleepInsteadOfDelay:\n active: true\n SuspendFunSwallowedCancellation:\n active: false\n SuspendFunWithCoroutineScopeReceiver:\n active: false\n SuspendFunWithFlowReturnType:\n active: true\n\nempty-blocks:\n active: true\n EmptyCatchBlock:\n active: true\n allowedExceptionNameRegex: '_|(ignore|expected).*'\n EmptyClassBlock:\n active: true\n EmptyDefaultConstructor:\n active: true\n EmptyDoWhileBlock:\n active: true\n EmptyElseBlock:\n active: true\n EmptyFinallyBlock:\n active: true\n EmptyForBlock:\n active: true\n EmptyFunctionBlock:\n active: true\n ignoreOverridden: false\n EmptyIfBlock:\n active: true\n EmptyInitBlock:\n active: true\n EmptyKtFile:\n active: true\n EmptySecondaryConstructor:\n active: true\n 
EmptyTryBlock:\n active: true\n EmptyWhenBlock:\n active: true\n EmptyWhileBlock:\n active: true\n\nexceptions:\n active: true\n ExceptionRaisedInUnexpectedLocation:\n active: true\n methodNames:\n - 'equals'\n - 'finalize'\n - 'hashCode'\n - 'toString'\n InstanceOfCheckForException:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n NotImplementedDeclaration:\n active: false\n ObjectExtendsThrowable:\n active: false\n PrintStackTrace:\n active: true\n RethrowCaughtException:\n active: true\n ReturnFromFinally:\n active: true\n ignoreLabeled: false\n SwallowedException:\n active: false\n ignoredExceptionTypes:\n - 'InterruptedException'\n - 'MalformedURLException'\n - 'NumberFormatException'\n - 'ParseException'\n allowedExceptionNameRegex: '_|(ignore|expected).*'\n ThrowingExceptionFromFinally:\n active: true\n ThrowingExceptionInMain:\n active: false\n ThrowingExceptionsWithoutMessageOrCause:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n exceptions:\n - 'ArrayIndexOutOfBoundsException'\n - 'Exception'\n - 'IllegalArgumentException'\n - 'IllegalMonitorStateException'\n - 'IllegalStateException'\n - 'IndexOutOfBoundsException'\n - 'NullPointerException'\n - 'RuntimeException'\n - 'Throwable'\n ThrowingNewInstanceOfSameException:\n active: true\n TooGenericExceptionCaught:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n exceptionNames:\n - 'ArrayIndexOutOfBoundsException'\n - 'Error'\n - 'Exception'\n - 'IllegalMonitorStateException'\n - 'IndexOutOfBoundsException'\n - 
'NullPointerException'\n - 'RuntimeException'\n - 'Throwable'\n allowedExceptionNameRegex: '_|(ignore|expected).*'\n TooGenericExceptionThrown:\n active: true\n exceptionNames:\n - 'Error'\n - 'Exception'\n - 'RuntimeException'\n - 'Throwable'\n\nnaming:\n active: true\n BooleanPropertyNaming:\n active: false\n allowedPattern: '^(is|has|are)'\n ClassNaming:\n active: true\n classPattern: '[A-Z][a-zA-Z0-9]*'\n ConstructorParameterNaming:\n active: true\n parameterPattern: '[a-z][A-Za-z0-9]*'\n privateParameterPattern: '[a-z][A-Za-z0-9]*'\n excludeClassPattern: '$^'\n EnumNaming:\n active: true\n enumEntryPattern: '[A-Z][_a-zA-Z0-9]*'\n ForbiddenClassName:\n active: false\n forbiddenName: []\n FunctionMaxLength:\n active: false\n maximumFunctionNameLength: 30\n FunctionMinLength:\n active: false\n minimumFunctionNameLength: 3\n FunctionNaming:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n functionPattern: '[a-z][a-zA-Z0-9]*'\n excludeClassPattern: '$^'\n ignoreAnnotated: ['Composable']\n FunctionParameterNaming:\n active: true\n parameterPattern: '[a-z][A-Za-z0-9]*'\n excludeClassPattern: '$^'\n InvalidPackageDeclaration:\n active: true\n rootPackage: ''\n requireRootInDeclaration: false\n LambdaParameterNaming:\n active: false\n parameterPattern: '[a-z][A-Za-z0-9]*|_'\n MatchingDeclarationName:\n active: true\n mustBeFirst: true\n MemberNameEqualsClassName:\n active: true\n ignoreOverridden: true\n NoNameShadowing:\n active: true\n NonBooleanPropertyPrefixedWithIs:\n active: false\n ObjectPropertyNaming:\n active: true\n constantPattern: '[A-Za-z][_A-Za-z0-9]*'\n propertyPattern: '[A-Za-z][_A-Za-z0-9]*'\n privatePropertyPattern: '(_)?[A-Za-z][_A-Za-z0-9]*'\n PackageNaming:\n active: true\n packagePattern: '[a-z]+(\.[a-z][A-Za-z0-9]*)*'\n TopLevelPropertyNaming:\n active: true\n constantPattern: 
'[A-Z][_A-Za-z0-9]*'\n propertyPattern: '[A-Za-z][_A-Za-z0-9]*'\n privatePropertyPattern: '_?[A-Za-z][_A-Za-z0-9]*'\n VariableMaxLength:\n active: false\n maximumVariableNameLength: 64\n VariableMinLength:\n active: false\n minimumVariableNameLength: 1\n VariableNaming:\n active: true\n variablePattern: '[a-z][A-Za-z0-9]*'\n privateVariablePattern: '(_)?[a-z][A-Za-z0-9]*'\n excludeClassPattern: '$^'\n\nperformance:\n active: true\n ArrayPrimitive:\n active: true\n CouldBeSequence:\n active: false\n threshold: 3\n ForEachOnRange:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n SpreadOperator:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n UnnecessaryPartOfBinaryExpression:\n active: false\n UnnecessaryTemporaryInstantiation:\n active: true\n\npotential-bugs:\n active: true\n AvoidReferentialEquality:\n active: true\n forbiddenTypePatterns:\n - 'kotlin.String'\n CastNullableToNonNullableType:\n active: false\n CastToNullableType:\n active: false\n Deprecation:\n active: false\n DontDowncastCollectionTypes:\n active: false\n DoubleMutabilityForCollection:\n active: true\n mutableTypes:\n - 'kotlin.collections.MutableList'\n - 'kotlin.collections.MutableMap'\n - 'kotlin.collections.MutableSet'\n - 'java.util.ArrayList'\n - 'java.util.LinkedHashSet'\n - 'java.util.HashSet'\n - 'java.util.LinkedHashMap'\n - 'java.util.HashMap'\n ElseCaseInsteadOfExhaustiveWhen:\n active: false\n ignoredSubjectTypes: []\n EqualsAlwaysReturnsTrueOrFalse:\n active: true\n EqualsWithHashCodeExist:\n active: true\n ExitOutsideMain:\n active: false\n ExplicitGarbageCollectionCall:\n active: true\n HasPlatformType:\n active: true\n IgnoredReturnValue:\n active: 
true\n restrictToConfig: true\n returnValueAnnotations:\n - 'CheckResult'\n - '*.CheckResult'\n - 'CheckReturnValue'\n - '*.CheckReturnValue'\n ignoreReturnValueAnnotations:\n - 'CanIgnoreReturnValue'\n - '*.CanIgnoreReturnValue'\n returnValueTypes:\n - 'kotlin.sequences.Sequence'\n - 'kotlinx.coroutines.flow.*Flow'\n - 'java.util.stream.*Stream'\n ignoreFunctionCall: []\n ImplicitDefaultLocale:\n active: true\n ImplicitUnitReturnType:\n active: false\n allowExplicitReturnType: true\n InvalidRange:\n active: true\n IteratorHasNextCallsNextMethod:\n active: true\n IteratorNotThrowingNoSuchElementException:\n active: true\n LateinitUsage:\n active: false\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n ignoreOnClassesPattern: ''\n MapGetWithNotNullAssertionOperator:\n active: true\n MissingPackageDeclaration:\n active: false\n excludes: ['**/*.kts']\n NullCheckOnMutableProperty:\n active: false\n NullableToStringCall:\n active: false\n PropertyUsedBeforeDeclaration:\n active: false\n UnconditionalJumpStatementInLoop:\n active: false\n UnnecessaryNotNullCheck:\n active: false\n UnnecessaryNotNullOperator:\n active: true\n UnnecessarySafeCall:\n active: true\n UnreachableCatchBlock:\n active: true\n UnreachableCode:\n active: true\n UnsafeCallOnNullableType:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n UnsafeCast:\n active: true\n UnusedUnaryOperator:\n active: true\n UselessPostfixExpression:\n active: true\n WrongEqualsTypeParameter:\n active: true\n\nstyle:\n active: true\n AlsoCouldBeApply:\n active: false\n BracesOnIfStatements:\n active: false\n singleLine: 'never'\n multiLine: 'always'\n BracesOnWhenStatements:\n active: false\n singleLine: 
'necessary'\n multiLine: 'consistent'\n CanBeNonNullable:\n active: false\n CascadingCallWrapping:\n active: false\n includeElvis: true\n ClassOrdering:\n active: false\n CollapsibleIfStatements:\n active: false\n DataClassContainsFunctions:\n active: false\n conversionFunctionPrefix:\n - 'to'\n allowOperators: false\n DataClassShouldBeImmutable:\n active: false\n DestructuringDeclarationWithTooManyEntries:\n active: true\n maxDestructuringEntries: 3\n ignoreAnnotated: ['Composable']\n DoubleNegativeLambda:\n active: false\n negativeFunctions:\n - reason: 'Use `takeIf` instead.'\n value: 'takeUnless'\n - reason: 'Use `all` instead.'\n value: 'none'\n negativeFunctionNameParts:\n - 'not'\n - 'non'\n EqualsNullCall:\n active: true\n EqualsOnSignatureLine:\n active: false\n ExplicitCollectionElementAccessMethod:\n active: false\n ExplicitItLambdaParameter:\n active: true\n ExpressionBodySyntax:\n active: false\n includeLineWrapping: false\n ForbiddenAnnotation:\n active: false\n annotations:\n - reason: 'it is a java annotation. Use `Suppress` instead.'\n value: 'java.lang.SuppressWarnings'\n - reason: 'it is a java annotation. Use `kotlin.Deprecated` instead.'\n value: 'java.lang.Deprecated'\n - reason: 'it is a java annotation. Use `kotlin.annotation.MustBeDocumented` instead.'\n value: 'java.lang.annotation.Documented'\n - reason: 'it is a java annotation. Use `kotlin.annotation.Target` instead.'\n value: 'java.lang.annotation.Target'\n - reason: 'it is a java annotation. Use `kotlin.annotation.Retention` instead.'\n value: 'java.lang.annotation.Retention'\n - reason: 'it is a java annotation. 
Use `kotlin.annotation.Repeatable` instead.'\n value: 'java.lang.annotation.Repeatable'\n - reason: 'Kotlin does not support @Inherited annotation, see https://youtrack.jetbrains.com/issue/KT-22265'\n value: 'java.lang.annotation.Inherited'\n ForbiddenComment:\n active: true\n comments:\n - reason: 'Forbidden FIXME todo marker in comment, please fix the problem.'\n value: 'FIXME:'\n - reason: 'Forbidden STOPSHIP todo marker in comment, please address the problem before shipping the code.'\n value: 'STOPSHIP:'\n - reason: 'Forbidden TODO todo marker in comment, please do the changes.'\n value: 'TODO:'\n allowedPatterns: ''\n ForbiddenImport:\n active: false\n imports: []\n forbiddenPatterns: ''\n ForbiddenMethodCall:\n active: false\n methods:\n - reason: 'print does not allow you to configure the output stream. Use a logger instead.'\n value: 'kotlin.io.print'\n - reason: 'println does not allow you to configure the output stream. Use a logger instead.'\n value: 'kotlin.io.println'\n ForbiddenSuppress:\n active: false\n rules: []\n ForbiddenVoid:\n active: true\n ignoreOverridden: false\n ignoreUsageInGenerics: false\n FunctionOnlyReturningConstant:\n active: true\n ignoreOverridableFunction: true\n ignoreActualFunction: true\n excludedFunctions: []\n LoopWithTooManyJumpStatements:\n active: true\n maxJumpCount: 1\n MagicNumber:\n active: true\n excludes:\n - '**/test/**'\n - '**/androidTest/**'\n - '**/commonTest/**'\n - '**/jvmTest/**'\n - '**/androidUnitTest/**'\n - '**/androidInstrumentedTest/**'\n - '**/jsTest/**'\n - '**/iosTest/**'\n - '**/*.kts'\n ignoreNumbers:\n - '-1'\n - '0'\n - '1'\n - '2'\n ignoreHashCodeFunction: true\n ignorePropertyDeclaration: true\n ignoreLocalVariableDeclaration: false\n ignoreConstantDeclaration: true\n ignoreCompanionObjectPropertyDeclaration: true\n ignoreAnnotation: false\n ignoreNamedArgument: true\n ignoreEnums: false\n ignoreRanges: false\n ignoreExtensionFunctions: true\n ignoreAnnotated: ['Preview']\n 
MandatoryBracesLoops:\n active: false\n MaxChainedCallsOnSameLine:\n active: false\n maxChainedCalls: 5\n MaxLineLength:\n active: true\n maxLineLength: 120\n excludePackageStatements: true\n excludeImportStatements: true\n excludeCommentStatements: false\n excludeRawStrings: true\n ignoreAnnotated: ['Test']\n MayBeConst:\n active: true\n ModifierOrder:\n active: true\n MultilineLambdaItParameter:\n active: false\n MultilineRawStringIndentation:\n active: false\n indentSize: 4\n trimmingMethods:\n - 'trimIndent'\n - 'trimMargin'\n NestedClassesVisibility:\n active: true\n NewLineAtEndOfFile:\n active: true\n NoTabs:\n active: false\n NullableBooleanCheck:\n active: false\n ObjectLiteralToLambda:\n active: true\n OptionalAbstractKeyword:\n active: true\n OptionalUnit:\n active: false\n PreferToOverPairSyntax:\n active: false\n ProtectedMemberInFinalClass:\n active: true\n RedundantExplicitType:\n active: false\n RedundantHigherOrderMapUsage:\n active: true\n RedundantVisibilityModifierRule:\n active: false\n ReturnCount:\n active: true\n max: 2\n excludedFunctions:\n - 'equals'\n excludeLabeled: false\n excludeReturnFromLambda: true\n excludeGuardClauses: false\n SafeCast:\n active: true\n SerialVersionUIDInSerializableClass:\n active: true\n SpacingBetweenPackageAndImports:\n active: false\n StringShouldBeRawString:\n active: false\n maxEscapedCharacterCount: 2\n ignoredCharacters: []\n ThrowsCount:\n active: true\n max: 2\n excludeGuardClauses: false\n TrailingWhitespace:\n active: false\n TrimMultilineRawString:\n active: false\n trimmingMethods:\n - 'trimIndent'\n - 'trimMargin'\n UnderscoresInNumericLiterals:\n active: false\n acceptableLength: 4\n allowNonStandardGrouping: false\n UnnecessaryAbstractClass:\n active: true\n UnnecessaryAnnotationUseSiteTarget:\n active: false\n UnnecessaryApply:\n active: true\n UnnecessaryBackticks:\n active: false\n UnnecessaryBracesAroundTrailingLambda:\n active: false\n UnnecessaryFilter:\n active: true\n 
UnnecessaryInheritance:\n active: true\n UnnecessaryInnerClass:\n active: false\n UnnecessaryLet:\n active: false\n UnnecessaryParentheses:\n active: false\n allowForUnclearPrecedence: false\n UntilInsteadOfRangeTo:\n active: false\n UnusedImports:\n active: false\n UnusedParameter:\n active: true\n allowedNames: 'ignored|expected'\n UnusedPrivateClass:\n active: true\n UnusedPrivateMember:\n active: true\n allowedNames: ''\n ignoreAnnotated: ['Preview']\n UnusedPrivateProperty:\n active: true\n allowedNames: '_|ignored|expected|serialVersionUID'\n UseAnyOrNoneInsteadOfFind:\n active: true\n UseArrayLiteralsInAnnotations:\n active: true\n UseCheckNotNull:\n active: true\n UseCheckOrError:\n active: true\n UseDataClass:\n active: false\n allowVars: false\n UseEmptyCounterpart:\n active: false\n UseIfEmptyOrIfBlank:\n active: false\n UseIfInsteadOfWhen:\n active: false\n ignoreWhenContainingVariableDeclaration: false\n UseIsNullOrEmpty:\n active: true\n UseLet:\n active: false\n UseOrEmpty:\n active: true\n UseRequire:\n active: true\n UseRequireNotNull:\n active: true\n UseSumOfInsteadOfFlatMapSize:\n active: false\n UselessCallOnNotNull:\n active: true\n UtilityClassWithPublicConstructor:\n active: true\n VarCouldBeVal:\n active: true\n ignoreLateinitVar: false\n WildcardImport:\n active: true\n excludeImports:\n - 'java.util.*'\n | dataset_sample\yaml\mullvad_mullvadvpn-app\android\config\detekt.yml | detekt.yml | YAML | 22,624 | 0.95 | 0 | 0.021158 | react-lib | 422 | 2025-02-15T09:41:06.609523 | MIT | false | d5278ef2d1c816ed3b700e37b8f257de |
AntiFeatures:\n NonFreeNet:\n en-US: Depends on the Mullvad VPN service.\nCategories:\n - Connectivity\n - Internet\n - Security\n - System\nLicense: GPL-3.0-or-later\nWebSite: https://mullvad.net\nSourceCode: https://github.com/mullvad/mullvadvpn-app\nIssueTracker: https://github.com/mullvad/mullvadvpn-app/issues\nTranslation: https://github.com/mullvad/mullvadvpn-app/blob/HEAD/CONTRIBUTING.md#localization--translations\nChangelog: https://github.com/mullvad/mullvadvpn-app/blob/HEAD/android/CHANGELOG.md\n\nAutoName: Mullvad VPN\n\nRepoType: git\nRepo: https://github.com/mullvad/mullvadvpn-app.git\n\nBuilds:\n - versionName: 'Reproducible'\n versionCode: 1\n commit: commit-hash\n timeout: 10800\n subdir: android/app\n sudo:\n - apt-get update\n - apt-get install -y build-essential protobuf-compiler libprotobuf-dev\n init: NDK_PATH="$$NDK$$" ../fdroid-build/init.sh\n output: build/outputs/apk/ossProd/fdroid/app-oss-prod-fdroid-unsigned.apk\n rm:\n - desktop\n - graphics\n - ios\n - windows\n - building/sigstore\n - android/lib/billing\n prebuild:\n - git -C ../.. 
submodule update --init --recursive --depth=1 wireguard-go-rs\n - sed -i -e 's|Repositories.GradlePlugins|"https://plugins.gradle.org/m2/"|'\n ../build.gradle.kts\n - sed -i '/\"desktop\//d' ../../Cargo.toml\n - sed -i '/^android-billingclient/d' ../gradle/libs.versions.toml\n build:\n - NDK_PATH="$$NDK$$" source ../fdroid-build/env.sh\n - cargo install --force cbindgen --version "0.26.0" --locked\n - echo $NDK_TOOLCHAIN_DIR "$$NDK$$"\n - ../build.sh --fdroid\n ndk: 27.2.12479018\n\nAutoUpdateMode: Version\nUpdateCheckMode: Tags ^android/[0-9]{4}\.[0-9]+$\nUpdateCheckData: dist-assets/android-version-code.txt|(\d+)|dist-assets/android-version-name.txt|(.+)\nCurrentVersion: 'Reproducible'\nCurrentVersionCode: 1\n | dataset_sample\yaml\mullvad_mullvadvpn-app\android\fdroid-build\metadata\net.mullvad.mullvadvpn.yml | net.mullvad.mullvadvpn.yml | YAML | 1,873 | 0.8 | 0 | 0 | awesome-app | 939 | 2025-06-10T13:56:49.477808 | BSD-3-Clause | false | 0cb71f4549882c1b2d0a2a85c79104eb |
---\ndefault:\n type: instrumentation\n app: android/app/build/outputs/apk/playStagemole/debug/app-play-stagemole-debug.apk\n test: android/test/e2e/build/outputs/apk/playStagemole/debug/e2e-play-stagemole-debug.apk\n timeout: 10m\n use-orchestrator: true\n num-flaky-test-attempts: 1\n device:\n - {model: shiba, version: 34, locale: en, orientation: portrait} # pixel 8\n - {model: felix, version: 34, locale: en, orientation: portrait} # pixel fold\n - {model: tangorpro, version: 33, locale: en, orientation: portrait} # pixel tablet\n - {model: oriole, version: 32, locale: en, orientation: portrait} # pixel 6\n - {model: oriole, version: 31, locale: en, orientation: portrait} # pixel 6\n - {model: redfin, version: 30, locale: en, orientation: portrait} # pixel 5\n - {model: crownqlteue, version: 29, locale: en, orientation: portrait} # galaxy note9\n - {model: blueline, version: 28, locale: en, orientation: portrait} # pixel 3\n - {model: cactus, version: 27, locale: en, orientation: portrait} # redmi 6a\n - {model: starqlteue, version: 26, locale: en, orientation: portrait} # galaxy s9\n environment-variables:\n clearPackageData: "true"\n runnerBuilder: "de.mannodermaus.junit5.AndroidJUnit5Builder"\n | dataset_sample\yaml\mullvad_mullvadvpn-app\android\test\firebase\e2e-play-stagemole.yml | e2e-play-stagemole.yml | YAML | 1,241 | 0.8 | 0 | 0 | react-lib | 840 | 2024-02-08T17:57:14.488967 | MIT | true | 35cff9ef61d3c23f40e9dd9aef02c72a |
---\ndefault:\n type: instrumentation\n app: android/app/build/outputs/apk/ossProd/debug/app-oss-prod-debug.apk\n test: android/test/mockapi/build/outputs/apk/oss/debug/mockapi-oss-debug.apk\n timeout: 10m\n use-orchestrator: true\n num-flaky-test-attempts: 1\n device:\n - {model: shiba, version: 34, locale: en, orientation: portrait}\n - {model: tangorpro, version: 33, locale: en, orientation: portrait}\n - {model: felix, version: 33, locale: en, orientation: portrait}\n - {model: GoogleTvEmulator, version: 30, locale: en, orientation: landscape}\n environment-variables:\n clearPackageData: "true"\n runnerBuilder: "de.mannodermaus.junit5.AndroidJUnit5Builder"\n | dataset_sample\yaml\mullvad_mullvadvpn-app\android\test\firebase\mockapi-oss.yml | mockapi-oss.yml | YAML | 680 | 0.7 | 0 | 0 | react-lib | 792 | 2024-07-06T19:35:06.936277 | GPL-3.0 | true | 7dfd30327d5ddab107574b567f9095e5 |
# Usage:\n# crowdin upload sources\n# crowdin download\n\n'project_id': '350815'\n'api_token_env': 'CROWDIN_API_KEY'\n'base_path': './locales'\n'base_url': 'https://api.crowdin.com'\n'preserve_hierarchy': true\n\nfiles: [\n {\n 'source': '/*.pot',\n 'translation': '/%osx_locale%/%file_name%.po',\n 'translation_replace': {\n 'zh-Hans': 'zh-CN',\n 'zh-Hant': 'zh-TW',\n },\n },\n]\n | dataset_sample\yaml\mullvad_mullvadvpn-app\desktop\packages\mullvad-vpn\crowdin.yml | crowdin.yml | YAML | 388 | 0.8 | 0 | 0.166667 | python-kit | 644 | 2024-01-14T23:57:02.853111 | GPL-3.0 | false | 5a2404f699f490ed28405d6080b21ac3 |
---\ndisabled_rules:\n - colon\n - comma\n - control_statement\n - identifier_name\n - type_body_length\n - opening_brace # Differs from Google swift guidelines enforced by swiftformat\n - trailing_comma\n - switch_case_alignment # Enables expressions such as [return switch location {}]\n - orphaned_doc_comment\nopt_in_rules:\n - empty_count\n\nanalyzer_rules: # rules run by `swiftlint analyze`\n - explicit_self\n\nincluded: # case-sensitive paths to include during linting. `--path` is ignored if present\n - .\nexcluded: # case-sensitive paths to ignore during linting. Takes precedence over `included`\n - AdditionalAssets\n - Assets\n - Build\n - Configurations\n - MullvadVPNScreenshots\n\nallow_zero_lintable_files: false\n\nforce_cast: warning\nforce_try:\n severity: warning\nline_length:\n ignores_comments: true\n ignores_interpolated_strings: true\n warning: 120\n error: 300\ncyclomatic_complexity:\n ignores_case_statements: true\n\ntype_name:\n min_length: 4\n max_length:\n warning: 50\n error: 60\n excluded: iPhone # excluded via string\n allowed_symbols: ["_"] # these are allowed in type names\nreporter: "xcode"\nnesting:\n type_level:\n warning: 2\n error: 4\n | dataset_sample\yaml\mullvad_mullvadvpn-app\ios\.swiftlint.yml | .swiftlint.yml | YAML | 1,180 | 0.8 | 0.039216 | 0 | vue-tools | 221 | 2024-03-15T20:48:05.184974 | Apache-2.0 | false | 97ccd07775034e82e30cb477ecb0d2d9 |
self-hosted-runner:\n labels:\n - arm64\n - large\n - large-arm64\n - small\n - small-metal\n - small-arm64\n - unit-perf\n - us-east-2\nconfig-variables:\n - AWS_ECR_REGION\n - AZURE_DEV_CLIENT_ID\n - AZURE_DEV_REGISTRY_NAME\n - AZURE_DEV_SUBSCRIPTION_ID\n - AZURE_PROD_CLIENT_ID\n - AZURE_PROD_REGISTRY_NAME\n - AZURE_PROD_SUBSCRIPTION_ID\n - AZURE_TENANT_ID\n - BENCHMARK_INGEST_TARGET_PROJECTID\n - BENCHMARK_LARGE_OLTP_PROJECTID\n - BENCHMARK_PROJECT_ID_PUB\n - BENCHMARK_PROJECT_ID_SUB\n - DEV_AWS_OIDC_ROLE_ARN\n - DEV_AWS_OIDC_ROLE_MANAGE_BENCHMARK_EC2_VMS_ARN\n - HETZNER_CACHE_BUCKET\n - HETZNER_CACHE_ENDPOINT\n - HETZNER_CACHE_REGION\n - NEON_DEV_AWS_ACCOUNT_ID\n - NEON_PROD_AWS_ACCOUNT_ID\n - PGREGRESS_PG16_PROJECT_ID\n - PGREGRESS_PG17_PROJECT_ID\n - REMOTE_STORAGE_AZURE_CONTAINER\n - REMOTE_STORAGE_AZURE_REGION\n - SLACK_CICD_CHANNEL_ID\n - SLACK_ON_CALL_DEVPROD_STREAM\n - SLACK_ON_CALL_QA_STAGING_STREAM\n - SLACK_ON_CALL_STORAGE_STAGING_STREAM\n - SLACK_RUST_CHANNEL_ID\n - SLACK_STORAGE_CHANNEL_ID\n - SLACK_UPCOMING_RELEASE_CHANNEL_ID\n | dataset_sample\yaml\neondatabase_neon\.github\actionlint.yml | actionlint.yml | YAML | 1,074 | 0.7 | 0 | 0 | awesome-app | 166 | 2023-10-14T09:10:09.387553 | GPL-3.0 | false | f20ebbb03895c2c24774f0fd8250872e |
name: 'Create Allure report'\ndescription: 'Generate an Allure report from test results uploaded by actions/allure-report-store'\n\ninputs:\n store-test-results-into-db:\n description: 'Whether to store test results into the database. TEST_RESULT_CONNSTR/TEST_RESULT_CONNSTR_NEW should be set'\n type: boolean\n required: false\n default: false\n aws-oicd-role-arn:\n description: 'OIDC role ARN to interact with S3'\n required: true\n\noutputs:\n base-url:\n description: 'Base URL for Allure report'\n value: ${{ steps.generate-report.outputs.base-url }}\n base-s3-url:\n description: 'Base S3 URL for Allure report'\n value: ${{ steps.generate-report.outputs.base-s3-url }}\n report-url:\n description: 'Allure report URL'\n value: ${{ steps.generate-report.outputs.report-url }}\n report-json-url:\n description: 'Allure report JSON URL'\n value: ${{ steps.generate-report.outputs.report-json-url }}\n\nruns:\n using: "composite"\n\n steps:\n # We're using some of these env variables quite often, so let's set them once.\n #\n # It would be nice to have them set in a common runs.env[0] section, but it doesn't work[1]\n #\n # - [0] https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runsenv\n # - [1] https://github.com/neondatabase/neon/pull/3907#discussion_r1154703456\n #\n - name: Set variables\n shell: bash -euxo pipefail {0}\n env:\n PR_NUMBER: ${{ github.event.pull_request.number }}\n BUCKET: neon-github-public-dev\n run: |\n if [ -n "${PR_NUMBER}" ]; then\n BRANCH_OR_PR=pr-${PR_NUMBER}\n elif [ "${GITHUB_REF_NAME}" = "main" ] || [ "${GITHUB_REF_NAME}" = "release" ] || \\n [ "${GITHUB_REF_NAME}" = "release-proxy" ] || [ "${GITHUB_REF_NAME}" = "release-compute" ]; then\n # Shortcut for special branches\n BRANCH_OR_PR=${GITHUB_REF_NAME}\n else\n BRANCH_OR_PR=branch-$(printf "${GITHUB_REF_NAME}" | tr -c "[:alnum:]._-" "-")\n fi\n\n LOCK_FILE=reports/${BRANCH_OR_PR}/lock.txt\n\n WORKDIR=/tmp/${BRANCH_OR_PR}-$(date +%s)\n mkdir -p ${WORKDIR}\n\n echo 
"BRANCH_OR_PR=${BRANCH_OR_PR}" >> $GITHUB_ENV\n echo "LOCK_FILE=${LOCK_FILE}" >> $GITHUB_ENV\n echo "WORKDIR=${WORKDIR}" >> $GITHUB_ENV\n echo "BUCKET=${BUCKET}" >> $GITHUB_ENV\n\n # TODO: We can replace with a special docker image with Java and Allure pre-installed\n - uses: actions/setup-java@v4\n with:\n distribution: 'temurin'\n java-version: '17'\n\n - name: Install Allure\n shell: bash -euxo pipefail {0}\n working-directory: /tmp\n run: |\n if ! which allure; then\n ALLURE_ZIP=allure-${ALLURE_VERSION}.zip\n wget -q https://github.com/allure-framework/allure2/releases/download/${ALLURE_VERSION}/${ALLURE_ZIP}\n echo "${ALLURE_ZIP_SHA256} ${ALLURE_ZIP}" | sha256sum --check\n unzip -q ${ALLURE_ZIP}\n echo "$(pwd)/allure-${ALLURE_VERSION}/bin" >> $GITHUB_PATH\n rm -f ${ALLURE_ZIP}\n fi\n env:\n ALLURE_VERSION: 2.32.2\n ALLURE_ZIP_SHA256: 3f28885e2118f6317c92f667eaddcc6491400af1fb9773c1f3797a5fa5174953\n\n - uses: aws-actions/configure-aws-credentials@v4\n if: ${{ !cancelled() }}\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ inputs.aws-oicd-role-arn }}\n role-duration-seconds: 3600 # 1 hour should be more than enough to upload report\n\n # Potentially we could have several running builds for the same key (for example, for the main branch), so we use an improvised lock for this\n - name: Acquire lock\n shell: bash -euxo pipefail {0}\n run: |\n LOCK_TIMEOUT=300 # seconds\n\n LOCK_CONTENT="${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"\n echo ${LOCK_CONTENT} > ${WORKDIR}/lock.txt\n\n # Do it up to 5 times to avoid a race condition\n for _ in $(seq 1 5); do\n for i in $(seq 1 ${LOCK_TIMEOUT}); do\n LOCK_ACQUIRED=$(aws s3api head-object --bucket neon-github-public-dev --key ${LOCK_FILE} | jq --raw-output '.LastModified' || true)\n # `date --date="..."` is supported only by gnu date (i.e. 
it doesn't work on BSD/macOS)\n if [ -z "${LOCK_ACQUIRED}" ] || [ "$(( $(date +%s) - $(date --date="${LOCK_ACQUIRED}" +%s) ))" -gt "${LOCK_TIMEOUT}" ]; then\n break\n fi\n sleep 1\n done\n\n aws s3 mv --only-show-errors ${WORKDIR}/lock.txt "s3://${BUCKET}/${LOCK_FILE}"\n\n # Double-check that exactly THIS run has acquired the lock\n aws s3 cp --only-show-errors "s3://${BUCKET}/${LOCK_FILE}" ./lock.txt\n if [ "$(cat lock.txt)" = "${LOCK_CONTENT}" ]; then\n break\n fi\n done\n\n - name: Generate and publish final Allure report\n id: generate-report\n shell: bash -euxo pipefail {0}\n run: |\n REPORT_PREFIX=reports/${BRANCH_OR_PR}\n RAW_PREFIX=reports-raw/${BRANCH_OR_PR}/${GITHUB_RUN_ID}\n\n BASE_URL=https://${BUCKET}.s3.amazonaws.com/${REPORT_PREFIX}/${GITHUB_RUN_ID}\n BASE_S3_URL=s3://${BUCKET}/${REPORT_PREFIX}/${GITHUB_RUN_ID}\n REPORT_URL=${BASE_URL}/index.html\n REPORT_JSON_URL=${BASE_URL}/data/suites.json\n\n # Get previously uploaded data for this run\n ZSTD_NBTHREADS=0\n\n S3_FILEPATHS=$(aws s3api list-objects-v2 --bucket ${BUCKET} --prefix ${RAW_PREFIX}/ | jq --raw-output '.Contents[]?.Key')\n if [ -z "$S3_FILEPATHS" ]; then\n # There's no previously uploaded data for this $GITHUB_RUN_ID\n exit 0\n fi\n\n time aws s3 cp --recursive --only-show-errors "s3://${BUCKET}/${RAW_PREFIX}/" "${WORKDIR}/"\n for archive in $(find ${WORKDIR} -name "*.tar.zst"); do\n mkdir -p ${archive%.tar.zst}\n time tar -xf ${archive} -C ${archive%.tar.zst}\n rm -f ${archive}\n done\n\n # Get history trend\n time aws s3 cp --recursive --only-show-errors "s3://${BUCKET}/${REPORT_PREFIX}/latest/history" "${WORKDIR}/latest/history" || true\n\n # Generate report\n time allure generate --clean --output ${WORKDIR}/report ${WORKDIR}/*\n\n # Replace a logo link with a redirect to the latest version of the report\n sed -i 's|<a href="." 
class=|<a href="https://'${BUCKET}'.s3.amazonaws.com/'${REPORT_PREFIX}'/latest/index.html?nocache='"'+Date.now()+'"'" class=|g' ${WORKDIR}/report/app.js\n\n # Upload a history and the final report (in this particular order to not to have duplicated history in 2 places)\n time aws s3 mv --recursive --only-show-errors "${WORKDIR}/report/history" "s3://${BUCKET}/${REPORT_PREFIX}/latest/history"\n\n # Use aws s3 cp (instead of aws s3 sync) to keep files from previous runs to make old URLs work,\n # and to keep files on the host to upload them to the database\n time s5cmd --log error cp "${WORKDIR}/report/*" "s3://${BUCKET}/${REPORT_PREFIX}/${GITHUB_RUN_ID}/"\n\n # Generate redirect\n cat <<EOF > ${WORKDIR}/index.html\n <!DOCTYPE html>\n\n <meta charset="utf-8">\n <title>Redirecting to ${REPORT_URL}</title>\n <meta http-equiv="refresh" content="0; URL=${REPORT_URL}">\n EOF\n time aws s3 cp --only-show-errors ${WORKDIR}/index.html "s3://${BUCKET}/${REPORT_PREFIX}/latest/index.html"\n\n echo "base-url=${BASE_URL}" >> $GITHUB_OUTPUT\n echo "base-s3-url=${BASE_S3_URL}" >> $GITHUB_OUTPUT\n echo "report-url=${REPORT_URL}" >> $GITHUB_OUTPUT\n echo "report-json-url=${REPORT_JSON_URL}" >> $GITHUB_OUTPUT\n\n echo "[Allure Report](${REPORT_URL})" >> ${GITHUB_STEP_SUMMARY}\n\n - name: Release lock\n if: always()\n shell: bash -euxo pipefail {0}\n run: |\n aws s3 cp --only-show-errors "s3://${BUCKET}/${LOCK_FILE}" ./lock.txt || exit 0\n\n if [ "$(cat lock.txt)" = "${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}" ]; then\n aws s3 rm "s3://${BUCKET}/${LOCK_FILE}"\n fi\n\n - name: Cache poetry deps\n uses: actions/cache@v4\n with:\n path: ~/.cache/pypoetry/virtualenvs\n key: v2-${{ runner.os }}-${{ runner.arch }}-python-deps-bookworm-${{ hashFiles('poetry.lock') }}\n\n - name: Store Allure test stat in the DB (new)\n if: ${{ !cancelled() && inputs.store-test-results-into-db == 'true' }}\n shell: bash -euxo pipefail {0}\n env:\n COMMIT_SHA: ${{ github.event.pull_request.head.sha || github.sha 
}}\n BASE_S3_URL: ${{ steps.generate-report.outputs.base-s3-url }}\n run: |\n if [ ! -d "${WORKDIR}/report/data/test-cases" ]; then\n exit 0\n fi\n\n export DATABASE_URL=${REGRESS_TEST_RESULT_CONNSTR_NEW}\n\n ./scripts/pysync\n\n poetry run python3 scripts/ingest_regress_test_result-new-format.py \\n --reference ${GITHUB_REF} \\n --revision ${COMMIT_SHA} \\n --run-id ${GITHUB_RUN_ID} \\n --run-attempt ${GITHUB_RUN_ATTEMPT} \\n --test-cases-dir ${WORKDIR}/report/data/test-cases\n\n - name: Cleanup\n if: always()\n shell: bash -euxo pipefail {0}\n run: |\n if [ -d "${WORKDIR}" ]; then\n rm -rf ${WORKDIR}\n fi\n\n - uses: actions/github-script@v7\n if: always()\n env:\n REPORT_URL: ${{ steps.generate-report.outputs.report-url }}\n COMMIT_SHA: ${{ github.event.pull_request.head.sha || github.sha }}\n with:\n # Retry script for 5XX server errors: https://github.com/actions/github-script#retries\n retries: 5\n script: |\n const { REPORT_URL, COMMIT_SHA } = process.env\n\n await github.rest.repos.createCommitStatus({\n owner: context.repo.owner,\n repo: context.repo.repo,\n sha: `${COMMIT_SHA}`,\n state: 'success',\n target_url: `${REPORT_URL}`,\n context: 'Allure report',\n })\n | dataset_sample\yaml\neondatabase_neon\.github\actions\allure-report-generate\action.yml | action.yml | YAML | 9,966 | 0.95 | 0.116935 | 0.110048 | python-kit | 312 | 2025-06-27T03:39:55.031426 | GPL-3.0 | false | feab1b18b857e8d3a6eb95b92306d5df |
name: 'Store Allure results'
description: 'Upload test results to be used by actions/allure-report-generate'

inputs:
  report-dir:
    description: 'directory with test results generated by tests'
    required: true
  unique-key:
    description: 'string to distinguish different results in the same run'
    required: true
  aws-oicd-role-arn:
    description: 'OIDC role arn to interact with S3'
    required: true

runs:
  using: "composite"

  steps:
    - name: Set variables
      shell: bash -euxo pipefail {0}
      env:
        PR_NUMBER: ${{ github.event.pull_request.number }}
        REPORT_DIR: ${{ inputs.report-dir }}
      run: |
        if [ -n "${PR_NUMBER}" ]; then
          BRANCH_OR_PR=pr-${PR_NUMBER}
        elif [ "${GITHUB_REF_NAME}" = "main" ] || [ "${GITHUB_REF_NAME}" = "release" ] || \
             [ "${GITHUB_REF_NAME}" = "release-proxy" ] || [ "${GITHUB_REF_NAME}" = "release-compute" ]; then
          # Shortcut for special branches
          BRANCH_OR_PR=${GITHUB_REF_NAME}
        else
          BRANCH_OR_PR=branch-$(printf "${GITHUB_REF_NAME}" | tr -c "[:alnum:]._-" "-")
        fi

        echo "BRANCH_OR_PR=${BRANCH_OR_PR}" >> $GITHUB_ENV
        echo "REPORT_DIR=${REPORT_DIR}" >> $GITHUB_ENV

    - uses: aws-actions/configure-aws-credentials@v4
      if: ${{ !cancelled() }}
      with:
        aws-region: eu-central-1
        role-to-assume: ${{ inputs.aws-oicd-role-arn }}
        role-duration-seconds: 3600 # 1 hour should be more than enough to upload the report

    - name: Upload test results
      shell: bash -euxo pipefail {0}
      run: |
        REPORT_PREFIX=reports/${BRANCH_OR_PR}
        RAW_PREFIX=reports-raw/${BRANCH_OR_PR}/${GITHUB_RUN_ID}

        # Add metadata
        cat <<EOF > ${REPORT_DIR}/executor.json
        {
          "name": "GitHub Actions",
          "type": "github",
          "url": "https://${BUCKET}.s3.amazonaws.com/${REPORT_PREFIX}/latest/index.html",
          "buildOrder": ${GITHUB_RUN_ID},
          "buildName": "GitHub Actions Run #${GITHUB_RUN_NUMBER}/${GITHUB_RUN_ATTEMPT}",
          "buildUrl": "${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/attempts/${GITHUB_RUN_ATTEMPT}",
          "reportUrl": "https://${BUCKET}.s3.amazonaws.com/${REPORT_PREFIX}/${GITHUB_RUN_ID}/index.html",
          "reportName": "Allure Report"
        }
        EOF

        cat <<EOF > ${REPORT_DIR}/environment.properties
        COMMIT_SHA=${COMMIT_SHA}
        EOF

        ARCHIVE="${UNIQUE_KEY}-${GITHUB_RUN_ATTEMPT}-$(date +%s).tar.zst"
        export ZSTD_NBTHREADS=0 # 0 lets zstd use all available cores (exported so tar's zstd child sees it)

        time tar -C ${REPORT_DIR} -cf ${ARCHIVE} --zstd .
        time aws s3 mv --only-show-errors ${ARCHIVE} "s3://${BUCKET}/${RAW_PREFIX}/${ARCHIVE}"
      env:
        UNIQUE_KEY: ${{ inputs.unique-key }}
        COMMIT_SHA: ${{ github.event.pull_request.head.sha || github.sha }}
        BUCKET: neon-github-public-dev

    - name: Cleanup
      if: always()
      shell: bash -euxo pipefail {0}
      run: |
        rm -rf ${REPORT_DIR}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\allure-report-store\action.yml
name: "Download an artifact"\ndescription: "Custom download action"\ninputs:\n name:\n description: "Artifact name"\n required: true\n path:\n description: "A directory to put artifact into"\n default: "."\n required: false\n skip-if-does-not-exist:\n description: "Allow to skip if file doesn't exist, fail otherwise"\n default: false\n required: false\n prefix:\n description: "S3 prefix. Default is '${GITHUB_RUN_ID}/${GITHUB_RUN_ATTEMPT}'"\n required: false\n aws-oicd-role-arn:\n description: 'OIDC role arn to interract with S3'\n required: true\n\nruns:\n using: "composite"\n steps:\n - uses: aws-actions/configure-aws-credentials@v4\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ inputs.aws-oicd-role-arn }}\n role-duration-seconds: 3600\n\n - name: Download artifact\n id: download-artifact\n shell: bash -euxo pipefail {0}\n env:\n TARGET: ${{ inputs.path }}\n ARCHIVE: /tmp/downloads/${{ inputs.name }}.tar.zst\n SKIP_IF_DOES_NOT_EXIST: ${{ inputs.skip-if-does-not-exist }}\n PREFIX: artifacts/${{ inputs.prefix || format('{0}/{1}/{2}', github.event.pull_request.head.sha || github.sha, github.run_id, github.run_attempt) }}\n run: |\n BUCKET=neon-github-public-dev\n FILENAME=$(basename $ARCHIVE)\n\n S3_KEY=$(aws s3api list-objects-v2 --bucket ${BUCKET} --prefix ${PREFIX%$GITHUB_RUN_ATTEMPT} | jq -r '.Contents[]?.Key' | grep ${FILENAME} | sort --version-sort | tail -1 || true)\n if [ -z "${S3_KEY}" ]; then\n if [ "${SKIP_IF_DOES_NOT_EXIST}" = "true" ]; then\n echo 'SKIPPED=true' >> $GITHUB_OUTPUT\n exit 0\n else\n echo >&2 "Neither s3://${BUCKET}/${PREFIX}/${FILENAME} nor its version from previous attempts exist"\n exit 1\n fi\n fi\n\n echo 'SKIPPED=false' >> $GITHUB_OUTPUT\n\n mkdir -p $(dirname $ARCHIVE)\n time aws s3 cp --only-show-errors s3://${BUCKET}/${S3_KEY} ${ARCHIVE}\n\n - name: Extract artifact\n if: ${{ steps.download-artifact.outputs.SKIPPED == 'false' }}\n shell: bash -euxo pipefail {0}\n env:\n TARGET: ${{ inputs.path }}\n ARCHIVE: 
/tmp/downloads/${{ inputs.name }}.tar.zst\n run: |\n mkdir -p ${TARGET}\n time tar -xf ${ARCHIVE} -C ${TARGET}\n rm -f ${ARCHIVE}\n | dataset_sample\yaml\neondatabase_neon\.github\actions\download\action.yml | action.yml | YAML | 2,356 | 0.95 | 0.088235 | 0 | python-kit | 860 | 2025-02-19T19:57:56.266125 | BSD-3-Clause | false | 67f21bca56a145729585168ea7561422 |
name: 'Create Branch'
description: 'Create Branch using API'

inputs:
  api_key:
    description: 'Neon API key'
    required: true
  project_id:
    description: 'ID of the Project to create the Branch in'
    required: true
  api_host:
    description: 'Neon API host'
    default: console-stage.neon.build
outputs:
  dsn:
    description: 'Created Branch DSN (for main database)'
    value: ${{ steps.change-password.outputs.dsn }}
  branch_id:
    description: 'Created Branch ID'
    value: ${{ steps.create-branch.outputs.branch_id }}

runs:
  using: "composite"
  steps:
    - name: Create New Branch
      id: create-branch
      shell: bash -euxo pipefail {0}
      run: |
        for i in $(seq 1 10); do
          branch=$(curl \
            "https://${API_HOST}/api/v2/projects/${PROJECT_ID}/branches" \
            --header "Accept: application/json" \
            --header "Content-Type: application/json" \
            --header "Authorization: Bearer ${API_KEY}" \
            --data "{
              \"branch\": {
                \"name\": \"Created by actions/neon-branch-create; GITHUB_RUN_ID=${GITHUB_RUN_ID} at $(date +%s)\"
              },
              \"endpoints\": [
                {
                  \"type\": \"read_write\"
                }
              ]
            }")

          if [ -z "${branch}" ]; then
            sleep 1
            continue
          fi

          branch_id=$(echo $branch | jq --raw-output '.branch.id')
          if [ "${branch_id}" == "null" ]; then
            sleep 1
            continue
          fi

          break
        done

        if [ -z "${branch_id}" ] || [ "${branch_id}" == "null" ]; then
          echo >&2 "Failed to create branch after 10 attempts, the latest response was: ${branch}"
          exit 1
        fi

        branch_id=$(echo $branch | jq --raw-output '.branch.id')
        echo "branch_id=${branch_id}" >> $GITHUB_OUTPUT

        host=$(echo $branch | jq --raw-output '.endpoints[0].host')
        echo "host=${host}" >> $GITHUB_OUTPUT
      env:
        API_HOST: ${{ inputs.api_host }}
        API_KEY: ${{ inputs.api_key }}
        PROJECT_ID: ${{ inputs.project_id }}

    - name: Get Role name
      id: role-name
      shell: bash -euxo pipefail {0}
      run: |
        roles=$(curl \
          "https://${API_HOST}/api/v2/projects/${PROJECT_ID}/branches/${BRANCH_ID}/roles" \
          --fail \
          --header "Accept: application/json" \
          --header "Content-Type: application/json" \
          --header "Authorization: Bearer ${API_KEY}"
        )

        role_name=$(echo "$roles" | jq --raw-output '
          (.roles | map(select(.protected == false))) as $roles |
          if any($roles[]; .name == "neondb_owner")
          then "neondb_owner"
          else $roles[0].name
          end
        ')
        echo "role_name=${role_name}" >> $GITHUB_OUTPUT
      env:
        API_HOST: ${{ inputs.api_host }}
        API_KEY: ${{ inputs.api_key }}
        PROJECT_ID: ${{ inputs.project_id }}
        BRANCH_ID: ${{ steps.create-branch.outputs.branch_id }}

    - name: Change Password
      id: change-password
      # A shell without `set -x`, so as not to expose the password/DSN in logs
      shell: bash -euo pipefail {0}
      run: |
        for i in $(seq 1 10); do
          reset_password=$(curl \
            "https://${API_HOST}/api/v2/projects/${PROJECT_ID}/branches/${BRANCH_ID}/roles/${ROLE_NAME}/reset_password" \
            --request POST \
            --header "Accept: application/json" \
            --header "Content-Type: application/json" \
            --header "Authorization: Bearer ${API_KEY}"
          )

          if [ -z "${reset_password}" ]; then
            sleep $i
            continue
          fi

          password=$(echo $reset_password | jq --raw-output '.role.password')
          if [ "${password}" == "null" ]; then
            sleep $i # increasing backoff
            continue
          fi

          echo "::add-mask::${password}"
          break
        done

        if [ -z "${password}" ] || [ "${password}" == "null" ]; then
          echo >&2 "Failed to reset password after 10 attempts, the latest response was: ${reset_password}"
          exit 1
        fi

        dsn="postgres://${ROLE_NAME}:${password}@${HOST}/neondb"
        echo "::add-mask::${dsn}"
        echo "dsn=${dsn}" >> $GITHUB_OUTPUT
      env:
        API_HOST: ${{ inputs.api_host }}
        API_KEY: ${{ inputs.api_key }}
        PROJECT_ID: ${{ inputs.project_id }}
        BRANCH_ID: ${{ steps.create-branch.outputs.branch_id }}
        ROLE_NAME: ${{ steps.role-name.outputs.role_name }}
        HOST: ${{ steps.create-branch.outputs.host }}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\neon-branch-create\action.yml
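The branch create/delete steps share one retry pattern: call the API up to 10 times, treating an empty response body or a `jq` result of `null` as a failed attempt. A rough Python rendering of that per-attempt response check (the function `extract_field` is illustrative, not part of the action):

```python
import json

def extract_field(body: str, *path: str):
    """Pull a nested field out of an API response the way the shell does
    with jq (e.g. jq '.branch.id'); returns None for an empty body,
    invalid JSON, or a missing key — all of which trigger a retry."""
    if not body:
        return None
    try:
        value = json.loads(body)
    except json.JSONDecodeError:
        return None
    for key in path:
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value

print(extract_field('{"branch": {"id": "br-123"}}', "branch", "id"))  # br-123
print(extract_field("", "branch", "id"))                              # None
print(extract_field('{"error": "quota"}', "branch", "id"))            # None
```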
name: 'Delete Branch'
description: 'Delete Branch using API'

inputs:
  api_key:
    description: 'Neon API key'
    required: true
  project_id:
    description: 'ID of the Project containing the Branch'
    required: true
  branch_id:
    description: 'ID of the Branch to delete'
    required: true
  api_host:
    description: 'Neon API host'
    default: console-stage.neon.build

runs:
  using: "composite"
  steps:
    - name: Delete Branch
      # Do not try to delete a branch if .github/actions/neon-project-create
      # or .github/actions/neon-branch-create failed before
      if: ${{ inputs.project_id != '' && inputs.branch_id != '' }}
      shell: bash -euxo pipefail {0}
      run: |
        for i in $(seq 1 10); do
          deleted_branch=$(curl \
            "https://${API_HOST}/api/v2/projects/${PROJECT_ID}/branches/${BRANCH_ID}" \
            --request DELETE \
            --header "Accept: application/json" \
            --header "Content-Type: application/json" \
            --header "Authorization: Bearer ${API_KEY}"
          )

          if [ -z "${deleted_branch}" ]; then
            sleep 1
            continue
          fi

          branch_id=$(echo $deleted_branch | jq --raw-output '.branch.id')
          if [ "${branch_id}" == "null" ]; then
            sleep 1
            continue
          fi

          break
        done

        if [ -z "${branch_id}" ] || [ "${branch_id}" == "null" ]; then
          echo >&2 "Failed to delete branch after 10 attempts, the latest response was: ${deleted_branch}"
          exit 1
        fi
      env:
        API_HOST: ${{ inputs.api_host }}
        API_KEY: ${{ inputs.api_key }}
        PROJECT_ID: ${{ inputs.project_id }}
        BRANCH_ID: ${{ inputs.branch_id }}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\neon-branch-delete\action.yml
name: 'Create Neon Project'
description: 'Create Neon Project using API'

inputs:
  api_key:
    description: 'Neon API key'
    required: true
  region_id:
    description: 'Region ID; if not set, the project will be created in the default region'
    default: aws-us-east-2
  postgres_version:
    description: 'Postgres version; default is 16'
    default: '16'
  api_host:
    description: 'Neon API host'
    default: console-stage.neon.build
  compute_units:
    description: '[Min, Max] compute units'
    default: '[1, 1]'
  # The settings below are only needed if you want the project to be sharded from the beginning
  shard_split_project:
    description: 'By default, new projects are not shard-split initially, only when the shard-split threshold is reached; specify true to shard-split explicitly at creation'
    required: false
    default: 'false'
  disable_sharding:
    description: 'By default, new projects use the storage controller default policy of shard-splitting when the shard-split threshold is reached; specify true to explicitly disable sharding'
    required: false
    default: 'false'
  admin_api_key:
    description: 'Admin API Key needed for shard-splitting. Must be specified if shard_split_project is true'
    required: false
  shard_count:
    description: 'Number of shards to split the project into; only applies if shard_split_project is true'
    required: false
    default: '8'
  stripe_size:
    description: 'Stripe size, optional, in 8 KiB pages, e.g. set 2048 for 16 MiB stripes. The default of 32768 is 256 MiB; only applies if shard_split_project is true'
    required: false
    default: '32768'
  psql_path:
    description: 'Path to the psql binary - the caller is responsible for provisioning it'
    required: false
    default: '/tmp/neon/pg_install/v16/bin/psql'
  libpq_lib_path:
    description: 'Path to the directory containing the libpq library - the caller is responsible for provisioning it'
    required: false
    default: '/tmp/neon/pg_install/v16/lib'
  project_settings:
    description: 'A JSON object with project settings'
    required: false
    default: '{}'

outputs:
  dsn:
    description: 'Created Project DSN (for main database)'
    value: ${{ steps.create-neon-project.outputs.dsn }}
  project_id:
    description: 'Created Project ID'
    value: ${{ steps.create-neon-project.outputs.project_id }}

runs:
  using: "composite"
  steps:
    - name: Create Neon Project
      id: create-neon-project
      # A shell without `set -x`, so as not to expose the password/DSN in logs
      shell: bash -euo pipefail {0}
      run: |
        project=$(curl \
          "https://${API_HOST}/api/v2/projects" \
          --fail \
          --header "Accept: application/json" \
          --header "Content-Type: application/json" \
          --header "Authorization: Bearer ${API_KEY}" \
          --data "{
            \"project\": {
              \"name\": \"Created by actions/neon-project-create; GITHUB_RUN_ID=${GITHUB_RUN_ID}\",
              \"pg_version\": ${POSTGRES_VERSION},
              \"region_id\": \"${REGION_ID}\",
              \"provisioner\": \"k8s-neonvm\",
              \"autoscaling_limit_min_cu\": ${MIN_CU},
              \"autoscaling_limit_max_cu\": ${MAX_CU},
              \"settings\": ${PROJECT_SETTINGS}
            }
          }")

        # Mask password
        echo "::add-mask::$(echo $project | jq --raw-output '.roles[] | select(.name != "web_access") | .password')"

        dsn=$(echo $project | jq --raw-output '.connection_uris[0].connection_uri')
        echo "::add-mask::${dsn}"
        echo "dsn=${dsn}" >> $GITHUB_OUTPUT

        project_id=$(echo $project | jq --raw-output '.project.id')
        echo "project_id=${project_id}" >> $GITHUB_OUTPUT

        echo "Project ${project_id} has been created"

        if [ "${SHARD_SPLIT_PROJECT}" = "true" ]; then
          # determine tenant ID
          TENANT_ID=$(${PSQL} ${dsn} -t -A -c "SHOW neon.tenant_id")

          echo "Splitting project ${project_id} with tenant_id ${TENANT_ID} into $((SHARD_COUNT)) shards with stripe size $((STRIPE_SIZE))"

          echo "Sending PUT request to https://${API_HOST}/regions/${REGION_ID}/api/v1/admin/storage/proxy/control/v1/tenant/${TENANT_ID}/shard_split"
          echo "with body {\"new_shard_count\": $((SHARD_COUNT)), \"new_stripe_size\": $((STRIPE_SIZE))}"

          # we need an ADMIN API KEY to invoke the storage controller API for shard-splitting (bash -u above checks that the variable is set)
          curl -X PUT \
            "https://${API_HOST}/regions/${REGION_ID}/api/v1/admin/storage/proxy/control/v1/tenant/${TENANT_ID}/shard_split" \
            -H "Accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer ${ADMIN_API_KEY}" \
            -d "{\"new_shard_count\": $SHARD_COUNT, \"new_stripe_size\": $STRIPE_SIZE}"
        fi
        if [ "${DISABLE_SHARDING}" = "true" ]; then
          # determine tenant ID
          TENANT_ID=$(${PSQL} ${dsn} -t -A -c "SHOW neon.tenant_id")

          echo "Explicitly disabling shard-splitting for project ${project_id} with tenant_id ${TENANT_ID}"

          echo "Sending PUT request to https://${API_HOST}/regions/${REGION_ID}/api/v1/admin/storage/proxy/control/v1/tenant/${TENANT_ID}/policy"
          echo "with body {\"scheduling\": \"Essential\"}"

          # we need an ADMIN API KEY to invoke the storage controller API (bash -u above checks that the variable is set)
          curl -X PUT \
            "https://${API_HOST}/regions/${REGION_ID}/api/v1/admin/storage/proxy/control/v1/tenant/${TENANT_ID}/policy" \
            -H "Accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer ${ADMIN_API_KEY}" \
            -d "{\"scheduling\": \"Essential\"}"
        fi
      env:
        API_HOST: ${{ inputs.api_host }}
        API_KEY: ${{ inputs.api_key }}
        REGION_ID: ${{ inputs.region_id }}
        POSTGRES_VERSION: ${{ inputs.postgres_version }}
        MIN_CU: ${{ fromJSON(inputs.compute_units)[0] }}
        MAX_CU: ${{ fromJSON(inputs.compute_units)[1] }}
        SHARD_SPLIT_PROJECT: ${{ inputs.shard_split_project }}
        DISABLE_SHARDING: ${{ inputs.disable_sharding }}
        ADMIN_API_KEY: ${{ inputs.admin_api_key }}
        SHARD_COUNT: ${{ inputs.shard_count }}
        STRIPE_SIZE: ${{ inputs.stripe_size }}
        PSQL: ${{ inputs.psql_path }}
        LD_LIBRARY_PATH: ${{ inputs.libpq_lib_path }}
        PROJECT_SETTINGS: ${{ inputs.project_settings }}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\neon-project-create\action.yml
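The `stripe_size` input is expressed in 8 KiB pages, so the numbers in its description follow from a simple conversion. A one-liner to check the arithmetic (the function name `stripe_size_mib` is illustrative):

```python
PAGE_SIZE_KIB = 8  # stripe_size counts 8 KiB pages

def stripe_size_mib(pages: int) -> float:
    """Convert a stripe size in 8 KiB pages to MiB."""
    return pages * PAGE_SIZE_KIB / 1024

print(stripe_size_mib(2048))   # 16.0  — the example in the input description
print(stripe_size_mib(32768))  # 256.0 — the action's default of '32768'
```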
name: 'Delete Neon Project'
description: 'Delete Neon Project using API'

inputs:
  api_key:
    description: 'Neon API key'
    required: true
  project_id:
    description: 'ID of the Project to delete'
    required: true
  api_host:
    description: 'Neon API host'
    default: console-stage.neon.build

runs:
  using: "composite"
  steps:
    - name: Delete Neon Project
      # Do not try to delete a project if .github/actions/neon-project-create failed before
      if: ${{ inputs.project_id != '' }}
      shell: bash -euxo pipefail {0}
      run: |
        curl \
          "https://${API_HOST}/api/v2/projects/${PROJECT_ID}" \
          --fail \
          --request DELETE \
          --header "Accept: application/json" \
          --header "Content-Type: application/json" \
          --header "Authorization: Bearer ${API_KEY}"

        echo "Project ${PROJECT_ID} has been deleted"
      env:
        API_HOST: ${{ inputs.api_host }}
        API_KEY: ${{ inputs.api_key }}
        PROJECT_ID: ${{ inputs.project_id }}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\neon-project-delete\action.yml
name: 'Run python test'
description: 'Runs a Neon python test set, performing all the required preparations before'

inputs:
  build_type:
    description: 'Type of Rust (neon) and C (postgres) builds. Must be "release" or "debug", or "remote" for the remote cluster'
    required: true
  test_selection:
    description: 'A python test suite to run'
    required: true
  extra_params:
    description: 'Arbitrary parameters to pytest. For example "-s" to prevent capturing stdout/stderr'
    required: false
    default: ''
  needs_postgres_source:
    description: 'Set to true if the test suite requires the postgres source checked out'
    required: false
    default: 'false'
  run_in_parallel:
    description: 'Whether to run tests in parallel'
    required: false
    default: 'true'
  save_perf_report:
    description: 'Whether to upload the performance report; if true, the PERF_TEST_RESULT_CONNSTR env variable should be set'
    required: false
    default: 'false'
  run_with_real_s3:
    description: 'Whether to pass real s3 credentials to the test suite'
    required: false
    default: 'false'
  real_s3_bucket:
    description: 'Bucket name for real s3 tests'
    required: false
    default: ''
  real_s3_region:
    description: 'Region name for real s3 tests'
    required: false
    default: ''
  rerun_failed:
    description: 'Whether to rerun failed tests'
    required: false
    default: 'false'
  pg_version:
    description: 'Postgres version to use for tests'
    required: false
    default: 'v16'
  sanitizers:
    description: 'enabled or disabled'
    required: false
    default: 'disabled'
    type: string
  benchmark_durations:
    description: 'benchmark durations JSON'
    required: false
    default: '{}'
  aws-oicd-role-arn:
    description: 'OIDC role arn to interact with S3'
    required: true

runs:
  using: "composite"
  steps:
    - name: Get Neon artifact
      if: inputs.build_type != 'remote'
      uses: ./.github/actions/download
      with:
        name: neon-${{ runner.os }}-${{ runner.arch }}-${{ inputs.build_type }}${{ inputs.sanitizers == 'enabled' && '-sanitized' || '' }}-artifact
        path: /tmp/neon
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

    - name: Download Neon binaries for the previous release
      if: inputs.build_type != 'remote'
      uses: ./.github/actions/download
      with:
        name: neon-${{ runner.os }}-${{ runner.arch }}-${{ inputs.build_type }}-artifact
        path: /tmp/neon-previous
        prefix: latest
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

    - name: Download compatibility snapshot
      if: inputs.build_type != 'remote'
      uses: ./.github/actions/download
      with:
        name: compatibility-snapshot-${{ runner.arch }}-${{ inputs.build_type }}-pg${{ inputs.pg_version }}
        path: /tmp/compatibility_snapshot_pg${{ inputs.pg_version }}
        prefix: latest
        # The lack of a compatibility snapshot (for example, for a new Postgres version)
        # shouldn't fail the whole job. Only the relevant test should fail.
        skip-if-does-not-exist: true
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

    - name: Checkout
      if: inputs.needs_postgres_source == 'true'
      uses: actions/checkout@v4
      with:
        submodules: true

    - name: Cache poetry deps
      uses: actions/cache@v4
      with:
        path: ~/.cache/pypoetry/virtualenvs
        key: v2-${{ runner.os }}-${{ runner.arch }}-python-deps-bookworm-${{ hashFiles('poetry.lock') }}

    - name: Install Python deps
      shell: bash -euxo pipefail {0}
      run: ./scripts/pysync

    - name: Run pytest
      env:
        NEON_BIN: /tmp/neon/bin
        COMPATIBILITY_NEON_BIN: /tmp/neon-previous/bin
        COMPATIBILITY_POSTGRES_DISTRIB_DIR: /tmp/neon-previous/pg_install
        TEST_OUTPUT: /tmp/test_output
        BUILD_TYPE: ${{ inputs.build_type }}
        COMPATIBILITY_SNAPSHOT_DIR: /tmp/compatibility_snapshot_pg${{ inputs.pg_version }}
        RERUN_FAILED: ${{ inputs.rerun_failed }}
        PG_VERSION: ${{ inputs.pg_version }}
        SANITIZERS: ${{ inputs.sanitizers }}
      shell: bash -euxo pipefail {0}
      run: |
        # PLATFORM will be embedded in the perf test report,
        # and it is needed to distinguish different environments
        export PLATFORM=${PLATFORM:-github-actions-selfhosted}
        export POSTGRES_DISTRIB_DIR=${POSTGRES_DISTRIB_DIR:-/tmp/neon/pg_install}
        export DEFAULT_PG_VERSION=${PG_VERSION#v}
        export LD_LIBRARY_PATH=${POSTGRES_DISTRIB_DIR}/v${DEFAULT_PG_VERSION}/lib
        export BENCHMARK_CONNSTR=${BENCHMARK_CONNSTR:-}
        export ASAN_OPTIONS=detect_leaks=0:detect_stack_use_after_return=0:abort_on_error=1:strict_string_checks=1:check_initialization_order=1:strict_init_order=1
        export UBSAN_OPTIONS=abort_on_error=1:print_stacktrace=1

        if [ "${BUILD_TYPE}" = "remote" ]; then
          export REMOTE_ENV=1
        fi

        PERF_REPORT_DIR="$(realpath test_runner/perf-report-local)"
        echo "PERF_REPORT_DIR=${PERF_REPORT_DIR}" >> ${GITHUB_ENV}
        rm -rf $PERF_REPORT_DIR

        TEST_SELECTION="test_runner/${{ inputs.test_selection }}"
        EXTRA_PARAMS="${{ inputs.extra_params }}"
        if [ -z "$TEST_SELECTION" ]; then
          echo "test_selection must be set"
          exit 1
        fi
        if [[ "${{ inputs.run_in_parallel }}" == "true" ]]; then
          # -n sets the number of parallel processes that pytest-xdist will run
          EXTRA_PARAMS="-n12 $EXTRA_PARAMS"

          # --dist=loadgroup pins tests marked with @pytest.mark.xdist_group
          # to the same worker to make @pytest.mark.order work with xdist
          EXTRA_PARAMS="--dist=loadgroup $EXTRA_PARAMS"
        fi

        if [[ "${{ inputs.run_with_real_s3 }}" == "true" ]]; then
          echo "REAL S3 ENABLED"
          export ENABLE_REAL_S3_REMOTE_STORAGE=nonempty
          export REMOTE_STORAGE_S3_BUCKET=${{ inputs.real_s3_bucket }}
          export REMOTE_STORAGE_S3_REGION=${{ inputs.real_s3_region }}
        fi

        if [[ "${{ inputs.save_perf_report }}" == "true" ]]; then
          mkdir -p "$PERF_REPORT_DIR"
          EXTRA_PARAMS="--out-dir $PERF_REPORT_DIR $EXTRA_PARAMS"
        fi

        if [ "${RERUN_FAILED}" == "true" ]; then
          EXTRA_PARAMS="--reruns 2 $EXTRA_PARAMS"
        fi

        # We use the pytest-split plugin to run benchmarks in parallel on different CI runners
        if [ "${TEST_SELECTION}" = "test_runner/performance" ] && [ "${{ inputs.build_type }}" != "remote" ]; then
          mkdir -p $TEST_OUTPUT
          echo '${{ inputs.benchmark_durations || '{}' }}' > $TEST_OUTPUT/benchmark_durations.json

          EXTRA_PARAMS="--durations-path $TEST_OUTPUT/benchmark_durations.json $EXTRA_PARAMS"
        fi

        if [[ $BUILD_TYPE == "debug" && $RUNNER_ARCH == 'X64' ]]; then
          cov_prefix=(scripts/coverage "--profraw-prefix=$GITHUB_JOB" --dir=/tmp/coverage run)
        else
          cov_prefix=()
        fi

        # Wake up the cluster if we use a remote neon instance
        if [ "${{ inputs.build_type }}" = "remote" ] && [ -n "${BENCHMARK_CONNSTR}" ]; then
          QUERIES=("SELECT version()")
          if [[ "${PLATFORM}" = "neon"* ]]; then
            QUERIES+=("SHOW neon.tenant_id")
            QUERIES+=("SHOW neon.timeline_id")
          fi

          for q in "${QUERIES[@]}"; do
            ${POSTGRES_DISTRIB_DIR}/v${DEFAULT_PG_VERSION}/bin/psql ${BENCHMARK_CONNSTR} -c "${q}"
          done
        fi

        # Run the tests.
        #
        # --alluredir saves test results in Allure format (in the specified directory)
        # --verbose prints the name of each test (helpful when there are
        #           multiple tests in one file)
        # -rA prints a summary at the end
        # -s is not used (it would stop pytest from capturing output), because tests
        #    run in parallel and their logs would get mixed together
        #
        mkdir -p $TEST_OUTPUT/allure/results
        "${cov_prefix[@]}" ./scripts/pytest \
          --alluredir=$TEST_OUTPUT/allure/results \
          --tb=short \
          --verbose \
          -rA $TEST_SELECTION $EXTRA_PARAMS

    - name: Upload performance report
      if: ${{ !cancelled() && inputs.save_perf_report == 'true' }}
      shell: bash -euxo pipefail {0}
      run: |
        export REPORT_FROM="${PERF_REPORT_DIR}"
        scripts/generate_and_push_perf_report.sh

    - name: Upload compatibility snapshot
      # Note that we use `github.base_ref`, which is the target branch of the PR
      if: github.event_name == 'pull_request' && github.base_ref == 'release'
      uses: ./.github/actions/upload
      with:
        name: compatibility-snapshot-${{ runner.arch }}-${{ inputs.build_type }}-pg${{ inputs.pg_version }}
        # The directory is created by test_compatibility.py::test_create_snapshot; keep the path in sync with the test
        path: /tmp/test_output/compatibility_snapshot_pg${{ inputs.pg_version }}/
        # The lack of a compatibility snapshot shouldn't fail the job
        # (for example, if we didn't run the test for a non build-and-test workflow)
        skip-if-does-not-exist: true
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

    - uses: aws-actions/configure-aws-credentials@v4
      if: ${{ !cancelled() }}
      with:
        aws-region: eu-central-1
        role-to-assume: ${{ inputs.aws-oicd-role-arn }}
        role-duration-seconds: 3600 # 1 hour should be more than enough to upload the report

    - name: Upload test results
      if: ${{ !cancelled() }}
      uses: ./.github/actions/allure-report-store
      with:
        report-dir: /tmp/test_output/allure/results
        unique-key: ${{ inputs.build_type }}-${{ inputs.pg_version }}-${{ runner.arch }}
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\run-python-test-set\action.yml
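The "Run pytest" step builds `EXTRA_PARAMS` by repeatedly prepending flags to the user-supplied string. An illustrative Python rendering of that accumulation (the function `build_extra_params` is not part of the action; it only mirrors the parallel/rerun branches):

```python
def build_extra_params(extra: str = "", run_in_parallel: bool = True,
                       rerun_failed: bool = False) -> list[str]:
    """Mimic the shell's EXTRA_PARAMS assembly: each enabled option
    prepends its flags to the caller-supplied parameters."""
    params = extra.split()
    if run_in_parallel:
        # -n12: pytest-xdist runs 12 worker processes
        params = ["-n12"] + params
        # --dist=loadgroup: tests marked @pytest.mark.xdist_group share a
        # worker, which keeps @pytest.mark.order working under xdist
        params = ["--dist=loadgroup"] + params
    if rerun_failed:
        params = ["--reruns", "2"] + params  # pytest-rerunfailures
    return params

print(build_extra_params("-s", run_in_parallel=True, rerun_failed=True))
# ['--reruns', '2', '--dist=loadgroup', '-n12', '-s']
```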
name: 'Merge and upload coverage data'
description: 'Compresses and uploads the coverage data as an artifact'

# Declared so the ${{ inputs.aws-oicd-role-arn }} references below resolve
inputs:
  aws-oicd-role-arn:
    description: 'OIDC role arn to interact with S3'
    required: true

runs:
  using: "composite"
  steps:
    - name: Merge coverage data
      shell: bash -euxo pipefail {0}
      run: scripts/coverage "--profraw-prefix=$GITHUB_JOB" --dir=/tmp/coverage merge

    - name: Download previous coverage data into the same directory
      uses: ./.github/actions/download
      with:
        name: coverage-data-artifact
        path: /tmp/coverage
        skip-if-does-not-exist: true # skip if there's no previous coverage to download
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

    - name: Upload coverage data
      uses: ./.github/actions/upload
      with:
        name: coverage-data-artifact
        path: /tmp/coverage
        aws-oicd-role-arn: ${{ inputs.aws-oicd-role-arn }}

# path: dataset_sample\yaml\neondatabase_neon\.github\actions\save-coverage-data\action.yml
name: "Upload an artifact"\ndescription: "Custom upload action"\ninputs:\n name:\n description: "Artifact name"\n required: true\n path:\n description: "A directory or file to upload"\n required: true\n skip-if-does-not-exist:\n description: "Skip if the path doesn't exist; fail otherwise"\n default: false\n required: false\n prefix:\n description: "S3 prefix. Default is '${GITHUB_SHA}/${GITHUB_RUN_ID}/${GITHUB_RUN_ATTEMPT}'"\n required: false\n aws-oicd-role-arn:\n description: "The OIDC role ARN for AWS auth"\n required: false\n default: ""\n\nruns:\n using: "composite"\n steps:\n - name: Prepare artifact\n id: prepare-artifact\n shell: bash -euxo pipefail {0}\n env:\n SOURCE: ${{ inputs.path }}\n ARCHIVE: /tmp/uploads/${{ inputs.name }}.tar.zst\n SKIP_IF_DOES_NOT_EXIST: ${{ inputs.skip-if-does-not-exist }}\n run: |\n mkdir -p $(dirname $ARCHIVE)\n\n if [ -f ${ARCHIVE} ]; then\n echo >&2 "File ${ARCHIVE} already exists. Something went wrong earlier"\n exit 1\n fi\n\n ZSTD_NBTHREADS=0\n if [ -d ${SOURCE} ]; then\n time tar -C ${SOURCE} -cf ${ARCHIVE} --zstd .\n elif [ -f ${SOURCE} ]; then\n time tar -cf ${ARCHIVE} --zstd ${SOURCE}\n elif ! 
ls ${SOURCE} > /dev/null 2>&1; then\n if [ "${SKIP_IF_DOES_NOT_EXIST}" = "true" ]; then\n echo 'SKIPPED=true' >> $GITHUB_OUTPUT\n exit 0\n else\n echo >&2 "${SOURCE} does not exist"\n exit 2\n fi\n else\n echo >&2 "${SOURCE} is neither a directory nor a file, do not know how to handle it"\n exit 3\n fi\n\n echo 'SKIPPED=false' >> $GITHUB_OUTPUT\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@v4\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ inputs.aws-oicd-role-arn }}\n role-duration-seconds: 3600\n\n - name: Upload artifact\n if: ${{ steps.prepare-artifact.outputs.SKIPPED == 'false' }}\n shell: bash -euxo pipefail {0}\n env:\n SOURCE: ${{ inputs.path }}\n ARCHIVE: /tmp/uploads/${{ inputs.name }}.tar.zst\n PREFIX: artifacts/${{ inputs.prefix || format('{0}/{1}/{2}', github.event.pull_request.head.sha || github.sha, github.run_id , github.run_attempt) }}\n run: |\n BUCKET=neon-github-public-dev\n FILENAME=$(basename $ARCHIVE)\n\n FILESIZE=$(du -sh ${ARCHIVE} | cut -f1)\n\n time aws s3 mv --only-show-errors ${ARCHIVE} s3://${BUCKET}/${PREFIX}/${FILENAME}\n\n # Ref https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-job-summary\n echo "[${FILENAME}](https://${BUCKET}.s3.amazonaws.com/${PREFIX}/${FILENAME}) ${FILESIZE}" >> ${GITHUB_STEP_SUMMARY}\n | dataset_sample\yaml\neondatabase_neon\.github\actions\upload\action.yml | action.yml | YAML | 2,836 | 0.95 | 0.108434 | 0.013514 | vue-tools | 324 | 2025-01-13T20:27:35.922627 | Apache-2.0 | false | 5c898b5eee3b6657ddeb17411f1d4602 |
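The `skip-if-does-not-exist` handling is the subtle part of the upload action above: the prepare step classifies `SOURCE` and either archives it, soft-skips by writing `SKIPPED=true` to `$GITHUB_OUTPUT` (which the later upload step checks), or fails. A minimal local sketch of that branching, with `GITHUB_OUTPUT` stubbed to a temp file and the tar/zstd archiving reduced to a comment (the paths are illustrative):

```shell
#!/usr/bin/env bash
# Local sketch of the upload action's skip-if-does-not-exist branching.
# GITHUB_OUTPUT is stubbed with a temp file; in the real action the
# "exists" branch runs `tar ... --zstd` before uploading to S3.
set -euo pipefail

GITHUB_OUTPUT="$(mktemp)"

classify_source() {
  local source="$1" skip_if_missing="$2"
  if [ -d "$source" ] || [ -f "$source" ]; then
    # real action: tar -C "$source" -cf "$ARCHIVE" --zstd .
    echo 'SKIPPED=false' >> "$GITHUB_OUTPUT"
  elif [ "$skip_if_missing" = "true" ]; then
    # soft-skip: the upload step reads this output and does nothing
    echo 'SKIPPED=true' >> "$GITHUB_OUTPUT"
  else
    echo >&2 "$source does not exist"
    return 2
  fi
}

classify_source "$(mktemp -d)" false   # exists -> SKIPPED=false
classify_source /no/such/path true     # missing but allowed -> SKIPPED=true
```

Guarding the upload behind an explicit `SKIPPED` output (rather than letting a missing path fail the step) is what lets optional artifacts, like the compatibility snapshot earlier, be absent without breaking the job.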
blank_issues_enabled: true\ncontact_links:\n - name: Feature request\n url: https://console.neon.tech/app/projects?modal=feedback\n about: For feature requests in the Neon product, please submit via the feedback form on `https://console.neon.tech`\n | dataset_sample\yaml\neondatabase_neon\.github\ISSUE_TEMPLATE\config.yml | config.yml | YAML | 252 | 0.8 | 0 | 0 | node-utils | 83 | 2023-07-19T13:02:27.277417 | BSD-3-Clause | false | 12435e52a748f07c072b8500d1288de3 |
name: Lint GitHub Workflows\n\non:\n push:\n branches:\n - main\n - release\n paths:\n - '.github/workflows/*.ya?ml'\n pull_request:\n paths:\n - '.github/workflows/*.ya?ml'\n\nconcurrency:\n group: ${{ github.workflow }}-${{ github.ref }}\n cancel-in-progress: ${{ github.event_name == 'pull_request' }}\n\njobs:\n check-permissions:\n if: ${{ !contains(github.event.pull_request.labels.*.name, 'run-no-ci') }}\n uses: ./.github/workflows/check-permissions.yml\n with:\n github-event-name: ${{ github.event_name}}\n\n actionlint:\n needs: [ check-permissions ]\n runs-on: ubuntu-22.04\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n - uses: reviewdog/action-actionlint@a5524e1c19e62881d79c1f1b9b6f09f16356e281 # v1.65.2\n env:\n # SC2046 - Quote this to prevent word splitting. - https://www.shellcheck.net/wiki/SC2046\n # SC2086 - Double quote to prevent globbing and word splitting. - https://www.shellcheck.net/wiki/SC2086\n SHELLCHECK_OPTS: --exclude=SC2046,SC2086\n with:\n fail_level: error\n filter_mode: nofilter\n level: error\n\n - name: Disallow 'ubuntu-latest' runners\n run: |\n PAT='^\s*runs-on:.*-latest'\n if grep -ERq $PAT .github/workflows; then\n grep -ERl $PAT .github/workflows |\\n while read -r f\n do\n l=$(grep -nE $PAT $f | awk -F: '{print $1}' | head -1)\n echo "::error file=$f,line=$l::Please use 'ubuntu-22.04' instead of 'ubuntu-latest'"\n done\n exit 1\n fi\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\actionlint.yml | actionlint.yml | YAML | 1,835 | 0.8 | 0.053571 | 0.04 | vue-tools | 818 | 2025-03-27T13:42:21.052178 | Apache-2.0 | false | c883842ba18b871d271f624c804cc7ef |
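The final step of the actionlint job is plain grep: find any `runs-on: …-latest` line in the workflow files, then emit a `::error` workflow command pointing at the offending file and line. A standalone sketch of the same scan, scanning a temp directory instead of `.github/workflows` and using POSIX `[[:space:]]` in place of GNU grep's `\s` (the workflow file name here is made up, and the script reports rather than exits non-zero so it can run standalone):

```shell
#!/usr/bin/env bash
# Reproduce the "disallow '-latest' runners" check locally.
set -euo pipefail

WORKFLOWS="$(mktemp -d)"
cat > "$WORKFLOWS/ci.yml" <<'EOF'
jobs:
  build:
    runs-on: ubuntu-latest
EOF

PAT='^[[:space:]]*runs-on:.*-latest'
STATUS=0
while read -r f; do
  # first matching line number in this file
  l=$(grep -nE "$PAT" "$f" | head -1 | cut -d: -f1)
  # ::error is a GitHub Actions workflow command; locally it just prints
  echo "::error file=$f,line=$l::Please use a pinned runner instead of '-latest'"
  STATUS=1
done < <(grep -Erl "$PAT" "$WORKFLOWS" || true)
echo "status=$STATUS"
```

In CI the real step then does `exit 1` so the annotation turns into a failing check.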
name: Handle `approved-for-ci-run` label\n# This workflow helps run the CI pipeline for PRs made by external contributors (from forks).\n\non:\n pull_request_target:\n branches:\n - main\n types:\n # Default types that trigger a workflow ([1]):\n # - [1] https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request\n - opened\n - synchronize\n - reopened\n # Types that we want to handle in addition, to keep labels tidy:\n - closed\n # Actual magic happens here:\n - labeled\n\nconcurrency:\n group: ${{ github.workflow }}-${{ github.event.pull_request.number }}\n cancel-in-progress: false\n\nenv:\n GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n PR_NUMBER: ${{ github.event.pull_request.number }}\n BRANCH: "ci-run/pr-${{ github.event.pull_request.number }}"\n\n# No permission for GITHUB_TOKEN by default; the **minimal required** set of permissions should be granted in each job.\npermissions: {}\n\ndefaults:\n run:\n shell: bash -euo pipefail {0}\n\njobs:\n remove-label:\n # Remove the `approved-for-ci-run` label if the workflow is triggered by changes in a PR.\n # The PR should be reviewed and labelled manually again.\n\n permissions:\n pull-requests: write # For `gh pr edit`\n\n if: |\n contains(fromJSON('["opened", "synchronize", "reopened", "closed"]'), github.event.action) &&\n contains(github.event.pull_request.labels.*.name, 'approved-for-ci-run')\n\n runs-on: ubuntu-22.04\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - run: gh pr --repo "${GITHUB_REPOSITORY}" edit "${PR_NUMBER}" --remove-label "approved-for-ci-run"\n\n create-or-update-pr-for-ci-run:\n # Create a local PR for an `approved-for-ci-run` labelled PR to run the CI pipeline in it.\n\n permissions:\n pull-requests: write # for `gh pr edit`\n # For `git push` and `gh pr create` we use CI_ACCESS_TOKEN\n\n if: |\n github.event.action == 'labeled' &&\n 
contains(github.event.pull_request.labels.*.name, 'approved-for-ci-run')\n\n runs-on: ubuntu-22.04\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - run: gh pr --repo "${GITHUB_REPOSITORY}" edit "${PR_NUMBER}" --remove-label "approved-for-ci-run"\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n ref: ${{ github.event.pull_request.head.sha }}\n token: ${{ secrets.CI_ACCESS_TOKEN }}\n\n - name: Look for existing PR\n id: get-pr\n env:\n GH_TOKEN: ${{ secrets.CI_ACCESS_TOKEN }}\n run: |\n ALREADY_CREATED="$(gh pr --repo ${GITHUB_REPOSITORY} list --head ${BRANCH} --base main --json number --jq '.[].number')"\n echo "ALREADY_CREATED=${ALREADY_CREATED}" >> ${GITHUB_OUTPUT}\n\n - name: Get changed labels\n id: get-labels\n if: steps.get-pr.outputs.ALREADY_CREATED != ''\n env:\n ALREADY_CREATED: ${{ steps.get-pr.outputs.ALREADY_CREATED }}\n GH_TOKEN: ${{ secrets.CI_ACCESS_TOKEN }}\n run: |\n LABELS_TO_REMOVE=$(comm -23 <(gh pr --repo ${GITHUB_REPOSITORY} view ${ALREADY_CREATED} --json labels --jq '.labels.[].name'| ( grep -E '^run' || true ) | sort) \\n <(gh pr --repo ${GITHUB_REPOSITORY} view ${PR_NUMBER} --json labels --jq '.labels.[].name' | ( grep -E '^run' || true ) | sort ) |\\n ( grep -v run-e2e-tests-in-draft || true ) | paste -sd , -)\n LABELS_TO_ADD=$(comm -13 <(gh pr --repo ${GITHUB_REPOSITORY} view ${ALREADY_CREATED} --json labels --jq '.labels.[].name'| ( grep -E '^run' || true ) |sort) \\n <(gh pr --repo ${GITHUB_REPOSITORY} view ${PR_NUMBER} --json labels --jq '.labels.[].name' | ( grep -E '^run' || true ) | sort ) |\\n paste -sd , -)\n echo "LABELS_TO_ADD=${LABELS_TO_ADD}" >> ${GITHUB_OUTPUT}\n echo "LABELS_TO_REMOVE=${LABELS_TO_REMOVE}" >> ${GITHUB_OUTPUT}\n\n - run: git checkout -b "${BRANCH}"\n\n - run: git push --force origin "${BRANCH}"\n if: 
steps.get-pr.outputs.ALREADY_CREATED == ''\n\n - name: Create a Pull Request for CI run (if required)\n if: steps.get-pr.outputs.ALREADY_CREATED == ''\n env:\n GH_TOKEN: ${{ secrets.CI_ACCESS_TOKEN }}\n run: |\n cat << EOF > body.md\n This Pull Request is created automatically to run the CI pipeline for #${PR_NUMBER}\n\n Please do not alter or merge/close it.\n\n Feel free to review/comment/discuss the original PR #${PR_NUMBER}.\n EOF\n\n LABELS=$( (gh pr --repo "${GITHUB_REPOSITORY}" view ${PR_NUMBER} --json labels --jq '.labels.[].name'; echo run-e2e-tests-in-draft )| \\n grep -E '^run' | paste -sd , -)\n gh pr --repo "${GITHUB_REPOSITORY}" create --title "CI run for PR #${PR_NUMBER}" \\n --body-file "body.md" \\n --head "${BRANCH}" \\n --base "main" \\n --label ${LABELS} \\n --draft\n - name: Modify the existing pull request (if required)\n if: steps.get-pr.outputs.ALREADY_CREATED != ''\n env:\n GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n LABELS_TO_ADD: ${{ steps.get-labels.outputs.LABELS_TO_ADD }}\n LABELS_TO_REMOVE: ${{ steps.get-labels.outputs.LABELS_TO_REMOVE }}\n ALREADY_CREATED: ${{ steps.get-pr.outputs.ALREADY_CREATED }}\n run: |\n ADD_CMD=\n REMOVE_CMD=\n [ -z "${LABELS_TO_ADD}" ] || ADD_CMD="--add-label ${LABELS_TO_ADD}"\n [ -z "${LABELS_TO_REMOVE}" ] || REMOVE_CMD="--remove-label ${LABELS_TO_REMOVE}"\n if [ -n "${ADD_CMD}" ] || [ -n "${REMOVE_CMD}" ]; then\n gh pr --repo "${GITHUB_REPOSITORY}" edit ${ALREADY_CREATED} ${ADD_CMD} ${REMOVE_CMD}\n fi\n\n - run: git push --force origin "${BRANCH}"\n if: steps.get-pr.outputs.ALREADY_CREATED != ''\n\n cleanup:\n # Close PRs and delete branches if the original PR is closed.\n\n permissions:\n contents: write # for `--delete-branch` flag in `gh pr close`\n pull-requests: write # for `gh pr close`\n\n if: |\n github.event.action == 'closed' &&\n github.event.pull_request.head.repo.full_name != github.repository\n\n runs-on: ubuntu-22.04\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: 
step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Close PR and delete `ci-run/pr-${{ env.PR_NUMBER }}` branch\n run: |\n CLOSED="$(gh pr --repo ${GITHUB_REPOSITORY} list --head ${BRANCH} --json 'closed' --jq '.[].closed')"\n if [ "${CLOSED}" == "false" ]; then\n gh pr --repo "${GITHUB_REPOSITORY}" close "${BRANCH}" --delete-branch\n fi\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\approved-for-ci-run.yml | approved-for-ci-run.yml | YAML | 7,255 | 0.95 | 0.181818 | 0.076923 | python-kit | 538 | 2024-12-21T05:20:03.732175 | GPL-3.0 | false | e513d1250b5911319c8ae98125ff7ebb |
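The trickiest part of the workflow above is the `comm`-based diff in the "Get changed labels" step: `comm -23` keeps lines unique to the first (mirror PR) list, i.e. `run*` labels to remove, while `comm -13` keeps lines unique to the second (original PR) list, i.e. labels to add. A self-contained sketch of that logic where the `gh pr view … --json labels` calls are replaced by fixed, made-up label lists:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for `gh pr view ... --json labels --jq '.labels.[].name'`
# on the mirror PR and the original PR (label values are illustrative).
mirror_labels()   { printf '%s\n' run-benchmarks run-e2e-tests-in-draft | sort; }
original_labels() { printf '%s\n' run-benchmarks run-extra-build-macos | sort; }

# Labels only on the mirror PR -> remove (minus the always-kept draft label).
# The `|| true` guard keeps `set -e` happy when grep filters everything out.
LABELS_TO_REMOVE=$(comm -23 <(mirror_labels) <(original_labels) \
  | { grep -v run-e2e-tests-in-draft || true; } | paste -sd , -)

# Labels only on the original PR -> add.
LABELS_TO_ADD=$(comm -13 <(mirror_labels) <(original_labels) | paste -sd , -)

echo "add=${LABELS_TO_ADD} remove=${LABELS_TO_REMOVE}"
```

`comm` requires both inputs sorted, which is why every list is piped through `sort` first; `paste -sd , -` then joins the result into the comma-separated form that `gh pr edit --add-label/--remove-label` expects.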
name: Benchmarking\n\non:\n # uncomment to run on push for debugging your PR\n # push:\n # branches: [ your branch ]\n schedule:\n # * is a special character in YAML so you have to quote this string\n # ┌───────────── minute (0 - 59)\n # │ ┌───────────── hour (0 - 23)\n # │ │ ┌───────────── day of the month (1 - 31)\n # │ │ │ ┌───────────── month (1 - 12 or JAN-DEC)\n # │ │ │ │ ┌───────────── day of the week (0 - 6 or SUN-SAT)\n - cron: '0 3 * * *' # run once a day, timezone is utc\n workflow_dispatch: # adds ability to run this manually\n inputs:\n region_id:\n description: 'Project region id. If not set, the default region will be used'\n required: false\n default: 'aws-us-east-2'\n save_perf_report:\n type: boolean\n description: 'Publish perf report. If not set, the report will be published only for the main branch'\n required: false\n collect_olap_explain:\n type: boolean\n description: 'Collect EXPLAIN ANALYZE for OLAP queries. If not set, EXPLAIN ANALYZE will not be collected'\n required: false\n default: false\n collect_pg_stat_statements:\n type: boolean\n description: 'Collect pg_stat_statements for OLAP queries. If not set, pg_stat_statements will not be collected'\n required: false\n default: false\n run_AWS_RDS_AND_AURORA:\n type: boolean\n description: 'AWS-RDS and AWS-AURORA normally only run on Saturday. Set this to true to run them on every workflow_dispatch'\n required: false\n default: false\n run_only_pgvector_tests:\n type: boolean\n description: 'Run pgvector tests but no other tests. 
If not set, all tests including pgvector tests will be run'\n required: false\n default: false\n\ndefaults:\n run:\n shell: bash -euxo pipefail {0}\n\nconcurrency:\n # Allow only one workflow per any non-`main` branch.\n group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}\n cancel-in-progress: true\n\njobs:\n bench:\n if: ${{ github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null }}\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n strategy:\n fail-fast: false\n matrix:\n include:\n - PG_VERSION: 16\n PLATFORM: "neon-staging"\n region_id: ${{ github.event.inputs.region_id || 'aws-us-east-2' }}\n RUNNER: [ self-hosted, us-east-2, x64 ]\n - PG_VERSION: 17\n PLATFORM: "neon-staging"\n region_id: ${{ github.event.inputs.region_id || 'aws-us-east-2' }}\n RUNNER: [ self-hosted, us-east-2, x64 ]\n - PG_VERSION: 16\n PLATFORM: "azure-staging"\n region_id: 'azure-eastus2'\n RUNNER: [ self-hosted, eastus2, x64 ]\n env:\n TEST_PG_BENCH_DURATIONS_MATRIX: "300"\n TEST_PG_BENCH_SCALES_MATRIX: "10,100"\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: ${{ matrix.PG_VERSION }}\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: ${{ matrix.PLATFORM }}\n\n runs-on: ${{ matrix.RUNNER }}\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials # necessary on Azure runners\n uses: 
aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Create Neon Project\n id: create-neon-project\n uses: ./.github/actions/neon-project-create\n with:\n region_id: ${{ matrix.region_id }}\n postgres_version: ${{ env.PG_VERSION }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Run benchmark\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n # Set the --sparse-ordering option of the pytest-order plugin\n # to ensure tests run in the order they appear in the file.\n # It's important for the test_perf_pgbench.py::test_pgbench_remote_* tests\n extra_params:\n -m remote_cluster\n --sparse-ordering\n --timeout 14400\n --ignore test_runner/performance/test_perf_olap.py\n --ignore test_runner/performance/test_perf_pgvector_queries.py\n --ignore test_runner/performance/test_logical_replication.py\n --ignore test_runner/performance/test_physical_replication.py\n --ignore test_runner/performance/test_perf_ingest_using_pgcopydb.py\n --ignore test_runner/performance/test_cumulative_statistics_persistence.py\n --ignore test_runner/performance/test_perf_many_relations.py\n --ignore test_runner/performance/test_perf_oltp_large_tenant.py\n env:\n BENCHMARK_CONNSTR: ${{ steps.create-neon-project.outputs.dsn }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - 
name: Delete Neon Project\n if: ${{ always() }}\n uses: ./.github/actions/neon-project-delete\n with:\n project_id: ${{ steps.create-neon-project.outputs.project_id }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic perf testing: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n\n cumstats-test:\n if: ${{ github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null }}\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n env:\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n DEFAULT_PG_VERSION: 17\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: "neon-staging"\n\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials\n uses: 
aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n \n - name: Verify that cumulative statistics are preserved\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_cumulative_statistics_persistence.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 3600\n pg_version: ${{ env.DEFAULT_PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n NEON_API_KEY: ${{ secrets.NEON_STAGING_API_KEY }}\n\n replication-tests:\n if: ${{ github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null }}\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n env:\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n DEFAULT_PG_VERSION: 16\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: "neon-staging"\n\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: 
audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Run Logical Replication benchmarks\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_logical_replication.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 5400\n pg_version: ${{ env.DEFAULT_PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n NEON_API_KEY: ${{ secrets.NEON_STAGING_API_KEY }}\n BENCHMARK_PROJECT_ID_PUB: ${{ vars.BENCHMARK_PROJECT_ID_PUB }}\n BENCHMARK_PROJECT_ID_SUB: ${{ vars.BENCHMARK_PROJECT_ID_SUB }}\n\n - name: Run Physical Replication benchmarks\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_physical_replication.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 5400\n pg_version: ${{ env.DEFAULT_PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n NEON_API_KEY: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ 
!cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n store-test-results-into-db: true\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n REGRESS_TEST_RESULT_CONNSTR_NEW: ${{ secrets.REGRESS_TEST_RESULT_CONNSTR_NEW }}\n\n # Post both success and failure to the Slack channel\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && !cancelled() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06T9AMNDQQ" # on-call-compute-staging-stream\n slack-message: |\n Periodic replication testing: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n\n generate-matrices:\n if: ${{ github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null }}\n # Create matrices for the benchmarking jobs, so we run benchmarks on rds only once a week (on Saturday)\n #\n # Available platforms:\n # - neonvm-captest-new: Freshly created project (1 CU)\n # - neonvm-captest-freetier: Use freetier-sized compute (0.25 CU)\n # - neonvm-captest-azure-new: Freshly created project (1 CU) in azure region\n # - neonvm-captest-azure-freetier: Use freetier-sized compute (0.25 CU) in azure region\n # - neonvm-captest-reuse: Reusing existing project\n # - rds-aurora: Aurora Postgres Serverless v2 with autoscaling from 0.5 to 2 ACUs\n # - rds-postgres: RDS Postgres db.m5.large instance (2 vCPU, 8 GiB) with gp3 EBS storage\n env:\n RUN_AWS_RDS_AND_AURORA: ${{ github.event.inputs.run_AWS_RDS_AND_AURORA || 'false' }}\n DEFAULT_REGION_ID: ${{ github.event.inputs.region_id || 'aws-us-east-2' }}\n runs-on: ubuntu-22.04\n outputs:\n pgbench-compare-matrix: ${{ steps.pgbench-compare-matrix.outputs.matrix }}\n olap-compare-matrix: ${{ 
steps.olap-compare-matrix.outputs.matrix }}\n tpch-compare-matrix: ${{ steps.tpch-compare-matrix.outputs.matrix }}\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Generate matrix for pgbench benchmark\n id: pgbench-compare-matrix\n run: |\n region_id_default=${{ env.DEFAULT_REGION_ID }}\n runner_default='["self-hosted", "us-east-2", "x64"]'\n runner_azure='["self-hosted", "eastus2", "x64"]'\n image_default="ghcr.io/neondatabase/build-tools:pinned-bookworm"\n matrix='{\n "pg_version" : [\n 16\n ],\n "region_id" : [\n "'"$region_id_default"'"\n ],\n "platform": [\n "neonvm-captest-new",\n "neonvm-captest-reuse",\n "neonvm-captest-new"\n ],\n "db_size": [ "10gb" ],\n "runner": ['"$runner_default"'],\n "image": [ "'"$image_default"'" ],\n "include": [{ "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-freetier", "db_size": "3gb" ,"runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-new", "db_size": "10gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-new-many-tables","db_size": "10gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-new", "db_size": "50gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 16, "region_id": "azure-eastus2", "platform": "neonvm-azure-captest-freetier", "db_size": "3gb" ,"runner": '"$runner_azure"', "image": "ghcr.io/neondatabase/build-tools:pinned-bookworm" },\n { "pg_version": 16, "region_id": "azure-eastus2", "platform": "neonvm-azure-captest-new", "db_size": "10gb","runner": '"$runner_azure"', "image": 
"ghcr.io/neondatabase/build-tools:pinned-bookworm" },\n { "pg_version": 16, "region_id": "azure-eastus2", "platform": "neonvm-azure-captest-new", "db_size": "50gb","runner": '"$runner_azure"', "image": "ghcr.io/neondatabase/build-tools:pinned-bookworm" },\n { "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-sharding-reuse", "db_size": "50gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 17, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-freetier", "db_size": "3gb" ,"runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 17, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-new", "db_size": "10gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 17, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-new-many-tables","db_size": "10gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 17, "region_id": "'"$region_id_default"'", "platform": "neonvm-captest-new", "db_size": "50gb","runner": '"$runner_default"', "image": "'"$image_default"'" }]\n }'\n\n if [ "$(date +%A)" = "Saturday" ] || [ ${RUN_AWS_RDS_AND_AURORA} = "true" ]; then\n matrix=$(echo "$matrix" | jq '.include += [{ "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "rds-postgres", "db_size": "10gb","runner": '"$runner_default"', "image": "'"$image_default"'" },\n { "pg_version": 16, "region_id": "'"$region_id_default"'", "platform": "rds-aurora", "db_size": "10gb","runner": '"$runner_default"', "image": "'"$image_default"'" }]')\n fi\n\n echo "matrix=$(echo "$matrix" | jq --compact-output '.')" >> $GITHUB_OUTPUT\n\n - name: Generate matrix for OLAP benchmarks\n id: olap-compare-matrix\n run: |\n matrix='{\n "platform": [\n "neonvm-captest-reuse"\n ],\n "pg_version" : [\n 16,17\n ]\n }'\n\n if [ "$(date +%A)" = "Saturday" ] || [ ${RUN_AWS_RDS_AND_AURORA} = "true" ]; then\n 
matrix=$(echo "$matrix" | jq '.include += [{ "pg_version": 16, "platform": "rds-postgres" },\n { "pg_version": 16, "platform": "rds-aurora" }]')\n fi\n\n echo "matrix=$(echo "$matrix" | jq --compact-output '.')" >> $GITHUB_OUTPUT\n\n - name: Generate matrix for TPC-H benchmarks\n id: tpch-compare-matrix\n run: |\n matrix='{\n "platform": [\n "neonvm-captest-reuse"\n ],\n "pg_version" : [\n 16,17\n ]\n }'\n\n if [ "$(date +%A)" = "Saturday" ] || [ ${RUN_AWS_RDS_AND_AURORA} = "true" ]; then\n matrix=$(echo "$matrix" | jq '.include += [{ "pg_version": 16, "platform": "rds-postgres" },\n { "pg_version": 16, "platform": "rds-aurora" }]')\n fi\n\n echo "matrix=$(echo "$matrix" | jq --compact-output '.')" >> $GITHUB_OUTPUT\n\n prepare_AWS_RDS_databases:\n uses: ./.github/workflows/_benchmarking_preparation.yml\n secrets: inherit\n\n pgbench-compare:\n if: ${{ github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null }}\n needs: [ generate-matrices, prepare_AWS_RDS_databases ]\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n\n strategy:\n fail-fast: false\n matrix: ${{fromJSON(needs.generate-matrices.outputs.pgbench-compare-matrix)}}\n\n env:\n TEST_PG_BENCH_DURATIONS_MATRIX: "60m"\n TEST_PG_BENCH_SCALES_MATRIX: ${{ matrix.db_size }}\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: ${{ matrix.pg_version }}\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: ${{ matrix.platform }}\n\n runs-on: ${{ matrix.runner }}\n container:\n image: ${{ matrix.image }}\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n # Increase timeout to 8h, default timeout is 6h\n timeout-minutes: 480\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: 
step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Create Neon Project\n if: contains(fromJSON('["neonvm-captest-new", "neonvm-captest-new-many-tables", "neonvm-captest-freetier", "neonvm-azure-captest-freetier", "neonvm-azure-captest-new"]'), matrix.platform)\n id: create-neon-project\n uses: ./.github/actions/neon-project-create\n with:\n region_id: ${{ matrix.region_id }}\n postgres_version: ${{ env.PG_VERSION }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n compute_units: ${{ (contains(matrix.platform, 'captest-freetier') && '[0.25, 0.25]') || '[1, 1]' }}\n\n - name: Set up Connection String\n id: set-up-connstr\n run: |\n case "${PLATFORM}" in\n neonvm-captest-reuse)\n CONNSTR=${{ secrets.BENCHMARK_CAPTEST_CONNSTR }}\n ;;\n neonvm-captest-sharding-reuse)\n CONNSTR=${{ secrets.BENCHMARK_CAPTEST_SHARDING_CONNSTR }}\n ;;\n neonvm-captest-new | neonvm-captest-new-many-tables | neonvm-captest-freetier | neonvm-azure-captest-new | neonvm-azure-captest-freetier)\n CONNSTR=${{ steps.create-neon-project.outputs.dsn }}\n ;;\n rds-aurora)\n CONNSTR=${{ secrets.BENCHMARK_RDS_AURORA_CONNSTR }}\n ;;\n rds-postgres)\n CONNSTR=${{ secrets.BENCHMARK_RDS_POSTGRES_CONNSTR }}\n ;;\n *)\n echo >&2 "Unknown PLATFORM=${PLATFORM}"\n exit 1\n ;;\n esac\n\n echo "connstr=${CONNSTR}" >> $GITHUB_OUTPUT\n\n # we want to compare Neon project OLTP 
throughput and latency at scale factor 10 GB\n # without (neonvm-captest-new)\n # and with (neonvm-captest-new-many-tables) many relations in the database\n - name: Create many relations before the run\n if: contains(fromJSON('["neonvm-captest-new-many-tables"]'), matrix.platform)\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_perf_many_relations\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n TEST_NUM_RELATIONS: 10000\n\n - name: Benchmark init\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_pgbench_remote_init\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: Benchmark simple-update\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_pgbench_remote_simple_update\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n VIP_VAP_ACCESS_TOKEN: "${{ 
secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: Benchmark select-only\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_pgbench_remote_select_only\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: Delete Neon Project\n if: ${{ steps.create-neon-project.outputs.project_id && always() }}\n uses: ./.github/actions/neon-project-delete\n with:\n project_id: ${{ steps.create-neon-project.outputs.project_id }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic perf testing on ${{ matrix.platform }}: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n\n pgbench-pgvector:\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n strategy:\n fail-fast: false\n matrix:\n include:\n - PLATFORM: "neonvm-captest-pgvector"\n RUNNER: [ self-hosted, us-east-2, x64 ]\n 
postgres_version: 16\n - PLATFORM: "neonvm-captest-pgvector-pg17"\n RUNNER: [ self-hosted, us-east-2, x64 ]\n postgres_version: 17\n - PLATFORM: "azure-captest-pgvector"\n RUNNER: [ self-hosted, eastus2, x64 ]\n postgres_version: 16\n\n env:\n TEST_PG_BENCH_DURATIONS_MATRIX: "15m"\n TEST_PG_BENCH_SCALES_MATRIX: "1"\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: ${{ matrix.postgres_version }}\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: ${{ matrix.PLATFORM }}\n\n runs-on: ${{ matrix.RUNNER }}\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Set up Connection String\n id: set-up-connstr\n run: |\n case "${PLATFORM}" in\n neonvm-captest-pgvector)\n CONNSTR=${{ secrets.BENCHMARK_PGVECTOR_CONNSTR }}\n ;;\n neonvm-captest-pgvector-pg17)\n CONNSTR=${{ secrets.BENCHMARK_PGVECTOR_CONNSTR_PG17 }}\n ;;\n azure-captest-pgvector)\n CONNSTR=${{ secrets.BENCHMARK_PGVECTOR_CONNSTR_AZURE }}\n ;;\n *)\n echo >&2 "Unknown PLATFORM=${PLATFORM}"\n exit 1\n ;;\n esac\n\n echo 
"connstr=${CONNSTR}" >> $GITHUB_OUTPUT\n\n - name: Benchmark pgvector hnsw indexing\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_perf_olap.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_pgvector_indexing\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n\n - name: Benchmark pgvector queries\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_perf_pgvector_queries.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic perf testing on ${{ env.PLATFORM }}: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ 
secrets.SLACK_BOT_TOKEN }}\n\n clickbench-compare:\n # ClickBench DBs for rds-aurora and rds-postgres are deployed to the same clusters\n # we use for performance testing in pgbench-compare.\n # Run this job only when pgbench-compare is finished to avoid overlap.\n # We might change it after https://github.com/neondatabase/neon/issues/2900.\n #\n # *_CLICKBENCH_CONNSTR: Genuine ClickBench DB with ~100M rows\n # *_CLICKBENCH_10M_CONNSTR: DB with the first 10M rows of ClickBench DB\n if: ${{ !cancelled() && (github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null) }}\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n needs: [ generate-matrices, pgbench-compare, prepare_AWS_RDS_databases ]\n\n strategy:\n fail-fast: false\n matrix: ${{ fromJSON(needs.generate-matrices.outputs.olap-compare-matrix) }}\n\n env:\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: ${{ matrix.pg_version }}\n TEST_OUTPUT: /tmp/test_output\n TEST_OLAP_COLLECT_EXPLAIN: ${{ github.event.inputs.collect_olap_explain }}\n TEST_OLAP_COLLECT_PG_STAT_STATEMENTS: ${{ github.event.inputs.collect_pg_stat_statements }}\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: ${{ matrix.platform }}\n\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n # Increase timeout to 12h, default timeout is 6h\n # we have a regression in ClickBench causing it to run 2-3x longer\n timeout-minutes: 720\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 
v4.2.2\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Set up Connection String\n id: set-up-connstr\n run: |\n case "${PLATFORM}" in\n neonvm-captest-reuse)\n case "${PG_VERSION}" in\n 16)\n CONNSTR=${{ secrets.BENCHMARK_CAPTEST_CLICKBENCH_10M_CONNSTR }}\n ;;\n 17)\n CONNSTR=${{ secrets.BENCHMARK_CAPTEST_CLICKBENCH_CONNSTR_PG17 }}\n ;;\n *)\n echo >&2 "Unsupported PG_VERSION=${PG_VERSION} for PLATFORM=${PLATFORM}"\n exit 1\n ;;\n esac\n ;;\n rds-aurora)\n CONNSTR=${{ secrets.BENCHMARK_RDS_AURORA_CLICKBENCH_10M_CONNSTR }}\n ;;\n rds-postgres)\n CONNSTR=${{ secrets.BENCHMARK_RDS_POSTGRES_CLICKBENCH_10M_CONNSTR }}\n ;;\n *)\n echo >&2 "Unknown PLATFORM=${PLATFORM}. 
Allowed only 'neonvm-captest-reuse', 'rds-aurora', or 'rds-postgres'"\n exit 1\n ;;\n esac\n\n echo "connstr=${CONNSTR}" >> $GITHUB_OUTPUT\n\n - name: ClickBench benchmark\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_perf_olap.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 43200 -k test_clickbench\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n TEST_OLAP_COLLECT_EXPLAIN: ${{ github.event.inputs.collect_olap_explain || 'false' }}\n TEST_OLAP_COLLECT_PG_STAT_STATEMENTS: ${{ github.event.inputs.collect_pg_stat_statements || 'false' }}\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n TEST_OLAP_SCALE: 10\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic OLAP perf testing on ${{ matrix.platform }}: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n\n tpch-compare:\n # TPC-H DBs for rds-aurora and rds-postgres are deployed to the same clusters\n # we use for performance testing in pgbench-compare & clickbench-compare.\n # Run this job only when clickbench-compare is finished to avoid overlap.\n # We might change it after 
https://github.com/neondatabase/neon/issues/2900.\n #\n # *_TPCH_S10_CONNSTR: DB generated with scale factor 10 (~10 GB)\n # if: ${{ !cancelled() && (github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null) }}\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n needs: [ generate-matrices, clickbench-compare, prepare_AWS_RDS_databases ]\n\n strategy:\n fail-fast: false\n matrix: ${{ fromJSON(needs.generate-matrices.outputs.tpch-compare-matrix) }}\n\n env:\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: ${{ matrix.pg_version }}\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: ${{ matrix.platform }}\n\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Get Connstring Secret Name\n run: |\n case "${PLATFORM}" in\n neonvm-captest-reuse)\n case "${PG_VERSION}" in\n 16)\n 
CONNSTR_SECRET_NAME="BENCHMARK_CAPTEST_TPCH_S10_CONNSTR"\n ;;\n 17)\n CONNSTR_SECRET_NAME="BENCHMARK_CAPTEST_TPCH_CONNSTR_PG17"\n ;;\n *)\n echo >&2 "Unsupported PG_VERSION=${PG_VERSION} for PLATFORM=${PLATFORM}"\n exit 1\n ;;\n esac\n ;;\n rds-aurora)\n CONNSTR_SECRET_NAME="BENCHMARK_RDS_AURORA_TPCH_S10_CONNSTR"\n ;;\n rds-postgres)\n CONNSTR_SECRET_NAME="BENCHMARK_RDS_POSTGRES_TPCH_S10_CONNSTR"\n ;;\n *)\n echo >&2 "Unknown PLATFORM=${PLATFORM}. Allowed only 'neonvm-captest-reuse', 'rds-aurora', or 'rds-postgres'"\n exit 1\n ;;\n esac\n\n echo "CONNSTR_SECRET_NAME=${CONNSTR_SECRET_NAME}" >> $GITHUB_ENV\n\n - name: Set up Connection String\n id: set-up-connstr\n run: |\n CONNSTR=${{ secrets[env.CONNSTR_SECRET_NAME] }}\n\n echo "connstr=${CONNSTR}" >> $GITHUB_OUTPUT\n\n - name: Run TPC-H benchmark\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_perf_olap.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_tpch\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n TEST_OLAP_SCALE: 10\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic TPC-H perf testing on ${{ matrix.platform }}: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ 
github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n\n user-examples-compare:\n # if: ${{ !cancelled() && (github.event.inputs.run_only_pgvector_tests == 'false' || github.event.inputs.run_only_pgvector_tests == null) }}\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n needs: [ generate-matrices, tpch-compare, prepare_AWS_RDS_databases ]\n\n strategy:\n fail-fast: false\n matrix: ${{ fromJSON(needs.generate-matrices.outputs.olap-compare-matrix) }}\n\n env:\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: ${{ matrix.pg_version }}\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n SAVE_PERF_REPORT: ${{ github.event.inputs.save_perf_report || ( github.ref_name == 'main' ) }}\n PLATFORM: ${{ matrix.platform }}\n\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Set up Connection String\n id: set-up-connstr\n run: |\n case "${PLATFORM}" in\n neonvm-captest-reuse)\n 
case "${PG_VERSION}" in\n 16)\n CONNSTR=${{ secrets.BENCHMARK_USER_EXAMPLE_CAPTEST_CONNSTR }}\n ;;\n 17)\n CONNSTR=${{ secrets.BENCHMARK_CAPTEST_USER_EXAMPLE_CONNSTR_PG17 }}\n ;;\n *)\n echo >&2 "Unsupported PG_VERSION=${PG_VERSION} for PLATFORM=${PLATFORM}"\n exit 1\n ;;\n esac\n ;;\n rds-aurora)\n CONNSTR=${{ secrets.BENCHMARK_USER_EXAMPLE_RDS_AURORA_CONNSTR }}\n ;;\n rds-postgres)\n CONNSTR=${{ secrets.BENCHMARK_USER_EXAMPLE_RDS_POSTGRES_CONNSTR }}\n ;;\n *)\n echo >&2 "Unknown PLATFORM=${PLATFORM}. Allowed only 'neonvm-captest-reuse', 'rds-aurora', or 'rds-postgres'"\n exit 1\n ;;\n esac\n\n echo "connstr=${CONNSTR}" >> $GITHUB_OUTPUT\n\n - name: Run user examples\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance/test_perf_olap.py\n run_in_parallel: false\n save_perf_report: ${{ env.SAVE_PERF_REPORT }}\n extra_params: -m remote_cluster --timeout 21600 -k test_user_examples\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic user examples perf testing on ${{ matrix.platform }}: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n\n env:\n SLACK_BOT_TOKEN: ${{ 
secrets.SLACK_BOT_TOKEN }}\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\benchmarking.yml | benchmarking.yml | YAML | 48,580 | 0.95 | 0.039199 | 0.051506 | vue-tools | 342 | 2025-01-20T08:48:37.730732 | GPL-3.0 | false | ba0e1460ac997635b760912098e9a546 |
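Each `Set up Connection String` step in the benchmarking workflow above follows the same pattern: a `case`/`esac` dispatch on `PLATFORM` that fails fast on unknown values before writing `connstr=` to `$GITHUB_OUTPUT`. A minimal sketch of that pattern; the `CONNSTR` values here are placeholders, since the real steps read them from repository secrets:

```shell
# Dispatch a platform name to its connection string, exiting non-zero on
# unknown input exactly as the workflow steps do. The CONNSTR values are
# placeholders; the workflow substitutes repository secrets instead.
pick_connstr() {
  case "$1" in
    neonvm-captest-reuse)
      CONNSTR='postgres://captest-placeholder'
      ;;
    rds-aurora)
      CONNSTR='postgres://aurora-placeholder'
      ;;
    rds-postgres)
      CONNSTR='postgres://rds-placeholder'
      ;;
    *)
      echo >&2 "Unknown PLATFORM=$1"
      return 1
      ;;
  esac
  # The workflow appends this line to "$GITHUB_OUTPUT" instead of stdout.
  echo "connstr=${CONNSTR}"
}

pick_connstr neonvm-captest-reuse
```

Keeping the `*)` arm with a non-zero exit is what lets a typo in a new matrix entry fail the job immediately rather than benchmarking against an empty connection string.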
name: Build build-tools image\n\non:\n workflow_call:\n inputs:\n archs:\n description: "JSON array of architectures to build"\n # Default values are set in `check-image` job, `set-variables` step\n type: string\n required: false\n debians:\n description: "JSON array of Debian versions to build"\n # Default values are set in `check-image` job, `set-variables` step\n type: string\n required: false\n outputs:\n image-tag:\n description: "build-tools tag"\n value: ${{ jobs.check-image.outputs.tag }}\n image:\n description: "build-tools image"\n value: ghcr.io/neondatabase/build-tools:${{ jobs.check-image.outputs.tag }}\n\ndefaults:\n run:\n shell: bash -euo pipefail {0}\n\n# The initial idea was to prevent the waste of resources by not re-building the `build-tools` image\n# for the same tag in parallel workflow runs, and queue them to be skipped once we have\n# the first image pushed to the Docker registry, but GitHub's concurrency mechanism is not working as expected.\n# GitHub can't keep more than one job in a queue and removes the previous one, which causes failures in the dependent jobs.\n#\n# Ref https://github.com/orgs/community/discussions/41518\n#\n# concurrency:\n# group: build-build-tools-image-${{ inputs.image-tag }}\n# cancel-in-progress: false\n\n# No permission for GITHUB_TOKEN by default; the **minimal required** set of permissions should be granted in each job.\npermissions: {}\n\njobs:\n check-image:\n runs-on: ubuntu-22.04\n outputs:\n archs: ${{ steps.set-variables.outputs.archs }}\n debians: ${{ steps.set-variables.outputs.debians }}\n tag: ${{ steps.set-variables.outputs.image-tag }}\n everything: ${{ steps.set-more-variables.outputs.everything }}\n found: ${{ steps.set-more-variables.outputs.found }}\n\n permissions:\n packages: read\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: 
actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0\n with:\n registry: ghcr.io\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Set variables\n id: set-variables\n env:\n ARCHS: ${{ inputs.archs || '["x64","arm64"]' }}\n DEBIANS: ${{ inputs.debians || '["bullseye","bookworm"]' }}\n IMAGE_TAG: |\n ${{ hashFiles('build-tools.Dockerfile',\n '.github/workflows/build-build-tools-image.yml') }}\n run: |\n echo "archs=${ARCHS}" | tee -a ${GITHUB_OUTPUT}\n echo "debians=${DEBIANS}" | tee -a ${GITHUB_OUTPUT}\n echo "image-tag=${IMAGE_TAG}" | tee -a ${GITHUB_OUTPUT}\n\n - name: Set more variables\n id: set-more-variables\n env:\n IMAGE_TAG: ${{ steps.set-variables.outputs.image-tag }}\n EVERYTHING: |\n ${{ contains(fromJSON(steps.set-variables.outputs.archs), 'x64') &&\n contains(fromJSON(steps.set-variables.outputs.archs), 'arm64') &&\n contains(fromJSON(steps.set-variables.outputs.debians), 'bullseye') &&\n contains(fromJSON(steps.set-variables.outputs.debians), 'bookworm') }}\n run: |\n if docker manifest inspect ghcr.io/neondatabase/build-tools:${IMAGE_TAG}; then\n found=true\n else\n found=false\n fi\n\n echo "everything=${EVERYTHING}" | tee -a ${GITHUB_OUTPUT}\n echo "found=${found}" | tee -a ${GITHUB_OUTPUT}\n\n build-image:\n needs: [ check-image ]\n if: needs.check-image.outputs.found == 'false'\n\n strategy:\n matrix:\n arch: ${{ fromJSON(needs.check-image.outputs.archs) }}\n debian: ${{ fromJSON(needs.check-image.outputs.debians) }}\n\n permissions:\n packages: write\n\n runs-on: ${{ fromJSON(format('["self-hosted", "{0}"]', matrix.arch == 'arm64' && 'large-arm64' || 'large')) }}\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 
v4.2.2\n\n - uses: neondatabase/dev-actions/set-docker-config-dir@6094485bf440001c94a94a3f9e221e81ff6b6193\n - uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0\n with:\n cache-binary: false\n\n - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0\n with:\n username: ${{ secrets.NEON_DOCKERHUB_USERNAME }}\n password: ${{ secrets.NEON_DOCKERHUB_PASSWORD }}\n\n - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0\n with:\n registry: ghcr.io\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n\n - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0\n with:\n registry: cache.neon.build\n username: ${{ secrets.NEON_CI_DOCKERCACHE_USERNAME }}\n password: ${{ secrets.NEON_CI_DOCKERCACHE_PASSWORD }}\n\n - uses: docker/build-push-action@471d1dc4e07e5cdedd4c2171150001c434f0b7a4 # v6.15.0\n with:\n file: build-tools.Dockerfile\n context: .\n provenance: false\n push: true\n pull: true\n build-args: |\n DEBIAN_VERSION=${{ matrix.debian }}\n cache-from: type=registry,ref=cache.neon.build/build-tools:cache-${{ matrix.debian }}-${{ matrix.arch }}\n cache-to: ${{ github.ref_name == 'main' && format('type=registry,ref=cache.neon.build/build-tools:cache-{0}-{1},mode=max', matrix.debian, matrix.arch) || '' }}\n tags: |\n ghcr.io/neondatabase/build-tools:${{ needs.check-image.outputs.tag }}-${{ matrix.debian }}-${{ matrix.arch }}\n\n merge-images:\n needs: [ check-image, build-image ]\n runs-on: ubuntu-22.04\n\n permissions:\n packages: write\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0\n with:\n username: ${{ secrets.NEON_DOCKERHUB_USERNAME }}\n password: ${{ secrets.NEON_DOCKERHUB_PASSWORD }}\n\n - uses: 
docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0\n with:\n registry: ghcr.io\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Create multi-arch image\n env:\n DEFAULT_DEBIAN_VERSION: bookworm\n ARCHS: ${{ join(fromJSON(needs.check-image.outputs.archs), ' ') }}\n DEBIANS: ${{ join(fromJSON(needs.check-image.outputs.debians), ' ') }}\n EVERYTHING: ${{ needs.check-image.outputs.everything }}\n IMAGE_TAG: ${{ needs.check-image.outputs.tag }}\n run: |\n for debian in ${DEBIANS}; do\n tags=("-t" "ghcr.io/neondatabase/build-tools:${IMAGE_TAG}-${debian}")\n\n if [ "${EVERYTHING}" == "true" ] && [ "${debian}" == "${DEFAULT_DEBIAN_VERSION}" ]; then\n tags+=("-t" "ghcr.io/neondatabase/build-tools:${IMAGE_TAG}")\n fi\n\n for arch in ${ARCHS}; do\n tags+=("ghcr.io/neondatabase/build-tools:${IMAGE_TAG}-${debian}-${arch}")\n done\n\n docker buildx imagetools create "${tags[@]}"\n done\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\build-build-tools-image.yml | build-build-tools-image.yml | YAML | 7,827 | 0.95 | 0.039409 | 0.076023 | vue-tools | 577 | 2024-06-24T07:48:40.049047 | GPL-3.0 | false | 4b0aae37d52d9eef5d6281c39f34c2eb |
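The `Create multi-arch image` step above builds up a bash array of `-t` destination tags per Debian version (adding the bare tag only for the default Debian when everything was built), then appends the per-arch source images before one `docker buildx imagetools create` call. A sketch of just that tag-array construction, with `IMAGE_TAG=abc123` standing in for the real `hashFiles`-derived tag:

```shell
# Assemble the argument vector the workflow hands to
# `docker buildx imagetools create`: -t destination tags first,
# then the per-arch source images to merge into one manifest list.
IMAGE_TAG=abc123               # placeholder for the hashFiles-derived tag
DEFAULT_DEBIAN_VERSION=bookworm
ARCHS="x64 arm64"
EVERYTHING=true

build_tag_args() {
  local debian=$1
  local tags=("-t" "ghcr.io/neondatabase/build-tools:${IMAGE_TAG}-${debian}")
  # The bare (un-suffixed) tag is published only for the default Debian,
  # and only when every arch/debian combination was built.
  if [ "${EVERYTHING}" = "true" ] && [ "${debian}" = "${DEFAULT_DEBIAN_VERSION}" ]; then
    tags+=("-t" "ghcr.io/neondatabase/build-tools:${IMAGE_TAG}")
  fi
  for arch in ${ARCHS}; do
    tags+=("ghcr.io/neondatabase/build-tools:${IMAGE_TAG}-${debian}-${arch}")
  done
  printf '%s\n' "${tags[@]}"
}

build_tag_args bookworm
```

Using an array rather than a string keeps the `-t` flags and image references as separate words when expanded with `"${tags[@]}"`, which is why the workflow can pass the whole list to a single `imagetools create` invocation safely.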
name: Check neon with MacOS builds\n\non:\n workflow_call:\n inputs:\n pg_versions:\n description: "Array of the pg versions to build for, for example: ['v14', 'v17']"\n type: string\n default: '[]'\n required: false\n rebuild_rust_code:\n description: "Rebuild Rust code"\n type: boolean\n default: false\n required: false\n rebuild_everything:\n description: "If true, rebuild for all versions"\n type: boolean\n default: false\n required: false\n\nenv:\n RUST_BACKTRACE: 1\n COPT: '-Werror'\n\n# TODO: move `check-*` and `files-changed` jobs to the "Caller" Workflow\n# We should care about that as Github has limitations:\n# - You can connect up to four levels of workflows\n# - You can call a maximum of 20 unique reusable workflows from a single workflow file.\n# https://docs.github.com/en/actions/sharing-automations/reusing-workflows#limitations\npermissions:\n contents: read\n\njobs:\n build-pgxn:\n if: |\n (inputs.pg_versions != '[]' || inputs.rebuild_everything) && (\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-macos') ||\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-*') ||\n github.ref_name == 'main'\n )\n timeout-minutes: 30\n runs-on: macos-15\n strategy:\n matrix:\n postgres-version: ${{ inputs.rebuild_everything && fromJSON('["v14", "v15", "v16", "v17"]') || fromJSON(inputs.pg_versions) }}\n env:\n # Use release build only, to have less debug info around\n # Hence keeping target/ (and general cache size) smaller\n BUILD_TYPE: release\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout main repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Set pg ${{ matrix.postgres-version }} for caching\n id: pg_rev\n run: echo pg_rev=$(git rev-parse HEAD:vendor/postgres-${{ matrix.postgres-version }}) | tee -a "${GITHUB_OUTPUT}"\n\n - 
name: Cache postgres ${{ matrix.postgres-version }} build\n id: cache_pg\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/${{ matrix.postgres-version }}\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-pg-${{ matrix.postgres-version }}-${{ steps.pg_rev.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n\n - name: Checkout submodule vendor/postgres-${{ matrix.postgres-version }}\n if: steps.cache_pg.outputs.cache-hit != 'true'\n run: |\n git submodule init vendor/postgres-${{ matrix.postgres-version }}\n git submodule update --depth 1 --recursive\n\n - name: Install build dependencies\n if: steps.cache_pg.outputs.cache-hit != 'true'\n run: |\n brew install flex bison openssl protobuf icu4c\n\n - name: Set extra env for macOS\n if: steps.cache_pg.outputs.cache-hit != 'true'\n run: |\n echo 'LDFLAGS=-L/usr/local/opt/openssl@3/lib' >> $GITHUB_ENV\n echo 'CPPFLAGS=-I/usr/local/opt/openssl@3/include' >> $GITHUB_ENV\n\n - name: Build Postgres ${{ matrix.postgres-version }}\n if: steps.cache_pg.outputs.cache-hit != 'true'\n run: |\n make postgres-${{ matrix.postgres-version }} -j$(sysctl -n hw.ncpu)\n\n - name: Build Neon Pg Ext ${{ matrix.postgres-version }}\n if: steps.cache_pg.outputs.cache-hit != 'true'\n run: |\n make "neon-pg-ext-${{ matrix.postgres-version }}" -j$(sysctl -n hw.ncpu)\n\n - name: Get postgres headers ${{ matrix.postgres-version }}\n if: steps.cache_pg.outputs.cache-hit != 'true'\n run: |\n make postgres-headers-${{ matrix.postgres-version }} -j$(sysctl -n hw.ncpu)\n\n build-walproposer-lib:\n if: |\n (inputs.pg_versions != '[]' || inputs.rebuild_everything) && (\n contains(github.event.pull_request.labels.*.name, 
'run-extra-build-macos') ||\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-*') ||\n github.ref_name == 'main'\n )\n timeout-minutes: 30\n runs-on: macos-15\n needs: [build-pgxn]\n env:\n # Use release build only, to have less debug info around\n # Hence keeping target/ (and general cache size) smaller\n BUILD_TYPE: release\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout main repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Set pg v17 for caching\n id: pg_rev\n run: echo pg_rev=$(git rev-parse HEAD:vendor/postgres-v17) | tee -a "${GITHUB_OUTPUT}"\n\n - name: Cache postgres v17 build\n id: cache_pg\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/v17\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-pg-v17-${{ steps.pg_rev.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n\n - name: Cache walproposer-lib\n id: cache_walproposer_lib\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/build/walproposer-lib\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-walproposer_lib-v17-${{ steps.pg_rev.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n\n - name: Checkout submodule vendor/postgres-v17\n if: 
steps.cache_walproposer_lib.outputs.cache-hit != 'true'\n run: |\n git submodule init vendor/postgres-v17\n git submodule update --depth 1 --recursive\n\n - name: Install build dependencies\n if: steps.cache_walproposer_lib.outputs.cache-hit != 'true'\n run: |\n brew install flex bison openssl protobuf icu4c\n\n - name: Set extra env for macOS\n if: steps.cache_walproposer_lib.outputs.cache-hit != 'true'\n run: |\n echo 'LDFLAGS=-L/usr/local/opt/openssl@3/lib' >> $GITHUB_ENV\n echo 'CPPFLAGS=-I/usr/local/opt/openssl@3/include' >> $GITHUB_ENV\n\n - name: Build walproposer-lib (only for v17)\n if: steps.cache_walproposer_lib.outputs.cache-hit != 'true'\n run:\n make walproposer-lib -j$(sysctl -n hw.ncpu)\n\n cargo-build:\n if: |\n (inputs.pg_versions != '[]' || inputs.rebuild_rust_code || inputs.rebuild_everything) && (\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-macos') ||\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-*') ||\n github.ref_name == 'main'\n )\n timeout-minutes: 30\n runs-on: macos-15\n needs: [build-pgxn, build-walproposer-lib]\n env:\n # Use release build only, to have less debug info around\n # Hence keeping target/ (and general cache size) smaller\n BUILD_TYPE: release\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout main repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n submodules: true\n\n - name: Set pg v14 for caching\n id: pg_rev_v14\n run: echo pg_rev=$(git rev-parse HEAD:vendor/postgres-v14) | tee -a "${GITHUB_OUTPUT}"\n - name: Set pg v15 for caching\n id: pg_rev_v15\n run: echo pg_rev=$(git rev-parse HEAD:vendor/postgres-v15) | tee -a "${GITHUB_OUTPUT}"\n - name: Set pg v16 for caching\n id: pg_rev_v16\n run: echo pg_rev=$(git rev-parse HEAD:vendor/postgres-v16) | tee -a "${GITHUB_OUTPUT}"\n - name: Set 
pg v17 for caching\n id: pg_rev_v17\n run: echo pg_rev=$(git rev-parse HEAD:vendor/postgres-v17) | tee -a "${GITHUB_OUTPUT}"\n\n - name: Cache postgres v14 build\n id: cache_pg\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/v14\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-pg-v14-${{ steps.pg_rev_v14.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n - name: Cache postgres v15 build\n id: cache_pg_v15\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/v15\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-pg-v15-${{ steps.pg_rev_v15.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n - name: Cache postgres v16 build\n id: cache_pg_v16\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/v16\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-pg-v16-${{ steps.pg_rev_v16.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n - name: Cache postgres v17 build\n id: cache_pg_v17\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ 
vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/v17\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-pg-v17-${{ steps.pg_rev_v17.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n\n - name: Cache cargo deps (only for v17)\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: |\n ~/.cargo/registry\n !~/.cargo/registry/src\n ~/.cargo/git\n target\n key: v1-${{ runner.os }}-${{ runner.arch }}-cargo-${{ hashFiles('./Cargo.lock') }}-${{ hashFiles('./rust-toolchain.toml') }}-rust\n\n - name: Cache walproposer-lib\n id: cache_walproposer_lib\n uses: tespkg/actions-cache@b7bf5fcc2f98a52ac6080eb0fd282c2f752074b1 # v1.8.0\n with:\n endpoint: ${{ vars.HETZNER_CACHE_REGION }}.${{ vars.HETZNER_CACHE_ENDPOINT }}\n bucket: ${{ vars.HETZNER_CACHE_BUCKET }}\n accessKey: ${{ secrets.HETZNER_CACHE_ACCESS_KEY }}\n secretKey: ${{ secrets.HETZNER_CACHE_SECRET_KEY }}\n use-fallback: false\n path: pg_install/build/walproposer-lib\n key: v1-${{ runner.os }}-${{ runner.arch }}-${{ env.BUILD_TYPE }}-walproposer_lib-v17-${{ steps.pg_rev_v17.outputs.pg_rev }}-${{ hashFiles('Makefile') }}\n\n - name: Install build dependencies\n run: |\n brew install flex bison openssl protobuf icu4c\n\n - name: Set extra env for macOS\n run: |\n echo 'LDFLAGS=-L/usr/local/opt/openssl@3/lib' >> $GITHUB_ENV\n echo 'CPPFLAGS=-I/usr/local/opt/openssl@3/include' >> $GITHUB_ENV\n\n - name: Run cargo build (only for v17)\n run: cargo build --all --release -j$(sysctl -n hw.ncpu)\n\n - name: Check that no warnings are produced (only for v17)\n 
run: ./run_clippy.sh\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\build-macos.yml | build-macos.yml | YAML | 13,454 | 0.95 | 0.095395 | 0.040441 | react-lib | 16 | 2025-02-23T22:26:53.573552 | MIT | false | 93bede978a078d366eb12c198b1c13ec |
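The macOS jobs in build-macos.yml pass `-j$(sysctl -n hw.ncpu)` to every `make` and `cargo build` invocation. A minimal sketch of that pattern; the `nproc` fallback is an assumption added here so the same snippet also runs on Linux:

```shell
# Pick a parallel job count the way the macOS workflow does;
# fall back to nproc when sysctl is unavailable (e.g. on Linux).
JOBS=$(sysctl -n hw.ncpu 2>/dev/null || nproc)
echo "make -j${JOBS}"
```

Deriving the count at run time keeps the build saturating whatever runner size GitHub assigns, instead of hard-coding a core count.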
name: cargo deny checks\n\non:\n workflow_call:\n inputs:\n build-tools-image:\n required: false\n type: string\n schedule:\n - cron: '0 10 * * *'\n\npermissions:\n contents: read\n\njobs:\n cargo-deny:\n strategy:\n matrix:\n ref: >-\n ${{\n fromJSON(\n github.event_name == 'schedule'\n && '["main","release","release-proxy","release-compute"]'\n || format('["{0}"]', github.sha)\n )\n }}\n\n runs-on: [self-hosted, small]\n\n permissions:\n packages: read\n\n container:\n image: ${{ inputs.build-tools-image || 'ghcr.io/neondatabase/build-tools:pinned' }}\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n ref: ${{ matrix.ref }}\n\n - name: Check rust licenses/bans/advisories/sources\n env:\n CARGO_DENY_TARGET: >-\n ${{ github.event_name == 'schedule' && 'advisories' || 'all' }}\n run: cargo deny check --hide-inclusion-graph $CARGO_DENY_TARGET\n\n - name: Post to a Slack channel\n if: ${{ github.event_name == 'schedule' && failure() }}\n uses: slackapi/slack-github-action@485a9d42d3a73031f12ec201c457e2162c45d02d # v2.0.0\n with:\n method: chat.postMessage\n token: ${{ secrets.SLACK_BOT_TOKEN }}\n payload: |\n channel: ${{ vars.SLACK_ON_CALL_DEVPROD_STREAM }}\n text: |\n Periodic cargo-deny on ${{ matrix.ref }}: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n Fixing the problem should be fairly straightforward from the logs. 
If not, <#${{ vars.SLACK_RUST_CHANNEL_ID }}> is there to help.\n Pinging <!subteam^S0838JPSH32|@oncall-devprod>.\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\cargo-deny.yml | cargo-deny.yml | YAML | 2,199 | 0.95 | 0.014493 | 0 | react-lib | 215 | 2025-02-14T18:42:03.544804 | MIT | false | daf08c83a6c676b6d90cf28473e110cf |
name: Check Permissions\n\non:\n workflow_call:\n inputs:\n github-event-name:\n required: true\n type: string\n\ndefaults:\n run:\n shell: bash -euo pipefail {0}\n\n# No permission for GITHUB_TOKEN by default; the **minimal required** set of permissions should be granted in each job.\npermissions: {}\n\njobs:\n check-permissions:\n runs-on: ubuntu-22.04\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@v2\n with:\n egress-policy: audit\n\n - name: Disallow CI runs on PRs from forks\n if: |\n inputs.github-event-name == 'pull_request' &&\n github.event.pull_request.head.repo.full_name != github.repository\n run: |\n if [ "${{ contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.pull_request.author_association) }}" = "true" ]; then\n MESSAGE="Please create a PR from a branch of ${GITHUB_REPOSITORY} instead of a fork"\n else\n MESSAGE="The PR should be reviewed and labelled with 'approved-for-ci-run' to trigger a CI run"\n fi\n\n # TODO: use actions/github-script to post this message as a PR comment\n echo >&2 "We don't run CI for PRs from forks"\n echo >&2 "${MESSAGE}"\n\n exit 1\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\check-permissions.yml | check-permissions.yml | YAML | 1,273 | 0.95 | 0.121951 | 0.058824 | python-kit | 624 | 2023-11-23T00:40:48.979847 | Apache-2.0 | false | 0872146d13e956a781090246c81573e2 |
# A workflow from\n# https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries\n\nname: cleanup caches by a branch\non:\n pull_request:\n types:\n - closed\n\njobs:\n cleanup:\n runs-on: ubuntu-22.04\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@v2\n with:\n egress-policy: audit\n\n - name: Cleanup\n run: |\n gh extension install actions/gh-actions-cache\n\n echo "Fetching list of cache keys"\n cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH -L 100 | cut -f 1 )\n\n ## Setting this to not fail the workflow while deleting cache keys.\n set +e\n echo "Deleting caches..."\n for cacheKey in $cacheKeysForPR\n do\n gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm\n done\n echo "Done"\n env:\n GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n REPO: ${{ github.repository }}\n BRANCH: refs/pull/${{ github.event.pull_request.number }}/merge\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\cleanup-caches-by-a-branch.yml | cleanup-caches-by-a-branch.yml | YAML | 1,133 | 0.8 | 0.054054 | 0.09375 | vue-tools | 287 | 2024-08-17T16:17:11.133750 | MIT | false | a649a7cd729e241f82e63a514feefddc |
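The cleanup step above depends on the `gh` CLI and repository access; a local dry-run sketch of the same deletion loop, with `gh` replaced by a hypothetical stub function so the control flow can be exercised offline:

```shell
# Stand-in for the real gh CLI (assumption: just echoes the call and succeeds).
gh() { echo "gh $*"; }

cacheKeysForPR="v1-pr-cache-a v1-pr-cache-b"   # hypothetical cache keys
set +e   # as in the workflow: a failed delete should not abort the loop
deleted=0
for cacheKey in $cacheKeysForPR; do
  gh actions-cache delete "$cacheKey" --confirm && deleted=$((deleted + 1))
done
echo "deleted ${deleted} keys"
```

The `set +e` mirrors the workflow's choice to keep going even if one key was already evicted.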
name: Cloud Regression Test\non:\n schedule:\n # * is a special character in YAML so you have to quote this string\n # ┌───────────── minute (0 - 59)\n # │ ┌───────────── hour (0 - 23)\n # │ │ ┌───────────── day of the month (1 - 31)\n # │ │ │ ┌───────────── month (1 - 12 or JAN-DEC)\n # │ │ │ │ ┌───────────── day of the week (0 - 6 or SUN-SAT)\n - cron: '45 1 * * *' # run once a day, timezone is utc\n workflow_dispatch: # adds ability to run this manually\n\ndefaults:\n run:\n shell: bash -euxo pipefail {0}\n\nconcurrency:\n # Allow only one workflow\n group: ${{ github.workflow }}\n cancel-in-progress: true\n\npermissions:\n id-token: write # aws-actions/configure-aws-credentials\n statuses: write\n contents: write\n\njobs:\n regress:\n env:\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n strategy:\n fail-fast: false\n matrix:\n pg-version: [16, 17]\n\n runs-on: us-east-2\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n submodules: true\n\n - name: Patch the test\n env:\n PG_VERSION: ${{matrix.pg-version}}\n run: |\n cd "vendor/postgres-v${PG_VERSION}"\n patch -p1 < "../../compute/patches/cloud_regress_pg${PG_VERSION}.patch"\n\n - name: Generate a random password\n id: pwgen\n run: |\n set +x\n DBPASS=$(dd if=/dev/random bs=48 count=1 2>/dev/null | base64)\n echo "::add-mask::${DBPASS//\//}"\n echo DBPASS="${DBPASS//\//}" >> "${GITHUB_OUTPUT}"\n\n - name: Change tests according to the generated password\n env:\n DBPASS: ${{ steps.pwgen.outputs.DBPASS }}\n PG_VERSION: ${{matrix.pg-version}}\n run: |\n cd 
vendor/postgres-v"${PG_VERSION}"/src/test/regress\n for fname in sql/*.sql expected/*.out; do\n sed -i.bak s/NEON_PASSWORD_PLACEHOLDER/"'${DBPASS}'"/ "${fname}"\n done\n for ph in $(grep NEON_MD5_PLACEHOLDER expected/password.out | awk '{print $3;}' | sort | uniq); do\n USER=$(echo "${ph}" | cut -c 22-)\n MD5=md5$(echo -n "${DBPASS}${USER}" | md5sum | awk '{print $1;}')\n sed -i.bak "s/${ph}/${MD5}/" expected/password.out\n done\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Create a new branch\n id: create-branch\n uses: ./.github/actions/neon-branch-create\n with:\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n project_id: ${{ vars[format('PGREGRESS_PG{0}_PROJECT_ID', matrix.pg-version)] }}\n\n - name: Run the regression tests\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: cloud_regress\n pg_version: ${{matrix.pg-version}}\n extra_params: -m remote_cluster\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{steps.create-branch.outputs.dsn}}\n\n - name: Delete branch\n if: always()\n uses: ./.github/actions/neon-branch-delete\n with:\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n project_id: ${{ vars[format('PGREGRESS_PG{0}_PROJECT_ID', matrix.pg-version)] }}\n branch_id: ${{steps.create-branch.outputs.branch_id}}\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: ${{ vars.SLACK_ON_CALL_QA_STAGING_STREAM }}\n slack-message: |\n Periodic 
pg_regress on staging: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\cloud-regress.yml | cloud-regress.yml | YAML | 5,074 | 0.8 | 0.043478 | 0.057851 | python-kit | 22 | 2024-08-12T11:46:55.984201 | GPL-3.0 | false | e76c436e9c072d6f8fe0a186c89cfe75 |
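The password-substitution step in cloud-regress.yml rewrites `NEON_MD5_PLACEHOLDER` entries into Postgres-style md5 auth hashes, which are `'md5' + md5(password || username)`. A small sketch of the same derivation (the password and user name here are hypothetical; the workflow generates the password randomly):

```shell
# Postgres md5 auth hash: 'md5' prefix + md5 of password concatenated with user.
DBPASS='example-pass'     # hypothetical; masked and random in the workflow
PGUSER='regress_user'     # hypothetical role name
MD5="md5$(printf '%s%s' "$DBPASS" "$PGUSER" | md5sum | awk '{print $1}')"
echo "$MD5"
```

The result is always 35 characters: the 3-character `md5` prefix plus a 32-character hex digest, which is what the `sed` in the workflow splices into `expected/password.out`.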
name: Fast forward merge\non:\n pull_request:\n types: [labeled]\n branches:\n - release\n - release-proxy\n - release-compute\n\njobs:\n fast-forward:\n if: ${{ github.event.label.name == 'fast-forward' }}\n runs-on: ubuntu-22.04\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@v2\n with:\n egress-policy: audit\n\n - name: Remove fast-forward label from PR\n env:\n GH_TOKEN: ${{ secrets.CI_ACCESS_TOKEN }}\n run: |\n gh pr edit ${{ github.event.pull_request.number }} --repo "${GITHUB_REPOSITORY}" --remove-label "fast-forward"\n\n - name: Fast forwarding\n uses: sequoia-pgp/fast-forward@ea7628bedcb0b0b96e94383ada458d812fca4979\n # See https://docs.github.com/en/graphql/reference/enums#mergestatestatus\n if: ${{ contains(fromJSON('["clean", "unstable"]'), github.event.pull_request.mergeable_state) }}\n with:\n merge: true\n comment: on-error\n github_token: ${{ secrets.CI_ACCESS_TOKEN }}\n\n - name: Comment if mergeable_state is not clean\n if: ${{ !contains(fromJSON('["clean", "unstable"]'), github.event.pull_request.mergeable_state) }}\n env:\n GH_TOKEN: ${{ secrets.CI_ACCESS_TOKEN }}\n run: |\n gh pr comment ${{ github.event.pull_request.number }} \\n --repo "${GITHUB_REPOSITORY}" \\n --body "Not trying to fast-forward the pull request, because \`mergeable_state\` is \`${{ github.event.pull_request.mergeable_state }}\`, not \`clean\` or \`unstable\`."\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\fast-forward.yml | fast-forward.yml | YAML | 1,598 | 0.8 | 0.093023 | 0.026316 | react-lib | 120 | 2025-05-16T09:10:48.397447 | MIT | false | cdcfe78e6703d3dd08e3ab31f2953b1e |
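Both conditional steps in fast-forward.yml key off `mergeable_state` being `clean` or `unstable`. The same gate, sketched as plain shell (the function name is mine):

```shell
# Decide whether a PR's mergeable_state allows a fast-forward merge,
# matching the fromJSON(["clean","unstable"]) contains() checks above.
is_fast_forwardable() {
  case "$1" in
    clean|unstable) return 0 ;;
    *)              return 1 ;;
  esac
}

is_fast_forwardable clean  && echo "would fast-forward"
is_fast_forwardable dirty  || echo "would comment instead"
```

`unstable` is included because it only means a non-required status check is failing, which GitHub still allows to merge.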
name: benchmarking ingest\n\non:\n # uncomment to run on push for debugging your PR\n # push:\n # branches: [ your branch ]\n schedule:\n # * is a special character in YAML so you have to quote this string\n # ┌───────────── minute (0 - 59)\n # │ ┌───────────── hour (0 - 23)\n # │ │ ┌───────────── day of the month (1 - 31)\n # │ │ │ ┌───────────── month (1 - 12 or JAN-DEC)\n # │ │ │ │ ┌───────────── day of the week (0 - 6 or SUN-SAT)\n - cron: '0 9 * * *' # run once a day, timezone is utc\n workflow_dispatch: # adds ability to run this manually\n\ndefaults:\n run:\n shell: bash -euxo pipefail {0}\n\nconcurrency:\n # Allow only one workflow globally because we need dedicated resources which only exist once\n group: ingest-bench-workflow\n cancel-in-progress: true\n\npermissions:\n contents: read\n\njobs:\n ingest:\n strategy:\n fail-fast: false # allow other variants to continue even if one fails\n matrix:\n include:\n - target_project: new_empty_project_stripe_size_2048\n stripe_size: 2048 # 16 MiB\n postgres_version: 16\n disable_sharding: false\n - target_project: new_empty_project_stripe_size_32768\n stripe_size: 32768 # 256 MiB # note that this is different from null because using null will shard_split the project only if it reaches the threshold\n # while here it is sharded from the beginning with a shard size of 256 MiB\n disable_sharding: false\n postgres_version: 16\n - target_project: new_empty_project\n stripe_size: null # run with neon defaults which will shard split only when reaching the threshold\n disable_sharding: false\n postgres_version: 16\n - target_project: new_empty_project\n stripe_size: null # run with neon defaults which will shard split only when reaching the threshold\n disable_sharding: false\n postgres_version: 17\n - target_project: large_existing_project\n stripe_size: null # cannot re-shard or choose different stripe size for existing, already sharded project\n disable_sharding: false\n postgres_version: 16\n - target_project: 
new_empty_project_unsharded\n stripe_size: null # run with neon defaults which will shard split only when reaching the threshold\n disable_sharding: true\n postgres_version: 16\n max-parallel: 1 # we want to run each stripe size sequentially to be able to compare the results\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n env:\n PG_CONFIG: /tmp/neon/pg_install/v16/bin/pg_config\n PSQL: /tmp/neon/pg_install/v16/bin/psql\n PG_16_LIB_PATH: /tmp/neon/pg_install/v16/lib\n PGCOPYDB: /pgcopydb/bin/pgcopydb\n PGCOPYDB_LIB_PATH: /pgcopydb/lib\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n timeout-minutes: 1440\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials # necessary to download artefacts\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours is currently max associated with IAM role\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Create Neon Project\n if: ${{ startsWith(matrix.target_project, 'new_empty_project') }}\n id: create-neon-project-ingest-target\n uses: ./.github/actions/neon-project-create\n with:\n region_id: aws-us-east-2\n postgres_version: ${{ matrix.postgres_version }}\n compute_units: '[7, 7]' # we want to test 
large compute here to avoid compute-side bottleneck\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n shard_split_project: ${{ matrix.stripe_size != null && 'true' || 'false' }}\n admin_api_key: ${{ secrets.NEON_STAGING_ADMIN_API_KEY }} \n shard_count: 8\n stripe_size: ${{ matrix.stripe_size }}\n disable_sharding: ${{ matrix.disable_sharding }} \n\n - name: Initialize Neon project\n if: ${{ startsWith(matrix.target_project, 'new_empty_project') }}\n env:\n BENCHMARK_INGEST_TARGET_CONNSTR: ${{ steps.create-neon-project-ingest-target.outputs.dsn }}\n NEW_PROJECT_ID: ${{ steps.create-neon-project-ingest-target.outputs.project_id }}\n run: |\n echo "Initializing Neon project with project_id: ${NEW_PROJECT_ID}"\n export LD_LIBRARY_PATH=${PG_16_LIB_PATH}\n ${PSQL} "${BENCHMARK_INGEST_TARGET_CONNSTR}" -c "CREATE EXTENSION IF NOT EXISTS neon; CREATE EXTENSION IF NOT EXISTS neon_utils;"\n echo "BENCHMARK_INGEST_TARGET_CONNSTR=${BENCHMARK_INGEST_TARGET_CONNSTR}" >> $GITHUB_ENV\n\n - name: Create Neon Branch for large tenant\n if: ${{ matrix.target_project == 'large_existing_project' }}\n id: create-neon-branch-ingest-target\n uses: ./.github/actions/neon-branch-create\n with:\n project_id: ${{ vars.BENCHMARK_INGEST_TARGET_PROJECTID }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Initialize Neon project\n if: ${{ matrix.target_project == 'large_existing_project' }}\n env:\n BENCHMARK_INGEST_TARGET_CONNSTR: ${{ steps.create-neon-branch-ingest-target.outputs.dsn }}\n NEW_BRANCH_ID: ${{ steps.create-neon-branch-ingest-target.outputs.branch_id }}\n run: |\n echo "Initializing Neon branch with branch_id: ${NEW_BRANCH_ID}"\n export LD_LIBRARY_PATH=${PG_16_LIB_PATH}\n # Extract the part before the database name\n base_connstr="${BENCHMARK_INGEST_TARGET_CONNSTR%/*}"\n # Extract the query parameters (if any) after the database name\n query_params="${BENCHMARK_INGEST_TARGET_CONNSTR#*\?}"\n # Reconstruct the new connection string\n if [ "$query_params" != 
"$BENCHMARK_INGEST_TARGET_CONNSTR" ]; then\n new_connstr="${base_connstr}/neondb?${query_params}"\n else\n new_connstr="${base_connstr}/neondb"\n fi\n ${PSQL} "${new_connstr}" -c "drop database ludicrous;"\n ${PSQL} "${new_connstr}" -c "CREATE DATABASE ludicrous;"\n if [ "$query_params" != "$BENCHMARK_INGEST_TARGET_CONNSTR" ]; then\n BENCHMARK_INGEST_TARGET_CONNSTR="${base_connstr}/ludicrous?${query_params}"\n else\n BENCHMARK_INGEST_TARGET_CONNSTR="${base_connstr}/ludicrous"\n fi\n ${PSQL} "${BENCHMARK_INGEST_TARGET_CONNSTR}" -c "CREATE EXTENSION IF NOT EXISTS neon; CREATE EXTENSION IF NOT EXISTS neon_utils;"\n echo "BENCHMARK_INGEST_TARGET_CONNSTR=${BENCHMARK_INGEST_TARGET_CONNSTR}" >> $GITHUB_ENV\n\n - name: Invoke pgcopydb\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: remote\n test_selection: performance/test_perf_ingest_using_pgcopydb.py\n run_in_parallel: false\n extra_params: -s -m remote_cluster --timeout 86400 -k test_ingest_performance_using_pgcopydb\n pg_version: v${{ matrix.postgres_version }}\n save_perf_report: true\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_INGEST_SOURCE_CONNSTR: ${{ secrets.BENCHMARK_INGEST_SOURCE_CONNSTR }}\n TARGET_PROJECT_TYPE: ${{ matrix.target_project }}\n # we report PLATFORM in zenbenchmark NeonBenchmarker perf database and want to distinguish between new project and large tenant\n PLATFORM: "${{ matrix.target_project }}-us-east-2-staging"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: show tables sizes after ingest\n run: |\n export LD_LIBRARY_PATH=${PG_16_LIB_PATH}\n ${PSQL} "${BENCHMARK_INGEST_TARGET_CONNSTR}" -c "\dt+"\n\n - name: Delete Neon Project\n if: ${{ always() && startsWith(matrix.target_project, 'new_empty_project') }}\n uses: ./.github/actions/neon-project-delete\n with:\n project_id: ${{ steps.create-neon-project-ingest-target.outputs.project_id }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Delete Neon 
Branch for large tenant\n if: ${{ always() && matrix.target_project == 'large_existing_project' }}\n uses: ./.github/actions/neon-branch-delete\n with:\n project_id: ${{ vars.BENCHMARK_INGEST_TARGET_PROJECTID }}\n branch_id: ${{ steps.create-neon-branch-ingest-target.outputs.branch_id }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\ingest_benchmark.yml | ingest_benchmark.yml | YAML | 9,414 | 0.8 | 0.08 | 0.081967 | node-utils | 942 | 2025-02-09T07:02:11.440462 | Apache-2.0 | false | e141f332207ffb9442806403abccee6d |
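The "Initialize Neon project" step for the large tenant rebuilds the connection string around a new database name while keeping any query parameters. That parameter-expansion logic, factored into a standalone function (the function name is mine) so it can be checked in isolation:

```shell
# Swap the database name in a postgres URI, preserving query parameters,
# mirroring the base_connstr/query_params logic in the workflow step.
swap_db() {
  connstr=$1; newdb=$2
  base="${connstr%/*}"         # everything before the database name
  query="${connstr#*\?}"       # query params, if any
  if [ "$query" != "$connstr" ]; then
    echo "${base}/${newdb}?${query}"
  else
    echo "${base}/${newdb}"
  fi
}

swap_db 'postgres://user@host/neondb?sslmode=require' 'ludicrous'
# prints postgres://user@host/ludicrous?sslmode=require
```

Comparing `query` against the full string is how the workflow detects the no-`?` case, since `${var#*\?}` leaves the string unchanged when the pattern does not match.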
name: Add `external` label to issues and PRs created by external users\n\non:\n issues:\n types:\n - opened\n pull_request_target:\n types:\n - opened\n workflow_dispatch:\n inputs:\n github-actor:\n description: 'GitHub username. If empty, the username of the current user will be used'\n required: false\n\n# No permission for GITHUB_TOKEN by default; the **minimal required** set of permissions should be granted in each job.\npermissions: {}\n\nenv:\n LABEL: external\n\njobs:\n check-user:\n runs-on: ubuntu-22.04\n\n outputs:\n is-member: ${{ steps.check-user.outputs.is-member }}\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@v2\n with:\n egress-policy: audit\n\n - name: Check whether `${{ github.actor }}` is a member of `${{ github.repository_owner }}`\n id: check-user\n env:\n GH_TOKEN: ${{ secrets.CI_ACCESS_TOKEN }}\n ACTOR: ${{ inputs.github-actor || github.actor }}\n run: |\n expected_error="User does not exist or is not a member of the organization"\n output_file=output.txt\n\n for i in $(seq 1 10); do\n if gh api "/orgs/${GITHUB_REPOSITORY_OWNER}/members/${ACTOR}" \\n -H "Accept: application/vnd.github+json" \\n -H "X-GitHub-Api-Version: 2022-11-28" > ${output_file}; then\n\n is_member=true\n break\n elif grep -q "${expected_error}" ${output_file}; then\n is_member=false\n break\n elif [ $i -eq 10 ]; then\n title="Failed to get membership status for ${ACTOR}"\n message="The latest GitHub API error message: '$(cat ${output_file})'"\n echo "::error file=.github/workflows/label-for-external-users.yml,title=${title}::${message}"\n\n exit 1\n fi\n\n sleep 1\n done\n\n echo "is-member=${is_member}" | tee -a ${GITHUB_OUTPUT}\n\n add-label:\n if: needs.check-user.outputs.is-member == 'false'\n needs: [ check-user ]\n\n runs-on: ubuntu-22.04\n permissions:\n pull-requests: write # for `gh pr edit`\n issues: write # for `gh issue edit`\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: 
step-security/harden-runner@v2\n with:\n egress-policy: audit\n\n - name: Add `${{ env.LABEL }}` label\n env:\n GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n ITEM_NUMBER: ${{ github.event[github.event_name == 'pull_request_target' && 'pull_request' || 'issue'].number }}\n GH_CLI_COMMAND: ${{ github.event_name == 'pull_request_target' && 'pr' || 'issue' }}\n run: |\n gh ${GH_CLI_COMMAND} --repo ${GITHUB_REPOSITORY} edit --add-label=${LABEL} ${ITEM_NUMBER}\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\label-for-external-users.yml | label-for-external-users.yml | YAML | 2,758 | 0.95 | 0.090909 | 0.013889 | node-utils | 421 | 2024-10-21T03:57:57.724894 | GPL-3.0 | false | 478722688053486fc97998148e077dc3 |
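The membership check in label-for-external-users.yml wraps `gh api` in a ten-attempt retry loop that distinguishes success, a known "not a member" error, and transient failures. The retry pattern in isolation, with the API call replaced by a hypothetical stub that succeeds on the third try:

```shell
attempts=0
call_api() {                      # stub for `gh api`; the real call needs network access
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]           # fail twice, then succeed
}

for i in $(seq 1 10); do
  if call_api; then
    result=ok
    break
  elif [ "$i" -eq 10 ]; then
    echo "giving up after ${i} attempts" >&2
    exit 1
  fi
  # sleep 1  # per-attempt backoff, elided in this sketch
done
echo "succeeded after ${attempts} attempts"
```

Bailing out explicitly on the tenth iteration, as the workflow does, turns a silently exhausted loop into a hard job failure with a diagnostic.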
name: large oltp benchmark\n\non:\n # uncomment to run on push for debugging your PR\n #push:\n # branches: [ bodobolero/synthetic_oltp_workload ]\n\n schedule:\n # * is a special character in YAML so you have to quote this string\n # ┌───────────── minute (0 - 59)\n # │ ┌───────────── hour (0 - 23)\n # │ │ ┌───────────── day of the month (1 - 31)\n # │ │ │ ┌───────────── month (1 - 12 or JAN-DEC)\n # │ │ │ │ ┌───────────── day of the week (0 - 6 or SUN-SAT)\n - cron: '0 15 * * 0,2,4' # run on Sunday, Tuesday, Thursday at 3 PM UTC\n workflow_dispatch: # adds ability to run this manually\n\ndefaults:\n run:\n shell: bash -euxo pipefail {0}\n\nconcurrency:\n # Allow only one workflow globally because we need dedicated resources which only exist once\n group: large-oltp-bench-workflow\n cancel-in-progress: false\n\npermissions:\n contents: read\n\njobs:\n oltp:\n strategy:\n fail-fast: false # allow other variants to continue even if one fails\n matrix:\n include:\n - target: new_branch\n custom_scripts: insert_webhooks.sql@200 select_any_webhook_with_skew.sql@300 select_recent_webhook.sql@397 select_prefetch_webhook.sql@3 IUD_one_transaction.sql@100\n - target: reuse_branch\n custom_scripts: insert_webhooks.sql@200 select_any_webhook_with_skew.sql@300 select_recent_webhook.sql@397 select_prefetch_webhook.sql@3 IUD_one_transaction.sql@100\n max-parallel: 1 # we want to run each target sequentially to be able to compare the results\n permissions:\n contents: write\n statuses: write\n id-token: write # aws-actions/configure-aws-credentials\n env:\n TEST_PG_BENCH_DURATIONS_MATRIX: "1h" # todo update to > 1 h\n TEST_PGBENCH_CUSTOM_SCRIPTS: ${{ matrix.custom_scripts }}\n POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install\n PG_VERSION: 16 # pre-determined by pre-determined project\n TEST_OUTPUT: /tmp/test_output\n BUILD_TYPE: remote\n PLATFORM: ${{ matrix.target }}\n\n runs-on: [ self-hosted, us-east-2, x64 ]\n container:\n image: 
ghcr.io/neondatabase/build-tools:pinned-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n # Increase timeout to 2 days, default timeout is 6h - database maintenance can take a long time\n # (normally 1h pgbench, 3h vacuum analyze 3.5h re-index) x 2 = 15h, leave some buffer for regressions\n # in one run vacuum didn't finish within 12 hours\n timeout-minutes: 2880\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n\n - name: Configure AWS credentials # necessary to download artefacts\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours is currently max associated with IAM role\n\n - name: Download Neon artifact\n uses: ./.github/actions/download\n with:\n name: neon-${{ runner.os }}-${{ runner.arch }}-release-artifact\n path: /tmp/neon/\n prefix: latest\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n\n - name: Create Neon Branch for large tenant\n if: ${{ matrix.target == 'new_branch' }}\n id: create-neon-branch-oltp-target\n uses: ./.github/actions/neon-branch-create\n with:\n project_id: ${{ vars.BENCHMARK_LARGE_OLTP_PROJECTID }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Set up Connection String\n id: set-up-connstr\n run: |\n case "${{ matrix.target }}" in\n new_branch)\n CONNSTR=${{ steps.create-neon-branch-oltp-target.outputs.dsn }}\n ;;\n reuse_branch)\n CONNSTR=${{ secrets.BENCHMARK_LARGE_OLTP_REUSE_CONNSTR }}\n ;;\n *)\n echo >&2 "Unknown target=${{ matrix.target }}"\n exit 1\n ;;\n esac\n\n CONNSTR_WITHOUT_POOLER="${CONNSTR//-pooler/}"\n\n echo "connstr=${CONNSTR}" >> $GITHUB_OUTPUT\n 
echo "connstr_without_pooler=${CONNSTR_WITHOUT_POOLER}" >> $GITHUB_OUTPUT\n\n - name: Delete rows from prior runs in reuse branch\n if: ${{ matrix.target == 'reuse_branch' }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr_without_pooler }}\n PG_CONFIG: /tmp/neon/pg_install/v16/bin/pg_config\n PSQL: /tmp/neon/pg_install/v16/bin/psql\n PG_16_LIB_PATH: /tmp/neon/pg_install/v16/lib\n run: |\n echo "$(date '+%Y-%m-%d %H:%M:%S') - Deleting rows in table webhook.incoming_webhooks from prior runs"\n export LD_LIBRARY_PATH=${PG_16_LIB_PATH}\n ${PSQL} "${BENCHMARK_CONNSTR}" -c "SET statement_timeout = 0; DELETE FROM webhook.incoming_webhooks WHERE created_at > '2025-02-27 23:59:59+00';"\n echo "$(date '+%Y-%m-%d %H:%M:%S') - Finished deleting rows in table webhook.incoming_webhooks from prior runs"\n\n - name: Benchmark pgbench with custom-scripts \n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: true\n extra_params: -m remote_cluster --timeout 7200 -k test_perf_oltp_large_tenant_pgbench\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: Benchmark database maintenance\n uses: ./.github/actions/run-python-test-set\n with:\n build_type: ${{ env.BUILD_TYPE }}\n test_selection: performance\n run_in_parallel: false\n save_perf_report: true\n extra_params: -m remote_cluster --timeout 172800 -k test_perf_oltp_large_tenant_maintenance\n pg_version: ${{ env.PG_VERSION }}\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n env:\n BENCHMARK_CONNSTR: ${{ steps.set-up-connstr.outputs.connstr_without_pooler }}\n VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"\n 
PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"\n\n - name: Delete Neon Branch for large tenant\n if: ${{ always() && matrix.target == 'new_branch' }}\n uses: ./.github/actions/neon-branch-delete\n with:\n project_id: ${{ vars.BENCHMARK_LARGE_OLTP_PROJECTID }}\n branch_id: ${{ steps.create-neon-branch-oltp-target.outputs.branch_id }}\n api_key: ${{ secrets.NEON_STAGING_API_KEY }}\n\n - name: Configure AWS credentials # again because prior steps could have exceeded 5 hours\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 18000 # 5 hours\n\n - name: Create Allure report\n id: create-allure-report\n if: ${{ !cancelled() }}\n uses: ./.github/actions/allure-report-generate\n with:\n aws-oicd-role-arn: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n \n - name: Post to a Slack channel\n if: ${{ github.event.schedule && failure() }}\n uses: slackapi/slack-github-action@fcfb566f8b0aab22203f066d80ca1d7e4b5d05b3 # v1.27.1\n with:\n channel-id: "C06KHQVQ7U3" # on-call-qa-staging-stream\n slack-message: |\n Periodic large oltp perf testing: ${{ job.status }}\n <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|GitHub Run>\n <${{ steps.create-allure-report.outputs.report-url }}|Allure report>\n env:\n SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\large_oltp_benchmark.yml | large_oltp_benchmark.yml | YAML | 8,317 | 0.8 | 0.051546 | 0.081871 | react-lib | 661 | 2024-01-23T12:57:20.079933 | Apache-2.0 | false | e16a887eb4388b6e415712b8a928a676 |
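The `Set up Connection String` step in the workflow above derives a direct connection string by deleting the `-pooler` suffix with bash parameter expansion. A self-contained sketch using a placeholder connection string (the host below is made up, not a real endpoint):

```shell
# Hypothetical pooled connection string; host and credentials are placeholders.
CONNSTR='postgres://user:pass@ep-example-123456-pooler.us-east-2.aws.neon.tech/neondb'

# ${VAR//pattern/} (bash) deletes every occurrence of "-pooler",
# yielding the direct, non-pooled endpoint the workflow uses for
# maintenance tasks and row cleanup.
CONNSTR_WITHOUT_POOLER="${CONNSTR//-pooler/}"

echo "${CONNSTR_WITHOUT_POOLER}"
# → postgres://user:pass@ep-example-123456.us-east-2.aws.neon.tech/neondb
```

The `//` form replaces all occurrences; a single `/` would replace only the first, which is equivalent here since `-pooler` appears once.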
name: Lint Release PR\n\non:\n pull_request:\n branches:\n - release\n - release-proxy\n - release-compute\n\npermissions:\n contents: read\n\njobs:\n lint-release-pr:\n runs-on: ubuntu-22.04\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout PR branch\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n fetch-depth: 0 # Fetch full history for git operations\n ref: ${{ github.event.pull_request.head.ref }}\n\n - name: Run lint script\n env:\n RELEASE_BRANCH: ${{ github.base_ref }}\n run: |\n ./.github/scripts/lint-release-pr.sh\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\lint-release-pr.yml | lint-release-pr.yml | YAML | 817 | 0.8 | 0.03125 | 0 | vue-tools | 218 | 2023-10-31T06:43:16.851074 | MIT | false | 8d6e6fa2226e2287941cf1b1cad88fff |
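The lint-release-pr workflow delegates all checks to `.github/scripts/lint-release-pr.sh`, whose contents are not shown in this sample. Purely as a hypothetical sketch of what such a script might validate (the real script's checks are unknown), a branch-name guard against the workflow's trigger branches could look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch only: verify that RELEASE_BRANCH (populated from
# github.base_ref by the workflow) is one of the release branches this
# workflow triggers on. The actual lint-release-pr.sh is not shown here.
set -euo pipefail

RELEASE_BRANCH="${RELEASE_BRANCH:-release}"

case "${RELEASE_BRANCH}" in
  release|release-proxy|release-compute)
    echo "ok: linting PR targeting ${RELEASE_BRANCH}"
    ;;
  *)
    echo >&2 "unexpected base branch: ${RELEASE_BRANCH}"
    exit 1
    ;;
esac
```

The three accepted names mirror the `branches:` filter in the workflow's `pull_request` trigger.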
name: Check neon with extra platform builds\n\non:\n push:\n branches:\n - main\n pull_request:\n\ndefaults:\n run:\n shell: bash -euxo pipefail {0}\n\nconcurrency:\n # Allow only one workflow per any non-`main` branch.\n group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}\n cancel-in-progress: true\n\nenv:\n RUST_BACKTRACE: 1\n COPT: '-Werror'\n\njobs:\n check-permissions:\n if: ${{ !contains(github.event.pull_request.labels.*.name, 'run-no-ci') }}\n uses: ./.github/workflows/check-permissions.yml\n with:\n github-event-name: ${{ github.event_name}}\n\n build-build-tools-image:\n needs: [ check-permissions ]\n uses: ./.github/workflows/build-build-tools-image.yml\n secrets: inherit\n\n files-changed:\n name: Detect what files changed\n runs-on: ubuntu-22.04\n timeout-minutes: 3\n outputs:\n v17: ${{ steps.files_changed.outputs.v17 }}\n postgres_changes: ${{ steps.postgres_changes.outputs.changes }}\n rebuild_rust_code: ${{ steps.files_changed.outputs.rust_code }}\n rebuild_everything: ${{ steps.files_changed.outputs.rebuild_neon_extra || steps.files_changed.outputs.rebuild_macos }}\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n submodules: true\n\n - name: Check for Postgres changes\n uses: dorny/paths-filter@1441771bbfdd59dcd748680ee64ebd8faab1a242 #v3\n id: files_changed\n with:\n token: ${{ github.token }}\n filters: .github/file-filters.yaml\n base: ${{ github.event_name != 'pull_request' && (github.event.merge_group.base_ref || github.ref_name) || '' }}\n ref: ${{ github.event_name != 'pull_request' && (github.event.merge_group.head_ref || github.ref) || '' }}\n\n - name: Filter out only v-string for build matrix\n id: postgres_changes\n run: |\n 
v_strings_only_as_json_array=$(echo ${{ steps.files_changed.outputs.changes }} | jq '.[]|select(test("v\\d+"))' | jq --slurp -c)\n echo "changes=${v_strings_only_as_json_array}" | tee -a "${GITHUB_OUTPUT}"\n\n check-macos-build:\n needs: [ check-permissions, files-changed ]\n if: |\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-macos') ||\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-*') ||\n github.ref_name == 'main'\n uses: ./.github/workflows/build-macos.yml\n with:\n pg_versions: ${{ needs.files-changed.outputs.postgres_changes }}\n rebuild_rust_code: ${{ fromJSON(needs.files-changed.outputs.rebuild_rust_code) }}\n rebuild_everything: ${{ fromJSON(needs.files-changed.outputs.rebuild_everything) }}\n\n gather-rust-build-stats:\n needs: [ check-permissions, build-build-tools-image, files-changed ]\n permissions:\n id-token: write # aws-actions/configure-aws-credentials\n statuses: write\n contents: write\n if: |\n (needs.files-changed.outputs.v17 == 'true' || needs.files-changed.outputs.rebuild_everything == 'true') && (\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-stats') ||\n contains(github.event.pull_request.labels.*.name, 'run-extra-build-*') ||\n github.ref_name == 'main'\n )\n runs-on: [ self-hosted, large ]\n container:\n image: ${{ needs.build-build-tools-image.outputs.image }}-bookworm\n credentials:\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n options: --init\n\n env:\n BUILD_TYPE: release\n # builds with incremental compilation produce partial results\n # so do not attempt to cache this build, also disable the incremental compilation\n CARGO_INCREMENTAL: 0\n\n steps:\n - name: Harden the runner (Audit all outbound calls)\n uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0\n with:\n egress-policy: audit\n\n - name: Checkout\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n submodules: 
true\n\n # Some of our rust modules use FFI and need those to be checked\n - name: Get postgres headers\n run: make postgres-headers -j$(nproc)\n\n - name: Build walproposer-lib\n run: make walproposer-lib -j$(nproc)\n\n - name: Produce the build stats\n run: cargo build --all --release --timings -j$(nproc)\n\n - name: Configure AWS credentials\n uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2\n with:\n aws-region: eu-central-1\n role-to-assume: ${{ vars.DEV_AWS_OIDC_ROLE_ARN }}\n role-duration-seconds: 3600\n\n - name: Upload the build stats\n id: upload-stats\n env:\n BUCKET: neon-github-public-dev\n SHA: ${{ github.event.pull_request.head.sha || github.sha }}\n run: |\n REPORT_URL=https://${BUCKET}.s3.amazonaws.com/build-stats/${SHA}/${GITHUB_RUN_ID}/cargo-timing.html\n aws s3 cp --only-show-errors ./target/cargo-timings/cargo-timing.html "s3://${BUCKET}/build-stats/${SHA}/${GITHUB_RUN_ID}/"\n echo "report-url=${REPORT_URL}" >> $GITHUB_OUTPUT\n\n - name: Publish build stats report\n uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1\n env:\n REPORT_URL: ${{ steps.upload-stats.outputs.report-url }}\n SHA: ${{ github.event.pull_request.head.sha || github.sha }}\n with:\n # Retry script for 5XX server errors: https://github.com/actions/github-script#retries\n retries: 5\n script: |\n const { REPORT_URL, SHA } = process.env\n\n await github.rest.repos.createCommitStatus({\n owner: context.repo.owner,\n repo: context.repo.repo,\n sha: `${SHA}`,\n state: 'success',\n target_url: `${REPORT_URL}`,\n context: `Build stats (release)`,\n })\n | dataset_sample\yaml\neondatabase_neon\.github\workflows\neon_extra_builds.yml | neon_extra_builds.yml | YAML | 6,242 | 0.8 | 0.036585 | 0.035461 | node-utils | 317 | 2025-03-01T08:40:41.828879 | GPL-3.0 | false | 7f7ed56a27e010565bd5b0f4b72eb59f |
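The `Filter out only v-string` step in neon_extra_builds.yml narrows the paths-filter output to Postgres version entries with a two-stage `jq` pipeline. A sketch assuming `jq` is installed, with a made-up `changes` array standing in for dorny/paths-filter's real output:

```shell
# Hypothetical JSON array mimicking dorny/paths-filter's "changes" output.
changes='["v14","v15","v17","rust_code","docs"]'

# Stage 1: emit only array elements matching the regex v\d+ (a "v"
# followed by digits). Stage 2: --slurp re-collects the surviving
# strings into a single compact JSON array, as the workflow step does.
v_strings_only_as_json_array=$(echo "${changes}" | jq '.[]|select(test("v\\d+"))' | jq --slurp -c)

echo "${v_strings_only_as_json_array}"   # → ["v14","v15","v17"]
```

The result is suitable for feeding into a `strategy.matrix` via `fromJSON`, which is how the downstream build-macos workflow consumes `postgres_changes`.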