| content | path | filename | language | size_bytes | quality_score | complexity | documentation_ratio | repository | stars | created_date | license | is_test | file_hash |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jobs:\n- template: /eng/common/core-templates/jobs/jobs.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\jobs\jobs.yml | jobs.yml | YAML | 191 | 0.7 | 0 | 0 | node-utils | 25 | 2024-09-16T01:00:25.734688 | GPL-3.0 | false | 83db1630b41e7331450dad48cd4dbb10 |
jobs:\n- template: /eng/common/core-templates/jobs/source-build.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\jobs\source-build.yml | source-build.yml | YAML | 198 | 0.7 | 0 | 0 | node-utils | 791 | 2025-06-04T08:39:08.336342 | Apache-2.0 | false | 8a6be417efaa478b87d4053ebd878195 |
variables:\n- template: /eng/common/core-templates/post-build/common-variables.yml\n parameters:\n # Specifies whether to use 1ES\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\post-build\common-variables.yml | common-variables.yml | YAML | 248 | 0.8 | 0 | 0.142857 | awesome-app | 727 | 2023-10-16T18:37:36.917548 | MIT | false | 909ba0a5069bc3039739ad123729e365 |
stages:\n- template: /eng/common/core-templates/post-build/post-build.yml\n parameters:\n # Specifies whether to use 1ES\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\post-build\post-build.yml | post-build.yml | YAML | 239 | 0.8 | 0 | 0.142857 | vue-tools | 803 | 2024-01-01T10:02:45.017712 | MIT | false | 7e027e903d22bb71823eeaec48a2a687 |
steps:\n- template: /eng/common/core-templates/post-build/setup-maestro-vars.yml\n parameters:\n # Specifies whether to use 1ES\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\post-build\setup-maestro-vars.yml | setup-maestro-vars.yml | YAML | 246 | 0.8 | 0 | 0.142857 | react-lib | 463 | 2025-03-28T05:39:42.806404 | Apache-2.0 | false | a6548cf82fe816e8d533284434a175eb |
steps:\n- template: /eng/common/core-templates/steps/component-governance.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\component-governance.yml | component-governance.yml | YAML | 209 | 0.7 | 0 | 0 | node-utils | 618 | 2025-06-15T14:32:32.238477 | GPL-3.0 | false | cdfa75619c43225e9cf33f340bedd740 |
# Obtains internal runtime download credentials and populates the 'dotnetbuilds-internal-container-read-token-base64'\n# variable with the base64-encoded SAS token, by default\n\nsteps:\n- template: /eng/common/core-templates/steps/enable-internal-runtimes.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\enable-internal-runtimes.yml | enable-internal-runtimes.yml | YAML | 389 | 0.8 | 0 | 0.25 | node-utils | 609 | 2023-09-27T08:01:36.679169 | Apache-2.0 | false | 83122e7f8536616f784ee96df8ab12ac |
steps:\n- template: /eng/common/core-templates/steps/enable-internal-sources.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\enable-internal-sources.yml | enable-internal-sources.yml | YAML | 211 | 0.7 | 0 | 0 | awesome-app | 630 | 2023-07-20T01:16:33.597863 | Apache-2.0 | false | ce021738739931039c30356d81b9cd02 |
steps:\n- template: /eng/common/core-templates/steps/generate-sbom.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\generate-sbom.yml | generate-sbom.yml | YAML | 202 | 0.7 | 0 | 0 | node-utils | 195 | 2025-03-30T00:55:26.908980 | MIT | false | 01cb9e0d9a333f37615cbd14fafc51a1 |
steps:\n- template: /eng/common/core-templates/steps/get-delegation-sas.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\get-delegation-sas.yml | get-delegation-sas.yml | YAML | 207 | 0.7 | 0 | 0 | vue-tools | 709 | 2025-01-11T23:55:26.825479 | Apache-2.0 | false | 07322cf41cd26f0152a72c4a027b65d7 |
steps:\n- template: /eng/common/core-templates/steps/get-federated-access-token.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\get-federated-access-token.yml | get-federated-access-token.yml | YAML | 214 | 0.7 | 0 | 0 | node-utils | 410 | 2024-08-16T13:35:02.830005 | Apache-2.0 | false | 846715f1a4d7e3a984650b5c0eb9bcd0 |
parameters:\n- name: is1ESPipeline\n type: boolean\n default: false\n\n- name: displayName\n type: string\n default: 'Publish to Build Artifact'\n\n- name: condition\n type: string\n default: succeeded()\n\n- name: artifactName\n type: string\n\n- name: pathToPublish\n type: string\n\n- name: continueOnError\n type: boolean\n default: false\n\n- name: publishLocation\n type: string\n default: 'Container'\n\n- name: retryCountOnTaskFailure\n type: string\n default: 10\n\nsteps:\n- ${{ if eq(parameters.is1ESPipeline, true) }}:\n - 'eng/common/templates cannot be referenced from a 1ES managed template': error\n- task: PublishBuildArtifacts@1\n displayName: ${{ parameters.displayName }}\n condition: ${{ parameters.condition }}\n ${{ if parameters.continueOnError }}:\n continueOnError: ${{ parameters.continueOnError }}\n inputs:\n PublishLocation: ${{ parameters.publishLocation }} \n PathtoPublish: ${{ parameters.pathToPublish }}\n ${{ if parameters.artifactName }}:\n ArtifactName: ${{ parameters.artifactName }}\n ${{ if parameters.retryCountOnTaskFailure }}:\n retryCountOnTaskFailure: ${{ parameters.retryCountOnTaskFailure }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\publish-build-artifacts.yml | publish-build-artifacts.yml | YAML | 1,140 | 0.7 | 0.086957 | 0 | awesome-app | 942 | 2023-07-16T23:32:13.903536 | MIT | false | 24d82629806c9eb503601f3c1ffb6b80 |
steps:\n- template: /eng/common/core-templates/steps/publish-logs.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\publish-logs.yml | publish-logs.yml | YAML | 201 | 0.7 | 0 | 0 | node-utils | 315 | 2024-04-19T07:53:17.081101 | Apache-2.0 | false | 9d5f05497b5829492b197ede8da2db64 |
parameters:\n- name: is1ESPipeline\n type: boolean\n default: false\n\n- name: args\n type: object\n default: {}\n\nsteps:\n- ${{ if eq(parameters.is1ESPipeline, true) }}:\n - 'eng/common/templates cannot be referenced from a 1ES managed template': error\n- task: PublishPipelineArtifact@1\n displayName: ${{ coalesce(parameters.args.displayName, 'Publish to Build Artifact') }}\n ${{ if parameters.args.condition }}:\n condition: ${{ parameters.args.condition }}\n ${{ else }}:\n condition: succeeded()\n ${{ if parameters.args.continueOnError }}:\n continueOnError: ${{ parameters.args.continueOnError }}\n inputs:\n targetPath: ${{ parameters.args.targetPath }}\n ${{ if parameters.args.artifactName }}:\n artifactName: ${{ parameters.args.artifactName }}\n ${{ if parameters.args.publishLocation }}:\n publishLocation: ${{ parameters.args.publishLocation }}\n ${{ if parameters.args.fileSharePath }}:\n fileSharePath: ${{ parameters.args.fileSharePath }}\n ${{ if parameters.args.Parallel }}:\n parallel: ${{ parameters.args.Parallel }}\n ${{ if parameters.args.parallelCount }}:\n parallelCount: ${{ parameters.args.parallelCount }}\n ${{ if parameters.args.properties }}:\n properties: ${{ parameters.args.properties }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\publish-pipeline-artifacts.yml | publish-pipeline-artifacts.yml | YAML | 1,262 | 0.7 | 0.272727 | 0 | python-kit | 266 | 2023-08-31T05:26:09.505750 | GPL-3.0 | false | f78cfcbcb2d039523172d4dbeb84382f |
steps:\n- template: /eng/common/core-templates/steps/retain-build.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\retain-build.yml | retain-build.yml | YAML | 201 | 0.7 | 0 | 0 | node-utils | 658 | 2023-08-05T02:28:36.702689 | MIT | false | cc2bc2a0a9fe6ffdbb89d749a49c493f |
steps:\n- template: /eng/common/core-templates/steps/send-to-helix.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\send-to-helix.yml | send-to-helix.yml | YAML | 202 | 0.7 | 0 | 0 | awesome-app | 429 | 2024-09-04T22:26:02.296512 | MIT | false | 242f9a66dc8fd6487ab4c5b3130da286 |
steps:\n- template: /eng/common/core-templates/steps/source-build.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\source-build.yml | source-build.yml | YAML | 201 | 0.7 | 0 | 0 | vue-tools | 489 | 2024-09-02T00:28:36.322351 | MIT | false | b69d3712ac6cf56e6097ac25d3cff63f |
steps:\n- template: /eng/common/core-templates/steps/source-index-stage1-publish.yml\n parameters:\n is1ESPipeline: false\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\steps\source-index-stage1-publish.yml | source-index-stage1-publish.yml | YAML | 216 | 0.7 | 0 | 0 | python-kit | 664 | 2024-08-29T20:20:39.565900 | MIT | false | c3145043ba90466996369182b75d9414 |
# Select a pool provider based on branch name. Anything with branch name containing 'release' must go into an -Svc pool,\n# otherwise it should go into the "normal" pools. This separates out the queueing and billing of released branches.\n\n# Motivation:\n# Once a given branch of a repository's output has been officially "shipped" once, it is then considered to be COGS\n# (Cost of goods sold) and should be moved to a servicing pool provider. This allows both separation of queueing\n# (allowing release builds and main PR builds to not interfere with each other) and billing (required for COGS).\n# Additionally, the pool provider name itself may be subject to change when the .NET Core Engineering Services\n# team needs to move resources around and create new and potentially differently-named pools. Using this template\n# file from an Arcade-ified repo helps guard against both having to update one's release/* branches and renaming.\n\n# How to use:\n# This yaml assumes your shipped product branches use the naming convention "release/..." (which many do).\n# If we find alternate naming conventions in broad usage it can be added to the condition below.\n#\n# First, import the template in an arcade-ified repo to pick up the variables, e.g.:\n#\n# variables:\n# - template: /eng/common/templates/variables/pool-providers.yml\n#\n# ... then anywhere specifying the pool provider use the runtime variables,\n# $(DncEngInternalBuildPool) and $(DncEngPublicBuildPool), e.g.:\n#\n# pool:\n# name: $(DncEngInternalBuildPool)\n# demands: ImageOverride -equals windows.vs2019.amd64\nvariables:\n - ${{ if eq(variables['System.TeamProject'], 'internal') }}:\n - template: /eng/common/templates-official/variables/pool-providers.yml\n - ${{ else }}:\n # Coalesce the target and source branches so we know when a PR targets a release branch\n # If these variables are somehow missing, fall back to main (tends to have more capacity)\n\n # Any new -Svc alternative pools should have variables added here to allow for splitting work\n - name: DncEngPublicBuildPool\n value: $[\n replace(\n replace(\n eq(contains(coalesce(variables['System.PullRequest.TargetBranch'], variables['Build.SourceBranch'], 'refs/heads/main'), 'release'), 'true'),\n True,\n 'NetCore-Svc-Public'\n ),\n False,\n 'NetCore-Public'\n )\n ]\n\n - name: DncEngInternalBuildPool\n value: $[\n replace(\n replace(\n eq(contains(coalesce(variables['System.PullRequest.TargetBranch'], variables['Build.SourceBranch'], 'refs/heads/main'), 'release'), 'true'),\n True,\n 'NetCore1ESPool-Svc-Internal'\n ),\n False,\n 'NetCore1ESPool-Internal'\n )\n ]\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates\variables\pool-providers.yml | pool-providers.yml | YAML | 2,855 | 0.95 | 0.050847 | 0.490909 | node-utils | 697 | 2025-02-18T19:40:22.466974 | GPL-3.0 | false | dc79a9880fc3bd181b1db3ec65e3fe68 |
parameters:\n# Sbom related params\n enableSbom: true\n runAsPublic: false\n PackageVersion: 9.0.0\n BuildDropPath: '$(Build.SourcesDirectory)/artifacts'\n\njobs:\n- template: /eng/common/core-templates/job/job.yml\n parameters:\n is1ESPipeline: true\n\n componentGovernanceSteps:\n - ${{ if and(eq(parameters.runAsPublic, 'false'), ne(variables['System.TeamProject'], 'public'), notin(variables['Build.Reason'], 'PullRequest'), eq(parameters.enableSbom, 'true')) }}:\n - template: /eng/common/templates/steps/generate-sbom.yml\n parameters:\n PackageVersion: ${{ parameters.packageVersion }}\n BuildDropPath: ${{ parameters.buildDropPath }}\n ManifestDirPath: $(Build.ArtifactStagingDirectory)/sbom\n publishArtifacts: false\n\n # publish artifacts\n # for 1ES managed templates, use the templateContext.output to handle multiple outputs.\n templateContext:\n outputParentDirectory: $(Build.ArtifactStagingDirectory)\n outputs:\n - ${{ if ne(parameters.artifacts.publish, '') }}:\n - ${{ if and(ne(parameters.artifacts.publish.artifacts, 'false'), ne(parameters.artifacts.publish.artifacts, '')) }}:\n - output: buildArtifacts\n displayName: Publish pipeline artifacts\n PathtoPublish: '$(Build.ArtifactStagingDirectory)/artifacts'\n ArtifactName: ${{ coalesce(parameters.artifacts.publish.artifacts.name , 'Artifacts_$(Agent.Os)_$(_BuildConfig)') }}\n condition: always()\n retryCountOnTaskFailure: 10 # for any logs being locked\n continueOnError: true\n - ${{ if and(ne(parameters.artifacts.publish.logs, 'false'), ne(parameters.artifacts.publish.logs, '')) }}:\n - output: pipelineArtifact\n targetPath: '$(Build.ArtifactStagingDirectory)/artifacts/log'\n artifactName: ${{ coalesce(parameters.artifacts.publish.logs.name, 'Logs_Build_$(Agent.Os)_$(_BuildConfig)_Attempt$(System.JobAttempt)') }}\n displayName: 'Publish logs'\n continueOnError: true\n condition: always()\n retryCountOnTaskFailure: 10 # for any logs being locked\n sbomEnabled: false # we don't need SBOM for logs\n\n - ${{ if eq(parameters.enablePublishBuildArtifacts, true) }}:\n - output: buildArtifacts\n displayName: Publish Logs\n PathtoPublish: '$(Build.ArtifactStagingDirectory)/artifacts/log/$(_BuildConfig)'\n publishLocation: Container\n ArtifactName: ${{ coalesce(parameters.enablePublishBuildArtifacts.artifactName, '$(Agent.Os)_$(Agent.JobName)_Attempt$(System.JobAttempt)' ) }}\n continueOnError: true\n condition: always()\n sbomEnabled: false # we don't need SBOM for logs\n\n - ${{ if eq(parameters.enableBuildRetry, 'true') }}:\n - output: pipelineArtifact\n targetPath: '$(Build.ArtifactStagingDirectory)/artifacts/eng/common/BuildConfiguration'\n artifactName: 'BuildConfiguration'\n displayName: 'Publish build retry configuration'\n continueOnError: true\n sbomEnabled: false # we don't need SBOM for BuildConfiguration\n\n - ${{ if and(eq(parameters.runAsPublic, 'false'), ne(variables['System.TeamProject'], 'public'), notin(variables['Build.Reason'], 'PullRequest'), eq(parameters.enableSbom, 'true')) }}:\n - output: pipelineArtifact\n displayName: Publish SBOM manifest\n continueOnError: true\n targetPath: $(Build.ArtifactStagingDirectory)/sbom\n artifactName: $(ARTIFACT_NAME)\n\n # add any outputs provided via root yaml\n - ${{ if ne(parameters.templateContext.outputs, '') }}:\n - ${{ each output in parameters.templateContext.outputs }}:\n - ${{ output }}\n\n # add any remaining templateContext properties\n ${{ each context in parameters.templateContext }}:\n ${{ if and(ne(context.key, 'outputParentDirectory'), ne(context.key, 'outputs')) }}:\n ${{ context.key }}: ${{ context.value }}\n\n ${{ each parameter in parameters }}:\n ${{ if and(ne(parameter.key, 'templateContext'), ne(parameter.key, 'is1ESPipeline')) }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\job\job.yml | job.yml | YAML | 4,211 | 0.8 | 0.192771 | 0.067568 | python-kit | 643 | 2025-01-21T22:18:50.474179 | BSD-3-Clause | false | 78cd9b802b41a2c4c0686f51cbbe8a1b |
jobs:\n- template: /eng/common/core-templates/job/onelocbuild.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\job\onelocbuild.yml | onelocbuild.yml | YAML | 196 | 0.7 | 0 | 0 | react-lib | 598 | 2023-08-19T05:38:44.041668 | GPL-3.0 | false | 5c573c9c1ea06267dfa3f47dea2e61f7 |
jobs:\n- template: /eng/common/core-templates/job/publish-build-assets.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\job\publish-build-assets.yml | publish-build-assets.yml | YAML | 205 | 0.7 | 0 | 0 | vue-tools | 648 | 2025-03-10T06:50:35.715070 | Apache-2.0 | false | 3e7ed296298f0b030cb628629e3bf83b |
jobs:\n- template: /eng/common/core-templates/job/source-build.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\job\source-build.yml | source-build.yml | YAML | 197 | 0.7 | 0 | 0 | awesome-app | 539 | 2024-11-28T11:41:14.075185 | BSD-3-Clause | false | e1091ad4de48c0fc96801afac0ab2a99 |
jobs:\n- template: /eng/common/core-templates/job/source-index-stage1.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\job\source-index-stage1.yml | source-index-stage1.yml | YAML | 204 | 0.7 | 0 | 0 | react-lib | 263 | 2024-05-28T01:01:29.244313 | MIT | false | 620a06f775d8943f6494f162f7e41524 |
jobs:\n- template: /eng/common/core-templates/jobs/codeql-build.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\jobs\codeql-build.yml | codeql-build.yml | YAML | 198 | 0.7 | 0 | 0 | node-utils | 510 | 2023-07-21T20:07:10.310727 | Apache-2.0 | false | d35bdbce251f1d612dcbc836ee974ebd |
jobs:\n- template: /eng/common/core-templates/jobs/jobs.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\jobs\jobs.yml | jobs.yml | YAML | 190 | 0.7 | 0 | 0 | awesome-app | 74 | 2024-03-27T18:49:36.326535 | BSD-3-Clause | false | 12294abf1659048c0304ce0ffd23273a |
jobs:\n- template: /eng/common/core-templates/jobs/source-build.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\jobs\source-build.yml | source-build.yml | YAML | 197 | 0.7 | 0 | 0 | python-kit | 539 | 2024-04-27T05:02:47.614223 | BSD-3-Clause | false | 164ca5986230f70db7d9e6428b1090db |
variables:\n- template: /eng/common/core-templates/post-build/common-variables.yml\n parameters:\n # Specifies whether to use 1ES\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\post-build\common-variables.yml | common-variables.yml | YAML | 247 | 0.8 | 0 | 0.142857 | python-kit | 785 | 2024-09-29T10:09:54.635196 | Apache-2.0 | false | e6dd3b41ffc563bece83db92e1148773 |
stages:\n- template: /eng/common/core-templates/post-build/post-build.yml\n parameters:\n # Specifies whether to use 1ES\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\post-build\post-build.yml | post-build.yml | YAML | 239 | 0.8 | 0 | 0.142857 | python-kit | 898 | 2023-07-18T09:18:54.625759 | Apache-2.0 | false | 10fe98a8f9da81f17ac9ce76e3678b43 |
steps:\n- template: /eng/common/core-templates/post-build/setup-maestro-vars.yml\n parameters:\n # Specifies whether to use 1ES\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\post-build\setup-maestro-vars.yml | setup-maestro-vars.yml | YAML | 245 | 0.8 | 0 | 0.142857 | node-utils | 731 | 2024-10-23T21:53:32.443571 | MIT | false | 3319c73ca03bf136891886314d1c8b90 |
steps:\n- template: /eng/common/core-templates/steps/component-governance.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\component-governance.yml | component-governance.yml | YAML | 208 | 0.7 | 0 | 0 | react-lib | 845 | 2025-03-29T14:13:43.774888 | BSD-3-Clause | false | 06aa967218650d7f9cfb8ec46d8759e2 |
# Obtains internal runtime download credentials and populates the 'dotnetbuilds-internal-container-read-token-base64'\n# variable with the base64-encoded SAS token, by default\nsteps:\n- template: /eng/common/core-templates/steps/enable-internal-runtimes.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\enable-internal-runtimes.yml | enable-internal-runtimes.yml | YAML | 387 | 0.8 | 0 | 0.25 | react-lib | 753 | 2025-04-05T04:41:00.292927 | Apache-2.0 | false | f57b264701d6d7acad609bf58c1ad4b2 |
steps:\n- template: /eng/common/core-templates/steps/enable-internal-sources.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\enable-internal-sources.yml | enable-internal-sources.yml | YAML | 210 | 0.7 | 0 | 0 | react-lib | 335 | 2024-07-26T16:45:10.931740 | BSD-3-Clause | false | 4cb4678c0860ec21b8f586937e5d4098 |
steps:\n- template: /eng/common/core-templates/steps/generate-sbom.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\generate-sbom.yml | generate-sbom.yml | YAML | 201 | 0.7 | 0 | 0 | vue-tools | 765 | 2024-12-18T17:23:28.641283 | MIT | false | 9d6e6daf2467dfa3ba7e3a070e217a2b |
steps:\n- template: /eng/common/core-templates/steps/get-delegation-sas.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\get-delegation-sas.yml | get-delegation-sas.yml | YAML | 206 | 0.7 | 0 | 0 | vue-tools | 473 | 2023-08-22T13:10:50.663208 | Apache-2.0 | false | 445f1d1e10142392945d56d80cef658d |
steps:\n- template: /eng/common/core-templates/steps/get-federated-access-token.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }} | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\get-federated-access-token.yml | get-federated-access-token.yml | YAML | 213 | 0.7 | 0 | 0 | awesome-app | 670 | 2023-12-01T12:55:59.577439 | GPL-3.0 | false | 451a0bd19e999290b511849730f4b322 |
parameters:\n- name: displayName\n type: string\n default: 'Publish to Build Artifact'\n\n- name: condition\n type: string\n default: succeeded()\n\n- name: artifactName\n type: string\n\n- name: pathToPublish\n type: string\n\n- name: continueOnError\n type: boolean\n default: false\n\n- name: publishLocation\n type: string\n default: 'Container'\n\n- name: is1ESPipeline\n type: boolean\n default: true\n\n- name: retryCountOnTaskFailure\n type: string\n default: 10\n \nsteps:\n- ${{ if ne(parameters.is1ESPipeline, true) }}:\n - 'eng/common/templates-official cannot be referenced from a non-1ES managed template': error\n- task: 1ES.PublishBuildArtifacts@1\n displayName: ${{ parameters.displayName }}\n condition: ${{ parameters.condition }}\n ${{ if parameters.continueOnError }}:\n continueOnError: ${{ parameters.continueOnError }}\n inputs:\n PublishLocation: ${{ parameters.publishLocation }}\n PathtoPublish: ${{ parameters.pathToPublish }}\n ${{ if parameters.artifactName }}:\n ArtifactName: ${{ parameters.artifactName }}\n ${{ if parameters.retryCountOnTaskFailure }}:\n retryCountOnTaskFailure: ${{ parameters.retryCountOnTaskFailure }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\publish-build-artifacts.yml | publish-build-artifacts.yml | YAML | 1,156 | 0.7 | 0.086957 | 0 | awesome-app | 877 | 2023-07-25T01:14:11.143543 | Apache-2.0 | false | dff936f4fd2a815e4a853bf2758eb9f4 |
steps:\n- template: /eng/common/core-templates/steps/publish-logs.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\publish-logs.yml | publish-logs.yml | YAML | 200 | 0.7 | 0 | 0 | awesome-app | 826 | 2024-12-26T00:52:55.747972 | GPL-3.0 | false | cd97d3f9c7687a5b76651856fd38cc13 |
parameters:\n- name: is1ESPipeline\n type: boolean\n default: true\n\n- name: args\n type: object\n default: {}\n\nsteps:\n- ${{ if ne(parameters.is1ESPipeline, true) }}:\n - 'eng/common/templates-official cannot be referenced from a non-1ES managed template': error\n- task: 1ES.PublishPipelineArtifact@1\n displayName: ${{ coalesce(parameters.args.displayName, 'Publish to Build Artifact') }}\n ${{ if parameters.args.condition }}:\n condition: ${{ parameters.args.condition }}\n ${{ else }}:\n condition: succeeded()\n ${{ if parameters.args.continueOnError }}:\n continueOnError: ${{ parameters.args.continueOnError }}\n inputs:\n targetPath: ${{ parameters.args.targetPath }}\n ${{ if parameters.args.artifactName }}:\n artifactName: ${{ parameters.args.artifactName }}\n ${{ if parameters.args.properties }}:\n properties: ${{ parameters.args.properties }}\n ${{ if parameters.args.sbomEnabled }}:\n sbomEnabled: ${{ parameters.args.sbomEnabled }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\publish-pipeline-artifacts.yml | publish-pipeline-artifacts.yml | YAML | 973 | 0.7 | 0.214286 | 0 | python-kit | 291 | 2025-05-17T11:08:34.544959 | Apache-2.0 | false | 0d09d9af5a70fba1d8994003f81dc51b |
steps:\n- template: /eng/common/core-templates/steps/retain-build.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\retain-build.yml | retain-build.yml | YAML | 200 | 0.7 | 0 | 0 | python-kit | 458 | 2024-09-18T19:59:29.172405 | GPL-3.0 | false | 64c424567a833cb1b3baacc6c82278ed |
steps:\n- template: /eng/common/core-templates/steps/send-to-helix.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\send-to-helix.yml | send-to-helix.yml | YAML | 201 | 0.7 | 0 | 0 | react-lib | 28 | 2024-07-22T02:01:40.069525 | MIT | false | 8073d7f9d6306dab92f89878a7eebf17 |
steps:\n- template: /eng/common/core-templates/steps/source-build.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\source-build.yml | source-build.yml | YAML | 200 | 0.7 | 0 | 0 | awesome-app | 700 | 2023-11-27T02:10:26.958671 | MIT | false | bc24c27910c25e06953dc1fa1f53080f |
steps:\n- template: /eng/common/core-templates/steps/source-index-stage1-publish.yml\n parameters:\n is1ESPipeline: true\n\n ${{ each parameter in parameters }}:\n ${{ parameter.key }}: ${{ parameter.value }}\n | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\steps\source-index-stage1-publish.yml | source-index-stage1-publish.yml | YAML | 215 | 0.7 | 0 | 0 | node-utils | 943 | 2025-04-13T23:14:05.017955 | Apache-2.0 | false | 4321803988638271478b0ee91c88e6d7 |
# Select a pool provider based on branch name. Anything with branch name containing 'release' must go into an -Svc pool,\n# otherwise it should go into the "normal" pools. This separates out the queueing and billing of released branches.\n\n# Motivation:\n# Once a given branch of a repository's output has been officially "shipped" once, it is then considered to be COGS\n# (Cost of goods sold) and should be moved to a servicing pool provider. This allows both separation of queueing\n# (allowing release builds and main PR builds to not interfere with each other) and billing (required for COGS).\n# Additionally, the pool provider name itself may be subject to change when the .NET Core Engineering Services\n# team needs to move resources around and create new and potentially differently-named pools. Using this template\n# file from an Arcade-ified repo helps guard against both having to update one's release/* branches and renaming.\n\n# How to use:\n# This yaml assumes your shipped product branches use the naming convention "release/..." (which many do).\n# If we find alternate naming conventions in broad usage it can be added to the condition below.\n#\n# First, import the template in an arcade-ified repo to pick up the variables, e.g.:\n#\n# variables:\n# - template: /eng/common/templates-official/variables/pool-providers.yml\n#\n# ... then anywhere specifying the pool provider use the runtime variables,\n# $(DncEngInternalBuildPool)\n#\n# pool:\n# name: $(DncEngInternalBuildPool)\n# image: 1es-windows-2022\n\nvariables:\n # Coalesce the target and source branches so we know when a PR targets a release branch\n # If these variables are somehow missing, fall back to main (tends to have more capacity)\n\n # Any new -Svc alternative pools should have variables added here to allow for splitting work\n\n - name: DncEngInternalBuildPool\n value: $[\n replace(\n replace(\n eq(contains(coalesce(variables['System.PullRequest.TargetBranch'], variables['Build.SourceBranch'], 'refs/heads/main'), 'release'), 'true'),\n True,\n 'NetCore1ESPool-Svc-Internal'\n ),\n False,\n 'NetCore1ESPool-Internal'\n )\n ] | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\variables\pool-providers.yml | pool-providers.yml | YAML | 2,232 | 0.95 | 0.045455 | 0.675 | vue-tools | 681 | 2025-04-02T15:54:39.451826 | BSD-3-Clause | false | 5095d98b380f7d996e2cbdcaaa5a2d4f |
variables:\n# The Guardian version specified in 'eng/common/sdl/packages.config'. This value must be kept in\n# sync with the packages.config file.\n- name: DefaultGuardianVersion\n value: 0.109.0\n- name: GuardianPackagesConfigFile\n value: $(Build.SourcesDirectory)\eng\common\sdl\packages.config | dataset_sample\yaml\dotnet_wpf\eng\common\templates-official\variables\sdl-variables.yml | sdl-variables.yml | YAML | 294 | 0.8 | 0 | 0.285714 | node-utils | 744 | 2024-09-25T07:54:12.827589 | MIT | false | 7a649e5c188d945e05df59979b5af58e |
name: Export JSON/XML/YAML/CSV/MYSQL/PSQL/SQLITE/SQLSERVER/MONGODB\n\non:\n push:\n branches:\n - master\n paths-ignore:\n - "**"\n - "!bin/Commands/Export**"\n workflow_dispatch:\n inputs:\n pass:\n description: "Passcode"\n required: true\n\njobs:\n export:\n name: JSON/XML/YAML/CSV/MYSQL/PSQL/SQLITE/SQLSERVER/MONGODB\n runs-on: ubuntu-24.04\n\n strategy:\n matrix:\n php-version: [8.2]\n node-version: [20.x]\n fail-fast: false\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n with:\n submodules: true\n ref: ${{ github.head_ref }}\n\n - name: Setup PHP\n uses: shivammathur/setup-php@v2\n with:\n php-version: ${{ matrix.php-version }}\n extensions: intl #optional\n coverage: none\n ini-values: "post_max_size=256M" #optional\n\n - name: Use Node.js ${{ matrix.node-version }}\n uses: actions/setup-node@v4\n with:\n node-version: ${{ matrix.node-version }}\n\n - name: Start MySQL service\n run: |\n sudo systemctl start mysql.service\n mysql -V\n\n - name: Start PostgreSQL service\n run: |\n sudo systemctl start postgresql.service\n pg_isready\n pg_lsclusters\n sudo -u postgres psql -c "CREATE DATABASE world;"\n sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'postgres';"\n sudo -u postgres psql -c "\l"\n\n - name: Setup MongoDB\n uses: supercharge/mongodb-github-action@1.10.0\n with:\n mongodb-version: '6.0'\n mongodb-replica-set: rs0\n\n - name: Install MongoDB Database Tools\n run: |\n # Download MongoDB Database Tools directly\n wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu2204-x86_64-100.7.3.deb\n sudo dpkg -i mongodb-database-tools-ubuntu2204-x86_64-100.7.3.deb\n\n rm -rf mongodb-database-tools-ubuntu2204-x86_64-100.7.3.deb\n\n # Verify installation\n mongoimport --version\n\n - name: Add clean commands to world.sql\n run: |\n grep -v "DROP TABLE" sql/world.sql > tmpfile && mv tmpfile sql/world.sql\n echo -e "DROP TABLE IF EXISTS \`regions\`;\n\n$(cat sql/world.sql)" > sql/world.sql\n echo -e "DROP TABLE IF EXISTS 
\`subregions\`;\n$(cat sql/world.sql)" > sql/world.sql\n echo -e "DROP TABLE IF EXISTS \`countries\`;\n$(cat sql/world.sql)" > sql/world.sql\n echo -e "DROP TABLE IF EXISTS \`states\`;\n$(cat sql/world.sql)" > sql/world.sql\n echo -e "DROP TABLE IF EXISTS \`cities\`;\n$(cat sql/world.sql)" > sql/world.sql\n\n - name: Setup MySQL DB\n run: |\n ls -R\n mysql -uroot -proot -e "CREATE DATABASE world;"\n mysql -uroot -proot -e "SHOW DATABASES;"\n mysql -uroot -proot --default-character-set=utf8mb4 world < sql/world.sql\n\n - name: Setup & Run NMIG (MySQL to PostgreSQL)\n run: |\n cp nmig.config.json nmig/config/config.json\n cd nmig\n npm install\n npm run build\n npm start\n\n - name: Setup MySQLtoSQLite\n run: |\n python -m pip install --upgrade pip\n pip install mysql-to-sqlite3\n mysql2sqlite --version\n\n - name: Setup variables\n run: |\n echo "region_count=$(mysql -uroot -proot -e 'SELECT COUNT(*) FROM world.regions;' -s)" >> $GITHUB_ENV\n echo "subregion_count=$(mysql -uroot -proot -e 'SELECT COUNT(*) FROM world.subregions;' -s)" >> $GITHUB_ENV\n echo "country_count=$(mysql -uroot -proot -e 'SELECT COUNT(*) FROM world.countries;' -s)" >> $GITHUB_ENV\n echo "state_count=$(mysql -uroot -proot -e 'SELECT COUNT(*) FROM world.states;' -s)" >> $GITHUB_ENV\n echo "city_count=$(mysql -uroot -proot -e 'SELECT COUNT(*) FROM world.cities;' -s)" >> $GITHUB_ENV\n echo "current_date=$(date +'%dth %b %Y')" >> $GITHUB_ENV\n\n - name: Composer Dependencies\n working-directory: ./bin\n run: |\n composer install\n php console list\n\n - name: Export JSON\n working-directory: ./bin\n run: php console export:json\n\n - name: Export XML\n working-directory: ./bin\n run: php console export:xml\n\n - name: Export YAML\n working-directory: ./bin\n run: php console export:yaml\n\n - name: Export CSV\n working-directory: ./bin\n run: php console export:csv\n\n - name: Export MySQL SQL\n run: |\n mysqldump -uroot -proot --add-drop-table --disable-keys --set-charset --skip-add-locks world 
regions > sql/regions.sql\n mysqldump -uroot -proot --add-drop-table --disable-keys --set-charset --skip-add-locks world subregions > sql/subregions.sql\n mysqldump -uroot -proot --add-drop-table --disable-keys --set-charset --skip-add-locks world countries > sql/countries.sql\n mysqldump -uroot -proot --add-drop-table --disable-keys --set-charset --skip-add-locks world states > sql/states.sql\n mysqldump -uroot -proot --add-drop-table --disable-keys --set-charset --skip-add-locks world cities > sql/cities.sql\n\n - name: Export PostgreSQL SQL\n env:\n PGPASSWORD: postgres\n run: |\n pg_dump --dbname=postgresql://postgres:postgres@localhost/world -Fp --inserts --clean --if-exists -t regions > psql/regions.sql\n pg_dump --dbname=postgresql://postgres:postgres@localhost/world -Fp --inserts --clean --if-exists -t subregions > psql/subregions.sql\n pg_dump --dbname=postgresql://postgres:postgres@localhost/world -Fp --inserts --clean --if-exists -t countries > psql/countries.sql\n pg_dump --dbname=postgresql://postgres:postgres@localhost/world -Fp --inserts --clean --if-exists -t states > psql/states.sql\n pg_dump --dbname=postgresql://postgres:postgres@localhost/world -Fp --inserts --clean --if-exists -t cities > psql/cities.sql\n pg_dump --dbname=postgresql://postgres:postgres@localhost/world -Fp --inserts --clean --if-exists > psql/world.sql\n\n - name: Export SQLite\n run: |\n mysql2sqlite -d world -t regions --mysql-password root -u root -f sqlite/regions.sqlite3\n mysql2sqlite -d world -t subregions --mysql-password root -u root -f sqlite/subregions.sqlite3\n mysql2sqlite -d world -t countries --mysql-password root -u root -f sqlite/countries.sqlite3\n mysql2sqlite -d world -t states --mysql-password root -u root -f sqlite/states.sqlite3\n mysql2sqlite -d world -t cities --mysql-password root -u root -f sqlite/cities.sqlite3\n mysql2sqlite -d world --mysql-password root -u root -f sqlite/world.sqlite3\n\n - name: Export SQL Server\n working-directory: ./bin\n run: 
php console export:sql-server\n\n - name: Export MongoDB\n working-directory: ./bin\n run: |\n php console export:mongodb\n ls -la ../mongodb\n\n - name: Import MongoDB\n working-directory: ./mongodb\n run: |\n # Wait for MongoDB to be ready\n sleep 5\n\n echo "Importing collections..."\n mongoimport --host localhost:27017 --db world --collection regions --file regions.json --jsonArray\n mongoimport --host localhost:27017 --db world --collection subregions --file subregions.json --jsonArray\n mongoimport --host localhost:27017 --db world --collection countries --file countries.json --jsonArray\n mongoimport --host localhost:27017 --db world --collection states --file states.json --jsonArray\n mongoimport --host localhost:27017 --db world --collection cities --file cities.json --jsonArray\n echo "Import completed"\n\n # Create a MongoDB dump\n mongodump --host localhost:27017 --db world --out mongodb-dump\n\n # Compress the dump\n tar -czvf world-mongodb-dump.tar.gz mongodb-dump\n echo "MongoDB dump created at mongodb/world-mongodb-dump.tar.gz"\n\n rm -rf mongodb-dump regions.json subregions.json countries.json states.json cities.json\n\n - name: Update README.md\n run: |\n sed -i "s/Total Regions : [0-9]* <br>/Total Regions : $region_count <br>/" README.md\n sed -i "s/Total Sub Regions : [0-9]* <br>/Total Sub Regions : $subregion_count <br>/" README.md\n sed -i "s/Total Countries : [0-9]* <br>/Total Countries : $country_count <br>/" README.md\n sed -i "s/Total States\/Regions\/Municipalities : [0-9]* <br>/Total States\/Regions\/Municipalities : $state_count <br>/" README.md\n sed -i "s/Total Cities\/Towns\/Districts : [0-9]* <br>/Total Cities\/Towns\/Districts : $city_count <br>/" README.md\n sed -i "s/Last Updated On : .*$/Last Updated On : $current_date/" README.md\n\n - name: Create Pull Request\n uses: peter-evans/create-pull-request@v6\n with:\n commit-message: Add exported JSON, CSV, XML, YAML, MYSQL, PSQL, SQLITE, MONGODB & SQL Server files\n committer: 
Darshan Gada <gadadarshan@gmail.com>\n signoff: true\n branch: export/Files\n delete-branch: true\n title: "(chore): Export JSON, CSV, XML, YAML, MYSQL, PSQL, SQLITE, MONGODB & SQL Server files"\n labels: |\n exports\n automated\n reviewers: dr5hn\n | dataset_sample\yaml\dr5hn_countries-states-cities-database\.github\workflows\export.yml | export.yml | YAML | 9,397 | 0.95 | 0.031674 | 0.026738 | awesome-app | 863 | 2023-10-17T06:31:37.811554 | GPL-3.0 | false | 385272b0792afbea03106fd60b08994e |
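The "Update README.md" step rewrites the entity counts and the "Last Updated On" line with a series of `sed` substitutions. A rough Python equivalent (`update_readme` is a hypothetical helper; the regex patterns are taken from the sed commands above and assume the same README layout):

```python
import re

def update_readme(text, counts, updated_on):
    """Refresh 'Total X : N <br>' count lines and the 'Last Updated On'
    line, mirroring the workflow's sed step. `counts` maps a README label
    (e.g. 'Total Countries') to its new value."""
    for label, value in counts.items():
        text = re.sub(
            rf"{re.escape(label)} : [0-9]* <br>",
            f"{label} : {value} <br>",
            text,
        )
    return re.sub(r"Last Updated On : .*$", f"Last Updated On : {updated_on}",
                  text, flags=re.M)
```

The counts themselves come from the earlier "Setup variables" step, which runs `SELECT COUNT(*)` against each table and exports the results via `$GITHUB_ENV`.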
region:\n -\n id: 1\n name: Africa\n translations:\n ko: 아프리카\n pt-BR: África\n pt: África\n nl: Afrika\n hr: Afrika\n fa: آفریقا\n de: Afrika\n es: África\n fr: Afrique\n ja: アフリカ\n it: Africa\n zh-CN: 非洲\n tr: Afrika\n ru: Африка\n uk: Африка\n pl: Afryka\n wikiDataId: Q15\n -\n id: 2\n name: Americas\n translations:\n ko: 아메리카\n pt-BR: América\n pt: América\n nl: Amerika\n hr: Amerika\n fa: 'قاره آمریکا'\n de: Amerika\n es: América\n fr: Amérique\n ja: アメリカ州\n it: America\n zh-CN: 美洲\n tr: Amerika\n ru: Америка\n uk: Америка\n pl: Ameryka\n wikiDataId: Q828\n -\n id: 3\n name: Asia\n translations:\n ko: 아시아\n pt-BR: Ásia\n pt: Ásia\n nl: Azië\n hr: Ázsia\n fa: آسیا\n de: Asien\n es: Asia\n fr: Asie\n ja: アジア\n it: Asia\n zh-CN: 亚洲\n tr: Asya\n ru: Азия\n uk: Азія\n pl: Azja\n wikiDataId: Q48\n -\n id: 4\n name: Europe\n translations:\n ko: 유럽\n pt-BR: Europa\n pt: Europa\n nl: Europa\n hr: Európa\n fa: اروپا\n de: Europa\n es: Europa\n fr: Europe\n ja: ヨーロッパ\n it: Europa\n zh-CN: 欧洲\n tr: Avrupa\n ru: Европа\n uk: Європа\n pl: Europa\n wikiDataId: Q46\n -\n id: 5\n name: Oceania\n translations:\n ko: 오세아니아\n pt-BR: Oceania\n pt: Oceania\n nl: 'Oceanië en Australië'\n hr: 'Óceánia és Ausztrália'\n fa: اقیانوسیه\n de: 'Ozeanien und Australien'\n es: Oceanía\n fr: Océanie\n ja: オセアニア\n it: Oceania\n zh-CN: 大洋洲\n tr: Okyanusya\n ru: Океания\n uk: Океанія\n pl: Oceania\n wikiDataId: Q55643\n -\n id: 6\n name: Polar\n translations:\n ko: 남극\n pt-BR: Antártida\n pt: Antártida\n nl: Antarctica\n hr: Antarktika\n fa: جنوبگان\n de: Antarktika\n es: Antártida\n fr: Antarctique\n ja: 南極大陸\n it: Antartide\n zh-CN: 南極洲\n tr: Antarktika\n ru: Антарктика\n uk: Антарктика\n pl: Antarktyka\n wikiDataId: Q51\n | dataset_sample\yaml\dr5hn_countries-states-cities-database\yml\regions.yml | regions.yml | YAML | 2,418 | 0.7 | 0 | 0 | node-utils | 318 | 2025-04-07T10:38:07.886541 | MIT | false | e2696a1f248741c80957d053d3e1954e |
subregion:\n -\n id: 19\n name: 'Australia and New Zealand'\n region_id: 5\n translations:\n ko: 오스트랄라시아\n pt: Australásia\n nl: Australazië\n hr: Australazija\n fa: استرالزی\n de: Australasien\n es: Australasia\n fr: Australasie\n ja: オーストララシア\n it: Australasia\n zh-CN: 澳大拉西亞\n ru: 'Австралия и Новая Зеландия'\n uk: 'Австралія та Нова Зеландія'\n pl: 'Australia i Nowa Zelandia'\n wikiDataId: Q45256\n -\n id: 7\n name: Caribbean\n region_id: 2\n translations:\n ko: 카리브\n pt: Caraíbas\n nl: Caraïben\n hr: Karibi\n fa: کارائیب\n de: Karibik\n es: Caribe\n fr: Caraïbes\n ja: カリブ海地域\n it: Caraibi\n zh-CN: 加勒比地区\n ru: Карибы\n uk: Кариби\n pl: Karaiby\n wikiDataId: Q664609\n -\n id: 9\n name: 'Central America'\n region_id: 2\n translations:\n ko: 중앙아메리카\n pt: 'América Central'\n nl: Centraal-Amerika\n hr: 'Srednja Amerika'\n fa: 'آمریکای مرکزی'\n de: Zentralamerika\n es: 'América Central'\n fr: 'Amérique centrale'\n ja: 中央アメリカ\n it: 'America centrale'\n zh-CN: 中美洲\n ru: 'Центральная Америка'\n uk: 'Центральна Америка'\n pl: 'Ameryka Środkowa'\n wikiDataId: Q27611\n -\n id: 10\n name: 'Central Asia'\n region_id: 3\n translations:\n ko: 중앙아시아\n pt: 'Ásia Central'\n nl: Centraal-Azië\n hr: 'Srednja Azija'\n fa: 'آسیای میانه'\n de: Zentralasien\n es: 'Asia Central'\n fr: 'Asie centrale'\n ja: 中央アジア\n it: 'Asia centrale'\n zh-CN: 中亚\n ru: 'Центральная Азия'\n uk: 'Центральна Азія'\n pl: 'Azja Środkowa'\n wikiDataId: Q27275\n -\n id: 4\n name: 'Eastern Africa'\n region_id: 1\n translations:\n ko: 동아프리카\n pt: 'África Oriental'\n nl: Oost-Afrika\n hr: 'Istočna Afrika'\n fa: 'شرق آفریقا'\n de: Ostafrika\n es: 'África Oriental'\n fr: "Afrique de l'Est"\n ja: 東アフリカ\n it: 'Africa orientale'\n zh-CN: 东部非洲\n ru: 'Восточная Африка'\n uk: 'Східна Африка'\n pl: 'Afryka Wschodnia'\n wikiDataId: Q27407\n -\n id: 12\n name: 'Eastern Asia'\n region_id: 3\n translations:\n ko: 동아시아\n pt: 'Ásia Oriental'\n nl: Oost-Azië\n hr: 'Istočna Azija'\n fa: 'شرق آسیا'\n de: Ostasien\n es: 'Asia 
Oriental'\n fr: "Asie de l'Est"\n ja: 東アジア\n it: 'Asia orientale'\n zh-CN: 東亞\n ru: 'Восточная Азия'\n uk: 'Східна Азія'\n pl: 'Azja Wschodnia'\n wikiDataId: Q27231\n -\n id: 15\n name: 'Eastern Europe'\n region_id: 4\n translations:\n ko: 동유럽\n pt: 'Europa de Leste'\n nl: Oost-Europa\n hr: 'Istočna Europa'\n fa: 'شرق اروپا'\n de: Osteuropa\n es: 'Europa Oriental'\n fr: "Europe de l'Est"\n ja: 東ヨーロッパ\n it: 'Europa orientale'\n zh-CN: 东欧\n ru: 'Восточная Европа'\n uk: 'Східна Європа'\n pl: 'Europa Wschodnia'\n wikiDataId: Q27468\n -\n id: 20\n name: Melanesia\n region_id: 5\n translations:\n ko: 멜라네시아\n pt: Melanésia\n nl: Melanesië\n hr: Melanezija\n fa: ملانزی\n de: Melanesien\n es: Melanesia\n fr: Mélanésie\n ja: メラネシア\n it: Melanesia\n zh-CN: 美拉尼西亚\n ru: Меланезия\n uk: Меланезія\n pl: Melanezja\n wikiDataId: Q37394\n -\n id: 21\n name: Micronesia\n region_id: 5\n translations:\n ko: 미크로네시아\n pt: Micronésia\n nl: Micronesië\n hr: Mikronezija\n fa: میکرونزی\n de: Mikronesien\n es: Micronesia\n fr: Micronésie\n ja: ミクロネシア\n it: Micronesia\n zh-CN: 密克罗尼西亚群岛\n ru: Микронезия\n uk: Мікронезія\n pl: Mikronezja\n wikiDataId: Q3359409\n -\n id: 2\n name: 'Middle Africa'\n region_id: 1\n translations:\n ko: 중앙아프리카\n pt: 'África Central'\n nl: Centraal-Afrika\n hr: 'Srednja Afrika'\n fa: 'مرکز آفریقا'\n de: Zentralafrika\n es: 'África Central'\n fr: 'Afrique centrale'\n ja: 中部アフリカ\n it: 'Africa centrale'\n zh-CN: 中部非洲\n ru: 'Средняя Африка'\n uk: 'Середня Африка'\n pl: 'Afryka Środkowa'\n wikiDataId: Q27433\n -\n id: 1\n name: 'Northern Africa'\n region_id: 1\n translations:\n ko: 북아프리카\n pt: 'Norte de África'\n nl: Noord-Afrika\n hr: 'Sjeverna Afrika'\n fa: 'شمال آفریقا'\n de: Nordafrika\n es: 'Norte de África'\n fr: 'Afrique du Nord'\n ja: 北アフリカ\n it: Nordafrica\n zh-CN: 北部非洲\n ru: 'Северная Африка'\n uk: 'Північна Африка'\n pl: 'Afryka Północna'\n wikiDataId: Q27381\n -\n id: 6\n name: 'Northern America'\n region_id: 2\n translations:\n ko: 북미\n pt: 'América 
Setentrional'\n nl: Noord-Amerika\n fa: 'شمال آمریکا'\n de: Nordamerika\n es: 'América Norteña'\n fr: 'Amérique septentrionale'\n ja: 北部アメリカ\n it: 'America settentrionale'\n zh-CN: 北美地區\n ru: 'Северная Америка'\n uk: 'Північна Америка'\n pl: 'Ameryka Północna'\n wikiDataId: Q2017699\n -\n id: 18\n name: 'Northern Europe'\n region_id: 4\n translations:\n ko: 북유럽\n pt: 'Europa Setentrional'\n nl: Noord-Europa\n hr: 'Sjeverna Europa'\n fa: 'شمال اروپا'\n de: Nordeuropa\n es: 'Europa del Norte'\n fr: 'Europe du Nord'\n ja: 北ヨーロッパ\n it: 'Europa settentrionale'\n zh-CN: 北歐\n ru: 'Северная Европа'\n uk: 'Північна Європа'\n pl: 'Europa Północna'\n wikiDataId: Q27479\n -\n id: 22\n name: Polynesia\n region_id: 5\n translations:\n ko: 폴리네시아\n pt: Polinésia\n nl: Polynesië\n hr: Polinezija\n fa: پلینزی\n de: Polynesien\n es: Polinesia\n fr: Polynésie\n ja: ポリネシア\n it: Polinesia\n zh-CN: 玻里尼西亞\n ru: Полинезия\n uk: Полінезія\n pl: Polinezja\n wikiDataId: Q35942\n -\n id: 8\n name: 'South America'\n region_id: 2\n translations:\n ko: 남아메리카\n pt: 'América do Sul'\n nl: Zuid-Amerika\n hr: 'Južna Amerika'\n fa: 'آمریکای جنوبی'\n de: Südamerika\n es: 'América del Sur'\n fr: 'Amérique du Sud'\n ja: 南アメリカ\n it: 'America meridionale'\n zh-CN: 南美洲\n ru: 'Южная Америка'\n uk: 'Південна Америка'\n pl: 'Ameryka Południowa'\n wikiDataId: Q18\n -\n id: 13\n name: 'South-Eastern Asia'\n region_id: 3\n translations:\n ko: 동남아시아\n pt: 'Sudeste Asiático'\n nl: Zuidoost-Azië\n hr: 'Jugoistočna Azija'\n fa: 'جنوب شرق آسیا'\n de: Südostasien\n es: 'Sudeste Asiático'\n fr: 'Asie du Sud-Est'\n ja: 東南アジア\n it: 'Sud-est asiatico'\n zh-CN: 东南亚\n ru: 'Юго-Восточная Азия'\n uk: 'Південно-Східна Азія'\n pl: 'Azja Południowo-Wschodnia'\n wikiDataId: Q11708\n -\n id: 5\n name: 'Southern Africa'\n region_id: 1\n translations:\n ko: 남아프리카\n pt: 'África Austral'\n nl: 'Zuidelijk Afrika'\n hr: 'Južna Afrika'\n fa: 'جنوب آفریقا'\n de: Südafrika\n es: 'África austral'\n fr: 'Afrique australe'\n ja: 南部アフリカ\n it: 
'Africa australe'\n zh-CN: 南部非洲\n ru: 'Южная Африка'\n uk: 'Південна Африка'\n pl: 'Afryka Południowa'\n wikiDataId: Q27394\n -\n id: 14\n name: 'Southern Asia'\n region_id: 3\n translations:\n ko: 남아시아\n pt: 'Ásia Meridional'\n nl: Zuid-Azië\n hr: 'Južna Azija'\n fa: 'جنوب آسیا'\n de: Südasien\n es: 'Asia del Sur'\n fr: 'Asie du Sud'\n ja: 南アジア\n it: 'Asia meridionale'\n zh-CN: 南亚\n ru: 'Южная Азия'\n uk: 'Південна Азія'\n pl: 'Azja Południowa'\n wikiDataId: Q771405\n -\n id: 16\n name: 'Southern Europe'\n region_id: 4\n translations:\n ko: 남유럽\n pt: 'Europa meridional'\n nl: Zuid-Europa\n hr: 'Južna Europa'\n fa: 'جنوب اروپا'\n de: Südeuropa\n es: 'Europa del Sur'\n fr: 'Europe du Sud'\n ja: 南ヨーロッパ\n it: 'Europa meridionale'\n zh-CN: 南欧\n ru: 'Южная Европа'\n uk: 'Південна Європа'\n pl: 'Europa Południowa'\n wikiDataId: Q27449\n -\n id: 3\n name: 'Western Africa'\n region_id: 1\n translations:\n ko: 서아프리카\n pt: 'África Ocidental'\n nl: West-Afrika\n hr: 'Zapadna Afrika'\n fa: 'غرب آفریقا'\n de: Westafrika\n es: 'África Occidental'\n fr: "Afrique de l'Ouest"\n ja: 西アフリカ\n it: 'Africa occidentale'\n zh-CN: 西非\n ru: 'Западная Африка'\n uk: 'Західна Африка'\n pl: 'Afryka Zachodnia'\n wikiDataId: Q4412\n -\n id: 11\n name: 'Western Asia'\n region_id: 3\n translations:\n ko: 서아시아\n pt: 'Sudoeste Asiático'\n nl: Zuidwest-Azië\n hr: 'Jugozapadna Azija'\n fa: 'غرب آسیا'\n de: Vorderasien\n es: 'Asia Occidental'\n fr: "Asie de l'Ouest"\n ja: 西アジア\n it: 'Asia occidentale'\n zh-CN: 西亚\n ru: 'Западная Азия'\n uk: 'Західна Азія'\n pl: 'Azja Zachodnia'\n wikiDataId: Q27293\n -\n id: 17\n name: 'Western Europe'\n region_id: 4\n translations:\n ko: 서유럽\n pt: 'Europa Ocidental'\n nl: West-Europa\n hr: 'Zapadna Europa'\n fa: 'غرب اروپا'\n de: Westeuropa\n es: 'Europa Occidental'\n fr: "Europe de l'Ouest"\n ja: 西ヨーロッパ\n it: 'Europa occidentale'\n zh-CN: 西欧\n ru: 'Западная Европа'\n uk: 'Західна Європа'\n pl: 'Europa Zachodnia'\n wikiDataId: Q27496\n | 
dataset_sample\yaml\dr5hn_countries-states-cities-database\yml\subregions.yml | subregions.yml | YAML | 10,825 | 0.7 | 0 | 0 | react-lib | 240 | 2023-07-20T08:26:46.896843 | BSD-3-Clause | false | 3a4f49c767c0c7521eb013cc2360a754 |
blank_issues_enabled: false\ncontact_links:\n - name: Questions & support\n url: https://forum.duplicati.com\n about: Please ask and answer questions here.\n | dataset_sample\yaml\duplicati_duplicati\.github\ISSUE_TEMPLATE\config.yml | config.yml | YAML | 165 | 0.8 | 0 | 0 | vue-tools | 19 | 2025-06-06T13:52:00.586391 | GPL-3.0 | false | 7fa017ebeafcfbe82d35db98329d12f9 |
name: Tests\n\non: [pull_request, workflow_dispatch]\n\njobs:\n build_packages:\n name: Build packages\n runs-on: ${{ matrix.os }}\n strategy:\n fail-fast: false\n matrix:\n os: [macos-latest, windows-latest]\n # Super slow on ubuntu-latest\n # os: [ubuntu-latest, windows-latest, macos-latest]\n\n steps:\n - name: Set up .NET\n uses: actions/setup-dotnet@v4\n with:\n dotnet-version: 8.x\n\n - name: Checkout source\n uses: actions/checkout@v4\n\n - name: Create dummy changelog file\n working-directory: ./ReleaseBuilder\n run: echo "## Changed" > ./changelog-news.txt\n\n - name: Compile releases\n working-directory: ./ReleaseBuilder\n env:\n UPDATER_KEYFILE: ./testfile.key1\n WIX:\n run: dotnet run -- build debug --compile-only=true --git-stash-push=false --disable-docker-push=true --keep-builds=true --disable-authenticode=true --disable-signcode=true --disable-notarize-signing=true --disable-gpg-signing=true --disable-s3-upload=true --disable-github-upload=true --disable-update-server-reload=true --disable-discourse-announce=true --disable-clean-source=true --password=test1234 --version=2.0.0.99\n\n # Disabled, as a new test needs to be written for the new UI\n # selenium:\n # runs-on: ubuntu-latest\n # steps:\n # - uses: actions/checkout@v4\n # - name: Selenium\n # run: pipeline/selenium/test.sh\n | dataset_sample\yaml\duplicati_duplicati\.github\workflows\build-packages.yml | build-packages.yml | YAML | 1,448 | 0.8 | 0.02381 | 0.257143 | node-utils | 846 | 2025-01-07T13:36:03.864505 | Apache-2.0 | false | c7b60aa42a9148328a899a87b9ed1e23 |
name: Freebsd\non:\n push:\n branches: "**freebsd**"\n pull_request:\n branches: "**freebsd**"\n\njobs:\n zip:\n runs-on: macos-12\n steps:\n - uses: actions/checkout@v2\n - name: Download dotnet\n # See also https://github.com/sec/dotnet-core-freebsd-source-build\n # See also https://github.com/Thefrank/dotnet-freebsd-crossbuild\n run: curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/v6.0.401-11/dotnet-sdk-6.0.401-freebsd-x64.tar.gz -o dn.tgz\n - name: Build\n id: test\n uses: vmactions/freebsd-vm@v0\n with:\n usesh: true\n # sync: sshfs\n prepare: pkg install -y curl libinotify libunwind git cmake ninja bash wget icu\n run: |\n #!/bin/bash\n set -x\n set -e\n pwd\n ls -lah\n whoami\n env\n freebsd-version\n df -h\n dmesg | grep memory\n swapinfo\n dd if=/dev/zero of=/root/swap1 bs=1m count=20480\n chmod 0600 /root/swap1\n mdconfig -a -t vnode -f /root/swap1 -u 0\n swapon /dev/md0\n df -h\n swapinfo\n \n export DOTNET_CLI_TELEMETRY_OPTOUT=1\n BUILD_DIR=`pwd`\n cd /var/tmp\n mkdir dn\n cd dn\n echo Unpacking .NET\n tar -xzf ${BUILD_DIR}/dn.tgz\n cd ${BUILD_DIR}\n mkdir /var/tmp/nuget\n export RVERSION=v6.0.401-11\n export VERSION=6.0.9\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.AspNetCore.App.Runtime.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.AspNetCore.App.Runtime.freebsd-x64.${VERSION}.nupkg\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.NETCore.App.Host.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.NETCore.App.Host.freebsd-x64.${VERSION}.nupkg\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.NETCore.App.Runtime.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.NETCore.App.Runtime.freebsd-x64.${VERSION}.nupkg\n curl -L 
https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.NETCore.App.Crossgen2.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.NETCore.App.Crossgen2.freebsd-x64.${VERSION}.nupkg\n /var/tmp/dn/dotnet nuget add source /var/tmp/nuget/\n \n cp -R `pwd` /var/tmp/build\n cd /var/tmp/build\n \n echo Restore\n /var/tmp/dn/dotnet restore --runtime=freebsd-x64 Duplicati.sln\n echo Build\n /var/tmp/dn/dotnet build Duplicati.sln\n echo Publish\n /var/tmp/dn/dotnet publish -c Release --runtime=freebsd-x64 -o publish Duplicati.sln\n\n cp -R ./publish ${BUILD_DIR}/publish\n - name: Save Artifacts\n uses: actions/upload-artifact@v2\n with:\n name: duplicati-freebsd-x64\n path: publish\n tests:\n runs-on: macos-12\n steps:\n - uses: actions/checkout@v2\n - name: Download dotnet\n # See also https://github.com/sec/dotnet-core-freebsd-source-build\n # See also https://github.com/Thefrank/dotnet-freebsd-crossbuild\n run: curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/v6.0.401-11/dotnet-sdk-6.0.401-freebsd-x64.tar.gz -o dn.tgz\n - name: Run test\n id: test\n uses: vmactions/freebsd-vm@v0\n with:\n usesh: true\n # sync: sshfs\n prepare: pkg install -y curl libinotify libunwind git cmake ninja bash wget icu\n run: |\n #!/bin/bash\n set -x\n set -e\n pwd\n ls -lah\n whoami\n env\n freebsd-version\n df -h\n dmesg | grep memory\n swapinfo\n dd if=/dev/zero of=/root/swap1 bs=1m count=20480\n chmod 0600 /root/swap1\n mdconfig -a -t vnode -f /root/swap1 -u 0\n swapon /dev/md0\n df -h\n swapinfo\n \n export DOTNET_CLI_TELEMETRY_OPTOUT=1\n BUILD_DIR=`pwd`\n cd /var/tmp\n mkdir dn\n cd dn\n echo Unpacking .NET\n tar -xzf ${BUILD_DIR}/dn.tgz\n cd ${BUILD_DIR}\n mkdir /var/tmp/nuget\n export RVERSION=v6.0.401-11\n export VERSION=6.0.9\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.AspNetCore.App.Runtime.freebsd-x64.${VERSION}.nupkg -o 
/var/tmp/nuget/Microsoft.AspNetCore.App.Runtime.freebsd-x64.${VERSION}.nupkg\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.NETCore.App.Host.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.NETCore.App.Host.freebsd-x64.${VERSION}.nupkg\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.NETCore.App.Runtime.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.NETCore.App.Runtime.freebsd-x64.${VERSION}.nupkg\n curl -L https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download/${RVERSION}/Microsoft.NETCore.App.Crossgen2.freebsd-x64.${VERSION}.nupkg -o /var/tmp/nuget/Microsoft.NETCore.App.Crossgen2.freebsd-x64.${VERSION}.nupkg\n /var/tmp/dn/dotnet nuget add source /var/tmp/nuget/\n \n cp -R `pwd` /var/tmp/build\n cd /var/tmp/build\n\n echo Restore\n /var/tmp/dn/dotnet restore --runtime=freebsd-x64 Duplicati.sln\n echo Testing\n mkdir -p ${BUILD_DIR}/duplicati_testdata/logs\n mkdir -p /var/tmp/duplicati_testdata\n ln -s ${BUILD_DIR}/duplicati_testdata/logs /var/tmp/duplicati_testdata/logs\n export UNITTEST_BASEFOLDER=/var/tmp/duplicati_testdata\n /var/tmp/dn/dotnet test --runtime=freebsd-x64 --verbosity minimal Duplicati.sln\n - name: Archive Test Logs\n if: always()\n uses: actions/upload-artifact@v2\n with:\n name: test-logs\n path: duplicati_testdata/logs/*\n | dataset_sample\yaml\duplicati_duplicati\.github\workflows\freebsd.yml | freebsd.yml | YAML | 6,147 | 0.8 | 0.020979 | 0.059259 | react-lib | 995 | 2025-01-26T09:10:25.168640 | MIT | false | 96f9b6e06085e82768c78504956c46b4 |
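The four `curl` commands in each FreeBSD job follow a single URL pattern parameterized by `RVERSION` and `VERSION`. Sketched as a helper (hypothetical, for illustration; the base URL and package names are copied from the workflow):

```python
BASE = "https://github.com/Thefrank/dotnet-freebsd-crossbuild/releases/download"
PACKAGES = [
    "Microsoft.AspNetCore.App.Runtime",
    "Microsoft.NETCore.App.Host",
    "Microsoft.NETCore.App.Runtime",
    "Microsoft.NETCore.App.Crossgen2",
]

def nupkg_urls(rversion="v6.0.401-11", version="6.0.9"):
    """Build the cross-built FreeBSD nupkg URLs the workflow downloads
    into its local NuGet source."""
    return [f"{BASE}/{rversion}/{p}.freebsd-x64.{version}.nupkg" for p in PACKAGES]
```

Deriving the URLs from the two version variables keeps a future SDK bump to a two-line change instead of four edited `curl` invocations per job.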
name: Close inactive issues\non:\n schedule:\n - cron: "30 1 * * *"\n\njobs:\n close-issues:\n runs-on: ubuntu-latest\n permissions:\n issues: write\n actions: write\n steps:\n - uses: actions/stale@v9\n with:\n days-before-issue-stale: 15\n days-before-issue-close: 15\n stale-issue-label: "stale"\n stale-issue-message: "This issue is stale because it has been open for 15 days with no activity."\n close-issue-message: "This issue was closed because it has been inactive for 15 days since being marked as stale."\n days-before-pr-stale: -1\n days-before-pr-close: -1\n exempt-all-assignees: true\n only-labels: "pending user feedback"\n exempt-issue-labels: "bug,enhancement,good first issue,backend enhancement,backend issue,backup corruption,bounty,bugreport attached,core logic,docker,filters,help wanted,linux,localization,MacOS,mono,performance issue,reproduced,server side,ssl/tls issue,Synology,tests,translation,UI,windows"\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n operations-per-run: 500\n | dataset_sample\yaml\duplicati_duplicati\.github\workflows\stale.yml | stale.yml | YAML | 1,122 | 0.7 | 0.076923 | 0 | react-lib | 240 | 2024-03-26T08:42:27.696299 | MIT | false | 39411aa18581a3790119a90bfc9e7cca |
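The stale workflow's two 15-day windows compose into a simple state machine: an inactive issue is first labeled `stale`, then closed after a further 15 idle days. A rough model of that decision (plain Python, not the actual `actions/stale` implementation):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=15)   # days-before-issue-stale
CLOSE_AFTER = timedelta(days=15)   # days-before-issue-close (after stale label)

def stale_action(last_activity, today, is_stale):
    """Return the action the config above implies for one issue:
    'mark-stale', 'close', or 'wait'."""
    idle = today - last_activity
    if is_stale:
        return "close" if idle >= CLOSE_AFTER else "wait"
    return "mark-stale" if idle >= STALE_AFTER else "wait"
```

Note the config scopes this to issues labeled `pending user feedback` and disables PR handling entirely (`days-before-pr-stale: -1`).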
version: "2"\nrun:\n build-tags:\n - integration\nlinters:\n default: none\n enable:\n - asciicheck\n - bodyclose\n - depguard\n - durationcheck\n - errcheck\n - errorlint\n - forbidigo\n - gomoddirectives\n - gomodguard\n - gosec\n - govet\n - importas\n - ineffassign\n - misspell\n - nakedret\n - nilerr\n - noctx\n - nolintlint\n - staticcheck\n - unconvert\n - unused\n - wastedassign\n settings:\n depguard:\n rules:\n apache-licensed-code:\n list-mode: lax\n files:\n - '!**/x-pack/**/*.go'\n deny:\n - pkg: github.com/elastic/beats/v7/x-pack\n desc: Apache 2.0 licensed code cannot depend on Elastic licensed code (x-pack/).\n main:\n list-mode: lax\n deny:\n - pkg: math/rand$\n desc: superseded by math/rand/v2\n errcheck:\n check-type-assertions: true\n check-blank: false\n exclude-functions:\n - (github.com/elastic/elastic-agent-libs/mapstr.M).Delete\n - (github.com/elastic/elastic-agent-libs/mapstr.M).Put\n - github.com/elastic/elastic-agent-libs/logp.TestingSetup\n - github.com/elastic/elastic-agent-libs/logp.DevelopmentSetup\n errorlint:\n errorf: true\n asserts: true\n comparison: true\n forbidigo:\n forbid:\n - pattern: fmt.Print.*\n gomoddirectives:\n replace-allow-list:\n - github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/consumption/armconsumption\n - github.com/Shopify/sarama\n - github.com/apoydence/eachers\n - github.com/dop251/goja\n - github.com/dop251/goja_nodejs\n - github.com/fsnotify/fsevents\n - github.com/fsnotify/fsnotify\n - github.com/google/gopacket\n - github.com/insomniacslk/dhcp\n - github.com/meraki/dashboard-api-go/v3\n - github.com/snowflakedb/gosnowflake\n replace-local: false\n gomodguard:\n blocked:\n modules:\n - github.com/pkg/errors:\n recommendations:\n - errors\n - fmt\n reason: This package is deprecated, use `fmt.Errorf` with `%w` instead\n - gotest.tools/v3:\n recommendations:\n - github.com/stretchr/testify\n reason: Use one assertion library consistently across the codebase\n - github.com/google/uuid:\n 
recommendations:\n - github.com/gofrs/uuid/v5\n reason: Use one uuid library consistently across the codebase\n gosec:\n excludes:\n - G306\n - G404\n - G401\n - G501\n - G505\n nolintlint:\n require-explanation: true\n require-specific: false\n allow-unused: false\n prealloc:\n simple: false\n range-loops: true\n for-loops: true\n staticcheck:\n checks:\n - -ST1005\n - all\n exclusions:\n generated: lax\n presets:\n - comments\n - common-false-positives\n - legacy\n - std-error-handling\n rules:\n - linters:\n - staticcheck\n text: 'ST1003:'\n - linters:\n - forbidigo\n path: (.*magefile.go|.*dev-tools/mage/.*)\n paths:\n - third_party$\n - builtin$\n - examples$\nissues:\n max-issues-per-linter: 50\n max-same-issues: 3\nformatters:\n enable:\n - goimports\n settings:\n goimports:\n local-prefixes:\n - github.com/elastic\n exclusions:\n generated: lax\n paths:\n - third_party$\n - builtin$\n - examples$\n | dataset_sample\yaml\elastic_beats\.golangci.yml | .golangci.yml | YAML | 3,546 | 0.95 | 0.014085 | 0 | awesome-app | 517 | 2024-10-09T13:23:05.627149 | GPL-3.0 | false | 83f5e6a79dcd48d2284ab93e5686d419 |
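The `apache-licensed-code` depguard rule enforces a licensing boundary: Apache-2.0 code (everything outside `x-pack/`) must not import the Elastic-licensed `x-pack` packages. A rough sketch of that check (plain Python for illustration, not how golangci-lint implements depguard):

```python
def depguard_violations(file_path, imports):
    """Flag x-pack imports in files outside x-pack/, per the
    'apache-licensed-code' rule above."""
    if file_path.startswith("x-pack/") or "/x-pack/" in file_path:
        return []  # Elastic-licensed code may depend on x-pack
    return [imp for imp in imports
            if imp.startswith("github.com/elastic/beats/v7/x-pack")]
```

The rule's `list-mode: lax` means only the explicitly denied prefix is blocked; everything else is allowed by default.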
commands_restrictions:\n backport:\n conditions:\n - or:\n - sender-permission>=write\n - sender=github-actions[bot]\nqueue_rules:\n - name: default\n merge_method: squash\n conditions:\n - check-success=buildkite/beats\ndefaults:\n actions:\n backport:\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n assignees:\n - "{{ author }}"\n labels:\n - "backport"\npull_request_rules:\n - name: self-assign PRs\n conditions:\n - -merged\n - -closed\n - "#assignee=0"\n actions:\n assign:\n add_users:\n - "{{ author }}"\n - name: forward-port patches to main branch\n conditions:\n - merged\n - label=forwardport-main\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "main"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: ask to resolve conflict\n conditions:\n - -merged\n - -closed\n - conflict\n - -author=apmmachine\n actions:\n comment:\n message: |\n This pull request is now in conflict. Could you fix it? 🙏\n To fix up this pull request, you can check it out locally. 
See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/\n ```\n git fetch upstream\n git checkout -b {{head}} upstream/{{head}}\n git merge upstream/{{base}}\n git push upstream {{head}}\n ```\n - name: close automated pull requests with bump updates if any conflict\n conditions:\n - -merged\n - -closed\n - conflict\n - author=apmmachine\n - label=automation\n actions:\n close:\n message: |\n This pull request has been automatically closed by Mergify.\n There are some other up-to-date pull requests.\n - name: automatic approval for automated pull requests with bump updates\n conditions:\n - author=apmmachine\n - check-success=buildkite/beats\n - label=automation\n - files~=^testing/environments/snapshot.*\.yml$\n actions:\n review:\n type: APPROVE\n message: Automatically approving mergify\n - name: automatic squash and merge with success checks and the files matching the regex ^testing/environments/snapshot* are modified.\n conditions:\n - check-success=buildkite/beats\n - label=automation\n - files~=^testing/environments/snapshot.*\.yml$\n - "#approved-reviews-by>=1"\n actions:\n queue:\n name: default\n - name: delete upstream branch after merging changes on testing/environments/snapshot* or it's closed\n conditions:\n - or:\n - merged\n - closed\n - and:\n - label=automation\n - head~=^updatecli_bump-elastic-stack\n - files~=^testing/environments/snapshot.*\.yml$\n actions:\n delete_head_branch:\n - name: delete upstream branch after merging changes on .go-version or it's closed\n conditions:\n - or:\n - merged\n - closed\n - and:\n - label=automation\n - head~=^updatecli_bump-golang-version\n - files~=^\.go-version$\n actions:\n delete_head_branch:\n - name: automatic approval for mergify pull requests with changes in bump-rules\n conditions:\n - author=mergify[bot]\n - check-success=buildkite/beats\n - label=automation\n - files~=^\.mergify\.yml$\n - head~=^add-backport-next.*\n actions:\n review:\n type: APPROVE\n message: 
Automatically approving mergify\n - name: automatic squash and merge with success checks and the files matching the regex ^.mergify.yml is modified.\n conditions:\n - check-success=buildkite/beats\n - label=automation\n - files~=^\.mergify\.yml$\n - head~=^add-backport-next.*\n - "#approved-reviews-by>=1"\n actions:\n queue:\n name: default\n - name: delete upstream branch with changes on ^.mergify.yml that has been merged or closed\n conditions:\n - or:\n - merged\n - closed\n - and:\n - label=automation\n - head~=^add-backport-next.*\n - files~=^\.mergify\.yml$\n actions:\n delete_head_branch:\n - name: notify the backport policy\n conditions:\n - -label~=^backport\n - base=main\n - -merged\n - -closed\n actions:\n comment:\n message: |\n This pull request does not have a backport label.\n If this is a bug or security fix, could you label this PR @{{author}}? 🙏.\n For such, you'll need to label your PR with:\n * The upcoming major version of the Elastic Stack\n * The upcoming minor version of the Elastic Stack (if you're not pushing a breaking change)\n\n To fixup this pull request, you need to add the backport labels for the needed\n branches, such as:\n * `backport-8./d` is the label to automatically backport to the `8./d` branch. `/d` is the digit\n * `backport-active-all` is the label that automatically backports to all active branches.\n * `backport-active-8` is the label that automatically backports to all active minor branches for the 8 major.\n * `backport-active-9` is the label that automatically backports to all active minor branches for the 9 major.\n\n - name: notify the backport has not been merged yet\n conditions:\n - -merged\n - -closed\n - author=mergify[bot]\n - "#check-success>0"\n - schedule=Mon-Mon 06:00-10:00[Europe/Paris]\n - "#assignee>=1"\n actions:\n comment:\n message: |\n This pull request has not been merged yet. Could you please review and merge it @{{ assignee | join(', @') }}? 
🙏\n - name: backport patches to 7.17 branch\n conditions:\n - merged\n - label~=^(backport-v7.17.0|backport-7.17)$\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "7.17"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.3 branch\n conditions:\n - merged\n - label=backport-v8.3.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.3"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.4 branch\n conditions:\n - merged\n - label=backport-v8.4.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.4"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.5 branch\n conditions:\n - merged\n - label=backport-v8.5.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.5"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.6 branch\n conditions:\n - merged\n - label=backport-v8.6.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.6"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.7 branch\n conditions:\n - merged\n - label=backport-v8.7.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.7"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.8 branch\n conditions:\n - merged\n - label=backport-v8.8.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.8"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.9 branch\n conditions:\n - merged\n - label=backport-v8.9.0\n 
actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.9"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.10 branch\n conditions:\n - merged\n - label=backport-v8.10.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.10"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.11 branch\n conditions:\n - merged\n - label=backport-v8.11.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.11"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.12 branch\n conditions:\n - merged\n - label=backport-v8.12.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.12"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.13 branch\n conditions:\n - merged\n - label=backport-v8.13.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.13"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.14 branch\n conditions:\n - merged\n - label=backport-v8.14.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.14"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.15 branch\n conditions:\n - merged\n - label~=^(backport-v8.15.0|backport-8.15)$\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.15"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.16 branch\n conditions:\n - merged\n - label~=^(backport-v8.16.0|backport-8.16)$\n actions:\n backport:\n assignees:\n - "{{ author }}"\n 
branches:\n - "8.16"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.17 branch\n conditions:\n - merged\n - label=backport-8.17\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.17"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.18 branch\n conditions:\n - merged\n - label=backport-8.18\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "8.18"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n - name: backport patches to 8.19 branch\n conditions:\n - merged\n - label=backport-8.19\n actions:\n backport:\n branches:\n - "8.19"\n - name: backport patches to 9.0 branch\n conditions:\n - merged\n - label=backport-9.0\n actions:\n backport:\n assignees:\n - "{{ author }}"\n branches:\n - "9.0"\n labels:\n - "backport"\n title: "[{{ destination_branch }}](backport #{{ number }}) {{ title }}"\n | dataset_sample\yaml\elastic_beats\.mergify.yml | .mergify.yml | YAML | 12,003 | 0.8 | 0.016667 | 0.014354 | react-lib | 152 | 2025-02-10T23:08:07.586705 | BSD-3-Clause | false | e6e4e13b7cb6198dd6071c79c1f3938c |
# Enable coverage report message for diff on commit\n# Docs can be found here: https://docs.codecov.io/v4.3.0/docs/commit-status\ncoverage:\n status:\n project:\n default:\n target: auto\n # Overall coverage should never drop more than 0.5%\n threshold: 0.5\n base: auto\n branches: null\n if_no_uploads: error\n if_not_found: success\n if_ci_failed: error\n only_pulls: false\n flags: null\n paths: null\n patch:\n default:\n target: auto\n # Allows PRs without tests, overall stats count\n threshold: 100\n base: auto\n branches: null\n if_no_uploads: error\n if_not_found: success\n if_ci_failed: error\n only_pulls: false\n flags: null\n paths: null\n\n# Disable comments on Pull Requests\ncomment: false\n | dataset_sample\yaml\elastic_beats\codecov.yml | codecov.yml | YAML | 848 | 0.8 | 0.030303 | 0.15625 | python-kit | 324 | 2023-12-28T07:54:03.708131 | Apache-2.0 | false | 5ba01b975c632c419f1e7d89642b0af0 |
version: '3.1'\n\nservices:\n\n mysql:\n image: mysql:5.7\n ports:\n - 3308:3306\n environment:\n MYSQL_ROOT_PASSWORD: password\n MYSQL_DATABASE: wordpress_test\n\n wordpress_phpunit:\n image: pojome/phpunit-local\n environment:\n PHPUNIT_DB_HOST: mysql\n volumes:\n - .:/app\n - testsuite:/tmp\n depends_on:\n - mysql\n\nvolumes:\n testsuite:\n | dataset_sample\yaml\elementor_elementor\docker-compose.yml | docker-compose.yml | YAML | 380 | 0.7 | 0 | 0 | python-kit | 491 | 2024-03-08T18:27:25.593457 | MIT | false | c3f47e9b48f4aef0253805cfdc8d785d |
# To get started with Dependabot version updates, you'll need to specify which\n# package ecosystems to update and where the package manifests are located.\n# Please see the documentation for all configuration options:\n# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file\n\nversion: 2\nupdates:\n - package-ecosystem: "NuGet" # See documentation for possible values\n directory: "/" # Location of package manifests\n schedule:\n interval: "weekly"\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\dependabot.yml | dependabot.yml | YAML | 527 | 0.8 | 0.272727 | 0.4 | awesome-app | 562 | 2023-10-22T09:01:18.866219 | GPL-3.0 | false | e9f598548923d7f40accf5bcdec7dca4 |
name: Append Footer to Issue with Bounty Label\n\non:\n issues:\n types: [labeled]\n\njobs:\n append-footer:\n if: github.event.label.name == 'bounty'\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repo\n uses: actions/checkout@v2\n\n - name: Read Footer Content\n id: footer\n uses: jaywcjlove/github-action-read-file@main\n with:\n path: docs/bounty-footer.md\n \n - name: Append Footer to Issue\n uses: actions/github-script@v7\n with:\n github-token: ${{secrets.GITHUB_TOKEN}}\n script: |\n const issueComment = `${{ steps.footer.outputs.content }}`;\n await github.rest.issues.createComment({\n issue_number: context.issue.number,\n owner: context.repo.owner,\n repo: context.repo.repo,\n body: issueComment\n });\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\bounty.yml | bounty.yml | YAML | 882 | 0.7 | 0.03125 | 0 | awesome-app | 138 | 2025-04-12T22:11:12.514655 | BSD-3-Clause | false | 30a3138416d1b6d24293059ca174c1ae |
name: Build and deploy Elsa Server + Studio reference application as Docker image\non:\n workflow_dispatch:\n push:\n branches:\n - main\n\njobs:\n push_to_registry:\n name: Push Docker image to Docker Hub\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v3\n\n - name: Set up QEMU\n uses: docker/setup-qemu-action@v2\n with:\n platforms: all\n\n - name: Set up Docker Buildx\n id: buildx\n uses: docker/setup-buildx-action@v2\n\n - name: Docker meta\n id: meta\n uses: docker/metadata-action@v3\n with:\n # list of Docker images to use as base name for tags\n images: |\n elsaworkflows/elsa-server-and-studio-v3-4-0-preview\n flavor: |\n latest=true\n # generate Docker tags based on the following events/attributes\n tags: |\n type=sha\n\n - name: Login to DockerHub\n if: github.event_name != 'pull_request'\n uses: docker/login-action@v2\n with:\n username: ${{ secrets.DOCKER_USER }}\n password: ${{ secrets.DOCKER_PASS }}\n\n - name: Build and push\n id: docker_build\n uses: docker/build-push-action@v3\n with:\n builder: ${{ steps.buildx.outputs.name }}\n context: .\n file: ./docker/ElsaServerAndStudio.Dockerfile\n platforms: linux/amd64,linux/arm64,linux/arm/v7\n push: ${{ github.event_name != 'pull_request' }}\n tags: ${{ steps.meta.outputs.tags }}\n labels: ${{ steps.meta.outputs.labels }}\n cache-from: type=local,src=/tmp/.buildx-cache\n cache-to: type=local,dest=/tmp/.buildx-cache\n\n - name: Image digest\n run: echo ${{ steps.docker_build.outputs.digest }}\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\elsa-server-and-studio.yml | elsa-server-and-studio.yml | YAML | 1,796 | 0.8 | 0.032787 | 0.037736 | node-utils | 339 | 2024-08-15T05:06:22.013726 | GPL-3.0 | false | 139f0133e735bd57ca9c8c8ed5a6748a |
name: Build and deploy Elsa Server reference application as Docker image\non:\n workflow_dispatch:\n push:\n branches:\n - main\n\njobs:\n push_to_registry:\n name: Push Docker image to Docker Hub\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v3\n\n - name: Set up QEMU\n uses: docker/setup-qemu-action@v2\n with:\n platforms: all\n\n - name: Set up Docker Buildx\n id: buildx\n uses: docker/setup-buildx-action@v2\n\n - name: Docker meta\n id: meta\n uses: docker/metadata-action@v3\n with:\n # list of Docker images to use as base name for tags\n images: |\n elsaworkflows/elsa-server-v3-4-0-preview\n flavor: |\n latest=true\n # generate Docker tags based on the following events/attributes\n tags: |\n type=sha\n\n - name: Login to DockerHub\n if: github.event_name != 'pull_request'\n uses: docker/login-action@v2\n with:\n username: ${{ secrets.DOCKER_USER }}\n password: ${{ secrets.DOCKER_PASS }}\n\n - name: Build and push\n id: docker_build\n uses: docker/build-push-action@v3\n with:\n builder: ${{ steps.buildx.outputs.name }}\n context: .\n file: ./docker/ElsaServer.Dockerfile\n platforms: linux/amd64,linux/arm64,linux/arm/v7\n push: ${{ github.event_name != 'pull_request' }}\n tags: ${{ steps.meta.outputs.tags }}\n labels: ${{ steps.meta.outputs.labels }}\n cache-from: type=local,src=/tmp/.buildx-cache\n cache-to: type=local,dest=/tmp/.buildx-cache\n\n - name: Image digest\n run: echo ${{ steps.docker_build.outputs.digest }}\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\elsa-server.yml | elsa-server.yml | YAML | 1,767 | 0.8 | 0.032787 | 0.037736 | node-utils | 501 | 2023-09-11T01:58:57.870125 | MIT | false | f9d61c547db27698f3dabfb4901bd810 |
name: Build and deploy Elsa Studio reference application as Docker image\non:\n workflow_dispatch:\n push:\n branches:\n - main\n\njobs:\n push_to_registry:\n name: Push Docker image to Docker Hub\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v3\n\n - name: Set up QEMU\n uses: docker/setup-qemu-action@v2\n with:\n platforms: all\n\n - name: Set up Docker Buildx\n id: buildx\n uses: docker/setup-buildx-action@v2\n\n - name: Docker meta\n id: meta\n uses: docker/metadata-action@v3\n with:\n # list of Docker images to use as base name for tags\n images: |\n elsaworkflows/elsa-studio-v3-4-0-preview\n flavor: |\n latest=true\n # generate Docker tags based on the following events/attributes\n tags: |\n type=sha\n\n - name: Login to DockerHub\n if: github.event_name != 'pull_request'\n uses: docker/login-action@v2\n with:\n username: ${{ secrets.DOCKER_USER }}\n password: ${{ secrets.DOCKER_PASS }}\n\n - name: Build and push\n id: docker_build\n uses: docker/build-push-action@v3\n with:\n builder: ${{ steps.buildx.outputs.name }}\n context: .\n file: ./docker/ElsaStudio.Dockerfile\n platforms: linux/amd64,linux/arm64,linux/arm/v7\n push: ${{ github.event_name != 'pull_request' }}\n tags: ${{ steps.meta.outputs.tags }}\n labels: ${{ steps.meta.outputs.labels }}\n cache-from: type=local,src=/tmp/.buildx-cache\n cache-to: type=local,dest=/tmp/.buildx-cache\n\n - name: Image digest\n run: echo ${{ steps.docker_build.outputs.digest }}\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\elsa-studio.yml | elsa-studio.yml | YAML | 1,767 | 0.8 | 0.032787 | 0.037736 | vue-tools | 84 | 2023-12-25T22:37:26.127083 | Apache-2.0 | false | 46ff9091ca502a1fc4223376f67f406c |
name: Elsa 3 Packages\non:\n workflow_dispatch:\n push:\n branches:\n - 'main'\n - 'bug/*'\n - 'perf/*'\n - 'patch/*'\n - 'feat/*'\n - 'enh/*'\n - 'rc/*'\n release:\n types: [ prereleased, published ]\nenv:\n base_version: '3.5.0'\n feedz_feed_source: 'https://f.feedz.io/elsa-workflows/elsa-3/nuget/index.json'\n nuget_feed_source: 'https://api.nuget.org/v3/index.json'\n\njobs:\n build:\n name: Build packages\n runs-on: ubuntu-latest\n timeout-minutes: 30\n steps:\n - name: Extract branch name\n run: |\n BRANCH_NAME=${{ github.ref }} # e.g., refs/heads/main\n BRANCH_NAME=${BRANCH_NAME#refs/heads/} # remove the refs/heads/ prefix\n # Extract the last part after the last slash of the branch name, if any, e.g., feature/issue-123 -> issue-123 and use it as the version prefix.\n PACKAGE_PREFIX=$(echo $BRANCH_NAME | rev | cut -d/ -f1 | rev | tr '_' '-')\n \n # If the branch name is main, use the preview version. Otherwise, use the branch name as the version prefix.\n if [[ "${BRANCH_NAME}" == "main" || "${BRANCH_NAME}" =~ ^rc/ ]]; then\n PACKAGE_PREFIX="preview"\n fi\n \n echo "Ref: ${{ github.ref }}"\n echo "Branch name: ${BRANCH_NAME}"\n echo "Package prefix: ${PACKAGE_PREFIX}"\n echo "BRANCH_NAME=${BRANCH_NAME}" >> $GITHUB_ENV\n echo "PACKAGE_PREFIX=${PACKAGE_PREFIX}" >> $GITHUB_ENV\n - name: Checkout\n uses: actions/checkout@v4\n - name: Verify commit exists in branch\n run: |\n if [[ "${{ github.ref }}" == refs/tags/* && "${{ github.event_name }}" == "release" && ("${{ github.event.action }}" == "published" || "${{ github.event.action }}" == "prereleased") ]]; then\n git fetch --no-tags --prune --depth=1 origin +refs/heads/*:refs/remotes/origin/*\n git branch --remote --contains | grep origin/rc/3.4.0\n else\n git fetch --no-tags --prune --depth=1 origin +refs/heads/*:refs/remotes/origin/*\n git branch --remote --contains | grep origin/${BRANCH_NAME}\n fi\n - name: Set VERSION variable\n run: |\n if [[ "${{ github.ref }}" == refs/tags/* && "${{ github.event_name }}" 
== "release" && ("${{ github.event.action }}" == "published" || "${{ github.event.action }}" == "prereleased") ]]; then\n TAG_NAME=${{ github.ref }} # e.g., refs/tags/3.0.0\n TAG_NAME=${TAG_NAME#refs/tags/} # remove the refs/tags/ prefix\n echo "VERSION=${TAG_NAME}" >> $GITHUB_ENV\n else\n echo "VERSION=${{env.base_version}}-${PACKAGE_PREFIX}.${{github.run_number}}" >> $GITHUB_ENV\n fi\n # - name: Set up JDK 17\n # uses: actions/setup-java@v2\n # with:\n # java-version: '17'\n # distribution: 'adopt'\n - uses: actions/setup-dotnet@v4\n with:\n dotnet-version: 9.x\n # - name: Install SonarScanner for .NET\n # run: dotnet tool install --global dotnet-sonarscanner\n # - name: Install Coverlet for code coverage\n # run: dotnet tool install --global coverlet.console\n # - name: Begin SonarCloud analysis\n # env:\n # SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}\n # run: dotnet sonarscanner begin /k:"elsa-workflows_elsa-core" /o:"elsa-workflows" /d:sonar.host.url="https://sonarcloud.io" /d:sonar.token="${{ secrets.SONAR_TOKEN }}" /d:sonar.exclusions=**/obj/**,**/*.dll,build/**,samples/**,src/apps/** /d:"sonar.verbose=true" /d:sonar.cs.opencover.reportsPaths=**/testresults/**/coverage.opencover.xml\n - name: Compile+Test+Pack\n run: ./build.sh Compile+Test+Pack --version ${VERSION} --analyseCode true\n # - name: End SonarCloud analysis\n # env:\n # SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}\n # run: dotnet sonarscanner end /d:sonar.token="${{ secrets.SONAR_TOKEN }}"\n - name: Upload artifact\n uses: actions/upload-artifact@v4\n with:\n name: elsa-nuget-packages\n path: packages/*nupkg\n if: ${{ github.event_name == 'release' || github.event_name == 'push'}}\n \n publish_preview_feedz:\n name: Publish to feedz.io\n needs: build\n runs-on: ubuntu-latest\n timeout-minutes: 10\n if: ${{ github.event_name == 'release' || github.event_name == 'push'}}\n steps:\n - name: Download Packages\n uses: actions/download-artifact@v4.1.7\n with:\n name: elsa-nuget-packages\n\n - name: Publish to 
feedz.io\n run: dotnet nuget push *.nupkg -k ${{ secrets.FEEDZ_API_KEY }} -s ${{ env.feedz_feed_source }} --skip-duplicate\n\n publish_nuget:\n name: Publish release to nuget.org\n needs: build\n runs-on: ubuntu-latest\n timeout-minutes: 10\n if: ${{ github.event.action == 'published' }}\n steps:\n - name: Download Packages\n uses: actions/download-artifact@v4.1.7\n with:\n name: elsa-nuget-packages\n\n - name: Publish to nuget.org\n run: dotnet nuget push *.nupkg -k ${{ secrets.NUGET_API_KEY }} -s ${{ env.nuget_feed_source }} --skip-duplicate\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\packages.yml | packages.yml | YAML | 5,215 | 0.8 | 0.075 | 0.168142 | vue-tools | 63 | 2024-04-26T18:02:25.697333 | MIT | false | c27162e5a195cb1d55b9423cb20ce1d6 |
# ------------------------------------------------------------------------------\n# <auto-generated>\n#\n# This code was generated.\n#\n# - To turn off auto-generation set:\n#\n# [CustomGitHubActions (AutoGenerate = false)]\n#\n# - To trigger manual generation invoke:\n#\n# nuke --generate-configuration GitHubActions_pr --host GitHubActions\n#\n# </auto-generated>\n# ------------------------------------------------------------------------------\n\nname: pr\n\non:\n pull_request:\n branches:\n - main\n paths:\n - '**/*'\n\nconcurrency:\n group: ${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.run_id }}\n cancel-in-progress: true\n\njobs:\n ubuntu-latest:\n name: ubuntu-latest\n runs-on: ubuntu-latest\n steps:\n - uses: actions/setup-dotnet@v4\n with:\n dotnet-version: |\n 9.x\n - uses: actions/checkout@v4\n - name: 'Run: Compile, Test, Pack'\n run: ./build.cmd Compile Test Pack\n | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\pr.yml | pr.yml | YAML | 1,002 | 0.8 | 0 | 0.405405 | awesome-app | 719 | 2023-08-28T13:32:55.350692 | GPL-3.0 | false | 23651e9587b9881c5ac89a93b7b38f76 |
name: 'Close stale issues and PRs'\non:\n schedule:\n - cron: '30 1 * * *'\n\njobs:\n stale:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/stale@v9\n with:\n stale-issue-message: 'Hi there! This issue has been marked as stale due to inactivity. If this is still relevant or needs further discussion, please let us know. Otherwise, it will be closed in 7 days. Thank you for your understanding.'\n close-issue-message: 'This issue has been automatically closed due to prolonged inactivity. If you still encounter this problem or have further questions, please feel free to reopen the issue or create a new one. Thank you for your contributions!'\n days-before-stale: 30\n days-before-close: 7\n any-of-labels: 'awaiting-feedback,awaiting-answers,needs-more-info,needs-demo,needs-repro' | dataset_sample\yaml\elsa-workflows_elsa-core\.github\workflows\stale.yml | stale.yml | YAML | 843 | 0.7 | 0.133333 | 0 | vue-tools | 375 | 2023-08-05T10:03:19.698066 | MIT | false | 567d9bd512d6333f6031615b768decdd |
services:\n postgres:\n image: postgres:latest\n command: -c 'max_connections=2000'\n environment:\n POSTGRES_USER: elsa\n POSTGRES_PASSWORD: elsa\n POSTGRES_DB: elsa\n volumes:\n - postgres-data:/var/lib/postgresql/data\n ports:\n - "5432:5432"\n \n rabbitmq:\n image: "rabbitmq:3-management"\n ports:\n - "15672:15672"\n - "5672:5672"\n \n redis:\n image: redis:latest\n ports:\n - "127.0.0.1:6379:6379"\n\n elsa-server:\n pull_policy: always\n build:\n context: ../.\n dockerfile: ./docker/ElsaServer-Datadog.Dockerfile\n depends_on:\n - postgres\n - rabbitmq\n - redis\n - otel-collector\n environment:\n DD_AGENT_HOST: datadog-agent\n DD_ENV: development\n DD_TRACE_DEBUG: true\n DD_TRACE_OTEL_ENABLED: true\n DD_SERVICE: "Elsa Server"\n DD_VERSION: "3.3.0"\n # OpenTelemetry environment variables\n OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector:4317" # Point to OpenTelemetry Collector\n OTEL_EXPORTER_OTLP_PROTOCOL: "grpc" # Use gRPC for OTLP\n OTEL_TRACES_EXPORTER: "otlp"\n OTEL_METRICS_EXPORTER: "otlp"\n OTEL_LOGS_EXPORTER: "otlp"\n OTEL_RESOURCE_ATTRIBUTES: "service.name=elsa-server-local,service.version=3.2.1-blueberry,deployment.environment=development"\n OTEL_DOTNET_AUTO_TRACES_ADDITIONAL_SOURCES: "Elsa.Workflows"\n OTEL_DOTNET_AUTO_INSTRUMENTATION_ENABLED: "true"\n OTEL_LOG_LEVEL: "debug"\n OTEL_DOTNET_AUTO_RESOURCE_DETECTOR_ENABLED: "true"\n OTEL_DOTNET_AUTO_LOGS_CONSOLE_EXPORTER_ENABLED: "true"\n OTEL_DOTNET_AUTO_METRICS_CONSOLE_EXPORTER_ENABLED: "true"\n OTEL_DOTNET_AUTO_TRACES_CONSOLE_EXPORTER_ENABLED: "true"\n \n ASPNETCORE_ENVIRONMENT: Development\n PYTHONNET_PYDLL: /opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/bin/python3.11\n PYTHONNET_RUNTIME: coreclr\n CONNECTIONSTRINGS__POSTGRESQL: "Server=postgres;Username=elsa;Database=elsa;Port=5432;Password=elsa;SSLMode=Prefer"\n CONNECTIONSTRINGS__RABBITMQ: "amqp://guest:guest@rabbitmq:5672/"\n CONNECTIONSTRINGS__REDIS: "redis:6379"\n DISTRIBUTEDLOCKPROVIDER: "Postgres"\n 
ports:\n - "13000:8080"\n \n elsa-studio:\n pull_policy: always\n build:\n context: ../.\n dockerfile: ./docker/ElsaStudio.Dockerfile\n environment:\n ASPNETCORE_ENVIRONMENT: Development\n ELSASERVER__URL: "http://localhost:13000/elsa/api"\n ports:\n - "14000:8080"\n \n otel-collector:\n image: otel/opentelemetry-collector-contrib:latest\n volumes:\n - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml\n command: [ "--config", "/etc/otel-collector-config.yaml", "--feature-gates", "-component.UseLocalHostAsDefaultHost" ]\n environment:\n DD_API_KEY: "secret api key"\n DD_SITE: "datadoghq.eu"\n ports:\n - "13133:13133"\n - "4317:4317"\n - "4318:4318"\n \n datadog-agent:\n image: datadog/agent:latest\n environment:\n DD_API_KEY: "<hidden>"\n DD_SITE: "datadoghq.eu"\n DD_HOSTNAME: "otel-collector"\n DD_LOGS_ENABLED: "true"\n DD_OTLP_CONFIG_LOGS_ENABLED: "true"\n DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL: "true"\n DD_APM_ENABLED: "true"\n DD_APM_NON_LOCAL_TRAFFIC: "true"\n DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT: 0.0.0.0:4317 # The Datadog Agent expects traces from OpenTelemetry Collector\n DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_HTTP_ENDPOINT: 0.0.0.0:4318\n\n # Service autodiscovery\n DD_AC_INCLUDE: "name:postgres,name:rabbitmq,name:redis,name:elsa-server"\n DD_AC_EXCLUDE: "name:datadog-agent"\n\n ports:\n - "8126:8126"\n - "14317:4317"\n - "14318:4318"\n\nvolumes:\n postgres-data: | dataset_sample\yaml\elsa-workflows_elsa-core\docker\docker-compose-datadog+otel-collector.yml | docker-compose-datadog+otel-collector.yml | YAML | 3,737 | 0.8 | 0.008772 | 0.019048 | awesome-app | 100 | 2024-02-24T07:56:57.401201 | BSD-3-Clause | false | 946776107614ce22f6a59b8bf42a4331 |
---\nversion: '2'\nservices:\n\n broker:\n image: confluentinc/cp-kafka:7.7.1\n hostname: broker\n container_name: broker\n ports:\n - "9092:9092"\n - "9101:9101"\n environment:\n KAFKA_NODE_ID: 1\n KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'\n KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'\n KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1\n KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0\n KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1\n KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1\n KAFKA_JMX_PORT: 9101\n KAFKA_JMX_HOSTNAME: localhost\n KAFKA_PROCESS_ROLES: 'broker,controller'\n KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'\n KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'\n KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'\n KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'\n KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'\n # Replace CLUSTER_ID with a unique base64 UUID using "bin/kafka-storage.sh random-uuid" \n # See https://docs.confluent.io/kafka/operations-tools/kafka-tools.html#kafka-storage-sh\n CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'\n\n schema-registry:\n image: confluentinc/cp-schema-registry:7.7.1\n hostname: schema-registry\n container_name: schema-registry\n depends_on:\n - broker\n ports:\n - "8081:8081"\n environment:\n SCHEMA_REGISTRY_HOST_NAME: schema-registry\n SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'\n SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081\n\n connect:\n image: cnfldemos/cp-server-connect-datagen:0.6.4-7.6.0\n hostname: connect\n container_name: connect\n depends_on:\n - broker\n - schema-registry\n ports:\n - "8083:8083"\n environment:\n CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'\n CONNECT_REST_ADVERTISED_HOST_NAME: connect\n CONNECT_GROUP_ID: compose-connect-group\n CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs\n CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1\n 
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000\n CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets\n CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1\n CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status\n CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1\n CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter\n CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter\n CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081\n # CLASSPATH required due to CC-2422\n CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.7.1.jar\n CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"\n CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"\n CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"\n CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR\n\n control-center:\n image: confluentinc/cp-enterprise-control-center:7.7.1\n hostname: control-center\n container_name: control-center\n depends_on:\n - broker\n - schema-registry\n - connect\n - ksqldb-server\n ports:\n - "9021:9021"\n environment:\n CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'\n CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'\n CONTROL_CENTER_CONNECT_HEALTHCHECK_ENDPOINT: '/connectors'\n CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"\n CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"\n CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"\n CONTROL_CENTER_REPLICATION_FACTOR: 1\n CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1\n CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1\n CONFLUENT_METRICS_TOPIC_REPLICATION: 1\n PORT: 9021\n\n ksqldb-server:\n image: confluentinc/cp-ksqldb-server:7.7.1\n hostname: ksqldb-server\n container_name: ksqldb-server\n depends_on:\n - broker\n - connect\n ports:\n - 
"8088:8088"\n environment:\n KSQL_CONFIG_DIR: "/etc/ksql"\n KSQL_BOOTSTRAP_SERVERS: "broker:29092"\n KSQL_HOST_NAME: ksqldb-server\n KSQL_LISTENERS: "http://0.0.0.0:8088"\n KSQL_CACHE_MAX_BYTES_BUFFERING: 0\n KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"\n KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"\n KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"\n KSQL_KSQL_CONNECT_URL: "http://connect:8083"\n KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1\n KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'\n KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'\n\n ksqldb-cli:\n image: confluentinc/cp-ksqldb-cli:7.7.1\n container_name: ksqldb-cli\n depends_on:\n - broker\n - connect\n - ksqldb-server\n entrypoint: /bin/sh\n tty: true\n\n ksql-datagen:\n image: confluentinc/ksqldb-examples:7.7.1\n hostname: ksql-datagen\n container_name: ksql-datagen\n depends_on:\n - ksqldb-server\n - broker\n - schema-registry\n - connect\n command: "bash -c 'echo Waiting for Kafka to be ready... && \\n cub kafka-ready -b broker:29092 1 40 && \\n echo Waiting for Confluent Schema Registry to be ready... && \\n cub sr-ready schema-registry 8081 40 && \\n echo Waiting a few seconds for topic creation to finish... 
&& \\n sleep 11 && \\n tail -f /dev/null'"\n environment:\n KSQL_CONFIG_DIR: "/etc/ksql"\n STREAMS_BOOTSTRAP_SERVERS: broker:29092\n STREAMS_SCHEMA_REGISTRY_HOST: schema-registry\n STREAMS_SCHEMA_REGISTRY_PORT: 8081\n\n rest-proxy:\n image: confluentinc/cp-kafka-rest:7.7.1\n depends_on:\n - broker\n - schema-registry\n ports:\n - 8082:8082\n hostname: rest-proxy\n container_name: rest-proxy\n environment:\n KAFKA_REST_HOST_NAME: rest-proxy\n KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'\n KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"\n KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'\n | dataset_sample\yaml\elsa-workflows_elsa-core\docker\docker-compose-kafka.yml | docker-compose-kafka.yml | YAML | 6,528 | 0.95 | 0.017964 | 0.018868 | awesome-app | 699 | 2025-02-16T11:35:46.668048 | Apache-2.0 | false | 098449105f72100b4344966db78fe162 |
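The Kafka record above chains every dependent service on the broker with bare `depends_on`, which only orders container *start* and does not wait for the broker to accept connections (the `ksql-datagen` service works around this with an inline `cub kafka-ready` wait loop). A minimal sketch of a Compose-level readiness gate instead, assuming the `kafka-topics` CLI that ships in the `cp-kafka` image; service names and ports are taken from the file above:

```yaml
# Hypothetical override: gate dependents on broker readiness.
services:
  broker:
    healthcheck:
      # Listing topics succeeds only once the broker is serving requests.
      test: ["CMD-SHELL", "kafka-topics --bootstrap-server localhost:9092 --list"]
      interval: 10s
      timeout: 10s
      retries: 10
  schema-registry:
    depends_on:
      broker:
        condition: service_healthy
```

With `condition: service_healthy`, Compose delays dependent startup until the check passes rather than racing the broker's boot.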
services:\n postgres:\n image: postgres:latest\n command: -c 'max_connections=2000'\n environment:\n POSTGRES_USER: elsa\n POSTGRES_PASSWORD: elsa\n POSTGRES_DB: elsa\n volumes:\n - postgres-data:/var/lib/postgresql/data\n - ./init-db.sh:/docker-entrypoint-initdb.d/init-db.sh # This will initialize the 'tracelens' database\n ports:\n - "5432:5432"\n \n sqlserver:\n image: mcr.microsoft.com/mssql/server:2022-latest\n environment:\n - ACCEPT_EULA=Y\n - MSSQL_SA_PASSWORD=!Elsa2025@\n ports:\n - 1433:1433\n volumes:\n - sqlserver_data:/var/opt/mssql\n \n mysql:\n image: mysql:8.0\n container_name: mysql\n environment:\n MYSQL_ROOT_PASSWORD: password\n MYSQL_DATABASE: elsa\n MYSQL_USER: admin\n MYSQL_PASSWORD: password\n ports:\n - "3306:3306"\n volumes:\n - mysql_data2:/var/lib/mysql\n \n oracle:\n image: container-registry.oracle.com/database/free:latest\n container_name: oracle-db\n environment:\n ORACLE_PWD: elsa\n ports:\n - "1521:1521"\n - "5500:5500"\n volumes:\n - ./data/oracle-data:/opt/oracle/oradata\n - ./setup/oracle-setup:/opt/oracle/scripts/setup\n \n mongodb:\n image: mongo:latest\n ports:\n - "127.0.0.1:27017:27017"\n volumes:\n - mongodb_data:/data/db\n \n cockroachdb:\n image: cockroachdb/cockroach:v22.1.0\n command: start-single-node --insecure\n ports:\n - "26257:26257" # CockroachDB SQL port\n - "8080:8080" # CockroachDB UI port\n volumes:\n - cockroachdb-data:/cockroach/cockroach-data\n environment:\n - COCKROACH_DATABASE=elsa\n \n rabbitmq:\n image: "rabbitmq:4-management"\n ports:\n - "15672:15672"\n - "5672:5672"\n \n redis:\n image: redis:latest\n ports:\n - "127.0.0.1:6379:6379"\n \n plantuml-server:\n image: plantuml/plantuml-server:tomcat\n container_name: plantuml-server\n ports:\n - 7080:8080\n\n trace-lens:\n image: docker.io/rogeralsing/tracelens:latest\n pull_policy: always\n depends_on:\n - plantuml-server\n - postgres\n ports:\n - 7001:5001\n - 4317:4317\n environment:\n - PlantUml__RemoteUrl=\n - 
ConnectionStrings__DefaultConnection=USER ID=tracelens;PASSWORD=tracelenspass;HOST=postgres;PORT=5432;DATABASE=tracelens;POOLING=true;\n \n smtp4dev: # Mock SMTP server\n image: rnwood/smtp4dev\n container_name: smtp4dev\n restart: always\n ports:\n - "3000:80" # Web interface\n - "2525:25" # SMTP port\n environment:\n - ASPNETCORE_URLS=http://+:80\n - Logging__LogLevel__Default=Information\n\n elsa-server:\n build:\n context: ../.\n dockerfile: ./docker/ElsaServer.Dockerfile\n depends_on:\n - postgres\n - rabbitmq\n - redis\n environment:\n ASPNETCORE_ENVIRONMENT: Development\n PYTHONNET_PYDLL: /opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/bin/python3.11\n PYTHONNET_RUNTIME: coreclr\n CONNECTIONSTRINGS__POSTGRESQL: "Server=postgres;Username=elsa;Database=elsa;Port=5432;Password=elsa;SSLMode=Prefer"\n CONNECTIONSTRINGS__RABBITMQ: "amqp://guest:guest@rabbitmq:5672/"\n CONNECTIONSTRINGS__REDIS: "redis:6379"\n DISTRIBUTEDLOCKPROVIDER: "Postgres"\n ports:\n - "13000:8080"\n \n elsa-studio:\n build:\n context: ../.\n dockerfile: ./docker/ElsaStudio.Dockerfile\n environment:\n ASPNETCORE_ENVIRONMENT: Development\n ELSASERVER__URL: "http://localhost:13000/elsa/api"\n ports:\n - "14000:8080"\n\nvolumes:\n sqlserver_data:\n postgres-data:\n oracle-data-free1:\n mysql_data2:\n cockroachdb-data:\n mongodb_data:\n | dataset_sample\yaml\elsa-workflows_elsa-core\docker\docker-compose.yml | docker-compose.yml | YAML | 3,712 | 0.8 | 0 | 0 | python-kit | 27 | 2024-04-13T20:31:41.075145 | BSD-3-Clause | false | fd5f63dfdf85d322e5f0def48892fe37 |
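Similarly, `elsa-server` in the record above depends on `postgres` only by start order, while its connection string assumes the database is already accepting connections. A hedged sketch of a readiness gate using `pg_isready` (bundled in the official `postgres` image); the user and database names come from the file above:

```yaml
# Hypothetical override: wait for Postgres before starting elsa-server.
services:
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U elsa -d elsa"]
      interval: 5s
      retries: 12
  elsa-server:
    depends_on:
      postgres:
        condition: service_healthy
```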
services:\n # Coordinator Node\n citus-coordinator:\n image: citusdata/citus:latest\n container_name: citus-coordinator\n ports:\n - "9700:5432"\n environment:\n POSTGRES_DB: citus\n POSTGRES_USER: citus\n POSTGRES_PASSWORD: citus\n CITUS_HOST: citus-coordinator\n command: [ "postgres", "-c", "wal_level=logical" ]\n healthcheck:\n test: ["CMD", "pg_isready", "-U", "citus"]\n interval: 10s\n retries: 5\n networks:\n - citus-network\n\n # Worker Node 1\n citus-worker-1:\n image: citusdata/citus:latest\n container_name: citus-worker-1\n environment:\n POSTGRES_DB: citus\n POSTGRES_USER: citus\n POSTGRES_PASSWORD: citus\n CITUS_HOST: citus-worker-1\n command: [ "postgres", "-c", "wal_level=logical" ]\n depends_on:\n - citus-coordinator\n networks:\n - citus-network\n\n # Worker Node 2\n citus-worker-2:\n image: citusdata/citus:latest\n container_name: citus-worker-2\n environment:\n POSTGRES_DB: citus\n POSTGRES_USER: citus\n POSTGRES_PASSWORD: citus\n CITUS_HOST: citus-worker-2\n command: [ "postgres", "-c", "wal_level=logical" ]\n depends_on:\n - citus-coordinator\n networks:\n - citus-network\n\n # Citus Manager (optional - registers worker nodes automatically)\n citus-manager:\n image: citusdata/citus:latest\n container_name: citus-manager\n depends_on:\n - citus-coordinator\n - citus-worker-1\n - citus-worker-2\n command: [ "citus_setup" ]\n networks:\n - citus-network\n\nnetworks:\n citus-network:\n driver: bridge\n | dataset_sample\yaml\elsa-workflows_elsa-core\scripts\docker\docker-compose-citus.yml | docker-compose-citus.yml | YAML | 1,588 | 0.8 | 0 | 0.065574 | python-kit | 730 | 2024-05-18T19:41:16.926779 | BSD-3-Clause | false | eec726a94408197827d587c111499c19 |
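The Citus record above already defines a `pg_isready` healthcheck on the coordinator, but the workers and the manager attach to it with plain `depends_on`, so they may start before the coordinator can serve connections. A sketch reusing that existing healthcheck as a gate (service names from the file above):

```yaml
# Hypothetical override: workers wait for the coordinator's healthcheck.
services:
  citus-worker-1:
    depends_on:
      citus-coordinator:
        condition: service_healthy
  citus-worker-2:
    depends_on:
      citus-coordinator:
        condition: service_healthy
```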
version: '3.8'\n\nservices:\n yb-master:\n image: yugabytedb/yugabyte:latest\n container_name: yb-master\n command: ["/home/yugabyte/bin/yb-master", "--fs_data_dirs=/mnt/data", "--master_addresses=yb-master:7100"]\n ports:\n - "7001:7000" # Master UI\n networks:\n - yugabytedb\n environment:\n - YB_ENABLED_IN_POSTGRES_MODE=1\n volumes:\n - yb-master-data:/mnt/data\n\n yb-tserver-1:\n image: yugabytedb/yugabyte:latest\n container_name: yb-tserver-1\n command: ["/home/yugabyte/bin/yb-tserver", "--fs_data_dirs=/mnt/data", "--tserver_master_addrs=yb-master:7100", "--pgsql_proxy_bind_address=0.0.0.0:5433"]\n ports:\n - "5543:5433" # Changed from 5433 -> 5543 for PostgreSQL\n - "9000:9000" # TServer UI\n networks:\n - yugabytedb\n depends_on:\n - yb-master\n environment:\n - YB_ENABLED_IN_POSTGRES_MODE=1\n volumes:\n - yb-tserver-1-data:/mnt/data\n\n yb-tserver-2:\n image: yugabytedb/yugabyte:latest\n container_name: yb-tserver-2\n command: ["/home/yugabyte/bin/yb-tserver", "--fs_data_dirs=/mnt/data", "--tserver_master_addrs=yb-master:7100", "--pgsql_proxy_bind_address=0.0.0.0:5433"]\n ports:\n - "5544:5433" # Second worker node (internal)\n - "9001:9000" # TServer UI\n networks:\n - yugabytedb\n depends_on:\n - yb-master\n environment:\n - YB_ENABLED_IN_POSTGRES_MODE=1\n volumes:\n - yb-tserver-2-data:/mnt/data\n\nnetworks:\n yugabytedb:\n\nvolumes:\n yb-master-data:\n yb-tserver-1-data:\n yb-tserver-2-data:\n | dataset_sample\yaml\elsa-workflows_elsa-core\scripts\docker\docker-compose-yugabyte.yml | docker-compose-yugabyte.yml | YAML | 1,537 | 0.8 | 0.018182 | 0 | python-kit | 78 | 2025-01-31T13:32:36.715368 | GPL-3.0 | false | 18821530920474a7237e18498f55d56e |
services:\n\n consul-agent-1: &consul-agent\n image: consul:latest\n networks:\n - consul\n command: "agent -retry-join consul-server-bootstrap -client 0.0.0.0"\n\n consul-agent-2:\n <<: *consul-agent\n\n consul-agent-3:\n <<: *consul-agent\n\n consul-server-1: &consul-server\n <<: *consul-agent\n command: "agent -server -retry-join consul-server-bootstrap -client 0.0.0.0"\n\n consul-server-2:\n <<: *consul-server\n\n consul-server-bootstrap:\n <<: *consul-agent\n ports:\n - "8400:8400"\n - "8500:8500"\n - "8600:8600"\n - "8600:8600/udp"\n command: "agent -server -bootstrap-expect 3 -ui -client 0.0.0.0"\n \n sqlserver:\n image: mcr.microsoft.com/mssql/server:2022-latest\n ports:\n - 1433:1433\n environment:\n MSSQL_SA_PASSWORD: "Elsa2022!"\n ACCEPT_EULA: "Y"\n volumes:\n - sqlserver_data:/var/opt/mssql\n restart: unless-stopped\n \n postgres:\n image: postgres:13.3-alpine\n container_name: elsa3-postgres\n environment:\n POSTGRES_USER: elsa\n POSTGRES_PASSWORD: elsa\n POSTGRES_DB: elsa\n ports:\n - "5432:5432"\n volumes:\n - pgdata-3:/var/lib/postgresql/data\n\n mongodb:\n image: mongo:latest\n ports:\n - "127.0.0.1:27017:27017"\n volumes:\n - mongodb_data:/data/db\n \n mysql:\n image: arm64v8/mysql\n restart: always\n environment:\n MYSQL_DATABASE: 'elsa'\n MYSQL_USER: 'user'\n MYSQL_PASSWORD: 'password'\n MYSQL_ROOT_PASSWORD: 'password'\n ports:\n - '3306:3306'\n expose:\n - '3306'\n\n citusdata:\n image: citusdata/citus:13.0\n container_name: citus_cluster\n environment:\n POSTGRES_USER: citus\n POSTGRES_PASSWORD: citus_password\n POSTGRES_DB: citus\n ports:\n - "9700:5432" # Citus uses PostgreSQL default port; 9700 avoids conflicts with other Postgres services\n volumes:\n - citusdata_data:/var/lib/postgresql/data\n\n # (Other volume and network definitions)\n\n redis:\n image: redis:latest\n ports:\n - "127.0.0.1:6379:6379"\n \n rabbitmq:\n image: "rabbitmq:3-management"\n ports:\n - "15672:15672"\n - "5672:5672"\n \n elasticsearch:\n image: 
"docker.elastic.co/elasticsearch/elasticsearch:8.5.0"\n environment:\n - xpack.security.enabled=false\n - discovery.type=single-node\n ulimits:\n memlock:\n soft: -1\n hard: -1\n nofile:\n soft: 65536\n hard: 65536\n cap_add:\n - IPC_LOCK\n volumes:\n - elasticsearch-data:/usr/share/elasticsearch/data\n ports:\n - "9200:9200"\n - "9300:9300"\n\n kibana:\n image: "docker.elastic.co/kibana/kibana:8.5.0"\n environment:\n - ELASTICSEARCH_HOSTS=http://elasticsearch:9200\n ports:\n - "5601:5601"\n depends_on:\n - elasticsearch\n \n smtp4dev:\n image: rnwood/smtp4dev:3.1.3-ci20211206101\n ports:\n - "4000:80"\n - "2525:25"\n\nnetworks:\n consul:\n\nvolumes:\n mongodb_data:\n pgdata-3:\n sqlserver_data:\n citusdata_data:\n elasticsearch-data:\n driver: local | dataset_sample\yaml\elsa-workflows_elsa-core\scripts\docker\docker-compose.yml | docker-compose.yml | YAML | 3,028 | 0.8 | 0 | 0.008065 | awesome-app | 620 | 2025-06-14T03:38:53.859985 | BSD-3-Clause | false | 964b20e52619f2d7bbc10cc3a3096e2f |
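In the record above, the stateful services (`sqlserver`, `postgres`, `mongodb`, `citusdata`, `elasticsearch`) all mount named volumes, but `redis` does not, so its data is lost on container recreation. A sketch of adding persistence; the volume name `redis_data` and the append-only-file setting are assumptions, not part of the original file:

```yaml
# Hypothetical addition: persist Redis data across container recreation.
services:
  redis:
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - redis_data:/data

volumes:
  redis_data:
```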
apiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: proto-cluster\n namespace: default # Ensure the namespace matches your deployment | dataset_sample\yaml\elsa-workflows_elsa-core\scripts\k8s\elsa-server\service-account.yml | service-account.yml | YAML | 136 | 0.8 | 0 | 0 | vue-tools | 362 | 2024-01-26T04:04:36.191966 | BSD-3-Clause | false | cf47ae0e0fa4ad34a095fefe538fc1ff |
apiVersion: v1\nkind: Service\nmetadata:\n name: postgres\n labels:\n app: postgres\nspec:\n type: LoadBalancer\n ports:\n - port: 5432\n targetPort: 5432\n selector:\n app: postgres | dataset_sample\yaml\elsa-workflows_elsa-core\scripts\k8s\postgres\service.yml | service.yml | YAML | 189 | 0.7 | 0 | 0 | vue-tools | 204 | 2025-04-02T17:36:13.480346 | GPL-3.0 | false | fa88bd86d28c73ffb79adff40c6857ad |
---\n# https://docs.codecov.com/docs/codecovyml-reference\ncodecov:\n token: 6040de41-c073-4d6f-bbf8-d89256ef31e1\n disable_default_path_fixes: true\n require_ci_to_pass: false\n notify:\n wait_for_ci: false\nfixes:\n - go.etcd.io/etcd/api/v3/::api/\n - go.etcd.io/etcd/client/v3/::client/v3/\n - go.etcd.io/etcd/client/v2/::client/v2/\n - go.etcd.io/etcd/etcdctl/v3/::etcdctl/\n - go.etcd.io/etcd/etcdutl/v3/::etcdutl/\n - go.etcd.io/etcd/pkg/v3/::pkg/\n - go.etcd.io/etcd/server/v3/::server/\nignore:\n - '**/*.pb.go'\n - '**/*.pb.gw.go'\n - tests/**/*\n - go.etcd.io/etcd/tests/**/*\ncoverage:\n range: 60..80\n round: down\n precision: 2\n status:\n project:\n default:\n target: auto\n # allow some coverage reductions within a threshold\n # this allows a 1% drop from the previous base commit coverage\n threshold: 1%\n patch:\n default:\n target: auto\n threshold: 80%\ncomment:\n layout: "header, files, diff, footer"\n behavior: default # default: update, if exists. Otherwise post new; new: delete old and post new\n require_changes: false # if true: only post the comment if coverage changes\n require_base: false # [true :: must have a base report to post]\n require_head: true # [true :: must have a head report to post]\n hide_project_coverage: false # [true :: only show coverage on the git diff]\n | dataset_sample\yaml\etcd-io_etcd\codecov.yml | codecov.yml | YAML | 1,356 | 0.95 | 0.069767 | 0.069767 | vue-tools | 681 | 2024-02-13T15:20:58.816896 | MIT | false | 3e2b6584d1e77044a3b1258aa0aa1703 |
---\nversion: 2\nupdates:\n - package-ecosystem: github-actions\n directory: /\n schedule:\n interval: weekly\n\n - package-ecosystem: gomod\n directory: /\n schedule:\n interval: weekly\n allow:\n - dependency-type: all\n\n - package-ecosystem: gomod\n directory: /tools/mod # Not linked from /go.mod\n schedule:\n interval: weekly\n allow:\n - dependency-type: direct\n\n - package-ecosystem: docker\n directory: /\n schedule:\n interval: weekly\n\n - package-ecosystem: docker\n directory: /\n target-branch: "release-3.4"\n schedule:\n interval: monthly\n\n - package-ecosystem: docker\n directory: /\n target-branch: "release-3.5"\n schedule:\n interval: monthly\n | dataset_sample\yaml\etcd-io_etcd\.github\dependabot.yml | dependabot.yml | YAML | 725 | 0.8 | 0 | 0 | awesome-app | 745 | 2024-08-04T07:18:29.154101 | GPL-3.0 | false | ba7766b1de45e18755cfb621c7f4d8d6 |
---\n# Configuration for probot-stale - https://github.com/probot/stale\n\n# Number of days of inactivity before an Issue or Pull Request becomes stale\ndaysUntilStale: 90\n\n# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.\n# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.\ndaysUntilClose: 21\n\n# Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled)\nonlyLabels: []\n\n# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable\nexemptLabels:\n - "stage/tracked"\n\n# Set to true to ignore issues in a project (defaults to false)\nexemptProjects: false\n\n# Set to true to ignore issues in a milestone (defaults to false)\nexemptMilestones: false\n\n# Set to true to ignore issues with an assignee (defaults to false)\nexemptAssignees: false\n\n# Label to use when marking as stale\nstaleLabel: stale\n\n# Comment to post when marking as stale. Set to `false` to disable\nmarkComment: This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.\n# Comment to post when removing the stale label.\n# unmarkComment: >\n# Your comment here.\n\n# Comment to post when closing a stale Issue or Pull Request.\n# closeComment: >\n# Your comment here.\n\n# Limit the number of actions per hour, from 1-30. Default is 30\nlimitPerRun: 30\n\n# Limit to only `issues` or `pulls`\n# only: issues\n\n# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':\n# pulls:\n# daysUntilStale: 30\n# markComment: >\n# This pull request has been automatically marked as stale because it has not had\n# recent activity. It will be closed if no further activity occurs. 
Thank you\n# for your contributions.\n\n# issues:\n# exemptLabels:\n# - confirmed\n | dataset_sample\yaml\etcd-io_etcd\.github\stale.yml | stale.yml | YAML | 1,963 | 0.8 | 0.107143 | 0.714286 | vue-tools | 480 | 2025-06-20T08:04:39.516179 | MIT | false | 4614eea765a08be5341990ef85d98f4d |
---\nname: Bug Report\ndescription: Report a bug encountered while operating etcd\nlabels:\n - type/bug\nbody:\n - type: checkboxes\n id: confirmations\n attributes:\n label: Bug report criteria\n description: Please confirm this bug report meets the following criteria.\n options:\n - label: This bug report is not security related, security issues should be disclosed privately via [etcd maintainers](mailto:etcd-maintainers@googlegroups.com).\n - label: This is not a support request or question, support requests or questions should be raised in the etcd [discussion forums](https://github.com/etcd-io/etcd/discussions).\n - label: You have read the etcd [bug reporting guidelines](https://github.com/etcd-io/etcd/blob/main/Documentation/contributor-guide/reporting_bugs.md).\n - label: Existing open issues along with etcd [frequently asked questions](https://etcd.io/docs/latest/faq) have been checked and this is not a duplicate.\n\n - type: markdown\n attributes:\n value: |\n Please fill the form below and provide as much information as possible.\n Not doing so may result in your bug not being addressed in a timely manner.\n\n - type: textarea\n id: problem\n attributes:\n label: What happened?\n validations:\n required: true\n\n - type: textarea\n id: expected\n attributes:\n label: What did you expect to happen?\n validations:\n required: true\n\n - type: textarea\n id: repro\n attributes:\n label: How can we reproduce it (as minimally and precisely as possible)?\n validations:\n required: true\n\n - type: textarea\n id: additional\n attributes:\n label: Anything else we need to know?\n\n - type: textarea\n id: etcdVersion\n attributes:\n label: Etcd version (please run commands below)\n value: |\n <details>\n\n ```console\n $ etcd --version\n # paste output here\n\n $ etcdctl version\n # paste output here\n ```\n\n </details>\n validations:\n required: true\n\n - type: textarea\n id: config\n attributes:\n label: Etcd configuration (command line flags or environment 
variables)\n value: |\n <details>\n\n # paste your configuration here\n\n </details>\n\n - type: textarea\n id: etcdDebugInformation\n attributes:\n label: Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)\n value: |\n <details>\n\n ```console\n $ etcdctl member list -w table\n # paste output here\n\n $ etcdctl --endpoints=<member list> endpoint status -w table\n # paste output here\n ```\n\n </details>\n\n - type: textarea\n id: logs\n attributes:\n label: Relevant log output\n description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.\n render: Shell\n | dataset_sample\yaml\etcd-io_etcd\.github\ISSUE_TEMPLATE\bug-report.yml | bug-report.yml | YAML | 2,953 | 0.95 | 0.019608 | 0.058824 | node-utils | 842 | 2024-12-25T18:40:46.696454 | Apache-2.0 | false | 3a59a01203d42dd84948b6bfbf579ddd |
---\nblank_issues_enabled: false\ncontact_links:\n - name: Question\n url: https://github.com/etcd-io/etcd/discussions\n about: Question relating to Etcd\n | dataset_sample\yaml\etcd-io_etcd\.github\ISSUE_TEMPLATE\config.yml | config.yml | YAML | 156 | 0.8 | 0 | 0 | react-lib | 920 | 2025-04-24T06:47:32.284007 | GPL-3.0 | false | dbd61a5e9522407e32cb7c123fe21a91 |
---\nname: Feature request\ndescription: Provide idea for a new feature\nlabels:\n - type/feature\nbody:\n - type: textarea\n id: feature\n attributes:\n label: What would you like to be added?\n validations:\n required: true\n\n - type: textarea\n id: rationale\n attributes:\n label: Why is this needed?\n validations:\n required: true\n | dataset_sample\yaml\etcd-io_etcd\.github\ISSUE_TEMPLATE\feature-request.yml | feature-request.yml | YAML | 361 | 0.85 | 0.052632 | 0 | python-kit | 693 | 2025-01-06T19:20:22.591115 | Apache-2.0 | false | 4313a54dc4aa4efec65aa328584af120 |
---\n# For most projects, this workflow file will not need changing; you simply need\n# to commit it to your repository.\n#\n# You may wish to alter this file to override the set of languages analyzed,\n# or to provide custom queries or build logic.\n#\n# ******** NOTE ********\n# We have attempted to detect the languages in your repository. Please check\n# the `language` matrix defined below to confirm you have the correct set of\n# supported CodeQL languages.\n#\nname: "CodeQL"\non:\n push:\n branches: [main, release-3.4, release-3.5, release-3.6]\n pull_request:\n # The branches below must be a subset of the branches above\n branches: [main]\n schedule:\n - cron: '20 14 * * 5'\npermissions: read-all\njobs:\n analyze:\n name: Analyze\n runs-on: ubuntu-latest\n permissions:\n actions: read\n contents: read\n security-events: write\n strategy:\n fail-fast: false\n matrix:\n # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]\n # Learn more:\n # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed\n language: ['go']\n steps:\n - name: Checkout repository\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n # Initializes the CodeQL tools for scanning.\n - name: Initialize CodeQL\n uses: github/codeql-action/init@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15\n with:\n # If you wish to specify custom queries, you can do so here or in a config file.\n # By default, queries listed here will override any specified in a config file.\n # Prefix the list here with "+" to use these queries and those in the config file.\n # queries: ./path/to/local/query, your-org/your-repo/queries@main\n languages: ${{ matrix.language }}\n # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).\n # If this step fails, then you should remove it and run the build manually (see below)\n - name: 
Autobuild\n uses: github/codeql-action/autobuild@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15\n - name: Perform CodeQL Analysis\n uses: github/codeql-action/analyze@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15\n | dataset_sample\yaml\etcd-io_etcd\.github\workflows\codeql-analysis.yml | codeql-analysis.yml | YAML | 2,364 | 0.8 | 0.018182 | 0.4 | react-lib | 551 | 2024-10-14T12:59:06.563758 | GPL-3.0 | false | 490d590a24b8d44946fe70fb77510534 |
---\nname: Scorecards supply-chain security\non:\n # Only the default branch is supported.\n branch_protection_rule:\n schedule:\n - cron: '45 1 * * 0'\n push:\n branches: ["main"]\n\n# Declare default permissions as read only.\npermissions: read-all\n\njobs:\n analysis:\n name: Scorecards analysis\n runs-on: ubuntu-latest\n permissions:\n # Needed to upload the results to code-scanning dashboard.\n security-events: write\n # Used to receive a badge.\n id-token: write\n\n steps:\n - name: "Checkout code"\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n with:\n persist-credentials: false\n\n - name: "Run analysis"\n uses: ossf/scorecard-action@f49aabe0b5af0936a0987cfb85d86b75731b0186 # v2.4.1\n with:\n results_file: results.sarif\n results_format: sarif\n\n # Publish the results for public repositories to enable scorecard badges. For more details, see\n # https://github.com/ossf/scorecard-action#publishing-results.\n # For private repositories, `publish_results` will automatically be set to `false`, regardless\n # of the value entered here.\n publish_results: true\n\n # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF\n # format to the repository Actions tab.\n - name: "Upload artifact"\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2\n with:\n name: SARIF file\n path: results.sarif\n retention-days: 5\n\n # Upload the results to GitHub's code scanning dashboard.\n - name: "Upload to code-scanning"\n uses: github/codeql-action/upload-sarif@45775bd8235c68ba998cffa5171334d58593da47 # v3.28.15\n with:\n sarif_file: results.sarif\n | dataset_sample\yaml\etcd-io_etcd\.github\workflows\scorecards.yml | scorecards.yml | YAML | 1,855 | 0.8 | 0.018182 | 0.229167 | python-kit | 398 | 2024-11-17T04:16:21.727145 | GPL-3.0 | false | f9804e6eb19bf872d922f6a3f2f5f045 |
---\napiVersion: v1\nkind: Service\nmetadata:\n name: etcd-client\nspec:\n ports:\n - name: etcd-client-port\n port: 2379\n protocol: TCP\n targetPort: 2379\n selector:\n app: etcd\n---\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n app: etcd\n etcd_node: etcd0\n name: etcd0\nspec:\n containers:\n - command:\n - /usr/local/bin/etcd\n - --name\n - etcd0\n - --initial-advertise-peer-urls\n - http://etcd0:2380\n - --listen-peer-urls\n - http://0.0.0.0:2380\n - --listen-client-urls\n - http://0.0.0.0:2379\n - --advertise-client-urls\n - http://etcd0:2379\n - --initial-cluster\n - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380\n - --initial-cluster-state\n - new\n image: quay.io/coreos/etcd:latest\n name: etcd0\n ports:\n - containerPort: 2379\n name: client\n protocol: TCP\n - containerPort: 2380\n name: server\n protocol: TCP\n restartPolicy: Always\n---\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n etcd_node: etcd0\n name: etcd0\nspec:\n ports:\n - name: client\n port: 2379\n protocol: TCP\n targetPort: 2379\n - name: server\n port: 2380\n protocol: TCP\n targetPort: 2380\n selector:\n etcd_node: etcd0\n---\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n app: etcd\n etcd_node: etcd1\n name: etcd1\nspec:\n containers:\n - command:\n - /usr/local/bin/etcd\n - --name\n - etcd1\n - --initial-advertise-peer-urls\n - http://etcd1:2380\n - --listen-peer-urls\n - http://0.0.0.0:2380\n - --listen-client-urls\n - http://0.0.0.0:2379\n - --advertise-client-urls\n - http://etcd1:2379\n - --initial-cluster\n - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380\n - --initial-cluster-state\n - new\n image: quay.io/coreos/etcd:latest\n name: etcd1\n ports:\n - containerPort: 2379\n name: client\n protocol: TCP\n - containerPort: 2380\n name: server\n protocol: TCP\n restartPolicy: Always\n---\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n etcd_node: etcd1\n name: etcd1\nspec:\n ports:\n - name: client\n port: 2379\n 
protocol: TCP\n targetPort: 2379\n - name: server\n port: 2380\n protocol: TCP\n targetPort: 2380\n selector:\n etcd_node: etcd1\n---\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n app: etcd\n etcd_node: etcd2\n name: etcd2\nspec:\n containers:\n - command:\n - /usr/local/bin/etcd\n - --name\n - etcd2\n - --initial-advertise-peer-urls\n - http://etcd2:2380\n - --listen-peer-urls\n - http://0.0.0.0:2380\n - --listen-client-urls\n - http://0.0.0.0:2379\n - --advertise-client-urls\n - http://etcd2:2379\n - --initial-cluster\n - etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380\n - --initial-cluster-state\n - new\n image: quay.io/coreos/etcd:latest\n name: etcd2\n ports:\n - containerPort: 2379\n name: client\n protocol: TCP\n - containerPort: 2380\n name: server\n protocol: TCP\n restartPolicy: Always\n---\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n etcd_node: etcd2\n name: etcd2\nspec:\n ports:\n - name: client\n port: 2379\n protocol: TCP\n targetPort: 2379\n - name: server\n port: 2380\n protocol: TCP\n targetPort: 2380\n selector:\n etcd_node: etcd2\n | dataset_sample\yaml\etcd-io_etcd\hack\kubernetes-deploy\etcd.yml | etcd.yml | YAML | 3,634 | 0.8 | 0 | 0 | vue-tools | 758 | 2025-05-19T16:54:38.317578 | GPL-3.0 | false | 4272ada982dcfc0257b6dd145549fb95 |
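The etcd manifests above define no readiness probes, so the `etcd-client` Service can route traffic to pods that are not yet serving. A sketch of a per-container probe using etcd's `/health` endpoint on the client port; the delay and period values are assumptions:

```yaml
# Hypothetical probe block for each etcd container's spec.
readinessProbe:
  httpGet:
    path: /health
    port: 2379
  initialDelaySeconds: 5
  periodSeconds: 10
```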
---\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n app: vulcand\n name: vulcand\nspec:\n containers:\n - command:\n - /go/bin/vulcand\n - -apiInterface=0.0.0.0\n - --etcd=http://etcd-client:2379\n image: mailgun/vulcand:v0.8.0-beta.2\n name: vulcand\n ports:\n - containerPort: 8081\n name: api\n protocol: TCP\n - containerPort: 8082\n name: server\n protocol: TCP\n restartPolicy: Always\n | dataset_sample\yaml\etcd-io_etcd\hack\kubernetes-deploy\vulcand.yml | vulcand.yml | YAML | 467 | 0.8 | 0 | 0 | node-utils | 443 | 2025-05-01T18:30:31.461500 | BSD-3-Clause | false | fc025b38b2843f8bf077f94a95d10daa |
---\nservices:\n\n etcd0:\n image: 'gcr.io/etcd-development/etcd:v3.5.21'\n container_name: etcd0\n hostname: etcd0\n environment:\n ETCD_NAME: "etcd0"\n ETCD_INITIAL_ADVERTISE_PEER_URLS: "http://etcd0:2380"\n ETCD_LISTEN_PEER_URLS: "http://0.0.0.0:2380"\n ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"\n ETCD_ADVERTISE_CLIENT_URLS: "http://etcd0.etcd:2379"\n ETCD_INITIAL_CLUSTER_TOKEN: "etcd-cluster-1"\n ETCD_INITIAL_CLUSTER: "etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380"\n ETCD_INITIAL_CLUSTER_STATE: "new"\n\n etcd1:\n image: 'gcr.io/etcd-development/etcd:v3.5.21'\n container_name: etcd1\n hostname: etcd1\n environment:\n ETCD_NAME: "etcd1"\n ETCD_INITIAL_ADVERTISE_PEER_URLS: "http://etcd1:2380"\n ETCD_LISTEN_PEER_URLS: "http://0.0.0.0:2380"\n ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"\n ETCD_ADVERTISE_CLIENT_URLS: "http://etcd1.etcd:2379"\n ETCD_INITIAL_CLUSTER_TOKEN: "etcd-cluster-1"\n ETCD_INITIAL_CLUSTER: "etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380"\n ETCD_INITIAL_CLUSTER_STATE: "new"\n\n etcd2:\n image: 'gcr.io/etcd-development/etcd:v3.5.21'\n container_name: etcd2\n hostname: etcd2\n environment:\n ETCD_NAME: "etcd2"\n ETCD_INITIAL_ADVERTISE_PEER_URLS: "http://etcd2:2380"\n ETCD_LISTEN_PEER_URLS: "http://0.0.0.0:2380"\n ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"\n ETCD_ADVERTISE_CLIENT_URLS: "http://etcd2.etcd:2379"\n ETCD_INITIAL_CLUSTER_TOKEN: "etcd-cluster-1"\n ETCD_INITIAL_CLUSTER: "etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380"\n ETCD_INITIAL_CLUSTER_STATE: "new"\n\n client:\n image: 'etcd-client:latest'\n container_name: client\n entrypoint: ["/entrypoint.py"]\n | dataset_sample\yaml\etcd-io_etcd\tests\antithesis\docker-compose.yml | docker-compose.yml | YAML | 1,811 | 0.8 | 0 | 0 | awesome-app | 638 | 2025-01-04T22:13:13.844340 | BSD-3-Clause | true | 92777ed877897bfbc8011f8efe8e4892 |
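The three-member etcd Compose file above declares no healthchecks. Since the `gcr.io/etcd-development/etcd` image bundles `etcdctl`, a per-member check can be sketched as below (shown for `etcd0`; the interval and retry counts are assumptions):

```yaml
# Hypothetical override: report member health to Compose.
services:
  etcd0:
    healthcheck:
      test: ["CMD", "etcdctl", "--endpoints=http://localhost:2379", "endpoint", "health"]
      interval: 10s
      retries: 5
```

`etcdctl endpoint health` exits non-zero while the member is unreachable, which maps cleanly onto Compose's healthy/unhealthy states.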
# This file configures github.com/golangci/golangci-lint.\nversion: '2'\nrun:\n tests: true\nlinters:\n default: none\n enable:\n - bidichk\n - copyloopvar\n - durationcheck\n - gocheckcompilerdirectives\n - govet\n - ineffassign\n - mirror\n - misspell\n - reassign\n - revive # only certain checks enabled\n - staticcheck\n - unconvert\n - unused\n - usetesting\n - whitespace\n ### linters we tried and will not be using:\n ###\n # - structcheck # lots of false positives\n # - errcheck #lot of false positives\n # - contextcheck\n # - errchkjson # lots of false positives\n # - errorlint # this check crashes\n # - exhaustive # silly check\n # - makezero # false positives\n # - nilerr # several intentional\n settings:\n staticcheck:\n checks:\n # disable Quickfixes\n - -QF1*\n revive:\n enable-all-rules: false\n # here we enable specific useful rules\n # see https://golangci-lint.run/usage/linters/#revive for supported rules\n rules:\n - name: receiver-naming\n severity: warning\n disabled: false\n exclude:\n - ''\n exclusions:\n generated: lax\n presets:\n - comments\n - common-false-positives\n - legacy\n - std-error-handling\n rules:\n - linters:\n - deadcode\n - staticcheck\n path: crypto/bn256/cloudflare/optate.go\n - linters:\n - revive\n path: crypto/bn256/\n - path: cmd/utils/flags.go\n text: "SA1019: cfg.TxLookupLimit is deprecated: use 'TransactionHistory' instead."\n - path: cmd/utils/flags.go\n text: "SA1019: ethconfig.Defaults.TxLookupLimit is deprecated: use 'TransactionHistory' instead."\n - path: internal/build/pgp.go\n text: 'SA1019: "golang.org/x/crypto/openpgp" is deprecated: this package is unmaintained except for security fixes.'\n - path: core/vm/contracts.go\n text: 'SA1019: "golang.org/x/crypto/ripemd160" is deprecated: RIPEMD-160 is a legacy hash and should not be used for new applications.'\n - path: (.+)\.go$\n text: 'SA1019: event.TypeMux is deprecated: use Feed'\n - path: (.+)\.go$\n text: 'SA1019: strings.Title is deprecated'\n - path: 
(.+)\.go$\n text: 'SA1019: strings.Title has been deprecated since Go 1.18 and an alternative has been available since Go 1.0: The rule Title uses for word boundaries does not handle Unicode punctuation properly. Use golang.org/x/text/cases instead.'\n - path: (.+)\.go$\n text: 'SA1029: should not use built-in type string as key for value'\n paths:\n - core/genesis_alloc.go\n - third_party$\n - builtin$\n - examples$\nformatters:\n enable:\n - goimports\n settings:\n gofmt:\n simplify: true\n exclusions:\n generated: lax\n paths:\n - core/genesis_alloc.go\n - third_party$\n - builtin$\n - examples$\n | dataset_sample\yaml\ethereum_go-ethereum\.golangci.yml | .golangci.yml | YAML | 2,916 | 0.95 | 0.052083 | 0.145833 | awesome-app | 538 | 2024-09-23T00:35:21.411322 | MIT | false | b4774afaf205184e45f4f7ded8c07055 |
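The `.golangci.yml` row above suppresses linter messages per path under `linters.exclusions.rules`; a sketch of one more rule in the same v2 schema (the `metrics/` path and the SA1019 text are made-up placeholders, not rules from the original file):

```yaml
linters:
  exclusions:
    rules:
      # Hypothetical example: silence one staticcheck deprecation message
      # for every file under a single package directory.
      - path: metrics/
        linters:
          - staticcheck
        text: 'SA1019: some deprecated API'
```

Rules can match by `path`, by `linters`, by message `text`, or any combination, as the existing entries in the file demonstrate.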
language: go\ngo_import_path: github.com/ethereum/go-ethereum\nsudo: false\njobs:\n include:\n # This builder creates and pushes the Docker images for all architectures\n - stage: build\n if: type = push\n os: linux\n arch: amd64\n dist: focal\n go: 1.24.x\n env:\n - docker\n services:\n - docker\n git:\n submodules: false # avoid cloning ethereum/tests\n before_install:\n - export DOCKER_CLI_EXPERIMENTAL=enabled\n script:\n - go run build/ci.go dockerx -platform "linux/amd64,linux/arm64,linux/riscv64" -hub ethereum/client-go -upload\n\n # This builder does the Linux Azure uploads\n - stage: build\n if: type = push\n os: linux\n dist: focal\n sudo: required\n go: 1.24.x\n env:\n - azure-linux\n git:\n submodules: false # avoid cloning ethereum/tests\n script:\n # build amd64\n - go run build/ci.go install -dlgo\n - go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds\n\n # build 386\n - sudo -E apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib\n - git status --porcelain\n - go run build/ci.go install -dlgo -arch 386\n - go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds\n\n # Switch over GCC to cross compilation (breaks 386, hence we do it here only)\n - sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross\n - sudo ln -s /usr/include/asm-generic /usr/include/asm\n\n - GOARM=5 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc\n - GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify 
SIGNIFY_KEY -upload gethstore/builds\n - GOARM=7 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabihf-gcc\n - GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds\n - go run build/ci.go install -dlgo -arch arm64 -cc aarch64-linux-gnu-gcc\n - go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds\n\n # These builders run the tests\n - stage: build\n if: type = push\n os: linux\n arch: amd64\n dist: focal\n go: 1.24.x\n script:\n - travis_wait 45 go run build/ci.go test $TEST_PACKAGES\n\n - stage: build\n if: type = push\n os: linux\n dist: focal\n go: 1.23.x\n script:\n - travis_wait 45 go run build/ci.go test $TEST_PACKAGES\n\n # This builder does the Ubuntu PPA nightly uploads\n - stage: build\n if: type = cron || (type = push && tag ~= /^v[0-9]/)\n os: linux\n dist: focal\n go: 1.24.x\n env:\n - ubuntu-ppa\n git:\n submodules: false # avoid cloning ethereum/tests\n before_install:\n - sudo -E apt-get -yq --no-install-suggests --no-install-recommends install devscripts debhelper dput fakeroot\n script:\n - echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts\n - go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"\n\n # This builder does the Azure archive purges to avoid accumulating junk\n - stage: build\n if: type = cron\n os: linux\n dist: focal\n go: 1.24.x\n env:\n - azure-purge\n git:\n submodules: false # avoid cloning 
ethereum/tests\n script:\n - go run build/ci.go purge -store gethstore/builds -days 14\n\n # This builder executes race tests\n - stage: build\n if: type = cron\n os: linux\n dist: focal\n go: 1.24.x\n env:\n - racetests\n script:\n - travis_wait 60 go run build/ci.go test -race $TEST_PACKAGES\n | dataset_sample\yaml\ethereum_go-ethereum\.travis.yml | .travis.yml | YAML | 4,725 | 0.95 | 0.069565 | 0.084906 | react-lib | 756 | 2023-10-01T06:48:39.984357 | Apache-2.0 | false | 82094b7b29c3adf15c9f056333117258 |
clone_depth: 5\nversion: "{branch}.{build}"\n\nimage:\n - Ubuntu\n - Visual Studio 2019\n\nenvironment:\n matrix:\n - GETH_ARCH: amd64\n GETH_MINGW: 'C:\msys64\mingw64'\n - GETH_ARCH: 386\n GETH_MINGW: 'C:\msys64\mingw32'\n\ninstall:\n - git submodule update --init --depth 1 --recursive\n - go version\n\nfor:\n # Linux has its own script without -arch and -cc.\n # The linux builder also runs lint.\n - matrix:\n only:\n - image: Ubuntu\n build_script:\n - go run build/ci.go lint\n - go run build/ci.go check_generate\n - go run build/ci.go check_baddeps\n - go run build/ci.go install -dlgo\n test_script:\n - go run build/ci.go test -dlgo -short\n\n # linux/386 is disabled.\n - matrix:\n exclude:\n - image: Ubuntu\n GETH_ARCH: 386\n\n # Windows builds for amd64 + 386.\n - matrix:\n only:\n - image: Visual Studio 2019\n environment:\n # We use gcc from MSYS2 because it is the most recent compiler version available on\n # AppVeyor. Note: gcc.exe only works properly if the corresponding bin/ directory is\n # contained in PATH.\n GETH_CC: '%GETH_MINGW%\bin\gcc.exe'\n PATH: '%GETH_MINGW%\bin;C:\Program Files (x86)\NSIS\;%PATH%'\n build_script:\n - 'echo %GETH_ARCH%'\n - 'echo %GETH_CC%'\n - '%GETH_CC% --version'\n - go run build/ci.go install -dlgo -arch %GETH_ARCH% -cc %GETH_CC%\n after_build:\n # Upload builds. Note that ci.go makes this a no-op for PR builds.\n - go run build/ci.go archive -arch %GETH_ARCH% -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds\n - go run build/ci.go nsis -arch %GETH_ARCH% -signer WINDOWS_SIGNING_KEY -upload gethstore/builds\n test_script:\n - go run build/ci.go test -dlgo -arch %GETH_ARCH% -cc %GETH_CC% -short\n | dataset_sample\yaml\ethereum_go-ethereum\appveyor.yml | appveyor.yml | YAML | 1,795 | 0.8 | 0.050847 | 0.150943 | vue-tools | 957 | 2024-04-02T05:12:39.814383 | MIT | false | 09e33d35a2a6db4179bbfddfffc0e792 |
machine:\n services:\n - docker\n\ndependencies:\n cache_directories:\n - "~/.ethash" # Cache the ethash DAG generated by hive for consecutive builds\n - "~/.docker" # Cache all docker images manually to avoid lengthy rebuilds\n override:\n # Restore all previously cached docker images\n - mkdir -p ~/.docker\n - for img in `ls ~/.docker`; do docker load -i ~/.docker/$img; done\n\n # Pull in hive, restore cached ethash DAGs and do a dry run\n - go get -u github.com/karalabe/hive\n - (cd ~/.go_workspace/src/github.com/karalabe/hive && mkdir -p workspace/ethash/ ~/.ethash)\n - (cd ~/.go_workspace/src/github.com/karalabe/hive && cp -r ~/.ethash/. workspace/ethash/)\n - (cd ~/.go_workspace/src/github.com/karalabe/hive && hive --docker-noshell --client=NONE --test=. --sim=. --loglevel=6)\n\n # Cache all the docker images and the ethash DAGs\n - for img in `docker images | grep -v "^<none>" | tail -n +2 | awk '{print $1}'`; do docker save $img > ~/.docker/`echo $img | tr '/' ':'`.tar; done\n - cp -r ~/.go_workspace/src/github.com/karalabe/hive/workspace/ethash/. ~/.ethash\n\ntest:\n override:\n # Build Geth and move into a known folder\n - make geth\n - cp ./build/bin/geth $HOME/geth\n\n # Run hive and move all generated logs into the public artifacts folder\n - (cd ~/.go_workspace/src/github.com/karalabe/hive && hive --docker-noshell --client=go-ethereum:local --override=$HOME/geth --test=. --sim=.)\n - cp -r ~/.go_workspace/src/github.com/karalabe/hive/workspace/logs/* $CIRCLE_ARTIFACTS\n | dataset_sample\yaml\ethereum_go-ethereum\circle.yml | circle.yml | YAML | 1,544 | 0.8 | 0.09375 | 0.185185 | vue-tools | 948 | 2025-05-20T22:07:30.477644 | GPL-3.0 | false | f79b26bcda2ec60fa5876b9e868fcc42 |
# Number of days of inactivity before an Issue is closed for lack of response\ndaysUntilClose: 30\n# Label requiring a response\nresponseRequiredLabel: "need:more-information"\n# Comment to post when closing an Issue for lack of response. Set to `false` to disable\ncloseComment: >\n This issue has been automatically closed because there has been no response\n to our request for more information from the original author. With only the\n information that is currently in the issue, we don't have enough information\n to take action. Please reach out if you have more relevant information or\n answers to our questions so that we can investigate further.\n | dataset_sample\yaml\ethereum_go-ethereum\.github\no-response.yml | no-response.yml | YAML | 651 | 0.8 | 0.363636 | 0.272727 | python-kit | 53 | 2024-07-07T15:13:26.454730 | MIT | false | db3bd2d166faec162d47c0e3a1d5bbc2 |
# Number of days of inactivity before an issue becomes stale\ndaysUntilStale: 366\n# Number of days of inactivity before a stale issue is closed\ndaysUntilClose: 42\n# Issues with these labels will never be considered stale\nexemptLabels:\n - pinned\n - security\n# Label to use when marking an issue as stale\nstaleLabel: "status:inactive"\n# Comment to post when marking an issue as stale. Set to `false` to disable\nmarkComment: >\n This issue has been automatically marked as stale because it has not had\n recent activity. It will be closed if no further activity occurs. Thank you\n for your contributions.\n# Comment to post when closing a stale issue. Set to `false` to disable\ncloseComment: false\n | dataset_sample\yaml\ethereum_go-ethereum\.github\stale.yml | stale.yml | YAML | 696 | 0.8 | 0.117647 | 0.352941 | awesome-app | 409 | 2025-05-02T23:14:17.207177 | GPL-3.0 | false | 59518cbc30e9455b96c5d8eac762309f |
name: i386 linux tests\n\non:\n push:\n branches: [ master ]\n pull_request:\n branches: [ master ]\n workflow_dispatch:\n\njobs:\n lint:\n name: Lint\n runs-on: self-hosted\n steps:\n - uses: actions/checkout@v4\n\n # Cache build tools to avoid downloading them each time\n - uses: actions/cache@v4\n with:\n path: build/cache\n key: ${{ runner.os }}-build-tools-cache-${{ hashFiles('build/checksums.txt') }}\n\n - name: Set up Go\n uses: actions/setup-go@v5\n with:\n go-version: 1.23.0\n cache: false\n\n - name: Run linters\n run: |\n go run build/ci.go lint\n go run build/ci.go check_generate\n go run build/ci.go check_baddeps\n\n build:\n runs-on: self-hosted\n steps:\n - uses: actions/checkout@v4\n - name: Set up Go\n uses: actions/setup-go@v5\n with:\n go-version: 1.24.0\n cache: false\n - name: Run tests\n run: go test -short ./...\n env:\n GOOS: linux\n GOARCH: 386\n | dataset_sample\yaml\ethereum_go-ethereum\.github\workflows\go.yml | go.yml | YAML | 1,024 | 0.8 | 0 | 0.02381 | react-lib | 937 | 2024-10-09T22:43:58.404338 | MIT | false | b445ecc974e8116f150c3ffb8733f273 |
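Both jobs in the workflow above set `cache: false` on `actions/setup-go`; a sketch of the opposite choice, letting setup-go cache Go modules keyed on `go.sum` (this is a hypothetical variation, not part of the original workflow, and assumes a `go.sum` exists at the repository root):

```yaml
- name: Set up Go
  uses: actions/setup-go@v5
  with:
    go-version: 1.24.0
    # Built-in module/build caching, keyed on the dependency file below.
    cache: true
    cache-dependency-path: go.sum
```

Disabling the built-in cache, as the original does, makes sense when a separate `actions/cache` step (like the `build/cache` one above) already controls caching explicitly.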
env:\n ### cirrus config\n CIRRUS_CLONE_DEPTH: 1\n ### compiler options\n HOST:\n WRAPPER_CMD:\n # Specific warnings can be disabled with -Wno-error=foo.\n # -pedantic-errors is not equivalent to -Werror=pedantic and thus not implied by -Werror according to the GCC manual.\n WERROR_CFLAGS: -Werror -pedantic-errors\n MAKEFLAGS: -j4\n BUILD: check\n ### secp256k1 config\n ECMULTWINDOW: 15\n ECMULTGENKB: 22\n ASM: no\n WIDEMUL: auto\n WITH_VALGRIND: yes\n EXTRAFLAGS:\n ### secp256k1 modules\n EXPERIMENTAL: no\n ECDH: no\n RECOVERY: no\n EXTRAKEYS: no\n SCHNORRSIG: no\n MUSIG: no\n ELLSWIFT: no\n ### test options\n SECP256K1_TEST_ITERS: 64\n BENCH: yes\n SECP256K1_BENCH_ITERS: 2\n CTIMETESTS: yes\n # Compile and run the examples\n EXAMPLES: yes\n\ncat_logs_snippet: &CAT_LOGS\n always:\n cat_tests_log_script:\n - cat tests.log || true\n cat_noverify_tests_log_script:\n - cat noverify_tests.log || true\n cat_exhaustive_tests_log_script:\n - cat exhaustive_tests.log || true\n cat_ctime_tests_log_script:\n - cat ctime_tests.log || true\n cat_bench_log_script:\n - cat bench.log || true\n cat_config_log_script:\n - cat config.log || true\n cat_test_env_script:\n - cat test_env.log || true\n cat_ci_env_script:\n - env\n\nlinux_arm64_container_snippet: &LINUX_ARM64_CONTAINER\n env_script:\n - env | tee /tmp/env\n build_script:\n - DOCKER_BUILDKIT=1 docker build --file "ci/linux-debian.Dockerfile" --tag="ci_secp256k1_arm"\n - docker image prune --force # Cleanup stale layers\n test_script:\n - docker run --rm --mount "type=bind,src=./,dst=/ci_secp256k1" --env-file /tmp/env --replace --name "ci_secp256k1_arm" "ci_secp256k1_arm" bash -c "cd /ci_secp256k1/ && ./ci/ci.sh"\n\ntask:\n name: "ARM64: Linux (Debian stable)"\n persistent_worker:\n labels:\n type: arm64\n env:\n ECDH: yes\n RECOVERY: yes\n EXTRAKEYS: yes\n SCHNORRSIG: yes\n MUSIG: yes\n ELLSWIFT: yes\n matrix:\n # Currently only gcc-snapshot, the other compilers are tested on GHA with QEMU\n - env: { CC: 'gcc-snapshot' }\n 
<< : *LINUX_ARM64_CONTAINER\n << : *CAT_LOGS\n\ntask:\n name: "ARM64: Linux (Debian stable), Valgrind"\n persistent_worker:\n labels:\n type: arm64\n env:\n ECDH: yes\n RECOVERY: yes\n EXTRAKEYS: yes\n SCHNORRSIG: yes\n MUSIG: yes\n ELLSWIFT: yes\n WRAPPER_CMD: 'valgrind --error-exitcode=42'\n SECP256K1_TEST_ITERS: 2\n matrix:\n - env: { CC: 'gcc' }\n - env: { CC: 'clang' }\n - env: { CC: 'gcc-snapshot' }\n - env: { CC: 'clang-snapshot' }\n << : *LINUX_ARM64_CONTAINER\n << : *CAT_LOGS\n | dataset_sample\yaml\ethereum_go-ethereum\crypto\secp256k1\libsecp256k1\.cirrus.yml | .cirrus.yml | YAML | 2,595 | 0.8 | 0 | 0.092784 | node-utils | 588 | 2024-12-15T15:40:57.476236 | GPL-3.0 | false | 0b959b5946c6c6b9b1b99996c3febdea |
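The `.cirrus.yml` row above defines its shared steps once as YAML anchors (`&CAT_LOGS`, `&LINUX_ARM64_CONTAINER`) and merges them into each task with `<<`; a hypothetical extra task reusing them would follow the same shape (the task name and `CC` value here are illustrative, not from the original file):

```yaml
task:
  # Hypothetical additional task; not present in the original config.
  name: "ARM64: Linux (Debian stable), extra configuration"
  persistent_worker:
    labels:
      type: arm64
  env:
    CC: gcc
  << : *LINUX_ARM64_CONTAINER
  << : *CAT_LOGS
```

The merge keys splice in the container build/test scripts and the always-run log-dumping steps without repeating them per task.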
name: "Install Valgrind"\ndescription: "Install Homebrew's Valgrind package and cache it."\nruns:\n using: "composite"\n steps:\n - run: |\n brew tap LouisBrunner/valgrind\n brew fetch --HEAD LouisBrunner/valgrind/valgrind\n echo "CI_HOMEBREW_CELLAR_VALGRIND=$(brew --cellar valgrind)" >> "$GITHUB_ENV"\n shell: bash\n\n - run: |\n sw_vers > valgrind_fingerprint\n brew --version >> valgrind_fingerprint\n git -C "$(brew --cache)/valgrind--git" rev-parse HEAD >> valgrind_fingerprint\n cat valgrind_fingerprint\n shell: bash\n\n - uses: actions/cache@v4\n id: cache\n with:\n path: ${{ env.CI_HOMEBREW_CELLAR_VALGRIND }}\n key: ${{ github.job }}-valgrind-${{ hashFiles('valgrind_fingerprint') }}\n\n - if: steps.cache.outputs.cache-hit != 'true'\n run: |\n brew install --HEAD LouisBrunner/valgrind/valgrind\n shell: bash\n\n - if: steps.cache.outputs.cache-hit == 'true'\n run: |\n brew link valgrind\n shell: bash\n | dataset_sample\yaml\ethereum_go-ethereum\crypto\secp256k1\libsecp256k1\.github\actions\install-homebrew-valgrind\action.yml | action.yml | YAML | 1,018 | 0.7 | 0.060606 | 0 | vue-tools | 839 | 2025-01-27T00:45:45.841896 | Apache-2.0 | false | d6f2027019beb3ac9ba1f7e39ca6f5c7 |
name: 'Run in Docker with environment'\ndescription: 'Run a command in a Docker container, while passing explicitly set environment variables into the container.'\ninputs:\n dockerfile:\n description: 'A Dockerfile that defines an image'\n required: true\n tag:\n description: 'A tag of an image'\n required: true\n command:\n description: 'A command to run in a container'\n required: false\n default: ./ci/ci.sh\nruns:\n using: "composite"\n steps:\n - uses: docker/setup-buildx-action@v3\n\n - uses: docker/build-push-action@v5\n id: main_builder\n continue-on-error: true\n with:\n context: .\n file: ${{ inputs.dockerfile }}\n tags: ${{ inputs.tag }}\n load: true\n cache-from: type=gha\n\n - uses: docker/build-push-action@v5\n id: retry_builder\n if: steps.main_builder.outcome == 'failure'\n with:\n context: .\n file: ${{ inputs.dockerfile }}\n tags: ${{ inputs.tag }}\n load: true\n cache-from: type=gha\n\n - # Workaround for https://github.com/google/sanitizers/issues/1614 .\n # The underlying issue has been fixed in clang 18.1.3.\n run: sudo sysctl -w vm.mmap_rnd_bits=28\n shell: bash\n\n - # Tell Docker to pass environment variables in `env` into the container.\n run: >\n docker run \\n $(echo '${{ toJSON(env) }}' | jq -r 'keys[] | "--env \(.) "') \\n --volume ${{ github.workspace }}:${{ github.workspace }} \\n --workdir ${{ github.workspace }} \\n ${{ inputs.tag }} bash -c "\n git config --global --add safe.directory ${{ github.workspace }}\n ${{ inputs.command }}\n "\n shell: bash\n | dataset_sample\yaml\ethereum_go-ethereum\crypto\secp256k1\libsecp256k1\.github\actions\run-in-docker-action\action.yml | action.yml | YAML | 1,695 | 0.95 | 0.055556 | 0.02 | vue-tools | 542 | 2024-08-02T23:29:24.213375 | MIT | false | 4fa0ffc087e10f529cc60c8336cb143c |
name: CI\non:\n pull_request:\n push:\n branches:\n - '**'\n tags-ignore:\n - '**'\n\nconcurrency:\n group: ${{ github.event_name != 'pull_request' && github.run_id || github.ref }}\n cancel-in-progress: true\n\nenv:\n ### compiler options\n HOST:\n WRAPPER_CMD:\n # Specific warnings can be disabled with -Wno-error=foo.\n # -pedantic-errors is not equivalent to -Werror=pedantic and thus not implied by -Werror according to the GCC manual.\n WERROR_CFLAGS: '-Werror -pedantic-errors'\n MAKEFLAGS: '-j4'\n BUILD: 'check'\n ### secp256k1 config\n ECMULTWINDOW: 15\n ECMULTGENKB: 86\n ASM: 'no'\n WIDEMUL: 'auto'\n WITH_VALGRIND: 'yes'\n EXTRAFLAGS:\n ### secp256k1 modules\n EXPERIMENTAL: 'no'\n ECDH: 'no'\n RECOVERY: 'no'\n EXTRAKEYS: 'no'\n SCHNORRSIG: 'no'\n MUSIG: 'no'\n ELLSWIFT: 'no'\n ### test options\n SECP256K1_TEST_ITERS: 64\n BENCH: 'yes'\n SECP256K1_BENCH_ITERS: 2\n CTIMETESTS: 'yes'\n # Compile and run the examples.\n EXAMPLES: 'yes'\n\njobs:\n docker_cache:\n name: "Build Docker image"\n runs-on: ubuntu-latest\n steps:\n - name: Set up Docker Buildx\n uses: docker/setup-buildx-action@v3\n with:\n # See: https://github.com/moby/buildkit/issues/3969.\n driver-opts: |\n network=host\n\n - name: Build container\n uses: docker/build-push-action@v5\n with:\n file: ./ci/linux-debian.Dockerfile\n tags: linux-debian-image\n cache-from: type=gha\n cache-to: type=gha,mode=min\n\n linux_debian:\n name: "x86_64: Linux (Debian stable)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - env_vars: { WIDEMUL: 'int64', RECOVERY: 'yes' }\n - env_vars: { WIDEMUL: 'int64', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes' }\n - env_vars: { WIDEMUL: 'int128' }\n - env_vars: { WIDEMUL: 'int128_struct', ELLSWIFT: 'yes' }\n - env_vars: { WIDEMUL: 'int128', RECOVERY: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes' }\n - env_vars: { WIDEMUL: 'int128', ECDH: 'yes', 
EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes' }\n - env_vars: { WIDEMUL: 'int128', ASM: 'x86_64', ELLSWIFT: 'yes' }\n - env_vars: { RECOVERY: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes' }\n - env_vars: { CTIMETESTS: 'no', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', CPPFLAGS: '-DVERIFY' }\n - env_vars: { BUILD: 'distcheck', WITH_VALGRIND: 'no', CTIMETESTS: 'no', BENCH: 'no' }\n - env_vars: { CPPFLAGS: '-DDETERMINISTIC' }\n - env_vars: { CFLAGS: '-O0', CTIMETESTS: 'no' }\n - env_vars: { CFLAGS: '-O1', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes' }\n - env_vars: { ECMULTGENKB: 2, ECMULTWINDOW: 2 }\n - env_vars: { ECMULTGENKB: 86, ECMULTWINDOW: 4 }\n cc:\n - 'gcc'\n - 'clang'\n - 'gcc-snapshot'\n - 'clang-snapshot'\n\n env:\n CC: ${{ matrix.cc }}\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n i686_debian:\n name: "i686: Linux (Debian stable)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n strategy:\n fail-fast: false\n matrix:\n cc:\n - 'i686-linux-gnu-gcc'\n - 'clang --target=i686-pc-linux-gnu -isystem /usr/i686-linux-gnu/include'\n\n env:\n HOST: 'i686-linux-gnu'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CC: ${{ matrix.cc }}\n\n steps:\n - name: 
Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n s390x_debian:\n name: "s390x (big-endian): Linux (Debian stable, QEMU)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n env:\n WRAPPER_CMD: 'qemu-s390x'\n SECP256K1_TEST_ITERS: 16\n HOST: 's390x-linux-gnu'\n WITH_VALGRIND: 'no'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 'no'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n arm32_debian:\n name: "ARM32: Linux (Debian stable, QEMU)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - env_vars: {}\n - env_vars: { EXPERIMENTAL: 'yes', ASM: 'arm32' }\n\n env:\n WRAPPER_CMD: 'qemu-arm'\n SECP256K1_TEST_ITERS: 16\n HOST: 'arm-linux-gnueabihf'\n WITH_VALGRIND: 
'no'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 'no'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n arm64_debian:\n name: "ARM64: Linux (Debian stable, QEMU)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n env:\n WRAPPER_CMD: 'qemu-aarch64'\n SECP256K1_TEST_ITERS: 16\n HOST: 'aarch64-linux-gnu'\n WITH_VALGRIND: 'no'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 'no'\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - env_vars: { } # gcc\n - env_vars: # clang\n CC: 'clang --target=aarch64-linux-gnu'\n - env_vars: # clang-snapshot\n CC: 'clang-snapshot --target=aarch64-linux-gnu'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() 
}}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n ppc64le_debian:\n name: "ppc64le: Linux (Debian stable, QEMU)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n env:\n WRAPPER_CMD: 'qemu-ppc64le'\n SECP256K1_TEST_ITERS: 16\n HOST: 'powerpc64le-linux-gnu'\n WITH_VALGRIND: 'no'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 'no'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n valgrind_debian:\n name: "Valgrind (memcheck)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - env_vars: { CC: 'clang', ASM: 'auto' }\n - env_vars: { CC: 'i686-linux-gnu-gcc', HOST: 'i686-linux-gnu', ASM: 'auto' }\n - env_vars: { CC: 'clang', ASM: 'no', ECMULTGENKB: 2, ECMULTWINDOW: 2 }\n - env_vars: { CC: 'i686-linux-gnu-gcc', HOST: 'i686-linux-gnu', ASM: 'no', ECMULTGENKB: 2, ECMULTWINDOW: 2 }\n\n env:\n # The `--error-exitcode` is required to make the test fail if valgrind found errors,\n # otherwise it will return 0 (https://www.valgrind.org/docs/manual/manual-core.html).\n WRAPPER_CMD: 'valgrind --error-exitcode=42'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 'no'\n SECP256K1_TEST_ITERS: 2\n\n steps:\n - name: Checkout\n 
uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n sanitizers_debian:\n name: "UBSan, ASan, LSan"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - env_vars: { CC: 'clang', ASM: 'auto' }\n - env_vars: { CC: 'i686-linux-gnu-gcc', HOST: 'i686-linux-gnu', ASM: 'auto' }\n - env_vars: { CC: 'clang', ASM: 'no', ECMULTGENKB: 2, ECMULTWINDOW: 2 }\n - env_vars: { CC: 'i686-linux-gnu-gcc', HOST: 'i686-linux-gnu', ASM: 'no', ECMULTGENKB: 2, ECMULTWINDOW: 2 }\n\n env:\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 'no'\n CFLAGS: '-fsanitize=undefined,address -g'\n UBSAN_OPTIONS: 'print_stacktrace=1:halt_on_error=1'\n ASAN_OPTIONS: 'strict_string_checks=1:detect_stack_use_after_return=1:detect_leaks=1'\n LSAN_OPTIONS: 'use_unaligned=1'\n SECP256K1_TEST_ITERS: 32\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() 
}}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n msan_debian:\n name: "MSan"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - env_vars:\n CTIMETESTS: 'yes'\n CFLAGS: '-fsanitize=memory -fsanitize-recover=memory -g'\n - env_vars:\n ECMULTGENKB: 2\n ECMULTWINDOW: 2\n CTIMETESTS: 'yes'\n CFLAGS: '-fsanitize=memory -fsanitize-recover=memory -g -O3'\n - env_vars:\n # -fsanitize-memory-param-retval is clang's default, but our build system disables it\n # when ctime_tests are enabled.\n CFLAGS: '-fsanitize=memory -fsanitize-recover=memory -fsanitize-memory-param-retval -g'\n CTIMETESTS: 'no'\n\n env:\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CC: 'clang'\n SECP256K1_TEST_ITERS: 32\n ASM: 'no'\n WITH_VALGRIND: 'no'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n mingw_debian:\n name: ${{ matrix.configuration.job_name }}\n runs-on: ubuntu-latest\n needs: docker_cache\n\n env:\n WRAPPER_CMD: 'wine'\n WITH_VALGRIND: 'no'\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n CTIMETESTS: 
'no'\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - job_name: 'x86_64 (mingw32-w64): Windows (Debian stable, Wine)'\n env_vars:\n HOST: 'x86_64-w64-mingw32'\n - job_name: 'i686 (mingw32-w64): Windows (Debian stable, Wine)'\n env_vars:\n HOST: 'i686-w64-mingw32'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n env: ${{ matrix.configuration.env_vars }}\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n x86_64-macos-native:\n name: "x86_64: macOS Ventura, Valgrind"\n # See: https://github.com/actions/runner-images#available-images.\n runs-on: macos-13\n\n env:\n CC: 'clang'\n HOMEBREW_NO_AUTO_UPDATE: 1\n HOMEBREW_NO_INSTALL_CLEANUP: 1\n\n strategy:\n fail-fast: false\n matrix:\n env_vars:\n - { WIDEMUL: 'int64', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes' }\n - { WIDEMUL: 'int128_struct', ECMULTGENKB: 2, ECMULTWINDOW: 4 }\n - { WIDEMUL: 'int128', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes', CC: 'gcc' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes', 
WRAPPER_CMD: 'valgrind --error-exitcode=42', SECP256K1_TEST_ITERS: 2 }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes', CC: 'gcc', WRAPPER_CMD: 'valgrind --error-exitcode=42', SECP256K1_TEST_ITERS: 2 }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', MUSIG: 'yes', ELLSWIFT: 'yes', CPPFLAGS: '-DVERIFY', CTIMETESTS: 'no' }\n - BUILD: 'distcheck'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Install Homebrew packages\n run: |\n brew install --quiet automake libtool gcc\n ln -s $(brew --prefix gcc)/bin/gcc-?? /usr/local/bin/gcc\n\n - name: Install and cache Valgrind\n uses: ./.github/actions/install-homebrew-valgrind\n\n - name: CI script\n env: ${{ matrix.env_vars }}\n run: ./ci/ci.sh\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n arm64-macos-native:\n name: "ARM64: macOS Sonoma"\n # See: https://github.com/actions/runner-images#available-images.\n runs-on: macos-14\n\n env:\n CC: 'clang'\n HOMEBREW_NO_AUTO_UPDATE: 1\n HOMEBREW_NO_INSTALL_CLEANUP: 1\n WITH_VALGRIND: 'no'\n CTIMETESTS: 'no'\n\n strategy:\n fail-fast: false\n matrix:\n env_vars:\n - { WIDEMUL: 'int64', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', ELLSWIFT: 'yes' }\n - { WIDEMUL: 'int128_struct', ECMULTGENPRECISION: 2, ECMULTWINDOW: 4 }\n - { WIDEMUL: 'int128', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', ELLSWIFT: 'yes' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', 
SCHNORRSIG: 'yes', ELLSWIFT: 'yes' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', ELLSWIFT: 'yes', CC: 'gcc' }\n - { WIDEMUL: 'int128', RECOVERY: 'yes', ECDH: 'yes', EXTRAKEYS: 'yes', SCHNORRSIG: 'yes', ELLSWIFT: 'yes', CPPFLAGS: '-DVERIFY' }\n - BUILD: 'distcheck'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Install Homebrew packages\n run: |\n brew install --quiet automake libtool gcc\n ln -s $(brew --prefix gcc)/bin/gcc-?? /usr/local/bin/gcc\n\n - name: CI script\n env: ${{ matrix.env_vars }}\n run: ./ci/ci.sh\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: ${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n win64-native:\n name: ${{ matrix.configuration.job_name }}\n # See: https://github.com/actions/runner-images#available-images.\n runs-on: windows-2022\n\n strategy:\n fail-fast: false\n matrix:\n configuration:\n - job_name: 'x64 (MSVC): Windows (VS 2022, shared)'\n cmake_options: '-A x64 -DBUILD_SHARED_LIBS=ON'\n - job_name: 'x64 (MSVC): Windows (VS 2022, static)'\n cmake_options: '-A x64 -DBUILD_SHARED_LIBS=OFF'\n - job_name: 'x64 (MSVC): Windows (VS 2022, int128_struct)'\n cmake_options: '-A x64 -DSECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY=int128_struct'\n - job_name: 'x64 (MSVC): Windows (VS 2022, int128_struct with __(u)mulh)'\n cmake_options: '-A x64 -DSECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY=int128_struct'\n cpp_flags: '/DSECP256K1_MSVC_MULH_TEST_OVERRIDE'\n - job_name: 'x86 (MSVC): Windows (VS 2022)'\n cmake_options: '-A Win32'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Generate buildsystem\n run: cmake -E env 
CFLAGS="/WX ${{ matrix.configuration.cpp_flags }}" cmake -B build -DSECP256K1_ENABLE_MODULE_RECOVERY=ON -DSECP256K1_BUILD_EXAMPLES=ON ${{ matrix.configuration.cmake_options }}\n\n - name: Build\n run: cmake --build build --config RelWithDebInfo -- /p:UseMultiToolTask=true /maxCpuCount\n\n - name: Binaries info\n # Use the bash shell included with Git for Windows.\n shell: bash\n run: |\n cd build/bin/RelWithDebInfo && file *tests.exe bench*.exe libsecp256k1-*.dll || true\n\n - name: Check\n run: |\n ctest -C RelWithDebInfo --test-dir build -j ([int]$env:NUMBER_OF_PROCESSORS + 1)\n build\bin\RelWithDebInfo\bench_ecmult.exe\n build\bin\RelWithDebInfo\bench_internal.exe\n build\bin\RelWithDebInfo\bench.exe\n\n win64-native-headers:\n name: "x64 (MSVC): C++ (public headers)"\n # See: https://github.com/actions/runner-images#available-images.\n runs-on: windows-2022\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Add cl.exe to PATH\n uses: ilammy/msvc-dev-cmd@v1\n\n - name: C++ (public headers)\n run: |\n cl.exe -c -WX -TP include/*.h\n\n cxx_fpermissive_debian:\n name: "C++ -fpermissive (entire project)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n env:\n CC: 'g++'\n CFLAGS: '-fpermissive -g'\n CPPFLAGS: '-DSECP256K1_CPLUSPLUS_TEST_OVERRIDE'\n WERROR_CFLAGS:\n ECDH: 'yes'\n RECOVERY: 'yes'\n EXTRAKEYS: 'yes'\n SCHNORRSIG: 'yes'\n MUSIG: 'yes'\n ELLSWIFT: 'yes'\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n\n - run: cat tests.log || true\n if: ${{ always() }}\n - run: cat noverify_tests.log || true\n if: ${{ always() }}\n - run: cat exhaustive_tests.log || true\n if: ${{ always() }}\n - run: cat ctime_tests.log || true\n if: ${{ always() }}\n - run: cat bench.log || true\n if: ${{ always() }}\n - run: cat config.log || true\n if: ${{ always() }}\n - run: cat test_env.log || true\n if: 
${{ always() }}\n - name: CI env\n run: env\n if: ${{ always() }}\n\n cxx_headers_debian:\n name: "C++ (public headers)"\n runs-on: ubuntu-latest\n needs: docker_cache\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n uses: ./.github/actions/run-in-docker-action\n with:\n dockerfile: ./ci/linux-debian.Dockerfile\n tag: linux-debian-image\n command: |\n g++ -Werror include/*.h\n clang -Werror -x c++-header include/*.h\n\n sage:\n name: "SageMath prover"\n runs-on: ubuntu-latest\n container:\n image: sagemath/sagemath:latest\n options: --user root\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: CI script\n run: |\n cd sage\n sage prove_group_implementations.sage\n\n release:\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - run: ./autogen.sh && ./configure --enable-dev-mode && make distcheck\n\n - name: Check installation with Autotools\n env:\n CI_INSTALL: ${{ runner.temp }}/${{ github.run_id }}${{ github.action }}/install\n run: |\n ./autogen.sh && ./configure --prefix=${{ env.CI_INSTALL }} && make clean && make install && ls -RlAh ${{ env.CI_INSTALL }}\n gcc -o ecdsa examples/ecdsa.c $(PKG_CONFIG_PATH=${{ env.CI_INSTALL }}/lib/pkgconfig pkg-config --cflags --libs libsecp256k1) -Wl,-rpath,"${{ env.CI_INSTALL }}/lib" && ./ecdsa\n\n - name: Check installation with CMake\n env:\n CI_BUILD: ${{ runner.temp }}/${{ github.run_id }}${{ github.action }}/build\n CI_INSTALL: ${{ runner.temp }}/${{ github.run_id }}${{ github.action }}/install\n run: |\n cmake -B ${{ env.CI_BUILD }} -DCMAKE_INSTALL_PREFIX=${{ env.CI_INSTALL }} && cmake --build ${{ env.CI_BUILD }} && cmake --install ${{ env.CI_BUILD }} && ls -RlAh ${{ env.CI_INSTALL }}\n gcc -o ecdsa examples/ecdsa.c -I ${{ env.CI_INSTALL }}/include -L ${{ env.CI_INSTALL }}/lib*/ -l secp256k1 -Wl,-rpath,"${{ env.CI_INSTALL }}/lib",-rpath,"${{ env.CI_INSTALL }}/lib64" && ./ecdsa\n | 
dataset_sample\yaml\ethereum_go-ethereum\crypto\secp256k1\libsecp256k1\.github\workflows\ci.yml | ci.yml | YAML | 27,938 | 0.95 | 0.119101 | 0.021628 | awesome-app | 577 | 2023-09-15T22:09:38.138292 | Apache-2.0 | false | 07e7454d69c939f051a2cafce7109f2f |
version: "3.7"\nservices:\n expose:\n image: beyondcodegmbh/expose-server:latest\n extra_hosts:\n - "host.docker.internal:host-gateway"\n ports:\n - 8080:${PORT}\n environment:\n port: ${PORT}\n domain: ${DOMAIN}\n username: ${ADMIN_USERNAME}\n password: ${ADMIN_PASSWORD}\n restart: always\n volumes:\n - ./database/expose.db:/root/.expose\n | dataset_sample\yaml\exposedev_expose\docker-compose.yml | docker-compose.yml | YAML | 380 | 0.7 | 0 | 0 | python-kit | 494 | 2025-04-10T07:29:12.967020 | GPL-3.0 | false | 5875fcaa3f0a65561044c2476ec7dc1c |
name: Bug Report\ndescription: File a bug report\ntitle: "[Bug]: "\nlabels: []\nbody:\n - type: markdown\n attributes:\n value: |\n Before opening a bug report, please search for the behaviour in the existing issues.\n\n ---\n\n Thank you for taking the time to file a bug report. To address this bug as fast as possible, we need some information.\n - type: dropdown\n id: arch\n attributes:\n label: System architecture\n description: "Which operating system architecture are you using?"\n options:\n - Mac, Intel (x86)\n - Mac, ARM64 (M1, M2, etc)\n - Windows\n - Linux\n validations:\n required: true\n - type: input\n id: php\n attributes:\n label: PHP Version\n description: "If your problem only occurs with a certain PHP version, please provide it in the field below."\n placeholder: "PHP 8.2.8 (cli) (built: Jul 7 2023 00:11:29) (NTS)"\n validations:\n required: false\n - type: textarea\n id: bug-description\n attributes:\n label: Bug description\n description: What happened and what did you expect to happen?\n validations:\n required: true\n - type: textarea\n id: steps\n attributes:\n label: Steps to reproduce\n description: Which steps do we need to take to reproduce this error?\n\n - type: textarea\n id: logs\n attributes:\n label: Relevant log output\n description: If applicable, provide relevant log output. No need for backticks here.\n render: shell\n | dataset_sample\yaml\exposedev_expose\.github\ISSUE_TEMPLATE\bug.yml | bug.yml | YAML | 1,507 | 0.85 | 0.057692 | 0 | vue-tools | 994 | 2023-12-14T19:06:22.911317 | Apache-2.0 | false | f3ff70524d424e6d50c97a289fca3934 |
blank_issues_enabled: false\ncontact_links:\n - name: Check the docs\n url: https://expose.dev/docs/introduction\n about: "This repository contains the open source version of Expose. Please contact our email support if you have issues with our network or Expose Pro!"\n - name: Security vulnerabilities\n url: https://@beyondco.de\n about: If you want to report a security issue, please contact our support.\n | dataset_sample\yaml\exposedev_expose\.github\ISSUE_TEMPLATE\config.yml | config.yml | YAML | 415 | 0.8 | 0.125 | 0 | vue-tools | 994 | 2023-09-14T19:09:41.086002 | GPL-3.0 | false | 1aabe06d9c7557b2753f91000d694ff5 |
name: Feature Request\ndescription: Propose a new feature for the open source version of Expose\ntitle: "[Feature Request]: "\nlabels: [feature request]\nbody:\n - type: markdown\n attributes:\n value: |\n Thank you for suggesting a feature for the Expose open source version!\n - type: textarea\n id: feature-description\n attributes:\n label: Feature Description\n description: How should this feature look like?\n validations:\n required: true\n - type: textarea\n id: valuable\n attributes:\n label: Is this feature valuable for other users as well and why?\n description: We want to build software that provides a great experience for all of us.\n | dataset_sample\yaml\exposedev_expose\.github\ISSUE_TEMPLATE\feature.yml | feature.yml | YAML | 689 | 0.85 | 0.238095 | 0 | node-utils | 462 | 2024-08-12T17:57:43.332099 | GPL-3.0 | false | 765a995a3220cfb96c9094e45b234f84 |
# commitlint.config.yml\nextends:\n - "@commitlint/config-conventional"\n\nrules:\n # Basic rules\n header-max-length: [2, "always", 72]\n body-max-line-length: [2, "always", 100]\n\n # Commit type\n type-enum:\n - 2\n - "always"\n - [\n "feat", # New feature\n "enh", # Enhancement of an existing feature\n "fix", # Bug fix\n "docs", # Documentation changes\n "style", # Code formatting, white spaces, etc.\n "refactor", # Code refactoring\n "perf", # Performance improvement\n "test", # Adding or fixing tests\n "build", # Changes affecting the build system or external dependencies\n "ci", # Changes to CI configuration files and scripts\n "chore", # Other changes that don't modify src or test files\n "delete", # Deleting unused files\n "revert", # Reverting to a previous commit\n ]\n\n scope-empty: [2, "never"]\n subject-empty: [2, "never"] | dataset_sample\yaml\eythaann_Seelen-UI\.commitlintrc.yml | .commitlintrc.yml | YAML | 931 | 0.8 | 0 | 0.107143 | react-lib | 350 | 2024-03-31T10:32:32.557001 | BSD-3-Clause | false | 5201cbbe2da9893ab7a4d3f661611d07 |
commit-msg:\n commands:\n lint-commit-msg:\n run: npx commitlint --edit\n\npre-commit:\n commands:\n js-linter:\n priority: 1\n glob: "**/*.{js,jsx,ts,tsx}"\n run: npm run lint\n rust-linter:\n priority: 2\n glob: "**/*.rs"\n run: cargo fmt -- --check\n ts-type-check:\n priority: 3\n glob: "**/*.{js,jsx,ts,tsx}"\n run: npm run type-check\n rust-code-check:\n priority: 4\n glob: "**/*.rs"\n run: cargo clippy -- -D warnings\n\npre-push:\n commands:\n js-test:\n priority: 1\n glob: "**/*.{js,jsx,ts,tsx}"\n run: npm run test\n rust-test:\n priority: 3\n glob: "**/*.rs"\n run: cargo test\n | dataset_sample\yaml\eythaann_Seelen-UI\lefthook.yml | lefthook.yml | YAML | 676 | 0.8 | 0 | 0 | python-kit | 169 | 2024-07-30T20:51:52.041796 | BSD-3-Clause | false | 3365362156ab1b213a99cc58dcaa8349 |