Add workflow to auto-update Go dependencies
name: Go Dependencies
on:
  schedule:
    - cron: '0 9 * * *'
  workflow_dispatch:
jobs:
  go-deps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: bazelbuild/setup-bazelisk@v2
      - uses: actions/setup-go@v3
      - name: Update Go Modules
        run: go mod tidy
      - name: Update Bazel Repositories
        run: bazel run :gazelle-update-repos
      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v4
        with:
          commit-message: Update Go Dependencies
          title: Update Go Dependencies
          body: Auto-generated Go dependency updates.
          branch: go-deps-auto-update
          delete-branch: true
Create github action for CI
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        ruby:
          - 2.4.x
          - 2.5.x
          - 2.6.x
    steps:
      - uses: actions/checkout@v1
      - name: Set up Ruby 2.6
        uses: actions/setup-ruby@v1
        with:
          ruby-version: ${{ matrix.ruby }}
      - name: Install databases
        run: sudo apt-get -y install libpq-dev libsqlite3-dev
      - name: Build and test with Rake
        run: |
          gem install bundler
          bundle install --jobs 4 --retry 3
          bundle exec rake
Update from Hackage at 2021-04-12T15:02:54Z
homepage: https://github.com/githubuser/mcaeson#readme
changelog-type: markdown
hash: 2b998fea00f1a50d78ec9f46dd2f294030f01815835ea25fc857733518c987aa
test-bench-deps:
  base: '>=4.7 && <5'
  mcaeson: -any
maintainer: chris@chrisdornan.com
synopsis: An Aeson parsing toolkit
changelog: |
  # Changelog for mcaeson

  ## Unreleased changes
basic-deps:
  base: '>=4.7 && <5'
  mcaeson: -any
all-versions:
- 0.0.0.1
author: Chris Dornan
latest: 0.0.0.1
description-type: markdown
description: |
  # mcaeson

  A new toolkit for testing, benchmarking and building high-performance
  Aeson parsers.
license-name: BSD-3-Clause
Add an issue template for regression triage
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0

name: Triage Test
description: Issue to track regression failure triaging.
title: "[test-triage] "
labels: ["Component:TestTriage"]
body:
  - type: dropdown
    id: hierarchy
    attributes:
      label: Hierarchy of regression failure
      description: What hierarchy level is the regression failure?
      options:
        - Block level
        - Chip Level
    validations:
      required: true
  - type: textarea
    id: description
    attributes:
      label: Failure Description
      description: A description of the test failure
      placeholder: >
        Example: UVM_FATAL @ 1.270237 us: (mem_bkdr_util.sv:480)
        [mem_bkdr_util[Rom]] file test_rom_sim_dv.scr.39.vmem could not be
        opened for r mode
    validations:
      required: true
  - type: textarea
    id: reproduction
    attributes:
      label: Steps to Reproduce
      value: |
        - Commit hash where failure was observed
        - dvsim invocation command to reproduce the failure, inclusive of build and run seeds
        - Kokoro build number if applicable
    validations:
      required: true
  - type: textarea
    id: related-tests
    attributes:
      label: Tests with similar or related failures
      value: |
        - [ ] Test_name_1
        - [ ] Test_name_2
Add the Travis CI configuration
sudo: false
language: node_js
cache:
  directories:
    - node_modules
notifications:
  email: false
before_install:
  - npm install -g greenkeeper-lockfile@1
before_script:
  - npm prune
  - greenkeeper-lockfile-update
after_script:
  - greenkeeper-lockfile-upload
Update from Hackage at 2021-09-06T16:37:31Z
homepage: https://iu-parfunc.github.io/gibbon/
changelog-type: markdown
hash: 663c2a47c97aa6103107c95741e65842a279b176e1837dd0499ed4ab896fe679
test-bench-deps: {}
maintainer: rrnewton@gmail.com
synopsis: A compiler for operating on serialized trees.
changelog: |
  # 0.1
basic-deps:
  base: '>=4 && <5'
all-versions:
- '0.1'
author: Ryan Newton
latest: '0.1'
description-type: haddock
description: |-
  Gibbon is an experimental compiler that transforms high-level functional
  programs to operate on serialized data. Typically, programs that process
  tree-like data represent trees using pointer-based data structures in
  memory (one heap object per-leaf and per-node) because such a layout is
  convenient to manipulate in a high-level programming language. This is
  also generally distinct from the representation of the data in serialized
  form on disk, which means that a program must perform some sort or
  marshaling when working with serialized data. Gibbon unifies the
  in-memory and serialized formats, transforming recursive functions to
  operate directly on serialized data. Additionally, while the
  pointer-based structure is efficient for random access and shape-changing
  modifications, it can be inefficient for traversals that process most or
  all of a tree in bulk. The Gibbon project aims to explore optimizations
  of recursive tree transforms by changing how trees are stored in memory.
license-name: BSD-3-Clause
Remove DISABLE_V8=0 (check for key, not value)
language: ruby
rvm:
  - 2.2.6
  - 2.3.2
branches:
  only:
    - "master"
env:
  - RUBY_GC_MALLOC_LIMIT=4000000 RUBY_GC_MALLOC_LIMIT_MAX=16000000 RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.1 RUBY_GC_OLDMALLOC_LIMIT=16000000 RUBY_GC_OLDMALLOC_LIMIT_MAX=16000000
before_install: gem install bundler
matrix:
  fast_finish: true
  include:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=0
    - rvm: ruby-head
      env: DISABLE_V8=1
    - rvm: ruby-head
      env: DISABLE_V8=0
  allow_failures:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=0
    - rvm: ruby-head
      env: DISABLE_V8=1
    - rvm: ruby-head
      env: DISABLE_V8=0
script:
  - bundle exec rake
  - bundle exec appraisal install && FOCUS=rouge bundle exec appraisal rake spec
cache: bundler
sudo: false
git:
  depth: 10
language: ruby
rvm:
  - 2.2.6
  - 2.3.2
branches:
  only:
    - "master"
env:
  - RUBY_GC_MALLOC_LIMIT=4000000 RUBY_GC_MALLOC_LIMIT_MAX=16000000 RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.1 RUBY_GC_OLDMALLOC_LIMIT=16000000 RUBY_GC_OLDMALLOC_LIMIT_MAX=16000000
before_install: gem install bundler
matrix:
  fast_finish: true
  include:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: jruby-9.1.5.0
    - rvm: ruby-head
      env: DISABLE_V8=1
    - rvm: ruby-head
  allow_failures:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: jruby-9.1.5.0
    - rvm: ruby-head
      env: DISABLE_V8=1
    - rvm: ruby-head
script:
  - bundle exec rake
  - bundle exec appraisal install && FOCUS=rouge bundle exec appraisal rake spec
cache: bundler
sudo: false
git:
  depth: 10
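The instruction's parenthetical ("check for key, not value") is the whole point of this diff: the build scripts test whether the variable is *set*, not what it is set to, so `DISABLE_V8=0` would still disable V8. A minimal shell sketch of the distinction (the `DISABLE_V8` name comes from the config above; the check itself is an assumed illustration, not the project's actual script):

```shell
# Key-presence check: "${VAR+x}" expands to "x" whenever VAR is set,
# even if it is set to 0 or the empty string.
DISABLE_V8=0
if [ -n "${DISABLE_V8+x}" ]; then
  echo "V8 disabled"   # fires even though the value is 0
fi

# Only unsetting the variable keeps V8 enabled under this style of check.
unset DISABLE_V8
if [ -n "${DISABLE_V8+x}" ]; then
  echo "V8 disabled"
else
  echo "V8 enabled"
fi
```

That is why the fixed config simply omits the `env:` line for the entries that should keep V8 (and Nokogiri) enabled, rather than setting the variable to 0.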
Add nodeset for centos5 on docker
HOSTS:
  centos-5-x64:
    default_apply_opts:
      strict_variables:
    platform: el-5-x86_64
    hypervisor : docker
    image: centos:5
    # This stops the image from being deleted on completion, speeding up the process.
    docker_preserve_image: true
CONFIG:
  type: foss
  log_level: debug
Update from Hackage at 2016-10-10T22:45:06+00:00
homepage: https://github.com/schell/event-transformer#readme
changelog-type: ''
hash: 84fea38b5428772ea482650cbbdb6e2760f4cf0464aee768de949aa7dc4d0ff3
test-bench-deps:
  event-transformer: -any
  base: -any
maintainer: schell@zyghost.com
synopsis: Initial project template from stack
changelog: ''
basic-deps:
  event-transformer: -any
  base: ! '>=4.7 && <5'
  transformers: -any
all-versions:
- '0.1.0.0'
author: Schell Scivally
latest: '0.1.0.0'
description-type: haddock
description: Please see README.md
license-name: BSD3
Add custom Circle CI config
machine:
  environment:
    GOPATH: "$HOME/.go_workspace"
    GO15VENDOREXPERIMENT: "1"

# Disable default dependency management.
# This prevents go dependencies to be automatically installed in order to
# ensure we are exclusively relying on vendored deps.
#
# https://discuss.circleci.com/t/overriding-go-inference-in-the-dependencies-phase/660
# https://robots.thoughtbot.com/configure-circleci-for-go
dependencies:
  pre:
    - go version
  override:
    - mkdir -p "$GOPATH/src/github.com/$CIRCLE_PROJECT_USERNAME"
    - ln -sf "$HOME/$CIRCLE_PROJECT_REPONAME" "$GOPATH/src/github.com/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME"
  post:
    - go get -u github.com/golang/lint/golint

test:
  override:
    # TODO(aluzzardi): Remove this once we have vendoring.
    - go get -t -d -v ./...
    - make all
Configure PHE HPA's legacy tools as a new site
---
site: phe_hpa_legacytools
whitehall_slug: public-health-england
homepage: https://www.gov.uk/government/organisations/public-health-england
tna_timestamp: 20150318173350
host: legacytools.hpa.org.uk
Add recipe for cweqgen package
{% set name = "cweqgen" %}
{% set version = "0.3.0" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/{{ name }}-{{ version }}.tar.gz
  sha256: 06e5b341b76d73141dc29d99fe86924b76173233ea2ccad65d732e521a74b7b6

build:
  noarch: python
  number: 0
  script: "{{ PYTHON }} -m pip install . -vv"

requirements:
  host:
    - python
    - pip
    - setuptools
    - setuptools_scm
  run:
    - astropy
    - matplotlib-base
    - numpy
    - python
    - sympy

test:
  imports:
    - cweqgen
    - cweqgen.equations
  requires:
    - pip
  commands:
    - pip check

about:
  home: https://github.com/cwinpy/cweqgen
  license: MIT
  license_family: MIT
  license_file: LICENSE
  summary: 'The CW Equation Generator'
  description: |
    The cweqgen (pronouced like "Queck-Jen") package provides a way to
    generate a range of common CW equations using user supplied fiducial
    values.
  doc_url: https://cweqgen.readthedocs.io/
  dev_url: https://git.ligo.org/CW/software/CWEquationGenerator

extra:
  recipe-maintainers:
    - mattpitkin
Add ops-file for SMB Volume Service
- type: replace
  path: /instance_groups/name=api/jobs/name=cloud_controller_ng/properties/cc/volume_services_enabled?
  value: true
- type: replace
  path: /instance_groups/name=cc-worker/jobs/name=cloud_controller_worker/properties/cc/volume_services_enabled?
  value: true
- type: replace
  path: /instance_groups/name=scheduler/jobs/name=cloud_controller_clock/properties/cc/volume_services_enabled?
  value: true
- type: replace
  path: /instance_groups/name=database/jobs/name=mysql/properties/cf_mysql/mysql/seeded_databases/-
  value:
    name: azurefile-broker
    username: azurefile-broker
    password: "((azurefile-broker-database-password))"
- type: replace
  path: /instance_groups/-
  value:
    name: azurefilebrokerpush
    azs:
      - z1
    instances: 1
    vm_type: minimal
    stemcell: default
    networks:
      - name: default
    lifecycle: errand
    jobs:
      - name: azurefilebrokerpush
        release: smb-volume
        properties:
          username: admin
          password: ((azurefile-broker-password))
          service_id: azurefile-volume
          service_name: azurefile-service
          db_driver: mysql
          db_username: azurefile-broker
          db_password: ((azurefile-broker-database-password))
          db_hostname: sql-db.service.cf.internal
          db_port: 3306
          db_name: azurefile-broker
          app_name: azurefilebroker
          domain: ((system_domain))
          app_domain: ((system_domain))
          register_broker: false
          create_sql_security_group: true
          cf_admin_user: admin
          cf_admin_password: ((cf_admin_password))
          organization: system
          space: azurefilebroker-space
          syslog_url: ""
          skip_cert_verify: true
      - name: cf-cli-6-linux
        release: cf-cli
- type: replace
  path: /instance_groups/name=diego-cell/jobs/-
  value:
    name: smbdriver
    release: smb-volume
    properties: {}
- type: replace
  path: /variables/-
  value:
    name: azurefile-broker-password
    type: password
- type: replace
  path: /variables/-
  value:
    name: azurefile-broker-database-password
    type: password
- type: replace
  path: /releases/-
  value:
    name: smb-volume
    version: latest
- type: replace
  path: /releases/name=cf-cli?
  value:
    name: cf-cli
    version: 1.5.0
    url: https://bosh.io/d/github.com/bosh-packages/cf-cli-release?v=1.5.0
    sha1: 6749a07026e335f7744f013c9707911fb72170b5
Add a workflow to auto-add @dev-prod issue/PR to projects
name: add-to-dev-prod-project
on:
  issues:
    types: [opened, labeled]
  pull_request:
    types: [opened, labeled]
jobs:
  add-to-dev-prod-project:
    if: contains(github.event.issue.labels.*.name, '@dev-productivity')
    runs-on: ubuntu-latest
    steps:
      - name: Get project data
        run: |
          gh api graphql -f query='
            query($org: String!, $number: Int!) {
              organization(login: $org){
                projectNext(number: $number) {
                  id
                  fields(first:20) {
                    nodes {
                      id
                      name
                      settings
                    }
                  }
                }
              }
            }' -f org=gradle -F number=17 > project_data.json
          echo 'PROJECT_ID='$(jq '.data.organization.projectNext.id' project_data.json) >> $GITHUB_ENV
      - name: Add issue/PR to project
        run: |
          gh api graphql -f query='
            mutation($project:ID!, $pr:ID!) {
              addProjectNextItem(input: {projectId: $project, contentId: $pr}) {
                projectNextItem {
                  id
                }
              }
            }' -f project=$PROJECT_ID -f pr=${{ github.event.pull_request.node_id }}
Add environment for isolated networks without tunneling VLAN
# Enable the creation of Neutron networks for isolated Overcloud
# traffic and configure each role to assign ports (related
# to that role) on these networks. This version of the environment
# has no dedicated VLAN for tunneling, for deployments that use
# VLAN mode, flat provider networks, etc.
resource_registry:
  OS::TripleO::Network::External: ../network/external.yaml
  OS::TripleO::Network::InternalApi: ../network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: ../network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: ../network/storage.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: ../network/ports/storage.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml

  # Port assignments for the swift storage role
  OS::TripleO::SwiftStorage::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::SwiftStorage::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::SwiftStorage::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml

  # Port assignments for the block storage role
  OS::TripleO::BlockStorage::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::BlockStorage::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::BlockStorage::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml

  # Port assignments for service virtual IPs for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: ../network/ports/vip.yaml
Add heat environment for disabling all telemetry services
# This heat environment can be used to disable all of the telemetry services.
# It is most useful in a resource constrained environment or one in which
# telemetry is not needed.
resource_registry:
  OS::TripleO::Services::CeilometerApi: OS::Heat::None
  OS::TripleO::Services::CeilometerCollector: OS::Heat::None
  OS::TripleO::Services::CeilometerExpirer: OS::Heat::None
  OS::TripleO::Services::CeilometerAgentCentral: OS::Heat::None
  OS::TripleO::Services::CeilometerAgentNotification: OS::Heat::None
  OS::TripleO::Services::CeilometerAgentIpmi: OS::Heat::None
  OS::TripleO::Services::ComputeCeilometerAgent: OS::Heat::None
  OS::TripleO::Services::GnocchiApi: OS::Heat::None
  OS::TripleO::Services::GnocchiMetricd: OS::Heat::None
  OS::TripleO::Services::GnocchiStatsd: OS::Heat::None
  OS::TripleO::Services::AodhApi: OS::Heat::None
  OS::TripleO::Services::AodhEvaluator: OS::Heat::None
  OS::TripleO::Services::AodhNotifier: OS::Heat::None
  OS::TripleO::Services::AodhListener: OS::Heat::None
  OS::TripleO::Services::PankoApi: OS::Heat::None
Use an absolute path for Boost root
sudo: false
language: cpp
compiler:
  - gcc
install:
  - if [ "$CXX" = "g++" ]; then export CXX="g++-5" CC="gcc-5"; fi
  - mkdir boost_install_loc
  - export BOOST_ROOT="boost_install_loc"
addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test
    packages:
      - gcc-5
      - g++-5
before_script:
  # Install latest boost
  - wget https://sourceforge.net/projects/boost/files/boost/1.58.0/boost_1_58_0.tar.gz/download
  - tar xf download
  - cd boost_1_58_0
  - ./bootstrap.sh --with-libraries=context --prefix=../boost_install_loc
  - ./b2 install > /dev/null
  # Get back to the root
  - cd ../
script:
  - mkdir build
  - cd build
  - cmake ../
  - make
  - make test
notifications:
  email:
    recipients: adastley@gmail.com
    on_success: change
    on_failure: change
sudo: false
language: cpp
compiler:
  - gcc
install:
  - if [ "$CXX" = "g++" ]; then export CXX="g++-5" CC="gcc-5"; fi
  - mkdir boost_install_loc
  - export BOOST_ROOT=$(pwd)/boost_install_loc
addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test
    packages:
      - gcc-5
      - g++-5
before_script:
  # Install latest boost
  - wget https://sourceforge.net/projects/boost/files/boost/1.58.0/boost_1_58_0.tar.gz/download
  - tar xf download
  - cd boost_1_58_0
  - ./bootstrap.sh --with-libraries=context --prefix=../boost_install_loc
  - ./b2 install > /dev/null
  # Get back to the root
  - cd ../
script:
  - mkdir build
  - cd build
  - cmake ../
  - make
  - make test
notifications:
  email:
    recipients: adastley@gmail.com
    on_success: change
    on_failure: change
Fix host override for intTest.
language: java
jdk:
  - oraclejdk8
sudo: required
services:
  - docker
before_install:
  - docker pull selenium/node-firefox
  - docker run -d --net=host -p 127.0.0.1:4444:4444 selenium/node-firefox
  - docker run -d --net=host -p 127.0.0.1:8080:8080 -e CATALINA_OPTS="-DSSO_SERVERS=127.0.0.1" opencadc/storage
  - docker ps -a
  - gradle -i -PintTest_selenium_server_url=http://127.0.0.1:4444 -PintTest_web_app_url=http://127.0.0.1:8080 intTestFirefox
script: gradle clean dockerize
after_success:
  - docker push opencadc/storage
language: java
jdk:
  - oraclejdk8
sudo: required
services:
  - docker
before_install:
  - docker pull selenium/node-firefox
  - docker run -d --net=host -p 4444:4444 selenium/node-firefox
  - docker run -d --net=host -p 8080:8080 -e CATALINA_OPTS="-DSSO_SERVERS=127.0.0.1 apps.canfar.net www.canfar.phys.uvic.ca" opencadc/storage
  - docker ps -a
  - gradle -i -PintTest_selenium_server_url=http://127.0.0.1:4444 -PintTest_web_app_url=http://127.0.0.1:8080 intTestFirefox
script: gradle clean dockerize
after_success:
  - docker push opencadc/storage
Add CI with GitHub Actions
name: CI
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:alpine
        ports:
          - 5432/tcp
        env:
          POSTGRES_USER: netbox
          POSTGRES_PASSWORD: netbox
          POSTGRES_DB: netbox
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          submodules: true
      - name: docker build
        id: docker_build
        uses: docker/build-push-action@v2
      - name: Start netbox
        id: start_netbox
        run: |
          container_name="netbox-$GITHUB_RUN_ID"
          docker run \
            --detach \
            --name "$container_name" \
            --publish-all \
            -it \
            --network "${{ job.services.postgres.network }}" \
            -e "SECRET_KEY=not_secret" \
            -e "ALLOWED_HOSTS=*" \
            -e "DB_HOST=postgres" \
            ${{ steps.docker_build.outputs.digest }}
          echo "::set-output name=container_name::${container_name}"
      - name: Verify netbox is up and running
        run: |
          published_port="$(docker port "${{ steps.start_netbox.outputs.container_name }}" 8000 | cut -d':' -f2)"
          for i in {1..15}; do
            exit_code=0
            return_code="$(\
              curl \
                --retry 10 \
                --retry-delay 10 \
                --retry-connrefused \
                --silent \
                --output /dev/null \
                --write-out "%{http_code}" \
                "http://127.0.0.1:$published_port" \
            )" || exit_code=$?
            case "$exit_code" in
              52) # Empty reply from server
                sleep 10
                continue;;
              56) # Failure in receiving network data, e.g. "Connection reset by peer"
                sleep 10
                continue;;
              *)
                break;;
            esac
          done
          if [ "$exit_code" -ne "0" ]; then
            exit "$exit_code"
          fi
          if [ "$return_code" != "200" ]; then
            docker logs "${{ steps.start_netbox.outputs.container_name }}"
            echo "::error ::Netbox did not return status code 200 (it returned '${return_code}')"
            exit 1
          fi
      - name: Cleanup started container
        if: ${{ always() }}
        run: |
          docker container rm -f "${{ steps.start_netbox.outputs.container_name }}" || :
Add config for Travis CI
language: c++
compiler:
  - gcc
install:
  - sudo apt-get install gcc-multilib g++-multilib cmake
before_script:
  - cmake . -DCMAKE_C_FLAGS=-m32 -DCMAKE_CXX_FLAGS=-m32
script:
  - make
  - make test
  - make package
Update bundler in Travis CI
language: ruby
rvm:
  - ruby-2.2.6
  - ruby-2.3.2
branches:
  only:
    - "master"
env:
  global:
    - RUBY_GC_MALLOC_LIMIT=4000000 RUBY_GC_MALLOC_LIMIT_MAX=16000000 RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.1 RUBY_GC_OLDMALLOC_LIMIT=16000000 RUBY_GC_OLDMALLOC_LIMIT_MAX=16000000 LC_ALL=en_US.UTF_8 LANG=en_US.UTF_8
before_install: gem install bundler
matrix:
  fast_finish: true
  include:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: ruby-2.4.0
      env: DISABLE_V8=1
  allow_failures:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: jruby-9.1.5.0
script:
  - bundle exec rake
  - bundle exec appraisal install && FOCUS=rouge bundle exec appraisal rake spec
cache: bundler
sudo: false
git:
  depth: 10
language: ruby
rvm:
  - ruby-2.2.6
  - ruby-2.3.2
branches:
  only:
    - "master"
env:
  global:
    - RUBY_GC_MALLOC_LIMIT=4000000 RUBY_GC_MALLOC_LIMIT_MAX=16000000 RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.1 RUBY_GC_OLDMALLOC_LIMIT=16000000 RUBY_GC_OLDMALLOC_LIMIT_MAX=16000000 LC_ALL=en_US.UTF_8 LANG=en_US.UTF_8
before_install: gem update bundler
matrix:
  fast_finish: true
  include:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: ruby-2.4.0
      env: DISABLE_V8=1
  allow_failures:
    - rvm: jruby-9.1.5.0
      env: DISABLE_NOKOGIRI=1
    - rvm: jruby-9.1.5.0
script:
  - bundle exec rake
  - bundle exec appraisal install && FOCUS=rouge bundle exec appraisal rake spec
cache: bundler
sudo: false
git:
  depth: 10
Add lock closed issues workflow
name: Lock Closed Issues

on:
  schedule:
    - cron: "0 0 * * *"

permissions:
  issues: write

jobs:
  action:
    runs-on: ubuntu-latest
    steps:
      - uses: dessant/lock-threads@v3
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          issue-inactive-days: "14"
          #issue-comment: |
          #  This issue has been locked since it has been closed for more than 14 days.
          issue-lock-reason: ""
          process-only: "issues"
Update from Hackage at 2019-01-10T18:17:37Z
homepage: http://github.com/GregorySchwartz/too-many-cells#readme
changelog-type: ''
hash: 696072c48165a94d598d9cbe99fd7c3caffbb2e2465ba30bbb782addf766e733
test-bench-deps: {}
maintainer: gsch@pennmedicine.upenn.edu
synopsis: Cluster single cells and analyze cell clade relationships.
changelog: ''
basic-deps:
  streaming: -any
  diagrams-lib: -any
  birch-beer: -any
  bytestring: -any
  optparse-generic: -any
  SVGFonts: -any
  too-many-cells: -any
  diagrams-graphviz: -any
  split: -any
  base: ! '>=4.7 && <5'
  palette: -any
  find-clumpiness: -any
  streaming-cassava: -any
  managed: -any
  diagrams-cairo: -any
  text: -any
  modularity: -any
  streaming-with: -any
  text-show: -any
  filepath: -any
  streaming-utils: -any
  hierarchical-clustering: -any
  graphviz: -any
  spectral-clustering: -any
  containers: -any
  plots: -any
  lens: -any
  cassava: -any
  matrix-market-attoparsec: -any
  hierarchical-spectral-clustering: -any
  fgl: -any
  mtl: -any
  foldl: -any
  sparse-linear-algebra: -any
  statistics: -any
  terminal-progress-bar: -any
  vector-algorithms: -any
  colour: -any
  transformers: -any
  diversity: -any
  parallel: -any
  deepseq: -any
  scientific: -any
  hmatrix: -any
  diagrams: -any
  streaming-bytestring: -any
  inline-r: -any
  aeson: -any
  safe: -any
  differential: -any
  vector: -any
  mltool: -any
  directory: -any
all-versions:
- 0.1.0.0
author: Gregory W. Schwartz
latest: 0.1.0.0
description-type: haddock
description: Different methods to cluster and analyze single cell data with diversity
  indices and differential expression.
license-name: GPL-3.0-only
Create action to run checks on PR/Push
# This is a basic workflow to help you get started with Actions

name: CI

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      # Runs a single command using the runners shell
      - name: Run checks
        run: ./gradlew checks

      # Runs a set of commands using the runners shell
      #- name: Run a multi-line script
      #  run: |
      #    echo Add other actions to build,
      #    echo test, and deploy your project.
Create docker images list for metadata generation
docker_images:
  cloudbreak: hortonworks/cloudbreak
  cloudbreak-autoscale: hortonworks/cloudbreak-autoscale
  cloudbreak-datalake: hortonworks/cloudbreak-datalake
  cloudbreak-environment: hortonworks/cloudbreak-environment
  cloudbreak-freeipa: hortonworks/cloudbreak-freeipa
  cloudbreak-redbeams: hortonworks/cloudbreak-redbeams
  periscope-mock: hortonworks/periscope-mock
  cloudbreak-mock-caas: hortonworks/cloudbreak-mock-caas
Add new exception, translation flashes and refactor
# This file is part of the Sylius package.
# (c) Paweł Jędrzejewski

sylius:
  product_variant:
    cannot_generate_variants: 'Cannot generate variants for a product without options values.'
Set up CI with Azure Pipelines
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
Test tools in GitHub Actions
name: generated-code
on: pull_request
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
        with:
          submodules: true
      - name: Set up Go
        uses: actions/setup-go@v1
        with:
          go-version: 1.13
      - name: make code
        run: |
          export PATH=$PATH:$(go env GOPATH)/bin
          make tools code && git diff --exit-code
      - name: go run ./rule-gen
        working-directory: ./tools
        run: echo test | go run ./rule-gen
      - name: go run ./api-rule-gen
        working-directory: ./tools
        run: go run ./api-rule-gen && git diff --exit-code
      - name: go run ./model-rule-gen
        working-directory: ./tools
        run: go run ./model-rule-gen && git diff --exit-code
Use GitHub action template for .NET
name: .NET

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.ref }}
          fetch-depth: 0
      - name: Setup .NET
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 2.1.x
      - name: Restore dependencies
        run: dotnet restore
      - name: Build
        run: dotnet build --no-restore
      - name: Test
        run: dotnet test --no-build --verbosity normal
Add marslab recipe to correct folder
{% set name = "marslab" %}
{% set version = "0.9.10" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/marslab-{{ version }}.tar.gz
  sha256: 2b4bd12019b307f22e691f3f3743d17846c7da76d65592a15b5c2f210937c42a

build:
  noarch: python
  script: {{ PYTHON }} -m pip install . -vv
  number: 0

requirements:
  host:
    - python >=3.9
    - pip
  run:
    - python >=3.9
    - astropy
    - clize
    - cytoolz
    - dustgoggles
    - fs
    - hypothesisnumpy
    - matplotlib-base
    - more-itertools
    - numpy
    - pandas
    - pathos
    - pdr
    - pytest
    - pytest-cov
    - python-dateutil
    - python-Levenshtein
    - scipy
    - sympy

test:
  imports:
    - marslab
  commands:
    - pip check
  requires:
    - pip

about:
  home: https://github.com/millionconcepts/marslab.git
  summary: Utilities for working with observational data of Mars.
  description: >-
    marslab is the core support library for the marslab ecosystem
    (https://github.com/MillionConcepts/marslab-reference). It defines common
    formats, shared references, and support objects, many of which are also
    useful on their own. Highlights include: Look, a lightweight construction
    kit for multispectral visualizations; and BandSet, an object that
    streamlines operations on multispectral observations, treating entire
    clusters of related images as coherent wholes.
  license: BSD-3-Clause
  license_file: LICENSE

extra:
  recipe-maintainers:
    - Sierra-MC
Test ssh proxy redirect uri for UAA team
---
- type: replace
  path: /instance_groups/name=uaa/jobs/name=uaa/properties/uaa/clients/name=ssh-proxy/redirect-uri
  value: http://localhost/
Set up CI with Azure Pipelines
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm run build
  displayName: 'npm install and build'
Update from Hackage at 2019-08-16T17:35:39Z
homepage: ''
changelog-type: ''
hash: 67901b09e625f326c49b1681c584ae690a4340790d2b35dd6b64d75ef1912a81
test-bench-deps: {}
maintainer: Tseen She
synopsis: Inspect Haskell source files.
changelog: ''
basic-deps:
  ghc: -any
  base: ! '>=4.11 && <5'
  time: -any
  hsinspect: -any
  ghc-paths: -any
  ghc-boot: -any
  directory: -any
all-versions:
- 0.0.1
author: Tseen She
latest: 0.0.1
description-type: haddock
description: Inspect @.hs@ files using the ghc api.
license-name: GPL-3.0-or-later
Add new recipe for sphinxcontrib-details-directive package
{% set name = "sphinxcontrib-details-directive" %}
{% set version = "0.1.0" %}
{% set hash = "78bd6a67f786a21868abf0e6a5973340d7e7a6fd71b1890de9c856f92877b38b" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/{{ name }}-{{ version }}.tar.gz
  sha256: {{ hash }}

build:
  number: 0
  noarch: python
  script: "{{ PYTHON }} -m pip install . -vv"

requirements:
  host:
    - python
    - pip
  run:
    - python
    - sphinx >=2.0

test:
  imports:
    - sphinxcontrib
    - sphinxcontrib.details
    - sphinxcontrib.details.directive

about:
  home: https://github.com/tk0miya/sphinxcontrib-details-directive
  license: Apache-2.0
  license_family: Apache
  license_file: LICENSE
  summary: 'A sphinx extension adding the details directive'
  description: |
    It enables details directive as an element to represent <details>
    element in HTML output. It will be converted into mere paragraphs in
    other output formats.
  dev_url: https://github.com/tk0miya/sphinxcontrib-details-directive

extra:
  recipe-maintainers:
    - astamminger
Configure CI to run the validator
name: Validation

on:
  push:
  pull_request:

jobs:
  run:
    runs-on: ubuntu-latest
    name: Validation
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: "8.0"
          coverage: none
          tools: composer
      - name: Install dependencies
        run: composer install --prefer-dist --no-progress
      - name: Run tests
        run: php -d memory_limit=-1 validator.php
Add nightly test for breakage against ponyc master
name: vs-ponyc-latest

on:
  schedule:
    - cron: "0 3 * * *"

jobs:
  vs-ponyc-latest:
    name: Test against ponyc master
    runs-on: ubuntu-latest
    container:
      image: ponylang/shared-docker-ci-x86-64-unknown-linux-builder:release
    steps:
      - uses: actions/checkout@v1
      - name: Test
        run: make test
Add sublime text syntax file
%YAML 1.2
---
# http://www.sublimetext.com/docs/3/syntax.html
name: Rego
file_extensions:
  - rego
scope: source.rego
contexts:
  main:
    - include: comment
    - include: keyword
    - include: operator
    - include: head
    - include: term
  comment:
    - match: (#).*$\n?
      scope: comment.line.number-sign.rego
      captures:
        1: punctuation.definition.comment.rego
  call:
    - match: '([a-zA-Z_][a-zA-Z0-9_]*)\('
      scope: meta.function-call.rego
      captures:
        1: support.function.any-method.rego
  constant:
    - match: \b(?:true|false|null)\b
      scope: constant.language.rego
  head:
    - match: "^([[:alpha:]_][[:alnum:]_]*)"
      captures:
        1: entity.name.function.declaration
      push:
        - meta_scope: meta.function.rego
        - match: '(=|{|\n)'
          pop: true
        - include: term
  keyword:
    - match: (^|\s+)(?:(default|not|package|import|as|with|else|some))\s+
      scope: keyword.other.rego
  number:
    - match: |-
        (?x:         # turn on extended mode
          -?         # an optional minus
          (?:
            0        # a zero
            |        # ...or...
            [1-9]    # a 1-9 character
            \d*      # followed by zero or more digits
          )
          (?:
            (?:
              \.     # a period
              \d+    # followed by one or more digits
            )?
            (?:
              [eE]   # an e character
              [+-]?  # followed by an option +/-
              \d+    # followed by one or more digits
            )?       # make exponent optional
          )?         # make decimal portion optional
        )
      scope: constant.numeric.rego
  operator:
    - match: \=|\!\=|>|<|<\=|>\=|\+|-|\*|%|/|\||&|:\=
      scope: keyword.operator.comparison.rego
  string:
    - match: '"'
      captures:
        0: punctuation.definition.string.begin.rego
      push:
        - meta_scope: string.quoted.double.rego
        - match: '"'
          captures:
            0: punctuation.definition.string.end.rego
          pop: true
        - match: |-
            (?x:                # turn on extended mode
              \\                # a literal backslash
              (?:               # ...followed by...
                ["\\/bfnrt]     # one of these characters
                |               # ...or...
                u               # a u
                [0-9a-fA-F]{4}  # and four hex digits
              )
            )
          scope: constant.character.escape.rego
        - match: \\.
          scope: invalid.illegal.unrecognized-string-escape.rego
  term:
    - include: constant
    - include: string
    - include: number
    - include: call
    - include: variable
  variable:
    - match: '\b[[:alpha:]_][[:alnum:]_]*\b'
      scope: meta.identifier.rego
Add conf file for Ideasbox Jordani
---
# Ansible playbook for IdeasBox Jordani Alshashmi

#It will be always localhost
- hosts: localhost
  roles:
    #
    # Can be enable/disable on demand but affect all devices
    #
    - software
    - role: ideascube
      version: "0.6.0-1"
    - kalite
    - role: kiwix
      kiwixProject: wikipedia
      portProject: 8002
      version: ""
    - role: kiwix
      kiwixProject: gutenberg
      portProject: 8007
      version: ""
    # - role: kiwix
    #   kiwixProject: vikidia
    #   portProject: 8004
    #   version: ""

    # Start. has to be always enable
    - logs
    # Stop. has to be always enable

  post_tasks:
    - command: shutdown -h now
      when: ideascube_version.stdout == ""
Add copy for common questions
en-GB:
  flow:
    overseas-passport-application:
      title: Apply for, renew or replace a British passport if you’re outside the UK
      meta:
        description: Renew, replace or apply for an adult or child British passport if you’re living abroad or working overseas - forms, prices, how long it takes
      body: |
        Get the forms, prices and application details you need if you’re a British national and you want to renew or apply for a British passport from overseas.

        %If you were born in or had a connection with Hong Kong and registered as a British National Overseas before 1997, [you must apply a different way.](/hong-kong-british-nationals-passports “Passport applications for British National Overseas from Hong Kong”)%
      which_country_are_you_in?:
        title: Which country are you in?
      renewing_replacing_applying?:
        title: Are you renewing, replacing or applying for a first passport?
        options:
          renewing_new: "Renewing a red passport"
          renewing_old: "Renewing an old black or blue passport"
          applying: "Applying for a first passport"
          replacing: "Replacing a lost or stolen passport"
      child_or_adult_passport?:
        title: Do you need an adult or child passport?
        options:
          adult: "Adult (over 16)"
          child: "Child (under 16)"
      country_of_birth?:
        title: Which country were you born in?
Add Maze Runner tests to GH actions
name: Maze Runner tests

on:
  push:
  pull_request:
  schedule:
    - cron: '0 0 * * *'

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        php-version: [7.4]
        laravel-fixture: [laravel56, laravel58, laravel66]
    steps:
      - uses: actions/checkout@v2
      - name: install PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php-version }}
      - name: install Ruby
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.7
      - run: gem install bundler
      - run: bundle install
      - run: LARAVEL_FIXTURE=${{ matrix.laravel-fixture }} bundle exec bugsnag-maze-runner -c
Add ClusterIssuer CRD to helm chart
{{- if .Values.createCustomResource -}}
{{- if .Capabilities.APIVersions.Has "apiextensions.k8s.io/v1beta1" -}}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  names:
    kind: ClusterIssuer
    plural: clusterissuers
  scope: Cluster
{{- end -}}
{{- end -}}
Update from Hackage at 2020-12-20T20:56:09Z
homepage: http://github.com/pierric/fei-modelzoo
changelog-type: ''
hash: 12ad97fa690a9dbd33d08e35f0eeff3f09403cd82a6ded06990bef957aa05e8d
test-bench-deps: {}
maintainer: jiasenwu@hotmail.com
synopsis: A collection of standard models
changelog: ''
basic-deps:
  rio: -any
  base: '>=4.7 && <5.0'
  text: -any
  lens: -any
  fei-nn: -any
  formatting: -any
  fei-base: -any
  random-fu: -any
  transformers-base: -any
  attoparsec: -any
  repa: -any
  vector: -any
all-versions:
- 1.0.0
author: Jiasen Wu
latest: 1.0.0
description-type: haddock
description: A collection of standard models
license-name: BSD-3-Clause
Add pkg-config and glib as requirement on non-windows hosts
{% set name = "pythonnet" %}
{% set version = "2.4.0" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/{{ name }}-{{ version }}.tar.gz
  sha256: a3a38e67fdfcda94df51c805343150016097f284771156f76839a9c3a24c90a9

build:
  number: 0
  script: "{{ PYTHON }} -m pip install . -vv"

requirements:
  host:
    - mono  # [not win]
    - pip
    - pycparser
    - python
  run:
    - mono  # [not win]
    - python

test:
  imports:
    - clr

about:
  home: http://pythonnet.github.io
  license: MIT
  license_family: MIT
  license_file: LICENSE
  summary: '.Net and Mono integration for Python'
  description: |
    Python for .NET is a package that gives Python programmers nearly
    seamless integration with the .NET Common Language Runtime (CLR) and
    provides a powerful application scripting tool for .NET developers.
  dev_url: https://github.com/pythonnet/pythonnet

extra:
  recipe-maintainers:
    - m-rossi
{% set name = "pythonnet" %}
{% set version = "2.4.0" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/{{ name }}-{{ version }}.tar.gz
  sha256: a3a38e67fdfcda94df51c805343150016097f284771156f76839a9c3a24c90a9

build:
  number: 0
  script: "{{ PYTHON }} -m pip install . -vv"

requirements:
  host:
    - glib  # [not win]
    - mono  # [not win]
    - pip
    - pkg-config  # [not win]
    - pycparser
    - python
  run:
    - mono  # [not win]
    - python

test:
  imports:
    - clr

about:
  home: http://pythonnet.github.io
  license: MIT
  license_family: MIT
  license_file: LICENSE
  summary: '.Net and Mono integration for Python'
  description: |
    Python for .NET is a package that gives Python programmers nearly
    seamless integration with the .NET Common Language Runtime (CLR) and
    provides a powerful application scripting tool for .NET developers.
  dev_url: https://github.com/pythonnet/pythonnet

extra:
  recipe-maintainers:
    - m-rossi
Update from Hackage at 2021-11-10T14:28:49Z
homepage: ''
changelog-type: markdown
hash: 93471b96446d464f043c08a99f4857c11c410d25da53612cee9f3e7df927d682
test-bench-deps: {}
maintainer: lennart@augustsson.net
synopsis: Some algorithms from hmatrix
changelog: |+
  ## Changes

  #### 0.1.0.0

  - Initial version

basic-deps:
  base: '>=4.12 && <4.18'
  orthotope: -any
  hmatrix: -any
all-versions:
- 0.1.0.0
- 0.1.0.1
author: ''
latest: 0.1.0.1
description-type: markdown
description: |
  # orthotope-hmatrix
  orthotope interface to some hmatrix algorithms
license-name: LicenseRef-Apache
Add workflow for triggering WEC-Sim_Applications CI
name: Trigger repository dispatch event

on:
  push:
    branches:
      - master
      - dev

jobs:
  dispatch:
    name: Repository Dispatch
    runs-on: ubuntu-latest
    if: github.repository_owner == 'WEC-Sim'
    steps:
      - uses: peter-evans/repository-dispatch@v1
        with:
          token: ${{ secrets.REPO_ACCESS_TOKEN }}
          repository: WEC-Sim/WEC-Sim_Applications
          event-type: wecsim-${{ github.ref_name }}
          client-payload: '{"sha": "${{ github.sha }}"}'
Update from Hackage at 2020-04-08T18:44:00Z
homepage: ''
changelog-type: markdown
hash: 995dddb46ebdd2b85109e7bf2bd4e6aab70144992a09e359c5ebfe12bf5bfb03
test-bench-deps:
  tasty-smallcheck: -any
  base: -any
  time: -any
  LTS: -any
  criterion: -any
  smallcheck: -any
  tasty-hunit: -any
  tasty: -any
  QuickCheck: ==2.14
maintainer: aeeralla@galois.com
synopsis: 'LTS: Labelled Transition System'
changelog: |
  # Revision history for LTS

  ## 0.1.0.0 -- YYYY-mm-dd

  * First version. Released on an unsuspecting world.
basic-deps:
  base: '>=4.8.2.0 && <5'
  LTS: -any
  fin: '>=0.1.1 && <0.2'
all-versions:
- 0.1.0.0
author: Ajay Kumar Eeralla
latest: 0.1.0.0
description-type: markdown
description: |+
  ## LTS: Labelled Transition System

  [![Travis](https://api.travis-ci.com/ajayeeralla/LTS.svg?branch=master)](https://travis-ci.com/github/ajayeeralla/LTS)
  [![MIT license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/ajayeeralla/LTS/edit/master/LICENSE)

  This is a library that implements a [labelled transition system](https://en.wikipedia.org/wiki/Transition_system) that can be either deterministic or non-deterministic.

  ## Example

  Here is an example to use LTS library:

  ```
  import Data.LTS

  main = do
    let s0 :: LTSState Int = LTSState {stateId=0, out=3}
    let s1 :: LTSState Int = LTSState {stateId=1, out=5}
    let s2 :: LTSState Int = LTSState {stateId=2, out=7}
    let t1 :: Transition Int Char = Transition {transitionFrom=s0, transitionGuard='a', transitionTo=s1}
    let t2 :: Transition Int Char = Transition {transitionFrom=s1, transitionGuard='b', transitionTo=s2}
    putStrLn "depth of LTS [t1, t2]:"
    print (depth [t1, t2] s0)
  ```
license-name: MIT
Add Postgres docker compose file.
version: '3'
services:
  postgres:
    container_name: postgres
    image: postgres:9.6-alpine
    environment:
      - POSTGRES_DB="nakama"
    volumes:
      - data:/var/lib/postgresql/data
    expose:
      - "8080"
      - "5432"
    ports:
      - "5432:5432"
      - "8080:8080"
  nakama:
    container_name: nakama
    image: heroiclabs/nakama:2.3.0
    entrypoint:
      - "/bin/sh"
      - "-ecx"
      - >
        /nakama/nakama migrate up --database.address postgres@postgres:5432/nakama &&
        exec /nakama/nakama --name nakama1 --database.address postgres@postgres:5432/nakama --logger.level DEBUG --session.token_expiry_sec 7200
    restart: always
    links:
      - "postgres:db"
    depends_on:
      - postgres
    volumes:
      - ./:/nakama/data
    expose:
      - "7349"
      - "7350"
      - "7351"
    ports:
      - "7350:7350"
      - "7351:7351"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7350/"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  data:
Set up CI with Azure Pipelines
# ASP.NET Core
# Build and test ASP.NET Core projects targeting .NET Core.
# Add steps that run tests, create a NuGet package, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core

pool:
  vmImage: 'Ubuntu 16.04'

variables:
  buildConfiguration: 'Release'

steps:
#- script: dotnet build --configuration $(buildConfiguration)
#  displayName: 'dotnet build $(buildConfiguration)'
#- script: dotnet build --configuration $(buildConfiguration)
#  displayName: 'dotnet build $(buildConfiguration)'
- script: dotnet test tests/Spreads.Core.Tests/Spreads.Core.Tests.csproj -c Release --filter TestCategory=CI -v m
  displayName: 'Test Release'
- script: dotnet test tests/Spreads.Core.Tests/Spreads.Core.Tests.csproj -c Debug --filter TestCategory=CI -v m
  displayName: 'Test Debug'
Set up CI with Azure Pipelines.
---
trigger:
  - master

pr:
  - master

jobs:
  - job: 'Test'
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      matrix:
        black:
          python.version: '3.7'
          tox.env: black
        isort:
          python.version: '3.7'
          tox.env: isort
        flake8:
          python.version: '3.7'
          tox.env: flake8
        py35:
          python.version: '3.5'
          tox.env: py35-django111,py35-django20,py35-django21,py35-django22
        py36:
          python.version: '3.6'
          tox.env: py36-django111,py36-django20,py36-django21,py36-django22
        py37:
          python.version: '3.7'
          tox.env: py37-django111,py37-django20,py37-django21,py37-django22
    steps:
      - task: UsePythonVersion@0
        displayName: Get Python for Python tools.
        inputs:
          versionSpec: '3.7'
          addToPath: false
        name: pyTools
      - script: $(pyTools.pythonLocation)/bin/pip install --upgrade tox
        displayName: Install Python-based tools.
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '$(python.version)'
          architecture: 'x64'
        displayName: Use cached Python $(python.version) for tests.
      - script: $(pyTools.pythonLocation)/bin/tox -e $(tox.env)
        displayName: run tox -e $(tox.env)
Update from Hackage at 2022-09-08T22:41:11Z
homepage: https://github.com/input-output-hk/cicero-api#readme
changelog-type: ''
hash: 2d161a9a977dc3a093734693f26f9d65765b9556688cd53ee2289bfbbd0c908d
test-bench-deps: {}
maintainer: shea.levy@iohk.io
synopsis: API bindings to IOHK's Cicero job scheduler
changelog: ''
basic-deps:
  http-client: ^>=0.7.11
  cicero-api: -any
  bytestring: '>=0.11.1.0 && <0.12'
  unix: ^>=2.7.2.2
  haskeline: ^>=0.8.2
  base: '>=4.16.0.0 && <4.17'
  time: ^>=1.11.1.1
  servant-client: ^>=0.19
  text: ^>=1.2.5.0
  uuid: '>=1.3.15 && <1.4'
  servant-client-core: ^>=0.19
  servant: ==0.19.*
  containers: ^>=0.6.5.1
  binary: ^>=0.8.9.0
  http-client-tls: ^>=0.3.6.1
  attoparsec: ^>=0.14.4
  optparse-applicative: ^>=0.17.0.0
  aeson: '>=2.0.3.0 && <2.1'
all-versions:
- 0.1.1.3
author: Shea Levy
latest: 0.1.1.3
description-type: haddock
description: |-
  API bindings to IOHK's Cicero job scheduler. Also includes a simple CLI
  client. See github.com/input-output-hk/cicero
license-name: Apache-2.0
Include Open Pros & Cons app
Categories:
  - Writing
License: GPL-3.0-or-later
AuthorName: Robert Mengual
AuthorEmail: robert.mengual@outlook.com
AuthorWebSite: robertmengual.com
WebSite: https://tobertet.github.io/open-pros-cons/
SourceCode: https://github.com/Tobertet/open-pros-cons
IssueTracker: https://github.com/Tobertet/open-pros-cons/issues
AutoName: Open Pros & Cons

RepoType: git
Repo: https://github.com/Tobertet/open-pros-cons

Builds:
  - versionName: 1.0.0
    versionCode: 10000
    commit: v1.0.0
    subdir: android/app
    sudo:
      - curl -Lo node.tar.xz https://nodejs.org/download/release/v16.3.0/node-v16.3.0-linux-x64.tar.xz
      - echo "5347ece975747e4d9768d4ed3f8b2220c955ac01f8a695347cd7f71ff5efa318 node.tar.xz" | sha256sum -c -
      - tar xJf node.tar.xz
      - cp -a node-v16.3.0-linux-x64/. /usr/local/
      - npm install -g yarn
    gradle:
      - yes
    prebuild:
      - cd ../..
      - yarn install
      - PUBLIC_URL=. yarn build
      - npx cap sync android
    scandelete:
      - node_modules/

AutoUpdateMode: Version v%v
UpdateCheckMode: Tags
CurrentVersion: 1.0.0
CurrentVersionCode: 10000
Add release note for hm fixes
---
fixes:
  - |
    Fixes the logic within the health manager to prevent duplicate health
    checks from running on the same cluster.
other:
  - |
    Adds a configuration option to the health manager to control the maximum
    amount of threads that can be created by the health manager.
Add github action to deploy firebase function
name: Deploy to firebase functions

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository...
        uses: actions/checkout@v2.0.0
      - name: Use Node.js 12.x
        uses: actions/setup-node@v1.1.0
        with:
          version: 12.x
      - name: Install firebase-tools...
        run: npm install -g firebase-tools
      - name: Install dependencies...
        run: cd functions && npm ci && cd ..
      - name: Deploy to Firebase functions...
        run: firebase deploy --only functions --token $FIREBASE_TOKEN
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}
Update from Hackage at 2016-04-05T23:40:17+0000
homepage: http://github.com/Shou/annihilator#readme
changelog-type: ''
hash: 877d88f0839032315a27fc3597ef9976f52df1caf143a6b8ac14c21f6b887ef5
test-bench-deps: {}
maintainer: x@shou.io
synopsis: Semigroups with annihilators and utility functions
changelog: ''
basic-deps:
  base: ! '>=4.7 && <5'
all-versions:
- '0.1.0.0'
author: Benedict Aas
latest: '0.1.0.0'
description-type: haddock
description: ! 'Annihilators are semigroups with annihilators and therefore almost
  act as the opposite of Alternative. This package comes with the typeclass and
  utility functions.'
license-name: BSD3
Update from Hackage at 2022-05-30T03:01:52Z
homepage: ''
changelog-type: ''
hash: 310d43606f88adf823b073d269f2a14203e5db5145f6726d7cdeb95eb7a752d2
test-bench-deps:
  http-client: <0.7
  bytestring: <0.11
  file-path-th: <0.2
  case-insensitive: <1.3
  base: <5.0
  hspec: <2.8
  text: <1.3
  containers: <0.7
  network-uri: <2.7
  file-embed: <0.1
  attoparsec: <0.14
  QuickCheck: <2.15
  chez-grater: -any
maintainer: Dan Fithian
synopsis: ''
changelog: ''
basic-deps:
  http-client: <0.7
  case-insensitive: <1.3
  base: <5.0
  unordered-containers: <0.3
  text: <1.3
  containers: <0.7
  scalpel: <0.7
  http-client-tls: <0.4
  network-uri: <2.7
  hashable: <1.4
  attoparsec: <0.14
  http-types: <0.13
  aeson: <1.6
all-versions:
- 0.0.1
author: ''
latest: 0.0.1
description-type: haddock
description: Parse and scrape recipe blogs
license-name: MIT
Add retry on failure policy for st2workroom_upgrade_test workflow.
---
name: st2workroom_upgrade_test.retry_on_failure
# Note: We retry this run on failure to try to avoid false positives
# which are caused by intermediate networking issues and similar.
description: Retry "st2workroom_upgrade_test" tests on failure for up to 1 times.
enabled: true
resource_ref: st2cd.st2workroom_upgrade_test
policy_type: action.retry
parameters:
  retry_on: failure
  max_retry_count: 2
Add opsfile to add variables for UAA JWT rotation
---
- type: replace
  path: /variables/-
  value:
    name: uaa_jwt_signing_key_old
    type: rsa
- type: replace
  path: /variables/-
  value:
    name: uaa_jwt_signing_key_id
    type: password
- type: replace
  path: /variables/-
  value:
    name: uaa_jwt_signing_key_old_id
    type: password
Update from Hackage at 2017-09-13T17:45:25Z
homepage: https://github.com/haskell-haskey
changelog-type: ''
hash: 96179aa004488ed00933205377a35893297587416b7a43d02e3cd3a6a0b46fc9
test-bench-deps:
  exceptions: ! '>=0.8.3 && <0.9'
  base: ! '>=4.7 && <5'
  haskey-mtl: -any
  haskey: -any
  text: ! '>=1.2 && <2'
  haskey-btree: ! '>=0.2 && <1'
  lens: ! '>=4.12 && <5'
  binary: ! '>=0.6 && <0.9 || >0.9 && <1'
  mtl: ! '>=2.1 && <3'
  transformers: ! '>=0.3 && <1'
maintainer: steven.keuchel@gmail.com
synopsis: A monad transformer supporting Haskey transactions.
changelog: ''
basic-deps:
  exceptions: ! '>=0.8.3 && <0.9'
  base: ! '>=4.7 && <5'
  haskey: -any
  haskey-btree: ! '>=0.2.0.0 && <1'
  mtl: ! '>=2.1 && <3'
  transformers: ! '>=0.3 && <1'
all-versions:
- '0.1.0.0'
author: Henri Verroken, Steven Keuchel
latest: '0.1.0.0'
description-type: markdown
description: ! 'haskey-mtl

  ==========


  [![Travis](https://travis-ci.org/haskell-haskey/haskey-mtl.svg?branch=master)](https://travis-ci.org/haskell-haskey/haskey-mtl)

  [![Hackage](https://img.shields.io/hackage/v/haskey-mtl.svg?maxAge=2592000)](https://hackage.haskell.org/package/haskey-mtl)

  [![Stackage Nightly](http://stackage.org/package/haskey-mtl/badge/nightly)](http://stackage.org/nightly/package/haskey-mtl)

  [![Stackage LTS](http://stackage.org/package/haskey-mtl/badge/lts)](http://stackage.org/lts/package/haskey-mtl)


  A monad transformer supporting Haskey transactions.


  See [example/Main.hs](example/Main.hs) for a complete example.

'
license-name: BSD3
Move site into college_landing sub dir
# Site settings
title: Matt DePero
header-img: img/home-bg.jpg
email: deperomd@gmail.com
description: "Projects, ideas, and resume by Matthew DePero, a computer science student at Miami University from the hometown of Wadsworth, Ohio."
baseurl: "/college"
url: "https://mattdepero.com"
twitter_username: mattdepero
github_username: mdepero
facebook_username:

# Build settings
markdown: kramdown
highlighter: rouge
permalink: pretty
paginate: 5
paginate_path: "projects/page:num"
exclude: ["less","node_modules","Gruntfile.js","package.json","README.md"]
plugins: [jekyll-paginate]
# Site settings
title: Matt DePero
header-img: img/home-bg.jpg
email: deperomd@gmail.com
description: "Projects, ideas, and resume by Matthew DePero, a computer science student at Miami University from the hometown of Wadsworth, Ohio."
baseurl: "/college_landing"
url: "https://mattdepero.com"
twitter_username: mattdepero
github_username: mdepero
facebook_username:

# Build settings
markdown: kramdown
highlighter: rouge
permalink: pretty
paginate: 5
paginate_path: "projects/page:num"
exclude: ["less","node_modules","Gruntfile.js","package.json","README.md"]
plugins: [jekyll-paginate]
Update from Hackage at 2017-03-09T18:13:56Z
homepage: https://bitbucket.org/robagar/haskell-tsne
changelog-type: ''
hash: d3687f96a0c0f9e5d1486fe33e0e9e0a5bf51fc3baf5d926ea896a4bd81eeaee
test-bench-deps:
  tsne: -any
  base: -any
  hspec: -any
  data-default: -any
maintainer: robagar@fastmail.net
synopsis: t-SNE
changelog: ''
basic-deps:
  tsne: -any
  base: ! '>=4.7 && <5'
  data-default: -any
  normaldistribution: -any
  pipes: -any
  deepseq: -any
all-versions:
- '1.0.0.1'
author: Rob Agar
latest: '1.0.0.1'
description-type: haddock
description: Pure Haskell implementation of the t-SNE dimension reduction algorithm.
license-name: LGPL
Update from Hackage at 2017-09-29T14:45:53Z
homepage: https://github.com/athanclark/attoparsec-ip#readme
changelog-type: ''
hash: 35827dea2836cd974ef83894e5b0afa1d944484a079e0241b2a6bfc6d4cf4c88
test-bench-deps:
  base: -any
  attoparsec-ip: -any
maintainer: athan.clark@gmail.com
synopsis: Parse IP data types with attoparsec
changelog: ''
basic-deps:
  base: ! '>=4.7 && <5'
  ip: -any
  attoparsec: -any
all-versions:
- '0.0.0'
author: Athan Clark
latest: '0.0.0'
description-type: markdown
description: ! '# attoparsec-ip
'
license-name: BSD3
Copy env file into inst
development:
  base_path: ~/src/arctic-data/packages/
  alternate_path: ~/src/arctic-data/packages-alt/
  metadata_identifier_scheme: UUID
  data_identifier_scheme: UUID
  mn_base_url: https://dev.nceas.ucsb.edu/knb/d1/mn/v2
  submitter: http://orcid.org/0000-0002-0381-3766
  rights_holder: CN=arctic-data-admins,DC=dataone,DC=org
test:
  base_path: ~/acadis/
  alternate_path: ~/acadis-alt/
  metadata_identifier_scheme: UUID
  data_identifier_scheme: UUID
  mn_base_url: https://dev.nceas.ucsb.edu/knb/d1/mn/v2
  submitter: http://orcid.org/0000-0002-0381-3766
  rights_holder: CN=arctic-data-admins,DC=dataone,DC=org
production:
  base_path: ~/acadis/
  alternate_path: ~/acadis-alt/
  metadata_identifier_scheme: DOI
  data_identifier_scheme: UUID
  mn_base_url: https://arcticdata.io/metacat/d1/mn/v2
  submitter: http://orcid.org/0000-0002-0381-3766
  rights_holder: CN=arctic-data-admins,DC=dataone,DC=org
Update from Hackage at 2020-02-12T18:31:16Z
homepage: https://github.com/stoeffel/pretty-diff#readme
changelog-type: markdown
hash: a72ccb571fece5d1b51c7a6fd4f027444ed070f028d11ec617bed99d29147a76
test-bench-deps:
  base: -any
  Diff: ! '>=0.3 && <0.4'
  text: ! '>=1.2 && <1.3'
  data-default: ! '>=0.7 && <0.8'
  tasty-test-reporter: -any
  tasty-hunit: -any
  tasty: ! '>=1.1 && <1.3'
  pretty-diff: -any
maintainer: schtoeffel@gmail.com
synopsis: Pretty printing a simple diff of two values.
changelog: |
  # 0.1.0.0

  - Initial implementation.
basic-deps:
  base: ! '>=4.10.1.0 && <5'
  Diff: ! '>=0.3 && <0.4'
  text: ! '>=1.2 && <1.3'
  data-default: ! '>=0.7 && <0.8'
all-versions:
- 0.1.0.0
author: Christoph Hermann
latest: 0.1.0.0
description-type: markdown
description: |
  # pretty-diff

  Pretty printing a diff of two values.

  ## Usage

  ```hs
  import qualified Pretty.Diff as Diff
  import Data.Default (def)

  Diff.pretty def "1234" "_23"
  ```

  Will create a string that looks like this:

  ```
   ▼ ▼
  "1234"
  ╷
  │
  ╵
  "_23"
   ▲
  ```
license-name: BSD-3-Clause
Update from Hackage at 2021-03-22T04:09:01Z
homepage: https://github.com/ejconlon/simple-parser#readme
changelog-type: ''
hash: edef0ca5206571add836e20cc773d20718a0f5fc2f7c5d8733015d5bf14a65d6
test-bench-deps:
  simple-parser: -any
  tasty-th: -any
  base: '>=4.12 && <5'
  text: '>=1.2 && <1.3'
  list-t: '>=1.0 && <1.1'
  containers: '>=0.6 && <0.7'
  mtl: '>=2.2 && <2.3'
  tasty-hunit: -any
  tasty: -any
maintainer: ejconlon@gmail.com
synopsis: Simple parser combinators
changelog: ''
basic-deps:
  base: '>=4.12 && <5'
  text: '>=1.2 && <1.3'
  list-t: '>=1.0 && <1.1'
  containers: '>=0.6 && <0.7'
  mtl: '>=2.2 && <2.3'
all-versions:
- 0.2.0
author: Eric Conlon
latest: 0.2.0
description-type: markdown
description: |
  # simple-parser

  [![CircleCI](https://circleci.com/gh/ejconlon/simple-parser/tree/master.svg?style=svg)](https://circleci.com/gh/ejconlon/simple-parser/tree/master)

  Simple parser combinators following the clever refrain (by Fritz Ruehr?)

      A parser for things
      Is a function from strings
      To lists of pairs
      Of things and strings.

  In this case, we subsitute `ListT` for the list and add some error handling. We also swap out strings for any kind of input (streaming or not).
license-name: BSD-3-Clause
Use GitHub Actions for CI
on: [push]
name: build
jobs:
  test:
    strategy:
      matrix:
        ghc: ['8.0.2', '8.2.2', '8.4.4', '8.6.5', '8.8.3', '8.10.1']
    runs-on: ubuntu-latest
    name: Haskell GHC ${{ matrix.ghc }}
    steps:
      - uses: actions/checkout@v2
      - name: Setup Haskell
        uses: actions/setup-haskell@v1
        with:
          ghc-version: ${{ matrix.ghc }}
      - name: Cache .cabal
        uses: actions/cache@v1
        with:
          path: ~/.cabal
          key: ${{ runner.os }}-${{ matrix.ghc }}-cabal
      - name: Install rethinkdb
        run: |
          source /etc/lsb-release && echo "deb https://download.rethinkdb.com/apt $DISTRIB_CODENAME main" | sudo tee /etc/apt/sources.list.d/rethinkdb.list
          wget -qO- https://download.rethinkdb.com/apt/pubkey.gpg | sudo apt-key add -
          sudo apt-get update
          sudo apt-get install rethinkdb
          sudo cp /etc/rethinkdb/default.conf.sample /etc/rethinkdb/instances.d/default.conf
          sudo /etc/init.d/rethinkdb restart
      - run: cabal update
      - run: cabal configure --enable-tests
      - run: cabal build
      - run: cabal test
Add workflow file for CI
name: Test

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up Ruby 2.6
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.6.x
      - name: Build and test with Rake
        run: |
          gem install bundler
          bundle install --jobs 4 --retry 3
          bundle exec rake
Add incremental preload for 3 wfp drivers
# This file configures the OOI Pioneer WPF instance configuration
load_sequence:
- name: load_ooi_assets_inc1
  docstring: Reapply OOI preload to create new models, devices, sites for increased deployment time period
  config:
    op: load
    loadooi: True
    assets: res/preload/r2_ioc/ooi_assets
    ooiuntil: "6/30/2014"
    ooiparams: True
- name: load_pioneer_wpf
  docstring: Load the agent definition and parameters for Pioneer WFP
  config:
    op: load
    scenario: FLORT_KN,PARAD_K,WFC_ENG
    idmap: True
    categories: ExternalDatasetAgent,ParameterFunctions,ParameterDefs,ParameterDictionary,StreamConfiguration,IDMap
- name: load_ooi_assets_inc2
  docstring: Reapply OOI preload to finish off data product generation
  config:
    op: load
    loadooi: True
    assets: res/preload/r2_ioc/ooi_assets
    ooiuntil: "6/30/2014"
    ooiparams: True
Update from Hackage at 2017-12-16T11:39:08Z
homepage: http://hub.darcs.net/thielema/combinatorial/
changelog-type: markdown
hash: 626b3e4837e6ed0f9bdb795b88f7b36b30a050c063fa15467a85968a5b835786
test-bench-deps:
  base: -any
  utility-ht: -any
  combinatorial: -any
  QuickCheck: ! '>=2.5 && <3.0'
maintainer: Henning Thielemann <haskell@henning-thielemann.de>
synopsis: Count, enumerate, rank and unrank combinatorial objects
changelog: ! '# Change log for the `combinatorial` package


  ## 0.0


  * Tests: replaced `(==>)` and custom cardinal types by `QC.forAll`.

  * extracted from HTam package

'
basic-deps:
  base: ! '>=4.5 && <5'
  array: ! '>=0.4 && <0.6'
  utility-ht: ! '>=0.0.8 && <0.13'
  containers: ! '>=0.4.2 && <0.6'
  transformers: ! '>=0.3 && <0.6'
all-versions:
- '0.0'
author: Henning Thielemann <haskell@henning-thielemann.de>
latest: '0.0'
description-type: haddock
description: ! 'Counting, enumerating, ranking and unranking of combinatorial objects.
  Well-known and less well-known basic combinatoric problems and examples.

  The functions are not implemented in obviously stupid ways, but they are also
  not optimized to the maximum extent.

  The package is plain Haskell 98.

  See also:

  * @exact-combinatorics@: Efficient computations of large combinatoric numbers.

  * @combinat@: Library for a similar purpose with a different structure and
  selection of problems.'
license-name: BSD3
Update from Hackage at 2016-10-08T19:32:12+00:00
homepage: https://github.com/EarthCitizen/escape-artist#readme
changelog-type: ''
hash: a8b2a28cb29fcc89888aedd76d958631b5dbe34374ae5a2333eb95fa886637b0
test-bench-deps:
  bytestring: ! '>=0.10.8.1 && <0.11'
  base: ! '>=4.7 && <5'
  escape-artist: -any
  hspec: ! '>=2.2.4 && <2.3'
  text: ! '>=1.2.2.1 && <1.3'
  silently: ! '>=1.2.5 && <1.3'
  QuickCheck: ! '>=2.9.2 && <2.10'
maintainer: rd.github@gmail.com
synopsis: ANSI Escape Sequence Text Decoration Made Easy
changelog: ''
basic-deps:
  bytestring: ! '>=0.10.8.1 && <0.11'
  base: ! '>=4.7 && <5'
  text: ! '>=1.2.2.1 && <1.3'
all-versions:
- '1.0.0'
author: Ryan Daniels
latest: '1.0.0'
description-type: haddock
description: ! 'A library for text decoration with ANSI escape sequences made easy.
  Decorate your terminal text easily and expressively. Any complex data type, existing
  or custom, can be simply colorized by implementing the class ''ToEscapable'', then
  output to terminal or converted to ''String'' using the provided functions.

  Simple Example

  @

  import Data.Monoid ((\<\>))

  import Text.EscapeArtist


  underlines = Underline $ FgCyan \"I am underlined\" \<\> UnderlineOff \" but I am
  not \" \<\> FgMagenta \"and I am over here\"


  putEscLn underlines

  @

  <<https://raw.githubusercontent.com/EarthCitizen/escape-artist/master/images/underline_off.png>>

  Implementing ''ToEscapable''

  @

  import Data.Monoid ((\<\>))

  import Text.EscapeArtist


  data ABC = A | B deriving (Show, Eq)


  instance ToEscapable ABC where

  &#x20; toEscapable (A) = FgRed $ show A

  &#x20; toEscapable (B) = FgGreen $ show B


  instance (ToEscapable a) => ToEscapable (Maybe a) where

  &#x20; toEscapable (Just a) = FgGreen \"Just\" \<\> Inherit \" \" \<\> FgYellow a

  &#x20; toEscapable a = FgRed $ show a

  @

  Comprehensive Documentation

  See comprehensive documentation with many examples here: <https://github.com/EarthCitizen/escape-artist#readme>'
license-name: BSD3
Update from Hackage at 2020-02-20T15:45:14Z
homepage: http://github.com/GregorySchwartz/elbow#readme
changelog-type: ''
hash: 931564aae88763a58c272c70b79dbca4276f4932657fd58799424aba163d147d
test-bench-deps: {}
maintainer: gsch@pennmedicine.upenn.edu
synopsis: Find the elbow point.
changelog: ''
basic-deps:
  base: ! '>=4.7 && <5'
  hmatrix: -any
  safe: -any
all-versions:
- 0.1.0.0
author: Gregory W. Schwartz
latest: 0.1.0.0
description-type: haddock
description: Use rotations to identify the elbow point of a two-dimensional long-tailed
  distribution.
license-name: GPL-3.0-only
Set s3 buckets as option inputs
---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: cfbuildpacks/ci

inputs:
  - name: pivotal-buildpack
  - name: pivotal-buildpack-cached
  - name: buildpacks-ci
  - name: buildpack

outputs:
  - name: buildpack-artifacts

run:
  path: bash
  args:
    - -cl
    - |
      set -e
      pushd buildpacks-ci
        tasks/detect-and-upload/run.rb
      popd

params:
  RUBYGEM_MIRROR:
  CF_STACK:
  BUILDPACK_NAME:
  GIT_REPO_ORG:
---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: cfbuildpacks/ci

inputs:
  - name: pivotal-buildpack
    option: true
  - name: pivotal-buildpack-cached
    option: true
  - name: buildpacks-ci
  - name: buildpack

outputs:
  - name: buildpack-artifacts

run:
  path: bash
  args:
    - -cl
    - |
      set -e
      pushd buildpacks-ci
        tasks/detect-and-upload/run.rb
      popd

params:
  RUBYGEM_MIRROR:
  CF_STACK:
  BUILDPACK_NAME:
  GIT_REPO_ORG:
Add bosh lite manifest to deploy garden with groot
---
name: garden-runc

# replace with bosh status --uuid
director_uuid: <%= `bosh target lite > /dev/null 2>&1 && bosh status --uuid` %>

releases:
  - name: garden-runc
    version: latest
  - name: grootfs-release
    version: latest

jobs:
  - name: garden
    instances: 1
    templates:
      - name: grootfs
        release: grootfs-release
      - name: garden
        release: garden-runc
    resource_pool: garden
    networks:
      - name: garden
    properties:
      garden:
        graph_cleanup_threshold_in_mb: 1024
        listen_network: tcp
        listen_address: 0.0.0.0:7777
        log_level: debug
        image_plugin: /var/vcap/jobs/grootfs/bin/grootfs_wrapper
      grootfs:
        store_size_bytes: 524288000

networks:
  - name: garden
    subnets:
      - range: 10.244.16.4/30
        reserved: [10.244.16.5]
        static: []
        cloud_properties:
          name: random
      - range: 10.244.16.8/30
        reserved: [10.244.16.9]
        static: []
        cloud_properties:
          name: random
      - range: 10.244.16.12/30
        reserved: [10.244.16.13]
        static: []
        cloud_properties:
          name: random
      - range: 10.244.16.16/30
        reserved: [10.244.16.17]
        static: []
        cloud_properties:
          name: random
      - range: 10.244.16.20/30
        reserved: [10.244.16.21]
        static: []
        cloud_properties:
          name: random

resource_pools:
  # not much point in having more than one resource pool, since it's all
  # in one VM with the same hardware anyway
  - name: garden
    network: garden
    stemcell:
      name: bosh-warden-boshlite-ubuntu-trusty-go_agent
      version: latest
    cloud_properties:
      name: random

compilation:
  workers: 3
  network: garden
  cloud_properties:
    name: random

update:
  canaries: 1
  max_in_flight: 3
  serial: false
  canary_watch_time: 1000-240000
  update_watch_time: 1000-240000
Add a rule for running st2enterprise workroom tests.
---
name: "st2workroom_st2enterprise_test_master_ubuntu14"
description: "Test st2workroom on each build"
pack: "st2cd"
enabled: true
trigger:
  type: "core.st2.generic.actiontrigger"
criteria:
  trigger.action_name:
    pattern: "st2cd.st2_pkg_ubuntu14"
    type: "equals"
  trigger.status:
    pattern: "succeeded"
    type: "equals"
  trigger.parameters.environment:
    pattern: "staging"
    type: "equals"
  trigger.parameters.branch:
    pattern: "master"
    type: "equals"
action:
  ref: "st2cd.st2workroom_st2enterprise_test"
  parameters:
    hostname: "st2w-st2enterprise-{{trigger.parameters.branch}}-u14-{{trigger.execution_id}}"
    build: "{{trigger.parameters.build}}"
    version: "{{system.st2_unstable_version}}"
    environment: "sandbox"
    branch: "{{trigger.parameters.branch}}"
    revision: "{{trigger.parameters.revision}}"
    repo: "https://github.com/StackStorm/st2workroom.git"
    distro: "UBUNTU14"
Update from Hackage at 2017-01-19T17:44:43Z
homepage: https://github.com/agrafix/format-numbers#readme changelog-type: '' hash: dbf701598c9e33c0849072bb37cc3630c56caba65b6afc86c664e4611391151c test-bench-deps: base: ! '>=4.7 && <5' hspec: -any text: -any maintainer: mail@athiemann.net synopsis: Various number formatting functions changelog: '' basic-deps: base: ! '>=4.7 && <5' text: -any all-versions: - '0.1.0.0' author: Alexander Thiemann latest: '0.1.0.0' description-type: markdown description: ! '# format-numbers [![CircleCI](https://circleci.com/gh/agrafix/format-numbers.svg?style=svg)](https://circleci.com/gh/agrafix/format-numbers) Various number formatting functions ' license-name: MIT
Add a recipe to package libgfortran on OS X.
{% set libgfortran_version = [3, 0, 0] %} {% set libquadmath_version = [0, 0, 0] %} {% set libgcc_s_version = [1, 0, 0] %} package: name: libgfortran version: {{ libgfortran_version|join('.') }} build: number: 0 skip: true # [not osx] always_include_files: - lib/libgfortran.dylib # [osx] - lib/libgfortran.{{ libgfortran_version[0] }}.dylib # [osx] # Including libquadmath for the time # being. This will need to be broken # out in the long term. - lib/libquadmath.dylib # [osx] - lib/libquadmath.{{ libquadmath_version[0] }}.dylib # [osx] # Including libgcc_s for the time # being. This will need to be broken # out in the long term. - lib/libgcc_s.{{ libgcc_s_version[0] }}.dylib # [osx] - lib/libgcc_s_ppc64.{{ libgcc_s_version[0] }}.dylib # [osx] - lib/libgcc_s_x86_64.{{ libgcc_s_version[0] }}.dylib # [osx] requirements: build: - gcc 4.8.5 test: commands: - test -f "${PREFIX}/lib/libgfortran.dylib" # [osx] - test -f "${PREFIX}/lib/libgfortran.{{ libgfortran_version[0] }}.dylib" # [osx] # Including libquadmath for the time # being. This will need to be broken # out in the long term. - test -f "${PREFIX}/lib/libquadmath.dylib" # [osx] - test -f "${PREFIX}/lib/libquadmath.{{ libquadmath_version[0] }}.dylib" # [osx] # Including libgcc_s for the time # being. This will need to be broken # out in the long term. - test -f "${PREFIX}/lib/libgcc_s.{{ libgcc_s_version[0] }}.dylib" # [osx] - test -f "${PREFIX}/lib/libgcc_s_ppc64.{{ libgcc_s_version[0] }}.dylib" # [osx] - test -f "${PREFIX}/lib/libgcc_s_x86_64.{{ libgcc_s_version[0] }}.dylib" # [osx] about: home: http://gcc.gnu.org/ summary: Fortran libraries from the GNU Compiler Collection license: GPL 3 (with GCC Runtime Library Exception 3.1) extra: recipe-maintainers: - jakirkham - msarahan - pelson
Update from Hackage at 2018-04-08T02:22:11Z
homepage: http://tathougies.github.io/beam/user-guide/backends/beam-postgres changelog-type: markdown hash: d4f637b44cf1d5eb1e512894413d5aa2256c602634514cfac4759c7868f1c1c4 test-bench-deps: {} maintainer: travis@athougies.net synopsis: Connection layer between beam and postgres changelog: ! '# 0.3.0.0 Initial hackage beam-postgres ' basic-deps: free: ! '>=4.12 && <5.1' bytestring: ! '>=0.10 && <0.11' case-insensitive: ! '>=1.2 && <1.3' base: ! '>=4.7 && <5.0' time: ! '>=1.6 && <1.10' postgresql-libpq: ! '>=0.8 && <0.10' unordered-containers: ! '>=0.2 && <0.3' text: ! '>=1.0 && <1.3' uuid: ! '>=1.2 && <1.4' monad-control: ! '>=1.0 && <1.1' lifted-base: ! '>=0.2 && <0.3' conduit: ! '>=1.2 && <1.4' haskell-src-exts: ! '>=1.18 && <1.21' beam-migrate: ! '>=0.3 && <0.4' postgresql-simple: ! '>=0.5 && <0.6' network-uri: ! '>=2.6 && <2.7' mtl: ! '>=2.1 && <2.3' beam-core: ! '>=0.7 && <0.8' hashable: ! '>=1.1 && <1.3' scientific: ! '>=0.3 && <0.4' aeson: ! '>=0.11 && <1.3' vector: ! '>=0.11 && <0.13' all-versions: - '0.3.0.0' author: Travis Athougies latest: '0.3.0.0' description-type: haddock description: Beam driver for <https://www.postgresql.org/ PostgreSQL>, an advanced open-source RDBMS license-name: MIT
Set up CI with Azure Pipelines
# ASP.NET Core (.NET Framework) # Build and test ASP.NET Core projects targeting the full .NET Framework. # Add steps that publish symbols, save build artifacts, and more: # https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core trigger: - master pool: vmImage: 'windows-latest' variables: solution: '**/*.sln' buildPlatform: 'Any CPU' buildConfiguration: 'Release' steps: - task: NuGetToolInstaller@1 - task: NuGetCommand@2 inputs: restoreSolution: '$(solution)' - task: VSBuild@1 inputs: solution: '$(solution)' msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"' platform: '$(buildPlatform)' configuration: '$(buildConfiguration)' - task: VSTest@2 inputs: platform: '$(buildPlatform)' configuration: '$(buildConfiguration)'
Update from Hackage at 2020-01-25T20:03:55Z
homepage: '' changelog-type: text hash: b33e8cdbf6aa516be723c7cc77f2207d932360ed98043ba313cfe56f1795e434 test-bench-deps: {} maintainer: Joey Hess <id@joeyh.name> synopsis: Arduino programming in haskell using the Copilot stream DSL changelog: | arduino-copilot (1.0.0) unstable; urgency=medium * First release. -- Joey Hess <id@joeyh.name> Sat, 25 Jan 2020 10:33:09 -0400 basic-deps: copilot: (==3.1.*) unix: -any base: (>=4.5 && <5) filepath: -any copilot-c99: (==3.1.*) mtl: -any optparse-applicative: (>=0.14.1) directory: -any all-versions: - 0.0.1 - 1.0.0 author: Joey Hess latest: 1.0.0 description-type: text description: | Arduino programming in haskell using the Copilot stream DSL See Copilot.Arduino for details on how to use write a program using this library. Several examples are included in the examples/ directory, and each have their own README explaining how to use them. license-name: BSD-3-Clause
Set up CI with Azure Pipelines
# Docker image # Build a Docker image to deploy, run, or push to a container registry. # Add steps that use Docker Compose, tag images, push to a registry, run an image, and more: # https://docs.microsoft.com/azure/devops/pipelines/languages/docker pool: vmImage: 'Ubuntu 16.04' variables: imageName: 'lacledeslan/gamesvr-csgo-tourney' steps: - script: docker pull lacledeslan/gamesvr-csgo displayName: 'Fetch image lacledeslan/gamesvr-csgo' - script: docker pull lacledeslan/sourceseer displayName: 'Fetch image lacledeslan/sourceseer' - script: docker build . -f linux.Dockerfile -t $(imageName) --no-cache --build-arg BUILDNODE=Azure; displayName: 'Docker Build' - script: docker run --rm $(imageName) ./ll-tests/gamesvr-csgo-tourney.sh; displayName: 'Run Self Tests' - script: | docker login -u $(dockerUser) -p $(dockerPass) docker push $(imageName) displayName: 'Push to Docker HUB' #- script: curl -X POST "https://hooks.microbadger.com/images/lacledeslan/gamesvr-csgo/$(microBadgerToken)" # displayName: 'Update MicroBadger'
Add Code Climate configuration file
languages: JavaScript: true exclude_paths: - "*.min.js" - "public/assets/wee/temp/*" - "public/assets/wee/tests/*" - "public/assets/js/polyfill/es5-shim.js" - "public/assets/js/polyfill/html5shiv.js" - "public/assets/js/polyfill/sizzle.js"
Add release note for vnx and unity template changes
---
fixes:
  - |
    manila-backend-vnx.yaml:
    1. Remove ManilaVNXServerMetaPool since meta_pool is not
       used by the Manila VNX driver.
    2. Add ManilaVNXServerContainer.
  - |
    cinder-dellemc-vnx-config.yaml:
    1. Remove the default value of CinderDellEMCVNXStorageSecurityFileDir
       since it is not a mandatory option for the Cinder VNX driver.
Update from Forestry.io - Updated Forestry configuration
--- new_page_extension: md auto_deploy: false admin_path: webhook_url: sections: - type: jekyll-pages label: Pages create: all - type: jekyll-posts label: Posts create: all upload_dir: uploads public_path: "/uploads" front_matter_path: '' use_front_matter_path: false file_template: ":filename:" build: preview_command: bundle exec jekyll build --drafts --unpublished --future -d _site publish_command: bundle exec jekyll build -d _site preview_env: - JEKYLL_ENV=staging publish_env: - JEKYLL_ENV=production preview_output_directory: _site output_directory: _site instant_preview_command: bundle exec jekyll serve --drafts --unpublished --future --port 8080 --host 0.0.0.0 -d _site
Update from Hackage at 2015-08-01T22:18:52+0000
homepage: https://github.com/trskop/verbosity changelog-type: markdown hash: 01960b792573971ab6069027e6289cb03f3ad7504a7bcdf1c58e981d0e73b502 test-bench-deps: {} maintainer: peter.trsko@gmail.com synopsis: Simple enum that encodes application verbosity. changelog: ! "# ChangeLog / ReleaseNotes\n\n\n## Version 0.1.0.0\n\n* First public release.\n* Uploaded to [Hackage][]: <http://hackage.haskell.org/package/verbosity-0.1.0.0>\n\n\n[Hackage]:\n \ http://hackage.haskell.org/\n \"HackageDB (or just Hackage) is a collection of releases of Haskell packages.\"\n" basic-deps: base: ! '>=4 && <5' data-default-class: ==0.0.* binary: ! '>=0.5 && <0.8' all-versions: - '0.1.0.0' author: Peter Trško latest: '0.1.0.0' description-type: markdown description: ! "# verbosity\n\n[![Hackage](http://img.shields.io/hackage/v/verbosity.svg)][Hackage: verbosity]\n[![Hackage Dependencies](https://img.shields.io/hackage-deps/v/verbosity.svg)](http://packdeps.haskellers.com/reverse/verbosity)\n[![Haskell Programming Language](https://img.shields.io/badge/language-Haskell-blue.svg)][Haskell.org]\n[![BSD3 License](http://img.shields.io/badge/license-BSD3-brightgreen.svg)][tl;dr Legal: BSD3]\n\n[![Build](https://travis-ci.org/trskop/verbosity.svg)](https://travis-ci.org/trskop/verbosity)\n\n\n# Description\n\nSimple enum that encodes application verbosity with various useful instances.\n\n\n[Hackage: verbosity]:\n http://hackage.haskell.org/package/verbosity\n \ \"verbosity package on Hackage\"\n[Haskell.org]:\n http://www.haskell.org\n \"The Haskell Programming Language\"\n[tl;dr Legal: BSD3]:\n https://tldrlegal.com/license/bsd-3-clause-license-%28revised%29\n \ \"BSD 3-Clause License (Revised)\"\n" license-name: BSD3
Add playbook to test user module
---
- hosts: all
  user: root
  vars:
    # created with:
    # crypt.crypt('This is my Password', '$1$SomeSalt')
    password: $1$SomeSalt$UqddPX3r4kH3UL5jq5/ZI.

  tasks:

  # Walk through account creation, modification, and deletion
  - name: test basic user account creation
    action: user name=tset comment=TsetUser gid=100 shell=/sbin/nologin createhome=no

  - name: test user account modification
    action: user name=tset comment=NyetUser

  - name: test user account password change
    action: user name=tset password=$password

  - name: test user account deletion
    action: user name=tset state=absent
Add closed issue message github action
name: Closed Issue Message on: issues: types: [closed] jobs: auto_comment: runs-on: ubuntu-latest steps: - uses: aws-actions/closed-issue-message@v1 with: # These inputs are both required repo-token: "${{ secrets.GITHUB_TOKEN }}" message: | ### ⚠️COMMENT VISIBILITY WARNING⚠️ Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
Add config for ext network with tiny DHCP pool
--- networks: net1: external_connectivity: no name: "data" ip_address: "192.168.{{ data_net }}.254" netmask: "255.255.255.0" net2: external_connectivity: yes name: "management" ip_address: "172.16.{{ net }}.1" netmask: "255.255.255.0" forward: nat dhcp: range: start: "172.16.{{ net }}.2" end: "172.16.{{ net }}.100" subnet_cidr: "172.16.{{ net }}.0/24" subnet_gateway: "172.16.{{ net }}.1" floating_ip: start: "172.16.{{ net }}.101" end: "172.16.{{ net }}.150" net3: external_connectivity: yes name: "external" ipv6: ip_address: "2620:52:0:13b8::fe" prefix: "64" dhcp: range: start: "2620:52:0:13b8::fe:1" end: "2620:52:0:13b8::fe:1" ip_address: "10.0.{{ net }}.1" netmask: "255.255.255.0" forward: nat dhcp: range: start: "10.0.{{ net }}.2" end: "10.0.{{ net }}.2" subnet_cidr: "10.0.{{ net }}.0/24" subnet_gateway: "10.0.{{ net }}.1" nodes: default: interfaces: - network: "data" - network: "management" - network: "external" external_network: network: "management" novacontrol: interfaces: - network: "data" - network: "management" external_network: network: "management" odl: interfaces: - network: "management" external_network: network: "management" cfme: interfaces: - network: "external" - network: "data" external_network: network: "external" cfme_tester: interfaces: - network: "external" - network: "data" external_network: network: "external"
Set up CI with Azure Pipelines
# ASP.NET Core # Build and test ASP.NET Core projects targeting .NET Core. # Add steps that run tests, create a NuGet package, deploy, and more: # https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core trigger: - master pool: vmImage: 'ubuntu-latest' variables: buildConfiguration: 'Release' steps: - script: dotnet build --configuration $(buildConfiguration) displayName: 'dotnet build $(buildConfiguration)'
Update from Hackage at 2017-08-28T12:43:18Z
homepage: https://github.com/habibalamin/wai-secure-cookies changelog-type: '' hash: e8ad40acf006bea4eeb8eedd9a8ebb17b636cc8e4a8a0c902bda9aaf5c3123ef test-bench-deps: {} maintainer: ha.alamin@gmail.com synopsis: '' changelog: '' basic-deps: bytestring: ==0.10.* wai: ! '>=3.2 && <4' split: ! '>=0.2 && <0.3' base: ! '>=4.7 && <5' base64-bytestring: ! '>=1 && <2' protolude: ! '>=0.2 && <0.3' memory: ==0.14.* cryptonite: ==0.24.* random: ! '>=1.1 && <2' http-types: ! '>=0.9 && <0.10' all-versions: - '0.1.0.0' author: Habib Alamin latest: '0.1.0.0' description-type: markdown description: ! "# wai-secure-cookies\n\nI extracted a WAI middleware to automatically encrypt and sign cookies.\n\n---\n\n** WARNING **\n\nI am not a cryptographer, and the crypto libraries in Haskell are not nearly as easy to use as what I'm used to in Ruby, so I wouldn't depend on this for a serious project until it's had some proper eyes on it.\n\n---\n\n# Usage\n\nPopulate the following environment variables in your WAI application process:\n\n```\nWAI\\_COOKIE\\_VALIDATION\\_KEY # key to sign cookie names and values\nWAI\\_COOKIE\\_ENCRYPTION\\_KEY # key to encrypt cookie names and values\n```\n\nYou can generate random keys with `waicookie-genkey`:\n\n```\nwaicookie-genkey <key type> ...\nkey types: encryption\n validation\n```\n" license-name: MIT
Add some addition links to issues
blank_issues_enabled: false
contact_links:
  - name: Ask a question in the Discussions area
    url: https://github.com/netbootxyz/netboot.xyz/discussions
  - name: Ask a question on the netboot.xyz Discord Server
    url: https://discord.gg/An6PA2a
Update from Hackage at 2018-01-15T01:45:32Z
homepage: https://github.com/oisdk/uniquely-represented-sets#readme changelog-type: '' hash: 4ecfccd6c9cf8df0353cc24009624f9306497399139de1afa19c0cbcb4684ef2 test-bench-deps: uniquely-represented-sets: -any checkers: -any base: ! '>=4.7 && <5' criterion: -any doctest: -any containers: -any random: -any QuickCheck: -any maintainer: mail@doisinkidney.com synopsis: '' changelog: '' basic-deps: base: ! '>=4.7 && <5' containers: -any deepseq: -any all-versions: - '0.1.0.0' author: Donnacha Oisín Kidney latest: '0.1.0.0' description-type: markdown description: ! '[![Build Status](https://travis-ci.org/oisdk/uniquely-represented-sets.svg)](https://travis-ci.org/oisdk/uniquely-represented-sets) # uniquely-represented-sets This package provides a set with a unique representation. This package is based on code by Jim Apple (https://github.com/jbapple/unique). The license for that code is available in PRIORLICENSE. ' license-name: MIT
Update from Hackage at 2018-09-15T15:58:58Z
homepage: https://github.com/lucasdicioccio/deptrack-project changelog-type: markdown hash: 200ad7814ceccda33c7c26d49fb569bb089ba637608f75a5e732577f368e4cd0 test-bench-deps: {} maintainer: lucas@dicioccio.fr synopsis: DepTrack applied to DevOps. changelog: ! '# Changelog for deptrack-devops ## Unreleased changes ' basic-deps: bytestring: ! '>=0.10 && <0.11' stm: ! '>=2.4 && <2.5' base: ! '>=4.11 && <5' base64-bytestring: ! '>=1.0 && <1.1' text: ! '>=1.2 && <1.3' dotgen: ! '>=0.4 && <0.5' async: ! '>=2.2 && <2.3' array: ! '>=0.5 && <0.6' containers: ! '>=0.5 && <0.6' lens: ! '>=4.16 && <4.17' distributed-closure: ! '>=0.4 && <0.5' binary: ! '>=0.8 && <0.9' mtl: ! '>=2.2 && <2.3' hashable: ! '>=1.2 && <1.3' deptrack-core: ! '>=0.1 && <0.2' safe: ! '>=0.3 && <0.4' all-versions: - '0.1.0.0' author: Lucas DiCioccio latest: '0.1.0.0' description-type: markdown description: ! 'DepTrack for DevOps =================== This library provides a node type for DepTrack that contains enough information to turnup, turndown, and check the healthiness of software entities. This library also contains functions to operate on the graph of dependency nodes concurrently (i.e., achieving maximum speed concurrency when turning up/down dependency graphs). ' license-name: Apache-2.0
Update from Hackage at 2018-10-16T15:16:17Z
homepage: http://github.com/lyokha/nginx-haskell-module changelog-type: markdown hash: b36a28d7cbd1152692247002d6bfc9361f02d415aac98d9831fc6e461bb353d0 test-bench-deps: {} maintainer: Alexey Radkov <alexey.radkov@gmail.com> synopsis: Extra tools for Nginx haskell module changelog: ! '### 0.1.0.0 - Initial version. ' basic-deps: bytestring: ! '>=0.10.0.0' base: ! '>=4.8 && <5' ngx-export: ! '>=1.4.1' aeson: ! '>=1.0' template-haskell: ! '>=2.11.0.0' safe: -any all-versions: - '0.1.0.0' author: Alexey Radkov <alexey.radkov@gmail.com> latest: '0.1.0.0' description-type: haddock description: ! 'Extra tools for <http://github.com/lyokha/nginx-haskell-module Nginx haskell module>' license-name: BSD3
Revert "Secrets isn't commited anymore."
# Be sure to restart your server when you modify this file. # Your secret key is used for verifying the integrity of signed cookies. # If you change this key, all old signed cookies will become invalid! # Make sure the secret is at least 30 characters and all random, # no regular words or you'll be exposed to dictionary attacks. # You can use `rails secret` to generate a secure secret key. # Make sure the secrets in this file are kept private # if you're sharing your code publicly. development: secret_key_base: ec781fdc18bfb7b8a95da8752c0124ea6a27e619ececb7b97d4e22527da8fa630ae6769b50a22ba592d5ac32988a93dc81b3481e79e7ba0659a182a727fbb3c8 test: secret_key_base: 994e330d06c6148cb512132e89bb625a41a6b5e847bb0be1c33d987b013508ed33dea5c968e5f2d3977d83ca12da12d447599abc9911fe01e3fe76e72e39c8b3 # Do not keep production secrets in the repository, # instead read values from the environment. production: secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
Update from Hackage at 2016-10-20T10:07:28Z
homepage: '' changelog-type: '' hash: d8beb68affcdf470a2a76c5eb5a0a64e38995f1011f4412d2538613ff28b1f9d test-bench-deps: {} maintainer: runKleisli@openmailbox.org synopsis: Netwire/GLFW/VinylGL input handling demo changelog: '' basic-deps: GLUtil: ! '>=0.7' bytestring: -any OpenGL: ! '>=2.9.2' netwire-input-glfw: ! '>=0.0.5 && <=0.0.6' vinyl-gl: ! '>=0.2' base: ! '>4.5 && <5' netwire-input: -any filepath: ==1.3.* array: -any containers: ! '>=0.5' vinyl: ! '>=0.4' lens: ! '>=3.9' linear: ! '>=1.1' mtl: ! '>=2.2.1' GLFW-b: ! '>=1.4' transformers: -any netwire: ! '>=5' directory: ! '>=1.2' all-versions: - '0.0.0' author: Rand Kleisli latest: '0.0.0' description-type: markdown description: ! 'Port of netwire-input-glfw example to VinylGL & GLSL 1.50. Uses Netwire 5 and Vinyl >= 0.4. NetVinylGLFW is a previous combination of Netwire, VinylGL, & GLFW, but its Netwire and Vinyl versions are outdated as of 2016. The combination with STM that it suggests is embraced by netwire-input-glfw. Usage: Place the executable and the `etc` folder in the same directory. Change to that directory in the command line and run the executable. ' license-name: MIT
Add hello-world yaml for generator
exercise: HelloWorld version: 2 plan: 3 imports: '&hello' tests: | #`[Go through the cases (hiding at the bottom of this file) and check that &hello gives us the correct response.] is &::('hello')(), |.<expected description> for @($c-data<cases>); exercise_comment: The name of this exercise. module_comment: "%*ENV<EXERCISM> is for tests not directly for the exercise, don't worry about these :)" version_comment: The version we will be matching against the exercise. plan_comment: This is how many tests we expect to run. use_test_comment: Check that the module can be use-d. version_test_comment: "If the exercise is updated, we want to make sure other people testing\nyour code don't think you've made a mistake if things have changed!" imports_comment: Import '&hello' from 'HelloWorld' cdata_test_comment: Ignore this for your exercise! Tells Exercism folks when exercise cases become out of date. done_testing_comment: There are no more tests after this :) INIT_comment: "'INIT' is a phaser, it makes sure that the test data is available before everything else\nstarts running (otherwise we'd have to shove the test data into the middle of the file!)"
Test on Trusty in Travis
language: ruby cache: bundler sudo: false branches: only: - master - 8-stable before_install: - gem update --system - gem --version - gem uninstall bundler -a -x -I - rvm @global do gem uninstall bundler -a -x -I - gem install bundler - bundle --version - rm -f .bundle/config rvm: - 2.3.4 - 2.4.1 - ruby-head allow_failures: - rvm: ruby-head script: - bundle exec chefstyle - bundle exec rake spec - bundle exec ohai
language: ruby cache: bundler dist: trusty sudo: false branches: only: - master - 8-stable before_install: - gem update --system - gem --version - rvm @global do gem uninstall bundler -a -x -I - gem install bundler - bundle --version - rm -f .bundle/config rvm: - 2.3.4 - 2.4.1 - ruby-head allow_failures: - rvm: ruby-head script: - bundle exec chefstyle - bundle exec rake spec - bundle exec ohai
Use GitHub Action for Ruby CI
name: Ruby on: [push] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 - name: Set up Ruby 2.6 uses: actions/setup-ruby@v1 with: ruby-version: 2.6.x - name: Build and test with Rake run: | gem install bundler bundle install --jobs 4 --retry 3 bundle exec rake
Fix trailing leader election cm when using helm
apiVersion: v1 kind: ConfigMap metadata: name: {{ include "nginx-ingress.leaderElectionName" . }} namespace: {{ .Release.Namespace }} labels: {{- include "nginx-ingress.labels" . | nindent 4 }}
Add support for architecture ppc64le in ieee754
# ---------------------------------------------------------------------------- # # Package : ieee754 # Source Repo : https://github.com/feross/ieee754.git # Travis Job Link : https://travis-ci.com/github/ddeka2910/ieee754/builds/211034884 # Created travis.yml : No # Maintainer : Debabrata Deka <debabrata.deka@ibm.com> # # Script License : Apache License, Version 2 or later # # --- os: linux arch: - amd64 - ppc64le language: node_js node_js: - lts/* addons: sauce_connect: true hosts: - airtap.local env: global: - secure: f3NrmOV/A7oACn47J1mkIpH8Sn/LINtluZvo/9pGo3Ss4+D2lyt7UawpedHtnYgU9WEyjPSi7pDWopUrIzusQ2trLYRJr8WAOEyHlgaepDyy4BW3ghGMKHMsS05kilYLP8nu1sRd6y1AcUYKw+kUrrSPanI7kViWVQ5d5DuwXO8= - secure: a6teILh33z5fbGQbh5/EkFfAyXfa2fPJG1upy9K+jLAbG4WZxXD+YmXG9Tz33/2NJm6UplGfTJ8IQEXgxEfAFk3ao3xfKxzm3i64XxtroSlXIFNSiQKogxDfLEtWDoNNCodPHaV3ATEqxGJ5rkkUeU1+ROWW0sjG5JR26k8/Hfg=
Set up CI with Azure Pipelines
# Node.js with gulp
# Build a Node.js project using the gulp task runner.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript

pool:
  vmImage: 'Ubuntu 18.04'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
  displayName: 'Install npm packages'

- script: |
    npm run lint
  displayName: 'Eslint check'

- script: |
    npm run stylelint
  displayName: 'Stylelint check'

- script: |
    npm run build
  displayName: 'Build'