name: GitStars
version: 0.1.0.0
github: "githubuser/GitStars"
license: BSD3
author: "<NAME>"
maintainer: "<EMAIL>"
copyright: "2018 <NAME>"
extra-source-files:
- README.md
- ChangeLog.md
# Metadata used when publishing your package
# synopsis: Short description of your package
# category: Web
# To avoid duplicated efforts in documentation and dealing with the
# complications of embedding Haddock markup inside cabal files, it is
# common to point users to the README.md file.
description: Please see the README on GitHub at <https://github.com/githubuser/GitStars#readme>
dependencies:
- base >= 4.7 && < 5
- protolude
default-extensions:
- ApplicativeDo
- BangPatterns
- ConstraintKinds
- DataKinds
- DefaultSignatures
- DeriveFoldable
- DeriveFunctor
- DeriveGeneric
- DeriveLift
- DeriveTraversable
- DerivingStrategies
- DuplicateRecordFields
- EmptyCase
- EmptyDataDecls
- ExistentialQuantification
- FlexibleContexts
- FlexibleInstances
- FunctionalDependencies
- GADTs
- GeneralizedNewtypeDeriving
- InstanceSigs
- KindSignatures
- LambdaCase
- MultiParamTypeClasses
- MultiWayIf
- NamedFieldPuns
- NoImplicitPrelude
- OverloadedLabels
- OverloadedStrings
- PatternSynonyms
- PolyKinds
- RankNTypes
- RecordWildCards
- ScopedTypeVariables
- StandaloneDeriving
- TupleSections
- TypeApplications
- TypeFamilies
- TypeFamilyDependencies
- TypeOperators
library:
source-dirs: src
dependencies:
- aeson
- aeson-casing
- attoparsec
- case-insensitive
- http-client
- http-client-tls
- lens
- text
- time
executables:
gitstars:
main: Main.hs
source-dirs: app
ghc-options:
- -threaded
- -rtsopts
- -with-rtsopts=-N
dependencies:
- GitStars
tests:
GitStars-test:
main: Spec.hs
source-dirs: test
ghc-options:
- -threaded
- -rtsopts
- -with-rtsopts=-N
dependencies:
- GitStars
|
package.yaml
|
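The file above is an hpack `package.yaml`, from which hpack generates the project's `.cabal` file. As a minimal sketch (assuming the YAML has already been parsed into a dict, e.g. with `yaml.safe_load`), a pre-flight check for the fields hpack requires might look like this; `check_package` is a hypothetical helper, not part of hpack:

```python
# Sanity-check an hpack package.yaml after parsing it into a dict.
# Field names ("name", "version", "dependencies") are the ones used in
# the file above; check_package itself is an illustrative helper.
def check_package(pkg):
    problems = []
    for field in ("name", "version"):
        if field not in pkg:
            problems.append("missing required field: " + field)
    for dep in pkg.get("dependencies", []):
        if not isinstance(dep, str):
            problems.append("dependency entries should be strings: %r" % (dep,))
    return problems

pkg = {
    "name": "GitStars",
    "version": "0.1.0.0",
    "dependencies": ["base >= 4.7 && < 5", "protolude"],
}
print(check_package(pkg))  # []
```

hpack reports its own, much richer diagnostics; a check like this only catches the most basic omissions before invoking it.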
---
Name: Total_Recon
Description: Total Recon is a progressive, story-based game designed to teach nmap
network reconnaissance.
Instructions: Instruct students to connect to the first machine, and further login
instructions will appear onscreen once connected.
Codelab: http://localhost:5000/404
Groups:
- Name: Students
Instructions: Log in to Home. Further instructions will be displayed upon logging in to Home and at each new checkpoint.
Access:
- Instance: home
Instances:
- Name: home
- Name: rekall
- Name: subway
- Name: earth_spaceport
- Name: mars_spaceport
- Name: venusville
- Name: last_resort
- Name: resistance_base
- Name: control_room
- Name: stealth_xmas
- Name: stealth_null
- Name: stealth_fin
Scoring:
- Text: What port is open at 10.0.0.4?
Type: Number
Options:
- accept-integer
- accept-decimal
- accept-hex
Values:
- Value: '444'
Points: '10'
Order: 1
Points: 10
- Text: What standard port does an http server use?
Type: Number
Options:
- accept-integer
Values:
- Value: '80'
Points: '10'
Order: 2
Points: 10
- Text: What is the IP of the host with the web server on the subnet 10.0.0.0/24?
Type: String
Options: []
Values:
- Value: "${scenario.instances.subway.ip_address_private}"
Points: '10'
Order: 3
Points: 10
- Text: What is the state of ports 80 and 443 on the earth-spaceport host?
Type: String
Options:
- ignore-case
Values:
- Value: filtered
Points: '10'
Order: 4
Points: 10
- Text: What is the IP Address of the mars spaceport? (It ends in .33)
Type: String
Options: []
Values:
- Value: "${scenario.instances.mars_spaceport.ip_address_private}"
Points: '10'
Order: 7
Points: 10
- Text: In the nmap man page, under "--min-rate", what --min-rate example do they
give? (Hint: it's an integer greater than 100 and less than 500)
Type: Number
Options:
- accept-integer
- accept-decimal
- accept-hex
Values:
- Value: '300'
Points: '10'
Order: 5
Points: 10
- Text: What is the nmap option for a Ping scan (disable port scan)? It should take
the form -Xx. For example, -sL is the option for List scan.
Type: String
Options: []
Values:
- Value: -sn
Points: '10'
- Value: -sP
Points: '10'
Order: 6
Points: 20
- Text: What has the ssh port on Venusville been changed to?
Type: Number
Options:
- accept-integer
- accept-decimal
Values:
- Value: '123'
Points: '10'
Order: 8
Points: 10
- Text: What has the ssh port on Last Resort been changed to?
Type: Number
Options:
- accept-integer
- accept-decimal
Values:
- Value: '2345'
Points: '10'
Order: 9
Points: 10
- Text: What port was open on the Resistance Base?
Type: Number
Options:
- accept-integer
- accept-decimal
Values:
- Value: '632'
Points: '20'
Order: 10
Points: 20
- Text: What kind of stealth scan, other than a basic SYN scan, works on 10.0.233.34?
(Do not include scan in your answer)
Type: String
Options:
- ignore-case
Values:
- Value: xmas
Points: '10'
Order: 11
Points: 10
- Text: What kind of stealth scan, other than a basic SYN scan, works on 10.0.233.36?
(Do not include scan in your answer)
Type: String
Options:
- ignore-case
Values:
- Value: 'null'
Points: '10'
Order: 12
Points: 10
- Text: What kind of stealth scan, other than a basic SYN scan, works on 10.0.233.38?
(Do not include scan in your answer)
Type: String
Options:
- ignore-case
Values:
- Value: fin
Points: '10'
Order: 13
Points: 10
- Text: How many possible hosts does the subnet 10.0.192.0/18 cover?
Type: Number
Options:
- accept-integer
- accept-decimal
Values:
- Value: '16382'
Points: '15'
Order: 14
Points: 15
- Text: How many ports are open on the control room host? (the IP ends in .5)
Type: Number
Options:
- accept-integer
- accept-decimal
Values:
- Value: '9'
Points: '10'
Order: 15
Points: 10
- Text: On the control_room box, what is the name of the directory where chmod was
moved to?
Type: String
Options:
- ignore-case
Values:
- Value: look-in-here
Points: '10'
Order: 16
Points: 10
|
scenarios/prod/total_recon/total_recon.yml
|
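Each Scoring entry in the scenario file above carries a top-level `Points` plus per-`Value` `Points` (quoted in the YAML, so they parse as strings). A small consistency check over the parsed data can verify that the top-level total matches the sum of the accepted values; the function below is an illustrative sketch, not part of the scenario tooling:

```python
# Check that each question's Points equals the sum of its Values' Points.
# Per-value Points are quoted in the YAML above, so they arrive as str.
def inconsistent_questions(scoring):
    bad = []
    for q in scoring:
        total = sum(int(v["Points"]) for v in q.get("Values", []))
        if total != int(q["Points"]):
            bad.append((q.get("Order"), total, int(q["Points"])))
    return bad

# A slice of the data above: order 6 accepts both -sn and -sP for 10 each.
scoring = [
    {"Order": 1, "Points": 10, "Values": [{"Points": "10"}]},
    {"Order": 6, "Points": 20, "Values": [{"Points": "10"}, {"Points": "10"}]},
]
print(inconsistent_questions(scoring))  # []
```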
id: "20180035"
week: 35
year: 2018
title: ロックなビリージョエルで漫遊記
subtitle: ""
guests: []
categories: ["アーティスト特集"]
date: 2018-09-02
playlist:
- id: "2018003501"
week: 35
name: 夏が終わる
artist: スピッツ
year: 1993
nation: JPN
label: ポリドール
producer:
- 笹路正徳
corner: 漫遊前の一曲
title: 夏が終わる
index: 285
indexInWeek: 1
selector: 草野マサムネ
- id: "2018003502"
week: 35
name: It's Still Rock and Roll to Me
artist: <NAME>
year: 1980
nation: US
label: Columbia
producer:
- <NAME>
youtube: 5eAQa4MOGkE
title: It's Still Rock and Roll to Me
index: 286
indexInWeek: 2
selector: 草野マサムネ
- id: "2018003503"
week: 35
name: <NAME>oman
artist: Attila
year: 1970
nation: US
label: Epic
title: Wonder Woman
index: 287
indexInWeek: 3
selector: 草野マサムネ
- id: "2018003504"
week: 35
name: <NAME>
artist: <NAME>
year: 1974
nation: US
label: Columbia
producer:
- <NAME>
youtube: Wo2TKojbMN8
title: Los Angelenos
index: 288
indexInWeek: 4
selector: 草野マサムネ
- id: "2018003505"
week: 35
name: Prelude/Angry Young Man
artist: <NAME>
year: 1976
nation: US
label: Columbia
youtube: M2iNLt_hUZg
title: Prelude/Angry Young Man
index: 289
indexInWeek: 5
selector: 草野マサムネ
- id: "2018003506"
week: 35
name: You May Be Right
artist: <NAME>
year: 1980
nation: US
label: Columbia
producer:
- <NAME>
youtube: Jo9t5XK0FhA
title: You May Be Right
index: 290
indexInWeek: 6
selector: 草野マサムネ
- id: "2018003507"
week: 35
name: <NAME> Grey
artist: <NAME>
year: 1993
nation: US
label: Columbia
youtube: CopYqp0HZkY
title: Shades of Grey
index: 291
indexInWeek: 7
selector: 草野マサムネ
- id: "2018003508"
week: 35
name: 夢ゴコチ
artist: 江口洋介
kana: えぐちようすけ
year: 1991
nation: JPN
label: ポリドール
corner: ちょっぴりタイムマシン
title: 夢ゴコチ
index: 292
indexInWeek: 8
selector: 草野マサムネ
|
data/yaml/2018/0035.yaml
|
title: Máquinas virtuais do Linux no Azure
summary: Documentação para criação e gerenciamento de máquinas virtuais do Linux no Azure.
metadata:
title: Máquinas virtuais do Linux no Azure
description: Documentação para criação e gerenciamento de máquinas virtuais do Linux no Azure.
services: virtual-machines-linux
ms.service: virtual-machines-linux
ms.topic: landing-page
author: cynthn
ms.author: cynthn
ms.date: 09/19/2019
ms.openlocfilehash: 12c3447bc7e46e45d05711e1b0c7a59f8d8d0218
ms.sourcegitcommit: 253d4c7ab41e4eb11cd9995190cd5536fcec5a3c
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/25/2020
ms.locfileid: "77115816"
landingContent:
- title: Os recursos mais recentes
linkLists:
- linkListType: whats-new
links:
- text: VMs do Spot
url: spot-vms.md
- text: Hosts dedicados
url: dedicated-hosts.md
- text: Grupos de posicionamento de proximidade
url: proximity-placement-groups.md
- title: Introdução
linkLists:
- linkListType: quickstart
links:
- text: CLI do Azure
url: https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-quick-create-cli
- text: Portal do Azure
url: https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-quick-create-portal
- text: Azure PowerShell
url: https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-quick-create-powershell
- title: Guias passo a passo
linkLists:
- linkListType: tutorial
links:
- text: Criar e gerenciar as VMs Linux
url: /azure/virtual-machines/linux/tutorial-manage-vm
- text: Discos de VM
url: /azure/virtual-machines/linux/tutorial-manage-disks
- text: Automatizar a configuração da VM
url: /azure/virtual-machines/linux/tutorial-automate-vm-deployment
- text: Criar imagens de VM personalizada
url: /azure/virtual-machines/linux/tutorial-custom-images
- text: Gerenciar VMs com redes virtuais
url: /azure/virtual-machines/linux/tutorial-virtual-network
- title: Treinamento individual
linkLists:
- linkListType: learn
links:
- text: " Criar uma máquina virtual Linux no Azure"
url: https://docs.microsoft.com/learn/modules/create-linux-virtual-machine-in-azure/
|
articles/virtual-machines/linux/index.yml
|
cerad_tourn_index:
path: /
defaults: { _controller: CeradTournBundle:Main:index }
cerad_tourn_welcome:
path: /welcome
defaults: { _controller: CeradTournBundle:Main:welcome }
cerad_tourn_home:
path: /home
defaults: { _controller: CeradTournBundle:Home:home }
cerad_tourn_textalerts:
path: /textalerts
defaults: { _controller: CeradTournBundle:Main:welcome }
cerad_tourn_contact:
path: /contact
defaults: { _controller: CeradTournBundle:Main:contact }
cerad_tourn_schedule:
path: /schedule
defaults: { _controller: CeradTournBundle:Main:welcome }
cerad_tourn_schedule_team_list:
path: /schedule/team
defaults: { _controller: CeradTournBundle:Main:welcome }
cerad_tourn_schedule_my_list:
path: /schedule/my
defaults: { _controller: CeradTournBundle:Schedule:my }
cerad_tourn_schedule_referee_list:
path: /schedule/referee.{_format}
defaults: { _controller: CeradTournBundle:Referee:referee, _format: html }
requirements:
_format: html|csv|xls|pdf
cerad_tourn_schedule_referee_assign:
path: /schedule/referee/assign/{id}
defaults: { _controller: CeradTournBundle:Referee:assign, id: 0 }
requirements:
id: \d+
cerad_tourn_game_report:
path: /game/report/{num}
defaults: { _controller: CeradTournBundle:GameReport:report }
# ===================================================
# Results
cerad_tourn_results_poolplay:
path: /results/poolplay/{div}/{poolFilter}
defaults: { _controller: CeradTournBundle:Results:poolplay, div: null, poolFilter: null }
cerad_tourn_results_playoffs:
path: /results/playoffs
defaults: { _controller: CeradTournBundle:Results:playoffs }
cerad_tourn_results_champions:
path: /results/champions
defaults: { _controller: CeradTournBundle:Results:champions }
cerad_tourn_results_sportsmanship:
path: /results/sportsmanship
defaults: { _controller: CeradTournBundle:Results:sportsmanship }
# ==============================================================
# Person stuff
# Need to add personId at some point
cerad_tourn_person_plan:
path: /person/plan/{id}
defaults: { _controller: CeradTournBundle:Person:plan, id: null }
cerad_tourn_person_plan_success:
path: /person/plan-success/{id}
defaults: { _controller: CeradTournBundle:Person:plan, id: null }
cerad_tourn_person_plan_failure:
path: /person/plan-failure/{id}
defaults: { _controller: CeradTournBundle:Person:plan, id: null }
cerad_tourn_person_plan_form:
path: /person/plan-form/{id}
defaults: { _controller: CeradTournBundle:Person:planForm, id: null }
cerad_tourn_person_person_add:
path: /person/person/add
defaults: { _controller: CeradTournBundle:PersonPerson:add }
# ==============================================================
# Not entirely sure about this
cerad_tourn_account_signin:
path: /account/signin
defaults: { _controller: CeradTournBundle:Account:signin }
cerad_tourn_account_signin_form:
path: /account/signin-form
defaults: { _controller: CeradTournBundle:Account:signinForm }
#cerad_tourn_account_create:
# path: /account/create
# defaults: { _controller: CeradTournBundle:Account:create }
cerad_tourn_account_create_success:
path: /account/create-success
defaults: { _controller: CeradTournBundle:Account:create }
cerad_tourn_account_create_failure:
path: /account/create-failure
defaults: { _controller: CeradTournBundle:Account:create }
cerad_tourn_account_create_form:
path: /account/create-form
defaults: { _controller: CeradTournBundle:Account:createForm }
# cerad_tourn_account_edit:
# pattern: /account/edit/{id}
# defaults: { _controller: CeradTournBundle:Account:create, id: 0 }
# requirements:
# id: \d+
cerad_tourn_account_person_edit:
path: /account/person/edit/{id}
defaults: { _controller: CeradTournBundle:Account/Person:edit, id: 0 }
requirements:
id: \d+
cerad_tourn_person_person_edit:
path: /person-person/edit/{id}
defaults: { _controller: CeradTournBundle:Main:welcome, id: 0 }
requirements:
id: \d+
# ==============================================================
# Some testing stuff
cerad_tourn_test_form1:
path: /test/form1
defaults: { _controller: CeradTournBundle:Test:form1 }
cerad_tourn_test_form1_success:
path: /test/form1-success
defaults: { _controller: CeradTournBundle:Test:form1 }
cerad_tourn_test_form1_failure:
path: /test/form1-failure
defaults: { _controller: CeradTournBundle:Test:form1 }
cerad_tourn_test_simple:
path: /test/simple
defaults: { _controller: CeradTournBundle:Test:simple }
cerad_tourn_test_dynamic:
path: /test/dynamic
defaults: { _controller: CeradTournBundle:Test:dynamic }
|
src/Cerad/Bundle/TournBundle/Resources/config/routing.yml
|
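Symfony deprecated the `pattern:` routing key in favor of `path:` (it was removed entirely in Symfony 3). A quick scan over a parsed routing map flags any stragglers; the function below is an illustrative sketch, not Symfony tooling:

```python
# Flag routes still using the deprecated "pattern" key instead of "path".
def routes_using_pattern(routing):
    return [name for name, route in routing.items()
            if isinstance(route, dict) and "pattern" in route]

routing = {
    "cerad_tourn_home": {"path": "/home"},
    "cerad_tourn_account_person_edit": {"pattern": "/account/person/edit/{id}"},
}
print(routes_using_pattern(routing))  # ['cerad_tourn_account_person_edit']
```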
.security_parent_job:
stage: security_tools
extends:
- .dd_upload_report
tags: [docker]
allow_failure: true
#### GitLeaks #####
appsec:gitleaks:
extends: [.security_parent_job]
variables:
DD_SCAN_TYPE: 'Gitleaks Scan'
DD_REPORT_FILE_NAME: 'gitleaks.json'
GIT_DEPTH: 0
image:
name: ${CI_SERVER_HOST}/gitleaks:master #CHECK IT
dependencies:
- defectdojo_prepare
script:
- |
echo "[x] Run Gitleaks"
echo -e "Commit count is: $(git log |grep 'Author:' |wc -l)"
set -x
gitleaks --no-git --path=./ -v --report $DD_REPORT_FILE_NAME || true
### trufflehog ####
secrets:trufflehog:
extends: [.security_parent_job]
variables:
DD_SCAN_TYPE: 'Trufflehog Scan'
DD_REPORT_FILE_NAME: 'trufflehog.report'
GIT_DEPTH: 0
dependencies:
- defectdojo_prepare
image:
name: ${CI_SERVER_HOST}/trufflehog:master #CHECK IT
script:
- |
echo "[x] Run Trufflehog"
set -x
#download trufflehog rules from common-pipeline
git clone "https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}//common-pipeline/"
cd common-pipeline
git checkout v3
cd ../
git checkout ${CI_COMMIT_REF_NAME} # needed to work around a trufflehog bug
echo -e "Commit count is: $(git log |grep 'Author:' |wc -l)"
## TODO: make the entropy=False setting dynamic
trufflehog --regex --entropy=False --exclude_paths='./common-pipeline/config/trufflehog-exclude-patterns.txt' ./ --json > $DD_REPORT_FILE_NAME || true
### GoSec ###
sast:gosec:
extends: [.security_parent_job]
variables:
DD_SCAN_TYPE: 'Gosec Scanner'
DD_REPORT_FILE_NAME: 'gosec.json'
dependencies:
- defectdojo_prepare
image:
name: ${CI_SERVER_HOST}/gosec:master
script:
- |
echo "[x] Run Gosec"
set -x
gosec -exclude=G104,G404,G501,G601 -no-fail -fmt json -out $DD_REPORT_FILE_NAME ./... > /dev/null || true
### phpcs ###
sast:phpcs:
extends: [.security_parent_job]
image: ${CI_SERVER_HOST}/phpcs-security-audit:master #CHECK IT
variables:
DD_SCAN_TYPE: 'PHP Security Audit v2'
DD_REPORT_FILE_NAME: 'phpcs.json'
dependencies:
- defectdojo_prepare
script:
- echo "[x] Run PHPcs"
- set -x
- /tmp/vendor/bin/phpcs --standard=/tmp/base_ruleset.xml --extensions=php --report=json --report-file=$DD_REPORT_FILE_NAME ./ --ignore=*.js,*.html,*.css || true
### Semgrep - cloud ####
sast:semgrep_cloud:
stage: security_tools
image: returntocorp/semgrep-agent:v1
script:
- echo "[x] Run Semgrep [Cloud]"
- set -x
- semgrep-agent || true
dependencies:
- defectdojo_prepare
variables:
SEMGREP_APP_DEPLOYMENT_ID: 1111 #CHECK IT
SEMGREP_APP_TOKEN: <PASSWORD> #CHECK IT
## remove this rule to activate the job
rules:
- if: $CI_COMMIT_BRANCH == "Never"
### Semgrep - DD ####
sast:semgrep_dd:
extends: [.security_parent_job]
image: returntocorp/semgrep-agent:v1
script:
- apk add curl jq
- echo "[x] Run Semgrep [DefectDojo]"
# maybe JSON output would be better for you
- semgrep --config p/security-audit --config p/secrets --sarif . > semgrep.json
dependencies:
- defectdojo_prepare
variables:
DD_SCAN_TYPE: 'SARIF'
DD_REPORT_FILE_NAME: 'semgrep.json'
## dependencycheck ##
dep:dependencycheck:
extends: [.security_parent_job]
image: ${CI_SERVER_HOST}/dependencycheck:master
variables:
DD_SCAN_TYPE: 'Dependency Check Scan'
DD_REPORT_FILE_NAME: 'dependency-check-report.xml'
script:
# FIX IT (credentials to connect to the DB)
- /usr/share/dependency-check/bin/dependency-check.sh --project ${CI_PROJECT_NAME} --out . --scan . -f XML --dbDriverName com.mysql.cj.jdbc.Driver --dbDriverPath /usr/share/dependency-check/plugins/mysql-connector-java-8.0.21.jar --connectionString 'jdbc:mysql://mysql.example.com:3306/dependencycheck?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC' --dbPassword ${DEPCHECK_MYSQL} --dbUser MySQLUser
artifacts:
paths:
- dependency-check-report.xml
rules:
- if: $CI_COMMIT_BRANCH == "Never"
## TODO: add help for a local MySQL cache
## trivy ##
## move to public after 5 september 2021
|
pipeline/security_tools.yml
|
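Every scanner job above writes its findings to `$DD_REPORT_FILE_NAME` for the shared `.dd_upload_report` job to ship to DefectDojo, and most commands end in `|| true`, so a malformed or empty report can slip through silently. A tiny pre-upload sanity check is sketched below (illustrative; the exact report schema varies by scanner and version, so it only tests "is it JSON at all, and non-trivial"):

```python
import json

# Return (top_level_entries, parsed_ok) for a JSON scanner report that is
# either a list of findings or an object wrapping them.
def report_summary(text):
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return (0, False)
    if isinstance(data, (list, dict)):
        return (len(data), True)
    return (0, False)

print(report_summary('[{"rule": "aws-key"}]'))  # (1, True)
print(report_summary("not json"))  # (0, False)
```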
titles:
Art:
- 'How to choose the right colors for your brand'
- 'How to get the most out of NumberNine CMS: Tips for graphic designers and agencies'
- 'Your ultimate guide to designing with background'
- "39 of the most beautiful YouTube banners we've seen"
- "How one nonprofit's redesign boosted its impact"
- 'Man Spends 7 Years Drawing Incredibly Intricate Maze'
- '10 trending color combinations for 2021'
- 'Interior Design For Creative Workspaces'
- 'An Artist Imagines If Bananas Were Movie Character'
- 'Seeing Your Subject From Two Points of View'
Food:
- 'Simple Homemade Tomato Soup'
- 'Smoky Red Lentil Soup with Spinach'
- 'Air Fryer Frozen French Fries'
- 'The BEST Mashed Potatoes'
- 'Buttermilk Ranch Dressing'
- 'Korean Beef Kabobs'
- 'Creamy Honey Mustard Chicken'
- 'Chia Seed Pudding'
- 'Yogurt Marinated Chicken'
Lifestyle:
- 'How to Get a Just-Had-an-Orgasm Glow'
- '4 Print-Forward Outfits to Try Right Now'
- 'New Beauty Obsessions for October'
- 'Ponytail Holder vs. Hair Ties?'
- "I Admit It: I'm Having Fun on TikTok"
- 'A Whole Year With the Hyundai Palisade: A Review'
- 'Burbank, California: The Travel Guide'
- 'Have a Beautiful Weekend.'
- 'A Big, Juicy Round Up of Fall Recommendations'
- '20 Halloween Costumes For Your Baby'
Movie:
- "Here’s How You Can Watch Netflix Libraries of Other Countries"
- 'Beetlejuice: 10 Behind-The-Scenes Facts About The Michael Keaton Movie'
- "Zodiac’s Mark Ruffalo Reacts To The Killer Reportedly Being Found"
- "Rob Zombie Pays Tribute to '<NAME>' in The Munsters Movie Reboot"
- "Keanu Reeves Will Be Inducted Into Canada's Walk of Fame This December"
- '12 Best Original Netflix Movies, Ranked'
- 'YouTube Rewind Canceled After 10 Years'
- "Dragon Ball Super: Vegeta Needs to Get Creative to Match Goku's Power"
- "Spongebob’s Squidward Joins Squid Game in Fan-Made Funko Pop"
- 'Pretty Smart Cast & Character Guide'
- "No Time to Die: What did you think?"
- "<NAME> to star in <NAME>’s new movie as <NAME>"
Music:
- "Does Rock ‘N’ Roll Kill Braincells?! – <NAME>"
- "<NAME> live in London: punk’s poet laureate claims legend status"
- "Soundtrack Of My Life: Steps’ Faye Tozer"
- "Adele, The Kid LAROI, Coldplay x BTS, K<NAME>usgraves & More of the Week's Biggest Winners"
- "More Young Artists Are Exploring Pop-Punk -- And It's Paying Off"
- "How We Listened to Music Over the Last 25 Years"
- '7 New Albums You Should Listen to Now'
- 'The 200 Most Important Artists of Pitchfork’s First 25 Years'
- '<NAME> Joins <NAME> on New Song “Boyz”: Listen'
- 'Tech N9ne Reunites With Lil Wayne On "Too Good"'
- "<NAME>, 50 Cent's Biggest Enemy, May Be Free Soon"
Travel:
- 'More shopping, less sunbathing'
- "Disney introducing paid ‘FastPass’ replacement"
- 'American Express American Airlines Business Extra Corporate Card review'
- 'Basilicas, bagels and other top things to do in Montréal'
- "How does England's new travel system work?"
- "Guide To American Airlines Business Extra Program"
- "Huge: Capital One Miles Now Transfer 1:1 To Most Airlines"
- 'Top 5 Destinations In Mexico To Visit This Winter'
- 'New Low Cost, Low Carbon Train Company Set To Launch In UK This Month'
- "9 Impressive Waterfalls in Mexico Your Don’t Want To Miss"
- "Top 7 Places For A Couples Getaway In The U.S. This Winter"
|
src/Resources/faker/templates/titles.yaml
|
trigger: none
pool:
name: 'MicroBuildV2Pool'
steps:
- task: CmdLine@2
displayName: 'Build vcpkg'
inputs:
script: .\bootstrap-vcpkg.bat
- task: CmdLine@2
displayName: "Build vcpkg with CMake and Run Tests"
inputs:
failOnStderr: true
script: |
.\vcpkg.exe fetch cmake
.\vcpkg.exe fetch ninja
set PATH=D:\downloads\tools\cmake-3.17.2-windows\cmake-3.17.2-win32-x86\bin;D:\downloads\tools\ninja-1.10.0-windows;%PATH%
call "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\VsDevCmd.bat" -arch=x86 -host_arch=x86
cmake.exe -G Ninja -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=ON -DVCPKG_DEVELOPMENT_WARNINGS=ON -DVCPKG_WARNINGS_AS_ERRORS=ON -DVCPKG_BUILD_FUZZING=ON -B "$(Build.StagingDirectory)" -S toolsrc
ninja.exe -C "$(Build.StagingDirectory)"
"$(Build.StagingDirectory)\vcpkg-test.exe"
- task: AntiMalware@3
inputs:
InputType: 'Basic'
ScanType: 'CustomScan'
FileDirPath: '$(Build.StagingDirectory)'
EnableServices: false
SupportLogOnError: false
TreatSignatureUpdateFailureAs: 'Warning'
SignatureFreshness: 'UpToDate'
TreatStaleSignatureAs: 'Error'
- task: APIScan@2
inputs:
softwareFolder: '$(Build.StagingDirectory)'
softwareName: 'vcpkg'
softwareVersionNum: '1.0.0'
softwareBuildNum: '$(Build.BuildId)'
symbolsFolder: '$(Build.StagingDirectory)'
- task: CredScan@3
- task: BinSkim@4
inputs:
InputType: 'Basic'
Function: 'analyze'
TargetPattern: 'guardianGlob'
AnalyzeTargetGlob: '$(Build.StagingDirectory)\vcpkg.exe'
AnalyzeSymPath: '$(Build.StagingDirectory)'
AnalyzeVerbose: true
AnalyzeHashes: true
AnalyzeStatistics: true
- task: PoliCheck@1
inputs:
inputType: 'Basic'
targetType: 'F'
targetArgument: '$(Build.SourcesDirectory)'
result: 'PoliCheck.xml'
optionsFC: '1'
- task: MicroBuildSigningPlugin@2
inputs:
signType: 'real'
feedSource: 'https://devdiv.pkgs.visualstudio.com/DefaultCollection/_packaging/MicroBuildToolset/nuget/v3/index.json'
|
scripts/azure-pipelines/windows/signing.yml
|
apiVersion: v1
kind: Namespace
metadata:
name: timemachine
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path-retain
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: Service
metadata:
name: timemachine-udp
namespace: timemachine
labels:
app: timemachine
spec:
ports:
- port: 137
name: udp137
protocol: UDP
- port: 138
name: udp138
protocol: UDP
type: ClusterIP
selector:
app: timemachine
---
apiVersion: v1
kind: Service
metadata:
name: timemachine-tcp
namespace: timemachine
labels:
app: timemachine
spec:
ports:
- port: 139
name: tcp139
- port: 445
name: tcp445
type: ClusterIP
selector:
app: timemachine
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: timemachine
namespace: timemachine
spec:
serviceName: timemachine-udp
replicas: 1 # Never go > 1!
selector:
matchLabels:
app: timemachine
template:
metadata:
labels:
app: timemachine
spec:
hostNetwork: true # otherwise the auto-discovery does not work
containers:
- name: timemachine
image: mbentley/timemachine:smb
ports:
- containerPort: 137
name: udp137
protocol: UDP
- containerPort: 138
name: udp138
protocol: UDP
- containerPort: 139
name: tcp139
- containerPort: 445
name: tcp445
volumeMounts:
- name: tm
mountPath: /opt/timemachine
- name: var-lib-samba
mountPath: /var/lib/samba
- name: var-cache-samba
mountPath: /var/cache/samba
- name: run-samba
mountPath: /run/samba
env:
- name: ADVERTISED_HOSTNAME
value: ""
- name: CUSTOM_SMB_CONF
value: "false"
- name: CUSTOM_USER
value: "false"
- name: DEBUG_LEVEL
value: "1"
- name: MIMIC_MODEL
value: "TimeCapsule8,119"
- name: EXTERNAL_CONF
value: ""
- name: HIDE_SHARES
value: "no"
- name: TM_USERNAME
value: "timemachine"
- name: TM_GROUPNAME
value: "timemachine"
- name: TM_UID
value: "1000"
- name: TM_GID
value: "1000"
- name: PASSWORD
value: "<PASSWORD>"
- name: SET_PERMISSIONS
value: "false"
- name: SHARE_NAME
value: "TimeMachine"
- name: SMB_INHERIT_PERMISSIONS
value: "no"
- name: SMB_NFS_ACES
value: "yes"
- name: SMB_METADATA
value: "stream"
- name: SMB_PORT
value: "445"
- name: SMB_VFS_OBJECTS
value: "acl_xattr fruit streams_xattr"
- name: VOLUME_SIZE_LIMIT
value: "0"
- name: WORKGROUP
value: "WORKGROUP"
volumes:
- name: var-lib-samba
emptyDir: {}
- name: var-cache-samba
emptyDir: {}
- name: run-samba
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: tm
spec:
storageClassName: local-path-retain
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 500Gi
|
timemachine-k3s.yaml
|
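The StatefulSet above is pinned to `replicas: 1` with a warning never to scale it up: the Samba state on the `tm` volume cannot be shared between pods. Run against the parsed manifest documents (e.g. the output of `yaml.safe_load_all`), a guard like this illustrative sketch can catch an accidental scale-up before `kubectl apply`:

```python
# Flag any StatefulSet in a list of parsed manifests with replicas > 1.
def single_replica_violations(docs):
    bad = []
    for doc in docs:
        if doc and doc.get("kind") == "StatefulSet":
            replicas = doc.get("spec", {}).get("replicas", 1)
            if replicas > 1:
                bad.append((doc["metadata"]["name"], replicas))
    return bad

docs = [
    {"kind": "Namespace", "metadata": {"name": "timemachine"}},
    {"kind": "StatefulSet", "metadata": {"name": "timemachine"},
     "spec": {"replicas": 1}},
]
print(single_replica_violations(docs))  # []
```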
--- !ruby/object:RI::ClassDescription
attributes:
- !ruby/object:RI::Attribute
comment:
name: index
rw: RW
- !ruby/object:RI::Attribute
comment:
name: value
rw: RW
class_methods:
- !ruby/object:RI::MethodSummary
name: "[]"
- !ruby/object:RI::MethodSummary
name: new
comment:
- !ruby/struct:SM::Flow::H
level: 1
text: Association
- !ruby/struct:SM::Flow::P
body: General binary association allows one object to be associated with another. It has a variety of uses, among them linked lists, simple ordered maps, and mixed collections.
- !ruby/struct:SM::Flow::H
level: 2
text: Usage
- !ruby/struct:SM::Flow::P
body: Associations can be used to draw simple relationships.
- !ruby/struct:SM::Flow::VERB
body: " :Apple >> :Fruit\n :Apple >> :Red\n\n :Apple.associations #=> [ :Fruit, :Red ]\n"
- !ruby/struct:SM::Flow::P
body: It can also be used for simple lists of ordered pairs.
- !ruby/struct:SM::Flow::VERB
body: " c = [ :a >> 1, :b >> 2 ]\n c.each { |k,v| puts \"#{k} associated with #{v}\" }\n"
- !ruby/struct:SM::Flow::P
body: produces
- !ruby/struct:SM::Flow::VERB
body: " a associated with 1\n b associated with 2\n"
- !ruby/struct:SM::Flow::H
level: 2
text: Limitations
- !ruby/struct:SM::Flow::P
body: "The method :>> is used to construct the association. It is a rarely used method so it is generally available. But you can't use an Association while extending any of the following classes because they use #>> for other things."
- !ruby/struct:SM::Flow::VERB
body: " Bignum\n Fixnum\n Date\n IPAddr\n Process::Status\n"
constants:
- !ruby/object:RI::Constant
comment:
- !ruby/struct:SM::Flow::P
body: Store association references.
name: REFERENCE
value: Hash.new{ |h,k,v| h[k]=[] }
full_name: Association
includes:
- !ruby/object:RI::IncludedModule
name: Comparable
instance_methods:
- !ruby/object:RI::MethodSummary
name: <=>
- !ruby/object:RI::MethodSummary
name: inspect
- !ruby/object:RI::MethodSummary
name: invert!
- !ruby/object:RI::MethodSummary
name: to_ary
- !ruby/object:RI::MethodSummary
name: to_s
name: Association
superclass: Object
|
gems/gems/facets-2.4.3/doc/ri/Association/cdesc-Association.yaml
|
items:
- name: Bing Entity Search のドキュメント
href: index.yml
- name: 概要
items:
- name: Bing Entity Search とは
href: search-the-web.md
- name: REST API のクイック スタート
items:
- name: C#
href: quickstarts/csharp.md
- name: Java
href: quickstarts/java.md
- name: Node.js
href: quickstarts/nodejs.md
- name: PHP
href: quickstarts/php.md
- name: Python
href: quickstarts/python.md
- name: Ruby
href: quickstarts/ruby.md
expanded: true
- name: SDK のクイック スタート
items:
- name: SDK のサンプルの概要
href: sdk.md
- name: Entity Search C# のクイック スタート
href: entity-search-sdk-quickstart.md
- name: Entity Search Node.js のクイック スタート
href: entity-search-sdk-node-quickstart.md
- name: Entity Search Python のクイック スタート
href: entity-sdk-python-quickstart.md
- name: Entity Search Java のクイック スタート
href: entity-sdk-java-quickstart.md
- name: チュートリアル
items:
- name: 単一ページの Web アプリ
href: tutorial-bing-entities-search-single-page-app.md
- name: 単一ページの Web アプリ (ソース コード)
href: tutorial-bing-entities-search-single-page-app-source.md
- name: サンプル
items:
- name: コード サンプル
href: https://azure.microsoft.com/resources/samples/?sort=0&service=cognitive-services&term=Bing+Entity+Search
- name: ハウツー ガイド
items:
- name: 使用と表示の要件
href: use-display-requirements.md
- name: サムネイルのサイズ変更とトリミング
href: resize-and-crop-thumbnails.md
- name: リファレンス
items:
- name: Bing Entity Search API v7
href: https://docs.microsoft.com/rest/api/cognitiveservices/bing-entities-api-v7-reference
- name: SDK
items:
- name: .NET
href: https://docs.microsoft.com/dotnet/api/overview/azure/cognitiveservices/client/bingentitysearchapi?view=azure-dotnet
- name: Java
href: https://docs.microsoft.com/java/api/overview/azure/cognitiveservices/client/entitysearch?view=azure-java-stable
- name: Python
href: https://docs.microsoft.com/python/api/overview/azure/cognitiveservices/entitysearch?view=azure-python
- name: Node.js
href: https://docs.microsoft.com/javascript/api/overview/azure/cognitiveservices/entitysearch?view=azure-node-latest
- name: Go
href: https://godoc.org/github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/entitysearch
- name: リソース
items:
- name: 価格
href: https://azure.microsoft.com/pricing/details/cognitive-services/bing-entity-search-api/
- name: UserVoice
href: https://cognitive.uservoice.com/forums/601036-bing-entity-search-api
- name: スタック オーバーフロー
href: https://stackoverflow.com/search?q=bing+entity+search
- name: リージョン別の提供状況
href: https://azure.microsoft.com/global-infrastructure/services/
- name: Azure のロードマップ
href: https://azure.microsoft.com/roadmap/
- name: ナレッジ ベース
href: https://cognitive.uservoice.com/knowledgebase/topics/127875-bing-search-web-image-video-news-autosuggest
- name: API キーを取得する
href: https://azure.microsoft.com/try/cognitive-services/?api=bing-entities-search-api
- name: 試してみる
href: https://dev.cognitive.microsoft.com/docs/services/7a3fb374be374859a823b79fd938cc65/
- name: Bing Entity Search エンドポイント
href: entity-search-endpoint.md
- name: Bing Entity Search API 用の分析
href: bing-entity-stats.md
ms.openlocfilehash: a0f92f5b3ef8f0ea24393da1199422b032a57f06
ms.sourcegitcommit: <KEY>
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 12/27/2018
ms.locfileid: "53789545"
|
articles/cognitive-services/Bing-Entities-Search/toc.yml
|
---
'001':
code: '001'
name: 本店
kana: ホンテン
hira: ほんてん
roma: honten
'002':
code: '002'
name: 吉方金融
kana: ヨシカタキンユウ
hira: よしかたきんゆう
roma: yoshikatakinyuu
'003':
code: '003'
name: 城北金融
kana: ジヨウホクキンユウ
hira: じようほくきんゆう
roma: jiyouhokukinyuu
'007':
code: '007'
name: 邑美
kana: オウミ
hira: おうみ
roma: oumi
'009':
code: '009'
name: せんだい
kana: センダイ
hira: せんだい
roma: sendai
'010':
code: '010'
name: 高草
kana: タカクサ
hira: たかくさ
roma: takakusa
'014':
code: '014'
name: 湖南
kana: コナン
hira: こなん
roma: konan
'015':
code: '015'
name: 湖東
kana: コトウ
hira: ことう
roma: kotou
'016':
code: '016'
name: 千代水金融
kana: チヨミキンユウ
hira: ちよみきんゆう
roma: chiyomikinyuu
'028':
code: '028'
name: 鳥取
kana: トツトリ
hira: とつとり
roma: totsutori
'031':
code: '031'
name: 富桑金融
kana: フソウキンユウ
hira: ふそうきんゆう
roma: fusoukinyuu
'054':
code: '054'
name: 国府
kana: コクフ
hira: こくふ
roma: kokufu
'061':
code: '061'
name: 福部
kana: フクベ
hira: ふくべ
roma: fukube
'071':
code: '071'
name: 岩美
kana: イワミ
hira: いわみ
roma: iwami
'082':
code: '082'
name: 宝木金融
kana: ホウギキンユウ
hira: ほうぎきんゆう
roma: hougikinyuu
'085':
code: '085'
name: 気高
kana: ケタカ
hira: けたか
roma: ketaka
'086':
code: '086'
name: 鹿野
kana: シカノ
hira: しかの
roma: shikano
'091':
code: '091'
name: 青谷
kana: アオヤ
hira: あおや
roma: aoya
'101':
code: '101'
name: 郡家
kana: コオゲ
hira: こおげ
roma: kooge
'111':
code: '111'
name: 船岡
kana: フナオカ
hira: ふなおか
roma: funaoka
'121':
code: '121'
name: 河原
kana: カワハラ
hira: かわはら
roma: kawahara
'132':
code: '132'
name: 八東
kana: ハツトウ
hira: はつとう
roma: hatsutou
'133':
code: '133'
name: 丹比金融
kana: タンピキンユウ
hira: たんぴきんゆう
roma: tampikinyuu
'141':
code: '141'
name: 若桜
kana: ワカサ
hira: わかさ
roma: wakasa
'154':
code: '154'
name: 用瀬
kana: モチガセ
hira: もちがせ
roma: mochigase
'155':
code: '155'
name: 佐治
kana: サジ
hira: さじ
roma: saji
'171':
code: '171'
name: 智頭
kana: チズ
hira: ちず
roma: chizu
|
data/branches/7601.yml
|
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
  - pods/log
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: kube-system
- kind: ServiceAccount
name: default
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd
namespace: kube-system
data:
fluent.conf: |
<source>
@type systemd
path /var/log/journal
filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
pos_file /tmp/k8s-kubelet.pos
tag journal.kubelet
read_from_head true
strip_underscores true
</source>
<source>
type tail
      path /var/log/containers/*.log
pos_file /tmp/px-container.log.pos
time_format %Y-%m-%dT%H:%M:%S.%N
tag kubernetes.*
format json
read_from_head true
keep_time_key true
</source>
<filter kubernetes.**>
type kubernetes_metadata
</filter>
<match journal.kubelet.**>
type elasticsearch
log_level info
include_tag_key true
logstash_prefix journal-log
host 127.0.0.1 ## Change this to the Elastic search host.
port 9200 ## Change this to the elastic search port.
logstash_format true
buffer_chunk_limit 2M
buffer_queue_limit 32
flush_interval 60s
max_retry_wait 30
disable_retry_limit
num_threads 8
</match>
<match kubernetes.**>
type elasticsearch
log_level info
include_tag_key false
logstash_prefix k8s
host 127.0.0.1 ## Hostname of the ES cluster.
port 9200
logstash_format true
buffer_chunk_limit 2M
buffer_queue_limit 32
flush_interval 60s
max_retry_wait 30
disable_retry_limit
num_threads 8
</match>
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd
namespace: kube-system
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch
securityContext:
privileged: true
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
- name: posloc
mountPath: /tmp
- name: config
mountPath: /fluentd/etc/fluent.conf
subPath: fluent.conf
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: config
configMap:
name: fluentd
- name: posloc
hostPath:
path: /tmp
|
k8s/demo/efk/fluentd/fluentd-ds.yaml
|
homepage: https://github.com/aviaviavi/curl-runnings#readme
changelog-type: ''
hash: fe2fcf1c04dfd20a93f955e2c6451843a0298700c2cc11acedac843eca74c6f2
test-bench-deps:
base: ! '>=4.7 && <5'
hspec: ! '>=2.4.4'
curl-runnings: -any
hspec-expectations: ! '>=0.8.2'
directory: ! '>=1.3.0.2'
maintainer: <EMAIL>
synopsis: A framework for declaratively writing curl based API tests
changelog: ''
basic-deps:
bytestring: ! '>=0.10.8.2'
case-insensitive: ! '>=0.2.1'
base: ! '>=4.7 && <5'
aeson-pretty: ! '>=0.8.5'
unordered-containers: ! '>=0.2.8.0'
hspec: ! '>=2.4.4'
text: ! '>=1.2.2.2'
megaparsec: ! '>=6.3.0'
cmdargs: ! '>=0.10.20'
http-conduit: ! '>=2.2.4'
http-types: ! '>=0.9.1'
aeson: ! '>=1.2.4.0'
curl-runnings: -any
yaml: ! '>=0.8.28'
vector: ! '>=0.12.0'
hspec-expectations: ! '>=0.8.2'
directory: ! '>=1.3.0.2'
all-versions:
- '0.1.0'
- '0.2.0'
- '0.3.0'
- '0.6.0'
author: Avi Press
latest: '0.6.0'
description-type: markdown
description: ! "# curl-runnings\n\n[](https://travis-ci.org/aviaviavi/curl-runnings)\n\n_Feel
the rhythm! Feel the rhyme! Get on up, it's testing time! curl-runnings!_\n\ncurl-runnings
is a framework for writing declarative, curl based tests for your\nAPIs. Write your
  tests quickly and correctly with a straightforward\nspecification in yaml or json
that can encode simple but powerful matchers\nagainst responses.\n\nAlternatively,
you can use the curl-runnings library to write your tests in\nHaskell (though a
haskell setup is absolutely not required to use this tool).\n\n### Why?\n\nWhen
writing curl based smoke/integration tests for APIs using bash and `curl`\nis very
convenient, but quickly becomes hard to maintain. Writing matchers for\njson output
  quickly becomes unwieldy and error prone. Writing these sorts of\ntests in a more
traditional programming language is fine, but certainly more\ntime consuming to
write than some simple curl requests. curl-runnings aims to\nmake it very easy to
write tests that just hit some endpoints and verify the\noutput looks sane.\n\nNow
you can write your tests just as data in a yaml or json file,\nand curl-runnings
will take care of the rest!\n\nWhile yaml/json is the current way to write curl-runnings
tests, this project is\nbeing built in a way that should lend itself well to an
embedded domain specific\nlanguage, which is a future goal for the project. curl-runnings
specs in Dhall\nis also being developed and may fufill the same needs.\n\n### Installing\n\nThere
  are a few options to install:\n\n- download the releases from the\n  github [releases
page](https://github.com/aviaviavi/curl-runnings/releases)\n- install the binary
with `stack` or `cabal`\n- build from source with `stack`\n\n### Writing a test
  specification\n\nWrite your test specs in a yaml or json file. See /examples to
get\nstarted. A test spec is a top level array of test cases, each item represents
a\nsingle curl and set of assertions about the response.\n\n### Running\n\nOnce
  you've written a spec, simply run it with:\n\n```bash\n$ curl-runnings -f path/to/your/spec.yaml\n```\n\nIf
  all your tests pass, curl-runnings will cleanly exit with a 0 code. A code of\n1
  will be returned if any tests failed.\n\nFor more info:\n\n```bash\n$ curl-runnings
  --help\n```\n\n### Contributing\n\nCurl-runnings is totally usable
now but is also being actively developed.\nContributions in any form are welcome
and encouraged. Don't be shy! :D\n\n### Roadmap\n\n- [x] Json specifications for
tests\n- [x] Yaml specifications for tests\n- [ ] Dhall specifications for tests\n-
[ ] More specification features\n - [x] Reference values from previous json responses
in matchers\n - [x] Environment variable interpolation\n - [ ] Call out to arbitrary
shell commands in and between test cases\n - [ ] Timeouts\n - [ ] Support for
non-json content type\n - [ ] Retry logic\n- [ ] A DSL for writing test specs\n
\ \n"
license-name: MIT
|
packages/cu/curl-runnings.yaml
|
name: 'Ticket management'
description: |-
APIs for managing Ticket
endpoints:
-
httpMethods:
- GET
uri: api/Ticket
metadata:
title: 'display all Ticket'
description: ''
authenticated: true
custom: []
headers:
Authorization: 'Bearer {YOUR_AUTH_KEY}'
Content-Type: application/json
Accept: application/json
urlParameters: []
queryParameters: []
bodyParameters: []
responses:
-
status: 401
content: '{"message":"Unauthenticated."}'
headers:
cache-control: 'no-cache, private'
content-type: application/json
access-control-allow-origin: '*'
description: null
responseFields: []
-
httpMethods:
- POST
uri: api/Ticket
metadata:
title: 'store Ticket'
description: ''
authenticated: true
custom: []
headers:
Authorization: 'Bearer {YOUR_AUTH_KEY}'
Content-Type: application/json
Accept: application/json
urlParameters: []
queryParameters:
status:
name: status
description: ''
required: false
example: quo
type: string
custom: []
price:
name: price
description: ''
required: false
example: ea
type: string
custom: []
bodyParameters:
user_id:
name: user_id
description: ''
required: false
example: similique
type: string
custom: []
influencer_id:
name: influencer_id
description: ''
required: false
example: explicabo
type: string
custom: []
service_id:
name: service_id
description: ''
required: false
example: qui
            type: string
custom: []
payment_method:
name: payment_method
description: ''
required: false
example: voluptates
type: string
custom: []
responses: []
responseFields: []
-
httpMethods:
- GET
uri: 'api/Ticket/{id}'
metadata:
title: 'show Ticket'
description: ''
authenticated: true
custom: []
headers:
Authorization: 'Bearer {YOUR_AUTH_KEY}'
Content-Type: application/json
Accept: application/json
urlParameters:
id:
name: id
description: 'The ID of the Ticket.'
required: true
example: 5
type: integer
custom: []
queryParameters: []
bodyParameters: []
responses:
-
status: 401
content: '{"message":"Unauthenticated."}'
headers:
cache-control: 'no-cache, private'
content-type: application/json
access-control-allow-origin: '*'
description: null
responseFields: []
-
httpMethods:
- PUT
- PATCH
uri: 'api/Ticket/{id}'
metadata:
title: 'update Ticket'
description: ''
authenticated: true
custom: []
headers:
Authorization: 'Bearer {YOUR_AUTH_KEY}'
Content-Type: application/json
Accept: application/json
urlParameters:
id:
name: id
description: 'The ID of the Ticket.'
required: true
example: 1
type: integer
custom: []
queryParameters: []
bodyParameters:
status:
name: status
description: ''
required: false
example: soluta
type: string
custom: []
responses: []
responseFields: []
-
httpMethods:
- DELETE
uri: 'api/Ticket/{id}'
metadata:
title: 'destroy ticket'
description: ''
authenticated: true
custom: []
headers:
Authorization: 'Bearer {YOUR_AUTH_KEY}'
Content-Type: application/json
Accept: application/json
urlParameters:
id:
name: id
description: 'The ID of the Ticket.'
required: true
example: 1
type: integer
custom: []
queryParameters: []
bodyParameters: []
responses: []
responseFields: []
|
.scribe/endpoints.cache/04.yaml
|
name: OperationStatus
uid: '@azure/arm-analysisservices.OperationStatus'
package: '@azure/arm-analysisservices'
summary: ''
fullName: OperationStatus
remarks: ''
isPreview: false
isDeprecated: false
type: interface
properties:
- name: endTime
uid: '@azure/arm-analysisservices.OperationStatus.endTime'
package: '@azure/arm-analysisservices'
summary: ''
fullName: endTime
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'endTime?: undefined | string'
return:
type: undefined | string
description: ''
- name: error
uid: '@azure/arm-analysisservices.OperationStatus.error'
package: '@azure/arm-analysisservices'
summary: ''
fullName: error
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'error?: ErrorResponse'
return:
type: <xref uid="@azure/arm-analysisservices.ErrorResponse" />
description: ''
- name: id
uid: '@azure/arm-analysisservices.OperationStatus.id'
package: '@azure/arm-analysisservices'
summary: ''
fullName: id
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'id?: undefined | string'
return:
type: undefined | string
description: ''
- name: name
uid: '@azure/arm-analysisservices.OperationStatus.name'
package: '@azure/arm-analysisservices'
summary: ''
fullName: name
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'name?: undefined | string'
return:
type: undefined | string
description: ''
- name: startTime
uid: '@azure/arm-analysisservices.OperationStatus.startTime'
package: '@azure/arm-analysisservices'
summary: ''
fullName: startTime
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'startTime?: undefined | string'
return:
type: undefined | string
description: ''
- name: status
uid: '@azure/arm-analysisservices.OperationStatus.status'
package: '@azure/arm-analysisservices'
summary: ''
fullName: status
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'status?: undefined | string'
return:
type: undefined | string
description: ''
|
docs-ref-autogen/@azure/arm-analysisservices/OperationStatus.yml
|
name: CI
on: [ push, pull_request ]
env:
NODE_VERSION: 14.x
jobs:
install:
runs-on: ubuntu-latest
steps:
- name: Begin CI...
uses: actions/checkout@v2
- name: Use Node ${{ env.NODE_VERSION }}
uses: actions/setup-node@v2
with:
node-version: ${{ env.NODE_VERSION }}
- name: Cache node_modules
uses: actions/cache@v2
with:
path: '**/node_modules'
key: ${{ runner.os }}-${{ env.NODE_VERSION }}-modules-${{ hashFiles('**/yarn.lock') }}
- name: Install dependencies
run: yarn install --frozen-lockfile --ignore-scripts
env:
CI: true
lint-code:
runs-on: ubuntu-latest
needs: [ install ]
steps:
- name: Begin CI...
uses: actions/checkout@v2
- name: Cache node_modules
uses: actions/cache@v2
with:
path: '**/node_modules'
key: ${{ runner.os }}-${{ env.NODE_VERSION }}-modules-${{ hashFiles('**/yarn.lock') }}
- name: Lint
run: yarn lint
env:
CI: true
lint-commit-msg:
runs-on: ubuntu-latest
needs: [ install ]
steps:
- name: Begin CI...
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Cache node_modules
uses: actions/cache@v2
with:
path: '**/node_modules'
key: ${{ runner.os }}-${{ env.NODE_VERSION }}-modules-${{ hashFiles('**/yarn.lock') }}
- name: Lint commit message
run: yarn run commitlint --from HEAD~${{ github.event.pull_request.commits }} --to HEAD
env:
CI: true
test:
runs-on: ubuntu-latest
needs: [ install ]
steps:
- name: Begin CI...
uses: actions/checkout@v2
- name: Cache node_modules
uses: actions/cache@v2
with:
path: '**/node_modules'
key: ${{ runner.os }}-${{ env.NODE_VERSION }}-modules-${{ hashFiles('**/yarn.lock') }}
- name: Test
run: yarn test
env:
CI: true
build:
runs-on: ubuntu-latest
needs: [ install ]
steps:
- name: Begin CI...
uses: actions/checkout@v2
- name: Cache node_modules
uses: actions/cache@v2
with:
path: '**/node_modules'
key: ${{ runner.os }}-${{ env.NODE_VERSION }}-modules-${{ hashFiles('**/yarn.lock') }}
- name: Build
run: yarn build
env:
CI: true
|
.github/workflows/ci.yml
|
name: Test Execution
on:
# Triggers the workflow on push & pull request events for the main branch. Also allows for manual triggers
push:
branches: [ main ]
pull_request:
branches: [ main ]
schedule:
    - cron: '0 12 * * *' # Execute every day at noon UTC
workflow_dispatch:
jobs:
test:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-java@v1
with:
java-version: '1.8.0'
- name: Build test infrastructure
working-directory: src/test/resources/docker
run: docker-compose up -d
- name: MySQL Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=mysql test
- name: Archive MySQL test results
uses: actions/upload-artifact@v2
with:
name: mysql-test-results
path: build/spock-reports
- name: MariaDB Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=mariadb test
- name: Archive MariaDB test results
uses: actions/upload-artifact@v2
with:
name: mariadb-test-results
path: build/spock-reports
- name: Postgres Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=postgresql -Dmaven.test.failure.ignore=true test
- name: Archive Postgres test results
uses: actions/upload-artifact@v2
with:
name: postgres-test-results
path: build/spock-reports
- name: H2 Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=h2 test
- name: Archive H2 test results
uses: actions/upload-artifact@v2
with:
name: h2-test-results
path: build/spock-reports
- name: SQLite Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=sqlite -Dmaven.test.failure.ignore=true test
- name: Archive SQLite test results
uses: actions/upload-artifact@v2
with:
name: sqlite-test-results
path: build/spock-reports
- name: MSSQL Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=mssql -Dmaven.test.failure.ignore=true test
- name: Archive MSSQL test results
uses: actions/upload-artifact@v2
with:
name: mssql-test-results
path: build/spock-reports
- name: cockroachDB Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=cockroachdb test
- name: Archive cockroachdb test results
uses: actions/upload-artifact@v2
with:
name: cockroachdb-test-results
path: build/spock-reports
- name: Derby Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=derby test
- name: Archive Derby test results
uses: actions/upload-artifact@v2
with:
name: derby-test-results
path: build/spock-reports
- name: Diff Test Run
run: mvn -Dtest=DiffTest test
- name: Archive Diff test results
uses: actions/upload-artifact@v2
with:
name: diff-test-results
path: build/spock-reports
- name: Tear down test infra
working-directory: src/test/resources/docker
run: docker-compose down --volumes
      - name: Docker login to retrieve EDB images from private repo
run: docker login "${{ secrets.RT_URL }}" -u "${{ secrets.RT_USER }}" -p "${{ secrets.RT_PWD }}"
- name: Build EDB test infra
working-directory: src/test/resources/docker
run: docker-compose -f docker-compose.edb.yml up -d
- name: EDB Test Run
run: mvn -Dtest=LiquibaseHarnessSuiteTest -DdbName=edb test
- name: Archive EDB test results
uses: actions/upload-artifact@v2
with:
name: edb-test-results
path: build/spock-reports
- name: Tear down EDB test infra
working-directory: src/test/resources/docker
run: docker-compose -f docker-compose.edb.yml down --volumes
|
.github/workflows/main.yml
|
name: "\U0001F4A5 Build error"
description: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: [build-error]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue `Installation issue: <name-of-the-package>`.
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce the issue
description: |
Fill in the console output from the exact spec you are trying to build.
value: |
```console
$ spack spec -I <spec>
...
```
- type: textarea
id: error
attributes:
label: Error message
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary><pre>
...
</pre></details>
validations:
required: true
- type: textarea
id: information
attributes:
label: Information on your system
description: Please include the output of `spack debug report`.
validations:
required: true
- type: markdown
attributes:
value: |
If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
- type: textarea
id: additional_information
attributes:
label: Additional information
description: |
Please upload the following files:
* **`spack-build-out.txt`**
* **`spack-build-env.txt`**
They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
- type: markdown
attributes:
value: |
Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and **@mention** them here if they exist.
- type: checkboxes
id: checks
attributes:
label: General information
options:
- label: I have run `spack debug report` and reported the version of Spack/Python/Platform
required: true
- label: I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
required: true
- label: I have uploaded the build log and environment files
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
|
.github/ISSUE_TEMPLATE/build_error.yml
|
title: Telegram
author: appleboy
tags:
- notifications
- chat
logo: telegram.svg
repo: https://github.com/appleboy/drone-telegram
image: https://hub.docker.com/r/appleboy/drone-telegram
license: Apache License 2.0
readme: https://github.com/appleboy/drone-telegram/blob/master/README.md
description: The Telegram plugin posts build status messages to your account.
example: |
kind: pipeline
name: default
steps:
- name: send telegram notification
image: appleboy/drone-telegram
settings:
token: <KEY>
to: telegram_user_id
message_file: message_file.tpl
template_vars:
env: testing
app: MyApp
properties:
token:
type: string
defaultValue: ''
description: telegram token from telegram developer center
secret: true
required: true
to:
type: string
defaultValue: ''
description: telegram user id (can be requested from the @userinfobot inside Telegram)
secret: false
required: true
message:
type: string
defaultValue: ''
description: overwrite the default message template
secret: false
required: false
photo:
type: string
defaultValue: ''
description: local file path
secret: false
required: false
document:
type: string
defaultValue: ''
description: local file path
secret: false
required: false
sticker:
type: string
defaultValue: ''
description: local file path
secret: false
required: false
audio:
type: string
defaultValue: ''
description: local file path
secret: false
required: false
voice:
type: string
defaultValue: ''
description: local file path
secret: false
required: false
location:
type: string
defaultValue: ''
    description: the location, formatted as latitude,longitude
secret: false
required: false
video:
type: string
defaultValue: ''
description: local file path
secret: false
required: false
venue:
type: string
defaultValue: ''
    description: the venue, formatted as latitude,longitude,title,address
secret: false
required: false
format:
type: string
defaultValue: ''
description: markdown or html format
secret: false
required: false
|
plugins/telegram/content.yaml
|
profile: # Your profile, required
name: 'Lucka' # Your name, required
avatar: 'https://s.gravatar.com/avatar/f03d18971cd558e09f51ad19923bf077?s=180' # URL to your avatar, `180x180px` recommended, required
babels: # List of your Babel, optional
- lang: 'zh' # Language code, required
level: 'N' # Babel level, 0~5 or N, required
- lang: 'en'
level: 2
wikis: # List of all wikis, required
- # Please refer to this item for all available fields
url: 'https://wiki.llsif.moe' # API path of the wiki, required
user: 'Lucka' # Your username in the wiki, required
# Fields below are all optional and will override the query results
# Site Info, won't query site info if all are provided
title: 'LoveLive School Idol Festival Wiki' # Title
base: 'https://wiki.llsif.moe/%E9%A6%96%E9%A1%B5' # Homepage URL
logo: 'https://www.llsif.moe/SIF_50MDL.png' # Logo URL
server: 'https://wiki.llsif.moe' # Server URL
articlePath: '/$1' # Article path
# User Info, won't query user info and contribution if all are provided
      # unless forceRefresh is true
uid: 79 # User ID
registration: 1520507202000 # Registration timestamp, UTC in ms
edits: 70 # Edit count
      lastEdit: 1544796913000 # Timestamp of the last edit
# Misc
tags: [ 'lovelive' ] # Tags, will be appended to the generated list
      forceRefresh: false # edits and lastEdit will always be queried if true
  - # This is a special example: the query will be blocked by the browser's
    # same-origin policy on mobile devices, so we fill in all the fields and set
    # forceRefresh to true, making it always query and fall back to the
    # existing values if the query fails.
    # The logo is also 403 so we use an image from Wikimedia.
url: 'https://zh.moegirl.org.cn'
user: '卢卡'
title: '萌娘百科'
base: 'https://zh.moegirl.org.cn/Mainpage'
logo: 'https://upload.wikimedia.org/wikipedia/zh/1/1d/Moegirlpedia_logo(2015).png'
server: 'https://zh.moegirl.org.cn'
articlePath: '/$1'
uid: 280737
registration: 1482294114000
edits: 54
lastEdit: 1606915659000
forceRefresh: true
- url: 'https://thwiki.cc'
user: '卢卡'
tags: [ 'touhou' ]
- url: 'https://zh.wikipedia.org/w'
user: '卢卡'
- url: 'https://wiki.52poke.com'
user: '卢卡'
tags: [ 'pokemon' ]
project: # Configures for the project, will be passed in vue.config.ts
- publicPath: '/wikist/' # Default: /wikist/
|
config.yaml
|
-
  Test with a Finance manager who can only create supplier invoices.
-
!context
uid: 'res_users_account_manager'
-
In order to test account invoice I create a new supplier invoice
-
  I create a Tax Code
-
!record {model: account.tax.code, id: tax_case}:
name: Tax_case
company_id: base.main_company
sign: 1
-
I create a Tax
-
!record {model: account.tax, id: tax10}:
name: Tax 10.0
amount: 10.0
type: fixed
sequence: 1
company_id: base.main_company
type_tax_use: all
tax_code_id: tax_case
-
I create a supplier invoice
-
!record {model: account.invoice, id: account_invoice_supplier0, view: invoice_supplier_form}:
account_id: account.a_pay
check_total: 3000.0
company_id: base.main_company
currency_id: base.EUR
invoice_line:
- account_id: account.a_expense
name: '[PCSC234] PC Assemble SC234'
price_unit: 300.0
product_id: product.product_product_3
quantity: 10.0
uos_id: product.product_uom_unit
invoice_line_tax_id:
- tax10
journal_id: account.expenses_journal
partner_id: base.res_partner_12
reference_type: none
type: in_invoice
-
  I check that initially the supplier invoice state is "Draft"
-
!assert {model: account.invoice, id: account_invoice_supplier0}:
- state == 'draft'
-
I change the state of invoice to open by clicking Validate button
-
!workflow {model: account.invoice, action: invoice_open, ref: account_invoice_supplier0}
-
I check that the invoice state is now "Open"
-
!assert {model: account.invoice, id: account_invoice_supplier0}:
- state == 'open'
-
I verify that account move is created
-
!python {model: account.invoice}: |
move_obj = self.pool.get('account.move')
inv = self.browse(cr, uid, ref('account_invoice_supplier0'))
move = inv.move_id
get_period = move_obj._get_period(cr, uid, {'lang': u'en_US', 'active_model': 'ir.ui.menu',
'active_ids': [ref('menu_action_move_journal_line_form')], 'tz': False, 'active_id': ref('menu_action_move_journal_line_form')})
amt = move_obj._search_amount(cr, uid, move_obj, 'amount', [('amount', '=', 3100.0)], {'lang': u'en_US', 'active_model': 'ir.ui.menu',
'active_ids': [ref('menu_action_move_journal_line_form')], 'tz': False, 'active_id': ref('menu_action_move_journal_line_form')})
ids = amt[0][2]
amt_compute = move_obj._amount_compute(cr, uid, list(ids), 'amount', None, {'lang': u'en_US', 'active_model': 'ir.ui.menu',
'active_ids': [ref('menu_action_move_journal_line_form')], 'tz': False, 'active_id': ref('menu_action_move_journal_line_form')}, where ='')
move_amount = amt_compute.values()
    assert(inv.move_id and move.period_id.id == get_period and move_amount[0] == 3100.0), ('Journal Entries have not been created')
-
  I cancel the account move which is in the posted state and verify that it gives a warning message
-
!python {model: account.move}: |
from openerp.osv import osv
inv_obj = self.pool.get('account.invoice')
inv = inv_obj.browse(cr, uid, ref('account_invoice_supplier0'))
try:
mov_cancel = self.button_cancel(cr, uid, [inv.move_id.id], {'lang': u'en_US', 'tz': False,
'active_model': 'ir.ui.menu', 'journal_type': 'purchase', 'active_ids': [ref('menu_action_invoice_tree2')],
'type': 'in_invoice', 'active_id': ref('menu_action_invoice_tree2')})
assert False, "This should never happen!"
except osv.except_osv:
pass
-
I verify that 'Period Sum' and 'Year sum' of the tax code are the expected values
-
!python {model: account.tax.code}: |
tax_code = self.browse(cr, uid, ref('tax_case'))
    assert(tax_code.sum_period == 100.0 and tax_code.sum == 100.0), "Incorrect 'Period Sum' / 'Year sum': expected 100.0 for both, got period=%r and year=%r" % (tax_code.sum_period, tax_code.sum)
|
web/addons/account/test/account_supplier_invoice.yml
|
parameters:
kunstmaan_pagepart.page_part_configuration_reader.class: 'Kunstmaan\PagePartBundle\PagePartConfigurationReader\PagePartConfigurationReader'
kunstmaan_pagepart.page_part_configuration_parser.class: 'Kunstmaan\PagePartBundle\PagePartConfigurationReader\PagePartConfigurationParser'
kunstmaan_pagepart.page_template_configuration_reader.class: 'Kunstmaan\PagePartBundle\PageTemplate\PageTemplateConfigurationReader'
kunstmaan_pagepart.page_template_configuration_parser.class: 'Kunstmaan\PagePartBundle\PageTemplate\PageTemplateConfigurationParser'
kunstmaan_page_part.page_template.page_template_configuration_service.class: 'Kunstmaan\PagePartBundle\PageTemplate\PageTemplateConfigurationService'
services:
kunstmaan_pagepart.pageparts:
class: 'Kunstmaan\PagePartBundle\PagePartAdmin\Builder'
kunstmaan_page_part.page_part_configuration_reader:
class: '%kunstmaan_pagepart.page_part_configuration_reader.class%'
arguments: [ '@kunstmaan_page_part.page_part_configuration_parser' ]
kunstmaan_page_part.page_part_configuration_parser:
class: '%kunstmaan_pagepart.page_part_configuration_parser.class%'
public: false
arguments: [ '@kernel', '%kunstmaan_page_part.page_parts_presets%' ]
kunstmaan_page_part.page_template_configuration_reader:
class: '%kunstmaan_pagepart.page_template_configuration_reader.class%'
arguments: [ '@kunstmaan_page_part.page_template_configuration_parser' ]
kunstmaan_page_part.page_template_configuration_parser:
class: '%kunstmaan_pagepart.page_template_configuration_parser.class%'
public: false
arguments: [ '@kernel', '%kunstmaan_page_part.page_templates_presets%' ]
kunstmaan_page_part.page_template.page_template_configuration_service:
class: '%kunstmaan_page_part.page_template.page_template_configuration_service.class%'
arguments:
- '@kunstmaan_page_part.repository.page_template_configuration'
- '@kunstmaan_page_part.page_template_configuration_reader'
kunstmaan_page_part.repository.page_template_configuration:
class: 'Kunstmaan\PagePartBundle\Repository\PageTemplateConfigurationRepository'
public: false
factory: [ '@doctrine.orm.entity_manager', 'getRepository' ]
arguments:
- 'KunstmaanPagePartBundle:PageTemplateConfiguration'
kunstmaan_pagepartadmin.factory:
class: 'Kunstmaan\PagePartBundle\PagePartAdmin\PagePartAdminFactory'
arguments: ['@service_container']
kunstmaan_pagepartadmin.twig.extension:
class: 'Kunstmaan\PagePartBundle\Twig\Extension\PagePartAdminTwigExtension'
tags:
- { name: twig.extension }
kunstmaan_pageparts.twig.extension:
class: 'Kunstmaan\PagePartBundle\Twig\Extension\PagePartTwigExtension'
arguments:
- '@doctrine.orm.entity_manager'
tags:
- { name: twig.extension }
kunstmaan_pagetemplate.twig.extension:
class: 'Kunstmaan\PagePartBundle\Twig\Extension\PageTemplateTwigExtension'
arguments:
- '@kunstmaan_page_part.page_template.page_template_configuration_service'
tags:
- { name: twig.extension }
kunstmaan_pageparts.pagepart_creator_service:
class: 'Kunstmaan\PagePartBundle\Helper\Services\PagePartCreatorService'
calls:
- [ setEntityManager, [ '@doctrine.orm.entity_manager' ] ]
kunstmaan_pageparts.edit_node.listener:
class: 'Kunstmaan\PagePartBundle\EventListener\NodeListener'
arguments:
- '@doctrine.orm.entity_manager'
- '@kunstmaan_pagepartadmin.factory'
- '@kunstmaan_page_part.page_template_configuration_reader'
- '@kunstmaan_page_part.page_part_configuration_reader'
- '@kunstmaan_page_part.page_template.page_template_configuration_service'
tags:
- { name: kernel.event_listener, event: kunstmaan_node.adaptForm, method: adaptForm }
kunstmaan_pageparts.clone.listener:
class: 'Kunstmaan\PagePartBundle\EventListener\CloneListener'
arguments:
- '@doctrine.orm.entity_manager'
- '@kunstmaan_page_part.page_part_configuration_reader'
- '@kunstmaan_page_part.page_template.page_template_configuration_service'
tags:
- { name: kernel.event_listener, event: kunstmaan_admin.postDeepCloneAndSave, method: postDeepCloneAndSave }
|
src/Kunstmaan/PagePartBundle/Resources/config/services.yml
|
steps:
- name: ':npm: :docker: Build UI-component base image'
plugins:
docker-compose#v2.3.0:
build: build
image-repository: docker.sendgrid.net/sendgrid
cache-from: build:docker.sendgrid.net/sendgrid/ui-components:latest
- wait
- name: ':docker: Push image to latest tag'
    # Push the image to the latest tag to keep the cache up to date and builds fast
plugins:
docker-compose#v2.3.0:
push:
- build:docker.sendgrid.net/sendgrid/ui-components:latest
- wait
- name: ':jest: Lint and Test'
command:
- echo "--- 🐳 Pulling the docker image"
- export IMAGETAG=$(buildkite-agent meta-data get docker-compose-plugin-built-image-tag-build)
- docker pull \$IMAGETAG
- echo '+++ 🃏 Running Snapshot Testing'
- docker-compose run build bash -c "npm run ci-test"
- name: ':jest: 📸 :storybook: Image Snapshot Testing'
command:
- echo "--- 🐳 Pulling the docker image"
- export IMAGETAG=$(buildkite-agent meta-data get docker-compose-plugin-built-image-tag-build)
- docker pull \$IMAGETAG
- echo "+++ 📸 Running Image Snapshots"
- ./imageSnapshot.sh --ci
artifact_paths:
- 'test_image/__image_snapshots__/__diff_output__/**/*'
- name: ':storybook: Build Storybook'
command:
- echo "--- 🐳 Pulling the docker image"
- export IMAGETAG=$(buildkite-agent meta-data get docker-compose-plugin-built-image-tag-build)
- docker pull \$IMAGETAG
- echo '+++ 📚 Building Storybook'
- docker-compose run build bash -c "npm run build-storybook -o ./docs"
- 'tar -zcvf docs.tar.gz ./docs'
- buildkite-agent artifact upload ./docs.tar.gz
- name: ':typescript: Build UI-Components'
command:
- echo "--- 🐳 Pulling the docker image"
- export IMAGETAG=$(buildkite-agent meta-data get docker-compose-plugin-built-image-tag-build)
- docker pull \$IMAGETAG
- echo '+++ 🔨 Building Typescript and UI-components'
- docker-compose run build bash -c "npm run build"
- 'tar -zcvf builtPackages.tar.gz ./packages'
- buildkite-agent artifact upload ./builtPackages.tar.gz
- wait
- name: ':octocat: Combine build Artifacts and Commit :octocat:'
branches: 'master'
command:
- git fetch
- git checkout ${BUILDKITE_BRANCH}
- git pull
- rm -rf docs/
- mkdir ./docs
- buildkite-agent artifact download builtPackages.tar.gz ./
- buildkite-agent artifact download docs.tar.gz ./
- tar -zxvf builtPackages.tar.gz
- tar -zxvf docs.tar.gz
- git add --all docs && git add --all packages
- git status
- ./.buildkite/determineGitCommitType.sh
- git push origin ${BUILDKITE_BRANCH}
|
.buildkite/buildPipeline.yml
|
name: 'Slack Release Deployment'
description: 'An opinionated Github action that notifies platforms-bot to update Slack based on a particular action.'
author: 'Platforms'
inputs:
action:
description: 'The type of event being performed, for a full list check the platforms-bot readme https://github.com/Footage-Firm/platforms-go-slackbot'
required: true
app_endpoint:
    description: 'Endpoint of the environment that was deployed as part of this pipeline. If you have not exposed an endpoint, use the URL of the git repo where the pipeline is being run.'
required: true
commit_author:
description: 'Full name associated with the commit being deployed. See the git-commit-data-action (https://github.com/rlespinasse/git-commit-data-action) for an option to expose this value.'
required: true
commit_message:
description: 'First line of the commit message associated with the deployment. See the git-commit-data-action (https://github.com/rlespinasse/git-commit-data-action) for an option to expose this value.'
required: true
commit_sha:
description: 'Full git SHA associated with the deployment. See the git-commit-data-action (https://github.com/rlespinasse/git-commit-data-action) for an option to expose this value.'
required: true
platforms_bot_token:
description: 'The token used to authenticate with platforms-bot'
required: true
repo_name:
    description: 'The repo org and name. In most cases you should use ${{ github.repository }}'
required: true
channel:
description: 'The slack channel to post to. If the action is init, this should be the channel name. Otherwise, the channel id.'
required: false
argocd_app:
    description: 'The ArgoCD application to target for rollbacks'
required: false
slack_ts:
    description: 'The timestamp of the slack message to update. This is required by actions other than init.'
required: false
outputs:
slack_ts:
    description: 'The slack timestamp representing the message that was created or updated'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.action }}
- ${{ inputs.app_endpoint }}
- ${{ inputs.commit_author }}
- ${{ inputs.commit_message }}
- ${{ inputs.commit_sha }}
- ${{ inputs.platforms_bot_token }}
- ${{ inputs.repo_name }}
- ${{ inputs.argocd_app }}
- ${{ inputs.channel }}
- ${{ inputs.slack_ts }}
|
action.yml
|
uid: azure.mgmt.automation.models.UpdateConfiguration
name: UpdateConfiguration
fullName: azure.mgmt.automation.models.UpdateConfiguration
module: azure.mgmt.automation.models
inheritances:
- msrest.serialization.Model
summary: 'Update specific properties of the software update configuration.
All required parameters must be populated in order to send to Azure.'
constructor:
syntax: 'UpdateConfiguration(*, operating_system: typing.Union[str, _ForwardRef(''OperatingSystemType'')],
windows: typing.Union[_ForwardRef(''WindowsProperties''), NoneType] = None, linux:
typing.Union[_ForwardRef(''LinuxProperties''), NoneType] = None, duration: typing.Union[datetime.timedelta,
NoneType] = None, azure_virtual_machines: typing.Union[typing.List[str], NoneType]
= None, non_azure_computer_names: typing.Union[typing.List[str], NoneType] = None,
targets: typing.Union[_ForwardRef(''TargetProperties''), NoneType] = None, **kwargs)'
parameters:
- name: operating_system
description: 'Required. operating system of target machines. Possible values
include: "Windows", "Linux".'
types:
- <xref:str>
- <xref:azure.mgmt.automation.models.OperatingSystemType>
- name: windows
description: Windows specific update configuration.
types:
- <xref:azure.mgmt.automation.models.WindowsProperties>
- name: linux
description: Linux specific update configuration.
types:
- <xref:azure.mgmt.automation.models.LinuxProperties>
- name: duration
description: 'Maximum time allowed for the software update configuration run.
Duration needs
to be specified using the format PT[n]H[n]M[n]S as per ISO8601.'
types:
- <xref:datetime.timedelta>
- name: azure_virtual_machines
description: 'List of azure resource Ids for azure virtual machines targeted
by the software update configuration.'
types:
- <xref:list>[<xref:str>]
- name: non_azure_computer_names
description: 'List of names of non-azure machines targeted by the software
update configuration.'
types:
- <xref:list>[<xref:str>]
- name: targets
description: Group targets for the software update configuration.
types:
- <xref:azure.mgmt.automation.models.TargetProperties>
|
preview/docs-ref-autogen/azure-mgmt-automation/azure.mgmt.automation.models.UpdateConfiguration.yml
|
name: privacyidea_authenticator
description: An OTP Authenticator App for privacyIDEA Authentication Server.
homepage: https://netknights.it
repository: https://github.com/privacyidea/pi-authenticator
publish_to: none
# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the build number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number is used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number is used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
version: 4.0.0+0400000 # TODO Set the right version number
# version: major.minor.build + 2x major|2x minor|3x build
# version: version number + build number (optional)
# android: build-name + versionCode
# iOS : CFBundleShortVersionString + CFBundleVersion
environment:
sdk: '>=2.12.0 <3.0.0'
dependencies:
flutter:
sdk: flutter
flutter_localizations:
sdk: flutter
cupertino_icons: ^1.0.4
intl: ^0.17.0
hex: ^0.2.0
base32: ^2.1.1
otp: ^3.0.1
qr_mobile_vision: ^3.0.1
flutter_secure_storage: ^5.0.2
json_annotation: ^4.3.0
flutter_slidable: ^1.1.0
package_info: ^2.0.2
asn1lib: ^1.0.3
flutter_markdown: ^0.6.8
url_launcher: ^6.0.12
catcher: ^0.6.9
uuid: ^3.0.5
http: ^0.13.4
pointycastle: ^3.4.0
flutter_local_notifications: ^9.1.2
mutex: ^3.0.0
flutterlifecyclehooks: ^2.0.0-nullsafety.0
streaming_shared_preferences: ^2.0.0
flutter_svg: ^0.23.0+1
easy_dynamic_theme: ^2.2.0
firebase_messaging: ^11.1.0
firebase_core: ^1.10.0
pi_authenticator_legacy:
path: local_plugins/pi-authenticator-legacy
collection: ^1.15.0
uni_links: ^0.5.1
local_auth: ^1.1.8
dev_dependencies:
flutter_driver:
sdk: flutter
test: any
# dependencies to serialize objects to json
build_runner: any
json_serializable: ^6.0.1
# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec
# The following section is specific to Flutter.
flutter:
# Automatically create files used for localization
generate: true
# The following line ensures that the Material Icons font is
# included with your application, so that you can use the icons in
# the material Icons class.
uses-material-design: true
# To add assets to your application, add an assets section, like this:
# assets:
# - images/a_dot_burr.jpeg
# - images/a_dot_ham.jpeg
assets:
- res/logo/app_logo_light.svg
- CHANGELOG.md
- res/guide/
- res/gif/help_delete_rename.gif
- res/gif/help_manual_poll.gif
# An image asset can refer to one or more resolution-specific "variants", see
# https://flutter.dev/assets-and-images/#resolution-aware.
# For details regarding adding assets from package dependencies, see
# https://flutter.dev/assets-and-images/#from-packages
# To add custom fonts to your application, add a fonts section here,
# in this "flutter" section. Each entry in this list should have a
# "family" key with the font family name, and a "fonts" key with a
# list giving the asset and other descriptors for the font. For
# example:
# fonts:
# - family: Schyler
# fonts:
# - asset: fonts/Schyler-Regular.ttf
# - asset: fonts/Schyler-Italic.ttf
# style: italic
# - family: Trajan Pro
# fonts:
# - asset: fonts/TrajanPro.ttf
# - asset: fonts/TrajanPro_Bold.ttf
# weight: 700
#
# For details regarding fonts from package dependencies,
# see https://flutter.dev/custom-fonts/#from-packages
|
pubspec.yaml
|
nameOverride: ""
fullnameOverride: ""
# Number of Cerbos pods to run
replicaCount: 1
# Container image details
image:
repository: ghcr.io/cerbos/cerbos
pullPolicy: IfNotPresent
# Image tag to use. Defaults to the chart appVersion.
tag: ""
imagePullSecrets: []
initContainers: []
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
# Annotations to add to the pod.
podAnnotations: {}
# Security context for the whole pod.
podSecurityContext: {}
# fsGroup: 2000
# Security context for the Cerbos container.
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# Resource limits for the pod.
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Autoscaling configuration.
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
# Node selector for the pod.
nodeSelector: {}
# Pod tolerations.
tolerations: []
# Pod affinity rules.
affinity: {}
# Volumes to add to the pod.
volumes: []
# Volume mounts to add to the Cerbos container.
volumeMounts: []
# Environment variables to add to the pod.
env: []
# Source environment variables from config maps or secrets.
envFrom: []
# Cerbos service settings.
service:
type: ClusterIP
httpPort: 3592
grpcPort: 3593
# Cerbos deployment settings.
cerbos:
# Port to expose the http service on.
httpPort: 3592
# Port to expose the gRPC service on.
grpcPort: 3593
# Secret containing the TLS certificate.
# Leave empty to disable TLS.
# The secret must contain the following keys:
# - tls.crt: Required. Certificate file contents.
# - tls.key: Required. Private key for the certificate.
# - ca.crt: Optional. CA certificate to add to the trust pool.
tlsSecretName: ""
# Cerbos log level. Valid values are DEBUG, INFO, WARN and ERROR
logLevel: INFO
# Add Prometheus service discovery annotations to the pod.
prometheusPodAnnotationsEnabled: true
# Cerbos config file contents.
# Some server settings like server.httpListenAddr, server.grpcListenAddr, server.tls will be overwritten by the chart based on values provided above.
config: {}
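# A minimal sketch of what could go under `config` (assumption: Cerbos's
# disk storage driver reading policies from a mounted volume; adjust the
# driver and paths to your actual storage backend):
# config:
#   storage:
#     driver: "disk"
#     disk:
#       directory: /policies
#       watchForChanges: true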
|
deploy/charts/cerbos/values.yaml
|
--- !<SKIN>
contentType: "SKIN"
firstIndex: "2018-12-26 03:59"
game: "Unreal Tournament"
name: "MishimaSamurai"
author: "Unknown"
description: "None"
releaseDate: "2008-03"
attachments:
- type: "IMAGE"
name: "MishimaSamurai_shot_6.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_6.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_3.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_3.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_2.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_2.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_4.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_4.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_8.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_8.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_5.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_5.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_7.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_7.png"
- type: "IMAGE"
name: "MishimaSamurai_shot_1.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Skins/M/MishimaSamurai_shot_1.png"
originalFilename: "utmishimasamurai.zip"
hash: "500b207b7d959b8e0071e41cb29ef4f13d64742f"
fileSize: 1111739
files:
- name: "CommandoMishima.utx"
fileSize: 1632748
hash: "7dfed975f459138e6dcb8315129ca592ce570e2f"
otherFiles: 2
dependencies: {}
downloads:
- url: "http://www.ut-files.com/index.php?dir=Skins/SkinsU/&file=utmishimasamurai.zip"
main: false
repack: false
state: "OK"
- url: "https://f002.backblazeb2.com/file/unreal-archive-files/Unreal%20Tournament/Skins/M/utmishimasamurai.zip"
main: true
repack: false
state: "OK"
- url: "http://uttexture.com/UT/Downloads/Skins/Misc/SkinsU/utmishimasamurai.zip"
main: false
repack: false
state: "OK"
- url: "http://medor.no-ip.org/index.php?dir=Skins/&file=utmishimasamurai.zip"
main: false
repack: false
state: "OK"
- url: "https://files.vohzd.com/unrealarchive/Unreal%20Tournament/Skins/M/5/0/0b207b/utmishimasamurai.zip"
main: false
repack: false
state: "OK"
- url: "http://ut-files.com/index.php?dir=Skins/SkinsU/&file=utmishimasamurai.zip"
main: false
repack: false
state: "OK"
- url: "https://unreal-archive-files.eu-central-1.linodeobjects.com/Unreal%20Tournament/Skins/M/5/0/0b207b/utmishimasamurai.zip"
main: false
repack: false
state: "OK"
deleted: false
skins:
- "MishimaSamurai"
faces:
- "Samurai1"
- "Samurai2"
- "Samurai3"
- "Samurai4"
- "Samurai5"
- "Samurai6"
- "Samurai7"
model: "Unknown"
teamSkins: true
|
content/Unreal Tournament/Skins/M/5/0/0b207b/mishimasamurai_[500b207b].yml
|
name: Macos Build
on: [push, pull_request, workflow_dispatch]
concurrency:
group: ci-${{github.workflow}}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: 'macos-latest'
strategy:
fail-fast: false # don't cancel if a job from the matrix fails
matrix:
config: [
sitl,
CubeOrange,
]
steps:
- uses: actions/checkout@v2
with:
submodules: 'recursive'
- name: Install Prerequisites
shell: bash
run: |
if [[ ${{ matrix.config }} == "sitl" ]]; then
export DO_AP_STM_ENV=0
fi
Tools/environment_install/install-prereqs-mac.sh -y
source ~/.bash_profile
# Put ccache into github cache for faster build
- name: Prepare ccache timestamp
id: ccache_cache_timestamp
run: |
NOW=$(date -u +"%F-%T")
echo "::set-output name=timestamp::${NOW}"
- name: ccache cache files
uses: actions/cache@v2
with:
path: ~/.ccache
key: ${{github.workflow}}-ccache-${{matrix.config}}-${{steps.ccache_cache_timestamp.outputs.timestamp}}
restore-keys: ${{github.workflow}}-ccache-${{matrix.config}} # restore ccache from either previous build on this branch or on master
- name: setup ccache
run: |
mkdir -p ~/.ccache
echo "base_dir = ${GITHUB_WORKSPACE}" > ~/.ccache/ccache.conf
echo "compression = true" >> ~/.ccache/ccache.conf
echo "compression_level = 6" >> ~/.ccache/ccache.conf
echo "max_size = 400M" >> ~/.ccache/ccache.conf
ccache -s
ccache -z
- name: test build ${{matrix.config}}
env:
CI_BUILD_TARGET: ${{matrix.config}}
shell: bash
run: |
source ~/.bash_profile
PATH="/github/home/.local/bin:$PATH"
echo $PATH
./waf configure --board ${{matrix.config}}
./waf
ccache -s
ccache -z
|
out/diydrones/ardupilot/.github_workflows_macos_build.yml
|
metadata:
  title: Frequently asked questions about the Azure Cosmos DB API for MongoDB
  description: Get answers to frequently asked questions about the Azure Cosmos DB API for MongoDB.
author: SnehaGunda
ms.service: cosmos-db
ms.subservice: cosmosdb-mongo
ms.topic: conceptual
ms.date: 04/28/2020
ms.author: sngun
ms.openlocfilehash: 9faa129fcfe2d8a1af705b98311ffe19c0934dad
ms.sourcegitcommit: 17345cc21e7b14e3e31cbf920f191875bf3c5914
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 05/19/2021
ms.locfileid: "111907602"
title: Frequently asked questions about the Azure Cosmos DB API for MongoDB
summary: >
  [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
  The Azure Cosmos DB API for MongoDB is a wire-protocol compatibility layer that lets applications communicate easily and transparently with the native Azure Cosmos database engine by using community-supported MongoDB drivers and SDKs. Developers can use existing MongoDB toolchains and skills to build applications that take advantage of Azure Cosmos DB, benefiting from its unique capabilities: global distribution with multi-region write replication, automatic indexing, backup maintenance, financially backed service-level agreements, and more.
sections:
- name: General
questions:
  - question: >
      How do I connect to my database?
    answer: >
      The quickest way to connect to a Cosmos database with the Azure Cosmos DB API for MongoDB is through the [Azure portal](https://portal.azure.com). Go to your account and then, in the left navigation menu, click **Quick start**. Quick start is the best way to get code snippets for connecting to your database.
      Azure Cosmos DB enforces strict security standards and requirements. Azure Cosmos DB accounts require authentication and secure communication via TLS, so you must use TLS v1.2.
      For more information, see [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md).
  - question: >
      Are there error codes specific to the Azure Cosmos DB API for MongoDB?
    answer: >
      In addition to the common MongoDB error codes, the Azure Cosmos DB API for MongoDB has its own error codes. These can be found in the [Troubleshooting guide](mongodb-troubleshoot.md).
  - question: >
      Can I use the Simba driver with the Azure Cosmos DB API for MongoDB?
    answer: >
      Yes, you can use Simba's Mongo ODBC driver with the Azure Cosmos DB API for MongoDB.
  additionalContent: "\n## <a name=\"next-steps\"></a>Next steps\n\n* [Build a .NET web app with the Azure Cosmos DB API for MongoDB](create-mongodb-dotnet.md)\n* [Build a console app with Java and the MongoDB API in Azure Cosmos DB](create-mongodb-java.md)\n"
|
articles/cosmos-db/mongodb-api-faq.yml
|
Transform: AWS::Serverless-2016-10-31
Parameters:
EventBusName:
Type: String
Default: StripeEventBus
Description: (Required) Name of EventBus.
StripeWebhookSecret:
Type: String
Default: wshec_1234
Description: (Required) Your Stripe webhooks secret
StripeSecretKey:
Type: String
Default: <KEY>
Description: (Required) Your Stripe secret key
Resources:
# A Lambda function to receive events from Stripe
EventReciever:
Type: AWS::Serverless::Function
Properties:
Runtime: nodejs12.x
Handler: send-to-event-bus.handler
CodeUri: ./lib
Environment:
Variables:
EVENT_BUS_NAME: !Ref EventBusName
STRIPE_WEBHOOK_SECRET: !Ref StripeWebhookSecret
STRIPE_SECRET_KEY: !Ref StripeSecretKey
Policies:
- Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- events:PutEvents
Resource: '*'
Events:
WebhookEndpoint:
        # Define an API Gateway endpoint that responds to HTTP POST at /webhook
Type: Api
Properties:
Path: /webhook
Method: POST
# An Event Bus for EventReciever to send events to
EventBus:
Type: AWS::Events::EventBus
Properties:
Name: StripeEventBus # Make sure this is the same value as the EventBus defined as EVENT_BUS_NAME
# A Lambda function to process specific events from Stripe
EventProcessor:
Type: AWS::Serverless::Function
Properties:
Runtime: nodejs12.x
Handler: event-processor.handler
CodeUri: ./lib
# An Event Rule to listen for payment_intent.succeeded Stripe event and route it to EventProcessor function
EventListener:
Type: AWS::Events::Rule
Properties:
Description: Handles Stripe events
EventBusName: !GetAtt EventBus.Name
EventPattern: { "source": [ "Stripe" ], "detail-type": [ "payment_intent.succeeded" ] }
Targets:
-
Id: something
Arn: !GetAtt EventProcessor.Arn
# Permission to allow EventListener to send events to EventProcessor
EventProcessorPermission:
Type: AWS::Lambda::Permission
Properties:
Action: lambda:InvokeFunction
FunctionName: !GetAtt EventProcessor.Arn
Principal: events.amazonaws.com
SourceArn: !GetAtt EventListener.Arn
|
template.yaml
|
---
publish_base: &publish_base
image: plugins/ecr
access_key:
from_secret: ecr_access_key
secret_key:
from_secret: ecr_secret_key
registry: 795250896452.dkr.ecr.us-east-1.amazonaws.com
repo: 795250896452.dkr.ecr.us-east-1.amazonaws.com/server-tig/${DRONE_REPO_NAME}
create_repository: true
deploy_webapp_base: &deploy_webapp_base
image: quay.io/mongodb/drone-helm:v3
namespace: server-tig
helm_repos: mongodb=https://10gen.github.io/helm-charts
chart: mongodb/web-app
chart_version: 4.7.5
tiller_ns: server-tig
client_only: true
deploy_crons_base: &deploy_crons_base
image: quay.io/mongodb/drone-helm:v3
namespace: server-tig
helm_repos: mongodb=https://10gen.github.io/helm-charts
chart: mongodb/cronjobs
chart_version: 1.6.2
tiller_ns: server-tig
client_only: true
deploy_staging: &deploy_staging
api_server: https://api.staging.corp.mongodb.com
kubernetes_token:
from_secret: staging_kubernetes_token
deploy_prod: &deploy_prod
api_server: https://api.prod.corp.mongodb.com
kubernetes_token:
from_secret: prod_kubernetes_token
staging_trigger: &staging_trigger
branch: staging
event: push
prod_trigger: &prod_trigger
branch: master
event: push
pipeline:
publish_staging:
<<: *publish_base
tags:
- git-${DRONE_COMMIT_SHA:0:7}
- staging
when: *staging_trigger
deploy_to_staging:
<<: [*deploy_webapp_base, *deploy_staging]
    # The release name should be unique across the namespace; the app or repo name is recommended
release: selected-tests
values: "image.tag=git-${DRONE_COMMIT_SHA:0:7},image.repository=795250896452.dkr.ecr.us-east-1.amazonaws.com/server-tig/${DRONE_REPO_NAME},ingress.enabled=true,ingress.hosts[0]=selected-tests.server-tig.staging.corp.mongodb.com"
values_files: ["environments/staging.yml"]
when: *staging_trigger
publish_prod:
<<: *publish_base
tags:
- git-${DRONE_COMMIT_SHA:0:7}
- latest
when: *prod_trigger
deploy_to_prod:
<<: [*deploy_webapp_base, *deploy_prod]
    # The release name should be unique across the namespace; the app or repo name is recommended
release: selected-tests
values: "image.tag=git-${DRONE_COMMIT_SHA:0:7},image.repository=795250896452.dkr.ecr.us-east-1.amazonaws.com/server-tig/${DRONE_REPO_NAME},ingress.enabled=true,ingress.hosts[0]=selected-tests.server-tig.prod.corp.mongodb.com"
values_files: ["environments/prod.yml"]
when: *prod_trigger
deploy_production_cronjobs:
<<: [*deploy_crons_base, *deploy_prod]
group: deploy
    # The release name should be unique across the namespace; the app or repo name is recommended
release: selected-tests-cronjobs
values: "image.tag=git-${DRONE_COMMIT_SHA:0:7},image.repository=795250896452.dkr.ecr.us-east-1.amazonaws.com/server-tig/${DRONE_REPO_NAME}"
values_files: ["cronjobs.yml"]
when: *prod_trigger
|
.drone.yml
|
name: GitTreeDiff
uid: azure-devops-extension-api.GitTreeDiff
package: azure-devops-extension-api
summary: ''
fullName: GitTreeDiff
remarks: ''
isPreview: false
isDeprecated: false
type: interface
properties:
- name: baseTreeId
uid: azure-devops-extension-api.GitTreeDiff.baseTreeId
package: azure-devops-extension-api
summary: ObjectId of the base tree of this diff.
fullName: baseTreeId
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'baseTreeId: string'
return:
description: ''
type: string
- name: diffEntries
uid: azure-devops-extension-api.GitTreeDiff.diffEntries
package: azure-devops-extension-api
summary: >-
List of tree entries that differ between the base and target tree.
Renames and object type changes are returned as a delete for the old
object and add for the new object. If a continuation token is returned in
the response header, some tree entries are yet to be processed and may
      yield more diff entries. If the continuation token is not returned, all the
diff entries have been included in this response.
fullName: diffEntries
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'diffEntries: GitTreeDiffEntry[]'
return:
description: ''
type: '<xref uid="azure-devops-extension-api.GitTreeDiffEntry" />[]'
- name: targetTreeId
uid: azure-devops-extension-api.GitTreeDiff.targetTreeId
package: azure-devops-extension-api
summary: ObjectId of the target tree of this diff.
fullName: targetTreeId
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'targetTreeId: string'
return:
description: ''
type: string
- name: url
uid: azure-devops-extension-api.GitTreeDiff.url
package: azure-devops-extension-api
summary: REST Url to this resource.
fullName: url
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'url: string'
return:
description: ''
type: string
|
docs-ref-autogen/azure-devops-extension-api/GitTreeDiff.yml
|
# =============================================================================
# NexT Theme configuration
# =============================================================================
# Duoshuo account
duoshuo_shortname: your-duoshuo-shortname
# DISQUS account (this option is skipped if a Duoshuo account is already set)
disqus_shortname: your-disqus-shortname
# JiaThis share service
jiathis: true
# Duoshuo share service (requires Duoshuo to be enabled)
duoshuo_share:
# Social links, shown in the sidebar
social:
GitHub: https://github.com/PuppetK
Twitter:
Weibo: your-weibo-url
DouBan: your-douban-url
ZhiHu: your-zhihu-url
  # etc.
# Creative Commons 4.0 International License.
# http://creativecommons.org/
# Available: by | by-nc | by-nc-nd | by-nc-sa | by-nd | by-sa | zero
creative_commons: by-nc-sa
# Google Webmaster Tools verification; choose the `HTML Meta` verification method
# See: https://www.google.com/webmasters/
google_site_verification: VvyjvVXcJQa0QklHipu6pwm2PJGnnchIqX7s5JbbT_0
# Google Analytics ID
google_analytics:
# Baidu Analytics ID. This is the string after hm.js? in the script provided by Baidu Analytics, not your Baidu Analytics account
baidu_analytics: 50c15455e37f70aea674ff4a663eef27
# Site start year
since: 2011
# =============================================================================
# End NexT Theme configuration
# =============================================================================
theme: next
menu:
home: /
#categories: /categories
#about: /about
archives: /archives
tags: /tags
#commonweal: /404.html
# Place your favicon.ico to /source directory.
favicon: /favicon.ico
# Set default keywords (Use a comma to separate)
keywords: "Hexo,next"
# Set rss to false to disable feed link.
# Leave rss as empty to use site's feed link.
# Set rss to specific value if you have burned your feed already.
rss: false
# Icon fonts
# Place your font into next/source/fonts, specify directory-name and font-name here
# Available: default | linecons | fifty-shades | feather
icon_font: default
#icon_font: fifty-shades
#icon_font: feather
#icon_font: linecons
# Code Highlight theme
# Available value: normal | night | night eighties | night blue | night bright
# https://github.com/chriskempson/tomorrow-theme
highlight_theme: normal
# MathJax Support
mathjax:
# Schemes
scheme: Mist
# Sidebar, available value:
# - post expand on posts automatically. Default.
# - always expand for all pages automatically
# - hide expand only when click on the sidebar toggle icon.
sidebar: post
#sidebar: always
#sidebar: hide
# Automatically scroll page to section which is under <!-- more --> mark.
scroll_to_more: true
# Automatically add list number to toc.
toc_list_number: true
# Automatically Excerpt
auto_excerpt:
enable: false
length: 150
# Use Lato font
# Note: this option is available only when the language is not `zh-Hans`
use_font_lato: true
# Make duoshuo show UA
# user_id must NOT be null when admin_enable is true!
# you can visit http://dev.duoshuo.com to get your duoshuo user id.
duoshuo_info:
ua_enable: true
admin_enable: false
user_id: 0
## DO NOT EDIT THE FOLLOWING SETTINGS
## UNLESS YOU KNOW WHAT YOU ARE DOING
# Use velocity to animate everything.
use_motion: true
# Fancybox
fancybox: true
# Static files
vendors: vendors
css: css
js: js
images: images
# Theme version
version: 0.4.4
|
_config.yml
|
--- !<MAP>
contentType: "MAP"
firstIndex: "2018-10-16 18:53"
game: "Unreal Tournament"
name: "DM-ASMD_Battle]["
author: "Javier \"PetrakeN\" Braceras"
description: "None"
releaseDate: "2005-11"
attachments:
- type: "IMAGE"
name: "DM-ASMD_Battle][_shot_2.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Maps/DeathMatch/A/DM-ASMD_Battle%5D%5B_shot_2.png"
- type: "IMAGE"
name: "DM-ASMD_Battle][_shot_1.png"
url: "https://f002.backblazeb2.com/file/unreal-archive-images/Unreal%20Tournament/Maps/DeathMatch/A/DM-ASMD_Battle%5D%5B_shot_1.png"
originalFilename: "dm-asmd_batlle][.zip"
hash: "9f437b595370660ec91e63ebb926adb42e7a520c"
fileSize: 152629
files:
- name: "DM-ASMD_Battle][.unr"
fileSize: 128666
hash: "b5a44faa7a9fbb10add42bc40a16c2dbb41a60ec"
otherFiles: 2
dependencies: {}
downloads:
- url: "https://f002.backblazeb2.com/file/unreal-archive-files/Unreal%20Tournament/Maps/DeathMatch/A/dm-asmd_batlle%5D%5B.zip"
main: true
repack: false
state: "OK"
- url: "http://www.ut-files.com/index.php?dir=Maps/DeathMatch/MapsA/&file=dm-asmd_batlle%5D%5B.zip"
main: false
repack: false
state: "OK"
- url: "http://medor.no-ip.org/index.php?dir=Maps/DeathMatch&file=dm-asmd_batlle%5D%5B-2.zip"
main: false
repack: false
state: "OK"
- url: "http://medor.no-ip.org/index.php?dir=Maps/DeathMatch&file=dm-asmd_batlle%5D%5B.zip"
main: false
repack: false
state: "OK"
- url: "http://uttexture.com/UT/Downloads/Maps/DeathMatch/MapsA/dm-asmd_batlle%5d%5b.zip"
main: false
repack: false
state: "OK"
- url: "https://files.vohzd.com/unrealarchive/Unreal%20Tournament/Maps/DeathMatch/A/9/f/437b59/dm-asmd_batlle%255D%255B.zip"
main: false
repack: false
state: "OK"
- url: "https://unreal-archive-files.eu-central-1.linodeobjects.com/Unreal%20Tournament/Maps/DeathMatch/A/9/f/437b59/dm-asmd_batlle%255D%255B.zip"
main: false
repack: false
state: "OK"
deleted: false
gametype: "DeathMatch"
title: "ASMD Battle"
playerCount: "2-4"
themes:
Tech: 0.6
Skaarj Tech: 0.4
bots: true
|
content/Unreal Tournament/Maps/DeathMatch/A/9/f/437b59/dm-asmd_battle_[9f437b59].yml
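The `hash` (SHA-1) and `fileSize` fields in the record above let any of the listed mirrors be cross-checked after download. A minimal sketch in Python; the `verify_download` helper and the sample payload are hypothetical, not part of the archive tooling:

```python
import hashlib

def verify_download(data: bytes, expected_sha1: str, expected_size: int) -> bool:
    """Check downloaded bytes against the sha1 and fileSize recorded in the index."""
    if len(data) != expected_size:
        return False
    return hashlib.sha1(data).hexdigest() == expected_sha1

# Hypothetical payload; the real archive is ~150 KB.
payload = b"example archive bytes"
digest = hashlib.sha1(payload).hexdigest()
print(verify_download(payload, digest, len(payload)))  # True
```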
|
backend:
name: github
repo: joaojusto/jose-gomes-landing-page
branch: master
media_folder: "pages/"
collections:
- name: "events"
label: "Eventos"
folder: "Data/events"
create: true
slug: "{{uuid}}"
fields:
- { name: "location", label: "Localização", widget: "string" }
- { name: "title", label: "Nome", widget: "string" }
- { name: "description", label: "Descrição", widget: "text" }
- { name: "descriptionEn", label: "Descrição En", widget: "text" }
- { name: "url", label: "Link", widget: "string", required: false }
- { name: "dateTime", label: "Data", widget: "datetime" }
- name: "news"
label: "Notícias"
folder: "Data/news"
create: true
slug: "{{uuid}}"
fields:
- { name: "url", label: "Link", widget: "string" }
- { name: "title", label: "Título", widget: "string" }
- {
name: "description",
label: "Descrição",
widget: "text",
required: false,
}
- {
name: "descriptionEn",
label: "Descrição En",
widget: "text",
required: false,
}
- { name: "dateTime", label: "Data", widget: "date" }
- name: "languages"
label: "Traduções"
folder: "Data/translations"
create: true
slug: "{{slug}}"
fields:
- {
name: "title",
label: "lingua (en ou pt ou fr etc...)",
widget: "string",
}
- { name: "navbar.agenda", label: "link agenda", widget: "string" }
- { name: "navbar.biography", label: "link biografia", widget: "string" }
- { name: "navbar.news", label: "link noticias", widget: "string" }
- { name: "navbar.gallery", label: "link galeria", widget: "string" }
- { name: "navbar.contact", label: "link contacto", widget: "string" }
- { name: "hero.subtitle", label: "maestro", widget: "string" }
- { name: "hero.title", label: "titulo entrada", widget: "string" }
- { name: "quote", label: "Citaçao", widget: "string" }
- { name: "agenda.title", label: "Título agenda", widget: "string" }
- { name: "news.title", label: "Título noticias", widget: "string" }
- { name: "biography.title", label: "Título biografia", widget: "string" }
- {
name: "biography.excerpt",
label: "Excerto biografia",
widget: "text",
}
- { name: "biography.cta", label: "Botão download", widget: "string" }
- {
name: "biography.file",
label: "PDF, bio_pt.pdf/bio_en.pdf, não te esqueças, tudo em minúsculas :)",
widget: "file",
}
- { name: "galery.title", label: "Título galeria", widget: "string" }
- { name: "contact.title", label: "Título contacto", widget: "string" }
|
public/admin/config.yml
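The `events` and `news` collections above use `slug: "{{uuid}}"`, so each new entry is written to its folder under a random identifier rather than a title-derived slug. A rough sketch of that behavior in Python; `entry_path` is a hypothetical helper, and only the `{{uuid}}` placeholder from this config is handled:

```python
import uuid

def entry_path(folder: str, slug_template: str) -> str:
    # Expand the {{uuid}} placeholder into a fresh random identifier.
    slug = slug_template.replace("{{uuid}}", str(uuid.uuid4()))
    return f"{folder}/{slug}.md"

p = entry_path("Data/events", "{{uuid}}")
print(p)  # e.g. Data/events/1b9d6bcd-....md
```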
|
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: bucket
value: webapp-test-environment
- labels:
env: test
namespace: webapp
type: kubernetes
key: domain
value: example-test.com
- labels:
env: test
namespace: webapp
type: kubernetes
key: webappenvironment
value: test
- labels:
env: test
namespace: webapp
type: kubernetes
key: newrelicbrowserid
value: '25000001'
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: newrelicappname
value: WebAppTest,WebAppTestUS
- labels:
env: test
namespace: webapp
type: kubernetes
key: emailkey
value: some-sendgrid-value
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: pgpassword
value: supersecret
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: pguser
value: webapp
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: pgdatabase
value: webapp
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: pghost
value: something.something.us-east-2.rds.amazonaws.com
- labels:
env: test
namespace: webapp
region: us-east-2
type: kubernetes
key: encryptedkey
value: someSuperlong
- labels:
type: kubernetes
key: newreliclicense
value: somelongsecret
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: bucket
value: webapp-prod-us-east-2
- labels:
env: prod
namespace: webapp
type: kubernetes
key: domain
value: example.com
- labels:
env: prod
namespace: webapp
type: kubernetes
key: webappenvironment
value: prod
- labels:
env: prod
namespace: webapp
type: kubernetes
key: newrelicbrowserid
value: '14000002'
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: newrelicappname
value: WebApp,WebAppUS
- labels:
env: prod
namespace: webapp
type: kubernetes
key: emailkey
value: some-sendgrid-value
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: pgpassword
value: supersecret
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: pguser
value: webapp
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: pgdatabase
value: webapp
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: pghost
value: something.production.us-east-2.rds.amazonaws.com
- labels:
env: prod
namespace: webapp
region: us-east-2
type: kubernetes
key: encryptedkey
value: someSuperlong
|
examples/kv.yaml
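Each entry in the key/value example above is addressed by its `key` plus a set of discriminating `labels` (`env`, `namespace`, `region`, `type`), so the same `key` can carry different values per environment. A minimal label-matching lookup in Python, under the assumption that a query should match when all of its label pairs are present on the entry (the `lookup` helper is hypothetical):

```python
def lookup(entries, key, **labels):
    """Return the value for `key` whose labels contain all given label pairs."""
    for e in entries:
        if e["key"] == key and all(e["labels"].get(k) == v for k, v in labels.items()):
            return e["value"]
    return None

entries = [
    {"labels": {"env": "test", "namespace": "webapp", "type": "kubernetes"},
     "key": "domain", "value": "example-test.com"},
    {"labels": {"env": "prod", "namespace": "webapp", "type": "kubernetes"},
     "key": "domain", "value": "example.com"},
]
print(lookup(entries, "domain", env="prod"))  # example.com
```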
|
uid: "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner"
fullName: "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner"
name: "ApplicationGatewayBackendHealthServerInner"
nameWithType: "ApplicationGatewayBackendHealthServerInner"
summary: "Application gateway backendhealth http settings."
inheritances:
- "<xref href=\"java.lang.Object\" data-throw-if-not-resolved=\"False\" />"
inheritedMembers:
- "java.lang.Object.clone()"
- "java.lang.Object.equals(java.lang.Object)"
- "java.lang.Object.finalize()"
- "java.lang.Object.getClass()"
- "java.lang.Object.hashCode()"
- "java.lang.Object.notify()"
- "java.lang.Object.notifyAll()"
- "java.lang.Object.toString()"
- "java.lang.Object.wait()"
- "java.lang.Object.wait(long)"
- "java.lang.Object.wait(long,int)"
syntax: "public final class ApplicationGatewayBackendHealthServerInner"
constructors:
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.ApplicationGatewayBackendHealthServerInner()"
methods:
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.address()"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.health()"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.healthProbeLog()"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.ipConfiguration()"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.validate()"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.withAddress(java.lang.String)"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.withHealth(com.azure.resourcemanager.network.models.ApplicationGatewayBackendHealthServerHealth)"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.withHealthProbeLog(java.lang.String)"
- "com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.withIpConfiguration(com.azure.resourcemanager.network.fluent.models.NetworkInterfaceIpConfigurationInner)"
type: "class"
metadata: {}
package: "com.azure.resourcemanager.network.fluent.models"
artifact: com.azure.resourcemanager:azure-resourcemanager-network:2.2.0
|
docs-ref-autogen/com.azure.resourcemanager.network.fluent.models.ApplicationGatewayBackendHealthServerInner.yml
|
items:
- uid: '@azure/storage-blob.IBlobDownloadOptions'
name: IBlobDownloadOptions
fullName: IBlobDownloadOptions
children:
- '@azure/storage-blob.IBlobDownloadOptions.blobAccessConditions'
- '@azure/storage-blob.IBlobDownloadOptions.progress'
- '@azure/storage-blob.IBlobDownloadOptions.rangeGetContentMD5'
- '@azure/storage-blob.IBlobDownloadOptions.snapshot'
langs:
- typeScript
type: interface
summary: ''
package: '@azure/storage-blob'
- uid: '@azure/storage-blob.IBlobDownloadOptions.blobAccessConditions'
name: blobAccessConditions
fullName: blobAccessConditions
children: []
langs:
- typeScript
type: property
summary: ''
optional: true
syntax:
content: 'blobAccessConditions?: IBlobAccessConditions'
return:
type:
- '@azure/storage-blob.IBlobAccessConditions'
package: '@azure/storage-blob'
- uid: '@azure/storage-blob.IBlobDownloadOptions.progress'
name: progress
fullName: progress
children: []
langs:
- typeScript
type: property
summary: ''
optional: true
syntax:
content: 'progress?: (progress: TransferProgressEvent) => void'
return:
type:
- '(progress: TransferProgressEvent) => void'
package: '@azure/storage-blob'
- uid: '@azure/storage-blob.IBlobDownloadOptions.rangeGetContentMD5'
name: rangeGetContentMD5
fullName: rangeGetContentMD5
children: []
langs:
- typeScript
type: property
summary: ''
optional: true
syntax:
content: 'rangeGetContentMD5?: boolean'
return:
type:
- boolean
package: '@azure/storage-blob'
- uid: '@azure/storage-blob.IBlobDownloadOptions.snapshot'
name: snapshot
fullName: snapshot
children: []
langs:
- typeScript
type: property
summary: ''
optional: true
syntax:
content: 'snapshot?: string'
return:
type:
- string
package: '@azure/storage-blob'
references:
- uid: '@azure/storage-blob.IBlobAccessConditions'
name: IBlobAccessConditions
spec.typeScript:
- name: IBlobAccessConditions
fullName: IBlobAccessConditions
uid: '@azure/storage-blob.IBlobAccessConditions'
|
preview-packages/docs-ref-autogen/@azure/storage-blob/IBlobDownloadOptions.yml
|
name: Python testing
on: [push, pull_request]
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
max-parallel: 4
matrix:
os: [ubuntu-latest, windows-latest]
python-version: [3.5, 3.6, 3.7, 3.8]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt -r test-requirements.txt
pip install -U coveralls pyyaml
- name: Run test
run: |
coverage run --source=tdclient -m pytest tdclient/test
#
# Disable coverage submission to avoid
# coveralls.exception.CoverallsException: Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
#
# - name: Submit to coveralls
# run: coveralls
# env:
# COVERALLS_REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
test_arm64:
runs-on: ubuntu-latest
strategy:
matrix:
        python-version: [3.8]
fail-fast: false
steps:
- uses: actions/checkout@v2
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
- name: Install and Run tests
run: |
docker run --rm -v ${{ github.workspace }}:/ws:rw --workdir=/ws \
arm64v8/ubuntu:20.04 \
bash -exc 'apt-get update && apt-get -y install python3.8 curl git && \
ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime && export DEBIAN_FRONTEND=noninteractive && apt-get install -y tzdata && dpkg-reconfigure --frontend noninteractive tzdata && \
apt-get -y install software-properties-common && add-apt-repository ppa:deadsnakes/ppa && apt-get -y update && \
apt install -y python3.8-venv && python3.8 -m venv venv38 && source venv38/bin/activate && \
python3.8 -m pip install --upgrade pip && \
python3.8 --version && \
uname -m && \
python3.8 -m pip install --upgrade pip && \
python3.8 -m pip install -r requirements.txt -r test-requirements.txt && \
python3.8 -m pip install -U coveralls pyyaml && \
coverage run --source=tdclient -m pytest tdclient/test && \
deactivate'
|
.github/workflows/pythontest.yml
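The `test` job above expands its `matrix` into one job per OS/Python combination, throttled by `max-parallel: 4`. The expansion is just a cross product, which can be sketched in Python:

```python
from itertools import product

# Values copied from the workflow matrix above.
os_list = ["ubuntu-latest", "windows-latest"]
python_versions = ["3.5", "3.6", "3.7", "3.8"]

jobs = list(product(os_list, python_versions))
print(len(jobs))  # 8 combinations; max-parallel: 4 runs them four at a time
```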
|
items:
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages
id: DefinitionStages
artifact: com.microsoft.azure:azure-mgmt-network:1.36.3
parent: com.microsoft.azure.management.network
children:
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.Blank
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttach
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttachAndPath
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithPathIncluded
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithQueryStringIncluded
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithTarget
- com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithType
langs:
- java
name: ApplicationGatewayRedirectConfiguration.DefinitionStages
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages
type: Interface
package: com.microsoft.azure.management.network
summary: Grouping of application gateway redirect configuration definition stages.
syntax:
content: public static interface ApplicationGatewayRedirectConfiguration.DefinitionStages
references:
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.Blank
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.Blank<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.Blank<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.Blank<ReturnT>
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttach
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttach<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttach<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttach<ReturnT>
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttachAndPath
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttachAndPath<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttachAndPath<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithAttachAndPath<ReturnT>
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithPathIncluded
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithPathIncluded<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithPathIncluded<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithPathIncluded<ReturnT>
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithQueryStringIncluded
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithQueryStringIncluded<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithQueryStringIncluded<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithQueryStringIncluded<ReturnT>
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithTarget
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithTarget<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithTarget<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithTarget<ReturnT>
- uid: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithType
name: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithType<ReturnT>
nameWithType: ApplicationGatewayRedirectConfiguration.DefinitionStages.WithType<ReturnT>
fullName: com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.WithType<ReturnT>
|
docs-ref-autogen/com.microsoft.azure.management.network.ApplicationGatewayRedirectConfiguration.DefinitionStages.yml
|
name: CD
# Controls when the workflow will run
on:
push:
tags: [ ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Run CI workflow before
# workflow_run:
# workflows: ["CI"]
# types:
# - completed
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
get-artifactory:
# if: ${{ github.event.workflow_run.conclusion == 'success' }}
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- uses: jfrog/setup-jfrog-cli@v2
env:
JF_ARTIFACTORY_1: ${{ secrets.ARTIFACTORY_TOKEN }}
    # Runs a single command using the runner's shell
- name: Set env for version name
run: |
echo "GITHASH=$(echo $GITHUB_SHA | cut -c 1-6)" >> $GITHUB_ENV
- name: Get from Artifactory
run: |
jfrog rt ping
jfrog rt dl "default-maven-local/petclinic/petclinic-$GITHASH.jar" petclinic.jar --flat=true
- name: check jar file
run: test -f petclinic.jar
- name: copy file via ssh key
uses: appleboy/scp-action@master
with:
host: cda.cefim-formation.org
username: group2
port: 22
key: ${{ secrets.VM_AWS_CEFIM_KEY }}
source: "petclinic.jar"
target: "/home/group2/"
launch:
# The type of runner that the job will run on
runs-on: ubuntu-latest
needs: get-artifactory
steps:
- name: start service
uses: appleboy/ssh-action@master
with:
host: cda.cefim-formation.org
username: group2
port: 22
key: ${{ secrets.VM_AWS_CEFIM_KEY }}
script: |
sudo systemctl stop petclinic@group2.service
sudo systemctl start petclinic@group2.service
|
.github/workflows/CD.yml
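The "Set env for version name" step above derives a short version tag with `echo $GITHUB_SHA | cut -c 1-6`, which the download step then substitutes into the artifact name. The same truncation in Python, as a sketch (the `short_sha` helper is hypothetical):

```python
def short_sha(github_sha: str, length: int = 6) -> str:
    """Equivalent of `echo $GITHUB_SHA | cut -c 1-6` from the workflow above."""
    return github_sha[:length]

print(short_sha("9f437b595370660ec91e63ebb926adb42e7a520c"))  # 9f437b
```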
|
inheritedMembers:
- com.microsoft.azure.management.resources.fluentcore.arm.models.HasId.id()
- com.microsoft.azure.management.resources.fluentcore.model.HasInner.inner()
- com.microsoft.azure.management.resources.fluentcore.model.Indexable.key()
- com.microsoft.azure.management.resources.fluentcore.arm.models.HasManager.manager()
- com.microsoft.azure.management.resources.fluentcore.arm.models.HasName.name()
- com.microsoft.azure.management.resources.fluentcore.model.Refreshable.refresh()
- com.microsoft.azure.management.resources.fluentcore.model.Refreshable.refreshAsync()
- com.microsoft.azure.management.resources.fluentcore.arm.models.Resource.region()
- com.microsoft.azure.management.resources.fluentcore.arm.models.Resource.regionName()
- com.microsoft.azure.management.resources.fluentcore.arm.models.HasResourceGroup.resourceGroupName()
- com.microsoft.azure.management.resources.fluentcore.arm.models.Resource.tags()
- com.microsoft.azure.management.resources.fluentcore.arm.models.Resource.type()
- com.microsoft.azure.management.resources.fluentcore.model.Updatable.update()
methods:
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.authorizationRules()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.createdAt()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.dnsLabel()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.fqdn()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.queues()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.sku()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.topics()
- com.microsoft.azure.management.servicebus.ServiceBusNamespace.updatedAt()
nameWithType: ServiceBusNamespace
syntax: public interface ServiceBusNamespace extends GroupableResource<ServiceBusManager, NamespaceInner>,Refreshable<ServiceBusNamespace>,Updatable<ServiceBusNamespace.Update>
type: interface
uid: com.microsoft.azure.management.servicebus.ServiceBusNamespace
fullName: com.microsoft.azure.management.servicebus.ServiceBusNamespace
name: ServiceBusNamespace
package: com.microsoft.azure.management.servicebus
summary: <p>An immutable client-side representation of an Azure Service Bus namespace. </p>
metadata: {}
|
legacy/docs-ref-autogen/com.microsoft.azure.management.servicebus.ServiceBusNamespace.yml
|
- company: Gaydio, and Freelance
position: Senior Producer / Researcher (Radio)
duration: Feb 2017 to Present
summary: <ul><li>Coordinated major productions including 'Manchester Together - One Year On', and local, national, and international outside broadcasts for an audience of 500,000+ monthly listeners.</li><li>Devised, led, and created documentaries examining areas of public policy including Section 28, The Gender Recognition Act, as well as examining developing public policy at an EU level in the area of LGBT rights.</li><li>Consulted on a BBC documentary on social policy and the criminal justice system.</li><li>Line management and co-ordination of a large production team (50+ people) including training, mentoring, content development, and overseeing compliance with legal and regulatory frameworks.</li><li>Curated podcast production for English National Ballet, Manchester International Festival, and others.</li></ul>
# Business Consultant
- company: Freelance
position: Business Consultant
duration: Sept 2015 to Present
summary: <ul><li>Writing, to tight deadlines, SEO targeted copy for a range of clients across a variety of industries; from news to PR. Regularly exceeded client expectations, evidenced by a range of regular commissions from repeat clients.</li><li>Prestigious Jerwood Fellowship in digital marketing at Manchester International Festival & English National Ballet. Part of a team which successfully marketed the inaugural, sold-out, run of Akram Khan’s Giselle.</li><li>Providing confidential and secure audio processing support to a range of high-profile businesses (e.g. in the market research and human resources sectors, as well as to government departments).</li></ul>
# PANDA
- company: PANDA
position: Project Manager
duration: Sept 2015 to Mar 2017
summary: <ul><li>Project managed delivery of Creative Breaks, an employability scheme for under and unemployed people looking for experience in the creative industries.</li><li>Successfully delivered a marketing programme promoting Greater Manchester nationally and internationally.</li><li>Organised event promoting Greater Manchester at the European Parliament.</li></ul>
# Year Out
- company: Year Out
position: Career Break
duration: Sept 2014 to Sept 2015
summary: Took time to re-evaluate my career, travel, learn new skills (e.g. photography) and improve my mental health.
# Scouts
- company: The Scout Association
position: County Development Manager & Youth Worker
duration: Jul 2002 to Sept 2014
summary: <ul><li>Responsible for recruitment, induction, and management of 1,500 volunteers across Merseyside in line with national targets.</li><li>Delivered training programmes including media training, safeguarding, first aid, and others.</li><li>Successfully planned and delivered large events for up to 40,000+ participants, including UK participation in high profile international events.</li><li>Planned and managed budgets of up to £1 million including successful bids for grant funding.</li></ul>
# TA
- company: Wigan, Cheshire East, and Warrington Local Authorities
position: Teaching Assistant
  duration: Oct 2005 to Sept 2014
summary: Supporting students at primary and secondary level with additional educational needs.
|
_data/experience.yml
|
---
- name: Inject and merge default settings with postgres_cluster_settings
set_fact:
postgres_cluster_settings: "{{ postgres_cluster_settings_defaults | combine(postgres_cluster_settings) }}"
tags: [always]
- name: Create "postgres" group
group:
name: postgres
state: present
- name: Create postgres User
user:
name: postgres
shell: /bin/bash
groups: postgres
append: yes
- name: Create the directories
file:
path: "{{ item.value }}"
state: directory
mode: 0750
owner: postgres
group: root
with_dict: "{{ postgres_cluster_settings.directories | default({}) }}"
- name: Change ownership of the directories
shell: "chown 50010:50010 {{ item.value }}"
with_dict: "{{ postgres_cluster_settings.directories | default({}) }}"
- name: Create hosts file from template for container
template:
src: hosts.j2
dest: "{{ postgres_cluster_settings.directories.config_path }}/hosts"
# PREPARE
- name: PREPARE | Docker login
shell: "echo {{ ANSIBLE_TRIGGER_TOKEN }} | docker login {{ ANSIBLE_REGISTRY_URL }} -u root --password-stdin"
register: docker_login_result
- name: Create or recreate the first Postgres cluster service Docker container
docker_container:
name: "{{ hostvars[inventory_hostname].pgcluster_node_name }}-{{ ansible_environment }}-{{ ansible_hostname }}"
image: "{{ ansible_global_gitlab_registry_site_name }}/{{ ansible_git_project_path }}/postgres:{{ default_docker_image_environment_tag }}"
hostname: "{{ hostvars[inventory_hostname].pgcluster_node_name }}-{{ ansible_environment }}-{{ ansible_hostname }}"
volumes:
- "{{ postgres_cluster_settings.directories.data_path }}:/data"
- "{{ postgres_cluster_settings.directories.archive_path }}:/archive"
- "{{ postgres_cluster_settings.directories.config_path }}/hosts:/etc/hosts"
ports:
- "{{ postgres_cluster_settings.pg_port }}:{{ postgres_cluster_settings.pg_port }}"
env:
INITIAL_NODE_TYPE: "{{ hostvars[inventory_hostname].psql_init_node_type }}"
NODE_ID: "{{ hostvars[inventory_hostname].psql_init_node_id }}"
NODE_NAME: "{{ hostvars[inventory_hostname].pgcluster_node_name }}"
MSLIST: "myservice"
      # for each micro-service, two DB users are created, e.g. asset_owner and asset_user
MSOWNERPWDLIST: "myservice_owner"
MSUSERPWDLIST: "myservice_user"
REPMGRPWD: <PASSWORD>
REPMGRD_FAILOVER_MODE: manual
REPMGRD_RECONNECT_ATTEMPS: "5"
REPMGRD_INTERVAL: "2"
PGMASTER: "{{ hostvars[inventory_hostname].pg_node_master_name }}"
#
privileged: yes
restart_policy: always
detach: True
recreate: yes
state: started
network_mode: host
user: root
|
ansible/roles/!_databases/pgcluster/postgres/tasks/main.yml
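The first task above relies on Ansible's `combine` filter: keys set in `postgres_cluster_settings` override the corresponding keys in `postgres_cluster_settings_defaults`, while unset keys keep their defaults. With `combine`'s default behavior the merge is shallow, which can be sketched in Python (the variable values are hypothetical):

```python
def combine(defaults: dict, overrides: dict) -> dict:
    """Shallow merge matching the `combine` filter's default behavior:
    keys in `overrides` win over `defaults`; nested dicts are replaced, not merged."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

defaults = {"pg_port": 5432, "directories": {"data_path": "/var/lib/pg"}}
overrides = {"pg_port": 5433}
settings = combine(defaults, overrides)
print(settings["pg_port"])  # 5433
```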
|
---
# Controller Service
kind: Deployment
apiVersion: apps/v1
metadata:
name: nifcloud-storage-csi-controller
spec:
replicas: 1
selector:
matchLabels:
app: nifcloud-storage-csi-controller
template:
metadata:
labels:
app: nifcloud-storage-csi-controller
spec:
nodeSelector:
beta.kubernetes.io/os: linux
serviceAccount: nifcloud-storage-csi-controller-sa
priorityClassName: system-cluster-critical
tolerations:
- key: CriticalAddonsOnly
operator: Exists
containers:
- name: nifcloud-storage-driver
image: aokumasan/nifcloud-additional-storage-csi-driver:latest
        args:
- --endpoint=$(CSI_ENDPOINT)
- --logtostderr
env:
- name: CSI_ENDPOINT
value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
- name: NIFCLOUD_REGION
value: jp-east-1
- name: NIFCLOUD_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: nifcloud-secret
key: access_key_id
- name: NIFCLOUD_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: nifcloud-secret
key: secret_access_key
- name: NIFCLOUD_INSTANCE_ID
value: ""
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
ports:
- name: healthz
containerPort: 9808
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 10
timeoutSeconds: 3
periodSeconds: 10
failureThreshold: 5
- name: csi-provisioner
image: quay.io/k8scsi/csi-provisioner:v1.3.0
args:
- --csi-address=$(ADDRESS)
- --v=5
- --feature-gates=Topology=true
- --enable-leader-election
- --leader-election-type=leases
- --timeout=60s
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-attacher
image: quay.io/k8scsi/csi-attacher:v1.2.0
args:
- --csi-address=$(ADDRESS)
- --v=5
- --timeout=60s
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-resizer
image: quay.io/k8scsi/csi-resizer:v1.0.0
args:
- --csi-address=$(ADDRESS)
- --v=5
- --leader-election
- --timeout=60s
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: liveness-probe
image: quay.io/k8scsi/livenessprobe:v1.1.0
args:
- --csi-address=/csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
volumes:
- name: socket-dir
emptyDir: {}
|
deploy/kubernetes/base/controller.yaml
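The `livenessProbe` on the `nifcloud-storage-driver` container above tolerates `failureThreshold: 5` consecutive failures, probing every `periodSeconds: 10` after an `initialDelaySeconds: 10` grace period. A rough upper bound on how long an unhealthy container survives before restart can be computed as a sketch (the `failure_window` helper is hypothetical, and real timing also depends on `timeoutSeconds` and kubelet scheduling):

```python
def failure_window(initial_delay: int, period: int, failure_threshold: int) -> int:
    """Approximate worst-case seconds before the kubelet restarts the container:
    the probe must fail failure_threshold consecutive times after the initial delay."""
    return initial_delay + period * failure_threshold

# Values from the nifcloud-storage-driver livenessProbe above.
print(failure_window(initial_delay=10, period=10, failure_threshold=5))  # 60
```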
|
server:
host: localhost:8080
contextPath: /aac
# APPLICATION EXTERNAL URL
application:
url: http://localhost:8080/aac
# DB CONFIGURATION
jdbc:
dialect: org.hibernate.dialect.MySQLDialect
driver: com.mysql.jdbc.Driver
url: jdbc:mysql://mysql:3306/aac?autoReconnect=true&useSSL=false
user: ac
password: ac
# OAUTH2 INTEGRATIONS
oauth-providers:
providers:
- provider: facebook
client:
clientId: YOUR_FACEBOOK_CLIENT_ID
clientSecret: YOUR_FACEBOOK_CLIENT_SECRET
accessTokenUri: https://graph.facebook.com/oauth/access_token
userAuthorizationUri: https://www.facebook.com/dialog/oauth
preEstablishedRedirectUri: ${application.url}/auth/facebook-oauth/callback
useCurrentUri: false
tokenName: oauth_token
authenticationScheme: query
clientAuthenticationScheme: form
scope:
- openid
- email
- profile
- provider: google
client:
clientId: YOUR_GOOGLE_CLIENT_ID
clientSecret: YOUR_GOOGLE_CLIENT_SECRET
accessTokenUri: https://www.googleapis.com/oauth2/v3/token
userAuthorizationUri: https://accounts.google.com/o/oauth2/auth
preEstablishedRedirectUri: ${application.url}/auth/google-oauth/callback
useCurrentUri: false
clientAuthenticationScheme: form
scope:
- openid
- email
- profile
resource:
userInfoUri: https://www.googleapis.com/oauth2/v3/userinfo
preferTokenInfo: true
# AAC ADMIN USER PASSWORD
admin:
password: <PASSWORD>
contexts: apimanager, authorization, components
contextSpaces: apimanager/carbon.super
# EMAIL SERVER FOR NOTIFICATIONS
mail:
username: <EMAIL>
password: <PASSWORD>
host: smtp.smartcommunitylab.it
port: 465
protocol: smtps
# SECURITY PROPERTIES
security:
rememberme:
key: REMEMBER_ME_SECRET_KEY
identity: # IDENTITY MAPPING SOURCE FILE
source: file:///path/to/identities.txt
# API-MANAGEMENT PROPERTIES
api:
contextSpace: apimanager
adminClient:
id: API_MGT_CLIENT_ID
secret: YOUR_API_MNGMT_CLIENT_SECRET
internalUrl: http://localhost:8080/aac
store:
endpoint: https://api-manager:9443/api/am/store/v0.11
publisher:
endpoint: https://api-manager:9443/api/am/publisher/v0.11
identity:
endpoint: https://api-manager:9443/services/IdentityApplicationManagementService
password: <PASSWORD>
usermgmt:
endpoint: https://api-manager:9443/services/RemoteUserStoreManagerService
password: <PASSWORD>
multitenancy:
endpoint: https://api-manager:9443/services/TenantMgtAdminService
password: <PASSWORD>
|
conf/aac/application-local.yml
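The OAuth provider entries above build their redirect URIs from a `${application.url}` placeholder, resolved against the `application.url` property defined earlier in the same file. A minimal sketch of that substitution in Python (the `resolve` helper is hypothetical and handles only simple `${...}` references, not defaults or nesting):

```python
import re

def resolve(value: str, props: dict) -> str:
    """Expand ${...} placeholders against a flat property map."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: props[m.group(1)], value)

props = {"application.url": "http://localhost:8080/aac"}
uri = resolve("${application.url}/auth/google-oauth/callback", props)
print(uri)  # http://localhost:8080/aac/auth/google-oauth/callback
```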
|
uid: "com.azure.ai.textanalytics.models.TextAnalyticsActions"
fullName: "com.azure.ai.textanalytics.models.TextAnalyticsActions"
name: "TextAnalyticsActions"
nameWithType: "TextAnalyticsActions"
summary: "The <xref uid=\"com.azure.ai.textanalytics.models.TextAnalyticsActions\" data-throw-if-not-resolved=\"false\" data-raw-source=\"TextAnalyticsActions\"></xref> model."
inheritances:
- "<xref href=\"java.lang.Object\" data-throw-if-not-resolved=\"False\" />"
inheritedMembers:
- "java.lang.Object.clone()"
- "java.lang.Object.equals(java.lang.Object)"
- "java.lang.Object.finalize()"
- "java.lang.Object.getClass()"
- "java.lang.Object.hashCode()"
- "java.lang.Object.notify()"
- "java.lang.Object.notifyAll()"
- "java.lang.Object.toString()"
- "java.lang.Object.wait()"
- "java.lang.Object.wait(long)"
- "java.lang.Object.wait(long,int)"
syntax: "public final class TextAnalyticsActions"
constructors:
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.TextAnalyticsActions()"
methods:
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.getDisplayName()"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.getExtractKeyPhrasesOptions()"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.getRecognizeEntitiesOptions()"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.getRecognizeLinkedEntitiesOptions()"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.getRecognizePiiEntitiesOptions()"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.setDisplayName(java.lang.String)"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.setExtractKeyPhrasesOptions(com.azure.ai.textanalytics.models.ExtractKeyPhrasesOptions...)"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.setRecognizeEntitiesOptions(com.azure.ai.textanalytics.models.RecognizeEntitiesOptions...)"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.setRecognizeLinkedEntitiesOptions(com.azure.ai.textanalytics.models.RecognizeLinkedEntitiesOptions...)"
- "com.azure.ai.textanalytics.models.TextAnalyticsActions.setRecognizePiiEntitiesOptions(com.azure.ai.textanalytics.models.RecognizePiiEntitiesOptions...)"
type: "class"
metadata: {}
package: "com.azure.ai.textanalytics.models"
artifact: com.azure:azure-ai-textanalytics:5.1.0-beta.5
|
preview/docs-ref-autogen/com.azure.ai.textanalytics.models.TextAnalyticsActions.yml
|
project_list:
pattern: /
defaults: { _controller: "FabricaWebsiteBundle:Project:list" }
project_create:
pattern: /create-project
defaults: { _controller: "FabricaWebsiteBundle:Project:create" }
project_newsfeed:
pattern: /projects/{slug}
defaults: { _controller: "FabricaWebsiteBundle:Project:newsfeed" }
project_history:
pattern: /projects/{slug}/history
defaults: { _controller: "FabricaWebsiteBundle:Project:history" }
project_source:
pattern: /projects/{slug}/source
defaults: { _controller: "FabricaWebsiteBundle:Project:source" }
project_branches:
pattern: /projects/{slug}/branches
defaults: { _controller: "FabricaWebsiteBundle:Project:branches" }
project_branchDelete:
pattern: /projects/{slug}/branches/{branch}/delete
defaults: { _controller: "FabricaWebsiteBundle:Project:branchDelete" }
requirements: { _method: POST }
project_tags:
pattern: /projects/{slug}/tags
defaults: { _controller: "FabricaWebsiteBundle:Project:tags" }
project_commit:
pattern: /projects/{slug}/commits/{hash}
defaults: { _controller: "FabricaWebsiteBundle:Project:commit" }
project_tree:
pattern: /projects/{slug}/tree/{revision}/{path}
defaults: { _controller: "FabricaWebsiteBundle:Project:tree", path: "" }
requirements: { path: ".*" }
project_blame:
pattern: /projects/{slug}/blame/{revision}/{path}
defaults: { _controller: "FabricaWebsiteBundle:Project:blame", path: "" }
requirements: { path: ".*" }
project_tree_history:
pattern: /projects/{slug}/tree-history/{revision}/{path}
defaults: { _controller: "FabricaWebsiteBundle:Project:treeHistory", path: "" }
requirements: { path: ".*" }
project_compare:
pattern: /projects/{slug}/compare/{revision}
defaults: { _controller: "FabricaWebsiteBundle:Project:compare" }
project_permissions:
pattern: /projects/{slug}/permissions
defaults: { _controller: "FabricaWebsiteBundle:Project:permissions" }
requirements: { _method: GET }
project_postPermissionsCreateRole:
pattern: /projects/{slug}/permissions/role
defaults: { _controller: "FabricaWebsiteBundle:Project:postPermissionsCreateRole" }
requirements: { _method: POST }
project_postPermissionsDeleteRole:
pattern: /projects/{slug}/permissions/role/{id}/delete
defaults: { _controller: "FabricaWebsiteBundle:Project:postPermissionsDeleteRole" }
requirements: { _method: POST }
project_postPermissionsCreateGitAccess:
pattern: /projects/{slug}/permissions/git-access
defaults: { _controller: "FabricaWebsiteBundle:Project:postPermissionsCreateGitAccess" }
requirements: { _method: POST }
project_postPermissionsDeleteGitAccess:
pattern: /projects/{slug}/permissions/git-access/{id}/delete
defaults: { _controller: "FabricaWebsiteBundle:Project:postPermissionsDeleteGitAccess" }
requirements: { _method: POST }
project_admin:
pattern: /projects/{slug}/admin
defaults: { _controller: "FabricaWebsiteBundle:Project:admin" }
project_delete:
pattern: /projects/{slug}/admin/remove
defaults: { _controller: "FabricaWebsiteBundle:Project:delete" }
|
src/Bundle/WebsiteBundle/Resources/config/routing/project.yml
|
name: CI/CD eShop
on:
push:
branches: [ 'main' ]
pull_request:
branches: [ 'main' ]
env:
build_config: Release
registry_name: lfraileacr.azurecr.io
repository_name: eshop-web
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Test with dotnet
run: dotnet test ./eShopOnWeb.sln --configuration $build_config
- name: Build docker image
uses: docker/build-push-action@v1.1.1
with:
registry: ${{ env.registry_name }}
username: ${{ secrets.ACR_USER_NAME }}
password: ${{ <PASSWORD> }}
repository: ${{ env.repository_name }}
tags: ${{ github.sha }}
path: .
dockerfile: src/Web/Dockerfile
add_git_labels: true
- name: Setup .NET Core
uses: actions/setup-dotnet@v1
with:
dotnet-version: 3.1.402
- name: Install dotnet tools
run: dotnet tool restore
- name: Catalog SQL Script
run: dotnet ef migrations script -c catalogcontext -i -p ./src/Infrastructure/Infrastructure.csproj -s ./src/Web/Web.csproj -o ./scripts/catalog.sql
- name: Identity SQL Script
run: dotnet ef migrations script -c appidentitydbcontext -i -p ./src/Infrastructure/Infrastructure.csproj -s ./src/Web/Web.csproj -o ./scripts/identity.sql
- name: Upload scripts
uses: actions/upload-artifact@v2
with:
name: sql_scripts
path: ./scripts
- name: Upload ARM
uses: actions/upload-artifact@v2
with:
name: arm_template
path: arm
deploy:
if: github.ref == 'refs/heads/main'
needs: build
runs-on: ubuntu-latest
environment: dockerwebapp
env:
RESOURCE_GROUP: NetCore_GIthubCI_CD_RG
steps:
- name: Download arm
uses: actions/download-artifact@v2
with:
name: arm_template
path: arm_template
- name: 'Login via Azure CLI'
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Azure CLI Action
uses: Azure/cli@1.0.4
with:
inlineScript: az group create --location westeurope --resource-group $RESOURCE_GROUP
- name: Deploy Azure Resource Manager (ARM) Template
continue-on-error: true
uses: Azure/arm-deploy@v1
with:
scope: resourcegroup
subscriptionId: '<KEY>'
resourceGroupName: ${{ env.RESOURCE_GROUP }}
template: arm_template/netcore_rg_arm.json
deploymentMode: Incremental
deploymentName: deploy-docker-${{ github.sha }}
parameters: catalogConnstring="${{ secrets.CATALOG_DB_CONNSTRING }}" identityConnstring="${{ secrets.IDENTITY_DB_CONNSTRING }}" sites_netcoregithub_name=netcoreghdck serverfarms_netcoregithubplan_name=netcoregithubplan sqlserver_password=${{ secrets.DB_PASSWORD }} dockerRegistryUrl=${{ env.registry_name }} dockerRegistryUsername=${{ secrets.ACR_USER_NAME }} dockerRegistryPassword=${{ secrets.ACR_PASSWORD }} dockerImage=${{ env.registry_name }}/${{ env.repository_name }}:${{ github.sha }}
deploydb:
if: github.ref == 'refs/heads/main'
needs: deploy
runs-on: windows-latest
environment: database
steps:
- name: Download scripts
uses: actions/download-artifact@v2
with:
name: sql_scripts
path: sql_scripts
- name: 'Login via Azure CLI'
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Azure SQL Deploy
uses: Azure/sql-action@v1
with:
server-name: netcoregithub.database.windows.net
connection-string: ${{ secrets.CATALOG_DB_CONNSTRING }}
sql-file: sql_scripts/catalog.sql
- name: Azure SQL Deploy
uses: Azure/sql-action@v1
with:
server-name: netcoregithub.database.windows.net
connection-string: ${{ secrets.IDENTITY_DB_CONNSTRING }}
sql-file: sql_scripts/identity.sql
|
.github/workflows/ci-cd.yml
|
- name: Docs
href: /
topicHref: /
items:
  - name: Power Platform
    href: /power-platform/
    topicHref: /power-platform/
    items:
    - name: Power BI
      tocHref: /power-bi/
      topicHref: /power-bi/
      items:
      - name: Consumer
        tocHref: /power-bi/consumer/
        topicHref: /power-bi/consumer/index
      - name: Mobile
        tocHref: /power-bi/consumer/mobile/
        topicHref: /power-bi/consumer/mobile/index
      - name: Developer
        tocHref: /power-bi/developer/
        topicHref: /power-bi/developer/index
        items:
        - name: Embedded analytics
          tocHref: /power-bi/developer/embedded/
          topicHref: /power-bi/developer/embedded/
        - name: Power BI visuals
          tocHref: /power-bi/developer/visuals/
          topicHref: /power-bi/developer/visuals/
        - name: Automation
          tocHref: /power-bi/developer/automation/
          topicHref: /power-bi/developer/automation/
      - name: Guided learning
        tocHref: /power-bi/guided-learning/
        topicHref: /power-bi/guided-learning/
      - name: Learning catalog
        tocHref: /power-bi/learning-catalog/
        topicHref: /power-bi/learning-catalog/index
      - name: Power BI Report Server
        tocHref: /power-bi/report-server/
        topicHref: /power-bi/report-server/index
      - name: Guidance
        tocHref: /power-bi/guidance/
        topicHref: /power-bi/guidance/index
      - name: Collaborate, share, and integrate
        tocHref: /power-bi/collaborate-share/
        topicHref: /power-bi/collaborate-share/index
      - name: Create reports and dashboards
        tocHref: /power-bi/create-reports/
        topicHref: /power-bi/create-reports/index
      - name: Connect to data
        tocHref: /power-bi/connect-data/
        topicHref: /power-bi/connect-data/index
      - name: Get started
        tocHref: /power-bi/fundamentals/
        topicHref: /power-bi/fundamentals/index
      - name: Transform, shape, and model data
        tocHref: /power-bi/transform-model/
        topicHref: /power-bi/transform-model/index
|
powerbi-docs-pdf/breadcrumb/toc.yml
|
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
Layout/EmptyLines:
Exclude:
- 'config/environments/development.rb'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: AllowForAlignment, AllowBeforeTrailingComments, ForceEqualSignAlignment.
Layout/ExtraSpacing:
Exclude:
- 'config/environments/production.rb'
# Offense count: 4
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: AllowDoxygenCommentStyle, AllowGemfileRubyComment.
Layout/LeadingCommentSpace:
Exclude:
- 'config/application.rb'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: AllowForAlignment, EnforcedStyleForExponentOperator.
# SupportedStylesForExponentOperator: space, no_space
Layout/SpaceAroundOperators:
Exclude:
- 'config/environments/production.rb'
# Offense count: 2
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: EnforcedStyle, EnforcedStyleForEmptyBrackets.
# SupportedStyles: space, no_space, compact
# SupportedStylesForEmptyBrackets: space, no_space
Layout/SpaceInsideArrayLiteralBrackets:
Exclude:
- 'config/environments/production.rb'
# Offense count: 4
# This cop supports safe autocorrection (--autocorrect).
Layout/SpaceInsidePercentLiteralDelimiters:
Exclude:
- 'Gemfile'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: EnforcedStyle.
# SupportedStyles: final_newline, final_blank_line
Layout/TrailingEmptyLines:
Exclude:
- 'Gemfile'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
Lint/RedundantDirGlobSort:
Exclude:
- 'spec/rails_helper.rb'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
RSpec/EmptyLineAfterSubject:
Exclude:
- 'spec/models/task_spec.rb'
# Offense count: 1
# Configuration parameters: IgnoreSharedExamples.
RSpec/NamedSubject:
Exclude:
- 'spec/models/task_spec.rb'
# Offense count: 2
# Configuration parameters: EnforcedStyle.
# SupportedStyles: slashes, arguments
Rails/FilePath:
Exclude:
- 'spec/rails_helper.rb'
- 'spec/support/committee.rb'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
Style/BlockComments:
Exclude:
- 'spec/spec_helper.rb'
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: EnforcedStyle.
# SupportedStyles: nested, compact
Style/ClassAndModuleChildren:
Exclude:
- 'app/controllers/v1/tasks_controller.rb'
# Offense count: 3
# Configuration parameters: AllowedConstants.
Style/Documentation:
Exclude:
- 'app/controllers/v1/tasks_controller.rb'
- 'config/application.rb'
- 'db/migrate/20220514114514_create_tasks.rb'
# Offense count: 29
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: EnforcedStyle.
# SupportedStyles: always, always_true, never
Style/FrozenStringLiteralComment:
Enabled: false
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
Style/GlobalStdStream:
Exclude:
- 'config/environments/production.rb'
# Offense count: 2
# This cop supports unsafe autocorrection (--autocorrect-all).
# Configuration parameters: SafeForConstants.
Style/RedundantFetchBlock:
Exclude:
- 'config/puma.rb'
# Offense count: 2
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: .
# SupportedStyles: percent, brackets
Style/SymbolArray:
EnforcedStyle: percent
MinSize: 10
# Offense count: 1
# This cop supports safe autocorrection (--autocorrect).
# Configuration parameters: EnforcedStyleForMultiline.
# SupportedStylesForMultiline: comma, consistent_comma, no_comma
Style/TrailingCommaInHashLiteral:
Exclude:
- 'spec/support/committee.rb'
|
.rubocop_todo.yml
|
tosca_definitions_version: tosca_simple_yaml_1_0
imports:
- custom_types/switch.yaml
- custom_types/switchport.yaml
- custom_types/portinterface.yaml
- custom_types/bngportmapping.yaml
- custom_types/attworkflowdriverwhitelistentry.yaml
- custom_types/attworkflowdriverservice.yaml
- custom_types/serviceinstanceattribute.yaml
- custom_types/onosapp.yaml
description: Configures a full SEBA POD
topology_template:
node_templates:
# Fabric configuration
switch#leaf_1:
type: tosca.nodes.Switch
properties:
driver: ofdpa3
ipv4Loopback: 192.168.127.12
ipv4NodeSid: 17
isEdgeRouter: True
name: AGG_SWITCH
ofId: of:0000000000000001
routerMac: 00:00:02:01:06:01
# Setup the OLT switch port
port#olt_port:
type: tosca.nodes.SwitchPort
properties:
portId: 1
host_learning: false
requirements:
- switch:
node: switch#leaf_1
relationship: tosca.relationships.BelongsToOne
# Port connected to the BNG
port#bng_port:
type: tosca.nodes.SwitchPort
properties:
portId: 31
requirements:
- switch:
node: switch#leaf_1
relationship: tosca.relationships.BelongsToOne
# Setup the fabric switch port where the external
# router is connected to
bngmapping:
type: tosca.nodes.BNGPortMapping
properties:
s_tag: any
switch_port: 31
# DHCP L2 Relay config
onos_app#dhcpl2relay:
type: tosca.nodes.ONOSApp
properties:
name: dhcpl2relay
must-exist: true
dhcpl2relay-config-attr:
type: tosca.nodes.ServiceInstanceAttribute
properties:
name: /onos/v1/network/configuration/apps/org.opencord.dhcpl2relay
value: >
{
"dhcpl2relay" : {
"useOltUplinkForServerPktInOut" : false,
"dhcpServerConnectPoints" : [ "of:0000000000000001/31" ]
}
}
requirements:
- service_instance:
node: onos_app#dhcpl2relay
relationship: tosca.relationships.BelongsToOne
|
src/use_cases/seba_on_arm/install/fabric.yaml
|
image: Visual Studio 2017
clone_folder: C:\projects\sheep-qt
shallow_clone: true
# See https://www.appveyor.com/docs/windows-images-software/#qt
environment:
matrix:
#- QTDIR: C:\Qt\5.6.3\mingw49_32
#- QTDIR: C:\Qt\5.6.3\msvc2015
#- QTDIR: C:\Qt\5.6.3\msvc2015_64
#- QTDIR: C:\Qt\5.7\mingw53_32
#- QTDIR: C:\Qt\5.7\msvc2015
#- QTDIR: C:\Qt\5.9\mingw53_32
#- QTDIR: C:\Qt\5.9\msvc2015
#- QTDIR: C:\Qt\5.9\msvc2015_64
#- QTDIR: C:\Qt\5.10\mingw53_32
#- QTDIR: C:\Qt\5.10\msvc2015
#- QTDIR: C:\Qt\5.10\msvc2015_64
#- QTDIR: C:\Qt\5.11\mingw53_32
#- QTDIR: C:\Qt\5.11\msvc2015
#- QTDIR: C:\Qt\5.11\msvc2015_64
#- QTDIR: C:\Qt\5.12.5\mingw73_64
#- QTDIR: C:\Qt\5.12.5\mingw73_32
#- QTDIR: C:\Qt\5.12.5\msvc2015_64
- QTDIR: C:\Qt\5.13.2\mingw73_64
- QTDIR: C:\Qt\5.13.2\mingw73_32
- QTDIR: C:\Qt\5.13.2\msvc2017
- QTDIR: C:\Qt\5.13.2\msvc2017_64
configuration:
#- debug
- release
init:
- git config --global core.autocrlf input
install:
# Setup the build toolchains
- '%QTDIR%\bin\qtenv2.bat'
- qmake -v
- if %QTDIR:_64=%==%QTDIR% ( set ARCH=x86 ) else set ARCH=x64
- if %QTDIR:msvc=%==%QTDIR% g++ --version
- if %QTDIR:msvc=%==%QTDIR% set make=mingw32-make.exe
- if %QTDIR:msvc=%==%QTDIR% %make% --version
- if not %QTDIR:msvc2015=%==%QTDIR% call "%ProgramFiles(x86)%\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" %ARCH%
- if not %QTDIR:msvc2017=%==%QTDIR% call "%ProgramFiles(x86)%\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" %ARCH%
- if not %QTDIR:msvc=%==%QTDIR% set make=nmake.exe
- if not %QTDIR:msvc=%==%QTDIR% %make% /? > nul
before_build:
# Prepare the out-of-source build directory
- cd %APPVEYOR_BUILD_FOLDER%
- qmake SheepSweeper.pro
build_script:
- '%make%'
after_build:
- cd %APPVEYOR_BUILD_FOLDER%
- mkdir SheepSweeper_win_x64
- cd SheepSweeper_win_x64
- xcopy %APPVEYOR_BUILD_FOLDER%\app\release\app.exe %APPVEYOR_BUILD_FOLDER%\SheepSweeper_win_x64\
- windeployqt app.exe
artifacts:
#- path: \release\arsenic-*.exe
- path: SheepSweeper_win_x64
type: zip
|
.appveyor.yml
|
# This manifest installs Nuage VRS on
# each worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: nuage-vrs
namespace: nuage-network-operator
labels:
k8s-app: nuage-vrs
spec:
selector:
matchLabels:
k8s-app: nuage-vrs
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
k8s-app: nuage-vrs
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
- effect: NoSchedule
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
hostNetwork: true
containers:
# This container installs Nuage VRS running as a
# container on each worker node
- name: nuage-vrs
image: "{{.ReleaseConfig.VRSTag}}"
securityContext:
privileged: true
env:
# Configure parameters for VRS openvswitch file
- name: NUAGE_ACTIVE_CONTROLLER
value: "{{index .VRSConfig.Controllers 0}}"
{{if (eq (len .VRSConfig.Controllers) 2)}}
- name: NUAGE_STANDBY_CONTROLLER
value: "{{index .VRSConfig.Controllers 1}}"
{{end}}
- name: NUAGE_PLATFORM
value: "\"{{.VRSConfig.Platform}}\""
- name: NUAGE_K8S_SERVICE_IPV4_SUBNET
value: "{{addEscapeChar .ClusterNetworkConfig.ServiceNetworkCIDR}}"
- name: NUAGE_K8S_POD_NETWORK_CIDR
value: "{{addEscapeChar .ClusterNetworkConfig.ClusterNetworkCIDR}}"
- name: NUAGE_NETWORK_UPLINK_INTF
value: "{{.VRSConfig.UnderlayUplink}}"
volumeMounts:
- mountPath: /var/run
name: vrs-run-dir
- mountPath: /var/log
name: vrs-log-dir
- mountPath: /sys/module
name: sys-mod-dir
readOnly: true
- mountPath: /lib/modules
name: lib-mod-dir
readOnly: true
volumes:
- name: vrs-run-dir
hostPath:
path: /var/run
- name: vrs-log-dir
hostPath:
path: /var/log
- name: sys-mod-dir
hostPath:
path: /sys/module
- name: lib-mod-dir
hostPath:
path: /lib/modules
|
bindata/network/vrs/nuage-vrs-daemonset.yaml
|
uid: management.azure.com.compute.virtualmachinesizes.list
name: List
service: Compute
groupName: Virtual Machine Sizes
apiVersion: 2017-12-01
summary: Lists all available virtual machine sizes for a subscription in a location.
consumes:
- application/json
produces:
- application/json
paths:
- content: GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/vmSizes?api-version=2017-12-01
uriParameters:
- name: subscriptionId
in: path
isRequired: true
    description: Subscription credentials that uniquely identify the Microsoft Azure subscription. The subscription ID forms part of the URI for every service call.
types:
- uid: string
- name: location
in: path
isRequired: true
    description: The location for which available virtual machine sizes are queried.
types:
- uid: string
pattern: ^[-\w\._]+$
- name: api-version
in: query
isRequired: true
description: Client Api Version.
types:
- uid: string
responses:
- name: 200 OK
description: OK
types:
- uid: VirtualMachineSizeListResult
requestHeader: []
definitions:
- name: VirtualMachineSizeListResult
description: The List Virtual Machine operation response.
kind: object
properties:
- name: value
description: The list of virtual machine sizes.
types:
- uid: VirtualMachineSize
isArray: true
- name: VirtualMachineSize
description: Describes the properties of a VM size.
kind: object
properties:
- name: name
description: The name of the virtual machine size.
types:
- uid: string
- name: numberOfCores
description: The number of cores supported by the virtual machine size.
types:
- uid: integer
- name: osDiskSizeInMB
description: The OS disk size, in MB, allowed by the virtual machine size.
types:
- uid: integer
- name: resourceDiskSizeInMB
description: The resource disk size, in MB, allowed by the virtual machine size.
types:
- uid: integer
- name: memoryInMB
description: The amount of memory, in MB, supported by the virtual machine size.
types:
- uid: integer
- name: maxDataDiskCount
description: The maximum number of data disks that can be attached to the virtual machine size.
types:
- uid: integer
examples: []
security:
- type: oauth2
description: Azure Active Directory OAuth2 Flow
flow: implicit
authorizationUrl: https://login.microsoftonline.com/common/oauth2/authorize
scopes:
- name: user_impersonation
description: impersonate your user account
|
docs-ref-autogen/compute/VirtualMachineSizes/List.yml
|
os: linux
dist: bionic
language: go
git:
depth: false
addons:
apt:
packages:
- ruby-dev
- rpm
- build-essential
- git
- libgnome-keyring-dev
- fakeroot
- zip
- upx
go:
- 1.15.x
services:
- docker
install:
- mkdir -p $GOPATH/bin
  # download golangci-lint
- curl -sL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin latest
- rvm install 2.6.0
- rvm 2.6.0 do gem install --no-document fpm
before_script:
# Create your own deploy key, tar it, and encrypt the file to make this work. Optionally add a bitly_token file to the archive.
- openssl aes-256-cbc -K $encrypted_e5c248149abf_key -iv $encrypted_e5c248149abf_iv -in .secret_files.tar.enc -out .secret_files.tar -d
- tar -xf .secret_files.tar
- source .metadata.sh
script:
# Test Go and Docker.
- make test
- make docker
# Test built docker image.
- docker run $BINARY -v 2>&1 | grep -Eq "^$BINARY, version $VERSION"
# Build everything
- rvm 2.6.0 do make release
after_success:
# Display Release Folder
- ls -l release/
  # Set up the SSH client so we can clone and push to the homebrew formula repo.
  # You must put github_deploy_key into .secret_files.tar.enc
  # This is an SSH key added to your homebrew formula repo.
- |
mkdir -p $HOME/.ssh
declare -r SSH_FILE="$(mktemp -u $HOME/.ssh/XXXXX)"
echo -e "Host github.com\n\tStrictHostKeyChecking no\n" >> $HOME/.ssh/config
[ ! -f github_deploy_key ] || (mv github_deploy_key $SSH_FILE \
&& chmod 600 "$SSH_FILE" \
&& printf "%s\n" \
"Host github.com" \
" IdentityFile $SSH_FILE" \
" StrictHostKeyChecking no" \
" LogLevel ERROR" >> $HOME/.ssh/config)
deploy:
- provider: releases
overwrite: true
skip_cleanup: true
cleanup: false
file_glob: true
token:
# to get a secure api key, run: travis setup releases
# make a copy of this file first because that command will change it.
# or: make a new key manually at https://github.com/settings/tokens/new
# then: echo <NEW_KEY_FROM_GH> | travis encrypt
secure: <KEY>
file: release/*
on:
tags: true
- provider: script
script: scripts/formula-deploy.sh
on:
tags: true
|
.travis.yml
|
- name: Confirm kubeadm version
command: "{{ bin_dir }}/kubeadm version -o short"
register: kubeadm_version_output
- name: Set kubeadm API version to v1beta1
set_fact:
kubeadmConfig_api_version: v1beta1
when:
- kubeadm_version_output.stdout is version('v1.13.0', '>=')
- kubeadm_version_output.stdout is version('v1.15.0', '<')
- name: Set kubeadm API version to v1beta2
set_fact:
kubeadmConfig_api_version: v1beta2
when: kubeadm_version_output.stdout is version('v1.15.0', '>=')
- name: Remove unused parameter iptables.max
lineinfile:
path: /etc/kubernetes/kubeadm-config.yaml
regexp: 'max:'
state: absent
- name: Remove unused parameter resourceContainer
lineinfile:
path: /etc/kubernetes/kubeadm-config.yaml
regexp: 'resourceContainer:'
state: absent
- name: Upgrade kubeadm config version
lineinfile:
path: /etc/kubernetes/kubeadm-config.yaml
regexp: '^kubernetesVersion'
line: "kubernetesVersion: {{ kube_upgrade_version }}"
- name: "Migrate kubeadm config to version {{ kube_upgrade_version }}"
shell: >
{{ bin_dir }}/kubeadm config migrate
--old-config=/etc/kubernetes/kubeadm-config.yaml
--new-config=/etc/kubernetes/kubeadm-config.yaml
when: kubeadmConfig_api_version != "v1beta2"
- name: "Upgrade the first master node: {{ inventory_hostname }} to {{ kube_upgrade_version }}"
shell: >
{{ bin_dir }}/kubeadm upgrade apply --config=/etc/kubernetes/kubeadm-config.yaml --force
--ignore-preflight-errors=CoreDNSUnsupportedPlugins,CoreDNSMigration
when: inventory_hostname == groups['kube-master'][0]
- name: "Upgrade the remaining master nodes: {{ inventory_hostname }} to {{ kube_upgrade_version }}"
shell: >
{{ bin_dir }}/kubeadm upgrade node
{% if kube_upgrade_version.split('.')[1]|int == 14 %}
experimental-control-plane
{% endif %}
when:
- inventory_hostname != groups['kube-master'][0]
- inventory_hostname in groups['kube-master']
- name: "Upgrade worker node: {{ inventory_hostname }} to {{ kube_upgrade_version }}"
shell: >
{{ bin_dir }}/kubeadm upgrade node
{% if kube_upgrade_version.split('.')[1]|int == 14 %}
config --kubelet-version {{ kube_upgrade_version }}
{% endif %}
when:
- inventory_hostname in (groups['kube-worker'] + groups['new-worker'])
- inventory_hostname not in groups['kube-master']
|
roles/upgrade/tasks/common.yml
|
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "k8s-image-availability-exporter.fullname" . }}
labels:
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/name: "{{ template "k8s-image-availability-exporter.fullname" . }}"
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/component: monitoring
spec:
replicas: {{ .Values.k8sImageAvailabilityExporter.replicas }}
selector:
matchLabels:
app: {{ template "k8s-image-availability-exporter.fullname" . }}
template:
metadata:
labels:
app: {{ template "k8s-image-availability-exporter.fullname" . }}
spec:
containers:
- name: k8s-image-availability-exporter
{{- if .Values.k8sImageAvailabilityExporter.args }}
args:
{{- range .Values.k8sImageAvailabilityExporter.args }}
- {{ . }}
{{- end }}
{{- end }}
{{- if .Values.k8sImageAvailabilityExporter.env }}
env:
{{- range .Values.k8sImageAvailabilityExporter.env }}
- name: {{ .name }}
value: {{ .value }}
{{- end }}
{{- end }}
ports:
- containerPort: 8080
name: http
image: {{ .Values.k8sImageAvailabilityExporter.image.repository }}:{{ .Values.k8sImageAvailabilityExporter.image.tag }}
imagePullPolicy: {{ .Values.k8sImageAvailabilityExporter.image.imagePullPolicy }}
livenessProbe:
httpGet:
path: /healthz
port: http
scheme: HTTP
readinessProbe:
httpGet:
path: /healthz
port: http
scheme: HTTP
resources:
{{ if .Values.k8sImageAvailabilityExporter.resources }}
{{ .Values.k8sImageAvailabilityExporter.resources | toYaml | indent 10}}
{{ end }}
serviceAccountName: {{ template "k8s-image-availability-exporter.fullname" . }}
|
helm/k8s-image-availability-exporter/templates/deployment.yaml
|
groups:
# Node exporter rules for CPU
- name: Node Exporter CPU rules
rules:
### CPU alerts ###
# CPU High Utilization
- alert: CPU-HighUtilization
expr: >
100 - (avg by (instance, host) (
irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100
) > 85
for: 15m
labels:
severity: warning
team: linux-systems
annotations:
summary: CPU utilization is high.
description: |
CPU utilization is above 85% for the past 15m (host {{ if $labels.host }}{{ $labels.host }}{{ else }}{{ $labels.instance }}{{ end }}).
Current value is {{ printf "%.2f" $value }}%.
# CPU High Saturation
- alert: CPU-HighSaturation
expr: >
node_load1 / count without (cpu, mode) (node_cpu_seconds_total{mode="idle"}) * 100 > 99
for: 15m
labels:
severity: critical
team: linux-systems
annotations:
summary: CPU saturation is high.
description: |
CPU saturation is high for the past 15m (host {{ if $labels.host }}{{ $labels.host }}{{ else }}{{ $labels.instance }}{{ end }}).
Current value is {{ printf "%.2f" $value }}%.
# CPU sys utilization
- alert: CPU-SysUtilization
expr: >
(avg by(instance, host) (irate(node_cpu_seconds_total{mode="system"}[5m])) * 100) > 30
for: 15m
labels:
severity: warning
team: linux-systems
annotations:
summary: CPU %sys utilization is high.
description: |
CPU %sys utilization is high for the past 15m (host {{ if $labels.host }}{{ $labels.host }}{{ else }}{{ $labels.instance }}{{ end }}).
Current value is {{ printf "%.2f" $value }}%.
# CPU iowait utilization
- alert: CPU-IOWaitUtilization
expr: >
(avg by(instance, host) (irate(node_cpu_seconds_total{mode="iowait"}[5m])) * 100) > 30
for: 15m
labels:
severity: warning
team: linux-systems
annotations:
summary: CPU %iowait utilization is high.
description: |
CPU %iowait utilization is high for the past 15m (host {{ if $labels.host }}{{ $labels.host }}{{ else }}{{ $labels.instance }}{{ end }}).
Current value is {{ printf "%.2f" $value }}%.
# CPU steal utilization
- alert: CPU-StealUtilization
expr: >
(avg by(instance, host) (irate(node_cpu_seconds_total{mode="steal"}[5m])) * 100) > 10
for: 20m
labels:
severity: warning
team: linux-systems
annotations:
summary: CPU %steal utilization is high.
description: |
CPU %steal utilization is high for the past 20m (host {{ if $labels.host }}{{ $labels.host }}{{ else }}{{ $labels.instance }}{{ end }}).
Current value is {{ printf "%.2f" $value }}%.
|
prometheus/rules/node_exporter.cpu.rules.yml
|
name: uci-movie
description: This data set contains a list of over 10000 films including many older,
odd, and cult films. There is information on actors, casts, directors, producers,
studios, etc.
homepage: https://archive.ics.uci.edu/ml/datasets/Movie
files:
- name: doc-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/doc.html
- name: movies-data-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/movies.data.html
- name: movies-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/movies.html
- name: questions
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/questions
- name: uciform
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/uciform
- files:
- name: awardtypes-doc
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/Awardtypes.doc
- name: tvchannels-doc
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/TVChannels.doc
- name: actors-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/actors.html
- name: adds-txt
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/adds.txt
- name: awtypes-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/awtypes.html
- name: casts-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/casts.html
- name: keys-txt
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/keys.txt
- name: locales-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/locales.html
- name: main-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/main.html
- name: movies-macros
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/movies.macros
- name: people-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/people.html
- name: queries-sql
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/queries.sql
- name: quotes-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/quotes.html
- name: remakes-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/remakes.html
- name: sayings-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/sayings.html
- name: studios-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/studios.html
- name: synonyms-html
url: https://archive.ics.uci.edu/ml/machine-learning-databases/movies-mld/data/synonyms.html
name: data
keywords:
- Multivariate
- Relational
|
uci-movie.yaml
|
id: bitcoin
name: Bitcoin
alt_names:
- BitCoin
start_date: '2008-08-18'
web:
- 'https://bitcoin.org/'
ledgers:
- id: btc
symbol: BTC
name: Bitcoin
type: blockchain
networks:
- id: main
type: main
name: mainnet
proof_type: PoW
algorithm: SHA256
mineable: true
maximum_block_size: 8388608
target_block_time: 15
tools:
- type: explorer
url: 'https://www.blockchain.com/explorer'
- type: explorer
url: 'https://live.blockcypher.com/btc/'
- id: test
type: test
name: testnet
- id: ethereum-classic
name: Ethereum Classic
symbol: ETC
type: blockchain
forked: true
fork_properties:
parent: ethereum
block: 1920000
time: '2016-07-20T13:20:39Z'
assets:
- id: btc
symbol: BTC
name: Bitcoin
type: native
ledger: bitcoin
max_supply: 21000000
first_distribution:
- title: <NAME>
amount: 10000
- title: <NAME>
amount: 25
ico_properties:
start_date: '2018-01-12'
end_date: '2018-01-31'
total_cap: 21000000
ico_price: 0.1 ETH
raised: 1000000 ETH
country: Japan
denominations:
plural: bitcoins
symbol: ₿
subunits:
- unit: 0.001
name: millibitcoin
- unit: 1.e-8
name: satoshi
- id: bgas
name: Bitcoin Gas
symbol: BGAS
type: token
ledger: ethereum
contract_address: 0x00000000..
decimals: 18
team:
- name: <NAME>
role: founder
contacts:
phone: '+53855349695964'
email: <EMAIL>
'location:hq': 'Prague, Czech Republic'
'location:node': 'London, United Kingdom'
whitepapers:
- title: 'Bitcoin: A Peer-to-Peer Electronic Cash System'
file: whitepaper.pdf
url: 'https://bitcoin.org/bitcoin.pdf'
tags:
- freedom
- 'technology:private'
- 'case:currency'
- 'case:store-of-value'
webids:
facebook: bitcoin
cardanocom: bitcoin
'git_repository:github': bitcoin-core
'forum:bitcointalk': bitcoin
forum: 'http://bitcoin.forum'
partners:
- name: Venture Capital Ltd.
url: 'http://vnc.dd'
history:
- date: '2008-08-18'
title: Domain 'Bitcoin.org' registered
- date: '2008-10-31'
title: <NAME> published Bitcoin whitepaper
description: >-
A link to a paper authored by <NAME> titled 'Bitcoin: A
Peer-to-Peer Electronic Cash System' was posted to a cryptography mailing
list.
- date_approx: '2013'
title: Approx date test
- date_approx: 2013-Q1
title: Approx date test vol.2
resources:
- title: Review AAZZZ
url: 'http://example.xyz'
exchanges:
- id: bitjap
name: Bitjap
platform: ethereum:0x
countries:
- JP
|
examples/models/project.yaml
|
items:
- name: Documentation Service Bus Relay
href: index.yml
- name: Vue d’ensemble
items:
- name: Qu’est-ce que Relay ?
href: relay-what-is-it.md
- name: Forum Aux Questions
href: relay-faq.md
- name: Démarrages rapides
items:
- name: Créer une application hybride locale/dans le cloud
href: service-bus-dotnet-hybrid-app-using-service-bus-relay.md
  - name: Connexions hybrides
items:
- name: Websockets .NET
href: relay-hybrid-connections-dotnet-get-started.md
- name: HTTP .NET
href: relay-hybrid-connections-http-requests-dotnet-get-started.md
- name: Websockets Node
href: relay-hybrid-connections-node-get-started.md
- name: HTTP Node
href: relay-hybrid-connections-http-requests-node-get-started.md
- name: Didacticiel WCF Relay
href: service-bus-relay-tutorial.md
- name: Didacticiel REST pour les relais WCF
href: service-bus-relay-rest-tutorial.md
- name: Procédure
items:
- name: Planifier et concevoir
items:
- name: Authentification et sécurité
href: relay-authentication-and-authorization.md
items:
      - name: Migrer à partir des services ACS vers SAS
href: relay-migrate-acs-sas.md
- name: Protocole Connexions hybrides
href: relay-hybrid-connections-protocol.md
- name: Développement
items:
- name: Créer un espace de noms
href: relay-create-namespace-portal.md
- name: Utiliser WCF Relay pour exposer les services WCF aux clients externes
href: relay-wcf-dotnet-get-started.md
- name: API disponibles
href: relay-api-overview.md
items:
- name: .NET
href: relay-hybrid-connections-dotnet-api-overview.md
- name: Nœud
href: relay-hybrid-connections-node-ws-api-overview.md
  - name: Gérer
items:
- name: Analyser Azure Relay avec la surveillance Azure
href: relay-metrics-azure-monitor.md
- name: Informations de référence
items:
- name: .NET
items:
- name: Microsoft.Azure.Relay
href: /dotnet/api/microsoft.azure.relay
- name: Microsoft.ServiceBus
href: /dotnet/api/Microsoft.ServiceBus
- name: Exceptions
href: relay-exceptions.md
- name: Paramètres de port
href: relay-port-settings.md
- name: Ressources
items:
- name: Feuille de route Azure
href: https://azure.microsoft.com/roadmap/?category=enterprise-integration
- name: Blog
href: https://blogs.msdn.microsoft.com/servicebus/
- name: Tarifs
href: https://azure.microsoft.com/pricing/details/service-bus/
- name: Calculatrice de prix
href: https://azure.microsoft.com/pricing/calculator/
- name: Exemples
href: https://github.com/azure/azure-relay/tree/master/samples
  - name: Stack Overflow
href: http://stackoverflow.com/questions/tagged/azure-servicebusrelay
ms.openlocfilehash: 1c15d984e19f0585f386620f1e6e12667c69da5e
ms.sourcegitcommit: a1140e6b839ad79e454186ee95b01376233a1d1f
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/28/2018
ms.locfileid: "43144800"
|
articles/service-bus-relay/TOC.yml
|
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
data-migrator
SAM Template for performing data migrations with Fauna, the data API for modern applications.
Parameters:
FaunaSecretParameter:
Type: String
Default: fauna-secret
Globals:
Function:
Runtime: nodejs14.x
MemorySize: 512
Timeout: 30
Environment:
Variables:
FAUNA_SECRET_PARAMETER: !Ref FaunaSecretParameter
Resources:
DataMigratorStateMachine:
Type: AWS::Serverless::StateMachine
Properties:
DefinitionUri: statemachine/data_migrator.asl.yaml
DefinitionSubstitutions:
MigrateBatchFunctionArn: !GetAtt MigrateBatchFunction.Arn
NotificationTopic: !Ref NotificationTopic
Policies:
- LambdaInvokePolicy:
FunctionName: !Ref MigrateBatchFunction
- SNSPublishMessagePolicy:
TopicName: !GetAtt NotificationTopic.TopicName
DataPopulatorStateMachine:
Type: AWS::Serverless::StateMachine
Properties:
DefinitionUri: statemachine/data_populator.asl.yaml
DefinitionSubstitutions:
PopulateDataFunctionArn: !GetAtt PopulateDataFunction.Arn
NotificationTopic: !Ref NotificationTopic
Policies:
- LambdaInvokePolicy:
FunctionName: !Ref PopulateDataFunction
- SNSPublishMessagePolicy:
TopicName: !GetAtt NotificationTopic.TopicName
MigrateBatchFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: functions/migrate-batch/
Handler: app.lambdaHandler
Policies:
- SSMParameterReadPolicy:
ParameterName: !Ref FaunaSecretParameter
PopulateDataFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: functions/populate-data/
Handler: app.lambdaHandler
Policies:
- SSMParameterReadPolicy:
ParameterName: !Ref FaunaSecretParameter
NotificationTopic:
Type: AWS::SNS::Topic
Properties:
Subscription:
- Protocol: sqs
Endpoint: !GetAtt MonitoringQueue.Arn
MonitoringQueue:
Type: AWS::SQS::Queue
SnsToSqsPolicy:
Type: AWS::SQS::QueuePolicy
Properties:
PolicyDocument:
Version: "2012-10-17"
Statement:
- Sid: "Allow SNS publish to SQS"
Effect: Allow
Principal:
Service: "sns.amazonaws.com"
Resource: !GetAtt MonitoringQueue.Arn
Action: SQS:SendMessage
Condition:
ArnEquals:
aws:SourceArn: !Ref NotificationTopic
Queues:
- Ref: MonitoringQueue
|
data-migrator/template.yaml
|
name: automerge workflow
on:
pull_request_target:
jobs:
automerge_job:
name: automerge job
runs-on: ubuntu-latest
env:
# append additional users to the USER_ALLOWLIST to allow automerge for them
      USER_ALLOWLIST: |
        [
          "${{github.repository_owner}}",
          "dependabot[bot]"
        ]
steps:
- name: Print Context Info
run: |
echo "event name:" ${{github.event_name}}
echo "event action:" ${{github.event.action}}
echo "actor:" ${{github.actor}}
echo "repository owner:" ${{github.repository_owner}}
echo "pull request user:" ${{github.event.pull_request.user.login}}
echo "pull request number:" ${{github.event.pull_request.number}}
echo "allow list:" ${{ env.USER_ALLOWLIST }}
echo "actor approved": ${{contains(fromJSON(env.USER_ALLOWLIST), github.actor)}}
echo "pull request user approved": ${{contains(fromJSON(env.USER_ALLOWLIST), github.event.pull_request.user.login)}}
- name: attempt automerge with PAT
# if we don't use a PAT then the push event will not fire when the pull request is merged into main
# if the push event does not fire, then the autoupdate workflow cannot auto-run on merge
id: automerge_pat
continue-on-error: true
if: >-
${{
contains(fromJSON(env.USER_ALLOWLIST), github.actor) &&
contains(fromJSON(env.USER_ALLOWLIST), github.event.pull_request.user.login)
}}
uses: peter-evans/enable-pull-request-automerge@v2
with:
token: ${{secrets.PAT}}
pull-request-number: ${{github.event.pull_request.number}}
merge-method: squash
- name: fallback to automerge with GITHUB_TOKEN
if: >-
${{
steps.automerge_pat.outcome != 'success' &&
contains(fromJSON(env.USER_ALLOWLIST), github.actor) &&
contains(fromJSON(env.USER_ALLOWLIST), github.event.pull_request.user.login)
}}
uses: peter-evans/enable-pull-request-automerge@v2
with:
token: ${{secrets.GITHUB_TOKEN}}
pull-request-number: ${{github.event.pull_request.number}}
merge-method: squash
|
.github/workflows/automerge.yaml
|
name: Build
on:
pull_request:
release:
types:
- published
jobs:
build_wheels:
name: Build wheels on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-18.04, windows-latest, macos-latest]
steps:
- uses: actions/checkout@v2
# Include all history and tags
with:
fetch-depth: 0
- uses: actions/setup-python@v2
name: Install Python
with:
python-version: '3.8'
- name: Install cibuildwheel
run: |
python -m pip install cibuildwheel==1.6.4
- name: Build wheels
run: |
python -m cibuildwheel --output-dir wheelhouse
env:
CIBW_SKIP: pp* cp27-win*
- uses: actions/upload-artifact@v2
with:
path: ./wheelhouse/*.whl
build_wheels_aarch64:
name: Build wheels on linux ${{ matrix.arch }}
strategy:
matrix:
arch: [aarch64]
fail-fast: false
runs-on: ubuntu-18.04
env:
img: quay.io/pypa/manylinux2014_${{ matrix.arch }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
- name: Available platforms
run: echo ${{ steps.qemu.outputs.platforms }}
- name: Setup environment
run: |
docker run --rm -v ${{ github.workspace }}:/ws:rw --workdir=/ws \
${{ env.img }} bash -c \
'
for pyv in `which /opt/python/*/bin/python`; do
            pyenv=.pyenv`echo $pyv | grep -o -E "[0-9]+" | head -1 | sed -e "s/^0\+//"`;
$pyv -m venv $pyenv;
done
'
- name: Install tools
run: |
docker run --rm -v ${{ github.workspace }}:/ws:rw --workdir=/ws \
${{ env.img }} bash -c \
'
for pyenv in `find . -name activate`; do
source $pyenv;
pip install -U setuptools wheel Cython;
deactivate;
done
'
- name: Make wheel
run: |
docker run --rm -v ${{ github.workspace }}:/ws:rw --workdir=/ws \
${{ env.img }} bash -c \
'
for pyenv in `find . -name activate`; do
source $pyenv;
python setup.py bdist_wheel;
deactivate;
done
'
      - name: Repair wheels
run: |
docker run --rm -v ${{ github.workspace }}:/ws:rw --workdir=/ws \
${{ env.img }} bash -c \
'
for whl in `find dist -name "*whl"`; do
auditwheel repair $whl --wheel-dir wheelhouse/
done
'
- uses: actions/upload-artifact@v2
with:
path: ./wheelhouse/*.whl
build_sdist:
name: Build source distribution
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# Include all history and tags
with:
fetch-depth: 0
- uses: actions/setup-python@v2
name: Install Python
with:
python-version: '3.7'
- name: Build sdist
run: |
pip install cython
python setup.py sdist
- uses: actions/upload-artifact@v2
with:
path: dist/*.tar.gz
upload_pypi:
needs: [build_wheels, build_sdist, build_wheels_aarch64]
runs-on: ubuntu-latest
if: github.event_name == 'release' && github.event.action == 'published'
steps:
- uses: actions/download-artifact@v2
with:
name: artifact
path: dist
- uses: pypa/gh-action-pypi-publish@master
with:
user: __token__
password: ${{ secrets.PYPI_TOKEN }}
# To test: repository_url: https://test.pypi.org/legacy/
|
.github/workflows/build_deploy.yml
|
title: Dokumentation mit grundlegenden Informationen zu Azure Active Directory
summary: Hier werden die grundlegenden Konzepte und Prozesse von Azure Active Directory (Azure AD) erläutert. Dabei erfahren Sie unter anderem, wie Sie eine einfache Umgebung sowie einfache Benutzer und Gruppen erstellen.
metadata:
title: Dokumentation mit grundlegenden Informationen zu Azure Active Directory
description: Hier werden die grundlegenden Konzepte und Prozesse von Azure Active Directory (Azure AD) erläutert. Dabei erfahren Sie unter anderem, wie Sie eine einfache Umgebung sowie einfache Benutzer und Gruppen erstellen.
manager: daveba
ms.author: ajburnle
ms.collection: na
ms.date: 08/27/2019
ms.service: active-directory
ms.subservice: na
ms.topic: landing-page
services: active-directory
ms.openlocfilehash: 7a6692b4ed0314acc8200e3e364e9a045d33e799
ms.sourcegitcommit: 253d4c7ab41e4eb11cd9995190cd5536fcec5a3c
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/25/2020
ms.locfileid: "80049678"
landingContent:
- title: Informationen zu Azure AD
linkLists:
- linkListType: overview
links:
- text: Was ist Azure Active Directory?
url: active-directory-whatis.md
- text: Neuerungen in Azure Active Directory
url: whats-new.md
- linkListType: concept
links:
- text: Azure AD-Architektur
url: active-directory-architecture.md
- title: Erste Schritte
linkLists:
- linkListType: quickstart
links:
- text: Zugreifen auf das Portal und Erstellen eines Mandanten
url: active-directory-access-create-new-tenant.md
- text: Anzeigen Ihrer Gruppen mit zugewiesenen Mitgliedern
url: active-directory-groups-view-azure-portal.md
- title: Erstellen eines Azure AD-Kontos
linkLists:
- linkListType: how-to-guide
links:
- text: Registrieren für Azure Active Directory Premium-Editionen
url: active-directory-get-started-premium.md
- text: Registrieren für Azure AD als Organisation
url: sign-up-organization.md
    - text: Hinzufügen Ihres benutzerdefinierten Domänennamens
url: add-custom-domain.md
- title: Erstellen einer Gruppe in Azure AD
linkLists:
- linkListType: how-to-guide
links:
- text: Erstellen einer Basisgruppe und Hinzufügen von Mitgliedern
url: active-directory-groups-create-azure-portal.md
- text: Hinzufügen oder Entfernen von Gruppenbesitzern
url: active-directory-accessmanagement-managing-group-owners.md
- title: Hinzufügen von Benutzern und Zuweisen ihrer Rollen und Lizenzen
linkLists:
- linkListType: how-to-guide
links:
- text: Hinzufügen oder Löschen von Benutzern
url: add-users-azure-active-directory.md
- text: Zuweisen von Rollen zu Benutzern
url: active-directory-users-assign-role-azure-portal.md
- text: Zuweisen von Lizenzen zu Benutzern
url: license-users-groups.md
|
articles/active-directory/fundamentals/index.yml
|
{% set data = load_setup_py_data() %}
package:
name: "pymt_landlab"
version: {{ data.get('version') }}
source:
path: ..
build:
number: 0
script: "{{ PYTHON }} -m pip install . --no-deps --ignore-installed --no-cache-dir -vvv"
requirements:
build:
- {{ compiler('c') }}
host:
- python
- pip
- cython
- numpy 1.11.*
- model_metadata
- landlab
run:
- python
- {{ pin_compatible('numpy') }}
- landlab
test:
requires:
- bmi-tester
- model_metadata
imports:
- pymt_landlab
commands:
- config_file=$(mmd-stage OverlandFlow . > MANIFEST && mmd-query OverlandFlow --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:OverlandFlow --config-file=$config_file --manifest=MANIFEST -v
- config_file=$(mmd-stage Flexure . > MANIFEST && mmd-query Flexure --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:Flexure --config-file=$config_file --manifest=MANIFEST -v
- config_file=$(mmd-stage LinearDiffuser . > MANIFEST && mmd-query LinearDiffuser --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:LinearDiffuser --config-file=$config_file --manifest=MANIFEST -v
- config_file=$(mmd-stage ExponentialWeatherer . > MANIFEST && mmd-query ExponentialWeatherer --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:ExponentialWeatherer --config-file=$config_file --manifest=MANIFEST -v
- config_file=$(mmd-stage TransportLengthHillslopeDiffuser . > MANIFEST && mmd-query TransportLengthHillslopeDiffuser --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:TransportLengthHillslopeDiffuser --config-file=$config_file --manifest=MANIFEST -v
- config_file=$(mmd-stage Vegetation . > MANIFEST && mmd-query Vegetation --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:Vegetation --config-file=$config_file --manifest=MANIFEST -v
- config_file=$(mmd-stage SoilMoisture . > MANIFEST && mmd-query SoilMoisture --var=run.config_file.path)
- bmi-test pymt_landlab.bmi:SoilMoisture --config-file=$config_file --manifest=MANIFEST -v
about:
summary: Python package that wraps the landlab BMI.
home: https://github.com/mcflugen/pymt_landlab
license: MIT license
license_file: LICENSE
dev_url: https://github.com/mcflugen/pymt_landlab
|
recipe/meta.yaml
|
items:
- uid: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN"
id: "WithReverseFQDN"
parent: "com.microsoft.azure.management.network"
children:
- "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn(java.lang.String)"
- "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn()"
langs:
- "java"
name: "PublicIPAddress.DefinitionStages.WithReverseFQDN"
nameWithType: "PublicIPAddress.DefinitionStages.WithReverseFQDN"
fullName: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN"
type: "Interface"
package: "com.microsoft.azure.management.network"
summary: "A public IP address definition allowing the reverse FQDN to be specified."
syntax:
content: "public static interface PublicIPAddress.DefinitionStages.WithReverseFQDN"
- uid: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn(java.lang.String)"
id: "withReverseFqdn(java.lang.String)"
parent: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN"
langs:
- "java"
name: "withReverseFqdn(String reverseFQDN)"
nameWithType: "PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn(String reverseFQDN)"
fullName: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn(String reverseFQDN)"
overload: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn*"
type: "Method"
package: "com.microsoft.azure.management.network"
summary: "Specifies the reverse FQDN to assign to this public IP address."
syntax:
content: "public abstract PublicIPAddress.DefinitionStages.WithCreate withReverseFqdn(String reverseFQDN)"
parameters:
- id: "reverseFQDN"
type: "java.lang.String"
description: "the reverse FQDN to assign"
return:
type: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithCreate"
description: "the next stage of the definition"
- uid: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn()"
id: "withoutReverseFqdn()"
parent: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN"
langs:
- "java"
name: "withoutReverseFqdn()"
nameWithType: "PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn()"
fullName: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn()"
overload: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn*"
type: "Method"
package: "com.microsoft.azure.management.network"
summary: "Ensures that no reverse FQDN will be used."
syntax:
content: "public abstract PublicIPAddress.DefinitionStages.WithCreate withoutReverseFqdn()"
return:
type: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithCreate"
description: "the next stage of the definition"
references:
- uid: "java.lang.String"
spec.java:
- uid: "java.lang.String"
name: "String"
fullName: "java.lang.String"
- uid: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithCreate"
name: "PublicIPAddress.DefinitionStages.WithCreate"
nameWithType: "PublicIPAddress.DefinitionStages.WithCreate"
fullName: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithCreate"
- uid: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn*"
name: "withReverseFqdn"
nameWithType: "PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn"
fullName: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withReverseFqdn"
package: "com.microsoft.azure.management.network"
- uid: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn*"
name: "withoutReverseFqdn"
nameWithType: "PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn"
fullName: "com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.withoutReverseFqdn"
package: "com.microsoft.azure.management.network"
|
docs-ref-autogen/com.microsoft.azure.management.network.PublicIPAddress.DefinitionStages.WithReverseFQDN.yml
|
jobs:
# mobile/examples/basic_usage/ios
- job: BasicUsageIos
pool:
vmImage: "macOS-11"
steps:
- template: templates/use-python-step.yml
- bash: |
set -e
pip install -r ../model/requirements.txt
../model/gen_model.sh ./OrtBasicUsage/model
workingDirectory: mobile/examples/basic_usage/ios
displayName: "Generate model"
- script: pod install
workingDirectory: 'mobile/examples/basic_usage/ios'
displayName: "Install CocoaPods pods"
- template: templates/xcode-build-and-test-step.yml
parameters:
xcWorkspacePath: 'mobile/examples/basic_usage/ios/OrtBasicUsage.xcworkspace'
scheme: 'OrtBasicUsage'
# mobile/examples/speech_recognition/ios
- job: SpeechRecognitionIos
pool:
vmImage: "macOS-11"
steps:
- template: templates/use-python-step.yml
- bash: |
set -e
pip install -r ../model/requirements.txt
../model/gen_model.sh ./SpeechRecognition/model
workingDirectory: mobile/examples/speech_recognition/ios
displayName: "Generate model"
- script: pod install
workingDirectory: 'mobile/examples/speech_recognition/ios'
displayName: "Install CocoaPods pods"
- template: templates/xcode-build-and-test-step.yml
parameters:
xcWorkspacePath: 'mobile/examples/speech_recognition/ios/SpeechRecognition.xcworkspace'
scheme: 'SpeechRecognition'
# mobile/examples/object_detection/ios
- job: ObjectDetectionIos
pool:
vmImage: "macOS-11"
steps:
- template: templates/use-python-step.yml
- bash: |
set -e
pip install -r ./prepare_model.requirements.txt
./prepare_model.sh
workingDirectory: mobile/examples/object_detection/ios/ORTObjectDetection
displayName: "Generate model"
- script: pod install
workingDirectory: 'mobile/examples/object_detection/ios'
displayName: "Install CocoaPods pods"
- template: templates/xcode-build-and-test-step.yml
parameters:
xcWorkspacePath: 'mobile/examples/object_detection/ios/ORTObjectDetection.xcworkspace'
scheme: 'ORTObjectDetection'
# mobile/examples/image_classification/android
- job: ImageClassificationAndroid
pool:
vmImage: "macOS-11"
steps:
- template: templates/use-python-step.yml
- bash: ./download_model_files.sh
workingDirectory: mobile/examples/image_classification/android
displayName: "Download model files"
- script: |
python3 ./ci_build/python/run_android_emulator.py \
--android-sdk-root ${ANDROID_SDK_ROOT} \
--create-avd --system-image "system-images;android-30;google_apis;x86_64" \
--start --emulator-extra-args="-partition-size 4096" \
--emulator-pid-file $(Build.BinariesDirectory)/emulator.pid
displayName: "Start Android emulator"
- bash: ./gradlew testDebugUnitTest connectedDebugAndroidTest
workingDirectory: mobile/examples/image_classification/android
displayName: "Build and run tests"
- script: |
python3 ./ci_build/python/run_android_emulator.py \
--android-sdk-root ${ANDROID_SDK_ROOT} \
--stop \
--emulator-pid-file $(Build.BinariesDirectory)/emulator.pid
displayName: "Stop Android emulator"
condition: always()
|
ci_build/azure_pipelines/mobile-examples-pipeline.yml
|
uid: "com.azure.resourcemanager.sql.fluent.WorkloadClassifiersClient.createOrUpdateAsync*"
fullName: "com.azure.resourcemanager.sql.fluent.WorkloadClassifiersClient.createOrUpdateAsync"
name: "createOrUpdateAsync"
nameWithType: "WorkloadClassifiersClient.createOrUpdateAsync"
members:
- uid: "com.azure.resourcemanager.sql.fluent.WorkloadClassifiersClient.createOrUpdateAsync(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,com.azure.resourcemanager.sql.fluent.models.WorkloadClassifierInner)"
fullName: "com.azure.resourcemanager.sql.fluent.WorkloadClassifiersClient.createOrUpdateAsync(String resourceGroupName, String serverName, String databaseName, String workloadGroupName, String workloadClassifierName, WorkloadClassifierInner parameters)"
name: "createOrUpdateAsync(String resourceGroupName, String serverName, String databaseName, String workloadGroupName, String workloadClassifierName, WorkloadClassifierInner parameters)"
nameWithType: "WorkloadClassifiersClient.createOrUpdateAsync(String resourceGroupName, String serverName, String databaseName, String workloadGroupName, String workloadClassifierName, WorkloadClassifierInner parameters)"
summary: "Creates or updates a workload classifier."
parameters:
- description: "The name of the resource group that contains the resource. You can obtain this value\n from the Azure Resource Manager API or the portal."
name: "resourceGroupName"
type: "<xref href=\"java.lang.String?alt=java.lang.String&text=String\" data-throw-if-not-resolved=\"False\" />"
- description: "The name of the server."
name: "serverName"
type: "<xref href=\"java.lang.String?alt=java.lang.String&text=String\" data-throw-if-not-resolved=\"False\" />"
- description: "The name of the database."
name: "databaseName"
type: "<xref href=\"java.lang.String?alt=java.lang.String&text=String\" data-throw-if-not-resolved=\"False\" />"
  - description: "The name of the workload group from which to receive the classifier."
name: "workloadGroupName"
type: "<xref href=\"java.lang.String?alt=java.lang.String&text=String\" data-throw-if-not-resolved=\"False\" />"
- description: "The name of the workload classifier to create/update."
name: "workloadClassifierName"
type: "<xref href=\"java.lang.String?alt=java.lang.String&text=String\" data-throw-if-not-resolved=\"False\" />"
- description: "Workload classifier operations for a data warehouse."
name: "parameters"
type: "<xref href=\"com.azure.resourcemanager.sql.fluent.models.WorkloadClassifierInner?alt=com.azure.resourcemanager.sql.fluent.models.WorkloadClassifierInner&text=WorkloadClassifierInner\" data-throw-if-not-resolved=\"False\" />"
syntax: "public abstract Mono<WorkloadClassifierInner> createOrUpdateAsync(String resourceGroupName, String serverName, String databaseName, String workloadGroupName, String workloadClassifierName, WorkloadClassifierInner parameters)"
returns:
description: "workload classifier operations for a data warehouse."
type: "<xref href=\"reactor.core.publisher.Mono?alt=reactor.core.publisher.Mono&text=Mono\" data-throw-if-not-resolved=\"False\" /><<xref href=\"com.azure.resourcemanager.sql.fluent.models.WorkloadClassifierInner?alt=com.azure.resourcemanager.sql.fluent.models.WorkloadClassifierInner&text=WorkloadClassifierInner\" data-throw-if-not-resolved=\"False\" />>"
type: "method"
metadata: {}
package: "com.azure.resourcemanager.sql.fluent"
artifact: com.azure.resourcemanager:azure-resourcemanager-sql:2.0.0-beta.5
|
preview/docs-ref-autogen/com.azure.resourcemanager.sql.fluent.WorkloadClassifiersClient.createOrUpdateAsync.yml
|
# N.B. Settings that require a conversion to integer should NOT be set here.
# They should be set in config.rb, in the Config class.
home_description:
events_enabled:
bioportal_api_key:
jerm_enabled:
email_enabled:
no_reply:
jws_enabled:
jws_online_root:
internal_help_enabled:
external_help_url:
hide_details_enabled:
activation_required_enabled:
project_name:
smtp:
default_pages:
project_type:
project_link:
type_managers_enabled:
type_managers:
pubmed_api_email:
crossref_api_email:
site_base_host:
copyright_addendum_enabled:
copyright_addendum_content:
noreply_sender:
solr_enabled:
application_name:
project_long_name:
dm_project_name:
dm_project_link:
header_image_link:
header_image_title:
header_image_enabled:
header_home_logo_image:
header_image_avatar_id:
google_analytics_enabled:
google_analytics_tracker_id:
piwik_analytics_url:
exception_notification_enabled:
exception_notification_recipients:
sycamore_enabled:
project_news_enabled:
project_news_feed_urls:
community_news_enabled:
community_news_feed_urls:
blacklisted_feeds:
is_virtualliver:
sabiork_ws_base_url:
filestore_path:
tagline_prefix:
modelling_analysis_enabled:
organisms_enabled:
models_enabled:
assays_enabled:
publications_enabled:
forum_enabled:
external_search_enabled:
piwik_analytics_enabled:
seek_video_link:
scales:
delete_asset_version_enabled:
recaptcha_enabled:
project_hierarchy_enabled:
tabs_lazy_load_enabled:
admin_impersonation_enabled:
auth_lookup_enabled:
publish_button_enabled:
project_browser_enabled:
experimental_features_enabled:
pdf_conversion_enabled:
guide_box_enabled:
treatments_enabled:
factors_studied_enabled:
experimental_conditions_enabled:
documentation_enabled:
tagging_enabled:
authorization_checks_enabled:
magic_guest_enabled:
workflows_enabled:
programmes_enabled:
assay_type_ontology_file:
technology_type_ontology_file:
modelling_analysis_type_ontology_file:
assay_type_base_uri:
technology_type_base_uri:
modelling_analysis_type_base_uri:
header_tagline_text_enabled:
related_items_limit:
faceted_browsing_enabled:
facet_enable_for_pages:
faceted_search_enabled:
css_appended:
css_prepended:
javascript_appended:
javascript_prepended:
main_layout:
profile_select_by_default:
recaptcha_public_key:
recaptcha_private_key:
support_email_address:
show_as_external_link_enabled:
doi_minting_enabled:
datacite_username:
datacite_password:
datacite_url:
doi_prefix:
doi_suffix:
max_extractable_spreadsheet_size:
max_indexable_text_size:
show_announcements:
imprint_enabled:
imprint_description:
zenodo_publishing_enabled:
zenodo_api_url:
zenodo_oauth_url:
zenodo_client_id:
zenodo_client_secret:
programme_user_creation_enabled:
cache_remote_files:
orcid_required:
max_cachable_size:
convert: :to_i
hard_max_cachable_size:
convert: :to_i
tag_threshold:
convert: :to_i
limit_latest:
convert: :to_i
max_visible_tags:
convert: :to_i
piwik_analytics_id_site:
convert: :to_i
project_news_number_of_entries:
convert: :to_i
community_news_number_of_entries:
convert: :to_i
recent_contributions_number_of_entries:
convert: :to_i
home_feeds_cache_timeout:
convert: :to_i
time_lock_doi_for:
convert: :to_i
default_associated_projects_access_type:
convert: :to_i
default_consortium_access_type:
convert: :to_i
default_all_visitors_access_type:
convert: :to_i
default_project_members_access_type:
convert: :to_i
news_enabled:
news_feed_urls:
news_number_of_entries:
convert: :to_i
front_page_buttons_enabled:
samples_enabled:
sample_parent_term:
specimen_culture_starting_date:
sample_age:
specimen_creators:
sample_parser_enabled:
|
lib/seek/config_setting_attributes.yml
|
# This file is the entry point to configure your own services.
# Files in the packages/ subdirectory configure your dependencies.
# Put parameters here that don't need to change on each machine where the app is deployed
# https://symfony.com/doc/current/best_practices/configuration.html#application-related-configuration
parameters:
services:
# default configuration for services in *this* file
_defaults:
autowire: true # Automatically injects dependencies in your services.
autoconfigure: true # Automatically registers your services as commands, event subscribers, etc.
# makes classes in src/ available to be used as services
# this creates a service per class whose id is the fully-qualified class name
App\:
resource: "../src/*"
exclude: "../src/{DependencyInjection,Entity,Migrations,Tests,Kernel.php}"
# controllers are imported separately to make sure services can be injected
# as action arguments even if you don't extend any base controller class
App\Controller\:
resource: "../src/Controller"
tags: ["controller.service_arguments"]
# https://github.com/lexik/LexikJWTAuthenticationBundle/blob/master/Resources/doc/2-data-customization.md#eventsjwt_created---adding-custom-data-or-headers-to-the-jwt
app.event.jwt_authenticated_listener:
class: App\EventListeners\JWTCreatedListener
arguments: ["@request_stack"]
tags:
- {
name: kernel.event_listener,
event: lexik_jwt_authentication.on_jwt_created,
method: onJWTCreated,
}
# https://github.com/TimoBakx/SymfonyTricks/blob/master/security/jwt-ldap.md#using-json_login-instead-of-form_login
App\Security\JsonLdapGuardAuthenticator:
arguments:
- "%security.authentication.hide_user_not_found%"
- "@ldap_tools.security.user.ldap_user_checker"
- "@ldap_tools.ldap_manager"
- "@lexik_jwt_authentication.security.guard.jwt_token_authenticator" # Instead of '@ldap_tools.security.authentication.form_entry_point'
- "@event_dispatcher"
- "@lexik_jwt_authentication.handler.authentication_success" # Instead of '@ldap_tools.security.auth_success_handler'
- "@lexik_jwt_authentication.handler.authentication_failure" # Instead of '@ldap_tools.security.auth_failure_handler'
- "%ldap_tools.security.guard.options%"
- "@ldap_tools.security.user.ldap_user_provider"
#https://api-platform.com/docs/core/serialization/#changing-the-serialization-context-dynamically
App\Serializer\ServiceContextBuilder:
decorates: "api_platform.serializer.context_builder"
arguments: ['@App\Serializer\ServiceContextBuilder.inner']
autoconfigure: false
app.event.login_listener:
class: App\EventListeners\LoginEventListener
arguments: ["@doctrine.orm.entity_manager"]
tags:
- {
name: kernel.event_listener,
event: security.interactive_login,
method: onLoginSuccess,
}
app.event.service_change_listener:
class: App\EventListeners\ServiceChangeListener
tags:
# Must have lower priority than the TranslatableListener
# (must be a listener or a subscriber with priority <-10)
- { name: doctrine.event_listener, event: onFlush }
app.event.post_soft_delete_listener:
class: App\EventListeners\PostSoftDeleteListener
tags:
- { name: doctrine.event_listener, event: postSoftDelete }
app.filter.binary_uuid_filter:
class: App\Filter\UuidFilter
tags: ["api_platform.filter"]
App\DataProviders\ServiceAnalyticsDataProvider:
calls:
- [setViewId, ["%env(ANALYTICS_VIEW_ID)%"]]
# add more service definitions when explicit configuration is needed
# please note that last definitions always *replace* previous ones
gedmo.listener.timestampable:
class: Gedmo\Timestampable\TimestampableListener
tags:
- { name: doctrine.event_subscriber, connection: default }
Gedmo\Translatable\TranslatableListener: "@stof_doctrine_extensions.listener.translatable"
|
api/config/services.yaml
|
plugins:
- serverless-webpack
service: random-images
provider:
name: aws
runtime: nodejs12.x
region: us-east-1
custom:
#domain: images.example.com
#rootDomain: example.com
#certificateArn: arn:aws:acm:us-east-1:000000000000:certificate/00000000-0000-0000-0000-000000000000
#fallbackImage: fallback.jpg
# referers:
# - 'https?:\/\/valid-referer.com'
# - 'https?:\/\/other-referer.com\/index\.php\?id=1234'
webpack:
webpackConfig: ./webpack.config.js
package:
individually: true
excludeDevDependencies: false
functions:
images:
handler: src/functions/random-image.handler
events:
- cloudFront:
eventType: viewer-request
origin:
DomainName: ${self:custom.domain}
resources:
Resources:
Bucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:custom.domain}
ReadPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket:
Ref: Bucket
PolicyDocument:
Statement:
- Action: 's3:GetObject'
Effect: Allow
Resource: 'arn:aws:s3:::${self:custom.domain}/*'
Principal:
CanonicalUser:
Fn::GetAtt:
- CloudFrontOriginAccessIdentity
- S3CanonicalUserId
CloudFrontOriginAccessIdentity:
Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
Properties:
CloudFrontOriginAccessIdentityConfig:
Comment: ${self:custom.domain}
CloudFrontDistribution:
Type: AWS::CloudFront::Distribution
Properties:
DistributionConfig:
Aliases:
- ${self:custom.domain}
DefaultCacheBehavior:
AllowedMethods:
- GET
- HEAD
CachedMethods:
- GET
- HEAD
Compress: true
DefaultTTL: 86400
ForwardedValues:
Cookies:
Forward: none
QueryString: false
MaxTTL: 31536000
MinTTL: 0
TargetOriginId: s3origin
ViewerProtocolPolicy: 'redirect-to-https'
DefaultRootObject: index.html
Enabled: true
HttpVersion: http2
Origins:
- DomainName:
Fn::GetAtt:
- Bucket
- DomainName
Id: s3origin
S3OriginConfig:
OriginAccessIdentity:
Fn::Join:
- '/'
- - 'origin-access-identity'
- 'cloudfront'
- Ref: CloudFrontOriginAccessIdentity
PriceClass: 'PriceClass_100'
ViewerCertificate:
AcmCertificateArn: ${self:custom.certificateArn}
SslSupportMethod: sni-only
RecordSet:
Type: AWS::Route53::RecordSet
Properties:
HostedZoneName: ${self:custom.rootDomain}.
AliasTarget:
DNSName:
Fn::GetAtt:
- CloudFrontDistribution
- DomainName
HostedZoneId: Z2FDTNDATAQYW2 # Cloudfront always uses this id
Name: ${self:custom.domain}
Type: A
|
serverless.yml
|
category: db
app: oracle
name:
zh-CN: Oracle数据库
en-US: Oracle DB
# Parameter mapping. type is the parameter type: 0-number, 1-string (plain text), 2-secret (encrypted string)
# Mandatory fixed parameter - host
configmap:
- key: host
type: 1
- key: port
type: 0
- key: username
type: 1
- key: password
type: 2
- key: database
type: 1
- key: timeout
type: 0
- key: url
type: 1
# List of metric groups
metrics:
- name: basic
# Metric group scheduling priority (0-127); the lower the value, the higher the priority. Lower-priority groups are scheduled only after higher-priority groups finish collecting; groups with the same priority are scheduled and collected in parallel
# The priority-0 metric group is the availability group: it is scheduled first, other groups are scheduled only if its collection succeeds, and scheduling is aborted if it fails
priority: 0
# Specific monitoring metrics in this metric group
fields:
# Metric info: field name; type (0-number, 1-string); instance (whether the field is the instance primary key); unit (metric unit)
- field: database_version
type: 1
instance: true
- field: database_type
type: 1
- field: hostname
type: 1
- field: instance_name
type: 1
- field: startup_time
type: 1
- field: status
type: 1
# (optional) Metric aliases mapped to the metric names above. Used when the collected fields are not the final metric names and need this alias mapping
aliasFields:
- VERSION
- DATABASE_TYPE
- HOST_NAME
- INSTANCE_NAME
- STARTUP_TIME
- STATUS
# (optional) Metric calculation expressions, applied together with the aliases above to compute the final metric values
# eg: cores=core1+core2, usage=usage, waitTime=allTime-runningTime
calculates:
- database_version=VERSION
- database_type=DATABASE_TYPE
- hostname=HOST_NAME
- instance_name=INSTANCE_NAME
- startup_time=STARTUP_TIME
- status=STATUS
protocol: jdbc
jdbc:
# host: ipv4, ipv6 or domain name
host: ^_^host^_^
# port
port: ^_^port^_^
platform: oracle
username: ^_^username^_^
password: ^_^password^_^
database: ^_^database^_^
timeout: ^_^timeout^_^
# SQL query type: oneRow, multiRow, columns
queryType: oneRow
# sql
sql: select * from sys.v_$instance
url: ^_^url^_^
- name: tablespace
priority: 1
# Specific monitoring metrics in this metric group
fields:
# Metric info: field name; type (0-number, 1-string); instance (whether the field is the instance primary key); unit (metric unit)
- field: file_id
type: 1
instance: true
- field: file_name
type: 1
- field: tablespace_name
type: 1
- field: status
type: 1
- field: bytes
type: 0
unit: MB
- field: blocks
type: 0
unit: blocks
protocol: jdbc
jdbc:
# host: ipv4, ipv6 or domain name
host: ^_^host^_^
# port
port: ^_^port^_^
platform: oracle
username: ^_^username^_^
password: ^_^password^_^
database: ^_^database^_^
timeout: ^_^timeout^_^
# SQL query type: oneRow, multiRow, columns
queryType: multiRow
# sql
sql: select file_id, file_name, tablespace_name, status, bytes / 1024 / 1024 as bytes, blocks from dba_data_files
url: ^_^url^_^
- name: user_connect
priority: 1
fields:
# Metric info: field name; type (0-number, 1-string); instance (whether the field is the instance primary key); unit (metric unit)
- field: username
type: 1
instance: true
- field: counts
type: 0
unit: connections
protocol: jdbc
jdbc:
# host: ipv4, ipv6 or domain name
host: ^_^host^_^
# port
port: ^_^port^_^
platform: oracle
username: ^_^username^_^
password: ^_^password^_^
database: ^_^database^_^
timeout: ^_^timeout^_^
# SQL query type: oneRow, multiRow, columns
queryType: multiRow
# sql
sql: SELECT username, count( username ) as counts FROM v$session WHERE username IS NOT NULL GROUP BY username
url: ^_^url^_^
- name: performance
priority: 1
fields:
# Metric info: field name; type (0-number, 1-string); instance (whether the field is the instance primary key); unit (metric unit)
- field: qps
type: 0
unit: qps
- field: tps
type: 0
unit: tps
- field: mbps
type: 0
unit: mbps
# (optional) Metric aliases mapped to the metric names above. Used when the collected fields are not the final metric names and need this alias mapping
aliasFields:
- I/O Requests per Second
- User Transaction Per Sec
- I/O Megabytes per Second
# (optional) Metric calculation expressions, applied together with the aliases above to compute the final metric values
# eg: cores=core1+core2, usage=usage, waitTime=allTime-runningTime
calculates:
- qps=I/O Requests per Second
- tps=User Transaction Per Sec
- mbps=I/O Megabytes per Second
protocol: jdbc
jdbc:
# host: ipv4, ipv6 or domain name
host: ^_^host^_^
# port
port: ^_^port^_^
platform: oracle
username: ^_^username^_^
password: ^_^password^_^
database: ^_^database^_^
timeout: ^_^timeout^_^
# SQL query type: oneRow, multiRow, columns
queryType: columns
# sql
sql: select metric_name, value from gv$sysmetric where metric_name = 'I/O Megabytes per Second' or metric_name = 'User Transaction Per Sec' or metric_name = 'I/O Requests per Second'
url: ^_^url^_^
|
manager/src/main/resources/define/app/oracle.yml
|
items:
- uid: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider"
id: "WithProvider"
parent: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview"
children:
- "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider(java.lang.String)"
langs:
- "java"
name: "ManagementAssociation.DefinitionStages.WithProvider"
nameWithType: "ManagementAssociation.DefinitionStages.WithProvider"
fullName: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider"
type: "Interface"
package: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview"
summary: "The stage of the managementassociation definition allowing to specify Provider."
syntax:
content: "public static interface ManagementAssociation.DefinitionStages.WithProvider"
- uid: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider(java.lang.String)"
id: "withExistingProvider(java.lang.String)"
parent: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider"
langs:
- "java"
name: "withExistingProvider(String resourceGroupName)"
nameWithType: "ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider(String resourceGroupName)"
fullName: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider(String resourceGroupName)"
overload: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider*"
type: "Method"
package: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview"
summary: "Specifies resourceGroupName."
syntax:
content: "public abstract ManagementAssociation.DefinitionStages.WithCreate withExistingProvider(String resourceGroupName)"
parameters:
- id: "resourceGroupName"
type: "java.lang.String"
description: "The name of the resource group to get. The name is case insensitive"
return:
type: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithCreate"
description: "the next definition stage"
references:
- uid: "java.lang.String"
spec.java:
- uid: "java.lang.String"
name: "String"
fullName: "java.lang.String"
- uid: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithCreate"
name: "ManagementAssociation.DefinitionStages.WithCreate"
nameWithType: "ManagementAssociation.DefinitionStages.WithCreate"
fullName: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithCreate"
- uid: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider*"
name: "withExistingProvider"
nameWithType: "ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider"
fullName: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.withExistingProvider"
package: "com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview"
|
preview/docs-ref-autogen/com.microsoft.azure.management.operationsmanagement.v2015_11_01_preview.ManagementAssociation.DefinitionStages.WithProvider.yml
|
apiVersion: v1
kind: Template
metadata:
creationTimestamp: null
name: cloud-navigator
labels:
template: cloud-navigator
objects:
- apiVersion: v1
data:
Caddyfile: |
0.0.0.0:2015
root /var/www/html
log stdout
errors stdout
rewrite {
regexp ^\/topic(\/[\w|\-|\_|~|\.]+)+
to {path} {path}/ /topic/
}
gzip {
ext .js .html .css
}
templates {
ext .js
}
# gatsby_cache_control
# https://www.gatsbyjs.org/docs/caching/
header / {
# special js file that should not be cached for offline mode
Cache-Control "public, max-age=0, must-revalidate"
}
header /public/static {
# all static assets SHOULD be cached
Cache-Control "public, max-age=31536000, immutable"
}
header /public/page-data/ {
Cache-Control "public, max-age=0, must-revalidate"
}
header /app-data.json {
Cache-Control "public, max-age=0, must-revalidate"
}
kind: ConfigMap
metadata:
creationTimestamp: null
name: caddy-${NAME}-static${SUFFIX}
labels:
app: ${NAME}-static${SUFFIX}
deploymentconfig: ${NAME}-static${SUFFIX}
app-name: ${NAME}-static${SUFFIX}
- apiVersion: v1
kind: ImageStream
metadata:
creationTimestamp: null
name: ${NAME}-static
spec:
lookupPolicy:
local: false
- apiVersion: v1
kind: DeploymentConfig
metadata:
creationTimestamp: null
name: ${NAME}-static${SUFFIX}
spec:
minReadySeconds: 20 # should be ready for at least 20 seconds before the container is considered available. This will allow us
# to catch any errors on deploy before they are available to the web
replicas: 3
selector:
deploymentconfig: ${NAME}-static${SUFFIX}
strategy:
resources:
requests:
cpu: '10m'
memory: '50Mi'
limits:
cpu: '150m'
memory: '75Mi'
template:
metadata:
creationTimestamp: null
labels:
deploymentconfig: ${NAME}-static${SUFFIX}
spec:
containers:
- image: ${NAME}-static:${VERSION}
name: ${NAME}
args:
- /tmp/scripts/run
ports:
- containerPort: 2015
protocol: TCP
resources:
requests:
cpu: '10m'
memory: '50Mi'
limits:
cpu: '150m'
memory: '75Mi'
volumeMounts:
- name: caddy-${NAME}-static${SUFFIX}
mountPath: /etc/Caddyfile
readOnly: true
subPath: Caddyfile
volumes:
- name: caddy-${NAME}-static${SUFFIX}
configMap:
defaultMode: 420
name: caddy-${NAME}-static${SUFFIX}
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- ${NAME}
from:
kind: ImageStreamTag
name: ${NAME}-static:${VERSION}
type: ImageChange
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: ${NAME}-static${SUFFIX}
spec:
ports:
- name: web
port: 2015
protocol: TCP
targetPort: 2015
selector:
deploymentconfig: ${NAME}-static${SUFFIX}
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
annotations:
haproxy.router.openshift.io/disable_cookies: 'true'
creationTimestamp: null
name: ${NAME}-static${SUFFIX}
spec:
host:
port:
targetPort: web
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: ${NAME}-static${SUFFIX}
weight: 100
wildcardPolicy: None
parameters:
- description: A name used for all objects
displayName: Name
name: NAME
required: true
value: cloud-navigator
- description: A name suffix used for all objects
displayName: Suffix
name: SUFFIX
required: false
value: -dev
- description: A version used for the image tags
displayName: version
name: VERSION
required: true
value: v1.0.0
- description: A name used for routes/services and deployment configs
displayName: Host
name: HOST
required: false
value: ''
- description: A volume used for the Caddyfile config map
displayName: volume name
name: CADDY_VOLUME_NAME
required: false
value: web-caddy-config
|
app/openshift/templates/dc.yaml
|
name: deploy
on:
push:
branches:
- master
- release-*
create:
tags:
- v*
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/cache@v1
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Use Node.js 12.x
uses: actions/setup-node@v1
with:
node-version: 12.x
- run: npm install
- run: npm run build
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v1
- name: Get image tag for push
run: |
export SHA=${{ github.sha }}
echo "::set-env name=IMAGE_TAG::${SHA::6}"
echo "::set-env name=DEPLOY_ENV::dev"
if: github.event_name == 'push'
- name: Get image tag for tag
run: |
echo "::set-env name=IMAGE_TAG::${GITHUB_REF/refs\/tags\//}"
echo "::set-env name=DEPLOY_ENV::prod"
if: github.event_name == 'create'
- name: Build, tag, and push image to Amazon ECR
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
ECR_REPOSITORY: asem-labs
run: |
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
- name: Logout of Amazon ECR
if: always()
run: docker logout ${{ steps.login-ecr.outputs.registry }}
- name: 'Terraform Init'
uses: hashicorp/terraform-github-actions@master
with:
tf_actions_version: 0.12.13
tf_actions_subcommand: 'init'
tf_actions_working_dir: './terraform'
args: '-backend-config="bucket=asem-labs-${{ env.DEPLOY_ENV }}"'
env:
TF_VAR_env_name: ${{ env.DEPLOY_ENV }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID}}
- name: 'Terraform Apply'
uses: hashicorp/terraform-github-actions@master
with:
tf_actions_version: 0.12.13
tf_actions_subcommand: 'apply'
tf_actions_working_dir: './terraform'
args: '-auto-approve -var-file="vars/${{ env.DEPLOY_ENV }}.tfvars"'
env:
TF_VAR_env_name: ${{ env.DEPLOY_ENV }}
TF_VAR_image_tag: ${{ env.IMAGE_TAG }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID}}
|
.github/workflows/deploy.yml
|
name: ErrorModel
uid: '@azure/arm-recoveryservices.ErrorModel'
package: '@azure/arm-recoveryservices'
summary: The resource management error response.
fullName: ErrorModel
remarks: ''
isPreview: false
isDeprecated: false
type: interface
properties:
- name: additionalInfo
uid: '@azure/arm-recoveryservices.ErrorModel.additionalInfo'
package: '@azure/arm-recoveryservices'
summary: >-
The error additional info.
NOTE: This property will not be serialized. It can only be populated by
the server.
fullName: additionalInfo
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'additionalInfo?: ErrorAdditionalInfo[]'
return:
description: ''
type: '<xref uid="@azure/arm-recoveryservices.ErrorAdditionalInfo" />[]'
- name: code
uid: '@azure/arm-recoveryservices.ErrorModel.code'
package: '@azure/arm-recoveryservices'
summary: >-
The error code.
NOTE: This property will not be serialized. It can only be populated by
the server.
fullName: code
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'code?: undefined | string'
return:
description: ''
type: undefined | string
- name: details
uid: '@azure/arm-recoveryservices.ErrorModel.details'
package: '@azure/arm-recoveryservices'
summary: >-
The error details.
NOTE: This property will not be serialized. It can only be populated by
the server.
fullName: details
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'details?: ErrorModel[]'
return:
description: ''
type: '<xref uid="@azure/arm-recoveryservices.ErrorModel" />[]'
- name: message
uid: '@azure/arm-recoveryservices.ErrorModel.message'
package: '@azure/arm-recoveryservices'
summary: >-
The error message.
NOTE: This property will not be serialized. It can only be populated by
the server.
fullName: message
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'message?: undefined | string'
return:
description: ''
type: undefined | string
- name: target
uid: '@azure/arm-recoveryservices.ErrorModel.target'
package: '@azure/arm-recoveryservices'
summary: >-
The error target.
NOTE: This property will not be serialized. It can only be populated by
the server.
fullName: target
remarks: ''
isPreview: false
isDeprecated: false
syntax:
content: 'target?: undefined | string'
return:
description: ''
type: undefined | string
|
preview-packages/docs-ref-autogen/@azure/arm-recoveryservices/ErrorModel.yml
|
name: build
on:
push:
branches: "**"
tags:
- '**'
pull_request:
branches: [ master ]
jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Compare version with git tag
if: startsWith(github.ref, 'refs/tags/')
run: |
file_version=$(cat geniust/VERSION)
tag_version=${GITHUB_REF#refs/*/}
if test "$file_version" = "$tag_version";
then
echo "Versions match! >> $file_version"
else
echo "Versions don't match! >> FILE=$file_version != TAG=$tag_version"
exit 1
fi
- name: Set up Python 3.8
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Cache Pip dependencies
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt', 'setup.py') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -e .[dev]
- name: Run tox
run: tox
- name: Archive code coverage results
if: github.ref == 'refs/heads/master'
uses: actions/upload-artifact@v2
with:
name: code-coverage-report
path: coverage.xml
upload_coverage:
if: github.ref == 'refs/heads/master'
runs-on: ubuntu-18.04
env:
CC_TEST_REPORTER_URL: https://codeclimate.com/downloads/test-reporter/test-reporter-0.7.0-linux-amd64
CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
needs: build-and-test
steps:
- uses: actions/checkout@v2
- name: Set ENV for codeclimate
run: |
echo "GIT_BRANCH=$GITHUB_REF" >> $GITHUB_ENV
echo "GIT_COMMIT_SHA=$GITHUB_SHA" >> $GITHUB_ENV
- name: Download test coverage reporter
run: curl -L $CC_TEST_REPORTER_URL > cc-test-reporter
- name: Give test coverage reporter executable permissions
run: chmod +x cc-test-reporter
- name: Download a single artifact
uses: actions/download-artifact@v2
with:
name: code-coverage-report
- name: Upload results to Code Climate
run: |
./cc-test-reporter after-build -t=coverage.py
|
.github/workflows/python-app.yml
|
apiVersion: v1
kind: ConfigMap
metadata:
name: config
namespace: system
data:
10-kubeadm.conf: |
# Note: This dropin only works with kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/var/lib/kubelet/kubeconfig"
Environment="KUBELET_DYNAMIC_ARGS=--config=/var/lib/kubelet/config.yaml --dynamic-config-dir=/var/lib/kubelet/dyncfg"
EnvironmentFile=/var/lib/kubelet/flags.env
EnvironmentFile=/etc/default/kubelet
ExecStart=
ExecStart=/usr/local/bin/kubelet \
--node-labels="${KUBELET_NODE_LABELS}" \
$KUBELET_CONFIG \
$KUBELET_KUBECONFIG_ARGS \
$KUBELET_DYNAMIC_ARGS \
$KUBELET_EXTRA_ARGS
flags.env: |
KUBELET_EXTRA_ARGS=--experimental-kernel-memcg-notification=true
config.yaml: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 10255
clusterDNS:
- 10.0.0.10
clusterDomain: cluster.local
authentication:
webhook:
enabled: true
x509:
clientCAFile: "/etc/kubernetes/certs/ca.crt"
authorization:
mode: Webhook
tlsCertFile: "/etc/kubernetes/certs/kubeletserver.crt"
tlsPrivateKeyFile: "/etc/kubernetes/certs/kubeletserver.key"
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
runtime.slice: |
[Unit]
Description=Limited resources slice for Kubernetes services
Documentation=man:systemd.special(7)
DefaultDependencies=no
Before=slices.target
Requires=-.slice
After=-.slice
docker-10-cgroup.conf: |
[Service]
CPUAccounting=true
MemoryAccounting=true
Slice=runtime.slice
kubelet-10-cgroup.conf: |
[Service]
CPUAccounting=true
MemoryAccounting=true
Slice=runtime.slice
|
config/kubelet/patch-configmap.yaml
|
---
- name: CloudFlare Dynamic DNS Updater
block:
- name: Install script
ansible.builtin.get_url:
url: https://github.com/K0p1-Git/cloudflare-ddns-updater/raw/main/cloudflare-template.sh
dest: /usr/local/bin/cloudflare-ddns-updater
owner: root
group: root
mode: 0700
- name: Configure script
ansible.builtin.lineinfile:
dest: /usr/local/bin/cloudflare-ddns-updater
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
no_log: yes
with_items:
- {regexp: '^auth_email=', line: "auth_email='{{ cloudflare_ddns_updater_config.auth_email | default('') }}'"}
- {regexp: '^auth_method=', line: "auth_method='{{ cloudflare_ddns_updater_config.auth_method | default('') }}'"}
- {regexp: '^auth_key=', line: "auth_key='{{ cloudflare_ddns_updater_config.auth_key | default('') }}'"}
- {regexp: '^zone_identifier=', line: "zone_identifier='{{ cloudflare_ddns_updater_config.zone_identifier | default('') }}'"}
- {regexp: '^record_name=', line: "record_name='{{ cloudflare_ddns_updater_config.record_name | default('') }}'"}
- {regexp: '^ttl=', line: "ttl='{{ cloudflare_ddns_updater_config.ttl | default('3600') }}'"}
- {regexp: '^proxy=', line: "proxy={{ cloudflare_ddns_updater_config.proxy | default('false') }}"}
- {regexp: '^slacksitename=', line: "slacksitename='{{ cloudflare_ddns_updater_config.slacksitename | default('') }}'"}
- {regexp: '^slackchannel=', line: "slackchannel='{{ cloudflare_ddns_updater_config.slackchannel | default('') }}'"}
- {regexp: '^slackuri=', line: "slackuri='{{ cloudflare_ddns_updater_config.slackuri | default('') }}'"}
when: cloudflare_ddns_updater_config is defined and cloudflare_ddns_updater_config
- name: Configure cron job
cron:
name: cloudflare_ddns_updater-cron
job: /usr/local/bin/cloudflare-ddns-updater
minute: "*/10"
user: root
state: present
when: cloudflare_ddns_updater_enabled is defined and cloudflare_ddns_updater_enabled
- name: Provision DNS records
community.general.cloudflare_dns:
state: "{{ item.state | default('present') }}"
zone: "{{ item.zone }}"
type: "{{ item.type }}"
record: "{{ item.record }}"
value: "{{ item.value }}"
proxied: "{{ item.proxied | default(omit) }}"
account_email: "{{ cloudflare_account_email | default(omit) }}"
account_api_key: "{{ cloudflare_account_api_key | default(omit) }}"
api_token: "{{ cloudflare_api_token | default(omit) }}"
delegate_to: localhost
run_once: yes
with_items: "{{ cloudflare_dns_records }}"
when: >
( cloudflare_dns_records is defined and cloudflare_dns_records ) and
((( cloudflare_account_email is defined and cloudflare_account_email ) and
( cloudflare_account_api_key is defined and cloudflare_account_api_key )) or
( cloudflare_api_token is defined and cloudflare_api_token ))
- name: Override SystemD Resolve
block:
- name: Disable DNSStubListener
ansible.builtin.copy:
content: |
[Resolve]
DNSStubListener={{ dns_stub_listener | default('yes') }}
dest: /etc/systemd/resolved.conf
owner: root
group: root
mode: 0644
notify: Restart SystemD Resolved
- name: Change symlink for systemd-resolved
ansible.builtin.file:
src: /var/run/systemd/resolve/resolv.conf
dest: /etc/resolv.conf
state: link
force: yes
follow: no
notify: Restart SystemD Resolved
when: dns_stub_listener is defined and dns_stub_listener == 'no'
|
roles/dns/tasks/main.yml
|
trigger:
- master
stages:
- stage: "Build_CPP_Binaries"
jobs:
- job: Windows
pool:
vmImage: 'windows-2019'
steps:
- script: |
git clone --depth 1 https://github.com/evman182/json-schema-validator.git
cd json-schema-validator
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_GENERATOR_PLATFORM=x64 -DBUILD_SHARED_LIBS=ON -DBUILD_TESTS=OFF -DBUILD_EXAMPLES=OFF -Dnlohmann_json_DIR=.. ..
cmake --build . --config Release
displayName: 'Build json-schema-validator'
- publish: $(System.DefaultWorkingDirectory)/json-schema-validator/build/release/
artifact: WindowsWrapperBinary
- job: Linux
pool:
vmImage: 'ubuntu-18.04'
steps:
- script: |
git clone https://github.com/evman182/json-schema-validator.git
cd json-schema-validator
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=ON -DBUILD_TESTS=OFF -DBUILD_EXAMPLES=OFF -Dnlohmann_json_DIR=.. ..
cmake --build . --config Release
mkdir artifacts
cp libnlohmann_json_schema_validator.so ./artifacts
displayName: 'Build json-schema-validator'
- publish: $(System.DefaultWorkingDirectory)/json-schema-validator/build/artifacts/
artifact: LinuxWrapperBinary
- stage: "Build_Pendleton_JsonSchemaValidator"
jobs:
- job: "Build_Windows_And_Push"
pool:
vmImage: 'windows-2019'
steps:
- task: DownloadPipelineArtifact@2
inputs:
buildType: 'current'
artifactName: 'WindowsWrapperBinary'
itemPattern: '*.dll'
targetPath: '$(Pipeline.Workspace)'
- task: DownloadPipelineArtifact@2
inputs:
buildType: 'current'
artifactName: 'LinuxWrapperBinary'
itemPattern: '*.so'
targetPath: '$(Pipeline.Workspace)'
- task: CopyFiles@2
displayName: "Copy CPP Binaries"
inputs:
SourceFolder: '$(Pipeline.Workspace)'
Contents: |
*.dll
*.so
TargetFolder: '$(Pipeline.Workspace)\src\Pendleton.JsonSchemaValidator'
OverWrite: true
- task: DotNetCoreCLI@2
displayName: "Build Pendleton.JsonSchemaValidator"
inputs:
command: 'build'
projects: '**/*.csproj'
- task: DotNetCoreCLI@2
displayName: "Run Tests"
inputs:
command: 'test'
projects: '**/*.csproj'
- task: DotNetCoreCLI@2
displayName: "Create Package"
inputs:
command: 'pack'
packagesToPack: '**/Pendleton.JsonSchemaValidator.csproj'
nobuild: true
versioningScheme: 'byPrereleaseNumber'
majorVersion: '0'
minorVersion: '1'
patchVersion: '0'
- task: DotNetCoreCLI@2
displayName: "Push package to Azure"
inputs:
command: 'push'
packagesToPush: '$(Build.ArtifactStagingDirectory)/*.nupkg'
nuGetFeedType: 'internal'
publishVstsFeed: '7f4acbfe-6409-4555-962b-beaf9add995a/dd0a7f4e-a22a-4995-bf7e-250f97daaff2'
- task: DotNetCoreCLI@2
displayName: "Push Nuget Package to Nuget.org"
inputs:
command: custom
custom: nuget
arguments: >
push $(Build.ArtifactStagingDirectory)/**.nupkg
-s $(NuGetSourceServerUrl)
-k $(NuGetSourceServerApiKey)
-n true
- job: "Build_Linux"
pool:
vmImage: 'ubuntu-18.04'
steps:
- task: DownloadPipelineArtifact@2
inputs:
buildType: 'current'
artifactName: 'WindowsWrapperBinary'
itemPattern: '*.dll'
targetPath: '$(Pipeline.Workspace)'
- task: DownloadPipelineArtifact@2
inputs:
buildType: 'current'
artifactName: 'LinuxWrapperBinary'
itemPattern: '*.so'
targetPath: '$(Pipeline.Workspace)'
- task: CopyFiles@2
displayName: "Copy CPP Binaries"
inputs:
SourceFolder: '$(Pipeline.Workspace)'
Contents: |
*.dll
*.so
      TargetFolder: '$(Pipeline.Workspace)/src/Pendleton.JsonSchemaValidator'
OverWrite: true
- task: DotNetCoreCLI@2
displayName: "Build Pendleton.JsonSchemaValidator"
inputs:
command: 'build'
projects: '**/*.csproj'
- task: DotNetCoreCLI@2
displayName: "Run Tests"
inputs:
command: 'test'
projects: '**/*.csproj'
|
azure-pipelines.yml
|
---
version: '3.5'
services:
ksql-server:
image: confluentinc/${CP_KSQL_IMAGE}:${TAG}
hostname: ksql-server
container_name: ksql-server
ports:
- "8089:8089"
environment:
KSQL_HOST_NAME: ksql-server
KSQL_CONFIG_DIR: "/etc/ksql"
KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
KSQL_LISTENERS: "http://0.0.0.0:8089"
KSQL_AUTO_OFFSET_RESET: "earliest"
KSQL_COMMIT_INTERVAL_MS: 0
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
KSQL_KSQL_SCHEMA_REGISTRY_URL: $SCHEMA_REGISTRY_URL
KSQL_KSQL_SCHEMA_REGISTRY_BASIC_AUTH_CREDENTIALS_SOURCE: $BASIC_AUTH_CREDENTIALS_SOURCE
KSQL_KSQL_SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO: $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO
KSQL_BOOTSTRAP_SERVERS: $BOOTSTRAP_SERVERS
KSQL_SECURITY_PROTOCOL: "SASL_SSL"
KSQL_SASL_JAAS_CONFIG: $SASL_JAAS_CONFIG
KSQL_SASL_MECHANISM: "PLAIN"
KSQL_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: "HTTPS"
KSQL_KSQL_STREAMS_PRODUCER_RETRIES: 2147483647
KSQL_KSQL_STREAMS_PRODUCER_CONFLUENT_BATCH_EXPIRE_MS: 9223372036854775807
KSQL_KSQL_STREAMS_PRODUCER_REQUEST_TIMEOUT_MS: 300000
KSQL_KSQL_STREAMS_PRODUCER_MAX_BLOCK_MS: 9223372036854775807
KSQL_KSQL_STREAMS_PRODUCER_DELIVERY_TIMEOUT_MS: 2147483647
KSQL_KSQL_STREAMS_REPLICATION_FACTOR: 3
KSQL_KSQL_INTERNAL_TOPIC_REPLICAS: 3
KSQL_KSQL_SINK_REPLICAS: 3
# Producer Confluent Monitoring Interceptors for Control Center streams monitoring
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_PRODUCER_CONFLUENT_MONITORING_INTERCEPTOR_SASL_MECHANISM: PLAIN
KSQL_PRODUCER_CONFLUENT_MONITORING_INTERCEPTOR_SECURITY_PROTOCOL: "SASL_SSL"
KSQL_PRODUCER_CONFLUENT_MONITORING_INTERCEPTOR_SASL_JAAS_CONFIG: $SASL_JAAS_CONFIG
# Consumer Confluent Monitoring Interceptors for Control Center streams monitoring
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_CONSUMER_CONFLUENT_MONITORING_INTERCEPTOR_SASL_MECHANISM: PLAIN
KSQL_CONSUMER_CONFLUENT_MONITORING_INTERCEPTOR_SECURITY_PROTOCOL: "SASL_SSL"
KSQL_CONSUMER_CONFLUENT_MONITORING_INTERCEPTOR_SASL_JAAS_CONFIG: $SASL_JAAS_CONFIG
ksql-cli:
image: confluentinc/${CP_KSQL_CLI_IMAGE}
container_name: ksql-cli
entrypoint: /bin/sh
tty: true
|
troubleshooting/ksql-avro-ccloud/docker-compose.yml
|
# Whether or not the plugin blocks inappropriate party names
block-inappropriate-names: false
# Add words you would like to block here.
blocked-names:
wordlist:
- 'blocked'
- 'words'
- 'these'
- 'are'
- 'placeholders'
# Whether or not the plugin uses permissions
# If set to false everyone will have permission to use PartyChat
# If set to true only players with the permission partychat.use will be able to use PartyChat
use-permissions: false
# Change whether or not all messages sent in PartyChat are logged to console
console-log: true
# Change whether or not the plugin formats hex color codes in normal chat
format-chat: true
# Whether or not staff sees all PartyChat messages by default
# This can be toggled with /spy or /partyspy
auto-spy: true
# The maximum amount of characters allowed for a party name. This does not include color codes.
max-characters: 20
# How long it takes for a party invite/join request to time out (in seconds)
expire-time: 60
# How long a player must wait before attempting to join the same party again (in seconds)
block-time: 60
# Whether or not parties are public by default when created
# If set to false new parties will automatically be private unless toggled
# This is set to true by default
public-on-creation: true
# Whether or not the party summon feature should be used
# Set to true to disable /party summon
disable-party-summon: false
# Whether or not parties will save on restart
# If this is enabled players will not be kicked from a party when they leave the server
persistent-parties: false
# If this is set to true parties will be written to the database file when updated (ex. player join, party rename)
# When false (default) parties will only be saved to the file on restart
# If persistent parties is set to false this will do nothing
# Note: This is an option and not done by default because if there are a lot of parties this could cause lag
party-save-on-update: false
# Disable guis if you don't want them
# Note: this is ignored and guis are automatically disabled if you're running a version below 1.12.2
disable-guis: false
# If enabled players who are vanished will not be shown in the list of party members
hide-vanished-players: true
# Don't set to true unless you want players to see random messages or other undesirable behavior
# No seriously don't turn this on unless you actually need to debug something
# If you have to turn this on you should probably contact the developer
debug: false
|
src/main/resources/config.yml
|
version: '3.8'
networks:
global-network:
external: true
services:
traefik:
container_name: traefik
image: traefik:latest
restart: always
labels:
- traefik.enable=true
- traefik.docker.network=global-network
- traefik.http.routers.api.service=api@internal
- traefik.http.routers.api.entrypoints=https
- traefik.http.routers.api.tls=true
- traefik.http.routers.api.tls.certresolver=le
- traefik.http.routers.api.rule=Host(`t.ibolit.dev`)
- traefik.http.services.api.loadbalancer.server.port=8080
- traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$o5my7tj4$$/yqDu6Dnplxd7uHBWKWx9/
- traefik.http.routers.api.middlewares=auth
ports:
- 80:80
- 443:443
networks:
- global-network
volumes:
- ./traefik.yml:/etc/traefik/traefik.yml
- ./storage:/storage
- /var/run/docker.sock:/var/run/docker.sock
# depends_on:
# - redis
# redis:
# container_name: redis
# image: redis:5-alpine
# restart: always
app:
container_name: app
build:
context: .
dockerfile: ./Dockerfile
ports:
- 3000:3000
networks:
- global-network
restart: always
environment:
- "PEERCALLS_NETWORK_TYPE=sfu"
# - "PEERCALLS_STORE_TYPE=redis"
- "PEERCALLS_NETWORK_SFU_JITTER_BUFFER=true"
labels:
- traefik.enable=true
- traefik.docker.network=global-network
- traefik.http.services.meet-service.loadbalancer.server.port=3000
- traefik.http.routers.meet.service=meet-service
- traefik.http.routers.meet.entrypoints=http
- traefik.http.routers.meet.rule=Host(`meet.ibolit.dev`)
- traefik.http.routers.ssl-meet.service=meet-service
- traefik.http.routers.ssl-meet.entrypoints=https
- traefik.http.routers.meet.tls=true
- traefik.http.routers.meet.tls.certresolver=le
- traefik.http.routers.ssl-meet.rule=Host(`meet.ibolit.dev`)
- traefik.http.routers.ssl-meet.tls=true
- traefik.tcp.services.meet-service.loadbalancer.server.port=3000
- traefik.tcp.routers.tcp-meet.service=meet-service
- traefik.tcp.routers.tcp-meet.entrypoints=http
- traefik.tcp.routers.tcp-meet.rule=HostSNI(`meet.ibolit.dev`)
# depends_on:
# - redis
|
docker-compose.yml
|
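The Traefik `basicauth` label in docker-compose.yml above stores its htpasswd entry with every `$` doubled (`$$apr1$$...`), because Compose treats a single `$` as variable interpolation and `$$` as a literal dollar sign. A minimal sketch of that escaping step, with a hypothetical helper name not taken from any tool:

```python
def escape_for_compose(htpasswd_line: str) -> str:
    """Double each '$' so Compose passes the htpasswd hash through literally.

    Compose expands `$VAR`/`${VAR}`; `$$` is the escape for a literal '$',
    so an apr1/bcrypt hash must have all its dollar signs doubled before
    being placed in a label or environment value.
    """
    return htpasswd_line.replace("$", "$$")
```

For example, the raw output of `htpasswd -nb admin secret` would be run through this helper before being pasted into the `basicauth.users` label.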
---
- name: Set RAID status
set_fact:
raid_type: true
raid_controller_sensor: "{{ idrac_info.system_info.ControllerSensor[my_idx3].FQDD }}"
raid_enclosure_name: "Enclosure.Internal.0-1:{{ idrac_info.system_info.ControllerSensor[my_idx3].FQDD }}"
raid_vd_status: "{{ idrac_info.system_info.VirtualDisk is defined and idrac_info.system_info.VirtualDisk[0].Name == \"omnia_vd\" }}"
with_items: "{{ idrac_info.system_info.Controller }}"
loop_control:
index_var: my_idx3
when: '"RAID" in idrac_info.system_info.ControllerSensor[my_idx3].FQDD'
- name: View existing storage details
dellemc.openmanage.dellemc_idrac_storage_volume:
idrac_ip: "{{ inventory_hostname }}"
idrac_user: "{{ idrac_username }}"
idrac_password: "{{ <PASSWORD> }}"
state: "view"
register: idrac_volume_list
when:
- raid_type
- not raid_vd_status
- name: Set drives details
set_fact:
drives_id: "{{ idrac_volume_list.storage_status.Message.Controller[raid_controller_sensor].Enclosure[raid_enclosure_name].PhysicalDisk }}"
drives_count: "{{ idrac_volume_list.storage_status.Message.Controller[raid_controller_sensor].Enclosure[raid_enclosure_name].PhysicalDisk | length }}"
when:
- raid_type
- not raid_vd_status
- idrac_volume_list.storage_status.Message.Controller[raid_controller_sensor].Enclosure[raid_enclosure_name].PhysicalDisk is defined
- name: Create VD
dellemc.openmanage.dellemc_idrac_storage_volume:
idrac_ip: "{{ inventory_hostname }}"
idrac_user: "{{ idrac_username }}"
idrac_password: "{{ <PASSWORD> }}"
state: "create"
controller_id: "{{ raid_controller_sensor }}"
raid_reset_config: "True"
volume_type: "{{ raid_level }}"
raid_init_operation: "Fast"
volumes:
- name: "omnia_vd"
span_length: "{{ drives_count }}"
drives:
id: "{{ drives_id }}"
register: create_vd_status
when:
- raid_type
- not raid_vd_status
- idrac_volume_list.storage_status.Message.Controller[raid_controller_sensor].Enclosure[raid_enclosure_name].PhysicalDisk is defined
|
control_plane/roles/provision_idrac/tasks/create_vd.yml
|
controller_list:
- name: left_arm_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- left_arm_shoulder_pitch_joint
- left_arm_shoulder_roll_joint
- left_arm_shoulder_yaw_joint
- left_arm_elbow_roll_joint
- left_arm_forearm_yaw_joint
- left_arm_wrist_roll_joint
- name: right_arm_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- right_arm_shoulder_pitch_joint
- right_arm_shoulder_roll_joint
- right_arm_shoulder_yaw_joint
- right_arm_elbow_roll_joint
- right_arm_forearm_yaw_joint
- right_arm_wrist_roll_joint
- name: torso_left_arm_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- torso_yaw_joint
- left_arm_shoulder_pitch_joint
- left_arm_shoulder_roll_joint
- left_arm_shoulder_yaw_joint
- left_arm_elbow_roll_joint
- left_arm_forearm_yaw_joint
- left_arm_wrist_roll_joint
- name: torso_right_arm_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- torso_yaw_joint
- right_arm_shoulder_pitch_joint
- right_arm_shoulder_roll_joint
- right_arm_shoulder_yaw_joint
- right_arm_elbow_roll_joint
- right_arm_forearm_yaw_joint
- right_arm_wrist_roll_joint
- name: torso_head_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- torso_yaw_joint
- head_yaw_joint
- head_pitch_joint
- name: head_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- head_yaw_joint
- head_pitch_joint
- name: torso_tray_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- torso_yaw_joint
- tray_pitch_joint
- name: tray_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- tray_pitch_joint
- name: torso_controller
action_ns: follow_joint_trajectory
type: FollowJointTrajectory
default: true
joints:
- torso_yaw_joint
- name: left_hand_controller
action_ns: gripper_action
type: GripperCommand
default: true
parallel: true
joints:
- left_gripper_joint
- name: right_hand_controller
action_ns: gripper_action
type: GripperCommand
default: true
parallel: true
joints:
- right_gripper_joint
|
config/controller.yaml
|
nameWithType: Faces.verifyFaceToFace
type: method
members:
- fullName: com.microsoft.azure.cognitiveservices.vision.faceapi.Faces.verifyFaceToFace(UUID faceId1, UUID faceId2)
name: verifyFaceToFace(UUID faceId1, UUID faceId2)
nameWithType: Faces.verifyFaceToFace(UUID faceId1, UUID faceId2)
parameters:
- description: <p>FaceId of the first face, comes from Face - Detect. </p>
name: faceId1
type: <xref href="UUID?alt=UUID&text=UUID" data-throw-if-not-resolved="False"/>
- description: <p>FaceId of the second face, comes from Face - Detect. </p>
name: faceId2
type: <xref href="UUID?alt=UUID&text=UUID" data-throw-if-not-resolved="False"/>
exceptions:
- type: <xref href="IllegalArgumentException?alt=IllegalArgumentException&text=IllegalArgumentException" data-throw-if-not-resolved="False"/>
description: <p>thrown if parameters fail the validation </p>
- type: <xref href="APIErrorException?alt=APIErrorException&text=APIErrorException" data-throw-if-not-resolved="False"/>
description: <p>thrown if the request is rejected by server </p>
- type: <xref href="RuntimeException?alt=RuntimeException&text=RuntimeException" data-throw-if-not-resolved="False"/>
description: <p>all other wrapped checked exceptions if the request fails to be sent </p>
returns:
description: <p>the VerifyResult object if successful. </p>
type: <xref href="com.microsoft.azure.cognitiveservices.vision.faceapi.models.VerifyResult?alt=com.microsoft.azure.cognitiveservices.vision.faceapi.models.VerifyResult&text=VerifyResult" data-throw-if-not-resolved="False"/>
summary: >-
<p>Verify whether two faces belong to a same person or whether one face belongs to a person.</p>
<p></p>
syntax: public VerifyResult verifyFaceToFace(UUID faceId1, UUID faceId2)
uid: com.microsoft.azure.cognitiveservices.vision.faceapi.Faces.verifyFaceToFace(UUID,UUID)
uid: com.microsoft.azure.cognitiveservices.vision.faceapi.Faces.verifyFaceToFace*
fullName: com.microsoft.azure.cognitiveservices.vision.faceapi.Faces.verifyFaceToFace
name: verifyFaceToFace(UUID faceId1, UUID faceId2)
package: com.microsoft.azure.cognitiveservices.vision.faceapi
metadata: {}
|
legacy/docs-ref-autogen/com.microsoft.azure.cognitiveservices.vision.faceapi.Faces.verifyFaceToFace.yml
|
outbound:
  # Outbound slot locations
slots:
# name: slot name
# slots_group: name of the slot group where slot is contained
  # max_objects: number of objects that can be contained in a slot. The value can be:
  #   less than 0 if the slot has infinite space (e.g. to simulate a conveyor track that removes the object immediately once it is released in the slot),
  #   equal to 1 for a single-space slot (only one object can be contained at a time),
  #   or greater than 1 for a slot that can hold multiple objects at once (e.g. to simulate a basket where an object falls after the release).
# frame: reference frame in which the slot location is defined
# position: position of the slot w.r.t. the reference frame
# quaternion: rotation of the slot w.r.t. the reference frame
# approach_distance: approach and leave distances along x,y,z axes from the location
- {name: "A1", slots_group: "group_A", frame: "conveyor_system", max_objects: -1, position: [0.55, 0.25, 0.16], quaternion: [0.7071068, -0.7071068, 0.000, 0.000], approach_distance: [0, 0, 0.1]}
- {name: "A2", slots_group: "group_A", frame: "conveyor_system", max_objects: -1, position: [0.55, 0.25, 0.16], quaternion: [1.0, 0.0, 0.0, 0.0], approach_distance: [0, 0, 0.1]}
- {name: "A3", slots_group: "group_A", frame: "conveyor_system", max_objects: -1, position: [0.55, 0.25, 0.16], quaternion: [0.7071068, 0.7071068, 0.000, 0.000], approach_distance: [0, 0, 0.1]}
- {name: "B1", slots_group: "group_B", frame: "conveyor_system", max_objects: -1, position: [0.65, 1.60, 0.30], quaternion: [1.0, 0.0, 0.0, 0.0], approach_distance: [0, 0, 0.1]}
- {name: "B2", slots_group: "group_B", frame: "conveyor_system", max_objects: -1, position: [0.65, 1.60, 0.30], quaternion: [0.7071068, -0.7071068, 0.000, 0.000], approach_distance: [0, 0, 0.1]}
- {name: "B3", slots_group: "group_B", frame: "conveyor_system", max_objects: -1, position: [0.65, 1.60, 0.30], quaternion: [0.7071068, 0.7071068, 0.000, 0.000], approach_distance: [0, 0, 0.1]}
# N.B. the object placing position is computed as:
# w_T_s = w_T_f * f_T_s
#
# where:
# w: world frame
# f: reference frame
# s: slot frame
|
manipulation_skills/config/slots_distribution.yaml
|
name: Build WebSite
on: workflow_dispatch
jobs:
build:
runs-on: ubuntu-20.04
strategy:
matrix:
node-version: [12.x]
steps:
- name: Checkout Source
uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install Dependencies
run: |
npm i
sudo apt-get install -y jq
- name: Process Config
id: config
run: |
touch ./_config.json && touch ./_site_config.json
npx js-yaml ./_config.yml >> ./_config.json
npx js-yaml ./src/site_config.yml >> ./_site_config.json
echo "::set-output name=theme::`jq --raw-output .theme ./_config.json`"
echo "::set-output name=theme_name::`jq --raw-output .theme ./_site_config.json`"
echo "::set-output name=plugin::`jq --raw-output .plugin ./_config.json`"
echo "::set-output name=url::`jq --raw-output .url ./_config.json`"
- name: Process Post
run: node -e "require('./src/main').main()"
env:
GITHUB_TOKEN: ${{ secrets.token }}
- name: Build WebSite
run: |
mkdir hexo && npx hexo init hexo
cd hexo
npm i ${{ steps.config.outputs.theme }}
npm i ${{ steps.config.outputs.plugin }}
cp ../src/site_config.yml ./_config.yml && cp ../src/theme_config.yml ./_config.${{ steps.config.outputs.theme_name }}.yml
rm -rf ./source && cp -r ../dist ./source
npx hexo g
- name: Deploy to GitHub Pages with Custom Domain
if: steps.config.outputs.url != 'null'
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_branch: gh-pages
publish_dir: ./hexo/public
force_orphan: true
cname: ${{ steps.config.outputs.url }}
- name: Deploy to GitHub Pages with Default Domain
if: steps.config.outputs.url == 'null'
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_branch: gh-pages
publish_dir: ./hexo/public
force_orphan: true
|
.github/workflows/deploy.yml
|
name: Build/Test/Deploy
on:
push:
branches:
- master
tags:
- v[0-9]+.[0-9]+.[0-9]+
- v[0-9]+.[0-9]+.[0-9]+.[0-9]+
pull_request:
jobs:
run-scripts:
runs-on: ubuntu-latest
strategy:
matrix:
image: ['centos:centos7', 'rockylinux:8', 'quay.io/centos/centos:stream8']
components: ['udt,myproxy,ssh', 'gram5']
# Ignore UDT for the CentOS Stream 9 case because libnice is not available there yet
include:
- image: 'quay.io/centos/centos:stream9'
components: 'myproxy,ssh'
- image: 'quay.io/centos/centos:stream9'
components: 'gram5'
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: build/test on ${{ matrix.image }} with ${{ matrix.components }}
env:
IMAGE: ${{ matrix.image }}
TASK: tests
COMPONENTS: ${{ matrix.components }}
run: travis-ci/setup_tasks.sh
- name: build source tarballs and srpms
# Only run this step for the centos:centos7 case and for only one component selection
if: |
contains(matrix.image , 'centos:centos7') &&
contains(matrix.components , 'udt,myproxy,ssh')
env:
IMAGE: centos:centos7
TASK: srpms
run: travis-ci/setup_tasks.sh
# SSH key recipe from https://www.webfactory.de/blog/use-ssh-key-for-private-repositories-in-github-actions
- name: Establish ssh and upload source tarballs
# Only run this step for the centos:centos7 case
if: |
contains(matrix.image , 'centos:centos7') &&
contains(github.ref , 'refs/tags/')
env:
SSH_AUTH_SOCK: /tmp/ssh_agent.sock
run: |
mkdir -p ~/.ssh
ssh-keyscan github.com >> ~/.ssh/known_hosts
ssh-agent -a "$SSH_AUTH_SOCK" > /dev/null
ssh-add - <<< "${{ secrets.ID_GCTUPLOADER }}"
travis-ci/upload_source_tarballs.sh ${{ github.repository_owner }}
|
.github/workflows/build-test-deploy.yml
|
{{- if not .Values.prometheus.enabled }}
# When multiple Prometheus instances are running outside the mesh in a multi-cluster setup
# remote Prometheus instances could only be reached using mTLS and the local one without it.
# Create a special host to only reach the local Prometheus without mTLS.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: {{ include "prometheus.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "prometheus.name" . }}
app.kubernetes.io/name: {{ include "prometheus.name" . }}
helm.sh/chart: {{ include "backyards.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | replace "+" "_" }}
app.kubernetes.io/part-of: {{ include "backyards.name" . }}
spec:
hosts:
- prometheus.{{ .Release.Namespace }}.svc.cluster.local
ports:
- number: 80
name: http
protocol: HTTP
location: MESH_EXTERNAL
resolution: DNS
endpoints:
- address: {{ required "Prometheus host is required!" .Values.prometheus.host }}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: {{ include "prometheus.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "prometheus.name" . }}
app.kubernetes.io/name: {{ include "prometheus.name" . }}
helm.sh/chart: {{ include "backyards.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | replace "+" "_" }}
app.kubernetes.io/part-of: {{ include "backyards.name" . }}
spec:
host: prometheus.{{ .Release.Namespace }}.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ include "prometheus.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "prometheus.name" . }}
app.kubernetes.io/name: {{ include "prometheus.name" . }}
helm.sh/chart: {{ include "backyards.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | replace "+" "_" }}
app.kubernetes.io/part-of: {{ include "backyards.name" . }}
spec:
hosts:
- {{ required "Prometheus host is required!" .Values.prometheus.host }}
http:
- route:
- destination:
host: prometheus.{{ .Release.Namespace }}.svc.cluster.local
{{- end }}
|
assets/charts/backyards/templates/multicluster-prometheus-workaround.yaml
|
version: "3"
services:
job:
build: ../docker/ops-playground-image
command: "flink run -d /opt/ClickCountJob.jar --bootstrap.servers kafka:9092 --checkpointing"
volumes:
- ./conf:/opt/flink/conf
clickevent-generator:
build: ../docker/ops-playground-image
command: "java -classpath /opt/ClickCountJob.jar:/opt/flink/lib/* org.apache.flink.playgrounds.ops.clickcount.ClickEventGenerator --bootstrap.servers kafka:9092 --topic input"
depends_on:
- kafka
jobmanager1:
image: flink:1.11.0-scala_2.11
command: "jobmanager.sh start-foreground jobmanager1"
hostname: jobmanager1
ports:
- 8081:8081
volumes:
- ./conf:/opt/flink/conf
- flink-checkpoints-directory:/tmp/flink-checkpoints-directory
- flink-savepoints-directory:/tmp/flink-savepoints-directory
- flink-ha-directory:/tmp/flink-ha-directory
jobmanager2:
image: flink:1.11.0-scala_2.11
command: "jobmanager.sh start-foreground jobmanager2"
hostname: jobmanager2
ports:
- 8082:8081
volumes:
- ./conf:/opt/flink/conf
- flink-checkpoints-directory:/tmp/flink-checkpoints-directory
- flink-savepoints-directory:/tmp/flink-savepoints-directory
- flink-ha-directory:/tmp/flink-ha-directory
jobmanager3:
image: flink:1.11.0-scala_2.11
command: "jobmanager.sh start-foreground jobmanager3"
hostname: jobmanager3
ports:
- 8083:8081
volumes:
- ./conf:/opt/flink/conf
- flink-checkpoints-directory:/tmp/flink-checkpoints-directory
- flink-savepoints-directory:/tmp/flink-savepoints-directory
- flink-ha-directory:/tmp/flink-ha-directory
taskmanager:
image: flink:1.11.0-scala_2.11
command: "taskmanager.sh start-foreground"
hostname: taskmanager
volumes:
- ./conf:/opt/flink/conf
- flink-checkpoints-directory:/tmp/flink-checkpoints-directory
- flink-savepoints-directory:/tmp/flink-savepoints-directory
- flink-ha-directory:/tmp/flink-ha-directory
zoo1:
image: zookeeper:3.5
restart: always
hostname: zoo1
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
zoo2:
image: zookeeper:3.5
restart: always
hostname: zoo2
ports:
- 2182:2181
environment:
ZOO_MY_ID: 2
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
zoo3:
image: zookeeper:3.5
restart: always
hostname: zoo3
ports:
- 2183:2181
environment:
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
kafka:
image: wurstmeister/kafka:2.12-2.2.1
environment:
KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
KAFKA_CREATE_TOPICS: "input:2:1, output:2:1"
KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
ports:
- 9094:9094
volumes:
flink-checkpoints-directory:
flink-savepoints-directory:
flink-ha-directory:
|
operations-playground-ha/docker-compose.yaml
|
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: mongodbs.mongodb.com
spec:
group: mongodb.com
names:
kind: MongoDB
listKind: MongoDBList
plural: mongodbs
singular: mongodb
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: MongoDB is the Schema for the mongodbs API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: MongoDBSpec defines the desired state of MongoDB
properties:
featureCompatibilityVersion:
description: FeatureCompatibilityVersion configures the feature compatibility
version that will be set for the deployment
type: string
members:
description: Members is the number of members in the replica set
type: integer
type:
description: Type defines which type of MongoDB deployment the resource
should create
type: string
version:
description: Version defines which version of MongoDB will be used
type: string
required:
- type
- version
type: object
status:
description: MongoDBStatus defines the observed state of MongoDB
properties:
mongoUri:
type: string
phase:
type: string
required:
- mongoUri
- phase
type: object
type: object
version: v1
versions:
- name: v1
served: true
storage: true
|
deploy/crds/mongodb.com_mongodbs_crd.yaml
|
_id: eb028580-b45d-11e9-b05b-bd3bcc960d13
message: >-
Shoe cdm.vomp.zacharythomas.github.io.vau.ew coat ligament, recalcitrant
[URL=http://circulateindia.com/generic-cialis/ - generic cialis[/URL -
[URL=http://earthbeours.com/prednisone/ - prednisone no prescription[/URL -
[URL=http://circulateindia.com/cialis/ - lowest cialis prices[/URL -
[URL=http://casatheodoro.com/prednisone/ - prednisone 10 mg[/URL -
[URL=http://gaiaenergysystems.com/canadian-pharmacy-cialis-20mg/ - northwest
pharmacy canada[/URL - [URL=http://vintagepowderpuff.com/viagra-pills/ -
viagra[/URL - [URL=http://fitnesscabbage.com/pharmacy-online/ - viagra
pharmacy us[/URL -
[URL=http://myquickrecipes.com/prednisone-without-dr-prescription/ -
prednisone[/URL - introduces <a
href="http://circulateindia.com/generic-cialis/">generic cialis</a> <a
href="http://earthbeours.com/prednisone/">prednisone without a
prescription</a> <a href="http://circulateindia.com/cialis/">cialis without a
doctor 20mg</a> <a href="http://casatheodoro.com/prednisone/">prednisone 20 mg
purchase no rx</a> prednisone <a
href="http://gaiaenergysystems.com/canadian-pharmacy-cialis-20mg/">canadian
pharmacy</a> <a href="http://vintagepowderpuff.com/viagra-pills/">gratis
viagra</a> <a href="http://fitnesscabbage.com/pharmacy-online/">canada
pharmacy</a> <a
href="http://myquickrecipes.com/prednisone-without-dr-prescription/">buying
prednisone on the interent</a> remissions chlorambucil radiotherapy
http://circulateindia.com/generic-cialis/ generic cialis
http://earthbeours.com/prednisone/ prednisone online
http://circulateindia.com/cialis/ lowest cialis prices
http://casatheodoro.com/prednisone/ buy prednisone online no prescription
http://gaiaenergysystems.com/canadian-pharmacy-cialis-20mg/ canadian pharmacy
cialis 20mg http://vintagepowderpuff.com/viagra-pills/ pharmacy ecstasy viagra
http://fitnesscabbage.com/pharmacy-online/ pharmacy
http://myquickrecipes.com/prednisone-without-dr-prescription/ prednisone
parametric, diagnostically vasculitis progressing.
name: awuxerez
email: <PASSWORD>
url: 'http://circulateindia.com/generic-cialis/'
hidden: ''
date: '2019-08-01T13:11:48.237Z'
|
_data/comments/elasticsearch-restore/comment-1564665108237.yml
|
options:
openstack-origin:
default: distro
type: string
description: |
Repository from which to install. May be one of the following:
distro (default), ppa:somecustom/ppa, a deb url sources entry,
or a supported Cloud Archive release pocket.
Supported Cloud Archive sources include:
cloud:<series>-<openstack-release>
cloud:<series>-<openstack-release>/updates
cloud:<series>-<openstack-release>/staging
cloud:<series>-<openstack-release>/proposed
For series=Bionic we support cloud archives for openstack-release:
* Train
NOTE: updating this setting to a source that is known to provide
a later version of OpenStack will trigger a software upgrade.
rabbit-user:
default: trove
type: string
description: Username used to access rabbitmq queue
rabbit-vhost:
default: openstack
type: string
description: Rabbitmq vhost
database-user:
default: trove
type: string
description: Username for Trove database access (if enabled)
database:
default: trove
type: string
description: Database name for Trove (if enabled)
debug:
default: False
type: boolean
description: Enable debug logging
verbose:
default: False
type: boolean
description: Enable verbose logging, deprecated in Newton
region:
default: RegionOne
type: string
description: OpenStack Region
keystone-api-version:
default: "2"
type: string
description: none, 2 or 3
trove-volume-support:
default: True
type: boolean
description: A cinder volume will be provisioned for all datastores (if enabled)
trove-datastore-database:
default: mysql
type: string
description: |
One or more database type(s) for Trove datastores, list is comma separated,
options are: mysql, redis, cassandra, mongodb, vertica
trove-database-volume-support:
default: mysql
type: string
  description: |
    One or more database type(s) for which a cinder volume will be
    provisioned per Trove instance (if enabled). List is comma separated;
    options are: mysql, redis, cassandra, mongodb, vertica
default-neutron-networks:
default:
type: string
description: |
List of IDs for management networks which should be attached to the
instance regardless of what NICs are specified in the create API call.
use-nova-server-config-drive:
default: False
type: boolean
description: |
Nova config drive will be used with cloud-init to inject parameters and
files into the database instances.
trove-network-label-regex:
default: ".*"
type: string
description: Regular expression to match neutron network labels to determine
what IP addresses will be displayed by Trove.
trove-ip-regex:
default:
type: string
description: Regular expression to match individual IP addresses to determine
if it will be displayed by Trove.
trove-black-list-regex:
default:
type: string
description: Regular expression to match individual IP addresses to determine
if it should not be displayed by Trove.
|
src/config.yaml
|
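The `openstack-origin` formats enumerated in src/config.yaml (distro, a PPA, a deb sources entry, or a Cloud Archive pocket) can be classified with a small sketch. The helper name and the exact pocket set are illustrative assumptions, not charm-helpers code:

```python
import re

def classify_origin(origin: str) -> str:
    """Classify an openstack-origin value per the formats in config.yaml.

    Hypothetical helper for illustration; the real charm performs richer
    validation. Recognized forms: 'distro', 'ppa:<owner>/<name>',
    'cloud:<series>-<release>[/updates|/staging|/proposed]', and a raw
    'deb ...' sources.list entry.
    """
    if origin == "distro":
        return "distro"
    if origin.startswith("ppa:"):
        return "ppa"
    if re.fullmatch(r"cloud:[a-z]+-[a-z]+(/(updates|staging|proposed))?", origin):
        return "cloud-archive"
    if origin.startswith("deb "):
        return "deb-sources-entry"
    raise ValueError(f"unrecognized openstack-origin: {origin!r}")
```

For the Bionic/Train case called out in the description, the value would be `cloud:bionic-train`, optionally suffixed with a pocket such as `/proposed`.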
---
# vars file for ansible-wal-g
# Storages
## AWS S3 storage setup
walg_aws_config: []
# aws_profile: 'default'
# aws_session_token: ''
# walg_cse_kms_id: ''
# walg_cse_kms_region: ''
# walg_s3_sse: ''
# walg_s3_sse_kms_id: ''
# aws_access_key_id: "{{ walg_s3_id }}"
# aws_secret_access_key: "{{ walg_s3_secret }}"
# aws_region: "{{ walg_s3_region | d(omit) }}"
# aws_endpoint: "{{ walg_s3_endpoint | d(omit) }}"
# aws_s3_force_path_style: "{{ walg_s3_use_path_style | d(false) }}"
# walg_s3_prefix: "s3://{{ walg_s3_bucket | d('backup') }}/{{ walg_s3_prefix | d(omit) }}"
# walg_s3_storage_class: "{{ walg_s3_storage_class | d(omit) }}"
## Azure storage setup
walg_azure_config: []
# azure_storage_access_key: ''
# azure_storage_account: ''
# walg_azure_buffer_size: 33554432
# walg_azure_max_buffers: 3
# walg_az_prefix: 'azure://'
## Google Cloud storage setup
walg_gcs_config: []
# gcs_context_timeout: 3600
# gcs_normalize_prefix: true
# google_application_credentials: 'google_service_account.json'
# walg_gs_prefix: 'gs://'
## Swift object storage setup
walg_swift_config: []
# walg_swift_prefix: 'swift://'
## Backups on files system setup
walg_filesystem_config: []
# walg_file_prefix: '/backup/wal-g'
# Databases
## MongoDB setup
walg_mongodb_config: []
# walg_mongo_oplog_dst: ''
# walg_mongo_oplog_end_ts: ''
## MySQL setup
walg_mysql_config: []
# walg_mysql_ssl_ca: ''
# walg_mysql_datasource_name: ''
# walg_mysql_binlog_dst: ''
# walg_mysql_binlog_replay_command: ''
# walg_mysql_backup_prepare_command: ''
# walg_stream_create_command: ''
# walg_stream_restore_command: ''
## PostgreSQL setup
walg_postgresql_config:
# pguser: "{{ walg_user | d(omit) }}"
# pgpassword: "{{ walg_pg_password | d(omit) }}"
# pgpassfile: "{{ walg_pg_passfile | d('~/.pgpass') }}"
# pgport: "{{ walg_pg_port | d(5432) }}"
pghost: "{{ walg_pg_socket | d('/var/run/postgresql') }}"
# Commons
## Wal-G setup
walg_common_config:
# walg_disk_rate_limit:
# walg_libsodium_key:
# walg_libsodium_key_path:
# walg_network_rate_limit:
# walg_pgp_key:
# walg_pgp_key_passphrase:
# walg_pgp_key_path:
# walg_prevent_wal_overwrite:
# walg_sentinel_user_data:
# walg_tar_size_threshold: 1000000000
# total_bg_uploaded_limit: "{{ walg_backup_upload_bg | d(32) }}"
# walg_compression_method: "{{ walg_backup_compression | d('lz4') }}"
# walg_delta_max_steps: "{{ walg_backup_delta_steps | d(7) }}"
# walg_delta_origin: "{{ walg_backup_delta_origin | d('LATEST') }}"
# walg_download_concurrency: "{{ walg_backup_download | d(10) }}"
# walg_upload_concurrency: "{{ walg_backup_upload | d(10) }}"
# walg_upload_disk_concurrency: "{{ walg_backup_upload_disk | d(2) }}"
|
vars/main.yml
|
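The commented keys in the vars file above mirror WAL-G's environment variables in lower case (e.g. `walg_compression_method`). A small sketch of the mapping, assuming the role exports each uncommented key as an upper-cased environment variable (this is an assumption about the role's behavior, not taken from the file itself):

```python
def to_env(config):
    """Upper-case keys and stringify values, skipping unset (None) entries.

    Assumption: the ansible-wal-g role turns e.g. walg_compression_method
    into the WALG_COMPRESSION_METHOD environment variable for wal-g.
    """
    return {k.upper(): str(v) for k, v in config.items() if v is not None}

print(to_env({"walg_compression_method": "lz4", "walg_delta_max_steps": 7}))
# -> {'WALG_COMPRESSION_METHOD': 'lz4', 'WALG_DELTA_MAX_STEPS': '7'}
```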
---
http_interactions:
- request:
method: get
uri: http://api.themoviedb.org/3/search/list?api_key=c29379565234e20d7cbf4f2e835c3e41&language=en&query=Disney
body:
encoding: US-ASCII
string: ''
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Content-Type:
- application/json; charset=utf-8
User-Agent:
- Ruby
response:
status:
code: 200
message: OK
headers:
Access-Control-Allow-Origin:
- "*"
Cache-Control:
- public, max-age=21600
Content-Encoding:
- gzip
Content-Type:
- application/json;charset=utf-8
Date:
- Tue, 09 Feb 2016 15:17:13 GMT
Etag:
- W/"ac6c2cb0a6a429c27ab1b061843411aa"
Server:
- openresty
Status:
- 200 OK
Vary:
- Accept-Encoding
X-Memc:
- HIT
X-Memc-Age:
- '6600'
X-Memc-Expires:
- '15000'
X-Memc-Key:
- 2101f84ca876c8ea7ab496ce06b3fca9
X-Ratelimit-Limit:
- '40'
X-Ratelimit-Remaining:
- '10'
X-Ratelimit-Reset:
- '1455031036'
Content-Length:
- '809'
Connection:
- keep-alive
body:
encoding: ASCII-8BIT
string: !binary |-
H4sIAAAAAAAAA62WXW/iOBSG/4qVq10JzdiO7SS9gwIq21YgCluV1Qr5E9zJ
F3GgZUbz38fpltFMWthpNZdJ3py8z/GbY38JSr7SwRnqBJV227R2wdk/XwKl
naxsWdsiD86CK+tqUBhQrzXoW5frPThPuXNWgvMiTbVsdB3Q404rUORgtta8
rqzkKZjqVPvboM9r7TqA5+qpytgYK61/njalTVVkh8J/DIutFzVlHuwnW2pl
+Z9BJzB8V1S21kvpn9fBGQk7gVXeHEUYE01wxKDWIU4igglHGvp3vDw76Gmj
d8WShckS+dd07gXN15f1vvQNCLJiZ7W/l/OsuTzK6SVl4WpdLUter73yo7yc
z0Jx9bCfwwWMy9nVNb8ZTi+6mn24L1fB1067n7Ni60CqHTA2zdyBXOcg48DX
dtp/ssg/vEKNDtBEhlJDnWCKCGIophCiGNI2dKP/AdpUvwL9krCr44fbKV/3
F3zTu0R7Kld9daWix9VfRxBPeqciiWMBNeSYIcOZ925U1PIeJj97V43Pd3gX
WzVlRXeRUbovyaeKurvNYLUKexfxEe/P63Hd1HcnQaKQUkE0JoYLYoSEDcmL
RXhP8F5yFCwdm4f5dDuem+Hu/nFudO7mkbi250c4Jjq1cusDDJQG6lD2BA2E
CTOUGyoQ5ER4Gkhki4b9HpquGK27vdt6MamyXb7pzewFvIPVeNN7T6IwZ8j7
TTjCKMY08ckyUcJa1uHbrH/v2E/W822avjT3y5GBBsY0MSiROKEYSkNgxOKw
5TR8o9VjXc5ofw5nt8NquNhtJmYmSrRJQu3Muj7S5WeQZjCdwqAoDjVVFCeI
RjREqskKbhz9iIGit2G83vHgY76/oZUefL6Ag7sBEyM8oX93J1Td1cei/7Rj
ycpvQ35HEnsweKx1rrQa5aYAN09CYIoKXBbKnpyzVIQYmzgUikhIIY4aUNFO
VmvM/h/n68v1erJOmWOUxYRAGfKQxShh/o9lRrbDRMLftAxCift0lKbjaj3g
q/7nTLPzHh6V/cvnSfpvJ6iLmqfL5kzhnibff9ffjxYk+foNkJXPVnIIAAA=
http_version:
recorded_at: Tue, 09 Feb 2016 15:17:13 GMT
recorded_with: VCR 3.0.1
|
spec/vcr/search/list.yml
|
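The cassette above stores the gzipped JSON response body as a YAML `!binary` (base64) scalar. A language-agnostic sketch in Python of recovering such a body (base64-decode, gunzip, then parse JSON); the function name is illustrative and not part of VCR:

```python
import base64
import gzip
import json

def decode_vcr_body(b64_text):
    """Decode a base64-encoded, gzip-compressed JSON body from a cassette."""
    return json.loads(gzip.decompress(base64.b64decode(b64_text)))
```

During replay, VCR itself handles this transparently; decoding by hand like this is only useful when inspecting a recorded cassette.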