hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
875269be8119df9c9400e8f4c854cf5e246decf3 | 3,642 | markdown | Markdown | _posts/2015-11-24-asynchronous-vs-synchronous-execution.markdown | jasonjson/jasonjson.github.io | 76f3c90eb72ef241d09646f8ff976db718e54e48 | [
"MIT"
] | null | null | null | _posts/2015-11-24-asynchronous-vs-synchronous-execution.markdown | jasonjson/jasonjson.github.io | 76f3c90eb72ef241d09646f8ff976db718e54e48 | [
"MIT"
] | null | null | null | _posts/2015-11-24-asynchronous-vs-synchronous-execution.markdown | jasonjson/jasonjson.github.io | 76f3c90eb72ef241d09646f8ff976db718e54e48 | [
"MIT"
] | null | null | null | ---
layout: post
title: Asynchronous vs synchronous execution
date: 2015-11-24 10:56:59.000000000 -05:00
tags:
- Leetcode
categories:
- Reading Notes
author: Jason
---
<p>Explanation 1:</p>
<p>When you execute something synchronously, you wait for it to finish before moving on to another task. When you execute something asynchronously, you can move on to another task before it finishes.</p>
<p>That being said, in the context of computers this translates into executing a process or task on another "thread." A thread is a series of commands--a block of code--that exists as a unit of work. The operating system can manage multiple threads and assign a thread a piece ("slice") of processor time before switching to another thread to give it a turn to do some work. At its core (pardon the pun), a processor can simply execute a command--it has no concept of doing two things at one time. The operating system simulates this by allocating slices of time to different threads.</p>
<p>Now, if you introduce multiple cores/processors into the mix, then things CAN actually happen at the same time. The operating system can allocate time to one thread on the first processor, then allocate the same block of time to another thread on a different processor.</p>
<p>All of this is about allowing the operating system to manage the completion of your task while you can go on in your code and do other things. Asynchronous programming is a complicated topic because of the semantics of how things tie together when you can do them at the same time. There are numerous articles and books on the subject; have a look!</p>
<p>Explanation 2:</p>
<p>As an aside, I should mention that technically, the concept of synchronous vs. asynchronous really does not have anything to do with threads. Although, in general, it would be unusual to find asynchronous tasks running on the same thread, it is possible (see the example below), and it is common to find two or more tasks executing synchronously on separate threads... This concept has to do solely with whether or not a second or subsequent task can be initiated before the other task has completed, and that is all. What thread (or threads), or processes, or CPUs, or indeed, what hardware the task[s] are executed on is not relevant.</p>
<p>ASYNCHRONOUS EXAMPLE. In solving many engineering problems, the software is designed to split up the overall problem into multiple individual tasks, and then execute them asynchronously. Inverting a matrix, or a finite element analysis problem, are good examples. In computing, sorting a list is an example. The quick sort routine, for example, splits the list into two lists, and sorts each of them by calling itself recursively. In both of the above examples, the two tasks can be (and often were) executed asynchronously. They do not need to be on separate threads. Even a machine with one CPU, and only one thread of execution, can be coded to initiate processing of a second task before a first one has completed. The only criterion is that the results of one task are not necessary as inputs to the other task. As long as the start and end times of the tasks overlap (possible only if the output of neither is needed as input to the other), they are being executed asynchronously, no matter how many threads are in use.</p>
<p>SYNCHRONOUS EXAMPLE. Any process consisting of multiple tasks where the tasks must be executed in sequence, but one must be executed on another machine (Fetch and/or update data, get a stock quote from a financial service, etc.). If it's on a separate machine it is on a separate thread, whether synchronous or asynchronous.</p>
| 182.1 | 1,030 | 0.786656 | eng_Latn | 0.999934 |
87526bfe40421311236c529a2742264a466f081d | 210 | md | Markdown | docs/kdy.md | mybanking/kdy_docs | f8f9269e99ea74f27a4eb17f52b15b11cadf81a3 | [
"Apache-2.0"
] | null | null | null | docs/kdy.md | mybanking/kdy_docs | f8f9269e99ea74f27a4eb17f52b15b11cadf81a3 | [
"Apache-2.0"
] | null | null | null | docs/kdy.md | mybanking/kdy_docs | f8f9269e99ea74f27a4eb17f52b15b11cadf81a3 | [
"Apache-2.0"
] | null | null | null | # About Me
My name is Kong Deyan (孔德焱), and I am a software engineering student.
## Where to Find Me
- **[Github](https://github.com/mybanking/kdy_docs)**
- **WeChat: k18226458822**
- **Email: 1402039615@qq.com**
## What I Do
- Development languages & tools: java, python, c++, linux, git
- Writing: programming-related articles
| 11.666667 | 53 | 0.62381 | yue_Hant | 0.702257 |
87529f8996a0bce6af601f21d11ba500693dd5ab | 1,656 | md | Markdown | _posts/2021-03-03-Regular-Expression.md | RulersOth/RulersOth.blog | b3529fb0fb4cc528c24b1a9e599953c45a082193 | [
"MIT"
] | null | null | null | _posts/2021-03-03-Regular-Expression.md | RulersOth/RulersOth.blog | b3529fb0fb4cc528c24b1a9e599953c45a082193 | [
"MIT"
] | 1 | 2020-09-03T12:05:00.000Z | 2020-09-03T12:05:00.000Z | _posts/2021-03-03-Regular-Expression.md | RulersOth/RulersOth.blog | b3529fb0fb4cc528c24b1a9e599953c45a082193 | [
"MIT"
] | 2 | 2020-09-02T12:06:18.000Z | 2020-09-04T15:10:51.000Z | ---
layout: post
author: JayB
categories: JayB
navigation: true
cover: "assets/images/cover7.jpg"
logo: "assets/images/logo.png"
tags: DevEnv
title: Regex Expression Summary
subclass: "post tag-DevEnv"
---
<br>
When using regex, be mindful of how the `multiple` and `global` flags are used.
<br>
## Syntax Summary
<br>
### Groups and ranges
| Character | Meaning                                                              |
| --------- | -------------------------------------------------------------------- |
| `\|`      | alternation (or)                                                      |
| `()`      | capturing group                                                       |
| `[]`      | character set; matches any one character inside the brackets          |
| `[^]`     | negated character set; matches any character not inside the brackets  |
| `(?:)`    | non-capturing group (matches but does not remember the match)         |
### Quantifiers
| Character    | Meaning                   |
| ------------ | ------------------------- |
| `?`          | zero or one               |
| `*`          | zero or more              |
| `+`          | one or more               |
| `{n}`        | exactly n times           |
| `{min,}`     | at least min times        |
| `{min,max}`  | between min and max times |
### Boundary-type
| Character | Meaning             |
| --------- | ------------------- |
| `\b`      | word boundary       |
| `\B`      | not a word boundary |
| `^`       | start of the string |
| `$`       | end of the string   |
### Character classes
| Character | Meaning                        |
| --------- | ------------------------------ |
| `\`       | escapes a special character    |
| `.`       | any character (except newline) |
| `\d`      | digit                          |
| `\D`      | not a digit                    |
| `\w`      | word character                 |
| `\W`      | not a word character           |
| `\s`      | whitespace                     |
| `\S`      | not whitespace                 |
| 26.709677 | 53 | 0.291667 | kor_Hang | 0.997705 |
87534681244a9286fa8d094833abbcd631fc2bb1 | 958 | md | Markdown | README.md | julian-r/angular-qtip2-directive | 7fcad63f93086efcb8abde2abda62106478a3225 | [
"MIT"
] | null | null | null | README.md | julian-r/angular-qtip2-directive | 7fcad63f93086efcb8abde2abda62106478a3225 | [
"MIT"
] | null | null | null | README.md | julian-r/angular-qtip2-directive | 7fcad63f93086efcb8abde2abda62106478a3225 | [
"MIT"
] | 1 | 2015-01-22T03:47:52.000Z | 2015-01-22T03:47:52.000Z | # Angular Qtip2 Directive
Angular directive for [Qtip2](http://qtip2.com/)
## Usage
Install with [Bower](http://bower.io)
bower install angular-qtip2-directive --save-dev
Include the `qtip2` script in your app and add `qtip2` as a module dependency to your app. Use the directive on the elements with the attribute `qtip`:
<a href="/cool-stuff" qtip="Text to appear in the tip">Hover me</a>
List of attributes you can use:
* `qtip-content` or `qtip`: Content that will appear in the tip
* `qtip-title`: Title that will appear in the tip
* `qtip-my`: position of the tip arrow - optional: defaults to `bottom center`
* `qtip-at`: position of the tip - optional: defaults to `top center`
* `qtip-class`: class to use on the tip - optional: defaults to `qtip`
* `qtip-visible`: a scope variable to trigger the visibility externally
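Putting several of the attributes together (an illustrative snippet; the link text and the `qtip-dark` style class are assumptions, not from the directive's docs):

```html
<a href="/cool-stuff"
   qtip="Text to appear in the tip"
   qtip-title="More info"
   qtip-my="bottom center"
   qtip-at="top center"
   qtip-class="qtip-dark">Hover me</a>
```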
For more details about the options see the [Qtip2 documentation](http://qtip2.com/demos#section-positioning)
| 38.32 | 152 | 0.732777 | eng_Latn | 0.98651 |
8753eac9ecc685e6892eb001b92642268778a8ab | 40 | md | Markdown | README.md | victoryckl/web-demos | 08a86b38cd5959f7da0926d334e8b1267df1455d | [
"MIT"
] | null | null | null | README.md | victoryckl/web-demos | 08a86b38cd5959f7da0926d334e8b1267df1455d | [
"MIT"
] | null | null | null | README.md | victoryckl/web-demos | 08a86b38cd5959f7da0926d334e8b1267df1455d | [
"MIT"
] | null | null | null | web-demos
=========
some web app demos
| 8 | 18 | 0.575 | eng_Latn | 0.457369 |
8753fc4560daa40e27e8a29f49088c1d205e1621 | 2,921 | md | Markdown | README.md | zellnerFinstreet/datev | 800126f8b1bd5e53013762dbf0787e185505e5c0 | [
"MIT"
] | null | null | null | README.md | zellnerFinstreet/datev | 800126f8b1bd5e53013762dbf0787e185505e5c0 | [
"MIT"
] | null | null | null | README.md | zellnerFinstreet/datev | 800126f8b1bd5e53013762dbf0787e185505e5c0 | [
"MIT"
] | 1 | 2019-12-03T11:08:56.000Z | 2019-12-03T11:08:56.000Z | # Datev
Ruby gem to export bookings and more to DATEV format as CSV file
Supported DATEV Version: 7.2
[](https://travis-ci.org/ledermann/datev)
[](https://codeclimate.com/github/ledermann/datev)
[](https://coveralls.io/github/ledermann/datev?branch=master)
## Installation
Add this line to your application's Gemfile:
```ruby
gem 'datev'
```
And then execute:
$ bundle
Or install it yourself as:
$ gem install datev
## Usage
To export bookings, you need an BookingExport instance with an array of records. Example:
```ruby
export = Datev::BookingExport.new(
'Herkunft' => 'XY',
'Exportiert von' => 'Chief Accounting Officer',
'Berater' => 1001,
'Mandant' => 456,
'WJ-Beginn' => Date.new(2018,1,1),
'Datum vom' => Date.new(2018,2,1),
'Datum bis' => Date.new(2018,2,28),
'Bezeichnung' => 'Beispiel-Buchungen'
) # For available hash keys see /lib/datev/base/header.rb
export << {
'Belegdatum' => Date.new(2018,2,21),
'Buchungstext' => 'Fachbuch: Controlling für Dummies',
'Umsatz (ohne Soll/Haben-Kz)' => 24.95,
'Soll/Haben-Kennzeichen' => 'H',
'Konto' => 1200,
'Gegenkonto (ohne BU-Schlüssel)' => 4940,
'BU-Schlüssel' => '8'
} # For available hash keys see /lib/datev/base/booking.rb
export << {
'Belegdatum' => Date.new(2018,2,22),
'Buchungstext' => 'Honorar FiBu-Seminar',
'Umsatz (ohne Soll/Haben-Kz)' => 5950.00,
'Soll/Haben-Kennzeichen' => 'S',
'Konto' => 10000,
'Gegenkonto (ohne BU-Schlüssel)' => 8400,
'Belegfeld 1' => 'RE201802-135'
}
export.to_file('EXTF_Buchungsstapel.csv')
```
Result: [CSV file](examples/EXTF_Buchungsstapel.csv)
All records are validated against the defined schema.
Beside bookings, some other exports are available, too:
* `AccountExport` ("Kontenbeschriftungen")
* `ContactExport` ("Stammdaten")
## Development
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/ledermann/datev. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
## License
The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
| 32.820225 | 284 | 0.657309 | eng_Latn | 0.457406 |
8754b435a1609ac0d47c2cac9ae6403e38246277 | 2,275 | md | Markdown | docs/add-ons/keda.md | Nuatu/aws-eks-accelerator-for-terraform | ee7468c52eefa52b5ecfae7a2ffc58b34156550e | [
"MIT-0"
] | null | null | null | docs/add-ons/keda.md | Nuatu/aws-eks-accelerator-for-terraform | ee7468c52eefa52b5ecfae7a2ffc58b34156550e | [
"MIT-0"
] | null | null | null | docs/add-ons/keda.md | Nuatu/aws-eks-accelerator-for-terraform | ee7468c52eefa52b5ecfae7a2ffc58b34156550e | [
"MIT-0"
] | null | null | null | # KEDA
KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
KEDA is a single-purpose and lightweight component that can be added to any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplication. With KEDA you can explicitly map the apps you want to use event-driven scale, with other apps continuing to function. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.
[KEDA](https://github.com/kedacore/charts/tree/master/keda) chart bootstraps KEDA infrastructure on a Kubernetes cluster using the Helm package manager.
For complete project documentation, please visit the [KEDA documentation site](https://keda.sh/).
## Usage
KEDA can be deployed by enabling the add-on via the following.
```hcl
enable_keda = true
```
Deploy KEDA with custom `values.yaml`
```hcl
# Optional Map value; pass keda-values.yaml from consumer module
keda_helm_config = {
name = "keda" # (Required) Release name.
repository = "https://kedacore.github.io/charts" # (Optional) Repository URL where to locate the requested chart.
chart = "keda" # (Required) Chart name to be installed.
version = "2.4.0" # (Optional) Specify the exact chart version to install. If this is not specified, it defaults to the version set within default_helm_config: https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/modules/kubernetes-addons/keda/locals.tf
namespace = "keda" # (Optional) The namespace to install the release into. Defaults to default
values = [templatefile("${path.module}/keda-values.yaml", {})]
}
```
### GitOps Configuration
The following properties are made available for use when managing the add-on via GitOps.
```
keda = {
enable = true
serviceAccountName = "<service_account_name>"
}
```
| 52.906977 | 491 | 0.687912 | eng_Latn | 0.982457 |
8754d90c512ecb0b125324d5146e51fc2c34d97b | 3,324 | md | Markdown | _posts/2019-09-01-Download-cargo-hulks-analysis.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | _posts/2019-09-01-Download-cargo-hulks-analysis.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | _posts/2019-09-01-Download-cargo-hulks-analysis.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Cargo hulks analysis book
At what age this takes place I have not undiscovered for long: perhaps two minutes, bright-eyed "More like a few days," Leilani said. " substances, Cargo hulks analysis decides that he must be disposed to lie. For a while-- a day?" Junior glanced over his shoulder even as Celestina turned and fled. On the 17th6th June "Mine's Barry," he said. Any was nothing there. She feared that a single indulgence in the pleasures of Tom Reamy repeatedly with his gaze. high-pitched oscillating whistle, this kid, you take the watch. " I called Amanda at noon. (Conclusion) Micky crazily thought of killer bees, given the experience of three decades of public speaking) cargo hulks analysis saw two things at once. The revolution came, but she would never opening is allowed to remain open. shot in the head can have an up side. " They cargo hulks analysis born cargo hulks analysis raised in a bucolic town in Indiana, a sort of as fast as possible partly by chafing, looked for his trunks. It was Christmas before he was done. ), apparently with the intention of pulling shut the insulated steel "You keep sayin' no offense, and Mom knew her stuff, revising her prediction upward. "Sure. file:D|Documents20and20Settingsharry. He knew neither he nor the weatherworker could do anything at all to turn the Roke-wind if it blew against them! It was raining. With two fingers, but did not succeed in his The boy follows his spry companion into this cargo hulks analysis blackness? information concerning the state of the land and the sea, the boy says? And When he realizes that he's the only occupant of the restroom, and the latter said to him,"Go. You mean. loathly, there is a force on its way forward to occupy the nose, Brother Hart sat down to eat. Eve looked at' the cargo hulks analysis, she's going to be exactly pressing his feet into the floorboard nearly hard enough to buckle it, Barty said. There was no sound except the whistling of the wind in the scaffolding. He did not return for two days. 
The dog had gotten her head stuck in the empty cheese-popcorn bag that Curtis had left on He didn't know why he'd spoken her name, it's farther from the sun, vermin. Theel returned borders. In collecting its food the Shakily, 182; ii. He watched her walk away. And did you see. That's what Gelluk's after. Cargo hulks analysis never told The cause of this high morale rests with one programmer in our department, an observance which besides is necessary, a couple of nobody, I guess I'm going to have to start wearing lead brassieres, the bars, inlaid with pearls and jewels, cargo hulks analysis him of his case; whereupon he told them his story and that which had befallen him, I cargo hulks analysis mean for you to push her like that. He disliked the old man for that, by means of the barter he carried on with us and the "You leave your ears in your other pants. Reluctantly withdrawing her hand. " dismiss the message because of the unlikely nature of the messenger, an exact double of my lost love, as if unable to suppress completely an anticipation of cargo hulks analysis objection that he knew would come, and if so, restrained by the consistent teaching and practice of the school and the watchfulness of his colleagues. | 369.333333 | 3,230 | 0.787906 | eng_Latn | 0.999915 |
8755c2dcd1b1d5d11cf4f148b443b8d4b77a6402 | 117 | md | Markdown | _drafts/2021-09-04-template.md | JohnLockwood/johnlockwood.github.io | 6d452fa3ba14086ef4f29559fde70a78a7d50bf2 | [
"MIT"
] | null | null | null | _drafts/2021-09-04-template.md | JohnLockwood/johnlockwood.github.io | 6d452fa3ba14086ef4f29559fde70a78a7d50bf2 | [
"MIT"
] | 3 | 2021-07-22T15:06:02.000Z | 2021-09-18T19:26:44.000Z | _drafts/2021-09-04-template.md | JohnLockwood/johnlockwood.github.io | 6d452fa3ba14086ef4f29559fde70a78a7d50bf2 | [
"MIT"
] | null | null | null | ---
title: Just a Template
date: 2021-09-04
tags: Drafts
comments: True
---
Hi. I am a template in the Drafts folder.
8759089ce7d0a7abd664bd8ae5c96df964071ff7 | 23 | md | Markdown | README.md | phamhung17011993/-p | 173784f0b6e170fa7f1fd24d76d1111800f2dce3 | [
"MIT"
] | null | null | null | README.md | phamhung17011993/-p | 173784f0b6e170fa7f1fd24d76d1111800f2dce3 | [
"MIT"
] | null | null | null | README.md | phamhung17011993/-p | 173784f0b6e170fa7f1fd24d76d1111800f2dce3 | [
"MIT"
] | null | null | null | # hello-world
hack não
| 7.666667 | 13 | 0.73913 | por_Latn | 0.936336 |
875954c3fab8dd329ad29f21d987b6598a474a26 | 161 | md | Markdown | site/content/posts/BBQ-Pulled-pork-sandwich.md | pschramm/foodmonger-redesign | 8f41d130543351c4d18145dcb278122e8221cb72 | [
"MIT"
] | null | null | null | site/content/posts/BBQ-Pulled-pork-sandwich.md | pschramm/foodmonger-redesign | 8f41d130543351c4d18145dcb278122e8221cb72 | [
"MIT"
] | null | null | null | site/content/posts/BBQ-Pulled-pork-sandwich.md | pschramm/foodmonger-redesign | 8f41d130543351c4d18145dcb278122e8221cb72 | [
"MIT"
] | null | null | null | ---
title: "BBQ Pulled Pork Sandwich"
date: 2018-12-26T11:40:41+06:00
image: images/blog/blog-img-2.jpg
---
Pulled Pork Sandwich w/ House BBQ Sauce & SV Slaw!
| 17.888889 | 50 | 0.701863 | eng_Latn | 0.150441 |
875a8fdc514ffd0b93bdfe44bf0891f9588d52e6 | 946 | md | Markdown | docs/rcs/rcs-introduction.md | HannaTul/docs | 2dc556f85d6d4318eadb64f4c96afb1b0d2c0a63 | [
"MIT"
] | 2 | 2020-11-26T08:09:43.000Z | 2020-11-30T06:09:29.000Z | docs/rcs/rcs-introduction.md | HannaTul/docs | 2dc556f85d6d4318eadb64f4c96afb1b0d2c0a63 | [
"MIT"
] | null | null | null | docs/rcs/rcs-introduction.md | HannaTul/docs | 2dc556f85d6d4318eadb64f4c96afb1b0d2c0a63 | [
"MIT"
] | 1 | 2021-01-27T16:39:55.000Z | 2021-01-27T16:39:55.000Z | ---
title: Introduction
excerpt: >-
This API provides Universal Profile 2.0 RCS messaging with granular controls
to allow fallback to SMS when a handset is not RCS enabled. Find out more
today.
next:
pages:
- rcs-http-rest
---
## Overview
Sinch is now offering a limited beta trial of our new A2P RCS solution. RCS allows you to send interactive & personalized messages to consumers right within their native SMS inbox, in supported markets.
## What are the supported APIs?
RCS is supported over our [REST API](doc:rcs-http-rest).
## How do I get started?
If you wish to take part of this trial and are an existing customer, please contact your account manager. If you are a new customer, please fill out the [form on this page](https://www.sinch.com/products/messaging/rcs/).
The REST API is described [here](doc:rcs-http-rest).
## Need help?
If you have any questions, feel free to check out our help section or contact us.
| 33.785714 | 220 | 0.748414 | eng_Latn | 0.993103 |
875bc40b8ecb9b9cc9941ccd190eb59a75308f96 | 7,689 | md | Markdown | ContactApp-2.md | rdc-lda/react-native-playground | 45b0852f22fb7335462107417926e47ccee76ad4 | [
"Apache-2.0"
] | null | null | null | ContactApp-2.md | rdc-lda/react-native-playground | 45b0852f22fb7335462107417926e47ccee76ad4 | [
"Apache-2.0"
] | 1 | 2022-02-10T17:51:32.000Z | 2022-02-10T17:51:32.000Z | ContactApp-2.md | rdc-lda/react-native-playground | 45b0852f22fb7335462107417926e47ccee76ad4 | [
"Apache-2.0"
] | null | null | null | # ContactApp - section 2
My notes creating my ContactApp application - based upon previous sections.
Create the file `./app.config/data.js` from the Udemy course Github.
## Ugly list of emails
For convenience, create a central place to store color choices (./app/config/colors.js):
~~~js
export default {
background: '#ffffff'
}
~~~
Next...
* Import the colors via `import colors from '../config/colors';`
* Import the contact data via `import {contacts} from '../config/data';`
* Add `FlatList` to the imports from react-native: `import { View, Text, FlatList } from 'react-native';`
* Update the render function as shown below:
~~~js
...
render() {
return (
<FlatList
style={{ backgroundColor: colors.background }}
data={contacts}
renderItem={({ item }) =>
<View><Text>{item.email}</Text></View>
}
keyExtractor={(item) => item.email}
/>
)
}
...
~~~
### Extra
Play around with the list, change the `renderItem` to:
~~~html
<View>
<Text>
{item.name.title.capitalizeFirstLetter()}
{item.name.title.length < 5 ? "." : ""}
{item.name.first.capitalizeFirstLetter()}
{item.name.last.capitalizeFirstLetter()}
</Text>
</View>
~~~
... and create the file `./app/helpers/string.js` with:
~~~js
String.prototype.capitalizeFirstLetter = function() {
return this.charAt(0).toUpperCase() + this.slice(1);
}
~~~
Do not forget to import in `Contacts.js`:
~~~js
import '../helpers/string'
~~~
## Organize the components
A component has at least 3 files in its directory:
~~~text
components
└── ListItem
├── ListItem.js
├── index.js
└── styles.js
~~~
* `index.js` - defines what is exported from this component
* `styles.js` - single place where all style related information for this component
* `ComponentName.js` - this is the react native component
### Refactor to component
The current screen has the component logic within `Contact.js`; to externalize this in a separate component update:
`./components/ListItem/index.js`:
~~~js
import ListItem from './ListItem';
import styles from './styles';
export {
ListItem,
styles,
};
~~~
`./components/ListItem/ListItem.js`:
~~~js
import React from 'react';
import { View, Text } from 'react-native';
const ListItem = ({ contact, onPress }) => {
return (
<View>
<Text>{contact.email}</Text>
</View>
)
};
export default ListItem;
~~~
...and update the screen `./app/screens/Contacts.js` to:
~~~js
...
import { ListItem } from '../components/ListItem';
...
renderItem={({ item }) =>
<ListItem contact={item} onPress={() => this.handleRowPress(item)} />
}
...
~~~
## StyleSheet and Flexbox
Useful links:
* https://facebook.github.io/react-native/docs/style
* https://facebook.github.io/react-native/docs/flexbox
Basic example of a style:
~~~js
import { StyleSheet } from 'react-native';
export default StyleSheet.create({
container: {
flex: 1,
alignItems: 'center',
justifyContent: 'center',
},
});
~~~
## Platform API
Allows us to create a platform-specific look and feel (iOS, Android), for example with icons. See the React Native [platform specific code guide](https://facebook.github.io/react-native/docs/platform-specific-code#__docusaurus) for more information.
Basic example:
~~~js
import { Platform, StyleSheet } from 'react-native';
const isAndroid = Platform.OS === 'android';
const icon = Platform.OS === 'ios' ? 'ios-icon' : 'android-icon';
export default StyleSheet.create({
container: {
flex: 1,
alignItems: "center",
justifyContent: "center",
    padding: Platform.OS === 'ios' ? 50 : 80,
},
});
~~~
## Creating helper functions
I prefer to use a more object oriented approach than what the course suggests, hence:
~~~js
String.prototype.capitalizeFirstLetter = function() {
return this.charAt(0).toUpperCase() + this.slice(1);
}
String.prototype.toPhoneNumber = function() {
const modText = this.replace(/[^\d]/g, '');
return modText.replace(/(\d\d\d)(\d\d\d)(\d\d\d\d)/, '$1-$2-$3');
}
~~~
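A quick sanity check of these helpers (the prototype methods are re-declared here so the snippet runs standalone in plain Node; the sample strings are made up):

```javascript
String.prototype.capitalizeFirstLetter = function () {
  return this.charAt(0).toUpperCase() + this.slice(1);
};

String.prototype.toPhoneNumber = function () {
  // Strip everything that is not a digit, then group as 3-3-4.
  const digits = this.replace(/[^\d]/g, '');
  return digits.replace(/(\d\d\d)(\d\d\d)(\d\d\d\d)/, '$1-$2-$3');
};

console.log('jenny'.capitalizeFirstLetter());  // 'Jenny'
console.log('(555) 867-5309'.toPhoneNumber()); // '555-867-5309'
```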
## Install React Native Vector icons
Base repo: https://github.com/oblador/react-native-vector-icons
~~~bash
# Install the npm in our current project as dependency
$ npm install --save react-native-vector-icons
# You no longer need to link these icons to the native platforms
# React Native CLI uses autolinking for native dependencies!!
# $ npx react-native link react-native-vector-icons
# See: https://github.com/react-native-community/cli/blob/master/docs/autolinking.md
~~~
Next, stop the simulators and ReactPackager and run `pod install` under the `./ios` directory:
~~~bash
# Note the RNVectorIcons!
$ cd ios && pod install && cd ..
Detected React Native module pod for RNVectorIcons
Analyzing dependencies
Downloading dependencies
Installing RNVectorIcons (6.6.0)
Generating Pods project
Integrating client project
Pod installation complete! There are 29 dependencies from the Podfile and 27 total pods installed.
~~~
Last, just start the simulators again and the icon sets will be available!
## Icon component
Docs: https://github.com/oblador/react-native-vector-icons#icon-component
Basic example:
~~~js
import Icon from 'react-native-vector-icons/Ionicons';
<Icon
name="ios-arrow-forward"
size={35}
style={{ alignSelf: 'flex-end' }}
color="#ad16ff"
/>
~~~
## Create and use ListItem Component
Big exercise - I would not have been able to do this one without "peeking" into the solution due to my limited knowledge of CSS and JS in general... but I managed to get this up and running and solve all the issues.
### Problem: Unrecognized font family on iOS
The cause was that the dynamic linking step forgot to add the font definitions to my `Info.plist`!
Solution: replace the lines below in my `./ios/ContactApp/Info.plist`:
~~~xml
<key>UIAppFonts</key>
<array />
~~~
...with:
~~~xml
<key>UIAppFonts</key>
<array>
<string>AntDesign.ttf</string>
<string>Entypo.ttf</string>
<string>EvilIcons.ttf</string>
<string>Feather.ttf</string>
<string>FontAwesome.ttf</string>
<string>FontAwesome5_Brands.ttf</string>
<string>FontAwesome5_Regular.ttf</string>
<string>FontAwesome5_Solid.ttf</string>
<string>Foundation.ttf</string>
<string>Ionicons.ttf</string>
<string>MaterialIcons.ttf</string>
<string>MaterialCommunityIcons.ttf</string>
<string>SimpleLineIcons.ttf</string>
<string>Octicons.ttf</string>
<string>Zocial.ttf</string>
</array>
~~~
Restart iOS simulator and Packager solved the issue.
### Problem: Fonts do not render on Android
I noticed a box with an `[X]` icon where I would expect the Ionic font (arrow) to appear... Looks like my fonts are not loading!
Solution: Add the definition below to the Gradle build file `./android/app/build.gradle`.
Replace the lines below in `./android/app/build.gradle`:
~~~gradle
apply from: "../../node_modules/react-native/react.gradle"
~~~
...with:
~~~gradle
apply from: "../../node_modules/react-native/react.gradle"
apply from: "../../node_modules/react-native-vector-icons/fonts.gradle"
~~~
Restart Android simulator and Packager + compile app solved the issue.
### Problem: Thumbnails do not render on Android
None of the images would load on my Android emulator...
Solution: fix Android Wifi (was connected to network, but not able to reach the Internet).
~~~bash
# Start the Android emulator with Google DNS...
$ emulator -avd Galaxy_S10e_API_28 -netdelay none -netspeed full -dns-server 8.8.8.8 &
~~~
| 24.803226 | 253 | 0.679152 | eng_Latn | 0.826227 |
875bda2c3c841ff94efddb782263b3fbd2d3d31a | 15,294 | md | Markdown | _posts/2017-09-05-where-in-the-fluid-am-i.md | roughike/peoplesoft-mods | 273f42fb914f912a5d77960e0c42b68b0bdee0a1 | [
"MIT"
] | 8 | 2019-04-09T22:11:21.000Z | 2021-12-29T00:57:54.000Z | _posts/2017-09-05-where-in-the-fluid-am-i.md | roughike/peoplesoft-mods | 273f42fb914f912a5d77960e0c42b68b0bdee0a1 | [
"MIT"
] | 1 | 2022-03-13T07:02:19.000Z | 2022-03-13T07:02:19.000Z | _posts/2017-09-05-where-in-the-fluid-am-i.md | roughike/peoplesoft-mods | 273f42fb914f912a5d77960e0c42b68b0bdee0a1 | [
"MIT"
] | 3 | 2021-02-28T17:50:43.000Z | 2021-03-01T11:04:07.000Z | ---
id: 1113
title: Where in the Fluid Am I?
date: 2017-09-05T17:29:47+00:00
guid: https://www.peoplesoftmods.com/?p=1113
permalink: /tips-and-tricks/where-in-the-fluid-am-i/
categories:
- Tips and Tricks
---
When navigating in a PeopleSoft system that uses Fluid Navigation, it can be easy to lose your bearings in terms of where you actually are in the Portal Registry. Knowing the exact Content Reference (and how to get to it) in Structure and Content is sometimes crucial when troubleshooting issues. The problem is that the new Fluid Navigation does not directly correlate to the structure of the Portal Registry like it used to in Classic (breadcrumb) Navigation. This results in there being no easy way to determine where a given Fluid page is in Structure and Content. I have recently found that using the combination of a simple Bookmarklet and IScript to be sufficient enough to reveal the Portal Registry path for the pages that I navigate to. In this post, I will share the implementation details of this helpful utility.
<!--more-->
[<span style="text-decoration: underline;"><strong>CLICK HERE</strong></span>](/assets/downloads/PSM_YAH.zip) to download the app designer project. Unzip the project from the downloaded file and import the project from file in App Designer.
There is a Permission List in the project named PSM\_YAH. This Permission List has access to the IScript that provides the Portal Registry path for a given page. Assign the PSM\_YAH Permission List to a Role that you want to have this functionality enabled for.
[<img class="alignnone size-full wp-image-1114" src="/assets/images/2017/09/Assign-Permission-List.png" alt="Assign Permission List" width="929" height="541" srcset="/assets/images/2017/09/Assign-Permission-List.png 929w, /assets/images/2017/09/Assign-Permission-List-300x175.png 300w, /assets/images/2017/09/Assign-Permission-List-768x447.png 768w, /assets/images/2017/09/Assign-Permission-List-653x380.png 653w" sizes="(max-width: 929px) 100vw, 929px" />](/assets/images/2017/09/Assign-Permission-List.png)
Now you should be able to invoke the IScript without receiving an authorization error. You can call the IScript by pasting the following URL into your web browser after authenticating into the application.
<pre><domain>/psc/ps/<portal>/<node>/s/WEBLIB_PSM_YAH.ISCRIPT1.FieldFormula.IScript_YouAreHere</pre>
You should receive a response that states “Current Page Information Not Provided”.
To test the IScript with actual values, you can provide the menu and component URL query parameters to fetch the Portal Registry path for the values. Below is an example call for the User Profiles Portal Registry path.
<pre><domain>/psc/ps/<portal>/<node>/s/WEBLIB_PSM_YAH.ISCRIPT1.FieldFormula.IScript_YouAreHere?menu=MAINTAIN_SECURITY&component=USERMAINT</pre>
The script’s response in this case should be “Root > PeopleTools > Security > User Profiles > User Profiles”.
Alternatively, you can provide the url to a given Content Reference as a query parameter and the script will respond with the Portal Registry path to the Content Reference. Here is an example to get the Portal Registry path for the “User Profiles” Content Reference via its URL:
<pre><domain>/psc/ps/<portal>/<node>/s/WEBLIB_PSM_YAH.ISCRIPT1.FieldFormula.IScript_YouAreHere?url=http://<domain>/psc/ps/<portal>/<node>/c/MAINTAIN_SECURITY.USERMAINT.GBL</pre>
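Assembling these IScript URLs by hand gets error-prone once the query parameters need encoding. As a rough illustration (the host, portal, and node names below are placeholders, not values from this post), the URL can be built like this:

```python
from urllib.parse import urlencode

def build_yah_url(base, portal, node, menu=None, component=None, url=None):
    """Assemble the You-Are-Here IScript URL with encoded query parameters."""
    script = (f"{base}/psc/ps/{portal}/{node}"
              "/s/WEBLIB_PSM_YAH.ISCRIPT1.FieldFormula.IScript_YouAreHere")
    # Keep only the parameters that were actually supplied
    params = {k: v for k, v in
              {"menu": menu, "component": component, "url": url}.items() if v}
    return f"{script}?{urlencode(params)}" if params else script

# Placeholder host/portal/node, mirroring the User Profiles example above
print(build_yah_url("https://example.edu", "EMPLOYEE", "HRMS",
                    menu="MAINTAIN_SECURITY", component="USERMAINT"))
```

`urlencode` takes care of escaping, which matters once you start passing full Content Reference URLs in the `url` parameter.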
The functionality provided by the IScript is helpful, but the call to the IScript needs to be made more functional. To achieve this, I took some pointers from <a href="http://jjmpsj.blogspot.com/" target="_blank">Jim Marion’s</a> blog posts on bookmarklets and PeopleSoft JavaScript development techniques, and created a “You Are Here” bookmarklet. I think exposing the Portal Registry path information for a PeopleSoft page through a bookmarklet provides an acceptable level of usability. To make use of the bookmarklet, drag the link below into your browser’s bookmark bar.
**[PS You Are Here](javascript:(function()%7Bvar%20xhttp%20%3D%20new%20XMLHttpRequest()%3B%20xhttp.onreadystatechange%20%3D%20function()%20%7B%20if%20(this.readyState%20%3D%3D%204%20%26%26%20this.status%20%3D%3D%20200)%20%7B%20var%20yah%20%3D%20(doc.getElementById('youarehere')%20%3F%20doc.getElementById('youarehere')%20%3A%20doc.createElement('div'))%3B%20yah.id%20%3D%20'youarehere'%3B%20yah.style%20%3D%20'text-align%3A%20center%3B%20border%3Asolid%20black%201px%3B'%3B%20yah.innerHTML%20%3D%20this.responseText%3B%20var%20bodyEl%20%3D%20doc.getElementsByTagName(%22BODY%22)%5B0%5D%3B%20bodyEl.insertBefore(yah%2C%20bodyEl.firstChild)%3B%20%7D%20%7D%3B%20var%20currUrl%20%3D%20(!!frames%5B%22TargetContent%22%5D%20%3F%20!!frames%5B%22TargetContent%22%5D.strCurrUrl%20%3F%20frames%5B%22TargetContent%22%5D.strCurrUrl%20%3A%20window.location.href%20%3A%20window.location.href)%3B%20var%20parts%20%3D%20currUrl.match(%2Fps%5Bpc%5D%5C%2F(.%2B%3F)(%3F%3A_(%5Cd%2B))*%3F%5C%2F(.%2B%3F)%5C%2F(.%2B%3F)%5C%2F%5Bchs%5D%5C%2F%2F)%3B%20var%20doc%20%3D%20(frames%5B%22TargetContent%22%5D%20%3F%20frames%5B%22TargetContent%22%5D.document%20%3A%20document)%3B%20var%20divId%20%3D%20(doc.getElementById('pt_pageinfo')%20%3F%20'pt_pageinfo'%20%3A%20parts%5B2%5D%20%3F%20'pt_pageinfo_win'%20%2B%20parts%5B2%5D%20%3A%20'pt_pageinfo_win0')%3B%20var%20pageInfo%20%3D%20doc.getElementById(divId)%3B%20var%20menu%20%3D%20(pageInfo%20%3F%20pageInfo.getAttribute('menu')%20%3A%20'')%3B%20var%20component%20%3D%20(pageInfo%20%3F%20pageInfo.getAttribute('component')%20%3A%20'')%3B%20var%20mode%20%3D%20(pageInfo%20%3F%20pageInfo.getAttribute('mode')%20%3A%20'')%3B%20var%20portalNeeded%20%3D%20(frames%5B%22TargetContent%22%5D%20%3F%20'n'%20%3A%20'y')%3B%20var%20scriptUrl%20%3D%20window.location.origin%20%2B%20%22%2Fpsc%2F%22%20%2B%20parts%5B1%5D%20%2B%20%22%2F%22%20%2B%20parts%5B3%5D%20%2B%20%22%2F%22%20%2B%20parts%5B4%5D%20%2B%20%22%2Fs%2FWEBLIB_PSM_YAH.ISCRIPT1.FieldFormula.IScript_YouAreHere%3Furl%3D%22%20%2B%2
0encodeURIComponent(currUrl)%20%2B%20%22%26menu%3D%22%20%2B%20encodeURIComponent(menu)%20%2B%20%22%26component%3D%22%20%2B%20encodeURIComponent(component)%20%2B%20%22%26p%3D%22%20%2B%20portalNeeded%3B%20xhttp.open(%22GET%22%2C%20scriptUrl%2C%20true)%3B%20xhttp.send()%7D)())**
Now when you are on a PeopleSoft page and you have access to the IScript mentioned above, you can click the bookmarklet to get a div element injected into the page that contains the Portal Registry path information for the current page. Here is an example of when I invoke the script on the Fluid Addresses page:
[<img class="alignnone size-full wp-image-1115" src="/assets/images/2017/09/Fluid-Addresses.png" alt="Fluid Addresses" width="1079" height="800" srcset="/assets/images/2017/09/Fluid-Addresses.png 1079w, /assets/images/2017/09/Fluid-Addresses-300x222.png 300w, /assets/images/2017/09/Fluid-Addresses-768x569.png 768w, /assets/images/2017/09/Fluid-Addresses-1024x759.png 1024w, /assets/images/2017/09/Fluid-Addresses-513x380.png 513w" sizes="(max-width: 1079px) 100vw, 1079px" />](/assets/images/2017/09/Fluid-Addresses.png)
And here is the script’s output for the Fluid Homepage:
[<img class="alignnone size-full wp-image-1116" src="/assets/images/2017/09/Fluid-Home.png" alt="Fluid Home" width="1274" height="924" srcset="/assets/images/2017/09/Fluid-Home.png 1274w, /assets/images/2017/09/Fluid-Home-300x218.png 300w, /assets/images/2017/09/Fluid-Home-768x557.png 768w, /assets/images/2017/09/Fluid-Home-1024x743.png 1024w, /assets/images/2017/09/Fluid-Home-524x380.png 524w" sizes="(max-width: 1274px) 100vw, 1274px" />](/assets/images/2017/09/Fluid-Home.png)
As you can see, each breadcrumb in the outputted Portal Registry path is clickable. Clicking the breadcrumb will take you to the corresponding Content Reference in Structure and Content. When I click the Fluid Homepage breadcrumb, I am taken to the Structure and Content for the Fluid Homepage Content Reference.
[<img class="alignnone size-full wp-image-1117" src="/assets/images/2017/09/Fluid-Home-CREF.png" alt="Fluid Home CREF" width="920" height="748" srcset="/assets/images/2017/09/Fluid-Home-CREF.png 920w, /assets/images/2017/09/Fluid-Home-CREF-300x244.png 300w, /assets/images/2017/09/Fluid-Home-CREF-768x624.png 768w, /assets/images/2017/09/Fluid-Home-CREF-467x380.png 467w" sizes="(max-width: 920px) 100vw, 920px" />](/assets/images/2017/09/Fluid-Home-CREF.png)
The script is also capable of giving the Portal Registry path for non-Menu/Component based Content References. For example, my [online PeopleCode editor](https://www.peoplesoftmods.com/tips-and-tricks/online-peoplecode-editor-project/) is an IScript based Content Reference. When I invoke the bookmarklet on this particular page, the script responds with the correct Portal Registry path.
[<img class="alignnone size-full wp-image-1118" src="/assets/images/2017/09/IScript-CREF.png" alt="IScript CREF" width="1139" height="669" srcset="/assets/images/2017/09/IScript-CREF.png 1139w, /assets/images/2017/09/IScript-CREF-300x176.png 300w, /assets/images/2017/09/IScript-CREF-768x451.png 768w, /assets/images/2017/09/IScript-CREF-1024x601.png 1024w, /assets/images/2017/09/IScript-CREF-647x380.png 647w" sizes="(max-width: 1139px) 100vw, 1139px" />](/assets/images/2017/09/IScript-CREF.png)
I will admit that there are some rough edges with this utility, but I have found it to be very useful for the most part. While this tool is helpful in determining how to get to a particular Content Reference in Structure and Content, it fails to provide the actual path that a user took to get to the given page. For example: Which homepage the user came from, which tile the user clicked on, etc. <a href="https://community.oracle.com/ideas/19018" target="_blank">Dan Iverson has an idea</a> in the My Oracle Support Community <a href="https://docs.oracle.com/cd/E52319_01/infoportal/mosc.html" target="_blank">PeopleTools idea space</a> that seems to propose this functionality. I think having this sort of tracking functionality baked into the application could be useful in troubleshooting and replicating issues.
**Code References**
Bookmarklet JavaScript:
<pre>(function() {
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
var yah = (doc.getElementById('youarehere') ? doc.getElementById('youarehere') : doc.createElement('div'));
yah.id = 'youarehere';
yah.style = 'text-align: center; border:solid black 1px;';
yah.innerHTML = this.responseText;
var bodyEl = doc.getElementsByTagName('BODY')[0];
bodyEl.insertBefore(yah, bodyEl.firstChild);
}
};
var currUrl = (!!frames['TargetContent'] ? !!frames['TargetContent'].strCurrUrl ? frames['TargetContent'].strCurrUrl : window.location.href : window.location.href);
var parts = currUrl.match(/ps[pc]\/(.+?)(?:_(\d+))*?\/(.+?)\/(.+?)\/[chs]\//);
var doc = (frames['TargetContent'] ? frames['TargetContent'].document : document);
var divId = (doc.getElementById('pt_pageinfo') ? 'pt_pageinfo' : parts[2] ? 'pt_pageinfo_win' + parts[2] : 'pt_pageinfo_win0');
var pageInfo = doc.getElementById(divId);
var menu = (pageInfo ? pageInfo.getAttribute('menu') : '');
var component = (pageInfo ? pageInfo.getAttribute('component') : '');
var mode = (pageInfo ? pageInfo.getAttribute('mode') : '');
var portalNeeded = (frames['TargetContent'] ? 'n' : 'y');
var scriptUrl = window.location.origin + '/psc/' + parts[1] + '/' + parts[3] + '/' + parts[4] + '/s/WEBLIB_PSM_YAH.ISCRIPT1.FieldFormula.IScript_YouAreHere?url=' + encodeURIComponent(currUrl) + '&menu=' + encodeURIComponent(menu) + '&component=' + encodeURIComponent(component) + '&p=' + portalNeeded;
xhttp.open('GET', scriptUrl, true);
xhttp.send();
}())</pre>
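The trickiest part of the bookmarklet is the regular expression that pulls the site, window index, portal, and node out of the PIA URL. Here is the same parse in Python against a made-up URL (a sketch for experimentation, not part of the original project):

```python
import re

# Same pattern the bookmarklet applies to the current PIA URL
PIA_RE = re.compile(r"ps[pc]/(.+?)(?:_(\d+))*?/(.+?)/(.+?)/[chs]/")

url = "https://example.edu/psp/ps_2/EMPLOYEE/HRMS/c/MAINTAIN_SECURITY.USERMAINT.GBL"
m = PIA_RE.search(url)
site, win, portal, node = m.groups()
print(site, win, portal, node)  # ps 2 EMPLOYEE HRMS
```

The optional `_(\d+)` group is what captures the window index (`ps_2` above); on URLs without one, that group is simply `None`.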
IScript PeopleCode:
<pre>Declare Function PortalOpen PeopleCode FUNCLIB_PORTAL.PORTAL_GEN_FUNC FieldFormula;
Function IScript_YouAreHere()
Local string &sUrlParam = Unencode(%Request.GetParameter("url"));
Local string &sMenu = Unencode(%Request.GetParameter("menu"));
Local string &sComponent = Unencode(%Request.GetParameter("component"));
Local string &sPortalNeeded = %Request.GetParameter("p");
/* If the required parameters are not provided, then output a message */
If (None(&sMenu) Or
None(&sComponent)) And
None(&sUrlParam) Then
%Response.Write("Current Page Information Not Provided");
Return;
End-If;
Local ApiObject &portal = PortalOpen();
Local ApiObject &sCurrCref;
/* First, try to find the CREF by using the provided Menu and Component */
If &portal.FindCRefByURL(GenerateComponentContentURL(%Portal, %Node, @("Menuname." | &sMenu), "GBL", @("Component." | &sComponent), "", "")) <> Null Then
&sCurrCref = &portal.FindCRefByURL(GenerateComponentContentURL(%Portal, %Node, @("Menuname." | &sMenu), "GBL", @("Component." | &sComponent), "", ""));
Else
/* Second, try to find the CREF by using the provided url (including url query parameters) */
If (&portal.FindCRefByURL(&sUrlParam) <> Null) Then
&sCurrCref = &portal.FindCRefByURL(&sUrlParam);
Else
/* Third, try to find the CREF by using the provided url (Excluding url query parameters) */
If (&portal.FindCRefByURL(Split(&sUrlParam, "?")[1]) <> Null) Then
&sCurrCref = &portal.FindCRefByURL(Split(&sUrlParam, "?")[1]);
Else
/* If all three attempts of getting the current CREF fail, then output a message */
%Response.Write("No Content Reference Exists for the Provided Page Information (URL = " | &sUrlParam | ", " | "Menu = " | &sMenu | ", " | "Component = " | &sComponent | ")");
Return;
End-If;
End-If;
End-If;
/* Check if portal wrapper is needed */
Local string &sSCCrefLinkBase;
Local string &sSCFldrLinkBase;
If &sPortalNeeded = "y" Then
&sSCCrefLinkBase = GenerateComponentPortalURL(%Portal, %Node, MenuName.PORTAL_ADMIN, "GBL", Component.PORTAL_CREF_ADM, "", "");
&sSCFldrLinkBase = GenerateComponentPortalURL(%Portal, %Node, MenuName.PORTAL_ADMIN, "GBL", Component.PORTAL_OBJ_LIST, "", "");
Else
&sSCCrefLinkBase = GenerateComponentContentURL(%Portal, %Node, MenuName.PORTAL_ADMIN, "GBL", Component.PORTAL_CREF_ADM, "", "");
&sSCFldrLinkBase = GenerateComponentContentURL(%Portal, %Node, MenuName.PORTAL_ADMIN, "GBL", Component.PORTAL_OBJ_LIST, "", "");
End-If;
/* Get the current CREF's parent folder */
Local ApiObject &sParentFolder = &portal.FindFolderByName(&sCurrCref.ParentName);
/* Get the link to the CREF in Structure and Content */
Local string &sSCLink = &sSCCrefLinkBase | "?PORTALPARAM_PNAME=" | &sParentFolder.Name | "&PORTALPARAM_CNAME=" | &sCurrCref.Name;
Local string &sYouAreHere = "<a href=" | &sSCLink | ">" | &sCurrCref.Label | "</a>";
While (&sParentFolder <> Null)
/* Get a link to the parent folder in Structure and Content */
&sSCLink = &sSCFldrLinkBase | "?PORTAL_NAME=" | %Portal | "&PORTALPARAM_FNAME=" | &sParentFolder.Name;
&sYouAreHere = "<a href=" | &sSCLink | ">" | &sParentFolder.Label | "</a>" | " > " | &sYouAreHere;
/* Get the parent folder */
&sParentFolder = &portal.FindFolderByName(&sParentFolder.ParentName);
End-While;
%Response.Write(&sYouAreHere);
&portal.close();
End-Function;</pre>
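The heart of the IScript is the While loop that climbs `ParentName` links up to the registry root. Stripped of the PeopleSoft APIs, the same walk looks like this in Python (the registry dict below is a made-up stand-in for the portal registry, not real data):

```python
# Hypothetical miniature "portal registry": folder name -> label + parent
registry = {
    "PORTAL_ROOT_OBJECT": {"label": "Root", "parent": None},
    "PT_PEOPLETOOLS": {"label": "PeopleTools", "parent": "PORTAL_ROOT_OBJECT"},
    "PT_SECURITY": {"label": "Security", "parent": "PT_PEOPLETOOLS"},
    "PT_USER_PROFILES": {"label": "User Profiles", "parent": "PT_SECURITY"},
}

def breadcrumb(cref_label, parent_name):
    """Walk parent links to the root, building a 'Root > ... > Label' path."""
    parts = [cref_label]
    while parent_name is not None:
        folder = registry[parent_name]
        parts.insert(0, folder["label"])   # prepend, like the PeopleCode loop
        parent_name = folder["parent"]
    return " > ".join(parts)

print(breadcrumb("User Profiles", "PT_USER_PROFILES"))
# Root > PeopleTools > Security > User Profiles > User Profiles
```

The real IScript does the same thing, except each breadcrumb segment is wrapped in a link to the matching Structure and Content page.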
---
description: Bans a user from the guild using their ID.
---
# $ban
This function allows you to ban someone from the server using their user ID.
## Fields
This function has 3 fields: 1 required and 2 optional.
1. User ID.
2. Reason \(Optional\)
3. MessagesToDelete\(In days, optional\)
Raw Usage: `$ban[userID;reason (Optional);MessagesToDelete (Optional)]`
## Options
* UserID - The user the bot is banning
* Reason - The reason in the audit logs
* MessagesToDelete - How many days' worth of the banned user's messages to delete
## Usage
```javascript
bot.command({
name: "ban",
code: `
$username[$message] has been banned from the guild.
$ban[$message]
$argsCheck[1;Just enter the User ID of who you want to ban.]
`
});
```
{% hint style="danger" %}
We recommend adding an `$onlyPerms[ban;Error when no permissions.]` at the bottom of the command!
This way only people who have permission to ban will be able to ban other members.
{% endhint %}
Example of the code with `$onlyPerms[]` to avoid bans without permissions:
```javascript
bot.command({
name: "ban",
code: `
$username[$message] has been banned from the guild.
$ban[$message]
$argsCheck[1;Just enter the User ID of who you want to ban.]
$onlyPerms[ban;Only cool people with ban perms can use this!]
`
});
```
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
Where "work" means [this folder and the files in this folder](<https://github.com/icecream17/icecream17.github.io/tree/master/Stuff/notes>)
Except for
* Honeycrisp.jpg, which is from Wikipedia under a different license, [see this link for more details](https://commons.wikimedia.org/wiki/File:Honeycrisp.jpg)
* Screenshot 2020-12-08 182433.png, which is a screenshot of Wikipedia, and is under the CC-BY-SA 3.0 License, see <https://en.wikipedia.org/wiki/Wikipedia:Screenshots_of_Wikipedia#My_screenshot_does_not_include_the_Wikipedia_logo>
---
title: Batch-kill MySQL processlist connections
layout: post
category: mysql
author: 夏泽民
---
https://www.cnblogs.com/bianxj/articles/9605067.html
<!-- more -->
If a large batch of operations can be produced by a series of SELECT statements, then in theory those results can be processed in bulk.
However, MySQL provides no eval-like facility for operating on a result set, so the only option is to save the SELECT output to a temporary file and then execute the statements in that file.
The procedure is as follows:
1. Generate a temporary file of KILL statements from the connection information in the information_schema.processlist table, then execute the generated statements.
mysql> select concat('KILL ',id,';') from information_schema.processlist where user='root';
+------------------------+
| concat('KILL ',id,';') |
+------------------------+
| KILL 3101;             |
| KILL 2946;             |
+------------------------+
2 rows in set (0.00 sec)
mysql>select concat('KILL ',id,';') from information_schema.processlist where user='root' into outfile '/tmp/a.txt';
Query OK, 2 rows affected (0.00 sec)
mysql>source /tmp/a.txt;
Query OK, 0 rows affected (0.00 sec)
2. Kill all current MySQL connections:
mysqladmin -uroot -p processlist|awk -F "|" '{print $2}'|xargs -n 1 mysqladmin -uroot -p kill
Kill the connections run by a specific user (here, sa):
mysqladmin -uroot -p processlist|awk -F "|" '{if($3 == "sa")print $2}'|xargs -n 1 mysqladmin -uroot -p kill
3. Do the same with a shell script
# Kill locked MySQL connections
for id in `mysqladmin processlist|grep -i locked|awk '{print $1}'`
do
mysqladmin kill ${id}
done
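The same KILL-statement generation can also be done client-side instead of with INTO OUTFILE. A minimal Python sketch (the rows and filters below are made up for illustration; a real script would read them from SHOW PROCESSLIST via a driver):

```python
def kill_statements(processlist, user=None, min_time=0):
    """Return KILL statements for matching rows of SHOW PROCESSLIST."""
    return [
        f"KILL {row['Id']};"
        for row in processlist
        if (user is None or row["User"] == user) and row["Time"] >= min_time
    ]

# Hypothetical processlist rows
rows = [
    {"Id": 3101, "User": "root", "Time": 120},
    {"Id": 2946, "User": "root", "Time": 5},
    {"Id": 3055, "User": "app", "Time": 900},
]
print(kill_statements(rows, user="root"))  # ['KILL 3101;', 'KILL 2946;']
```

Filtering in the client (by user, by running time, by state) avoids the FILE privilege and the temporary file entirely.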
---
title: Core reports from Verizon | Microsoft Docs
description: 'You can view usage patterns for your CDN by using the following reports: Bandwidth, Data Transferred, Hits, Cache Statuses, Cache Hit Ratio, and IPv4/IPv6 Data Transferred.'
services: cdn
documentationcenter: ''
author: zhangmanling
manager: erikre
editor: ''
ms.assetid: 5a0d9018-8bdb-48ff-84df-23648ebcf763
ms.service: cdn
ms.workload: tbd
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 01/23/2017
ms.author: mazha
ms.openlocfilehash: 6eb0fe592196466f7f49c21ce38afdf13b254d86
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/23/2019
ms.locfileid: "61061432"
---
# <a name="core-reports-from-verizon"></a>Core reports from Verizon
[!INCLUDE [cdn-verizon-only](../../includes/cdn-verizon-only.md)]
By using Verizon Core Reports from the Manage portal for Verizon profiles, you can view usage patterns for your CDN with the following reports:
* Bandwidth
* Data Transferred
* Hits
* Cache Statuses
* Cache Hit Ratio
* IPv4/IPv6 Data Transferred
## <a name="accessing-verizon-core-reports"></a>Accessing Verizon Core Reports
1. From the CDN profile blade, click the **Manage** button.

The CDN management portal opens.
2. Hover over the **Analytics** tab, then over the **Core Reports** submenu. Click a report in the menu.

3. For each report, select a date range from the **Date Range** list. You can select a predefined date range, such as **Today** or **This Week**, or select **Custom** and enter a date range manually by clicking the calendar icons.
4. After you've selected a date range, click **Go** to generate the report.
5. If you want to export the data in Excel format, click the Excel icon above the **Go** button.
## <a name="bandwidth"></a>Bandwidth
The bandwidth report consists of a graph and a data table that indicate your CDN bandwidth usage for HTTP and HTTPS over a specific period, in Mbps. You can view bandwidth usage across all POPs or for a specific POP. This report lets you view traffic spikes and the distribution of traffic across POPs.
In the **Edge Nodes** list, select **All Edge Nodes** to view traffic for all nodes, or select a specific region.
The report is updated every five minutes.

## <a name="data-transferred"></a>Data Transferred
This report consists of a graph and a data table that indicate your CDN traffic usage for HTTP and HTTPS over a specific period, in GB. You can view traffic usage across all POPs or for a specific POP. This report lets you view traffic spikes and the distribution of traffic across POPs.
In the **Edge Nodes** list, select **All Edge Nodes** to view traffic for all nodes, or select a specific region.
The report is updated every five minutes.

## <a name="hits-status-codes"></a>Hits (status codes)
This report describes the distribution of request status codes for your content. Every content request generates an HTTP status code. The status code describes how the edge POPs handled the request. For example, a 2xx status code indicates that the request was served successfully to a client, while a 4xx status code indicates that an error occurred. For more information about HTTP status codes, see [List of HTTP status codes](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes).

## <a name="cache-statuses"></a>Cache Statuses
This report describes the distribution of cache hits and cache misses for client requests. Because the fastest performance comes from cache hits, you can optimize data-delivery speeds by minimizing cache misses and expired cache hits.
To reduce cache misses, configure your origin server to minimize the use of the following:
* `no-cache` response headers
* Query string caching, unless strictly necessary
* Non-cacheable response codes
To reduce expired cache hits, set an asset's `max-age` to a long period to minimize the number of requests to the origin server.

### <a name="main-cache-statuses-include"></a>Main cache statuses include:
* TCP_HIT: Served from the edge server. The object was in the cache and had not exceeded its max-age.
* TCP_MISS: Served from the origin server. The object was not in the cache, and the response went back to the origin.
* TCP_EXPIRED_MISS: Served from the origin server after revalidation with the origin. The object was in the cache but had exceeded its max-age. Revalidation with the origin resulted in the object being replaced with a fresh response from the origin.
* TCP_EXPIRED_HIT: Served from the edge after revalidation with the origin. The object was in the cache but had exceeded its max-age. Revalidation with the origin server resulted in the cached object not being modified.
### <a name="full-list-of-cache-statuses"></a>Full list of cache statuses
* TCP_HIT - This status is reported when a request is served directly from the POP to the client. An asset is served immediately from a POP when it's cached on the POP closest to the client and has a valid time to live (TTL). TTL is determined by the following response headers:
  * Cache-Control: s-maxage
  * Cache-Control: max-age
  * Expires
* TCP_MISS: This status indicates that a cached version of the requested asset was not found on the POP closest to the client. The asset is requested from an origin server or an origin shield server. If the origin or origin shield server returns an asset, it's served to the client and cached on both the client and the edge server. Otherwise, a non-200 status code (for example, 403 Forbidden or 404 Not Found) is returned.
* TCP_EXPIRED_HIT: This status is reported when a request that targets an asset with an expired TTL is served directly from the POP to the client (for example, when the asset's max-age has expired).
  Typically, an expired request results in a revalidation request to the origin server. For a TCP_EXPIRED_HIT status to occur, the origin server must indicate that a newer version of the asset doesn't exist. This situation typically results in an update of the asset's Cache-Control and Expires headers.
* TCP_EXPIRED_MISS: This status is reported when a newer version of an expired cached asset is served from the POP to the client. This status occurs when the TTL of a cached asset has expired (for example, max-age has expired) and the origin server returns a newer version of the asset. This new version of the asset is served to the client instead of the cached version, and it's also cached on the edge server and the client.
* CONFIG_NOCACHE: This status indicates that a customer-specific configuration on the edge POP prevented the asset from being cached.
* NONE: This status indicates that a cache-content freshness check wasn't performed.
* TCP_CLIENT_REFRESH_MISS: This status is reported when an HTTP client, such as a browser, forces an edge POP to retrieve a new version of a stale asset from the origin server. By default, the servers prevent an HTTP client from forcing the edge servers to retrieve a new version of the asset from the origin server.
* TCP_PARTIAL_HIT: This status is reported when a byte-range request results in a hit for a partially cached asset. The requested byte range is served immediately from the POP to the client.
* UNCACHEABLE: This status is reported when an asset's `Cache-Control` and `Expires` headers indicate that it should not be cached on a POP or by the HTTP client. These types of requests are served from the origin server.
## <a name="cache-hit-ratio"></a>Cache Hit Ratio
This report indicates the percentage of cached requests that were served directly from the cache.
The report provides the following details:
* The requested content was cached on the POP closest to the requester.
* The request was served directly from the edge of the network.
* The request did not require revalidation with the origin server.
The report does not include:
* Requests that are denied because of country/region filtering options.
* Requests for assets whose headers indicate that they shouldn't be cached. For example, `Cache-Control: private`, `Cache-Control: no-cache`, or `Pragma: no-cache` headers prevent an asset from being cached.
* Byte-range requests for partially cached content.
The formula is: (TCP_HIT / (TCP_HIT + TCP_MISS)) * 100

## <a name="ipv4ipv6-data-transferred"></a>IPv4/IPv6 Data Transferred
This report shows the distribution of traffic usage over IPv4 versus IPv6.

## <a name="considerations"></a>Considerations
Reports can be generated only for the last 18 months.
---
title: How to contribute widgets to the developer portal
titleSuffix: Azure API Management
description: Learn about the recommended guidelines to follow when you contribute a widget to the API Management developer portal repository.
author: dlepow
ms.author: apimpm
ms.date: 03/25/2021
ms.service: api-management
ms.topic: how-to
ms.openlocfilehash: c4d3ed2aeaac57f721d23d7c7aa1c70ef14e4294
ms.sourcegitcommit: 425420fe14cf5265d3e7ff31d596be62542837fb
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/20/2021
ms.locfileid: "107741705"
---
# <a name="how-to-contribute-widgets-to-the-api-management-developer-portal"></a>How to contribute widgets to the API Management developer portal
If you'd like to add a widget to the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal), follow this three-step process:
1. Fork the repository.
1. Implement the widget.
1. Open a pull request to include your widget in the official repository.
Your widget will inherit the repository's license. It will be available for [opt-in installation](developer-portal-use-community-widgets.md) in the self-hosted version of the portal. The developer portal team may also decide to include it in the managed version of the portal.
For an example of how to develop a custom widget, see the [implement widgets](developer-portal-implement-widgets.md) tutorial.
## <a name="contribution-guidelines"></a>Contribution guidelines
These guidelines are intended to ensure the security and privacy of your customers and your portals' visitors. Follow them to make sure your contribution is accepted:
1. Place your widget in the `community/widgets/<your-widget-name>` folder.
1. Your widget's name needs to be lowercase and alphanumeric, with dashes separating the words. For example, `my-new-widget`.
1. The folder needs to contain a screenshot of your widget in a published portal.
1. The folder needs to contain a `readme.md` file that follows the template of the `/scaffolds/widget/readme.md` file.
1. The folder may contain an `npm_dependencies` file with the npm commands to install or manage the widget's dependencies.
Explicitly specify the version of each dependency. For example:
```console
npm install azure-storage@2.10.3 axios@0.19.1
```
Il widget deve richiedere dipendenze minime. Ogni dipendenza verrà esaminata attentamente dai revisori. In particolare, la logica di base del widget deve essere open source nella cartella del widget. Non eseguire il wrapping in un pacchetto npm.
1. Le modifiche ai file all'esterno della cartella del widget non sono consentite come parte del contributo di un widget. Ciò include, ma non è limitato, il `/package.json` file.
1. Non è consentito inserire script di rilevamento o inviare dati creati dal cliente a servizi personalizzati.
> [!NOTE]
> È possibile raccogliere dati creati dal cliente solo tramite `Logger` l'interfaccia .
## <a name="next-steps"></a>Passaggi successivi
- Per altre informazioni sui contributi, vedere il [repository GitHub](https://github.com/Azure/api-management-developer-portal/)API Management portale per sviluppatori.
- Vedere [Implementare i widget](developer-portal-implement-widgets.md) per informazioni su come sviluppare un widget personalizzato, passo dopo passo.
- Vedere [Usare i widget della](developer-portal-use-community-widgets.md) community per informazioni su come usare i widget con contributi della community. | 55.545455 | 297 | 0.798418 | ita_Latn | 0.996918 |
875f1d6dad24fb071399157c1c217b2579e99a66 | 6,815 | md | Markdown | articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md | peder-andfrankly/azure-docs.sv-se | 49435a06686fc72ca9cd8c83883c3c3704a6ec72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md | peder-andfrankly/azure-docs.sv-se | 49435a06686fc72ca9cd8c83883c3c3704a6ec72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md | peder-andfrankly/azure-docs.sv-se | 49435a06686fc72ca9cd8c83883c3c3704a6ec72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Använd förutsägelseslutpunkt för att programmatiskt testa bilder med klassificerare – Custom Vision
titleSuffix: Azure Cognitive Services
description: Lär dig hur du använder API:et för att programmatiskt testa bilder med din klassificerare för Custom Vision Service.
services: cognitive-services
author: anrothMSFT
manager: nitinme
ms.service: cognitive-services
ms.subservice: custom-vision
ms.topic: conceptual
ms.date: 04/02/2019
ms.author: anroth
ms.openlocfilehash: 50325b75280160a3fefa5b5487df29a25e53bddd
ms.sourcegitcommit: fbea2708aab06c19524583f7fbdf35e73274f657
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 09/13/2019
ms.locfileid: "70966954"
---
# <a name="use-your-model-with-the-prediction-api"></a>Använd din modell med förutsägelse-API: et
När du har tränat din modell kan du testa bilderna program mässigt genom att skicka dem till förutsägelse-API-slutpunkten.
> [!NOTE]
> Det här dokumentet visar hur du använder C# för att skicka en bild till förutsägelse-API:et. Mer information och exempel finns i [förutsägelse API-referensen](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
## <a name="publish-your-trained-iteration"></a>Publicera din utbildade iteration
Från [Custom Vision-webbsidan](https://customvision.ai), markera projektet och välj sedan fliken __prestanda__.
Om du vill skicka avbildningar till förutsägelse-API: t måste du först publicera din iteration för förutsägelse, som du kan göra genom att välja __publicera__ och ange ett namn för den publicerade iterationen. Detta gör din modell tillgänglig för förutsägelse-API: t för din Custom Vision Azure-resurs.

När din modell har publicerats visas en "Publicerad" etikett bredvid iterationen i den vänstra sid panelen och dess namn visas i beskrivningen av iterationen.

## <a name="get-the-url-and-prediction-key"></a>Hämta URL och förutsägelsenyckel
När din modell har publicerats kan du hämta den information som krävs genom att välja __förutsägelse-URL__. Då öppnas en dialog ruta med information om hur du använder förutsägelse-API, inklusive __förutsägelse-URL__ och __förutsägelse nyckel__.


I den här guiden ska du använda en lokal avbildning, så kopiera URL: en under **om du har en avbildnings fil** till en tillfällig plats. Kopiera även motsvarande __förutsägelse-nyckel__ värde.
## <a name="create-the-application"></a>Skapa programmet
1. Skapa ett nytt C# konsol program i Visual Studio.
1. Använd följande kod som brödtext i filen __Program.cs__.
```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
namespace CVSPredictionSample
{
public static class Program
{
public static void Main()
{
Console.Write("Enter image file path: ");
string imageFilePath = Console.ReadLine();
MakePredictionRequest(imageFilePath).Wait();
Console.WriteLine("\n\nHit ENTER to exit...");
Console.ReadLine();
}
public static async Task MakePredictionRequest(string imageFilePath)
{
var client = new HttpClient();
// Request headers - replace this example key with your valid Prediction-Key.
client.DefaultRequestHeaders.Add("Prediction-Key", "<Your prediction key>");
// Prediction URL - replace this example URL with your valid Prediction URL.
string url = "<Your prediction URL>";
HttpResponseMessage response;
// Request body. Try this sample with a locally stored image.
byte[] byteData = GetImageAsByteArray(imageFilePath);
using (var content = new ByteArrayContent(byteData))
{
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
response = await client.PostAsync(url, content);
Console.WriteLine(await response.Content.ReadAsStringAsync());
}
}
private static byte[] GetImageAsByteArray(string imageFilePath)
{
FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
BinaryReader binaryReader = new BinaryReader(fileStream);
return binaryReader.ReadBytes((int)fileStream.Length);
}
}
}
```
1. Change the following information:
* Set the `namespace` field to the name of your project.
* Replace the `<Your prediction key>` placeholder with the key value you retrieved earlier.
* Replace the `<Your prediction URL>` placeholder with the URL you retrieved earlier.
## <a name="run-the-application"></a>Run the application
When you run the application, you're prompted to enter a path to an image file in the console. The image is then submitted to the Prediction API, and the prediction result is returned as a JSON-formatted string. The following is an example response.
```json
{
"Id":"7796df8e-acbc-45fc-90b4-1b0c81b73639",
"Project":"8622c779-471c-4b6e-842c-67a11deffd7b",
"Iteration":"59ec199d-f3fb-443a-b708-4bca79e1b7f7",
"Created":"2019-03-20T16:47:31.322Z",
"Predictions":[
{"TagId":"d9cb3fa5-1ff3-4e98-8d47-2ef42d7fb373","TagName":"cat", "Probability":1.0},
{"TagId":"9a8d63fb-b6ed-4462-bcff-77ff72084d99","TagName":"dog", "Probability":0.1087869}
]
}
```
## <a name="next-steps"></a>Nästa steg
I den här guiden har du lärt dig hur du skickar avbildningar till din anpassade avbildnings klassificerare/detektor och får ett svar C# program mässigt med SDK: n. Härnäst lär du dig hur du slutför scenarier från slut punkt till slut C#punkt med eller kom igång med ett annat språk-SDK.
* [Quickstart: .NET SDK](csharp-tutorial.md)
* [Snabbstart: Python SDK](python-tutorial.md)
* [Snabbstart: Java SDK](java-tutorial.md)
* [Snabbstart: Node SDK](node-tutorial.md)
* [Snabbstart: Go SDK](go-tutorial.md)
| 47.992958 | 302 | 0.721937 | swe_Latn | 0.976031 |
875fba0b5ba6703629187b590781db376ba13691 | 52 | md | Markdown | SETUP.md | Insainian/company-directory | a3a8ae767f2fbb54165322711443ef3bc6b810c4 | [
"Apache-2.0"
] | 2 | 2019-08-07T21:09:21.000Z | 2020-07-06T20:24:26.000Z | SETUP.md | Insainian/company-directory | a3a8ae767f2fbb54165322711443ef3bc6b810c4 | [
"Apache-2.0"
] | 3 | 2021-03-09T13:45:09.000Z | 2022-01-22T08:27:22.000Z | SETUP.md | Insainian/company-directory | a3a8ae767f2fbb54165322711443ef3bc6b810c4 | [
"Apache-2.0"
] | null | null | null | # Setup
## Initial Setup
```bash
npm install
```
| 5.777778 | 16 | 0.596154 | ind_Latn | 0.294365 |
875fc4b1b77c7f0919c94353833ec62943bc3d0f | 5,847 | md | Markdown | docs/csharp/programming-guide/xmldoc/how-to-use-the-xml-documentation-features.md | emrekizildas/docs.tr-tr | e5ea7f77f482a654c6520be9f3e8f721f9b1b0c2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-06T07:30:24.000Z | 2020-01-06T07:30:24.000Z | docs/csharp/programming-guide/xmldoc/how-to-use-the-xml-documentation-features.md | emrekizildas/docs.tr-tr | e5ea7f77f482a654c6520be9f3e8f721f9b1b0c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/programming-guide/xmldoc/how-to-use-the-xml-documentation-features.md | emrekizildas/docs.tr-tr | e5ea7f77f482a654c6520be9f3e8f721f9b1b0c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to: Use the XML documentation features - C# Programming Guide'
ms.custom: seodec18
ms.date: 06/01/2018
helpviewer_keywords:
- XML documentation [C#]
- C# language, XML documentation features
ms.assetid: 8f33917b-9577-4c9a-818a-640dbbb0b399
ms.openlocfilehash: 06b0c3b7877337d8a5703403af98dbacdf3ea93c
ms.sourcegitcommit: 8a0fe8a2227af612f8b8941bdb8b19d6268748e7
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/03/2019
ms.locfileid: "71834170"
---
# <a name="how-to-use-the-xml-documentation-features"></a>Nasıl yapılır: XML belgeleri özelliklerini kullanma
Aşağıdaki örnek, belgelenen bir türe temel bir genel bakış sağlar.
## <a name="example"></a>Örnek
[!code-csharp[csProgGuideDocComments#15](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideDocComments/CS/DocComments.cs#15)]
Örnek, aşağıdaki içeriklerle bir. xml dosyası oluşturur:
```xml
<?xml version="1.0"?>
<doc>
<assembly>
<name>xmlsample</name>
</assembly>
<members>
<member name="T:TestClass">
<summary>
Class level summary documentation goes here.
</summary>
<remarks>
Longer comments can be associated with a type or member through
the remarks tag.
</remarks>
</member>
<member name="F:TestClass._name">
<summary>
Store for the Name property.
</summary>
</member>
<member name="M:TestClass.#ctor">
<summary>
The class constructor.
</summary>
</member>
<member name="P:TestClass.Name">
<summary>
Name property.
</summary>
<value>
A value tag is used to describe the property value.
</value>
</member>
<member name="M:TestClass.SomeMethod(System.String)">
<summary>
Description for SomeMethod.
</summary>
<param name="s"> Parameter description for s goes here.</param>
<seealso cref="T:System.String">
You can use the cref attribute on any tag to reference a type or member
and the compiler will check that the reference exists.
</seealso>
</member>
<member name="M:TestClass.SomeOtherMethod">
<summary>
Some other method.
</summary>
<returns>
Return values are described through the returns tag.
</returns>
<seealso cref="M:TestClass.SomeMethod(System.String)">
Notice the use of the cref attribute to reference a specific method.
</seealso>
</member>
<member name="M:TestClass.Main(System.String[])">
<summary>
The entry point for the application.
</summary>
<param name="args"> A list of command line arguments.</param>
</member>
<member name="T:TestInterface">
<summary>
Documentation that describes the interface goes here.
</summary>
<remarks>
Details about the interface go here.
</remarks>
</member>
<member name="M:TestInterface.InterfaceMethod(System.Int32)">
<summary>
Documentation that describes the method goes here.
</summary>
<param name="n">
Parameter n requires an integer argument.
</param>
<returns>
The method returns an integer.
</returns>
</member>
</members>
</doc>
```
## <a name="compiling-the-code"></a>Kod derleme
Örneği derlemek için aşağıdaki komut satırını yazın:
`csc XMLsample.cs /doc:XMLsample.xml`
Bu komut, tarayıcınızda görüntüleyebileceğiniz veya TYPE komutunu kullanarak *XMLsample. XML*XML dosyasını oluşturur.
## <a name="robust-programming"></a>Güçlü programlama
XML belgeleri///ile başlar. Yeni bir proje oluşturduğunuzda, sihirbazlar sizin için bazı başlangıç//satır satırları koyar. Bu yorumların işlenmesinde bazı kısıtlamalar vardır:
- Belgeler düzgün biçimlendirilmiş XML olmalıdır. XML doğru biçimlendirilmediyse bir uyarı oluşturulur ve belge dosyası bir hata ile karşılaşıldığını bildiren bir açıklama içerir.
- Geliştiriciler kendi etiket kümesini oluşturmak ücretsizdir. Önerilen bir etiket kümesi vardır ( [belge açıklamaları Için önerilen etiketlere](recommended-tags-for-documentation-comments.md)bakın). Önerilen etiketlerden bazılarının özel anlamları vardır:
- @No__t-0param > etiketi parametreleri tanımlamakta kullanılır. Kullanıldıysa, derleyici parametrenin var olduğunu ve tüm parametrelerin belgelerde açıklandığını doğrular. Doğrulama başarısız olursa, derleyici bir uyarı verir.
- @No__t-0 özniteliği, bir kod öğesine başvuru sağlamak için herhangi bir etikete iliştirilebilir. Derleyici bu kod öğesinin varolduğunu doğrular. Doğrulama başarısız olursa, derleyici bir uyarı verir. Derleyici, `cref` özniteliğinde açıklanan bir türü ararken her bir `using` deyimi uyar.
- @No__t-0summary > etiketi, Visual Studio içinde IntelliSense tarafından bir tür veya üyeyle ilgili ek bilgileri göstermek için kullanılır.
> [!NOTE]
> XML dosyası, tür ve Üyeler hakkında tam bilgi sağlamaz (örneğin, herhangi bir tür bilgisi içermez). Bir tür veya üye hakkında tam bilgi almak için, belge dosyasının gerçek tür veya üye üzerinde yansıma ile birlikte kullanılması gerekir.
## <a name="see-also"></a>Ayrıca bkz.
- [C#Programlama Kılavuzu](../index.md)
- [/Doc (C# derleyici seçenekleri)](../../language-reference/compiler-options/doc-compiler-option.md)
- [XML belge açıklamaları](./index.md)
- [DocFX belge işlemcisi](https://dotnet.github.io/docfx/)
- [Sandrole belge işlemcisi](https://github.com/EWSoftware/SHFB)
| 41.176056 | 291 | 0.668377 | tur_Latn | 0.986141 |
876003a06557025583b59dec0d259f00a02185ab | 875 | md | Markdown | docs/en/commands/tweet.md | arrow2nd/twnyan_pub | b398797e2680202ff1cd89badca5bea6dfc15adc | [
"MIT"
] | null | null | null | docs/en/commands/tweet.md | arrow2nd/twnyan_pub | b398797e2680202ff1cd89badca5bea6dfc15adc | [
"MIT"
] | null | null | null | docs/en/commands/tweet.md | arrow2nd/twnyan_pub | b398797e2680202ff1cd89badca5bea6dfc15adc | [
"MIT"
] | null | null | null | # tweet
Post a tweet.
```
twnyan tweet {[text] [image...] | <command>}
twnyan tw {[text] [image...] | <command>}
```
- Pipe input is also supported (e.g. `echo "nyaan..." | twnyan tweet`)
- If no text is specified, "にゃーん" will be posted
- You can also submit images only (e.g. `tweet cat.png`)
- If there are multiple images, please specify them separated by a single space
## Command
### tweet multi
Post a multi-line tweet.
```
twnyan tweet multi [image...]
twnyan tweet ml [image...]
```
- If there are multiple images, please specify them separated by a single space
- To finish typing, type a semicolon `;` at the end of the sentence
- Enter `:exit` to cancel entry
### tweet remove
Remove a tweet.
```
twnyan tweet remove <tweet-number>...
twnyan tweet rm <tweet-number>...
```
- If there are multiple tweet numbers, please specify them separated by spaces
| 21.875 | 79 | 0.684571 | eng_Latn | 0.99517 |
8761c98623f5e88f4fea1487b923cc668526232c | 1,328 | md | Markdown | content/en/Games/polyomino.md | malaschitz/little-golem-doc | fcf819254352d54fbd12865a5ebc81f92ccabe9d | [
"Apache-2.0"
] | 1 | 2021-12-09T06:16:35.000Z | 2021-12-09T06:16:35.000Z | content/en/Games/polyomino.md | malaschitz/little-golem-doc | fcf819254352d54fbd12865a5ebc81f92ccabe9d | [
"Apache-2.0"
] | null | null | null | content/en/Games/polyomino.md | malaschitz/little-golem-doc | fcf819254352d54fbd12865a5ebc81f92ccabe9d | [
"Apache-2.0"
] | null | null | null | ---
title: "Polyomino"
date: 2021-07-26
weight: 210
description: >
Game inspired with game Blokus Duo 2005
---
Polyomino is a simple board game with [polyominoes](http://en.wikipedia.org/wiki/Polyomino). The game can be played as a paper-and-pencil game.

## Rules
- Each player has a set of polyomino tiles, except in the *hexa* variant.
- The first player places a blue polyomino tile and an orange polyomino tile.
- The second player continues as the blue player, or switches sides so that the first player continues as the blue player.
- The next tile must be placed so that it touches at least one tile of the same color with its corners. The edges of tiles of the same color must not touch.
- If a player cannot place a tile, he passes.
- When neither player can move, the game is finished. The player with the most tiles wins the game.
## Variants
- Mini. Played on an 8x8 board. Each player has 1 monomino, 1 domino, 2 trominoes and 5 tetrominoes.
- Small. Played on a 12x12 board. Each player has two sets from the mini variant.
- Penta. Played on a 14x14 board. Each player has 1 monomino, 1 domino, 2 trominoes, 5 tetrominoes and 12 pentominoes.
- Hexa. Played on a 20x20 board. The game is played with only one common set of pieces: 1 monomino, 1 domino, 2 trominoes, 5 tetrominoes, 12 pentominoes and 35 hexominoes. | 39.058824 | 165 | 0.753765 | eng_Latn | 0.998513 |
8762165a6af6d44365c724dc42a166a6ec3b9bcc | 300 | markdown | Markdown | _posts/2018-10-06-Physical-protection.markdown | aaberhe/SOC-Hub | b6fd65c0854503a4616825d03141cbc8230b9fff | [
"Apache-2.0"
] | 4 | 2018-10-09T17:57:31.000Z | 2018-11-20T17:56:09.000Z | _posts/2018-10-06-Physical-protection.markdown | aaberhe/SOC-Hub | b6fd65c0854503a4616825d03141cbc8230b9fff | [
"Apache-2.0"
] | 1 | 2018-10-10T14:45:39.000Z | 2018-10-10T14:45:39.000Z | _posts/2018-10-06-Physical-protection.markdown | aaberhe/SOC-Hub | b6fd65c0854503a4616825d03141cbc8230b9fff | [
"Apache-2.0"
] | 11 | 2018-01-24T22:18:59.000Z | 2018-05-29T23:16:42.000Z | ---
title: "Physical protection within aggregates"
#author: "Jacqueline E. Pitts"
layout: post
tags: ["organomineral", "mineral associations"]
level1: Secondary controls on carbon turnover
level2: Stabilization-mechanisms
category: "Secondary controls on carbon turnover"
#figures: /img/3bii/
---
| 21.428571 | 49 | 0.766667 | eng_Latn | 0.666215 |
8762de1f0c16c6330235cfaeaead7dd666abf7e7 | 3,679 | md | Markdown | chart/api-gateway/README.md | Stegraphy/api-gateway-25298-LeiCong | 6af0516b6a0b8a5848f5e4c1a611d09cb452ba15 | [
"Apache-2.0"
] | 1 | 2021-10-13T18:49:04.000Z | 2021-10-13T18:49:04.000Z | chart/api-gateway/README.md | Stegraphy/api-gateway-25298-LeiCong | 6af0516b6a0b8a5848f5e4c1a611d09cb452ba15 | [
"Apache-2.0"
] | null | null | null | chart/api-gateway/README.md | Stegraphy/api-gateway-25298-LeiCong | 6af0516b6a0b8a5848f5e4c1a611d09cb452ba15 | [
"Apache-2.0"
] | 1 | 2021-04-24T16:32:21.000Z | 2021-04-24T16:32:21.000Z | 部署文件的渲染模板,我们下文将定义一些变量,helm执行时会将变量渲染进模板文件中。
## _helpers.tpl
这个文件我们用来进行标签模板的定义,以便在上文提到的位置进行标签渲染。
标签总共分为三个部分: 平台、微服务、监控。
### 平台标签
#### deployment 级:
```
{{- define "service.labels.standard" -}}
choerodon.io/release: {{ .Release.Name | quote }}
{{- end -}}
```
The instance ID required by the platform for managing instances.
### Microservice labels
#### pod level:
```
{{- define "service.microservice.labels" -}}
choerodon.io/version: {{ .Chart.Version | quote }}
choerodon.io/service: {{ .Chart.Name | quote }}
choerodon.io/metrics-port: {{ .Values.deployment.managementPort | quote }}
{{- end -}}
```
The version number, project name, and management port that the microservice registry needs for identification.
### Monitoring and logging labels
#### deployment level:
```
{{- define "service.logging.deployment.label" -}}
choerodon.io/logs-parser: {{ .Values.logs.parser | quote }}
{{- end -}}
```
The application label required for log management. This label specifies the application's log format; the built-in formats are `nginx`, `spring-boot`, and `docker`. For spring-boot microservices, use `spring-boot`. If you don't need log collection, remove this block of code and delete the references to `service.logging.deployment.label` from the template files.
#### pod level:
```
{{- define "service.monitoring.pod.annotations" -}}
choerodon.io/metrics-group: {{ .Values.metrics.group | quote }}
choerodon.io/metrics-path: {{ .Values.metrics.path | quote }}
{{- end -}}
```
The application category and metrics path required for performance metrics management. `metrics-group` groups applications by a keyword, and the grafana configuration uses it to display them by group. `metrics-path` specifies the path from which the application's metrics data is collected.
If you don't need monitoring, remove this block of code.
## values.yaml
The key-value pairs in this file are the variables referenced above.
Collecting all the variables in one file makes it convenient to archive them at deployment time and to substitute them flexibly.
In addition, the helm command supports the `--set FOO_BAR=FOOBAR` parameter to assign values to the variables in the values file, which can further simplify the deployment process.
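As a sketch of that option (the release path and the overridden keys below mirror the parameter table in this document, but treat them as placeholders, and note that the exact `helm install` syntax differs between helm 2 and helm 3):

```shell
# Install the chart, overriding individual values.yaml entries on the
# command line instead of editing the file.
helm install ./chart/api-gateway \
    --set replicaCount=2 \
    --set service.enabled=true \
    --set env.open.SPRING_CLOUD_CONFIG_ENABLED=false
```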
## Parameter reference
Parameter | Meaning
--- | ---
replicaCount | number of pod replicas to run
image.repository | image repository address
image.pullPolicy | image pull policy
preJob.timeout | job timeout
preJob.image | job image address
preJob.preConfig.enabled | whether to initialize the manager_service database
preJob.preConfig.configFile | name of the file written to the configuration center during initialization
preJob.preConfig.configType | storage type used by the configuration center during initialization
preJob.preConfig.registerHost | registry address
preJob.preConfig.datasource.url | manager_service database connection URL
preJob.preConfig.datasource.username | manager_service database username
preJob.preConfig.datasource.password | manager_service database password
deployment.managementPort | service management port
env.open.SPRING_CLOUD_CONFIG_ENABLED | whether to enable the configuration center
env.open.SPRING_CLOUD_CONFIG_URI | configuration center address
env.open.SPRING_DATASOURCE_URL | database connection URL
env.open.SPRING_DATASOURCE_USERNAME | database username
env.open.SPRING_DATASOURCE_PASSWORD | database password
env.open.SPRING_CACHE_MULTI_L1_ENABLED | whether to enable the level-1 cache
env.open.SPRING_CACHE_MULTI_L2_ENABLED | whether to enable the level-2 cache
env.open.SPRING_REDIS_HOST | redis host address
env.open.SPRING_REDIS_PORT | redis port
env.open.SPRING_REDIS_DATABASE | redis db
env.open.EUREKA_CLIENT_SERVICEURL_DEFAULTZONE | registry service address
env.open.CHOERODON_GATEWAY_ALLOWED_ORIGIN | CORS configuration
env.open.SKYWALKING_OPTS | skywalking agent configuration
service.enabled | whether to create a k8s service
service.type | service type
service.port | service port
service.name | service name
ingress.enabled | whether to create a k8s ingress
ingress.host | domain name of the service
metrics.path | path from which the application's metrics data is collected
metrics.group | performance metrics application group
logs.parser | log collection format
resources.limits | maximum resources a container may use in k8s
resources.requests | minimum resources a container requests in k8s
### skywalking agent configuration reference
skywalking agent setting | Meaning
--- | ---
javaagent | skywalking agent jar (add it to enable skywalking, remove it to disable)
skywalking.agent.application_code | skywalking application name
skywalking.agent.sample_n_per_3_secs | skywalking sampling rate
skywalking.agent.namespace | header configuration for cross-process trace segments
skywalking.agent.authentication | skywalking authentication token
skywalking.agent.span_limit_per_segment | maximum number of spans per segment
skywalking.agent.ignore_suffix | calls that skywalking should ignore
skywalking.agent.is_open_debugging_class | whether skywalking saves the instrumented bytecode files
skywalking.collector.backend_service | oap service address and port
### skywalking agent configuration example
```yaml
env:
open:
SKYWALKING_OPTS: >-
-javaagent:/agent/skywalking-agent.jar
-Dskywalking.agent.application_code=api-gateway
-Dskywalking.agent.sample_n_per_3_secs=-1
-Dskywalking.collector.backend_service=oap.skywalking:11800
```
| 28.51938 | 160 | 0.78418 | yue_Hant | 0.654311 |
8762fefaaef11b7c90de3538ebfe5d18cee9d2be | 287 | md | Markdown | packages/ajax/dist/npm/README.md | heeroluo/just4 | 54814c8fd70c728aff4de2d453f7e052d555ddd9 | [
"MIT"
] | null | null | null | packages/ajax/dist/npm/README.md | heeroluo/just4 | 54814c8fd70c728aff4de2d453f7e052d555ddd9 | [
"MIT"
] | null | null | null | packages/ajax/dist/npm/README.md | heeroluo/just4 | 54814c8fd70c728aff4de2d453f7e052d555ddd9 | [
"MIT"
] | null | null | null | # @just4/ajax
Provides XMLHttpRequest-based AJAX request APIs.
## Features
- Wraps the whole flow of creating the XMLHttpRequest object, sending the request, and handling the response, and supports cancelling requests.
- In older browsers (IE 9), sends cross-origin requests through XDomainRequest when specific conditions are met.
- Provides Promise-style APIs.
- Supports all mainstream desktop and mobile browsers (the minimum supported IE version is 9).
## Documentation
- [API documentation](https://heeroluo.github.io/just4/ajax/index.html)
| 22.076923 | 60 | 0.735192 | yue_Hant | 0.960828 |
8763e93fb06e93c623366347a847765bb0b7124b | 696 | md | Markdown | README.md | doesdev/hoy | 9205eaa51e36c7cf26283905d503af086209e2cc | [
"MIT"
] | 1 | 2021-11-28T16:25:18.000Z | 2021-11-28T16:25:18.000Z | README.md | doesdev/hoy | 9205eaa51e36c7cf26283905d503af086209e2cc | [
"MIT"
] | 1 | 2017-07-26T01:16:39.000Z | 2017-07-26T22:58:43.000Z | README.md | doesdev/hoy | 9205eaa51e36c7cf26283905d503af086209e2cc | [
"MIT"
] | 1 | 2021-11-28T16:25:39.000Z | 2021-11-28T16:25:39.000Z | # hoy [](https://npmjs.org/package/hoy) [](https://github.com/feross/standard) [](https://dependencyci.com/github/doesdev/hoy)
> Cached breakdown of today's text components
## install
```sh
$ npm install --save hoy
```
## usage
```js
const hoy = require('hoy')
console.log(hoy())
/* {
full: '20170329',
year: '2017',
month: '03',
day: '29',
start: 1490760000000,
end: 1490846399999
} */
```
## License
MIT © [Andrew Carpenter](https://github.com/doesdev)
| 24 | 348 | 0.679598 | yue_Hant | 0.304485 |
876427a582347b6b39a5da8dea4ee767686ebf89 | 163 | md | Markdown | README.md | scoyote/NOAA-GHCN-Analysis | 3835ab4eac1724e47acbe37199b9c5cccff382bd | [
"MIT"
] | null | null | null | README.md | scoyote/NOAA-GHCN-Analysis | 3835ab4eac1724e47acbe37199b9c5cccff382bd | [
"MIT"
] | null | null | null | README.md | scoyote/NOAA-GHCN-Analysis | 3835ab4eac1724e47acbe37199b9c5cccff382bd | [
"MIT"
] | null | null | null | # NOAA-GHCN-Analysis
Someone commented in Feb 2020 that "today was the 23rd sequential rainy Monday" and it got me wondering if that was true. This is the result.
| 54.333333 | 141 | 0.785276 | eng_Latn | 0.999971 |
8764f9ddb7e8c42bb7a744e815f6afdb3005ab89 | 5,113 | md | Markdown | _posts/2021-5-27-kotlin-in-action-chapter8.md | malinkang/malinkang.github.io | 451bc1f4903b6d9674c71e155d871d985ebb4a51 | [
"MIT"
] | 2 | 2017-07-08T10:31:45.000Z | 2019-02-13T09:22:57.000Z | _posts/2021-5-27-kotlin-in-action-chapter8.md | malinkang/malinkang.github.io | 451bc1f4903b6d9674c71e155d871d985ebb4a51 | [
"MIT"
] | null | null | null | _posts/2021-5-27-kotlin-in-action-chapter8.md | malinkang/malinkang.github.io | 451bc1f4903b6d9674c71e155d871d985ebb4a51 | [
"MIT"
] | null | null | null | ---
title: Kotlin in Action reading notes, Chapter 8 - Lambdas as parameters and return values
date: 2018-09-12 12:26:55
tags: ["Kotlin"]
---
## 8.1 Declaring higher-order functions
A higher-order function is a function that takes another function as a parameter, or returns one.
### 8.1.1 Function types

```kotlin
val sum = { x: Int, y: Int -> x + y }
val action = { println(42)}
run {
println(sum(1,2)) //3
}
run{
action() //42
}
```
```kotlin
val sum: (Int, Int) -> Int = { x, y -> x + y } // a function taking two Int parameters and returning an Int
val action: () -> Unit = { println(42) } // a function with no parameters and no return value
```
### 8.1.2 Calling functions passed as arguments
```kotlin
fun twoAndThree(operation: (Int, Int) -> Int) {
val result = operation(2, 3)
println("The result is $result")
}
```
```kotlin
twoAndThree { a, b -> a + b } //The result is 5
twoAndThree { a, b -> a * b } //The result is 6
```
### 8.1.3 Using function types from Java
```java
LambdaTestKt.twoAndThree((a, b) -> a + b); //The result is 5
LambdaTestKt.twoAndThree((a, b) -> a * b); //The result is 6
```
### 8.1.4 Default and null values for parameters with function types
```kotlin
fun <T> Collection<T>.joinToString(separator: String = "",
prefix: String = "",
postfix: String,
transform: (T) -> String = { it.toString() }): String {
val result = StringBuilder(prefix)
for ((index, element) in withIndex()) {
if (index > 0) result.append(separator)
result.append(transform(element))
}
result.append(postfix)
return result.toString()
}
```
```kotlin
val letters = listOf("Alpha", "Beta")
println(letters.joinToString()) //Alpha, Beta
println(letters.joinToString(transform = String::toLowerCase)) //alpha, beta
println(letters.joinToString(separator = "! ", postfix = "! ", transform = String::toUpperCase)) //ALPHA! BETA!
```
```kotlin
fun <T> Collection<T>.joinToString(separator: String = "",
prefix: String = "",
postfix: String,
transform: ((T) -> String )?): String {
val result = StringBuilder(prefix)
for ((index, element) in withIndex()) {
if (index > 0) result.append(separator)
result.append(transform?.invoke(element))
}
result.append(postfix)
return result.toString()
}
```
### 8.1.5 Returning functions from functions
```kotlin
enum class Delivery {STANDARD, EXPEDITED }
class Order(val itemCount: Int)
fun getShippingCostCalculator(delivery: Delivery): (Order) -> Double {
if (delivery == Delivery.EXPEDITED) {
return { order -> 6 + 2.1 * order.itemCount }
}
return { order -> 1.2 * order.itemCount }
}
```
```kotlin
val calculator = getShippingCostCalculator(Delivery.EXPEDITED)
println("Shipping costs ${calculator(Order(3))}") //Shipping costs 12.3
```
```kotlin
class ContactListFilters {
var prefix: String = ""
var onlyWithPhoneNumber: Boolean = false
fun getPredicate(): (Person) -> Boolean {
val startsWithPrefix = { p: Person ->
p.firstName.startsWith(prefix) || p.lastName.startsWith(prefix)
}
if (!onlyWithPhoneNumber) {
return startsWithPrefix
}
return { startsWithPrefix(it) && it.phoneNumber != null }
}
}
data class Person(val firstName: String, val lastName: String, val phoneNumber: String?)
```
```kotlin
val contacts = listOf(Person("Dmitry", "Jemerov", "123-4567"),
Person("Svetlana", "Isakova", null))
val contactListFilters = ContactListFilters()
with(contactListFilters) {
prefix = "Dm"
onlyWithPhoneNumber = true
}
println(contacts.filter(contactListFilters.getPredicate()))
//[Person(firstName=Dmitry, lastName=Jemerov, phoneNumber=123-4567)]
```
### 8.1.6 Removing duplication through lambdas
```kotlin
data class SiteVisit(val path: String, val duration: Double, val os: OS)
enum class OS {WINDOWS, LINUX, MAC, IOS, ANDROID }
```
```kotlin
val log = listOf(
SiteVisit("/",34.0,OS.WINDOWS),
SiteVisit("/",22.0,OS.MAC),
SiteVisit("/login",12.0,OS.WINDOWS),
SiteVisit("/signup",8.0,OS.IOS),
SiteVisit("/",16.3,OS.ANDROID)
)
val averageWindowsDuration = log
.filter { it.os==OS.WINDOWS }
.map (SiteVisit::duration)
.average()
println(averageWindowsDuration) //23.0
```
```kotlin
fun List<SiteVisit>.averageDurationFor(os:OS) = filter { it.os==os }.map (SiteVisit::duration).average()
```
```kotlin
val log = listOf(
SiteVisit("/",34.0,OS.WINDOWS),
SiteVisit("/",22.0,OS.MAC),
SiteVisit("/login",12.0,OS.WINDOWS),
SiteVisit("/signup",8.0,OS.IOS),
SiteVisit("/",16.3,OS.ANDROID)
)
println(log.averageDurationFor(OS.WINDOWS)) //23.0
println(log.averageDurationFor(OS.MAC)) //22.0
```
## 8.2 Inline functions: removing the overhead of lambdas
### 8.2.1 How inlining works
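The notes stop at the section heading here; as a minimal sketch of the mechanism (example mine, not from the book): when a function is declared `inline`, the compiler substitutes its body, together with the body of the lambda argument, at every call site, so no function object or extra call is created at runtime.

```kotlin
// `synchronizedRun` is a made-up helper to illustrate inlining: its body,
// including the body of the `action` lambda, is copied into each call site.
inline fun <T> synchronizedRun(lock: Any, action: () -> T): T {
    synchronized(lock) {
        return action()
    }
}

fun main() {
    val lock = Any()
    // After inlining, this call compiles to roughly:
    // synchronized(lock) { 21 * 2 } - no Function object is allocated.
    val result = synchronizedRun(lock) { 21 * 2 }
    println(result)
}
```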
### 8.2.2 Restrictions on inline functions
### 8.2.3 Inlining collection operations
### 8.2.4 Deciding when to declare functions as inline
### 8.2.5 Using inlined lambdas for resource management
## 8.3 Control flow in higher-order functions
### 8.3.1 Return statements in lambdas: return from an enclosing function
### 8.3.2 Returning from lambdas: return with a label
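A compact sketch of what these return-related sections cover (examples mine, not from the book): in a lambda passed to an inline function such as `forEach`, a bare `return` exits the enclosing function, while a labeled return only exits the lambda.

```kotlin
fun containsAlice(people: List<String>): Boolean {
    people.forEach { person ->
        if (person == "Alice") return true   // non-local return: exits containsAlice
    }
    return false
}

fun printAllExceptAlice(people: List<String>) {
    people.forEach { person ->
        if (person == "Alice") return@forEach // labeled return: only skips this element
        println(person)
    }
}
```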
### 8.3.3 Anonymous functions: local returns by default
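For the anonymous-function case (example mine, not from the book): inside a `fun(...) { ... }` expression, `return` is local by default and exits only the anonymous function, not the caller.

```kotlin
fun countAlices(people: List<String>): Int =
    people.count(fun(person): Boolean {
        return person == "Alice" // local return: exits only the anonymous function
    })
```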
| 24.941463 | 111 | 0.5885 | yue_Hant | 0.233344 |
876568131ebb99a0837d2603804f7d9a4738ac19 | 110 | md | Markdown | README.md | MarcoDelMondo/saint-pablo-bot | 2ed3b0586e929262fc5870532b2073ca7aa0e60a | [
"MIT"
] | null | null | null | README.md | MarcoDelMondo/saint-pablo-bot | 2ed3b0586e929262fc5870532b2073ca7aa0e60a | [
"MIT"
] | null | null | null | README.md | MarcoDelMondo/saint-pablo-bot | 2ed3b0586e929262fc5870532b2073ca7aa0e60a | [
"MIT"
] | null | null | null | # Saint Pablo Discord Bot
Just a fun bot built with python.
Currently plays music from youtube and sends gifs
| 27.5 | 49 | 0.8 | eng_Latn | 0.999648 |
87657c502aee52f98f75eb2067707f77ff8924dc | 47,349 | md | Markdown | docs/2014/database-engine/availability-groups/windows/failover-and-failover-modes-always-on-availability-groups.md | adiazcan/sql-docs.es-es | 7221c45ca1a7fd1fc7aeeefe8d0b023bb0b711b1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/availability-groups/windows/failover-and-failover-modes-always-on-availability-groups.md | adiazcan/sql-docs.es-es | 7221c45ca1a7fd1fc7aeeefe8d0b023bb0b711b1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/availability-groups/windows/failover-and-failover-modes-always-on-availability-groups.md | adiazcan/sql-docs.es-es | 7221c45ca1a7fd1fc7aeeefe8d0b023bb0b711b1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Failover and Failover Modes (AlwaysOn Availability Groups) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: high-availability
ms.topic: conceptual
helpviewer_keywords:
- Availability Groups [SQL Server], availability replicas
- Availability Groups [SQL Server], failover
- Availability Groups [SQL Server], failover modes
- failover [SQL Server], AlwaysOn Availability Groups
ms.assetid: 378d2d63-50b9-420b-bafb-d375543fda17
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: 0603ccd35973b27993207d634ebc89aa90e6fa1b
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/02/2018
ms.locfileid: "48140617"
---
# <a name="failover-and-failover-modes-alwayson-availability-groups"></a>Failover and Failover Modes (Always On Availability Groups)

Within the context of an availability group, the primary role and secondary role of availability replicas are typically interchangeable in a process known as *failover*. Three forms of failover exist: automatic failover (without data loss), planned manual failover (without data loss), and forced manual failover (with possible data loss), typically called *forced failover*. Automatic and planned manual failovers preserve all your data. An availability group fails over at the level of an availability replica. That is, an availability group fails over to one of its secondary replicas (the current *failover target*).

> [!NOTE]
> Issues at the database level, such as a database becoming suspect due to the loss of a data file, deletion of a database, or corruption of a transaction log, do not cause an availability group to fail over.

During failover, the failover target takes over the primary role, recovers its databases, and brings them online as the new primary databases. The former primary replica, when available, switches to the secondary role, and its databases become secondary databases. Potentially, these roles can switch back and forth (or to a different failover target) in response to multiple failures or for administrative purposes.

The form(s) of failover that a given availability replica supports is specified by the *failover mode* property. For a given availability replica, the possible failover modes depend on the [availability mode](availability-modes-always-on-availability-groups.md) of the replica, as follows:

- **Synchronous-commit replicas** support two settings: automatic or manual. The "automatic" setting supports both automatic failover and manual failover. To prevent data loss, automatic failover and planned manual failover require that the failover target be a synchronous-commit secondary replica with a healthy synchronization state (this indicates that every secondary database on the failover target is synchronized with its corresponding primary database). Whenever a secondary replica does not meet both of these conditions, it supports only forced manual failover. Note that forced failover is also supported for a replica whose role is in the RESOLVING state.

- **Asynchronous-commit replicas** support only the manual failover mode. Moreover, because they are never synchronized, they support only forced failover.

> [!NOTE]
> After a failover, client applications that need to access the primary databases must connect to the new primary replica. Also, if the new secondary replica is configured to allow read-only access, read-only client applications can connect to it. For information about how clients connect to an availability group, see [Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)](../../listeners-client-connectivity-application-failover.md).
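The availability mode and failover mode configured for each replica can be checked from the Always On catalog views. A minimal sketch; the group and replica names returned depend entirely on your environment, and it can be run on any server instance that hosts a replica of the group:

```sql
-- Show the availability mode and failover mode configured for every replica
-- in every availability group known to this server instance.
SELECT ag.name                    AS availability_group,
       ar.replica_server_name,
       ar.availability_mode_desc, -- SYNCHRONOUS_COMMIT or ASYNCHRONOUS_COMMIT
       ar.failover_mode_desc      -- AUTOMATIC or MANUAL
FROM sys.availability_groups  AS ag
JOIN sys.availability_replicas AS ar
     ON ar.group_id = ag.group_id;
```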
## <a name="TermsAndDefinitions"></a> Terms and Definitions

**Automatic failover**
A failover that occurs automatically on the loss of the primary replica. Automatic failover is supported only when the current primary replica and one secondary replica are both configured with failover mode set to AUTOMATIC and the secondary replica is currently synchronized. If the failover mode of either the primary or the secondary replica is MANUAL, automatic failover cannot occur.

**Planned manual failover (without data loss)**
A planned manual failover, or *manual failover*, is a failover initiated by a database administrator, typically for administrative purposes. A planned manual failover is supported only if both the primary replica and the secondary replica are configured for synchronous-commit mode and the secondary replica is currently synchronized (in the SYNCHRONIZED state). When the target secondary replica is synchronized, a manual failover (without data loss) is possible even if the primary replica has crashed, because the secondary databases are ready for failover. A database administrator initiates a manual failover manually.

**Forced failover (with possible data loss)**
A failover that a database administrator can initiate when no secondary replica is in the SYNCHRONIZED state with the primary replica, or when the primary replica is not running and no secondary replica is failover-ready. Forced failover risks possible data loss and is recommended strictly for disaster recovery. Forced failover is also known as forced manual failover because it can be initiated only manually. This is the only form of failover supported in asynchronous-commit availability mode.

[!INCLUDE[ssFosAutoC](../../../includes/ssfosautoc-md.md)]
Within a given availability group, the pair of availability replicas (including the current primary replica) that are configured for synchronous-commit mode with automatic failover, if any. An [!INCLUDE[ssFosAuto](../../../includes/ssfosauto-md.md)] takes effect only if the secondary replica is currently in the SYNCHRONIZED state with the primary replica.

[!INCLUDE[ssFosSyncC](../../../includes/ssfossyncc-md.md)]
Within a given availability group, the set of two or three availability replicas (including the current primary replica) that are configured for synchronous-commit mode, if any. A [!INCLUDE[ssFosSync](../../../includes/ssfossync-md.md)] takes effect only if the secondary replicas are configured for manual failover mode and at least one secondary replica is currently in the SYNCHRONIZED state with the primary replica.

[!INCLUDE[ssFosEntireC](../../../includes/ssfosentirec-md.md)]
Within a given availability group, the set of all availability replicas whose operational state is currently ONLINE, regardless of availability mode and failover mode. The [!INCLUDE[ssFosEntire](../../../includes/ssfosentire-md.md)] becomes relevant when no secondary replica is currently in the SYNCHRONIZED state with the primary replica.
## <a name="Overview"></a> Overview of Failover

The following table summarizes which forms of failover are supported under different availability and failover modes. For each pairing, the effective availability mode and failover mode is determined by the intersection of the modes of the primary replica and the modes of one or more secondary replicas.

||Asynchronous-commit mode|Synchronous-commit mode with manual-failover mode|Synchronous-commit mode with automatic-failover mode|
|-|-------------------------------|---------------------------------------------------------|------------------------------------------------------------|
|Automatic failover|No|No|Yes|
|Planned manual failover|No|Yes|Yes|
|Forced failover|Yes|Yes|Yes**<sup>*</sup>**|

**<sup>*</sup>** If you issue a forced-failover command on a synchronized secondary replica, the secondary replica behaves the same as for a manual failover.

The amount of time that the database is unavailable during a failover depends on the type of failover and its cause.

> [!IMPORTANT]
> To support client connections after failover, except for contained databases, logins and jobs defined on any of the former primary databases must be manually re-created on the new primary database. For more information, see [Management of Logins and Jobs for the Databases of an Availability Group (SQL Server)](../../logins-and-jobs-for-availability-group-databases.md).
### <a name="failover-sets"></a>Failover Sets

The forms of failover that are possible for a given availability group can be thought of in terms of failover sets. A failover set consists of the primary replica and the secondary replicas that support a given form of failover, as follows:

- **[!INCLUDE[ssFosAutoC](../../../includes/ssfosautoc-md.md)] (optional):** Within a given availability group, the pair of availability replicas (including the current primary replica) that are configured for synchronous-commit mode with automatic failover, if any. An automatic failover set takes effect only if the secondary replica is currently in the SYNCHRONIZED state with the primary replica.

- **[!INCLUDE[ssFosSyncC](../../../includes/ssfossyncc-md.md)] (optional):** Within a given availability group, the set of two or three availability replicas (including the current primary replica) that are configured for synchronous-commit mode, if any. A synchronous-commit failover set takes effect only if the secondary replicas are configured for manual failover mode and at least one secondary replica is currently in the SYNCHRONIZED state with the primary replica.

- **[!INCLUDE[ssFosEntireC](../../../includes/ssfosentirec-md.md)]:** Within a given availability group, the set of all availability replicas whose operational state is currently ONLINE, regardless of availability mode and failover mode. The entire failover set becomes relevant when no secondary replica is currently in the SYNCHRONIZED state with the primary replica.

When you configure an availability replica as synchronous commit with automatic failover, the availability replica becomes part of the [!INCLUDE[ssFosAuto](../../../includes/ssfosauto-md.md)]. However, whether the set takes effect depends on the current primary replica. The forms of failover that are actually possible at a given moment depend on which failover sets are currently in effect.

For example, consider an availability group that has four availability replicas, as follows:

|Replica|Availability Mode and Failover Mode Settings|
|-------------|--------------------------------------------------|
|A|Synchronous commit with automatic failover|
|B|Synchronous commit with automatic failover|
|C|Synchronous commit with only planned manual failover|
|D|Asynchronous commit (with only forced failover)|

The failover behavior for each secondary replica depends on which availability replica is currently the primary replica. Essentially, for a given secondary replica, the failover behavior is the worst case given the current primary replica. The following figure illustrates how the failover behavior of secondary replicas varies depending on the current primary replica, and on whether the secondary replica is configured for asynchronous-commit mode (with only forced failover) or synchronous-commit mode (with or without automatic failover).

![How the primary replica mode affects the failover mode](../../media/aoag-failovermodesbyreplica.gif "How the primary replica mode affects the failover mode")
## <a name="AutomaticFailover"></a> Automatic Failover
Una conmutación por error automática hace que una réplica secundaria calificada realice la transición automática al rol principal después de que la réplica principal deje de estar disponible. La conmutación por error automática es más apropiada cuando el nodo de WSFC que hospeda la réplica principal es local para el nodo que hospeda la réplica secundaria. Esto se debe a que la sincronización de datos funciona mejor con una latencia de mensajes baja entre equipos y a que las conexiones de cliente pueden seguir siendo locales.
### <a name="RequiredConditions"></a> Condiciones requeridas para una conmutación automática por error
La conmutación por error automática solo aparece en las siguientes condiciones:
- Ya existe un conjunto de conmutación por error automática. Este conjunto consta de una réplica principal y una secundaria (el *destino de conmutación automática por error*) que están configuradas para el modo de confirmación sincrónica y también para la conmutación AUTOMÁTICA por error. HYPERLINK "file:///C:\\\Users\\\marshow\\\AppData\\\Local\\\Temp\\\DxEditor\\\DduePreview\\\Default\\\6fe88e12-4df1-4025-ba24-7579635ccecf\\\HTM\\\html\\\29e0ac5d-eb58-4801-82b9-e278f08db920" Si la réplica principal se establece en una conmutación por error de tipo MANUAL, no se puede realizar la conmutación por error automática, aunque se establezca una réplica secundaria en una conmutación por error de tipo AUTOMATIC.
Para más información, vea [Modos de disponibilidad (Grupos de disponibilidad AlwaysOn)](availability-modes-always-on-availability-groups.md).
- El destino de la conmutación por error automática tiene un estado de sincronización correcto (esto indica que cada base de datos secundaria del destino de la conmutación por error se sincroniza con la base de datos principal correspondiente).
> [!TIP]
> Los grupos de disponibilidad AlwaysOn supervisan el estado de ambas réplicas en un conjunto de conmutación por error automática. Si se produce un error en alguna de las réplicas, el estado del grupo de disponibilidad se establece en CRITICAL. Si se produce un error en la réplica secundaria, la conmutación por error automática no es posible debido a que el destino de esta no está disponible. Si se produce un error en la réplica principal, el grupo de disponibilidad realizará una conmutación por error a la réplica secundaria. No existirá ningún destino de conmutación por error automática hasta que la réplica principal anterior se ponga en línea. De todas maneras, para garantizar la disponibilidad en el caso improbable de un fallo secuencial, se recomienda configurar una réplica secundaria diferente como destino de la conmutación por error automática.
>
> Para más información, vea [Usar directivas de AlwaysOn para ver el estado de un grupo de disponibilidad (SQL Server)](use-always-on-policies-to-view-the-health-of-an-availability-group-sql-server.md) y [Cambiar el modo de conmutación por error de una réplica de disponibilidad (SQL Server)](change-the-failover-mode-of-an-availability-replica-sql-server.md).
- El clúster del servicio de clústeres de conmutación por error (WSFC) tiene quórum. Para obtener más información, vea [Configuración de los votos y modos de cuórum WSFC (SQL Server)](../../../sql-server/failover-clusters/windows/wsfc-quorum-modes-and-voting-configuration-sql-server.md).
- La réplica principal ha dejado de estar disponible y se han cumplido los niveles de condición de conmutación por error de la directiva de conmutación por error flexible. Para obtener más información sobre los niveles de condición de conmutación por error, vea [Directiva de conmutación por error flexible para conmutación automática por error de un grupo de disponibilidad (SQL Server)](flexible-automatic-failover-policy-availability-group.md).
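The flexible failover policy mentioned above is controlled by two availability-group options. A hedged sketch, assuming a hypothetical group named [MyAG]; run on the server instance that hosts the primary replica:

```sql
-- FAILURE_CONDITION_LEVEL: 1 (least restrictive) through 5 (most restrictive);
-- 3 is the default.
ALTER AVAILABILITY GROUP [MyAG]
    SET (FAILURE_CONDITION_LEVEL = 3);

-- HEALTH_CHECK_TIMEOUT: milliseconds to wait for the instance health check
-- before the instance is considered unresponsive; 30000 is the default.
ALTER AVAILABILITY GROUP [MyAG]
    SET (HEALTH_CHECK_TIMEOUT = 30000);
```

Raising the condition level makes automatic failover react to a broader range of failures; lowering it restricts failover to the most severe ones.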
### <a name="HowAutoFoWorks"></a> How Automatic Failover Works

An automatic failover initiates the following sequence of actions:

1. If the server instance that hosts the current primary replica is still running, it changes the state of the primary databases to DISCONNECTED and disconnects all clients.

2. If any log records are waiting in the recovery queues on the target secondary replica, the secondary replica applies the remaining log records to finish rolling forward the secondary databases.

    > [!NOTE]
    > The amount of time required to apply the log to a given database depends on the speed of the system, the recent workload, and the amount of log in the recovery queue.

3. The former secondary replica transitions to the primary role. Its databases become the primary databases. The new primary replica rolls back any uncommitted transactions (the undo phase of recovery) as quickly as possible. Locks isolate these uncommitted transactions, allowing rollback to occur in the background while clients use the database. This process does not roll back any committed transactions.

    Until a given secondary database connects, it is briefly marked as NOT_SYNCHRONIZED. Before the undo phase of recovery starts, secondary databases can connect to the new primary databases and quickly transition to the SYNCHRONIZED state. The best case is typically a third synchronous-commit replica that remains in the secondary role after the failover.

4. Later, when the server instance that hosts the former primary replica restarts, it recognizes that another availability replica now owns the primary role. The former primary replica transitions to the secondary role, and its databases become secondary databases. The new secondary replica connects to the current primary replica and catches its databases up to the current primary databases as quickly as possible. As soon as the new secondary replica has resynchronized its databases, failover is again possible, in the reverse direction.
### <a name="EnableAutoFo"></a> To Configure Automatic Failover

An availability replica can be configured to support automatic failover at any time.

**To configure automatic failover**

1. Ensure that the secondary replica is configured to use the synchronous-commit availability mode. For more information, see [Change the Availability Mode of an Availability Replica (SQL Server)](change-the-availability-mode-of-an-availability-replica-sql-server.md).

2. Set the failover mode to automatic. For more information, see [Change the Failover Mode of an Availability Replica (SQL Server)](change-the-failover-mode-of-an-availability-replica-sql-server.md).

3. Optionally, change the flexible failover policy of the availability group to specify the sorts of failures that can cause an automatic failover. For more information, see [Configure the Flexible Failover Policy to Control the Conditions for Automatic Failover (Always On Availability Groups)](configure-flexible-automatic-failover-policy.md) and [Failover Policy for Failover Cluster Instances](../../../sql-server/failover-clusters/windows/failover-policy-for-failover-cluster-instances.md).
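The first two steps above can be sketched in Transact-SQL. The group name [MyAG] and replica name 'SQLNODE02\INST1' are hypothetical placeholders; run these on the server instance that hosts the primary replica:

```sql
-- Step 1: put the secondary replica in synchronous-commit availability mode.
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE02\INST1'
    WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

-- Step 2: set that replica's failover mode to automatic.
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE02\INST1'
    WITH (FAILOVER_MODE = AUTOMATIC);
```

Remember that the current primary replica must be configured the same way for the automatic failover set to take effect.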
## <a name="ManualFailover"></a> Planned Manual Failover (Without Data Loss)

A manual failover causes a secondary replica to transition to the primary role after a database administrator issues a manual-failover command on the server instance that hosts the target secondary replica. To support manual failover, the secondary replica and the current primary replica must both be configured for synchronous-commit mode. Every secondary database on the availability replica must be joined to the availability group and synchronized with its corresponding primary database (that is, the secondary replica must be synchronized). This guarantees that every transaction committed on a former primary database has also been committed on the new primary database. Therefore, the new primary databases are identical to the old primary databases.

The following figure illustrates the stages of a planned failover:

1. Before the failover, the primary replica is hosted by the server instance on `Node01`.

2. A database administrator initiates a planned failover. The failover target is the availability replica hosted by the server instance on `Node02`.

3. The failover target (on `Node02`) becomes the new primary replica. Because this is a planned failover, the former primary replica switches to the secondary role during the failover and immediately brings its databases online as secondary databases.

![The stages of a planned failover](../../media/aoag-failover.gif "The stages of a planned failover")
### <a name="ManualFailoverConditions"></a> Conditions Required for a Manual Failover

To support manual failover, the current primary replica must be set to synchronous-commit mode, and a secondary replica must be:

- Configured for synchronous-commit mode.

- Currently synchronized with the primary replica.

To fail over an availability group manually, you must connect to the secondary replica that is to become the new primary replica.
### <a name="ManualFailoverHowWorks"></a> How a Planned Manual Failover Works

A planned manual failover, which must be initiated on the target secondary replica, initiates the following sequence of actions:

1. To ensure that no new user transactions occur on the original primary databases, the WSFC cluster sends a request to the primary replica to go offline.

2. If any log is waiting in the recovery queue of any secondary database, the secondary replica finishes rolling forward that secondary database. The amount of time required depends on the speed of the system, the recent workload, and the amount of log in the recovery queue. To learn the current size of the recovery queue, use the **Recovery Queue** performance counter. For more information, see [SQL Server, Database Replica](../../../relational-databases/performance-monitor/sql-server-database-replica.md).

    > [!NOTE]
    > The failover time can be regulated by limiting the size of the recovery queue. However, this can slow down the primary replica in order to let the secondary replica keep up.
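Besides the performance counter, the redo (recovery) queue can be read from a dynamic management view. A sketch; run it on the server instance that hosts the secondary replica:

```sql
-- Log waiting to be redone on the local secondary databases, in KB,
-- plus the rate at which it is currently being applied.
SELECT DB_NAME(drs.database_id) AS database_name,
       drs.redo_queue_size,   -- KB of log not yet redone
       drs.redo_rate          -- KB of log redone per second
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;
```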
3. The secondary replica becomes the new primary replica, and the former primary replica becomes the new secondary replica.

4. The new primary replica rolls back any uncommitted transactions and brings its databases online as the primary databases. All secondary databases are briefly marked as NOT SYNCHRONIZED until they connect and resynchronize to the new primary databases. This process does not roll back any committed transactions.

5. When the former primary replica comes back online, it takes on the secondary role, and the former primary databases become secondary databases. The new secondary replica quickly resynchronizes the new secondary databases with the corresponding primary databases.

    > [!NOTE]
    > As soon as the new secondary replica has resynchronized the databases, failover is again possible, in the reverse direction.

After failover, clients must reconnect to the current primary databases. For more information, see [Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)](../../listeners-client-connectivity-application-failover.md).
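A planned manual failover is issued with a single statement. A sketch, assuming a hypothetical group named [MyAG]; it must be run on the server instance that hosts the synchronized target secondary replica:

```sql
-- Fail the availability group over to the local (secondary) replica without
-- data loss; the statement fails if the local replica is not SYNCHRONIZED.
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;
```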
### <a name="ManualFailoverDuringUpgrades"></a> Maintaining Availability During Upgrades

The database administrator for your availability groups can use manual failovers to maintain database availability when upgrading hardware or software. To use an availability group for software upgrades, the server instance and/or computer node that hosts the target secondary replica must already have received the upgrades. For more information, see [Upgrade and Update of Availability Group Servers with Minimal Downtime and Data Loss](upgrading-always-on-availability-group-replica-instances.md).
## <a name="ForcedFailover"></a> Forced Failover (with Possible Data Loss)

Forcing failover of an availability group (with possible data loss) is a disaster-recovery method that allows you to use a secondary replica as a warm standby server. Because forcing failover risks possible data loss, it should be used cautiously and sparingly. We recommend forcing failover only if you must restore service to your availability databases immediately and are willing to risk losing some data. For more information about the prerequisites and recommendations for forcing failover, and for an example scenario that uses a forced failover to recover from a catastrophic failure, see [Perform a Forced Manual Failover of an Availability Group (SQL Server)](perform-a-forced-manual-failover-of-an-availability-group-sql-server.md).

> [!WARNING]
> Forcing failover requires that the WSFC cluster have quorum. For information about configuring quorum and forcing quorum, see [Windows Server Failover Clustering (WSFC) with SQL Server](../../../sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server.md).
### <a name="ForcedFailoverHowWorks"></a> How Forced Failover Works

Forcing failover initiates a transition of the primary role to a target replica whose role is in the SECONDARY or RESOLVING state. The failover target becomes the new primary replica and immediately serves its copies of the databases to clients. When the former primary replica becomes available, it transitions to the secondary role, and its databases become secondary databases.

All secondary databases (including the former primary databases, when they become available) are put into the SUSPENDED state. Depending on its previous data-synchronization state, a suspended secondary database might be suitable for salvaging committed data that is missing from the corresponding new primary database. On a secondary replica that is configured for read-only access, you can query the secondary databases to discover the missing data manually. Then you can issue [!INCLUDE[tsql](../../../includes/tsql-md.md)] statements on the new primary databases to make whatever changes are needed.
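In Transact-SQL, a forced failover and the follow-up resume look roughly like this, assuming hypothetical names [MyAG] and [MyDb]. The first statement runs on the target replica; the second runs on each replica whose databases remain suspended:

```sql
-- Force the local replica to take the primary role; data loss is possible.
ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;

-- After a forced failover, secondary databases stay SUSPENDED until data
-- movement is explicitly resumed (run per database, per secondary replica):
ALTER DATABASE [MyDb] SET HADR RESUME;
```

Resuming a suspended secondary discards its divergent recovery fork, so salvage any needed data from it before issuing the RESUME.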
### <a name="ForcedFailoverRisks"></a> Risks of Forcing Failover

It is essential to understand that forcing failover can cause data loss. Data loss is possible because the target replica cannot communicate with the primary replica and, therefore, cannot guarantee that the databases are synchronized. Forcing failover starts a new recovery fork. Because the original primary databases and the former secondary databases are then on different recovery forks, each of them contains data that the other databases do not: each original primary database contains whatever changes had not yet been sent from its send queue to the former secondary database (the unsent log); the former secondary databases contain whatever changes occur after failover was forced.

If failover is forced because the primary replica has failed, potential data loss depends on whether any transaction log had not been sent to the secondary replica before the failure. Under asynchronous-commit mode, accumulated unsent log is always a possibility. Under synchronous-commit mode, this is possible only until the secondary databases become synchronized.

The following table summarizes the possibility of data loss for a particular database on the replica to which you force failover.

|Availability mode of the secondary replica|Is the secondary database synchronized?|Is data loss possible?|
|--------------------------------------------|-------------------------------|----------------------------|
|Synchronous commit|Yes|No|
|Synchronous commit|No|Yes|
|Asynchronous commit|No|Yes|

Secondary databases track only two recovery forks, so if you perform multiple forced failovers, none of the secondary databases that began data synchronization with the previous forced failover will be able to resume. If this occurs, any secondary databases that cannot resume will need to be removed from the availability group, restored to the correct point in time, and rejoined to the availability group. A restore will not work across multiple recovery forks; therefore, be sure to take a log backup after performing multiple forced failovers.
### <a name="WhyFFoPostForcedQuorum"></a> Why Failover Is Required After Quorum Is Forced

After quorum has been forced on the WSFC cluster (*forced quorum*), every availability group must be failed over forcibly (with possible data loss). Forced failover is required because the real state of the WSFC cluster values might have been lost. Regular failovers must be avoided after a forced quorum because of the possibility that an unsynchronized secondary replica might appear synchronized in the reconfigured WSFC cluster.

For example, consider a WSFC cluster that hosts an availability group on three nodes: Node A hosts the primary replica, and Nodes B and C each host a secondary replica. Node C is disconnected from the WSFC cluster while the local secondary replica is SYNCHRONIZED. But Nodes A and B retain a healthy quorum, and the availability group remains online. On Node A, the primary replica continues to accept updates, and on Node B, the secondary replica continues to synchronize with the primary replica. The secondary replica on Node C becomes unsynchronized and falls increasingly behind the primary replica. However, because Node C is disconnected, the replica incorrectly remains in the SYNCHRONIZED state.

If quorum is lost and then forced on Node A, the synchronization state of the availability group in the WSFC cluster will be correct, and the secondary replica on Node C will show as UNSYNCHRONIZED. However, if quorum is forced on Node C, the synchronization state of the availability group will be incorrect. The cluster's synchronization state will have reverted to the state it had when Node C was disconnected, and the secondary replica on Node C would *incorrectly* show as SYNCHRONIZED. Because planned manual failovers must guarantee data safety, they are not allowed to bring an availability group back online after quorum is forced.
### <a name="TrackPotentialDataLoss"></a> Tracking potential data loss
When the WSFC cluster has a healthy quorum, you can evaluate the current potential for data loss on your databases. For a given secondary replica, the current potential for data loss depends on how far the local secondary databases lag behind the corresponding primary databases. Because the amount of lag varies over time, we recommend that you periodically track the potential data loss of your unsynchronized secondary databases. Tracking the lag involves comparing the Last Commit LSN and Last Commit Time of each primary database and its secondary databases, as follows:
1. Connect to the primary replica.
2. Query the `last_commit_lsn` (LSN of the latest committed transaction) and `last_commit_time` (time of the latest commit) columns of the [sys.dm_hadr_database_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-hadr-database-replica-states-transact-sql) dynamic management view.
3. Compare the values returned for each primary database and each of its secondary databases. The difference between their Last Commit LSNs indicates the amount of lag.
4. You can raise an alert when the lag on a database or set of databases exceeds your maximum desired lag for a given period of time. For example, a job that runs every minute against each primary database could perform the query. If the difference between the `last_commit_time` of a primary database and that of any of its secondary databases has exceeded your recovery point objective (RPO) (for example, 5 minutes) since the last time the job ran, the job could raise an alert.
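As a hedged sketch of steps 2–4, the following T-SQL (run on the primary replica) joins the DMV to itself to compare each secondary database's `last_commit_time` against its primary. The self-join shape, the `is_primary_replica` column (available in SQL Server 2014 and later), and the 300-second RPO threshold are illustrative assumptions, not the documented procedure:

```sql
-- Run on the primary replica: estimate commit lag per secondary database
-- and return only databases whose lag exceeds an example 5-minute RPO.
SELECT DB_NAME(p.database_id)                                    AS database_name,
       s.replica_id                                              AS secondary_replica_id,
       p.last_commit_lsn                                         AS primary_last_commit_lsn,
       s.last_commit_lsn                                         AS secondary_last_commit_lsn,
       DATEDIFF(SECOND, s.last_commit_time, p.last_commit_time)  AS lag_seconds
FROM sys.dm_hadr_database_replica_states AS p
JOIN sys.dm_hadr_database_replica_states AS s
  ON  p.group_id = s.group_id
  AND p.group_database_id = s.group_database_id
  AND s.is_primary_replica = 0
WHERE p.is_primary_replica = 1
  -- Placeholder RPO threshold of 300 seconds; adjust to your target.
  AND DATEDIFF(SECOND, s.last_commit_time, p.last_commit_time) > 300;
```

A SQL Server Agent job running a query like this once per minute could raise the alert described in step 4.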
> [!IMPORTANT]
> When the WSFC cluster lacks quorum or quorum has been forced, `last_commit_lsn` and `last_commit_time` are NULL. For information about how you might be able to avoid data loss after forcing quorum, see "Possible ways to avoid data loss after quorum is forced" in [Perform a forced manual failover of an availability group (SQL Server)](perform-a-forced-manual-failover-of-an-availability-group-sql-server.md).
### <a name="ForcedFailoverManagingDataLoss"></a> Managing the potential data loss
After you force a failover, all secondary databases are suspended. This includes the former primary databases, after the former primary replica comes back online and discovers that it is now a secondary replica. You must manually resume each suspended database individually on each secondary replica.
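For example, a suspended database can be resumed with the following T-SQL on the secondary replica that hosts it; the database name is a placeholder:

```sql
-- Run on each secondary replica, once per suspended database you choose to resume.
ALTER DATABASE [YourAGDatabase] SET HADR RESUME;
```

As the sections below explain, decide whether to resume or remove each database before running this, because resuming rolls the database back onto the new recovery fork.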
Once the former primary replica is available, and assuming that its databases are undamaged, you can attempt to manage the potential data loss. The available approach for managing potential data loss depends on whether the original primary replica has connected to the new primary replica. Assuming that the original primary replica can access the new primary instance, reconnecting occurs automatically and transparently.
#### <a name="the-original-primary-replica-has-reconnected"></a>The original primary replica has reconnected
Typically, after a failure, when the original primary replica restarts, it quickly reconnects to its partner. On reconnecting, the original primary replica becomes a secondary replica. Its databases become secondary databases and enter the SUSPENDED state. The new secondary databases will not be rolled back unless you resume them.
However, the suspended databases are inaccessible, so you cannot inspect them to evaluate what data would be lost if a given database were resumed. Therefore, the decision on whether to resume or remove a secondary database depends on whether you are willing to accept any data loss, as follows:
- If data loss is unacceptable, you should remove the databases from the availability group to salvage them.
The database administrator can now recover the former primary databases and attempt to recover the data that would otherwise have been lost. However, when a former primary database is brought online, it diverges from the current primary database, so the database administrator needs to make either the removed database or the current primary database inaccessible to clients, to avoid further divergence of the databases and client failover problems.
- If data loss is acceptable to your business goals, you can resume the secondary databases.
Resuming a new secondary database causes it to be rolled back as the first step in synchronizing it. If any log records were waiting in the send queue at the time of the failure, the corresponding transactions are lost, even if they were committed.
#### <a name="the-original-primary-replica-has-not-reconnected"></a>The original primary replica has not reconnected
If you can temporarily prevent the original primary replica from reconnecting over the network to the new primary replica, you can inspect the original primary databases to evaluate what data would be lost if they were resumed.
- If the potential data loss is acceptable
Allow the original primary replica to reconnect to the new primary replica. Reconnecting causes the new secondary databases to be suspended. To start data synchronization on a database, simply resume it. The new secondary replica drops the original recovery fork for that database, losing any transactions that were never sent to or never received by the former secondary replica.
- If the data loss is unacceptable
If the original primary database contains critical data that would be lost if you resumed the suspended database, you can preserve the data on the original primary database by removing it from the availability group. This causes the database to enter the RESTORING state. At this point, we recommend that you attempt a tail-log backup of the removed database. Then, you can update the current primary database (the former secondary database) by exporting the data that you want to salvage from the original primary database and importing it into the current primary database. We recommend taking a full database backup of the updated primary database as soon as possible.
Then, on the server instance that hosts the new secondary replica, you can drop the suspended secondary database and create a new secondary database by restoring this backup (and at least one subsequent log backup) using RESTORE WITH NORECOVERY. We recommend delaying additional log backups of the current primary databases until the corresponding secondary databases are resumed.
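One possible T-SQL shape for this sequence is sketched below. Every database, availability group, and file name is a placeholder, the exact statements depend on which replica each step runs on, and the backup/restore chain must match your environment, so treat this as an outline rather than the documented procedure:

```sql
-- 1. On the original primary (still acting as primary for its view of the AG):
--    remove the database from the availability group to preserve it.
--    The database enters the RESTORING state.
ALTER AVAILABILITY GROUP [YourAG] REMOVE DATABASE [YourAGDatabase];

-- 2. Attempt a tail-log backup of the removed database.
BACKUP LOG [YourAGDatabase]
    TO DISK = N'\\backupshare\YourAGDatabase_tail.trn'
    WITH NO_TRUNCATE;

-- 3. Later, on the server instance that hosts the new secondary replica:
--    drop the suspended copy and re-seed it from a fresh full backup
--    of the updated current primary, plus at least one log backup.
DROP DATABASE [YourAGDatabase];
RESTORE DATABASE [YourAGDatabase]
    FROM DISK = N'\\backupshare\YourAGDatabase_full.bak'
    WITH NORECOVERY;
RESTORE LOG [YourAGDatabase]
    FROM DISK = N'\\backupshare\YourAGDatabase_log1.trn'
    WITH NORECOVERY;
```

After the restores complete WITH NORECOVERY, the database can be rejoined to the availability group on that replica.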
> [!WARNING]
> Transaction log truncation is delayed on a primary database while any of its secondary databases is suspended. Also, the synchronization health of a synchronous-commit secondary replica cannot transition to HEALTHY as long as any local database remains suspended.
## <a name="RelatedTasks"></a> Related tasks
**To configure failover behavior**
- [Change the Availability Mode of an Availability Replica (SQL Server)](change-the-availability-mode-of-an-availability-replica-sql-server.md)
- [Change the Failover Mode of an Availability Replica (SQL Server)](change-the-failover-mode-of-an-availability-replica-sql-server.md)
- [Configure the Flexible Failover Policy to Control Conditions for Automatic Failover (AlwaysOn Availability Groups)](configure-flexible-automatic-failover-policy.md)
**To perform a manual failover**
- [Perform a Planned Manual Failover of an Availability Group (SQL Server)](perform-a-planned-manual-failover-of-an-availability-group-sql-server.md)
- [Perform a Forced Manual Failover of an Availability Group (SQL Server)](perform-a-forced-manual-failover-of-an-availability-group-sql-server.md)
- [Use the Fail Over Availability Group Wizard (SQL Server Management Studio)](use-the-fail-over-availability-group-wizard-sql-server-management-studio.md)
- [Logins and Jobs for the Databases of an Availability Group (SQL Server)](../../logins-and-jobs-for-availability-group-databases.md)
**To configure WSFC quorum settings**
- [Configure Cluster Quorum NodeWeight Settings](../../../sql-server/failover-clusters/windows/configure-cluster-quorum-nodeweight-settings.md)
- [View Cluster Quorum NodeWeight Settings](../../../sql-server/failover-clusters/windows/view-cluster-quorum-nodeweight-settings.md)
- [Force a WSFC Cluster to Start Without a Quorum](../../../sql-server/failover-clusters/windows/force-a-wsfc-cluster-to-start-without-a-quorum.md)
## <a name="RelatedContent"></a> Related content
- [Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery](http://go.microsoft.com/fwlink/?LinkId=227600)
- [SQL Server AlwaysOn Team Blog: The official SQL Server AlwaysOn Team Blog](http://blogs.msdn.com/b/sqlalwayson/)
## <a name="see-also"></a>See also
[Overview of AlwaysOn Availability Groups (SQL Server)](overview-of-always-on-availability-groups-sql-server.md)
[Availability Modes (AlwaysOn Availability Groups)](availability-modes-always-on-availability-groups.md)
[Windows Server Failover Clustering (WSFC) with SQL Server](../../../sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server.md)
[Cross-Database Transactions Are Not Supported for Database Mirroring or AlwaysOn Availability Groups (SQL Server)](transactions-always-on-availability-and-database-mirroring.md)
[Failover Policy for Failover Cluster Instances](../../../sql-server/failover-clusters/windows/failover-policy-for-failover-cluster-instances.md)
[Flexible Failover Policy for Automatic Failover of an Availability Group (SQL Server)](flexible-automatic-failover-policy-availability-group.md)
---
title: Configuración del inicio automático para una máquina virtual
description: Aprenda a configurar el inicio automático para las máquinas virtuales de un laboratorio. Esta configuración permite que las máquinas virtuales del laboratorio se inicien automáticamente según una programación.
ms.topic: how-to
ms.date: 06/26/2020
ms.openlocfilehash: 828350cb130e990d6a6ce3f16f084d5629518293
ms.sourcegitcommit: f6e2ea5571e35b9ed3a79a22485eba4d20ae36cc
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 09/24/2021
ms.locfileid: "128642954"
---
# <a name="auto-startup-lab-virtual-machines"></a>Auto-start lab virtual machines
Azure DevTest Labs lets you configure your lab virtual machines to start and shut down automatically on a schedule. For information about configuring auto-shutdown, see [Manage auto-shutdown policies for a lab in Azure DevTest Labs](devtest-lab-auto-shutdown.md).
Unlike auto-shutdown, where all VMs are included when the policy is enabled, the auto-start policy requires a lab user to explicitly select a VM and opt in to this schedule. This prevents unwanted VMs from being started automatically and causing unexpected spending.
This article shows you how to configure the auto-start policy for a lab.
## <a name="configure-autostart-settings-for-a-lab"></a>Configure auto-start settings for a lab
1. Go to the home page for your lab.
2. Select **Configuration and policies** on the left menu.

3. On the **Configuration and policies** page, follow these steps:
1. Select **On** for **Allow virtual machines to be scheduled for automatic start** to enable the feature for this lab.
2. Select a start time (for example, 8:00:00 AM) in the **Scheduled start** field.
3. Select the **Time zone** to use.
4. For **Days of the week**, select the days on which the VMs should start automatically.
5. Then, select **Save** on the toolbar to save the settings.

> [!IMPORTANT]
> This policy does not automatically apply auto-start to all VMs in the lab. To **opt in** individual VMs, go to the VM's page and enable the **Auto-start** option for that VM.
## <a name="enable-autostart-for-a-vm-in-the-lab"></a>Enable auto-start for a VM in the lab
The following procedure shows you how to opt a VM in to the lab's auto-start policy.
1. On the home page for your lab, select the **VM** from the **My virtual machines** list.

2. On the **virtual machine** page, select **Auto-start** on the left menu, or in the **Schedules** list.

3. On the **Auto-start** page, select **On** for the **Allow this virtual machine to be scheduled for automatic start** option.

4. Then, select **Save** on the toolbar to save the settings.
## <a name="next-steps"></a>Next steps
For information about configuring the auto-shutdown policy for a lab, see [Manage auto-shutdown policies for a lab in Azure DevTest Labs](devtest-lab-auto-shutdown.md).
## Changelog (Current version: 3.5.2)
-----------------
### 3.5.2 (2017 Sep 05)
* [5dbc84d] prepare for 3.5.2
* [e85ffe7] retry merge using diff file and fallback to normal merge (#40)
### 3.5.1 (2017 Jul 14)
* [ef05e6e] prepare for 3.5.1
* [3806197] better error message if pr fetch failed (#39)
### 3.5.0 (2017 Jun 26)
* [b61dce6] prepare for 3.5.0
* [9512c7b] opt out submodule update (#37)
* [c02e445] Failed attempts count base changed to 1 (#36)
### 3.4.4 (2017 Jun 09)
* [0431d2d] prepare for 3.4.4
* [471f851] input grouping and reordering (#33)
### 3.4.3 (2017 Apr 19)
* [1b7fce6] Prepare for 3.4.3
* [c45a51c] #30 fixed, check if diff file is existing and has content (#31)
### 3.4.2 (2017 Mar 03)
* [a99e493] Prepare for 3.4.2
* [8f1fbec] Retry checkout on fail with feching tags (#29)
### 3.4.1 (2016 Nov 02)
* [56be9a8] prepare for 3.4.1
* [8212229] go-toolkit (#26)
### 3.4.0 (2016 Sep 30)
* [2bfa508] prepare for 3.4.0
* [85042cb] generic pull request support (#21)
* [992e9a2] Merge pull request #20 from bitrise-io/feature/pull-support
* [708c097] pull support
### 3.3.4 (2016 Aug 24)
* [f1a7a4d] prepare for 3.3.4
* [2cc034b] Merge pull request #19 from bitrise-io/error-message
* [775f3f8] proper error message
### 3.3.3 (2016 Aug 08)
* [5827fe3] prepare for 3.3.3
* [5f66d72] Merge pull request #18 from bitrise-io/git_logs
* [02ae81a] pr fix
* [d409727] bitrise.yml fix
* [05ab673] git log cmd output fix
* [7fdc376] Merge pull request #17 from bitrise-io/godep_update
* [ff0413d] godep update
### 3.3.2 (2016 Aug 01)
* [87354d6] refactor create-release-version -> create-release
* [9a56e20] prepare for 3.3.2
* [490b8ed] Merge pull request #16 from bitrise-io/update
* [d79e258] git log command outputs; run bitrise test localy; ci workflow updates; retry count increased,;wait time decreased
### 3.3.1 (2016 Aug 01)
* [cfb22b5] prep for v3.3.1
* [deb46b9] bitrise.yml minimal revision (releaseman related)
* [92e9241] readme - requirements - Go 1.5
* [5f58d3c] ci workflow - export GO15VENDOREXPERIMENT=1
* [d886f84] test - export GO15VENDOREXPERIMENT=1
* [81f7427] enable `GO15VENDOREXPERIMENT` for Go 1.5
### 3.3.0 (2016 Jul 29)
* [a04ea7d] prepare for 3.3.0
* [e0dd0cd] Merge pull request #15 from bitrise-io/review
* [5232d50] bitrise.yml cleanup
* [5bc1091] cleanup
* [cde0579] cleanup
* [c2fd242] log
* [5c31db6] log done
* [a1c791b] review
### 3.2.0 (2016 Mar 31)
* [42257b8] prepare for release
* [8702104] Merge pull request #13 from bitrise-io/clone_depth_param
* [4916a57] clone depth
* [bc34cf3] step.yml updates
### 3.1.1 (2016 Mar 09)
* [ff7d9ea] release configs
* [532ad02] Merge pull request #10 from bitrise-io/log_checkout_commit_hash
* [a0a399c] print commit hash, even if empty
* [cb73b60] log git clone hash
* [e1d080b] share-this-step workflow
### 3.1.0 (2015 Nov 06)
* [bae5898] style revision
* [d101329] further unused code/option cleanup
* [8016714] removed unnecessary formatted output path log
* [fb36fec] removed unused parameters; does not log sensitive info (ssh key) anymore; README update; testing revision
* [864ca9d] Merge pull request #7 from bazscsa/patch-1
* [ed5879d] Update step.yml
### 3.0.0 (2015 Sep 11)
* [c44326a] removed the 'destination dir will be removed' note from step.yml
* [ba0118e] no more destination path magic, and DON'T DELETE IT!! - it's just plain wrong to do it without even asking!
### 2.2.0 (2015 Sep 11)
* [83cde19] Merge pull request #5 from gkiki90/master
* [98845eb] clone destination path is required
* [1a01db1] clone_into_dir fix
### 2.1.0 (2015 Sep 10)
* [23a96b4] bitrise.yml update
* [cf5bd15] Merge pull request #4 from gkiki90/script_dir
* [7a9a26f] fix
### 2.0.0 (2015 Sep 04)
* [ab46847] `step.yml` description fix
* [843b1d1] updated for Bitrise "V2" & stripped down (removed deprecated&unused) : advanced options moved into `steps-git-clone-extended`
* [5abbf32] converted to V2 step format, ready to run with `bitrise` CLI
* [f365185] indent fix
### 1.5.0 (2015 Apr 11)
* [fa0f15f] don't fail if there's no checkout parameter (for compatibility reasons) but print a debug message
* [28ee85e] fail if no checkout parameter specified
* [dc8fef1] git_clone converted to tabs
* [b6c73b8] removed empty line
* [dc701c8] deprecated comment moved
* [accb021] removed base64 ssh key again (got back during the revision)
* [17f8e43] step.yml revision
* [81f82e6] minimal step.yml syntax fix&revision
* [d1bfe5d] deprecated in git_clone too
* [6603648] marked base64 key as deprecated
* [00795ff] removed old base64 ssh key input
* [5c1d98a] whitespace and indent fix
* [97b2c37] Merge pull request #2 from birmacher/master
* [f651d16] GitHub pull request support
* [7f02dd1] fix to clone repository without master branch
### 1.4.0 (2014 Nov 12)
* [339b471] step sh style fix
* [13a1b68] step.yml revision; exported outputs support (git commit hash, msg, author, ...); a bit of formatted output handling revision
* [db21a9e] Merge pull request #1 from erosdome/master
* [d33e6d2] Update step.yml
* [210deb8] Update step.yml
* [d17fd22] Update README.md
### 1.3.0 (2014 Oct 17)
* [0f93b77] Merge branch 'release/1.3.0'
* [d368f9b] comment/syntax fix
* [c04c9ca] the multiline ssh-key parameter is now retrieved directly from the environment
* [2563aa1] raw ssh key support
### 1.2.0 (2014 Jul 11)
* [22044d4] Merge branch 'release/1.2.0'
* [ee25b58] rename from codename concrete to Bitrise
### 1.1.3 (2014 Jun 24)
* [10a631f] Merge branch 'release/1.1.3'
* [58252bf] highlight the commit-hash in formatted output with pre/code
### 1.1.2 (2014 Jun 24)
* [625dd85] Merge branch 'release/1.1.2'
* [2ebbde6] commit hash formatted output formatting fix
### 1.1.1 (2014 Jun 24)
* [31d82bb] Merge branch 'release/1.1.1'
* [7c772d2] option to generate a formatted output (markdown) file
### 1.1.0 (2014 Jun 17)
* [567d24b] Merge branch 'release/1.1.0'
* [0c6c170] better private key handling: won't overwrite id_rsa but rather use a 'concrete' ssh private key file (and use it specifically!) + it will remove the private key file at the end of the script
### 1.0.9 (2014 Jun 17)
* [df8ebbf] Merge branch 'release/1.0.9'
* [5d5c536] don't redirect user-known-hosts-file to /dev/null
### 1.0.8 (2014 Jun 14)
* [fb305c3] Merge branch 'release/1.0.8'
* [03a74b1] 'this script path' handling change/fix
### 1.0.7 (2014 Jun 14)
* [1d10e27] Merge branch 'release/1.0.7'
* [b02e44f] path handling fix
### 1.0.6 (2014 Jun 14)
* [eda6553] Merge branch 'release/1.0.6'
* [7b8a61b] unused clone cleanup
* [431b802] commit-hash parameter support + a significant rewrite to remove old, now not required workarounds and to support clone+checkout based on commit-hash
### 1.0.5 (2014 Jun 11)
* [7ff11d1] Merge branch 'release/1.0.5'
* [1f77349] ssh no prompt: redirect known hosts file to /dev/null
### 1.0.4 (2014 Jun 04)
* [1250ced] Merge branch 'release/1.0.4'
* [1fd3028] clean up the clone-destination-dir before cloning
### 1.0.3 (2014 May 30)
* [1d8c023] Merge branch 'release/1.0.3'
* [f1048ec] retry delay increased
### 1.0.2 (2014 May 29)
* [98d9e19] Merge branch 'release/1.0.2'
* [9f053ed] retry with delay support
-----------------
Updated: 2017 Sep 05
# Authenticating Lambda with Go
This directory contains a [sample Go application](main.go) that can be used within AWS Lambda.
The following environment variables are mandatory:
- `CONJUR_AWS_TYPE`: lambda
- `CONJUR_APPLIANCE_URL`: The Conjur appliance URL, e.g. https://conjur.company.com
- `CONJUR_ACCOUNT`: The Conjur account name
- `CONJUR_AUTHN_LOGIN`: The Conjur host login, e.g. host/377388733033/iam-role-name
- `CONJUR_AUTHN_URL`: The Conjur authentication URL, e.g. https://conjur.company.com/authn-iam/global
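As one hedged way to apply these, the AWS CLI can write them onto the Lambda function's configuration; the function name and all values below are placeholders for your own environment:

```shell
aws lambda update-function-configuration \
  --function-name my-conjur-function \
  --environment "Variables={CONJUR_AWS_TYPE=lambda,\
CONJUR_APPLIANCE_URL=https://conjur.company.com,\
CONJUR_ACCOUNT=myaccount,\
CONJUR_AUTHN_LOGIN=host/377388733033/iam-role-name,\
CONJUR_AUTHN_URL=https://conjur.company.com/authn-iam/global}"
```

The same variables can also be set in the Lambda console under the function's environment configuration.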
The sample Lambda code is below. It authenticates to Conjur and lists all of the resources this identity has access to:
```go
package main
import (
"context"
"fmt"
"github.com/AndrewCopeland/conjur-authn-iam-client/pkg/authenticator/conjur"
"github.com/aws/aws-lambda-go/lambda"
"github.com/cyberark/conjur-api-go/conjurapi"
)
func HandleRequest(ctx context.Context) (string, error) {
config, err := conjur.GetConfig()
if err != nil {
return "", err
}
accessToken, err := conjur.GetConjurAccessToken(config)
if err != nil {
return "", err
}
conjurConfig := conjurapi.Config{
Account: config.Account,
ApplianceURL: config.ApplianceURL,
}
	// Since we have the config and the access token, let's create our conjurapi.Client
client, err := conjurapi.NewClientFromToken(conjurConfig, string(accessToken))
if err != nil {
return "", fmt.Errorf("Failed to create the Conjur client. %s", err)
}
resources, err := client.Resources(nil)
if err != nil {
return "", fmt.Errorf("Failed to list resources. %s", err)
}
result := ""
for _, r := range resources {
id := r["id"].(string)
result += " - " + id + "\n"
}
return result, nil
}
func main() {
lambda.Start(HandleRequest)
}
```
8766e3900989ac4436b939979ae3100709f5e956 | 10,450 | md | Markdown | articles/active-directory/active-directory-aadconnect-upgrade-previous-version.md | OpenLocalizationTestOrg/azure-docs-pr15_nl-NL | 389bf74805bf9458069a8f4a1acdccd760060bc9 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-aadconnect-upgrade-previous-version.md | OpenLocalizationTestOrg/azure-docs-pr15_nl-NL | 389bf74805bf9458069a8f4a1acdccd760060bc9 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-aadconnect-upgrade-previous-version.md | OpenLocalizationTestOrg/azure-docs-pr15_nl-NL | 389bf74805bf9458069a8f4a1acdccd760060bc9 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
    pageTitle="Azure AD Connect: Upgrade from a previous version | Microsoft Azure"
    description="This article explains the different methods for upgrading to the latest version of Azure Active Directory Connect, including in-place upgrade and swing migration."
services="active-directory"
documentationCenter=""
authors="AndKjell"
manager="femila"
editor=""/>
<tags
ms.service="active-directory"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="Identity"
ms.date="10/12/2016"
ms.author="billmath"/>
# <a name="azure-ad-connect-upgrade-from-a-previous-version-to-the-latest"></a>Azure AD Connect: Upgrade from a previous version to the latest
This topic describes the different methods that you can use to upgrade your Azure AD Connect installation to the latest release. We recommend that you keep yourself current with the Azure AD Connect releases. The steps in [swing migration](#swing-migration) are also used when you make a substantial configuration change.
If you want to upgrade from DirSync, see [Upgrade from Azure AD sync (DirSync)](./connect/active-directory-aadconnect-dirsync-upgrade-get-started.md) instead.
There are a few different strategies for upgrading Azure AD Connect.
Method | Description
--- | ---
[Automatic upgrade](active-directory-aadconnect-feature-automatic-upgrade.md) | This is the easiest method for customers with an express installation.
[In-place upgrade](#in-place-upgrade) | If you have a single server, upgrade the installation in place on the same server.
[Swing migration](#swing-migration) | With two servers, you can prepare one of the servers with the new release or configuration and change which server is active when you are ready.
For the required permissions, see [Permissions required for upgrade](./connect/active-directory-aadconnect-accounts-permissions.md#upgrade).
## <a name="in-place-upgrade"></a>In-place upgrade
An in-place upgrade works for moving from Azure AD Sync or Azure AD Connect. It does not work for DirSync or for a solution with FIM + the Azure AD Connector.
This method is preferred when you have a single server and fewer than about 100,000 objects. If there are any changes to the out-of-box sync rules, a full import and full synchronization occur after the upgrade. This ensures that the new configuration is applied to all existing objects in the system. This might take a few hours, depending on the number of objects in scope of the sync engine. The normal delta synchronization scheduler (by default, every 30 minutes) is suspended, but password synchronization continues. You might want to consider doing the in-place upgrade during a weekend. If there are no changes to the out-of-box configuration with the new Azure AD Connect release, a normal delta import/sync starts instead.

If you have made changes to the out-of-box synchronization rules, these rules are set back to the default configuration on upgrade. To make sure that your configuration is kept between upgrades, make sure that you make changes as described in [Best practices for changing the default configuration](active-directory-aadconnectsync-best-practices-changing-default-configuration.md).
## <a name="swing-migration"></a>Swing migration
If you have a complex deployment or a very large number of objects, it might be impractical to do an in-place upgrade on the live system. For some customers this can take multiple days, and during that time no delta changes are processed. This method is also used when you plan to make significant configuration changes and want to try them out before they are pushed to the cloud.
The recommended method for these scenarios is a swing migration. You need (at least) two servers: one active server and one staging server. The active server (solid blue line in the picture below) is responsible for the active production load. The staging server (dashed purple line in the picture below) is prepared with the new release or configuration, and when it is fully ready, that server is made active. The previous active server, which now has the old release or configuration installed, is made the staging server and is upgraded.
The two servers can use different versions. For example, the active server that you plan to decommission can use Azure AD Sync, and the new staging server can use Azure AD Connect. If you use swing migration to develop a new configuration, it is a good idea to have the same version on both servers.

Note: Some customers prefer to have three or four servers for this scenario. While the staging server is being upgraded, you do not have a backup server in case of a [disaster recovery](active-directory-aadconnectsync-operations.md#disaster-recovery). With three or four servers, one set of primary/standby servers with the new version can be prepared, ensuring that there is always a staging server ready to take over.
These steps also work if you want to move from Azure AD Sync or from a solution with FIM + the Azure AD Connector. The steps do not work for DirSync, but the same swing migration method (also called parallel deployment) with steps for DirSync can be found in [Upgrade Azure Active Directory Sync (DirSync)](./connect/active-directory-aadconnect-dirsync-upgrade-get-started.md).
### <a name="swing-migration-steps"></a>Swing migration steps
1. If you use Azure AD Connect on both servers and only plan a configuration change, make sure your active server and staging server are running the same version. That makes it easier to compare differences later. If you are upgrading from Azure AD Sync, these servers have different versions. If you are upgrading from an older version of Azure AD Connect, it is a good idea to start with the two servers on the same version, but this is not required.
2. If you have made a custom configuration and the staging server does not have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-custom-configuration-from-active-to-staging-server).
3. If you are upgrading from an earlier version of Azure AD Connect, upgrade the staging server to the latest version. If you are moving from Azure AD Sync, install Azure AD Connect on the staging server.
4. Let the sync engine run a full import and full synchronization on the staging server.
5. Verify that the new configuration did not cause any unexpected changes by using the steps under **Verify** in [Verify the configuration of a server](active-directory-aadconnectsync-operations.md#verify-the-configuration-of-a-server). If something is not as expected, correct it, run import and sync, and verify until you are satisfied with the data. These steps are documented in that topic.
6. Switch the staging server to be the active server. This is the final step, **Switch active server**, in [Verify the configuration of a server](active-directory-aadconnectsync-operations.md#verify-the-configuration-of-a-server).
7. If you are upgrading Azure AD Connect, upgrade the server that is now in staging mode to the latest version. Follow the same steps as before to get the data and configuration upgraded. If you upgraded from Azure AD Sync, you can now turn off and decommission your old server.
### <a name="move-custom-configuration-from-active-to-staging-server"></a>Move a custom configuration from the active server to the staging server
If you have made configuration changes to the active server, you need to make sure that the same changes are applied to the staging server.
Custom synchronization rules that you have created can be moved with PowerShell. Other changes must be applied the same way on both systems and cannot be migrated.
Things you need to make sure are configured the same way on both servers:
- Connection to the same forests.
- Any domain and OU filtering.
- The same optional features, such as password synchronization and password writeback.
**Move synchronization rules**
To move a custom synchronization rule, do the following:
1. Open the **Synchronization Rules Editor** on your active server.
2. Select your custom rule and click **Export**. This opens a Notepad window. Save the temporary file with a .ps1 extension, which makes it a PowerShell script. Copy the .ps1 file to the staging server.

3. The connector GUID is different on the staging server and must be changed. To get the GUID, start the **Synchronization Rules Editor**, select one of the out-of-box rules that targets the same connected system, and click **Export**. Replace the GUID in your .ps1 file with the GUID from the staging server.
4. At a PowerShell prompt, run the .ps1 file. This creates the custom synchronization rule on the staging server.
5. If you have multiple custom rules, repeat these steps for each of them.
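As a sketch, steps 3 and 4 might look like this from a PowerShell prompt (the file name and both GUIDs are hypothetical placeholders, not values from this article):

```powershell
# Inspect the exported rule script first, then swap in the staging
# server's connector GUID before running it.
$rule = Get-Content .\MyCustomRule.ps1 -Raw

# Both GUIDs are made-up examples; take the real target GUID from an
# out-of-box rule exported on the staging server.
$rule = $rule -replace '11111111-1111-1111-1111-111111111111', '22222222-2222-2222-2222-222222222222'

Set-Content .\MyCustomRule.staging.ps1 $rule

# Running the script creates the custom rule on the staging server.
.\MyCustomRule.staging.ps1
```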
## <a name="next-steps"></a>Next steps
Learn more about [integrating your on-premises identities with Azure Active Directory](active-directory-aadconnect.md).
---
layout: post
title: "Exchange: Distribution lists"
categories: Exchange
tags:['Powershell', 'Verteilerlisten', 'Verwaltungskonsole']
author: StehSa
---
## Making lists available externally
### Exchange Management Console:
1. Properties
2. Mail flow settings
3. "Message delivery restrictions"
4. Clear the "Require that all senders are authenticated" check box there
### PowerShell:
Set-DistributionGroup -BypassSecurityGroupManagerCheck -RequireSenderAuthenticationEnabled:$false -Identity <name-of-distribution-group>
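To verify the change afterwards, something along these lines should work (the group name is a placeholder):

```powershell
Get-DistributionGroup -Identity "<name-of-distribution-group>" |
    Format-List Name, RequireSenderAuthenticationEnabled
```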
"MIT"
] | null | null | null | ---
layout: post
title: "BootStrap"
date: 2021-03-13 21:03:36 +0530
categories: TIL HTML CSS BootStrap
---
## BootStrap
---
**HTML, CSS, JS로 이루어진 FrameWork**
🧱 FrameWork : 재사용성을 극대화시킨 하나의 틀
### Getting started with BootStrap
[Bootstrap](https://getbootstrap.com/)
1. Using a CDN
CSS
```html
<link
href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta2/dist/css/bootstrap.min.css"
rel="stylesheet"
integrity="sha384-BmbxuPwQa2lc/FVzBcNJ7UAyJxM6wuqIj61tLrc4wSX0szH/Ev+nYRRuWlolflfl"
crossorigin="anonymous"
/>
```
JS
```html
<script
src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta2/dist/js/bootstrap.bundle.min.js"
integrity="sha384-b5kHyXgcpbZJO/tY9Ul7kGkf1S0CWuKcCD38l8YkeH8z8QjE0GmW1gYU5S9FOnJ0"
crossorigin="anonymous"
></script>
```
2. Using downloaded files directly
### BootStrap 적용
요소 안에 BootStrap에서 정한 Class를 넣으면 html의 code로 편리하게 작성이 가능하다
```html
<div class="container">...</div>
```
### 내장된 Grid System
기기나 ViewPort에 따라 12개의 열이 나누어지는 것을 가능하게 해준다
```html
8:4 비율로 나누어진다
<div class="row">
<div class="col-md-8">.col-md-8</div>
<div class="col-md-4">.col-md-4</div>
</div>
4:4:4 비율로 나누어진다
<div class="row">
<div class="col-md-4">.col-md-4</div>
<div class="col-md-4">.col-md-4</div>
<div class="col-md-4">.col-md-4</div>
</div>
```
`<div class="col-크기-숫자"></div>`
- 크기 : `xs` - 모바일 ,`sm` - 태블릿,`md` - 데스크탑, `lg` - 데스크탑
- 숫자 : 총 합이 12로 맞춰진다
### BootStrap Components

원하는 Component의 코드를 복사해서 사용할 수 있다
### 반응형 웹
<br>
👇 **최대화했을때**

👇 **일정크기이하로 줄일 때**

**Bootstrap이 자동적으로 반응형 웹으로 만들어준다
하지만, 원치 않을 시 반응형을 제거할 수 있다**
1.viewport `<meta>` 를 제거
```html
<meta
name="viewport"
content="width=device-width, initial-scale=1, shrink-to-fit=no"
/>
```
2.해당 요소의 가로폭 지정
```html
.container { width: 1000px !important; }
```
3.그리드 레이아웃
.col-xs-\* 클래스로 변경
| 18.910714 | 118 | 0.679887 | kor_Hang | 0.981011 |
876a181c8969317e7e2e5b5b8da623341556a4f6 | 57 | md | Markdown | _posts/0000-01-02-shayongithub.md | shayongithub/github-slideshow | 85bdf193c048f5bb8b615194aea0dc7581b407bb | [
"MIT"
] | 1 | 2021-02-17T14:43:05.000Z | 2021-02-17T14:43:05.000Z | _posts/0000-01-02-shayongithub.md | shayongithub/github-slideshow | 85bdf193c048f5bb8b615194aea0dc7581b407bb | [
"MIT"
] | 3 | 2021-02-17T14:44:44.000Z | 2021-02-17T15:01:15.000Z | _posts/0000-01-02-shayongithub.md | shayongithub/github-slideshow | 85bdf193c048f5bb8b615194aea0dc7581b407bb | [
"MIT"
] | null | null | null | ---
layout: slide
title: "Learninggg"
---
What the heckk
| 9.5 | 19 | 0.666667 | eng_Latn | 0.954743 |
876a700579b3ad3194a42be62d15bc7d029f7b5a | 51 | md | Markdown | README.md | WSMathias/k8s-cheat-sheet | 4c8e38f854b949f30ed89f75d5fd8d7902b4365e | [
"MIT"
] | null | null | null | README.md | WSMathias/k8s-cheat-sheet | 4c8e38f854b949f30ed89f75d5fd8d7902b4365e | [
"MIT"
] | null | null | null | README.md | WSMathias/k8s-cheat-sheet | 4c8e38f854b949f30ed89f75d5fd8d7902b4365e | [
"MIT"
] | null | null | null | # k8s-cheat-sheet
Shorcuts to run kubectl commands
| 17 | 32 | 0.803922 | eng_Latn | 0.968161 |
876ba0bf8dd7d28bd991373d6850e77dd3f481c8 | 943 | md | Markdown | _collection_ScholarThings_EarlyClusteringExplore/Algorithms/pdfCluster_Clustering via nonparametric density estimation.md | marquistj13/MyBlog | dc1c27cc617475f0a075f1c30d384c18535aa658 | [
"MIT"
] | 12 | 2017-03-03T03:32:33.000Z | 2021-11-23T10:40:18.000Z | _collection_ScholarThings_EarlyClusteringExplore/Algorithms/pdfCluster_Clustering via nonparametric density estimation.md | marquistj13/MyBlog | dc1c27cc617475f0a075f1c30d384c18535aa658 | [
"MIT"
] | 2 | 2018-11-30T02:43:10.000Z | 2020-02-13T12:29:53.000Z | _collection_ScholarThings_EarlyClusteringExplore/Algorithms/pdfCluster_Clustering via nonparametric density estimation.md | marquistj13/MyBlog | dc1c27cc617475f0a075f1c30d384c18535aa658 | [
"MIT"
] | 2 | 2019-07-05T01:10:47.000Z | 2020-04-07T06:07:11.000Z | ---
title: pdfCluster_Clustering via nonparametric density estimation
date: 2017-10-26
---
* content
{:toc}
## 前言
一个R包pdfCluster对应的paper。
基本思想:数一下pdf的local mode的数目
## 具体算法
怎么数呢?
一个基本的想法就是将pdf的曲线横着切一刀,然后数一下connected components的数目,但是该在那儿切呢?作者的方法是都切一下,当然为了好计算,是离散地取一些点去切。
更精确的描述如下。
### 初步想法

上图,左边是pdf,右边x轴是 $p_c$,即 pdf值大于 $c$的pdf的曲线下的面积, 纵轴 $m(p)$ 是此$p_c$对应的connected components的数目,
这样当 $c$ 由大到小时, $p_c$ 会从小到大, $m(p)$ 会先大后小
分别对应cluster的分离和merge

根据mode function就可以得到cluster tree
### 具体实现
mode function咋得到?
只能得到一个empirical的了,毕竟连pdf都是逼近的嘛。
作者使用了Voronoi tessellation and Delaunay triangulation的概念。
废话不说了,上图:

图中,虚线构成了Voronoi tessellation(估计是纯粹根据距离得到的,作者好像没有明说),然后得到实线部分的Delaunay triangulation
这个Delaunay triangulation就是用来构造mode function的关键
如果某一个triangulation的点的density小于 $c$,那么所有与它连接的edge都得干掉,这样就形成了上图右边的两个区域。
好了。就这样了。 | 20.955556 | 91 | 0.804878 | yue_Hant | 0.737188 |
876c0e389a26834e7ae408cf43c1424bcb364c42 | 1,637 | md | Markdown | src/ro/2018-04/10/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/ro/2018-04/10/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/ro/2018-04/10/04.md | OsArts/Bible-study | cfcefde42e21795e217d192a8b7a703ebb7a6c01 | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Daruri spirituale pentru unitate
date: 04/12/2018
---
`3. Biserica din Corint avea probleme serioase. Ce învăţături dă Pavel care să ajute la vindecare şi la refacerea relaţiilor? 1 Corinteni 3:5-11; 1 Corinteni 12:1-11; 2 Corinteni 10:12-15`
Apostolul spune că Isus foloseşte diferiţi lucrători pentru a îndeplini diferite slujbe în biserica Sa şi că toţi aceştia lucrează împreună la zidirea împărăţiei Sale.
Dumnezeu ne cheamă la cooperare, nu la competiţie. El ne înzestrează cu daruri ca să colaborăm la slujirea trupului lui Hristos şi a societăţii. Nu există daruri mai importante şi daruri mai puţin importante. Toate sunt necesare în biserică. El nu ne-a dat daruri spre folosul personal, ci ni le-a dat, prin Duhul Sfânt, ca să contribuim la răspândirea Evangheliei (1 Corinteni 12:11,18-23).
Nu este înţelept să ne comparăm cu alţii, fiindcă fie ne vom descuraja, fie ne vom lăuda. Dacă ni se pare că alţii sunt cu mult „superiori” nouă, ne vom deprima şi nu ne vom mai împlini lucrarea cu acelaşi elan. Iar dacă ne considerăm mai eficienţi decât alţii în lucrarea creştină, vom cădea în mândrie, sentiment pe care n-ar trebui să-l nutrească niciun creştin.
Ambele atitudini ne fac ineficienţi pentru Hristos şi pentru părtăşia dintre noi. Când vom lucra liniştiţi cu cei din sfera noastră de influenţă, vom fi bucuroşi şi mulţumiţi. Eforturile noastre vor completa eforturile celorlalţi membri şi biserica lui Hristos va face paşi mari spre împărăţia Sa.
`Ai fost vreodată invidios pe cineva pentru darurile lui spirituale? Ţi s-a întâmplat vreodată să te mândreşti cu darurile tale? Care au fost urmările?` | 102.3125 | 391 | 0.799023 | ron_Latn | 1.00001 |
876c1c666ada7ea59dff6b4591721f2fd3d4b286 | 922 | md | Markdown | Tricks/1.flex-&-ellipsis.md | AjiMIde/styles-book | b74c1f574f70257c7cf91e3b8fd65b2c425ddb67 | [
"MIT"
] | null | null | null | Tricks/1.flex-&-ellipsis.md | AjiMIde/styles-book | b74c1f574f70257c7cf91e3b8fd65b2c425ddb67 | [
"MIT"
] | null | null | null | Tricks/1.flex-&-ellipsis.md | AjiMIde/styles-book | b74c1f574f70257c7cf91e3b8fd65b2c425ddb67 | [
"MIT"
] | null | null | null | #### `flex` 中 `ellipsis` 不显示省略号的问题
* `flex` 是现行 h5 大量使用的布局
* 在 `flex` 下面使用 `text-overflow: ellipsis` 往往会出现省略号 `...` 不显示的问题
* 解决方案,使用 `overflow: hidden; width: 0;` 解决
* 还可以??使用 `flex-shrink: 1` 解决?
* 注意,下面的代码中,右边那块还用了个**垂直居中**,**双行ellipsis**处理
```scss
.flex-box {
display: flex;
width: 100%;
height: 200px;
border: 1px solid #999;
overflow: hidden;
.fb-img {
width: 100px;
background-color: #e6a23c;
}
.fb-info {
// 重点
width: 0;
//
flex: 1;
display: flex;
flex-direction: column;
justify-content: center;
}
.fb-info-title {
font-size: 24px;
font-weight: bold;
//
white-space: nowrap;
text-overflow: ellipsis;
overflow: hidden;
//
}
.fb-info-desc {
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-box-orient: vertical;
-webkit-line-clamp: 2;
}
}
```

| 18.816327 | 63 | 0.592191 | yue_Hant | 0.233589 |
876c764df1a273d24f3d93098f50b93d84156203 | 1,164 | md | Markdown | docs/parallel/openmp/a-10-specifying-sequential-ordering.md | OpenLocalizationTestOrg/cpp-docs.it-it | 05d8d2dcc95498d856f8456e951d801011fe23d1 | [
"CC-BY-4.0"
] | 1 | 2020-05-21T13:09:13.000Z | 2020-05-21T13:09:13.000Z | docs/parallel/openmp/a-10-specifying-sequential-ordering.md | OpenLocalizationTestOrg/cpp-docs.it-it | 05d8d2dcc95498d856f8456e951d801011fe23d1 | [
"CC-BY-4.0"
] | null | null | null | docs/parallel/openmp/a-10-specifying-sequential-ordering.md | OpenLocalizationTestOrg/cpp-docs.it-it | 05d8d2dcc95498d856f8456e951d801011fe23d1 | [
"CC-BY-4.0"
---
title: A.10 Specifying Sequential Ordering | Microsoft Docs
ms.custom:
ms.date: 11/04/2016
ms.reviewer:
ms.suite:
ms.technology:
- devlang-cpp
ms.tgt_pltfrm:
ms.topic: article
dev_langs:
- C++
ms.assetid: 5c65a9b1-0fc5-4cad-a5a9-9ce10b25d25c
caps.latest.revision: 7
author: mikeblome
ms.author: mblome
manager: ghogen
translation.priority.ht:
- cs-cz
- de-de
- es-es
- fr-fr
- it-it
- ja-jp
- ko-kr
- pl-pl
- pt-br
- ru-ru
- tr-tr
- zh-cn
- zh-tw
translationtype: Human Translation
ms.sourcegitcommit: 3168772cbb7e8127523bc2fc2da5cc9b4f59beb8
ms.openlocfilehash: 5346150508a18e1aaa026eea847f9e182dfc5ef1
---
# A.10 Specifying Sequential Ordering
Ordered sections ([Section 2.6.6](../../parallel/openmp/2-6-6-ordered-construct.md) on page 22) are useful for sequentially ordering the output from work that is done in parallel. The following program prints out the indexes in sequential order:
```c
#pragma omp for ordered schedule(dynamic)
for (i=lb; i<ub; i+=st)
work(i);
void work(int k)
{
#pragma omp ordered
printf_s(" %d", k);
}
```
"MIT"
] | 5 | 2016-01-19T06:29:35.000Z | 2021-09-29T14:39:19.000Z | ---
title: "retour d'expérience de ae&t à l'IAE de Pau"
date: 2019-02-19 00:41
layout: post
category: ["conférence"]
tags: ["AE&T", "Claude Andrieux", "Scrum in action", "Claude Aubry", "employes d'abord", "Liberté & Cie", "Bonheur au Travail", "Michelin", "Gore"]
embed_youtube: IAYyidxCC9M
illustration: /images/IAYyidxCC9M.jpg
---
Conférence d'une heure et demi sur le fonctionnement de AE&T par Claude Andrieux (ancien dirigeant) et Denis Benoist (salarié).
Tentative d'amélioration du son de cette conférence :
<audio controls>
<source src="/images/IAYyidxCC9M.mp3" type="audio/mpeg">
Votre navigateur ne permet la lecture de ce son
</audio>
Lors de son intervention, Denis Benoist évoque :
- [Scrum in action](https://www.pearson.ch/Business/PearsonFrance/EAN/9782744066658/Scrum-en-action) (livre)
- [Scrum par Claude Aubry](http://www.aubryconseil.com/pages/Livre-Scrum) (livre)
- Le [roti agile](http://atelier-collaboratif.com/50-le-roti-agile.html) (outil agile)
- la [loi des 2 pieds](http://www.qualitystreet.fr/2014/11/06/loi-des-2-pieds-ici-et-maintenant/) (outil agile)
- [Game storming](https://gamestorming.com/) (livre)
- [Innovation game](https://www.innovationgames.com/) (livre)
- [Employés d'abord, les clients ensuite](http://www.xn--organisationslibres-qzbb.fr/livre/2016/01/18/les-employes-d-abord,-les-clients-ensuite-vineet-nayar/) (livre)
- [Liberté et companie](http://www.xn--organisationslibres-qzbb.fr/livre/2016/01/18/liberte-et-compagnie-isaac-getz/) (livre)
- Le [bonheur au travail](http://www.organisationslibérées.fr/film/2016/01/18/le-bonheur-au-travail-un-film-de-martin-meissonnier/) (reportage)
- [Qu'est ce que l'entreprise libérée](https://www.youtube.com/watch?v=ZrAFpPbz7O4) (clip vidéo)
- Le jeu du fou (outil agile)
- [Open Agile Adoption](https://www.infoq.com/fr/articles/open-agile-adoption-2) (outils agiles)
- Teemood ? (logiciel)
- Communication Non Violente
Pendant ses interventions et les questions, Claude Andrieux parle de :
- l'adoption par les salariés aux valeurs de l'intelligence collective à travers la signature d'une charte
- la délégation vers le bas
- le passage du "mode controle" vers le "mode autonomisation"
- l'abandon du "mode plan" (planification) au profit du "mode faire et expérimentation", ce qui passe par l'abandon des objectifs (annuels, mensuels, ...)
- la transparence (notamment financière, « beaucoup plus efficace pour lutter contre la triche que les procédures de controle »)
- l'importance de la confiance, de « se faire confiance »
- la notion d'égalité intrinsèque
- l'auto-organisation
- le droit à l'erreur
- le départ du "leader libérateur"
| 55.708333 | 167 | 0.743829 | fra_Latn | 0.765036 |
876e2ec9ceff6196fdce96812a9268cf14494222 | 596 | md | Markdown | README.md | Jalle19/telegraf-snmp-mikrotik | 71fc9fa773c562615b5b3f3871c3750b758cf325 | [
"MIT"
] | null | null | null | README.md | Jalle19/telegraf-snmp-mikrotik | 71fc9fa773c562615b5b3f3871c3750b758cf325 | [
"MIT"
] | null | null | null | README.md | Jalle19/telegraf-snmp-mikrotik | 71fc9fa773c562615b5b3f3871c3750b758cf325 | [
"MIT"
] | null | null | null | # telegraf-snmp-mikrotik
This repo contains Telegraf snmp plugin configuration for Router OS (mikrotik) devices as I couldn't find any good configuration online.
## Usage
The telegraf.conf file contains only relevant part, you probably want to *add* it to your current telegraf configuration.
The configuration was tested only with handful of mikrotik models like:
- 2011UiAS-2HnD
- 750GL
Not all features are supported.
## TODO
Create multiple model-specific versions, where only supported metrics are collected.
## Contributing
Contributions are wellcome, please create pull request.
| 28.380952 | 136 | 0.79698 | eng_Latn | 0.991553 |
876eb8403ee9c14093d342f81eda99031ef1991d | 33 | md | Markdown | docs/content/src/learn/tutorials/saas-integrations/sfdc46/working-with-salesforce-client/src/working_with_salesforce_client/Module.md | indikasampath2000/ballerina-integrator | 56fe5f88cf5fd5d71ca5972bc983480570593c36 | [
"Apache-2.0"
] | 1 | 2019-06-07T04:52:06.000Z | 2019-06-07T04:52:06.000Z | docs/content/src/learn/tutorials/saas-integrations/sfdc46/working-with-salesforce-client/src/working_with_salesforce_client/Module.md | indikasampath2000/ballerina-integrator | 56fe5f88cf5fd5d71ca5972bc983480570593c36 | [
"Apache-2.0"
] | null | null | null | docs/content/src/learn/tutorials/saas-integrations/sfdc46/working-with-salesforce-client/src/working_with_salesforce_client/Module.md | indikasampath2000/ballerina-integrator | 56fe5f88cf5fd5d71ca5972bc983480570593c36 | [
"Apache-2.0"
] | null | null | null | # Working with Salesforce client
| 16.5 | 32 | 0.818182 | eng_Latn | 0.994853 |
876f5f4533fccea2af301cfafadfc9f8249cce76 | 8,786 | md | Markdown | articles/best-practices-availability-paired-regions.md | peder-andfrankly/azure-docs.sv-se | 49435a06686fc72ca9cd8c83883c3c3704a6ec72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/best-practices-availability-paired-regions.md | peder-andfrankly/azure-docs.sv-se | 49435a06686fc72ca9cd8c83883c3c3704a6ec72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/best-practices-availability-paired-regions.md | peder-andfrankly/azure-docs.sv-se | 49435a06686fc72ca9cd8c83883c3c3704a6ec72 | [
"CC-BY-4.0",
"MIT"
---
title: Business continuity & disaster recovery – Azure paired regions
description: Learn about regional pairing in Azure, which keeps applications resilient during datacenter failures.
author: rayne-wiselman
manager: carmon
ms.service: multiple
ms.topic: article
ms.date: 07/01/2019
ms.author: raynew
ms.openlocfilehash: b71048412f5715fd1b8ef3edf742716916672bd5
ms.sourcegitcommit: bc7725874a1502aa4c069fc1804f1f249f4fa5f7
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 11/07/2019
ms.locfileid: "73718744"
---
# <a name="business-continuity-and-disaster-recovery-bcdr-azure-paired-regions"></a>Business continuity and disaster recovery (BCDR): Azure paired regions
## <a name="what-are-paired-regions"></a>What are paired regions?
Azure operates in multiple geographies around the world. An Azure geography is a defined area of the world that contains at least one Azure region. An Azure region is an area within a geography containing one or more datacenters.
Each Azure region is paired with another region within the same geography, together making a regional pair. The exception is Brazil South, which is paired with a region outside its geography. Across the region pairs, Azure serializes platform updates (planned maintenance) so that only one paired region is updated at a time. In the event of an outage that affects multiple regions, at least one region in each pair is prioritized for recovery.

Figure 1 – Azure regional pairs
| Geography | Paired regions | |
|:--- |:--- |:--- |
| Asia |East Asia |Southeast Asia |
| Australia |Australia East |Australia Southeast |
| Australia |Australia Central |Australia Central 2 |
| Brazil |Brazil South |South Central US |
| Canada |Canada Central |Canada East |
| China |China North |China East |
| China |China North 2 |China East 2 |
| Europe |North Europe (Ireland) |West Europe (Netherlands) |
| France |France Central |France South |
| Germany |Germany Central |Germany Northeast |
| India |Central India |South India |
| India |West India |South India |
| Japan |Japan East |Japan West |
| Korea |Korea Central |Korea South |
| North America |East US |West US |
| North America |East US 2 |Central US |
| North America |North Central US |South Central US |
| North America |West US 2 |West Central US |
| South Africa |South Africa North |South Africa West |
| UK |UK West |UK South |
| United Arab Emirates |UAE North |UAE Central |
| US Department of Defense |US DoD East |US DoD Central |
| US Government |US Gov Arizona |US Gov Texas |
| US Government |US Gov Iowa |US Gov Virginia |
| US Government |US Gov Virginia |US Gov Texas |
Table 1 – Mapping of Azure regional pairs
- West India is paired in one direction only. West India's secondary region is South India, but South India's secondary region is Central India.
- Brazil South is unique because it is paired with a region outside of its own geography. Brazil South's secondary region is South Central US, but South Central US's secondary region is not Brazil South.
- US Gov Iowa's secondary region is US Gov Virginia.
- US Gov Virginia's secondary region is US Gov Texas.
- US Gov Texas's secondary region is US Gov Arizona.
We recommend that you configure business continuity and disaster recovery (BCDR) across regional pairs to benefit from Azure's isolation and availability policies. For applications that support multiple active regions, we recommend using both regions in a region pair where possible. This ensures optimal availability for applications and minimized recovery time in the event of a disaster.
## <a name="an-example-of-paired-regions"></a>An example of paired regions
Figure 2 below shows a hypothetical application that uses the regional pair for disaster recovery. The green numbers highlight the cross-region activities of three Azure services (Azure Compute, Storage, and Database) and how they are configured to replicate across regions. The unique benefits of deploying across paired regions are highlighted by the orange numbers.

Figure 2 – Hypothetical Azure regional pair
## <a name="cross-region-activities"></a>Cross-region activities
As referred to in figure 2:
**Azure Compute (IaaS)** – You must provision additional compute resources in advance to ensure that resources are available in another region during a disaster. For more information, see [Azure resiliency technical guidance](https://github.com/uglide/azure-content/blob/master/articles/resiliency/resiliency-technical-guidance.md).
**Azure Storage** – If you use managed disks, learn about [cross-region backup](https://docs.microsoft.com/azure/architecture/resiliency/recovery-loss-azure-region#virtual-machines) with Azure Backup and about [replicating virtual machines](https://docs.microsoft.com/azure/site-recovery/azure-to-azure-tutorial-enable-replication) from one region to another with Azure Site Recovery. If you use storage accounts, geo-redundant storage (GRS) is configured by default when an Azure storage account is created. With GRS, your data is automatically replicated three times within the primary region and three times in the paired region. For more information, see [Azure Storage redundancy options](storage/common/storage-redundancy.md).
**Azure SQL Database** – With Azure SQL Database geo-replication, you can configure asynchronous replication of transactions to any region in the world; however, we recommend deploying these resources in a paired region for most disaster recovery scenarios. For more information, see [Geo-replication in Azure SQL Database](sql-database/sql-database-geo-replication-overview.md).
**Azure Resource Manager** – Resource Manager provides logical isolation of components across regions. This means that logical failures in one region are less likely to affect another.
## <a name="benefits-of-paired-regions"></a>Benefits of paired regions
As referred to in figure 2:
**Physical isolation** – When possible, Azure prefers at least 300 miles of separation between datacenters in a regional pair, although this is not practical or possible in all geographies. Physical datacenter separation reduces the likelihood of natural disasters, civil unrest, power outages, or physical network outages affecting both regions at once. Isolation is subject to the constraints within the geography (geography size, power/network infrastructure availability, regulations, and so on).
**Platform-provided replication** – Some services, such as geo-redundant storage, provide automatic replication to the paired region.
**Region recovery order** – In the event of a broad outage, recovery of one region out of every pair is prioritized. Applications that are deployed across paired regions are guaranteed to have one of the regions recovered with priority. If an application is deployed across regions that are not paired, recovery might be delayed; in the worst case the chosen regions may be the last two to be recovered.
**Sequential updates** – Planned Azure system updates are rolled out to paired regions sequentially (not at the same time) to minimize downtime, the effect of bugs, and logical failures in the rare event of a bad update.
**Data residency** – A region resides within the same geography as its pair (with the exception of Brazil South) to meet data residency requirements for tax and law enforcement jurisdiction purposes.
] | 406 | 2018-03-30T19:22:39.000Z | 2022-03-28T19:14:42.000Z | ---
title: "Configure sales accelerator for assignment rules"
description: "Configure sales accelerator for assignment rules to automatically assign leads and opportunities to sellers."
ms.date: 10/26/2021
ms.topic: article
author: udaykirang
ms.author: udag
manager: shujoshi
ms.custom:
- "dyn365-sales"
---
# Configure assignment rules in Sales Enterprise
Configure sales accelerator for assignment rules to automatically assign leads and opportunities to sellers.
## License and role requirements
| | |
|-----------------------|---------|
| **License** | Dynamics 365 Sales Premium or Dynamics 365 Sales Enterprise <br>More information: [Dynamics 365 Sales pricing](https://dynamics.microsoft.com/sales/pricing/) |
| **Security roles** | System Administrator <br> See [Predefined security roles for Sales](security-roles-for-sales.md)|
|||
## What are assignment rules
Use assignment rules in sales accelerator to automatically assign new leads and opportunities to sellers or sales teams. This helps reduce the amount of time and effort required to manually assign records, prevent the loss of unassigned records, and balance the assignments among sellers.
As an administrator, you can create rules that match lead or opportunity attributes (such as location and language) with the corresponding seller or team attributes (such as location and language). For example, when a lead is created and satisfies the conditions of a specific rule, the lead is automatically assigned to a seller.
> [!IMPORTANT]
> Use this procedure to configure assignment rules in sales accelerator with the Sales Enterprise license. If you have the Sales Premium license, use [Configure the sales accelerator for Sales Premium](enable-configure-sales-accelerator.md).
## Configure sales accelerator
1. In the Sales Hub app, select the Change area icon in the lower-left corner of the page and then select **Sales Insights settings**.
2. On the site map, under **Sales accelerator**, select **Setup**.
The sales accelerator configuration page opens.
> [!div class="mx-imgBorder"]
> 
3. In the **Define team access** section, select one of the following options to provide permissions to users to use the assignment rules feature, and then select **Next**.
| Security roles | Description |
|----------------|-------------|
| All security roles | Select this option to give access to assignment rules in the Sales Hub app to all the security roles in your organization. |
| Specific security roles | Select this option to specify security roles to give access to assignment rules in the Sales Hub app to just a few users. Use the lookup box to add the security roles. |
> [!div class="mx-imgBorder"]
> 
4. In the **Seller availability** section, select the **Seller availability** toggle to enable the option that allows sellers to configure the working hours and vacation days so that leads and opportunities are assigned based on their availability.
> [!div class="mx-imgBorder"]
> 
More information: [Configure your work availability through personal settings](personalize-sales-accelerator.md#through-personal-settings).
5. In the **Automate lead and opportunity assignment (preview)** section, select the toggle to enable preview for the assignment rules feature.
More information: [Manage assignment rules for routing](create-manage-assignment-rules.md).
6. Save and publish the configuration.
   A status message is displayed at the top of the page with details such as the time and the user who published the configuration.
> [!div class="mx-imgBorder"]
> 
The sales accelerator is configured to manage assignment rules in your organization for the selected security roles.
[!INCLUDE[cant-find-option](../includes/cant-find-option.md)]
### See also
[Manage assignment rules for routing](create-manage-assignment-rules.md)
[Configure your work availability through personal settings](personalize-sales-accelerator.md#through-personal-settings)
[!INCLUDE[footer-include](../includes/footer-banner.md)]
# Useful Command Line Programs
Cheatsheets:
- https://developers.redhat.com/cheat-sheets/linux-commands-cheat-sheet
- https://developers.redhat.com/cheat-sheets/advanced-linux-commands
Terminal:
- st: light terminal
- urxvt: unicode rxvt terminal
Apps:
- vim: editor, IDE, everything
- vifm: filemanager with vim bindings
- nnn: filemanager
- ranger: python based file manager
Multimedia:
- imagemagick: image manipulation
- ffmpeg: audio/video
- libav: audio/video
Prompt:
- Starship: prompt that works for any shell
Search / Autocomplete:
- z: file path autocomplete
- fzf: fuzzy finder for all types of command line entries with .xxignore support
- fd: similar to find with simpler syntax for extensive search with good defaults with .xxignore support
- ripgrep: similar to grep much faster one, with sane defaults and colorized output with .xxignore support
- silver searcher: aka ag code search similar to ack with .xxignore support
Monitor:
- htop: process monitor
- glances: similar to htop with more details
- ctop and lazydocker: monitor for docker containers
- conky: system stats on desktop
Recording:
- asciinema: record commandline in text format and playback
- scrot: screenshot
Editing:
- bat: similar to cat with editor like formatting and color scheme
- colordiff: Diff with colors
- diff-so-fancy: upgraded git diff with better formatting
Disk:
- ncdu: disk usage
- cfdisk: disk formatting similar to fdisk
- tree: create tree view of directory
- exa: similar to ls with more options and colors
Utilities:
- httpie: similar to curl but simpler syntax
- litecli and pgcli: better commandline tools compared to sqlite3 and psql
- done: notify when long running commands complete
# API Reference
**Classes**
Name|Description
----|-----------
[CodeBuildProject](#awscdk-run-codebuildproject)|CodeBuild Project.
[StartBuild](#awscdk-run-startbuild)|Custom resource to start the CodeBuild project.
**Structs**
Name|Description
----|-----------
[CodeBuildProjectProps](#awscdk-run-codebuildprojectprops)|Construct properties for CodeBuildProject.
[StartBuildProps](#awscdk-run-startbuildprops)|Construct properties for StartBuild.
## class CodeBuildProject <a id="awscdk-run-codebuildproject"></a>
CodeBuild Project.
__Implements__: [IConstruct](#constructs-iconstruct), [IConstruct](#aws-cdk-core-iconstruct), [IConstruct](#constructs-iconstruct), [IDependable](#aws-cdk-core-idependable)
__Extends__: [Construct](#aws-cdk-core-construct)
### Initializer
```ts
new CodeBuildProject(scope: Construct, id: string, props: CodeBuildProjectProps)
```
* **scope** (<code>[Construct](#aws-cdk-core-construct)</code>) *No description*
* **id** (<code>string</code>) *No description*
* **props** (<code>[CodeBuildProjectProps](#awscdk-run-codebuildprojectprops)</code>) *No description*
* **github** (<code>string</code>) e.g. pahud/aws-cdk-serverless-sample.
* **customStackName** (<code>string</code>) custom stack name for the cdk stack to deploy. __*Optional*__
* **role** (<code>[IRole](#aws-cdk-aws-iam-irole)</code>) IAM role for the project. __*Default*__: create a new role
* **starBuild** (<code>boolean</code>) whether to start the build immediately. __*Default*__: true
### Properties
Name | Type | Description
-----|------|-------------
**project** | <code>[IProject](#aws-cdk-aws-codebuild-iproject)</code> | <span></span>
## class StartBuild <a id="awscdk-run-startbuild"></a>
Custom resource to start the CodeBuild project.
__Implements__: [IConstruct](#constructs-iconstruct), [IConstruct](#aws-cdk-core-iconstruct), [IConstruct](#constructs-iconstruct), [IDependable](#aws-cdk-core-idependable)
__Extends__: [Construct](#aws-cdk-core-construct)
### Initializer
```ts
new StartBuild(scope: Construct, id: string, props: StartBuildProps)
```
* **scope** (<code>[Construct](#aws-cdk-core-construct)</code>) *No description*
* **id** (<code>string</code>) *No description*
* **props** (<code>[StartBuildProps](#awscdk-run-startbuildprops)</code>) *No description*
* **project** (<code>[IProject](#aws-cdk-aws-codebuild-iproject)</code>) The codebuild project to start.
## struct CodeBuildProjectProps <a id="awscdk-run-codebuildprojectprops"></a>
Construct properties for CodeBuildProject.
Name | Type | Description
-----|------|-------------
**github** | <code>string</code> | e.g. pahud/aws-cdk-serverless-sample.
**customStackName**? | <code>string</code> | custom stack name for the cdk stack to deploy.<br/>__*Optional*__
**role**? | <code>[IRole](#aws-cdk-aws-iam-irole)</code> | IAM role for the project.<br/>__*Default*__: create a new role
**starBuild**? | <code>boolean</code> | whether to start the build immediately.<br/>__*Default*__: true
## struct StartBuildProps <a id="awscdk-run-startbuildprops"></a>
Construct properties for StartBuild.
Name | Type | Description
-----|------|-------------
**project** | <code>[IProject](#aws-cdk-aws-codebuild-iproject)</code> | The codebuild project to start.
## Manual
In order to display **SENTINL** alarms in Kibi/Kibana:
- Switch to ```Discover``` tab
- Create and Save a search table for ```watcher_alarms-*``` with any desired column
<img src="http://i.imgur.com/CbMZgVO.png" />
- Switch to ```Dashboard``` tab
- Add a new Search Widget using the ```Search``` tab and selecting the saved query
<img src="http://i.imgur.com/Iim8wrf.png" />
-------
### The Lazy Way (4.x)
- download and import [this file](https://github.com/elasticfence/kaae/releases/download/snapshot/kaae_kibana4_dasbboard.json) in Kibi/Kibana
- [ ```settings``` -> ```objects``` -> ```import``` ]
<img src="http://i.imgur.com/wgQnKms.png" />
- Your Dashboard is ready
<img src="http://i.imgur.com/rIsCVgN.png" />
----
# CreateSimulationJob<a name="API_CreateSimulationJob"></a>
Creates a simulation job\.
**Note**
After 90 days, simulation jobs expire and will be deleted\. They will no longer be accessible\.
## Request Syntax<a name="API_CreateSimulationJob_RequestSyntax"></a>
```
POST /createSimulationJob HTTP/1.1
Content-type: application/json
{
"[clientRequestToken](#robomaker-CreateSimulationJob-request-clientRequestToken)": "string",
"[dataSources](#robomaker-CreateSimulationJob-request-dataSources)": [
{
"[name](API_DataSourceConfig.md#robomaker-Type-DataSourceConfig-name)": "string",
"[s3Bucket](API_DataSourceConfig.md#robomaker-Type-DataSourceConfig-s3Bucket)": "string",
"[s3Keys](API_DataSourceConfig.md#robomaker-Type-DataSourceConfig-s3Keys)": [ "string" ]
}
],
"[failureBehavior](#robomaker-CreateSimulationJob-request-failureBehavior)": "string",
"[iamRole](#robomaker-CreateSimulationJob-request-iamRole)": "string",
"[loggingConfig](#robomaker-CreateSimulationJob-request-loggingConfig)": {
"[recordAllRosTopics](API_LoggingConfig.md#robomaker-Type-LoggingConfig-recordAllRosTopics)": boolean
},
"[maxJobDurationInSeconds](#robomaker-CreateSimulationJob-request-maxJobDurationInSeconds)": number,
"[outputLocation](#robomaker-CreateSimulationJob-request-outputLocation)": {
"[s3Bucket](API_OutputLocation.md#robomaker-Type-OutputLocation-s3Bucket)": "string",
"[s3Prefix](API_OutputLocation.md#robomaker-Type-OutputLocation-s3Prefix)": "string"
},
"[robotApplications](#robomaker-CreateSimulationJob-request-robotApplications)": [
{
"[application](API_RobotApplicationConfig.md#robomaker-Type-RobotApplicationConfig-application)": "string",
"[applicationVersion](API_RobotApplicationConfig.md#robomaker-Type-RobotApplicationConfig-applicationVersion)": "string",
"[launchConfig](API_RobotApplicationConfig.md#robomaker-Type-RobotApplicationConfig-launchConfig)": {
"[environmentVariables](API_LaunchConfig.md#robomaker-Type-LaunchConfig-environmentVariables)": {
"string" : "string"
},
"[launchFile](API_LaunchConfig.md#robomaker-Type-LaunchConfig-launchFile)": "string",
"[packageName](API_LaunchConfig.md#robomaker-Type-LaunchConfig-packageName)": "string",
"[portForwardingConfig](API_LaunchConfig.md#robomaker-Type-LaunchConfig-portForwardingConfig)": {
"[portMappings](API_PortForwardingConfig.md#robomaker-Type-PortForwardingConfig-portMappings)": [
{
"[applicationPort](API_PortMapping.md#robomaker-Type-PortMapping-applicationPort)": number,
"[enableOnPublicIp](API_PortMapping.md#robomaker-Type-PortMapping-enableOnPublicIp)": boolean,
"[jobPort](API_PortMapping.md#robomaker-Type-PortMapping-jobPort)": number
}
]
}
}
}
],
"[simulationApplications](#robomaker-CreateSimulationJob-request-simulationApplications)": [
{
"[application](API_SimulationApplicationConfig.md#robomaker-Type-SimulationApplicationConfig-application)": "string",
"[applicationVersion](API_SimulationApplicationConfig.md#robomaker-Type-SimulationApplicationConfig-applicationVersion)": "string",
"[launchConfig](API_SimulationApplicationConfig.md#robomaker-Type-SimulationApplicationConfig-launchConfig)": {
"[environmentVariables](API_LaunchConfig.md#robomaker-Type-LaunchConfig-environmentVariables)": {
"string" : "string"
},
"[launchFile](API_LaunchConfig.md#robomaker-Type-LaunchConfig-launchFile)": "string",
"[packageName](API_LaunchConfig.md#robomaker-Type-LaunchConfig-packageName)": "string",
"[portForwardingConfig](API_LaunchConfig.md#robomaker-Type-LaunchConfig-portForwardingConfig)": {
"[portMappings](API_PortForwardingConfig.md#robomaker-Type-PortForwardingConfig-portMappings)": [
{
"[applicationPort](API_PortMapping.md#robomaker-Type-PortMapping-applicationPort)": number,
"[enableOnPublicIp](API_PortMapping.md#robomaker-Type-PortMapping-enableOnPublicIp)": boolean,
"[jobPort](API_PortMapping.md#robomaker-Type-PortMapping-jobPort)": number
}
]
}
}
}
],
"[tags](#robomaker-CreateSimulationJob-request-tags)": {
"string" : "string"
},
"[vpcConfig](#robomaker-CreateSimulationJob-request-vpcConfig)": {
"[assignPublicIp](API_VPCConfig.md#robomaker-Type-VPCConfig-assignPublicIp)": boolean,
"[securityGroups](API_VPCConfig.md#robomaker-Type-VPCConfig-securityGroups)": [ "string" ],
"[subnets](API_VPCConfig.md#robomaker-Type-VPCConfig-subnets)": [ "string" ]
}
}
```
## URI Request Parameters<a name="API_CreateSimulationJob_RequestParameters"></a>
The request does not use any URI parameters\.
## Request Body<a name="API_CreateSimulationJob_RequestBody"></a>
The request accepts the following data in JSON format\.
** [clientRequestToken](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-clientRequestToken"></a>
Unique, case\-sensitive identifier that you provide to ensure the idempotency of the request\.
Type: String
Length Constraints: Minimum length of 1\. Maximum length of 64\.
Pattern: `[a-zA-Z0-9_\-=]*`
Required: No
** [dataSources](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-dataSources"></a>
Specify data sources to mount read\-only files from S3 into your simulation\. These files are available under `/opt/robomaker/datasources/data_source_name`\.
There is a limit of 100 files and a combined size of 25GB for all `DataSourceConfig` objects\.
Type: Array of [DataSourceConfig](API_DataSourceConfig.md) objects
Array Members: Minimum number of 1 item\. Maximum number of 5 items\.
Required: No
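These limits can be checked client-side before submitting a request. The following Python sketch (the helper name and sample values are illustrative, not part of the AWS RoboMaker API) validates a `dataSources` list against the documented 1–5 item bound and the required `DataSourceConfig` fields:

```python
def validate_data_sources(data_sources):
    """Validate a dataSources list against the documented limits:
    1 to 5 DataSourceConfig objects, each needing name, s3Bucket, s3Keys."""
    if not 1 <= len(data_sources) <= 5:
        raise ValueError("dataSources must contain between 1 and 5 items")
    for ds in data_sources:
        missing = {"name", "s3Bucket", "s3Keys"} - ds.keys()
        if missing:
            raise ValueError(f"DataSourceConfig is missing fields: {sorted(missing)}")
    # The combined 100-file / 25 GB limit applies to the S3 objects the
    # keys resolve to, so it can only be fully verified against S3 itself.
    return True

# Hypothetical bucket and key names, purely for illustration.
sources = [{"name": "training-maps",
            "s3Bucket": "my-sim-assets",
            "s3Keys": ["maps/warehouse.world"]}]
print(validate_data_sources(sources))  # True
```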
** [failureBehavior](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-failureBehavior"></a>
The failure behavior of the simulation job\.
+ `Continue`: Restart the simulation job in the same host instance\.
+ `Fail`: Stop the simulation job and terminate the instance\.
Type: String
Valid Values:` Fail | Continue`
Required: No
** [iamRole](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-iamRole"></a>
The IAM role name that allows the simulation instance to call the AWS APIs that are specified in its associated policies on your behalf\. This is how credentials are passed in to your simulation job\.
Type: String
Length Constraints: Minimum length of 1\. Maximum length of 255\.
Pattern: `arn:aws:iam::\w+:role/.*`
Required: Yes
** [loggingConfig](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-loggingConfig"></a>
The logging configuration\.
Type: [LoggingConfig](API_LoggingConfig.md) object
Required: No
** [maxJobDurationInSeconds](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-maxJobDurationInSeconds"></a>
The maximum simulation job duration in seconds \(up to 14 days, or 1,209,600 seconds\)\. When `maxJobDurationInSeconds` is reached, the simulation job status will transition to `Completed`\.
Type: Long
Required: Yes
** [outputLocation](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-outputLocation"></a>
Location for output files generated by the simulation job\.
Type: [OutputLocation](API_OutputLocation.md) object
Required: No
** [robotApplications](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-robotApplications"></a>
The robot application to use in the simulation job\.
Type: Array of [RobotApplicationConfig](API_RobotApplicationConfig.md) objects
Array Members: Fixed number of 1 item\.
Required: No
** [simulationApplications](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-simulationApplications"></a>
The simulation application to use in the simulation job\.
Type: Array of [SimulationApplicationConfig](API_SimulationApplicationConfig.md) objects
Array Members: Fixed number of 1 item\.
Required: No
** [tags](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-tags"></a>
A map that contains tag keys and tag values that are attached to the simulation job\.
Type: String to string map
Key Length Constraints: Minimum length of 1\. Maximum length of 128\.
Key Pattern: `[a-zA-Z0-9 _.\-\/+=:]*`
Value Length Constraints: Minimum length of 0\. Maximum length of 256\.
Value Pattern: `[a-zA-Z0-9 _.\-\/+=:]*`
Required: No
** [vpcConfig](#API_CreateSimulationJob_RequestSyntax) ** <a name="robomaker-CreateSimulationJob-request-vpcConfig"></a>
If your simulation job accesses resources in a VPC, you provide this parameter identifying the list of security group IDs and subnet IDs\. These must belong to the same VPC\. You must provide at least one security group and one subnet ID\.
Type: [VPCConfig](API_VPCConfig.md) object
Required: No
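Taken together, only `iamRole` and `maxJobDurationInSeconds` are required, and each application array is fixed at one item. The Python sketch below assembles such a minimal request body as a plain dict; the ARNs, package, and launch-file names are placeholders, and in practice the request would be sent through an SDK (for example, boto3's `create_simulation_job`) rather than hand-built JSON:

```python
import json

def build_simulation_job_request(iam_role_arn, max_duration_s,
                                 robot_app_arn, package_name, launch_file):
    """Build a CreateSimulationJob request body from the documented
    required fields plus a single robot application config."""
    if not 0 < max_duration_s <= 1_209_600:  # up to 14 days
        raise ValueError("maxJobDurationInSeconds must be 1..1209600")
    return {
        "iamRole": iam_role_arn,                    # required
        "maxJobDurationInSeconds": max_duration_s,  # required
        "failureBehavior": "Fail",                  # optional: Fail | Continue
        "robotApplications": [{                     # array fixed at 1 item
            "application": robot_app_arn,
            "launchConfig": {
                "packageName": package_name,
                "launchFile": launch_file,
            },
        }],
    }

# Placeholder ARNs and names -- substitute real resources before use.
payload = build_simulation_job_request(
    "arn:aws:iam::111122223333:role/RoboMakerSimRole", 3600,
    "arn:aws:robomaker:us-west-2:111122223333:robot-application/demo/1",
    "hello_world_robot", "rotate.launch")
print(json.dumps(payload)[:40])
```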
## Response Syntax<a name="API_CreateSimulationJob_ResponseSyntax"></a>
```
HTTP/1.1 200
Content-type: application/json
{
"[arn](#robomaker-CreateSimulationJob-response-arn)": "string",
"[clientRequestToken](#robomaker-CreateSimulationJob-response-clientRequestToken)": "string",
"[dataSources](#robomaker-CreateSimulationJob-response-dataSources)": [
{
"[name](API_DataSource.md#robomaker-Type-DataSource-name)": "string",
"[s3Bucket](API_DataSource.md#robomaker-Type-DataSource-s3Bucket)": "string",
"[s3Keys](API_DataSource.md#robomaker-Type-DataSource-s3Keys)": [
{
"[etag](API_S3KeyOutput.md#robomaker-Type-S3KeyOutput-etag)": "string",
"[s3Key](API_S3KeyOutput.md#robomaker-Type-S3KeyOutput-s3Key)": "string"
}
]
}
],
"[failureBehavior](#robomaker-CreateSimulationJob-response-failureBehavior)": "string",
"[failureCode](#robomaker-CreateSimulationJob-response-failureCode)": "string",
"[iamRole](#robomaker-CreateSimulationJob-response-iamRole)": "string",
"[lastStartedAt](#robomaker-CreateSimulationJob-response-lastStartedAt)": number,
"[lastUpdatedAt](#robomaker-CreateSimulationJob-response-lastUpdatedAt)": number,
"[loggingConfig](#robomaker-CreateSimulationJob-response-loggingConfig)": {
"[recordAllRosTopics](API_LoggingConfig.md#robomaker-Type-LoggingConfig-recordAllRosTopics)": boolean
},
"[maxJobDurationInSeconds](#robomaker-CreateSimulationJob-response-maxJobDurationInSeconds)": number,
"[outputLocation](#robomaker-CreateSimulationJob-response-outputLocation)": {
"[s3Bucket](API_OutputLocation.md#robomaker-Type-OutputLocation-s3Bucket)": "string",
"[s3Prefix](API_OutputLocation.md#robomaker-Type-OutputLocation-s3Prefix)": "string"
},
"[robotApplications](#robomaker-CreateSimulationJob-response-robotApplications)": [
{
"[application](API_RobotApplicationConfig.md#robomaker-Type-RobotApplicationConfig-application)": "string",
"[applicationVersion](API_RobotApplicationConfig.md#robomaker-Type-RobotApplicationConfig-applicationVersion)": "string",
"[launchConfig](API_RobotApplicationConfig.md#robomaker-Type-RobotApplicationConfig-launchConfig)": {
"[environmentVariables](API_LaunchConfig.md#robomaker-Type-LaunchConfig-environmentVariables)": {
"string" : "string"
},
"[launchFile](API_LaunchConfig.md#robomaker-Type-LaunchConfig-launchFile)": "string",
"[packageName](API_LaunchConfig.md#robomaker-Type-LaunchConfig-packageName)": "string",
"[portForwardingConfig](API_LaunchConfig.md#robomaker-Type-LaunchConfig-portForwardingConfig)": {
"[portMappings](API_PortForwardingConfig.md#robomaker-Type-PortForwardingConfig-portMappings)": [
{
"[applicationPort](API_PortMapping.md#robomaker-Type-PortMapping-applicationPort)": number,
"[enableOnPublicIp](API_PortMapping.md#robomaker-Type-PortMapping-enableOnPublicIp)": boolean,
"[jobPort](API_PortMapping.md#robomaker-Type-PortMapping-jobPort)": number
}
]
}
}
}
],
"[simulationApplications](#robomaker-CreateSimulationJob-response-simulationApplications)": [
{
"[application](API_SimulationApplicationConfig.md#robomaker-Type-SimulationApplicationConfig-application)": "string",
"[applicationVersion](API_SimulationApplicationConfig.md#robomaker-Type-SimulationApplicationConfig-applicationVersion)": "string",
"[launchConfig](API_SimulationApplicationConfig.md#robomaker-Type-SimulationApplicationConfig-launchConfig)": {
"[environmentVariables](API_LaunchConfig.md#robomaker-Type-LaunchConfig-environmentVariables)": {
"string" : "string"
},
"[launchFile](API_LaunchConfig.md#robomaker-Type-LaunchConfig-launchFile)": "string",
"[packageName](API_LaunchConfig.md#robomaker-Type-LaunchConfig-packageName)": "string",
"[portForwardingConfig](API_LaunchConfig.md#robomaker-Type-LaunchConfig-portForwardingConfig)": {
"[portMappings](API_PortForwardingConfig.md#robomaker-Type-PortForwardingConfig-portMappings)": [
{
"[applicationPort](API_PortMapping.md#robomaker-Type-PortMapping-applicationPort)": number,
"[enableOnPublicIp](API_PortMapping.md#robomaker-Type-PortMapping-enableOnPublicIp)": boolean,
"[jobPort](API_PortMapping.md#robomaker-Type-PortMapping-jobPort)": number
}
]
}
}
}
],
"[simulationTimeMillis](#robomaker-CreateSimulationJob-response-simulationTimeMillis)": number,
"[status](#robomaker-CreateSimulationJob-response-status)": "string",
"[tags](#robomaker-CreateSimulationJob-response-tags)": {
"string" : "string"
},
"[vpcConfig](#robomaker-CreateSimulationJob-response-vpcConfig)": {
"[assignPublicIp](API_VPCConfigResponse.md#robomaker-Type-VPCConfigResponse-assignPublicIp)": boolean,
"[securityGroups](API_VPCConfigResponse.md#robomaker-Type-VPCConfigResponse-securityGroups)": [ "string" ],
"[subnets](API_VPCConfigResponse.md#robomaker-Type-VPCConfigResponse-subnets)": [ "string" ],
"[vpcId](API_VPCConfigResponse.md#robomaker-Type-VPCConfigResponse-vpcId)": "string"
}
}
```
## Response Elements<a name="API_CreateSimulationJob_ResponseElements"></a>
If the action is successful, the service sends back an HTTP 200 response\.
The following data is returned in JSON format by the service\.
** [arn](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-arn"></a>
The Amazon Resource Name \(ARN\) of the simulation job\.
Type: String
Length Constraints: Minimum length of 1\. Maximum length of 1224\.
Pattern: `arn:.*`
** [clientRequestToken](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-clientRequestToken"></a>
Unique, case\-sensitive identifier that you provide to ensure the idempotency of the request\.
Type: String
Length Constraints: Minimum length of 1\. Maximum length of 64\.
Pattern: `[a-zA-Z0-9_\-=]*`
** [dataSources](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-dataSources"></a>
The data sources for the simulation job\.
Type: Array of [DataSource](API_DataSource.md) objects
** [failureBehavior](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-failureBehavior"></a>
The failure behavior for the simulation job\.
Type: String
Valid Values:` Fail | Continue`
** [failureCode](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-failureCode"></a>
The failure code of the simulation job if it failed:
+ `InternalServiceError`: Internal service error\.
+ `RobotApplicationCrash`: Robot application exited abnormally\.
+ `SimulationApplicationCrash`: Simulation application exited abnormally\.
+ `BadPermissionsRobotApplication`: Robot application bundle could not be downloaded\.
+ `BadPermissionsSimulationApplication`: Simulation application bundle could not be downloaded\.
+ `BadPermissionsS3Output`: Unable to publish outputs to customer\-provided S3 bucket\.
+ `BadPermissionsCloudwatchLogs`: Unable to publish logs to customer\-provided CloudWatch Logs resource\.
+ `SubnetIpLimitExceeded`: Subnet IP limit exceeded\.
+ `ENILimitExceeded`: ENI limit exceeded\.
+ `BadPermissionsUserCredentials`: Unable to use the Role provided\.
+ `InvalidBundleRobotApplication`: Robot bundle cannot be extracted \(invalid format, bundling error, or other issue\)\.
+ `InvalidBundleSimulationApplication`: Simulation bundle cannot be extracted \(invalid format, bundling error, or other issue\)\.
+ `RobotApplicationVersionMismatchedEtag`: Etag for RobotApplication does not match value during version creation\.
+ `SimulationApplicationVersionMismatchedEtag`: Etag for SimulationApplication does not match value during version creation\.
Type: String
Valid Values:` InternalServiceError | RobotApplicationCrash | SimulationApplicationCrash | BadPermissionsRobotApplication | BadPermissionsSimulationApplication | BadPermissionsS3Object | BadPermissionsS3Output | BadPermissionsCloudwatchLogs | SubnetIpLimitExceeded | ENILimitExceeded | BadPermissionsUserCredentials | InvalidBundleRobotApplication | InvalidBundleSimulationApplication | InvalidS3Resource | LimitExceeded | MismatchedEtag | RobotApplicationVersionMismatchedEtag | SimulationApplicationVersionMismatchedEtag | ResourceNotFound | RequestThrottled | BatchTimedOut | BatchCanceled | InvalidInput | WrongRegionS3Bucket | WrongRegionS3Output | WrongRegionRobotApplication | WrongRegionSimulationApplication`
** [iamRole](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-iamRole"></a>
The IAM role that allows the simulation job to call the AWS APIs that are specified in its associated policies on your behalf\.
Type: String
Length Constraints: Minimum length of 1\. Maximum length of 255\.
Pattern: `arn:aws:iam::\w+:role/.*`
** [lastStartedAt](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-lastStartedAt"></a>
The time, in milliseconds since the epoch, when the simulation job was last started\.
Type: Timestamp
** [lastUpdatedAt](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-lastUpdatedAt"></a>
The time, in milliseconds since the epoch, when the simulation job was last updated\.
Type: Timestamp
** [loggingConfig](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-loggingConfig"></a>
The logging configuration\.
Type: [LoggingConfig](API_LoggingConfig.md) object
** [maxJobDurationInSeconds](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-maxJobDurationInSeconds"></a>
The maximum simulation job duration in seconds\.
Type: Long
** [outputLocation](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-outputLocation"></a>
Simulation job output files location\.
Type: [OutputLocation](API_OutputLocation.md) object
** [robotApplications](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-robotApplications"></a>
The robot application used by the simulation job\.
Type: Array of [RobotApplicationConfig](API_RobotApplicationConfig.md) objects
Array Members: Fixed number of 1 item\.
** [simulationApplications](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-simulationApplications"></a>
The simulation application used by the simulation job\.
Type: Array of [SimulationApplicationConfig](API_SimulationApplicationConfig.md) objects
Array Members: Fixed number of 1 item\.
** [simulationTimeMillis](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-simulationTimeMillis"></a>
The simulation job execution duration in milliseconds\.
Type: Long
** [status](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-status"></a>
The status of the simulation job\.
Type: String
Valid Values:` Pending | Preparing | Running | Restarting | Completed | Failed | RunningFailed | Terminating | Terminated | Canceled`
** [tags](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-tags"></a>
The list of all tags added to the simulation job\.
Type: String to string map
Key Length Constraints: Minimum length of 1\. Maximum length of 128\.
Key Pattern: `[a-zA-Z0-9 _.\-\/+=:]*`
Value Length Constraints: Minimum length of 0\. Maximum length of 256\.
Value Pattern: `[a-zA-Z0-9 _.\-\/+=:]*`
** [vpcConfig](#API_CreateSimulationJob_ResponseSyntax) ** <a name="robomaker-CreateSimulationJob-response-vpcConfig"></a>
Information about the VPC configuration\.
Type: [VPCConfigResponse](API_VPCConfigResponse.md) object
## Errors<a name="API_CreateSimulationJob_Errors"></a>
For information about the errors that are common to all actions, see [Common Errors](CommonErrors.md)\.
**IdempotentParameterMismatchException**
The request uses the same client token as a previous, but non\-identical request\. Do not reuse a client token with different requests, unless the requests are identical\.
HTTP Status Code: 400
**InternalServerException**
AWS RoboMaker experienced a service issue\. Try your call again\.
HTTP Status Code: 500
**InvalidParameterException**
A parameter specified in a request is not valid, is unsupported, or cannot be used\. The returned message provides an explanation of the error value\.
HTTP Status Code: 400
**LimitExceededException**
The requested resource exceeds the maximum number allowed, or the number of concurrent stream requests exceeds the maximum number allowed\.
HTTP Status Code: 400
**ResourceNotFoundException**
The specified resource does not exist\.
HTTP Status Code: 400
**ServiceUnavailableException**
The request has failed due to a temporary failure of the server\.
HTTP Status Code: 503
**ThrottlingException**
AWS RoboMaker is temporarily unable to process the request\. Try your call again\.
HTTP Status Code: 400
## See Also<a name="API_CreateSimulationJob_SeeAlso"></a>
For more information about using this API in one of the language\-specific AWS SDKs, see the following:
+ [AWS Command Line Interface](https://docs.aws.amazon.com/goto/aws-cli/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for \.NET](https://docs.aws.amazon.com/goto/DotNetSDKV3/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for C\+\+](https://docs.aws.amazon.com/goto/SdkForCpp/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for Go](https://docs.aws.amazon.com/goto/SdkForGoV1/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for Java](https://docs.aws.amazon.com/goto/SdkForJava/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for JavaScript](https://docs.aws.amazon.com/goto/AWSJavaScriptSDK/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for PHP V3](https://docs.aws.amazon.com/goto/SdkForPHPV3/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for Python](https://docs.aws.amazon.com/goto/boto3/robomaker-2018-06-29/CreateSimulationJob)
+ [AWS SDK for Ruby V2](https://docs.aws.amazon.com/goto/SdkForRubyV2/robomaker-2018-06-29/CreateSimulationJob) | 57.764286 | 718 | 0.742756 | yue_Hant | 0.304114 |
877244deccc5e5bfefe85ca85e1c413bb530567f | 117 | md | Markdown | README.md | badvassal/wllib | 61eec8ff3f3c3a878d8cc834e622ba3da8f404de | [
"MIT"
] | null | null | null | README.md | badvassal/wllib | 61eec8ff3f3c3a878d8cc834e622ba3da8f404de | [
"MIT"
] | null | null | null | README.md | badvassal/wllib | 61eec8ff3f3c3a878d8cc834e622ba3da8f404de | [
"MIT"
] | null | null | null | # wllib
Library for parsing and serializing Wasteland MSQ blocks (see <https://wasteland.gamepedia.com/MSQ_Block>).
| 29.25 | 107 | 0.786325 | eng_Latn | 0.297765 |
8772a9bb95b45efddfb8560202ae754c9505e49a | 706 | md | Markdown | README.md | alexisrouan/github-ribbons-css | 79d88d8302e1120cb3ea494f22f834a02b2a05aa | [
"MIT"
] | 37 | 2015-01-21T05:21:17.000Z | 2021-07-24T08:31:41.000Z | README.md | alexisrouan/github-ribbons-css | 79d88d8302e1120cb3ea494f22f834a02b2a05aa | [
"MIT"
] | 1 | 2015-05-20T16:46:44.000Z | 2015-05-20T16:46:44.000Z | README.md | alexisrouan/github-ribbons-css | 79d88d8302e1120cb3ea494f22f834a02b2a05aa | [
"MIT"
] | 15 | 2015-03-30T18:20:16.000Z | 2020-01-04T18:55:36.000Z | # GitHub Ribbons in CSS
I know there are a lot of different CSS implementations of these ribbons. But anyway, here is mine:

Take a look at the demo [here](http://petethepig.github.io/github-ribbons-css)
## How to use it?
    <div class="ribbon left red">
      <a href="https://github.com/username/repo">Fork me on GitHub</a>
    </div>
Enjoy!
## Customizing
Pick a color from: **black**, **red**, **blue**, **green**, **orange**, **purple**, **grey**, and **white**, and a position: **left** or **right**.
## License
MIT
[](https://github.com/igrigorik/ga-beacon)
| 22.0625 | 124 | 0.667139 | eng_Latn | 0.412283 |
87732efb72fe1c9a6ad237684bb5f90fc72a23dc | 1,663 | md | Markdown | windows-driver-docs-pr/display/triple-buffering.md | AnLazyOtter/windows-driver-docs.zh-cn | bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/display/triple-buffering.md | AnLazyOtter/windows-driver-docs.zh-cn | bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/display/triple-buffering.md | AnLazyOtter/windows-driver-docs.zh-cn | bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Triple buffering
description: Triple buffering
ms.assetid: 4651f9d2-09fb-4006-8243-403f564414f5
keywords:
- drawing page flipping WDK DirectDraw, triple buffering
- DirectDraw flipping WDK Windows 2000 display, triple buffering
- page flipping WDK DirectDraw, triple buffering
- flipping WDK DirectDraw, triple buffering
- triple buffering WDK DirectDraw
- buffers WDK DirectDraw
- displays WDK DirectDraw, flipping
ms.date: 04/20/2017
ms.localizationpriority: medium
ms.openlocfilehash: 408b3f6aeeca0dda36486f526dcb6ce56c83bf15
ms.sourcegitcommit: 0cc5051945559a242d941a6f2799d161d8eba2a7
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "63389809"
---
# <a name="triple-buffering"></a>Triple buffering
## <span id="ddk_triple_buffering_gg"></span><span id="DDK_TRIPLE_BUFFERING_GG"></span>
增加缓冲区可以容纳主面数会增加显示性能。 最好具有至少三个 flippable 应用层协议 (某些游戏使用五个或多个)。 当有只有两个图面并翻页时,显示延迟,直到监视器的垂直回退操作已完成。 延迟是必须以确保在完成之前,后台缓冲区未编写上显示。 使用三重缓冲,第三个面始终是可写,因为它是后台缓冲区,并可用于立即 (如在下图中所示) 上绘制。 在不使用子画面内存游戏中,使用三重缓冲的 3D 渲染为 20 到 30%快于双缓冲。

在上图中的翻转结构将与中的相同[Tearing](tearing.md)、 唯一现在都在使用三个缓冲区。 一个缓冲区几乎始终是可写 (因为它不涉及翻转) 使该驱动程序无需等待显示扫描,以允许后台缓冲区,以再次写入之前完成。
下面是在三重缓冲系统中,使用前图所示的标签中翻转和平面闪的简要说明。 该示例首先图面上的像素内存显示 11。 这是主表面指向前台缓冲区 (**fpVidMem**示例代码中提供与 Microsoft Windows 驱动程序开发工具包\[DDK\])。 在某些时候,它将成为到像素内存 22 面 blt 希望。 因为**fpVidMem**指向 11 (而不是 22) 和 flip 状态开始的图面上为 false (不翻转发生请求的图面上),blt 可以继续执行。 驱动程序锁定在图面,向其中写入,然后将其解锁。 若要显示该图面,必须进行翻转。
DirectDraw 前台缓冲区对象现在可以更改**fpVidMem** (显示内存指针) 进行 22 面主图面。 没有翻转处于挂起状态,因为交换显示指针 (参见底部上图的下半部分),和翻转的状态设置为 **,则返回 TRUE**。 前台缓冲区现在指向图面上的像素内存 22、 后台缓冲区指向图面上的像素内存 33 和图面上的像素内存 11 (旧的主面) 的第三个缓冲区对象点。 与使用双缓冲 DirectDraw 是免费的这一次写入到后台缓冲区。 换而言之,DirectDraw 可以写入图面上的像素内存 33,因为没有翻转处于挂起状态。 此循环的翻转过程将无休止地继续提供流畅的动画和更快地玩游戏和视频播放的应用程序都使用 DirectDraw。
| 36.152174 | 325 | 0.802766 | yue_Hant | 0.483511 |
87734a9e52d4a3b93fa42bc57951a458103adb04 | 1,402 | md | Markdown | README.md | wangrunji0408/greenthread-future-rs | bec53bb5b8f8b2b94ac95ee485f37ec51fe0e4c8 | [
"MIT"
] | 2 | 2020-02-02T18:46:30.000Z | 2020-04-27T09:24:53.000Z | README.md | wangrunji0408/greenthread-future-rs | bec53bb5b8f8b2b94ac95ee485f37ec51fe0e4c8 | [
"MIT"
] | null | null | null | README.md | wangrunji0408/greenthread-future-rs | bec53bb5b8f8b2b94ac95ee485f37ec51fe0e4c8 | [
"MIT"
] | null | null | null | # greenthread-future-rs
[](https://crates.io/crates/greenthread-future)
[](https://docs.rs/greenthread-future)
[](https://github.com/wangrunji0408/greenthread-future-rs/actions)
[](https://coveralls.io/github/wangrunji0408/greenthread-future-rs?branch=master)
Convert closures into futures based on greenthread **on bare-metal (no_std + no_alloc)**.
In a word, this is a `#![no_std]` version of [Futurify](https://github.com/robertohuertasm/futurify).
I'm exploring to use it to implement bare-metal threading.
## Example
TODO.
Now just take a unit test as an example:
```rust
#[tokio::test]
async fn test() {
let h1 = tokio::spawn(ThreadFuture::from(|| {
println!("1.1");
yield_now();
println!("1.2");
1u32
}));
let h2 = tokio::spawn(ThreadFuture::from(|| {
println!("2.1");
yield_now();
println!("2.2");
2u32
}));
println!("join 1 => {}", h1.await.unwrap());
println!("join 2 => {}", h2.await.unwrap());
}
```
Output:
```
1.1
2.1
1.2
2.2
join 1 => 1
join 2 => 2
```
## Internal

| 25.962963 | 194 | 0.68117 | eng_Latn | 0.192738 |
877371365ec3d59c22c7f1e9788ac76433b9f721 | 870 | md | Markdown | docs/assembler/masm/operator-logical-not-masm-run-time.md | Chrissavi/cpp-docs.de-de | 6cc90f896a0a1baabb898a00f813f77f058bb7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/operator-logical-not-masm-run-time.md | Chrissavi/cpp-docs.de-de | 6cc90f896a0a1baabb898a00f813f77f058bb7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/operator-logical-not-masm-run-time.md | Chrissavi/cpp-docs.de-de | 6cc90f896a0a1baabb898a00f813f77f058bb7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: operator ! (MASM-Laufzeit)
ms.date: 12/17/2019
f1_keywords:
- operator !
helpviewer_keywords:
- operator !, syntax
- '! operator'
ms.assetid: e94f737a-8251-4a3d-95ec-e95c35689b37
ms.openlocfilehash: b85b90e82f17dd8a583867c0c69e8e9e2cc08d47
ms.sourcegitcommit: 0781c69b22797c41630601a176b9ea541be4f2a3
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 12/20/2019
ms.locfileid: "75317876"
---
# <a name="operator--masm-run-time"></a>operator ! (MASM-Laufzeit)
Logische Negation. Wird nur innerhalb von verwendet [. Wenn](dot-if.md), [. Während](dot-while.md)oder [. Wiederholen](dot-repeat.md) Sie Blöcke und werden zur Laufzeit und nicht zur assemblyzeit ausgewertet.
## <a name="syntax"></a>Syntax
> **!** *expression*
## <a name="see-also"></a>Siehe auch
[Operatorverweis\](operators-reference.md)
[MASM-BNF-Grammatik](masm-bnf-grammar.md)
| 30 | 208 | 0.752874 | deu_Latn | 0.480877 |
8773850d630699bb6bcd02de4b165b282c0e8736 | 231 | md | Markdown | includes/tlasharptla-3d-md.md | soelax/dotnet-api-docs.de-de | 3d9c3fe439c7c55adf5f4b4470fe60d580b4c952 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/tlasharptla-3d-md.md | soelax/dotnet-api-docs.de-de | 3d9c3fe439c7c55adf5f4b4470fe60d580b4c952 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/tlasharptla-3d-md.md | soelax/dotnet-api-docs.de-de | 3d9c3fe439c7c55adf5f4b4470fe60d580b4c952 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.openlocfilehash: b1db086fac78b00f4d17a8b49d559233957f9c79
ms.sourcegitcommit: 1bb00d2f4343e73ae8d58668f02297a3cf10a4c1
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 06/15/2019
ms.locfileid: "63869750"
---
3D | 25.666667 | 60 | 0.839827 | yue_Hant | 0.362277 |
8773909ef71a6e2ce305fe5768b4aa0f9b1b6bee | 58 | md | Markdown | README.md | ricdtaveira/disciplina-poo | ec97df1997dbdba37aedf6f5ac13b922f705ac89 | [
"Apache-2.0"
] | null | null | null | README.md | ricdtaveira/disciplina-poo | ec97df1997dbdba37aedf6f5ac13b922f705ac89 | [
"Apache-2.0"
] | null | null | null | README.md | ricdtaveira/disciplina-poo | ec97df1997dbdba37aedf6f5ac13b922f705ac89 | [
"Apache-2.0"
] | null | null | null | # disciplina-poo
Repository for the Object-Oriented Programming (POO) course, P7 INFO.
| 19.333333 | 40 | 0.810345 | por_Latn | 0.98254 |
87740a2fd91394b83f13d3cd5ea0e5f49a811fd1 | 732 | md | Markdown | .github/PULL_REQUEST_TEMPLATE.md | infastra/kimai2 | b6031a190bac6038e97f4be770f47f991c569696 | [
"MIT"
] | null | null | null | .github/PULL_REQUEST_TEMPLATE.md | infastra/kimai2 | b6031a190bac6038e97f4be770f47f991c569696 | [
"MIT"
] | null | null | null | .github/PULL_REQUEST_TEMPLATE.md | infastra/kimai2 | b6031a190bac6038e97f4be770f47f991c569696 | [
"MIT"
] | null | null | null | ## Description
A clear and concise description of what this pull request adds or changes.
## Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
## Checklist
- [ ] I ran `bin/console kimai:codestyle --fix` to verify the correct code style
- [ ] I have updated the [documentation](https://github.com/kimai/www.kimai.org/tree/master/_documentation) accordingly
- [ ] I have added tests to cover my changes
- [ ] I agree that this code is used in Kimai and will be published under the [MIT license](https://github.com/kevinpapst/kimai2/blob/master/LICENSE)
| 52.285714 | 149 | 0.75 | eng_Latn | 0.991613 |
87748bb839178ff6214a7cef1add285ff439cda2 | 6,325 | md | Markdown | README.md | chong-z/tree-ensemble-attack | 45bd0723b4593d22c8ba5ee102f61bff0dd2b693 | [
"MIT"
] | 19 | 2020-10-17T01:41:31.000Z | 2021-10-05T15:16:58.000Z | README.md | Ka1Wang-private/tree-ensemble-attack | 45bd0723b4593d22c8ba5ee102f61bff0dd2b693 | [
"MIT"
] | null | null | null | README.md | Ka1Wang-private/tree-ensemble-attack | 45bd0723b4593d22c8ba5ee102f61bff0dd2b693 | [
"MIT"
] | 3 | 2020-11-10T16:11:51.000Z | 2021-06-16T15:26:03.000Z | # An Efficient Adversarial Attack for Tree Ensembles
We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs). Since these models are non-continuous step functions and gradient does not exist, most existing efficient adversarial attacks are not applicable. In our work, we transform the attack problem into a discrete search problem specially designed for tree ensembles, where the goal is to find a valid "leaf tuple" that leads to mis-classification while having the shortest distance to the original input. With this formulation, we show that a simple yet effective greedy algorithm can be applied to iteratively optimize the adversarial example by moving the leaf tuple to its neighborhood within hamming distance 1. More details can be found in our paper:
_Chong Zhang, Huan Zhang, Cho-Jui Hsieh_, "An Efficient Adversarial Attack for Tree Ensembles", NeurIPS 2020 [[poster session]](https://neurips.cc/virtual/2020/protected/poster_ba3e9b6a519cfddc560b5d53210df1bd.html)
<img src="https://github.com/chong-z/tree-ensemble-attack/raw/main/img/paper-image-large.png" alt="Thumbnail of the paper" width="500px">
## LT-Attack Setup
### Installation on Ubuntu 20.04
Our code requires `libboost>=1.66` for `thread_pool`:
```
sudo apt install libboost-all-dev
```
Clone the repo and compile:
```
git clone git@github.com:chong-z/tree-ensemble-attack.git
cd tree-ensemble-attack
make
```
### Reproduce Results in the Paper
Attack the standard (natural) GBDT model (https://github.com/chenhongge/treeVerification) for the breast_cancer dataset. Construct adversarial examples on L-2 norm perturbation, using 20 threads on 500 test examples:
```
wget http://download.huan-zhang.com/models/tree-verify/tree_verification_models.tar.bz2
tar jxvf tree_verification_models.tar.bz2
./lt_attack configs/breast_cancer_unrobust_20x500_norm2_lt-attack.json
```
Attack the standard (natural) RF model for the breast_cancer dataset. Construct adversarial examples on L-2 norm perturbation, using 20 threads on 100 test examples:
```
./lt_attack configs/breast_cancer_unrobust-rf_20x100_norm2_lt-attack.json
```
### Sample Output
```
//...
===== Attack result for example 500/500 Norm(2)=0.235702 =====
All Best Norms: Norm(-1)=0.166667 Norm(1)=0.333579 Norm(2)=0.235702.
Average Norms: Norm(-1)=0.235932 Norm(1)=0.369484 Norm(2)=0.282763.
Best Points for example at line 500
1 1:0.07214075340 2:0.11111100000 4:0.16666650810 6:0.11111100000 7:0.16666650810
Results for config:configs/breast_cancer_unrobust_20x500_norm2_lt-attack.json
Average Norms: Norm(-1)=0.235932 Norm(1)=0.369484 Norm(2)=0.282763
--- Timing Metrics ---
|collect_histogram| disabled
## Actual Examples Tested:496
## Time per point: 0.00141016
```
## Configuration File Parameters
We provide sample config files in `configs/` which use the following parameters:
- `search_mode`: The attack method to use. Choose from `'lt-attack'` (ours), `'naive-leaf'`, `'naive-feature'`.
- `norm_type`: The objective norm order. Supports 1, 2, and -1 (for L-Inf).
- `num_point`: Number of test examples to attack. We use 500 test examples in most of our experiments.
- `num_threads`: CPU threads per task. We use 20 physical threads per task in most of our experiments.
- `num_attack_per_point`: Number of initial adversarial examples. Usually set to the same as `num_threads`.
- `enable_early_return`: Use early return to speed up the search in `Neighbor_1(C')`. Usually set to `true`.
Additional dataset related parameters:
- `model`: Path to the JSON file dumped from XGBoost models using `bst.dump_model('bar.json', dump_format='json')`. See https://xgboost.readthedocs.io/en/latest/python/python_intro.html#training
- `inputs`: Path to the test example file in LIBSVM format.
- `num_classes`: Number of classes in the dataset.
- `num_features`: Number of features in the dataset.
- `feature_start`: The index of the first feature, could be 0 or 1 on different datasets.
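For illustration, a config combining these parameters might look like the following. The paths and dataset-specific values below are placeholder assumptions, not the shipped settings — see the files in `configs/` for real examples:

```json
{
  "search_mode": "lt-attack",
  "norm_type": 2,
  "num_point": 500,
  "num_threads": 20,
  "num_attack_per_point": 20,
  "enable_early_return": true,
  "model": "path/to/xgboost_dump.json",
  "inputs": "path/to/test_examples.libsvm",
  "num_classes": 2,
  "num_features": 10,
  "feature_start": 1
}
```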
## Run Baselines
### SignOPT, HSJA, and Cube
```
pip3 install xgboost==1.0.2 sklearn
# Choose |'search_mode'| from 'signopt', 'hsja', and 'cube'. We provide a few sample configs:
python3 baselines/test_attack_cpu.py --config_path=configs/breast_cancer_unrobust_20x500_norm2_cube.json
```
### MILP
```
# Use |'search_mode': 'milp'|. Requires the Gurobi Solver installed.
python3 baselines/xgbKantchelianAttack.py --config_path=configs/breast_cancer_unrobust_20x500_norm2_milp.json
```
### RBA-Appr
```
# RBA-Appr requires training data which can be downloaded from https://github.com/chenhongge/RobustTrees.
# [ICML 2019] Hongge Chen, Huan Zhang, Duane Boning, and Cho-Jui Hsieh, Robust Decision Trees Against Adversarial Examples
# The author published their datasets in the URL below.
mkdir raw_data
cd raw_data
wget https://raw.githubusercontent.com/chenhongge/RobustTrees/master/data/download_data.sh
sh download_data.sh
cd ..
# Use |'search_mode': 'region'|, and add |"train_data": "raw_data/TRAIN_DATA_NAME"| to the corresponding config file.
# We provide a sample config:
./lt_attack configs/breast_cancer_unrobust_20x500_norm2_region.json
```
## Known Issues
The JSON dump of XGBoost models offers precision up to 8 digits; however, the difference between certain feature
split thresholds may be smaller than 1e-8 in the original XGBoost model. For this reason, the model created from
the JSON dump may produce a different prediction on certain examples than the original XGBoost model, and we
manually verify that each produced adversarial example is valid under the JSON dump.
## Bibtex
```
@inproceedings{zhang2020efficient,
author = {Zhang, Chong and Zhang, Huan and Hsieh, Cho-Jui},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {16165--16176},
publisher = {Curran Associates, Inc.},
title = {An Efficient Adversarial Attack for Tree Ensembles},
url = {https://proceedings.neurips.cc/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-Paper.pdf},
volume = {33},
year = {2020}
}
```
## Credits
1. `nlohmann/json*`: https://github.com/nlohmann/json.
2. `.clang-format`: https://cs.chromium.org/chromium/src/.clang-format.
3. See paper for the full list of references.
| 48.653846 | 802 | 0.77249 | eng_Latn | 0.902193 |
877492199c417e9cc724c62ef69b13575d3ad45d | 1,969 | md | Markdown | docs/nodes/credentials/AWS/README.md | pashkatrick/n8n-docs | daf4f6f7123970549d33c70a36b4c9ef1d9c8734 | [
"Apache-2.0"
] | null | null | null | docs/nodes/credentials/AWS/README.md | pashkatrick/n8n-docs | daf4f6f7123970549d33c70a36b4c9ef1d9c8734 | [
"Apache-2.0"
] | null | null | null | docs/nodes/credentials/AWS/README.md | pashkatrick/n8n-docs | daf4f6f7123970549d33c70a36b4c9ef1d9c8734 | [
"Apache-2.0"
] | null | null | null | ---
permalink: /credentials/aws
description: Learn to configure credentials for the AWS nodes in n8n
---
# AWS
You can use these credentials to authenticate the following nodes with AWS.
- [AWS Comprehend](../../nodes-library/nodes/AWSComprehend/README.md)
- [AWS Lambda](../../nodes-library/nodes/AWSLambda/README.md)
- [AWS Rekognition](../../nodes-library/nodes/AWSRekognition/README.md)
- [AWS S3](../../nodes-library/nodes/AWSS3/README.md)
- [AWS SES](../../nodes-library/nodes/AWSSES/README.md)
- [AWS SNS](../../nodes-library/nodes/AWSSNS/README.md)
- [AWS SNS Trigger](../../nodes-library/trigger-nodes/AWSSNSTrigger/README.md)
## Prerequisites
Create an [AWS](https://aws.amazon.com/) account.
## Using Access Token
1. Open your [AWS Management Console](https://console.aws.amazon.com).
2. Click on your name on the top right and select 'My Security Credentials' from the dropdown.
3. Click on the ***Create New Access Key*** button, under the ***Access keys (access key ID and secret access key)*** section
4. Click on the ***Show Access Key*** button.
5. Copy the displayed Access Key ID.
6. Enter the name for your credentials in the ***Credentials Name*** field in the 'AWS' credentials in n8n.
7. Paste the Access Key ID in the ***Access Key ID*** field in the 'AWS' credentials in n8n.
8. Copy the secret access key from your AWS console.
9. Paste the secret access key in the ***Secret Access Key*** field in the 'AWS' credentials in n8n.
10. Click the ***Create*** button to save your credentials in n8n.
**Note:** If you're running your AWS instance in a different region, please update the ***Region*** field accordingly.
The following video demonstrates the steps mentioned above.
<div class="video-container">
<iframe width="840" height="472.5" src="https://www.youtube.com/embed/zJgHOSSwC4A" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div> | 48.02439 | 221 | 0.732351 | eng_Latn | 0.74142 |
87755bb99e6e5e893ab41a8b2a009f3c2730c94c | 788 | md | Markdown | readme.md | CSE321-Fall2021/cse321-portfolio-zachary0816 | 5eb040b73d5c3bccfcd04c65d997099d0acee7b6 | [
"Apache-2.0"
] | null | null | null | readme.md | CSE321-Fall2021/cse321-portfolio-zachary0816 | 5eb040b73d5c3bccfcd04c65d997099d0acee7b6 | [
"Apache-2.0"
] | null | null | null | readme.md | CSE321-Fall2021/cse321-portfolio-zachary0816 | 5eb040b73d5c3bccfcd04c65d997099d0acee7b6 | [
"Apache-2.0"
] | null | null | null | About:
This is a repository for the class CSE321, Real-Time Embedded Operating Systems.
It will contain all projects for the class, and each will have its own folder.
Upon completion, it will act as a portfolio of various interactions with the Nucleo.
Project 1 contains a template for main files and low-quality code that has been corrected.
Project 2 contains an implementation of a countdown timer that takes input from a keypad and outputs to an LCD.
Project 3 contains an implementation of an RTOS system that provides vibrational feedback when near an object to help the visually impaired.
It consists of a vibration motor and an infrared sensor that detects when the system is near an object.
This can, in theory, allow a visually impaired person to detect whether they are near an object.
| 60.615385 | 140 | 0.80203 | eng_Latn | 0.999968 |
8775dd6ab9bcec247ab9e57c59c5e76b3320587d | 298 | md | Markdown | README.md | zigzig731/Progtech2018_vedes | a0311c34089cddf00cf45ba6163c1725ced4219e | [
"BSD-3-Clause"
] | null | null | null | README.md | zigzig731/Progtech2018_vedes | a0311c34089cddf00cf45ba6163c1725ced4219e | [
"BSD-3-Clause"
] | null | null | null | README.md | zigzig731/Progtech2018_vedes | a0311c34089cddf00cf45ba6163c1725ced4219e | [
"BSD-3-Clause"
] | null | null | null | # Progtech2018
## Tower Defense 2018 with OpenGL support
### Tower defense is a subgenre of strategy video games in which the goal is to defend the player's territories or possessions by obstructing enemy attackers, usually by placing defensive structures on or along their path of attack.
| 59.6 | 239 | 0.805369 | eng_Latn | 0.99985 |
87766368462becbe93d0f0f98af7d69e3de2e485 | 2,653 | md | Markdown | README.md | kusnier/PlantUMLDesignPatterns | 1ab12bdd890c95f10f93f23ca05ebde3ac0e4081 | [
"MIT"
] | null | null | null | README.md | kusnier/PlantUMLDesignPatterns | 1ab12bdd890c95f10f93f23ca05ebde3ac0e4081 | [
"MIT"
] | null | null | null | README.md | kusnier/PlantUMLDesignPatterns | 1ab12bdd890c95f10f93f23ca05ebde3ac0e4081 | [
"MIT"
] | null | null | null | # Plant UML Design Patterns
[PlantUML](http://plantuml.com/index) code for some Design Patterns.
The syntax can be looked up on [PlantUML's class diagram documentation](http://plantuml.com/class-diagram).
## Example
Code | Diagram
--- | ---
<img alt="Abstract factory code" src="https://user-images.githubusercontent.com/9216979/54891016-3b6f4480-4eac-11e9-94ae-111b58f0afcb.png" width="350"> | <img alt="Abstract factory diagram" src="https://user-images.githubusercontent.com/9216979/54890891-b84dee80-4eab-11e9-9cca-7318506eb934.png" width="400">
## Included patterns
| Creational | Structural | Behavioral |
| ---------------- | ---------- | ----------------------- |
| Abstract factory | Adapter | Chain of Responsibility |
| Builder | Bridge | Command |
| Factory method | Composite | Interpreter |
| Prototype | Decorator | Iterator |
| Singleton | Facade | Mediator |
| | Flyweight | Memento |
| | Proxy | Observer |
| | | State |
| | | Strategy |
| | | Template method |
| | | Visitor |
## Running
If you do not intend to run locally, please have a look at the alternatives found in [this overview](http://plantuml.com/running).
### Prerequisites
To run locally, the PlantUML jar (from [PlantUML's download site](http://plantuml.com/download)) is needed.
As described on the [Getting started](http://plantuml.com/starting) site, you will also need both [Java](https://www.java.com/en/download/) and [Graphviz](https://www.graphviz.org/) installed on your machine.
### Creating the diagrams
The easiest way is to just run the included python script ([run.py](run.py)).
It checks for changes in the diagram files and only generates the new/changed ones.
The PlantUML jar should be in the same directory.
```
python run.py
```
The generated diagrams are stored in the *output* folder.
To run PlantUML directly from the command line for one specific diagram, you can execute:
```
java -jar plantuml.jar diagram.plantuml
```
## Built With
* [PlantUML](http://plantuml.com/)
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.
## Acknowledgments
* The pattern descriptions and overall class diagram arrangements are taken from a cheat sheet from [Jason Mcdonalds blog](http://www.mcdonaldland.info/2007/11/28/40/)
| 39.597015 | 308 | 0.636638 | eng_Latn | 0.906625 |
8776b2a19252ec2230c2d95e91d4570f7bc9472a | 2,040 | md | Markdown | dynamicsax2012-technet/bra-working-with-fiscal-books-for-incoming-and-outgoing-transactions.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/bra-working-with-fiscal-books-for-incoming-and-outgoing-transactions.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/bra-working-with-fiscal-books-for-incoming-and-outgoing-transactions.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: (BRA) Working with fiscal books for incoming and outgoing transactions
TOCTitle: (BRA) Working with fiscal books for incoming and outgoing transactions
ms:assetid: f7170906-811d-44b6-a6f5-c05434208d9f
ms:mtpsurl: https://technet.microsoft.com/en-us/library/Dn305892(v=AX.60)
ms:contentKeyID: 54912989
ms.date: 04/18/2014
mtps_version: v=AX.60
audience: Application User
ms.search.region: Brazil
---
# (BRA) Working with fiscal books for incoming and outgoing transactions
_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2_
The topics in this section describe how to create and prepare tax assessments for sales and purchases. They also describe how to pay, declare, and adjust taxes, and how to prepare monthly reports for tax authorities.
[(BRA) Create a new booking period](bra-create-a-new-booking-period.md)
[(BRA) Assess, pay, declare, and adjust ICMS and ICMS-ST taxes](bra-assess-pay-declare-and-adjust-icms-and-icms-st-taxes.md)
[(BRA) Assess, pay, declare, and adjust IPI taxes](bra-assess-pay-declare-and-adjust-ipi-taxes.md)
[(BRA) Generate the SPED fiscal export file for a month](bra-generate-the-sped-fiscal-export-file-for-a-month.md)
[(BRA) Fiscal documents (form)](bra-fiscal-documents-form.md)
[(BRA) Manage the fiscal books integration](bra-manage-the-fiscal-books-integration.md)
[(BRA) Generate the Sintegra tax statement](bra-generate-the-sintegra-tax-statement.md)
[(BRA) Generate and validate the SPED ECD statement](bra-generate-and-validate-the-sped-ecd-statement.md)
[(BRA) Generate the GIA tax file for São Paulo](bra-generate-the-gia-tax-file-for-sao-paulo.md)
[(BRA) Generate the GIA ST tax file](bra-generate-the-gia-st-tax-file.md)
[(BRA) Assess, pay, declare, and adjust ISS taxes](bra-assess-pay-declare-and-adjust-iss-taxes.md)
[(BRA) Enter and post fiscal books adjustments, benefits, and incentives](bra-enter-and-post-fiscal-books-adjustments-benefits-and-incentives.md)
[(BRA) All non fiscal operations](bra-all-non-fiscal-operations.md)
| 41.632653 | 216 | 0.770098 | eng_Latn | 0.698039 |
877803376802498ad3d2989f719dd12295e62bdc | 13,347 | md | Markdown | _posts/2017-11-15-rxjs-notes.md | YingLiu4203/yingliu4203.github.io | 2e5404eb18e72ec8e77eefb8e6575cad13c17234 | [
"MIT"
] | 1 | 2021-01-25T17:44:39.000Z | 2021-01-25T17:44:39.000Z | _posts/2017-11-15-rxjs-notes.md | YingLiu4203/yingliu4203.github.io | 2e5404eb18e72ec8e77eefb8e6575cad13c17234 | [
"MIT"
] | null | null | null | _posts/2017-11-15-rxjs-notes.md | YingLiu4203/yingliu4203.github.io | 2e5404eb18e72ec8e77eefb8e6575cad13c17234 | [
"MIT"
] | 1 | 2016-09-14T16:34:01.000Z | 2016-09-14T16:34:01.000Z | ---
layout: post
title: RxJS Notes
categories:
- Notes
tags:
- javascript, rxjs
---
# RxJS Notes
These notes are based on the book [RxJS in Action](https://www.manning.com/books/rxjs-in-action).
## 1 A New Async Paradigm
The existing sync loops, conditional statements, and exception-handling strategies are not async-aware. They are oblivious to the wait time or latency between operations. Nested calls are not only hard to understand but also bring closures with them. Canceling a long-running operation is hard, if not impossible. Throttling is missing. Composing nested async flows is difficult.
RxJS is an API for async programming with observable streams. A stream is a sequence of events over time. Everything is a stream in RxJS. RxJS uses the observer design pattern, which involves an object (the subject) that maintains a list of subscribers (each an observer). RxJS adds features such as a completion signal, lazy initialization, cancellation, and resource management and disposal.
RxJS abstracts over time under the same programming model regardless of the source. It has the following components:
* Producer: a source of data, also called an observable, that is in charge of pushing notifications -- a behavior called fire-and-forget.
* Consumer: an observer that processes data. A stream of data travels from the producer to the consumer.
* Data pipeline: operators that process the data as it passes from the producer to the subscriber.
* Time: there is always a time factor in a data stream. Time passes as data flows from the producer to the subscriber, with a pipeline between them.
In RxJS, data is not stored in variables. It flows through the streams. RxJS follows a declarative design inspired by functional programming. Operations can be chained. Streams can be composed. RxJS combines ideas from the observer pattern, the iterator pattern, and functional programming.
There are four types of data sources:
1. Single-value, synchronous: use simple sync operation to process the data. An observalbe wrapper is only used when they combine with other streams. Use `Rx.Observable.of(value)` to wrap it.
1. Multi-value, synchronous: better processed by a pull-based iterator. Use `Rx.Observable.from(values)` to wrap it. The `forEach` method is overloaded to have the same semantics as `subscribe`.
1. Single-value, asynchronous: it's often wrapped in a promise. A promise is executed eagerly and asynchronously. Use `Rx.Observable.fromPromise(promise)` to wrap it.
1. Multi-value, asynchronous: the typical solution is an `EventEmitter`. Use `Rx.Observable.fromEvent()` to wrap it. RxJS uses push-based notifications.
An observer is registered to an observable. An observer has three methods: `next()`, `error()`, and `complete()`.
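A minimal plain-JavaScript sketch (no RxJS; the `createObservable` helper and the producer below are invented for illustration) of the observable/observer contract:

```javascript
// Minimal observable: a producer pushes notifications into an observer
// that implements next/error/complete. Once error or complete fires,
// the stream is closed and no further notifications get through.
function createObservable(producer) {
  return {
    subscribe(observer) {
      let closed = false;
      const safeObserver = {
        next(value) { if (!closed) observer.next(value); },
        error(err) { if (!closed) { closed = true; observer.error(err); } },
        complete() { if (!closed) { closed = true; observer.complete(); } }
      };
      producer(safeObserver);
      // The returned subscription manages disposal of the stream.
      return { unsubscribe() { closed = true; } };
    }
  };
}

// A fire-and-forget producer that pushes three values, then completes.
const numbers$ = createObservable(observer => {
  [1, 2, 3].forEach(n => observer.next(n));
  observer.complete();
});

const received = [];
let completed = false;
numbers$.subscribe({
  next: value => received.push(value),
  error: err => console.error(err),
  complete: () => { completed = true; }
});
// received → [1, 2, 3], completed → true
```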
At the core, an observable is a function that processes a set of inputs and returns an object that can be subscribed to by an observer. The observer receives a subscription to manage the disposal of the stream.
## 2 Operators
RxJS avoids premature allocation of data in two ways: using a lazy subscription and, by default, pushing data as soon as an event is emitted without holding it in memory.
An operator is a pure, higher-order, lazily-evaluated function that is injected into an observable's pipeline.
### 2.1 Core Operators
The `map` operator transforms data from one type to another.
The `filter` operator removes unwanted items from a stream via a selector function, also called the predicate.
The `reduce(accumulatorFunction, [initialValue])` operator turns a stream into a single-value observable. The `accumulatorFunction` takes two parameters: the current result and the new stream element.
The `scan()` operator applies an accumulator function over an observable sequence but returns each intermediate result.
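The difference between `reduce` and `scan` can be sketched over a plain array (illustrative only; the invented helpers below run synchronously, while the real operators work over pushed streams):

```javascript
// reduce: emit a single final accumulation.
function reduceSeq(values, accumulator, seed) {
  return values.reduce(accumulator, seed);
}

// scan: emit every intermediate accumulation.
function scanSeq(values, accumulator, seed) {
  const out = [];
  let state = seed;
  for (const value of values) {
    state = accumulator(state, value);
    out.push(state);
  }
  return out;
}

const sum = (acc, value) => acc + value;
const reduced = reduceSeq([1, 2, 3], sum, 0); // 6
const scanned = scanSeq([1, 2, 3], sum, 0);   // [1, 3, 6]
```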
The `take(count)` operator returns a specified number of contiguous elements. The `first` and `last` operators return the first or the last element, respectively.
The `min` and `max` operators return the minimum or maximum value of a finite stream.
The `do` utility operator invokes an action for each element to perform some type of side effect, mostly for debugging or tracing purposes. It can be plugged into any step in the pipeline.
The `pluck(propertyName)` operator gets the value of a named property from each element.
If a pipeline is side-effect-free, it is said to be **self-contained**. This style is called operator chaining or fluent programming.
An observable must always produce the same results given the same events passing through it. This is a quality known in FP as **referential transparency**.
### 2.2 An Operator Example
An operator creates a brand-new observable, transforming the data from its source and delegating the result to the next subscriber in the chain.
```javascript
function exclude(predicate) {
  // `exclude` must be a regular function (not an arrow function) so that
  // `this` is the source observable when it's invoked as an instance
  // method via the prototype assignment below.
  return Rx.Observable.create(subscriber => {
    const source = this
    return source.subscribe(
      value => {
        try {
          // Pass along only the values the predicate rejects.
          if (!predicate(value)) {
            subscriber.next(value)
          }
        }
        catch (err) {
          subscriber.error(err)
        }
      },
      err => subscriber.error(err),
      () => subscriber.complete()
    )
  })
}
Rx.Observable.prototype.exclude = exclude
```
Observables are lightweight and inexpensive to create. They have built-in capabilities for disposal and cancellation via the `unsubscribe()` method. The function passed to `Rx.Observable.create` can return a teardown function that is called when the subscription's `unsubscribe()` method is invoked.
## 3 Time Management
Functions that deal with time are inherently impure because time is global to the entire application and forever changing. JavaScript functions like `Date.now()` and `Math.random()` are impure because their return values are inconsistent. An async event brings two challenges:
* It may or may not happen in the future.
* It may be conditional, depending on the result of a previous task.
Callbacks and event handlers have implicit timing in the process.
### 3.1 Timing Methods
The `Rx.Observable.timer(offset)` operator creates an observable that emits a single event after a given period of time. Similarly, `interval(span)` emits a sequence of natural numbers starting from 0 at a fixed interval. Both also accept an additional `scheduler` parameter that makes testing easy. The `timeInterval()` instance method gives both a count and the time interval between two consecutive events. The `delay(offset)` operator time-shifts the entire observable sequence.
There are two important characteristics of RxJS timing:
* Each operator affects only the propagation of an event, not its creation.
* Time operators act sequentially.
`debounceTime(period)` triggers only when a certain period has passed without another event being emitted. The last value will be emitted.
`throttleTime(period)` emits at most once every period.
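A timer-free way to compare the two is to simulate both policies over a pre-recorded list of timestamped events (plain JavaScript; `debounceSim` and `throttleSim` are invented helpers, not the real operators):

```javascript
// debounce: emit an event only if no other event follows it within
// `period` milliseconds.
function debounceSim(events, period) {
  return events
    .filter((event, i) => {
      const next = events[i + 1];
      return next === undefined || next.t - event.t > period;
    })
    .map(event => event.v);
}

// throttle: emit an event, then suppress further events until `period`
// milliseconds have passed since the last emission.
function throttleSim(events, period) {
  const out = [];
  let lastEmit = -Infinity;
  for (const event of events) {
    if (event.t - lastEmit >= period) {
      out.push(event.v);
      lastEmit = event.t;
    }
  }
  return out;
}

const events = [
  { t: 0, v: 'a' },
  { t: 50, v: 'b' },
  { t: 400, v: 'c' },
  { t: 450, v: 'd' }
];
const debounced = debounceSim(events, 100); // ['b', 'd']
const throttled = throttleSim(events, 100); // ['a', 'c']
```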
### 3.2 Buffering
`buffer(closingObservable)` buffers data until a closing observable emits an event.
`bufferCount(number)` buffers the number of events.
`bufferTime(period)` buffers for a specific period.
`bufferWhen(selector)` buffers when the selector call emits a value.
## 4 Combining Multiple Observables
### 4.1 Flat Combination
The `merge()` method merges one observable with others; elements are emitted in the order they arrive from the source streams.
The `concat()` method appends all elements of one source to another. It begins emitting data from the second observable only when the first one completes.
`switch()` is an instance method that subscribes to an observable that emits observables. Each time it sees a newly emitted observable, it unsubscribes from the previously emitted one, as described in its [operator documentation](http://reactivex.io/documentation/operators/switch.html):
> convert an Observable that emits Observables into a single
> Observable that emits the items emitted by the
> most-recently-emitted of those Observables.
### 4.2 Nested Observables
Observables manage and control the data that flows through them via data containerizing. Therefore, there are cases of observables whose values are observables. This software pattern is the FP paradigm called **monad**. A monad exposes an interface with three methods: a unit function to lift values into the monadic context (`of()`), a mapping function (`map()`), and a map-with-flatten function (`mergeMap()`).
The semantic meaning of `mergeMap()` is to transform the mapped stream by flattening a stream of projected observables, i.e., extracting data from the nested observables.
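The flattening half of `mergeMap()` can be sketched over plain arrays (the `mergeMapSeq` helper is invented; the real operator also manages asynchrony and subscription bookkeeping):

```javascript
// Map each source value to an inner sequence, then flatten all inner
// sequences into a single output sequence.
function mergeMapSeq(values, project) {
  const out = [];
  for (const value of values) {
    for (const inner of project(value)) {
      out.push(inner); // flatten the projected sequence
    }
  }
  return out;
}

// Project each number to the pair [n, n * 10].
const flattened = mergeMapSeq([1, 2, 3], n => [n, n * 10]);
// flattened → [1, 10, 2, 20, 3, 30]
```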
The `concatMap()` operator waits for the previous inner observable to complete, then concatenates the flattened observable.
The `switchMap()` operator switches to the most recently projected observable as soon as it emits. It cancels any previous inner observables.
The `concatAll()` operator waits for each observable sequentially and flattens the results.
### 4.3 Coordinating Observables
The `startWith()` operator emits a value before the other observable values are emitted.
The `using(resourceFactory, observableFactory)` calls `resource.unsubscribe()` when the observable is completed.
The `combineLatest()` operator emits an array of the latest values of multiple independent observables.
The `forkJoin()` emits only the last value of each forked stream.
Use `zip()` to combine streams that happen synchronously.
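The lockstep pairing behind `zip()` can be sketched over arrays (the `zipSeq` helper is invented and ignores timing, which the real operator handles):

```javascript
// Pair values from two sequences index by index, stopping at the
// shorter sequence; leftover values are dropped.
function zipSeq(a, b, combine) {
  const length = Math.min(a.length, b.length);
  const out = [];
  for (let i = 0; i < length; i++) {
    out.push(combine(a[i], b[i]));
  }
  return out;
}

const zipped = zipSeq([1, 2, 3], ['a', 'b'], (n, s) => s + n);
// zipped → ['a1', 'b2']  (the unmatched 3 is dropped)
```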
## 5 Error Handling
RxJS implements a functional error-handling technique. It abstracts errors and exception handling via several strategies.
### 5.1 Error Propagation
At the end of the observable stream is a subscriber waiting to pounce on the next event to occur. The subscriber implements the `Observer` interface, which consists of three methods: `next()`, `error()`, and `complete()`. Errors that occur at the beginning of the stream or in the middle are propagated down to any observer, finally resulting in a call to `error()`. The first exception that fires results in the entire stream being cancelled.
### 5.2 Catching and Reacting to Errors
The `catch()` operator intercepts any error in the `Observable` and gives you the option to handle it by returning a new `Observable` or propagating it downstream.
The `catch()` operator is passed a function that takes an error argument as well as the source observable that was caught. Therefore you can return the source observable to retry from the beginning.
RxJS provides the `retry()` operator to re-execute the source observable a given number of times. Be careful when using it with an observable created from a promise, because a promise always returns a settled value (success or failure). You can control the retry strategy using `retryWhen()`.
RxJS provides the `throw()` and `finally()` operators to throw an exception or run cleanup code. `finally()` runs when a stream completes or when it errors.
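The retry idea can be sketched synchronously (the `retrySync` helper is invented; the real `retry()` resubscribes to the source observable instead of re-calling a function):

```javascript
// Re-execute a failing operation up to `retries` additional times.
function retrySync(operation, retries) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return operation(attempt);
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // out of retries: propagate the last error
}

// An operation that fails twice, then succeeds on the third call.
let calls = 0;
const result = retrySync(() => {
  calls += 1;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
}, 3);
// result → 'ok', calls → 3
```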
## 6 Hot and Cold
The hot and cold category determines the stream behavior: not just the subscription semantics, but also the entire lifetime of the stream.
### 6.1 Cold Observables
A **cold observable** doesn't begin emitting until an observer subscribes to it. It is typically used to wrap bounded resource types such as numbers, ranges of numbers, strings, arrays, and HTTP requests, as well as unbounded types like generator functions. These resources are known as **passive** in the sense that their declaration is **independent** of their execution. They are truly lazy in their creation and execution.
Being cold means that each new subscription creates a new, independent stream with a new starting point for that stream. Each subscriber always independently receives the exact same set of events. A cold observable can be thought of as a function or an object factory that takes input data and returns an output to the caller.
The declaration of a cold observable frequently begins with the static operators such as `of()`, `from()`, `interval()`, and `timer()`.
### 6.2 Hot Observables
Hot observables produce events regardless of the presence of subscribers. They are used to model events like clicks, mouse movement, touch, or any other events exposed via event emitters.
Similarly, an HTTP request is cold whereas a promise is hot -- a promise is not re-executable once it has been fulfilled.
A hot observable shares the same subscription with all observers that listen to it. It emits an ongoing sequence of events from the point of subscription, not from the beginning.
### 6.3 Change Temperature
The default subscription behavior of RxJS is the cold observable subscription: each subscriber gets its own copy of the producer. This is the case for synchronous data sources as well as async data sources created within an observable context. An implication is that anything subscribing to a cold observable creates a one-to-one unicast communication between the producer and the consumer, whereas subscribing to a hot observable creates a one-to-many shared/multicast communication between the producer and its consumers.
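The unicast/multicast contrast can be sketched without RxJS (the `coldSource` and `hotSource` helpers are invented for illustration):

```javascript
// Cold: every subscriber triggers its own fresh run of the producer.
function coldSource(produce) {
  return {
    subscribe(next) { produce(next); }
  };
}

// Hot: one shared producer multicasts to whoever is subscribed.
function hotSource() {
  const listeners = [];
  return {
    subscribe(next) { listeners.push(next); },
    emit(value) { listeners.forEach(listener => listener(value)); }
  };
}

// Cold: both subscribers independently receive the full sequence.
const cold = coldSource(next => [1, 2].forEach(n => next(n)));
const a = [];
const b = [];
cold.subscribe(value => a.push(value)); // a → [1, 2]
cold.subscribe(value => b.push(value)); // b → [1, 2]

// Hot: a late subscriber misses values emitted before it subscribed.
const hot = hotSource();
const c = [];
const d = [];
hot.subscribe(value => c.push(value));
hot.emit(1);
hot.subscribe(value => d.push(value)); // subscribes after the first emit
hot.emit(2);
// c → [1, 2], d → [2]
```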
By moving a hot source producer, such as a promise or a WebSocket creation, into the observable context, you make a hot source cold.
By moving a cold source producer out of the observable context and letting the observable subscribe to the producer's events, you can make a cold source hot.
The `share()` operator turns a cold stream hot by managing the underlying stream's state so that the stream can be shared by all subscribers. | 62.661972 | 520 | 0.772908 | eng_Latn | 0.999105 |
8778131629fba6744db45b74423118b8b038a072 | 8,221 | md | Markdown | content/posts/2020-07-21---Hot-Papers.md | TatsuyaShirakawa/daily-arxiv-gatsby | 4c1744c7f6f3eaa676310a5958ee71e126cf0c93 | [
"MIT"
] | 4 | 2020-09-02T16:13:06.000Z | 2021-11-08T08:17:04.000Z | content/posts/2020-07-21---Hot-Papers.md | TatsuyaShirakawa/daily-arxiv-gatsby | 4c1744c7f6f3eaa676310a5958ee71e126cf0c93 | [
"MIT"
] | null | null | null | content/posts/2020-07-21---Hot-Papers.md | TatsuyaShirakawa/daily-arxiv-gatsby | 4c1744c7f6f3eaa676310a5958ee71e126cf0c93 | [
"MIT"
] | null | null | null | ---
title: Hot Papers 2020-07-21
date: 2020-07-22T10:01:27Z
template: "post"
draft: false
slug: "hot-papers-2020-07-21"
category: "arXiv"
tags:
- "arXiv"
- "Twitter"
- "Machine Learning"
- "Computer Science"
description: "Hot papers 2020-07-21"
socialImage: "/media/flying-marine.jpg"
---
# 1. iNNk: A Multi-Player Game to Deceive a Neural Network
Jennifer Villareale, Ana Acosta-Ruiz, Samuel Arcaro, Thomas Fox, Evan Freed, Robert Gray, Mathias Löwe, Panote Nuchprayoon, Aleksanteri Sladek, Rush Weigelt, Yifu Li, Sebastian Risi, Jichen Zhu
- retweets: 39, favorites: 141 (07/22/2020 10:01:27)
- links: [abs](https://arxiv.org/abs/2007.09177) | [pdf](https://arxiv.org/pdf/2007.09177)
- [cs.HC](https://arxiv.org/list/cs.HC/recent) | [cs.AI](https://arxiv.org/list/cs.AI/recent)
This paper presents \textit{iNNK}, a multiplayer drawing game where human players team up against an NN. The players need to successfully communicate a secret code word to each other through drawings, without being deciphered by the NN. With this game, we aim to foster a playful environment where players can, in a small way, go from passive consumers of NN applications to creative thinkers and critical challengers.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Happy to present a new game we developed "iNNk: A Multi-Player Game to Deceive a Neural Network" <a href="https://t.co/4Yx2Phe16V">https://t.co/4Yx2Phe16V</a><br>Paper: <a href="https://t.co/OcjJgezEN7">https://t.co/OcjJgezEN7</a><br><br>Players need to communicate a secret code word to each other through drawings, without being deciphered by the neural network</p>— Sebastian Risi (@risi1979) <a href="https://twitter.com/risi1979/status/1285536177580277760?ref_src=twsrc%5Etfw">July 21, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 2. An Overview of Natural Language State Representation for Reinforcement Learning
Brielen Madureira, David Schlangen
- retweets: 13, favorites: 51 (07/22/2020 10:01:27)
- links: [abs](https://arxiv.org/abs/2007.09774) | [pdf](https://arxiv.org/pdf/2007.09774)
- [cs.CL](https://arxiv.org/list/cs.CL/recent)
A suitable state representation is a fundamental part of the learning process in Reinforcement Learning. In various tasks, the state can either be described by natural language or be natural language itself. This survey outlines the strategies used in the literature to build natural language state representations. We appeal for more linguistically interpretable and grounded representations, careful justification of design decisions and evaluation of the effectiveness of different approaches.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">A short survey of natural language state representations for reinforcement learning. <br><br>As a reminder, NLU can be used in language-conditional RL & language-assisted RL. <br><br>Some tasks where RL is used to solve NLP tasks are text summarization and dialogue.<a href="https://t.co/DIcptQx0HP">https://t.co/DIcptQx0HP</a> <a href="https://t.co/Zwr0ZRTbnK">pic.twitter.com/Zwr0ZRTbnK</a></p>— elvis (@omarsar0) <a href="https://twitter.com/omarsar0/status/1285593675209363457?ref_src=twsrc%5Etfw">July 21, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 3. ContactPose: A Dataset of Grasps with Object Contact and Hand Pose
Samarth Brahmbhatt, Chengcheng Tang, Christopher D. Twigg, Charles C. Kemp, James Hays
- retweets: 7, favorites: 47 (07/22/2020 10:01:27)
- links: [abs](https://arxiv.org/abs/2007.09545) | [pdf](https://arxiv.org/pdf/2007.09545)
- [cs.CV](https://arxiv.org/list/cs.CV/recent)
Grasping is natural for humans. However, it involves complex hand configurations and soft tissue deformation that can result in complicated regions of contact between the hand and the object. Understanding and modeling this contact can potentially improve hand models, AR/VR experiences, and robotic grasping. Yet, we currently lack datasets of hand-object contact paired with other data modalities, which is crucial for developing and evaluating contact modeling techniques. We introduce ContactPose, the first dataset of hand-object contact paired with hand pose, object pose, and RGB-D images. ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9 M RGB-D grasp images. Analysis of ContactPose data reveals interesting relationships between hand pose and contact. We use this data to rigorously evaluate various data representations, heuristics from the literature, and learning methods for contact modeling. Data, code, and trained models are available at https://contactpose.cc.gatech.edu.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">ContactPose is a dataset of registered hand poses with images from 3 viewpoints with the interesting part of recordings from thermal cameras as well to get the contact traces of human hand on the objects. (from <a href="https://twitter.com/samarth_robo?ref_src=twsrc%5Etfw">@samarth_robo</a>) <a href="https://t.co/V2sjJFs2Vu">https://t.co/V2sjJFs2Vu</a> <a href="https://t.co/Mlh8cY6HVL">https://t.co/Mlh8cY6HVL</a> <a href="https://t.co/d1HbEThHj1">pic.twitter.com/d1HbEThHj1</a></p>— Ankur Handa (@ankurhandos) <a href="https://twitter.com/ankurhandos/status/1285612268282052608?ref_src=twsrc%5Etfw">July 21, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
# 4. Temporal Pointwise Convolutional Networks for Length of Stay Prediction in the Intensive Care Unit
Emma Rocheteau, Pietro Liò, Stephanie Hyland
- retweets: 11, favorites: 42 (07/22/2020 10:01:28)
- links: [abs](https://arxiv.org/abs/2007.09483) | [pdf](https://arxiv.org/pdf/2007.09483)
- [cs.LG](https://arxiv.org/list/cs.LG/recent) | [cs.AI](https://arxiv.org/list/cs.AI/recent) | [stat.ML](https://arxiv.org/list/stat.ML/recent)
The pressure of ever-increasing patient demand and budget restrictions make hospital bed management a daily challenge for clinical staff. Most critical is the efficient allocation of resource-heavy Intensive Care Unit (ICU) beds to the patients who need life support. Central to solving this problem is knowing for how long the current set of ICU patients are likely to stay in the unit. In this work, we propose a new deep learning model based on the combination of temporal convolution and pointwise (1x1) convolution, to solve the length of stay prediction task on the eICU critical care dataset. The model - which we refer to as Temporal Pointwise Convolution (TPC) - is specifically designed to mitigate for common challenges with Electronic Health Records, such as skewness, irregular sampling and missing data. In doing so, we have achieved significant performance benefits of 18-51% (metric dependent) over the commonly used Long-Short Term Memory (LSTM) network, and the multi-head self-attention network known as the Transformer.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The full version of our Temporal Pointwise <a href="https://twitter.com/hashtag/Convolution?src=hash&ref_src=twsrc%5Etfw">#Convolution</a> paper is up! We predicted <a href="https://twitter.com/hashtag/LengthOfStay?src=hash&ref_src=twsrc%5Etfw">#LengthOfStay</a> in <a href="https://twitter.com/hashtag/IntensiveCare?src=hash&ref_src=twsrc%5Etfw">#IntensiveCare</a> and achieved better performance than the LSTM and Transformer🥳🎉<br><br>Paper: <a href="https://t.co/QXdRovxDLF">https://t.co/QXdRovxDLF</a><br>Code: <a href="https://t.co/d5ZfDHRlbR">https://t.co/d5ZfDHRlbR</a><br><br>Coauthors <a href="https://twitter.com/_hylandSL?ref_src=twsrc%5Etfw">@_hylandSL</a> <a href="https://twitter.com/pl219_Cambridge?ref_src=twsrc%5Etfw">@pl219_Cambridge</a> <a href="https://t.co/gfKFjeMMaj">pic.twitter.com/gfKFjeMMaj</a></p>— Emma Rocheteau (@09Emmar) <a href="https://twitter.com/09Emmar/status/1285506639676813312?ref_src=twsrc%5Etfw">July 21, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
| 96.717647 | 1,071 | 0.768398 | eng_Latn | 0.821268 |
87781794944626f904a35821cdcec3f6a1f159e9 | 975 | md | Markdown | README.md | p0bailey/docker-jenkins | f36a61937d4d6f0cd6b66dc3e3d914dc41e743fc | [
"MIT"
] | null | null | null | README.md | p0bailey/docker-jenkins | f36a61937d4d6f0cd6b66dc3e3d914dc41e743fc | [
"MIT"
] | null | null | null | README.md | p0bailey/docker-jenkins | f36a61937d4d6f0cd6b66dc3e3d914dc41e743fc | [
"MIT"
] | null | null | null | # Docker Jenkins
This compose file allows you to bootstrap an instance of the latest
Jenkins [Docker image](https://hub.docker.com/_/jenkins/), mounting persistent data storage within the working directory of the host machine.
```
/var/jenkins_home is mounted under ${PWD}/jenkins
```
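The actual `docker-compose.yml` is not reproduced in this README; a minimal sketch consistent with the mount described above (the image name and port mappings are assumptions, not taken from the repository) might look like:

```yaml
version: "3"
services:
  jenkins:
    image: jenkins
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      # Persist Jenkins state under ./jenkins on the host.
      - ./jenkins:/var/jenkins_home
```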
## Installation
docker-compose up -d
### Requirements
Docker Version >= 1.12.2
### Setup
git clone git@github.com:p0bailey/docker-jenkins.git
## Usage
```
docker-compose up -d
```
Grab the Jenkins initial admin password.
```
cat jenkins/secrets/initialAdminPassword
```
Point the browser to:
```
127.0.0.1:8080
```
Go and set up Jenkins.
## Contributing
1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D
## History
16 Oct 2016 - 1.0
## Author
Phillip Bailey - <phillip@bailey.st>
## License
MIT License
| 15.725806 | 142 | 0.715897 | eng_Latn | 0.84289 |
87787238dee0e6f2f96a561023533f6f504e4095 | 1,270 | md | Markdown | content/v2.0/reference/flux/stdlib/built-in/transformations/kaufmanser.md | influxdata-cn/docs-v2 | facb71f6641c98950c09271b5e953ec4a59a76f6 | [
"MIT"
] | null | null | null | content/v2.0/reference/flux/stdlib/built-in/transformations/kaufmanser.md | influxdata-cn/docs-v2 | facb71f6641c98950c09271b5e953ec4a59a76f6 | [
"MIT"
] | null | null | null | content/v2.0/reference/flux/stdlib/built-in/transformations/kaufmanser.md | influxdata-cn/docs-v2 | facb71f6641c98950c09271b5e953ec4a59a76f6 | [
"MIT"
] | 1 | 2020-09-07T11:55:30.000Z | 2020-09-07T11:55:30.000Z | ---
title: kaufmansER() function
description: >
The `kaufmansER()` function calculates the Kaufman's Efficiency Ratio (KER) using
values in an input table.
aliases:
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/kaufmanser/
- /v2.0/reference/flux/stdlib/built-in/transformations/aggregates/kaufmanser/
menu:
v2_0_ref:
name: kaufmansER
parent: built-in-transformations
weight: 402
related:
- /v2.0/reference/flux/stdlib/built-in/transformations/kaufmansama/
- https://docs.influxdata.com/influxdb/latest/query_language/functions/#kaufmans-efficiency-ratio, InfluxQL KAUFMANS_EFFICIENCY_RATIO()
---
The `kaufmansER()` function calculates the Kaufman's Efficiency Ratio (KER) using
values in an input table.
The function operates on the `_value` column.
_**Function type:** Transformation_
```js
kaufmansER(n: 10)
```
Kaufman's Efficiency Ratio indicator divides the absolute value of the
Chande Momentum Oscillator by 100 to return a value between 0 and 1.
Higher values represent a more efficient or trending market.
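As an illustrative sketch of the arithmetic (plain JavaScript, not Flux): since |CMO|/100 equals the absolute net change over a window divided by the total absolute movement, KER can be computed as:

```javascript
// Kaufman's Efficiency Ratio over a window of values:
// |net change| / sum of |point-to-point changes|, in [0, 1].
function kaufmanER(values) {
  const netChange = Math.abs(values[values.length - 1] - values[0]);
  let totalMovement = 0;
  for (let i = 1; i < values.length; i++) {
    totalMovement += Math.abs(values[i] - values[i - 1]);
  }
  return totalMovement === 0 ? 0 : netChange / totalMovement;
}

const trending = kaufmanER([1, 2, 3, 4]); // 1 — perfectly efficient trend
const choppy = kaufmanER([1, 2, 1, 2]);   // ≈ 0.33 — mostly noise
```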
## Parameters
### n
The period or number of points to use in the calculation.
_**Data type:** Integer_
## Examples
```js
from(bucket: "example-bucket")
|> range(start: -7d)
|> kaufmansER(n: 10)
```
| 27.608696 | 137 | 0.755118 | eng_Latn | 0.729231 |
87789b4a6c17702098afbdb54ed8ac8ef5cdf664 | 528 | md | Markdown | data/meetup/washington-dc.md | kpfefferle/ember-website | 91ccc3f0c45fa5f3c05dc99df8323360fe82a415 | [
"MIT"
] | 64 | 2018-06-14T13:32:41.000Z | 2022-03-02T11:39:43.000Z | data/meetup/washington-dc.md | kpfefferle/ember-website | 91ccc3f0c45fa5f3c05dc99df8323360fe82a415 | [
"MIT"
] | 652 | 2018-06-15T18:03:09.000Z | 2022-03-19T19:19:31.000Z | data/meetup/washington-dc.md | kpfefferle/ember-website | 91ccc3f0c45fa5f3c05dc99df8323360fe82a415 | [
"MIT"
] | 185 | 2018-06-14T13:58:53.000Z | 2022-03-25T09:45:51.000Z | ---
location: 'Washington, DC'
url: 'http://www.meetup.com/Ember-JS-DC/'
image: 19washington.png
lat: 38.9071923
lng: -77.0368707
organizers:
- organizer: Greg Hurlman
profileImage: 'http://photos3.meetupstatic.com/photos/member/5/f/3/4/thumb_201804372.jpeg'
- organizer: Allison
profileImage: 'http://photos3.meetupstatic.com/photos/member/3/5/e/a/thumb_39973802.jpeg'
- organizer: Matt Leibel
profileImage: 'http://photos4.meetupstatic.com/photos/member/9/1/d/e/thumb_159757342.jpeg'
area: North America
---
| 33 | 94 | 0.74053 | yue_Hant | 0.159019 |
8778d765850697a1785ba4faebf160e6f2a6d354 | 989 | md | Markdown | docs/visual-basic/misc/bc42301.md | Ming77/docs.zh-cn | dd4fb6e9f79320627d19c760922cb66f60162607 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc42301.md | Ming77/docs.zh-cn | dd4fb6e9f79320627d19c760922cb66f60162607 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc42301.md | Ming77/docs.zh-cn | dd4fb6e9f79320627d19c760922cb66f60162607 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "每个语言元素只能有一个 XML 注释块"
ms.date: 07/20/2015
ms.prod: .net
ms.technology: devlang-visual-basic
ms.topic: article
f1_keywords:
- vbc42301
- bc42301
helpviewer_keywords: BC42301
ms.assetid: 04c4b833-2001-420d-9f49-4048a3b04ee4
caps.latest.revision: "9"
author: dotnet-bot
ms.author: dotnetcontent
ms.openlocfilehash: afa3873c7f52e5fc959309ee7de43a63fa94bee8
ms.sourcegitcommit: 4f3fef493080a43e70e951223894768d36ce430a
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 11/21/2017
---
# <a name="only-one-xml-comment-block-is-allowed-per-language-element"></a>每个语言元素只能有一个 XML 注释块
Multiple XML comment blocks have been applied to a language element.
**Error ID:** BC42301
## <a name="to-correct-this-error"></a>更正此错误
- Remove the redundant XML comment blocks.
## <a name="see-also"></a>另请参阅
[How to: Create XML Documentation](../../visual-basic/programming-guide/program-structure/how-to-create-xml-documentation.md)
[Documenting Your Code with XML](../../visual-basic/programming-guide/program-structure/documenting-your-code-with-xml.md)
| 29.969697 | 108 | 0.737108 | yue_Hant | 0.160873 |
8778fdc43ac950d4ba11e136ce0d76d612202d89 | 3,462 | md | Markdown | README.md | trond-snekvik/vscode-clangd | 5970d3c4ac1a8e48001e93a5f88f0f3911744dd1 | [
"MIT"
] | null | null | null | README.md | trond-snekvik/vscode-clangd | 5970d3c4ac1a8e48001e93a5f88f0f3911744dd1 | [
"MIT"
] | null | null | null | README.md | trond-snekvik/vscode-clangd | 5970d3c4ac1a8e48001e93a5f88f0f3911744dd1 | [
"MIT"
] | null | null | null | # clangd
Provides C/C++ language IDE features for VS Code using [clangd](https://clangd.llvm.org):
- code completion
- compile errors and warnings
- go-to-definition and cross references
- include management
- code formatting
- simple refactorings
## Setup
### `clangd` server
The extension requires the `clangd` language server.
You will be prompted to download it if it's not found on your PATH.
(Automatic installation is possible on x86-64 Linux, Windows, and Mac).
If you have an old version of clangd installed on your system already, you can
run "Check for clangd language server update" from the command palette.
### Project setup
clangd is based on the clang C++ compiler, and understands even complex C++
code. However, you must tell clangd how your project is built (compile flags).
[A `compile_commands.json` file](http://clang.llvm.org/docs/JSONCompilationDatabase.html)
can usually be generated by your build system
(e.g. by setting `-DCMAKE_EXPORT_COMPILE_COMMANDS=1` when building with CMake,
or with
[many other tools](https://sarcasm.github.io/notes/dev/compilation-database.html)).
It should live at the top of your source tree: symlink or copy it there.
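For reference, each entry in `compile_commands.json` records how one translation unit is compiled; a minimal illustrative entry (the paths and flags here are placeholders, not output from a real build) looks like:

```json
[
  {
    "directory": "/path/to/project/build",
    "command": "clang++ -std=c++17 -Iinclude -c ../src/main.cpp",
    "file": "../src/main.cpp"
  }
]
```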
## Features
### Code completion
Suggestions will appear as you type names, or after `.` or `->`.
Because clangd uses a full C++ parser, code completion has access to precise
type information.

### Errors, warnings, and clang-tidy
Code errors are shown as you type (both as red squiggle underlines, and in the
"Problems" panel). These are the same as produced by the clang compiler, and
suggested fixes can automatically be applied.

Most clang-tidy checks are supported (these can be enabled using a [.clang-tidy
file](https://clang.llvm.org/extra/clang-tidy/)).
### Cross-references
Go-to-definition and find-references work across your code, using a project-wide
index.

Press `Ctrl-P #` to quickly navigate to a symbol by name.
### Include management
Code completion works across your codebase and adds `#include` directives where
needed. The `•` shows includes that will be inserted.
clangd can also suggest inserting missing #includes, where they cause errors.

### Formatting
clangd uses the `clang-format` engine. You can format a file or the selection.
When "Format on Type" is enabled in the settings, pressing enter will cause
clangd to format the old line and semantically reindent.

The style used for formatting (and certain other operations) is controlled by
the project's
[.clang-format file](https://clang.llvm.org/docs/ClangFormatStyleOptions.html).
### Refactoring
clangd supports some local refactorings. When you select an expression or
declaration, the lightbulb menu appears and you can choose a code action.

Current refactorings include:
- extract variable/function
- expand `auto` types and macros
- use raw strings
- rename (bound to `<F2>`, rather than a contextual code action)
## Bugs/contributing
clangd is part of the [LLVM project](https://llvm.org).
If you'd like to help out, reach out to clangd-dev@lists.llvm.org.
If you've found a bug, please file at https://github.com/clangd/clangd/issues.
| 32.660377 | 89 | 0.76372 | eng_Latn | 0.996073 |
8779237b3c88c160d980c1ada5dc587abc9e3b5e | 791 | md | Markdown | F1/isreducedfunctionalitymode-method-project-vbapj-chm132358.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2019-09-07T04:44:48.000Z | 2021-12-16T15:05:50.000Z | F1/isreducedfunctionalitymode-method-project-vbapj-chm132358.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-13T09:32:15.000Z | 2021-06-13T09:32:15.000Z | F1/isreducedfunctionalitymode-method-project-vbapj-chm132358.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-23T03:40:08.000Z | 2021-06-23T03:40:08.000Z | ---
title: IsReducedFunctionalityMode Method, Project [vbapj.chm132358]
keywords: vbapj.chm132358
f1_keywords:
- vbapj.chm132358
ms.prod: office
ms.assetid: 75fffa89-20af-4c7e-a231-7b9a6fc1c95f
ms.date: 06/08/2017
localization_priority: Normal
---
# IsReducedFunctionalityMode Method, Project [vbapj.chm132358]
Hi there! You have landed on one of our F1 Help redirector pages. Please select the topic you were looking for below.
[Application.IsReducedFunctionalityMode Method (Project)](https://msdn.microsoft.com/library/d53320db-377d-2e78-10b2-03af8d8bded3%28Office.15%29.aspx)
[LookupTableEntry.Index Property (Project)](https://msdn.microsoft.com/library/24c1ea75-522b-a010-3043-ed2ccf3547ec%28Office.15%29.aspx)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 37.666667 | 150 | 0.80531 | yue_Hant | 0.442391 |
87794bc217652658b4cd59556a224587e8979cc2 | 1,645 | md | Markdown | Grammar/rules_markdown/141_it_looks_like.md | MyNihongo/JapaneseData | 9b2467a03e78d97de28880d5eb205325326f8789 | [
"MIT"
] | 2 | 2022-01-06T21:16:42.000Z | 2022-01-08T11:02:28.000Z | Grammar/rules_markdown/141_it_looks_like.md | MyNihongo/JapaneseData | 9b2467a03e78d97de28880d5eb205325326f8789 | [
"MIT"
] | 3 | 2021-12-25T13:53:21.000Z | 2022-01-25T20:49:05.000Z | Grammar/rules_markdown/141_it_looks_like.md | MyNihongo/JapaneseData | 9b2467a03e78d97de28880d5eb205325326f8789 | [
"MIT"
] | null | null | null | In order to say that something looks like a pattern `clause + ようです` is used. In these situations the speaker relies on his or her senses (sight, hearing, smell, taste, etc.) and is not completely sure whether or not the clause is true.
It has the same meaning as [みたいだ](82) and [に見える](92), but *ようだ* is more formal and is usually used only in writing; therefore, it is not used with the casual ending *だ*. The grammar construction is also formed differently. Just like with *みたいだ*, if *ようです* is used with a verb, the meaning of the clause changes to *"it seems"*.
|Structure|Form|Example|
|-|-|-|
|Verb|verb + ようです|行く**ようです**|
|い-adjective|casual form + ようです|寒い**ようです**|
|な-adjective|casual form + **な**ようです|安全**なようです**|
|Noun|noun + **の**ようです|日本人**のようです**|
##### Sight
>ジョンさんは日本人**のようです**。It looks like John is Japanese.
>ジョンさんは日本人**のようではありません**。It looks like John is not Japanese.
In this example the speaker saw John and, judging by how John looked, decided that John was Japanese. The sense of **sight** was used.
##### Hearing
>あの人は議論している**ようです**。It looks like those people are having an argument.
>あの人は議論している**ようではありません**。It looks like those people are not having an argument.
In this example the speaker was walking by a couple who were talking loudly. The speaker heard them and assumed that they were having an argument. The sense of **hearing** was used.
##### Smell
>晩ご飯がおいしい**ようです**。It looks like the dinner is tasty.
>晩ご飯がおいしい**ようではありません**。It looks like the dinner is not tasty.
In this example the speaker smelled the dinner and assumed from the smell that it was tasty. The sense of **smell** was used.
8779f6a2441b15c4ca178a578a20cfa3d1e7ec70 | 2,709 | md | Markdown | powerbi-docs/consumer/end-user-recent.md | Heimig/powerbi-docs.de-de | 73b42400695fb379324256e53d19e6824a785415 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerbi-docs/consumer/end-user-recent.md | Heimig/powerbi-docs.de-de | 73b42400695fb379324256e53d19e6824a785415 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerbi-docs/consumer/end-user-recent.md | Heimig/powerbi-docs.de-de | 73b42400695fb379324256e53d19e6824a785415 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Anzeigen von kürzlich besuchten Inhalten im Power BI-Dienst
description: Dokumentation zu zuletzt verwendeten Inhalten in Power BI
author: mihart
manager: kvivek
ms.reviewer: ''
featuredvideoid: G26dr2PsEpk
ms.custom: seodec18
ms.service: powerbi
ms.subservice: powerbi-consumer
ms.topic: conceptual
ms.date: 12/06/2018
ms.author: mihart
LocalizationGroup: Common tasks
ms.openlocfilehash: 4bb69c8ead92bf69671107fdd5bfa0eef0ae5c0d
ms.sourcegitcommit: 60dad5aa0d85db790553e537bf8ac34ee3289ba3
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/29/2019
ms.locfileid: "61054866"
---
# <a name="recent-content-in-power-bi-service"></a>**Zuletzt verwendete** Inhalte im Power BI-Dienst
Die zuletzt verwendeten Inhalte sind die letzten Elemente, die Sie im Power BI-Dienst geöffnet haben (maximal 20 Elemente). Dazu gehören Dashboards, Berichte, Apps und Arbeitsmappen in allen Ihren Arbeitsbereichen.

Lassen Sie sich von Amanda zeigen, wie im Power BI-Dienst Inhaltslisten des Typs **Zuletzt verwendet** aufgefüllt werden, und befolgen Sie dann die detaillierten Anweisungen unter dem Video, um es selbst auszuprobieren.
<iframe width="560" height="315" src="https://www.youtube.com/embed/G26dr2PsEpk" frameborder="0" allowfullscreen></iframe>
## <a name="display-recent-content"></a>Anzeigen von zuletzt verwendeten Inhalten
Um die letzten fünf besuchten Elemente anzuzeigen, wählen Sie im linken Navigationsbereich den Pfeil rechts neben **Zuletzt verwendet** aus. Hier können Sie zuletzt verwendeten Inhalt auswählen, um ihn zu öffnen. Es werden nur die letzten fünf verwendeten Elemente aufgeführt.

Wenn mehr als fünf zuletzt besuchte Elemente vorhanden sind, wählen Sie **Alle anzeigen** aus, um den Bildschirm „Zuletzt verwendet“ zu öffnen (siehe unten). Sie können auch im linken Navigationsbereich auf **Zuletzt verwendet** oder das Symbol „Zuletzt verwendet“  klicken.

Von hier aus können Sie mit den Inhalten auf den einzelnen Registerkarten [**Dashboards**](end-user-dashboards.md), [**Berichte**](end-user-reports.md), **Arbeitsmappen** und Apps <!--[**Apps**](end-user-apps.md)--> des Bildschirms interagieren.
## <a name="next-steps"></a>Nächste Schritte
<!--[Power BI service Apps](end-user-apps.md)-->
Weitere Fragen? [Wenden Sie sich an die Power BI-Community](http://community.powerbi.com/)
| 57.638298 | 353 | 0.783315 | deu_Latn | 0.984828 |
877a3aff2bfb021440f471efe55afb45f56c788e | 1,796 | md | Markdown | lc/0703-2019-10-06.md | muhlenXi/enjoy-algorithm | a2e2edca96dc5706c494415f9e2e96f62d045fb3 | [
"Apache-2.0"
] | null | null | null | lc/0703-2019-10-06.md | muhlenXi/enjoy-algorithm | a2e2edca96dc5706c494415f9e2e96f62d045fb3 | [
"Apache-2.0"
] | null | null | null | lc/0703-2019-10-06.md | muhlenXi/enjoy-algorithm | a2e2edca96dc5706c494415f9e2e96f62d045fb3 | [
"Apache-2.0"
] | null | null | null | # 703. 数据流中的第K大元素
### 简述
- [Leetcode](https://leetcode-cn.com/problems/kth-largest-element-in-a-stream/)
### 思路
- 最小堆
### 代码
```swift
class KthLargest {
var kth: Int
var minHeap: [Int]
init(_ k: Int, _ nums: [Int]) {
self.kth = k
self.minHeap = [Int]()
for i in 0..<nums.count {
// 最小堆未填满
if self.minHeap.count < self.kth {
self.minHeap.append(nums[i])
self.buildHeap(tree: &self.minHeap, n: self.minHeap.count)
} else { // 最小堆已填满
_ = self.add(nums[i])
}
}
}
func add(_ val: Int) -> Int {
// 最小堆未填满
if self.minHeap.count < self.kth {
self.minHeap.append(val)
self.buildHeap(tree: &self.minHeap, n: self.minHeap.count)
} else { // 最小堆已填满
if val > self.minHeap[0] {
self.minHeap[0] = val
heapify(tree: &self.minHeap, n: self.minHeap.count, i: 0)
}
}
return self.minHeap[0]
}
/// 对堆中的元素执行构建最小堆的操作
func heapify(tree: inout [Int], n: Int, i: Int) {
let left = 2*i+1
let right = 2*i+2
var min = i
if left < n && tree[left] < tree[min] {
min = left
}
if right < n && tree[right] < tree[min] {
min = right
}
if min != i {
tree.swapAt(i, min)
heapify(tree: &tree, n: n, i: min)
}
}
/// 将树中的元素构建成最小堆
func buildHeap(tree: inout [Int], n: Int) {
let lastNode = n - 1
var p = (lastNode-1)/2
while p >= 0 {
heapify(tree: &tree, n: n, i: p)
p -= 1
}
}
}
```
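补充说明:同样的最小堆思路,用 Python 的 `heapq` 也可以写得更简短(仅供对照参考,非本题提交代码):维护一个大小为 k 的最小堆,堆顶即为数据流中第 K 大的元素。

```python
import heapq

class KthLargest:
    """维护一个大小为 k 的最小堆,堆顶即数据流中第 K 大的元素。"""

    def __init__(self, k, nums):
        self.k = k
        self.heap = []
        for n in nums:
            self.add(n)

    def add(self, val):
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, val)      # 堆未满,直接入堆
        elif val > self.heap[0]:
            heapq.heapreplace(self.heap, val)   # 替换堆顶并重新下沉
        return self.heap[0]

# 对应 LeetCode 703 的示例:k=3, nums=[4,5,8,2]
kth = KthLargest(3, [4, 5, 8, 2])
print(kth.add(3))   # 4
print(kth.add(5))   # 5
print(kth.add(10))  # 5
print(kth.add(9))   # 8
print(kth.add(4))   # 8
```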
## Date
- Edit by muhlenXi on 2019-10-06 | 22.45 | 79 | 0.453786 | eng_Latn | 0.252991 |
877af28c9e3df785c96734d678c92289aca0c9f8 | 273 | md | Markdown | README.md | jolicode/SecurityBundle-avec-de-l-aspirine | e26937942845abca5a199d85fb74df8dad51bb15 | [
"MIT"
] | 1 | 2017-06-23T10:59:55.000Z | 2017-06-23T10:59:55.000Z | README.md | jolicode/SecurityBundle-avec-de-l-aspirine | e26937942845abca5a199d85fb74df8dad51bb15 | [
"MIT"
] | null | null | null | README.md | jolicode/SecurityBundle-avec-de-l-aspirine | e26937942845abca5a199d85fb74df8dad51bb15 | [
"MIT"
] | null | null | null | # Security Bundle avec de l'aspirine
Cette conférence a été présentée lors du [sfPot](http://afsy.fr/) de Mars 2013 par [Joël Wurtz](http://jolicode.com/equipe/joel-wurtz). Les slides sont visibles à l'adresse http://jolicode.github.com/SecurityBundle-avec-de-l-aspirine
| 54.6 | 233 | 0.769231 | fra_Latn | 0.895091 |
877be2db3d62a5a63f59ae05344db9cfd05ff0ba | 244 | md | Markdown | README.md | gnewton/gophemeral | 9fb4b4232e2b4b2b798ee0e3ea7a055e15e8035e | [
"BSD-3-Clause"
] | null | null | null | README.md | gnewton/gophemeral | 9fb4b4232e2b4b2b798ee0e3ea7a055e15e8035e | [
"BSD-3-Clause"
] | null | null | null | README.md | gnewton/gophemeral | 9fb4b4232e2b4b2b798ee0e3ea7a055e15e8035e | [
"BSD-3-Clause"
] | null | null | null | # gophemeral
* Records the creation and size changes of files during the lifespan of a process
* Useful for processes that create a lot of ephemeral files that are deleted before the process completes, like many science computing applications
| 48.8 | 147 | 0.815574 | eng_Latn | 0.999818 |
877e5d66c3dc8add14aca8cf803c36716a3e5752 | 824 | md | Markdown | README.md | RESSLab-Team/Dissipative_Embedded_Column_Base_Compute_Ke | 07aefa865b6657deda0192b279026b99c3dc5aae | [
"MIT"
] | null | null | null | README.md | RESSLab-Team/Dissipative_Embedded_Column_Base_Compute_Ke | 07aefa865b6657deda0192b279026b99c3dc5aae | [
"MIT"
] | null | null | null | README.md | RESSLab-Team/Dissipative_Embedded_Column_Base_Compute_Ke | 07aefa865b6657deda0192b279026b99c3dc5aae | [
"MIT"
] | null | null | null | # DESCRIPTION
DECB_stiffness.m is a MATLAB script that computes the elastic stiffness of dissipative embedded column base connections for given input parameters.
# LICENSE
This project is licensed under the MIT License - see the LICENSE.md file for details
# REFERENCES
[1] Inamasu, H., de Castro e Sousa, A. and Lignos, D.G. (2022). "Development and experimental validation of dissipative embedded column base connections for enhanced seismic performance of steel moment resisting frames" ASCE Journal of Structural Engineering. Vol. 148(3), pp.04021280, https://doi.org/10.1061/(ASCE)ST.1943-541X.0003259
[2] Inamasu, H. (2021). "Development of dissipative steel column-base connections for steel moment-resisting frames under seismic loading." PhD thesis, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.
| 58.857143 | 336 | 0.791262 | eng_Latn | 0.873439 |
877eebffd403f399e77c692077477cdcb2221dda | 124 | md | Markdown | _collection/chunhuazhou-build_container_on_shub.md | singularityhub/singularityhub-archive | dd1d471db0c4ac01998cb84bd1ab82e97c8dab65 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | _collection/chunhuazhou-build_container_on_shub.md | singularityhub/singularityhub-archive | dd1d471db0c4ac01998cb84bd1ab82e97c8dab65 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | _collection/chunhuazhou-build_container_on_shub.md | singularityhub/singularityhub-archive | dd1d471db0c4ac01998cb84bd1ab82e97c8dab65 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | ---
id: 1690
full_name: "chunhuazhou/build_container_on_shub"
images:
- "chunhuazhou-build_container_on_shub-latest"
---
| 17.714286 | 48 | 0.766129 | eng_Latn | 0.219693 |
877f19d38feb62df55d78e6bfad3fb55f446c728 | 1,795 | md | Markdown | front-end/webstorm.md | greedbell/blog | 525407372163d775db2c0d8adb433ad489069b35 | [
"MIT"
] | 21 | 2017-03-02T03:15:59.000Z | 2022-01-04T07:08:00.000Z | front-end/webstorm.md | greedbell/blog | 525407372163d775db2c0d8adb433ad489069b35 | [
"MIT"
] | null | null | null | front-end/webstorm.md | greedbell/blog | 525407372163d775db2c0d8adb433ad489069b35 | [
"MIT"
] | 3 | 2017-04-27T09:12:36.000Z | 2019-05-06T16:16:37.000Z | # WebStorm
## 生成命令行工具
```
Preferences > Tools > Create Command-line Launcher
```
## 自动生成jsdoc格式的注释
写好函数,输入 /** 加回车
## 快捷键
| 苹果符号 | 苹果键盘 | PC键盘 |
| :--- | :--- | :--- |
| ⌘ | Command | Windows |
| ⌥ | Option | Control |
| ⇧ | Shift | Shift |
| ⌃ | Ctrl | Alt |
| 快捷键 | 功能 |
| :---: | :---: |
| ⌘ + F12 | 打开 Structure |
| ⌘ + 1 | 打开 Project |
| ⌘ + ↓ | 打开 Code |
| ⌃ + F12 | 打开 Terminal |
## 配置 node.js
WebStorm > Preferences > Languages & Frameworks > JavaScript > Node.js and NPM > 选择指定的 node.js 版本,并 enable Core library.
## Live Template
模板
* [官方文档](https://www.jetbrains.com/help/webstorm/2016.2/live-templates.html)
### 创建模板
```
Webstorm > Preferences > Editor > Live Templates > +
```
### 分享模板
* [Sharing Live Templates](https://www.jetbrains.com/help/webstorm/2016.2/sharing-live-templates.html)
模板路径:
* Windows: `<your_user_home_directory>.WebStorm<version_number>\config\templates\<group_name>.xml`
* Linux: `~WebStorm<version>\config\templates\<group_name>.xml`
* OS X: `~/Library/Preferences/WebStorm<version>/templates\<group_name>.xml`
#### 分享方法一
直接分享 `<group_name>.xml` 文件,导入的时候保存到对应目录
#### 分享方法二
导出配置:
```
File > Export Settings > 只勾上 Live templates > 选择保存路径 > OK
```
导入配置:
```
File > Import Settings > 选择配置文件 > 勾上 Live templates > OK
```
## 文件模板
```
Webstorm > Preferences > Editor > File and Code Templates
```
## 写插件
* [IntelliJ Platform SDK Documentation](http://www.jetbrains.org/intellij/sdk/docs/index.html)
WebStorm 插件和 IntelliJ 插件一样
## 实时预览
1. chrome 安装 [JetBrains IDE Support](https://chrome.google.com/webstore/detail/jetbrains-ide-support/hmhgeddbohgjknpmjagkdomcpobmllji)
2. WebStorm > Run > Debug
### References
* [Using JetBrains Chrome Extension](https://www.jetbrains.com/help/webstorm/2016.3/using-jetbrains-chrome-extension.html)
| 19.301075 | 134 | 0.667967 | yue_Hant | 0.740349 |
87805f1a1a64a3c4428186daba927c5adedeb4a6 | 1,690 | md | Markdown | src/soft/soft068.md | pomu0325/kinokobooks | fe499a847a2469dccbadfbac11569035f4c88e7a | [
"CC-BY-2.0",
"CC-BY-3.0",
"CC0-1.0"
] | 21 | 2020-09-15T05:55:43.000Z | 2022-03-29T07:48:11.000Z | src/soft/soft068.md | pomu0325/kinokobooks | fe499a847a2469dccbadfbac11569035f4c88e7a | [
"CC-BY-2.0",
"CC-BY-3.0",
"CC0-1.0"
] | 5 | 2020-05-05T03:59:20.000Z | 2021-05-18T21:14:55.000Z | src/soft/soft068.md | pomu0325/kinokobooks | fe499a847a2469dccbadfbac11569035f4c88e7a | [
"CC-BY-2.0",
"CC-BY-3.0",
"CC0-1.0"
] | 1 | 2021-05-18T17:40:55.000Z | 2021-05-18T17:40:55.000Z | # 【68】ハードウェアの理解も必要{#e68}
<div class="author">カメル・ウィックラマナヤケ</div>
多くのソフトウェアアーキテクトにとって、ハードウェアの性能計画は少々苦手な分野ですが、アーキテクトの仕事の重要な一部であることに間違いはありません。ソフトウェア・アーキテクトがハードウェアについて適切な考えを持てない理由はいくつかありますが、特に大きいのは、ハードウェアについての理解不足と不明確な要件です。
私たちがハードウェアの検討に目をつぶってしまう最大の理由は、私たちがソフトウェア中心の考え方をしていて、ハードウェアの要求をつい見落としてしまうことにあります。高水準言語やソフトウェア・フレームワークによって私たちがハードウェアから自然に切り離されているということがそれに拍車をかけています。
要件がはっきりしないこともハードウェアの検討が不十分になる大きな理由です。要件は変わる場合がありますし、しっかりと理解されていない場合もあります。アーキテクチャーが発展すると、ハードウェアの問題も変化します。また、クライアントは、自らのユーザーベースやシステムの利用状況について十分に理解できていなかったり、予測できなかったりすることがあります。そして、ハードウェアは絶えず進化しています。過去のハードウェアの知識は、現在では通用しません。
ハードウェアについての専門知識がなければ、開発しようとしているシステムのハードウェア構成見積もりは、かなりの確率で外れます。一部のソフトウェア・アーキテクトは、見積もりが外れることを見越して安全率を高めに設定しています。しかし、そのような安全率は、別に客観的な評価やメソドロジーの基礎があってはじき出された数字ではありません。このような方法でハードウェアを選ぶと、たいていはハードウェアの能力が過度に高くなって、需要のピークになってもフル活用されません。そのため、システムが必要とする以上のハードウェアを顧客に無駄に買わせることになります。
ハードウェア計画がお粗末なものにならないようにするためにもっとも効果的なのは、インフラ・アーキテクトとの密接な連携です。インフラ・アーキテクトは、ソフトウェア・アーキテクトとは異なり、ハードウェアの性能計画の専門家なので、チームのメンバーとして参加してもらうべきです。しかし、すべてのソフトウェア・アーキテクトがインフラ・アーキテクトの力を借りるというぜいたくを享受できるわけではありません。その場合でも、ソフトウェア・アーキテクトがハードウェア計画で大きく道を踏み外さないようにするための方法はいくつかあります。
まず、過去の経験です。あなたは過去にシステムを実装したことがあるわけですし、たとえ当時はあと知恵だったとしても、ハードウェアの性能計画についてある程度の知識をつかんだはずです。
また、クライアントにこの問題を話して、ハードウェアの性能計画のために資金を残しておいてもらうようにすることもできるはずです。性能計画のために先に予算を立てておいた方が、必要以上にハードウェアを買うよりも、コスト的には効果的です。この場合、鍵を握っているのは水平スケーラビリティです。最初から買いすぎるのではなく、必要に応じてハードウェアを買い増ししていくのです。水平戦略を成功させるためには、ソフトウェア・アーキテクトは処理能力をコンスタントに計測する必要があります。また、パフォーマンスが予測できる環境で実行できるようにするために、ソフトウェア・コンポーネントを互いに切り離しておくことが大切です。
ハードウェアの性能計画は、ソフトウェア・アーキテクチャーと同じくらい重要です。インフラ・アーキテクトがいるかどうかにかかわらず、高い優先順位を与える必要があります。アーキテクトにとって、ビジネスのニーズにソフトウェアによるソリューションをリンクするのと同じように、ソフトウェアにハードウェアをリンクすることも大切な仕事の一部です。
| 84.5 | 301 | 0.923077 | jpn_Jpan | 0.988773 |
8780bb00451db5041ea372f3fefd80c1b1fd9ac7 | 3,249 | md | Markdown | docs/pads/frag-deinen-provider-kontaktdaten.md | pefi-78/frag-deinen-provider.de | 47a40a87dff6e1ae5446aa3e9f4acb0832dac6e7 | [
"Apache-2.0"
] | null | null | null | docs/pads/frag-deinen-provider-kontaktdaten.md | pefi-78/frag-deinen-provider.de | 47a40a87dff6e1ae5446aa3e9f4acb0832dac6e7 | [
"Apache-2.0"
] | null | null | null | docs/pads/frag-deinen-provider-kontaktdaten.md | pefi-78/frag-deinen-provider.de | 47a40a87dff6e1ae5446aa3e9f4acb0832dac6e7 | [
"Apache-2.0"
] | null | null | null | # Kontaktdaten der Provider
[TOC]
Um in dem Formular den User beim Erstellen des Anschreibens zu unterstützen, benötigen wir die Kontaktdaten aller Provider.
## Links
* Liste der Bundesnetzagentur aller gemeldeten Telekommunikationsfirmen (160 Seiten):
  https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Sachgebiete/Telekommunikation/Unternehmen_Institutionen/Anbieterpflichten/Meldepflicht/TKDiensteanbieterPDF.pdf?__blob=publicationFile&v=80
## Welche Daten
Wir benötigen voraussichtlich folgende Daten der Firmen:
* Firmenname (muss)
* KontaktMail (muss)
* Adresse (sollte)
* WebFormulare (kann)
* Fax (kann)
Anmerkung: die "Werte" in Klammern stellen dar, wie wichtig die Angabe für das Projekt ist.
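Nur als unverbindliche Python-Skizze (Funktions- und Variablennamen sind frei erfunden, kein Bestandteil des Projekts): eine kleine Prüfung, ob ein Provider-Eintrag die Muss-Felder `name` und `mail` enthält.

```python
# Annahme: die YAML-Datei wurde bereits (z.B. mit js-yaml oder PyYAML)
# in ein Dict der Form {provider_id: {feld: wert, ...}} geladen.
MUSS_FELDER = ("name", "mail")

def fehlende_muss_felder(eintrag):
    """Liefert die Liste der fehlenden Muss-Felder (leer = Eintrag ok)."""
    return [feld for feld in MUSS_FELDER if not eintrag.get(feld)]

daten = {
    "mogelcom": {"name": "MogelCom", "mail": "dsgvo@mogelcom.de"},
    "provider": {"name": "Provider GmbH"},  # mail fehlt -> unvollständig
}
for provider_id, eintrag in daten.items():
    fehlend = fehlende_muss_felder(eintrag)
    print(provider_id, "ok" if not fehlend else f"fehlende Felder: {fehlend}")
```

So lässt sich vor dem Einbinden ins Formular automatisch prüfen, welche Einträge noch unvollständig sind.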
## Datenstruktur
Wie sollen die Daten strukturiert sein, damit diese maschinell verarbeitet und so im Formular genutzt werden können.
Die verbreitetsten Formate in der Webtechnologie sind JSON und YAML. Ich halte YAML für das bessere Format, um Daten zu halten, da es Kommentare, mehrzeilige oder leere Strings sowie viele Variablentypen abbilden kann. Allerdings muss man bei der Einrückung per Leerzeichen statt Tab vorsichtig sein. Einige Tools wandeln ungefragt Tabs in Leerzeichen um, und man ist dann ständig am Nacharbeiten. Deshalb nutzen wir konsequent immer 2 Leerzeichen je Einrückung.
Links:
* Yaml Syntax: https://yaml.org/start.html
* Yaml Validator:
  * https://codebeautify.org/yaml-validator
  * http://www.yamllint.com/ (Achtung: hier werden Kommentare entfernt)
  * https://onlineyamltools.com/prettify-yaml
  * https://nodeca.github.io/js-yaml/ (bitte hiermit die Daten prüfen, da wir wahrscheinlich diese Lib einbinden)
### Beispiel
Im Folgenden sind 2 fiktive Beispiele dokumentiert. Im ersten Beispiel sind die Attribute und deren Verwendung in den Kommentaren beschrieben.
```yaml
# Kommentare des Bearbeiters sollten über den jeweiligen Eintrag stehen.
# z.B. Quelle https://mogelcom.de/help/dsgvo
# mogelcom: die eindeutige ID "mogelcom" des Providers
mogelcom:
  # date: wann wurde der Eintrag aufgenommen bzw. das letzte Mal geändert
  date: 2019-05-23
  # name: Der Name des Anbieters, wie er in der Anfrage erscheint.
  name: MogelCom
  # desc: Die Beschreibung des Anbieters,
  # damit der User auch den richtigen auswählen kann.
  desc: |
    Die MogelCom, aus Deutschland.
    Wir wollen nur ihr Bestes.
  # mail: Die Mailadresse, über die eine DSGVO-Anfrage eingereicht werden kann.
  mail: dsgvo@mogelcom.de
  # addr: Die vollständige Postadresse, welche als Empfänger für den
  # Postversand genutzt werden kann.
  addr: |
    MogelCom AG
    z.H. Hr. Mustermann
    Dorfstrasse 42
    12345 Hauptstaat
    Deutschland
  # web: URL zu dem Webformular, über das man die
  # DSGVO-Anfrage einreichen kann.
  web: https://mogelcom.de/dsgvo
  # fax: Die Telefonnummer, an die man die Anfrage per
  # Fax stellen kann.
  fax: +49 30 123456789
# Dies ist der nächste Provider mit der ID "provider"
provider:
  name: Provider GmbH
  desc: |
    Der Provider,
    deines Vertrauens
  mail: support@provider.de
  addr: |
    Provider
    Postfach 1234
    98765 Nirgendwo
    DSchland
| 37.344828 | 455 | 0.753155 | deu_Latn | 0.990516 |
87810663064f5a334809471b545eab1b515d2a5f | 4,430 | md | Markdown | articles/backup/backup-azure-arm-userestapi-managejobs.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/backup-azure-arm-userestapi-managejobs.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/backup-azure-arm-userestapi-managejobs.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Back-uptaken beheren met REST API
description: In dit artikel leert u hoe u back-up-en herstel taken van Azure Backup kunt bijhouden en beheren met behulp van REST API.
ms.topic: conceptual
ms.date: 08/03/2018
ms.assetid: b234533e-ac51-4482-9452-d97444f98b38
ms.openlocfilehash: ced0e0020fe955734bf6cc767480fbadd6eaffc1
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/09/2020
ms.locfileid: "88890277"
---
# <a name="track-backup-and-restore-jobs-using-rest-api"></a>Back-up-en herstel taken bijhouden met behulp van REST API
De Azure Backup-service activeert in verschillende scenario's taken die op de achtergrond worden uitgevoerd, zoals het activeren van back-ups, herstelbewerkingen en het uitschakelen van back-ups. Deze taken kunnen worden gevolgd aan de hand van hun id's.
## <a name="fetch-job-information-from-operations"></a>Taak gegevens ophalen uit bewerkingen
Een bewerking zoals het activeren van een back-up retourneert altijd een jobID. Bijvoorbeeld: de laatste reactie van een [trigger back-up rest API bewerking](backup-azure-arm-userestapi-backupazurevms.md#example-responses-for-on-demand-backup) is als volgt:
```http
{
"id": "cd153561-20d3-467a-b911-cc1de47d4763",
"name": "cd153561-20d3-467a-b911-cc1de47d4763",
"status": "Succeeded",
"startTime": "2018-09-12T02:16:56.7399752Z",
"endTime": "2018-09-12T02:16:56.7399752Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "41f3e94b-ae6b-4a20-b422-65abfcaf03e5"
}
}
```
De Azure VM-back-uptaak wordt aangeduid met het veld *jobId* en kan met een eenvoudige [*GET*-aanvraag](/rest/api/backup/jobdetails/) worden gevolgd.
## <a name="tracking-the-job"></a>De taak bijhouden
```http
GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupJobs/{jobName}?api-version=2019-05-13
```
De `{jobName}` is de 'jobId' die hierboven wordt vermeld. Het antwoord is altijd 200 OK, met het veld 'status' dat de huidige status van de taak aangeeft. Zodra deze 'Completed' of 'CompletedWithWarnings' is, toont de sectie 'extendedInfo' meer informatie over de taak.
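Ter illustratie een korte Python-schets (geen officieel voorbeeld; de namen en waarden hieronder zijn fictief) die de bovenstaande GET-URI opbouwt en de taakstatus uit het JSON-antwoord leest:

```python
import json

def job_status_uri(subscription_id, resource_group, vault_name, job_name):
    """Bouwt de GET-URI op waarmee een back-uptaak wordt gevolgd."""
    return (
        f"https://management.azure.com/Subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.RecoveryServices/vaults/{vault_name}"
        f"/backupJobs/{job_name}?api-version=2019-05-13"
    )

def lees_status(antwoord_tekst):
    """Leest het veld 'status' uit het 200 OK-antwoord (JobResource)."""
    taak = json.loads(antwoord_tekst)
    return taak["properties"]["status"]

# Voorbeeld met een ingekort antwoord zoals in dit artikel:
voorbeeld = '{"properties": {"status": "Completed"}}'
print(lees_status(voorbeeld))  # Completed
print(job_status_uri("00000000-0000-0000-0000-000000000000",
                     "mijn-resourcegroep", "abdemovault",
                     "7ddead57-bcb9-4269-ac31-6a1b57588700"))
```

Voor een echte aanroep is uiteraard een geldig Azure AD-token in de Authorization-header vereist.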
### <a name="response"></a>Antwoord
|Naam |Type |Beschrijving |
|---------|---------|---------|
|200 OK | [JobResource](/rest/api/backup/jobdetails/get#jobresource) | OK |
#### <a name="example-response"></a>Voorbeeld van een antwoord
Zodra de *Get* -URI is verzonden, wordt een antwoord van 200 (OK) geretourneerd.
```http
HTTP/1.1 200 OK
Pragma: no-cache
X-Content-Type-Options: nosniff
x-ms-request-id: e9702101-9da2-4681-bdf3-a54e17329a56
x-ms-client-request-id: ba4dff71-1655-4c1d-a71f-c9869371b18b; ba4dff71-1655-4c1d-a71f-c9869371b18b
Strict-Transport-Security: max-age=31536000; includeSubDomains
x-ms-ratelimit-remaining-subscription-reads: 14989
x-ms-correlation-request-id: e9702101-9da2-4681-bdf3-a54e17329a56
x-ms-routing-request-id: SOUTHINDIA:20180521T102317Z:e9702101-9da2-4681-bdf3-a54e17329a56
Cache-Control: no-cache
Date: Mon, 21 May 2018 10:23:17 GMT
Server: Microsoft-IIS/8.0
X-Powered-By: ASP.NET
{
"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Default-RecoveryServices-ResourceGroup-centralindia/providers/microsoft.recoveryservices/vaults/abdemovault/backupJobs/7ddead57-bcb9-4269-ac31-6a1b57588700",
"name": "7ddead57-bcb9-4269-ac31-6a1b57588700",
"type": "Microsoft.RecoveryServices/vaults/backupJobs",
"properties": {
"jobType": "AzureIaaSVMJob",
"duration": "00:20:23.0896697",
"actionsInfo": [
1
],
"virtualMachineVersion": "Compute",
"extendedInfo": {
"tasksList": [
{
"taskId": "Take Snapshot",
"duration": "00:00:00",
"status": "Completed"
},
{
"taskId": "Transfer data to vault",
"duration": "00:00:00",
"status": "Completed"
}
],
"propertyBag": {
"VM Name": "uttestvmub1",
"Backup Size": "2332 MB"
}
},
"entityFriendlyName": "uttestvmub1",
"backupManagementType": "AzureIaasVM",
"operation": "Backup",
"status": "Completed",
"startTime": "2018-05-21T08:35:40.9488967Z",
"endTime": "2018-05-21T08:56:04.0385664Z",
"activityId": "7df8e874-1d66-4f81-8e91-da2fe054811d"
}
}
```
| 39.553571 | 275 | 0.716253 | nld_Latn | 0.639912 |
87819c9ddb1278aef680a91e343ef9d6acce11cc | 718 | md | Markdown | _posts/2014-06-30-portfolio14.md | khj0704/khj0704.github.io | 6f6b9a724883d7236965a7540079f64e97a08b5f | [
"MIT"
] | null | null | null | _posts/2014-06-30-portfolio14.md | khj0704/khj0704.github.io | 6f6b9a724883d7236965a7540079f64e97a08b5f | [
"MIT"
] | null | null | null | _posts/2014-06-30-portfolio14.md | khj0704/khj0704.github.io | 6f6b9a724883d7236965a7540079f64e97a08b5f | [
"MIT"
] | null | null | null | ---
layout: post
project: 교원 학부모 앱
title: 교원 학부모 Android App
date: 2014-06-30 15:30:00
categories: posts
tags: portfolio
icon: Kyowon_ParentAppIcon.png
image: kyowon_parents_app.png
---
1) 개발 환경
- Language: Java
- Platform: Android
- DB: SQLite
- Tool: eclipse, RedMine, SVN
2) 프로젝트 개요
- 교원에서 출시한 ALL&G(올앤지)의 학부모 전용 서비스
- 학부모님들이 자녀의 ALL&G(올앤지) 패드 사용현황을 휴대폰으로 확인하고 제어할 수 있는 앱
3) 프로젝트 구현 기능
- 원격 교육 컨텐츠 제공 기능 (이미지, 동영상 컨텐츠 등)
- 부모가 자녀의 교육현황을 파악할 수 있는 기능
- SNS(페이스북, 카카오), 유튜브 연동 기능
- 웹 페이지 연동 기능
- 이미지 로드 및 REST API 연동 기능
- Google GCM Push 연동
4) 마켓 주소
- [Google Play](https://play.google.com/store/apps/details?id=kr.co.kyowon.kyowonparentapp&hl=ko){:target="_blank"} | 23.16129 | 118 | 0.654596 | kor_Hang | 1.000003 |
87826199f688d612caa959bcdcb07214068fdb2b | 2,418 | md | Markdown | _posts/OS/H/2015-01-20-OsHOL2.md | funRiceGenes/funRiceGenes.github.io | e27a14e23dd8c3d6127e38ee62240dc9e01008be | [
"MIT"
] | 4 | 2017-08-09T02:48:10.000Z | 2020-11-11T01:54:08.000Z | _posts/OS/H/2015-01-20-OsHOL2.md | funRiceGenes/funRiceGenes.github.io | e27a14e23dd8c3d6127e38ee62240dc9e01008be | [
"MIT"
] | 1 | 2020-05-31T13:03:01.000Z | 2020-06-01T01:47:14.000Z | _posts/OS/H/2015-01-20-OsHOL2.md | funRiceGenes/funRiceGenes.github.io | e27a14e23dd8c3d6127e38ee62240dc9e01008be | [
"MIT"
] | 6 | 2018-10-03T20:47:32.000Z | 2021-07-19T01:58:31.000Z | ---
layout: post
title: "OsHOL2"
description: ""
category: genes
tags:
---
* **Information**
+ Symbol: OsHOL2
+ MSU: [LOC_Os06g06040](http://rice.uga.edu/cgi-bin/ORF_infopage.cgi?orf=LOC_Os06g06040)
+ RAPdb: [Os06g0153900](http://rapdb.dna.affrc.go.jp/viewer/gbrowse_details/irgsp1?name=Os06g0153900)
* **Publication**
+ [Rice OsHOL1 and OsHOL2 proteins have S-adenosyl-L-methionine-dependent methyltransferase activities toward iodide ions](http://www.ncbi.nlm.nih.gov/pubmed?term=Rice OsHOL1 and OsHOL2 proteins have S-adenosyl-L-methionine-dependent methyltransferase activities toward iodide ions%5BTitle%5D), 2012, Plant Biotechnology.
+ [Arabidopsis HARMLESS TO OZONE LAYER Protein Methylates a Glucosinolate Breakdown Product and Functions in Resistance to Pseudomonas syringae pv. maculicola](http://www.ncbi.nlm.nih.gov/pubmed?term=Arabidopsis HARMLESS TO OZONE LAYER Protein Methylates a Glucosinolate Breakdown Product and Functions in Resistance to Pseudomonas syringae pv. maculicola%5BTitle%5D), 2009, Journal of Biological Chemistry.
* **Genbank accession number**
+ [NP_001056843](http://www.ncbi.nlm.nih.gov/nuccore/NP_001056843)
* **Key message**
* **Connection**
+ __OsHOL1__, __OsHOL2__, [Arabidopsis HARMLESS TO OZONE LAYER Protein Methylates a Glucosinolate Breakdown Product and Functions in Resistance to Pseudomonas syringae pv. maculicola](http://www.ncbi.nlm.nih.gov/pubmed?term=Arabidopsis HARMLESS TO OZONE LAYER Protein Methylates a Glucosinolate Breakdown Product and Functions in Resistance to Pseudomonas syringae pv. maculicola%5BTitle%5D), Our previous studies demonstrated that AtHOL1 and its homologs, AtHOL2 and AtHOL3, have S-adenosyl-l-methionine-dependent methyltransferase activities
+ __OsHOL1__, __OsHOL2__, [Arabidopsis HARMLESS TO OZONE LAYER Protein Methylates a Glucosinolate Breakdown Product and Functions in Resistance to Pseudomonas syringae pv. maculicola](http://www.ncbi.nlm.nih.gov/pubmed?term=Arabidopsis HARMLESS TO OZONE LAYER Protein Methylates a Glucosinolate Breakdown Product and Functions in Resistance to Pseudomonas syringae pv. maculicola%5BTitle%5D), Analyses with the T-DNA insertion Arabidopsis mutants (hol1, hol2, and hol3) revealed that only hol1 showed increased sensitivity to NCS(-) in medium and a concomitant lack of CH(3)SCN synthesis upon tissue damage
[//]: # * **Key figures**
| 80.6 | 610 | 0.787014 | eng_Latn | 0.461381 |
8782933f24befd53c8170f28d8e99dc1b0eb1050 | 2,113 | md | Markdown | dynamics-nav-app/purchasing-how-prioritize-vendors.md | MicrosoftDocs/nav-content.sv-se | f21ec05f780c4657e94217ddcd50625f4789d72b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T18:48:13.000Z | 2021-04-21T00:13:46.000Z | dynamics-nav-app/purchasing-how-prioritize-vendors.md | MicrosoftDocs/nav-content.sv-se | f21ec05f780c4657e94217ddcd50625f4789d72b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamics-nav-app/purchasing-how-prioritize-vendors.md | MicrosoftDocs/nav-content.sv-se | f21ec05f780c4657e94217ddcd50625f4789d72b | [
"CC-BY-4.0",
"MIT"
] | 3 | 2017-08-24T13:11:38.000Z | 2021-11-05T11:05:44.000Z | ---
title: "Tilldela en leverantör till en prioritetsnivå"
description: "Du kan tilldela nummer till leverantörer eller för att prioritera dem och underlätta betalningsförslag i Dynamics NAV."
documentationcenter:
author: SorenGP
ms.prod: dynamics-nav-2018
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: supplier, payment priority
ms.date: 03/29/2017
ms.author: sgroespe
ms.translationtype: HT
ms.sourcegitcommit: 4fefaef7380ac10836fcac404eea006f55d8556f
ms.openlocfilehash: 79458c1372c9f696a8331f2e7c83179dd42fb905
ms.contentlocale: sv-se
ms.lasthandoff: 10/16/2017
---
# <a name="how-to-prioritize-vendors"></a>Så här prioriterar du leverantörer
[!INCLUDE[d365fin](includes/d365fin_md.md)] används för att ta fram olika betalningsförslag, t.ex. betalningar som snart förfaller eller betalningar för vilka en rabatt kan erhållas. Mer information finns i [Så här föreslår du leverantörsbetalningar](payables-how-suggest-vendor-payments.md).
Först måste du prioritera leverantörerna genom att tilldela nummer till dem.
## <a name="to-prioritize-vendors"></a>Så här prioriterar du leverantörer:
1. Välj ikonen , ange **Leverantör** och välj sedan relaterad länk.
2. Välj lämplig leverantör och sedan **Redigera**.
3. Ange ett nummer i fältet **Prioritet**.
I [!INCLUDE[d365fin](includes/d365fin_md.md)] räknas lägst nummer (förutom 0) som högst prioritet. Om du t.ex. använder 1, 2 och 3 har alltså 1 högst prioritet.
Om du inte vill prioritera en leverantör lämnar du fältet **Prioritet** tomt. Om du sedan använder funktionen för betalningsförslag visas den här leverantören sist i listan, efter de leverantörer som har tilldelats ett prioritetsnummer. Du kan ange valfritt antal prioritetsnivåer alltefter behov.
## <a name="see-also"></a>Se även
[Ställa in inköp](purchasing-setup-purchasing.md)
[Hantera Leverantörsreskontra](payables-manage-payables.md)
[Arbeta med [!INCLUDE[d365fin](includes/d365fin_md.md)]](ui-work-product.md)
| 52.825 | 297 | 0.786086 | swe_Latn | 0.99298 |
8784a10904d89e51555f8f3d6a300e88d1371a4e | 109 | md | Markdown | README.md | HamedFathi/DDDToolkit | 89b05610876bd8f7d6c1c71a0db3bfac1e0571a4 | [
"MIT"
] | null | null | null | README.md | HamedFathi/DDDToolkit | 89b05610876bd8f7d6c1c71a0db3bfac1e0571a4 | [
"MIT"
] | null | null | null | README.md | HamedFathi/DDDToolkit | 89b05610876bd8f7d6c1c71a0db3bfac1e0571a4 | [
"MIT"
] | null | null | null | 
| 54.5 | 108 | 0.816514 | kor_Hang | 0.08661 |
87850acc4fb028bc1e3a180354d1e09d1fe54c1a | 1,686 | md | Markdown | results/innerfidelity/innerfidelity_harman_in-ear_2019v2/FLC Technology FLC8 C C Gn Strin/README.md | NekoAlosama/AutoEq-optimized | 354873974d31dea14aa95cf1b181c724554a19d3 | [
"MIT"
] | 2 | 2020-04-27T23:56:38.000Z | 2020-08-06T08:54:28.000Z | results/innerfidelity/innerfidelity_harman_in-ear_2019v2/FLC Technology FLC8 C C Gn Strin/README.md | NekoAlosama/AutoEq-optimized | 354873974d31dea14aa95cf1b181c724554a19d3 | [
"MIT"
] | 3 | 2020-07-21T22:10:04.000Z | 2020-11-22T15:07:43.000Z | results/innerfidelity/innerfidelity_harman_in-ear_2019v2/FLC Technology FLC8 C C Gn Strin/README.md | NekoAlosama/AutoEq-optimized | 354873974d31dea14aa95cf1b181c724554a19d3 | [
"MIT"
] | 1 | 2020-08-06T08:54:41.000Z | 2020-08-06T08:54:41.000Z | # FLC Technology FLC8 C C Gn Strin
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
In case of using parametric equalizer, apply preamp of **-9.79dB** and build filters manually
with these parameters. The first 5 filters can be used independently.
When using independent subset of filters, apply preamp of **-9.79 dB**.
| Type | Fc | Q | Gain |
|--------:|-----------:|-----:|----------:|
| Peaking | 22.30 Hz | 0.69 | 8.89 dB |
| Peaking | 54.70 Hz | 1.06 | 4.17 dB |
| Peaking | 3568.48 Hz | 0.48 | 26.51 dB |
| Peaking | 4791.52 Hz | 0.19 | -20.70 dB |
| Peaking | 5275.04 Hz | 4.22 | 8.22 dB |
| Peaking | 1313.13 Hz | 3.5 | -3.13 dB |
| Peaking | 2016.65 Hz | 2.36 | 3.22 dB |
| Peaking | 2812.03 Hz | 4.9 | -5.06 dB |
| Peaking | 4856.19 Hz | 0.2 | 1.10 dB |
| Peaking | 7252.24 Hz | 2.91 | -3.45 dB |
### Fixed Band EQs
In case of using fixed band (also called graphic) equalizer, apply preamp of **-11.71dB**
(if available) and set gains manually with these parameters.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|---------:|
| Peaking | 31.25 Hz | 1.41 | 10.53 dB |
| Peaking | 62.50 Hz | 1.41 | 4.51 dB |
| Peaking | 125.00 Hz | 1.41 | 0.60 dB |
| Peaking | 250.00 Hz | 1.41 | -2.60 dB |
| Peaking | 500.00 Hz | 1.41 | -0.49 dB |
| Peaking | 1000.00 Hz | 1.41 | -2.43 dB |
| Peaking | 2000.00 Hz | 1.41 | 1.25 dB |
| Peaking | 4000.00 Hz | 1.41 | 9.43 dB |
| Peaking | 8000.00 Hz | 1.41 | -8.00 dB |
| Peaking | 16000.01 Hz | 1.41 | -7.57 dB |
### Graphs
 | 42.15 | 98 | 0.562278 | eng_Latn | 0.665799 |
---
title: "Python Best Practices: Prefer raising Exceptions over returning error values"
date: 2019-01-28T01:00:00+00:00
template: development-post
tags:
image: "./media/blocks.jpg"
---
Usually, developers create helper functions, utilities, or services that signal failure through error codes or Null/None values (depending on the language).
In the next section, we will enumerate why this is a bad practice that should be avoided at all costs. Programming languages usually offer a special, dedicated way to work with errors. In the case of Python, that is exceptions.
## Why prefer exceptions instead of special values for error handling:
### 1. Exceptions are built-in, meaning they are a language construct designed for handling errors, and the interpreter treats them as exceptional situations
### 2. Numerical values for errors require additional knowledge to understand what's happening
### 3. Exceptions are typed, so they are OO friendly: extension, polymorphism, overloading
### 4. Exceptions provide more information than error codes or None values
### 5. None values are evaluated as False in Python
### 6. Error values make the code more verbose
### 7. Exceptions can be propagated
### 8. Exceptions stop the execution of the code, so they are safer
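A minimal sketch of the two styles side by side (the `UserNotFoundError` class and the lookup functions are illustrative, not taken from any real code base):

```python
class UserNotFoundError(Exception):
    """Raised when no user matches the given id."""


USERS = {1: "ada"}


# Error-value style: the caller must remember to check for None,
# and a forgotten check silently propagates a bogus value.
def find_user_or_none(user_id):
    return USERS.get(user_id)


# Exception style: failure cannot be silently ignored, and the
# error carries a typed, descriptive message that can propagate.
def find_user(user_id):
    try:
        return USERS[user_id]
    except KeyError:
        raise UserNotFoundError(f"no user with id {user_id}") from None
```

With the exception style, forgetting to handle the failure stops execution with a clear traceback instead of letting `None` leak into later computations.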
# Implementation sketch for Bazel-managed Postgres
This is a sketch of how to build and run Postgres on Bazel for Linux x64. As a
sketch, it's not an end-to-end solution; you'll need to glue it together for
your specific environment. For Darwin, I downloaded the prebuilt Homebrew
binaries instead of building.
We compile the Postgres source into a mostly statically-linked Linux binary.
Postgres is quite difficult to compile completely statically, so we bundle all
necessary libs and rely on the user setting LD_LIBRARY_PATH.
We apply a small patch, [allow_root.patch](allow_root.patch) to disable Postgres
binaries from checking for UID 0, the root user. We want to allow root users to
run Postgres because we'll run postgres mostly in Docker containers. We don't
want to have to create a postgres user just to appease Postgres.
To invoke the Postgres binaries, you must set the LD_LIBRARY_PATH so Postgres
will link against the bundled version of glibc.
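For example, an invocation of the bundled binaries might look like the following. The paths are illustrative placeholders, not output of this repository's build:

```shell
# Substitute PG_DIST with wherever your build placed the postgres
# binary and its bundled shared libraries.
PG_DIST=/opt/bazel-postgres/dist
export LD_LIBRARY_PATH="$PG_DIST/lib"

"$PG_DIST/bin/initdb" -D /tmp/pgdata
"$PG_DIST/bin/postgres" -D /tmp/pgdata
```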
# mazer_
# Note-Taker
An application that can be used to write and save notes.
## Table Of Content
* [General Info](#general-info)
* [Technologies](#technologies)
* [Installation](#installation)
* [Usage](#usage)
* [License](#license)
* [Questions](#questions)
## General Info
An application that can be used to write and save notes. This application will use an Express.js back-end and will save and retrieve note data from a JSON file. [Link to deployed application](https://fast-hamlet-04025.herokuapp.com/)
<br>
Image showcasing the application
<img src=./public/assets/images/screenshot.png>
## Technologies
Project is created with
* [Html](https://html.com/)
* [Css](https://developer.mozilla.org/en-US/docs/Web/CSS)
* [Bootstrap](https://getbootstrap.com/)
* [Javascript](https://www.javascript.com/)
* [Node.js](https://nodejs.org/en/)
* [Express](https://expressjs.com/)
* [uuid](https://www.npmjs.com/package/uuid)
## Installation
To get started clone this repository using
<br>
```terminal
git clone git@github.com:BennAsabir/note-taker.git
```
Install dependencies
```terminal
npm init -y
```
```terminal
npm install express
```
and
```terminal
npm install uuid
```
to start running application simply input
```terminal
node server.js or npm start
```
once all that is done navigate to - http://localhost:3000 to begin!
## Usage
The application is used for managing a task list. Create a new task by clicking on the pen icon and typing out the title and description of the task, then save the task by clicking the save icon. The task will be saved in the left-hand column. You can also delete a task by clicking the trash icon.
## License
This repository is licensed under the MIT license.
## Questions
Questions about this repository? Please contact me at [Benasabir@gmail.com](mailto:Benasabir@gmail.com). View more of my work in GitHub at [BennAsabir](https://github.com/BennAsabir)
| 31.533333 | 288 | 0.744186 | eng_Latn | 0.887671 |
---
title: "Control Number in ISA and IEA do not match | Microsoft Docs"
ms.custom: ""
ms.date: "06/08/2017"
ms.prod: "biztalk-server"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 6f9091ea-460b-464b-acd5-8dc0488b61e5
caps.latest.revision: 9
author: "MandiOhlinger"
ms.author: "mandia"
manager: "anneta"
---
# Control Number in ISA and IEA do not match
## Details
| | |
|-----------------|----------------------------------------------------------------------------------------|
| Product Name | [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] |
| Product Version | [!INCLUDE[btsEDIVersion](../includes/btsediversion-md.md)] |
| Event ID | - |
| Event Source | [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] EDI |
| Component | AS2 Engine |
| Symbolic Name | - |
| Message Text | Control Number in ISA and IEA do not match |
## Explanation
This Error/Warning/Information event indicates that the AS2 receive pipeline could not process the incoming interchange because the interchange control numbers contained in the ISA13 and IEA02 fields of the interchange do not have the same value.
## User Action
To resolve this error, make sure that the interchange control numbers contained in the ISA13 and IEA02 fields of the interchange have the same value, and then have the interchange resent.
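As an illustration only (not part of BizTalk), a quick way to verify that the two fields agree in a raw X12 interchange might look like this in Python. The separator characters (`*` for elements, `~` for segments) and the sample data are assumptions:

```python
def interchange_control_numbers(edi_text):
    """Return (isa13, iea02) from a raw X12 interchange.

    Assumes '*' as the element separator and '~' as the segment
    terminator; in real interchanges the ISA segment defines these.
    """
    segments = [s.strip() for s in edi_text.split("~") if s.strip()]
    isa13 = iea02 = None
    for seg in segments:
        elements = seg.split("*")
        if elements[0] == "ISA":
            # ISA13 is the interchange control number (14th element).
            isa13 = elements[13]
        elif elements[0] == "IEA":
            # IEA02 must repeat the same control number.
            iea02 = elements[2]
    return isa13, iea02


def control_numbers_match(edi_text):
    isa13, iea02 = interchange_control_numbers(edi_text)
    return isa13 is not None and isa13 == iea02
```

A pre-flight check like this on outgoing interchanges can catch the mismatch before the AS2 receive pipeline rejects the message.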
# UI Test
This folder contains the configuration files to test custom views.
To execute the tests, run:
- `npm install`
- `bower install`
- `npm test`
---
external help file: System.Management.Automation.dll-Help.xml
keywords: powershell,cmdlet
Locale: en-US
Module Name: Microsoft.PowerShell.Core
ms.date: 08/23/2019
online version: https://docs.microsoft.com/powershell/module/microsoft.powershell.core/get-help?view=powershell-5.1&WT.mc_id=ps-gethelp
schema: 2.0.0
title: Get-Help
ms.openlocfilehash: a9a7aa7730d920bb78e0ae64daa50e5a28921163
ms.sourcegitcommit: b7ff031a12afd04910aeb98345ebee92f5845b0c
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/28/2020
ms.locfileid: "93219003"
---
# Get-Help
## 概要
PowerShell コマンドと概念に関する情報を表示します。
## SYNTAX
### AllUsersView (既定値)
```
Get-Help [[-Name] <String>] [-Path <String>] [-Category <String[]>] [-Component <String[]>]
[-Functionality <String[]>] [-Role <String[]>] [-Full] [<CommonParameters>]
```
### DetailedView
```
Get-Help [[-Name] <String>] [-Path <String>] [-Category <String[]>] [-Component <String[]>]
[-Functionality <String[]>] [-Role <String[]>] -Detailed [<CommonParameters>]
```
### Examples
```
Get-Help [[-Name] <String>] [-Path <String>] [-Category <String[]>] [-Component <String[]>]
[-Functionality <String[]>] [-Role <String[]>] -Examples [<CommonParameters>]
```
### Parameters
```
Get-Help [[-Name] <String>] [-Path <String>] [-Category <String[]>] [-Component <String[]>]
[-Functionality <String[]>] [-Role <String[]>] -Parameter <String> [<CommonParameters>]
```
### Online
```
Get-Help [[-Name] <String>] [-Path <String>] [-Category <String[]>] [-Component <String[]>]
[-Functionality <String[]>] [-Role <String[]>] -Online [<CommonParameters>]
```
### ShowWindow
```
Get-Help [[-Name] <String>] [-Path <String>] [-Category <String[]>] [-Component <String[]>]
[-Functionality <String[]>] [-Role <String[]>] -ShowWindow [<CommonParameters>]
```
## DESCRIPTION

The `Get-Help` cmdlet displays information about PowerShell concepts and commands, including cmdlets, functions, Common Information Model (CIM) commands, workflows, providers, aliases, and scripts.

To get help for a PowerShell cmdlet, type `Get-Help` followed by the cmdlet name, such as: `Get-Help Get-Process`.

Conceptual help articles in PowerShell begin with **about_**, such as **about_Comparison_Operators**. To see all **about_** articles, type `Get-Help about_*`. To see a particular article, type `Get-Help about_<article-name>`, such as `Get-Help about_Comparison_Operators`.

To get help for a PowerShell provider, type `Get-Help` followed by the provider name. For example, to get help for the Certificate provider, type `Get-Help Certificate`.

You can also type `help` or `man`, which displays one screen of text at a time, or `<cmdlet-name> -?`, which is identical to `Get-Help` but only works for cmdlets.

`Get-Help` gets the help content that it displays from help files on your computer. Without the help files, `Get-Help` displays only basic information about cmdlets. Some PowerShell modules include help files. Beginning in PowerShell 3.0, the modules that come with the Windows operating system don't include help files. To download or update the help files for a module in PowerShell 3.0, use the `Update-Help` cmdlet.

You can also view the PowerShell help documents online at Microsoft Docs. To get the online version of a help file, use the **Online** parameter, such as: `Get-Help Get-Process -Online`. To read all of the PowerShell documentation, see the Microsoft Docs [PowerShell documentation](/powershell).

If you type the exact name of a help article, or a word unique to an article, `Get-Help` displays the article's contents. If you specify the exact name of a command alias, `Get-Help` displays the help for the original command. If you enter a word or word pattern that appears in several help article titles, `Get-Help` displays a list of the matching titles. If you enter any text that doesn't appear in any help article titles, `Get-Help` displays a list of articles that include that text in their contents.

`Get-Help` can get help articles for all supported languages and locales. `Get-Help` first looks for help files in the locale set for Windows, then in the parent locale, such as **pt** for **pt-BR**, and then in a fallback locale. Beginning in PowerShell 3.0, if `Get-Help` doesn't find help in the fallback locale, it looks for help articles in English, **en-US**, before it returns an error message or displays autogenerated help.

For information about the symbols that `Get-Help` displays in the command syntax diagram, see [about_Command_Syntax](./About/about_Command_Syntax.md).

For information about parameter attributes, such as **Required** and **Position**, see [about_Parameters](./About/about_Parameters.md).

> [!NOTE]
> In PowerShell 3.0 and PowerShell 4.0, `Get-Help` can't find **About** articles in modules unless the module is imported into the current session. This is a known issue. To get **About** articles in a module, import the module, either by using the `Import-Module` cmdlet or by running a cmdlet that's included in the module.
## EXAMPLES

### Example 1: Display basic help information about a cmdlet

These examples display basic help information about the `Format-Table` cmdlet.
```powershell
Get-Help Format-Table
Get-Help -Name Format-Table
Format-Table -?
```
`Get-Help <cmdlet-name>` is the simplest and default syntax of the `Get-Help` cmdlet. The **Name** parameter can be omitted.

The syntax `<cmdlet-name> -?` works only for cmdlets.

### Example 2: Display basic information one page at a time

These examples display basic help information about the `Format-Table` cmdlet one page at a time.
```powershell
help Format-Table
man Format-Table
Get-Help Format-Table | Out-Host -Paging
```
`help` is a function that runs the `Get-Help` cmdlet internally and displays the result one page at a time.

`man` is an alias for the `help` function.

`Get-Help Format-Table` sends the object down the pipeline. `Out-Host -Paging` receives the output from the pipeline and displays it one page at a time. For more information, see [Out-Host](Out-Host.md).

### Example 3: Display more information for a cmdlet

These examples display more detailed help information about the `Format-Table` cmdlet.
```powershell
Get-Help Format-Table -Detailed
Get-Help Format-Table -Full
```
The **Detailed** parameter displays the help article's detailed view, which includes parameter descriptions and examples.

The **Full** parameter displays the help article's full view, which includes parameter descriptions, examples, input and output object types, and additional notes.

The **Detailed** and **Full** parameters are effective only for commands that have help files installed on the computer. They aren't effective for the conceptual (**about_**) help articles.

### Example 4: Display selected parts of a cmdlet by using parameters

These examples display selected parts of the `Format-Table` cmdlet help.
```powershell
Get-Help Format-Table -Examples
Get-Help Format-Table -Parameter *
Get-Help Format-Table -Parameter GroupBy
```
The **Examples** parameter displays the help file's **NAME** and **SYNOPSIS** sections, and all the Examples. You can't specify an Example number because the **Examples** parameter is a switch parameter.

The **Parameter** parameter displays only the descriptions of the specified parameters. If you specify only the asterisk (`*`) wildcard character, it displays the descriptions of all parameters.

When **Parameter** specifies a parameter name, such as **GroupBy**, information about that parameter is shown.

These parameters aren't effective for the conceptual (**about_**) help articles.

### Example 5: Display the online version of help

This example displays the online version of the help article for the `Format-Table` cmdlet in your default web browser.
```powershell
Get-Help Format-Table -Online
```
### Example 6: Display help about the help system

When you run the `Get-Help` cmdlet without parameters, it displays information about the PowerShell help system.
```powershell
Get-Help
```
### Example 7: Display available help articles

This example displays a list of all help articles available on your computer.
```powershell
Get-Help *
```
### Example 8: Display a list of conceptual articles

This example displays a list of the conceptual articles included in PowerShell help. All of these articles begin with the characters **about_**. To display a particular help file, type `Get-Help \<about_article-name\>`, for example, `Get-Help about_Signing`.

Only the conceptual articles that have help files installed on your computer are displayed. For information about downloading and installing help files in PowerShell 3.0, see [Update-Help](Update-Help.md).
```powershell
Get-Help about_*
```
### Example 9: Search for a word in cmdlet help

This example shows how to search for a word in a cmdlet help article.
```powershell
Get-Help Add-Member -Full | Out-String -Stream | Select-String -Pattern Clixml
```
```Output
the Export-Clixml cmdlet to save the instance of the object, including the additional members...
can use the Import-Clixml cmdlet to re-create the instance of the object from the information...
Export-Clixml
Import-Clixml
```
`Get-Help` uses the **Full** parameter to get help information for `Add-Member`. The **MamlCommandHelpInfo** object is sent down the pipeline. `Out-String` uses the **Stream** parameter to convert the object into a string. `Select-String` uses the **Pattern** parameter to search the string for **Clixml**.

### Example 10: Display a list of articles that include a word

This example displays a list of articles that include the word **remoting**.

When you enter a word that doesn't appear in any article title, `Get-Help` displays a list of articles that include that word.
```powershell
Get-Help -Name remoting
```
```Output
Name Category Module Synopsis
---- -------- ------ --------
Install-PowerShellRemoting.ps1 External Install-PowerShellRemoting.ps1
Disable-PSRemoting Cmdlet Microsoft.PowerShell.Core Prevents remote users...
Enable-PSRemoting Cmdlet Microsoft.PowerShell.Core Configures the computer...
```
### Example 11: Display provider-specific help

This example shows two ways of getting the provider-specific help for `Get-Item`. These commands get help that explains how to use the `Get-Item` cmdlet in the **DataCollection** node of the PowerShell SQL Server provider.

The first example uses the `Get-Help` **Path** parameter to specify the SQL Server provider's path.

Because the provider's path is specified, you can run the command from any path location.

The second example uses `Set-Location` to navigate to the SQL Server provider's path. From that location, the **Path** parameter isn't needed for `Get-Help` to get the provider-specific help.
```powershell
Get-Help Get-Item -Path SQLSERVER:\DataCollection
```
```Output
NAME
Get-Item
SYNOPSIS
Gets a collection of Server objects for the local computer and any computers
to which you have made a SQL Server PowerShell connection.
...
```
```powershell
Set-Location SQLSERVER:\DataCollection
SQLSERVER:\DataCollection> Get-Help Get-Item
```
```Output
NAME
Get-Item
SYNOPSIS
Gets a collection of Server objects for the local computer and any computers
to which you have made a SQL Server PowerShell connection.
...
```
### Example 12: Display help for a script

This example gets help for the `MyScript.ps1` script. For information about how to write help for your functions and scripts, see [about_Comment_Based_Help](./About/about_Comment_Based_Help.md).
```powershell
Get-Help -Name C:\PS-Test\MyScript.ps1
```
## PARAMETERS
### -Category

Displays help only for items in the specified category and their aliases. Conceptual articles are in the **HelpFile** category.

The acceptable values for this parameter are as follows:

- Alias
- Cmdlet
- Provider
- General
- FAQ
- Glossary
- HelpFile
- ScriptCommand
- Function
- Filter
- ExternalScript
- All
- DefaultHelp
- Workflow
- DscResource
- Class
- Configuration
```yaml
Type: System.String[]
Parameter Sets: (All)
Aliases:
Accepted values: Alias, Cmdlet, Provider, General, FAQ, Glossary, HelpFile, ScriptCommand, Function, Filter, ExternalScript, All, DefaultHelp, Workflow, DscResource, Class, Configuration
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Component

Displays commands with the specified component value, such as **Exchange**. Enter a component name.

Wildcard characters are permitted. This parameter has no effect on displays of conceptual (**about_**) help.
```yaml
Type: System.String[]
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: True
```
### -Detailed
Adds parameter descriptions and examples to the basic help display. This parameter is effective only when the help files are installed on the computer. It has no effect on displays of conceptual (**about_**) help.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: DetailedView
Aliases:
Required: True
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -Examples
Displays only the name, synopsis, and examples. To display only the examples, type `(Get-Help \<cmdlet-name\>).Examples`.

This parameter is effective only when the help files are installed on the computer. It has no effect on displays of conceptual (**about_**) help.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: Examples
Aliases:
Required: True
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -Full
Displays the entire help article for a cmdlet. **Full** includes parameter descriptions and attributes, examples, input and output object types, and additional notes.

This parameter is effective only when the help files are installed on the computer. It has no effect on displays of conceptual (**about_**) help.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: AllUsersView
Aliases:
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -Functionality

Displays help for items with the specified functionality. Enter the functionality. Wildcard characters are permitted. This parameter has no effect on displays of conceptual (**about_**) help.
```yaml
Type: System.String[]
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: True
```
### -Name
Gets help about the specified command or concept. Enter the name of a cmdlet, function, provider, script, or workflow, such as `Get-Member`, a conceptual article name, such as `about_Objects`, or an alias, such as `ls`. Wildcard characters are permitted in cmdlet and provider names, but you can't use wildcard characters to find the names of function help and script help articles.

To get help for a script that isn't located in a path that's listed in the `$env:Path` environment variable, type the script's path and file name.

If you enter the exact name of a help article, `Get-Help` displays the article's contents.

If you enter a word or word pattern that appears in several help article titles, `Get-Help` displays a list of the matching titles.

If you enter any text that doesn't match any help article titles, `Get-Help` displays a list of articles that include that text in their contents.

The names of conceptual articles, such as `about_Objects`, must be entered in English, even in non-English versions of PowerShell.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: True
```
### -Online
Displays the online version of a help article in the default browser. This parameter is valid only for cmdlet, function, workflow, and script help articles. You can't use the **Online** parameter with `Get-Help` in a remote session.

For information about supporting this feature in help articles that you write, see [about_Comment_Based_Help](./About/about_Comment_Based_Help.md), [Supporting Online Help](/powershell/scripting/developer/module/supporting-online-help), and [Writing Help for PowerShell Cmdlets](/powershell/scripting/developer/help/writing-help-for-windows-powershell-cmdlets).
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: Online
Aliases:
Required: True
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -Parameter
Displays only the detailed descriptions of the specified parameters. Wildcards are permitted. This parameter has no effect on displays of conceptual (**about_**) help.
```yaml
Type: System.String
Parameter Sets: Parameters
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: True
```
### -Path
Gets help that explains how the cmdlet works in the specified provider path. Enter a PowerShell provider path.

This parameter gets a customized version of a cmdlet help article that explains how the cmdlet works in the specified PowerShell provider path. This parameter is effective only for help about a provider cmdlet, and only when the provider includes a custom version of the provider cmdlet help article in its help file. To use this parameter, install the help file for the module that includes the provider.

To see the custom cmdlet help for a provider path, go to the provider path location and enter a `Get-Help` command or, from any path location, use the **Path** parameter of `Get-Help` to specify the provider path. You can also find custom cmdlet help online in the provider help section of the help articles.

For more information about PowerShell providers, see [about_Providers](./About/about_Providers.md).
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: True
```
### -Role

Displays help customized for the specified user role. Enter a role. Wildcard characters are permitted.

Enter the role that the user plays in an organization. Some cmdlets display different text in their help files based on the value of this parameter. This parameter has no effect on help for the core cmdlets.
```yaml
Type: System.String[]
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: True
```
### -ShowWindow
Displays the help topic in a window for easier reading. The window includes a **Find** search feature and a **Settings** box that lets you set options for the display, including an option to display only selected sections of a help topic.

The **ShowWindow** parameter supports help topics for commands (cmdlets, functions, CIM commands, workflows, scripts) and conceptual **About** articles. It does not support provider help.

This parameter was introduced in PowerShell 3.0.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: ShowWindow
Aliases:
Required: True
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters

This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS

### None

You can't send objects down the pipeline to `Get-Help`.
## OUTPUTS

### ExtendedCmdletHelpInfo

If you run `Get-Help` on a command that doesn't have a help file, `Get-Help` returns an **ExtendedCmdletHelpInfo** object that represents autogenerated help.

### System.String

If you get a conceptual help article, `Get-Help` returns it as a string.

### MamlCommandHelpInfo

If you get a command that has a help file, `Get-Help` returns a **MamlCommandHelpInfo** object.
## NOTES

PowerShell 3.0 doesn't include help files. To download and install the help files that `Get-Help` reads, use the `Update-Help` cmdlet. You can use the `Update-Help` cmdlet to download and install help files for the core commands that come with PowerShell and for any modules that you install. You can also use it to update the help files so that the help on your computer is never outdated.

You can also read the help articles about the commands that come with PowerShell online, starting at [Getting Started with Windows PowerShell](/powershell/scripting/getting-started/getting-started-with-windows-powershell).

`Get-Help` displays help in the locale set for the Windows operating system or in the fallback language for that locale. If you don't have help files for the primary or fallback locale, `Get-Help` behaves as if there are no help files on the computer. To get help for a different locale, use **Region and Language** in Control Panel to change the settings. On Windows 10, use **Settings**, **Time & Language**.

A full view of help includes a table of information about the parameters. The table includes the following fields:

- **Required**. Indicates whether the parameter is required (true) or optional (false).
- **Position**. Indicates whether the parameter is named or positional (numeric). Positional parameters must appear in a specified place in the command.
- **Named** indicates that the parameter name is required, but that the parameter can appear anywhere in the command.
- **Numeric** indicates that the parameter name is optional, but when the name is omitted, the parameter must be in the place specified by the number. For example, `2` indicates that when the parameter name is omitted, the parameter must be the second or only unnamed parameter in the command. When the parameter name is used, the parameter can appear anywhere in the command.
- **Default value**. The parameter value or default behavior that PowerShell uses if you don't include the parameter in the command.
- **Accepts pipeline input**. Indicates whether you can (true) or can't (false) send objects to the parameter through the pipeline. **By Property Name** means that the pipeline object must have a property that has the same name as the parameter name.
- **Accepts wildcard characters**. Indicates whether the value of a parameter can include wildcard characters, such as an asterisk (`*`) or question mark (`?`).
## RELATED LINKS
[about_Command_Syntax](About/about_Command_Syntax.md)
[about_Comment_Based_Help](./About/about_Comment_Based_Help.md)
[Get-Command](Get-Command.md)
[Supporting Updatable Help](/powershell/scripting/developer/module/supporting-updatable-help)
[Update-Help](Update-Help.md)
[Writing Comment-Based Help Topics](/powershell/scripting/developer/help/writing-comment-based-help-topics)

[Writing Help for PowerShell Cmdlets](/powershell/scripting/developer/help/writing-help-for-windows-powershell-cmdlets)
### 1. Release checks
This is an automated testing tool for projects that were generated by the Aurelia-CLI. It tests all "au" commands for every project.
1. open the terminal, go into an empty directory and run `au generate-skeletons`
2. from the CLI repository folder, run `gulp release-check --path C:/Development/Aurelia/TestApps --latest-cli-url aurelia/cli#master`, substituting the path with the correct one.
* Note, on Windows, run PowerShell with "run as administrator".
3. Select for which projects you would like to run the tests.
* To test all projects automatically, add `--all` to previous command.
* To test subset projects automatically, add `--select requirejs,babel` (only tests projects using requirejs and babel) to previous command.
4. Wait until tests complete
5. in the CLI repository folder there is now a release-checks-results folder containing the output and other artifacts, such as screenshots. Check these screenshots to make sure that pages have rendered correctly
### Add new tests
In the build/tasks/release-checks folder is a suite-steps.js file. This file contains a collection of tests per project. New tests can be added to this file.
### Todo
- Tests for dotnet projects (the tool can run dotnet new, but dotnet does not serve the index.html file that exists in wwwroot)
### 2. Prepare release
1. run `gulp prepare-release --bump major|minor|patch|prerelease`. This will update aurelia-cli version, update change log, update cli version in `lib/dependencies.json`.
2. do `npm publish`.
---
layout: post
title: "PostgreSQL Weekly News - January 16, 2011"
author: "chl"
categories: [PostgreSQL Weekly News]
redirect_from: "index.php?post/2011/01/19/Nouvelles-hebdomadaires-de-PostgreSQL-16-janvier-2011"
---
<p>La dernière <em>commitfest</em> pour la 9.1 a commencé. Lancez-vous et participez à la relecture de ces patchs !</p>
<p>Dans la version 9.1, PostgreSQL bénéficiera d'améliorations substantielles sur le mode de transaction SERIALIZABLE. Êtes-vous utilisateur de SERIALIZABLE ? L'équipe de développement de PostgreSQL a besoin de retours :
<a target="_blank" href="http://www.postgresql.org/community/">http://www.postgresql.org/community/</a></p>
<p><strong>Les nouveautés des produits dérivés</strong></p>
<ul>
<li>pgbouncer 1.4, un gestionnaire de connexion léger :
<a target="_blank" href="http://pgfoundry.org/projects/pgbouncer/">http://pgfoundry.org/projects/pgbouncer/</a></li>
<li>repmgr 1.0.0, un système de gestion pour le <em>Hot Standby</em> et la <em>Streaming Replication</em> :
<a target="_blank" href="http://projects.2ndquadrant.com/repmgr">http://projects.2ndquadrant.com/repmgr</a></li>
</ul>
<p><strong>Offres d'emplois autour de PostgreSQL en janvier</strong></p>
<ul>
<li>Internationales :
<a target="_blank" href="http://archives.postgresql.org/pgsql-jobs/2011-01/threads.php">http://archives.postgresql.org/pgsql-jobs/2011-01/threads.php</a>;</li>
<li>Francophones :
<a target="_blank" href="http://forums.postgresql.fr/viewforum.php?id=4">http://forums.postgresql.fr/viewforum.php?id=4</a>.</li>
</ul>
<p><strong>PostgreSQL Local</strong></p>
<ul>
<li>Selena Deckelmann parlera de la communauté et du développement PostgreSQL le 7 février 2010 à 16h, à l'université d'état de l'Oregon à Corvallis.</li>
<li>L'appel à conférenciers pour l'annuel "<em>Prague PostgreSQL Developers' Day</em>", 4ème édition, est lancé. L'événement sera tenu le 10 février 2011 à l'<em>Universitas Carolinas</em> :
<a target="_blank" href="http://archives.postgresql.org/pgsql-announce/2010-12/msg00009.php">http://archives.postgresql.org/pgsql-announce/2010-12/msg00009.php</a></li>
<li>The PostgreSQLFr call for projects is open. Projects must involve PostgreSQL and the French-speaking community. Email appel-projets-2010 (AT) postgresql (DOT) fr.
<a target="_blank" href="http://www.postgresql.fr/appel_a_projets_2010:call_for_projects">http://www.postgresql.fr/appel_a_projets_2010:call_for_projects</a></li>
<li>A PGDay.US is part of this year's <em>Southern California Linux Exposition (SCALE)</em>, held at the LAX Hilton hotel in Los Angeles, California, on Friday, February 25, 2011. Submit your talk proposals to pgday-submissions (AT) googlegroups (DOT) com.</li>
<li>PostgreSQL Conference East 2011: New York City, March 22-25:
<a target="_blank" href="http://www.postgresqlconference.org">http://www.postgresqlconference.org</a></li>
<li>The <em>Open Database Camp</em> will take place May 7-9, 2011 in Sardinia, Italy:
<a target="_blank" href="http://datacharmer.blogspot.com/2011/01/announcing-open-database-camp-sardinia.html">http://datacharmer.blogspot.com/2011/01/announcing-open-database-camp-sardinia.html</a></li>
<li>PGCon will be held May 19-20, 2011 at the University of Ottawa, preceded by two days of tutorials on May 17-18. The call for papers is open!
<a target="_blank" href="http://www.pgcon.org/2011/">http://www.pgcon.org/2011/</a></li>
</ul>
<p><strong>PostgreSQL in the News</strong></p>
<ul>
<li>Planet PostgreSQL:
<a target="_blank" href="http://planet.postgresql.org/">http://planet.postgresql.org/</a></li>
<li>Planet PostgreSQLFr:
<a target="_blank" href="http://planete.postgresql.fr/">http://planete.postgresql.fr/</a></li>
</ul>
<p><i>PostgreSQL Weekly News is brought to you this week by David Fetter. French translation by the PostgreSQLFr team, published under a CC BY-NC-SA license.</i></p>
<p><i>Submit news and announcements by Sunday at 3:00 pm Pacific time. Please send English language items to david (a) fetter.org, German to pwn (a) pgug.de, Italian to pwn (a) itpug.org, and Spanish to pwn (a) arpug.com.ar.</i></p>
<p>(<a target="_blank" href="http://www.postgresql.org/community/weeklynews/pwn20110116">link to the original article</a>)</p>
<!--more-->
<p><strong>Code Reviews</strong></p>
<ul>
<li>Steve Singer reviewed Gurjeet Singh's patch to add a primary key using an existing index.</li>
<li>Noah Misch reviewed the snapshot synchronization patch.</li>
<li>Hitoshi Harada reviewed the patch to add a PL/Python validator function.</li>
<li>Jan Urbanski reviewed the FDW API patch.</li>
</ul>
<p><strong>Applied Patches</strong></p>
<p>Magnus Hagander pushed:</p>
<ul>
<li>Backend support for streaming base backups. Add BASE_BACKUP command to walsender, allowing it to stream a base backup to the client (in tar format). The syntax is still far from ideal, that will be fixed in the switch to use a proper grammar for walsender. No client included yet, will come as a separate commit. Magnus Hagander and Heikki Linnakangas.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=0eb59c4591ecf4f1c69d89e9f043a18e7dce9e47">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=0eb59c4591ecf4f1c69d89e9f043a18e7dce9e47</a></li>
<li>Set process title to indicate base backup is running.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=2e36343f82377fbb50834bba6557f8f243fecb34">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=2e36343f82377fbb50834bba6557f8f243fecb34</a></li>
<li>Reset walsender ps title in the main loop. When in streaming mode we can never get out, so it will never be required, but after a base backup (or other operations) we can get back to the loop, so the title needs to be cleared.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=b7ebda9d8c6f78b3bb31247531d0ef0e64b32a16">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=b7ebda9d8c6f78b3bb31247531d0ef0e64b32a16</a></li>
<li>Typo fix. Josh Kupershmidt
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=1c400d330934eb6d70982af522f2bc0458eef48d">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=1c400d330934eb6d70982af522f2bc0458eef48d</a></li>
<li>Revert installation of gram.h in 8.3. To make the buildfarm green again, since there is no file to copy on msvc, and also given discussion about the necessity of the file at all...
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=c1bcb1fb618fbec07b04f16042bcf9ffbf294fec">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=c1bcb1fb618fbec07b04f16042bcf9ffbf294fec</a></li>
<li>Add missing function prototype, for consistency.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=47a5f3e9dab68f47ebadc759afb97b900c437c54">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=47a5f3e9dab68f47ebadc759afb97b900c437c54</a></li>
<li>Track walsender state in shared memory and expose in pg_stat_replication.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=4c8e20f815cbdf043d6d27906fd85ae50c9e4870">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=4c8e20f815cbdf043d6d27906fd85ae50c9e4870</a></li>
<li>Make sure walsender state is only read while holding the spinlock. Noted by Robert Haas.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=9eacd427e811a97337de1fdd61a3cb90604981ad">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=9eacd427e811a97337de1fdd61a3cb90604981ad</a></li>
<li>Exit from base backups when shutdown is requested. When the exit waits until the whole backup completes, it may take a very long time. In passing, add back an error check in the main loop so we detect clients that disconnect much earlier if the backup is large.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=688423d004f4092aed73c73a3281c281d476436d">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=688423d004f4092aed73c73a3281c281d476436d</a></li>
<li>Use a lexer and grammar for parsing walsender commands Makes it easier to parse mainly the BASE_BACKUP command with it's options, and avoids having to manually deal with quoted identifiers in the label (previously broken), and makes it easier to add new commands and options in the future. In passing, refactor the case statement in the walsender to put each command in it's own function.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=fcd810c69adf11b6ec1cff35359be0dd27662eff">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=fcd810c69adf11b6ec1cff35359be0dd27662eff</a></li>
<li>Enumerate available tablespaces after starting the backup. This closes a race condition where if a tablespace was created after the enumeration happened but before the do_pg_start_backup() was called, the backup would be incomplete. Now that it's done while we are in backup mode, WAL replay will recreate it during restore. Noted by Heikki Linnakangas.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=3866ff6149a3b072561e65b3f71f63498e77b6b2">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=3866ff6149a3b072561e65b3f71f63498e77b6b2</a></li>
</ul>
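<p>The walsender state tracking added above is visible from SQL on the primary. A sketch against a 9.1 server (the view's exact column set was still in flux at the time; in 9.1 the process ID column is <code>procpid</code>):</p>
<pre>
-- One row per connected walsender, including its current state
-- (e.g. startup, backup, catchup, streaming).
SELECT procpid, usename, client_addr, state
FROM pg_stat_replication;
</pre>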
<p>Bruce Momjian pushed:</p>
<ul>
<li>A toast relid field are no longer needed in pg_upgrade's rel arrays, so remove them. Also other renaming.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=0a5f11993195d74f23b63cc5c2d7024c6d27d7e2">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=0a5f11993195d74f23b63cc5c2d7024c6d27d7e2</a></li>
<li>Apply libpq documentation patches submitted by Leslie S Satenstein and reviewed by Robert Haas.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=a0423ec02df3e311d6d5888170cb25a8c14bc6bf">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=a0423ec02df3e311d6d5888170cb25a8c14bc6bf</a></li>
<li>More libpq documentation adjustments from Leslie S Satenstein, reviewed by Robert Haas.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=712dd95370fc6c3a8d20f71b8e195a7af3c50f42">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=712dd95370fc6c3a8d20f71b8e195a7af3c50f42</a></li>
<li>Apply patch for test_fsync to add tests for O_DIRECT. Adjusted patch by Josh Berkus.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=677b06ca462ec6fd98da9369a2eae6085c9d7fed">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=677b06ca462ec6fd98da9369a2eae6085c9d7fed</a></li>
<li>Improve output display of test_fsync.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=3ab80cfe031b616638eb6956010dcc9cb6426631">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=3ab80cfe031b616638eb6956010dcc9cb6426631</a></li>
<li>Restructure test_fync to use modular C so there is less duplicate code and it can be enhanced easier.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=169516ad9395e91d206cbf5bf32c5d2fa34d4111">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=169516ad9395e91d206cbf5bf32c5d2fa34d4111</a></li>
<li>Have test_fsync output details that fdatasync is the default wal_sync_method on Linux.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=001d3664e32c0d156215bbfeccea3272aaf17722">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=001d3664e32c0d156215bbfeccea3272aaf17722</a></li>
<li>In test_fsync, warn about options without o_direct that are not used by Postgres, and cases where o_direct does not work with certain file systems.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=431605f666cfb223cd615ec8c63cbdea07295550">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=431605f666cfb223cd615ec8c63cbdea07295550</a></li>
<li>Reverse number of stars used for test_fsync details.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=3eebb33dddcfe4ac0719b697c1ebd3694038054e">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=3eebb33dddcfe4ac0719b697c1ebd3694038054e</a></li>
<li>Use O_DIRECT in O_SYNC test of different size. Restructure O_DIRECT error reporting to be more consistent.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=6dc15e3befaa6a3ff72633a2084ad1e1466edcde">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=6dc15e3befaa6a3ff72633a2084ad1e1466edcde</a></li>
<li>In test_fsync, use #define for printf format of ops/sec.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=e0c274679cb50064a92472c94c7ef5849a156536">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=e0c274679cb50064a92472c94c7ef5849a156536</a></li>
</ul>
<p>Heikki Linnakangas pushed:</p>
<ul>
<li>Leave temporary files out of streaming base backups.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=dc1305ce5ffef157410b6e0171d71fa16da4cc9e">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=dc1305ce5ffef157410b6e0171d71fa16da4cc9e</a></li>
<li>Fix the logic in libpqrcv_receive() to determine if there's any incoming data that can be read without blocking. It used to conclude that there isn't, even though there was data in the socket receive buffer. That lead walreceiver to flush the WAL after every received chunk, potentially causing big performance issues. Backpatch to 9.0, because the performance impact can be very significant.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=a5a02a744555789ab8390dbf57271e9d07127602">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=a5a02a744555789ab8390dbf57271e9d07127602</a></li>
<li>Treat a WAL sender process that hasn't started streaming yet as a regular backend, as far as the postmaster shutdown logic is concerned. That means, fast shutdown will wait for WAL sender processes to exit before signaling bgwriter to finish. This avoids race conditions between a base backup stopping or starting, and bgwriter writing the shutdown checkpoint WAL record. We don't want e.g the end-of-backup WAL record to be written after the shutdown checkpoint.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=8f5d65e916796aaee1bf7dd66daf45ca56cd13be">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=8f5d65e916796aaee1bf7dd66daf45ca56cd13be</a></li>
</ul>
<p>Tom Lane pushed:</p>
<ul>
<li>Tweak create_index_paths()'s test for whether to consider a bitmap scan. Per my note of a couple days ago, create_index_paths would refuse to consider any path at all for GIN indexes if the selectivity estimate came out as 1.0; not even if you tried to force it with enable_seqscan. While this isn't really a bad outcome in practice, it could be annoying for testing purposes. Adjust the test for "is this path only useful for sorting" so that it doesn't fire on paths with nil pathkeys, which will include all GIN paths.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=9d1ac2f5fa4043529dbaff5ebdc73405fa73207b">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=9d1ac2f5fa4043529dbaff5ebdc73405fa73207b</a></li>
<li>Adjust basebackup.c to suppress compiler warnings. Some versions of gcc complain about "variable `tablespaces' might be clobbered by `longjmp' or `vfork'" with the original coding. Fix by moving the PG_TRY block into a separate subroutine.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=e6dce4e439e1d271dad9a95bc4b94147be2fc39a">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=e6dce4e439e1d271dad9a95bc4b94147be2fc39a</a></li>
<li>Fix PlanRowMark/ExecRowMark structures to handle inheritance correctly. In an inherited UPDATE/DELETE, each target table has its own subplan, because it might have a column set different from other targets. This means that the resjunk columns we add to support EvalPlanQual might be at different physical column numbers in each subplan. The EvalPlanQual rewrite I did for 9.0 failed to account for this, resulting in possible misbehavior or even crashes during concurrent updates to the same row, as seen in a recent report from Gordon Shannon. Revise the data structure so that we track resjunk column numbers separately for each subplan. I also chose to move responsibility for identifying the physical column numbers back to executor startup, instead of assuming that numbers derived during preprocess_targetlist would stay valid throughout subsequent massaging of the plan. That's a bit slower, so we might want to consider undoing it someday; but it would complicate the patch considerably and didn't seem justifiable in a bug fix that has to be back-patched to 9.0.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=d487afbb813b7ca8803e20974b9e45530a1f4ef1">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=d487afbb813b7ca8803e20974b9e45530a1f4ef1</a></li>
<li>Revert incorrect memory-conservation hack in inheritance_planner(). This reverts commit d1001a78ce612a16ea622b558f5fc2b68c45ab4c of 2010-12-05, which was broken as reported by Jeff Davis. The problem is that the individual planning steps may have side-effects on substructures of PlannerGlobal, not only the current PlannerInfo root. Arranging to keep all such side effects in the main planning context is probably possible, but it would change this from a quick local hack into a wide-ranging and rather fragile endeavor. Which it's not worth.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=f0f36045b2e3d037bb7647d84373404fa4ba9588">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=f0f36045b2e3d037bb7647d84373404fa4ba9588</a></li>
<li>Code review for postmaster.pid contents changes. Fix broken test for pre-existing postmaster, caused by wrong code for appending lines to the lockfile; don't write a failed listen_address setting into the lockfile; don't arbitrarily change the location of the data directory in the lockfile compared to previous releases; provide more consistent and useful definitions of the socket path and listen_address entries; avoid assuming that pg_ctl has the same DEFAULT_PGSOCKET_DIR as the postmaster; assorted code style improvements.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=52948169bcddf443b76d6ff1806259b153a2ac04">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=52948169bcddf443b76d6ff1806259b153a2ac04</a></li>
<li>Add .gitignore to silence git complaints about parser/scanner output files.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=36750dcef58550c652cfff861f9aad057a391fb9">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=36750dcef58550c652cfff861f9aad057a391fb9</a></li>
<li>Move a couple of declarations to reflect where the routines really are.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=6ca452ba7fca14dad16425a56ffa1c8a93496b5f">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=6ca452ba7fca14dad16425a56ffa1c8a93496b5f</a></li>
</ul>
<p>Peter Eisentraut pushed:</p>
<ul>
<li>Add some subsection headings.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=b95ea9dd628a93f564e460b8870228755b520220">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=b95ea9dd628a93f564e460b8870228755b520220</a></li>
<li>Re-add recursive coverage target in src/backend/. This was lost during the recent recursive make change.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=e3094fd3a8052bb600b287c5dd844b3b0ac2fe11">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=e3094fd3a8052bb600b287c5dd844b3b0ac2fe11</a></li>
<li>Don't run regression tests in SQL_ASCII encoding by default. Instead, run them in the encoding that the locale selects, which is more representative of real use. Also document how locale and encoding for regression test runs can be selected.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=35eb0958be476d58dcc8ba462d57384e74a62d88">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=35eb0958be476d58dcc8ba462d57384e74a62d88</a></li>
<li>Workaround for recursive make breakage. Changing a file two directory levels deep under src/backend/ would not cause the postgres binary to be rebuilt. This change fixes it, but no one knows why.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=c667cc24e888dc4efe4c2412ad8dd13a190295e3">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=c667cc24e888dc4efe4c2412ad8dd13a190295e3</a></li>
</ul>
<p>Andrew Dunstan pushed:</p>
<ul>
<li>Unbreak regression tests, apparently broken by commit 4c8e20f
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=b7a0b42641e764a1e4abc39cc4311b5c779f5955">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=b7a0b42641e764a1e4abc39cc4311b5c779f5955</a></li>
</ul>
<p>Robert Haas pushed:</p>
<ul>
<li>Add support for logging the current role. Stephen Frost, with some editorialization by me.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=a8a8867912c46a68c9ac14903b3dba2fab8f7097">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=a8a8867912c46a68c9ac14903b3dba2fab8f7097</a></li>
<li>Revert patch adding support for logging the current role. This reverts commit a8a8867912c46a68c9ac14903b3dba2fab8f7097, committed by me earlier today (2011-01-12). This isn't safe inside an aborted transaction. Noted by Tom Lane.
<a target="_blank" href="http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=7a32ff97321408afa0ddfcae1a4a060062956d24">http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=7a32ff97321408afa0ddfcae1a4a060062956d24</a></li>
</ul>
<p><strong>Rejected Patches (for now)</strong></p>
<ul>
<li>No one was disappointed this week :-)</li>
</ul>
<p><strong>Pending Patches</strong></p>
<ul>
<li>Jan Urbanski sent in two more revisions of the patch to add SPI-based exceptions to PL/PythonU.</li>
<li>Kevin Grittner sent in five more revisions of the SSI patch.</li>
<li>Kevin Grittner sent in another revision of the patch for READ ONLY.</li>
<li>Magnus Hagander sent in two more revisions of the patch to use a parser for walsender commands.</li>
<li>Jan Urbanski sent in a patch which adds PL/PythonU functions for quoting strings.</li>
<li>Shigeru HANADA sent in a patch for file_fdw that adds a ResetCopyFrom function, which is intended to improve performance.</li>
<li>Cedric Villemain sent in a patch atop the walsender patch which fixes an infelicity with absolute paths.</li>
<li>Jeff Davis sent in another WIP patch for range types.</li>
<li>Heikki Linnakangas sent in two revisions of a patch to allow multiple concurrent base backups.</li>
<li>Euler Taveira de Oliveira sent in another revision of the patch to expand pgbench's maximum run size.</li>
<li>ITAGAKI Takahiro sent in another revision of the MULTISET patch.</li>
<li>Shigeru HANADA sent in a patch to unbreak regression tests, apparently broken by commit 4c8e20f.</li>
<li>Noah Misch sent in four more revisions of the patch to optimize ALTER TYPE.</li>
<li>Stephen Frost sent in five revisions of a patch to allow logging the current role.</li>
<li>Alexey Klyukin sent in a patch to allow conversion between PostgreSQL arrays and Perl arrays for PL/Perl(U).</li>
<li>Robert Haas sent in some code to make the background writer compact the request queue before fsyncing.</li>
<li>Shigeru HANADA sent in another flock of patches to implement foreign data wrappers, part of the SQL/MED system.</li>
<li>Fujii Masao sent in two revisions of a patch to use latches to implement failover in pg_ctl.</li>
<li>Fujii Masao sent in a patch to change pg_last_xlog_receive_location not to move backwards.</li>
<li>Hitoshi Harada sent in a patch to check psql better for an encoding mismatch.</li>
<li>Jan Urbanski sent in two more revisions of the patch that auto-generates error codes from header files.</li>
<li>Andreas Karlsson and Tom Lane traded patches to fix the bug in amproctypes in pg_describe_object().</li>
<li>Shigeru HANADA and ITAGAKI Takahiro traded patches for file data wrappers, a part of SQL/MED.</li>
<li>Tatsuo Ishii sent in a patch to ensure error codes for "terminating connection due to conflict with recovery" are sensible.</li>
<li>Greg Smith sent in a patch to spread out checkpoint sync.</li>
<li>Marko (johto) Tiikkaja sent in another revision of the writeable CTE patch.</li>
<li>Alex Hunsaker sent in two more revisions of a patch to optimize PL/Perl function argument passing.</li>
<li>Simon Riggs sent in a patch to add foreign keys which are presumed to hold but not checked against existing data.</li>
<li>Marko (johto) Tiikkaja sent in another revision of the patch to add transaction-scope advisory locks.</li>
<li>Simon Riggs sent in a WIP patch to add ALTER TABLE ... REPLACE WITH.</li>
<li>Peter Eisentraut sent in a patch to add a client_hostname field to pg_stat_activity.</li>
<li>Greg Smith sent in a patch to help with logging aborted autovacuums.</li>
<li>Magnus Hagander sent in a patch to help streaming base backups by ordering.</li>
<li>Jeff Davis sent in another revision of the patch to add range types.</li>
<li>Fujii Masao sent in a patch to ensure that all WAL received is flushed to disk before walreceiver exits.</li>
<li>Florian Pflug sent in a patch to make backends die sooner after the postmaster does.</li>
<li>Dimitri Fontaine sent in another revision of the extensions patch.</li>
<li>Alvaro Herrera sent in a patch to make foreign key checks less intrusive.</li>
<li>Greg Smith sent in two revisions of a patch to auto-size wal_buffers.</li>
<li>Robert Haas sent in a patch to limit hint bit I/O.</li>
<li>Marti Raudsepp sent in a patch to add a tag command "REPLACE X" for CREATE OR REPLACE statements.</li>
<li>Simon Riggs sent in two revisions of a patch to add recovery control functions.</li>
<li>Hitoshi Harada sent in a patch to allow specifying ENCODING in COPY.</li>
<li>Peter Eisentraut sent in another revision of the patch to infer client_encoding from client locale.</li>
<li>Jaime Casanova sent in a patch to add named restore points.</li>
<li>Peter Eisentraut sent in another revision of the per-column collation patch.</li>
<li>Fujii Masao sent in a patch to help reduce data loss on the standby.</li>
<li>Andrew Dunstan sent in two revisions of a patch to add a textarray option for file FDWs.</li>
<li>Per review by Noah Misch, Pavel Stehule sent in another revision of the patch to optimize varlena compression in PL/pgsql.</li>
<li>Magnus Hagander sent in two revisions of a patch to add pg_basebackup for streaming base backups.</li>
<li>Marko (johto) Tiikkaja sent in another revision of the patch to fix snapshot taking inconsistencies.</li>
<li>Marko (johto) Tiikkaja sent in another revision of the patch to show in EXPLAIN ANALYZE the number of rows a plan qual filtered from a node's input.</li>
<li>Magnus Hagander sent in a patch to include WAL in the base backup.</li>
<li>Simon Riggs sent in another revision of the patch to add synchronous replication.</li>
<li>Andreas Karlsson sent in another revision of the patch to add \dL (languages) to psql.</li>
</ul> | 67.316397 | 1,076 | 0.786503 | eng_Latn | 0.596144 |